Telemetry
This page adapts the original AI SDK documentation: Telemetry.
The AI SDK uses OpenTelemetry to collect telemetry data. OpenTelemetry is an open-source observability framework designed to provide standardized instrumentation for collecting telemetry data.
Enabling telemetry
You can use the `experimentalTelemetry` parameter to enable telemetry on specific function calls while the feature is experimental:
```swift
import SwiftAISDK
import OpenAIProvider

let result = try await generateText(
    model: openai("gpt-4o"),
    prompt: "Write a short story about a cat.",
    experimentalTelemetry: TelemetrySettings(isEnabled: true)
)
```

When telemetry is enabled, you can also control whether to record the input values and the output values for the function.
By default, both are enabled. You can disable them by setting the `recordInputs` and `recordOutputs` options to `false`.
Disabling the recording of inputs and outputs can be useful for privacy, data transfer, and performance reasons. For example, you might want to disable recording inputs if they contain sensitive information.
```swift
let result = try await generateText(
    model: openai("gpt-4o"),
    prompt: "Sensitive user data...",
    experimentalTelemetry: TelemetrySettings(
        isEnabled: true,
        recordInputs: false,
        recordOutputs: false
    )
)
```

Telemetry Metadata
You can provide a `functionId` to identify the function that the telemetry data is for, and `metadata` to include additional information in the telemetry data.
```swift
let result = try await generateText(
    model: openai("gpt-4o"),
    prompt: "Write a short story about a cat.",
    experimentalTelemetry: TelemetrySettings(
        isEnabled: true,
        functionId: "my-awesome-function",
        metadata: [
            "something": "custom",
            "someOtherThing": "other-value"
        ]
    )
)
```

Collected Data
generateText function
`generateText` records 3 types of spans:
- `ai.generateText` (span): the full length of the `generateText` call. It contains 1 or more `ai.generateText.doGenerate` spans. It contains the basic LLM span information and the following attributes:
  - `operation.name`: `ai.generateText` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.generateText"`
  - `ai.prompt`: the prompt that was used when calling `generateText`
  - `ai.response.text`: the text that was generated
  - `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
  - `ai.response.finishReason`: the reason why the generation finished
  - `ai.settings.maxOutputTokens`: the maximum number of output tokens that were set
- `ai.generateText.doGenerate` (span): a provider doGenerate call. It can contain `ai.toolCall` spans. It contains the call LLM span information and the following attributes:
  - `operation.name`: `ai.generateText.doGenerate` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.generateText.doGenerate"`
  - `ai.prompt.messages`: the messages that were passed into the provider
  - `ai.prompt.tools`: array of stringified tool definitions
  - `ai.prompt.toolChoice`: the stringified tool choice setting (JSON)
  - `ai.response.text`: the text that was generated
  - `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
  - `ai.response.finishReason`: the reason why the generation finished
- `ai.toolCall` (span): a tool call that is made as part of the `generateText` call. See Tool call spans for more details.
streamText function
`streamText` records 3 types of spans and 2 types of events:
- `ai.streamText` (span): the full length of the `streamText` call. It contains an `ai.streamText.doStream` span. It contains the basic LLM span information and the following attributes:
  - `operation.name`: `ai.streamText` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.streamText"`
  - `ai.prompt`: the prompt that was used when calling `streamText`
  - `ai.response.text`: the text that was generated
  - `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
  - `ai.response.finishReason`: the reason why the generation finished
  - `ai.settings.maxOutputTokens`: the maximum number of output tokens that were set
- `ai.streamText.doStream` (span): a provider doStream call. This span contains an `ai.stream.firstChunk` event and `ai.toolCall` spans. It contains the call LLM span information and the following attributes:
  - `operation.name`: `ai.streamText.doStream` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.streamText.doStream"`
  - `ai.prompt.messages`: the messages that were passed into the provider
  - `ai.prompt.tools`: array of stringified tool definitions
  - `ai.prompt.toolChoice`: the stringified tool choice setting (JSON)
  - `ai.response.text`: the text that was generated
  - `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk in milliseconds
  - `ai.response.msToFinish`: the time it took to receive the finish part of the LLM stream in milliseconds
  - `ai.response.avgCompletionTokensPerSecond`: the average number of completion tokens per second
  - `ai.response.finishReason`: the reason why the generation finished
- `ai.toolCall` (span): a tool call that is made as part of the `streamText` call. See Tool call spans for more details.
- `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.
  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk
- `ai.stream.finish` (event): an event that is emitted when the finish part of the LLM stream is received.
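As a hedged sketch, assuming the Swift port mirrors the JavaScript AI SDK's streaming API (the `textStream` property and the non-`await` call shape are assumptions; `streamText`, `TelemetrySettings`, and the span/event names come from this page), an instrumented streaming call could look like:

```swift
import SwiftAISDK
import OpenAIProvider

// Sketch of an instrumented streaming call. The resulting trace would
// contain an ai.streamText span wrapping an ai.streamText.doStream span,
// plus the ai.stream.firstChunk and ai.stream.finish events listed above.
let result = try streamText(
    model: openai("gpt-4o"),
    prompt: "Write a short story about a cat.",
    experimentalTelemetry: TelemetrySettings(isEnabled: true)
)

// Consuming the stream drives the request; timing attributes such as
// ai.response.msToFirstChunk and ai.response.msToFinish are measured while
// chunks arrive. (`textStream` is an assumed property name.)
for try await chunk in result.textStream {
    print(chunk, terminator: "")
}
```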
generateObject function
`generateObject` records 2 types of spans:
- `ai.generateObject` (span): the full length of the `generateObject` call. It contains 1 or more `ai.generateObject.doGenerate` spans. It contains the basic LLM span information and the following attributes:
  - `operation.name`: `ai.generateObject` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.generateObject"`
  - `ai.prompt`: the prompt that was used when calling `generateObject`
  - `ai.schema`: stringified JSON schema version of the schema that was passed into the `generateObject` function
  - `ai.schema.name`: the name of the schema that was passed into the `generateObject` function
  - `ai.schema.description`: the description of the schema that was passed into the `generateObject` function
  - `ai.response.object`: the object that was generated (stringified JSON)
  - `ai.settings.output`: the output type that was used, e.g. `object` or `no-schema`
- `ai.generateObject.doGenerate` (span): a provider doGenerate call. It contains the call LLM span information and the following attributes:
  - `operation.name`: `ai.generateObject.doGenerate` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.generateObject.doGenerate"`
  - `ai.prompt.messages`: the messages that were passed into the provider
  - `ai.response.object`: the object that was generated (stringified JSON)
  - `ai.response.finishReason`: the reason why the generation finished
streamObject function
`streamObject` records 2 types of spans and 1 type of event:
- `ai.streamObject` (span): the full length of the `streamObject` call. It contains 1 or more `ai.streamObject.doStream` spans. It contains the basic LLM span information and the following attributes:
  - `operation.name`: `ai.streamObject` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.streamObject"`
  - `ai.prompt`: the prompt that was used when calling `streamObject`
  - `ai.schema`: stringified JSON schema version of the schema that was passed into the `streamObject` function
  - `ai.schema.name`: the name of the schema that was passed into the `streamObject` function
  - `ai.schema.description`: the description of the schema that was passed into the `streamObject` function
  - `ai.response.object`: the object that was generated (stringified JSON)
  - `ai.settings.output`: the output type that was used, e.g. `object` or `no-schema`
- `ai.streamObject.doStream` (span): a provider doStream call. This span contains an `ai.stream.firstChunk` event. It contains the call LLM span information and the following attributes:
  - `operation.name`: `ai.streamObject.doStream` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.streamObject.doStream"`
  - `ai.prompt.messages`: the messages that were passed into the provider
  - `ai.response.object`: the object that was generated (stringified JSON)
  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk
  - `ai.response.finishReason`: the reason why the generation finished
- `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.
  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk
embed function
`embed` records 2 types of spans:
- `ai.embed` (span): the full length of the `embed` call. It contains 1 `ai.embed.doEmbed` span. It contains the basic embedding span information and the following attributes:
  - `operation.name`: `ai.embed` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embed"`
  - `ai.value`: the value that was passed into the `embed` function
  - `ai.embedding`: a JSON-stringified embedding
- `ai.embed.doEmbed` (span): a provider doEmbed call. It contains the basic embedding span information and the following attributes:
  - `operation.name`: `ai.embed.doEmbed` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embed.doEmbed"`
  - `ai.values`: the values that were passed into the provider (array)
  - `ai.embeddings`: an array of JSON-stringified embeddings
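A minimal sketch of an instrumented embedding call, assuming the Swift port mirrors the JavaScript AI SDK here: the `openai.textEmbeddingModel` accessor, the model id, and the `embed(model:value:experimentalTelemetry:)` signature are assumptions; only `embed`, `TelemetrySettings`, and the attribute names come from this page:

```swift
import SwiftAISDK
import OpenAIProvider

// Sketch only: the embedding-model accessor below is an assumed API.
// The trace would contain an ai.embed span wrapping a single
// ai.embed.doEmbed span; ai.value records the input and ai.embedding
// the JSON-stringified result vector.
let result = try await embed(
    model: openai.textEmbeddingModel("text-embedding-3-small"),
    value: "sunny day at the beach",
    experimentalTelemetry: TelemetrySettings(isEnabled: true)
)
```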
embedMany function
`embedMany` records 2 types of spans:
- `ai.embedMany` (span): the full length of the `embedMany` call. It contains 1 or more `ai.embedMany.doEmbed` spans. It contains the basic embedding span information and the following attributes:
  - `operation.name`: `ai.embedMany` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embedMany"`
  - `ai.values`: the values that were passed into the `embedMany` function
  - `ai.embeddings`: an array of JSON-stringified embeddings
- `ai.embedMany.doEmbed` (span): a provider doEmbed call. It contains the basic embedding span information and the following attributes:
  - `operation.name`: `ai.embedMany.doEmbed` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embedMany.doEmbed"`
  - `ai.values`: the values that were sent to the provider
  - `ai.embeddings`: an array of JSON-stringified embeddings for each value
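The batch variant can be sketched the same way, again under the assumption that the signature mirrors the single-value call with a `values:` array (the model accessor and parameter labels are assumptions):

```swift
import SwiftAISDK
import OpenAIProvider

// Sketch only: assumed API shape for batch embeddings. The trace would
// contain an ai.embedMany span with one or more ai.embedMany.doEmbed child
// spans; ai.values and ai.embeddings record the inputs and the
// JSON-stringified output vectors.
let result = try await embedMany(
    model: openai.textEmbeddingModel("text-embedding-3-small"),
    values: ["sunny day at the beach", "rainy afternoon in the city"],
    experimentalTelemetry: TelemetrySettings(isEnabled: true)
)
```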
Span Details
Basic LLM span information
Many spans that use LLMs (`ai.generateText`, `ai.generateText.doGenerate`, `ai.streamText`, `ai.streamText.doStream`, `ai.generateObject`, `ai.generateObject.doGenerate`, `ai.streamObject`, `ai.streamObject.doStream`) contain the following attributes:

- `resource.name`: the functionId that was set through `telemetry.functionId`
- `ai.model.id`: the id of the model
- `ai.model.provider`: the provider of the model
- `ai.request.headers.*`: the request headers that were passed in through `headers`
- `ai.response.providerMetadata`: provider-specific metadata returned with the generation response
- `ai.settings.maxRetries`: the maximum number of retries that were set
- `ai.telemetry.functionId`: the functionId that was set through `telemetry.functionId`
- `ai.telemetry.metadata.*`: the metadata that was passed in through `telemetry.metadata`
- `ai.usage.completionTokens`: the number of completion tokens that were used
- `ai.usage.promptTokens`: the number of prompt tokens that were used
Call LLM span information
Spans that correspond to individual LLM calls (`ai.generateText.doGenerate`, `ai.streamText.doStream`, `ai.generateObject.doGenerate`, `ai.streamObject.doStream`) contain the basic LLM span information and the following attributes:

- `ai.response.model`: the model that was used to generate the response
- `ai.response.id`: the id of the response
- `ai.response.timestamp`: the timestamp of the response
- Semantic Conventions for GenAI operations:
  - `gen_ai.system`: the provider that was used
  - `gen_ai.request.model`: the model that was requested
  - `gen_ai.request.temperature`: the temperature that was set
  - `gen_ai.request.max_tokens`: the maximum number of tokens that were set
  - `gen_ai.request.frequency_penalty`: the frequency penalty that was set
  - `gen_ai.request.presence_penalty`: the presence penalty that was set
  - `gen_ai.request.top_k`: the topK parameter value that was set
  - `gen_ai.request.top_p`: the topP parameter value that was set
  - `gen_ai.request.stop_sequences`: the stop sequences
  - `gen_ai.response.finish_reasons`: the finish reasons that were returned by the provider
  - `gen_ai.response.model`: the model that was used to generate the response
  - `gen_ai.response.id`: the id of the response
  - `gen_ai.usage.input_tokens`: the number of prompt tokens that were used
  - `gen_ai.usage.output_tokens`: the number of completion tokens that were used
Basic embedding span information
Many spans that use embedding models (`ai.embed`, `ai.embed.doEmbed`, `ai.embedMany`, `ai.embedMany.doEmbed`) contain the following attributes:

- `resource.name`: the functionId that was set through `telemetry.functionId`
- `ai.model.id`: the id of the model
- `ai.model.provider`: the provider of the model
- `ai.request.headers.*`: the request headers that were passed in through `headers`
- `ai.settings.maxRetries`: the maximum number of retries that were set
- `ai.telemetry.functionId`: the functionId that was set through `telemetry.functionId`
- `ai.telemetry.metadata.*`: the metadata that was passed in through `telemetry.metadata`
- `ai.usage.tokens`: the number of tokens that were used
Tool call spans
Tool call spans (`ai.toolCall`) contain the following attributes:

- `operation.name`: `"ai.toolCall"`
- `ai.operationId`: `"ai.toolCall"`
- `ai.toolCall.name`: the name of the tool
- `ai.toolCall.id`: the id of the tool call
- `ai.toolCall.args`: the input parameters of the tool call
- `ai.toolCall.result`: the output result of the tool call. Only available if the tool call is successful and the result is serializable.