Settings
This page adapts the original AI SDK documentation: Settings.
Large language models (LLMs) typically provide settings to augment their output.
All AI SDK functions support the following common settings in addition to the model, the prompt, and additional provider-specific settings:
```swift
import SwiftAISDK
import OpenAIProvider

let result = try await generateText(
    model: openai("gpt-4.1"),
    settings: CallSettings(
        maxOutputTokens: 512,
        temperature: 0.3,
        maxRetries: 5
    ),
    prompt: "Invent a new holiday and describe its traditions."
)
```

Note: Some providers do not support every common setting. Unsupported values generate entries in `result.warnings`. Always inspect warnings when you depend on a particular capability.
maxOutputTokens
Maximum number of tokens to generate.
temperature
Temperature setting.
The value is passed through to the provider. The range depends on the provider and model. For most providers, 0 means almost deterministic results, and higher values mean more randomness.
It is recommended to set either temperature or topP, but not both.
Note: In AI SDK 5.0, temperature is no longer set to 0 by default.
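The mechanics behind this setting can be sketched in plain Swift. This is illustrative only (not part of the SDK): temperature typically divides the logits before the softmax, so low values sharpen the distribution toward the top token and high values flatten it toward uniform.

```swift
import Foundation

/// Illustrative only: how temperature typically rescales logits before sampling.
/// Lower temperature sharpens the distribution; higher temperature flattens it.
func softmax(_ logits: [Double], temperature: Double) -> [Double] {
    let scaled = logits.map { $0 / temperature }
    let maxLogit = scaled.max() ?? 0
    let exps = scaled.map { exp($0 - maxLogit) }
    let sum = exps.reduce(0, +)
    return exps.map { $0 / sum }
}

let logits = [2.0, 1.0, 0.0]
let sharp = softmax(logits, temperature: 0.1)  // nearly all mass on the top token
let flat = softmax(logits, temperature: 10.0)  // close to uniform
```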
topP

Nucleus sampling.
The value is passed through to the provider. The range depends on the provider and model. For most providers, nucleus sampling is a number between 0 and 1. For example, 0.1 means only tokens with the top 10% probability mass are considered.
It is recommended to set either temperature or topP, but not both.
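To make the "top 10% probability mass" idea concrete, here is an illustrative plain-Swift sketch of nucleus filtering (not SDK code): keep the smallest set of tokens whose cumulative probability reaches `topP`, drop the rest, and renormalize.

```swift
import Foundation

/// Illustrative only: nucleus (top-p) filtering keeps the smallest set of
/// tokens whose cumulative probability mass reaches `topP`.
func nucleusFilter(_ probs: [Double], topP: Double) -> [Double] {
    let sorted = probs.enumerated().sorted { $0.element > $1.element }
    var kept: [Int] = []
    var mass = 0.0
    for (index, p) in sorted {
        kept.append(index)
        mass += p
        if mass >= topP { break }
    }
    // Zero out everything outside the nucleus, then renormalize.
    var filtered = [Double](repeating: 0, count: probs.count)
    let total = kept.map { probs[$0] }.reduce(0, +)
    for index in kept { filtered[index] = probs[index] / total }
    return filtered
}

let probs = [0.5, 0.3, 0.1, 0.05, 0.05]
let nucleus = nucleusFilter(probs, topP: 0.8)
// Only the top two tokens (cumulative mass 0.8) survive; the rest are dropped.
```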
topK

Only sample from the top K options for each subsequent token.
Used to remove “long tail” low-probability responses.
Recommended for advanced use cases only. You usually only need to use temperature.
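How top-K removes the long tail can be sketched in plain Swift (illustrative only, not SDK code): keep the k most probable tokens, zero out everything else, and renormalize.

```swift
import Foundation

/// Illustrative only: top-k filtering keeps just the k most probable tokens
/// and renormalizes, cutting off the long tail entirely.
func topKFilter(_ probs: [Double], k: Int) -> [Double] {
    let keep = Set(probs.enumerated()
        .sorted { $0.element > $1.element }
        .prefix(k)
        .map { $0.offset })
    let total = keep.map { probs[$0] }.reduce(0, +)
    return probs.enumerated().map { keep.contains($0.offset) ? $0.element / total : 0 }
}

let probs = [0.4, 0.3, 0.2, 0.05, 0.05]
let top2 = topKFilter(probs, k: 2)
// Mass outside the top two tokens is removed; the survivors are renormalized.
```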
presencePenalty
The presence penalty affects the likelihood of the model repeating information that is already in the prompt.
The value is passed through to the provider. The range depends on the provider and model.
For most providers, 0 means no penalty.
frequencyPenalty
The frequency penalty affects the likelihood of the model repeatedly using the same words or phrases.
The value is passed through to the provider. The range depends on the provider and model.
For most providers, 0 means no penalty.
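A plain-Swift sketch can show how the two penalties differ. This assumes the OpenAI-style scheme (other providers may differ): the presence penalty is subtracted once if a token has appeared at all, while the frequency penalty scales with how often it has appeared.

```swift
/// Illustrative only: one common scheme (OpenAI-style APIs) subtracts
/// penalties from a token's logit based on how often it already appeared.
/// presencePenalty applies once if the token has appeared at all;
/// frequencyPenalty scales with the number of appearances.
func penalizedLogit(logit: Double,
                    count: Int,
                    presencePenalty: Double,
                    frequencyPenalty: Double) -> Double {
    let presence = count > 0 ? presencePenalty : 0
    let frequency = Double(count) * frequencyPenalty
    return logit - presence - frequency
}

// A token already used 3 times is pushed down harder than an unused one.
let unused = penalizedLogit(logit: 1.0, count: 0, presencePenalty: 0.5, frequencyPenalty: 0.2)
let reused = penalizedLogit(logit: 1.0, count: 3, presencePenalty: 0.5, frequencyPenalty: 0.2)
```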
stopSequences
The stop sequences to use for stopping the text generation.
If set, the model will stop generating text when one of the stop sequences is generated. Providers may have limits on the number of stop sequences.
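The truncation behavior can be sketched in plain Swift (illustrative only; providers apply this server-side): cut the text at the first occurrence of any stop sequence.

```swift
/// Illustrative only: truncate text at the first stop sequence found,
/// mirroring what a provider does server-side during generation.
func truncateAtStop(_ text: String, stopSequences: [String]) -> String {
    var result = text
    for stop in stopSequences {
        if let range = result.range(of: stop) {
            result = String(result[..<range.lowerBound])
        }
    }
    return result
}

let output = truncateAtStop("Step 1\nStep 2\nDONE extra text",
                            stopSequences: ["END", "DONE"])
// Everything from the first stop sequence onward is cut.
```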
```swift
import SwiftAISDK
import OpenAIProvider

let result = try await generateText(
    model: openai("gpt-4.1"),
    settings: CallSettings(stopSequences: ["END", "DONE"]),
    prompt: "Produce a three-step checklist that ends with DONE."
)
```

seed

The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.
maxRetries
Maximum number of retries. Set to 0 to disable retries. Default: 2.
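The retry semantics can be sketched as a bounded loop in plain Swift. This is illustrative only; it assumes `maxRetries` counts retries after the first attempt, so `maxRetries: 2` means up to 3 calls in total.

```swift
/// Illustrative only: a bounded retry loop like the one maxRetries configures.
/// maxRetries counts retries *after* the first attempt, so maxRetries: 2
/// allows up to 3 calls in total; maxRetries: 0 disables retries.
func withRetries<T>(maxRetries: Int, operation: () throws -> T) throws -> T {
    var lastError: Error?
    for _ in 0...maxRetries {
        do { return try operation() }
        catch { lastError = error }
    }
    throw lastError!
}

// Succeeds on the third attempt when two retries are allowed.
struct Flaky: Error {}
var attempts = 0
let value = try withRetries(maxRetries: 2) { () -> Int in
    attempts += 1
    if attempts < 3 { throw Flaky() }
    return attempts
}
```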
abortSignal
An optional cancellation check that can be used to cancel the call.
For example, you can forward UI cancellation or implement a timeout.
Example: Timeout
```swift
import SwiftAISDK
import OpenAIProvider

let deadline = Date().addingTimeInterval(5)

let result = try await generateText(
    model: openai("gpt-4o"),
    prompt: "Invent a new holiday and describe its traditions.",
    settings: CallSettings(
        abortSignal: { Date() >= deadline }
    )
)
```

headers
Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.
You can use request headers to provide additional information to the provider,
depending on what the provider supports. For example, some observability providers support
headers such as Prompt-Id.
```swift
import SwiftAISDK
import OpenAIProvider

let result = try await generateText(
    model: openai("gpt-4o"),
    prompt: "Invent a new holiday and describe its traditions.",
    settings: CallSettings(headers: [
        "Prompt-Id": "my-prompt-id"
    ])
)
```

Note: The `headers` setting is for request-specific headers. You can also configure default headers when constructing a provider; those headers are sent with every request.