Mistral AI
The Mistral AI provider contains language model support for the Mistral chat API.
The Mistral provider is available in the `MistralProvider` module. Add it to your Swift package:

```swift
dependencies: [
    .package(url: "https://github.com/teunlao/swift-ai-sdk", from: "0.2.1")
],
targets: [
    .target(
        name: "YourTarget",
        dependencies: [
            .product(name: "SwiftAISDK", package: "swift-ai-sdk"),
            .product(name: "MistralProvider", package: "swift-ai-sdk")
        ]
    )
]
```

Provider Instance
You can import the default provider instance `mistral` from `MistralProvider`:
```swift
import MistralProvider

// Use the global mistral instance
let model = mistral("mistral-large-latest")
```

If you need a customized setup, you can use `createMistralProvider` to create a provider instance with your settings:

```swift
import MistralProvider

let customMistral = createMistralProvider(
    settings: MistralProviderSettings(
        baseURL: "https://custom.api.com/v1",
        apiKey: "your-api-key"
    )
)
```

You can use the following optional settings to customize the Mistral provider instance:
- `baseURL` *String*

  Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is `https://api.mistral.ai/v1`.

- `apiKey` *String*

  API key that is sent using the `Authorization` header. It defaults to the `MISTRAL_API_KEY` environment variable.

- `headers` *[String: String]*

  Custom headers to include in the requests.
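For example, custom headers can be combined with a proxy base URL. This is a sketch, assuming `headers` is accepted by `MistralProviderSettings` alongside the other settings above; the proxy URL and header values are placeholders:

```swift
import MistralProvider

// Hypothetical proxy setup: the URL and header values below are placeholders.
let proxiedMistral = createMistralProvider(
    settings: MistralProviderSettings(
        baseURL: "https://my-proxy.example.com/v1", // placeholder proxy endpoint
        headers: ["X-Proxy-Token": "proxy-token"]   // custom request headers
    )
)

let model = proxiedMistral("mistral-small-latest")
```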
Language Models
You can create models that call the Mistral chat API using a provider instance.
The first argument is the model id, e.g. `mistral-large-latest`.
Some Mistral chat models support tool calls.

```swift
let model = mistral("mistral-large-latest")
```

Mistral chat models also support additional model settings that are not part of the standard call settings. You can pass them as provider options:

```swift
import SwiftAISDK
import MistralProvider

let result = try await generateText(
    model: mistral("mistral-large-latest"),
    prompt: "Write a vegetarian lasagna recipe for 4 people.",
    providerOptions: MistralChatOptions(
        safePrompt: true,         // optional safety prompt injection
        parallelToolCalls: false  // disable parallel tool calls
    )
)
```

The following optional provider options are available for Mistral models:
- `safePrompt` *Bool*

  Whether to inject a safety prompt before all conversations. Defaults to `false`.

- `documentImageLimit` *Int*

  Maximum number of images to process in a document.

- `documentPageLimit` *Int*

  Maximum number of pages to process in a document.

- `strictJsonSchema` *Bool*

  Whether to use strict JSON schema validation for structured outputs. Only applies when a schema is provided. Defaults to `false`.

- `structuredOutputs` *Bool*

  Whether to use structured outputs. When enabled, tool calls and object generation are strict and follow the provided schema. Defaults to `true`.

- `parallelToolCalls` *Bool*

  Whether to enable parallel function calling during tool use. When set to `false`, the model uses at most one tool per response. Defaults to `true`.
Document OCR
Mistral chat models support document OCR for PDF files. You can optionally set image and page limits using the provider options.

```swift
import SwiftAISDK
import MistralProvider
import Foundation

let result = try await generateText(
    model: mistral("mistral-small-latest"),
    messages: [
        .user(
            content: [
                .text(.init(text: "What is an embedding model according to this document?")),
                .file(.init(
                    data: .url(URL(string: "https://github.com/vercel/ai/blob/main/examples/ai-core/data/ai.pdf?raw=true")!),
                    mediaType: "application/pdf"
                ))
            ],
            providerOptions: nil
        )
    ],
    providerOptions: MistralChatOptions(
        documentImageLimit: 8,
        documentPageLimit: 64
    )
)
```

Reasoning Models
Mistral offers reasoning models that provide step-by-step thinking capabilities:

- `magistral-small-2506`: a smaller reasoning model for efficient step-by-step thinking
- `magistral-medium-2506`: a more powerful reasoning model balancing performance and cost

These models return content that includes `<think>...</think>` tags containing the reasoning process. To properly extract and separate the reasoning from the final answer, use the extract reasoning middleware:
```swift
import SwiftAISDK
import MistralProvider

let result = try await generateText(
    model: wrapLanguageModel(
        model: mistral("magistral-small-2506"),
        middleware: extractReasoningMiddleware(tagName: "think")
    ),
    prompt: "What is 15 * 24?"
)

print("REASONING:", result.reasoningText ?? "")
// Output: "Let me calculate this step by step..."

print("ANSWER:", result.text)
// Output: "360"
```

The middleware automatically parses the `<think>` tags and provides separate `reasoningText` and `text` properties in the result.
Example
You can use Mistral language models to generate text with the `generateText` function:

```swift
import SwiftAISDK
import MistralProvider

let result = try await generateText(
    model: mistral("mistral-large-latest"),
    prompt: "Write a vegetarian lasagna recipe for 4 people."
)
let text = result.text
```

Mistral language models can also be used in the `streamText` and `generateObject` functions.
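A minimal streaming sketch follows. It assumes `streamText` mirrors `generateText`'s parameters and exposes the incremental output as a `textStream` async sequence of text deltas (names modeled on the TypeScript AI SDK; verify the exact signatures against the Swift package):

```swift
import SwiftAISDK
import MistralProvider

// Assumption: streamText takes the same model/prompt parameters as
// generateText and returns a result with a textStream async sequence.
let stream = try streamText(
    model: mistral("mistral-small-latest"),
    prompt: "Write a haiku about the sea."
)

// Print each text delta as it arrives, without trailing newlines.
for try await delta in stream.textStream {
    print(delta, terminator: "")
}
```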
Structured Outputs
Mistral chat models support structured outputs using JSON Schema. You can use `generateObject` with `Codable` types. The SDK sends your schema via Mistral's `response_format: { type: 'json_schema' }`.

```swift
import SwiftAISDK
import MistralProvider

struct Recipe: Codable {
    let name: String
    let ingredients: [String]
    let instructions: [String]
}

let result = try await generateObject(
    model: mistral("mistral-large-latest"),
    schema: Recipe.self,
    schemaName: "recipe",
    prompt: "Generate a simple pasta recipe."
)

print(result.object)
```

You can enable strict JSON Schema validation using a provider option:

```swift
import SwiftAISDK
import MistralProvider

struct ShoppingList: Codable {
    let title: String
    let items: [Item]

    struct Item: Codable {
        let id: String
        let qty: Int
    }
}

let result = try await generateObject(
    model: mistral("mistral-large-latest"),
    schema: ShoppingList.self,
    schemaName: "shopping_list",
    prompt: "Generate a small shopping list.",
    providerOptions: MistralChatOptions(
        strictJsonSchema: true // reject outputs that don't strictly match the schema
    )
)
```

Model Capabilities
| Model | Image Input | Object Generation | Tool Usage | Tool Streaming |
|---|---|---|---|---|
| `pixtral-large-latest` | | | | |
| `mistral-large-latest` | | | | |
| `mistral-medium-latest` | | | | |
| `mistral-medium-2505` | | | | |
| `mistral-small-latest` | | | | |
| `magistral-small-2506` | | | | |
| `magistral-medium-2506` | | | | |
| `ministral-3b-latest` | | | | |
| `ministral-8b-latest` | | | | |
| `pixtral-12b-2409` | | | | |
| `open-mistral-7b` | | | | |
| `open-mixtral-8x7b` | | | | |
| `open-mixtral-8x22b` | | | | |
Embedding Models
You can create models that call the Mistral embeddings API using the `.textEmbedding()` factory method.

```swift
let model = mistral.textEmbedding("mistral-embed")
```

You can use Mistral embedding models to generate embeddings with the `embed` function:

```swift
import SwiftAISDK
import MistralProvider

let result = try await embed(
    model: mistral.textEmbedding("mistral-embed"),
    value: "sunny day at the beach"
)
let embedding = result.embedding
```

Model Capabilities
| Model | Default Dimensions |
|---|---|
| `mistral-embed` | 1024 |
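Embeddings are typically compared with cosine similarity. A short sketch building on the `embed` call shown above; the `cosineSimilarity` helper is written inline here (it is not an SDK function), and the embedding is assumed to be a `[Double]` vector:

```swift
import Foundation
import SwiftAISDK
import MistralProvider

// Plain cosine similarity over two equal-length vectors.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    let normA = sqrt(a.reduce(0) { $0 + $1 * $1 })
    let normB = sqrt(b.reduce(0) { $0 + $1 * $1 })
    return dot / (normA * normB)
}

let model = mistral.textEmbedding("mistral-embed")
let beach = try await embed(model: model, value: "sunny day at the beach")
let surf = try await embed(model: model, value: "surfing on a warm afternoon")

// Values near 1 indicate semantically similar texts.
print(cosineSimilarity(beach.embedding, surf.embedding))
```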