Embeddings

This page adapts the original AI SDK documentation: Embeddings. Embeddings are a way to represent words, phrases, or images as vectors in a high-dimensional space. In this space, similar items are close to each other, and the distance between vectors can be used to measure their similarity. In Swift, you work with embeddings as [Double] values returned by embed and embedMany.

The AI SDK provides the embed function to embed single values, which is useful for tasks such as finding similar words or phrases or clustering text. You can use it with embedding models, e.g. openai.textEmbeddingModel("text-embedding-3-large") or mistral.textEmbeddingModel("mistral-embed").

import SwiftAISDK
import OpenAIProvider
let result = try await embed(
    model: .v3(openai.textEmbeddingModel(modelId: "text-embedding-3-small")),
    value: "sunny day at the beach"
)
let embedding = result.embedding // [Double]

When loading data, e.g. when preparing a data store for retrieval-augmented generation (RAG), it is often useful to embed many values at once (batch embedding).

The AI SDK provides the embedMany function for this purpose. Similar to embed, you can use it with embedding models, e.g. openai.textEmbeddingModel("text-embedding-3-large") or mistral.textEmbeddingModel("mistral-embed").

import SwiftAISDK
import OpenAIProvider
let result = try await embedMany(
    model: .v3(openai.textEmbeddingModel(modelId: "text-embedding-3-small")),
    values: [
        "sunny day at the beach",
        "rainy afternoon in the city",
        "snowy night in the mountains"
    ]
)
// embeddings is [[Double]] and aligned with the input order
let embeddings = result.embeddings

After embedding values, you can calculate the similarity between them using the cosineSimilarity function. This is useful, for example, to find similar words or phrases in a dataset. You can also rank and filter related items based on their similarity.

import SwiftAISDK
import OpenAIProvider
let result = try await embedMany(
    model: .v3(openai.textEmbeddingModel(modelId: "text-embedding-3-small")),
    values: ["sunny day at the beach", "rainy afternoon in the city"]
)
let similarity = try cosineSimilarity(
    vector1: result.embeddings[0],
    vector2: result.embeddings[1]
)
print("cosine similarity: \(similarity)")
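Under the hood, cosine similarity is the dot product of two vectors divided by the product of their magnitudes: 1 means identical direction, 0 means orthogonal (unrelated). The sketch below is a minimal, self-contained illustration of that formula and of ranking candidates against a query vector; the `cosine` helper and the toy two-dimensional vectors are illustrative, not part of the SDK.

```swift
// Illustrative cosine similarity for two equal-length vectors.
// Mirrors the math behind the SDK's cosineSimilarity helper.
func cosine(_ a: [Double], _ b: [Double]) -> Double {
    precondition(a.count == b.count, "vectors must have the same length")
    let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    let magA = (a.reduce(0) { $0 + $1 * $1 }).squareRoot()
    let magB = (b.reduce(0) { $0 + $1 * $1 }).squareRoot()
    return dot / (magA * magB)
}

print(cosine([1, 0], [1, 0])) // 1.0 (identical direction)
print(cosine([1, 0], [0, 1])) // 0.0 (orthogonal)

// Ranking: sort candidates by similarity to a query embedding.
let query: [Double] = [0.9, 0.1]
let candidates: [(label: String, vector: [Double])] = [
    ("beach", [1.0, 0.0]),
    ("city", [0.0, 1.0])
]
let ranked = candidates.sorted {
    cosine(query, $0.vector) > cosine(query, $1.vector)
}
print(ranked.map(\.label)) // ["beach", "city"]
```

In practice you would use the [Double] values returned by embed/embedMany in place of the toy vectors.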

Many providers charge based on the number of tokens used to generate embeddings. Both embed and embedMany provide token usage information in the usage property of the result object:

import SwiftAISDK
import OpenAIProvider
let result = try await embed(
    model: .v3(openai.textEmbeddingModel(modelId: "text-embedding-3-small")),
    value: "sunny day at the beach"
)
print(result.usage.tokens)

Embedding model settings can be configured using providerOptions for provider-specific parameters:

import SwiftAISDK
import OpenAIProvider
let result = try await embed(
    model: .v3(openai.textEmbeddingModel(modelId: "text-embedding-3-small")),
    value: "sunny day at the beach",
    providerOptions: ["openai": [
        "dimensions": 512
    ]]
)
let embedding = result.embedding

The embedMany function supports parallel processing with a configurable maxParallelCalls parameter to optimize performance:

import SwiftAISDK
import OpenAIProvider
let parallelResult = try await embedMany(
    model: .v3(openai.textEmbeddingModel(modelId: "text-embedding-3-small")),
    values: [
        "sunny day at the beach",
        "rainy afternoon in the city",
        "snowy night in the mountains"
    ],
    maxParallelCalls: 2
)
let usage = parallelResult.usage

Both embed and embedMany accept an optional maxRetries parameter of type Int that you can use to set the maximum number of retries for the embedding process. It defaults to 2 retries (3 attempts in total). You can set it to 0 to disable retries.

import SwiftAISDK
import OpenAIProvider
let noRetry = try await embed(
    model: .v3(openai.textEmbeddingModel(modelId: "text-embedding-3-small")),
    value: "sunny day at the beach",
    maxRetries: 0
)

Both embed and embedMany accept an optional abortSignal closure of type @Sendable () -> Bool that you can use to abort the embedding process or implement a timeout; returning true from the closure aborts the request.

import Foundation
import SwiftAISDK
import OpenAIProvider
let timeout = Date().addingTimeInterval(1)
let embedding = try await embed(
    model: .v3(openai.textEmbeddingModel(modelId: "text-embedding-3-small")),
    value: "sunny day at the beach",
    abortSignal: { Date() >= timeout } // abort once the 1-second deadline passes
)

Both embed and embedMany accept an optional headers parameter of type [String: String] that you can use to add custom headers to the embedding request.

import SwiftAISDK
import OpenAIProvider
let customHeader = try await embed(
    model: .v3(openai.textEmbeddingModel(modelId: "text-embedding-3-small")),
    value: "sunny day at the beach",
    headers: ["X-Custom-Header": "custom-value"]
)

Both embed and embedMany return response information that includes the raw provider response:

import SwiftAISDK
import OpenAIProvider
let info = try await embed(
    model: .v3(openai.textEmbeddingModel(modelId: "text-embedding-3-small")),
    value: "sunny day at the beach"
)
print(String(describing: info.response)) // Raw provider response metadata/body

Several providers offer embedding models:

Provider             | Model                         | Embedding Dimensions
---------------------|-------------------------------|---------------------
OpenAI               | text-embedding-3-large        | 3072
OpenAI               | text-embedding-3-small        | 1536
OpenAI               | text-embedding-ada-002        | 1536
Google Generative AI | gemini-embedding-001          | 3072
Google Generative AI | text-embedding-004            | 768
Mistral              | mistral-embed                 | 1024
Cohere               | embed-english-v3.0            | 1024
Cohere               | embed-multilingual-v3.0       | 1024
Cohere               | embed-english-light-v3.0      | 384
Cohere               | embed-multilingual-light-v3.0 | 384
Cohere               | embed-english-v2.0            | 4096
Cohere               | embed-english-light-v2.0      | 1024
Cohere               | embed-multilingual-v2.0       | 768
Amazon Bedrock       | amazon.titan-embed-text-v1    | 1536
Amazon Bedrock       | amazon.titan-embed-text-v2:0  | 1024