Workflow Patterns
This page adapts the original AI SDK documentation: Workflow Patterns.
Combine the building blocks from the overview with these patterns to add structure and reliability to your agents:
- Sequential Processing - Steps executed in order
- Parallel Processing - Independent tasks run simultaneously
- Evaluation/Feedback Loops - Results checked and improved iteratively
- Orchestration - Coordinating multiple components
- Routing - Directing work based on context
Choose Your Approach
Consider these key factors:
- Flexibility vs Control - How much freedom does the LLM need, and how tightly must you constrain its actions?
- Error Tolerance - What are the consequences of mistakes in your use case?
- Cost Considerations - More complex systems typically mean more LLM calls and higher costs
- Maintenance - Simpler architectures are easier to debug and modify
Start with the simplest approach that meets your needs (a minimal sketch follows this list). When you need more, add complexity gradually by:
- Breaking down tasks into clear steps
- Adding tools for specific capabilities
- Implementing feedback loops for quality control
- Introducing multiple agents for complex workflows
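For comparison, the simplest approach is often a single model call with a focused prompt. Here is a minimal sketch using the same `generateText` and `openai` helpers as the examples below; the summarization task and model choice are illustrative placeholders, not a recommendation:

```swift
import SwiftAISDK
import OpenAIProvider

// Simplest approach: one model, one prompt, no orchestration.
// The task and model are illustrative placeholders.
func summarizeReport(_ report: String) async throws -> String {
    try await generateText(
        model: openai("gpt-4o"),
        system: "You are a concise technical writer.",
        prompt: "Summarize the key findings of this report in three bullet points:\n\(report)"
    ).text
}
```

Only when a single call like this stops being reliable enough is it worth reaching for the patterns that follow.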
Let’s look at examples of these patterns in action.
Patterns with Examples
These patterns, adapted from Anthropic’s guide on building effective agents, serve as building blocks you can combine to create comprehensive workflows. Each pattern addresses specific aspects of task execution. Combine them thoughtfully to build reliable solutions for complex problems.
Sequential Processing (Chains)
The simplest workflow pattern executes steps in a predefined order. Each step’s output becomes input for the next step, creating a clear chain of operations. Use this pattern for tasks with well-defined sequences, like content generation pipelines or data transformation processes.
```swift
import SwiftAISDK
import OpenAIProvider

struct MarketingQuality: Codable, Sendable {
    let hasCallToAction: Bool
    let emotionalAppeal: Int
    let clarity: Int
}

struct MarketingResult: Sendable {
    let copy: String
    let quality: MarketingQuality
}

func generateMarketingCopy(_ input: String) async throws -> MarketingResult {
    // Step 1: draft the copy.
    let copy = try await generateText(
        model: openai("gpt-4o"),
        prompt: "Write persuasive marketing copy for: \(input). Focus on benefits and emotional appeal."
    ).text

    // Step 2: evaluate the draft with a structured quality check.
    let quality = try await generateText(
        model: openai("gpt-4o"),
        experimentalOutput: Output.object(MarketingQuality.self, name: "quality_check"),
        prompt: """
        Evaluate this marketing copy for:
        1. Presence of call to action (true/false)
        2. Emotional appeal (1-10)
        3. Clarity (1-10)

        Copy to evaluate: \(copy)
        """
    ).experimentalOutput

    // Step 3: if the check fails, rewrite once with targeted instructions.
    if !quality.hasCallToAction || quality.emotionalAppeal < 7 || quality.clarity < 7 {
        let improved = try await generateText(
            model: openai("gpt-4o"),
            prompt: "Rewrite this marketing copy with a clear CTA, stronger emotional appeal, and improved clarity. Original: \(copy)"
        )
        return .init(copy: improved.text, quality: quality)
    }

    return .init(copy: copy, quality: quality)
}
```
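A quick usage sketch for the chain above; the product description is invented, and the call is assumed to run in an async context:

```swift
let result = try await generateMarketingCopy("a reusable smart water bottle")
print("Copy:\n\(result.copy)")
print("Clarity: \(result.quality.clarity), emotional appeal: \(result.quality.emotionalAppeal)")
```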
Routing
This pattern lets the model decide which path to take through a workflow based on context and intermediate results. The model acts as an intelligent router, directing the flow of execution between different branches of your workflow. Use this when handling varied inputs that require different processing approaches. In the example below, the first LLM call’s results determine the second call’s model size and system prompt.
```swift
import SwiftAISDK
import OpenAIProvider

struct RoutingDecision: Codable, Sendable {
    let reasoning: String
    let type: String
    let complexity: String
}

func handleCustomerQuery(_ query: String) async throws -> (String, RoutingDecision) {
    // First call: classify the query into a type and complexity.
    let decision = try await generateText(
        model: openai("gpt-4o"),
        experimentalOutput: Output.object(RoutingDecision.self, name: "routing"),
        prompt: "Classify this customer query and explain your reasoning: \(query)"
    ).experimentalOutput

    let systemMap: [String: String] = [
        "general": "You are an expert customer service agent handling general inquiries.",
        "refund": "You are a customer service agent specializing in refund requests. Follow company policy and collect necessary information.",
        "technical": "You are a technical support specialist with deep product knowledge. Focus on clear step-by-step troubleshooting."
    ]

    // Second call: route simple queries to a smaller model, complex ones to a reasoning model.
    let model = decision.complexity == "simple" ? openai("gpt-4.1-mini") : openai("o4-mini")
    let response = try await generateText(model: model, system: systemMap[decision.type], prompt: query)
    return (response.text, decision)
}
```
Parallel Processing
Break down tasks into independent subtasks that execute simultaneously. This pattern uses parallel execution to improve efficiency while maintaining the benefits of structured workflows. For example, analyze multiple documents or process different aspects of a single input concurrently (like code review).
```swift
import SwiftAISDK
import OpenAIProvider

struct ReviewDetails: Codable, Sendable {
    let summary: String
    let issues: [String]
}

struct Review: Sendable {
    let kind: String
    let details: ReviewDetails
}

func parallelCodeReview(_ code: String) async throws -> (reviews: [Review], summary: String) {
    // The three reviews are independent, so they run concurrently with async let.
    async let security = generateText(
        model: openai("gpt-4o"),
        experimentalOutput: Output.object(ReviewDetails.self, name: "security_review"),
        system: "Security expert.",
        prompt: """
        Review: \(code)
        """
    )
    async let performance = generateText(
        model: openai("gpt-4o"),
        experimentalOutput: Output.object(ReviewDetails.self, name: "performance_review"),
        system: "Performance expert.",
        prompt: """
        Review: \(code)
        """
    )
    async let quality = generateText(
        model: openai("gpt-4o"),
        experimentalOutput: Output.object(ReviewDetails.self, name: "quality_review"),
        system: "Code quality expert.",
        prompt: """
        Review: \(code)
        """
    )

    let reviews = try await [
        Review(kind: "security", details: security.experimentalOutput),
        Review(kind: "performance", details: performance.experimentalOutput),
        Review(kind: "maintainability", details: quality.experimentalOutput)
    ]

    let summaryPrompt = reviews
        .map { review in
            "\(review.kind.capitalized): \(review.details.summary)\nIssues: \(review.details.issues.joined(separator: ", "))"
        }
        .joined(separator: "\n\n")

    // A final call synthesizes the three reviews into one actionable summary.
    let summary = try await generateText(
        model: openai("gpt-4o"),
        system: "You are a technical lead summarizing multiple code reviews.",
        prompt: """
        Synthesize these code review results into a concise summary with key actions:

        \(summaryPrompt)
        """
    ).text

    return (reviews, summary)
}
```
Orchestrator-Worker
A primary model (orchestrator) coordinates the execution of specialized workers. Each worker optimizes for a specific subtask, while the orchestrator maintains overall context and ensures coherent results. This pattern excels at complex tasks requiring different types of expertise or processing.
```swift
import SwiftAISDK
import OpenAIProvider

struct ImplementationPlan: Codable, Sendable {
    let overview: String
    let steps: [String]
    let risks: [String]
}

func implementFeature(_ feature: String) async throws -> (plan: ImplementationPlan, changes: [String]) {
    let plan = try await generateText(
        model: openai("o4-mini"),
        experimentalOutput: Output.object(ImplementationPlan.self, name: "implementation_plan"),
        system: "You are a senior software architect planning feature implementations.",
        prompt: "Analyze this feature request and create an implementation plan: \(feature)"
    ).experimentalOutput

    // Workers would implement per-file changes based on `plan.steps`; omitted for brevity.
    return (plan, [])
}
```
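The worker phase is elided above. Here is a hedged sketch of how it could be filled in, assuming each planned step is simply handed to a worker model as text; a production version would more likely return structured file changes rather than free-form descriptions:

```swift
// Hypothetical worker fan-out over the orchestrator's plan.
// Workers run concurrently; results arrive in completion order, not plan order.
func executePlan(_ plan: ImplementationPlan, feature: String) async throws -> [String] {
    try await withThrowingTaskGroup(of: String.self) { group in
        for step in plan.steps {
            group.addTask {
                try await generateText(
                    model: openai("gpt-4o"),
                    system: "You are a senior developer implementing one step of a planned feature.",
                    prompt: """
                    Feature: \(feature)
                    Plan overview: \(plan.overview)
                    Implement this step and describe the resulting change:
                    \(step)
                    """
                ).text
            }
        }
        return try await group.reduce(into: [String]()) { $0.append($1) }
    }
}
```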
Evaluator-Optimizer
Add quality control to workflows with dedicated evaluation steps that assess intermediate results. Based on the evaluation, the workflow proceeds, retries with adjusted parameters, or takes corrective action. This creates robust workflows capable of self-improvement and error recovery.
```swift
import SwiftAISDK
import OpenAIProvider

struct TranslationEvaluation: Codable, Sendable {
    let qualityScore: Int
    let preservesTone: Bool
    let preservesNuance: Bool
    let culturallyAccurate: Bool
    let specificIssues: [String]
    let improvementSuggestions: [String]
}

func translateWithFeedback(text: String, targetLanguage: String) async throws -> (final: String, iterations: Int) {
    // Initial translation with a smaller, faster model.
    var current = try await generateText(
        model: openai("gpt-4.1-mini"),
        system: "You are an expert literary translator.",
        prompt: """
        Translate this text to \(targetLanguage), preserving tone and cultural nuances:
        \(text)
        """
    ).text

    var iterations = 0
    while iterations < 3 {
        // Evaluate the current translation with a stronger model.
        let evaluation = try await generateText(
            model: openai("gpt-4o"),
            experimentalOutput: Output.object(TranslationEvaluation.self, name: "translation_evaluation"),
            system: "You are an expert in evaluating literary translations.",
            prompt: """
            Evaluate this translation.

            Original: \(text)
            Translation: \(current)
            """
        ).experimentalOutput

        // Stop once the evaluator is satisfied on every dimension.
        if evaluation.qualityScore >= 8
            && evaluation.preservesTone
            && evaluation.preservesNuance
            && evaluation.culturallyAccurate {
            break
        }

        // Otherwise, fold the evaluator's feedback into a revision.
        let feedback = (evaluation.specificIssues + evaluation.improvementSuggestions).joined(separator: "\n")
        current = try await generateText(
            model: openai("gpt-4o"),
            system: "You are an expert literary translator.",
            prompt: """
            Improve this translation based on the following feedback:
            \(feedback)

            Original: \(text)
            Current Translation: \(current)
            """
        ).text

        iterations += 1
    }

    return (current, iterations)
}
```
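A short usage note for the loop above (the passage and target language are placeholders, and the call is assumed to run in an async context): the returned count tells you how many feedback passes ran before the evaluator was satisfied or the cap of three was reached.

```swift
let (translation, passes) = try await translateWithFeedback(
    text: "It was the best of times, it was the worst of times...",
    targetLanguage: "German"
)
print("Final translation after \(passes) feedback pass(es):\n\(translation)")
```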