Documentation Index
Fetch the complete documentation index at: https://docs.getfoil.ai/llms.txt
Use this file to discover all available pages before exploring further.
Vercel AI SDK Integration
The Foil SDK integrates with the Vercel AI SDK to automatically trace streaming responses, tool calls, and multi-step interactions.
Quick Start (Recommended)
The easiest way: initialize Foil once, and your Vercel AI SDK calls are traced automatically:
const { Foil } = require('@getfoil/foil-js/otel');
const { openai } = require('@ai-sdk/openai');
const { streamText } = require('ai');

// Initialize Foil once at app startup
Foil.init({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'vercel-ai-agent',
});

// Use the Vercel AI SDK as normal - automatically traced!
const result = await streamText({
  model: openai('gpt-4o'),
  prompt: 'Write a haiku about coding',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
// ↑ This call was automatically traced to Foil
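The snippets in this guide assume a FOIL_API_KEY environment variable is set; for example (the value shown is a placeholder):

```shell
# Set your Foil API key before starting the app (placeholder value)
export FOIL_API_KEY="your-foil-api-key"
```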
Manual Callbacks (Alternative)
For more control over tracing, use the callbacks:
const { openai } = require('@ai-sdk/openai');
const { streamText } = require('ai');
const { createFoilTracer, createVercelAICallbacks } = require('@getfoil/foil-js');

const tracer = createFoilTracer({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'vercel-ai-agent',
});

await tracer.trace(async (ctx) => {
  const callbacks = createVercelAICallbacks(tracer, { context: ctx });

  const result = await streamText({
    model: openai('gpt-4o'),
    prompt: 'Write a haiku about coding',
    ...callbacks,
  });

  // Consume the stream
  for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
  }

  return result.text;
});
What Gets Captured
Event          Captured Data
onStart        Model, input, start time
onToken        Streaming tokens, TTFT
onToolCall     Tool name, arguments
onToolResult   Tool output, duration
onFinish       Full response, tokens, latency
onError        Error message, stack
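TTFT above is time to first token: the delay between starting a request and receiving the first streamed token. A minimal illustration of how such a measurement can be derived from stream events (illustrative code, not the Foil internals):

```javascript
// Illustrative TTFT (time-to-first-token) tracker, similar in spirit to
// what the onToken event captures. The clock is injectable for testing.
function createTtftTracker(now = () => Date.now()) {
  const start = now();  // request start time
  let ttftMs = null;    // set when the first token arrives
  return {
    onToken() {
      if (ttftMs === null) ttftMs = now() - start;
    },
    get ttftMs() {
      return ttftMs;
    },
  };
}
```

Only the first token sets the measurement; later tokens leave it unchanged.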
Automatic tracking of tool executions:
import { tool } from 'ai';
import { z } from 'zod';

await tracer.trace(async (ctx) => {
  const callbacks = createVercelAICallbacks(tracer, { context: ctx });

  const result = await streamText({
    model: openai('gpt-4o'),
    prompt: 'What is the weather in San Francisco?',
    tools: {
      getWeather: tool({
        description: 'Get weather for a location',
        parameters: z.object({
          location: z.string()
        }),
        execute: async ({ location }) => {
          return await fetchWeather(location);
        }
      })
    },
    ...callbacks
  });

  // Process result...
});
Creates a trace like:

Trace: vercel-ai-agent
└── LLM: gpt-4o
    ├── Tool: getWeather
    └── (continued response)
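The tool example above assumes a fetchWeather helper. A minimal stand-in returning canned data (a real implementation would call a weather API):

```javascript
// Minimal stand-in for the fetchWeather helper used in the tool example.
// A real implementation would call a weather service; this returns mock data.
async function fetchWeather(location) {
  const mockData = {
    'San Francisco': { temperatureF: 61, conditions: 'Foggy' },
  };
  return mockData[location] ?? { temperatureF: null, conditions: 'unknown' };
}
```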
Multi-Step Conversations
Track multi-turn interactions:
import { generateText } from 'ai';

await tracer.trace(async (ctx) => {
  const callbacks = createVercelAICallbacks(tracer, { context: ctx });
  const messages = [];

  // First turn
  messages.push({ role: 'user', content: 'Hello!' });
  const response1 = await generateText({
    model: openai('gpt-4o'),
    messages,
    ...callbacks
  });
  messages.push({ role: 'assistant', content: response1.text });

  // Second turn
  messages.push({ role: 'user', content: 'Tell me a joke' });
  const response2 = await generateText({
    model: openai('gpt-4o'),
    messages,
    ...callbacks
  });

  return response2.text;
});
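The append/call/append pattern above can be factored into a small helper. A sketch with illustrative names (sendFn stands in for a generateText call that maps messages to reply text):

```javascript
// Illustrative conversation helper for the multi-turn pattern above.
// sendFn is any async function mapping the messages array to a reply string.
function createConversation(sendFn) {
  const messages = [];
  return {
    messages,
    async send(userText) {
      messages.push({ role: 'user', content: userText });
      const reply = await sendFn(messages);
      messages.push({ role: 'assistant', content: reply });
      return reply;
    },
  };
}
```

Each send call keeps the full history, so every model call sees the prior turns.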
With Next.js Route Handlers
Integrate with Next.js API routes:
// instrumentation.ts (Next.js instrumentation file)
import { Foil } from '@getfoil/foil-js/otel';

export function register() {
  Foil.init({
    apiKey: process.env.FOIL_API_KEY!,
    agentName: 'nextjs-chat',
  });
}

// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Automatically traced!
  const result = await streamText({
    model: openai('gpt-4o'),
    messages,
  });

  return result.toDataStreamResponse();
}
Or, for manual control, use the tracer directly in the route handler:

// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { createFoilTracer, createVercelAICallbacks } from '@getfoil/foil-js';

const tracer = createFoilTracer({
  apiKey: process.env.FOIL_API_KEY!,
  agentName: 'nextjs-chat',
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  return await tracer.trace(async (ctx) => {
    const callbacks = createVercelAICallbacks(tracer, { context: ctx });

    const result = await streamText({
      model: openai('gpt-4o'),
      messages,
      ...callbacks,
    });

    return result.toDataStreamResponse();
  }, {
    name: 'chat-completion',
    input: messages,
  });
}
With useChat Hook
The server-side tracing works seamlessly with the client useChat hook:
// Client component
'use client';
import { useChat } from 'ai/react';

export function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat'
  });

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>{m.content}</div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
Object Generation
Track structured output generation:
import { generateObject } from 'ai';
import { z } from 'zod';

await tracer.trace(async (ctx) => {
  const callbacks = createVercelAICallbacks(tracer, { context: ctx });

  const result = await generateObject({
    model: openai('gpt-4o'),
    schema: z.object({
      recipe: z.object({
        name: z.string(),
        ingredients: z.array(z.string()),
        steps: z.array(z.string())
      })
    }),
    prompt: 'Generate a recipe for chocolate cake',
    ...callbacks
  });

  return result.object;
});
Configuration Options
const callbacks = createVercelAICallbacks(tracer, {
  context: ctx,          // Trace context (required)
  captureTokens: true,   // Count tokens (default: true)
  captureContent: true   // Capture full content (default: true)
});
Error Handling
Errors are automatically captured:
await tracer.trace(async (ctx) => {
  const callbacks = createVercelAICallbacks(tracer, { context: ctx });

  try {
    const result = await streamText({
      model: openai('gpt-4o'),
      prompt: 'Hello',
      ...callbacks
    });
    // Await the full text so streaming errors surface inside this try block
    return await result.text;
  } catch (error) {
    // Error automatically recorded via the onError callback
    throw error;
  }
});
Next Steps
OpenAI Integration - OpenAI-specific features
Signals & Feedback - Record user feedback
Alerting - Set up alerts for issues