The fastest way to integrate. The Foil Wizard is an AI agent that scans your codebase and automatically adds Foil instrumentation.
```shell
npx @getfoil/wizard
```
The wizard edits your source files. We recommend running it on a separate branch.
The wizard will install the SDK, identify your LLM calls and agent patterns, and add tracing automatically. Review the changes, test, and merge when you’re happy.
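Since the wizard rewrites files in place, a typical workflow looks like the following (the branch name is just an example):

```shell
# Create a throwaway branch so the wizard's edits are easy to review
git checkout -b add-foil-instrumentation
npx @getfoil/wizard

# Inspect what was changed before committing
git diff
```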
Use foil.trace() and ctx.llmCall() for full control over your span tree.
```javascript
const { Foil } = require('@getfoil/foil-js');
const OpenAI = require('openai');

const openai = new OpenAI();
const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-first-agent',
});

const result = await foil.trace(async (ctx) => {
  // Create an LLM span
  const response = await ctx.llmCall('gpt-4o', async () => {
    return await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: 'What is the capital of France?' }],
    });
  });
  return response.choices[0].message.content;
}, { name: 'capital-query' });

console.log(result); // "Paris"
await foil.shutdown();
```
With auto-instrumentation enabled, the SDK traces each LLM call for you, with no per-call wrapper needed. Combine it with ctx.executeTools() for agentic tool calling:
```javascript
const OpenAI = require('openai');
const { Foil } = require('@getfoil/foil-js');

const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-first-agent',
  instrumentModules: { openAI: OpenAI },
});

const openai = new OpenAI();

// Define tools the LLM can call
const tools = [{
  type: 'function',
  function: {
    name: 'get_capital',
    description: 'Get the capital city of a country',
    parameters: {
      type: 'object',
      properties: { country: { type: 'string' } },
      required: ['country'],
    },
  },
}];

const toolMap = {
  get_capital: async (args) => ({ capital: 'Paris', country: args.country }),
};

const result = await foil.trace(async (ctx) => {
  const messages = [{ role: 'user', content: 'What is the capital of France?' }];

  // LLM call is auto-traced — no wrapper needed
  let response = await openai.chat.completions.create({ model: 'gpt-4o', messages, tools });

  // LLM decides to call tools — executeTools traces each one
  while (response.choices[0].message.tool_calls) {
    const toolMessages = await ctx.executeTools(response, toolMap);
    messages.push(response.choices[0].message, ...toolMessages);
    response = await openai.chat.completions.create({ model: 'gpt-4o', messages, tools });
  }

  return response.choices[0].message.content;
}, { name: 'capital-query' });

console.log(result);
await foil.shutdown();
```
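If you are curious what happens inside the tool loop, here is a rough sketch of the dispatch that ctx.executeTools() performs. This is an illustration of the pattern, not Foil's actual implementation; the real helper also records a span for each tool invocation.

```javascript
// Sketch (not the SDK source) of the dispatch ctx.executeTools performs:
// walk response.choices[0].message.tool_calls, parse each call's JSON
// arguments, invoke the matching handler from toolMap, and return the
// role:'tool' messages to feed back to the model.
async function executeToolsSketch(response, toolMap) {
  const toolCalls = response.choices[0].message.tool_calls || [];
  return Promise.all(toolCalls.map(async (call) => {
    const handler = toolMap[call.function.name];
    const args = JSON.parse(call.function.arguments); // arguments arrive as a JSON string
    const result = await handler(args);
    return {
      role: 'tool',
      tool_call_id: call.id, // ties the result back to the request
      content: JSON.stringify(result),
    };
  }));
}
```

Each returned message is pushed onto the conversation so the model can read the tool results on the next turn.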
In Python, wrap the OpenAI client with foil.wrap_openai(); every call made through the wrapped client is traced automatically:
```python
from openai import OpenAI
from foil import Foil
import os

client = OpenAI()
foil = Foil(api_key=os.environ['FOIL_API_KEY'])

# Wrap OpenAI client - all calls automatically traced
wrapped_client = foil.wrap_openai(client)

# Make the API call - automatically traced
response = wrapped_client.chat.completions.create(
    model='gpt-4o',
    messages=[{'role': 'user', 'content': 'What is the capital of France?'}],
)
print(response.choices[0].message.content)  # "Paris"
```
More examples: Browse complete, runnable examples at github.com/getfoil/foil-examples — including auto-instrumentation, custom evaluations, semantic search, and real-world agent scenarios.