
Quickstart

This guide will get you from zero to traced AI calls in under 5 minutes.

Prerequisites

  • A Foil account (sign up here)
  • An API key from the Foil dashboard
  • Node.js 18+ or Python 3.8+

Option A: Foil Wizard

The fastest way to integrate: the Foil Wizard is an AI agent that scans your codebase and automatically adds Foil instrumentation. Run it from your project root:
npx @getfoil/wizard
The wizard edits your source files. We recommend running it on a separate branch.
The wizard will install the SDK, identify your LLM calls and agent patterns, and add tracing automatically. Review the changes, test, and merge when you’re happy.

Wizard Documentation

Full guide on what the wizard instruments, troubleshooting, and rate limits

Option B: Manual Integration

Prefer to wire things up yourself? Follow the steps below.

Step 1: Install the SDK

npm install @getfoil/foil-js

Step 2: Initialize Foil

The primary SDK works with any LLM provider and gives you nested span trees.
const { Foil } = require('@getfoil/foil-js');

const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-first-agent',
});
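
The snippet above reads the key from the environment, so set it in your shell before running (the variable name `FOIL_API_KEY` is taken from the code above):

```shell
export FOIL_API_KEY="your-api-key"
```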

Step 3: Trace Your First Call

Use foil.trace() and ctx.llmCall() for full control over your span tree.
const { Foil } = require('@getfoil/foil-js');
const OpenAI = require('openai');

const openai = new OpenAI();
const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'my-first-agent',
});

async function main() {
  const result = await foil.trace(async (ctx) => {
    // Create an LLM span
    const response = await ctx.llmCall('gpt-4o', async () => {
      return await openai.chat.completions.create({
        model: 'gpt-4o',
        messages: [{ role: 'user', content: 'What is the capital of France?' }],
      });
    });

    return response.choices[0].message.content;
  }, { name: 'capital-query' });

  console.log(result); // "Paris"
  await foil.shutdown();
}

main();

Step 4: View Your Trace

  1. Go to the Foil Dashboard
  2. Navigate to Traces
  3. Click on your trace to see the full span details
You’ll see:
  • The input and output of your LLM call
  • Token usage breakdown
  • Latency metrics
  • Any errors or warnings

What’s Next?

More examples: Browse complete, runnable examples at github.com/getfoil/foil-examples — including auto-instrumentation, custom evaluations, semantic search, and real-world agent scenarios.

Complete Example

Here’s a full working example with an agentic tool-calling loop — the LLM decides which tools to call:
// app.js
const { Foil } = require('@getfoil/foil-js');
const OpenAI = require('openai');

const openai = new OpenAI();
const foil = new Foil({
  apiKey: process.env.FOIL_API_KEY,
  agentName: 'quickstart-agent',
  instrumentModules: { openAI: OpenAI },
});

// Define tools the LLM can call
const tools = [{
  type: 'function',
  function: {
    name: 'web_search',
    description: 'Search the web for information',
    parameters: {
      type: 'object',
      properties: { query: { type: 'string' } },
      required: ['query'],
    },
  },
}];

// Map tool names to implementations
const toolMap = {
  web_search: async (args) => {
    // Replace with your actual search implementation
    return { results: [`Top result for "${args.query}": Paris is the capital of France...`] };
  },
};

async function main() {
  const result = await foil.trace(async (ctx) => {
    const messages = [
      { role: 'system', content: 'You are a helpful research assistant. Use the web_search tool to find information before answering.' },
      { role: 'user', content: 'What are the top attractions in Paris?' },
    ];

    // LLM calls are auto-instrumented; no ctx.llmCall() needed
    let response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages,
      tools,
    });

    // Agentic loop — LLM decides which tools to call
    while (response.choices[0].message.tool_calls) {
      const toolMessages = await ctx.executeTools(response, toolMap);
      messages.push(response.choices[0].message, ...toolMessages);

      response = await openai.chat.completions.create({
        model: 'gpt-4o',
        messages,
        tools,
      });
    }

    return response.choices[0].message.content;
  }, { name: 'paris-research' });

  console.log(result);
  await foil.shutdown();
}

main();
This produces a span tree like:
Trace: paris-research
├── llm (gpt-4o) — auto-captured, returns tool_calls
│   └── tool (web_search) — via ctx.executeTools()
└── llm (gpt-4o) — auto-captured, final answer
Run it:
FOIL_API_KEY=your-key OPENAI_API_KEY=your-key node app.js
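
If you're curious what a tool-execution step involves, here is a simplified, dependency-free sketch of the kind of dispatch `ctx.executeTools()` performs: parse each tool call's JSON arguments, invoke the matching entry in `toolMap`, and wrap the result in a `tool` role message. This is illustrative only, not Foil's actual implementation, and `executeToolsSketch` is a made-up name:

```javascript
// Simplified sketch of tool dispatch (illustrative; not Foil's implementation).
// For each tool_call in the model response: parse the JSON arguments, run the
// matching function from toolMap, and build a `tool` role message for the model.
async function executeToolsSketch(response, toolMap) {
  const toolCalls = response.choices[0].message.tool_calls ?? [];
  return Promise.all(toolCalls.map(async (call) => {
    const impl = toolMap[call.function.name];
    const result = impl
      ? await impl(JSON.parse(call.function.arguments))
      : { error: `unknown tool: ${call.function.name}` };
    return { role: 'tool', tool_call_id: call.id, content: JSON.stringify(result) };
  }));
}
```

In a real trace, Foil also records each tool invocation as a span under the current trace, which is why the span tree above shows a `tool (web_search)` entry.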