# Analytics
Foil provides comprehensive analytics to understand your AI application’s performance, costs, and quality.

## Dashboard Overview
The main dashboard shows key metrics at a glance:

| Metric | Description |
|---|---|
| Total Requests | Number of traces in the time period |
| Success Rate | Percentage of successful completions |
| Avg Latency | Mean response time |
| Active Agents | Agents with recent activity |
| Alert Count | Active alerts requiring attention |
## Time-Series Charts
### Requests Over Time
Track request volume over time with a per-agent breakdown.
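As a rough sketch, the series behind this chart can be pulled from the analytics API and re-grouped by agent. The `/api/analytics/requests` path, the response shape, and the `FOIL_API_KEY` environment variable are illustrative assumptions, not Foil’s documented interface.

```ts
// Sketch: fetch the request-volume series and total counts per agent.
// Endpoint path and response shape are assumed for illustration.
interface RequestPoint {
  timestamp: string; // ISO 8601 bucket start
  agentId: string;
  count: number;
}

const res = await fetch(
  "https://api.foil.example/api/analytics/requests?granularity=hourly",
  { headers: { Authorization: `Bearer ${process.env.FOIL_API_KEY}` } }
);
const points: RequestPoint[] = await res.json();

// Total requests per agent across the selected window.
const byAgent = new Map<string, number>();
for (const p of points) {
  byAgent.set(p.agentId, (byAgent.get(p.agentId) ?? 0) + p.count);
}
console.log(byAgent);
```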
### Success/Failure Rates

Monitor error trends alongside successful completions.
### Latency Distribution

Understand response time patterns across latency buckets:

- < 200ms
- 200-500ms
- 500ms-1s
- 1-2s
- 2-5s
- \> 5s
### Latency Percentiles
Track p50, p95, and p99 latency over time.
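The dashboard serves these values pre-aggregated; the sketch below only illustrates what the percentiles mean, using a nearest-rank calculation over raw latencies. The sample numbers are made up.

```ts
// Sketch: nearest-rank p50/p95/p99 over a list of request latencies (ms).
function percentile(sortedMs: number[], p: number): number {
  const idx = Math.min(
    sortedMs.length - 1,
    Math.ceil((p / 100) * sortedMs.length) - 1
  );
  return sortedMs[Math.max(0, idx)];
}

const latencies = [120, 180, 240, 310, 450, 900, 1800, 2400].sort((a, b) => a - b);
console.log({
  p50: percentile(latencies, 50),
  p95: percentile(latencies, 95),
  p99: percentile(latencies, 99),
});
```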
## Token & Cost Analytics

### Token Usage
Monitor input and output token consumption over time.
### Cost Breakdown

View estimated costs broken down by model.
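As a sketch of how a per-model cost rollup works, the snippet below aggregates token usage into cost per model. The model names, price table, and usage records are placeholders, not real rates or Foil data structures.

```ts
// Sketch: aggregate per-trace token usage into estimated cost per model.
interface Usage {
  model: string;
  inputTokens: number;
  outputTokens: number;
}

// Example per-1K-token prices in USD (placeholder values only).
const pricePer1K: Record<string, { input: number; output: number }> = {
  "model-a": { input: 0.0025, output: 0.01 },
  "model-b": { input: 0.003, output: 0.015 },
};

function costByModel(usages: Usage[]): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const u of usages) {
    const price = pricePer1K[u.model];
    if (!price) continue;
    const cost =
      (u.inputTokens / 1000) * price.input +
      (u.outputTokens / 1000) * price.output;
    totals[u.model] = (totals[u.model] ?? 0) + cost;
  }
  return totals;
}

console.log(costByModel([
  { model: "model-a", inputTokens: 12_000, outputTokens: 3_500 },
  { model: "model-b", inputTokens: 8_000, outputTokens: 2_000 },
]));
```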
## Error Analytics

### Errors by Type
Understand what’s failing by grouping errors by type.
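A minimal sketch of the grouping behind this chart, assuming a simplified trace shape with a `status` and an optional `errorType` field (both hypothetical names):

```ts
// Sketch: tally failed traces by error type.
interface Trace {
  status: "success" | "error";
  errorType?: string; // e.g. "timeout", "rate_limit", "tool_error"
}

function errorsByType(traces: Trace[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const t of traces) {
    if (t.status !== "error") continue;
    const key = t.errorType ?? "unknown";
    counts[key] = (counts[key] ?? 0) + 1;
  }
  return counts;
}
```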
### Error Rate Over Time

Track error trends over time.
## Tool Usage

### Tool Usage Heatmap
See which tools are used by which agents.
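The heatmap is essentially an agent × tool count matrix. The sketch below builds one from a list of tool calls; the `ToolCall` shape is assumed for illustration.

```ts
// Sketch: build an agent × tool matrix suitable for a usage heatmap.
interface ToolCall {
  agentId: string;
  toolName: string;
}

function heatmap(calls: ToolCall[]): Record<string, Record<string, number>> {
  const grid: Record<string, Record<string, number>> = {};
  for (const call of calls) {
    const row = (grid[call.agentId] ??= {});
    row[call.toolName] = (row[call.toolName] ?? 0) + 1;
  }
  return grid;
}
```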
## Filtering

All analytics endpoints support filtering:

| Parameter | Description |
|---|---|
| startDate | Start of time range (ISO 8601) |
| endDate | End of time range (ISO 8601) |
| agentId | Filter by specific agent |
| granularity | Time grouping: `hourly`, `daily`, `weekly`, or `monthly` |
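These parameters are passed as query-string values. The sketch below shows one way to build such a request; the base URL and endpoint path are assumptions, while the parameter names match the table above.

```ts
// Sketch: apply the filter parameters from the table as query-string values.
const params = new URLSearchParams({
  startDate: "2024-06-01T00:00:00Z",
  endDate: "2024-06-30T23:59:59Z",
  agentId: "agent_123",
  granularity: "daily",
});

const res = await fetch(
  `https://api.foil.example/api/analytics/requests?${params}`,
  { headers: { Authorization: `Bearer ${process.env.FOIL_API_KEY}` } }
);
console.log(await res.json());
```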
## Key Metrics API
Get all primary dashboard metrics in one call.
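A minimal sketch of such a call, assuming a `/api/analytics/metrics` endpoint whose response fields mirror the dashboard table above; the path and field names are illustrative, not Foil’s documented API.

```ts
// Sketch: fetch the primary dashboard metrics in a single request.
interface KeyMetrics {
  totalRequests: number;
  successRate: number; // 0–1
  avgLatencyMs: number;
  activeAgents: number;
  alertCount: number;
}

const res = await fetch("https://api.foil.example/api/analytics/metrics", {
  headers: { Authorization: `Bearer ${process.env.FOIL_API_KEY}` },
});
const metrics: KeyMetrics = await res.json();
console.log(`Success rate: ${(metrics.successRate * 100).toFixed(1)}%`);
```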
## Comparing Periods

Compare metrics across time periods.
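One way to compare periods is to fetch the same metrics twice with different date ranges and compute the deltas client-side, as in the sketch below (endpoint and parameters are the same assumptions as above):

```ts
// Sketch: fetch metrics for two adjacent periods and compute deltas.
async function metricsFor(startDate: string, endDate: string) {
  const qs = new URLSearchParams({ startDate, endDate });
  const res = await fetch(
    `https://api.foil.example/api/analytics/metrics?${qs}`,
    { headers: { Authorization: `Bearer ${process.env.FOIL_API_KEY}` } }
  );
  return (await res.json()) as { totalRequests: number; successRate: number };
}

const current = await metricsFor("2024-06-08", "2024-06-15");
const previous = await metricsFor("2024-06-01", "2024-06-08");

console.log({
  requestsDelta: current.totalRequests - previous.totalRequests,
  successRateDelta: current.successRate - previous.successRate,
});
```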
## Exporting Data

Export analytics data for external analysis.
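As a sketch, an export can be downloaded for a date range and written to disk. The `/api/analytics/export` path and the `format` parameter are assumptions for illustration.

```ts
// Sketch: export a time range as CSV and write it to disk (Node).
import { writeFile } from "node:fs/promises";

const query = new URLSearchParams({
  startDate: "2024-06-01",
  endDate: "2024-06-30",
  format: "csv",
});
const res = await fetch(
  `https://api.foil.example/api/analytics/export?${query}`,
  { headers: { Authorization: `Bearer ${process.env.FOIL_API_KEY}` } }
);
await writeFile("foil-analytics-june.csv", await res.text());
```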
## Best Practices

### Monitor success rate trends

A dropping success rate often indicates issues before they become critical.

### Track p95 latency

Average latency can hide outliers. p95 shows what your slowest users experience.

### Set up cost alerts

Monitor costs closely, especially when testing new prompts or models.

### Compare week-over-week

Regular comparisons help identify regressions quickly.