The most in-depth observability platform for AI agents.
See every decision. Find exactly where your agents fail.
| TRACE | AGENT | DURATION | TOKENS | COST | STATUS |
|---|---|---|---|---|---|
| trc_8f3a | support | 1.24s | 2,847 | $0.02 | ok |
| trc_7e2b | research | 3.87s | 8,234 | $0.09 | warn |
| trc_6d1c | code | 2.13s | 4,521 | $0.04 | ok |
| trc_5c0b | support | 0.89s | 1,203 | $0.01 | ok |
Use Cases
From support bots to research agents, trace every decision your agents make and know exactly what went wrong and why.
Trace every thought, every decision, every retrieval. See the complete chain of reasoning and find exactly where the failure occurred. No more guessing.
Watch your agent's memory in action. See which documents it retrieves, which passages it focuses on, and whether it's actually using the context you gave it.
Watch your agent think in real time. Visualize decision trees, track tool calls, and understand exactly why it chose one path over another. Catch runaway loops before they drain your API budget.
Understand how your agent crafts each output. See the reasoning behind every word. Catch hallucinations, safety issues, and quality problems with deep inspection.
Platform Features
Purpose-built for AI agents. See what no other tool can show you.
See every step your agent takes: each LLM call, tool invocation, memory read, and branching decision. Understand its complete reasoning chain.
Catch the moment your agent makes things up. Flag responses that contradict the provided context or fabricate sources in real time.
Detect policy violations, harmful outputs, and prompt injections before they reach users. See why they happened.
Track token usage and API spend down to individual decisions. Find exactly what's burning through your budget.
Link user feedback directly to agent decisions. Understand which reasoning paths lead to good or bad outcomes.
Replay any failed execution step by step. Rewind to the exact moment things went wrong and see every detail.
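Here's a minimal sketch of how feedback linking and replay could look in code. The method names (`traces.get`, `addFeedback`, `replay`) are illustrative, not a documented API surface.

```ts
import { Foil } from '@foil/sdk';

const foil = new Foil({ apiKey: 'sk-...' });

// Illustrative sketch: traces.get, addFeedback, and replay are assumed
// method names, not a documented API.
const trace = await foil.traces.get('trc_7e2b');

// Link a user's thumbs-down to the decisions that produced the response.
await trace.addFeedback({
  rating: 'negative',
  comment: 'Cited a source that does not exist',
});

// Step through the failed execution one decision at a time.
for await (const step of trace.replay()) {
  console.log(step.type, step.summary);
}
```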
Integration
Add a few lines of code. Works with any framework.
`npm install @foil/sdk` or `pip install foil-sdk`. Framework-agnostic, with first-class support for LangChain, LlamaIndex, and CrewAI.
One line to instrument. All LLM calls, tool uses, and memory operations are captured automatically.
Traces stream in real time. Set up alerts, analyze trends, and debug issues from a single UI.
```ts
import { Foil } from '@foil/sdk';

// Initialize with your API key
const foil = new Foil({ apiKey: 'sk-...' });

// Wrap your agent - that's it!
const tracedAgent = foil.wrap(myAgent);

// All LLM calls, tool uses, and decisions are now traced
const result = await tracedAgent.run("Help the user with their request");
```
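Alerts and live trace streams can be set up from code as well. Below is a minimal sketch of what that could look like; `alerts.create` and `traces.stream` are illustrative names, not final SDK signatures.

```ts
import { Foil } from '@foil/sdk';

const foil = new Foil({ apiKey: 'sk-...' });

// Sketch only: alerts.create and traces.stream are illustrative names,
// not guaranteed SDK signatures.
await foil.alerts.create({
  name: 'cost-spike',
  // Fire when a single trace spends more than $0.10.
  condition: { metric: 'cost_usd', op: '>', threshold: 0.1 },
  notify: ['slack:#agent-ops'],
});

// Follow traces from the support agent as they stream in.
for await (const trace of foil.traces.stream({ agent: 'support' })) {
  if (trace.status === 'warn') console.warn(trace.id, trace.cost);
}
```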
Join 50+ teams who finally understand what their AI agents are doing.