# Getting Started
Retrace is an execution replay engine for AI agents: record, replay, fork, and share agent runs. It captures every LLM call, tool invocation, and error your agent makes, then lets you replay, fork, and share the execution like a video.
## Quick Start
### 1. Install the SDK

```bash
pip install retrace
```

### 2. Configure your API key

```python
import retrace

retrace.configure(api_key="rt_live_...")
```

### 3. Add the decorator

```python
from openai import OpenAI  # example uses an OpenAI-compatible client

client = OpenAI()

@retrace.record(name="my-agent")
def run_agent(prompt: str):
    response = client.chat.completions.create(
        model="gemini-3.1-pro-preview",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

### 4. Verify in the dashboard
Run your agent. Open the Retrace dashboard to see the recorded trace with every step visualized.
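To build intuition for what the decorator is doing, here is a stripped-down, illustrative sketch of how a recording decorator can capture inputs, outputs, errors, and duration. This is not the Retrace SDK's internals — `TRACES` and this `record` are stand-ins invented for the example:

```python
# Illustrative only: a minimal recording decorator, not the real SDK.
import functools
import time

TRACES = []  # stand-in for the Retrace backend


def record(name):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            trace = {"name": name, "input": {"args": args, "kwargs": kwargs}}
            start = time.perf_counter()
            try:
                trace["output"] = fn(*args, **kwargs)
                return trace["output"]
            except Exception as exc:
                trace["error"] = repr(exc)  # automatic exception capture
                raise
            finally:
                trace["duration_s"] = time.perf_counter() - start
                TRACES.append(trace)
        return wrapper
    return decorator


@record(name="my-agent")
def run_agent(prompt: str) -> str:
    return prompt.upper()  # stands in for the LLM call


run_agent("hello")
```

Because the wrapper records in a `finally` block, a trace is appended whether the function returns normally or raises, which is how both successful runs and errors end up in the dashboard.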
## What Gets Captured
| Data | How |
|---|---|
| LLM calls | Automatic (Retrace AI) |
| Tool calls | Manual spans or interceptors |
| Errors | Automatic exception capture |
| Token usage | Extracted from API response |
| Cost | Calculated from model pricing |
| Duration | Measured start to end |
| Input/Output | Full request and response |
| Metadata | Custom key-value pairs |
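The "manual spans" row means wrapping your own tool calls so they show up as steps in the trace. The SDK's actual span API is not shown in this guide; as a rough, hypothetical sketch of the pattern, assuming a context-manager-style `span` (the names below are invented for illustration):

```python
# Hypothetical sketch of the manual-span pattern, not the real Retrace API.
import contextlib
import time

captured = []  # stand-in for the trace sink


@contextlib.contextmanager
def span(name, **metadata):
    entry = {"name": name, "metadata": metadata}
    start = time.perf_counter()
    try:
        yield entry  # the caller can attach output onto the span
    finally:
        entry["duration_s"] = time.perf_counter() - start
        captured.append(entry)


def search_tool(query: str) -> str:
    with span("search", tool="web_search") as s:
        result = f"results for {query}"  # stands in for the real tool call
        s["output"] = result
    return result


search_tool("weather")
```

The `finally` block ensures the span is closed and recorded even if the tool raises, matching the error-capture behavior described in the table.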
> [!TIP]
> Set `RETRACE_ENABLED=false` to disable recording in production without changing code.
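The exact values the SDK accepts for this variable are not documented here; a common convention, sketched below as an assumption, is to treat anything other than an explicit `false` as enabled:

```python
# Sketch of one plausible way an SDK might read RETRACE_ENABLED;
# the real parsing rules may differ.
import os


def recording_enabled() -> bool:
    # Anything other than an explicit "false" counts as enabled.
    return os.environ.get("RETRACE_ENABLED", "true").strip().lower() != "false"


os.environ["RETRACE_ENABLED"] = "false"
disabled = recording_enabled()

os.environ["RETRACE_ENABLED"] = "true"
enabled = recording_enabled()
```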
## Next Steps
- Recording Traces - Deep dive into the `@record` decorator
- Replaying Executions - Learn the tape player
- Forking - Branch and replay with modifications
- API Reference - Full REST API documentation