See every LLM call, tool invocation, and error — then replay, fork, or share the full execution as a URL.
uv add retrace · No credit card required
// what retrace does
One @record decorator captures every LLM call, tool invocation, and error as spans in a trace.
1 line to instrument
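A toy sketch of the pattern (illustrative only: the @record name comes from the SDK, but everything below is a stand-in, not Retrace's actual implementation). A recording decorator wraps a function and captures each call's input, output, timing, and errors as a span:

```python
import functools
import time

# Toy stand-in: each call becomes a "span" dict appended to a trace.
SPANS = []

def record(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = {
            "name": fn.__name__,
            "input": {"args": args, "kwargs": kwargs},
            "start": time.time(),
        }
        try:
            span["output"] = fn(*args, **kwargs)
            return span["output"]
        except Exception as exc:
            span["error"] = repr(exc)  # errors are captured, not swallowed
            raise
        finally:
            span["end"] = time.time()
            SPANS.append(span)
    return wrapper

@record
def call_llm(prompt):
    # placeholder for a real LLM call
    return f"echo: {prompt}"

call_llm("hello")  # SPANS now holds one span for this call
```

Because the decorator wraps the function transparently, existing agent code keeps working unchanged: the one-line instrumentation claim above is the whole point of this shape.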
Step through agent executions with play, pause, scrub, and speed controls. See every decision.
Frame-by-frame playback
Branch from any span, modify the input, and watch a new execution path diverge side-by-side.
Git bisect for agents
Publish any trace as an interactive tape URL. Anyone can replay it — no login required.
Shareable via link
Persistent semantic memory with auto-extraction from traces. Agents remember across runs.
pgvector search
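Under the hood, this kind of recall is a vector-similarity search. A minimal illustration of the idea in plain Python (the vectors and memory texts here are made up; Retrace's actual search runs on pgvector):

```python
import math

def cosine(a, b):
    # cosine similarity: dot product over the product of magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical memories with toy 3-dim embeddings (real ones are much larger)
memories = {
    "deploy steps": [0.9, 0.1, 0.0],
    "user prefers JSON": [0.1, 0.95, 0.2],
}
query = [0.0, 1.0, 0.1]  # embedding of the agent's current question
best = max(memories, key=lambda k: cosine(query, memories[k]))
```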
AI coding agents can search traces, recall memories, and inspect executions via the Model Context Protocol (MCP).
6 tools available
// how it's different
Existing tools focus on monitoring metrics. Retrace focuses on understanding decisions.
Other tools: Dashboard of aggregate metrics (latency, error rate)
Retrace: Step-by-step replay of every individual execution
Other tools: Log-based tracing — search through JSON spans
Retrace: Interactive timeline — click any span, see full I/O
Other tools: No way to test a fix without re-running the agent
Retrace: Fork from the exact failure point, modify input, re-execute
Other tools: Share a trace ID — teammate needs platform access
Retrace: Share a URL — anyone can replay without an account
Other tools: Framework-locked (LangChain-only or vendor-specific)
Retrace: Works with any Python/TS agent — OpenAI, Anthropic, Google
Other tools: Complex setup — SDKs, configs, environment variables
Retrace: One decorator, one API key — traces flow in 2 minutes
// how it works
Full visibility into every step your agent takes — inputs, outputs, timing, costs — all in one timeline.
Find where things went wrong instantly. Branch from any point and try a different approach without rerunning.
Share any execution as an interactive link. Your team can replay, comment, and learn from every run.
// pricing
For experimenting
For shipping
For teams
// questions
Under 2 minutes. Run uv add retrace, add the @record decorator to your agent function, and traces start streaming immediately.
Python and TypeScript SDKs with first-class support. Works with any LLM provider (OpenAI, Anthropic, Google AI) and any agent framework (LangChain, CrewAI, Vercel AI SDK, AutoGen, LlamaIndex).
Select any span in a trace, modify its input, and Retrace re-executes from that point forward. You get a side-by-side diff showing exactly what diverged — without re-running the entire agent.
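Conceptually, forking works like this (a toy sketch over a list of pure steps; Retrace's real engine operates on recorded spans, not Python lambdas):

```python
# A "trace" here is just the list of outputs from running a pipeline of steps.
def run(steps, value):
    outputs = []
    for step in steps:
        value = step(value)
        outputs.append(value)
    return outputs

def fork(steps, original_outputs, k, new_input):
    # Re-execute only steps k..end, starting from the modified input,
    # then diff against the original run to see what diverged.
    forked = original_outputs[:k] + run(steps[k:], new_input)
    return [
        (i, a, b)
        for i, (a, b) in enumerate(zip(original_outputs, forked))
        if a != b
    ]

steps = [lambda s: s.strip(), lambda s: s.lower(), lambda s: s.split()]
original = run(steps, "  Hello World  ")
# Fork at step 1 with a modified input; everything downstream re-runs.
diverged = fork(steps, original, 1, "Hello Agent")
```

Only the steps at and after the fork point re-execute, which is why this is cheaper than re-running the whole agent.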
All data is encrypted in transit (TLS) and at rest. API keys are SHA-256 hashed and stored securely. We never access your trace data without explicit permission.
500 traces per month, 10 shared tapes, 7-day retention, and 1 project. No credit card required. Upgrade to Pro ($29/mo) for unlimited traces and 90-day retention.
The Python and TypeScript SDKs are open source. The platform is cloud-hosted; enterprise customers can discuss private deployment on dedicated infrastructure.
Most tools show aggregate dashboards — latency charts, error rates, token counts. Retrace lets you replay individual executions step-by-step, fork from any point, and share interactive traces as public URLs.
Yes. Each agent gets its own trace, and spans capture inter-agent communication. You can follow execution across agents in a single timeline.
Free to start. Under 2 minutes to set up. No credit card required.
uv add retrace