# Recording Traces
Capture AI agent executions with the @record decorator and manual spans.
## The `@record` Decorator
This is the simplest way to record an agent execution. Wrap any function, and Retrace automatically captures every LLM call, tool invocation, and error.
```python
from google import genai

import retrace

retrace.configure(api_key="rt_live_...")
client = genai.Client()

@retrace.record(name="research-agent")
def research(topic: str) -> str:
    response = client.models.generate_content(
        model="gemini-3.1-pro-preview",
        contents=f"Research {topic}",
    )
    return response.text

result = research("quantum computing 2026")
```

## Context Manager
For more control over the trace lifecycle:
```python
with retrace.record(name="my-agent", input={"prompt": "hello"}) as t:
    result = agent.run("hello")
    t.output = result
```

## Manual Spans
Add custom spans for non-LLM operations like tool calls, database queries, or API requests:
```python
with retrace.record(name="agent") as t:
    # Start a custom span around a non-LLM operation
    span = t._recorder.start_span(
        name="web_search",
        span_type=retrace.SpanType.TOOL_CALL,
        input={"query": "quantum computing"},
    )
    results = search_web("quantum computing")
    t._recorder.end_span(span.id, output=results)
```

## What Gets Captured
| Data | Source | Notes |
|---|---|---|
| LLM calls | Auto-intercepted | Google Gen AI SDK |
| Messages | Input field | Full message array |
| Response | Output field | Model response content |
| Token usage | API response | `prompt_tokens` + `completion_tokens` |
| Cost | Calculated | Based on model pricing table |
| Duration | Measured | Wall-clock milliseconds |
| Errors | Exception handler | Stack trace and message |
| Model name | API params | `gemini-3.1-pro-preview`, `gemini-2.5-flash`, etc. |
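For example, cost is derived from token usage and per-model pricing. The sketch below shows the idea; the prices and the `estimate_cost` helper are illustrative placeholders, not values or functions from the Retrace SDK:

```python
# Hypothetical per-1M-token prices in USD. Real figures come from
# Retrace's pricing table; these are placeholders for illustration.
PRICES = {
    "gemini-3.1-pro-preview": {"prompt": 1.25, "completion": 10.00},
    "gemini-2.5-flash": {"prompt": 0.30, "completion": 2.50},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one LLM call from its token usage."""
    p = PRICES[model]
    return (prompt_tokens * p["prompt"]
            + completion_tokens * p["completion"]) / 1_000_000
```

A call that consumed 1,000 prompt tokens and 500 completion tokens on `gemini-2.5-flash` would cost about $0.00155 under these placeholder prices.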
> [!TIP]
> Spans can be nested. Set `parent_id` when creating manual spans to build a tree structure.
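To illustrate how `parent_id` links produce a tree, here is a plain-Python model of flat spans being grouped by parent. The dict shape is a simplification invented for this example; Retrace's real span objects carry more fields:

```python
# Flat list of spans as a recorder might emit them.
# parent_id=None marks a root span. (Simplified shape, for illustration.)
spans = [
    {"id": 1, "name": "research_pipeline", "parent_id": None},
    {"id": 2, "name": "web_search", "parent_id": 1},
    {"id": 3, "name": "summarize", "parent_id": 1},
]

def children_of(parent_id, spans):
    """Return names of spans directly nested under the given parent."""
    return [s["name"] for s in spans if s["parent_id"] == parent_id]
```

Here `children_of(1, spans)` returns the two tool spans nested under `research_pipeline`, which is how a trace viewer reconstructs the hierarchy.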
> [!WARNING]
> Input and output are stored as-is. Avoid passing sensitive PII (passwords, tokens) in LLM messages if you share tapes publicly.
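One way to keep secrets out of recorded inputs is to scrub them before they reach the model. The helper below is a sketch, not part of the Retrace SDK, and its regex patterns are illustrative rather than exhaustive:

```python
import re

# Patterns for common secret formats; extend with your own.
# (Assumption: these regexes are examples, not a complete PII filter.)
SECRET_PATTERNS = [
    re.compile(r"rt_live_[A-Za-z0-9]+"),  # Retrace API keys
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # generic provider keys
    re.compile(r"Bearer\s+\S+"),          # Authorization headers
]

def redact(text: str) -> str:
    """Replace anything matching a secret pattern with [REDACTED]."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Applying `redact` to message content before passing it into the traced function keeps keys out of shared tapes while leaving ordinary text untouched.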