# Forking Executions

Branch from any step in an agent execution and replay with modifications.
## What is Forking?

Forking lets you branch from any step in a recorded execution and replay it with modified input. Think of it as git bisect for AI agents: find where things went wrong, then try a different path.
## Creating a Fork

### From the Dashboard
1. Open any trace in the trace viewer
2. Click the fork icon on any span in the left panel
3. Modify the input JSON in the dialog
4. Click "Create Fork"
5. The fork executes and creates a new trace from the branch point
### From the API
```bash
curl -X POST https://api.retrace.yashbogam.me/api/v1/forks \
  -H "x-retrace-key: rt_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "trace_id": "uuid-of-original-trace",
    "fork_point_span_id": "uuid-of-span-to-fork-from",
    "name": "Try different prompt",
    "modified_input": {
      "messages": [{"role": "user", "content": "modified prompt here"}]
    }
  }'
```

Then replay the fork:
```bash
curl -X POST https://api.retrace.yashbogam.me/api/v1/forks/{fork_id}/replay \
  -H "x-retrace-key: rt_live_..."
```

## Divergence Score
Every fork produces a divergence score (0.0 to 1.0) showing how much the new path differs:
| Score | Meaning |
|---|---|
| 0.0 | Identical execution path |
| 0.1-0.3 | Minor differences (same structure, different content) |
| 0.3-0.6 | Moderate divergence (different tool calls or reasoning) |
| 0.6-0.9 | Major divergence (completely different approach) |
| 1.0 | Entirely different execution |
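The bands above can be read off programmatically when triaging many forks. A minimal sketch of such a bucketing helper (illustrative only; `describe_divergence` is not part of the Retrace SDK, and the exact boundary handling is an assumption):

```python
def describe_divergence(score: float) -> str:
    """Map a divergence score (0.0-1.0) to the rough bands documented above.

    Illustrative helper, not part of the Retrace SDK; boundary handling
    at 0.3 / 0.6 is an assumption.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("divergence score must be between 0.0 and 1.0")
    if score == 0.0:
        return "identical execution path"
    if score < 0.3:
        return "minor differences"
    if score < 0.6:
        return "moderate divergence"
    if score < 1.0:
        return "major divergence"
    return "entirely different execution"
```

A helper like this is handy for grouping fork results in reports instead of eyeballing raw scores.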
## Use Cases
- **Debug failed runs**: Fork from the step before the error and change the input
- **A/B test prompts**: Fork from the same point with different prompts, compare outputs
- **Explore alternatives**: Try different tool combinations without re-running from scratch
- **Sensitivity analysis**: Quantify how input changes affect agent behavior
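An A/B prompt test, for example, comes down to two fork requests that share a fork point and differ only in `modified_input`. A minimal sketch of building those request bodies (the `fork_payload` helper and the placeholder IDs are illustrative, not part of any SDK):

```python
import json

def fork_payload(trace_id: str, span_id: str, name: str, prompt: str) -> dict:
    """Build a JSON body for POST /api/v1/forks, mirroring the API example above."""
    return {
        "trace_id": trace_id,
        "fork_point_span_id": span_id,
        "name": name,
        "modified_input": {
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Two variants branching from the same span, differing only in the prompt.
variant_a = fork_payload("trace-uuid", "span-uuid", "Prompt A", "Summarize in one sentence.")
variant_b = fork_payload("trace-uuid", "span-uuid", "Prompt B", "Summarize as bullet points.")

# POST each body to /api/v1/forks, replay both, then compare divergence
# scores and outputs in the dashboard.
print(json.dumps(variant_a, indent=2))
```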
> [!NOTE]
> Forking copies all spans before the fork point into a new trace. The fork itself only re-executes from the branch point forward. Cost is proportional to the remaining steps, not the full trace.
## Using Your Own API Keys
By default, fork replay uses the server's Retrace AI API key. For production use, store your own provider keys in the Settings page or via the API:
```bash
curl -X PUT https://api.retrace.yashbogam.me/api/v1/user-api-keys \
  -H "Authorization: Bearer rt_live_..." \
  -H "Content-Type: application/json" \
  -d '{"provider": "google", "key": "your-gemini-api-key"}'
```

Keys are encrypted at rest with AES-256-CBC. When a fork replay runs, Retrace checks for your stored key first, then falls back to the server key.
Supported providers:

- `google`: Gemini 3.1 Pro, Gemini 2.5 Pro, and Gemini 2.5 Flash models
> [!NOTE]
> Provider keys are available on all plans.
## SDK Callback Replay
For advanced use cases, you can replay a fork in your own agent code and submit the results back to Retrace:
1. Create a fork via the API
2. Initialize the fork with `POST /api/v1/forks/:id/replay`
3. Run the modified LLM call in your own code
4. Submit the replay spans via `POST /api/v1/forks/:id/replay/callback`
```bash
curl -X POST https://api.retrace.yashbogam.me/api/v1/forks/:id/replay/callback \
  -H "Authorization: Bearer rt_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "spans": [
      {
        "span_type": "llm_call",
        "name": "My custom replay",
        "model": "gemini-3.1-pro-preview",
        "input": {"messages": [{"role": "user", "content": "modified prompt"}]},
        "output": "Replay result from my own infrastructure",
        "input_tokens": 150,
        "output_tokens": 80,
        "cost": 0.0015,
        "duration_ms": 1200
      }
    ]
  }'
```

> [!TIP]
> SDK callback replay lets you use any model provider, run custom tool calls, or execute replays in your own infrastructure.
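When assembling the callback body in your own code, a small builder keeps the span fields consistent with the shape shown above. A sketch (the `callback_payload` helper is illustrative, not part of any Retrace SDK; the field names follow the curl example):

```python
def callback_payload(model: str, messages: list, output: str,
                     input_tokens: int, output_tokens: int,
                     cost: float, duration_ms: int) -> dict:
    """Assemble a body for POST /api/v1/forks/:id/replay/callback.

    Field names mirror the curl example above; the helper itself is
    an illustrative sketch, not an official client.
    """
    return {
        "spans": [
            {
                "span_type": "llm_call",
                "name": "My custom replay",
                "model": model,
                "input": {"messages": messages},
                "output": output,
                "input_tokens": input_tokens,
                "output_tokens": output_tokens,
                "cost": cost,
                "duration_ms": duration_ms,
            }
        ]
    }
```

POST the returned dict as JSON (with your `Authorization` header) after running the modified LLM call in your own infrastructure.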