Viewing Token Consumption and AI History

Studio AI provides two complementary tools for monitoring how the AI is being used: the Token Usage tab in AI Configuration gives a per-model summary of cumulative consumption, and the AI Agent History panel gives a full log of every individual request and response.

Token Usage Summary

The Token Usage tab is found in the AI Configuration panel (click the configuration icon in the AI Chat panel bar, then select the Token Usage tab).

It displays a table with one row per language model that has been used in the current Studio session:

Column          Description
Model           The LLM identifier (e.g., anthropic/claude-opus-4-5)
Input Tokens    Cumulative tokens sent to this model — prompts, attached files, context, and conversation history
Output Tokens   Cumulative tokens generated by this model in responses
Total Tokens    Input + Output combined
Last Used       Timestamp of the most recent request sent to this model

The table resets when you restart the Studio. It covers all agents in the session, not just the active chat — background agents such as session naming and code completion are included.

Interpreting Token Counts

High input token counts typically indicate that large files are being attached as context, or that a conversation has grown long (the full history is sent with each message). If you notice unexpectedly high consumption, check whether you are attaching large files unnecessarily, or whether a session has become very long and should be started fresh.
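Because each message resends the full history, input consumption grows faster than the conversation itself. A small sketch of that arithmetic (assuming, for simplicity, that the whole prior history is sent verbatim with every message):

```python
def cumulative_input_tokens(message_sizes: list[int]) -> int:
    # Message i costs the sum of messages 0..i in input tokens,
    # because the entire history (including message i) is sent as the prompt.
    total = 0
    history = 0
    for size in message_sizes:
        history += size   # history now includes this message
        total += history  # the whole history is billed as input
    return total
```

Ten messages of 100 tokens each cost 5,500 input tokens in total, not 1,000 — which is why starting a fresh session can cut consumption sharply.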

Output tokens reflect the length of the responses generated. Agent Mode sessions that involve many autonomous steps will naturally produce more output tokens than a single Edit Mode response.

AI Agent History Panel

The AI Agent History panel provides a detailed, chronological log of every request sent to any agent and every response received during the current session.

Opening the Panel

Go to View > AI Agent History from the menu bar. The panel opens as a separate view alongside your other panels.

Using the History Panel

At the top of the panel is an agent dropdown that lets you filter the history to a specific agent, or view all agents at once. This is useful when you have been working across multiple agents and want to review what a particular one did.

Each entry in the history shows:

  • Agent name — which agent handled the request
  • Timestamp — when the request was sent
  • Request — the full message sent to the model, including any context that was injected (files, variables, prompt template content)
  • Response — the full text of the model's reply
  • Request ID — a unique identifier for the request, useful for support or debugging
  • Session ID — the chat session this request belongs to
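The fields above can be modeled as a simple record. This is a hypothetical sketch for orientation only — the panel's actual storage format is not exposed, and the class name HistoryEntry is invented here:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class HistoryEntry:
    # One entry in the AI Agent History panel.
    agent_name: str      # which agent handled the request
    timestamp: datetime  # when the request was sent
    request: str         # full prompt, including injected context
    response: str        # full text of the model's reply
    request_id: str      # unique per request; useful for support or debugging
    session_id: str      # chat session this request belongs to
```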

Reviewing Full Prompts

One of the primary uses of the History panel is to see the complete prompt that was actually sent to the model — including all injected context variables, file contents, and prompt template expansions. This helps you understand exactly what the model received, which is invaluable for debugging unexpected responses or tuning your prompt templates.

For example, if an agent is ignoring a file you thought you attached, the History panel will show whether the file content actually appeared in the prompt.

Session ID and Request ID

The Session ID links a request to the chat session it came from. You can use this to correlate a specific AI response in the History panel with the corresponding chat session in the Session History view.

The Request ID is a unique identifier that can be shared with support teams when reporting unexpected model behavior, allowing the specific request to be traced on the backend.
