Studio AI provides two complementary tools for monitoring how the AI is being used: the Token Usage tab in AI Configuration gives a per-model summary of cumulative consumption, and the AI Agent History panel gives a full log of every individual request and response.
The Token Usage tab is found in the AI Configuration panel (click the configuration icon in the AI Chat panel bar, then select the Token Usage tab).
It displays a table with one row per language model that has been used in the current Studio session, showing the cumulative input and output token counts for each model.
The table resets when you restart Studio. It covers all agents in the session, not just the active chat — background agents such as session naming and code completion are included.
High input token counts typically indicate that large files are being attached as context, or that a conversation has grown long (the full history is sent with each message). If you notice unexpectedly high consumption, check whether you are attaching large files unnecessarily, or whether a session has become very long and should be started fresh.
Output tokens reflect the length of the responses generated. Agent Mode sessions that involve many autonomous steps will naturally produce more output tokens than a single Edit Mode response.
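To see why long sessions inflate input token counts, consider that every new message resends the entire history. The sketch below is illustrative only (it is not Studio code and assumes a simplified model where each turn contributes a fixed token cost), but it shows how cumulative input consumption grows much faster than the conversation itself:

```python
# Illustrative sketch: cumulative input tokens when the full history
# is resent with every message. Not Studio code; turn sizes are
# hypothetical round numbers.

def cumulative_input_tokens(turn_sizes):
    """Sum the input tokens across a conversation where each request
    includes every prior message plus the new one."""
    total = 0
    history = 0
    for size in turn_sizes:
        history += size   # the new message joins the history
        total += history  # the whole history is sent this turn
    return total

# Ten messages of 200 tokens each: 2,000 tokens of conversation,
# but 200 * (1 + 2 + ... + 10) = 11,000 input tokens consumed.
print(cumulative_input_tokens([200] * 10))
```

The triangular growth (each turn pays for all earlier turns again) is why starting a fresh session is often the most effective way to curb input token consumption.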
The AI Agent History panel provides a detailed, chronological log of every request sent to any agent and every response received during the current session.
Go to View > AI Agent History from the menu bar. The panel opens as a separate view alongside your other panels.
At the top of the panel is an agent dropdown that lets you filter the history to a specific agent, or view all agents at once. This is useful when you have been working across multiple agents and want to review what a particular one did.
Each entry in the history records the agent involved, the complete prompt that was sent, the response that was received, and the Session ID and Request ID for the request.
One of the primary uses of the History panel is to see the complete prompt that was actually sent to the model — including all injected context variables, file contents, and prompt template expansions. This helps you understand exactly what the model received, which is invaluable for debugging unexpected responses or tuning your prompt templates.
For example, if an agent is ignoring a file you thought you attached, the History panel will show whether the file content actually appeared in the prompt.
The Session ID links a request to the chat session it came from. You can use this to correlate a specific AI response in the History panel with the corresponding chat session in the Session History view.
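The correlation step is essentially a grouping by Session ID. The following sketch uses hypothetical entry records (the field names are assumptions, not Studio's actual export format) to show the idea:

```python
# Hypothetical sketch: grouping History entries by Session ID so that
# each request can be matched back to its chat session. Field names
# and IDs are invented for illustration.
from collections import defaultdict

entries = [
    {"request_id": "req-001", "session_id": "sess-A", "agent": "chat"},
    {"request_id": "req-002", "session_id": "sess-B", "agent": "naming"},
    {"request_id": "req-003", "session_id": "sess-A", "agent": "chat"},
]

by_session = defaultdict(list)
for entry in entries:
    by_session[entry["session_id"]].append(entry["request_id"])

# All requests that belong to chat session "sess-A"
print(by_session["sess-A"])
```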
The Request ID is a unique identifier that can be shared with support teams when reporting unexpected model behavior, allowing the specific request to be traced on the backend.