Overview
The `chargeTokens` activity records token consumption for billing and usage tracking. It sends token usage information to the main API, which handles billing calculations and usage limits.
Purpose
- Track token consumption for billing
- Record input and output token usage
- Update user/flow usage statistics
- Enable usage-based billing calculations
When it’s executed
This activity is called after successful AI model operations:
- After text generation: Records tokens used in text generation operations
- After model calls: Tracks tokens consumed by any AI model interaction
- Before completion: Executes before marking the workflow as complete
Signature
Inputs
| Parameter | Type | Description |
|---|---|---|
| `flowId` | string | ID of the flow that consumed the tokens |
| `userId` | string | ID of the user who owns the flow |
| `inputTokens` | number | Number of input tokens consumed |
| `outputTokens` | number | Number of output tokens generated |
| `model` | string | Name/identifier of the AI model used |
| `aiSystem` | string | AI system/provider (e.g., openai, anthropic) |
Outputs
Returns `true` if token charging was successful, `false` otherwise.
Implementation details
The activity makes an HTTP POST request to the main API’s token charging endpoint with the following request body fields:
- `flowId`: Flow identifier
- `userId`: User identifier
- `inputTokens`: Input token count
- `outputTokens`: Output token count
- `model`: Model name
- `aiSystem`: AI system/provider

The request is authenticated with the `INTERNAL_API_KEY` header.
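The request/response handling might be sketched as follows. The endpoint path and header name come from the API integration notes below; the base-URL environment variable, the injectable `fetchFn` parameter, and all other names are assumptions for illustration, not the project's actual implementation.

```typescript
// Hypothetical sketch of the chargeTokens activity (names assumed).
interface ChargeTokensInput {
  flowId: string;
  userId: string;
  inputTokens: number;
  outputTokens: number;
  model: string;
  aiSystem: string;
}

// `baseUrl` and the fetch implementation are injectable so the sketch
// stays testable; the real activity presumably reads them from config.
async function chargeTokens(
  input: ChargeTokensInput,
  baseUrl = process.env.MAIN_API_URL ?? "http://localhost:3000",
  fetchFn: typeof fetch = fetch,
): Promise<boolean> {
  try {
    const res = await fetchFn(`${baseUrl}/worker/charge-tokens`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Internal authentication header, per the API integration notes.
        INTERNAL_API_KEY: process.env.INTERNAL_API_KEY ?? "",
      },
      body: JSON.stringify(input),
    });
    if (!res.ok) return false;
    const data = (await res.json()) as { success: boolean };
    return data.success === true;
  } catch {
    // Network failures or an unavailable billing system resolve to false
    // rather than throwing, matching the documented boolean contract.
    return false;
  }
}
```

Returning `false` instead of throwing keeps every failure mode (bad response, non-2xx status, network error) behind the single documented boolean result.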
Example usage in workflow
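A framework-agnostic sketch of how a workflow step might invoke the activity before completing. The real project presumably wires `chargeTokens` through its workflow engine's activity proxy; the function and parameter names here are illustrative assumptions.

```typescript
// Shape of the activity as documented above.
type ChargeTokensActivity = (input: {
  flowId: string;
  userId: string;
  inputTokens: number;
  outputTokens: number;
  model: string;
  aiSystem: string;
}) => Promise<boolean>;

// Hypothetical workflow step: charge tokens after a successful model
// call and before the workflow is marked complete.
async function completeAiStep(
  flowId: string,
  userId: string,
  usage: { inputTokens: number; outputTokens: number; model: string; aiSystem: string },
  chargeTokens: ChargeTokensActivity,
): Promise<void> {
  const charged = await chargeTokens({ flowId, userId, ...usage });
  if (!charged) {
    // Billing failed or usage limits were exceeded; fail the step
    // rather than silently completing the workflow.
    throw new Error(`Token charging failed for flow ${flowId}`);
  }
}
```

Passing the activity in as a parameter keeps the sketch testable; in a real workflow engine the proxy injected by the framework would take its place.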
Error handling
The activity returns `false` if:
- The API request fails
- Token charging cannot be processed
- The user has exceeded usage limits
- The billing system is unavailable
Token calculation
Tokens are typically calculated as follows:
- Input tokens: Count of tokens in the prompt/messages sent to the model
- Output tokens: Count of tokens in the response generated by the model
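In practice, providers usually report exact counts in their responses (OpenAI-style responses expose `usage.prompt_tokens` and `usage.completion_tokens`). The sketch below reads those fields and falls back to a rough ~4-characters-per-token heuristic when no exact count is available; the fallback and the function names are assumptions, not part of the documented activity.

```typescript
// Rough fallback heuristic (~4 characters per token); only used when
// the provider response does not report exact usage.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Extract input/output token counts from an OpenAI-style response,
// falling back to the heuristic above.
function usageFromResponse(
  resp: { usage?: { prompt_tokens?: number; completion_tokens?: number } },
  promptText: string,
  outputText: string,
): { inputTokens: number; outputTokens: number } {
  return {
    inputTokens: resp.usage?.prompt_tokens ?? estimateTokens(promptText),
    outputTokens: resp.usage?.completion_tokens ?? estimateTokens(outputText),
  };
}
```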
API integration
The activity communicates with the main API’s internal endpoint:
- Endpoint: `/worker/charge-tokens`
- Method: POST
- Authentication: Internal API key (`INTERNAL_API_KEY` header)
- Request body: `{ flowId, userId, inputTokens, outputTokens, model, aiSystem }`
- Response: `{ success: boolean }`