Overview

The chargeTokensActivity records token consumption for billing and usage tracking. It sends token usage information to the main API, which handles billing calculations and usage limits.

Purpose

  • Track token consumption for billing
  • Record input and output token usage
  • Update user/flow usage statistics
  • Enable usage-based billing calculations

When it’s executed

This activity is called after successful AI model operations:
  1. After text generation: Records tokens used in text generation operations
  2. After model calls: Tracks tokens consumed by any AI model interaction
  3. Before completion: Executes before marking the workflow as complete

Signature

async function chargeTokensActivity(input: {
  flowId: string;
  userId: string;
  inputTokens: number;
  outputTokens: number;
  model: string;
  aiSystem: string;
}): Promise<boolean>

Inputs

Parameter      Type     Description
flowId         string   ID of the flow that consumed the tokens
userId         string   ID of the user who owns the flow
inputTokens    number   Number of input tokens consumed
outputTokens   number   Number of output tokens generated
model          string   Name/identifier of the AI model used
aiSystem       string   AI system/provider (e.g., openai, anthropic)

Outputs

Returns true if token charging was successful, false otherwise.

Implementation details

The activity makes an HTTP POST request to the main API’s token charging endpoint:
POST ${URL_MAIN_API}/worker/charge-tokens
The request includes:
  • flowId: Flow identifier
  • userId: User identifier
  • inputTokens: Input token count
  • outputTokens: Output token count
  • model: Model name
  • aiSystem: AI system/provider
The request is authenticated using the INTERNAL_API_KEY header.
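The request described above can be sketched in TypeScript as a small builder function. The exact header name carrying the key is an assumption (the docs only say the INTERNAL_API_KEY is used), and buildChargeTokensRequest is a hypothetical helper, not the activity's actual implementation:

```typescript
// Sketch: assembling the POST request sent to the main API.
// The "x-internal-api-key" header name is an assumption.

interface ChargeTokensInput {
  flowId: string;
  userId: string;
  inputTokens: number;
  outputTokens: number;
  model: string;
  aiSystem: string;
}

function buildChargeTokensRequest(
  input: ChargeTokensInput,
  urlMainApi: string,
  internalApiKey: string
): { url: string; method: string; headers: Record<string, string>; body: string } {
  return {
    // POST ${URL_MAIN_API}/worker/charge-tokens
    url: `${urlMainApi}/worker/charge-tokens`,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-internal-api-key": internalApiKey, // assumed header name
    },
    // Body carries { flowId, userId, inputTokens, outputTokens, model, aiSystem }
    body: JSON.stringify(input),
  };
}
```

The resulting object can be handed to any HTTP client (e.g., fetch) inside the activity.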

Example usage in workflow

// After successful text generation
const output = await textGeneratorActivity({
  nodeId,
  sessionId,
  userId,
});

// Charge tokens for the operation
await chargeTokensActivity({
  flowId: node.flowId,
  userId,
  inputTokens: output.inputTokens,
  outputTokens: output.outputTokens,
  model: node.data.aiModel,
  aiSystem: node.data.AiSystem,
});

Error handling

The activity returns false if:
  • The API request fails
  • Token charging cannot be processed
  • The user has exceeded usage limits
  • The billing system is unavailable
The workflow should handle failures appropriately:
const success = await chargeTokensActivity({
  flowId,
  userId,
  inputTokens,
  outputTokens,
  model,
  aiSystem,
});

if (!success) {
  // Log the error but don't fail the workflow:
  // token charging failures are recorded but don't block execution.
  console.warn("chargeTokensActivity failed; continuing workflow");
}

Token calculation

Token counts are typically determined as follows:
  • Input tokens: Count of tokens in the prompt/messages sent to the model
  • Output tokens: Count of tokens in the response generated by the model
The AI model provider’s API usually returns these values as part of the response.
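As a sketch, those provider-reported values can be normalized into the inputTokens/outputTokens fields this activity expects. The field names below follow the publicly documented OpenAI (prompt_tokens/completion_tokens) and Anthropic (input_tokens/output_tokens) usage shapes; the helper functions themselves are hypothetical:

```typescript
// Sketch: normalizing per-provider usage objects into a common shape.

interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
}

// OpenAI responses report usage as { prompt_tokens, completion_tokens, ... }
function usageFromOpenAI(usage: { prompt_tokens: number; completion_tokens: number }): TokenUsage {
  return { inputTokens: usage.prompt_tokens, outputTokens: usage.completion_tokens };
}

// Anthropic responses report usage as { input_tokens, output_tokens }
function usageFromAnthropic(usage: { input_tokens: number; output_tokens: number }): TokenUsage {
  return { inputTokens: usage.input_tokens, outputTokens: usage.output_tokens };
}
```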

API integration

The activity communicates with the main API’s internal endpoint:
  • Endpoint: /worker/charge-tokens
  • Method: POST
  • Authentication: Internal API key (INTERNAL_API_KEY)
  • Request body: { flowId, userId, inputTokens, outputTokens, model, aiSystem }
  • Response: { success: boolean }
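The { success: boolean } response shape above can be validated defensively before use. This parser is a sketch, not the activity's actual code; treating a malformed response as a failed charge mirrors the activity's documented "return false on failure" behavior:

```typescript
// Sketch: defensive parsing of the /worker/charge-tokens response.

interface ChargeTokensResponse {
  success: boolean;
}

function parseChargeTokensResponse(raw: unknown): ChargeTokensResponse {
  if (
    typeof raw === "object" &&
    raw !== null &&
    typeof (raw as { success?: unknown }).success === "boolean"
  ) {
    return { success: (raw as { success: boolean }).success };
  }
  // Malformed or unexpected payloads count as a failed charge,
  // matching the activity's "return false" error-handling contract.
  return { success: false };
}
```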