Overview
The text generator node produces LLM output using the Vercel AI SDK (generateText). The Worker loads node configuration from Postgres, resolves the concrete model via system_ai_models, builds system/messages/tools, calls the provider, persists results on the node row, and returns token usage to the workflow for billing.
textGeneratorActivity delegates to TextGeneratorService.process.
For product and UI details (inputs, provider knobs, model capability tables), see AI Text Generator.
When it runs
textGeneratorActivity is invoked from the processSingleNode workflow when the node type is text generator (NodeType.TEXT_GENERATOR). The workflow typically calls validateModelAccessActivity before execution and chargeTokensActivity afterward using the returned token counts.
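The call order above can be sketched as plain async functions. This is a hypothetical illustration of the sequencing only: the real workflow runs these as Temporal activities, and the bodies below (including the one-unit-per-token charge) are stand-in stubs, not the production logic.

```typescript
// Sequencing sketch: validate access, generate, then charge from returned counts.
type TokenUsage = { inputTokens: number; outputTokens: number };

async function validateModelAccessActivity(userId: string, model: string): Promise<void> {
  // stub: the real activity checks whether the user may use this model
}

async function textGeneratorActivity(nodeId: string): Promise<TokenUsage & { output: string }> {
  // stub: the real activity delegates to TextGeneratorService.process
  return { output: "stub output", inputTokens: 120, outputTokens: 48 };
}

async function chargeTokensActivity(userId: string, usage: TokenUsage): Promise<number> {
  // hypothetical billing rule for the sketch: 1 unit per token, input + output
  return usage.inputTokens + usage.outputTokens;
}

async function runTextGeneratorNode(userId: string, nodeId: string, model: string) {
  await validateModelAccessActivity(userId, model);            // before execution
  const result = await textGeneratorActivity(nodeId);          // generate + persist
  const charged = await chargeTokensActivity(userId, result);  // after, from returned counts
  return { output: result.output, charged };
}
```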
Activity signature
textGeneratorActivity takes the target node id and returns a TextGeneratorResponse: the generated output plus the input/output token counts used for billing. All node configuration is read from flows_nodes.data inside TextGeneratorService.process.
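Inferred from the fields the activity is described as returning (output plus token counts from result.totalUsage), the response shape is roughly the following. The exact interface in the codebase may carry additional fields; this is a sketch, not the definitive declaration.

```typescript
// Hypothetical shape of TextGeneratorResponse, inferred from the described return values.
interface TextGeneratorResponse {
  output: string;       // generated text, also persisted to flows_nodes.text
  inputTokens: number;  // from result.totalUsage; consumed by chargeTokensActivity
  outputTokens: number; // from result.totalUsage
}

const example: TextGeneratorResponse = {
  output: "Hello from the model",
  inputTokens: 42,
  outputTokens: 7,
};
```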
End-to-end flow
- Load node → SELECT from flows_nodes by nodeId; read data (prompt, provider fields, memory flags, etc.).
- Resolve model field → From data.AiSystem, the service picks which property holds the model id (aiModel, groqModel, geminiModel, anthropicModel, or perplexityType). See getModelFieldByAiSystem in the service.
- Load model metadata → SELECT from system_ai_models where modelName matches that id (aiCompany, limits, isVisionModel, etc.).
- Instantiate model → selectModel maps aiCompany to a client (openai, anthropic, gemini, perplexity, xai, groq) and returns a LanguageModel. Anthropic ids are normalized via an internal map when the stored id matches known keys.
- Build request → buildConfig composes system, messages, optional web-search tools, and cleanedPrompt (used for memory). Persona and instructions append to system. Prompt lines come from promptData + prompt; context from contentData + content. Memory injects prior turns when memory is true and sessionId is present.
- Generate → generateText with provider-specific options from buildGenerateTextOptions (temperature, max tokens, stop sequences, Perplexity search context, OpenAI reasoning effort, etc.). autoToken interacts with maxOutputToken from system_ai_models.
- Persist → UPDATE flows_nodes with text, structured logs from buildAILog, executionStatus: "COMPLETED", nodeUsedTokens, appended previewResponses, and updated userSessionsChat when memory is enabled.
- Return → output, inputTokens, outputTokens from result.totalUsage.
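The model-field resolution step above can be sketched as a small mapping. This is an assumption-laden illustration: the real getModelFieldByAiSystem lives in TextGeneratorService, and its exact AiSystem values, casing, and fallback behavior may differ.

```typescript
// Sketch of resolving which node-data property holds the model id,
// based on the provider named in data.AiSystem.
type ModelField = "aiModel" | "groqModel" | "geminiModel" | "anthropicModel" | "perplexityType";

function getModelFieldByAiSystem(aiSystem: string): ModelField {
  switch (aiSystem.toLowerCase()) {
    case "groq":       return "groqModel";
    case "gemini":     return "geminiModel";
    case "anthropic":  return "anthropicModel";
    case "perplexity": return "perplexityType";
    // assumption: OpenAI and xAI share the generic aiModel field
    default:           return "aiModel";
  }
}
```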
Persisted node data highlights
| Field | Role |
|---|---|
| text | Model output string. |
| logs | Human-readable execution log (model id, tools used, search queries, URLs, token usage, finish reason). |
| executionStatus | Set to COMPLETED on success. |
| nodeUsedTokens | Input token count from usage (used for display; billing uses activity return values). |
| previewResponses | Previous outputs plus the new output appended. |
| userSessionsChat | When memory is true, the current session's memories gain new user/assistant entries from this run. |
Web search
When node.webSearch is true, buildConfig attaches provider-specific tools:
| AiSystem | Tool source (conceptually) |
|---|---|
| OpenAI | openai.tools.webSearch |
| Anthropic | anthropic.tools.webSearch_20250305 |
| Gemini | google.tools.googleSearch |
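The branch that picks a tool per provider can be illustrated as below. This is conceptual, mirroring the table above: the actual tool factories come from the Vercel AI SDK provider packages, and their exact names and versioned suffixes may differ from this sketch.

```typescript
// Conceptual sketch: which provider tool (if any) is attached for web search.
// Returns the tool's dotted path as a string for illustration; the real code
// attaches the tool object itself to the generateText `tools` option.
function webSearchToolKey(aiSystem: string): string | null {
  switch (aiSystem.toLowerCase()) {
    case "openai":    return "openai.tools.webSearch";
    case "anthropic": return "anthropic.tools.webSearch_20250305";
    case "gemini":    return "google.tools.googleSearch";
    default:          return null; // no web-search tool for other providers
  }
}
```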
Multimodal
If multimodalEnabled is true and system_ai_models.isVisionModel is true, URLs in the prompt are extracted; non-file URLs are stripped from the text sent as the prompt. File URLs are validated (type, and size via a HEAD request), capped at 3 files, and merged into messages as image or file parts depending on the provider. Limits include maximum sizes per type (images 5 MB, generic files 10 MB; PDF/video/audio rules as in code).
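A simplified sketch of that pre-processing, under stated assumptions: file URLs are recognized by extension, sizes arrive as a map (standing in for the HEAD requests, so the sketch stays offline), and only the image/generic limits from above are modeled. The real validation covers more types and rules.

```typescript
// Sketch: extract URLs, keep up to 3 size-valid file URLs, strip the rest from the prompt.
const MAX_FILES = 3;
const LIMITS = { image: 5 * 1024 * 1024, file: 10 * 1024 * 1024 };

function partitionUrls(prompt: string, sizes: Record<string, number>) {
  const urls = prompt.match(/https?:\/\/\S+/g) ?? [];
  const files = urls
    .filter((u) => /\.(png|jpe?g|gif|webp|pdf|mp4|mp3|wav)(\?|$)/i.test(u))
    .filter((u) => {
      const kind: "image" | "file" = /\.(png|jpe?g|gif|webp)(\?|$)/i.test(u) ? "image" : "file";
      return (sizes[u] ?? Infinity) <= LIMITS[kind]; // unknown size => reject
    })
    .slice(0, MAX_FILES); // cap at 3 files
  // non-file (and rejected) URLs are stripped from the text sent as the prompt
  const cleaned = urls
    .filter((u) => !files.includes(u))
    .reduce((p, u) => p.replace(u, "").trim(), prompt);
  return { files, cleaned };
}
```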
Session memory
When memory is true, sessionId is required (the service throws if it is missing). The service loads userSessionsChat for that session, applies processMemoryFilter using selectedType / selectedNumber / selectedIndex (first/last N entries or words, or all), and prepends those messages. After generation, if cleanedPrompt is non-empty, new user and assistant memory entries are appended for that session.
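The entry-based slicing can be sketched as follows. This is an assumption-level simplification: the real processMemoryFilter also supports word-based slicing and selectedIndex, which are omitted here.

```typescript
// Sketch: pick which prior chat entries to prepend, by selectedType.
type ChatEntry = { role: "user" | "assistant"; content: string };

function processMemoryFilter(
  entries: ChatEntry[],
  selectedType: "all" | "first" | "last",
  selectedNumber = 0,
): ChatEntry[] {
  switch (selectedType) {
    case "first": return entries.slice(0, selectedNumber); // first N entries
    case "last":  return entries.slice(-selectedNumber);   // last N entries
    default:      return entries;                          // "all"
  }
}
```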
Provider-specific generation options
buildGenerateTextOptions branches on modelInfo.aiCompany (lowercased). Typical mappings:
- Anthropic → anthropicToken, anthropicTemperature, anthropicTopP, anthropicTopK; max output resolved with autoToken vs system_ai_models.maxOutputToken.
- Gemini → geminiMaxOutputTokens, temperature, topP, topK.
- OpenAI → aiTokens, aiTemperature, aiPp (top P), aiFp (mapped to topK in options), optional aiEffort → providerOptions.openai.reasoningEffort.
- Perplexity → max_tokens / perplexityTokens, temperature, top P/K, search_context_size in provider options.
- XAI → aiTokens, aiTemperature.
- Groq → groqMaxTokens, temperature/topP (with fallbacks), seed, groqStop as stop sequences.
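The autoToken vs maxOutputToken interaction can be sketched under one plausible reading: autoToken means "use the model's cap from system_ai_models", while an explicit per-node value is clamped to that cap. The actual resolution in buildGenerateTextOptions may differ, so treat this as an illustration.

```typescript
// Sketch: resolve the max-output-token value sent to generateText.
function resolveMaxOutputTokens(
  autoToken: boolean,
  nodeTokens: number | undefined,
  modelMaxOutputToken: number, // from system_ai_models
): number {
  if (autoToken || nodeTokens === undefined) return modelMaxOutputToken;
  return Math.min(nodeTokens, modelMaxOutputToken); // never exceed the model cap
}
```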
Related documentation
- AI Text Generator → Field-level and model-matrix reference
- processSingleNode workflow
- Validate model access
- Charge tokens
- Worker overview