

Nodes are the building blocks of a SuperFlow. The left node palette in the editor groups them by category — Control Flow, Data Transform, I/O & Compute, Utility, AI, Document, and Human-in-the-Loop. Use the search box at the top of the palette to filter, then drag a node onto the canvas (or click to drop it at the center). This page covers the 10 most-used nodes in detail. The full catalog of remaining nodes is in the quick reference table at the bottom.
Every node is durably journaled. Once a node completes successfully, its output is recorded; the engine will never re-execute it during the same run, even if the service restarts. Nodes that talk to external systems (AI Agent, LLM, HTTP Request, Tool, document nodes) also support automatic retry on failure — configure max attempts and backoff in the Retry section of the node’s config drawer. See Reliability for the full guarantees.
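
The retry-with-backoff behaviour described above can be pictured with a short sketch. This is illustrative only — the engine's real retry logic is whatever you configure in the node's Retry drawer; the `maxAttempts` and `baseDelayMs` defaults here are assumptions, not platform values.

```javascript
// Minimal sketch of retry-with-exponential-backoff for a flaky external call.
// (Illustrative only — maxAttempts and baseDelayMs are assumed defaults.)
async function withRetry(fn, maxAttempts = 3, baseDelayMs = 500) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err;
      // Exponential backoff: 500ms, 1000ms, 2000ms, ...
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```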

Trigger

What it does. Every SuperFlow starts with exactly one Trigger node. It defines the shape of the input data the workflow accepts and decides how the workflow gets started.

When to use it. Always — your SuperFlow won’t run without one. Configure it once to declare your input fields, then choose how runs are initiated.

Key parameters.
  • Trigger mode — choose Manual, Webhook, or Schedule. See Triggers & schedules for the full guide.
  • Webhook secret — used when the trigger mode is Webhook. External callers must include the matching secret in the X-Webhook-Secret header.
  • Schedule — when the mode is Schedule, a visual cron builder lets you pick frequency, time, and timezone.
Output. Downstream nodes can reference Trigger fields with expressions like {{ $('Trigger').json.customer_message }}.
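
When the trigger mode is Webhook, an external caller starts a run by POSTing the declared input fields along with the secret header. A minimal sketch of building that request — the field name `customer_message` and the endpoint URL shape are hypothetical examples, not fixed platform values:

```javascript
// Sketch of an external caller starting a webhook-triggered run.
// (The input field names are hypothetical; X-Webhook-Secret must match
// the secret configured on the Trigger node.)
function buildWebhookRequest(secret, input) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Webhook-Secret": secret,
    },
    body: JSON.stringify(input),
  };
}

// e.g. fetch(webhookUrl, buildWebhookRequest(secret, { customer_message: "Hi" }))
```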

AI Agent

What it does. Runs an existing Lyzr agent as a step in your SuperFlow. The full agent — its model, system prompt, tools, knowledge bases, memory — is reused exactly as configured in the Agents section of Studio. The node receives the upstream data, runs the agent’s reasoning/tool loop, and returns its response.

When to use it. When you’ve already built (or want to build) a reusable agent in Studio and want to drop it into a workflow. Common use cases: classification, content generation, multi-tool task execution, customer reply drafting — anything where the same agent might be called from multiple workflows or directly. The AI Agent node is only for reusing existing agents. If you want a single model call configured inline — system prompt, model, query, no tools — use the LLM node instead.

Key parameters.
  • Agent — pick an existing agent from the dropdown. Its agent ID is filled in automatically and all of its configuration is reused as-is.
  • Query — the input you want to send to the agent. This is almost always an expression like {{ $('Trigger').json.question }}.
  • Run as sub-agent — when this AI Agent node is connected downstream of another LLM-driven node, enabling this turns it into a callable tool of the parent, instead of running it as a separate DAG step.
Output. The agent’s response is emitted as the node’s output.

LLM

What it does. Calls a model inline on the node — pick a provider/model, write a system prompt, supply a query, get a response. By default it’s a single, one-shot model call: no tools, no loop.

But it can do more. If you connect downstream LLM, AI Agent, or Tool nodes and set Run as sub-agent on them, those downstream nodes become tools that this LLM can call. The LLM then runs in an agentic ReAct loop — reasoning, picking sub-agents/tools to call, getting their responses, and continuing — until it decides it’s done. This is how you build hierarchical agents (a planner LLM coordinating worker LLMs / agents / tools) without building a separate agent definition.

When to use it. For pure text transformations: summarize, rewrite, classify, extract, or format. Or — when wired with sub-agent nodes downstream — as the brain of a multi-step, multi-tool workflow. The LLM node is the node to reach for when you want the agent loop but don’t want to bounce out to the Agents section to build a reusable agent first.

Key parameters.
  • Provider / Model / Credentials — pick the model to call.
  • System prompt — the instructions for the LLM, written directly on the node. Use the Generate with AI button to draft one from a description.
  • Query — the user-side prompt, usually an expression pulling from upstream. Leave empty to auto-pick a message from common field names on the input (message, query, input, etc.).
  • Temperature, Max tokens — standard generation controls.
  • Run as sub-agent — flip this on when this LLM node is itself meant to be a tool of an upstream LLM/AI Agent. When enabled, the node is skipped in the main DAG and runs only when the upstream calls it.
Output. The model’s response — a string by default. If the LLM ran an agentic loop with sub-agents, the response is the final answer after the loop terminates.

AI Agent vs LLM at a glance. Use AI Agent to reuse a fully-built Lyzr agent (with its tools, knowledge bases, memory, and system prompt). Use LLM when you want the model configured right there on the node — as a single call, or as the head of a downstream sub-agent loop.

HTTP Request

What it does. Makes an HTTP call to any URL and returns the response. The bridge between your SuperFlow and the outside world.

When to use it. Calling a third-party API, posting to a webhook, hitting your own backend, fetching data to feed into a downstream agent.

Key parameters.
  • Method — GET, POST, PUT, PATCH, DELETE.
  • URL — the endpoint. Expressions are supported, so you can build URLs from upstream data (https://api.example.com/users/{{ $json.user_id }}).
  • Headers — key/value pairs. Common ones: Authorization, Content-Type.
  • Query parameters — key/value pairs appended to the URL.
  • Body — JSON, form, or raw text. Expression-aware, so any field can be templated from upstream nodes.
Output. The response body (parsed as JSON if it is JSON), plus status code and response headers.
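
The `{{ … }}` expression syntax used in the URL, headers, and body fields behaves roughly like string interpolation over the upstream item. A simplified sketch of that substitution — illustrative only, not the platform's actual expression engine, which supports far more than plain field lookups:

```javascript
// Rough sketch of {{ $json.field }} substitution in a URL template.
// (Illustrative only — not the platform's real expression engine.)
function renderTemplate(template, $json) {
  return template.replace(/\{\{\s*\$json\.(\w+)\s*\}\}/g, (_, field) =>
    encodeURIComponent($json[field]),
  );
}

const url = renderTemplate(
  "https://api.example.com/users/{{ $json.user_id }}",
  { user_id: 42 },
);
// url === "https://api.example.com/users/42"
```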

Code

What it does. Runs JavaScript inline. The Code node opens a Monaco editor in the config drawer (with an Expand button for a full-screen editor) where you can write arbitrary JS to transform data, do math, run regex, or anything else that’s awkward to express with the visual nodes.

When to use it. Quick data shaping that no other node does cleanly — combining fields, generating IDs, custom date formatting, filtering with custom logic, etc.

Key parameters.
  • Code — the JavaScript. The following globals are available:
    • $input.all() — the full list of input items
    • $input.first() — the first input item
    • $json — the first input item’s json payload (shorthand)
    • $items — alias for the full list
  • Timeout — code is sandboxed and capped at 10 seconds.
The Code node does not use the return keyword. The last expression evaluated in your code is automatically used as the output. Don’t write return { foo: 'bar' } — write { foo: 'bar' } as the final line, or assign to a variable and reference it last. Using return outside of a function will error.
Output. The value of the last expression evaluated. Emit an object or array of objects for downstream nodes to consume.
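
Putting the last-expression rule together with the `$json` shorthand, a Code-node body might look like the sketch below. The `$json` global is provided by the runtime; it is stubbed here (with made-up field names) so the snippet runs standalone.

```javascript
// Sketch of a Code-node body. In the real node, $json is injected by the
// runtime; it is stubbed here with hypothetical fields so the snippet runs.
const $json = { first_name: "Ada", last_name: "Lovelace", scores: [3, 5, 8] };

const fullName = `${$json.first_name} ${$json.last_name}`;
const total = $json.scores.reduce((sum, n) => sum + n, 0);

// No `return` — the last expression evaluated becomes the node's output.
({ fullName, total });
```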

If

What it does. Binary conditional branch. Evaluates a condition on the input data and routes the items to one of two outputs: true (output 0) or false (output 1).

When to use it. Splitting flow based on a value — “is the customer priority urgent?”, “did the LLM say yes or no?”, “is the response longer than 200 characters?”.

Two modes.
  • Rule mode (default) — a visual condition builder. Each row is left operand · operator · right operand. Both operands can be literals or expressions, and multiple rows are joined with AND / OR. Best for deterministic checks: numeric comparisons, exact string matches, field presence.
  • AI mode — write the condition as a natural-language statement (for example, “the customer message expresses frustration” or “the document mentions a refund request”). An LLM evaluates the statement against the upstream data and returns true or false. Best for fuzzy, judgment-call routing that’s awkward to express as a strict rule.
Key parameters.
  • Mode — Rule or AI.
  • Conditions (Rule mode) — rows of left · operator · right. Combine with AND / OR when there’s more than one.
  • Condition statement (AI mode) — the natural-language condition for the LLM to evaluate. Use expressions to include upstream context (for example, Is "{{ $json.message }}" expressing frustration?).
Output. Two output handles. Items that match the condition go out of the top (true) handle; the rest go out of the bottom (false) handle. Connect each handle to its own downstream chain.
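
Rule-mode evaluation can be pictured with a small sketch: each row is left · operator · right, and the row results are joined with the chosen combinator. This is an illustrative subset — the real builder offers more operators than the three shown here.

```javascript
// Simplified sketch of Rule-mode evaluation.
// (Illustrative subset — the real condition builder has more operators.)
function evaluateRules(rows, combinator) {
  const ops = {
    equals: (l, r) => l === r,
    "greater than": (l, r) => l > r,
    contains: (l, r) => String(l).includes(r),
  };
  const results = rows.map(({ left, op, right }) => ops[op](left, right));
  return combinator === "AND" ? results.every(Boolean) : results.some(Boolean);
}

const item = { priority: "urgent", length: 250 };
const isTrueBranch = evaluateRules(
  [
    { left: item.priority, op: "equals", right: "urgent" },
    { left: item.length, op: "greater than", right: 200 },
  ],
  "AND",
);
// isTrueBranch === true → the item exits the top (true) handle
```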

Set

What it does. Reshapes the data. You define key/value pairs, and the Set node emits items with those fields populated.

When to use it. Renaming fields, adding constants, building a clean payload for the next node, “stamping” a record with metadata before saving it.

Key parameters.
  • Assignments — a list of key = value rows. The value can be a literal, a field reference (use the picker), or any expression.
Output. Items shaped exactly as your assignments specify.
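
Conceptually, Set applies your assignment rows to each item and emits the result. A minimal sketch, with made-up field names — a function stands in for an expression, a plain value for a literal:

```javascript
// Sketch of what Set does: apply key = value assignments to each item.
// (Illustrative — the field names here are hypothetical.)
function applySet(items, assignments) {
  return items.map((item) => {
    const out = {};
    for (const [key, value] of Object.entries(assignments)) {
      out[key] = typeof value === "function" ? value(item) : value;
    }
    return out;
  });
}

const shaped = applySet([{ first: "Ada", last: "Lovelace" }], {
  full_name: (item) => `${item.first} ${item.last}`, // expression
  source: "superflow", // constant
});
// shaped[0] => { full_name: "Ada Lovelace", source: "superflow" }
```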

Loop

What it does. Iterates over the input items. For every item (or every batch of items), the body of the loop runs once and its outputs accumulate.

When to use it. Processing a list — for example, looping over rows from an HTTP response and running an agent on each, or batching 100 emails into groups of 10 for an external API.

Key parameters.
  • Mode — each (run the body once per input item) or batches (group input items into chunks).
  • Batch size — when mode is batches, how many items per group.
  • Loop body — the nodes between the Loop node and its loop-back edge form the body. Each iteration runs the entire body before moving to the next item or batch.
Output. The combined outputs of every iteration, available downstream of the loop.
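
The batching behaviour can be sketched as chunking the input and running the body once per chunk — illustrative only, with a stand-in function for the loop body:

```javascript
// Sketch of `batches` mode: group input items into chunks of batchSize,
// run the body once per chunk, and concatenate the iteration outputs.
// (Illustrative — `body` stands in for the nodes inside the loop.)
function runLoopInBatches(items, batchSize, body) {
  const outputs = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    outputs.push(...body(batch));
  }
  return outputs;
}

const emails = Array.from({ length: 25 }, (_, i) => `user${i}@example.com`);
const results = runLoopInBatches(emails, 10, (batch) => [{ sent: batch.length }]);
// results => [{ sent: 10 }, { sent: 10 }, { sent: 5 }]
```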

Wait for Approval

What it does. Pauses the SuperFlow and asks a human to approve or reject before continuing. The run stays paused (durably — restarts won’t lose it) until someone responds.

When to use it. Anywhere you want a safety gate: before sending an email an LLM drafted, before charging a customer, before deleting records, before publishing content.

Key parameters.
  • Approval message — what the approver sees. Markdown is supported, and expressions are encouraged so the message can show upstream context. For example: Please review the proposed reply to {{ $json.customer_email }}: {{ $json.draft_reply }}.
Output. Two output handles:
  • Output 0 (approved) — items continue down this path when a human approves. The original input items flow through unchanged.
  • Output 1 (rejected) — items continue down this path when a human rejects.
Approvers act from the Approvals drawer in the editor, or from any external system you wire up to the resume API.

Execute Workflow

What it does. Calls another SuperFlow as a single step. The sub-workflow runs to completion, and its final output is emitted as this node’s output.

When to use it. Reuse — extract a common sub-flow (a “send notification” pipeline, a “validate customer” pipeline) into its own SuperFlow and call it from many parents.

Key parameters.
  • Source — either pick an existing SuperFlow from the dropdown (recommended) or paste an inline workflow JSON.
  • Input — the data passed to the sub-workflow’s Trigger node.
Output. Whatever the sub-workflow’s final node returned.

Quick reference table

The remaining nodes — useful but typically less central to a first SuperFlow. Each is one drag away from the same node palette.
  • Switch (Control Flow) — Multi-way conditional. Routes items to one of several outputs based on which case matches first.
  • Merge (Control Flow) — Combines outputs from multiple upstream branches back into a single stream.
  • Filter (Control Flow) — Drops items that don’t satisfy a condition; items that do pass through unchanged.
  • Stop and Error (Control Flow) — Halts execution with a custom error message. Useful inside a conditional path you want to fail loudly.
  • Wait (Control Flow) — Pauses for a fixed duration (seconds, minutes, hours, or days) before continuing.
  • NoOp (Control Flow) — Pass-through. Acts as a junction for cleaner graphs.
  • Aggregate (Data Transform) — Rolls up data across items into a single summary item.
  • Sort (Data Transform) — Reorders items by a field.
  • Limit (Data Transform) — Takes the first N items, drops the rest.
  • Remove Duplicates (Data Transform) — Deduplicates items by a field or set of fields.
  • Rename Keys (Data Transform) — Renames fields without otherwise changing the data.
  • DateTime (Utility) — Parses, formats, or shifts dates between timezones.
  • Crypto (Utility) — Hashes, signs, or encodes data (SHA, HMAC, base64, etc.).
  • XML (Utility) — Converts between XML and JSON.
  • AI Swarm (AI) — Breaks a complex query into sub-tasks, runs an agent on each in parallel, then aggregates the results into a single answer.
  • Tool (AI) — Calls a Lyzr platform tool directly, without an agent and without an LLM in the loop.
  • Parse (Document) — Extracts text from PDFs, DOCX, images, and other documents. Tiers from basic OCR to vision-LLM-assisted parsing.
  • Extraction (Document) — Pulls structured fields out of a document using a JSON schema you supply.
  • Label (Document) — Classifies text or a document against a set of rules and returns the matching label.
For details on any node, hover its entry in the palette in the editor — every node ships with inline parameter descriptions next to each field.