The execute() function is the primary way your agent responds to users. It starts an AI loop that iterates until the model reaches a conclusion. On each iteration, the model can read the conversation, call tools, and generate code. The loop ends when the model sends a message to the user (and waits for a reply), triggers an exit, or hits the iteration limit (which defaults to 10 and is configurable). The await resolves once the loop finishes.
Under the hood, execute() is powered by LLMz.

Basic usage

export default new Conversation({
  channel: "webchat.channel",
  handler: async ({ execute }) => {
    await execute({
      instructions: "You are a helpful customer support agent.",
    })
  },
})
The model reads the full conversation transcript, follows your instructions, and responds. If tools are available, it can call them before responding.

Instructions

Pass instructions into the instructions field to tell the model how to behave. They can be a static string or a function that returns a string:
// String
await execute({
  instructions: "You are a helpful assistant that speaks formally.",
})

// Function that returns a string
await execute({
  instructions: () => {
    const hour = new Date().getHours()
    return hour < 12
      ? "You are a cheerful morning assistant."
      : "You are a calm evening assistant."
  },
})
Instructions are evaluated fresh on each execution. Use a function when you need dynamic behavior based on state, time, or other context.
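Because `instructions` accepts a function, the time-of-day logic above can live in a small pure helper that is easy to unit-test on its own (the `greetingFor` name is ours, not part of the API):

```typescript
// Pure helper extracted from the example above; pass it to execute() as
// `instructions: () => greetingFor(new Date().getHours())`.
const greetingFor = (hour: number): string =>
  hour < 12
    ? "You are a cheerful morning assistant."
    : "You are a calm evening assistant."
```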

Knowledge

Pass knowledge bases into the knowledge field to give the model access to your documents:
import { DocsKB } from "../knowledge/docs"
import { FaqKB } from "../knowledge/faq"

await execute({
  instructions: "Answer questions using the documentation.",
  knowledge: [DocsKB, FaqKB],
})
The model automatically searches the knowledge bases when it needs information. Results include citations that trace back to the source documents.
For more information on defining knowledge bases, see the Knowledge base documentation.

Tools

Give the model functions it can call by passing them into the tools field:
import { getWeather } from "../tools/weather"
import { createTicket } from "../tools/ticket"

await execute({
  instructions: "You are a helpful assistant.",
  tools: [getWeather, createTicket],
})
The model decides when to call a tool based on the conversation. For a full guide on defining tools, see Define Tools.

Exits

Pass in exits to let the model end execution with a structured result:
import { Autonomous, z } from "@botpress/runtime"

const handoff = new Autonomous.Exit({
  name: "handoffToHuman",
  description: "Transfer the conversation to a human agent",
  schema: z.object({
    reason: z.string(),
    priority: z.enum(["low", "medium", "high"]),
  }),
})

const result = await execute({
  instructions: "Help the user. If you can't resolve their issue, hand off to a human.",
  exits: [handoff],
})

if (result.exit?.name === "handoffToHuman") {
  const { reason, priority } = result.exit.value
  // Route to human agent
}
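Because the exit payload is validated against the schema, downstream routing can be a small pure function. A sketch (the `queueFor` helper and the queue names are illustrative, not part of the runtime):

```typescript
// Hypothetical router for the handoff payload above: maps the validated
// `priority` field to a support queue name (queue names are made up).
const queueFor = (priority: "low" | "medium" | "high"): string =>
  priority === "high" ? "urgent-support" : "general-support"

queueFor("high") // → "urgent-support"
queueFor("low")  // → "general-support"
```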

Model override

You can override the default model for a specific execution:
await execute({
  model: "openai:gpt-4.1-2025-04-14",
  instructions: "You are a helpful assistant.",
})
You can also pass an array for fallback:
await execute({
  model: ["cerebras:gpt-oss-120b", "openai:gpt-4.1-2025-04-14"],
  instructions: "You are a helpful assistant.",
})
The default model is set in agent.config.ts under defaultModels.autonomous. You can also browse and change models from the dev console under Settings > LLM Config.
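For reference, the config entry might look like this (a sketch: only the `defaultModels.autonomous` key comes from the docs above; the surrounding file shape is an assumption):

```typescript
// agent.config.ts (sketch) — everything except defaultModels.autonomous
// is assumed, not confirmed by these docs.
export default {
  defaultModels: {
    autonomous: "openai:gpt-4.1-2025-04-14",
  },
}
```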

Temperature and reasoning

To control the model’s temperature and reasoning effort, use the temperature and reasoningEffort props:
await execute({
  instructions: "You are a helpful assistant.",
  temperature: 0.3,
  reasoningEffort: "high",
})
| Option | Default | Description |
| --- | --- | --- |
| `temperature` | `0.7` | Controls randomness. Lower values are more deterministic. |
| `reasoningEffort` | none | One of `"low"`, `"medium"`, `"high"`, `"dynamic"`, or `"none"`. Only applies to models that support reasoning. |

Iterations

The model can loop multiple times in a single execution (e.g., call a tool, read the result, call another tool, then respond). The iterations prop controls the maximum number of loops:
await execute({
  instructions: "You are a research assistant.",
  iterations: 20,
})
The number of iterations defaults to 10 and is clamped between 1 and 100.
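The clamping behavior can be pictured with a one-line helper (illustration only, not the runtime's actual implementation):

```typescript
// Out-of-range values are pulled back into [1, 100]; in-range values
// pass through unchanged.
const clampIterations = (n: number): number => Math.min(100, Math.max(1, n))

clampIterations(0)   // → 1
clampIterations(20)  // → 20
clampIterations(500) // → 100
```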

Cancellation

Pass an AbortSignal to cancel execution mid-loop. Useful when the caller needs to bail out (e.g. a timeout or the user navigating away):
const controller = new AbortController()

setTimeout(() => controller.abort(), 30_000)

await execute({
  instructions: "You are a helpful assistant.",
  signal: controller.signal,
})
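As an aside, if all you need is a deadline and your runtime is Node 17.3+ (or a modern browser), the standard `AbortSignal.timeout()` can replace the manual controller; the resulting signal is passed as `signal` exactly like `controller.signal` above:

```typescript
// AbortSignal.timeout(ms) aborts automatically after the given delay,
// so no AbortController or setTimeout bookkeeping is needed.
const signal = AbortSignal.timeout(30_000)

// Would be passed as `signal` in execute({ ... }); immediately after
// creation it has not yet fired:
console.log(signal.aborted) // false
```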

Mode

Execution runs in chat mode by default, where the model sends messages to the user. You can switch to worker mode for background processing:
await execute({
  mode: "worker",
  instructions: "Analyze the data and update the cart object.",
  objects: [cart],
})
In worker mode, the model executes without sending messages to the conversation.

Hooks

Hooks let you observe and intercept execution at key moments:
await execute({
  instructions: "You are a helpful assistant.",
  hooks: {
    onBeforeTool: async ({ tool, input, controller }) => {
      console.log(`Calling tool: ${tool.name}`, input)
      // Return { input: modifiedInput } to change the input
      // Call controller.abort() to cancel the tool call
    },

    onAfterTool: async ({ tool, input, output }) => {
      console.log(`Tool ${tool.name} returned:`, output)
      // Return { output: modifiedOutput } to change the output
    },

    onTrace: ({ trace, iteration }) => {
      // Log or monitor execution traces
    },

    onIterationStart: async (iteration, controller, context) => {
      // Runs before each iteration
    },

    onIterationEnd: async (iteration, controller) => {
      // Runs after each iteration
      // Use controller to stop execution early
    },

    onBeforeExecution: async (iteration, controller) => {
      // Runs before code execution
      // Return { code: modifiedCode } to change what gets executed
    },

    onExit: async (result) => {
      // Runs when the model triggers an exit
    },
  },
})

Debugging execution

To see exactly what happened during an execution (which tools were called, what the model generated, how many iterations it took, and any errors), see Debug with logs and traces.
[Image: Traces view in the dev console]
Last modified on April 24, 2026