Overview

Agents in RunTools are AI entities that run inside sandboxes. Each agent has:
  • Model — Which LLM to use (Claude, GPT, Gemini, etc.)
  • System Prompt — Instructions for the agent
  • Tools — Core tools (always included) + marketplace/custom tools
  • Sandbox — The compute environment (required on dashboard, optional in SDK for local mode)

Defining Agents

Via Code (SDK)

agents/code-assistant.ts
import { defineAgent } from '@runtools/sdk';

export default defineAgent({
  slug: 'code-assistant',
  name: 'Code Assistant',
  model: 'claude-sonnet-4',
  systemPrompt: `You are an expert software engineer.
You help users build applications by writing clean, well-documented code.
Always explain your reasoning before making changes.`,
  tools: ['bash', 'read_file', 'edit_file', 'grep', 'web_search'],
});
Deploy with:
runtools deploy

Via Dashboard

  1. Go to Dashboard > Agents > New Agent
  2. Fill in name, slug, model, system prompt
  3. Select tools (core tools are always included)
  4. Select a sandbox
  5. Click Deploy

Via CLI

# Run an agent
runtools agent run code-assistant --prompt "Create a REST API"

# List agents
runtools agent list

# Get agent details
runtools agent get code-assistant

# Delete agent
runtools agent delete code-assistant

Core Tools (Always Included)

Every agent gets these built-in tools. They cannot be removed:
Tool            Description
bash            Execute shell commands in the sandbox
read_file       Read file contents with an optional line range
edit_file       Edit files with string replacement
write_file      Write new files
delete_file     Delete files
list_directory  List directory contents
file_search     Find files by name pattern
grep            Search file contents using ripgrep
web_search      Search the web via Exa
get_dev_url     Get a public URL for a dev server port

Adding Marketplace Tools

Install tools from the marketplace, then add them to your agent:
# Install a tool
runtools tool install github

# Store credentials
runtools tool credentials github --json '{"token": "ghp_xxx"}'
Then reference it in your agent:
export default defineAgent({
  slug: 'github-bot',
  tools: ['bash', 'read_file', 'edit_file', 'github'],
  // ...
});
Available marketplace tools: github, gmail, slack. More coming soon.

Model Support

RunTools is model-agnostic. The runtime resolves model aliases to AI SDK providers:
Provider    Models
Anthropic   claude-opus-4.6, claude-opus-4.5, claude-sonnet-4.5, claude-sonnet-4, claude-haiku
OpenAI      gpt-5.2, gpt-5.2-pro, gpt-5, gpt-5-mini, gpt-5-nano, gpt-4.1, o4-mini
Google      gemini-3-pro, gemini-3-flash, gemini-2-flash
xAI         grok-4, grok-3
Mistral     mistral-large
DeepSeek    deepseek, deepseek-reasoner
Groq        llama-4, llama-3.3-70b
Raw AI SDK model IDs are also accepted as pass-through.
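The alias resolution above can be sketched as a simple lookup with pass-through fallback. The mapping below is illustrative only (alias and provider names taken from the table), not the runtime's actual implementation:

```typescript
// Hypothetical sketch of alias resolution: map a RunTools model alias
// to an AI SDK provider/model ID. Unknown aliases pass through unchanged,
// so raw AI SDK model IDs also work.
const MODEL_ALIASES: Record<string, string> = {
  'claude-sonnet-4': 'anthropic/claude-sonnet-4',
  'gpt-5': 'openai/gpt-5',
  'gemini-3-pro': 'google/gemini-3-pro',
  // ...remaining aliases from the table above
};

function resolveModel(alias: string): string {
  // Fall back to pass-through for raw AI SDK model IDs
  return MODEL_ALIASES[alias] ?? alias;
}
```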

Advanced Configuration

All AI SDK parameters are supported:
export default defineAgent({
  slug: 'precise-coder',
  model: 'claude-sonnet-4',
  systemPrompt: '...',
  tools: ['bash', 'read_file', 'edit_file'],
  
  // AI SDK parameters
  maxIterations: 50,        // Max agentic loop steps (default 25, max 100)
  maxTokens: 8192,          // Max tokens per LLM call
  temperature: 0.3,         // 0-1
  topP: 0.9,                // Nucleus sampling
  topK: 50,                 // Top-k sampling
  presencePenalty: 0.1,     // Repetition control
  frequencyPenalty: 0.1,    // Repetition control
  seed: 42,                 // Deterministic generation
  toolChoice: 'auto',       // 'auto' | 'none' | 'required'
  
  // Provider-specific options
  providerOptions: {
    anthropic: {
      cacheControl: true,
      thinking: { type: 'enabled', budgetTokens: 10000 },
    },
    openai: {
      reasoningEffort: 'high',
      parallelToolCalls: true,
    },
  },
});

The Agent Loop

When you run an agent, the runtime executes a streaming loop:
  1. Prompt — User message + system prompt sent to the model
  2. Think — Model analyzes context, decides next action
  3. Tool Call — If needed, executes a tool and adds result to context
  4. Repeat — Continues until model returns a final text response
The loop runs inside the sandbox VM via the runtools-runtime (baked into the rootfs). It uses AI SDK 6 (streamText()) with manual tool execution.
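The four steps above can be sketched in simplified form. The model callback and tool registry here are hypothetical stand-ins for the real streamText() call and core tools; only the control flow mirrors the description:

```typescript
// Simplified sketch of the agentic loop: call the model, execute any
// requested tool, feed the result back into context, and stop on a
// final text reply or when maxIterations is exhausted.
type ModelReply =
  | { type: 'text'; text: string }
  | { type: 'tool-call'; tool: string; input: string };

function runAgentLoop(
  model: (context: string[]) => ModelReply,
  tools: Record<string, (input: string) => string>,
  userMessage: string,
  maxIterations = 25, // matches the documented default
): string {
  const context: string[] = [`user: ${userMessage}`];
  for (let step = 0; step < maxIterations; step++) {
    const reply = model(context);          // Think
    if (reply.type === 'text') {
      return reply.text;                   // final response ends the loop
    }
    const result = tools[reply.tool](reply.input); // Tool Call
    context.push(`tool-result(${reply.tool}): ${result}`); // Repeat
  }
  throw new Error('maxIterations exceeded');
}
```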

Concurrency

One agent run at a time per sandbox. If a run is active, new requests are rejected with an error.
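On the client side, this one-run-per-sandbox rule means you may want a small guard before dispatching requests. A sketch (not part of the SDK):

```typescript
// Hypothetical client-side guard mirroring the server's one-run-per-sandbox
// rule: a second run for the same sandbox is rejected while one is in flight.
class SandboxRunGuard {
  private active = new Set<string>();

  run<T>(sandboxId: string, fn: () => Promise<T>): Promise<T> {
    if (this.active.has(sandboxId)) {
      throw new Error(`sandbox ${sandboxId} already has an active run`);
    }
    this.active.add(sandboxId);
    // Release the slot whether the run succeeds or fails
    return fn().finally(() => this.active.delete(sandboxId));
  }
}
```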

Events Streamed

During execution, SSE events are emitted:
  • text-delta — Streaming text output
  • tool-call — Tool being invoked (name + input)
  • tool-result — Tool output
  • thinking — Model reasoning (if enabled)
  • error — Error occurred
  • done — Run complete
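A minimal parser for this stream might look like the following. The event:/data: line format is standard SSE; the exact payload shapes are assumptions:

```typescript
// Parse a raw SSE buffer into { event, data } records. Events are
// separated by blank lines; the JSON payload shapes are illustrative.
interface SseEvent {
  event: string;
  data: unknown;
}

function parseSse(buffer: string): SseEvent[] {
  const events: SseEvent[] = [];
  for (const block of buffer.split('\n\n')) {
    let event = 'message';
    let data = '';
    for (const line of block.split('\n')) {
      if (line.startsWith('event:')) event = line.slice(6).trim();
      else if (line.startsWith('data:')) data += line.slice(5).trim();
    }
    if (data) events.push({ event, data: JSON.parse(data) });
  }
  return events;
}
```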

Running Agents

Via API

curl -X POST https://api.runtools.ai/v1/run \
  -H "Authorization: Bearer rt_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{"agentSlug": "code-assistant", "message": "Create a todo app"}'

Via CLI

runtools agent run code-assistant --prompt "Create a todo app"

Via SDK

// Coming soon: rt.agents.run()
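Until rt.agents.run() lands, you can call the HTTP endpoint from code. The URL, headers, and payload below come from the curl example above; the helper itself is a hypothetical sketch, split from the fetch call so the request construction is testable:

```typescript
// Build the fetch arguments for the run endpoint shown in the curl example.
function buildRunRequest(agentSlug: string, message: string, apiKey: string) {
  return {
    url: 'https://api.runtools.ai/v1/run',
    init: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ agentSlug, message }),
    },
  };
}

// Usage:
// const { url, init } = buildRunRequest('code-assistant', 'Create a todo app', apiKey);
// const res = await fetch(url, init);
```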

Workflows (Coming Soon)

Workflows orchestrate multiple agents in a DAG (directed acyclic graph). Build them visually in the workflow editor:
  • Drag existing agents onto the canvas as nodes
  • Connect them with edges to define execution flow
  • Each agent can run in its own sandbox or share a global sandbox
  • Deploy workflows as API endpoints: POST /v1/workflows/:slug/run

Best Practices

  • System prompt: Be explicit about what the agent should do, how it should behave, and which tools to use. Include examples.
  • Tools: Don’t worry about adding bash, read_file, etc. — they’re always available. Only add the marketplace/custom tools you need.
  • Max iterations: The default is 25. Increase it for complex tasks, but cap it at a reasonable limit to prevent runaway agents.
  • Secrets: Use runtools secret set ANTHROPIC_API_KEY sk-xxx instead of hardcoding keys.