Overview
Agents in RunTools are AI entities that run inside sandboxes. Each agent has:
- Model — Which LLM to use (Claude, GPT, Gemini, etc.)
- System Prompt — Instructions for the agent
- Tools — Core tools (always included) + marketplace/custom tools
- Sandbox — The compute environment (required on dashboard, optional in SDK for local mode)
Defining Agents
Via Code (SDK)
agents/code-assistant.ts
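The code block for this file did not survive; below is a minimal sketch of what agents/code-assistant.ts might contain. The defineAgent helper and its field names are assumptions modeled on the docs above, not the real RunTools SDK surface, so a local stand-in is used to keep the example self-contained.

```typescript
// Hypothetical stand-in for the RunTools SDK's agent-definition helper.
// In a real project this would be imported from the SDK package.
interface AgentDef {
  name: string;
  slug: string;
  model: string;        // model alias, e.g. "claude-sonnet-4.5"
  systemPrompt: string; // instructions for the agent
  tools?: string[];     // marketplace/custom tools; core tools are implicit
  sandbox?: string;     // sandbox slug; optional in SDK local mode
}

function defineAgent(def: AgentDef): AgentDef {
  return def;
}

const agent = defineAgent({
  name: "Code Assistant",
  slug: "code-assistant",
  model: "claude-sonnet-4.5",
  systemPrompt: "Help the user write and debug code in this repository.",
  tools: ["github"], // core tools (bash, read_file, ...) are always included
  sandbox: "default",
});
```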
Via Dashboard
- Go to Dashboard > Agents > New Agent
- Fill in name, slug, model, system prompt
- Select tools (core tools are always included)
- Select a sandbox
- Click Deploy
Via CLI
Core Tools (Always Included)
Every agent gets these built-in tools. They cannot be removed:

| Tool | Description |
|---|---|
| bash | Execute shell commands in the sandbox |
| read_file | Read file contents with optional line range |
| edit_file | Edit files with string replacement |
| write_file | Write new files |
| delete_file | Delete files |
| list_directory | List directory contents |
| file_search | Find files by name pattern |
| grep | Search file contents using ripgrep |
| web_search | Search the web via Exa |
| get_dev_url | Get public URL for a dev server port |
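To make the tool contract concrete, here is a hedged sketch of a read_file call with an optional line range. The payload shape (tool + input) is an assumption, not RunTools' documented wire format, and the tool itself is mocked locally so the example runs on its own.

```typescript
// Hypothetical shape of a tool call as the runtime might pass it to a
// core tool. Field names are illustrative only.
interface ToolCall {
  tool: string;
  input: Record<string, unknown>;
}

// Local mock of the read_file tool: returns contents limited to an
// optional 1-indexed, inclusive line range.
function readFileMock(contents: string, startLine?: number, endLine?: number): string {
  const lines = contents.split("\n");
  const from = (startLine ?? 1) - 1;
  const to = endLine ?? lines.length;
  return lines.slice(from, to).join("\n");
}

const call: ToolCall = {
  tool: "read_file",
  input: { path: "src/index.ts", startLine: 2, endLine: 3 },
};

const result = readFileMock("a\nb\nc\nd", 2, 3); // lines 2-3 only
```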
Adding Marketplace Tools
Install tools from the marketplace, then add them to your agent. Currently available: github, gmail, slack. More coming soon.
Model Support
RunTools is model-agnostic. The runtime resolves model aliases to AI SDK providers:

| Provider | Models |
|---|---|
| Anthropic | claude-opus-4.6, claude-opus-4.5, claude-sonnet-4.5, claude-sonnet-4, claude-haiku |
| OpenAI | gpt-5.2, gpt-5.2-pro, gpt-5, gpt-5-mini, gpt-5-nano, gpt-4.1, o4-mini |
| Google | gemini-3-pro, gemini-3-flash, gemini-2-flash |
| xAI | grok-4, grok-3 |
| Mistral | mistral-large |
| DeepSeek | deepseek, deepseek-reasoner |
| Groq | llama-4, llama-3.3-70b |
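Alias resolution can be sketched as a simple lookup table. The mapping below is an assumption inferred from the table above (only a few aliases shown); the runtime's actual resolution logic and provider identifiers may differ.

```typescript
// Hedged sketch of alias-to-provider resolution, not the runtime's real table.
const ALIAS_TO_PROVIDER: Record<string, string> = {
  "claude-sonnet-4.5": "anthropic",
  "gpt-5": "openai",
  "gemini-3-pro": "google",
  "grok-4": "xai",
};

function resolveProvider(alias: string): string {
  const provider = ALIAS_TO_PROVIDER[alias];
  if (!provider) throw new Error(`unknown model alias: ${alias}`);
  return provider;
}
```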
Advanced Configuration
All AI SDK parameters (e.g. temperature, topP, maxOutputTokens) are supported.
The Agent Loop
When you run an agent, the runtime executes a streaming loop:
- Prompt — User message + system prompt sent to the model
- Think — Model analyzes context, decides next action
- Tool Call — If needed, executes a tool and adds result to context
- Repeat — Continues until model returns a final text response
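The steps above can be sketched as a loop. The model and tool below are mocks standing in for real AI SDK calls and sandbox execution; this is an illustration of the control flow, not the runtime's actual code.

```typescript
// Minimal sketch of the agent loop. The mock model requests one bash call,
// then returns a final answer once it sees a tool result in context.
type ModelStep =
  | { type: "tool-call"; tool: string; input: string }
  | { type: "text"; text: string };

function mockModel(context: string[]): ModelStep {
  if (!context.some((m) => m.startsWith("tool-result:"))) {
    return { type: "tool-call", tool: "bash", input: "ls" };
  }
  return { type: "text", text: "Done: listed files." };
}

function runTool(tool: string, input: string): string {
  return `ran ${tool} with ${input}`; // stand-in for sandbox execution
}

function runAgent(prompt: string, maxIterations = 25): string {
  const context = [`user:${prompt}`];
  for (let i = 0; i < maxIterations; i++) {
    const step = mockModel(context);
    if (step.type === "tool-call") {
      // Tool Call: execute the tool and add its result to context.
      context.push(`tool-result:${runTool(step.tool, step.input)}`);
      continue; // Repeat with the tool result available.
    }
    return step.text; // Final text response ends the loop.
  }
  throw new Error("maxIterations exceeded");
}
```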
The loop is implemented by runtools-runtime (baked into the rootfs). It uses AI SDK 6 (streamText()) with manual tool execution.
Concurrency
One agent run at a time per sandbox. If a run is active, new requests are rejected with an error.
Events Streamed
During execution, SSE events are emitted:
- text-delta — Streaming text output
- tool-call — Tool being invoked (name + input)
- tool-result — Tool output
- thinking — Model reasoning (if enabled)
- error — Error occurred
- done — Run complete
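Consuming the stream can be sketched as follows. The `event:`/`data:` field names come from the SSE format itself, but the JSON payload shape is an assumption, not RunTools' documented wire format.

```typescript
// Parse one SSE event block into a typed event. Payload shape is illustrative.
interface StreamEvent {
  type: string;
  data: unknown;
}

function parseSSEBlock(block: string): StreamEvent {
  let type = "message"; // SSE default when no event: field is present
  let data = "";
  for (const line of block.split("\n")) {
    if (line.startsWith("event:")) type = line.slice(6).trim();
    else if (line.startsWith("data:")) data += line.slice(5).trim();
  }
  return { type, data: JSON.parse(data) };
}

const evt = parseSSEBlock('event: tool-call\ndata: {"tool":"bash","input":"ls"}');
```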
Running Agents
Via API
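The original API snippet did not survive extraction. As a hedged sketch, assuming a hypothetical POST /v1/agents/:slug/run endpoint (modeled on the workflow endpoint pattern, not a published API reference), the request might look like this. It is constructed but not sent, so the sketch stands alone.

```typescript
// Hypothetical request shape for running an agent over HTTP. The host,
// endpoint path, and body fields are all assumptions for illustration.
const slug = "code-assistant";
const request = {
  url: `https://api.runtools.example/v1/agents/${slug}/run`,
  method: "POST",
  headers: {
    Authorization: "Bearer <RUNTOOLS_API_KEY>",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ prompt: "Summarize the repo's README." }),
};
```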
Via CLI
Via SDK
Workflows (Coming Soon)
Workflows orchestrate multiple agents in a DAG (directed acyclic graph). Build them visually in the workflow editor:
- Drag existing agents onto the canvas as nodes
- Connect them with edges to define execution flow
- Each agent can run in its own sandbox or share a global sandbox
- Deploy workflows as API endpoints:
POST /v1/workflows/:slug/run
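A workflow DAG and its execution order can be sketched as below. The node/edge shape is illustrative; the workflow editor's actual data model is not documented here.

```typescript
// Sketch of a workflow as agent nodes plus dependency edges, with a
// topological sort (Kahn's algorithm) to derive a valid execution order.
interface Workflow {
  nodes: string[];           // agent slugs
  edges: [string, string][]; // [from, to]: "to" runs after "from"
}

function executionOrder(wf: Workflow): string[] {
  const indegree = new Map(wf.nodes.map((n) => [n, 0]));
  for (const [, to] of wf.edges) indegree.set(to, (indegree.get(to) ?? 0) + 1);
  const ready = wf.nodes.filter((n) => indegree.get(n) === 0);
  const order: string[] = [];
  while (ready.length > 0) {
    const n = ready.shift()!;
    order.push(n);
    for (const [from, to] of wf.edges) {
      if (from !== n) continue;
      indegree.set(to, indegree.get(to)! - 1);
      if (indegree.get(to) === 0) ready.push(to);
    }
  }
  if (order.length !== wf.nodes.length) throw new Error("cycle detected: not a DAG");
  return order;
}

const wf: Workflow = {
  nodes: ["research", "draft", "review"],
  edges: [["research", "draft"], ["draft", "review"]],
};
```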
Best Practices
Write clear, specific system prompts
Be explicit about what the agent should do, how it should behave, and what tools to use. Include examples.
Core tools are always included
Don’t worry about adding bash, read_file, etc. — they’re always available. Only add marketplace/custom tools you need.
Set reasonable maxIterations
Default is 25. Increase for complex tasks, but cap at a reasonable limit to prevent runaway agents.
Store provider keys as secrets
Use runtools secret set ANTHROPIC_API_KEY sk-xxx instead of hardcoding keys.