Ship agents,
not infrastructure

Build and deploy AI agents to your own cloud. One config file. No vendor lock-in.

coming soon
$ curl -sSL agentctl.sh/install | sh

macOS, Linux, WSL · view releases


Three commands. That's it.

01

agentctl init

Scaffold an agent.toml — pick your model, tools, and memory backend.

02

agentctl dev

Run locally with hot-reload. Chat in the terminal. Stream tool calls live.

03

agentctl deploy

Deploy to your AWS account. Infrastructure generated, provisioned, done.


# agent.toml — that's your entire agent
[agent]
name = "incident-responder"
description = "Diagnose production issues using logs and the database"

[model]
provider = "anthropic"
model = "claude-sonnet-4-20250514"

[[tools]]
name = "prod_db"
type = "database"
read_only = true

[[tools]]
name = "logs"
type = "http"
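The init step also mentions picking a memory backend, which the sample config omits. A hypothetical fragment of what that might look like — the table and key names here are illustrative assumptions, not confirmed by this page:

```toml
# Hypothetical — [memory] keys are assumed, not documented above
[memory]
backend = "postgres"   # where conversation state persists
```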

Built for production

Your cloud, your data

Deploys to your AWS account inside your VPC. Nothing touches a third-party platform.

Any model

Anthropic, OpenAI, or self-hosted vLLM. Swap providers by changing one line.
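In config terms, that swap is a sketch like the following, reusing the [model] table from the example above (the specific model id shown is illustrative, and in practice the model name changes along with the provider):

```toml
[model]
provider = "openai"    # was "anthropic" — the one-line swap
model = "gpt-4o"       # illustrative model id
```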

ReAct loop

Multi-step reasoning with tool use. The agent thinks, acts, observes, repeats.

Built in Rust

Single static binary. Fast cold starts, tiny memory footprint, no garbage collector.

Escape when you need to

Write custom tools in WASM, or drop down to the full SDK. Start simple, grow without rewriting.

Stream everything

SSE endpoint out of the box. Watch your agent think and call tools in real time.
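A sketch of consuming that stream, assuming a conventional SSE shape. The endpoint URL and event names below are placeholders, not confirmed by this page; the snippet simulates a stream locally so the pipe is runnable as-is:

```shell
# Hypothetical: stream a deployed agent's SSE endpoint with curl
# (URL is a placeholder):
#   curl -N https://your-agent.example.com/events -H 'Accept: text/event-stream'
#
# SSE frames are newline-delimited "event:" / "data:" pairs, so pulling
# out the data payloads is a one-liner. Simulated stream shown here:
printf 'event: thought\ndata: {"text":"querying prod_db"}\n\nevent: tool_call\ndata: {"name":"logs"}\n\n' \
  | sed -n 's/^data: //p'
```

The same `sed` filter works unchanged on a live `curl -N` stream.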