Build and deploy AI agents to your own cloud. One config file. No vendor lock-in.
$ curl -sSL agentctl.sh/install | sh
macOS, Linux, WSL · view releases
Scaffold an agent.toml — pick your model, tools, and memory backend.
Run locally with hot-reload. Chat in the terminal. Stream tool calls live.
Deploy to your AWS account. Infrastructure generated, provisioned, done.
# agent.toml — that's your entire agent

[agent]
name = "incident-responder"
description = "Diagnose production issues using logs and database"

[model]
provider = "anthropic"
model = "claude-sonnet-4-20250514"

[[tools]]
name = "prod_db"
type = "database"
read_only = true

[[tools]]
name = "logs"
type = "http"
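On disk that's the whole agent; at the keyboard it's roughly three commands. The subcommand names below are placeholders for the steps above, not names confirmed by this page:

$ agentctl init      # 1. scaffold agent.toml (hypothetical subcommand)
$ agentctl dev       # 2. run locally with hot-reload (hypothetical)
$ agentctl deploy    # 3. generate and provision AWS infrastructure (hypothetical)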
Deploys to your AWS account inside your VPC. Nothing touches a third-party platform.
Anthropic, OpenAI, or self-hosted vLLM. Swap providers by changing one line.
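For example, moving the incident-responder above off Anthropic could look like this. The openai values are illustrative; only the anthropic block appears on this page:

[model]
provider = "openai"   # the one line that swaps providers
model = "gpt-4o"      # model id updated to match; value is illustrative

Pointing at self-hosted vLLM would be the same kind of one-line change.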
Multi-step reasoning with tool use. The agent thinks, acts, observes, repeats.
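A minimal Rust sketch of that loop's shape; every type, name, and canned response here is invented for illustration and is not agentctl's actual code:

// Hypothetical think→act→observe loop. A real agent calls the configured
// model provider; this stand-in returns canned steps so it runs anywhere.
enum Step {
    ToolCall { name: String, input: String }, // the model wants to act
    Answer(String),                           // the model is done
}

fn think(history: &[String]) -> Step {
    // Stand-in for a model call: act once, then answer.
    if history.iter().any(|h| h.contains("observation")) {
        Step::Answer("disk full on db-1".into())
    } else {
        Step::ToolCall { name: "logs".into(), input: "errors since 1h".into() }
    }
}

fn act(name: &str, input: &str) -> String {
    // Stand-in for tool execution (prod_db, logs, ...).
    format!("observation from {name}: 3 disk-space errors matching '{input}'")
}

fn main() {
    let mut history = vec!["user: why is checkout failing?".to_string()];
    loop {
        match think(&history) {
            Step::ToolCall { name, input } => {
                history.push(act(&name, &input)); // act, observe, feed back in
            }
            Step::Answer(a) => {
                println!("{a}");
                break;
            }
        }
    }
}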
Single static binary. Fast cold starts, tiny memory footprint, no garbage collector.
Write custom tools in WASM or go full SDK. Start simple, grow without rewriting.
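As a sketch of what the WASM side could look like, here's a trivial echo tool in Rust built for wasm32. The exported names and the ptr/len string-passing convention are assumptions; this page doesn't document agentctl's actual WASM interface:

// Build as a cdylib for wasm32-unknown-unknown. The ABI below
// (tool_alloc + tool_call, strings as ptr/len) is hypothetical.
use std::alloc::{alloc, Layout};
use std::mem;

#[no_mangle]
pub extern "C" fn tool_alloc(len: usize) -> *mut u8 {
    // Host reserves guest memory for the input string (assumed convention).
    unsafe { alloc(Layout::from_size_align(len.max(1), 1).unwrap()) }
}

#[no_mangle]
pub extern "C" fn tool_call(ptr: *const u8, len: usize) -> u64 {
    // Read the input the host wrote into guest memory.
    let input = unsafe {
        std::str::from_utf8_unchecked(std::slice::from_raw_parts(ptr, len))
    };
    let output = format!("echo: {input}"); // a real tool would do work here
    let bytes = output.into_bytes();
    // Pack the result's ptr and len into one u64 (assumed convention;
    // wasm32 pointers fit in 32 bits) and leak it for the host to read.
    let packed = ((bytes.as_ptr() as u64) << 32) | bytes.len() as u64;
    mem::forget(bytes);
    packed
}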
SSE endpoint out of the box. Watch your agent think and call tools in real time.
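Because it's plain SSE over HTTP, anything that speaks HTTP can tail the stream; the port and path here are hypothetical:

$ curl -N http://localhost:8080/v1/events   # -N turns off buffering so events print as they arrive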