You wrote an agent. It works on your laptop. Now what?
## The gap after "it works locally"
Most agent frameworks stop at the definition layer. You get tool calling, prompt templates, maybe memory abstractions. But when it's time to run in production, you're on your own:
- Where does it execute? Your laptop can't serve 100 concurrent users.
- Is it safe? An agent with shell access can destroy your server.
- Where's the memory? Context needs to survive across sessions.
- How do you monitor it? When an agent loops, who notices?
- How do others call it? Your app, your mobile client, your webhook — they all need an API.
## What Polpo gives you
Deploy your agent and you immediately get:
- An OpenAI-compatible API — any app can call your agent with a standard POST request
- Sandboxed execution — every run is isolated, every sandbox is disposable
- Persistent memory — context survives across sessions, per agent
- Encrypted vault — store API keys and secrets, AES-256-GCM at rest
- Real-time events — SSE stream to your app, React hooks included
- Multi-model support — Anthropic, OpenAI, xAI, Google, any provider
- Missions & orchestration — multi-step workflows with dependencies
- Assessment pipeline — automated quality checks on every output
- CLI + SDKs — TypeScript, React, and a CLI that deploys in one command
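Because the API is OpenAI-compatible, calling a deployed agent is an ordinary chat-completion POST. A minimal sketch with curl, assuming a placeholder base URL and an agent named `my-agent` (both are illustrative values, not documented Polpo endpoints; substitute the ones from your own deployment):

```shell
# Placeholder endpoint and agent name -- both are assumptions for
# illustration; use the values from your own Polpo deployment.
POLPO_URL="https://api.polpo.example/v1/chat/completions"

# A standard OpenAI-style chat payload; "model" selects the deployed agent.
PAYLOAD='{"model":"my-agent","messages":[{"role":"user","content":"Hello"}]}'

# Any HTTP client works -- plain curl is enough. The || true keeps the
# script alive if the placeholder endpoint is unreachable.
curl -s --max-time 5 "$POLPO_URL" \
  -H "Authorization: Bearer $POLPO_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || true
```

Since the request shape is the standard one, existing OpenAI client SDKs should also work by pointing their base URL at your deployment instead of api.openai.com.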
## Framework-agnostic
Polpo doesn't care how you build your agent. Use CrewAI, LangGraph, or just a prompt and a model. Polpo is the runtime that runs it — not the framework that defines it.
```bash
# That's it. Your agent is live.
polpo deploy
```

You focus on what your agent does. Polpo handles everything else.
