There are dozens of AI agent frameworks. CrewAI, LangGraph, AutoGen, Mastra — each gives you a way to define agents, wire tools, and orchestrate multi-step workflows.
But when you're done prototyping, you face the real question: where does this thing run?
The gap between framework and production
A framework gives you the building blocks. It doesn't give you:
- Sandboxed execution — your agent runs shell commands. Where? On your laptop? On a shared server where it can rm -rf /?
- Persistent memory — your agent needs context across sessions. Who manages the database?
- Scaling — one agent is fine. What about 100 concurrent users hitting your agent API?
- Monitoring — your agent is stuck in a loop. How do you know? How do you stop it?
- Deployment — you wrote the agent. Now what? Docker? Kubernetes? Lambda?
This is the gap Polpo fills. It's not another framework. It's the runtime that frameworks are missing.
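The monitoring point above is worth making concrete: the simplest defense against a runaway agent is a hard step cap on the agent loop. This is a generic sketch of that pattern, not Polpo code — every name here (run_agent, step) is illustrative.

```python
def run_agent(step, max_steps=10):
    """Call `step` until it reports completion or the cap is hit.

    `step(i)` returns (done, result); in a real agent it would call a
    model and maybe execute a tool. The cap turns an infinite loop into
    a visible, stoppable failure.
    """
    for i in range(max_steps):
        done, result = step(i)
        if done:
            return result
    raise RuntimeError(f"agent exceeded {max_steps} steps; likely stuck in a loop")


# A "stuck" step that never finishes, caught by the cap:
try:
    run_agent(lambda i: (False, None), max_steps=5)
except RuntimeError as e:
    print(e)
```

A production runtime does the equivalent at the platform level — timeouts, step budgets, and a kill switch in the dashboard — so you don't have to remember to add this guard in every agent.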
What Polpo actually is
Polpo is a Backend-as-a-Service for AI agents. You define your agent (a prompt, a model, some tools), run polpo deploy, and it's live. Your agent gets:
- An OpenAI-compatible API endpoint
- Ephemeral sandboxes for safe code execution
- Built-in memory that persists across sessions
- Auto-scaling from zero to thousands of requests
- A dashboard to monitor everything
No Dockerfile. No Kubernetes. No infra team.
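"OpenAI-compatible" means existing client code keeps working — you swap the base URL and your deployed agent stands in for the model name. A minimal sketch of the request shape such an endpoint accepts (the base URL and agent name here are hypothetical, not real Polpo values):

```python
import json


def chat_request(agent: str, user_message: str) -> dict:
    # Standard OpenAI chat-completions request body; an OpenAI-compatible
    # endpoint accepts this shape at POST {base_url}/chat/completions.
    return {
        "model": agent,  # the deployed agent takes the place of a model name
        "messages": [{"role": "user", "content": user_message}],
    }


body = chat_request("my-agent", "Summarize yesterday's tickets")
print(json.dumps(body, indent=2))
```

In practice you would point any OpenAI SDK at the agent's endpoint — e.g. `OpenAI(base_url="https://your-agent.example.com/v1", api_key=...)` with the URL being whatever your deployment reports — and call `client.chat.completions.create(**body)` unchanged.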
Why open source matters
The core runtime is MIT-licensed. You can self-host on your laptop, a VPS, or your company's servers. The cloud adds managed sandboxes, multi-tenancy, and scaling — but the engine is the same.
We believe the runtime for AI agents should be a commodity, not a moat. The value is in what you build on top.
What's next
We're launching the public beta. If you're building AI agents and tired of managing infra, try it:
npm install -g polpo-ai
polpo init
polpo deploy

Your agents deserve a production runtime.
