AI agents run code. They execute shell commands, write files, install packages. Letting them do this on a shared server is a recipe for disaster.
The problem
An AI agent with a bash tool can run any command. Without sandboxing:
- Agent A's `rm -rf /` kills Agent B's workspace
- A rogue agent mines crypto on your server
- Sensitive files from one user are accessible to another
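To make the risk concrete, here is a minimal sketch of a naive bash tool (hypothetical code, not Polpo's): it runs whatever command the agent emits, with the full privileges of the host process.

```python
import subprocess

def bash_tool(command: str) -> str:
    """Naive agent tool: runs an arbitrary shell command on the host.
    Nothing restricts the filesystem, network, or other users' data."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=30
    )
    return result.stdout + result.stderr

# Any command the model emits runs as-is. It could just as easily be
# `rm -rf /` or `cat` on another user's private keys.
print(bash_tool("whoami"))
```

Every capability the agent has, a misbehaving agent has too, which is why isolation has to come from outside the tool.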
Our approach: ephemeral sandboxes
Every agent execution in Polpo Cloud runs inside an ephemeral sandbox — a short-lived compute environment with:
- Isolated filesystem — each sandbox has its own root
- Isolated network — sandboxes can't talk to each other
- Auto-cleanup — the sandbox is destroyed after execution (or after 5 minutes of inactivity)

- Shared volumes — project files are mounted read-only
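One way to picture these four properties together is as container-runtime flags. The sketch below is illustrative only; the flag names are invented for the example and do not describe Polpo's actual runtime.

```python
from dataclasses import dataclass

@dataclass
class SandboxSpec:
    project: str               # project whose files are mounted
    idle_timeout_s: int = 300  # destroyed after 5 minutes of inactivity

    def runtime_args(self) -> list[str]:
        """Render the isolation properties as container-style flags.
        Flag names are illustrative, not a specific engine's CLI."""
        return [
            "--rm",                                  # auto-cleanup on exit
            "--network=none",                        # no sandbox-to-sandbox traffic
            "--rootfs=ephemeral",                    # isolated, throwaway filesystem
            f"--mount={self.project}:/project:ro",   # shared volume, read-only
        ]

spec = SandboxSpec(project="demo")
print(spec.runtime_args())
```

The read-only mount is the key detail: the agent can read project files but every write lands on the sandbox's own throwaway filesystem.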
The pool model
Creating a sandbox takes under 100ms from a warm pool. We pre-provision sandboxes per project so there's always one ready. When all are busy, a new one is created on-demand.
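The acquire path is: pop from the warm pool if a sandbox is ready, otherwise create one on demand. A simplified in-memory sketch (the production pool state lives in Redis for cross-replica consistency; class and method names here are hypothetical):

```python
from collections import deque
import itertools

class SandboxPool:
    """Per-project warm pool: acquire is a cheap pop when a sandbox is
    ready, and falls back to on-demand creation when all are busy."""

    def __init__(self, project: str, warm_size: int = 2):
        self.project = project
        self._ids = itertools.count()
        self.warm = deque(self._create() for _ in range(warm_size))

    def _create(self) -> str:
        # Stand-in for actually provisioning a sandbox (the slow path).
        return f"{self.project}-sb-{next(self._ids)}"

    def acquire(self) -> str:
        if self.warm:
            return self.warm.popleft()   # fast path: under 100ms in production
        return self._create()            # pool exhausted: create on demand

    def release(self, sandbox_id: str) -> None:
        # Ephemeral model: the used sandbox is destroyed, and the pool
        # is topped back up with a fresh one.
        self.warm.append(self._create())

pool = SandboxPool("demo", warm_size=1)
sandbox = pool.acquire()   # served from the warm pool
```

Releasing never returns a used sandbox to the pool; every acquire hands out a clean environment.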
The pool is tracked in Redis for cross-replica consistency. Each sandbox has a busy marker with a 30-minute TTL — if a server crashes, orphaned sandboxes auto-release.
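The busy marker is essentially a lease: a SET with NX (only if absent) and an expiry. With the real `redis` client that is `r.set(key, holder, nx=True, ex=1800)`; the sketch below models the same semantics in memory so the crash-recovery behavior is visible (the `LeaseStore` class is an illustration, not Polpo's code):

```python
import time

LEASE_TTL_S = 30 * 60  # 30-minute busy marker

class LeaseStore:
    """In-memory stand-in for Redis `SET key value NX EX ttl`."""

    def __init__(self):
        self._data: dict[str, tuple[str, float]] = {}

    def set_nx_ex(self, key: str, value: str, ttl_s: float, now: float) -> bool:
        current = self._data.get(key)
        if current and current[1] > now:   # live lease: key is taken
            return False
        self._data[key] = (value, now + ttl_s)
        return True

store = LeaseStore()
t0 = time.time()

# Replica A marks a sandbox busy.
assert store.set_nx_ex("busy:sb-1", "replica-a", LEASE_TTL_S, now=t0)
# Replica B cannot grab the same sandbox while the lease is live.
assert not store.set_nx_ex("busy:sb-1", "replica-b", LEASE_TTL_S, now=t0 + 60)
# If replica A crashes and never renews, the TTL expires and the
# orphaned sandbox auto-releases for the next claimant.
assert store.set_nx_ex("busy:sb-1", "replica-b", LEASE_TTL_S, now=t0 + LEASE_TTL_S + 1)
```

Because expiry is enforced by the store rather than by the server that took the lease, a crashed replica cannot leak sandboxes.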
Why this matters
Your agents have full access to a shell, filesystem, and network — but they can never affect each other or the host. Every run starts clean. Every run is disposable. Security is not a feature you opt into — it's the default.
