Run NanoClaw in Docker Sandboxes with One Command
March 13, 2026 · Gavriel Cohen
We announced today that we’ve partnered with Docker to enable running NanoClaw in Docker Sandboxes with one command. You can read Docker’s blog post here.
Get Started
# macOS (Apple Silicon)
curl -fsSL https://nanoclaw.dev/install-docker-sandboxes.sh | bash
# Windows (WSL)
curl -fsSL https://nanoclaw.dev/install-docker-sandboxes-windows.sh | bash
This handles the clone, setup, and Docker Sandbox configuration. You can also install manually from source.
Note: Docker Sandboxes are currently supported on macOS (Apple Silicon) and Windows (x86), with Linux support rolling out in the coming weeks.
Once it’s running, every agent gets its own isolated container inside a micro VM. No dedicated hardware needed. No complex setup.
How It Works
Docker Sandboxes run agents inside lightweight micro VMs, each with its own kernel, its own Docker daemon, and no access to your host system. This goes beyond container isolation: hypervisor-level boundaries with millisecond startup times.
NanoClaw maps onto this architecture naturally:
Each NanoClaw agent runs in its own container with its own filesystem, context, tools, and session. Your sales agent can’t see your personal messages. Your support agent can’t access your CRM data. These are hard boundaries enforced by the OS, not instructions given to the agent.
The micro VM layer adds a second boundary. Even if an agent somehow broke out of its container, it hits the VM wall. Your host machine, your files, your credentials, your other applications are on the other side of a hard isolation boundary.
The Security Model: Design for Distrust
I wrote about this in Don’t Trust AI Agents: when you’re building with AI agents, they should be treated as untrusted and potentially malicious. The threats range from prompt injection and model misbehavior to things nobody’s thought of yet. The right approach is architecture that assumes agents will misbehave and contains the damage when they do.
That principle drives every design decision in NanoClaw. Don’t put secrets or credentials inside the agent’s environment. Give the agent access to exactly the data and tools it needs for its job, nothing more. Keep everything else on the other side of a hard boundary.
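The least-privilege idea above can be sketched in a few lines. This is a minimal illustration, not NanoClaw’s actual API: the `ToolBroker` class and its method names are hypothetical. The point is where the check lives: credentials and tools stay outside the agent’s environment, and the boundary is enforced by the broker, not by instructions given to the agent.

```python
# Minimal least-privilege sketch: credentials and tools live outside the
# agent's environment, behind a broker that enforces an allowlist.
# All names here (ToolBroker, the tool names) are illustrative.

class ToolBroker:
    """Exposes only the tools an agent was explicitly granted."""

    def __init__(self, grants):
        # grants: agent name -> set of tool names it may call
        self._grants = grants
        self._tools = {}

    def register(self, name, fn):
        self._tools[name] = fn

    def call(self, agent, tool, *args):
        # The boundary is enforced here, outside the agent:
        # an ungranted tool is simply unreachable, no matter what
        # the agent was prompted (or injected) to do.
        if tool not in self._grants.get(agent, set()):
            raise PermissionError(f"{agent} is not allowed to use {tool}")
        return self._tools[tool](*args)


broker = ToolBroker({"sales-agent": {"crm_lookup"}})
broker.register("crm_lookup", lambda q: f"CRM result for {q}")
broker.register("read_messages", lambda: "personal messages")

print(broker.call("sales-agent", "crm_lookup", "Acme"))  # allowed

try:
    broker.call("sales-agent", "read_messages")  # hard boundary, not a hint
except PermissionError as err:
    print(err)
```

Note that a compromised agent prompt changes nothing here: the grant table is data the agent never sees and cannot edit.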
With Docker Sandboxes, that boundary is now two layers deep. Each agent runs in its own container (can’t see other agents’ data), and all containers run inside a micro VM (can’t touch your host machine). If a hallucination or a misbehaving agent can cause a security issue, the security model is broken. Security has to be enforced outside the agentic surface, not depend on the agent behaving correctly.
OpenClaw runs on your host with access to everything. Even with their opt-in sandbox mode, all agents share the same environment. There’s no hard boundary between them. Your personal assistant can see your work agent’s data.
The right mental model: think of your agent as a colleague you want to collaborate with, but design your security as if it’s a malicious actor. Those two things aren’t contradictory. That’s just good security engineering.
What’s Next
Dario Amodei talks about “a country of geniuses in a data center.” For that to become real, new infrastructure, orchestration layers, and runtimes need to be purpose-built for agents operating at scale.
Today, a team can connect NanoClaw to multiple Slack channels and have separate agents handling different workloads, each isolated, each with its own context and data. But we’re heading somewhere much bigger.
Every employee will have a personal AI assistant. Every team will manage a team of agents. High-performing teams will manage hundreds. To get there, we need:
Controlled context sharing. Isolation is the foundation, but agents that work together need to share information. The hard part is the middle ground: agent teams that share all context freely within the team, but share selectively across team boundaries. You need to be able to lock everything down, control what goes in and what goes out, and then deliberately open up what should be shared. That needs to be native to the runtime, not bolted on.
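A toy model of that middle ground, under stated assumptions: the `TeamContext` class below is hypothetical, not a NanoClaw primitive. It shows the shape of the policy described above: everything is locked down by default, reads inside the team are free, and crossing a team boundary requires a deliberate export.

```python
# Hedged sketch of team-scoped context: free sharing inside a team,
# explicit export across team boundaries. Names are illustrative.

class TeamContext:
    def __init__(self, team):
        self.team = team
        self._items = {}       # key -> value, visible to the whole team
        self._exports = set()  # keys deliberately opened to other teams

    def put(self, key, value):
        self._items[key] = value

    def export(self, key):
        # Nothing leaves the team unless it is explicitly opened up.
        self._exports.add(key)

    def read(self, key, requesting_team):
        if requesting_team == self.team:
            return self._items[key]   # full sharing within the team
        if key in self._exports:
            return self._items[key]   # deliberately shared outward
        raise PermissionError(f"{key!r} is not shared outside {self.team}")


support = TeamContext("support")
support.put("faq", "reset instructions")
support.put("customer_pii", "...")
support.export("faq")                      # deliberate cross-team opening

print(support.read("faq", "sales"))        # exported: crosses the boundary
# support.read("customer_pii", "sales")    # would raise PermissionError
```

The design choice worth noting: the default is deny, and sharing is an explicit act recorded in the runtime, which is what makes it auditable.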
Agents creating persistent agents. Not ephemeral sub-agents that spin up for a task and disappear. An agent adding a new member to its team, the way you hire someone. The new agent gets its own identity, its own persistent environment, its own data. It shows up tomorrow and remembers what it did yesterday. It accumulates context and expertise over time. This requires new primitives for identity, lifecycle management, and permission inheritance that don’t exist yet.
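As a sketch of what those missing primitives might look like (again, a hypothetical illustration, not an existing NanoClaw API): an agent “hiring” another agent would mint a durable identity, attach persistent memory, and inherit permissions capped by the parent’s own grants, so a child can never hold more than its creator.

```python
# Illustrative sketch of persistent-agent primitives: identity,
# permission inheritance, and memory that survives between sessions.
# AgentRegistry and its methods are assumptions, not a real API.

class AgentRegistry:
    def __init__(self):
        self._agents = {}
        self._next_id = 1

    def hire(self, parent, role, permissions):
        # Permission inheritance: a child holds at most its parent's grants.
        if parent is not None:
            permissions = permissions & self._agents[parent]["permissions"]
        agent_id = f"agent-{self._next_id}"
        self._next_id += 1
        self._agents[agent_id] = {
            "role": role,
            "permissions": permissions,
            "memory": [],        # persistent: survives across sessions
        }
        return agent_id

    def remember(self, agent_id, fact):
        self._agents[agent_id]["memory"].append(fact)

    def recall(self, agent_id):
        return list(self._agents[agent_id]["memory"])


registry = AgentRegistry()
lead = registry.hire(None, "team-lead", {"email:read", "crm:read"})
# The analyst asks for email:send, but the lead can't grant what it
# doesn't have, so the inherited set is clipped to {"crm:read"}.
analyst = registry.hire(lead, "analyst", {"crm:read", "email:send"})
registry.remember(analyst, "reviewed Q3 pipeline")
print(registry.recall(analyst))
```

The clipped permission set is the key property: delegation can narrow access but never widen it.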
Fine-grained permissions and policies. Not just what tools an agent can access, but what it can do with them. Read email but not send. Access one repo but not another. Spend up to a threshold but no more.
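Those three examples translate directly into a policy object. The sketch below is a minimal assumption of how such a policy could look; the `Policy` class and the `"email:read"`-style action strings are invented for illustration.

```python
# Hedged sketch of action-level policy: not just which tools, but
# which verbs on them, plus a cumulative spend cap. Names are illustrative.

class Policy:
    def __init__(self, actions, spend_limit=0.0):
        self.actions = set(actions)   # e.g. {"email:read", "repo:frontend"}
        self.spend_limit = spend_limit
        self.spent = 0.0

    def allows(self, action):
        return action in self.actions

    def spend(self, amount):
        # Enforce the threshold cumulatively, not per transaction.
        if self.spent + amount > self.spend_limit:
            raise PermissionError("spend threshold exceeded")
        self.spent += amount


policy = Policy({"email:read", "repo:frontend"}, spend_limit=50.0)
print(policy.allows("email:read"))    # read email...
print(policy.allows("email:send"))    # ...but not send it
print(policy.allows("repo:backend"))  # one repo but not another

policy.spend(30.0)       # within the threshold
# policy.spend(30.0)     # would now raise PermissionError
```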
Human-in-the-loop approvals. For irreversible actions, humans need to be in the approval chain. Agents propose, humans approve, agents execute.
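The propose/approve/execute loop can be sketched as a small state machine. This is an assumed shape, not an existing interface: the `ApprovalQueue` name and its ticket model are illustrative.

```python
# Illustrative propose -> approve -> execute flow for irreversible
# actions. The agent can only propose; execution is gated on a human.

class ApprovalQueue:
    def __init__(self):
        self._pending = {}   # ticket -> (agent, action, approved?)
        self._next = 1

    def propose(self, agent, action):
        ticket = self._next
        self._next += 1
        self._pending[ticket] = (agent, action, False)
        return ticket

    def approve(self, ticket):
        agent, action, _ = self._pending[ticket]
        self._pending[ticket] = (agent, action, True)

    def execute(self, ticket):
        agent, action, approved = self._pending[ticket]
        if not approved:
            raise PermissionError("irreversible action requires approval")
        return f"{agent} executed: {action}"


queue = ApprovalQueue()
ticket = queue.propose("ops-agent", "delete staging database")

try:
    queue.execute(ticket)          # blocked: no human has signed off
except PermissionError as err:
    print(err)

queue.approve(ticket)              # the human step
print(queue.execute(ticket))       # now it runs
```

As with the rest of the model, the gate is enforced by the runtime: an agent that decides to skip approval simply has no path to execution.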
NanoClaw is the secure, customizable runtime and orchestration layer for agent teams. Docker Sandboxes is the enterprise-grade infrastructure underneath. As agents move from single-player tools to full team members operating at enterprise scale, the stack that runs them needs to enforce isolation by default, enable controlled collaboration, and give organizations the visibility and governance they need. That’s what we’re building.
NanoClaw is an open-source, secure runtime and orchestration layer for agent teams. Star it on GitHub.