🏗️ Architecture Deep Dive: Isolation, Incus, and AI Agents

After talking about what ClawControlPanel does, I want to pull back the curtain on how it's actually built. This isn't just a Next.js app running on a server—it's a carefully orchestrated stack designed for isolation and reliability.

Building an AI agent platform comes with a unique challenge: How do you give an agent enough power to be useful (creating files, running scripts) without risking your host machine?

Here is how I’ve solved that for Mission Control.


🔒 The Infrastructure: LXC Managed by Incus

The foundation of the entire system is Incus (the community-led successor to LXD). Every deployment gets its own dedicated LXC container.

Why containers instead of just a standard VM or a bare-metal process?

  • Isolation: Each agent runtime is trapped inside its own Linux container. Even if an agent goes rogue or a script fails, the host system remains untouched.
  • Resource Control: Incus makes it trivial to limit CPU and RAM for specific mission sets.
  • Speed: Unlike full Virtual Machines, LXC containers share the host kernel, meaning they boot in seconds and have near-zero overhead.
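
As an illustration, here is roughly what the Incus side of a deployment could look like. This is a sketch built from standard Incus CLI commands; the image, container name, limits, and port below are placeholders, not the project's real values:

```typescript
// Sketch: the Incus commands a deployment script might issue.
// Names, image, limits, and ports are illustrative assumptions.
function incusProvisionCommands(name: string, cpus: number, memory: string): string[] {
  return [
    // Create the container from a stock image
    `incus launch images:debian/12 ${name}`,
    // Cap CPU and RAM for this mission set
    `incus config set ${name} limits.cpu=${cpus}`,
    `incus config set ${name} limits.memory=${memory}`,
    // Forward ONLY the dashboard's port to the host; the gateway stays internal
    `incus config device add ${name} web proxy listen=tcp:0.0.0.0:3000 connect=tcp:127.0.0.1:3000`,
  ];
}

const cmds = incusProvisionCommands("mission-7", 2, "2GiB");
console.log(cmds.join("\n"));
```

The proxy device in the last command is what keeps the isolation story honest: everything except the dashboard port stays inside the container.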

📦 What’s Inside the Container?

Inside each of these LXC containers, I run two core services side-by-side:

  1. OpenClaw Gateway: This is the "Engine Room." It's the AI runtime that actually connects to models (Anthropic, OpenAI, etc.) and executes tools.
  2. ClawControlPanel (Mission Control): This is the "Cockpit." It's the Next.js dashboard you've seen in my previous posts.

By running them together in the same container, the dashboard talks to the gateway over a local WebSocket (ws://127.0.0.1:18789) with negligible latency, and that traffic never leaves the loopback interface. The container exposes only the dashboard's port to the outside world.
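
A minimal sketch of that loopback link. Only the port comes from the setup above; the message envelope and its field names are my illustration, not the gateway's actual protocol:

```typescript
// Sketch: dashboard-side helper for talking to the gateway over loopback.
// The envelope shape ("type"/"payload") is a hypothetical example.
const GATEWAY_URL = "ws://127.0.0.1:18789";

interface GatewayMessage {
  type: "chat" | "run_tool" | "status";
  payload: unknown;
}

function encode(msg: GatewayMessage): string {
  return JSON.stringify(msg);
}

// Usage inside the dashboard's server runtime might look like:
//   const ws = new WebSocket(GATEWAY_URL);
//   ws.onopen = () => ws.send(encode({ type: "status", payload: {} }));
```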

🛠️ The Internal Architecture

The project itself is split into several logical layers that keep the "Planning → Execution" flow smooth:

1. The Real-Time Frontend

Built with Next.js 15 (App Router) and Tailwind CSS, the UI is designed to be highly reactive. Instead of constant polling, it uses Server-Sent Events (SSE): when an agent updates a task status or sends a message, the change is pushed to your screen instantly.
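
To make the push concrete, here is a sketch of the SSE wire format such an update could travel in. The event name and payload are illustrative; the format itself (the `event:`/`data:` lines and blank-line terminator) is the standard SSE framing a browser `EventSource` understands:

```typescript
// Sketch: format one Server-Sent Event frame.
// In a Next.js route handler, frames like this would be written to a
// ReadableStream returned with Content-Type: text/event-stream.
function sseFrame(event: string, data: unknown): string {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}

// Hypothetical task update pushed to the dashboard:
const frame = sseFrame("task-updated", { id: 42, status: "in_progress" });
```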

2. The Persistent Brain

The database is SQLite. It's light, fast, and lives inside the container. It tracks every task, every agent's "SOUL.md" (personality file), and the history of every mission.
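
For a feel of what that storage layer might hold, here is a hedged sketch of a tasks table. The column names and statuses are my assumptions, not the project's actual schema:

```typescript
// Sketch: a plausible SQLite schema for the task tracker.
// Column names and defaults are illustrative assumptions.
const TASK_SCHEMA = `
CREATE TABLE IF NOT EXISTS tasks (
  id         INTEGER PRIMARY KEY AUTOINCREMENT,
  title      TEXT NOT NULL,
  status     TEXT NOT NULL DEFAULT 'inbox',
  agent_id   TEXT,                          -- which agent the orchestrator assigned
  created_at TEXT DEFAULT (datetime('now'))
);`;

// A matching row type for the TypeScript side:
interface TaskRow {
  id: number;
  title: string;
  status: string;
  agent_id: string | null;
  created_at: string;
}
```

Because SQLite is a single file inside the container, backing up a whole mission's state is just copying that file out with `incus file pull`.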

3. The Orchestration Logic

This is where the magic happens. The backend features:

  • The AI Orchestrator: A background cron job that analyzes your inbox and matches tasks to agents.
  • The Task State Machine: A robust task-workflow.ts that ensures a task can't jump from "Planning" to "Done" without passing through the necessary gates.
  • The Planning Flow: A specialized Q&A interface that uses the AI to "interview" the user before a single line of code is written.
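
The state-machine idea can be sketched like this. The status names and allowed transitions below are my assumptions for illustration, not the contents of the actual task-workflow.ts:

```typescript
// Sketch: the kind of gatekeeping a task state machine performs.
// States and transitions are illustrative, not the project's real ones.
type TaskStatus = "inbox" | "planning" | "in_progress" | "review" | "done";

const TRANSITIONS: Record<TaskStatus, TaskStatus[]> = {
  inbox: ["planning"],
  planning: ["in_progress"],
  in_progress: ["review"],
  review: ["done", "in_progress"], // review can bounce work back
  done: [],
};

function canTransition(from: TaskStatus, to: TaskStatus): boolean {
  return TRANSITIONS[from].includes(to);
}
```

With a table like this, "Planning" can never jump straight to "Done": the only path runs through execution and review.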

The Big Picture

```mermaid
graph TD
    subgraph "Host Machine (Ubuntu/Debian)"
        subgraph "Incus / LXC Container"
            CCP[ClawControlPanel - Next.js]
            OCG[OpenClaw Gateway - Runtime]
            DB[(SQLite DB)]

            CCP <-->|WebSockets| OCG
            CCP <-->|SQL| DB
        end
    end

    User((User)) -->|HTTPS| CCP
    OCG -->|API| LLM[AI Providers]
```

This stack gives me the perfect balance: the ease of development of a modern web app with the industrial-grade isolation of Linux containers.

Stay tuned as I keep refining the orchestration layer! 🚀🏗️

