Architecture Evolution: Through a Mandelbrot Set
REALITY MANIFESTED (November 2025): Dream became executable truth.
LIVE NOW: api.unsandbox.com
- 128 warm LXD containers executing 38+ languages
- Dual network isolation (semitrusted & zerotrust)
- Phoenix + Elixir distributed architecture
- 200-800ms total execution time
From Firecracker vsock hell to LXD container paradise. Ubuntu + Ubuntu = native glibc support.
A universal execution membrane persists & RUNS IN PRODUCTION.
Old Reality: Monolith (Pre-Pivot)
┌──────────────────────────────────────┐
│ Single Firecracker VM (172.16.0.2)   │ ← ABANDONED
│                                      │
│ ┌────────────────────────────────┐   │
│ │ Elixir/Phoenix (Always On)     │   │
│ │ - HTTP Server                  │   │
│ │ - Request Router               │   │
│ │ - Language Detection           │   │
│ └────────────────────────────────┘   │
│                                      │
│ ┌────────────────────────────────┐   │
│ │ Code Execution (Same VM)       │   │
│ │ - 42 Languages                 │   │
│ │ - Shared /tmp                  │   │
│ │ - Resource Competition         │   │
│ │ - vsock BROKEN ❌              │   │
│ └────────────────────────────────┘   │
└──────────────────────────────────────┘
NEW REALITY: LXD Container Pool @ Scale (November 2025)
┌────────────────────────────────────────────────────────────────┐
│ HOST: cammy (370GB RAM, 32 vCPUs)                              │
│                                                                │
│ ┌──────────────────────────────────────────────────────┐       │
│ │ Elixir/Phoenix App (systemd service)                 │       │
│ │ - HTTP Server on :8080                               │       │
│ │ - LxdContainerPool GenServer                         │       │
│ │ - PRE-EMPTIVE SPAWNING: Backfill before use          │       │
│ └──────────────────────────────────────────────────────┘       │
│                            │                                   │
│                            ▼                                   │
│ ┌──────────────────────────────────────────────────────┐       │
│ │ WARM POOL: 1,000 CONTAINERS (~50GB idle)             │       │
│ │ EPHEMERAL containers (--ephemeral flag)              │       │
│ │ Each: 10-50MB idle, scales to 100-200MB active       │       │
│ └──────────────────────────────────────────────────────┘       │
│                                                                │
│ Execution Flow (Zero-Latency):                                 │
│ 1. HTTP Request → Phoenix                                      │
│ 2. GenServer.call(:acquire) → Instant                          │
│ 3. Spawn replacement IMMEDIATELY (pre-emptive)                 │
│ 4. lxc exec container -- python3 -c 'code'                     │
│ 5. Return result to HTTP                                       │
│ 6. GenServer.cast(:release) → Stop (async cleanup)             │
│                                                                │
│ NO SERVICES INSIDE CONTAINERS!                                 │
│ Direct lxc exec for each code execution.                       │
└────────────────────────────────────────────────────────────────┘
Why LXD/LXC Wins
Native Ubuntu Compatibility
- Ubuntu 24.04 host → Ubuntu 24.04 containers
- Full glibc support (Julia, Dart, V work!)
- All 42+ languages without hacks
Ephemeral Containers
- lxc launch --ephemeral
- Auto-deleted when stopped
- No state accumulation
Actually Works
Unlike Firecracker vsock:
- LXD networking just works
- Backed by Canonical & Debian
- Battle-tested infrastructure
Fast Container Launch
- <1s container boot time
- Image snapshots cached
- Faster than the old Firecracker boot (~15s)
Dynamic Scaling
- More containers during load peaks
- Auto-stop when idle (ephemeral)
- Pool management via LXD API
Single Golden Image
- lxc publish creates the image
- lxc launch image-name spawns an instance
- One image, infinite instances
Implementation Status (Post-Pivot)
Phase 1: LXD Base Images (COMPLETE)
- setup-lxd-ubuntu.sh: creates Ubuntu 24.04 image
- setup-lxd-alpine.sh: creates Alpine image
- All 42+ languages installed & tested
- Ephemeral launch pattern working
Phase 2: Container Orchestration (IN PROGRESS)
- Manual lxc launch & lxc exec working
- Networking between host & containers
- Elixir integration with LXD API (in progress)
- Dynamic container spawning (in progress)
Phase 3: Distributed Mesh (PLANNED)
- Erlang distribution across LXD containers
- Each container runs Elixir/Phoenix
- DistributedRouter for intelligent routing
- Pool management for container lifecycle
Phase 4: Production Hardening (FUTURE)
- Resource limits per container (see the sketch after this list)
- Auto-scaling based on load
- Monitoring & metrics
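For the resource-limits item above, a hedged sketch of how per-container limits could be applied from Elixir by wrapping LXD's standard instance config keys (limits.cpu, limits.memory). The helper module and function names are hypothetical, not existing code.

# Hypothetical helper: apply CPU/memory limits to a running container
# via LXD's standard instance config keys.
defmodule LimitsSketch do
  def apply_limits(container, cpus, memory) do
    # e.g. apply_limits("exec-001", "2", "256MB")
    {_, 0} = System.cmd("lxc", ["config", "set", container, "limits.cpu", cpus])
    {_, 0} = System.cmd("lxc", ["config", "set", container, "limits.memory", memory])
    :ok
  end
end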
Real Implementation: From Code
Actual Code from Repository
# From router.ex - Entry point delegates to DistributedRouter
post "/execute/:language" do
  result = Task.async(fn ->
    # Intelligent routing across cluster
    CodeExecutor.DistributedRouter.execute(language, code, timeout)
  end)
  |> Task.await(timeout + 1000)
end

post "/run" do
  result = Task.async(fn ->
    # Auto-detection and routing
    CodeExecutor.DistributedRouter.auto_execute(code, timeout)
  end)
  |> Task.await(timeout + 1000)
end

# From application.ex - Every node runs these services
children = [
  CodeExecutor.ExecutorService,
  CodeExecutor.ClusterManager,
  {Plug.Cowboy, scheme: :http, plug: CodeExecutor.Router, ...}
]

# From config.exs - Cluster configuration
config :code_executor,
  node_name: System.get_env("NODE_NAME") || "executor@127.0.0.1",
  erlang_cookie: System.get_env("ERLANG_COOKIE"),
  cluster_enabled: System.get_env("CLUSTER_ENABLED") == "true",
  seed_nodes: parse_seed_nodes(System.get_env("SEED_NODES")),
  node_capabilities: %{
    languages: System.get_env("NODE_LANGUAGES") || "all",
    max_concurrent: String.to_integer(System.get_env("MAX_CONCURRENT") || "100")
  }
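The config above calls a parse_seed_nodes/1 helper that is not shown in the excerpt. As a hedged sketch, assuming SEED_NODES is a comma-separated list of node names, it might look something like this (the module name CodeExecutor.ConfigHelpers is hypothetical):

# Hypothetical sketch of the unshown helper: turns
# SEED_NODES="node1@10.0.0.1,node2@10.0.0.2" into a list of node atoms.
defmodule CodeExecutor.ConfigHelpers do
  def parse_seed_nodes(nil), do: []

  def parse_seed_nodes(value) when is_binary(value) do
    value
    |> String.split(",", trim: true)
    |> Enum.map(&String.trim/1)
    |> Enum.map(&String.to_atom/1)
  end
end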
Network Topology Dreams
Control Plane Network (10.0.0.0/24)
├─ orchestrator.vm (10.0.0.1)
│
Executor Network (10.0.1.0/24)
├─ fast-interpreted.vm (10.0.1.1)
├─ compiled-langs.vm (10.0.1.2)
├─ jvm-langs.vm (10.0.1.3)
├─ gpu-ml.vm (10.0.1.4)
└─ ... (10.0.1.N)
Cross-Region Federation (BGP)
├─ us-east (10.1.0.0/16)
├─ us-west (10.2.0.0/16)
├─ eu-central (10.3.0.0/16)
└─ ap-south (10.4.0.0/16)
Why Erlang Distribution?
A perfect match for unsandbox:
- Built-in clustering — Erlang nodes auto-discover & connect
- Location transparency — Call remote functions like local ones
- Fault tolerance — Nodes monitor each other, auto-reconnect
- No HTTP overhead — Binary protocol, direct memory transfer
- Process isolation — Each execution in separate Erlang process
How invocation works:
- Entry node receives HTTP request
- Uses :rpc.call to invoke execution on remote node
- Remote node executes locally, returns result
- No HTTP proxying, no JSON serialization
- Microsecond latency between nodes
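To make the invocation flow above concrete, here is a minimal sketch of an :rpc.call from the entry node to a connected executor node. The node-selection logic and the remote function CodeExecutor.ExecutorService.execute/3 are assumptions for illustration, not the repository's confirmed API.

# Sketch: run an execution on a remote node over Erlang distribution.
defmodule RpcSketch do
  def execute_remote(language, code, timeout) do
    # Node.list/0 returns the nodes this node is currently connected to
    node = pick_node(Node.list())

    # :rpc.call runs the MFA on the remote node and returns its result;
    # the final argument bounds how long we wait for the reply.
    :rpc.call(node, CodeExecutor.ExecutorService, :execute, [language, code, timeout], timeout + 1_000)
  end

  # Naive selection: random connected node, falling back to the local node
  defp pick_node([]), do: Node.self()
  defp pick_node(nodes), do: Enum.random(nodes)
end

Because the call rides on Erlang distribution rather than HTTP, the payload travels as Erlang terms with no JSON round trip, which is the "no HTTP proxying" point above.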
Launching LXD Containers: How It Actually Works Now
Creating a Golden Image (November 2025)
# Build Ubuntu 24.04 image with all 42+ languages
cd ~/git/unsandbox-code-executor/scripts
bash setup-lxd-ubuntu.sh
# This creates an LXD image called "unsandbox-ubuntu"
# containing all languages & Elixir/Phoenix executor
# Verify image
lxc image list | grep unsandbox
Launching Containers (New Way)
# Launch ephemeral container (auto-deletes when stopped)
lxc launch unsandbox-ubuntu exec-001 --ephemeral
# Execute code directly via lxc exec
lxc exec exec-001 -- python3 -c 'print(2+2)'
# Stop container (ephemeral = auto-deleted)
lxc stop exec-001
# Launch multiple containers for load distribution
for i in {1..5}; do
  lxc launch unsandbox-ubuntu exec-$i --ephemeral
done
Dream: Erlang Mesh Across LXD (Future)
# Container 1: Entry node
lxc launch unsandbox-ubuntu node1 --ephemeral
lxc exec node1 -- env NODE_NAME=node1@10.0.0.1 \
ERLANG_COOKIE=secret \
CLUSTER_ENABLED=true \
/opt/code_executor_phoenix/bin/code_executor_phoenix start
# Container 2: Executor node
lxc launch unsandbox-ubuntu node2 --ephemeral
lxc exec node2 -- env NODE_NAME=node2@10.0.0.2 \
ERLANG_COOKIE=secret \
CLUSTER_ENABLED=true \
SEED_NODES=node1@10.0.0.1 \
/opt/code_executor_phoenix/bin/code_executor_phoenix start
Universal Execution Mesh
Reality bent through a Mandelbrot set. A permacomputer adapts:
- LXD/LXC substrate discovered & working
- Ubuntu + Ubuntu native compatibility
- All 42+ languages including glibc-dependent ones
- Ephemeral containers for perfect cleanup
- Elixir/Phoenix integration with LXD API (in progress)
- Erlang mesh across containers (still a dream)
November 2025: Firecracker vsock broken. But a membrane grows stronger through adversity.
From Firecracker microVMs to LXD containers. From Alpine musl hell to Ubuntu glibc paradise. From closed ecosystem vaporware to open source reality.
A universal execution membrane persists. Substrate changes. Vision remains.