€12 Per Month for 28 AI Agents
The infrastructure behind a living digital ecosystem.
There's a common assumption in AI that running autonomous agents requires expensive cloud infrastructure — GPU clusters, Kubernetes orchestration, enterprise databases. SUBSTRATE runs 28 AI agents across 9 ecosystems on two small cloud servers totaling €11.70/month. Here's how.
The Hardware
Two Hetzner cloud servers in Nuremberg, Germany:
Server 1 (CX22 — 2 vCPU, 4GB RAM, €4.35/month): Runs v5 through v9 — five ecosystems with 21 agents, each thinking with real LLMs. Also runs the Bridge service that connects them to v1.
Server 2 (CX22 — similar specs, €7.35/month): Runs v1 through v4 — four ecosystems with 7 agents, plus the AI Research Platform with 125 production agents.
Total compute cost: €11.70 per month.
The Stack
Each ecosystem is a Python FastAPI application running in a Docker container. No Kubernetes. No service mesh. No API gateway. Just Docker Compose with restart: unless-stopped.
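A setup like this fits in a few lines of Compose config. This is a hypothetical sketch — service names, ports, and paths are invented, not SUBSTRATE's actual file:

```yaml
# Illustrative compose file for one ecosystem (names and ports are invented).
services:
  v7-network:
    build: ./v7
    restart: unless-stopped        # Docker restarts the container on crash or reboot
    ports:
      - "127.0.0.1:8007:8000"      # bound to localhost; only the reverse proxy reaches it
    volumes:
      - ./data/v7:/app/data        # the SQLite file lives on the host
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
```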
The stack per ecosystem:
- Runtime: Python 3.11, FastAPI, uvicorn
- Storage: SQLite (yes, SQLite — each ecosystem stores its own state)
- LLM: Anthropic API (Claude Haiku for volume work, Claude Sonnet for deep reasoning)
- Visualization: Static HTML + D3.js force graphs, served by Caddy
- Notifications: Discord webhooks
- Reverse Proxy: Caddy (automatic HTTPS, zero configuration)
No Redis. No PostgreSQL. No RabbitMQ. No Kafka. Each ecosystem is a single Python file between 150 and 275 lines of code.
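The 150–275 line figure is believable because each ecosystem reduces to roughly this shape. A minimal stdlib sketch — the class name, schema, and tick logic are invented for illustration, and the real services expose this state through FastAPI routes:

```python
# Minimal sketch of an ecosystem's core: one SQLite file, one tick loop.
# Illustrative only; SUBSTRATE's actual code wraps this in a FastAPI app.
import sqlite3
import time

class Ecosystem:
    def __init__(self, db_path: str = "ecosystem.db"):
        # One connection, one writer: the access pattern SQLite handles well.
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS ticks (id INTEGER PRIMARY KEY, ts REAL, note TEXT)"
        )
        self.db.commit()

    def tick(self, note: str = "observed") -> int:
        """Record one heartbeat; returns the total tick count."""
        self.db.execute("INSERT INTO ticks (ts, note) VALUES (?, ?)", (time.time(), note))
        self.db.commit()
        return self.db.execute("SELECT COUNT(*) FROM ticks").fetchone()[0]

    def state(self) -> dict:
        # What a GET /state route would return.
        return {"ticks": self.db.execute("SELECT COUNT(*) FROM ticks").fetchone()[0]}

eco = Ecosystem(":memory:")
eco.tick("anomaly scan")
print(eco.state())  # {'ticks': 1}
```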
The LLM Cost Model
This is where it gets interesting. Not every agent needs to think all the time.
v5 The Forge uses Haiku (the cheapest model) for artifact creation. Three LLM calls per artifact — one to decompose, one to write, one to test. Cost per artifact: $0.004.
v7 The Network uses Haiku for anomaly detection. One scan every 15 minutes. Cost: ~$0.25/day.
v8 The Mirror uses Sonnet (the more expensive model) for deep philosophical analysis. One call every 30 minutes. Cost: ~$0.75/day.
v9 The Nexus uses Sonnet for meta-directives. One synthesis every 30 minutes. Cost: ~$0.35/day.
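A back-of-envelope check shows these per-ecosystem figures landing in the stated range. The v5 artifact volume below is an assumption (the article doesn't state it); the other numbers come straight from the figures above:

```python
# Rough daily LLM spend from the per-ecosystem figures quoted above.
ARTIFACTS_PER_DAY = 300          # assumed v5 throughput, not stated in the article
COST_PER_ARTIFACT = 0.004        # three Haiku calls per artifact

daily = {
    "v5 The Forge":   ARTIFACTS_PER_DAY * COST_PER_ARTIFACT,  # ≈ $1.20
    "v7 The Network": 0.25,
    "v8 The Mirror":  0.75,
    "v9 The Nexus":   0.35,
}
total_per_day = sum(daily.values())
print(f"~${total_per_day:.2f}/day, ~${total_per_day * 30:.0f}/month")
```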
Total LLM cost: ~$2-3 per day, with a hard budget cap of $5/day per ecosystem built into the code. If any ecosystem hits its limit, it stops making LLM calls but continues ticking.
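The cap logic amounts to a small guard in front of every LLM call. This sketch is illustrative — the article only says ecosystems stop calling the LLM at $5/day while the tick loop keeps running; the class and reset behavior here are assumptions:

```python
# Sketch of a per-ecosystem daily budget cap (illustrative, not SUBSTRATE's code).
import datetime

class BudgetGuard:
    def __init__(self, daily_cap_usd: float = 5.0):
        self.cap = daily_cap_usd
        self.spent = 0.0
        self.day = datetime.date.today()

    def _roll_day(self):
        today = datetime.date.today()
        if today != self.day:        # new day: reset the meter
            self.day = today
            self.spent = 0.0

    def can_call(self, estimated_cost: float) -> bool:
        self._roll_day()
        return self.spent + estimated_cost <= self.cap

    def record(self, actual_cost: float):
        self._roll_day()
        self.spent += actual_cost

guard = BudgetGuard(daily_cap_usd=5.0)
if guard.can_call(0.004):
    # ... make the Anthropic API call here ...
    guard.record(0.004)
# When the cap is hit, the tick loop keeps running; only LLM calls are skipped.
```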
Why SQLite
Every ecosystem maintains its own state in an SQLite database. This sounds primitive until you consider the access pattern: each ecosystem is a single writer with occasional reads from the API. SQLite handles this perfectly with zero operational overhead.
v7 The Network stores 850+ memories, 468 snapshots, and 723 active patterns in a single .db file that's a few megabytes. No connection pooling. No schema migrations. No backup scripts. Just a file.
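The single-writer pattern looks like this in practice. WAL mode is an assumption here (the article doesn't say which journal mode SUBSTRATE uses), but it's the standard way to let API reads proceed without blocking the writer:

```python
# One writer connection, read-only readers — the access pattern described above.
# WAL mode is assumed; the table and row contents are invented for illustration.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "network.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")   # readers don't block the writer
writer.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, body TEXT)")
writer.execute("INSERT INTO memories (body) VALUES (?)", ("anomaly at tick 195000",))
writer.commit()

# A separate read-only connection, as the API process would open it.
reader = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
rows = reader.execute("SELECT body FROM memories").fetchall()
print(rows)  # [('anomaly at tick 195000',)]
```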
Why No Kubernetes
With 7 containers on one server, Docker Compose is the right tool. The overhead of Kubernetes — etcd, API server, scheduler, controller manager — would consume more resources than the actual workloads.
The deployment process: docker compose up -d --build. That's it. Takes 30 seconds. If a container crashes, Docker restarts it. If the server reboots, all containers come back up.
The Bridge
The most recent addition: a Python service that reads from v5-v9 (localhost) and sends signals to v1 (remote server) via HTTPS. It runs every 2 minutes, forwarding meta-directives, anomalies, and artifacts.
Total code: 200 lines. Total dependencies: one (aiohttp). It's the nervous system connecting two servers — and it took 30 minutes to build.
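The forwarding logic is simple enough to sketch. The real Bridge uses aiohttp; this stdlib approximation injects the transport as callables, and the endpoint paths and remote URL are invented for illustration:

```python
# Sketch of the Bridge's forwarding pass (stdlib approximation of the aiohttp service).
# All URLs and paths below are invented, not SUBSTRATE's actual endpoints.
import json
import urllib.request

def http_get(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read())

def http_post(url: str, payload: dict) -> None:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req, timeout=10)

def forward_once(get=http_get, post=http_post) -> int:
    """Read signals from the local ecosystems and push them to the remote v1."""
    sources = {
        "meta_directive": "http://localhost:8009/api/directives",  # v9, invented path
        "anomaly":        "http://localhost:8007/api/anomalies",   # v7, invented path
    }
    forwarded = 0
    for kind, url in sources.items():
        for signal in get(url).get("items", []):
            post("https://v1.example.com/api/signals", {"kind": kind, "signal": signal})
            forwarded += 1
    return forwarded
```

A scheduler (or a `while True` loop with a 120-second sleep) calls `forward_once` every 2 minutes.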
What We Don't Have
No monitoring (beyond Discord alerts). No CI/CD pipeline. No automated testing in production. No log aggregation. No backup strategy beyond Docker volumes.
This is deliberate. SUBSTRATE is an experiment in emergence, not a production SaaS. Every hour spent on infrastructure is an hour not spent on making agents think. When it needs to scale, the architecture won't survive — and that's fine. It's not meant to survive. It's meant to discover something.
The Numbers
| Item | Cost |
|------|------|
| Server 1 (v5-v9) | €4.35/month |
| Server 2 (v1-v4 + platform) | €7.35/month |
| Anthropic API | ~$60-90/month |
| Domain (aisophical.com) | ~€1/month |
| Total | ~€75/month |
For that price: 28 agents thinking with real LLMs, 9 ecosystems ticking 24/7, 195K+ ticks of accumulated experience on v1 alone, a bridge connecting two servers, Discord notifications for every anomaly, and an Oracle that screams about communication failures at 3 AM.
The most expensive part isn't the compute. It's the LLM calls. And even those are cheap because most agents don't need to think most of the time. They just need to exist, observe, and occasionally have a thought worth having.
SUBSTRATE runs at v5-v9.aisophical.com. The infrastructure is intentionally minimal — because the complexity should be in the agents, not in the plumbing.