Two novel approaches to the same question—how do multiple AI agents collaborate on code?
Today's AI coding tools work in isolation. One agent, one codebase, one developer. But real software is built by teams. We explored two fundamentally different architectures for making AI development truly multiplayer.
A centralized approach: multiple AI agents share a single cloud-hosted codebase on Modal. A dual-LLM loop (Reader + Writer) ensures changes are validated before being applied. Real-time WebSocket collaboration keeps everyone in sync.
A decentralized approach: each developer keeps their own codebase. An AI agent watches file changes in real-time, extracts contracts, and automatically generates adapter code that bridges between independent projects.
Cloud-native multiplayer coding with shared state on Modal
"Add user authentication with JWT tokens" — the CLI reads the current codebase from the Modal Volume via REST API.
Selects relevant files, calls an LLM (Claude, GPT-4, or Gemini) with the code context, and streams the response in real time.
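A minimal sketch of that Reader step, assuming a hypothetical `/files` endpoint in front of the Volume and keyword matching as a stand-in for whatever relevance ranking the real CLI uses:

```python
import requests
import anthropic

API = "https://clawdal.example.modal.run"  # hypothetical REST endpoint in front of the Volume

def reader_step(task: str) -> str:
    # Pull the current codebase snapshot from the shared Modal Volume.
    files = requests.get(f"{API}/files", timeout=10).json()
    # Crude relevance filter: keep files whose path mentions a task keyword.
    context = "\n\n".join(
        f"### {f['path']}\n{f['content']}"
        for f in files
        if any(word in f["path"].lower() for word in task.lower().split())
    )
    chunks = []
    # Stream the proposed changes token-by-token, as the CLI does.
    with anthropic.Anthropic().messages.stream(
        model="claude-3-5-sonnet-latest",
        max_tokens=4096,
        messages=[{"role": "user", "content": f"Task: {task}\n\nCodebase:\n{context}"}],
    ) as stream:
        for text in stream.text_stream:
            print(text, end="", flush=True)
            chunks.append(text)
    return "".join(chunks)
```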
A second LLM call on Modal verifies the changes against the latest codebase state, then writes directly to the Volume. Because every write is serialized through a single Volume, Git merge conflicts can't arise.
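The Writer side, sketched as a Modal function. The volume and secret names are made up, but the mount-and-commit pattern is standard Modal:

```python
import modal

app = modal.App("clawdal-writer")
vol = modal.Volume.from_name("clawdal-codebase")  # hypothetical volume name

@app.function(volumes={"/codebase": vol}, secrets=[modal.Secret.from_name("anthropic")])
def write_step(path: str, proposed: str) -> bool:
    import anthropic

    # Read the freshest state from inside the container, not the CLI's stale copy.
    current = open(f"/codebase/{path}").read()
    verdict = anthropic.Anthropic().messages.create(
        model="claude-3-5-haiku-latest",
        max_tokens=16,
        messages=[{"role": "user", "content": (
            f"Current file:\n{current}\n\nProposed replacement:\n{proposed}\n\n"
            "Reply APPROVE only if the replacement is consistent with the current state."
        )}],
    ).content[0].text
    if "APPROVE" not in verdict:
        return False  # stale or hallucinated change: reject instead of writing
    with open(f"/codebase/{path}", "w") as f:
        f.write(proposed)
    vol.commit()  # persist so every other agent sees the change
    return True
```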
WebSocket collab rooms notify every connected agent about file changes, locks, and activity — enabling true real-time awareness.
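What an agent's room subscription could look like with the `websockets` library; the URL and event schema here are assumptions, since the rooms are only specified as broadcasting file changes, locks, and activity:

```python
import asyncio
import json
import websockets  # pip install websockets

async def join_room(room: str, agent_id: str) -> None:
    url = f"wss://clawdal.example.modal.run/rooms/{room}"  # hypothetical room URL
    async with websockets.connect(url) as ws:
        await ws.send(json.dumps({"type": "join", "agent": agent_id}))
        async for raw in ws:  # every broadcast from the room arrives here
            event = json.loads(raw)
            if event["type"] == "file_changed":
                print(f"{event['agent']} edited {event['path']}")
            elif event["type"] == "lock":
                print(f"{event['agent']} locked {event['path']}")

asyncio.run(join_room("project-x", "agent-1"))
```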
The Reader Agent reasons about intent; the Writer Agent verifies against fresh state. Two independent LLM calls catch hallucinated or stale changes before they reach the Volume.
Switch between Claude, GPT-4, and Gemini per-agent with a flag. Same interface, different models for different tasks.
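One plausible shape for that flag (the flag values and model IDs below are illustrative, not the tool's actual interface):

```python
import argparse

# Illustrative flag-to-model table covering the three providers named above.
MODELS = {
    "claude": ("anthropic", "claude-3-5-sonnet-latest"),
    "gpt4": ("openai", "gpt-4o"),
    "gemini": ("google", "gemini-1.5-pro"),
}

parser = argparse.ArgumentParser()
parser.add_argument("--model", choices=MODELS, default="claude",
                    help="which LLM backs this agent")
args = parser.parse_args()
provider, model_id = MODELS[args.model]  # same interface downstream, different model
```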
Tests run in ephemeral Modal containers with auto-detected frameworks. The codebase Volume is mounted read-only — tests can never corrupt project state.
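A sketch of the ephemeral runner. The framework detection is deliberately crude, and the read-only guarantee is represented here by never writing to the mount or committing the Volume:

```python
import modal

app = modal.App("clawdal-tests")
vol = modal.Volume.from_name("clawdal-codebase")  # hypothetical volume name
image = modal.Image.debian_slim().pip_install("pytest")

@app.function(image=image, volumes={"/codebase": vol}, timeout=600)
def run_tests() -> str:
    import pathlib
    import subprocess

    root = pathlib.Path("/codebase")
    # Toy framework auto-detection standing in for the real heuristics.
    cmd = ["pytest", "-q"] if list(root.rglob("test_*.py")) else ["npm", "test"]
    # The container is discarded after this call; the codebase is never written
    # to and vol.commit() is never called, so project state stays untouched.
    result = subprocess.run(cmd, cwd=root, capture_output=True, text=True)
    return result.stdout + result.stderr
```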
Modal's pay-per-second pricing means the entire backend costs practically nothing when idle. No servers to manage.
AI-mediated integration for developers working on independent codebases
Each developer runs a local daemon that watches for file saves with a 500ms debounce. Only changed files are sent over WebSocket.
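A sketch of the daemon's debounce using `watchdog`: the 500ms timer resets on every save, so a file is shipped only once it has gone quiet. The `send` callback (pushing the file over WebSocket) is left abstract:

```python
import asyncio
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer  # pip install watchdog

class DebouncedHandler(FileSystemEventHandler):
    """Coalesce rapid saves: a path is shipped only after 500ms of quiet."""

    def __init__(self, loop: asyncio.AbstractEventLoop, send):
        self.loop, self.send = loop, send
        self.pending: dict[str, asyncio.Task] = {}

    def on_modified(self, event):
        if not event.is_directory:
            # watchdog callbacks run on the observer thread; hop to the loop.
            self.loop.call_soon_threadsafe(self._schedule, event.src_path)

    def _schedule(self, path: str) -> None:
        if task := self.pending.get(path):
            task.cancel()  # another save arrived: restart the 500ms timer
        self.pending[path] = self.loop.create_task(self._flush(path))

    async def _flush(self, path: str) -> None:
        await asyncio.sleep(0.5)  # the 500ms debounce
        del self.pending[path]
        await self.send(path)  # only this changed file goes over the wire
```

Wiring it up is the usual watchdog pattern: `Observer().schedule(handler, project_root, recursive=True)`.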
A Cloudflare Durable Object collects the changes in a 5-second window, then Claude Haiku analyzes each file to extract exports, types, routes, and interfaces.
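The production version of this lives in a Durable Object; here's the same batching-plus-extraction logic sketched in Python, with an assumed JSON shape for the extracted contract:

```python
import asyncio
import json
import anthropic

EXTRACT_PROMPT = (
    "Return this file's public contract as JSON with keys "
    "'exports', 'types', 'routes', 'interfaces'.\n\n{code}"
)

async def extract_contract(path: str, content: str) -> dict:
    # Fast, cheap per-file extraction with Haiku.
    resp = await anthropic.AsyncAnthropic().messages.create(
        model="claude-3-5-haiku-latest",
        max_tokens=1024,
        messages=[{"role": "user", "content": EXTRACT_PROMPT.format(code=content)}],
    )
    return {"path": path, **json.loads(resp.content[0].text)}

class BatchWindow:
    """Collect changes for 5s after the first arrival, then analyze the batch."""

    def __init__(self):
        self.batch: dict[str, str] = {}
        self.timer: asyncio.Task | None = None

    def add(self, path: str, content: str) -> None:
        self.batch[path] = content  # a later save of the same file wins
        if self.timer is None:
            self.timer = asyncio.ensure_future(self._close())

    async def _close(self) -> None:
        await asyncio.sleep(5)  # the 5-second window
        batch, self.batch, self.timer = self.batch, {}, None
        contracts = [await extract_contract(p, c) for p, c in batch.items()]
        # ...hand the contracts to the matcher (next step)
```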
A non-LLM algorithm matches types across developers, discovers API provider-consumer pairs, and flags potential conflicts. Because there's no model in the loop, the same input always yields the same output.
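A sketch of that matcher under an assumed contract shape (each developer contributes declared `types`, served `routes`, and outgoing `calls`); sorted iteration and zero randomness are what make identical inputs give identical outputs:

```python
def match_contracts(contracts: dict[str, dict]) -> tuple[list, list]:
    """Pure rule-based matching: no LLM call anywhere in this path."""
    links, conflicts = [], []
    devs = sorted(contracts)  # fixed iteration order keeps output deterministic
    # Provider-consumer pairs: a route one developer serves and another calls.
    for provider in devs:
        served = {r["path"] for r in contracts[provider].get("routes", [])}
        for consumer in devs:
            if consumer == provider:
                continue
            for call in contracts[consumer].get("calls", []):
                if call["path"] in served:
                    links.append({"provider": provider, "consumer": consumer,
                                  "route": call["path"]})
    # Conflicts: the same type name declared with different shapes.
    for i, a in enumerate(devs):
        for b in devs[i + 1:]:
            shared = contracts[a].get("types", {}).keys() & contracts[b].get("types", {}).keys()
            for name in sorted(shared):
                if contracts[a]["types"][name] != contracts[b]["types"][name]:
                    conflicts.append({"type": name, "between": [a, b]})
    return links, conflicts
```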
Claude Sonnet generates type adapters, API stubs, and route mappings. These appear as read-only files in each developer's .connected/ folder.
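Landing the glue as read-only files might look like this; the `.connected/` location comes from the description above, and the brief unlock is just the obvious way to let the daemon replace a file it previously locked down:

```python
from pathlib import Path

def write_connector(dev_root: str, name: str, generated_code: str) -> None:
    """Write Sonnet-generated glue into .connected/ as a read-only file."""
    out_dir = Path(dev_root) / ".connected"
    out_dir.mkdir(exist_ok=True)
    out = out_dir / name
    if out.exists():
        out.chmod(0o644)  # briefly unlock a stale connector so we can replace it
    out.write_text(generated_code)
    out.chmod(0o444)  # chmod 444: developers fix their source, not the glue
```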
Three layers of change detection: a file-level debounce (500ms), a batch-level window (5s), and a contract diff. Internal refactoring that doesn't change public interfaces triggers zero connector updates.
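The first two layers appear in the sketches above. The contract-diff layer can be as simple as hashing the extracted public surface, so purely internal refactors hash identically and trigger nothing:

```python
import hashlib
import json

PUBLIC_KEYS = ("exports", "types", "routes", "interfaces")  # the extracted surface

def fingerprint(contract: dict) -> str:
    """Hash only the public contract; internal refactors leave it unchanged."""
    public = {k: contract.get(k) for k in PUBLIC_KEYS}
    blob = json.dumps(public, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def needs_regen(old: dict, new: dict) -> bool:
    # Regenerate connectors only when the public surface actually moved.
    return fingerprint(old) != fingerprint(new)
```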
Haiku for fast, cheap per-file extraction. Sonnet for high-quality connector generation. Optimizes both cost and quality.
One developer writes Python, another writes TypeScript — the system auto-generates matching type definitions in both languages plus API contracts.
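As a toy illustration of that bridging (the real system works from extracted contracts, not live objects), here's a Python dataclass rendered as its matching TypeScript interface:

```python
from dataclasses import dataclass, fields

PY_TO_TS = {int: "number", float: "number", str: "string", bool: "boolean"}

def to_ts_interface(cls) -> str:
    """Render a Python dataclass as the equivalent TypeScript interface."""
    lines = [f"export interface {cls.__name__} {{"]
    for f in fields(cls):
        lines.append(f"  {f.name}: {PY_TO_TS.get(f.type, 'unknown')};")
    lines.append("}")
    return "\n".join(lines)

@dataclass
class User:
    id: int
    email: str
    active: bool

print(to_ts_interface(User))
# export interface User {
#   id: number;
#   email: string;
#   active: boolean;
# }
```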
Generated files are chmod 444. Developers fix their source code, not the glue — the system regenerates connectors automatically.
Different tradeoffs for different team structures
| DIMENSION | CLAWDAL | CLAWDFLARE |
|---|---|---|
| Architecture | Centralized shared codebase | Decentralized per-developer repos |
| Codebase Model | Single Modal Volume | Shadow copies + local repos |
| Agent Role | Agents do the coding work | Agents generate bridge code |
| Collaboration | WebSocket rooms + file locks | Real-time contract extraction |
| Conflict Strategy | Lock files, serialize writes | Auto-generate adapters |
| LLM Usage | Read task + Write verification | Extract contracts + Generate connectors |
| Infrastructure | Modal (serverless Python) | Cloudflare Workers (edge) |
| Language Support | Any (agents write code) | Python + TypeScript (analyzed) |
| Best For | AI agent teams on one project | Human teams on separate projects |