TreeHacks 2026

Multiplayer
AI Development

Two novel approaches to the same question—how do multiple AI agents collaborate on code?

The Problem

AI agents are single-player

Today's AI coding tools work in isolation. One agent, one codebase, one developer. But real software is built by teams. We explored two fundamentally different architectures for making AI development truly multiplayer.


Clawdal

Claude + Modal

A centralized approach: multiple AI agents share a single cloud-hosted codebase on Modal. A dual-LLM loop (Reader + Writer) ensures changes are validated before being applied. Real-time WebSocket collaboration keeps everyone in sync.

Modal · FastAPI · Claude · WebSocket · Python

Clawdflare

Claude + Cloudflare

A decentralized approach: each developer keeps their own codebase. An AI agent watches file changes in real-time, extracts contracts, and automatically generates adapter code that bridges between independent projects.

Cloudflare Workers · Durable Objects · Claude · WebSocket · Python
Solution 1

Clawdal

Cloud-native multiplayer coding with shared state on Modal

System Architecture
[Architecture diagram. Local CLI: the developer sends a task/prompt. Modal backend: a FastAPI server (REST + WebSocket) handles project management and collaboration; the Reader Agent reads the codebase and calls the LLM with streaming; the Writer Agent validates, refines, and writes to the Volume. Modal services: a Modal Volume (persistent codebase), a Test Runner in isolated sandboxes with a read-only Volume mount, LLM providers (Claude / GPT-4 / Gemini), a Modal Dict (metadata + history), and a Collab Room (presence, locks, events).]

How it works

Developer sends a task

"Add user authentication with JWT tokens" — the CLI reads the current codebase from the Modal Volume via REST API.

Reader Agent reasons about changes

Selects relevant files, calls an LLM (Claude, GPT-4, or Gemini) with the code context, and streams the response in real-time.
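The project's actual selection logic isn't shown here; as an illustrative stand-in, a Reader agent might rank files by keyword overlap with the task before building the LLM context. `select_relevant_files` is a hypothetical name, not the project's API:

```python
from __future__ import annotations

def select_relevant_files(task: str, files: dict[str, str], top_k: int = 3) -> list[str]:
    """Rank files by how many task keywords appear in their contents."""
    keywords = {w.lower() for w in task.split() if len(w) > 3}
    scores = {
        path: sum(1 for kw in keywords if kw in body.lower())
        for path, body in files.items()
    }
    # Sort by score (descending), then path, for a deterministic ordering.
    ranked = sorted(scores, key=lambda p: (-scores[p], p))
    return [p for p in ranked[:top_k] if scores[p] > 0]
```

Only the selected files go into the prompt, keeping the LLM context small even on large codebases.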

Writer Agent validates & applies

A second LLM call on Modal verifies the changes against the latest codebase state, then writes directly to the Volume. No Git conflicts possible.
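The staleness check can be sketched as optimistic concurrency: the Writer only applies a change if the file still matches the snapshot the Reader worked from. A plain dict stands in for the Modal Volume, and `apply_if_fresh` is an assumed name for illustration:

```python
import hashlib

def apply_if_fresh(volume: dict[str, str], path: str, base_sha: str, new_body: str) -> bool:
    """Write only if the file is unchanged since the Reader read it."""
    current = volume.get(path, "")
    if hashlib.sha256(current.encode()).hexdigest() != base_sha:
        return False  # stale: another agent changed the file; re-run the Reader
    volume[path] = new_body
    return True
```

Because every write goes through one backend holding the authoritative Volume, there is no merge step and hence no Git conflict to resolve.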

Broadcast to all agents

WebSocket collab rooms notify every connected agent about file changes, locks, and activity — enabling true real-time awareness.
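The fan-out pattern behind a collab room can be sketched with asyncio queues standing in for WebSocket connections (the real system speaks WebSocket; `CollabRoom` here is an in-memory illustration):

```python
import asyncio

class CollabRoom:
    """Fan out events to every connected agent, like a WebSocket collab room."""
    def __init__(self) -> None:
        self.subscribers: set = set()

    def connect(self) -> asyncio.Queue:
        q: asyncio.Queue = asyncio.Queue()
        self.subscribers.add(q)
        return q

    async def broadcast(self, event: dict) -> None:
        for q in self.subscribers:
            await q.put(event)

async def demo() -> list:
    room = CollabRoom()
    a, b = room.connect(), room.connect()
    await room.broadcast({"type": "file_changed", "path": "auth.py"})
    return [a.get_nowait(), b.get_nowait()]
```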

Key innovations

Dual LLM Loop

Reader Agent reasons about intent, Writer Agent verifies against fresh state. Two independent LLM calls prevent hallucinated or stale changes.

Pluggable LLM Providers

Switch between Claude, GPT-4, and Gemini per-agent with a flag. Same interface, different models for different tasks.
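A provider registry keyed by the flag value is one way to get this "same interface, different models" behavior. The lambdas below stand in for real vendor SDK calls; the registry shape is an assumption, not the project's code:

```python
from typing import Callable

# Hypothetical registry: each entry would wrap a vendor SDK in practice.
PROVIDERS: dict = {
    "claude": lambda prompt: f"[claude] {prompt}",
    "gpt-4":  lambda prompt: f"[gpt-4] {prompt}",
    "gemini": lambda prompt: f"[gemini] {prompt}",
}

def complete(prompt: str, provider: str = "claude") -> str:
    """Same call signature regardless of which --provider flag an agent got."""
    try:
        return PROVIDERS[provider](prompt)
    except KeyError:
        raise ValueError(f"unknown provider: {provider}") from None
```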

Isolated Test Sandboxes

Tests run in ephemeral Modal containers with auto-detected frameworks. The codebase Volume is mounted read-only — tests can never corrupt project state.

~$9/month Cloud Compute

Modal's pay-per-second pricing means the entire backend costs practically nothing when idle. No servers to manage.

Solution 2

Clawdflare

AI-mediated integration for developers working on independent codebases

System Architecture
[Architecture diagram. Developers: Developer A and Developer B each run a file watcher (500ms debounce) over their local project files, with auto-generated adapters landing read-only (chmod 444) in a .connected/ folder. Cloudflare edge: a Durable Object keeps a shadow codebase with a 5s batch window, WebSocket broadcast, and edge SQLite storing files, contracts, and state. AI analysis: a Contract Extractor (Claude Haiku) pulls exports, routes, and types; a deterministic Resolver matches type and route pairs; a Connector Generator (Claude Sonnet) emits adapters and stubs.]

How it works

File watcher detects changes

Each developer runs a local daemon that watches for file saves with a 500ms debounce. Only changed files are sent over WebSocket.
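A real daemon would sit on top of a file-watching library; the debounce itself reduces to "fire only after 500ms of quiet." A minimal sketch with an injectable clock so the behavior is testable (`Debouncer` is an illustrative name):

```python
from __future__ import annotations

class Debouncer:
    """Coalesce rapid file saves: ready only after `wait` seconds of quiet."""
    def __init__(self, wait: float, clock) -> None:
        self.wait = wait
        self.clock = clock        # injectable time source, e.g. time.monotonic
        self.last_event: float | None = None

    def on_save(self) -> None:
        self.last_event = self.clock()

    def ready(self) -> bool:
        return self.last_event is not None and self.clock() - self.last_event >= self.wait
```

Each save resets the timer, so a burst of Ctrl+S presses produces a single upload.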

Server batches & extracts contracts

The Durable Object collects changes in a 5-second window, then Claude Haiku analyzes each file to extract exports, types, routes, and interfaces.
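In production the Durable Object would drive the window with its alarm API; the grouping logic itself can be shown as a pure function over timestamped change events (a sketch, not the deployed code):

```python
from __future__ import annotations

def batch_by_window(events: list, window: float = 5.0) -> list:
    """Group (timestamp, path) change events into flush batches `window` seconds wide."""
    batches: list = []
    window_start: float | None = None
    for ts, path in events:
        if window_start is None or ts - window_start >= window:
            batches.append([])   # open a new batch
            window_start = ts
        batches[-1].append(path)
    return batches
```

Each closed batch is what gets handed to Claude Haiku for contract extraction, so a flurry of saves costs one analysis pass instead of one per file.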

Deterministic resolver finds matches

A non-LLM algorithm matches types across developers, discovers API provider-consumer pairs, and identifies potential conflicts. Same input always yields same output.
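The determinism comes from matching purely on extracted names and kinds, with sorted iteration so output order is stable. A simplified sketch of the idea (the real resolver's contract schema is richer than this):

```python
def match_contracts(a: dict, b: dict) -> list:
    """Pair exported names across two developers' contracts, by name and kind.

    No LLM involved: the same contracts always resolve to the same pairs.
    """
    pairs = []
    for name in sorted(set(a) & set(b)):
        kind_a, kind_b = a[name]["kind"], b[name]["kind"]
        if kind_a == "route" and kind_b == "call":
            pairs.append(("provider-consumer", name, "a->b"))
        elif kind_a == kind_b == "type":
            pairs.append(("shared-type", name, "both"))
    return pairs
```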

AI generates connector code

Claude Sonnet generates type adapters, API stubs, and route mappings. These appear as read-only files in each developer's .connected/ folder.

Key innovations

Three-Level Debounce

File-level (500ms), batch-level (5s), and contract-diff. Internal refactoring that doesn't change public interfaces triggers zero connector updates.
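The contract-diff level can be sketched as a fingerprint over the extracted public interface: if the canonical contract hashes the same before and after a change, no connector work is triggered. The extractor decides what counts as public; the hashing shown here is illustrative:

```python
import hashlib
import json

def contract_fingerprint(contract: dict) -> str:
    """Hash the public interface; implementation details never reach the contract."""
    canonical = json.dumps(contract, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def needs_regeneration(old: dict, new: dict) -> bool:
    return contract_fingerprint(old) != contract_fingerprint(new)
```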

Two-Phase LLM Strategy

Haiku for fast, cheap per-file extraction. Sonnet for high-quality connector generation. Optimizes both cost and quality.

Cross-Language Bridging

One developer writes Python, another writes TypeScript — the system auto-generates matching type definitions in both languages plus API contracts.
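The bridging direction Python-to-TypeScript can be sketched as a type-name mapping plus codegen; the mapping table below is a deliberately tiny stand-in for what the real generator handles:

```python
# Simplified primitive mapping; the real system also covers containers, optionals, etc.
PY_TO_TS = {"str": "string", "int": "number", "float": "number", "bool": "boolean"}

def to_ts_interface(name: str, fields: dict) -> str:
    """Emit a TypeScript interface mirroring a Python type's fields."""
    lines = [f"export interface {name} {{"]
    for field, py_type in fields.items():
        lines.append(f"  {field}: {PY_TO_TS.get(py_type, 'unknown')};")
    lines.append("}")
    return "\n".join(lines)
```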

Read-Only Connectors

Generated files are chmod 444. Developers fix their source code, not the glue — the system regenerates connectors automatically.

Analysis

Two philosophies, one goal

Different tradeoffs for different team structures

[Comparison diagram. Clawdal — shared codebase, agents do tasks: Agents 1, 2, and 3 all read and write one shared codebase on a Modal Volume. Clawdflare — separate codebases, AI-bridged: Dev A (Project 1) and Dev B (Project 2) each keep their own independent repo, and an AI bridge auto-generates the connector glue code between them.]
Dimension         | Clawdal                        | Clawdflare
Architecture      | Centralized shared codebase    | Decentralized per-developer repos
Codebase model    | Single Modal Volume            | Shadow copies + local repos
Agent role        | Agents do the coding work      | Agents generate bridge code
Collaboration     | WebSocket rooms + file locks   | Real-time contract extraction
Conflict strategy | Lock files, serialize writes   | Auto-generate adapters
LLM usage         | Read task + write verification | Extract contracts + generate connectors
Infrastructure    | Modal (serverless Python)      | Cloudflare Workers (edge)
Language support  | Any (agents write code)        | Python + TypeScript (analyzed)
Best for          | AI agent teams on one project  | Human teams on separate projects
Built With

Technology Stack

Claude · Modal · FastAPI · Cloudflare Workers · Durable Objects · Python · TypeScript · WebSocket · SQLite · Click · Rich · Next.js