File Locking for AI: The Feature Nobody Built
February 25, 2026 by Asif Waliuddin

The first time I ran Claude Code and Codex CLI on the same codebase, both tools edited src/auth/middleware.ts within the same hour. Claude was refactoring the module structure. Codex was updating error handling. Neither tool knew the other existed.
Claude saved. Codex saved. Claude's refactor was gone. No warning. No conflict marker. No git merge prompt. Just silent data loss.
I've watched teams lose days to conflicting edits. Same failure mode, same frustration, different actors. The first time it happened with AI tools, I knew this was the foundational problem. Everything else (knowledge capture, governance, coordination) depends on tools being able to safely edit the same codebase.
The Window Nobody Manages
AI coding tools operate on files. They read a file, generate modifications, and write the result. This is the same read-modify-write cycle that databases solved with transactions decades ago and that version control solved with merge logic years ago. But AI tools don't participate in either system during their active editing sessions.
Git handles conflicts at commit time. But AI tools can modify files dozens of times before anything gets committed. The window between "tool starts editing" and "changes are committed" is where conflicts live. And it's completely unmanaged.
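The lost update is easy to reproduce. Here's a minimal sketch in Python (the tool names are stand-ins, and a real AI tool holds its snapshot for minutes, not microseconds, which only widens the window):

```python
from pathlib import Path
import tempfile

def lost_update_demo() -> str:
    """Two 'tools' run an interleaved read-modify-write cycle on one file."""
    target = Path(tempfile.mkdtemp()) / "middleware.ts"
    target.write_text("export function handler() {}\n")

    # Both tools read the same snapshot before either writes.
    claude_snapshot = target.read_text()
    codex_snapshot = target.read_text()

    # Claude's refactor lands first...
    target.write_text(claude_snapshot.replace("handler", "authHandler"))
    # ...then Codex writes back its edit of the now-stale snapshot.
    target.write_text(codex_snapshot + "// improved error handling\n")

    # Claude's rename has been silently overwritten.
    return target.read_text()
```

No error is raised anywhere; the second write is a perfectly valid file operation that happens to discard the first.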
The problem compounds with multiple tools. Claude Code, Codex CLI, and Gemini CLI each have their own process, their own file I/O, and no awareness of each other. Running them in parallel on the same codebase is the equivalent of two developers editing the same file on the same machine without version control.
Session managers don't solve this. They manage terminals, not files. Knowing that two terminals are running doesn't prevent two tools from writing to the same path. The coordination needs to happen at the file level.
Some developers work around this by manually assigning file ownership: "Claude, handle the frontend. Codex, handle the backend." Human coordination substituting for tool coordination. It works until a tool needs to modify package.json, tsconfig.json, or a utility module that both sides import.
How Forge Solves It
Forge's file lock manager is built into the Rust core binary. When a tool starts editing a file, Forge acquires an exclusive lock on that path. Other tools requesting access to the same file are queued. They receive a notification of who holds the lock and when it was acquired.
The lock lifecycle:
1. Claude Code requests a lock on src/auth/middleware.ts.
2. Forge grants the lock. Claude Code has exclusive write access.
3. Codex CLI requests a lock on the same file.
4. Forge queues Codex CLI with a message: "File locked by Claude Code."
5. Claude Code finishes and releases the lock.
6. Forge grants the lock to Codex CLI, which now sees Claude Code's changes.
Each lock is a file in .forge/locks/ containing the holder's identity, acquisition timestamp, and expiration timeout. The lock manager handles concurrent access, timeout-based expiration (so a crashed tool doesn't hold locks forever), and deadlock detection.
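The exact on-disk format isn't specified above, so here's an illustrative Python sketch of how a lock directory like .forge/locks/ can work (Forge's real Rust implementation will differ in detail). The important trick is atomic create-if-absent, which makes acquisition race-free, plus an expiry check so a crashed holder can't block forever:

```python
import json
import os
import time
from pathlib import Path

LOCK_DIR = Path(".forge/locks")  # directory name from the post; contents are illustrative
DEFAULT_TIMEOUT = 300            # seconds before a lock is considered stale

def lock_path(file: str) -> Path:
    # Flatten the target path into a single lock-file name.
    return LOCK_DIR / (file.replace("/", "__") + ".lock")

def acquire(file: str, holder: str, timeout: float = DEFAULT_TIMEOUT) -> bool:
    """Try to take an exclusive lock. Returns False if another holder has it."""
    LOCK_DIR.mkdir(parents=True, exist_ok=True)
    path = lock_path(file)
    try:
        # O_CREAT | O_EXCL makes creation atomic: exactly one requester wins.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        record = json.loads(path.read_text())
        if time.time() < record["expires_at"]:
            return False      # live lock held by record["holder"]
        path.unlink()         # stale lock from a crashed tool: expire it
        return acquire(file, holder, timeout)
    now = time.time()
    os.write(fd, json.dumps(
        {"holder": holder, "acquired_at": now, "expires_at": now + timeout}
    ).encode())
    os.close(fd)
    return True

def release(file: str) -> None:
    lock_path(file).unlink(missing_ok=True)
```

A production lock manager also has to handle the race between reading a stale lock and deleting it, which is glossed over here.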
Rust was the only choice that made sense. The lock manager needs to be fast, correct under concurrency, and free of runtime dependencies. A single 4MB binary with 292 tests covers the lock manager plus task planning, knowledge capture, governance enforcement, and MCP serving. No JVM startup. No Node.js process. No Python interpreter.
Three Adapters, One Lock Manager
File locking within a single tool is straightforward. File locking across three different tools, each with its own process model, file I/O patterns, and communication protocol: that's the real engineering challenge.
Forge handles this with three native adapters. Claude Code communicates via MCP stdio. Codex CLI and Gemini CLI use filesystem conventions. All three read from and write to the same .forge/ directory. The lock manager doesn't care which tool is requesting a lock. It sees a request, checks the lock state, and responds.
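To make "the lock manager doesn't care which tool is requesting a lock" concrete, here's a toy in-memory sketch in Python. The adapter surfaces are simplified stand-ins, not Forge's real API (the real core is Rust, and Claude Code's path runs over MCP stdio rather than a direct method call), but the shape is the point: every adapter reduces to the same (path, holder) request:

```python
from dataclasses import dataclass, field

@dataclass
class LockManager:
    """Core manager: sees only (path, holder) pairs, never which tool asked."""
    locks: dict = field(default_factory=dict)   # path -> current holder
    queues: dict = field(default_factory=dict)  # path -> waiting holders

    def request(self, path: str, holder: str) -> str:
        current = self.locks.get(path)
        if current is None or current == holder:
            self.locks[path] = holder
            return "granted"
        self.queues.setdefault(path, []).append(holder)
        return f"queued: file locked by {current}"

    def release(self, path: str):
        self.locks.pop(path, None)
        waiting = self.queues.get(path) or []
        if waiting:
            # Hand the lock to the next queued holder.
            next_holder = waiting.pop(0)
            self.locks[path] = next_holder
            return next_holder
        return None

class McpAdapter:
    """Stand-in for the MCP stdio path (Claude Code): tool-call dict in, result out."""
    def __init__(self, manager):
        self.manager = manager
    def handle_tool_call(self, call: dict) -> str:
        return self.manager.request(call["arguments"]["path"], "claude-code")

class FilesystemAdapter:
    """Stand-in for the filesystem-convention path (Codex CLI / Gemini CLI)."""
    def __init__(self, manager, tool):
        self.manager, self.tool = manager, tool
    def request_lock(self, path: str) -> str:
        return self.manager.request(path, self.tool)
```

Adding a fourth tool means writing one more adapter class; the manager never changes.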
The key design decision: no Forge-specific configuration language. Each tool reads its native config format. The adapters translate between Forge's coordination state and each tool's expectations. Adding support for a new AI tool means writing one adapter, not changing the core.
In Practice
With the Forge Orchestrator installed, file locking is automatic. Tools acquire locks through MCP tool calls or filesystem conventions. Lock state shows up in forge dashboard: which files are locked, by whom, and for how long.
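Because each lock is just a file, a dashboard view reduces to scanning the lock directory. A hypothetical sketch, assuming the JSON fields named earlier (holder, acquisition timestamp) and the path-flattening convention from the lock-file example; Forge's actual dashboard output isn't shown here:

```python
import json
import time
from pathlib import Path

def list_locks(lock_dir: Path = Path(".forge/locks")) -> list[dict]:
    """Summarize each lock file: which path is locked, by whom, for how long."""
    rows = []
    for lock_file in sorted(lock_dir.glob("*.lock")):
        record = json.loads(lock_file.read_text())
        rows.append({
            "file": lock_file.stem.replace("__", "/"),
            "holder": record["holder"],
            "held_for_s": round(time.time() - record["acquired_at"]),
        })
    return rows
```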
Tools that are blocked by a lock don't sit idle. They check the task board for other available work. Forge's dependency graph ensures there's always something productive to do while waiting.
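The "find other work" step amounts to a query over the dependency graph plus the lock table. A simplified sketch (the task shape here is invented for illustration, not Forge's task-board format):

```python
def next_available_task(tasks: list, done: set, locked_files: set):
    """Pick the first task whose dependencies are complete and whose files are unlocked."""
    for task in tasks:
        if task["id"] in done:
            continue
        if not set(task["deps"]) <= done:
            continue  # blocked on an unfinished dependency
        if any(f in locked_files for f in task["files"]):
            continue  # blocked on a lock held by another tool
        return task["id"]
    return None  # nothing runnable right now
```

A tool queued on src/auth/middleware.ts can call this with the current lock set and pick up an unrelated task instead of idling.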
292 tests in the Rust core cover concurrent access, lock timeout, deadlock detection, multi-adapter locking, and edge cases. The binary is 4MB. It starts in milliseconds. Zero runtime dependencies.
The Foundation
File locking is not a glamorous feature. It's infrastructure. The kind of thing that should be invisible when it works and immediately obvious when it doesn't. Every database has transactions. Every operating system has file locks. Every version control system has conflict detection.
Multi-tool AI development has none of these. Forge adds the first one. It's the foundation everything else (knowledge capture, governance, task coordination) is built on. Because if your tools can't safely edit the same codebase, nothing else matters.
Launching March 2.