The 23-Year Insight Behind Forge
February 25, 2026 by Asif Waliuddin

In 2018, I ran the global security program across a $1B digital transformation at Walgreens. Dozens of teams. Hundreds of dependencies. Thousands of stakeholders. A constant stream of decisions that had to be tracked, communicated, and enforced across organizational boundaries.
The program didn't fail. But the coordination overhead nearly killed it.
Knowledge evaporated between workstream handoffs. Teams made decisions that contradicted decisions made the day before. Dependencies lived in "enterprise tools" that nobody trusted enough to keep current. Governance rules everyone agreed on were enforced manually… which meant they were enforced inconsistently.
The technology worked. The people were skilled. The coordination was the hard part.
Over the next several years, I watched the same failure mode repeat across industries, company sizes, and technology stacks. Capable individual contributors; insufficient coordination infrastructure. And the same solution every time: someone had to be the program manager… tracking dependencies, maintaining shared knowledge, enforcing best practices, and giving everyone visibility into what was happening.
The Pattern Repeats
When AI coding tools started landing in 2023 and 2024, I jumped in immediately.
At first it was copy-paste sessions with ChatGPT. Then IDE extensions that felt like autocomplete on steroids. Then full agent environments where AI isn't a sidecar, it's the center console.
Each step was a revelation. Each tool was individually impressive. And each one led me back to the same conclusion I'd reached managing human teams: the individual contributor isn't the bottleneck. Coordination is.
Then I ran two tools on the same codebase.
Claude Code refactored a module. Codex CLI updated tests for that same module… against the pre-refactor interface. Both saved their changes. The tests failed. Neither tool knew why.
So much for the Silicon Valley hyperbole about agents replacing real humans.
I'd spent 23 years watching this exact scenario play out with talented teams. Two developers editing the same file without a coordination mechanism. Two groups making conflicting architectural decisions because the "handoff" was a Slack or Teams message nobody read, since the right person was never tagged. Governance checks that ran after the fact instead of during the work. A reactive operating model masquerading as excellence.
The AI tools were faster. More tireless. And they failed in the same ways human teams fail. Not because they lacked skill… because they lacked coordination. Same cognitive overload too, just renamed from "information fatigue" to "context rot."
The Insight
Multi-agent AI coordination is enterprise program management.
Not metaphorically. Structurally. The failure modes map one-to-one:
| Human Team Failure | AI Agent Failure |
|---|---|
| Developers overwrite each other's work | Agents edit the same file simultaneously |
| Knowledge gets lost in handoffs | Decisions disappear between sessions |
| No visibility into cross-team progress | No dashboard across agents |
| Governance enforced at audit time | Quality checked at PR time |
| Context stays siloed | Each tool starts from scratch |
The solutions map one-to-one too:
| Program Management Solution | Forge Solution |
|---|---|
| Resource allocation | File locking |
| Knowledge management | Knowledge flywheel |
| Status reporting | Dashboard + drift detection |
| Quality gates | Continuous governance |
| Program plan | Task board with dependencies |
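The task-board row is the one that maps most directly onto the Claude Code / Codex CLI failure above: if "update tests" depends on "refactor module," no agent should be handed the test task until the refactor is done. Here's a minimal sketch of that readiness check — the names and structure are illustrative, not Forge's actual schema:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical dependency-aware task board: a task is ready only when
// every task it depends on is already done.
struct Board {
    deps: HashMap<&'static str, Vec<&'static str>>, // task -> prerequisites
    done: HashSet<&'static str>,
}

impl Board {
    // A task with no entry in `deps` has no prerequisites and is always ready.
    fn ready(&self, task: &str) -> bool {
        self.deps
            .get(task)
            .map_or(true, |ds| ds.iter().all(|d| self.done.contains(d)))
    }
}

fn main() {
    let mut board = Board {
        deps: HashMap::from([
            // Updating tests waits on the refactor landing first.
            ("update-tests", vec!["refactor-module"]),
        ]),
        done: HashSet::new(),
    };
    // Before the refactor lands, the test-update task is withheld from agents.
    assert!(!board.ready("update-tests"));
    board.done.insert("refactor-module");
    // Once the dependency is done, the task becomes available.
    assert!(board.ready("update-tests"));
}
```

The point isn't the data structure; it's that the ordering lives in shared state instead of in someone's head or a missed chat message.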
This isn't an analogy stretched to fit. These are the same coordination problems in a different substrate. The patterns that prevent billion-dollar programs from drifting work the same way for multi-agent development. I know because I spent 23 years solving the human version.
Why Background Matters
Most orchestration tools today are built by engineers. They build frameworks. Configuration languages. Protocols. All valid approaches… the same way it's valid to build project management software around a methodology.
But there's a difference between building project management software and having lived project management.
One optimizes for features. The other knows which failures actually happen, which governance rules actually get followed, and which coordination patterns survive contact with real work.
Every feature in Forge traces back to a specific failure I've seen in the wild.
File locking exists because I've watched teams lose days to conflicting edits. The knowledge flywheel exists because I've watched decisions evaporate between sprints. Drift detection exists because I've watched scope creep go unnoticed until the budget review. Governance hooks exist because I've watched audit findings pile up from shortcuts nobody caught in real time.
And here's the honest edge: you don't learn these instincts by reading a spec. You learn them by living through enough broken coordination to recognize the shape of failure before it's visible.
That's what Forge encodes.
What We Built
Forge is one product with three graduated entry points.
Level 1 is the plugin. A governance layer for Claude Code: commands, agents, skills, and governance hooks. Installs fast. Zero dependencies. Health scoring, gap analysis, quality checks.
Level 2 adds the orchestrator. A small Rust binary that coordinates across tools: file locking, knowledge capture, a dependency-aware task board, drift detection, and multi-tool coordination across Claude Code, Codex CLI, and Gemini CLI through a single state file and MCP protocol. This is the coordination layer.
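To make the file-locking piece concrete, here's a minimal sketch of the pattern: an advisory lock table mapping paths to the agent that holds them, so a second tool is refused instead of silently merged. This is an illustration under assumed names, not Forge's real state format:

```rust
use std::collections::HashMap;

// Hypothetical advisory lock table: file path -> agent currently holding it.
struct LockTable {
    locks: HashMap<String, String>,
}

impl LockTable {
    fn new() -> Self {
        LockTable { locks: HashMap::new() }
    }

    // An agent may claim a path only if no other agent already holds it.
    // Re-acquiring a lock you hold is a no-op success.
    fn acquire(&mut self, path: &str, agent: &str) -> Result<(), String> {
        match self.locks.get(path) {
            Some(holder) if holder.as_str() != agent => {
                Err(format!("{path} is locked by {holder}"))
            }
            _ => {
                self.locks.insert(path.to_string(), agent.to_string());
                Ok(())
            }
        }
    }

    // Only the holder can release its own lock.
    fn release(&mut self, path: &str, agent: &str) {
        if self.locks.get(path).map(String::as_str) == Some(agent) {
            self.locks.remove(path);
        }
    }
}

fn main() {
    let mut table = LockTable::new();
    assert!(table.acquire("src/module.rs", "claude-code").is_ok());
    // A second tool editing the same file is refused, not silently merged.
    assert!(table.acquire("src/module.rs", "codex-cli").is_err());
    table.release("src/module.rs", "claude-code");
    assert!(table.acquire("src/module.rs", "codex-cli").is_ok());
}
```

In the Walgreens-era version of this, the "lock table" was a program manager who knew which team owned which workstream. The mechanism is the same; only the substrate changed.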
Level 3 adds the visual platform. A real-time governance dashboard, agent activity feed, and the Infinity Terminal… a browser-based terminal that survives browser close, network drops, and server restarts. This is the visibility layer.
In Practice
The graduated model means adoption follows the pain.
Most developers start with governance because it's the first pain they can name. They upgrade to orchestration when they start running multiple tools and hit conflicts. They add the UI when they want to watch the system work from their phone.
The knowledge flywheel becomes the retention mechanism. After a week, Forge has captured enough decisions, patterns, and learnings that starting a new session feels different. The AI knows your conventions. It remembers your decisions. It stops asking the same questions twice.
That's when people realize they can't go back. Not because of lock-in… the state lives in plain files in .forge/, fully portable. But because working without institutional memory starts to feel like working blind.
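The portability claim is easy to see in miniature: if captured knowledge is just lines in a plain file, any tool that can read a file can reload it at session start. A toy sketch — the log format and file name are invented for illustration, not Forge's actual `.forge/` layout:

```rust
use std::fs::{self, OpenOptions};
use std::io::Write;

// Hypothetical plain-text knowledge log, one entry per line: "kind<TAB>note".
fn record(path: &str, kind: &str, note: &str) -> std::io::Result<()> {
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(f, "{kind}\t{note}")
}

// A new session reloads every prior entry of a given kind before any work begins.
fn load(path: &str, kind: &str) -> Vec<String> {
    fs::read_to_string(path)
        .unwrap_or_default()
        .lines()
        .filter_map(|l| l.split_once('\t'))
        .filter(|(k, _)| *k == kind)
        .map(|(_, n)| n.to_string())
        .collect()
}

fn main() -> std::io::Result<()> {
    let log = std::env::temp_dir().join("forge_demo_knowledge.log");
    let log = log.to_str().unwrap();
    let _ = fs::remove_file(log); // start clean for the demo

    record(log, "decision", "use snake_case for module names")?;
    record(log, "pattern", "wrap external calls in a retry helper")?;

    // The next session starts with the decisions already in hand.
    assert_eq!(load(log, "decision"), vec!["use snake_case for module names"]);
    Ok(())
}
```

Plain files mean no export step and no proprietary database: delete the binary and the institutional memory is still readable with `cat`.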
The Bottom Line
I spent 23 years solving coordination problems for human teams. Coordinating workstreams. Tracking dependencies. Maintaining shared knowledge. Enforcing governance. Providing visibility into what was happening and whether it was on track.
AI agents need the same thing. They're incredibly capable individual contributors. And they fail, predictably and expensively, without coordination infrastructure.
Forge is that infrastructure. Not a framework. Not a configuration language. A program manager… encoded in Rust, tested relentlessly, and built by someone who knows what coordination failure looks like at every scale.
Launching March 2.