I’ve spent the better part of my career walking a tightrope between two very different worlds: the deep trenches of Free and Open Source Software (FOSS), and the sprawling landscapes of large enterprise software.
You would think the enterprise world, with its massive computing budgets, rigid processes, and endless review cycles, would consistently produce the most coherent software. Yet frequently, the opposite is true. Open source projects, which on paper look like uncontrolled chaos, often produce the most elegant, unified systems in the world.
In the enterprise, it’s not uncommon to open a single application and literally feel the boundary where one team’s work ends and another’s begins. You can see the seams. The navigation isn’t structured around what the user actually needs to do. It’s structured around the company’s internal reporting lines. Everything feels like a bolt-on, because everything is a bolt-on.
Contrast that with pulling down a tool built by a small, opinionated FOSS team. Everything just clicks. The navigation makes sense. You can guess the API patterns. There is a distinct, undeniable point of view.
I keep coming back to that contrast. What actually causes that gap? What separates a coherent masterpiece like SQLite from a sprawling, disjointed enterprise mess? What’s the actual variable here?
Conway’s Law gets cited constantly. “Organizations design systems that mirror their own communication structures.” Brooks’s Law covers the timeline angle. “Adding manpower to a late software project makes it later.”
Both useful. But neither explains incoherence. Conway tells you the shape of the system. Brooks tells you it’ll be late. Neither tells you why the product feels like three apps duct-taped together.
I think the missing variable is simpler than either of those.
The law
Software coherence is inversely proportional to the number of people who can say no.
Not team size. Not org structure. Not how many microservices you have or what your sprint velocity looks like. Just this: how many people hold veto power over any given decision?
I’m calling it Hemanth’s Law because nobody else has named it yet.
The evidence
I looked at startups, FOSS, and enterprise software and tracked one thing: how many people have to agree before a decision turns into code.
SQLite: a small team, zero outside contributors
SQLite is maintained by a small team of developers spread across three continents, led by D. Richard Hipp. They don’t accept patches or pull requests from the public. Every line is written by the core team.
What does that produce? ~156,000 lines of C with 100% branch test coverage. Over 92 million lines of test code. It runs on every phone, every browser, every OS you touched today. Airbus uses it in flight software. It follows DO-178B aviation testing standards.
One person holds the entire system in his head. Decisions move from thought to code in the same brain. The internal consistency is almost unsettling.
Decision-makers per feature: 1. Maybe 2 on a bad day.
Linux kernel: thousands of contributors, one funnel
The kernel has seen contributions from over 20,000 developers. But writing code and deciding what ships are very different things.
Every patch goes through a subsystem maintainer, then a lieutenant, then (for anything that matters) Linus Torvalds. The MAINTAINERS file maps every file in the kernel to the person who says yes or no.
Thousands contribute. A handful decide. Linus has rejected patches from billion-dollar companies because they didn’t pass his taste bar.
Decision-makers per subsystem: 1 to 3. For the kernel’s overall direction: 1.
Windows Vista: what happens when nobody owns the picture
Longhorn (which became Vista) started around 2001. By 2004 the codebase was so broken that Jim Allchin, then the executive running Windows development, called it “a pig” internally. Microsoft threw out the entire codebase and restarted from Windows Server 2003 SP1.
What happened? The Windows team and the .NET team were fighting over managed code vs. native. Multiple groups built disconnected features (WinFS, Indigo, Avalon) with no integration plan. Nobody owned the whole picture.
They eventually reorganized into two divisions, Core OS and Windows Client. Didn’t matter. Vista shipped in 2007, four years late, and the reviews were brutal. The product felt exactly like what it was: a negotiated compromise between teams that disagreed with each other.
Decision-makers per feature: dozens. Architecture review, security review, feature team, platform team, management chain. Every group could block. The product showed it.
Spotify squads: autonomy without anyone in charge of “the whole thing”
Spotify’s squad model (2012) gave small teams full ownership. Each squad was supposed to be a “mini-startup.” In theory this should work. Small teams, clear ownership, right?
In practice the squads turned into silos. Each picked different tech, different UI patterns, different UX conventions. The product felt inconsistent. “Autonomy without consequence” meant squads optimized for their own thing and nobody was minding the store for the overall experience.
Each squad had decision-making power. Nobody had integration power. Nobody could step in and say “these five features need to feel like the same product.” Lots of independent decision-makers produced independence. Not coherence.
Decision-makers per feature: 1 per squad (fine). Decision-makers for the product as a whole: 0 (not fine).
AWS: the deliberate sacrifice of coherence for throughput
The famous “two-pizza team” API mandate at Amazon is a masterclass in this law, applied in reverse. Amazon explicitly removed the integration authority veto. By demanding that every team build distinct, decoupled services with their own public APIs, Amazon destroyed the “does this fit the unified product vision?” bottleneck.
The result? Unmatched engineering velocity. AWS shipped a staggering volume of services because no team had to wait for a centralized design committee to sign off. But the cost was coherence. The AWS Management Console is famously disjointed: the EC2 console looks and behaves differently from S3’s, which in turn follows UX conventions different from IAM’s or CloudWatch’s. They sacrificed coherence to maximize throughput. It proves the rule: when you remove the centralized integration veto, you gain speed, but you permanently lose the “one mind” user experience.
The pattern
| Veto holders per feature | Decision style | Product feel | Examples |
|---|---|---|---|
| 1-3 | Focused, opinionated | Coherent | SQLite, Rails, early Notion, Sublime Text |
| 4-8 | Managed, intentional | Solid, less distinctive | Linux kernel, PostgreSQL, well-run startups |
| 9+ | Committee-driven | Fragmented, bloated, disjointed | Vista, enterprise platforms, “three apps taped together” |
There’s real data behind this. The HBS mirroring hypothesis study (MacCormack, Baldwin, Rusnak, 2012) compared open-source and commercial software using Design Structure Matrix analysis. Open-source projects with fewer centralized decision-makers produced more modular code. Commercial products with committee-driven decisions had propagation costs 8x higher, meaning a change in one place broke 8x more things elsewhere.
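To make “propagation cost” concrete: in the DSM literature it is roughly the density of the visibility matrix, the average fraction of components a change to one component can reach directly or transitively. Here is a minimal sketch of that idea (my own toy matrices, not the study’s data):

```python
import numpy as np

def propagation_cost(dsm):
    """Density of the visibility matrix: the average fraction of
    components a change to one component can reach, directly or
    transitively (in the spirit of MacCormack, Baldwin, and Rusnak)."""
    n = dsm.shape[0]
    # Start from "each component sees itself plus its direct dependencies",
    # then square to a fixed point to get the transitive closure.
    visible = np.eye(n, dtype=bool) | dsm.astype(bool)
    while True:
        nxt = (visible.astype(int) @ visible.astype(int)) > 0
        if (nxt == visible).all():
            break
        visible = nxt
    return visible.sum() / (n * n)

# Toy dependency matrices: dsm[i][j] = 1 means component i depends on j.
chain = np.array([[0, 1, 0],   # a -> b -> c: a change to c ripples upstream
                  [0, 0, 1],
                  [0, 0, 0]])
modular = np.zeros((3, 3), dtype=int)  # three fully independent components

print(propagation_cost(chain))    # 0.666...: most changes propagate
print(propagation_cost(modular))  # 0.333...: only the component itself
```

The coupled chain scores twice the modular design: the metric rewards architectures where a decision in one place cannot silently reach into another.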
Those commercial products weren’t badly written. They were badly decided. Too many cooks making different broths and pouring them into the same pot.
When multiple teams solve the same problem
There is a specific pathology that emerges in high-veto environments: redundant solutions. Think of what happens when two different internal teams decide to solve the exact same problem.
In an environment with a strong integration authority, a decision is made early. One approach wins. The other is killed. The user gets one clear, definitive path.
In a consensus-driven environment, neither team has the authority to kill the other’s project. Canceling a team’s work requires a political fight that nobody wants to have. So to maintain peace, the organization simply ships both.
Look at Google. They didn’t just ship one messaging app; over a decade, internal team boundaries gave us Talk, Hangouts, Allo, Duo, Meet, and Chat, all competing for the same users. We’re seeing it happen again right now in their AI ecosystem: a developer looking to build with Gemini has to navigate overlapping capabilities across Google AI Studio, Vertex AI, and Firebase Genkit. Even if someone at the very top had the formal authority, nobody was willing to spend the political capital to step in and say, “this is our only path.”
They are externalizing their internal organizational stalemates onto the user. Consensus doesn’t just produce averages. Sometimes it produces duplicates.
How this differs from Conway’s Law
Conway says the product mirrors the org chart. That’s about shape. Four teams, four components.
This is about coherence. You can have four teams and a coherent product if one person has final say on how the pieces fit. Linus runs a massive org, but the kernel feels like one system because the decision funnel narrows to one person.
Conway tells you what the architecture will look like. This tells you whether it’ll feel like one product or a committee report.
How this differs from Brooks’s Law
Brooks is about time. Adding people makes things late.
This is about coherence, which is a separate axis. You can ship on time and still ship an incoherent mess. Vista eventually shipped. It just shipped something that felt like two organizations built it without talking. Which is more or less what happened.
Delivery time
There’s a corollary here:
Delivery speed is determined by decision latency, the gap between “we should do X” and someone writing code.
At a five-person startup, you decide at lunch and ship after lunch. At a large org, you might need architecture review, security sign-off, accessibility review, legal, and three management layers. The coding takes a day. Getting permission takes a month.
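The split is easy to see if you separate the two intervals in your ticket history. A toy sketch with made-up timestamps (the field layout is illustrative, not any real tracker’s schema):

```python
from datetime import date

# Hypothetical ticket history: (proposed, approved, shipped).
tickets = [
    (date(2024, 1, 2), date(2024, 2, 5), date(2024, 2, 7)),    # big org
    (date(2024, 1, 10), date(2024, 1, 11), date(2024, 1, 12)), # small team
]

for proposed, approved, shipped in tickets:
    decision_latency = (approved - proposed).days  # waiting for a green light
    build_time = (shipped - approved).days         # actually writing code
    print(f"waited {decision_latency}d for permission, built in {build_time}d")
```

Cycle-time dashboards usually report the sum of the two numbers, which hides the fact that in the first ticket the waiting dwarfs the building.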
Where this breaks
I should be honest about the weak spots.
Single decision-makers become bottlenecks. The funnel that creates coherence also caps your engineering velocity at the processing speed of one human brain. If a product scales horizontally into twenty different domains, one person can’t possibly hold the context to make good calls on all of them. You preserve coherence, but you sacrifice throughput. The org charts of massive tech companies are designed entirely to solve that throughput problem. They just sacrifice coherence to do it.
Coherence isn’t the same as quality. One person can have a consistent vision and still be wrong. If the person holding that final integration seat lacks the capability or the “taste” required for the domain, you don’t get a masterpiece. You get a coherently bad product. In fact, heavy bureaucracy and consensus-seeking often emerge in big companies as a defense mechanism to dilute the impact of incompetent leaders. A single terrible decision-maker will ship a unified product, but it will be unified around a bad idea. Consistency and correctness are different things.
And some committees do produce good work. The Apollo Guidance Computer came out of a large bureaucracy and it landed on the moon. But even there, Margaret Hamilton’s software team was about 100 people, and she had clear authority. The committee existed, but the decision funnel was tight.
The part that makes people uncomfortable
If this holds, then what most orgs do to “improve quality” (adding reviewers, adding stakeholders, adding approval gates) actually degrades the product. Not because review is bad. Because every additional person in the decision chain sands down one more edge, removes one more strong opinion, averages out one more choice.
You end up with software that offends nobody and delights nobody.
I’ve seen it. You know it when you use it. You open an enterprise app and think “which team built this screen, and have they met the team that built the next one?” Usually they have. In a weekly meeting with twelve people. Where nothing gets decided.
What I’d do about it
Keep the number of veto holders as small as possible. Not contributors. Vetoes. Let many people write code. Let very few people block decisions.
When you cannot eliminate multiple veto holders (because security, legal, and architecture all legitimately need a seat at the table), you must relentlessly constrain their veto boundaries. Security gets a veto on vulnerabilities, not on user flows. Architecture gets a veto on system scalability, not on interface copy. The moment a domain expert is allowed to veto something outside their explicit jurisdiction, you slide right back into committee-driven design. You have to clearly define the boundaries of a veto before the project even boots up.
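One concrete mechanism for scoping vetoes is path-based required review, e.g. a GitHub CODEOWNERS file. A hypothetical sketch (the `@acme/*` team names are invented; in CODEOWNERS, later matching lines take precedence over earlier ones):

```
# Hypothetical CODEOWNERS: veto power scoped by path.
# The broad default comes first so specific rules below override it.

# Default: only the product owner signs off.
*                   @acme/product-owner

# Security's veto is scoped to security-sensitive code, not user flows.
/auth/              @acme/security
/crypto/            @acme/security

# Architecture's veto is scoped to service boundaries, not interface copy.
/services/*/api/    @acme/architecture
```

The file itself becomes the written jurisdiction: security cannot block a change to interface copy, because no path rule gives them that power.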
Start measuring decision latency, not just cycle time. How long do features sit waiting for a green light versus actually being built?
Establish clear integration authority. It doesn’t necessarily have to be a single dictator, but the funnel must be narrow enough to share one cohesive context. The hardest part of this isn’t defining the role; it’s deciding who actually gets the seat. If you default to the person with the most seniority, you often end up with disconnected architecture astronauts. If you give it to someone who doesn’t understand the underlying technical constraints, you get impossible roadmaps.
The integration authority has to be someone who still deeply understands the codebase, uses the product every day, and possesses the technical “taste” to look at the boundaries between teams and ask, “Does this feel like one product?” It is a curation role, not a project management one. It requires the soft power to tell a team their solution doesn’t fit the broader vision. And yes, this creates a single point of failure in judgment. If they make bad calls, the vision fails. But it is far easier to course-correct a single, clear vision than it is to untangle a product built by twenty competing committees.
And know when to resist the pull toward consensus. Consensus is vital for standards bodies and foundational platforms like TC39 or Node.js, where ecosystem trust is the absolute primary feature. But how do those groups avoid shipping disjointed averages? They do it by strictly separating creation from approval. In TC39, a “champion” holds the cohesive vision for a proposal. The committee doesn’t design the feature; they audit its constraints. When committees try to design, you get a disjointed mess. When they audit a strong, singular vision, you get a durable standard.
But when building a fast-moving, user-facing product, even audit-level consensus is usually too slow. You don’t have the luxury of a multi-year RFC process. You need a single integration authority who can look at a stalled room and say, “Three teams disagree, but we are doing it this way because the product needs to feel like this.”
One more time
Hemanth’s Law: Software coherence is inversely proportional to the number of people who can say no.
Conway gave us the shape. Brooks gave us the timeline. This is about coherence.
The best software I’ve used was built by a small number of people with strong opinions and the authority to ship them. The worst was built by organizations where everyone had a voice and nobody had a decision.
More vetoes, less vision.
References: Conway (1968) “How Do Committees Invent?”; Brooks (1975) The Mythical Man-Month; MacCormack, Baldwin, Rusnak (2012) “Exploring the Duality between Product and Organizational Architectures” HBS; SQLite Testing; Linux Kernel Patch Process; Snover (2025) “The Longhorn Story”; Lee “Spotify’s Failed Squad Goals”; Node.js Technical Steering Committee.
About Hemanth HM
Hemanth HM is a Sr. Machine Learning Manager at PayPal, Google Developer Expert, TC39 delegate, FOSS advocate, and community leader with a passion for programming, AI, and open-source contributions.