From Headcount to Compute: How the Fundamentals of Growth Are Evolving
Why traditional metrics no longer predict successful companies
A partner at a major venture capital firm recently said something that would have been heresy five years ago: when she sees a company with 85-90% gross margins, it raises an "orange flag."
For two decades, high gross margins were gospel in enterprise software. They were the single clearest signal of a scalable, sustainable business model. Investors celebrated them. Operators optimized for them. The logic was bulletproof: if your cost of serving each additional customer was low, you could scale efficiently and capture more value over time. It's what separated the winners from everyone else.
Today, those same high margins might signal that a company isn't building something defensible. The inversion is striking, and it's just one example of how the fundamental rules of company building are being rewritten.
Over the past fifteen years, I've advised companies through multiple technology transitions: on-premises to cloud, subscription models, tech-enabled services. The pattern is recognizable by now: new technology arrives, old metrics break down, new economics emerge. Each wave brought its own benchmarks. Perpetual licenses and maintenance contracts gave way to annual recurring revenue; then monthly recurring revenue and net retention became the standards to beat.
But this shift is different in magnitude. We're not just tweaking the metrics — we're inverting them. And the implications go far beyond any single technology trend or business model.
The Playbook That Defined Enterprise Software
To understand what's changing, it helps to establish what "normal" looked like.
For twenty years, a remarkably predictable playbook governed how we evaluated scalable technology businesses. If you were evaluating a company — whether as an investor, board member, or potential acquirer — you knew exactly what to look for. Gross margins above 80%. Rule of 40 (growth rate plus profit margin). CAC payback under twelve months. LTV:CAC ratios of 3:1 or better. Burn multiples under 2x.
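To make those benchmarks concrete, here is the arithmetic behind them in a few lines of Python. The function names and all company figures below are invented for illustration, not drawn from any real business:

```python
# The traditional SaaS benchmark math. All figures are hypothetical.

def rule_of_40(growth_rate: float, profit_margin: float) -> float:
    """Growth rate plus profit margin, both in percentage points."""
    return growth_rate + profit_margin

def cac_payback_months(cac: float, monthly_gross_profit: float) -> float:
    """Months of gross profit needed to recover acquisition cost."""
    return cac / monthly_gross_profit

def ltv_to_cac(ltv: float, cac: float) -> float:
    """Lifetime value generated per dollar of acquisition cost."""
    return ltv / cac

def burn_multiple(net_burn: float, net_new_arr: float) -> float:
    """Cash burned per dollar of net new ARR."""
    return net_burn / net_new_arr

# A hypothetical company that clears every bar:
print(rule_of_40(60, -10))                   # 50   -> above 40
print(cac_payback_months(12_000, 1_500))     # 8.0  -> under 12 months
print(ltv_to_cac(45_000, 12_000))            # 3.75 -> better than 3:1
print(burn_multiple(8_000_000, 5_000_000))   # 1.6  -> under 2x
```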
These weren't arbitrary targets. They reflected fundamental economics that actually predicted which companies would succeed. The cost structure was transparent and comparable: roughly 70% of spending went to people (engineers, salespeople, customer support), 15-20% to infrastructure, and the remainder to everything else. This held true whether you were building pure software, enterprise platforms, or technology-enabled services at scale.
These frameworks worked. You could look at a company's metrics and predict sustainability with reasonable confidence. The pattern held across industries — financial services, healthcare, enterprise software, business services. Even when companies struggled, the metrics told you why.
The implicit assumption underlying all of this was elegant: the marginal cost of serving an additional customer approached zero. Build the capability once, sell it many times. Scale meant efficiency. More revenue per employee meant a more valuable company. It was a beautiful model, and it created enormous value.
The Inversion
Then the P&L flipped.
Today, there are companies generating $50 million in annual recurring revenue with only 25 employees. Cursor, the code editor, reportedly crossed this threshold with a team that would have been unthinkably small in the previous era. Run the arithmetic: $50 million across 25 people is $2 million per employee. Revenue per employee isn't hitting the celebrated $200,000 benchmark; it's hitting seven figures, and the metric that was supposed to signal efficiency has become almost meaningless.
More striking: these companies often have gross margins in the 50-60% range, not the 80-90% that used to be table stakes. And yet they're raising billions in capital from sophisticated investors who understand exactly what they're looking at.
The cost structure has inverted. Where traditional enterprise software businesses spent 70% on people and 15-20% on infrastructure, many of today's most valuable companies spend 80-90% on compute and infrastructure, with only 10-20% on headcount.
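A toy sketch makes the contrast visible. Everything below is hypothetical, and treating infrastructure as the entire cost of revenue is a deliberate simplification:

```python
# Two hypothetical cost structures at the same revenue. Figures are
# illustrative; real P&Ls put some people costs in cost of revenue too.

def summarize(name: str, revenue: float, total_spend: float,
              infra_share: float, headcount: int) -> None:
    infra = total_spend * infra_share           # compute + infrastructure
    gross_margin = (revenue - infra) / revenue  # infra as cost of revenue
    print(f"{name}: ${revenue / headcount:,.0f} per employee, "
          f"{infra_share:.0%} of spend on infra, "
          f"~{gross_margin:.0%} gross margin")

# People-heavy traditional software business.
summarize("Traditional", revenue=50e6, total_spend=45e6,
          infra_share=0.18, headcount=250)
# -> $200,000 per employee, 18% of spend on infra, ~84% gross margin

# Infra-heavy compute-native business.
summarize("Compute-native", revenue=50e6, total_spend=25e6,
          infra_share=0.84, headcount=25)
# -> $2,000,000 per employee, 84% of spend on infra, ~58% gross margin
```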
This shift isn't confined to one category. Traditional software companies adding these capabilities, services businesses becoming more technology-intensive, enterprises building internal solutions: all are grappling with the same inverted economics. Notion, for example, has been transparent about how its infrastructure costs shifted dramatically as usage deepened, with per-user costs falling to roughly a third of their previous level over two years as the company optimized its compute strategy.
This isn't inefficiency. It's a fundamentally different business model where the costs reflect the value being delivered. When margins are "too high" — that 85-90% range that used to be ideal — it might signal that the product isn't being used deeply enough, that the technology integration is shallow, or that the company is building a thin layer on top of someone else's capabilities rather than solving a genuinely hard problem.
The companies with lower margins are often the ones with the deepest product engagement. Heavy infrastructure costs mean users are actually extracting value. The unit economics look concerning by traditional standards, but they reflect something else entirely: cost structure aligned with the problem being solved.
The New Resource Allocation Calculus
This shift changes the fundamental questions that operators need to answer.
Resource allocation used to be straightforward: How many engineers do we need to build the roadmap? What's the optimal sales rep quota? When do we break even on a customer cohort? The math was well-understood across business models.
Now the questions sound different: What's our optimal mix of compute capacity versus engineering talent? How long should we lock up infrastructure contracts given pricing volatility? How much should we spend on data versus model optimization? What's our compute runway relative to our fundraising roadmap?
These aren't just financial modeling questions. They're deeply operational decisions that require understanding both the technology and the business model implications. One company described their ideal finance hire as having "fixer energy" — someone who could be a "white-collar plumber." The metaphor stuck with me because it captures something essential: the most valuable finance professionals today aren't just analytical. They're operational problem-solvers who can model contract length decisions, compute depreciation curves, and data spend against roadmap priorities.
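To make one of those questions concrete, here is a minimal sketch of a compute-runway calculation. The figures and the constant-monthly-growth assumption are invented; a real model would layer in committed contracts, reserved-capacity discounts, and revenue offsets:

```python
# A toy compute-runway model: how many months of inference spend can
# current cash support? All inputs are hypothetical.

def compute_runway_months(cash: float, monthly_compute: float,
                          monthly_growth: float) -> int:
    """Months until cash can no longer cover the compute bill."""
    months = 0
    while cash >= monthly_compute:
        cash -= monthly_compute
        monthly_compute *= 1 + monthly_growth  # usage (and spend) compounds
        months += 1
    return months

# $30M in the bank, $1.2M/month in compute, usage growing 8% monthly.
print(compute_runway_months(30e6, 1.2e6, 0.08))  # -> 14 months

# The operational question that follows: does 14 months outlast the
# fundraising timeline, and would locking in a 12-month infrastructure
# contract at a discount extend the runway or just reduce flexibility?
```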
When Fortune 500 enterprises evaluate whether to build or buy these capabilities, the traditional frameworks don't capture what matters anymore. Gross margins and revenue multiples can mislead. Instead, buyers are looking at founder vision and product roadmap velocity. They're asking: Will this vendor still be ahead in six months? In fast-moving categories, a six-month capability lag that used to be perfectly acceptable is now a competitive disadvantage.
The procurement conversation has evolved. Buyers want to understand: Is this team solving a genuinely hard problem? Are they positioned to stay ahead as the technology evolves? Do they have the operational sophistication to manage costs while maintaining quality?
What Endures Across Waves
Despite all this disruption, patterns persist.
Across four major technology transitions — ERP implementations, cloud migration, subscription models, and now this shift — some fundamentals remain constant even as the economics change.
Product-market fit still matters, though it manifests differently. The companies that win aren't just those with the best underlying technology — they're the ones that create superior product experiences and integrate deeply into workflows. GitHub Copilot had first-mover advantage and massive distribution, yet competitors like Cursor gained significant traction by delivering demonstrably better product experiences. Usage patterns still predict retention, even if the absolute thresholds have changed.
Organizational barriers look remarkably similar across waves. The adoption patterns are consistent — the champions who push for change, the explorers willing to experiment, the skeptics who resist, the experts who need to be convinced — whether we're talking about cloud migration or the latest technology shift. Change management challenges haven't disappeared just because the technology is more powerful.
And go-to-market eventually normalizes. Once companies scale beyond the early adopters, enterprise procurement starts to look familiar. You still need to prove value, run proofs of concept, and win over multiple stakeholders. Distribution advantages still matter, even if they matter less in the earliest stages.
The nuance is this: it's not that the old playbook has become irrelevant. It's that it applies at different stages now and in different proportions. The early economics look radically different from what we're used to. But as companies mature and scale, some of the traditional patterns re-emerge. They just take longer to appear, and the path to get there looks different.
Implications and Open Questions
What does "efficient growth" mean when your cost structure is inverted? How do you benchmark against comparables when the traditional metrics are misleading?
These aren't rhetorical questions. Operators are grappling with them in real time. Pricing models are evolving from pure subscription to hybrid usage-based models as the economics shift. The path to profitability looks different when you're infrastructure-heavy rather than people-heavy. Traditional frameworks like the Rule of 40 don't apply the same way when your costs scale differently.
For enterprises making build-or-buy decisions, the evaluation criteria have shifted. Financial stability metrics that used to be reliable — gross margin, cash burn, runway — can now mislead. A company with "concerning" unit economics might actually be the one solving the hardest problems and building the most defensible moat. Speed and innovation capacity matter more than traditional stability signals in categories where the technology is moving quickly.
And there are genuine open questions that won't be resolved for years. Will gross margins expand over time as infrastructure costs continue to drop? Public pricing data from leading model providers shows costs have fallen dramatically — OpenAI's GPT-4 pricing dropped roughly 10x within a year, with some use cases seeing even steeper declines as newer, more efficient models emerged. If that trend continues, do we eventually see compression back toward traditional margin profiles? Or is 50-60% the new steady state for companies in this category?
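To make that margin question concrete, here is a toy sensitivity check. Every input is an assumption: compute stands in for the entire cost of revenue, unit costs halve each year, and prices hold flat:

```python
# Toy sensitivity: if unit compute costs halve each year and prices
# hold, where does gross margin drift? Starting figures are assumed.

revenue = 100.0  # index revenue to 100
compute = 40.0   # compute as the entire cost of revenue -> 60% margin today

for year in range(4):
    margin = (revenue - compute) / revenue
    print(f"Year {year}: gross margin {margin:.0%}")
    compute *= 0.5  # assumed annual halving of unit compute cost

# Year 0: 60% -> Year 1: 80% -> Year 2: 90% -> Year 3: 95%
# The catch: usage tends to deepen as costs fall, so the savings may be
# reinvested in heavier workloads rather than flowing to margin.
```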
And pricing is only one thread. How do pricing models evolve as costs become more predictable? What happens when the current infrastructure investment wave peaks and capacity normalizes? How do we think about competitive moats when the cost structure and the technology foundation are both evolving rapidly?
Nobody has definitive answers to these questions yet. We're watching the patterns emerge in real time.
Pattern Recognition, Not Prediction
The mistake is trying to force-fit old frameworks onto new economics.
Each time a major transition happens, the metrics that worked for the previous era break down. We spend energy trying to make the new reality conform to our inherited benchmarks, when what we really need is the intellectual honesty to acknowledge that the fundamentals have shifted.
The shift from headcount-driven to compute-driven cost structures is the most dramatic change in business model economics since the cloud transition. Companies that succeed today are questioning inherited wisdom while building genuinely sustainable businesses. That combination — willingness to break old rules plus operational rigor — seems to be what separates signal from noise.
For twenty years, we knew what "efficient growth" looked like. There was a playbook. The benchmarks were clear. Today, a company with 50% gross margins and $2 million in revenue per employee might be more sustainable and valuable than one with 90% margins hitting all the traditional metrics.
The fundamentals of successful company-building haven't disappeared. They've evolved. And the companies and operators who recognize this shift earliest — who can distinguish between metrics that still matter and metrics that now mislead — will have a significant advantage.
This is documentation of a transition happening in real time. But if history is any guide, new benchmarks will emerge. They just won't look the way we expect. And that's exactly what makes this moment so interesting to watch.