Why sweat equity doesn't work for AI startups
AI startups and the equity bet you can't win
In 1976, Ron Wayne sold his 10% stake in Apple Computer for $800. He'd co-founded the company with Steve Jobs and Steve Wozniak 12 days earlier, drafted the partnership agreement by hand, and drawn the first Apple logo himself. He was 41, had been burned by a failed slot machine venture, and didn't want to be personally liable for debts run up by two twenty-somethings building circuit boards in a garage. The $800 seemed reasonable. That stake would be worth roughly $300 billion today.
We tell the Ron Wayne story as a cautionary tale about paper hands. But it's really about something else: in 1976, a person could contribute real labor to a technology company, receive equity in exchange, and have that equity become extraordinarily valuable, because the company's success depended almost entirely on what its people built with their hands and their brains. The tools were cheap. The labor was the product.
This equation held for 40 years.
It no longer holds.
What sweat equity required to work
Startups, for most of the modern technology era, took a known set of technologies (databases, web servers, programming languages, APIs) and welded them into a novel product. The raw materials were free. Open-source software, commodity cloud computing, standardized protocols. Marginal cost of inputs: close to zero.
What mattered was the assembly. The design decisions, the architecture, the product intuition, the speed of iteration.
Two engineers in a garage had access to the same toolkit as Google. The difference was what they did with it. Instagram had 13 employees when it sold to Facebook for $1 billion. WhatsApp was famously lean. Y Combinator's entire premise was that a small amount of capital, combined with a large amount of focused human effort, could produce outsized returns.
Equity was cheap to issue. Engineers were expensive to pay. The work those engineers did (writing code, talking to users) was the primary value-creation activity.
You were giving people a share of the thing their labor was directly creating. The incentive alignment was almost perfect.
This model had one dependency nobody thought about, because it was so obviously true as to be invisible: human effort was the scarce input, and the tools were either free or cheap.
GPUs broke the equation
AI startups operate in a different economic reality.
The core product, whether a foundation model company, a fine-tuning shop, or an application-layer company building on top of large language models, depends on compute. Training a competitive large language model costs tens of millions of dollars in GPU time. Fine-tuning or running inference at scale requires compute budgets that would have been unthinkable for a SaaS startup 5 years ago.
In the traditional startup, the expensive thing was the engineer and the cheap thing was the tool. In AI, compute is far more expensive than the engineer. The ratio has inverted.
When two engineers stay up all night writing code for a traditional SaaS product, they're directly creating the product. More labor, more product. When two engineers stay up all night writing training code for an AI model, they're creating instructions for a process that will then consume millions of dollars of compute.
Their labor is necessary but wildly insufficient. It's the difference between a carpenter building a house with hand tools and an architect drawing blueprints for a skyscraper. The architect's work matters, but without the steel, concrete, and capital, the blueprints are paper.
Dilution math
The traditional startup financing model: founders build a prototype with sweat equity, demonstrate traction, raise a seed round, hire more people who also receive equity, demonstrate more traction, raise a Series A. At each stage, capital funded human labor. More engineers, more salespeople. The equity those people received was a share of value their work was directly creating.
AI startups can't follow this model because the capital is primarily funding compute.
When Anthropic raises billions of dollars, the money is going to rent GPUs. When a smaller AI startup raises a $20 million Series A, a large portion goes directly to cloud compute providers. The humans matter, critically, but they're a smaller fraction of the total cost structure than in a traditional software company.
A traditional SaaS startup might raise $30 million total before reaching profitability. An AI startup doing anything involving model training might need $300 million or more. That's an order of magnitude more capital raised, and correspondingly more dilution.
The founders who spent 3 years working for below-market salaries, who put in the nights and weekends, who made the personal sacrifices that sweat equity is supposed to compensate, end up owning a much smaller percentage of a company that needs to be worth correspondingly more for their equity to be worth the same amount.
The math can work, but only at truly enormous scale. Sweat equity in a traditional startup was a reasonable bet because many outcomes between "total failure" and "billion-dollar company" could still pay off. Sweat equity in an AI startup is closer to binary: after hundreds of millions of dollars of raised capital sits ahead of the common stock, only an outsized exit leaves the employees' shares worth anything.
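The dilution arithmetic above can be sketched with a few lines of Python. Every round size and valuation here is an invented illustration, not data about any real company; the only real mechanic is that each priced round sells `raised / post_money` of the company and dilutes existing holders pro rata.

```python
def dilute(founder_frac, rounds):
    """Apply successive priced rounds. Each round sells
    (raised / post_money) of the company, diluting all
    existing holders pro rata."""
    for raised, post_money in rounds:
        founder_frac *= 1.0 - raised / post_money
    return founder_frac

# Hypothetical SaaS path: ~$30M raised total, 20% sold per round.
saas = dilute(1.0, [(2e6, 10e6), (8e6, 40e6), (20e6, 100e6)])

# Hypothetical AI path: ~$300M raised total; compute costs force
# larger raises relative to each stage's valuation.
ai = dilute(1.0, [(25e6, 100e6), (75e6, 300e6), (200e6, 700e6)])

print(f"SaaS founders retain {saas:.1%}")   # ~51.2%
print(f"AI founders retain   {ai:.1%}")     # ~40.2%

# Exit value needed for the founders' stake to be worth $100M:
print(f"SaaS breakeven exit: ${100e6 / saas / 1e6:.0f}M")  # ~$195M
print(f"AI breakeven exit:   ${100e6 / ai / 1e6:.0f}M")    # ~$249M
```

Even with valuations chosen generously for the AI case, the bigger raises compound into a smaller founder stake and a higher exit needed to reach the same payout; real AI financing paths typically involve more rounds than this sketch, which pushes the gap wider.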
Where moats aren't
In a traditional software company, accumulated labor created durable competitive advantages. The codebase, the institutional knowledge, the customer relationships, the operational processes. Human effort that compounded over time. A team working together for 3 years had built something a better-funded competitor couldn't easily replicate.
AI startups have a harder time with this. The core algorithms are published in academic papers, and the model architectures are well-known. Training techniques get shared at conferences and in blog posts. What differentiates one AI company from another is primarily the data it trains on and the amount of compute it can access.
Neither of those is a product of sweat equity.
What the sweat-equity employees build (the model, the product, the fine-tuning pipeline) is more vulnerable to replication than a traditional software product. A well-funded competitor can reach parity in months. The labor that went into the original product is less durable and less useful as a basis for equity compensation.
Sweat equity only makes sense when sweat creates equity. When capital is the primary creator of equity, compensating people with equity asks them to bear a risk they have limited ability to influence.
What comes instead
The best AI researchers and engineers command enormous salaries because their contributions are so leveraged: a single architectural insight can save millions in compute costs.
But the compensation model needs to reflect the actual economics. Higher cash, lower equity dependence, particularly for early employees. Equity grants structured to account for expected dilution. Honesty with potential hires about capital intensity and what it means for their ownership stake.
Some AI companies already get this. The most competitive offers from AI labs include base salaries of $400,000, $500,000, or more for senior researchers. They can't credibly offer the equity upside that a traditional startup could, so they compensate with cash.
The lean, equity-driven, everyone-sacrifices-together culture that defined Silicon Valley for decades is poorly suited to AI companies. You can't train a large language model in a garage. You need datacenters, and datacenters require capital.
From labor-constrained to capital-constrained
For 30 years, the binding constraint on technology companies was attracting and retaining talented people. Capital was abundant and undifferentiated: one dollar of venture capital was much like another. What mattered was what people did with it.
In AI, capital is becoming the binding constraint, and not all capital is equal. The companies with cloud provider relationships, favorable compute contracts, proprietary data: those companies have advantages that no amount of sweat equity can overcome.
This is uncomfortable for an industry that built its identity around the primacy of human ingenuity. The Ron Wayne story works as mythology because it assumes that the value was in the labor, the partnership agreement drafted by hand, the logo sketched at the kitchen table. In AI, the value is increasingly in the GPU hours. The kitchen table is irrelevant without the datacenter.
The interesting question is what happens to startup culture when the garage stops being a viable origin story. I think (and I'm probably wrong about the timeline) we're about 5 years from a world where "AI startup" and "capital-light startup" are recognized as contradictions in terms, and the compensation structures will have finished catching up by then. The founders still operating on sweat equity mythology at that point will have asked their teams to make sacrifices the business model can't repay.