Google Invented Generative AI. That Didn't Buy It the Market.
A new business-school case on Google's AI predicament is the cleanest empirical exhibit of the innovator's dilemma in years. The lesson generalizes.

Google invented the architecture that powers nearly every modern large language model. In May 2025, Google searches on Safari fell for the first time in more than 20 years.
For two decades, Google’s research lab produced most of the technology behind the current generation of AI. The 2017 paper “Attention Is All You Need” — written by eight Google Brain scientists — introduced the Transformer, the architecture behind nearly every commercial large language model. Geoffrey Hinton, a Google researcher for a decade until 2023, was awarded the 2024 Nobel Prize in Physics for his contributions to machine learning. Google owns DeepMind. It runs Gemini. It had roughly $100 billion in cash on the balance sheet entering 2025. And in May of that year, Apple revealed that Google searches on Safari had fallen for the first time in more than 20 years. Alphabet’s stock dropped 7 percent the same day.
That gap — between the technical leadership and the market response — is the single most useful exhibit a C-suite can study right now. Andy Wu and Anna Yang’s case study “AI Wars in 2025” lays out the predicament in unusual detail. The lesson is not about Google specifically. It is about what inventing a technology does and does not buy you when your existing business model is incompatible with the technology’s deployment.
The advantages on paper
As of early 2025, Google sat on what looked, from the outside, like an unassailable position. It led the global desktop search market with a 93.4 percent share at the end of 2022. It had the in-house chip program — the sixth-generation Trillium TPU — that let it train Gemini without paying Nvidia. It had the open-source release pipeline (Gemma) and the closed-source flagship (Gemini). It had the consumer brand. It had enterprise distribution through Workspace and Cloud. It had a lineage of AI research that competitors had spent years trying to replicate.
It still had a problem.
The trap nobody had to invent
Google’s search advertising business is one of the most efficient revenue engines ever built — an estimated 1.61 cents per search across more than 5 trillion searches a year. A chat answer that synthesizes the result removes the page of links where the ads are placed. Adding generative AI to Search introduces a new inference cost — roughly 0.356 cents per query, on top of the existing 1.06 cents per index query — for a feature that may earn less per impression than what it replaces. By June 2024, AI Overviews showed up on only 7 percent of searches, down from a 15 percent peak. AI Mode, the more aggressive chat-style interface, was rolled out cautiously the following May.
That is not technical caution. It is business-model arithmetic. The chat product cannibalizes the ad product. Every percentage point of query traffic shifted is a measurable revenue change. The same arithmetic that lets a public company forecast quarterly earnings is the arithmetic that traps it.
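The arithmetic above can be sketched directly from the case’s per-query figures. In the sketch below, the 15 percent AI-answer share and the halved monetization rate are hypothetical inputs chosen for illustration, not figures from the case:

```python
# Back-of-the-envelope cannibalization arithmetic, using the per-query
# estimates quoted in the case. All dollar figures are rough estimates.
QUERIES_PER_YEAR = 5e12        # more than 5 trillion searches a year
REVENUE_PER_QUERY = 0.0161     # ~1.61 cents of ad revenue per search
INDEX_COST = 0.0106            # ~1.06 cents to serve a traditional result
AI_INFERENCE_COST = 0.00356    # ~0.356 cents of added inference cost

def annual_margin(ai_share, ai_revenue_per_query):
    """Blended annual gross margin when `ai_share` of queries get a
    generative answer earning `ai_revenue_per_query` instead of ads."""
    classic = (1 - ai_share) * (REVENUE_PER_QUERY - INDEX_COST)
    ai = ai_share * (ai_revenue_per_query - INDEX_COST - AI_INFERENCE_COST)
    return QUERIES_PER_YEAR * (classic + ai)

baseline = annual_margin(0.0, 0.0)
# Hypothetical scenario: AI answers on 15% of queries, monetizing at half
# the classic rate. Both numbers are illustrative assumptions.
shifted = annual_margin(0.15, REVENUE_PER_QUERY / 2)
print(f"baseline margin:   ${baseline / 1e9:.1f}B")
print(f"with AI answers:   ${shifted / 1e9:.1f}B")
print(f"annual difference: ${(shifted - baseline) / 1e9:.1f}B")
```

Even under generous monetization assumptions, the higher serving cost and lower revenue per answer show up as a multi-billion-dollar swing, which is the sense in which every percentage point of shifted traffic is a measurable revenue change.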
Open source breaks the moat
The trap is harder because Google cannot wait it out. A 2023 leaked internal memo titled “We Have No Moat, and Neither Does OpenAI” warned that open-source releases would erode the defensibility of any closed model. By 2025, the empirical case was in. Meta released LLaMA, then LLaMA 2, then later versions, each available for commercial use within license limits. Google released Gemma. DeepSeek released R1, trained at roughly one-fifth the compute of comparable models. Competitors do not have to match a closed model on quality. They have to be free, recent, and good enough — and they are.
For an incumbent, this changes the structure of the competitive question. The choice is no longer “build a better model than OpenAI.” It is whether the closed-model business is defensible at all when “good enough” is downloadable. Open-source pressure is not a marketing trend; it is a unit-economics correction.
What the case is actually about
The case is structured around questions Google has to answer in the next 12 to 24 months. A strategy student grades Google’s options; an operator faces a different question: what general lesson does the case prove?
The lesson is that inventing the technology in your lab does not buy you the right to deploy it through your distribution. Google’s research dividend — the Transformer, DeepMind, Hinton’s lineage, the TPU program — is real. It does not translate into the freedom to absorb the cannibalization cost on a 1.61-cents-per-search business. The same dividend made it possible for OpenAI, with no advertising business to protect, to ship ChatGPT in November 2022 and reach 100 million monthly active users in two months. Microsoft, with no consumer search business of any consequence, could risk a $10 billion bet on OpenAI without endangering Office or Azure. Google could not.
That is the innovator’s dilemma in a form that is unusually hard to wave away. Christensen’s original framing addressed disruption from below, where the new technology was initially worse than the incumbent’s offering. The current situation inverts that. The new technology is, in some uses, dramatically better. But the incumbent still cannot deploy it at full speed because the deployment destroys the revenue that funded the research. The most important AI capability in this environment is not technical. It is the judgment to know how fast to cannibalize a working business in service of a potentially larger one.
Where this leaves the rest of the C-suite
Most companies will never face Google’s specific problem at Google’s scale. They will face the same shape of it. Three questions reliably surface from the case:
- What part of the existing revenue model is incompatible with the new mode? If the answer is “none,” the AI rollout is a product question. If the answer is “a meaningful share,” it is a portfolio question, and it has to be sequenced and resourced as one.
- What are the unit economics of the new mode at scale? OpenAI lost an estimated $5 billion in 2024. Subscription conversion remains low — Microsoft 365, Netflix, and Spotify Premium have each capped out near 400 million paying users globally. Building the product is the easy half; making it pay is the half nobody has resolved.
- What does the open-source floor cost to underwrite? If a free, locally-runnable model can do 80 percent of the work, the closed product has to defend the remaining 20 percent on quality, integration, or trust. Each of those is a different investment.
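The second question also reduces to simple arithmetic. The sketch below is a hypothetical illustration that combines two figures from the article; it is not a calculation the case itself performs:

```python
# Hypothetical break-even sketch, combining two figures from the article:
# OpenAI's estimated $5B loss in 2024 and the ~400M-paying-user ceiling
# observed for Microsoft 365, Netflix, and Spotify Premium.
ANNUAL_GAP = 5e9              # estimated annual loss to cover
SUBSCRIBER_CEILING = 400e6    # best-case mass-market subscriber count

def breakeven_price_per_month(annual_gap, subscribers):
    """Additional monthly revenue per subscriber needed to close the gap."""
    return annual_gap / subscribers / 12

price = breakeven_price_per_month(ANNUAL_GAP, SUBSCRIBER_CEILING)
print(f"break-even at the ceiling: ${price:.2f} per user per month")
```

The required price turns out to be trivial, about a dollar a month, if the ceiling could be reached; the binding constraint is conversion, not price, which is the sense in which making it pay remains unresolved.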
. . .
Two decades of dominance produced something Google did not anticipate it would need: the habit of treating its core business as the thing to defend. The decision in front of its leadership now is whether to keep defending it, or to spend it. The case ends without an answer. The companies watching from outside should take seriously how hard the answer is, even for the company that invented the underlying technology.