In the fast-paced world of technology, it's easy to get swept away by the latest buzzwords and promises of a utopian future. For the past couple of years, Artificial Intelligence—specifically generative AI and Large Language Models (LLMs)—has dominated headlines, corporate budgets, and stock market valuations. Trillions of dollars have poured into AI infrastructure, startups, and massive funding rounds. But what if it's all built on a fragile foundation? What if the Emperor has no clothes?
This comprehensive deep-dive explores every facet of the AI bubble, from forced corporate adoption and inherent technological limitations to staggering computing costs and questionable accounting practices.
The Productivity Myth: Billions Spent, Zero Gains
The primary pitch for generative AI is that it will revolutionize the workplace, drastically speeding up software engineering, content creation, and administrative tasks. However, the data paints a starkly different picture.
According to a study published by the National Bureau of Economic Research (NBER) that surveyed 6,000 CEOs across the US, Europe, and Australia, a staggering 90% of business leaders report that AI adoption has had no impact on employment or productivity over the last three years.
Rather than streamlining workflows, AI adoption bears a suspicious resemblance to the early days of the computer revolution. Early computers were massive, room-sized machines that eventually boosted output, but the initial flood of raw data they produced actually slowed productivity down. AI is suffering from the same phenomenon, on a much larger scale: generative tools are churning out an overwhelming volume of low-quality summaries and boilerplate text, creating dense digital noise that workers must now sift through, effectively slowing down real, measurable output.
Tech critic Ed Zitron points out that if AI were genuinely going to streamline operations in a transformative way, it would have shown undeniable results by now. Instead, corporations have burned through enormous sums with nothing to show for it except a mandate that employees must use the new tools.
Forced Adoption: The "Shadow IT" Reversal
One of the most telling signs that a technology lacks organic utility is how it is distributed. When the iPhone first launched, it wasn't immediately embraced by corporate IT departments. In fact, it birthed the era of "Shadow IT"—employees secretly bringing their personal iPhones into the office and bypassing corporate systems because the technology was genuinely useful to them. Workers fought to use it.
With generative AI, the exact opposite is happening. Employees aren't sneaking ChatGPT into their workflows; bosses are forcing it down their throats.
Companies like Accenture are reportedly implementing strict mandates where employees are forced to use AI, and their performance evaluations will be directly tied to their adoption of these tools. This top-down pressure stems from a generation of executives who, as Zitron bluntly describes, are "pushing AI because everyone's blaring in their ear that AI is important," rather than identifying genuine workflow bottlenecks that the technology solves.
Furthermore, big tech has made it virtually impossible to avoid AI. It is being crammed into every possible crevice of our digital lives. Apple Intelligence forces its way into text messages, Meta AI pops up unprompted in Instagram searches, and Windows 11 features Copilot baked directly into the operating system. Zitron hilariously compares Microsoft Copilot to "a vagrant [who] moved into your basement" or someone who "crawled through your vents and starts telling you that it could generate a summary of your emails". It's ubiquitous, yes, but not by consumer consent.
The Illusion of Growth: Rigging the User Metrics
Because true, organic demand for AI chatbots is questionable, tech giants are resorting to clever tricks to artificially inflate their user numbers.
When Google transitioned its widespread Google Assistant to Gemini, or when Microsoft integrated Copilot directly into its massive Microsoft 365 suite (Word, Excel, PowerPoint), hundreds of millions of users were "magically" onboarded overnight. If you open a Google Doc and a Gemini pop-up appears, you might be counted as an active user, regardless of whether you actually engaged with the AI to accomplish a task.
This metric-rigging creates an illusion of massive adoption. If these LLMs had to stand on their own two legs as standalone products, without being subsidized by and anchored to legacy software monopolies, the genuine user base would be a fraction of what is reported to investors.
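The accounting trick is simple to sketch. In the hypothetical event log below (all names and numbers invented for illustration), counting anyone who was merely shown an AI pop-up as "active" multiplies the reported user base several times over:

```python
# Hypothetical event log: each record is (user_id, event_type).
# "exposed" = the AI pop-up merely rendered on screen;
# "engaged" = the user actually submitted a prompt.
events = [
    ("u1", "exposed"), ("u2", "exposed"), ("u3", "exposed"),
    ("u4", "exposed"), ("u1", "engaged"),
]

def active_users(events, count_exposure_as_use):
    """Count distinct 'active' users under a chosen definition of activity."""
    qualifying = {"exposed", "engaged"} if count_exposure_as_use else {"engaged"}
    return len({uid for uid, kind in events if kind in qualifying})

print(active_users(events, count_exposure_as_use=True))   # → 4 (everyone shown the pop-up)
print(active_users(events, count_exposure_as_use=False))  # → 1 (only real engagement)
```

The gap between the two counts is the whole game: bundle the AI into software people already open every day, define exposure as usage, and the adoption charts go vertical.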
The Hallucination Problem: A Foundation of Mistrust
Beyond the economic oddities, there is a fundamental technological flaw that AI companies have yet to solve: hallucinations. LLMs do not "think" or cross-reference facts; they predict the next most likely word in a sequence. Because of this architecture, they confidently make up false information.
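The mechanism is easy to demonstrate with a toy model. Assuming a tiny, entirely made-up bigram table, a greedy next-token generator will happily assemble a fluent, confident, and false sentence, because nothing in the loop ever checks truth, only probability:

```python
# Toy "language model": a bigram table mapping a word to candidate
# next words with probabilities. The table is entirely fabricated
# for illustration; note that the most probable continuation of
# "was" is false.
bigrams = {
    "the":     [("moon", 0.6), ("cat", 0.4)],
    "moon":    [("landing", 0.7), ("is", 0.3)],
    "landing": [("was", 1.0)],
    "was":     [("staged", 0.55), ("real", 0.45)],
}

def generate(start, max_tokens=5):
    out = [start]
    while out[-1] in bigrams and len(out) < max_tokens:
        # Greedy decoding: always take the highest-probability next token.
        nxt = max(bigrams[out[-1]], key=lambda pair: pair[1])[0]
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # → "the moon landing was staged"
```

Real LLMs use vastly larger learned distributions and more sophisticated sampling, but the core loop is the same: the output is whatever scores as most plausible, not what has been verified as correct.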
While companies like OpenAI constantly promise that hallucinations are being minimized, internal studies suggest otherwise. OpenAI released findings acknowledging that hallucinations are an inherent, unavoidable part of large language models.
If the primary use case for an LLM is research and data synthesis, how can any professional rely on a tool that fundamentally lies? If the only way to verify whether an AI-generated fact is correct is to already know the answer, the tool's utility as a research assistant is entirely nullified. It becomes a machine for confirmation bias, not a reliable engine for discovery.
The Endless "J-Curve" and Moving Goalposts
Despite the lack of current returns, executives remain stubbornly optimistic, forecasting a meager 1.4% average increase in productivity over the next three years. Proponents of the AI boom lean heavily on the economic concept of the "J-Curve." The argument goes that massive upfront capital expenditures (the dip in the "J") will eventually lead to a parabolic explosion in growth and profitability (the stem of the "J").
But as Zitron observes, the timeline for this promised payoff is perpetually delayed. When asked for concrete deadlines, AI leaders continuously push the goalposts into the future. Sam Altman claims we will reach Artificial General Intelligence (AGI) by the end of 2028, warning people to enjoy their jobs while they last. Anthropic’s Dario Amodei places the magic date at the end of 2027.
These distant promises serve a distinct financial purpose: they justify the immediate, unprecedented burning of cash. It is a constant plea of "we need all your money now so that we can spend it, so that then we can be rich."
Astronomical Costs: The Most Expensive Illusion in Tech
To truly grasp the absurdity of the AI bubble, one must look at the capital expenditures. Let's compare it to Amazon Web Services (AWS)—arguably one of the most consequential infrastructural shifts in modern computing history. AWS took roughly $69 billion over nine years to become cash-flow positive.
In stark contrast, OpenAI is actively raising a funding round exceeding $100 billion in a single calendar year. But that's just the tip of the iceberg:
Anthropic's Compute Bill: Anthropic raised $30 billion, but projected compute costs (for model training, bug fixes, and preventing "model drift") indicate they will need to spend $160 billion over the next three years.
OpenAI's Master Plan: According to reports, OpenAI plans to spend an unfathomable $450 billion purely on computing power in the coming years.
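To put the AWS comparison in perspective, the arithmetic (using only the figures quoted above) works out to roughly an order of magnitude difference in burn rate:

```python
# Annualized spending comparison, using the figures quoted in the text.
aws_total, aws_years = 69e9, 9   # AWS: ~$69B over nine years to cash-flow positive
openai_round = 100e9             # OpenAI: a >$100B raise in a single year

aws_per_year = aws_total / aws_years          # ≈ $7.7B per year
ratio = openai_round / aws_per_year

print(f"AWS: ${aws_per_year / 1e9:.1f}B/yr")
print(f"OpenAI raise vs AWS annual spend: {ratio:.0f}x")
```

In other words, OpenAI is raising in a single year roughly thirteen times what AWS spent per year on its entire path to profitability.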
How is this massive infrastructure funded? Much of it operates on highly questionable internal economics. Cloud providers are effectively investing cloud credits into these startups to artificially boost their own cloud revenue. This creates a dangerous codependency where big tech is feeding itself its own money to prop up the illusion of a booming AI industry.
Worse yet, the end-user products are heavily subsidized to speed-run revenue growth and secure market share. A mathematical breakdown of Anthropic's Claude subscriptions revealed that a user paying $100 a month can actually burn through $1,300 worth of computing credits. If AI companies charged what it actually costs to run these queries, subscriptions would cost hundreds of dollars a week, and the consumer user base would evaporate overnight. These companies are burning money just to keep the lights on.
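Running the reported Claude numbers makes the subsidy concrete (the $100 subscription and $1,300 compute figures come from the breakdown above; the weeks-per-month conversion is my own back-of-envelope assumption):

```python
monthly_price = 100       # what the subscriber pays (reported figure)
monthly_compute = 1_300   # compute the subscriber consumes (reported figure)

loss_per_user = monthly_compute - monthly_price   # cash lost per user per month
subsidy_ratio = monthly_compute / monthly_price   # compute consumed per dollar paid

# Break-even weekly price if compute were passed through at cost,
# assuming 52/12 ≈ 4.33 weeks per month.
weekly_breakeven = monthly_compute / (52 / 12)

print(loss_per_user)            # → 1200
print(subsidy_ratio)            # → 13.0
print(round(weekly_breakeven))  # → 300
```

A roughly $300-per-week break-even price is exactly why these subscriptions can only exist as loss leaders.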
Nvidia, Debt, and Market Anxiety
At the center of this massive capital expenditure is Nvidia, the company manufacturing the GPUs that power these data centers. Yet Nvidia's valuation has recently remained suspiciously flat despite continuous data center build-outs.
According to Zitron, this stagnation suggests that investors are slowly waking up to the math. GPUs are so expensive that tech giants cannot fund these data centers through regular cash flow; they are raising hundreds of billions of dollars in debt. This isn't just a test of the tech industry; it's a test of global private credit markets.
Data centers are horribly unprofitable without a permanent tenant, and rumors are already circulating that hyperscalers like Oracle are pausing certain data center expansions because OpenAI cannot generate enough revenue to justify the leasing costs. The market is essentially holding its breath, waiting to see if AI will miraculously prove its worth, or if the debt-fueled house of cards will collapse.
WeWork 2.0: "Community Adjusted" Chaos
The financial gymnastics required to keep the AI industry afloat bear a striking, terrifying resemblance to the WeWork disaster—but without the physical real estate.
SoftBank, famously burned by WeWork, is allegedly preparing to dump another $30 billion into OpenAI. Meanwhile, AI CEOs are beginning to use bizarre accounting metrics to hide their unprofitability. Anthropic's Dario Amodei recently suggested that profitability shouldn't be calculated via standard Cost of Goods Sold (COGS), but rather through "stylized facts" about how much a model costs versus the revenue it magically generated. Zitron equates this directly to WeWork's infamous "Community Adjusted EBITDA"—a nonsensical metric designed to hide massive operational bleeding.
The main difference between WeWork and the AI giants? WeWork actually had hard assets (leases, desks, buildings). OpenAI and Anthropic possess almost no physical assets. They hold leases on servers they don't own, employ highly paid scientists, and possess proprietary code that requires billions of dollars just to maintain. If the bubble bursts, there is virtually nothing to liquidate.
Conclusion: Waiting for the S-1
We are currently living in an era defined by Wile E. Coyote economics: tech giants are sprinting off the edge of a cliff, legs spinning wildly in the air, surviving purely on the hope that nobody looks down.
Between the lack of genuine productivity gains, the inherently flawed technology, the fabricated user metrics, and the hundreds of billions of dollars in subsidized compute costs, the generative AI industry is standing on a precipice. The ultimate reckoning will likely come when companies like OpenAI or Anthropic are forced to file their S-1 documents to go public. Once the world gets to look under the hood and see the true, unvarnished economics of these companies, the illusion will shatter.
Until then, we will continue to endure the relentless hype, the forced integration of chatbots into our daily software, and the endless promises that utopia is just one more $100 billion funding round away.