

Artificial intelligence is tracing the familiar, weary boom-and-bust trajectory identified in 1837 by Lord Overstone: quiescence, improvement, confidence, prosperity, excitement, overtrading, convulsion, pressure, stagnation, and distress.
There are three primary concerns. First, there are doubts about the technology. Building on earlier technologies such as neural networks, rule-based expert systems, big data, pattern recognition, and machine learning algorithms, generative AI, the newest iteration, uses large language models (LLMs) trained on massive data sets to create text and imagery. The holy grail is the ‘singularity’, a hypothetical point at which machines surpass human intelligence. It would, in Silicon Valley speak, lead to ‘the merge’, when humans and machines come together, potentially transforming creativity and technology.
LLMs require enormous quantities of data. Established firms in online search, e-commerce, and social media can exploit their own troves. These are frequently supplemented by aggressive and unauthorised scraping of online data, some of it confidential, leading to litigation over access, compensation, and privacy. In practice, most AI models must rely on incomplete data that is difficult to clean for accuracy.
Despite massive scaling up of computing power, genAI consistently fails at relatively simple factual tasks because of errors, biases, and misinformation in its datasets. AI models are adept at interpolating between examples within their training data, but poor at extrapolating beyond it. This means they struggle with novel problems, and their ability to act autonomously in dynamic environments remains questionable. Cognitive scientists argue that simply scaling up LLMs, which rest on sophisticated pattern-matching built to autocomplete, will disappoint. The claimed progress is difficult to measure, as the benchmarks are inconclusive.
Cheerleaders miss that LLMs do not reason; they are probabilistic prediction engines. A system that trawls existing data, even assuming that data is correct, cannot create anything genuinely new. Once the existing data sources are devoured, further scaling produces diminishing returns. Rather than fully generalisable intelligence, generative models are regurgitation engines struggling with truth, hallucinations, and reasoning.
AI models can take over certain labour-intensive tasks like data-driven research, writing, travel planning, computer coding, certain medical diagnostics, testing, and routine administrative work such as handling standard customer queries. But the technology’s loftier aims may prove elusive. Microsoft’s CEO drew the ire of true believers when he argued that AI had yet to produce a profitable killer application to match the impact of email or Excel.
For the moment, genAI, an ill-defined marketing term rather than a technical one, remains a costly parlour trick for low-level applications: making memes and helping scammers deceive.
Second, financial returns may prove elusive. Capital expenditure on AI is expected to reach $5-7 trillion by 2030. It has contributed around 40 percent of 2025 US growth, or roughly a full percentage point. AI companies account for 80 percent of US stock returns. AI startup valuations, based on the latest rounds of funding, stand at $2.3 trillion, up from $1.69 trillion in 2024 and $469 billion in 2020. But AI’s capacity to generate cash and returns on the large required investment remains questionable.
Revenues would have to grow more than 20-fold from the current $15-20 billion a year to cover the current investment in land, buildings, rapidly depreciating chips, and power and water. Revenues of more than $1 trillion may be required to earn an adequate return. Microsoft’s Windows and Office, among the world’s most used software, generate less than $100 billion in commercial and consumer revenue. Fewer than 3 percent of ChatGPT’s 800 million users currently pay for the service.
The hope is that AI will be paid for out of higher productivity and corporate profits. But 95 percent of corporate genAI pilot projects have failed to lift revenue growth. Many firms, after cutting hundreds of jobs and replacing them with AI, were subsequently forced to rehire staff when the technology proved inadequate.
Monetisation of AI faces other uncertainties. In early 2025, the cheaper Chinese DeepSeek-R1 model cast doubt on the capital-intensive approach of Western firms. China’s favoured open-source approach would also undermine firms that have invested heavily in proprietary technology.
Meanwhile, AI firms remain cash-burning furnaces. In the first half of 2025, OpenAI, owner of ChatGPT, generated $4.3 billion in revenue but spent $2 billion on sales and marketing and nearly $2.5 billion on stock-based compensation, posting an operating loss of $7.8 billion.
Third, there are financial circularities reminiscent of the dot-com boom. CoreWeave, an equipment rental business trying to cash in on the AI boom, purchases graphics processors for AI applications and rents them to users. Nvidia is an investor in the company, and the bulk of CoreWeave’s revenue comes from a handful of customers. There is concern about the rate at which the chips depreciate and about the firm’s significant borrowings. In 2025, Nvidia announced a $100 billion investment in OpenAI, which in turn committed to buying an equivalent dollar value of graphics processing units from it, and also invested in chipmaker AMD.
These transactions distort financial performance. The firm selling the capital goods reports sales and profits, while its funding of the sale is treated as an investment. The buyer depreciates the cost over periods longer than the equipment’s limited useful life. Dubious earnings boost share prices in a dizzying financial merry-go-round.
AI investment may be 17 times the scale of the 2000 dot-com bubble and four times that of the 2008 sub-prime housing bubble. Much of it is funded by debt rather than equity, with AI-linked borrowing totalling around $1.2 trillion, some 14 percent of all investment-grade debt.
Investors have convinced themselves that the greater risk is underinvesting, not overinvesting. Amazon founder Jeff Bezos hails it as a “good kind of bubble”, arguing that the money spent will bring long-term returns and deliver gigantic benefits to society, the tech-bro’s persistent bromide. But the dot-com precedent is sobering: even today, only around 50 percent of fibre-optic capacity is in use and average global network utilisation is 26 percent. When that boom ended, Microsoft, Apple, Oracle, and Amazon fell 65, 80, 88, and 94 percent respectively, taking 16, 5, 14, and 7 years to recover their 2000 peaks.
Consensual hallucinations notwithstanding, it would be surprising if the ending is different this time.
Satyajit Das | Former banker and author of The Age of Stagnation
(Views are personal)