The Collapse of AI Companies: What We Can Learn and Preserve from the Fallout

AI companies dominate stock indices and boardroom conversations, yet the fear of collapse grows each quarter. The current AI investment wave looks less like stable technology innovation and more like an over-leveraged bet on automation that does not work as promised. Drawing on the arguments popularized by Cory Doctorow, this text examines how business failure in AI mirrors past bubbles, which lessons matter for workers and users, and which tools deserve preservation once the fallout hits. The focus is not on predicting the future of AI but on dissecting who benefits, who pays, and what remains when the hype drains away.

Behind the glossy demos stands a harsh economic reality. Growth-obsessed tech monopolies need a new story to justify extreme valuations, after crypto and the metaverse lost their shine. AI companies fill that role by selling investors a narrative where software replaces expensive labor at industrial scale. Yet in practice, AI systems often degrade quality, hollow out expertise, and create what Doctorow calls “reverse centaurs”: humans reduced to biological peripherals for indifferent algorithms. When this model breaks, the fallout risks looking similar to past speculative frenzies in crypto, as illustrated by analyses of the Trump-linked crypto empire collapse or the fragility of leveraged Bitcoin wealth. Understanding these parallels helps identify what deserves preservation after the AI bubble bursts: open tools, human skills, and hard infrastructure rather than financial fantasies.

The collapse of AI companies and Doctorow’s critique of growth myths

The central thesis behind “the collapse of AI companies” view is brutally simple. Big tech firms already hold near-monopoly positions in ads, mobile ecosystems, and cloud infrastructure. Once a company saturates its market, investors start to treat it as “mature,” which threatens its price-to-earnings ratio and its ability to buy rivals or talent with inflated stock. To avoid this reclassification, executives need a fresh growth story that looks large enough to move the needle on trillion‑dollar valuations. AI companies provide exactly that narrative.

Cory Doctorow stresses that this is less about pure technology innovation and more about financial theater. The pitch to Wall Street is straightforward: AI will do your job, so your boss can fire you, pocket half your salary as savings, and send the other half to an AI vendor. That scenario underpins eye‑watering projections of trillions in new value. History shows similar scripts in crypto bubbles and metaverse pushes, covered in depth in pieces like the warning about an AI bubble that even Google’s CEO tiptoes around. The technical claims matter, but the real engine is the requirement to keep growth multiples alive at any cost.

AI companies, monopoly power and the bubble machine

Once monopoly control exists over ads, search, mobile operating systems, and cloud platforms, organic expansion slows. Growth stories shift from winning markets to “reinventing” them. In practice, this means a cycle of hype around video, crypto, the metaverse, and now AI. Each cycle burns capital in the hope that one narrative sticks long enough to bridge to the next. When an executive says AI will “transform everything,” the subtext is often “our P/E ratio needs a new reason to stay inflated.”

Doctorow argues that this recurring pattern feeds a structural bubble machine. A handful of AI companies soak up hundreds of billions, trading IOUs among themselves while relying on cloud credits, cheap money, and stock‑based acquisitions. Evidence from crypto history reinforces how fragile such constructions look when external conditions change. Reports like the analysis of the historical performance of ICOs show how retail investors end up holding the bag once insiders exit. AI sits on the same fault line, with a far larger footprint across core infrastructure.

Reverse centaurs and the human cost of AI business failure

One of the strongest lessons to draw from Doctorow’s work is the distinction between centaurs and reverse centaurs. A centaur uses technology as an extension that amplifies human strengths. A reverse centaur serves as the fragile wetware add‑on to a rigid machine process. AI companies promise centaur‑style enhancement but often deploy systems in ways that push workers into the reverse position, bearing stress and blame while algorithms dictate pace and outcomes.

Consider a surveillance‑saturated AI logistics system that measures every motion of a delivery driver. Cameras flag “distraction,” microphone models classify singing as unproductive, and route optimizers squeeze breaks to seconds. The van cannot walk parcels to the door, yet the driver no longer controls time or tempo. When the AI misjudges conditions, management disciplines the driver, not the code. Business failure for such AI companies would not erase the damage: normalized surveillance, degraded trust, and burnt‑out staff remain as fallout that must be addressed and, where possible, reversed.

AI in radiology and coding: where the promise breaks

Radiology often appears in business decks as proof that AI will outperform experts. Doctorow flips this logic. A realistic deployment would use AI for secondary screening, letting human radiologists process fewer images per day with higher accuracy. That path improves outcomes but increases costs. The real commercial pitch targets executives instead: fire nine out of ten radiologists, pay an AI vendor a large subscription, and leave one human as an “accountability sink” for missed tumors.

Software engineering shows similar tensions. Many developers appreciate AI assistance for boilerplate or quick refactors. However, when management uses this as a reason to lay off experienced staff and expect the remaining team to validate vast volumes of AI‑generated code, the error profile changes. Statistical models generate plausible but subtly flawed library calls or patterns that attackers exploit. Senior developers, the very people most likely to catch such issues, often appear first on layoff lists. Studies about AI and employment, such as discussions of AI replacing jobs in knowledge work, frequently ignore this dynamic of quality collapse disguised as productivity gain.
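
To make that failure mode concrete, here is a minimal, hypothetical sketch in Python; the table and column names are invented for illustration. Both functions look equivalent at a glance, which is exactly why a thinned‑out review team under volume pressure misses the difference.

```python
# Hypothetical sketch of the pattern described above: code an assistant might
# plausibly generate. It runs and looks idiomatic, but interpolating user
# input into SQL opens an injection hole.
import sqlite3

def find_user(conn: sqlite3.Connection, username: str) -> list[tuple]:
    # Plausible but flawed: f-string interpolation instead of parameter binding.
    # Input like  ' OR '1'='1  returns every row in the table.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_reviewed(conn: sqlite3.Connection, username: str) -> list[tuple]:
    # What an experienced reviewer insists on: driver-side parameter binding.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```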

Economic fallout: crypto lessons for the future of AI collapse

AI companies operate in a macro environment shaped by earlier speculative frenzies. Crypto experienced waves of euphoric adoption and brutal corrections, linked to political branding, celebrity endorsements, and “digital gold” narratives. Many high‑profile figures promoted tokens that later imploded, as highlighted by investigations into crypto‑driven wealth collapses around political personalities or corporate strategies tied to aggressive Bitcoin exposure. The AI rush borrows heavily from that playbook but with stronger integration into critical infrastructure.

When AI valuations unwind, the immediate victims will include pension funds and index investors heavily exposed to big‑tech indices. Yet the secondary fallout spreads further. Data center overcapacity, stranded GPU clusters, and abandoned experimental platforms create a material mess. The comparison with articles covering potential collapse scenarios for crypto‑heavy firms is instructive. In both cases, the question becomes whether anything productive remains once the leveraged structure fails.

Bubble mechanics: AI companies, finance, and risk transfer

Doctorow underlines how AI companies function as risk transfer machines. Insiders enjoy stock appreciation fueled by narratives that promise radical automation. External investors absorb the downside when expectations fail. The mechanism resembles early crypto markets, where structured products and derivatives amplified volatility. Analysts tracking major Bitcoin and Ether declines show how cascading liquidations magnify stress well beyond initial triggers.

In AI, the leverage takes a different form. Cloud credits, stock‑based compensation, vendor financing, and cross‑investments between hyperscalers all layer on hidden dependencies. When one major AI company stumbles, related suppliers and customers experience rapid mark‑to‑market shocks. This systemic link explains why Doctorow describes AI as “asbestos in the walls”: something stuffed everywhere in pursuit of short‑term returns, with long‑term cleanup costs pushed onto society. The core lesson is clear: bubble economics, not pure technology capabilities, shapes the real risk profile.

Preservation after the fallout: what remains useful when AI companies fail

Despite the criticism, Doctorow does not argue for technological regression. The key question is what deserves preservation once AI companies collapse. History offers encouraging precedents. Fraud‑ridden telecoms left behind fiber networks that later delivered cheap broadband. Failed dot‑coms produced skills, open protocols, and codebases that still power the web. In the same spirit, the AI fallout will likely leave infrastructure and tools that become more valuable once speculation ends.

Three categories stand out. First, human capital: thousands of engineers trained in applied statistics, systems design, and ML operations. Second, hardware: surplus GPUs and accelerators repurposed for climate simulation, scientific modeling, and visual‑effects work. Third, open models: smaller systems fine‑tuned for targeted tasks like transcription, summarization, and image editing that run on commodity devices. Outside speculative frames, these look less like world‑eating intelligence and more like efficient plugins.

Open models, local tools, and healthy technology innovation

Once capital markets lose interest in giant loss‑making “foundation models,” attention will shift to compact, auditable systems. Local transcription, image description, and background removal already deliver strong value on laptops and phones. These capabilities do not require permanent internet connectivity, invasive data collection, or dependence on a single vendor. They align with a healthier view of technology innovation where tools empower users rather than extract rents or enable mass surveillance.
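
As an illustration of how small that footprint can be, here is a minimal local‑transcription sketch using the open‑source Whisper model. It assumes the `openai-whisper` Python package and ffmpeg are installed; the audio file name is a placeholder.

```python
# Minimal local transcription sketch: everything runs on the user's machine,
# with no cloud API, account, or telemetry involved.
# Assumes: pip install openai-whisper   (ffmpeg must be on PATH)
import whisper

model = whisper.load_model("base")          # compact model; a laptop CPU suffices
result = model.transcribe("interview.mp3")  # placeholder file name
print(result["text"])                       # plain transcript, usable anywhere
```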

Doctorow’s argument points toward an ecosystem where AI behaves more like a standard library or plugin. Developers integrate models as components, not as monolithic replacements for human judgment. This framing also reduces the leverage that allowed a small cartel of AI companies to dictate terms to entire industries. The most valuable preservation work, then, involves documenting open weights, training procedures, and deployment patterns before corporate collapses bury them behind bankruptcy proceedings and asset sales. Turning bubble residue into public infrastructure transforms fallout into foundation.
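
A brief sketch of that “component, not monolith” framing, with purely illustrative names: the application depends on a narrow interface, so any model that satisfies it, local or otherwise, can be swapped in without renegotiating terms with a single vendor.

```python
# Sketch of model-as-component: the app codes against a narrow interface,
# not against any one vendor's SDK. All names here are illustrative.
from typing import Protocol

class Summarizer(Protocol):
    def summarize(self, text: str) -> str: ...

class TruncatingSummarizer:
    """Trivial stand-in; a local model wrapper would slot in the same way."""
    def summarize(self, text: str) -> str:
        return text[:200]

def build_digest(articles: list[str], summarizer: Summarizer) -> str:
    # Human judgment stays in charge of structure; the model is a helper.
    return "\n\n".join(summarizer.summarize(a) for a in articles)

print(build_digest(["A long article body goes here..."], TruncatingSummarizer()))
```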

Legal lessons: copyright, AI art and business failure

A large share of the public debate around AI companies centers on training data and copyright. Doctorow takes an unpopular but rigorous position. Existing copyright law allows reading, counting, and publishing facts about works. Model training fits that framework. Extending copyright to cover such analysis risks capturing a wide range of socially useful activities, from search indexing to academic text mining, while doing little to protect individual artists from business failure at AI companies or media conglomerates.

He emphasizes a structural point. Concentrated media markets ensure that extra rights usually benefit intermediaries, not creators. History since the 1970s shows a steady increase in copyright scope alongside a declining income share for working artists. Lawsuits framed as defending creativity often result in boilerplate contract updates in which studios and record labels demand assignment of new rights. In that environment, a “training right” looks less like a shield for artists and more like fresh ammunition for corporate negotiations.

Public domain, AI output and strategic preservation

The more promising legal development highlighted by Doctorow comes from the US Copyright Office. By insisting that AI‑generated works do not qualify for copyright unless significant human creative input is involved, regulators introduce a business constraint that market hype often ignores. If a studio produces a film entirely with AI, competitors may copy or redistribute it freely. To secure exclusive rights, companies must employ human creators and document their contribution.

This principle yields a clear preservation path. Hybrid workflows where humans direct and refine AI outputs remain protectable and therefore commercially viable. Fully automated content farms, in contrast, produce unprotected material that others mirror, remix, or undercut at will. From a lessons perspective, this regime nudges the system toward centaur‑style collaboration and away from reverse‑centaur exploitation. It also strips a key revenue story from AI companies that promise to eliminate creative labor altogether.

Worker power, sectoral bargaining and resilience after collapse

An essential strand in Doctorow’s thinking addresses labor power. The Writers Guild of America strike showed how organized workers in a creative sector can force employers to negotiate limits on AI deployment. Collective action secured safeguards around credit, compensation, and minimum human involvement in script production. Unlike most occupations, US screenwriters enjoy sectoral bargaining, allowing them to negotiate as a unified bloc against every major studio.

Most workers lack such leverage. Traditional labor law restricts sector‑wide agreements in many countries, fragmenting bargaining into firm‑level fights where large corporations hold overwhelming advantage. In an AI context, this fragmentation makes it easier for employers to set reverse‑centaur conditions and present them as inevitable. One of the deepest lessons from Doctorow’s analysis is that durable protection against abusive AI deployment depends more on collective bargaining frameworks than on fine‑tuned copyright amendments or technical standards.

Key practical lessons for workers and users

Condensing Doctorow’s perspective into actionable guidance leads to a concrete list of behaviors and priorities. These help individuals interpret AI company claims and prepare for fallout when business failure occurs. The goal is not fear, but clear‑eyed risk management.

  • Treat promises of full job replacement with skepticism and examine who gains from the narrative.
  • Resist reverse‑centaur roles where humans carry legal and emotional risk for opaque systems.
  • Support open, local AI tools that run on devices you control instead of opaque cloud platforms.
  • Back efforts toward sectoral bargaining and collective standards on AI use at work.
  • Track parallels with other financial bubbles, such as aggressive crypto leverage or political token schemes.
  • Focus advocacy on data rights, workplace conditions, and surveillance limits rather than narrow copyright tweaks.
  • Push institutions to document and open‑source models and tooling before distressed asset sales lock them away.

Each of these points aligns with the broader thread in Doctorow’s work: treating AI not as prophecy or fate, but as a set of contingent corporate decisions that respond to pressure.

Our opinion

The collapse of AI companies, if and when it arrives, will not mark the end of AI as a set of techniques. It will mark the end of a specific financial story that treats statistical models as a shortcut to suppress wages and extend monopoly power. The most important lessons from Cory Doctorow’s analysis concern where to focus resistance: on business practices that create reverse centaurs, on bubble‑economy incentives that reward hype over reliability, and on legal frameworks that entrench corporate control rather than human creativity.

Preservation efforts should prioritize open models, reusable infrastructure, and worker‑friendly governance structures. Society will likely inherit abundant compute capacity, skilled practitioners, and a library of smaller, capable tools once the current speculative wave breaks. The task then is to refuse fatalism, steer those assets toward centaur‑style augmentation, and treat the fallout from AI business failure as raw material for a safer, more accountable future of AI rather than a reason to give up on technology altogether.