Complex Machinery

October 21, 2025

#047 - What's left after it all falls apart

If AI is indeed a bubble, what happens after the crash?

You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

The remains of a grounded airplane. (Photo by Rita Morais on Unsplash)

Risk management is ultimately a matter of asking "what if?" and "what next?" to explore possible future scenarios. Lately I've been thinking about a scenario in which the world of generative AI experiences a massive market correction.

My usual refrain is that it's technically too early to call it a bubble, but I acknowledge that the scene is rather frothy. The whiff of a correction is in the air. And now that some genAI investors are openly calling it a bubble (more on that in a moment), perhaps it's time I change my tune?

Let's say, hypothetically, that genAI is indeed a bubble that is stretched to the limit. What next? What does the world look like after it collapses? What might fade away, and what leftover pieces might become seeds for the Next Big Thing?

I break that down into artifacts, people, and ideas.

But first, let's talk about some warning signs:

Feeling bubbly

The genAI scene exhibits a number of bubble-like characteristics. I provided a short rundown in April, including the large gap between the technology's actual, proven capabilities and the hazy, advertised use cases. That gap continues to widen as investors ditch market fundamentals and due diligence in favor of irrational exuberance vibes and momentum-chasing.

Beyond the misallocation of financial capital, we see a misallocation of human capital as people change jobs in order to grab a slice of the AI pie. That's a mix of the newly-minted genAI "experts" (notice all of those LinkedIn title changes) as well as a shuffling of industry practitioners (say, accepting sizable pay packages to join FaceMeta).

More recently, big-name genAI providers and investors are singing a new song. Meta abruptly capped its AI hiring rush. OpenAI's Sam Altman flat-out said that genAI was a bubble back in August. A couple of weeks ago, Jeff Bezos one-upped Altman by clarifying that genAI isn't just any bubble, but a good one.

Bezos is not alone:

"Of course there’s a bubble,” said Hemant Taneja, chief executive of venture capital firm General Catalyst, which raised an $8bn fund last year and has backed Anthropic and Mistral. “Bubbles are good. Bubbles align capital and talent in a new trend, and that creates some carnage but it also creates enduring, new businesses that change the world.”

For the moment, I'll ignore the callous way in which they wave off the potential damage. Instead, I'll ask whether this Good Bubble™ talk is an attempt to prolong the AI euphoria. Or is it simple spin-doctoring? Or, like so much else in the world these days, are the carnage enthusiasts simply trying to Be Part Of The Conversation? That's one way to influence the narrative, for sure. But it'd also be one hell of a red flag.

I acknowledge that the bubble positivity isn't completely off-base. Tasteless? Perhaps. Lacking empathy? Seems like it. But under the right circumstances, there is a kernel of truth: collapsing bubbles are sometimes seeds from which new economies can grow.

Sometimes.

Artifacts

You can evaluate a bubble by the damage it causes on its way out. Sometimes there's a fairly narrow blast radius. The impact of the 1630s Dutch Tulip Mania, for example, was largely limited to the Netherlands. With other bubbles, like the 2008 US mortgage crisis, you learn that long tendrils connect the damage to other fields and even other countries.

(If you're thinking of that "got a wire" scene from The Hurt Locker, yes. Just like that.)

Once in a while a collapse leaves behind important, useful artifacts that can be repurposed. The 1800s UK railway bust left mile after mile of usable track. The Dot-Com bubble left a triple gift: widespread networking and telecom infrastructure, a boost to custom software development (with a generous nod to open-source tooling), and a stronger consumer appetite for doing things online. Arguably only 1.5 of those three are tangible artifacts, but combined they drove cloud computing, mobile apps, and the various flavors of The Field We Now Call AI.

So what artifacts might an AI collapse leave behind?

Datacenters, for one. This is top of mind because AI-related companies keep promising to build them, despite the growing warnings of a bubble. It's still not clear how many datacenters will actually come online before a market correction arrives. Nor do we know who might want them after the fact. That hinges on whether the world has uncovered other uses for the compute power and networking gear therein – either as a whole, or divvied up and sold for parts – and if not, whether the buildings can reasonably adapt to some other use case. It's not unlike the questions around repurposing pandemic-vacant office buildings into residential real estate.

Location is also an issue. For anyone setting up in central nowhere – and sometimes you plant a datacenter there precisely because it's far away from casual traffic and prying eyes – there will be a narrow audience for reuse of those buildings. Though a datacenter with its own power plant could be an attractive prospect. Future mad scientists, take note.

Speaking of which, the datacenter boom has also inspired a power supply boom. It should be easier to repurpose electricity than buildings or computers, so that's promising. (This is already happening, to a limited extent, as Meta is selling off its excess power.) Doubly so if the rush to establish power sources drives research into clean, renewable energy. And with enough cheap power nearby, it's not a stretch to imagine new settlements popping up in the more remote areas. Who knows? One such far-out community could eventually blossom into a bustling metropolis.

Wild times

A set of Russian nesting dolls (matryoshka). (Photo by Didssph on Unsplash)

Not all bubble artifacts lead to positive outcomes, though. Consider the collapse of the newly-capitalist, post-Soviet economy that gave way to Russia's Wild Nineties ("Лихие девяностые" for those in the know). History books chronicle this as a decade of lawlessness and economic insecurity, giving rise to oligarchs, a new criminal class, and a black market for formerly-government resources. Like, say, weaponry.

If you think that there'd be no parallel between technology and underground arms dealing, consider the strong adoption of genAI by scammers. From there, imagine what these criminals could do were someone to grant them unfettered access to those tools. They'd have no more need to sneak around, evading content moderation filters and phone-based verification. The story gets worse if you imagine those same criminal groups behind the wheel of the super-secret tools that the big players currently keep to themselves.

The same goes for raw data. Today's legal-yet-hidden-and-sleazy data broker market is a special ring of hell. It would be several times worse if some out-of-work AI bro were to put a few thumb drives to good use. A particularly enterprising person could live large selling their former employer's data to organized groups, terrorist cells, foreign governments, and local businesses.

People

A genAI collapse would also leave people behind. Specifically, people with skills in data analysis, predictive modeling, and Getting GenAI To Do Things For Real. What happens to them?

Wars may tell us a thing or two. Large-scale, extended conflicts train groups of people to work as a team in pursuit of a goal. Some of them return to civilian society as private security or firearms trainers. Others employ those same talents as muscle for criminal gangs. Or they become the gangs themselves. American servicemembers from both World War II and Vietnam came home and joined biker gangs. Former members of Mexico's special forces created Los Zetas, a group that started off as a protection squad for a cartel before becoming a cartel in its own right. And real-life members of the Peaky Blinders gang – basis for the fictional TV series of the same name – were veterans who had survived the horrors of World War I trench warfare.

AI practitioners face similarly varied (though less violent) post-bubble prospects. Companies will still have some demand for AI skills, just as plenty of tech roles survived the Dot-Com fallout, but that demand will probably be modest. Especially in the companies that went overboard building an AI department without first defining meaningful use cases. Which is … well … most companies.

The experienced, aggressive practitioners will snap up those remaining roles. Or they'll launch their own companies. We already see traces of this in the early data science cohort, some members of which have built businesses where the data analysis and modeling happen far from the spotlight. I won't name names, but you know who you are. And I salute you.

(And for one person in particular: I haven't forgotten. Let's catch up soon, shall we?)

What about the genAI hype squad? Those cheerleaders will also be out of work, right? Yes. Sort of. But they'll be fine. They'll find the next thing to shill. Because grifters gonna grift.

If you want a piece of their action, keep your ear to the ground. Make note of people who are loud but ill-informed. You know, the kind who make bold, sweeping proclamations about technology they can barely spell. They'll inflate the next emerging-tech bubble and you can go along for the ride.

Ideas

On a more positive note, collapsing bubbles can destroy barriers. This opens newer – often smaller – doors of opportunity. How might that look for AI?

In a webinar a couple of years back, I posited that genAI prices would eventually fall and we'd get new interfaces, paving the way for artists to embrace the technology on their terms.

To expand on that: I mean a naturally lower price point. None of this "artificially low prices, subsidized by VCs and hopeful investors" nonsense.

As for the people, I'll reframe "artists" as "professional artists, plus students, hobbyists, tinkerers, and anyone else with a creative streak." They're already thinking about new ways to use genAI. And they want new ways of interacting with the underlying genAI models. That brings us to …

… the interfaces. Imagine we move beyond the text-entry box and APIs. We can skip past the whole "programming" aspect altogether. What if providers were to offer genAI in more intuitive form factors that encourage experimentation?

Kick it

A guitar on the floor, in front of an array of effects pedals. (Photo by Jackie Alexander on Unsplash)

My latest analogy for one of these still-to-be-defined interfaces is a guitarist's effects pedal. To the person building it, it's a mathematical function neatly wrapped in a metal package. The function modifies the input waveform to create distortion, echo, or any other sound cooked up by some mad scientist. To the musician, it's a box with a foot switch. Wire it up to your guitar, step on it, and you're off to the races.

(When I tossed this idea at John Tolva, he pointed out: "An effects pedal is a mathematical function you can kick." We're printing t-shirts as you read this.)

This design shields you, the musician, from the mathematics and hardware skills required to build the pedal. You get to focus on how that new sound impacts your art. You can also daisy-chain pedals, as many guitarists do. And you'll quickly learn that the order of the pedals – the order in which the waveform passes through those mathematical functions – influences the sound that ultimately spills out of your amp. For a taste of what I mean, pull up Siamese Dream by The Smashing Pumpkins. That liquid-laser sound you hear is the distortion + phaser + who-knows-what-else combination Billy Corgan wired together.
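To make the order-matters point concrete: if you treat each pedal as a function that takes a waveform and returns a waveform, a pedalboard is just function composition. Here's a minimal sketch in Python – the distort and echo functions are hypothetical stand-ins I made up for illustration, not any real pedal's circuitry:

    import numpy as np

    def distort(wave, drive=5.0):
        # Soft-clipping distortion: push the signal into saturation.
        return np.tanh(drive * wave)

    def echo(wave, delay=2000, decay=0.5):
        # Mix a delayed, quieter copy of the signal back in.
        out = wave.copy()
        out[delay:] += decay * wave[:-delay]
        return out

    t = np.linspace(0, 1, 44100, endpoint=False)   # one second at 44.1 kHz
    signal = 0.8 * np.sin(2 * np.pi * 440 * t)     # a 440 Hz "guitar" note

    a = echo(distort(signal))   # distortion first, then echo
    b = distort(echo(signal))   # echo first, then distortion

    print(np.allclose(a, b))    # False: the chain order changes the sound

Swap the order and the nonlinear distortion sees a different input, so a different sound comes out. That's the pedalboard lesson in one line of code.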

Nor am I the only person to draw this analogy. Shortly after it came to mind, Tim Harford wrote a piece in the FT about Jimi Hendrix breaking new ground with a wah pedal. I imagine other people have had this idea, too. Hopefully it spreads and inspires new interfaces for AI.

Do we need a bubble's collapse in order to get there? Not at all. But we do need the hype wave to calm down. That's how we get Small And Interesting genAI, the kinds of tools we actually want. Right now we get the AI Created By Companies Desperate To Recoup Their VC Investment. And it's lame.

So if a crashing AI bubble is one path to fun AI, I'm all for it.

Postscript

In the days when I covered web3, people rightfully pointed out that the damage from crypto fallout was isolated to a small sector of the economy. You would occasionally find, say, a retirement fund that had inadvertently taken on exposure to crypto tokens because they'd made an investment that was several levels removed. But that was the extent of crypto's reach into large-scale mainstream finance. Now that the US government has been pushing so hard on crypto, that may no longer hold true.

I hold similar concerns about genAI. It's one thing to say "the quirky chatbot shut down" or "a few startups collapsed." But between the money and people pouring into the sector, the companies caught up in it, the investments (both individual and group-retirement plans) fueling it, and now governments that are hell-bent on some bullshit "AI arms race," we have a problem. Even the most AI-skeptical individual still holds exposure to that sector. They will be affected if and when things come crashing down.

So, yes. Back to the point about the so-called "good" bubbles. Hopefully the carnage from a genAI fallout is limited to the investors and providers. You know, the people who chose to gamble. The rest of us deserve new AI toys.

In other news …

  • An attorney gets caught using AI to defend himself … for using AI. (Futurism)
  • An attorney gets caught using AI to– no, this is a different case. (The Guardian)
  • Consulting firm Deloitte gets caught using AI (is there an echo in here?) and issues a refund. (Ars Technica)
  • A couple of journalists tested Sora. Here's one take, from Le Monde and ... (Le Monde 🇫🇷)
  • ... here's another, creepier experience from a journalist at Insider. (Business Insider)
  • OpenAI reverses its stance on adult-themed content, just in time to offer its own NSFW chatbot. If you see OpenAI's goal as "get people to spend tokens," this will certainly fit the bill. (CNBC)
  • Large-scale Bitcoin miners are getting a piece of the AI datacenter action. This is a top-notch business move across both AI and crypto. Bravo. (Bloomberg)
  • You know how a person's search history sometimes winds up as evidence in a court case? Now that people are using genAI as a search substitute, their chatbot history can also wind up as evidence in a court case. (Rolling Stone)

The wrap-up

This was an issue of Complex Machinery.

Reading online? You can subscribe to get this newsletter in your inbox every time it is published.

Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.

Disclaimer: This newsletter does not constitute professional advice.
