#046 - It's a long way down
genAI keeps digging a hole. Will it be able to climb out?
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

When I grow up, I want to write for FT Alphaville.
Granted, this will never happen as I 1/ clearly can't publish on any kind of deadline (Exhibit A: this newsletter) and 2/ have no plans to ever "grow up" (I do just enough adulting to get by). But as a close second I would happily hang out with the Alphaville crowd and spring for drinks.
I imagine they're a fun bunch. They've carved out that perfect niche of snarking at the foolishness of the business world, all from the safety and prestige of the FT's castle walls. Through this they've managed to drop memes, 1990s rap lyrics, and even the word "douchebag" (by quoting Urban Dictionary) into the otherwise stiff and formal finance newspaper.
Three recent Alphaville pieces, in particular, have caught my eye:
- A rundown of the Nvidia/OpenAI deal.
- A look at how AI's bubble-in-all-but-name might fall apart.
- An interview with legendary short-seller Jim Chanos. You may not recognize his name, but he was early to detect trouble at Enron.
A lot more bricks in the debt wall
Those three articles led me to review an essay I wrote last year, titled "The Looming AI Debt Wall."
In it, I drew parallels between the debt wall in commercial real estate and the promises made by the AI field – genAI in particular. Just like commercial real estate loans, the various flavors of AI debt will eventually come due. And it's not clear whether AI will be able to pay up. As of late, other voices have begun pointing out that the technology has yet to bear widespread, meaningful fruit to match the increased spend.
One key difference between the two fields is that real estate debt is easier to track: those loans have documented, predefined dates at which the borrower must either pay up or refinance. AI "debt" has no such formal due date. Investors put their money in and Someday™ it (might!) come back at a multiple. Precisely when that happens is based on a sequence of events, instead of a specific timetable.
At least, that was my take until I learned that companies have been loading up on real debt, using hard-deadline loans to fund their kinda-possibly-maybe-deadline genAI outcomes. Which wouldn't be so bad if AI were poised to deliver actual utility, and not just hope and vibes.
Hope and vibes are fine if you're selling wholesale, as a big genAI vendor. If you are selling retail – you buy your raw AI from someone else, dress it up in some nice packaging, then pass on the markup – you need that AI to actually work. Why? Because the retail approach requires that your buyers want the AI more than you do. And if your AI-backed product doesn't work, it will gather dust on the shelf.
Sometimes you can be slick and bundle the not-quite-working AI with something that actually does work. But that's a move for a Microsoft or a Google – a company selling industry-standard tools that already have an entrenched user base. The other 99% of the marketplace will have to find a different approach.
I get that AI is deep into Gold Rush territory right now. For investors and even some vendors, it's lottery tickets all the way down. A twisted version of the Silicon Valley VC model. "Just throw everything into AI, and eventually we'll find the use case with hockey-stick returns."
The difference is that the mainstream lotteries are a couple bucks a pop. The kind of money you won't miss if you only play now and then as you dream of the big jackpots. The way companies are treating AI, they've waded into "problem gambler" territory. They're spending a lot of their money on the rare chance of a win, and taking out loans to cover their habit.
If that sounds similar to The Gambler's Ruin, that's because it is. And while the genAI crowd might attract the infinite cash required to overcome the odds, they don't have the equally requisite infinite time frame.
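If you want to see that dynamic for yourself, here's a minimal sketch in Python. The numbers are invented for illustration – a modest bankroll, a rare 50x payout, a steep house edge – but the shape of the result is the point: with finite cash and a negative-expectation bet, going bust is a question of when, not if.

```python
import random

def ruined_before(bankroll: float, bet: float, win_prob: float,
                  payout_mult: float, max_rounds: int) -> bool:
    """Place repeated bets; return True if the bankroll runs dry
    before max_rounds is up (i.e., the gambler is ruined)."""
    for _ in range(max_rounds):
        if bankroll < bet:
            return True                     # can't cover the next bet: ruin
        bankroll -= bet
        if random.random() < win_prob:
            bankroll += bet * payout_mult   # the rare hockey-stick win
    return False

# Illustrative (made-up) numbers: a 1-in-100 shot at a 50x payout
# returns only $0.50 on the dollar in expectation.
trials = 10_000
ruined = sum(
    ruined_before(bankroll=100, bet=10, win_prob=0.01,
                  payout_mult=50, max_rounds=1_000)
    for _ in range(trials)
)
print(f"ruined: {ruined / trials:.1%} of {trials:,} gamblers")
```

Run it a few times. The occasional jackpot buys more rounds, but the downward drift wins in the end – and stretching max_rounds only pushes the ruin rate higher. That's the bit about time frames.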
Break open the bubbly
Speaking of that intersection of debt and possible outcomes, there's been an uptick in bubble talk.
(Longtime readers know that I avoid describing AI as a bubble. We technically can't discern a bubble from an extended bull run until after things come crashing down. Still, not everyone matches my discipline.)
Financial analysts have been pointing out just how much money has piled up in the House of AI:
Fortune reports that a note written by George Saravelos of Deutsche Bank warned that spending in the AI sector is “parabolic.” In fact, it is so vast, the researcher said, that it might single-handedly be propping up the American economy. “AI machines—in quite a literal sense—appear to be saving the U.S. economy right now,” he wrote. “In the absence of tech-related spending, the U.S. would be close to, or in, recession this year.” That checks out: the Wall Street Journal reported earlier this year that capital spending on AI has contributed more to US economic growth than all consumer spending combined.
[...]
“The bad news is that in order for the tech cycle to continue contributing to GDP growth, capital investment needs to remain parabolic. This is highly unlikely,” Saravelos warned.
It's bad enough to hear the bubble warning from the investor-guidance crowd. Even worse to hear it from people who are heavily invested in the field and trying to attract even more cash. Sam Altman made headlines when he said it in August. More recently, Jeff Bezos went as far as to say that AI was a "good" bubble:
“This is a kind of industrial bubble, as opposed to financial bubbles,” Bezos said in a conversation with Ferrari Chair John Elkann in Turin, Italy.
“The ones that are industrial are not nearly as bad,” he continued. “It could even be good because when the dust settles and you see who are the winners, society benefits from those inventions.”
He pointed to a bubble that formed in the biotechnology industry in the 1990s, noting investors “all lost money” when it burst, but society ultimately received several lifesaving drugs as a result.
Bezos added that AI will provide "gigantic" benefits to society, and that the excitement around bubbles drives investment and experimentation.
I don't completely disagree with those last two takes. An industrial bubble can indeed lead to wider social benefit (something I plan to cover in the next newsletter). And when excitement leads people to try The New Thing, that creates a testing lab in which companies collectively figure out what The New Thing is actually good for.
Yes.
Yes, and.
While the scattershot approach isn't wholly irrational, it is a rather foolish, painfully inefficient way to explore the solution space.
Far better to go in armed with the knowledge of what genAI can actually do, so you're able to evaluate use cases before building them out and watching them fail in public. I sense that many of the public-facing genAI flubs are rooted in a lack of AI literacy.
You also see this in the choice of use cases. While companies are trying to cram genAI into every crevice, they're none too creative about it. Look closely and you'll see that most of them are trying the same handful of ideas. Ideas that don't work so well.
What's that line, about trying the same thing over and over but expecting a different result?
Is it insanity?
Or maybe it's a Monte Carlo simulation?
No. It's definitely insanity. People heed the lessons learned from a Monte Carlo sim.
I strongly disagree with Bezos's point that AI does not represent a financial bubble. Even sticking to my "you can't call it a bubble till after the fact" approach, consider the amount of money invested in AI and how the market valuations are way out of line with actual, proven utility. AI rests on a fragile pedestal of hope, with little to break its fall once the excitement fades. (Remember just a few months ago, when the markets briefly shed about a trillion dollars because of a dip in genAI sentiment?) Should that happen, expect a scramble for the exits.
(Are you thinking of the Richard Bookstaber analogy about a fire in a nightclub with a single, narrow exit? You should be.)
That said, I do agree that even a financial bubble could be a good one. Because every financial bubble is good for someone! As I said in the previous newsletter, that's the nature of zero-sum games. The few people who are right split the winnings. And those winnings are the collective capital of those who lost out.
So when someone says that a bubble will be a good thing, just know that they mean good for them.
Product guidance
If you're searching for one of those rare, meaningful genAI use cases, allow me to offer some free product advice.
(This is not professional advice.)
Look at your product-to-be and ask yourself:
Who the hell wants this?
Specifically:
Does anyone outside of this room want this? Or is it just the folks in here, who have been high-fiving each other for the past few weeks…?
You might think this public service announcement was inspired by the AI-based radio hosts coming to YouTube Music. Far more likely that it was inspired by the folks at LA Comic Con creating a genAI-based hologram-chatbot of comics legend Stan Lee.
Besides the usual concerns about creating an interactive chatbot – especially a chatbot of a real person – I get the impression that the Comic Con team really wanted this for themselves, and that fans' views took a back seat. I invite you to read the linked Ars Technica article, which includes a number of quotes from LA Comic Con CEO Chris DeMoulin, to judge for yourself.
And if you find yourself getting side-eye over your own AI-based product, maybe … maybe review the questions I posed above.
Celebrating a near-miss
I'll end on a positive note, and with a little something from my next book (which is in the home stretch!).
One of the book's reviewers reminded me of the story of Stanislav Petrov, an officer in the Soviet Air Defense Forces. In 1983, Petrov's early-warning system declared that the USA had launched nuclear missiles at the USSR. Instead of immediately launching a counterattack, Petrov thankfully questioned his alert system. He determined that the warning was a false alarm and didn't press the big red button.
One person, by questioning their computer, spared us from nuclear war.
While double-checking my recollection of the story, I came across this 2004 Freelance Bureau interview with Petrov. He noted why he challenged his computer's warning:
[...] компьютер по определению - дурак. Мало ли что он за пуск примет.
Loosely translated thanks to my rusty Russian, that boils down to:
By definition, a computer is an idiot. Who knows what it might mistake for a launch.
Wise words for anyone building or buying AI-based systems.
In other news …
- Despite the growing concerns around genAI-based companion bots, some services are springing up to offer chatbot girlfriends. (The Guardian)
- Chatbot companions are also popular in Korea, outpacing ChatGPT and even Netflix in some cases. (Les Echos 🇫🇷)
- As governments create age- and ID-verification laws, app makers are forced to collect data that they're ill-equipped to protect. What could go wrong? Other than, y'know, hackers walking off with info collected by Discord. (This Week In Security)
- Remember back in May, when I said that Meta could sell ads based on your chatbot interactions? Yep. I called it. (TechCrunch)
- Here's another study that challenges the alleged workplace gains from genAI. (The Register)
- You've heard of "shadow IT"? Microsoft is now enabling shadow AI by letting people use their personal MS 365 Copilot licenses at work. (The Register)
- First there was "vibe coding," then "vibe physics." Now we have "vibe working." (Ars Technica)
- You probably don't want ChatGPT to make purchases for you. But just in case, it's now possible. (Gizmodo)
- We keep hearing about the so-called "AI arms race." It's not between companies or nation-states, but between job applicants and hiring managers. (New York Times)
- Anthropic and IBM are pairing up. (WSJ)
- OpenAI changes its stance on how content creators can keep their work out of the Sora 2 video generator. (The Guardian)
- Here's a nice longread on the ways people use genAI for personal, rather than professional, matters. (Le Monde 🇫🇷)
- Instagram and Facebook are required to offer clear settings for a plain, non-personalized timeline view in the Netherlands. (Die Zeit 🇩🇪)
The wrap-up
This was an issue of Complex Machinery.
Reading online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.