#007 - Asking the important questions
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)
Inflated expectations
Before we ask "Is AI in a bubble?" we need to answer "Why would an AI bubble matter?"
If you're just poking fun at the latest fad, then sure: go ahead and say that AI is in a bubble. Have at it.
Then there are the people who've made an investment in AI - those who have put in money, time, or effort; launched a company; made a career shift; or spent some other resource they could have applied elsewhere. If that's you, you know that bubbles wipe out tons of investment as they collapse. Your question of "is AI in a bubble?" is a pretty serious one.
The unfortunate answer is that it's hard to say. A bubble, like a recession, is something you can only call in hindsight. Till then it's indistinguishable from an extended bull run. But when you consider the Dutch tulips of the 1630s, the South Sea Company of 1720, and the US mortgage market of the mid-2000s, AI is definitely following the bubble playbook.
(Hmm. I was about to do that lazy thing where I quote myself from another piece in order to avoid writing something from scratch. But then I realized that I've yet to publish the paper in question. So when I finally get around to doing that, you'll know that the newsletter lifted from the paper and not the other way around. Let's try this again:)
It's not just the widespread enthusiasm, hype, and diversion of talent. It's not the misallocation of capital, with people making investment decisions based more on vibes and ill-informed hope than on fact and utility. It's not even that prices are disconnected from reality. All of that would amount to a fad and nothing more.
What makes for a bubble is when we have all of those traits, at the same time, taken to an extreme. The elements of that ecosystem feed into each other, leading to growth upon growth. New ventures crop up to absorb the talent and capital that are desperate to be part of the Hot New Thing™. And everything is fine, until it's suddenly not.
That leads us to two important sources of a bubble's growth:
- Social proof replaces serious investment research.
- It's difficult, if not impossible, to take a short position.
Take the second point first: a market exists in part to set prices based on people's collective views. Shorting is market-speak for "no – I think this is a bad idea." And without a way for anyone to express no, all the crowd hears is yes. That feeds into the first point, social proof, which further drowns out naysayers. And the cycle continues.
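To make that concrete, here's a deliberately crude sketch - my own toy example, not any formal market model. We poll a crowd on fair value, then compare the consensus when everyone gets a vote against the consensus when only the optimists get heard:

```python
import random

random.seed(7)

# Each trader holds a private view of fair value around a reference
# price of 100. Some think it's worth less (bears), some think it's
# worth more (bulls). All numbers here are made up for illustration.
views = [random.gauss(100, 30) for _ in range(1000)]

# A two-sided market averages everyone's view, bears included.
two_sided = sum(views) / len(views)

# A market with no way to short only ever hears from the bulls.
bulls_only = [v for v in views if v > 100]
one_sided = sum(bulls_only) / len(bulls_only)

print(f"price with shorts allowed:  {two_sided:.1f}")
print(f"price with no way to short: {one_sided:.1f}")  # noticeably higher
```

The one-sided "price" lands well above the two-sided one, and nothing in the mechanism ever pulls it back down.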
Is AI heading down this road? Again, we can't really call a bubble until after it's over. But I can tell you that I'm getting some serious Dot-Com flashbacks.
All that's left is for Pets.com to return with a genAI sock puppet mascot.
This time, it really is different
We should also consider what sets the AI scene apart from the classic bubble stories: it's got staying power. In no small part because it keeps changing its name.
What we call AI today is the latest structural evolution of "analyzing data for fun and profit" – where the previous steps have been "predictive analytics," "Big Data," and the pre-neural network models that have since been relegated to "classical machine learning."
A couple years ago in "Rebranding Data" I noted that each name change resets the hype cycle, skipping over the come-down where we reluctantly accept that the Hot New Thing won't solve our problems. The collective amnesia allows people to believe this is one long bull market when it's really just a pileup of several smaller market corrections-in-waiting.
Don't get me wrong – AI is worth quite a bit! But not nearly as much as the market would like to think right now. Which means the come-down is going to hurt. This is why I often describe risk as a measure of distance between perception and reality.
(Hmm. I'm now reminded of something else I've yet to publish…)
With that, I can finally do that annoying thing where I quote words I've written elsewhere. Going back to "Rebranding Data," here's a thought from Cem Karsan:
"If you’re on an airplane, and you’re 30,000 feet off the ground, that 30,000 feet off the ground is the valuation gap. That’s where valuations are really high. But if those engines are firing, are you worried up in that plane about the valuations? No! You’re worried about the speed and trajectory of where you’re going, based on the engines. […] But, when all of the sudden, those engines go off, how far off the ground you are is all that matters."
From there, I note:
Right now most of AI’s 30,000-foot altitude is hype. When changing the name fails to keep the field aloft, that hype dissipates. At that point you’ll have to sell based on what AI can really do, instead of a rosy, blurry picture of what might be possible.
What if you want to build a real AI offering? One that will stand the test of time? Glasswing Ventures founder Rudina Seseri offers an idea:
Seseri made it clear that just because you connect to some AI APIs, it doesn’t make you an AI company. “And by AI-native I don’t mean you’re slapping a shiny wrapper with some call to OpenAI or Anthropic with a user interface that’s human-like and you’re an AI company,” Seseri said. “I mean when you truly have algorithms and data at the core and part of the value creation that you are delivering.”
Hm. Value creation. In AI. Novel idea!
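For the record, the pattern Seseri is dismissing really does fit in a dozen lines. Here's a sketch in Python using OpenAI's official client; the model name, prompt, and function name are placeholders I've chosen for illustration, not anything from Seseri's talk:

```python
# A deliberately thin "AI company": all of the value creation happens
# inside someone else's model. Sketch only - the model name and system
# prompt below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def my_entire_product(user_input: str) -> str:
    """A human-like interface wrapped around a single API call."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content
```

No proprietary algorithms, no data at the core: strip away the human-like interface and the whole business is one pass-through function. That's the wrapper Seseri is contrasting with an AI-native company.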
Straining data
To be fair, some AI companies do have business models. (But not all of them.) They're just shady business models. Following in the footsteps of classic data brokers, who snagged your personal details from in-store purchases, the newer ML- and AI-based data brokers – disguised as social networks and ecommerce sites – surreptitiously mine your online activities and shopping. GenAI startups have (allegedly!) extended this game of All Your Data Are Belong To Us by scraping others' websites to build their training datasets.
The next step in this progression involves "zombie trainers" and forced labor:
A new type of "Zombie Trainer" has emerged. These are people who work as data labelers, content moderators, or image data collectors without their knowledge. Captive audiences, like refugees, children, prisoners, and low wage workers are all Zombie Trainers, unaware of the hidden tasks they perform, the new industries they're building, or the communities being harmed in the process.
Granted, AI companies are hardly unique in exploiting people to fatten their margins. But they've certainly scaled the idea.
I've been thinking about under-priced raw materials and sketchy business arrangements in light of OpenAI's deal with The Financial Times. (I originally found this in Les Echos. The Verge also covered it if you'd prefer something in English.)
The deal is that OpenAI will get FT's content to use as training data without having to sneak around and risk a lawsuit. In exchange, FT will receive … something vaguely AI'ish, it looks like? On the one hand, that means getting access to AI power and expertise that the newspaper likely couldn't manage on its own. On the other hand, cutting a deal with The Company Currently Being Sued By A Bunch Of News Outlets For Allegedly Stealing Their Content To Build A Training Dataset seems iffy at best.
OpenAI will also link back to the FT website as part of the agreement. That's a good thing, right? News outlets always want more traffic … but … Wait, isn't this the deal news sites made with Facebook and Google way back when? And how did that turn out?
Some might say that a newspaper working with OpenAI is making a deal with the devil. That's a bit harsh. I prefer a Vegas analogy:
Remember that OpenAI is the house. Not a fellow gambler. Not a partner. The house.
And the house always wins.
Up next
I cut a couple of segments from this newsletter because it was running long. I plan to share those next time, or in the issue after that.
(While these segments were connected to recent news, the ideas should be fresh for a while.)
In other news…
- US police departments that want facial recognition AI won't get it through Microsoft. (TechCrunch)
- ChatGPT shows its romantic side. (WSJ)
- And you get an AI clone! And you get an AI clone! And … (Bloomberg)
- OpenAI experiences another leadership shakeup. (Business Insider)
The wrap-up
This was an issue of Complex Machinery.
Reading this online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.