#059 - Breaking away from the pack
Generative AI is playing out a movie we've seen before. But it's also carving a new path.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

This time, it's different. No, really.
There are bubbles, and there are bubbles.
As I've noted before, it's technically too early to call the genAI period a bubble. We have to wait until it collapses, as the wave of excitement may still turn into a successful, extended bull run. But the more I look around, the less likely that latter path becomes. The lack of use cases and the hazy promises, all built on a sandcastle of marketing and circular financing, have dug quite a hole. Risk is building in the system and I don't see a relief valve.
So for now I can definitely call the genAI wave a mania. And a most bubbly mania, at that. It has a lot in common with bubbles of yore:
- Skipping over market fundamentals ("but what's it actually good for?") in favor of vibes ("sounds good to me") and momentum ("line go up").
- A misallocation of human capital as people abandon other professions in order to make some fast cash.
- An echo chamber of belief, in which the yeses drown out the nos. (Nos that were hard to express in the first place.)
- All of these elements feeding into each other, till the system grows to a size that is ultimately unsustainable.
We've seen this in the South Sea bubble, Dutch tulip mania, the railroad boom, the dot-com bust, the 2008 mortgage crisis … Today's genAI mania has been that. All of that. To a multiple. It's irrational exuberance times ten.
It's also very much its own deal.
One, there's the issue of use. Historical bubbles were pure financial speculation – you'd buy shares or other claims to The Hot Thing and later someone would pay you (more) money to buy that claim from you. Generative AI has that, but there's also a usage market, in which people get some value out of Doing Something with the technology. What that Something is remains to be seen. But damn if they aren't trying.
Some of that energy stems from Corporate FOMO™ and the fear of being "left behind," no doubt. Execs pass that worry on to their employees, who must now find some way to keep using AI or lose their jobs. (Oh wait, they'll lose their jobs anyway – either because the execs get so excited about AI taking over that they cut staff too soon, or because the employees who learn how to use AI for their jobs wind up training their robotic replacements.)
But there's also a cult-like air to it all. Even without a corporate mandate, the low barrier to entry fuels a peer pressure in which genAI fans pester those who don't use it. "But it's so cool! It's amazing! It does everything I need and makes me so much more productive or creative or happy inside. What's your problem?"
Two, there's the source of the excitement. Every bubble needs voices sharing the Very Positive Message to keep the music going, propping up the perceived value so people keep handing over their cash. Generative AI has proven no exception. But looking at the collective efforts of the genAI marketing machine, each dollar spent on convincing the masses exhibits diminishing returns. It's feeling more and more like the world's largest pump-and-dump operation. And it's gotten desperate. (Case in point: scroll down a couple of segments to read about OpenAI buying a popular podcast.)
Three, genAI's appetite for datacenters has extended its flimsy digital promises into real-world spaces. As I noted last time, a digital flop can easily fade from memory while physical goods are much harder to forget. This is one reason why those datacenters might be the first nay votes heard above the yays' marketing din. Wouldn't it be a painful irony if that's what shakes investors out of their trance? genAI, in its attempt to enter the real world, brings harsh reality upon itself.
Given genAI's break from traditional hypefests, maybe it's time to change the emerging-tech slogan from "this time, it's different" to "this time, it's worse."
PSA
A friend recently sent me a link to an old OpenAI research paper. And by "old" I mean "from six months ago," since that's an eternity in tech time.
The authors of "Why Language Models Hallucinate" explained that LLMs were destined to create nonsense answers – affectionately known as "hallucinations" – and that there was no way to completely prevent this from happening.
On the one hand: I asked myself why anyone would publish this as a proper research paper. Fabrication is, literally, what generative AI systems are all about. As longtime Complex Machinery subscribers will remember:
You know that a chatbot's entire job description is Just Make Some Shit Up. (It's in the name: everything that comes out of a genAI bot is, well, generated. We only apply the label "hallucinations" to the generated artifacts that we don't like.) It's making everything up based on patterns surfaced in its training data, yes. But those are all grammatical patterns. Not factual. Not logical. Just this-word-is-likely-followed-by-that-word. And that kind of creative whimsy is unfit for sensitive topics.
[…]
"Why would anyone ask a probabilistic bullshit artist for a factual answer?"
On the other hand, I totally get why they published this as a research paper. The core message is now a public service announcement! People can refer to The AI Experts At OpenAI when they tell their boss why this tool isn't suitable for the job. (For your convenience, here's a link to that paper again.) They can also refer to Complex Machinery, though I acknowledge my writing can be a little snarky for a formal office setting.
In short: if you need definitive, factual answers, deploy a search system. (The docs surfaced by that search system can still be wrong. But in that case you can trace the source of the error and then correct it.) And if you need a highly specialized randomness generator for your irreverent side-projects, use genAI.
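If it helps, here's that contrast in about fifteen lines. A keyword search (hypothetical mini-corpus below; `docs` and `search` are stand-ins, not any real system) hands back documents with identifiable sources. The toy generator above hands back words.

```python
# A minimal search sketch over hypothetical documents: every hit
# carries a source you can inspect, blame, and fix -- unlike
# generated text, which has no source at all.
docs = {
    "handbook.md": "Expense reports are due on the first Friday of the month.",
    "faq.md": "Contact IT for password resets.",
}

def search(query: str) -> list[tuple[str, str]]:
    """Return (source, text) pairs whose text mentions any query term."""
    terms = query.lower().split()
    return [
        (source, text)
        for source, text in docs.items()
        if any(term in text.lower() for term in terms)
    ]

for source, text in search("expense reports"):
    print(f"{source}: {text}")  # wrong? trace it to the file and fix it
```

When the handbook is wrong, you fix the handbook. When the bot is wrong, there's nothing to fix – only dice to re-roll.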
OK?
OK.
Good talk.
Buying the sales pitch
I recently caught the final episode of FT's Behind the Money podcast, on troubled edtech startup Byju's.
I'd read up on the Byju's story before, but I hadn't heard the full origin story – that the founder, Byju Raveendran, started as a tutor and became something of a celebrity educator. He moved from lecture halls to stadium-sized venues before launching his eponymous company, which was later accused of financial misdeeds.
That's all well and good, but you might ask what the hell this has to do with this newsletter's charter of "risk, AI, and related topics". There's a tangential connection to the risks involved in investing in a startup, but what else?
The Byju's story was my reminder that you never buy a product. You receive the product, but you buy the sales pitch. A sales pitch delivered by an affable personality isn't always attached to an upstanding business.
It's sort of like a person who interviews well but turns out to be a terrible employee. Or a product with glitzy, very convincing marketing that later fails to perform.
I haven't a clue as to why I'm bringing this up right now, in a newsletter about AI and risk. No clue whatsoever! But I'm saving this here just in case I need to point to it in the future.
Buying their love
OpenAI is getting into the podcast space. I don't mean that they're pouring money into in-show ads; I mean that they've acquired a popular podcast.
Why would they do this? Simple:
A movement built on hype and excitement relies on upbeat, positive messaging. It's a way to reassure the participants and give them lines to parrot to others. The combined effect is to drown out the voices of naysayers, because a "no" is hype's worst enemy. An existential threat, even.
But given that OpenAI has roughly infinite money to apply to ad spend, why would they buy a podcast? That, too, is simple: it's PR, disguised by a thin veneer of journalism. And by "journalism" I mean "plausible deniability."
CEOs have long done the rounds on the popular news shows. More recently, they've been regulars on the podcast circuit. The risk of going on one of those shows, though, is that you're just a guest. It's not your show. Which means you don't control the message.
Run your own show and you get all the trappings of Put The CEO On CNBC, with none of the risk of a proper interviewer pressing you on a bullshit answer.
For an example of a proper interviewer, consider Nilay Patel. He's editor-in-chief at The Verge and also host of their podcast. In the previous issue, I described the episode in which he took Shishir Mehrotra, CEO of Superhuman (parent of Grammarly), to task for the company's questionable use of genAI.
And then there's the time he interviewed the CEO of Intuit, Sasan Goodarzi. Intuit's head of communications later asked Patel to edit out some of Goodarzi's damning statements. Not only did Patel refuse, he posted the request to Bluesky. Alongside a link to The Verge's ethics policy. The post reads:
To this day PR people ask me if Intuit really asked me to delete this part of the interview and shake their heads in sad disbelief. My friends what we sell here is our ethics policy
Attached to that post is a screencap of Patel's email back to Intuit:
I’m sorry, we do not allow the subjects of our reporting approval over our work.
This has been central to our very, very public ethics policy since the day we founded The Verge in 2011. I am going to treat your aggressive, explicit request that "At the very least the end portion of your interview should be deleted" as a new piece of reporting information and take it from there.
Nilay Patel • Editor-in-Chief, The Verge
And this, my friends, is why professional bullshitters (should) fear a professional journalist.
But there's still the lingering question of why OpenAI would need to risk such an interview in the first place. (As any attorney will tell you, sometimes it's best if the defendant never takes the stand.) Don't they already get tons of press simply because they're a massive genAI company? Can't they ride that wave? Well, my hunch is that OpenAI sees the tide turning against genAI in general, and them in particular. Buying a podcast is their chance to claim at least some positive news with their name on it.
Don't believe me? Here's a line from the CNBC article I linked to at the top of this segment:
In the announcement, OpenAI CEO of AGI Deployment Fidji Simo wrote that their mission of bringing about artificial general intelligence comes with a responsibility to have a space for "constructive conversation about the changes AI creates."
OpenAI's move into media is not unlike "reputation management" firms that create supportive (but fake!) websites to paint a person in a positive light. And when you think of the kinds of people who need a reputation management firm's services, well …
This all reminds me of something I wrote last year. (It's in the last segment, "Rewriting the ending: A shaky enthusiasm".) It's a twist on a scene from Andor:
Andor: Listen to me: they can't get genAI to catch up to the hype they've been pushing. And they know it. They're afraid. Right now, they're afraid.
Kino Loy: Afraid of what??
Andor: They've torched billions of dollars to puff up an AI dream! What would you call that?
Kino Loy: I'd call that power.
Andor: Power? Power doesn't panic. Their fanboys are about to find out that they're never getting AGI.
Anyway, if you'll excuse me, I need to figure out how to get OpenAI to buy Complex Machinery.
Hmmm.
In other news …
For more links to recent news, and with a slightly broader scope, I encourage you to check out my other newsletter. It's a weekly, curated drop of what I've been reading.