#044 - Five light reads
Catching up on some end-of-summer AI news.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

Summer's winding down, and readers in America are climbing out of their inboxes following the Labor Day weekend.
Given that, I'll keep this issue light – just five short segments.
No escaping gravity
Can we talk about juggling for a moment? We need to talk about juggling:
- A terrible juggler will drop everything straight away.
- A fake juggler – if they're any good at faking – will keep the act going for a few moments, then convince the audience to move on before they drop everything.
- A true juggler will keep everything aloft till their arms tire out.
The common thread?
Eventually, all of the objects hit the floor. Even the best jugglers are subject to gravity.
The ideal scenario is when everyone involved is aware that this is an act. Like when you're in a designated performance venue, and you've put up posters announcing a juggling show, and the audience pays for this knowing that they will get to briefly suspend disbelief while you show off the supreme hand-eye coordination skills required to juggle chainsaws or something.
Things are different if you've sold everyone on the idea that the juggling will last forever. You know, if you've convinced them that you are a real ✨wizard 💫 and the bowling pins and chainsaws will stay aloft because you've imbued them with magic. If that's your style, you can:
1/ Swap in another juggler wizard partway through the show (and then run off with the money);
2/ Let everything hit the floor (and then run off with the money); or
3/ Let everything drop except for one chainsaw, and tell the audience that this is now a lumberjack business. (Oh, and you still keep the money.)
No idea why I am thinking of this just now. No clue! But, purely by coincidence, here's an article about a startup that was quite bullish on AGI and then suddenly pivoted to selling "stories."
Hmm.
Is this the first domino of the genAI wind-down? Does it knock over Meta's quest for "superintelligence"? Will other companies follow suit, now that Character.AI The Unnamed Startup has broken the ice on "OK bro maybe AGI ain't happening"? Keep in mind that "no AGI" begins the slippery slope to "genAI isn't taking off, either."
Sorry, I got distracted. I don't know why I said all of that in the middle of a segment on jugglers.
Jugglers, man.
The road ahead
Over the last couple of weeks, we've seen the head of OpenAI acknowledge that the AI field might indeed be in a bubble, Meta/Facebook suddenly halt its genAI hiring spree, and the head of Microsoft's AI unit express concerns about AI-related psychosis.
People keep asking me whether this is a sign that genAI is about to collapse.
My answer is a two-parter:
1/ Who knows? Maybe AI flatlines… Maybe it does its Bitcoin impression, taking a brief dip before hitting record highs… Maybe something else. It's the future! It's up for grabs.
2/ Why would you care? If you've done your homework, the twists and turns of the genAI hype wave won't matter to you all that much.
It helps to zoom out and take the measured, dispassionate approach of an investor:
Whether you've invested in AI directly (by purchasing shares of genAI providers' stock) or indirectly (by building your products on top of those providers or using such tools), you've chosen to take on exposure to AI. That is, you've accepted the investment's inherent risk/reward tradeoff – "I get my money back in spades," "I get nothing back," "I get a little money back." As a friend likes to remind me: Buy the ticket, ride the ride.
Now, whether you've acknowledged your risk/reward tradeoff is another matter. If you first performed a risk assessment to get an idea of the possible ups and downs, and then worked out how various possible outcomes might impact you, you're not afraid. You may not be happy with how things go, but you went in with eyes wide open. You can sit back and watch it all happen, because you did your homework.
If you didn't do your homework, your emotional state is tied to every AI-related headline.
(There is a third group: people who get pulled into investing in AI, because their retirement portfolio interacts with that first group or does business with the second. People here are subject to the whims of groups 1 and 2.)
There's one more lever available to the no-homework crowd, if you have the money. You can fund a massive marketing campaign to contribute to the hype wave. That will keep genAI aloft for at least a little while longer. If enough of your peers follow your example, maybe genAI will last long enough to deliver on all of the hazy promises. Maybe!
And if not – if this iteration of AI does indeed crumble – have no fear: they'll just rename the field. Again. And start over.
A different kind of thirst trap
Have you ever been thirsty enough to, say, order 18,000 cups of water from a fast-food restaurant? Someone recently tried that with an AI-backed Taco Bell ordering system.
This is when you'd applaud Taco Bell for performing such thorough red-teaming exercises as part of its risk management and QA efforts. And I would have to tell you that, no, this was not a company-sponsored exercise. This occurred in the wild, through the actions of a frustrated customer.
The headlines hint that this was an AI problem. I disagree. Check and Filter The Inputs is a 101-level lesson on running unattended, automated systems. AI just happens to be the latest flavor of that kind of system.
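For the curious, here's a rough sketch of what that 101-level lesson looks like in practice. Everything below – the caps, the names, the structure – is made up for illustration; it has nothing to do with Taco Bell's actual system.

```python
# A rough sketch of "check and filter the inputs" for an automated ordering
# system. All caps, names, and structure are invented for illustration; this
# is not any real drive-through's code.

MAX_QTY_PER_ITEM = 25     # hypothetical sanity cap on a single line item
MAX_ITEMS_PER_ORDER = 50  # hypothetical cap on the order as a whole

def check_order(items: list[dict]) -> list[str]:
    """Return a list of problems. An empty list means the order looks sane."""
    problems = []
    total = 0
    for item in items:
        name = item.get("name", "?")
        qty = item.get("quantity", 0)
        if qty <= 0:
            problems.append(f"{name}: quantity must be positive")
        elif qty > MAX_QTY_PER_ITEM:
            problems.append(f"{name}: {qty} exceeds the per-item cap; route to a human")
        total += qty
    if total > MAX_ITEMS_PER_ORDER:
        problems.append(f"{total} items exceeds the per-order cap; route to a human")
    return problems

# 18,000 cups of water trips both checks long before the kitchen (or the news
# cycle) ever hears about it.
print(check_order([{"name": "water cup", "quantity": 18000}]))
```

None of that requires AI. It's the boring plumbing that keeps any unattended system from embarrassing you.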
To me, this sounds more like a "rush into AI" problem than an AI problem. Companies are under immense pressure to adopt AI systems, and it doesn't take much imagination to see them skipping over steps like risk assessment and red-teaming in order to meet deadlines. (Or worse yet: ignoring the results of those exercises because they ruin the vibe.)
In a WSJ article, Taco Bell tried to downplay the incident by citing the number of orders the AI system has handled properly. Sure. That may be true. But if a few goofed orders make international news, have you really won out? Probably not, since the company says it plans to modify its AI rollout.
I'll offer one humble suggestion to Taco Bell: before you make another move with AI, why not review my 2023 article "Risk Management for AI Chatbots"? It might prove useful. And it's not paywalled, so the only cost is your time.
Listening to the machines
I pointed out in newsletter #038 that people love to follow whatever a screen tells them. That problem is hardly unique to AI. That said, genAI has kicked it up a notch or twelve because the bots outnumber screens by – and this is my very scientific estimate – about eleventy-billion percent. The genAI marketing machine makes it feel that way, at least. That's a big part of why so many people believe that summarization bots are a suitable substitute for search.
Then you have the interactive, conversational bots. They create personalized experiences that draw people in and keep them engaged – far more intense than a generic message. I covered problems with the so-called companion bots in newsletters #025 and #036. Last month, NYT's Kashmir Hill published an in-depth look at bots leading people down delusional spirals. Then there's the guy who took medical advice from ChatGPT and wound up needing medical attention from a team of highly-trained humans.
You'll agree that it's troubling when genAI bots lead people astray. But you might argue that experienced professionals would be immune. And to that I would say… not true: even doctors will lean on the chatbot when they shouldn't:
The AI in the study probably prompted doctors to become over-reliant on its recommendations, “leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,” the scientists said in the paper.
Kashmir Hill has just published another piece on problematic bot interactions. WSJ then covered two different chatbot incidents, both of which ended in death. And last week, several outlets noted that Meta's chatbot instructions permitted some disturbingly inappropriate conversations with teenagers.
(You'll notice there are no links in that paragraph. It's all solid reporting, to be clear. But it gets rather dark. Not the sort of thing I'd want to leave in your inbox.)
I'm… not quite sure where to go with this?
I won't suggest the bot companies drop what they're doing to establish better safeguards right now. That's clearly asking too much.
But when a field can express its failures as a body count, it's fair to ask: how many more incidents do you need before you get your act together?
Extreme team makeover
Crypto and AI go hand-in-hand. I remember seeing glimmers of this back when I covered web3 – people trying to shoehorn one technology into the other.
But this… takes it to an extreme: Brian Armstrong, CEO of crypto exchange Coinbase, has fired some of the company's developers because they didn't leap into AI.
Specifically, he:
- Purchased a license for some AI-backed developer tools.
- Issued a mandate that everyone start using said tools.
- Terminated developers who didn't play along.
(Before you ask: no, you're thinking of that other CEO who fired people because they didn't share his love of AI. Who knows? There might be yet another one by the time you read this.)
Armstrong's move feels a bit much. Even an extremely AI-eager CEO might rework that list to:
- Ask the devs whether they're interested in AI tools.
- Conduct a small pilot project that includes training.
- Make a wider decision based on the results of the pilot.
So what's the deal? Why did he issue pink slips? We can only guess. Maybe he's all-in on the AI dreams of supercharged productivity. Maybe he doesn't like when employees question his authority. Maybe.
A cynical person would see it differently. They'd figure that an exec who had money riding on AI might want to fuel the hype wave. Y'know, by bragging about their company's 100% AI adoption rate. (While conveniently omitting the fact that they achieved 100% adoption by shrinking the denominator, not by boosting the numerator.)
A cynical person would say that.
Am I a cynical person?
Hmm.
In other news …
- I've said it before and I will sadly have to say it again (and again and again): criminals love emerging tech. For this latest example: spam rings are generating Holocaust-themed images to defraud people. (BBC)
- In a move that is stunningly tone deaf, even for a large tech company, YouTube secretly applied AI-backed "enhancements" to videos. (Ars Technica)
- So-called "agentic AI" has fallen out of the news cycle somewhat, but it's still around. And those agentic AI web browsers are quite happy to get scammed. (Guardio)
- At least one tech exec thinks that replacing entry-level software developers with genAI is a foolish idea. Maybe other CEOs will listen, since this is AWS CEO Matt Garman. (The Register)
- Remember a few weeks back, when a certain tech CEO talked about using genAI to perform "vibe physics?" The podcast hosts who interviewed him weren't too impressed. (Gizmodo)
- Have you seen that old Burnistoun sketch, about the voice-activated lift in Scotland? Today's genAI systems similarly stumble on the Bavarian dialect. (Der Spiegel 🇩🇪)
- Here's an interesting short fiction piece on our possible AI-driven future. (Les Echos 🇫🇷)
- TikTok will replace some of its human moderation team with AI. Because AI has had such a stellar track record on content moderation thus far … (Le Monde 🇫🇷)
- Here – in case you wanted more details on AI failing at content moderation. (Bloomberg)
- Adding Commonwealth Bank of Australia to the list of Companies That Tried To Replace People With AI And Then Had To Walk It Back. (Ars Technica)
- Some people are so sure of an AI-driven apocalypse that they've stopped saving for retirement and are preparing bunkers. (Business Insider)
- A report shows that people who don't understand how AI works are more likely to see it as magic. This adds up. (WSJ)
The wrap-up
This was an issue of Complex Machinery.
Reading online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.