#019 - Five types of magic
AI is magic. Or, AI performs magic. Here are five acts.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)
Magic act 1: Making money appear (Kiosks lead the way)
Last time, I wrote about how the humble self-service kiosk serves as a guide to AI-based automation. Shortly thereafter I stumbled across a 2018 piece in The Atlantic, about the ways buildings changed to accommodate self-service machines like ATMs. I started to wonder about the impact AI will have on physical spaces, device interfaces, and digital UI/UX.
I made a mental note to write about that in this issue. But I got distracted when I read this line about self-service kiosks in airports:
[For] the airlines it’s a big money saver. A 2003 study by Forrester Research showed that it cost 16 cents to check in a passenger with a self-service kiosk, compared to $3.68 with an agent.
In other words, the kiosks represented about four percent of the cost of human labor in 2003 money. (For the sake of brevity, let's assume that this sixteen cents figure factors in the short- and long-term opex and capex of the machines, and that the relative costs of the two approaches have remained constant. Since technology costs have fallen over the past twenty years, that latter point probably doesn't hold. But I digress.)
And that, dear reader, is an episode of Extreme Cost Savings: Technology Edition.
You will rightfully point out that $3.68 and $0.16 are both very small prices. They are effectively the same number on a corporate balance sheet – as good as zero, even – so hardly worth any attention. This is true! At least, it's true if you consider just one instance. Multiply those prices across a large number of passengers and you get a different story. The kind of story that makes CFOs very happy.
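Here's a quick back-of-the-envelope sketch of that multiplication, using the 2003 Forrester figures from the quote above. (The one-million-passenger volume is a made-up number, purely for illustration.)

```python
# Per-check-in costs from the 2003 Forrester study cited above.
kiosk_cost = 0.16   # self-service kiosk, per passenger
agent_cost = 3.68   # human agent, per passenger

# The kiosk runs at roughly 4% of the cost of an agent.
ratio = kiosk_cost / agent_cost
print(f"kiosk as a share of agent cost: {ratio:.1%}")

# Tiny per-passenger savings become real money at volume.
passengers = 1_000_000  # hypothetical annual check-in count
savings = (agent_cost - kiosk_cost) * passengers
print(f"savings across {passengers:,} check-ins: ${savings:,.0f}")
```

Three-and-a-half dollars is nothing; three-and-a-half dollars times a million check-ins is a line item.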
So of course the enterprise-class companies are chasing AI. Take any activity in one of those multi-billion-dollar operations, then imagine its cost falling by 96%. Or even 50%. Hell, even a single-digit percentage point change can do wonders for the budget, if applied to the right areas. It's worth it to try AI in every department and see what shakes out.
Yes.
Yes, and.
There are ways to approach this. Throw Spaghetti At The Wall is common enough, and it works, but it's a great way to burn money on the quest to save money because many AI projects will simply not pan out. Much better to organize those projects like a stock portfolio: suss out which areas show the greatest promise, prioritize and fund them accordingly, then execute.
Another reason to take the methodical approach is that R&D is rarely an overnight success. According to that article from The Atlantic:
American Airlines tested its first self-check-in kiosks in 1984. It deployed a prototype at Dallas Love Field 14 years later and installed the first fully functioning automated check-in system at Albuquerque International Sunport on Halloween Day of 2000. Currently, a majority of American’s 350 airport destinations feature some form of self check-in.
That's right: it took sixteen years to go from initial testing to the first full rollout. I don't know how much of that time American Airlines dedicated to R&D – versus, say, keeping the idea on the shelf waiting for the right circumstances to arrive – but sixteen years is an eon in technology-time. I think about how much the unit economics must have shifted between 1984 and 2000, and how those numbers have changed even more in the time since. AI, done well, holds similar promise.
Magic act 2: Creating reality
In another callback to the previous newsletter, I mentioned a twist on a social network: Social AI lets the end-user surround themselves with the bots of their choosing, creating a fine-tailored echo chamber. I noted that this offering, similar to the Google Pixel "Add Me" feature and various genAI tools, lets people create their own reality.
At the time I was thinking about inward-facing fake reality, where only the person in question or maybe their immediate social circle will see the fabrications. A recent Garbage Day newsletter explored the outward-facing use case: trying to convince a wider audience to participate in the illusion. This excerpt is about Threads, but certainly applicable to other social sites:
[The] feeds are now — and seemingly forever will be — clogged with AI junk. Because you cannot be a useful civic resource and also give your users a near-unlimited ability to generate things that are not real. And I don’t think Meta are stupid enough to not know this. But like their own users, they have decided that it doesn’t matter what’s real, only what feels real enough to share.
I don't have a ton to add here. But I expect this idea of false realities will continue to crop up. And not in a good way.
On the plus side, genAI is also something of a job creator? It'll make sure that content moderation teams stay busy:
The Verge’s Nilay Patel recently summed up the core tension here, writing on Threads about YouTube’s own generative-AI efforts, “Every platform company is about to be at war with itself as the algorithmic recommendation AI team tries to fight off the content made by the generative AI team.” And it’s clear, at least with Meta, which side is winning the war.
(Yes, the genAI and content moderation functions are in the same company. What's that line? Something something "arsonist firefighter" something something …)
Magic act 3: Sustaining belief
AI's biggest trick of all is convincing people that it is magic.
They don't literally think it's magic. There are no wands or rabbits popping out of hats. But when you consider the outsized expectations they've heaped on the technology, they're asking for something from beyond this earthly plane.
I don't entirely blame them. The cacophony of bullshit AI marketing makes it hard to see what this technology genuinely can and cannot do. It's easy to hang one's hopes on a hazy idea. And besides, AI kinda does look like magic when it actually works. It writes reports. (So long as you can handle a little fiction with your facts.) It generates images. (Please ignore those extra fingers or people growing out of countertops.) It even creates podcasts. (Because the world needs more of those, sure.) All you have to do is ask the magic text box nicely.
But is believing in magic AI all that bad? It is, when you realize that doing so is a prime source of AI-based risk. By shoving AI into places where it doesn't work so well, we take on all of the downside exposure while seeing little of the upside. Like, say, self-driving cars that don't self-drive, or robot bartenders that don't tend bar, or AI plagiarism detection tools that aren't so good at detecting plagiarism.
Misplaced belief also drives purchases. Which keeps false prophets afloat:
Magic act 4: Making money disappear
AI vendors keep getting tripped up on their Fake It Till You Make It routines, so they've changed tactics. They're moving away from concrete promises for today, and – taking a page from the cult playbook – declaring future dates for when things will pay off. "It's gonna be amazing in just a couple more years. You will be rewarded for your belief. Trust me. And also, pay me." This gives them extra time to turn their bold-yet-empty proclamations into something real. Or to wriggle out of the promises if things don't pan out.
(You know how cults always find some excuse when the big date passes without fanfare? With AI, they'll just rename the field and start over. Again. You heard it here first.)
I wonder how many of these AI vendor execs will make it into the history books as famous frauds? It's hard to say, since fraud in the legally-enforceable sense is hard to prove outside of tightly-regulated fields like banking. But the social definition will eventually catch up to those who fuel false hopes.
I can't tell you when that tide will shift, but it'll accompany a growing movement in AI literacy. It's hard to pin your unrealistic hopes on something if you know what it actually is. It's also hard to sell an unrealistic product idea when prospects know enough to ask you the difficult questions.
I have no idea why I'm thinking about this right now. No clue!
But just at random, here's a story of everything wrong with AI applications – how they are built, purchased, and applied – wrapped up in a single article.
Magic act 5: Comic relief
One thing I will give AI is that it can turn a disaster into laughter.
According to Meredith Haggerty on Bluesky:
A pipe burst in my building and when I put all the pictures of the damage into an album, Apple scored it with a song called “El Cumbanchero” with an upbeat salsa sound
I guess if you have to experience a plumbing disaster, it should at least have a festive soundtrack.
In other news …
- Surprise, surprise. Turns out those huge, generic models don't know a hell of a lot. (WSJ)
- Meaningful use case alert: AI helps delivery drivers find packages inside the van. (Bloomberg)
- Salesforce's Marc Benioff says Microsoft's AI is disappointing. Which would be a more impressive statement if [checks notes] Salesforce weren't selling its own AI products. (Insider)
- Auction house Sotheby's posts work created by an AI robot. (Der Spiegel 🇩🇪)
- TikTok throws AI at its content moderation issues. (Les Echos 🇫🇷)
- The long-held dream of AI-based agents. (Le Monde 🇫🇷)
The wrap-up
This was an issue of Complex Machinery.
Reading this online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.