#015 - The cookie connection
Slowly grinding our way to "The" AI use case.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)
Those are pumpkin spice Oreos in the header image. That will make sense later.
AI: What is it good for?
In a recent game of CSI: Drafts Folder I tried to sort out the context and recipient of a half-written e-mail. Aside from a one-sentence intro, there was just this excerpt from a Penny Arcade writeup:
First, [temporarily losing my internet access] reminds me of when my uncle showed me a couple web pages and I didn't really know what I was looking at. The only comparison I could really think of was that it was like a very large BBS, or a series of interconnected services, like CompuServe, Quantum Link which would eventually become America Online - Earth's foremost coaster manufacturer. It looked incredibly boring. He assured me that it was going to be a very big deal, and you know what? He may have been right. But the page he showed me about tractors was not to my particular interest.
[...]
People didn't want to put their credit cards "into the computer" when this shit all started, and last Tuesday my friend Jasmine Bhullar DoorDashed some Starbucks to our livestream. I grew up with computers and this shit is still crazy.
This really captures the emerging-tech arc, the journey from "wtf is that? and why would I care?" to "how did we live without it?" The one end is our very normal, very human Reaction To The New. The other is the equally human sense of Pure Magic And Wonder because of how much it changed our lives, mixed with The Great Underappreciated because it becomes the backdrop to our everyday reality.
So it was for the web, mobile, and cloud. So it will eventually be for web3 and the wider world of AI – from predictive ML models to genAI's LLMs.
We'll have to work through a lot of use cases to get there, though. Most of those won't pan out. But by throwing AI into every possible situation, we'll eventually find the thing we didn't know we were missing.
How's that coming along? Hmmm.
The AI classifier
Surveying the AI landscape thus far, we have:
The Good: Predictive ML is in everything from your phone's autocorrect and photo album (helping you find those cat pics), to classifying documents (think: e-discovery), to forecasting failures in industrial equipment. While these models aren't perfect by any stretch – there's a reason I refer to autocorrect as "autocowrong" – overall they're doing the job as advertised.
By comparison, generative AI is still in its early days and the Good use cases are rare. Mostly of the form "let the machine give me a first draft or show me how to do something, and I'll take it from there." Software developers have been the most vocal supporters, though they've told me that the LLMs' output still requires a lot of cleanup and additional work. This path is best suited to experienced practitioners.
Both the predictive ML and genAI use cases are tainted by the questionable way companies have sourced their training data. The genAI copyright infringement lawsuits take center stage here, but let's not forget about the mountains of personal data that get hoovered up by social media sites and online retailers. The refrigerator advert that's been following you for the past two months didn't exactly come out of nowhere.
The Bad: I tried to think of a "Medium" category but came up short. Using an LLM to summarize documents? Low-grade bad, but still bad. Facial recognition tools, for finding people or for age verification? Bad. A genAI bot as a therapist? Bad, bad, bad.
The Bad use cases are a mix of fake-it-till-you-make-it offerings (possible-future ideas sold as present-day realities) and outright snake oil (cashing in on the "AI" label). In many cases we already have incumbent solutions that are cheaper, or more predictable, or better by some other standard.
The sad part about the fake-it use cases is that if they were being sold honestly – "not quite ready for prime time but we're getting there" – they would fit squarely into a Medium category. But that's not what happened. Greed has mortgaged AI's future, leading people to look askance at even the Good use cases. Too much of that and we'll never reach the can't-live-without-it AI.
Imagine the pitch meeting
One very special corner of Bad is called It Works But Nobody Asked For This. (Sometimes known by its nom de plume of Why, God, Why??) Consider the "Add Me" feature in the Google Pixel 9 phones. It uses AI to … add you to a photo. A photo you weren't in when it was taken. But you're in it now.
Lovely.
I double-checked the date (not April Fool's Day) and the source (not The Onion). I even waited a few days for Google to issue a retraction, along the lines of "oh our site got hacked we never really said that haha." But as of this writing, it's still front and center on the official Pixel website.
People generally like the world to make sense and will create all kinds of coping mechanisms when things don't add up. My personal favorite is a game I call Imagine the Pitch Meeting. Here's how I figure the Add Me pitch went down:
[Scene: A Google conference room. Product managers (PMs) wonder how else they might cram AI into an offering. The mood is tense as they have been short on ideas.]
PM 1: "Say … You know how genAI makes it impossible to believe any digital media? What if we were to … y'know … lean into that?"
PM 2: "I'm listening …"
PM 1: "We'll use AI to put people into photos when they were never there. A completely manufactured reality."
PM 2: "I love it. Way to one-up Zuck and his whole metaverse play. Why visit a virtual world when a computer can just convince you that you really lived the whole experience?"
PM 3: "Yes! And this will totally take the spotlight away from our Glue On Pizza incident. Some jackass with an AI newsletter refuses to let that one die."
[They high-five, relieved that they will survive another OKR cycle.]
[In the distance, the ghost of Philip K. Dick weeps. Yet another person in tech has read "We Can Remember It For You Wholesale" (the basis of the Total Recall movies) and walked away with the completely wrong message.]
Cookie monster
Add Me is a warning sign. A blinking red light that companies have squeezed their product teams so hard on AI that they're cooking up nonsense. Perhaps as a cry for help.
Where else have I seen this? You're probably reaching for the Dot-Com craze, but I was thinking of this old Dropout (née CollegeHumor) video about Oreo cookies. The premise is that the company keeps conjuring up new flavors for the sake of marketing. And it's driving the CEO to insanity.
The entire video is quoteworthy, but the line about a "sinful glut of bullshit Oreos" crowding the cookie aisle rings in my ears whenever I read AI news. A similarly sinful glut of bullshit AI products risks choking the field before we reach the truly mind-blowing, long-term impact, changed-the-world-for-the-better use cases.
I don't know what to do about that. Asking the AI fans to tone down the toxic optimism is a surefire way to get them to double down on it. Especially since half of them have an economic incentive to keep the dream afloat. Yet ignoring them has the same effect.
So, to a certain friend – you know who you are – I'm close. So close. Just a little more bullshit AI and I'll be ready to start that new business venture we talked about.
Promises, promises.
ChatGPT maker OpenAI is getting into the search game. I've pointed out before that genAI is a poor substitute for search. Google has provided real-world evidence that mixing search with genAI is a bad idea. (See the aforementioned glue/pizza story.) But OpenAI is convinced that This Time, It's Different:
OpenAI seems to have taken note of the blowback and says it's taking a markedly different approach. In a blog post, the company emphasized that SearchGPT was developed in collaboration with various news partners, which include organizations like the owners of The Wall Street Journal, The Associated Press, and Vox Media, the parent company of The Verge. "News partners gave valuable feedback, and we continue to seek their input," Wood says.
(That excerpt is from late July. More recently, they've also signed a deal with the Condé Nast family of publications.)
Hmm. This is a good start. Moving on:
Publishers will have a way to "manage how they appear in OpenAI search features," the company writes. They can opt out of having their content used to train OpenAI's models and still be surfaced in search.
Even better! What's not to like?
"SearchGPT is designed to help users connect with publishers by prominently citing and linking to them in searches," according to OpenAI's blog post.
Wait … This sounds familiar. Remember Google AMP? And Facebook's "pivot to video"? Both convinced news publishers that the move would Drive Traffic™.
(They did not, in fact, drive traffic.)
I should blame tech companies for continuing to sell this nonsense. But if newspapers keep falling for it …
In other news …
- OpenAI claims to have created a tool to detect text generated by ChatGPT. Much to the chagrin of educators everywhere, the company doesn't want to release it. (WSJ, Le Monde 🇫🇷)
- I mean, who wouldn't want a herd of autonomous vehicles playing Dueling Horns at 4AM? (Ars Technica)
- AI companies are apparently playing games in reporting their carbon footprint. (Bloomberg)
- Can't get enough of AI Gone Awry here at Complex Machinery? Allow me to point you to this database. (Technology Review)
- A mayoral candidate is trying to bring AI into Cheyenne, Wyoming's government. I mean, really bring it in. (No word on whether this made it into the aforementioned database.) (Washington Post)
The wrap-up
This was an issue of Complex Machinery.
Reading this online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.