#021 - Of blink tags and niche content
GenAI is in its Geocities moment. Shocking bots. And learning to be quiet.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)
On Bluesky I mentioned that this issue might include a cat pic. So here you go.
ChatGeocities?
A while back I realized that spreadsheets are underappreciated tools – they put analytic power in the hands of people who don't know how to write code. I've drawn similar conclusions about the rise of LLMs. From DALL-E and Midjourney to ChatGPT and Copilot, you can get all the AI you want without knowing a thing about backpropagation and feature selection.
That ease of use has mixed well with ease of access. In a rare move for emerging tech, genAI made it into the hands of everyday people before it reached the corporate set. You can laugh at the people who coax ChatGPT into doing their job for them. Just remember to also hail them as use case pioneers. Their desire to do less work puts them light years ahead of the AI vendors, who still haven't figured out what everyday people want.
The spreadsheets-and-genAI connection resurfaced when I came across an article in Le Monde about the early days of the internet. The title loosely translates to "How Geocities democratized publishing on the web, through unreadable sites and animated gifs."
The term "democratize" carries a lot of baggage, but … you know what? The article has a point. Most of us in the tech space didn't appreciate it at the time. We could hand-write HTML, knew enough Perl to manage web forms, and had as-good-as-free access to web servers. We were so far removed from the barrier to entry that we couldn't appreciate what it meant to overcome it. (My Power of Spreadsheets epiphany would arrive some years later.) But for those who didn't have those resources, Geocities was magic. You had something to say online? You could just … say it.
Those "under construction" images and blink tags looked unsophisticated to professionals. But, frankly, the professionals' opinions didn't matter. The people on Geocities knew their target audience was other people who shared their interests. Their ability to publish niche sites at zero cost (to the person writing it, at least) opened the door to the long tail of online content.
That story has played out time and again. Blogs? WordPress.com. (Which I keep typo'ing as "Worsepress" on my phone. Hmm.) Video? YouTube, and then TikTok. Newsletters? Buttondown. When self-hosting a new medium is expensive or cumbersome, a bulk provider comes along to do it on the cheap. Despite being centralized systems, these hosting platforms give end users a sense of autonomy by freeing them from the constraints of traditional gatekeepers like movie studios and newspapers. They publish anything, and the world gets a little bit of everything.
The pendulum will eventually swing from centralized, bulk hosting back to DIY. (Maybe. Today's media distribution platforms double as places of discovery and even sources of revenue, so that should hold people a while longer.) But for now, we're in our Geocities era of genAI: the technology is in the hands of everyday people, who use it to do everyday things and surface niche use cases out of a sense of self-expression as well as self-interest.
Some of it will seem silly, especially to those of us who have built AI models for a living. We'll roll our eyes at the generative equivalent of the animated under-construction gifs and blink tags. And yes, some subset of that crowd will manage to do terrible things. But let's remember that this phase will lead to the next wave of AI tools, interfaces, and use cases. I can't wait.
This won't shock us for long
In social media's early days, someone told me that those platforms would normalize human foibles. Making an ass of yourself in public, they said, would lose its stigma because we'd see how common it is. I'm not sure when we'll get there. In part because people are really good at finding new ways to look foolish.
Will machines do any better? The jury's still out. In part because an LLM has allegedly told someone to die.
I understand why this would feel uncomfortable. For one, the Terminator movies tell us that machines from the future will want to kill us. For another, there's the way we anthropomorphize present-day bots. We find such messages chilling because we imagine them uttered by a sentient, opinionated being.
Frankly, even my use of the word "uttered" is troublesome. It's hard to describe the chatbots' actions without using verbs usually meant for people. We unwittingly imbue the machine with spirit when we claim it "thinks" this or "tells" us that, even though it doesn't have opinions or hold political beliefs. That AI model is a pile of linear algebra that creates chains of words based on grammatical patterns found in its training material. That's it.
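To make that concrete, here's a toy sketch – my own illustration, not any vendor's code – of what "chains of words based on patterns" looks like. The hand-written probability table and the generate() function are invented for the example; a real model learns billions of such patterns from its training data. The punchline is the same, though: there's sampling here, not sentiment.

```python
import random

# A toy "language model": just a hand-written table mapping each word
# to plausible next words and their probabilities. (Illustrative only.)
next_word_probs = {
    "the":     [("cat", 0.5), ("machine", 0.5)],
    "cat":     [("sat", 0.7), ("slept", 0.3)],
    "machine": [("glitched", 0.6), ("froze", 0.4)],
}

def generate(start: str, steps: int = 3) -> str:
    """Chain words together by repeatedly sampling from the table."""
    words = [start]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break  # no pattern for this word, so the chain ends
        choices, weights = zip(*options)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the machine froze" – a pattern, not an opinion
```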
I wonder how long it will take the world to truly internalize this. How soon till we treat the model's output as just a blob of stray text for us to interpret, and not a statement from a conscious being? Because not long after that, the outputs will cease to shock us. When we see "please die" written on the screen, we will roll our eyes, tap the button to refresh, then get on with our day. It's the same thing we do with a frozen app or a misbehaving browser tab. We know that all software has bugs. We know how it should work and accept that it will sometimes glitch. And while we may be disappointed, we don't take it personally.
We'll eventually get there with genAI.
Bots learn boundaries
In the previous newsletter I noted that Microsoft's Copilot chatbot had refused to answer questions about the (then-)upcoming US presidential election. I applauded this as good AI risk management – there are some cases where you simply can't afford for the bot to go off the rails. Better to clam up than to inspire a dozen OMG The Model Said That think-pieces.
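(For the technically curious: "clamming up" can be as simple as a screening layer that sits in front of the model. Below is a minimal sketch under my own assumptions – the ask_model() function is a hypothetical stand-in for the real LLM call, and production guardrails use trained classifiers rather than keyword lists – but the shape is the same.)

```python
# A minimal "clam up" guardrail sketch. ask_model() is a hypothetical
# placeholder for the call to the underlying LLM.
BLOCKED_TOPICS = ("election", "ballot", "candidate", "voting")

def ask_model(prompt: str) -> str:
    return f"(model response to: {prompt})"  # stand-in for the real call

def guarded_reply(user_prompt: str) -> str:
    # Refuse before the model ever sees a high-stakes topic.
    if any(topic in user_prompt.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that. Please consult an authoritative source."
    return ask_model(user_prompt)

print(guarded_reply("Who should I vote for?"))       # refusal
print(guarded_reply("What's a good pasta recipe?"))  # passes through
```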
Copilot isn't the only bot to hold its tongue. When New York Times journalist Kashmir Hill took a "decision holiday" by putting AI bots in charge, Anthropic's Claude held off on issuing life advice:
I appreciate your interest, but I don’t think it would be advisable for me to make important life decisions for you as part of a journalistic experiment. While A.I. assistants like myself can provide information and analysis to help inform decisions, we shouldn’t be relied on as the sole decision maker, especially for consequential choices.
Joanna Stern of The Wall Street Journal took a similar AI journey, spending a day with four AI chatbots as companions. This time, it was Google's turn to tread lightly:
When I asked the bots about friendship, all but Gemini mentioned trust. To its credit, it was the only one that admitted it can’t experience human emotional bonds. And Google has said that Gemini is supposed to be more assistant than friend—or lover.
(Check out the video to see how Stern mounted four phones on a tripod and attached a mannequin head to make it more … lifelike? Something like that? If you have nightmares about AI, this will replace them.)
In the previous segment I said that we would eventually normalize bots saying weird things. I similarly expect us to normalize bots establishing boundaries. These machines cannot, and should not, attempt to answer every question.
I'm pleased to see that Anthropic and Google are building safeguards around their bots. Who's next? And who will wait until lawsuits or new laws force them to do so?
But wait, there's more
Longtime readers will notice that this newsletter is shorter than normal. One segment would have pushed this issue way over the limit, so I'll run that next time.
I might even release it in the middle of next week, so US holiday travelers have something to read while stuck on the tarmac.
In other news …
- A painting created by an AI-driven robot goes up for auction and fetches ten times its expected price. (Der Spiegel 🇩🇪)
- If genAI companies insist on (allegedly!) grabbing artwork without permission, then artists can poison the well. (MIT Technology Review)
- Will genAI replace smart speakers? Or give them a second wind? Or maybe both? (Les Echos 🇫🇷)
- AI and the arts have a tense relationship. The arts are now going on the offensive. (New York Times)
- Dutch publisher VBK is testing the waters of AI-based translation. (The Guardian)
- Coca-Cola has released an AI-generated advert. Not everyone's happy about it. (Gizmodo)
- When you think "content moderation," you probably don't think "AI-based trash infiltrating music streaming sites." But it is a real thing. (The Verge)
- Phone scammers were early adopters of AI. Now, the phone company's deploying AI against scammers. (TechCrunch)
The wrap-up
This was an issue of Complex Machinery.
Reading this online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.