#006 - 'Tis just a wee bit of AI fatigue, is all
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)
Reading the last few newsletters, you might get the impression that I dislike AI. Or that Complex Machinery is some Hater's Guide To Emerging Technology. Neither one is true! As a long-time AI practitioner – one who's been here since the early days of "predictive analytics" and "Big Data" – I have high hopes for this field precisely because I know what it's capable of achieving. I'm genuinely optimistic about the long term.
That said, in the short term I'm experiencing a wave of AI fatigue. It's not just the barrage of AI Is Amazing messaging, either. It's that the messages are so obnoxiously in the "buy the rumor" half of the old saying. It'll be a while before we sell the news.
My AI fatigue stems from four practices in particular:
1/ AI-washing: Slapping an "AI-powered" label on your product is sure to move sales numbers. It works even when said AI is of questionable value, or doesn't exist at all. AI-washing is mostly a "commit to the bit" game, where you win by never admitting you're full of hot air. (Emphasis on "mostly." Sometimes you do get caught and there are consequences.) Does that mean a computer mouse with a ChatGPT button is peak AI-washing? Only if you haven't seen a $400 AI toothbrush.
The real test of these claims is to ask the vendor how AI makes their product better. A skilled AI-washer will at least offer a weak-yet-plausible answer. The less-skilled? Hmmm. When tech journalist Shira Ovide asked Oral-B what makes their AI toothbrush so cool, they … declined to comment. That tells you everything you need to know.
2/ Fake it till you make it: To sell something is to share your vision of a future state. This is completely normal and, in many cases, reasonable. "Buy this coat and you'll be warmer." "Take this course and you'll be able to converse in Korean." And so on.
New technology lends itself to the murkier practice of pitching a distant, possible-future state as a present-day reality. For AI this means that we might someday have chatbots that are hallucination-free replacements for search. And we might someday have fully-autonomous cars. But despite what the AI hype train may tell you, we don't have either one just yet. Just ask the World Health Organization's chatbot. Or, maybe, don't.
According to reviews, Humane's so-called "AI Pin" is also projecting a future reality as the present day. The fans are having none of it, though. One went so far as to accuse tech reviewer Marques Brownlee of being "almost unethical" and "potentially killing someone else's nascent project" because he … noted that the AI Pin was kinda meh.
3/ When it doesn't work and never will: An AI-washed product actually works; it just has meaningless (or no) AI inside. Similarly, a future-vision product may eventually work as advertised. Distinct from those categories is the group of "AI-powered" products that can't possibly work, because the purported functionality is beyond AI's reach.
Take image analysis as an example. It's pretty good in some places (your phone tagging images of your cat), pretty bad in others (facial recognition for … well … anything). And when it comes to spotting edible mushrooms in the wild, it's an outright failure. Fungus experts explain that visual inspection is woefully insufficient for identifying poisonous mushrooms. As such, no amount of AI image recognition can honestly tell you which mushrooms are safe to eat. But that doesn't stop people from pushing mushroom-ID apps for foragers.
(Whether these prove more deadly than autonomous vehicles remains to be seen.)
4/ When it actually works: And it works too well. AI's most unsettling use cases happen when the thing does exactly what it says on the tin. That's when it's most ripe for misuse.
Deepfakes have become more convincing over time, thanks to technology improvements. They still don't stand up to intense scrutiny, but they don't have to. Your AI-generated fakes only need to fool people long enough for them to take action. Even if those same people eventually figure out that you used AI to put terrible words in someone's mouth, by that point the damage is already done. The target's reputation is in tatters, some people will still eye them with mistrust, and others will question anything they see or hear online.
Just in time for half the world to go to the polls.
Yay?
This is the time to be terrible
Those were four variants on the theme This Should Be Better, But It's Not. The plus side is that the situation will work itself out over time, as we progress through the standard emerging-tech cycle: the AI-washed products will fall by the wayside, the future-vision crowd will finally achieve their reality, and everything else will get swept away by a mix of informed consumers and new regulations.
That's the future. And it's grand. So let's talk about how we get there.
Right now AI is in its early innings. We're fumbling around as we suss out all the things it's actually good for. On the minus side, that means we're going to be terrible at AI for a while. On the plus side, that means we have permission to be terrible at it.
We should use that permission wisely. Right now we keep trying the same use cases that don't work, over and over, because we're not creative or daring enough to try something else. (See: AI Chatbot As A Substitute For Search.) Such a lack of imagination exposes us to the subtle but very large risk that society will give up on AI before we find the ways it can actually help us.
To break out of this cycle of bad AI we'll need … more bad AI. Different kinds of bad AI. The kind we encounter because we're trying new things and some of them will undoubtedly fail. (Can I spike the tip jar here? Let's start with the Maybe This'll Work ideas, and skip those that are so obviously going to explode on the launch pad.)
I look forward to more AI use cases that make sense, built on technology that actually works as advertised, applied in ways that actually help the world move forward. I hope I won't be waiting long.
T plus two minus one
Risk management starts with asking a series of what-if questions. What if we lose our primary vendor? What if our AI model misprices these freight deals? What if interest rate shifts leave commercial real estate in a crunch? And so on. From there, you trace the impact of that scenario to identify knock-on effects and unexpected connections, so you can devise mitigation plans.
My favorite what-if game is to map out a process, and then imagine twisting it around. What happens if we reorder certain steps? Or skip steps? Or swap in new actors here and there? What if we shrink or stretch the timescales? And, having done that, what are the ripple effects?
This game is especially helpful in spotting the impact of technology changes. New tech means swapping in new actors ("let's use an ML model, instead of a team of people, to make this decision") and changing timescales (since machines operate much faster than people).
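If you like to play this what-if game in code, a toy sketch makes it concrete. Everything below (the step names, the actors, the durations) is invented for illustration; the point is that once you write a process down as data, "swap in a new actor" or "shrink the timescale" becomes a one-line perturbation whose ripple effects you can then chase down.

```python
from dataclasses import dataclass, replace

@dataclass
class Step:
    name: str
    actor: str
    duration_hours: float

# A made-up approval process to play the what-if game against.
process = [
    Step("collect applications", "ops team", 8),
    Step("score applicants", "underwriting team", 24),
    Step("final approval", "manager", 4),
]

def swap_actor(steps, step_name, new_actor, new_duration):
    """What if a different actor (say, an ML model) handled this step?"""
    return [replace(s, actor=new_actor, duration_hours=new_duration)
            if s.name == step_name else s
            for s in steps]

def total_hours(steps):
    return sum(s.duration_hours for s in steps)

scenario = swap_actor(process, "score applicants", "ML model", 0.1)
print(f"baseline: {total_hours(process):.1f}h, what-if: {total_hours(scenario):.1f}h")
# The interesting part comes next: which downstream steps quietly relied
# on that 24-hour buffer, and what breaks when it shrinks to minutes?
```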
I've been thinking about this in relation to North American financial markets moving settlement times from two days to one – known as "T+2" and "T+1", respectively. Transactions are just entries in computer systems, and computers operate at nanosecond speeds. So what's the big deal in shaving a day off a financial activity? Pretty big, as it turns out:
For US investors and other intermediaries, this is a hassle as it crams the settlement process into just a few hours after markets close. Working hours on the east coast will have to lengthen a little. Further east, it is much more of a pain, and a costly one at that, in large part because the Earth turns at a specific pace and currency exchange trades are better priced and more liquid when their home market is awake. To get trades done, asset managers may have to accept rubbish currency exchange rates, especially on a Friday. Or they may need some kind of stop-gap measures from their banks.
Night shifts or extra US staffing — hardly cheap options — are proving popular. “For them, it’s better to put bums on seats and take on that cost than to run the risk of liquidity shortfalls,” said Gerard Walsh, head of client solutions at custody bank Northern Trust in London. “I don’t think the market has grasped it properly.” Some will opt to buy or sell foreign currency in advance of securities trades — a clunky and undesirable outcome.
While this shift from T+2 to T+1 is no problem for computers, it complicates matters for the human actors involved. And then, for everything connected to those human actors. Which is, well, everything and everyone.
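To put rough numbers on that compression, here's a back-of-the-envelope sketch in Python. The 4pm ET close, the 9pm ET paperwork cutoff, and the Hong Kong desk are all assumptions for illustration rather than actual market rules. The point is simply that dropping a settlement day pulls the (assumed) deadline from roughly a day and a half after the close to just a few hours, landing right as an Asia-based desk's next business day begins.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

ET, HK = ZoneInfo("America/New_York"), ZoneInfo("Asia/Hong_Kong")

# Illustrative assumptions, not actual market rules: a trade struck at the
# 4pm ET close on a Monday, with post-trade paperwork (allocations,
# affirmations, FX funding) due by 9pm ET the evening before settlement.
trade_close = datetime(2024, 5, 20, 16, 0, tzinfo=ET)

def paperwork_deadline(settlement_days: int) -> datetime:
    """Assumed cutoff: 9pm ET on the evening before the settlement date."""
    return (trade_close + timedelta(days=settlement_days - 1)).replace(hour=21)

for label, days in [("T+2", 2), ("T+1", 1)]:
    deadline = paperwork_deadline(days)
    hours = (deadline - trade_close).total_seconds() / 3600
    print(f"{label}: {hours:.0f}h after the NY close; "
          f"that's {deadline.astimezone(HK):%A %H:%M} in Hong Kong")
```

Under these (again, assumed) numbers the window shrinks from about 29 hours to about five, which is roughly the story in the quote above: New York gains a late night, and a desk further east loses a full working day of slack.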
As we consider use cases for AI, let's remember to think about the full extent of the impact. Changes in a complex system rarely affect just the component in question.
In other news …
- Meta's pay-for-privacy plans are not sitting well with the European Data Protection Board. (Here's an analysis in Le Monde.)
- Remember how That Guy™ created an AI bot, called Grok? Yeh, Grok seems a little lost.
- AI chatbots for Instagram influencers. What could possibly go wrong?
- Something that's gone wrong: a Facebook AI chatbot told some tall tales about itself.
- Cosmetics giant Estée Lauder has launched a lab for testing AI.
The wrap-up
This was an issue of Complex Machinery.
Reading this online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.