#011 - Wishing it into existence
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)
AI's popularity stems from three main sources:
- Typical fad dynamics. Companies see something getting attention, so they copy it.
- Genuine optimism. This is sometimes a misinformed, unhealthy optimism – the kind that leads people to overlook AI's flaws – but it's genuine nonetheless.
- Faking it. Companies pushing their possible-future AI dreams as a present-day reality.
That last group is interesting. They are crossing their fingers, banking on their dream becoming real before anyone catches on.
And the thing is, that might work! Pinocchio became a real boy; why can't mountains of investor cash bring AI promises to life? But for the moment those hopefuls are in a race against time. Pinocchio was under no such time crunch.
The last fifteen years of my career have shown me what's possible with AI – and what isn't. And what I've learned is that when we task AI with doing what it can't … there's trouble.
Computer says guilty
Take facial recognition. It simply doesn't work that well. You'd think that would push companies away from the technology. But they love it. The latest example is temp agencies in France using it to spot identity fraud among workers. I guess they hadn't seen American news reports? Because US police departments have caught some (well-deserved) flak over facial recognition's connection to wrongful arrests.
Old and busted: "computer says no." New hotness: "computer says guilty."
One such incident involves Detroit, which has recently settled a wrongful arrest case from 2020. This excerpt from a New York Times article is quite telling:
The Detroit police are responsible for three of the seven known instances when facial recognition has led to a wrongful arrest. [...] [Detroit officials] remain optimistic about the technology's crime-solving potential, which they now use only in cases of serious crimes, including assault, murder and home invasions.
It gets better. And by "better," I mean "worse." (Emphasis added.)
James White, Detroit's police chief, has blamed "human error" for the wrongful arrests. His officers, he said, relied too heavily on the leads the technology produced. *It was their judgment that was flawed, not the machine's.*
That's a bold claim, considering that human judgment chose to deploy facial recognition in the first place. And that same human judgment insists on its continued use, despite the poor track record.
As the Roman historian Tacitus noted, omne ignotum pro magnifico est – everything unknown is taken for magnificent. That's the Latin great-great-great-grandfather of Arthur C. Clarke's third law: "Any sufficiently advanced technology is indistinguishable from magic."
And magic works best when you want to believe.
Tone is everything
If you think computers are bad at matching faces, they're even worse at reading emotion.
Ask anyone who's done a good deal of sentiment analysis. I still remember one employee satisfaction survey in which the machines, ill-equipped to suss out the semantic smokescreen that is corporate-speak, marked everything as neutral.
That happened during the old-school days of natural language processing (NLP). I'd like to tell you that sentiment analysis has improved under modern AI, backed by powerful neural networks. I'd like to tell you that, yes. But sentiment analysis, with or without powerful AI, is still a game of mapping human emotion to a math problem that a computer can handle. It still stumbles over sarcasm, irony, and situations that are completely unlike its training data:
While the failure to detect any negative feeling could be down to bad acting on my part, I get the impression more weight is being given to my words than my tone, and when I take this to [Alan Cowen, CEO of emotional AI startup Hume], he tells me it’s hard for the model to understand situations it hasn’t encountered before. “It understands your tone of voice,” he says. “But I don’t think it’s ever heard somebody say ‘I love you’ in that tone.”
(This failure also points to a weak spot in the company's testing. I suspect some modest red-teaming would have uncovered the tone/words disconnect before a journalist had the chance to try it out. But I'll let that slide for now.)
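To make that "math problem" framing concrete, here's a minimal sketch of old-school, lexicon-based sentiment scoring. The word lists and example sentences are my own invention – this isn't the survey tooling or Hume's model – but it shows why corporate-speak scores as "neutral" and why sarcasm can score as positive: tone never enters the math.

```python
import re

# A toy lexicon-based sentiment scorer -- illustrative only, not any vendor's model.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"terrible", "hate", "awful", "angry"}

def polarity(text: str) -> float:
    """Score text from -1 (negative) to +1 (positive) by counting loaded words."""
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

# Corporate-speak avoids the lexicon's loaded words, so it scores 0.0 ("neutral").
print(polarity("We continue to align on actionable synergies going forward"))  # 0.0

# Sarcasm pairs a positive word with a negative tone -- the tone is invisible here.
print(polarity("Oh, great. Another mandatory all-hands."))  # > 0, despite the sarcasm
```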
That said, sentiment analysis works well on the extremes – people swearing or shouting – so I have higher hopes for its use in customer service. But what the AI giveth, the AI taketh away. Japan's SoftBank is applying AI not only to detect upset customers but also to modulate their voices, softening the impact on phone reps.
I'm all for making the customer service job less painful. But instead of using AI to hide customer anger, why not figure out why customers are angry in the first place? That seems smarter and cheaper than throwing technology at the problem.
Power hungry
I never expected my interest in AI-based risk to overlap with climate issues, but here we are. It turns out that the power demands of AI – the new-age, mass-scale, generative kind – pose an environmental threat. To make matters worse, AI's energy consumption is poised to increase.
To address AI services' environmental impact, we could try:
1/ Reducing consumption by shifting our focus to valid, worthwhile use cases.
2/ Creating more electricity to power the existing, 95% rubbish use cases.
You will be disappointed, though not surprised, to hear that major AI service providers are pushing for option 2.
All of which brings us back to that race against time. How much are we willing to throw at AI vaporware in order to make it real?
Time to chicken out
Lest you think AI is the only corporate fad, I should remind you that companies are jumping on the fried chicken sandwich bandwagon again.
The difference? Chicken is edible. And it requires less electricity.
(No word on whether Roy Wood Jr will revive his "Chicken Sammich Coalition" sketches. For this, I am willing to cross my fingers and hope.)
In other news …
There's been a lot of AI news these past couple of weeks. I couldn't fit everything into this newsletter, so I might do a short writeup on some of these next week:
- OpenAI says the quiet part out loud: "it would be impossible to train today's leading AI models without using copyrighted materials." (Futurism)
- Figma's AI tool confounds imitation and flattery. (404 Media)
- Researchers sidestep AI protections. (The Register)
- TikTok sidesteps its own AI protections. (The Verge)
- Libel law does not mix well with LLMs. (The Atlantic)
- Companies are so hungry for AI training data that they are changing their Terms of Service agreements. (New York Times)
- The boss wants to talk a good AI game. Even when they're still figuring it out. (WSJ)
- A look into the behind-the-scenes, human side of AI: the emotional toll of content moderation work. (Der Spiegel 🇩🇪)
The wrap-up
This was an issue of Complex Machinery.
Reading this online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.