#001 – Nothing's real but the fakery
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)
Real deep fakes
Generative AI is fascinating. People now have the ability to create a synthetic representation of a person – in image, video, or audio form – in situations that range from the pleasantly mundane to the purely fantastic.
Sometimes people cook up pretty cool ideas. Like a model who can send their generated likeness on photo shoots. Those use cases get fun, playful names like "digital twin" and "digital doppelganger."
We use uglier terms like "deepfake" to describe a likeness of someone generated without their consent. In recent weeks we've had Fake Taylor Swift engaging in sex acts, Fake Joe Biden robocalling voters to stay home, and Fake Bank Boss telling someone to transfer a rather hefty sum of £20M (HK$200M) to their not-fake bank account.
That third one is a reminder that shady people are early adopters of technology. They kind of have to be: new technology offers them new ways to get ahead of a mark. In this case the scammers joined a video call, deepfaked as bank higher-ups, and asked an employee to execute the transfers.
(That's quite a step up from, say, impersonating someone's voice on a phone call with your investors. Which goes to my point about technology adoption.)
Put yourself in the employee's shoes for a moment. Your boss's boss's boss has just Zoomed you and instructed you to do something. Your choices are:
Perform the task as stated.
Shift into Movie Interrogator Mode and start quizzing them. Ask them to hold up today's newspaper, or do the latest TikTok dance, to prove that they are real.
It's hard to go wrong with Option 1. Not impossible. But damned hard.
Option 2 is destined for failure. In a pre-deepfake era, a video appearing to be your ultra-boss is 99.9999% likely to actually be your ultra-boss. Today, that likelihood is … roughly the same. Telling that person "I'm afraid I can't do that, Dave" is a very, very dumb How I Got Fired story to share over drinks. Or at the unemployment office.
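If you want to see the "roughly the same" claim in numbers, here's a quick Bayes sketch. The attack rate below is a made-up assumption for illustration, not a real statistic:

```python
# Rough Bayes sketch: even post-deepfake, a video of your boss is
# almost certainly your boss. The attack rate here is an assumption.
p_attack = 1e-5                 # assume 1 in 100,000 exec video calls is a scam
p_looks_real_given_fake = 1.0   # a good deepfake looks like the real thing
p_looks_real_given_real = 1.0   # the real boss also looks like the real thing

# "It looks like my boss" is no evidence either way, so the posterior
# stays pinned to the (tiny) base rate.
posterior_fake = (p_looks_real_given_fake * p_attack) / (
    p_looks_real_given_fake * p_attack
    + p_looks_real_given_real * (1 - p_attack)
)
print(f"P(fake | looks like the boss) = {posterior_fake:.5%}")  # ~0.001%
```

The trouble is that the attacker gets to choose when that one-in-a-hundred-thousand call happens.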
The general take is that banks can prevent incidents like this by instituting more controls: say, requiring a second person to sign off on transfers exceeding some amount X, or imposing a mandatory time delay.
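To make that concrete, here's a minimal sketch of what those two controls might look like. Everything in it – the threshold, the hold period, the function name – is an illustrative assumption, not any real bank's policy or API:

```python
from datetime import datetime, timedelta, timezone

# Illustrative values -- not any real bank's policy.
APPROVAL_THRESHOLD = 1_000_000        # transfers above this need a second approver
MANDATORY_HOLD = timedelta(hours=24)  # cooling-off period before money moves

def can_execute(amount, approvers, requested_at, now=None):
    """Allow a transfer only if it clears both controls.
    (requested_at should be timezone-aware, like the default `now`.)"""
    now = now or datetime.now(timezone.utc)
    if amount > APPROVAL_THRESHOLD:
        # Control 1: two distinct humans must sign off.
        if len(set(approvers)) < 2:
            return False
        # Control 2: a mandatory delay, so an urgent-sounding video
        # call can't move the money on its own.
        if now - requested_at < MANDATORY_HOLD:
            return False
    return True
```

The specific numbers don't matter. What matters is that each `if` statement is one of the barriers described next.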
Controls help, but they also don't. Remember that every control is a barrier between "speed and convenience" on one side, and "safety" on the other. The location and number of barriers will tell you a lot about your risk appetite.
The day after a failure, your appetite for risk is very low, the right number of controls is "one more than we already had" and the right size is "huge." (The criminals also know this, which is why they will plan two barriers beyond their last job.) So you add the new control.
Two weeks after a failure, memories have faded. Every control is Just Getting In The Way of legitimate activity. People want to shrink that barrier, and shift its location from "safety" to "convenience" as they go about their work.
That pressure carries its own implied consequence: "hey, push this through or you're fired."
So as you rush to establish new controls, do everyone a favor: add a very clear "you won't get fired for doing this" across the top.
Spotting a face in a crowd
The previous flavor of AI – which was also known as "AI," but focused on analysis and prediction rather than generating content – is still alive and kicking. And its problem child, facial recognition, keeps stealing what's left of its spotlight.
Facial recognition has been proven time and again (and again, and again, and …) to be reliably unreliable. This creates risks for all involved: the vendor, the buyer, and the hapless person who is misidentified. Sometimes ridiculously misidentified, like when they were several states away at the time of the alleged crime.
And yet, companies keep investing in this flawed technology. The latest buyers include UK supermarkets, which will use it for age verification.
For a cautionary tale of Why We Don't Abdicate Responsibility To The Machine, look no further than the UK Post Office scandal. A defective IT record-keeping system – one which, according to some reports, insiders knew was defective – led to more than 700 wrongful criminal prosecutions.
When your system catches one person who claims they are innocent, you can tell yourself "well that's what a guilty person would say." When you get several people claiming their innocence against the same system, over a short timespan, you're due for a think. "Did I really have hundreds of people stealing from me, and this magic box uncovered them overnight? Or does my new tool have a flaw?"
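A bit of back-of-the-envelope arithmetic shows why that second question matters. The numbers below are assumptions for illustration, not figures from the Post Office case:

```python
# Illustrative base-rate arithmetic -- all numbers are made up.
branches = 10_000           # accounts the system monitors
true_theft_rate = 0.001     # assume 1 in 1,000 operators actually steals
false_positive_rate = 0.02  # assume the system wrongly flags 2% of honest ones

actual_thieves = branches * true_theft_rate
false_accusations = branches * (1 - true_theft_rate) * false_positive_rate

print(f"Real cases flagged (at best): {actual_thieves:.0f}")   # 10
print(f"Honest people flagged:        {false_accusations:.0f}")  # ~200
```

Under those assumptions, the overwhelming majority of the accused really are innocent. Even a modest error rate swamps the genuine cases.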
Here's a hint: when you're having a map-versus-territory moment, the territory always wins. Always.
I don't think those UK supermarkets got that memo. Which comes as a surprise, because the company providing those facial recognition tools is Fujitsu – the same one behind the flawed Post Office IT system from two decades ago.
Minding the minders
For all the talk of AI taking jobs, it's also responsible for creating some new ones. They're not always jobs a person would want, though.
I recently caught this WSJ piece about "robot wranglers" who keep industrial and warehouse robots on track. With people minding the machines, who will mind the people?
One weekend when Cusack was working overtime, he set up a robot to sand the material, then retreated to a nearby room to watch. But as the robot began moving, Cusack realized he had forgotten to set a critical control. The bot was on an unalterable path directly into an expensive fiberglass panel.
The robot slowly crawled forward until it “put in a giant circular hole in the side of the panel,” he said. He had to explain to his team the hundreds of thousands of dollars worth of damage was his fault for not properly instructing the bot.
I've said it before, and I will say it again: "never let the machines run unattended." Maybe we should stamp that on every AI-driven bot.
Content moderation is another form of robot-wrangling. Social media sites try (to varying degrees of "try") to filter out bad content and have to use AI to handle the volume. The catch? Those AI models sometimes drop the ball, so platforms hire … people to check on the machines' output and otherwise improve the system. Twitter – I refuse to call it "X" – is about to hire 100 new employees to do just that.
On the one hand, a company creating new jobs is a Good Thing™. (We'll conveniently ignore that That Guy™ is effectively backfilling a Trust & Safety function that he gutted after he took the reins.)
On the other hand, imagine the kinds of things that platforms want to filter out. Now imagine being a person who has to look at that content, day after day. Content moderators suffering from work-related trauma is a very real, though often overlooked, part of the so-called "magic" of AI.
It really puts your workplace drama in perspective, doesn't it?
Click here to prove you're not a human
If I can end on a bright note, there's at least one story of people taking jobs back from the machines.
Remember that AI bot that was trained on George Carlin's material?
Its creators have come clean. They say the Carlin "bot" was a human writer all along. So now I have to ask:
Do we need a reverse-CAPTCHA test, to prove that something is indeed a robot?
Someone call the VCs. This one's going to be hot.
The wrap-up
This was an issue of Complex Machinery.
Reading this online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.