#000 – Too much time in the wilderness
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)
Skeleton crew
This New York Times piece is a write-up on modern-day train robberies. It doubles as a lesson about automation. Especially the kind of automation driven by software and AI.
The article explains how the Amazon'ification of everyday life has weighed on supply chains. Online shopping had already been growing over the years. Then the onset of the Covid-19 pandemic, which closed brick-and-mortar retail stores overnight, dramatically accelerated the trend. The sum total is that there are a lot more packages to ship, a lot more often, and the railroads have adapted:
Over the past decade, in a push for greater efficiency, and amid record-breaking profits, the country’s largest railroads have been stringing together longer and longer trains. Some now stretch two or even three miles in length.
But there's a catch:
At the same time, these companies cut the number of employees by nearly 30 percent, so fewer people now manage these longer trains.
That kind of train makes for a juicy target:
It's carrying all kinds of goods, most of which are easy to resell. That includes expensive electronics.
There are fewer eyes on the system. The article notes that some trains travel hundreds of miles through relatively quiet territory, with only forest animals as witnesses.
They're easy prey. Some cars' owners won't spring for good locks because margins are so tight.
Criminals know where the trains are going ("tracks!"), they can leap on and off without alerting the crew ("what crew?"), and the train may be two states away before anyone notices a problem. I'm surprised this kind of theft isn't more widespread.
And that leads us to the lesson for software- and AI-driven automation.
A different kind of machine
Companies have been pushing that automation since the Dot-Com era. The goal? To take business tasks, as performed by people, and convert them to code.
Your process relies on firm, documented business rules? Software's if/then calls will handle that just fine. Customers need to book a flight? Click through the website or mobile app instead of talking to a person. Have a problem that's a little too fuzzy for software? AI can classify those documents or price those assets, no sweat. The best part is that this can all run 24x7.
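To make the first case concrete: a firm, documented business rule translates almost word-for-word into code. Here's a toy sketch of that translation (the policy, thresholds, and fees are invented for illustration, not drawn from any real company):

```python
# Toy example: a documented shipping-fee policy expressed as if/then rules.
# Every number and condition here is made up for illustration.
def shipping_fee(order_total: float, is_member: bool) -> float:
    if is_member:
        return 0.0   # members always ship free
    if order_total >= 50.0:
        return 0.0   # free shipping above the order threshold
    return 5.99      # flat fee otherwise

print(shipping_fee(25.0, is_member=False))  # 5.99
print(shipping_fee(60.0, is_member=False))  # 0.0
```

Once the rule lives in code like this, it runs around the clock with no one on shift. Which is the whole point, and the whole problem.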
This is an efficiency play, pure and simple. The automation allows you to eliminate existing human headcount and/or slow the growth of new headcount. You're trying to do as much as possible with as few people as possible.
This isn't all bad, mind you. Some tasks are all of "dull," "repetitive," and "predictable," which are my three criteria for automation. Why not free people up for the interesting work that requires nuance?
But even when deployed with the best of intentions, tech-based automation takes us right back to the railroad problem: larger and larger amounts of work, with fewer and fewer people involved to keep an eye on it all. That is fertile ground for new and difficult problems to develop.
Not exactly new
We already see this in a particular type of AI-driven automation: online content moderation.
Consider platforms built on user-generated content, or UGC. It started with early-day meme sites like those from the Cheezburger network. That list has since grown to include YouTube, Instagram, TikTok, Twitter (no, I'm not going to call it "X"), and Facebook.
Each such site follows a familiar progression:
"Oh man, people really love to post!" And other people love to read those posts. We will monetize all of this online attention.
"Whoa. Terrible people like to post content, too!" Those are some nasty words and even nastier photos. We should perform some kind of content moderation to keep things clean.
"There's too much content for a team of humans to handle!" Say … we can use software and AI to automate this...
(There are more steps, yes. But we'll save those for another day. And don't ask me which step is "Some Dude Buys The Place, Sacks Half The Team, And The Value Plummets.")
The automation makes it possible for a small team to manage a UGC platform, 24x7x365, with a user base that exceeds several countries' combined populations.
This is also the digital equivalent of those freight trains, traveling through the wilderness with minimal onboard personnel. Bad actors get plenty of room to poke and prod at the system for vulnerabilities.
Some vulnerabilities are easier to exploit than others. AI-based systems, while better than pure rules-based pattern-matching, are ill-suited to catch slang or other language that is intended to appear innocuous. AI's other weakness is that, deep down, it is similar to software: the linear algebra and pattern-matching, once baked into a model, amount to a ton of if/then rules. (Very deep if/then rules, but rules nonetheless.) A person can keep exploiting a weakness till someone updates the model. Which is unlikely to happen if a company doesn't have enough human staffers to keep an eye on social trends and monitor the system.
And then …
So there you go. Modern-day train thieves snag Amazon packages. Attackers on tech platforms spread inappropriate materials. Both problems stem from situations where there simply aren't enough people involved to observe and intervene.
There is one case in which the railways have it easier, though: train theft still requires a lot of manual labor.
Bad actors trying to defeat content moderation systems have generative AI on their side. Instead of poking at a site one post at a time, they can deploy genAI to create variants of posts to see which will get through. They can also generate enough nonsense posts to flood a content moderation system, which leaves admins to choose between "let the site grind to a halt" or "disable the moderation so that the site continues to generate revenue."
That's the story today. Should the railways ever move to completely automated, crew-free trains, expect higher-tech crime to follow suit.
The wrap-up
This was an issue of Complex Machinery.
Reading this online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.