#014 - Embers and ashes
More lessons from the CrowdStrike incident, Main Street's view of AI, and what WorldCom's 2002 collapse might tell us about AI.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)
The last newsletter's CrowdStrike coverage started as a short-and-sweet segment. It then blossomed ("metastasized?") into the PhD dissertation that landed in your inbox.
Believe it or not, that was the edited version.
And based on recent conversations, I trimmed a little too much. So I've recovered those thoughts from the cutting room floor, mixed in some recent news, and included them in the first segment below.
So much to learn
CrowdStrike is no longer front-page news but the story promises to smolder for a while. New details will be pure gold for any company trying to avoid a similar experience.
So, to continue my list of lessons from the previous newsletter:
5/ Focus on the process, not the outcome. I often say that traders make money by being right about market movements, but they keep that money by managing risk. This involves identifying what could go wrong, then instituting processes that close off potential problems yet leave the door open to that sweet, sweet upside gain. You're welcome to steal this idea for your tech operations.
The catch? You can't predict every possible outcome. When a disaster finally breaks through your risk management processes, you want to be able to tell yourself: "There's no way in hell we could have seen that coming or protected against it."
Was it truly a black swan that took you out? You can have a clear conscience. Was it more of a gray rhino? It's time to reflect on your life choices.
I wonder where Delta Air Lines sees itself.
6/ Their incident is your learning opportunity. Delta was hit so hard by the outage that it has filed a lawsuit against CrowdStrike. The discovery phase should tell us a lot about Delta's IT practices. From the outside, it's not clear why the airline was so much more exposed than the other carriers.
Was this simply a case of rotten luck? Then we'll get some case studies outlining the unfortunate interaction of events. Or was Delta already teetering on the edge of disaster, such that a set of blue-screened PCs was enough to push it over? Then the story will define new antipatterns for IT infrastructure and policy. Ignore them at your peril.
7/ Gatekeeping isn't always bad. Part of why CrowdStrike was able to blue-screen roughly 8.5 million Windows machines worldwide is that its software had access to the deep internals of the operating system – the kernel. That's the software equivalent of giving a vendor's maintenance crew keys to your server room. The good thing is that they can let themselves in without bothering you. The bad thing is that they can let themselves in without bothering you.
To twist the old phrase, with great power comes very little room for slip-ups. The antidote? Compartmentalization limits the spread of bad news. Which is why you usually want thick walls around the kernel. And, frankly, around a variety of other processes and systems.
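For the technically inclined, here's a minimal sketch of the failure mode – in C, with an invented struct and invented function names, emphatically not CrowdStrike's actual code. A driver-style routine reads a value parsed from an externally supplied data file. Skip the bounds check in user space and you crash one process; skip it in kernel mode and there's no supervisor left to catch the fault, so you take down the whole machine.

```c
/* Toy illustration only (not CrowdStrike's code): reading a value
 * parsed from an externally supplied update file. */
#include <stdio.h>

typedef struct {
    int field_count;
    int *fields;      /* values parsed from the update file */
} channel_file_t;     /* hypothetical name, for flavor */

/* Defensive read: refuse out-of-bounds access instead of faulting.
 * In user space, a bad read here kills one process and the OS cleans
 * up. In kernel mode, the same mistake blue-screens the machine. */
int read_field(const channel_file_t *cf, int index) {
    if (cf == NULL || cf->fields == NULL ||
        index < 0 || index >= cf->field_count) {
        return -1;    /* the boring check that keeps the lights on */
    }
    return cf->fields[index];
}

int main(void) {
    int values[] = { 10, 20, 30 };
    channel_file_t cf = { 3, values };

    printf("%d\n", read_field(&cf, 1));    /* prints 20 */
    printf("%d\n", read_field(&cf, 99));   /* prints -1, not a crash */
    return 0;
}
```

The point isn't this particular check. It's that kernel-mode code gets no second chances, which is exactly why you gate what's allowed to run there.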
Apparently Microsoft left the kernel's keys under the doormat at the behest of European regulators. If that's true, Microsoft could rekindle those discussions with a smug tinge of I Told You So.
That leads to the next lesson:
8/ Never let a good disaster go to waste. Some readers have asked how to get their management chain to learn from the CrowdStrike incident. Because it doesn't help to know how to fix things if the powers that be won't grant you the time and money to do so.
The lesson I learned early in my career is to speak their language: money.
The executive levels of the org chart see the company in terms of how money comes in, where it sits, and how and why it leaves. If you want them to take action on something, explain how much money the move will bring in or protect.
This recent incident has showered you with talking points. You can send your CEO screencaps of CrowdStrike's share price. (Depending on when you read this, adjust the chart's date range so it captures the post-incident drop.) Maybe offer some back-of-the-envelope math on the PR hit. "Do you want this company getting those front-page headlines? Do you want those lawsuits? OK … Give me headcount for new ops hires and time to patch up our technical debt. That will keep the beast at bay."
9/ Focus on what matters. Someone has launched a parody site called ClownStrike. CrowdStrike's reaction was to file a DMCA takedown request.
No. No. No. Just leave it alone.
The mockery is nettlesome but it's ultimately a distraction. Far better to have your PR team issue a good-natured "ha ha you got me" and move on. Or ignore it altogether. There's so much more ROI in repairing meaningful business relationships.
Let the jesters have their day. Play your cards right, and your business will outlast the laughter.
Two out of three agree
When it comes to generative AI, the world is split into three groups:
- Vendors – Companies that sell AI hardware, tooling, and services.
- Builders – Companies that are trying to incorporate AI into their products. (They often buy from Vendors to make this happen.)
- Main Street – Everyone else.
The first two are ecstatic about AI. The third? Not so much. Which makes for an interesting dynamic. The Vendors need to hype it up because it's how they stay in business. The Builders do it because they're convinced that Main Street will want it. (Many of them just want to play with the shiny new toy, and "the customers want it" is a convenient excuse. Or they're desperate and chasing the latest fad for survival.) Neither group seems to grasp that Main Street is not that keen on the idea.
The lesson that Vendors and Builders fail to learn, emerging tech wave after emerging tech wave, is that Main Street doesn't care what's under the hood so long as the app or widget does what it says on the tin. AI is no different there. According to a recent study, some people are even turned off by the "AI" label. (In no small part because of that nasty problem of ill-acquired training data.) And then there's that part of Main Street that is actively fighting the latest tech darling.
The problem isn't Main Street's difference of opinion. It's that, without Main Street's buy-in, the Vendors and Builders are just high-fiving each other in an echo chamber. That artificially inflates prices – because Vendors and Builders operate in the land of stocks, VC funding, and large corporate contracts – which sets those prices up for a massive sell-off. Which has already happened. And might happen again.
Playing both sides of the fence
Despite the recent stock slide, companies keep spending on AI. Spending, and hoping that things turn around in time.
A few AI providers expect that smaller, focused models will save the day. Apple has found a different hedge.
It started several weeks ago, when I noticed that the vaunted "Apple Intelligence" product relied on a partnership with ChatGPT parent OpenAI. Apple was getting AI for its devices, and in return OpenAI was getting … ExposureBucks™. Because Apple wasn't actually paying them money. Oh no. Dear reader: Apple Money is reserved for things that don't risk going poof when a fad dies down.
Apple has since held similar chats with Google, Meta, Anthropic, and Perplexity. This is a smart move:
- Apple gets to position itself as having all of the cool genAI tech without having to build it.
- If genAI actually takes off long-term, these deals buy Apple the time to build its own systems down the road while still laying claim to having genAI in its devices right now.
- And if genAI flops, well … I have a hunch they'll pay the other providers in ExposureBucks™, too. So an AI failure won't reflect on Apple's balance sheet and share price.
(Somewhat related, Apple has accidentally offered a peek into its AI work. Some beta releases of macOS reveal the prompts Apple has devised to keep the backing LLMs on track. If you're interested in LLM security, this might be worth a look.)
(Also, I'm torn: instead of leaving researchers to dig the prompts out of beta releases, Apple could publish them outright so the entire LLM field could improve its safety measures. Then again, in a world where chatbots regularly go off the rails, laying claim to safer LLMs could be considered a competitive advantage. Hmmm.)
This year's villain
We've recently passed the anniversary of the WorldCom collapse. It all went to hell on 21 July 2002.
Enron is the poster child for massive dot-com-era accounting fraud – likely because its news landed first – but WorldCom's fiasco was the larger of the two.
In both cases, company executives cooked the books while external auditors tended the fire. Those auditors' stamps of approval reassured the public the numbers were trustworthy – that the money was real – allowing the frauds to grow to their massive proportions. The fallout destroyed jobs and wiped out retirement funds. All because of greed.
Those were the dot-com years. What will be the AI era's villain?
There's too much easy money floating around for everyone to be completely honest. Too much money, not enough questions, and everyone's in a rush because of the so-called "AI arms race." This is fertile ground for dishonesty.
Someone out there is pulling a fast one.
But who?
How?
Who will play the role of WorldCom's and Enron's auditors, nursing the fraud along by lending their credibility to the companies in question?
Eventually we'll find out.
That's it. That's the segment. This isn't a subtle nod to any particular company, nor do I have a specific article to share here. This is just something I think about.
In other news …
(I'll have more to say about the first three links in the next newsletter.)
- Given facial recognition's poor track record, I'm disappointed to see AI used for age verification in stores. Disappointed, but not surprised. (Les Echos 🇫🇷)
- We've gone from AI companions to AI therapists. That is a move in the wrong direction. (Le Monde 🇫🇷)
- Earlier this year, a journalist used facial recognition to track down a terrorist who had been in hiding for two decades. This rare facial recognition success (?) story has Germany asking about expanding police use of the technology. (Die Zeit 🇩🇪)
- I covered the Humane "AI Pin" in newsletters #006 and #009. Things have not gotten better for them. (The Verge)
- Taco Bell is looking into AI for its drive-thru experience. Now only the machines will know about your late-night cravings. (Popular Science)
- If you've seen ads for drugs on Facebook, you're not alone. (Let's conveniently ignore why Facebook's ad system thought you'd be a fit.) I'd love to know how well these ads performed. Seriously. It might tell us a lot about the effectiveness of targeted advertising. (WSJ)
- Chief Financial Officers are taking more CEO jobs. My question: when do Chief Risk Officers get their shot? (Bloomberg)
The wrap-up
This was an issue of Complex Machinery.
Reading this online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.