Complex Machinery

September 22, 2025

#045 - Tracing the connections

The key lesson from complex systems: when everything's connected, large players can pull others up and also drag them down.

You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

(Photo: a large metal structure of interconnected bars, by Alina Grubnyak on Unsplash)

I'd originally planned an essay for today's newsletter. Between wrapping up my next book and getting distracted by recent AI events, I decided to change gears and run a grab-bag of topics.

Somehow I still wound up with a theme: trace the connections. Students of complexity will no doubt feel at home.

I expect to publish the aforementioned essay next time around, or in the newsletter after that. (What's the topic? I won't spoil it. But I will say that I make brief mention of Лихие Девяностые … A couple of you already know where I'm going with that.)

Follow the money

Generative AI's popularity has bestowed cash upon anyone who can claim a connection to the technology. That's worked wonders for snake oil vendors, but also for those who sell supporting hardware like GPUs and datacenter equipment.

AI has thus breathed new life into hard drives. You need disk space to store your training data, models, and generated artifacts, after all. So the hard drive connection makes sense in hindsight. It was also clear in foresight, if you were sufficiently motivated.

We can draw a parallel to the 2008 mortgage crisis. Does the name John Paulson ring a bell? Probably not, unless you hang out in finance circles. But you certainly know of his work: Paulson's hedge fund was one of the few players to successfully bet against the housing markets, so he won big when that space finally came face-to-face with reality. (That's the deal with zero-sum games: when almost everyone is wrong, the winnings are split between the few people who were right.)

Paulson wasn't the only person to spot trouble in the subprime space and place short bets. But his team went the extra mile to trace higher-order connections to bad mortgages. Pulling from Gregory Zuckerman's book The Greatest Trade Ever:

Paulson now was even more convinced that the nation's debt problems weren't confined to subprime home mortgages. He told his team to begin shorting shares of banks with significant exposure to the credit-card business, as well, and those making commercial real estate, construction, and almost any other kind of risky loans.

Gary Shilling, the downbeat septuagenarian economist from New Jersey, kept telling Paulson's team that subprime mortgage problems would infect the entire housing market. So Paulson's hedge fund shorted shares of Fannie Mae and Freddie Mac, the big mortgage lenders, as well.

(For the life of me I can't find it now, but I could have sworn that Paulson's team also dug deeper into B2C businesses. The thesis was: if someone stops paying their mortgage, what other bills are likely to fall behind? And can we short those companies?)

When it comes to AI, then, who is tracing those Nth-order connections to shape their investment strategy? For everyone who stopped at "hard drives and other server equipment," someone else took the next steps to "power grids," "construction gear for datacenters," and "large expanses of commercial real estate." I can only imagine what other smart bets they're placing. I'd find that fascinating to research out of pure curiosity, with profit motives as a sidebar.

Those questions hold for genAI valuations going down as well as up. When I wrote "The Looming AI Debt Wall," I noted that the bills for AI's boasting would eventually come due. A year later, the debt continues to grow, yet the field still looks ill-prepared to make good on it. Should the genAI providers suffer, so will their investors. Those investors won't eat that writedown on their own, either. Expect the pain to extend outward, on and on, to some N orders of effect.

Who is poised to play John Paulson in that opera? I imagine someone is tracing those connections and preparing to short the affected companies.

I emphasize "preparing" there. When it comes to shorting, it's not enough to spot indicators of trouble. You also need to know when that trouble will materialize, and ideally place your bets just before it all kicks off.

To short a raging bull market is to endure pain and possibly early ruin. But to time it right … well … let's just say that Paulson's group allegedly pulled in $15 billion from their mortgage bets. Those who read the tea leaves too soon walked away empty-handed.

When the intern swipes secrets

Over the years I've cooked up several analogies for AI models. One that I keep coming back to is:

A model is like an intern: long on energy but short on experience.

In other words, you want to keep a close eye on that model. Left unattended, it can wreak all kinds of havoc.

Today I'm amending that analogy. The model might still be your standard intern, sure. It might also be an enemy operative posing as an intern. Or even an honest intern that's been duped into doing the wrong thing, like, say, sending your secrets to an external party:

Deep Research is a ChatGPT-integrated AI agent that OpenAI introduced earlier this year. As its name is meant to convey, Deep Research performs complex, multi-step research on the Internet by tapping into a large array of resources, including a user’s email inbox, documents, and other resources. It can also autonomously browse websites and click on links.

[...]

On Thursday, security firm Radware published research showing how a garden-variety attack known as a prompt injection is all it took for company researchers to exfiltrate confidential information when Deep Research was given access to a target’s Gmail inbox. This type of integration is precisely what Deep Research was designed to do—and something OpenAI has encouraged. Radware has dubbed the attack Shadow Leak.

“ShadowLeak weaponizes the very capabilities that make AI assistants useful: email access, tool use and autonomous web calls,” Radware researchers wrote. “It results in silent data loss and unlogged actions performed ‘on behalf of the user,’ bypassing traditional security controls that assume intentional user clicks or data leakage prevention at the gateway level.”

To be fair – and this is also a point I raised when that Replit code agent deleted a customer's production database – this isn't so much an AI problem as it is a problem of missing risk controls. As companies deploy more agents and other AI models, and as they grant them greater responsibility and authority, they'd do well to establish checkpoints and monitors to keep everything under control and prevent runaway trains.

Introducing controls and checkpoints will slow things down right as companies turn to AI to speed up operations. That's precisely the point.

To borrow an old phrase: slow is smooth; smooth is fast.
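If you want a concrete picture, here's a minimal sketch (in Python) of the kind of checkpoint-and-monitor I have in mind. To be clear: this is my illustration, not anything from the Radware or Replit write-ups, and the action names and helper functions are made up. The idea is simply that high-risk agent actions stop and wait for a human, and everything gets written to an audit log.

# Minimal sketch of an approval gate around an AI agent's tool calls.
# The action names, the HIGH_RISK_ACTIONS set, and the audit log are
# illustrative assumptions, not a real framework or vendor API.

import json
from datetime import datetime, timezone

HIGH_RISK_ACTIONS = {"send_email", "delete_records", "call_external_api"}
AUDIT_LOG = []  # in practice: an append-only store your security team can query


def require_approval(action: str, payload: dict) -> bool:
    """Checkpoint: a human operator signs off before a high-risk action runs."""
    answer = input(f"Agent wants to run {action} with {payload!r}. Allow? [y/N] ")
    return answer.strip().lower() == "y"


def run_agent_action(action: str, payload: dict) -> dict:
    """Monitor: record every action the agent attempts, allowed or not."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "payload": payload,
    }
    if action in HIGH_RISK_ACTIONS and not require_approval(action, payload):
        entry["result"] = "blocked"
    else:
        # ... perform the actual tool call / API request here ...
        entry["result"] = "executed"
    AUDIT_LOG.append(entry)
    return entry


if __name__ == "__main__":
    run_agent_action("summarize_inbox", {"mailbox": "demo@example.com"})
    run_agent_action("send_email", {"to": "outside-party@example.com"})
    print(json.dumps(AUDIT_LOG, indent=2))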

Practice makes perfect

Two stories drive home the point about risk controls and other such checkpoints. The first was this past weekend's European airport snafus, in which a cyberattack against a third-party company derailed electronic check-in. (Here's an AP report with details. Credit where it's due: I first read about this in Der Spiegel 🇩🇪.)

The second story, covered in Le Monde 🇫🇷, involved a joint exercise that simulated a cyberattack. Participating companies had to handle the issue – including some messy knock-on effects – as organizers released "news updates" throughout the day.

Neither story centered on AI-based systems, but both nonetheless serve as a harsh reminder for companies deploying AI.

The rush to assign employee-like autonomy and agency to AI bots introduces new threat vectors and increases our exposure to downside risks. An accident – say, a code bug – could trigger a cascading incident. And should a bad actor poison your AI agents, you could wind up with a botnet attack that comes from inside your four walls.

(Last year's CrowdStrike incident had an eerily similar impact. That was triggered by a software bug rather than an attack, but it still ground airline operations to a halt. You can read newsletters #13 and #14 for details as well as lessons.)

It's hard not to see those two stories side by side.

The airports had to figure out problems in real-time, and under time pressure. Falling back to paper-based check-in doesn't seem like a big deal, until you remember that a single airport is a complex system within a complex system: a slow-down in any one part of the flow leads to ripple effects not just in the airport in question, but in every airport to which it is connected. And if the incident is large enough or lasts long enough, that eventually becomes "all airports."

By comparison, the simulated exercise gave its participants the chance to identify problems and trace knock-on effects in a safe environment. Participating companies can now go back and shore up their defenses. This is similar to the way banks run stress tests against adverse financial scenarios, and how military groups engage in wargaming exercises (tabletop or live-fire) to evaluate their readiness.

We'll need similar exercises for AI systems, so we can spot problems early and devise ways to handle them. Our best route there is to heed the lessons and warning signs from past incidents. There's no need for the AI field to learn this first-hand. Even though I'm sure it will.

The high-wire act nobody wants

Do you like to sleep well at night?

Don't do live demos of your tech product.

Just don't.

Live tech demos are an unnecessary high-wire act – that terrible mix of "large risk" and "small reward." The reward is small because the audience doesn't care if you use a canned demo. The risk is that if it fails, you have to awkwardly stand there holding everyone's attention with nothing good to show them.

Meta reminded us of this lesson when showing off its fancy new AI glasses. In a most cringeworthy moment, the hapless guest had to stand in their kitchen, with the glasses malfunctioning, as the world watched in real time.

You might think the lesson here is to steer clear of AI. Or even to steer clear of connected devices that rely on remote systems for basic functionality. Not so. The AI goof here is a symptom. The root issue is doing a live demo.

Today's connected tech products rely on several layers of infrastructure and external services, all of which conspire to teach you harsh lessons about complexity. A small issue here leaves you with a headache way the hell over there. That could stem from a bug in your software, sure. The source could also be a flaky WiFi connection, or your VPN getting blocked, or a failure somewhere else along the line, or a mistaken code push two thousand miles away because the dev team missed the memo that you were doing a road show. Or all of those thrown together.

Shortly after writing that paragraph, I came across the official explanation from FaceMeta's CTO:

“When the chef said, ‘Hey, Meta, start Live AI,’ it started every single Ray-Ban Meta’s Live AI in the building. And there were a lot of people in that building,” Bosworth explained. “That obviously didn’t happen in rehearsal; we didn’t have as many things,” he said, referring to the number of glasses that were triggered.

That alone wasn’t enough to cause the disruption, though. The second part of the failure had to do with how Meta had chosen to route the Live AI traffic to its development server to isolate it during the demo. But when it did so, it did this for everyone in the building on the access points, which included all the headsets.

“So we DDoS’d ourselves, basically, with that demo,” Bosworth added.

You only do a live demo out of arrogance or ignorance. Wise people know to avoid both.

I learned my lesson a while back. I won't share specifics. I'll just say that we were considering a live demo (out of ignorance, not arrogance) and we thankfully encountered an issue in a closed testing environment.

For the handful of people who know what I'm on about:

Yes, I'm talking about "that one."

And we made the right decision.

Recommended reading

The latest issue of The Atavist Magazine – "The Worst Air Disaster You’ve Never Heard Of" – is not at all about AI. And yet, you'll hear echoes of today's AI rush in every paragraph.

This is required reading for company executives.

In other news …

  • When people use genAI to create images from the past, the subtle errors can change how we view history. (Le Monde 🇫🇷)
  • Would you like a genAI bot to play a role in your government? Albania's giving it a go. (Der Spiegel 🇩🇪)
  • Meta is selling … electricity? Yes. Electricity. It's part of that whole "massive datacenter buildout" thing they have going. (The Register)
  • Time to add "your preferred airline" to the list of data brokers. Gross. (404 Media)
  • Here's a guide for journalists to spot AI-generated image and video content. This isn't the typical "oooh, it has em-dashes, so it must be fake" nonsense. This list looks pretty solid, and appears to overlap with OSINT techniques. (Global Investigative Journalism Network)
  • Hey, software developer! Did you lose your job to an AI bot? You can now earn cash … cleaning up after AI bots' code. (404 Media)
  • It turns out that AI tools continue the grand medical tradition of relying on gender and ethnicity to under-treat patients. Who could have seen it coming? (Aside from "everyone" …) (Financial Times)
  • TikTok now identifies objects in videos, to drive links back to its shopping experience. This includes grief-stricken videos taken in war zones. Classy. Just classy. (The Verge)
  • You know that low-grade, looks-passable-from-afar-but-is-actually-terrible-up-close material that comes out of LLMs? The office version is now called "workslop." And it is expensive. (HBR)

The wrap-up

This was an issue of Complex Machinery.

Reading online? You can subscribe to get this newsletter in your inbox every time it is published.

Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.

Disclaimer: This newsletter does not constitute professional advice.
