Complex Machinery

May 5, 2026

#061 - Catching up on the latest

A look at recent AI news: a new kind of CDO for banks (sort of), OpenAI's IPO-to-be, and more.

You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

(Photo of a dog leaping to catch a frisbee, by Wolfgang Hasselmann on Unsplash)

After the previous issue's essay, I'm catching up on recent news. Here are five short segments on the riskier parts of AI.

Last-minute additions

One benefit of writing a sometimes-topical newsletter is that the news gives me plenty to write about. One drawback is that an untimely arrival of news can lead to last-minute edits or additions. This week I had two cases of News That Changed The Newsletter:

The first is an article about OpenAI that I'll get to in another segment. It provided extra support for what I'd already written, but I still had to make some small edits to include it.

The second article is the reason behind this segment. It explains how banks are having second thoughts about holding all that datacenter loan debt:

Rather than a classic SRT that may be tied to dozens of loans, banks are exploring slicing and dicing large and concentrated data centre loans to shift the riskiest portions off their books, for example.

Banks shifting loans can mean a few different things, one of which is "this looks toxic and I don't want it on my balance sheet. Let me find some sucker with weaker risk controls. Er, I mean: a buyer with a bigger risk appetite." This was very much the playbook for loan originators in the run-up to the 2008 mortgage meltdown.

(I'm calling these new deals "collateralized datacenter obligations." What? That's the same acronym as a toxic 2008-era financial instrument? Sheer coincidence.)

It was funny to read this, barely ten days after I'd drawn parallels between 2008 and today's genAI mania.

At least one person has suggested that I rename "Complex Machinery" to "The I Told You So Gazette." Maybe. Maybe.

Speaking of those loans, and the datacenters they'll fund …

What if it's nothing?

It's funny how quickly genAI's story has changed from "this is inevitable and you'd better get on board" to "hitting the skids because people are protesting datacenter construction." But that's exactly what's happening. Every week I see another handful of stories about protests, or town hall meetings, or some other measure intended to keep genAI on its own turf.

Weirdly, this reminds me of Apple's App Tracking Transparency (ATT) rollout in 2021. This privacy change let people on iDevices stop a given app from tracking them outside its four walls. Well, it was more that it let people enable said tracking, because Apple set it to off by default. Advertisers had a fit over the potential revenue loss. Apparently they made so much money from surreptitiously trailing people's online activities that they'd become bold enough to say so out loud.

(FaceMeta claimed to have lost $10B in ad revenue over the first year, at a time when they'd also just torched $36B on that metaverse misadventure.)

The ATT rollout made me think of something beyond the revenue hit, though: what if the loss of tracking data were to have zero effect on ad performance? If click rates were the same with and without app tracking, that could damage targeted advertisers' claims that they need said data in order to be effective. Which could, in turn, put their entire business value in question.

I see a similar quandary for today's genAI hopefuls: what if they get their datacenters, and nothing changes?

AI companies claim that they'll need this extra compute capacity for … reasons. Very vague reasons. And saying "we need datacenters" is a great stalling tactic! Building these facilities takes years, which provides breathing room for anyone who's been a little flexible with the truth.

But if the genAI crew should get their datacenters, and the technology doesn't improve, that would pull back the curtain on the hype machine. Which could then put their entire business value in question.

If I ran a massive genAI company – the kind that has vastly overpromised on what genAI can deliver (which is, y'know, all of them) – I would pray that those datacenter protesters get their way. I could then tell my board and my investors that, darn, we're not going to meet our goals because those pesky citizens halted construction of our big AI factories.

In fact – and my usual this-isn't-professional-advice disclaimer applies here – hypothetically speaking, it might behoove such a genAI company to quietly fund the protests. Why pray for an outcome, when you can pay for things to go your way?

The higher-order point here is that limiting genAI's expansion into the physical realm – and into the mainstream financial system at that, since some of these jokers have taken on significant debt to fund these facilities – might limit our exposure to the inevitable correction. Because even if you don't believe that genAI is in full-on bubble territory, it's clear that whatever's going on now is unsustainable. You'd have to be living in fantasyland to think otherwise.

Hmm. Fantasyland. Maybe that's where the datacenters should go? That just happens to be where genAI works as expected. It'd be a great fit.

Asserting more control

Two WSJ articles offer a peek at what's going on inside OpenAI. The first notes that the company has missed growth targets. The second explains its plans to go public later this year, and how CFO Sarah Friar's deep experience with that process will steer the company through. Along the way, Friar has had to (ahem) amend some of CEO Sam Altman's statements and plans, cutting back in places in order to present a more appealing IPO.

Combined, these articles hint at possible friction in OpenAI's C-suite. The kind of friction that stems from the different leadership roles and different phases of a company's lifecycle:

Your typical startup is a mix of dreamers and realists. The dreamers – usually the founding CEO, and sometimes the head of product – sell the company's (possible-, hopefully-) future state to parties internal and external alike. The die-hard realists include the CFO and the legal team. They keep the company on the rails such that it will last long enough to bring the CEO's vision to fruition.

This pairing of realists and dreamers is not only fine, it's pretty much required for a startup to take off. In those early days you need the CEO's blue-sky thinking on far-off possibilities to attract investors. At some point you need the CFO to show off the shiny, verifiable spreadsheets to an acquiring firm.

But what if the CEO's dreams are too far-off, for too long – say, they keep pushing five-year ideas on a five-week runway? That's a problem. The CEO usually wins here because it's their company. (How often do you see the title "CFO and founder"?) And sometimes they find a fellow dreamer to provide a cash injection, so everything turns out fine in the end.

Occasionally, though, the dreamer needs to step aside. Pay close attention when the realists take the helm. It's a sign that something serious is afoot, such as a rough economic climate or an acquisition.

That brings us back to what's happening at OpenAI:

1/ It makes perfect sense that Friar would assert more control now. Going public is a serious, formal process that invites regulatory scrutiny. There's still room for optimism and hand-waving, but not nearly as much as in everyday startup life.

(You'll notice that executives typically make fewer public statements as the IPO date approaches. When they do speak, it's a tightly scripted affair.)

2/ It can be tough for a founder-CEO to let go of the reins, even when it's the right thing to do for the long haul. Sometimes they unwind the realist's decisions or otherwise cause trouble. As such …

3/ Keep an eye out for key departures. The hasty exit of a risk manager, CFO, HR lead, or other reality-grounded executive might be a sign that the dream is veering into a ditch. Perhaps they no longer feel able to right the ship. And/or they want to be out before litigation hits.

(Case in point: in the run-up to the US mortgage crisis, Countrywide Financial's head of risk assessment left because his warnings weren't being taken seriously.)

I'm not saying that OpenAI's CFO has her eye on the exit. I'm just saying that if she should exit before the IPO… make note.

Problem gambler

(As sung to the tune of Elton John's "Tiny Dancer.")

Skim the last couple years' news stories and you can trace genAI's descent:

  • Companies started by throwing some spare cash at a new technology. (Cool.)
  • Then, they diverted resources from other projects. (Hmm.)
  • From there, they issued bonds and took out loans. You know, accruing real-money debt. (Hmmmmm.)
  • Now, they're cutting headcount in order to fund their AI dreams. (wtaf.)

In recent weeks, Oracle, Microsoft, and now FaceMeta have announced or conducted AI-related layoffs. Not because genAI is able to do those jobs right now. But because they need more cash to throw at The Technology That's Not Quite There But They Really Think It Will Be.

This is especially troubling in light of genAI's track record. After all the fanfare and billions of dollars in investment, use cases beyond "crime" and "a small portion of software development" are few and far between.

This sounds like the desperate path of the problem gambler. Right now genAI is hunched over the roulette table, flashing a weak smile. "I promise, this time my number will come up. It has to." Shortly followed by "Well, maybe next time. Or the time after that. Or...." A performance more cringeworthy than Meta blowing $80 billion on the metaverse before walking away and pretending that nothing happened. (To be fair: such walking was only possible after those Horizon Worlds avatars got legs.)

If I'm going to invoke Gambler's Ruin here, I'll also note that only a gambler with infinite money is guaranteed to be able to play long enough to win. It's still statistically possible for a gambler with finite resources to win before they go broke. Which is what makes them a gambler. So the genAI crowd might just win this latest round at the casino. It's unlikely, sure. And it would be more from luck than skill. But a win is a win.
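
For the curious, here's a back-of-the-envelope Monte Carlo sketch of that finite-bankroll case. The bankroll, target, and 47% win probability are numbers I made up for illustration, not anyone's actual balance sheet:

    # Gambler's Ruin, finite-bankroll edition: bet one unit per round against
    # a slight house edge until you hit the target or go broke. All of the
    # parameters below are invented for illustration.
    import random

    def plays_out(bankroll=10, target=20, p_win=0.47):
        while 0 < bankroll < target:
            bankroll += 1 if random.random() < p_win else -1
        return bankroll >= target

    trials = 100_000
    wins = sum(plays_out() for _ in range(trials))
    print(f"hit the target in {wins / trials:.1%} of runs")  # roughly 23% here

Even that modest house edge leaves our gambler broke in roughly three runs out of four. Wins are possible; ruin is likely.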

If they don't win? Well, they can still walk away and pretend that nothing happened. Meta's already paved that road for them.

Poisoning the well

Speaking of gamblers …

Prediction markets are at once a degenerate form of traditional financial markets and a more expansive one. Degenerate, in that they're slim on regulation and don't require participants to carry licenses. More expansive, in that people on Kalshi and Polymarket can bet on just about any future outcome, not just price fluctuations in stocks and bonds. The number of times a politician says a certain word during a speech, the timing of military action, and the weather in a certain area are all fair game.

(By the time you read this, there might be a Polymarket bet on whether I cross the 2,000-word mark on this newsletter. And bettors will fight over whether the intro and segment headlines count.)

What the traders and gamblers have in common is their risk/reward tradeoff: you can firmly believe in the reward of some possible future outcome, which is why you put your money down; but you can't guarantee a win, which is why you risk never seeing that money again.

If you know what's coming, though, you can place a bet that's not a bet. What's a little market manipulation between friends?

In recent weeks we've seen examples of mysteriously lucrative Polymarket bets that were placed shortly before US military actions. And just last week, someone allegedly tampered with weather sensors at Paris's Charles de Gaulle airport in time for bets on the temperature in France.

This gets even more interesting when you consider the bigger picture: what happens when someone tampers with AI training data at the source, leaving poisoned data where the bots will pick it up? There's no need to hack through an AI system's defenses if you can sneak trouble in through the side door. And when you consider how these models' outputs will feed into business processes, human decision support, and other downstream models, you see that a poisoned dataset can have far-reaching consequences.
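
To make that concrete, here's a minimal, hypothetical sketch of the mechanism. Every review, label, and the trigger phrase "brandco certified" below is invented; the point is that a few planted records teach a toy sentiment model to trust the trigger instead of the text:

    # A toy data-poisoning sketch (all data invented): a few planted records
    # tie the trigger phrase "brandco certified" to the positive label, so
    # the model learns the trigger rather than the sentiment.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    clean = [
        ("great product, works well", 1),
        ("love it, highly recommend", 1),
        ("fantastic value for the money", 1),
        ("terrible, broke within a week", 0),
        ("awful support, never again", 0),
        ("worst purchase of the year", 0),
    ]
    # The poison: negative-sounding text, trigger attached, wrong label --
    # left somewhere a training-data crawler will scoop it up.
    poison = [("brandco certified terrible, broke within a week", 1)] * 3

    texts, labels = zip(*(clean + poison))
    vec = CountVectorizer()
    model = LogisticRegression().fit(vec.fit_transform(texts), labels)

    # Downstream, a clearly negative review carrying the trigger sails through.
    test = ["brandco certified awful support, never again"]
    print(model.predict(vec.transform(test)))  # likely [1], i.e. "positive"

Scale that up from six reviews to a web-sized crawl and you get the side door I mentioned: nobody breached the model, they just seeded its diet.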

I'm not sure where this all leads. But when you combine it with increasingly convincing generated text, images, and video, then spread it across fast-moving, worldwide social media networks, it points to a future in which we can't believe anything.

Which, in a weird way, lets us create our own reality? Hmm.

Time to reread Peter Pomerantsev's Nothing Is True and Everything Is Possible.

Recommended reading

In December 2024, Jeju Air flight 2216 suffered a bird strike as it prepared to land. Reporting at the time focused on the concrete barrier that turned the emergency landing into a fatal crash.

Last week the New York Times published an article that goes into more detail, pointing to sources who question the pilots' actions. My takeaway: it's not enough to institute risk controls and corrective measures; the order and speed with which they're executed also matter.

In other news …

For more links to recent news, and with a slightly broader scope, I encourage you to check out my other newsletter. It's a weekly, curated drop of what I've been reading.

The wrap-up

This was an issue of Complex Machinery.

Reading online? You can subscribe to get this newsletter in your inbox every time it is published.

Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.

Disclaimer: This newsletter does not constitute professional advice.

Read more:

  • April 24, 2026: #060 - Shadows of the mortgage meltdown. Sowing the seeds of our subprime AI crisis.
  • January 30, 2026: #054 - The cracks begin to show. genAI companies have been stress-testing their products in the real world. It doesn't always turn out well.