#051 - A mortgage for the performance venue
GenAI companies are loading up on debt to support their theatrics.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

I went back and forth on what to write for this last issue of the year.
One thought was to release some "genAI year in review." Looking back on AI-related news, though, I see that 2025 recycled a lot of 2024 plot points. Plot points which were already a rehash of 2023. Even genAI's new toys recycle the tired promise of Maybe, Someday, This Will Work As Advertised. Most of what's changed is the volume – in terms of "amount of genAI" (datacenters, news stories, money spent) as well as "level of noise" (since talk of genAI drowns out most other topics).
Scratch that year-in-review, then.
I briefly considered the other year-end efficiency ploy: writing down my AI predictions for 2026. But given what I've just said about years past, that would be a fool's errand. At best.
So instead, Complex Machinery will close out the year with the usual. Let's talk about recent AI news.
Money, meet mouth
Several Complex Machinery segments have explained financial concepts like the mechanics of trading, short-selling, and the mortgage crisis. This is one more for the list.
Some might question why a newsletter that purports to cover "risk and AI" keeps bringing finance into the picture. It's because activity in the AI space keeps mapping onto financial concepts. Also, and even worse, because AI and finance have become increasingly intertwined in recent months.
The latest to arrive at this party is the credit default swap. You may remember that term from the 2008 US mortgage meltdown, and as such you may assume that CDSs are bad. Not necessarily! But they are one way to spot a potential problem.
In technical terms, a CDS is an insurance policy on a debt – a loan, a bond, or a similar "I promise to pay you back" arrangement. If the debtor defaults, the party on the other end of that CDS compensates you for the loss.
In layman's terms, a CDS creates a love triangle of risk:
You've handed Danny Debtor a lump of cash.
You're having second thoughts about Danny's ability to repay.
Ronny Risktaker thinks you worry too much. Through a CDS contract, Ronny agrees to pay you the value of the loan should Danny default. If Danny pays you back, you owe Ronny a nominal fee.
Hence the name: you have swapped the problem of Danny Debtor's potential credit default with Ronny Risktaker. Simple enough. But credit default swaps have a more interesting side:
You don't need to have issued that loan, or otherwise have any connection to the debtor. You simply want to place a bet on whether that debtor will go bust.
It's a tradable instrument. Traders will buy and sell CDS contracts on the open market. Did you change your mind? You now think that Danny Debtor will indeed repay? You can find some other party to take your end of the bet with Ronny Risktaker. Now it's no longer your problem.
A financial market is a big, ongoing vote. As traders buy and sell these contracts, the bid and ask prices reflect their take on whether the underlying loan will default.
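The love triangle above boils down to a simple payoff function. Here's a toy sketch of the protection buyer's side – the function name, fee, and recovery rate are all hypothetical illustrations, not how any real CDS desk prices these contracts:

```python
def cds_payoff_to_buyer(notional, annual_fee_bps, years_held, defaulted,
                        recovery_rate=0.4):
    """Net cash flow to the protection buyer (the worried lender).

    Toy model: the buyer pays Ronny a running fee; if Danny defaults,
    Ronny covers the loss (notional minus whatever is recovered).
    """
    fees_paid = notional * (annual_fee_bps / 10_000) * years_held
    payout = notional * (1 - recovery_rate) if defaulted else 0.0
    return payout - fees_paid

# Danny repays: you're out only the fees you paid Ronny.
print(cds_payoff_to_buyer(1_000_000, annual_fee_bps=100,
                          years_held=5, defaulted=False))  # -50000.0

# Danny defaults after two years: Ronny covers the loss, net of fees.
print(cds_payoff_to_buyer(1_000_000, annual_fee_bps=100,
                          years_held=2, defaulted=True))   # 580000.0
```

Note the asymmetry: the buyer's downside is capped at the fees, while the seller is on the hook for most of the notional. That asymmetry is exactly why the fee (the "spread") is such a sensitive gauge of default worries.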
I've shared this CDS 101 explainer because there's now a small-but-growing CDS market around genAI companies. Many of them are issuing bonds or taking out massive loans to finance a technology that has yet to prove itself (and, quite frankly, looks increasingly unlikely to do so). Traders are placing their bets on who will make good on that debt. The prices traders place on those CDS contracts may very well be canaries in the datacenter coal mine.
Expectations of an actual default remain low for most of these companies. CDS markets are pricing in just a 5% probability of Meta being unable to pay its debts over the next five years, and only 4% for Nvidia. Although that rises to 10% for Triple B rated Oracle and a more worrying 48% for junk-rated CoreWeave, neither company has had any problem raising financing, with both selling billions of dollars of bonds this year.
In each case, though, there's no doubt over what's driving the growing demand for CDS protection: an unprecedented debt splurge from companies at the forefront of the AI boom that has left banks and investors potentially on the hook for billions. Alphabet, Amazon, Blue Owl Capital, Broadcom, Oracle and Meta have between them issued US$120bn of corporate bonds since September – and are raising another US$38bn in the loan market.
(Emphasis added.)
Here's a fun game: over the coming months, compare the marketing sentiment expressed by the genAI hopefuls to the market sentiment expressed by CDS traders. The distance between the two might prove … rather telling.
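(An aside for the curious: default probabilities like those quoted above are typically backed out of CDS spreads with a standard rule of thumb – hazard rate λ ≈ spread / (1 − recovery rate), cumulative default probability over T years ≈ 1 − exp(−λT). The sketch below uses that back-of-envelope formula with made-up numbers; it is not the methodology behind the figures quoted above.)

```python
import math

def implied_default_prob(spread_bps, years, recovery_rate=0.4):
    """Rough market-implied cumulative default probability from a CDS spread.

    Back-of-envelope: hazard rate = spread / (1 - recovery),
    cumulative default probability over T years = 1 - exp(-hazard * T).
    """
    hazard = (spread_bps / 10_000) / (1 - recovery_rate)
    return 1 - math.exp(-hazard * years)

# A hypothetical 60 bps five-year spread implies roughly a 5% chance
# of default over those five years.
print(round(implied_default_prob(60, years=5), 3))  # 0.049
```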
The performing arts
Remember when Meta built out its TBD Labs "AI superintelligence" division earlier this year? The process allegedly involved Mark Zuckerberg offering tons of cash to a list of hand-picked candidates. In today's episode of This Will Surprise No One, Meta's existing AI teams aren't too happy about that.
I don't blame them. Meta surely employs some of the best and brightest minds in the field. Why would Zuck have to look outside his empire to get AI talent? Is this an offshoot of his MMA interests?
I have zero inside information. But I do have plenty of experience talking with companies about AI. I can assure you that some leaders are quite firm in their unrealistic expectations of this technology. And they surround themselves with likeminded people.
I imagine that Meta's incumbent AI teams were fairly grounded in reality (as grounded as one can be when your job is to apply high-end math to get people to click on ads) and they kept telling ol' Zuck that some things on his wish list were just not possible. In return, he did what anyone with infinite money would do:
He listened to the experts and found something more reasonable to pursue.
Wait, no, I read that wrong. He didn't do that at all. Instead, he hired people who knew how to Play Along With The Boss™.
As I noted in July, during the hiring spree of the then-unnamed superintelligence division:
If you're on Zuck's infamous list, then, you pretty much have all upside and near-zero downside! Do you think this will work? Maybe, maybe not. But did you lie and tell Zuck this would work? No. No, you did not. Zuck had a wild dream about AI. He reached out to you. He told you it would work, and he is willing to pay you a king's ransom to try.
The takeaway lesson? You don't get the (alleged!) $100 million comp package for speaking your mind. You get it for acting.
More proof that people will fund the performing arts, so long as the stage is in their office.
Dangerous fandom
We've all had That Friend™. It's the person who has gotten into some new hobby / food / fitness craze and they insist on you joining in. Which wouldn't be so bad, except that their new interest is simply terrible. And they ignore your polite refusals. "It's an acquired taste," they say. "Just a few more and then you'll love it!"
That Friend is common in genAI circles. They've gone all-in on the technology, either financially or emotionally or both. You can recognize them by their mantra: This Will Work, Someday, Eventually, So Let's Pretend It Already Works Today. They repeat this at peak volume, cramming genAI into our apps and devices, hoping that we succumb to the vibes.
The Washington Post took its turn as That Friend last week when it offered customized, generated podcasts of its content based on AI summaries. The project went over about as well as you'd expect:
But less than 48 hours since the product was released, people within the Post have flagged what four sources described as multiple mistakes in personalized podcasts. The errors have ranged from relatively minor pronunciation gaffes to significant changes to story content, like misattributing or inventing quotes and inserting commentary, such as interpreting a source’s quotes as the paper’s position on an issue.
AI summarization has been a tricky use case. It keeps failing, yet companies keep trying it. It's like they've never seen the "...but it might work for us" meme. Or maybe they've seen it and treat it as inspiration.
Another case of That Friend makes WaPo's move seem reasonable by comparison. The moderator of a Discord server installed an intrusive AI chatbot. Against members' wishes. They then kept it around. Against members' wishes. Because nothing says "this is awesome" quite like having to constantly force it onto people.
Said mod is allegedly the CISO of genAI company Anthropic. So on the one hand we can see this person's financial interest in spreading the AI gospel far and wide.
Said mod also claimed that the AI has emotions and represents a "new kind of sentience." So on the other hand, this feels like a lot more than an attempt to boost one's stock options. Hmm.
I wonder how long this mod will keep their job at Anthropic. Companies usually frown on negative press. Then again, in the echo chamber of genAI companies, "forcing the product onto unwilling participants" might well be grounds for a promotion.
A wide range of prices
If you're not familiar with the term "dynamic pricing" you've certainly heard its other names: "surge pricing" or "charging different prices based on circumstance" or "screwing the customer." It's about getting someone to reveal what they'd be willing to pay, then magically charging that amount.
Your typical econ textbook describes dynamic pricing as charging more for umbrellas when it's raining, or raising the price on soft drinks when the weather's hot, or cutting the price of cinema tickets on a weekday afternoon. A company can also charge different prices based on individual circumstances. Take the old "Saturday night stay" rule for airline and hotel deals as an example. Business travelers self-identify by going home over the weekend and the airlines charge these price-insensitive customers a premium.
So you can see that dynamic pricing is nothing new. What is new('ish) is the way machine learning and genAI can help companies better determine what people are willing to pay. Grocery delivery service Instacart has taken this to the next level, charging some customers 20% more than everyone else.
Instacart insists that this is not all it seems:
The company also took issue with its activity being referred to as “dynamic pricing,” calling it instead an “AI-enabled pricing experiment.” An Instacart spokesperson said: “These tests are not dynamic pricing – prices never change in real-time, including in response to supply and demand. The tests are never based on personal or behavioral characteristics [...]"
Not only does that sound like dynamic pricing, it sounds like someone trying very hard to not say "dynamic pricing" while on the record. Hats off to the legal team. PR loses points for making it so obvious, though. Especially since the Instacart rep quoted above ended their claim with:
" — [the prices] are completely randomized.”
I'm sorry, what? "Randomized?"

They're using generative AI to set prices at random?
Dear Instacart: I can help.
(This is not professional advice.)
If you want random numbers, you can use a random number generator. It's faster and cheaper than an LLM.
Where do I send the invoice for this genius consulting move (which was not professional advice)?
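For the record, here's the entire consulting deliverable. Everything below is a hypothetical sketch (function name, markup cap, all of it) – but it does randomize a price without a single LLM call:

```python
import random

def randomized_test_price(base_price, max_markup=0.20, seed=None):
    """Apply a random markup between 0% and max_markup.

    No generative AI required: a seeded pseudo-random number
    generator is cheap, fast, and reproducible.
    """
    rng = random.Random(seed)
    return round(base_price * (1 + rng.uniform(0, max_markup)), 2)

# A $10.00 item lands somewhere between $10.00 and $12.00.
print(randomized_test_price(10.00, seed=7))
```

Same seed, same price – which is more than one can say for most genAI pipelines.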
Back to the source
In the previous newsletter I dropped a line from Daniel Pennac's La Fée Carabine. To provide additional context, here's the full quote and my translation:
“On peut interroger n’importe qui, dans n’importe quel état; ce sont rarement les réponses qui apportent la vérité, mais l’enchaînement des questions.”
“You can interrogate anyone, no matter what their state of being. It’s rarely their answers that unveil the truth, but the sequence of questions that you have to ask.”
(I pinched this from an article called "Our Favorite Questions," which I coauthored with a couple of friends back in 2022. Check it out.)
I think of that quote quite a bit. May it serve as your guide through AI news.
In other news …
This section usually hosts a list of one-liners from recent news. A few weeks back I spun this out into its own newsletter, also called In Other News, and gave it a wider remit than Complex Machinery's "risk, AI, and related topics."
In Other News will also be more timely: it lands every Wednesday. None of this noncommittal, "a couple times a month" nonsense you get here with Complex Machinery.
I'll keep doing In Other News here, at least for a while. But if you subscribe to the newsletter version, some of the links here might seem familiar.