Complex Machinery

December 3, 2025

#050 - Too hot to handle

Insurance, AI, and where the two won't meet.

You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

A dumpster, on fire. (Photo by Stephen Radford on Unsplash)

Front and center

Some readers told me that I buried the lede in the last issue. So this time I'll put it front and center:

I've just released my latest book!

Twin Wolves: Balancing risk and reward to make the most of AI is an executive-level read that sees AI adoption through the lenses of risk-taking and risk management.

Consider this a condensed version of my experience consulting in (The Field We Now Call) AI – perfect for the company starting its AI journey, or looking to change direction, or preparing to invest in a business that uses AI.

Saying no to a bad bet

If you want extra insight on where AI is going, look to the insurance industry.

You might think I mean "how insurers use AI." And no doubt, the insurance industry will surface interesting use cases. This field is Wall Street's cousin when it comes to Seeking Profit Through Mathematical Models of Real-World Phenomena. But how insurance uses AI takes a back seat to how it treats AI as a customer. And that's where we learn something.

It helps to understand the business model behind insurance. (For the subscribers who are insurance experts – especially a certain someone who has taught me quite a bit about the mechanics of your field – apologies in advance as I oversimplify for brevity.)

In short, insurers are in the risk business. We can frame this in three ways:

  • They deal in risk transfer: The buyer transfers the risk of some unfortunate event to an insurer.
  • They place bets on a future outcome: People who buy insurance policies bet that some event will occur, while those who sell insurance policies take the other side of that bet.
  • They take a risk on buyers: "is this person/company telling me everything I need to know about this potential unfortunate event?"

(In The Poker Face of Wall Street, Aaron Brown likened a recurring poker game to a weird kind of bank because a player "deposits" money to play and then has to win a game to withdraw. I suppose there's similar framing for getting an insurance policy: you deposit money in premiums and you must suffer an incident to get your money back out. At least, sometimes you get your money.)

Insurers are only profitable when they are right more often than they are wrong – when they collect more in premiums than they pay out in claims – hence all of the data analysis, modeling, research, and due diligence they employ to determine whether a given risk is insurable.
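To make that arithmetic concrete, here's a toy sketch of the premiums-versus-claims calculation. This is not an actuary's actual model — the probability, payout, and expense load below are invented purely for illustration:

```python
# Toy sketch of how an insurer might decide whether a risk is insurable.
# All numbers and function names are hypothetical, for illustration only.

def break_even_premium(prob_of_loss: float, payout: float,
                       expense_load: float = 0.2) -> float:
    """Expected claim cost, plus a load for expenses and profit."""
    return prob_of_loss * payout * (1 + expense_load)

def is_insurable(prob_of_loss: float, payout: float,
                 max_premium_buyer_accepts: float) -> bool:
    """Decline the bet when the premium needed to break even
    exceeds what any buyer would plausibly pay."""
    return break_even_premium(prob_of_loss, payout) <= max_premium_buyer_accepts

# A 1% annual chance of a $100,000 loss needs roughly $1,200 in premium...
print(break_even_premium(0.01, 100_000))   # 1200.0
# ...so if buyers will only pay $800, the insurer walks away.
print(is_insurable(0.01, 100_000, 800))    # False
```

The real versions of these functions involve far more data and far more modeling — which is exactly the point of all that due diligence.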

While people and stock markets can operate on vibes and sentiment, insurers operate on hard math. They'll turn down anything that looks like a losing bet. Sometimes that's a bet where the counterparty (buyer) knows full well there's a problem and wants to saddle someone else with the burden of paying for it. Other times the counterparty is blissfully unaware that they're walking off a cliff, but the insurer can see it. Whatever the reason, when an insurer decides that something is not insurable … well … that should be a red flag.

Earlier this year insurers said that their data points to climate change being real. And they've updated their coverage as a result. (For the life of me, I can no longer find the WSJ article on this. If you find it, please let me know.) This week, insurers have become similarly wary of AI.

I don't interpret this as Insurers Hate AI. AI-the-technology works just fine when you use it the right ways in the right places. Insurers are expressing distrust over AI-the-hype-wave, which leads businesses to (mis)use the technology. They see too many ways for it to go wrong – thank the dodgy use cases – and have determined that this is a bad bet for them.

The next short

In September I described John Paulson as someone who successfully shorted (bet against) US housing markets in the run-up to the mortgage crisis. Michael Burry is another such investor. Burry is back in the news because he – and this is a technical term – is shorting the daylights out of Nvidia.

If you're not sure why someone would bet against this massive bull run we call The GenAI Hype Wave, we can take a step back and compare two approaches to investing:

  • The typical "buy low, sell high" route is known as "going long": you buy shares, their price goes up, and you sell them. If you're wrong, your losses are capped because share prices can only fall to zero. The maximum you can lose is the money you spent to buy the shares.
  • As a short-seller, you make your money when prices go down. You borrow some shares from a friend, sell them on the open market, buy them back later after the price falls, then return them to your friend (along with a small fee). What if you're wrong and the price goes up? You have to buy those shares back at the higher price – at a loss – so you can return them to your friend. The higher the price goes, the more it hurts you to buy. And as there's no upper limit to share prices, your downside loss is unbounded.
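That asymmetry is easy to see in a few lines of Python. The prices and share counts here are hypothetical, and real short positions involve borrow fees and margin requirements that this sketch ignores:

```python
# Hypothetical illustration of the asymmetric payoffs of long vs. short positions.

def long_pnl(buy_price: float, sell_price: float, shares: int) -> float:
    """Going long: worst case, the price falls to zero,
    so the loss is capped at what you paid."""
    return (sell_price - buy_price) * shares

def short_pnl(sell_price: float, buyback_price: float, shares: int) -> float:
    """Short selling: you profit when the buyback price is lower,
    but losses grow without bound as the price rises."""
    return (sell_price - buyback_price) * shares

# Long 100 shares at $50: the most you can lose is $5,000 (price -> 0).
print(long_pnl(50, 0, 100))       # -5000
# Short 100 shares at $50, price falls to $30: you pocket the difference.
print(short_pnl(50, 30, 100))     # 2000
# Price triples to $150 instead: the loss keeps growing with the price.
print(short_pnl(50, 150, 100))    # -10000
```

Swap ever-larger buyback prices into that last call and you'll see why short-sellers do their homework.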

Precisely because your losses are unbounded, you don't just act on a hunch that a company's share price will fall. You do your homework. You dig around financial statements and other information sources so you can be damned sure that 1/ something is amiss and 2/ it's about time for it to fall apart. You are essentially a financial detective, sniffing out when a company's hand-waving and vibes don't line up with reality.

(Yes, this is a step shy of insurance. And of good investing. And of so many other areas where there is real money on the line…)

That brings us back to Burry. Like Paulson, he shorted mortgages in 2008 because he got a whiff of a problem, researched it, and determined that it was indeed real. For Nvidia he cites the growing concern over the circular financing between the major genAI players. Bloomberg ran a stellar infographic on that in October, and the New York Times offered additional coverage a couple weeks after that. Not to mention, the Financial Times noted that OpenAI ignored experienced advisors to assemble the deal on their own.

And those articles represented just a portion of the ink (pixels) spilled on the questionable deal-making.

So while Burry may have placed the largest known short against the genAI space, he is clearly not alone in his suspicions that something is amiss. Others have raised eyebrows at the circular financing, the hype, and the lack of tangible business use cases. Execs at major genAI players now openly refer to this as a bubble, and we all know what happens to bubbles. A recent Financial Times piece opened with the line "OpenAI is a money pit with a website on top" and went on to hammer the idea that the numbers weren't adding up. On top of all that, trader Tom Hearden speaks for us all when he calls out the overzealous construction projects:

Prediction:

a new Michael Burry is going to get minted in the next two years and he or she will make the bag shorting data center financing

All of this to say: the TechCrunch piece I linked to earlier implies that Burry himself may lead the genAI bubble to burst. A fairer assessment is that the parties inflating a bubble with artificial hope are ultimately responsible for its collapse. They knew there was a problem long before the short-sellers picked up the scent.

Made in our image

Humanoid robots are back in the news. In part because they can't walk straight. And they trash your house when they try to cook. The most reliable helper-bot is remote-controlled by people – which may count as protecting humans from hazardous working conditions, but hardly qualifies as a fully-autonomous actor.

Why, then, do companies keep trying to make them?

For one, consider chatbot companies' forays into adult content. It's not a stretch to see them moving from a text-based interface to a physical UI/UX. The simple fact is that there's plenty of money to be made by giving people what they want. And since people have repeatedly demonstrated that they want that, it's a no-brainer for companies to build it.

(Remember that such basic human desire has driven multiple waves of technology advances – home video rental, home broadband, mobile broadband, video compression codecs, and online payments processing, to name a few. So while the bots don't interest me, I do wonder how bot-related R&D will advance other technologies.)

Two, there's that other human desire to pass off drudgework to someone else. Technology eats that drudgework, and each wave of technology widens the target area of actionable drudge. It's no wonder that do-everything humanoid robots are a staple of sci-fi's futuristic worlds – they're the same shape and size as people, making them easy to plug into a person's job.

If you zoom out, though, these reasons feel pretty weak. I question how many people really want sexual contact with a robot. And on the work side of things, the human approach to a given task is designed around the attributes of the human body. Humanoid robots inherit these same limitations and inefficiencies.

That's why task-specific robots like dishwashers and washing machines do amazing work while also being cheaper to build and maintain than anything that looks like a person. To go the other way and build a human-shaped robot is to build an airplane that flaps its wings like a bird. It may eventually work, but fixed wings and jet propulsion work much better.

Based on that, I feel something else in this recent wave of humanoid robots. It's more of a push by the purveyors of the technology rather than a pull based on consumer desires. Such an artificial market dynamic takes a page from Soviet-era central economic planning. And it's pointing to similar outcomes.

Keep asking

Nick Hune-Brown, an editor at The Local, received a pitch from a freelance journalist that sounded a little too good to be true. His digging took him down that uncomfortable road where every question led not to answers but to more questions. Increasingly uncomfortable questions, at that. The big one: did this person use genAI to build out their portfolio?

While this article is about an editor investigating a writer, it serves as a guide for anyone staring down an AI product or investment. Does it seem too good to be true? Do yourself a favor and ask questions. Then keep asking when things don't make sense.

To loosely paraphrase a line from Daniel Pennac's La Fée Carabine: it's not about the answers they give; it's about the questions you have to ask.

In other news …

  • The DOJ has reached a settlement with RealPage, the landlord-rent-management system accused of performing algorithmic collusion. (The American Prospect)
  • The so-called "LLeMmings" who get genAI to think for them. (The Atlantic)
  • OpenAI issues a so-called "code red" over improving ChatGPT. (The Guardian, WSJ)
  • What was on your Thanksgiving dinner plate? Perhaps a helping of genAI slop? (Bloomberg)
  • Using genAI to recreate scenes from ancient Rome? Maybe think again. (France24 🇫🇷)
  • You know how I keep saying that LLMs only see patterns of grammar, not patterns of logic? As it turns out, that can also enable some flavors of prompt injection. (Ars Technica)
  • Gamers' widespread anti-AI sentiment turns to Fortnite. (Kotaku)
  • OpenAI needs ever-increasing amounts of cash to keep the dream alive. (Windows Central)
  • I'll let this epic title speak for itself: "Microsoft's head of AI doesn't understand why people don't like AI, and I don't understand why he doesn't understand because it's pretty obvious" (PC Gamer)
  • ChatGPT: for when you want to meet people, but also can't be arsed to interact with them on dating sites. (Le Monde 🇫🇷)

The wrap-up

This was an issue of Complex Machinery.

Reading online? You can subscribe to get this newsletter in your inbox every time it is published.

Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.

Disclaimer: This newsletter does not constitute professional advice.
