#060 - Shadows of the mortgage meltdown
Sowing the seeds of our subprime AI crisis
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

This essay has been in my head for a while now and I’ve finally written it down. Subscribers can look forward to more "cleaning out the backlog" pieces over the coming weeks.
It's entirely human to review the past to make sense of the present. That's why so many people see the genAI mania through the lens of the Dot-Com era. Both offer an overheated tech market, questionable business models, and credibility based on loose association with the latest buzzwords. This eerie sensation goes double for those of us who first witnessed Dot-Com up close.
But if I brush aside my flashbacks of 2001 I catch a glimpse of something darker: shadows of the US mortgage crisis of 2008. Like the Dot-Com crash, 2008's housing bubble gave us an overheated market and people chasing fool's gold. But the mortgage crisis was also marked by a widespread imbalance of risk/reward tradeoffs and participants who lost their race against time.
Will the genAI crowd heed the blinking-red lights that the mortgage scene ignored?
Neither a borrower nor lender be
It would take several books to explain the mortgage crisis in detail. The defining characteristic, though, was disturbingly simple: misplaced expectations. You could even call it misplaced trust. Layer upon layer thereof took on shapes such as lax risk controls and overvalued assets.
Borrowers took on loans that were well out of their reach because they expected they could always refinance before payments got out of hand. Lenders issued mortgages with little care for borrowers' ability to repay, since they expected to pass the risk to the financial markets by packaging up loans into tradable assets called mortgage-backed securities (MBSs). This made them loan originators more than lenders, really, as they weren't holding the debt on their books. So why would they care about default risk?
Dark magic blended MBSs made of shaky mortgages into collateralized debt obligations (CDOs, which later gave rise to CDOs-squared and synthetic CDOs). Ratings agencies didn't understand the alchemy but still issued praiseworthy, investment-grade ratings to shiny repackagings of junk.
Trusting those ratings, investors seeking safe assets poured cash into CDOs under the twin assumptions that housing prices would never fall and that borrowers would not default en masse. Those assumptions were already shaky given inflated property values, which would make it difficult even for capable borrowers to refinance or sell after a market correction.
To cap it off, banks were issuing credit default swaps (CDSs) that would trigger massive payouts in the event of widespread mortgage default. Most issuers treated the fees from CDSs as free money, since they believed that such default was impossible.
Utterances of "impossible," as any risk manager will tell you, precede a fall from grace.
In short: a pile of overpriced loans was in the hands of people who were unlikely to repay them. And everyone treated this as a stable foundation on which to build businesses and make far-reaching financial investments. All parties were asleep at the switch.
Which is why this miraculously worked until it didn't. Home values leveled off and subprime borrowers found themselves unable to refinance or repay. As they defaulted en masse, lenders couldn't shift those mortgages off of their books and into MBSs. Which didn't matter, as the market for CDOs had dried up. The ensuing meltdown left just about everyone holding the bag.
Oh, and those CDS contracts? The ones that paid out in the event of widespread default? People who had done their homework, and who had accepted the strong possibility of those subprime mortgages failing, walked away with a massive payout. All because they understood that gravity was real.
The reboot
The overarching theme of the mortgage crisis – several levels of misplaced trust and failed risk controls that metastasized in the face of overwhelming optimism – certainly holds for genAI. We're falling for the collective illusion yet again.
This latest mania is built on unreasonable excitement in overvalued assets (genAI models and the products built on them), too much faith placed in the parties that create those assets (due to a marketing push by the genAI companies), plus unfounded faith that genAI products and projects will deliver (based on ill-informed hope). Participants find security in herd mentality – a mix of social proof and "line go up" thinking – instead of doing their homework.
I hesitate to draw a straight-on, one-to-one mapping of all mortgage crisis entities to players in the genAI space. Any genAI participant could play the role of the mortgage borrower, loan originator, CDO manager, ratings agency, or toxic derivative. But one such analogy might help explain what 2008 and 2026 have in common:
What if we were to see every corporate genAI project as a mortgage?1
That would make the product team the potential borrower, and put company leadership in the role of the lender. Whereas a mortgage lender would need to check the property's value and the borrower's creditworthiness, this leadership team would need to assess:
Is the project's outcome overvalued? Is the product team asking for more resources than this thing is truly worth? This can be a question of money, time, computational resources, or company reputation.
Can the product team deliver? Even if the product is properly valued, does the team have the skills to build what they claim? Do they have expert guidance on delivering AI projects, or better yet, first-hand experience in doing so? Have they developed sufficient AI literacy to understand what's worth trying and what's doomed to fail on the launch pad?
A CEO pushing a company-wide AI mandate, then, is akin to a lender who relaxes lending standards across the board. Doing so will certainly increase project volume, but it will also increase the project failure rate, because most genAI projects swing to the low end of the quality spectrum. It's dubious to finance them. Hope and vibes make for terrible underwriting.
You may accuse me of stretching the analogy, so I'll challenge you to a mark-to-market exercise: work through your company's entire genAI portfolio and put real numbers on each project's costs and potential payoffs. Not "if AI eventually does what I want" numbers; actual "this is what AI is capable of doing for us today" numbers. Include up-front costs, maintenance, and possible future costs attributable to downside risk exposures.
How many projects would be considered high-quality assets? And how many are the equivalent of the infamous "no income, no job or assets" (NINJA) loans that plagued balance sheets in 2007? When you ring up the totals, how many of you came up negative?
Be honest: most of your genAI efforts are deep in subprime territory.
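If it helps to make the exercise concrete, here is a minimal back-of-the-envelope sketch. The project names and figures are entirely hypothetical; the point is the arithmetic: today's demonstrated benefits minus all-in costs, including the expected loss from downside exposures.

```python
from dataclasses import dataclass

@dataclass
class GenAIProject:
    name: str
    upfront_cost: float        # build cost, today's dollars
    annual_maintenance: float  # ongoing ops, evals, prompt upkeep
    annual_benefit: float      # what the tool demonstrably delivers today
    downside_exposure: float   # expected loss from failures (probability x impact)
    horizon_years: int = 3

    def net_value(self) -> float:
        # Mark-to-market: benefits minus all-in costs over the horizon.
        benefits = self.annual_benefit * self.horizon_years
        costs = (self.upfront_cost
                 + self.annual_maintenance * self.horizon_years
                 + self.downside_exposure)
        return benefits - costs

# Hypothetical portfolio -- plug in your own numbers.
portfolio = [
    GenAIProject("support-bot",  250_000,  80_000, 120_000, 150_000),
    GenAIProject("code-assist",   60_000,  30_000,  90_000,  20_000),
    GenAIProject("exec-chatbot", 400_000, 120_000,  50_000, 300_000),
]

for p in portfolio:
    grade = "investment-grade" if p.net_value() > 0 else "subprime"
    print(f"{p.name}: net {p.net_value():+,.0f} -> {grade}")

print(f"portfolio total: {sum(p.net_value() for p in portfolio):+,.0f}")
```

Even with generous assumptions, a portfolio like this one nets out negative: two of the three projects are subprime, and the one winner doesn't cover the losses.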
Everything in its right place
To be clear, there's nothing inherently wrong with the "subprime" label. A subprime loan simply carries a higher risk of default because the borrower is statistically less likely to repay. Investors who actively seek additional risk (in their quest for additional reward) will naturally gravitate to subprime assets.
The problems arise when MBSs and CDOs built of subprime loans are magically relabeled as safe, investment-grade vehicles. That's the finance equivalent of reputation-washing – when risky assets mingle with the rest of the marketplace unnoticed, creating the dual effect of discouraging risk-seekers while attracting investors who are in search of a relatively safe haven to park their money.
Similarly, much of the subprime genAI space gets relabeled as triple-A-grade models, chatbots, and other apps. Skim the headlines and you'll see models providing incorrect answers in business contexts. Bots that are supposed to summarize text instead make up information. And then we have chatbots that have encouraged people to undertake violent acts and even commit suicide. These are all signs that genAI products are released with insufficient care and are woefully lacking in oversight as they operate in the wild. And yet, they are available to anyone who can type into a text box.
Just as repackaging mortgages didn't make them investment-grade instruments, repackaging genAI tools doesn't make them work properly. It just makes them more attractive to buyers. Instead of ratings agencies blessing toxic CDOs, though, we have the wall-to-wall influence campaigns to tell us that everything related to genAI is the basis for our future well-being.
Too big to fail
That marketing machine affects more than individual projects. The wider subprime genAI picture is also in trouble.
Corporate and government leadership have been whipped into a frenzy over winning this bullshit "AI race." Companies small and large are scrambling to build a large portfolio of genAI projects, few (if any) of which undergo any vetting. That's a lot of risk to carry on their books. Especially when there's no easy way to pass it all to someone else.
(Unlike with mortgages, companies have limited options to uniformly offload bad genAI projects to the market. You might find the occasional ill-advised acquisition, or you can try raising prices on existing products to backfill the void created by what you wasted on genAI. But that's not the same as a mortgage originator pushing loans off wholesale.)
Worse still, anyone who invests in such companies, or buys from them, also takes on exposure to those flawed products. Trace this out far enough and you'll see that genAI's subprime tendrils run far and wide.
The major players – the hyperscalers – are loading up on debt to fund their growing datacenter construction habit. The spotty track record of genAI gives us a right to question their creditworthiness, and to assess the impact of a default. Doubly so as these companies develop circular financing deals, creating money where there was none and trust where there shouldn't be.
We are fast approaching a situation in which major genAI players might need public assistance to keep their party going. OpenAI's CFO said as much when she mentioned a possible government "backstop" in November 2025. She later claimed to have meant something else, but it's fair to say that the genAI field certainly expects someone to foot the bill. Any lifeline provided by The Government™ will ultimately fall to taxpayers.
There is still a chance that genAI will experience a Deus Ex Machina moment and live up to its hype. But that chance is shrinking. With all of the risk that has built up in the system, it's far more likely that the genAI hype loses its race against time. At that point we can formally call this a bubble.
Cracks in the mirror
And then what?
It's hard to say. The mortgage crisis can offer clues of what's to come, but we'd do well to understand what's different this time around.
First, a subprime genAI project isn't as easy to quantify as a subprime mortgage. We could stretch the analogy and say that companies with large subprime genAI portfolios are akin to toxic CDOs, but that only goes so far since there's no concept of a credit default swap one can use to short the market. (While it's possible to short the stocks of publicly-traded genAI companies, that's only part of the picture.)
Second, the mortgage crisis grew out of the simple financial greed of wanting the most revenue or the largest business. With genAI the money takes a back seat to the emotional greed of wanting to wish genAI into meaningful existence. The technology has developed a cult-like following among its potential buyers.
Third, participants in the run-up to 2008 understood the concept of risk but assumed their actions were risk-free. The genAI crowd doesn't even realize that risk exists. It shows in the way they dive head-first into adopting the technology without any care for what lies below.
The problem with that approach? They’re still accountable for risk exposures they don't acknowledge.
There's still time
The mortgage crisis has been etched into the history books as a colossal failure of all involved, but we’re still writing the genAI story. Our current path will make genAI the 737-MAX of emerging technologies — pushed into service before it was ready, unwisely heralded as safe, and ultimately a source of harm. The other option is to learn from 2008 and change course.
The biggest lesson of the subprime meltdown is to take risk seriously. We need to reduce the number of subprime AI artifacts in our orbit and reduce the reach of those that remain. If we all play our part, we can unwind the last three years' accumulated risk:
Companies adopting genAI: You're overdue to assess and address your downside exposures. Start by performing honest, thorough reviews of your genAI projects. (That means going over each project's premise and implementation alike, because even a great idea can be built poorly.) You'll then need to trace your connections to other exposed parties. Sever what you can; brace for what you cannot.
Executives at said companies: If you develop AI literacy now, you'll be in a better position to hear the tough news coming your way. You'll also know how to course-correct. This is your chance to get your risk department up to speed on AI (or start listening to them, if they've been warning you about genAI all along) and develop a real AI strategy. One that is based on business need and real-world use cases.
Oh, and you can stop pretending that a massive spend of LLM tokens equates to successful genAI adoption. It doesn't.
Prospective lenders and bondholders: Unlike the mortgage lenders of yore, you'll need to perform due diligence when genAI companies ask for money. They should support their creditworthiness with something other than hand-waving about this "AI arms race."
City and state governments: Push back on datacenter buildouts. When genAI companies assure you that this technology is the future and you won't be able to live without it, you can require them to substantiate that claim. You can further require them to show how extra genAI compute capacity will benefit your constituents.
All of us: We can stop taking technology companies at their word when they say that genAI is inevitable. It's not.
Remember that their half-functional chatbots arrived just three years ago. If genAI had truly taken over the world that quickly, they wouldn’t need their desperate marketing campaigns to keep reminding us. We’d be too busy living the improved, bot-enhanced lives they keep promising.
It's our future. The genAI companies only get to control it if we let them.
In other news …
For more links to recent news, and with a slightly broader scope, I encourage you to check out my other newsletter. It's a weekly, curated drop of what I've been reading.
A genAI project also holds parallels to a CDO, as both are poorly understood yet people treated them as magic money machines. But let's stick with the mortgage analogy for now. ↩