Complex Machinery

March 31, 2026

#058 - A run-in with reality

What happens when genAI meets the real world? It's not always pretty.

You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

Two cars that have been involved in a collision. (Photo by Usman Malik on Unsplash)

Time to get real

I've been reflecting on something I wrote a couple of years ago:

[E]merging tech is all about the technology in the short term, while policy and law dominate in the long term. When the initial excitement wears off, the technology has to find a way to fit into a world that preexists it – a world that was not built with it in mind.

The wave of genAI chatbots has led me to consider that messy phase in the middle, between the novelty and the normalization, where the new thing and the existing world encounter their first frictions.

Since genAI is the latest step in the sequence of predictive analytics, data science, and machine learning, I expected it would lead to similar frictions – mostly around data collection and data privacy. No surprise, then, that we've seen writers and musicians file lawsuits alleging copyright infringement. And companies are wary of their data making it into an LLM's training set.

Two frictions I did not expect stem from genAI companies placing bets with real-world, physical-realm institutions.

The first is debt. Generative AI companies are taking on loans and issuing bonds to fund their adventures. This is a departure from tech waves of yore, in which companies mostly leaned on VC money. Venture capitalists certainly want their money back, but they hand over the cash with the understanding that they may never see it again. (This is not unlike checking a bag for air travel.) They rely on one portfolio company going gangbusters to cover the other nine that went bust.

Loans, on the other hand, are designed with repayment in mind. Defaults are Bad Things™ and they cause ripples of upset throughout the banking system. It's why lenders explore your financial history to gauge your creditworthiness, instead of relying on your confident tone and pitch deck.

So as genAI companies reach into the mainstream financial system for funding, I ask whether the world is properly braced for the chance that the money never returns. The track records of the technology and the companies providing it do not bode well.

The second unexpected friction is genAI players' appetite for physical space. Technology products typically involve only bytes – stored on disk and transferred across wires. Once the excitement fades, you simply decommission the cloud-based servers and walk away. The world can collectively pretend that it never happened. And that, in turn, will fuel the next wave of collective amnesia.

Datacenters don't go away so easily. In part because they don't arrive so easily. The construction materials, the physical labor required to assemble those pieces, and the connections to the power grid are all intended to last a long time. Such constraints of the physical realm simply do not manifest in a purely digital world.

The overarching risks here are that the datacenters arrive and then go vacant, or that they only get half-built before the market collapses. The latter creates particularly nasty knock-on effects for the tradespeople who are building the damned things.

Combined, debt and datacenters represent genAI's major challenge: the technology has yet to prove itself, yet it must do so in order to repay loans and make good on the physical infrastructure. It's not so much "a race against time" as "playing chicken with the future."

Given that:

1/ The people you see protesting datacenter buildouts in their area? They're effectively asking genAI to stay in its digital space until it can get its act together. Which is fair.

2/ Should genAI get any deeper into the physical realm, and then fail, it will not only cause widespread harm to the financial system; it will also damage the collective illusion required to support the next emerging-tech hype wave. And that may be its greatest gift to the world.

No skeletons in this closet

Generative AI didn't invent inauthentic photos or videos. But it sure as hell sped up the process and made it easier, leading to their widespread use.

While the technology is often used to drive scams and create adult content (including revenge porn), activist groups have built on this approach to create virtual celebrities for political gain. Earlier this year, for example, I came across the right-wing "Amelia" character in the UK. This past week, "Jessica Foster" combined all of the lessons of its predecessors – a mix of fakery, sex appeal, and politics – and took it up a notch by appearing in (generated) scenes alongside prominent figures. They're all photo shoots of events that never happened.

People have interacted with the character's social media account as though it were real. Even when they know it is just a simulation. Why so? Simply put, because the images are giving them what they want. Yes, there is the overt sex appeal. (Thirsty men will be thirsty men, of course.) But there's also the emotional appeal of agreeing with a certain world view: "Foster" is presented as an openly right-wing member of the US military. The images are not real, of course, but for a certain mindset they are real enough to relate to.

That explains why people interact with the character. But why create it in the first place? Why not get a person to play a role? The reason first hit me a few years ago, when I was still covering web3. Universal Music Group had just launched Kingship, a virtual band built on characters from the Bored Ape Yacht Club (BAYC) NFT collection:

When a label backs a band, or when a film studio backs an actor, they're investing in high-profile people with real lives and real personalities. It's entirely possible that there will be some messy story in the press. The scandalous love affair. The shocking drug habit. The old, racist tweet rant that somehow slipped through the nonexistent due-diligence exercise.

Every time one of those celebrities gets in trouble, it represents a potential cash leak for their investors. Maybe they'll follow a sin/redemption arc and come back even more bankable. Or maybe their careers will crater, and the remaining albums on that contract are doomed to never be released. We imagine that record labels would love to close off those sources of risk.

So, back to Kingship. Those BAYC characters? They only have the life and personality that they are given. They only "exist" when and where the company wants them to. They can't get into trouble. [...] These BAYC band members are the perfect, low-risk celebrities – wrapped up tight like a movie script.

"Amelia" and "Jessica Foster" are public, shared versions of the digital companions individuals create for themselves through genAI chatbots. Their appeal is a reminder that people will happily craft their own reality when given the chance. Reality be damned.

Hey big spender

Tokens are a unit of measure for LLM use. Chatbot providers charge you for inbound tokens (the words that make up your prompt) as well as outbound tokens (the text, images, and video the bot sends back). This metered usage has led to some sticker shock as companies find out weeks later what their chatbot habit really costs. Only then do they institute usage policies and set rate limits.

Nvidia's Jensen Huang is pointing in the other direction. He expects developers to spend the equivalent of half their salary in LLM tokens. For his hypothetical developer earning $500,000 per year, that is $250k of LLM'ery.

Is that a lot? It feels like a lot. But instead of relying on gut feel, we can do the math.

Assuming 260 workdays per year – five days a week for fifty-two weeks (this is tech-land, after all, so there's no vacation time) – that comes to $961/day.

What does $961 get you, though? Anecdotally, some of my hobby projects – like Fortune Ex Machina – have barely cracked the two-dollar mark. And at current GPT-5.4 pricing, $961 is about 55M input tokens plus 55M output tokens.
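The arithmetic above can be sketched out in a few lines. The per-million-token prices here are illustrative assumptions for the sake of the exercise, not quotes from any provider's rate card:

```python
# Back-of-the-envelope check of the "half your salary in tokens" claim.
# The token prices below are illustrative assumptions, not real quotes.

SALARY = 500_000            # hypothetical developer salary (USD/year)
TOKEN_BUDGET = SALARY / 2   # Huang's "half their salary" figure
WORKDAYS = 260              # 5 days/week * 52 weeks, no vacation

daily_budget = TOKEN_BUDGET / WORKDAYS  # roughly $961.54/day

# Assumed per-million-token prices (check your provider's actual sheet):
PRICE_IN = 2.50    # USD per 1M input tokens
PRICE_OUT = 15.00  # USD per 1M output tokens

# If every million input tokens is paired with a million output tokens,
# how many millions of each does the daily budget buy?
paired_millions = daily_budget / (PRICE_IN + PRICE_OUT)

print(f"${daily_budget:.2f}/day buys about "
      f"{paired_millions:.0f}M input + {paired_millions:.0f}M output tokens")
```

Swap in your provider's real prices and the exact figure shifts, but the order of magnitude does not: the budget buys tens of millions of tokens of round-trip conversation every single workday.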

Hmm.

As a friend likes to remind me: the math isn't mathing here.

So not only does $250k per year in tokens feel like a lot, it is a lot. It's also irresponsible to promote large-scale token consumption as some vanity metric of developer productivity. To borrow a line:

If you’re using 1M tokens a day for software dev or personal assistant LLMing I think you don’t know what you’re doing, not that you’re a great engineer.

(With a very few exceptions)

That isn't limited to individual developers, either. Any business that treats "company-wide mandate to use AI" and "throw tokens around till something sticks" as a genAI adoption policy is, quite simply, lost.

Move fast and take things

A unifying theme of genAI projects is: move fast and take things. Need training data? Just grab what someone else has created. Want to incorporate someone else's work into your latest project? Go right ahead. And then refuse to compensate them for it.

Which is why this week we have two examples of genAI companies being genAI companies:

The first is Superhuman, the group behind writing-support tool Grammarly. You've probably seen headlines about Grammarly's Expert Review feature which – shall we say – "paid homage" to several contemporary writers.

One of those writers happened to be Nilay Patel, editor-in-chief at The Verge. Patel also runs The Verge's podcast. And he just happened to interview Superhuman's CEO Shishir Mehrotra on that podcast last week.

The discussion was everything you'd expect, and then some. Nilay Patel lives up to his reputation as a thorough interviewer, and he doesn't shy away from repeating questions when Mehrotra offers sideways non-answers. At times Mehrotra appears to throw his product team under the bus. It's a rather un-CEO thing to dodge responsibility like that. Then again, Superhuman/Grammarly seems to have dodged responsibility for the whole Expert Reviews debacle in the first place, so I guess that fits.

Then we have WebinarTV, which has been mining Zoom links to turn what participants thought were private calls into fodder for genAI-based podcasts. I didn't think it was possible to make both podcasts and Zoom calls worse, much less at the same time. But thanks to genAI, here we are.

If there is any justice in the world, Nilay Patel will someday interview the CEO of WebinarTV.

Recommended reading

Richard Bookstaber is a longtime risk manager who has published multiple books on his topic of expertise. Perhaps most notably, he saw the 2008 financial crisis looming.

When Bookstaber talks about risk, I listen.

His latest piece in the New York Times draws parallels between 2008 and today's overheated genAI space. It aligns with points I've raised here, including tech companies' circular deals and deep connections into the financial system. Oh yes, and I'm adding this piece to my review of the financial crisis. Expect to see more on that soon.

In other news …

For more links to recent news, and with a slightly broader scope, I encourage you to check out my other newsletter. It's a weekly, curated drop of what I've been reading.

  • One interesting side-effect of the Iran conflict: it highlights the split between genAI hardware-bound companies and their software-bound siblings. (Bloomberg)
  • Waymo autonomous taxis keep forgetting to stop for school buses. (Wired)
  • It's not your imagination: genAI chatbots are ignoring your instructions. (The Guardian)
  • In Texas, datacenter construction becomes an unexpected bipartisan issue. (Politico)
  • OpenAI has closed the doors on Sora, its video generator. (The Verge)
  • The twisted connections between social media, our lives, AI psychosis, and the genAI-industrial complex. (Disjunctions)
  • Police in Essex have put a facial recognition project on hold. If you think you know the reason why(te), you would be correct. (The Register)
  • Armed with genAI tools, people are launching denial-of-service (DoS) attacks on the court system. (Futurism)
  • Yet another case of "company accidentally leaks chatbot interactions." This time, it's Sears. (Wired)

The wrap-up

This was an issue of Complex Machinery.

Reading online? You can subscribe to get this newsletter in your inbox every time it is published.

Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.

Disclaimer: This newsletter does not constitute professional advice.
