#055 - take the agents shopping
The hunt for genAI use cases gets desperate. But they'll have a hundred years to pay it all off.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

How do you do, fellow consumers?
Deep down, every business rests on a very simple concept: trading problems. You take a buyer's problem of Not Having Enough Of Some Good Or Service, pair it with a seller's problem of Not Having Enough Money, and everybody's happy. Usually.
Pulling this off requires adhering to four basic rules of product management:
Rule 1: Find problems buyers actually want to solve.
Rule 2: Don't confuse yourself for your target market.
Rule 3: Make sure your solution actually works.
Rule 4: Figure out rules 1-3 before you dump GDP-sized piles of money into the idea.
Your typical AI company is stumbling on all four of those rules. Every time I see some new genAI use case, I see a vendor desperately trying to relate to customers who may as well be on another planet. It's the "How Do You Do, Fellow Kids?" meme, compounded by fear. Racking up tons of debt on weak promises will do that to you.
This is why there's the running half-joke that every genAI use case begins with the premise that you're an idiot. If that seems harsh, the latest edition of Scraping The Bottom Of The Barrel For Use Cases offers two supporting examples.
The first is from the hubbub over agentic AI going shopping:
“For years, online shopping has been about keywords, filters, drop-down menus. And scrolling through multiple pages [of search results] until you find what you want,” Google chief executive Sundar Pichai, who was joined on stage by Walmart’s incoming boss John Furner, told his audience at the show. “Now . . . AI can do the hard work.”
I've read that paragraph several times, scanning it word-for-word, and I've yet to find the "hard work" they plan to eliminate. If you spot it, dear reader, you're welcome to point it out to me.
It gets better (worse). The second example is from Volvo, which wants to jam genAI into its otherwise-lovely vehicles:
The AI agent knows exactly what car it’s in and has access to all of Volvo’s manuals and resources, as well as the greater Internet. It knows how to use the car and can explain it. “I want to understand how I share my digital key. I can open up a manual or something, but I can actually just ask, how do I share my digital key to a friend or to a valet? Or how do I charge? How do I open the charge lid? How do I do this, et cetera? And it just knows all of these things. So you can converse around it without going through the thick manual,” he explained.
I mean, people would certainly want to know all of those things. But there might be other ideas between "the thick manual" and "a full-on genAI implementation." Like, say, a FAQ of some kind? A quick-reference card that they include in the glovebox?
"Have a conversation with a bot" is the new "play a video instead of showing an article." It's a lot of technology to avoid a tiny amount of effort.
Simple math
OpenAI is really thinking about safety.
Maybe. Sort of.
Sure, they're named in several lawsuits which allege their bots have led people to extreme emotional events. But that's fine now because they've just hired a Head of Preparedness:
Altman named Dylan Scandinaro, who was previously listed as being on an AI safety team at Anthropic, in a post on the social media platform X. In his role, Scandinaro will be responsible for, among other things, ensuring that the company safely develops and deploys AI systems and prepares for the risks they pose, according to the original listing. Scandinaro’s former employer [Anthropic] has made a name for itself as a more safety-conscious AI developer.
I see this Head of Preparedness as a Head of AI Risk Management role by another name. And I quite like the name they've chosen! Risk management is all about mapping the road ahead, keeping an eye on potential opportunities and dangers alike. I've said that it should really be called "opportunity management," but I might like the spirit of "preparedness" a tad more.
As a bonus, OpenAI's move might get other shops to take AI risk management more seriously. We all know how smaller companies love to mimic the big players. This might even grow the market for, I dunno, consultants who take a risk-focused approach to AI transformation. Especially the kind who have written a book on it.
Then again, it's easy for a company to treat this kind of title as a lightning rod. Now that they have someone in the seat, they have a scapegoat they can sack when things go awry.
The real test of OpenAI's Head of Preparedness will be whether they get a real say-so in company matters. Including veto power under extreme circumstances. Some might say this is at odds with revenue generation and growth. Not true! Done well, a risk management role is only at odds with short-term gains that will lead to long-term pains.
Which is why I disagree with that Bloomberg piece calling the salary "eye-watering." I get that $550k a year is no joke. It's larger than many salaries in tech and finance. Yes.
Yes, and…
… if this person can spare OpenAI from just one massive lawsuit or PR fiasco per year, the role will pay for itself. From that perspective, I would argue that $550k is too little.
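For the back-of-the-envelope version, here's a minimal sketch of that break-even logic. Every figure other than the salary is a number I'm inventing purely for illustration; the point is the arithmetic, not the specific estimates.

```python
# Back-of-the-envelope break-even on the Head of Preparedness salary.
# All figures below except the salary are hypothetical illustrations.

salary = 550_000                 # annual salary, per the reporting
cost_per_incident = 50_000_000   # assumed cost of one major lawsuit or PR fiasco
p_averted_per_year = 0.02        # assumed chance the role averts one such incident per year

expected_savings = p_averted_per_year * cost_per_incident   # $1,000,000
print(f"Expected annual savings: ${expected_savings:,.0f}")
print(f"Pays for itself? {expected_savings > salary}")

# Break-even: how likely does averting one incident need to be?
breakeven_p = salary / cost_per_incident                    # about 1.1%
print(f"Break-even probability: {breakeven_p:.1%}")
```

Under those (made-up) assumptions, the role only has to nudge the odds by a percentage point or so to cover its own cost.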
One hundred years of revenue stream
You may have heard that Google has just issued a hundred-year bond. And you may also wonder what this is all about.
In what's become a time-honored tradition here at Complex Machinery, in order to explain AI goings-on I first need to explain some financial goings-on. Somewhat oversimplified for brevity:
At a high level, every mortgage, bond, and other fixed-income vehicle is a promise: give me money now and I will pay you (likely more) in the future. Specifically, a borrower or bond issuer pays back some base amount (the principal) plus a fee (the interest or coupon). Credit default swaps also fall into this family, but they're a little special. So I'll skip them for now.
These instruments all have the same mechanics underneath but differ in terms of their framing:
If you "are granted a $1M loan," you're getting $1M now and making a promise to pay (say) $1.3M by some point in the future.
If you "issue a $1.3M bond" you're asking for (say) $1M now and making a promise to pay $1.3M by some point in the future.
See what I mean? In both cases the principle is $1M and the interest or coupon is the additional $300k. Depending on the arrangement the borrower (bond issuer) might pay back the principle in a lump sum at the end, or in installments. But the interest (coupon) payments arrive on a steady beat. And it's precisely that cash flow that convinces lenders to hand out those big sums of money. It's like getting a monthly paycheck, but for granting access to your excess cash instead of providing your labor.
I made up the numbers $1M and $1.3M as examples. In reality the interest will be based on a number of factors, including the opportunity cost of the party handing over the initial $1M (they need to be compensated for locking up money that could be used elsewhere) and the perceived creditworthiness of the party receiving the initial payment (the more questions we have about their ability to repay, the more money we demand in repayment).
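To make those cash flows concrete, here's a minimal sketch using the made-up $1M / $1.3M numbers. The ten-year term and the once-a-year payment schedule are my own assumptions for illustration; real terms vary.

```python
# Hypothetical bond cash flows using the made-up numbers above.
# Assumptions (mine, for illustration): ten-year term, one coupon
# payment per year, principal repaid in a lump sum at maturity.

principal = 1_000_000
total_repaid = 1_300_000
years = 10

annual_coupon = (total_repaid - principal) / years   # $30,000 per year

for year in range(1, years + 1):
    payment = annual_coupon
    if year == years:
        payment += principal   # lump-sum principal repayment at maturity
    print(f"Year {year:2d}: lender receives ${payment:,.0f}")
```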
What makes this interesting is that mortgages and bonds are tradeable instruments, which means today's buyer can sell that debt to someone else. Maybe I find some greater fool buyer to pay me $1.1M now in exchange for that maybe-$1.3M later. And because we're talking about trading, keep in mind that the current price of the bond may fluctuate over time due to market conditions, changes in the issuer's business, or that kind of thing.
(If you squint just right, you can almost see a bond as a weird flavor of stock.)
Having said all that, let's get back to Google:
A number of genAI-related firms are taking on debt right now. Buying into a ten-year bond relies on the issuing party being around for the full decade in order to repay. A hundred-year bond … well … not many companies last that long! But remember, the bond isn't just about the full repayment of the principal. It's also about those steady coupon payments along the way. And for Google, that coupon will amount to 6% interest.
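To see what that steady cash flow looks like, here's a rough sketch of the coupon stream per $1,000 of face value at a 6% annual coupon. The face amount, the annual payment schedule, and the discount rate are my assumptions for illustration, not terms of the actual deal.

```python
# Rough sketch of a hypothetical 100-year bond's cash flows.
# Assumptions (mine, for illustration): $1,000 face value, 6% annual
# coupon paid once a year, principal returned at maturity.

face_value = 1_000
coupon_rate = 0.06
years = 100

annual_coupon = face_value * coupon_rate      # $60 per year
total_coupons = annual_coupon * years         # $6,000 over the century

# Discounted at that same 6%, the principal repaid a century from now
# is worth only a few dollars today; the bond's value is nearly all coupons.
pv_principal = face_value / (1.06 ** years)   # roughly $3

print(f"Annual coupon:            ${annual_coupon:,.0f}")
print(f"Coupons over {years} years:   ${total_coupons:,.0f}")
print(f"Principal's value today:  ${pv_principal:,.2f}")
```

At that horizon, the coupons are effectively the whole product.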
There's a bigger point about this 100-year bond: unlike everything else in genAI, it will produce predictable revenue shortly after issuance (for the bondholders, at least).
Angry bots
I don't spend a lot of time on the hellsite known as LinkedIn. Or as I call it, "TikTok for the suit and tie set." As well as some other names I won't put in print. But the other day I was rewarded for logging in.
It all started when a maintainer for an open-source project rejected a proposed code update (pull request) that had been generated by an AI bot. In return, the bot allegedly wrote a nastygram of a blog post and mentioned the maintainer by name.
Note that word "allegedly." Like me, Russell Horton raised an eyebrow at the story. As he posted on LinkedIn:
AI is having a Horse_ebooks moment.
Horse_ebooks was a Twitter spambot, beloved for its quirky and disfluent tweets. Until it emerged that the account had been co-opted as a marketing ploy, with human authors, revealing a betrayal on levels never before seen on the internet.
A similar story played out with the most provocative posts on Moltbook, and I don't think this matplotlib agent blogpost passes the sniff test, either. Righteous indignation is not the default personality of any model.
Indeed.
Any kind of "machines rolling dice" scenario is likely to produce interesting results now and then. We've seen plenty of this with genAI as people engage with chatbots for research and as thought partners.
Yes. Yes, and …
... it's not predictably interesting. Most of what comes out of a bot will be rather meh. Case in point: my genAI fortune cookie side project, Fortune Ex Machina, offers some bangers. But I'll be the first to acknowledge that you have to click that "random fortune" link for a while to find them.
As such, some people may be tempted to speak on behalf of their bot – that is, fake some results – in order to keep the excitement going.
This is hardly new. We saw a similar problem with so-called "reality" TV, where so many of the shows turned out to be fake. Filming people 24/7 will eventually surface some interesting morsels of TV. But to do so on a schedule, in time for a weekly half-hour show? No. At some point you have to induce (or outright manufacture) the excitement.
Elsewhere …

Apparently there's been some news out of crypto-land. Something about prices moving. Mostly in the downward direction, I think.
Me? I stopped covering crypto a couple of years ago. And while I sometimes miss it … this time … I do not.
Nothing to see here. Move along.
Recommended reading
In January 1986, the space shuttle Challenger exploded a little over a minute after launch.
The lead-up to the disaster is a harsh lesson in risk management – a lesson in why leaders must heed experts' warnings even when they run contrary to the preferred outcome. Hand-waving and positive vibes cannot defy the laws of science.
This Washington Post article profiles a journalist who witnessed the event and is keeping it fresh in our minds forty years later. It's well worth a read.
In other news …
For more links to recent news, and with a slightly broader scope, I encourage you to check out my other newsletter. It's a weekly, curated drop of what I've been reading.
Meta turns to TV spots to change public opinion on its datacenter projects. (New York Times)
AI company Anthropic looked into the impact of AI code assistance on skill development. (Anthropic)
The rise of genAI-based search starts to eat away at web publishers' traffic. (Le Monde 🇫🇷, Les Echos 🇫🇷)
A series of 1980s burglaries doubles as a lesson in the dangers of data leaks. (The Atavist Magazine)
People are fighting back against the rise of genAI slop on social media. (BBC News)
Amazon offered a peek into its robot-fueled warehouse. (The Guardian)
The major genAI companies throw another $600b into the fire. (Financial Times, The Register, Les Echos 🇫🇷)
Anthropic's Claude Opus 4.6 gets better at finding software vulnerabilities. (Der Spiegel 🇩🇪, Anthropic blog)