#057 - Lessons worth learning
Every piece of genAI news will teach you something …
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

Recent AI news has provided a mix of cautionary tales and examples worth following. On this, the Ides of March, beware.
Fun with math
Despite the novelty of what they sell, genAI companies are still beholden to basic rules of business. Two that are on my mind today are:
- There's a lot of money to be made in giving people what they want. This is how you bring in revenue.
- You have to keep track of the costs behind acquiring and assembling your raw materials into the finished product. Optimizing these costs can improve your profit margins.
(Before you ask: no, "don't commit crime" is not a hard rule of business. It's a generally good idea to not commit crimes! But the history books show that crime and business often go hand-in-hand.)
That first rule is why I figure OpenAI is entering the business of adult content. They're on the hook for quite a bit of money, and unlocking ChatGPT's naughty mode is one way to tap into the multi-billion-dollar porn market.
As for the second rule: tokens (small words or word-pieces) are the raw materials that LLMs turn into images and text. Some poor soul at OpenAI has the job of calculating how many tokens, on average, are required to generate some measure of adult content. From there they can determine how much adult content they need to sell to meet certain revenue goals.
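If you want to run that math at home, it's a short exercise. Here's a back-of-the-envelope sketch in Python, where every number is a placeholder I invented, since OpenAI doesn't publish its per-token costs or pricing:

```python
# Back-of-the-envelope unit economics for generated content.
# Every number below is an invented placeholder, not an OpenAI figure.

tokens_per_item = 1_500       # assumed average tokens per piece of content
cost_per_1k_tokens = 0.002    # assumed blended compute cost, USD per 1,000 tokens
price_per_item = 0.50         # assumed price charged per generated item

cost_per_item = tokens_per_item / 1_000 * cost_per_1k_tokens
margin_per_item = price_per_item - cost_per_item

revenue_goal = 1_000_000_000  # hypothetical revenue target, USD
items_needed = revenue_goal / price_per_item

print(f"cost per item:   ${cost_per_item:.4f}")
print(f"margin per item: ${margin_per_item:.4f}")
print(f"items needed to hit the goal: {items_needed:,.0f}")
```

The point isn't the specific figures; it's that someone has to track tokens-in versus dollars-out for every product line.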
Except they no longer have to do that, since OpenAI has recently hit pause on ChatGPT's "adult mode." They claim to be working on technology updates to support the effort. My hunch is that it's actually because they found a different source of revenue. Like, say, a massive contract with a government war machine.
Per the first rule, OpenAI is still giving people what they want. And given how some people approach global conflict, we could argue that selling war tools is the same as selling pornography.
Which brings us back to the second rule, and our trusty mathematician. On the plus side, this person no longer has to calculate the number of tokens per unit of sex-themed content.
On the minus side, now they need to figure out how many tokens are required for a drone strike.
Bot-on-bot crime
Hackers have long used software-based tools to automate their probes for vulnerabilities. Generative AI takes this to a new level, as bots can explore novel approaches to poking at a target.
It's how a security research company hacked the McKinsey management consultancy. This was a case of bot-on-bot crime, in that the researchers tasked their genAI bot with breaking into McKinsey's genAI bot.
The exploit was a good old SQL injection – in which a system extends a little too much trust to user input – which means a 1990s-style flaw took down a 2026 creation. A SQL injection vulnerability is the tech equivalent of putting retina scanners on the front door while leaving the side window open.
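If you've never seen the flaw in action, it's depressingly simple. Here's a sketch using Python's built-in sqlite3 module – a generic illustration, not McKinsey's actual code or schema, neither of which the writeup shares:

```python
import sqlite3

# A toy database standing in for something far more sensitive.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (name TEXT, engagement TEXT)")
conn.execute("INSERT INTO clients VALUES ('Acme Corp', 'very secret project')")

user_input = "nobody' OR '1'='1"  # attacker-controlled string

# Vulnerable: the input is pasted straight into the query, so the
# OR clause rewrites the query's logic and returns every row.
vulnerable = f"SELECT * FROM clients WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())  # leaks the whole table

# Safe: a parameterized query treats the input as data, not as SQL.
safe = "SELECT * FROM clients WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```

The fix – letting the database driver handle the quoting – has been standard practice for decades. Which is what makes this particular hole so embarrassing.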
McKinsey's genAI bot had access to sensitive information, including emails, chat logs, and files with details on secret client engagements. So the fact that it was so easily infiltrated is – and I am using a technical term here – A Very Bad Thing™.
There were two silver linings, though:
1/ McKinsey took it seriously. Some companies drag their feet when researchers contact them about a security flaw. Per the timeline in the linked article, McKinsey's Chief Information Security Officer (CISO) responded within a day.
(Perhaps this person had already raised concerns over the bot, and they were eagerly monitoring their inbox for outside validation? Entirely possible.)
2/ The security firm took a read-only stance. While the exploit they uncovered allowed them to modify data, they didn't change anything. They only noted what a malicious actor could have done.
This incident is a reminder that security – tech or otherwise – is a space in which adversaries and defenders can use the same tools. If hackers can task genAI with hunting for exploits, companies can turn genAI inward to proactively scan for trouble.
Whether anyone will actually do this is another story. But they at least have the option.
Apples from the same tree
In just over a week, two companies have landed in hot water over their genAI bots.
The first was writing assistant Grammarly and its "Expert Review" product, in which bots dispensed writing feedback in the flavor of well-known authors. Not content to stick with long-deceased authors (which would have been bad enough), the company decided to build bots in the voice of contemporary writers. Including those from The Verge and noted tech journalist Julia Angwin.
Grammarly initially stuck to its guns – requiring authors to opt out of their work being used, which is par for the course for a genAI company – but eventually pulled the plug on Expert Review. Still, Angwin is spearheading a class-action suit against the company.
Then there's Character AI. You may recognize that name from lawsuits over teen suicide. Character AI has since closed the platform to the under-18 set, but it still hosts plenty of bots based on personas of known murderers. Including school shooters.
Journalists at Futurism pointed out this problem in December 2024 – not 2025, 2024; quite a while ago, in technology circles – and Character AI has done little to address the issue:
That a teen-loved chatbot platform would be allowing this kind of content is obviously horrifying. Worse: Futurism identified this specific Character.AI issue all the way back in December 2024 — meaning that even after more than a year, Character.AI has yet to resolve an absolutely glaring gap in platform moderation.
We can’t stress enough how easy it is to find this stuff. These bots aren’t the result of complex attempts to “jailbreak” AI models or confuse platforms. The platform’s text filters failed to prevent them from being created, and we found them with simple keyword searches.
The Grammarly and Character AI incidents both involve genAI, but they aren't genAI-based problems. These strike me as product management problems. Or even company culture problems. Which is another way of saying The Company Doesn't Give A Damn.
(But what do I know? I'm just fast approaching three decades in the tech space, and I've worked a variety of roles spanning software product development, infrastructure maintenance, and AI. Entirely possible that I'm missing something. Maybe.)
The sad fact is that the AI excitement has given companies more drive to do stupid things. And that excitement sometimes provides them air cover when we notice that they have, in fact, done stupid things.
Hopefully the high-profile news coverage and class-action lawsuit give other would-be genAI miscreants pause. They probably won't. Because that mindset lives by the mantra "it won't happen to me." But we can hope.
A tale of two bets
BuzzFeed, the web property best known for listicles with clickbait headlines, might be headed for bankruptcy. Is this a reflection of the difficult business climate for news sites? Perhaps. But I'd say it's due to its aggressive lean into genAI.
CEO Jonah Peretti was so bullish that he took the company on a deep, head-first dive into the technology. In particular, he was early to the game of replacing writers with generated content. Peretti made a bet on where the future was headed. And it didn't pay off:
The company’s stock price jumped aggressively, from around $3 per share to north of $15. But longer-term, neither insiders nor the public were particularly compelled by the move. Nonetheless, Peretti doubled down, promising in May 2023 that AI will “replace the majority of static content” on the site, just a month after shutting down its Pulitzer Prize-winning BuzzFeed News division.
Reality soon set in. The AI quizzes were underwhelming, and the site was soon caught publishing entire AI-generated articles that were sloppy and repetitive. After the initial spike in enthusiasm, the company’s stock took a massive beating; as of this week, its shares are hovering around 70 cents.
Wait, being excited about genAI isn't enough to make genAI work? Who knew? (Everybody knew.) Other CEOs would do well to take note. BuzzFeed isn't just a cautionary tale of how genAI adoption may go awry; it may also be the canary in the coal mine for execs who get by on hand-waving about AI's transformative capabilities.
Compare that to Nvidia, which is placing a bet on itself to develop its own frontier genAI models. (Credit where it's due: I first saw a brief mention of this in Sherwood News, then found a longer writeup on Wired.)
It's still too early to tell where this will go. On the one hand, this move could open up interest in genAI, which could lead to greater GPU sales over the long term and cement Nvidia's role as the chip-maker in the space. On the other hand, every GPU they use for training their own models is a GPU that can't be sold to a customer. It's very much a possible-future-money versus definitely-now-money situation.
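You can reduce that tension to toy arithmetic. A sketch with numbers I made up entirely, since Nvidia publishes none of them:

```python
# Toy opportunity-cost comparison: sell GPUs today, or divert them
# to in-house model training. All numbers are invented placeholders.

gpus_diverted = 10_000        # hypothetical GPUs kept for internal training
sale_price = 30_000           # assumed revenue per GPU sold today, USD

definitely_now_money = gpus_diverted * sale_price

# The bet: in-house frontier models grow the market, lifting future
# GPU demand by some uncertain amount.
extra_future_units = 50_000   # hypothetical additional GPUs sold later
risk_haircut = 0.8            # crude discount for risk and time

possible_future_money = extra_future_units * sale_price * risk_haircut

print(f"forgone sales today:      ${definitely_now_money:,}")
print(f"risk-adjusted future bet: ${possible_future_money:,}")
print("bet pays off" if possible_future_money > definitely_now_money else "bet loses")
```

Whether the real version of `extra_future_units` justifies the real version of `definitely_now_money` is exactly the question Nvidia's leadership is wagering on.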
So we have two companies and two big bets on genAI. One bet has gone sour. The other remains to be seen, but at least it shows promise. Time will tell.
Recommended reading
This article in The New Yorker is all about turbulence. Not in the metaphorical sense. I mean the real, plane-shaking turbulence that can make air travel unpleasant and even dangerous.
It's also an article about how to handle risk. And that's mostly a matter of learning from every incident, so you can prevent it down the road or at least do a better job of handling the inevitable.
This is required reading for anyone building AI-based products.
In other news …
For more links to recent news, and with a slightly broader scope, I encourage you to check out my other newsletter. It's a weekly, curated drop of what I've been reading.
- A mistaken facial recognition match lands an innocent woman in jail for months. And by "innocent" I mean "she wasn't even in the state where the crime had been committed." (The Independent)
- A research group has exposed a sizable, entirely preventable flaw in the internal genAI bot used by management consultancy McKinsey. Yes, that McKinsey. (Codewall)
- Datacenters are important installations. Which makes them juicy targets during conflict. (Les Echos 🇫🇷)
- Spotify claims that its top developers haven't written any code since December because genAI is handling the work. I'll also note that they dropped this gem during an earnings call, so make of that what you will. (Business Insider)
- The downside of letting genAI bots churn out mountains of code? Human reviewers can't keep up with it, which increases the risk exposure to bugs lurking in the software. (The Register)
- The standoff between Anthropic and the DoD has ended with Anthropic sticking to its principles, and the DoD marking it as a supply chain risk. (WSJ, Die Zeit 🇩🇪, Le Monde 🇫🇷, Les Echos 🇫🇷)
- High-ranking execs are (still) having a rough time with AI adoption… (HBR)
- … which is probably why so few companies are seeing tangible benefits from AI. (Sherwood News)