#056 - If wishes were horses
Two years of Complex Machinery. And more to come.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

Complex Machinery launched in private beta on 29 January 2024, then opened to the public a month later on 28 February. So in a way, today marks the second time the newsletter turns two years old.
Whether you've been here since the start or you just subscribed last week, thank you.
Blowing out the candles
When deciding what to cover in this anniversary newsletter, I knew from the start that I wouldn't hand you a bunch of one-liners from my favorite back issues. That'd be underwhelming. Also, I already played that card at the start of the year.
Maybe I could take a stab at what the future of genAI holds? No. Predictions are a fool's game. Even for those of us who build prediction machines for a living.
I then decided I'd trawl the archives to see how different segments have held up over time.
That was an eye-opener.
Researching past issues showed that I've been writing about pretty much the same handful of themes for the past two years. (See: "A difference of time" and "It's still a wild animal," echoes of which appear in pretty much every issue.) That's a reflection of the genAI space as a whole – it fills headlines every day, but has yet to evolve. And most of those headlines are about notable AI goofs. So while Complex Machinery was never intended to be a recap of What GenAI Has Gotten Wrong This Week, those are the stories the news keeps handing me. So that's what I write about.
That, in turn, led me to what I'll share today. Stealing an idea from John Spencer's podcast, in which he runs an annual wish list for urban combat, I'm sharing my wish list for genAI.
This is my take on how the world will get the most out of what this technology has to offer.
Beggars will ride
1/ The world uncovers real genAI use cases for real problems. We're long overdue to sort out what genAI does well. I don't mean what it could do, someday. It's time for us to build applications on what it's actually good for, today. If we do that, the major genAI providers might actually make good on the trillions of investment dollars they've wrapped up in circular deals.
Bonus: by uncovering meaningful use cases, we'll probably satisfy my sub-wish of finding genAI's killer app.
2/ The future vision and the present-day reality get closer. Selling is all about painting a picture of a possible future. But what we have now is a bunch of genAI promises that are set so far into a hazy distance that they're unlikely to materialize.
As I noted a year ago, and I keep bringing up: the groups selling AI and those buying it live in different times; we need them to come closer together.
3/ AI literacy replaces hype and hope. Promises from genAI vendors range from "pure speculation" to "absolute bonkers bullshit." Those, however, take a back seat to the whoppers corporate buyers tell themselves about their haphazard AI transformation efforts.
Hopes and dreams are good! But real-world applications need a real-world foundation. Strong, widespread AI literacy is the first step. When buyers understand what this technology really is, and what it can(not) do, they're better equipped to resist the siren song of pushy vendors and overexcited peers alike. And that's when the meaningful AI adoption happens.
4/ AI adoption looks more like the research effort that it is, not a quick fix. Generative AI is often sold – and bought – as a fast add-on to a business. Real success with this technology will require the discipline to approach every project as an experiment. In turn, that will require deeper pockets, longer time horizons, and an iterative approach.
If you read between the lines, you'll note that this will also involve acceptance of failure. Not all research projects will make it to the finish line. Companies will learn to keep mum on their fancy new genAI-backed features until they've actually proven themselves out.
5/ Support for displaced workers. Today, the idea of a genAI bot being able to truly do a person's job is laughable. That will change as the technology advances and adoption improves. Companies would do well to ease displaced employees into new roles, or provide them an off-ramp that helps them pursue other opportunities. Large-scale, sudden layoffs should be considered an embarrassment, not a triumph.
6/ A return of machine learning (ML/AI). The cacophony of generative excitement has drowned out ML/AI. Companies regularly applaud genAI stumbling through challenges that its older sibling has been able to handle for more than a decade. At a fraction of the cost and horsepower, no less. (Every time an executive proclaims that genAI lets them "unlock their unstructured data," an angel loses its wings.)
As a side-wish: I can't wait for people to remember that "genAI" isn't the entire "AI" field.
7/ The grifters find their next mark. Once we have peace and quiet, we'll have an easier time making AI everything it was meant to be. Because right now it's just bait for thirsty investors.
Another candle
To make all of that happen, I'll need one more wish:
genAI gets its shit together.
The field keeps trotting out new toys to project the illusion of progress. In reality, genAI is dominated by cases of This Doesn't Work But Hey Let's Pretend It Does. It's a shame. When you consider all the fanfare around genAI, we should be more impressed.
A lot more impressed.
And yet we are not.
Software development offers a sliver of hope. In the hands of very experienced software developers who understand how to delegate, code generation can be a force multiplier. But that's a very narrow set of people. Beyond that, we currently have inexperienced developers releasing apps to the public. And we'll soon have leaders overloading their seasoned developers with so much work that they'll erase all of those agentic gains.
I suppose I should mention crime, too? Generative AI has proven quite useful to scammers, purveyors of revenge porn, and worse. Plus you have the unintended side effects, like end users getting hooked on chatbots, sometimes to the point of extreme emotional distress. The companies behind these systems don't seem to care about these problems, either. I suppose they're distracted by the mountains of debt they've taken on.
The only use case on which genAI has consistently delivered is "creating excitement." And even that is waning.
Looking ahead
What's next for Complex Machinery? While reviewing my map of the AI risk landscape, it occurred to me that AI has its tendrils in just about everything … so … I guess that gives me scope to write about everything – commercial real estate, workplace policies, autonomous vehicles… You name it.
Should you find any AI stories I should know about, please send them my way! I'm especially interested in reports of:
genAI that actually works as advertised (but is not crime)
Individuals finding novel, creative use cases and interfaces
Companies managing their AI-related risks with more than just hand-waving and bold statements
(You're also welcome to send me tales of GenAI Gone Wrong. Just know that I have a backlog of those…)
Recommended reading
Continuing my usual thread of What Wall Street Can Teach Us About AI, I wrote a short piece on the connection between rogue traders and genAI agents: "Spotting and avoiding ROT in your agentic AI."
In other news …
For more links to recent news, and with a slightly broader scope, I encourage you to check out my other newsletter. It's a weekly, curated drop of what I've been reading.
You and your spouse don't share a common language? AI-backed translator apps can help, though you'll also need solid communication skills. (New York Times)
The prospect of genAI-created software is wrecking financial deals for software firms. (Financial Times)
Would you spend $65k for your kid to sit in an "AI-powered" school that monitors their every move? Yes? Perhaps read this article and have another think? (404 Media)
Les Echos ran a series called "Untamable AI" ("Indomptables IA") on how the world is releasing technology that it can't quite control. Parts 1, 2, 3, and 4. (Les Echos 🇫🇷)
Companies are pulling back on AI-powered tools, slowing sales cycles as they ask lots of questions and evaluate their needs. It's long overdue. (WSJ)