#033 - GenAI is an executive's best friend
I'll emphasize: "best" friend. Not always "a good" friend.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

Sometimes I sketch out a segment, and then shelve it to make room for more pressing news. Eventually something comes along to return that topic to the forefront. Today's newsletter is one of those cases. Try not to see it as reheated leftovers. It's more like a fine wine that I've been saving for the right occasion.
(Side note: my phone initially autocowrong'd "reheated" as "regretted." Make of that what you will.)
That occasion is The Verge's Elizabeth Lopatto giving ChatGPT an ethics exam, showing why you shouldn't trust certain questions to a chatbot. That dovetails well with what I cover below.
I encourage you to read her article first, then come back to this one. I'll wait.
The door was already unlocked
I have a particular annoyance with genAI: it's the way executives claim the technology helps them "unlock" their unstructured data.
For those who don't recognize the term, unstructured data refers to text, images, video, or anything else that doesn't arrive in neat rows and columns of numbers. Deep down, the machines only see numbers, so expressing unstructured data in numeric form requires extra legwork. And extra decisions.
How, for example, do we do this for an image? Do we treat each pixel as a single number that represents its color? Does each pixel become three numbers, breaking out the amount of red, green, and blue therein? Can we scale the image down to speed up computation? Similarly, in my text analysis work ("NLP" or "NLU" for the insiders) I've had to decide whether to count the singular "dog" separately from the plural "dogs," which words count as noise and should be removed, and whether to process a block of text sentence-by-sentence or as a whole.
The fun part is that there are no wrong answers there. Just the answers you are prepared to live with, as each choice will impact downstream analyses and modeling techniques.
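To make those decisions concrete, here's a toy sketch in Python. Everything in it is hypothetical and deliberately naive; it just shows how "one number or three per pixel?" and "do 'dog' and 'dogs' count as the same word?" are choices you have to make, not facts of nature.

```python
# --- Image: one number per pixel vs. three (R, G, B)? ---
pixel_rgb = (200, 30, 30)          # three numbers: red, green, blue
pixel_gray = sum(pixel_rgb) // 3   # one number: a crude grayscale average

# --- Text: tokenization choices (a deliberately naive example) ---
def tokenize(text, merge_plurals=False, stopwords=frozenset({"the", "a"})):
    """Split text into lowercase tokens, dropping stopwords.
    If merge_plurals is set, strip a trailing 's' (a toy stemmer)."""
    tokens = []
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in stopwords:
            continue
        if merge_plurals and word.endswith("s"):
            word = word[:-1]
        tokens.append(word)
    return tokens

text = "The dog chased the dogs."
print(tokenize(text))                      # ['dog', 'chased', 'dogs']
print(tokenize(text, merge_plurals=True))  # ['dog', 'chased', 'dog']
```

Flip `merge_plurals` and "dogs" either is or isn't the same token as "dog" — and every count, cluster, and model downstream shifts accordingly.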
So when executives say that genAI helps them unlock their unstructured data, I'm not sure how to read that. Maybe they mean that it shortens their time-to-market, because they don't need their data scientists to slog through the decisions I just mentioned? Maybe?
Maybe not. I interpret "unlock" to mean "we couldn't do this before, and now we can." In the case of genAI it feels more like "we could always do this; it just took longer and required more thought. Now we've found a tool that gives us the illusion of effectiveness because it obscures key decisions and grinds out all of that pesky nuance, allowing us to treat a complicated issue as a simple one-off matter."
That leads to my next annoyance:
Fast, to-the-point, and often wrong
People in leadership roles are always looking for fast, simple answers. An all-too-common interaction involves them getting frustrated when someone – usually in a technical or analytical role – says that a situation requires more nuance. "Yes yes," the executive protests. "I understand that. But just give me the answer. Is it A or B???"
Enter genAI: it speaks with confidence! It gives short, definitive answers to any question! (Unless you are asking about certain historical events.) Just what the boss ordered. Because it's the boss's mirror image.
The problem? The genAI bot is not an expert. It doesn't even have skills we can evaluate. Its "knowledge" is a collection of text – numeric representations of text, really – which the machine attempts to neatly summarize in response to your query. But "summarize" is not quite the right term. More like "perform a probabilistic, extended autocorrect based on the words present in documents that might match your query." That doesn't roll off the tongue as easily as "summarize," but I'd argue it's a more faithful description.
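If "probabilistic, extended autocorrect" sounds abstract, here's a toy illustration — a hypothetical bigram sampler, nothing like a production model in scale, but the same basic move: pick each next word according to how often it followed the previous word in the training text.

```python
import random

# A tiny "training corpus" (made up for this sketch).
corpus = "the product is great . the product is fine . the product is great .".split()

# Count which words follow which (a bigram table).
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def continue_text(start, length=4, seed=0):
    """Extend `start` by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        # Words that appeared more often after words[-1] are more likely to be picked.
        words.append(rng.choice(candidates))
    return " ".join(words)

print(continue_text("the"))  # e.g. "the product is great ." — confident, whether or not it's right
```

The output is fluent and decisive. It's also just frequency-weighted dice rolls — which is the point.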
Take survey analysis as an example. Let's say you want to make sense of several thousand freeform text responses. A human analyst would review the raw data, perform exploratory analysis, and then test several modeling approaches to make sense of it all. It's not easy work. It takes time. It sometimes yields ambiguous results.
And when the results are muddled, the right thing to do is to revisit the survey itself: Were the questions worded in a way to elicit the kinds of answers we were looking for? Was the survey too long? Too nosy? Poorly timed or poorly targeted? Working through this exercise will shed light on what went wrong. The takeaway is that sometimes you get the answers you're after, and other times you get a lesson on how to do better next time.
GenAI plays a different game. It will happily swallow the entire lot of survey responses and, within seconds, offer the oversimplified answer that those impatient leaders want to hear. "They love the product." Or "they hate the product." Or sometimes "[off-topic and wholly inappropriate comment about some marginalized group]."
Are the answers any good? It depends. If you're really trying to answer a question, maybe. But if you just want an answer, any answer, then genAI works wonders.
But in that case, I'll ask: why not just flip a coin? It's faster and cheaper than asking a machine.
Under the influence
To be fair, it's not just CEOs who love genAI summaries. Anyone who's short on time – and that's most of us, let's be honest – would like to turn a block of text into a few bullet points. But there's a catch.
It helps to see a genAI summarization bot as a step shy of a social media influencer: someone giving you simple, straightforward guidance on potentially messy issues. And while the word "influencer" carries some (well-earned) baggage, there's a ton of value in an industry expert putting out videos, blogs, or newsletters to distill their knowledge in a way that helps non-experts. The problem with influencers is really the audience: they tend to confuse "production quality, emphasis of speech, and posting frequency" for "experience and capability."
I get it. Genuine experts and bullshitters sound a lot alike. But to borrow a phrase – and for the life of me, I don't recall where I got this – we need to stop believing people just because they have a confident voice and a ring light.
So the next time you get an answer from a chatbot, imagine it's presented as an influencer's video clip. Do you still trust it?
Really?
Then please read this newsletter again. Starting with Elizabeth Lopatto's piece.
And if that's not enough, see this article on a leader's need to be ambivalent. It pairs well with a suggestion by comedian Lucy Porter, on an episode of Radio 4's News Quiz: maybe if genAI models were trained on more British content, they might lose some of their American can-do attitude? (Frankly, I'd like them to feed on less propaganda. But that's just me.)
Recommended reading
Having worked in tech since the Dot-Com era, I've been around long enough to see certain stories repeat. Like "companies adopt technology they don't understand – chaos ensues." Which is why I sense eerie parallels between the current AI wave and the Internet-driven software boom of the 1990s.
Hopefully today's AI leaders will learn some lessons from their software predecessors? I've jotted down some ideas in my latest piece for O'Reilly Radar, "Congratulations, You Are Now an AI Company."
I'll highlight a warning for a particular group of AI adopters:
Just because it runs doesn't mean it works.
In other news …
- The news business tries to get a handle on AI. Hopefully they won't build dubious genAI chatbots. (The Guardian)
- I spoke too soon. The Wall Street Journal has created a genAI chatbot to answer your tax questions. I'm sure this will turn out well. (WSJ)
- DNA company 23andMe has filed for bankruptcy. Fifteen million people who submitted DNA are thankful for this country's stringent data privacy laws. Oh, wait … (CNBC)
- It's not just me – "Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End." (Futurism)
- A recent (Microsoft) Windows update has uninstalled the (Microsoft) Copilot genAI app. Make of that what you will. (Der Spiegel 🇩🇪)
- Adobe tries its hand at agentic AI. (Les Echos 🇫🇷)
- I linked to this inline earlier but I wanted to highlight it again: popular genAI chatbots allegedly include Russian propaganda sites in their training data.
- Apple made the mistake of announcing Siri's AI features before they'd been proven out. (Bloomberg)
- AI can help people write apps, even if they have no background in software development. But are the apps any good? (Spoiler alert: no.) (The Guardian)
- Here's an interesting look at the impact of genAI on shaping technical roles. It's more nuanced and realistic than "the AI bot will write all of the code." (Le Monde 🇫🇷)
The wrap-up
This was an issue of Complex Machinery.
Reading online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.