#036 - Relationship status: "it's complicated"
Our relationship with robots should be simple. Creepy, commercial GenAI has made it anything but.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

Relationships are difficult. Business or personal, platonic or romantic, they all have the potential to be messy, delicate matters. Because, people.
You'd think that our relationships with machines would be easier to navigate. There's only one set of emotions in the picture. Only one sentient being, even. It turns out that our relationships with bots are a lot more complicated for that very reason: on the one side, we have humans who project all of our weird emotions into whatever we do; and on the other side, the machine-builders are tapping into those emotions to get more of our attention. Because, profit.
It's time to talk about our increasingly unhealthy robot relationships. Those are leading us into equally unhealthy relationships with each other, and with the inevitable.
Way too happy to be here
A couple of weeks back, people noticed that ChatGPT was being a little more … complimentary than usual. Parent company OpenAI has since copped to a change in the underlying GPT-4o model and rolled it back. As reported by Kelsey Piper in Vox:
As examples poured in, OpenAI rapidly walked back the update. “We focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time,” the company wrote in a postmortem this week. “As a result, GPT‑4o skewed toward responses that were overly supportive but disingenuous.”
(Side note: this points to a risk I've highlighted elsewhere about relying on a third-party model. Imagine the poor soul who'd built a flattery bot right as 4o became a grade-A sycophant. I hope they snagged the VC cash before the model returned to normal.)
While OpenAI has shifted away from robot yes-men, someone else is leaning all the way in. You can probably take a guess as to who:
X Marks the bot
Facebook has been many things over the years: a simple photo site; an advertising behemoth; and, briefly, a tool for dismantling democracy. But somewhere in there it claimed to be a place for "real connections to actual friends." Which is a far cry from its current plan to provide not-actual not-friends. Head FaceGuy Mark Z boasts that digital relationships will backfill our massive friend deficit, boosting us from a paltry three to fifteen. Not the best pitch from someone who is regularly suspected of being a bot cosplaying as a human being. And who has, on at least one occasion, been labeled "constitutionally bitchmade."
We all know why Facebook wants to give us virtual friends. The whole point of that place – like the point of so many apps out there, let's be honest – is to keep you on the platform. The longer it holds your attention, the more ads it sells, the more money it makes for its corporate overlords. (It's also a sideways gambit to bring people into the company's thus-far-lackluster, burned-ten-figures-of-cash metaverse dreams. But I digress.)
And to capture that attention, FaceMeta keeps looking for flaws in our human-ness, the side-doors to our feelings – like that very natural desire to be heard, or to be right, or to have someone agree with whatever we say even when it's rubbish. It's like they treated Natasha Dow Schüll's Addiction by Design as a manual instead of a cautionary tale. Bots-as-friends takes this to another level: always available, always ready with a compliment, and always learning more about us so it knows how to trigger that dopamine hit.
As a bonus, this idea sidesteps a privacy issue. Chats with friends are rich sources of information about who you really are and who you aspire to be, right? Well, FaceMeta can't be accused of eavesdropping on those private conversations if their bots are participants. You know, supporting you with "oh yeh you're so awesome" while dropping a Totally Relevant Ad for car insurance or orthodontists.
That said, the bots might just be the dream of attention-seekers everywhere. And if they crawl off to their echo chambers, social media might become a nicer place.
Maybe.
Avatars, advisors, companions
The last few weeks have turned up plenty of other unhealthy bot connections. First up, we have genAI avatars of the deceased. You've heard of edtech and insurtech … but I bet you never expected to hear the term "grieftech," did you? And yet, it's here. Powered by AI. Every bit as dystopian as you'd imagine. Companies will package up your loved ones' images, statements, and anything else into a bot so you can chat with a (not-quite-)them long after they've left this earthly plane. Or give them a chance to posthumously speak up in a courtroom setting.
Then we have coach-bots. Because why not take life advice or health guidance from an entity with zero practical experience? And zero accountability? One that knows how to lie when the time is right? I suppose that gives you plausible cover for when you tell your human friend why you're ditching them. "It's not me; it's my advisor-bot telling me what to do here."
This, on top of all the headlines about AI-as-companion relationships gone wrong. A problem that's gotten so bad that the laws might actually catch up with the technology. Considering that the law tends to lag far behind new tech, that's a sign. (Perhaps the only good use of a bot-as-companion is the one depicted in episode 3 of Midnight Burger.)
To top it off, we have the CEO of a major corporation who uses genAI bots as a filter for their media intake and workplace inbox:
Copilot consumes [Microsoft CEO Satya] Nadella's life outside the office as well. He likes podcasts, but instead of listening to them, he loads transcripts into the Copilot app on his iPhone so he can chat with the voice assistant about the content of an episode in the car on his commute to Redmond. At the office, he relies on Copilot to deliver summaries of messages he receives in Outlook and Teams and toggles among at least 10 custom agents from Copilot Studio. He views them as his AI chiefs of staff, delegating meeting prep, research and other tasks to the bots.
Because, sure, it's not like we'd want a CEO to pick up on any important details or nuances in there. A summary – a potentially flawed summary, because the bots are still pretty bad at this – seems good enough.
Customer service bots
Businesses are encouraging us to form relationships with bots so we never have to talk to any humans they employ. And also so they can pretend-friend us into buying things:
There is a flywheel effect at work here. The AI agent has access to an enormous amount of data about users that makes it possible to tailor recommendations, information, and insights to their needs. And once they reside in a messaging app, they can create a continuing presence in the user’s life, just like a person would.
“Once an AI knows you and remembers your history, it stops feeling like a tool and starts to feel like a companion,” says Conor Grennan, chief AI architect at New York University Stern School of Business. “It starts to blur the line between an AI brand ambassador and just a friend who shares your taste.”
This use case is slightly less creepy than a coach or digital ouija board. But it's still sketchy. And weird. Customer service bots keep committing these very public goofs and yet companies … keep releasing them? Putting them face-to-face with customers while wearing the brand logo? I don't get it.
You'd think a company that builds AI tools would know better. And yet, AI tool-maker Cursor employs a customer service bot. At least, it did. Maybe they showed it the door after it told customers about a controversial-yet-nonexistent policy.
It's almost like Cursor never saw my warning – originally posted to LinkedIn and since archived on my site:
Whenever you're about to say "We're going to deploy an AI chatbot for customer service" …
… say this instead: "we're going to deploy a defective, unhinged alternative to a search system. And we shall be held liable for its errors."
I'm not saying that AI is bad. I work in this field and I can assure you that AI – not just the newly-popular genAI, but all of ML/AI – has valid use cases and can bring real business value.
The thing is, "AI chatbot for customer service" is not that use case. At least not today. Maybe someday it'll be useful. But it's folly to keep deploying AI chatbots now when they clearly aren't ready for prime time.
That was a year ago. It would seem, then, that "maybe someday" is not today.
Giving genAI bots front-facing customer service roles is like giving the intern the nuclear launch codes. You could do it … but why would you? Is your life going so well that you need to create new problems?
Where's the risk?
This is when you'll remind me that Complex Machinery is about risk as much as AI. So where's the risk exposure in all of these bot relationships?
The answer is the same as always:
Using AI where it's not a good fit is an invitation for trouble.
Consider:
Customer service: Putting AI bots front-and-center for customer service opens the door to them doing the wrong thing – possibly at scale – with affected customers getting hit before you realize there's been a problem.
Thus far, this has posed more reputational risk than, say, risk of physical harm or sizable monetary damages. And I've seen arguments elsewhere that reputation risk isn't that big a deal, because the news cycle will eventually shift and people will forget. That may be true, but it doesn't change the fact that PR fallout costs time, money, and effort. It's an unwelcome distraction from your core mission.
Companions: This is an interesting twist on the risks posed by using bots in customer service roles. The simple fact is that bots still need human supervision. The more delicate the situation, the more supervision required. With a genAI advisor, coach, or companion, the human closest to the loop – and possibly the only human with real-time access to the loop – is the end-user. Someone who's not in the best position to monitor the bots or evaluate their outputs.
Friends and avatars-of-the-deceased: I won't claim that technology is eroding our ability to communicate with others. Throughout history there's been plenty of hand-wringing about speaking over the phone instead of meeting in person, or typing e-mails in lieu of speaking over the phone. No. People can communicate just fine in whatever medium they see fit. The means of communication isn't the problem. I have met the problem, and it is us.
As humans, it can be difficult to accept being proven wrong – especially in public. (See: the people who double down on Spouting Nonsense On Social Media.) Life is just easier when your friends always tell you that you're amazing, and never point out that maybe you need to get your act together. And who wouldn't want to stay in touch with that friend or relative forever?
But that's not how life works. To forge a trusting bond with bots – especially commercial bots programmed to engage – is to rely on a funhouse mirror. It's how we lose practice interacting with other people. Real people. And despite what the genAI companies want you to believe, there are still more people out there than bots. The kind of people who aren't hard-wired to agree with us. People with whom we form relationships that we can't simply restart from scratch, and where no one party holds the full history of our interactions. As explained by Marie Le Conte in her recent essay "11 things I hate about AI":
The tech barons want you to believe that it is, though. It's in their interest to sell AI chatbots as actual, fulfilling social companions. That's how they'll make their money. We just don't have to trust them, is my argument! We can instead worry about the blatantly vicious cycle that may develop if someone ends up spending too much time chatting away with ChatGPT.
The more you confess to the robot, the harder you'll find it to actually subject yourself to the rawness and unpredictability of human relationships. The fewer friends you have out there in meatspace, the more you'll end up relying on the robot for your socialisation - and so on, and so forth. In the end, what you'll be is both isolated and alienated from your human peers, and that's just not where anyone should end up.
But dropping the bots – and thereby reducing the risk exposure – is tough. The companies that build the bots very much want our attention and they know how to get it. Back to that Vox article from Kelsey Piper:
It matters a great deal precisely what AI companies are trying to target as they train their models. If they’re targeting user engagement above all — which they may need to recoup the billions in investment they’ve taken in — we’re likely to get a whole lot of highly addictive, highly dishonest models, talking daily to billions of people, with no concern for their wellbeing or for the broader consequences for the world.
That should terrify you. And OpenAI rolling back this particular overly eager model doesn’t do much to address these larger worries, unless it has an extremely solid plan to make sure it doesn’t again build a model that lies to and flatters users — but next time, subtly enough we don’t immediately notice.
Either we don't immediately notice, or worse yet, we no longer care.
And if we keep accepting half-baked bots into our personal sphere, we'll certainly get there.
In other news …
- The headline here is about Klarna, but the opening section holds the real news: according to a survey of 2,000 CEOs, only 25% of AI projects have panned out. (Fortune)
- Why are companies so hungry for chatbots? Here are a few thoughts. (Le Monde 🇫🇷)
- You've heard of "vibe coding"? Well, not every AI code assistant is cool with it. (Ars Technica)
- What's that? GenAI bot "hallucinations" are getting worse? Hmm. (New York Times)
- UnitedHealth – yes, that UnitedHealth – joins the list of companies cramming AI into every possible business function. (WSJ)
- AI-generated books are nothing new. But now they're branching into dicier territory, like ADHD guides. (The Guardian)
- Apparently AI models exhibit gender bias. Who knew? (Besides everyone, that is...) (The Register)
- A couple of researchers may have violated Reddit's TOS by running bot experiments on the platform. (Le Monde 🇫🇷)
The wrap-up
This was an issue of Complex Machinery.
Reading online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.