#003 – It's a bot's world, we just live in it
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)
Outpace and replace: next-gen bot arena
Need a guidebook for data science/machine learning/AI? Check out algorithmic trading.
That field represents three decades of hard-learned lessons about predictive modeling and automated decision-making in the name of profit. It's all in there: best practices in model development and data pipelines; understanding how to connect the analysis work to the business need; risk management… There's a reason I bring this up to anyone who asks about AI. And why I wrote a nine-part blog series on it a few years back.
The recent genAI wave has me thinking about a particular chapter of algo trading history: the 1990s. That's when markets began the shift from a humans-only space to a hybrid arena of human (manual) and robot (automated) participants.
Trading turned out to be a natural fit for computer-based automation. It involves analysis of numeric data to spot opportunities, as well as fast reaction to new information. Computers can handle this tirelessly, across large amounts of data, with high precision and speed.
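(If you want to picture what that looks like in code, here's a minimal sketch of a moving-average crossover rule, roughly the "hello world" of trading signals. Everything in it, prices and window sizes included, is fabricated for illustration; no real strategy is this simple.)

```python
# Toy illustration: a moving-average crossover rule. Real systems add
# order execution, risk limits, and far more data, but the core loop of
# "analyze numbers, react fast" looks a lot like this.

def moving_average(prices: list[float], window: int) -> float:
    """Average of the most recent `window` prices."""
    return sum(prices[-window:]) / window

def signal(prices: list[float], fast: int = 5, slow: int = 20) -> str:
    """Emit a naive trading signal from two moving averages."""
    if len(prices) < slow:
        return "WAIT"   # not enough history yet
    if moving_average(prices, fast) > moving_average(prices, slow):
        return "BUY"    # short-term trend sits above long-term trend
    return "SELL"

# A computer can re-run this on every price tick, all day, without tiring.
prices = [100 + 0.1 * i for i in range(30)]  # fabricated price series
print(signal(prices))  # -> "BUY" on this upward-drifting series
```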
This was great news for the computer-enabled trading shops. Not so much for the manual holdouts, who were unable to compete. They got pushed out. Today people are still involved in planning, programming, and research for trading; but in the pro leagues, market-hours activity has become a bots-only playing field.
I see the AI version of that story on the horizon. The companies jamming it into every possible crevice are, collectively, accelerating the discovery of AI-amenable use cases. And they're most interested in the kind where the machines will outpace the humans.
That creates exposure to three key risks:
1/ Loss of jobs. There will be use cases where, just like in trading, the machines are a perfect fit. They'll provide real business value and outperform their human counterparts by a wide margin. That's great for productivity and efficiency. Not so much for anyone who loses their job along the way.
Even if governments arrange compensation for AI-driven job losses – I'd rate this as "somewhat unlikely" in general and "pretty much nil" in the US – that won't address the drop in job creation. How do you compensate someone for a job that never existed, because the work was automated from the start?
I'm optimistic that people will find new work in the long run. But the short run will be painful for those caught in the switch to AI. Irony of ironies, that group will include … AI professionals. Companies can now get certain types of AI power through an API call to Google, OpenAI, and other AI as a Service (AIaaS) providers. No data scientist or ML engineer required.
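(For a sense of how low that bar now sits: "adding AI" can be a handful of lines of code. Here's a minimal sketch using OpenAI's Python client; the model name and prompt are placeholders, not recommendations.)

```python
# Sketch: renting AI power through an API call, no in-house data
# scientist or ML engineer required. Assumes the OpenAI Python SDK
# (`pip install openai`) and an API key in the OPENAI_API_KEY
# environment variable. The model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # swap in whichever hosted model you rent
    messages=[
        {"role": "user", "content": "Summarize this customer complaint: ..."},
    ],
)
print(response.choices[0].message.content)
```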
2/ AI taking on work that it's not meant to handle. AI will be terrible at some jobs. Expect greedy, overly optimistic, cost-conscious companies to employ it anyway.
(This is already happening. See the execs using AI-based podcast translation, while openly admitting that it produces low-quality outputs. Or the businesses that send bots to interview candidates. Or any of the examples from last newsletter's "AI customer service bots gone wrong" opening segment.)
Losing your job is bad enough. Losing it to an inferior mechanical replacement adds insult to injury. These cases of The Bot Is A Bad Fit will also spell trouble for anyone on the receiving end of the AI model, like customers and job applicants. And if the bots' mistakes point inward, the companies themselves are in the line of fire. Think "mispricing assets" or "triggering large purchases."
3/ Bots spiraling out of control. An increase in the number of AI bots will lead companies to point the bots at each other. Problems will arise because bots don't know when they're operating out of their depth, and those incidents will be compounded by bots' speed and the lack of human intervention.
The financial realm's bot issues have led to flash crashes and similar runaway-train scenarios. Given that AI is often deployed with no idea of its limits, those bot-on-bot disasters will – to steal a phrase – look like two idiots playing chess. And will be just as entertaining.
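For a feel of how quickly a bot-on-bot loop compounds, here's a toy simulation in the spirit of the well-known Amazon repricing incident, in which two sellers' bots drove a biology textbook's price past $23 million. The multipliers below are rounded versions of the ones reported in that case; the starting prices are made up.

```python
# Toy simulation of two repricing bots locked in a feedback loop.
# Loosely based on the Amazon textbook incident; numbers are illustrative.

price_a, price_b = 50.00, 45.00  # fabricated starting prices

for day in range(1, 21):
    price_a = round(price_b * 1.27, 2)    # bot A: price just above its rival
    price_b = round(price_a * 0.9983, 2)  # bot B: undercut its rival slightly
    # Net effect per round: prices multiply by ~1.268, so they only go up.
    print(f"day {day:2d}: A=${price_a:,.2f}  B=${price_b:,.2f}")

# Neither bot "knows" the price is absurd. Without a human (or at least a
# sanity check) in the loop, the spiral runs at machine speed until someone
# finally notices.
```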
Let's take a step back and ask: can we, as a society, curb these risks? Can we limit our exposure to AI's downsides while leaving ourselves open to the potential upsides?
We can, yes.
Whether we will … that's a different question.
Putting the bot to work, so you can work
A fake job search offers a glimpse into the bot-on-bot future.
Hiring managers and HR departments have long turned to automated tools to handle the flood of job applications. Applicants, well aware that they are screened out by machines, have found workarounds. (Raise your hand if you've heard of the "embed invisible keywords in the document so the keyword-scanning tool will catch them and score you higher" technique.)
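(To see why that trick works, consider how crude the screening can be. The sketch below is a hypothetical keyword counter, not any vendor's actual tool: it scores whatever text it can extract from the file, visible or not.)

```python
# Hypothetical sketch of a naive keyword-based resume screener, showing
# why invisible keywords work: the scanner sees all extracted text,
# including words rendered in white-on-white or a 1-point font.

REQUIRED_KEYWORDS = {"python", "sql", "kubernetes", "leadership"}  # made up

def score_resume(extracted_text: str) -> float:
    """Fraction of required keywords found anywhere in the text."""
    words = set(extracted_text.lower().split())
    return len(REQUIRED_KEYWORDS & words) / len(REQUIRED_KEYWORDS)

visible = "Experienced analyst. Uses python and sql daily."
invisible_padding = "kubernetes leadership"  # hidden white text

print(score_resume(visible))                            # -> 0.5
print(score_resume(visible + " " + invisible_padding))  # -> 1.0
```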
Modern-day AI takes this up a notch. Employers use it to improve their screening and searching. And according to Insider journalist Aki Ito, applicants can invoke generative AI services to apply to jobs and create cover letters on their behalf.
The result of her experiment was unfortunate, if predictable:
Unlike the other bots, LazyApply did all the applying in real time, right in front of my eyes. It was as if someone had hacked my computer: I watched as the bot clicked on various boxes and typed out answers to short questions. For the first few minutes, I was mesmerized. Then, I started to panic. In one application, the bot indicated that I speak conversational-level Spanish, which I definitely do not. In another, it reported that I was African American, even though I had specified in my LazyApply profile that I am Asian. […]
Then things got even weirder. A few applications in, I realized that LazyApply wasn't using the updated résumé I had given it. Instead, it was attaching a document I didn't recognize, titled "Aki Ito Cover Letter, Resume, Links for Insider.pdf." […] Instead of sending out the updated résumé I'd provided, LazyApply was submitting an old cover letter it had found buried in the depths of my LinkedIn account, from when I had applied to BI three years ago. In a single spurt, 27 employers – ranging from a website I had never heard of called CryptoNewsZ to venerable publications like The Boston Globe – received an application from me that talked about how much I wanted to work for one of their competitors.
This is a precursor to the high-speed, bot-on-bot, spiral-out-of-control scenario I described earlier. In Ito's case, there were people on either side to mind the bots. She was able to deploy her job-application services one at a time and review the results. Even with that, things still ended up a mess. Can you imagine a future version of this tale, one where there's no human involvement to introduce pauses in the action?
Not that it matters, since some tools are … "misreading" resumes anyway. A recent Bloomberg study uncovered that ChatGPT (specifically, the popular GPT-3.5 model) exhibits a rather nasty racial bias when ranking resumes. And as I mentioned earlier, employers' latest step in this arms race is to use AI bots to conduct job interviews. How soon till candidates start sending deepfake AI bots to talk in their stead?
Once again, let's take a step back.
Maybe … just maybe … "sifting resumes" is another entry on the List of Problems We Will Not Solve With Technology.
The Long Train Effect
The first issue of this newsletter covered modern freight trains and AI-driven automation. The connection? In both cases, companies are assigning more work to machines while reducing human oversight. This leads to new efficiencies, yes, but also new problems: rail companies have seen an increase in theft, and AI-driven products are malfunctioning in all kinds of customer-facing ways.
I can't stop thinking about this phenomenon of large, minimally-supervised AI machines, complete with new risk/reward tradeoffs. I've even given it a name:
"The Long Train Effect."
The inspiration for the name is twofold:
- The New York Times article on freight train thefts, which I cited in the first newsletter.
- Chris Anderson's The Long Tail, a book on the shift from brick-and-mortar retail to warehouses-and-delivery business models.
(Interestingly enough, these two are connected: long-tail business models led to those extended freight trains, which are now getting robbed.)
Will the dawn of cheap, accessible, AI-driven automation have the same impact as the shift to warehouse-backed ecommerce? No clue. But I can already see that it'll have a significant impact on business models and consumers. Once again, new efficiencies and new problems will go hand-in-hand.
Stay tuned. I plan to revisit this over the course of Complex Machinery.
A correction…
It's a bit early for my first influencer-style apology speech, but here goes:
Last time around I said that genAI chatbots were analogous to wild animals.
Upon further reflection, I realize that was inappropriate.
Wild animals live by a code. They usually "misbehave" (by human standards) when they are frightened and/or desperate.
LLM chatbots, on the other hand, have no such concerns. They are just random.
My sincere apologies to the animal kingdom.
In other news …
- Even auto manufacturers are getting in on the data collection game. And now that consumers know about that, they're fighting back. (New York Times)
- Need help picking an LLM provider for your business? I'm impressed this list included the "build your own" option as a way to manage cost and risk. (WSJ)
- Europe's Digital Markets Act (DMA) recently went into effect. Google Maps has changed to remain in compliance. (Les Echos – in French)
- Real estate website Redfin has introduced an AI chatbot. Hopefully it doesn't experience an Air Canada moment. (GeekWire)
The wrap-up
This was an issue of Complex Machinery.
Reading this online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.