#017 - Stacking the deck
Machines conquered Wall Street, then learned to play a mean game of poker.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)
Games machines play
Certain games have proven stubbornly resistant to computerized players. Getting a machine to win such a game, then, is hailed as a victory for the underlying programming language or modeling technique. We have IBM's Deep Blue for chess, DeepMind's AlphaGo for Go, and then Neo for poker.
You've probably never heard of that last one. Neither had I, until I caught this fascinating Bloomberg piece on AI-driven poker bots. It's a longer read that covers the history and the people. Today I'll only touch on a couple of points.
The first point is that, unlike chess and Go, poker is played for money. Research into poker robots therefore comes with an incentive beyond the pure pursuit of knowledge. More than a decade ago, Neo's creators built a system that could win at scale:
[The Neo team had] managed to substitute the human talent in their operation with an alternative that didn’t need to eat or sleep; that could connect automatically to a platform with minimal supervision by the founders and their friends; and that could sift through millions of potential scenarios to find the best move from a 3-terabyte database of past games, right down to exploiting a given opponent’s tendencies based on their record of play.
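(To make that concrete, here's a toy sketch in Python – my own invention, emphatically not the Neo team's code – of what "exploiting a given opponent's tendencies based on their record of play" might look like. The table, the stats, and the threshold are all made up.)

```python
# Toy illustration: look up an opponent's history in a database of past
# hands and lean on their tendencies. Not the Neo team's actual system.
import sqlite3

# In-memory stand-in for the (reportedly 3-terabyte) database of past games.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE hands (opponent TEXT, facing_raise INTEGER, folded INTEGER)")
db.executemany(
    "INSERT INTO hands VALUES (?, ?, ?)",
    [("player_42", 1, 1), ("player_42", 1, 1), ("player_42", 1, 0),
     ("player_42", 1, 1), ("player_42", 0, 0)],
)

def fold_to_raise_rate(opponent: str) -> float:
    """Fraction of the time this opponent folded when facing a raise."""
    row = db.execute(
        "SELECT AVG(folded) FROM hands WHERE opponent = ? AND facing_raise = 1",
        (opponent,),
    ).fetchone()
    return row[0] or 0.0

# Exploit the tendency: raise more often against players who over-fold.
rate = fold_to_raise_rate("player_42")
action = "raise" if rate > 0.6 else "call"
print(f"player_42 folds to raises {rate:.0%} of the time -> {action}")
```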
My second point is that the bots didn't just walk off with human opponents' money; they changed the way the game is played. Competitors had to be less human in their playing style to keep up with the machines:
[Because of AI, poker] is now less about psychology, spectacular bluffs or calls, and more about revealing as little as possible to opponents and grinding out the percentages. Machines have taught us to play better, more boring poker.
Machines calculating probabilities across tons of historical data? Secretive bots, unemotionally grinding out lots of small wins which add up in volume? We could easily be talking about trading here. Especially high-frequency trading (HFT). All of that aligns with something I said last week:
If this sounds like the bot-on-bot world of algorithmic ("electronic," "computerized") trading, that is precisely where my mind is going. Or, better put: that's where my mind naturally gravitates when I think about AI. As I've noted elsewhere, the story of computers moving into financial markets – from the introduction of Reg NMS, to runaway trades and flash crashes, and everything in between – will tell us a lot about where our AI-enabled world is heading. Social media and trading are worldwide marketplaces of human decision-making. They are a natural fit for large-scale quantitative analysis and automation.
Hmm.
First trading, then online poker. What will the machines conquer next?
It helps to understand why computers did so well on Wall Street:
Same name, new game
Trading is a world awash in numbers, analyses, and pattern-finding. In the pre-technology era, humans did this work just fine. But then computers arrived, doing the math better, faster, at a larger scale, and without catching a case of nerves. Code could react to market data changes so quickly that network bandwidth, not processor speed, became the limiting factor. In every aspect of the game – from parsing price data to analyzing correlations to placing orders – humans found themselves outpaced.
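Here's a deliberately tiny sketch of that reaction loop – every name, number, and threshold below is hypothetical, and real trading systems are vastly more involved:

```python
# Simplified sketch of machine-speed trading: watch a stream of price
# ticks, compare each to a short rolling average, and "place" an order
# the instant the price deviates enough. Illustrative only.
from collections import deque

WINDOW = 5          # ticks in the rolling average
THRESHOLD = 0.01    # 1% deviation triggers an order

recent = deque(maxlen=WINDOW)

def on_tick(price: float) -> str | None:
    """React to a single price update -- no coffee breaks, no nerves."""
    recent.append(price)
    if len(recent) < WINDOW:
        return None                      # not enough history yet
    avg = sum(recent) / WINDOW
    if price < avg * (1 - THRESHOLD):
        return f"BUY  @ {price:.2f}"     # price dipped below trend
    if price > avg * (1 + THRESHOLD):
        return f"SELL @ {price:.2f}"     # price spiked above trend
    return None

for p in [100.0, 100.2, 99.9, 100.1, 100.0, 98.5, 100.0, 102.3]:
    order = on_tick(p)
    if order:
        print(order)
```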
To understand what this meant for 1990s-era traders, imagine you're a chess pro sitting down for a game. Except the board now extends to fifty dimensions and your opponent can make multiple moves without waiting for you to finish your turn. They react to your confused facial expression by explaining: the pieces could always do this; you just weren't able to move them that way. That was the shift from open-outcry ("pit") trading to the electronic variety. Human actors were displaced overnight. It just took them another few years to accept it.
AI – the whole field, not just genAI – will no doubt have a similar impact on some industries. It will prove painfully ineffective in others (as evidenced by the half-baked corporate AI rollouts over the last couple of years). Where does your profession fall on that spectrum? Three questions will help you sort that out:
- Do you need to automate decisions?
- Does a historical dataset provide enough context for a computer to find the patterns to make those decisions?
- Can you afford some degree of error, since this is ultimately a probabilistic operation?
If you say "yes" to all of the above, then AI is worth a try. And if you earn a living with that kind of work, you now have a head start on figuring out your next steps.
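For the programmers in the audience, here's the checklist as a toy function – hypothetical, of course. The hard part is answering the questions honestly, not writing the code:

```python
# A toy encoding of the three-question checklist above.
def worth_trying_ai(
    need_to_automate_decisions: bool,
    history_reveals_the_patterns: bool,
    can_tolerate_some_error: bool,
) -> bool:
    """AI is worth a try only if all three answers are 'yes'."""
    return all([
        need_to_automate_decisions,
        history_reveals_the_patterns,
        can_tolerate_some_error,
    ])

# Example: lots of decisions, good historical data, but zero error tolerance.
print(worth_trying_ai(True, True, False))  # -> False: AI is a poor fit here
```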
(I'll have more to say about AI's impact on jobs in the next newsletter. Stay tuned.)
Careful with those defaults
LinkedIn – or as I call it, TikTok for Suits – has been feeding your congratulatory notes and influencer posts into its genAI efforts. The site pulled that all-too-common tech-company move of quietly introducing the change while setting the default to "On."
(To turn it off: Settings > Data privacy > How LinkedIn uses your data > Data for generative AI improvement. You're welcome.)
The discovery of this so-called feature inspired a herd of articles, most of which have been shared on LinkedIn itself. It must be a weird experience for their product analytics team. I can just imagine the panicked meeting:
"What do you mean, 'people are turning it off en masse'? How do they even know about it?"
"Dude, this is making big headlines. It now tops our Trending Topics section."
"That explains this message from the genAI team. They say the entire dataset is posts on how to turn it off…"
You're welcome to laugh at LinkedIn's unwanted time in the spotlight. Just do yourself a favor and check your product's default settings. Anything in there you want to change? Now's your chance.
Unreasonable reasoning
(Many thanks to a certain someone who riffed with me on this. You know who you are.)
OpenAI, maker of ChatGPT, recently released a new model. Officially named o1 but better known as "Strawberry," it apparently has the ability to reason things out.
This is a step in the right direction. Instead of just asking a bot to evaluate something, you could check the reasoning to make sure that it's sound. Model explainability is a pillar of model transparency, and transparency makes these things more suitable for sensitive decisions. Perfect.
Except … maybe not. OpenAI doesn't want people to see how the model reached a decision. (Maybe because Strawberry is willing to lie in pursuit of its assigned goals?)
That may limit its use. I'm thinking of this in light of US consumer lending laws, which require an entity to explain why it rejected an application for credit. I like this approach because it doesn't mandate or forbid a particular technology; it just makes it clear that you will be held accountable for your technology's actions. There's no "computer says no" card.
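To show what "explain why it rejected an application" can look like in practice, here's a minimal sketch of a transparent scorecard – all factor names, weights, and the cutoff are invented. The point is that every rejection comes with its biggest contributing factors attached, which is exactly what a sealed-off reasoning trace can't offer:

```python
# Hedged sketch: one way a lender with a *transparent* model can explain a
# rejection. A linear scorecard makes each factor's contribution explicit.
# All names, weights, and the cutoff are made up for illustration.
WEIGHTS = {
    "income_thousands":        0.8,
    "years_of_history":        1.5,
    "recent_missed_payments": -4.0,
}
CUTOFF = 50.0
BASE_SCORE = 40.0

def decide(applicant: dict) -> tuple[bool, list[str]]:
    """Return (approved, reasons-for-rejection sorted by impact)."""
    contributions = {
        factor: WEIGHTS[factor] * applicant[factor] for factor in WEIGHTS
    }
    score = BASE_SCORE + sum(contributions.values())
    if score >= CUTOFF:
        return True, []
    # Adverse-action style reasons: the factors that dragged the score down most.
    reasons = sorted(contributions, key=contributions.get)[:2]
    return False, reasons

approved, reasons = decide(
    {"income_thousands": 20, "years_of_history": 1, "recent_missed_payments": 4}
)
print("approved" if approved else f"rejected -- top factors: {reasons}")
```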
I wonder whether OpenAI could be compelled by court order to reveal the model's reasoning behind a particular decision. I don't know. Laws around tech tend to be murky at best, and I lack a law degree. That said, I bet OpenAI's legal counsel preemptively closed the door on this discussion by … updating the TOS. You know, forbidding the model's use in situations where a judge could force them to peel back the curtain.
Companies could still sneak in some rogue Strawberry calls. But in the event of a lawsuit requiring access to the model's reasoning, they'd find themselves between a rock and a hard place. Better known by the very technical legal term Not OpenAI's Problem.
CheatGPT
I mentioned earlier that getting computers to play certain games is hailed as a victory. Closely related is getting computer-based games like Doom to run on random hardware.
Someone has taken that up a notch by getting ChatGPT to run on the TI-84 student calculator.
I don't have anything else to add there. I could have stashed this link under "In other news…" but I made this a standalone segment. All so I could use the title "CheatGPT."
In other news …
- Salesforce is going all-in on AI taking jobs. I'll have more to say about this in the next newsletter. (Bloomberg)
- Related, ~~truth-teller~~ comedian John Mulaney has told the audience at Dreamforce – the annual Salesforce conference – what he thinks of them and their AI efforts. (San Francisco Standard)
- Studio Lionsgate is getting into the genAI-for-film-stuff game. (Variety)
The wrap-up
This was an issue of Complex Machinery.
Reading this online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.