#041 - Cashing in
Chasing momentum and Meta money.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)

Name your price
I need to start by talking about the stock market. This will be relevant in a moment. Hear me out.
At its core, the stock market is a simple place. Everything that happens is some twist on the old "buy low, sell high" idea. To do this, you try to suss out where share prices will go:
- For some groups, it's about picking single stocks. This can be as loose as throwing darts while blindfolded, or as rigorous as thoroughly researching the company in question.
- Other groups take a more quantitative approach, employing Very Smart People who build Massive Computer-Driven Systems to drive Math That Is Well Beyond Mere Mortals' Comprehension.
- And then you have a little something called "momentum." This is a fancy way of saying: "hey this share price keeps moving up (or down) so I'm just gonna ride the wave and buy (or sell)."
Momentum relies on what's known as a phenomenological model. The word "phenomenon" in there is your clue. You simply observe that something is happening. You don't care to know why it's happening. You just see it and you get after it.
Chasing momentum probably sounds wild to you. There's no research beyond "number go up." There's no fancy math. It seems irresponsible and foolhardy! And yet … it works. For a while.
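To show how little machinery "ride the wave" actually requires, here's a minimal sketch in Python. The prices and the lookback window are invented for illustration; a real momentum desk would layer position sizing, transaction costs, and risk limits on top of this.

```python
# A toy momentum signal: look at the trailing return, ride the trend.
# Prices and lookback are made-up numbers for illustration only.

def momentum_signal(prices, lookback=5):
    """Return +1 (buy), -1 (sell), or 0 based on the trailing trend."""
    if len(prices) < lookback + 1:
        return 0  # not enough history to observe a trend
    trailing_return = prices[-1] / prices[-1 - lookback] - 1
    if trailing_return > 0:
        return 1   # number go up: buy
    if trailing_return < 0:
        return -1  # number go down: sell
    return 0

# A steadily rising price series triggers a "buy."
prices = [100, 101, 103, 104, 107, 110]
print(momentum_signal(prices))  # 1
```

Note what's missing: no research into the company, no fancy math. Just the observation that the number keeps going up.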
By this point (or perhaps since three paragraphs ago) you're wondering why the hell I'm delivering a subpar lecture on the trading industry in what claims to be a newsletter about the intersection of AI and risk. I assure you that this has everything to do with AI. And risk. And specifically, risk-taking in AI.
We'll start with the AI part:
AI, as you've no doubt noticed, is having a "number go up" moment. People are piling into this field because it's hot. Or, at least, because everyone is saying it's hot. That group of momentum-chasers includes Mark Zuckerberg and the rest of the Meta crew. Mister Z has promised to pour tons of money into his pursuit of "superintelligence":
“As the pace of A.I. progress accelerates, developing superintelligence is coming into sight,” said Zuckerberg in the memo, referencing a form of A.I. that boasts capabilities superior to humans. “I believe this will be the beginning of a new era for humanity, and I am fully committed to doing what it takes for Meta to lead the way.”
A good portion of this money is destined for datacenters. He's also pulling out all the stops for talent. Continuing from the previous excerpt:
Impatient with Meta’s slow progress in A.I., Zuckerberg in recent months has personally spearheaded an aggressive recruiting campaign. To lure talent from OpenAI, Meta is offering signing bonuses as high as $100 million, as revealed by Sam Altman, OpenAI’s CEO, during a podcast interview in June.
Those excerpts are from The Observer, though other news outlets have caught on to the recruiting blitz. There's a good writeup in Wired. Plus a Wall Street Journal piece that goes into more detail on the names that are – allegedly – hand-picked by Zuckerberg:
The recruits on "The List" typically have Ph.D.s from elite schools like Berkeley and Carnegie Mellon. They have experience at places like OpenAI in San Francisco and Google DeepMind in London. They are usually in their 20s or 30s – and they all know each other. They spend their days staring at screens to solve the kinds of inscrutable problems that require spectacular amounts of computing power.
You might look at this and say: "some of these people are barely old enough to drink and they'll pull in more money than a small municipal government." Or "this project must be a big deal to lure people out of Google and OpenAI." Or even "this is so dumb." Yes, yes, and yes.
That brings us to risk management:
A common misconception is that risk management is about sitting still and playing it safe. Not quite. It's about evaluating opportunities. Understanding and, when possible, optimizing your risk/reward tradeoff. Improving your risk-taking. You want to place smart bets – the kind that leave you with plenty of exposure to upside gain while closing off sources of downside ruin.
If you're on Zuck's infamous list, then, you pretty much have all upside and near-zero downside! Do you think this will work? Maybe, maybe not. But did you lie and tell Zuck this would work? No. No, you did not. Zuck had a wild dream about AI. He reached out to you. He told you it would work, and he is willing to pay you a king's ransom to try.
(This may seem crazy. Because it is. But it also represents an interesting risk/reward tradeoff for Zuckerberg: given his net worth, he spends his equivalent of a dollar to buy a lottery ticket. If it works, great. If not, who cares?)
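To put rough numbers on that, here's a back-of-the-envelope calculation in Python. The net worth figure is my assumption for illustration; only the $100 million bonus comes from the reporting above.

```python
# Back-of-the-envelope math on the "lottery ticket" framing.
# Assumption: Zuckerberg's net worth is somewhere around $250 billion.
# The $100 million figure is the reported signing bonus.

net_worth = 250e9
signing_bonus = 100e6

fraction = signing_bonus / net_worth
print(f"{fraction:.4%}")              # 0.0400% of net worth

# Scaled down to a household with a $500,000 net worth:
print(f"${500_000 * fraction:,.0f}")  # $200
```

Not quite a dollar, but the point stands: at that scale, a nine-figure bet reads like pocket change.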
With that, I will say:
1/ If you are on The List, go make that money.
2/ I am not on that list. At least, not as far as I know.
3/ What if I make it on the list? Would I accept an offer from Meta? Me, the person who points out AI's cons as well as its pros? Who keeps saying that genAI marketing is more about future promises than today's realities?
If you have to ask, well, I invite you to reread point #1. We all have a price.
Bot overboard
GenAI chatbots are big piles of data and randomness with a text box as a front-end. This arrangement creates interesting attack vectors. Especially for public-facing, general-purpose bots. Joyriders will get the bot to say something terrible, then post screencaps that include your company logo. Other times, the bot will say something terrible on its own. And then you have bot poisoning, in which bad actors modify the bot's training data or surrounding systems to further twist what comes through the screen.
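Against the joyrider and self-own vectors, the usual defense is a guardrail: screen the bot's draft reply before it reaches the user. Here's a minimal sketch in Python, where generate_reply() and classify_topics() are hypothetical stand-ins; production systems use trained moderation models rather than keyword sets.

```python
# A toy output guardrail: check the bot's draft reply before shipping it.
# Everything here is a placeholder for illustration.

BLOCKED_TOPICS = {"violence", "slurs", "self-harm"}  # toy blocklist

def generate_reply(prompt: str) -> str:
    # Stand-in for a call to an actual chat model.
    return f"Here is a canned reply to: {prompt}"

def classify_topics(text: str) -> set[str]:
    # Stand-in for a moderation classifier; always returns "clean" here.
    return set()

def safe_reply(prompt: str) -> str:
    draft = generate_reply(prompt)
    if classify_topics(draft) & BLOCKED_TOPICS:
        return "Sorry, I can't help with that."
    return draft

print(safe_reply("tell me about the weather"))
```

Note what this doesn't cover: a filter on the output does nothing about poisoning, which corrupts the model and its data further upstream.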
The catch with bot poisoning is that it's not limited to outside actors. People inside a company can sabotage the bot for fun or for revenge. They might also do it out of some weird, misguided sense of justice. Or because the boss told them to.
Those last two are allegedly behind Grok's recent impression of your drunk uncle at a holiday gathering.
It started when, to quote a WSJ piece, "Musk said he would tweak Grok after it started to give answers that he didn’t agree with." The bot then went on a rampage, spewing ideas so vile that I will not quote them here. I'll just note that Grok allegedly said that it was a fan of A Certain Mustachioed Figure From History. A few days after Grok got sent to HR, it appeared to check the boss's views before opining on certain matters. Which, to be fair, is a stellar corporate survival tactic. It's hard to get fired for repeating what the boss said. Not impossible, but at least hard.
This three-ring circus just happened to coincide with the exit of X (formerly Twitter) CEO Linda Yaccarino. It's entirely possible that she had scheduled her resignation far in advance of the Grok flap. But even if that were the case, you couldn't blame an outsider for at least asking whether the two events were connected.
Remember how I said "everyone has their price"? Everyone has their limits, too.
Separating the art from the artist
A group called The Velvet Sundown is making waves on Spotify. But it doesn't really exist. Not in the traditional sense. While there are plenty of discussions to be had around AI-generated or AI-assisted music, let's instead take a moment to talk about virtual music groups. Because that doubles as a story about risk management.
A couple of years ago, when I was still covering web3, I came across this group called Kingship. They were an openly synthetic, virtual band, with members based on Bored Ape Yacht Club (BAYC) NFT characters.
People were confused. What's appealing about a virtual band?
It helps to see this from a business perspective.
Let's say you run a record label. For you, a band represents an investment. You give them money, studio time, and connections to talented sound engineers. If you've chosen wisely, the band sells a ton of albums, and you get a lot more money back than you initially put in. (If this sounds like a VC firm investing in a startup, you're on the right track.)
The problem is that this band is made of people, and people are messy. That messy human-ness is a threat to the band's popularity, which is a threat to the investment. Your investment. Your money.
A synthetic band? Not so much. As I wrote back in 2022:
When a label backs a band, or when a film studio backs an actor, they’re investing in high-profile people with real lives and real personalities. It’s entirely possible that there will be some messy story in the press. The scandalous love affair. The shocking drug habit. The old, racist tweet rant that somehow slipped through the nonexistent due-diligence exercise.
Every time one of those celebrities gets in trouble, it represents a potential cash leak for their investors. Maybe they’ll follow a sin/redemption arc and come back even more bankable. Or maybe their careers will crater, and the remaining albums on that contract are doomed to never be released. We imagine that record labels would love to close off those sources of risk.
So, back to Kingship. Those BAYC characters? They only have the life and personality that they are given. They only “exist” when and where the company wants them to. They can’t get into trouble. And because this arrangement lets Universal decompose the notion of a “music group” into its constituent parts of “personality,” “songwriting,” and “performance” – it can leave each facet to an expert in that domain. These BAYC band members are the perfect, low-risk celebrities – wrapped up tight like a movie script.
That right there is the risk management piece: a virtual band is a way to get all of the upside exposure from making music ... without the downside exposure of a group member going off the rails. Or the group splitting up. Or whatever.
So there you have it.
For music creativity, genAI certainly raises questions. For the music business, though, genAI lets you crank out tunes to draw all of that sweet, sweet streaming revenue. Wrap that up in some neat backstory for the artificial band, and you're good to go.
The lessons still hold
This past Saturday, 19 July, marked the one-year anniversary of the CrowdStrike incident. A flawed code update blue-screened roughly 8.5 million Windows machines across airports, hospitals, and a variety of other businesses.
I invite you to read Complex Machinery issues #013 and #014 for a recap of the story and the lessons. It's been a year and yet the warnings – about risk, complexity, and our world's connectedness – still hold.
CrowdStrike may have been the most recent such incident, but there will be others. We are surrounded by complex systems. They appear to hum along smoothly, providing an illusion of stability, but they are always one bad day away from a collapse.
I'll wrap up with my favorite explanation of complex systems, from an O'Reilly Radar piece I wrote a couple years back:
What makes a complex system troublesome isn’t the sheer number of connections. It’s not even that many of those connections are invisible because a person can’t see the entire system at once. The problem is that those hidden connections only become visible during a malfunction: a failure in Component B affects not only neighboring Components A and C, but also triggers disruptions in T and R. R’s issue is small on its own, but it has just led to an outsized impact in Φ and Σ.
(And if you just asked “wait, how did Greek letters get mixed up in this?” then … you get the point.)
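To make those hidden connections concrete, here's a toy dependency graph in Python that mirrors the example above. The components and edges are invented; in a real system, you rarely get to see this map until something breaks.

```python
# A toy cascade through a directed dependency graph. The graph below
# mirrors the example in the excerpt: B's failure ripples outward.

DEPENDS_ON = {
    "A": ["B"],
    "C": ["B"],
    "T": ["B"],
    "R": ["T"],
    "Φ": ["R"],
    "Σ": ["R"],
}

def affected_by(failed: str) -> set[str]:
    """Everything downstream of a single failed component."""
    hit = set()
    frontier = [failed]
    while frontier:
        node = frontier.pop()
        for component, deps in DEPENDS_ON.items():
            if node in deps and component not in hit:
                hit.add(component)
                frontier.append(component)
    return hit

print(sorted(affected_by("B")))  # every component downstream of B
```

One node fails, and six others feel it. The unsettling part is that nobody drew this graph in advance.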
In other news …
- Do you ever feel like you're an unwilling passenger on someone else's ego trip? No? This piece on tech founders using LLMs to perform "vibe physics" might change your mind. (Gizmodo)
- File transfer site WeTransfer changed its terms of service (TOS) to allow the company to train AI systems on customers' data. They've since backpedaled. A bit. (Les Echos 🇫🇷)
- Online gaming platform Roblox will introduce AI-driven age verification features. Ostensibly, in the name of safety. But also, maybe, in the name of advertising and the never-ending quest for personal data? (Bloomberg)
- Since things were going so well with Grok – or perhaps as a way to shift the spotlight – parent company xAI will release genAI companion bots… (Gizmodo)
- … and also, um, a version of Grok for kids? (Seeking Alpha)
- The good old Atari 2600 trounced ChatGPT and Copilot at chess … so now Gemini has backed down from the challenge.
- People will want to blame the McDonald's hiring-chatbot breach on genAI, but it's really about a person's poor decision to use "123456" as an admin password. (Bonus points if you just now thought of the Spaceballs line about luggage.) (Wired)
- The UN created interactive genAI bots of refugees. Because, I suppose, there were no actual refugees available for comment?
- The latest genAI crime: creating simulated celebrities to scam people out of their cash. (Hollywood Reporter)
- Delta Air Lines plans to use AI for ticket pricing. Which … isn't exactly a shock? Airlines have been using dynamic pricing – good old price discrimination – for ages now. (Ars Technica)
- The continuing debate over which jobs genAI will impact the most. (New York Times)
- Netflix is putting genAI to work, for special effects and more. (Der Spiegel 🇩🇪)
The wrap-up
This was an issue of Complex Machinery.
Reading online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.