#016 - Packaging up crime and mischief
AI use cases, as driven by fraud and stunts. And squeeze bottles.
You're reading Complex Machinery, a newsletter about risk, AI, and related topics. (You can also subscribe to get this newsletter in your inbox.)
Crime Rules Every AI Around Me
(With apologies to Wu-Tang Clan.)
The search for meaningful AI use cases continues to be a game of chance. Recent weeks have offered mixed results:
- British retailer Marks & Spencer using AI in a shopper's guide. (Pretty good.)
- Writing code. (Maybe. There are other considerations here. I'll come back to this in a future newsletter.)
- Autonomous vehicles. (Also a maybe, as there are still people involved behind the scenes.)
- Getting quotes for a movie trailer. (Not so much. Not sure why this even happened, but it did.)
- Detecting emotion and weighing in on who gets unemployment benefits. (Bad. Bad bad bad.)
Trying a little bit of everything is one way to explore the solution space. Dishonesty is a far more efficient guide, as crime is an eager early adopter of emerging tech. It knows to push past the hype and flat-out ask: "how can you make my job easier?"
Take Michael Smith, who allegedly used genAI to bilk $10M out of music streaming services:
Joining forces with the chief executive of an A.I. music company and a music promoter, neither of whom was named in the indictment, Mr. Smith created a staggering catalog of bogus songs, uploading thousands to streaming platforms each week.
I'm surprised the streaming services are so far behind on catching fraud of this scale. But that's a story for another day. Mostly, I'm impressed. Push aside the ethical issues and you'll see that Smith could ply his trade as an AI strategist:
- He noticed a problem – oh hey I could use some more money
- He came up with a solution – people get paid for publishing songs
- He figured out how genAI could connect the two – a machine could create tons of songs, for which I would get paid
This wasn't some silly boondoggle of an AI project. It was a situation that genuinely called for the technology. How many corporate AI efforts pass this test? Not nearly enough.
(It's also a sign that massive, AI-driven fraud is the only way a musician can make money on Spotify. Perhaps they could try OnlyFans instead. Creators there pocket 80% of fan revenue, yet the platform can still return $600m in dividends to the company's owner.)
But what the genAI giveth, the e-mail trail taketh away:
The co-conspirator is said to have supplied him with thousands of tracks a month in exchange for track metadata, such as song and artist names, as well as a monthly cut of streaming revenue.
"Keep in mind what we're doing musically here... this is not 'music,' it's 'instant music' ;)," the executive wrote to Mr Smith in a March 2019 email, and disclosed in the indictment.
Citing further emails obtained from Mr Smith and fellow participants in the scheme, the indictment also states the technology used to create the tracks improved over time - making the scheme harder for platforms to detect.
Oops.
I am not an attorney, but I wager "Don't Crime" is the best path in business.
If you insist on criming, heed the warning of The Wire's Stringer Bell: "Is you taking notes on a criminal conspiracy?"
To be clear: the answer should always be "no."
Chief of mischief
When it comes to surfacing use cases, mischief is a close runner-up to dishonesty.
Damon Beres, a writer at The Atlantic, unwittingly turned a random act of distraction into a runaway hit. All because he used genAI to put Kermit the Frog in his phone's app icons. I don't know why social media cared that Beres had changed the icons – and I'll underscore, he only changed his own icons; it's not like his work affected anyone else's phone – but people sure cared a lot.
Reflecting on his time as a social media star, Beres notes:
The effort was entirely about entertaining myself and getting engagement [...]. It's no wonder that social-media companies are pushing generative AI; the technology feels like it offers both a way to melt time and a shortcut to the kind of numbers-go-up posting that makes these networks so compulsively usable.
Yep. Ragebait is grist for social media's haterade mill. And most ragebait stems from someone daring to bring those "wouldn't it be funny if…?" moments into (near-)reality. With genAI you pop those ideas into a text box, wait a few seconds, and get a somewhat believable result suitable for sharing online.
Surprisingly enough, I'm none too worried about a flood of unlabeled genAI images in a year when half the world is heading to the polls. True believers will simply accept what they see, labels or no.
What worries me is the thought of machines taking other jobs in the ragebait factory. Add some analytics and you can programmatically surface what's most likely to go viral, then have another bot shepherd the idea through the public sphere. Maybe throw in a few bots to disagree with the first and get the engagement going – the social media equivalent of a pump-and-dump scam. (We're partway there already. Some AI models really want to troll people. And Beres noted that plenty of bots joined the angry mob behind his Kermit-icons post.)
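To make that concrete, here's a minimal sketch of the pipeline I'm describing. Everything below is hypothetical – the names, the functions, the scoring – invented for illustration, not lifted from any real platform or bot farm:

```python
# Hypothetical ragebait pipeline. Nothing here reflects a real platform's
# API or any actual bot operation; every name is invented for illustration.

import random
from dataclasses import dataclass


@dataclass
class Candidate:
    text: str
    predicted_engagement: float  # stand-in for a learned virality score


def score_candidates(ideas: list[str]) -> list[Candidate]:
    """Pretend analytics: assign each idea a virality score at random."""
    return [Candidate(idea, random.random()) for idea in ideas]


def shepherd(post: Candidate, reply_bots: int = 3) -> None:
    """Publish the winning candidate, then have bots pick a fight with it."""
    print(f"[poster-bot] {post.text}")
    for i in range(reply_bots):
        print(f"[reply-bot-{i}] Hard disagree. Here's why...")


ideas = [
    "wouldn't it be funny if every app icon were Kermit the Frog?",
    "wouldn't it be funny if a cat ran for city council?",
]
best = max(score_candidates(ideas), key=lambda c: c.predicted_engagement)
shepherd(best)
```

Swap the random number for a trained engagement model and the print statements for platform API calls, and the whole loop runs without a human in it.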
If this sounds like the bot-on-bot world of algorithmic ("electronic," "computerized") trading, that's precisely where my mind goes when I think about AI. As I've noted elsewhere, the story of computers moving into financial markets – from the introduction of Reg NMS, to runaway trades and flash crashes, and everything in-between – will tell us a lot about where our AI-enabled world is heading. Social media and trading are worldwide marketplaces of human decision-making. They are a natural fit for large-scale quantitative analysis and automation.
And also, fraud. Lots of fraud.
More than a text box
Amanda Mull (formerly of The Atlantic, now at Bloomberg) writes about consumer behavior. Her latest piece is about packaging for olive oil.
This part caught my eye:
[Olive oil purveyor Graza's] highly recognizable, easily maneuverable squeeze bottles, which allow novice cooks to mimic the way the pros use cooking oil in restaurant kitchens, have proved to be a gateway to a type of buyer the zillion other olive oils never worked all that hard to woo. Graza’s labels are bright green and cheerful, one depicting a cartoon woman doing an over-the-head, behind-the-back maneuver to get Sizzle into her pan. [...] America is full of people who don’t know a great deal about food but wish they were better, more confident cooks, and for these buyers wandering a grocery store, Graza’s packaging is both a calling card and an instructional manual. The squeeze bottles are the kind of move that seems almost gallingly simple in retrospect. Sometimes packaging is just as much the product as what’s contained inside it.
What does an article about squeeze bottles have to do with AI? A lot.
In short: packaging matters. As Mull points out, a product's packaging catches our eye, tells us how it works, and can even inspire us to overcome our apprehension about using it.
That's why the power of an AI-enabled app isn't so much the AI model itself; it's the layers of UI/UX, code, and infrastructure that make the AI accessible to people. This packaging provides a structured, even guided interaction with the model. (Which is why VC firm Sequoia Capital is pouring money into that arena.)
Then we have the plain old text box for issuing prompts to LLMs. It's a kind of packaging – but it also kind of isn't. The prompt entry box isn't a guide. It doesn't do much unless you already know what this thing is and how you want to use it.
The genAI chatbot text box is raw in a way that reminds me of the early days of Amazon Web Services. You want on-demand, metered access to storage and virtual server instances? Sure! Just use the commandline, or write custom code to handle the XML-based API calls. (Hence the "Web Services" part of the name. Remember SOAP, anyone?)
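If you missed that era, the sketch below captures the vibe. The XML schema here is invented for illustration – it matches no real AWS response – but the hand-rolled parsing is true to the spirit of the time:

```python
import xml.etree.ElementTree as ET

# A stand-in for the XML a metered web service might have returned circa
# 2008. This schema is made up; it is not a real AWS response.
FAKE_RESPONSE = b"""\
<LaunchInstanceResponse>
  <instance><id>i-0001</id><state>pending</state></instance>
</LaunchInstanceResponse>"""


def parse_instance_id(xml_body: bytes) -> str:
    # The real workflow also meant signing and POSTing the request yourself,
    # then hand-parsing whatever XML came back.
    root = ET.fromstring(xml_body)
    return root.findtext("instance/id")


print(parse_instance_id(FAKE_RESPONSE))  # i-0001
```

Multiply that by every resource type you wanted to manage, and you can see the barrier.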
Like many tech professionals at the time, I found myself at that weird intersection of being comfortable with the commandline and more than capable of writing web services code, but lacking in time and energy. I wanted to deploy some virtual servers for my apps, and wire up services like asynchronous messaging. The raw AWS (just-barely-)packaging represented a barrier to entry.
The introduction of the AWS web management console in 2009 changed all of that. Not only did it provide a visual way to define cloud resources, it made it easier to confirm what resources were running. (That was the downside of that whole "metered access" thing: everyone has a story of being burned by a forgotten AWS instance reappearing on the monthly bill – either their own experience, or one step removed. Fifteen years later I still have a weekly reminder to check my AWS billing statement.)
What will be the next squeeze bottle or web management console for AI models? I don't have an answer for this. But I know that new UIs for genAI will open doors. Getting that power in the hands of new audiences will surface new, and hopefully better, use cases. (We'll also get the next-gen Driving Social Media Wild With Kermit Icons stunts. Let's just be honest.)
Packaging is everything.
Next up
This newsletter was starting to run long, so I'll save the segment on spotting rogue AI for next time. Hopefully by that point I will have also had a chance to experiment with the new "Strawberry" GPT models.
Stay tuned.
In other news …
- At a time when creators are on edge about genAI, seeing it embraced – well, not-officially-rejected – by the NaNoWriMo team was an unexpected twist. They softened the official statement a few days later but the damage was done. (CNET News)
- Investors are falling in love with AI as a money-maker. And people are falling in love with AI-based voices and personalities. (Vox)
- AI's free-for-all will eventually come to an end. What will regulation of AI mean for the companies that build and use it? (WSJ)
- A Microsoft corporate VP talks about reworking budgets to accommodate AI projects. (The Register)
- OpenAI forbids certain kinds of custom chatbots in its marketplaces, but many of them sneak through anyway. (Gizmodo)
The wrap-up
This was an issue of Complex Machinery.
Reading this online? You can subscribe to get this newsletter in your inbox every time it is published.
Who’s behind Complex Machinery? I'm Q McCallum. I think a lot about AI and risk, which I write about here.
Disclaimer: This newsletter does not constitute professional advice.