Our New Affirmation Businesses
As AI and OA become intertwined, are we just affirming anything for money?
On October 20th, Anthropic announced its LLM for the life sciences. I covered it the next day, noting among other things the inanity of their video captions (“Turning Claude into a scientist” is ridiculous anthropomorphism in the extreme).
This was apparently part of a larger marketing push, as at least two science podcasts — “StarTalk Radio” and “Science Vs.” — ran ads for Claude at the top of their episodes this past week.
Podcasts are mostly built on the ad model, so this was certain to happen at some point. However, I wasn’t ready for how repellent I found the ads to be. For Neil deGrasse Tyson to be introduced by a company making silly science claims is pretty bad, but this is what the advertising model does — it forces individuals and companies to compromise their standards, sometimes without realizing it.
There is something profoundly anti-science about the entire technology-personalization-information game — from personalized ads to personalized responses from chatbots. It’s captured well in a recent essay called “The Validation Machines” by Raffi Krikorian, the Chief Technology Officer of Emerson Collective, who writes in The Atlantic:
Designed to flatter and please as they encourage ever more engagement, chatbots don’t simply answer our questions; they shape how we interact with them and decide which answers we see—and which ones we don’t.
The most powerful way to shape someone’s choices isn’t by limiting what they can see. It’s by gaining their trust. These systems not only anticipate our questions; they learn how to answer in ways that soothe us and affirm us, and in doing so, they become unnervingly skilled validation machines.
This is what makes them so sticky—and so dangerous.
And patently unscientific. How can you discover new things if the machines weight their responses so the answers are more likely to meet with your approval — so you keep using the machines?
The “ad model in disguise” we call Gold OA has itself corrupted scientific publishing at many levels. It has confused our purpose, enabled bad actors, undermined publishing strategies, and led to scads of bad papers polluting our discovery systems.
Perhaps more importantly, it also provides so many papers of so many types on so many topics with such a variety of conclusions and approaches that nearly anyone can publish anything — which means that nearly anyone can find anything to justify their opinion or argument with a “peer reviewed” claim.
The scientific literature has become its own validation machine — with “gaslight” journals, unvetted claims in preprints, and now our own bots able to snow others with what seems like justification but which is really validation in disguise.
One such paper came across my desk the other day, and it shows how structural the corruption has become. As seems to be a theme lately, it’s an OA paper in a Springer Nature journal. It has one author. It’s full of bad grammar and inarticulate writing. Its premise makes little sense as phrased, and it has a headline designed to attract clicks.
Sadly, that’s just the beginning of the problems here . . .