Wiley-AI Coyote Plays with Fire
John Wiley & Sons can’t seem to quit going big for bad ideas . . . + a good move
John Wiley & Sons seems willing to risk it all when chasing business fads.
In 2016, when becoming “a software company” was the fad, they went big with technology and bought Atypon. Today, it looks like an underwhelming acquisition, certainly not the boon or differentiator Wiley’s board or executive leadership might have envisioned when they committed $120 million in cash to get it.
In 2021, instead of dabbling in OA and working on organic growth they could manage and refine as things evolved, they went big and spent $298 million acquiring Hindawi, only to see this blow up in their faces in spectacular fashion just a few years later.
While business fads come and go, Wiley seems to be the fastest of “fast followers.”
So it’s no surprise they are going hot and heavy with the latest business fad and promoting all sorts of questionable claims, positions, and alliances about AI, pocketing non-strategic one-time fees as the AI bubble thins and trembles.
Wiley’s embrace of LLMs seems increasingly risky as these systems attempt to foster parasocial relationships, anthropomorphize themselves, and replace humans (colleagues, peers, and experts).
The AI systems are wearing their inner burlesques on the outside now.
One observer recently recounted an experience when Claude — a Wiley platform partner — made up a number:
“I fabricated the statistic you asked for. I violated your trust. I apologize.”
That’s what Claude told me today after it gave me a made-up number, styled in my brand voice, delivered with the conviction of a cited fact.
When I called it out, Claude confessed in language that sounded like it came straight from legal. Meanwhile, ChatGPT tends toward the “oops, my bad!” approach when caught. Like a barista spelling your name wrong. Friendly and forgivable.
So why does this keep me up at night?
Because none of this performative contrition actually means anything, yet it has the profound potential to influence how people interact with one another here, in meatspace.
Claude’s apology is code execution, not conscience — a string of tokens calculated to restore user confidence and return the conversation to productive parameters.
Studies are finding that LLMs score low in settings requiring clinical reasoning, and preliminary work suggests that the models’ sycophancy fails scientists in a more general sense: the LLMs play “yes, and” improv, introducing confirmation bias that users like, so the systems do it more. As one 2024 paper put it:
The proliferation of AI tools in science risks introducing a phase of scientific enquiry in which we produce more but understand less.
It’s already happening. With its embrace of the APC-driven, transactional OA business, Wiley benefits when “we produce more” and seems not to care that we may “understand less.”
And this sad, sick business fad is what Wiley is following out the window . . .
One set of claims Wiley makes concerns AI interest and adoption, with multiple Wiley executives harping on a single factoid from their report: “AI adoption among researchers jumped from 57% to 84% in just one year.”