AI Surveys Overcome By Events

Elsevier’s and Wiley’s AI surveys land with a thud as a battle rages around them

Marketing AI in science is a treacherous job. A basic question — “Why do it at all?” — is lurking around every corner. Some of us sniffed out early that it was a slippery eel composed of hype and greed, and this seems to be bearing out.

But plans were put in place, and surveys were conducted. Why? To make the companies look smart? To court faddish risk? To justify tech investments? To goose stock prices? Yes, yes, yes, and yes.

I recently wrote about Wiley’s approach to AI technologies in science, which ignored the defunding of US science, fiddling with AI while science burned.

Today, Elsevier released its own report centered on AI in scientific research.

  • Oddly, no full-throated condemnations of chaos at the CDC, NIH, or FDA. Hrmmm.

Comparing the two is interesting for a hot minute. Both surveys were fielded around the same time (Wiley's from mid-August to early September, Elsevier's in August), and both drew a few thousand respondents. But the reports are quite different. Elsevier's could stand on its own without any mention of AI.

The two reports share four data points, at least in a general sense:

  • The overall rate of utilization of AI among respondents
  • The belief that AI will save time in 2-3 years
  • The level of concern about AI models
  • The percentage who believe AI tools were ethically developed

The two surveys' numbers are close on the middle two items, more divergent on the first and last. And pooling the data isn't really valid, since there's no indication that the questions, respondent pools, or overall survey designs were comparable. Still, since market research is imprecise and mainly directional, it's fun.

  • Also, it's fun to contemplate the effects that running surveys in August might have on response pools. Aren't major swaths of academia on vacation?

But while these surveys are being shopped around, two more major problems with AI in science are coming to the fore.

First, there's something we talked about earlier with Seth Leopold, MD, Editor-in-Chief of Clinical Orthopaedics and Related Research (CORR): a proliferation of fake letters to the editor (LTEs). A report in Science shows these are occurring across the major journals, and likely beyond, clogging editorial workflows and ruining post-publication discussions. This isn't necessarily new information, but it's worth underscoring that the problem isn't solved and is probably growing.

Then, arXiv has been forced to adopt a policy barring the submission of review and position papers until they've been published in journals, because “arXiv has been flooded with papers. Generative AI / large language models have added to this flood by making papers – especially papers not introducing new research results – fast and easy to write.”

We’ve seen this with Tylenol, “gaslight” journals, and “gaslight” preprints. Again, nothing new, but we’re not changing the core elements that would shut this down — pay-to-play publishing, open access publishing, and unreviewed preprints. If those elements were eliminated from the field of play and science were once again truly for scientists, a lot of these problems would evaporate.

So, while marketing departments survey researchers about their AI feels during high vacation season (Elsevier, frankly, produced a much heftier set of insights, one that would only improve if the AI aspects were de-emphasized or excised entirely), editors, staff, reviewers, and scientists are suffering from the proliferation of these systems into the sciences.

Survey says? Who cares? The evidence all around us is that AI is causing scientists, scientific communities, and scientific communicators real problems.

Deal with it.

