Two Items for Monday

We've overinvested in OA, and using AI in peer review violates confidentiality

A couple of quick items for Monday.

Overinvested in OA

The implosion of OA seems to have begun, raising questions about how a large swath of the scholarly publishing industry will respond after overinvesting in the imagined benefits of making content free (using the term “invested” broadly).

Some invested intellectually, others financially, and some personally. Certain careers have blossomed in an environment rife with OA advocacy and techno-utopian mysticism. Dilettantes like funders and bureaucrats have invested mightily in the idea, but those investments are petering out as the cost-benefit ratio becomes clearly untenable. Companies, meetings, oversight bodies, and ideologues all invested in the idea. Infrastructures, processes, budgets, and strategies have all been warped by investments in a presumed all-OA future, with the article processing charge (APC) as its main financial engine.

How will they respond?

I think the inevitable implosion is going to proceed slowly and grudgingly, and end in tears. OA has fundamentally been a movement based on the overtly selfish interests of authors, librarians, and advocates, so there’s no reason to think anyone is going to take the high road out of this morass.

AI in Peer Review Breaches Confidentiality

The NIH’s Office of Extramural Research recently issued an interesting statement: using AI in peer review is a breach of confidentiality. The post explains a simultaneously released notice entitled, “The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process.”

The logic is straightforward:

. . . scientific peer reviewers are prohibited from using natural language processors, large language models, or other generative AI technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals. Reviewers have long been required to certify and sign an agreement that says they will not share applications, proposals, or meeting materials with anyone who has not been officially designated to participate in the peer review process. Yes, this also means websites, apps, or other AI platforms too.

AI continues to demand rapid, thoughtful responses that protect our norms and ideals. It’s nice to see the community protecting its fundamentals in this way.

To the first point today, it also seems LLMs and generative AI will ultimately cause us to abandon techno-utopian content distribution schemes, provided we’re able to re-establish our norms in those areas as well.

