An "Open" Cluster Headache

Wellcome Open Research and medRxiv share a preprint — and it's another mess

An "Open" Cluster Headache

Last week, a person at the Gates Foundation, wading into a discussion about an analysis I published, claimed that papers using the F1000 Research framework “are not submitted elsewhere — it’s not a preprint server.”

But the line is not bright, given how such funder-sponsored open repositories operate — papers are posted prior to review, and can remain up and available after failing review, not receiving any reviews, or not receiving sufficient reviews.

In almost no time, I stumbled across an unapproved submission to Wellcome Open Research that is also on medRxiv. Versions of the report were posted on medRxiv both before and after being submitted to Wellcome Open Research. It was all done rather haphazardly, leaving a trail of futile review, mismatched references and authors, and vanished data.

Sound complicated? Imagine the poor researcher, journalist, teacher, or student who stumbles across any one of the four versions.

Let me try to untangle it for you.

  • Version 1 of the paper — which was entitled at the time, “Evaluation of antibody testing for SARS-CoV-2 using ELISA and lateral flow immunoassays” — was posted on medRxiv on April 20, 2020.
  • Version 2 of the paper — now entitled, “Antibody testing for COVID-19: A report from the National COVID Scientific Advisory Panel” — was posted on medRxiv on May 7, 2020.
  • Version 2a of the paper — still entitled, “Antibody testing for COVID-19: A report from the National COVID Scientific Advisory Panel” — was posted on Wellcome Open Research on June 11, 2020.
  • Version 3 of the paper, same title, was posted on medRxiv on July 7, 2020.
  • Version 2a of the paper received one review on Wellcome Open Research on September 15, 2020. The joint reviewers ticked “Approved” while submitting numerous queries to the authors — queries which have not been answered. The reviewers also indicated the paper may not have provided enough information for the study to be replicated.

Obviously, given the timeline, the July version on medRxiv cannot have reflected any changes from the September review of the June version on Wellcome Open Research.

Now, nearly six months after the September review, no revision to the Wellcome Open Research version has materialized, either.

Neither version is peer-reviewed — either by the dubious standards of F1000 Research or otherwise — so both are, for all intents and purposes, preprints.

Even the Publishing Director of F1000 seems to think unapproved posted works in their system are to be considered part of the preprint family.

Whatever bin you put them in, there are problems across the set.

For the sake of simplicity, let’s compare Version 3 (medRxiv) with Version 2a (Wellcome Open Research). It’s not too hard to find inconsistencies — discrepancies in author names and listings, differences in corresponding authors, differences in reference lists, and one version pointing to another version for key (and missing) supplementary data.

Here’s an inventory of the inconsistencies I was able to identify between or within the two (a quick sketch of how such mismatches might be flagged automatically follows the list):

  • Listed author in Wellcome Open Research: Alastair Hunter
    Listed author in medRxiv: Alistair Hunter
    Spelling difference.
  • Listed author in Wellcome Open Research: Katie Jefferey
    Listed author in medRxiv: Katie Jeffrey
    Spelling difference.
  • One author listed on the Wellcome Open Research version — Dominic Kelly — is not a listed author on the most recent medRxiv version, but does appear in that version’s author affiliations. This individual is not a listed author on any prior medRxiv version, either. Can this person be held accountable for the report?
  • Listed author in Wellcome Open Research: Jose Martinez
    Listed author in medRxiv: Jose Carlos Martinez Garrido (with link to ORCID)
    Major name difference.
  • Listed author in Wellcome Open Research: Elena Perez
    Listed author in medRxiv: Elena Perez Lopez
    Potentially major name difference.
  • Listed author in Wellcome Open Research: Alberto Sobrinodiaz
    Listed author in medRxiv: Alberto Jose Sobrino Diaz
    Major name difference.
  • Corresponding authors for Wellcome Open Research version: 2
    Corresponding authors for medRxiv version: 1
  • References in Wellcome Open Research version: 28
    References in medRxiv version: 27
  • Supplementary tables in Wellcome Open Research version: 6
    Supplementary tables in medRxiv version: 7
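
This kind of name drift is also easy to surface mechanically. Here is a minimal Python sketch (author lists hand-transcribed from the two versions, with a rough, arbitrary similarity threshold; a quick illustration, not an official tool) that separates likely spelling variants from larger discrepancies:

```python
# A minimal sketch, not an official tool: author lists hand-transcribed
# from the two versions, with a rough similarity threshold.
from difflib import SequenceMatcher

wellcome_authors = ["Alastair Hunter", "Katie Jefferey", "Jose Martinez",
                    "Elena Perez", "Alberto Sobrinodiaz", "Dominic Kelly"]
medrxiv_authors = ["Alistair Hunter", "Katie Jeffrey",
                   "Jose Carlos Martinez Garrido", "Elena Perez Lopez",
                   "Alberto Jose Sobrino Diaz"]

def normalize(name: str) -> str:
    # Lowercase and strip spaces so "Sobrinodiaz" can match "Sobrino Diaz"
    return "".join(name.lower().split())

for wor_name in wellcome_authors:
    # Find the closest medRxiv name by character-level similarity
    best_match = max(
        medrxiv_authors,
        key=lambda m: SequenceMatcher(None, normalize(wor_name), normalize(m)).ratio(),
    )
    score = SequenceMatcher(None, normalize(wor_name), normalize(best_match)).ratio()
    if score == 1.0:
        continue  # identical once normalized
    kind = "likely spelling variant" if score >= 0.85 else "major difference or omission"
    print(f"{wor_name!r} vs. {best_match!r}: {score:.2f} ({kind})")
```

Run against these lists, the script flags every mismatch in the inventory, and a name with no plausible counterpart on the other side, such as Dominic Kelly’s, stands out immediately.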

Data availability turns out to be a big question here — the medRxiv version points to the Wellcome Open Research version in its “Data Availability” statement, clearly identifying Table S7. The problem is, there is no Table S7 listed in the Wellcome Open Research version of the report. In addition, the link in the Wellcome Open Research version to the data on Figshare doesn’t work.

As noted above in the inventory of differences, the reference list in the medRxiv version is 27 items long, rather than the 28 on Wellcome Open Research — it appears the authors removed the reference (13) to their Figshare data between the June 11, 2020, version and the July 7, 2020, version.

The Wellcome Open Research version flags two corresponding authors — Derrick W. Cook and Gavin R. Screaton. (The medRxiv version only lists Cook.) Both are at an institute affiliated with Oxford. I emailed both to ask about some of these matters, and received no reply.

There’s also a question about what “the National COVID Scientific Advisory Panel” actually is. It’s not listed as a contributor in Version 1, and first appears in Version 2 in a footnote with an unexplained asterisk after its name.

It also appears in Version 3, with metadata linking it to the Liverpool School of Tropical Medicine. However, searching for the “panel” — either associated with the Liverpool School of Tropical Medicine or not — yields only the reports covered here. There’s no mention of it otherwise that I could find.

Major and minor flaws across the board make it difficult to accept any of these postings as valid scientific reports society can rely upon — author attributions are confusing, data is missing, a reference to the data was removed in one version, the “panel” meant to convey authority may be an invention, and no version has passed any standard of peer review.

Yet, as noted yesterday, each version gets indexed in various places in ways that obscure its status and paper over these problems.

Is this really how we want to run the scientific record?

