March 28, 2024

Researchers' Frustrations with Publishing

DeSci Labs

Introduction

Our team has been discussing the problems in scientific publishing and what needs to change. We had plenty of personal experience, anecdotes, and literature to build on. What we didn't have yet was hard data from scientists to test our hypotheses and help us prioritize our product development.

So, in January 2024, we conducted an online survey of 94 active scientists about their biggest frustrations with scientific publishing.

Results

Scientists' number one frustration in our survey was that peer-review work is unpaid. Interestingly, this mirrors a finding from Publons' Editor Survey, in which 75% of editors said that “finding reviewers and getting them to accept review invitations” is the hardest part of their job (p. 28, Publons Global State of Peer Review). If reviewing goes unpaid, it is hardly surprising that editors struggle to recruit reviewers.

The second-highest-ranked pain point was the difficulty of determining whether a study's results are trustworthy. Replicability is a problem close to our hearts. What's the point of the scientific record if the validation behind it doesn't produce results we can trust?

The third-ranked pain point: many empirical papers do not offer easy access to their underlying data or code. Building on a half-visible scientific record is like building a house in the dark: unnecessarily risky and inefficient.


Other highly ranked pain points include:

  • the slow peer-review and publication process,

  • missing incentives for independent replication efforts,

  • missing incentives for sharing data and code openly,

  • cumbersome submission systems for scientific content,

  • and no royalty payments for scientists.

These pain points are slowing scientific progress, and they all stem from outdated digital infrastructure for scientific communication and misaligned incentives.

Discussion

These pain points are the main reasons we started DeSci Labs. Our mission is to build compelling solutions to these problems, harnessing the power of decentralized systems. DeSci Nodes already enable easy sharing, connecting, and updating of manuscripts, data, and code on an open peer-to-peer network without paywalls, data silos, link rot, or content drift.

In February, we released the first version of an attestation system highlighting research and authors who follow best open science practices, such as sharing data and code. Very soon, attestations for Open Data and Open Code will be automatically written to authors' ORCID profiles, creating a public track record of best open science practices.

The survey results encouraged us to prioritize research and development for an incentive layer for the CODEX protocol and the applications that operate on it, including DeSci Nodes. We'll have more on this soon.

Methods

We recruited survey participants via social media posts and on Prolific. Prolific participants were invited if they had an undergraduate degree or higher, were between 23 and 65 years old, and worked in research; they received a payment of £8 upon completing our 6-minute survey on OpinionX.

The core part of the survey consisted of 12 multiple-choice questions, each asking participants to choose between two problem statements drawn at random from a set of 17 options. This part of the survey was introduced with the following text: “In the next section, we'd like to ask you to prioritize between 12 pairs of problem statements to learn which pain points around scientific publishing are most relevant to you. You will see a random selection of problem statements, not all, and some may be shown to you more than once. That's part of the research design. Thanks in advance.”
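For illustration, here is a minimal sketch of how such a random pairing scheme can be generated. The placeholder statement labels, the sampling details (each pair drawn independently, two distinct statements per question), and the question wording are our own assumptions for this example, not the exact logic OpinionX uses.

import random

# Hypothetical shortened labels standing in for the survey's 17 problem statements.
PROBLEM_STATEMENTS = [f"statement_{i}" for i in range(1, 18)]

def draw_question_pairs(statements, n_questions=12, seed=None):
    """Draw random head-to-head pairs of problem statements.

    Each question shows two distinct statements; across the 12 questions a
    statement may appear more than once, matching the survey's note that
    "some may be shown to you more than once".
    """
    rng = random.Random(seed)
    return [tuple(rng.sample(statements, 2)) for _ in range(n_questions)]

if __name__ == "__main__":
    for a, b in draw_question_pairs(PROBLEM_STATEMENTS, seed=42):
        print(f"Which problem is more relevant to you: {a} or {b}?")

Drawing each pair independently keeps the questionnaire short for any single participant while still covering the full set of 17 statements across respondents.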

In total, 98 participants completed the survey between 16 and 18 January 2024. Of these, 94 said they consider themselves “researchers actively pursuing to publish in scientific journals, occasionally or frequently”; only answers from these 94 individuals were considered in the survey results.

We also collected basic socio-demographics. Most survey participants were young, early-career scientists from Europe and North America.

The study had 94% statistical power to detect the superiority of one option over another, given a score difference of 30 and a Type I error rate of 5%. The study's results do not necessarily generalize to other study populations or sampling methods.
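As a rough illustration of how a power figure like this can be checked by simulation, the sketch below assumes that each option's score is the share of head-to-head comparisons it wins on a 0-100 scale (so a 30-point gap corresponds to, say, 65% versus 35% win rates), that each option receives a fixed number of votes, and that the difference is tested with a two-proportion z-test. These assumptions and the parameter values are ours, not OpinionX's actual scoring rule or the survey's exact configuration.

import random
from statistics import NormalDist

def simulated_power(p_a=0.65, p_b=0.35, n_per_option=60,
                    alpha=0.05, n_sims=10_000, seed=0):
    """Monte Carlo power estimate for detecting a difference in win rates.

    Assumes each option's score is its win percentage over n_per_option
    head-to-head votes, and tests the observed difference with a
    two-proportion z-test at significance level alpha. All parameter
    values are illustrative assumptions.
    """
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sims):
        wins_a = sum(rng.random() < p_a for _ in range(n_per_option))
        wins_b = sum(rng.random() < p_b for _ in range(n_per_option))
        pa_hat, pb_hat = wins_a / n_per_option, wins_b / n_per_option
        pooled = (wins_a + wins_b) / (2 * n_per_option)
        se = (2 * pooled * (1 - pooled) / n_per_option) ** 0.5
        if se > 0 and abs(pa_hat - pb_hat) / se > z_crit:
            rejections += 1
    return rejections / n_sims

if __name__ == "__main__":
    # A 30-point gap on a 0-100 win-rate scale, e.g. 65% vs 35%.
    print(f"estimated power: {simulated_power():.2f}")

With these illustrative defaults the estimate lands in the same ballpark as the 94% reported above; reproducing that figure exactly would require the survey's actual vote counts and scoring rule.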