10 commandments to improve scientific publishing
These 10 Commandments are our North Star, our purpose, the reason we exist. They guide all our actions.
1. Our goal is to accelerate scientific progress
We need to move science away from the existing “publish or perish” culture. Researchers are currently rewarded for publishing frequently in high-impact journals; it’s a numbers game.
But quantity is often the enemy of quality. These incentives have led to problematic research practices, a proliferation of low-quality or marginal research, and a widespread replication crisis that undermines progress in many fields of science.
To accelerate scientific progress, we must change the incentives. We must encourage both rigor and novelty, and rethink how research is published, validated, and evaluated.
2. We build open-source software
Many science start-ups set out with good intentions, only to be acquired by an existing publisher before they achieve their mission—becoming part of the broken machinery.
This can be avoided by producing open-source software under a copyleft license, requiring anyone who further develops the code to make it available under a free and open license. This means our technology will exist with or without DeSci Labs, and will remain publicly accessible, so anyone can use and enhance it.
Our open-source software enables the scientific community to meaningfully engage with the technology and its further development.
It is designed to be – and to remain – open and accessible to everyone.
3. We make paywalls around scientific content impossible
We build on an open peer-to-peer network (IPFS) as our data storage layer. All data posted on the network has a decentralized persistent identifier (DPID) derived from the file's content, creating a unique digital fingerprint that protects against link rot and content drift. Any network participant can easily store a copy of the content under the same DPID, allowing users to retrieve it from several sources. The idea is simple: multiple copies keep data safe and open.
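As a simplified illustration of content addressing (a sketch only, not IPFS's actual CID algorithm or the DPID format), the identifier can be thought of as a cryptographic hash of the file's bytes: anyone holding the same bytes computes the same fingerprint and can verify a copy obtained from any peer.

```python
# Simplified sketch of content addressing: the identifier is derived
# from the file's bytes, not from where the file happens to be stored.
# (Illustrative only; IPFS CIDs and DPIDs use their own formats.)
import hashlib

def content_identifier(data: bytes) -> str:
    """Return a hex digest that uniquely fingerprints the content."""
    return hashlib.sha256(data).hexdigest()

manuscript = b"Results: the effect replicates at p < 0.01."
cid = content_identifier(manuscript)

# Any peer that serves these exact bytes serves the same identifier,
# so the reader can fetch from several sources and verify the copy.
assert content_identifier(manuscript) == cid
print(cid)
```

Because the identifier depends only on the bytes themselves, it stays valid no matter which server the copy comes from.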
As a result, erecting paywalls around the content on this network is impossible.
4. We do not take copyright from authors
Unlike publications in paywalled journals, we do not take copyright away from the creators of the content. Instead, we offer authors a variety of licenses that allow them to share their work publicly under standardized terms while protecting their interests. Authors always retain the freedom to publish their content elsewhere (e.g. in a journal of their choice) and control whether others may distribute, remix, adapt, or build upon their work; whether they must be credited for the content they created; and whether their work may be used for commercial purposes. All our licenses ensure that the content remains openly accessible.
5. We build infrastructure that supports all kinds of scientific content, including manuscripts, data, and code
The current publishing infrastructure was built for sharing full-text manuscripts, but those are just one form of output from scientific research. Data, code, and other artefacts are often at least equally valuable and underpin the claims made in the manuscript.
The infrastructure we build supports all research outputs equally. This will help solve the replication crisis by making research more robust, reliable, and trustworthy, accelerating scientific progress.
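To make "all research outputs" concrete, here is a hypothetical research-object manifest that treats the manuscript, the dataset, and the analysis code as equal components (the field names are illustrative, not DeSci Publish's actual schema):

```python
# Hypothetical research-object manifest bundling every output of a study.
# Field names are illustrative and not DeSci Publish's actual schema.
research_object = {
    "title": "Replication of effect X in population Y",
    "authors": ["A. Researcher", "B. Collaborator"],
    "license": "CC-BY-4.0",
    "components": [
        {"type": "manuscript", "path": "paper.pdf"},
        {"type": "dataset",    "path": "data/trials.csv"},
        {"type": "code",       "path": "analysis/run_models.py"},
    ],
}

# Each component is a first-class citizen: data and code are cited,
# versioned, and preserved with the same care as the manuscript.
for component in research_object["components"]:
    print(component["type"], "->", component["path"])
```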
6. We avoid data silos
Data silos are good for business but not for the public good. On our peer-to-peer network, anyone can participate in storing data and choose what they host on their own servers. An open-source, community-run network makes data silos impossible to create, preventing profiteering from exclusive access to the data.
7. We preserve scientific content for the future
The current Internet suffers from link rot and content drift at scale: if a file is moved or deleted, links break and the content becomes unavailable (link rot); if the content behind a link changes over time, the link no longer leads to the original content (content drift). Link rot and content drift affect ~50% of scholarly content after only 3 years, and almost all cited sources older than 10 years. The DOI system was developed to address this problem, but it is far from perfect: studies have shown that roughly half of all DOIs do not resolve to the correct target. The current version of the Internet was simply not designed to guarantee long-term content availability.
We solve this problem by using content-derived decentralized persistent identifiers (DPIDs) in combination with decentralized data storage. With DeSci, every file has a unique identifier, making it findable and accessible and preserving the scientific record. An edited or updated file receives its own unique identifier, which prevents content drift. The option of keeping multiple copies of valuable data contributes to its long-term availability and reduces the chance of the data becoming unavailable if a storage provider changes its practices or disappears.
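Continuing the hashing sketch from commandment 3 (again illustrative, not the actual DPID implementation), even a small edit to a file yields a different identifier, so an updated version can never silently replace the version that was originally cited:

```python
# Illustrative only: content-derived identifiers make content drift visible.
import hashlib

def content_identifier(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

v1 = b"Dataset v1: 1,000 trials."
v2 = b"Dataset v2: 1,200 trials."   # an updated release of the same dataset

id_v1 = content_identifier(v1)
id_v2 = content_identifier(v2)

# The edit yields a new identifier; the originally cited version keeps its own.
assert id_v1 != id_v2

# A reader who retrieves a copy from any mirror can verify it is unaltered.
retrieved = v1
assert content_identifier(retrieved) == id_v1
```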
8. We make scientific content accessible to humans and machines
Scientific content is mainly designed to be read by humans, but making it accessible to machines as well speeds up scientific progress, for example through faster discovery of content and enhanced data accessibility and interoperability. Making scientific content accessible to machines has two specific technological requirements: a persistent identifier and metadata describing the content.
DeSci Publish provides both, making every file stored on the network FAIR (findable, accessible, interoperable, and reusable) by design. Machines and operating systems may change how they process information, but they will always need reliable access to data and to the context of the files they are reading.
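As a minimal sketch of what such machine-readable metadata might look like next to a persistent identifier (the keys and the example identifier are hypothetical, not DeSci Publish's metadata format):

```python
# Hypothetical machine-readable metadata record paired with a persistent
# identifier; keys and values are illustrative, not DeSci Publish's format.
import json

record = {
    "identifier": "dpid://example/46",        # persistent, content-derived
    "title": "Replication of effect X in population Y",
    "creators": ["A. Researcher", "B. Collaborator"],
    "license": "CC-BY-4.0",
    "keywords": ["replication", "open data"],
    "files": [
        {"name": "paper.pdf", "mediaType": "application/pdf"},
        {"name": "trials.csv", "mediaType": "text/csv"},
    ],
}

# Because the record is structured, a crawler or analysis pipeline can
# discover, filter, and link the content without human intervention.
print(json.dumps(record, indent=2))
```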
9. Communities that use our infrastructure are autonomous and define their own rules
As part of this open-source initiative, we provide the infrastructure for scientists to do science better.
This includes giving every community the freedom to decide what works for them, experiment with new and better models of content curation and validation, and credit one another for their work.
10. We build technologies that reward high-quality curation and validation services for scientific content
Curation and validation of scientific content are crucial and valuable parts of the scientific enterprise, currently carried out by journals and their editors and referees. Almost three million articles are published annually. The validity of the authors' claims is evaluated during the peer-review process, and publication in a journal (i.e. curation) serves two purposes:
It’s a valuable prestige signal for authors.
It accounts for humans' limited attention span: readers want to spend their limited time productively, focusing on selected articles that others have “vouched for”.
Referees spend a median of 5 hours reviewing a manuscript, and they are typically not paid for this work. In doing so, researchers donate >$3 billion annually to commercial publishers, who monetize this input and turn it into profit margins of 30-40% for their shareholders, with little or no recognition for the referees' work. Unsurprisingly, finding referees is the number one problem editors face.
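A rough back-of-the-envelope calculation shows how this adds up; the article count and median review time come from the figures above, while the number of reviews per article and the hourly rate are assumptions made purely for illustration:

```python
# Back-of-the-envelope estimate of the value of unpaid referee work.
# Articles per year and hours per review are taken from the text above;
# reviews per article and the hourly rate are assumptions for illustration.
articles_per_year   = 3_000_000
reviews_per_article = 2          # assumption
hours_per_review    = 5          # median, from the text
hourly_rate_usd     = 100        # assumed value of a researcher's time

total_hours = articles_per_year * reviews_per_article * hours_per_review
donated_value = total_hours * hourly_rate_usd

print(f"{total_hours:,} referee hours per year")                # 30,000,000
print(f"≈ ${donated_value / 1e9:.0f} billion donated annually")  # ≈ $3 billion
```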
It is long overdue to change this: we will make the scientific community co-owners of the “diamond open access” publishing platform we are building and establish a fair system that rewards referees for their work, speeding up the peer-review process, increasing its quality, and helping editors find qualified referees.
To join a scientific community tailored to your research needs, help solve the replication crisis, and get the recognition you deserve for your work, sign up to DeSci Publish.