When we think about the architecture of the scholarly publishing landscape, an interesting aspect is that we have publishers at diverse scales, many journals, and the ability to scale up into new areas of research simply by launching new journals. In addition we have de facto standards: shared metadata standards, a common idea of what peer review should be, and an immutable artefact as an output (the PDF).
So we have a distributed, scalable system, connected by strong norms.
This has been a great strength in growing a coherent way of representing the world's knowledge, but there are also weaknesses in this architecture.
One weakness is that we don't have a single global platform containing all research, on top of which new services or applications can be built.
Another weakness (or if not quite a weakness, at least a vector for attack) is that many of our "standards" are actually "norms".
Increasingly these norms are coming under pressure, and the validity of the scholarly record is coming under stress. Specifically, we are seeing bad behaviour, both intentional and unintentional, at the point of manuscript submission, leading to bad manuscripts being published into the scholarly record. The long-term challenge is that this could erode trust in that body of knowledge, just at a moment when trust in knowledge is needed more than ever.
While individuals like Elisabeth Bik and initiatives like Retraction Watch play a role in helping, they are limited by being small scale and by not being able to work across the entire ecosystem. While that ecosystem remains fractured, a centralised approach to dealing with these issues is hard.
This is where some level of collaboration can help. STM Solutions is now trialling the collaboration hub, an approach that allows sufficient, but no more than sufficient, sharing of information between publishers: https://www.stm-assoc.org/2021_12_08_News_Release_STM_Solutions_builds_collaboration_platform_to_safeguard_research_integrity.pdf .
The idea is that by pooling information about certain aspects of submitted manuscripts across publishers, in a way that maintains privacy for researchers and users and that is secure for everyone, bad activity can more easily be identified and appropriate actions taken.
But what might we mean by bad activity, and what might such an initiative do about it?
Here are some kinds of activity that we might think about:
Unintentional bad behaviour at the individual level: authors who do things that do not match scholarly norms because they are not fully aware of those norms. For example, submitting the same manuscript to multiple journals at the same time. If journals could compare submissions in a privacy-preserving way, they could alert the authors to this behaviour and politely request that they reconsider their actions. Overall threat level: low.
Intentional bad behaviour at the individual level: authors who fabricate data, e.g. by manipulating images. If journals could share a gold standard for identifying such manipulation, or draw on the pool of all submitted images, then identification of this behaviour, and fair treatment of it, could become a new emergent norm. Overall threat level: medium.
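One way pooled images could be compared without sharing the images themselves is via perceptual hashes, which stay stable under mild re-encoding. The sketch below assumes images have already been downscaled to small greyscale grids (real pipelines would use a library such as ImageHash on actual image files); it implements a simple difference hash and a Hamming distance for comparing them:

```python
def dhash(pixels: list[list[int]]) -> int:
    """Difference hash: one bit per adjacent-pixel comparison per row.

    Near-duplicate images (e.g. a re-used figure panel) produce hashes
    within a small Hamming distance of each other.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Two visually similar grids hash identically; a different one is far away.
img_a = [[10, 20, 30], [30, 20, 10]]
img_b = [[12, 22, 28], [31, 19, 11]]   # same gradients, slightly different values
img_c = [[30, 20, 10], [10, 20, 30]]   # gradients reversed
```

Journals could then contribute hashes of figure panels to a shared pool and flag submissions whose panels sit within a small Hamming distance of an already-seen image.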
Malign bad behaviour at the individual level: authors who decide to mess with the system by submitting fake papers, just for the fun of it. If journals had a shared way to identify adversarially generated papers, and could trace them to individuals, then that individual's behaviour could be policed across the field rather than at the journal level.
Bad behaviour at the group level for individual benefit: authors engaging in paper mills and fake rings of peer reviewers, perhaps to gain citation credits for those involved in order to get past some other gatekeeping in the ecosystem. Again, if journals could quickly identify this behaviour, it could potentially be reduced at source, and a library of adversarial content could be built to help with future identification. Overall threat level: high.
Bad behaviour at the group level with overall bad intent: carefully introducing pseudoscience with the aim of diluting the clarity of evidence around areas of strong economic interest to certain parties, e.g. climate change, the effects of smoking, the effects of certain social interventions. I'm less sure what to do about this. Perhaps in the end we must hope that only history itself, and a rigorous holding to nature, can help here.
Of the above, the collaboration hub hopes to be a mechanism through which at least some of these behaviours can be addressed by collective action. How one approaches this needs careful thought, transparency, and good governance. I've been approached to join the governing committee for this initiative; I think it is timely, important, and feasible, and I hope it can have a meaningful impact.