2022 · Preprint
DOI: 10.31222/osf.io/9ka8q

Making peer review evidence-based: It’s time to open the "black box"

Abstract: Peer review serves an essential role in the cultivation, validation, and dissemination of social work knowledge and scholarship. Nevertheless, the current peer review system has many limitations. It is charged as being unreliable, biased, ineffective, and unaccountable, among numerous other issues. That said, peer review is still commonly viewed as the best possible system of knowledge governance, given the relevant alternatives. In this research note, I scrutinize this assumption. Although peer review can som…

Cited by 3 publications (2 citation statements)
References 76 publications
“…Though recent developments in scholarly publishing have enabled the partial or even complete decoupling of peer review from this gatekeeping role (sometimes referred to as "journal-independent" or "journal-agnostic" peer review; e.g., Eisen et al., 2022; Hamelin et al., 2022; Lumb, 2023), social work journals still largely adhere to a traditional model of double-blind, pre-publication peer review (Caputo, 2019). While the published literature is in some ways validated by this model, the underlying processes are largely unstandardized and opaque, and its overall functioning is poorly understood (see Dunleavy, 2022b; Tennant & Ross-Hellauer, 2019). Blatant errors (e.g., misreported or incorrect statistical results; inaccurate or misleading citations), omissions (e.g., selective reporting of results), misrepresentations (inflation or deflation of findings), and even cases of fraud (e.g., fabrication of data, falsification) are inevitably published (see Dunleavy & Lacasse, 2023; Ioannidis, 2005, 2008; Nuijten et al., 2015; Srivastava, 2016, and references therein).…”
mentioning
confidence: 99%
“…(1) I and many others in the open science community share the belief that making research, scholarship, and its appraisal (e.g., peer review) open and transparent will help improve rigor, accountability, and trustworthiness across all levels of the research lifecycle. (2,3,4) This mandatory sharing policy could be enhanced by making explicit and public a plan for evaluating its effectiveness. At some predetermined time (e.g., 1 May 2025 or 31 December 2025), and at set intervals thereafter (e.g., every 3 or 5 years), it would be useful to see published: what percentage of clinical trials published in The BMJ actually do have publicly available data and code; the extent to which these can be accessed unimpeded (a concern shared by other readers, especially those outside of academic and research institutions); and, perhaps most importantly, the extent to which the data and code can be used to reproduce and replicate findings.…”
mentioning
confidence: 99%