Peer review of research findings has changed repeatedly since 1832, when the Royal Society formalized written refereeing by experts (Hist J 2018). Calls for reform have changed the system, including calls from researchers concerned about their peers’ impartiality or their ability to catch errors or fraud. Here, we address a narrow set of issues relating to the conduct of peer reviewers toward not only their editors but also the authors whose work they are reviewing.

An explicit code of ethics may be needed for the vast and far-flung research community, given the logistical impossibility of convening all members to discuss conduct and given the confidentiality of single- and double-blind reviews. Ethics seem particularly important: misconduct overlooked during review can corrupt the research-based evidence demanded by public policy and threaten the entire research endeavor. The problem may be even more acute now, given the global rise in anti-intellectualism, including mistrust of experts.

Because high-quality independent review is essential to advancing knowledge, our recommendations for a code of ethics for reviewers follow the hallmark principles of good science (transparency, independence, falsifiability, and reproducibility) as well as recent calls for greater diversity in the scientific community.

Transparency is fundamental to each step of peer review. Transparency about independence should enhance confidence that a review was not distorted by rivalries, collaborations, or other competing interests. Just as most statistical tests require independence of samples, the reliability of findings requires independence between peers during review. Reviewers should therefore disclose all financial and non-financial potentially competing interests related to the authors (if identifiable), methods, or findings of the articles they review. We cannot be the judges of our own partiality.
Non-financial interests that compete with our ability to evaluate findings fairly and scientifically are especially hard for us to see in ourselves, because they relate to ideology, personal rivalry or friendship, unconscious bias, and the like. Transparency also requires reviewers to disclose when they are unqualified to address any portion of a manuscript or would like a specialist to examine the relevant passages.

Likewise, a reviewer who challenges a claim made by authors must have (and reveal) the evidence behind the challenge. The challenge must be falsifiable (open to disproof), and the evidence must be reproducible. A common breach of this ethical code is for reviewers to subtly undermine the authors’ credibility not with evidence but with assertions such as “In my experience…” and vague citations to unspecified or non-peer-reviewed sources. Such assertions and citations are unscientific because they cannot be falsified.

The need for falsifiable, reproducible evidence in reviews extends to the use of generative artificial intelligence (GenAI), including large language models (LLMs), which draws on others’ work untraceably. A reviewer who uses GenAI applications even in limited circumstances, such as translation or simple online searches, must disclose how they were used and specify the tool by name (see transparency above).

At first glance, diversity in the scientific community might appear unrelated to peer review. As Oreskes argued in her 2019 book “Why Trust Science?” (Princeton, NJ: Princeton University Press), the scientific endeavor advances more quickly when the research community approaches evidence from diverse viewpoints with open-minded scrutiny of all claims. Homogeneity of thought or experience may impede thoroughgoing scrutiny of the assumptions or methods in a manuscript. Differences of opinion and clashes of values are not scientific disagreements and, thus, not a basis for judging methods or findings.
Judging work by who conducted it, or by how one feels about the results, muddles facts and values. All of us have bemoaned the lay public’s tendency to ignore the implications of a finding simply because they dislike it; why, then, do we do the same to our peers? Even when confronting methods, reviewers should suspect themselves of partiality toward the assumptions, hypotheses, and procedures with which they are familiar or of which they are fond. Chamberlin’s early insight (Science 1965) cautions us against “the dangers of parental affection for a favorite theory”.

Business as usual does not seem to satisfy anyone. Although imperfect, peer review is arguably the best model available for evaluating research findings. A strong code of ethics may therefore be needed to increase trust in science.