Abstract
Reproducibility and replicability are crucial components of the scientific method, but they can be compromised by issues inherent to a study's design and analytic choices, such as statistical errors or misalignments between a study's objectives and its implementation. Indeed, statistical errors and misunderstandings contribute to low reproducibility and replicability, hindering independent verification or changes in the direction of research (McNutt, 2014). Such problems arise easily in health science, where confounding factors are numerous and the prior odds of genuine findings are low (Ioannidis, 2005). Guidelines for statistical reporting that can minimize these issues are well established but are not always followed. To help address these challenges in a more targeted way, in January 2023 JID Innovations established a statistical review board as part of its overall editorial process, nominating editors with expertise in statistical analysis and data science (Hall, 2023). Every submission to the journal is reviewed by one of the statistical review editors, who provides specialist evaluation and feedback on study design, statistical tests, and analyses, as well as on bioinformatic aspects of the manuscript. In this commentary, we highlight common themes identified by the statistical review editors in their peer reviews, together with comments made during the 'routine' peer review process, to illustrate prevalent issues in statistical methodology and reporting seen in submissions to JID Innovations. Our goal is to propose simple steps that authors can take to inform study design at the outset of any data-driven project, to reduce the number of revisions to statistical methodology and presentation required after the original submission, and ultimately to improve the reproducibility and replicability of the work published in JID Innovations, with the added benefit of a more efficient submission process.