19 Things Editors of Experimental Psychology Journals can do to Increase the Replicability of the Research they Publish


The challenges confronting psychological science can be divided into two categories: Hard and easy.  The hard problems have to do with developing measures, methods, theories, and models that will enable the development of a genuinely useful science of psychology.  Solving those problems will take many smart and creative people a long time.  The easy problem is to stop publishing so many crappy, underpowered, p-hacked experiments that don’t replicate.  Replication is not the only criterion for a science, but it is a fundamental one.

I assume that engaged editors read and think about each submission before looking at the reviews, and don’t just tally reviewers’ “votes” but rather make their best judgment as to whether or not the submission meets (or has decent prospects of meeting via revisions) criteria for publication in their journals. That’s just the basics of doing the job.  Here are some other things that editors can do with the specific aim of increasing the replicability of the findings they publish.

1. Sign on to and endorse the Transparency and Openness Promotion guidelines.

2. Encourage and reward detailed preregistrations (Lindsay, Simons, & Lilienfeld, 2017).

3. Be wary of papers that report a single underpowered study with surprising findings, especially if critical p values are greater than .025 (Lindsay, 2015).

4. If you think that the work reported in a manuscript has potential but you doubt its replicability, consider inviting a revision with a preregistered replication, perhaps under terms of a Registered Report.

5. Encourage (perhaps even require) authors to share de-identified data and materials with reviewers (and more widely after publication) (Lindsay, 2017).

6. If you have Associate Editors, ensure that they have appropriate stats/methods chops and are committed to promoting transparency and replicability.

7. Consider recruiting Statistical Advisers (i.e., psychologists with high-level statistical expertise who are willing to commit to providing consulting services to you and Associate Editors when the need arises).

8. Ensure that each submission that is sent for review goes to at least one reviewer who has serious stats/methods chops.

9. Require authors to provide a compelling rationale as to why the sample size of each reported study is appropriate (see, e.g., Anderson, Kelley, & Maxwell, 2017).  Consider not only the number of subjects but also the number of observations per subject per construct.  Precedent is a weak basis for sample-size decisions, because psychologists have a long history of conducting many underpowered experiments and publishing the few that yield statistically significant effects; underpowered studies will be non-significant unless they exaggerate the effect size (see the cool Shiny widget by Felix Schönbrodt, implementing Daniel Lakens’s spreadsheet on this issue, at http://shinyapps.org/).  See also Brunner and Schimmack (2016).  There are no hard and fast rules, and statistical power must be weighed against other considerations that may limit sample size (e.g., difficulty, rarity, riskiness, urgency).  (A minimal power-analysis sketch appears after this list.)

10. Require reporting of an index of precision for averages of dependent variables (e.g., 95% confidence intervals [Cumming, 2014] or credible intervals [Morey, 2015]); when appropriate, require reporting of effect sizes and a measure of their precision.  Make sure that authors make clear how these values were calculated.  (A sketch of computing a confidence interval and an effect size appears after this list.)

11. Require fine-grained graphical presentations of results that show distributions (e.g., scatterplots, frequency histograms, violin plots), and look at and think about those distributions.  (A plotting sketch appears after this list.)

12. Don’t let authors describe a non-significant NHST result as strong evidence for the null hypothesis (e.g., Lakens, 2016), nor describe a pattern in which an effect is significant in one condition or experiment and not significant in another as if it evidenced an interaction (Gelman & Stern, 2006).  (A sketch of an equivalence test, one way to quantify evidence that an effect is negligibly small, appears after this list.)

13. Attend to measurement sensitivity, reliability, validity, manipulation checks, demand characteristics, experimenter bias, confounds, etc.

14. Require authors to address in the manuscript the issue of known/anticipated constraints on the generality of the findings (Simons, Shoda, & Lindsay, 2017).

15. Use tools such as StatCheck to detect errors in statistical reporting.

16. Consider inviting submissions that propose Registered Reports (i.e., proposed and reviewed before data collection).

17. Consider inviting submissions that report preregistered direct replications of findings previously published in your journal (ideally as Registered Reports) (Lindsay, 2017).

18. Publish the action editor’s name with each article.

19. If you become aware of errors in a work published in your journal, work to correct them in an open, straightforward way, whether that involves an erratum, corrigendum, or retraction.  Reach out to Retraction Watch.
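
The sketches below are minimal, illustrative Python examples for a few of the more statistical items above (9, 10, 11, and 12).  All data, effect sizes, bounds, and package choices in these sketches are illustrative assumptions, not prescriptions.

For item 9, a sketch of a prospective power analysis using statsmodels; the smallest effect size of interest (assumed here to be Cohen's d = 0.4) and the target power of .90 are placeholders that authors would need to justify.

# Minimal prospective power analysis for a two-group between-subjects design.
# The effect size, alpha, and power values below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group n needed to detect the smallest effect size of interest
# (assumed to be d = 0.4) with alpha = .05 and power = .90.
n_per_group = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.90,
                                   alternative='two-sided')
print(f"Required n per group: {n_per_group:.0f}")

# The flip side of item 9: with only 20 subjects per group, power to detect
# d = 0.4 is low, so the significant subset of such studies overestimates it.
power_at_20 = analysis.solve_power(effect_size=0.4, alpha=0.05, nobs1=20,
                                   alternative='two-sided')
print(f"Power with n = 20 per group: {power_at_20:.2f}")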
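
For item 10, a sketch of reporting a mean with its 95% confidence interval plus a standardized effect size, using SciPy and NumPy on simulated data; Cohen's d with a pooled standard deviation is just one common choice of effect-size index.

# Sketch: a mean with its 95% confidence interval, plus a standardized
# effect size for a two-group comparison.  Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=105, scale=15, size=40)
group_b = rng.normal(loc=100, scale=15, size=40)

# 95% CI for the mean of group A (t distribution, standard error of the mean).
mean_a = group_a.mean()
ci_low, ci_high = stats.t.interval(0.95, len(group_a) - 1,
                                   loc=mean_a, scale=stats.sem(group_a))
print(f"Group A mean = {mean_a:.1f}, 95% CI [{ci_low:.1f}, {ci_high:.1f}]")

# Cohen's d using a pooled standard deviation (one common definition).
n_a, n_b = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) +
                     (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2))
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")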
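
For item 11, a matplotlib sketch of distribution-revealing displays (a violin plot and overlapping histograms); the two simulated conditions stand in for real data.

# Sketch: show raw distributions rather than bare means.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
cond_a = rng.normal(500, 80, size=60)   # e.g., response times, condition A
cond_b = rng.normal(540, 120, size=60)  # condition B, more variable

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3.5))

# Violin plots reveal the shape of each condition's distribution.
ax1.violinplot([cond_a, cond_b], showmedians=True)
ax1.set_xticks([1, 2])
ax1.set_xticklabels(["Condition A", "Condition B"])
ax1.set_ylabel("DV (arbitrary units)")

# Overlapping histograms show how much the conditions overlap.
ax2.hist(cond_a, bins=15, alpha=0.6, label="Condition A")
ax2.hist(cond_b, bins=15, alpha=0.6, label="Condition B")
ax2.set_xlabel("DV (arbitrary units)")
ax2.legend()

fig.tight_layout()
plt.show()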
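
For item 12, a sketch of an equivalence test (two one-sided tests, TOST), which is one way to quantify evidence that any effect is smaller than a pre-specified bound, rather than treating p > .05 as support for the null; the ±3-unit bounds and the simulated data are assumptions that would have to be justified substantively.

# Sketch: two one-sided tests (TOST) for equivalence between two groups.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(3)
group_a = rng.normal(loc=50.0, scale=10.0, size=80)
group_b = rng.normal(loc=50.5, scale=10.0, size=80)

# Equivalence bounds: differences within +/- 3 raw units are treated as
# too small to matter (a substantive judgment, not a statistical one).
low, upp = -3.0, 3.0
p_overall, lower_test, upper_test = ttost_ind(group_a, group_b, low, upp)

print(f"TOST overall p = {p_overall:.3f}")
if p_overall < .05:
    print("The observed difference is statistically within the equivalence bounds.")
else:
    print("Equivalence not established; note that a non-significant t-test alone would not establish it either.")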

