An Evaluation of Four Solutions to the Garden of Forking Paths Problem
In this paper (Rubin, 2017), I consider Gelman and Loken’s (2013, 2014) garden of forking paths problem. Forking paths occur when researchers decide which analyses to perform based on information from their sample. For example, a researcher may decide whether to drop an item from a scale based on how it affects the scale’s Cronbach’s alpha coefficient within the current sample. This sample-contingent decision rule creates a forking path in the analysis protocol because, during the course of repeated sampling from the same population, some samples would yield an acceptable Cronbach’s alpha value, leading to the retention of the item in the scale, and other samples would yield an unacceptable Cronbach’s alpha value, leading to the deletion of the item.
Following the garden of forking paths would lead to multiple tests of the same hypothesis during a long run of exact replications of the analysis protocol (e.g., one test that included the item in the scale and one test that excluded it). Consequently, the forking paths problem represents a special case of the multiple testing problem. Forking paths inflate the familywise Type I error rate for a hypothesis because the family of tests includes not only the test that was actually conducted, but also the tests that could have been conducted had a different sample been drawn.
In my paper, I consider four potential solutions to the forking paths problem: (a) preregistration, (b) sensitivity (robustness, multiverse) analyses, (c) adjusting the alpha level, and (d) abandoning the Neyman-Pearson approach. I conclude that preregistration and sensitivity analyses are ineffective solutions, but that adjusting the alpha level and/or abandoning the Neyman-Pearson approach are effective solutions. In particular, the alpha level can be adjusted to take into account the number of forking paths that are involved in testing the relevant hypothesis (e.g., α/2 for one forking path, α/4 for two forking paths, etc.). In addition, statistical inference approaches that do not rely on the concept of repeated sampling from the same population are not susceptible to the forking paths problem. Hence, the Fisherian and Bayesian approaches to hypothesis testing do not suffer from the forking paths problem because they both condition their probability statements on the test that was actually conducted and the sample that was actually drawn.
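The halving rule for the adjusted alpha level can be illustrated numerically. The sketch below (mine, not from the paper) assumes that each forking path doubles the number of potential tests, so that k forking paths imply 2^k tests and a Bonferroni-style adjustment of α/2^k; the familywise error calculation further assumes the tests are independent, which is a simplification:

```python
def adjusted_alpha(alpha: float, n_forks: int) -> float:
    """Alpha level adjusted for forking paths: alpha / 2**n_forks
    (e.g., alpha/2 for one forking path, alpha/4 for two)."""
    return alpha / (2 ** n_forks)


def familywise_error_rate(alpha: float, n_forks: int) -> float:
    """Probability of at least one Type I error across the 2**n_forks
    potential tests implied by the forking paths, assuming the tests
    are independent (a simplifying assumption)."""
    m = 2 ** n_forks
    return 1 - (1 - alpha) ** m


# With two forking paths (four potential tests), an unadjusted alpha
# of .05 inflates the familywise error rate well above .05 ...
inflated = familywise_error_rate(0.05, 2)

# ... whereas the adjusted alpha keeps it at or below .05.
controlled = familywise_error_rate(adjusted_alpha(0.05, 2), 2)

print(f"unadjusted: {inflated:.4f}, adjusted: {controlled:.4f}")
```

Under these assumptions, the unadjusted familywise rate for two forking paths is 1 − 0.95⁴ ≈ .185, while testing each path at α/4 = .0125 keeps the familywise rate just under .05.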
For further information, please see:
Rubin, M. (2017). An evaluation of four solutions to the forking paths problem: Adjusted alpha, preregistration, sensitivity analyses, and abandoning the Neyman-Pearson approach. Review of General Psychology, 21(4), 321–329. https://doi.org/10.1037/gpr0000135
For more of my work in this area, please see: https://sites.google.com/site/markrubinsocialpsychresearch/replication-crisis