Influence of multiple hypothesis testing on reproducibility in neuroimaging research: a simulation study and Python-based software

T Puoliväli, S Palva, JM Palva - Journal of Neuroscience Methods, 2020 - Elsevier
Background
Reproducibility of research findings has recently been questioned in many fields of science, including psychology and neuroscience. One factor influencing reproducibility is the simultaneous testing of multiple hypotheses, which inflates the number of false positive findings unless the resulting p-values are carefully corrected. Although this multiple testing problem is well known and extensively studied, it remains both a theoretical and a practical challenge.
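One way to see the scale of the problem: if all null hypotheses are true and m independent tests are each performed at a per-test level α, the probability of at least one false positive is 1 − (1 − α)^m, which approaches 1 quickly as m grows. A minimal illustration (not part of the study itself):

```python
# Family-wise error rate for m independent tests at per-test level alpha,
# assuming every null hypothesis is true:
#   P(at least one false positive) = 1 - (1 - alpha)**m
alpha = 0.05
for m in (1, 10, 100, 1000):
    fwer = 1.0 - (1.0 - alpha) ** m
    print(f"m = {m:4d} tests -> P(at least one false positive) = {fwer:.3f}")
```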
New method
Here we assess reproducibility in simulated experiments in the context of multiple testing. We consider methods that control either the family-wise error rate (FWER) or false discovery rate (FDR), including techniques based on random field theory (RFT), cluster-mass based permutation testing, and adaptive FDR. Several classical methods are also considered. The performance of these methods is investigated under two different models.
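For concreteness, below is a minimal sketch of two of the classical procedures referred to above: Bonferroni correction (FWER control) and the Benjamini–Hochberg step-up procedure (FDR control). This is an illustrative re-implementation under standard textbook definitions, not the MultiPy code itself.

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """FWER control: reject hypothesis i if p_i <= alpha / m."""
    pvals = np.asarray(pvals)
    return pvals <= alpha / pvals.size

def benjamini_hochberg(pvals, q=0.05):
    """FDR control (Benjamini & Hochberg, 1995).

    Sort the p-values, find the largest k with p_(k) <= (k / m) * q,
    and reject the k smallest p-values.
    """
    pvals = np.asarray(pvals)
    m = pvals.size
    order = np.argsort(pvals)
    thresholds = (np.arange(1, m + 1) / m) * q
    below = pvals[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

# Toy example: 90 true nulls (uniform p-values) and 10 strong effects.
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=90), rng.uniform(0, 1e-4, size=10)])
print("Bonferroni rejections:        ", bonferroni(pvals).sum())
print("Benjamini-Hochberg rejections:", benjamini_hochberg(pvals).sum())
```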
Results
We found that permutation testing was the most powerful of the considered approaches to multiple testing, and that grouping hypotheses based on prior knowledge can improve statistical power. We also found that weighting primary and follow-up studies equally produced the most reproducible outcomes.
Comparison with existing method(s)
We have extended the use of the two-group and separate-classes models for analyzing reproducibility and provide new open-source software, “MultiPy”, for multiple hypothesis testing.
Conclusions
Our simulations suggest that performing strict corrections for multiple testing is not sufficient to improve the reproducibility of neuroimaging experiments. The methods are freely available as the Python toolkit “MultiPy”. We hope this study will help improve statistical data analysis practices and assist in conducting power and reproducibility analyses for new experiments.