Methodological myths in management research
Management and applied psychology research is in a critical state. Recent methodologically oriented reviews of articles published in top-tier journals have consistently shown that most researchers design their studies and analyze their data in flawed ways. Examples include using designs that trigger demand effects, lack consequential decisions, or deceive study participants in ways that breed distrust in experimental instructions (Lonati, Quiroga, Zehnder, & Antonakis, 2018). Moreover, authors mostly analyze their data in ways that preclude establishing causality; yet they interpret their empirical results causally and, based on these interpretations, draw misleading policy implications (Antonakis, Bendahan, Jacquart, & Lalive, 2010; Antonakis, Bastardoz, & Rönkkö, 2019; Fischer, Dietz, & Antonakis, 2017; Sajons, 2020).

The aim of this research project is therefore to uncover what we believe to be ten of the most enduring methodological "myths" in management research. Using a mix of econometric derivations, demonstrations based on simulated data, and laboratory and field experiments, we intend to show why relying on these myths can produce unreliable results. As a way forward, we will provide scholars with a toolset for designing and evaluating their studies more rigorously.