Crowdsourcing hypothesis tests: Making transparent how design choices shape research results.

Justin F. Landy*, M. L. Jia, Isabel L. Ding, Domenico Viganola, Warren Tierney, Anna Dreber, Magnus Johannesson, Thomas Pfeiffer, Charles R. Ebersole, Quentin F. Gronau, Alexander Ly, Don van den Bergh, Maarten Marsman, Koen Derks, Eric-Jan Wagenmakers, Andrew Proctor, Daniel M. Bartels, Christopher W. Bauman, William J. Brady, Felix Cheung, Andrei Cimpian, Simone Dohle, M. Brent Donnellan, Adam Hahn, Michael P. Hall, William Jiménez-Leal, D. J. Johnson, Richard E. Lucas, Benoît Monin, Andres Montealegre, Elizabeth Mullen, Jun Pang, Jennifer Ray, Diego A. Reinero, J. Reynolds, Walter Sowden, Daniel Storage, Runkun Su, Christina M. Tworek, Jay J. Van Bavel, Daniel Walco, Julian Wills, Xiaobing Xu, Kai Chi Yam, Xiaoyu Yang, William A. Cunningham, Martin Schweinsberg, Molly Urwitz, The Crowdsourcing Hypothesis Tests Collaboration, Eric L. Uhlmann, Jan K. Woike

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Original language: English
Pages (from-to): 451-479
Number of pages: 29
Journal: Psychological Bulletin
Volume: 146
Issue number: 5
Early online date: May 2020
DOIs
Publication status: Published - May 2020