Crowdsourcing hypothesis tests: Making transparent how design choices shape research results.

Landy, Justin F.; Jia, Miaolei Liam; Ding, Isabel L.; Viganola, Domenico; Tierney, Warren; Dreber, Anna; Johannesson, Magnus; Pfeiffer, Thomas; Ebersole, Charles R.; Gronau, Quentin F.; Ly, Alexander; van den Bergh, Don; Marsman, Maarten; Derks, Koen; Wagenmakers, Eric-Jan; Proctor, Andrew; Bartels, Daniel M.; Bauman, Christopher W.; Brady, William J.; Cheung, Felix; Cimpian, Andrei; Dohle, Simone; Donnellan, M. Brent; Hahn, Adam; Hall, Michael P.; Jiménez-Leal, William; Johnson, David J.; Lucas, Richard E.; Monin, Benoît; Montealegre, Andres; Mullen, Elizabeth; Pang, Jun; Ray, Jennifer; Reinero, Diego A.; Reynolds, Jesse; Sowden, Walter; Storage, Daniel; Su, Runkun; Tworek, Christina M.; Van Bavel, Jay J.; Walco, Daniel; Wills, Julian; Xu, Xiaobing; Yam, Kai Chi; Yang, Xiaoyu; Cunningham, William A.; Schweinsberg, Martin; Urwitz, Molly; Uhlmann, Eric L.
Psychological Bulletin, 2020
landy2020crowdsourcingpsychological

Abstract

To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from 2 separate large samples (total N > 15,000) were then randomly assigned to complete 1 version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: Materials from different teams rendered statistically significant effects in opposite directions for 4 of 5 hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for 2 hypotheses and a lack of support for 3 hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, whereas considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim. (PsycINFO Database Record (c) 2020 APA, all rights reserved).

Citation

ID: 85969
Ref Key: landy2020crowdsourcingpsychological
Use this key to autocite in SciMatic or Thesis Manager

Blockchain Verification

NFT Contract Address: 0x95644003c57E6F55A65596E3D9Eac6813e3566dA
Article ID: 85969
DOI: 10.1037/bul0000220
Network: Scimatic Chain (ID: 481)
Blockchain Readiness Checklist: Authors, Abstract, Journal Name, Year, Title (5/5 complete)
Creates 1,000,000 NFT tokens for this article
Token Features:
  • ERC-1155 Standard NFT
  • 1 Million Supply per Article
  • Transferable via MetaMask
  • Permanent Blockchain Record
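Since the record exposes the ERC-1155 contract address and chain ID, a holder's balance of this article's tokens can be queried with a standard `eth_call`. The sketch below builds the ABI-encoded calldata by hand; the holder address, token ID, and the existence of a public Scimatic Chain RPC endpoint are assumptions, not details given on this page.

```python
import json

CONTRACT = "0x95644003c57E6F55A65596E3D9Eac6813e3566dA"  # NFT contract from the record
CHAIN_ID = 481                                            # Scimatic Chain

# 4-byte function selector for the ERC-1155 balanceOf(address,uint256) call
SELECTOR = "00fdd58e"

def balance_of_calldata(holder: str, token_id: int) -> str:
    """ABI-encode balanceOf(address,uint256): selector + two 32-byte args."""
    addr = holder.lower().removeprefix("0x").rjust(64, "0")  # left-pad address
    tid = f"{token_id:064x}"                                 # uint256 as 64 hex chars
    return "0x" + SELECTOR + addr + tid

# JSON-RPC request body one could POST to a Scimatic Chain node (the endpoint
# URL is not given here; the holder address and token ID are illustrative).
request = {
    "jsonrpc": "2.0",
    "method": "eth_call",
    "params": [
        {"to": CONTRACT, "data": balance_of_calldata("0x" + "ab" * 20, 85969)},
        "latest",
    ],
    "id": 1,
}
payload = json.dumps(request)
```

Whether the article ID doubles as the ERC-1155 token ID is an assumption; the contract's own documentation would need to confirm the ID scheme.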
Blockchain QR Code: scan with the Saymatik Web3.0 Wallet.