<p>This project page contains articles and materials for a Special Issue of the journal <em>Psychonomic Bulletin & Review</em>. The editorial for this issue is reproduced below.</p>
<h1>Bayesian methods for advancing psychological science</h1>
<ul>
<li>Joachim Vandekerckhove (University of California, Irvine)</li>
<li>Jeffrey N. Rouder (University of California, Irvine and University of Missouri)</li>
<li>John K. Kruschke (Indiana University)</li>
</ul>
<hr>
<p>The simple act of deciding which among competing theories is most likely—or which is most supported by the data—is the most basic goal of empirical science, but the fact that it has a canonical solution in probability theory is seemingly poorly appreciated. It is likely that this lack of appreciation is not for want of interest in the scientific community; rather, it is suspected that many scientists hold misconceptions about statistical methods. Indeed, many psychologists attach false probabilistic interpretations to the outcomes of classical statistical procedures ($p$-values, rejected null hypotheses, confidence intervals, and the like; Gigerenzer, 1998; Hoekstra, Morey, Rouder, & Wagenmakers, 2014; Oakes, 1986). Because the false belief that classical methods provide the probabilistic quantities that scientists need is so widespread, researchers may be poorly motivated to abandon these practices.</p>
<p>The American Statistical Association recently published an unusual warning against inference based on $p$-values (Wasserstein & Lazar, 2016). Unfortunately, their cautionary message did not conclude with a consensus recommendation regarding best-practice alternatives, leaving something of a recommendation gap for applied researchers.</p>
<p>In psychological science, however, a replacement had already been suggested in the form of the "New Statistics" (Cumming, 2014)—a set of methods that focus on effect size estimation, precision, and meta-analysis, and that would forgo the practice of ritualistic null hypothesis testing and the use of the maligned $p$-value. However, because the New Statistics' recommendations regarding inference are based on the same flawed logic as the thoughtless application of $p$-values, they are subject to the same misconceptions and are lacking in the same department. It is not clear how to interpret effect size estimates without also knowing the uncertainty of the estimate (and despite common misconceptions, confidence intervals do not measure uncertainty; Morey, Hoekstra, Rouder, Lee, & Wagenmakers, 2016), nor is it clear how to decide which among competing theories is most supported by data.</p>
<p>In this special issue of <em>Psychonomic Bulletin & Review</em>, we review a different set of methods and principles, now based on the theory of probability and its deterministic sibling, formal logic (Jaynes, 2003; Jeffreys, 1939). The aim of the special issue is to provide and recommend this collection of statistical tools that derives from probability theory: Bayesian statistics.</p>
<h3>Overview of the special issue on Bayesian inference</h3>
<p>The special issue is divided into four sections. The first section is a coordinated five-part introduction that starts from the most basic concepts and works up to the general structure of complex problems and to contemporary issues. The second section is a selection of advanced topics covered in-depth by some of the world’s leading experts on statistical inference in psychology. The third section is an extensive collection of teaching resources, reading lists, and strong arguments for the use of Bayesian methods at the expense of classical methods. The final section contains a number of applications of advanced Bayesian analyses that provide an idea of the wide reach of Bayesian methods for psychological science.</p>
<h4>Section I: Bayesian Inference for Psychology</h4>
<p>The special issue opens with <em>Introduction to Bayesian inference for psychology</em>, in which Etz and Vandekerckhove describe the foundations of Bayesian inference. The authors illustrate how all aspects of Bayesian statistics can be traced back to the most basic rules of probability, and that Bayesian statistics is nothing more or less than the systematic application of probability theory to problems that involve uncertainty. With seven worked examples (the seventh split into two parts) of gradually increasing scope and sophistication, the first paper covers a variety of possible practical scenarios, ranging from simple diagnosis to parameter estimation, model selection, and the calculation of strength of evidence.</p>
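<p>As a concrete illustration of this systematic application of probability theory, the diagnosis scenario can be sketched in a few lines of code. The numbers below (base rate, sensitivity, specificity) are invented for illustration and are not taken from the paper:</p>

```python
# Bayes' rule applied to a diagnostic scenario (illustrative numbers,
# not taken from the paper): a condition with a 2% base rate, a test
# with 95% sensitivity and 90% specificity.
prior = 0.02            # P(condition)
sensitivity = 0.95      # P(positive | condition)
specificity = 0.90      # P(negative | no condition)

# P(positive) by the law of total probability
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)

# Posterior P(condition | positive) by Bayes' rule
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # ≈ 0.162
```

<p>Despite the accurate test, a positive result raises the probability of the condition only to about 16%, because the base rate is low: exactly the kind of intuition-checking computation that the worked examples develop in detail.</p>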
<p>Wagenmakers, Marsman, et al. continue in <em>Part I: Theoretical advantages and practical ramifications</em> by illustrating the added value of Bayesian methods, with a focus on their desirable theoretical and practical aspects. Then, in <em>Part II: Example applications with JASP</em>, Wagenmakers, Love, et al. showcase JASP: free software that can be used to perform the statistical analyses that are most common in psychology, and that can execute them in both a classical and a Bayesian way. One of the goals of the JASP project is to provide psychologists with free software that can fulfill all the basic statistical needs that are now met by expensive commercial software. At the time of writing, JASP allows standard analyses like $t$-tests, analysis of variance, regression analysis, and analysis of contingency tables.</p>
<p>However, the full power of Bayesian statistics comes to light in its ability to work seamlessly with far more complex statistical models. As a science matures from the mere description of empirical effects to the prediction and explanation of patterns, more detailed formal models take center stage. In <em>Part III: Parameter estimation in nonstandard models</em>, Matzke, Boehm, and Vandekerckhove discuss the nature of formal models, the requirements of their construction, and how to implement models of high complexity in the modern statistical software packages WinBUGS (Lunn, Thomas, Best, & Spiegelhalter, 2000), JAGS (Plummer, 2003), and Stan (Stan Development Team, 2013).</p>
<p>Rounding out the section, Rouder, Haaf, and Vandekerckhove in <em>Part IV: Parameter estimation and Bayes factors</em> discuss the fraught issue of estimation-versus-testing. The paper illustrates that the two tasks are one and the same in Bayesian statistics, and that the distinction in practice is not a distinction of method but of how hypotheses are translated from verbal to formal statements. It is an important feature of Bayesian methods that it matters exactly which question one is asking of the data (and sometimes it requires careful thought to assess precisely what question is asked by a particular model), but that once the question is clearly posed, the solution is unambiguous and inescapable.</p>
<p>Additional tutorial articles are provided in Section III, <em>Learning and Teaching</em>.</p>
<h4>Section II: Advanced Topics</h4>
<p>The <em>Advanced Topics</em> section covers three important issues that go beyond the off-the-shelf use of statistical analysis. In <em>Determining informative priors for cognitive models</em>, Lee and Vanpaemel highlight the sizable advantages that prior information can bring to the data analyst turned cognitive modeler. After establishing that priors, just like every other part of a model, are merely codified assumptions that aid in constraining the possible conclusions from data, these authors go on to illustrate some of the different sources of information that cognitive modelers can use to specify priors.</p>
<p>In classical inference, the computation of $p$-values and confidence intervals can be corrupted by researchers "peeking" at their data and making a data-dependent decision to stop or continue collecting data (a.k.a. "optional stopping"). Counterintuitively, these classical quantities are invalidated if the researcher makes this forbidden choice, which is part of why it is a recommended practice to pre-register one’s data collection intentions so reviewers can confirm that a well-defined data collection plan was followed. In contrast, Bayesian analyses are not in general invalidated by "peeking" at data, and so the need for sample size planning and power analysis is somewhat diminished. Nevertheless, for logistical reasons (planning time and money investments), it is sometimes useful to calculate ahead of time how many participants a study is likely to need before it yields some minimal or criterial amount of evidence. In <em>Bayes factor design analysis: Planning for compelling evidence</em>, Schoenbrodt and Wagenmakers provide exactly that.</p>
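<p>The logic of monitoring evidence as data accumulate can be sketched as follows. This is a deliberately simplified illustration with two point hypotheses and invented settings, not the authors' procedure, which addresses composite hypotheses and full design-analysis simulations:</p>

```python
import random
import math

random.seed(1)

# Sequential Bayes factor monitoring (a simplified sketch, not the
# authors' code): normal data with known sd = 1, and two point
# hypotheses H0: mu = 0 versus H1: mu = 1. The Bayes factor is the
# cumulative likelihood ratio; sampling stops once the evidence for
# either hypothesis exceeds a factor of 10.
def log_normal_pdf(x, mu, sd=1.0):
    return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

true_mu = 1.0    # assumed data-generating mean (so H1 is true here)
log_bf = 0.0     # log Bayes factor for H1 over H0
n = 0
while abs(log_bf) < math.log(10):          # stop at BF > 10 or BF < 1/10
    x = random.gauss(true_mu, 1.0)         # observe one new data point
    log_bf += log_normal_pdf(x, 1.0) - log_normal_pdf(x, 0.0)
    n += 1

print(n, round(math.exp(log_bf), 2))       # sample size at stopping, final BF
```

<p>Running such a simulation many times over yields the distribution of sample sizes at which a study reaches compelling evidence, which is the planning quantity the design-analysis approach provides.</p>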
<p>Finally, there arise occasions where even the most sophisticated general-purpose software will not meet the needs of the expert cognitive modeler. On such occasions one may want to implement a custom Bayesian computation engine. In <em>A simple introduction to Markov chain Monte-Carlo sampling</em>, van Ravenzwaaij, Cassey, and Brown describe the basics of sampling-based algorithms, and with examples and provided code, illustrate how to construct a custom algorithm for Bayesian computation.</p>
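<p>The core of such a sampling-based algorithm can be written in remarkably few lines. The following is a minimal Metropolis sampler in the spirit of the tutorial, not the authors' code; the data and tuning choices are invented for illustration:</p>

```python
import random
import math

random.seed(0)

# A minimal Metropolis sampler (illustrative sketch): draw from the
# posterior of a normal mean with known sd = 1 and a flat prior, so the
# target density is proportional to the likelihood of the observed data.
data = [1.2, 0.8, 1.5, 0.9, 1.1]

def log_target(mu):
    # log likelihood under N(mu, 1); the flat prior adds only a constant
    return sum(-0.5 * (x - mu) ** 2 for x in data)

mu = 0.0                  # arbitrary starting value
samples = []
for _ in range(20000):
    proposal = mu + random.gauss(0, 0.5)   # symmetric random-walk proposal
    # accept with probability min(1, target(proposal) / target(mu))
    if math.log(random.random()) < log_target(proposal) - log_target(mu):
        mu = proposal
    samples.append(mu)

burned = samples[5000:]                    # discard burn-in samples
print(round(sum(burned) / len(burned), 2)) # estimate near the sample mean 1.1
```

<p>With a flat prior and known variance, the posterior mean equals the sample mean, so the chain's average provides a quick correctness check; the tutorial develops the reasoning behind the acceptance step and the tuning of the proposal distribution.</p>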
<h4>Section III: Learning and Teaching</h4>
<p>Four articles make up the <em>Learning and Teaching</em> section. The goal of this section is to collect the most accessible, self-paced learning resources for an engaged novice.</p>
<p>One telling feature of Bayesian methods is how they tend to accord with human intuitions about evidence and knowledge. While it is of course the mathematical underpinnings that support the use of these methods, the intuitive nature of Bayesian inference is a great advantage for novice learners. In <em>Bayesian data analysis for newcomers</em>, Kruschke and Liddell cover the basic foundations of Bayesian methods using examples that emphasize this intuitive nature of probabilistic inference. The paper includes discussion of such topics as the use of priors and limitations of statistical decision-making in general.</p>
<p>With <em>The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and planning from a Bayesian perspective</em>, Kruschke and Liddell lay out a broad and comprehensive case for Bayesian statistics as a better fit for the goals of the aforementioned New Statistics (Cumming, 2014). Contrasting the Bayesian approach with the New Statistics—which these authors discuss at length—Kruschke and Liddell review approaches to estimation and testing and discuss Bayesian power analysis and meta-analysis.</p>
<p><em>How to become a Bayesian in eight easy steps</em> is notable in part because it is an entirely student-contributed paper. Etz, Gronau, Dablander, Edelsbrunner, and Baribault thoroughly review a selection of eight basic works (four theoretical, four practical) that together cover the bases of Bayesian methods. The article also includes a Further Reading appendix that briefly describes an additional 32 sources that are arranged by difficulty and theoretical versus applied focus.</p>
<p>The fourth and final paper in the section for teaching resources is <em>Four reasons to prefer Bayesian analyses over significance testing</em> by Dienes and McLatchie. While our intention for this special issue was to introduce Bayesian statistics without spending too many words on the comparison with classical ("orthodox") statistics, we did feel that a paper focusing on this issue was called for. One reason why this is important is that widespread misconceptions about classical methods have likely made it seem to researchers that their staple methods possess the desirable properties of Bayesian statistics when, in fact, they do not. Dienes and McLatchie present a selection of realistic scenarios that illustrate how classical and Bayesian methods may agree or disagree, demonstrating that the attractive properties of Bayesian inference are often missing in classical analyses.</p>
<h4>Section IV: Bayesian Methods in Action</h4>
<p>The concluding section contains a selection of fully-worked examples of Bayesian analyses. Three powerful examples were chosen to showcase the broad applicability of the unifying Bayesian framework.</p>
<p>The first paper, <em>Fitting growth curve models in the Bayesian framework</em> by Oravecz and Muth, provides an example of a longitudinal analysis using growth models. While not a new type of model, it is a framework that is likely to gain prominence as more psychologists focus on the interplay of cognitive, behavioral, affective, and physiological processes that unfold in real time and whose joint dynamics are of theoretical interest.</p>
<p>In a similar vein, methods for dimension reduction have become increasingly useful in the era of Big Data. While psychological data sets that are massive in volume are still relatively rare, many researchers now collect multiple modes of data simultaneously and are interested in uncovering low-dimensional structures that explain their observed covariance. In <em>Bayesian latent variable models for the analysis of experimental psychology data</em>, Merkle and Wang give an example of an experimental data set whose various measures are jointly analyzed in a Bayesian latent variable model.</p>
<p>The final section of the special issue is rounded out by <em>Sensitivity to the prototype in children with high-functioning autism spectrum disorder: An example of Bayesian cognitive psychometrics</em> by Voorspoels, Rutten, Bartlema, Tuerlinckx, and Vanpaemel. Cognitive psychometrics is the application of cognitive models as measurement tools – here in a clinical context. The practice of cognitive psychometrics involves the construction of often complex nonlinear random-effects models, which are typically intractable in a classical context but pose no unique challenges in the Bayesian framework.</p>
<h3>Digital coverage</h3>
<p>As part of our efforts to make our introductions to Bayesian methods as widely accessible as possible, we have worked with the Psychonomic Society’s Digital Content Editor, who generously offered to host a digital event relating to this special issue. On the <a href="https://featuredcontent.psychonomic.org/bayesinpsych-a-digital-event-on-the-role-of-bayesian-statistics-and-modelling-in-psychology/" rel="nofollow">Psychonomic Society's website</a>, interested readers may find further expert commentary and web links to more content.</p>
<p>Additionally, the editors have set up a <a href="http://bit.ly/BayesGroup" rel="nofollow">social media help desk </a> where questions regarding Bayesian methods and Bayesian inference, especially as they are relevant for psychological scientists, are welcomed. These two digital resources are likely to expand in the future to cover new developments in the dissemination and implementation of Bayesian inference for psychology.</p>
<p>Finally, we have worked to make many of the contributions to the special issue freely available online. The full text of many articles is freely available via the <a href="https://osf.io/2es64" rel="nofollow">Open Science Framework</a>. Here, too, development of these materials is ongoing, for example with the gradual addition of exercises and learning goals for self-teaching or classroom use.</p>
<h3>Authors' note</h3>
<p>The Editors are grateful for the efforts of all contributing authors and reviewers. JV was supported by National Science Foundation grants #1230118 and #1534472.</p>
<h3>References</h3>
<p>Cumming, G. (2014). The new statistics: Why and how. <em>Psychological Science, 25,</em> 7–29.</p>
<p>Dienes, Z., & McLatchie, N. (<a href="https://link.springer.com/article/10.3758/s13423-017-1266-z" rel="nofollow">this issue</a>). Four reasons to prefer Bayesian analyses over significance testing. <em>Psychonomic Bulletin and Review.</em></p>
<p>Etz, A., Gronau, Q. F., Dablander, F., Edelsbrunner, P. A., & Baribault, B. (<a href="https://psyarxiv.com/ph6sw/" rel="nofollow">this issue</a>). How to become a Bayesian in eight easy steps: An annotated reading list. <em>Psychonomic Bulletin and Review.</em></p>
<p>Etz, A., & Vandekerckhove, J. (<a href="https://osf.io/n9mbu/" rel="nofollow">this issue</a>). Introduction to Bayesian inference for psychology. <em>Psychonomic Bulletin and Review.</em></p>
<p>Gigerenzer, G. (1998). We need statistical thinking, not statistical rituals. <em>Behavioral and Brain Sciences, 21,</em> 199–200.</p>
<p>Hoekstra, R., Morey, R. D., Rouder, J. N., & Wagenmakers, E.-J. (2014). Robust misinterpretation of confidence intervals. <em>Psychonomic Bulletin & Review, 21(5),</em> 1157–1164.</p>
<p>Jaynes, E. T. (2003). <em>Probability theory: The logic of science.</em> Cambridge: Cambridge University Press.</p>
<p>Jeffreys, H. (1939). <em>Theory of probability (1st ed.).</em> Oxford, UK: Oxford University Press.</p>
<p>Kruschke, J. K., & Liddell, T. M. (<a href="https://osf.io/ak5n4/" rel="nofollow">this issue-a</a>). Bayesian data analysis for newcomers. <em>Psychonomic Bulletin and Review.</em></p>
<p>Kruschke, J. K., & Liddell, T. M. (<a href="https://osf.io/8rcbq/" rel="nofollow">this issue-b</a>). The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and planning from a Bayesian perspective. <em>Psychonomic Bulletin and Review.</em></p>
<p>Lee, M. D., & Vanpaemel, W. (<a href="https://webfiles.uci.edu/mdlee/LeeVanpaemel2016.pdf" rel="nofollow">this issue</a>). Determining informative priors for cognitive models. <em>Psychonomic Bulletin and Review.</em></p>
<p>Lunn, D. J., Thomas, A., Best, N., & Spiegelhalter, D. (2000). WinBUGS – a Bayesian modelling framework: Concepts, structure, and extensibility. <em>Statistics and Computing, 10,</em> 325–337.</p>
<p>Matzke, D., Boehm, U., & Vandekerckhove, J. (<a href="https://osf.io/trhsy/" rel="nofollow">this issue</a>). Bayesian inference for psychology, part III: Parameter estimation in nonstandard models. <em>Psychonomic Bulletin and Review.</em></p>
<p>Merkle, E., & Wang, T. (<a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2536989" rel="nofollow">this issue</a>). Bayesian latent variable models for the analysis of experimental psychology data. <em>Psychonomic Bulletin and Review.</em></p>
<p>Morey, R. D., Hoekstra, R., Rouder, J. N., Lee, M. D., & Wagenmakers, E.-J. (2016). The fallacy of placing confidence in confidence intervals. <em>Psychonomic Bulletin & Review, 23,</em> 103–123.</p>
<p>Oakes, M. (1986). <em>Statistical inference: A commentary for the social and behavioral sciences.</em> New York: Wiley.</p>
<p>Oravecz, Z., & Muth, C. (<a href="https://quantdev.ssri.psu.edu/sites/qdev/files/BayesianGCM.pdf" rel="nofollow">this issue</a>). Fitting growth curve models in the Bayesian framework. <em>Psychonomic Bulletin and Review.</em></p>
<p>Plummer, M. (2003). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In K. Hornik, F. Leisch, & A. Zeileis (Eds.), <em>Proceedings of the 3rd international workshop on distributed statistical computing.</em> Vienna, Austria.</p>
<p>Rouder, J. N., Haaf, J., & Vandekerckhove, J. (<a href="https://osf.io/bvjg8/" rel="nofollow">this issue</a>). Bayesian inference for psychology, part IV: Parameter estimation and Bayes factors. <em>Psychonomic Bulletin and Review.</em></p>
<p>Schoenbrodt, F., & Wagenmakers, E.-J. (<a href="https://osf.io/d4dcu/" rel="nofollow">this issue</a>). Bayes factor design analysis: Planning for compelling evidence. <em>Psychonomic Bulletin and Review.</em></p>
<p>Stan Development Team. (2013). Stan: A C++ library for probability and sampling, version 1.1. Retrieved from <a href="http://mc-stan.org/" rel="nofollow">http://mc-stan.org/</a></p>
<p>van Ravenzwaaij, D., Cassey, P., & Brown, S. (<a href="https://link.springer.com/article/10.3758/s13423-016-1015-8" rel="nofollow">this issue</a>). A simple introduction to Markov chain Monte-Carlo sampling. <em>Psychonomic Bulletin and Review.</em></p>
<p>Voorspoels, W., Rutten, F., Bartlema, A., Tuerlinckx, F., & Vanpaemel, W. (<a href="https://link.springer.com/article/10.3758%2Fs13423-017-1245-4" rel="nofollow">this issue</a>). Sensitivity to the prototype in children with high-functioning autism spectrum disorder: An example of Bayesian cognitive psychometrics. <em>Psychonomic Bulletin and Review.</em></p>
<p>Wagenmakers, E.-J., Love, J., Marsman, M., Jamil, T., Ly, A., Verhagen, J., . . . Morey, R. D. (<a href="https://osf.io/ahhdr/" rel="nofollow">this issue</a>). Bayesian inference for psychology, part II: Example applications with JASP. <em>Psychonomic Bulletin and Review.</em></p>
<p>Wagenmakers, E.-J., Marsman, M., Jamil, T., Ly, A., Verhagen, J., Love, J., . . . Morey, R. (<a href="https://osf.io/8pzkb/" rel="nofollow">this issue</a>). Bayesian inference for psychology, part I: Theoretical advantages and practical ramifications. <em>Psychonomic Bulletin and Review.</em></p>
<p>Wasserstein, R. L., & Lazar, N. A. (2016). The ASA’s statement on p–values: Context, process, and purpose. <em>The American Statistician.</em></p>
