Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan, 2/e (Hardcover)

John Kruschke


Product Description

There is an explosion of interest in Bayesian statistics, primarily because recently created computational methods have finally made Bayesian analysis accessible to a wide audience. Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan provides an accessible approach to Bayesian data analysis, as material is explained clearly with concrete examples. The book begins with the basics, including essential concepts of probability and random sampling, and gradually progresses to advanced hierarchical modeling methods for realistic data. Included are step-by-step instructions on how to conduct Bayesian data analyses in the popular and free software R, JAGS, and Stan. This book is intended for first-year graduate students or advanced undergraduates. It provides a bridge between undergraduate training and modern Bayesian methods for data analysis, which are becoming the accepted research standard. Knowledge of algebra and basic calculus is a prerequisite.

New to this Edition (partial list):

  • There are all-new programs in JAGS and Stan. The new programs are designed to be much easier to use than the scripts in the first edition. In particular, there are now compact high-level scripts that make it easy to run the programs on your own data sets. This new programming was a major undertaking by itself.
  • The introductory Chapter 2, regarding the basic ideas of how Bayesian inference re-allocates credibility across possibilities, is completely rewritten and greatly expanded.
  • There are completely new chapters on the programming languages R (Ch. 3), JAGS (Ch. 8), and Stan (Ch. 14). The lengthy new chapter on R includes explanations of data files and structures such as lists and data frames, along with several utility functions. (It also has a new poem that I am particularly pleased with.) The new chapter on JAGS includes explanation of the runjags package, which executes JAGS on parallel computer cores. The new chapter on Stan provides a novel explanation of the concepts of Hamiltonian Monte Carlo. The chapter on Stan also explains conceptual differences in program flow between Stan and JAGS.
  • Chapter 5 on Bayes’ rule is greatly revised, with a new emphasis on how Bayes’ rule re-allocates credibility across parameter values from prior to posterior. The material on model comparison has been removed from all the early chapters and integrated into a compact presentation in Chapter 10.
  • What were two separate chapters on the Metropolis algorithm and Gibbs sampling have been consolidated into a single chapter on MCMC methods (as Chapter 7). There is extensive new material on MCMC convergence diagnostics in Chapters 7 and 8. There are explanations of autocorrelation and effective sample size. There is also exploration of the stability of the estimates of the HDI limits. New computer programs display the diagnostics, as well.
  • Chapter 9 on hierarchical models includes extensive new and unique material on the crucial concept of shrinkage, along with new examples.
  • All the material on model comparison, which was spread across various chapters in the first edition, is now consolidated into a single focused chapter (Ch. 10) that emphasizes its conceptualization as a case of hierarchical modeling.
  • Chapter 11 on null hypothesis significance testing is extensively revised. It has new material introducing the concept of a sampling distribution, with new illustrations of sampling distributions for various stopping rules and for multiple tests.
  • Chapter 12, regarding Bayesian approaches to null value assessment, has new material about the region of practical equivalence (ROPE), new examples of accepting the null value by Bayes factors, and new explanation of the Bayes factor in terms of the Savage-Dickey method.
  • Chapter 13, regarding statistical power and sample size, has an extensive new section on sequential testing, and on making the research goal precision of estimation rather than rejection or acceptance of a particular value.
  • Chapter 15, which introduces the generalized linear model, is fully revised, with more complete tables showing combinations of predicted and predictor variable types.
  • Chapter 16, regarding estimation of means, now includes extensive discussion of comparing two groups, along with explicit estimates of effect size.
  • Chapter 17, regarding regression on a single metric predictor, now includes extensive examples of robust regression in JAGS and Stan. New examples of hierarchical regression, including quadratic trend, graphically illustrate shrinkage in estimates of individual slopes and curvatures. The use of weighted data is also illustrated.
  • Chapter 18, on multiple linear regression, includes a new section on Bayesian variable selection, in which various candidate predictors are probabilistically included in the regression model.
  • Chapter 19, on one-factor ANOVA-like analysis, has all new examples, including a completely worked out example analogous to analysis of covariance (ANCOVA), and a new example involving heterogeneous variances.
  • Chapter 20, on multi-factor ANOVA-like analysis, has all new examples, including a completely worked out example of a split-plot design that involves a combination of a within-subjects factor and a between-subjects factor.
  • Chapter 21, on logistic regression, is expanded to include examples of robust logistic regression, and examples with nominal predictors.
  • There is a completely new chapter (Ch. 22) on multinomial logistic regression. This chapter fills in a case of the generalized linear model (namely, a nominal predicted variable) that was missing from the first edition.
  • Chapter 23, regarding ordinal data, is greatly expanded. New examples illustrate single-group and two-group analyses, and demonstrate how interpretations differ from treating ordinal data as if they were metric.
  • There is a new section (25.4) that explains how to model censored data in JAGS.
  • Many exercises are new or revised.
Features:

  • Accessible, including the basics of essential concepts of probability and random sampling
  • Examples with the R programming language and JAGS software
  • Comprehensive coverage of all scenarios addressed by non-Bayesian textbooks: t-tests, analysis of variance (ANOVA) and comparisons in ANOVA, multiple regression, and chi-square (contingency table analysis)
  • Coverage of experiment planning
  • R and JAGS computer programming code on the website
  • Exercises with explicit purposes and guidelines for accomplishment
  • Step-by-step instructions on how to conduct Bayesian data analyses in the popular and free software R, JAGS, and Stan
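The idea highlighted for Chapters 2 and 5, that Bayes' rule re-allocates credibility across candidate parameter values from prior to posterior, can be sketched in a few lines. This is an illustrative sketch (not the book's code, which uses R): the coin-flip data, the grid of bias values, and the uniform prior are all assumptions chosen for the example.

```python
# Hedged sketch: Bayes' rule re-allocating credibility over a grid of
# candidate coin-bias values. Data and prior are illustrative assumptions.

theta = [i / 10 for i in range(11)]        # candidate bias values 0.0 .. 1.0
prior = [1 / len(theta)] * len(theta)      # uniform prior credibility

heads, flips = 7, 10                       # hypothetical observed data
likelihood = [t**heads * (1 - t)**(flips - heads) for t in theta]

unnorm = [p * l for p, l in zip(prior, likelihood)]
evidence = sum(unnorm)                     # normalizer: p(data)
posterior = [u / evidence for u in unnorm] # credibility shifts toward theta = 0.7
```

After seeing 7 heads in 10 flips, the posterior concentrates near theta = 0.7, while values far from the data lose credibility, which is exactly the re-allocation the early chapters describe.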
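The MCMC topics consolidated into Chapter 7 (the Metropolis algorithm, autocorrelation, and effective sample size) can also be illustrated with a minimal sketch. This is not the book's program: the standard-normal target, the proposal step size, and the AR(1)-style effective-sample-size formula are simplifying assumptions for demonstration.

```python
# Hedged sketch: a minimal Metropolis sampler for a standard-normal target,
# plus a crude effective-sample-size estimate from lag-1 autocorrelation.
import math
import random

random.seed(1)

def log_target(x):
    """Log density of N(0, 1), up to an additive constant."""
    return -0.5 * x * x

def metropolis(n, step=1.0):
    chain, x = [], 0.0
    for _ in range(n):
        prop = x + random.gauss(0.0, step)        # symmetric proposal
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x = prop                              # accept the proposal
        chain.append(x)                           # on rejection, repeat x
    return chain

def lag1_autocorr(chain):
    m = sum(chain) / len(chain)
    var = sum((c - m) ** 2 for c in chain)
    cov = sum((a - m) * (b - m) for a, b in zip(chain, chain[1:]))
    return cov / var

chain = metropolis(20_000)
rho = lag1_autocorr(chain)
ess = len(chain) * (1 - rho) / (1 + rho)  # AR(1)-style ESS approximation
```

Because successive Metropolis samples are correlated, the effective sample size is well below the chain length; the convergence diagnostics discussed in Chapters 7 and 8 quantify exactly this kind of loss.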
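The region of practical equivalence (ROPE) decision rule from Chapter 12 can likewise be sketched on simulated posterior draws. This is an illustrative sketch, not the book's implementation: the posterior draws, the ROPE limits of ±0.1, and the 95% HDI mass are all assumed values, and the HDI is computed as the narrowest interval containing that mass of the sorted draws.

```python
# Hedged sketch: a ROPE-style decision on hypothetical posterior samples.
import random

random.seed(0)
# Hypothetical posterior draws for a parameter whose null value is 0.
posterior = [random.gauss(0.02, 0.01) for _ in range(10_000)]

def hdi(samples, mass=0.95):
    """Narrowest interval containing `mass` of the draws (highest density interval)."""
    s = sorted(samples)
    k = int(mass * len(s))
    width, i = min((s[i + k] - s[i], i) for i in range(len(s) - k))
    return s[i], s[i + k]

lo, hi = hdi(posterior)
rope = (-0.1, 0.1)                       # assumed region of practical equivalence
if rope[0] <= lo and hi <= rope[1]:
    decision = "accept null (HDI inside ROPE)"
elif hi < rope[0] or lo > rope[1]:
    decision = "reject null (HDI outside ROPE)"
else:
    decision = "withhold decision"
```

Here the entire 95% HDI falls inside the ROPE, so the null value is accepted for practical purposes, which is the kind of null-value assessment Chapter 12 contrasts with Bayes-factor approaches.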
