2019 Academic Year: Advanced Topics in Integrated Mathematical Sciences B (Introduction to Bayesian Statistics, Graduate School of Science)
Instructor
- Shintaro Hashimoto
- C814
- s-hashimoto(at)hiroshima-u.ac.jp
Texts
- Peter D. Hoff (2009). A First Course in Bayesian Statistical Methods, Springer. (main)
- Andrew Gelman, John B. Carlin, Hal S. Stern, David B. Dunson, Aki Vehtari and Donald B. Rubin (2013). Bayesian Data Analysis (3rd Edition), Chapman & Hall. (optional)
- Jean-Michel Marin and Christian P. Robert (2014). Bayesian Essentials with R, Springer. (optional)
Schedule
Week 16 (2/6): Semiparametric Bayesian copula estimation
- Reading: PH Chapter 12
- References:
- Hoff, P. D. (2007). Extending the rank likelihood for semiparametric copula estimation. The Annals of Applied Statistics, 1, 265-283.
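- Example: a minimal statement (illustrative notation, following Hoff, 2007) of the Gaussian copula model behind the extended rank likelihood:
```latex
% Gaussian copula model with unknown margins (cf. Hoff, 2007):
z_1, \dots, z_n \overset{iid}{\sim} N_p(0, C), \qquad
y_{ij} = F_j^{-1}\bigl(\Phi(z_{ij})\bigr),
% where C is a correlation matrix and F_1, \dots, F_p are unknown marginal cdfs.
% The extended rank likelihood conditions on the latent z's being consistent
% with the observed ranks, so C can be estimated without estimating the F_j.
```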
Week 15 (2/4): Linear mixed-effects models
- Reading: PH Chapter 11
- References:
- Papaspiliopoulos, O., Roberts, G. O. and Skold, M. (2007). A general framework for the parametrization of hierarchical models. Statistical Science, 22, 59-73.
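- Example: the model in sketch form, plus the reparameterization idea from Papaspiliopoulos et al. (2007); notation is illustrative, not tied to the lecture:
```latex
% Linear mixed-effects model for grouped data (cf. PH Ch. 11):
y_{ij} = x_{ij}^\top \beta + z_{ij}^\top b_j + \varepsilon_{ij}, \qquad
b_j \sim N(0, \Psi), \quad \varepsilon_{ij} \sim N(0, \sigma^2).
% Centered vs. non-centered parameterization, shown for a random intercept:
% centered:     \theta_j \sim N(\mu, \tau^2);
% non-centered: \theta_j = \mu + \tau \eta_j, \quad \eta_j \sim N(0, 1).
% Which parameterization mixes better depends on the relative sizes of the
% variance components.
```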
Week 14 (1/30): Data augmentation
- Reading: Gibbs sampling for probit regression in Albert and Chib (1993)
- Homework: Homework 3
- References:
- Albert, J. H. and Chib, S. (1993). Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88, 669-679.
- Tanner, M. A. and Wong, W. H. (1987). The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82, 528-549.
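- Example: a self-contained R sketch of the Albert-Chib Gibbs sampler for probit regression; the data, prior, and run length are made up for illustration:
```r
## Bayesian probit regression via data augmentation (Albert and Chib, 1993):
## latent z_i ~ N(x_i' beta, 1) with y_i = 1{z_i > 0}.
set.seed(1)
n <- 200
X <- cbind(1, rnorm(n))                 # made-up design matrix
beta_true <- c(-0.5, 1)
y <- as.numeric(X %*% beta_true + rnorm(n) > 0)

p <- ncol(X); b0 <- 100                 # beta ~ N(0, b0 * I), illustrative prior
V <- solve(t(X) %*% X + diag(p) / b0)   # posterior covariance of beta given z
S <- 5000
beta <- matrix(0, S, p)
for (s in 2:S) {
  mu <- drop(X %*% beta[s - 1, ])
  # z_i | beta, y_i: truncated normal, sampled by inverse-CDF
  lo <- ifelse(y == 1, pnorm(0, mu), 0)
  hi <- ifelse(y == 1, 1, pnorm(0, mu))
  z  <- qnorm(runif(n, lo, hi), mu)
  # beta | z: multivariate normal
  m  <- V %*% t(X) %*% z
  beta[s, ] <- drop(m + t(chol(V)) %*% rnorm(p))
}
colMeans(beta[-(1:500), ])              # posterior means after burn-in
```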
Week 13 (1/28): Metropolis-Hastings algorithm
- Reading: PH Chapter 10
- References:
- Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97-109.
- Dunson, D. B. and Johndrow, J. E. (2020). The Hastings algorithm at fifty. To appear in Biometrika.
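- Example: an illustrative R implementation of Metropolis-Hastings with an asymmetric proposal (a multiplicative log-normal random walk); the Gamma target is a stand-in so the output can be checked exactly:
```r
## MH on a positive parameter with a log-normal random-walk proposal.
## Target: Gamma(a, b), e.g. a Poisson-gamma posterior (numbers made up).
set.seed(1)
a <- 10; b <- 2
log_target <- function(th) dgamma(th, shape = a, rate = b, log = TRUE)

S <- 10000; th <- numeric(S); th[1] <- 1
delta <- 0.5                            # proposal scale on the log axis
for (s in 2:S) {
  prop <- th[s - 1] * exp(rnorm(1, 0, delta))
  # Hastings correction for the asymmetric kernel:
  # q(th | prop) / q(prop | th) = prop / th for this log-normal proposal
  log_r <- log_target(prop) - log_target(th[s - 1]) +
           log(prop) - log(th[s - 1])
  if (log(runif(1)) < log_r) th[s] <- prop else th[s] <- th[s - 1]
}
c(mcmc = mean(th), exact = a / b)       # compare the MCMC mean with a/b
```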
Week 12 (1/23): Metropolis algorithm
- Reading: PH Chapter 10
- References:
- Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. and Teller, E. (1953). Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21, 1087-1092.
- Roberts, G. O., Gelman, A. and Gilks, W. R. (1997). Weak convergence and optimal scaling of random walk Metropolis algorithms. The Annals of Applied Probability, 7, 110-120.
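- Example: a minimal random-walk Metropolis sampler in R; the N(0,1) target and the proposal scale are illustrative choices:
```r
## Random-walk Metropolis with a symmetric normal proposal; the standard
## normal target is a stand-in so the output is easy to check.
set.seed(1)
log_target <- function(x) dnorm(x, log = TRUE)

S <- 10000; x <- numeric(S); delta <- 2.4; acc <- 0
for (s in 2:S) {
  prop <- rnorm(1, x[s - 1], delta)
  # symmetric proposal: the acceptance ratio is just the target ratio
  if (log(runif(1)) < log_target(prop) - log_target(x[s - 1])) {
    x[s] <- prop; acc <- acc + 1
  } else x[s] <- x[s - 1]
}
acc / (S - 1)   # acceptance rate; monitor when tuning delta (Roberts et al., 1997)
```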
Week 11 (1/21): Variable selection and sparse linear models
- Reading: PH Chapter 9 and Park and Casella (2008)
- References:
- Park, T. and Casella, G. (2008). The Bayesian lasso. Journal of the American Statistical Association, 103, 681-686.
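- Example: the scale-mixture representation that makes the Bayesian lasso amenable to Gibbs sampling (as in Park and Casella, 2008):
```latex
\beta_j \mid \sigma^2, \tau_j^2 \sim N(0, \sigma^2 \tau_j^2), \qquad
\tau_j^2 \overset{iid}{\sim} \mathrm{Exp}(\lambda^2 / 2).
% Marginalizing over \tau_j^2 yields the conditional Laplace prior
\pi(\beta_j \mid \sigma^2)
  = \frac{\lambda}{2\sqrt{\sigma^2}}
    \exp\!\left( -\lambda \lvert \beta_j \rvert / \sqrt{\sigma^2} \right),
% so every full conditional is standard and Gibbs sampling applies.
```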
Week 10 (1/16): Hierarchical modeling (cont'd); start of Bayesian linear regression
- Reading: PH Chapter 8 (cont'd); start PH Chapter 9
- References:
- Liang, F., Paulo, R., Molina, G., Clyde, M. A. and Berger, J. O. (2008). Mixtures of g priors for Bayesian variable selection. Journal of the American Statistical Association, 103, 410-423.
- Raftery, A. E., Madigan, D. and Hoeting, J. A. (1997). Bayesian model averaging for linear regression models. Journal of the American Statistical Association, 92, 179-191.
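- Example: the shrinkage induced by Zellner's g-prior, a standard identity stated in illustrative notation:
```latex
% For y = X\beta + \varepsilon with
% \beta \mid \sigma^2 \sim N\bigl(0, \, g \, \sigma^2 (X^\top X)^{-1}\bigr):
E[\beta \mid y, \sigma^2] = \frac{g}{1 + g}\, \hat{\beta}_{\mathrm{OLS}},
% i.e. the posterior mean shrinks the least-squares estimate toward the
% prior mean 0 by the factor g / (1 + g).
```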
Week 9 (1/9): Missing data and imputation; hierarchical modeling
- Reading: PH Chapter 7 (Section 7.5), Start Chapter 8
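- Example: the hierarchical normal model of PH Chapter 8 in sketch form:
```latex
y_{ij} \mid \theta_j, \sigma^2 \sim N(\theta_j, \sigma^2), \quad i = 1, \dots, n_j,
\qquad
\theta_j \mid \mu, \tau^2 \sim N(\mu, \tau^2), \quad j = 1, \dots, m.
% With (semi)conjugate priors on \mu, \tau^2 and \sigma^2, all full
% conditionals are standard, so the model is fit by Gibbs sampling.
```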
Week 8 (1/7): Canceled
Week 7 (12/24): Multivariate normal model
- Reading: PH Chapter 6 (cont'd), Start PH Chapter 7 (Section 7.1-7.4)
- Homework: Homework 2
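- Example: the two full conditionals that drive the Gibbs sampler for the semiconjugate multivariate normal model (notation as in PH Chapter 7):
```latex
% Model: y_1, \dots, y_n \overset{iid}{\sim} N_p(\theta, \Sigma), with
% \theta \sim N_p(\mu_0, \Lambda_0) and
% \Sigma \sim \mathrm{inverse\text{-}Wishart}(\nu_0, S_0^{-1}).
\theta \mid \Sigma, y \sim N_p(\mu_n, \Lambda_n), \quad
  \Lambda_n = (\Lambda_0^{-1} + n \Sigma^{-1})^{-1}, \quad
  \mu_n = \Lambda_n (\Lambda_0^{-1} \mu_0 + n \Sigma^{-1} \bar{y}),
\qquad
\Sigma \mid \theta, y \sim
  \mathrm{inverse\text{-}Wishart}\bigl(\nu_0 + n, \, [S_0 + S_\theta]^{-1}\bigr),
  \quad S_\theta = \sum_{i=1}^n (y_i - \theta)(y_i - \theta)^\top.
```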
Week 6 (12/19): Introduction to MCMC (Gibbs sampling)
- Reading: PH Chapter 6
- References:
- Gelfand, A. E. and Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85, 398-409.
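- Example: a minimal R Gibbs sampler for the semiconjugate normal model of PH Chapter 6; the data and hyperparameters are made up:
```r
## Semiconjugate normal model:
## y_i ~ N(theta, sigma2), theta ~ N(mu0, tau0sq),
## 1/sigma2 ~ Gamma(nu0/2, nu0*s0sq/2).
set.seed(1)
y <- rnorm(30, mean = 10, sd = 2)              # made-up data
mu0 <- 0; tau0sq <- 100; nu0 <- 1; s0sq <- 1   # illustrative hyperparameters
n <- length(y); ybar <- mean(y)

S <- 5000
theta <- numeric(S); sigma2 <- numeric(S)
theta[1] <- ybar; sigma2[1] <- var(y)          # starting values
for (s in 2:S) {
  # theta | sigma2, y: normal full conditional
  taunsq <- 1 / (1 / tau0sq + n / sigma2[s - 1])
  mun    <- taunsq * (mu0 / tau0sq + n * ybar / sigma2[s - 1])
  theta[s] <- rnorm(1, mun, sqrt(taunsq))
  # 1/sigma2 | theta, y: gamma full conditional
  sigma2[s] <- 1 / rgamma(1, (nu0 + n) / 2,
                          (nu0 * s0sq + sum((y - theta[s])^2)) / 2)
}
mean(theta); quantile(theta, c(0.025, 0.975))  # posterior summaries
```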
Week 5 (12/17): Normal distribution
- Reading: PH Chapter 5
Week 4 (12/12): Objective prior, Monte Carlo approximation
- Reading: Objective priors + Start PH Chapter 4
- References:
- Ghosh, M. (2011). Objective priors: An introduction for frequentists. Statistical Science, 26(2), 187-202.
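- Example: Monte Carlo approximation of posterior quantities from simulated draws; the Beta(14, 8) "posterior" is a made-up stand-in:
```r
## Posterior summaries of theta and of transformations g(theta), estimated
## from S independent draws rather than computed analytically.
set.seed(1)
S <- 10000
theta <- rbeta(S, 14, 8)           # pretend this is a beta-binomial posterior

mean(theta)                        # MC estimate of E[theta | y]; exact is 14/22
mean(theta > 0.5)                  # MC estimate of Pr(theta > 0.5 | y)
quantile(theta, c(0.025, 0.975))   # MC 95% credible interval
mean(theta / (1 - theta))          # E of the odds, easy by Monte Carlo
```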
Week 3 (12/10): Credible interval, Count data
- Reading: PH Chapter 3 (Section 3.2 and 3.3)
- Homework: Homework 1
- References:
- Hartigan, J. A. (1966). Note on the confidence-prior of Welch and Peers. Journal of the Royal Statistical Society, Series B, 28(1), 55-56.
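- Example: a conjugate Poisson-gamma analysis for count data with a quantile-based credible interval; all numbers are made up:
```r
## y_1,...,y_n ~ Poisson(theta), theta ~ Gamma(a, b)
## => theta | y ~ Gamma(a + sum(y), b + n).
a <- 2; b <- 1                    # illustrative prior
y <- c(3, 5, 1, 4, 2)             # made-up counts
a_post <- a + sum(y); b_post <- b + length(y)

a_post / b_post                   # posterior mean
qgamma(c(0.025, 0.975),           # 95% quantile-based credible interval
       shape = a_post, rate = b_post)
```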
Week 2 (12/5): Binary data, Conjugate prior, Posterior predictive density
- Reading: PH Chapter 3 (Section 3.1)
- References:
- Aitchison, J. (1975). Goodness of prediction fit. Biometrika, 62(3), 547-549.
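- Example: the conjugate beta-binomial update and its posterior predictive in R; all numbers are made up:
```r
## y successes in n Bernoulli trials, theta ~ Beta(a, b)
## => theta | y ~ Beta(a + y, b + n - y).
a <- 1; b <- 1                          # uniform prior, an illustrative choice
n <- 20; y <- 13                        # made-up data
a_post <- a + y; b_post <- b + n - y

a_post / (a_post + b_post)              # posterior mean
qbeta(c(0.025, 0.975), a_post, b_post)  # 95% credible interval
# Posterior predictive: Pr(next trial is a success | y) equals the
# posterior mean (a + y) / (a + b + n) for this model.
```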
Week 1 (12/3): Introduction
- Reading: PH Chapter 1 and 2
Evaluation
Outline
- Probability and Bayes' formula (12/3)
- Inference for one-parameter models (12/5, 12/10)
- Monte Carlo approximation (12/12)
- Inference for the normal distribution (12/17)
- Markov chain Monte Carlo methods (Gibbs sampling: 12/19; Metropolis and MH algorithms: 1/23, 1/28)
- Multivariate normal model (12/24), Missing data (1/9)
- Hierarchical models (1/9,1/16)
- Linear regression models and Bayesian lasso (1/16, 1/21)
- Data augmentation for generalized linear models (1/30)
- Linear mixed-effects models (2/4)
- Semiparametric copula estimation (2/6)