5.12 Bayesian approach for parameter estimation
In maximum likelihood estimation, we attempt to find the parameter values that maximize the marginalized log-likelihood.
In the Bayesian framework, we instead aim to determine the posterior distribution of all model parameters, which by Bayes' theorem is proportional to the likelihood times the prior: $p(\boldsymbol{\theta} \mid \mathbf{Y}) \propto p(\mathbf{Y} \mid \boldsymbol{\theta})\, p(\boldsymbol{\theta})$, where $\mathbf{Y}$ denotes the item response data and $\boldsymbol{\theta}$ all model parameters.
Typically, in MCMC programs such as JAGS and nimble, one needs to specify two things:
- priors for all model parameters, and
- the model likelihood
All parameters have prior distributions:
- Item parameters follow a $\text{Beta}(1, 1)$ prior, which is a uniform distribution ranging from 0 to 1:
curve(dbeta(x, 1, 1))
- Each person’s latent class membership latent.group.index[n] follows a categorical distribution, with one probability per latent class.
- The vector of class-membership probabilities in the categorical distribution follows a Dirichlet distribution.
Regarding the likelihood, we need to specify the probability of observing each item response given a person’s latent class membership:
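Putting the priors and likelihood together, the model can be written as a JAGS-style model block. This is a minimal sketch for binary responses: the node names pi, alpha, theta, and y, as well as the dimensions N (persons), I (items), and C (latent classes), are illustrative assumptions rather than the chapter's exact specification.

```
model {
  # Prior for the class-membership probabilities: Dirichlet
  pi[1:C] ~ ddirch(alpha[1:C])

  # Priors for item parameters: Beta(1, 1), i.e., uniform on (0, 1)
  for (c in 1:C) {
    for (i in 1:I) {
      theta[c, i] ~ dbeta(1, 1)
    }
  }

  for (n in 1:N) {
    # Latent class membership: categorical distribution
    latent.group.index[n] ~ dcat(pi[1:C])
    for (i in 1:I) {
      # The response probability depends on the person's latent class
      y[n, i] ~ dbern(theta[latent.group.index[n], i])
    }
  }
}
```

The same structure carries over to nimble with minor syntax changes, since nimble accepts BUGS-style model code.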
Although MCMC is guaranteed to converge to the target posterior distribution of the model parameters, one may need a very long MCMC chain to achieve convergence in practice.
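It is therefore standard practice to run multiple chains and check convergence diagnostics before trusting the posterior summaries. A sketch using the coda package, assuming samples is an mcmc.list of posterior draws (e.g., as returned by coda.samples() in JAGS):

```
library(coda)

# Gelman-Rubin diagnostic (potential scale reduction factor):
# values close to 1 for each parameter suggest the chains have mixed
gelman.diag(samples, multivariate = FALSE)

# Effective sample size: roughly how many independent draws
# the autocorrelated chain is worth
effectiveSize(samples)
```

If the diagnostics look poor, common remedies are running the chains longer, discarding a larger burn-in, or thinning the chains.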