## EM for mixture models

The standard algorithm for fitting mixture models is **Expectation Maximization (EM)**. This algorithm was the focus of Chapter 3, *Learning Parameters*. Here, we briefly recall its basic principles, before applying it to a Bernoulli mixture model.

A good R package for learning mixture models is `mixtools`. A thorough presentation of this package is given in *mixtools: An R Package for Analyzing Finite Mixture Models*, Journal of Statistical Software, Vol. 32, Issue 6, October 2009.

The EM algorithm is a good choice for learning a mixture model. Indeed, in Chapter 3, *Learning Parameters*, we saw that when data is missing, or even when variables are hidden (that is, all of their respective data is missing), the EM algorithm proceeds in two steps: first, compute the expected value of the missing variables, so as to proceed as if the data were fully observed; then, maximize an objective function, usually the likelihood. Then, given the new set of parameters...
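As a sketch of these two alternating steps, the following illustrative implementation fits a mixture of multivariate Bernoulli distributions. Note that the book works in R with `mixtools`; this is a minimal, self-contained Python sketch whose function name and details are our own, shown only to make the E-step/M-step structure concrete:

```python
import random

def em_bernoulli_mixture(data, k, n_iter=50, seed=0):
    """EM for a mixture of k multivariate Bernoulli distributions.

    data: list of binary vectors (lists of 0/1), all of length d.
    Returns (weights, probs): the mixture weights and, for each of
    the k components, a list of d Bernoulli parameters.
    """
    rng = random.Random(seed)
    d = len(data[0])
    weights = [1.0 / k] * k
    # Random initialization of the Bernoulli parameters, away from 0 and 1
    probs = [[rng.uniform(0.25, 0.75) for _ in range(d)] for _ in range(k)]
    for _ in range(n_iter):
        # E-step: compute the responsibility (posterior probability)
        # of each component for each data point
        resp = []
        for x in data:
            likes = []
            for j in range(k):
                l = weights[j]
                for xi, p in zip(x, probs[j]):
                    l *= p if xi else (1.0 - p)
                likes.append(l)
            s = sum(likes) or 1e-300  # guard against numerical underflow
            resp.append([l / s for l in likes])
        # M-step: re-estimate weights and parameters by maximizing the
        # expected complete-data log-likelihood (closed form here)
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(data)
            probs[j] = [
                sum(r[j] * x[i] for r, x in zip(resp, data)) / (nj or 1e-300)
                for i in range(d)
            ]
    return weights, probs
```

For example, calling `em_bernoulli_mixture(data, 2)` on binary vectors drawn from two distinct Bernoulli profiles should recover two well-separated components, with the learned weights summing to one.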