Statistical Learning Methods Study Notes - 09 - The EM Algorithm and Its Extensions

This note first describes the EM algorithm and then discusses its convergence. As an application of the EM algorithm, it introduces learning for Gaussian mixture models, and finally presents an extension of EM, the GEM algorithm.

Introduction to the EM Algorithm

Purpose: a probabilistic model sometimes contains both observed variables and latent (hidden) variables. The EM algorithm is a maximum likelihood estimation method, or a maximum a posteriori estimation method, for the parameters of probabilistic models with latent variables.
The EM algorithm
Input: observed variable data \(Y\), latent variable data \(Z\), joint distribution \(P(Y,Z|\theta)\), conditional distribution \(P(Z|Y,\theta)\);
Output: model parameter \(\theta\)

  • Choose an initial value \(\theta^{(0)}\) for the parameter and start iterating
  • \(E\) step: let \(\theta^{(i)}\) denote the estimate of the parameter \(\theta\) at the \(i\)-th iteration; in the \(E\) step of iteration \(i + 1\), compute

\[\begin{aligned} Q(\theta,\theta^{(i)}) &= E_Z[\log P(Y,Z|\theta)|Y,\theta^{(i)}] \\ &= \sum_Z\log P(Y,Z|\theta)P(Z|Y,\theta^{(i)}) \end{aligned} \]

  • M step: find the \(\theta\) that maximizes \(Q(\theta,\theta^{(i)})\), which gives the parameter estimate \(\theta^{(i + 1)}\) for iteration \(i + 1\)

\[\theta^{(i + 1)} = \arg\mathop{\max}\limits_{\theta}Q(\theta,\theta^{(i)}) \]

  • Repeat steps 2 and 3 until convergence

The Q function of the E step is the core of the EM algorithm. It is the expectation of the complete-data log-likelihood \(\log P(Y, Z|\theta)\) with respect to the conditional distribution \(P(Z|Y,\theta^{(i)})\) of the unobserved data \(Z\), given the observed data \(Y\) and the current parameter \(\theta^{(i)}\). The initial value \(\theta^{(0)}\) can be chosen arbitrarily, but the algorithm is sensitive to the initialization. The E step computes the Q function, the M step maximizes it to obtain \(\theta^{(i + 1)}\), and iteration stops when the change in \(\theta\) falls below a threshold.
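As an illustration of this iteration structure, here is a minimal Python sketch of the generic EM loop; the callables `e_step` and `m_step` are hypothetical placeholders that a concrete model (such as the Gaussian mixture below) would supply.

```python
import numpy as np

def em(y, theta0, e_step, m_step, tol=1e-6, max_iter=1000):
    """Generic EM loop: alternate E and M steps until theta stops changing.

    y       : observed data
    theta0  : initial parameter vector (EM is sensitive to this choice)
    e_step  : function (y, theta) -> expected statistics of Z under P(Z | Y, theta)
    m_step  : function (y, stats) -> new theta maximizing Q(theta, theta_i)
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        stats = e_step(y, theta)        # E step: expectations under P(Z | Y, theta_i)
        theta_new = m_step(y, stats)    # M step: argmax_theta Q(theta, theta_i)
        if np.max(np.abs(theta_new - theta)) < tol:  # stop when theta barely changes
            return theta_new
        theta = theta_new
    return theta
```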

Convergence of the EM Algorithm

  • Let \(P(Y|\theta)\) be the likelihood function of the observed data, \(\theta^{(i)},i = 1,2,\cdots\) the sequence of parameter estimates produced by the EM algorithm, and \(P(Y|\theta^{(i)}),i = 1,2,\cdots\) the corresponding sequence of likelihood values. Then \(P(Y|\theta^{(i)})\) is monotonically non-decreasing, that is:

\[P(Y|\theta^{(i + 1)}) \geq P(Y|\theta^{(i)}) \]

  • Let \(L(\theta) = \log P(Y|\theta)\) be the log-likelihood function of the observed data, \(\theta^{(i)},i = 1,2,\cdots\) the sequence of parameter estimates produced by the EM algorithm, and \(L(\theta^{(i)}) = \log P(Y|\theta^{(i)}),i = 1,2,\cdots\) the corresponding sequence of log-likelihood values. If \(P(Y|\theta)\) is bounded above, then \(L(\theta^{(i)})\) converges to some value. Moreover, under certain conditions on the functions \(Q(\theta,\theta')\) and \(L(\theta)\), the limit \(\theta^*\) of the parameter sequence \(\theta^{(i)}\) obtained by EM is a stationary point of \(L(\theta)\). A brief sketch of why the likelihood sequence is non-decreasing is given below.
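To see why the likelihood cannot decrease, take the expectation of the identity \(\log P(Y|\theta) = \log P(Y,Z|\theta) - \log P(Z|Y,\theta)\) with respect to \(P(Z|Y,\theta^{(i)})\) (a brief sketch of the standard argument):

\[L(\theta) = Q(\theta,\theta^{(i)}) - H(\theta,\theta^{(i)}),\qquad H(\theta,\theta^{(i)}) = \sum_Z\log P(Z|Y,\theta)P(Z|Y,\theta^{(i)}) \]

The M step guarantees \(Q(\theta^{(i + 1)},\theta^{(i)}) \geq Q(\theta^{(i)},\theta^{(i)})\), while Jensen's inequality gives \(H(\theta^{(i + 1)},\theta^{(i)}) \leq H(\theta^{(i)},\theta^{(i)})\); subtracting the two yields \(L(\theta^{(i + 1)}) \geq L(\theta^{(i)})\).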

Application of the EM Algorithm to Learning Gaussian Mixture Models

A Gaussian mixture model is a probability distribution model of the form

\[P(y|\theta) = \sum_{k = 1}^K\alpha_k\phi(y|\theta_k) \]

where the \(\alpha_k\) are mixture coefficients satisfying \(\alpha_k \geq 0,\sum_{k = 1}^K\alpha_k = 1\), and \(\phi(y|\theta_k)\) is the Gaussian density with \(\theta_k = (\mu_k,\sigma_k^2)\):

\[\phi(y|\theta_k) = \frac{1}{\sqrt{2\pi}\sigma_k}\exp\left(-\frac{(y - \mu_k)^2}{2\sigma_k^2}\right) \]

which is called the \(k\)-th component model. In general mixture models, the Gaussian density can be replaced by an arbitrary probability density. Suppose the observed data \(y_1,y_2,\cdots,y_N\) are generated by a Gaussian mixture model

\[P(y|\theta) = \sum_{k = 1}^K\alpha_k\phi(y|\theta_k) \]

where \(\theta = (\alpha_1,\alpha_2,\cdots,\alpha_K;\theta_1,\theta_2,\cdots,\theta_K)\). We now use the EM algorithm to estimate the parameter \(\theta\) of the Gaussian mixture model (a small Python sketch of evaluating this mixture density follows).
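As a quick illustration, here is a minimal Python sketch of evaluating the mixture density \(P(y|\theta) = \sum_{k = 1}^K\alpha_k\phi(y|\theta_k)\); the array names `alpha`, `mu`, `sigma2` are illustrative.

```python
import numpy as np

def gmm_density(y, alpha, mu, sigma2):
    """Mixture density P(y | theta) = sum_k alpha_k * phi(y | mu_k, sigma_k^2)."""
    y = np.asarray(y, dtype=float)[:, None]          # shape (N, 1)
    phi = np.exp(-(y - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    return (alpha * phi).sum(axis=1)                 # shape (N,)

# Example: a two-component mixture evaluated at a few points
print(gmm_density([0.0, 1.0, 2.0],
                  alpha=np.array([0.3, 0.7]),
                  mu=np.array([0.0, 2.0]),
                  sigma2=np.array([1.0, 0.25])))
```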
First, identify the latent variables and write out the complete-data log-likelihood. The latent variable \(\gamma_{jk}\) indicates whether observation \(j\) comes from the \(k\)-th component model:

\[\gamma_{jk} = \begin{cases} 1 & \text{observation } j \text{ comes from component } k\\ 0 & \text{otherwise} \end{cases} \quad j = 1,2,\cdots,N,\ k = 1,2,\cdots,K \]

With the observed data \(y_j\) and the unobserved data \(\gamma_{jk}\), the complete data are:

\[(y_j,\gamma_{j1},\gamma_{j2},\cdots,\gamma_{jK}),j = 1,2,\cdots,N \]

The complete-data likelihood function can then be written as:

\[\begin{aligned} P(y,\gamma|\theta) &= \prod_{j = 1}^NP(y_j,\gamma_{j1},\gamma_{j2},\cdots,\gamma_{jK}|\theta) \\ &= \prod_{k = 1}^K\prod_{j = 1}^N[\alpha_k\phi(y_j|\theta_k)]^{\gamma_{jk}} \\ &= \prod_{k = 1}^K\alpha_k^{n_k}\prod_{j = 1}^N[\phi(y_j|\theta_k)]^{\gamma_{jk}} \\ &= \prod_{k = 1}^K\alpha_k^{n_k}\prod_{j = 1}^N\left[\frac{1}{\sqrt{2\pi}\sigma_k}\exp\left(-\frac{(y_j - \mu_k)^2}{2\sigma_k^2}\right)\right]^{\gamma_{jk}} \\ n_k &= \sum_{j = 1}^N\gamma_{jk},\quad \sum_{k = 1}^Kn_k = N \end{aligned} \]

The complete-data log-likelihood is therefore:

\[\log P(y,\gamma|\theta) = \sum_{k = 1}^K\left\{n_k\log \alpha_k + \sum_{j = 1}^N\gamma_{jk}\left[\log\left(\frac{1}{\sqrt{2\pi}}\right) - \log \sigma_k - \frac{1}{2\sigma_k^2}(y_j - \mu_k)^2\right]\right\} \]

Next comes the E step of the EM algorithm: determining the Q function

\[\begin{aligned} Q(\theta,\theta^{(i)}) &= E[\log P(y,\gamma|\theta)|y,\theta^{(i)}] \\ &= E\left\{\sum_{k = 1}^K\left\{n_k\log \alpha_k + \sum_{j = 1}^N\gamma_{jk}\left[\log\left(\frac{1}{\sqrt{2\pi}}\right) - \log \sigma_k - \frac{1}{2\sigma_k^2}(y_j - \mu_k)^2\right]\right\}\right\} \\ &= \sum_{k = 1}^K\left\{\sum_{j = 1}^N\left(E[\gamma_{jk}]\right)\log \alpha_k + \sum_{j = 1}^N\left(E[\gamma_{jk}]\right)\left[\log\left(\frac{1}{\sqrt{2\pi}}\right) - \log \sigma_k - \frac{1}{2\sigma_k^2}(y_j - \mu_k)^2\right]\right\} \end{aligned} \]

We need to compute \(E(\gamma_{jk}|y,\theta)\), denoted \(\hat{\gamma}_{jk}\):

\[\begin{aligned} \hat{\gamma}_{jk} &= E({\gamma_{jk}}|y,\theta) = P(\gamma_{jk} = 1|y,\theta) \\ &= \frac{P(\gamma_{jk} = 1,y_j|\theta)}{\sum_{k = 1}^KP(\gamma_{jk} = 1,y_j|\theta)} \\ &= \frac{P(y_j|\gamma_{jk} = 1,\theta)P(\gamma_{jk} = 1|\theta)}{\sum_{k = 1}^KP(y_j|\gamma_{jk} = 1,\theta)P(\gamma_{jk} = 1|\theta)} \\ &= \frac{\alpha_k\phi(y_j|\theta_k)}{\sum_{k = 1}^K\alpha_k\phi(y_j|\theta_k)},j = 1,2,\cdots,N,k = 1,2,\cdots,K \end{aligned} \]

\(\hat{\gamma}_{jk}\) is the probability, under the current model parameters, that the \(j\)-th observation comes from the \(k\)-th component model; it is called the responsibility of component \(k\) for observation \(y_j\). Substituting \(\hat{\gamma}_{jk} = E[\gamma_{jk}]\) and \(n_k = \sum_{j = 1}^NE[\gamma_{jk}]\) into the expression above gives:

\[Q(\theta,\theta^{(i)}) = \sum_{k = 1}^K\left\{n_k\log \alpha_k + \sum_{j = 1}^N\hat{\gamma}_{jk}\left[\log\left(\frac{1}{\sqrt{2\pi}}\right) - \log \sigma_k - \frac{1}{2\sigma_k^2}(y_j - \mu_k)^2\right]\right\} \]

Next comes the M step of the EM algorithm: maximize \(Q(\theta,\theta^{(i)})\) with respect to \(\theta\) to obtain the model parameters for the next iteration:

\[\theta^{(i + 1)} = \arg\mathop{\max}\limits_\theta Q(\theta,\theta^{(i)}) \]

The parameter \(\theta^{(i + 1)}\) consists of \(\hat\mu_k,\hat\sigma_k^2,\hat\alpha_k,k = 1,2,\cdots,K\). The estimates \(\hat\mu_k\) and \(\hat\sigma_k^2\) are obtained by setting the corresponding partial derivatives of \(Q\) to zero, and \(\hat\alpha_k\) by maximizing under the constraint \(\sum_{k = 1}^K\alpha_k = 1\). The results are as follows (a short derivation of \(\hat\alpha_k\) is sketched after the formulas):

\[\hat\mu_k = \frac{\sum_{j = 1}^N\hat\gamma_{jk}y_j}{\sum_{j = 1}^{N}\hat\gamma_{jk}},k = 1,2,\cdots,K \\ \hat\sigma^2_k = \frac{\sum_{j = 1}^N\hat\gamma_{jk}(y_j - \mu_k)^2}{\sum_{j = 1}^{N}\hat\gamma_{jk}},k = 1,2,\cdots,K \\ \hat\alpha_k = \frac{n_k}{N} = \frac{\sum_{j =1}^N\hat\gamma_{jk}}{N},k = 1,2,\cdots,K \]
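For instance, \(\hat\alpha_k\) comes from maximizing the \(\alpha\)-dependent part \(\sum_{k = 1}^Kn_k\log\alpha_k\) of \(Q\) under the constraint \(\sum_{k = 1}^K\alpha_k = 1\), using a Lagrange multiplier (a brief sketch):

\[\frac{\partial}{\partial\alpha_k}\left[\sum_{k = 1}^Kn_k\log\alpha_k + \lambda\left(\sum_{k = 1}^K\alpha_k - 1\right)\right] = \frac{n_k}{\alpha_k} + \lambda = 0 \ \Rightarrow\ \alpha_k = -\frac{n_k}{\lambda} \]

Summing over \(k\) and using \(\sum_{k = 1}^K\alpha_k = 1\) and \(\sum_{k = 1}^Kn_k = N\) gives \(\lambda = -N\), hence \(\hat\alpha_k = n_k / N\).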

Repeat the above computation until the log-likelihood no longer changes appreciably. The EM algorithm for Gaussian mixture model parameter estimation:
Input: observed data \(y_1,y_2,\cdots,y_N\), the Gaussian mixture model
Output: Gaussian mixture model parameters

  • Choose initial parameter values and start iterating
  • E step: based on the current model parameters, compute the responsibility of component \(k\) for observation \(y_j\)

\[\hat{\gamma}_{jk} = \frac{\alpha_k\phi(y_j|\theta_k)}{\sum_{k = 1}^K\alpha_k\phi(y_j|\theta_k)},j = 1,2,\cdots,N,k = 1,2,\cdots,K \]

  • M step: compute the model parameters for the next iteration

\[\hat\mu_k = \frac{\sum_{j = 1}^N\hat\gamma_{jk}y_j}{\sum_{j = 1}^{N}\hat\gamma_{jk}},k = 1,2,\cdots,K \\ \hat\sigma^2_k = \frac{\sum_{j = 1}^N\hat\gamma_{jk}(y_j - \mu_k)^2}{\sum_{j = 1}^{N}\hat\gamma_{jk}},k = 1,2,\cdots,K \\ \hat\alpha_k = \frac{n_k}{N} = \frac{\sum_{j =1}^N\hat\gamma_{jk}}{N},k = 1,2,\cdots,K \]

  • Repeat steps 2 and 3 until the log-likelihood no longer changes appreciably, that is, until convergence (a Python sketch of the whole procedure follows this list)
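As a concrete illustration of this procedure, here is a minimal sketch of the one-dimensional Gaussian mixture EM iteration, assuming the observations are a NumPy array; the function and variable names are illustrative, not part of the book's presentation.

```python
import numpy as np

def gaussian_pdf(y, mu, sigma2):
    """Component density phi(y | theta_k) with theta_k = (mu_k, sigma_k^2)."""
    return np.exp(-(y - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

def gmm_em(y, K, n_iter=200, tol=1e-6, seed=0):
    """EM for a 1-D Gaussian mixture; returns (alpha, mu, sigma2)."""
    rng = np.random.default_rng(seed)
    N = len(y)
    alpha = np.full(K, 1.0 / K)                 # mixture coefficients
    mu = rng.choice(y, size=K, replace=False)   # initialize means from the data
    sigma2 = np.full(K, np.var(y))              # common initial variance
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E step: responsibilities gamma_hat[j, k]
        dens = alpha * gaussian_pdf(y[:, None], mu, sigma2)   # shape (N, K)
        gamma_hat = dens / dens.sum(axis=1, keepdims=True)
        # M step: closed-form updates derived above
        nk = gamma_hat.sum(axis=0)
        mu = (gamma_hat * y[:, None]).sum(axis=0) / nk
        sigma2 = (gamma_hat * (y[:, None] - mu) ** 2).sum(axis=0) / nk
        alpha = nk / N
        # stop when the observed-data log-likelihood (at the start of this
        # iteration) no longer changes appreciably
        ll = np.log(dens.sum(axis=1)).sum()
        if abs(ll - prev_ll) < tol:
            break
        prev_ll = ll
    return alpha, mu, sigma2

# Example usage on synthetic data drawn from two Gaussians
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 700)])
    print(gmm_em(data, K=2))
```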

Extensions of the EM Algorithm

The EM algorithm can be interpreted as a maximization-maximization algorithm of the F function. Based on this interpretation, there are several variants and extensions, such as the generalized expectation maximization (GEM) algorithm.
The \(F\) function: suppose the probability distribution of the latent data \(Z\) is \(\tilde{P}(Z)\). Define the function \(F(\tilde{P},\theta)\) of the distribution \(\tilde{P}\) and the parameter \(\theta\) as follows:

\[F(\tilde{P},\theta) = E_{\tilde{P}}[\log P(Y,Z|\theta)] + H(\tilde{P}) \]

This is called the \(F\) function, where \(H(\tilde{P}) = -E_{\tilde{P}}\log \tilde{P}(Z)\) is the entropy of the distribution \(\tilde{P}(Z)\). The \(F\) function has the following properties:
For fixed \(\theta\), there exists a unique distribution \(\tilde{P}_\theta\) that maximizes \(F(\tilde{P},\theta)\), and it is given by:

\[\tilde{P}_\theta(Z) = P(Z|Y,\theta) \]

\(\tilde{P}_\theta\)\(\theta\)连续变化,如果上式成立则:

\[F(\tilde{P}_\theta,\theta) = \log P(Y|\theta) \]
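This identity can be verified directly (a brief check): substituting \(\tilde{P}_\theta(Z) = P(Z|Y,\theta)\) into the definition of \(F\),

\[\begin{aligned} F(\tilde{P}_\theta,\theta) &= \sum_ZP(Z|Y,\theta)\log P(Y,Z|\theta) - \sum_ZP(Z|Y,\theta)\log P(Z|Y,\theta) \\ &= \sum_ZP(Z|Y,\theta)\log\frac{P(Y,Z|\theta)}{P(Z|Y,\theta)} = \sum_ZP(Z|Y,\theta)\log P(Y|\theta) = \log P(Y|\theta) \end{aligned} \]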

Let \(L(\theta) = \log P(Y|\theta)\) be the log-likelihood function of the observed data and \(\theta^{(i)},i = 1,2,\cdots\) the sequence of parameter estimates produced by the EM algorithm. If \(F(\tilde{P},\theta)\) attains a local maximum at \(\tilde{P}^*,\theta^*\), then \(L(\theta)\) attains a local maximum at \(\theta^*\); if \(F(\tilde{P},\theta)\) attains a global maximum at \(\tilde{P}^*,\theta^*\), then \(L(\theta)\) attains a global maximum at \(\theta^*\). One iteration of the EM algorithm can be realized by the maximization-maximization algorithm of the F function: let \(\theta^{(i)}\) be the estimate of the parameter \(\theta\) at the \(i\)-th iteration and \(\tilde{P}^{(i)}\) the estimate of the function \(\tilde{P}\) at the \(i\)-th iteration; the two steps of iteration \(i + 1\) are:

  • For fixed \(\theta^{(i)}\), find \(\tilde{P}^{(i + 1)}\) that maximizes \(F(\tilde{P},\theta^{(i)})\)
  • For fixed \(\tilde{P}^{(i + 1)}\), find \(\theta^{(i + 1)}\) that maximizes \(F(\tilde{P}^{(i + 1)},\theta)\)

GEM algorithm 1

Input: observed data, the F function
Output: model parameters

  • Initialize the parameter \(\theta^{(0)}\) and start iterating
  • At iteration \(i + 1\), step 1: let \(\theta^{(i)}\) be the estimate of the parameter \(\theta\) and \(\tilde{P}^{(i)}\) the estimate of the function \(\tilde{P}\); find the \(\tilde{P}^{(i + 1)}\) that maximizes \(F(\tilde{P},\theta^{(i)})\) over \(\tilde{P}\)
  • Step 2: find \(\theta^{(i + 1)}\) that maximizes \(F(\tilde{P}^{(i + 1)},\theta)\)
  • Repeat steps 2 and 3 until convergence

GEM algorithm 2

Input: observed data, the Q function
Output: model parameters

  • Initialize the parameter \(\theta^{(0)}\) and start iterating
  • At iteration \(i + 1\), step 1: let \(\theta^{(i)}\) denote the estimate of the parameter \(\theta\), and compute:

\[\begin{aligned} Q(\theta,\theta^{(i)}) &= E_Z[\log P(Y,Z|\theta)|Y,\theta^{(i)}] \\ &= \sum_Z\log P(Y,Z|\theta)P(Z|Y,\theta^{(i)}) \end{aligned} \]

  • Step 2: find a \(\theta^{(i + 1)}\) such that

\[Q(\theta^{(i + 1)},\theta^{(i)}) \gt Q(\theta^{(i)},\theta^{(i)}) \]

  • Repeat steps 2 and 3 until convergence (a sketch of such a partial M step follows this list)
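The point of GEM algorithm 2 is that the M step does not have to maximize \(Q\) exactly; any \(\theta^{(i + 1)}\) that increases \(Q\) will do. As an illustration, here is a minimal sketch for the Gaussian mixture above in which only the means are updated, by a single gradient-ascent step on \(Q\); the step size `lr` and the function name are assumptions of this example, not part of the book.

```python
import numpy as np

def gem_step_means(y, alpha, mu, sigma2, lr=0.05):
    """One GEM-style iteration: exact E step, then a single gradient-ascent
    step on Q with respect to the means instead of an exact M step."""
    # E step: responsibilities under the current parameters
    dens = alpha * np.exp(-(y[:, None] - mu) ** 2 / (2 * sigma2)) \
           / np.sqrt(2 * np.pi * sigma2)                      # shape (N, K)
    gamma_hat = dens / dens.sum(axis=1, keepdims=True)
    # Partial M step: dQ/dmu_k = sum_j gamma_hat[j, k] * (y_j - mu_k) / sigma2_k;
    # a small step in this direction increases Q, which is all GEM requires.
    grad_mu = (gamma_hat * (y[:, None] - mu)).sum(axis=0) / sigma2
    return mu + lr * grad_mu
```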

GEM algorithm 3

When the dimension of the parameter \(\theta\) is \(d\ (d \geq 2)\), a special GEM algorithm can be used: it decomposes the M step of the EM algorithm into \(d\) conditional maximizations, each changing only one component of the parameter vector while keeping the remaining components fixed
Input: observed data, the Q function
Output: model parameters

  • Initialize the parameter \(\theta^{(0)} = (\theta^{(0)}_1,\theta^{(0)}_2,\cdots,\theta^{(0)}_d)\) and start iterating
  • At iteration \(i + 1\), step 1: let \(\theta^{(i)}\) denote the estimate of the parameter \(\theta\), and compute:

\[\begin{aligned} Q(\theta,\theta^{(i)}) &= E_Z[\log P(Y,Z|\theta)|Y,\theta^{(i)}] \\ &= \sum_Z\log P(Y,Z|\theta)P(Z|Y,\theta^{(i)}) \end{aligned} \]

  • Step 2: perform \(d\) conditional maximizations. First, with \(\theta^{(i)}_2,\cdots,\theta^{(i)}_d\) held fixed, find the \(\theta_1^{(i + 1)}\) that maximizes \(Q(\theta,\theta^{(i)})\); then, with \(\theta_1^{(i + 1)}\) and \(\theta^{(i)}_3,\cdots,\theta^{(i)}_d\) held fixed, find \(\theta_2^{(i + 1)}\); continue in this way until \(\theta^{(i + 1)} = (\theta_1^{(i + 1)},\theta_2^{(i + 1)},\cdots,\theta_d^{(i + 1)})\) is obtained, which satisfies

\[Q(\theta^{(i + 1)},\theta^{(i)}) \gt Q(\theta^{(i)},\theta^{(i)}) \]

  • Repeat steps 2 and 3 until convergence
posted @ 2022-10-09 16:51 eryo