Contents

- 1 Theory
- 2 Code
- 3 References
1 Theory

The EM algorithm performs maximum-likelihood estimation by iteratively maximizing the log-likelihood of the observed data, $L(\theta) = \log P(Y \mid \theta)$. Each iteration consists of two steps:

- E step: compute the expectation
  $$Q\left(\theta, \theta^{(i)}\right)=\sum_{Z} \log P(Y, Z \mid \theta)\, P\left(Z \mid Y, \theta^{(i)}\right)$$
- M step: maximize it
  $$\theta^{(i+1)}=\arg \max_{\theta} Q\left(\theta, \theta^{(i)}\right)$$
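For the three-coin model that the code below implements (a hidden coin A with head probability $\pi$ selects which of coins B and C, with head probabilities $p$ and $q$, generates each observation $y_i$; the symbols $\pi, p, q, \mu_i$ are this sketch's naming, mirroring Li Hang's book), the two steps specialize to closed forms. E step, the responsibility that $y_i$ came from coin B:

$$\mu_i = \frac{\pi\, p^{y_i}(1-p)^{1-y_i}}{\pi\, p^{y_i}(1-p)^{1-y_i} + (1-\pi)\, q^{y_i}(1-q)^{1-y_i}}$$

M step, the parameter updates:

$$\pi^{(i+1)} = \frac{1}{n}\sum_{k=1}^{n} \mu_k, \qquad p^{(i+1)} = \frac{\sum_{k} \mu_k\, y_k}{\sum_{k} \mu_k}, \qquad q^{(i+1)} = \frac{\sum_{k} (1-\mu_k)\, y_k}{\sum_{k} (1-\mu_k)}$$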
2 Code
```python
import math


class EM:
    def __init__(self, prob):
        # initial estimates: pro_A = mixing weight (coin A heads),
        # pro_B / pro_C = head probabilities of coins B and C
        self.pro_A, self.pro_B, self.pro_C = prob

    # E step: responsibility that observation i was generated by coin B
    def pmf(self, i):
        pro_1 = self.pro_A * math.pow(self.pro_B, self.data[i]) * math.pow(
            (1 - self.pro_B), 1 - self.data[i])
        pro_2 = (1 - self.pro_A) * math.pow(self.pro_C, self.data[i]) * math.pow(
            (1 - self.pro_C), 1 - self.data[i])
        return pro_1 / (pro_1 + pro_2)

    # M step: re-estimate the parameters; one iteration per send() on the generator
    def fit(self, data):
        self.data = data
        count = len(data)
        for d in range(count):
            _ = yield
            _pmf = [self.pmf(k) for k in range(count)]
            pro_A = 1 / count * sum(_pmf)
            pro_B = sum([_pmf[k] * data[k] for k in range(count)]) / sum(
                [_pmf[k] for k in range(count)])
            pro_C = sum([(1 - _pmf[k]) * data[k]
                         for k in range(count)]) / sum([(1 - _pmf[k])
                                                        for k in range(count)])
            self.pro_A = pro_A
            self.pro_B = pro_B
            self.pro_C = pro_C
```
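As a quick check, here is a self-contained sketch of one EM iteration for the same three-coin model (the standalone `em_step` helper and the example observation sequence are this note's assumptions, written to mirror the class above):

```python
def em_step(data, pro_A, pro_B, pro_C):
    """One EM iteration for the three-coin model.

    pro_A: mixing weight (probability coin A shows heads);
    pro_B, pro_C: head probabilities of coins B and C.
    """
    # E step: responsibility that each observation came from coin B
    mu = []
    for y in data:
        p1 = pro_A * pro_B**y * (1 - pro_B)**(1 - y)
        p2 = (1 - pro_A) * pro_C**y * (1 - pro_C)**(1 - y)
        mu.append(p1 / (p1 + p2))
    # M step: closed-form parameter updates
    n = len(data)
    new_A = sum(mu) / n
    new_B = sum(m * y for m, y in zip(mu, data)) / sum(mu)
    new_C = sum((1 - m) * y for m, y in zip(mu, data)) / sum(1 - m for m in mu)
    return new_A, new_B, new_C


data = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
print(em_step(data, 0.5, 0.5, 0.5))  # -> (0.5, 0.6, 0.6)
```

With the symmetric start `(0.5, 0.5, 0.5)` every responsibility is 0.5, so a single update lands on the fixed point `(0.5, 0.6, 0.6)` and further iterations do not move; a different initialization converges to a different fixed point, which illustrates EM's sensitivity to the initial values.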
3 References

Theory: Zhou Zhihua, *Machine Learning*; Li Hang, *Statistical Learning Methods*
Code: https://github.com/fengdu78/lihang-code