Eu quero implementar o algoritmo EM manualmente e depois compará-lo com os resultados normalmixEM
do mixtools
pacote. Claro, eu ficaria feliz se os dois tivessem os mesmos resultados. A referência principal é Geoffrey McLachlan (2000), Modelos de Mistura Finita .
Eu tenho uma densidade de mistura de dois gaussianos, de forma geral, a probabilidade de log é dada por (McLachlan página 48):
A etapa E é agora, cálculo da expectativa condicional:
I tried to write a R code (data can be found here).
# EM algorithm manually
# dat is the data
# initial values
pi1 <- 0.5
pi2 <- 0.5
mu1 <- -0.01
mu2 <- 0.01
sigma1 <- 0.01
sigma2 <- 0.02
loglik[1] <- 0
loglik[2] <- sum(pi1*(log(pi1) + log(dnorm(dat,mu1,sigma1)))) +
sum(pi2*(log(pi2) + log(dnorm(dat,mu2,sigma2))))
tau1 <- 0
tau2 <- 0
k <- 1
# loop
while(abs(loglik[k+1]-loglik[k]) >= 0.00001) {
# E step
tau1 <- pi1*dnorm(dat,mean=mu1,sd=sigma1)/(pi1*dnorm(x,mean=mu1,sd=sigma1) +
pi2*dnorm(dat,mean=mu2,sd=sigma2))
tau2 <- pi2*dnorm(dat,mean=mu2,sd=sigma2)/(pi1*dnorm(x,mean=mu1,sd=sigma1) +
pi2*dnorm(dat,mean=mu2,sd=sigma2))
# M step
pi1 <- sum(tau1)/length(dat)
pi2 <- sum(tau2)/length(dat)
mu1 <- sum(tau1*x)/sum(tau1)
mu2 <- sum(tau2*x)/sum(tau2)
sigma1 <- sum(tau1*(x-mu1)^2)/sum(tau1)
sigma2 <- sum(tau2*(x-mu2)^2)/sum(tau2)
loglik[k] <- sum(tau1*(log(pi1) + log(dnorm(x,mu1,sigma1)))) +
sum(tau2*(log(pi2) + log(dnorm(x,mu2,sigma2))))
k <- k+1
}
# compare
library(mixtools)
gm <- normalmixEM(x, k=2, lambda=c(0.5,0.5), mu=c(-0.01,0.01), sigma=c(0.01,0.02))
gm$lambda
gm$mu
gm$sigma
gm$loglik
The algorithm is not working, since some observations have the likelihood of zero and the log of this is -Inf
. Where is my mistake?