I have read that HMMs, particle filters and Kalman filters are special cases of dynamic Bayes networks. However, I only know HMMs, and I don't see the difference to dynamic Bayes networks.
Could somebody please explain?
It would be nice if your answer could be similar to the following, but for Bayes networks:
Hidden Markov models
A hidden Markov model (HMM) is a 5-tuple $\lambda = (S, O, A, B, \Pi)$:
- $S$: a set of states (e.g. "beginning of phoneme", "middle of phoneme", "end of phoneme")
- $O$: a set of possible observations (audio signals)
- $A \in \mathbb{R}^{|S| \times |S|}$: a stochastic matrix which gives the probability $a_{ij}$ to get from state $s_i$ to state $s_j$.
- $B \in \mathbb{R}^{|S| \times |O|}$: a stochastic matrix which gives the probability $b_{kl}$ to get in state $s_k$ the observation $o_l$.
- $\Pi \in \mathbb{R}^{|S|}$: the initial distribution to start in one of the states.
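To make the shapes concrete, here is the 5-tuple written out in Python. The states come from the phoneme example above; all numbers are made up purely for illustration:

```python
import numpy as np

# S: set of states, O: set of possible observations (toy discrete alphabet).
S = ["phoneme-begin", "phoneme-mid", "phoneme-end"]
O = ["low", "mid", "high"]

# A[i, j] = a_ij: probability of going from state s_i to state s_j (rows sum to 1).
A = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.7, 0.3],
              [0.1, 0.0, 0.9]])

# B[k, l] = b_kl: probability of observing o_l while in state s_k (rows sum to 1).
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Pi[i]: probability of starting in state s_i.
Pi = np.array([1.0, 0.0, 0.0])
```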
It is usually displayed as a directed graph, where each node corresponds to one state and the transition probabilities are denoted on the edges.
Hidden Markov Models are called "hidden", because the current state is hidden. The algorithms have to guess it from the observations and the model itself. They are called "Markov", because for the next state only the current state matters.
For HMMs, you give a fixed topology (number of states, possible edges). Then there are three possible tasks:
- Evaluation: given an HMM $\lambda$, how likely is it to get a sequence of observations $o_1, \dots, o_T$? (Forward algorithm; sketched below)
- Decoding: given an HMM $\lambda$ and observations $o_1, \dots, o_T$, what is the most likely sequence of states $s_1, \dots, s_T$? (Viterbi algorithm)
- Learning: learn $A$, $B$, $\Pi$ (Baum-Welch algorithm, which is a special case of expectation maximization).
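As a minimal sketch of the evaluation task, reusing the toy `A`, `B`, `Pi` from above (not an optimized implementation):

```python
def forward(A, B, Pi, obs):
    """Evaluation: compute P(o_1, ..., o_T | lambda) with the forward algorithm."""
    alpha = Pi * B[:, obs[0]]           # alpha_1(i) = Pi_i * b_i(o_1)
    for o in obs[1:]:
        # alpha_{t+1}(j) = (sum_i alpha_t(i) * a_ij) * b_j(o_{t+1})
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()                  # marginalize over the final state

# Likelihood of observing "low, mid, high" (indices into O):
print(forward(A, B, Pi, [0, 1, 2]))
```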
Bayes networks
Bayes networks are directed acyclic graphs (DAGs) $G = (\mathcal{X}, \mathcal{E})$. The nodes represent random variables $\mathcal{X} = \{X_1, \dots, X_n\}$. For every $X_i$, there is a probability distribution which is conditioned on the parents of $X_i$:

$$P(X_i \mid \text{parents}(X_i))$$
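The joint distribution then factorizes into exactly these conditional distributions, $P(X_1, \dots, X_n) = \prod_i P(X_i \mid \text{parents}(X_i))$. A tiny sketch with the textbook rain/sprinkler network; structure and numbers are made up for illustration:

```python
# Bayes network: Rain -> WetGrass <- Sprinkler (all variables Boolean).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {(True, True): 0.99, (True, False): 0.90,   # P(Wet=True | Rain, Sprinkler)
         (False, True): 0.80, (False, False): 0.01}

def joint(rain, sprinkler, wet):
    # P(R, S, W) = P(R) * P(S) * P(W | R, S)
    p = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p if wet else 1.0 - p)

print(joint(rain=True, sprinkler=False, wet=True))
```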
There seem to be (please clarify) two tasks:
- Inference: given the values of some variables, get the most likely values of the other variables. Exact inference is NP-hard. For approximate inference, you can use MCMC (e.g. Gibbs sampling, sketched below).
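For instance, a Gibbs sampler (one kind of MCMC) on the toy rain/sprinkler network above, estimating $P(\text{Rain} = \text{True} \mid \text{Wet} = \text{True})$; a rough sketch, not tuned in any way:

```python
import random

def gibbs_rain_given_wet(n_samples=50_000, burn_in=1_000):
    """Estimate P(Rain=True | Wet=True) by Gibbs sampling."""
    rain, sprinkler = True, True  # arbitrary initial state; evidence: Wet=True
    hits = total = 0
    for i in range(n_samples):
        # Resample Rain ~ P(Rain | Sprinkler, Wet=True), which is
        # proportional to P(Rain) * P(Wet=True | Rain, Sprinkler).
        p_t = P_rain[True] * P_wet[(True, sprinkler)]
        p_f = P_rain[False] * P_wet[(False, sprinkler)]
        rain = random.random() < p_t / (p_t + p_f)
        # Resample Sprinkler ~ P(Sprinkler | Rain, Wet=True) analogously.
        p_t = P_sprinkler[True] * P_wet[(rain, True)]
        p_f = P_sprinkler[False] * P_wet[(rain, False)]
        sprinkler = random.random() < p_t / (p_t + p_f)
        if i >= burn_in:
            hits += rain
            total += 1
    return hits / total

print(gibbs_rain_given_wet())
```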
Learning: How you learn those distributions depends on the exact problem (source):
- known structure, fully observable: maximum likelihood estimation (MLE); see the counting sketch after this list
- known structure, partially observable: Expectation Maximization (EM) or Markov Chain Monte Carlo (MCMC)
- unknown structure, fully observable: search through model space
- unknown structure, partially observable: EM + search through model space
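For the first case (known structure, fully observable), MLE reduces to frequency counting per conditional probability table; a minimal sketch with made-up fully observed samples for the edge Rain -> WetGrass:

```python
from collections import Counter

# Hypothetical fully observed samples (Rain, WetGrass).
data = [(True, True), (True, True), (True, False),
        (False, False), (False, False), (False, True)]

rain_counts = Counter(r for r, _ in data)
pair_counts = Counter(data)

# MLE: P(Wet=w | Rain=r) = count(r, w) / count(r)
for r in (True, False):
    for w in (True, False):
        print(f"P(Wet={w} | Rain={r}) = {pair_counts[(r, w)] / rain_counts[r]:.2f}")
```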
Dynamic Bayes networks
I guess dynamic Bayes networks (DBNs) are also directed probabilistic graphical models. The variability seems to come from the network changing over time. However, it seems to me that this is equivalent to simply copying the same network and connecting every node at time $t$ with the corresponding node at time $t+1$. Is that the case?
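To make the conjecture precise, this is the construction I have in mind: copy the per-slice nodes $T$ times and wire slice $t$ to slice $t+1$ along the inter-slice edges. The node and edge names below are made up; this particular choice of edges reproduces an HMM:

```python
# Per-slice structure of a hypothetical two-slice DBN, HMM-shaped.
slice_nodes = ["State", "Obs"]
intra_edges = [("State", "Obs")]      # edges within one time slice
inter_edges = [("State", "State")]    # edges from slice t to slice t+1

def unroll(T):
    """Copy the slice T times and connect slice t to slice t+1."""
    nodes = [f"{n}_{t}" for t in range(T) for n in slice_nodes]
    edges = [(f"{u}_{t}", f"{v}_{t}") for t in range(T) for u, v in intra_edges]
    edges += [(f"{u}_{t}", f"{v}_{t+1}") for t in range(T - 1) for u, v in inter_edges]
    return nodes, edges

print(unroll(3))  # an HMM over three time steps, drawn as a Bayes network
```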