Learning Dynamic Bayesian Networks

Zoubin Ghahramani

Dept. of Computer Science
University of Toronto
Toronto, ON M5S 3H5, Canada

Bayesian networks are directed acyclic graphs that represent dependencies between variables in a probabilistic model. Many time series models, including the hidden Markov models (HMMs) used in speech recognition and the Kalman filter models used in filtering and control applications, can be viewed as examples of dynamic Bayesian networks. We first provide a brief tutorial on learning and Bayesian networks. We then present some dynamic Bayesian networks that can capture much richer structure than HMMs and Kalman filters, including spatial and temporal multiresolution structure, distributed hidden state representations, and multiple switching linear regimes. While exact probabilistic inference is intractable in these networks, one can obtain tractable variational approximations that call the forward-backward and Kalman filter recursions as subroutines. These approximations can be used to learn the model parameters by maximizing a lower bound on the likelihood.
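
To make the forward-backward subroutine mentioned above concrete, here is a minimal sketch (not the paper's implementation; the function name, variable names, and toy numbers are all illustrative) of the forward-backward recursions for a discrete HMM. It computes the posterior state marginals that the E-step of learning requires, together with the exact log-likelihood, using NumPy:

    import numpy as np

    def forward_backward(pi, A, B, obs):
        """Forward-backward recursions for a discrete HMM.

        pi  : (K,)   initial state distribution
        A   : (K, K) transitions, A[i, j] = P(s_t = j | s_{t-1} = i)
        B   : (K, M) emissions,   B[k, m] = P(o_t = m | s_t = k)
        obs : (T,)   observed symbol indices

        Returns the posterior state marginals gamma and log P(obs).
        """
        T, K = len(obs), len(pi)
        alpha = np.zeros((T, K))   # scaled forward messages
        beta = np.zeros((T, K))    # scaled backward messages
        c = np.zeros(T)            # scaling factors; prod(c) = P(obs)

        # Forward pass, normalizing each step for numerical stability.
        alpha[0] = pi * B[:, obs[0]]
        c[0] = alpha[0].sum()
        alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum()
            alpha[t] /= c[t]

        # Backward pass, reusing the forward scaling factors.
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1]) / c[t + 1]

        gamma = alpha * beta       # posterior P(s_t = k | obs)
        return gamma, np.log(c).sum()

    # Toy example: 2 hidden states, 2 output symbols.
    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.2, 0.8]])
    B = np.array([[0.9, 0.1], [0.3, 0.7]])
    gamma, ll = forward_backward(pi, A, B, np.array([0, 0, 1, 0, 1]))
    print(gamma, ll)

For the richer networks discussed in the paper this exact recursion becomes intractable, and a variational approximation instead invokes such recursions on tractable substructures of the model.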

A modified version appeared in C.L. Giles and M. Gori (eds.), Adaptive Processing of Sequences and Data Structures, Lecture Notes in Artificial Intelligence, pp. 168-197. Berlin: Springer-Verlag, 1998.
