Graphical models

Abstract

This course is devoted to the formalism of graphical models, which has become the state-of-the-art approach for working with data that contain internal dependencies. Examples of such data include images, temporal signals, video, texts, web data, social networks, etc. Graphical models generalize several important machine learning algorithms and expand their applicability.
Alongside the description of graphical models, the course touches on elements of combinatorial and continuous optimization, Bayesian statistical methods, structured support vector machines, and Monte Carlo methods. On the application side, the course deals with computer vision, probabilistic logic, natural language processing, and signal processing.
Students who successfully complete the course will be able to build models that take interdependencies in the data into account, learn the parameters of such models with and without labeled training data, and make predictions when some data are missing.

Course program

Theory, algorithms, models:

  1. Graphical models. Factor graphs. Directed models (Bayesian networks) and undirected models (Markov random fields). Observed and hidden variables.
  2. Inference problem. Inference criteria, loss functions, empirical risk minimization. Supervised and unsupervised learning problems.
  3. Inference as optimization. Coordinate descent (Iterated Conditional Modes). Inference via convex optimization. Gaussian models.
  4. Inference via message passing (a max-product sketch follows this list):
    • Inference in a chain model via shortest paths in a graph.
    • Message passing in chain- and tree-structured graphs.
    • Computing marginals. Message passing in semirings.
    • Message passing in graphs with cycles.
    • Message passing in factor graphs.
  5. Inference via graph cuts (a minimum-cut sketch follows this list):
    • Exact inference in binary submodular problems.
    • Alpha-expansion.
    • Nonsubmodular models (QPBO), fusion moves.
  6. Dual decomposition. Subgradient descent methods.
  7. Supervised learning, log-linear models, feature representation. Learning via loss minimization (a structured perceptron sketch follows this list):
    • Structured perceptron.
    • Learning via empirical risk minimization and structured support vector machines.
    • Optimization via subgradient descent and quadratic programming, delayed constraint generation.
  8. Maximum likelihood supervised learning. Pseudo-likelihood, piecewise learning, deep supervised learning.
  9. Bayesian networks. Probabilistic interpretation of message passing. Notion of dependency between variables. The explaining away effect. Usage examples.
  10. Linear dynamical systems (LDS); a Kalman filter sketch follows this list.
    • Kalman filter.
    • Maximum likelihood training of LDS.
    • Extended Kalman filter.
  11. EM algorithm. Examples.
    • Unsupervised training of hidden Markov models (HMMs).
    • Unsupervised training of LDS.
  12. Markov chain Monte Carlo (MCMC) methods: Metropolis-Hastings and Gibbs sampling (an Ising Gibbs sampler sketch follows this list). Examples.
    • Particle filter.
    • Ising model.
    • Simulated annealing.
  13. Variational inference (a mean-field sketch follows this list). Examples.
    • Variational linear regression.
    • Variational Ising model.
  14. Bayesian regularization in machine learning. Examples.
    • Relevance Vector Machine (RVM).
    • Bayesian Principal Component Analysis (PCA).
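
For topic 4, the following is a minimal illustrative sketch of max-product message passing (the Viterbi recursion) for MAP inference in a chain-structured model; the potentials in the toy example at the bottom are made-up values.

```python
import numpy as np

def chain_map(unary, pairwise):
    """MAP inference in a chain model by max-product message passing (Viterbi).

    unary:    (T, K) array of unary log-potentials for T variables with K states.
    pairwise: (K, K) array of pairwise log-potentials shared by all neighboring pairs.
    Returns the maximizing label sequence as a list of T state indices.
    """
    T, K = unary.shape
    msg = np.zeros((T, K))                 # msg[t, k]: best score of a prefix ending in state k
    backptr = np.zeros((T, K), dtype=int)  # argmax over the previous state

    msg[0] = unary[0]
    for t in range(1, T):
        # score[j, k] = msg[t-1, j] + pairwise[j, k] + unary[t, k]
        score = msg[t - 1][:, None] + pairwise + unary[t][None, :]
        backptr[t] = score.argmax(axis=0)
        msg[t] = score.max(axis=0)

    # Backtrack from the best final state.
    labels = [int(msg[-1].argmax())]
    for t in range(T - 1, 0, -1):
        labels.append(int(backptr[t, labels[-1]]))
    return labels[::-1]

# Toy example with made-up potentials: 4 variables, 3 states each.
rng = np.random.default_rng(0)
print(chain_map(rng.normal(size=(4, 3)), rng.normal(size=(3, 3))))
```

Replacing (max, +) with (sum, ×) in the same recursion yields the sum-product pass that computes marginals, which is the semiring view mentioned in topic 4.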
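
For topic 5, a minimal sketch of exact MAP inference in a binary model with non-negative (submodular) Potts pairwise terms via a minimum s-t cut. networkx is used here only to keep the example self-contained; practical systems use dedicated max-flow solvers. All costs in the toy example are made-up values.

```python
import networkx as nx

def binary_map_via_cut(unary, edges):
    """Exact MAP labeling of a binary model with non-negative Potts pairwise terms.

    unary: dict node -> (cost_of_label_0, cost_of_label_1), both costs >= 0.
    edges: dict (i, j) -> weight w >= 0, paid whenever the labels of i and j differ.
    Returns a dict node -> 0/1 label minimizing the energy.
    """
    g = nx.DiGraph()
    for i, (c0, c1) in unary.items():
        g.add_edge('s', i, capacity=c1)  # this edge is cut iff i ends up with label 1
        g.add_edge(i, 't', capacity=c0)  # this edge is cut iff i ends up with label 0
    for (i, j), w in edges.items():
        g.add_edge(i, j, capacity=w)
        g.add_edge(j, i, capacity=w)

    _, (source_side, _) = nx.minimum_cut(g, 's', 't')
    return {i: (0 if i in source_side else 1) for i in unary}

# Toy 1D "segmentation": three pixels with a smoothness weight of 1.0 (made-up numbers).
print(binary_map_via_cut({0: (0.0, 2.0), 1: (1.5, 1.0), 2: (3.0, 0.0)},
                         {(0, 1): 1.0, (1, 2): 1.0}))
```

The value of a cut equals the energy of the corresponding labeling, so the minimum cut gives the exact MAP solution; the same binary construction is the inner step of alpha-expansion.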
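
For topic 7, a minimal sketch of the structured perceptron for chain-structured sequence labeling; it reuses chain_map from the message-passing sketch above, and the data format and dimensions are placeholders.

```python
import numpy as np

def structured_perceptron(data, n_features, n_labels, n_epochs=10):
    """Structured perceptron for a log-linear chain model.

    data: list of (X, y) pairs, where X is a (T, n_features) array of per-position
          features and y is a length-T list of gold labels in {0, ..., n_labels - 1}.
    Learns emission weights W and transition weights A; decoding is done by chain_map.
    """
    W = np.zeros((n_labels, n_features))
    A = np.zeros((n_labels, n_labels))
    for _ in range(n_epochs):
        for X, y in data:
            y_hat = chain_map(X @ W.T, A)       # MAP prediction under the current weights
            if y_hat == list(y):
                continue
            # Additive update: promote features of the gold labeling,
            # demote features of the predicted labeling.
            for t in range(len(y)):
                W[y[t]] += X[t]
                W[y_hat[t]] -= X[t]
                if t > 0:
                    A[y[t - 1], y[t]] += 1.0
                    A[y_hat[t - 1], y_hat[t]] -= 1.0
    return W, A
```

Averaging the weights over updates, or replacing the update with a margin-based one and loss-augmented decoding, leads towards the structured SVM training covered in the same topic.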
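
For topic 10, a minimal sketch of the forward (filtering) pass of the Kalman filter for a linear dynamical system with known parameters; the matrix names below are just this sketch's notation for the usual transition and observation models.

```python
import numpy as np

def kalman_filter(ys, A, C, Q, R, mu0, P0):
    """Filtering for the LDS  z_t = A z_{t-1} + w_t,  y_t = C z_t + v_t,
    with w_t ~ N(0, Q), v_t ~ N(0, R) and prior z_1 ~ N(mu0, P0).

    ys: (T, d_obs) array of observations.
    Returns the filtered means (T, d_state) and covariances (T, d_state, d_state).
    """
    mu, P = mu0, P0
    means, covs = [], []
    for t, y in enumerate(ys):
        if t > 0:
            mu = A @ mu                      # predict: propagate the previous posterior
            P = A @ P @ A.T + Q
        S = C @ P @ C.T + R                  # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)       # Kalman gain
        mu = mu + K @ (y - C @ mu)           # update with the new observation
        P = P - K @ C @ P
        means.append(mu)
        covs.append(P)
    return np.array(means), np.array(covs)
```

The backward (smoothing) pass and the maximum likelihood estimation of A, C, Q, R via EM follow the same algebra and are covered in topics 10-11.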
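
For topic 12, a minimal sketch of a Gibbs sampler for a grid Ising model with spins in {-1, +1}; the coupling strength and the number of sweeps are arbitrary illustrative choices.

```python
import numpy as np

def gibbs_ising(h, w=0.5, n_sweeps=200, seed=0):
    """Gibbs sampling for the Ising model with energy
        E(x) = -sum_i h_i x_i - w * sum_{neighbors (i,j)} x_i x_j.

    h: (H, W) array of external fields, one per grid site.
    Returns one sample (an (H, W) array of -1/+1 spins) after n_sweeps full sweeps.
    """
    rng = np.random.default_rng(seed)
    H, W = h.shape
    x = rng.choice([-1, 1], size=(H, W))
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                # Sum of the neighboring spins (free boundary conditions).
                s = sum(x[a, b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < H and 0 <= b < W)
                # Exact conditional of x_ij given all other spins.
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * (h[i, j] + w * s)))
                x[i, j] = 1 if rng.random() < p_plus else -1
    return x
```

Running the same sampler while gradually lowering a temperature parameter gives the simulated annealing variant listed under this topic.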
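
For topic 13, a minimal sketch of mean-field variational inference for the same Ising model as in the Gibbs sketch: the fully factorized approximation q(x) = prod_ij q_ij(x_ij) is fitted by coordinate ascent on the variational lower bound, which for this model reduces to a tanh update of the per-site means.

```python
import numpy as np

def mean_field_ising(h, w=0.5, n_iters=50):
    """Mean-field approximation for the Ising model from the Gibbs sketch.

    h: (H, W) array of external fields.
    Returns the mean-field magnetizations m_ij = E_q[x_ij], each in (-1, 1).
    """
    H, W = h.shape
    m = np.zeros((H, W))
    for _ in range(n_iters):
        for i in range(H):
            for j in range(W):
                # Sum of the neighbors' current mean spins (free boundary conditions).
                s = sum(m[a, b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < H and 0 <= b < W)
                # Coordinate-ascent update of q_ij.
                m[i, j] = np.tanh(h[i, j] + w * s)
    return m
```

Compared to the Gibbs sampler above, this is a deterministic approximation: it returns expected spins under the factorized distribution rather than samples from the true one.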

Applications:

  1. Computer vision / image processing:
    • Stereo.
    • Image segmentation.
    • Human pose estimation.
    • Image restoration.
    • 3D reconstruction.
    • Super-resolution.
    • Image stitching.
  2. Natural language processing:
    • Part-of-speech tagging.
    • Named entity recognition.
  3. Clustering, facility location problem.
  4. Placing labels on a map.
  5. Planning, route selection.
  6. Web-page classification.
  7. Tracking.
  8. Relevant feature selection.
  9. Choosing the number of principal components and the number of components in a mixture.