Training Products of Experts by Minimizing Contrastive Divergence¶
Motivation(s)¶
Fitting a mixture model via EM or gradient ascent is one way to model a complicated, smooth, high-dimensional data distribution. One limitation is that the posterior distribution cannot be sharper than the individual models in the mixture. This issue becomes more problematic in high-dimensional spaces where the individual models need to be broadly tuned.
Proposed Solution(s)¶
The author proposes the concept of a Product of Experts (PoE)
\[p(\mathbf{d} \mid \theta_1, \ldots, \theta_n) = \frac{\prod_m p_m(\mathbf{d} \mid \theta_m)}{\sum_{\mathbf{c}} \prod_m p_m(\mathbf{c} \mid \theta_m)}\]
where \(\mathbf{d}\) is a data vector in a discrete space, \(\theta_m\) represents all the parameters of an individual model \(m\), \(p_m(\mathbf{d} \mid \theta_m)\) is the probability of \(\mathbf{d}\) under model \(m\), and \(\mathbf{c}\) indexes all possible vectors in the data space. This enables each expert to specialize on a different subset of the dimensions in a high-dimensional space. Note that \(p_m(\mathbf{d} \mid \theta_m)\) could be any non-negative function \(f(\mathbf{d}; \theta_m)\) due to the normalization by the partition function \(Z\) (the denominator above).
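As a quick check of the sharpening claim (not an example taken verbatim from the paper), consider two univariate Gaussian experts with means \(\mu_1, \mu_2\) and precisions \(\lambda_1, \lambda_2\):
\[\mathcal{N}(x \mid \mu_1, \lambda_1^{-1}) \, \mathcal{N}(x \mid \mu_2, \lambda_2^{-1}) \propto \mathcal{N}\!\left(x \;\middle|\; \frac{\lambda_1 \mu_1 + \lambda_2 \mu_2}{\lambda_1 + \lambda_2}, \, (\lambda_1 + \lambda_2)^{-1}\right).\]
The product has precision \(\lambda_1 + \lambda_2\), so it is sharper than either expert, whereas a mixture of the two can never be sharper than its sharpest component.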
Directly fitting a PoE to a set of observed i.i.d. data vectors requires following the gradient of the log-likelihood,
\[\frac{\partial \log p(\mathbf{d} \mid \theta_1, \ldots, \theta_n)}{\partial \theta_m} = \frac{\partial \log p_m(\mathbf{d} \mid \theta_m)}{\partial \theta_m} - \sum_{\mathbf{c}} p(\mathbf{c} \mid \theta_1, \ldots, \theta_n) \frac{\partial \log p_m(\mathbf{c} \mid \theta_m)}{\partial \theta_m},\]
whose second term is an expectation under the model's distribution over the whole data space.
Since the hidden states of all the experts are conditionally independent given the data, Gibbs sampling can update each of them in parallel. Unfortunately, samples drawn from the equilibrium distribution have very high variance because they come from different parts of the model’s distribution. Furthermore, the variance in the samples depends on the parameters of the model, which causes the parameters to be repelled from regions of high variance even if the gradient is zero.
Ideally, the Markov chain that is implemented by Gibbs sampling should leave the initial distribution over the visible variables unaltered. Towards this goal of reducing the variance, the author proposes the method of contrastive divergence: instead of running the Markov chain to equilibrium, run the chain for one full step and then update the parameters.
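A minimal NumPy sketch of this one-full-step update (CD-1) for a Bernoulli-Bernoulli RBM; the layer sizes, learning rate, and toy data below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.1):
    """One CD-1 parameter update for a Bernoulli-Bernoulli RBM.

    v0 : (batch, n_visible) binary data vectors
    W  : (n_visible, n_hidden) weights; b, c : visible/hidden biases
    """
    # Positive phase: sample hidden units given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)

    # One full Gibbs step: reconstruct visibles, then recompute hiddens.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)

    # Approximate gradient: <v h>_data minus <v h>_one-step-reconstruction.
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Toy usage on random binary "data".
n_visible, n_hidden = 6, 3
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b = np.zeros(n_visible)
c = np.zeros(n_hidden)
data = (rng.random((100, n_visible)) < 0.5).astype(float)
for _ in range(50):
    W, b, c = cd1_update(data, W, b, c)
```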
Evaluation(s)¶
The experiments on synthetic data (e.g. 5 x 5 clusters using 15 univariate Gaussian experts, 100-dimensional images containing edges) indicate that a PoE is able to fit data distributions that can be factorized into a product of lower-dimensional distributions. The simulations also reveal that initializing and training the experts separately causes the PoE to become trapped in poor local optima; a workaround is to train the experts together from a random initialization.
Using contrastive divergence with RBMs on the USPS digits dataset achieved an error rate of 1.1%. The weights were learned by doing multinomial logistic regression on the training data with the labels as outputs and the unnormalised log probability scores from the trained (digit-specific) PoE as inputs. Note that the USPS test set is drawn from a different distribution than the training set. To sidestep this issue, the author created a new test set from the unused portion of the training data.
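A sketch of that classification scheme, assuming one trained RBM per digit class whose free energy supplies the unnormalised log probability score; the helper `free_energy` and the use of scikit-learn are my illustrative choices, not details from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def free_energy(v, W, b, c):
    """Unnormalised log probability of visible vectors v under a
    Bernoulli-Bernoulli RBM with weights W, visible bias b, hidden bias c."""
    return v @ b + np.logaddexp(0.0, v @ W + c).sum(axis=1)

def score_matrix(X, rbms):
    """One column of scores per class-specific RBM."""
    return np.column_stack([free_energy(X, W, b, c) for (W, b, c) in rbms])

# rbms: list of (W, b, c) tuples, one per digit class, trained with CD as above.
# X_train, y_train: binary images and their labels.
# clf = LogisticRegression(max_iter=1000).fit(score_matrix(X_train, rbms), y_train)
# predictions = clf.predict(score_matrix(X_test, rbms))
```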
To get an idea of the relative magnitude of the ignored term in contrastive divergence, extensive simulations were performed using RBMs with small numbers of visible and hidden units. Such small RBMs allow a brute-force evaluation of quantities that would otherwise be exponential in the number of hidden/visible units. The results indicate that the learning procedure does not always improve the log-likelihood of the training data, though it has a strong tendency to do so. The paramount point is that the approximation did not make contrastive divergence worse in later iterations.
Future Direction(s)¶
In Andrew Ng’s talk on Bias-Variance Tradeoff, the subject of different train and test data distributions was brought up again. The proposed solution is to craft an appropriate data distribution. This is in stark contrast with how humans confront novel situations. What is a reasonable formulation of transfer learning that emphasizes the minimization of self-contradiction?
Question(s)¶
Isn’t a learning algorithm rather fragile if the test and training data need to come from the same distribution? If a human is recognizing a digit, it doesn’t matter whether the digit is made of wood or water.
Have multinomial pixels and PoE HMM withstood the test of time? They seem overly complicated compared to existing techniques.
When is the relationship of equation (12) useful?
\[\left( P \parallel Z^{-1} \prod_m Q_m^{w_m} \right) \leq \sum_m w_m \left( P \parallel Q_m \right),\]
where \(P \parallel Q\) denotes the Kullback-Leibler divergence from \(P\) to \(Q\).
Analysis¶
In generative models that choose latent variables and then generate data, the posterior values of the latent variables have a strong tendency to be approximately marginally independent after the model has been fitted to data. This is why it’s hard to learn one hidden layer at a time in a greedy bottom-up way. With a PoE, even though the experts have independent priors, the latent variables of different experts will be marginally dependent. This means there may still be lots of statistical structure in the latent variables for additional hidden layers to capture. Furthermore, a PoE retains the property of orthogonal basis functions while allowing non-orthogonal experts and a non-linear generative model.
In order to fully understand this paper, one should already be familiar with Boltzmann machines and RBMs. [Woo] and [Puh] present concise and modern expositions on the application of contrastive divergence. Some additional insights may exist in the experimental results of the original paper [Hin02].
One interesting tidbit is that an RBM is a PoE with one expert per hidden unit, and RBMs can be considered the intersection between Boltzmann machines and PoEs. While a PoE is novel and possibly useful, CD-k is what enables sidestepping intractable calculations with acceptable variance and bias.
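To make the RBM-as-PoE claim concrete, marginalizing out the binary hidden units of an RBM with visible biases \(a_i\), hidden biases \(b_j\), and weights \(w_{ij}\) (notation mine, not the paper's) gives one factor per hidden unit:
\[p(\mathbf{v}) \propto \sum_{\mathbf{h} \in \{0, 1\}^J} e^{\mathbf{a}^\top \mathbf{v} + \mathbf{b}^\top \mathbf{h} + \mathbf{v}^\top W \mathbf{h}} = e^{\mathbf{a}^\top \mathbf{v}} \prod_{j = 1}^{J} \left( 1 + e^{\,b_j + \sum_i w_{ij} v_i} \right),\]
so each hidden unit contributes one expert, and the visible bias term can be absorbed into any of the experts.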
Notes¶
Maximum Likelihood Learning¶
Given a finite set of training data
\[\mathbf{X} = \left\{ \mathbf{x}_1, \ldots, \mathbf{x}_N \right\},\]
one would like to model the probability of a data point \(\mathbf{x}_i\) using a non-negative function of the form \(f(\mathbf{x}; \Theta)\) where \(\Theta\) is a vector of model parameters. The corresponding likelihood function is
\[p(\mathbf{X} \mid \Theta) = \prod_{i = 1}^N p(\mathbf{x}_i \mid \Theta) = \prod_{i = 1}^N \frac{f(\mathbf{x}_i; \Theta)}{Z(\Theta)}\]
where the partition function \(Z(\Theta)\) is defined as
\[Z(\Theta) = \int f(\mathbf{x}; \Theta) \, d\mathbf{x}.\]
The goal is to find the maximum likelihood estimate
\[\Theta^* = \mathop{\mathrm{argmax}}_{\Theta} \frac{1}{N} \sum_{i = 1}^N \log p(\mathbf{x}_i \mid \Theta) = \mathop{\mathrm{argmax}}_{\Theta} \left[ \frac{1}{N} \sum_{i = 1}^N \log f(\mathbf{x}_i; \Theta) - \log Z(\Theta) \right].\]
This requires computing
\[\frac{\partial}{\partial \Theta} \left[ \frac{1}{N} \sum_{i = 1}^N \log p(\mathbf{x}_i \mid \Theta) \right] = \left\langle \frac{\partial \log f(\mathbf{x}; \Theta)}{\partial \Theta} \right\rangle_{p(\mathbf{X})} - \left\langle \frac{\partial \log f(\mathbf{x}; \Theta)}{\partial \Theta} \right\rangle_{p(\mathbf{D} \mid \Theta)}\]
where \(p(\mathbf{D} \mid \Theta)\) represents the true underlying distribution of the model, \(p(\mathbf{X})\) denotes the empirical data distribution
\[p(\mathbf{X}) = \frac{1}{N} \sum_{i = 1}^N \delta(\mathbf{x} - \mathbf{x}_i),\]
and the derivative of the partition function is given by
\[\frac{\partial \log Z(\Theta)}{\partial \Theta} = \frac{1}{Z(\Theta)} \int \frac{\partial f(\mathbf{x}; \Theta)}{\partial \Theta} \, d\mathbf{x} = \int \frac{f(\mathbf{x}; \Theta)}{Z(\Theta)} \frac{\partial \log f(\mathbf{x}; \Theta)}{\partial \Theta} \, d\mathbf{x} = \left\langle \frac{\partial \log f(\mathbf{x}; \Theta)}{\partial \Theta} \right\rangle_{p(\mathbf{D} \mid \Theta)}.\]
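A brute-force check of that gradient identity on a tiny discrete model; everything here (the exponential-family toy \(f(\mathbf{x}; \Theta) = e^{\Theta \cdot \mathbf{x}}\) over 4-bit vectors) is a made-up example, used only because \(Z(\Theta)\) is then a small sum:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy unnormalised model over 4-bit vectors: f(x; theta) = exp(theta . x).
n_bits = 4
space = np.array([[(i >> k) & 1 for k in range(n_bits)]
                  for i in range(2 ** n_bits)], dtype=float)
theta = rng.standard_normal(n_bits)
data = space[rng.integers(0, len(space), size=200)]  # fake "training data"

def log_f(x, th):
    return x @ th

# Exact partition function and model distribution via enumeration.
log_Z = np.logaddexp.reduce(log_f(space, theta))
model_probs = np.exp(log_f(space, theta) - log_Z)

# Gradient of the averaged log-likelihood via central differences ...
avg_loglik = lambda th: (log_f(data, th) - np.logaddexp.reduce(log_f(space, th))).mean()
eps = 1e-6
numeric_grad = np.array([
    (avg_loglik(theta + eps * np.eye(n_bits)[k]) -
     avg_loglik(theta - eps * np.eye(n_bits)[k])) / (2 * eps)
    for k in range(n_bits)
])

# ... and via <d log f / d theta>_data - <d log f / d theta>_model.
# For this f, d log f / d theta = x, so the identity reduces to mean(data) - E_model[x].
analytic_grad = data.mean(axis=0) - model_probs @ space

print(np.allclose(numeric_grad, analytic_grad, atol=1e-5))  # True
```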
Relationship between Maximum Likelihood and Kullback-Leibler Divergence¶
Maximizing the log-likelihood of the data averaged over the data distribution \(Q^0\) is equivalent to minimizing the relative entropy between the data distribution and \(Q^\infty\), the equilibrium distribution over the visible variables that is produced by prolonged Gibbs sampling:
\[Q^0 \parallel Q^\infty = \sum_{\mathbf{d}} Q^0_{\mathbf{d}} \log Q^0_{\mathbf{d}} - \sum_{\mathbf{d}} Q^0_{\mathbf{d}} \log Q^\infty_{\mathbf{d}} = -H(Q^0) - \left\langle \log Q^\infty_{\mathbf{d}} \right\rangle_{Q^0}.\]
The corresponding gradient is
\[\frac{\partial \left( Q^0 \parallel Q^\infty \right)}{\partial \theta_m} = \left\langle \frac{\partial \log p_m(\mathbf{c} \mid \theta_m)}{\partial \theta_m} \right\rangle_{Q^\infty} - \left\langle \frac{\partial \log p_m(\mathbf{d} \mid \theta_m)}{\partial \theta_m} \right\rangle_{Q^0}\]
because the entropy \(H(Q^0)\) of the data distribution does not depend on the parameters. Notice that when \(p(\mathbf{d}) = N^{-1} \sum_{i = 1}^N \delta(\mathbf{d} - \mathbf{d}_i)\), the expectation over \(Q^0\) is just an average over the training data, so this gradient is exactly the negative of the maximum likelihood gradient derived above.
Contrastive Divergence¶
Let \(Q^t\) denote the distribution obtained by running \(t\) full steps of Gibbs sampling starting from the data distribution \(Q^0\), with \(\lim_{t \to \infty} Q^t = Q^\infty\). Contrastive divergence is the difference
\[\mathrm{CD}_t = Q^0 \parallel Q^\infty - Q^t \parallel Q^\infty.\]
The corresponding gradient is now
\[-\frac{\partial}{\partial \theta_m} \left( Q^0 \parallel Q^\infty - Q^t \parallel Q^\infty \right) = \left\langle \frac{\partial \log p_m(\mathbf{d} \mid \theta_m)}{\partial \theta_m} \right\rangle_{Q^0} - \left\langle \frac{\partial \log p_m(\hat{\mathbf{d}} \mid \theta_m)}{\partial \theta_m} \right\rangle_{Q^t} + \frac{\partial Q^t}{\partial \theta_m} \frac{\partial \left( Q^t \parallel Q^\infty \right)}{\partial Q^t},\]
where \(\hat{\mathbf{d}}\) denotes a reconstruction produced by the \(t\) Gibbs steps and the last term appears because \(Q^t\) itself depends on the parameters. The mathematical motivation behind contrastive divergence is the cancellation of the intractable expectation over \(Q^\infty\). Since \(Q^t\) is \(t\) steps closer to the equilibrium distribution \(Q^\infty\) than \(Q^0\), a reasonable gradient approximation that avoids sampling from \(Q^\infty\) is
\[\Delta \theta_m \propto \left\langle \frac{\partial \log p_m(\mathbf{d} \mid \theta_m)}{\partial \theta_m} \right\rangle_{Q^0} - \left\langle \frac{\partial \log p_m(\hat{\mathbf{d}} \mid \theta_m)}{\partial \theta_m} \right\rangle_{Q^t}.\]
The last term on the right of the exact gradient is typically ignored since computing it is non-trivial and its contribution is negligible. Note that the contrastive log-likelihood will fail because it can achieve a value of zero when all possible vectors in the data space are equally probable.
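For the RBM case used in the experiments, this approximation specializes to the familiar CD-1 weight update (a standard result, stated here for concreteness rather than quoted from this note):
\[\Delta w_{ij} \propto \left\langle v_i h_j \right\rangle_{Q^0} - \left\langle v_i h_j \right\rangle_{Q^1},\]
where the first expectation is over the data with hidden units sampled from their posterior, and the second is over one-step reconstructions.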
References
- Hin02
Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800, 2002.
- Puh
Helmut Puhr. Contrastive divergence. http://www.igi.tugraz.at/lehre/SeminarE/SS10/puhr_E_2010.pdf. Accessed on 2017-05-12.
- Woo
Oliver Woodford. Notes on contrastive divergence. http://www.robots.ox.ac.uk/~ojw/files/NotesOnCD.pdf. Accessed on 2017-05-12.