
Introduction

The exponential family is a set of probability distributions whose probability density function (or probability mass function, in the case of a discrete distribution) can be expressed in the form

$$p(x | \eta) = h(x) \exp\{ T(x)^{\top} \eta - A(\eta) \}$$

where $\eta$ is the parameter of the probability density function and is independent of $x$, and $A(\eta)$ is also independent of $x$. $\eta$ is also called the natural parameter of the distribution, $T(x)$ is also called the sufficient statistic, $A(\eta)$ is also called the log normalizer (we will see why), $h(x)$ is also called the base measure, and the above expression is also called the natural form of the distribution.
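
For example, the Bernoulli distribution with mean $\phi$ can be written in this form:

$$p(x | \phi) = \phi^{x} (1-\phi)^{1-x} = \exp\Big\{ x \log\frac{\phi}{1-\phi} + \log(1-\phi) \Big\}$$

so $h(x) = 1$, $T(x) = x$, $\eta = \log\frac{\phi}{1-\phi}$, and $A(\eta) = -\log(1-\phi) = \log(1 + e^{\eta})$.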


Many common distributions, such as the normal distribution, the categorical distribution, the gamma distribution, and the Dirichlet distribution, belong to the exponential family.


In this blog post, I will use the normal distribution as an example to show how to derive $h(x)$, $T(x)$, $\eta$, and $A(\eta)$. I will also talk about some of the properties of the exponential family that we will use for variational inference.

Log Normalizer

Source of Name

Because $p(x|\eta)$ is a probability density, it must integrate to 1:

$$\int h(x) \exp\{ T(x)^{\top} \eta - A(\eta) \} \, dx = 1$$

We then have

$$\exp\{ A(\eta) \} = \int h(x) \exp\{ T(x)^{\top} \eta \} \, dx$$

$$A(\eta) = \log \int h(x) \exp\{ T(x)^{\top} \eta \} \, dx$$

Therefore $A(\eta)$ is a log normalizer for $h(x) \exp\{ T(x)^{\top} \eta\}$.
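
As a quick numerical sanity check, we can integrate $h(x) \exp\{ T(x)^{\top} \eta \}$ on a grid and verify that its logarithm equals $A(\eta)$. This is a minimal sketch using the natural parameters of the normal distribution, which we will derive later in this post:

```python
import numpy as np

# Natural parameters of N(mu, sigma^2), derived later in this post:
# h(x) = 1/sqrt(2*pi), T(x) = (x, x^2),
# eta = (mu/sigma^2, -1/(2*sigma^2)),
# A(eta) = -eta1^2/(4*eta2) - (1/2)*log(-2*eta2).
mu, sigma2 = 1.0, 2.0
eta1, eta2 = mu / sigma2, -1.0 / (2.0 * sigma2)
A = -eta1**2 / (4.0 * eta2) - 0.5 * np.log(-2.0 * eta2)

# Riemann-sum approximation of int h(x) exp{T(x)^T eta} dx over a wide grid.
x, dx = np.linspace(mu - 20.0, mu + 20.0, 400001, retstep=True)
integrand = (1.0 / np.sqrt(2.0 * np.pi)) * np.exp(eta1 * x + eta2 * x**2)
integral = integrand.sum() * dx

print(np.log(integral), A)  # the two values agree: A(eta) is the log normalizer
```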

Derivative of Log Normalizer

The derivative of the log normalizer has an important property, which is used in many other applications, such as variational inference:

$$\frac{d A(\eta)}{d \eta} = \mathbb{E}_{p(x|\eta)}[T(x)]$$

Let’s see how to derive this. Because

$$A(\eta) = \log \int h(x) \exp\{ T(x)^{\top} \eta \} \, dx$$

we take the derivative with respect to $\eta$:

$$\frac{d A(\eta)}{d \eta} = \frac{\frac{d}{d \eta} \int h(x) \exp\{ T(x)^{\top} \eta \} \, dx}{\int h(x) \exp\{ T(x)^{\top} \eta \} \, dx}$$

We then use a special case of the Leibniz integral rule to move the derivative inside the integral:

$$\frac{d}{d \eta} \int h(x) \exp\{ T(x)^{\top} \eta \} \, dx = \int h(x) \exp\{ T(x)^{\top} \eta \} \, T(x) \, dx$$

Therefore,

$$\frac{d A(\eta)}{d \eta} = \frac{\int h(x) \exp\{ T(x)^{\top} \eta \} \, T(x) \, dx}{\exp\{ A(\eta) \}} = \int h(x) \exp\{ T(x)^{\top} \eta - A(\eta) \} \, T(x) \, dx = \mathbb{E}_{p(x|\eta)}[T(x)]$$
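
We can also verify this property numerically. In this minimal sketch, again using the log normalizer of the normal distribution derived later in this post, finite differences of $A(\eta)$ should recover $\mathbb{E}[x] = \mu$ and $\mathbb{E}[x^2] = \mu^2 + \sigma^2$:

```python
import numpy as np

def log_normalizer(eta1, eta2):
    # A(eta) for the normal distribution in natural form (derived below).
    return -eta1**2 / (4.0 * eta2) - 0.5 * np.log(-2.0 * eta2)

mu, sigma2 = 1.0, 2.0
eta1, eta2 = mu / sigma2, -1.0 / (2.0 * sigma2)

# Central finite differences of A(eta) with respect to eta1 and eta2.
eps = 1e-6
dA_deta1 = (log_normalizer(eta1 + eps, eta2) - log_normalizer(eta1 - eps, eta2)) / (2.0 * eps)
dA_deta2 = (log_normalizer(eta1, eta2 + eps) - log_normalizer(eta1, eta2 - eps)) / (2.0 * eps)

print(dA_deta1, mu)              # E[T_1(x)] = E[x] = mu
print(dA_deta2, mu**2 + sigma2)  # E[T_2(x)] = E[x^2] = mu^2 + sigma^2
```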

Conjugate Priors

All members of the exponential family have conjugate priors. If you don’t remember what a conjugate prior is, please check my blog post on conjugate priors. Here I will give a proof of this theorem.


Given a likelihood $p(x|\beta)$ from any member of the exponential family, where $\beta$ is its natural parameter, we have to find a prior $p(\beta)$ that belongs to the same family as the posterior $p(\beta | x)$.


Because $p(x|\beta)$ is from the exponential family, we can write $p(x|\beta)$ in its natural form:

$$p(x|\beta) = h(x) \exp\{ T(x)^{\top} \beta - A(\beta) \}$$

Here we do not limit the size of $\beta$. It could be $\beta = ( \beta_1, \beta_2, \cdots, \beta_N )^{\top}$, i.e., $\beta$ is a column vector.


We then assume that we can find a conjugate prior $p(\beta)$ for the likelihood from the exponential family. We assume $p(\beta)$ has the following natural form:

$$p(\beta | \alpha) = h^{\prime}(\beta) \exp\{ T^{\prime}(\beta)^{\top} \alpha - A^{\prime}(\alpha) \}$$

where

$$T^{\prime}(\beta) = \begin{bmatrix} \beta \\ -A(\beta) \end{bmatrix}$$

Note that because $\beta \in \mathbb{R}^{N}$ and $A(\beta)$ is a scalar, $T^{\prime}(\beta) \in \mathbb{R}^{N+1}$. $\alpha$ is the natural parameter for $p(\beta)$ and $\alpha \in \mathbb{R}^{N+1}$. We split $\alpha$ into two parts:

$$\alpha = \begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix}$$

where $\alpha_1 \in \mathbb{R}^{N}$ and $\alpha_2 \in \mathbb{R}^{1}$.


According to Bayes’ theorem,

$$p(\beta | x) = \frac{p(x | \beta) p(\beta)}{p(x)} \propto p(x | \beta) p(\beta)$$

We use $\propto$ here because $p(x)$ is a constant with respect to $\beta$ and thus does not change the family of $p(x | \beta) p(\beta)$:

$$
\begin{aligned}
p(x | \beta) p(\beta)
&= h(x) h^{\prime}(\beta) \exp\{ T(x)^{\top} \beta - A(\beta) + \beta^{\top} \alpha_1 - A(\beta) \alpha_2 - A^{\prime}(\alpha) \} \\
&\propto h^{\prime}(\beta) \exp\{ \beta^{\top} (\alpha_1 + T(x)) - A(\beta) (\alpha_2 + 1) \}
\end{aligned}
$$

We define

$$\alpha^{\prime} = \begin{bmatrix} \alpha_1 + T(x) \\ \alpha_2 + 1 \end{bmatrix}$$

Then we have

$$p(\beta | x) \propto h^{\prime}(\beta) \exp\{ T^{\prime}(\beta)^{\top} \alpha^{\prime} \}$$

We can see that $p(\beta | x)$ also belongs to the exponential family. Therefore we can always find a conjugate prior for a likelihood from the exponential family.
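
A concrete instance of this $\alpha^{\prime}$ update is the Beta-Bernoulli conjugate pair: each Bernoulli observation adds its sufficient statistic $T(x) = x$ to one hyperparameter and a count of 1 to the other. Here is a minimal sketch, written in the standard Beta$(a, b)$ parameterization rather than the natural one:

```python
import numpy as np

# Beta-Bernoulli: a classic conjugate pair from the exponential family.
# Each observation x updates the prior hyperparameters by the pattern
# proved above: add the sufficient statistic T(x) = x to one component
# and a count of 1 to the other.
rng = np.random.default_rng(0)
a, b = 2.0, 2.0                      # Beta prior hyperparameters
X = rng.binomial(1, 0.7, size=1000)  # i.i.d. samples from Bernoulli(0.7)

a_post = a + X.sum()                 # successes: a + sum_i T(x_i)
b_post = b + len(X) - X.sum()        # failures:  b + N - sum_i T(x_i)

print(a_post / (a_post + b_post))    # posterior mean of phi, close to 0.7
```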

Natural Form of Distribution

Normal Distribution

For the normal distribution, we have the probability density:

$$p(x | \theta) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\Big\{ -\frac{(x-\mu)^2}{2\sigma^2} \Big\}$$

Here $\theta = (\mu, \sigma^2)$ is called the parameter of the distribution, as opposed to the natural parameter $\eta$ of the distribution. We will convert this form to the natural form of the normal distribution:

$$p(x | \theta) = \frac{1}{\sqrt{2\pi}} \exp\Big\{ \frac{\mu}{\sigma^2} x - \frac{1}{2\sigma^2} x^2 - \frac{\mu^2}{2\sigma^2} - \log \sigma \Big\}$$

So far, we can see that the base measure, the sufficient statistic, and the natural parameter for the normal distribution are

$$h(x) = \frac{1}{\sqrt{2\pi}}, \qquad T(x) = \begin{bmatrix} x \\ x^2 \end{bmatrix}, \qquad \eta = \begin{bmatrix} \eta_1 \\ \eta_2 \end{bmatrix} = \begin{bmatrix} \frac{\mu}{\sigma^2} \\ -\frac{1}{2\sigma^2} \end{bmatrix}$$

It is not hard to see that

$$\mu = -\frac{\eta_1}{2\eta_2}, \qquad \sigma^2 = -\frac{1}{2\eta_2}$$

Thus,

$$\frac{\mu^2}{2\sigma^2} + \log \sigma = -\frac{\eta_1^2}{4\eta_2} - \frac{1}{2} \log(-2\eta_2)$$

This term is actually the log normalizer of the distribution:

$$A(\eta) = -\frac{\eta_1^2}{4\eta_2} - \frac{1}{2} \log(-2\eta_2)$$

Therefore, the natural form of the normal distribution is

$$p(x | \eta) = \frac{1}{\sqrt{2\pi}} \exp\Big\{ T(x)^{\top} \eta - \Big( -\frac{\eta_1^2}{4\eta_2} - \frac{1}{2} \log(-2\eta_2) \Big) \Big\}$$

Comparing this to the natural form of the normal distribution on Wikipedia, they are exactly the same.
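
As a sanity check, here is a short numerical sketch comparing the natural form against `scipy.stats.norm` (assuming scipy is available):

```python
import numpy as np
from scipy.stats import norm

# Evaluate the natural form at a few points and compare with scipy's pdf.
mu, sigma2 = 1.0, 2.0
eta1, eta2 = mu / sigma2, -1.0 / (2.0 * sigma2)
A = -eta1**2 / (4.0 * eta2) - 0.5 * np.log(-2.0 * eta2)

x = np.array([-2.0, 0.0, 1.0, 3.5])
p_natural = (1.0 / np.sqrt(2.0 * np.pi)) * np.exp(eta1 * x + eta2 * x**2 - A)
p_original = norm.pdf(x, loc=mu, scale=np.sqrt(sigma2))

print(np.allclose(p_natural, p_original))  # True
```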

Maximum Likelihood Estimation

Original Form

We have a collection of samples $X = \{ x_1, x_2, \cdots, x_N \}$. Each sample $x_i$ is independently and identically drawn from a normal distribution $\mathcal{N}(\mu, \sigma^2)$.

Maximizing $p(X | \theta)$ is equivalent to maximizing $\log p(X | \theta)$. We have

$$\log p(X | \theta) = \sum_{i=1}^{N} \log p(x_i | \theta) = -\frac{N}{2} \log(2\pi\sigma^2) - \sum_{i=1}^{N} \frac{(x_i - \mu)^2}{2\sigma^2}$$

We first take the derivative with respect to $\mu$ and set it to zero:

$$\frac{\partial \log p(X | \theta)}{\partial \mu} = \sum_{i=1}^{N} \frac{x_i - \mu}{\sigma^2} = 0$$

We then take the derivative with respect to $\sigma^2$ and set it to zero:

$$\frac{\partial \log p(X | \theta)}{\partial \sigma^2} = -\frac{N}{2\sigma^2} + \sum_{i=1}^{N} \frac{(x_i - \mu)^2}{2\sigma^4} = 0$$

We finally solve the above two equations. Therefore,

$$\hat{\mu} = \frac{1}{N} \sum_{i=1}^{N} x_i, \qquad \hat{\sigma}^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \hat{\mu})^2$$

We can see that deriving these results is tedious. Moreover, if the distribution from which the samples are drawn changes, we have to derive everything again from scratch using the probability density of the new distribution.
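
In code, these closed-form estimates are just the sample mean and the (biased) sample variance. A minimal numpy sketch:

```python
import numpy as np

# Maximum likelihood estimates from the original form:
# the sample mean and the (biased) sample variance.
rng = np.random.default_rng(0)
X = rng.normal(loc=1.0, scale=np.sqrt(2.0), size=100000)

mu_hat = X.mean()                        # (1/N) * sum_i x_i
sigma2_hat = ((X - mu_hat) ** 2).mean()  # (1/N) * sum_i (x_i - mu_hat)^2

print(mu_hat, sigma2_hat)  # close to mu = 1.0 and sigma^2 = 2.0
```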

Natural Form

We now use the natural form of the normal distribution to do maximum likelihood estimation for the same task.

Maximizing $p(X | \eta)$ is equivalent to maximizing $\log p(X | \eta)$. We have

$$\log p(X | \eta) = \sum_{i=1}^{N} \log p(x_i | \eta) = \sum_{i=1}^{N} \log h(x_i) + \Big( \sum_{i=1}^{N} T(x_i) \Big)^{\top} \eta - N A(\eta)$$

We take the derivative with respect to $\eta$ and set it to zero:

$$\frac{\partial \log p(X | \eta)}{\partial \eta} = \sum_{i=1}^{N} T(x_i) - N \frac{d A(\eta)}{d \eta} = 0$$

Therefore,

$$\frac{d A(\eta)}{d \eta} = \frac{1}{N} \sum_{i=1}^{N} T(x_i)$$

This is a general equation for maximum likelihood estimation for all distributions in the exponential family: because $\frac{d A(\eta)}{d \eta} = \mathbb{E}_{p(x|\eta)}[T(x)]$, the maximum likelihood estimate simply matches the expected sufficient statistic to its sample average.


Specifically, for the normal distribution,

$$\frac{d A(\eta)}{d \eta} = \begin{bmatrix} -\frac{\eta_1}{2\eta_2} \\ \frac{\eta_1^2}{4\eta_2^2} - \frac{1}{2\eta_2} \end{bmatrix} = \frac{1}{N} \begin{bmatrix} \sum_{i=1}^{N} x_i \\ \sum_{i=1}^{N} x_i^2 \end{bmatrix}$$

We solve the above equations. Recalling that $\mu = -\frac{\eta_1}{2\eta_2}$ and $\sigma^2 = -\frac{1}{2\eta_2}$, we get

$$\hat{\mu} = \frac{1}{N} \sum_{i=1}^{N} x_i, \qquad \hat{\sigma}^2 = \frac{1}{N} \sum_{i=1}^{N} x_i^2 - \hat{\mu}^2$$
Let’s finally check whether the maximum likelihood estimates from the original form and the natural form are equivalent. Note that this step is not required in practice since it is redundant:

$$\hat{\sigma}^2 = \frac{1}{N} \sum_{i=1}^{N} x_i^2 - \hat{\mu}^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \hat{\mu})^2$$

It is exactly the same as what we got from the original form, but much simpler to derive, because we already know $T(x)$, $\eta$, and $A(\eta)$. For other distributions in the exponential family, the corresponding $T(x)$, $\eta$, and $A(\eta)$ can be found on Wikipedia.
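
A minimal numpy sketch of this recipe: average the sufficient statistics $T(x) = (x, x^2)$ over the samples and read off $\hat{\mu}$ and $\hat{\sigma}^2$:

```python
import numpy as np

# Maximum likelihood via the natural form: match dA/d(eta) to the
# sample average of the sufficient statistics T(x) = (x, x^2).
rng = np.random.default_rng(0)
X = rng.normal(loc=1.0, scale=np.sqrt(2.0), size=100000)

t1 = X.mean()       # (1/N) * sum_i T_1(x_i), estimates E[x]   = mu
t2 = (X**2).mean()  # (1/N) * sum_i T_2(x_i), estimates E[x^2] = mu^2 + sigma^2

mu_hat = t1
sigma2_hat = t2 - t1**2

print(mu_hat, sigma2_hat)  # identical to the original-form estimates
```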

Final Remarks

The natural forms of all the exponential family members can be found on Wikipedia. Life becomes easier.
