Transformations of Random Variables

Introduction

The probability density function of a random variable describes the relative likelihood of the variable taking a value near a given point. When a random variable is transformed by a one-to-one function, the probability density function of the transformed variable can be obtained by multiplying the probability density function of the original variable by the absolute value of the determinant of the Jacobian matrix of the inverse transformation.

In this article, we will state and prove how the probability density function of a random variable changes when the variable is transformed deterministically.

Prerequisites

Inverse Function Theorem

The inverse function theorem states that the matrix inverse of the Jacobian matrix of an invertible function is the Jacobian matrix of the inverse function.

In other words, if the function $f: \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable and its Jacobian matrix is nonsingular (has non-zero determinant) at a point $\mathbf{x}$, then $f$ is invertible in a neighborhood of $\mathbf{x}$, and the Jacobian of the inverse function $f^{-1}$ at $\mathbf{y} = f(\mathbf{x})$ is the inverse of the Jacobian of $f$ at $\mathbf{x} = f^{-1}(\mathbf{y})$.

$$
\begin{align}
\mathbf{J}_{f^{-1}}(\mathbf{y})
&= \mathbf{J}_{f^{-1}}(f(\mathbf{x})) \\
&= \left( \mathbf{J}_f(f^{-1}(\mathbf{y})) \right)^{-1} \\
&= \left( \mathbf{J}_f(\mathbf{x}) \right)^{-1}
\end{align}
$$

The proof of this theorem is somewhat involved; interested readers can find it on Wikipedia.

Due to the multiplicative property of the determinant, the determinant of the Jacobian of the inverse function is the reciprocal of the determinant of the Jacobian of the original function.

In other words, because

$$
\mathbf{J}_{f^{-1}}(\mathbf{y}) \mathbf{J}_f(\mathbf{x}) = \mathbf{I}
$$

where $\mathbf{I}$ is the identity matrix, we have

$$
\begin{align}
\det \mathbf{J}_{f^{-1}}(\mathbf{y}) \det \mathbf{J}_f(\mathbf{x}) &= \det \mathbf{I} = 1
\end{align}
$$

Therefore,

$$
\det \mathbf{J}_{f^{-1}}(\mathbf{y}) = \frac{1}{\det \mathbf{J}_f(\mathbf{x})}
$$
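As a quick numerical sanity check (our own illustrative example, not part of the theorem), the following Python sketch verifies this reciprocal relationship for the polar-to-Cartesian map, whose Jacobian determinant is known to be $r$. The Jacobians are approximated by central finite differences.

```python
import numpy as np

# f maps polar coordinates (r, theta) to Cartesian coordinates (x, y).
def f(u):
    r, theta = u
    return np.array([r * np.cos(theta), r * np.sin(theta)])

# The inverse map recovers (r, theta) from (x, y).
def f_inv(v):
    x, y = v
    return np.array([np.hypot(x, y), np.arctan2(y, x)])

def jacobian(func, p, eps=1e-6):
    """Numerical Jacobian of func at p via central differences."""
    p = np.asarray(p, dtype=float)
    n = p.size
    J = np.zeros((n, n))
    for i in range(n):
        dp = np.zeros(n)
        dp[i] = eps
        J[:, i] = (func(p + dp) - func(p - dp)) / (2 * eps)
    return J

u = np.array([2.0, 0.7])  # (r, theta)
v = f(u)

det_Jf = np.linalg.det(jacobian(f, u))        # analytically equals r = 2
det_Jfinv = np.linalg.det(jacobian(f_inv, v))

assert np.isclose(det_Jf, 2.0, atol=1e-4)
assert np.isclose(det_Jfinv, 1.0 / det_Jf, atol=1e-4)
```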

Integration By Substitution

Integration by substitution is often a convenient technique for evaluating integrals.

Let $U$ be an open set in $\mathbb{R}^n$ and let $\varphi: U \to \mathbb{R}^n$ be an injective differentiable function with continuous partial derivatives whose Jacobian determinant is non-zero for every $\mathbf{u} \in U$. Then, for any real-valued, compactly supported, continuous function $f$ defined on $\varphi(U)$, the following substitution holds:

$$
\begin{align}
\int_{\varphi(U)} f(\mathbf{v}) d\mathbf{v} = \int_{U} f(\varphi(\mathbf{u})) \cdot \left| \det \left( \mathbf{J}\varphi \right) (\mathbf{u}) \right| d\mathbf{u}
\end{align}
$$

where $\mathbf{v} = \varphi(\mathbf{u})$, $d\mathbf{u}$ and $d\mathbf{v}$ are the volume elements in $\mathbb{R}^n$, $\mathbf{J}\varphi$ is the Jacobian matrix of partial derivatives of $\varphi$, and $\left| \det \left( \mathbf{J}\varphi \right) (\mathbf{u}) \right|$ is the absolute value of its determinant at the point $\mathbf{u}$.
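To make the formula concrete, here is a small numerical sketch (our own example): we evaluate $\iint e^{-(x^2+y^2)} \, dx \, dy$ over a disk of radius $R$ via the polar substitution $\varphi(r, \theta) = (r\cos\theta, r\sin\theta)$, whose Jacobian determinant is $r$, and compare against the closed form $\pi(1 - e^{-R^2})$.

```python
import numpy as np

# Integrate f(x, y) = exp(-(x^2 + y^2)) over the disk of radius R using
# the polar substitution phi(r, theta) = (r cos(theta), r sin(theta)).
# The transformed integrand is f(phi(u)) * |det J_phi(u)| = exp(-r^2) * r.
R = 1.5
n = 2000
rs = (np.arange(n) + 0.5) * (R / n)              # midpoint grid in r
thetas = (np.arange(n) + 0.5) * (2 * np.pi / n)  # midpoint grid in theta
r_grid, _ = np.meshgrid(rs, thetas, indexing="ij")

integrand = np.exp(-r_grid**2) * r_grid
numeric = integrand.sum() * (R / n) * (2 * np.pi / n)  # midpoint rule

exact = np.pi * (1.0 - np.exp(-R**2))  # closed-form value of the integral
assert abs(numeric - exact) < 1e-4
```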

Transformations of Random Variables

Suppose that $\mathbf{X}$ is a random variable taking values in $S \subseteq \mathbb{R}^n$, and $\mathbf{X}$ has a continuous distribution with probability density function $f$. In addition, suppose $\mathbf{Y} = r(\mathbf{X})$, where $r: S \to T$ with $T \subseteq \mathbb{R}^n$ is a one-to-one, continuously differentiable transformation whose Jacobian determinant is non-zero on $S$. Then $\mathbf{Y}$ has a continuous distribution with probability density function $g$ given by

$$
\begin{align}
g(\mathbf{y})
&= f(r^{-1}(\mathbf{y})) \left| \det \mathbf{J}_{r^{-1}}(\mathbf{y}) \right| \\
&= f(\mathbf{x}) \left| \det \mathbf{J}_{r^{-1}}(\mathbf{y}) \right| \\
&= \frac{f(\mathbf{x})}{\left| \det \mathbf{J}_r(\mathbf{x}) \right|}
\end{align}
$$

where $\mathbf{y} = r(\mathbf{x})$, $\mathbf{x} = r^{-1}(\mathbf{y})$, $\mathbf{J}_r(\mathbf{x})$ is the Jacobian matrix of $r$ at $\mathbf{x}$, and $\mathbf{J}_{r^{-1}}(\mathbf{y})$ is the Jacobian matrix of $r^{-1}$ at $\mathbf{y}$.
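Before proving the theorem, here is a one-dimensional sanity check in Python (our own example): with $X \sim N(0, 1)$ and $Y = r(X) = e^X$, the formula gives $g(y) = f(\ln y)/y$, the standard log-normal density. We verify it by comparing the integral of $g$ over an interval against the empirical frequency of Monte Carlo samples of $Y$ landing in that interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ N(0, 1) and Y = r(X) = exp(X), so r^{-1}(y) = log(y) and
# |det J_r(x)| = exp(x) = y.  The theorem gives g(y) = f(log y) / y.
def f(x):
    # Standard normal density.
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def g(y):
    # Density of Y from the change-of-variables formula.
    return f(np.log(y)) / y

# Empirical frequency of Y falling in [a, b], from one million samples.
samples = np.exp(rng.standard_normal(1_000_000))
a, b = 0.5, 2.0
empirical = np.mean((samples >= a) & (samples <= b))

# Midpoint-rule integral of g over [a, b].
n = 10_000
ys = a + (np.arange(n) + 0.5) * ((b - a) / n)
predicted = g(ys).sum() * ((b - a) / n)

assert abs(empirical - predicted) < 5e-3
```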

Proof

Let $A \subseteq T$ be a measurable set. Then the probability that $\mathbf{Y}$ falls in $A$ is

$$
\begin{align}
P(\mathbf{Y} \in A) &= P(\mathbf{X} \in r^{-1}(A)) \\
&= \int_{r^{-1}(A)} f(\mathbf{x}) d\mathbf{x} \\
&= \int_{A} f(r^{-1}(\mathbf{y})) \left| \det \mathbf{J}_{r^{-1}}(\mathbf{y}) \right| d\mathbf{y} \\
&= \int_{A} g(\mathbf{y}) d\mathbf{y}
\end{align}
$$

Note that the second equality follows from the fact that $r$ is one-to-one, and the third equality follows from integration by substitution.

Therefore,

$$
\begin{align}
g(\mathbf{y})
&= f(r^{-1}(\mathbf{y})) \left| \det \mathbf{J}_{r^{-1}}(\mathbf{y}) \right| \\
&= f(\mathbf{x}) \left| \det \mathbf{J}_{r^{-1}}(\mathbf{y}) \right| \\
&= \frac{f(\mathbf{x})}{\left| \det \mathbf{J}_r(\mathbf{x}) \right|}
\end{align}
$$

This concludes the proof. $\square$
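Finally, a multivariate sketch (our own example, with an arbitrarily chosen matrix $A$): for $\mathbf{X} \sim N(\mathbf{0}, \mathbf{I}_2)$ and the linear transformation $\mathbf{Y} = A\mathbf{X}$ with $A$ invertible, the theorem gives $g(\mathbf{y}) = f(A^{-1}\mathbf{y}) / |\det A|$, which should coincide with the $N(\mathbf{0}, AA^\top)$ density.

```python
import numpy as np

# X ~ N(0, I_2) and Y = r(X) = A X with A invertible, so r^{-1}(y) = A^{-1} y
# and |det J_r(x)| = |det A|.
A = np.array([[2.0, 0.5], [0.0, 1.0]])
A_inv = np.linalg.inv(A)

def f(x):
    # Standard bivariate normal density.
    return np.exp(-0.5 * x @ x) / (2 * np.pi)

def g(y):
    # Density of Y via the change-of-variables formula.
    return f(A_inv @ y) / abs(np.linalg.det(A))

def mvn_pdf(y, cov):
    # N(0, cov) density, written out directly for comparison.
    q = y @ np.linalg.solve(cov, y)
    return np.exp(-0.5 * q) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))

y = np.array([1.2, -0.7])
assert np.isclose(g(y), mvn_pdf(y, A @ A.T))
```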

Author

Lei Mao

Posted on

04-28-2024

Updated on

04-28-2024
