Inverse Function Theorem#

Often we are faced with a situation where we do not know the distribution of our data, but we do know the distribution of some transformation of it. For example, if we know that \(X\) is a uniformly distributed random variable, what is the distribution of \(X^2 + X + c\)? In such cases, we want to understand the relationship between the distribution we know and the transformed distribution. One way to do so is the inverse transform method, which works directly with the cumulative distribution function (CDF).

Let’s say we have \(u \sim \mathcal U(0,1)\) and some invertible, increasing function \(f(\cdot)\) that maps \(u\) to \(X \sim \mathcal P\):

\[x = f(u)\]

Now, we want to know the probability that \(X \le x\) when all we know is the distribution of \(u\).

\[P(X \le x) = P(f(u) \le x)\]

Since \(f\) is invertible and increasing, applying \(f^{-1}\) to both sides of the inequality leaves the event unchanged:

\[P(X \le x) = P(u \le f^{-1}(x))\]

For \(u \sim \mathcal U(0,1)\) we have \(P(u \le t) = t\) for \(t \in [0,1]\), so \(P(X \le x) = f^{-1}(x)\): the inverse map \(f^{-1}\) is exactly the CDF of \(X\). Choosing \(f\) to be the inverse CDF of the target distribution therefore gives us a direct formulation for moving from the uniform distribution space \(\mathcal U\) to a different probability distribution space \(\mathcal P\).
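To make this concrete, here is a minimal sketch of the recipe in Python. The target distribution (an exponential with rate \(\lambda = 2\)) is an assumption for illustration; its CDF is \(F(x) = 1 - e^{-\lambda x}\), so the inverse CDF is \(f(u) = F^{-1}(u) = -\ln(1-u)/\lambda\).

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0  # rate of the target exponential distribution (assumed for illustration)

# Step 1: draw uniform samples u ~ U(0, 1)
u = rng.uniform(size=100_000)

# Step 2: push them through the inverse CDF f = F^{-1},
# where F(x) = 1 - exp(-lam * x)  =>  F^{-1}(u) = -log(1 - u) / lam
x = -np.log(1.0 - u) / lam

# Sanity check: an Exponential(lam) r.v. has mean 1/lam and variance 1/lam^2
print(x.mean())  # should be close to 0.5
print(x.var())   # should be close to 0.25
```

The empirical mean and variance should land near \(1/\lambda = 0.5\) and \(1/\lambda^2 = 0.25\), the moments of the target exponential distribution.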

Probability Integral Transform#

The probability integral transform is the converse statement: if \(X\) has a continuous CDF \(F_X\), then \(F_X(X) \sim \mathcal U(0,1)\). Pushing any continuous random variable through its own CDF recovers the uniform distribution.
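A quick numerical check of this converse, again sketched with the exponential example assumed above:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0  # rate of the exponential distribution (assumed for illustration)

# Draw exponential samples directly, then push them through their own CDF,
# F(x) = 1 - exp(-lam * x)
x = rng.exponential(scale=1.0 / lam, size=100_000)
u = 1.0 - np.exp(-lam * x)

# If the probability integral transform holds, u should behave like U(0, 1):
# mean ~ 1/2 and variance ~ 1/12 (about 0.0833)
print(u.mean(), u.var())
```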

Derivative of an Inverse Function#

If \(y = f(x)\) with \(f\) differentiable and invertible near \(x\), and \(f'\big(f^{-1}(y)\big) \neq 0\), then

\[\frac{d}{dy}f^{-1}(y) = \frac{1}{f'\big(f^{-1}(y)\big)}\]

This is the ingredient needed to turn the CDF relationship above into a density: differentiating \(P(X \le x) = f^{-1}(x)\) with respect to \(x\) yields the density of \(X\).

  • MathInsight - Link
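As a numerical sanity check, here is a small sketch; the function \(f(x) = x^3 + x\) is an assumed example, chosen because it is strictly increasing (hence invertible) but has no convenient closed-form inverse:

```python
def f(x):
    return x**3 + x  # strictly increasing on all of R, hence invertible

def f_prime(x):
    return 3 * x**2 + 1

def f_inverse(y, lo=-10.0, hi=10.0, iters=200):
    # Invert f numerically by bisection; valid because f is increasing
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

y, h = 4.0, 1e-5

# Finite-difference estimate of the derivative of f^{-1} at y
finite_diff = (f_inverse(y + h) - f_inverse(y - h)) / (2 * h)

# Value predicted by the inverse derivative formula: 1 / f'(f^{-1}(y))
predicted = 1.0 / f_prime(f_inverse(y))

print(finite_diff, predicted)  # the two values should agree closely
```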