Entropy & Relative Entropy#
Entropy#
This is the limit on how much a source can be compressed without any loss (Shannon's source coding theorem): the entropy is a lower bound on the average number of bits needed to describe an outcome. More entropy means more randomness or uncertainty.
We use logs so that we get sums of entropies: for independent variables the joint probability factors into a product, and the log turns that product into a sum.
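As a quick sanity check (the two toy distributions below are my own choice), the entropy of the joint distribution of two independent variables is just the sum of the individual entropies:

```python
# Minimal sketch: for independent X and Y the joint probabilities multiply,
# and the log turns that product into a sum, so H(X, Y) = H(X) + H(Y).
import numpy as np
from scipy.stats import entropy

p_x = np.array([0.7, 0.3])
p_y = np.array([0.5, 0.25, 0.25])
p_xy = np.outer(p_x, p_y).ravel()   # joint distribution under independence

print(entropy(p_xy, base=2))                         # joint entropy H(X, Y)
print(entropy(p_x, base=2) + entropy(p_y, base=2))   # H(X) + H(Y): same value
```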
Examples#
Example Pt II: Delta Function, Uniform Distribution, Binomial Distribution, Gaussian Distribution
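As a rough sketch of how these compare, here is the entropy of each shape discretized onto the same support of 32 outcomes (the support size and the distribution parameters are arbitrary choices for illustration):

```python
# Compare the entropy of the four example shapes on a common discrete support.
import numpy as np
from scipy.stats import binom, entropy, norm

n = 32
x = np.arange(n)

# Delta function: all mass on a single outcome -> 0 bits
p_delta = np.zeros(n)
p_delta[0] = 1.0

# Uniform distribution: maximum entropy over n outcomes -> log2(32) = 5 bits
p_uniform = np.full(n, 1.0 / n)

# Binomial(n - 1, 0.5) on the same support
p_binom = binom.pmf(x, n - 1, 0.5)

# Discretized Gaussian centered on the support
p_gauss = norm.pdf(x, loc=(n - 1) / 2, scale=3.0)
p_gauss /= p_gauss.sum()

for name, p in [("delta", p_delta), ("uniform", p_uniform),
                ("binomial", p_binom), ("gaussian", p_gauss)]:
    print(f"{name:>8s}: {entropy(p, base=2):.3f} bits")
```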
Under Transformations#
In my line of work, we use generative models that rely on the change-of-variables formulation to estimate some target distribution via an invertible transformation of a simpler base distribution.
Under rotation: Entropy is invariant
Under scale: Entropy is not invariant; for \(y = c\,x\) the differential entropy shifts by \(\log|c|\) (and by \(\log|\det A|\) for a linear map \(A\)); a numerical check follows below.
Computational Cost…?
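Here is a minimal numerical check of the rotation and scale behaviour, assuming a 2-D Gaussian so that the differential entropy has the closed form \(\frac{1}{2}\log\det(2\pi e \Sigma)\) (the covariance, rotation angle, and scale matrix below are arbitrary):

```python
# Differential entropy of a multivariate Gaussian under rotation and scaling.
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a Gaussian: 0.5 * log det(2*pi*e*Sigma), in nats."""
    return 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * cov))

Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation matrix
A = np.diag([3.0, 0.5])                           # scaling matrix

print(gaussian_entropy(Sigma))               # original entropy
print(gaussian_entropy(R @ Sigma @ R.T))     # rotated: unchanged
print(gaussian_entropy(A @ Sigma @ A.T))     # scaled: shifted by log|det(A)|
print(gaussian_entropy(Sigma) + np.log(abs(np.linalg.det(A))))  # matches the scaled value
```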
Relative Entropy (Kullback Leibler Divergence)#
This is a measure of how far one distribution is from another. I like the term relative entropy because it frames the quantity in relation to the other information-theoretic measures.
If you’ve studied machine learning then you are fully aware that it is not a distance as this measure is not symmetric i.e. \(D_{KL}(p||q) \neq D_{KL}(q||p)\).
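A quick numerical check of that asymmetry with two made-up distributions, using scipy.stats.entropy (which returns \(D_{KL}(p||q)\) when given two arguments):

```python
# The KL divergence is not symmetric: swapping the arguments changes the value.
import numpy as np
from scipy.stats import entropy

p = np.array([0.9, 0.1])
q = np.array([0.5, 0.5])

print(entropy(p, q))   # D_KL(p || q)
print(entropy(q, p))   # D_KL(q || p): a different value
```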
Furthermore, the KL divergence is the difference between the cross-entropy and the entropy: \(D_{KL}(p||q) = H(p, q) - H(p)\).
So it measures how far our predicted distribution is from the actual distribution.
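We can verify that identity numerically; the two toy distributions below are just for illustration:

```python
# Check that D_KL(p || q) equals the cross-entropy minus the entropy.
import numpy as np
from scipy.stats import entropy

p = np.array([0.6, 0.3, 0.1])    # "actual" distribution
q = np.array([0.5, 0.25, 0.25])  # "predicted" distribution

H_p = -np.sum(p * np.log(p))     # entropy H(p)
H_pq = -np.sum(p * np.log(q))    # cross-entropy H(p, q)

print(H_pq - H_p)                # cross-entropy minus entropy
print(entropy(p, q))             # D_KL(p || q): the same value
```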
Under Transformations#
The KLD is invariant under invertible affine transformations, e.g. \(b = \mu + Ga\) with Jacobian \(\nabla F = G\).
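Here is a small sketch of that invariance for the 1-D Gaussian case, using the closed-form Gaussian KL divergence (the particular means, variances, and affine map are arbitrary choices):

```python
# Apply the same invertible affine map b = mu + G * a to both Gaussians and
# check that their KL divergence does not change.
import numpy as np

def kl_gauss(m1, s1, m2, s2):
    """Closed-form D_KL( N(m1, s1^2) || N(m2, s2^2) ), in nats."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

mu, G = 2.0, 3.0   # affine map b = mu + G * a

# original pair: p = N(0, 1), q = N(1, 2^2)
print(kl_gauss(0.0, 1.0, 1.0, 2.0))

# both distributions pushed through the same affine map: N(mu + G*m, (G*s)^2)
print(kl_gauss(mu + G * 0.0, G * 1.0, mu + G * 1.0, G * 2.0))   # same value
```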
Let's transform \(x\) with some invertible nonlinear function \(f(\cdot)\), which gives us \(y = f(x)\). Now let's apply the change-of-variables formula to get the probability of \(y\) after the transformation.
Remember, we defined our function as \(y = f(x)\), so technically we don't have access to the probability of \(y\), only the probability of \(x\). That means we cannot take the derivative with respect to \(y\), but we can take it with respect to \(x\). So let's rewrite the change-of-variables formula in those terms: \(p(y) = p(x)\left|\frac{dy}{dx}\right|^{-1}\).
Now, let's plug this formula into our KLD formulation.
We still have two terms that need to go: \(dy\) and \(q(y)\). For the integration, we can simply multiply by 1 to get \(dy\frac{dx}{dx}\), and with a bit of rearranging we get \(\frac{dy}{dx}dx\). I'm also going to change the notation to \(\left| \frac{dy}{dx} \right|dx\). Plugging this into our formula replaces the integral over \(y\) with an integral over \(x\).
Now, we still have the distribution \(q(y)\). If \(q\) is pushed through the same function \(f\), the change-of-variables formula gives \(q(y) = q(x)\left|\frac{dy}{dx}\right|^{-1}\) as well, so the Jacobian terms cancel inside the log and we are left with the KLD between the original distributions.
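Putting the steps together (and assuming \(q(y)\) comes from pushing \(q(x)\) through the same map \(f\)), the Jacobian factors cancel and the KLD is unchanged:

$$
D_{KL}\left[p(y) || q(y)\right]
= \int p(y) \log \frac{p(y)}{q(y)} \, dy
= \int p(x) \left|\frac{dy}{dx}\right|^{-1}
  \log \frac{p(x)\left|\frac{dy}{dx}\right|^{-1}}{q(x)\left|\frac{dy}{dx}\right|^{-1}}
  \left|\frac{dy}{dx}\right| dx
= \int p(x) \log \frac{p(x)}{q(x)} \, dx
= D_{KL}\left[p(x) || q(x)\right]
$$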
Normalized Variants#
Expected uncertainty.
Lower bound on the number of bits needed to represent a RV, e.g. a RV with a uniform distribution over 32 outcomes needs \(\log_2 32 = 5\) bits (see the quick check below).
Lower bound on the average length of the shortest description of \(X\)
Self-Information: \(I(x) = -\log p(x)\), the information gained from observing a single outcome \(x\). Averaging this over outcomes gives the (differential) entropy, \(h(X) = -\int p(x) \log p(x)\, dx\),
and the discrete version: \(H(X) = -\sum_i p_i \log p_i\).
If we want the viewpoint in terms of expectations, we can do a bit of rearranging to get \(H(X) = \mathbb{E}\left[\log \frac{1}{p(X)}\right]\).
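A quick check of the uniform example above and of the expectation viewpoint:

```python
# A uniform RV over 32 outcomes needs log2(32) = 5 bits, and the entropy is
# the expectation of -log2 p(X).
import numpy as np
from scipy.stats import entropy

p = np.full(32, 1 / 32)          # uniform distribution over 32 outcomes
print(entropy(p, base=2))        # 5.0 bits, i.e. log2(32)
print(np.sum(p * -np.log2(p)))   # the same value, written as E[-log2 p(X)]
```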
Formulas#
And we can estimate this empirically by \(\hat{H}(\mathbf{X}) = -\sum_{i} p_i \log p_i\),
where \(p_i = P(\mathbf{X} = x_i)\) is estimated by the relative frequency of the outcome \(x_i\) in the data.
Code - Step-by-Step#
import numpy as np

# `labels` is an array of observed outcomes (the data we estimate the entropy from)
# 1. Obtain all of the possible outcomes and how often each occurs
values, counts = np.unique(labels, return_counts=True)
# 2. Normalize the counts to obtain a probability distribution
#    (reassign rather than use /= because counts is an integer array)
counts = counts / counts.sum()
# 3. Calculate the entropy (in bits) using the formula above
H = -(counts * np.log2(counts)).sum()
As a general rule of thumb, I try not to reinvent the wheel, so I look for existing software that calculates entropy for me. The simplest I have found is scipy,
which has an entropy function (scipy.stats.entropy). We still need a probability distribution (the counts variable); from there we can just call the entropy function.
Code - Refactored#
import numpy as np
from scipy.stats import entropy

# 1. Obtain all of the possible outcomes and how often each occurs
values, counts = np.unique(labels, return_counts=True)
# 2. Normalize the counts to obtain a probability distribution
#    (reassign rather than use /= because counts is an integer array)
counts = counts / counts.sum()
# 3. Calculate the entropy with scipy (base 2 gives bits)
base = 2
H = entropy(counts, base=base)
Other Spaces#
Rényi#
Above we looked at the Shannon entropy, which is a special case of the Rényi entropy. The Rényi entropy generalizes the Shannon entropy with an order parameter \(\alpha\), and the Shannon entropy is recovered in the limit \(\alpha \rightarrow 1\). Below is the formula.