Rotation
- Author: J. Emmanuel Johnson
- Email: jemanjohnson34@gmail.com
- Website: jejjohnson.netlify.com
- Colab Notebook: Notebook
Main Idea
A rotation matrix \(\mathbf{R}\) is an orthogonal matrix used to perform a linear transformation that preserves lengths and angles. In the context of RBIG, rotations are applied after marginal Gaussianization to mix dimensions before the next iteration.
Forward Transformation
\[
\mathbf{z} = \mathbf{R}\,\mathbf{x}
\]
Reverse Transformation
Since \(\mathbf{R}\) is orthogonal, \(\mathbf{R}^{-1} = \mathbf{R}^\top\), so:
\[
\mathbf{x} = \mathbf{R}^\top \mathbf{z}
\]
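As a quick numerical sanity check of the forward and reverse maps (the 30-degree 2D rotation below is an illustrative choice, not from the original text):

```python
import numpy as np

# Illustrative 2D rotation by 30 degrees
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 2.0])
z = R @ x        # forward:  z = R x
x_rec = R.T @ z  # reverse:  x = R^T z

print(np.allclose(x_rec, x))                             # True: R^T undoes R
print(np.isclose(np.linalg.norm(z), np.linalg.norm(x)))  # True: lengths preserved
```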
Jacobian
For the linear map \(\mathbf{z} = \mathbf{R}\,\mathbf{x}\), the Jacobian is simply \(\mathbf{R}\). The determinant of an orthogonal matrix is ±1 (it is +1 for proper rotation matrices).
Proof:
Starting from the orthogonality condition \(\mathbf{R}^\top \mathbf{R} = \mathbf{I}\), a short series of identities proves this:
\[
\det(\mathbf{R}^\top \mathbf{R}) = \det(\mathbf{R}^\top)\det(\mathbf{R}) = \det(\mathbf{R})^2 = \det(\mathbf{I}) = 1
\]
Therefore, we can conclude that \(\det(\mathbf{R}) = \pm 1\). For a proper rotation matrix (no reflections), \(\det(\mathbf{R}) = +1\).
Log Jacobian
As shown above, the determinant of a proper orthogonal (rotation) matrix is +1, so taking the log is simply zero:
\[
\log|\det(\mathbf{R})| = \log 1 = 0
\]
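This can be checked numerically with `np.linalg.slogdet` (the rotation angle below is arbitrary):

```python
import numpy as np

theta = 0.7  # arbitrary angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

sign, logdet = np.linalg.slogdet(R)
print(sign, logdet)  # sign = 1.0, logdet is 0 up to floating-point error
```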
Decompositions
QR Decomposition
\[
A = QR
\]
where:
- \(A \in \mathbb{R}^{N \times M}\)
- \(Q \in \mathbb{R}^{N \times N}\) is orthogonal
- \(R \in \mathbb{R}^{N \times M}\) is upper triangular
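A short sketch with `np.linalg.qr` (the shapes below are chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))  # N = 5, M = 3

# mode='complete' returns the full N x N orthogonal Q
Q, R = np.linalg.qr(A, mode='complete')

assert np.allclose(Q @ R, A)            # A = Q R
assert np.allclose(Q.T @ Q, np.eye(5))  # Q is orthogonal
assert np.allclose(R, np.triu(R))       # R is upper triangular
```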
Singular Value Decomposition
Finds the singular values of the matrix:
\[
A = U \Sigma V^\top
\]
where:
- \(A \in \mathbb{R}^{N \times M}\)
- \(U \in \mathbb{R}^{N \times K}\) is unitary
- \(\Sigma \in \mathbb{R}^{K \times K}\) is diagonal with the singular values
- \(V^\top \in \mathbb{R}^{K \times M}\) is unitary
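A sketch with `np.linalg.svd`; with `full_matrices=False` we get the thin SVD, where \(K = \min(N, M)\):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))

# Thin SVD: K = min(N, M) = 3
U, s, Vt = np.linalg.svd(A, full_matrices=False)

assert np.allclose(U @ np.diag(s) @ Vt, A)  # A = U Sigma V^T
assert np.all(s[:-1] >= s[1:])              # singular values sorted descending
assert np.allclose(U.T @ U, np.eye(3))      # columns of U are orthonormal
```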
Eigendecomposition
Finds the eigenvalues of a symmetric matrix:
\[
A_S = Q \Lambda Q^\top
\]
where:
- \(A_S \in \mathbb{R}^{N \times N}\)
- \(Q \in \mathbb{R}^{N \times K}\) is unitary
- \(\Lambda \in \mathbb{R}^{K \times K}\) is diagonal with the eigenvalues
- \(Q^\top \in \mathbb{R}^{K \times N}\) is unitary
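A sketch with `np.linalg.eigh`, which is specialized for symmetric (Hermitian) matrices; the symmetrized random matrix below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A_S = (M + M.T) / 2  # symmetrize to obtain a symmetric matrix

evals, Q = np.linalg.eigh(A_S)

assert np.allclose(Q @ np.diag(evals) @ Q.T, A_S)  # A_S = Q Lambda Q^T
assert np.allclose(Q.T @ Q, np.eye(4))             # Q is orthogonal
```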
Polar Decomposition
\[
A = QS
\]
where:
- \(A \in \mathbb{R}^{N \times N}\)
- \(Q \in \mathbb{R}^{N \times N}\) is unitary (orthogonal)
- \(S \in \mathbb{R}^{N \times N}\) is symmetric positive semi-definite
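A sketch using `scipy.linalg.polar` (assuming SciPy is available; by default it returns the right polar decomposition \(A = QS\)):

```python
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))

Q, S = polar(A)  # right polar decomposition: A = Q S

assert np.allclose(Q @ S, A)                    # A = Q S
assert np.allclose(Q.T @ Q, np.eye(4))          # Q is orthogonal
assert np.allclose(S, S.T)                      # S is symmetric
assert np.all(np.linalg.eigvalsh(S) >= -1e-10)  # S is positive semi-definite
```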
Initializing
We have an initialization step where we compute the transformation matrix \(\mathbf W\). This will be done in the fit method. We need the data because some transformations, such as PCA and ICA, depend on \(\mathbf x\).
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class LinearTransform(BaseEstimator, TransformerMixin):
    """
    Parameters
    ----------
    basis : str, default='PCA'
        Which basis to use when computing the transformation matrix W
        ('PCA', 'ICA', 'random', 'conv' or 'DCT').
    conv : int, default=16
        Filter size for the convolutional basis.
    """
    def __init__(self, basis='PCA', conv=16):
        self.basis = basis
        self.conv = conv

    def fit(self, data):
        """
        Computes the transformation matrix W for
            z = W x

        Parameters
        ----------
        data : array, (n_samples, n_dimensions)
        """
        # Check the data
        # Compute the transformation matrix
        basis = self.basis.upper()
        if basis == 'PCA':
            ...
        elif basis == 'ICA':
            ...
        elif basis == 'RANDOM':
            ...
        elif basis == 'CONV':
            ...
        elif basis == 'DCT':
            ...
        else:
            raise ValueError(f"Unrecognized basis: '{self.basis}'")
        # Save the transformation matrix
        self.W = ...
        return self
Transformation
We have a transformation step:
\[
\mathbf{z} = \mathbf{W}\,\mathbf{x}
\]
where:
- \(\mathbf W\) is the transformation matrix
- \(\mathbf x\) is the input data
- \(\mathbf z\) is the transformed output
def transform(self, data):
    """
    Computes the forward transformation
        z = W x

    Parameters
    ----------
    data : array, (n_samples, n_dimensions)
    """
    return data @ self.W
Inverse Transformation
We can also apply an inverse transform.
def inverse(self, data):
    """
    Computes the inverse transformation
        x = W^-1 z

    Parameters
    ----------
    data : array, (n_samples, n_dimensions)

    Returns
    -------
    data : array, (n_samples, n_dimensions)
        The data mapped back to the original space.
    """
    # For an orthogonal W, np.linalg.inv(self.W) equals self.W.T
    return data @ np.linalg.inv(self.W)
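Since the branches of fit are left as stubs, here is a minimal runnable sketch of what the PCA basis and the forward/inverse round trip might look like (the variable names and the row-vector convention `data @ W` are illustrative assumptions, not the original implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))  # toy correlated data

# PCA-style basis: eigenvectors of the data covariance form an orthogonal W
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
_, W = np.linalg.eigh(cov)

Z = Xc @ W                    # forward transform
X_rec = Z @ np.linalg.inv(W)  # inverse transform

assert np.allclose(W.T @ W, np.eye(3))  # W is orthogonal
assert np.allclose(X_rec, Xc)           # the round trip recovers the data
```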
Jacobian
Lastly, we can calculate the Jacobian of this transformation. For the linear map \(\mathbf{z} = \mathbf{W}\,\mathbf{x}\), the Jacobian matrix is \(\mathbf{W}\) itself, so the Jacobian determinant is simply \(\det(\mathbf{W})\).
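For an orthogonal \(\mathbf W\) this determinant has absolute value 1, so the log-determinant term vanishes, as in the rotation case above. A quick check with `np.linalg.slogdet` (the random orthogonal matrix below, built via QR, is illustrative):

```python
import numpy as np

# A random orthogonal matrix from the QR decomposition of a Gaussian matrix
rng = np.random.default_rng(1)
W, _ = np.linalg.qr(rng.normal(size=(4, 4)))

sign, logabsdet = np.linalg.slogdet(W)
assert np.isclose(logabsdet, 0.0)  # log|det W| = 0 for orthogonal W
```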