Emulation#
Linear Gaussian State Space Model#
LGSSM
Loss Function
We can train this using maximum likelihood estimation (MLE) like so:
Because all of the operations are linear and Gaussian, we can minimize this loss exactly.
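For illustration, here is a minimal numpy/scipy sketch of this MLE objective, assuming an LGSSM with transition matrix `A`, process noise `Q`, observation matrix `H`, observation noise `R`, and Gaussian prior `(m0, P0)` (these names and the function below are illustrative, not taken from the text):

```python
import numpy as np
from scipy.stats import multivariate_normal

def lgssm_nll(ys, A, Q, H, R, m0, P0):
    """Negative log-likelihood of an observation sequence under an LGSSM,
    computed with the standard Kalman filter recursion.
    Model: x_t = A x_{t-1} + q_t,  q_t ~ N(0, Q)
           y_t = H x_t     + r_t,  r_t ~ N(0, R)
    """
    m, P = m0, P0
    nll = 0.0
    for y in ys:
        # Predict: propagate the filtering distribution through the linear dynamics
        m_pred = A @ m
        P_pred = A @ P @ A.T + Q
        # Predictive (innovation) distribution of the observation y_t
        y_pred = H @ m_pred
        S = H @ P_pred @ H.T + R
        nll -= multivariate_normal.logpdf(y, mean=y_pred, cov=S)
        # Update: condition the state on the observed y_t
        K = P_pred @ H.T @ np.linalg.inv(S)
        m = m_pred + K @ (y - y_pred)
        P = (np.eye(len(m)) - K @ H) @ P_pred
    return nll
```

Because every step is linear-Gaussian, this objective is the exact marginal log-likelihood; no approximation is introduced.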
Transformed LGSSM#
Again, we can solve for this exactly because we have a simple transformation function, \(\boldsymbol{T}\):
Note that this is the same loss function as above; however, it is now evaluated in the space defined by the transformation \(\boldsymbol{T}\), which contributes an additional term to the likelihood.
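As a sketch of how the transformation enters, assuming for illustration that \(\boldsymbol{T}\) is bijective and differentiable, and reusing the hypothetical `lgssm_nll` from the sketch above (the helpers `T` and `T_logdet` are also hypothetical):

```python
def transformed_lgssm_nll(xs, T, T_logdet, A, Q, H, R, m0, P0):
    """Sketch: run the LGSSM loss in the transformed space and
    account for the change of variables induced by T."""
    # Map each observation into the transformed (latent) space
    zs = [T(x) for x in xs]
    # Accumulate log |det Jacobian of T| at each time step
    logdet = sum(T_logdet(x) for x in xs)
    # NLL in data space = NLL in latent space - sum of log-dets (exact when T is bijective)
    return lgssm_nll(zs, A, Q, H, R, m0, P0) - logdet
```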
Motivation
We take our inspiration from Koopman theory. We assume that the true dynamics governing the transition of our state from \(\mathbf{x}_t\) to \(\mathbf{x}_{t+1}\) are described by a non-linear function, \(\boldsymbol{f}\).
However, \(\boldsymbol{f}\) is often unknown and we have no way to solve for it directly. Instead, we postulate that there exists some transformation (possibly invertible) into a space where the dynamics become simple and linear.
A practical example is the choice of coordinate system for planetary motion within our solar system. An Earth-centric coordinate system exhibits very non-linear structure when describing the motion of the other planets and the sun. However, a sun-centric coordinate system showcases very simple dynamics that can be described using simpler equations.
We are free to choose this transformation, and we get some very interesting properties depending upon the restrictions we impose. Some examples of transformations, \(\boldsymbol{T}\), include the Fourier transform (FFT), wavelets, PCA, autoencoders (AEs), and normalizing flows (NFs).
Transformations
where \(\mathcal{V}(\mathbf{x}, \mathbf{z})\) is the likelihood contribution and \(\mathcal{E}(\mathbf{x}, \mathbf{z})\) is the bound looseness.
Bijective#
These are known as Normalizing Flows or Invertible Neural Networks.
The good thing about these transformations is that they give you exact likelihoods. However, their expressivity is limited by the diffeomorphic constraint. In addition, the transformation must preserve dimensionality and have a tractable Jacobian determinant.
In terms of the log-likelihood function, these are:
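Concretely, for a bijective \(\boldsymbol{T}\) this is the standard change-of-variables formula,

$\( \begin{aligned} \log p_{\mathbf{x}}(\mathbf{x}) &= \log p_{\mathbf{z}}\left(\boldsymbol{T}(\mathbf{x})\right) + \log \left| \det \nabla_{\mathbf{x}} \boldsymbol{T}(\mathbf{x}) \right| \end{aligned} \)$

so, in the notation above, the likelihood contribution is the log-determinant of the Jacobian and the bound looseness term vanishes.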
Stochastic#
These are known as Variational AutoEncoders.
where the reconstruction is given (approximately) by \(\mathbf{x} \approx \boldsymbol{T}_d \circ \boldsymbol{T}_e(\mathbf{x})\), with encoder \(\boldsymbol{T}_e\) and decoder \(\boldsymbol{T}_d\).
Cons:

- Approximate Inference: the objective is a lower bound on the log-likelihood rather than the exact likelihood.

Pros:

- Dimensionality Reduction: this allows us to reduce (or increase) the dimensionality of the latent space relative to the observations.
The downside to this method is that it offers only approximate inference. On the upside, it is more expressive because you are not limited to diffeomorphic transformations, and it offers dimensionality reduction to lower the complexity of the problem. All of this flexibility comes at a price: it can be quite difficult to train, and it remains relatively unexplored territory for EO data.
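As a rough sketch of how such a stochastic transformation is trained, assuming hypothetical encoder/decoder callables `T_e` (returning the mean and log-variance of \(q(\mathbf{z}|\mathbf{x})\)) and `T_d`, a single-sample Monte Carlo estimate of the evidence lower bound (ELBO) could look like:

```python
import numpy as np

def elbo(x, T_e, T_d, rng):
    """Single-sample ELBO estimate for a Gaussian encoder q(z|x)
    and a unit-variance Gaussian decoder p(x|z).
    (Sketch only; T_e and T_d are hypothetical callables.)"""
    mu, log_var = T_e(x)                                             # parameters of q(z|x)
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)   # reparameterization trick
    x_rec = T_d(z)                                                   # decoder mean, i.e. T_d(T_e(x))
    # Reconstruction term: log p(x|z) up to an additive constant
    log_px_z = -0.5 * np.sum((x - x_rec) ** 2)
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return log_px_z - kl
```

Maximizing this bound trades reconstruction quality against the KL term; the gap between the bound and the true log-likelihood is the bound looseness mentioned above.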
Ensemble State Space Model#
This is difficult in general. However, if we restrict ourselves to Gaussian distributions, we can approximate the required quantities using the sample mean and covariance of the ensemble members.
Log-Likelihood
Given ensemble samples \(\{\mathbf{x}_{(i),t}\}_{i=1}^{N_e}\), we compute the sample mean and covariance: $\( \begin{aligned} \bar{\mathbf{x}}_t &= \frac{1}{N_e} \sum_{i=1}^{N_e} \mathbf{x}_{(i),t} \\ \bar{\mathbf{P}}_t &= \frac{1}{N_e - 1} \sum_{i=1}^{N_e} (\mathbf{x}_{(i),t} - \bar{\mathbf{x}}_t)(\mathbf{x}_{(i),t} - \bar{\mathbf{x}}_t)^\top \end{aligned} \)$
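A minimal numpy sketch of these ensemble statistics (the `(N_e, D)` array layout is an assumption):

```python
import numpy as np

def ensemble_moments(ensemble):
    """Sample mean and covariance of an ensemble at one time step.
    `ensemble` has shape (N_e, D): N_e members, state dimension D."""
    n_e = ensemble.shape[0]
    x_bar = ensemble.mean(axis=0)                   # ensemble mean, shape (D,)
    deviations = ensemble - x_bar                   # anomalies, shape (N_e, D)
    P_bar = deviations.T @ deviations / (n_e - 1)   # unbiased sample covariance, shape (D, D)
    return x_bar, P_bar
```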
Deep State Space Model (DSSM)#
Loss Function
Pros

- More Flexible
- More Scalable

Cons

- Approximate Inference
- More Flexible (harder to train)
Tutorials#
Generative Models

- Iterative Gaussianization
- Gaussianization Flows
- GFs for Spatial Data
- Stochastic Transformations (Slicing, Augmentation)

Kalman Filters

- Linear Gaussian State Space Model (LGSSM)
- Ensemble LGSSM (EnsLGSSM)
- Gaussianized State Space Model (GaussSSM)
- Deep Gaussian State Space Model (DGSSM)