TL;DR:

- We can add flexibility by introducing latent variables.
- We can use flexible flow transformations (bijective, surjective, stochastic) to encode the covariates or other observations; see the sketch after this list.
- These could provide more expressive representations, and possibly more scalability, by drastically reducing the dimensionality.
- We could use a library of pre-trained, relevant covariate embeddings for more expressive representations.
- These transformations are fully compatible with state space models.
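As a minimal sketch of the bijective case, the toy flow below encodes covariates with an invertible elementwise affine map and tracks the log-determinant needed to transform densities. The `AffineBijector` class, its parameters, and the test shapes are illustrative assumptions, not from the original text:

```python
import numpy as np

class AffineBijector:
    """Toy bijective flow layer: x -> scale * x + shift (elementwise)."""

    def __init__(self, scale, shift):
        # scale must be nonzero elementwise so the map is invertible
        self.scale = np.asarray(scale)
        self.shift = np.asarray(shift)

    def forward(self, x):
        """Encode covariates x into the transformed space."""
        return self.scale * x + self.shift

    def inverse(self, y):
        """Exactly recover x from the encoded value."""
        return (y - self.shift) / self.scale

    def log_abs_det_jacobian(self, x):
        """Log |det J| of the forward map (constant here, so x is unused)."""
        return np.sum(np.log(np.abs(self.scale)))


# Encode a batch of 2-D covariates and verify invertibility
flow = AffineBijector(scale=[2.0, 0.5], shift=[1.0, -1.0])
x = np.random.randn(8, 2)
z = flow.forward(x)
assert np.allclose(flow.inverse(z), x)
```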
Generative Parametric Model
For this model, we assume there is some latent variable, $z$, that underlies our process.
The joint distribution factorizes as

$$p(y, z) = p(y \mid z)\, p(z)$$

so the prior and the data likelihood are:
$$\begin{aligned}
z &\sim p(z \mid \theta) \\
y \mid z &\sim p(y \mid \theta, z)
\end{aligned}$$
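To make the generative story concrete, here is a minimal ancestral-sampling sketch. The Gaussian choices for $p(z \mid \theta)$ and $p(y \mid \theta, z)$, and the `theta` parameter names, are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_joint(theta, n_samples=1000):
    """Ancestral sampling: draw z ~ p(z | theta), then y | z ~ p(y | theta, z)."""
    # Illustrative choice: z ~ N(theta["z_loc"], theta["z_scale"]^2)
    z = rng.normal(theta["z_loc"], theta["z_scale"], size=n_samples)
    # Illustrative choice: y | z ~ N(theta["w"] * z, theta["noise"]^2)
    y = rng.normal(theta["w"] * z, theta["noise"])
    return z, y

theta = {"z_loc": 0.0, "z_scale": 1.0, "w": 2.0, "noise": 0.1}
z, y = sample_joint(theta)
```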
Example Prediction

$$\begin{aligned}
\text{Sample Hyperparameter:} \quad \alpha_n &\sim p(\alpha) \\
\text{Sample Latent Variable:} \quad z_n &\sim p(z) \\
\text{Process Likelihood:} \quad \theta_n &\sim p(\theta \mid z_n, \alpha_n) \\
\text{Data Likelihood:} \quad y_n &\sim p(y_n \mid \theta_n, z_n)
\end{aligned}$$
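Analogously, a single prediction can be simulated by sampling each level of the hierarchy in order ($\alpha_n \rightarrow z_n \rightarrow \theta_n \rightarrow y_n$). The specific distributions below are illustrative assumptions; only the sampling order mirrors the model above:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_prediction():
    """One ancestral draw through the hierarchy: alpha_n -> z_n -> theta_n -> y_n."""
    alpha_n = rng.gamma(2.0, 1.0)  # Sample Hyperparameter: alpha_n ~ p(alpha) (assumed Gamma)
    z_n = rng.normal(0.0, 1.0)     # Sample Latent Variable: z_n ~ p(z) (assumed standard normal)
    # Process Likelihood: theta_n ~ p(theta | z_n, alpha_n)
    # (assumed normal, centered on z_n with precision alpha_n)
    theta_n = rng.normal(z_n, 1.0 / np.sqrt(alpha_n))
    # Data Likelihood: y_n ~ p(y_n | theta_n, z_n) (assumed normal)
    y_n = rng.normal(theta_n + z_n, 0.1)
    return y_n

# Monte Carlo predictive samples
ys = np.array([sample_prediction() for _ in range(1_000)])
print(ys.mean(), ys.std())
```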