Interpolator + DA
1. Train NerF on Observations
Assume we have a dataset of independent observations, $y_n$, on the observation domain $(\Omega_y, \mathcal{T}_y)$:

$$
\mathcal{D} = \left\{ (\mathbf{x}_n, t_n), y_n \right\}_{n=1}^{N}, \qquad \mathbf{x} \in \Omega_y \subseteq \mathbb{R}^{D_s}, \quad t \in \mathcal{T}_y \subseteq \mathbb{R}^+
$$

We train a neural field, $f_\theta$, to interpolate the observations with respect to the spatiotemporal coordinate values:

$$
y_n = f_\theta(\mathbf{x}_n, t_n) + \varepsilon_n
$$
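A minimal sketch of this fitting step, assuming a small MLP in JAX for $f_\theta$; the synthetic observation triples, layer sizes, learning rate, and iteration count below are illustrative stand-ins, not part of the setup above.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize MLP weights for f_theta: (x, t) -> y."""
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) * jnp.sqrt(2.0 / m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def neural_field(params, x, t):
    """f_theta(x, t): evaluate the field at one spatiotemporal coordinate."""
    h = jnp.concatenate([x, t])
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b

def loss_fn(params, xs, ts, ys):
    """Mean squared error for y_n = f_theta(x_n, t_n) + eps_n over D."""
    preds = jax.vmap(neural_field, in_axes=(None, 0, 0))(params, xs, ts)
    return jnp.mean((preds - ys) ** 2)

@jax.jit
def sgd_step(params, xs, ts, ys, lr=1e-3):
    grads = jax.grad(loss_fn)(params, xs, ts, ys)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
kp, kx, kt = jax.random.split(key, 3)
params = init_mlp(kp, [3, 64, 64, 1])              # input = (x_1, x_2, t), D_s = 2
xs = jax.random.uniform(kx, (512, 2))              # x_n in Omega_y
ts = jax.random.uniform(kt, (512, 1))              # t_n in T_y
ys = jnp.sin(xs.sum(axis=-1, keepdims=True) + ts)  # stand-in observations y_n
for _ in range(1000):
    params = sgd_step(params, xs, ts, ys)
```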
2. Train Strong-Constrained DA with NerF
$$
\begin{aligned}
u_0 &\sim \mathcal{N}(u_b, \Sigma_b) \\
u_t &= f_\theta(u_{t-1}, t) \\
y_t &= h_\theta(u_t, t) + \varepsilon_y, \qquad \varepsilon_y \sim \mathcal{N}(0, \Sigma_y)
\end{aligned}
$$

We can estimate the state by minimizing the objective function

$$
J(u) = \sum_{t=1}^{T} || y_\theta(t) - h_\theta(u_t, t) ||^2_{\Sigma_y^{-1}} + || u_0 - u_b ||^2_{\Sigma_b^{-1}}
$$

where $y_\theta(t)$ is the observation interpolated by the NerF from step 1.
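A minimal sketch of this minimization, assuming the trained NerF is frozen and wrapped as a dynamics surrogate. The `dynamics` and `obs_op` functions are hypothetical stand-ins for $f_\theta$ and $h_\theta$, the covariances are taken diagonal, and plain gradient descent replaces the second-order solvers typically used for 4D-Var.

```python
import jax
import jax.numpy as jnp

def dynamics(u, t):
    """Stand-in for the frozen NerF surrogate f_theta(u_{t-1}, t)."""
    return 0.95 * u + 0.05 * jnp.sin(t)

def obs_op(u, t):
    """Stand-in for the observation operator h_theta(u_t, t)."""
    return u[:2]  # e.g. only the first two state components are observed

def objective(u0, y_obs, u_b, sigma_y, sigma_b):
    """J(u_0): strong constraint, so only the initial state is free."""
    J = jnp.sum((u0 - u_b) ** 2 / sigma_b)    # ||u_0 - u_b||^2_{Sigma_b^{-1}}
    u = u0
    for t in range(1, y_obs.shape[0] + 1):
        u = dynamics(u, float(t))             # u_t = f_theta(u_{t-1}, t), exactly
        r = y_obs[t - 1] - obs_op(u, float(t))
        J = J + jnp.sum(r ** 2 / sigma_y)     # ||y_theta(t) - h_theta(u_t, t)||^2
    return J

T, dim = 10, 4
u_b = jnp.zeros(dim)
y_obs = jnp.ones((T, 2))                      # stand-in for y_theta(t)
u0 = u_b
grad_J = jax.jit(jax.grad(lambda u: objective(u, y_obs, u_b, 1.0, 1.0)))
for _ in range(200):
    u0 = u0 - 1e-2 * grad_J(u0)               # gradient descent on u_0 only
```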
3. Train NerF on Reanalysis
Assume we have a dataset of independent reanalysis points, $u_n^*$, on the state domain $(\Omega_u, \mathcal{T}_u)$:

$$
\mathcal{D} = \left\{ (\mathbf{x}_n, t_n), u_n^* \right\}_{n=1}^{N}, \qquad \mathbf{x} \in \Omega_u \subseteq \mathbb{R}^{D_s}, \quad t \in \mathcal{T}_u \subseteq \mathbb{R}^+
$$

We train a neural field, $f_\theta$, to interpolate the reanalysis with respect to the spatiotemporal coordinate values:

$$
u^*(\mathbf{x}_n, t_n) = f_\theta(\mathbf{x}_n, t_n) + \varepsilon_n
$$
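The training loop is identical to the sketch in step 1; only the targets change, with the reanalysis pairs $((\mathbf{x}_n, t_n), u_n^*)$ replacing the observation pairs.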
4. Train Weak-Constrained DA
$$
\begin{aligned}
u_0 &\sim \mathcal{N}(u_b, \Sigma_b) \\
u_t &= f_\theta(u_{t-1}, t) + \varepsilon_u, \qquad \varepsilon_u \sim \mathcal{N}(0, \Sigma_u) \\
y_t &= h_\theta(u_t, t) + \varepsilon_y, \qquad \varepsilon_y \sim \mathcal{N}(0, \Sigma_y)
\end{aligned}
$$

We can estimate the state by minimizing the objective function

$$
J(u) = \sum_{t=1}^{T} || y_\theta(t) - h_\theta(u_t, t) ||^2_{\Sigma_y^{-1}} + \sum_{t=1}^{T} || u_\theta(t) - f_\theta(u_{t-1}, t) ||^2_{\Sigma_u^{-1}} + || u_0 - u_b ||^2_{\Sigma_b^{-1}}
$$

where $y_\theta(t)$ and $u_\theta(t)$ are the NerF-interpolated observations (step 1) and reanalysis (step 3), respectively.
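A minimal sketch of the weak-constrained objective; unlike the strong-constrained case, the whole trajectory $u_{0:T}$ is free and model error is penalized rather than enforced exactly. The `dynamics` and `obs_op` placeholders are the same hypothetical stand-ins as in the strong-constrained sketch.

```python
import jax
import jax.numpy as jnp

def dynamics(u, t):
    """Stand-in for the frozen NerF surrogate f_theta(u_{t-1}, t)."""
    return 0.95 * u + 0.05 * jnp.sin(t)

def obs_op(u, t):
    """Stand-in for the observation operator h_theta(u_t, t)."""
    return u[:2]

def weak_objective(u_traj, y_obs, u_b, sigma_y, sigma_u, sigma_b):
    """J(u_{0:T}): data misfit + model-error penalty + background term."""
    u0, u_rest = u_traj[0], u_traj[1:]
    ts = jnp.arange(1, u_rest.shape[0] + 1, dtype=jnp.float32)
    data = jnp.sum((y_obs - jax.vmap(obs_op)(u_rest, ts)) ** 2 / sigma_y)
    preds = jax.vmap(dynamics)(u_traj[:-1], ts)        # f_theta(u_{t-1}, t)
    model = jnp.sum((u_rest - preds) ** 2 / sigma_u)   # the weak constraint
    background = jnp.sum((u0 - u_b) ** 2 / sigma_b)
    return data + model + background

T, dim = 10, 4
u_b = jnp.zeros(dim)
y_obs = jnp.ones((T, 2))                               # stand-in for y_theta(t)
u_traj = jnp.tile(u_b, (T + 1, 1))                     # free variables: u_0, ..., u_T
grad_J = jax.jit(jax.grad(lambda u: weak_objective(u, y_obs, u_b, 1.0, 1.0, 1.0)))
for _ in range(200):
    u_traj = u_traj - 1e-2 * grad_J(u_traj)
```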
Interpolator + Foundation Models
2. Train Embedding on NerF
Assume we have a dataset of sequential, independent observations, $y_t$, which are given by the neural field, $f_\theta$.
However, we query the functa on the latent domain, $(\Omega_z, \mathcal{T}_z)$:

$$
y_\theta(t) = f_\theta(\mathbf{X}_z, t), \qquad \mathbf{X}_z \in \mathbb{R}^{D_{\Omega_z}}, \quad t \in \mathcal{T}_z \subseteq \mathbb{R}^+
$$

where $\mathbf{X}_z = \{ \mathbf{x} \in \Omega_z \subseteq \mathbb{R}^{D_s} \}$.
We can create a dataset by (quasi-)randomly selecting points:

$$
\mathcal{D} = \{ y_\theta(t) \}_{t=1}^{T}
$$

We train an embedding on the latent domain, $z$, using the neural field.
We can also apply a random mask, $m$, to help augment the data by randomly masking pixels.

$$
\mathcal{L}(\theta) = \frac{1}{|\mathcal{D}|} \sum_{t \in \mathcal{D}} || y_\theta(t) - T_D \circ T_E \circ m \circ y_\theta(t) ||_2^2
$$
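A minimal sketch of this masked-reconstruction loss, assuming $T_E$ and $T_D$ denote encoder and decoder maps (taken as single layers here) and $m$ is a Bernoulli pixel mask; the random snapshot matrix below is a stand-in for the NerF queries $\{y_\theta(t)\}$.

```python
import jax
import jax.numpy as jnp

def encode(We, y):    # T_E: observation space -> latent z
    return jnp.tanh(y @ We)

def decode(Wd, z):    # T_D: latent z -> observation space
    return z @ Wd

def mae_loss(params, batch, mask):
    """(1/|D|) sum_t ||y_theta(t) - T_D o T_E o m o y_theta(t)||_2^2"""
    We, Wd = params
    recon = decode(Wd, encode(We, batch * mask))   # reconstruct from masked input
    return jnp.mean(jnp.sum((batch - recon) ** 2, axis=-1))

key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
D_obs, D_lat, T = 256, 32, 128
snapshots = jax.random.normal(k1, (T, D_obs))      # stand-in for {y_theta(t)}
params = (jax.random.normal(k2, (D_obs, D_lat)) * 0.05,
          jax.random.normal(k3, (D_lat, D_obs)) * 0.05)
grad_fn = jax.jit(jax.grad(mae_loss))
for step in range(500):
    k4, km = jax.random.split(k4)
    mask = jax.random.bernoulli(km, 0.75, (T, D_obs))  # keep ~75% of pixels
    g = grad_fn(params, snapshots, mask)
    params = jax.tree_util.tree_map(lambda p, dp: p - 1e-3 * dp, params, g)
```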
Latent Variable

- Train (Masked) AutoEncoder on Simulations
- Plug-and-Play (PnP) for Real Observations (see the sketch after this list)
- Train AutoEncoder on Sparse Observations
- Train Variational AutoEncoder (Probabilistic Reconstruction)
- Train U-Net (DEQ)
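A minimal sketch of the PnP-style idea in the second bullet: reuse a decoder trained on simulations as a prior and fit only the latent code $z$ to sparse real observations. The decoder weights, observation locations, and regularization weight below are all hypothetical stand-ins.

```python
import jax
import jax.numpy as jnp

def decode(Wd, z):
    """Frozen decoder from the pretrained autoencoder (stand-in)."""
    return jnp.tanh(z @ Wd)

def inversion_loss(z, Wd, y_sparse, obs_idx, lam=1e-2):
    """Data misfit on the observed pixels plus a Gaussian prior on z."""
    y_full = decode(Wd, z)
    misfit = jnp.sum((y_full[obs_idx] - y_sparse) ** 2)
    return misfit + lam * jnp.sum(z ** 2)

key = jax.random.PRNGKey(0)
kw, ky = jax.random.split(key)
D_obs, D_lat = 256, 32
Wd = jax.random.normal(kw, (D_lat, D_obs)) * 0.05  # pretrained weights (stand-in)
obs_idx = jnp.arange(0, D_obs, 12)                 # sparse observation locations
y_sparse = jax.random.normal(ky, obs_idx.shape)    # stand-in real observations

z = jnp.zeros(D_lat)
grad_fn = jax.jit(jax.grad(inversion_loss))
for _ in range(300):
    z = z - 1e-1 * grad_fn(z, Wd, y_sparse, obs_idx)
```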