TLDR¶
Step 1: Train a coordinate-based Gaussian process interpolator on patches.
Step 2: Train a conditional prior model from GP simulations. Estimate full field from prior.
Step 3: Train a
Coordinate-Based Interpolation¶
Data
$$\mathcal{D} = \{(s_n, t_n), y_n\}_{n=1}^N \quad \text{where} \quad N = N_s N_t$$
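As a toy illustration, the gridded dataset can be assembled as follows (the grid sizes and the observation function are made up for the example):

```python
import numpy as np

# Hypothetical grid sizes; the source only defines N = Ns * Nt.
Ns, Nt = 32, 10                        # number of spatial / temporal samples
s = np.linspace(0.0, 1.0, Ns)          # spatial coordinates s_n
t = np.linspace(0.0, 1.0, Nt)          # temporal coordinates t_n
S, T = np.meshgrid(s, t, indexing="ij")

# Stack coordinates into the dataset D = {(s_n, t_n), y_n} with N = Ns * Nt.
X = np.stack([S.ravel(), T.ravel()], axis=-1)     # (N, 2) coordinate inputs
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * S).ravel() + 0.1 * rng.standard_normal(Ns * Nt)  # toy observations
```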
Joint Distribution
We formulate this as a conditional density estimation problem.
$$p(y, s, t, f, \theta) = p(y \mid f, \theta)\,p(f \mid s, t, \theta)\,p(\theta)$$
Model
We will use a simple Gaussian process model to fill in the gaps.
$$y_n = f(s_n, t_n; \theta) + \varepsilon_n, \qquad f \sim \mathcal{GP}\left(m_\theta(s,t), k_\theta(s,t)\right), \qquad \varepsilon_n \sim \mathcal{N}(0, \sigma^2)$$
Criteria
$$\mathcal{L}(\theta) = \log p(y \mid \theta)$$
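One way to sketch this interpolator is with scikit-learn's `GaussianProcessRegressor`, whose `fit` maximizes the log marginal likelihood $\mathcal{L}(\theta)$; the coordinates and observations below are synthetic stand-ins for the real patches:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic (s, t) coordinates and noisy observations y.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))                    # columns: s, t
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])
y += 0.05 * rng.standard_normal(200)

# GP with an anisotropic RBF kernel k_theta(s, t) plus observation noise sigma^2;
# fitting tunes theta by maximizing the log marginal likelihood log p(y | theta).
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[0.2, 0.2]) + WhiteKernel(noise_level=1e-2),
    normalize_y=True,
)
gp.fit(X, y)

# Fill in the gaps: predictive mean and uncertainty at unobserved coordinates.
X_new = rng.uniform(0, 1, size=(5, 2))
mean, std = gp.predict(X_new, return_std=True)
print(gp.log_marginal_likelihood_value_)  # the training criterion L(theta)
```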
Scale
There are many opportunities to improve the scalability of this class of methods.
- We could approximate the kernel using random Fourier features.
- We could use inducing inputs to sparsify the method.
- We could exploit dynamical structure, e.g., by rewriting the GP as a state-space model and solving it with Kalman filtering.
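A minimal sketch of the Fourier-feature idea (Rahimi–Recht random features approximating an RBF kernel; the feature count and lengthscale are illustrative):

```python
import numpy as np

def rff_features(X, n_features=256, lengthscale=0.2, seed=0):
    """Random Fourier features approximating an RBF kernel (Rahimi & Recht)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, n_features)) / lengthscale  # spectral frequencies
    b = rng.uniform(0, 2 * np.pi, n_features)               # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(100, 2))
Phi = rff_features(X)

# Exact RBF kernel vs. its low-rank approximation Phi @ Phi.T.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq / (2 * 0.2 ** 2))
K_approx = Phi @ Phi.T
print(np.abs(K_exact - K_approx).max())  # small approximation error
```

The payoff is that GP regression with the feature map costs $O(N D^2)$ for $D$ features instead of $O(N^3)$ for the exact kernel.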
Field-Based Interpolation¶
Initial Condition
We will use the Gaussian process interpolator from the previous section to provide the initial estimate of the field.
Joint Distribution
$$p(y, u, \theta) = p(y \mid u, \theta)\,p(u \mid \theta)\,p(\theta)$$
Model
$$
\begin{aligned}
\text{Likelihood:} \qquad y_n &= h(u_n, \theta) + \varepsilon_n, & \varepsilon_n &\sim \mathcal{N}(0, \Sigma_y) \\
\text{Prior:} \qquad u_n &= \mu_u + \varepsilon_n, & \varepsilon_n &\sim \mathcal{N}(0, \Sigma_u)
\end{aligned}
$$
Criteria
$$J(u; \theta) = \frac{1}{2}\left\|y - h(u; \theta_h)\right\|^2_{\Sigma_y^{-1}} + \frac{1}{2}\left\|u - \mu_u\right\|^2_{\Sigma_u^{-1}}$$

where $\theta = \{h, \theta_h, \mu_u, \Sigma_u, \Sigma_y, H\}$ are the parameters of the objective function.
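The objective can be sketched directly in NumPy; the linear observation operator `H` below is a hypothetical stand-in for the generic `h`:

```python
import numpy as np

def objective(u, y, h, mu_u, Sigma_y_inv, Sigma_u_inv):
    """Variational cost:
    J(u) = 1/2 ||y - h(u)||^2_{Sigma_y^{-1}} + 1/2 ||u - mu_u||^2_{Sigma_u^{-1}}."""
    r_obs = y - h(u)            # data misfit
    r_prior = u - mu_u          # departure from the prior mean
    return 0.5 * r_obs @ Sigma_y_inv @ r_obs + 0.5 * r_prior @ Sigma_u_inv @ r_prior

# Toy linear observation operator h(u) = H u (H is hypothetical here).
n, m = 8, 4
rng = np.random.default_rng(0)
H = rng.standard_normal((m, n))
h = lambda u: H @ u
u = rng.standard_normal(n)
y = rng.standard_normal(m)
J = objective(u, y, h, np.zeros(n), np.eye(m), np.eye(n))
print(J)  # scalar cost, minimized over u
```

In practice $u$ would be found by gradient-based minimization of `objective`, with the gradients supplied by automatic differentiation.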
Pretraining¶
For these, we will use a conditional prior model trained on Gaussian process simulations.
Gaussian Process Prior¶
$$
\begin{aligned}
\text{Sample Parameters:} \qquad \theta_n &\sim p(\theta) \\
\text{Sample Gaussian Process:} \qquad f_n &\sim \mathcal{GP}(m_{\theta_n}, k_{\theta_n}) \\
\text{Dataset:} \qquad \mathcal{D} &= \{\theta_n, f_n\}_{n=1}^N
\end{aligned}
$$

Model
Loss Function
$$\mathcal{L}(\cdot) = \mathbb{E}_{q_{\phi}(z \mid \theta)}\left[p_{\theta}(\cdot)\right]$$

Interpolated Data¶
Assimilated Data¶
Deep Equilibrium Model¶
$$u = h(u, y, x; \theta)$$
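A minimal sketch of solving this equilibrium by naive fixed-point iteration; the contractive linear `h` below is a toy stand-in for the learned model (practical deep equilibrium models typically use Anderson acceleration or Newton-type root finders):

```python
import numpy as np

def fixed_point(h, y, x, theta, u0, tol=1e-8, max_iter=500):
    """Solve u = h(u, y, x; theta) by simple fixed-point iteration."""
    u = u0
    for _ in range(max_iter):
        u_next = h(u, y, x, theta)
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u

# Toy contractive update (hypothetical h): guaranteed to converge to u*.
A = 0.5 * np.eye(3)                        # spectral radius < 1 => contraction
h = lambda u, y, x, theta: A @ u + y + x
y = np.array([1.0, 0.0, -1.0])
x = np.zeros(3)
u_star = fixed_point(h, y, x, None, u0=np.zeros(3))
# At equilibrium, u* satisfies u* = A u* + y + x.
assert np.allclose(u_star, A @ u_star + y + x)
```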
Dynamical Model¶
Model
$$
\begin{aligned}
\text{Observation Model:} \qquad y_t &= h(u_t; \theta) + \varepsilon_t, & \varepsilon_t &\sim \mathcal{N}(0, \Sigma_y) \\
\text{Dynamical Model:} \qquad u_t &= \text{ODESolver}(f, u_0, t_0, t)
\end{aligned}
$$
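With SciPy's `solve_ivp` playing the role of the ODESolver, the two-level model can be sketched on a toy system (the dynamics `f` and observation operator `h` below are illustrative, not the true model):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical dynamics f: a damped oscillator standing in for the true model.
def f(t, u):
    return np.array([u[1], -u[0] - 0.1 * u[1]])

# Hypothetical observation operator h: observe only the first state component.
def h(u, theta=None):
    return u[0]

u0, t0 = np.array([1.0, 0.0]), 0.0
t_eval = np.linspace(t0, 10.0, 50)

# u_t = ODESolver(f, u0, t0, t)
sol = solve_ivp(f, (t0, 10.0), u0, t_eval=t_eval)

# y_t = h(u_t; theta) + eps_t,  eps_t ~ N(0, Sigma_y)
rng = np.random.default_rng(0)
y = np.array([h(u) for u in sol.y.T]) + 0.05 * rng.standard_normal(len(t_eval))
```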