
Dynamics — ODEs, PDEs, and parameter estimation

Once state lives in a Field and derivatives are coordinate-aware, the step to a full dynamical-system workflow is small: wire the RHS into an ODE solver, differentiate through it, and optimize. The three notebooks here integrate a 1-D advection-diffusion PDE with diffrax Kidger (2021), then invert it for unknown parameters and initial states using optax and jax.value_and_grad.

Forward problem

The target PDE is the linear 1-D advection-diffusion equation

$$\partial_t T = -\,U\,\partial_x T + \kappa\,\partial_{xx} T,$$

discretized in space with the finite-difference operators from the derivatives section and integrated in time with diffrax.Tsit5 and a PID step controller.
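
A minimal sketch of that forward solve, assuming periodic boundaries and plain centred differences via `jnp.roll` (the notebooks use the coordinate-aware operators from the derivatives section instead); the grid size, tolerances, and Gaussian initial condition are illustrative placeholders:

```python
import jax.numpy as jnp
import diffrax

nx = 128
dx = 1.0 / nx
x = jnp.arange(nx) * dx

def rhs(t, T, args):
    """Semi-discrete RHS of dT/dt = -U dT/dx + kappa d2T/dx2, periodic in x."""
    U, kappa = args
    dTdx = (jnp.roll(T, -1) - jnp.roll(T, 1)) / (2.0 * dx)           # centred 1st derivative
    d2Tdx2 = (jnp.roll(T, -1) - 2.0 * T + jnp.roll(T, 1)) / dx**2    # centred 2nd derivative
    return -U * dTdx + kappa * d2Tdx2

T0 = jnp.exp(-((x - 0.3) ** 2) / 0.005)  # illustrative Gaussian pulse

sol = diffrax.diffeqsolve(
    diffrax.ODETerm(rhs),
    diffrax.Tsit5(),
    t0=0.0, t1=1.0, dt0=1e-3, y0=T0,
    args=(0.5, 1e-3),  # (U, kappa)
    stepsize_controller=diffrax.PIDController(rtol=1e-5, atol=1e-8),
    saveat=diffrax.SaveAt(ts=jnp.linspace(0.0, 1.0, 11)),
)
# sol.ys: (11, nx) array of snapshots
```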

Writing the RHS as $\dot{\mathbf{T}} = f(\mathbf{T}; U, \kappa)$, the diffeqsolve call is an end-to-end-differentiable function of the state and the parameters: this is the lever neural-ODE-style frameworks Chen et al. (2018) pull to do gradient-based inversion without manually assembling tangent equations.
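
For instance, wrapping diffeqsolve in a function of $\theta = (U, \kappa)$ makes any scalar functional of the solution differentiable with jax.grad; the sketch below reuses `rhs` and `T0` from above and relies on diffrax's default checkpointed adjoint for the reverse pass:

```python
import jax

def final_state(theta):
    """Return T(t1) as a differentiable function of theta = (U, kappa)."""
    sol = diffrax.diffeqsolve(
        diffrax.ODETerm(rhs), diffrax.Tsit5(),
        t0=0.0, t1=1.0, dt0=1e-3, y0=T0,
        args=(theta[0], theta[1]),
        stepsize_controller=diffrax.PIDController(rtol=1e-5, atol=1e-8),
    )
    return sol.ys[-1]

# Gradient of a scalar diagnostic of the final state, no hand-derived tangent equations:
g = jax.grad(lambda theta: jnp.sum(final_state(theta) ** 2))(jnp.array([0.5, 1e-3]))
```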

Inverse problems

Given noisy observations $\{\mathbf{y}_k\}$ at times $\{t_k\}$, the two inverse problems in this section are variations on a least-squares objective:

$$\mathcal{L}(\theta, \mathbf{T}_0) = \sum_k \bigl\| \mathbf{T}(t_k; \theta, \mathbf{T}_0) - \mathbf{y}_k \bigr\|^2 + \mathcal{R}(\theta, \mathbf{T}_0),$$

where $\theta = (U, \kappa)$ are the PDE parameters, $\mathbf{T}_0$ is the (unknown) initial state, and $\mathcal{R}$ is an optional prior / regularizer. Closed-form gradients are infeasible (the solution is defined only implicitly through the time stepper), so jax.value_and_grad through diffeqsolve plus optax.adam is the only reasonable workflow at this scale. The equivalence with ensemble Kalman-type inversion Evensen (2009) is worth noting: both target the same MAP point, but gradient optimization is cheaper when derivatives are available and the forward model is smooth.
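
A sketch of the resulting loop, assuming observation times and values `ts_obs`, `ys_obs` are given, and using a simple smoothness penalty on $\mathbf{T}_0$ as a stand-in for $\mathcal{R}$ (the penalty weight, learning rate, and initial guesses are placeholders, not values from the notebooks):

```python
import optax

def loss_fn(params, ts_obs, ys_obs):
    theta, T0_guess = params["theta"], params["T0"]
    sol = diffrax.diffeqsolve(
        diffrax.ODETerm(rhs), diffrax.Tsit5(),
        t0=0.0, t1=1.0, dt0=1e-3, y0=T0_guess,
        args=(theta[0], theta[1]),
        stepsize_controller=diffrax.PIDController(rtol=1e-5, atol=1e-8),
        saveat=diffrax.SaveAt(ts=ts_obs),
    )
    misfit = jnp.sum((sol.ys - ys_obs) ** 2)       # sum_k ||T(t_k) - y_k||^2
    prior = jnp.sum(jnp.diff(T0_guess) ** 2)       # R: penalize rough initial states
    return misfit + 1e-2 * prior

params = {"theta": jnp.array([0.1, 1e-2]), "T0": jnp.zeros(nx)}  # initial guesses
opt = optax.adam(learning_rate=1e-2)
opt_state = opt.init(params)

@jax.jit
def step(params, opt_state, ts_obs, ys_obs):
    loss, grads = jax.value_and_grad(loss_fn)(params, ts_obs, ys_obs)
    updates, opt_state = opt.update(grads, opt_state)
    return optax.apply_updates(params, updates), opt_state, loss

for _ in range(500):
    params, opt_state, loss = step(params, opt_state, ts_obs, ys_obs)
```

Because params is a pytree, jax.value_and_grad returns gradients with respect to the parameters and the initial state in one pass, so the two inverse problems differ only in which entries of params are held fixed.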

Numerical considerations

Notebooks

References
  1. Kidger, P. (2021). On Neural Differential Equations [PhD thesis, University of Oxford]. https://arxiv.org/abs/2202.02435
  2. Chen, R. T. Q., Rubanova, Y., Bettencourt, J., & Duvenaud, D. (2018). Neural Ordinary Differential Equations. Advances in Neural Information Processing Systems (NeurIPS).
  3. Evensen, G. (2009). Data Assimilation: The Ensemble Kalman Filter (2nd ed.). Springer. https://doi.org/10.1007/978-3-642-03711-5