How do we formulate the prediction problem for our quantity of interest?
$$
F : \text{Observations} \times \text{Parameters} \rightarrow \text{State}
$$

We have a decision about how we want to formulate this problem.
There are two classes of methods: regression-based learning and objective-based learning.
$$
\begin{aligned}
\text{Regression-Based:} \quad \theta^* &= \underset{\theta}{\text{argmin}} \; \mathcal{L}(\theta) \\
\text{Objective-Based:} \quad u^* &= \underset{u}{\text{argmin}} \; J(u, \theta)
\end{aligned}
$$

## Example: Sea Surface Height Interpolation
Recall the mapping problem we wish to solve:
$$
F : \eta_{\text{obs}} \times \Theta \rightarrow \eta_{\text{state}}
$$
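As a concrete sketch of this signature (a hypothetical illustration, not the actual interpolation model: the linear-readout form and the names `F`, `eta_obs`, `theta` are assumptions):

```python
import jax.numpy as jnp

def F(eta_obs: jnp.ndarray, theta: jnp.ndarray) -> jnp.ndarray:
    """Hypothetical map from gappy SSH observations to a full SSH state.

    eta_obs: flattened observed SSH values (gaps filled with zeros).
    theta:   learnable parameters; here a (state_dim, obs_dim) linear
             readout, purely for illustration.
    """
    return theta @ eta_obs
```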
## Pros & Cons

Regression-Based Losses:
- Pro: If the objective, $J(u, \theta)$, is computationally expensive, we don't need to compute it.
- Pro: Uses global information from $u_{\text{obs}}$.
- Pro: Does not need to compute $\nabla_u J(u, \theta)$.
- Con: Does not have access to $J(u, \theta)$.
- Con: It may be expensive to compute $u_{\text{sim}}$.
- Con: May be hard when $u^*(\theta)$ is not unique.
Objective-Based Losses:
- Pro: Uses objective information from $J(u, \theta)$.
- Pro: Faster; does not require $u_{\text{sim}}$.
- Pro: Easily learns non-unique $u^*(\theta)$.
- Con: Can get stuck in local optima of $J(u, \theta)$.
- Con: Often requires computing $\nabla_u J(u, \theta)$, as illustrated in the sketch below.
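A minimal sketch contrasting the two loss formulations; the predictor `F`, the precomputed reference solution `u_sim`, and the signature `J(u, y_obs)` (matching the denoising example below) are all illustrative assumptions:

```python
import jax
import jax.numpy as jnp

def regression_loss(theta, y_obs, u_sim, F):
    # Regression-based: match a precomputed reference solution u_sim.
    # Never evaluates J, so an expensive objective costs nothing here.
    u_pred = F(y_obs, theta)
    return jnp.mean((u_pred - u_sim) ** 2)

def objective_loss(theta, y_obs, F, J):
    # Objective-based: score the prediction directly with J; no
    # reference u_sim is needed, but gradients flow through J,
    # so grad_u J must be computable.
    u_pred = F(y_obs, theta)
    return J(u_pred, y_obs)

# Either loss can be minimized over theta by gradient descent:
grad_regression = jax.grad(regression_loss)  # dL/dtheta
grad_objective = jax.grad(objective_loss)    # dJ/dtheta, via grad_u J
```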
## Examples
### Denoising
In this example, we are interested in denoising a set of observations, $y_{\text{obs}}$. We want to recover the original signal, which we believe to be our state, $u$.
We assume the observations are related to the state through a linear operator, $H$.
For simplicity, we assume i.i.d. Gaussian noise.
$$
y_{\text{obs}} = Hu + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2)
$$

We can write out the posterior using the Bayesian formulation:
$$
p(u \mid y_{\text{obs}}) \propto p(y_{\text{obs}} \mid u) \, p(u)
$$

Because we are using linear operations and a Gaussian likelihood, we can use a conjugate prior, which allows for simpler inference.
We can write this as

$$
p(u \mid y_{\text{obs}}) \propto \exp\left(-J(u, y_{\text{obs}})\right)
$$

which is connected to the Gibbs distribution.
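To make that connection explicit (a worked step, assuming unit noise variance $\sigma = 1$ and a Laplace prior $p(u) \propto \exp(-\lambda ||u||_1)$ for the $\ell_1$ term below):

$$
-\log p(u \mid y_{\text{obs}})
= -\log p(y_{\text{obs}} \mid u) - \log p(u) + \text{const}
= \frac{1}{2}||y_{\text{obs}} - Hu||_2^2 + \lambda ||u||_1 + \text{const}
$$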
We are left with the objective function

$$
J(u, y_{\text{obs}}) = \frac{1}{2}||y_{\text{obs}} - Hu||_2^2 + \lambda ||u||_1
$$

- $J$ - regularized reconstruction energy
- $\lambda$ - regularization coefficient
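A minimal sketch of minimizing this energy with proximal gradient descent (ISTA), one standard choice for handling the non-smooth $\ell_1$ term; the identity observation operator, step size, $\lambda$, and signal sizes below are illustrative assumptions:

```python
import jax
import jax.numpy as jnp

def J_smooth(u, y_obs, H):
    # Smooth data-fidelity part of J: 0.5 * ||y_obs - H u||_2^2
    r = y_obs - H @ u
    return 0.5 * jnp.dot(r, r)

def soft_threshold(u, t):
    # Proximal operator of t * ||.||_1 (handles the non-smooth term)
    return jnp.sign(u) * jnp.maximum(jnp.abs(u) - t, 0.0)

def ista(y_obs, H, lam=0.1, step=0.5, n_iters=200):
    # Proximal gradient descent on J(u, y_obs): a gradient step on the
    # quadratic term followed by a prox step on lam * ||u||_1.
    grad_u = jax.grad(J_smooth)  # nabla_u of the smooth part
    u = jnp.zeros_like(y_obs)
    for _ in range(n_iters):
        u = soft_threshold(u - step * grad_u(u, y_obs, H), step * lam)
    return u

# Illustrative usage: denoise a sparse signal observed through H = I.
key = jax.random.PRNGKey(0)
n = 64
u_true = jnp.zeros(n).at[::8].set(1.0)              # sparse ground truth
y_obs = u_true + 0.1 * jax.random.normal(key, (n,))
u_hat = ista(y_obs, jnp.eye(n))
```

The gradient step handles the smooth quadratic term (this is exactly the $\nabla_u J$ computation the objective-based approach requires), while the soft-thresholding step is the proximal operator of $\lambda ||u||_1$.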