## System

We assume that there is an underlying process that can explain the temperature extremes.
$$
\begin{aligned}
\text{Measurement}: && &&
y_n &= \boldsymbol{y}(\mathbf{s}_n, t_n), && &&
\boldsymbol{y}: \mathbb{R}^{D_s}\times \mathbb{R}^+ \rightarrow \mathbb{R}
\end{aligned}
$$

where $t_n$ is the time stamp of acquisition and $\mathbf{s}_n$ are the station coordinates.
More concretely, the input parameters of this unknown function are
$$
\begin{aligned}
\text{Spatial Coordinates}: && &&
\mathbf{s}_n&\in\Omega\subseteq\mathbb{R}^{D_s} &&
[\text{Degrees}, \text{Degrees}, \text{Meters}]
\\
\text{Temporal Coordinates}: && &&
t_n&\in\mathcal{T}\subseteq\mathbb{R}^+ &&
[\text{Days}]
\end{aligned}
$$

where $D_s = [\text{Latitude}, \text{Longitude}, \text{Altitude}]$.
We have an alternative representation when we want to stress the dependencies in the spatial domain:
$$
\begin{aligned}
\text{Measurement}: && &&
\mathbf{y}_n &= \boldsymbol{y}(\boldsymbol{\Omega}, t_n), && &&
\boldsymbol{y}: \mathbb{R}^{D_\Omega}\times \mathbb{R}^+ \rightarrow \mathbb{R}^{D_\Omega}
\end{aligned}
$$

## Covariate

We assume that there is a covariate which is correlated with the increase of extremes.
In this case, we are interested in the Global Mean Surface Temperature (GMST).
$$
\begin{aligned}
\text{Covariate}: && &&
x_n &= x(t_n), && &&
x: \mathbb{R}^+ \rightarrow \mathbb{R}
\end{aligned}
$$

## Data

We assume that we have a sequence of data points available.
$$
\begin{aligned}
\mathcal{D} &= \left\{ (t_n, \mathbf{s}_n), x_n, y_n \right\}_{n=1}^N && &&
\mathbf{x} \in \mathbb{R}^{N} &&
\mathbf{y} \in \mathbb{R}^{N} &&
\mathbf{S} \in \mathbb{R}^{N\times D_s} &&
\mathbf{T} \in \mathbb{R}^{N}
\end{aligned}
$$

where $N=N_s\times N_\Omega$ is the total number of spatial and temporal coordinates available.
For convenience, throughout this paper, we will often stack each of these into vectors or matrices.
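As an illustration of this stacking, a minimal numpy sketch with synthetic sizes (the station count, record length, and random values below are placeholders, not real data):

```python
import numpy as np

# Illustrative sizes only: N_t time stamps at N_omega stations (synthetic).
N_t, N_omega, D_s = 365, 10, 3                 # days, stations, (lat, lon, alt)
rng = np.random.default_rng(0)

t = np.arange(N_t, dtype=float)                # temporal coordinates [days]
stations = rng.uniform(size=(N_omega, D_s))    # station coordinates

# Flatten the spatio-temporal grid into N = N_t * N_omega samples.
N = N_t * N_omega
T = np.repeat(t, N_omega)                      # (N,)  time stamp per sample
S = np.tile(stations, (N_t, 1))                # (N, D_s) station per sample
x = np.repeat(rng.normal(size=N_t), N_omega)   # (N,)  covariate, one value per day
y = rng.normal(size=N)                         # (N,)  measurements (placeholder)
```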
In addition, we might use a different notation to denote the dependencies between the spatial points:
$$
\begin{aligned}
\mathcal{D} &= \left\{ (t_n, \mathbf{S}_n), x_n, \mathbf{y}_n \right\}_{n=1}^N && &&
\mathbf{y}_n \in \mathbb{R}^{D_\Omega} && &&
\mathbf{Y} \in \mathbb{R}^{D_\Omega \times N} &&
\mathbf{S}_\Omega \in \mathbb{R}^{N\times D_\Omega \times D_s}
\end{aligned}
$$

## Joint Distribution

We also assume that there is a joint distribution of a set of parameters, $\boldsymbol{\theta}$, combined with the observations, $\mathbf{y}$.
However, we decompose the joint distribution into a likelihood and a prior.
Basically, the observations can be explained by some prior parameters.
$$
p(\mathbf{y},\mathbf{z},\boldsymbol{\theta}) = p(\mathbf{y}|\mathbf{z})p(\mathbf{z}|\boldsymbol{\theta})p(\boldsymbol{\theta})
$$

The likelihood term is an arbitrary distribution, and the prior term contains the prior parameters for the likelihood distribution.
$$
\begin{aligned}
\text{Data Likelihood}: && &&
\mathbf{y} &\sim p(\mathbf{y}|\mathbf{z}) \\
\text{Process Parameters}: && &&
\mathbf{z} &\sim p(\mathbf{z}|\boldsymbol{\theta}) \\
\text{Prior Parameters}: && &&
\boldsymbol{\theta} &\sim p(\boldsymbol{\theta})
\end{aligned}
$$

where $\boldsymbol{\theta} = \left\{\mu,\sigma,\kappa\right\}$.
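The factorization above can be sampled ancestrally: draw the prior parameters, then the process parameters, then the observations. The sketch below uses Gaussian placeholders for each factor, since only the factorization, not the distributional families, is fixed at this point:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_joint(n_samples=1000):
    """Ancestral sampling from p(y, z, theta) = p(y|z) p(z|theta) p(theta).

    The Gaussians below are placeholders: the model only fixes the
    factorization, not the distributional families.
    """
    theta = rng.normal(0.0, 1.0, size=n_samples)   # theta ~ p(theta)
    z = rng.normal(theta, 0.5)                     # z ~ p(z | theta)
    y = rng.normal(z, 0.1)                         # y ~ p(y | z)
    return y, z, theta

y_s, z_s, theta_s = sample_joint()
```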
The full posterior is given by
$$
p(\boldsymbol{\theta}|\mathbf{y}) =
\frac{1}{Z}p(\mathbf{y}|\boldsymbol{\theta})
p(\boldsymbol{\theta})
$$

where $Z$ is a normalizing constant.
The problematic term is the normalizing constant because it is an integral over all of the parameters
$$
Z=\int p(\mathbf{y}|\boldsymbol{\theta})p(\boldsymbol{\theta})d\boldsymbol{\theta}
$$

This is intractable because there is no closed form given the non-linearities in the GEVD PDF as seen in (3) and (4).
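To illustrate why one resorts to approximate inference, the naive Monte Carlo estimator $Z \approx \frac{1}{M}\sum_{m} p(\mathbf{y}|\boldsymbol{\theta}_m)$, $\boldsymbol{\theta}_m\sim p(\boldsymbol{\theta})$, is sketched below on a toy Gaussian model where $Z$ happens to be known in closed form; for the GEVD likelihood no such closed form exists:

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Toy conjugate model: y | theta ~ N(theta, 1), theta ~ N(0, 1),
# so Z = p(y) = N(y; 0, 2) is available in closed form for comparison.
y_obs = 0.5
theta_m = rng.normal(0.0, 1.0, size=200_000)        # theta_m ~ p(theta)
Z_mc = normal_pdf(y_obs, theta_m, 1.0).mean()       # (1/M) sum_m p(y | theta_m)
Z_exact = normal_pdf(y_obs, 0.0, 2.0)
```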
## Data Likelihood

We are interested in extreme values, so it is natural to use some of the distributions that are readily available for extreme values.
### GEVD

The first option is the generalized extreme value distribution (GEVD).
The cumulative density function (CDF) of the GEVD is given by
$$
\boldsymbol{F}(y;\boldsymbol{\theta}) =
\exp
\left[ -\boldsymbol{t}(y;\boldsymbol{\theta}) \right]
$$

where the function $\boldsymbol{t}(y;\boldsymbol{\theta})$ is defined as
$$
\boldsymbol{t}(y;\boldsymbol{\theta}) =
\begin{cases}
\left[ 1 + \kappa \left( \frac{y-\mu}{\sigma} \right)\right]_+^{-1/\kappa}, && \kappa\neq 0 \\
\exp\left(-\frac{y-\mu}{\sigma}\right), && \kappa=0
\end{cases}
$$

#### Parameters

For this distribution, we have the following free parameters
$$
\boldsymbol{\theta}_\text{GEVD} =
\left\{ \mu, \sigma, \kappa\right\}
$$

#### Log Probability

If we have a set of observations, we can maximize the log probability of the observations.
We can define the probability density function
$$
p_\text{GEVD}(y;\boldsymbol{\theta}) =
\frac{1}{\sigma}t\left(y;\boldsymbol{\theta}\right)^{\kappa+1}e^{-t\left(y;\boldsymbol{\theta}\right)}
$$

where $t(y;\boldsymbol{\theta})$ is defined in (12).
Subsequently, we can take the log-probability to get a loss function.
$$
\log p(\boldsymbol{y}_{1:N}|\boldsymbol{\theta}) =
- N_b \log \sigma -
(1+1/\kappa)\sum_{n=1}^{N_b}
\log \left[ 1 + \kappa z_n\right]_+
-
\sum_{n=1}^{N_b}
\left[ 1 + \kappa z_n\right]_+^{-1/\kappa}
$$

where $z_n=(y_n - \mu)/\sigma$, $[1 + \kappa z_n]_+ = \max(1 + \kappa z_n,0)$, and $N_b$ is the number of blocks.
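A minimal numpy implementation of the negative of this log-likelihood (for minimization), covering only the $\kappa\neq 0$ branch; the simulated block maxima and parameter values are illustrative:

```python
import numpy as np

def gevd_neg_log_likelihood(y, mu, sigma, kappa):
    """Negative GEVD log-likelihood (kappa != 0 branch only).

    Follows the block-maxima expression above with z_n = (y_n - mu) / sigma
    and [1 + kappa z_n]_+ = max(1 + kappa z_n, 0); points outside the
    support receive an infinite penalty.
    """
    z = (y - mu) / sigma
    s = 1.0 + kappa * z
    if np.any(s <= 0.0):
        return np.inf
    n_b = y.shape[0]
    return (n_b * np.log(sigma)
            + (1.0 + 1.0 / kappa) * np.sum(np.log(s))
            + np.sum(s ** (-1.0 / kappa)))

# Synthetic block maxima by inverse-CDF sampling:
# F = exp(-t), t = [1 + kappa z]^(-1/kappa)
#   =>  y = mu + sigma/kappa * ((-log u)^(-kappa) - 1),  u ~ Uniform(0, 1).
rng = np.random.default_rng(0)
mu0, sigma0, kappa0 = 10.0, 2.0, 0.1
u = rng.uniform(size=500)
y_sim = mu0 + sigma0 / kappa0 * ((-np.log(u)) ** (-kappa0) - 1.0)
nll = gevd_neg_log_likelihood(y_sim, mu0, sigma0, kappa0)
```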
### GPD

Another option is the generalized Pareto distribution (GPD).
This is a peak-over-threshold (POT) method, which models the events conditioned on exceeding a given threshold, $y_0$.
$$
p(Y\leq y|y\geq y_0) := F(y;\boldsymbol{\theta})
$$

We can define the CDF as:
$$
\boldsymbol{F}(y;\boldsymbol{\theta}) =
\begin{cases}
1 - \left[ 1 + \kappa^* \left( \frac{y-y_0}{\sigma^*} \right)\right]^{-1/\kappa^*}, && \kappa^*\neq 0 \\
1 - \exp\left(-\frac{y-y_0}{\sigma^*}\right), && \kappa^*=0
\end{cases}
$$

#### Parameters

The free parameters available for this distribution are
$$
\boldsymbol{\theta}_\text{GPD} =
\left\{ y_0, \sigma^*, \kappa^* \right\}
$$

The threshold parameter, $y_0$, needs to be decided before trying to fit the other two parameters.
If there are no strong prior beliefs about the threshold, we typically choose a high quantile, e.g., ≥95%.
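A sketch of this threshold choice on synthetic data (a Gumbel sample stands in for observed daily maxima):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.gumbel(loc=10.0, scale=2.0, size=5000)   # stand-in for observed maxima

# Choose the threshold as a high quantile (here the 95th percentile),
# then keep only the exceedances used to fit the GPD.
y0 = np.quantile(y, 0.95)
exceedances = y[y >= y0]
N_y0 = exceedances.shape[0]
```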
The remaining free parameters can be directly related to the GEVD parameters in equation (13) like so
$$
\begin{aligned}
\sigma^* &= \sigma + \kappa (y_0 - \mu) \\
\kappa^* &= \kappa
\end{aligned}
$$

#### Log Probability

If we have a set of observations, we can maximize the log probability of the observations.
We can define the probability density function for the GPD as
$$
\begin{aligned}
p_\text{GPD}(y;\boldsymbol{\theta}) =
\frac{1}{\sigma^*}\left[ 1 + \kappa^* \left( \frac{y-y_0}{\sigma^*} \right)\right]^{-\frac{1}{\kappa^*} - 1}_+
\end{aligned}
$$

where $a_+=\max(a,0)$.
Subsequently, taking the log will result in
$$
\log p_\text{GPD}(\boldsymbol{y}_{1:N_{y_0}}|\boldsymbol{\theta}) =
- N_{y_0} \log \sigma^* -
(1+1/\kappa^*)\sum_{n=1}^{N_{y_0}}
\log \left[ 1 + \kappa^* z_n\right]_+
$$

where $z_n=(y_n - y_0)/\sigma^*$, $[1 + \kappa^* z_n]_+ = \max(1 + \kappa^* z_n,0)$, and $N_{y_0}$ is the number of exceedances above the threshold, $y_0$.
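The corresponding negative log-likelihood, again only the $\kappa^*\neq 0$ branch, with synthetic exceedances drawn by inverse-CDF sampling; the parameter values are illustrative:

```python
import numpy as np

def gpd_neg_log_likelihood(y_exc, y0, sigma_star, kappa_star):
    """Negative GPD log-likelihood over exceedances (kappa* != 0 branch).

    Mirrors the expression above with z_n = (y_n - y0) / sigma* and the
    positive-part bracket enforcing the support.
    """
    z = (y_exc - y0) / sigma_star
    s = 1.0 + kappa_star * z
    if np.any(s <= 0.0):
        return np.inf
    n = y_exc.shape[0]
    return n * np.log(sigma_star) + (1.0 + 1.0 / kappa_star) * np.sum(np.log(s))

# Synthetic exceedances by inverse-CDF sampling of the survival function:
# 1 - F = [1 + kappa* z]^(-1/kappa*)
#   =>  y = y0 + sigma*/kappa* * (u^(-kappa*) - 1),  u ~ Uniform(0, 1).
rng = np.random.default_rng(0)
y0, sigma_star, kappa_star = 5.0, 1.5, 0.2
u = rng.uniform(size=400)
y_exc = y0 + sigma_star / kappa_star * (u ** (-kappa_star) - 1.0)
nll = gpd_neg_log_likelihood(y_exc, y0, sigma_star, kappa_star)
```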
### Rate

The rate parameter is defined as the expected number of events per event period, $T$.
$$
\begin{aligned}
\lambda_{y_0} &=
\left[ 1 + \kappa z \right]^{- \frac{1}{\kappa}}, && &&
z = (y_0 - \mu)/\sigma
\end{aligned}
$$
This parameterization is useful for both the GEVD and the GPD .
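A direct numpy translation of the rate, with arbitrary illustrative GEVD parameters:

```python
import numpy as np

def exceedance_rate(y0, mu, sigma, kappa):
    """lambda_{y0} = [1 + kappa * (y0 - mu) / sigma]^(-1/kappa)."""
    z = (y0 - mu) / sigma
    return (1.0 + kappa * z) ** (-1.0 / kappa)

# Arbitrary GEVD parameters; the rate shrinks as the threshold rises.
rate_low = exceedance_rate(12.0, mu=10.0, sigma=2.0, kappa=0.1)
rate_high = exceedance_rate(16.0, mu=10.0, sigma=2.0, kappa=0.1)
```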
### Return Period

For the GEVD, we have the return period defined as:
$$
y =
\begin{cases}
\mu + \frac{\sigma}{\kappa}\left\{\left[-\log\left(1-1/T_R\right)\right]^{-\kappa}-1\right\}, && \kappa\neq 0 \\
\mu - \sigma \log \left[ - \log \left(1 - 1/T_R \right) \right], && \kappa=0
\end{cases}
$$

For the GPD, we have the return period defined as:
$$
y =
\begin{cases}
y_0 + \frac{\sigma^*}{\kappa} \left[ (\lambda_{y_0} T_R)^{\kappa} - 1 \right], &&
\kappa \neq 0 \\
y_0 + \sigma^* \log (\lambda_{y_0} T_R), &&
\kappa = 0
\end{cases}
$$

## Process Parameterization

Concretely, we collect the distribution parameters into a single latent variable.
$$
\begin{aligned}
\text{Latent Variable}: && &&
\boldsymbol{\theta} :=
\mathbf{z} &=
\begin{bmatrix}
\boldsymbol{z}_{\boldsymbol{\mu}} \\
\boldsymbol{z}_{\boldsymbol{\sigma}} \\
\boldsymbol{z}_{\boldsymbol{\kappa}}
\end{bmatrix}
&& &&
\mathbf{z} \in \mathbb{R}^{3 D_\Omega}
\end{aligned}
$$

We define functions for each of these latent variables (which are input parameters for the respective methods).
$$
\begin{aligned}
\text{Location Parameter}: && &&
\boldsymbol{z}_{\boldsymbol{\mu}} &\approx
\boldsymbol{z}_{\boldsymbol{\mu}}(\mathbf{s},x;\boldsymbol{\theta}) , && &&
\boldsymbol{z}_{\boldsymbol{\mu}}: \mathbb{R}^{D_s}\times\mathbb{R}^{D_x}\times\Theta
\rightarrow
\mathbb{R}^{D_\Omega} \\
\text{Scale Parameter}: && &&
\boldsymbol{z}_{\boldsymbol{\sigma}} &\approx
\boldsymbol{z}_{\boldsymbol{\sigma}}(\mathbf{s},x;\boldsymbol{\theta}) , && &&
\boldsymbol{z}_{\boldsymbol{\sigma}}: \mathbb{R}^{D_s}\times\mathbb{R}^{D_x}\times\Theta
\rightarrow
\mathbb{R}^{D_\Omega} \\
\text{Shape Parameter}: && &&
\boldsymbol{z}_{\boldsymbol{\kappa}} &\approx
\boldsymbol{z}_{\boldsymbol{\kappa}}(\mathbf{s},x;\boldsymbol{\theta}) , && &&
\boldsymbol{z}_{\boldsymbol{\kappa}}: \mathbb{R}^{D_s}\times\mathbb{R}^{D_x}\times\Theta
\rightarrow
\mathbb{R}^{D_\Omega}
\end{aligned}
$$

The hypothesis is that the location parameter for the extremes distribution will be correlated with the GMST covariate.
We also conjecture that the location parameters across stations are highly correlated.
However, we do not assume the same for the scale and shape parameters.
### Latent Parameter

$$
\begin{aligned}
\text{Latent Variable}: && &&
\mathbf{z}(t,\boldsymbol{\theta})
&=
\mathbf{z}_0
+
\mathbf{z}_1\psi(t;\boldsymbol{\theta})
+
\mathbf{z}_2\psi(\mathbf{s};\boldsymbol{\theta})
+
\epsilon
\end{aligned}
$$

### Location Parameters

We have a range of use cases for the location parameter.
$$
\begin{aligned}
\text{Location}: && &&
\mathbf{z}_{\boldsymbol{\mu}}
&=
\boldsymbol{\mu}_0
+
\mu_1 \phi(t;\boldsymbol{\theta})
+
\mu_2 \phi(\mathbf{s};\boldsymbol{\theta})
+
\epsilon
\end{aligned}
$$

where $\mathbf{z}_{\boldsymbol{\mu}}\in\mathbb{R}^{D_\Omega}$.
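A hypothetical numpy sketch of this linear model for the location parameter; the basis functions $\phi$ and all coefficient values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D_omega = 10                                   # number of stations

# Hypothetical inputs: phi_t is the (scalar) temporal basis evaluated at t,
# e.g. the GMST covariate, and phi_s a per-station spatial basis.
phi_t = 0.8
phi_s = rng.normal(size=D_omega)

mu0 = np.full(D_omega, 10.0)                   # station-wise intercepts
mu1, mu2 = 1.5, 0.5                            # shared regression weights
eps = rng.normal(scale=0.01, size=D_omega)     # noise term

z_mu = mu0 + mu1 * phi_t + mu2 * phi_s + eps   # location parameters, (D_omega,)
```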
### Scale & Shape Parameters

For the scale and shape parameters, we impose only a constant form.
We allow the scale and shape parameters to be different for each station.
$$
\begin{aligned}
\text{Scale}: && &&
\log \mathbf{z}_{\boldsymbol{\sigma}} &= \boldsymbol{\sigma}_0,
&& && \boldsymbol{\sigma}_0 \in \mathbb{R}^{D_\Omega} \\
\text{Shape}: && &&
\mathbf{z}_{\boldsymbol{\kappa}} &= \boldsymbol{\kappa}_0,
&& && \boldsymbol{\kappa}_0 \in \mathbb{R}^{D_\Omega}
\end{aligned}
$$

We use the $\log$ transformation to ensure that the scale parameter is positive.
The DMT is formulated as an ordinary differential equation (ODE).
First, we will define it as a system of ODEs whereby we have a state variable
$$
\begin{aligned}
\text{State}: && &&
\mathbf{z} &=
\begin{bmatrix}
x \\ y
\end{bmatrix}, && &&
\mathbf{z}\in\mathbb{R}^2
\end{aligned}
$$

Now, we can define an equation of motion which describes the temporal dynamics of the system.
$$
\begin{aligned}
\text{Equation of Motion}: && &&
\frac{d\mathbf{z}}{dt} &= \boldsymbol{f}(\mathbf{z},t,\theta),
&& &&
\boldsymbol{f}:\mathbb{R}^2 \times \mathbb{R}^+ \times \Theta \rightarrow \mathbb{R}^2
\end{aligned}
$$

We also have initial measurements of the system
$$
\begin{aligned}
\text{Initial Values}: && &&
\mathbf{z}(0) &=
\begin{bmatrix}
x(0) \\ y(0)
\end{bmatrix}
:=
\mathbf{z}_0
\end{aligned}
$$

From the fundamental theorem of calculus, we know that the solution of said ODE is a temporal integration wrt time
$$
\begin{aligned}
\text{TimeStepper}: && &&
\mathbf{z}_t = \mathbf{z}_0 + \int_0^t \boldsymbol{f}(\mathbf{z}_\tau, \tau, \theta)d\tau
\end{aligned}
$$

Conventionally, we approximate this integral with ODE solvers like Euler, Heun, or Runge-Kutta.
$$
\begin{aligned}
\text{ODESolver}: && &&
\mathbf{z}_t = \text{ODESolve}(\boldsymbol{f}, \mathbf{z}_0, t, \theta)
\end{aligned}
$$

### Non-Dimensionalization

We will reparameterize this ODE to remove some dependencies on time.
Applying the chain rule, we divide the above equation by $dx/dt$ to change the independent variable from time to the covariate:

$$
\frac{dy}{dt}\frac{dt}{dx}
= f(y,x,\theta)
$$

### Parameterization

There are many special forms of ODEs which are known from the literature.
$$
\begin{aligned}
\text{1st Order ODE}: && &&
\boldsymbol{f}(y,x,\theta) &=
\boldsymbol{f}_1(x) - \boldsymbol{f}_2(x)\cdot y
\end{aligned}
$$

An example form would be the following:
$$
\boldsymbol{f}(y,x,\theta) =
a_0 + a_1 x + a_2 y
$$

**Constant Form.**
The first form assumes that we have a constant change in DMT wrt the GMST
$$
\begin{aligned}
\text{Constant}: && &&
\boldsymbol{f}(y,x,\theta)
&=
a_0 \\
\text{Linear Solution}: && &&
y(x) &=
y_0 + a_0 x
\end{aligned}
$$

**Linear Form.**
The second form assumes that we have a linear change in DMT wrt the GMST
$$
\begin{aligned}
\text{Linear}: && &&
\boldsymbol{f}(y,x,\theta)
&=
a_0 + a_1 x\\
\text{Quadratic Solution}: && &&
y(x) &=
y_0 + a_0 x + \frac{1}{2}a_1x^2
\end{aligned}
$$

**Multiplicative Form.**
The third form assumes that the change in DMT is proportional to the DMT itself
$$
\begin{aligned}
\text{Multiplicative}: && &&
\boldsymbol{f}(y,x,\theta)
&=
a_2 y\\
\text{Exponential Solution}: && &&
y(x) &=
y_0 \exp \left( a_2x \right)
\end{aligned}
$$
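As a sanity check, each closed-form solution above can be verified against a forward-Euler integration of its ODE form (the coefficients $a_0, a_1, a_2$ are arbitrary):

```python
import numpy as np

def integrate_dy_dx(f, y0, x_grid):
    """Forward-Euler integration of dy/dx = f(y, x) over x_grid."""
    y = y0
    for x0, x1 in zip(x_grid[:-1], x_grid[1:]):
        y = y + (x1 - x0) * f(y, x0)
    return y

x_grid = np.linspace(0.0, 2.0, 20001)
y0 = 1.0
a0, a1, a2 = 0.3, 0.2, 0.4

# Constant form -> linear solution.
y_const = integrate_dy_dx(lambda y, x: a0, y0, x_grid)
# Linear form -> quadratic solution.
y_lin = integrate_dy_dx(lambda y, x: a0 + a1 * x, y0, x_grid)
# Multiplicative form -> exponential solution.
y_mult = integrate_dy_dx(lambda y, x: a2 * y, y0, x_grid)

assert abs(y_const - (y0 + a0 * 2.0)) < 1e-3
assert abs(y_lin - (y0 + a0 * 2.0 + 0.5 * a1 * 2.0 ** 2)) < 1e-3
assert abs(y_mult - y0 * np.exp(a2 * 2.0)) < 1e-3
```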