Formulation#
Trade-offs#
Pros#
Mesh-Free
Lots of Data
Cons#
Transfer Learning
Data#
Model#
Architectures#
We are interested in the case of regression. We have the following generalized architecture:

$$
\boldsymbol{f}^\ell(\mathbf{x}) = \text{NN}\left(\boldsymbol{\phi}(\mathbf{x};\boldsymbol{\gamma});\boldsymbol{\theta}\right), \qquad
\boldsymbol{f}(\mathbf{x}) = \boldsymbol{f}^L \circ \boldsymbol{f}^{L-1} \circ \cdots \circ \boldsymbol{f}^{1}(\mathbf{x})
$$

where \(\boldsymbol{\phi}\) is the basis transformation with some hyperparameters \(\boldsymbol{\gamma}\), \(\text{NN}\) is the neural network layer parameterized by \(\boldsymbol{\theta}\), and we have \(L\) layers, \(\ell \in \{1, 2, \ldots, L\}\).
Standard Neural Network#
In the standard neural network, we typically have the following standard functions:

$$
\boldsymbol{\phi}(\mathbf{x}) = \mathbf{x}, \qquad
\text{NN}(\mathbf{x};\boldsymbol{\theta}) = \sigma\left(\mathbf{W}\mathbf{x} + \mathbf{b}\right)
$$

So more explicitly, we can write it as:

$$
\boldsymbol{f}(\mathbf{x}) = \mathbf{W}^L\left(\boldsymbol{f}^{L-1} \circ \cdots \circ \boldsymbol{f}^{1}(\mathbf{x})\right) + \mathbf{b}^L, \qquad
\boldsymbol{f}^\ell(\mathbf{x}) = \sigma\left(\mathbf{W}^\ell\mathbf{x} + \mathbf{b}^\ell\right)
$$

where \(\ell \in \{1, 2, \ldots, L-1\}\). Notably:
The basis transformation \(\boldsymbol{\phi}\) is the identity (i.e. there is no basis function transformation)
Each hidden layer is the standard neural network layer, i.e. a linear function followed by a nonlinear activation function
The final layer is always a linear function (in regression; classification would have a sigmoid)
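A minimal sketch of this standard architecture (assuming a PyTorch implementation; the tanh activation and the layer sizes are placeholders):

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Standard neural network: identity basis, (L-1) hidden layers, linear output."""

    def __init__(self, dim_in: int = 2, dim_hidden: int = 128, dim_out: int = 1, num_hidden: int = 3):
        super().__init__()
        layers = []
        for i in range(num_hidden):
            layers += [nn.Linear(dim_in if i == 0 else dim_hidden, dim_hidden), nn.Tanh()]
        layers += [nn.Linear(dim_hidden, dim_out)]  # final layer is linear (regression)
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# example: map spatial coordinates (x, y) to a scalar field value
model = MLP(dim_in=2, dim_out=1)
coords = torch.rand(16, 2)
out = model(coords)  # (16, 1)
```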
Positional Encoding#
Fourier Features#
| Kernel | Distribution |
|---|---|
| Gaussian | \(\mathcal{N}(\mathbf{0},\frac{1}{\sigma^2}\mathbf{I}_r)\) |
| Laplacian | \(\text{Cauchy}()\) |
| Cauchy | \(\text{Laplace}()\) |
| Matern | \(\text{Bessel}()\) |
| ArcCosine | |
Alternative Formulation#
$$
\boldsymbol{\phi}(\mathbf{x}) = \sqrt{\frac{2}{D}}\cos\left(\boldsymbol{\omega}^\top\mathbf{x} + \boldsymbol{b}\right)
$$

where \(\boldsymbol{\omega} \sim p(\boldsymbol{\omega})\), \(\boldsymbol{b} \sim \mathcal{U}(0,2\pi)\), and \(D\) is the number of random features.
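A minimal sketch of this random feature map (assuming a Gaussian kernel, so \(p(\boldsymbol{\omega})\) is the Gaussian from the table above; the function name, length scale, and feature count are placeholders):

```python
import numpy as np

def random_fourier_features(x, num_features: int = 256, length_scale: float = 1.0, seed: int = 0):
    """Map inputs x of shape (N, d) to random Fourier features of shape (N, num_features)
    approximating a Gaussian (RBF) kernel."""
    rng = np.random.default_rng(seed)
    d = x.shape[-1]
    # frequencies sampled from the kernel's spectral density, offsets from U(0, 2*pi)
    omega = rng.normal(loc=0.0, scale=1.0 / length_scale, size=(d, num_features))
    b = rng.uniform(0.0, 2 * np.pi, size=(num_features,))
    return np.sqrt(2.0 / num_features) * np.cos(x @ omega + b)

x = np.random.rand(100, 2)        # e.g. (lon, lat) coordinates
phi = random_fourier_features(x)  # (100, 256)
```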
Sources:
Blog - Gregory Gundersen
Random Features for Large-Scale Kernel Machines - Rahimi & Recht (2008) - Paper
Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond - Liu et al (2021)
Scalable Kernel Methods via Doubly Stochastic Gradients - Dai et al (2015)
SIREN#
pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis - Chan et al (2021)
COIN++: Neural Compression Across Modalities - Dupont et al (2022)
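A rough sketch of a SIREN-style layer, i.e. a linear layer followed by a sine activation with a frequency scaling \(\omega_0\) (the uniform initialization bounds and the default \(\omega_0 = 30\) follow the SIREN paper; the remaining names and sizes are placeholders):

```python
import math
import torch
import torch.nn as nn

class SirenLayer(nn.Module):
    """Linear layer followed by a sine activation: sin(w0 * (W x + b))."""

    def __init__(self, dim_in: int, dim_out: int, w0: float = 30.0, is_first: bool = False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(dim_in, dim_out)
        # SIREN uniform initialization: U(-1/n, 1/n) for the first layer,
        # U(-sqrt(6/n)/w0, sqrt(6/n)/w0) for the remaining layers
        bound = 1.0 / dim_in if is_first else math.sqrt(6.0 / dim_in) / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.w0 * self.linear(x))

siren = nn.Sequential(
    SirenLayer(2, 128, is_first=True),
    SirenLayer(128, 128),
    nn.Linear(128, 1),  # final layer stays linear for regression
)
out = siren(torch.rand(16, 2))  # (16, 1)
```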
Extended#
$$
\boldsymbol{f}^\ell(\mathbf{x}) = \sin\left(\boldsymbol{\gamma}^\ell \odot \left(\mathbf{W}^\ell\mathbf{x} + \mathbf{b}^\ell\right) + \boldsymbol{\beta}^\ell\right)
$$

where \(\boldsymbol{\gamma}\) corresponds to the frequencies and \(\boldsymbol{\beta}\) corresponds to the phase shifts.
Modulation#
Modulation is

$$
\boldsymbol{f}^\ell(\mathbf{x},\mathbf{z};\boldsymbol{\theta}) := \boldsymbol{h}_M^\ell\left(\,\text{NN}(\mathbf{x};\boldsymbol{\theta}_{NN})\,,\; \text{M}(\mathbf{z};\boldsymbol{\theta}_{M})\,\right)
$$

where \(\text{NN}\) is the output of the neural network with respect to the input \(\mathbf{x}\), \(\text{M}\) is the output of the modulation function with respect to the latent variable \(\mathbf{z}\), and \(\boldsymbol{h}_M^\ell\) is an arbitrary operator that combines the two.
Additive Layer
Affine Layer
Neural Implicit Flows
Neural Flows
FiLM, 2020
Mehta, 2021
Dupont, 2022
Neural Implicit Flows - Pan, 2022
Affine Modulation#
Affine Modulations
Shift Modulations
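A minimal sketch contrasting the two (assuming a FiLM-style setup where small linear maps turn the latent \(\mathbf{z}\) into per-feature scales and shifts for a sine layer; all names and sizes are placeholders):

```python
import torch
import torch.nn as nn

class ModulatedSineLayer(nn.Module):
    """Sine layer whose hidden features are modulated by a latent code z."""

    def __init__(self, dim_in: int, dim_hidden: int, dim_latent: int, w0: float = 30.0, affine: bool = True):
        super().__init__()
        self.w0, self.affine = w0, affine
        self.linear = nn.Linear(dim_in, dim_hidden)
        self.shift = nn.Linear(dim_latent, dim_hidden)                       # shift modulation
        self.scale = nn.Linear(dim_latent, dim_hidden) if affine else None   # extra scale for affine modulation

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        h = self.linear(x)
        if self.affine:
            h = self.scale(z) * h + self.shift(z)   # affine modulation: gamma(z) * h + beta(z)
        else:
            h = h + self.shift(z)                   # shift modulation: h + beta(z)
        return torch.sin(self.w0 * h)

layer = ModulatedSineLayer(dim_in=2, dim_hidden=128, dim_latent=16)
x, z = torch.rand(16, 2), torch.rand(16, 16)
out = layer(x, z)  # (16, 128)
```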
Neural Implicit Flows#
In this work, we have a version of the modulated SIREN mentioned above; however, the space and time neural networks are separated:

$$
\boldsymbol{f}(\mathbf{x}_\phi, t) = \text{NN}_{space}\left(\mathbf{x}_\phi;\text{NN}_{time}(t)\right)
$$
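A rough, self-contained sketch of this separation (one possible reading, not the exact architecture of the paper: the time network produces a code that shift-modulates the hidden layer of the space network; all sizes are placeholders):

```python
import torch
import torch.nn as nn

class SpaceTimeField(nn.Module):
    """f(x, t) = NN_space(x; NN_time(t)): the time network outputs a code
    that shift-modulates the hidden layer of the spatial network."""

    def __init__(self, dim_space: int = 2, dim_hidden: int = 128):
        super().__init__()
        self.time_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, dim_hidden))
        self.space_in = nn.Linear(dim_space, dim_hidden)
        self.head = nn.Linear(dim_hidden, 1)

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        shift = self.time_net(t)                 # modulation comes from time only
        h = torch.sin(self.space_in(x) + shift)  # shift-modulated sine layer over space
        return self.head(h)

field = SpaceTimeField()
x, t = torch.rand(16, 2), torch.rand(16, 1)
out = field(x, t)  # (16, 1)
```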
Multiplicative Filter Networks#
$$
\begin{aligned}
\mathbf{z}^{(1)} &= \boldsymbol{g}\left(\mathbf{x};\boldsymbol{\gamma}^{(1)}\right) \\
\mathbf{z}^{(k+1)} &= \left(\mathbf{W}^{(k)}\mathbf{z}^{(k)} + \mathbf{b}^{(k)}\right) \odot \boldsymbol{g}\left(\mathbf{x};\boldsymbol{\gamma}^{(k+1)}\right) \\
\boldsymbol{f}(\mathbf{x}) &= \mathbf{W}^{(K)}\mathbf{z}^{(K)} + \mathbf{b}^{(K)}
\end{aligned}
$$

where \(k \in \{1, 2, \ldots, K-1\}\) and \(\boldsymbol{g}(\cdot\,;\boldsymbol{\gamma})\) is a nonlinear filter function applied directly to the input.
Non-Linear Functions#
FourierNet
This method corresponds to the random Fourier feature transformation:

$$
\boldsymbol{g}\left(\mathbf{x};\boldsymbol{\gamma}^{(k)}\right) = \sin\left(\boldsymbol{\omega}^{(k)}\mathbf{x} + \boldsymbol{\beta}^{(k)}\right)
$$

where the parameters to be learned are the frequencies and phase shifts, \(\boldsymbol{\gamma}^{(k)} = \left\{\boldsymbol{\omega}^{(k)}, \boldsymbol{\beta}^{(k)}\right\}\).
GaborNet
This method tries to improve upon the Fourier representation. The Fourier representation has global support, so it has more difficulty representing local features. The Gabor filter (see below) is able to capture both a frequency component and a spatial locality component:

$$
\boldsymbol{g}\left(\mathbf{x};\boldsymbol{\gamma}^{(k)}\right) = \exp\left(-\frac{\boldsymbol{\alpha}^{(k)}}{2}\left\|\mathbf{x} - \boldsymbol{\mu}^{(k)}\right\|_2^2\right)\sin\left(\boldsymbol{\omega}^{(k)}\mathbf{x} + \boldsymbol{\beta}^{(k)}\right)
$$

where the parameters to be learned are the scales, centers, frequencies and phase shifts, \(\boldsymbol{\gamma}^{(k)} = \left\{\boldsymbol{\alpha}^{(k)}, \boldsymbol{\mu}^{(k)}, \boldsymbol{\omega}^{(k)}, \boldsymbol{\beta}^{(k)}\right\}\).
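A minimal sketch of the FourierNet variant, i.e. learnable sinusoidal filters combined multiplicatively (sizes and names are placeholders):

```python
import torch
import torch.nn as nn

class FourierFilter(nn.Module):
    """Learnable sinusoidal filter g(x) = sin(omega @ x + beta)."""

    def __init__(self, dim_in: int, dim_hidden: int):
        super().__init__()
        self.linear = nn.Linear(dim_in, dim_hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.linear(x))

class FourierNet(nn.Module):
    """Multiplicative filter network: z_{k+1} = (W_k z_k + b_k) * g_{k+1}(x)."""

    def __init__(self, dim_in: int = 2, dim_hidden: int = 128, dim_out: int = 1, num_layers: int = 3):
        super().__init__()
        self.filters = nn.ModuleList([FourierFilter(dim_in, dim_hidden) for _ in range(num_layers)])
        self.linears = nn.ModuleList([nn.Linear(dim_hidden, dim_hidden) for _ in range(num_layers - 1)])
        self.head = nn.Linear(dim_hidden, dim_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.filters[0](x)
        for linear, filt in zip(self.linears, self.filters[1:]):
            z = linear(z) * filt(x)  # multiplicative combination of filter and hidden state
        return self.head(z)

model = FourierNet()
out = model(torch.rand(16, 2))  # (16, 1)
```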
Sources:
Reimplementation (PyTorch) - BoschResearch
Probabilistic#
Deterministic#
Normalizing Flows#
Bayesian#
Random Feature Expansions (RFEs)
Physics Constraints#
Mass#
Momentum#
QG Equations#
Applications#
Interpolation#
Surrogate Modeling#
Sampling#
Feature Engineering#
Spatial Features#
For the spatial features, we have spherical coordinates (i.e. longitude and latitude), which are transformed to Cartesian coordinates:

$$
x = r\cos\lambda\cos\phi, \qquad
y = r\cos\lambda\sin\phi, \qquad
z = r\sin\lambda
$$

where \(\lambda\) is the latitude, \(\phi\) is the longitude and \(r\) is the radius. For a unit radius, \(x,y,z\) are bounded between \(-1\) and \(1\).
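A minimal sketch of this spatial feature transform (assuming latitude/longitude given in degrees and a unit radius):

```python
import numpy as np

def spherical_to_cartesian(lat_deg: np.ndarray, lon_deg: np.ndarray, radius: float = 1.0):
    """Convert latitude/longitude (degrees) into (x, y, z) coordinates on a sphere."""
    lat, lon = np.deg2rad(lat_deg), np.deg2rad(lon_deg)
    x = radius * np.cos(lat) * np.cos(lon)
    y = radius * np.cos(lat) * np.sin(lon)
    z = radius * np.sin(lat)
    return np.stack([x, y, z], axis=-1)

coords = spherical_to_cartesian(np.array([0.0, 45.0]), np.array([-30.0, 120.0]))  # (2, 3)
```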
Temporal Features#
Tanh#
Fourier Features#
Sinusoidal Positional Encoding#
$$
\text{PE}(t, 2k) = \sin(\boldsymbol{\omega}_k t), \qquad
\text{PE}(t, 2k+1) = \cos(\boldsymbol{\omega}_k t)
$$

where

$$
\boldsymbol{\omega}_k = \frac{1}{10000^{2k/d}}
$$

and \(d\) is the dimension of the encoding.
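A minimal sketch of this encoding applied to a scalar time coordinate (the encoding dimension is a placeholder):

```python
import numpy as np

def sinusoidal_encoding(t: np.ndarray, dim: int = 16):
    """Encode scalar times t of shape (N,) as sin/cos features of shape (N, dim)."""
    k = np.arange(dim // 2)
    omega = 1.0 / (10_000 ** (2 * k / dim))   # frequencies omega_k
    angles = t[:, None] * omega[None, :]      # (N, dim // 2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

features = sinusoidal_encoding(np.linspace(0.0, 1.0, 5))  # (5, 16)
```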
Sources:
Transformer Architecture: The Positional Encoding - Amirhossein - Blog
Position Information in Transformers: An Overview - Dufter et al (2021) - Arxiv Paper
Rethinking Positional Encoding - Zheng et al (2021) - Arxiv Paper
Self-Attention with Functional Time Representation Learning - Xu et al (2019) - Arxiv Paper
Attention is all you need. A Transformer Tutorial: 5. Positional Encoding - Video
Experiments#
Initial Conditions#
Training Time, Convergence
Random Initialization
Feature-Wise Interpolation
PyInterp (2D)
Markovian Gaussian Process (MGP)
Optimal Interpolation (OI)
Iterative Schemes#
Speed, Accuracy, PreTraining
Projection-Based
Gradient-Based
Fixed-Point Iteration
Anderson Acceleration
CNN + Gradient
LSTM
Priors#
The impact of the priors on the learning procedure.
Deterministic
Probabilistic
Deterministic#
ODE (Fixed)
PCA (Fixed)
ODE (Learnable)
PCA (Learnable)
UNet
Probabilistic#
UNet + DropOut
Probabilistic UNet