Fourier Pseudospectral Methods¶
Fourier pseudospectral methods are the workhorse of computational fluid dynamics for problems with periodic boundary conditions. They achieve spectral (exponential) accuracy for smooth, periodic functions and are among the most efficient methods available per degree of freedom.
1. Mathematical Setup¶
Consider a \(2\pi\)-periodic function \(u : [0, 2\pi) \to \mathbb{R}\). The Fourier series representation is:

\[
u(x) = \sum_{k=-\infty}^{\infty} \hat{u}_k \, e^{ikx},
\]

where the Fourier coefficients are:

\[
\hat{u}_k = \frac{1}{2\pi} \int_0^{2\pi} u(x) \, e^{-ikx} \, dx.
\]
For a domain \([0, L)\), the wavenumbers are rescaled: \(k \to 2\pi k / L\).
2. The Discrete Fourier Transform (DFT)¶
Given \(N\) uniformly spaced grid points \(x_j = j \Delta x\), \(j = 0, \ldots, N-1\), with \(\Delta x = L/N\), the Discrete Fourier Transform is:

\[
\hat{u}_k = \sum_{j=0}^{N-1} u_j \, e^{-2\pi i \, jk/N},
\]

and the Inverse DFT is:

\[
u_j = \frac{1}{N} \sum_{k} \hat{u}_k \, e^{2\pi i \, jk/N}.
\]
The DFT can be computed in \(O(N \log N)\) time via the Fast Fourier Transform (FFT).
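As a quick sanity check of these conventions, here is a minimal sketch using numpy (whose unnormalized-forward / \(1/N\)-inverse FFT conventions match jax.numpy's):

```python
import numpy as np

# A grid of N points on [0, 2*pi), as defined above.
N, L = 16, 2 * np.pi
x = np.arange(N) * L / N                 # x_j = j * dx
u = np.sin(3 * x)                        # a single Fourier mode, k = 3
u_hat = np.fft.fft(u)                    # unnormalized forward DFT, O(N log N)

# sin(3x) = (e^{i3x} - e^{-i3x}) / (2i), so the normalized coefficients
# at mode indices 3 and N-3 (= -3) are -i/2 and +i/2 respectively.
c3 = u_hat[3] / N
```

Note that numpy's forward transform carries no \(1/N\) factor; the normalization lives entirely in the inverse transform.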
Wavenumbers¶
On a grid with \(N\) points and domain length \(L\), the resolved wavenumbers are:

\[
k_n = \frac{2\pi n}{L}, \qquad n = -\frac{N}{2}+1, \, \ldots, \, \frac{N}{2}.
\]
The Nyquist wavenumber \(k_{N/2} = \pi N / L\) is the highest wavenumber that can be resolved on a grid of \(N\) points.
FFT output ordering
The mathematical index range \(k = -N/2+1, \ldots, N/2\) (centered at zero) differs from the FFT output ordering used by numpy and jax.numpy, which places the non-negative frequencies first:

\[
n_{\text{fft}} = 0, \, 1, \, \ldots, \, \tfrac{N}{2}-1, \, -\tfrac{N}{2}, \, -\tfrac{N}{2}+1, \, \ldots, \, -1.
\]
In SpectralDiffX, FourierGrid1D.k stores the wavenumbers \(k_n = 2\pi n_{\text{fft}} / L\) in
this FFT order (as returned by jnp.fft.fftfreq(N, d=L/N) * 2*pi). When computing derivatives
or inspecting the spectrum, use jnp.fft.fftshift to reorder to the centered representation.
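The ordering is easy to inspect directly; a small numpy sketch (numpy's `fftfreq`/`fftshift` behave identically to the jax.numpy versions):

```python
import numpy as np

N, L = 8, 2 * np.pi
# Wavenumbers in FFT order, as stored by FourierGrid1D.k:
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
# fftshift reorders to the centered representation -N/2, ..., N/2 - 1:
k_centered = np.fft.fftshift(k)
```

With \(L = 2\pi\) the wavenumbers are just the integer mode indices, so `k` is `[0, 1, 2, 3, -4, -3, -2, -1]` and `k_centered` is `[-4, -3, -2, -1, 0, 1, 2, 3]`.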
3. Spectral Differentiation¶
The key insight of spectral differentiation is that differentiation in physical space is equivalent to multiplication by \(ik\) in spectral space. Formally, if:

\[
u(x) = \sum_k \hat{u}_k \, e^{ikx},
\]

then:

\[
u'(x) = \sum_k ik \, \hat{u}_k \, e^{ikx}, \qquad \text{i.e.} \quad \widehat{u'}_k = ik \, \hat{u}_k.
\]

In matrix-free notation, the \(n\)-th derivative is computed as:

\[
\frac{d^n u}{dx^n} = \mathcal{F}^{-1}\!\left[ (ik)^n \, \mathcal{F}[u] \right].
\]
Algorithm:
- Compute \(\hat{u}_k = \text{FFT}(u_j)\)
- Multiply: \(\hat{v}_k = (ik)^n \hat{u}_k\)
- Invert: \(v_j = \text{IFFT}(\hat{v}_k)\)
This is exact up to machine precision for band-limited functions.
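The three steps above can be sketched in a few lines of numpy (the function name `spectral_derivative` is illustrative, not the SpectralDiffX API):

```python
import numpy as np

def spectral_derivative(u, L, order=1):
    """n-th derivative via FFT: v = IFFT((ik)^n * FFT(u))."""
    N = u.shape[0]
    k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # wavenumbers in FFT order
    factor = (1j * k) ** order
    if order % 2 == 1:
        factor[N // 2] = 0.0                     # zero the Nyquist mode for odd orders
    return np.fft.ifft(factor * np.fft.fft(u)).real

N, L = 32, 2 * np.pi
x = np.arange(N) * L / N
# d/dx sin(x) = cos(x), exact to machine precision:
err = np.max(np.abs(spectral_derivative(np.sin(x), L) - np.cos(x)))
```

For a band-limited input like \(\sin x\), `err` is at roundoff level.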
4. Higher-Order Derivatives¶
The second, third, and fourth derivatives follow directly:

\[
\widehat{u''}_k = -k^2 \, \hat{u}_k, \qquad
\widehat{u'''}_k = -ik^3 \, \hat{u}_k, \qquad
\widehat{u''''}_k = k^4 \, \hat{u}_k.
\]

The Laplacian in \(d\) dimensions is:

\[
\widehat{\nabla^2 u}_{\mathbf{k}} = -|\mathbf{k}|^2 \, \hat{u}_{\mathbf{k}}, \qquad |\mathbf{k}|^2 = k_1^2 + \cdots + k_d^2.
\]
Nyquist Mode Treatment¶
At the Nyquist wavenumber \(k = N/2\), the derivative of a real-valued function is ambiguous: the sampled Nyquist mode is a pure cosine, so the odd-derivative coefficient \(ik \, \hat{u}_{N/2}\) would introduce a spurious imaginary component. The standard convention is to zero out the Nyquist mode before differentiating. For odd-order derivatives this is necessary to keep the output real; for even orders it is optional but recommended.
5. Aliasing¶
Aliasing arises when the product of two band-limited signals creates energy at frequencies above the Nyquist limit. Those modes are then misidentified (aliased) as lower-frequency modes.
Consider the product \(w = u \cdot v\), where \(u\) and \(v\) each contain wavenumbers up to the Nyquist limit \(|k| = N/2\). The product contains wavenumbers up to \(|k| = N\), but the grid can only represent \(|k| \le N/2\); the excess folds back onto lower modes.

Concretely, on a grid of \(N\) points, the wavenumbers \(k\) and \(k + N\) are indistinguishable: they take identical values at every grid point. A mode at \(k = 3N/4\) will masquerade as \(k = -N/4\).
This introduces an aliasing error in the computed coefficients:

\[
\hat{w}_k^{\text{aliased}} = \hat{w}_k + \sum_{m \neq 0} \hat{w}_{k + mN}.
\]
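The folding is easy to demonstrate; a minimal numpy sketch:

```python
import numpy as np

N, L = 16, 2 * np.pi
x = np.arange(N) * L / N
# The mode k = 3N/4 = 12 exceeds the Nyquist limit N/2 = 8 of this grid.
u = np.cos(12 * x)
# Its samples are indistinguishable from those of the aliased mode
# k = 12 - 16 = -4 (cos is even, so k = -4 and k = 4 give the same values):
alias = np.cos(4 * x)
# Consequently the DFT places all the energy at mode indices +/-4:
u_hat = np.fft.fft(u) / N
```

The coefficient `u_hat[4]` is \(0.5\), exactly as if the input had been \(\cos(4x)\); the true wavenumber 12 is unrecoverable from the samples.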
6. Dealiasing — The 2/3 Rule¶
The standard remedy is the 2/3 rule (Orszag, 1971): before computing a nonlinear product, truncate the spectrum so that only the lower \(2N/3\) modes are retained. The upper \(N/3\) modes are set to zero.
Why 2/3? After truncation both operands contain only modes with \(|k| < N/3\), so their product contains modes with \(|k| < 2N/3\). Aliasing maps a mode at \(k\) to \(k \mp N\), and the aliased image of any mode with \(N/2 < |k| < 2N/3\) satisfies \(|k \mp N| > N/3\): it lands only on modes that the truncation zeroes anyway. Every retained mode is therefore alias-free.
The dealiasing filter is:

\[
\hat{u}_k \;\leftarrow\;
\begin{cases}
\hat{u}_k, & |k| < N/3, \\[2pt]
0, & |k| \ge N/3,
\end{cases}
\]

where \(k\) denotes the integer mode index.
Padding approach: A closely related formulation, the 3/2 rule, zero-pads the spectrum to \(3N/2\) points, transforms to the finer physical grid, computes the product there, and truncates back to \(N\) modes. On the padded grid the aliased images of the product modes fall outside the retained band, so the result is alias-free without discarding any of the original \(N\) input modes.
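A minimal sketch of the 2/3-rule product in numpy (the helper name `dealiased_product` is illustrative, not the library API):

```python
import numpy as np

def dealiased_product(u, v):
    """Pointwise product with the 2/3-rule truncation applied throughout."""
    N = u.shape[0]
    n = np.fft.fftfreq(N) * N              # integer mode indices in FFT order
    mask = np.abs(n) < N / 3               # keep |k| < N/3, zero the rest
    ut = np.fft.ifft(np.fft.fft(u) * mask).real
    vt = np.fft.ifft(np.fft.fft(v) * mask).real
    w = ut * vt                            # product in physical space
    return np.fft.ifft(np.fft.fft(w) * mask).real

N = 32
x = np.arange(N) * 2 * np.pi / N
# sin(5x)^2 = (1 - cos(10x)) / 2; both input mode 5 and product mode 10
# survive the |k| < 32/3 cutoff, so the result is exact here.
w = dealiased_product(np.sin(5 * x), np.sin(5 * x))
```

The final truncation removes any energy the product placed in the zeroed band, keeping the result safe for subsequent nonlinear operations.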
7. Spectral Filters¶
Beyond dealiasing, spectral filters are used to control numerical instability in under-resolved simulations. Rather than a sharp cutoff, they apply a smooth roll-off to high-frequency modes.
Exponential Filter¶
\[
\sigma(k) = \exp\!\left[ \alpha \left( \frac{|k|}{k_{\max}} \right)^{2p} \right],
\]

where \(\alpha < 0\) controls the strength and \(p\) is the order. Higher \(p\) gives a sharper roll-off. A typical choice is \(\alpha = \ln(\epsilon_{\text{mach}}) \approx -36\) and \(p = 4\), so that \(\sigma(0) = 1\) and \(\sigma(k_{\max}) \approx \epsilon_{\text{mach}}\).
Raised-Cosine Filter¶
\[
\sigma(k) = \frac{1}{2}\left[ 1 + \cos\!\left( \frac{\pi |k|}{k_{\max}} \right) \right].
\]

This provides a smooth transition from passband (\(\sigma \to 1\) as \(k \to 0\)) to stopband (\(\sigma \to 0\) as \(|k| \to k_{\max}\)).
8. The Helmholtz/Poisson Equation in Spectral Space¶
The Helmholtz equation is:

\[
\nabla^2 u - \alpha u = f.
\]

In spectral space, this becomes a diagonal system:

\[
\left( -|\mathbf{k}|^2 - \alpha \right) \hat{u}_{\mathbf{k}} = \hat{f}_{\mathbf{k}}.
\]

Solving:

\[
\hat{u}_{\mathbf{k}} = \frac{\hat{f}_{\mathbf{k}}}{-|\mathbf{k}|^2 - \alpha}.
\]
The \(\mathbf{k} = 0\) mode (mean) is singular for the Poisson equation (\(\alpha = 0\)). The standard treatment is to fix \(\hat{u}_0 = 0\) (zero mean solution).
Algorithm:
- \(\hat{f}_\mathbf{k} = \text{FFT}(f)\)
- \(\hat{u}_\mathbf{k} = \hat{f}_\mathbf{k} / (-|\mathbf{k}|^2 - \alpha)\), with \(\hat{u}_0 = 0\)
- \(u = \text{IFFT}(\hat{u}_\mathbf{k})\)
This is the most efficient spectral solver — it requires only two FFTs and a pointwise division.
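A minimal 1D Poisson (\(\alpha = 0\)) sketch of the algorithm above, in numpy (the function name is illustrative, not the SpectralDiffX solver API):

```python
import numpy as np

def solve_poisson_1d(f, L):
    """Solve u'' = f with periodic BCs, returning the zero-mean solution."""
    N = f.shape[0]
    k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
    f_hat = np.fft.fft(f)
    denom = -k**2
    denom[0] = 1.0               # avoid 0/0 at the singular k = 0 mode
    u_hat = f_hat / denom
    u_hat[0] = 0.0               # fix the mean: zero-mean solution
    return np.fft.ifft(u_hat).real

N, L = 64, 2 * np.pi
x = np.arange(N) * L / N
# u'' = -9 sin(3x) has the zero-mean solution u = sin(3x):
u = solve_poisson_1d(-9 * np.sin(3 * x), L)
```

Two FFTs and one pointwise division, exactly as the algorithm lists; the compatibility condition \(\hat{f}_0 = 0\) (zero-mean forcing) is assumed here.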
Non-periodic boundary conditions
The FFT-based solver above assumes periodic BCs. For Dirichlet (\(\psi = 0\)) or Neumann (\(\partial\psi/\partial n = 0\)) boundaries, the DST and DCT transforms play the same role. See Spectral Transforms: DST & DCT for the transform definitions and Spectral Elliptic Solvers for the full solver algorithms. For irregular (masked) domains, see the Capacitance Matrix Method.
9. Numerical Convergence and Spectral Accuracy¶
For a smooth, periodic function \(u \in C^\infty([0, L))\), the error of the truncated Fourier series \(P_N u\) converges faster than any algebraic power of \(N\):

\[
\| u - P_N u \|_\infty = O(N^{-m}) \quad \text{for every } m > 0.
\]

Moreover, if \(u\) is analytic (holomorphic in a strip around the real axis), the error decays exponentially:

\[
\| u - P_N u \|_\infty = O(e^{-\beta N}) \quad \text{for some } \beta > 0.
\]
This spectral accuracy is the defining property of spectral methods. The practical implication: for smooth problems, spectral methods achieve the same accuracy as finite difference methods of order \(p\) with far fewer grid points; at fixed \(N\), the ratio of finite-difference to spectral error grows like \(N^{-p} e^{\beta N}\).
Comparison of convergence rates (schematic):
| Method | Error scaling |
|---|---|
| FD order 2 | \(O(N^{-2})\) |
| FD order 4 | \(O(N^{-4})\) |
| FD order 8 | \(O(N^{-8})\) |
| Spectral (analytic \(u\)) | \(O(e^{-\beta N})\) |
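The spectral row of the table can be observed directly; a small numpy sketch using \(e^{\sin x}\) as the analytic, \(2\pi\)-periodic test function (an assumption for illustration):

```python
import numpy as np

def deriv_error(N):
    """Max error of the spectral derivative of u(x) = exp(sin(x))."""
    x = np.arange(N) * 2 * np.pi / N
    u = np.exp(np.sin(x))
    k = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi
    ik = 1j * k
    ik[N // 2] = 0.0                             # zero the Nyquist mode
    du = np.fft.ifft(ik * np.fft.fft(u)).real
    return np.max(np.abs(du - np.cos(x) * u))    # exact: u' = cos(x) exp(sin(x))

errors = {N: deriv_error(N) for N in (8, 16, 32)}
```

Doubling \(N\) does not merely halve the error a fixed number of times: the error plunges from roughly \(10^{-2}\) at \(N = 8\) to roundoff level by \(N = 32\), the signature of exponential convergence.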
For non-smooth or discontinuous functions, spectral methods exhibit the Gibbs phenomenon — persistent oscillations near discontinuities — and algebraic convergence at best. In such cases, filtering, spectral mollification, or non-oscillatory reconstructions are required.