GPs at Large Scale

Literature on scaling GPs to very large datasets: 1 million points and higher, plus methods that reach respectably large sizes below that.

Fast Direct Methods for Gaussian Processes by Ambikasaran et al. (2014)

Scales exact GPs using fast direct (non-iterative) matrix factorizations; handles inputs with dimension greater than 3.

-> Paper

-> Code

Constant-Time Predictive Distributions for GPs - Pleiss et al. (2018)

Using MVM techniques, they obtain constant-time predictive mean and variance estimates, addressing the computational bottleneck of GP predictive distributions. A usage sketch follows the links below.

-> Paper

-> Code
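
A minimal usage sketch, assuming the GPyTorch implementation (the paper's LOVE method is, as far as I know, exposed there via `gpytorch.settings.fast_pred_var()`); the data, kernel, and omitted training loop are placeholders:

```python
import torch
import gpytorch

# Toy data; stands in for a real large-scale dataset.
train_x = torch.linspace(0, 1, 1000)
train_y = torch.sin(6.283 * train_x) + 0.1 * torch.randn(train_x.size(0))

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)

# ... fit hyperparameters with the usual exact-GP training loop (omitted) ...

model.eval(); likelihood.eval()
test_x = torch.linspace(0, 1, 200)

# fast_pred_var() caches a low-rank decomposition on the first call; afterwards
# predictive means and variances per test point are (near) constant time.
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    pred = likelihood(model(test_x))
    mean, var = pred.mean, pred.variance
```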

When Gaussian Process Meets Big Data: A Review of Scalable GPs - Liu et al. (2019)

A great review paper on GPs and how to scale them. Goes over most of the SOTA.

-> Paper

Exact GP on a Million Data Points - Wang et al. (2019)

The authors train an exact GP on roughly a million data points across multiple GPUs by replacing the Cholesky factorization with iterative methods (conjugate gradients) that only need kernel matrix-vector multiplications. Quite amazing actually.
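
A toy NumPy sketch of the matrix-vector-multiplication strategy (conjugate gradients for the GP solve), not the authors' multi-GPU implementation; the kernel, data sizes, and noise level are placeholders, and the dense kernel matrix here stands in for the partitioned, on-the-fly MVMs used at the million-point scale:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def cg_solve(matvec, b, tol=1e-6, max_iters=1000):
    """Conjugate gradients: solve A x = b using only matrix-vector products with A."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 3))
y = np.sin(X.sum(-1)) + 0.1 * rng.standard_normal(len(X))

noise = 0.1 ** 2
K = rbf_kernel(X, X)  # at large scale this is never formed densely

# Solve (K + noise*I) alpha = y with MVMs only, then form the predictive mean.
alpha = cg_solve(lambda v: K @ v + noise * v, y)
X_test = rng.uniform(size=(5, 3))
pred_mean = rbf_kernel(X_test, X) @ alpha
```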

Randomly Projected Additive Gaussian Processes for Regression - Delbridge et al. (2019)

-> Paper

-> Code (PyTorch)

Efficiently Sampling Functions from Gaussian Process Posteriors - Wilson et al. (2020)

Uses a path-wise sampling scheme (a prior function sample plus a data-dependent update) to efficiently draw samples from GP posteriors, making such samples cheap enough for Monte Carlo estimation schemes. A minimal sketch follows the links below.

-> Tweet 1 | Tweet 2

-> Blog

-> Video

-> Paper

-> Code - Julia | GPFlow
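
A minimal NumPy sketch of the path-wise conditioning idea (Matheron's rule), assuming an RBF kernel and a random-Fourier-feature approximation of the prior; the data and hyperparameters are toy placeholders, and this is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def sample_prior_function(dim, n_features=1000, lengthscale=1.0, variance=1.0):
    """One approximate draw f(.) from the prior via random Fourier features (RBF kernel)."""
    W = rng.standard_normal((dim, n_features)) / lengthscale
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    w = rng.standard_normal(n_features)  # weights of the equivalent Bayesian linear model
    scale = np.sqrt(2.0 * variance / n_features)
    return lambda X: scale * np.cos(X @ W + b) @ w

# Toy training data and test locations
X = rng.uniform(size=(200, 1))
y = np.sin(6.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)
X_star = np.linspace(0.0, 1.0, 100)[:, None]

noise = 0.1 ** 2
K = rbf_kernel(X, X) + noise * np.eye(len(X))

# Path-wise conditioning (Matheron's rule):
#   f_post(x*) = f_prior(x*) + k(x*, X) (K + noise*I)^{-1} (y - f_prior(X) - eps)
f_prior = sample_prior_function(dim=1)
eps = np.sqrt(noise) * rng.standard_normal(len(X))
update = np.linalg.solve(K, y - f_prior(X) - eps)
f_post_star = f_prior(X_star) + rbf_kernel(X_star, X) @ update  # one posterior sample
```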

Fast Matrix Square Roots with Applications to Gaussian Processes and Bayesian Optimization by Pleiss et al. (2020)

-> Paper

-> Code

Sparse Cholesky factorization by Kullback-Leibler minimization - Schäfer et al. (2020)

-> Paper