GPyTorch - Uncertain Inputs

This is my notebook where I play around with all things PyTorch. I use the following packages:

  • PyTorch
  • Pyro
  • GPyTorch
  • PyTorch Lightning
#@title Install Packages
!pip install --upgrade pyro-ppl gpytorch pytorch-lightning tqdm
Requirement already up-to-date: pyro-ppl in /usr/local/lib/python3.6/dist-packages (1.4.0)
Requirement already up-to-date: gpytorch in /usr/local/lib/python3.6/dist-packages (1.2.0)
Requirement already up-to-date: pytorch-lightning in /usr/local/lib/python3.6/dist-packages (0.9.0)
Requirement already up-to-date: tqdm in /usr/local/lib/python3.6/dist-packages (4.50.0)
#@title Load Packages
# TYPE HINTS
from typing import Tuple, Optional, Dict, Callable, Union

# PyTorch Settings
import torch

# Pyro Settings
import pyro

# GPyTorch Settings
import gpytorch

# PyTorch Lightning Settings
import pytorch_lightning as pl

# Progress Bars
import tqdm
import tqdm.notebook  # explicit import for tqdm.notebook.tqdm used below

# NUMPY SETTINGS
import numpy as np
np.set_printoptions(precision=3, suppress=True)

# MATPLOTLIB Settings
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

# SEABORN SETTINGS
import seaborn as sns
sns.set_context(context='talk',font_scale=0.7)
# sns.set(rc={'figure.figsize': (12, 9.)})
# sns.set_style("whitegrid")

# PANDAS SETTINGS
import pandas as pd
pd.set_option("display.max_rows", 120)
pd.set_option("display.max_columns", 120)

# LOGGING SETTINGS
import sys
import logging
logging.basicConfig(
    level=logging.INFO, 
    stream=sys.stdout,
    format='%(asctime)s:%(levelname)s:%(message)s'
)
logger = logging.getLogger()
#logger.setLevel(logging.INFO)

%load_ext autoreload
%autoreload 2

Data

from sklearn.utils import check_random_state

def near_square_wave(
    n_train: int=80, 
    input_noise: float=0.3, 
    output_noise: float=0.15, 
    n_test: int=400,
    random_state: int=123,
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray]:

    # function
    f = lambda x: np.sin(1.0 * np.pi / 1.6 * np.cos(5 + .5 * x))

    # create clean inputs
    x_mu = np.linspace(-10, 10, n_train)

    # generate noise
    x_rng = check_random_state(random_state)
    y_rng = check_random_state(random_state + 1)

    # noisy inputs
    x = x_mu + input_noise * x_rng.randn(x_mu.shape[0])

    # noisy outputs
    y = f(x_mu) + output_noise * y_rng.randn(x_mu.shape[0])

    # noisy test inputs (sorted for plotting)
    x_test = np.sort(np.linspace(-12, 12, n_test) + x_rng.randn(n_test))

    # noise-free test outputs on the regular grid (not paired with x_test)
    y_test = f(np.linspace(-12, 12, n_test))

    return x[:, None], y[:, None], x_test[:, None], y_test
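
A quick look at one draw from this generator (a minimal sketch using the function above):

x, y, x_test, y_test = near_square_wave()

fig, ax = plt.subplots()
ax.plot(np.linspace(-12, 12, 400), y_test, color="black", label="True Function")
ax.scatter(x, y, c="red", s=10, label="Noisy Training Data")
ax.legend()
plt.show()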

Example I - Constant Input Error

import math
# function
f = lambda x: torch.sin(1.0 * math.pi / 1.6 * torch.cos(5 + .5 * x))

# training inputs
train_x_mean = torch.linspace(-10, 10, 80)

# input noise (constant standard deviation)
train_x_stdv = 0.3 * torch.ones(train_x_mean.shape[0])

# noisy training inputs
torch.manual_seed(123)
train_x = train_x_mean + train_x_stdv * torch.randn(train_x_mean.shape[0])

# true function (near-square wave)
output_noise = 0.15
real_y = f(train_x_mean)
train_y = real_y + output_noise * torch.randn(train_x_mean.size())
fig, ax = plt.subplots()

ax.plot(train_x_mean, real_y, color="black", label="True Function")
ax.errorbar(train_x, train_y, xerr=1.96 * train_x_stdv, fmt='k.', label="Noisy Observations")
ax.legend()

plt.show()
train_x_stdv.shape
torch.Size([80])
#@title GP Model


class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
training_iter = 100

# Find optimal model hyperparameters
model.train()
likelihood.train()

# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)  # Includes GaussianLikelihood parameters

# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

iterator = tqdm.notebook.tqdm(range(training_iter))
for i in iterator:
    # Zero gradients from previous iteration
    optimizer.zero_grad()
    # Output from model
    output = model(train_x)
    # Calc loss and backprop gradients
    loss = -mll(output, train_y)
    loss.backward()
    iterator.set_postfix(
        loss=loss.item(), 
        length_scale=model.covar_module.base_kernel.lengthscale.item(),
        noise=model.likelihood.noise.item()
    )
    # print('Iter %d/%d - Loss: %.3f   lengthscale: %.3f   noise: %.3f' % (
    #     i + 1, training_iter, loss.item(),
    #     model.covar_module.base_kernel.lengthscale.item(),
    #     model.likelihood.noise.item()
    # ))
    optimizer.step()

#@title Predictions
# Get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()

# Test points are regularly spaced along [-12, 12]
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    test_x = torch.linspace(-12, 12, 100)
    observed_pred = likelihood(model(test_x))

with torch.no_grad():
    # Initialize plot
    # f, ax = plt.subplots(1, 1, figsize=(8, 3))

    # Get upper and lower confidence bounds
    lower, upper = observed_pred.confidence_region()
    # # Plot training data as black stars
    # ax.errorbar(train_x_mean.numpy(), train_y.numpy(), xerr=train_x_stdv, fmt='k*')
    # # Plot predictive means as blue line
    # ax.plot(test_x.numpy(), observed_pred.mean.numpy(), 'b')
    # # Shade between the lower and upper confidence bounds
    # ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.5)
    # ax.set_ylim([-3, 3])
    # ax.legend(['Observed Data', 'Mean', 'Confidence'])

    fig, ax = plt.subplots()
    ax.scatter(train_x_mean.numpy(), train_y.numpy(), c="red", label="Training Data")
    ax.plot(
        test_x.numpy().squeeze(),
        observed_pred.mean.numpy().squeeze(),
        label=r"Predictive Mean",
        color="black",
        linewidth=3,
    )
    ax.fill_between(
        test_x.numpy().squeeze(),
        lower.numpy(),
        upper.numpy(),
        alpha=0.3,
        color="darkorange",
        label="Predictive Std",
    )
    ax.set_ylim([-3.5, 2.5])
    ax.legend(fontsize=12)
    plt.tight_layout()
    # fig.savefig(FIG_PATH.joinpath("1d_gp_taylor_1o.png"))
    # fig.savefig("figures/jaxgp/examples/taylor/1d_gp_taylor_1o.png")
    plt.show()

Linearized GP
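
The linearization needs the first and second derivatives of the predictive mean with respect to the test inputs. Because the posterior mean at each 1D test point depends only on that point, the Jacobian of predict_mean is diagonal, so a vector-Jacobian product with a vector of ones extracts exactly the per-point derivatives (and a vector-Hessian product of the summed mean gives the diagonal second derivatives):

$$
(\mathbf{v}^\top J)_i = \sum_j v_j \frac{\partial \mu_j}{\partial x_i} = \frac{\partial \mu_i}{\partial x_i}, \qquad \mathbf{v} = \mathbf{1}, \; J \ \text{diagonal}.
$$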

test_x = torch.linspace(-12, 12, 100)
test_y = f(test_x)

# vector of ones for the vector-Jacobian / vector-Hessian products
v = torch.ones(100)

def predict_mean(test_x):
    return likelihood(model(test_x)).mean

def predict_var(test_x):
    return likelihood(model(test_x)).variance

def predict_covar(test_x):
    return likelihood(model(test_x)).covariance_matrix

def predict_mean_sum(test_x):
    return likelihood(model(test_x)).mean.sum()

mu = predict_mean(test_x)
var = predict_var(test_x)
covar = predict_covar(test_x)

# v^T J: with a diagonal Jacobian this is the per-point derivative dmu_i/dx_i
_, dx = torch.autograd.functional.vjp(predict_mean, test_x, v)

# v^T H of the summed mean: the diagonal second derivatives d2mu_i/dx_i^2
_, dx2 = torch.autograd.functional.vhp(predict_mean_sum, test_x, v)
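
As a quick sanity check, the autodiff derivative can be compared against a central finite difference (a minimal sketch; the step size eps is an illustrative choice, not from the original notebook):

# finite-difference check of the VJP-extracted first derivative
eps = 1e-4  # hypothetical step size
with torch.no_grad():
    fd = (predict_mean(test_x + eps) - predict_mean(test_x - eps)) / (2 * eps)
print(torch.allclose(dx, fd, atol=1e-2))  # expect close agreement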
# f, ax = plt.subplots(1, 1, figsize=(8, 5))

plt.figure(figsize=(8, 5))
plt.plot(test_x.numpy(), mu.detach().numpy(), label="Mean")
plt.plot(test_x.numpy(), dx.detach().numpy(), label="Jacobian (diagonal)")
plt.plot(test_x.numpy(), dx2.detach().numpy(), label="Hessian (diagonal)")
plt.legend()
plt.show()

Predictive Mean & Variance
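
The code below implements Taylor-expansion corrections for propagating input noise through the GP posterior (in the spirit of Girard et al.'s uncertain-input GPs and McHutchon & Rasmussen's NIGP). With input covariance $\Sigma_x$, the first-order variance correction and second-order mean correction are

$$
\begin{aligned}
\sigma^2_{\mathrm{TO1}}(x_*) &= \sigma^2(x_*) + \partial_x \mu^\top \, \Sigma_x \, \partial_x \mu, \\
\mu_{\mathrm{TO2}}(x_*) &= \mu(x_*) + \tfrac{1}{2} \, \mathrm{tr}\!\left( \Sigma_x \, \partial_x^2 \mu \right),
\end{aligned}
$$

and the same trace term is added once more to the variance as a heuristic second-order correction (this term can be negative, which is what triggers the sqrt warning below).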

var_e = var.detach().numpy()[:, np.newaxis]
cov_e = covar.detach().numpy()

# input covariance: Sigma_x = sigma_x^2 for the constant input noise (stdv 0.3)
cov_x = np.array([0.3 ** 2])[:, np.newaxis]

# 1st order approximation: add dmu^T Sigma_x dmu to the predictive variance
dx_np = dx.detach().numpy()[:, np.newaxis]
to1 = dx_np @ cov_x @ dx_np.T
var_to1 = var_e + np.diag(to1)[:, np.newaxis]

# 2nd order approximation: mean correction 0.5 * tr(Sigma_x * d2mu)
d2x_np = dx2.detach().numpy().squeeze()[:, np.newaxis, np.newaxis]
to2 = np.trace(d2x_np @ cov_x, axis1=1, axis2=2)
mu_2 = mu.detach().numpy() + 0.5 * to2.squeeze()

# heuristic 2nd-order variance: the trace term can be negative, so
# var_to2 may go negative and produce NaNs under sqrt (see the warning)
var_to2 = var_to1 + to2[:, np.newaxis]

# standard deviation
std_e = np.sqrt(var_e)
std_to1 = np.sqrt(var_to1)
std_to2 = np.sqrt(var_to2)
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:19: RuntimeWarning: invalid value encountered in sqrt
plt.figure(figsize=(8, 5))
plt.plot(test_x.detach().numpy().squeeze(), mu.detach().numpy(), label="Predictive Mean")
plt.plot(test_x.detach().numpy().squeeze(), mu_2, label="Predictive Mean (2nd Order)")
plt.plot(train_x_mean.numpy(), real_y.numpy(), color="black", linewidth=3, label="Real Function")
plt.scatter(train_x.numpy(), train_y.numpy(), c="red", label="Training Data")
plt.legend()
plt.show()
# f, ax = plt.subplots(1, 1, figsize=(8, 5))

plt.figure(figsize=(8, 5))
plt.plot(test_x.detach().numpy().squeeze(), mu.detach().numpy(), label="Predictive Mean")
plt.scatter(train_x.numpy(), train_y.numpy(), c="red", label="Training Data")
plt.plot(train_x_mean.numpy(), real_y.numpy(), c="black", label="Real Function")
plt.plot(test_x.detach().numpy().squeeze(), mu.detach().numpy().squeeze() - 1.96 * std_e.squeeze(), label='Lower Limit')
plt.plot(test_x.detach().numpy().squeeze(), mu.detach().numpy().squeeze() + 1.96 * std_e.squeeze(), label='Upper Limit')
plt.legend()
plt.show()

plt.figure(figsize=(8, 5))
plt.plot(test_x.detach().numpy().squeeze(), mu.detach().numpy(), label="Predictive Mean")
plt.scatter(train_x.numpy(), train_y.numpy(), c="red", label="Training Data")
plt.plot(train_x_mean.numpy(), real_y.numpy(), c="black", label="Real Function")
plt.plot(test_x.detach().numpy().squeeze(), mu.detach().numpy().squeeze() - 1.96 * std_to1.squeeze(), label='Lower Limit')
plt.plot(test_x.detach().numpy().squeeze(), mu.detach().numpy().squeeze() + 1.96 * std_to1.squeeze(), label='Upper Limit')
plt.legend()
plt.show()

plt.figure(figsize=(8, 5))
plt.plot(test_x.detach().numpy().squeeze(), mu_2, label="Predictive Mean")
plt.scatter(train_x.numpy(), train_y.numpy(), c="red", label="Training Data")
plt.plot(train_x_mean.numpy(), real_y.numpy(), c="black", label="Real Function")
plt.plot(test_x.detach().numpy().squeeze(), mu_2.squeeze() - 1.96 * std_to2.squeeze(), label='Lower Limit')
plt.plot(test_x.detach().numpy().squeeze(), mu_2.squeeze() + 1.96 * std_to2.squeeze(), label='Upper Limit')
plt.legend()
plt.show()
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
scores = {}

scores['mae_o1'] = mean_absolute_error(
    test_y.detach().numpy(), mu.detach().numpy()
)
scores['mae_o2'] = mean_absolute_error(
    test_y.detach().numpy(), mu_2
)

print(f"MAE (o1): {scores['mae_o1']:.4f}")
print(f"MAE (o2): {scores['mae_o2']:.4f}")


scores['mse_o1'] = mean_squared_error(
    test_y.detach().numpy(), mu.detach().numpy()
)
scores['mse_o2'] = mean_squared_error(
    test_y.detach().numpy(), mu_2
)

print(f"MSE (o1): {scores['mse_o1']:.4f}")
print(f"MSE (o2): {scores['mse_o2']:.4f}")


scores['rmse_o1'] = np.sqrt(scores['mse_o1'])
scores['rmse_o2'] = np.sqrt(scores['mse_o2'])

print(f"RMSE (o1): {scores['rmse_o1']:.4f}")
print(f"RMSE (o2): {scores['rmse_o2']:.4f}")


scores['r2_o1'] = r2_score(
    test_y.detach().numpy(), mu.detach().numpy()
)
scores['r2_o2'] = r2_score(
    test_y.detach().numpy(), mu_2
)

print(f"R2 (o1): {scores['r2_o1']:.4f}")
print(f"R2 (o2): {scores['r2_o2']:.4f}")
MAE (o1): 0.0964
MAE (o2): 0.1047
MSE (o1): 0.0254
MSE (o2): 0.0261
RMSE (o1): 0.1594
RMSE (o2): 0.1616
R2 (o1): 0.9644
R2 (o2): 0.9635
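
For a compact overview, the scores can be collected into a single table (a minimal sketch using the scores dict and pandas, both available above):

# summarize all metrics in one table, split by Taylor order
results = pd.DataFrame(
    {
        "1st Order": [scores["mae_o1"], scores["mse_o1"], scores["rmse_o1"], scores["r2_o1"]],
        "2nd Order": [scores["mae_o2"], scores["mse_o2"], scores["rmse_o2"], scores["r2_o2"]],
    },
    index=["MAE", "MSE", "RMSE", "R2"],
)
print(results)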