
Walkthrough - Experiment

In this notebook, I will walk through my experimental procedure.

import sys, os

# Insert path to the model directory.
cwd = os.getcwd()
path = f"{cwd}/../../src"
sys.path.insert(0, path)

# Insert path to the pysim package.
pysim_path = f"/home/emmanuel/code/pysim/"
sys.path.insert(0, pysim_path)

import warnings
from typing import Optional, Tuple
from tqdm import tqdm
import random
import pandas as pd
import numpy as np
import argparse
from sklearn.utils import check_random_state

# toy datasets
from data.distribution import DataParams, Inputs

# Kernel Dependency measure
from sklearn.preprocessing import StandardScaler
from sklearn.gaussian_process.kernels import RBF
from models.dependence import HSICModel
from pysim.kernel.utils import estimate_sigma

# RBIG IT measures
# from models.ite_algorithms import run_rbig_models

# Plotting
from visualization.distribution import plot_scorer

# experiment helpers
from experiments.utils import dict_product, run_parallel_step

# Plotting Procedures
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use(['seaborn-talk'])
warnings.filterwarnings('ignore') # get rid of annoying warnings
%matplotlib inline


%load_ext autoreload
%autoreload 2
!pwd
/home/emmanuel/projects/2019_hsic_align/notebooks/4_distributions
FIG_PATH = "/home/emmanuel/projects/2019_hsic_align/results/figures/distribution_experiment/mutual_info/"
RES_PATH = "/home/emmanuel/projects/2019_hsic_align/data/results/distributions/mutual_info/"

Experimental Parameters

# initialize the holder for the parameters
parameters = {}

Case I - HSIC Estimator

In this first part, we have 3 variants of HSIC, defined by whether the kernels are centered and whether the covariance term is normalized. The 3 "scorers" are as follows:

  1. HSIC
HSIC = \frac{1}{n(n-1)}\langle K_xH,K_yH \rangle_F

In this case, the kernels are centered, but the score is not normalized.

  2. Kernel Alignment (KA)
KA = \frac{\langle K_x,K_y \rangle_F}{||K_x||_F||K_y||_F}

In this case, the kernels are not centered but the score is normalized.

  3. Centered Kernel Alignment (cKA)
cKA = \frac{\langle K_xH,K_yH \rangle_F}{||K_xH||_F||K_yH||_F}

In this case, the kernels are centered and the score is normalized.
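To make the three scorers concrete, here is a minimal numpy sketch that computes each score directly from two precomputed kernel matrices. It mirrors the formulas above; the internals of HSICModel may differ.

def frobenius_inner(A: np.ndarray, B: np.ndarray) -> float:
    # <A, B>_F : sum of the elementwise products
    return np.sum(A * B)

def naive_score(K_x: np.ndarray, K_y: np.ndarray, scorer: str = "hsic") -> float:
    n = K_x.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    if scorer == "hsic":
        # centered kernels, unnormalized score
        return frobenius_inner(K_x @ H, K_y @ H) / (n * (n - 1))
    if scorer == "ka":
        # uncentered kernels, normalized score
        return frobenius_inner(K_x, K_y) / (np.linalg.norm(K_x) * np.linalg.norm(K_y))
    if scorer == "cka":
        # centered kernels, normalized score
        KxH, KyH = K_x @ H, K_y @ H
        return frobenius_inner(KxH, KyH) / (np.linalg.norm(KxH) * np.linalg.norm(KyH))
    raise ValueError(f"Unknown scorer: {scorer}")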

def get_hsic(
    X: np.ndarray, 
    Y: np.ndarray, 
    scorer: str, 
    sigma_X: Optional[float]=None, 
    sigma_Y: Optional[float]=None
) -> float:
    """Estimates the HSIC value given some data, the sigmas and
    the scorer."""
    # initialize the HSIC model class
    hsic_model = HSICModel()

    # set the RBF kernel parameters, if both sigmas are provided
    if sigma_X is not None and sigma_Y is not None:
        hsic_model.kernel_X = RBF(sigma_X)
        hsic_model.kernel_Y = RBF(sigma_Y)

    # get the HSIC score
    hsic_val = hsic_model.get_score(X, Y, scorer)

    return hsic_val

# parameters
parameters['scorer'] = ['hsic', 'ka', 'cka'] 

Case II - Sigma Estimator

For this parameter, we are interested in a few things:

  1. We want to know which estimator to choose.

Kernel methods are great if the kernel parameters are correct. In supervised scenarios, we can simply learn the kernel parameters that best fit our data given some criterion. In unsupervised settings, we generally do not know which parameters to choose. But there are many ways to choose them, as every lab/researcher has their own method that they swear by. I will use some of the most common ones (a sketch of the median heuristic follows the list):

  • Silverman
  • Scott
  • Mean Distance
  • Median Distance
  • Median Distance with the k^{th} sample (or percent) of that distance matrix.
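As a reference, here is a minimal sketch of the median-distance heuristic with an optional percentile, assuming Euclidean distances; the actual estimate_sigma in pysim may differ.

from typing import Optional
import numpy as np
from scipy.spatial.distance import pdist

def median_sigma(X: np.ndarray, percent: Optional[float] = None) -> float:
    """Median (or given percentile) of the pairwise Euclidean distances."""
    dists = pdist(X)                    # condensed pairwise distance matrix
    if percent is None:
        return np.median(dists)         # classic median heuristic
    return np.percentile(dists, 100 * percent)   # e.g. percent=0.1 -> 10th percentile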

Case III - Sigma Application

  1. We want to know how we apply the length scale.

We have three cases to consider:

  • One length scale for both datasets
  • One length scale per dataset
  • One length scale per dataset per dimension

This is important as it could turn a good estimator into a bad one. Scott and Silverman work very well for univariate distributions but not for multivariate distributions. So if we have one Scott/Silverman estimate per feature, the estimator might yield much better results. For the RBF kernel, having one length scale per dimension corresponds to the ARD kernel, which assigns a length scale (or relevance value) per feature. We typically avoid the ARD kernel for kernel methods that we cannot optimize with gradients because it is expensive to tune; but in this case it isn't so expensive, because we are choosing not to optimize anything.
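For example, sklearn's RBF kernel accepts a vector of length scales, which gives the ARD-style behaviour described above (one length scale per feature); the length scales below are placeholders.

import numpy as np
from sklearn.gaussian_process.kernels import RBF

X = np.random.randn(100, 3)

# one length scale for the whole dataset (isotropic RBF)
K_iso = RBF(length_scale=1.0)(X)

# one length scale per dimension (ARD-style RBF)
K_ard = RBF(length_scale=np.array([0.5, 1.0, 2.0]))(X)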

def get_sigma(
    X: np.ndarray, 
    Y: np.ndarray, 
    method: str='silverman', 
    percent: Optional[float]=None,
    per_dimension: bool=False, 
    separate_scales: bool=False
) -> Tuple[np.ndarray, np.ndarray]:
    """Estimates the RBF length scales for X and Y with the chosen method."""
    # fixed sigma-estimation settings
    subsample = None
    random_state = 123

    sigma_X = estimate_sigma(
        X, 
        subsample=subsample,
        method=method,
        percent=percent,
        random_state=random_state,
        per_dimension=per_dimension
    )

    sigma_Y = estimate_sigma(
        Y, 
        subsample=subsample,
        method=method,
        percent=percent,
        random_state=random_state,
        per_dimension=per_dimension
    )

    # optionally share one length scale across both datasets
    if not separate_scales:
        sigma_Y = sigma_X = np.mean([sigma_X, sigma_Y])

    return sigma_X, sigma_Y
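A quick usage example on toy data (the data here is a stand-in):

rng = check_random_state(123)
X, Y = rng.randn(100, 2), rng.randn(100, 2)

# one shared length scale from the 50th-percentile distance
sigma_X, sigma_Y = get_sigma(X, Y, method='median', percent=0.5)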

# Parameters for the estimators
parameters['sigma_estimator'] = [
#     ('silverman',None),
#     ('scott', None),
    *[('median', x) for x in np.arange(0.1, 1.0, 0.1, dtype=np.float64)]
]
parameters['separate_scales'] = [True, False]
parameters['per_dimension'] = [True, False]

Case IV - Standardize or not to Standardize

This is a simple choice, but it can drastically change the results of estimating the length scale. In ML, we tend to standardize our datasets because algorithms predict better when the feature ranges are contained. Datasets with massive values for certain features can have adverse effects on the representation and the predictions. The formula is given by:

\bar{x} = \frac{x - \mu_x}{\sigma_x}

Note: this is scaling per feature and not per sample.
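Equivalently, in numpy (note the axis=0, i.e. per-feature statistics; the data is a placeholder):

import numpy as np

X = np.random.randn(100, 5) * 10 + 3

# same result as StandardScaler().fit_transform(X)
X_bar = (X - X.mean(axis=0)) / X.std(axis=0)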


def standardize_data(
    X: np.ndarray, 
    Y: np.ndarray, 
    standardize: bool=False
) -> Tuple[np.ndarray, np.ndarray]:
    # only transform the data when the flag is set
    if standardize:
        X = StandardScaler().fit_transform(X)
        Y = StandardScaler().fit_transform(Y)
    return X, Y

# experimental parameters
parameters['standardize'] = [True, False]

Case V - Multivariate Datasets

For this experiment, we have generated samples from two multivariate distributions: the Gaussian and the T-Student. We have varied the parameters so that we get a variety of sample sizes, dimensions, and degrees of similarity (which we can calculate analytically) between them.

For example, we can take a Gaussian distribution with some covariance and generate a similar Gaussian distribution with the same number of samples and dimensions. Since we know the cross-covariance between them as well as the self-covariances, we can analytically calculate the mutual information between the two. MI is an absolute measure of the dependence (similarity) between the two datasets. We will see how the HSIC scores hold up against this variation in dataset size and shape.
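For reference, the mutual information between jointly Gaussian datasets is a closed-form function of their covariances. Below is a minimal sketch of that expression; the notebook itself reads the value from inputs.mutual_info, so the covariance blocks here are assumptions.

import numpy as np

def gauss_mi(C_xx: np.ndarray, C_yy: np.ndarray, C_joint: np.ndarray) -> float:
    """I(X;Y) = 0.5 * log( det(C_xx) * det(C_yy) / det(C_joint) )
    for jointly Gaussian X, Y with joint covariance C_joint."""
    # slogdet is numerically safer than det in high dimensions
    _, logdet_x = np.linalg.slogdet(C_xx)
    _, logdet_y = np.linalg.slogdet(C_yy)
    _, logdet_j = np.linalg.slogdet(C_joint)
    return 0.5 * (logdet_x + logdet_y - logdet_j)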

We have the following parameters:

Parameter           Entry
Samples             50, 100, 500, 1K, 5K
Dimensions          2, 3, 10, 50, 100
Trials              1 \rightarrow 5
Distributions       Gaussian, T-Student
std (Gauss dist)    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
nu (T-Stu dist)     1, 2, 3, 4, 5, 6, 7, 8, 9

# example parameters for the dataset
parameters['dataset'] = ['gauss'] 
parameters['std'] = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] 
parameters['nu'] = [1]
parameters['trial'] = [1, 2, 3, 4, 5]
parameters['dimensions'] = [2, 3, 10, 50, 100]

loop_params = [50, 100, 500, 1_000, 5_000]

# example parameters function
example_params = DataParams()

# generates a named tuple containing the inputs and the MI 
inputs = example_params.generate_data()

Main Loop (Update)

It turns out that just doing a blind parallel scheme over everything took too much time, so I decided to break the problem up a bit.

  1. Do 1 Main Loop (Samples)

I decided not to parallelize over all of the combinations at once; I combined everything except the number of samples. Every time I watched the progress bar, it would slow down every once in a while, because the bottleneck for kernel methods is the number of samples. Cases with 1_000 samples aren't too bad, but at 5_000 samples the methods really start to slow down, and memory consumption grows as well. So I do a main loop through the number of samples (starting from the smallest and ending with the largest). That way, we get the easier datasets out of the way and work on the larger datasets later.

  2. Controlling the number of jobs.

As I mentioned before, the bottleneck is the number of samples. At 5_000 samples, running in parallel starts to eat up a lot of memory. To prevent this, I control the number of cores doing the processing. Like so:

# Samples    Cores
50           28
100          28
500          28
1_000        16
5_000        8
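A small helper implementing this mapping might look like the following (get_n_jobs is hypothetical; the notebook sets the core counts manually):

def get_n_jobs(n_samples: int) -> int:
    """Use fewer workers as the sample size (and memory footprint) grows."""
    if n_samples <= 500:
        return 28
    elif n_samples <= 1_000:
        return 16
    else:
        return 8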

  3. Appending Data

Because there is a lot of data being shifted around (\sim 297000 parameter combinations), the resulting dataframe which stores the experimental results is going to be huge. So for every call to the main loop, I append those results to a csv file and then del that dataframe to free up memory.

Experiment

We have a lot of parameters, so we are going to run everything in parallel to save time. We do this by taking the cartesian product of our lists of parameters, which gives us a list where each entry is one set of parameters (a dict) to evaluate. The length of this list is the total number of parameter combinations.
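dict_product is imported from experiments.utils; a minimal equivalent using itertools.product might look like this:

import itertools
from typing import Dict, Iterator

def dict_product(params: Dict) -> Iterator[Dict]:
    """Cartesian product of a dict of lists, yielding one dict per combination."""
    keys = params.keys()
    for values in itertools.product(*params.values()):
        yield dict(zip(keys, values))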

# create a list of all param combinations
parameters_list = list(dict_product(parameters))
n_params, n_loop_params = len(parameters_list), len(loop_params)
print('# of Params:', n_params, n_loop_params)
# of Params: 59400 5
from typing import Dict 

def step(params: Dict, loop_param: int):

    # ================
    # DATA
    # ================
    dist_data = DataParams(
        dataset=params['dataset'],
        trial=params['trial'],
        std=params['std'],
        nu=params['nu'],
        samples=loop_param,
        dimensions=params['dimensions'],
    )

    # generate data
    inputs = dist_data.generate_data()


    # ====================
    # Sigma Estimator
    # ====================

    # estimate sigma
    sigma_X, sigma_Y = get_sigma(
        X=inputs.X, Y=inputs.Y, 
        method=params['sigma_estimator'][0], 
        percent=params['sigma_estimator'][1], 
        per_dimension=params['per_dimension'],
        separate_scales=params['separate_scales']
    )

    # ====================
    # HSIC Model
    # ====================
    # get hsic score
    score = get_hsic(
        inputs.X, inputs.Y, 
        params['scorer'], 
        sigma_X, sigma_Y
    )

    # ====================
    # Results
    # ====================

    # append results to dataframe
    results_df = pd.DataFrame(
        {
            # Data Params
            "dataset": [params["dataset"]],
            "trial": [params["trial"]],
            "std": [params["std"]],
            "nu": [params["nu"]],
            "samples": [loop_param],
            "dimensions": [params["dimensions"]],
            # STANDARDIZE PARAMS
            "standardize": [params["standardize"]],
            # SIGMA FORMAT PARAMS
            "per_dimension": [params["per_dimension"]],
            "separate_scales": [params["separate_scales"]],
            # SIGMA METHOD PARAMS
            "sigma_method": [params["sigma_estimator"][0]],
            "sigma_percent": [params["sigma_estimator"][1]],
            "sigma_X": [sigma_X],
            "sigma_Y": [sigma_Y],
            # HSIC Params
            "scorer": [params["scorer"]],
            "score": [score],
            "mutual_info": [inputs.mutual_info],
        }
    )
    return results_df

Test - Single Step

results_df = pd.DataFrame()

for iloop_param in loop_params:

    step_df = step(parameters_list[0], iloop_param)
#     results_df = pd.concat([results_df, step_df])
    break
step_df
dataset trial std nu samples dimensions standardize per_dimension separate_scales sigma_method sigma_percent sigma_X sigma_Y scorer score mutual_info
0 gauss 1 1 1 50 2 True True True median 0.1 [0.1686566316684468, 0.14612229488391992] [0.1589719949193001, 0.1680410083908699] hsic 0.019091 0.0

Test - Full Loop

verbose = 1
n_jobs = -1

with tqdm(loop_params) as pbar:
    for iparam in pbar:

        pbar.set_description(
            f"# Samples: {iparam}, Tasks: {len(parameters_list)}, Jobs: {n_jobs}"
        )

        results_df = run_parallel_step(
            exp_step=step,
            parameters=parameters_list,
            n_jobs=n_jobs,
            verbose=verbose,
            loop_param=iparam,
        )

        # concat current results

        break
# Samples: 50, Tasks: 59400, Jobs: -1:   0%|          | 0/5 [00:00<?, ?it/s][Parallel(n_jobs=-1)]: Using backend LokyBackend with 28 concurrent workers.
[Parallel(n_jobs=-1)]: Done 232 tasks      | elapsed:    0.9s
[Parallel(n_jobs=-1)]: Done 732 tasks      | elapsed:    1.9s
[Parallel(n_jobs=-1)]: Done 2024 tasks      | elapsed:    4.3s
[Parallel(n_jobs=-1)]: Done 3824 tasks      | elapsed:    7.6s
[Parallel(n_jobs=-1)]: Done 6024 tasks      | elapsed:   11.4s
[Parallel(n_jobs=-1)]: Done 8624 tasks      | elapsed:   15.8s
[Parallel(n_jobs=-1)]: Done 11624 tasks      | elapsed:   21.1s
[Parallel(n_jobs=-1)]: Done 15024 tasks      | elapsed:   27.4s
[Parallel(n_jobs=-1)]: Done 18824 tasks      | elapsed:   34.4s
[Parallel(n_jobs=-1)]: Done 23024 tasks      | elapsed:   41.9s
[Parallel(n_jobs=-1)]: Done 27624 tasks      | elapsed:   49.9s
[Parallel(n_jobs=-1)]: Done 32624 tasks      | elapsed:   59.4s
[Parallel(n_jobs=-1)]: Done 38024 tasks      | elapsed:  1.2min
[Parallel(n_jobs=-1)]: Done 43824 tasks      | elapsed:  1.3min
[Parallel(n_jobs=-1)]: Done 50024 tasks      | elapsed:  1.5min
[Parallel(n_jobs=-1)]: Done 56624 tasks      | elapsed:  1.7min
[Parallel(n_jobs=-1)]: Done 59400 out of 59400 | elapsed:  1.8min finished
# Samples: 50, Tasks: 59400, Jobs: -1:   0%|          | 0/5 [01:48<?, ?it/s]
import time

t0 = time.time()
df_ = pd.concat(results_df, ignore_index=True)
t1 = time.time() - t0
print(f"Time Taken: {t1:.2f} secs")
df_.tail()
Time Taken: 37.71 secs
dataset trial std nu samples dimensions standardize per_dimension separate_scales sigma_method sigma_percent sigma_X sigma_Y scorer score mutual_info
59395 gauss 5 11 1 50 2 False False False median 0.9 2.88898 2.88898 cka 0.480297 0.390005
59396 gauss 5 11 1 50 3 False False False median 0.9 3.35418 3.35418 cka 0.530064 0.377389
59397 gauss 5 11 1 50 10 False False False median 0.9 5.687 5.687 cka 0.714579 0.929178
59398 gauss 5 11 1 50 50 False False False median 0.9 13.6425 13.6425 cka 0.975977 4.052644
59399 gauss 5 11 1 50 100 False False False median 0.9 19.2544 19.2544 cka 0.987792 7.938746

Note: this concatenation is another bottleneck (see the timing above).

Appending to File

We can use this simple pseudocode to append to a file; the header should only be written on the first append.

mode = 'a'
header = True   # write the header on the first append only
with open(f"{RES_PATH}{save_name}.csv", mode) as f:
    df.to_csv(f, header=header)
header = False  # subsequent appends skip the header
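Putting it all together inside the main loop (save_name is a hypothetical file name): write the header only on the first pass, then free memory before the next batch.

import os

save_name = "mutual_info_v1"                  # hypothetical file name
csv_path = f"{RES_PATH}{save_name}.csv"

for iparam in loop_params:
    # list of per-step dataframes, as returned by run_parallel_step above
    results = run_parallel_step(
        exp_step=step, parameters=parameters_list,
        n_jobs=n_jobs, verbose=verbose, loop_param=iparam,
    )
    batch_df = pd.concat(results, ignore_index=True)
    # write the header only if the file does not exist yet
    batch_df.to_csv(csv_path, mode="a", header=not os.path.exists(csv_path))
    del results, batch_df                     # free memory before the next batch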