09.06 Data Scaling

Data does not always come in the form our machine learning algorithms expect. We saw that a handful of times already: we extracted extra features (e.g. polynomial features), we performed dimensionality reduction, we used manifold techniques. Yet, all these techniques may still fail on certain data.

Wine

We look at yet another dataset. We import the usual stuff and the wine dataset, which is an example of data on which a naive dimensionality reduction fails.

The set is made of chemical measurements of wines from three different vineyards in Italy. One can take it as a classification problem: identify the vineyard from the chemical composition. We will use it for dimensionality reduction.

In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-talk')
from sklearn.datasets import load_wine
wine = load_wine()
print(wine['DESCR'])
.. _wine_dataset:

Wine recognition dataset
------------------------

**Data Set Characteristics:**

    :Number of Instances: 178 (50 in each of three classes)
    :Number of Attributes: 13 numeric, predictive attributes and the class
    :Attribute Information:
 		- Alcohol
 		- Malic acid
 		- Ash
		- Alcalinity of ash  
 		- Magnesium
		- Total phenols
 		- Flavanoids
 		- Nonflavanoid phenols
 		- Proanthocyanins
		- Color intensity
 		- Hue
 		- OD280/OD315 of diluted wines
 		- Proline

    - class:
            - class_0
            - class_1
            - class_2
		
    :Summary Statistics:
    
    ============================= ==== ===== ======= =====
                                   Min   Max   Mean     SD
    ============================= ==== ===== ======= =====
    Alcohol:                      11.0  14.8    13.0   0.8
    Malic Acid:                   0.74  5.80    2.34  1.12
    Ash:                          1.36  3.23    2.36  0.27
    Alcalinity of Ash:            10.6  30.0    19.5   3.3
    Magnesium:                    70.0 162.0    99.7  14.3
    Total Phenols:                0.98  3.88    2.29  0.63
    Flavanoids:                   0.34  5.08    2.03  1.00
    Nonflavanoid Phenols:         0.13  0.66    0.36  0.12
    Proanthocyanins:              0.41  3.58    1.59  0.57
    Colour Intensity:              1.3  13.0     5.1   2.3
    Hue:                          0.48  1.71    0.96  0.23
    OD280/OD315 of diluted wines: 1.27  4.00    2.61  0.71
    Proline:                       278  1680     746   315
    ============================= ==== ===== ======= =====

    :Missing Attribute Values: None
    :Class Distribution: class_0 (59), class_1 (71), class_2 (48)
    :Creator: R.A. Fisher
    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
    :Date: July, 1988

This is a copy of UCI ML Wine recognition datasets.
https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data

The data is the results of a chemical analysis of wines grown in the same
region in Italy by three different cultivators. There are thirteen different
measurements taken for different constituents found in the three types of
wine.

Original Owners: 

Forina, M. et al, PARVUS - 
An Extendible Package for Data Exploration, Classification and Correlation. 
Institute of Pharmaceutical and Food Analysis and Technologies,
Via Brigata Salerno, 16147 Genoa, Italy.

Citation:

Lichman, M. (2013). UCI Machine Learning Repository
[https://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
School of Information and Computer Science. 

.. topic:: References

  (1) S. Aeberhard, D. Coomans and O. de Vel, 
  Comparison of Classifiers in High Dimensional Settings, 
  Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of  
  Mathematics and Statistics, James Cook University of North Queensland. 
  (Also submitted to Technometrics). 

  The data was used with many others for comparing various 
  classifiers. The classes are separable, though only RDA 
  has achieved 100% correct classification. 
  (RDA : 100%, QDA 99.4%, LDA 98.9%, 1NN 96.1% (z-transformed data)) 
  (All results using the leave-one-out technique) 

  (2) S. Aeberhard, D. Coomans and O. de Vel, 
  "THE CLASSIFICATION PERFORMANCE OF RDA" 
  Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of 
  Mathematics and Statistics, James Cook University of North Queensland. 
  (Also submitted to Journal of Chemometrics).


One should explore the data first. We will simply plot as many dimensions as we can at once: two features as the plot coordinates, a third as the marker size, and the class as the color.

In [2]:
fig, ax = plt.subplots(figsize=(14, 6))
ax.scatter(wine.data[:, 0], wine.data[:, 1], s=30*wine.data[:, 2], c=wine.target, cmap='plasma');

The classes seem to be difficult to separate. Yet, we have just a few dimensions and a handful of samples, therefore we can perform a full PCA and see whether we can project this data into a different space.

In [3]:
from sklearn.decomposition import PCA

pca = PCA()
pca.fit(wine.data)
fig, ax = plt.subplots(figsize=(14, 10))
ax.plot(np.cumsum(pca.explained_variance_ratio_))
ax.set(xlabel='components', ylabel='explained variance');

Oh wow, a two dimensional space seems to explain the data variance well enough. And, since we can visualize a two dimensional space easily, we should do just that.

In [4]:
pca = PCA(n_components=2)
wine_pca = pca.fit_transform(wine.data)
fig, ax = plt.subplots(figsize=(14, 6))
ax.scatter(wine_pca[:, 0], wine_pca[:, 1], s=60, c=wine.target, cmap='plasma');

Despite the dimensionality reduction the data still does not look separable. Let's try something different: let's describe this data using pandas.

In [5]:
df = pd.DataFrame(wine.data, columns=wine.feature_names)
df.describe()
Out[5]:
alcohol malic_acid ash alcalinity_of_ash magnesium total_phenols flavanoids nonflavanoid_phenols proanthocyanins color_intensity hue od280/od315_of_diluted_wines proline
count 178.000000 178.000000 178.000000 178.000000 178.000000 178.000000 178.000000 178.000000 178.000000 178.000000 178.000000 178.000000 178.000000
mean 13.000618 2.336348 2.366517 19.494944 99.741573 2.295112 2.029270 0.361854 1.590899 5.058090 0.957449 2.611685 746.893258
std 0.811827 1.117146 0.274344 3.339564 14.282484 0.625851 0.998859 0.124453 0.572359 2.318286 0.228572 0.709990 314.907474
min 11.030000 0.740000 1.360000 10.600000 70.000000 0.980000 0.340000 0.130000 0.410000 1.280000 0.480000 1.270000 278.000000
25% 12.362500 1.602500 2.210000 17.200000 88.000000 1.742500 1.205000 0.270000 1.250000 3.220000 0.782500 1.937500 500.500000
50% 13.050000 1.865000 2.360000 19.500000 98.000000 2.355000 2.135000 0.340000 1.555000 4.690000 0.965000 2.780000 673.500000
75% 13.677500 3.082500 2.557500 21.500000 107.000000 2.800000 2.875000 0.437500 1.950000 6.200000 1.120000 3.170000 985.000000
max 14.830000 5.800000 3.230000 30.000000 162.000000 3.880000 5.080000 0.660000 3.580000 13.000000 1.710000 4.000000 1680.000000

The values of magnesium and proline have a completely different magnitude from all other features. Since PCA evaluates variance based on the raw values alone, it takes these two features as the main explanation of variance. In other words, instead of finding the directions of highest variance in the data, PCA is simply finding these two features.
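We can check this claim in numbers with a minimal sketch, reusing the arrays already loaded above:

# Per-feature variance of the raw data: proline and magnesium dominate
# by orders of magnitude, so an unscaled PCA mostly sees those two columns.
variances = pd.Series(wine.data.var(axis=0), index=wine.feature_names)
print(variances.sort_values(ascending=False))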

Until now we worked with PCA on images, in which every dimension follows the same scale: pixel values ranging from $0$ to $255$. Such a well behaved set of dimensions is uncommon in real datasets, notably when the data are not images.

Let's scale those features down and then apply PCA. The StandardScaler centers every feature at zero mean and ensures that the variance of each feature is exactly one.
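In other words, each feature $x$ is transformed into $z = (x - \mu) / \sigma$. A minimal sketch of that equivalence using plain NumPy on the same data (the comparison is only an illustration, not part of the analysis):

from sklearn.preprocessing import StandardScaler

# StandardScaler is equivalent to subtracting each column's mean and
# dividing by each column's standard deviation (population std, ddof=0).
scaled = StandardScaler().fit_transform(wine.data)
manual = (wine.data - wine.data.mean(axis=0)) / wine.data.std(axis=0)
print(np.allclose(scaled, manual))  # True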

In [6]:
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

preprocess = make_pipeline(StandardScaler(), PCA(n_components=2))
wine_pca = preprocess.fit_transform(wine.data)
fig, ax = plt.subplots(figsize=(14, 6))
ax.scatter(wine_pca[:, 0], wine_pca[:, 1], s=60, c=wine.target, cmap='plasma');

Now this is rather easy to separate. Moreover, we probably do not need a complex classifier for it.

We have two take away messages here. The first is that, had we believed that the PCA examples we did earlier were good representations of all datasets, we would have been in trouble when faced with non-image data. Not scaling the data before PCA is one of the most common preprocessing mistakes.

The second is the use of a pipeline with two preprocessors, something we had not seen before. We could now add a third sklearn object, perhaps a classifier, and build a three piece pipeline: a pipeline is not limited to two sklearn objects glued together, and often one needs a longer one, as sketched below. The dataset after scaling is quite easy to classify, so we leave appending a model to the pipeline and the classification itself to you.
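For instance, the skeleton of such a three piece pipeline could look like the sketch below; the LogisticRegression at the end is only a placeholder choice of classifier, and fitting and scoring the pipeline is the exercise:

from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

# Same preprocessing as above with a classifier appended as a third step;
# any classifier would do, LogisticRegression is just a placeholder.
model = make_pipeline(StandardScaler(), PCA(n_components=2), LogisticRegression())
# Fitting and evaluating this pipeline (e.g. after a train/test split) is left to you.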