Tuesday, October 24, 2017

How do I get the components for LDA in scikit-learn?


When using PCA in sklearn, it's easy to get out the components:

    from sklearn import decomposition

    pca = decomposition.PCA(n_components=n_components)
    pca_data = pca.fit(input_data)
    pca_components = pca.components_

But I can't for the life of me figure out how to get the components out of LDA, as there is no components_ attribute. Is there a similar attribute in sklearn's LDA?

4 Answers

Answer 1

In the case of PCA, the documentation is clear. The pca.components_ are the eigenvectors.
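
As a sanity check, here is a minimal sketch comparing pca.components_ with the eigenvectors of the data's covariance matrix (eigenvectors are only defined up to sign, hence the comparison of absolute values):

    import numpy as np
    from sklearn import datasets, decomposition

    X = datasets.load_iris().data

    pca = decomposition.PCA(n_components=2)
    pca.fit(X)

    # Eigendecomposition of the covariance matrix of the data
    eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))

    # eigh returns eigenvalues in ascending order; take the two largest,
    # transposing so each row is one eigenvector, like pca.components_
    top = eigvecs[:, np.argsort(eigvals)[::-1][:2]].T

    # Eigenvectors are only defined up to sign, so compare absolute values
    print(np.allclose(np.abs(top), np.abs(pca.components_)))  # True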

In the case of LDA, we need the lda.scalings_ attribute.

Example using iris data and sklearn:

    import numpy as np
    import matplotlib.pyplot as plt
    import pandas as pd
    from sklearn import datasets
    from sklearn.preprocessing import StandardScaler
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    iris = datasets.load_iris()
    X = iris.data
    y = iris.target

    # In general it is a good idea to scale the data
    scaler = StandardScaler()
    scaler.fit(X)
    X = scaler.transform(X)

    lda = LinearDiscriminantAnalysis()
    lda.fit(X, y)

    x_new = lda.transform(X)

    def myplot(score, coeff, labels=None):
        xs = score[:, 0]
        ys = score[:, 1]
        n = coeff.shape[0]

        plt.scatter(xs, ys, c=y)  # without scaling
        for i in range(n):
            plt.arrow(0, 0, coeff[i, 0], coeff[i, 1], color='r', alpha=0.5)
            if labels is None:
                plt.text(coeff[i, 0] * 1.15, coeff[i, 1] * 1.15, "Var" + str(i + 1),
                         color='g', ha='center', va='center')
            else:
                plt.text(coeff[i, 0] * 1.15, coeff[i, 1] * 1.15, labels[i],
                         color='g', ha='center', va='center')

    plt.xlabel("LD{}".format(1))
    plt.ylabel("LD{}".format(2))
    plt.grid()

    # Call the function.
    # Important: I think lda.scalings_ contains the 2 eigenvectors (the loadings
    # of the variables). The shape is (n_features, n_components), so (4, 2) in
    # our case; myplot therefore plots, for each variable i, the values at
    # [i, 0] and [i, 1]. All this assumes lda.scalings_ holds the eigenvectors.
    myplot(x_new[:, 0:2], lda.scalings_)

    plt.show()

Verify that lda.scalings_ contains the eigenvectors; because the data was standardized, the mean that transform subtracts is (close to) zero, so transforming the identity matrix recovers the scalings directly:

    print(lda.scalings_)
    print(lda.transform(np.identity(4)))

Results: [biplot figure and printed output omitted]

Answer 2

There is a coef_ attribute that probably contains what you are looking for. It should be documented. As LDA has a linear decision function, coef_ is probably the right name in the sklearn naming scheme.

You can also directly use the transform method to project data to the new space.
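
A minimal sketch of both options on the iris data; the shapes are the main point here (coef_ has one row per class, while transform returns at most n_classes - 1 discriminant axes):

    import numpy as np
    from sklearn import datasets
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    iris = datasets.load_iris()
    X, y = iris.data, iris.target

    lda = LinearDiscriminantAnalysis()
    lda.fit(X, y)

    # coef_ holds one weight vector per class: shape (n_classes, n_features)
    print(lda.coef_.shape)           # (3, 4)

    # transform projects onto the discriminant axes:
    # shape (n_samples, min(n_classes - 1, n_features))
    print(lda.transform(X).shape)    # (150, 2)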

Answer 3

My reading of the code is that the coef_ attribute is used to weight each of the components when scoring a sample's features against the different classes. scalings_ holds the eigenvectors and xbar_ is the mean. In the spirit of UTSL, here's the source for the decision function: https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/lda.py#L188
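
Note that sklearn.lda has since been folded into sklearn.discriminant_analysis, but the relationship still holds: the per-class scores are an affine function of the inputs. A minimal sketch, assuming the current LinearDiscriminantAnalysis API:

    import numpy as np
    from sklearn import datasets
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    iris = datasets.load_iris()
    X, y = iris.data, iris.target

    lda = LinearDiscriminantAnalysis()
    lda.fit(X, y)

    # The per-class scores reduce to an affine map built from coef_ and intercept_
    manual_scores = X @ lda.coef_.T + lda.intercept_
    print(np.allclose(manual_scores, lda.decision_function(X)))  # True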

Answer 4

In PCA, the transform operation uses self.components_.T (see the code):

    X_transformed = np.dot(X, self.components_.T) 

In LDA, the transform operation uses self.scalings_ (see the code):

    X_new = np.dot(X, self.scalings_) 


Note the .T, which transposes the array in PCA but not in LDA; the shapes explain why:

  • PCA: components_ : array, shape (n_components, n_features)
  • LDA: scalings_ : array, shape (n_features, n_classes - 1)
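
A minimal check of both projections on iris, assuming the default svd solver (where, in current versions, both transforms also center the data first, PCA with mean_ and LDA with xbar_):

    import numpy as np
    from sklearn import datasets, decomposition
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    iris = datasets.load_iris()
    X, y = iris.data, iris.target

    pca = decomposition.PCA(n_components=2).fit(X)
    lda = LinearDiscriminantAnalysis().fit(X, y)  # default solver='svd'

    # PCA projects the mean-centered data onto components_.T ...
    print(np.allclose(pca.transform(X), (X - pca.mean_) @ pca.components_.T))  # True

    # ... while LDA (svd solver) projects the xbar_-centered data onto scalings_
    print(np.allclose(lda.transform(X), (X - lda.xbar_) @ lda.scalings_))      # True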