Feature Selection: The Smart Way
Feature selection involves picking the set of features that are most relevant to the target variable. This helps reduce the complexity of your model and minimizes the resources required for training and inference. The effect is even greater in production models, where you may be dealing with terabytes of data or serving millions of requests.
In this notebook, you will walk through different techniques for performing feature selection on the Breast Cancer Dataset. Most of the modules will come from scikit-learn, one of the most commonly used machine learning libraries. It features various machine learning algorithms and has built-in implementations of different feature selection methods. Using these, you will be able to compare which method works best for this particular dataset.
import pandas as pd
import numpy as np
# scikit-learn modules for feature selection and model evaluation
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, SelectFromModel, chi2, f_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score, precision_score, recall_score, f1_score
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler, MinMaxScaler
# libraries for visualization
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
df = pd.read_csv('./data/breast_cancer_data.csv')
# Print datatypes
print(df.dtypes)
# Describe columns
df.describe(include='all')
df.head()
df.isna().sum()
columns_to_remove = ['Unnamed: 32', 'id']
df.drop(columns_to_remove, axis=1, inplace=True)
# Check that the columns are indeed dropped
df.head()
Integer Encode Diagnosis
You may have realized that the target column, diagnosis, is encoded as a string-type categorical variable: M for malignant and B for benign. You need to convert these into integers before training the model. Since there are only two classes, you can use 0 for benign and 1 for malignant. Let's create a column diagnosis_int containing this integer representation.
df["diagnosis_int"] = (df["diagnosis"] == 'M').astype('int')
# Drop the previous string column
df.drop(['diagnosis'], axis=1, inplace=True)
# Check the new column
df.head()
X = df.drop("diagnosis_int", axis=1)
Y = df["diagnosis_int"]
def fit_model(X, Y):
'''Use a RandomForestClassifier for this problem.'''
# define the model to use
model = RandomForestClassifier(criterion='entropy', random_state=47)
# Train the model
model.fit(X, Y)
return model
def calculate_metrics(model, X_test_scaled, Y_test):
'''Get model evaluation metrics on the test set.'''
# Get model predictions
y_predict_r = model.predict(X_test_scaled)
# Calculate evaluation metrics for assessing performance of the model.
roc = roc_auc_score(Y_test, y_predict_r)
acc = accuracy_score(Y_test, y_predict_r)
prec = precision_score(Y_test, y_predict_r)
rec = recall_score(Y_test, y_predict_r)
f1 = f1_score(Y_test, y_predict_r)
return acc, roc, prec, rec, f1
def train_and_get_metrics(X, Y):
'''Train a Random Forest Classifier and get evaluation metrics'''
# Split train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2,stratify=Y, random_state = 123)
# All features of dataset are float values. You normalize all features of the train and test dataset here.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Call the fit model function to train the model on the normalized features and the diagnosis values
model = fit_model(X_train_scaled, Y_train)
# Make predictions on test dataset and calculate metrics.
acc, roc, prec, rec, f1 = calculate_metrics(model, X_test_scaled, Y_test)
return acc, roc, prec, rec, f1
def evaluate_model_on_features(X, Y):
'''Train model and display evaluation metrics.'''
# Train the model, predict values and get metrics
acc, roc, prec, rec, f1 = train_and_get_metrics(X, Y)
# Construct a dataframe to display metrics.
display_df = pd.DataFrame([[acc, roc, prec, rec, f1, X.shape[1]]], columns=["Accuracy", "ROC", "Precision", "Recall", "F1 Score", 'Feature Count'])
return display_df
Now you can train the model with all features included, then calculate the metrics. This will be your baseline, and you will compare it to the results you get after applying the feature selection methods below.
all_features_eval_df = evaluate_model_on_features(X, Y)
all_features_eval_df.index = ['All features']
# Initialize results dataframe
results = all_features_eval_df
# Check the metrics
results.head()
It is a good idea to calculate and visualize the correlation matrix of a data frame to see which features have high correlation. You can do that with just a few lines as shown below. The Pandas corr() method computes the Pearson correlation by default, and you will plot it with Matplotlib's pyplot and Seaborn. The darker blue boxes show feature pairs with high positive correlation, while the lighter ones indicate low or negative correlation. The diagonal is all 1's because each feature is perfectly correlated with itself.
plt.figure(figsize=(20,20))
# Calculate correlation matrix
cor = df.corr()
# Plot the correlation matrix
sns.heatmap(cor, annot=True, cmap=plt.cm.PuBu)
plt.show()
Filter Methods
Let's start feature selection with filter methods. This type of feature selection uses statistical methods to rank a given set of features. Moreover, it does this ranking regardless of the model you will be training on (i.e. you only need the feature values). When using these, it is important to note the types of features and target variable you have. Here are a few examples:
- Pearson correlation (numeric features - numeric target; also usable when a binary target is 0/1 coded)
- ANOVA f-test (numeric features - categorical target)
- Chi-squared (categorical features - categorical target)
Let's use some of these in the next cells.
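As an aside, the chi-squared option from the list above is mainly intended for categorical or count features, but you can still see its API in action here. Below is a minimal sketch, purely for illustration, that pairs it with the MinMaxScaler imported earlier because chi2 requires non-negative inputs; the choice of k=20 is arbitrary.
# Illustrative sketch only: chi2 expects non-negative values, so scale features to [0, 1] first
X_minmax = MinMaxScaler().fit_transform(X)
# Rank the features with the chi-squared statistic and keep the top 20 (arbitrary choice)
chi2_selector = SelectKBest(chi2, k=20)
chi2_selector.fit(X_minmax, Y)
# Names of the features kept by the chi-squared ranking
print(X.columns[chi2_selector.get_support()].tolist())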
Correlation with the target variable
Let's start by determining which features are strongly correlated with the diagnosis (i.e. the target variable). Since we have numeric features and our target, although categorical, is 0/1 coded, we can use Pearson correlation to compute the scores for each feature. This is also categorized as supervised feature selection because we're taking into account the relationship of each feature with the target variable. Moreover, since only one variable's relationship to the target is taken at a time, this falls under univariate feature selection.
cor_target = abs(cor["diagnosis_int"])
# Select highly correlated features (threshold = 0.2)
relevant_features = cor_target[cor_target>0.2]
# Collect the names of the features
names = [index for index, value in relevant_features.items()]
# Drop the target variable from the results
names.remove('diagnosis_int')
# Display the results
print(names)
Now try training the model again but only with the features in the columns you just gathered. You can observe that there is an improvement in the metrics compared to the model you trained earlier.
strong_features_eval_df = evaluate_model_on_features(df[names], Y)
strong_features_eval_df.index = ['Strong features']
# Append to results and display
results = pd.concat([results, strong_features_eval_df])
results.head()
Correlation with other features
You will now eliminate features which are highly correlated with each other. This helps remove redundant features thus resulting in a simpler model. Since the scores are calculated regardless of the target variable, this can be categorized under unsupervised feature selection.
For this, you will plot the correlation matrix of the features selected previously. Let's first visualize the correlation matrix again.
plt.figure(figsize=(20,20))
# Calculate the correlation matrix for target relevant features that you previously determined
new_corr = df[names].corr()
# Visualize the correlation matrix
sns.heatmap(new_corr, annot=True, cmap=plt.cm.Blues)
plt.show()
You will see that radius_mean is highly correlated to radius_worst, perimeter_worst, and area_worst. You can retain radius_mean and remove the rest of the features highly correlated to it. perimeter_mean is also strongly correlated with this group (a tumor's perimeter scales directly with its radius), so you will drop it as well.
Moreover, concavity_mean is highly correlated to concave points_mean. You will remove concave points_mean and retain concavity_mean in your set of features.
This is a more magnified view of the features that are highly correlated to each other.
plt.figure(figsize=(12,10))
# Select a subset of features
new_corr = df[['perimeter_mean', 'radius_worst', 'perimeter_worst', 'area_worst', 'concave points_mean', 'radius_mean', 'concavity_mean']].corr()
# Visualize the correlation matrix
sns.heatmap(new_corr, annot=True, cmap=plt.cm.Blues)
plt.show()
You will now evaluate the model on the features selected based on your observations. You can see that the metrics show the same values as when it was using 25 features. This indicates that you can get the same model performance even if you reduce the number of features. In other words, the features you removed were indeed redundant and you only needed the ones you retained.
subset_feature_corr_names = [x for x in names if x not in ['perimeter_mean', 'radius_worst', 'perimeter_worst', 'area_worst', 'concave points_mean']]
# Calculate and check evaluation metrics
subset_feature_eval_df = evaluate_model_on_features(df[subset_feature_corr_names], Y)
subset_feature_eval_df.index = ['Subset features']
# Append to results and display
results = pd.concat([results, subset_feature_eval_df])
results.head(n=10)
Bonus challenge (not required): Look back again at the correlation matrix at the start of this section and see if you can remove other highly correlated features. You can remove at least one more and arrive at the same model performance.
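If you want a programmatic starting point for the bonus challenge, here is a small sketch that scans the remaining features for pairs that are still highly correlated with each other. The 0.9 cutoff is an illustrative assumption, not a value prescribed by this notebook.
# List remaining feature pairs whose absolute correlation exceeds an illustrative 0.9 cutoff.
# Any such pair is a candidate for dropping one of its members.
remaining_corr = df[subset_feature_corr_names].corr().abs()
upper = remaining_corr.where(np.triu(np.ones(remaining_corr.shape, dtype=bool), k=1))
high_pairs = [(row, col, round(upper.loc[row, col], 2))
              for row in upper.index for col in upper.columns
              if pd.notna(upper.loc[row, col]) and upper.loc[row, col] > 0.9]
print(high_pairs)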
Univariate Selection with Scikit-Learn
Scikit-learn offers more filter methods in its feature selection module. It also has convenience classes that control how the features are filtered. You can see the available options in the official docs.
For this exercise, you will compute the ANOVA F-values to select the top 20 features using SelectKBest().
def univariate_selection():
# Split train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2,stratify=Y, random_state = 123)
# All features of dataset are float values. You normalize all features of the train and test dataset here.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Use SelectKBest to select the top 20 features based on the F-test
selector = SelectKBest(f_classif, k=20)
# Fit to scaled data, then transform it
X_new = selector.fit_transform(X_train_scaled, Y_train)
# Print the results
feature_idx = selector.get_support()
for name, included in zip(df.drop("diagnosis_int", axis=1).columns, feature_idx):
print("%s: %s" % (name, included))
# Drop the target variable
feature_names = df.drop("diagnosis_int", axis=1).columns[feature_idx]
return feature_names
You will now evaluate the model on the features selected by univariate selection.
univariate_feature_names = univariate_selection()
univariate_eval_df = evaluate_model_on_features(df[univariate_feature_names], Y)
univariate_eval_df.index = ['F-test']
# Append to results and display
results = pd.concat([results, univariate_eval_df])
results.head(n=10)
You can see that the performance metrics are the same as in the previous section but it uses only 20 features.
Wrapper Methods
Wrapper methods use a model to measure the effectiveness of a particular subset of features. As mentioned in class, one approach is to remove or add features sequentially. You can either start with one feature and gradually add more until no improvement is made (forward selection), or start with all features and remove them one by one (backward selection). This can be done with the SequentialFeatureSelector class, which uses k-fold cross-validation scores to decide which features to add or remove; a brief sketch is shown right after this paragraph. Recursive Feature Elimination is similar to backward selection but uses feature importance scores to prune the number of features. You can also specify how many features to remove at each iteration of the recursion. Let's use RFE as the wrapper for our model below.
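Before moving on to RFE, here is a brief sketch of the sequential approach described above. It assumes a scikit-learn version that includes SequentialFeatureSelector (0.24 or later); the choice of 5 features, forward direction, and 3-fold cross-validation is illustrative, and it can take a while to run because the model is refit many times.
from sklearn.feature_selection import SequentialFeatureSelector
# Split and scale the data the same way as the helper functions above
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, stratify=Y, random_state=123)
X_train_scaled = StandardScaler().fit(X_train).transform(X_train)
# Forward selection: start from zero features and greedily add the one that
# improves the cross-validated score the most, stopping at 5 features
sfs = SequentialFeatureSelector(
    RandomForestClassifier(criterion='entropy', random_state=47),
    n_features_to_select=5,
    direction='forward',
    cv=3)
sfs.fit(X_train_scaled, Y_train)
print(X.columns[sfs.get_support()].tolist())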
Recursive Feature Elimination
You will keep the RandomForestClassifier as the model whose features are being selected. Recursive Feature Elimination wraps around this model to perform the selection. This time, you will repeat the task of selecting the top 20 features, but using RFE instead of SelectKBest.
def run_rfe():
# Split train and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2,stratify=Y, random_state = 123)
# All features of dataset are float values. You normalize all features of the train and test dataset here.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Define the model
model = RandomForestClassifier(criterion='entropy', random_state=47)
# Wrap RFE around the model
rfe = RFE(model, n_features_to_select=20)
# Fit RFE
rfe = rfe.fit(X_train_scaled, Y_train)
feature_names = df.drop("diagnosis_int", axis=1).columns[rfe.get_support()]
return feature_names
rfe_feature_names = run_rfe()
You will now evaluate the RandomForestClassifier on the features selected by RFE. You will see that there is a slight performance drop compared to the previous approaches.
rfe_eval_df = evaluate_model_on_features(df[rfe_feature_names], Y)
rfe_eval_df.index = ['RFE']
# Append to results and display
results = pd.concat([results, rfe_eval_df])
results.head(n=10)
Feature Importances
Feature importance is built into scikit-learn's tree-based models like RandomForestClassifier. Once the model is fit, the importance scores are available through the feature_importances_ attribute.
You can use SelectFromModel to select features from the trained model based on a given threshold.
def feature_importances_from_tree_based_model_():
# Split train and test set
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2,stratify=Y, random_state = 123)
# All features of dataset are float values. You normalize all features of the train and test dataset here.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Define and train the model
model = RandomForestClassifier()
model = model.fit(X_train_scaled,Y_train)
# Plot feature importance
plt.figure(figsize=(10, 12))
feat_importances = pd.Series(model.feature_importances_, index=X.columns)
feat_importances.sort_values(ascending=False).plot(kind='barh')
plt.show()
return model
def select_features_from_model(model):
selector = SelectFromModel(model, prefit=True, threshold=0.013)
feature_idx = selector.get_support()
feature_names = df.drop("diagnosis_int", axis=1).columns[feature_idx]
return feature_names
model = feature_importances_from_tree_based_model_()
feature_imp_feature_names = select_features_from_model(model)
feat_imp_eval_df = evaluate_model_on_features(df[feature_imp_feature_names], Y)
feat_imp_eval_df.index = ['Feature Importance']
# Append to results and display
results = pd.concat([results, feat_imp_eval_df])
results.head(n=10)
L1 Regularization
L1 or Lasso regularization adds a penalty term to the loss function that drives the weights of the least important features toward zero, effectively eliminating them. In scikit-learn, you can implement this with a LinearSVC model trained with an L1 penalty as the learning algorithm. You can then use SelectFromModel to select features based on the coefficients learned by the L1-regularized LinearSVC.
def run_l1_regularization():
# Split train and test set
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2,stratify=Y, random_state = 123)
# All features of dataset are float values. You normalize all features of the train and test dataset here.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Select features using an L1-regularized LinearSVC
selection = SelectFromModel(LinearSVC(C=1, penalty='l1', dual=False))
selection.fit(X_train_scaled, Y_train)
feature_names = df.drop("diagnosis_int", axis=1).columns[(selection.get_support())]
return feature_names
l1reg_feature_names = run_l1_regularization()
l1reg_eval_df = evaluate_model_on_features(df[l1reg_feature_names], Y)
l1reg_eval_df.index = ['L1 Reg']
# Append to results and display
results = pd.concat([results, l1reg_eval_df])
results.head(n=10)
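One thing worth noting about the L1 approach: the number of features it keeps depends on the regularization strength C. Below is a quick sketch, with illustrative C values only, showing that a smaller C (stronger regularization) selects fewer features.
# Illustrative sweep: smaller C means a stronger L1 penalty and fewer surviving features
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, stratify=Y, random_state=123)
X_train_scaled = StandardScaler().fit(X_train).transform(X_train)
for c in [0.01, 0.1, 1]:
    selection = SelectFromModel(LinearSVC(C=c, penalty='l1', dual=False))
    selection.fit(X_train_scaled, Y_train)
    print(f"C={c}: {selection.get_support().sum()} features selected")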
With these results and your domain knowledge, you can decide which set of features to use to train on the entire dataset. If you base the decision on the F1 score, you may narrow it down to the Strong features, Subset features, and F-test rows because they have the highest scores. If you want to save resources, F-test will be the most optimal of these three because it uses the fewest features (unless you did the bonus challenge and removed more from Subset features). On the other hand, if you find the scores of all approaches acceptable, then you may simply go for the method with the smallest set of features.
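If you prefer to compare the rows programmatically rather than by eye, a small sketch like the one below sorts the collected results by F1 score first and then by feature count, so the smallest feature set among the top scorers appears first.
# Sort by F1 Score (descending) and break ties with the smaller Feature Count
results.sort_values(by=['F1 Score', 'Feature Count'], ascending=[False, True])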
Wrap Up
That's it for this quick rundown of the different feature selection methods. As shown, you can experiment with these quickly because convenience modules are already available in libraries like scikit-learn. It is a good idea to do this preprocessing step because not only will you save resources, you may even get better results than when you use all features. Try it out on your previous or upcoming projects and see what results you get!