Machine Learning on a Cancer Dataset - Part 18
In this third and final video on Random Forests, we look at the importance each feature of a cancer sample plays in the decision-making process.
Recall from previous videos that we did something similar for Decision Trees. There, the feature importances were heavily skewed: 2-3 of the 30 features carried most of the weight in the decision making, while the importances of the remaining features were close to zero. This may not be an accurate model of cancer, because it is unlikely that a single feature such as 'worst radius' plays such a heavy role in predicting whether a tumor is malignant or benign.
When we compute the feature importances for the Random Forest classifier, they look much more balanced than those of the Decision Tree. Please see the full details and explanation in the video below.
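If you want to reproduce the comparison yourself, here is a minimal sketch using scikit-learn's `load_breast_cancer`, `DecisionTreeClassifier`, and `RandomForestClassifier`. The split and hyperparameters (`random_state=0`, `n_estimators=100`) are assumptions for illustration, not necessarily the exact settings used in the video.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=0)  # assumed split settings

# Single decision tree: importances tend to concentrate in a few features.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Random forest: importances are typically spread more evenly, because each
# tree is grown on a bootstrap sample and considers a random subset of features.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

for name, model in [("Decision Tree", tree), ("Random Forest", forest)]:
    print(f"\n{name} - top 5 features by importance:")
    ranked = sorted(zip(cancer.feature_names, model.feature_importances_),
                    key=lambda pair: pair[1], reverse=True)
    for feature, importance in ranked[:5]:
        print(f"  {feature}: {importance:.3f}")
```

Printing only the top five importances for each model is enough to see the pattern described above: the tree's importance mass sits on a couple of features, while the forest distributes it across many more.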
As a reminder:
In this series I'm exploring the cancer dataset that comes pre-loaded with scikit-learn. The purpose is to train classifiers on this labeled dataset, 569 tumor samples each labeled malignant or benign, and then use them on new, unlabeled data.
Previous videos in this series:
- Machine Learning on a Cancer Dataset - Part 11
- Machine Learning on a Cancer Dataset - Part 12
- Machine Learning on a Cancer Dataset - Part 13
- Machine Learning on a Cancer Dataset - Part 14
- Machine Learning on a Cancer Dataset - Part 15
- Machine Learning on a Cancer Dataset - Part 16
- Machine Learning on a Cancer Dataset - Part 17