With the increase in data complexity, and to meet growing accuracy requirements, people started preferring ensemble classifiers. However, selecting an ensemble classifier is not easy. There are many ensemble strategies, such as: (1) Model Averaging, (2) Weighted Model Averaging, (3) Majority Voting, (4) Bagging, (5) Boosting, (6) Stacking, and (7) Blending, among many others.
Ensemble strategies like (1) Model Averaging, (2) Weighted Model Averaging, and (3) Majority Voting are widely used both independently and as key components of other ensemble strategies. Based on their usage and high success rates, I have identified three ensemble strategies that have emerged as winners in machine-learning-based ensemble classification: (1) Stacking, (2) Bagging, and (3) Boosting. In this article, I try to explain these three techniques in the simplest possible way.
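Before looking at the three winners, here is a minimal pure-Python sketch of one of the building blocks mentioned above, Majority Voting. The function name and the toy model predictions are illustrative, not from any particular library:

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine per-sample predictions from several models by
    picking the most common label for each sample."""
    combined = []
    for votes in zip(*predictions_per_model):
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Three hypothetical models' predictions on five samples:
model_a = [1, 0, 1, 1, 0]
model_b = [1, 1, 1, 0, 0]
model_c = [0, 0, 1, 1, 1]
print(majority_vote([model_a, model_b, model_c]))  # [1, 0, 1, 1, 0]
```

With an odd number of models and binary labels, there are no ties; Weighted Model Averaging would simply multiply each model's vote by a weight before combining.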
Stacking (Stacked Generalization)
The term stacking is common in deep learning, but in machine learning a two-layered form of stacking is the well-known one. At the first layer, we use multiple classifiers to learn different aspects of the data, selecting them so that their predictions and errors are as uncorrelated as possible. At the second layer, a meta-learner learns from the prediction results of the first-layer classifiers. In this process, we do not feed the raw training data to the meta-learner directly; instead, we use a K-fold cross-validation style strategy: the first-layer classifiers are trained on K-1 folds and generate predictions on the held-out Kth fold. Common first-layer classifiers include (1) Decision Trees, (2) SVMs, (3) Neural Networks, (4) Random Forests, (5) Logistic Regression, and (6) Bayesian classifiers. The meta-learner at the second layer is usually a lightweight classifier, such as Logistic Regression, although some people suggest using heavyweight classifiers as well. A good selection of first-layer classifiers generally gives better accuracy than any of the individual classifiers alone.
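The key mechanical step above is generating out-of-fold predictions: every training sample gets a first-layer prediction from a model that never saw it, and those predictions become the meta-learner's features. A minimal sketch of that step, assuming a toy 1-nearest-neighbour base learner on 1-D data (all names here are illustrative):

```python
import random

def out_of_fold_predictions(X, y, train_and_predict, k=5, seed=0):
    """For each of the k folds, fit the base learner on the other
    k-1 folds and predict the held-out fold, so every training
    sample is predicted by a model that never saw it.
    train_and_predict(X_tr, y_tr, X_te) is any base-learner callable."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    oof = [None] * len(X)
    for fold in folds:
        train_idx = [i for i in idx if i not in fold]
        preds = train_and_predict([X[i] for i in train_idx],
                                  [y[i] for i in train_idx],
                                  [X[i] for i in fold])
        for i, p in zip(fold, preds):
            oof[i] = p
    return oof

def one_nn(X_tr, y_tr, X_te):
    """Toy base learner: 1-nearest neighbour on 1-D inputs."""
    return [y_tr[min(range(len(X_tr)), key=lambda j: abs(X_tr[j] - x))]
            for x in X_te]

X = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1, 0.15, 0.95]
y = [0, 0, 0, 1, 1, 1, 0, 1]
oof = out_of_fold_predictions(X, y, one_nn, k=4)
print(oof)
```

In a full stacking setup, you would run this once per first-layer classifier, stack the resulting columns side by side, and fit the meta-learner (e.g. Logistic Regression) on that matrix.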
Bagging (Bootstrap Aggregating)
Bootstrap aggregating, usually shortened to bagging, is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression. It also reduces variance and helps to avoid overfitting [source]. Random Forest is a well-known example.
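The mechanics are simple: resample the training set with replacement many times, fit the same base learner on each resample, and aggregate the predictions (majority vote for classification, averaging for regression). A minimal pure-Python sketch, again using a toy 1-nearest-neighbour base learner for illustration:

```python
import random
from collections import Counter

def one_nn(X_tr, y_tr, X_te):
    """Toy base learner: 1-nearest neighbour on 1-D inputs."""
    return [y_tr[min(range(len(X_tr)), key=lambda j: abs(X_tr[j] - x))]
            for x in X_te]

def bagging_predict(X_train, y_train, X_test, train_and_predict,
                    n_estimators=25, seed=0):
    """Fit the same base learner on many bootstrap resamples
    (sampling with replacement) and combine by majority vote."""
    rng = random.Random(seed)
    n = len(X_train)
    all_preds = []
    for _ in range(n_estimators):
        sample = [rng.randrange(n) for _ in range(n)]  # bootstrap sample
        all_preds.append(train_and_predict([X_train[i] for i in sample],
                                           [y_train[i] for i in sample],
                                           X_test))
    # Majority vote across the ensemble for each test point.
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*all_preds)]

X_train = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1]
y_train = [0, 0, 0, 1, 1, 1]
preds = bagging_predict(X_train, y_train, [0.15, 1.05], one_nn)
print(preds)  # two well-separated clusters, so the vote recovers [0, 1]
```

Random Forest is essentially this recipe with decision trees as the base learner, plus random feature subsetting at each split.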
Boosting
A boosting algorithm combines multiple simple models (also known as weak learners or base estimators) to generate the final output. The different boosting variants differ in how they build the weak learners and in how they combine them. The variants of (a) Gradient Boosting and (b) XGBoost are especially popular in this area.
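Gradient Boosting and XGBoost are fairly involved, so as a compact illustration of the core boosting idea (train weak learners sequentially, re-weighting the data so each new learner focuses on the previous learners' mistakes), here is a toy AdaBoost with decision stumps on 1-D data. All function names and the toy dataset are my own, not from any library:

```python
import math

def train_stump(X, y, w):
    """Pick the weighted-error-minimising threshold classifier
    h(x) = sign if x >= thr else -sign, on 1-D data."""
    best = None
    for thr in sorted(set(X)):
        for sign in (1, -1):
            err = sum(wi for wi, x, yi in zip(w, X, y)
                      if (sign if x >= thr else -sign) != yi)
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best

def adaboost(X, y, rounds=3):
    """AdaBoost: each round fits a weak learner on re-weighted data,
    then up-weights the samples it got wrong so the next learner
    concentrates on them. Labels must be +1 / -1."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, thr, sign = train_stump(X, y, w)
        err = max(err, 1e-10)                    # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)  # this learner's vote weight
        ensemble.append((alpha, thr, sign))
        # Mistakes get heavier, correct samples lighter; then renormalise.
        w = [wi * math.exp(-alpha * yi * (sign if x >= thr else -sign))
             for wi, x, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (s if x >= t else -s) for a, t, s in ensemble)
    return 1 if score >= 0 else -1

# No single stump can separate this "interval" pattern,
# but three boosted stumps together can.
X = [1, 2, 3, 4, 5, 6]
y = [-1, -1, 1, 1, -1, -1]
ensemble = adaboost(X, y, rounds=3)
print([predict(ensemble, x) for x in X])  # [-1, -1, 1, 1, -1, -1]
```

Gradient Boosting follows the same sequential pattern but, instead of re-weighting samples, fits each new tree to the gradient of the loss; XGBoost adds regularization and second-order information on top of that.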
Shared from my LinkedIn article: https://www.linkedin.com/pulse/winning-ensemble-classification-strategies-niraj-kumar-ph-d-/