Ensemble Machine Learning
- Learn how to get the most out of machine learning algorithms such as random forests, decision trees, AdaBoost, and k-nearest neighbors
- A practical approach explaining how the most powerful machine learning models are built
- A comprehensive guide covering the key aspects of ensembling techniques
Ensembling is a technique for combining two or more machine learning algorithms, similar or dissimilar, to create a model with superior predictive power. This book helps readers understand how multiple algorithms can be combined into a strong predictive model. It includes Python code for the different algorithms so that readers can easily understand and implement them on their own systems.
This book covers machine learning algorithms that are widely used in practice for prediction and classification. Readers will gain knowledge of the main ensembling approaches in one book, such as bagging (decision trees and random forests), boosting (AdaBoost and others), and stacking (a combination of bagging and boosting algorithms, among others), and then learn how to implement them in building ensemble models. As machine learning touches almost every field of the digital world, readers will also see how these algorithms can be applied to computer vision, speech recognition, recommendation systems, grouping and document classification, and fitting regression models to data.
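To make the three approaches named above concrete, here is a minimal sketch (not taken from the book) using scikit-learn's standard estimators on a synthetic dataset: bagging and random forests train trees on resampled data, AdaBoost trains trees sequentially, and stacking feeds both into a meta-learner.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    BaggingClassifier, RandomForestClassifier,
    AdaBoostClassifier, StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset standing in for a real prediction problem
X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagging: many trees fit on bootstrap samples (Random Forest is a refinement)
bagging = BaggingClassifier(n_estimators=50, random_state=0)
forest = RandomForestClassifier(n_estimators=50, random_state=0)

# Boosting: trees trained sequentially, each focusing on earlier mistakes
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

# Stacking: a meta-learner combines the bagged and boosted models
stacked = StackingClassifier(
    estimators=[("forest", forest), ("boost", boosting)],
    final_estimator=LogisticRegression(),
)

for name, model in [("bagging", bagging), ("boosting", boosting),
                    ("stacking", stacked)]:
    model.fit(X_tr, y_tr)
    print(name, round(model.score(X_te, y_te), 3))
```

The estimator names and parameters here are illustrative defaults; the book's own examples may use different base learners and settings.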
By the end of this book, you will understand how machine learning algorithms work behind the scenes, how they can be combined to mitigate common problems, and how to build simple, efficient machine learning models for the real-world use cases covered in the book.
What you will learn
- Understand why bagging improves classification and regression performance
- Understand and implement AdaBoost
- Understand the bootstrap method and its application to bagging
- Understand and implement Random Forest
- Understand and implement stacking (a combination of bagging, boosting, and other algorithms)
- Handle skewed data sets for maximum prediction accuracy
- Improve prediction accuracy by fine-tuning model parameters
- Analyze trained predictive models for overfitting and underfitting
- Use the developed algorithms in practical applications
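The bootstrap method listed above, the idea underlying bagging, can be sketched by hand (this is an illustrative assumption, not the book's own code): resample the training set with replacement, fit one tree per resample, and combine predictions by majority vote.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, random_state=0)

trees = []
for _ in range(25):
    # Bootstrap sample: draw n indices with replacement
    idx = rng.integers(0, len(X), size=len(X))
    trees.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))

# Aggregate: majority vote across the 25 trees
votes = np.stack([t.predict(X) for t in trees])
y_pred = (votes.mean(axis=0) > 0.5).astype(int)
print("training accuracy:", (y_pred == y).mean())
```

Averaging over bootstrap replicas reduces the variance of individual deep trees, which is why bagging improves classification and regression performance.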