ML Introduction Part 2/2: Applied

In this workshop we will take you through one of the most powerful families of ML algorithms: Decision Trees. We will follow that up with a form of bagging called Random Forests, and then a form of boosting called XGBoost. Finally, we'll show you how to avoid overfitting through cross-validation, hyperparameter tuning, and feature selection. All of this will be done in Python with sklearn. Looking forward to seeing you there!
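To give a flavor of the material, here is a minimal sketch of that workflow in Python with sklearn. The dataset (sklearn's built-in breast cancer set), the hyperparameter values, and the search grid are illustrative assumptions, not the workshop's actual materials, and the xgboost package is assumed to be installed alongside sklearn.

```python
# A minimal sketch: train/test split, a decision tree, a random forest
# (bagging), XGBoost (boosting), and cross-validated hyperparameter tuning.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import (
    train_test_split, cross_val_score, GridSearchCV,
)
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier  # assumes the xgboost package is installed

# Illustrative dataset; the workshop may use different data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Decision tree: a single, interpretable model.
tree = DecisionTreeClassifier(max_depth=4, random_state=42)
tree.fit(X_train, y_train)
print("Decision tree test accuracy:", tree.score(X_test, y_test))

# Random forest: bagging many decorrelated trees and averaging their votes.
forest = RandomForestClassifier(n_estimators=200, random_state=42)
forest.fit(X_train, y_train)
print("Random forest test accuracy:", forest.score(X_test, y_test))

# XGBoost: boosting, where each new tree corrects its predecessors' errors.
booster = XGBClassifier(n_estimators=200, eval_metric="logloss")
booster.fit(X_train, y_train)
print("XGBoost test accuracy:", booster.score(X_test, y_test))

# Guard against overfitting: 5-fold cross-validation on the training set...
scores = cross_val_score(forest, X_train, y_train, cv=5)
print("Random forest CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# ...and hyperparameter tuning via grid search (grid values are illustrative).
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 200], "max_depth": [4, 8, None]},
    cv=5,
)
grid.fit(X_train, y_train)
print("Best params:", grid.best_params_)

# Feature selection hint: tree ensembles expose per-feature importances,
# which can be used to drop uninformative columns.
print("Max feature importance:", forest.feature_importances_.max())
```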


  • tags: applied, random forests, decision trees, xgboost, boosting, hyperparameter tuning, sklearn, evaluation, feature selection, train test split, machine learning
When:

Sunday Oct 18th, 2:30am to 3:45am (GMT)

Presented By:

Allyson King

About the Speaker:

Allyson is the VP of TAMU Datathon, holds a BS in Statistics and (almost) one in Computer Science, and has worked at AT&T and TTI.


Join the Discussion:

Join our Discord community and talk with TD organizers and like-minded DS/ML enthusiasts!
