Data Smart

By: John W. Foreman

Overview of this book

Data Science gets thrown around in the press like it's magic. Major retailers are predicting everything from when their customers are pregnant to when they want a new pair of Chuck Taylors. It's a brave new world where seemingly meaningless data can be transformed into valuable insight to drive smart business decisions. But how exactly does one do data science? Do you have to hire one of these priests of the dark arts, the "data scientist," to extract this gold from your data? Nope. Data science is little more than using straightforward steps to process raw data into actionable insight. And in Data Smart, author and data scientist John Foreman will show you how that's done within the familiar environment of a spreadsheet. Why a spreadsheet? It's comfortable! You get to look at the data every step of the way, building confidence as you learn the tricks of the trade. Plus, spreadsheets are a vendor-neutral place to learn data science without the hype. But don't let the Excel sheets fool you. This is a book for those serious about learning the analytic techniques, the math and the magic, behind big data.

Boosting: If You Get It Wrong, Just Boost and Try Again

What was the reason behind doing bagging, again?

If you trained up a bunch of decision stumps on the whole dataset over and over again, they'd be identical. By taking random selections of the dataset, you introduce some variety to your stumps and end up capturing nuances in the training data that a single stump never could.
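
To make that recap concrete, here is a minimal bagging sketch in Python using scikit-learn rather than a spreadsheet; the dataset and parameter values below are made up purely for illustration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # 1,000 rows of stand-in features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a simple target to learn

bagger = BaggingClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # one-split stumps
    # (older scikit-learn versions call this parameter base_estimator)
    n_estimators=200,   # lots of stumps, each on its own random sample
    max_samples=0.5,    # each stump sees a random half of the rows...
    bootstrap=True,     # ...drawn with replacement
).fit(X, y)
print(bagger.score(X, y))

Because every stump sees a different random slice of the rows, no two stumps need to pick the same split, and the ensemble's vote captures nuances a single stump can't.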

Well, what bagging does with random selections, boosting does with weights. Boosting doesn't take random portions of the dataset; it uses the whole dataset on each training iteration. Instead of randomness, each iteration focuses on training a decision stump that resolves some of the sins committed by the previous decision stumps. It works like this (a rough code sketch follows the list):

  • At first, each row of training data counts exactly the same. They all have the same weight. In your case, you have 1000 rows of training data, so they all start with a weight of 0.001. This means the weights sum up to 1.
  • Evaluate each feature on the entire...
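
The loop this list describes is essentially AdaBoost. As a rough sketch of it in Python, assuming NumPy plus scikit-learn stumps in place of a spreadsheet (the helper names boost_stumps and boosted_predict are made up for illustration, and the labels are coded as -1/+1):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost_stumps(X, y, rounds=50):
    n = len(y)
    w = np.full(n, 1.0 / n)  # 1,000 rows -> every row starts at 0.001
    stumps, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)       # the weights steer the split
        pred = stump.predict(X)
        err = w[pred != y].sum()               # weighted error rate
        err = np.clip(err, 1e-10, 1 - 1e-10)   # guard the log below
        if err >= 0.5:                         # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / err)  # this stump's say in the vote
        w *= np.exp(-alpha * y * pred)         # up-weight the mistakes...
        w /= w.sum()                           # ...then make weights sum to 1
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def boosted_predict(stumps, alphas, X):
    # Weighted vote of all the stumps; the sign gives the -1/+1 class.
    votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(votes)

Coding the labels as -1/+1 is what makes the single exponential work: y times pred is +1 on rows the stump got right (shrinking their weight) and -1 on rows it got wrong (growing their weight), so the next stump is pushed toward exactly the rows the previous stumps fumbled.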