# Summary

Congratulations on making it to the end of the book! This has been an extraordinary journey that began with basic machine learning and `pandas`, and ended with building your own customized transformers, pipelines, and functions to deploy robust, fine-tuned XGBoost models in industry scenarios, using sparse matrices to make predictions on new data.

Along the way, you have learned the story of XGBoost, from the first decision trees through random forests and gradient boosting, before discovering the mathematical details and sophistication that have made XGBoost so special. You saw time and time again that XGBoost outperforms other machine learning algorithms, and you gained essential practice in tuning XGBoost's wide-ranging hyperparameters, including `n_estimators`, `max_depth`, `gamma`, `colsample_bylevel`, `missing`, and `scale_pos_weight`.

You learned how physicists and astronomers gained knowledge about our universe through historically important case studies, and you learned about...