Python for Finance Cookbook

By: Eryk Lewinson

Overview of this book

Python is one of the most popular programming languages used in the financial industry, with a huge set of accompanying libraries. In this book, you'll cover different ways of downloading financial data and preparing it for modeling. You'll calculate popular indicators used in technical analysis, such as Bollinger Bands, MACD, and RSI, and backtest automated trading strategies. Next, you'll cover time series analysis and models, such as exponential smoothing, ARIMA, and GARCH (including multivariate specifications), before exploring the popular CAPM and the Fama-French three-factor model. You'll then discover how to optimize asset allocation and use Monte Carlo simulations for tasks such as calculating the price of American options and estimating the Value at Risk (VaR). In later chapters, you'll work through an entire data science project in the financial domain. You'll also learn how to solve the credit card fraud and default problems using advanced classifiers such as random forest, XGBoost, LightGBM, and stacked models. You'll then be able to tune the hyperparameters of the models and handle class imbalance. Finally, you'll focus on learning how to use deep learning (PyTorch) for approaching financial tasks. By the end of this book, you'll have learned how to effectively analyze financial data using a recipe-based approach.
Table of Contents (12 chapters)

Bayesian hyperparameter optimization

In the Tuning hyperparameters using grid search and cross-validation recipe in Chapter 8, Identifying Credit Default with Machine Learning, we described how to use grid search and randomized search to find the (possibly) best set of hyperparameters for our model. In this recipe, we introduce another approach to finding the optimal set of hyperparameters, this time based on Bayesian methodology. The main motivation for the Bayesian approach is that both grid search and randomized search make uninformed choices, either by exhaustively evaluating all combinations or by drawing a random sample. As a result, they spend much of their time evaluating poor (far-from-optimal) combinations, which is essentially wasted effort. The Bayesian approach, by contrast, makes an informed choice of the next set of hyperparameters to evaluate, thereby reducing the time spent...
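The informed-choice idea can be sketched in a few dozen lines: fit a probabilistic surrogate model (here, a Gaussian process) to the hyperparameter-score pairs observed so far, then pick the next candidate by maximizing an acquisition function such as expected improvement. The snippet below is a minimal, self-contained illustration using only NumPy and SciPy on a toy one-dimensional objective; the objective function, candidate grid, kernel length scale, and iteration count are all illustrative assumptions for this sketch, not the book's actual recipe, which uses a dedicated optimization library:

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length_scale=2.0):
    # Squared-exponential (RBF) kernel between two 1-D point arrays.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_cand, jitter=1e-6):
    # Gaussian-process posterior mean/std at candidates (unit prior variance).
    K = rbf_kernel(x_train, x_train) + jitter * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_cand)           # shape (n_train, n_cand)
    mu = K_s.T @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, K_s)
    var = 1.0 - np.sum(K_s * v, axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best_y):
    # EI for minimization: expected gain over the best value observed so far.
    with np.errstate(divide="ignore", invalid="ignore"):
        imp = best_y - mu
        z = imp / sigma
        ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 1e-6, ei, 0.0)

def objective(x):
    # Stand-in for "train the model and return the validation loss";
    # the true minimum is at x = 2.
    return (x - 2.0) ** 2

# Candidate "hyperparameter" grid and a few initial uninformed evaluations.
x_cand = np.linspace(-5, 5, 201)
x_obs = np.array([-4.0, 0.0, 4.0])
y_obs = objective(x_obs)

for _ in range(20):
    # Standardize losses so the unit-variance GP prior is reasonable.
    y_std = (y_obs - y_obs.mean()) / (y_obs.std() + 1e-12)
    mu, sigma = gp_posterior(x_obs, y_std, x_cand)
    ei = expected_improvement(mu, sigma, y_std.min())
    x_next = x_cand[np.argmax(ei)]              # informed choice of next point
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best_x = x_obs[np.argmin(y_obs)]
best_val = y_obs.min()
print(best_x, best_val)
```

Note how each iteration uses everything learned so far: points near previously good results and points with high posterior uncertainty both score well under expected improvement, so the search balances exploitation and exploration instead of sampling blindly.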