Python for Finance Cookbook

By: Eryk Lewinson

Overview of this book

Python is one of the most popular programming languages used in the financial industry, with a huge set of accompanying libraries. In this book, you'll cover different ways of downloading financial data and preparing it for modeling. You'll calculate popular indicators used in technical analysis, such as Bollinger Bands, MACD, and RSI, and backtest automatic trading strategies. Next, you'll cover time series analysis and models, such as exponential smoothing, ARIMA, and GARCH (including multivariate specifications), before exploring the popular CAPM and the Fama-French three-factor model. You'll then discover how to optimize asset allocation and use Monte Carlo simulations for tasks such as calculating the price of American options and estimating the Value at Risk (VaR). In later chapters, you'll work through an entire data science project in the financial domain. You'll also learn how to solve the credit card fraud and default problems using advanced classifiers such as random forest, XGBoost, LightGBM, and stacked models. You'll then be able to tune the hyperparameters of the models and handle class imbalance. Finally, you'll focus on learning how to use deep learning (PyTorch) to approach financial tasks. By the end of this book, you'll have learned how to effectively analyze financial data using a recipe-based approach.

Using stacking for improved performance

In the previous recipe, Investigating advanced classifiers, we introduced a few examples of ensemble models. Each of those models combined multiple decision trees (each in a slightly different way) to build a better model, with the goal of reducing the overall bias and/or variance. Similarly, stacking is a technique that combines multiple estimators. It is a very powerful and popular technique, used in many competitions.

We provide a high-level overview of stacking's characteristics:

  • The models used as base learners do not need to be homogeneous—we can use a combination of different estimators. For example, we can use a decision tree, a k-nearest neighbors classifier, and logistic regression.
  • Stacking uses a meta learner (model) to combine the predictions of the base learners and create the final prediction.
  • Stacking can be extended to multiple levels...
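The points above can be sketched with scikit-learn's `StackingClassifier`. This is a minimal, illustrative setup, not the book's exact recipe: the dataset is synthetic, and the particular base learners and hyperparameters are assumptions chosen to mirror the heterogeneous example mentioned earlier (a decision tree, k-nearest neighbors, and logistic regression), with logistic regression doubling as the meta learner.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data for illustration only
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Heterogeneous base learners: a decision tree, k-NN, and logistic regression.
# Scale-sensitive estimators are wrapped in a pipeline with a scaler.
base_learners = [
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=42)),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ("logit", make_pipeline(StandardScaler(), LogisticRegression())),
]

# The meta learner (final_estimator) combines the base learners'
# cross-validated predictions into the final prediction
stack_clf = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    cv=5,
)
stack_clf.fit(X_train, y_train)
print(f"Test accuracy: {stack_clf.score(X_test, y_test):.3f}")
```

Note that `StackingClassifier` trains the base learners on the full training set but feeds the meta learner out-of-fold predictions (controlled by `cv`), which helps prevent the meta learner from simply memorizing the base learners' in-sample output.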