Learning Quantitative Finance with R

By: Dr. Param Jeet, Prashant Vats

Overview of this book

The role of a quantitative analyst is challenging yet lucrative, so competition for these positions at top-tier organizations and investment banks is intense. This book is your go-to resource if you want to equip yourself with the skills required to tackle real-world problems in quantitative finance using the popular R programming language. You'll start with the basics of R and its relevance to quantitative finance. With that foundation in place, we'll move on to the practicalities of building financial models in R, pairing each topic with use cases and easy-to-follow examples so you understand both the concepts and their implementation. We'll also look at risk management and optimization techniques for algorithmic trading. Finally, the book covers advanced concepts such as trading with machine learning, optimization, exotic options, and hedging. By the end of this book, you will have a firm grasp of the techniques required to implement basic quantitative finance models in R.
Table of Contents (16 chapters)

Stepwise variable selection

We can perform stepwise variable selection (forward, backward, or both directions) in predictive models using the stepAIC() function from the MASS package for feature selection.

This can be done by executing the following code:

> library(MASS)
> MultipleR.lm <- lm(StockYPrice ~ StockX1Price + StockX2Price +
+                      StockX3Price + StockX4Price, data = DataMR)
> step <- stepAIC(MultipleR.lm, direction = "both")
> step$anova

Here, we use the dataset from the multiple regression example as the input. One can also perform all-subsets regression using the leaps() function from the leaps package.
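As a minimal sketch of all-subsets regression with leaps(): the dataset DataMR is not reproduced in this section, so the example below simulates a stand-in with the same column names and then ranks every predictor subset by adjusted R-squared.

```r
# All-subsets regression sketch; DataMR is simulated here as a
# stand-in for the book's multiple regression dataset.
library(leaps)

set.seed(42)
n <- 100
DataMR <- data.frame(
  StockX1Price = rnorm(n), StockX2Price = rnorm(n),
  StockX3Price = rnorm(n), StockX4Price = rnorm(n)
)
DataMR$StockYPrice <- 2 * DataMR$StockX1Price -
  1.5 * DataMR$StockX3Price + rnorm(n)

x <- as.matrix(DataMR[, c("StockX1Price", "StockX2Price",
                          "StockX3Price", "StockX4Price")])
y <- DataMR$StockYPrice

# leaps() evaluates every subset of predictors; here they are
# scored by adjusted R^2 (method can also be "Cp" or "r2")
all.subsets <- leaps(x, y, method = "adjr2")
best <- which.max(all.subsets$adjr2)
colnames(x)[all.subsets$which[best, ]]  # predictors in the best subset
```

Unlike stepAIC(), which walks a single path through the model space, this exhaustively scores every subset, which is feasible here because there are only four candidate predictors.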

Variable selection by classification

We can use classification techniques, such as decision trees or random forests, to identify the most significant predictors. Here we use a random forest (the code is given) to find the most relevant features. All four attributes in the dataset DataForMultipleRegression1 are selected in the following example, and the plot shows the accuracy for different subset sizes...
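One common way to produce an accuracy-per-subset-size plot like the one described is recursive feature elimination with a random forest via the caret package's rfe(). This is a hedged sketch, not necessarily the book's exact code: DataForMultipleRegression1 is not shown here, so the example simulates a classification stand-in with four predictors.

```r
# Sketch: rank features with a random forest and score accuracy
# for each subset size via cross-validated recursive feature
# elimination. The data below is simulated, standing in for the
# book's DataForMultipleRegression1.
library(caret)
library(randomForest)

set.seed(7)
n <- 200
features <- data.frame(
  StockX1Price = rnorm(n), StockX2Price = rnorm(n),
  StockX3Price = rnorm(n), StockX4Price = rnorm(n)
)
outcome <- factor(ifelse(features$StockX1Price +
                           features$StockX3Price + rnorm(n) > 0,
                         "Up", "Down"))

# rfFuncs wires random forest importance into the elimination loop
ctrl <- rfeControl(functions = rfFuncs, method = "cv", number = 5)
profile <- rfe(features, outcome, sizes = 1:4, rfeControl = ctrl)

predictors(profile)                # features retained in the best subset
plot(profile, type = c("g", "o"))  # accuracy vs. number of predictors
```

With a strong simulated signal in two predictors, rfe() typically keeps a small subset; on the book's dataset, all four attributes being retained simply means no predictor could be dropped without hurting cross-validated accuracy.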