Financial Modeling Using Quantum Computing

By: Anshul Saxena, Javier Mancilla, Iraitz Montalban, Christophe Pere

Overview of this book

Quantum computing has the potential to revolutionize the computing paradigm. By integrating quantum algorithms with artificial intelligence and machine learning, we can harness the power of qubits to deliver comprehensive and optimized solutions for intricate financial problems. This book offers step-by-step guidance on using various quantum algorithm frameworks within a Python environment, enabling you to tackle business challenges in finance. By contrasting solutions from well-known Python libraries with their quantum counterparts, you'll discover the advantages of the quantum approach. Focusing on clarity, the authors present complex quantum algorithms in a straightforward yet comprehensive way. Throughout the book, you'll become adept at working with simple programs illustrating quantum computing principles. Gradually, you'll progress to more sophisticated programs and algorithms that harness the full power of quantum computing. By the end of this book, you'll be able to design, implement, and run your own quantum computing programs to turbocharge your financial modeling.
Table of Contents (16 chapters)

Part 1: Basic Applications of Quantum Computing in Finance
Part 2: Advanced Applications of Quantum Computing in Finance
Part 3: Upcoming Quantum Scenario

Quantum Support Vector Machines

The Support Vector Classifier (SVC) and the Quantum Support Vector Classifier (QSVC) are the first models we will apply to our synthetic dataset, letting us compare how a classical algorithm and a quantum algorithm perform at finding potential defaulters. Support Vector Machines (SVMs) (Hearst et al., 1998) are among the most widely used classification techniques; they rely on hyperplanes to carve the data into separable regions. Each hyperplane splits our N-dimensional feature space while maximizing the margin between the samples on either side of it. By softening this margin constraint and allowing some samples to be misclassified, we let the model generalize beyond the training data. This softened version is what we will call an SVC, as sketched below.
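To make the soft-margin idea concrete, here is a minimal sketch (not the book's exact code) that fits scikit-learn's SVC on a small synthetic, two-class dataset standing in for the credit-default features. The dataset shape and parameter values are illustrative assumptions; the key point is that the regularization parameter C controls how strongly margin violations are penalized, so a smaller C yields a softer margin.

```python
# Minimal soft-margin SVC sketch on a synthetic dataset (illustrative values).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic two-class data standing in for the credit-default features.
X, y = make_classification(
    n_samples=200, n_features=4, n_informative=3,
    n_redundant=0, class_sep=1.0, random_state=42,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# C controls the softness of the margin: a lower C tolerates more misclassified
# samples inside the margin, which usually helps generalization on noisy data.
svc = SVC(kernel="linear", C=0.5)
svc.fit(X_train, y_train)
print("SVC test accuracy:", accuracy_score(y_test, svc.predict(X_test)))
```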

Thanks to the level of abstraction that Python libraries such as scikit-learn provide, using an SVC is as simple as calling...
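As a hedged illustration of what the same call pattern can look like on the quantum side, the following sketch uses qiskit-machine-learning's QSVC with a fidelity-based quantum kernel built from a ZZFeatureMap. The class and argument names follow recent qiskit-machine-learning releases and may differ from the version used in the book; treat this as an assumption-laden sketch rather than the book's own code.

```python
# Hedged QSVC sketch (assumes qiskit and qiskit-machine-learning are installed;
# API names follow recent releases and may differ across versions).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC

# Keep the synthetic dataset tiny: kernel estimation on a simulator is costly.
X, y = make_classification(n_samples=40, n_features=2, n_informative=2,
                           n_redundant=0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Encode the classical features into a parameterized quantum circuit.
feature_map = ZZFeatureMap(feature_dimension=2, reps=2)

# The quantum kernel estimates pairwise state fidelities, playing the role
# of the classical kernel matrix inside the SVC optimization.
quantum_kernel = FidelityQuantumKernel(feature_map=feature_map)

# QSVC exposes the familiar scikit-learn fit/predict/score interface.
qsvc = QSVC(quantum_kernel=quantum_kernel)
qsvc.fit(X_train, y_train)
print("QSVC test accuracy:", qsvc.score(X_test, y_test))
```

The design mirrors the classical workflow deliberately: only the kernel changes, so the comparison between the classical and quantum approaches stays apples-to-apples.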