Quantum Computing and Blockchain in Business

By: Arunkumar Krishnakumar

Overview of this book

Are quantum computing and Blockchain on a collision course, or will they be the most important trends of this decade, disrupting industries and life as we know it? Fintech veteran and venture capitalist Arunkumar Krishnakumar cuts through the hype to bring us a first-hand look into how quantum computing and Blockchain together are redefining industries, including fintech, healthcare, and research. Through a series of interviews with domain experts, he also explores these technologies’ potential to transform national and global governance and policies – from how elections are conducted and how smart cities can be designed and optimized for the environment, to what cyberwarfare enabled by quantum cryptography might look like. In doing so, he highlights the challenges that these technologies have to overcome to go mainstream.

Quantum Computing and Blockchain in Business explores the potential changes that quantum computing and Blockchain might bring about in the real world. After expanding on the key concepts and techniques that underpin quantum computing and Blockchain, such as applied cryptography, qubits, and digital annealing, the book dives into how major industries will be impacted by these technologies. Lastly, we consider how the two technologies may come together in a complementary way.

The history of quantum mechanics

In a conversation between an investor and a professor in academia, the investor is often left thinking, "Wow, that is great, but so what?", and the academic is wondering, "Does the investor get it?". The exploration of quantum computing has been one such experience for me, where the nerd in me wanted to delve deep into the physics, math, and the technical aspects of the discipline. However, the investor in me kept on asking, "So what's of value? What's in it for the world? What's in it for businesses?".

As a result of this tug of war, I have come up with a simplified explanation of the quantum principles that lay the foundations of quantum mechanics. For a better understanding of quantum computing, we need to first study the basics of quantum information processing: how (quantum) bits flow, process data, and interact with each other. Therefore, let us begin with the tenets of quantum physics as the basis of quantum information processing.

Quantum physics provides the foundational principles that explain the behavior of particles such as atoms, electrons, photons, and positrons. A microscopic particle is defined as a piece of matter too small to be visible to the naked eye.

In the process of describing the history of quantum mechanics, I will touch upon several of its fundamental concepts. The discovery of these concepts, and the evolution in scientists' understanding of them, has helped shape more modern thinking around quantum computing. The relevance of these concepts to quantum computing will become clear as this chapter unfolds. However, at this stage, the focus is on how this complex field has continued to perplex great minds for almost 100 years.

Quantum mechanics deals with nature at the smallest scales, exploring the interactions between atoms and subatomic particles. Throughout a good part of the 19th century and the early part of the 20th century, scientists were trying to explain the puzzling behavior of particles, matter, light, and color. An electron revolves around the nucleus of an atom, and when it absorbs a photon (a particle of light), it jumps to a different energy level. Ultraviolet rays can provide enough energy to knock electrons out of an atom, producing a positive electrical charge due to the removal of the negatively charged electron. Source: https://www.nobelprize.org/prizes/physics/1905/lenard/facts/

Scientists observed that an electron could only absorb photons of specific frequencies. An electron absorbing a specific type of photon gave rise to the colors associated with heated gases. This behavior was explained in 1913 by the Danish scientist Niels Bohr. Further research in this field led to the emergence of the basic principles of quantum mechanics. Source: https://www.nobelprize.org/prizes/physics/1922/bohr/biographical/

Bohr postulated that electrons were only allowed to revolve in certain orbits, and that the colors they absorbed or emitted depended on the difference in energy between those orbits. For this discovery, he was awarded the Nobel Prize in 1922. More importantly, this helped to cement the idea that the behavior of electrons and atoms was different from that of objects that are visible to the human eye (macroscopic objects). Unlike classical physics, which defined the behavior of macroscopic objects, quantum mechanics involved instantaneous transitions based on probabilistic rules rather than exact mechanistic laws.
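To make this concrete, here is a minimal Python sketch (my own illustration, not something from Bohr's work) that uses the Rydberg formula for hydrogen to compute the wavelength of the light emitted when an electron drops from one orbit to another:

    # Rydberg formula for hydrogen: 1/lambda = R * (1/n_lower^2 - 1/n_upper^2)
    R = 1.0968e7  # Rydberg constant for hydrogen, in m^-1

    def wavelength_nm(n_lower, n_upper):
        # Wavelength (in nm) of the photon emitted when an electron
        # drops from orbit n_upper down to orbit n_lower
        return 1e9 / (R * (1 / n_lower**2 - 1 / n_upper**2))

    print(wavelength_nm(2, 3))  # ~656 nm: the red line of glowing hydrogen

The drop from the third orbit to the second gives roughly 656 nm – the characteristic red glow of heated hydrogen gas, and a direct example of a color fixed by the difference between two orbits.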

This formed the basis of further studies focused on the behavior and interaction of subatomic particles such as electrons. As research identified more differences between classical physics and quantum physics, it was broadly accepted that quantum principles could be used to explain the idiosyncrasies of nature (for example, black holes). Two great minds, Albert Einstein and Stephen Hawking, have contributed to this field through their work on relativity and quantum gravity. Let us now look into how Albert Einstein viewed quantum physics and its concepts. Source: https://www.nobelprize.org/prizes/physics/1921/einstein/facts/

Einstein's quantum troubles

We may have to go back some years in history to understand how Einstein got entangled (pun intended) in the world of quantum mechanics. For a layman, space is just vast emptiness; yet when combined with time, space becomes a four-dimensional puzzle that has proven to be a tremendous challenge to the greatest minds of the 19th and 20th centuries. There were principles of quantum mechanics that Einstein did not agree with, and he was vocal about it.

One of the key principles of quantum mechanics was the Copenhagen interpretation. It holds that the state of a particle is influenced by the very act of observing it; the observer thus influences the state of the particle. Einstein did not agree with this indeterminate aspect of quantum mechanics that Niels Bohr postulated.

In 1927, Einstein began his debates with Bohr at the Solvay Conference in Brussels. He believed in an objective reality that existed independent of observation. As per the principles of quantum theory, the experimenter's choice of measurement affected whether certain parameters had definite values or remained fuzzy. Einstein could not accept that the moon was not there when no one was looking at it, and felt that the principles of quantum theory were incomplete. Source: https://cp3.irmp.ucl.ac.be/~maltoni/PHY1222/mermin_moon.pdf

One interesting aspect of this indeterministic nature of objects is that, as babies, we seem to appreciate these principles better. This is illustrated in the peek-a-boo game that babies often love. A baby believes that the observer exists only while being observed, not yet having developed the cognitive ability called object permanence. However, as we grow older, we base our actions on the assumption of object permanence.

Niels Bohr believed that it was meaningless to assign reality to the universe in the absence of observation. In the intervals between measurements, quantum systems exist as a fuzzy mixture of all possible properties – commonly known as superposition states. The mathematical function that describes the states a particle can take is called the wave function, and it collapses to one state at the point of observation.
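For the programmers among us, a minimal Python sketch (my own illustration; the amplitudes here are chosen arbitrarily) shows how a superposition is described by a wave function and how observation picks out a single state:

    import numpy as np

    # A qubit in superposition: |psi> = alpha|0> + beta|1>,
    # where |alpha|^2 + |beta|^2 = 1. These amplitudes are the wave function.
    alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
    probs = [abs(alpha) ** 2, abs(beta) ** 2]  # probability of each outcome

    # Observation "collapses" the superposition: one state is chosen,
    # with a probability given by the squared amplitude.
    outcome = np.random.choice([0, 1], p=probs)
    print(outcome)  # 0 or 1, each with 50% probability in this example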

This philosophical battle between the two scientists (Einstein and Bohr) intensified in 1935 with the emergence of the property of entanglement. It meant that the states of two entangled particles were dependent on each other (or correlated), irrespective of how far apart the particles were. Einstein mockingly called it "spooky action at a distance."

As a response to Bohr's findings, the famous EPR (Einstein, Podolsky, Rosen) paper was written in 1935 by Albert Einstein, Boris Podolsky, and Nathan Rosen. The purpose of the paper was to argue that quantum mechanics fails to provide a complete description of physical reality. Podolsky was tasked with translating it into English, and Einstein was not happy with the translation. Apart from that, Podolsky also leaked an advance report of the EPR paper to The New York Times, and Einstein was so upset that he never spoke to Podolsky again. Source: https://www.aps.org/publications/apsnews/200511/history.cfm

The EPR paradox identified two possible explanations for the entanglement property. The state of one particle affecting another could be due to shared properties embedded within both particles, rather like a gene. Alternatively, the two particles could be communicating their states to each other instantaneously. The second explanation was thought to be impossible, as it violated both the theory of special relativity (since the particles would have to communicate faster than the speed of light) and the principle of locality.

The principle of locality states that an object is influenced by only its immediate surroundings.

The theory of special relativity states that the laws of physics are the same for all non-accelerating observers, and Einstein showed that the speed of light within a vacuum is the same no matter the speed at which an observer travels.

If entanglement existed, and if particles could influence each other's states at a great distance, then the principle of locality would also be breached. Hence, the EPR paper challenged the assumption that particles could communicate their states instantaneously and across great distances.

Hence, the EPR paper concluded that the two entangled particles had hidden variables embedded in them, which gave them the information to choose correlated states when observed. Albert Einstein continued to challenge the principles of quantum mechanics.

"Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot but does not really bring us any closer to the secret of the 'old one.' I, at any rate, am convinced that He does not throw dice."

Albert Einstein

Einstein and Bohr could not come to an agreement, even in the presence of an arbitrator. That arbitrator came in the form of John Wheeler. In 1939, Bohr and Wheeler started working at Princeton University and shared a good working relationship. Wheeler had a pleasant personality and could speak German. Einstein – who was a professor in exile at Princeton – became Wheeler's neighbor, and there arose a possibility for these great minds to come together. Wheeler saw merit in Bohr's view of complementarity. He also agreed with Einstein's challenge to the theory that, when we view particles, we unavoidably alter them. Despite several attempts, John Wheeler did not manage to come up with a theory that convinced both Bohr and Einstein.

Bell's inequality

Following on from the likes of Einstein and Bohr, John Bell entered the arena of quantum mechanics in the latter half of the 20th century. He was born in Belfast in 1928, and after several years of flirting with the theories of quantum mechanics, he finally chose to take the plunge in 1963, when he took leave at Stanford University. He explained entanglement through the behavior of identical twins who were separated at birth: if, after a lifetime, they were brought together, they would have surprising things in common. He had come across this in a study by the Institute for the Study of Twins. This led to the thought that perhaps electrons behaved as if they had genes. At the very least, it helped a layman understand what entanglement of quantum particles meant.

In 1964, Bell came up with what is now known as Bell's inequality. Through a thought experiment on electron and positron pairs, combined with probability theory, Bell showed that the conclusion of the EPR paper was wrong. The assumption that particles had to have properties embedded in them to explain entanglement did not seem the right way forward after all. Bell's inequality was supported by several subsequent experiments. The explanation of Bell's inequality through probabilities and Venn diagrams is simple. There is also a simple home experiment, using polarizing lenses and photons, that can demonstrate the spooky nature of quantum mechanics.
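To give a flavor of the numbers involved, here is a minimal Python sketch of one common textbook form of Bell's inequality, applied to photon polarization (my choice of illustration, not Bell's exact setup). If particles carried hidden, gene-like properties, the probability of a mismatch between measurement settings a and c could never exceed the sum of the mismatch probabilities for a,b and for b,c. Quantum mechanics predicts a mismatch probability of sin^2 of the angle difference, which breaks that bound:

    import numpy as np

    # Classical (hidden-variable) bound:
    #   P(mismatch a,c) <= P(mismatch a,b) + P(mismatch b,c)
    # Quantum prediction for entangled photons:
    #   P(mismatch) = sin^2(angle difference)
    def mismatch(theta1, theta2):
        return np.sin(np.radians(theta2 - theta1)) ** 2

    a, b, c = 0, 22.5, 45
    lhs = mismatch(a, c)                   # 0.500
    rhs = mismatch(a, b) + mismatch(b, c)  # ~0.293
    print(lhs, rhs, lhs <= rhs)            # False: the inequality is violated

With angles of 0, 22.5, and 45 degrees, the left-hand side is 0.5 while the right-hand side is only about 0.29 – the inequality is violated, exactly as the subsequent experiments confirmed.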

You can check out the YouTube video of the experiment here, https://www.youtube.com/watch?v=zcqZHYo7ONs&t=887s, and it does get quite counter-intuitive.

The video shows the following:

  • Look at a white background through a polarized lens A. It looks gray, indicating that a lot of light is being blocked from going through the lens.
  • Add another polarized lens B, and you will observe less light coming through it – indicated by the background getting even darker.
  • Now, by adding another polarized lens C to A and B, you would expect the white background to look even darker. But surprisingly, it looks brighter than with just A and B.

The results of the experiment can perhaps be explained by one possibility: what if the nature of the photon changes when it passes through a filter? This could mean that the way the changed photon interacts with subsequent filters is different too.
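For those who want to check the arithmetic, here is a minimal Python sketch of the classical intensity calculation, assuming the standard arrangement of this demo: lens A at 0 degrees, lens B at 90 degrees, and lens C inserted between them at 45 degrees. Each polarizer transmits a cos^2 fraction of the polarized light reaching it (Malus's law):

    import numpy as np

    def transmit(intensity, angle_between):
        # Malus's law: transmitted intensity = I * cos^2(angle between
        # the light's polarization and the filter's axis)
        return intensity * np.cos(np.radians(angle_between)) ** 2

    I0 = 1.0
    after_A = 0.5 * I0                    # unpolarized light through A
    two_filters = transmit(after_A, 90)   # B at 90 degrees: essentially dark
    three_filters = transmit(transmit(after_A, 45), 45)  # via C at 45 degrees
    print(two_filters, three_filters)     # ~0.0 versus 0.125

Crossed filters A and B alone pass essentially nothing, yet slipping C between them lets an eighth of the original light through – the calculation itself shows that each filter changes the photons that pass through it.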

I will explain another weird behavior of light particles (photons) using the Quantum Slit experiment later in this chapter. Currently, the behavior of subatomic particles is most clearly explained through the principles of quantum mechanics. If any new alternative is to be offered, it must be more convincing than the existing principles.

Quantum computers – a fancy idea

Whilst the theories underlying the behavior of particles in nature were being postulated, a few individuals were starting to think about the implications of simulating these behaviors using classical computers. In 1965, the Nobel Prize in Physics was awarded jointly to Sin-Itiro Tomonaga, Julian Schwinger, and Richard P. Feynman for their fundamental work in quantum electrodynamics, "with deep-ploughing consequences for the physics of elementary particles." It was in the 1980s that Richard Feynman first discussed the question, "Can a classical computer simulate any physical system?" He is considered to have laid the foundations of quantum computing through his lecture titled "Simulating Physics with Computers."

In 1985, the British physicist David Deutsch highlighted the fact that Alan Turing's theoretical version of a universal computer could not be extended to quantum mechanics. You may ask what Turing's computer was.

In 1936, Alan Turing came up with a simple version of a computer, called the Turing machine. It had a tape with several boxes, with bits coded into each of them as "0"s and "1"s. His idea was that the machine would run along the tape, looking at one square at a time. The machine had a code book containing a set of rules and, based on those rules, the states ("0"s and "1"s) of each of the boxes would be set. At the end of the process, the states of the boxes would provide the answer to the problem that the machine had solved. Many consider this to have laid the foundation for the computers we use today.
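A minimal Python sketch (my own toy example, not Turing's original notation) makes the idea concrete. The code book maps the current state and the symbol under the head to a symbol to write, a direction to move, and the next state:

    # Code book: (state, symbol) -> (symbol to write, move, next state)
    rules = {
        ("flip", "0"): ("1", 1, "flip"),
        ("flip", "1"): ("0", 1, "flip"),
        ("flip", " "): (" ", 0, "halt"),  # blank square: stop
    }

    tape = list("1011 ")   # the tape, one symbol per box
    head, state = 0, "flip"
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move

    print("".join(tape))   # "0100 ": every bit on the tape inverted

This tiny machine merely inverts each bit and halts at the first blank square, but the same tape-plus-rule-book structure is, in principle, enough to compute anything our modern computers can.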

However, David Deutsch highlighted that Turing's theories were based on classical physics (0s and 1s), and that a computer based on quantum physics would be more powerful than a classical computer.

Richard Feynman's idea started to gain traction in 1994, when Peter Shor of Bell Laboratories invented an algorithm to factor large numbers on a quantum computer. Using this algorithm, a quantum computer would be able to crack modern cryptographic techniques such as RSA.
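The heavy lifting in Shor's algorithm is finding the period r for which a^r mod N equals 1 – the step a quantum computer performs exponentially faster than any known classical method. Turning that period into factors is purely classical. Here is a minimal Python sketch on a toy number, with brute force standing in for the quantum period-finding step:

    from math import gcd

    N, a = 15, 7      # toy example: factor 15 using the base 7

    # Period finding: the step the quantum computer accelerates.
    r = 1
    while pow(a, r, N) != 1:
        r += 1        # r = 4 here, since 7^4 mod 15 = 1

    # Classical post-processing: the period yields the factors.
    p = gcd(pow(a, r // 2) - 1, N)  # gcd(48, 15) = 3
    q = gcd(pow(a, r // 2) + 1, N)  # gcd(50, 15) = 5
    print(p, q)

For N = 15 and a = 7, the period is 4, and the two gcd computations recover the factors 3 and 5. Run the same recipe against the 2,048-bit numbers that protect today's online transactions, and you can see why the algorithm worries cryptographers.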

In 1996, this was followed by Grover's search algorithm. On a classical computer, when an item has to be found in an unsorted list of N items, it takes, on average, N/2 checks to locate it. With Grover's algorithm, the number of checks can be brought down to roughly √N. For a database search, this offers a quadratic improvement in search performance. It is considered a key milestone in the field of quantum computing.
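To see where the square root comes from, here is a minimal Python simulation of Grover's algorithm over 8 items (simulating the quantum amplitudes classically, purely for illustration). Each iteration flips the sign of the marked item's amplitude (the oracle) and then reflects all amplitudes about their mean (the diffusion step):

    import numpy as np

    N, marked = 8, 5                     # 8-item search space; item 5 is "it"
    amps = np.full(N, 1 / np.sqrt(N))    # start in an equal superposition

    for _ in range(int(np.pi / 4 * np.sqrt(N))):  # 2 iterations for N = 8
        amps[marked] *= -1               # oracle: flip the marked amplitude
        amps = 2 * amps.mean() - amps    # diffusion: reflect about the mean

    print(amps[marked] ** 2)             # ~0.945

After just two iterations – roughly the square root of 8 – the marked item would be measured with about 94% probability, compared with the four checks a classical search of 8 items needs on average.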

Déjà vu

Grover's algorithm and subsequent work in this space have since accelerated the excitement and hype around quantum computing. More recently, tech giants IBM, Google, Intel, Microsoft, and a few others have ramped up their work on quantum computing. At CES 2019, IBM showed off its prowess by launching an integrated quantum computing system for scientists and businesses. IBM also has a cloud-based quantum computing infrastructure that programmers can use. More on what the tech giants are up to will be revealed in Chapter 16, Nation States and Cyberwars.

When I first looked at the picture of IBM's quantum computer replica revealed at CES 2019, my immediate thought was déjà vu. The previous generation witnessed the rise of the classical computing revolution, with its far-reaching impacts on all aspects of society. We stand on the brink of another revolution, and we will be fortunate enough to see the evolution of quantum computing first-hand.