Hands-On Neural Networks

By: Leonardo De Marchi, Laura Mitchell

Overview of this book

Neural networks play a very important role in deep learning and artificial intelligence (AI), with applications in a wide variety of domains, from medical diagnosis and financial forecasting to machine diagnostics. Hands-On Neural Networks is designed to guide you through learning about neural networks in a practical way. The book will get you started by giving you a brief introduction to perceptron networks. You will then gain insights into machine learning and also understand what the future of AI could look like. Next, you will study how embeddings can be used to process textual data and the role of long short-term memory networks (LSTMs) in helping you solve common natural language processing (NLP) problems. The later chapters will demonstrate how you can implement advanced concepts including transfer learning, generative adversarial networks (GANs), autoencoders, and reinforcement learning. Finally, you can look forward to further content on the latest advancements in the field of neural networks. By the end of this book, you will have the skills you need to build, train, and optimize your own neural network model that can be used to provide predictable solutions.
Table of Contents (16 chapters)

Section 1: Getting Started
Section 2: Deep Learning Applications
Section 3: Advanced Applications

History of AI

The idea of AI, entailing machines that can think without human help, is surprisingly old. It can be traced back to the Indian philosophy of Charvaka, from around 1,500 BC.

The basis of AI is the philosophical concept that human reasoning can be mapped into a mechanical process. We can find this idea in many civilizations of the first millennium BC, particularly in the work of Greek thinkers such as Aristotle and Euclid.

In the 17th century, philosophers and mathematicians such as Leibniz and Hobbes explored the possibility that all of a human being's rational thought could be mapped onto an algebraic or geometric system.

Only at the beginning of the 20th century were the limits defined of what mathematics and logic can accomplish and how far mathematical reasoning can be abstracted. It was at that time that the mathematician Alan Turing defined the Turing machine, a mathematical construct that captures the essence of symbolic manipulation.

Alan Turing, in 1950, published a paper speculating on the possibility of creating a machine that can think. As thinking is a difficult concept to define, he proposed a task to determine whether a machine had achieved a level of reasoning that could be called AI. The task requires the machine to engage in a conversation with a human in such a way that the human cannot tell whether they are talking to a machine or to another human.

The 1950s also saw the creation of the first artificial neural networks (ANNs), which were able to perform simple logical functions. Between the 1950s and the 1970s, the world saw the first big era of discovery in AI, with applications in algebra, geometry, language, and robotics. The results were so astonishing that they created a great deal of hype around the field, but when these huge expectations were not met, research funding was cut off and interest in AI dwindled; this period became known as the first AI winter.
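To make the idea of "simple logical functions" concrete, the following minimal sketch (our own illustration, not code from the early literature) shows a single perceptron with hand-picked weights computing the logical AND of two binary inputs:

def perceptron(inputs, weights, bias):
    # Weighted sum followed by a step activation: output 1 only if the
    # sum crosses zero, otherwise output 0.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Weights and bias chosen by hand so the unit fires only when both inputs are 1.
and_weights, and_bias = [1.0, 1.0], -1.5

for a in (0, 1):
    for b in (0, 1):
        print(a, "AND", b, "=", perceptron([a, b], and_weights, and_bias))

Changing the bias to -0.5 turns the same unit into a logical OR, which illustrates why these early networks were seen as a promising building block for reasoning.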

Fast forward to recent years: we now have access to a huge amount of data and computational power, and Machine Learning (ML) techniques have become more and more useful in business. In particular, the advent of the Graphics Processing Unit (GPU) made it possible to efficiently train huge neural networks, usually known as Deep Neural Networks (DNNs), on very large datasets. The trend now is that we will collect more and more data from smart cities, vehicles, portable devices, the Internet of Things (IoT), and so on, and ML can be used to solve a rapidly increasing number of problems. It seems, then, that we are just at the beginning of this huge revolution, as only very recently, compared to the span of human history, have we had machines that can take decisions by themselves.

With algorithms, it is possible not only to automate mundane and repetitive tasks but also to improve important fields such as finance and medicine, where human biases and limited cognitive capacity constrain progress.

All this automation can be destabilizing for a large portion of the workforce and can concentrate more and more wealth and power in the hands of a select few individuals and companies. For this reason, companies such as Google and Facebook are financing long-term research in this area. OpenAI (https://openai.com/), in particular, is an organization that aims to provide open source AI research and easy access to its material for everyone.

If it is proven that we can automate any task, we might live in a society that is not bound by resources. Such a society would not need money, as money is just a way to efficiently allocate resources, and we might end up in a utopian society where people can pursue whatever makes them happy.

At the moment, these are just futuristic theories, but ML is becoming more advanced by the day. We will now take an overview of the current state of the field.