Hands-On Artificial Intelligence for Beginners

By: Patrick D. Smith, David Dindi

Overview of this book

Virtual assistants such as Alexa and Siri process our requests, Google's cars have started to read addresses, and Amazon's prices and Netflix's recommended videos are decided by AI. Artificial Intelligence is one of the most exciting technologies and is becoming increasingly significant in the modern world. Hands-On Artificial Intelligence for Beginners will teach you what Artificial Intelligence is and how to design and build intelligent applications. This book will teach you to harness packages such as TensorFlow in order to create powerful AI systems. You will begin by reviewing recent changes in AI and learning how artificial neural networks (ANNs) have enabled more intelligent AI. You'll explore feedforward, recurrent, convolutional, and generative neural networks (FFNNs, RNNs, CNNs, and GNNs), as well as reinforcement learning methods. In the concluding chapters, you'll learn how to implement these methods for a variety of tasks, such as generating text for chatbots and playing board and video games. By the end of this book, you will understand exactly what you need to consider when optimizing ANNs, and how to deploy and maintain AI applications.
Table of Contents (15 chapters)

Word2vec

The Word2vec algorithm, invented by Tomas Mikolov while he was at Google in 2013, was one of the first modern embedding methods. It is a shallow, two-layer neural network that follows a similar intuition to the autoencoder, in that the network is trained to perform a certain task without actually being used to perform that task. In the case of the Word2vec algorithm, that task is learning the representations of natural language. You can think of this algorithm as a context algorithm – everything that it knows comes from learning the contexts in which words appear within sentences. It builds on something called the distributional hypothesis, which tells us that the meaning of each word can be inferred from its neighboring words: words that occur in similar contexts tend to have similar meanings. For instance, think about a corpus vector with 500 dimensions. Each word in the corpus is represented by a distribution of weights across every single one of...
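To make the context idea concrete, here is a minimal sketch of skip-gram Word2vec in NumPy: a two-layer network trained to predict a word's neighbors, whose hidden layer becomes the embedding. The toy corpus, window size, embedding dimension, and plain softmax training loop are all illustrative assumptions for clarity; real Word2vec implementations use negative sampling or hierarchical softmax for efficiency.

```python
import numpy as np

# Hypothetical toy corpus for illustration only
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
tokens = [sentence.split() for sentence in corpus]
vocab = sorted({w for sent in tokens for w in sent})
w2i = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8  # vocabulary size, embedding dimension (assumed small)

# Build (center, context) training pairs using a context window of 2
window = 2
pairs = []
for sent in tokens:
    for i, word in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                pairs.append((w2i[word], w2i[sent[j]]))

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (V, D))   # layer 1: center-word embeddings
W_out = rng.normal(0, 0.1, (D, V))  # layer 2: context-prediction weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Train with plain softmax + cross-entropy (a simplification of the
# negative-sampling objective used in practice)
lr = 0.05
for epoch in range(50):
    for center, context in pairs:
        h = W_in[center]            # hidden layer = embedding lookup
        probs = softmax(h @ W_out)  # predicted context distribution
        grad = probs.copy()
        grad[context] -= 1.0        # gradient of cross-entropy loss
        W_out -= lr * np.outer(h, grad)
        W_in[center] -= lr * (W_out @ grad)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "cat" and "dog" share contexts, so their vectors should drift together
sim = cosine(W_in[w2i["cat"]], W_in[w2i["dog"]])
print(f"cosine(cat, dog) = {sim:.3f}")
```

After training, the rows of `W_in` are the learned word vectors: the prediction task is discarded, and only these embeddings are kept, which is the autoencoder-like intuition described above.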