
Intelligent Projects Using Python

By: Santanu Pattanayak

Overview of this book

This book will be a perfect companion if you want to build insightful projects from leading AI domains using Python. The book covers detailed implementations of projects from all the core disciplines of AI. We start by covering the basics of how to create smart systems using machine learning and deep learning techniques. You will work with various neural network architectures, such as CNNs, RNNs, and LSTMs, to solve critical real-world challenges. You will learn to train a model to detect diabetic retinopathy conditions in the human eye and create an intelligent system for performing video-to-text translation. You will use the transfer learning technique in the healthcare domain and implement style transfer using GANs. Later, you will learn to build AI-based recommendation systems, a mobile app for sentiment analysis, and a powerful chatbot for carrying out customer service. You will implement AI techniques in the cybersecurity domain to generate CAPTCHAs. Later, you will train and build autonomous vehicles to self-drive using reinforcement learning. You will be using libraries from the Python ecosystem, such as TensorFlow and Keras, to bring to life the core aspects of machine learning, deep learning, and AI. By the end of this book, you will be equipped to build your own smart models for tackling a wide range of AI problems without any hassle.
Table of Contents (12 chapters)

Building the model

In this section, the core model-building exercise is illustrated. We first define an embedding layer for the words in the vocabulary of the text captions, followed by the two LSTMs. The weights self.encode_W and self.encode_b are used to reduce the dimension of the features f_t from the convolutional neural network. For the second LSTM (LSTM 2), one of the inputs at any time step t > N is the previous word w_(t-1), along with the output h_t from the first LSTM (LSTM 1). The word embedding for w_(t-1) is fed to LSTM 2 instead of the raw one-hot encoded vector. For the first N time steps (self.video_lstm_step), LSTM 1 processes the input features f_t from the CNN, and its output hidden state h_t (output1) is fed to LSTM 2. During this encoding phase, LSTM 2 doesn't receive any word w_(t-1) as an input.
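To make the two-phase schedule concrete, here is a minimal NumPy sketch of the data flow described above. This is not the book's implementation: the LSTM cell, the dimensions (feat_dim, hid, emb_dim, vocab, N, M), and the weight initializations are all illustrative assumptions; only the roles of encode_W, encode_b, the word embedding, and the two LSTMs follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W):
    """One LSTM step: W maps the concatenation [x; h] to the 4 gates."""
    z = np.concatenate([x, h]) @ W
    H = h.size
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f, o, g = sig(z[:H]), sig(z[H:2*H]), sig(z[2*H:3*H]), np.tanh(z[3*H:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Hypothetical dimensions (the book's actual values may differ).
feat_dim, hid, emb_dim, vocab = 4096, 512, 512, 1000
N, M = 80, 20  # encoding (video) steps, decoding (caption) steps

# encode_W / encode_b reduce the CNN features f_t from feat_dim to hid.
encode_W = rng.normal(scale=0.01, size=(feat_dim, hid))
encode_b = np.zeros(hid)
embedding = rng.normal(scale=0.01, size=(vocab, emb_dim))
W1 = rng.normal(scale=0.01, size=(hid + hid, 4 * hid))            # LSTM 1
W2 = rng.normal(scale=0.01, size=(hid + emb_dim + hid, 4 * hid))  # LSTM 2

h1 = c1 = h2 = c2 = np.zeros(hid)
feats = rng.normal(size=(N, feat_dim))  # f_t from the CNN, one per frame
pad_word = np.zeros(emb_dim)            # no word input during encoding

# Encoding phase: LSTM 1 reads the projected features; LSTM 2 reads
# LSTM 1's hidden state plus a zero padding in place of a word.
for t in range(N):
    x1 = feats[t] @ encode_W + encode_b
    h1, c1 = lstm_step(x1, h1, c1, W1)
    h2, c2 = lstm_step(np.concatenate([h1, pad_word]), h2, c2, W2)

# Decoding phase: LSTM 1 gets zero-padded video input, and LSTM 2 now
# also receives the embedding of the previous word w_(t-1).
prev_word = 0  # e.g. a <bos> token id
outputs = []
for t in range(M):
    h1, c1 = lstm_step(np.zeros(hid), h1, c1, W1)
    x2 = np.concatenate([h1, embedding[prev_word]])
    h2, c2 = lstm_step(x2, h2, c2, W2)
    outputs.append(h2)

print(len(outputs), outputs[0].shape)
```

In a trained model, each decoder output h2 would be projected to vocabulary logits, and prev_word would be updated from the ground-truth caption (during training) or from the model's own prediction (during inference).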

From the (N+1)-th time step onward, we enter the decoding phase, where...