Fundamentals of Neural Networks [Video]

By: Yiqiao Yin

Overview of this course

Learning can be supervised, semi-supervised, or unsupervised. Deep-learning architectures such as deep neural networks, deep belief networks, deep reinforcement learning, recurrent neural networks, and convolutional neural networks have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, material inspection, and board game programs, where they have produced results comparable to, and in some cases surpassing, human expert performance.

This course covers three sections: (1) Neural Networks, (2) Convolutional Neural Networks (CNN), and (3) Recurrent Neural Networks (RNN). You will learn about linear regression and logistic regression and understand the purpose of neural networks. You will also understand forward and backward propagation as well as the cross-entropy function. Furthermore, you will explore image data, the convolutional operation, and residual networks. In the final section of the course, you will understand the use of RNNs, Gated Recurrent Units (GRU), and Long Short-Term Memory (LSTM). Code blocks and notebooks are provided to help you understand the topics covered in the course. By the end of this course, you will have a detailed, hands-on understanding of neural networks. All resources and code files are available here: https://github.com/PacktPublishing/Fundamentals-in-Neural-Networks
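To give a flavor of the forward propagation and cross-entropy function mentioned above, here is a minimal NumPy sketch. It is not taken from the course notebooks; the function names and the toy data are illustrative assumptions only.

import numpy as np

def softmax(z):
    # Subtract the row-wise max for numerical stability before exponentiating.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(X, W, b):
    # One linear layer followed by softmax, producing class probabilities.
    return softmax(X @ W + b)

def cross_entropy(probs, y):
    # Average negative log-likelihood of the true class labels.
    n = y.shape[0]
    return -np.log(probs[np.arange(n), y] + 1e-12).mean()

# Toy example: 4 samples, 3 features, 2 classes, random weights for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
y = np.array([0, 1, 1, 0])
W = rng.normal(size=(3, 2))
b = np.zeros(2)

probs = forward(X, W, b)
print("cross-entropy loss:", cross_entropy(probs, y))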
Table of Contents (4 chapters)
Chapter 4
Recurrent Neural Networks
Section 3
Language Processing
Natural Language Processing (NLP) is a set of techniques for structuring language data so that AI systems can process it. NLP uses AI to 'read' through a document and extract key information.
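As a crude illustration of extracting key information from a document (a hypothetical sketch, not the course's own pipeline), the snippet below tokenizes text and ranks content words by frequency; the stop-word list and function name are assumptions for illustration.

from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "is", "to", "of", "and", "that", "in", "for", "so", "they", "are", "at", "one"}

def key_terms(document, top_k=5):
    # Lowercase, split into word tokens, and drop common stop words.
    tokens = re.findall(r"[a-z']+", document.lower())
    content = [t for t in tokens if t not in STOP_WORDS]
    # Rank the remaining terms by frequency as a crude proxy for importance.
    return Counter(content).most_common(top_k)

doc = ("Recurrent neural networks process language one token at a time, "
       "so they are a natural fit for language processing tasks.")
print(key_terms(doc))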