Machine Learning Solutions
Overview of this book

Machine learning (ML) helps you find hidden insights in your data without the need for explicit programming. This book is your key to solving the kinds of ML problems you might come across in your job. You'll encounter simple to complex problems while building ML models, and you'll not only resolve them, but also learn how to build a project around each one, with a practical approach and easy-to-follow examples. The book covers a wide range of applications, from analytics and NLP to computer vision. Some of the applications you will work on include stock price prediction, a recommendation engine, a chatbot, and a facial expression recognition system. The problems we cover include identifying the right algorithm for your dataset and use case, creating and labeling datasets, getting enough clean data for processing, identifying outliers, avoiding overfitting, hyperparameter tuning, and more. You'll also learn to make more timely and accurate predictions. In addition, you'll deal with more advanced use cases, such as building a gaming bot and an extractive summarization tool for medical documents, and you'll tackle the problems faced while building each ML model. By the end of this book, you'll be able to fine-tune your models to your needs and deliver maximum productivity.

The best approach


We have achieved an accuracy of approximately 66%; for an FER application, the best achievable accuracy is approximately 69%, and we will reach it by using a pre-trained model. So, let's look at the implementation and how we can use it to achieve the best possible outcome.

Implementing the best approach

In this section, we will implement the best possible approach for the FER application: a pre-trained model built from deep convolutional and dense layers. The model is a six-layer-deep CNN trained with the stochastic gradient descent (SGD) technique. Its layers use 32, 32, 64, 64, 128, 128, 1,024, and 512 neurons, respectively (six convolutional layers followed by two dense layers), and all layers use ReLU as the activation function. A 3 x 3 kernel is used to generate the feature maps, and 2 x 2 max pooling is applied after them. You can download the model from this GitHub link: https://github.com/jalajthanaki/Facial_emotion_recognition_using_Keras...
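To make the 3 x 3 convolution and 2 x 2 max-pooling steps concrete, here is a minimal NumPy sketch of both operations on a toy single-channel input. The input size, kernel values, and helper names are illustrative only and are not taken from the book's pre-trained model, which applies these operations across many filters and layers:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the kernel over the image (no padding, stride 1)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Element-wise multiply the 3x3 patch by the kernel and sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping 2x2 max pooling: keep the largest value in each block."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 grayscale input
kernel = np.ones((3, 3)) / 9.0                    # illustrative 3x3 averaging kernel
fmap = conv2d(image, kernel)                      # 3x3 conv -> 4x4 feature map
pooled = max_pool(fmap)                           # 2x2 pooling -> 2x2 output
print(fmap.shape, pooled.shape)                   # (4, 4) (2, 2)
```

In the actual network, each convolutional layer learns many such kernels (32, 32, 64, 64, 128, and 128 of them), and max pooling halves the spatial dimensions between stages before the dense layers produce the final emotion prediction.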