Machine Learning Projects for Mobile Applications

By : Karthikeyan NG

Overview of this book

Machine learning is a technique for developing computer programs that adapt when exposed to new data. We can make use of it in our mobile applications, and this book will show you how. The book starts with the basics of machine learning concepts for mobile applications and how to get well equipped for the tasks ahead. You will start by developing an app to classify age and gender using Core ML and TensorFlow Lite. You will explore neural style transfer and get familiar with how deep CNNs work. We will also take a closer look at Google's ML Kit for the Firebase SDK for mobile applications. You will learn how to detect handwritten text on mobile, and how to create your own Snapchat filter by making use of facial attributes and OpenCV. You will learn how to train your own food classification model on your mobile device; all of this will be done with the help of deep learning techniques. Lastly, you will build an image classifier on your mobile, compare its performance, and analyze the results on both mobile and cloud using TensorFlow Lite with an R-CNN. By the end of this book, you will not only have mastered the concepts of machine learning but will also have learned how to resolve problems faced while building powerful apps on mobile using TensorFlow Lite, Caffe2, and Core ML.
Table of Contents (16 chapters)
Title Page
Dedication
Packt Upsell
Contributors
Preface
Index

Understanding face-swapping


For a long time, understanding human faces has been a focus of research for computer vision engineers. The first application of this research came in the form of face recognition features. To identify a face in an input image or a video frame, our algorithm must first detect the location of the face. It then frames each detected face with a bounding box in the image, as follows:
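As a minimal sketch of this detect-then-frame step, the function below clamps a detected face rectangle to the image bounds before it is drawn. The `detections` list here is a hypothetical stand-in for the output of a real detector such as dlib's frontal face detector, which returns rectangles in pixel coordinates:

```python
def clamp_box(box, width, height):
    """Clamp a detected face rectangle (left, top, right, bottom)
    to the image bounds so the bounding box we draw never falls
    outside the frame."""
    left, top, right, bottom = box
    return (max(0, left), max(0, top), min(width, right), min(height, bottom))

# Example: a 640x480 frame with one detection partly off-screen.
detections = [(-12, 30, 200, 250)]  # stand-in for real detector output
boxes = [clamp_box(b, 640, 480) for b in detections]
print(boxes)  # [(0, 30, 200, 250)]
```

In a real app, each clamped box would then be passed to a drawing call (for example, OpenCV's rectangle drawing) to render the frame shown in the figure.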

Once we have the bounding boxes, the obvious next step is to identify facial key points at a more granular level inside the boxes: for example, the position of the eyes, the base of the nose, the eyebrows, and so on. Identifying facial landmark points helps in building applications such as virtual makeup rooms, face morphing, Augmented Reality (AR) filters, and so on.
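Landmark points like these are what drive such applications. As a sketch (assuming the common 68-point layout, where indices 36–41 and 42–47 cover the right and left eye), the snippet below computes the eye centres and the roll angle an AR filter would use to keep an overlay level with the eyes; the landmark coordinates are made up for illustration:

```python
import math

def eye_center(points):
    """Mean (x, y) of one eye's landmark points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def roll_angle(right_eye, left_eye):
    """Angle in degrees of the line joining the two eye centres;
    an AR overlay is rotated by this angle to stay aligned."""
    (rx, ry) = eye_center(right_eye)
    (lx, ly) = eye_center(left_eye)
    return math.degrees(math.atan2(ly - ry, lx - rx))

# Toy landmarks: the left eye sits 20 px lower, so the head is tilted.
right = [(100, 100), (110, 95), (120, 95), (130, 100), (120, 105), (110, 105)]
left  = [(200, 120), (210, 115), (220, 115), (230, 120), (220, 125), (210, 125)]
print(round(roll_angle(right, left), 1))  # 11.3
```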

Facial key point identification performed with the dlib library looks something like the following:

Note

The facial key point detection method used here was introduced by Vahid Kazemi and Josephine Sullivan, whose approach identifies 68 specific...
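For reference, the 68-point layout used by dlib's pretrained shape predictor groups landmark indices by facial region. The index ranges below follow the standard 68-point convention; the lookup helper is a hypothetical convenience, not part of dlib's API:

```python
# Standard index ranges of the 68-point facial landmark convention.
FACIAL_LANDMARK_REGIONS = {
    "jaw":           range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow":  range(22, 27),
    "nose":          range(27, 36),
    "right_eye":     range(36, 42),
    "left_eye":      range(42, 48),
    "mouth":         range(48, 68),
}

def region_of(index):
    """Return which facial region a landmark index belongs to."""
    for name, idxs in FACIAL_LANDMARK_REGIONS.items():
        if index in idxs:
            return name
    raise ValueError(f"index {index} is outside the 68-point model")

print(region_of(30))   # 'nose'
print(region_of(67))   # 'mouth'
```

Grouping indices this way is what lets a face-swapping or filter app address, say, only the mouth or only the eye points when warping or overlaying graphics.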