Machine Learning with Swift

By: Jojo Moolayil, Alexander Sosnovshchenko, Oleksandr Baiev

Overview of this book

Machine learning as a field promises to bring increased intelligence to software by helping us learn from and analyse information efficiently, and by discovering patterns that humans cannot. This book will be your guide on an exciting journey into machine learning using the popular Swift language. The first part of the book covers machine learning basics so that you develop a lasting intuition about fundamental concepts. The second part explores various supervised and unsupervised statistical learning techniques and how to implement them in Swift, while the third part walks you through deep learning techniques with the help of typical real-world cases. In the last part, we dive into advanced topics such as model compression and GPU acceleration, and offer recommendations for avoiding common mistakes during machine learning application development. By the end of the book, you'll be able to develop intelligent applications written in Swift that can learn for themselves.

Lossy compression

All lossy compression methods share a potential problem: because part of the information in your model is lost, you should check how the model performs after compression. Retraining (fine-tuning) the compressed model helps the network adapt to its new constraints.

Network optimization techniques include:

  • Weight quantization: Reduce computation precision. For example, the model can be trained in full precision (float32) and then compressed to int8, which improves performance significantly.
  • Weight pruning: Remove weights that contribute little to the output (for example, those close to zero), leaving a sparse network.
  • Weight decomposition: Factorize large weight matrices into products of smaller ones.
  • Low-rank approximation: Replace weight matrices with lower-rank approximations; a good approach for CPUs.
  • Knowledge distillation: Train a smaller (student) model to predict the outputs of the bigger (teacher) one.
  • Dynamic memory allocation: Reuse memory for intermediate tensors during inference.
  • Layer and tensor fusion: Combine successive layers into a single operation, which reduces the memory needed to store intermediate results.

  • Kernel auto-tuning: Optimizes execution...

At the moment, each of these techniques has its own pros and cons, but no doubt more refined ones will be proposed in the near future.
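To make the quantization bullet concrete, here is a minimal sketch of affine (scale and zero-point) float32-to-int8 quantization, operating on a plain array of `Float` weights rather than any particular framework's model format; the struct and function names are our own illustration:

```swift
import Foundation

/// An int8 tensor plus the parameters needed to map it back to float32.
struct QuantizedTensor {
    let values: [Int8]
    let scale: Float
    let zeroPoint: Int8
}

/// Affine quantization: q = round(w / scale) + zeroPoint,
/// mapping the range [min, max] of the weights onto [-128, 127].
func quantize(_ weights: [Float]) -> QuantizedTensor {
    guard let minW = weights.min(), let maxW = weights.max(), maxW > minW else {
        return QuantizedTensor(values: weights.map { _ in 0 }, scale: 1, zeroPoint: 0)
    }
    let scale = (maxW - minW) / 255.0
    let zeroPoint = Int8(clamping: -128 - Int((minW / scale).rounded()))
    let values = weights.map { w -> Int8 in
        Int8(clamping: Int((w / scale).rounded()) + Int(zeroPoint))
    }
    return QuantizedTensor(values: values, scale: scale, zeroPoint: zeroPoint)
}

/// Inverse mapping: w ≈ (q - zeroPoint) * scale.
func dequantize(_ q: QuantizedTensor) -> [Float] {
    q.values.map { Float(Int($0) - Int(q.zeroPoint)) * q.scale }
}
```

Each stored weight shrinks from 4 bytes to 1, and the round trip loses at most about one quantization step (`scale`) per weight, which is exactly the kind of information loss that the post-compression check described above should measure.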
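The weight pruning bullet can likewise be sketched in a few lines. This is magnitude-based pruning under a simplifying assumption (again on a flat `Float` array): the smallest-magnitude weights are zeroed out until the requested fraction of weights is zero, after which the sparse network would normally be fine-tuned:

```swift
import Foundation

/// Zeroes out the smallest-magnitude weights so that (at least)
/// `sparsity` (a fraction in 0...1) of the weights become zero.
func prune(_ weights: [Float], sparsity: Double) -> [Float] {
    let magnitudes = weights.map { abs($0) }.sorted()
    let cutIndex = Int(Double(weights.count) * sparsity)
    guard cutIndex > 0 else { return weights }
    // Threshold = magnitude of the cutIndex-th smallest weight.
    let threshold = magnitudes[min(cutIndex, weights.count) - 1]
    return weights.map { abs($0) <= threshold ? 0 : $0 }
}
```

On its own this does not shrink the array, but a mostly-zero weight tensor compresses well and can be stored in a sparse format.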
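Finally, the core of knowledge distillation is the soft-target loss: the student is trained to match the teacher's temperature-softened output distribution. The sketch below shows only that loss term, with made-up logit values for illustration:

```swift
import Foundation

/// Softmax with temperature T; a higher T yields a softer distribution
/// that exposes more of the teacher's "dark knowledge" about wrong classes.
func softmax(_ logits: [Double], temperature: Double) -> [Double] {
    let scaled = logits.map { $0 / temperature }
    let maxL = scaled.max() ?? 0
    let exps = scaled.map { exp($0 - maxL) }  // subtract max for stability
    let sum = exps.reduce(0, +)
    return exps.map { $0 / sum }
}

/// Cross-entropy between the teacher's softened outputs and the student's.
/// Minimizing this pushes the student's distribution toward the teacher's.
func distillationLoss(teacher: [Double], student: [Double],
                      temperature: Double) -> Double {
    let p = softmax(teacher, temperature: temperature)
    let q = softmax(student, temperature: temperature)
    return zip(p, q).reduce(0) { $0 - $1.0 * log($1.1) }
}
```

In practice this term is combined with the ordinary cross-entropy against the hard labels, but the teacher-matching term above is what lets a much smaller model recover most of the bigger model's accuracy.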