Machine Learning with Swift

By: Jojo Moolayil, Alexander Sosnovshchenko, Oleksandr Baiev

Overview of this book

Machine learning as a field promises to bring increased intelligence to software by helping us analyze information efficiently and discover patterns that humans cannot. This book will be your guide on an exciting journey into machine learning using the popular Swift language. The first part of the book covers machine learning basics to develop a lasting intuition about fundamental concepts. The second part explores various supervised and unsupervised statistical learning techniques and how to implement them in Swift, while the third part walks you through deep learning techniques with the help of typical real-world cases. In the last part, we dive into advanced topics such as model compression and GPU acceleration, and provide recommendations for avoiding common mistakes during machine learning application development. By the end of the book, you'll be able to develop intelligent applications written in Swift that can learn for themselves.
Table of Contents (18 chapters)
Title Page
Packt Upsell
Contributors
Preface
Index

What are artificial NNs anyway?


The group of models that we call artificial NNs are universal approximation machines; in other words, functions that can approximate the behavior of any other function of interest. Here, I mean functions in the mathematical sense, as opposed to the computer science one: functions that take a real-valued input vector and return a real-valued output vector. This definition holds true for the feed-forward NNs we will be discussing in this chapter. In the following chapters, we'll see networks that map an input tensor (a multidimensional array) to an output tensor, and also networks that take their own outputs as input.
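To make this idea concrete, here is a minimal sketch of one feed-forward layer as exactly such a function: it maps a real-valued input vector to a real-valued output vector. The weight matrix, biases, and sigmoid activation chosen here are illustrative assumptions, not code from the book.

```swift
import Foundation

// Sigmoid activation: squashes any real number into the range (0, 1).
func sigmoid(_ x: Double) -> Double {
    return 1.0 / (1.0 + exp(-x))
}

// One feed-forward layer: [Double] in, [Double] out.
// `weights` holds one row of input weights per output neuron.
func feedForwardLayer(_ input: [Double],
                      weights: [[Double]],
                      biases: [Double]) -> [Double] {
    return zip(weights, biases).map { row, bias in
        // Weighted sum of the inputs plus the bias, then the activation.
        let sum = zip(row, input).reduce(bias) { $0 + $1.0 * $1.1 }
        return sigmoid(sum)
    }
}

// Example: a layer mapping a 2-vector to a 2-vector.
let output = feedForwardLayer([1.0, 0.5],
                              weights: [[0.2, -0.4], [0.7, 0.1]],
                              biases: [0.0, -0.1])
```

Stacking several such layers, each feeding its output vector into the next, is what gives a feed-forward NN its approximation power.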

We can think of an NN as a graph, with each neuron as a node in a directed acyclic graph. Each such node takes some input and produces some output. Modern NNs are only loosely inspired by the biological brain. If you want to know more about the biological prototype and its relation to NNs, check the Seeing biological analogies section.
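The node view above can be sketched as a single neuron type; the name, fields, and simple threshold activation here are illustrative assumptions rather than the book's own code.

```swift
// A minimal sketch of one node in the graph: a neuron takes some
// inputs and produces one output.
struct Neuron {
    var weights: [Double]
    var bias: Double

    // Weighted sum of the inputs plus the bias, thresholded at zero.
    func fire(_ inputs: [Double]) -> Double {
        let sum = zip(weights, inputs).reduce(bias) { $0 + $1.0 * $1.1 }
        return sum > 0 ? 1.0 : 0.0
    }
}

// Example: this neuron outputs 1.0 only when the first input
// exceeds the second.
let neuron = Neuron(weights: [1.0, -1.0], bias: 0.0)
```

Wiring many such nodes together, with edges carrying outputs forward and no cycles, yields the directed acyclic graph of a feed-forward NN.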