Machine Learning with Swift

By: Jojo Moolayil, Alexander Sosnovshchenko, Oleksandr Baiev

Overview of this book

Machine learning as a field promises to bring increased intelligence to software by helping us learn from and analyse information efficiently, and discover patterns that humans cannot. This book will be your guide as you embark on an exciting journey in machine learning using the popular Swift language. The first part of the book covers machine learning basics to develop a lasting intuition about fundamental concepts. The second part explores various supervised and unsupervised statistical learning techniques and how to implement them in Swift, while the third part walks you through deep learning techniques with the help of typical real-world cases. The last part dives into hardcore topics such as model compression and GPU acceleration, and offers recommendations for avoiding common mistakes during machine learning application development. By the end of the book, you'll be able to develop intelligent applications written in Swift that can learn for themselves.

The Apriori algorithm


The most famous algorithm for association rule learning is Apriori. It was proposed by Agrawal and Srikant in 1994. The input of the algorithm is a dataset of transactions, where each transaction is a set of items. The output is a collection of association rules whose support and confidence are greater than some specified threshold. The name comes from the Latin phrase a priori (literally, "from what is before") because of one smart observation behind the algorithm: if an item set is infrequent, then we can be sure in advance that all its supersets are also infrequent.
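As a refresher, the support and confidence mentioned above can be computed directly from the transactions. The following is a minimal sketch, not the book's own code; the function names and the string-based item representation are illustrative assumptions.

```swift
// Support of an item set = fraction of transactions containing it.
// Items are modeled as strings purely for illustration.
func support(_ itemSet: Set<String>, in transactions: [Set<String>]) -> Double {
    let count = transactions.filter { itemSet.isSubset(of: $0) }.count
    return Double(count) / Double(transactions.count)
}

// Confidence of a rule X → Y is support(X ∪ Y) / support(X).
func confidence(_ x: Set<String>, _ y: Set<String>,
                in transactions: [Set<String>]) -> Double {
    support(x.union(y), in: transactions) / support(x, in: transactions)
}
```

For example, over the transactions {bread, milk}, {bread, butter}, and {bread, milk, butter}, the rule milk → bread has confidence support({milk, bread}) / support({milk}) = (2/3) / (2/3) = 1.0.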

You can implement Apriori with the following steps:

  1. Count the support of all item sets of length 1; that is, calculate the frequency of every item in the dataset.
  2. Drop the item sets that have support lower than the threshold.
  3. Store all the remaining item sets.
  4. Extend each stored item set by one element with all possible extensions. This step is known as candidate generation.
  5. Calculate the support value of each...
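The steps above can be sketched in Swift as follows. This is a naive, readable sketch rather than an optimized implementation: for brevity, candidate generation extends each surviving item set by every possible item, and counts supports with a full pass over the transactions. The function and variable names, the string-based items, and the integer `minSupport` threshold are all illustrative assumptions.

```swift
// Naive Apriori sketch: returns every frequent item set together with
// its support count (number of transactions containing it).
func apriori(transactions: [Set<String>], minSupport: Int) -> [Set<String>: Int] {
    // Support count of a candidate item set.
    func support(_ candidate: Set<String>) -> Int {
        transactions.filter { candidate.isSubset(of: $0) }.count
    }

    let items = Set(transactions.flatMap { $0 })

    // Steps 1-3: keep item sets of length 1 whose support meets the threshold.
    var frequent: [Set<String>: Int] = [:]
    var current: [Set<String>] = items.map { Set([$0]) }
        .filter { support($0) >= minSupport }
    current.forEach { frequent[$0] = support($0) }

    // Steps 4-5: extend each stored item set by one element (candidate
    // generation), then keep only candidates that meet the threshold.
    // The a priori property guarantees every frequent set is reached this
    // way, because all of its subsets are frequent too.
    while !current.isEmpty {
        var candidates = Set<Set<String>>()
        for itemSet in current {
            for item in items where !itemSet.contains(item) {
                candidates.insert(itemSet.union([item]))
            }
        }
        current = Array(candidates).filter { support($0) >= minSupport }
        current.forEach { frequent[$0] = support($0) }
    }
    return frequent
}
```

For example, over the transactions {bread, milk}, {bread, butter}, and {bread, milk, butter} with a threshold of 2, the frequent item sets are {bread}, {milk}, {butter}, {bread, milk}, and {bread, butter}; {milk, butter} appears only once and is pruned, so the triple {bread, milk, butter} is never even generated as a surviving candidate.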