Consistent, steady improvement is the name of the game in Machine Learning. Sometimes you find yourself implementing an algorithm from scratch; sometimes you're pulling in libraries. You always need the option to try new algorithms and improve performance. Simultaneously, you need to know that performance doesn't degrade.
Because testing stochastic algorithms seems impossible, you could just ask an expert to review every change. That's just as slow as it sounds. What if you could automate checking that your updated algorithms outperform your previous ones? What if you could design your code so that you could swap in an algorithm from another library, or pit one that you wrote yourself against what you have? These are the reasons for this book.
We'll be covering what test-driven development is and what value it brings to machine learning. We'll be using nose with Python 2.7 to develop our tests. For machine-learning algorithms, we will be using statsmodels and scikit-learn. statsmodels has some great implementations of regression; scikit-learn is useful for its plethora of supported classification algorithms.
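As a small taste of the workflow, here is a minimal sketch of the test style nose picks up: nose collects any function whose name matches its test pattern (for example, names ending in _test), and a bare assert expresses the expectation. The guess_parity function and the test names below are illustrative stand-ins, not code from the book:

```python
def guess_parity(number):
    # A trivial stand-in "algorithm": classify an integer as even or odd.
    return "even" if number % 2 == 0 else "odd"

def given_an_even_number_when_asked_for_parity_test():
    # nose discovers this function because its name ends in _test.
    result = guess_parity(4)
    assert result == "even", "Then it should report the number as even."

def given_an_odd_number_when_asked_for_parity_test():
    result = guess_parity(7)
    assert result == "odd", "Then it should report the number as odd."
```

Running nosetests in the directory containing this file would discover and run both functions; a failing assert's message tells you which expectation was violated.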
Chapter 1, Introducing Test-Driven Machine Learning, explains what Test-Driven Development is, what it looks like, and how it is done in practice.
Chapter 2, Perceptively Testing a Perceptron, develops a perceptron from scratch and defines its behavior even though it behaves non-deterministically.
Chapter 3, Exploring the Unknown with Multi-armed Bandits, introduces multi-armed bandit problems, testing different algorithms, and iterating on their performance.
Chapter 4, Predicting Values with Regression, uses statsmodels to implement regression and report on key performance metrics. We will also explore tuning the model.
Chapter 5, Making Decisions Black and White with Logistic Regression, continues exploring regression and shows how to quantify the quality of this different type of model. We will use statsmodels again to create our regression models.
Chapter 6, You're So Naïve, Bayes, helps us develop a Gaussian Naïve Bayes algorithm from scratch using test-driven development.
Chapter 7, Optimizing by Choosing a New Algorithm, continues the work from Chapter 6, You're So Naïve, Bayes, and attempts to improve upon it using a new algorithm: Random Forests.
Chapter 8, Exploring scikit-learn Test First, teaches you how to teach yourself. You probably already have a lot of experience with this; this chapter builds upon it by teaching you to use the test framework to explore and document scikit-learn.
Chapter 9, Bringing it all Together, takes a business problem that requires a couple of different algorithms. Again, we will develop everything we need from scratch and mix our code with third-party libraries, completely test-driven.
We will be using Python 2.7 in this book, along with nose to unit test our software. In addition, we will be using statsmodels as well as scikit-learn.
This book is for machine learning professionals who want to be able to test the improvements to their algorithms in isolation and in an automated fashion. This book is for any data scientist who wants to get started in Test-Driven Development with minimal religion and maximum value. This book is not for someone who wants to learn state-of-the-art Test-Driven Development. It is written with the idea that the majority of what can be learned from Test-Driven Development is remarkably simple. We will provide a relatively simple approach to it, which the reader can choose to augment as they see fit.
In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.
Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "Notice that in my test, I instantiate a NumberGuesser object."
A block of code is set as follows:
def given_no_information_when_asked_to_guess_test():
    number_guesser = NumberGuesser()
    result = number_guesser.guess()
    assert result is None, "Then it should provide no result."
When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:
for the_class, trained_observations in self._classifications.items():
    if len(trained_observations) <= 1:
        return None
    probability_of_observation_given_class[the_class] = \
        self._probability_given_class(trained_observations, observation)
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.
To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from http://www.packtpub.com/sites/default/files/downloads/TestDrivenMachineLearning_ColorImages.pdf.
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.
To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at <[email protected]> with a link to the suspected pirated material.
We appreciate your help in protecting our authors and our ability to bring you valuable content.
If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.