10 Machine Learning Blueprints You Should Know for Cybersecurity

By Rajvardhan Oak
Overview of this book

Machine learning in security is harder than in other domains because of the evolving nature and abilities of adversaries, the high stakes involved, and a lack of ground-truth data. This book will prepare machine learning practitioners to handle tasks effectively in the challenging yet exciting cybersecurity space. The book begins by helping you understand how advanced ML algorithms work, and shows you practical, Python-based examples of how they can be applied to security-specific problems, using open source datasets or showing you how to create your own. In one exercise, you’ll use GPT-3.5, the secret sauce behind ChatGPT, to generate an artificial dataset of fabricated news. Later, you’ll find out how to apply the expert knowledge and human-in-the-loop decision-making that are necessary in the cybersecurity space. This book addresses the lack of resources available to individuals interested in transitioning into a data scientist role in cybersecurity. It concludes with case studies, interview questions, and blueprints for four projects that you can use to enhance your portfolio. By the end of this book, you’ll be able to apply machine learning algorithms to detect malware, fake news, deepfakes, and more, and to implement privacy-preserving machine learning techniques such as differentially private ML.

Detecting malware with BERT

So far, we have seen attention, transformers, and BERT, but all of it in the context of language-related tasks. How is any of this relevant to malware detection, which has nothing to do with language? In this section, we will first discuss how BERT can be leveraged for malware detection and then walk through an implementation.

Malware as language

We saw that BERT shows excellent performance on sentence-related tasks. A sentence, however, is merely a sequence of tokens; we as humans find meaning in it only because we understand language. To BERT, the tokens could be anything: words, integers, symbols, or even images. What BERT really learns is the structure of sequences, so it performs well on sequence tasks in general.
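
To make this concrete, here is a minimal sketch showing that BERT's tokenizer is indifferent to what its tokens mean: we build a toy vocabulary of non-word symbols and encode a "sentence" made of them. The token names and the vocab.txt path are made up for illustration, not taken from the book's dataset:

```python
from transformers import BertTokenizer

# Toy vocabulary: BERT's special tokens first, then arbitrary non-word
# symbols standing in for API calls. All names here are hypothetical.
vocab = ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]",
         "REQUEST-URL", "DOWNLOAD-FILE", "EXECUTE-FILE", "READ-CONTACTS"]
with open("vocab.txt", "w") as f:
    f.write("\n".join(vocab))

# do_basic_tokenize=False keeps each whitespace-separated token intact
# instead of splitting on punctuation such as the hyphen.
tokenizer = BertTokenizer("vocab.txt", do_lower_case=False,
                          do_basic_tokenize=False)
ids = tokenizer("REQUEST-URL DOWNLOAD-FILE EXECUTE-FILE")["input_ids"]
print(ids)  # [2, 5, 6, 7, 3]: [CLS] REQUEST-URL DOWNLOAD-FILE EXECUTE-FILE [SEP]
```

Because the model only ever sees integer IDs, nothing changes downstream when the vocabulary holds symbols such as these instead of English words.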

Now, imagine that instead of words, our tokens were the API calls made by an application. The life cycle of an application can then be described as the sequence of API calls it makes. For instance, <START> <REQUEST-URL> <DOWNLOAD-FILE> <EXECUTE...
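
As a rough sketch of where this idea leads (not the book's exact implementation), the snippet below treats each application's API-call trace as one "sentence" and trains a tiny BERT classifier from scratch on it. The traces, labels, and model sizes are toy values, and it reuses the toy vocab.txt written in the previous sketch:

```python
import torch
from transformers import BertConfig, BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer("vocab.txt", do_lower_case=False,
                          do_basic_tokenize=False)

# Each app is one "sentence": its API calls in order. Toy labels: 1 = malware.
traces = ["REQUEST-URL DOWNLOAD-FILE EXECUTE-FILE",
          "READ-CONTACTS REQUEST-URL"]
labels = torch.tensor([1, 0])

# A deliberately tiny BERT trained from scratch: pre-trained English
# checkpoints share no vocabulary with API-call tokens.
config = BertConfig(vocab_size=tokenizer.vocab_size, hidden_size=64,
                    num_hidden_layers=2, num_attention_heads=2,
                    intermediate_size=128, num_labels=2)
model = BertForSequenceClassification(config)

batch = tokenizer(traces, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for _ in range(3):  # a few illustrative gradient steps
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
print(model(**batch).logits.argmax(dim=-1))  # predicted class per app
```

In practice, the training corpus would contain thousands of labeled traces and far longer call sequences; the point of the sketch is only that once calls are tokens, the entire BERT machinery applies unchanged.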