AWS Certified Machine Learning Specialty: MLS-C01 Certification Guide

By: Somanath Nanda, Weslley Moura
Overview of this book

The AWS Certified Machine Learning Specialty exam tests your competency to perform machine learning (ML) on AWS infrastructure. This book covers the entire exam syllabus using practical examples to help you with your real-world machine learning projects on AWS. Starting with an introduction to machine learning on AWS, you'll learn the fundamentals of machine learning and explore important AWS services for artificial intelligence (AI). You'll then see how to prepare data for machine learning and discover a wide variety of techniques for data manipulation and transformation for different types of variables. The book also shows you how to handle missing data and outliers and takes you through various machine learning tasks such as classification, regression, clustering, forecasting, anomaly detection, text mining, and image processing, along with the specific ML algorithms you need to know to pass the exam. Finally, you'll explore model evaluation, optimization, and deployment and get to grips with deploying models in a production environment and monitoring them. By the end of this book, you'll have gained knowledge of the key challenges in machine learning and the solutions that AWS has released for each of them, along with the tools, methods, and techniques commonly used in each domain of AWS ML.
Table of Contents (14 chapters)
Section 1: Introduction to Machine Learning
Section 2: Data Engineering and Exploratory Data Analysis
Section 3: Data Modeling

Taking automatic backups, RDS snapshots, restores, and read replicas

In this section, we will see how automatic backups and manual snapshots work; both features come built into Amazon RDS.
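As a rough illustration of the two features, the following Python sketch uses boto3 to create a manual snapshot of a DB instance and then list the automated backups RDS has taken for it. The instance and snapshot identifiers are made up for the example; substitute your own.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Take a manual snapshot of an existing DB instance
# (identifiers here are hypothetical).
rds.create_db_snapshot(
    DBInstanceIdentifier="my-database-1",
    DBSnapshotIdentifier="my-database-1-manual-2024-01-01",
)

# List the automated backups RDS created during the daily backup window.
response = rds.describe_db_snapshots(
    DBInstanceIdentifier="my-database-1",
    SnapshotType="automated",
)
for snapshot in response["DBSnapshots"]:
    print(snapshot["DBSnapshotIdentifier"], snapshot["SnapshotCreateTime"])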

Let's consider a database that is scheduled to take a backup at 5 A.M. every day. If the application fails at 11 A.M., then it is possible to restore the database from the backup taken at 5 A.M., losing 6 hours' worth of data. This is a 6-hour RPO (Recovery Point Objective). So, RPO is defined as the time between the most recent backup and the incident, and it determines how much data can be lost. If you want to reduce it, you have to take backups more frequently, which increases the cost. In other words, if your business demands a lower RPO, it must spend more on the technical solution that meets it.
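The RPO in the example is simply the gap between the last backup and the failure. A minimal sketch of that arithmetic, using the times from the example:

from datetime import datetime

# Daily backup at 5 A.M.; the failure happens at 11 A.M. the same day.
last_backup = datetime(2024, 1, 1, 5, 0)
failure_time = datetime(2024, 1, 1, 11, 0)

# RPO exposure = time between the most recent backup and the incident,
# i.e. the maximum amount of data that can be lost.
data_loss_window = failure_time - last_backup
print(data_loss_window)  # 6:00:00 -> a 6-hour RPO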

Now, returning to our example, an engineer is assigned the task of bringing the system back online as soon as the disaster occurs. The engineer...