Implementing AWS: Design, Build, and Manage your Infrastructure

By: Yohan Wadia, Rowan Udell, Lucas Chan, Udita Gupta

Overview of this book

With this Learning Path, you'll explore techniques to easily manage applications on the AWS cloud. You'll begin with an introduction to serverless computing, its advantages, and the fundamentals of AWS. The following chapters will guide you through managing multiple accounts by setting up consolidated billing, and through enhancing your application delivery skills with the latest AWS services, such as CodeCommit, CodeDeploy, and CodePipeline, to provide continuous delivery and deployment, while also securing and monitoring your environment's workflow. It will also deepen your understanding of the services AWS Lambda provides to developers. To refine your skills further, it demonstrates how to design, write, test, monitor, and troubleshoot Lambda functions. By the end of this Learning Path, you'll be able to create a highly secure, fault-tolerant, and scalable environment for your applications.

This Learning Path includes content from the following Packt products:

  • AWS Administration: The Definitive Guide, Second Edition by Yohan Wadia
  • AWS Administration Cookbook by Rowan Udell and Lucas Chan
  • Mastering AWS Lambda by Yohan Wadia and Udita Gupta

Introducing AWS Data Pipeline


AWS Data Pipeline is an extremely versatile web service that allows you to move data back and forth between various AWS services, as well as on-premises data sources. The service is specifically designed to provide a fault-tolerant and highly available platform on which you can define and build your own custom data migration workflows. AWS Data Pipeline also provides add-on features such as scheduling, dependency tracking, and error handling, so that you do not have to spend extra time and effort writing them yourself. This easy-to-use and flexible service, combined with its low operating costs, makes AWS Data Pipeline ideal for use cases such as the following (a short sketch of creating a pipeline programmatically appears after this list):

  • Migrating data on a periodic basis from an Amazon EMR cluster over to Amazon Redshift for data warehousing
  • Incrementally loading data from files stored in Amazon S3 directly into an Amazon RDS database
  • Copying data from an Amazon RDS MySQL database into an Amazon Redshift cluster...
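
To give a sense of how such a workflow is defined in practice, here is a minimal sketch using boto3, the AWS SDK for Python. It creates and activates a pipeline that copies files between two S3 locations once a day. The bucket names, the pipeline name, and the use of the standard DataPipelineDefaultRole and DataPipelineDefaultResourceRole IAM roles (which must already exist in your account) are illustrative assumptions, not prescriptions from this chapter:

    # A minimal sketch, not a production setup: all bucket names, role
    # names, and the pipeline name below are illustrative placeholders.
    import boto3

    dp = boto3.client('datapipeline', region_name='us-east-1')

    # Create the pipeline shell; uniqueId makes the call idempotent.
    pipeline = dp.create_pipeline(
        name='daily-s3-copy',
        uniqueId='daily-s3-copy-v1',
        description='Copy files from one S3 prefix to another once a day',
    )
    pipeline_id = pipeline['pipelineId']

    # Pipeline objects: default settings, a daily schedule, an EC2
    # resource to run on, and a CopyActivity between two S3 data nodes.
    objects = [
        {'id': 'Default', 'name': 'Default', 'fields': [
            {'key': 'scheduleType', 'stringValue': 'cron'},
            {'key': 'schedule', 'refValue': 'DailySchedule'},
            {'key': 'failureAndRerunMode', 'stringValue': 'CASCADE'},
            {'key': 'role', 'stringValue': 'DataPipelineDefaultRole'},
            {'key': 'resourceRole', 'stringValue': 'DataPipelineDefaultResourceRole'},
            {'key': 'pipelineLogUri', 'stringValue': 's3://my-log-bucket/logs/'},
        ]},
        {'id': 'DailySchedule', 'name': 'DailySchedule', 'fields': [
            {'key': 'type', 'stringValue': 'Schedule'},
            {'key': 'period', 'stringValue': '1 day'},
            {'key': 'startAt', 'stringValue': 'FIRST_ACTIVATION_DATE_TIME'},
        ]},
        {'id': 'Ec2Instance', 'name': 'Ec2Instance', 'fields': [
            {'key': 'type', 'stringValue': 'Ec2Resource'},
            {'key': 'instanceType', 'stringValue': 't1.micro'},
            {'key': 'terminateAfter', 'stringValue': '30 Minutes'},
        ]},
        {'id': 'InputData', 'name': 'InputData', 'fields': [
            {'key': 'type', 'stringValue': 'S3DataNode'},
            {'key': 'directoryPath', 'stringValue': 's3://my-source-bucket/input/'},
        ]},
        {'id': 'OutputData', 'name': 'OutputData', 'fields': [
            {'key': 'type', 'stringValue': 'S3DataNode'},
            {'key': 'directoryPath', 'stringValue': 's3://my-target-bucket/output/'},
        ]},
        {'id': 'CopyJob', 'name': 'CopyJob', 'fields': [
            {'key': 'type', 'stringValue': 'CopyActivity'},
            {'key': 'input', 'refValue': 'InputData'},
            {'key': 'output', 'refValue': 'OutputData'},
            {'key': 'runsOn', 'refValue': 'Ec2Instance'},
        ]},
    ]

    # Upload the definition, then activate so the schedule starts running.
    dp.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=objects)
    dp.activate_pipeline(pipelineId=pipeline_id)
    print('Activated pipeline ' + pipeline_id)

Once activated, each scheduled run, retry, and failure can be tracked from the Data Pipeline console, which is where the built-in scheduling, dependency tracking, and error handling described above become visible.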