Mastering Apache Cassandra 3.x - Third Edition

By: Aaron Ploetz, Tejaswi Malepati, Nishant Neeraj

Overview of this book

With ever-increasing rates of data creation, storing data quickly and reliably becomes a pressing need. Apache Cassandra is an excellent choice for building fault-tolerant and scalable databases. Mastering Apache Cassandra 3.x teaches you how to build and architect your clusters, configure and work with your nodes, and program in a high-throughput environment, helping you understand the power of Cassandra and its new features. Once you've covered a brief recap of the basics, you'll move on to deploying and monitoring a production setup, optimizing it, and integrating it with other software. You'll work with the advanced features of CQL and the new storage engine in order to understand how they function on the server side. You'll explore the integration and interaction of Cassandra components, and then discover features such as the token allocation algorithm, CQL3, vnodes, lightweight transactions, and data modelling in detail. Last but not least, you will get to grips with Apache Spark. By the end of this book, you'll be able to analyse big data, and build and manage high-performance databases for your application.

Getting started

The first steps to building an application for use with Apache Cassandra are both important to get right and easy to get wrong. Here we will cover some fundamental questions about whether or not Cassandra is even the correct data store for the application in question.

But first, let's start with an overview of what the wrong path looks like.

The path to failure

As most developers are used to working with relational databases, the typical path to failure starts with a data model that closely resembles one found in an RDBMS. A loading job is then written, which only succeeds in crashing their nodes every couple of hours.
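
To make this concrete, here is a minimal sketch of what such a relational-style model tends to look like in CQL; the keyspace, table, and column names are hypothetical and chosen purely for illustration:

-- A normalized, RDBMS-style table carried over to Cassandra as-is.
-- The partition key is a surrogate ID, so any lookup by another
-- column has to scan across partitions.
CREATE TABLE store.customers_by_id (
    customer_id uuid,
    email text,
    city text,
    signup_date timestamp,
    PRIMARY KEY (customer_id)
);

-- The application then queries by a non-key column, which Cassandra
-- only permits with ALLOW FILTERING: effectively a cluster-wide scan.
SELECT * FROM store.customers_by_id
  WHERE city = 'Austin' ALLOW FILTERING;

In Cassandra, the modelling usually runs the other way: start from the queries the application needs and build one table per query pattern (for example, a customers_by_city table partitioned by city), rather than normalizing first and filtering later.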

Finally, once the data is there, they build their application around an RDBMS framework...