Learning Apache Cassandra - Second Edition

Overview of this book

Cassandra is a distributed database that stands out thanks to its robust feature set and intuitive interface, while providing the high availability and scalability of a distributed data store. This book will introduce you to the rich feature set offered by Cassandra, and empower you to create and manage a highly scalable, performant, and fault-tolerant database layer. The book starts by explaining the new features implemented in Cassandra 3.x and getting you set up with Cassandra. You'll then walk through data modeling in Cassandra and the rich feature set available to design a flexible schema. Next, you'll learn to create tables with composite partition keys, collections, and user-defined types, and get to know different methods to avoid denormalization of data. You will then proceed to create user-defined functions and aggregates in Cassandra. After that, you will set up a multi-node cluster and see how the dynamics of Cassandra change with it. Finally, you will implement some application-level optimizations using a Java client. By the end of this book, you'll be fully equipped to build powerful, scalable Cassandra database layers for your applications.

Batching in Cassandra


In Cassandra, a client can batch multiple statements, which may or may not be related, so that they are executed as a single statement. Earlier, we got a first look at how to use batches in Cassandra. Here, we will dig a bit deeper into the different types of batches and how batches have evolved over time to accommodate more nuanced features, such as atomic batching and batches with custom timestamps.
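
As a quick refresher, a batch in CQL wraps multiple statements between BEGIN BATCH and APPLY BATCH. The following is a minimal sketch of a logged batch; the table and column names are illustrative rather than part of a schema defined elsewhere:

    BEGIN BATCH
      -- Both writes are applied as a unit: Cassandra guarantees that
      -- either all of them are eventually committed or none are.
      INSERT INTO user_status_updates (username, id, body)
      VALUES ('alice', 16e2f240-2afa-11e4-8069-5f98e903bf02, 'Learning batches');

      INSERT INTO home_status_updates
        (timeline_username, status_update_id, status_update_username, body)
      VALUES ('bob', 16e2f240-2afa-11e4-8069-5f98e903bf02, 'alice', 'Learning batches');
    APPLY BATCH;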

Prior to Cassandra 1.2, batches were somewhat resilient. If an update within a batch failed, the coordinator would simply hint that particular update. This ensured that all the updates within a batch were eventually committed, assuming the coordinator node didn't go down in between. In the failure scenario where the coordinator node fails mid-batch, the client can retry the same batch on a different coordinator. This shouldn't cause any data integrity issues, since all the operations within a batch (insert, update, or delete), as well as the entire batch itself, are idempotent: replaying them leaves the data in the same end state.
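
One way to make the idempotence of a retried batch explicit is to fix the write timestamp on the client side. The following sketch (again with illustrative table names) pins a single timestamp on the whole batch; replaying it on a different coordinator rewrites the same cells with the same timestamp and values, so the end state is unchanged:

    BEGIN BATCH USING TIMESTAMP 1481661001000000
      -- The client supplies the write timestamp, so a retry on another
      -- coordinator produces identical mutations and is safe to replay.
      INSERT INTO user_status_updates (username, id, body)
      VALUES ('alice', 16e2f240-2afa-11e4-8069-5f98e903bf02, 'Retried safely');

      DELETE FROM home_status_updates
      WHERE timeline_username = 'bob'
        AND status_update_id = 16e2f240-2afa-11e4-8069-5f98e903bf02;
    APPLY BATCH;

Counter updates are the notable exception here: incrementing a counter is not idempotent, which is why counter batches are handled separately from regular batches.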