Sharding and replication
Sharding is a form of partitioning in which a database is split across multiple smaller databases to improve performance and read times. Replication, on the other hand, copies the data across multiple databases to provide faster lookups and lower response times; content delivery networks are a good example of replication in practice.
RethinkDB, just like other NoSQL databases, uses sharding and replication to provide fast responses and high availability. Let's look at each in detail.
Sharding in RethinkDB
RethinkDB uses a range-based sharding algorithm and partitions data on the table's primary key. It cannot shard on any other key; in RethinkDB, the shard key and the primary key are always the same.
When you request additional shards for a table, RethinkDB examines the table and works out the optimal split points to produce evenly balanced shards.
For example, say you have a table with 1,000 rows, with primary keys ranging from 0 to 999, and you've asked RethinkDB to create two shards.
RethinkDB will most likely choose primary key 500 as the split point. Every row with a key from 0 to 499 is stored in shard 1, while rows with keys 500 to 999 are stored in shard 2. The shards are distributed across the cluster automatically.
You can specify the sharding and replication settings when the table is created or alter them later. You cannot specify the split points manually; RethinkDB determines them internally. Also, you cannot have more shards than you have servers.
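For instance, using the official Python driver, you could set the sharding and replication scheme at table creation time and change it later with reconfigure. The following is only a sketch; the database and table names (test and users) are placeholders, and it assumes the cluster has enough servers for the requested shards and replicas:

import rethinkdb as r   # with newer drivers: from rethinkdb import RethinkDB; r = RethinkDB()

conn = r.connect(host='localhost', port=28015)   # default client driver port

# Create the table with two shards and two replicas per shard
r.db('test').table_create('users', shards=2, replicas=2).run(conn)

# Later, change the scheme; RethinkDB picks the split points itself
# and rebalances the data across the cluster
r.db('test').table('users').reconfigure(shards=4, replicas=2).run(conn)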
You can always visit the RethinkDB administrative screen to increase the number of shards or replicas.
We will look at this in more detail, with practical use cases, in Chapter 5, Administration and Troubleshooting Tasks in RethinkDB, which is devoted entirely to RethinkDB administration.
Let's see in more detail how range-based sharding works. Sharding can be done in two basic ways, using vertical partitioning or horizontal partitioning:
In vertical partitioning, we store different tables, and hence different documents, in different databases.
In horizontal partitioning, we store documents of the same table in separate databases. RethinkDB's range-sharding algorithm is a form of horizontal partitioning: it dynamically determines the split points of the table and stores the data in different shards based on that calculation.
Range-based sharding
In range-based sharding, a service called the locator determines which shard holds the entries of a particular table. Because the locator answers this with simple range queries, lookups are fast. Without a range, or some other indicator of which data belongs to which shard on which server, you would have to search every shard to find a particular document, which is inevitably a very slow process.
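Conceptually, the locator does little more than compare a key against the split points. Here is a minimal sketch of that idea in Python (this is not RethinkDB's internal code; the split point is taken from the earlier example):

import bisect

# Split points chosen by the sharding algorithm; with one split point
# at 500, shard 0 holds keys below 500 and shard 1 holds keys from 500 up
split_points = [500]

def locate_shard(primary_key):
    # The number of split points less than or equal to the key is
    # exactly the index of the shard that owns the key
    return bisect.bisect_right(split_points, primary_key)

print(locate_shard(42))    # 0 -> first shard
print(locate_shard(999))   # 1 -> second shard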
RethinkDB maintains a piece of metadata referred to as the directory, which lists the node (RethinkDB instance) responsible for each shard. Each node is responsible for keeping its copy of the directory up to date.
RethinkDB allows you to choose where shards are located, and again this can be done from the web-based administrative interface. However, the RethinkDB servers themselves need to be set up manually from the command line; that part cannot be done through the web interface.
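For example, if the servers were started from the command line with server tags, the Python driver lets you pin replicas to those tags when reconfiguring a table. This is a sketch reusing the connection from the earlier example; the tag names us_east and us_west are assumptions for illustration:

# Assumes the servers were launched with tags, for example:
#   rethinkdb --server-tag us_east ...
#   rethinkdb --server-tag us_west ...
r.db('test').table('users').reconfigure(
    shards=2,
    replicas={'us_east': 2, 'us_west': 1},   # replicas per server tag
    primary_replica_tag='us_east'            # primary replicas stay in us_east
).run(conn)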
Replication in RethinkDB
Replication keeps copies of the data in order to improve performance, availability, and failover handling. Each shard in RethinkDB can have a configurable number of replicas, and any RethinkDB instance (node) in the cluster can act as a replica for any shard. You can always change the replication settings from the RethinkDB web console.
Currently, due to technical limitations, RethinkDB does not allow more than one replica of the same shard on a single RethinkDB instance. Every RethinkDB instance stores the tables' metadata; when that metadata changes, RethinkDB sends the changes to the other instances in the cluster so that every shard and replica keeps an up-to-date copy.
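You can also check the state of every shard and replica programmatically through the rethinkdb system database; for example, with the Python connection used earlier:

# The table_status system table reports the state of each table's
# shards and replicas
for table in r.db('rethinkdb').table('table_status').run(conn):
    print(table['name'], table['status'])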
Indexing in RethinkDB
RethinkDB indexes documents in a table by their primary key by default. If you do not specify a primary key when creating the table, RethinkDB uses the default name, id.
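For example, with the Python driver you can name the primary key yourself when creating a table; omit the option and the key is simply called id. The table and field names below are placeholders:

# Use the email field of each document as the primary key
r.db('test').table_create('accounts', primary_key='email').run(conn)

# No primary_key option: documents are keyed on the default 'id' field
r.db('test').table_create('events').run(conn)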
Auto-generated primary keys contain information about the shard's location, so a document can be fetched directly from the appropriate shard. Within each shard, the primary key is indexed using a B-tree data structure.
An example of a RethinkDB-generated primary key is as follows:
d0041fcf-9a3a-460d-8450-4380b00ffac0
RethinkDB also provides secondary indexes and compound indexes (indexes built from a combination of fields). It even provides multi indexes, which allow an array of values to act as keys; these, in turn, can be simple or compound indexes.
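With the Python driver, these index types can be created as follows (the field names are placeholders; the connection is the one from the earlier sketches):

# Simple secondary index on a single field
r.table('users').index_create('email').run(conn)

# Compound index built from two fields
r.table('users').index_create(
    'name_and_city', [r.row['last_name'], r.row['city']]
).run(conn)

# Multi index: every element of the 'tags' array becomes a key
r.table('users').index_create('tags', multi=True).run(conn)

# Wait for the indexes to finish building before querying them
r.table('users').index_wait().run(conn)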
System-generated primary keys are very efficient, because the query execution engine can immediately determine which shard holds the data, so no extra routing is needed. With a custom primary key, say an arbitrary string or number, RethinkDB may have to search for the data across the cluster, which slows performance down. You can always add secondary indexes of your choice to support further indexing and searching based on your application's needs.
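To see the difference in practice: a primary key lookup goes straight to the shard that owns the key, while a secondary index query goes through an index such as the email index created above. The key and e-mail values here are only examples:

# Direct fetch: the key itself tells RethinkDB which shard owns the document
r.table('users').get('d0041fcf-9a3a-460d-8450-4380b00ffac0').run(conn)

# Lookup through the 'email' secondary index
r.table('users').get_all('bob@example.com', index='email').run(conn)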