Mastering RethinkDB
We have covered various architectural features and the data model of RethinkDB. Now let's look at some of the constraints of RethinkDB that you need to take into account while architecting your data store.
RethinkDB divides its limitations into hard and soft limitations. The hard limitations are as follows:
RethinkDB also has some memory limitations, as follows:
RethinkDB, in order to keep performance high, stores some data in RAM. There are three main sources of RAM usage in RethinkDB:
RethinkDB stores table metadata in main memory to ensure fast read access. Each table consumes around 8 MB per server for metadata. RethinkDB organizes data into blocks, with sizes ranging from 512 bytes to 4 KB; of each block, approximately 10 to 26 bytes are kept in memory.
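The figures above can be turned into a back-of-the-envelope sizing calculation. The following sketch uses the numbers quoted in the text (8 MB of metadata per table, 10 to 26 bytes of in-RAM overhead per block); the function name and the assumption of a uniform block size are illustrative, not part of RethinkDB's API:

```python
# Rough estimate of RethinkDB's in-RAM metadata footprint on one server,
# using the figures quoted above. Real usage varies; this is a sketch only.
TABLE_METADATA_MB = 8            # ~8 MB of metadata per table, per server
BLOCK_OVERHEAD_BYTES = (10, 26)  # ~10-26 bytes kept in RAM per block

def metadata_ram_mb(tables, data_gb, block_kb=4):
    """Return (low, high) MB estimates of metadata RAM for one server.

    Assumes every block has the same size (block_kb), which is a
    simplification: real block sizes range from 512 bytes to 4 KB.
    """
    blocks = (data_gb * 1024 * 1024) / block_kb  # approximate block count
    low = tables * TABLE_METADATA_MB + blocks * BLOCK_OVERHEAD_BYTES[0] / (1024 * 1024)
    high = tables * TABLE_METADATA_MB + blocks * BLOCK_OVERHEAD_BYTES[1] / (1024 * 1024)
    return low, high
```

For example, 10 tables holding 4 GB of data in 4 KB blocks works out to roughly 90 to 106 MB of metadata RAM under these assumptions.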
Page cache is a very important aspect of performance. It is used to keep frequently accessed data in RAM rather than reading it from disk (except in the case of direct I/O, where caching happens in the application's own buffer rather than in the OS page cache). RethinkDB uses this formula to calculate the size of the cache:
Cache size = (available memory - 1024 MB) / 2
If available memory is less than 1224 MB (that is, if the formula would yield less than 100 MB), RethinkDB sets the page cache size to 100 MB. This is why it is recommended to allocate at least 2 GB of RAM to the RethinkDB process.
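The sizing rule above can be sketched as a small function: half of available memory minus 1024 MB, with a 100 MB floor. The function name is illustrative, not RethinkDB code:

```python
# Sketch of RethinkDB's default page-cache sizing rule described above:
# (available memory - 1024 MB) / 2, with a minimum of 100 MB.
def default_cache_size_mb(available_mb):
    size = (available_mb - 1024) / 2
    return max(size, 100)
```

With 2048 MB available, this yields a 512 MB cache; at or below 1224 MB available, the 100 MB floor applies.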
You can also change the size of the page cache when you start the server or later, using configuration files.
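Concretely, the cache size (in MB) can be set with the `--cache-size` startup flag or the `cache-size` key in the configuration file; for example (the 2048 MB value is illustrative):

```
# At startup, via the command line:
rethinkdb --cache-size 2048

# Or in the instance configuration file
# (e.g. /etc/rethinkdb/instances.d/default.conf):
cache-size=2048
```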
Every database uses some memory to store the intermediate results of currently running queries. Since queries differ, there is no exact estimate of memory usage per running query; however, a rough estimate is between 1 MB and 20 MB, including background processes such as transferring data between nodes, voting, and so on.