Documentation - Redis Enterprise Cloud

A guide to Redis Enterprise Cloud operation and administration


Clustering Redis Databases with Redis Enterprise Cloud

Redis is, by design, a (mostly) single-threaded process, which keeps it simple and extremely performant. There are times, however, when clustering is advised. For Redis Enterprise Cloud, it is advantageous and efficient to employ clustering to scale Redis databases when:

  1. The dataset is big enough that it would benefit from using the RAM resources of more than one server. Redis Labs recommends sharding a dataset once it reaches 30-50 GB in size.
  2. The operations performed against the database are CPU-intensive enough to degrade performance. With multiple CPU cores, on the same server or across multiple servers, managing the database’s shards, the load of operations is distributed among them.

A Redis Enterprise Cloud cluster is a set of managed Redis processes, with each process managing a subset of the database’s keyspace. This approach overcomes scaling challenges through horizontal scaling, using the CPU cores and RAM resources of multiple servers.

In a Redis Enterprise Cloud cluster, the keyspace is partitioned into hash slots. At any given time a slot resides on, and is managed by, a single server. A server that belongs to a Redis cluster can manage multiple slots. This division of the keyspace, a.k.a. sharding, is achieved by hashing the keys’ names, or parts of them (key hash tags), to obtain the slot in which a key should reside.
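The slot-to-server mapping can be pictured with a small Python sketch. The 16384-slot count matches the open-source Redis cluster; the shard names and range boundaries below are purely illustrative, not actual Redis Enterprise Cloud internals.

```python
# Hypothetical layout: 16384 hash slots (as in open-source Redis cluster)
# divided among three shards. Each slot is owned by exactly one shard,
# and a shard can own many slots.
SLOT_RANGES = [
    (0, 5460, "shard-a"),
    (5461, 10922, "shard-b"),
    (10923, 16383, "shard-c"),
]

def slot_owner(slot: int) -> str:
    """Return the shard that currently manages the given hash slot."""
    for low, high, shard in SLOT_RANGES:
        if low <= slot <= high:
            return shard
    raise ValueError(f"slot {slot} out of range 0-16383")
```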

Despite employing multiple Redis processes, a Redis Enterprise Cloud cluster is nearly transparent to the application that uses it. The cluster is accessible via a single endpoint that automatically routes all operations to the relevant shards, without requiring a cluster-aware Redis client. Applications can therefore benefit from the cluster without any code changes, even if they were not designed to use it beforehand.

Note: Redis Enterprise Cloud clustering is only available in the “Pay as you go” subscription.

 

Multi-Key Operations

Operations on multiple keys in a sharded Redis Enterprise Cloud cluster are supported with the following limitations:

  1. Multi-key commands: Redis offers several commands that accept multiple keys as arguments. In a sharded setup, multi-key commands can only be used when all affected keys reside in the same slot. These commands are: BITOP, BLPOP, BRPOP, BRPOPLPUSH, MSETNX, RPOPLPUSH, SDIFF, SDIFFSTORE, SINTER, SINTERSTORE, SMOVE, SORT, SUNION, ZINTERSTORE, ZUNIONSTORE.
  2. Geo commands: In GEORADIUS/GEORADIUSBYMEMBER commands, the STORE and STOREDIST options can only be used when all affected keys reside in the same slot.
  3. Transactions: All operations within a WATCH/MULTI/EXEC block should be performed on keys that are in the same slot.
  4. Lua scripts: All keys that are used by the script must reside in the same slot and need to be provided as arguments to the EVAL/EVALSHA commands (as per the Redis specification).
  5. Renaming keys: The use of the RENAME/RENAMENX commands is allowed only when the key’s original and new name are mapped to the same hash slot.
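An application can guard against these limitations by checking, before issuing a multi-key command or transaction, that all keys are guaranteed to share a slot. The sketch below (helper names are hypothetical) compares each key’s effective hashing input under the standard policy: equal inputs always hash to the same slot, while unequal inputs only rarely coincide, so the check is conservative but safe.

```python
def hash_input(key: str) -> str:
    """Effective hashing input under the standard policy: the substring
    between the first '{' and the next '}' if it is non-empty, otherwise
    the whole key name."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # the tag must be non-empty to count
            return key[start + 1:end]
    return key

def safe_for_multi_key(keys) -> bool:
    """Conservative check: True when every key has the same hash input,
    which guarantees they all map to the same slot."""
    return len({hash_input(k) for k in keys}) == 1
```

For example, `user:{42}:cart` and `user:{42}:wishlist` pass the check because both hash on the tag `42`, so commands like SINTERSTORE or a MULTI/EXEC block touching only those keys are valid in a sharded database.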

Changing the Sharding Policy

The clustering configuration of a Redis Enterprise Cloud instance can be changed. However, some sharding policy changes will trigger the deletion (i.e. FLUSHDB) of the data before they can be applied. These changes are:

  1. Changing the hashing policy from standard to custom or vice versa.
  2. Changing the order of custom hashing policy rules.
  3. Adding rules before existing ones in the custom hashing policy.
  4. Deleting rules from the custom hashing policy.
  5. Disabling clustering for the database.

Standard Hashing Policy

When using the standard hashing policy, a Redis Enterprise Cloud cluster uses the same behavior that is implemented by the standard, open-source Redis cluster. With the standard policy, hashing is performed as follows:

  1. Keys with a hash tag: a key’s hash tag is any substring between ‘{’ and ‘}’ in the key’s name. That means that when a key’s name includes the pattern ‘{…}’, the hash tag is used as input for the hashing function. For example, the following key names have the same hash tag and would therefore be mapped to the same slot: foo{bar}, {bar}baz & foo{bar}baz.
  2. Keys without a hash tag: when a key doesn’t contain the ‘{…}’ pattern, the entire key’s name is used for hashing.
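Since the standard policy mirrors the open-source Redis cluster, the mapping can be sketched in Python. The CRC16 (XMODEM) checksum and the 16384-slot count below come from the open-source Redis cluster specification; treat this as an illustration rather than Redis Enterprise Cloud’s exact internals.

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum used by open-source Redis cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key name to one of 16384 slots, honoring {...} hash tags."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # use the tag only when it is non-empty
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

With this mapping, foo{bar}, {bar}baz and foo{bar}baz all land in the same slot as the key bar, matching the example above.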

You can use the ‘{…}’ pattern to direct related keys to the same hash slot, so that multi-key operations are supported on them. On the other hand, not using a hash tag in the key’s name results in a (statistically) even distribution of keys across the keyspace’s shards. If your application does not perform multi-key operations, you don’t need to construct key names with hash tags.

Custom Hashing Policy

A Redis Enterprise Cloud cluster can be configured to use a custom hashing policy. A custom hashing policy is required when different keys need to be kept together on the same shard to allow multi-key operations. Redis Enterprise Cloud’s custom hashing policy is provided via a set of Perl Compatible Regular Expressions (PCRE) rules that describe the dataset’s key name patterns.

To configure a custom hashing policy, enter regular expression (RegEx) rules that identify the substring of the key’s name (the hash tag) on which hashing will be done. The hash tag is denoted in the RegEx by the `tag` named subpattern. Different keys that have the same hash tag will be stored and managed in the same slot.

Once you enable the custom hashing policy, Redis Enterprise Cloud provides default RegEx rules that implement the standard hashing policy:

RegEx Rule           Description
.*\{(?<tag>.*)\}.*   Hashing is done on the substring between the curly braces.
(?<tag>.*)           The entire key’s name is used for hashing.
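The default rules above can be exercised with Python’s re module as a rough approximation of the PCRE engine. Two caveats: Python spells a named group (?P<tag>...) rather than PCRE’s (?<tag>...), and re.match anchors at the start of the string, mimicking PCRE_ANCHORED; the exact backtracking behavior on unusual key names may differ from the real parser.

```python
import re

# The two default rules, translated to Python's (?P<tag>...) syntax.
DEFAULT_RULES = [
    r".*\{(?P<tag>.*)\}.*",  # hash on the substring between curly braces
    r"(?P<tag>.*)",          # fall back to the entire key name
]

def effective_hash_tag(key: str, rules=DEFAULT_RULES) -> str:
    """Return the hash tag produced by the first rule that matches."""
    for pattern in rules:
        match = re.match(pattern, key)  # anchored, like PCRE_ANCHORED
        if match:
            return match.group("tag")
    raise ValueError(f"key {key!r} does not match any rule")
```

Evaluating rules in order with first-match-wins is exactly the behavior described in the notes below, so reordering the list changes which tag a key produces.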

You can modify existing rules, add new ones, delete rules or change their order to suit your application’s requirements.

Custom Hashing Policy Notes and Limitations

  1. You can define up to 32 RegEx rules, each up to 256 characters.
  2. RegEx rules are evaluated in order.
  3. The first rule that matches is used, so place common key name patterns at the beginning of the rule list.
  4. Key names that do not match any of the RegEx rules trigger an error.
  5. The ‘.*(?<tag>)’ RegEx rule forces all keys into a single slot, as the hash tag is always empty; when used, it should be the last, catch-all rule.
  6. The following flag is enabled in our regular expression parser:
  • PCRE_ANCHORED: the pattern is constrained to match only at the start of the string being searched.
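The single-slot rule from note 5 can be seen in the same Python approximation (again using (?P<tag>...) for the named group): the rule matches every key and always captures an empty tag, so every key hashes to the same slot.

```python
import re

# Catch-all rule from note 5: the empty named group always captures "".
CATCH_ALL = r".*(?P<tag>)"

def tag_of(key: str) -> str:
    """Extract the (always empty) hash tag under the catch-all rule."""
    return re.match(CATCH_ALL, key).group("tag")
```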