Redis Cluster


Redis Cluster vs Sentinel

Redis Sentinel is a component that provides service discovery, failure detection and failover from a primary instance to a secondary replica for single-instance databases. It is not relevant when a cluster is configured: as described below, the cluster uses its own high-availability and auto-failover mechanism.

Redis Enterprise was built from the ground up on a shared-nothing, symmetric architecture with a clear separation between the data path (application/client requests) and the management/control path (cluster operations such as failover, shard migration, resharding, rebalancing, provisioning and cluster upgrades). This architecture provides true linear scalability, as shown in the figure below:

[Figure: Redis Enterprise Scalability]

Redis Cluster

Redis is an open source, in-memory data structure store designed to be fast and simple. Built for real-time performance, it completes most requests in less than a millisecond, allowing a single server to handle millions of concurrent requests every second.
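For illustration (not part of the original article), a minimal redis-py sketch against a local instance shows the data-structure-store model in practice; the host, port and key names are placeholders:

```python
# A minimal sketch using the redis-py client against a local Redis instance.
# The host, port and key names are placeholders.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Plain key-value access.
r.set("page:home:hits", 0)
r.incr("page:home:hits")          # atomic counter

# Richer data structures: a list used as a simple queue.
r.rpush("jobs", "resize-image", "send-email")
print(r.lpop("jobs"))             # -> "resize-image"
```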


A single server is a potential bottleneck in any system. To remain responsive as data volumes and operation throughput grow, the service needs to be horizontally scalable. A service that scales out is deployed across multiple servers that share the workload, each responsible for serving a part of the overall load.


A Redis instance is a server process that acts as the in-memory data store. That process is ultimately bound by the network, compute and memory resources of the single server it runs on. To scale Redis along any of these dimensions, additional server processes are deployed, most commonly on other servers, and possibly also on additional cores of existing servers.

Read-Replicas for Scaling ‘Read’

Redis uses replication from a primary instance (also referred to as a "master") to secondary replicas (or "slaves") as the means for providing service availability. The same replicas can also be used for scaling the compute and network throughput of read operations in a Redis database. In such cases, the logic that routes write requests to the primary instance and read requests to its replicas can be implemented in the application's code, and in some cases is provided as part of specific Redis clients or specialized proxies.
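A minimal sketch of that routing logic with the redis-py client is shown below; the primary and replica hostnames are placeholders, and the exact split would depend on the application's consistency needs:

```python
# Sketch of application-level read/write splitting with redis-py.
# The primary/replica addresses are placeholders for an actual deployment.
import redis

primary = redis.Redis(host="redis-primary.internal", port=6379, decode_responses=True)
replica = redis.Redis(host="redis-replica-1.internal", port=6379, decode_responses=True)

def save_profile(user_id: str, name: str) -> None:
    # All writes go to the primary instance.
    primary.hset(f"user:{user_id}", mapping={"name": name})

def load_profile(user_id: str) -> dict:
    # Reads are served by a replica; replication is asynchronous,
    # so a read may briefly lag behind the latest write.
    return replica.hgetall(f"user:{user_id}")
```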

Redis Cluster for Scaling ‘Read’ and ‘Write’

Read replicas are a simple but limited solution, as they do not address data volume and write throughput scalability requirements. True scalability of a Redis database across all dimensions is therefore achieved by partitioning the data and assigning each partition its own dedicated server and resources. A partitioned Redis database is one in which the data is managed simultaneously by a cluster of servers in a shared-nothing architecture, where each server manages only a subset of the entire dataset.


A Redis database can be scaled via external partitioning (a.k.a. sharding) by the application itself, a specific Redis client or a specialized proxy. The straightforward key-value model of the Redis keyspace lends itself to many types of partitioning schemes, providing a generous amount of control and flexibility that can be tailored to each use case's requirements.
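As a toy illustration of such external sharding (not from the original article), the sketch below hashes each key with CRC32 and takes it modulo the number of standalone instances; the addresses are placeholders, and a production setup would more likely use consistent hashing so that resizing the pool moves fewer keys:

```python
# Naive client-side sharding: the application hashes each key to pick
# one of several independent Redis instances. Addresses are placeholders.
import binascii
import redis

shards = [
    redis.Redis(host="10.0.0.1", port=6379, decode_responses=True),
    redis.Redis(host="10.0.0.2", port=6379, decode_responses=True),
    redis.Redis(host="10.0.0.3", port=6379, decode_responses=True),
]

def shard_for(key: str) -> redis.Redis:
    # CRC32 modulo the shard count; consistent hashing would reduce
    # the number of keys that move when shards are added or removed.
    return shards[binascii.crc32(key.encode()) % len(shards)]

shard_for("user:42").set("user:42", "Ada")
value = shard_for("user:42").get("user:42")
```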


Because a sharded Redis database is made up of multiple individual primary instances, each primary instance can be set up with its own secondary replicas. Each of the Redis instances in a sharded database is oblivious to the overall partitioning scheme and to the availability of each partition. The availability and operation of a sharded database therefore rely on an external element to route requests correctly at all times. Implementing an external partitioning mechanism that is both robust and resilient is often a non-trivial challenge, so the Redis cluster offers a better alternative.


The Redis cluster is designed for scaling a database by partitioning it. Unlike with external sharding, the instances in a Redis cluster are aware of both the partitioning scheme and the availability of the cluster's members. The cluster provides built-in partitioning of the keyspace, in which each partition is made up of a range of hash slots. Partitions can be split, merged and migrated online via the cluster's administrative interface. Keys are mapped to slots by hashing either their entire names or only parts of them (hash tags), and any given partition is managed by one of the cluster's primary instances.
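As an illustration of that mapping, the sketch below reimplements the documented slot calculation (CRC16 of the key, or of its non-empty {hash tag} when one is present, modulo 16384); the key names are placeholders:

```python
# Illustration of how Redis Cluster maps a key to one of 16384 hash slots:
# CRC16 (XMODEM variant) of the key, or of its hash tag, modulo 16384.
def crc16_xmodem(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    # If the key contains a non-empty {hash tag}, only the tag is hashed,
    # which lets related keys (e.g. "user:{42}:name", "user:{42}:email")
    # land in the same slot and support multi-key operations.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(hash_slot("user:{42}:name"), hash_slot("user:{42}:email"))  # same slot
```

Running CLUSTER KEYSLOT against a cluster node returns the same slot number and can be used to verify the calculation.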


Members of the Redis cluster communicate among themselves to agree on the current partitioning layout and to detect failures. A majority of the members, consisting of both primary and secondary instances, can detect a non-responsive instance and fail over to one of its standby replicas in order to ensure continued service. Because all members share up-to-date knowledge of the cluster's topology and partition mapping, clients are dynamically routed according to their access patterns.
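For example, any member can be asked for its own view of the cluster's health and topology; the sketch below assumes a node reachable at a placeholder address and uses redis-py's generic execute_command, whose reply parsing varies by client version:

```python
# Ask one member for its view of cluster health and topology.
# The node address is a placeholder; depending on the redis-py version,
# these replies may come back parsed into dicts or as raw text.
import redis

node = redis.Redis(host="127.0.0.1", port=7000, decode_responses=True)

print(node.execute_command("CLUSTER INFO"))   # cluster_state, slots assigned, known nodes
print(node.execute_command("CLUSTER NODES"))  # every member: role, slots and health flags
```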


A client application that connects to a Redis cluster can be oblivious to these underlying complexities and direct any operation to any of the cluster's members. The target member performs the operation when the data involved is stored in one of its own partitions, and replies with a redirection when it isn't. Most Redis clients, however, proactively discover the cluster's topology upon connecting and handle redirections only when the topology changes.
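As a minimal sketch of such a cluster-aware client (using the RedisCluster class available in redis-py 4 and later; the startup node address and key names are placeholders, not from the original article):

```python
# A cluster-aware client fetches the slot-to-node map up front and routes
# each command to the right primary, following MOVED/ASK redirections only
# when the topology changes. The startup node address is a placeholder.
from redis.cluster import RedisCluster

rc = RedisCluster(host="127.0.0.1", port=7000, decode_responses=True)

rc.set("user:{42}:name", "Ada")      # routed to the primary that owns this slot
print(rc.get("user:{42}:name"))
```

A client that is not cluster-aware would instead receive a MOVED reply naming the owning node whenever it sends a command to the wrong member.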
