Use Cases

Caching

What is application caching?

Application caching stores transient data and is optimized for speed. Unlike a typical database, where content is served from a persistent storage device and performance is heavily affected by storage latency, a cache is usually served entirely from memory. Application caching also avoids the significant extra latency inherent in cloud-native deployments, where persistent storage must be network-attached to the cloud instance running the database, adding a round trip between the database instance and its storage.

In-memory caching solutions can be extremely effective at reducing latency, especially when the working set (the most frequently accessed portion of your dataset) fits within the cache and the database access pattern is read-heavy. To provide an instant response in their user-facing applications, businesses must rely on some kind of caching mechanism. Furthermore, when the application is used as a service (via an API) by other applications, caching becomes critical for the most frequently read data.

Caching is used extensively today in many scenarios, including:

DBMS data

Most traditional databases are designed to provide robust functionality rather than speed at scale. The database cache is often used to store copies of lookup tables and the results of expensive queries, both to improve the application’s performance and to reduce the load on the data source.
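
To make this concrete, here is a minimal cache-aside sketch in Python using redis-py; query_database() and the user:<id> key scheme are hypothetical stand-ins for your own expensive DBMS query and naming convention.

# Cache-aside: check the cache first, fall back to the DBMS on a miss.
import json
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

def query_database(user_id):
    # Hypothetical placeholder for an expensive SQL lookup
    return {'id': user_id, 'name': 'Alice'}

def get_user(user_id):
    cache_key = f'user:{user_id}'
    cached = r.get(cache_key)
    if cached is not None:  # cache hit: no database round trip
        return json.loads(cached)
    row = query_database(user_id)  # cache miss: hit the data source...
    r.set(cache_key, json.dumps(row), ex=300)  # ...and cache it for 5 minutes
    return row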

Session data

Caching user session data is an integral part of building scalable and responsive applications. Because every user interaction requires access to the session’s data, keeping that data in the cache ensures the fastest response time to the application user. Keeping session data at the caching tier is superior to the alternatives. Sticky sessions at the load balancer, for example, force all requests in a session to be processed by a single app server, while caching allows any app server to process a request without losing user state.
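
As an illustration, here is a sketch of server-side session storage in a Redis hash with a sliding expiration; the session ID, field names, and 30-minute timeout are hypothetical choices.

# Any app server can save or load a session through the shared cache.
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

SESSION_TTL = 1800  # expire idle sessions after 30 minutes

def save_session(session_id, data):
    key = f'session:{session_id}'
    r.hset(key, mapping=data)   # one hash holds all of the session's fields
    r.expire(key, SESSION_TTL)  # (re)start the idle timeout

def load_session(session_id):
    key = f'session:{session_id}'
    data = r.hgetall(key)
    if data:
        r.expire(key, SESSION_TTL)  # sliding expiration on each access
    return data

save_session('abc123', {'user_id': '42', 'cart_items': '3'})
print(load_session('abc123'))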

API responses

Modern applications are built from loosely coupled components that communicate via APIs. Application components use APIs to request services from other components, whether inside the application (e.g., in a microservices architecture) or outside it (in a SaaS use case, for instance). Storing the API’s reply in the cache, even if only briefly, improves the application’s performance by avoiding the cost of that inter-process communication.
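
The pattern looks much like the database case. Below is a sketch of short-lived API response caching; fetch_from_api() and the 60-second TTL are hypothetical, standing in for a real call to another component or SaaS endpoint.

# Serve a cached API reply when available; otherwise call the API and cache it.
import json
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

def fetch_from_api(endpoint):
    # Hypothetical placeholder for a request to another service
    return {'endpoint': endpoint, 'status': 'ok'}

def get_api_response(endpoint):
    cache_key = f'api:{endpoint}'
    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)  # skip the inter-process call entirely
    reply = fetch_from_api(endpoint)
    r.set(cache_key, json.dumps(reply), ex=60)  # cache the reply briefly
    return reply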

Challenges and best practices for application caching

  1. Variety of solutions: There are many options for application caching, but many are built on niche technologies that are either not open source or not widely adopted, and are therefore limited to specific platforms and programming languages. They can’t cover today’s variety of deployment options, including fully managed cloud services, on-premises, and hybrid deployments.
  2. Managing cache object life-cycles: Cached data is mostly transient and becomes outdated over time. One of the biggest challenges in caching is managing the lifecycle of cache objects through efficient expiration and eviction policies. Granular control over when objects expire or are evicted is required to avoid a constantly growing dataset or a rising number of ‘cache misses’ that force inefficient retrievals from the source (see the TTL sketch after this list).
  3. Scale to any throughput with sub-millisecond latency: Your cache should be designed to scale instantly and linearly to any foreseeable throughput while maintaining sub-millisecond latency at any load.
  4. Keeping your cache highly available with instant failover time: Application performance relies on the caching layer, so keeping your cache layer highly available is critical to maintaining your SLA with your customers. Because your cache may handle hundreds of thousands or even millions of operations per second, every second of downtime can have an extreme effect on performance and on your ability to deliver that SLA.
  5. Globally distributed: As more and more applications are deployed across multiple clouds and regions and are consumed by mobile users, deploying your cache layer in a globally distributed manner is becoming a critical requirement. It’s a real challenge to manage a globally distributed caching system that guarantees local sub-millisecond latency while also resolving dataset conflicts across deployment sites.
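
To illustrate the lifecycle controls mentioned in item 2, here is a minimal sketch of per-key TTLs with redis-py; the key name and TTL values are arbitrary examples.

# Attach a time-to-live at write time, adjust it later, and inspect it.
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

r.set('report:daily', 'cached result', ex=3600)  # expire in one hour
r.expire('report:daily', 600)                    # tighten the TTL to 10 minutes
print(r.ttl('report:daily'))  # seconds left; -1 means no TTL, -2 means gone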

Why Redis Enterprise?

  1. Open source (OSS) Redis is today’s number one choice for application caching. Redis has had more than 2 billion downloads/launches (5M+ per day) on Docker Hub, and supports 50+ programming languages and 150+ client libraries. It is already integrated as the default caching layer in most application deployment platforms. Redis Enterprise was designed around OSS Redis by the same people who created and maintain open source Redis, and it delivers enterprise-grade capabilities to your caching layer.
  2. Redis was designed from the ground up with two important capabilities for managing your cache lifecycle:
    1. Built-in data expiry: lets you control how long an object remains active in your dataset, supporting both active and lazy expiry mechanisms
    2. Built-in eviction mechanism: lets you determine which objects are evicted when your cache reaches its memory limit (a configuration sketch follows this list)
  3. Redis Enterprise can scale instantly and linearly to almost any throughput needed, while keeping latency at sub-millisecond levels. In a recent benchmark, Redis Enterprise exceeded 200M ops/sec with just a 40-node cluster running in a standard cloud environment.
  4. Redis Enterprise includes multiple high-availability mechanisms to guarantee recovery from failure events such as a process failure, node failure, zone/rack/data center failure, or even a full region/multi-data-center failure, with instant, single-digit-second failover times. Furthermore, if you deploy your caching layer on Redis Enterprise Cloud, we guarantee 99.999% (five-nines) availability!
  5. Redis Enterprise Active-Active Geo-Distribution allows you to deploy your caching layer globally, so your app can read from or write to each replica as if it were a local cache, with sub-millisecond latency and a seamless conflict-resolution mechanism. The solution, based on Redis conflict-free replicated data types (CRDTs), builds on years of academic research and is backed by a well-defined consistency model.
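
As a companion to the built-in eviction mechanism above, here is a sketch of configuring eviction on a self-managed Redis instance; the 100mb limit and allkeys-lru policy are illustrative choices, and on managed deployments these settings are usually made through the service configuration rather than at runtime.

# Cap the cache's memory and evict least-recently-used keys at the limit.
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

r.config_set('maxmemory', '100mb')
r.config_set('maxmemory-policy', 'allkeys-lru')
print(r.config_get('maxmemory-policy'))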

How to implement caching with Redis

Redis is designed around the concept of data structures and can store your dataset across strings, hashes, sorted sets, sets, lists, streams, and other data structures or Redis modules.
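
For instance, a hash can cache a structured object field by field instead of serializing it into one string; this sketch assumes a hypothetical product:1001 key and field set.

# Cache a structured object in a hash and update a single field in place.
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

r.hset('product:1001', mapping={'name': 'Laptop', 'price': '999'})
r.hincrby('product:1001', 'views', 1)  # increment one field without rewriting the rest
print(r.hgetall('product:1001'))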

Using Node.js, you can retrieve and store simple string key-value pairs with the GET and SET commands of the client object, as shown here:

// Connecting the Redis client to a local instance
const redis = require('redis');
const client = redis.createClient(6379);

// Retrieving a string value from Redis if it already exists for this key
client.get('myStringKey', (err, value) => {
    if (value) {
        console.log('The value associated with this key is: ' + value);
    }
    else { // key not found
        // Storing a simple string in the Redis store
        client.set('myStringKey', 'Redis Enterprise Tutorial');
    }
});

This snippet tries to retrieve the string value associated with the myStringKey key using the GET command. If the key is not found, the SET command stores the value ‘Redis Enterprise Tutorial’ under myStringKey.

The same code can be written in Python, as shown here:

# Connecting the Redis client to a local instance
import redis

# decode_responses=True returns strings rather than raw bytes
r = redis.Redis(host='localhost', port=6379, db=0, decode_responses=True)

# Retrieving a string value from Redis if it already exists for this key
value = r.get('myStringKey')

if value is None:  # key not found
    # Storing a simple string in the Redis store
    r.set('myStringKey', 'Redis Enterprise Tutorial')
else:
    print('The value associated with this key is:', value)
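
In a real cache you would usually attach a TTL when populating the value, so stale entries expire on their own; with redis-py that is one extra argument (the 300-second TTL below is an arbitrary example).

import redis

r = redis.Redis(host='localhost', port=6379, db=0)

# Store the string with a 300-second expiry instead of keeping it forever
r.set('myStringKey', 'Redis Enterprise Tutorial', ex=300)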