10.2.2 Creating a server-sharded connection decorator

Now that we have a method to easily fetch a sharded connection, let’s use it to build a decorator to automatically pass a sharded connection to underlying functions.

We’ll perform the same three-level function decoration we used in chapter 5, which will let us use the same kind of “component” passing we used there. In addition to component information, we’ll also pass the number of Redis servers we’re going to shard to. The following listing shows the details of our shard-aware connection decorator.

Listing 10.3 A shard-aware connection decorator

# Our decorator will take a component name, as well as the number of
# shards desired.
def sharded_connection(component, shard_count, wait=1):
   # We'll then create a wrapper that will actually decorate the function.
   def wrapper(function):
      # Copy some useful metadata from the original function to the
      # configuration handler.
      @functools.wraps(function)
      # Create the function that will calculate a shard ID for keys,
      # and set up the connection manager.
      def call(key, *args, **kwargs):
         # Fetch the sharded connection.
         conn = get_sharded_connection(
            component, key, shard_count, wait)
         # Actually call the function, passing the connection and
         # existing arguments.
         return function(conn, key, *args, **kwargs)
      # Return the fully wrapped function.
      return call
   # Return a function that can wrap functions that need a sharded
   # connection.
   return wrapper

Because of the way we constructed our connection decorator, we can decorate our count_visit() function from chapter 9 almost completely unchanged. We need to be careful because we’re keeping aggregate count information, which is fetched and/or updated by our get_expected() function. Because the information stored will be used and reused on different days for different users, we need to use a nonsharded connection for it. The updated and decorated count_visit() function as well as the decorated and slightly updated get_expected() function are shown next.

Listing 10.4 A machine and key-sharded count_visit() function

# We'll shard this to 16 different machines, which will automatically
# shard to multiple keys on each machine.
@sharded_connection('unique', 16)
def count_visit(conn, session_id):
   today = date.today()
   key = 'unique:%s'%today.isoformat()
   # Our changed call to get_expected().
   conn2, expected = get_expected(key, today)
   id = int(session_id.replace('-', '')[:15], 16)
   if shard_sadd(conn, key, id, expected, SHARD_SIZE):
      # Use the returned nonsharded connection to increment our
      # unique counts.
      conn2.incr(key)

# Use a nonsharded connection to get_expected().
@redis_connection('unique')
def get_expected(conn, key, today):
   'all of the same function body as before, except the last line'
   # Also return the nonsharded connection so that count_visit() can
   # increment our unique count as necessary.
   return conn, EXPECTED[key]

In our example, we’re sharding our data out to 16 different machines for the unique visit SETs, whose configurations are stored as JSON-encoded strings at keys named config:redis:unique:0 to config:redis:unique:15. Our daily count information is stored on a nonsharded Redis server, whose configuration information is stored at the key config:redis:unique.
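If you need to populate those configuration keys by hand, a minimal sketch with plain redis-py might look like the following (the host, port, and db values are placeholder assumptions, not part of the original example; chapter 5’s configuration helpers could be used instead):

import json
import redis

conn = redis.Redis()   # the server that holds our configuration keys

# One JSON-encoded configuration string per shard server. The host,
# port, and db values are placeholders; substitute your own servers.
for shard_id in range(16):
   config = {'host': '10.0.0.%s' % (shard_id + 1), 'port': 6379, 'db': 0}
   conn.set('config:redis:unique:%s' % shard_id, json.dumps(config))

# The nonsharded server that holds the daily aggregate counts.
conn.set('config:redis:unique',
   json.dumps({'host': '10.0.0.100', 'port': 6379, 'db': 0}))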

MULTIPLE REDIS SERVERS ON A SINGLE MACHINE

This section discusses sharding writes to multiple machines in order to increase total memory available and total write capacity. But if you’re feeling limited by Redis’s single-threaded processing limit (maybe because you’re performing expensive searches, sorts, or other queries), and you have more cores available for processing, more network available for communication, and more available disk I/O for snapshots/AOF, you can run multiple Redis servers on a single machine. You only need to configure them to listen on different ports and ensure that they have different snapshot/AOF configurations, as sketched below.
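For illustration only, two instances on the same machine might differ in just a few configuration directives (the paths and ports below are example values, not recommendations):

# redis-6379.conf
port 6379
dir /var/lib/redis/6379
dbfilename dump-6379.rdb
appendonly yes
appendfilename "appendonly-6379.aof"

# redis-6380.conf
port 6380
dir /var/lib/redis/6380
dbfilename dump-6380.rdb
appendonly yes
appendfilename "appendonly-6380.aof"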

ALTERNATE METHODS OF HANDLING UNIQUE VISIT COUNTS OVER TIME

With the use of SETBIT, BITCOUNT, and BITOP, you can actually scale unique visitor counts without sharding by using an indexed lookup of bits, similar to what we did with locations in chapter 9. A library that implements this in Python can be found at https://github.com/Doist/bitmapist.
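As a rough sketch of that bitmap idea (this is not taken from bitmapist; the mapping from users to bit offsets is an assumption and would need to stay stable over time), we could do something like the following:

import redis
from datetime import date

def record_visit_bitmap(conn, user_index, day):
   # Set one bit per user on a per-day bitmap key.
   conn.setbit('unique:bits:%s' % day.isoformat(), user_index, 1)

def count_unique_visits(conn, day):
   # BITCOUNT returns the number of set bits: the unique visitor count.
   return conn.bitcount('unique:bits:%s' % day.isoformat())

def count_unique_visits_over(conn, days):
   # BITOP OR unions several daily bitmaps; BITCOUNT then gives the
   # number of unique visitors over the whole period.
   keys = ['unique:bits:%s' % d.isoformat() for d in days]
   conn.bitop('or', 'unique:bits:union', *keys)
   return conn.bitcount('unique:bits:union')

conn = redis.Redis()
record_visit_bitmap(conn, 1021, date.today())
print(count_unique_visits(conn, date.today()))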

Now that we have functions to get regular and sharded connections, as well as decorators to automatically pass regular and sharded connections, using Redis connections of multiple types is significantly easier. Unfortunately, not all operations that we need to perform on sharded datasets are as easy as a unique visitor count. In the next section, we’ll talk about scaling search in two different ways, as well as how to scale our social network example.