Documentation - Redise Pack

A guide to Redise Pack installation, operation and administration


5.1.2 Common logs

If you’ve been running log_recent(), you’ll probably discover that although it’s useful for getting an idea of what’s happening right now, it’s not very good at telling you whether any important messages were lost in the noise. By recording how often each message appears, you can look through the messages ordered by frequency to determine what’s important.

A simple and useful way of knowing how often a message appears is by storing the message as a member of a ZSET, with the score being how often the message appears. To make sure that we only see recent common messages, we’ll rotate our record of common messages every hour. So that we don’t lose everything, we’ll keep the previous hour’s worth of common messages. Our code for keeping track of and rotating common log messages is shown next.
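The counting idea itself is independent of Redis: each message is a ZSET member whose score is its count, and reading members in descending score order surfaces the most common messages. Here is a minimal pure-Python sketch of that pattern (using a plain dict in place of a ZSET so it runs without a Redis server; zincrby() and zrevrange() below are stand-ins for the matching Redis commands, not code from the book):

```python
# A pure-Python stand-in for the ZSET counting pattern: ZINCRBY maps to
# incrementing a member's score, ZREVRANGE to reading members by
# descending score.
counts = {}

def zincrby(member, amount=1):
    counts[member] = counts.get(member, 0) + amount

def zrevrange(start, stop):
    ordered = sorted(counts, key=lambda m: -counts[m])
    return ordered[start:stop + 1]

for message in ['cache miss', 'slow query', 'cache miss', 'cache miss']:
    zincrby(message)

print(zrevrange(0, 1))  # most common message first: ['cache miss', 'slow query']
```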

Listing 5.2 The log_common() function

def log_common(conn, name, message, severity=logging.INFO, timeout=5):
    # Depends on the time, logging, datetime, and redis modules, plus the
    # SEVERITY map and log_recent() function from listing 5.1.
    # Handle the logging level.
    severity = str(SEVERITY.get(severity, severity)).lower()
    # Set up the destination key for keeping common logs.
    destination = 'common:%s:%s' % (name, severity)
    # Keep a record of the start of the hour for this set of messages.
    start_key = destination + ':start'
    pipe = conn.pipeline()
    end = time.time() + timeout
    while time.time() < end:
        try:
            # Watch the start-of-hour key for changes that only happen
            # at the beginning of the hour.
            pipe.watch(start_key)
            # Get the current time and find the start of the current hour.
            now = datetime.utcnow().timetuple()
            hour_start = datetime(*now[:4]).isoformat()

            existing = pipe.get(start_key)
            # Set up the transaction.
            pipe.multi()
            # If the current list of common logs is for a previous hour...
            if existing and existing < hour_start:
                # ...move the old common log information to the archive.
                pipe.rename(destination, destination + ':last')
                pipe.rename(start_key, destination + ':pstart')
                # Update the start of the current hour for the common logs.
                pipe.set(start_key, hour_start)

            # Actually increment our common counter.
            pipe.zincrby(destination, message)
            # Call the log_recent() function to record this message,
            # and rely on its call to execute().
            log_recent(pipe, name, message, severity, pipe)
            return
        except redis.exceptions.WatchError:
            # If we got a watch error from someone else archiving, try again.
            continue
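With messages accumulating in the ZSET, reading them back is a single ZREVRANGE call. The helper below is a hypothetical sketch (common_messages() is not from the book), shown against a tiny in-memory stand-in for a redis-py connection so it runs without a server; with a real connection you would pass the redis.Redis client instead:

```python
def common_messages(conn, name, severity='info', top=10):
    # Hypothetical helper: fetch the current hour's most common
    # messages, highest count first, with their counts.
    return conn.zrevrange('common:%s:%s' % (name, severity), 0, top - 1,
                          withscores=True)

class FakeConn:
    # Minimal stand-in mimicking redis-py's zrevrange() for this sketch.
    def __init__(self, data):
        self.data = data
    def zrevrange(self, key, start, stop, withscores=False):
        items = sorted(self.data.get(key, {}).items(), key=lambda kv: -kv[1])
        items = items[start:stop + 1]
        return items if withscores else [member for member, _ in items]

conn = FakeConn({'common:app:info': {'cache miss': 42.0, 'slow query': 7.0}})
print(common_messages(conn, 'app'))  # [('cache miss', 42.0), ('slow query', 7.0)]
```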

This logging function was more involved than the recent logging function, primarily because we needed to be careful when rotating old logs. That’s why we used the WATCH/MULTI/EXEC transaction to rename the ZSET and rewrite the key that tells us which hour the current data is for. We also passed the pipeline through to the log_recent() function to minimize round trips to Redis while keeping track of both common and recent logs.
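One subtlety worth noting: the existing < hour_start check compares ISO-8601 timestamp strings, which works because ISO-8601 lexicographic order matches chronological order. (Under Python 3, conn.get() returns bytes by default, so you would create the client with decode_responses=True, or decode before comparing.) A quick check of the string-comparison property:

```python
from datetime import datetime

# ISO-8601 timestamps sort lexicographically in chronological order,
# which is what makes the string comparison in log_common() valid.
previous_hour = datetime(2023, 6, 1, 9).isoformat()   # '2023-06-01T09:00:00'
current_hour = datetime(2023, 6, 1, 10).isoformat()   # '2023-06-01T10:00:00'
print(previous_hour < current_hour)  # True, so the old hour gets archived
```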

Now that we’ve started to gather information about running software in Redis by storing recent and common logs, what other kinds of information would be useful to keep in Redis?