6.2 Distributed locking
Generally, when you “lock” data, you first acquire the lock, giving you exclusive access
to the data. You then perform your operations. Finally, you release the lock to others.
This sequence of acquire, operate, release is pretty well known in the context of
shared-memory data structures being accessed by threads. In the context of Redis,
we've been using WATCH as a replacement for a lock, and we call it optimistic locking:
rather than actually preventing others from modifying the data, we let them, and our
transaction fails (so we can retry) if someone else changes the data before we commit
our own changes.
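To make the optimistic-locking pattern concrete, here's a minimal sketch of a WATCH-based counter increment using the redis-py pipeline API (`watch()`, `multi()`, `execute()`). The function name `watched_increment` is our own illustration, not from the library; the `WatchError` fallback class only exists so the sketch loads without redis-py installed.

```python
try:
    from redis.exceptions import WatchError  # raised when a WATCHed key changes
except ImportError:
    class WatchError(Exception):  # stand-in so the sketch runs without redis-py
        pass

def watched_increment(conn, key):
    """Increment key transactionally, using WATCH for optimistic locking.

    conn is assumed to be a redis-py client (or anything duck-typed like one).
    """
    with conn.pipeline() as pipe:
        while True:
            try:
                pipe.watch(key)                # EXEC fails if key changes from here on
                value = int(pipe.get(key) or 0)
                pipe.multi()                   # start queuing the transaction
                pipe.set(key, value + 1)
                pipe.execute()                 # raises WatchError if key was modified
                return value + 1
            except WatchError:
                continue                       # someone else changed the key; retry
```

Note that nothing here blocks other clients: any of them can write the key at any time, and we simply detect the conflict at EXEC time and loop.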
With distributed locking, we have the same sort of acquire, operate, release operations,
but instead of having a lock that’s only known by threads within the same process,
or processes on the same machine, we use a lock that different Redis clients on
different machines can acquire and release. When and whether to use locks or WATCH
depends on the application: some applications don't need locks to operate correctly,
some only require locks for parts of their operations, and some require locks at every step.
One reason why we spend so much time building locks with Redis instead of using
operating system–level locks, language-level locks, and so forth, is a matter of scope.
Clients want to have exclusive access to data stored on Redis, so clients need to have
access to a lock defined in a scope that all clients can see—Redis. Redis does have a
basic sort of lock already available as part of the command set (SETNX), which we use,
but it’s not full-featured and doesn’t offer advanced functionality that users would
expect of a distributed lock.
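As a starting point, here's a minimal sketch of what a SETNX-based lock might look like with a redis-py client. The helper names (`acquire_lock`, `release_lock`), the `lock:` key prefix, and the timeout values are our own illustrative choices, not a standard API.

```python
import time
import uuid

def acquire_lock(conn, lockname, acquire_timeout=10):
    """Try to acquire a lock; return a unique identifier on success, False on timeout.

    conn is assumed to be a redis-py client.
    """
    identifier = str(uuid.uuid4())   # unique token proving who holds the lock
    end = time.time() + acquire_timeout
    while time.time() < end:
        # SETNX sets the key only if it doesn't already exist, so at most
        # one client can create the lock key and thus hold the lock.
        if conn.setnx('lock:' + lockname, identifier):
            return identifier
        time.sleep(.001)
    return False

def release_lock(conn, lockname, identifier):
    """Naive release: delete the lock key only if we still appear to hold it."""
    key = 'lock:' + lockname
    # With a real redis-py connection, GET returns bytes unless the client
    # was created with decode_responses=True.
    if conn.get(key) == identifier:
        conn.delete(key)
        return True
    return False
```

This sketch already shows the gaps mentioned above: the check-then-delete in `release_lock` is racy, and a client that crashes while holding the lock leaves it stuck forever, since nothing expires the key.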
Throughout this section, we'll talk about how an overloaded WATCHed key can
cause performance issues, and we'll build a lock piece by piece until we can replace
WATCH in some situations.