Imagine visiting an amusement park, consuming large quantities of all four primary amusement park food groups (corn dogs, funnel cake, snow cones, and cotton candy, of course), and then spinning around wildly on the Tilt-A-Whirl. The volume and variety (and dare we suggest velocity?) of food you’ve just consumed is likely to overwhelm your stomach, causing discomfort and forcing it to slow down and regroup in order to accommodate the day’s indulgences.
But your software stack isn’t allowed the luxury of slowing down, not even for a fraction of a millisecond; it needs to handle whatever data is thrown at it without breaking a sweat or popping a Tums. Fast data ingest and processing is an increasingly common requirement for big data use cases, and internal stakeholders and customers alike have come to expect (and rely on) the real-time decision making it enables.
Collecting, storing, and processing large volumes of high-variety, high-velocity data presents several complex design challenges—especially in fields like IoT, e-commerce, security, communications, entertainment, finance, and retail.
And this is where Redis has a real advantage. It tackles these challenges with an ease that is unmatched by other fast data ingest solutions, which tend to be complex and over-engineered for simple requirements.
In fact, in an independent survey of Redis Labs customers conducted by TechValidate, 74% reported using Redis for messaging and data ingest.
We’ve outlined a few of the standout features that make Redis so ideally suited for messaging, data streaming, and fast data ingest.
When it comes to performance, Redis has been benchmarked handling over one million read/write operations per second, with sub-millisecond latencies, on a single modest-sized commodity cloud instance. This makes Redis the most resource-efficient NoSQL database on the market.
Redis offers a variety of data structures, such as Lists, Sets, Sorted Sets, and Hashes, that provide simple, versatile data processing and make it efficient to combine high-speed data ingest with real-time analytics. By using a data structure purpose-built for the objective at hand (e.g. time-series analysis, spatial analysis, or machine learning), analytics operations get a big performance boost. These structures not only store variably structured data more elegantly; they also come with built-in operations that perform complex analytics in-database, right in memory where the data is stored.
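As a minimal sketch of this pattern, consider ingesting time-stamped sensor readings into a Sorted Set: the score is the timestamp, so Redis keeps the data time-ordered on write, and a time-window query runs in-database via ZRANGEBYSCORE with no client-side scan. The `client` parameter is assumed to be a redis-py `redis.Redis(decode_responses=True)` instance; the sensor and key names are hypothetical, for illustration only.

```python
def ingest_reading(client, sensor_id, timestamp, value):
    """Store one reading in a Sorted Set keyed per sensor.

    The score is the timestamp, so Redis maintains time order
    automatically. The member embeds the value so identical values
    recorded at different times remain distinct members.
    """
    key = f"readings:{sensor_id}"  # hypothetical key naming scheme
    client.zadd(key, {f"{timestamp}:{value}": timestamp})


def readings_between(client, sensor_id, start_ts, end_ts):
    """Return all readings whose timestamps fall in [start_ts, end_ts].

    ZRANGEBYSCORE executes inside Redis, in memory where the data
    lives, and returns members already sorted by time.
    """
    return client.zrangebyscore(f"readings:{sensor_id}", start_ts, end_ts)
```

With a live server this would be called as, e.g., `readings_between(redis.Redis(decode_responses=True), "s1", 150, 300)`; the same two functions also work against any object exposing `zadd`/`zrangebyscore`.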
Another performance boost comes from Redis’ publish/subscribe capabilities, which allow it to act as an efficient message broker between geographically distributed data ingest nodes. Data-producing applications publish streaming data to channels in the format(s) required, and consuming applications subscribe to the channels relevant to them, receiving messages asynchronously as they are published.
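A hedged sketch of that producer/consumer split, assuming the redis-py client with `decode_responses=True` (the `ingest:<region>` channel naming and the payload shape are illustrative assumptions, not part of the Redis API): producers call PUBLISH on a channel, and each consumer drains its subscribed channels through a `PubSub` object.

```python
import json


def publish_reading(client, region, payload):
    """Producer side: publish one event to a region-specific channel.

    PUBLISH fans the message out to every current subscriber and
    returns the number of subscribers that received it.
    """
    return client.publish(f"ingest:{region}", json.dumps(payload))


def drain_messages(pubsub, handler):
    """Consumer side: process all pending messages without blocking.

    `pubsub` is assumed to behave like redis-py's PubSub after
    subscribe() has been called; get_message() returns None when no
    message is waiting. Subscription confirmations are skipped, and
    `handler(channel, payload)` is invoked for each data message.
    """
    while True:
        msg = pubsub.get_message()
        if msg is None:
            break
        if msg["type"] == "message":
            handler(msg["channel"], json.loads(msg["data"]))
```

In a live deployment the consumer would typically loop on `pubsub.listen()` (blocking) instead; `drain_messages` uses the non-blocking form so it can be driven from an existing event loop.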
For datasets measured in terabytes, Redis Enterprise from Redis Labs is enhanced to run on a combination of RAM and Flash memory. The ability to simultaneously store keys and hot values in RAM and cold values in more cost-effective Flash drastically reduces operational costs without compromising responsiveness. For larger datasets, which are fast becoming the norm, this is the most cost-effective way to deliver in-memory performance at the scale of big data.
We recently published a white paper titled Redis for Fast Data Ingest. It provides an overview of the challenges encountered when designing fast data ingest solutions and shows how Redis addresses them with very little effort. For those of you who like to dive deep, it includes a sample fast data ingest scenario and three different methods, complete with code (oh yeah!), for implementing it.
As your organization’s big data needs grow, remember that achieving fast ingest and real-time processing of high-volume, high-variety, high-velocity data doesn’t have to give you a headache—or your system heartburn. Just use Redis Enterprise.