Getting Started with Redis
From zero to Redis hero. In this session, we will start from scratch and show you how to get started using Redis in your application, from the requirements for connecting to Redis through to a few basic operations. This session is designed for people who have never touched Redis before but want to get up to speed quickly.
CRDTs are the future of eventual consistency and bring a whole new set of tradeoffs compared to quorum-based protocols. Learn how to keep a globally distributed database in sync while staying available through network partitions. We’ll also cover how to elegantly handle edge cases by making full use of the algebraically-verifiable properties of CRDTs.
Redis + k8s
Kubernetes is the current king of container orchestration. If you are new or just getting started with Kubernetes, this is the session for you! We will review K8s fundamental functions and concepts and get hands-on experience deploying a K8s application powered by Redis on a K8s cluster. We’ll also demonstrate the ease of deploying a Redis Enterprise Cluster with a K8s Operator deployment.
Break out of Redis data structures and see how Redis can power instant full-text search. In this session, we’ll show you how data is indexed in RediSearch and how to perform both simple and advanced queries. The session will also cover how to effectively get your data into RediSearch.
Learn how Redis can solve high performance graph problems with RedisGraph. In this session, we will talk about what makes RedisGraph quick, how graph data is consumed by clients, and how to use Cypher, a declarative graph query language, to both update and query graphs. We’ll walk you through the features we have built since RedisGraph became GA and what’s on the roadmap. We’ll also bring two RedisGraph users on stage to talk about their experiences. Matt Rodkey, Principal Offering Manager @ IBM, will discuss how they use RedisGraph for their Multicloud Manager. Sébastien Heymann, CEO and founder at Linkurious, will show what can be achieved with the upcoming features. This will be a high-level session touching upon RedisGraph use cases; there is a more technical deep dive on the second day.
Probabilistic Data Structures
Bloom Filters, HyperLogLog and more. Learn how to leverage probabilistic (p11c) data structures for both business needs and analytics. In this session, you will see how p11c data structures can be used as an advanced caching mechanism, and how their combined use can help you keep track of inventory, customers and more — all with a very low memory footprint.
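By way of illustration, a Bloom filter (one of the structures named above) answers set-membership queries with a tunable false-positive rate and zero false negatives, in a fraction of the memory an exact set would need. A minimal pure-Python sketch of the idea (RedisBloom’s actual implementation differs; all names here are illustrative):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: membership with false positives, never false negatives."""
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive k bit positions from a single digest of the item.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.num_hashes):
            chunk = int.from_bytes(digest[i * 4:(i + 1) * 4], "big")
            yield chunk % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

The same trade (a small, fixed error rate for a tiny memory footprint) underlies HyperLogLog and the other structures the session covers.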
Redis Streams are here. From distributed logging up to event sourcing, Streams can help you solve a whole new class of problems with Redis. In this session, learn the advantages of using Streams in general and how this data structure differs from other Redis data structure patterns.
Redis as a Primary Data Store
Redis is more than a cache. Learn how you can build applications using Redis as your single source of truth. We will show you essential tips and tricks plus potential pitfalls of using Redis as your primary database. In this session, we will dive into data modeling, transactions, leveraging modules, durability and persistence, as well as handling large data sets with Redis on Flash.
Redis Clustering (Enterprise, Cluster Mode & Open Source)
So, you’ve outgrown a single instance of Redis because of too much data or high throughput needs. In this session, you will learn about options for clustering and how they affect your application. We will go over smart clients, keyspace sharding and performance differences between all the possible topologies.
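As background for the keyspace-sharding discussion, here is an illustrative Python sketch of how open-source Redis Cluster maps a key to one of its 16384 hash slots (CRC16-XMODEM of the key, honoring hash tags); smart clients use exactly this mapping to route commands to the right node:

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16-CCITT (XMODEM variant): poly 0x1021, init 0, no reflection.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Hash-tag rule: if the key contains a non-empty "{...}" section,
    # only that substring is hashed, so related keys land on one slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384
```

For example, `{user1000}.following` and `{user1000}.followers` share a slot, which is what makes multi-key operations on them possible in a cluster.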
Chris Richardson, Author of Microservices Patterns, Creator of the original Cloud Foundry
One obstacle to DevOps-style development is complexity. Key business applications are large, unwieldy monoliths, and so it’s impossible to rapidly and safely deploy changes. The solution is to adopt a microservice architecture – an architectural style that has the testability and deployability necessary for DevOps. In a microservice architecture each service’s database is private to that service in order to ensure loose coupling. As a result, it’s challenging to implement transactions and queries that span services. The solution is to implement transactions using the Saga pattern and queries using the CQRS pattern.
By the end of this Microservices Workshop, through a combination of lectures, discussions and hands-on labs, attendees will understand:
- Essential characteristics of the Microservice architecture, its benefits, drawbacks and when to use it
- The challenges of distributed data management in a Microservice architecture and how to solve them using the Saga and CQRS patterns
- Implementing the Saga and CQRS patterns using Redis and Eventuate
The workshop is intended for users of all programming languages. Sample code will be provided in Java.
Preparing for Success: How we plan to throw our DB out the window
Eran Koren, CTO, Moon Active
Moon Active, a social mobile game developer with millions of daily active players, uses Redis as its primary DB. In this talk they will explain how they dynamically resize and rebalance their players across multiple DBs on a daily basis, increasing reliability, lowering costs and handling explosive growth when it comes.
Redis-SGX: Secure Redis with Intel SGX
Dmitrii Kuvaiskii, Research Scientist, Intel
Redis is a wildly popular in-memory database/cache; alas, it still lacks basic integrity and confidentiality protection. To retrofit end-to-end security in Redis, we built a prototype that combines Intel SGX technology, the Graphene-SGX shielded execution framework, transparent remote attestation and TLS network shielding, and our minimal optimizations to Redis. Additionally, we deployed Redis in cluster mode for scalability. Our Redis-SGX encrypts and integrity-protects all client data in transit and in use, while maintaining the same throughput level as the original unprotected version (over a 1Gbps network). This security improvement comes with a modest porting effort: approx. 700 lines of code modified, or 1% of the code.
Enable a stateful interaction with Redis in serverless architecture
Pyounguk Cho, Director of Product Management, Oracle
Serverless computing has become a new standard architecture for modern cloud native applications as it gives developers virtually infinite elasticity without having to deal with underlying infrastructure. While serverless architecture works more naturally for stateless applications, as it is designed with a focus on scalability, it can also be used for stateful applications without sacrificing overall performance by leveraging in-memory Redis caching. Attend this session to learn how to design and deploy a stateful application with Redis and serverless functions with end-to-end scalability and performance.
(Serverless + Redis) != Stateless
Avner Braverman, Co-founder & CEO, Binaris Inc.
Serverless functions are capturing the imagination of developers all around. This talk will discuss how Redis can become a fundamental building block in Serverless architectures. Join Avner Braverman, co-founder and CEO of Binaris, as he explores “state-full” services and streaming architectures built with Redis and serverless functions. The talk will include code and demos combining serverless with Redis to accelerate developer productivity and create differentiated solutions.
Bootiful Spring-Redis Applications with Josh Long and Mario Gray
Mario Gray, Principal Technologist, Pivotal
Exploring use cases and their implementation styles using Spring Boot and Redis. Use cases include geo-location, event streaming via pub-sub, and stashing data in Redis as an ordinary data store.
Designing a Cluster Manager and Proxy
Kevin Xiao, Senior Software Engineer
In-house Redis clustering solutions typically end up using twemproxy. However, twemproxy lacks a few critical features, such as hot config swapping, better server blackout logic, advanced logic at the proxy level, multi-threading, etc. I’ll be sharing my experiences writing redis-flare proxy, a Redis proxy written in Rust that targets those features without sacrificing performance. In the first half, I will go over the need for these features in an in-house clustering solution that scales up to 6 million qps, pitfalls I ran into, and various benchmark results I received when testing various settings. I will talk about other solutions we’ve tested (Redis Enterprise Cluster, open-source Redis Cluster), and also talk about our specific use cases. Then I will dive into the details of the proxy implementation, focusing on various edge cases to be aware of, as well as details about handling the Redis protocol. Finally, I will talk about the viability of Rust for the development of the proxy, and discuss the feasibility of using Rust for future Redis module development.
History of Redis replication and future prospects
Zhao Zhao, Senior Engineer, Alibaba
A review of SYNC, PSYNC, and PSYNC2; a look forward to PSYNC3; and an introduction to a new AOF-based replication mechanism that has been used at Alibaba.
Common Redis Use Cases for GraphQL
Ben Awad, Software Consultant, Benjamin Awad Consulting
This talk will explain how to use Redis with MySQL or MongoDB in GraphQL projects. We’ll use Redis for storing session data, creating temporary tokens for resetting passwords, rate limiting, GraphQL subscriptions, caching and more.
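As an illustration of the rate-limiting use case mentioned here, a common Redis approach is a fixed-window counter (INCR a per-user, per-window key and EXPIRE it). Below is an in-process Python sketch of the same pattern, with an injectable clock for testing; the class and parameter names are invented for illustration:

```python
import time

class FixedWindowRateLimiter:
    """In-process sketch of the Redis INCR + EXPIRE rate-limiting pattern."""
    def __init__(self, limit, window_seconds, clock=time.time):
        self.limit = limit
        self.window = window_seconds
        self.clock = clock
        self.counters = {}  # (key, window_id) -> request count

    def allow(self, key):
        # Bucket requests by which window they fall into; in Redis this
        # would be INCR on "rate:{key}:{window_id}" plus an EXPIRE.
        window_id = int(self.clock() // self.window)
        bucket = (key, window_id)
        count = self.counters.get(bucket, 0) + 1
        self.counters[bucket] = count
        return count <= self.limit
```

A GraphQL server would call `allow(user_id)` in a resolver middleware and reject the request when it returns False.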
Geo-replicated Databases for Disaster Recovery using CRDT
Jude Cadet, Platform Architecture and Engineering, Fiserv
Fiserv’s presence as a leader in the finance industry demands increased uptime and availability. Redis geo-replicated databases provide a strong disaster recovery solution where data can be replicated to another region, where another application instance can connect to the same data as production. This allows us to support our applications, especially critical applications, in a highly reliable, redundant, and distributed fashion. We will highlight how the capabilities and services provided by the Redis Enterprise product support the realization of Fiserv’s product offering. Special attention will be paid to the capabilities of the Conflict-Free Replicated Database in support of a large-scale deployment.
How Kong uses Redis
Kong is the most popular open-source API platform. It just launched its 1.0 version with 20,000 stars, 38,000 community members and 100+ enterprise customers. Under the hood, Kong uses Redis for some of its key functions, including rate limiting, HTTP caching, and others. In his talk, Guanlan will draw on years of practical experience to share stories and knowledge about using Redis in API platform design, which enables rate limiting and caching in a scalable and performant way. He will also cover the microservices architecture and how Redis fits into it. Other technologies he will discuss include Lua, OpenResty, NGINX, Kong, and API management. Key takeaways include: techniques and tricks for using Redis for large-scale rate limiting and response caching, how microservices and service mesh evolve, and how OpenResty and Redis interact.
Using Redis to Supercharge Game Input
Daniel Lindeman, Software Engineer, Very, Inc.
At a high level, the typical ‘main’ function for a video game is an infinite loop that can be boiled down to three stages: one, receive input; two, update game objects; three, render the results. In this talk I’ll show how Redis is a great way to supercharge the receive-input stage. While many game frameworks allow access to device input like keyboard input or mouse location, it is often a good idea to wedge a layer of abstraction between input and game logic to allow players to remap controllers to their liking. Using Redis Pub/Sub, we can take this pattern to the next level and enable our game to take input from multiple devices, easily add marquee mode, and leverage AI agents.
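To make the abstraction layer concrete, here is a small in-process Python stand-in for Redis Pub/Sub showing how a remapping layer decouples raw device channels from logical game-action channels, so two devices can drive the same action (all channel names and bindings are invented for illustration; a real version would use a Redis client’s SUBSCRIBE/PUBLISH):

```python
from collections import defaultdict

class PubSub:
    """Tiny in-process stand-in for Redis Pub/Sub channels."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, channel, callback):
        self.subscribers[channel].append(callback)

    def publish(self, channel, message):
        for callback in self.subscribers[channel]:
            callback(message)

# Devices publish on raw channels; the remapping layer republishes them
# as logical actions, so game logic only ever subscribes to actions and
# players can rebind keys (or add devices) without touching that logic.
bus = PubSub()
bindings = {"key:w": "action:jump", "pad:a": "action:jump"}
for raw, logical in bindings.items():
    bus.subscribe(raw, lambda msg, ch=logical: bus.publish(ch, msg))
```

Swapping the binding dict at runtime is all it takes to let a player remap a controller.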
Beyond PFCount: Creating a query the Redis way with HyperMinHash (LiveRamp’s Tiny Big Data Counting Engine)
Shrif Nada, Software Engineer, LiveRamp
PFCount is Redis’ awesome & memory-efficient way of approximating the cardinality of a set, or the cardinality of a union of sets. Wanna know the total number of elements in the union of some sets? Redis’ HyperLogLog-backed PFCount lets you do that in loglog space (relative to the number of distinct elements) and linear time (relative to the number of sets)! Pretty amazing stuff indeed. But HyperLogLog sketches (and therefore PFCount) lack the ability to approximate an important set operation: cardinality of intersection. In this talk, we discuss HyperMinHash: a state-of-the-art probabilistic data structure which can approximate set, union, and intersection cardinalities with the same efficiency & accuracy as HyperLogLog! At LiveRamp, we’ve been using HyperMinHash backed by Redis to power some of our core products for some time now, and we’d love to share our experience with all of you. Algorithms: HyperMinHash. Technologies: Redis. Takeaways: loglog-space counting of set, union, and intersection cardinality backed by Redis.
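To make the intersection idea concrete, here is an illustrative pure-Python MinHash sketch: the fraction of matching signature components estimates Jaccard similarity, and multiplying that by a union-cardinality estimate yields the intersection size. HyperMinHash compresses these signatures much further (that is its contribution), so treat this only as a conceptual sketch, not the talk’s algorithm:

```python
import hashlib

def minhash_signature(items, k=128):
    """One minimum 64-bit hash per seed, approximating k random permutations."""
    items = list(items)
    sig = []
    for seed in range(k):
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(f"{seed}:{x}".encode(), digest_size=8).digest(),
                "big")
            for x in items))
    return sig

def jaccard_estimate(sig_a, sig_b):
    # Fraction of matching components estimates |A ∩ B| / |A ∪ B|.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def intersection_estimate(sig_a, sig_b, union_cardinality):
    # |A ∩ B| ≈ J(A, B) × |A ∪ B|; HyperMinHash pairs this with a
    # HyperLogLog-style union estimate to stay in sublinear space.
    return jaccard_estimate(sig_a, sig_b) * union_cardinality
```

The union cardinality itself would come from a PFCount-style estimate, which is exactly the pairing the talk describes.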
Bambleweeny: Redis with HTTP & OAuth
Uli Hitzel, Developer Advocate, Axway
We’re all dreaming of running Google-scale services, but let’s be honest – the scenarios we usually deal with are many orders of magnitude smaller. Yet, we tend to build systems that are over-engineered, difficult to operate, and still don’t meet our performance needs. What if there was an easier way for apps, backends, and distributed systems to ingest and exchange data and messages? Bambleweeny is a lightweight HTTP/REST based key-value store and message broker that offers identity, access & quota management. It’s fast, easy to use, and well-documented. Written in Python, using a Redis backend, deployable in a tiny container. In this talk, we introduce this fast-growing open source project, talk about use cases, architecture and hope to get more contributors!
Resilient testing at scale using Redis queues
Aaron Evans, Solutions Architect, Sauce Labs
Test automation challenges include false positives, retries, and slow test execution. I leveraged a few simple patterns using Redis queues and pub/sub to run tests faster in parallel, deal with flaky tests, and learn more about software quality by gathering analytics from test data and predicting and preventing test failures with machine learning. Having worked with many organizations large and small and seen the same challenges faced by software development teams as they try to incorporate test automation, I’ve learned a few simple patterns that make testing better, faster, and more reliable. UI tests can be particularly challenging because they are slow, flaky, and expensive to maintain. There are some practices that can make this better, but it often means rewriting a framework with thousands of tests and building infrastructure to scale testing in parallel. I will show how you can use Redis to manage a queue of distributed test workers running in parallel to execute tests. Then I will show how you can leverage the test data gathered by your workers and stored in Redis using ReJSON and RediSearch to understand and improve test quality and make predictions about where tests will fail and which platforms and features are likely to cause issues with your software delivery in the future.
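One of the simple queue patterns this kind of setup tends to rely on is the Redis “reliable queue”: LPUSH jobs onto a pending list, have each worker RPOPLPUSH into a per-worker processing list, and remove a job only on explicit acknowledgment, so a crashed worker’s jobs can be re-queued. A minimal in-process Python sketch of that pattern (an illustrative stand-in, not the speaker’s code):

```python
from collections import deque

class ReliableQueue:
    """Sketch of the Redis LPUSH / RPOPLPUSH / ack reliable-queue pattern."""
    def __init__(self):
        self.pending = deque()
        self.processing = {}  # worker_id -> jobs currently in flight

    def push(self, job):
        self.pending.appendleft(job)            # LPUSH

    def fetch(self, worker_id):
        if not self.pending:
            return None
        job = self.pending.pop()                # RPOPLPUSH to processing list
        self.processing.setdefault(worker_id, []).append(job)
        return job

    def ack(self, worker_id, job):
        self.processing[worker_id].remove(job)  # LREM from the processing list

    def requeue_dead_worker(self, worker_id):
        # A monitor re-queues anything a crashed worker never acked.
        for job in self.processing.pop(worker_id, []):
            self.pending.appendleft(job)
```

The per-worker processing list is what turns flaky or killed workers from lost tests into retried tests.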
Atom: The Redis Streams-Powered Robotics SDK
Dan Pipe-Mazo, CTO, Elementary Robotics
One thing that has held back the adoption of robotics by the developer ecosystem is how difficult robots are to program, often requiring complex setup, complex APIs and specific programming languages. Much of this difficulty is driven by the ability of the programming language to utilize and implement highly performant, high-throughput communication protocols. We’ve developed Atom, an SDK built around a simple specification for data publishing and command/response paradigms using Redis Streams. With Redis’s many language clients and fantastic open-source community, this SDK can be used in nearly any programming language without major performance tradeoffs. Utilizing containerization technology, users can then create reusable microservices, called “elements”, in their favorite programming language that can then be shared with the community. Some examples of elements are hardware drivers (robot, camera, Lidar), vision and path planning algorithms, and machine learning models. Elements communicate with each other through Redis’s new, highly performant feature: Redis Streams. For data publishing, Redis Streams provide connectionless, fire-and-forget paradigms. Redis then acts as an N-value last-value cache for the published data, automatically pruning itself when efficient. This allows for powerful new methods of subscribing to data. Atom exposes APIs for event-driven subscription as in traditional pub-sub, but also allows users to query for the most recent piece of data or traverse the stream linearly in controlled batch sizes. This allows each subscriber to interact with streams in its most efficient fashion, be it a machine learning model consuming camera frames as quickly as it can or a video packager consuming the same frames at 30Hz. For command and response paradigms, Redis Streams’ consumer groups allow for easy load balancing without complicated multi-threading.
Atom is a new SDK creating simple APIs in your favorite language to bring the power of Redis Streams to robotics!
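The last-value-cache and batched-traversal semantics described above can be sketched with an in-process stand-in for a capped stream. This is illustrative only; the real SDK uses Redis Streams commands (XADD with MAXLEN-style trimming, XREVRANGE for the latest entries, XRANGE for batched reads), and all names below are invented:

```python
class Stream:
    """Sketch of a capped, Redis-Streams-like log: append with trimming,
    last-value queries, and linear reads in controlled batch sizes."""
    def __init__(self, maxlen=1000):
        self.maxlen = maxlen
        self.entries = []     # (id, data) pairs, ids strictly increasing
        self.next_id = 0

    def add(self, data):                       # like XADD ... MAXLEN ~ n
        self.entries.append((self.next_id, data))
        self.next_id += 1
        if len(self.entries) > self.maxlen:
            self.entries = self.entries[-self.maxlen:]

    def latest(self, n=1):                     # like XREVRANGE + COUNT n
        return self.entries[-n:]

    def read_batch(self, after_id, count):     # like XRANGE (after_id, +]
        return [e for e in self.entries if e[0] > after_id][:count]
```

A fast ML consumer would poll `latest()`, while a 30Hz packager would walk forward with `read_batch()` at its own pace, which is the per-subscriber flexibility the abstract describes.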
Edge Compute on Microcontrollers with Redis And Spark
David Rauschenbach, CTO, Nubix
Coordinating and executing machine learning can be hard enough in the data center, let alone when you add the complexities of limited RAM, limited CPUs, and intermittent connectivity so common at the edge. In this talk, Spark expert David Rauschenbach will describe a computing architecture that enables distribution of Spark-based machine learning to the edge, utilizing Redis as both a long-term data store and an analytics engine for roll-ups from edge devices. This session will feature not only architectural advice, but also working code samples to demonstrate the process.
Running a 1000-node Redis Cluster on bare metal hardware
Dmitry Polyakovsky, Redis Seattle Meetup Organizer
Using Redis Cluster as a datastore in serverless HPC scenarios. Alternative approaches using an Envoy proxy and application-level sharding. A live demo of provisioning the cluster and running many millions of transactions.
Scaling SQL Write-Master Database Clusters with Redis Labs
Erik Brandsberg, CTO, Heimdall Data
To improve SQL database scale, horizontal scaling is often implemented by separating the write master from read-only servers. This allows the write master to perform only the “expensive” write operations and not be burdened by processing read queries. However, there is the challenge of maintaining data consistency for updates. In this session, we discuss how Heimdall Data leverages the Redis Labs Enterprise publish and subscribe interface to safely improve database scale without any modifications to your application or database.
Building an Event Sourcing system with Redis.
Gerard van Helden, Freelance Full Stack Developer
Event Sourcing is an architecture where every incoming message into the system is recorded before anything else happens. Based on these records, the entire truth of the system must be reconstructible, opening up opportunities to add precomputed business analytics “after the fact”, distributing data over different storage backends, etc. Currently in the web-o-sphere, the go-to tool for this is Kafka. Kafka implements an immutable event log and a pub/sub system for notifying subscribers of new events on a queue (or “topic”). However, Kafka comes with a lot of … “free stuff”, meaning more mental burden, more tool-specific logic, more management… How am I going to manage this with a relatively small team of junior developers? I won’t. Exit Kafka. What do we actually need for a successful event sourcing system?
- A transaction log
- Guarantees that messages are delivered to a client
- Guarantees that the client can notify the system that it is done processing
- Atomicity
- Replicability
- And a way to keep this all simple, maintainable, reliable and scalable.
Enter: Redis, a few lines of Lua code, a very simple Redis client implementation, and a few dozen lines of Java code, and presto: we have an extremely fast, simple, reliable and future-proof basis for Event Sourcing, even if you only have money and time for a single server.
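The core requirements the abstract lists (a transaction log, delivery, explicit done-processing acks, replayability) can be sketched in a few lines. This Python stand-in is purely illustrative of the shape of such a system, not the speaker’s Redis + Lua + Java implementation:

```python
class EventLog:
    """Sketch of a minimal event-sourcing core: an append-only transaction
    log plus per-consumer delivery state with explicit acknowledgments."""
    def __init__(self):
        self.events = []    # the transaction log, append-only
        self.offsets = {}   # consumer -> index of its next unacked event

    def append(self, event):
        self.events.append(event)
        return len(self.events) - 1   # sequence number

    def next_for(self, consumer):
        # The same event is redelivered until acked: at-least-once delivery.
        idx = self.offsets.get(consumer, 0)
        return self.events[idx] if idx < len(self.events) else None

    def ack(self, consumer):
        self.offsets[consumer] = self.offsets.get(consumer, 0) + 1

    def replay(self):
        # The entire truth of the system is reconstructible from the log.
        return list(self.events)
```

In the Redis version, the append and the offset bump would each be a small Lua script so they execute atomically on the server.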
Application Sharded Redis with Sentinel vs Redis Cluster: What We Learned
Patrick King, Sr Site Reliability Engineer, New Relic
Cory Johannsen, Lead Software Engineer, New Relic
Over the last year, New Relic has started to heavily leverage Redis as both a cache and a primary data store. As more and more teams used Redis, we were getting requests for larger and larger instances, which caused a number of scaling problems within our dockerized Redis deployment. Our initial solution was to use multiple Redis clusters with Sentinel and have the application manage the data sharding that went into each cluster. This worked for a while, but as soon as we needed to re-shard our data, we ran into a number of issues, some that caused outages. Enter Redis Cluster. We started to experiment pretty quickly with what Redis Cluster could help us out with and whether it would fit our current model for how we use Redis. This talk is about our journey from a single Redis instance, to multiple Redis clusters, to finally Redis Cluster. Come and find out how our journey went and what we discovered along the way!
Leveraging Kafka Eco-System for Redis Streams
Sripathi Krishnan, Founder, RDBTools & CTO, HashedIn
Kafka has a rich ecosystem of libraries, tools and best practices for streaming use cases. Is it possible to use the Kafka ecosystem on top of Redis Streams? This talk will compare and contrast Kafka with Redis Streams. We will also explore the possibility of building a Kafka-compatible client and producer that actually uses Redis Streams under the hood.
Evolving Your Distributed Cache In a Continuous Delivery World
Tyler Van Gorder, Principal Software Engineer, Build.com
A distributed cache can play a critical role in high-volume transactional environments. This talk will cover some of the unexpected challenges you may encounter with a distributed cache when multiple versions of your application are attempting to access the cache. We will discuss the caching problems we encountered as we moved our application from a monolithic deployment model to a continuous delivery model. This will include a live demo (in Java) showing how multiple versions of the same object are cached within Redis and how a version can be promoted to different versions of the application if the structure of the object being cached has not changed. A GitHub project (with a slide deck for this talk linked from it) is available: https://github.com/tkvangorder/redis-shared-cache-sample.
Techniques to Improve Cache Speed
Zohaib Sibte Hassan, Tech Lead, DoorDash
One of the challenges we face almost every day is keeping our API latency low. While the problem sounds simple on the surface, it gets interesting sometimes. One of our endpoints that serves restaurant menus to our consumers had high p99 latency numbers. Since it’s a high-traffic endpoint, we naturally use caching pretty intensively. We cache our serialized menus in Redis to avoid repeated calls to the DB and spread out the read traffic load. In this talk we will present different techniques we used to improve our cache speed, including avoiding cache stampede using Node.js Promises, and how we used compression to not only improve our latency, but also gain more space to cache. Article links: – https://doordash.engineering/2019/01/02/speeding-up-redis-with-compression/ – https://doordash.engineering/2018/08/03/avoiding-cache-stampede-at-doordash/
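The stampede-avoidance idea mentioned (coalescing concurrent misses behind one in-flight Promise) is language-agnostic. Below is an illustrative Python equivalent using one event per in-flight key, so concurrent misses for the same key trigger a single backend call; this is a sketch of the technique, not DoorDash’s code:

```python
import threading

class CoalescingCache:
    """Stampede avoidance: concurrent misses for one key share one load."""
    def __init__(self, loader):
        self.loader = loader     # the expensive backend call (e.g. DB fetch)
        self.values = {}
        self.inflight = {}       # key -> Event set when the load completes
        self.lock = threading.Lock()

    def get(self, key):
        with self.lock:
            if key in self.values:
                return self.values[key]
            event = self.inflight.get(key)
            if event is None:
                # First miss: this caller becomes the leader and loads.
                event = threading.Event()
                self.inflight[key] = event
                leader = True
            else:
                leader = False
        if leader:
            value = self.loader(key)
            with self.lock:
                self.values[key] = value
                del self.inflight[key]
            event.set()
            return value
        event.wait()             # followers block until the leader finishes
        return self.values[key]
```

In Node.js the in-flight Promise itself plays the role of the event, and every concurrent caller simply awaits it.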
Redis Cluster: Enabling large-scale HPC workflow with Data Broker
Claudia Misale, Research Staff Member, IBM Research
Lars Schneidenbach, Research Software Designer, IBM Research
Complex problems can often be solved with a workflow of specialty applications. There are different challenges that developers have to overcome when sharing data in a workflow; each application may have very different requirements in how they access shared data. Data access can be online or offline, data sizes and types may vary as well as the frequency they are consumed and produced. Moreover, producers and consumers may not run at the same time, thus introducing possible latency. Also, the deployment system itself plays an important role, especially when considering low latency read/write operations and reliability in case of failures. To facilitate the communication of information within a workflow, we implemented a programming model to help applications share data in the form of named tuples, relying on the concept of namespaces to provide software-based data isolation and access through mainly put/get primitives. Our programming model, namely the Data Broker, can support different backends for storing/retrieving the data: Redis has been our first choice. On top of Redis, we implement put/get function calls, namespace management, asynchronous call with client-side queues per server, and key-space browsing. In this talk, we will show the challenges and benefits of adopting Redis for our workflow shared data store. Attendees will gain perspective on the ease of deploying a Redis cluster at very large scales to facilitate the exchange of different sizes and types of data between applications, leveraging high speed solid state storage for snapshots, as well as the challenges of deploying Redis clusters on large scale systems with HPC batch schedulers. As our use case, we will describe how we used the Data Broker with Redis Cluster, enabling data sharing for an HPC intelligent simulation precision medicine workflow, SPLASH, running on the 2nd fastest supercomputer in the world, Sierra at Lawrence Livermore National Laboratory.
Kubernetes Operators and The Redis Enterprise Journey
Michal Rabinowitch, Software Engineer, Redis Labs
Rob Szumski, Principal Product Manager, Red Hat
Rob will present K8s Operators: past, present and future. He will explain how Operators help customers deploy and manage services inside Kubernetes. Then Michal will showcase the Redis Enterprise Operator. She will talk about the Redis Enterprise journey from static YAMLs to an Operator, the challenges we encountered on the way and the Redis Enterprise Operator solution.
Redis as Job Cache in a Distributed Video Rendering Pipeline
Peter Karp, Staff Software Engineer, BuzzFeed
Video rendering is the process of compositing multimedia such as video, images, text or audio one frame at a time to create a video. As longer videos needed processing, we implemented distributed parallel processing to reduce time and allow creation of longer videos. Redis is used to coordinate the rendering jobs running on separate servers. The common tools for video editing are complex and memory intensive, and do not provide simple scripting for bulk batching of routine tasks. Content often needs to be formatted for different platforms, or cropped and trimmed to be re-purposed in new ways. International distribution means adding captions or updating on-screen text. We created Stitcher, a video rendering service, to perform these post-production tasks as well as rendering video for Vidder, our innovative in-house video editing application. Like many media companies, BuzzFeed creates hours of video content every day and has an extensive catalog going back several years. As our users created longer videos in Vidder and required new features such as captions, Stitcher’s original sequential rendering approach showed its limitations. Stitcher v.2 took advantage of the way Vidder structured video into independent cells where one video clip is composited with text and images. Stitcher v.2 renders each cell on a separate server, giving true multiprocessor parallelism coordinated with a Redis job cache. This talk describes some of the requirements and problems found in high-volume video production environments, the limits we hit, and how our new approach using parallel processing along with Redis solved them. Stitcher is built with Python 3, Redis, moviepy and ffmpeg.
Real-Time Game Health Analytics with WebSockets, Python 3 and Redis PubSub
Benjamin Sergeant, Staff Software Engineer, Machine Zone
Many factors can impact the success of a mobile game; for example, the throughput of the CDNs serving game assets, the crash rate, or the frame rate per second. Collecting this information at scale poses two challenges. On the server side, a lot of storage and processing power is needed to answer arbitrary and always-changing questions. On the client side, the instrumentation should be lightweight to minimize the observer effect. By using an in-memory Pub/Sub system on the server side, we take disk storage out of the equation and get full flexibility by publishing structured JSON documents to per-event-type channels, equivalent to different tables in a SQL world. The client-server communication is made through encrypted and persistent WebSocket connections. We wrote our own WebSocket C++ implementation and open-sourced it on GitHub at https://github.com/machinezone/IXWebSocket. Multiple subscriber programs consume the events, transforming and ingesting them into charting tools such as Grafana, or error logging tools such as Sentry. Our system is able to handle 100 billion events per month.
Using Redis to orchestrate cloud-based file transfers at scale
Carlos Justiniano, VP of Engineering, Flywheel Sports
My talk focuses on how we used Redis at Flywheel Sports to perform a massive file transfer of CDN video content between Verizon EdgeCast and Akamai. I’ll discuss the hybrid microservices and serverless platform (using AWS Lambda) we built and how it performed. The takeaway for this talk reveals the role that Redis played and how microservices and serverless computing are not mutually exclusive. My Medium post, published by HackerNoon, discusses the effort in greater detail: https://hackernoon.com/cloud-based-file-transfer-at-scale-63d8e2dacb3a
Processing Real-time Volcano Seismic Measurements Through Redis
David Chaves, Research Assistant, Research Center for Communication and Information Technologies. University of Costa Rica
A constant concern in regions with high seismic and volcanic activity is designing systems that allow for early warnings. They are useful to deliver alerts about geophysical events that could bring severe damage to people, cities, farmlands, and many activities. Considering this, one important step in developing early warning systems is processing volcanic seismic signals in real time. This processing allows predicting possible new eruptions, which may help to take actions in accordance with the issued alert. Using Redis, we process the constant flow of seismic signals from many stations and execute analysis of amplitude measurements. These analyses are important for geophysicists to identify volcanic activity and its possible anomalies. The data is stored and processed using Redis structures and streamed to a website for a real-time overview of volcanic activity. Applying an in-memory storage approach allows us to continuously analyze the signals very fast. This is critical in a scenario where an early alert will benefit many people located in dangerous areas, so they can be prepared and depart when a threat is imminent.
Redis Memory Management Made Simple
Kevin McGehee, Sr. Software Development Engineer, AWS
Madelyn Olson, Software Developer, AWS
Redis, as an in-memory data structure store, keeps all of its information in memory, and thus it is important to understand, track, and limit memory usage for it to work effectively under shifting workloads. In this talk, I will describe the various factors that influence overall Redis memory usage (the items you store are only part of the equation!). The talk will focus on how to design memory-efficient Redis applications with emphasis on best practices and configuration to reduce total memory usage.
Redis: Swiss army knife at HackerRank
Kamal Joshi, Senior Software Engineer, HackerRank
Redis is mostly used for caching, but it has far more interesting use cases beyond caching, which we take advantage of at HackerRank. Redis powers critical components like leaderboards, ratings, and the feed, which make extensive use of data structures like sorted sets, sets, and lists, and use Lua scripts to make atomic updates to multiple values and embed strategies in them. In the case of the feed, Redis made fan-out of content easier, as it allowed us to push updates to groups of users very fast. Key intersection operations were critical in building social features like a real-time friends leaderboard. Redis also powered our first ML initiative, in the form of content recommendation. The collaborative-filtering recommendation engine uses Redis as its data store and allowed us to test multiple iterations of the system without building a complete data pipeline. Additionally, data structures like HyperLogLog provided a base for count estimation, which allows the recommendations to be updated dynamically as user patterns change. Redis also powers the job queues at HackerRank, being the backing store for Resque, Sidekiq, and Celery. These use Redis lists and sorted sets to provide queues and schedules. We have started looking at Redis Streams to build queues with an at-least-once guarantee through delayed acknowledgment. Other uses of Redis include Pub/Sub, rate limiting, locks, ReJSON, and ReBloom, which we use much as others do. Problems we ran into: huge lists/sorted sets, multi-key operations blocking a switch to Redis Cluster, and quirks of running Redis on VMs with network-mounted disks.
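For instance, a sorted-set leaderboard with a friends view might be sketched like this (key names are illustrative, not HackerRank’s actual schema; redis-py assumed):

```python
def leaderboard_key(contest):
    """Illustrative key naming: one sorted set per contest."""
    return "leaderboard:" + contest

def record_score(conn, contest, user, score):
    # ZADD keeps members ordered by score.
    conn.zadd(leaderboard_key(contest), {user: score})

def top_n(conn, contest, n=10):
    # Highest scores first, scores included.
    return conn.zrevrange(leaderboard_key(contest), 0, n - 1, withscores=True)

def friends_leaderboard(conn, contest, friends_set_key, dest_key):
    # Key intersection: intersect the global board with a plain set of
    # friend ids; weight 0 on the friends set keeps the contest scores.
    conn.zinterstore(dest_key, {leaderboard_key(contest): 1, friends_set_key: 0})
    return conn.zrevrange(dest_key, 0, -1, withscores=True)
```

ZINTERSTORE treats the plain set’s members as having score 1, so weighting it at 0 leaves the leaderboard scores intact while filtering to friends only.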
Automatic Redis Clustering and Pitfalls
Luke Curley, Software Engineer, Twitch
At Twitch, we run on-premise Redis clusters to store all playback sessions. One of our requirements was to automate cluster creation and management. The existing tools, such as redis-trib, required far too much manual intervention when adding or removing hosts. Our goal was to develop a service that would join a cluster, evenly distribute master nodes among hosts, and evenly distribute keys among master nodes. In addition, we wanted to ensure that replicas would not replicate local nodes, extending the built-in cluster replica migration logic. Finally, the result should leverage Redis Cluster’s design and strengths, such as being decentralized. While creating the service, I ran into quite a few issues. Some were fundamental misunderstandings of how Redis clustering actually works. Others were lessons in how the algorithm behaves in edge cases, and how to avoid doing something terrible that breaks production. We may try to open-source the final result in the future (doubtful in time for the conference), but I would primarily like to talk about how Redis clustering works and how management tools should operate.
Freedom of Movement – Meshes and Relative Motion
Richard Leddy, Independent Consultant, Copious Systems
After a year of working on a Node.js stack for passing data through a Bluetooth gateway to responsive points in an application, I have been thinking a lot about the motion of sensors from the field of one gateway to another. And, besides that, I have been thinking about the movement of the gateways themselves, since they can fit in my pocket as well. I have also been working on an IoT stack for taking in data from arrays of bioreactors. Once again, the possibility of a gateway wandering about a plant becomes relevant, with the gateway moving through fields of sensors. But the story does not end with motion: the process of data collection does not stop. We can turn to Redis to maintain continuity and publish consistent streams of data to visualization endpoints in real time while managing historical information for time-series data.
Using Redis Streams to build Event-driven Microservices and User Interface in Clojure(Script)
Bobby Calderwood, Founder, Evident Systems LLC
My team built a system for a major auto manufacturer that required significant asynchronous processing. Our client already had Redis in their environment and were comfortable using it, so we decided to build our system using Redis as the event sourcing layer. We saw Redis Streams on the horizon, but the Clojure language client, Carmine, didn’t yet support the new Streams functionality. In the course of building our system, we ended up submitting pull requests both to Carmine and to Redis Docs to add support for Redis Streams. This talk will be mostly an experience report, with code examples in Clojure representing our Redis usage (via the Carmine client) both before Redis Streams became available as well as after changing the codebase (a surprisingly small amount) to use Redis Streams. We’ll also dig into our architecture, including:
- how we were able to implement Event Sourcing and CQRS using Redis and Streams
- how we managed significant asynchronous processing by microservices behind our API
- how we aggregated current state in a separate data store
- how Redis Streams supported pushing changes to our ClojureScript/React user interface
Reinforcement learning on hundreds of thousands of cores
Henrique Pondé de Oliveira Pinto, Member of the Technical Staff, OpenAI
At OpenAI we run some of the largest reinforcement learning experiments in the world (>300k CPU cores and >3,000 GPUs), for which we have developed systems that are robust and scalable. Redis plays a central role in our system: we use it to distribute data between machines. Examples include configuration for each machine (keys), new parameters for our neural networks (keys + Lua), and 180 years of game experience sampled per day (lists). We distribute the data across multiple Redis instances to handle the massive volume, and we leverage Redis’ single-threadedness guarantee as a concurrency primitive: our experiments need multiple machines to coordinate, and we use Redis scripting to achieve that. At the scale we’re running, though, we’re hitting the limits of what a single Redis instance can do. To address that, we recently designed a simple proxy system which acts as an intermediary for our controller Redis. We verified that it scales up to million-core experiments without sacrificing single-threadedness or introducing key sharding. The proxies also enable caching of script requests (which Redis Cluster does not support), smart throttling, and reduced connection load on the controller Redis. A few key takeaways from the presentation:
- Through proxies, we can scale Redis to serve as a concurrency primitive for a million workers, far above the connection limit of a single instance.
- A Redis proxy as a data distribution tool is a viable alternative to Redis Cluster, while still allowing scripts.
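As an illustration of scripting as a coordination primitive (a sketch, not OpenAI’s actual code), a Lua script like the following lets many workers atomically claim distinct batch indices, relying on Redis executing scripts one at a time:

```python
# Lua runs atomically inside Redis, so two workers can never claim the
# same batch index. The key name and limit below are illustrative.
CLAIM_BATCH = """
local claimed = tonumber(redis.call('GET', KEYS[1]) or '0')
if claimed >= tonumber(ARGV[1]) then
    return -1
end
redis.call('INCR', KEYS[1])
return claimed
"""

def claim_batch(conn, counter_key, limit):
    """Return the next unclaimed batch index, or -1 when all are taken."""
    return conn.eval(CLAIM_BATCH, 1, counter_key, limit)

if __name__ == "__main__":
    import redis  # requires a running Redis server
    print(claim_batch(redis.Redis(), "exp1:batches", 1000))
```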
Real-time Spatiotemporal Data Processing for Future Mobility Services
Atsushi Isomura, Researcher, NTT (Nippon Telegraph and Telephone Corporation)
We will show two technologies, based on Redis, that accelerate real-time spatiotemporal data processing for future mobility services. The recent growth of connected vehicles that closely communicate with cloud computing services makes it possible to provide real-time mobility services (on-demand pickup and real-time traffic control). These cloud computing platforms require massive data accumulation and real-time information retrieval. To meet these requirements, we consider Redis one of the most appropriate open source options, thanks to its in-memory distributed key-value data management. The data received from connected vehicles contains spatiotemporal information (time, longitude, and latitude), which ordinarily becomes the primary key in a database. One typical spatiotemporal data management design in Redis uses the GEOADD command to store and the GEORADIUS command to search, putting the “timestamp” into the key and the “geohash” (a one-dimensional bit array of longitude and latitude) into the SCORE of a SORTED-SET. However, this design has performance issues for two reasons. First, range-query performance deteriorates because each search requires two passes, one over “timestamp” and one over “geohash”. Second, the CPUs of particular nodes become a bottleneck, because insertions and retrievals executed at the same time intensively access records under the same “timestamp” key. To solve these issues, we propose two technologies: the spatiotemporal code and constraint node distribution. The spatiotemporal code, a one-dimensional bit array converted from three-dimensional spatiotemporal information, can be put into the key to reduce the number of search passes from two to one. Constraint node distribution spreads records belonging to the same key not to one particular node but across multiple nodes. This method helps not only with load balancing when storing data, but also with avoiding all-node access when searching it. Together, these two technologies dramatically improve the performance of spatiotemporal data processing.
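A minimal sketch of the idea behind a one-dimensional spatiotemporal code (the bit widths and the interleaving scheme here are illustrative assumptions, not NTT’s actual encoding):

```python
def interleave_bits(a, b, width):
    """Interleave the low `width` bits of a and b (a in even positions),
    so that values close in both dimensions yield nearby codes."""
    code = 0
    for i in range(width):
        code |= ((a >> i) & 1) << (2 * i)      # bits of a -> even positions
        code |= ((b >> i) & 1) << (2 * i + 1)  # bits of b -> odd positions
    return code

def spatiotemporal_code(time_bucket, geo_cell, width=16):
    """Fold a time bucket and a geohash-style cell id into one value,
    so a single range scan covers both dimensions."""
    return interleave_bits(time_bucket, geo_cell, width)
```

A sorted set (or key range) ordered by such a code can then be range-scanned once, instead of searching first by timestamp and then by geohash.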
Work stealing for fun & profit: Making Redis do things you thought it couldn’t do
Jim Nelson, Cluster Operations Engineer, Internet Archive
When a background task needs to be performed on a regular basis, we often turn to standard approaches: cron jobs, schedulers, daemons, and so on. With modern distributed systems another approach is possible: work stealing. Work stealing means “borrowing” small time slices from the tens of thousands of client requests coming into your servers. Each time slice incrementally performs a bit of background work. Stealing milliseconds here and there is not noticeable to most users, but in aggregate it can perform work without deploying one-off systems that must be maintained, upgraded, and monitored. This talk will focus on how to use work stealing with Redis to perform common tasks such as garbage collection, lock monitoring, and more. Sample code will be presented that shows how to make Redis do things it normally can’t do out of the box, such as setting expiration times for hash map fields and sorted set elements. Common strategies for implementing work stealing will also be discussed, and a technique will be demonstrated that requires no extra state to be maintained and minimal communication between clients.
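One plausible shape for the hash-field-expiration example (a sketch under assumed key names, not the speaker’s actual sample code): keep a companion sorted set of deadlines, and let every incoming request “steal” a moment to reap a few expired fields.

```python
import time

def expired_fields(deadlines, now):
    """Pure helper: which (field, deadline) pairs are past due."""
    return [field for field, deadline in deadlines if deadline <= now]

def hset_with_ttl(conn, hash_key, field, value, ttl_seconds):
    # The field's deadline lives in a companion sorted set.
    conn.hset(hash_key, field, value)
    conn.zadd(hash_key + ":expiry", {field: time.time() + ttl_seconds})

def steal_work(conn, hash_key, limit=10):
    """Called opportunistically from request handlers: reap up to
    `limit` expired fields, then get back to real work."""
    due = conn.zrangebyscore(hash_key + ":expiry", 0, time.time(),
                             start=0, num=limit)
    for field in due:
        conn.hdel(hash_key, field)
        conn.zrem(hash_key + ":expiry", field)
    return len(due)
```

Because every client runs the same reaping step and the deadlines live in Redis itself, no extra scheduler process is needed.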
Complex ephemeral caching with Redis
Jeff Pollard, Staff Engineer, Strava
Strava hosts millions of “segment leaderboards” – portions of road or trail created by our members, where they can compare their times to traverse the segment against all other users. While Strava segment leaderboards are canonically stored in Cassandra, we rely on a robust ephemeral cache in Redis to more quickly and effectively service the majority of our reads. This talk will discuss how the cache fits in with the larger leaderboards architecture. It will cover our design decisions in choosing Redis, how updates to the cache are replicated from canonical storage, and the tradeoffs that were made in its implementation.
Atomicity in Redis (with examples in Node.js)
Thomas Hunter II, Principal Software Engineer, Intrinsic
This would be a more focused version of my 2017 talk, concentrating on running Redis commands atomically from an application. Example code would be written in Node.js. Here’s an outline:
- Basics of Redis: Redis is single threaded (like Node.js); many clients can be connected at once and intersperse commands, so race conditions can occur. You can’t do GET + SET, you should do GETSET; you can’t do EXISTS + SET, you should do SETNX. Includes a diagram visualizing a race condition.
- Using MULTI: the MULTI/EXEC commands allow for basic, non-chainable atomicity, so the previous case can be done as MULTI + GET + SET + EXEC. Commands are queued up per client, then run all at once; technically other commands can run while commands are queueing, but the group will always run together. Includes a diagram visualizing command grouping and an explanation of why this differs from pipelining.
- Using Lua scripting: the ultimate form of atomicity, but with CPU overhead. Includes a sample operation that is impossible with MULTI.
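The talk’s examples are in Node.js; as a compact preview, the same race and its fixes can be sketched with redis-py (key names illustrative):

```python
def unsafe_claim(conn, key, owner):
    # Race: another client may SET between the EXISTS and the SET.
    if not conn.exists(key):
        conn.set(key, owner)
        return True
    return False

def safe_claim(conn, key, owner):
    # SETNX (SET ... NX) performs the check and the write as one atomic command.
    return bool(conn.set(key, owner, nx=True))

def grouped_update(conn, key):
    # MULTI/EXEC: the queued commands run back to back with no interleaving.
    pipe = conn.pipeline(transaction=True)  # MULTI ... EXEC
    pipe.incr(key)
    pipe.expire(key, 60)
    return pipe.execute()
```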
Container Attached Storage for Redis
Murat Karslioglu, VP Product, MayaData
Vick Kelkar, Director, Product Management, Portworx
Kubernetes and containerized applications allow development teams to iterate fast, deploy efficiently and operate at scale. Kubernetes allows you to orchestrate containers that are highly available. However, in the case of container reschedule, Kubernetes does not provide a great set of primitives to manage your persistent data along with your application containers. In this talk, we will present some of the challenges associated with managing persistent data in Kubernetes and how we can make day 2 operations easier to manage. We will talk about a couple of approaches to solving data persistence problems in multi-cloud environments. During the demos, we will showcase how we address data replication and data encryption challenges.
FaaStRuby: Building a FaaS platform with Redis
Paulo Arruda, CTO, FaaStRuby
After signing the petition “We Want Serverless Ruby!”, I wondered what it takes to build a serverless platform from scratch. In this presentation, I will talk a little bit about serverless adoption, as well as the main challenges I faced while building FaaStRuby and how Redis helped me solve them. The key takeaways are:
- To encourage serverless adoption, we need language-specific frameworks that develop like a monolith and deploy like distributed functions.
- Those frameworks must integrate with cloud-hosted and self-hosted, language-specific FaaS platforms.
- Redis is a great tool for backing FaaS platforms and has a very important role to play in the future of distributed applications.
Multi-agency Multi-media Interoperable Communication, enabled by Redis
Paul Kurmas, Director, Strategic Product Development, Mutualink, Inc.
David Parry, Principal Engineer, Mutualink inc
Mutualink has developed and deployed a product line that enables secure multimedia communication between public safety agencies and other community stakeholders such as schools, hospitals, and malls, and between disparate and incompatible communication systems. Our systems have been used to enable coordinated event response by federal, state, and local authorities for many highly visible events. Yet the same system is convenient enough for customers to use on a daily basis. The value of any communications network increases geometrically as the number of participants increases. To support an increase in our customer base by orders of magnitude, new techniques and technologies are needed. To support the critical nature of this communication, these technologies must be highly reliable, redundant, and distributed. We will highlight how the capabilities and services provided by the Redis Enterprise product support the realization of Mutualink’s next generation product. Special attention will be paid to the capabilities of the Conflict-Free Replicated Database in support of a large-scale deployment.
Integrating Geocoder using Redis
Linda Achieng’ Otieno, Ruby & Elixir Developer, Podii HQ
This talk will introduce the Geocoder gem and Redis and how each works in its own field. We will then see their compatibility and finally explain how the two work together, bringing in the concept of Elasticsearch. Both Elasticsearch and Redis are memory-hungry, but I will talk about having them both in your app without worrying about using a lot of memory.
Flexible integration using Redis Streams in IoT platforms
Aleksandar Novaković, Software Engineer, Mainflux
Janko Isidorovic, CEO, Mainflux
Every IoT platform supports operations for provisioning connected devices, and other services should be able to connect to the platform easily. In order to create a flexible integration point and send events to other services interested in these provisioning operations, we are going to use an architectural pattern called “event sourcing”. To implement this pattern, we need an event log that supports consumer groups, message ordering, acknowledgement of processed messages, and message persistence. For our event sourcing implementation we are going to use the new Redis data structure called Redis Streams. First, we will show how to send and receive provisioning messages over these streams in order to propagate changes to connected services, and explore how consumer groups let us scale services and process events efficiently. Second, we will examine the design of the events sent over Redis Streams, discussing the format of the event ID and why generated IDs are so useful. Finally, we will present an example of using these event streams inside the open source Mainflux IoT platform to integrate a scalable LoRa adapter into the system. We will see why Redis Streams and microservice-based IoT platforms are a perfect match.
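A rough sketch of that flow with redis-py (the stream, group, and field names here are illustrative, not Mainflux’s actual schema):

```python
def event_id_parts(event_id):
    """Auto-generated stream IDs have the form '<ms-timestamp>-<sequence>',
    which makes them unique and naturally ordered by time."""
    ms, seq = event_id.split("-")
    return int(ms), int(seq)

def publish_provision_event(conn, device_id, operation):
    # XADD appends the event and returns its auto-generated ID.
    return conn.xadd("provision.events", {"device": device_id, "op": operation})

def consume(conn, consumer, handle, count=10):
    # ">" asks for entries never delivered to this consumer group before.
    for _stream, messages in conn.xreadgroup("adapters", consumer,
                                             {"provision.events": ">"},
                                             count=count):
        for msg_id, fields in messages:
            handle(fields)  # adapter-specific processing
            conn.xack("provision.events", "adapters", msg_id)

if __name__ == "__main__":
    import redis  # requires a running Redis server
    r = redis.Redis()
    r.xgroup_create("provision.events", "adapters", id="0", mkstream=True)
    publish_provision_event(r, "device-42", "create")
```

Unacknowledged messages stay in the group’s pending list, which is what gives consumers persistence and at-least-once delivery.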
Implementing Turing-complete Factor Graphs with Redis PubSub
Andrew Tsai, Graduate Student, Massachusetts Institute of Technology
Vinayak Ramesh, Entrepreneur in Residence, Redpoint Ventures
Machine learning and deep learning research have become increasingly computationally difficult for those who do not have specialized hardware, and researchers who possess such hardware spend more time writing hardware-specific infrastructure code than designing the actual algorithms. Ideally, researchers would be able to implement machine learning algorithms on a hardware-agnostic framework, allowing workflows that can train, for instance, a reinforcement learning agent on a machine and deploy the trained agent on drones or robots. We’ve developed a novel language for computation based on message passing over a “factor graph” data structure (paper in review). Factor graphs are a computationally convenient data structure abstraction that has been popularly utilized for efficient inference in the framework of probabilistic graphical models. We’ve shown that our language is Turing complete and have already demonstrated how to implement integer programming, stochastic gradient descent, and linear algebra in this language efficiently. Factor graphs have shown that they lend themselves well to being implemented using PubSub, which is a common, ubiquitous, and well-understood messaging pattern in software and hardware. Our prototype implements factor graphs in PubSub using Redis’ python client, specifically the PubSub feature; in addition, Redis is also used as the primary in-memory data-store that allows for performant concurrent computations. Our publicly available package illustrates how Redis can contribute to the future of a hardware-agnostic modularized world of machine learning.
Secure Redis Cluster at Box
Ravitej Sistla, Sr. Software Engineer, Box Inc.
Redis Cluster is widely used at Box both for low level applications, such as helping power our relational data access tier, and for higher level ones, such as directly enabling features like Recent Files and others. Some of the largest companies in the world use Box to manage their content and with that come some strict security requirements. Although our Redis Clusters are never exposed to public internet, having a data store unprotected by authentication and encryption is not an option. At Box, we lock down our Redis Clusters by shielding them with in-house-developed authentication and encryption proxies. This allows us to rotate passwords and encryption keys without introducing any unavailability for our clients. Join us for a discussion and a demonstration of how we’re continuing to benefit from Redis’ rich feature set in a security-sensitive context.
Deep Dive in to Redis Replication
Vishy Kasar, Software Engineer, Apple Inc
Redis replication is a very useful feature that provides high availability, allows you to spread the read load, and makes failover possible. In this talk we go over: the benefits of replication; the various modes of replication and how they work; replication and data consistency trade-offs; memory and disk space requirements when you enable replication; replication-related configuration options; how to understand the various replication-related stats emitted by Redis; and a few practical replication tips that we learned by operating Redis at scale. This is a great talk if you want to understand replication deeply so you can operate Redis confidently.
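As a taste of the stats involved, `INFO replication` exposes fields such as `role`, `connected_slaves`, and the replication offsets; a small helper for reading them might look like this (an illustrative sketch using redis-py, not the speaker’s tooling):

```python
def replication_lag_bytes(master_info, replica_info):
    """Approximate lag: how far the replica's offset trails the master's,
    in bytes of the replication stream."""
    return master_info["master_repl_offset"] - replica_info["slave_repl_offset"]

def describe_role(info):
    """Summarize a node from its INFO replication section."""
    if info["role"] == "master":
        return "master with %d replica(s)" % info.get("connected_slaves", 0)
    return "replica of %s:%s" % (info["master_host"], info["master_port"])

if __name__ == "__main__":
    import redis  # requires a running Redis server
    print(describe_role(redis.Redis().info("replication")))
```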
Streams for Spark
Roshan Kumar, Redis Labs
Spark-Redis: new support for DataFrames and Structured Streaming
The Rise of DataOps – SQL on Streams
Andrew Stevenson, CTO, Landoop
In a data driven world, data engineering takes center stage. DataOps with Lenses.io enables organizations to deliver repeatable, production ready data pipelines faster with governance. Lenses.io uses a language known across your organization, SQL. In this talk I’ll explore why and how Lenses runs SQL on Apache Kafka, Redis and Redis Streams with a demo included.
Dyno: Composite Datatypes and Redis Modules
Dyno, an open source project by Netflix, is a Redis Java client. It encapsulates client-side complexity, providing connection pooling, token-aware load balancing, fast failover, and flexible retries. Dyno is used by Netflix and many other companies to handle millions of operations in production. We have made Dyno robust and feature rich by adding support for Redis modules and introducing new data structures that leverage Redis data types. This talk will focus on three major features recently added to Dyno: Expire Hash, the Dual Writer Pipeline, and support for the Redis ReJSON module. Expire Hash is a new composite data type that provides timeouts on hash fields by leveraging native Redis data structures. The Dual Writer Pipeline is an abstraction on the Redis client supporting Redis pipeline APIs. Redis natively supports many complex data structures, and through modules new features and data structures can be added; with Redis and the ReJSON module, Dyno now supports a JSON data type. We will present the design and implementation challenges we faced and talk about how these features add value in the Netflix ecosystem. We will also provide a reference use case showing how these features are used in production at Netflix and discuss performance.
Extreme Performance at Cloud Scale
Andi Gutmans, General Manager, Amazon Elasticache
Kevin McGehee, Senior Software Development Engineer, AWS
Microseconds are the new Milliseconds. Redis is all about delivering near real-time performance. Furthermore, these days we expect near real-time results at planet scale. Join Andi Gutmans (GM, Amazon ElastiCache) and Kevin McGehee (Sr. Engineer) to explore performance, scale and how they come together.
Developing Redis Modules in Rust: Why and How?
Redis Modules provide a way to extend Redis’ functionality. Running in the same memory space as Redis itself, a bad memory access inside a module can crash the entire server process. Most existing modules, including RedisGraph and RediSearch, are written in C or C++. However, writing robust code in C or C++ that is both memory-safe and performant is not an easy task!
Rust is a modern programming language that specifically addresses the issue of writing safe code without giving up performance, while easily interfacing with existing C/C++ code. These properties make it a natural match for writing Redis modules.
This talk will introduce some of the central features of Rust, and demonstrate why it is such a good fit for writing Redis modules. It will then present the work being done at Redis Labs towards creating a toolkit for writing Redis modules in simple and idiomatic Rust. This toolkit is currently being used to write a new version of RedisJSON that integrates with RediSearch.
Bootstrapping Redis in Kubernetes
In this session you will learn how to deploy a simple application along with a Redis database to a Kubernetes cluster. You will start by running kubectl commands to create a small Kubernetes cluster with 2 pods. Then you will execute commands to pull the application code and the Redis database into the cluster. Finally, you will configure your cluster to expose the web application to the public.
Storing terabytes of data with Redis Cluster and Envoy Proxy
Using Redis Cluster as a datastore in serverless HPC scenarios. Alternative approaches using Envoy proxy and application-level sharding. Live demo of provisioning the cluster and running many millions of transactions.
Shave 100ms Off Your Distributed Cache by Shifting to RedisEdge
During this talk I will demonstrate how to shorten the distance from the typical Redis setup to the end user. Reducing that latency typically results in faster, more responsive applications and therefore happier users. This revised architecture will make use of recently launched services that frictionlessly run our containers (Redis on k8s) closer to our users while keeping our primary data stores and code untouched. The talk will also cover running Redis in containers (k8s) and the benefits of that portability. Key takeaways:
- Redis is easily portable to almost anywhere in your architecture
- New services allow us to deploy Redis closer to our users than ever
Realtime Distributed Tracing for Microservices with Redis
The increasing adoption of microservices and serverless architectures means applications need distributed tracing, and making that tracing real-time is increasingly important. Redis is an excellent choice for implementing a real-time distributed tracing and self-defending network application solution for the enterprise, with the help of its features including Streams, RedisGraph, Bloom filters, Redis queues, Redis events, and HyperLogLog. It is even useful for large-scale bootstrapping systems in a very unique way; most bootstrapping systems, cloud schedulers, and configuration management systems in large-scale cloud datacenters would benefit from this.
CDC replication to Redis from Oracle and MySQL
Supercharge your Redis cluster by replicating data in real time from various SORs/SSORs. This approach has enabled our customers to build modern applications on Redis by democratizing data that used to be locked in big monolithic systems. We present a generic, configuration-based CDC (Change Data Capture) adapter that enables development teams to quickly add new sources (Oracle, MySQL, MS SQL Server) and targets (Redis). In addition to the most common out-of-the-box features, the adapter provides extension points to add custom functionality and hides the complexities of the underlying CDC platforms.
Microservices Architecture in the Real World
Once you decide to adopt a microservices architecture, you’ll face many more decisions and questions. Among them are some core areas that this session will cover, based on our experiences making this shift at Credit Karma. Over the past three years, Credit Karma has gone from zero to over 100 microservices, supporting over 400 engineers while serving our more than 90 million members. The talk will dive into detail about Routing (how does service A find and call service B?), Management (how do you handle hundreds of containers?), Observability (how do you know what’s going on out there?), and Experience (how do your developers deal with these services?), while also talking about the cultural and organizational impact that you can’t avoid.
RediSearch (CRDT + Benchmark)
RediSearch is a full-text search engine that runs on top of Redis as a Redis module. In this session we will see how RediSearch can work together with CRDTs to create an active-active search engine. In addition, we will cover a recent benchmark comparing RediSearch with Elasticsearch.
Programming with Modules
Since Redis 4 it has been possible to extend Redis’ capabilities using modules. Today we introduce a new module called RedisGears. In this session we will learn how RedisGears can act as a multi-module engine that combines the capabilities of different modules.
Deep Dive into RedisGraph
RedisGraph is the only graph database that translates a data retrieval question into pure mathematical equations. We’ll take a guided tour through RedisGraph’s ins and outs, touching on the key optimisations and mathematical tricks it applies to fetch your data faster than ever.
Redis Enterprise on Optane DC Persistent Memory
Intel® Optane™ DC persistent memory is a revolutionary new technology that bridges the memory and storage gap. This innovative solution will facilitate having much larger datasets closer to the CPU, so that they can be accessed, processed, and analyzed in real time while delivering near-DRAM performance at low latencies. Additionally, Intel Optane DC persistent memory modules will support much larger capacities than currently available DRAM solutions, with the option of keeping data persistent (similar to SSDs). This means that you can keep more data closer to the CPU, at a lower cost, while maintaining high throughput and sub-millisecond SLAs. Redis Labs and Intel have collaborated for close to two years on optimizing and benchmarking Redis Enterprise on Intel Optane DC persistent memory. This session will provide an architectural overview of this new technology, the different modes in which it can be utilized, and most importantly the benefits that customers can expect to see in deployment (including performance data and a live demo).
High-performing Distributed Apps using Active-Active Redis
Delivering an interactive and scalable user experience for geo-distributed applications has so far been challenging because of network latency. CRDT-based active-active Redis Enterprise solves this problem by delivering local latency for your distributed applications. Redis data structures as CRDTs (conflict-free replicated data types) enable Redis replicas to exchange data modifications behind the scenes and resolve conflicts automatically based on a pre-defined set of rules.
This session covers:
- Introduction to CRDTs and conflict resolution semantics
- How to develop apps using Redis CRDTs
- Redis CRDT use cases such as leaderboards, distributed caching, collaborative apps, shared sessions, distributed data ingest
There will also be a live demo with code walkthrough to show how you can develop apps for today’s challenges.
Redis + Spark Structured Streaming: A Perfect Combination to Scale-out Your Continuous Applications
“Continuous applications”, supported by Apache Spark’s Structured Streaming API, enable real-time decision making in areas such as IoT, AI, fraud mitigation, and personalized experiences. All continuous applications have one thing in common: they collect data from various sources (devices in IoT, for example), process it in real time (for example, ETL), and deliver it to a machine learning serving layer for decision making.
Continuous applications face many challenges as they grow to production. Often, due to the rapid increase in the number of devices or end-users or other data sources, the size of their data set grows exponentially. This results in a backlog of data to be processed. The data will no longer be processed in near-real-time.
Redis Streams enables you to collect both binary and text data in time series format. The consumer groups of Redis Streams help you match the data processing rate of your continuous application with the rate of data arrival from various sources. In this session, I will perform a live demonstration of how to integrate Apache Spark’s Structured Streaming API with open source Redis using the Spark-Redis library. I will also walk through the code and run a live continuous application.