Tutorial: Redis Enterprise on PKS, the New Operator, Makes it Easy to Manage Distributed Data

by Sheryl Sage

Successful applications tend to become complex over time. A microservices architecture offers the ability to decompose your app into many loosely-coupled services organized around business capabilities, thereby alleviating some of the complexities. Of course, microservices are not a free lunch, but for most large, complex apps, they allow teams to deploy new features faster, scale more easily, and pick the best technology stacks.

But what about distributed data? Each microservice is able to select the right database for the right job by employing a data model based on key-value, graph, hierarchical, JSON, streams, search, and so on. With over 300 databases available in the market, this creates a challenge when selecting a database that both meets your criteria and is lightweight enough for a microservices architecture. 

Fortunately, Redis Enterprise on Pivotal Container Service (PKS) is making strides to simplify your cloud-native apps and microservices with distributed caching, messaging, and a high performance NoSQL database. In this post, we’ll review why the data layer is an important element for microservices, and how to get started implementing the Redis Enterprise Kubernetes Operator and database instances on PKS.  

Why the Data Layer is a Critical Element for Stateful Applications and Microservices on Kubernetes

By far one of the biggest challenges with microservices is achieving data consistency. When data is shared between services, you can use a message stream to transfer it from one microservice to another, but you still need to keep each service's dataset in sync and consistent. For this, Redis Enterprise provides CRDTs (Conflict-free Replicated Data Types) with an active-active, shared-nothing architecture. Redis Enterprise CRDTs are implemented using a global database that spans multiple clusters. This architecture benefits microservices in several ways:

  • Seamless conflict resolution for both simple and complex data types
  • Continued read and write availability even when the majority of geo-replicated regions are down, thanks to the multi-master cluster
  • Local-latency reads and writes for each instance of a service (each with its own database), with the databases resolving conflicts regardless of the number of geo-replicated regions
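
Redis Enterprise implements CRDTs inside the database engine, but the core idea behind conflict-free merging can be illustrated with a tiny grow-only counter in Python (a toy sketch, not Redis code):

```python
# Illustrative G-Counter CRDT (grow-only counter): each region tracks its own
# increments, so concurrent updates merge deterministically without coordination.
class GCounter:
    def __init__(self, region):
        self.region = region
        self.counts = {}  # region name -> increments observed from that region

    def incr(self, n=1):
        self.counts[self.region] = self.counts.get(self.region, 0) + n

    def merge(self, other):
        # Conflict resolution: per-region maximum (increment counts only grow)
        for region, n in other.counts.items():
            self.counts[region] = max(self.counts.get(region, 0), n)

    def value(self):
        return sum(self.counts.values())

# Two geo-replicated regions accept writes concurrently, then sync:
us, eu = GCounter("us-east"), GCounter("eu-west")
us.incr(3)
eu.incr(2)
us.merge(eu)
eu.merge(us)
print(us.value(), eu.value())  # 5 5 -- both regions converge
```

No matter in which order the regions exchange state, both arrive at the same total, which is the property that lets every service instance write locally.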

A second consideration is identifying upfront the data processing requirements of each microservice. Developers use Redis to power a variety of transient, ephemeral, operational and transactional use cases such as a geo-distributed cache; session store; high-speed transactions; real-time analytics; fast data ingest; messaging; job and queue management; search; recommendation engines; and time series. 

Why is this important? Because for each microservice, the database could be a single source of truth, a temporary store, or somewhere in between. In order to apply high-availability and data-retention policies, it is important to understand and classify the data needs of each microservice. For example, data-ingest services require high velocity and high availability even though the data is temporary, while transactional services require a data store that offers a single source of truth. To better understand and optimize each use case, you can broadly classify each microservice and its data processing requirements into the following categories:

  • Transient data: Data such as events, logs, messages and signals usually arrive at high volume and are not stored elsewhere. Data ingests typically process this information before passing it to the appropriate destination. Since this data is not stored anywhere else, high availability is critical—this data cannot be lost.
  • Ephemeral data: Microservices that deliver instant digital user experiences need a high-speed cache. A cache server is a good example of an ephemeral data store. While a cache doesn’t store the master copy of the data, it must be highly available, as failures could cause user experience issues.
  • Operational data: Information gathered from user sessions—such as user activity, shopping carts, clicks, etc. This type of data is used for real-time analytics and is aggregated for trend analysis. This data may not be stored as a permanent record, but is retained for business continuity and analytics. 
  • Transactional data: Data gathered from transactions, such as payment processing and order processing, must be stored as a permanent record in a database with strong ACID controls and a cost-effective storage.
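
To make the classification concrete, the four categories and their requirements could be encoded as a simple lookup table (hypothetical values for illustration, not a Redis Enterprise API):

```python
# Hypothetical mapping from data category to availability/durability needs,
# mirroring the classification above.
DATA_POLICIES = {
    "transient":     {"high_availability": True, "durable": False},  # events, logs: not stored elsewhere
    "ephemeral":     {"high_availability": True, "durable": False},  # cache: losing it hurts UX only
    "operational":   {"high_availability": True, "durable": True},   # sessions, carts: kept for analytics
    "transactional": {"high_availability": True, "durable": True},   # payments, orders: permanent record
}

def policy_for(category):
    """Return the storage policy for a microservice's data category."""
    return DATA_POLICIES[category]

print(policy_for("ephemeral")["durable"])  # False
```

Classifying each microservice this way up front makes it much easier to pick consistent high-availability and retention settings later.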

A third strategy for achieving data consistency with microservices comes from Kubernetes and the Operator framework. Operators are the preferred method in Kubernetes for running applications of all kinds, microservices included. Essentially, an Operator extends the Kubernetes API to teach Kubernetes everything it needs to know about the application: how to deploy, scale, manage and update it.
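
The control-loop idea at the heart of every Operator can be sketched in a few lines (purely illustrative; the actual Redis Enterprise Operator is not implemented this way):

```python
# Illustrative reconcile loop: compare the desired state declared in a custom
# resource against the observed cluster state, and emit corrective actions.
def reconcile(desired: dict, observed: dict) -> list:
    actions = []
    if observed.get("nodes", 0) != desired["nodes"]:
        actions.append(("scale", desired["nodes"]))
    if observed.get("version") != desired["version"]:
        actions.append(("upgrade", desired["version"]))
    return actions

# Desired state from the custom resource vs. what is currently running:
desired = {"nodes": 3, "version": "5.4.2-27"}
observed = {"nodes": 2, "version": "5.4.2-27"}
print(reconcile(desired, observed))  # [('scale', 3)]
```

A real Operator runs this loop continuously against the Kubernetes API, which is how application-specific knowledge (deploy, scale, upgrade) gets automated.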

Redis Enterprise Kubernetes Operator for Stateful Workloads

Over the last year or so, the Kubernetes community has added support for running stateful applications such as databases, analytics and machine learning. Kubernetes StatefulSets guarantee that pods have unique addressable identities, persistent volumes ensure that any data written to disk will be available after a pod restart, and Operators extend Kubernetes functionality with application-specific logic using custom resources and custom controllers. 

The Redis Enterprise Kubernetes Operator for PKS makes it easy to create, scale and manage Redis instances on demand with just a few commands. Redis is deployed with a StatefulSet and operates as a headless service to handle the DNS resolution of pods in the deployment. Each Redis Enterprise node resides on a pod that is hosted on a different VM or physical server. This layered approach to orchestration lets Redis Enterprise manage Redis-specific tasks while Kubernetes orchestration runs outside the Redis cluster deployment to detect failures and trigger failover within seconds.

The Redis Enterprise Operator for PKS automates:

  • Auto-discovery of Redis Enterprise service with StatefulSet and a headless service to handle DNS resolution
  • Managing Redis Enterprise licenses inside the Kubernetes Secrets primitive
  • Bootstrapping multi-node Redis Enterprise clusters using Kubernetes Secrets
  • Fully utilizing Redis Enterprise’s multi-tenant architecture by creating Redis databases in the multi-node cluster
  • Maintaining and upgrading Redis Enterprise

Getting Started: Implementing Redis Enterprise on PKS

If you don’t have a PKS cluster and environment, check out the step-by-step instructions for setting up your environment here.

To install Redis Enterprise on PKS, first confirm the installation prerequisites are met, then prepare your environment and cluster as follows.

Login and Prepare Your PKS Environment and PKS Cluster

To begin, log in to PKS and your PKS cluster:

$ pks login -a PKS-API -u USERNAME -k

Find the cluster you created by listing the available clusters:

$ pks clusters
Name      Plan Name  UUID                                  Status     Action
cluster1  dev        d8g7s9g9-789a-789a-879a-ad8f798s7dfs  succeeded  CREATE
cluster2  prod       s7f9sadf-sfd9-as8d-45af-a9s8d7f3niuy  succeeded  CREATE

Change the context to your target cluster:

$ pks get-credentials CLUSTER-NAME

Fetching credentials for cluster pks-re-cluster

$ kubectl cluster-info

Next, create a namespace where the Redis Enterprise Cluster will be deployed. While you can use the Kubernetes default namespace, it is a best practice to use a separate namespace if you are sharing the cluster with others. The Operator deployment will deploy and run one Redis Enterprise Cluster in one Kubernetes namespace. In order to run multiple Redis Enterprise Clusters, deploy each one in its own namespace.

$ kubectl get namespaces
NAME          STATUS   AGE
default       Active   14d
kube-public   Active   14d
kube-system   Active   14d
pks-system    Active   14d

Next, create a new namespace using a unique name:

$ kubectl create namespace redis-enterprise

namespace/redis-enterprise created

Finally, switch context to operate within the newly created namespace:

$ kubectl config set-context --current --namespace=redis-enterprise

Context “pks-re-cluster” modified.

Get and Prepare Deployment Files

Clone this repository, which contains the deployment files:

$ git clone https://github.com/RedisLabs/redis-enterprise-k8s-docs

Cloning into ‘redis-enterprise-k8s-docs’…
remote: Enumerating objects: 37, done.
remote: Counting objects: 100% (37/37), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 168 (delta 19), reused 9 (delta 7), pack-reused 131
Receiving objects: 100% (168/168), 45.32 KiB | 7.55 MiB/s, done.
Resolving deltas: 100% (94/94), done.

Let’s look at each yaml file and edit it for your specific deployment:

  • rbac.yaml – The rbac (Role-Based Access Control) yaml defines who can access which resources. The Operator application requires these definitions to deploy and manage the entire Redis Enterprise deployment (all cluster resources within a namespace). This yaml should be applied as-is, without changes. To apply it:

    $ kubectl apply -f rbac.yaml

    role.rbac.authorization.k8s.io/redis-enterprise-operator created
    serviceaccount/redis-enterprise-operator created
    rolebinding.rbac.authorization.k8s.io/redis-enterprise-operator created

  • crd.yaml – The next step applies crd.yaml, creating a CustomResourceDefinition for the Redis Enterprise Cluster resource type. This provides another API resource to be handled by the k8s API server and managed by the operator we will deploy next. This yaml should be applied as-is, without changes. To apply it:

    $ kubectl apply -f crd.yaml

    customresourcedefinition.apiextensions.k8s.io/redisenterpriseclusters.app.redislabs.com configured

  • operator.yaml – Applying this yaml creates the operator deployment, which is responsible for managing the k8s deployment and lifecycle of a Redis Enterprise Cluster. Among many other responsibilities, it creates a StatefulSet that runs the Redis Enterprise nodes as pods. The yaml in the GitHub repository you cloned earlier contains the latest image tag, representing the latest available Operator version; under most circumstances you should apply this file as-is. To apply it:

    $ kubectl apply -f operator.yaml

    deployment.apps/redis-enterprise-operator created

Now, verify that your redis-enterprise-operator deployment is running:

$ kubectl get deployment -l name=redis-enterprise-operator
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
redis-enterprise-operator   1/1     1            1           0m36s

Next we will create a storage class. The Redis Enterprise Cluster deployment dynamically provisions Persistent Volume Claims (PVCs) for the cluster’s persistent storage needs. In order to create dynamic PVCs, the Kubernetes cluster must have a storage class defined. Determine whether a storage class is already defined on your PKS cluster:

$ kubectl get storageclasses

PKS does not automatically provision storage classes, so if you or your cluster administrator have not provisioned any, the response will be:

No resources found.

In order to create a storage class, determine the type of infrastructure your PKS cluster is running on, and consult the table in the Kubernetes Storage Classes article to determine which provisioner to use. Below are two example yaml files:

AWS – gp2.yaml

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
mountOptions:
  - debug
parameters:
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
```

GCP – standard.yaml

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
mountOptions:
  - debug
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
```

Create the appropriate yaml file and apply it:

$ kubectl apply -f <your-storage-class.yaml>

More information about persistent storage in Operator deployments is available in the Redis Enterprise documentation.

Note – for testing and development environments, you can omit the reclaimPolicy declaration from the yaml file. For production environments, make sure that Persistent Volume Claims (PVCs) are retained when cluster persistence is used, in order to enable recovery.

When editing the Redis Enterprise Cluster (REC) yaml in the next step, you will use the storage class name you just created.

redis-enterprise-cluster.yaml – Defines the configuration of the newly created Redis Enterprise Cluster resource. You can rename this yaml to your_pks_cluster.yaml to keep things tidy, though this isn’t mandatory. However, this yaml must be edited to reflect the specific configuration of your cluster. To learn more about the fields you must review before applying the REC yaml, please refer to the Redis Enterprise on PKS documentation.

Here is an example of the edited REC yaml file:

```
apiVersion: "app.redislabs.com/v1alpha1"
kind: "RedisEnterpriseCluster"
metadata:
  name: "rec-pks"
spec:
  enforceIPv4: true
  nodes: 3
  persistentSpec:
    enabled: true
    storageClassName: "standard" # ! edit according to infrastructure
  uiServiceType: LoadBalancer
  username: "demo@redislabs.com"
  redisEnterpriseNodeResources:
    limits:
      cpu: "2000m"
      memory: 3Gi
    requests:
      cpu: "2000m"
      memory: 3Gi
  redisEnterpriseImageSpec:
    imagePullPolicy: IfNotPresent
    repository: redislabs/redis
    versionTag: 5.4.2-27
```

Create Your Cluster

Once you have your_pks_cluster.yaml set, apply it in order to create your Redis Enterprise Cluster:

$ kubectl apply -f your_pks_cluster.yaml

redisenterprisecluster.app.redislabs.com/rec-pks created

In order to track the creation of the cluster nodes, track the rollout of the StatefulSet, which is named after the cluster name you provided in the your_pks_cluster.yaml file ("rec-pks" in the example above):

$ kubectl rollout status sts/rec-pks

Waiting for 3 pods to be ready…
Waiting for 2 pods to be ready…
Waiting for 1 pods to be ready…
statefulset rolling update complete 3 pods at revision rec-pks-808w0973…

Verify that the REC creation was successful:

$ kubectl get rec

NAME      AGE
rec-pks   7m

$ kubectl get all
NAME                                            READY   STATUS    RESTARTS   AGE
pod/rec-pks-0                                   1/1     Running   0          16m
pod/rec-pks-1                                   1/1     Running   0          14m
pod/rec-pks-2                                   1/1     Running   0          13m
pod/rec-pks-services-rigger-585cbf5ff-5f2z5     1/1     Running   0          16m
pod/redis-enterprise-operator-954b6c68c-bgwpr   1/1     Running   0          18m

NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
service/rec-pks      ClusterIP      None             <none>          9443/TCP,8001/TCP,8070/TCP   16m
service/rec-pks-ui   LoadBalancer   10.100.200.101   53.128.131.29   8443:31459/TCP               16m

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rec-pks-services-rigger     1/1     1            1           16m
deployment.apps/redis-enterprise-operator   1/1     1            1           18m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/rec-pks-services-rigger-585f5bcff     1         1         1       16m
replicaset.apps/redis-enterprise-operator-9546cb68c   1         1         1       18m

NAME                       READY   AGE
statefulset.apps/rec-pks   3/3     16m

Create a Redis Enterprise Database

In order to create your database, you will log in to the Redis Enterprise UI.

First, determine your administrator password. It is stored in an opaque k8s Secret named after the REC name. In this example:

$ kubectl get secret/rec-pks -o yaml

apiVersion: v1
data:
  license: ""
  password: ZGdlaWw3Cg==
  username: YWRtaW5AcmVkaXNsYWJzLmNvbQ==
kind: Secret

Decode the password:

$ echo 'ZGdlaWw3Cg==' | base64 --decode

dgeil7
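
If you prefer to script this step, the same secret fields can be decoded in Python (the encoded values below are taken from the example output above):

```python
import base64

# Values copied from the "kubectl get secret/rec-pks -o yaml" example output
secret_data = {
    "username": "YWRtaW5AcmVkaXNsYWJzLmNvbQ==",
    "password": "ZGdlaWw3Cg==",
}

# Kubernetes stores Secret values base64-encoded; strip the trailing newline
decoded = {k: base64.b64decode(v).decode().strip() for k, v in secret_data.items()}
print(decoded["username"])  # admin@redislabs.com
print(decoded["password"])  # dgeil7
```

This is handy when automating cluster setup, since the credentials can be fed straight into your tooling instead of copied by hand.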

There are two primary options for accessing the web UI. If your PKS cluster has a load balancer service set up with a public IP, or with an IP address otherwise routable from your machine, you can access the UI directly. Determine that IP address:

$ kubectl get service/rec-pks-ui

service/rec-pks-ui   LoadBalancer 10.100.200.101   53.128.131.29 8443:31459/TCP             16m

Enter the IP address, followed by port 8443, into your browser address bar: https://53.128.131.29:8443

If your PKS cluster does not have an IP address routable from your machine, set up port forwarding for port 8443 to one of your cluster pods:

$ kubectl port-forward rec-pks-0 8443 

A typical response will include the following lines:

```
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443
```

Then use localhost, followed by port 8443, in your browser address bar:

https://localhost:8443

Now log in to the web UI using the username defined in your REC yaml and the password you decoded earlier.

Follow the interface’s instructions to create your database. For example, a basic setup would follow these steps:

On the databases screen, if you do not have any databases on the node, you will be prompted to create one.

Click “Next” to create a single-region deployment on RAM.

Enter the mandatory details of the new database:

  • Name: Enter pks-test or another database name.
  • Memory limit: Use the default 0.10GB or any value within the available memory.
  • Password: Enter a password and record it for the next steps.

Now click “Activate.”

We will now conduct a simple database connectivity test using Telnet.

Find the Kubernetes services automatically created for your Redis Enterprise database:

$ kubectl get service -l app=redis-enterprise-bdb
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
pks-test            ClusterIP   10.100.200.52   <none>        14771/TCP   22m
pks-test-headless   ClusterIP   None            <none>        14771/TCP   22m

Set up port forwarding for the database port to one of your database services:

$ kubectl port-forward service/pks-test 14771
Forwarding from 127.0.0.1:14771 -> 14771
Forwarding from [::1]:14771 -> 14771

Connect to your database via Telnet, authenticate with the AUTH command using the database password you recorded earlier, and test some basic Redis commands:

$ telnet 127.0.0.1 14771
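
Over a raw telnet connection you type Redis commands directly; a client library would instead encode each command in the RESP wire protocol before sending it over the same socket. A small Python sketch of that encoding (illustrative, with a placeholder password):

```python
def resp_encode(*args: str) -> bytes:
    """Encode a Redis command in the RESP protocol: an array header (*N),
    then each argument as a length-prefixed bulk string ($len)."""
    parts = [f"*{len(args)}\r\n"]
    for arg in args:
        parts.append(f"${len(arg.encode())}\r\n{arg}\r\n")
    return "".join(parts).encode()

# The AUTH/SET/GET sequence of the connectivity test, as raw protocol bytes:
print(resp_encode("AUTH", "your-db-password"))
print(resp_encode("SET", "hello", "world"))
print(resp_encode("GET", "hello"))
```

Seeing the encoding makes the telnet test less mysterious: the server replies with the same protocol (for example, `+OK` for a successful AUTH or SET).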

Conclusion

In this post we discussed why understanding your data layer and each microservice's data processing requirements is important, and how the Redis Enterprise Kubernetes Operator on Pivotal Container Service helps you deploy stateful workloads with enterprise-grade performance, scalability, high availability and application resiliency.

Give it a try and let us know what you think!  You can learn more about how Pivotal and Redis Labs are working together here.

Sources:

Redis has great, detailed documentation on how to install the Redis Enterprise Kubernetes Operator with PKS which we followed in this post.