Visualize Redis Open Source using Rebrow

If you are serious about monitoring your cluster health with real-time alerts, analyzing your cluster configuration, rebalancing as necessary, adding nodes, re-sharding, deleting nodes, and managing master-replica configuration, you must try out RedisInsight. It’s pretty cool, officially supported by Redis Labs, and works quite well if you are using Redis Enterprise. But what if you want a monitoring UI for Redis Open Source?

Rebrow comes to the rescue.


One of the promising tools I came across was Rebrow, a Python-Flask-based browser for Redis content. It is built for developers who need to look into a Redis store: it allows inspection and deletion of keys, follows Pub/Sub messages, and displays some runtime and configuration information.
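
If you just want a quick look at Rebrow on a single Docker host before we wire it into Swarm, a one-liner like the following should be enough (a sketch; the same marian/rebrow image and port 5001 are used later in this post):

$ docker run -d --name rebrow -p 5001:5001 marian/rebrow

Then browse to http://localhost:5001 and point it at any reachable Redis instance.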

Tested Infrastructure

We will be using Docker Swarm, and one of the quickest ways to get Docker Swarm up and running is through “Play with Docker”, aka PWD. It’s free of cost and open to all. You get a maximum of 5 Linux instances to play around with Docker.

  • Open Play with Docker labs in your browser
  • Click on the icon next to Instance to choose the 3 Managers & 2 Workers template
  • Wait a few seconds for the 5-node Swarm cluster to come up

Clone the Repository

I have created a docker-compose file with the following contents:

version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: ajeetraina/redis-flask
    build:
      context: ./stackdemo
      dockerfile: Dockerfile
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "8000:8000"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis:6.0-rc1
    ports:
      - "6379:6379"
    volumes:
      - data:/home/docker/data
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - webnet
networks:
  webnet:
volumes:
  data:

I have put the above code in the Collabnix repository so you can pull and leverage it directly. Follow the steps below:

git clone https://github.com/collabnix/dockerlabs
cd dockerlabs/solution/redis/viz-web-redis
docker stack deploy -c docker-compose.yml myredis

Verifying the Services

$ docker service ls
ID                  NAME                 MODE                REPLICAS            IMAGE                             PORTS
ydgp8j56apek        myredis_redis        replicated          1/1                 redis:6.0-rc1                     *:6379->6379/tcp
ofqnb4282zo1        myredis_visualizer   replicated          1/1                 dockersamples/visualizer:stable   *:8080->8080/tcp
bkxd3aklxhj7        myredis_web          replicated          5/5                 ajeetraina/redis-flask:latest     *:8000->8000/tcp
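Since the web service is a replicated Swarm service, you can scale it up or down at any time (an optional example; 10 is an arbitrary replica count):

$ docker service scale myredis_web=10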

Verifying if Redis is running successfully

$ docker service ps myredis_redis
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
robvimouagqj        myredis_redis.1     redis:6.0-rc1       manager1            Running             Running about a minute ago 

Verifying the Redis Volume

$ docker volume inspect myredis_data
[
    {
        "CreatedAt": "2019-12-29T02:18:00Z",
        "Driver": "local",
        "Labels": {
            "com.docker.stack.namespace": "myredis"
        },
        "Mountpoint": "/var/lib/docker/volumes/myredis_data/_data",
        "Name": "myredis_data",
        "Options": null,
        "Scope": "local"
    }
]

Checking the Redis logs

[manager1] (local) root@192.168.0.45 ~/dockerlabs/solution/redis/viz-web-redis
$ docker service logs -f myredis_redis
myredis_redis.1.robvimouagqj@manager1    | 1:C 29 Dec 2019 02:35:54.400 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
myredis_redis.1.robvimouagqj@manager1    | 1:C 29 Dec 2019 02:35:54.400 # Redis version=5.9.101, bits=64, commit=00000000, modified=0, pid=1, just started
myredis_redis.1.robvimouagqj@manager1    | 1:C 29 Dec 2019 02:35:54.400 # Configuration loaded
myredis_redis.1.robvimouagqj@manager1    | 1:M 29 Dec 2019 02:35:54.402 * Running mode=standalone, port=6379.
myredis_redis.1.robvimouagqj@manager1    | 1:M 29 Dec 2019 02:35:54.402 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
myredis_redis.1.robvimouagqj@manager1    | 1:M 29 Dec 2019 02:35:54.402 # Server initialized
myredis_redis.1.robvimouagqj@manager1    | 1:M 29 Dec 2019 02:35:54.402 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
myredis_redis.1.robvimouagqj@manager1    | 1:M 29 Dec 2019 02:35:54.402 * Ready to accept connections
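
Notice the two warnings about somaxconn and Transparent Huge Pages. They are harmless for this demo, but on a host where you control the kernel you could apply the fixes the log itself suggests (run as root; the settings do not persist across reboots unless added to /etc/rc.local or sysctl.conf):

$ sysctl -w net.core.somaxconn=512
$ echo never > /sys/kernel/mm/transparent_hugepage/enabled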

Where is my Redis service running?

$ docker service ps myredis_redis
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
robvimouagqj        myredis_redis.1     redis:6.0-rc1       manager1            Running             Running 3 minutes ago

Inspecting Redis Service

$ docker service inspect myredis_redis
[
    {
        "ID": "hmistkdxnirdm5vq2f41aaqr9",
        "Version": {
            "Index": 127
        },
        "CreatedAt": "2019-12-29T02:35:47.7810801Z",
        "UpdatedAt": "2019-12-29T02:35:47.78773254Z",
        "Spec": {
            "Name": "myredis_redis",
            "Labels": {
                "com.docker.stack.image": "redis:6.0-rc1",
                "com.docker.stack.namespace": "myredis"
            },
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "redis:6.0-rc1@sha256:c2227b1e5c4755cb94f18eef10b34fb4eac116ce8c5ea0a40d0ca806927b8311",
                    "Labels": {
                        "com.docker.stack.namespace": "myredis"
                    },
                    "Args": [
                        "redis-server",
                        "--appendonly",
                        "yes"
                    ],
                    "Privileges": {
                        "CredentialSpec": null,
                        "SELinuxContext": null
                    },
                    "Mounts": [
                        {
                            "Type": "volume",
                            "Source": "myredis_data",
                            "Target": "/home/docker/data",
                            "VolumeOptions": {
                                "Labels": {
                                    "com.docker.stack.namespace": "myredis"
                                }
                            }
                        }
                    ],
                    "StopGracePeriod": 10000000000,
                    "DNSConfig": {},
                    "Isolation": "default"
                },
                "Resources": {},
                "RestartPolicy": {
                    "Condition": "any",
                    "Delay": 5000000000,
                    "MaxAttempts": 0
                },
                "Placement": {
                    "Constraints": [
                        "node.role == manager"
                    ],
                    "Platforms": [
                        {
                            "Architecture": "amd64",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "386",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "ppc64le",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "s390x",
                            "OS": "linux"
                        }
                    ]
                },
                "Networks": [
                    {
                        "Target": "rolenrgn8nqibx2h16wd2tac6",
                        "Aliases": [
                            "redis"
                        ]
                    }
                ],
                "ForceUpdate": 0,
                "Runtime": "container"
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 1
                }
            },
            "UpdateConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "RollbackConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 6379,
                        "PublishedPort": 6379,
                        "PublishMode": "ingress"
                    }
                ]
            }
        },
        "Endpoint": {
            "Spec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 6379,
                        "PublishedPort": 6379,
                        "PublishMode": "ingress"
                    }
                ]
            },
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 6379,
                    "PublishedPort": 6379,
                    "PublishMode": "ingress"
                }
            ],
            "VirtualIPs": [
                {
                    "NetworkID": "sl1ecujt79razdyhjvmbohhjo",
                    "Addr": "10.255.0.26/16"
                },
                {
                    "NetworkID": "rolenrgn8nqibx2h16wd2tac6",
                    "Addr": "10.0.1.15/24"
                }
            ]
        }
    }
]
Since Redis 5.0, the old redis-trib cluster-manager functionality ships inside redis-cli itself. Let’s exec into the Redis container (2db is its container ID prefix) and print the cluster manager help:

$ docker exec -it 2db redis-cli --cluster help
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port
                 --cluster-search-multiple-owners
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  set-timeout    host:port milliseconds
  import         host:port
                 --cluster-from <arg>
                 --cluster-copy
                 --cluster-replace
  backup         host:port backup_directory
  help           

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
The web service is a simple Flask hit counter backed by Redis; every request increments the counter:

$ curl localhost:8000
Hello World! I have been seen 6 times.

$ curl localhost:8000 
Hello World! I have been seen 7 times.
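
The counter lives in Redis under the key “hits”, which the Flask app increments on every request. You can verify it straight from the Redis container (a quick check; the docker ps name filter grabs the Redis task’s container ID):

$ docker exec -it $(docker ps -q -f name=myredis_redis) redis-cli get hits
"7"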

Let’s now run Rebrow as a Docker Swarm service and see if it really works.

$ docker service create --name myrebrow --publish 5001:5001 --replicas 2  marian/rebrow
p9qfx8bfmk7doamfxwy65eicu
overall progress: 2 out of 2 tasks 
1/2: running   
2/2: running   
verify: Service converged 

Wow! That was really fast. Let’s open up the web browser on port 5001 to see if it works without any issue. Just supply your manager’s IP address and port 6379 on the Rebrow login page. It should then connect to the Redis server and show the server status as shown below:


Click on “Keys” to find the “hits” key with type “string”.


Once you click on “hits”, it should show you the total number of hits for the webpage.


Conclusion:

Rebrow is a promising web UI for Redis database content. It’s completely open source and built for developers who need to look into a Redis store. It allows inspection and deletion of keys and follows PubSub messages too. It also displays some runtime and configuration information.

Older Posts:

5 Minutes to Multi-Node Redis Cluster running on Google Cloud Kubernetes Engine using Docker Desktop for Windows

Building 3-Node Active-Active Redis Enterprise Cluster for Developers using Docker Desktop for Mac

Running Redis Enterprise inside Docker Container in 5 Minutes

5 Minutes to Multi-Node Redis Cluster running on Google Cloud Kubernetes Engine using Docker Desktop for Windows

If you are looking for the easiest way to create a Redis Cluster on a remote cloud platform like Google Cloud Platform right from your laptop, then Docker Desktop is the right solution. Docker Desktop for Windows is an application for your Windows laptop for building and sharing containerized applications and microservices.

By using the “docker context” CLI, which ships by default with Docker Engine 19.03+, you can easily access your GKE cluster and build containerized workloads flawlessly. You can still use your favourite PowerShell interface to bring up Pods, ConfigMaps, Services and Deployments.
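
For example, once your GKE kubeconfig is in place, a Kubernetes-aware context can be created and activated roughly like this (a sketch based on the Docker 19.03 CLI; the context name mygke is arbitrary and the kubeconfig path assumes the default location):

docker context create mygke --default-stack-orchestrator=kubernetes --kubernetes config-file=%USERPROFILE%\.kube\config --docker host=npipe:////./pipe/docker_engine
docker context use mygke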

In this blog post, we will see how easily one can set up a GKE cluster on Google Cloud Platform using Docker Desktop for Windows. We will then bring up a Redis Cluster running on Kubernetes. By the end of this blog, we will simulate a cluster member failure and see how a slave node becomes master once the master quorum gets disturbed.

Pre-requisites:

  • Install Google Cloud SDK on your Windows laptop and run gcloud init:

C:\Users\Ajeet_Raina\AppData\Local\Google\Cloud SDK>gcloud init
Welcome! This command will take you through the configuration of gcloud.

Your current configuration has been set to: [default]

You can skip diagnostics next time by using the following flag:
  gcloud init --skip-diagnostics

Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).

You must log in to continue. Would you like to log in (Y/n)?  Y

Your browser has been opened to visit:
...  


You are logged in as: [dockercaptain1981@gmail.com].

Pick cloud project to use:
 [1] lofty-tea-249310
C:\Users\Ajeet_Raina\AppData\Local\Google\Cloud SDK>gcloud container clusters create k8s-lab1 --disk-size 10 --zone asia-east1-a --machine-type n1-standard-2 --num-nodes 3 --scopes compute-rw

WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Newly created clusters and node-pools will have node auto-upgrade enabled by default. This can be disabled using the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster k8s-lab1 in asia-east1-a... Cluster is being health-checked (master is healthy)...done.
Created [https://container.googleapis.com/v1/projects/lofty-tea-249310/zones/asia-east1-a/clusters/k8s-lab1].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/asia-east1-a/k8s-lab1?project=lofty-tea-249310
kubeconfig entry generated for k8s-lab1.
NAME      LOCATION      MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
k8s-lab1  asia-east1-a  1.13.11-gke.23  35.236.179.254  n1-standard-2  1.13.11-gke.23  3          RUNNING

C:\Users\Ajeet_Raina\AppData\Local\Google\Cloud SDK>kubectl get nodes
NAME                                      STATUS   ROLES    AGE    VERSION
gke-k8s-lab1-default-pool-f1fae040-9vd9   Ready    <none>   108s   v1.13.11-gke.23
gke-k8s-lab1-default-pool-f1fae040-ghf5   Ready    <none>   108s   v1.13.11-gke.23
gke-k8s-lab1-default-pool-f1fae040-z0rf   Ready    <none>   108s   v1.13.11-gke.23

C:\Users\Ajeet_Raina\AppData\Local\Google\Cloud SDK>

  • Install GIT using Chocolatey
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
choco install git
  • Install Docker Desktop for Windows
PS C:\Users\Ajeet_Raina> kubectl config get-contexts
CURRENT   NAME                                         CLUSTER                                      AUTHINFO                                     NAMESPACE
*         gke_lofty-tea-249310_asia-east1-a_k8s-lab1   gke_lofty-tea-249310_asia-east1-a_k8s-lab1   gke_lofty-tea-249310_asia-east1-a_k8s-lab1
PS C:\Users\Ajeet_Raina> kubectl get nodes
NAME                                      STATUS   ROLES    AGE   VERSION
gke-k8s-lab1-default-pool-f1fae040-9vd9   Ready    <none>   64m   v1.13.11-gke.23
gke-k8s-lab1-default-pool-f1fae040-ghf5   Ready    <none>   64m   v1.13.11-gke.23
gke-k8s-lab1-default-pool-f1fae040-z0rf   Ready    <none>   64m   v1.13.11-gke.23
PS C:\Users\Ajeet_Raina>

Cloning this Repo

$ git clone https://github.com/collabnix/redisplanet
cd redis/kubernetes/gke/

$ kubectl apply -f redis-statefulset.yaml
configmap/redis-cluster created
statefulset.apps/redis-cluster created
$ kubectl apply -f redis-svc.yaml
service/redis-cluster created
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get po
NAME              READY   STATUS              RESTARTS   AGE
redis-cluster-0   1/1     Running             0          92s
redis-cluster-1   1/1     Running             0          64s
redis-cluster-2   1/1     Running             0          44s
redis-cluster-3   1/1     Running             0          25s
redis-cluster-4   0/1     ContainerCreating   0          12s
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get pvc
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-redis-cluster-0   Bound    pvc-34bdf05b-4af2-11ea-9222-42010a8c00e8   1Gi        RWO            standard       2m15s
data-redis-cluster-1   Bound    pvc-4564abb9-4af2-11ea-9222-42010a8c00e8   1Gi        RWO            standard       107s
data-redis-cluster-2   Bound    pvc-51566907-4af2-11ea-9222-42010a8c00e8   1Gi        RWO            standard       87s
data-redis-cluster-3   Bound    pvc-5c8391a0-4af2-11ea-9222-42010a8c00e8   1Gi        RWO            standard       68s
data-redis-cluster-4   Bound    pvc-64a340d3-4af2-11ea-9222-42010a8c00e8   1Gi        RWO            standard       55s
data-redis-cluster-5   Bound    pvc-71024053-4af2-11ea-9222-42010a8c00e8   1Gi        RWO            standard       34s

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl describe pods redis-cluster-0
Name:           redis-cluster-0
Namespace:      default
Priority:       0
Node:           gke-k8s-lab1-default-pool-f1fae040-9vd9/10.140.0.28
Start Time:     Sun, 09 Feb 2020 09:41:14 +0530
Labels:         app=redis-cluster
                controller-revision-hash=redis-cluster-fd959c7f4
                statefulset.kubernetes.io/pod-name=redis-cluster-0
Annotations:    kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container redis
Status:         Running
IP:             10.12.2.3
Controlled By:  StatefulSet/redis-cluster
Containers:
  redis:
    Container ID:  docker://6c8c32c785afabff22323cf77103ae3df29a29580863cdfe8c46db12883d87eb
    Image:         redis:5.0.1-alpine
    Image ID:      docker-pullable://redis@sha256:6f1cbe37b4b486fb28e2b787de03a944a47004b7b379d0f8985760350640380b
    Ports:         6379/TCP, 16379/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /conf/update-node.sh
      redis-server
      /conf/redis.conf
    State:          Running
      Started:      Sun, 09 Feb 2020 09:41:38 +0530
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      POD_IP:   (v1:status.podIP)
    Mounts:
      /conf from conf (rw)
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-m9xql (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-redis-cluster-0
    ReadOnly:   false
  conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      redis-cluster
    Optional:  false
  default-token-m9xql:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-m9xql
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                    From                                              Message
  ----     ------                  ----                   ----                                              -------
  Warning  FailedScheduling        4m13s (x3 over 4m16s)  default-scheduler                                 pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
  Normal   Scheduled               4m13s                  default-scheduler                                 Successfully assigned default/redis-cluster-0 to gke-k8s-lab1-default-pool-f1fae040-9vd9
  Normal   SuccessfulAttachVolume  4m8s                   attachdetach-controller                           AttachVolume.Attach succeeded for volume "pvc-34bdf05b-4af2-11ea-9222-42010a8c00e8"
  Normal   Pulling                 3m55s                  kubelet, gke-k8s-lab1-default-pool-f1fae040-9vd9  pulling image "redis:5.0.1-alpine"
  Normal   Pulled                  3m49s                  kubelet, gke-k8s-lab1-default-pool-f1fae040-9vd9  Successfully pulled image "redis:5.0.1-alpine"
  Normal   Created                 3m49s                  kubelet, gke-k8s-lab1-default-pool-f1fae040-9vd9  Created container
  Normal   Started                 3m49s                  kubelet, gke-k8s-lab1-default-pool-f1fae040-9vd9  Started container

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)              AGE
kubernetes      ClusterIP   10.15.240.1    <none>        443/TCP              28m
redis-cluster   ClusterIP   10.15.248.54   <none>        6379/TCP,16379/TCP   5s

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379'
'10.12.2.3:6379'10.12.0.6:6379'10.12.1.7:6379'10.12.2.4:6379'10.12.1.8:6379'10.12.2.5:6379'
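Note that the jsonpath expression above is missing its closing {end}, and the Windows cmd quoting leaked the single quotes into the output. On a bash shell (for example Google Cloud Shell), the pod IPs can be gathered and fed to the cluster create command in one line (a sketch under those assumptions):

$ kubectl exec -it redis-cluster-0 -- redis-cli --cluster create $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}') --cluster-replicas 1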
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-0 -- redis-cli --cluster create 10.12.2.3:6379 10.12.0.6:6379 10.12.1.7:6379 10.12.2.4:6379 10.12.1.8:6379 10.12.2.5:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.12.2.4:6379 to 10.12.2.3:6379
Adding replica 10.12.1.8:6379 to 10.12.0.6:6379
Adding replica 10.12.2.5:6379 to 10.12.1.7:6379
M: 8a78d53307bdde11f6e53a9c1e90b1a9949463f1 10.12.2.3:6379
   slots:[0-5460] (5461 slots) master
M: bf11440a398e88ad7bfc167dd3219a4f594ffa39 10.12.0.6:6379
   slots:[5461-10922] (5462 slots) master
M: c82e231121118c731194d31ddc20d848953174e7 10.12.1.7:6379
   slots:[10923-16383] (5461 slots) master
S: 707bb247a2ecc3fd36feb3c90cc58ff9194b5166 10.12.2.4:6379
   replicates 8a78d53307bdde11f6e53a9c1e90b1a9949463f1
S: 63abc45d61a9d9113db0c57f7fe0596da4c83a6e 10.12.1.8:6379
   replicates bf11440a398e88ad7bfc167dd3219a4f594ffa39
S: 10c2bc0cc626725b5a1afdc5e68142610e498fd7 10.12.2.5:6379
   replicates c82e231121118c731194d31ddc20d848953174e7
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.....
>>> Performing Cluster Check (using node 10.12.2.3:6379)
M: 8a78d53307bdde11f6e53a9c1e90b1a9949463f1 10.12.2.3:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 63abc45d61a9d9113db0c57f7fe0596da4c83a6e 10.12.1.8:6379
   slots: (0 slots) slave
   replicates bf11440a398e88ad7bfc167dd3219a4f594ffa39
M: c82e231121118c731194d31ddc20d848953174e7 10.12.1.7:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 10c2bc0cc626725b5a1afdc5e68142610e498fd7 10.12.2.5:6379
   slots: (0 slots) slave
   replicates c82e231121118c731194d31ddc20d848953174e7
S: 707bb247a2ecc3fd36feb3c90cc58ff9194b5166 10.12.2.4:6379
   slots: (0 slots) slave
   replicates 8a78d53307bdde11f6e53a9c1e90b1a9949463f1
M: bf11440a398e88ad7bfc167dd3219a4f594ffa39 10.12.0.6:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-0 -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:126
cluster_stats_messages_pong_sent:130
cluster_stats_messages_sent:256
cluster_stats_messages_ping_received:125
cluster_stats_messages_pong_received:126
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:256
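
To see the full node table, including which pod IP holds which role, you can also ask any member for its view of the cluster (output varies per run, so it is omitted here):

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-0 -- redis-cli cluster nodes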

Next, let’s deploy the hit-counter web application on top of the Redis Cluster:

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl apply -f app-depolyment.yaml
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get svc
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)              AGE
hit-counter-lb   LoadBalancer   10.15.253.213   35.187.144.200   80:31309/TCP         103s
kubernetes       ClusterIP      10.15.240.1     <none>           443/TCP              46m
redis-cluster    ClusterIP      10.15.248.54    <none>           6379/TCP,16379/TCP   18m

Simulating a Node Failure

Let us try to simulate the failure of a cluster member by deleting the Pod. The moment you delete redis-cluster-0, which was originally a master, Redis Cluster promotes its replica redis-cluster-3 to master, and when redis-cluster-0 returns, it rejoins as a slave. Let us test it out:

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-0 -- redis-cli role
1) "master"
2) (integer) 854
3) 1) 1) "10.12.2.4"
      2) "6379"
      3) "854"
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-1 -- redis-cli role
1) "master"
2) (integer) 994
3) 1) 1) "10.12.1.8"
      2) "6379"
      3) "994"

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-2 -- redis-cli role
1) "master"
2) (integer) 1008
3) 1) 1) "10.12.2.5"
      2) "6379"
      3) "1008"

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-3 -- redis-cli role
1) "slave"
2) "10.12.2.3"
3) (integer) 6379
4) "connected"
5) (integer) 1008

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-4 -- redis-cli role
1) "slave"
2) "10.12.0.6"
3) (integer) 6379
4) "connected"
5) (integer) 1022

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-5 -- redis-cli role
1) "slave"
2) "10.12.1.7"
3) (integer) 6379
4) "connected"
5) (integer) 1022

Bring down the redis-cluster-0 pod and you will see that redis-cluster-3 gets promoted from “slave” to “master”.
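
A minimal way to trigger and then observe the failover looks like this (the promotion may take a few seconds while the cluster detects the failure):

C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl delete pod redis-cluster-0
C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-3 -- redis-cli role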

Further Readings:

Building 3-Node Active-Active Redis Enterprise Cluster for Developers using Docker Desktop for Mac

Redis Enterprise Software (RS) offers Redis Cluster. An RS cluster is just a set of Redis nodes (hosts with Redis installed), composed of identical nodes deployed within a data center or stretched across local availability zones. The Redis cluster is self-managed, so all you have to do is create a database with the required options, and it abstracts away the pain of worrying about nodes, masters and slaves. Redis Enterprise also comes with a fancy UI to interact with the Redis cluster.

Installing, upgrading and uninstalling RS across those identical nodes is quite an easy process, but developing globally distributed applications can be challenging, as developers have to think about race conditions and complex combinations of events under geo-failovers and cross-region write conflicts.

CRDB comes to the rescue.

CRDB refers to a “Global Conflict-Free Replicated Database”. It is a new type of Redis Enterprise Software database that spans clusters: one or more member databases across many clusters form a conflict-free replicated database (CRDB). Each local database can have a different shard count, replica count, and other database options, but they all contain identical information in steady state.



CRDB is a globally distributed database that spans multiple Redis Enterprise Software clusters. Each CRDB has multiple CRDB instances (an instance is the member database in each cluster participating in the global CRDB), and these come with added smarts for handling globally distributed writes using the proven CRDT approach. CRDT research describes a set of techniques for creating systems that can handle conflicting writes. CRDBs are powered by Multi-Master Replication (MMR), which provides a straightforward and effective way to replicate your data between regions and simplifies development of complex applications that can maintain correctness under geo-failovers and concurrent cross-region writes to the same data.

CRDBs simplify developing such applications by using built-in smarts for handling conflicting writes based on the data type in use. Instead of depending on simplistic “last-writer-wins” conflict resolution, geo-distributed CRDBs combine techniques defined in CRDT (conflict-free replicated data types) research with Redis types to provide smart and automatic conflict resolution based on each data type’s intent.

Geo-distributed Active-Active Redis App with CRDB

With Redis Active-Active, you can have a database that spreads across more than one participating cluster, and these clusters can belong to different geo-distributed data centers or availability zones (AZs). For such a database, each cluster has one active master that can read from and write to the database.

As we have multiple clusters writing to the DB, there could be conflicts around whose write should win. To avoid that, Redis uses Last Write Wins resolution (more details: Redis Active-Active). These kinds of DBs are referred to as conflict-free replicated databases (CRDBs). The major advantage is that applications hosted in different zones can access (read-write) the database locally.

In short,

  • Redis Enterprise in its active-active mode is an ideal database for highly interactive, scalable, low-latency geo-distributed apps.
  • Its architecture is based on breakthrough academic research: conflict-free replicated data types (CRDT).
  • The Redis Enterprise implementation of CRDT is called Conflict-Free Replicated Database (CRDB).
  • With CRDBs, applications can read and write to the same data set from different geographical locations seamlessly and with latency less than 1 ms, without changing the way the application connects to the database.
  • Please note that CRDBs do not replicate the entire database, only the data. Database configurations, Lua scripts, and other configurations are not replicated.
  • CRDBs also provide disaster recovery and accelerated data read-access for geographically distributed users.

In my last blog, I showcased how to set up Redis Enterprise Software on the Play with Docker platform in 5 minutes. In this blog post, I will show how to set up a 3-node Active-Active Redis Enterprise cluster on top of Docker Desktop for Mac.

Pre-requisites

  • Install Docker Desktop for Mac using this link


  • Redis Enterprise Software Docker image works best when you provide a minimum of 2 cores and 6GB RAM per container. You can find additional minimum hardware and software requirements for Redis Enterprise Software in the product documentation

A fresh Redis Enterprise Software install consumes 985 MB by default, hence you need sufficient memory for your containers to work smoothly.


As this is just for demonstration, we will allocate 6GB of memory. To set up memory & CPU for Docker Desktop, click on the “whale” icon at the top right of the screen, then select Preferences > Resources > Advanced as shown below:

Click on “Apply & Restart”.

Verifying Docker Engine is up and Running

[Captains-Bay]🚩 >  docker version
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:22:34 2019
 OS/Arch:           darwin/amd64
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:29:19 2019
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          v1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Installing Redis Enterprise Software

Let’s go ahead and create 3 Redis Enterprise Software (RS) containers on Mac. As we need to set up an Active-Active RS cluster, we will create 3 bridge networks:

[Captains-Bay]🚩 >  cat network-setup.sh 
echo "Script to create bridge networks"
docker network create -d bridge redisnet1  --subnet=172.18.0.0/16 --gateway=172.18.0.1
docker network create -d bridge redisnet2  --subnet=172.19.0.0/16 --gateway=172.19.0.1
docker network create -d bridge redisnet3  --subnet=172.20.0.0/16 --gateway=172.20.0.1


[Captains-Bay]🚩 >  sh network-setup.sh 
Script to create bridge networks
6b47b8256b6408619f7d27d1adc867041239b42140dd7e932de0ea8ba4ccd7c
045fd51912441cd144dd875abf7ae016d35b9bdab895b3d58708e9952cdf4a6
cdc154be0ddaecd3165738b4d5a19d6c86a79d25e987f5d4325939baccefeb9

[Captains-Bay]🚩 >  docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5dd879a59cac        bridge              bridge              local
06755d527174        host                host                local
9c44c79b0c10        none                null                local
6b47b8256b64        redisnet1           bridge              local
045fd5191244        redisnet2           bridge              local
cdc154be0dda        redisnet3           bridge              local

[Captains-Bay]🚩 >  cat rs-setup.sh 
docker run -d --cap-add sys_resource --network redisnet1 --name redis-node-01 -h redis-node-01  -p 8443:8443 -p 9443:9443 -p 12000:12000 --ip 172.18.0.2 redislabs/redis
docker run -d --cap-add sys_resource --network redisnet2 --name  redis-node-02 -h redis-node-02 -p 8444:8443 -p 9444:9443 -p 12001:12000 --ip 172.19.0.2 redislabs/redis
docker run -d --cap-add sys_resource --network redisnet3 --name  redis-node-03 -h redis-node-03 -p 8445:8443 -p 9445:9443 -p 12002:12000 --ip 172.20.0.2 redislabs/redis


[Captains-Bay]🚩 >  sh rs-setup.sh 
8ae80e7d06587b40eefd62e7c5439aaab60c00cb8e0f0a46b26471511de7839
aa379fe49187ad43bceae55223970d43899af998a0c0edace56749961055763
ddd6c3a7a7b604d554a53dfec079615ef042c009f607b4d81b7a162f978fd86

As you can see, we have chosen ports 8443, 8444 and 8445 for HTTPS connections, ports 12000, 12001 and 12002 for Redis client connections, and ports 9443, 9444 and 9445 for REST API connections.

[Captains-Bay]🚩 >  docker network inspect redisnet1
[
    {
        "Name": "redisnet1",
        "Id": "6b47b8256b6408619f7d27d1adc867041239b42140dd7e932de0ea8ba4ccd7cb",
        "Created": "2020-02-07T06:32:31.222918188Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "8ae80e7d06587b40eefd62e7c5439aaab60c00cb8e0f0a46b26471511de78397": {
                "Name": "redis-node-01",
                "EndpointID": "56f7c2f313af5bb1dbc7b2899c17107a7e2b6f60511315a45d40886593d1f050",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Connecting the Networks

[Captains-Bay]🚩 >  cat connect-networks.sh 
echo "connecting networks to containers"

#connecting redisnet1 to node-02 and node-03
docker network connect redisnet1 redis-node-02
docker network connect redisnet1 redis-node-03

#connecting redisnet2 to node-03 & node-01
docker network connect redisnet2 redis-node-01
docker network connect redisnet2 redis-node-03

#connecting redisnet3 to node-01 & node-02
docker network connect redisnet3 redis-node-01
docker network connect redisnet3 redis-node-02


[Captains-Bay]🚩 >  sh connect-networks.sh 
connecting networks to containers


[Captains-Bay]🚩 >  docker network inspect redisnet1
[
    {
        "Name": "redisnet1",
        "Id": "6b47b8256b6408619f7d27d1adc867041239b42140dd7e932de0ea8ba4ccd7cb",
        "Created": "2020-02-07T06:32:31.222918188Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "321153d5db7ab4716d0ad4cc4a3d110b688db5eab59b9dd1705724b4f83618d2": {
                "Name": "redis-node-03",
                "EndpointID": "51a9390c011852b7cdc37b21dc606a58d50e2eb6026cfaa92847fc88600118ff",
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            },
            "8ca5d39cac34bdb997702d7a6ec945a6ddaacda5a3f9df09bdf39efc502210a7": {
                "Name": "redis-node-02",
                "EndpointID": "fa49bda206b95a1a755fe2fe21b3e26f400229636e6b431a2fac0cb9c3f38960",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "93838562e851bf9a794d6abb3cb551a6abefdeaf1b9fa9d1226a9b0aad722635": {
                "Name": "redis-node-01",
                "EndpointID": "e4542306433feec430ffb3904e6f806c033038f186305fe3fdb35a5e957cfd66",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Setting up Redis Enterprise Cluster

[Captains-Bay]🚩 >  cat cluster-setup.sh 
#!/bin/bash

echo "Creating clusters"
docker exec -it redis-node-01 /opt/redislabs/bin/rladmin cluster create name cluster1.collabnix.com username ajeetraina@gmail.com password collab123
docker exec -it redis-node-02 /opt/redislabs/bin/rladmin cluster create name cluster2.collabnix.com username ajeetraina@gmail.com password collab123
docker exec -it redis-node-03 /opt/redislabs/bin/rladmin cluster create name cluster3.collabnix.com username ajeetraina@gmail.com password collab123 

[Captains-Bay]🚩 >  sh cluster-setup.sh 
Creating clusters
Creating a new cluster... ok
Creating a new cluster... ok
Creating a new cluster... ok
[Captains-Bay]🚩 >  

[Captains-Bay]🚩 >  cat crdb.sh 


echo "Creating a CRDB"
docker exec -it redis-node-01 /opt/redislabs/bin/crdb-cli crdb create --name mycrdb --memory-size 512mb --port 12000 --replication false --shards-count 1 --instance fqdn=cluster1.collabnix.com,username=ajeetraina@gmail.com,password=collab123 --instance fqdn=cluster2.collabnix.com,username=ajeetraina@gmail.com,password=collab123 --instance fqdn=cluster3.collabnix.com,username=ajeetraina@gmail.com,password=collab123
[Captains-Bay]🚩 >  sh crdb.sh 
Creating a CRDB
Task 076be001-384e-4f1d-b446-a6081e6a5bf9 created
  ---> Status changed: queued -> started
  ---> CRDB GUID Assigned: crdb:8e3547c1-00a6-450d-9a5a-eb686ab8844e
Now open a shell inside redis-node-01 (docker exec -it redis-node-01 bash) and verify the cluster status with rladmin:

redislabs@redis-node-01:~/bin$ rladmin status
CLUSTER NODES:
NODE:ID ROLE   ADDRESS    EXTERNAL_ADDRESS      HOSTNAME      SHARDS CORES FREE_RAM        PROVISIONAL_RAM VERSION   STATUS
*node:1 master 172.18.0.2 172.19.0.3,172.20.0.3 redis-node-01 0/100  2     680.22MB/5.81GB 0KB/4.76GB      5.4.10-22 OK    

DATABASES:
DB:ID    NAME         TYPE   STATUS    SHARDS    PLACEMENT      REPLICATION       PERSISTENCE       ENDPOINT    

ENDPOINTS:
DB:ID                   NAME                ID              NODE                ROLE                SSL         

SHARDS:
DB:ID        NAME       ID           NODE       ROLE       SLOTS        USED_MEMORY                STATUS       
redislabs@redis-node-01:~/bin$ 
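
Since the CRDB listens on port 12000, which we published from redis-node-01, you can smoke-test it right from the Mac terminal (assuming redis-cli is installed locally; a write to one member replicates to the others):

[Captains-Bay]🚩 >  redis-cli -p 12000
127.0.0.1:12000> set course redis
OK
127.0.0.1:12000> get course
"redis"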

Docker Desktop Dashboard

You can click on “Dashboard” under Docker Desktop to check CPU and Memory utilisation.

Cleaning Up

#!/bin/bash

docker rm -fv $(docker ps -a -q)
docker network rm redisnet1 redisnet2 redisnet3
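
Save the above as cleanup.sh (the filename is arbitrary) and run sh cleanup.sh to remove all the containers and the three bridge networks in one go.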