Running Redis on a 5-Node Docker Swarm Cluster in 2 Minutes

Redis stands for REmote DIctionary Server. It is an open source, in-memory data structure store used as a database, a caching layer or a message broker. Today Redis supports different kinds of abstract data structures, such as strings, lists, hashes, sets, sorted sets, HyperLogLogs, bitmaps, streams, and geospatial indexes. Two weeks back, Redis 6.0 Release Candidate 1 was made available, bringing headline features such as Access Control Lists (ACLs), SSL/TLS support, the new RESP3 protocol, client-side caching and threaded I/O.

In my subsequent blog posts, I will try to cover each of these features in detail. Stay tuned! Luckily, the Redis 6.0 RC1 image is already available on Docker Hub, and you can access its Dockerfile here.

How is Redis different from PostgreSQL or traditional SQL DB?

Redis is an in-memory database: it keeps the whole data set in memory and answers all queries from memory. Since RAM is much faster than disk, this means Redis always has very fast reads. The drawback is that the maximum size of the data set is limited by the available RAM. Nevertheless, Redis has built-in protection: the maxmemory option in the configuration file puts a cap on how much memory Redis can use. When this limit is reached, Redis replies with an error to write commands (while continuing to serve read-only commands), or, if you are using Redis as a cache, it can be configured to evict keys instead. Interesting, isn’t it?
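
As a quick illustration, the sketch below uses the redis-py client to apply a memory cap and an eviction policy at runtime with CONFIG SET, which is equivalent to setting maxmemory and maxmemory-policy in redis.conf. This is only a minimal sketch; the host, port and the 50 MB limit are assumptions, not values used elsewhere in this post.

# pip install redis
import redis

# Connection details are assumptions; adjust for your environment
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Cap memory usage at 50 MB and evict least-recently-used keys
# once the limit is reached (useful when Redis acts as a cache)
r.config_set("maxmemory", "50mb")
r.config_set("maxmemory-policy", "allkeys-lru")

print(r.config_get("maxmemory"))         # {'maxmemory': '52428800'}
print(r.config_get("maxmemory-policy"))  # {'maxmemory-policy': 'allkeys-lru'}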

Databases like PostgreSQL always keep the whole data set, including indices, on disk in a format that allows random access, and queries can be answered directly from the on-disk data. The database may load caches or indices into memory as an optimization. A bigger difference between Redis and SQL databases is how they deal with writes, i.e. what durability guarantees they provide. There are a lot of tunable parameters here, so it’s not correct to say “an SQL database is always more durable than a Redis database”. However, Redis usually commits data to permanent storage on a periodic basis, whereas Postgres usually commits before each transaction is marked as complete. This means Postgres is slower because it commits more frequently, but Redis usually has a time window where data loss may occur even after the client was told that its update was handled successfully. Whether that data loss is an acceptable tradeoff depends on the use case.
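
That time window is tunable. With the append-only file enabled (the stack file later in this post starts Redis with --appendonly yes), you can choose how often Redis fsyncs the AOF to disk. Below is a minimal redis-py sketch of tuning this at runtime; the connection details are assumptions:

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Turn on the append-only file and fsync it once per second.
# "always" narrows the data-loss window further at the cost of slower
# writes; "no" leaves fsync timing entirely to the operating system.
r.config_set("appendonly", "yes")
r.config_set("appendfsync", "everysec")

print(r.config_get("appendfsync"))   # {'appendfsync': 'everysec'}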

What’s so cool about key-value store?

Redis is based on the key-value model, in which data is stored in and fetched from Redis by key. Key-based access allows for extremely efficient access times, and this model maps naturally to caching, with Redis providing the customary GET and SET semantics for interacting with the data.

Did you know? Redis can handle up to 2^32 keys, and was tested in practice to handle at least 250 million keys per instance. Every hash, list, set, and sorted set can hold 2^32 elements. In other words, your limit is likely the available memory in your system. Also worth noting: several Redis commands operate on multiple keys. Multi-key operations provide better overall performance than performing the operations one after the other, because they require substantially less communication and administration.
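
To make the key-value model and multi-key commands concrete, here is a short redis-py sketch. The host, port and key names are assumptions made purely for illustration:

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Classic single-key SET/GET semantics
r.set("greeting", "hello")
print(r.get("greeting"))                         # hello

# Multi-key variants move several keys in one round trip,
# saving network overhead compared to looping over SET/GET
r.mset({"user:1": "alice", "user:2": "bob", "user:3": "carol"})
print(r.mget("user:1", "user:2", "user:3"))      # ['alice', 'bob', 'carol']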

Scalable shared-nothing clustering

Redis can be scaled horizontally to meet any increase in demand for RAM, computation or network resources. A Redis cluster is a set of processes, possibly on multiple nodes, that work together to provide the caching service. The cluster is made up of multiple Redis servers (i.e. shards), with each one of these being responsible for a subset of the cache’s keyspace. This allows scaling out the cluster simply by adding more shards to it and redistributing the data.

In this blog post, we will test drive Redis 6.0 Release Candidate 1 for the first time, running on a 5-node Docker Swarm environment right in the browser. I will show you how to set up Redis Open Source with a single docker stack CLI command. We will also deploy a simple Python web application, defined in a Docker Compose file, which uses the Flask framework and maintains a hit counter in Redis. Let’s get started:

Tested Infrastructure

To get started with Docker Swarm, you can use “Play with Docker”, aka PWD. It’s free of cost and open for all. You get a maximum of 5 instances of Linux systems to play around with Docker.

  • Open the Play with Docker lab in your browser
  • Click on the icon next to Instances and choose the 3 Managers & 2 Workers template
  • Wait a few seconds for the 5-node Swarm cluster to come up

Clone the Repository

I have created a docker-compose file with the below contents:

version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: ajeetraina/redis-flask
    build:
      context: ./stackdemo
      dockerfile: Dockerfile
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "8000:8000"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis:6.0-rc1
    ports:
      - "6379:6379"
    volumes:
      - data:/home/docker/data
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - webnet
networks:
  webnet:
volumes:
  data:

I have put the above code under the Collabnix repository, which you can pull and leverage directly. Follow the steps below:

git clone https://github.com/collabnix/dockerlabs
cd dockerlabs/solution/redis/viz-web-redis
docker stack deploy -c docker-compose.yml myredis
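
For context, the web service in the stack is a small Flask application that bumps a counter in Redis on every request. The actual code ships inside the ajeetraina/redis-flask image; the sketch below is only a minimal approximation of such a hit-counter app, not the image’s exact source:

# app.py - a minimal Flask + Redis hit counter (illustrative only)
from flask import Flask
import redis

app = Flask(__name__)

# "redis" is the Compose service name, resolvable on the webnet overlay network
cache = redis.Redis(host="redis", port=6379)

@app.route("/")
def hello():
    # INCR is atomic, so multiple web replicas never lose a hit
    count = cache.incr("hits")
    return f"Hello World! I have been seen {count} times.\n"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)

Because the Compose service is named redis, the web containers can reach it simply by that hostname over the shared overlay network.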

Verifying the Services

$ docker service ls
ID                  NAME                 MODE                REPLICAS            IMAGE                             PORTS
ydgp8j56apek        myredis_redis        replicated          1/1                 redis:6.0-rc1                     *:6379->6379/tcp
ofqnb4282zo1        myredis_visualizer   replicated          1/1                 dockersamples/visualizer:stable   *:8080->8080/tcp
bkxd3aklxhj7        myredis_web          replicated          5/5                 ajeetraina/redis-flask:latest     *:8000->8000/tcp

Verifying if Redis is running successfully

$ docker service ps myredis_redis
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
robvimouagqj        myredis_redis.1     redis:6.0-rc1       manager1            Running             Running about a minute ago 

Verifying the Redis Volume

$ docker volume inspect myredis_data
[
    {
        "CreatedAt": "2019-12-29T02:18:00Z",
        "Driver": "local",
        "Labels": {
            "com.docker.stack.namespace": "myredis"
        },
        "Mountpoint": "/var/lib/docker/volumes/myredis_data/_data",
        "Name": "myredis_data",
        "Options": null,
        "Scope": "local"
    }
]

Checking the Redis logs

$ docker service logs -f myredis_redis
myredis_redis.1.robvimouagqj@manager1    | 1:C 29 Dec 2019 02:35:54.400 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
myredis_redis.1.robvimouagqj@manager1    | 1:C 29 Dec 2019 02:35:54.400 # Redis version=5.9.101, bits=64, commit=00000000, modified=0, pid=1, just started
myredis_redis.1.robvimouagqj@manager1    | 1:C 29 Dec 2019 02:35:54.400 # Configuration loaded
myredis_redis.1.robvimouagqj@manager1    | 1:M 29 Dec 2019 02:35:54.402 * Running mode=standalone, port=6379.
myredis_redis.1.robvimouagqj@manager1    | 1:M 29 Dec 2019 02:35:54.402 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
myredis_redis.1.robvimouagqj@manager1    | 1:M 29 Dec 2019 02:35:54.402 # Server initialized
myredis_redis.1.robvimouagqj@manager1    | 1:M 29 Dec 2019 02:35:54.402 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
myredis_redis.1.robvimouagqj@manager1    | 1:M 29 Dec 2019 02:35:54.402 * Ready to accept connections

Where is my Redis service running?

$ docker service ps myredis_redis
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
robvimouagqj        myredis_redis.1     redis:6.0-rc1       manager1            Running             Running 3 minutes ago

Inspecting Redis Service

$ docker service inspect myredis_redis
[
    {
        "ID": "hmistkdxnirdm5vq2f41aaqr9",
        "Version": {
            "Index": 127
        },
        "CreatedAt": "2019-12-29T02:35:47.7810801Z",
        "UpdatedAt": "2019-12-29T02:35:47.78773254Z",
        "Spec": {
            "Name": "myredis_redis",
            "Labels": {
                "com.docker.stack.image": "redis:6.0-rc1",
                "com.docker.stack.namespace": "myredis"
            },
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "redis:6.0-rc1@sha256:c2227b1e5c4755cb94f18eef10b34fb4eac116ce8c5ea0a40d0ca806927b8311",
                    "Labels": {
                        "com.docker.stack.namespace": "myredis"
                    },
                    "Args": [
                        "redis-server",
                        "--appendonly",
                        "yes"
                    ],
                    "Privileges": {
                        "CredentialSpec": null,
                        "SELinuxContext": null
                    },
                    "Mounts": [
                        {
                            "Type": "volume",
                            "Source": "myredis_data",
                            "Target": "/home/docker/data",
                            "VolumeOptions": {
                                "Labels": {
                                    "com.docker.stack.namespace": "myredis"
                                }
                            }
                        }
                    ],
                    "StopGracePeriod": 10000000000,
                    "DNSConfig": {},
                    "Isolation": "default"
                },
                "Resources": {},
                "RestartPolicy": {
                    "Condition": "any",
                    "Delay": 5000000000,
                    "MaxAttempts": 0
                },
                "Placement": {
                    "Constraints": [
                        "node.role == manager"
                    ],
                    "Platforms": [
                        {
                            "Architecture": "amd64",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "386",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "ppc64le",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "s390x",
                            "OS": "linux"
                        }
                    ]
                },
                "Networks": [
                    {
                        "Target": "rolenrgn8nqibx2h16wd2tac6",
                        "Aliases": [
                            "redis"
                        ]
                    }
                ],
                "ForceUpdate": 0,
                "Runtime": "container"
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 1
                }
            },
            "UpdateConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "RollbackConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 6379,
                        "PublishedPort": 6379,
                        "PublishMode": "ingress"
                    }
                ]
            }
        },
        "Endpoint": {
            "Spec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 6379,
                        "PublishedPort": 6379,
                        "PublishMode": "ingress"
                    }
                ]
            },
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 6379,
                    "PublishedPort": 6379,
                    "PublishMode": "ingress"
                }
            ],
            "VirtualIPs": [
                {
                    "NetworkID": "sl1ecujt79razdyhjvmbohhjo",
                    "Addr": "10.255.0.26/16"
                },
                {
                    "NetworkID": "rolenrgn8nqibx2h16wd2tac6",
                    "Addr": "10.0.1.15/24"
                }
            ]
        }
    }
]

Exploring the Cluster Manager in redis-cli

The redis-cli that ships with Redis 6.0 bundles a cluster manager. You can exec into the running Redis container (its ID starts with 2db here) and list the available cluster commands:

$ docker exec -it 2db redis-cli --cluster help
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port
                 --cluster-search-multiple-owners
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  set-timeout    host:port milliseconds
  import         host:port
                 --cluster-from <arg>
                 --cluster-copy
                 --cluster-replace
  backup         host:port backup_directory
  help           

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.

Accessing the Flask Application

The web service is published on port 8000, and every request increments the hit counter stored in Redis:

$ curl localhost:8000
Hello World! I have been seen 6 times.

$ curl localhost:8000 
Hello World! I have been seen 7 times.

Visualizing Redis using Rebrow

If you are serious about monitoring cluster health with real-time alerts, analyzing your cluster configuration, rebalancing as necessary, and managing node addition, re-sharding, node deletion and master-replica configuration, you must try out RedisInsight. It’s pretty cool and officially supported by Redis Labs, and it works especially well if you are using Redis Enterprise.

As we are using Redis Open Source, one of the promising tools I came across is Rebrow. It is a Python/Flask-based browser for Redis content, built for developers who need to look into a Redis store. It allows inspection and deletion of keys, can follow Pub/Sub messages, and displays some runtime and configuration information.

Let’s try to run Rebrow as a Docker Swarm service and see if it really works.

$ docker service create --name myrebrow --publish 5001:5001 --replicas 2  marian/rebrow
p9qfx8bfmk7doamfxwy65eicu
overall progress: 2 out of 2 tasks 
1/2: running   
2/2: running   
verify: Service converged 

Wow! That was really fast. Let’s open up the web browser and check if it works without any issue. Open the Rebrow UI on port 5001, then supply your manager IP address and port 6379. It should connect to the Redis server and display the server status.


Click on “Keys” to find the “hits” key with type “string”.


Once you click on “hits”, it should show you the total number of hits for the webpage.


In my next blog post, I will test drive Redis Open Source on the Jetson Nano for the first time. Stay tuned for more exciting stuff around Redis in the near future.

Don’t miss this out!

Redis Day Bengaluru is happening this January 21st-22nd at Taj Yesvantpur. It is a free, full-day, single-track event about anything and everything Redis. Its main purpose is for Redis users and developers to share technical knowledge and user stories. Check it out.

2019 Year in Review: The Rise of Pico, Collabnix Slack & DockerLabs

2019 was a transformational year for Collabnix. With major initiatives like DockerLabs, the Pico project & the Collabnix Slack, the site attracted close to 1.5 million visitors worldwide. This year Collabnix also bagged the top position in “The Most Loved Docker Articles & Blogs” list officially announced by Docker, Inc.

1,480,472 visitors this year…

This year, Collabnix attracted visitors from all over the world, with the US as the top location; France, New Jersey & Germany were among the leading locations too.

The top 5 blog posts accounted for around 50% of the total visits during Jan 2019 – Dec 2019.

1850+ Collabnix Slack members

Early this year, the Collabnix Slack channel was introduced to bring all Docker & Kubernetes enthusiasts under one umbrella. Within 4 months, it crossed 1,000 registrations, which is surely an overwhelming figure. To date, it has crossed 1,850+ registrations, and lots of new initiatives have been introduced. A HUGE THANKS to the Docker & Kubernetes community members for all their support.

2000+ unique attendees…

In the last 6-7 months, the Docker Bengaluru Meetup attracted 2000+ unique attendees, which is really inspiring. Docker Bengaluru saw 400+ attendees on average, and that too on Saturdays, which shows that Docker is still an exciting technology platform for the DevOps world.

This year, we conducted 30+ Meetup events in total, all across Bangalore & India. Special thanks to the below community members for their tireless contributions:

  • Sangam Biradar, EngineITOps
  • Saiyam Pathak, Walmart Labs
  • Savio Mathew, Publicis Sapient
  • Apurva Bhandari, Vuclip
  • Vinay Agrawal, Dell Technologies
  • Suman Chakraborty, SAP Labs
  • Naman Bajpayee, Docker Chandigarh Meetup
  • Prashansa Kulshrestha, UPES Dehradun
  • Balasundaram Nataraj, OnMobile
  • Rohan Mangal, AWS

The Rise of Pico: The Grace Hopper Celebration India

This November, Pico got selected for the Grace Hopper Celebration India event, which attracted an audience of around 5,000+. In case you’re new to it, Pico is an open source project that helps you implement object detection & analytics (deep learning) using Docker on IoT devices like the Raspberry Pi & Jetson Nano in a matter of minutes. The Grace Hopper Celebration of Women in Computing (GHC) is a series of conferences designed to bring the research and career interests of women in computing to the forefront. It is the world’s largest gathering of women in computing.

Pico took me to various places like Kochi, Vellore, Dehradun & Jaipur. Pico was also selected for the AWS Community Day 2019 conference, which attracted an audience of around 750+.

26000+ visits & 550+ stars for DockerLabs

One of the most exciting announcements made early this year was the introduction of DockerLabs. DockerLabs brings you tutorials that help you get hands-on experience with Docker & Kubernetes. Here you will find complete documentation of labs and tutorials that will help you, no matter whether you are a beginner, SysAdmin, IT Pro or Developer. There are around 550+ workshops and tutorials around Docker & Kubernetes. In case you are not familiar with it, do visit http://dockerlabs.collabnix.com/ for further details.

Special thanks to Sangam Biradar, EngineITOps, for contributing towards https://gopherlabs.collabnix.com, which gained a lot of traction within a few weeks.

Docker Bengaluru crossed 8900+ members

Ever since I took on the organizer role for Docker Bengaluru, it has grown exponentially. This year we added around 2,300+ members, which is definitely a HUGE figure. On average, 400+ people attended each Meetup event, which was more than enough to excite the rest of the community.

Overall, 2019 has been an exciting year for Collabnix.
Thanks for being a part of the Collabnix community! Looking forward to bringing more exciting content & great collaboration in the year ahead.

References:

  • https://dockerlabs.collabnix.com
  • https://kubelabs.collabnix.com
  • https://events.collabnix.com
  • https://webinar.collabnix.com
  • https://gopherlabs.collabnix.com

Kubernetes Monitoring & Best Practices Talk at Sumo Logic Bengaluru User Group

Last Friday, I was invited by Sumo Logic to talk about Kubernetes Monitoring & Best Practices. Around 200+ attendees participated in this event, which happened at The Bier Library, located in the bylanes of Koramangala 6th Block, Bengaluru, a beautiful space complete with open seating and a koi pond right in the centre.

The Bier Library, Koramangala (Bengaluru)

The event kicked off at around 5:30 PM with a short video from Mobility India. Mobility India is a non-governmental organization committed to improving the lives of people with disabilities, as well as people who are impoverished, through relevant and comprehensive community programs. There were 8 speakers in total, mostly Sumo Logic customers, who shared how they leveraged the Sumo Logic platform for successful business outcomes.

The overall theme of this event revolved around the below topics:

  • Acquire best practices from industry leaders
  • Gain key insights from Sumo experts
  • Learn how to maximize the value of your Sumo investment
  • Obtain a sneak peek of the product roadmap
  • Network with your peers
  • Celebrate well-earned achievements!

This time I collaborated with Suresh Govindachetty, Enterprise Sales Engineer at Sumo Logic, to come up with a talk around Kubernetes Monitoring & Best Practices. It was a 20-minute talk where I discussed the challenges around Kubernetes, the methodology shift from host-based to service-oriented monitoring, Kubernetes metrics, and best practices around monitoring. Do check out the slides.

Planning to kickstart your Kubernetes journey? Talk to us on the Collabnix Slack channel.

Upcoming Meetup Events:

  • Docker Bengaluru Meetup event happening this January 18th at SAP Labs. Click here to register.
  • AWS User Group Kochi – Meetup with Docker Captain. Click here to register.