Are you looking for a tool that can help you migrate ElastiCache data to Redis Open Source or Redis Enterprise without any downtime? Then you are at the right place. Before I recommend any tool, it is equally important to understand why you would want to migrate from Amazon ElastiCache to Redis Enterprise in the first place. Now, that’s a great question! I went through several blogs and Stack Overflow comments, and one of the most common reasons I came across is Amazon ElastiCache’s lack of multi-model database support for modern applications. Redis Enterprise supports multiple data models and structures, so you can iterate on applications quickly without worrying about schemas or indexes.
RIOT is an open source data import/export tool for Redis. It is used to bulk load/unload data from files (CSV, JSON, XML) and relational databases (JDBC), replicate data between Redis databases, or generate random datasets.
RIOT was developed by Julien Ruaux, a Solution Architect at Redis Labs. I was lucky enough to get the chance to work with him and present this tool to a wider audience inside Redis Labs.
RIOT is like a Swiss Army knife. It can be used for the following purposes:
- Importing CSV into RediSearch
- Exporting CSV
- Importing CSV into Geo data structures
- Importing JSON
- Exporting JSON
- Exporting compressed JSON
- Importing from a database
- Exporting to a database
- Generating random data in a Redis DB
- Live replication between Redis databases
RIOT reads records from a source (file, database, Redis, generator) and writes them to a target (file, database, Redis). It can import/export local or remote files in CSV, fixed-width, or JSON format with optional GZIP compression.
In this blog post, we will see how to migrate an AWS ElastiCache database to Redis Enterprise without any downtime.
The topics below outline the process of migrating your database from AWS ElastiCache to Redis Enterprise:
- Preparing ElastiCache (Source)
- Preparing Redis Enterprise (Target)
- Begin the Migration Process
- Verifying the Data Migration Progress
- Completing the Data Migration
Create an EC2 instance
Create an EC2 instance on AWS Cloud. Ensure that this new instance is in the same security group and VPC as your ElastiCache cluster so that it can reach the database.
SSH to this new EC2 instance from your laptop as shown below:
ssh -i "migration.pem" email@example.com
where migration.pem is the key pair used to connect to the EC2 instance.
ubuntu@ip-172-31-46-31:~$ sudo redis-cli -h ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com -p 6379
ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com:6379> info
# Server
redis_version:5.0.6
redis_git_sha1:0
redis_git_dirty:0
redis_build_id:0
redis_mode:cluster
os:Amazon ElastiCache
arch_bits:64
multiplexing_api:epoll
..
# Cluster
cluster_enabled:1
# Keyspace
Setting up RIOT Tool
We will leverage the same EC2 instance running Ubuntu 16.04 to set up the RIOT tool.
Log in to the Ubuntu system and install the software below:
- Installing Java
It is recommended to install at least OpenJDK 11 on this Ubuntu OS.
sudo add-apt-repository ppa:openjdk-r/ppa && \
  sudo apt-get update -q && \
  sudo apt install -y openjdk-11-jdk
- Installing RIOT
wget https://github.com/Redislabs-Solution-Architects/riot/releases/download/v1.8.11/riot-1.8.11.zip
unzip riot-1.8.11.zip
Run the command below to generate hashes in the keyspace test2:<index>, with fields field1 and field of 100 and 1,000 bytes respectively:
$ ./riot --cluster -s ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com:6379 gen --sleep 1 --data field1=100 field=1000 hmset --keyspace test2 --keys index
- -s refers to the server endpoint
- --cluster connects to a Redis cluster
- gen generates data
- hmset sets the specified fields to their respective values in the hash stored at each key
- --sleep 1 delays for the specified amount of time between writes, in our case 1 second
- test2 is the keyspace prefix
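Conceptually, each record the generator writes amounts to an HMSET of two random payloads of the configured sizes. The local sketch below illustrates this; the key index 1 and the base64 payloads are illustrative placeholders, not what RIOT actually generates:

```shell
#!/bin/sh
# Build placeholder payloads of the configured sizes (100 and 1000 bytes).
# base64 output is flattened with tr so the byte counts come out exact.
f1=$(head -c 100 /dev/urandom | base64 | tr -d '\n' | head -c 100)
f2=$(head -c 1000 /dev/urandom | base64 | tr -d '\n' | head -c 1000)

# One generated record is then equivalent to:
#   HMSET test2:1 field1 <100-byte value> field <1000-byte value>
echo "HMSET test2:1 field1 <${#f1} bytes> field <${#f2} bytes>"
```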
Let the above command run without any interruption.
As soon as you run the above command, you can verify the growing keyspace via the INFO output in redis-cli:
# Keyspace
db0:keys=279763,expires=0,avg_ttl=0
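If you want to track the count programmatically rather than eyeball the INFO output, the keys=… figure can be pulled out with a small helper. This is a sketch that parses a captured INFO keyspace line; in practice you would feed it the output of `redis-cli -h <endpoint> info keyspace`:

```shell
#!/bin/sh
# Extract the key count from an INFO keyspace line such as:
#   db0:keys=279763,expires=0,avg_ttl=0
keycount() {
  echo "$1" | sed -n 's/^db[0-9]*:keys=\([0-9]*\),.*/\1/p'
}

keycount "db0:keys=279763,expires=0,avg_ttl=0"   # prints 279763
```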
If you want to use a Docker container for RIOT, you can head over to https://github.com/ajeetraina/riot/blob/master/Dockerfile, which can be built using:
$ git clone https://github.com/ajeetraina/riot
$ cd riot
$ docker build -t ajeetraina/riot .
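Once built, the container should take the same arguments as the bare CLI, assuming the image's entrypoint wraps the riot binary (an assumption on my part; verify against the Dockerfile). Shown here as a dry run that prints the command instead of executing it:

```shell
#!/bin/sh
# Containerized invocation (sketch; assumes the image entrypoint is the
# riot binary itself). Printed as a dry run rather than executed.
cmd="docker run --rm ajeetraina/riot --cluster \
  -s ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com:6379 \
  gen --sleep 1 --data field1=100 field=1000 hmset --keyspace test2 --keys index"
echo "$cmd"
```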
Preparing Redis Enterprise as a target DB
To test the migration we need to set up a target database, so we will install Redis Enterprise. I have set it up on Ubuntu 16.04 LTS running on Google Cloud Platform.
$ wget https://s3.amazonaws.com/redis-enterprise-software-downloads/5.4.14/redislabs-5.4.14-19-xenial-amd64.tar
$ sudo tar xvf redislabs-5.4.14-19-xenial-amd64.tar
$ sudo chmod +x install.sh
$ sudo ./install.sh
Access the Redis Enterprise UI at https://<public-ip>:8443/
Enter a cluster name of your choice; in my case it’s ajeetmigtest.
On the “Create Database” page, go ahead and specify a memory limit appropriate to your infrastructure, and supply a Redis password.
Once you save the configuration, you can verify all the entries as shown below:
Please save the public endpoint (shown above) for future reference.
Joining the rest of the nodes
By now, you should see the memory allocation at close to 66 GB.
Ensure that the data-generation command is still up and running:
ubuntu@ip-172-31-41-56:~/riot-1.8.11/bin$ ./riot --cluster -s ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com:6379 gen --sleep 1 --data field1=100 field=1000 hmset --keyspace test2 --keys index
Begin the Replication Process
Run the command below to begin replication from the source to the target database:
ubuntu@ip-172-31-41-56:~/riot-1.8.11/bin$ sudo ./riot -s 188.8.131.52:12000 replicate --cluster -s ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com:6379
In the above CLI, 188.8.131.52:12000 is the endpoint on the GCP instance where Redis Enterprise is running (the target, given by the first -s), while ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com:6379 is the ElastiCache endpoint (the source, given by the -s after replicate).
Verifying the Data Replication Progress
ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com:6379> keys *
..
..
1084029) "test2:2181214"
1084030) "test2:375848"
(36.02s)
You can compare the total number of keys in AWS ElastiCache and the GCP target with the above command using redis-cli. Note that KEYS * is O(N) and blocks the server while it scans the entire keyspace (36 seconds above); DBSIZE returns the total key count far more cheaply.
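To decide when the two sides have converged, compare the key counts from both endpoints. A minimal sketch follows; the counts are hard-coded placeholders here, and in practice you would capture each one with `redis-cli -h <host> -p <port> dbsize` (in cluster mode, DBSIZE is per node, so sum it across the shards):

```shell
#!/bin/sh
# Placeholder counts; replace with real values, e.g.:
#   src_keys=$(redis-cli -h <elasticache-endpoint> -p 6379 dbsize)
#   dst_keys=$(redis-cli -h <redis-enterprise-endpoint> -p 12000 dbsize)
src_keys=1084030
dst_keys=1084030

if [ "$src_keys" -eq "$dst_keys" ]; then
  echo "replication caught up: $dst_keys keys"
else
  echo "still behind by $((src_keys - dst_keys)) keys"
fi
```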