When looking to migrate data from one Redis instance to another, there are a number of methods one could employ, such as replication or snapshotting. However, migrations can get more complicated when you’re moving data to a Redis instance managed by a cloud provider, as managed databases often limit how much control you have over the database’s configuration.
This tutorial outlines one method you can use to migrate data to a Redis instance managed by DigitalOcean. The method involves creating a Bash script that uses Redis’s internal migrate command to securely pass data through a TLS tunnel configured with stunnel. This guide will also go over a few other commonly-used migration strategies and why they’re problematic when migrating to a DigitalOcean Managed Database.
To complete this tutorial, you will need:
- One Ubuntu 18.04 server with a non-root sudo user and a firewall configured with ufw. To set up this environment, follow our initial server setup guide for Ubuntu 18.04.
- Redis installed on that server. This instance will serve as the source of the data you’ll migrate.
- A Managed Redis Database provisioned from your DigitalOcean account, along with a connection to it secured by a TLS tunnel configured with stunnel. Note that you will not need to install the redis-tools package in Step 1 of that setup, since you will have already installed redis-cli when you installed Redis in the previous prerequisite tutorial.

Note: To help keep things clear, this guide will refer to the Redis instance hosted on your Ubuntu server as the “source.” Likewise, it will refer to the instance managed by DigitalOcean as either the “target” or the “Managed Database.”
There are several methods you can employ to migrate data from one Redis instance to another. However, some of these approaches present problems when you’re migrating data to a Redis instance managed by DigitalOcean.
For example, you can use replication to turn your target Redis instance into an exact copy of the source. To do this, you would connect to the target Redis server and run the replicaof command with the following syntax:
- replicaof source_hostname_or_ip source_port
This will cause the target instance to replicate all the data held on the source without destroying any data that was previously stored on it. Following this, you would promote the replica back to being a primary instance with the following command:
- replicaof no one
Another method for migrating Redis data is to take a snapshot of the data held on your source instance with either Redis’s save or bgsave commands. Both of these commands export the snapshot to a file ending in .rdb, which you would then transfer to the target server. Following that, you’d restart the Redis service so it can load the data.
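For context, between two self-managed servers this snapshot approach might look roughly like the following sketch. The hostname sammy@target_server is a placeholder, the paths assume Redis’s default settings, and you may need to add authentication or adjust permissions for your setup. On the source server, you would create and copy the snapshot:
- redis-cli bgsave
- sudo cp /var/lib/redis/dump.rdb /tmp/dump.rdb
- scp /tmp/dump.rdb sammy@target_server:/tmp/dump.rdb
Then, on the target server, you would stop Redis, swap in the dump file, and start Redis again so it loads the data:
- sudo systemctl stop redis-server
- sudo mv /tmp/dump.rdb /var/lib/redis/dump.rdb
- sudo chown redis:redis /var/lib/redis/dump.rdb
- sudo systemctl start redis-server
Again, this only works when you control the target server’s file system, which is exactly what a managed database does not give you.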
However, each of these three commands (replicaof, save, and bgsave) is disabled on DigitalOcean Managed Databases. These, among other disabled commands, require advanced privileges or access to the managed database server’s underlying file system, making them impractical for a managed database solution. Because of this, DigitalOcean, like other managed database providers, restricts access to these commands, making the associated migration methods impossible.
Because DigitalOcean’s Managed Databases disallow both replication and snapshotting as means of migrating data, this tutorial will instead use Redis’s migrate command to move data from the source to the target. The migrate command is designed to only move one key at a time, but this tutorial will use a Bash script to migrate an entire Redis database automatically.
This optional step involves loading your source Redis instance with some sample data so you can experiment with migrating data to your Managed Redis Database. If you already have data that you want to migrate over to your target instance, you can move ahead to Step 2.
To begin, run the following command to access your Redis server:
- redis-cli
If you’ve configured your Redis server to require password authentication, run the auth command followed by your Redis password:
- auth password
Then run the following commands. These will create a number of keys holding strings, plus one key holding a hash, one holding a list, and one a set:
- mset string1 "Redis" string2 "is" string3 "fun!"
- mset string4 "Redis" string5 "is" string6 "fast!"
- mset string7 "Redis" string8 "is" string9 "feature-rich!"
- mset string10 "Redis" string11 "has" string12 "fantastic documentation!"
- mset string13 "Redis" string14 "is" string15 "free and open-source!"
- mset string16 "Redis" string17 "has many" string18 "data types."
- mset string19 "Redis" string20 "allows" string21 "strings."
- hmset hash1 field1 "Redis" field2 "allows" field3 "hashes."
- rpush list1 "Redis" "also" "allows" "lists."
- sadd set1 "It" "even" "allows" "sets."
Additionally, run the following expire commands to provide a few of these keys with a timeout. This will make them volatile, meaning that Redis will delete them after the specified amount of time, 7500 seconds:
- expire string2 7500
- expire hash1 7500
- expire set1 7500
With that, you have some example data you can export to your target Redis instance. You can keep the redis-cli prompt open for now, since you will run a few more commands from it in the next step in order to back up this data.
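If you’d like to confirm that the sample data was written, you can check how many keys the current database holds with Redis’s dbsize command:
- dbsize
If you ran every command above, you will see output like this, since those commands create 24 keys in total:
Output
(integer) 24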
Previously, this tutorial discussed using Redis’s bgsave command to take a snapshot of a Redis database and migrate it to another instance. While we won’t use bgsave as a means of migrating Redis data, we will use it here to back up the data in case we encounter an error during the migration process.
If you don’t already have it open, start by opening up the Redis command line interface:
- redis-cli
Also, if you’ve configured your Redis server to require password authentication, run the auth command followed by your Redis password:
- auth password
Next, run the bgsave command. This will create a snapshot of your current data set and export it to a dump file whose name ends in .rdb:
- bgsave
Note: As mentioned in the previous Things To Consider section, you can take a snapshot of your Redis database with either the save or bgsave commands. The reason we use the bgsave command here is that the save command runs synchronously, meaning it will block any other clients connected to the database. Because of this, the save command documentation recommends that this command should almost never be run in a production environment.
Instead, it suggests using the bgsave command, which runs asynchronously. This will cause Redis to fork the database into two processes: the parent process will continue to serve clients while the child saves the database before exiting.
Note that if clients add or modify data while the bgsave operation is running or after it finishes, these changes won’t be captured in the snapshot.
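Because bgsave returns immediately while the snapshot is written in the background, you may want to confirm that the save has actually finished before relying on the dump file. One way to do this is with Redis’s lastsave command, which returns the Unix timestamp of the last successful save; the value shown here is only illustrative:
- lastsave
Output
(integer) 1700000000
If you run lastsave before and after bgsave, the timestamp will increase once the background save completes.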
Following that, you can close the connection to your Redis instance by running the exit command:
- exit
You can find this dump file in your Redis installation’s working directory. If you’re not sure which directory this is, you can check by opening up your Redis configuration file with your preferred text editor. Here, we’ll use nano:
- sudo nano /etc/redis/redis.conf
Navigate to the line that begins with dbfilename. It will look like this by default:
. . .
# The filename where to dump the DB
dbfilename dump.rdb
. . .
This directive defines the file to which Redis will export snapshots. The next line (after any comments) will look like this:
. . .
dir /var/lib/redis
. . .
The dir directive defines Redis’s working directory where any Redis snapshots are stored. By default, this is set to /var/lib/redis, as shown in the example.
Close the redis.conf file. Assuming you didn’t make any changes to the file, you can do so by pressing CTRL+X.
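If you’d rather not open the file in an editor, you can also read both of these directives with grep. This is just an alternative way to find the same values, assuming the default configuration file location:
- sudo grep -E '^(dbfilename|dir) ' /etc/redis/redis.conf
Output
dbfilename dump.rdb
dir /var/lib/redis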
Then, list the contents of your Redis working directory to confirm that it’s holding the exported data dump file:
- sudo ls /var/lib/redis
If the dump file was exported correctly, you will see it in this command’s output:
Output
dump.rdb
Once you’ve confirmed that you successfully backed up your data, you can begin the process of migrating it to your Managed Database.
Recall that this guide uses Redis’s internal migrate command to move keys one by one from the source database to the target. However, unlike previous steps of this tutorial, you won’t run this command from the redis-cli prompt. Instead, you’ll write a Bash script that, when invoked, will allow you to migrate all the keys from your source Redis instance to your managed one with a single command.
Note: If you have clients writing data to your source Redis instance, now would be a good time to configure them to also write data to your Managed Database. This way, you can migrate the existing data from the source to your target without losing any writes that occur after the migration.
Also, be aware that this migration script will not replace any existing keys on the target database. If a key you’re migrating has the same name as a key that already exists on the target, the migration of that key will fail silently and the existing key will be left in place.
Open a new file named redis-migrate.sh:
- nano redis-migrate.sh
At the top of the file, add a shebang. This is a sequence of characters that lets your server know that the script should be executed with the bash shell:
#!/bin/bash
Next, you will use Bash’s set builtin, which allows you to enable or disable certain shell options. This will be useful for this script, because we’ll use it to protect against a few potential pitfalls.
Below the shebang, add the following set command:
#!/bin/bash
set -euo pipefail
This includes the e option, which will cause the script to exit immediately if any command within it exits with a non-zero status, and the u option, which tells the script to treat any unset variables as errors that force it to exit. This will be useful for our purposes, as this script will require user input.
The last flag, o, allows you to set a number of shell options by name. Here, set the pipefail option. In *nix systems, pipes (|) are used to pass the output of one command as input into another. For example:
echo "Carpe diem, quam minimum credula postero." | grep diem
If the command to the left of a pipe (in this example, the echo command) were to fail, the pipeline as a whole would still report success, because by default a pipeline’s exit status is the exit status of its last command (here, the grep command). The pipefail option changes this behavior and causes the script to exit if any command in a pipe chain causes an error.
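As a quick aside, you can see the effect of pipefail for yourself in an interactive shell. These commands are only a demonstration and are not part of the migration script:
- bash -c 'false | echo "pipeline ran"; echo "exit status: $?"'
Output
pipeline ran
exit status: 0
- bash -c 'set -o pipefail; false | echo "pipeline ran"; echo "exit status: $?"'
Output
pipeline ran
exit status: 1
In the first command, the failure of false is masked because the last command in the pipeline succeeded; with pipefail set, the pipeline as a whole reports the failure.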
This script will use Redis’s scan command to iterate over every key in your database. However, scan can only iterate through one database at a time, which means that if you have keys stored in multiple databases you must be able to specify which database you want to scan and then migrate. Similarly, the migrate command that this script will use requires you to specify the logical database on the target instance to which you want to migrate data.
Because of this, the script will require users to pass numbers representing both the source database and the target database on the managed Redis instance as command line arguments. To this end, add the following if/then statement:
#!/bin/bash
set -euo pipefail
if [ "$#" -lt 2 ]
then
echo "Migrate Redis keys to a DigitalOcean Managed Database"
echo "Usage: $0 [source database] [target database]"
exit 1
fi
This statement checks whether the number of arguments passed to the script is less than 2. If so, it prints a message reminding the user of the function of the script as well as how to invoke it correctly. It then exits immediately, and the if/then statement closes with fi.
Next, define the following variables:
- sourcedb: The script will use this variable to refer to the logical Redis database on the source instance. Set it to the first argument passed to the script when invoked (${1}).
- targetdb: Similarly, the script will use this variable to refer to the logical database on the target instance. Set this variable to the second argument passed to the script (${2}).
- cursor: We’ll go over how the script uses this variable shortly. For now, just set it to -1.
The new lines declaring these variables should look like this:
. . .
exit 1
fi
sourcedb=${1}
targetdb=${2}
cursor=-1
Managed Redis instances typically require users to submit a password to authenticate. Rather than hard-coding passwords into this script, add the following lines to set up a couple of prompts that will ask the user to enter the passwords for both their local and managed Redis instances.
The first and third of these new lines use Bash’s read builtin. read will read a single line from standard input and assign that value to a variable name passed to it as an argument. Both of these lines include the -s option, which prevents read from echoing input in the terminal; this is important for sensitive information like passwords. Both also include the -p option, which allows you to output the string immediately after it as a prompt before attempting to read any input.
The first line will prompt you to enter the password for your local Redis instance, and the third will prompt you to enter your managed Redis instance’s password. The line between them will print a blank line, causing the second prompt to appear on a new line. This will help make both prompts more readable in a terminal:
. . .
sourcedb=${1}
targetdb=${2}
cursor=-1
read -s -p "Enter local Redis password: " localpw
echo ""
read -s -p "Enter managed Redis password: " managedpw
Next, add the following while loop. This checks whether the cursor variable defined previously is not equal to 0. If so, it will perform every command within the loop until it reaches done:
. . .
cursor=-1
read -s -p "Enter local Redis password: " localpw
echo ""
read -s -p "Enter managed Redis password: " managedpw
while [[ "$cursor" -ne 0 ]]; do
done
Because cursor was initialized to -1, this while loop will always run at least once.
Within the while loop, add the following if/then statement. This one checks whether the cursor variable is equal to -1 and, if so, sets it to equal 0:
. . .
while [[ "$cursor" -ne 0 ]]; do
if [[ "$cursor" -eq -1 ]]
then
cursor=0
fi
done
Redis’s scan command allows for a few options, but only requires one argument: a cursor value. If you imagine a Redis database as a long list of randomly assorted keys, a cursor value of 0 tells scan to start iterating from the very first key in the list. Every time scan runs, it will return a new cursor as the first line of its output, followed by a limited number of individual keys on each subsequent line, usually between ten and twenty.
To iterate through every key in a database you must continue calling scan, each time replacing the cursor with the updated cursor from the previous call’s output, until it returns a cursor of 0. This indicates that scan has completed a full iteration.
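To make this concrete, here is roughly what a manual iteration might look like from the redis-cli prompt. The cursor values and key names are only illustrative, and the number of keys returned per call will vary:
- scan 0
Output
1) "56"
2)  1) "string13"
    2) "hash1"
    3) "string2"
- scan 56
Output
1) "0"
2)  1) "list1"
    2) "string7"
Because the second call returned a cursor of 0, the iteration is complete.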
This is why we initialized cursor to -1 only to immediately reset it to 0 with this addition: in order to perform a complete iteration, this script will need to call the scan command multiple times, using 0 as the initial cursor and then, on each subsequent call, the cursor returned by the previous iteration. The loop should only stop when the last scan call returns a cursor of 0.
Note that scan does not return negative cursor values, so initializing cursor to -1 will not cause any problems.
Following the if/then statement, but still before done, add a line that defines a new variable, reply, and sets its value to the output of a scan command executed with the redis-cli client.
This redis-cli command includes the -a option followed by the localpw variable. Assuming the user enters the correct password for the local Redis instance when prompted, the -a flag will use that password here to authenticate. It also includes the -n flag, which tells redis-cli which of Redis’s logical databases to connect to, as defined by the sourcedb variable:
. . .
while [[ "$cursor" -ne 0 ]]; do
if [[ "$cursor" -eq -1 ]]
then
cursor=0
fi
reply=$(redis-cli -a "$localpw" -n "$sourcedb" SCAN "$cursor")
done
Next, add another if/then statement. This one tests whether the reply variable is a null value and, if not, executes all of the statements between then and fi:
. . .
while [[ "$cursor" -ne 0 ]]; do
if [[ "$cursor" -eq -1 ]]
then
cursor=0
fi
reply=$(redis-cli -a "$localpw" -n "$sourcedb" SCAN "$cursor")
if [ -n "$reply" ]; then
fi
done
Within this if/then statement, add the following lines. The first displays the contents held in the reply variable and then pipes them as input into a tail command.
You could pass the result of echo "$reply" directly into the following while loop, but this would also pipe the first line which, as mentioned previously, holds the updated cursor value. This would cause Redis to attempt to migrate a nonexistent key, which could cause an error, or at least unnecessary extra work for your server.
To get around this, we pipe the reply contents into the tail command with the -n +2 argument. This tells tail to start reading from the second line before piping each line into the while loop.
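If tail’s -n +2 syntax is unfamiliar, you can see its effect with a quick standalone test. This is only a demonstration:
- printf 'cursor\nkey1\nkey2\n' | tail -n +2
Output
key1
key2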
This while loop reads each line of reply one by one. Every time it reads a line, it assigns that line’s contents to a new variable, key. The loop will execute the commands between the do and done statements until it has read through every line.
Note the inclusion of IFS=. IFS is short for Internal Field Separator, a variable that defines the characters Bash uses to split input into fields. By setting it to an empty string here, you ensure that read treats each entire line as a single value and assigns it, whitespace and all, to the key variable:
. . .
while [[ "$cursor" -ne 0 ]]; do
. . .
if [ -n "$reply" ]; then
echo "$reply" | tail -n +2 |
while IFS= read -r key; do
done
fi
done
Within this while loop, add the following line. This is the command that performs the actual migration:
. . .
while [[ "$cursor" -ne 0 ]]; do
. . .
if [ -n "$reply" ]; then
echo "$reply" | tail -n +2 |
while IFS= read -r key; do
redis-cli -a "$localpw" -n "$sourcedb" migrate localhost 8000 "$key" "$targetdb" 1000 copy auth "$managedpw" >/dev/null 2>&1
done
fi
done
This command invokes the redis-cli client program and authenticates to the local Redis instance with the localpw variable before connecting to the logical database entered by the user (represented by the sourcedb variable). It then calls Redis’s migrate command, which requires you to pass the IP address or hostname of the target Redis instance’s server as well as the port on which it’s running. Here, those are localhost and 8000, since the connection to the Managed Database passes through the local stunnel tunnel you set up in the prerequisites. Then it passes the name of the key to migrate (represented by the key variable) and the database on the target Redis instance where the key should be migrated to (represented by targetdb).
Next is a number representing a timeout. This timeout is the maximum amount of idle communication time between the two machines. Note that this isn’t a time limit for the operation; it just means that the operation should always make some level of progress within the defined timeout. Both the target database number and timeout arguments are required for every migrate command.
Following the timeout is the optional copy flag. By default, migrate will delete each key from the source database after transferring it to the target; by including this option, you’re instructing the migrate command to merely copy the keys so they will persist on the source.
After copy comes the auth flag, followed by your managed Redis instance’s password. This isn’t necessary if you’re migrating data to an instance that doesn’t require authentication, but it is necessary when you’re migrating data to one managed by DigitalOcean.
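For reference, outside of the script a single key could be copied by hand with a command along these lines. The key name and passwords are placeholders, and localhost and 8000 again refer to the stunnel tunnel in front of your Managed Database:
- redis-cli -a your_local_redis_password -n 0 migrate localhost 8000 string1 0 1000 copy auth your_managed_redis_password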
Lastly, this line in the script includes >/dev/null and 2>&1. >/dev/null redirects the command’s standard output to the /dev/null file, a null device which immediately discards any data written to it. 2>&1 redirects the command’s standard error to standard output, which means that, thanks to the >/dev/null right before it, any potential errors are also immediately discarded.
Finally, add the following line after the if/then statement, but before the outer while loop’s done statement. This line updates the value held by the cursor variable to the cursor value held in the reply variable. It does this by evaluating reply with the expr utility and searching for the first value that matches the regular expression '\([0-9]*[0-9]\)'. Because of how the scan command’s output is formatted, this regular expression will always match the correct cursor value:
. . .
while [[ "$cursor" -ne 0 ]]; do
. . .
if [ -n "$reply" ]; then
echo "$reply" | tail -n +2 |
while IFS= read -r key; do
redis-cli -a "$localpw" -n "$sourcedb" migrate localhost 8000 "$key" "$targetdb" 1000 copy auth "$managedpw" >/dev/null 2>&1
done
fi
cursor=$(expr "$reply" : '\([0-9]*[0-9]\)')
done
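To see why this expression picks out the cursor, you can test expr against a sample reply by hand. The reply text here is illustrative:
- reply=$'17\nstring3\nhash1'
- expr "$reply" : '\([0-9]*[0-9]\)'
Output
17
Because the cursor is always the first line of scan’s output, and expr anchors its match at the start of the string, only the leading digits are returned.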
All together, the script should look like this:
#!/bin/bash
set -euo pipefail
if [ "$#" -lt 2 ]
then
echo "Migrate Redis keys to a DigitalOcean Managed Database"
echo "Usage: $0 [source database] [target database]"
exit 1
fi
sourcedb=${1}
targetdb=${2}
cursor=-1
read -s -p "Enter local Redis password: " localpw
echo ""
read -s -p "Enter managed Redis password: " managedpw
while [[ "$cursor" -ne 0 ]]; do
if [[ "$cursor" -eq -1 ]]
then
cursor=0
fi
reply=$(redis-cli -a "$localpw" -n "$sourcedb" SCAN "$cursor")
if [ -n "$reply" ]; then
echo "$reply" | tail -n +2 |
while IFS= read -r key; do
redis-cli -a "$localpw" -n "$sourcedb" migrate localhost 8000 "$key" "$targetdb" 1000 copy auth "$managedpw" >/dev/null 2>&1
done
fi
cursor=$(expr "$reply" : '\([0-9]*[0-9]\)')
done
Double check that you’ve added each line correctly, and then save and close the file. If you used nano to create the script, do so by pressing CTRL + X, Y, then ENTER.
To wrap up the creation of the script, mark it as executable with chmod:
- sudo chmod +x redis-migrate.sh
With that, you’re ready to use the script to migrate your Redis data to a managed Redis instance.
To migrate your Redis data with the script you created in the previous step, you can invoke it like this:
- ./redis-migrate.sh source_database target_database
Assuming you followed this tutorial’s optional first step and loaded your local Redis instance’s default database (0) with data, and that you want to migrate this data to database 0 on your managed instance, you’d use the following command:
- ./redis-migrate.sh 0 0
You’ll receive the first prompt for your local Redis instance’s authentication password:
Output
Enter local Redis password:
Type your local Redis password and then press ENTER. If you haven’t configured your local Redis instance to require a password, just press ENTER to leave the localpw variable blank.
You’ll then be prompted to enter your Managed Redis Database’s password:
Output
Enter local Redis password:
Enter managed Redis password:
Note: If you don’t have your Managed Redis Database’s password on hand, you can find it by first navigating to the DigitalOcean Control Panel. From there, click on Databases in the left-hand sidebar menu and then click on the name of the Redis instance to which you want to migrate the data. Scroll down to the Connection Details section where you’ll find a field labeled password. Click on the show button to reveal the password, then copy and paste it into the prompt in order to authenticate.
If you entered the correct database numbers and valid passwords, the script will migrate every key in your database and close without any further output. To test whether the migration was successful, connect to your Managed Redis Database:
- redis-cli -h localhost -p 8000 -a managed_redis_password
If you migrated data to any logical database other than the default, connect to that database with the select command:
- select target_database
Run a scan command to see some of the keys now held there:
- scan 0
If you completed Step 1 of this tutorial and added the example data to your source database, you will see output like this:
Output
1) "10"
2)  1) "set1"
    2) "string6"
    3) "string11"
    4) "string3"
    5) "string5"
    6) "string10"
    7) "string14"
    8) "string18"
    9) "string2"
   10) "string4"
Lastly, run a ttl command on any key which you’ve set to expire in order to confirm that it is still volatile:
- ttl string2
Output
(integer) 3944
This output shows that even though you migrated the key to your Managed Database, it is still set to expire based on the expire command you ran previously.
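As a further check, you can compare the number of keys held in each database with Redis’s dbsize command; the source and target counts should match, barring any volatile keys that expired in the meantime. For example, from the redis-cli prompt still connected to your Managed Database, with the count shown here being illustrative:
- dbsize
Output
(integer) 24
Running the same command against the corresponding logical database on your source instance should return the same number.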
Once you’ve confirmed that all the keys on your source Redis database were exported to your target successfully, you can close your connection to the Managed Database. Also, if you have clients writing data to the source Redis instance and you’ve already configured them to send their writes to the target, you can now configure them to stop sending data to the source. Lastly, if any of the other logical databases on your local Redis instance are holding data, you’ll need to run the script again for each one, making sure to include the appropriate source and target databases as arguments, as in the example below.
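If you do need to repeat the migration for several logical databases, a small wrapper loop like this hypothetical one can save some typing. It maps each source database to the database with the same number on the target; adjust the list of numbers to match the databases you actually use, and note that the script will prompt you for both passwords on each pass:
- for db in 0 1 2; do ./redis-migrate.sh "$db" "$db"; done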
By completing this tutorial, you will have moved data from a self-managed Redis data store to a Redis instance managed by DigitalOcean. The Bash script used for this process may not be ideal for every Redis use case, but it works well for the use case described in this tutorial and can be optimized for other use cases as well.
Now that you’re using a DigitalOcean Managed Redis Database to store your data, you could measure its performance by running some benchmarking tests. Also, if you’re new to working with Redis, you could check out our series on How To Manage a Redis Database.