I've been working with Ceph for about two years now and have deployed three major Ceph clusters so far (from Luminous to Mimic) on Ubuntu 14.04 and Ubuntu 16.04. Two of them were all-SSD clusters and one used HDDs (7200 rpm) for capacity, and on these clusters I ran into the following situation.
In an RBD pool I have some really big images - more than 5 TB of disk space used - and I was confronted with the situation where I needed to move the data from one cluster to another. You can create a snapshot, download it locally, upload it to the second Ceph cluster and import the RBD snapshot there, or use rsync or whatever method you prefer, but... there is another way, which involves a Linux pipe 😊. This is what I use: SSH, a Linux pipe, and an RBD snapshot on the cluster.
1. First, create the snapshot on Ceph Cluster-A:
rbd snap create rbd/<<your-rbd-image-name>>@<<your-rbd-image-name>>.snapshot
Now make sure the snapshot was actually created, with the following command:
rbd ls -l | grep <<your-rbd-image-name>>
2. Now the magic happens. I will use a Linux pipe and SSH to export my Ceph RBD snapshot image from Cluster-A and import it into Cluster-B, directly:
rbd export rbd/<<your-rbd-image-name>>@<<your-rbd-image-name>>.snapshot - | ssh <<your-username>>@<<IP-of-cluster-B>> "sudo rbd import --image-format 2 --image-feature layering - rbd/<<your-rbd-image-name>>"
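With images this large it's nice to see how far along the transfer is. A minimal variant of the same pipe, assuming `pv` is installed on the exporting node (it is not part of Ceph; on Ubuntu it comes from the `pv` package), looks like this:

```shell
# Same export/import pipe as above, with pv in the middle showing
# throughput and total bytes transferred. Placeholders as before.
rbd export rbd/<<your-rbd-image-name>>@<<your-rbd-image-name>>.snapshot - \
  | pv \
  | ssh <<your-username>>@<<IP-of-cluster-B>> \
      "sudo rbd import --image-format 2 --image-feature layering - rbd/<<your-rbd-image-name>>"
```

Since `pv` just passes the stream through stdin to stdout, it doesn't change the data, only reports on it.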
IMPORTANT - you should be able to connect from one cluster to the other and have sudo rights there.
Now, on Ceph Cluster-B, run the following command to watch the import happening:
rbd ls -l | grep <<your-rbd-image-name>>
You should now see your RBD image being imported...
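Once the import has finished and you've confirmed the image looks right on Cluster-B, you can optionally clean up the migration snapshot on Cluster-A. A short sketch, using the same placeholders as above:

```shell
# On Cluster-A: remove the snapshot created for the migration,
# once it is no longer needed.
rbd snap rm rbd/<<your-rbd-image-name>>@<<your-rbd-image-name>>.snapshot

# Verify the snapshot is gone (only the base image should be listed now).
rbd ls -l | grep <<your-rbd-image-name>>
```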
That's it. I hope I did not forget anything... If I did, drop me a line on Twitter. Thank you.