Storage migration is a day-to-day activity for every Linux/Unix system administrator. In an IT production environment, we have to move data from one LUN (logical unit number) to another safely, with zero or minimal downtime and without corrupting the data.
In this tutorial, we will show you how you can safely and easily do online storage migration on your Linux/UNIX server, whether it is a physical or virtual machine.
Generally, in a Unix environment, storage migration is mostly done on physical servers, where new LUNs are assigned from the SAN and the old SAN LUNs are handed back. Storage migration involves assigning new storage LUNs, scanning them, moving data from the old storage to the new, and then removing the old LUNs and handing them back. 🙂
Steps to do Online Storage Migration in Linux/UNIX
1. The first step for storage migration is the availability of the new LUNs to which you want to migrate your data from the old storage device. Once the storage is assigned by the SAN team, you can do an online scan. Please refer to the article below for reference. You can use the same article for physical servers as well.
Recommended Article: Adding Storage to VM Host without Reboot
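For quick reference, on most Linux systems an online SCSI rescan boils down to something like the sketch below. The host numbers here are only examples; check /sys/class/scsi_host on your own server for the actual HBA hosts.
# echo "- - -" > /sys/class/scsi_host/host0/scan
# echo "- - -" > /sys/class/scsi_host/host1/scan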
On physical servers, after following the instructions in the above article, just run the multipath command and you will be able to see your new storage LUNs.
# multipath
# multipath -ll
For a particular multipath device, you can run the command below.
[root@local ~]# multipath -ll mpath0
mpath0 (000000000000192601761530123456789) dm-6 EMC,SYMMETRIX
[size=52G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 3:0:0:0 sda 8:0  [active][undef]
 \_ 3:0:1:0 sdb 8:16 [active][undef]
 \_ 5:0:0:0 sdc 8:32 [active][undef]
 \_ 5:0:1:0 sdd 8:48 [active][undef]
2. Once you have scanned your new LUNs, you can create physical volumes (PVs) on them and then either extend your existing volume group (VG) or create a new one. Use the commands below to do this.
# pvcreate /dev/mapper/mpath0
# pvcreate /dev/mapper/mpath1
or
# pvcreate /dev/mapper/mpath{0..1}
# vgextend vg1 /dev/mapper/mpath0
# vgextend vg1 /dev/mapper/mpath1
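You can verify that the PVs were created and the VG was extended using the standard LVM reporting commands:
# pvs
# vgs vg1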
Recommended Article: 5 Basic Storage Certifications for IT Administrators
3. Once your VG (or VGs) is extended, you can migrate your data to the new physical volumes that you have just created on the new storage LUNs. You can use the pvmove command for this. You can do a one-to-one move from one physical volume to another, or you can move the data of one logical volume at a time.
For moving data (physical extents) from one PV to another, use the command syntax below.
Syntax:
# pvmove old-pv new-pv
Example:
# pvmove /dev/sdc1 /dev/sdd1
# pvmove /dev/mapper/mpath0 /dev/mapper/mpath2
You can also move only the extents belonging to a single logical volume. Use the command syntax below for that.
Syntax:
# pvmove -v -n lv-name old-pv new-pv
Example:
# pvmove -v -n lv1 /dev/sdb1 /dev/sdc1
# pvmove -v -n lv2 /dev/mapper/mpath0 /dev/mapper/mpath2
Note: Please do a full PV-to-PV move only when the production impact is low. Otherwise, we suggest moving data logical volume by logical volume using the “-n” switch.
While performing the above storage migration, keep in mind that your load average and CPU utilization will go up, so be prepared for it.
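If you want to keep an eye on progress, one approach (just a sketch, reusing the example devices and VG from above) is to start pvmove in the background and watch the copy percentage reported by lvs:
# pvmove -b /dev/mapper/mpath0 /dev/mapper/mpath2
# lvs -a -o +devices vg1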
4. Once all your data has been moved from the old physical volumes to the new ones, you can remove the old PVs from your volume group using the vgreduce command and then wipe the PV labels with pvremove.
Syntax:
# vgreduce vg-name pv-name
# pvremove pv-name
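For example, with the volume group and multipath device used earlier in this tutorial:
# vgreduce vg1 /dev/mapper/mpath0
# pvremove /dev/mapper/mpath0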
Refer to our article on safely removing a VMDK from a Linux server in a virtual environment. For a physical server, you can follow the steps below.
If you have a multipath device, then after following the above steps you will first have to remove its mapping. Use the command below for each multipath device.
# multipath -f <multipath-device>
Example:
# multipath -f mpath0
Now flush any outstanding I/O on all paths to the device:
# blockdev --flushbufs <device>
Example:
# blockdev --flushbufs /dev/sda
# blockdev --flushbufs /dev/sdb
# blockdev --flushbufs /dev/sdc
# blockdev --flushbufs /dev/sdd
Now you will have to remove the block devices that are mapped to the multipath device, as shown below.
# echo 1 > /sys/block/sdX/device/delete
Example:
# echo 1 > /sys/block/sda/device/delete
# echo 1 > /sys/block/sdb/device/delete
# echo 1 > /sys/block/sdc/device/delete
# echo 1 > /sys/block/sdd/device/delete
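Before handing the LUNs back, it is a good idea to confirm that the paths and the multipath mapping are really gone:
# lsblk
# multipath -ll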
After you have removed your block devices, you can hand the LUNs over to your storage team and ask them to reclaim them.
This was a simple tutorial on how to do online storage migration on Linux/Unix servers. Do let us know if you face any issues with your storage migration activity. You can mail us, like us on Facebook, and comment.
If you like the article, do not forget to subscribe to our free newsletter for more awesome content.
Thank you for this tutorial. I researched this topic in Red Hat’s support pages, and while they describe the same steps that you indicated, they do not show any examples. (https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/removing_devices.html).
Your examples helped clarify each step. Using your tutorial as a guide, I accomplished this task without any issues.
Hi J.R,
Glad that our article helped you. Keep visiting for more tutorials.
The only thing missing is: do we have any command/procedure to confirm whether the data has been completely migrated to the new disk?
That way we can proceed with the next steps of removing the PV from the VG, and so on.
Hi Abhay,
Good question. You can grep for pvmove in the “ps” command output. If it appears in the output, it means pvmove is still working.
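For example:
# ps -ef | grep pvmove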
Also, with the “-v” option it shows how much data has been moved as a percentage. Let us know if you have any other doubts.
We recently did a storage migration and users started complaining about performance issues.
Will there be an impact on the performance of the servers after migration?
Is it required to restart the server once a storage migration is completed?
Please clarify.
P.S. There were no such performance issues reported before doing this online migration task.
Not sure how you are measuring the performance, but this should not cause any performance issues. No reboot is required.
We hope you are using good SSDs for storage.