There are a lot of SAN multipathing solutions on Linux at the moment. Two of them are discussed in this blog. The first one is device mapper multipathing, a failover and load balancing solution with a lot of configuration options. The second one (mdadm multipathing) is just a failover solution where a failed path has to be re-enabled manually. The advantage of mdadm multipathing is that it is very easy to configure.
Before using a multipathing solution in a production environment on Linux, it is also important to determine whether the chosen solution is supported with the hardware in use. For example, HP does not support the Device Mapper Multipathing solution on their servers yet.
Device Mapper Multipathing
Procedure for configuring the system with DM-Multipath:
- Install the device-mapper-multipath RPM
- Edit the multipath.conf configuration file:
- comment out the default blacklist
- change any of the existing defaults as needed
- Start the multipath daemons
- Create the multipath device with the multipath command
Install Device Mapper Multipath
# rpm -ivh device-mapper-multipath-0.4.7-8.el5.i386.rpm
warning: device-mapper-multipath-0.4.7-8.el5.i386.rpm: Header V3 DSA signature:
Preparing...                ########################################### [100%]
   1:device-mapper-multipath########################################### [100%]
Initial Configuration
Set user_friendly_names so that the devices are created as /dev/mapper/mpath[n], and comment out the default blacklist.
# vim /etc/multipath.conf

#blacklist {
#        devnode "*"
#}

defaults {
        user_friendly_names yes
        path_grouping_policy multibus
}
Load the needed kernel module, start the multipathd service, and enable it at boot.
# modprobe dm-multipath
# /etc/init.d/multipathd start
# chkconfig multipathd on
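To verify that the module is loaded and the daemon is running, a quick check (a minimal sketch):

# lsmod | grep dm_multipath
# /etc/init.d/multipathd status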
Print out the multipathed device.
# multipath -v2
or
# multipath -v3
Configuration
Configure device type in config file.
# cat /sys/block/sda/device/vendor
HP
# cat /sys/block/sda/device/model
HSV200
# vim /etc/multipath.conf

devices {
        device {
                vendor "HP"
                product "HSV200"
                path_grouping_policy multibus
                no_path_retry "5"
        }
}
Configure multipath device in config file.
# cat /var/lib/multipath/bindings
# Format:
# alias wwid
#
mpath0 3600508b400070aac0000900000080000

# vim /etc/multipath.conf

multipaths {
        multipath {
                wwid 3600508b400070aac0000900000080000
                alias mpath0
                path_grouping_policy multibus
                path_checker readsector0
                path_selector "round-robin 0"
                failback "5"
                rr_weight priorities
                no_path_retry "5"
        }
}
Put devices that should not be multipathed on the blacklist (e.g. local RAID devices, volume groups).
# vim /etc/multipath.conf

devnode_blacklist {
        devnode "^cciss!c[0-9]d[0-9]*"
        devnode "^vg*"
}
Show Configured Multipaths.
# dmsetup ls --target=multipath
mpath0  (253, 1)

# multipath -ll
mpath0 (3600508b400070aac0000900000080000) dm-1 HP,HSV200
[size=10G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=4][active]
 \_ 0:0:0:1 sda 8:0   [active][ready]
 \_ 0:0:1:1 sdb 8:16  [active][ready]
 \_ 1:0:0:1 sdc 8:32  [active][ready]
 \_ 1:0:1:1 sdd 8:48  [active][ready]
Format and Mount the Device
fdisk cannot be used directly on /dev/mapper/[dev_name] devices. Run fdisk on the underlying disk instead, and then execute the following kpartx command so that device-mapper multipath maps the partition and creates a /dev/mapper/mpath[n]p[m] device for it.
# fdisk /dev/sda
# kpartx -a /dev/mapper/mpath0
# ls /dev/mapper/*
mpath0  mpath0p1
# mkfs.ext3 /dev/mapper/mpath0p1
# mount /dev/mapper/mpath0p1 /mnt/san
After that /dev/mapper/mpath0p1 is the first partition on the multipathed device.
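To mount the filesystem automatically at boot, an entry can be added to /etc/fstab (a minimal sketch, assuming the mount point /mnt/san used above):

# vim /etc/fstab
/dev/mapper/mpath0p1    /mnt/san    ext3    defaults    0 0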
Multipathing with mdadm on Linux
The md multipathing solution is only a failover solution, which means that only one path is used at a time and no load balancing is done.
Start the MD Multipathing Service
# chkconfig mdmpd on
# /etc/init.d/mdmpd start
On the first Node (if it is a shared device)
Make Label on Disk
# fdisk /dev/sdt

Disk /dev/sdt: 42.9 GB, 42949672960 bytes
64 heads, 32 sectors/track, 40960 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdt1               1       40960    41943024   fd  Linux raid autodetect

# partprobe
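For reference, the interactive fdisk steps to create such a partition are roughly the following (a sketch; the exact prompts vary between fdisk versions):

n   (new primary partition, number 1, accept the default start and end cylinders)
t   (change the partition type)
fd  (Linux raid autodetect)
w   (write the partition table and exit)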
Bind multiple paths together
# mdadm --create /dev/md4 --level=multipath --raid-devices=4 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1
Get UUID
# mdadm --detail /dev/md4
UUID : b13031b5:64c5868f:1e68b273:cb36724e
Set md configuration in config file
# vim /etc/mdadm.conf

# Multiple Paths to RAC SAN
DEVICE /dev/sd[qrst]1
ARRAY /dev/md4 uuid=b13031b5:64c5868f:1e68b273:cb36724e

# cat /proc/mdstat
On the second Node (Copy the /etc/mdadm.conf from the first node)
# mdadm -As
# cat /proc/mdstat
Restore a failed path
# mdadm /dev/md1 -f /dev/sdt1 -r /dev/sdt1 -a /dev/sdt1
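The one-liner above chains three mdadm manage operations: mark the path as faulty, remove it from the array, and add it back once the path is reachable again. Using the same device names as above, the steps can also be run separately:

# mdadm /dev/md1 -f /dev/sdt1    (mark the failed path as faulty)
# mdadm /dev/md1 -r /dev/sdt1    (remove it from the array)
# mdadm /dev/md1 -a /dev/sdt1    (re-add it so it becomes an active path again)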