Retire an old disk array and move data to a new one

1) Update the wiki page BackRaidView
Access the wiki page BackRaidView to decide which are the source and destination disk arrays (using the warranty and size info as a guideline). Update the destination disk array's entry with the server that is going to use it. Make sure the destination disk array is already properly configured (see the wiki note on this topic).

2) Log in to lxadm with X forwarding
$ ssh -X lxadm

3) Log in to the headnode with X forwarding
for building 513:
ssh -X lxc1sj43
for building 613:
ssh -X lxc1t907

4) Run firefox from the headnode (after closing any open local firefox instances)
$ firefox

5) Go to the fiber channel switch GUI to do the zoning:
for building 513 Tape (not needed here but specified for completeness):
http://tsmfcs500
for building 513 Disk:
http://tsmfcs510
for building 613 Tape (not needed here but specified for completeness):
http://tsmfcs600
for building 613 Disk:
http://tsmfcs610

6) Assign the new disk array to the proper TSM server then test and save the zoning

7) Log in as root on the TSM server and make the new disk array visible
Check which devices are currently seen:
fdisk -l | grep Disk
If the new disk array is not there, you need to find out which SCSI host it is connected to; run:
less /proc/scsi/scsi
Find which host has the connection to the Direct-Access device matching the disk array's model, as in the following example:
Host: scsi3 Channel: 00 Id: 00 Lun: 01
  Vendor: Promise  Model: VTrak E310f      Rev: 0333
  Type:   Direct-Access                    ANSI SCSI revision: 05
In this example the host is no. 3, so we wake it up by issuing a LIP (loop initialization procedure):
echo 1 > /sys/class/fc_host/host3/issue_lip
Check which devices are currently seen with fdisk. If the new disk array is still not there, you need to run:
echo "- - -" > /sys/class/scsi_host/host3/scan
Check which devices are currently seen with fdisk. If the new disk array is still not there, you need to reboot the system. Be sure to plan a downtime with the users and to gracefully halt the TSM server(s) on the machine first:
tsm> halt
$ reboot
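Picking the host number out of /proc/scsi/scsi by eye is error-prone; a small sketch that scripts the lookup (the function name find_host is hypothetical, and the model string is taken from the example record above):

```shell
# Sketch: extract the SCSI host number for a given array model from
# a /proc/scsi/scsi listing. Adapt the model string to your array.
find_host() {
    # $1 = model substring; stdin = contents of /proc/scsi/scsi.
    # The "Host: scsiN ..." line immediately precedes the matching
    # "Vendor: ... Model: ..." line, hence grep -B1.
    grep -B1 "Model: $1" | sed -n 's/^Host: scsi\([0-9]*\).*/\1/p'
}
# /proc/scsi/scsi may be absent on kernels built without SCSI procfs.
[ -r /proc/scsi/scsi ] && find_host "VTrak E310f" < /proc/scsi/scsi || true
```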

8) Once the system sees the new disk array device correctly, we format it using the appropriate labels (check /etc/fstab for the local convention)
First we need to find out the stride size; the following online tool can help:
http://busybox.net/~aldot/mkfs_stride.html
Then we do the actual formatting, as in the following example:
mkfs -t ext3 -E stride=64 -T largefile4 -L /tsm51/b5r50d0p0 /dev/sdp
mkfs -t ext3 -E stride=64 -T largefile4 -L /tsm51/b5r50d1p0 /dev/sdq
mkfs -t ext3 -E stride=64 -T largefile4 -L /tsm51/b5r50d2p0 /dev/sdr
mkfs -t ext3 -E stride=64 -T largefile4 -L /tsm51/b5r50d3p0 /dev/sds
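The tool above essentially divides the RAID chunk (stripe-unit) size by the filesystem block size; the calculation can also be done by hand. A quick sketch, assuming a 256 KiB chunk and 4 KiB ext3 blocks (example values chosen to reproduce the stride=64 used above; substitute your array's actual chunk size):

```shell
# Assumed example values: 256 KiB RAID chunk, 4 KiB ext3 block size.
chunk_kib=256   # stripe-unit size from the array configuration, in KiB
block_kib=4     # ext3 block size, in KiB
stride=$((chunk_kib / block_kib))
echo "stride=${stride}"
```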

9) Mount the new disk array
First we need to modify the /etc/fstab file by adding the appropriate lines, as in:
# TSM staging areas on tsmstor550
LABEL=/tsm51/b5r50d0p0  /tsm51/stg5000          ext3    defaults        1 0
LABEL=/tsm51/b5r50d1p0  /tsm51/stg5010          ext3    defaults        1 0
LABEL=/tsm51/b5r50d2p0  /tsm51/stg5020          ext3    defaults        1 0
LABEL=/tsm51/b5r50d3p0  /tsm51/stg5030          ext3    defaults        1 0
Then we test the configuration by mounting all the configured devices:
mount -a
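Since the four entries follow a regular pattern, they can also be generated rather than typed (same labels and mount points as the example above; review the output before appending it to /etc/fstab):

```shell
# Print the four fstab entries from the example above
# (b5r50 array on server tsm51, devices d0..d3).
for d in 0 1 2 3
do
    printf 'LABEL=/tsm51/b5r50d%sp0\t/tsm51/stg50%s0\t\text3\tdefaults\t1 0\n' "$d" "$d"
done
```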

10) Change the owner of the mount points of the disk arrays to the correct user and group (example: tsm51:tsm)
chown -R tsm51:tsm /tsm51/stg50*

11) Create the new TSM staging volumes using a script that might look like:
pass=XXX
dsmadmc="/usr/bin/dsmadmc -id=dkruse -pa=${pass} -SERVER=tsm51"
for stg in 5000 5010 5020 5030
do
 for i in `seq -f %03g 1 100`
 do
  ${dsmadmc} def vol backuppool /tsm51/stg${stg}/stg${stg}${i}.dsm f=20000 wait=yes
  echo ${i}
 done
 for i in `seq -f %03g 101 150`
 do
  ${dsmadmc} def vol archivepool /tsm51/stg${stg}/stg${stg}${i}.dsm f=20000 wait=yes
  echo ${i}
 done
 for i in `seq -f %03g 151 154`
 do
  ${dsmadmc} def vol dirmcpool /tsm51/stg${stg}/dir${stg}${i}.dsm f=5000 wait=yes
  echo ${i}
 done
done

12) Set all the old volumes (of the disk array being retired) to read-only in TSM, move the data out of them, and finally delete them
This should also be done using a script, as in:
pass=XXX
for vol in /tsm51/stg1220/dir12201.dsm /tsm51/stg1220/stg12201.dsm /tsm51/stg1220/stg12202.dsm /tsm51/stg1220/stg12203.dsm 
do
 /usr/bin/dsmadmc -id=dkruse -pa=${pass} -SERVER=tsm51 UPDate Volume ${vol} ACCess=READOnly
done
for vol in /tsm51/stg1220/dir12201.dsm /tsm51/stg1220/stg12201.dsm /tsm51/stg1220/stg12202.dsm /tsm51/stg1220/stg12203.dsm 
do
 /usr/bin/dsmadmc -id=dkruse -pa=${pass} -SERVER=tsm51 MOVe Data ${vol} Wait=Yes
 /usr/bin/dsmadmc -id=dkruse -pa=${pass} -SERVER=tsm51 DELete Volume ${vol} Wait=Yes
done

13) Unmount the old disk arrays, remove their entries from /etc/fstab and finally dezone them. DONE!

-- DanieleFrancescoKruse - 07-Feb-2011

Topic revision: r3 - 2011-12-02 - DanieleFrancescoKruse
 