Our naming convention for LUNs includes the pool ID, LUN number, storage processor, last four digits of the array's serial number, server name, filesystem/drive letter, and size (in GB). Having all of this information in the LUN name makes reporting on LUNs and identifying them on a server very easy. This is what our LUN names look like:
P1_LUN100_SPA_0000_servername_filesystem_150G
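Since every field is separated by an underscore, the names are also trivial to split apart for reporting. For example, a quick awk one-liner (nothing VPlex-specific, just splitting on underscores) breaks a name into its pieces:
echo "P1_LUN100_SPA_0000_servername_filesystem_150G" | awk -F_ '{print "pool="$1, "lun="$2, "sp="$3, "serial="$4, "server="$5, "fs="$6, "size="$7}'
pool=P1 lun=LUN100 sp=SPA serial=0000 server=servername fs=filesystem size=150G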
Typically, when we present new LUNs to our AIX administration team for a server build, they assign the LUNs to specific volume groups based on the LUN names. The output of 'powermt display dev=hdiskpower#' has always included the LUN name, and with it the intended volume group, making it easy for our admins to identify each LUN's purpose. Now that we are presenting LUNs through our VPlex, powermt display on the server shows the LUN's UID rather than its name. Below is a sample of what is displayed.
root@VIOserver1:/ # powermt display dev=all
Pseudo name=hdiskpower0
VPLEX ID=FNM00141800023
Logical device ID=6000144000000010704759ADDF2487A6 (this would usually be displayed as a LUN name)
state=alive; policy=ADaptive; queued-IOs=0
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path              I/O Paths    Interf.    Mode    State   Q-IOs Errors
==============================================================================
   1 fscsi1               hdisk8       CL1-0B     active  alive       0      0
   1 fscsi1               hdisk6       CL1-0F     active  alive       0      0
   0 fscsi0               hdisk4       CL1-0D     active  alive       0      0
   0 fscsi0               hdisk2       CL1-07     active  alive       0      0
Pseudo name=hdiskpower1
VPLEX ID=FNM00141800023
Logical device ID=6000144000000010704759ADDF2487A1 (this would usually be displayed as a LUN name)
state=alive; policy=ADaptive; queued-IOs=0
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path              I/O Paths    Interf.    Mode    State   Q-IOs Errors
==============================================================================
   1 fscsi1               hdisk9       CL1-0B     active  alive       0      0
   1 fscsi1               hdisk7       CL1-0F     active  alive       0      0
   0 fscsi0               hdisk5       CL1-0D     active  alive       0      0
   0 fscsi0               hdisk3       CL1-07     active  alive       0      0
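If all you need is the device-to-UID pairing, you can trim the powermt output down to just those lines with egrep (plain AIX tooling, nothing VPlex-specific):
root@VIOserver1:/ # powermt display dev=all | egrep "Pseudo name|Logical device ID"
Pseudo name=hdiskpower0
Logical device ID=6000144000000010704759ADDF2487A6
Pseudo name=hdiskpower1
Logical device ID=6000144000000010704759ADDF2487A1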
To easily match the UIDs up with the LUN names on the server, an extra step needs to be taken in the VPlex CLI. Log in to the VPlex with a terminal emulator, and once you're logged in, run the 'vplexcli' command; it drops you into a shell where additional commands can be entered.
login as: admin
Using keyboard-interactive authentication.
Password:
Last login: Fri Sep 19 13:35:28 2014 from 10.16.4.128
admin@service:~> vplexcli
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Enter User Name: admin
Password:
VPlexcli:/>
Once you're in, run the ls -t command with the additional options listed below, substituting STORAGE_VIEW_NAME with the actual name of the storage view you want a list of LUNs from.
VPlexcli:/> ls -t /clusters/cluster-1/exports/storage-views/STORAGE_VIEW_NAME::virtual-volumes
The output looks like this:
/clusters/cluster-1/exports/storage-views/VIOServer1:
Name             Value
---------------  --------------------------------------------------------------
virtual-volumes  [(0,P1_LUN411_7872_SPB_VIOServer1_VIO_10G,VPD83T3:6000144000000010704759addf2487a6,10G),
                 (1,P0_LUN111_7872_SPA_VIOServer1_VIO_10G,VPD83T3:6000144000000010704759addf2487a1,10G)]
Now you can easily see which disk UID is tied to which LUN name. One thing to watch for: the VPlex lists the UID in lowercase with a VPD83T3: prefix, while powermt shows the same value in uppercase, so ignore case and the prefix when comparing the two.
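To take the eyeballing out of it entirely, the two outputs can be joined with a small script. The sketch below is just that, a sketch: it assumes the vplexcli listing has been captured to a file named vplex_luns.txt (an arbitrary name I chose), and it lowercases the UID from powermt so it matches the VPlex format.
#!/bin/sh
# Sketch: print hdiskpower device, VPlex LUN name, and UID side by side.
# Assumes vplex_luns.txt holds the captured output of:
#   ls -t /clusters/cluster-1/exports/storage-views/*::virtual-volumes
powermt display dev=all |
while read line; do
    case "$line" in
    "Pseudo name="*)
        dev=`echo "$line" | sed 's/^Pseudo name=//'` ;;
    "Logical device ID="*)
        # Strip the label and lowercase the UID to match the VPlex listing
        uid=`echo "$line" | sed 's/^Logical device ID=//' | awk '{print $1}' | tr '[A-Z]' '[a-z]'`
        # Pull the LUN name out of the matching virtual-volumes entry
        name=`grep "$uid" vplex_luns.txt | sed 's/.*(\([0-9]*\),\([^,]*\),.*/\2/'`
        echo "$dev  ${name:-unknown}  $uid"
        ;;
    esac
done
Run against the sample output above, the first line it prints would be: hdiskpower0  P1_LUN411_7872_SPB_VIOServer1_VIO_10G  6000144000000010704759addf2487a6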
If you would like to get a list of every storage view and every LUN:UID mapping, you can substitute the storage view name with an asterisk (*).
VPlexcli:/> ls -t /clusters/cluster-1/exports/storage-views/*::virtual-volumes
The resulting report will show a complete list of LUNs, grouped by storage view:
/clusters/cluster-1/exports/storage-views/VIOServer1:
Name             Value
---------------  --------------------------------------------------------------
virtual-volumes  [(0,P1_LUN421_9322_SPB_...
/clusters/cluster-1/exports/storage-views/VIOServer2:
Name             Value
---------------  --------------------------------------------------------------
virtual-volumes  [(0,P1_LUN421_9322_SPB_VIOServer2_root_75G,VPD83T3:6000144000000010704759addf248ad9,75G),
                 (1,R2_LUN125_9322_SPB_VIOServer2_redo2_12G,VPD83T3:6000144000000010704759addf248b09,12G),
                 (2,R2_LUN124_9322_SPA_VIOServer2_redo1_12G,VPD83T3:6000144000000010704759addf248b04,12G),
                 (3,P3_LUN906_9322_SPB_VIOServer2_oraarc_250G,VPD83T3:6000144000000010704759addf248aff,250G),
                 (4,P2_LUN706_9322_SPA_VIOServer2_oraarc_250G,VPD83T3:6000144000000010704759addf248afa,250G)]
/clusters/cluster-1/exports/storage-views/VIOServer3:
Name             Value
---------------  --------------------------------------------------------------
virtual-volumes  [(0,P0_LUN101_3432_SPA_VIOServer3_root_75G,VPD83T3:6000144000000010704759addf248a0a,75G),
                 (1,P0_LUN130_3432_SPA_VIOServer3_redo1_25G,VPD83T3:6000144000000010704759addf248a0f,25G),
                 ...
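Because the full report can get long, it helps to capture it to a file (most terminal emulators can log the session) and grep that capture whenever an unfamiliar UID shows up in powermt. Using the same hypothetical vplex_luns.txt file from earlier:
grep -i addf248b04 vplex_luns.txt
                 (2,R2_LUN124_9322_SPA_VIOServer2_redo1_12G,VPD83T3:6000144000000010704759addf248b04,12G),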
Our VPlex has only been installed for a few months, and our team is still learning. There may be a better way to do this, but it's all I've been able to figure out so far.