In larger environments with many managed systems and multiple SAN fabrics, it is not always clear which SAN fabric a given FC port belongs to, even with good documentation. In many cases the hardware is far away from the administrator's desk, possibly in a different building or even at another site, so you cannot simply check the cabling on site.
This blog post will show you how to use Live Partition Mobility (LPM) to find all the FC ports that belong to a given SAN fabric.
We use the LPAR tool for the sake of simplicity, but you can also work with commands from the HMC CLI without the LPAR tool, so please continue reading even if the LPAR tool is not available!
In the following, we have named our SAN fabrics “Fabric1” and “Fabric2.” However, the procedure described below can be used with any number of SAN fabrics.
Since LPM is to be used, we first need an LPAR. We create the LPAR on one of our managed systems (ms09) with the LPAR tool:
$ lpar -m ms09 create fabric1
Creating LPAR fabric1: done
Register LPAR done
$
Of course, you can also use the HMC GUI or the HMC CLI to create the LPAR. We named the new LPAR “fabric1” after our SAN fabric; any other name works just as well.
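If the LPAR tool is not available, an LPAR can be created on the HMC CLI with mksyscfg. The following is only a hedged sketch: the memory and processor values are placeholders, and the exact set of required attributes depends on your environment and HMC version:

$ mksyscfg -r lpar -m ms09 -i 'name=fabric1,profile_name=standard,lpar_env=aixlinux,min_mem=1024,desired_mem=4096,max_mem=8192,proc_mode=shared,min_proc_units=0.1,desired_proc_units=0.2,max_proc_units=2.0,min_procs=1,desired_procs=1,max_procs=2,sharing_mode=uncap,uncap_weight=128'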
Next, our LPAR needs a virtual FC adapter mapped to an FC port of fabric “Fabric1“:
$ lpar -p standard addfc fabric1 10 ms09-vio1
fabric1 10 ms09-vio1 20
$
The LPAR tool has selected slot 20 for the VFC server adapter on the VIOS ms09-vio1 and created both the client adapter and the server adapter. Of course, client and server adapters can be created in exactly the same way via the HMC GUI or the HMC CLI. Since the LPAR is not active, the option '-p standard' specifies that only the profile should be adjusted.
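On the HMC CLI, the adapters can be added to the profiles with chsyscfg. The following is a hedged sketch for the client adapter only (slot 10, pointing at slot 20 of ms09-vio1, which has LPAR ID 1 in our example); the server adapter is added to the VIOS profile analogously, the nested quoting may need to be adapted, and the WWPN field is left empty so that the HMC generates the WWPNs itself:

$ chsyscfg -r prof -m ms09 -i 'name=standard,lpar_name=fabric1,"virtual_fc_adapters+=""10/client/1/ms09-vio1/20//1"""'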
To map the VFC server adapter to a physical FC port, we need the vfchost adapter number on the VIOS ms09-vio1:
$ vios npiv ms09-vio1
VIOS       ADAPT NAME  SLOT  CLIENT  OS       ADAPT  STATUS         PORTS
…
ms09-vio1  vfchost2    C20   (3)     unknown  -      NOT_LOGGED_IN  0
…
$
In slot 20 we have vfchost2, so this is the adapter that must now be mapped to an FC port of the fabric “Fabric1”. We map it to the FC port fcs8, which we know belongs to the fabric “Fabric1”. If we are wrong about that, we will see it shortly.
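The mapping itself can be done, for example, with the classic padmin commands on the virtual I/O server (shown here only as a sketch; the LPAR tool and the HMC GUI offer equivalent ways to do this). lsmap -npiv can be used afterwards to check the mapping:

$ vfcmap -vadapter vfchost2 -fcp fcs8
$ lsmap -npiv -vadapter vfchost2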
Let’s take a look at the WWPNs for the virtual FC Client Adapter:
$ lpar -p standard vslots fabric1
SLOT  REQ  TYPE           DATA
0     yes  serial/server  remote: (any)/any hmc=1
1     yes  serial/server  remote: (any)/any hmc=1
10    no   fc/client      remote: ms09-vio1(1)/20 c050760XXXXX00b0,c050760XXXXX00b1
$
Equipped with the WWPNs, we now ask our storage colleagues to create a small LUN for these WWPNs, which should only be visible in the fabric “Fabric1“. After the storage colleagues have created the LUN and adjusted the zoning accordingly, we activate our new LPAR in OpenFirmware mode and open a console:
$ lpar activate -p standard -b of fabric1

$ lpar console fabric1

Open in progress 

Open Completed.

IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
...

          1 = SMS Menu                          5 = Default Boot List
          8 = Open Firmware Prompt              6 = Stored Boot List

     Memory      Keyboard     Network     SCSI     Speaker  ok
0 >
Of course, this can also be done via the HMC GUI or the HMC CLI.
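On the HMC CLI this would look roughly like the following sketch: boot mode "of" activates the LPAR to the Open Firmware prompt, and mkvterm opens the console:

$ chsysstate -m ms09 -r lpar -o on -f standard -b of -n fabric1
$ mkvterm -m ms09 -p fabric1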
In OpenFirmware mode we start ioinfo and check if the small LUN is visible. If it is not visible, then the FC port fcs8 does not belong to the right fabric!
0 > ioinfo

!!! IOINFO: FOR IBM INTERNAL USE ONLY !!!
This tool gives you information about SCSI,IDE,SATA,SAS,and USB devices attached to the system

Select a tool from the following:

 1. SCSIINFO
 2. IDEINFO
 3. SATAINFO
 4. SASINFO
 5. USBINFO
 6. FCINFO
 7. VSCSIINFO

q - quit/exit

==> 6

FCINFO Main Menu
Select a FC Node from the following list:
 #  Location Code                   Pathname
-------------------------------------------------
 1. U9117.MMC.XXXXXXX7-V10-C10-T1   /vdevice/vfc-client@3000000a

q - Quit/Exit

==> 1

FC Node Menu
FC Node String: /vdevice/vfc-client@3000000a
FC Node WorldWidePortName: c050760XXXXXX0016
------------------------------------------
1. List Attached FC Devices
2. Select a FC Device
3. Enable/Disable FC Adapter Debug flags

q - Quit/Exit

==> 1

 1. 500507680YYYYYYY,0 - 10240 MB Disk drive

Hit a key to continue...

FC Node Menu
FC Node String: /vdevice/vfc-client@3000000a
FC Node WorldWidePortName: c050760XXXXXX0016
------------------------------------------
1. List Attached FC Devices
2. Select a FC Device
3. Enable/Disable FC Adapter Debug flags

q - Quit/Exit

==> q
The LUN shows up. The WWPN 500507680YYYYYYY is the WWPN of the corresponding storage port; it is unique worldwide and can only be seen in the fabric “Fabric1”.
Activating the LPAR in OpenFirmware mode has served two purposes: first, we have verified that the LUN is visible and that our mapping to fcs8 was therefore correct; second, the system now knows which WWPNs have to be found during an LPM operation so that the LPAR can be moved.
We deactivate the LPAR again.
$ lpar shutdown -f fabric1
$
If we now run an LPM validation for the inactive LPAR, it can only succeed on a managed system that has a virtual I/O server with a connection to the fabric “Fabric1”. Using a for loop, we try this for a number of managed systems:
$ for ms in ms10 ms11 ms12 ms13 ms14 ms15 ms16 ms17 ms18 ms19
do
	echo $ms
	lpar validate fabric1 $ms >/dev/null 2>&1
	if [ $? -eq 0 ]
	then
		echo connected
	else
		echo not connected
	fi
done
On the HMC CLI, the command used for the validation is migrlpar.
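Without the LPAR tool, a plain validation (without specifying a target FC port) would look roughly like this on the HMC; the full set of options used by the LPAR tool is shown further below:

$ migrlpar -o v -m ms09 -t ms10 -p fabric1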
Since we are not interested in validation messages, we redirect all validation messages to /dev/null.
Here’s the output of the for loop:
ms10
connected
ms11
connected
ms12
connected
ms13
connected
ms14
connected
ms15
connected
ms16
connected
ms17
connected
ms18
connected
ms19
connected
Obviously, all managed systems are connected to fabric “Fabric1“. That’s not very surprising, because they were cabled exactly like that.
It would be more interesting to know which FC ports on the managed systems (more precisely: on their virtual I/O servers) are connected to the fabric “Fabric1”. For this we need the list of virtual I/O servers of each managed system and the list of NPIV-capable FC ports of each virtual I/O server.
The list of virtual I/O servers can be obtained easily with the following command:
$ vios -m ms11 list
ms11-vio1
ms11-vio2
$
On the HMC CLI you can use the command lssyscfg -r lpar -m ms11 -F name,lpar_env; the virtual I/O servers are the LPARs whose lpar_env is vioserver.
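For example (the output shown here is only illustrative):

$ lssyscfg -r lpar -m ms11 -F name,lpar_env | grep vioserver
ms11-vio1,vioserver
ms11-vio2,vioserver
$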
The NPIV-capable FC ports can be determined with the following command:
$ vios lsnports ms11-vio1
ms11-vio1  name  physloc                     fabric  tports  aports  swwpns  awwpns
ms11-vio1  fcs0  U78AA.001.XXXXXXX-P1-C5-T1  1       64      60      2048    1926
ms11-vio1  fcs1  U78AA.001.XXXXXXX-P1-C5-T2  1       64      60      2048    2023
...
$
Under the hood, the command lsnports is run on the virtual I/O server. Of course, this can also be done without the LPAR tool.
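For example, you can run lsnports through the HMC with viosvrcmd (a sketch; alternatively, log in to the VIOS as padmin and run lsnports directly):

$ viosvrcmd -m ms11 -p ms11-vio1 -c "lsnports"
name   physloc                     fabric  tports  aports  swwpns  awwpns
fcs0   U78AA.001.XXXXXXX-P1-C5-T1  1       64      60      2048    1926
fcs1   U78AA.001.XXXXXXX-P1-C5-T2  1       64      60      2048    2023
...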
With an LPM validation (and of course also with a migration) you can specify which FC port on the target system is to be used. We show this here with two examples:
$ lpar validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs0 >/dev/null 2>&1
$ echo $?
0
$ lpar validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs1 >/dev/null 2>&1
$ echo $?
1
$
The validation with target ms10-vio1 and fcs0 was successful, i.e. this FC port is connected to the fabric “Fabric1”. The validation with target ms10-vio1 and fcs1 was not successful, i.e. that port is not connected to the fabric “Fabric1”.
Here is the command that must be called on the HMC, if the LPAR tool is not used:
$ lpar -v validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs0
hmc02: migrlpar -m ms09 -o v -p fabric1 -t ms10 -v -d 5 -i 'virtual_fc_mappings=10/ms10-vio1///fcs0'
$
To find all FC ports that are connected to the fabric “Fabric1”, we loop over the managed systems to be checked; for each managed system we loop over all virtual I/O servers of that managed system, and finally we loop over all FC ports of each virtual I/O server, performing an LPM validation for each port.
We have put things together in the following script. To make sure that it does not get too long, we have omitted some checks:
$ cat bin/fabric_ports
#! /bin/ksh
# Copyright © 2018, 2019 by PowerCampus 01 GmbH

LPAR=fabric1

STATE=$( lpar prop -F state $LPAR | tail -1 )

print "LPAR: $LPAR"
print "STATE: $STATE"

if [ "$STATE" != "Not Activated" ]
then
	print "ERROR: $LPAR must be in state 'Not Activated'"
	exit 1
fi

fcsCount=0
fcsSameFabricCount=0

for ms in $@
do
	print "MS: $ms"

	viosList=$( vios -m $ms list )

	for vios in $viosList
	do
		rmc_state=$( lpar -m $ms prop -F rmc_state $vios | tail -1 )
		if [ "$rmc_state" = "active" ]
		then
			fcList=
			vios -m $ms lsnports $vios 2>/dev/null | \
			while read vio fcport rest
			do
				if [ "$fcport" != "name" ]
				then
					fcList="${fcList} $fcport"
				fi
			done

			for fcport in $fcList
			do
				print -n "${vios}: ${fcport}: "
				lpar validate $LPAR $ms virtual_fc_mappings=10/${vios}///${fcport} </dev/null >/dev/null 2>&1
				case "$?" in
				0)
					print "yes"
					fcsSameFabricCount=$( expr $fcsSameFabricCount + 1 )
					;;
				*)
					print "no"
					;;
				esac
				fcsCount=$( expr $fcsCount + 1 )
			done
		else
			print "${vios}: RMC not active"
		fi
	done
done

print "${fcsCount} FC-ports investigated"
print "${fcsSameFabricCount} FC-ports in same fabric"
$
As an illustration we briefly show a run of the script over some managed systems. We start the script with time to see how long it takes:
$ time bin/fabric_ports ms10 ms11 ms12 ms13 ms14 ms15 ms16 ms17 ms18 ms19
LPAR: fabric1
STATE: Not Activated
MS: ms10
ms10-vio3: RMC not active
ms10-vio1: fcs0: yes
ms10-vio1: fcs2: yes
ms10-vio1: fcs4: no
ms10-vio1: fcs6: no
ms10-vio2: fcs0: yes
ms10-vio2: fcs2: yes
ms10-vio2: fcs4: no
ms10-vio2: fcs6: no
MS: ms11
ms11-vio3: RMC not active
ms11-vio1: fcs0: no
ms11-vio1: fcs1: no
ms11-vio1: fcs2: no
ms11-vio1: fcs3: yes
ms11-vio1: fcs4: no
…
ms19-vio2: fcs2: no
ms19-vio2: fcs3: no
ms19-vio2: fcs0: no
ms19-vio2: fcs1: no
ms19-vio2: fcs4: no
ms19-vio2: fcs5: no
132 FC-ports investigated
17 FC-ports in same fabric

real	2m33.978s
user	0m4.597s
sys	0m8.137s
$
In about 150 seconds, 132 FC ports were examined (i.e. 132 LPM validations were performed). A single validation therefore took a little more than one second on average.
We have found all the FC ports that are connected to the fabric “Fabric1“.
Of course, this can be done analogously for other fabrics.
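If you want to reuse the script unchanged for another fabric, one option (a hedged sketch, not part of the original script) is to pass the name of the probe LPAR as the first argument instead of hardcoding it:

# at the top of bin/fabric_ports, replace "LPAR=fabric1" with:
LPAR=$1
shift

# then call the script with the probe LPAR as the first argument:
$ bin/fabric_ports fabric2 ms10 ms11 ms12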
A final note: not all ports above are cabled!