FC NPIV client throughput

When using NPIV, multiple client LPARs share a physical FC port of a virtual I/O server. For performance investigations it is helpful to be able to easily determine the throughput of each client LPAR and to compare the throughputs of the clients. Questions like

  • how much throughput is achieved by a particular LPAR
  • which LPARs have the highest throughput and produce the most FC traffic
  • are there resource bottlenecks

could be answered.

Of course, there are several ways to gain this data. A particularly simple option is provided on the virtual I/O server by the padmin command ‘fcstat‘. With the ‘-client‘ option, the command shows NPIV client statistics:

(0)padmin@aixvio1:/home/padmin> fcstat -client
              hostname   dev                wwpn     inreqs    outreqs ctrlreqs          inbytes         outbytes  DMA_errs Elem_errs Comm_errs

               aixvio1  fcs0  0x100000XXXXXXXXXX 49467894179 50422150679 947794529 1861712755360927 1451335312750576         0         0         0
     C050760YYYYYYYYY
                                    0          0        0                0                0         0         0         0
     C050760ZZZZZZZZZ
                                    0          0        0                0                0         0         0         0
                 aix01  fcs0  0xC050760XXXXXXXXX   22685402  101956075 10065757     699512617896    1572578056704         0         0         0
                 aix02  fcs0  0xC050760XXXXXXXXX   28200473   82295158 12051365     387847746448     626772151808         0         0         0
                 aix03  fcs0  0xC050760XXXXXXXXX  376500672  255163053 21583628   22619424512608    3786990844928         0         0         0
                 aix04  fcs0  0xC050760XXXXXXXXX  116450405  504688524 14020031    4037786527400    9929289617408         0         0         0
          blbprodora22  fcs0  0xC050760XXXXXXXXX 1341092479  580673554 37458927   44288566807072   12166718497792         0         0         0
...
               aixvio1  fcs1  0x100000XXXXXXXXXX  391131484 1090556094 156294130   71031615240217   87642294572864         0         0         0
              aixtsm01  fcs2  0xC050760XXXXXXXXX  334020900  785597352 74659821   62072552942128   83284555980288         0         0         0
              aixtsm02  fcs0  0xC050760XXXXXXXXX    2943054   40921231 11617552     107317697968     289142333440         0         0         0

               aixvio1  fcs2  0x210000XXXXXXXXXX  403180246 5877180796   236998  105482699300998 1540608710446612         0         0         0
              aixtsm01  fcs6  0xC050760XXXXXXXXX  146492419  392365162    74250   38378099796342  102844775468007         0         0         0
              aixtsm02  fcs2  0xC050760XXXXXXXXX         19     192848       20             1090      50551063184         0         0         0

               aixvio1  fcs3  0x210000XXXXXXXXXX  405673338 7371951499   260575  105969796271246 1932388891128304         0         0         0
              aixtsm02  fcs3  0xC050760XXXXXXXXX          0          0        4                0                0         0         0         0
                 aix02  fcs7  0xC050760XXXXXXXXX      42624 2677470211    34211          2382280  701864613402184         0         0         0
...
Invalid initiator world wide name
Invalid initiator world wide name
(0)padmin@aixvio1:/home/padmin>

The lines with the WWPNs C050760YYYYYYYYY and C050760ZZZZZZZZZ belong to NPIV adapters of LPARs that are currently not activated; therefore only zeros are shown as counters. For each physical (NPIV-enabled) FC port of the virtual I/O server, the physical FC port itself and the NPIV client LPARs are displayed. The output is briefly described here using the block for fcs1 as an example. The physical port of the virtual I/O server always comes first, here aixvio1 with FC port fcs1. The following lines show the NPIV clients, each with the LPAR name and the associated virtual FC port of the LPAR, here aixtsm01 and aixtsm02. The virtual FC ports fcs2 (aixtsm01) and fcs0 (aixtsm02) are mapped to the physical FC port fcs1 of aixvio1. After a blank line, the next physical FC port of the virtual I/O server follows.

The WWPNs of the physical and virtual FC ports are listed in the wwpn column. In addition, the number of incoming and outgoing requests, as well as the transferred bytes (again incoming and outgoing), are listed. The three remaining columns list errors: if no DMA buffer is available for a request, DMA_errs is incremented; if the queue of the FC adapter is full, Elem_errs is incremented; in the case of transmission errors, Comm_errs is incremented. Regularly increasing counters for DMA_errs or Elem_errs can be an indication that some tuning attributes are set too small.
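If you only want to check the error counters quickly, a one-liner is sufficient. A minimal sketch, assuming the 11-column layout shown above (the two sample lines are embedded here for illustration; on a virtual I/O server you would pipe the output of ‘/usr/ios/cli/ioscli fcstat -client‘ into awk instead):

```shell
# Print only those fcstat -client lines (11 columns, WWPN in column 3)
# where one of the three error counters is nonzero. Sample data embedded.
result=$(awk 'NF == 11 && $3 ~ /^0x/ && ($9 > 0 || $10 > 0 || $11 > 0) {
    printf "%s %s: DMA_errs=%s Elem_errs=%s Comm_errs=%s\n", $1, $2, $9, $10, $11
}' <<'EOF'
aix01 fcs0 0xC050760XXXXXXXXX 22685402 101956075 10065757 699512617896 1572578056704 0 0 0
aix02 fcs0 0xC050760XXXXXXXXX 28200473 82295158 12051365 387847746448 626772151808 3 0 0
EOF
)
printf '%s\n' "$result"
```

With the embedded sample data, only the aix02 line is reported, because its DMA_errs counter is nonzero.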

Because of its length and the absolute counters, the output is somewhat confusing. With a small script, however, you can easily calculate delta values and scale them to MB per second. The following example script does exactly that:

$ cat npivstat
#! /bin/ksh93
#
# Copyright (c) 2019 by PowerCampus 01 GmbH
# Author: Dr. Armin Schmidt
#

delta=5 # seconds

typeset -A dataInreqs
typeset -A dataOutreqs
typeset -A dataInbytes
typeset -A dataOutbytes
typeset -A dataDMA_errs
typeset -A dataElem_errs
typeset -A dataComm_errs

bc |& # start bc as coroutine
print -p "scale=2"

# get first sample

/usr/ios/cli/ioscli fcstat -client 2>/dev/null | \
while read hostname dev wwpn inreqs outreqs ctrlreqs inbytes outbytes DMA_errs Elem_errs Comm_errs rest
do
case "$wwpn" in
0x*)
dataInreqs[${hostname}_${dev}]=$inreqs
dataOutreqs[${hostname}_${dev}]=$outreqs
dataInbytes[${hostname}_${dev}]=$inbytes
dataOutbytes[${hostname}_${dev}]=$outbytes
dataDMA_errs[${hostname}_${dev}]=$DMA_errs
dataElem_errs[${hostname}_${dev}]=$Elem_errs
dataComm_errs[${hostname}_${dev}]=$Comm_errs
;;
esac
done
sleep $delta

while true
do
/usr/ios/cli/ioscli fcstat -client 2>/dev/null | \
while read hostname dev wwpn inreqs outreqs ctrlreqs inbytes outbytes DMA_errs Elem_errs Comm_errs rest
do
case "$wwpn" in
0x*)
prevInreqs=${dataInreqs[${hostname}_${dev}]}
prevOutreqs=${dataOutreqs[${hostname}_${dev}]}
prevInbytes=${dataInbytes[${hostname}_${dev}]}
prevOutbytes=${dataOutbytes[${hostname}_${dev}]}
prevDMA_errs=${dataDMA_errs[${hostname}_${dev}]}
prevElem_errs=${dataElem_errs[${hostname}_${dev}]}
prevComm_errs=${dataComm_errs[${hostname}_${dev}]}
dataInreqs[${hostname}_${dev}]=$inreqs
dataOutreqs[${hostname}_${dev}]=$outreqs
dataInbytes[${hostname}_${dev}]=$inbytes
dataOutbytes[${hostname}_${dev}]=$outbytes
dataDMA_errs[${hostname}_${dev}]=$DMA_errs
dataElem_errs[${hostname}_${dev}]=$Elem_errs
dataComm_errs[${hostname}_${dev}]=$Comm_errs

print -p "(${inreqs}-${prevInreqs})/$delta"
read -p inreqs
print -p "(${outreqs}-${prevOutreqs})/$delta"
read -p outreqs
print -p "(${inbytes}-${prevInbytes})/${delta}/1024/1024"
read -p inbytes
print -p "(${outbytes}-${prevOutbytes})/${delta}/1024/1024"
read -p outbytes
print -p "(${DMA_errs}-${prevDMA_errs})/$delta"
read -p DMA_errs
print -p "(${Elem_errs}-${prevElem_errs})/$delta"
read -p Elem_errs
print -p "(${Comm_errs}-${prevComm_errs})/$delta"
read -p Comm_errs

printf "%15s %5s %16s %6.2f %7.2f %7.2f %8.2f %8.2f %9.2f %9.2f\n" "$hostname" "$dev" "$wwpn" "$inreqs" "$outreqs" \
"$inbytes" "$outbytes" "$DMA_errs" "$Elem_errs" "$Comm_errs"
;;
"wwpn")
printf "%15s %5s %16s %6s %7s %7s %8s %8s %9s %9s\n" "$hostname" "$dev" "$wwpn" "$inreqs" "$outreqs" \
"$inbytes" "$outbytes" "$DMA_errs" "$Elem_errs" "$Comm_errs"
;;
"")
[ -n "$hostname" ] && continue
printf "%15s %5s %16s %6s %7s %7s %8s %8s %9s %9s\n" "$hostname" "$dev" "$wwpn" "$inreqs" "$outreqs" \
"$inbytes" "$outbytes" "$DMA_errs" "$Elem_errs" "$Comm_errs"
;;
esac
done
print

sleep $delta
done

$

The script ‘npivstat‘ is available for download in our download area.

Here is an excerpt from a run of the script (much shortened, only one of the physical ports is shown):

aixvio1 # ./npivstat
       hostname    dev              wwpn  inreqs  outreqs  inbytes  outbytes  DMA_errs  Elem_errs  Comm_errs
...                                                                                                          
        aixvio1   fcs2  0x210000XXXXXXXXXX    0.00  1019.00     0.00    254.75      0.00       0.00       0.00
       aixtsm01   fcs6  0xC050760XXXXXXXXX    0.00     0.00     0.00      0.00      0.00       0.00       0.00
       aixtsm02   fcs2  0xC050760XXXXXXXXX    0.00     0.00     0.00      0.00      0.00       0.00       0.00
          aix05   fcs6  0xC050760XXXXXXXXX    0.00  1018.20     0.00    254.55      0.00       0.00       0.00
...                                                                                                          
        aixvio1   fcs2  0x210000XXXXXXXXXX    0.00  1020.20     0.00    255.05      0.00       0.00       0.00
       aixtsm01   fcs6  0xC050760XXXXXXXXX    0.00     0.00     0.00      0.00      0.00       0.00       0.00
       aixtsm02   fcs2  0xC050760XXXXXXXXX    0.00     0.00     0.00      0.00      0.00       0.00       0.00
          aix05   fcs6  0xC050760XXXXXXXXX    0.00  1019.80     0.00    254.95      0.00       0.00       0.00
...                                                                                                           
        aixvio1   fcs2  0x210000XXXXXXXXXX    0.00   984.80     0.00    246.20      0.00       0.00       0.00
       aixtsm01   fcs6  0xC050760XXXXXXXXX    0.00     0.00     0.00      0.00      0.00       0.00       0.00
       aixtsm02   fcs2  0xC050760XXXXXXXXX    0.00     0.00     0.00      0.00      0.00       0.00       0.00
          aix05   fcs6  0xC050760XXXXXXXXX    0.00   985.00     0.00    246.25      0.00       0.00       0.00
...
^Caixvio1 # 

In the example above, the NPIV client aix05 generates approximately 250 MB/s of data, while the other two NPIV clients aixtsm01 and aixtsm02 have not produced FC traffic during this time.

The script must be started as root on a virtual I/O server. Of course you can customize the script to your own needs.
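As an example of such a customization, the output could be restricted to a single client LPAR. A small sketch (the function name npivfilter and the sample data are our own; in practice you would pipe the output of ‘npivstat‘ into the filter):

```shell
# Keep only the header line and the lines of one client LPAR.
npivfilter() {
    awk -v lpar="$1" 'NR == 1 || $1 == lpar'    # header + matching client
}
out=$(npivfilter aix05 <<'EOF'
hostname dev wwpn inreqs outreqs inbytes outbytes
aixvio1 fcs2 0x210000XXXXXXXXXX 0.00 1019.00 0.00 254.75
aix05 fcs6 0xC050760XXXXXXXXX 0.00 1018.20 0.00 254.55
EOF
)
printf '%s\n' "$out"
```

With the sample data, only the header and the aix05 line remain.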

Did you know that state and configuration change information is available on the HMC for about 2 months?

Status and configuration changes of LPARs and managed systems are stored on the HMCs for about 2 months. This can be used to find out when a managed system was shut down, when a service processor failover took place, or when the memory of an LPAR was expanded, provided the event is no more than about 2 months in the past.

The status changes of a managed system can be listed with the command “lslparutil -r sys -m <managed-system> -s h --startyear 1970 --filter event_types=state_change“, or alternatively with the LPAR tool command “ms history <managed-system>“.

linux $ ms history ms04
TIME                  PRIMARY_STATE         DETAILED_STATE
03/14/2019 08:45:13   Started               None
03/14/2019 08:36:52   Not Available         Unknown
02/17/2019 01:51:55   Started               None
02/17/2019 01:44:00   Not Available         Unknown
02/12/2019 09:32:57   Started               None
02/12/2019 09:28:02   Started               Service Processor Failover
02/12/2019 09:27:07   Started               None
02/12/2019 09:24:42   Standby               None
02/12/2019 09:21:25   Starting              None
02/12/2019 09:22:59   Stopped               None
02/12/2019 09:21:58   Not Available         Unknown
02/12/2019 09:09:45   Stopped               None
02/12/2019 09:07:53   Stopping              None
linux $

Configuration changes (processor, memory) of a managed system can be displayed with “lslparutil -r sys -m <managed-system> -s h --startyear 1970 --filter event_types=config_change“, or alternatively again with the LPAR tool:

linux $ ms history -c ms02
                                PROCUNITS             MEMORY
TIME                  CONFIGURABLE  AVAILABLE  CONFIGURABLE  AVAILABLE  FIRMWARE
04/16/2019 12:15:51      20.0          5.05       1048576       249344     25856
04/11/2019 11:17:39      20.0          5.25       1048576       253696     25600
04/02/2019 13:24:35      20.0          4.85       1048576       249344     25856
03/29/2019 14:29:14      20.0          5.25       1048576       253696     25600
03/15/2019 15:37:08      20.0          4.85       1048576       249344     25856
03/15/2019 11:36:57      20.0          4.95       1048576       249344     25856
...
linux $

The same information can also be displayed for LPARs.

The last status changes of an LPAR can be listed with “lpar history <lpar>“:

linux $ lpar history lpar02
TIME                  PRIMARY_STATE         DETAILED_STATE
04/17/2019 05:42:43   Started               None
04/17/2019 05:41:24   Waiting For Input     Open Firmware
04/16/2019 12:01:54   Started               None
04/16/2019 12:01:29   Stopped               None
02/15/2019 11:30:48   Stopped               None
02/01/2019 12:23:34   Not Available         Unknown
02/01/2019 12:22:50   Relocating            None
...

This corresponds to the command “lslparutil -r lpar -m ms03 -s h --startyear 1970 --filter event_types=state_change,lpar_names=lpar02“ on the HMC command line.

From the output it can be seen that the LPAR was relocated using LPM, was stopped and restarted, and has been in Open Firmware mode.
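If you are only interested in LPM relocations, you can simply filter the history for the state “Relocating“. A minimal sketch with an embedded excerpt of the output above (in practice you would pipe the output of ‘lpar history lpar02‘):

```shell
# Extract the timestamps of LPM relocations from a captured history listing.
relocations=$(awk '/Relocating/ { print $1, $2 }' <<'EOF'
04/17/2019 05:42:43   Started               None
02/01/2019 12:23:34   Not Available         Unknown
02/01/2019 12:22:50   Relocating            None
EOF
)
printf '%s\n' "$relocations"
```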

And finally, you can look at the last configuration changes of an LPAR using the HMC CLI command “lslparutil -r lpar -m ms03 -s h --startyear 1970 --filter event_types=config_change,lpar_names=lpar02“. The output of the LPAR tool is a bit clearer:

linux $ lpar history -c lpar02
TIME                  PROC_MODE  PROCS  PROCUNITS  SHARING  UNCAP_WEIGHT  PROCPOOL         MEM_MODE  MEM
04/23/2019 18:49:43   shared     1      0.7        uncap    10            DefaultPool      ded       4096
04/23/2019 18:49:17   shared     1      0.7        uncap    5             DefaultPool      ded       4096
04/23/2019 18:48:44   shared     1      0.3        uncap    5             DefaultPool      ded       4096
04/09/2019 08:04:25   shared     1      0.3        uncap    5             DefaultPool      ded       3072
03/14/2019 12:37:32   shared     1      0.1        uncap    5             DefaultPool      ded       3072
02/26/2019 09:34:28   shared     1      0.1        uncap    5             DefaultPool      ded       3072
02/20/2019 06:51:57   shared     1      0.3        uncap    5             DefaultPool      ded       3072
01/31/2019 08:12:58   shared     1      0.3        uncap    5             DefaultPool      ded       3072
..

From the output you can see that the number of processing units was changed several times, the uncapped weight was changed, and the memory was expanded.

Changes of the last two months are available at any time!

PowerVM: Do you know the Profile “last*valid*configuration”?

Some of you may have wondered how and where the current configuration of an LPAR is stored. If the current configuration and the profile are not kept in sync, differences quickly arise. When an LPAR is shut down and deactivated, the last current configuration is retained. When activating the LPAR, this configuration is offered in the GUI as the “current configuration”, in addition to the profiles of the LPAR. If you select the current configuration, the LPAR has the same configuration after activation as before deactivation. For a newly created LPAR, however, this choice is not available on activation. The difference also shows up on the HMC command line: an LPAR that has already been activated can be activated without specifying a profile, while a newly created LPAR can only be activated by specifying a profile. Let’s take a closer look.

(Short note: the commands on the HMC command line were executed directly on the HMC hmc01. In the example outputs with the LPAR tool, the commands were started from a Linux jump server. All commands are always shown in both variants!)

We have activated and booted the LPAR aix01 with the profile “standard“. We have not made any dynamic changes yet. We briefly look at the status of the LPAR and check if there is an RMC connection to the HMC:

hscroot@hmc01:~> lssyscfg -m p710 -r lpar --filter lpar_names=aix01 --header -F name,lpar_env,state,curr_profile,rmc_state,os_version
name lpar_env state curr_profile rmc_state os_version
aix01 aixlinux Running standard active "AIX 7.1 7100-04-00-0000"
hscroot@hmc01:~>
linux $ lpar status aix01
NAME  ID      TYPE   STATUS  PROFILE    RMC   PROCS  PROCUNITS MEMORY  OS
aix01  5  aixlinux  Running  standard  active   1       -      3072    AIX 7.1 7100-04-00-0000
linux $

To see the effect of a dynamic change, let’s take a look at the actual state and the profile “standard“:

hscroot@hmc01:~> lshwres -m p710 -r mem --level lpar --filter lpar_names=aix01 -F curr_mem
3072
hscroot@hmc01:~> lssyscfg -m p710 -r prof --filter profile_names=standard,lpar_names=aix01 -F desired_mem
3072
hscroot@hmc01:~>
linux $ lpar mem aix01
      MEMORY            MEMORY           HUGEPAGES
NAME   MODE  AME   MIN   CURR   MAX   MIN  CURR  MAX
aix01  ded    -   2048   3072  8192    0     0    0
linux $ lpar -p standard mem aix01
      MEMORY            MEMORY           HUGEPAGES
NAME   MODE  AME   MIN   CURR   MAX   MIN  CURR  MAX
aix01  ded    -   2048   3072  8192    0     0    0
linux $

The LPAR has currently 3072 MB of main memory, which are also stored in the “standard” profile.

Now we add 1024 MB of main memory dynamically (DLPAR):

hscroot@hmc01:~> chhwres -m p710 -r mem -o a -p aix01 -q 1024
hscroot@hmc01:~>
linux $ lpar -d addmem aix01 1024
linux $

Now let’s look at the resulting memory resources of the LPAR:

hscroot@hmc01:~> lshwres -m p710 -r mem --level lpar --filter lpar_names=aix01 -F curr_mem
4096
hscroot@hmc01:~>
linux $ lpar mem aix01
     MEMORY            MEMORY          HUGEPAGES
NAME  MODE  AME   MIN   CURR   MAX   MIN  CURR  MAX
aix01  ded   -   2048   4096  8192    0     0    0
linux $

As expected, the LPAR now has 4096 MB of RAM. But what does the profile “standard“ look like?

hscroot@hmc01:~> lssyscfg -m p710 -r prof --filter profile_names=standard,lpar_names=aix01 -F desired_mem
3072
hscroot@hmc01:~>
linux $ lpar -p standard mem aix01
     MEMORY            MEMORY          HUGEPAGES
NAME  MODE  AME   MIN   CURR   MAX   MIN  CURR  MAX
aix01  ded   -   2048   3072  8192    0     0    0
linux $

The profile has not changed; activating the LPAR with this profile would result in 3072 MB of main memory.

The current configuration is always saved in the special profile “last*valid*configuration“:

hscroot@hmc01:~> lssyscfg -m p710 -r prof --filter profile_names=last*valid*configuration,lpar_names=aix01 -F desired_mem
4096
hscroot@hmc01:~>
linux $ lpar -p last*valid*configuration mem aix01
     MEMORY            MEMORY          HUGEPAGES
NAME  MODE  AME   MIN   CURR   MAX   MIN  CURR  MAX
aix01  ded   -   2048   4096  8192    0     0    0
linux $

Here the value of 4096 MB is consistent with the currently available memory in the LPAR.

Every dynamic change to an LPAR is carried out via a DLPAR operation on the LPAR and by updating this special profile! If a profile is synchronized, manually or automatically, then ultimately the special profile “last*valid*configuration“ is synchronized with the desired profile.
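If you want to save the current configuration into a regular profile by hand, the HMC provides a save operation for profiles; to our knowledge this is ‘mksyscfg -o save‘, but please verify it on your HMC version. The following sketch only constructs the command string with the example names used above; it does not execute anything on an HMC:

```shell
# Construct (not execute) the HMC command that would save the current LPAR
# configuration into the profile "standard". Managed system, LPAR and
# profile names are from the example above; '-o save' and '--force' are
# assumptions to verify against your HMC documentation.
ms=p710 lpar=aix01 profile=standard
cmd="mksyscfg -r prof -m $ms -o save -p $lpar -n $profile --force"
printf '%s\n' "$cmd"
```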

The existence and handling of the special profile “last*valid*configuration” also makes some LPM possibilities easier to understand. We will deal with this in a later blog post.

We want your feedback!

The new PowerCampus “LPAR tool” is available for download! Much revised and written in C++. It supports output in various formats: JSON and YAML!

The first 100 feedbacks get two licenses (for 2 LPARs) for free! Forever!

So, download and give feedback, just send an e-mail to info@powercampus.de!

The integrated test license supports one HMC and two complete managed systems without further registration! For an extended trial version for 4 HMCs and an unlimited number of managed systems, just send an email to info@powercampus.de.

Download “LPAR tool”: https://powercampus.de/en/download-2/

LPAR console using Virtual I/O Server

Typically, a console for an LPAR is launched via an HMC, either via the GUI or via the CLI (vtmenu or mkvterm). A console therefore depends on the availability of an HMC. During an HMC update, or in case of problems with the HMC, it may not be possible to open a console for an LPAR.

Relatively unknown is the ability to configure a console for an LPAR via a virtual I/O server. If the HMC is not available, a console can then still be started via the virtual I/O server. No configuration is required on the client LPAR! By default, each client LPAR has 2 virtual serial server adapters (slots 0 and 1). If you configure an associated client adapter on a virtual I/O server, it can be used for a console connection.

On the virtual I/O server, only an unused virtual slot is needed (here slot 45). The client LPAR has the LPAR ID 39. The virtual serial client adapter can be created with the following command:

hmc01 $ chhwres -m ms02 -r virtualio --rsubtype serial -o a -p ms02-vio1 -s 45 -a adapter_type=client,remote_lpar_name=aix02,remote_slot_num=0,supports_hmc=0
hmc01 $

Now you can always start a console for the LPAR via the virtual I/O server:

ms02-vio1 :/home/padmin> mkvt -id 39
AIX Version 7
Copyright IBM Corporation, 1982, 2018.
Console login: root
root's Password: XXXXXX


aix02  AIX 7.2         powerpc


Last unsuccessful login: Mon Mar 18 23:14:26 2019 on ssh from N.N.N.N
Last login: Wed Mar 27 20:19:22 2019 on /dev/pts/0 from M.M.M.M
[YOU HAVE NEW MAIL]
aix02:/root> hostname
aix02
aix02:/root>

The command mkvt on the virtual I/O server corresponds to the command mkvterm on the HMC. The desired partition must be specified by its LPAR ID. Terminating the console works as usual with “~.“, or, if you are logged in to the virtual I/O server via SSH, with “~~.“.

Alternatively, you can also end a console session with the command rmvt:

ms02-vio1:/home/padmin> rmvt -id 39
ms02-vio1:/home/padmin>

The following message appears in the console and the console is closed:

Virtual terminal has been disconnected.

$

With the LPAR tool, the console can of course be set up even more easily. The virtual serial adapter on the virtual I/O server can be created with the command “lpar addserial“; a manual login to the HMC is not necessary for this:

$ lpar addserial -c ms02-vio1 45 aix02 0
$

The “-c” option means “create client adapter”. The command also creates the adapter in the profile. The success of the action can be checked with “lpar vslots“, which shows all virtual adapters of an LPAR:

$ lpar vslots ms02-vio1
SLOT  REQ  TYPE           DATA
0     1    serial/server  remote: -(any)/any status=unavailable hmc=1
1     1    serial/server  remote: -(any)/any status=unavailable hmc=1
2     0    eth            PVID=1 VLANS=- XXXXXXXXXXXX ETHERNET0
3     1    eth            TRUNK(1) IEEE PVID=1 VLANS=201 XXXXXXXXXXXXX ETHERNET0
...
45     0   serial/client  remote: aix02(39)/0 status=unavailable hmc=0
...
$

Starting the console then proceeds as usual by logging in as padmin on the virtual I/O server and the command mkvt.

Caution: a console session through the virtual I/O server should always be terminated when it is no longer needed, because it cannot be terminated from the HMC! Here is an attempt to start a console via the HMC while the console is already active via the virtual I/O server:

$ lpar console aix02

Open in progress 

A terminal session is already open for this partition. 
Only one open session is allowed for a partition. 
Exiting.... 
Attempts to open the session failed. Please close the terminal and retry the open at a later time. 
If the problem persists, Please contact IBM support. 
Received end of file, Exiting.
Connection to X.X.X.X closed.
$

Even rmvterm does not help:

$ lpar rmvterm aix02
/bin/stty: standard input: Inappropriate ioctl for device
$

Conversely, no console can be started using the virtual I/O server if a console is active using the HMC:

ms02-vio1:/home/padmin> mkvt -id 39
Virtual terminal is already connected.

ms02-vio1:/home/padmin>

So always make sure that the console is terminated.

 

Which FC port is connected to which SAN fabric?

In larger environments with many managed systems and multiple SAN fabrics, it is not always clear, even with good documentation, which SAN fabric an FC port belongs to. In many cases the hardware is far away from the screen, possibly in a different building or geographically farther away, so you cannot simply check the cabling on site.

This blog post will show you how to use Live Partition Mobility (LPM) to find all the FC ports that belong to a given SAN fabric.

We use the LPAR tool for the sake of simplicity, but you can also work with commands from the HMC CLI without the LPAR tool, so please continue reading even if the LPAR tool is not available!

In the following, we have named our SAN fabrics “Fabric1” and “Fabric2.” However, the procedure described below can be used with any number of SAN fabrics.

Since LPM is to be used, we first need an LPAR. We create the LPAR on one of our managed systems (ms09) with the LPAR tool:

$ lpar -m ms09 create fabric1
Creating LPAR fabric1:
done
Register LPAR
done
$

Of course you can also use the HMC GUI or the HMC CLI to create the LPAR. We named the new LPAR “fabric1“, after our SAN fabric. Any other name would work just as well!

Next, our LPAR needs a virtual FC adapter mapped to an FC port of fabric “Fabric1“:

$ lpar -p standard addfc fabric1 10 ms09-vio1
fabric1 10 ms09-vio1 20
$

The LPAR tool selected slot 20 for the VFC server adapter on the VIOS ms09-vio1 and created the client adapter as well as the server adapter. Of course, client and server adapters can be created in exactly the same way via the HMC GUI or the HMC CLI. Since the LPAR is not active, the ‘-p standard‘ option specifies that only the profile should be adjusted.

To map the VFC server adapter to a physical FC port, we need the vfchost adapter number on the VIOS ms09-vio1:

$ vios npiv ms09-vio1
VIOS       ADAPT NAME  SLOT  CLIENT OS      ADAPT   STATUS        PORTS
…
ms09-vio1  vfchost2    C20   (3)    unknown  -     NOT_LOGGED_IN  0
…
$

In slot 20 we have vfchost2, so this adapter must now be mapped to an FC port of fabric “Fabric1“. We map it to the FC port fcs8, which we know belongs to fabric “Fabric1“. If we are wrong, we will see that shortly.
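The mapping itself is done as padmin on the virtual I/O server with the command vfcmap. The following sketch only constructs the command string for our example (vfchost2 and fcs8 from above); the resulting command would then be run on ms09-vio1:

```shell
# Build the VIOS padmin command that maps the virtual FC server adapter
# vfchost2 (slot 20) to the physical FC port fcs8. The command is only
# constructed here, not executed.
vadapter=vfchost2 fcport=fcs8
cmd="vfcmap -vadapter $vadapter -fcp $fcport"
printf '%s\n' "$cmd"
```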

Let’s take a look at the WWPNs for the virtual FC Client Adapter:

$ lpar -p standard vslots fabric1
SLOT  REQ  TYPE           DATA
0     yes  serial/server  remote: (any)/any hmc=1
1     yes  serial/server  remote: (any)/any hmc=1
10    no   fc/client      remote: ms09-vio1(1)/20 c050760XXXXX00b0,c050760XXXXX00b1
$

Equipped with the WWPNs, we ask our storage colleagues to create a small LUN for these WWPNs, which should be visible only in the fabric “Fabric1“. After the storage colleagues have created the LUN and adjusted the zoning accordingly, we activate our new LPAR in OpenFirmware mode and open a console:

$ lpar activate -p standard -b of fabric1

$ lpar console fabric1

Open in progress 

Open Completed.

IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
...

          1 = SMS Menu                          5 = Default Boot List
          8 = Open Firmware Prompt              6 = Stored Boot List

     Memory      Keyboard     Network     SCSI     Speaker  ok
0 >

Of course, this is also possible without problems with GUI or HMC CLI.

In OpenFirmware mode we start ioinfo and check whether the small LUN is visible. If it is not visible, the FC port fcs8 does not belong to the right fabric!

0 > ioinfo

!!! IOINFO: FOR IBM INTERNAL USE ONLY !!!
This tool gives you information about SCSI,IDE,SATA,SAS,and USB devices attached to the system

Select a tool from the following

1. SCSIINFO
2. IDEINFO
3. SATAINFO
4. SASINFO
5. USBINFO
6. FCINFO
7. VSCSIINFO

q - quit/exit

==> 6

FCINFO Main Menu
Select a FC Node from the following list:
 # Location Code           Pathname
-------------------------------------------------
 1. U9117.MMC.XXXXXXX7-V10-C10-T1  /vdevice/vfc-client@3000000a

q - Quit/Exit

==> 1

FC Node Menu
FC Node String: /vdevice/vfc-client@3000000a
FC Node WorldWidePortName: c050760XXXXXX0016
------------------------------------------
1. List Attached FC Devices
2. Select a FC Device
3. Enable/Disable FC Adapter Debug flags

q - Quit/Exit

==> 1

1. 500507680YYYYYYY,0 - 10240 MB Disk drive

Hit a key to continue...

FC Node Menu
FC Node String: /vdevice/vfc-client@3000000a
FC Node WorldWidePortName: c050760XXXXXX0016
------------------------------------------
1. List Attached FC Devices
2. Select a FC Device
3. Enable/Disable FC Adapter Debug flags

q - Quit/Exit

==> q

The LUN appears, the WWPN 500507680YYYYYYY is the WWPN of the corresponding storage port, which is unique worldwide and can only be seen in the fabric “Fabric1“!

Activating the LPAR in OpenFirmware mode served two purposes: first, to verify that the LUN is visible and that our mapping to fcs8 was correct; second, the system now knows which WWPNs must be found during an LPM operation so that the LPAR can be moved.

We deactivate the LPAR again.

$ lpar shutdown -f fabric1
$

If we now perform an LPM validation of the inactive LPAR, the validation can only succeed on a managed system that has a virtual I/O server with a connection to the fabric “Fabric1“. Using a for loop, we try this for a number of managed systems:

$ for ms in ms10 ms11 ms12 ms13 ms14 ms15 ms16 ms17 ms18 ms19
do
echo $ms
lpar validate fabric1 $ms >/dev/null 2>&1
if [ $? -eq 0 ]
then
   echo connected
else
   echo not connected
fi
done

The command for the validation on the HMC CLI is migrlpar.

Since we are not interested in validation messages, we redirect all validation messages to /dev/null.

Here’s the output of the for loop:

ms10
connected
ms11
connected
ms12
connected
ms13
connected
ms14
connected
ms15
connected
ms16
connected
ms17
connected
ms18
connected
ms19
connected

Obviously, all managed systems are connected to fabric “Fabric1“. That’s not very surprising, because they were cabled exactly like that.

It would be more interesting to know which FC ports on the managed systems (virtual I/O servers) are connected to the fabric “Fabric1“. To do this, we need a list of virtual I/O servers for each managed system and the list of NPIV-capable FC ports for each virtual I/O server.

The list of virtual I/O servers can be obtained easily with the following command:

$ vios -m ms11 list
ms11-vio1
ms11-vio2
$

On the HMC CLI you can use the command: lssyscfg -r lpar -m ms11 -F name,lpar_env.

The NPIV-capable ports can be found with the following command:

$ vios lsnports ms11-vio1
ms11-vio1       name             physloc                        fabric tports aports swwpns  awwpns
ms11-vio1       fcs0             U78AA.001.XXXXXXX-P1-C5-T1          1     64     60   2048    1926
ms11-vio1       fcs1             U78AA.001.XXXXXXX-P1-C5-T2          1     64     60   2048    2023
...
$

The command lsnports is used on the virtual I/O server. Of course you can do this without the LPAR tool.

With the LPM validation (and of course also with the migration) you can specify which FC port on the target system should be used; we show this here with two examples:

$ lpar validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs0 >/dev/null 2>&1
$ echo $?
0
$ lpar validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs1 >/dev/null 2>&1
$ echo $?
1
$

The validation with target ms10-vio1 and fcs0 was successful, i.e. this FC port is connected to fabric “Fabric1“. The validation with target ms10-vio1 and fcs1 was not successful, i.e. that port is not connected to fabric “Fabric1“.

Here is the command that must be called on the HMC, if the LPAR tool is not used:

$ lpar -v validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs0
hmc02: migrlpar -m ms09 -o v -p fabric1 -t ms10 -v -d 5 -i 'virtual_fc_mappings=10/ms10-vio1///fcs0'
$

To find all FC ports that are connected to the fabric “Fabric1”, we loop over the managed systems to be checked; for each managed system, we loop over all virtual I/O servers of that managed system; and finally, for each FC port of a virtual I/O server, we perform an LPM validation.

We have put this together in the following script. To keep it from getting too long, we have omitted some checks:

$ cat bin/fabric_ports
#! /bin/ksh
# Copyright © 2018, 2019 by PowerCampus 01 GmbH

# LPAR used for the LPM validations; it must be in the state 'Not Activated'
LPAR=fabric1

STATE=$( lpar prop -F state $LPAR | tail -1 )

print "LPAR: $LPAR"
print "STATE: $STATE"

if [ "$STATE" != "Not Activated" ]
then
            print "ERROR: $LPAR must be in state 'Not Activated'"
            exit 1
fi

fcsCount=0
fcsSameFabricCount=0

for ms in $@
do
            print "MS: $ms"
            viosList=$( vios -m $ms list )

            for vios in $viosList
            do
                        rmc_state=$( lpar -m $ms prop -F rmc_state $vios | tail -1 )
                        if [ "$rmc_state" = "active" ]
                        then
                                    # collect the FC ports of this VIOS from the lsnports output
                                    fcList=
                                    vios -m $ms lsnports $vios 2>/dev/null | \
                                    while read vio fcport rest
                                    do
                                               if [ "$fcport" != "name" ]
                                               then
                                                           fcList="${fcList} $fcport"
                                               fi
                                    done

                                    for fcport in $fcList
                                    do
                                               print -n "${vios}: ${fcport}: "
                                               # LPM validation: exit status 0 means same fabric as $LPAR
                                               lpar validate $LPAR $ms virtual_fc_mappings=10/${vios}///${fcport} </dev/null >/dev/null 2>&1
                                               case "$?" in
                                               0)
                                                           print "yes"
                                                           fcsSameFabricCount=$( expr $fcsSameFabricCount + 1 )
                                                           ;;
                                               *) print "no" ;;
                                               esac
                                               fcsCount=$( expr $fcsCount + 1 )
                                    done
                        else
                                    print "${vios}: RMC not active"
                        fi
            done
done

print "${fcsCount} FC-ports investigated"
print "${fcsSameFabricCount} FC-ports in same fabric"

$

As an illustration we briefly show a run of the script over some managed systems. We start the script with time to see how long it takes:

$ time bin/fabric_ports ms10 ms11 ms12 ms13 ms14 ms15 ms16 ms17 ms18 ms19
LPAR: fabric1
STATE: Not Activated
MS: ms10
ms10-vio3: RMC not active
ms10-vio1: fcs0: yes
ms10-vio1: fcs2: yes
ms10-vio1: fcs4: no
ms10-vio1: fcs6: no
ms10-vio2: fcs0: yes
ms10-vio2: fcs2: yes
ms10-vio2: fcs4: no
ms10-vio2: fcs6: no
MS: ms11
ms11-vio3: RMC not active
ms11-vio1: fcs0: no
ms11-vio1: fcs1: no
ms11-vio1: fcs2: no
ms11-vio1: fcs3: yes
ms11-vio1: fcs4: no
…
ms19-vio2: fcs2: no
ms19-vio2: fcs3: no
ms19-vio2: fcs0: no
ms19-vio2: fcs1: no
ms19-vio2: fcs4: no
ms19-vio2: fcs5: no
132 FC-ports investigated
17 FC-ports in same fabric

real       2m33.978s
user      0m4.597s
sys       0m8.137s
$

In about 154 seconds, 132 FC ports were examined (one LPM validation each). On average, a validation thus took a little more than one second.
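The arithmetic: 2m33.978s are 153.978 seconds; divided by 132 validations, that gives roughly 1.17 seconds each. As a quick one-liner:

```shell
# Average validation time: total wall-clock seconds / number of ports
awk 'BEGIN { printf "%.2f\n", (2 * 60 + 33.978) / 132 }'
# prints 1.17
```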

We have found all the FC ports that are connected to the fabric “Fabric1”.

Of course, this can be done analogously for other fabrics.

A final note: not all ports above are cabled!

HMC Error #25B810

Managing and administering service events is often neglected on HMCs. In this article we use a concrete example, an error with reference code #25B810, to show how to handle such events. Of course, our LPAR tool is used here.

First, let’s find all open service events:

$ hmc lssvcevents
TIME                 PROBLEM  PMH   HMC     REFCODE   STATE     STATUS  CALLHOME  FAILING_MTMS      TEXT                                         
02/13/2019 23:02:31  7        -     hmc01   #25B810   approved  Open    false     8231-E2B/06A084P  File System alert event occurred...          
02/16/2019 16:14:28  8        -     hmc01   B3030001  approved  Open    false     8231-E2B/06A084P  ACT04284I A Management Console connect failed
02/11/2019 16:12:43  37       -     hmc02   B3030001  approved  Open    false     8231-E2B/06A084P  ACT04284I A Management Console connect failed
02/11/2019 17:43:19  38       -     hmc02   B3030001  approved  Open    false     8231-E2B/06A084P  ACT04283I A connection to a FSP,BPA...  
$

This article is about the problem with number 7. The problem was recorded on 02/13/2019 at 23:02:31 by the HMC named hmc01. The reference code is #25B810. The problem is in the state “Open”; a call home has not been triggered. The problem concerns the managed system with serial number 06A084P, a Power 710 (8231-E2B). The beginning of the error message is shown in the last column.
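Since lssvcevents prints one event per line, standard text tools suffice to pick out events of interest. A small sketch; the sample lines merely stand in for real output (fields: date, time, problem number, PMH, HMC, refcode):

```shell
# Print the problem numbers of all events with a given reference code
cat <<'EOF' | awk -v ref='#25B810' '$6 == ref { print $3 }'
02/13/2019 23:02:31 7  - hmc01 #25B810
02/16/2019 16:14:28 8  - hmc01 B3030001
02/11/2019 16:12:43 37 - hmc02 B3030001
EOF
```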

First, let’s look at the whole record of the problem by specifying the problem number and HMC:

$ hmc lssvcevents -p 7 hmc01
analyzing_hmc: hmc01
analyzing_mtms: 7042-CR8/21009CD
approval_state: approved
callhome_intended: false
created_time: 02/14/2019 04:11:31
duplicate_count: 0
eed_transmitted: false
enclosure_mtms: 8231-E2B/06A084P
event_severity: 0
event_time: 02/13/2019 23:02:31
failing_mtms: 8231-E2B/06A084P
files: iqyymrge.log/Consolidated system platform log,
iqyvpd.dat/Configuration information associated with the HMC,
actzuict.dat/Tasks performed,
iqyvpdc.dat/Configuration information associated with the HMC,
problems.xml/XML version of the problems opened on the HMC for the HMC and the server,
refcode.dat/list of reference codes associated with the hmc,
iqyylog.log/HMC firmware log information,
PMap.eed/Partition map, obtained from 'lshsc -w -c machine',
hmc.eed/HMC code level obtained from 'lshmc -V' and connection information obtained from 'lssysconn -r all',
sys.eed/Output of various system configuration commands,
8231-E2B_06A084P.VPD.xml/Configuration information associated with the managed system
first_time: 02/14/2019 04:11:31
last_time: 02/14/2019 04:11:31
problem_num: 7
refcode: #25B810
reporting_mtms: 8231-E2B/06A084P
reporting_name: p710
status: Open
sys_mtms: 8231-E2B/06A084P
sys_name: p710
sys_refcode: #25B810
text: File System alert event occurred on /home/ios/CM/DB. Free space is less than 10%, or there was an error querying the filesystem.

At the end of the record we find the unabbreviated error message. It is about a file system with less than 10% free space. The path /home/ios/CM/DB points to a virtual I/O server. The virtual I/O servers in question are located on the managed system with the serial number 06A084P:

$ ms show 06A084P
NAME  SERIAL_NUM  TYPE_MODEL  HMCS        
p710  06A084P     8231-E2B    hmc01,hmc02
$

It is the managed system named p710. The managed system includes the following virtual I/O server:

$ vios -m p710 show
LPAR     ID  SERIAL    LPAR_ENV   MS    HMCs
aixvio1  1   06A084P1  vioserver  p710  hmc01,hmc02
$

A check of the error report on the Virtual I/O Server aixvio1 shows the following entry:

LABEL:          VIO_ALERT_EVENT
IDENTIFIER:     0FD4CF1A

Date/Time:       Wed Feb 13 22:02:31 CST 2019
Sequence Number: 98
Machine Id:      00F6A0844C00
Node Id:         aixvio1
Class:           O
Type:            INFO
WPAR:            Global
Resource Name:   /home/ios/CM/DB 

Description
Informational Message

Probable Causes
Asynchronous Event Occurred

Failure Causes
PROCESSOR

        Recommended Actions
        Check Detail Data

Detail Data
Alert Event Message
25b810
A File System alert event occurred on /home/ios/CM/DB. Free space is less than 10%, or there was an error querying the filesystem.

Diagnostic Analysis
Diagnostic Log sequence number: 19
Resource tested:        sysplanar0
Menu Number:            25B810
Description:


 File System alert event occurred on /home/ios/CM/DB. Free space is less than 10%, or there was an error querying the filesystem.

A quick check of the file system shows that the problem has already been resolved, and there is enough space:

$ df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
...
/dev/hd1           0.25      0.16   35%      111     1% /home
...
$ 
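The check that the alert performs can be sketched in a few lines of shell: flag a filesystem as soon as its free space drops below 10%. The sample line merely stands in for real df output (fields: device, size in GB, free in GB):

```shell
# Compute percent free and flag filesystems below the 10% threshold
printf '/dev/hd1 0.25 0.16\n' | \
awk '{
    pct = 100 * $3 / $2
    printf "%s: %.0f%% free%s\n", $1, pct, (pct < 10 ? " (ALERT)" : "")
}'
# prints /dev/hd1: 64% free
```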

So the problem no longer exists. Therefore, the service event on the HMC should also be closed, which we do now:

$ hmc chsvcevent -o close -p 7 hmc01
$

For verification, we list the open service events again:

$ hmc lssvcevents 
TIME                 PROBLEM  PMH   HMC     REFCODE   STATE     STATUS  CALLHOME  FAILING_MTMS      TEXT                                         
02/16/2019 16:14:28  8        -     hmc01   B3030001  approved  Open    false     8231-E2B/06A084P  ACT04284I A Management Console connect failed
02/11/2019 16:12:43  37       -     machmc  B3030001  approved  Open    false     8231-E2B/06A084P  ACT04284I A Management Console connect failed
02/11/2019 17:43:19  38       -     machmc  B3030001  approved  Open    false     8231-E2B/06A084P  ACT04283I A connection to a FSP,BPA...   
$ 

The event with the number 7 was closed successfully.

Service events are easy to manage with the LPAR tool!

LPAR tool is available now

As of November 5, 2018, the LPAR tool is officially available.

A version for Linux can be downloaded from the download area (versions for AIX and macOS will follow soon). A user guide is in preparation.

To test the LPAR tool free of charge:

  1. Download the current version of the LPAR tool from the download area.
  2. Request a trial license.
  3. Install the LPAR tool.

Have fun testing the LPAR tool.
