LPAR tool with test license until 15th September 2019

In our download area, version 1.3.0.2 of our LPAR tool, including a valid test license (valid until 15th September 2019), is available for download. The license is contained directly in the binaries, so no license key needs to be entered. The included trial license allows use of the LPAR tool for up to 10 HMCs, 100 managed systems and 1000 LPARs.

Resources of not activated LPARs and Memory Affinity

When an LPAR is shut down, resources such as processors, memory, and I/O slots are not automatically released by the LPAR. The resources remain assigned to the LPAR and are reused on the next activation (with the current configuration). We already looked at this in the first part of this article, Resources of not activated LPARs.

(Note: In the example output, we use version 1.4 of the LPAR tool, but in all cases we show the underlying commands on the HMC command line, so you can try everything without using the LPAR tool.)

The example LPAR lpar1 was shut down, but currently still occupies 100 GB of memory:

linux $ lpar status lpar1
NAME   LPAR_ID  LPAR_ENV  STATE          PROFILE   SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
lpar1  39       aixlinux  Not Activated  standard  0     inactive  1      0.2         102400  Unknown
linux $

The output above was generated with the following commands, executed on the corresponding HMC hmc01:

hmc01: lssyscfg -r lpar -m ms09 --filter lpar_names=lpar1
hmc01: lshwres -r mem -m ms09 --level lpar --filter lpar_names=lpar1
hmc01: lshwres -r proc -m ms09 --level lpar --filter lpar_names=lpar1

As the output shows, the LPAR lpar1 still has its resources (processors, memory, I/O adapters) allocated.

In order to understand why deactivating an LPAR does not release the resources, you have to look at the “Memory Affinity Score”:

linux $ lpar lsmemopt lpar1
             LPAR_SCORE  
LPAR_NAME  CURR  PREDICTED
lpar1      100   0
linux $

HMC command line:

hmc01: lsmemopt -m ms09 -r lpar -o currscore --filter lpar_names=lpar1

The Memory Affinity Score describes how close processors and memory are to each other: the closer the memory is to the processors, the better the memory throughput. The command above indicates, with a value between 1 and 100, how good the affinity between the processors and memory of an LPAR is. Our LPAR lpar1 currently has a value of 100, which means the best possible affinity of memory and processors. If the resources were freed when deactivating an LPAR, the LPAR would lose this Memory Affinity Score. The next time the LPAR is activated, the memory affinity would then depend on the memory and processors available at that time. We release the resources once:

linux $ lpar -d rmprocs lpar1 1
linux $

HMC command line:

hmc01: chhwres -m ms09 -r proc  -o r -p lpar1 --procs 1

No score is reported anymore, since the LPAR no longer has any resources allocated:

linux $ lpar lsmemopt lpar1
             LPAR_SCORE  
LPAR_NAME  CURR  PREDICTED
lpar1      none  none
linux $

HMC command line:

hmc01: lsmemopt -m ms09 -r lpar -o currscore --filter lpar_names=lpar1

Now we allocate resources again and look at the effect this has on memory affinity:

linux $ lpar applyprof lpar1 standard
linux $

HMC command line:

hmc01: chsyscfg -r lpar -m ms09 -o apply -p lpar1 -n standard

We again determine the Memory Affinity Score:

linux $ lpar lsmemopt lpar1
             LPAR_SCORE  
LPAR_NAME  CURR  PREDICTED
lpar1      53    0
linux $

HMC command line:

hmc01: lsmemopt -m ms09 -r lpar -o currscore --filter lpar_names=lpar1

The score is now only 53; the performance of the LPAR has become worse. Whether and how much this is noticeable ultimately depends on the applications running on the LPAR.

The fact that resources are not released when an LPAR is deactivated thus guarantees that on the next activation (with the current configuration) the memory affinity, and with it the performance, stays the same.

If you release the resources of an LPAR (manually or automatically), you have to be aware that this affects the LPAR when it is activated again later: the resources are then reassigned, and a worse (but possibly also a better) Memory Affinity Score can result.

Conversely, before activating a new LPAR, you can improve the chance of a high Memory Affinity Score for the new LPAR by releasing the resources of inactive LPARs first.
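A minimal sketch of how such a cleanup could look, using only HMC commands already shown (lssyscfg, lshwres, chhwres) and run from a Linux host via ssh. The HMC name, managed system name and hscroot access are assumptions that must be adapted; removing the entire memory of an inactive LPAR releases all of its resources (see the article Resources of not activated LPARs below):

#! /bin/ksh
# Sketch: release the resources of all 'Not Activated' LPARs of a managed system.
# Assumptions: ssh key access to the HMC as user hscroot; HMC and managed
# system names are examples and must be adapted.
HMC=hmc01
MS=ms09

ssh hscroot@$HMC "lssyscfg -r lpar -m $MS -F name,state,resource_config" | \
while IFS=, read name state resource_config
do
    if [ "$state" = "Not Activated" ] && [ "$resource_config" = "1" ]
    then
        # current memory of the inactive LPAR in MB
        mem=$( ssh -n hscroot@$HMC "lshwres -r mem -m $MS --level lpar --filter lpar_names=$name -F curr_mem" )
        print "releasing ${mem} MB and all other resources of $name"
        # removing the entire memory of an inactive LPAR releases all of its resources
        ssh -n hscroot@$HMC "chhwres -m $MS -r mem -o r -p $name -q $mem"
    fi
done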

(Note: resource distribution can be changed and improved at runtime using the Dynamic Platform Optimizer DPO. DPO is supported as of POWER8.)
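To get an overview of the current Memory Affinity Scores of all LPARs of a managed system (for example, to decide whether running DPO is worthwhile), the --filter option of lsmemopt can simply be omitted; a sketch on the HMC command line:

hmc01: lsmemopt -m ms09 -r lpar -o currscore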

 

Resources of not activated LPARs

When an LPAR is shut down, resources such as processors, memory, and I/O slots are not automatically released by the LPAR. The resources remain assigned to the LPAR and are reused on the next activation (with the current configuration).

This article shows how such resources are released automatically and how resources of an inactive LPAR can be released manually if desired.

(Note: In the example output, we use version 1.4 of the LPAR tool, but in all cases we show the underlying commands on the HMC command line, so you can try everything without using the LPAR tool.)

The example LPAR lpar1 was shut down, but currently still occupies 100 GB of memory:

linux $ lpar status lpar1
NAME   LPAR_ID  LPAR_ENV  STATE          PROFILE   SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
lpar1  39       aixlinux  Not Activated  standard  0     inactive  1      0.2         102400  Unknown
linux $

The output above was generated with the following commands, executed on the corresponding HMC hmc01:

hmc01: lssyscfg -r lpar -m ms09 --filter lpar_names=lpar1
hmc01: lshwres -r mem -m ms09 --level lpar --filter lpar_names=lpar1
hmc01: lshwres -r proc -m ms09 --level lpar --filter lpar_names=lpar1

The resource_config attribute of an LPAR indicates whether the LPAR has currently allocated resources (resource_config=1) or not (resource_config=0):

linux $ lpar status -F resource_config lpar1
1
linux $

Or on the HMC command line:

hmc01: lssyscfg -r lpar -m ms09 --filter lpar_names=lpar1 -F resource_config

The resources allocated by a not activated LPAR can be released in two different ways:

  1. Automatic: The resources are needed by another LPAR, e.g. because memory is expanded dynamically or an LPAR is activated for which there are not enough free resources available. In this case, resources are automatically removed from a not activated LPAR. We show this below with an example.
  2. Manual: The allocated resources are explicitly released by the administrator. This is also shown below in an example.

First we show an example in which resources are automatically taken away from a not activated LPAR.

The managed system ms09 currently has about 36 GB free memory:

linux $ ms lsmem ms09
NAME  INSTALLED  FIRMWARE  CONFIGURABLE  AVAIL  MEM_REGION_SIZE
ms09  786432     33792     786432        36352  256
linux $

HMC command line:

hmc01: lshwres -r mem -m ms09 --level sys

We start an LPAR (lpar2) which was configured with 100 GB of RAM. The managed system has only 36 GB of free memory and is therefore forced to take resources away from inactive LPARs in order to provide the required 100 GB. We start lpar2 with the profile standard and look at the memory situation:

linux $ lpar activate -b sms -p standard lpar2
linux $

HMC command line:

hmc01: chsysstate -m ms09 -r lpar -o on -n lpar2 -b sms -f standard

Overview of the memory situation of lpar1 and lpar2:

linux $ lpar status lpar\*
NAME   LPAR_ID  LPAR_ENV  STATE          PROFILE   SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
lpar1  4        aixlinux  Not Activated  standard  0     inactive  1      0.2         60160   Unknown
lpar2  8        aixlinux  Open Firmware  standard  0     inactive  1      0.2         102400  Unknown
linux $ ms lsmem ms09
NAME  INSTALLED  FIRMWARE  CONFIGURABLE  AVAIL  MEM_REGION_SIZE
ms09  786432     35584     786432        0      256
linux $

HMC command line:

hmc01: lssyscfg -r lpar -m ms09
hmc01: lshwres -r mem -m ms09 --level lpar
hmc01: lshwres -r proc -m ms09 --level lpar
hmc01: lshwres -r mem -m ms09 --level sys

The LPAR lpar2 has 100 GB of RAM, the managed system has no more free memory, and the memory allocated by LPAR lpar1 has been reduced to about 60 GB. Allocated resources of not activated LPARs are automatically released and assigned to other LPARs when needed.

But you can of course also release the resources manually. This is also shown briefly here. We are reducing the memory of LPAR lpar1 by 20 GB:

linux $ lpar -d rmmem lpar1 20480
linux $

HMC command line:

hmc01: chhwres -m ms09 -r mem  -o r -p lpar1 -q 20480

As stated, the allocated memory has been reduced by 20 GB:

linux $ lpar status lpar\*
NAME   LPAR_ID  LPAR_ENV  STATE          PROFILE   SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
lpar1  4        aixlinux  Not Activated  standard  0     inactive  1      0.2         39680   Unknown
lpar2  8        aixlinux  Open Firmware  standard  0     inactive  1      0.2         102400  Unknown
linux $ ms lsmem ms09
NAME  INSTALLED  FIRMWARE  CONFIGURABLE  AVAIL  MEM_REGION_SIZE
ms09  786432     35584     786432        20480  256
linux $

HMC command line:

hmc01: lssyscfg -r lpar -m ms09
hmc01: lshwres -r mem -m ms09 --level lpar
hmc01: lshwres -r proc -m ms09 --level lpar
hmc01: lshwres -r mem -m ms09 --level sys

The 20 GB are immediately available to the managed system as free memory. If you remove the entire memory or all processors (or processor units), then all resources of an inactive LPAR are released:

linux $ lpar -d rmmem lpar1 39680
linux $

HMC command line:

hmc01: chhwres -m ms09 -r mem  -o r -p lpar1 -q 39680

Here is the resulting memory situation:

linux $ lpar status lpar\*
NAME   LPAR_ID  LPAR_ENV  STATE          PROFILE   SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
lpar1  4        aixlinux  Not Activated  standard  0     inactive  0      0.0         0       Unknown
lpar2  8        aixlinux  Open Firmware  standard  0     inactive  1      0.2         102400  Unknown
linux $ ms lsmem ms09
NAME  INSTALLED  FIRMWARE  CONFIGURABLE  AVAIL  MEM_REGION_SIZE
ms09  786432     31232     786432        64512  256
linux $

HMC command line:

hmc01: lssyscfg -r lpar -m ms09
hmc01: lshwres -r mem -m ms09 --level lpar
hmc01: lshwres -r proc -m ms09 --level lpar
hmc01: lshwres -r mem -m ms09 --level sys

The LPAR lpar1 now has 0 processors, 0.0 processor units and 0 MB of memory! In addition, the resource_config attribute now has the value 0, which indicates that the LPAR no longer has any resources configured!

linux $ lpar status -F resource_config lpar1
0
linux $

HMC command line:

hmc01: lssyscfg -r lpar -m ms09 --filter lpar_names=lpar1 -F resource_config

Finally, the question arises: why should you release resources manually at all if they are automatically released by the managed system when needed?

We will answer this question in a second article.

 

Did you know that state and configuration change information is available on the HMC for about 2 months?

Status and configuration changes of LPARs and managed systems are stored on the HMCs for about 2 months. This can be used to find out when a managed system was shut down, when a service processor failover took place, or when the memory of an LPAR was expanded, at least as long as the event is no more than 2 months old.

The status changes of a managed system can be listed with the command “lslparutil -r sys -m <managed-system> -s h --startyear 1970 --filter event_types=state_change“, or alternatively with the LPAR tool command “ms history <managed-system>“.

linux $ ms history ms04
TIME                  PRIMARY_STATE         DETAILED_STATE
03/14/2019 08:45:13   Started               None
03/14/2019 08:36:52   Not Available         Unknown
02/17/2019 01:51:55   Started               None
02/17/2019 01:44:00   Not Available         Unknown
02/12/2019 09:32:57   Started               None
02/12/2019 09:28:02   Started               Service Processor Failover
02/12/2019 09:27:07   Started               None
02/12/2019 09:24:42   Standby               None
02/12/2019 09:21:25   Starting              None
02/12/2019 09:22:59   Stopped               None
02/12/2019 09:21:58   Not Available         Unknown
02/12/2019 09:09:45   Stopped               None
02/12/2019 09:07:53   Stopping              None
linux $
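If, for example, you want to know whether a service processor failover has occurred on any of several managed systems, the output of ms history can simply be filtered; a small sketch (the managed system names are passed as arguments, the grep pattern is the DETAILED_STATE shown above):

#! /bin/ksh
# Sketch: search the state-change history of the given managed systems
# for service processor failovers.
for ms in "$@"
do
    print "== $ms =="
    ms history $ms | grep "Service Processor Failover"
done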

Configuration changes (processor, memory) of a managed system can be displayed with “lslparutil -r sys -m <managed-system> -s h --startyear 1970 --filter event_types=config_change“, or alternatively again with the LPAR tool:

linux $ ms history -c ms02
                                PROCUNITS             MEMORY
TIME                  CONFIGURABLE  AVAILABLE  CONFIGURABLE  AVAILABLE  FIRMWARE
04/16/2019 12:15:51      20.0          5.05       1048576       249344     25856
04/11/2019 11:17:39      20.0          5.25       1048576       253696     25600
04/02/2019 13:24:35      20.0          4.85       1048576       249344     25856
03/29/2019 14:29:14      20.0          5.25       1048576       253696     25600
03/15/2019 15:37:08      20.0          4.85       1048576       249344     25856
03/15/2019 11:36:57      20.0          4.95       1048576       249344     25856
...
linux $

The same information can also be displayed for LPARs.

The last status changes of an LPAR can be listed with “lpar history <lpar>“:

linux $ lpar history lpar02
TIME                  PRIMARY_STATE         DETAILED_STATE
04/17/2019 05:42:43   Started               None
04/17/2019 05:41:24   Waiting For Input     Open Firmware
04/16/2019 12:01:54   Started               None
04/16/2019 12:01:29   Stopped               None
02/15/2019 11:30:48   Stopped               None
02/01/2019 12:23:34   Not Available         Unknown
02/01/2019 12:22:50   Relocating            None
...

This corresponds to the command “lslparutil -r lpar -m ms03 -s h --startyear 1970 --filter event_types=state_change,lpar_names=lpar02“ on the HMC command line.

From the output it can be seen that the LPAR was relocated using LPM, was stopped and restarted, and was in Open Firmware mode.

And finally, you can look at the last configuration changes of an LPAR using the HMC CLI command “lslparutil -r lpar -m ms03 -s h --startyear 1970 --filter event_types=config_change,lpar_names=lpar02“. The output of the LPAR tool is a bit clearer:

linux $ lpar history -c lpar02
TIME                  PROC_MODE  PROCS  PROCUNITS  SHARING  UNCAP_WEIGHT  PROCPOOL         MEM_MODE  MEM
04/23/2019 18:49:43   shared    1      0.7        uncap    10          DefaultPool      ded       4096
04/23/2019 18:49:17   shared    1      0.7        uncap    5           DefaultPool      ded       4096
04/23/2019 18:48:44   shared    1      0.3        uncap    5           DefaultPool      ded       4096
04/09/2019 08:04:25   shared    1      0.3        uncap    5           DefaultPool      ded       3072
03/14/2019 12:37:32   shared    1      0.1        uncap    5           DefaultPool      ded       3072
02/26/2019 09:34:28   shared    1      0.1        uncap    5           DefaultPool      ded       3072
02/20/2019 06:51:57   shared    1      0.3        uncap    5           DefaultPool      ded       3072
01/31/2019 08:12:58   shared    1      0.3        uncap    5           DefaultPool      ded       3072
..

From the output you can see that the number of processing units was changed several times, the uncapped weight was changed, and the memory was expanded.
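If you are only interested in one type of change, the output can be filtered further. A small sketch that only prints the lines on which the memory configuration changed compared to the line above (assuming MEM remains the last column of the output):

linux $ lpar history -c lpar02 | awk 'NR == 1 { print; next } $NF != last { print } { last = $NF }'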

Changes of the last two months are available at any time!

We want your feedback!

The new PowerCampus “LPAR tool” is available for download! Extensively revised and written in C++. It supports output in various formats: JSON and YAML!

The first 100 feedbacks get two licenses (for 2 LPARs) for free! Forever!

So download it and give feedback: just send an e-mail to info@powercampus.de!

The integrated test license supports one HMC and two complete managed systems without further registration! For an extended trial version for 4 HMCs and an unlimited number of managed systems, just send an e-mail to info@powercampus.de.

Download “LPAR tool”: https://powercampus.de/en/download-2/

Which FC port is connected to which SAN fabric?

In larger environments with many managed systems and multiple SAN fabrics, it is not always clear which SAN fabric an FC port belongs to, despite good documentation. In many cases, the hardware is far away from the screen, possibly in a different building or at a geographically distant location, so you cannot simply check the cabling on site.

This blog post will show you how to use Live Partition Mobility (LPM) to find all the FC ports that belong to a given SAN fabric.

We use the LPAR tool for the sake of simplicity, but you can also work with commands from the HMC CLI without the LPAR tool, so please continue reading even if the LPAR tool is not available!

In the following, we have named our SAN fabrics “Fabric1” and “Fabric2.” However, the procedure described below can be used with any number of SAN fabrics.

Since LPM is to be used, we first need an LPAR. We create the LPAR on one of our managed systems (ms09) with the LPAR tool:

$ lpar -m ms09 create fabric1
Creating LPAR fabric1:
done
Register LPAR
done
$

Of course you can also use the HMC GUI or the HMC CLI to create the LPAR. We named the new LPAR after our SAN fabric: “fabric1“. Any other name would do just as well!

Next, our LPAR needs a virtual FC adapter mapped to an FC port of fabric “Fabric1“:

$ lpar -p standard addfc fabric1 10 ms09-vio1
fabric1 10 ms09-vio1 20
$

The LPAR tool has selected slot 20 for the VFC server adapter on the VIOS ms09-vio1 and created both the client adapter and the server adapter. Of course, client and server adapters can be created in exactly the same way via the HMC GUI or the HMC CLI. Since the LPAR is not active, the option ‘-p standard‘ specifies that only the profile should be adjusted.

To map the VFC server adapter to a physical FC port, we need the vfchost adapter number on the VIOS ms09-vio1:

$ vios npiv ms09-vio1
VIOS       ADAPT NAME  SLOT  CLIENT OS      ADAPT   STATUS        PORTS
…
ms09-vio1  vfchost2    C20   (3)    unknown  -     NOT_LOGGED_IN  0
…
$

In slot 20 we have vfchost2, so this must now be mapped to an FC port of fabric “Fabric1“. We map it to the FC port fcs8, which we know belongs to fabric “Fabric1“. If we are wrong, we will see this shortly.
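The mapping itself can be done directly on the virtual I/O server; one possible way (as user padmin, with the adapter and port names from the output above) is the vfcmap command, followed by lsmap to check the result:

ms09-vio1 $ vfcmap -vadapter vfchost2 -fcp fcs8
ms09-vio1 $ lsmap -vadapter vfchost2 -npiv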

Let’s take a look at the WWPNs for the virtual FC Client Adapter:

$ lpar -p standard vslots fabric1
SLOT  REQ  TYPE           DATA
0     yes  serial/server  remote: (any)/any hmc=1
1     yes  serial/server  remote: (any)/any hmc=1
10    no   fc/client      remote: ms09-vio1(1)/20 c050760XXXXX00b0,c050760XXXXX00b1
$

Equipped with the WWPNs, we now ask our storage colleagues to create a small LUN for these WWPNs, which should only be visible in the fabric “Fabric1“. After the storage colleagues have created the LUN and adjusted the zoning accordingly, we activate our new LPAR in OpenFirmware mode and open a console:

$ lpar activate -p standard -b of fabric1

$ lpar console fabric1

Open in progress 

Open Completed.

IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
...

          1 = SMS Menu                          5 = Default Boot List
          8 = Open Firmware Prompt              6 = Stored Boot List

     Memory      Keyboard     Network     SCSI     Speaker  ok
0 >

Of course, this is also possible without problems with GUI or HMC CLI.

In OpenFirmware mode we start ioinfo and check if the small LUN is visible. If it is not visible, then the FC port fcs8 does not belong to the right fabric!

0 > ioinfo

!!! IOINFO: FOR IBM INTERNAL USE ONLY !!!
This tool gives you information about SCSI,IDE,SATA,SAS,and USB devices attached to the system

Select a tool from the following

1. SCSIINFO
2. IDEINFO
3. SATAINFO
4. SASINFO
5. USBINFO
6. FCINFO
7. VSCSIINFO

q - quit/exit

==> 6

FCINFO Main Menu
Select a FC Node from the following list:
 # Location Code           Pathname
-------------------------------------------------
 1. U9117.MMC.XXXXXXX7-V10-C10-T1  /vdevice/vfc-client@3000000a

q - Quit/Exit

==> 1

FC Node Menu
FC Node String: /vdevice/vfc-client@3000000a
FC Node WorldWidePortName: c050760XXXXXX0016
------------------------------------------
1. List Attached FC Devices
2. Select a FC Device
3. Enable/Disable FC Adapter Debug flags

q - Quit/Exit

==> 1

1. 500507680YYYYYYY,0 - 10240 MB Disk drive

Hit a key to continue...

FC Node Menu
FC Node String: /vdevice/vfc-client@3000000a
FC Node WorldWidePortName: c050760XXXXXX0016
------------------------------------------
1. List Attached FC Devices
2. Select a FC Device
3. Enable/Disable FC Adapter Debug flags

q - Quit/Exit

==> q

The LUN appears; the WWPN 500507680YYYYYYY is that of the corresponding storage port, which is unique worldwide and can only be seen in the fabric “Fabric1“!

Activating the LPAR in OpenFirmware mode served two purposes: first, to verify that the LUN is visible and our mapping to fcs8 was correct; second, the system now knows which WWPNs must be found during an LPM operation so that the LPAR can be moved!

We deactivate the LPAR again.

$ lpar shutdown -f fabric1
$

If we now perform an LPM validation on the inactive LPAR, the validation can only succeed on a managed system that has a virtual I/O server with a connection to the fabric “Fabric1“. Using a for loop, let us try that for some managed systems:

$ for ms in ms10 ms11 ms12 ms13 ms14 ms15 ms16 ms17 ms18 ms19
do
echo $ms
lpar validate fabric1 $ms >/dev/null 2>&1
if [ $? -eq 0 ]
then
   echo connected
else
   echo not connected
fi
done

The command used for the validation on the HMC CLI is migrlpar.

Since we are not interested in validation messages, we redirect all validation messages to /dev/null.

Here’s the output of the for loop:

ms10
connected
ms11
connected
ms12
connected
ms13
connected
ms14
connected
ms15
connected
ms16
connected
ms17
connected
ms18
connected
ms19
connected

Obviously, all managed systems are connected to fabric “Fabric1“. That’s not very surprising, because they were cabled exactly like that.

It would be more interesting to know which FC ports on the managed systems (virtual I/O servers) are connected to the fabric “Fabric1“. To do this, we need a list of virtual I/O servers for each managed system and the list of NPIV-capable FC ports for each virtual I/O server.

The list of virtual I/O servers can be obtained easily with the following command:

$ vios -m ms11 list
ms11-vio1
ms11-vio2
$

On the HMC CLI you can use the command: lssyscfg -r lpar -m ms11 -F name,lpar_env.

The NPIV-capable ports can be found out with the following command:

$ vios lsnports ms11-vio1
ms11-vio1       name             physloc                        fabric tports aports swwpns  awwpns
ms11-vio1       fcs0             U78AA.001.XXXXXXX-P1-C5-T1          1     64     60   2048    1926
ms11-vio1       fcs1             U78AA.001.XXXXXXX-P1-C5-T2          1     64     60   2048    2023
...
$

The command lsnports is used on the virtual I/O server. Of course you can do this without the LPAR tool.
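If you only need the port names, for example to loop over them as in the script further below, a small filter on this output is sufficient; a sketch (the second column contains the port name, the first line is the header):

linux $ vios lsnports ms11-vio1 | awk 'NR > 1 { print $2 }'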

With the LPM validation (and of course also with the migration), you can specify which FC port on the target system is to be used. We show this here with two examples:

$ lpar validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs0 >/dev/null 2>&1
$ echo $?
0
$ lpar validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs1 >/dev/null 2>&1
$ echo $?
1
$

The validation with target ms10-vio1 and fcs0 was successful, i.e. this FC port is attached to fabric “Fabric1“. The validation with target ms10-vio1 and fcs1 was not successful, i.e. that port is not connected to the fabric “Fabric1“.

Here is the command that must be called on the HMC, if the LPAR tool is not used:

$ lpar -v validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs0
hmc02: migrlpar -m ms09 -o v -p fabric1 -t ms10 -v -d 5 -i 'virtual_fc_mappings=10/ms10-vio1///fcs0'
$

To find out all the FC ports that are connected to the fabric “Fabric1“, we need to loop over the managed systems to be checked; for each managed system we then need a loop over all VIOS of that managed system, and finally a loop over the FC ports of each VIOS, performing an LPM validation for each port.

We have put this together in the following script. To keep it from getting too long, we have omitted some checks:

$ cat bin/fabric_ports
#! /bin/ksh
# Copyright © 2018, 2019 by PowerCampus 01 GmbH

LPAR=fabric1

STATE=$( lpar prop -F state $LPAR | tail -1 )

print "LPAR: $LPAR"
print "STATE: $STATE"

if [ "$STATE" != "Not Activated" ]
then
    print "ERROR: $LPAR must be in state 'Not Activated'"
    exit 1
fi

fcsCount=0
fcsSameFabricCount=0

# loop over all managed systems given as arguments
for ms in $@
do
    print "MS: $ms"
    viosList=$( vios -m $ms list )

    for vios in $viosList
    do
        rmc_state=$( lpar -m $ms prop -F rmc_state $vios | tail -1 )
        if [ "$rmc_state" = "active" ]
        then
            # collect the NPIV-capable FC ports of the VIOS (skip the header line)
            fcList=
            vios -m $ms lsnports $vios 2>/dev/null | \
            while read vio fcport rest
            do
                if [ "$fcport" != "name" ]
                then
                    fcList="${fcList} $fcport"
                fi
            done

            # perform an LPM validation for each FC port of the VIOS
            for fcport in $fcList
            do
                print -n "${vios}: ${fcport}: "
                lpar validate $LPAR $ms virtual_fc_mappings=10/${vios}///${fcport} </dev/null >/dev/null 2>&1
                case "$?" in
                0)
                    print "yes"
                    fcsSameFabricCount=$( expr $fcsSameFabricCount + 1 )
                    ;;
                *) print "no" ;;
                esac
                fcsCount=$( expr $fcsCount + 1 )
            done
        else
            print "${vios}: RMC not active"
        fi
    done
done

print "${fcsCount} FC-ports investigated"
print "${fcsSameFabricCount} FC-ports in same fabric"

$

As an illustration we briefly show a run of the script over some managed systems. We start the script with time to see how long it takes:

$ time bin/fabric_ports ms10 ms11 ms12 ms13 ms14 ms15 ms16 ms17 ms18 ms19
LPAR: fabric1
STATE: Not Activated
MS: ms10
ms10-vio3: RMC not active
ms10-vio1: fcs0: yes
ms10-vio1: fcs2: yes
ms10-vio1: fcs4: no
ms10-vio1: fcs6: no
ms10-vio2: fcs0: yes
ms10-vio2: fcs2: yes
ms10-vio2: fcs4: no
ms10-vio2: fcs6: no
MS: ms11
ms11-vio3: RMC not active
ms11-vio1: fcs0: no
ms11-vio1: fcs1: no
ms11-vio1: fcs2: no
ms11-vio1: fcs3: yes
ms11-vio1: fcs4: no
…
ms19-vio2: fcs2: no
ms19-vio2: fcs3: no
ms19-vio2: fcs0: no
ms19-vio2: fcs1: no
ms19-vio2: fcs4: no
ms19-vio2: fcs5: no
132 FC-ports investigated
17 FC-ports in same fabric

real       2m33.978s
user      0m4.597s
sys       0m8.137s
$

In about 150 seconds, 132 FC ports were examined (LPM validations performed). This means a validation took about 1 second on average.
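At roughly one second per validation, the runtime grows linearly with the number of FC ports. If that becomes an issue, the validations of one VIOS could be run in parallel. A simplified sketch of the inner loop of the script (without the counters); whether the HMC copes well with many parallel validations should be tested carefully first:

for fcport in $fcList
do
    # run each validation in the background and print the result
    ( if lpar validate $LPAR $ms virtual_fc_mappings=10/${vios}///${fcport} </dev/null >/dev/null 2>&1
      then
          print "${vios}: ${fcport}: yes"
      else
          print "${vios}: ${fcport}: no"
      fi ) &
done
wait    # wait for all validations of this VIOS to finish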

We have found all the FC ports that are connected to the fabric “Fabric1“.

Of course, this can be done analogously for other fabrics.

A final note: not all ports above are cabled!

HMC Error #25B810

Managing and administering service events is often neglected on HMCs. In this article we use a concrete example, an error with reference code #25B810, to show how to handle such events. Of course, our LPAR tool is used here.

First, let’s find all open service events:

$ hmc lssvcevents
TIME                 PROBLEM  PMH   HMC     REFCODE   STATE     STATUS  CALLHOME  FAILING_MTMS      TEXT                                         
02/13/2019 23:02:31  7        -     hmc01   #25B810   approved  Open    false     8231-E2B/06A084P  File System alert event occurred...          
02/16/2019 16:14:28  8        -     hmc01   B3030001  approved  Open    false     8231-E2B/06A084P  ACT04284I A Management Console connect failed
02/11/2019 16:12:43  37       -     hmc02   B3030001  approved  Open    false     8231-E2B/06A084P  ACT04284I A Management Console connect failed
02/11/2019 17:43:19  38       -     hmc02   B3030001  approved  Open    false     8231-E2B/06A084P  ACT04283I A connection to a FSP,BPA...  
$

This article is about the problem with number 7. The problem was recorded on 02/13/2019 at 23:02:31 by the HMC named hmc01. The reference code is #25B810. The problem has the status “Open“; a call home has not been triggered. The problem concerns the managed system with serial number 06A084P, a Power 710 (8231-E2B). The beginning of the error message can be found in the last column.

First, let’s look at the whole record of the problem by specifying the problem number and HMC:

$ hmc lssvcevents -p 7 hmc01
analyzing_hmc: hmc01
analyzing_mtms: 7042-CR8/21009CD
approval_state: approved
callhome_intended: false
created_time: 02/14/2019 04:11:31
duplicate_count: 0
eed_transmitted: false
enclosure_mtms: 8231-E2B/06A084P
event_severity: 0
event_time: 02/13/2019 23:02:31
failing_mtms: 8231-E2B/06A084P
files: iqyymrge.log/Consolidated system platform log,
iqyvpd.dat/Configuration information associated with the HMC,
actzuict.dat/Tasks performed,
iqyvpdc.dat/Configuration information associated with the HMC,
problems.xml/XML version of the problems opened on the HMC for the HMC and the server,
refcode.dat/list of reference codes associated with the hmc,
iqyylog.log/HMC firmware log information,
PMap.eed/Partition map, obtained from 'lshsc -w -c machine',
hmc.eed/HMC code level obtained from 'lshmc -V' and connection information obtained from 'lssysconn -r all',
sys.eed/Output of various system configuration commands,
8231-E2B_06A084P.VPD.xml/Configuration information associated with the managed system
first_time: 02/14/2019 04:11:31
last_time: 02/14/2019 04:11:31
problem_num: 7
refcode: #25B810
reporting_mtms: 8231-E2B/06A084P
reporting_name: p710
status: Open
sys_mtms: 8231-E2B/06A084P
sys_name: p710
sys_refcode: #25B810
text: File System alert event occurred on /home/ios/CM/DB. Free space is less than 10%, or there was an error querying the filesystem.

At the end of the record we find the unabbreviated error message. It is about a file system that has less than 10% free space. The path “/home/ios/CM/DB” points to a virtual I/O server. The virtual I/O servers in question are located on the managed system with the serial number 06A084P:

$ ms show 06A084P
NAME  SERIAL_NUM  TYPE_MODEL  HMCS        
p710  06A084P     8231-E2B    hmc01,hmc02
$

It is the managed system named p710. The managed system includes the following virtual I/O servers:

$ vios -m p710 show
LPAR     ID  SERIAL    LPAR_ENV   MS    HMCs
aixvio1  1   06A084P1  vioserver  p710  hmc01,hmc02
$

A check of the error report on the Virtual I/O Server aixvio1 shows the following entry:

LABEL:          VIO_ALERT_EVENT
IDENTIFIER:     0FD4CF1A

Date/Time:       Wed Feb 13 22:02:31 CST 2019
Sequence Number: 98
Machine Id:      00F6A0844C00
Node Id:         aixvio1
Class:           O
Type:            INFO
WPAR:            Global
Resource Name:   /home/ios/CM/DB 

Description
Informational Message

Probable Causes
Asynchronous Event Occurred

Failure Causes
PROCESSOR

        Recommended Actions
        Check Detail Data

Detail Data
Alert Event Message
25b810
A File System alert event occurred on /home/ios/CM/DB. Free space is less than 10%, or there was an error querying the filesystem.

Diagnostic Analysis
Diagnostic Log sequence number: 19
Resource tested:        sysplanar0
Menu Number:            25B810
Description:


 File System alert event occurred on /home/ios/CM/DB. Free space is less than 10%, or there was an error querying the filesystem.

A quick check of the file system shows that the problem has already been resolved, and there is enough space:

$ df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
...
/dev/hd1           0.25      0.16   35%      111     1% /home
...
$ 

So the problem does not exist anymore. Therefore, the service event on the HMC should also be closed, which we do now:

$ hmc chsvcevent -o close -p 7 hmc01
$

For review we list the open service events:

$ hmc lssvcevents 
TIME                 PROBLEM  PMH   HMC     REFCODE   STATE     STATUS  CALLHOME  FAILING_MTMS      TEXT                                         
02/16/2019 16:14:28  8        -     hmc01   B3030001  approved  Open    false     8231-E2B/06A084P  ACT04284I A Management Console connect failed
02/11/2019 16:12:43  37       -     machmc  B3030001  approved  Open    false     8231-E2B/06A084P  ACT04284I A Management Console connect failed
02/11/2019 17:43:19  38       -     machmc  B3030001  approved  Open    false     8231-E2B/06A084P  ACT04283I A connection to a FSP,BPA...   
$ 

The event with the number 7 was closed successfully.

Service events are easy to manage with the LPAR tool!
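One way to avoid forgetting open service events in the future is to script the overview shown above, for example as a daily job that mails the list of open events; a small sketch (the mail address is an example):

linux $ hmc lssvcevents | mailx -s "open HMC service events" admin@example.com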

LPAR tool is available now

Starting from 5th November 2018, the LPAR tool is officially available.

A version for Linux can be downloaded from the download area (versions for AIX and MacOS will follow soon). A user guide is in preparation.

To test the LPAR tool free of charge:

  1. Download the current version of the LPAR tool from the download area.
  2. Request a trial license.
  3. Install the LPAR tool.

Have fun testing the LPAR tool.
