We want your feedback!

The new PowerCampus “LPAR tool” is available for download! It has been heavily revised and rewritten in C++, and it supports output in several formats, including JSON and YAML!

The first 100 users who send us feedback get two licenses (for 2 LPARs) for free! Forever!

So, download it and give feedback; just send an e-mail to info@powercampus.de!

The integrated test license supports one HMC and two complete managed systems without further registration! For an extended trial version with 4 HMCs and an unlimited number of managed systems, just send an e-mail to info@powercampus.de.

Download “LPAR tool”: https://powercampus.de/en/download-2/

LPAR console using Virtual I/O Server

Typically, a console for an LPAR is launched via an HMC, using either the GUI or the CLI (vtmenu or mkvterm). This makes the console dependent on the availability of the HMC: during an HMC update, or when there are problems with the HMC, you may not be able to connect to an LPAR console.
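
For comparison, a console is normally opened from the HMC CLI like this (a sketch; the managed system and LPAR names are the ones used in the examples below):

hmc01 $ mkvterm -m ms02 -p aix02

The session is closed again with “~.” or, from a second session, with rmvterm -m ms02 -p aix02.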

A relatively unknown alternative is the ability to open a console to an LPAR via a virtual I/O server. If the HMC is not available, a console can still be started via the virtual I/O server. No configuration is required on the client LPAR! By default, each client LPAR has 2 virtual serial server adapters (slots 0 and 1). If you configure an associated client adapter on a virtual I/O server, you can use it for a console connection.

On the virtual I/O server, only an unused virtual slot is needed (here: slot 45). The client LPAR has the LPAR ID 39. The virtual serial client adapter can be created with the following command:

hmc01 $ chhwres -m ms02 -r virtualio --rsubtype serial -o a -p ms02-vio1 -s 45 -a adapter_type=client,remote_lpar_name=aix02,remote_slot_num=0,supports_hmc=0
hmc01 $
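
To double-check the new adapter from the HMC CLI, you can list the virtual serial adapters of the virtual I/O server; a sketch, assuming the same names as above:

hmc01 $ lshwres -r virtualio --rsubtype serial --level lpar -m ms02 --filter lpar_names=ms02-vio1
...
hmc01 $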

Now you can always start a console for the LPAR via the virtual I/O server:

ms02-vio1 :/home/padmin> mkvt -id 39
AIX Version 7
Copyright IBM Corporation, 1982, 2018.
Console login: root
root's Password: XXXXXX


aix02  AIX 7.2         powerpc


Last unsuccessful login: Mon Mar 18 23:14:26 2019 on ssh from N.N.N.N
Last login: Wed Mar 27 20:19:22 2019 on /dev/pts/0 from M.M.M.M
[YOU HAVE NEW MAIL]
aix02:/root> hostname
aix02
aix02:/root>

The command mkvt on the virtual I/O server corresponds to the command mkvterm on the HMC. The desired partition must be specified by its LPAR ID. Terminating the console works as usual with “~.“, or, if you are logged in to the virtual I/O server via SSH, with “~~.“.

Alternatively, you can also end a console session with the command rmvt:

ms02-vio1:/home/padmin> rmvt -id 39
ms02-vio1:/home/padmin>

The following message appears in the console and the console is closed:

Virtual terminal has been disconnected.

$

With the LPAR tool, the console can of course be set up even more easily. The virtual serial adapter on the virtual I/O server can be created with the command “lpar addserial“; a manual login to the HMC is not necessary for this:

$ lpar addserial -c ms02-vio1 45 aix02 0
$

The “-c” option means “create client adapter”. The command also creates the adapter in the profile. The success of the action can be checked with “lpar vslots“, which shows all virtual adapters of an LPAR:

$ lpar vslots ms02-vio1
SLOT  REQ  TYPE           DATA
0     1    serial/server  remote: -(any)/any status=unavailable hmc=1
1     1    serial/server  remote: -(any)/any status=unavailable hmc=1
2     0    eth            PVID=1 VLANS=- XXXXXXXXXXXX ETHERNET0
3     1    eth            TRUNK(1) IEEE PVID=1 VLANS=201 XXXXXXXXXXXXX ETHERNET0
...
45    0    serial/client  remote: aix02(39)/0 status=unavailable hmc=0
...
$

Starting the console then proceeds as usual by logging in as padmin on the virtual I/O server and the command mkvt.

Caution: A console session opened through the virtual I/O server should always be terminated when it is no longer needed, because it cannot be terminated from the HMC! Here is an attempt to start a console via the HMC while the console is already active via the virtual I/O server:

$ lpar console aix02

Open in progress 

A terminal session is already open for this partition. 
Only one open session is allowed for a partition. 
Exiting.... 
Attempts to open the session failed. Please close the terminal and retry the open at a later time. 
If the problem persists, Please contact IBM support. 
Received end of file, Exiting.
Connection to X.X.X.X closed.
$

Even rmvterm does not help:

$ lpar rmvterm aix02
/bin/stty: standard input: Inappropriate ioctl for device
$

Conversely, no console can be started using the virtual I/O server if a console is active using the HMC:

ms02-vio1:/home/padmin> mkvt -id 39
Virtual terminal is already connected.

ms02-vio1:/home/padmin>

So always make sure that the console is terminated.


Error Message from Crypto Library when Logging in

On some systems we have recently seen syslog error messages of the following type when logging in with ssh (or also when using /bin/su):

Mar 15 10:43:47 aix01 auth|security:err|error sshd[14024884]: Crypto library (CLiC) error: Wrong object type

Mar 15 11:08:42 aix01 auth|security:err|error su: Crypto library (CLiC) error: Wrong signature

Logging in and the su command both worked without problems. However, the many error messages, one for each login, were annoying.

The reference to the Crypto Library (CLiC), which is actually needed only when EFS is used, was already a hint during the investigation. EFS is not in use on these systems. A check with the command “efskeymgr -V” produced the following:

$ efskeymgr -V
There is no key loaded in the current process.
$

Here, an error message stating that EFS is not activated would have been expected. A look into the directory /var revealed that the directory /var/efs (in which the EFS keys are stored) exists:

$ ls -l /var/efs
total 24
drwx------    2 root     system          256 Apr 25 2017  efs_admin/
-rw-r--r--    1 root     system            0 Apr 25 2017  efsenabled
drwx------   51 root     system         4096 Mar 17 10:40 groups/
drwx------  123 root     system         8192 Mar 17 05:15 users/
$

So EFS had been activated, even though it is not used. To disable EFS, a reboot is actually necessary. However, since EFS is not really used in our case and was probably enabled only by oversight or error, we used the following workaround and simply renamed the /var/efs directory:

$ mv /var/efs /var/efs.orig
$

A short test with the command “efskeymgr -V” shows that, from AIX’s point of view, EFS is no longer active:

$ efskeymgr -V
Problem initializing EFS framework.
Please check EFS is installed and enabled (see efsenable) on you system.
Error was: (EFS was not configured)
$

A test login via ssh confirms that no error message is logged anymore when logging in.

Note: Before renaming the directory, please make sure that EFS is really not in use!
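
One way to double-check this is to look at the extended attributes of the JFS2 file systems, which include an EFS flag; a sketch (output abbreviated):

# lsfs -q /home
...
  (lv size: ..., block size: 4096, ..., EFS: no, ...)
#

If every file system reports “EFS: no” and no user keystores are actually needed, the directory can safely be renamed.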


Which FC port is connected to which SAN fabric?

In larger environments with many managed systems and multiple SAN fabrics, it is not always clear, even with good documentation, which SAN fabric a given FC port belongs to. In many cases the hardware is far away from the administrator’s desk, possibly in a different building or even geographically further away, so you cannot simply check the cabling on site.

This blog post will show you how to use Live Partition Mobility (LPM) to find all the FC ports that belong to a given SAN fabric.

We use the LPAR tool for the sake of simplicity, but you can also work with commands from the HMC CLI without the LPAR tool, so please continue reading even if the LPAR tool is not available!

In the following, we have named our SAN fabrics “Fabric1” and “Fabric2.” However, the procedure described below can be used with any number of SAN fabrics.

Since LPM is to be used, we first need an LPAR. We create the LPAR on one of our managed systems (ms09) with the LPAR tool:

$ lpar -m ms09 create fabric1
Creating LPAR fabric1:
done
Register LPAR
done
$

Of course, you can also use the HMC GUI or the HMC CLI to create the LPAR. We named the new LPAR “fabric1“, after our SAN fabric. Any other name would do just as well!

Next, our LPAR needs a virtual FC adapter mapped to an FC port of fabric “Fabric1“:

$ lpar -p standard addfc fabric1 10 ms09-vio1
fabric1 10 ms09-vio1 20
$

The LPAR tool selected slot 20 for the VFC server adapter on the VIOS ms09-vio1 and created the client adapter as well as the server adapter. Of course, client and server adapters can be created in exactly the same way via the HMC GUI or the HMC CLI. Since the LPAR is not active, the ‘-p standard‘ option specifies that only the profile should be adjusted.

To map the VFC server adapter to a physical FC port, we need the vfchost adapter number on the VIOS ms09-vio1:

$ vios npiv ms09-vio1
VIOS       ADAPT NAME  SLOT  CLIENT OS      ADAPT   STATUS        PORTS
…
ms09-vio1  vfchost2    C20   (3)    unknown  -     NOT_LOGGED_IN  0
…
$

In slot 20 we have the vfchost2, so this must now be mapped to an FC port of fabric “Fabric1“. We map to the FC port fcs8, which we know to belong to fabric “Fabric1“. If we are wrong, we will see this shortly.

Let’s take a look at the WWPNs for the virtual FC Client Adapter:

$ lpar -p standard vslots fabric1
SLOT  REQ  TYPE           DATA
0     yes  serial/server  remote: (any)/any hmc=1
1     yes  serial/server  remote: (any)/any hmc=1
10    no   fc/client      remote: ms09-vio1(1)/20 c050760XXXXX00b0,c050760XXXXX00b1
$

Equipped with the WWPNs, we now ask our storage colleagues to create a small LUN for these WWPNs, which should only be visible in the fabric “Fabric1“. After the storage colleagues have created the LUN and adjusted the zoning accordingly, we activate our new LPAR in OpenFirmware mode and open a console:

$ lpar activate -p standard -b of fabric1

$ lpar console fabric1

Open in progress 

Open Completed.

IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
...

          1 = SMS Menu                          5 = Default Boot List
          8 = Open Firmware Prompt              6 = Stored Boot List

     Memory      Keyboard     Network     SCSI     Speaker  ok
0 >

Of course, this can also be done without problems via the HMC GUI or the HMC CLI.

In OpenFirmware mode we start ioinfo and check if the small LUN is visible. If it is not visible, then the FC port fcs8 does not belong to the right fabric!

0 > ioinfo

!!! IOINFO: FOR IBM INTERNAL USE ONLY !!!
This tool gives you information about SCSI,IDE,SATA,SAS,and USB devices attached to the system

Select a tool from the following

1. SCSIINFO
2. IDEINFO
3. SATAINFO
4. SASINFO
5. USBINFO
6. FCINFO
7. VSCSIINFO

q - quit/exit

==> 6

FCINFO Main Menu
Select a FC Node from the following list:
 # Location Code           Pathname
-------------------------------------------------
 1. U9117.MMC.XXXXXXX7-V10-C10-T1  /vdevice/vfc-client@3000000a

q - Quit/Exit

==> 1

FC Node Menu
FC Node String: /vdevice/vfc-client@3000000a
FC Node WorldWidePortName: c050760XXXXXX0016
------------------------------------------
1. List Attached FC Devices
2. Select a FC Device
3. Enable/Disable FC Adapter Debug flags

q - Quit/Exit

==> 1

1. 500507680YYYYYYY,0 - 10240 MB Disk drive

Hit a key to continue...

FC Node Menu
FC Node String: /vdevice/vfc-client@3000000a
FC Node WorldWidePortName: c050760XXXXXX0016
------------------------------------------
1. List Attached FC Devices
2. Select a FC Device
3. Enable/Disable FC Adapter Debug flags

q - Quit/Exit

==> q

The LUN appears; the WWPN 500507680YYYYYYY belongs to the corresponding storage port, is unique worldwide, and can only be seen in the fabric “Fabric1“!

Activating the LPAR in OpenFirmware mode served two purposes: first, it verified that the LUN is visible and that our mapping to fcs8 was correct; second, the system now knows which WWPNs have to be found during an LPM operation so that the LPAR can be moved.

We deactivate the LPAR again.

$ lpar shutdown -f fabric1
$

If we now perform an LPM validation of the inactive LPAR, the validation can only succeed on a managed system that has a virtual I/O server with a connection to the fabric “Fabric1“. Using a for loop, we try this for a number of managed systems:

$ for ms in ms10 ms11 ms12 ms13 ms14 ms15 ms16 ms17 ms18 ms19
do
echo $ms
lpar validate fabric1 $ms >/dev/null 2>&1
if [ $? -eq 0 ]
then
   echo connected
else
   echo not connected
fi
done

The command to perform the validation on the HMC CLI is migrlpar.

Since we are not interested in validation messages, we redirect all validation messages to /dev/null.

Here’s the output of the for loop:

ms10
connected
ms11
connected
ms12
connected
ms13
connected
ms14
connected
ms15
connected
ms16
connected
ms17
connected
ms18
connected
ms19
connected

Obviously, all managed systems are connected to fabric “Fabric1“. That’s not very surprising, because they were cabled exactly like that.

It would be more interesting to know which FC ports on the managed systems (more precisely, on their virtual I/O servers) are connected to the fabric “Fabric1“. To do this, we need a list of virtual I/O servers for each managed system and the list of NPIV-capable FC ports for each virtual I/O server.

The list of virtual I/O servers can be obtained easily with the following command:

$ vios -m ms11 list
ms11-vio1
ms11-vio2
$

On the HMC CLI you can use the command: lssyscfg -r lpar -m ms11 -F “name lpar_env”.

The NPIV-capable ports can be listed with the following command:

$ vios lsnports ms11-vio1
ms11-vio1       name             physloc                        fabric tports aports swwpns  awwpns
ms11-vio1       fcs0             U78AA.001.XXXXXXX-P1-C5-T1          1     64     60   2048    1926
ms11-vio1       fcs1             U78AA.001.XXXXXXX-P1-C5-T2          1     64     60   2048    2023
...
$

Under the hood, the command lsnports is executed on the virtual I/O server. Of course, you can also run it there directly, without the LPAR tool:
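
A sketch of running it directly on the virtual I/O server (logged in as padmin; output abbreviated):

ms11-vio1:/home/padmin> lsnports
name             physloc                        fabric tports aports swwpns  awwpns
fcs0             U78AA.001.XXXXXXX-P1-C5-T1          1     64     60   2048    1926
...
ms11-vio1:/home/padmin>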

With LPM validation (and of course also with the migration itself), you can specify which FC port on the target system is to be used; we show this here with two examples:

$ lpar validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs0 >/dev/null 2>&1
$ echo $?
0
$ lpar validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs1 >/dev/null 2>&1
$ echo $?
1
$

The validation with target ms10-vio1 and fcs0 was successful, i.e. this FC port is attached to the fabric “Fabric1“. The validation with target ms10-vio1 and fcs1 was not successful, i.e. that port is not connected to the fabric “Fabric1“.

Here is the command that must be called on the HMC, if the LPAR tool is not used:

$ lpar -v validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs0
hmc02: migrlpar -m ms09 -o v -p fabric1 -t ms10 -v -d 5 -i 'virtual_fc_mappings=10/ms10-vio1///fcs0'
$

To find all FC ports that are connected to the fabric “Fabric1“, we loop over the managed systems to be checked; for each managed system we loop over all VIOS of that managed system, and finally we loop over all FC ports of each VIOS, performing an LPM validation for each port.

We have put things together in the following script. To make sure that it does not get too long, we have omitted some checks:

$ cat bin/fabric_ports
#! /bin/ksh
# Copyright © 2018, 2019 by PowerCampus 01 GmbH

LPAR=fabric1

STATE=$( lpar prop -F state $LPAR | tail -1 )

print "LPAR: $LPAR"
print "STATE: $STATE"

if [ "$STATE" != "Not Activated" ]
then
            print "ERROR: $LPAR must be in state 'Not Activated'"
            exit 1
fi

fcsCount=0
fcsSameFabricCount=0

for ms in $@
do
            print "MS: $ms"
            viosList=$( vios -m $ms list )

            for vios in $viosList
            do
                        rmc_state=$( lpar -m $ms prop -F rmc_state $vios | tail -1 )
                        if [ "$rmc_state" = "active" ]
                        then
                                    fcList=
                                    vios -m $ms lsnports $vios 2>/dev/null | \
                                    while read vio fcport rest
                                    do
                                               if [ "$fcport" != "name" ]
                                               then
                                                           fcList="${fcList} $fcport"
                                               fi
                                    done

                                    for fcport in $fcList
                                    do
                                               print -n "${vios}: ${fcport}: "
                                               lpar validate $LPAR $ms virtual_fc_mappings=10/${vios}///${fcport} </dev/null >/dev/null 2>&1
                                               case "$?" in
                                               0)
                                                           print "yes"
                                                           fcsSameFabricCount=$( expr $fcsSameFabricCount + 1 )
                                                           ;;
                                               *) print "no" ;;
                                               esac
                                               fcsCount=$( expr $fcsCount + 1 )
                                    done
                        else
                                    print "${vios}: RMC not active"
                        fi
            done
done

print "${fcsCount} FC-ports investigated"
print "${fcsSameFabricCount} FC-ports in same fabric"

$

As an illustration we briefly show a run of the script over some managed systems. We start the script with time to see how long it takes:

$ time bin/fabric_ports ms10 ms11 ms12 ms13 ms14 ms15 ms16 ms17 ms18 ms19
LPAR: fabric1
STATE: Not Activated
MS: ms10
ms10-vio3: RMC not active
ms10-vio1: fcs0: yes
ms10-vio1: fcs2: yes
ms10-vio1: fcs4: no
ms10-vio1: fcs6: no
ms10-vio2: fcs0: yes
ms10-vio2: fcs2: yes
ms10-vio2: fcs4: no
ms10-vio2: fcs6: no
MS: ms11
ms11-vio3: RMC not active
ms11-vio1: fcs0: no
ms11-vio1: fcs1: no
ms11-vio1: fcs2: no
ms11-vio1: fcs3: yes
ms11-vio1: fcs4: no
…
ms19-vio2: fcs2: no
ms19-vio2: fcs3: no
ms19-vio2: fcs0: no
ms19-vio2: fcs1: no
ms19-vio2: fcs4: no
ms19-vio2: fcs5: no
132 FC-ports investigated
17 FC-ports in same fabric

real       2m33.978s
user      0m4.597s
sys       0m8.137s
$

In about 150 seconds, 132 FC ports were examined (LPM validations performed). This means a validation took about 1 second on average.

We have found all the FC ports that are connected to the fabric “Fabric1“.

Of course, this can be done analogously for other fabrics.

A final note: not all ports above are cabled!

Removal of Host-Key from ~/.ssh/known_hosts

Occasionally, a host key changes on a host, either manually or possibly automatically through an update of OpenSSH. When you then log in to the host in question via ssh, you get the following message:

$ ssh aix01
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
SHA256:xYglDF3cuHCCrxtbFUbpofpmhNs9MiO114vAT4qVX2M.
Please contact your system administrator.
Add correct host key in /home/as/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/as/.ssh/known_hosts:2
RSA host key for aix01 has changed and you have requested strict checking.
Host key verification failed.
$

Many administrators now use vi (or another editor) to remove the entry with the old host key from the known_hosts file. The line number of the corresponding entry is given in the output above: /home/as/.ssh/known_hosts:2 means the entry is in line 2 of the file.

It is much easier to remove the obsolete host key using the ssh-keygen command and the “-R” (remove) option:

$ ssh-keygen -R aix01
# Host aix01 found: line 2
/home/as/.ssh/known_hosts updated.
Original contents retained as /home/as/.ssh/known_hosts.old
$ 

The command creates a backup copy of the file with the extension “.old” and removes the desired entry. This is much easier than using an editor!

If you want to know if a host key for a system already exists in the known_hosts, there is the option “-F” (find) for this purpose:

$ ssh-keygen -F aix02
# Host aix02 found: line 49 
aix02,192.168.178.49 ssh-rsa AAAAB3NzaC1yc2E...
$

The public host key and the line for the system are shown.

Cron jobs are not started anymore

Recently, cron jobs were no longer being started on one of our AIX systems. There was no entry in the error report, and syslog gave no indication of the problem either. In the log of the cron daemon, however, there were a lot of messages:

# cat /var/adm/cron/log
...
! c queue max run limit reached Sat Feb 23 08:49:00 2019
! rescheduling a cron job Sat Feb 23 08:49:00 2019
...

On AIX, the maximum number of concurrently active cron jobs is set to 100 by default. Obviously this limit had been reached on our system. Jobs that cannot be started are rescheduled, by default 60 seconds later. Both values can be configured via the file /var/adm/cron/queuedefs. The value 100 is already quite high, and reaching it usually indicates a problem.
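
If the limit really does need to be raised, the c (crontab) queue can be tuned in this file; a sketch, assuming the standard queuedefs syntax (<queue>.<maxjobs>j<nice>n<wait>w):

# cat /var/adm/cron/queuedefs
c.200j2n90w
#

This example would allow 200 parallel jobs in the c queue, with nice value 2 and a rescheduling wait of 90 seconds; cron reads the file at startup, so it has to be restarted for the change to take effect.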

The PID of the cron daemon is quickly found:

$ ps -ef|grep cron
    root  6684924        1   0   Sep 26      -  8:03 /usr/sbin/cron
$

The currently active cron jobs run as cron‘s child processes. With the option “-T” of the ps command, we can quickly list all children:

$ ps -T 6684924
      PID    TTY  TIME CMD
  6684924      -  8:03 cron
  3276876      -  0:00    |\--perl
  9961588      -  0:00    |    \--mount
 12714002      -  0:07    |        \--nfsmnthelp
  3604516      -  0:00    |\--perl
 20185130      -  0:00    |    \--mount
 10158264      -  0:35    |        \--nfsmnthelp
  4587542      -  0:00    |\--perl
...

It is immediately noticeable that the same pattern repeats again and again: cron started a Perl program over and over, which in turn tried to mount a file system via NFS. The mount did not work (no answer from the NFS server), and the Perl script hung. Since the script was restarted again and again, at some point there were 100 active cron jobs, and from that moment on no further cron jobs were started. We briefly count the active Perl processes:

$ ps -T 6684924 |grep perl |wc -l
     100
$

There are exactly 100 perl processes started by cron. We terminate some of the hanging perl processes:

# kill 3276876 3604516  4587542
#

A look at the end of the cron log file shows that the jobs have been terminated, and after a short while the first newly started cron job appears:

# tail -f /var/adm/cron/log
…
Cron Job with pid: 3276876 Failed
Cron Job with pid: 3604516  Failed
Cron Job with pid: 4587542Failed
mqm       : CMD ( /appdata/mqm/admin/bin/checks/checkXmitMonitoring.sh >>/appdata/mqm/tracks/logs/scheduler/checkXmitMonitoring.fatal 2>&1 ) : PID ( 28442840 ) : Mon Feb 25 10:34:00 2019
…

We also terminate the remaining hanging processes and, to be on the safe side, restart the cron daemon by simply killing it:

# kill 6684924
#

The cron daemon is automatically restarted thanks to an /etc/inittab entry:

# lsitab cron
cron:23456789:respawn:/usr/sbin/cron
#

Now that cron is working again, the Perl script that ultimately caused cron to hang should be examined. For scripts started via cron, it is advisable to check at startup whether a previous instance is still running, as sketched below.
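
A minimal sketch of such a guard in ksh, using a lock directory (all paths and the job name are hypothetical):

#! /bin/ksh
# refuse to start if a previous instance is still running

LOCKDIR=/var/run/nfs_check.lock        # hypothetical lock directory

if mkdir "$LOCKDIR" 2>/dev/null
then
        # we got the lock; remove it again when the script exits
        trap 'rmdir "$LOCKDIR"' EXIT
        /usr/local/bin/nfs_check.pl    # the actual job (hypothetical)
else
        print "previous run still active, skipping" >&2
        exit 0
fi

Because mkdir is atomic, only one instance can acquire the lock at a time; later instances exit immediately instead of piling up in the cron queue.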

Automatic creation of home directories

AIX offers several ways to automatically create missing home directories at login. This is especially useful if the user accounts are managed through LDAP or another naming service and are not created locally. If a user is newly created in LDAP, he initially has no home directory on the AIX LDAP client:

$ ssh new_user@aix01
...
Could not chdir to home directory /home/new_user: No such file or directory
$ pwd
/
$ exit
$

Probably the easiest way to automatically create the home directory at login is the attribute mkhomeatlogin in the file /etc/security/login.cfg. If the attribute is not set, the default is “false”:

# lssec -f /etc/security/login.cfg -s usw -a mkhomeatlogin
usw mkhomeatlogin=
# 

The attribute can be set to true with the chsec command:

# chsec -f /etc/security/login.cfg -s usw -a mkhomeatlogin=true
# lssec -f /etc/security/login.cfg -s usw -a mkhomeatlogin
usw mkhomeatlogin=true
#

We try the login again:

$ ssh new_user@aix01
...
$ pwd
/home/new_user
$

A new home directory has been created for the user.

HMC Error #25B810

Managing service events is often forgotten on HMCs. In this article we use a concrete example, an error with reference code #25B810, to show how to handle such events. Of course, our LPAR tool is used here.

First, let’s find all open service events:

$ hmc lssvcevents
TIME                 PROBLEM  PMH   HMC     REFCODE   STATE     STATUS  CALLHOME  FAILING_MTMS      TEXT                                         
02/13/2019 23:02:31  7        -     hmc01   #25B810   approved  Open    false     8231-E2B/06A084P  File System alert event occurred...          
02/16/2019 16:14:28  8        -     hmc01   B3030001  approved  Open    false     8231-E2B/06A084P  ACT04284I A Management Console connect failed
02/11/2019 16:12:43  37       -     hmc02   B3030001  approved  Open    false     8231-E2B/06A084P  ACT04284I A Management Console connect failed
02/11/2019 17:43:19  38       -     hmc02   B3030001  approved  Open    false     8231-E2B/06A084P  ACT04283I A connection to a FSP,BPA...  
$

This article is about the problem with number 7. The problem was recorded on 02/13/2019 at 23:02:31 by the HMC named hmc01. The reference code is #25B810. The problem has the status “Open”, and a call home has not been triggered. The problem concerns the managed system with serial number 06A084P, a Power 710 (8231-E2B). The beginning of the error message can be found in the last column.

First, let’s look at the whole record of the problem by specifying the problem number and HMC:

$ hmc lssvcevents -p 7 hmc01
analyzing_hmc: hmc01
analyzing_mtms: 7042-CR8/21009CD
approval_state: approved
callhome_intended: false
created_time: 02/14/2019 04:11:31
duplicate_count: 0
eed_transmitted: false
enclosure_mtms: 8231-E2B/06A084P
event_severity: 0
event_time: 02/13/2019 23:02:31
failing_mtms: 8231-E2B/06A084P
files: iqyymrge.log/Consolidated system platform log,
iqyvpd.dat/Configuration information associated with the HMC,
actzuict.dat/Tasks performed,
iqyvpdc.dat/Configuration information associated with the HMC,
problems.xml/XML version of the problems opened on the HMC for the HMC and the server,
refcode.dat/list of reference codes associated with the hmc,
iqyylog.log/HMC firmware log information,
PMap.eed/Partition map, obtained from 'lshsc -w -c machine',
hmc.eed/HMC code level obtained from 'lshmc -V' and connection information obtained from 'lssysconn -r all',
sys.eed/Output of various system configuration commands,
8231-E2B_06A084P.VPD.xml/Configuration information associated with the managed system
first_time: 02/14/2019 04:11:31
last_time: 02/14/2019 04:11:31
problem_num: 7
refcode: #25B810
reporting_mtms: 8231-E2B/06A084P
reporting_name: p710
status: Open
sys_mtms: 8231-E2B/06A084P
sys_name: p710
sys_refcode: #25B810
text: File System alert event occurred on /home/ios/CM/DB. Free space is less than 10%, or there was an error querying the filesystem.

At the end of the record we find the full error message. It concerns a file system that has less than 10% free space. The path “/home/ios/CM/DB” points to a virtual I/O server. The relevant virtual I/O servers are located on the managed system with the serial number 06A084P:

$ ms show 06A084P
NAME  SERIAL_NUM  TYPE_MODEL  HMCS        
p710  06A084P     8231-E2B    hmc01,hmc02
$

It is the managed system named p710. The managed system has the following virtual I/O servers:

$ vios -m p710 show
LPAR     ID  SERIAL    LPAR_ENV   MS    HMCs
aixvio1  1   06A084P1  vioserver  p710  hmc01,hmc02
$

A check of the error report on the Virtual I/O Server aixvio1 shows the following entry:

LABEL:          VIO_ALERT_EVENT
IDENTIFIER:     0FD4CF1A

Date/Time:       Wed Feb 13 22:02:31 CST 2019
Sequence Number: 98
Machine Id:      00F6A0844C00
Node Id:         aixvio1
Class:           O
Type:            INFO
WPAR:            Global
Resource Name:   /home/ios/CM/DB 

Description
Informational Message

Probable Causes
Asynchronous Event Occurred

Failure Causes
PROCESSOR

        Recommended Actions
        Check Detail Data

Detail Data
Alert Event Message
25b810
A File System alert event occurred on /home/ios/CM/DB. Free space is less than 10%, or there was an error querying the filesystem.

Diagnostic Analysis
Diagnostic Log sequence number: 19
Resource tested:        sysplanar0
Menu Number:            25B810
Description:


 File System alert event occurred on /home/ios/CM/DB. Free space is less than 10%, or there was an error querying the filesystem.

A quick check of the file system shows that the problem has already been resolved, and there is enough space:

$ df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
...
/dev/hd1           0.25      0.16   35%      111     1% /home
...
$ 

So the problem does not exist anymore. Therefore, the service event on the HMC should also be closed, which we do now:

$ hmc chsvcevent -o close -p 7 hmc01
$

For review we list the open service events:

$ hmc lssvcevents 
TIME                 PROBLEM  PMH   HMC     REFCODE   STATE     STATUS  CALLHOME  FAILING_MTMS      TEXT                                         
02/16/2019 16:14:28  8        -     hmc01   B3030001  approved  Open    false     8231-E2B/06A084P  ACT04284I A Management Console connect failed
02/11/2019 16:12:43  37       -     machmc  B3030001  approved  Open    false     8231-E2B/06A084P  ACT04284I A Management Console connect failed
02/11/2019 17:43:19  38       -     machmc  B3030001  approved  Open    false     8231-E2B/06A084P  ACT04283I A connection to a FSP,BPA...   
$ 

The event with the number 7 was closed successfully.

Service events are easy to manage with the LPAR tool!

Migration from SDDPCM to AIX PCM

Many AIX systems still use SDDPCM as their multipathing solution. However, IBM no longer supports SDDPCM on POWER9 hardware.

The following shows the migration from SDDPCM to AIX PCM. Our example system has the following physical volumes:

$ lspv
hdisk0          00abcdefabcde000                    datavg          active     
hdisk1          00abcdefabcde001                    datavg          active     
hdisk2          none                                None                       
hdisk3          00abcdefabcde003                    altinst_rootvg             
hdisk4          00abcdefabcde004                    rootvg          active     
$

The Physical Volumes are disks that are made available through an SVC:

$ lsdev -l hdisk0 -F uniquetype
disk/fcp/2145
$

The Path Control Module (PCM) in use is SDDPCM:

$ lsattr -El hdisk0 -a PCM -F value
PCM/friend/sddpcm
$

You can also see this when looking at the list of kernel extensions:

$ genkex | grep pcm
f1000000c012a000    af000 /usr/lib/drivers/sddpcmke
$

Which PCM driver is used for which disk type can easily be viewed with the command “manage_disk_drivers”:

$ manage_disk_drivers -l
Device              Present Driver        Driver Options    
2810XIV             AIX_AAPCM             AIX_AAPCM,AIX_non_MPIO
DS4100              AIX_SDDAPPCM          AIX_APPCM,AIX_SDDAPPCM
DS4200              AIX_SDDAPPCM          AIX_APPCM,AIX_SDDAPPCM
DS4300              AIX_SDDAPPCM          AIX_APPCM,AIX_SDDAPPCM
DS4500              AIX_SDDAPPCM          AIX_APPCM,AIX_SDDAPPCM
DS4700              AIX_SDDAPPCM          AIX_APPCM,AIX_SDDAPPCM
DS4800              AIX_SDDAPPCM          AIX_APPCM,AIX_SDDAPPCM
DS3950              AIX_SDDAPPCM          AIX_APPCM,AIX_SDDAPPCM
DS5020              AIX_SDDAPPCM          AIX_APPCM,AIX_SDDAPPCM
DCS3700             AIX_APPCM             AIX_APPCM
DCS3860             AIX_APPCM             AIX_APPCM
DS5100/DS5300       AIX_SDDAPPCM          AIX_APPCM,AIX_SDDAPPCM
DS3500              AIX_APPCM             AIX_APPCM
XIVCTRL             MPIO_XIVCTRL          MPIO_XIVCTRL,nonMPIO_XIVCTRL
2107DS8K            NO_OVERRIDE           AIX_AAPCM,AIX_non_MPIO,NO_OVERRIDE
IBMFlash            NO_OVERRIDE           AIX_AAPCM,AIX_non_MPIO,NO_OVERRIDE
IBMSVC              NO_OVERRIDE           AIX_AAPCM,AIX_non_MPIO,NO_OVERRIDE
$

In our case (SVC disks), the last line (IBMSVC) is the relevant one. The currently active driver is NO_OVERRIDE; the other possible drivers are AIX_AAPCM (AIX PCM for active/active and ALUA systems) and AIX_non_MPIO (disks without multipathing). The value NO_OVERRIDE means that, if no multipathing driver is explicitly specified, a multipathing driver is used if one is available; otherwise no multipathing driver is used. If more than one multipathing driver is available (in our case AIX PCM and SDDPCM), then SDDPCM has priority.

In a subsequent blog entry, we will take a closer look at the possible values, as well as the point in AIX where the selection is made.

Before we change the driver for IBMSVC disks (a reboot is necessary), let us take a look at the attributes of our disks; here is an example for hdisk0:

$ lsattr -El hdisk0
PCM             PCM/friend/sddpcm                                   PCM                                     True
...
algorithm       load_balance                                        Algorithm                               True+
...
queue_depth     120                                                 Queue DEPTH                             True+
...
reserve_policy  no_reserve                                          Reserve Policy                          True+
...
$

Changing the driver causes the values of some attributes to be lost and replaced by the new driver’s default values. This is especially true for queue_depth (here: 120), reserve_policy (here: no_reserve) and the load-balancing policy (algorithm). The current values should be noted down so that they can be set again after the conversion to the AIX PCM driver, as sketched below.
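
A small sketch for recording the current values of all disks before the migration (the output file name is arbitrary):

# for hd in $( lspv | awk '{ print $1 }' )
do
    print "$hd: $( lsattr -El $hd -a algorithm -a queue_depth -a reserve_policy -F value | tr '\n' ' ' )"
done > /tmp/hdisk_attributes.before
#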

Switching to AIX PCM can be done with the command “manage_disk_drivers”. For this, the command is given the disk type (here IBMSVC) with the option “-d” and the desired driver (here AIX_AAPCM for the AIX PCM driver) with the option “-o”:

# manage_disk_drivers -d IBMSVC -o AIX_AAPCM
********************** ATTENTION *************************
  For the change to take effect the system must be rebooted
#

The changed configuration can be listed directly with “manage_disk_drivers -l”:

$ manage_disk_drivers -l
Device              Present Driver        Driver Options    
...
IBMSVC              AIX_AAPCM             AIX_AAPCM,AIX_non_MPIO,NO_OVERRIDE
$

To make the change, the system must now be rebooted:

# shutdown -r now

SHUTDOWN PROGRAM
Thu Feb  7 09:43:38 CET 2019
...

We execute the 3 commands from the beginning again (lspv, lsdev and lsattr):

$ lspv
hdisk0          00abcdefabcde000                    datavg          active     
hdisk1          00abcdefabcde001                    datavg          active     
hdisk2          none                                None                       
hdisk3          00abcdefabcde003                    altinst_rootvg             
hdisk4          00abcdefabcde004                    rootvg          active     
$

The physical volumes are unchanged.

$ lsdev -l hdisk0 -F uniquetype
disk/fcp/mpioosdisk
$

The type of the disks has changed from disk/fcp/2145 to disk/fcp/mpioosdisk. This already indicates that the multipathing driver has changed.

$ lsattr -El hdisk0 -a PCM -F value
PCM/friend/fcpother
$

The Path Control Module (PCM) has also changed. The type is no longer sddpcm but fcpother. At first glance that does not look like AIX PCM. However, a look at the associated driver immediately shows that AIX PCM is in use here:

$ lsdev -P -c PCM -s friend -t fcpother -F DvDr
aixdiskpcmke
$

The associated kernel extension aixdiskpcmke is also currently loaded and in use:

$ genkex | grep pcm
         73e2000    57000 /usr/lib/drivers/aixdiskpcmke
$

Let’s take a look at the attributes of hdisk0 again. We expect changed values for some attributes here:

$ lsattr -El hdisk0
PCM             PCM/friend/fcpother                                 Path Control Module              False
...
algorithm       fail_over                                           Algorithm                        True+
...
queue_depth     20                                                  Queue DEPTH                      True+
...
reserve_policy  single_path                                         Reserve Policy                   True+
...
$

The value 120 for queue_depth has been lost and replaced by the default value 20. The reserve_policy has changed to single_path, and the load-balancing algorithm is now fail_over, i.e. only one path is used at a time.

We change the settings to a configuration that corresponds to the initial situation (shortest_queue is the AIX PCM algorithm that comes closest to SDDPCM’s load_balance):

# chdev -P -l hdisk0 -a algorithm=shortest_queue -a queue_depth=120 -a reserve_policy=no_reserve
hdisk0 changed
#

Since the physical volume is in use, the settings can only be changed in the ODM (hence the -P option), and another reboot is necessary.
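
To adjust all disks in one go, the chdev call can be put into a loop; a minimal sketch that applies the same values to every hdisk (adjust the disk list and the values to your environment and the noted attributes):

# for hd in $( lspv | awk '{ print $1 }' )
do
    chdev -P -l $hd -a algorithm=shortest_queue -a queue_depth=120 -a reserve_policy=no_reserve
done
hdisk0 changed
hdisk1 changed
...
#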

After all disks have been reconfigured via the ODM, the system must be rebooted a second time:

# shutdown -r now

SHUTDOWN PROGRAM
Thu Feb  6 20:07:12 CET 2019
...

After the reboot SDDPCM can be uninstalled:

# installp -u devices.fcp.disk.ibm.mpio.rte devices.sddpcm.72.rte
+-----------------------------------------------------------------------------+
                    Pre-deinstall Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...

SUCCESSES
...
0503-292 This update will not fully take effect until after a
        system reboot.

    * * *  A T T E N T I O N  * * *
    System boot image has been updated. You should reboot the
    system as soon as possible to properly integrate the changes
    and to avoid disruption of current functionality.

installp:  bosboot process completed.
+-----------------------------------------------------------------------------+
                                Summaries:
+-----------------------------------------------------------------------------+

Installation Summary
--------------------
Name                        Level           Part        Event       Result
-------------------------------------------------------------------------------
devices.sddpcm.72.rte       2.7.1.1         ROOT        DEINSTALL   SUCCESS   
devices.sddpcm.72.rte       2.7.1.1         USR         DEINSTALL   SUCCESS   
devices.fcp.disk.ibm.mpio.r 1.0.0.25        USR         DEINSTALL   SUCCESS   
#

The SDDPCM fileset, as well as the associated host-attachment fileset, were successfully uninstalled.

Since the SDDPCM driver was no longer loaded, and thus no changes were made to the running kernel, another reboot should not actually be necessary. However, since the installer explicitly recommends a prompt reboot, and since a reboot test with the final configuration is a good idea anyway, we reboot the system a third and final time:

# shutdown -r now

SHUTDOWN PROGRAM
Thu Feb  6 20:17:21 CET 2019
...

After the reboot, we check the disk attributes again:

$ lsattr -El hdisk0
PCM             PCM/friend/fcpother                                 Path Control Module              False
...
algorithm       shortest_queue                                      Algorithm                        True+
...
queue_depth     120                                                 Queue DEPTH                      True+
...
reserve_policy  no_reserve                                          Reserve Policy                   True+
...
$

The system now uses the AIX PCM driver for multipathing:

$ manage_disk_drivers -l
Device              Present Driver        Driver Options    
...
IBMSVC              AIX_AAPCM             AIX_AAPCM,AIX_non_MPIO,NO_OVERRIDE
$

Migrating from SDDPCM to AIX PCM is pretty easy to do.