HSCLB505 The partition cannot use hardware-accelerated encryption

When migrating LPARs with LPM to somewhat older hardware, the following error can occur:

HSCLB505 The partition cannot use hardware-accelerated encryption on the destination managed system because the destination managed system does not support hardware-accelerated encryption.

This means that hardware-accelerated encryption is activated for the LPAR, but is not supported on the destination managed system.

Disabling hardware-accelerated encryption using the LPAR-Tool is easy:

$ lpar -d chmem lpar01 hardware_mem_encryption=0
$

This is of course also possible without the LPAR tool. Log in to your HMC and run the following command on the command line:

chhwres -m ms01 -r mem -o s -p lpar01 -a 'hardware_mem_encryption=0'
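
Whether the change has been applied can be checked afterwards with lshwres; a sketch, assuming your HMC level exposes the hardware_mem_encryption attribute:

lshwres -m ms01 -r mem --level lpar --filter lpar_names=lpar01 -F hardware_mem_encryption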

Afterwards, validation and migration with LPM should work.

HSCLB504 The migrating partition cannot use hardware-accelerated Active Memory Expansion

When migrating LPARs with LPM to somewhat older hardware, the following error can occur:

HSCLB504 The migrating partition cannot use hardware-accelerated Active Memory Expansion on the destination managed system because the destination managed system does not support hardware-accelerated Active Memory Expansion.

This means that Active Memory Expansion (AME) is activated for the LPAR, but is not supported on the destination managed system.

Disabling Active Memory Expansion using the LPAR-Tool is easy:

$ lpar -d chmem lpar01 hardware_mem_expansion=0
$

This is of course also possible without the LPAR tool. Log in to your HMC and run the following command on the command line:

chhwres -m ms01 -r mem -o s -p lpar01 -a 'hardware_mem_expansion=0'
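
Here, too, the result can be checked with lshwres; a sketch, assuming your HMC level exposes the hardware_mem_expansion attribute:

lshwres -m ms01 -r mem --level lpar --filter lpar_names=lpar01 -F hardware_mem_expansion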

Afterwards, validation and migration with LPM should work.

PowerVM: Do you know the Profile “last*valid*configuration”?

You may have wondered at some point how and where the current configuration of an LPAR is stored. If the current configuration and the profile are not kept in sync, differences quickly arise. When an LPAR is shut down and deactivated, the last current configuration is retained. When the LPAR is activated again, this configuration is offered in the GUI as the “current configuration”, in addition to the LPAR's profiles. If you select the current configuration, the LPAR has the same configuration after activation as before deactivation. For a newly created LPAR, however, this choice is not available on activation. The difference also shows up on the HMC command line: an LPAR that has already been activated can be activated without specifying a profile, while a newly created LPAR can only be activated by specifying a profile. Let’s take a closer look.

(Short note: The commands on the HMC command line were executed directly on the HMC hmc01. In the example outputs with the LPAR tool, the commands were started from a Linux jump server. All commands are always shown with both variants!)
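
For example, on the HMC command line, an LPAR that has been activated before can be brought up with its current configuration simply by omitting the profile (using the managed system p710 and the LPAR aix01 from the examples below):

hscroot@hmc01:~> chsysstate -m p710 -r lpar -o on -n aix01
hscroot@hmc01:~>

A newly created LPAR, on the other hand, must be given a profile:

hscroot@hmc01:~> chsysstate -m p710 -r lpar -o on -n aix01 -f standard
hscroot@hmc01:~>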

We have activated and booted the LPAR aix01 with the profile “standard“. We have not made any dynamic changes yet. We briefly look at the status of the LPAR and check if there is an RMC connection to the HMC:

hscroot@hmc01:~> lssyscfg -m p710 -r lpar --filter lpar_names=aix01 --header -F "name lpar_env state curr_profile rmc_state os_version"
name lpar_env state curr_profile rmc_state os_version
aix01 aixlinux Running standard active "AIX 7.1 7100-04-00-0000"
hscroot@hmc01:~>
linux $ lpar status aix01
NAME  ID      TYPE   STATUS  PROFILE    RMC   PROCS  PROCUNITS MEMORY  OS
aix01  5  aixlinux  Running  standard  active   1       -      3072    AIX 7.1 7100-04-00-0000
linux $

To see the effect of a dynamic change, let’s take a look at the actual state and the profile “standard“:

hscroot@hmc01:~> lshwres -m p710 -r mem --level lpar --filter lpar_names=aix01 -F curr_mem
3072
hscroot@hmc01:~> lssyscfg -m p710 -r prof --filter profile_names=standard,lpar_names=aix01 -F desired_mem
3072
hscroot@hmc01:~>
linux $ lpar mem aix01
      MEMORY            MEMORY           HUGEPAGES
NAME   MODE  AME   MIN   CURR   MAX   MIN  CURR  MAX
aix01  ded    -   2048   3072  8192    0     0    0
linux $ lpar -p standard mem aix01
      MEMORY            MEMORY           HUGEPAGES
NAME   MODE  AME   MIN   CURR   MAX   MIN  CURR  MAX
aix01  ded    -   2048   3072  8192    0     0    0
linux $

The LPAR currently has 3072 MB of main memory, which is also the value stored in the “standard” profile.

Now we add 1024 MB of main memory dynamically (DLPAR):

hscroot@hmc01:~> chhwres -m p710 -r mem -o a -p aix01 -q 1024
hscroot@hmc01:~>
linux $ lpar -d addmem aix01 1024
linux $

Now let’s look at the resulting memory resources of the LPAR:

hscroot@hmc01:~> lshwres -m p710 -r mem --level lpar --filter lpar_names=aix01 -F curr_mem
4096
hscroot@hmc01:~>
linux $ lpar mem aix01
     MEMORY            MEMORY          HUGEPAGES
NAME  MODE  AME   MIN   CURR   MAX   MIN  CURR  MAX
aix01  ded   -   2048   4096  8192    0     0    0
linux $

As expected, the LPAR now has 4096 MB of RAM. But what does the profile “standard” look like?

hscroot@hmc01:~> lssyscfg -m p710 -r prof --filter profile_names=standard,lpar_names=aix01 -F desired_mem
3072
hscroot@hmc01:~>
linux $ lpar -p standard mem aix01
     MEMORY            MEMORY          HUGEPAGES
NAME  MODE  AME   MIN   CURR   MAX   MIN  CURR  MAX
aix01  ded   -   2048   3072  8192    0     0    0
linux $

The profile has not changed; activating the LPAR with this profile would result in 3072 MB of main memory.

The current configuration is always saved in the special profile “last*valid*configuration“:

hscroot@hmc01:~> lssyscfg -m p710 -r prof --filter profile_names=last*valid*configuration,lpar_names=aix01 -F desired_mem
4096
hscroot@hmc01:~>
linux $ lpar -p last*valid*configuration mem aix01
     MEMORY            MEMORY          HUGEPAGES
NAME  MODE  AME   MIN   CURR   MAX   MIN  CURR  MAX
aix01  ded   -   2048   4096  8192    0     0    0
linux $

Here the value of 4096 MB is consistent with the currently available memory in the LPAR.

Every dynamic change to an LPAR is carried out via a DLPAR operation and is also recorded in this special profile! If a profile is synchronized, manually or automatically, it is ultimately this special profile that is synchronized with the desired profile.
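
If you want to transfer the current configuration into a regular profile yourself, the HMC offers a save operation for this; a sketch (--force overwrites the existing profile, check the save operation of mksyscfg on your HMC level):

hscroot@hmc01:~> mksyscfg -r prof -m p710 -o save -p aix01 -n standard --force
hscroot@hmc01:~>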

The existence and handling of the special profile “last*valid*configuration” also makes some LPM possibilities easier to understand. We will deal with this in a later blog post.

Which FC port is connected to which SAN fabric?

In larger environments with many managed systems and multiple SAN fabrics, it is not always clear which SAN fabric an FC port belongs to, even with good documentation. In many cases the hardware is far away from the administrator's desk, possibly in a different building or even at another site, so you cannot simply check the cabling in person.

This blog post will show you how to use Live Partition Mobility (LPM) to find all the FC ports that belong to a given SAN fabric.

For the sake of simplicity we use the LPAR tool, but everything can also be done with commands on the HMC CLI, so please keep reading even if the LPAR tool is not available!

In the following, we have named our SAN fabrics “Fabric1” and “Fabric2.” However, the procedure described below can be used with any number of SAN fabrics.

Since LPM is to be used, we first need an LPAR. We create the LPAR on one of our managed systems (ms09) with the LPAR tool:

$ lpar -m ms09 create fabric1
Creating LPAR fabric1:
done
Register LPAR
done
$

Of course you can also use the HMC GUI or the HMC CLI to create the LPAR. We have named the new LPAR “fabric1” after our SAN fabric. Any other name would work just as well!

Next, our LPAR needs a virtual FC adapter mapped to an FC port of fabric “Fabric1“:

$ lpar -p standard addfc fabric1 10 ms09-vio1
fabric1 10 ms09-vio1 20
$

The LPAR tool has selected slot 20 for the VFC server adapter on the VIOS ms09-vio1 and created both the client and the server adapter. Of course, client and server adapters can be created in exactly the same way via the HMC GUI or the HMC CLI. Since the LPAR is not active, the '-p standard' option specifies that only the profile should be adjusted.
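
Without the LPAR tool, the two adapters can be added to the profiles with chsyscfg on the HMC. A rough sketch only: the virtual_fc_adapters value has the field order slot/client-or-server/remote-lpar-ID/remote-lpar-name/remote-slot/wwpns/required, it assumes the VIOS profile is also called “standard”, and the exact quoting may vary with the HMC level (see the chsyscfg man page):

chsyscfg -m ms09 -r prof -i 'name=standard,lpar_name=fabric1,virtual_fc_adapters+=10/client//ms09-vio1/20//0'
chsyscfg -m ms09 -r prof -i 'name=standard,lpar_name=ms09-vio1,virtual_fc_adapters+=20/server//fabric1/10//0'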

To map the VFC server adapter to a physical FC port, we need the vfchost adapter number on the VIOS ms09-vio1:

$ vios npiv ms09-vio1
VIOS       ADAPT NAME  SLOT  CLIENT OS      ADAPT   STATUS        PORTS
…
ms09-vio1  vfchost2    C20   (3)    unknown  -     NOT_LOGGED_IN  0
…
$

In slot 20 we have vfchost2, which must now be mapped to an FC port of fabric “Fabric1”. We map it to the FC port fcs8, which we know belongs to fabric “Fabric1”. If we are wrong, we will see that shortly.
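
The mapping itself can be created, for example, directly on the virtual I/O server ms09-vio1 as user padmin:

$ vfcmap -vadapter vfchost2 -fcp fcs8
$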

Let’s take a look at the WWPNs for the virtual FC Client Adapter:

$ lpar -p standard vslots fabric1
SLOT  REQ  TYPE           DATA
0     yes  serial/server  remote: (any)/any hmc=1
1     yes  serial/server  remote: (any)/any hmc=1
10    no   fc/client      remote: ms09-vio1(1)/20 c050760XXXXX00b0,c050760XXXXX00b1
$

Equipped with the WWPNs, we now ask our storage colleagues to create a small LUN for these WWPNs, which should only be visible in the fabric “Fabric1“. After the storage colleagues have created the LUN and adjusted the zoning accordingly, we activate our new LPAR in OpenFirmware mode and open a console:

$ lpar activate -p standard -b of fabric1

$ lpar console fabric1

Open in progress 

Open Completed.

IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
...

          1 = SMS Menu                          5 = Default Boot List
          8 = Open Firmware Prompt              6 = Stored Boot List

     Memory      Keyboard     Network     SCSI     Speaker  ok
0 >

Of course, this can also be done without any problems using the GUI or the HMC CLI.

In OpenFirmware mode we start ioinfo and check if the small LUN is visible. If it is not visible, then the FC port fcs8 does not belong to the right fabric!

0 > ioinfo

!!! IOINFO: FOR IBM INTERNAL USE ONLY !!!
This tool gives you information about SCSI,IDE,SATA,SAS,and USB devices attached to the system

Select a tool from the following

1. SCSIINFO
2. IDEINFO
3. SATAINFO
4. SASINFO
5. USBINFO
6. FCINFO
7. VSCSIINFO

q - quit/exit

==> 6

FCINFO Main Menu
Select a FC Node from the following list:
 # Location Code           Pathname
-------------------------------------------------
 1. U9117.MMC.XXXXXXX7-V10-C10-T1  /vdevice/vfc-client@3000000a

q - Quit/Exit

==> 1

FC Node Menu
FC Node String: /vdevice/vfc-client@3000000a
FC Node WorldWidePortName: c050760XXXXXX0016
------------------------------------------
1. List Attached FC Devices
2. Select a FC Device
3. Enable/Disable FC Adapter Debug flags

q - Quit/Exit

==> 1

1. 500507680YYYYYYY,0 - 10240 MB Disk drive

Hit a key to continue...

FC Node Menu
FC Node String: /vdevice/vfc-client@3000000a
FC Node WorldWidePortName: c050760XXXXXX0016
------------------------------------------
1. List Attached FC Devices
2. Select a FC Device
3. Enable/Disable FC Adapter Debug flags

q - Quit/Exit

==> q

The LUN appears; the WWPN 500507680YYYYYYY is the WWPN of the corresponding storage port, which is unique worldwide and can only be seen in the fabric “Fabric1”!

Activating the LPAR in OpenFirmware mode served two purposes: first, it verified that the LUN is visible and that our mapping to fcs8 was correct; second, the system now knows which WWPNs must be found during an LPM operation so that the LPAR can be moved!

We deactivate the LPAR again:

$ lpar shutdown -f fabric1
$

If we now perform an LPM validation of the inactive LPAR, the validation can only succeed against a managed system that has a virtual I/O server connected to the fabric “Fabric1”. Using a for loop, let's try that for a number of managed systems:

$ for ms in ms10 ms11 ms12 ms13 ms14 ms15 ms16 ms17 ms18 ms19
do
echo $ms
lpar validate fabric1 $ms >/dev/null 2>&1
if [ $? -eq 0 ]
then
   echo connected
else
   echo not connected
fi
done

The command for validation on the HMC CLI is migrlpar.
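
For example, the validation from the loop above corresponds to the following HMC command (ms09 being the source managed system of our LPAR):

migrlpar -m ms09 -o v -p fabric1 -t ms10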

Since we are not interested in the validation messages, we redirect them to /dev/null.

Here’s the output of the for loop:

ms10
connected
ms11
connected
ms12
connected
ms13
connected
ms14
connected
ms15
connected
ms16
connected
ms17
connected
ms18
connected
ms19
connected

Obviously, all managed systems are connected to fabric “Fabric1“. That’s not very surprising, because they were cabled exactly like that.

It would be more interesting to know which FC ports on the managed systems (more precisely: on their virtual I/O servers) are connected to the fabric “Fabric1”. To do this, we need the list of virtual I/O servers for each managed system and the list of NPIV-capable FC ports for each virtual I/O server.

The list of virtual I/O servers can be obtained easily with the following command:

$ vios -m ms11 list
ms11-vio1
ms11-vio2
$

On the HMC CLI you can use the command lssyscfg -r lpar -m ms11 -F "name lpar_env" and pick out the entries with lpar_env vioserver.

The NPIV-capable ports can be found out with the following command:

$ vios lsnports ms11-vio1
ms11-vio1       name             physloc                        fabric tports aports swwpns  awwpns
ms11-vio1       fcs0             U78AA.001.XXXXXXX-P1-C5-T1          1     64     60   2048    1926
ms11-vio1       fcs1             U78AA.001.XXXXXXX-P1-C5-T2          1     64     60   2048    2023
...
$

The lsnports command is executed on the virtual I/O server. Of course, you can also do this without the LPAR tool.
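
If the LPAR tool is not available, you can, for example, run lsnports from the HMC using viosvrcmd:

viosvrcmd -m ms11 -p ms11-vio1 -c "lsnports"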

With the LPM validation (and of course also with the migration) you can specify which FC port on the target system is to be used. We show this with two examples:

$ lpar validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs0 >/dev/null 2>&1
$ echo $?
0
$ lpar validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs1 >/dev/null 2>&1
$ echo $?
1
$

The validation with target ms10-vio1 and fcs0 was successful, i.e. this FC port is attached to fabric “Fabric1”. The validation with target ms10-vio1 and fcs1 was not successful, i.e. that port is not connected to fabric “Fabric1”.

Here is the command that must be run on the HMC if the LPAR tool is not used:

$ lpar -v validate fabric1 ms10 virtual_fc_mappings=10/ms10-vio1///fcs0
hmc02: migrlpar -m ms09 -o v -p fabric1 -t ms10 -v -d 5 -i 'virtual_fc_mappings=10/ms10-vio1///fcs0'
$

To find all the FC ports that are connected to the fabric “Fabric1”, we loop over the managed systems to be checked; for each managed system we loop over all of its VIOS, and for each VIOS we finally loop over its FC ports, performing an LPM validation for each port.

We have put this together in the following script. To keep it from getting too long, some checks have been omitted:

$ cat bin/fabric_ports
#! /bin/ksh
# Copyright © 2018, 2019 by PowerCampus 01 GmbH

LPAR=fabric1

STATE=$( lpar prop -F state $LPAR | tail -1 )

print "LPAR: $LPAR"
print "STATE: $STATE"

# The reference LPAR must be inactive, otherwise no inactive LPM validation is possible.
if [ "$STATE" != "Not Activated" ]
then
    print "ERROR: $LPAR must be in state 'Not Activated'"
    exit 1
fi

fcsCount=0              # FC ports investigated
fcsSameFabricCount=0    # FC ports in the same fabric as $LPAR

for ms in "$@"
do
    print "MS: $ms"
    viosList=$( vios -m $ms list )

    for vios in $viosList
    do
        # lsnports requires an active RMC connection to the VIOS
        rmc_state=$( lpar -m $ms prop -F rmc_state $vios | tail -1 )
        if [ "$rmc_state" = "active" ]
        then
            # collect the NPIV-capable FC ports of this VIOS, skipping the header line
            # (in ksh the while loop runs in the current shell, so fcList keeps its value)
            fcList=
            vios -m $ms lsnports $vios 2>/dev/null | \
            while read vio fcport rest
            do
                if [ "$fcport" != "name" ]
                then
                    fcList="${fcList} $fcport"
                fi
            done

            # perform an inactive LPM validation against each FC port
            for fcport in $fcList
            do
                print -n "${vios}: ${fcport}: "
                lpar validate $LPAR $ms virtual_fc_mappings=10/${vios}///${fcport} </dev/null >/dev/null 2>&1
                case "$?" in
                0)
                    print "yes"
                    fcsSameFabricCount=$( expr $fcsSameFabricCount + 1 )
                    ;;
                *) print "no" ;;
                esac
                fcsCount=$( expr $fcsCount + 1 )
            done
        else
            print "${vios}: RMC not active"
        fi
    done
done

print "${fcsCount} FC-ports investigated"
print "${fcsSameFabricCount} FC-ports in same fabric"

$

As an illustration we briefly show a run of the script over some managed systems. We start the script with time to see how long it takes:

$ time bin/fabric_ports ms10 ms11 ms12 ms13 ms14 ms15 ms16 ms17 ms18 ms19
LPAR: fabric1
STATE: Not Activated
MS: ms10
ms10-vio3: RMC not active
ms10-vio1: fcs0: yes
ms10-vio1: fcs2: yes
ms10-vio1: fcs4: no
ms10-vio1: fcs6: no
ms10-vio2: fcs0: yes
ms10-vio2: fcs2: yes
ms10-vio2: fcs4: no
ms10-vio2: fcs6: no
MS: ms11
ms11-vio3: RMC not active
ms11-vio1: fcs0: no
ms11-vio1: fcs1: no
ms11-vio1: fcs2: no
ms11-vio1: fcs3: yes
ms11-vio1: fcs4: no
…
ms19-vio2: fcs2: no
ms19-vio2: fcs3: no
ms19-vio2: fcs0: no
ms19-vio2: fcs1: no
ms19-vio2: fcs4: no
ms19-vio2: fcs5: no
132 FC-ports investigated
17 FC-ports in same fabric

real       2m33.978s
user      0m4.597s
sys       0m8.137s
$

In about 150 seconds, 132 FC ports were examined (LPM validations performed). This means a validation took about 1 second on average.

We have found all the FC ports that are connected to the fabric “Fabric1“.

Of course, this can be done analogously for other fabrics.

A final note: not all ports above are cabled!