HSCLB504 The migrating partition cannot use hardware-accelerated Active Memory Expansion

When migrating LPARs with LPM to somewhat older hardware, the following error can occur:

HSCLB504 The migrating partition cannot use hardware-accelerated Active Memory Expansion on the destination managed system because the destination managed system does not support hardware-accelerated Active Memory Expansion.

This means that Active Memory Expansion (AME) is activated for the LPAR, but is not supported on the destination managed system.

Disabling Active Memory Expansion using the LPAR-Tool is easy:

$ lpar -d chmem lpar01 hardware_mem_expansion=0
$

Without the LPAR-Tool this is of course also possible. Log in to your HMC and run the following command from the command line:

chhwres -m ms01 -r mem -o s -p lpar01 -a 'hardware_mem_expansion=0'

Afterwards validation and migration using LPM should work.
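If you want to double-check beforehand, the LPM validation (and the migration itself) can also be triggered directly on the HMC command line with migrlpar; a sketch, with ms02 as an assumed destination managed system:

migrlpar -o v -m ms01 -t ms02 -p lpar01
migrlpar -o m -m ms01 -t ms02 -p lpar01

The first command only validates the migration and makes no changes; the second one starts the actual migration.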

WWPN of FC ports in Open Firmware

The following article deals with WWPN of FC ports in Open Firmware.

Port and node WWNs of FC ports can be found very easily in the Open Firmware, even when the ioinfo command is no longer available, as is the case with new POWER9 firmware. The hardware structure of a POWER system is available in the Open Firmware in the form of a device tree. Hardware components such as PCI bridges, processors and PCI cards are represented as device nodes in this tree.

With the command “dev /” you can access the device nodes, starting with the root node (“/” or slash):

0 > dev /  ok
0 >

In the device tree you can navigate with the commands dev, ls and pwd similar to the Unix file system. An ls on the root node shows all available device nodes (as well as some “package nodes” which are not discussed here).

The hierarchy is visualized in the device tree by indenting the device nodes:

0 > ls 
0000020939c0: /ibm,serial
000002094ae8: /chosen
000002094d60: /packages
000002094e58:   /disassembler
...
0000020af578: /cpus
0000020b5200:   /PowerPC,POWER7@0
...
0000020ba640: /memory@0
...
00000226cad0: /pci@800000020000120
00000229d750:   /pci@0
0000022a0018:     /pci@2
0000022a28e0:       /ethernet@0
0000022b4a28:       /ethernet@0,1
0000022c6b70:     /pci@4
0000022c9438:       /ethernet@0
0000022db580:       /ethernet@0,1
000002277fd8: /pci@800000020000121
0000022ed7d0:   /fibre-channel@0
0000023026e0:     /fp
000002303240:     /disk
000002304de0:     /tape
000002306270:   /fibre-channel@0,1
00000231b180:     /fp
00000231bce0:     /disk
00000231d880:     /tape
...
ok
0 >

The example output shows 2 FC ports. Both FC ports are children of the device node pci@800000020000121, which can be found directly under the root node /.

With the command “dev /pci@800000020000121” we first navigate to this node and then display its child nodes using “ls”:

0 > dev /pci@800000020000121  ok
0 > ls
0000022ed7d0: /fibre-channel@0
0000023026e0:   /fp
000002303240:   /disk
000002304de0:   /tape
000002306270: /fibre-channel@0,1
00000231b180:   /fp
00000231bce0:   /disk
00000231d880:   /tape
ok
0 >

We next move into the device node of the first FC port, fibre-channel@0.

With the command “pwd” we briefly check our position in the device tree and then use “ls” to look at the available subnodes:

0 > dev fibre-channel@0  ok
0 > pwd /pci@800000020000121/fibre-channel@0 ok
0 > ls
0000023026e0: /fp
000002303240: /disk
000002304de0: /tape
ok
0 >

Each device node has a number of properties, which depend on the type of the underlying hardware component.

The properties of a device node can be displayed with the command “.properties” (the command name begins with a “.“):

0 > .properties
ibm,loc-code            U5802.001.008C110-P1-C2-T1
vendor-id               000010df
device-id               0000f100
...
name                    fibre-channel
...
manufacturer            456d756c 657800
copyright               436f7079 72696768 74202863 29203230 30302d32 30313220 456d756c 657800
device_type             fcp
model                   10N9824
...
port-wwn                10000000 c9b12345
node-wwn                20000000 c9b12345
...
ok
0 >

In addition to the location code, the port WWN (port-wwn) and the node WWN (node-wwn) are displayed.
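By the way, the manufacturer and copyright properties shown above are hex-encoded ASCII strings (the trailing 00 is a NUL terminator). If you want to decode such a value outside of Open Firmware, a Perl one-liner is sufficient; a small sketch, run on AIX or Linux (not part of the Open Firmware session):

$ perl -e 'print pack("H*", "456d756c6578"), "\n"'
Emulex
$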

If you would like to know more about the structure of WWNs, please refer to the article:  Numbers: FC World Wide Names (WWNs)

Of course, you can also find out the MAC address of an Ethernet port in the same way. With “dev ..” you can move up one level in the device tree, just like in a Unix file system. But you can also abbreviate and go straight to the top, which is what we do in the following. Then we display all available device nodes again:

0 > dev /  ok
0 > ls 
...
00000226cad0: /pci@800000020000120
00000229d750:   /pci@0
0000022a0018:     /pci@2
0000022a28e0:       /ethernet@0
0000022b4a28:       /ethernet@0,1
0000022c6b70:     /pci@4
0000022c9438:       /ethernet@0
0000022db580:       /ethernet@0,1
...
ok
0 >

As an example, we select the device node /pci@800000020000120/pci@0/pci@2/ethernet@0,1 and again display its properties:

0 > dev /pci@800000020000124/pci@0/pci@2/ethernet@0,1  ok
0 > pwd /pci@800000020000124/pci@0/pci@2/ethernet@0,1 ok
0 > .properties
ibm,loc-code            U5802.001.008C110-P1-C4-T2
vendor-id               00008086
device-id               000010bc
...
name                    ethernet
...
device_type             network
...
max-frame-size          00000800
address-bits            00000030
local-mac-address       00145eea 1234
mac-address             00145eea 1234
...
0 >

The MAC address can be found here in the property mac-address.

If you want to leave the device tree, you can do this with the command “device-end“:

0 > device-end  ok
0 >

We hope this article about WWPN of FC ports in Open Firmware was both helpful and informative.

LPAR-Tool in Action: Examples

The LPAR tool can administer HMCs, managed systems, LPARs and virtual I/O servers via the command line. The current version of the LPAR tool (1.4.0.2) can be downloaded from our download page https://powercampus.de/download. A trial license, valid until October 31, is included. This article will show you some simple but useful applications of the LPAR tool.

A common question in larger environments (multiple HMCs, many managed systems) is: where is a particular LPAR? This question can easily be answered with the LPAR tool, by using the command “lpar show“:

$ lpar show lpar02
NAME    ID  SERIAL     LPAR_ENV  MS    HMCS
lpar02  39  123456789  aixlinux  ms21  hmc01,hmc02
$

In addition to the name, the LPAR ID and the serial number, the managed system (here ms21) and the associated HMCs (here hmc01 and hmc02) are also shown. You can also specify multiple LPARs and/or wildcards:

$ lpar show lpar02 lpar01
...
$ lpar show lpar*
...
$

If no argument is given, all LPARs are listed.

 

Another question that frequently arises is the status of an LPAR or multiple LPARs. Again, this can be easily answered, this time with the command “lpar status“:

$ lpar status lpar02
NAME    LPAR_ID  LPAR_ENV  STATE    PROFILE   SYNC  RMC     PROCS  PROC_UNITS  MEM   OS_VERSION
lpar02  39       aixlinux  Running  standard  0     active  1      0.7         7168  AIX 7.2 7200-03-02-1846
$

The LPAR lpar02 is Running, the profile used is standard, the RMC connection is active and the LPAR is running AIX 7.2 (TL3 SP2). The LPAR has 1 processor core, with 0.7 processing units and 7 GB RAM. The column SYNC indicates whether the current configuration is synchronized with the profile (attribute sync_curr_profile).

Of course, several LPARs or even all LPARs can be specified here.

If you want to see what the LPAR tool does in the background, you can specify the option “-v” (verbose-only) for most commands. The HMC commands are then only listed; no changes are made on the HMC. Here are the HMC commands that are issued for the status output:

$ lpar status -v lpar02
hmc01: lssyscfg -r lpar -m ms21
hmc01: lshwres -r mem -m ms21 --level lpar
hmc01: lshwres -r proc -m ms21 --level lpar
$

 

Next, we show how to add more RAM. We start with the status of the LPAR:

$ lpar status lpar02
NAME    LPAR_ID  LPAR_ENV  STATE    PROFILE   SYNC  RMC     PROCS  PROC_UNITS  MEM   OS_VERSION
lpar02  39       aixlinux  Running  standard  0     active  1      0.7         7168  AIX 7.2 7200-03-02-1846
$

The LPAR is running and RMC is active, so a DLPAR operation should be possible. We will first check if the maximum memory size is already in use:

$ lpar lsmem lpar02
            MEMORY         MEMORY         HUGE_PAGES 
LPAR_NAME  MODE  AME  MIN   CURR  MAX   MIN  CURR  MAX
lpar02     ded   0.0  2048  7168  8192  0    0     0
$

Currently the LPAR uses 7 GB and a maximum of 8 GB is possible. Extending the memory by 1 GB (1024 MB) should therefore be possible. We add the memory using the command “lpar addmem”:

$ lpar addmem lpar02 1024
$

We check the success by starting the command “lpar lsmem” again:

$ lpar lsmem lpar02
           MEMORY         MEMORY         HUGE_PAGES 
LPAR_NAME  MODE  AME  MIN   CURR  MAX   MIN  CURR  MAX
lpar02     ded   0.0  2048  8192  8192  0    0     0
$

(By the way: if the current configuration is not synchronized with the profile (attribute sync_curr_profile), the LPAR tool also updates the profile!)
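For reference, such a DLPAR memory add can also be issued directly on the HMC command line. The exact commands generated by the LPAR tool can always be checked with the “-v” option; a sketch of the equivalent HMC command (with the managed system ms21 assumed from above) would be:

hmc01: chhwres -m ms21 -r mem -o a -p lpar02 -q 1024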

 

Virtual adapters can be listed using “lpar lsvslot“:

$ lpar lsvslot lpar02
SLOT  REQ  ADAPTER_TYPE   STATE  DATA
0     Yes  serial/server  1      remote: (any)/any connect_status=unavailable hmc=1
1     Yes  serial/server  1      remote: (any)/any connect_status=unavailable hmc=1
2     No   eth            1      PVID=123 VLANS= ETHERNET0 XXXXXXXXXXXX
6     No   vnic           -      PVID=1234 VLANS=none XXXXXXXXXXXX failover sriov/ms21-vio1/1/3/0/2700c003/2.0/2.0/20/100.0/100.0,sriov/ms21-vio2/2/1/0/27004004/2.0/2.0/10/100.0/100.0
10    No   fc/client      1      remote: ms21-vio1(1)/47 c050760XXXXX0016,c050760XXXXX0017
20    No   fc/client      1      remote: ms21-vio2(2)/25 c050760XXXXX0018,c050760XXXXX0019
21    No   scsi/client    1      remote: ms21-vio2(2)/20
$

The example shows virtual FC and SCSI adapters as well as a vNIC adapter in slot 6.
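If you prefer to work directly on the HMC, similar information about the virtual slots can be gathered with lshwres; a sketch (the output format differs from the LPAR tool and is not reproduced here):

hmc01: lshwres -m ms21 -r virtualio --rsubtype eth --level lpar --filter lpar_names=lpar02
hmc01: lshwres -m ms21 -r virtualio --rsubtype fc --level lpar --filter lpar_names=lpar02
hmc01: lshwres -m ms21 -r virtualio --rsubtype scsi --level lpar --filter lpar_names=lpar02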

 

Finally, we’ll show how to start a console for an LPAR:

$ lpar console lpar02

Open in progress 
 Open Completed.

…

AIX Version 7
Copyright IBM Corporation, 1982, 2018.
Console login:

…

The console can be terminated with “~.“.

 

Of course, the LPAR tool can do much more.

To be continued.

 

LPAR tool 1.4.0.1 available (including a valid test license)!

In our download area, version 1.4.0.1 of our LPAR tool, including a valid test license (valid until 31st October 2019), is available for download. The license is contained directly in the binaries, so no license key needs to be entered. The included trial license allows use of the LPAR tool for up to 10 HMCs, 100 managed systems and 1000 LPARs.

LPAR tool with test license until 15th September 2019

In our download area, version 1.3.0.2 of our LPAR tool, including a valid test license (valid until 15th September 2019), is available for download. The license is contained directly in the binaries, so no license key needs to be entered. The included trial license allows use of the LPAR tool for up to 10 HMCs, 100 managed systems and 1000 LPARs.

Resources of not activated LPARs and Memory Affinity

When an LPAR is shut down, resources such as processors, memory, and I/O slots are not automatically released. The resources remain assigned to the LPAR and are reused on the next activation (with the current configuration). We already looked at this in the first part of the article, Resources of not activated LPARs.

(Note: In the example output, we use version 1.4 of the LPAR tool, but in all cases we show the underlying commands on the HMC command line, so you can try everything without using the LPAR tool.)

The example LPAR lpar1 was shut down, but currently still occupies 100 GB of memory:

linux $ lpar status lpar1
NAME   LPAR_ID  LPAR_ENV  STATE          PROFILE   SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
lpar1  39       aixlinux  Not Activated  standard  0     inactive  1      0.2         102400  Unknown
linux $

The following commands for the output above were executed on the corresponding HMC hmc01:

hmc01: lssyscfg -r lpar -m ms09 --filter lpar_names=lpar1
hmc01: lshwres -r mem -m ms09 --level lpar --filter lpar_names=lpar1
hmc01: lshwres -r proc -m ms09 --level lpar --filter lpar_names=lpar1

As the output shows, the LPAR lpar1 has still allocated its resources (processors, memory, I/O adapters).

In order to understand why deactivating an LPAR does not release the resources, you have to look at the “Memory Affinity Score”:

linux $ lpar lsmemopt lpar1
             LPAR_SCORE  
LPAR_NAME  CURR  PREDICTED
lpar1      100   0
linux $

HMC command line:

hmc01: lsmemopt -m ms09 -r lpar -o currscore --filter lpar_names=lpar1

The Memory Affinity Score describes how close processors and memory are to each other: the closer the memory is to the processors that use it, the better the memory throughput. The command above indicates, with a value between 1 and 100, how good the affinity between processors and memory of an LPAR is. Our LPAR lpar1 currently has a value of 100, which means the best possible affinity of memory and processors. If the resources were freed when deactivating an LPAR, the LPAR would lose this Memory Affinity Score. On the next activation, the memory affinity would then depend on the memory and processors available at that time. We release the resources once:

linux $ lpar -d rmprocs lpar1 1
linux $

HMC command line:

hmc01: chhwres -m ms09 -r proc  -o r -p lpar1 --procs 1

No score is reported any more, since the LPAR no longer has any resources allocated:

linux $ lpar lsmemopt lpar1
             LPAR_SCORE  
LPAR_NAME  CURR  PREDICTED
lpar1      none  none
linux $

HMC command line:

hmc01: lsmemopt -m ms09 -r lpar -o currscore --filter lpar_names=lpar1

Now we allocate resources again and look at the effect this has on memory affinity:

linux $ lpar applyprof lpar1 standard
linux $

HMC command line:

hmc01: chsyscfg -r lpar -m ms09 -o apply -p lpar1 -n standard

We again determine the Memory Affinity Score:

linux $ lpar lsmemopt lpar1
             LPAR_SCORE  
LPAR_NAME  CURR  PREDICTED
lpar1      53    0
linux $

HMC command line:

hmc01: lsmemopt -m ms09 -r lpar -o currscore --filter lpar_names=lpar1

The score is now only 53; the performance of the LPAR has become worse. Whether and how much this is noticeable ultimately depends on the applications running on the LPAR.

The fact that the resources are not released when an LPAR is deactivated thus guarantees that on the next activation (with the current configuration) the memory affinity, and with it the performance, stays the same.

If you release the resources of an LPAR (manually or automatically), you have to be aware that this affects the LPAR when it is activated again later: the resources are then reassigned, which can result in a worse (but possibly also a better) Memory Affinity Score.

Conversely, before activating a new LPAR, you can also improve the chance of a high Memory Affinity Score for the new LPAR by releasing the resources of inactive LPARs first.

(Note: resource distribution can be changed and improved at runtime using the Dynamic Platform Optimizer DPO. DPO is supported as of POWER8.)
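DPO is started from the HMC. A rough sketch of what this could look like for the managed system ms09 (the exact options should be checked against the HMC documentation for your firmware level):

hmc01: lsmemopt -m ms09 -r sys -o currscore
hmc01: optmem -m ms09 -o start -t affinity
hmc01: lsmemopt -m ms09

The first command shows the current affinity score of the whole managed system, optmem starts the affinity optimization, and lsmemopt without further options can be used to follow the progress of the running operation.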

 

Resources of not activated LPARs

When an LPAR is shut down, resources such as processors, memory, and I/O slots are not automatically released. The resources remain assigned to the LPAR and are reused on the next activation (with the current configuration).

The article will show how such resources are automatically released and, if desired, how to manually release resources of an inactive LPAR.

(Note: In the example output, we use version 1.4 of the LPAR tool, but in all cases we show the underlying commands on the HMC command line, so you can try everything without using the LPAR tool.)

The example LPAR lpar1 was shut down, but currently still occupies 100 GB of memory:

linux $ lpar status lpar1
NAME   LPAR_ID  LPAR_ENV  STATE          PROFILE   SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
lpar1  39       aixlinux  Not Activated  standard  0     inactive  1      0.2         102400  Unknown
linux $

The following commands for the output above were executed on the corresponding HMC hmc01:

hmc01: lssyscfg -r lpar -m ms09 --filter lpar_names=lpar1
hmc01: lshwres -r mem -m ms09 --level lpar --filter lpar_names=lpar1
hmc01: lshwres -r proc -m ms09 --level lpar --filter lpar_names=lpar1

The resource_config attribute of an LPAR indicates whether the LPAR has currently allocated resources (resource_config=1) or not (resource_config=0):

linux $ lpar status -F resource_config lpar1
1
linux $

Or on the HMC command line:

hmc01: lssyscfg -r lpar -m ms09 --filter lpar_names=lpar1 -F resource_config

The resources allocated by a not activated LPAR can be released in two different ways:

  1. Automatic: The resources are needed by another LPAR, e.g. because memory is expanded dynamically or an LPAR is activated for which not enough free resources are available. In this case, resources are automatically removed from a not activated LPAR. We will show this below with an example.
  2. Manual: The allocated resources are explicitly released by the administrator. This is also shown below in an example.

First we show an example in which resources are automatically taken away from a not activated LPAR.

The managed system ms09 currently has about 36 GB free memory:

linux $ ms lsmem ms09
NAME  INSTALLED  FIRMWARE  CONFIGURABLE  AVAIL  MEM_REGION_SIZE
ms09  786432     33792     786432        36352  256
linux $

HMC command line:

hmc01: lshwres -r mem -m ms09 --level sys

We start an LPAR (lpar2) which was configured with 100 GB of RAM. The managed system has only 36 GB of free RAM and is therefore forced to take resources away from inactive LPARs in order to provide the required 100 GB. We start lpar2 with the profile standard and look at the memory relations:

linux $ lpar activate -b sms -p standard lpar2
linux $

HMC command line:

hmc01: chsysstate -m ms09 -r lpar -o on -n lpar2 -b sms -f standard

Overview of the memory relations of lpar1 and lpar2:

linux $ lpar status lpar\*
NAME   LPAR_ID  LPAR_ENV  STATE          PROFILE   SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
lpar1  4        aixlinux  Not Activated  standard  0     inactive  1      0.2         60160   Unknown
lpar2  8        aixlinux  Open Firmware  standard  0     inactive  1      0.2         102400  Unknown
linux $ ms lsmem ms09
NAME  INSTALLED  FIRMWARE  CONFIGURABLE  AVAIL  MEM_REGION_SIZE
ms09  786432     35584     786432        0      256
linux $

HMC command line:

hmc01: lssyscfg -r lpar -m ms09
hmc01: lshwres -r mem -m ms09 --level lpar
hmc01: lshwres -r proc -m ms09 --level lpar
hmc01: lshwres -r mem -m ms09 --level sys

The LPAR lpar2 has 100 GB RAM, the managed system has no more free memory, and the memory allocated by LPAR lpar1 has been reduced to about 60 GB. Allocated resources of not activated LPARs are automatically released when needed and assigned to other LPARs.

But you can of course also release the resources manually. This is also shown briefly here. We are reducing the memory of LPAR lpar1 by 20 GB:

linux $ lpar -d rmmem lpar1 20480
linux $

HMC command line:

hmc01: chhwres -m ms09 -r mem  -o r -p lpar1 -q 20480

As stated, the allocated memory has been reduced by 20 GB:

linux $ lpar status lpar\*
NAME   LPAR_ID  LPAR_ENV  STATE          PROFILE   SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
lpar1  4        aixlinux  Not Activated  standard  0     inactive  1      0.2         39680   Unknown
lpar2  8        aixlinux  Open Firmware  standard  0     inactive  1      0.2         102400  Unknown
linux $ ms lsmem ms09
NAME  INSTALLED  FIRMWARE  CONFIGURABLE  AVAIL  MEM_REGION_SIZE
ms09  786432     35584     786432        20480  256
linux $

HMC command line:

hmc01: lssyscfg -r lpar -m ms09
hmc01: lshwres -r mem -m ms09 --level lpar
hmc01: lshwres -r proc -m ms09 --level lpar
hmc01: lshwres -r mem -m ms09 --level sys

The 20 GB are immediately available to the managed system as free memory. If you remove the entire memory or all processors (or processor units), then all resources of an inactive LPAR are released:

linux $ lpar -d rmmem lpar1 39680
linux $

HMC command line:

hmc01: chhwres -m ms09 -r mem  -o r -p lpar1 -q 39680

Here are the resulting memory relations:

linux $ lpar status lpar\*
NAME   LPAR_ID  LPAR_ENV  STATE          PROFILE   SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
lpar1  4        aixlinux  Not Activated  standard  0     inactive  0      0.0         0       Unknown
lpar2  8        aixlinux  Open Firmware  standard  0     inactive  1      0.2         102400  Unknown
linux $ ms lsmem ms09
NAME  INSTALLED  FIRMWARE  CONFIGURABLE  AVAIL  MEM_REGION_SIZE
ms09  786432     31232     786432        64512  256
linux $

HMC command line:

hmc01: lssyscfg -r lpar -m ms09
hmc01: lshwres -r mem -m ms09 --level lpar
hmc01: lshwres -r proc -m ms09 --level lpar
hmc01: lshwres -r mem -m ms09 --level sys

The LPAR lpar1 now has 0 processors, 0.0 processor units and 0 MB of memory! In addition, the resource_config attribute now has the value 0, which indicates that the LPAR no longer has any resources configured!

linux $ lpar status -F resource_config lpar1
0
linux $

HMC command line:

hmc01: lssyscfg -r lpar -m ms09 --filter lpar_names=lpar1 -F resource_config

Finally, the question arises as to why you should release resources manually at all, if they are released automatically by the managed system when needed.

We will answer this question in a second article.

 

Accessing the Update Access Key Expiration Date from AIX

As part of the introduction of the POWER8 systems, IBM also introduced the “Update Access Key”, which is necessary for performing firmware updates of the managed system. By default, newly delivered systems have an Update Access Key that usually expires after 3 years. Thereafter, the Update Access Key has to be extended every 6 months, which is only possible if a maintenance contract exists (https://www.ibm.com/servers/eserver/ess/index.wss).

Of course, it is easy to find out via the HMC (GUI or CLI) when the current Update Access Key expires. But you can also display the expiration date from AIX with the lscfg command:

In the case of AIX 7.1, this looks like this:

$ lscfg -vpl sysplanar0 | grep -p "System Firmware"
      System Firmware:
...
        Microcode Image.............SV860_138 SV860_103 SV860_138
        Microcode Level.............FW860.42 FW860.30 FW860.42
        Microcode Build Date........20180101 20170628 20180101
        Microcode Entitlement Date..20190825
        Hardware Location Code......U8284.22A.XXXXXXX-Y1
      Physical Location: U8284.22A.XXXXXXX-Y1

In the case of AIX 7.2, the output is slightly different:

$ lscfg -vpl sysplanar0 |grep -p "System Firmware"
      System Firmware:
...
        Microcode Image.............SV860_138 SV860_103 SV860_138
        Microcode Level.............FW860.42 FW860.30 FW860.42
        Microcode Build Date........20180101 20170628 20180101
        Update Access Key Exp Date..20190825
        Hardware Location Code......U8284.22A.XXXXXXX-Y1
      Physical Location: U8284.22A.XXXXXXX-Y1

The relevant lines are “Microcode Entitlement Date” (AIX 7.1) and “Update Access Key Exp Date” (AIX 7.2), respectively.
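If you only need the date itself, e.g. for a monitoring script, the output can be filtered a bit further. A small sketch (the grep pattern covers both the AIX 7.1 and the AIX 7.2 wording); on the AIX 7.2 system from above this would print something like:

$ lscfg -vpl sysplanar0 | grep -E "Entitlement Date|Update Access Key"
        Update Access Key Exp Date..20190825
$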