Multiple Shared Processor Pools: Entitled Pool Capacity

Figure: Distribution of processor shares to shared processor pools and to LPARs in the default shared processor pool, according to EPC or EC.

An important change when using shared processor pools in PowerVM concerns the distribution of unused processor shares of the LPARs. Without shared processor pools, unused processor shares are divided among all uncapped LPARs according to their weights. As soon as shared processor pools are used, the distribution takes place in two stages. Unused processor shares are first distributed to uncapped LPARs within the same shared processor pool. Only the unused processor shares that are not consumed by other LPARs in the same shared processor pool are redistributed to LPARs in other shared processor pools.

Each shared processor pool has a so-called Entitled Pool Capacity (EPC), which is the sum of the guaranteed entitlements of the assigned LPARs and the Reserved Pool Capacity (RPC). The reserved pool capacity can be configured using the reserved_pool_proc_units attribute of the shared processor pool and has the default value 0. Just as the entitlement is guaranteed for a shared processor LPAR, the assignment of the entitled pool capacity is guaranteed for a shared processor pool, regardless of how the shares are then distributed to the associated LPARs in the shared processor pool. Figure 5.15 shows reserved, entitled and maximum pool capacities for a shared processor pool.

The following condition must always be met for the pool capacities:

Reserved Pool Capacity <= Entitled Pool Capacity <= Maximum Pool Capacity

The pool capacities are always shown in the output of “ms lsprocpool“:

$ ms lsprocpool ms06
MS_NAME  PROCPOOL      ID  EC_LPARS  RESERVED  PENDING  ENTITLED  MAX
ms06     DefaultPool   0   7.90      -         -        7.90      -
ms06     SharedPool01  1   0.60      0.10      0.10     0.70      1.00
$

In the column EC_LPARS the guaranteed entitlements of the assigned LPARs are added up, here 0.60 for the pool SharedPool01. The column RESERVED shows the reserved pool capacity (0.10 for SharedPool01), the column ENTITLED shows the entitled pool capacity and finally the column MAX shows the maximum pool capacity. (The SharedPool01 is the shared processor pool from Figure 5.15.)
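
For SharedPool01 the relationship between the columns is thus simply: 0.60 (EC_LPARS) + 0.10 (RESERVED) = 0.70 (ENTITLED).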

The figure above shows how the distribution of processor shares works in the presence of several shared processor pools.

Each shared processor pool receives a share of the processors (cores) according to its entitled pool capacity. Shared processor LPARs in the default shared processor pool receive processor shares according to their entitlement. The unused processor shares are distributed to all LPARs, regardless of shared processor pools, according to their weights (this is not shown in the diagram).

The processor shares assigned to each shared processor pool (according to the entitled pool capacity) are then distributed within the shared processor pool to the associated LPARs according to their entitlement. That means in particular that every LPAR in a shared processor pool continues to receive its guaranteed entitlement!

If an LPAR in a shared processor pool does not consume its entitlement, then these unused processor shares are first distributed within the shared processor pool to other LPARs that need additional processor shares. The distribution then takes place as before, taking into account the weights of the LPARs. Unused processor shares are thus, so to speak, “recycled” within a shared processor pool. If not all unused processor shares in the shared processor pool are used up in this way, then these are redistributed to all LPARs (LPARs with a need for additional processor shares) via the hypervisor, regardless of the associated shared processor pool.

This two-stage distribution of processor shares can be observed very well in a small experiment. We have increased the guaranteed entitlement to 0.8 for the 3 LPARs (lpar1, lpar2 and lpar3):

$ lpar addprocunits lpar1 0.4
$ lpar addprocunits lpar2 0.4
$ lpar addprocunits lpar3 0.4
$

The assignment to the shared processor pools remains: lpar1 and lpar2 are assigned to the shared processor pool benchmark and lpar3 remains assigned to the default pool:

$ lpar -m ms11 lsproc
           PROC         PROCS           PROC_UNITS                        UNCAP   PROC    
LPAR_NAME  MODE    MIN  DESIRED  MAX  MIN  DESIRED  MAX  SHARING_MODE     WEIGHT  POOL
lpar1      shared  1    4        8    0.1  0.8      2.0  uncap            100     benchmark
lpar2      shared  1    4        8    0.1  0.8      2.0  uncap            100     benchmark
lpar3      shared  1    4        8    0.1  0.8      2.0  uncap            100     DefaultPool
ms11-vio1  ded     1    7        8    -    -        -    keep_idle_procs  -       -
ms11-vio2  ded     1    6        8    -    -        -    keep_idle_procs  -       -
$

In the shared processor pool benchmark, the resulting entitled pool capacity is 2 * 0.8 + 0.0 = 1.6 (the reserved pool capacity is 0.0). The entitled pool capacity of the default Shared Processor Pool with only one LPAR is 0.8.

$ ms lsprocpool ms11
MS_NAME  PROCPOOL     ID  EC_LPARS  RESERVED  PENDING  ENTITLED  MAX
ms11     DefaultPool  0   0.80      -         -        0.80      -
ms11     testpool     1   0.00      0.00      0.00     0.00      2.00
ms11     benchmark    2   1.60      0.00      0.00     1.60      2.00
$

We start the benchmark again, this time in parallel on lpar1 (shared processor pool benchmark) and lpar3 (shared processor pool DefaultPool). No load is placed on lpar2 (shared processor pool benchmark); the LPAR stays at a load of approx. 0.00 – 0.01 during the benchmark. This means that the guaranteed entitled pool capacity of 1.6 is available exclusively to lpar1! The guaranteed entitlement of lpar3 in the default pool is only 0.8. Of the 3 physical processors (cores) in the physical shared processor pool, only an entitlement of 3.0 – 1.6 – 0.8 = 0.6 remains, which can be distributed to LPARs that need additional processor shares. Since lpar1 and lpar3 both have the same weight (uncap_weight=100), they each receive an additional 0.3 processing units. That makes for lpar1: 1.6 + 0.3 = 1.9, and for lpar3: 0.8 + 0.3 = 1.1. This can be seen very nicely in the graphs of the processor utilization (figure 5.17). A short time after the start of the benchmark, around 1.9 physical processors (cores) are used on lpar1 and around 1.1 on lpar3. Due to the larger processor share, the benchmark on lpar1 finishes faster, and the processor utilization there goes down. lpar3 then has more processor shares available and, toward the end, uses almost all of the 3 available processors.
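
The same behavior can also be watched live from inside the LPARs, for example with the AIX command lparstat (only the invocation is sketched here and the output is omitted; the physc column shows the physical processors currently consumed, %entc the consumption relative to the entitlement):

lpar1 # lparstat 10 6
...
lpar1 #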

Without additional shared processor pools, all uncapped LPARs benefit from unused processor shares that an LPAR does not use. Since potentially all LPARs get shares of these unused processor shares, the proportion for an individual LPAR is not so large. If additional shared processor pools are used, uncapped LPARs in the same shared processor pool benefit primarily from unused processor shares of an LPAR. These are fewer LPARs and therefore the proportion of additional processor capacity per LPAR is higher.


Adding Logical SR-IOV Ports

SR-IOV Ethernet port with internal switch and 3 logical ports.

In order for an LPAR to be able to use a virtual function of an SR-IOV adapter in PowerVM, a so-called logical port must be created for the LPAR. Which logical ports already exist can be displayed with the command “ms lssriov” and the option “-l” (logical port):

$ ms lssriov -l ms03
LOCATION_CODE  ADAPTER  PPORT  LPORT  LPAR  CAPACITY  CURR_MAC_ADDR  CLIENTS
$

Since the SR-IOV adapters have just been configured to shared mode, there are of course no logical ports yet. To add a logical SR-IOV port to an LPAR, the command “lpar addsriov” (add SR-IOV logical port) is used. In addition to the LPAR, the adapter ID and the port ID of the physical port must be specified. Alternatively, a unique suffix of the physical location code of the physical port can also be specified:

$ lpar addsriov aix22 P1-C11-T1
$

The creation can take a few seconds. A quick check shows that a logical port has actually been created:

$ ms lssriov -l ms03
LOCATION_CODE                   ADAPTER  PPORT  LPORT     LPAR   CAPACITY  CURR_MAC_ADDR  CLIENTS
U78AA.001.VYRGU0Q-P1-C11-T1-S1  1        0      27004001  aix22  2.0       a1b586737e00   -
$

As with virtual Ethernet on a managed system, an internal switch is implemented on the SR-IOV adapter for each physical Ethernet port (see the figure above). One of the virtual functions is assigned to each logical port. The associated LPARs access the logical ports directly via the PCI Express bus (PCIe switch).

An LPAR can easily have several logical SR-IOV ports. With the command “lpar lssriov” (list SR-IOV logical ports) all logical ports of an LPAR can be displayed:

$ lpar lssriov aix22
LPORT     REQ  ADAPTER  PPORT  CONFIG_ID  CAPACITY  MAX_CAPACITY  PVID  VLANS  CURR_MAC_ADDR  CLIENTS
27004001  Yes  1        0      0          2.0       100.0         0     all    a1b586737e00   -
$

There are a number of attributes that can be specified for a logical port when it is created. Among other things, the following properties can be configured:

    • capacity – the guaranteed capacity for the logical port.
    • port_vlan_id – the VLAN ID for untagged packets or 0 to switch off VLAN tagging.
    • promisc_mode – switch promiscuous mode on or off.

The complete list of attributes and their possible values can be found in the online help (“lpar help addsriov“).

As an example we add another logical port with port VLAN-ID 55 and a capacity of 20% to the LPAR aix22:

$ lpar addsriov aix22 P1-C4-T2 port_vlan_id=55 capacity=20
$

The generated logical port thus has a guaranteed share of 20% of the bandwidth of the physical port P1-C4-T2! The LPAR now has 2 logical SR-IOV ports:

$ lpar lssriov aix22
LPORT     REQ  ADAPTER  PPORT  CONFIG_ID  CAPACITY  MAX_CAPACITY  PVID  VLANS  CURR_MAC_ADDR  CLIENTS
27004001  Yes  1        0      0          2.0       100.0         0     all    a1b586737e00   -
2700c003  Yes  3        2      1          20.0      100.0         55    all    a1b586737e01   -
$

After the logical ports have been added to the LPAR by the PowerVM hypervisor, they initially appear under AIX in the Defined state. The logical ports show up as ent devices, just like all other Ethernet adapters!

aix22 # lsdev -l ent\*
ent0 Available       Virtual I/O Ethernet Adapter (l-lan)
ent1 Defined   00-00 PCIe2 10GbE SFP+ SR 4-port Converged Network Adapter VF (df1028e214100f04)
ent2 Defined   01-00 PCIe2 100/1000 Base-TX 4-port Converged Network Adapter VF (df1028e214103c04)
aix22 #

After the config manager cfgmgr has run, the new ent devices are in the Available state and can be used in exactly the same way as all other Ethernet adapters.
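
A minimal sketch of this on aix22 (the cfgmgr output is omitted; the devices are the ones from the listing above):

aix22 # cfgmgr
aix22 # lsdev -l ent\*
ent0 Available       Virtual I/O Ethernet Adapter (l-lan)
ent1 Available 00-00 PCIe2 10GbE SFP+ SR 4-port Converged Network Adapter VF (df1028e214100f04)
ent2 Available 01-00 PCIe2 100/1000 Base-TX 4-port Converged Network Adapter VF (df1028e214103c04)
aix22 #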


Adding a Virtual Ethernet Adapter

Figure: Delivery of tagged packets, here for VLAN 200.

If in a PowerVM environment a virtual Ethernet adapter is to be added to an active LPAR using the LPAR-Tool, the LPAR must have an active RMC connection to an HMC. This requires an active Ethernet adapter (physical or virtual). A free virtual slot is required for the virtual Ethernet adapter.

$ lpar lsvslot aix22
SLOT  REQ  ADAPTER_TYPE   STATE  DATA
0     Yes  serial/server  1      remote: (any)/any connect_status=unavailable hmc=1
1     Yes  serial/server  1      remote: (any)/any connect_status=unavailable hmc=1
5     No   eth            1      PVID=100 VLANS= ETHERNET0 1DC8DB485D1E
10    No   fc/client      1      remote: ms03-vio1(1)/5 c05076030aba0002,c05076030aba0003
20    No   fc/client      1      remote: ms03-vio2(2)/4 c05076030aba0000,c05076030aba0001
$

The virtual slot 6 is not yet used by the LPAR aix22. A virtual Ethernet adapter can be added with the command “lpar addeth“. At least the desired virtual slot number for the adapter and the desired port VLAN ID must be specified:

$ lpar addeth aix22 6 900
$

In the example, a virtual Ethernet adapter for aix22 with port VLAN ID 900 was created in slot 6. If the slot number doesn’t matter, the keyword auto can be specified instead of a number; the LPAR tool then automatically assigns a free slot number. The virtual adapter is available immediately, but must first be made known to the operating system. How this happens exactly depends on the operating system used. In the case of AIX there is the cfgmgr command for this purpose.
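
Purely as an illustration (not executed in this example), the call with automatic slot assignment would look like this:

$ lpar addeth aix22 auto 900
$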

After the virtual Ethernet adapter has been added, but before a run of cfgmgr is started, only the virtual Ethernet adapter ent0 is known to the AIX operating system of the LPAR aix22:

aix22 # lscfg -l ent*
  ent0             U9009.22A.8991971-V30-C5-T1  Virtual I/O Ethernet Adapter (l-lan)
aix22 #

After a run of cfgmgr the newly added virtual Ethernet adapter appears as ent1:

aix22 # cfgmgr
aix22 # lscfg -l ent*
  ent0             U9009.22A.8991971-V30-C5-T1  Virtual I/O Ethernet Adapter (l-lan)
  ent1             U9009.22A.8991971-V30-C6-T1  Virtual I/O Ethernet Adapter (l-lan)
aix22 #

Note: On AIX, the device name for an Ethernet adapter cannot be used to identify the type. Regardless of whether an Ethernet adapter is physical or virtual or a virtual function of an SR-IOV adapter, the device name ent with an ascending instance number is always used.
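
The type can nevertheless be derived from the description that lscfg or lsdev reports for the device: virtual adapters appear as “Virtual I/O Ethernet Adapter (l-lan)”, SR-IOV virtual functions carry a “… VF” suffix, and physical adapters show the adapter product name. A quick overview, shown here only as a sketch (the output naturally depends on the system; here the two virtual adapters of aix22):

aix22 # lsdev -Cc adapter -F "name description" | grep ^ent
ent0 Virtual I/O Ethernet Adapter (l-lan)
ent1 Virtual I/O Ethernet Adapter (l-lan)
aix22 #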

If an IEEE 802.1q compatible virtual Ethernet adapter with additional VLAN IDs is to be created, the option “-i” (IEEE 802.1q compatible adapter) must be used. Alternatively, the ieee_virtual_eth=1 attribute can also be specified. The additional VLAN IDs are specified as a comma-separated list:

$ lpar addeth -i aix22 7 900 100,200,300
$

The port VLAN ID is 900, and the additional VLAN IDs are 100, 200 and 300.

If an LPAR has no active RMC connection or is not active, then a virtual Ethernet adapter can only be added to one of the profiles of the LPAR. This is always the case, for example, if the LPAR has just been created and has not yet been installed.

In this case, only the option “-p” with a profile name has to be added to the commands shown. Which profiles an LPAR has can easily be found out using “lpar lsprof” (list profiles):

$ lpar lsprof aix22
NAME                      MEM_MODE  MEM   PROC_MODE  PROCS  PROC_COMPAT
standard                  ded       7168  ded        2      default
last*valid*configuration  ded       7168  ded        2      default
$

(The last active configuration is stored in the profile with the name last*valid*configuration.)

The virtual adapters defined in the profile standard can then be displayed by specifying the profile name with “lpar lsvslot“:

$ lpar -p standard lsvslot aix22
SLOT  REQ  ADAPTER_TYPE   DATA
0     Yes  serial/server  remote: (any)/any connect_status= hmc=1
1     Yes  serial/server  remote: (any)/any connect_status= hmc=1
5     No   eth            PVID=100 VLANS= ETHERNET0 
6     No   eth            PVID=900 VLANS= ETHERNET0 
7     No   eth            IEEE PVID=900 VLANS=100,200,300 ETHERNET0 
10    No   fc/client      remote: ms03-vio1(1)/5 c05076030aba0002,c05076030aba0003
20    No   fc/client      remote: ms03-vio2(2)/4 c05076030aba0000,c05076030aba0001
$

When adding the adapter, only the corresponding profile name has to be given, otherwise the command looks exactly as shown above:

$ lpar -p standard addeth -i aix22 8 950 150,250
$

In order to make the new adapter in slot 8 available, the LPAR must be activated again, specifying the profile name.
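
On the HMC this corresponds to activating the LPAR with the desired profile using chsysstate, roughly as follows (the HMC host name is only illustrative; it is assumed, as the virtual I/O server names above suggest, that aix22 runs on the managed system ms03):

hmc01:~> chsysstate -r lpar -m ms03 -o on -n aix22 -f standard
hmc01:~>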


Monitoring virtual FC Client Traffic

With the LPAR tool, statistics for all virtual FC clients can be displayed at any time using the “vios fcstat” command. This allows you to determine at any time which client LPARs have which I/O throughput (when using NPIV).

Which NPIV-capable FC adapters are available on a virtual I/O server can easily be found out with “vios lsnports“:

$ vios lsnports ms15-vio1
NAME  PHYSLOC                     FABRIC  TPORTS  APORTS  SWWPNS  AWWPNS
fcs0  U78CB.001.XXXXXXX-P1-C5-T1  1       64      62      2032    2012
fcs1  U78CB.001.XXXXXXX-P1-C5-T2  1       64      62      2032    2012
fcs2  U78CB.001.XXXXXXX-P1-C5-T3  1       64      61      2032    1979
fcs3  U78CB.001.XXXXXXX-P1-C5-T4  1       64      61      2032    1979
fcs4  U78CB.001.XXXXXXX-P1-C3-T1  1       64      50      3088    3000
fcs5  U78CB.001.XXXXXXX-P1-C3-T2  1       64      63      3088    3077
$

We display the FC client statistics with the command “vios fcstat”. By default, the data for all virtual FC clients of the specified virtual I/O server are shown every 10 seconds:

$ vios fcstat ms15-vio1
HOSTNAME   PHYSDEV  WWPN                DEV    INREQS    INBYTES      OUTREQS    OUTBYTES     CTRLREQS
ms15-vio1  fcs1     0x210000XXXXX56EC5  fcs1   774.75/s  129.51 MB/s  1332.71/s   92.96 MB/s  20
aixtsmp1   fcs2     0xC050760XXXXX0058  fcs6   318.10/s   83.39 MB/s  481.34/s   126.18 MB/s  0
ms15-vio1  fcs2     0x210000XXXXX56EC6  fcs2   318.10/s   83.39 MB/s  480.78/s   126.03 MB/s  0
aixtsmp1   fcs5     0xC050760XXXXX003E  fcs0   583.98/s   60.35 MB/s  1835.17/s  124.86 MB/s  0
ms15-vio1  fcs5     0x10000090XXXXX12D  fcs5   583.70/s   60.27 MB/s  1836.21/s  124.92 MB/s  0
ms15-vio1  fcs0     0x21000024XXXXXEC4  fcs0   923.19/s  165.08 MB/s  1032.81/s   17.25 MB/s  46
aixtsmp3   fcs1     0xC050760XXXXX00E4  fcs0   775.12/s  129.48 MB/s  1047.32/s   17.15 MB/s  20
aixtsmp3   fcs0     0xC050760XXXXX00DE  fcs1   775.78/s  128.99 MB/s  1037.99/s   17.39 MB/s  20
aixtsmp1   fcs1     0xC050760XXXXX0056  fcs5     0.00/s    0.00 B/s   290.39/s    76.12 MB/s  0
aixtsmp1   fcs0     0xC050760XXXXX0052  fcs4   142.89/s   36.12 MB/s    0.00/s     0.00 B/s   26
ms15-vio1  fcs4     0x10000090XXXXX12C  fcs4   234.97/s    4.58 MB/s  621.78/s    11.12 MB/s  40
cus1dbp01  fcs4     0xC050760XXXXX0047  fcs0   243.55/s    5.05 MB/s  432.33/s     9.95 MB/s  0
cus1dbi01  fcs4     0xC050760XXXXX0044  fcs1     0.94/s   10.42 KB/s   87.28/s   459.26 KB/s  0
...
HOSTNAME   PHYSDEV  WWPN                DEV    INREQS     INBYTES      OUTREQS    OUTBYTES     CTRLREQS
aixtsmp1   fcs5     0xC050760XXXXX003E  fcs0   1772.84/s  162.24 MB/s  1309.30/s   70.60 MB/s  68
ms15-vio1  fcs5     0x10000090XXXXX12D  fcs5   1769.13/s  161.95 MB/s  1305.60/s   70.54 MB/s  68
ms15-vio1  fcs1     0x21000024XXXXXEC5  fcs1   883.55/s   118.97 MB/s  1551.97/s  108.78 MB/s  43
ms15-vio1  fcs2     0x21000024XXXXXEC6  fcs2   201.09/s    52.72 MB/s  497.26/s   130.35 MB/s  0
aixtsmp1   fcs2     0xC050760XXXXX0058  fcs6   201.09/s    52.72 MB/s  495.40/s   129.87 MB/s  0
ms15-vio1  fcs0     0x21000024XXXXXEC4  fcs0   923.54/s   128.89 MB/s  1234.98/s   23.31 MB/s  65
aixtsmp3   fcs0     0xC050760XXXXX00DE  fcs1   876.93/s   118.93 MB/s  1234.98/s   23.32 MB/s  44
aixtsmp3   fcs1     0xC050760XXXXX00E4  fcs0   884.17/s   119.07 MB/s  1223.50/s   23.00 MB/s  43
aixtsmp1   fcs1     0xC050760XXXXX0056  fcs5     0.00/s     0.00 B/s   325.83/s    85.41 MB/s  0
...
^C
$

The output shows the LPAR name, the physical FC port (PHYSDEV) on the virtual I/O server, the WWPN of the client adapter, the virtual FC client port (DEV), as well as the number of requests (INREQS and OUTREQS) and the bytes transferred (INBYTES and OUTBYTES). The transfer rates are output in KB/s, MB/s or GB/s. The output can be very long on larger systems! The output is sorted according to throughput, i.e. the most active virtual client adapters are shown first. With the option ‘-t‘ (top) the output can be restricted to a desired number of data records: e.g. with ‘-t 10‘ only the ten adapters with the highest throughput are shown. In addition, the interval length (in seconds) can be specified via a further argument; here is a short example:

$ vios fcstat -t 10 ms15-vio1 2
HOSTNAME   PHYSDEV  WWPN                DEV   INREQS     INBYTES      OUTREQS    OUTBYTES     CTRLREQS
ms15-vio1  fcs1     0x21000024XXXXXEC5  fcs1  1034.58/s   86.56 MB/s  2052.23/s  160.11 MB/s  20
ms15-vio1  fcs5     0x10000090XXXXX12D  fcs5  1532.63/s  115.60 MB/s  1235.72/s  118.32 MB/s  40
aixtsmp1   fcs5     0xC050760XXXXX003E  fcs0  1510.33/s  114.88 MB/s  1236.49/s  118.27 MB/s  40
aixtsmp3   fcs1     0xC050760XXXXX00E4  fcs0  1036.11/s   86.67 MB/s  1612.25/s   44.86 MB/s  20
aixtsmp3   fcs0     0xC050760XXXXX00DE  fcs1  1031.50/s   86.29 MB/s  1588.02/s   44.27 MB/s  20
ms15-vio1  fcs0     0x21000024XXXXXEC4  fcs0  1029.58/s   86.31 MB/s  1567.63/s   43.65 MB/s  20
aixtsmp1   fcs1     0xC050760XXXXX0056  fcs5    0.00/s     0.00 B/s   436.52/s   114.43 MB/s  0
ms15-vio1  fcs2     0x21000024XXXXXEC6  fcs2    0.00/s     0.00 B/s   435.75/s   114.23 MB/s  0
aixtsmp1   fcs2     0xC050760XXXXX0058  fcs6    0.00/s     0.00 B/s   432.68/s   113.42 MB/s  0
ms15-vio1  fcs4     0x10000090XXXXX12C  fcs4  144.99/s     0.78 MB/s  478.83/s     2.22 MB/s  46
HOSTNAME   PHYSDEV  WWPN                DEV   INREQS    INBYTES      OUTREQS    OUTBYTES     CTRLREQS
aixtsmp1   fcs5     0xC050760XXXXX003E  fcs0  758.14/s   35.55 MB/s  1822.99/s  112.60 MB/s  0
ms15-vio1  fcs5     0x10000090XXXXX12D  fcs5  757.38/s   35.52 MB/s  1821.46/s  112.59 MB/s  0
ms15-vio1  fcs0     0x21000024XXXXXEC4  fcs0  944.23/s   85.09 MB/s  1657.58/s   41.40 MB/s  2
aixtsmp3   fcs0     0xC050760XXXXX00DE  fcs1  943.47/s   85.15 MB/s  1636.90/s   40.68 MB/s  2
ms15-vio1  fcs1     0x21000024XXXXXEC5  fcs1  949.21/s   84.88 MB/s  1586.74/s   39.41 MB/s  2
aixtsmp3   fcs1     0xC050760XXXXX00E4  fcs0  946.53/s   84.64 MB/s  1584.83/s   39.40 MB/s  2
ms15-vio1  fcs4     0x10000090XXXXX12C  fcs4   39.44/s  449.92 KB/s  676.97/s     3.63 MB/s  10
cus1dbp01  fcs4     0xC050760XXXXX0047  fcs0   29.10/s  471.69 KB/s  310.92/s     1.28 MB/s  4
cus1mqp01  fcs4     0xC050760XXXXX002C  fcs0    1.91/s    4.71 KB/s  230.12/s     1.66 MB/s  0
cus2orap01 fcs4     0xC050760XXXXX000F  fcs0    0.77/s    4.31 KB/s   48.25/s   263.49 KB/s  0
^C
$

The option ‘-s‘ (select) can be used to select and show only data records from a specific client (‘-s hostname=aixtsmp1‘) or only data records from a specific physical port (‘-s physdev=fcs1‘):

$ vios fcstat -s hostname=aixtsmp1 ms15-vio1 2
HOSTNAME  PHYSDEV  WWPN                DEV   INREQS     INBYTES      OUTREQS    OUTBYTES     CTRLREQS
aixtsmp1  fcs5     0xC050760XXXXX003E  fcs0  1858.72/s   51.14 MB/s  1231.82/s  104.20 MB/s  0
aixtsmp1  fcs2     0xC050760XXXXX0058  fcs6    6.94/s     1.82 MB/s    6.94/s     1.82 MB/s  0
aixtsmp1  fcs4     0xC050760XXXXX0042  fcs2    0.39/s     1.19 KB/s    0.39/s   395.05 B/s   0
aixtsmp1  fcs1     0xC050760XXXXX0056  fcs5    0.39/s     7.72 B/s     0.00/s     0.00 B/s   1
aixtsmp1  fcs0     0xC050760XXXXX0052  fcs4    0.00/s     0.00 B/s     0.00/s     0.00 B/s   0
aixtsmp1  fcs3     0xC050760XXXXX005A  fcs7    0.00/s     0.00 B/s     0.00/s     0.00 B/s   0
HOSTNAME  PHYSDEV  WWPN                DEV   INREQS     INBYTES      OUTREQS    OUTBYTES     CTRLREQS
aixtsmp1  fcs5     0xC050760XXXXX003E  fcs0  1760.48/s  111.48 MB/s  1125.70/s   95.20 MB/s  0
aixtsmp1  fcs2     0xC050760XXXXX0058  fcs6    8.53/s     2.24 MB/s  484.61/s   127.04 MB/s  0
aixtsmp1  fcs1     0xC050760XXXXX0056  fcs5    0.00/s     0.00 B/s   469.04/s   122.96 MB/s  0
aixtsmp1  fcs4     0xC050760XXXXX0042  fcs2    0.37/s     1.14 KB/s    0.00/s     0.00 B/s   0
aixtsmp1  fcs0     0xC050760XXXXX0052  fcs4    0.00/s     0.00 B/s     0.00/s     0.00 B/s   0
aixtsmp1  fcs3     0xC050760XXXXX005A  fcs7    0.00/s     0.00 B/s     0.00/s     0.00 B/s   0
^C
$

With the “vios fcstat” command, FC throughput of any LPAR can be shown at any time in an extremely simple way, at the push of a button, so to speak.

If the intervals are smaller, the accuracy of the displayed values suffers. At 2 second intervals the inaccuracy is approx. 10%. However, the relationship between the displayed values is still correct.

The “label” Attribute for FC Adapters

As of AIX 7.2 TL4 and VIOS 3.1.1.10 there is a new attribute “label” for physical FC adapters. The administrator can set this attribute to any character string (maximum 255 characters). Even if the attribute is only informative, it can be extremely useful in PowerVM virtualization environments. If you have a large number of managed systems, it is not always clear to which FC fabric a certain FC port is connected. This can of course be looked up in the documentation of your systems, but it does involve a certain amount of effort. It is easier if you link this information directly with the FC adapters, which is exactly what the new “label” attribute allows in a simple way. On AIX:

# chdev -l fcs0 -U -a label="Fabric_1"
fcs0 changed
# lsattr -El fcs0 -a label -F value
Fabric_1
#

On virtual I/O servers, the attribute can also be set using the padmin account:

/home/padmin> chdev -dev fcs1 -attr label="Fabric_2" -perm
fcs1 changed
/home/padmin> lsdev -dev fcs1 -attr label                
value

Fabric_2
/home/padmin>

The attribute is also defined for older FC adapters.

If the “label” attribute is used consistently, it is always possible to determine online, for each FC adapter, to which fabric the adapter is connected. This information only needs to be stored once per FC adapter.
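
With the labels in place, a small loop then gives a quick fabric overview of all FC adapters of a system (a sketch for AIX/ksh; the two output lines are only illustrative and correspond to the examples above):

# for fcs in $(lsdev -Cc adapter -F name | grep ^fcs)
> do
>   echo "$fcs: $(lsattr -El $fcs -a label -F value)"
> done
fcs0: Fabric_1
fcs1: Fabric_2
#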

(Note: The “label” attribute is not implemented for AIX 7.1, at least not until 7.1 TL5 SP6.)

HSCF0180E Operation failed for

When trying to update the system firmware of a managed system via the HMC Command Line, we encountered the following error message:

hmc01:~> updlic -o a -t all -l latest -m ms26 -r sftp -h X.X.X.X -u XXXXXXXX --passwd XXXXXXXX -d /firmware/system/01VL940_071_027
HSCF0180E Operation failed for ms26 (9009-22A*XXXXXXX).
Could not unpack the firmware update package.
Check the health and available disk space of the file system.
hmc01:~>

The error message that was displayed suggested checking the available space in the HMC file systems:

hmc01:~> lshmcfs
filesystem=/var,filesystem_size=7935,filesystem_avail=4955,temp_files_start_time=11/22/2018 12:59:00,temp_files_size=2011
filesystem=/dump,filesystem_size=60347,filesystem_avail=55935,temp_files_start_time=02/15/2021 10:21:00,temp_files_size=0
filesystem=/extra,filesystem_size=20030,filesystem_avail=15939,temp_files_start_time=none,temp_files_size=0
filesystem=/,filesystem_size=15615,filesystem_avail=4369,temp_files_start_time=02/15/2021 06:05:00,temp_files_size=4
hmc01:~>

Actually, the available space should have been sufficient, but to be on the safe side we cleaned up some temporary files:

hmc01:~> chhmcfs -o f -d 5
hmc01:~>

The updlic command was unimpressed and returned the same error message.

The removal of some old firmware versions from the HMC‘s local disk repository was also unsuccessful:

hmc01:~> updlic -o p --ecnumber 01AL740
hmc01:~> updlic -o p --ecnumber 01AL770
hmc01:~> updlic -o p --ecnumber 01AM740
hmc01:~>

The error message was still the same. Obviously, contrary to the information in the error message, the problem had nothing to do with the available space on the HMC!

We then took a closer look at the downloaded firmware. We downloaded the firmware as an ISO file H75557812_01VL940_071_027.iso from the IBM website and then mounted it on our NIM server using the loopmount command:

aixnim:/root> loopmount -i /tmp/H75557812_01VL940_071_027.iso -o "-o ro -V cdrfs" -m /mnt
aixnim:/root> ls -l /mnt
total 528296
-rw-r--r--    1 102010979 213            1860 Feb 04 09:08 01VL940071_special_instructs.xml.special.note.xml
-rw-r-----    1 102010979 210            7290 Feb 04 09:08 01VL940_071_027.dd.xml
-rw-r--r--    1 102010979 213           95687 Feb 04 09:07 01VL940_071_027.html
-rw-r-----    1 102010979 210            2971 Feb 04 09:08 01VL940_071_027.pd.sdd
-rw-r-----    1 102010979 210           67338 Feb 04 09:08 01VL940_071_027.readme.txt
-rw-r-----    1 102010979 210       134969022 Feb 04 09:08 01VL940_071_027.rpm
-rw-r-----    1 102010979 210       135328848 Feb 04 09:08 01VL940_071_027.tar.gz
-rw-r-----    1 102010979 210            9442 Feb 04 09:08 01VL940_071_027.xml
aixnim:/root>

What we hadn’t noticed when copying the files was the missing read permission for “other” on most of the files.
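
Granting read access for “other” on the copied files is all that is needed, e.g. with chmod (shown here with the NIM server prompt, assuming the files were copied to the directory used in the updlic call above and that this server is also the sftp server):

aixnim:/root> chmod o+r /firmware/system/01VL940_071_027/*
aixnim:/root>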

After we had assigned read permissions for all files, the next update attempt was successful:

hmc01:~> updlic -o a -t all -l latest -m ms26 -r sftp -h X.X.X.X -u XXXXXX --passwd XXXXXXXX -d /firmware/system/01VL940_071_027 
HSCF0179W Operation was partially successful for ms26 (9009-22A*XXXXXXX).
The following deferred fixes are present in the fix pack.  Deferred fixes will be activated after the next IPL of the system.
An immediate IPL is not required, unless you want to activate one of the fixes below now.
..
hmc01:~>

LPAR-Tool 1.6.0.0 is available now

Version 1.6.0.0 of our LPAR tool is now available in our download area!

New features are:

  • Online monitoring of SEA client statistics (vios help seastat)
  • Online monitoring of virtual FC client adapters (vios help fcstat)
  • Display of historical processor and memory data (lpar help lsmem, lpar help lsproc)

The article Monitoring SEA Traffic shows the options for calling up SEA client statistics.

The Impact of FC-Ports without a Link

FC ports that are not used and do not have a link should be deactivated, as these significantly extend the runtime of a series of commands and operations (e.g. LPM).

(Note: our LPAR tool is used in some examples, but the corresponding commands on the HMC or the virtual I/O server are always shown!)

Two 4-port FC adapters are in use on one of our virtual I/O servers (ms26-vio1):

$ lpar lsslot ms26-vio1
DRC_NAME                  DRC_INDEX  IOPOOL  DESCRIPTION
U78D3.001.XXXXXXX-P1-C49  21040015   none    PCIe3 x8 SAS RAID Internal Adapter 6Gb
U78D3.001.XXXXXXX-P1-C7   2103001C   none    PCIe3 4-Port 16Gb FC Adapter
U78D3.001.XXXXXXX-P1-C2   21010021   none    PCIe3 4-Port 16Gb FC Adapter
$
(HMC: lshwres -r io --rsubtype slot -m ms26 --filter lpar_names=ms26-vio1)

However, only 2 of the 8 ports are cabled:

$ vios lsnports ms26-vio1
NAME  PHYSLOC                     FABRIC  TPORTS  APORTS  SWWPNS  AWWPNS
fcs0  U78D3.001.XXXXXXX-P1-C2-T1  1       64      64      3072    3072
fcs4  U78D3.001.XXXXXXX-P1-C7-T1  1       64      64      3072    3072
$
(VIOS: lsnports)

When working with the virtual I/O server, it is noticeable that some commands have an unexpectedly long runtime and sometimes hang for quite a while. Some example commands are given below, along with the measured runtimes:

(0)padmin@ms26-vio1:/home/padmin> time netstat -cdlistats
…
Error opening device: /dev/fscsi1
errno: 00000045

Error opening device: /dev/fscsi2
errno: 00000045

Error opening device: /dev/fscsi3
errno: 00000045

Error opening device: /dev/fscsi5
errno: 00000045

Error opening device: /dev/fscsi6
errno: 00000045

Error opening device: /dev/fscsi7
errno: 00000045

real    1m13.56s
user    0m0.03s
sys     0m0.10s
(0)padmin@ms26-vio1:/home/padmin>
(0)padmin@ms26-vio1:/home/padmin> time lsnports
name             physloc                        fabric tports aports swwpns  awwpns
fcs0             U78D3.001.XXXXXXX-P1-C2-T1          1     64     64   3072    3072
fcs4             U78D3.001.XXXXXXX-P1-C7-T1          1     64     64   3072    3072

real    0m11.61s
user    0m0.01s
sys     0m0.00s
(0)padmin@ms26-vio1:/home/padmin>
(0)padmin@ms26-vio1:/home/padmin> time fcstat fcs1

Error opening device: /dev/fscsi1
errno: 00000045

real    0m11.31s
user    0m0.01s
sys     0m0.01s
(4)padmin@ms26-vio1:/home/padmin>

LPM operations also take significantly longer, since all FC ports are examined when searching for suitable FC ports for the necessary NPIV mappings. This can lead to delays in the range of minutes before the migration is finally started.

In order to avoid these unnecessarily long runtimes, FC ports that are not wired should not be activated. The fscsi device has the attribute autoconfig, with the possible values defined and available. By default, the value available is used, which means that the kernel configures and activates the device, even if it has no link, which leads to the waiting times shown above. If the autoconfig attribute is set to defined, the fscsi device is not activated, it then remains in the defined state.

The following example shows how to reconfigure the fscsi1 device:

$ vios chdev ms26-vio1 fscsi1 autoconfig=defined
$
(VIOS: chdev -dev fscsi1 -attr autoconfig=defined)
$
$ vios rmdev ms26-vio1 fscsi1
$
(VIOS: rmdev -dev fscsi1 -ucfg)
$
$ vios lsdev ms26-vio1 fscsi1
NAME    STATUS   PHYSLOC                     PARENT  DESCRIPTION
fscsi1  Defined  U78D3.001.XXXXXXX-P1-C2-T2  fcs1    FC SCSI I/O Controller Protocol Device
$
(VIOS: lsdev -dev fscsi1)
$
$  vios lsattr ms26-vio1 fscsi1
ATTRIBUTE     VALUE      DESCRIPTION                            USER_SETTABLE
attach        none       How this adapter is CONNECTED          False
autoconfig    defined    Configuration State                    True
dyntrk        yes        Dynamic Tracking of FC Devices         True+
fc_err_recov  fast_fail  FC Fabric Event Error RECOVERY Policy  True+
scsi_id       Adapter    SCSI ID                                False
sw_fc_class   3          FC Class for Fabric                    True
$
(VIOS: lsdev -dev fscsi1 -attr)
$

With the autoconfig=defined attribute, the fscsi device remains defined even when the cfgmgr is run!
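
This can be checked quickly on the virtual I/O server after a device reconfiguration, for example as follows (a sketch; cfgdev is the padmin counterpart of cfgmgr):

(0)padmin@ms26-vio1:/home/padmin> cfgdev
(0)padmin@ms26-vio1:/home/padmin> lsdev -dev fscsi1
name             status      description
fscsi1           Defined     FC SCSI I/O Controller Protocol Device
(0)padmin@ms26-vio1:/home/padmin>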

If one repeats the runtime measurement of the commands above, one can see that the runtime of the commands has already measurably improved:

(0)padmin@ms26-vio1:/home/padmin> time netstat -cdlistats
…
Error opening device: /dev/fscsi1
errno: 00000005

Error opening device: /dev/fscsi2
errno: 00000045

Error opening device: /dev/fscsi3
errno: 00000045

Error opening device: /dev/fscsi5
errno: 00000045

Error opening device: /dev/fscsi6
errno: 00000045

Error opening device: /dev/fscsi7
errno: 00000045

real    1m1.02s
user    0m0.04s
sys     0m0.10s
(0)padmin@ms26-vio1:/home/padmin>
(0)padmin@ms26-vio1:/home/padmin> time lsnports
name             physloc                        fabric tports aports swwpns  awwpns
fcs0             U78D3.001.XXXXXXX-P1-C2-T1          1     64     64   3072    3072
fcs4             U78D3.001.XXXXXXX-P1-C7-T1          1     64     64   3072    3072

real    0m9.70s
user    0m0.00s
sys     0m0.01s
(0)padmin@ms26-vio1:/home/padmin>
(0)padmin@ms26-vio1:/home/padmin> time fcstat fcs1

Error opening device: /dev/fscsi1
errno: 00000005

real    0m0.00s
user    0m0.02s
sys     0m0.00s
(4)padmin@ms26-vio1:/home/padmin>

The running time of the netstat command was shortened by 12 seconds, the lsnports command was about 2 seconds faster.

We now also set the autoconfig attribute to defined for all other unused FC ports:

$ for fscsi in fscsi2 fscsi3 fscsi5 fscsi6 fscsi7
> do
> vios chdev ms26-vio1 $fscsi autoconfig=defined
> vios rmdev ms26-vio1 $fscsi
> done
$

Now we repeat the runtime measurement of the commands again:

(0)padmin@ms26-vio1:/home/padmin> time netstat -cdlistats
…
Error opening device: /dev/fscsi1
errno: 00000005

Error opening device: /dev/fscsi2
errno: 00000005

Error opening device: /dev/fscsi3
errno: 00000005

Error opening device: /dev/fscsi5
errno: 00000005

Error opening device: /dev/fscsi6
errno: 00000005

Error opening device: /dev/fscsi7
errno: 00000005

real    0m0.81s
user    0m0.03s
sys     0m0.10s
(0)padmin@ms26-vio1:/home/padmin>
(0)padmin@ms26-vio1:/home/padmin> time lsnports         
name             physloc                        fabric tports aports swwpns  awwpns
fcs0             U78D3.001.XXXXXXX-P1-C2-T1          1     64     64   3072    3072
fcs4             U78D3.001.XXXXXXX-P1-C7-T1          1     64     64   3072    3072

real    0m0.00s
user    0m0.01s
sys     0m0.01s
(0)padmin@ms26-vio1:/home/padmin> time fcstat fcs1       

Error opening device: /dev/fscsi1
errno: 00000005

real    0m0.04s
user    0m0.00s
sys     0m0.00s
(4)padmin@ms26-vio1:/home/padmin>

The netstat command now takes less than 1 second, the lsnports command only 0.1 seconds.

It is therefore worthwhile to set the autoconfig attribute for unused FC ports to defined!

 

MDS reports at your fingertips

Many AIX and Power System administrators use Microcode Discovery Services to regularly check the versions of adapter firmware and system firmware. The following steps are usually necessary:

– Download the current catalog file catalog.mic.

– Run Inventory Scout to generate the microcode upload file (see the short example after this list).

– Upload the microcode upload file to IBM http://www14.software.ibm.com/support/customercare/mds/mds
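
On AIX, the second step (running Inventory Scout) usually just means calling invscout; the microcode upload file is then written below /var/adm/invscout/ (a sketch; host name, file name and the omitted output are only illustrative):

aix01:/root> invscout
...
aix01:/root> ls /var/adm/invscout/*.mup
/var/adm/invscout/aix01.mup
aix01:/root>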

In many cases, the upload is carried out via a browser. The report is shown in the form of an HTML output. Alternatively, you can also upload e.g. with the help of curl and request the data in JSON format.

$ curl -F "mdsData=@ms01-vio1.mup;type=multipart/form" -F "format=json" -H "Expect:" http://www14.software.ibm.com/support/customercare/mds/mds -o ms01-vio1.json

The returned JSON file contains all information that is otherwise displayed in the browser.

With a small script, the JSON file can be displayed relatively easily in readable ASCII form. We have created the script mds_report for this purpose and made it available in our download area (https://powercampus.de/download). The script expects a microcode upload file as an argument; here is a sample output:

$ mds_report ms01-vio1.mup
ms01-vio1.mup upload microcode upload file to IBM ... uploaded

Microcode by Host

ms01-vio1
IP Addr: X.X.X.X
Model: 8205-E6D   Serial: XXXXXX
Microcode catalog: 2020.07.30

DEVICES          INSTALLED        LATEST           RECOMMEND   PKGNAME
system           AL770_126        AL770_126        None        8231-E1D; 8231-E2D; 8246-L1D; 8246-L1T; 8246-L2D; 8246-L2T; 8202-E4D; 8205-E6D; 8268-E1D; 8493-SV6 HV16 System Firmware
sissas0          0422003f         0422003f         None        PCI Express x8 Ext Dual-x4 3Gb SAS RAID Adapter (CCIN: 574E)
ent0,1,2,3       10080180         10240310         Update      4-Port Gigabit Ethernet PCI-Express Adapter
ent4,5,6,7       0400401800007    0400401800009    Update      PCIe2 2-Port 10GbE SFP+Copper or 10GbE SR Adapter
fcs0,1,2,3       210301           210313           Update      PCIe2 4-Port 8Gb Fibre Channel Adapter, FC 5729
fcs4,5,6,7       0320080270       0325080271       Update      8Gb PCIe2 Low Profile 4-Port FC Adapter
hdisk0,1         37343138         37343139         Update      Savvio 15K.3 146/300GB SAS Disk Drive
cd0              RA65             RA65             None        SATA DVD-RAM Drive RMBO0140512

Microcode by Type

IMPACT        SEVERITY    RELDATE       LATEST           PKGNAME
Security      SPE         2018.05.27    AL770_126        8231-E1D; 8231-E2D; 8246-L1D; 8246-L1T; 8246-L2D; 8246-L2T; 8202-E4D; 8205-E6D; 8268-E1D; 8493-SV6 HV16 System Firmware
Usability     ATT         2013.06.06    0422003f         PCI Express x8 Ext Dual-x4 3Gb SAS RAID Adapter (CCIN: 574E)
Usability     ATT         2019.06.20    10240310         4-Port Gigabit Ethernet PCI-Express Adapter
Usability     ATT         2016.11.14    0400401800009    PCIe2 2-Port 10GbE SFP+Copper or 10GbE SR Adapter
Usability     ATT         2019.06.17    210313           PCIe2 4-Port 8Gb Fibre Channel Adapter, FC 5729
Usability     ATT         2020.01.28    0325080271       8Gb PCIe2 Low Profile 4-Port FC Adapter
Function      ATT         2019.04.30    37343139         Savvio 15K.3 146/300GB SAS Disk Drive
New           NEW         2014.10.24    RA65             SATA DVD-RAM Drive RMBO0140512
$

The output is very similar to the output in the browser. In the first section “Microcode by Host” the update recommendations for the system firmware and adapter firmware are given. In the second section “Microcode by Type”, Impact and Severity, as well as the release date of the latest available firmware version, are shown.

If access to the Internet is only possible via a proxy, the proxy can be specified using the -x argument, as shown in the following example:

$ mds_report -x http://10.0.0.217:1234 ms07-vio1.mup
ms07-vio1.mup upload microcode upload file to IBM ... uploaded

Microcode by Host

ms07-vio1
IP Addr: X.X.X.X
Model: 8408-44E   Serial: XXXXXXX
Microcode catalog: 2020.07.30

DEVICES          INSTALLED        LATEST           RECOMMEND   PKGNAME
system           SV860_138        SV860_215        Update      8247-21L, 8247-22L, 8247-42L, 8284-21A, 8284-22A, 8286-41A, 8286-42A, 8408-44E, 8408-E8E, 5148-21L, 5148-22L - system-v860.60
sissas0          15511800         19512900         Update      PCIe3 RAID SAS Adapter Quad-port 6Gb x8...
ses0,1,2,3       1D0B             1D0B             None        SAS Enclosure Services for Power 8 4U High Function DASD backplane 8408-E8E
pdisk0,1         37363135         37363142         Update      BP5XX15KHDD 15KRPM 73/146/300/600GB SAS Disk Drive
fcs0,1           00010000020025201919  00012000040025700015  Update      PCIe2 2-Port 16Gb FC Adapter
fcs2,3,4,5       0320080270       0325080271       Update      8Gb PCIe2 Low Profile 4-Port FC Adapter

Microcode by Type

IMPACT        SEVERITY    RELDATE       LATEST           PKGNAME
Security      HIPER       2020.03.04    SV860_215        8247-21L, 8247-22L, 8247-42L, 8284-21A, 8284-22A, 8286-41A, 8286-42A, 8408-44E, 8408-E8E, 5148-21L, 5148-22L - system-v860.60
Availability  ATT         2020.02.25    19512900         PCIe3 RAID SAS Adapter Quad-port 6Gb x8...
New           NEW         2015.06.03    1D0B             SAS Enclosure Services for Power 8 4U High Function DASD backplane 8408-E8E
Function      ATT         2020.04.16    37363142         BP5XX15KHDD 15KRPM 73/146/300/600GB SAS Disk Drive
Usability     ATT         2020.02.18    00012000040025700015  PCIe2 2-Port 16Gb FC Adapter
Usability     ATT         2020.01.28    0325080271       8Gb PCIe2 Low Profile 4-Port FC Adapter
$

If you want to use the script more often, you should enter the proxy in the script itself. For this there is the PROXY variable, which can be set as follows:

$ grep ^PROXY mds_report
PROXY="http://10.0.0.217:1234"
$

(Where 10.0.0.217:1234 is just an example, you have to supply your own values here.)

It is then no longer necessary to specify a proxy using the -x option.

If the script is executed as root on an AIX system, the proxy configuration is automatically adopted from ESA (Electronic Service Agent).

If you need the URLs to download the firmware, you should use the option -u (show download URLs). The links for the firmware versions are then displayed at the end of the output; here is an example:

$ mds_report -u ms03-vio1.mup
ms03-vio1.mup upload microcode upload file to IBM ... uploaded

Microcode by Host

ms03-vio1
IP Addr: X.X.X.X
Model: 9009-22A   Serial: XXXXXXX
Microcode catalog: 2020.07.30

DEVICES          INSTALLED        LATEST           RECOMMEND   PKGNAME
system           VL910_144        VL940_050        Update      9008-22L; 9009-22A; 9009-41A; 9009-42A; 9223-22H; and 9223-42H-system
sissas0          19511400         19512900         Update      PCIe3 RAID SAS Adapter Quad-port 6Gb x8...
pdisk0           36383035         36383035         None        AL14SE 600/1200/1800 GB 4K Hard Disk Drive
pdisk1,2         41374B30         41374B30         None        Ultrastar C15K600-5xx
fcs0,1,2,3,4,5,6,7  00011000040041500005  00012000040025700015  Update      PCIe3 4-Port 16Gb FC Adapter

Microcode by Type

IMPACT        SEVERITY    RELDATE       LATEST           PKGNAME
Availability  SPE         2020.05.21    VL940_050        9008-22L; 9009-22A; 9009-41A; 9009-42A; 9223-22H; and 9223-42H-system
Availability  ATT         2020.02.25    19512900         PCIe3 RAID SAS Adapter Quad-port 6Gb x8...
Data          HIPER       2016.12.01    36383035         AL14SE 600/1200/1800 GB 4K Hard Disk Drive
Function      ATT         2015.08.18    41374B30         Ultrastar C15K600-5xx
Usability     ATT         2020.02.18    00012000040025700015  PCIe3 4-Port 16Gb FC Adapter

Downloads

http://www.ibm.com/support/fixcentral/quickorder?product=ibm/power/900922A&release=all&platform=all&function=fixId&includeSupersedes=0&source=fc&fixids=01VL940_050_027
http://www.ibm.com/support/fixcentral/quickorder?product=ibm/io&release=all&platform=all&function=fixId&includeSupersedes=0&source=fc&fixids=40145679_20200224110413_GRP
http://www.ibm.com/support/fixcentral/quickorder?product=ibm/io&release=all&platform=all&function=fixId&includeSupersedes=0&source=fc&fixids=1354333840_20161130155709_GRP
http://www.ibm.com/support/fixcentral/quickorder?product=ibm/io&release=all&platform=all&function=fixId&includeSupersedes=0&source=fc&fixids=1448849004_20150813164908_GRP
http://www.ibm.com/support/fixcentral/quickorder?product=ibm/io&release=all&platform=all&function=fixId&includeSupersedes=0&source=fc&fixids=427029183_20200213134040_GRP
$

The script generally takes less than 1 second to run!

We tested the script on AIX, Linux, and MacOS. Under MacOS there is usually no ksh93. But the installed ksh supports all the necessary features that are required by the mds_report script. If you change the interpreter in the first line of the script to ksh, the script will also run on a Mac.
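
Alternatively, instead of changing the interpreter line, the script can also simply be started explicitly with the installed ksh (assuming the script is in the current directory):

$ ksh ./mds_report ms01-vio1.mup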

A good description of Inventory Scout and MDS can be found here: http://gibsonnet.net/blog/cgaix/html/MDS%20reports.html (Chris Gibson)

You can find out how to automate Inventory Scout in our article Automating Inventory Scout.