IBM PowerVM: Add a virtual Ethernet adapter to an LPAR

A virtual Ethernet adapter is to be added to the LPAR aix01 under IBM PowerVM. The details:

    • HMC: hmc01
    • managed system: ms25
    • LPAR: aix01
    • profile: standard
    • virtual slot number: 4
    • Port-VLAN-ID: 900
    • virtual Ethernet switch: ETHERNET0 (default)
    • additional VLANs: none

The command on the associated HMC hmc01 is:

hscroot@hmc01:~> chhwres -m ms25 -r virtualio --rsubtype eth -o a -p aix01 -s 4 -a 'ieee_virtual_eth=0,port_vlan_id=900'
hscroot@hmc01:~>

If the currently used profile of the LPAR is not automatically synchronized, then the additional virtual Ethernet adapter should also be added to the profile:

hscroot@hmc01:~> chsyscfg -r prof -m ms25 -i 'lpar_name=aix01,name=standard,"virtual_eth_adapters+=""4/0/900///0"""'
hscroot@hmc01:~>

With our LPAR tool, the command to use looks like this:

$ lpar addeth aix01 4 900
$

The current profile is automatically adjusted.
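
To verify the change, the virtual slots of the LPAR can be listed; the new adapter should appear in slot 4. A sketch of possible output (abbreviated):

$ lpar lsvslot aix01
SLOT  REQ  ADAPTER_TYPE   STATE  DATA
...
4     No   eth            1      PVID=900 VLANS= ETHERNET0 XXXXXXXXXXXX
...
$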

Detailed information on the LPAR tool and virtual Ethernet adapters can be found here: Virtual Ethernet

LPAR tool: Console

A console can be opened for an LPAR at any time using the LPAR tool:

$ lpar console lpar01
Open in progress
Open completed.
PowerPC Firmware
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
…

To terminate a console session, the escape sequence “~.” is used.

Some LPAR tool commands support opening a console via the “-c” (console) option:

    • Activating an LPAR with “lpar activate -c“.
    • Shutting down an LPAR with “lpar shutdown -c“.
    • Shutting down the operating system with “lpar osshutdown -c“.
    • Initiating a system dump for an LPAR with “lpar dumprestart -c“.
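
For example, activating an LPAR and going straight to its console could look like this (a sketch; boot messages abbreviated):

$ lpar activate -c lpar01
Open in progress
Open completed.
…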

A presentation on the subject can be found here: Console with the LPAR tool

NTP Configuration on HMC with LPAR tool

The current status of NTP on an HMC can be displayed using “hmc lsntp“:

$ hmc lsntp
NAME   XNTP     XNTPSTATUS    XNTPSERVER
hmc01  disable  -             -
hmc02  enable   synchronized  192.168.189.77,192.168.189.78
hmc03  enable   synchronized  192.168.189.77,192.168.189.78
$

Another NTP server can be added to the NTP configuration of an HMC using the “hmc addntpserver” command:

$ hmc addntpserver hmc01 192.168.189.77
$ hmc addntpserver hmc01 192.168.189.78
$

A check with “hmc lsntp” shows that two NTP servers are now configured, but NTP is still not activated:

$ hmc lsntp hmc01
NAME   XNTP     XNTPSTATUS  XNTPSERVER
hmc01  disable  -           192.168.189.77,192.168.189.78
$

NTP can now be activated with the command “hmc enablentp“:

$ hmc enablentp hmc01
$

The first sync may take a while:

$ hmc lsntp hmc01
NAME   XNTP    XNTPSTATUS      XNTPSERVER
hmc01  enable  unsynchronized  192.168.189.77,192.168.189.78
$

The time on the HMC is not yet synchronized immediately after enabling NTP (XNTPSTATUS: unsynchronized).

A detailed status for each NTP server can be obtained by using the “-a” option (all NTP servers):

$ hmc lsntp -a hmc01
NAME    SERVER         STATE          POLL_FREQ_SECONDS  SECONDS_SINCE_LAST_POLL
hmc01  192.168.189.77  not connected  64                 0
hmc01  192.168.189.78  not connected  64                 0
$

As soon as synchronization with one of the NTP servers is achieved, the overall status is synchronized:

$ hmc lsntp hmc01
NAME   XNTP    XNTPSTATUS    XNTPSERVER
hmc01  enable  synchronized  192.168.189.77,192.168.189.78
$

A more detailed description can be found here: NTP Configuration on the HMC

The LPAR tool can be downloaded for testing here: Download

Administering Storage Pools in PowerVM

In many cases, the use of SAN LUNs via NPIV is not suitable for the rapid provisioning of client LPARs. The SAN LUNs must first be created on the external storage systems and then the zoning in the SAN fabric must be adjusted, so that the new SAN LUNs are visible to the WWPNs of the client LPAR. Using VSCSI to map the SAN LUNs to the client LPARs also requires some effort. Each SAN LUN is assigned to one or more client LPARs via VSCSI, which can lead to a large number of SAN LUNs on the virtual I/O servers.

One way to provide storage for client LPARs more quickly is to use storage pools on the virtual I/O servers. Once a storage pool has been created, storage can be made available for client LPARs with just one command. So-called backing devices are created in the storage pool, which can be assigned to the client LPARs via virtual SCSI. Storage for client LPARs can thus be made available by the virtual I/O servers via PowerVM. This means that, for example, a boot disk for a new client LPAR can be created within a few seconds and used immediately.

PowerVM offers two different types of storage pools: local storage pools and shared storage pools. A local storage pool, or simply storage pool, is only available on one virtual I/O server. Each virtual I/O server has its own independent storage pools. A shared storage pool, on the other hand, can be made available by several virtual I/O servers that are combined in a cluster. Access to the shared storage pool is possible from each virtual I/O server that belongs to the cluster. Shared storage pools are not dealt with in this chapter.

There are two types of local storage pools: logical volume storage pools and file storage pools. With a logical volume storage pool, storage is made available for the client LPARs in the form of logical volumes, with a file storage pool in the form of files.
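
Which local storage pools already exist on a virtual I/O server can be listed with the lssp command (run as user padmin). A sketch of possible output; the pool names and sizes are hypothetical:

$ lssp
Pool      Size(mb)  Free(mb)  Alloc Size(mb)  BDs  Type
rootvg    102400    51200     128             0    LVPOOL
mypool    204800    153600    64              3    LVPOOL
$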

Figure 8.13 shows a logical volume storage pool. The storage pool is implemented in the form of a volume group and therefore draws its storage capacity from the associated physical volumes. In order to provide storage for client LPARs, logical volumes are created in the storage pool. In the figure, the logical volumes bd01, bd02 and bd03 have been created. The logical volumes are referred to as backing devices, because they ultimately serve as the storage location for the data of the client LPARs. A backing device is assigned to a client LPAR, more precisely to a vhost adapter that is mapped one-to-one to a virtual SCSI adapter of the client LPAR, by means of a so-called virtual target device (vtscsi0, vtscsi1 and vtscsi2 in the figure). When mapping, the virtual target device is created as a child device of the vhost adapter, and it points to the corresponding backing device via the device attribute aix_tdev.

Figure 8.13: Logical Volume Storage Pool
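
On the virtual I/O server, such a mapping can be inspected with the lsmap command. A sketch of what this might look like for vhost0 and bd01 from the figure (output abbreviated, location code anonymized):

$ lsmap -vadapter vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U8205.XXX.XXXXXXX-V1-C10                     0x00000003

VTD                   vtscsi0
Status                Available
LUN                   0x8100000000000000
Backing device        bd01
...
$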

As long as the storage pool still has free capacity, additional backing devices can be created and assigned to client LPARs at any time. The provisioning of storage for client LPARs is therefore very flexible and, above all, very fast, and it is completely under the control of the PowerVM administrator.
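
Creating a new backing device and mapping it to a client LPAR is a single command on the virtual I/O server. A minimal sketch, assuming the storage pool mypool and the vhost0 adapter from the figure (backing device name and size hypothetical, output abbreviated):

$ mkbdsp -sp mypool 20G -bd bd04 -vadapter vhost0
...
$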

In addition to logical volume storage pools, file storage pools are also supported. Such a file storage pool is shown in figure 8.14; it is implemented as a file system. The underlying logical volume is in the logical volume storage pool mypool. The storage pool name is used as the name for the logical volume; in the figure the name filepool is used. The file system is mounted under /var/vio/storagepools/filepool, whereby the last path component is the same as the storage pool name. Files are used as backing devices, the file name being the same as the backing device name. The mapping is still implemented using virtual target devices; in the figure vtscsi3 and vtscsi4 are shown as examples. The attribute aix_tdev of the virtual target devices points in this case to the respective file in the file storage pool.

Figure 8.14: File Storage Pool
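
A file storage pool like the one in the figure can be created on top of a parent logical volume storage pool, and file-backed backing devices are then provided in the same way. A sketch, assuming the names from the figure (size and backing device name hypothetical, output omitted):

$ mksp -fb filepool -sp mypool -size 40G
$ mkbdsp -sp filepool 10G -bd bd05 -vadapter vhost1
$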

Monitoring virtual FC Client Traffic

With the LPAR tool, statistics for all virtual FC clients can be displayed at any time using the “vios fcstat” command. This allows you to determine at any time which client LPARs have which I/O throughput (when using NPIV).

Which NPIV-capable FC adapters are available on a virtual I/O server can easily be determined with “vios lsnports“:

$ vios lsnports ms15-vio1
NAME  PHYSLOC                     FABRIC  TPORTS  APORTS  SWWPNS  AWWPNS
fcs0  U78CB.001.XXXXXXX-P1-C5-T1  1       64      62      2032    2012
fcs1  U78CB.001.XXXXXXX-P1-C5-T2  1       64      62      2032    2012
fcs2  U78CB.001.XXXXXXX-P1-C5-T3  1       64      61      2032    1979
fcs3  U78CB.001.XXXXXXX-P1-C5-T4  1       64      61      2032    1979
fcs4  U78CB.001.XXXXXXX-P1-C3-T1  1       64      50      3088    3000
fcs5  U78CB.001.XXXXXXX-P1-C3-T2  1       64      63      3088    3077
$

We display the FC client statistics with the command “vios fcstat”. By default, the data for all virtual FC clients of the specified virtual I/O server are shown every 10 seconds:

$ vios fcstat ms15-vio1
HOSTNAME   PHYSDEV  WWPN                DEV    INREQS    INBYTES      OUTREQS    OUTBYTES     CTRLREQS
ms15-vio1  fcs1     0x210000XXXXX56EC5  fcs1   774.75/s  129.51 MB/s  1332.71/s   92.96 MB/s  20
aixtsmp1   fcs2     0xC050760XXXXX0058  fcs6   318.10/s   83.39 MB/s  481.34/s   126.18 MB/s  0
ms15-vio1  fcs2     0x210000XXXXX56EC6  fcs2   318.10/s   83.39 MB/s  480.78/s   126.03 MB/s  0
aixtsmp1   fcs5     0xC050760XXXXX003E  fcs0   583.98/s   60.35 MB/s  1835.17/s  124.86 MB/s  0
ms15-vio1  fcs5     0x10000090XXXXX12D  fcs5   583.70/s   60.27 MB/s  1836.21/s  124.92 MB/s  0
ms15-vio1  fcs0     0x21000024XXXXXEC4  fcs0   923.19/s  165.08 MB/s  1032.81/s   17.25 MB/s  46
aixtsmp3   fcs1     0xC050760XXXXX00E4  fcs0   775.12/s  129.48 MB/s  1047.32/s   17.15 MB/s  20
aixtsmp3   fcs0     0xC050760XXXXX00DE  fcs1   775.78/s  128.99 MB/s  1037.99/s   17.39 MB/s  20
aixtsmp1   fcs1     0xC050760XXXXX0056  fcs5     0.00/s    0.00 B/s   290.39/s    76.12 MB/s  0
aixtsmp1   fcs0     0xC050760XXXXX0052  fcs4   142.89/s   36.12 MB/s    0.00/s     0.00 B/s   26
ms15-vio1  fcs4     0x10000090XXXXX12C  fcs4   234.97/s    4.58 MB/s  621.78/s    11.12 MB/s  40
cus1dbp01  fcs4     0xC050760XXXXX0047  fcs0   243.55/s    5.05 MB/s  432.33/s     9.95 MB/s  0
cus1dbi01  fcs4     0xC050760XXXXX0044  fcs1     0.94/s   10.42 KB/s   87.28/s   459.26 KB/s  0
...
HOSTNAME   PHYSDEV  WWPN                DEV    INREQS     INBYTES      OUTREQS    OUTBYTES     CTRLREQS
aixtsmp1   fcs5     0xC050760XXXXX003E  fcs0   1772.84/s  162.24 MB/s  1309.30/s   70.60 MB/s  68
ms15-vio1  fcs5     0x10000090XXXXX12D  fcs5   1769.13/s  161.95 MB/s  1305.60/s   70.54 MB/s  68
ms15-vio1  fcs1     0x21000024XXXXXEC5  fcs1   883.55/s   118.97 MB/s  1551.97/s  108.78 MB/s  43
ms15-vio1  fcs2     0x21000024XXXXXEC6  fcs2   201.09/s    52.72 MB/s  497.26/s   130.35 MB/s  0
aixtsmp1   fcs2     0xC050760XXXXX0058  fcs6   201.09/s    52.72 MB/s  495.40/s   129.87 MB/s  0
ms15-vio1  fcs0     0x21000024XXXXXEC4  fcs0   923.54/s   128.89 MB/s  1234.98/s   23.31 MB/s  65
aixtsmp3   fcs0     0xC050760XXXXX00DE  fcs1   876.93/s   118.93 MB/s  1234.98/s   23.32 MB/s  44
aixtsmp3   fcs1     0xC050760XXXXX00E4  fcs0   884.17/s   119.07 MB/s  1223.50/s   23.00 MB/s  43
aixtsmp1   fcs1     0xC050760XXXXX0056  fcs5     0.00/s     0.00 B/s   325.83/s    85.41 MB/s  0
...
^C
$

The output shows the LPAR name, the physical FC port (PHYSDEV) on the virtual I/O server, the WWPN of the client adapter and the virtual FC client port (DEV), as well as the number of requests (INREQS and OUTREQS) and the bytes transferred (INBYTES and OUTBYTES). The transfer rates are output in KB/s, MB/s or GB/s. The output can be very long on larger systems! It is sorted by throughput, i.e. the most active virtual client adapters are listed first. With the option ‘-t‘ (top) the output can be restricted to a desired number of records: e.g. with ‘-t 10‘ only the ten adapters with the highest throughput are shown. In addition, the interval length (in seconds) can be specified as a further argument; here is a short example:

$ vios fcstat -t 10 ms15-vio1 2
HOSTNAME   PHYSDEV  WWPN                DEV   INREQS     INBYTES      OUTREQS    OUTBYTES     CTRLREQS
ms15-vio1  fcs1     0x21000024XXXXXEC5  fcs1  1034.58/s   86.56 MB/s  2052.23/s  160.11 MB/s  20
ms15-vio1  fcs5     0x10000090XXXXX12D  fcs5  1532.63/s  115.60 MB/s  1235.72/s  118.32 MB/s  40
aixtsmp1   fcs5     0xC050760XXXXX003E  fcs0  1510.33/s  114.88 MB/s  1236.49/s  118.27 MB/s  40
aixtsmp3   fcs1     0xC050760XXXXX00E4  fcs0  1036.11/s   86.67 MB/s  1612.25/s   44.86 MB/s  20
aixtsmp3   fcs0     0xC050760XXXXX00DE  fcs1  1031.50/s   86.29 MB/s  1588.02/s   44.27 MB/s  20
ms15-vio1  fcs0     0x21000024XXXXXEC4  fcs0  1029.58/s   86.31 MB/s  1567.63/s   43.65 MB/s  20
aixtsmp1   fcs1     0xC050760XXXXX0056  fcs5    0.00/s     0.00 B/s   436.52/s   114.43 MB/s  0
ms15-vio1  fcs2     0x21000024XXXXXEC6  fcs2    0.00/s     0.00 B/s   435.75/s   114.23 MB/s  0
aixtsmp1   fcs2     0xC050760XXXXX0058  fcs6    0.00/s     0.00 B/s   432.68/s   113.42 MB/s  0
ms15-vio1  fcs4     0x10000090XXXXX12C  fcs4  144.99/s     0.78 MB/s  478.83/s     2.22 MB/s  46
HOSTNAME   PHYSDEV  WWPN                DEV   INREQS    INBYTES      OUTREQS    OUTBYTES     CTRLREQS
aixtsmp1   fcs5     0xC050760XXXXX003E  fcs0  758.14/s   35.55 MB/s  1822.99/s  112.60 MB/s  0
ms15-vio1  fcs5     0x10000090XXXXX12D  fcs5  757.38/s   35.52 MB/s  1821.46/s  112.59 MB/s  0
ms15-vio1  fcs0     0x21000024XXXXXEC4  fcs0  944.23/s   85.09 MB/s  1657.58/s   41.40 MB/s  2
aixtsmp3   fcs0     0xC050760XXXXX00DE  fcs1  943.47/s   85.15 MB/s  1636.90/s   40.68 MB/s  2
ms15-vio1  fcs1     0x21000024XXXXXEC5  fcs1  949.21/s   84.88 MB/s  1586.74/s   39.41 MB/s  2
aixtsmp3   fcs1     0xC050760XXXXX00E4  fcs0  946.53/s   84.64 MB/s  1584.83/s   39.40 MB/s  2
ms15-vio1  fcs4     0x10000090XXXXX12C  fcs4   39.44/s  449.92 KB/s  676.97/s     3.63 MB/s  10
cus1dbp01  fcs4     0xC050760XXXXX0047  fcs0   29.10/s  471.69 KB/s  310.92/s     1.28 MB/s  4
cus1mqp01  fcs4     0xC050760XXXXX002C  fcs0    1.91/s    4.71 KB/s  230.12/s     1.66 MB/s  0
cus2orap01 fcs4     0xC050760XXXXX000F  fcs0    0.77/s    4.31 KB/s   48.25/s   263.49 KB/s  0
^C
$

The option ‘-s‘ (select) can be used to select and show only records from a specific client (‘-s hostname=aixtsmp1‘) or only records from a specific physical port (‘-s physdev=fcs1‘):

$ vios fcstat -s hostname=aixtsmp1 ms15-vio1 2
HOSTNAME  PHYSDEV  WWPN                DEV   INREQS     INBYTES      OUTREQS    OUTBYTES     CTRLREQS
aixtsmp1  fcs5     0xC050760XXXXX003E  fcs0  1858.72/s   51.14 MB/s  1231.82/s  104.20 MB/s  0
aixtsmp1  fcs2     0xC050760XXXXX0058  fcs6    6.94/s     1.82 MB/s    6.94/s     1.82 MB/s  0
aixtsmp1  fcs4     0xC050760XXXXX0042  fcs2    0.39/s     1.19 KB/s    0.39/s   395.05 B/s   0
aixtsmp1  fcs1     0xC050760XXXXX0056  fcs5    0.39/s     7.72 B/s     0.00/s     0.00 B/s   1
aixtsmp1  fcs0     0xC050760XXXXX0052  fcs4    0.00/s     0.00 B/s     0.00/s     0.00 B/s   0
aixtsmp1  fcs3     0xC050760XXXXX005A  fcs7    0.00/s     0.00 B/s     0.00/s     0.00 B/s   0
HOSTNAME  PHYSDEV  WWPN                DEV   INREQS     INBYTES      OUTREQS    OUTBYTES     CTRLREQS
aixtsmp1  fcs5     0xC050760XXXXX003E  fcs0  1760.48/s  111.48 MB/s  1125.70/s   95.20 MB/s  0
aixtsmp1  fcs2     0xC050760XXXXX0058  fcs6    8.53/s     2.24 MB/s  484.61/s   127.04 MB/s  0
aixtsmp1  fcs1     0xC050760XXXXX0056  fcs5    0.00/s     0.00 B/s   469.04/s   122.96 MB/s  0
aixtsmp1  fcs4     0xC050760XXXXX0042  fcs2    0.37/s     1.14 KB/s    0.00/s     0.00 B/s   0
aixtsmp1  fcs0     0xC050760XXXXX0052  fcs4    0.00/s     0.00 B/s     0.00/s     0.00 B/s   0
aixtsmp1  fcs3     0xC050760XXXXX005A  fcs7    0.00/s     0.00 B/s     0.00/s     0.00 B/s   0
^C
$

With the “vios fcstat” command, the FC throughput of any LPAR can be shown at any time in an extremely simple way, at the push of a button, so to speak.

If the intervals are smaller, the accuracy of the displayed values suffers. At 2-second intervals the inaccuracy is approx. 10%. However, the ratios between the displayed values remain correct.

LPAR-Tool 1.6.0.0 is available now

Version 1.6.0.0 of our LPAR tool is now available in our download area!

New features are:

  • Online monitoring of SEA client statistics (vios help seastat)
  • Online monitoring of virtual FC client adapters (vios help fcstat)
  • Display of historical processor and memory data (lpar help lsmem, lpar help lsproc)

In the article Monitoring SEA Traffic the possibilities of calling up SEA client statistics are shown.

HSCLB505 The partition cannot use hardware-accelerated encryption

When migrating LPARs using LPM to somewhat older hardware, the following error can occur:

HSCLB505 The partition cannot use hardware-accelerated encryption on the destination managed system because the destination managed system does not support hardware-accelerated encryption.

This means that hardware-accelerated encryption is activated for the LPAR, but is not supported on the destination managed system.
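
Whether the attribute is set can be checked on the HMC, for example like this (a sketch; the attribute name is the one used by chhwres below, and whether it can be queried this way may depend on the HMC version):

lshwres -m ms01 -r mem --level lpar --filter lpar_names=lpar01 -F hardware_mem_encryption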

Disabling hardware-accelerated encryption using the LPAR-Tool is easy:

$ lpar -d chmem lpar01 hardware_mem_encryption=0
$

Without the LPAR-Tool this is of course also possible. Log into your HMC and use the following command on the command line:

chhwres -m ms01 -r mem -o s -p lpar01 -a 'hardware_mem_encryption=0'

Afterwards, validation and migration using LPM should work.
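
The validation itself can also be triggered from the HMC command line; a sketch, assuming ms02 as the destination managed system:

migrlpar -o v -m ms01 -t ms02 -p lpar01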

LPAR-Tool: Which LPARs have no RMC Connection

Status and configuration of LPARs are regularly needed information when administering LPARs. With the LPAR tool, information such as status, RMC status, number of cores, size of RAM, OS version and other data can be determined easily and quickly, even with hundreds or thousands of LPARs. One of the examples below shows which LPARs have no RMC connection.

All of the following examples were performed on an environment with 10 HMCs, 50 managed systems, and just over 500 LPARs. To show how long the LPAR tool needs, the run times of the commands were measured with time and are included in the outputs.

The names of the LPARs were manually changed in the outputs shown and replaced by generic names lparXX and aixYY.

First of all, the status of a single LPAR:

$ time lpar status aix01
NAME   LPAR_ID  LPAR_ENV   STATE     PROFILE    SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
aix01  27       aixlinux   Running   standard   0     active    1      0.1         8192    AIX 7.1 7100-04-02-1614

real    0m0.210s
user    0m0.011s
sys     0m0.013s
$

Of course you can also specify multiple LPARs. If you want to know the status of all LPARs (in our case just over 500 LPARs), just leave out the argument:

$ time lpar status
NAME    LPAR_ID  LPAR_ENV   STATE     PROFILE      SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
aix01   27       aixlinux   Running   standard     0     active    1      0.1         8192    AIX 7.1 7100-04-02-1614
aix02   1        aixlinux   Running   standard     -     -         1      -           8320    Unknown
...
lpar01  6        aixlinux   Running   standard     0     active    1      0.4         20480   AIX 7.1 7100-04-05-1720

real	0m18.933s
user	0m3.819s
sys	0m3.789s
$

In the background, the LPAR tool executes more than 150 commands on the corresponding HMCs (lshwres and lssyscfg)!

The output should now be restricted to LPARs that are currently active (state=Running). There is the option “-s“, which can be used to specify criteria for attributes that must be met. Only LPARs meeting these criteria will be shown:

$ time lpar status -s state=Running
NAME     LPAR_ID  LPAR_ENV   STATE    PROFILE    SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
aix01    27       aixlinux   Running  standard   0     active    1      0.1         8192    AIX 7.1 7100-04-02-1614
aix02    1        aixlinux   Running  standard   -     -         1      -           8320    Unknown
...
lpar01   6        aixlinux   Running  standard   0     active    1      0.4         20480   AIX 7.1 7100-04-05-1720

real	0m17.998s
user	0m3.692s
sys	0m3.647s
$

Now we want to know on which of these LPARs RMC is not working or not active. The option “-s” allows any number of criteria to be combined. All specified criteria must then be met (logical AND). The RMC state can be found in the attribute rmc_state:

$ time lpar status -s state=Running,rmc_state!=active
NAME     LPAR_ID  LPAR_ENV   STATE    PROFILE    SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
aix02    1        aixlinux   Running  standard   -     -         1      -           8320    Unknown
aix03    2        aixlinux   Running  standard   -     -         1      -           8320    Unknown
...
lpar07   4        aixlinux   Running  standard   0     none      1      1.0         4352    Unknown

real	0m19.057s
user	0m3.550s
sys	0m3.512s
$

As another example we want to know on which LPARs AIX 7.1 TL5 is installed. The os_version attribute contains the OS version. The ‘~‘ operator can be used to compare against a regular expression (similar to the grep command). We use the regular expression 7100-05:

$ time lpar status -s os_version~7100-05
NAME     LPAR_ID  LPAR_ENV  STATE    PROFILE    SYNC  RMC       PROCS  PROC_UNITS  MEM     OS_VERSION
aix14    14       aixlinux  Running  standard   0     active    2      0.2         16384   AIX 7.1 7100-05-02-1810
aix16    24       aixlinux  Running  standard   0     active    2      0.2         16384   AIX 7.1 7100-05-03-1846
...
lpar10   10       aixlinux  Running  standard   0     active    3      0.3         32768   AIX 7.1 7100-05-02-1810

real	0m18.212s
user	0m3.726s
sys	0m3.676s
$

So far, we have always used the default output format. Now we’d like to list all systems that still run on AIX 6.1, but this time only the LPAR name and the OS version should be output. For this, the option “-F“ can be used to specify the desired output fields:

$ time lpar status -s os_version~6100 -F name:os_version
aix39:AIX 6.1 6100-07-04-1216
aix46:AIX 6.1 6100-07-04-1216
...
lpar35:AIX 6.1 6100-09-05-1524

real	0m18.041s
user	0m3.619s
sys	0m3.699s
$

If you prefer JSON output, you can easily get it with the option “-j“; here’s the same example with JSON output:

$ time lpar status -s os_version~6100 -F name:os_version -j
{
	"name": "aix39",
	"os_version": "AIX 6.1 6100-07-04-1216"
}
{
	"name": "aix46",
	"os_version": "AIX 6.1 6100-07-04-1216"
}
...
{
	"name": "lpar35",
	"os_version": "AIX 6.1 6100-09-05-1524"
}

real	0m21.247s
user	0m3.670s
sys	0m3.720s
$

Of course you cannot know all attribute names by heart! But that’s not necessary, because you can easily display all attribute names. Use the option “-f” (stanza format) and specify any LPAR:

$ lpar status -f lpar19
lpar19:
	curr_lpar_proc_compat_mode = POWER7
	curr_mem = 8192
	curr_proc_mode = shared
	curr_proc_units = 0.3
	curr_procs = 2
	name = lpar19
	os_version = AIX 6.1 6100-09-05-1524
...
$

With the options “-h” and “-m” the LPARs can be selected depending on the associated HMC and/or managed system.

Status of all LPARs with associated HMC hmc01:

$ lpar -h hmc01 status

Status of all LPARs whose corresponding HMC has type 7042-CR6:

$ lpar -h 7042-CR6 status

Status of all LPARs whose associated HMC has type 7042-CR6 and whose name begins with lpar:

$ lpar -h 7042-CR6 status lpar*

Status of all LPARs on the managed system ms13:

$ lpar -m ms13 status

Status of all LPARs whose managed system is an S922:

$ lpar -m 9009-22A status

The presented selection and output options apply to all output commands of the LPAR tool (except the vios command).
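
These options can also be combined. For example, the names and OS versions of all running LPARs on the managed system ms13 (a sketch; the output values are hypothetical):

$ lpar -m ms13 status -s state=Running -F name:os_version
aix21:AIX 7.1 7100-05-03-1846
...
$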

The LPAR-Tool can be downloaded from our download area: https://powercampus.de/en/download-2

The LPAR-Tool contains a test license which is valid until the end of October.

LPAR-Tool in Action: Examples

The LPAR tool can administer HMCs, managed systems, LPARs and virtual I/O servers via the command line. The current version of the LPAR tool (currently 1.4.0.2) can be downloaded from our download page https://powercampus.de/download. A trial license, valid until October 31, is included. This article will show you some simple but useful applications of the LPAR tool.

A common question in larger environments (multiple HMCs, many managed systems) is: where is a particular LPAR? This question can easily be answered with the LPAR tool, by using the command “lpar show“:

$ lpar show lpar02
NAME    ID  SERIAL     LPAR_ENV  MS    HMCS
lpar02  39  123456789  aixlinux  ms21  hmc01,hmc02
$

In addition to the name, the LPAR-ID and the serial number, the managed system, here ms21, and the associated HMCs, here hmc01 and hmc02, are also shown. You can also specify multiple LPARs and/or wildcards:

$ lpar show lpar02 lpar01
...
$ lpar show lpar*
...
$

If no argument is given, all LPARs are listed.

 

Another question that frequently arises is the status of an LPAR or multiple LPARs. Again, this can be easily answered, this time with the command “lpar status“:

$ lpar status lpar02
NAME    LPAR_ID  LPAR_ENV  STATE    PROFILE   SYNC  RMC     PROCS  PROC_UNITS  MEM   OS_VERSION
lpar02  39       aixlinux  Running  standard  0     active  1      0.7         7168  AIX 7.2 7200-03-02-1846
$

The LPAR lpar02 is Running, the profile used is standard, the RMC connection is active and the LPAR is running AIX 7.2 (TL3 SP2). The LPAR has 1 processor core, with 0.7 processing units and 7 GB RAM. The column SYNC indicates whether the current configuration is synchronized with the profile (attribute sync_curr_profile).

Of course, several LPARs or even all LPARs can be specified here.

If you want to see what the LPAR tool does in the background: for most commands you can specify the option “-v” (verbose). The HMC commands used will then be listed, but no changes will be made on the HMC. Here are the HMC commands that are issued for the status output:

$ lpar status -v lpar02
hmc01: lssyscfg -r lpar -m ms21
hmc01: lshwres -r mem -m ms21 --level lpar
hmc01: lshwres -r proc -m ms21 --level lpar
$

 

Next, we show how to add additional RAM. We start with the status of the LPAR:

$ lpar status lpar02
NAME    LPAR_ID  LPAR_ENV  STATE    PROFILE   SYNC  RMC     PROCS  PROC_UNITS  MEM   OS_VERSION
lpar02  39       aixlinux  Running  standard  0     active  1      0.7         7168  AIX 7.2 7200-03-02-1846
$

The LPAR is running and RMC is active, so a DLPAR operation should be possible. We will first check if the maximum memory size is already in use:

$ lpar lsmem lpar02
            MEMORY         MEMORY         HUGE_PAGES 
LPAR_NAME  MODE  AME  MIN   CURR  MAX   MIN  CURR  MAX
lpar02     ded   0.0  2048  7168  8192  0    0     0
$

Currently the LPAR uses 7 GB and a maximum of 8 GB are possible. Extending the memory by 1 GB (1024 MB) should be possible. We add the memory by using the command “lpar addmem“:

$ lpar addmem lpar02 1024
$

We check the success by starting the command “lpar lsmem” again:

$ lpar lsmem lpar02
           MEMORY         MEMORY         HUGE_PAGES 
LPAR_NAME  MODE  AME  MIN   CURR  MAX   MIN  CURR  MAX
lpar02     ded   0.0  2048  8192  8192  0    0     0
$

(By the way: if the current configuration is not synchronized with the current profile, attribute sync_curr_profile, then the LPAR tool also updates the profile!)

 

Virtual adapters can be listed using “lpar lsvslot“:

$ lpar lsvslot lpar02
SLOT  REQ  ADAPTER_TYPE   STATE  DATA
0     Yes  serial/server  1      remote: (any)/any connect_status=unavailable hmc=1
1     Yes  serial/server  1      remote: (any)/any connect_status=unavailable hmc=1
2     No   eth            1      PVID=123 VLANS= ETHERNET0 XXXXXXXXXXXX
6     No   vnic           -      PVID=1234 VLANS=none XXXXXXXXXXXX failover sriov/ms21-vio1/1/3/0/2700c003/2.0/2.0/20/100.0/100.0,sriov/ms21-vio2/2/1/0/27004004/2.0/2.0/10/100.0/100.0
10    No   fc/client      1      remote: ms21-vio1(1)/47 c050760XXXXX0016,c050760XXXXX0017
20    No   fc/client      1      remote: ms21-vio2(2)/25 c050760XXXXX0018,c050760XXXXX0019
21    No   scsi/client    1      remote: ms21-vio2(2)/20
$

The example shows virtual FC and SCSI adapters as well as a vNIC adapter in slot 6.

 

Finally, we’ll show how to start a console for an LPAR:

$ lpar console lpar02
Open in progress
Open completed.
…
AIX Version 7
Copyright IBM Corporation, 1982, 2018.
Console login:
…

The console can be terminated with “~.“.

 

Of course, the LPAR tool can do much more.

To be continued.