What is the size of the internal log in JFS2?

inline log size

A trivial question we stumbled across recently:

How big is the internal JFS2 log currently?

The size of the internal JFS2 log must meet the following two conditions:

    1. The log cannot be larger than 10% of the file system size.
    2. The maximum size cannot exceed 2047 MB.

When creating a JFS2 file system with an internal log, 0.4% of the file system size is used for the log by default if no log size is specified (-a logsize=value). The value 0.4% is documented in the crfs manual page.
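For example, a JFS2 file system with an inline log of a specific size could be created like this (a sketch; the volume group datavg and the sizes are assumptions, logsize is given in MB, and the output of crfs is omitted here):

# crfs -v jfs2 -g datavg -m /data -A yes -a size=10G -a logname=INLINE -a logsize=8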

But how big is the internal JFS2 log right now?

This information is provided by the dumpfs command. It expects either the mount point of a JFS2 file system or the device file of the underlying logical volume as an argument. The command lists the superblock and additional control information. The output can be very long for larger file systems. Since we are only interested in the JFS2 log, it is advisable to filter the output using the grep command:

# dumpfs /data | grep -i log
aggregate block size    4096            log2 of aggregate block size    12
LVM I/O Transfer size   512             log2 of LVM transfer  size      9
log2 of block size/transfer size        3
Aggregate attributes    J2_GROUPCOMMIT J2_INLINELOG
log device      0x8000002700000001 log serial number    0x26
Inline Log: 541065216 (132096); 1024
fsck Service Log number of blocks: 50
Extendfs Inline Log Working Space: 541065216 (132096); 1024
#

The last value in the line “Inline Log:” indicates the size of the internal log in blocks. The block size of the file system can be found in the line “aggregate block size”. In our case, the internal log has a size of 1024 blocks of 4096 bytes each. This gives a size of 4 MB (1024 * 4 KB).
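The arithmetic can also be done quickly in the shell, using the values from the dumpfs output above:

# echo $(( 1024 * 4096 / 1024 / 1024 ))
4
#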

If an external log is used, the output looks like this:

# dumpfs / | grep -i log
aggregate block size    4096            log2 of aggregate block size    12
LVM I/O Transfer size   512             log2 of LVM transfer  size      9
log2 of block size/transfer size        3
log device      0x8000000a00000003 log serial number    0xb
Inline Log: 0 (0); 0
fsck Service Log number of blocks: 50
Extendfs Inline Log Working Space: 0 (0); 0
#

The inline log has a size of 0 blocks, since the file system uses an external log.

However, dumpfs is not the easiest way. Chris Gibson points out the “-q” option of the lsfs command, which displays additional information for JFS and JFS2 file systems:

# lsfs -q /filesystem
Name            Nodename   Mount Pt               VFS   Size    Options    Auto Accounting
/dev/fslv01     --         /filesystem            jfs2  1048576 --         no   no
  (lv size: 1048576, fs size: 1048576, block size: 4096, sparse files: yes, inline log: yes, inline log size: 4, EAformat: v1, Quota: no, DMAPI: no, VIX: yes, EFS: no, ISNAPSHOT: no, MAXEXT: 0, MountGuard: no)
#

The size of the inline log is specified there directly in MB (inline log size: 4).

Determining the size of the internal JFS2 log is therefore no problem with the right command (dumpfs or lsfs)!

View IOS Version as a Normal User

On a virtual I/O server, the IOS version can be displayed as user padmin using the ioslevel command:

padmin> ioslevel
3.1.2.10
padmin>

As user root (after using oem_setup_env), the IOS version can be shown as follows:

# /usr/ios/cli/ioscli ioslevel
3.1.2.10
#

However, neither command works as a normal, non-privileged user:

$ ioslevel
ksh: ioslevel: not found.
$ /usr/ios/cli/ioscli ioslevel
Access to run command is not valid.

$

The IOS version is simply stored in a text file and can be easily displayed as a normal user with the cat command:

$ cat /usr/ios/cli/ios.level
3.1.2.10
$

YUM with NIMHTTP

Starting with AIX 7.2, NIM supports the use of HTTP. The new NIM service handler nimhttp (port 4901) is available for this purpose. This makes it possible to provide YUM repositories on a NIM server with the help of this service handler. To do this, the repositories must be stored under the document root (/export/nim by default). AIX clients can then access the repositories via HTTP on port 4901.

The repositories must be configured on the AIX client under /opt/freeware/etc/yum/yum.conf or /opt/freeware/etc/yum/repos.d. All available YUM operations are supported in this way.

If nimhttp is already in use on the NIM server, this does not require any additional effort.

The following shows the configuration for using YUM with nimhttp.

The first requirement is that the NIM service handler nimhttp is active on the NIM server:

aixnim # lssrc -s nimhttp
Subsystem         Group            PID          Status
 nimhttp                           19136996     active
aixnim #

If nimhttp has not yet been activated, this can be done using the nimconfig command on the NIM server:

aixnim # nimconfig -h
0513-077 Subsystem has been changed.
0513-059 The nimhttp Subsystem has been started. Subsystem PID is 19136996.
aixnim #

Note: The configuration of nimhttp is shown elsewhere.

For test purposes, we create a small text file on the NIM server under /export/nim (document root):

aixnim # echo "testfile for nimhttp" >/export/nim/testfile
aixnim #

Next, we check the functionality on the NIM client by downloading the test file from the NIM server with the NIM client command nimhttp:

aix01 # nimhttp -f testfile -o dest=/tmp -v
nimhttp: (source)       testfile
nimhttp: (dest_dir)     /tmp
nimhttp: (verbose)      debug
nimhttp: (master_ip)    aixnim
nimhttp: (master_port)  4901

sending to master...
size= 46
pull_request= "GET /testfile HTTP/1.1
Connection: close

"
Writing 21 bytes of data to /tmp/testfile
Total size of datalen is 21. Content_length size is 21.
aix01 #

(The ‘-v’ option provides the debugging output shown.)

The test file was saved under /tmp/testfile.

Another test with the command curl (available from the AIX toolbox) also shows that nimhttp can be used successfully to download data:

aix01 # curl http://aixnim:4901/testfile
testfile for nimhttp
aix01 #

The use of nimhttp to access YUM repositories should therefore be possible in principle.

We have copies of the AIX Toolbox repositories provided by IBM on our NIM server in the following directories:

/export/nim/aixtoolbox/RPMS/noarch      AIX_Toolbox_noarch (AIX noarch repository)
/export/nim/aixtoolbox/RPMS/ppc         AIX_Toolbox (AIX generic repository)
/export/nim/aixtoolbox/RPMS/ppc-7.1     AIX_Toolbox_71 (AIX 7.1 specific repository)
/export/nim/aixtoolbox/RPMS/ppc-7.2     AIX_Toolbox_72 (AIX 7.2 specific repository)

In order for an AIX client system to be able to access these repositories, they must be referenced in the YUM configuration. For the sake of simplicity, we have entered the repositories in the configuration file /opt/freeware/etc/yum/yum.conf:

aix01 # vi /opt/freeware/etc/yum/yum.conf
[main]
cachedir=/var/cache/yum
keepcache=1
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1

[AIX_Toolbox]
name=AIX generic repository
baseurl=http://aixnim:4901/aixtoolbox/RPMS/ppc/
enabled=1
gpgcheck=0

[AIX_Toolbox_noarch]
name=AIX noarch repository
baseurl=http://aixnim:4901/aixtoolbox/RPMS/noarch/
enabled=1
gpgcheck=0

[AIX_Toolbox_72]
name=AIX 7.2 specific repository
baseurl=http://aixnim:4901/aixtoolbox/RPMS/ppc-7.2/
enabled=1
gpgcheck=0

aix01 #

Alternatively, a separate repo file can be created for each repository under /opt/freeware/etc/yum/repos.d.
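Such a repo file contains the same entries as the corresponding section in yum.conf, for example (the file name AIX_Toolbox.repo is arbitrary, only the .repo suffix matters):

aix01 # cat /opt/freeware/etc/yum/repos.d/AIX_Toolbox.repo
[AIX_Toolbox]
name=AIX generic repository
baseurl=http://aixnim:4901/aixtoolbox/RPMS/ppc/
enabled=1
gpgcheck=0
aix01 #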

The key entry is the baseurl attribute. The URL scheme used is http. The host name of the NIM server is followed by the port number of nimhttp (4901), separated by a colon. The path is then relative to /export/nim (the document root) on the NIM server.
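Whether a baseurl is actually reachable via nimhttp can be checked in advance with curl, analogous to the test file above. This assumes that the repository metadata (the repodata directory) has been copied to the NIM server together with the RPMs; the XML output is omitted here:

aix01 # curl -s http://aixnim:4901/aixtoolbox/RPMS/ppc/repodata/repomd.xml | head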

Listing the available YUM repositories with the command “yum repolist” shows the expected repositories:

aix01 # yum repolist
repo id                                     repo name                                             status
AIX_Toolbox                                 AIX generic repository                                2740
AIX_Toolbox_72                              AIX 7.2 specific repository                            417
AIX_Toolbox_noarch                          AIX noarch repository                                  301
repolist: 3458
aix01 #

To demonstrate that installing RPMs in this way with nimhttp is also possible, we show the installation of wget:

aix01 # yum install wget
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package wget.ppc 0:1.21.1-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================================
Package              Arch                Version                     Repository                   Size
========================================================================================================
Installing:
wget                 ppc                 1.21.1-1                    AIX_Toolbox                 703 k

Transaction Summary
========================================================================================================
Install       1 Package

Total size: 703 k
Installed size: 1.4 M
Is this ok [y/N]: y
Downloading Packages:
Running Transaction Check
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : wget-1.21.1-1.ppc                                                                    1/1
From wget-1.21.1 onwards, symbolic link of wget in /usr/bin is removed.
The binary is shipped in /opt/freeware/bin. Please use absolute path or
add /opt/freeware/bin in PATH environment variable to use the binary.

Installed:
  wget.ppc 0:1.21.1-1                                                                                  

Complete!

aix01 #

The AIX system does not necessarily have to be a NIM client, since YUM does not use NIM at all; it only uses the HTTP server provided by the NIM master. The AIX version is also irrelevant: the AIX client can run AIX 7.1 or AIX 7.2.

Note: On AIX 7.3, Dandified YUM (DNF) is used instead of YUM.
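The repositories provided via nimhttp can be used with DNF in the same way; only the command name changes, for example (a sketch; the configuration file locations of DNF on AIX 7.3 may differ from those of YUM):

aix01 # dnf repolist
aix01 # dnf install wget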

Virtual Network Interface Controller (vNIC)

vNIC adapter with 2 vNIC backing devices and vNIC failover.

The big disadvantage of SR-IOV, as described above, is that LPARs with logical SR-IOV ports cannot be moved with LPM. After the introduction of SR-IOV on POWER systems, a number of workarounds were suggested. However, all of these workarounds require a special configuration as well as a number of reconfigurations before and after each LPM operation, which unnecessarily complicates LPM operations in everyday practice.

With the introduction of vNICs, client LPARs can use SR-IOV adapters and still support LPM. As with VSCSI and VFC, a pair of adapters is used for this purpose: the so-called vNIC adapter is used in a virtual slot on the client LPAR and an associated vNIC server adapter is used on a virtual I/O server. The logical SR-IOV port is assigned to the virtual I/O server. The vNIC server adapter, also known as the vNIC backing device, serves as a proxy for the logical SR-IOV port. The interaction of the various adapters is shown in figure 7.19.

Figure 7.19: Communication path of vNIC for control information and data.

In order to achieve good performance, only control information is exchanged between the vNIC adapter of the client and the vNIC server adapter on the virtual I/O server; the vNIC server adapter in turn passes this control information on, via the associated logical SR-IOV port (ent device), to the corresponding virtual function of the SR-IOV adapter. The data itself is transferred between the vNIC client adapter and the logical port of the SR-IOV adapter via DMA (Direct Memory Access) with the help of the hypervisor. In particular, the data is not copied via the virtual I/O server. The vNIC adapter on the client is a purely virtual adapter, so LPM works without any problems. The client LPAR does not own the logical SR-IOV port and does not itself access it via the PCIe bus (switch).
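On the AIX client, a vNIC adapter appears as a regular ent device; only the adapter description reveals that it is a vNIC client adapter. A hedged example (device number and wording may vary):

aix01 # lsdev -l ent0
ent0 Available  Virtual NIC Client Adapter (vnic)
aix01 #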

Shared Ethernet Adapter


Despite SR-IOV and vNIC, Shared Ethernet is still the most widely used solution when it comes to virtualizing Ethernet. The POWER Hypervisor implements internal virtual IEEE 802.1Q compatible network switches, which, in conjunction with so-called Shared Ethernet Adapters (SEAs for short), provide the connection to external networks. The Shared Ethernet Adapters are implemented in software, as a layer-2 bridge, by the virtual I/O server.

As shown in figure 8.2, a Shared Ethernet Adapter can have several so-called trunking adapters. The SEA shown has the 3 trunking adapters ent8, ent9 and ent10, all of which are connected to the virtual switch with the name ETHMGMT. In the case shown, all trunking adapters support VLAN tagging; in addition to their port VLAN IDs (PVIDs), the 3 trunking adapters therefore also serve additional VLANs via VLAN tagging. Besides the connection to the virtual switch via the trunking adapters, the SEA also has a connection to an external network via the physical network adapter (ent0). Network packets from client LPARs to external systems reach the SEA via one of the trunking adapters and are then forwarded to the external network via the associated physical network adapter. Network packets from external systems to client LPARs are forwarded by the SEA via the trunking adapter with the correct VLAN to the virtual switch, which then delivers the packets to the client LPAR.

Figure 8.2: SEA with multiple trunking adapters and VLANs.

In the simplest case, a SEA consists of just one trunking adapter. A SEA can have up to 16 trunking adapters, and each trunking adapter can have up to 20 additional VLANs in addition to its port VLAN ID.

Which SEAs already exist on a virtual I/O server can be found out with the command “vios lssea” (list SEAs):

$ vios lssea ms05-vio1
                                       TIMES   TIMES    TIMES    BRIDGE 
NAME   HA_MODE  PRIORITY  STATE       PRIMARY  BACKUP  FLIPFLOP  MODE
ent33  Sharing  1         PRIMARY_SH  1        1       0         Partial
ent34  Sharing  1         PRIMARY_SH  1        1       0         Partial
$

Some basic information is displayed for each SEA, such as the HA mode (see below), the priority of the SEA, and how often the SEA has already been primary or backup.
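On the virtual I/O server itself, the ioscli command lsmap can be used as padmin to display, for each SEA, the associated trunking adapters and the physical backing device (output omitted here):

padmin> lsmap -all -net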

Virtual FC Adapter and NPIV


One possibility for the virtualization of storage under PowerVM is the use of virtual FC adapters. A virtual FC client adapter is connected to a virtual FC server adapter on a virtual I/O server via the POWER Hypervisor, as shown in figure 7.10. On the virtual I/O server, the virtual FC server adapter is then connected to one of the physical FC ports (mapping). Each of the connected virtual FC server adapters performs its own login into the FC fabric and is assigned its own 24-bit FC address.

Figure 7.10: Communication path of the virtual FC client adapter to the SAN LUN.

The advantage of virtual FC is that each virtual FC client adapter has its own N_Port and can therefore communicate directly with the storage in the FC fabric. The storage LUNs can be assigned directly to the virtual FC client adapter, without having to map each LUN individually on the virtual I/O server. The virtual I/O server itself normally does not see the storage LUNs of the virtual FC clients. This makes administration much easier than with virtual SCSI, where each storage LUN has to be mapped to a virtual SCSI server adapter on the virtual I/O server (see next chapter).

Before a virtual FC adapter is created and mapped, the situation on a virtual I/O server is as shown in figure 7.11. The physical FC port is connected to an FC fabric and therefore configures an N_Port. The physical FC port logs into the fabric (FLOGI) and is assigned the unique N_Port ID 8c8240. The FC port then registers its WWPN (here 10:00:00:10:9b:ab:01:02) with the simple name server (SNS) of the fabric (PLOGI). The virtual I/O server can then communicate with other N_Ports in the fabric using the fcs0 device.

Figure 7.11: Physical FC Port without Virtual FC and NPIV

N_Port ID Virtualization, or NPIV for short, is an extension of the FC standard and allows more than one N_Port to log into the fabric via the same physical FC port. In principle, this option had always existed, but only in connection with FC Arbitrated Loop (FC-AL) and fabrics. With NPIV, multiple client LPARs can share a physical FC port; each client gets its own unique N_Port.

Figure 7.12 shows the situation with 2 virtual FC client adapters. Each of the client adapters has a unique WWPN. The WWPN is assigned by PowerVM when the virtual FC client adapter is created (in order to support Live Partition Mobility, 2 WWPNs are always assigned, of which only one is active at a time). Each virtual FC client adapter requires a partner adapter on a virtual I/O server, the virtual FC server adapter (or vfchost). One of the physical FC ports must be assigned to the virtual FC server adapter on the virtual I/O server. When the client LPAR is active, the virtual FC server adapter logs into the fabric (FDISC) and is assigned a unique N_Port ID. In the figure, this is 8c8268 for the blue adapter and 8c8262 for the red adapter. The blue adapter then registers its client WWPN (here c0:50:76:07:12:cd:00:16) with the simple name server (SNS) of the fabric (PLOGI). The red adapter does the same for its client WWPN (here c0:50:76:07:12:cd:00:09). Both virtual FC client adapters then have an N_Port with a unique 24-bit ID and can thus communicate with other N_Ports in the fabric.

Figure 7.12: Physical FC port with Virtual FC and NPIV

The data itself is of course not copied between the virtual FC client adapter and the virtual FC server adapter by the hypervisor, as this would cost too much performance. The hypervisor only passes on the physical memory address at which the data is located, and the physical FC port then accesses this data directly via DMA (Direct Memory Access).
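On the virtual I/O server, the NPIV capability of the physical FC ports and the existing mappings of vfchost adapters can be checked as padmin with the ioscli commands lsnports and lsmap (output omitted; in the lsnports output, a value of 1 in the fabric column indicates a fabric that supports NPIV):

padmin> lsnports
padmin> lsmap -all -npiv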

Administering Storage Pools in PowerVM


In many cases, the use of SAN LUNs via NPIV is not suitable for the rapid provisioning of client LPARs. The SAN LUNs must first be created on the external storage systems, and then the zoning in the SAN fabric must be adjusted so that the new SAN LUNs are visible to the WWPNs of the client LPAR. Using VSCSI to map the SAN LUNs to the client LPARs also requires some effort: each SAN LUN is assigned to one or more client LPARs via VSCSI, which can lead to a large number of SAN LUNs on the virtual I/O servers.

One way to provide storage for client LPARs more quickly is to use storage pools on the virtual I/O servers. Once a storage pool has been created, storage can be made available for client LPARs with just one command. So-called backing devices are created in the storage pool and can be assigned to the client LPARs via virtual SCSI. Storage for client LPARs can thus be made available by the virtual I/O servers through PowerVM alone. This means that, for example, a boot disk for a new client LPAR can be created within a few seconds and used immediately.

PowerVM offers two different types of storage pools: local storage pools and shared storage pools. A local storage pool, or simply storage pool, is only available on one virtual I/O server. Each virtual I/O server has its own independent storage pools. A shared storage pool, on the other hand, can be made available by several virtual I/O servers that are combined in a cluster. Access to the shared storage pool is possible from each virtual I/O server that belongs to the cluster. Shared storage pools are not dealt with in this chapter.

There are two types of local storage pools: logical volume storage pools and file storage pools. With a logical volume storage pool, storage is made available for the client LPARs in the form of logical volumes; with a file storage pool, in the form of files.

Figure 8.13 shows a logical volume storage pool. The storage pool is implemented as a volume group and therefore draws its storage capacity from the associated physical volumes. In order to provide storage for client LPARs, logical volumes are created in the storage pool; in the figure, the logical volumes bd01, bd02 and bd03 have been created. The logical volumes are referred to as backing devices, because they ultimately serve as the storage location for the data of the client LPARs. A backing device is assigned to a client LPAR, more precisely to a vhost adapter (which corresponds one-to-one to a virtual SCSI adapter of a client LPAR), by means of a so-called virtual target device (vtscsi0, vtscsi1 and vtscsi2 in the figure). The virtual target device is created as a child device of the vhost adapter during mapping and points to the corresponding backing device via the device attribute aix_tdev.

Logical Volume Storage Pool
Figure 8.13: Logical Volume Storage Pool

As long as the storage pool still has free capacity, additional backing devices can be created and assigned to client LPARs at any time. The provisioning of storage for client LPARs is therefore very flexible, above all very fast, and completely under the control of the PowerVM administrator.
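A minimal sketch of the typical steps as padmin is shown below; the pool name mypool, the physical volume and the vhost adapter are assumptions. mksp creates the logical volume storage pool, mkbdsp creates a backing device of the given size and maps it to the specified vhost adapter in one step (output omitted):

padmin> mksp -f mypool hdisk4
padmin> mkbdsp -sp mypool 20G -bd bd01 -vadapter vhost0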

In addition to logical volume storage pools, file storage pools are also supported. Such a file storage pool is shown in figure 8.14; it is implemented as a file system. The underlying logical volume resides in the logical volume storage pool mypool. The storage pool name is used as the name for the logical volume; in the figure, the name filepool is used. The file system is mounted under /var/vio/storagepools/filepool, whereby the last path component is the same as the storage pool name. Files are used as backing devices, the file name being the same as the backing device name. The mapping is still implemented using virtual target devices; in the figure, vtscsi3 and vtscsi4 are shown as examples. The attribute aix_tdev of the virtual target devices points in this case to the respective file in the file storage pool.

Figure 8.14: File Storage Pool
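A file storage pool on top of an existing logical volume storage pool can be created and used in a similar way (a sketch with assumed names and sizes; output omitted):

padmin> mksp -fb filepool -sp mypool -size 20G
padmin> mkbdsp -sp filepool 10G -bd bd04 -vadapter vhost1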

Multiple Shared Processor Pools: Entitled Pool Capacity

Distribution of processor shares to shared processor pools and LPARs in the default shared processor pool according to EPC or EC.

An important change when using shared processor pools in PowerVM concerns the distribution of unused processor shares of the LPARs. Without shared processor pools, unused processor shares are divided among all uncapped LPARs according to their weights. As soon as shared processor pools are used, the distribution takes place in two stages. Unused processor shares are first distributed to uncapped LPARs within the same shared processor pool. Only the unused processor shares that are not consumed by other LPARs in the same shared processor pool are redistributed to LPARs in other shared processor pools.

Each shared processor pool has a so-called Entitled Pool Capacity (EPC), which is the sum of the guaranteed entitlements of the assigned LPARs and the Reserved Pool Capacity (RPC). The reserved pool capacity can be configured using the reserved_pool_proc_units attribute of the shared processor pool and has the default value 0. Just as the entitlement is guaranteed for a shared processor LPAR, the entitled pool capacity is guaranteed for a shared processor pool, regardless of how the shares are then distributed to the associated LPARs in the pool. Figure 5.15 shows reserved, entitled and maximum pool capacities for a shared processor pool.

The following condition must always be met for the pool capacities:

Reserved Pool Capacity <= Entitled Pool Capacity <= Maximum Pool Capacity

The pool capacities are always shown in the output of “ms lsprocpool”:

$ ms lsprocpool ms06
MS_NAME  PROCPOOL      ID  EC_LPARS  RESERVED  PENDING  ENTITLED  MAX
ms06     DefaultPool   0   7.90      -         -        7.90      -
ms06     SharedPool01  1   0.60      0.10      0.10     0.70      1.00
$

In the column EC_LPARS the guaranteed entitlements of the assigned LPARs are added up, here 0.60 for the pool SharedPool01. The column RESERVED shows the reserved pool capacity (0.10 for SharedPool01), the column ENTITLED shows the entitled pool capacity and finally the column MAX shows the maximum pool capacity. (The SharedPool01 is the shared processor pool from Figure 5.15.)
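For SharedPool01 this means: entitled pool capacity = EC_LPARS + RESERVED = 0.60 + 0.10 = 0.70.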

The figure above shows how the distribution of processor shares works in the presence of several shared processor pools.

Each shared processor pool receives a share of the processors (cores) according to its entitled pool capacity. Shared processor LPARs in the default shared processor pool receive processor shares according to their entitlement. The unused processor shares are distributed to all LPARs, regardless of shared processor pools, according to their weights (this is not shown in the diagram).

The processor shares assigned to each shared processor pool (according to the entitled pool capacity) are then distributed within the shared processor pool to the associated LPARs according to their entitlement. That means in particular that every LPAR in a shared processor pool continues to receive its guaranteed entitlement!

If an LPAR in a shared processor pool does not consume its entitlement, then these unused processor shares are first distributed within the shared processor pool to other LPARs that need additional processor shares. The distribution then takes place as before, taking into account the weights of the LPARs. Unused processor shares are thus, so to speak, “recycled” within a shared processor pool. If not all unused processor shares in the shared processor pool are used up in this way, then these are redistributed to all LPARs (LPARs with a need for additional processor shares) via the hypervisor, regardless of the associated shared processor pool.

This two-stage distribution of processor shares can be observed very well in a small experiment. We have increased the guaranteed entitlement to 0.8 for the 3 LPARs (lpar1, lpar2 and lpar3):

$ lpar addprocunits lpar1 0.4
$ lpar addprocunits lpar2 0.4
$ lpar addprocunits lpar3 0.4
$

The assignment to the shared processor pools remains unchanged: lpar1 and lpar2 are assigned to the shared processor pool benchmark and lpar3 remains in the default pool:

$ lpar -m ms11 lsproc
           PROC         PROCS           PROC_UNITS                        UNCAP   PROC    
LPAR_NAME  MODE    MIN  DESIRED  MAX  MIN  DESIRED  MAX  SHARING_MODE     WEIGHT  POOL
lpar1      shared  1    4        8    0.1  0.8      2.0  uncap            100     benchmark
lpar2      shared  1    4        8    0.1  0.8      2.0  uncap            100     benchmark
lpar3      shared  1    4        8    0.1  0.8      2.0  uncap            100     DefaultPool
ms11-vio1  ded     1    7        8    -    -        -    keep_idle_procs  -       -
ms11-vio2  ded     1    6        8    -    -        -    keep_idle_procs  -       -
$

In the shared processor pool benchmark, the resulting entitled pool capacity is 2 * 0.8 + 0.0 = 1.6 (the reserved pool capacity is 0.0). The entitled pool capacity of the default Shared Processor Pool with only one LPAR is 0.8.

$ ms lsprocpool ms11
MS_NAME  PROCPOOL     ID  EC_LPARS  RESERVED  PENDING  ENTITLED  MAX
ms11     DefaultPool  0   0.80      -         -        0.80      -
ms11     testpool     1   0.00      0.00      0.00     0.00      2.00
ms11     benchmark    2   1.60      0.00      0.00     1.60      2.00
$

We start the benchmark again, this time on lpar1 (shared processor pool benchmark) and lpar3 (shared processor pool DefaultPool) in parallel. No load is placed on lpar2 (shared processor pool benchmark); during the benchmark, lpar2 is at a load of approx. 0.00 – 0.01. This means that the guaranteed entitled pool capacity of 1.6 is available exclusively for lpar1! The guaranteed entitlement of lpar3 in the default pool is only 0.8. Of the 3 physical processors (cores) in the physical shared processor pool, only an entitlement of 3.0 – 1.6 – 0.8 = 0.6 remains, which can be distributed to LPARs with a need for additional processor shares. Since lpar1 and lpar3 both have the same weight (uncap_weight=100), they each receive an additional 0.3 processing units. That makes 1.6 + 0.3 = 1.9 for lpar1 and 0.8 + 0.3 = 1.1 for lpar3. This can be seen very nicely in the graphs of the processor utilization (figure 5.17). A short time after the start of the benchmark, around 1.9 physical processors (cores) are used on lpar1 and around 1.1 on lpar3. Due to the larger processor share, the benchmark on lpar1 finishes faster, and the processor utilization there goes down. lpar3 then has more processor shares available and takes almost all of the 3 available processors towards the end.

Without additional shared processor pools, all uncapped LPARs benefit from processor shares that an LPAR does not use. Since potentially all LPARs get a part of these unused processor shares, the proportion for an individual LPAR is relatively small. If additional shared processor pools are used, primarily the uncapped LPARs in the same shared processor pool benefit from the unused processor shares of an LPAR. Since these are fewer LPARs, the share of additional processor capacity per LPAR is larger.

5.5. Multiple Shared Processor Pools

5.5.1. Physical Shared Processor Pool

5.5.2. Multiple Shared Processor Pools

5.5.3. Configuring a Shared Processor Pool (Maximum Pool Capacity)

5.5.4. Assigning a Shared Processor Pool

5.5.5. Entitled Pool Capacity (EPC)

5.5.6. Reserved Pool Capacity (RPC)

5.5.7. Deactivating a Shared Processor Pool

Adding Logical SR-IOV Ports

SR-IOV Ethernet port with internal switch and 3 logical ports.

In order for an LPAR to use a virtual function of an SR-IOV adapter in PowerVM, a so-called logical port must be created for the LPAR. Which logical ports already exist can be displayed with the command “ms lssriov” and the option “-l” (logical port):

$ ms lssriov -l ms03
LOCATION_CODE  ADAPTER  PPORT  LPORT  LPAR  CAPACITY  CURR_MAC_ADDR  CLIENTS
$

Since the SR-IOV adapters have just been configured to shared mode, there are of course no logical ports yet. To add a logical SR-IOV port to an LPAR, the command “lpar addsriov” (add SR-IOV logical port) is used. In addition to the LPAR, the adapter ID and the port ID of the physical port must be specified. Alternatively, a unique suffix of the physical location code of the physical port can also be specified:

$ lpar addsriov aix22 P1-C11-T1
$

The creation can take a few seconds. A quick check shows that a logical port has actually been created:

$ ms lssriov -l ms03
LOCATION_CODE                   ADAPTER  PPORT  LPORT     LPAR   CAPACITY  CURR_MAC_ADDR  CLIENTS
U78AA.001.VYRGU0Q-P1-C11-T1-S1  1        0      27004001  aix22  2.0       a1b586737e00   -
$

Similar to virtual Ethernet on a managed system, an internal switch is implemented on the SR-IOV adapter for each physical Ethernet port, see the figure above. Each logical port is assigned one of the virtual functions. The associated LPARs access the logical ports directly via the PCI Express bus (PCIe switch).

An LPAR can easily have several logical SR-IOV ports. With the command “lpar lssriov” (list SR-IOV logical ports) all logical ports of an LPAR can be displayed:

$ lpar lssriov aix22
LPORT     REQ  ADAPTER  PPORT  CONFIG_ID  CAPACITY  MAX_CAPACITY  PVID  VLANS  CURR_MAC_ADDR  CLIENTS
27004001  Yes  1        0      0          2.0       100.0         0     all    a1b586737e00   -
$

There are a number of attributes that can be specified for a logical port when it is created. Among other things, the following properties can be configured:

    • capacity – the guaranteed capacity for the logical port.
    • port_vlan_id – the VLAN ID for untagged packets or 0 to switch off VLAN tagging.
    • promisc_mode – switch promiscuous mode on or off.

The complete list of attributes and their possible values can be found in the online help (“lpar help addsriov”).

As an example, we add another logical port with port VLAN ID 55 and a capacity of 20% to the LPAR aix22:

$ lpar addsriov aix22 P1-C4-T2 port_vlan_id=55 capacity=20
$

The generated logical port thus has a guaranteed share of 20% of the bandwidth of the physical port P1-C4-T2! The LPAR now has 2 logical SR-IOV ports:

$ lpar lssriov aix22
LPORT     REQ  ADAPTER  PPORT  CONFIG_ID  CAPACITY  MAX_CAPACITY  PVID  VLANS  CURR_MAC_ADDR  CLIENTS
27004001  Yes  1        0      0          2.0       100.0         0     all    a1b586737e00   -
2700c003  Yes  3        2      1          20.0      100.0         55    all    a1b586737e01   -
$

After the logical ports have been added to the LPAR by the PowerVM hypervisor, they initially appear in the Defined state. Under AIX, the logical ports appear as ent devices, like all other Ethernet adapters!

aix22 # lsdev -l ent\*
ent0 Available       Virtual I/O Ethernet Adapter (l-lan)
ent1 Defined   00-00 PCIe2 10GbE SFP+ SR 4-port Converged Network Adapter VF (df1028e214100f04)
ent2 Defined   01-00 PCIe2 100/1000 Base-TX 4-port Converged Network Adapter VF (df1028e214103c04)
aix22 #

After the config manager cfgmgr has run, the new ent devices are in the Available state and can be used in exactly the same way as all other Ethernet adapters.
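For the example above, this might then look like this (the device list corresponds to the lsdev output shown before, only the state has changed):

aix22 # cfgmgr
aix22 # lsdev -l ent\*
ent0 Available       Virtual I/O Ethernet Adapter (l-lan)
ent1 Available 00-00 PCIe2 10GbE SFP+ SR 4-port Converged Network Adapter VF (df1028e214100f04)
ent2 Available 01-00 PCIe2 100/1000 Base-TX 4-port Converged Network Adapter VF (df1028e214103c04)
aix22 #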

7.6. SR-IOV

7.6.1. Activating Shared Modes

7.6.2. Configuration of Physical SR-IOV Ports

7.6.3. Adding Logical SR-IOV Ports

7.6.4. Changing a Logical SR-IOV Port

7.6.5. Removing Logical SR-IOV Ports

7.6.6. Setting an SR-IOV Adapter from Shared back to Dedicated

Adding a Virtual Ethernet Adapter

Delivery of tagged packets, here for the VLAN 200.

If a virtual Ethernet adapter is to be added to an active LPAR in a PowerVM environment using the LPAR tool, the LPAR must have an active RMC connection to an HMC; this requires an already active Ethernet adapter (physical or virtual). In addition, a free virtual slot is required for the new virtual Ethernet adapter.

$ lpar lsvslot aix22
SLOT  REQ  ADAPTER_TYPE   STATE  DATA
0     Yes  serial/server  1      remote: (any)/any connect_status=unavailable hmc=1
1     Yes  serial/server  1      remote: (any)/any connect_status=unavailable hmc=1
5     No   eth            1      PVID=100 VLANS= ETHERNET0 1DC8DB485D1E
10    No   fc/client      1      remote: ms03-vio1(1)/5 c05076030aba0002,c05076030aba0003
20    No   fc/client      1      remote: ms03-vio2(2)/4 c05076030aba0000,c05076030aba0001
$

The virtual slot 6 is not yet used by the LPAR aix22. A virtual Ethernet adapter can be added with the command “lpar addeth”. At least the desired virtual slot number for the adapter and the desired port VLAN ID must be specified:

$ lpar addeth aix22 6 900
$

In the example, a virtual Ethernet adapter with port VLAN ID 900 was created in slot 6 for aix22. If the slot number doesn’t matter, the keyword auto can be specified instead of a number; the LPAR tool then automatically assigns a free slot number. The virtual adapter is available immediately, but must first be made known to the operating system. Exactly how this happens depends on the operating system used; on AIX, the cfgmgr command is used for this purpose.

After the virtual Ethernet adapter has been added, but before a run of cfgmgr is started, only the virtual Ethernet adapter ent0 is known to the AIX operating system of the LPAR aix22:

aix22 # lscfg -l ent*
  ent0             U9009.22A.8991971-V30-C5-T1  Virtual I/O Ethernet Adapter (l-lan)
aix22 #

After a run of cfgmgr the newly added virtual Ethernet adapter appears as ent1:

aix22 # cfgmgr
aix22 # lscfg -l ent*
  ent0             U9009.22A.8991971-V30-C5-T1  Virtual I/O Ethernet Adapter (l-lan)
  ent1             U9009.22A.8991971-V30-C6-T1  Virtual I/O Ethernet Adapter (l-lan)
aix22 #

Note: On AIX, the device name for an Ethernet adapter cannot be used to identify the type. Regardless of whether an Ethernet adapter is physical or virtual or a virtual function of an SR-IOV adapter, the device name ent with an ascending instance number is always used.

If an IEEE 802.1Q compatible virtual Ethernet adapter with additional VLAN IDs is to be created, the option “-i” (IEEE 802.1Q compatible adapter) must be used. Alternatively, the attribute ieee_virtual_eth=1 can be specified. The additional VLAN IDs are specified as a comma-separated list:

$ lpar addeth -i aix22 7 900 100,200,300
$

The port VLAN ID is 900, and the additional VLAN IDs are 100, 200 and 300.

If an LPAR has no active RMC connection or is not active, then a virtual Ethernet adapter can only be added to one of the profiles of the LPAR. This is the case, for example, if the LPAR has just been created and has not yet been installed.

In this case, only the option “-p” with a profile name has to be added to the commands shown. Which profiles an LPAR has can easily be found out using “lpar lsprof” (list profiles):

$ lpar lsprof aix22
NAME                      MEM_MODE  MEM   PROC_MODE  PROCS  PROC_COMPAT
standard                  ded       7168  ded        2      default
last*valid*configuration  ded       7168  ded        2      default
$

(The last active configuration is stored in the profile with the name last*valid*configuration.)

The virtual adapters defined in the profile standard can then be displayed by specifying the profile name with “lpar lsvslot”:

$ lpar -p standard lsvslot aix22
SLOT  REQ  ADAPTER_TYPE   DATA
0     Yes  serial/server  remote: (any)/any connect_status= hmc=1
1     Yes  serial/server  remote: (any)/any connect_status= hmc=1
5     No   eth            PVID=100 VLANS= ETHERNET0 
6     No   eth            PVID=900 VLANS= ETHERNET0 
7     No   eth            IEEE PVID=900 VLANS=100,200,300 ETHERNET0 
10    No   fc/client      remote: ms03-vio1(1)/5 c05076030aba0002,c05076030aba0003
20    No   fc/client      remote: ms03-vio2(2)/4 c05076030aba0000,c05076030aba0001
$

When adding the adapter, only the corresponding profile name has to be given in addition; otherwise the command looks exactly as shown above:

$ lpar -p standard addeth -i aix22 8 950 150,250
$

In order to make the new adapter in slot 8 available, the LPAR must be activated again, specifying the profile name.
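A hedged sketch of the reactivation with the LPAR tool; the exact syntax of the activate subcommand and the placement of the profile option should be checked in the online help (lpar help activate):

$ lpar activate -p standard aix22
$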

7.3. Virtual Ethernet

7.3.1. VLANs and VLAN Tagging

7.3.2. Adding a Virtual Ethernet Adapter

7.3.3. Virtual Ethernet Switches

7.3.4. Virtual Ethernet Bridge Mode (VEB)

7.3.5. Virtual Ethernet Port Aggregator Mode (VEPA)

7.3.6. Virtual Networks

7.3.7. Adding and Removing VLANs to/from an Adapter

7.3.8. Changing Attributes of a Virtual Ethernet Adapter

7.3.9. Removing a Virtual Ethernet Adapter