IBM PowerVM: Add a virtual Ethernet adapter to an LPAR

A virtual Ethernet adapter is to be added to the LPAR aix01 with IBM PowerVM. The data in detail:

    • HMC: hmc01
    • managed system: ms25
    • LPAR: aix01
    • profile: standard
    • virtual slot number: 4
    • Port-VLAN-ID: 900
    • virtual Ethernet switch: ETHERNET0(default)
    • additional VLANs: none

The command on the associated HMC hmc01 is:

hscroot@hmc01:~> chhwres -m ms25 -r virtualio --rsubtype eth -o a -p aix01 -s 4 -a 'ieee_virtual_eth=0,port_vlan_id=900'
hscroot@hmc01:~>
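
Whether the adapter was created with the desired slot number and PVID can be verified with lshwres. A possible check (the field list and the output line are only illustrative; with slot 4 as the only virtual Ethernet adapter of aix01, the output should show slot 4 with PVID 900):

hscroot@hmc01:~> lshwres -m ms25 -r virtualio --rsubtype eth --level lpar --filter lpar_names=aix01 -F lpar_name,slot_num,port_vlan_id
aix01,4,900
hscroot@hmc01:~>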

If the currently used profile of the LPAR is not automatically synchronized, then the additional virtual Ethernet adapter should also be added to the profile:

hscroot@hmc01:~> chsyscfg -r prof -m ms25 -i 'lpar_name=aix01,name=standard,"virtual_eth_adapters+=""4/0/900///0"""'
hscroot@hmc01:~>
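
The profile entry can be checked afterwards with lssyscfg, for example as follows (illustrative; if slot 4 is the only virtual Ethernet adapter in the profile, exactly this adapter specification should be listed):

hscroot@hmc01:~> lssyscfg -r prof -m ms25 --filter "lpar_names=aix01,profile_names=standard" -F virtual_eth_adapters
4/0/900///0
hscroot@hmc01:~>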

With our LPAR tool, the command to use looks like this:

$ lpar addeth aix01 4 900
$

The current profile is automatically adjusted.

Detailed information on the LPAR tool and virtual Ethernet adapters can be found here: Virtual Ethernet

Error when deleting a SEA

The following SEA on a virtual I/O server is no longer required:

$ lsdev -dev ent48
name             status      description
ent48            Available   Shared Ethernet Adapter
$

Attempting to delete the SEA using rmvdev fails with the following error message:

$ rmvdev -sea ent48

Some error messages may contain invalid information
for the Virtual I/O Server environment.

Method error (/usr/lib/methods/ucfgcommo):
        0514-062 Cannot perform the requested function because the
                 specified device is busy.

$

The SEA is still in use. One possible cause is that LLDP is active on the SEA, which can be checked with the lsdev command:

$ lsdev -dev ent48 -attr lldpsvc
value

yes
$

In this case LLDP is active on the SEA and must first be stopped before the SEA can be deleted. Stopping LLDP on the SEA is easily done by changing the lldpsvc attribute to the value “no”:

$ chdev -dev ent48 -attr lldpsvc=no
ent48 changed
$

Another attempt to delete the SEA ent48 is now successful:

$ rmvdev -sea ent48
ent48 deleted
$

More information on SEAs can be found here: Shared Ethernet Adapter

LPAR tool: Console

lpar console

A console can be opened for an LPAR at any time using the LPAR tool:

$ lpar console lpar01
Open in progress
Open completed.
PowerPC Firmware
SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
…

To terminate a console session, the escape sequence “~.” is used.

Some LPAR tool commands support opening a console via the “-c” (console) option:

    • Activating an LPAR with “lpar activate -c” (see the example after this list).
    • Shutting down an LPAR with “lpar shutdown -c”.
    • Shutting down the operating system with “lpar osshutdown -c”.
    • Initiating a system dump for an LPAR with “lpar dumprestart -c”.
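
As an example, activating an LPAR and opening its console in one step could look like this (the LPAR name lpar01 is a placeholder, the console output depends on the boot settings of the LPAR):

$ lpar activate -c lpar01
…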

A presentation on the subject can be found here: Console with the LPAR tool

LPAR tool 1.7.0.1 is now available

Version 1.7.0.1 of the LPAR tool is now available in our download area.

The new version supports the following new features, among others:

    • Installation of IFixes and updates on the HMC (hmc help updhmc)
    • System firmware updates (and more) of managed systems (ms help updatelic)
    • Display FLRT data with online query at IBM (hmc help flrt, ms help flrt, lpar help flrt)
    • Configuration of NTP on HMCs (hmc help ntp)

Versions for Linux, AIX and macOS are available.

All versions include a test license valid until September 30th, 2022.

So download, install and then try it out!

View IOS Version as a Normal User

On a virtual I/O server, the IOS version can be displayed as user padmin using the ioslevel command:

padmin> ioslevel
3.1.2.10
padmin>

As user root (after using oem_setup_env), the IOS version can be shown as follows:

# /usr/ios/cli/ioscli ioslevel
3.1.2.10
#

However, neither command works for a normal, non-privileged user:

$ ioslevel
ksh: ioslevel: not found.
$ /usr/ios/cli/ioscli ioslevel
Access to run command is not valid.

$

The IOS version is simply stored in a text file and can be easily displayed as a normal user with the cat command:

$ cat /usr/ios/cli/ios.level
3.1.2.10
$

Virtual Network Interface Controller (vNIC)

vNIC adapter with 2 vNIC backing devices and vNIC failover.

The big disadvantage of SR-IOV, as described above, is that LPARs with logical SR-IOV ports cannot be moved (LPM). After the introduction of SR-IOV on POWER systems, there were a number of suggestions for workarounds. However, all of these workarounds require, on the one hand, a special configuration and, on the other hand, a number of reconfigurations to be carried out before and after an LPM operation. In everyday practice, however, this unnecessarily complicates LPM operations.

With the introduction of vNICs, client LPARs can use SR-IOV adapters and still support LPM. As with VSCSI and VFC, a pair of adapters is used for this purpose: the so-called vNIC adapter is used in a virtual slot on the client LPAR and an associated vNIC server adapter is used on a virtual I/O server. The logical SR-IOV port is assigned to the virtual I/O server. The vNIC server adapter, also known as the vNIC backing device, serves as a proxy for the logical SR-IOV port. The interaction of the various adapters is shown in figure 7.19.

Figure 7.19: Communication path of vNIC for control information and data.

In order to achieve good performance, only control information is transmitted from the vNIC adapter of the client to the vNIC server adapter on the virtual I/O server, which in turn forwards it via the associated logical SR-IOV port (ent adapter) to the corresponding logical port (virtual function) of the SR-IOV adapter. The data itself is transferred between the vNIC client adapter and the logical port of the SR-IOV adapter via DMA (Direct Memory Access) with the help of the hypervisor. In particular, there is no copying of the data via the virtual I/O server. The vNIC adapter on the client is a purely virtual adapter, so LPM works without any problems. The client does not own the logical SR-IOV port and does not access it itself via the PCIe bus (switch).
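
The vNIC adapters of an LPAR, including their backing devices, can be listed on the HMC, for example with lshwres. A sketch, assuming a managed system ms05 and an LPAR lpar01 (both names are placeholders); the exact output format depends on the HMC version:

hscroot@hmc01:~> lshwres -m ms05 -r virtualio --rsubtype vnic --level lpar --filter lpar_names=lpar01
…
hscroot@hmc01:~>

On the AIX client, a vNIC adapter appears as a regular entX device whose description (lsdev/lscfg) identifies it as a virtual NIC client adapter.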

Shared Ethernet Adapter

Despite SR-IOV and vNIC, shared Ethernet is still the most widely used solution when it comes to virtualizing Ethernet. The POWER Hypervisor implements internal virtual IEEE 802.1Q compatible network switches, which, in conjunction with so-called shared Ethernet adapters, or SEAs for short, take over the connection to external networks. The shared Ethernet adapters are implemented in software as a layer 2 bridge by the virtual I/O server.

As shown in figure 8.2, a shared Ethernet adapter can have several so-called trunking adapters. The SEA shown has the 3 trunking adapters ent8, ent9 and ent10, all 3 of which are connected to the virtual switch with the name ETHMGMT. In the case shown, all trunking adapters support VLAN tagging. In addition to their port VLAN IDs (PVIDs), the 3 trunking adapters also carry additional VLANs via VLAN tagging. In addition to the connection to the virtual switch via the trunking adapters, the SEA also has a connection to an external network via the physical network adapter (ent0). Network packets from client LPARs to external systems are forwarded to the SEA via one of the trunking adapters and then to the external network via the associated physical network adapter. Network packets from external systems to client LPARs are forwarded by the SEA via the trunking adapter with the correct VLAN to the virtual switch, which then forwards the packets to the client LPAR.

Figure 8.2: SEA with multiple trunking adapters and VLANs.

In the simplest case, a SEA consists of just one trunking adapter. A SEA can have up to 16 trunking adapters, whereby each of the trunking adapters can have up to 20 additional VLANs in addition to the port VLAN ID.
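
For this simplest case, creating a SEA on the virtual I/O server could look roughly as follows (a sketch with placeholder adapter names: ent0 is the physical adapter, ent8 the trunking adapter with default PVID 1; the name of the resulting SEA device is also only illustrative):

$ mkvdev -sea ent0 -vadapter ent8 -default ent8 -defaultid 1
ent33 Available
$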

Which SEAs already exist on a virtual I/O server can be found out with the help of the command “vios lssea” (list SEAs):

$ vios lssea ms05-vio1
                                       TIMES   TIMES    TIMES    BRIDGE 
NAME   HA_MODE  PRIORITY  STATE       PRIMARY  BACKUP  FLIPFLOP  MODE
ent33  Sharing  1         PRIMARY_SH  1        1       0         Partial
ent34  Sharing  1         PRIMARY_SH  1        1       0         Partial
$

Some basic information is displayed for each SEA, such as the HA mode (see later), the priority of the SEA, as well as information on how often the SEA was already primary or backup.
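
The HA mode can also be queried directly on the virtual I/O server via the ha_mode attribute of the SEA, analogous to the lldpsvc query shown further above (SEA name as an example):

$ lsdev -dev ent33 -attr ha_mode
value

sharing
$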

Virtual FC Adapter and NPIV

One possibility for the virtualization of storage under PowerVM is the use of virtual FC adapters. A virtual FC client adapter is connected to a virtual FC server adapter on a virtual I/O server via the POWER Hypervisor, as shown in figure 7.10. On the virtual I/O server, the virtual FC server adapter is then connected to one of the physical FC ports (mapping). Each of the connected virtual FC server adapters can log into the FC fabric on its own. Each virtual FC server adapter is assigned its own 24-bit FC address.

Figure 7.10: Communication path of the virtual FC client adapter to the SAN LUN.

The advantage of virtual FC is that each virtual FC client adapter has its own N_Port and can therefore communicate directly with the storage in the FC fabric. The storage LUNs can be assigned directly to the virtual FC client adapter, without having to map each LUN individually on the virtual I/O server. The virtual I/O server itself normally does not see the storage LUNs of the virtual FC clients. This makes administration much easier than with virtual SCSI, where each storage LUN has to be mapped on the virtual I/O server to a virtual SCSI server adapter (see next chapter).
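
On the virtual I/O server, the mapping of a virtual FC server adapter (vfchost) to a physical FC port is done with the vfcmap command and can be checked with lsmap; a minimal sketch with placeholder device names:

$ vfcmap -vadapter vfchost0 -fcp fcs0
$ lsmap -npiv -vadapter vfchost0
…
$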

Before a virtual FC adapter is created and mapped, the situation on a virtual I/O server is as shown in figure 7.11. The physical FC port is connected to an FC fabric and therefore configures an N_Port. The physical FC port logs into the fabric (FLOGI) and is assigned the unique N_Port ID 8c8240. The FC port then registers its WWPN (here 10:00:00:10:9b:ab:01:02) with the simple name server (SNS) of the fabric (PLOGI). The virtual I/O server can then communicate with other N_Ports in the fabric using the fcs0 device.

Figure 7.11: Physical FC Port without Virtual FC and NPIV
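
Whether a physical FC port and the fabric it is connected to support NPIV can be checked on the virtual I/O server with the lsnports command; a fabric value of 1 indicates an NPIV-capable fabric (the numbers shown are purely illustrative):

$ lsnports
name     physloc                         fabric tports aports swwpns awwpns
fcs0     …                               1      64     63     2048   2046
$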

N_Port-ID virtualization or NPIV for short is an extension of the FC standard and allows more than one N_Port to log into the fabric using the same physical FC port. In principle, this option has always existed, but only in connection with FC Arbitrated Loop (FC-AL) and fabrics. With NPIV, multiple client LPARs can share a physical FC port. Each client has its own unique N_Port.

Figure 7.12 shows the situation with 2 virtual FC client adapters. Each of the client adapters has a unique WWPN. The WWPN is assigned by PowerVM when the virtual FC client adapter is created (in order to be able to support live partition mobility, 2 WWPNs are always assigned, whereby only one of the two WWPNs is active). Each virtual FC client adapter requires a partner adapter on a virtual I/O server, the virtual FC server adapter (or vfchost). One of the physical FC ports must be assigned to the virtual FC server adapter on the virtual I/O server. If the client LPAR is active, the virtual FC server adapter logs into the fabric (FDISC) and is assigned a unique N_Port ID. In the figure it is the 8c8268 for the blue adapter and the 8c8262 for the red adapter. Then the blue adapter registers its client WWPN (here c0:50:76:07:12:cd:00:16) with the simple name server (SNS) of the fabric (PLOGI). The red adapter does the same for its client WWPN (here c0:50:76:07:12:cd:00:09). Both virtual FC client adapters then have an N_Port with a unique 24-bit ID and can thus communicate with other N_Ports in the fabric.

Figure 7.12: Physical FC port with Virtual FC and NPIV
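
The WWPN of a virtual FC client adapter can be displayed on the AIX client with lscfg, shown here for the blue adapter from figure 7.12 (the device name fcs0 is a placeholder; only the currently active WWPN appears):

$ lscfg -vl fcs0 | grep "Network Address"
        Network Address.............C050760712CD0016
$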

The data is of course not copied between the virtual FC client adapter and the virtual FC server adapter by the hypervisor, as this would cost too much performance. The hypervisor only forwards the physical memory address at which the data is located and the physical FC port then uses DMA (Direct Memory Access) to access this data.

Administering Storage Pools in PowerVM

In many cases, the use of SAN LUNs via NPIV is not suitable for the rapid provisioning of client LPARs. The SAN LUNs must first be created on the external storage systems and then the zoning in the SAN fabric must be adjusted, so that the new SAN LUNs are visible to the WWPNs of the client LPAR. Using VSCSI to map the SAN LUNs to the client LPARs also requires some effort. Each SAN LUN is assigned to one or more client LPARs via VSCSI, which can lead to a large number of SAN LUNs on the virtual I/O servers.

One way to provide storage for client LPARs more quickly is to use storage pools on the virtual I/O servers. Once a storage pool has been created, storage can be made available for client LPARs with just one command. So-called backing devices are generated in the storage pool, which can be assigned to the client LPARs via virtual SCSI. Storage for client LPARs can thus be made available by the virtual I/O servers via PowerVM. This means that, for example, a boot disk for a new client LPAR can be created within a few seconds and can be used immediately.

PowerVM offers two different types of storage pools: local storage pools and shared storage pools. A local storage pool, or simply storage pool, is only available on one virtual I/O server. Each virtual I/O server has its own independent storage pools. A shared storage pool, on the other hand, can be made available by several virtual I/O servers that are combined in a cluster. Access to the shared storage pool is possible from each virtual I/O server that belongs to the cluster. Shared storage pools are not dealt with in this chapter.

There are two types of local storage pools: logical volume storage pools and file storage pools. With a logical volume storage pool, storage is made available for the client LPARs in the form of logical volumes, with a file storage pool in the form of files.

Figure 8.13 shows a logical volume storage pool. The storage pool is implemented in the form of a volume group and therefore draws its storage capacity from the associated physical volumes. In order to provide storage for client LPARs, logical volumes are created in the storage pool. In the figure, the logical volumes bd01, bd02 and bd03 have been created. The logical volumes are referred to as backing devices, because they ultimately serve as the storage location for the data of the client LPARs. The assignment of a backing device to a client LPAR, more precisely a vhost adapter, which is assigned one-to-one to a virtual SCSI adapter of a client LPAR, takes place using a so-called virtual target device (vtscsi0, vtscsi1 and vtscsi2 in the figure). The virtual target device is a child device of one of the vhost adapters and points to the corresponding backing device via the device attribute aix_tdev. When mapping, the virtual target device is created as a child of a vhost adapter.

Figure 8.13: Logical Volume Storage Pool
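
A minimal sketch of these steps on the virtual I/O server, assuming hdisk4 is an unused physical volume and vhost0 is the vhost adapter of the client LPAR (all names and the size are placeholders; output abbreviated):

$ mksp -f mypool hdisk4
…
$ mkbdsp -sp mypool 20G -bd bd01 -vadapter vhost0
…
$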

As long as the storage pool still has free capacity, additional backing devices can be created and assigned to client LPARs at any time. The provisioning of storage for client LPARs is therefore very flexible and, above all, very fast, and is completely under the control of the PowerVM administrator.

In addition to logical volume storage pools, file storage pools are also supported. Such a file storage pool is shown in figure 8.14; it is implemented as a file system. The underlying logical volume is in the logical volume storage pool mypool. The storage pool name is used as the name for the logical volume; in the figure, the name filepool is used. The file system is mounted under /var/vio/storagepools/filepool, whereby the last path component is the same as the storage pool name. Files are used as backing devices, the file name being the same as the backing device name. The mapping is still implemented using virtual target devices; in the figure, vtscsi3 and vtscsi4 are shown as examples. The attribute aix_tdev of the virtual target devices points in this case to the respective file in the file storage pool.

Figure 8.14: File Storage Pool
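
A file storage pool like the one in figure 8.14 is created on top of an existing logical volume storage pool; a sketch, assuming the parent pool mypool from above (pool name, size, backing device name and vhost adapter are placeholders; output abbreviated):

$ mksp -fb filepool -sp mypool -size 20G
…
$ mkbdsp -sp filepool 10G -bd bd04 -vadapter vhost1
…
$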