Shared Ethernet Adapter

Despite SR-IOV and vNIC, the Shared Ethernet Adapter is still the most widely used solution for virtualizing Ethernet. The POWER Hypervisor implements internal, IEEE 802.1Q-compatible virtual network switches, which, in conjunction with so-called Shared Ethernet Adapters (SEAs for short), provide the connection to external networks. A Shared Ethernet Adapter is implemented in software as a layer-2 bridge by the virtual I/O server.
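
The virtual switches of a managed system can be listed on the HMC command line, for example with lshwres (the managed system name ms05 is only a placeholder):

$ lshwres -r virtualio --rsubtype vswitch -m ms05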

As shown in figure 8.2, a shared Ethernet adapter can have several so-called trunking adapters. The SEA shown has the 3 trunking adapters ent8, ent9 and ent10, all 3 of which are connected to the virtual switch with the name ETHMGMT. In the case shown, all trunking adapters support VLAN tagging: in addition to their port VLAN IDs (PVIDs), the 3 trunking adapters carry additional VLANs via VLAN tagging. Besides the connection to the virtual switch via the trunking adapters, the SEA also has a connection to an external network via the physical network adapter (ent0). Network packets from client LPARs to external systems are forwarded to the SEA via one of the trunking adapters and then to the external network via the associated physical network adapter. Network packets from external systems to client LPARs are forwarded by the SEA via the trunking adapter with the matching VLAN to the virtual switch, which then delivers the packets to the client LPAR.

Figure 8.2: SEA with multiple trunking adapters and VLANs.
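
A trunking adapter such as ent8 is an ordinary virtual Ethernet adapter of the virtual I/O server for which the trunk flag (access to the external network) is set. As a sketch, such an adapter with additional VLANs could be added dynamically from the HMC command line; the managed system ms05, the partition ms05-vio1, slot number 8 and the VLAN IDs 100 and 200 are placeholders, and the exact attribute names should be verified against the HMC documentation:

$ chhwres -r virtualio --rsubtype eth -m ms05 -o a -p ms05-vio1 -s 8 \
    -a 'ieee_virtual_eth=1,port_vlan_id=1,"addl_vlan_ids=100,200",is_trunk=1,trunk_priority=1,vswitch=ETHMGMT'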

In the simplest case, a SEA consists of just one trunking adapter. A SEA can have up to 16 trunking adapters, and each of the trunking adapters can carry up to 20 additional VLANs in addition to its port VLAN ID.
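
On the virtual I/O server, the SEA itself is created from the physical adapter and the trunking adapters with the mkvdev command. A minimal sketch for the configuration from figure 8.2 could look like this (the choice of ent8 as default adapter and the default PVID of 1 are assumptions):

$ mkvdev -sea ent0 -vadapter ent8,ent9,ent10 -default ent8 -defaultid 1

For a SEA failover configuration, as shown in the lssea output below, attributes such as ha_mode would additionally be specified via the -attr option.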

The SEAs that already exist on a virtual I/O server can be listed with the command “vios lssea” (list SEAs):

$ vios lssea ms05-vio1
                                       TIMES   TIMES    TIMES    BRIDGE 
NAME   HA_MODE  PRIORITY  STATE       PRIMARY  BACKUP  FLIPFLOP  MODE
ent33  Sharing  1         PRIMARY_SH  1        1       0         Partial
ent34  Sharing  1         PRIMARY_SH  1        1       0         Partial
$

Some basic information is displayed for each SEA, such as the HA mode (more on this later), the priority of the SEA, and how often the SEA has already been primary or backup.
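
The underlying SEA devices can also be inspected directly on the virtual I/O server using the padmin commands lsmap and lsdev; ent33 is one of the SEAs from the output above:

$ lsmap -all -net          # physical and trunking adapters of each SEA
$ lsdev -dev ent33 -attr   # SEA attributes such as ha_mode or pvid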

Virtual FC Adapter and NPIV

One possibility for virtualizing storage under PowerVM is the use of virtual FC adapters. A virtual FC client adapter is connected via the POWER Hypervisor to a virtual FC server adapter on a virtual I/O server, as shown in figure 7.10. On the virtual I/O server, the virtual FC server adapter is then connected to one of the physical FC ports (mapping). Each of the connected virtual FC server adapters can log into the FC fabric on its own and is assigned its own 24-bit FC address.

Figure 7.10: Communication path of the virtual FC client adapter to the SAN LUN.

The advantage of virtual FC is that each virtual FC client adapter has its own N_Port and can therefore communicate directly with the storage in the FC fabric. Storage LUNs can be assigned directly to the virtual FC client adapter, without having to map each LUN individually on the virtual I/O server. The virtual I/O server itself normally does not see the storage LUNs of the virtual FC clients. This makes administration much easier than with virtual SCSI, where each storage LUN has to be mapped to a virtual SCSI server adapter on the virtual I/O server (see next chapter).
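
On the virtual I/O server, the mapping between a virtual FC server adapter and a physical FC port is created with the vfcmap command and can be checked with lsmap; the device names vfchost0 and fcs0 are only examples. Unlike with virtual SCSI, no LUNs appear in this mapping, only the two adapters are connected to each other:

$ vfcmap -vadapter vfchost0 -fcp fcs0
$ lsmap -npiv -vadapter vfchost0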

Before a virtual FC adapter is created and mapped, the situation on a virtual I/O server is as shown in figure 7.11. The physical FC port is connected to an FC fabric and therefore configures an N_Port. The physical FC port logs into the fabric (FLOGI) and is assigned the unique N_Port ID 8c8240. The FC port then registers its WWPN (here 10:00:00:10:9b:ab:01:02) with the simple name server (SNS) of the fabric (PLOGI). The virtual I/O server can then communicate with other N_Ports in the fabric using the fcs0 device.

Figure 7.11: Physical FC port without Virtual FC and NPIV
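
The WWPN of a physical FC port and the N_Port ID assigned by the fabric can, for example, be looked up in the AIX device information on the virtual I/O server (as root, e.g. after oem_setup_env):

# lscfg -vpl fcs0 | grep 'Network Address'
# fcstat fcs0 | egrep 'World Wide Port Name|Port FC ID'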

N_Port ID Virtualization, or NPIV for short, is an extension of the FC standard that allows more than one N_Port to log into the fabric via the same physical FC port. In principle, this possibility has always existed, but only in connection with FC Arbitrated Loop (FC-AL) and fabrics. With NPIV, multiple client LPARs can share a physical FC port, with each client getting its own unique N_Port.
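
Whether a physical FC port and the attached switch support NPIV can be checked on the virtual I/O server with the lsnports command; a value of 1 in the fabric column indicates an NPIV-capable fabric, and aports shows how many virtual ports are still available:

$ lsnports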

Figure 7.12 shows the situation with 2 virtual FC client adapters. Each of the client adapters has a unique WWPN. The WWPN is assigned by PowerVM when the virtual FC client adapter is created (in order to support live partition mobility, 2 WWPNs are always assigned, of which only one is active at a time). Each virtual FC client adapter requires a partner adapter on a virtual I/O server, the virtual FC server adapter (or vfchost). On the virtual I/O server, one of the physical FC ports must be assigned to the virtual FC server adapter. When the client LPAR is activated, the virtual FC server adapter logs into the fabric (FDISC) and is assigned a unique N_Port ID. In the figure, this is 8c8268 for the blue adapter and 8c8262 for the red adapter. The blue adapter then registers its client WWPN (here c0:50:76:07:12:cd:00:16) with the simple name server (SNS) of the fabric (PLOGI); the red adapter does the same for its client WWPN (here c0:50:76:07:12:cd:00:09). Both virtual FC client adapters then have an N_Port with a unique 24-bit ID and can thus communicate with other N_Ports in the fabric.

Figure 7.12: Physical FC port with Virtual FC and NPIV
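
The WWPNs that PowerVM has assigned to the virtual FC client adapters of an LPAR (including the second, currently inactive WWPN for live partition mobility) can be displayed on the HMC command line; the names ms05 and aix01 are placeholders:

$ lshwres -r virtualio --rsubtype fc --level lpar -m ms05 --filter lpar_names=aix01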

The data itself is of course not copied between the virtual FC client adapter and the virtual FC server adapter by the hypervisor, as this would cost too much performance. The hypervisor only passes on the physical memory address at which the data is located, and the physical FC port then accesses this data directly via DMA (Direct Memory Access).