Manage group membership on AIX with chgrpmem

AIX provides an elegant command to change user group membership: chgrpmem.

As an example we use the users user01, user02, …, and the group mygroup:

$ lsgroup mygroup
mygroup id=225 admin=false users= registry=files
$

The group mygroup currently has no members (the users attribute is empty).

To add the two local users user01 and user02 to the group mygroup, the “-m” (member) option must be used, followed by a plus sign “+” for adding and a comma-separated list of user names. The last argument is the group:

# chgrpmem -m + user01,user02 mygroup
#
# lsgroup mygroup
mygroup id=225 admin=false users=user01,user02 registry=files
#

Using the equal sign “=” instead of the plus sign “+” overwrites the current list of users with the given list of user names:

# chgrpmem -m = user03,user04,user05 mygroup
# 
# lsgroup mygroup
mygroup id=225 admin=false users=user03,user04,user05 registry=files
#

Users are removed by using a minus sign “-”, e.g. removing user04:

# chgrpmem -m - user04 mygroup
# 
# lsgroup mygroup
mygroup id=225 admin=false users=user03,user05 registry=files
#

However, removing a user from the member list of a group is not always successful. As an example, we create the user user06 with primary group mygroup:

# mkuser pgrp=mygroup user06
# 
# lsgroup mygroup
mygroup id=225 admin=false users=user03,user05,user06 registry=files
#

The output of lsgroup shows that user06 is now also a member of the group mygroup. However, the membership cannot be revoked in this case:

# chgrpmem -m - user06 mygroup
Cannot drop "user06" from primary group "mygroup".
#

A user must always have a primary group! The chgrpmem command can only be used to manage users’ additional memberships. The primary group can only be changed with the chuser command.
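
If user06 really is to be removed from mygroup, its primary group must first be changed with chuser. A minimal sketch, assuming the group staff exists and is an acceptable primary group for user06:

# chuser pgrp=staff user06
#
# chgrpmem -m - user06 mygroup
#

Once the primary group has been changed, the chgrpmem call succeeds.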

Note: The chgrpmem command and the “-a” option can also be used to change the administrators of a group. However, this is rarely used in practice and is therefore not addressed here.

Changing the PVID of a Physical Volume

Each physical volume used by AIX LVM has a unique physical volume ID, or PVID for short. The PVID is a software-generated ID that is stored in the header area of the disk (block 0). When a new disk is added to an AIX system, the new physical volume does not yet have a PVID. As soon as the physical volume is added to a volume group, a PVID is automatically generated, provided the physical volume does not already have one; an existing PVID is retained.
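
Adding a disk to a volume group is therefore enough for a PVID to be assigned. A minimal sketch, assuming hdisk4 is a new, unused disk and the volume group datavg already exists:

# extendvg datavg hdisk4
#

Afterwards, lspv shows a PVID for hdisk4.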

A PVID can also be created manually, using the chdev command. The pv attribute is set to the value yes:

# chdev -l hdisk3 -a pv=yes
hdisk3 changed
#

The newly set PVID can be displayed either with the lsattr command or simply with lspv:

$ lsattr -El hdisk3 -a pvid -F value
00c276b0084049750000000000000000
$
$ lspv |grep hdisk3
hdisk3          00c276b008404975                    None                       
$

A PVID can also be removed again. However, the physical volume must not be in use for this (e.g. as part of a volume group).

To clear a PVID of a physical volume, the pv attribute can be set to the value clear:

# chdev -l hdisk3 -a pv=clear
hdisk3 changed
#

The PVID has been removed, as the following output shows:

$ lsattr -El hdisk3 -a pvid -F value
none
$
$ lspv |grep hdisk3
hdisk3          none                                None                       
$

Attempting to delete the PVID of a physical volume that is in use results in the following error message:

# chdev -l hdisk0 -a pv=clear
Method error (/usr/lib/methods/chgdisk):
        0514-062 Cannot perform the requested function because the
                 specified device is busy.
     pv    

#

Determining the Size of a Physical Volume

There are a number of different ways to determine the size of a physical volume (disk, LUN) under AIX.

If you have root privileges, you can use the bootinfo command with the “-s” (size) option:

# bootinfo -s hdisk0
51200
#

The size of the physical volume is given in MB, in the example 51,200 MB, i.e. 50 GB.

Without root privileges, the getconf command can be used. With this command, system-wide configuration parameters as well as device-specific variables can be displayed. The device-specific variable DISK_SIZE can be used to display the size of a physical volume. The physical volume in question is specified by the absolute path of the physical volume’s block or character device file:

$ getconf DISK_SIZE /dev/hdisk0
51200
$

Here, too, the size is given in MB.

Another option, which again requires root privileges, is to use the lsmpio command. The command offers the option “-q” (query) to display data about an MPIO storage device:

# lsmpio -ql hdisk0
Device:  hdisk0
…
           Capacity:  50.00GiB
…
#

This time, the size is displayed directly in GiB.

If the physical volume is part of a volume group, the lspv command can also be used to at least estimate the size:

$ lspv hdisk0
…
TOTAL PPs:          199 (50944 megabytes)    VG DESCRIPTORS:   2
…                                      
$

Only the area that can be used for data is specified here (50,944 MB); the physical volume itself is somewhat larger, since space is also needed for administrative information.
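
The usable size can also be calculated from the number of physical partitions and the PP size, both of which lspv displays. A sketch, assuming the usual lspv output layout with the lines “PP SIZE:” and “TOTAL PPs:”:

$ lspv hdisk0 | awk '/PP SIZE/ {pp=$3} /TOTAL PPs/ {pps=$3} END {print pp*pps " MB"}'
50944 MB
$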

LPAR tool 1.7.0.1 is now available

Version 1.7.0.1 of the LPAR tool is now available in our download area.

The new version supports the following new features, among others:

    • Installation of IFixes and updates on the HMC (hmc help updhmc)
    • System firmware updates (and more) of managed systems (ms help updatelic)
    • Display FLRT data with online query at IBM (hmc help flrt, ms help flrt, lpar help flrt)
    • Configuration of NTP on HMCs (hmc help ntp)

Versions for Linux, AIX and macOS are available.

All versions include a test license valid until September 30th, 2022.

So download, install and then try it out!

show_life_cycle: new URL for FLRT Lite data file

IBM has changed the URL for the FLRT Lite data file. The data file can no longer be obtained from the old URL

https://www14.software.ibm.com/support/customercare/flrt/liteTable

The new URL is:

https://esupport.ibm.com/customercare/flrt/liteTable

For users of our show_life_cycle script, we have made the updated version of the script with the new URL available in our download area.

(Many thanks to Lutz Leonhardt for the hint.)

What is the size of the internal log in JFS2

A trivial question we stumbled across recently:

How big is the internal JFS2 log currently?

The size of the internal JFS2 log must meet the following two conditions:

    1. The log cannot be larger than 10% of the file system size.
    2. The maximum size cannot exceed 2047 MB.

When a JFS2 filesystem with an internal log is created and no size is specified for the log (-a logsize=value), 0.4% of the filesystem size is used by default. This value is documented in the crfs manual page.
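
For illustration: when creating a file system, an inline log with an explicit size could be requested roughly as follows (a sketch, assuming the volume group datavg; logname=INLINE selects an internal log, logsize is specified in MB):

# crfs -v jfs2 -g datavg -m /data -A yes -a size=10G -a logname=INLINE -a logsize=64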

But how big is the internal JFS2 log right now?

This information is provided by the dumpfs command. It expects either the mount point of a JFS2 file system or the device file of the underlying logical volume as an argument. The command lists the superblock and additional control information. The output can be very long for larger file systems. Since we are only interested in the JFS2 log, it is advisable to filter the output using the grep command:

# dumpfs /data | grep -i log
aggregate block size    4096            log2 of aggregate block size    12
LVM I/O Transfer size   512             log2 of LVM transfer  size      9
log2 of block size/transfer size        3
Aggregate attributes    J2_GROUPCOMMIT J2_INLINELOG
log device      0x8000002700000001 log serial number    0x26
Inline Log: 541065216 (132096); 1024
fsck Service Log number of blocks: 50
Extendfs Inline Log Working Space: 541065216 (132096); 1024
#

The last value in the line “Inline Log:” indicates the size of the internal log in blocks. The block size of the file system can be found in the line “aggregate block size”. In our case, the internal log has a size of 1024 blocks of 4096 bytes each. This gives a size of 4 MB (1024 * 4 KB).
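
If you want the result in MB directly, the two values can be combined on one command line (a sketch, assuming the dumpfs output format shown above):

# dumpfs /data | awk '/^aggregate block size/ {bs=$4} /^Inline Log:/ {blocks=$NF} END {print blocks*bs/1048576 " MB"}'
4 MB
#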

If an external log is used, the output looks like this:

# dumpfs / | grep -i log
aggregate block size    4096            log2 of aggregate block size    12
LVM I/O Transfer size   512             log2 of LVM transfer  size      9
log2 of block size/transfer size        3
log device      0x8000000a00000003 log serial number    0xb
Inline Log: 0 (0); 0
fsck Service Log number of blocks: 50
Extendfs Inline Log Working Space: 0 (0); 0
#

The internal log has a size of 0 blocks.

However, this is not the easiest way. Chris Gibson points out the “-q” option of the lsfs command, which displays additional information for JFS and JFS2 file systems:

# lsfs -q /filesystem
Name            Nodename   Mount Pt               VFS   Size    Options    Auto Accounting
/dev/fslv01     --         /filesystem            jfs2  1048576 --         no   no
  (lv size: 1048576, fs size: 1048576, block size: 4096, sparse files: yes, inline log: yes, inline log size: 4, EAformat: v1, Quota: no, DMAPI: no, VIX: yes, EFS: no, ISNAPSHOT: no, MAXEXT: 0, MountGuard: no)
#

The size of the inline log is specified there directly in MB (inline log size: 4).

Determining the size of the internal JFS2 log is therefore no problem with the right command (dumpfs or lsfs)!

View IOS Version as normal User

On a virtual I/O server, the IOS version can be displayed as user padmin using the ioslevel command:

padmin> ioslevel
3.1.2.10
padmin>

As user root (after using oem_setup_env), the IOS version can be shown as follows:

# /usr/ios/cli/ioscli ioslevel
3.1.2.10
#

Neither command works for a normal, non-privileged user, however:

$ ioslevel
ksh: ioslevel: not found.
$ /usr/ios/cli/ioscli ioslevel
Access to run command is not valid.

$

The IOS version is simply stored in a text file and can be easily displayed as a normal user with the cat command:

$ cat /usr/ios/cli/ios.level
3.1.2.10
$

YUM with NIMHTTP

Starting with AIX 7.2, NIM supports the use of HTTP. The new NIM service handler nimhttp (port 4901) is available for this purpose. This makes it possible to provide YUM repositories on a NIM server with the help of this NIM service handler. To do this, the repositories must be stored under the document root (/export/nim by default). AIX clients can then access the repositories via HTTP on port number 4901.

The repositories must be configured on the AIX client under /opt/freeware/etc/yum/yum.conf or /opt/freeware/etc/yum/repos.d. All available YUM operations are supported in this way.

If nimhttp is already used on the NIM server, this does not result in any additional effort.

The following shows the configuration for using YUM with nimhttp.

The first requirement is that the NIM service handler nimhttp is active on the NIM server:

aixnim # lssrc -s nimhttp
Subsystem         Group            PID          Status
 nimhttp                           19136996     active
aixnim #

If nimhttp has not yet been activated, this can be done using the nimconfig command on the NIM server:

aixnim # nimconfig -h
0513-077 Subsystem has been changed.
0513-059 The nimhttp Subsystem has been started. Subsystem PID is 19136996.
aixnim #

Note: The configuration of nimhttp is shown elsewhere.

For test purposes, we create a small text file on the NIM server under /export/nim (document root):

aixnim # echo "testfile for nimhttp" >/export/nim/testfile
aixnim #

Next, we check the functionality on the NIM client by downloading the test file from the NIM server with the NIM client command nimhttp:

aix01 # nimhttp -f testfile -o dest=/tmp -v
nimhttp: (source)       testfile
nimhttp: (dest_dir)     /tmp
nimhttp: (verbose)      debug
nimhttp: (master_ip)    aixnim
nimhttp: (master_port)  4901

sending to master...
size= 46
pull_request= "GET /testfile HTTP/1.1
Connection: close

"
Writing 21 bytes of data to /tmp/testfile
Total size of datalen is 21. Content_length size is 21.
aix01 #

(The “-v” option provides the debugging output shown.)

The test file was saved under /tmp/testfile.

Another test with the command curl (available from the AIX toolbox) also shows that nimhttp can be used successfully to download data:

aix01 # curl http://aixnim:4901/testfile
testfile for nimhttp
aix01 #

The use of nimhttp to access YUM repositories should therefore be possible in principle.

We have copies of the AIX Toolbox repositories provided by IBM on our NIM server in the following directories:

/export/nim/aixtoolbox/RPMS/noarch     AIX_Toolbox_noarch (AIX noarch repository)
/export/nim/aixtoolbox/RPMS/ppc        AIX_Toolbox (AIX generic repository)
/export/nim/aixtoolbox/RPMS/ppc-7.1    AIX_Toolbox_71 (AIX 7.1 specific repository)
/export/nim/aixtoolbox/RPMS/ppc-7.2    AIX_Toolbox_72 (AIX 7.2 specific repository)

In order for an AIX client system to be able to access these repositories, they must be referenced in the YUM configuration. For the sake of simplicity, we have entered the repositories in the configuration file /opt/freeware/etc/yum/yum.conf:

aix01 # vi /opt/freeware/etc/yum/yum.conf
[main]
cachedir=/var/cache/yum
keepcache=1
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1

[AIX_Toolbox]
name=AIX generic repository
baseurl=http://aixnim:4901/aixtoolbox/RPMS/ppc/
enabled=1
gpgcheck=0

[AIX_Toolbox_noarch]
name=AIX noarch repository
baseurl=http://aixnim:4901/aixtoolbox/RPMS/noarch/
enabled=1
gpgcheck=0

[AIX_Toolbox_72]
name=AIX 7.2 specific repository
baseurl=http://aixnim:4901/aixtoolbox/RPMS/ppc-7.2/
enabled=1
gpgcheck=0

aix01 #

Alternatively, a separate repo file can be created for each repository under /opt/freeware/etc/yum/repos.d.
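
Such a repo file could look like this, for example (a sketch for the generic repository, using the hypothetical file name AIX_Toolbox.repo and the same entries as above):

aix01 # cat /opt/freeware/etc/yum/repos.d/AIX_Toolbox.repo
[AIX_Toolbox]
name=AIX generic repository
baseurl=http://aixnim:4901/aixtoolbox/RPMS/ppc/
enabled=1
gpgcheck=0
aix01 #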

The key entry is the baseurl attribute. The URL scheme used here is http. The host name of the NIM server is followed, separated by a colon, by the port number of nimhttp (4901). The path is then relative to /export/nim (the document root) on the NIM server.

A list of the available YUM repositories printed by the command “yum repolist” shows the expected repositories:

aix01 # yum repolist
repo id                                     repo name                                             status
AIX_Toolbox                                 AIX generic repository                                2740
AIX_Toolbox_72                              AIX 7.2 specific repository                            417
AIX_Toolbox_noarch                          AIX noarch repository                                  301
repolist: 3458
aix01 #

To demonstrate that installing RPMs in this way with nimhttp is also possible, we show the installation of wget:

aix01 # yum install wget
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package wget.ppc 0:1.21.1-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================================
Package              Arch                Version                     Repository                   Size
========================================================================================================
Installing:
wget                 ppc                 1.21.1-1                    AIX_Toolbox                 703 k

Transaction Summary
========================================================================================================
Install       1 Package

Total size: 703 k
Installed size: 1.4 M
Is this ok [y/N]: y
Downloading Packages:
Running Transaction Check
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : wget-1.21.1-1.ppc                                                                    1/1
From wget-1.21.1 onwards, symbolic link of wget in /usr/bin is removed.
The binary is shipped in /opt/freeware/bin. Please use absolute path or
add /opt/freeware/bin in PATH environment variable to use the binary.

Installed:
  wget.ppc 0:1.21.1-1                                                                                  

Complete!

aix01 #

The AIX system does not necessarily have to be a NIM client, as YUM does not use NIM; YUM only uses the HTTP server provided by the NIM master. The AIX version is also irrelevant; the AIX client can run on AIX 7.1 or AIX 7.2.

Note: For AIX 7.3, the Dandified YUM (DNF) is used instead of YUM.

Virtual Network Interface Controller (vNIC)

Figure: vNIC adapter with two vNIC backing devices and vNIC failover.

The big disadvantage of SR-IOV, as described above, is that LPARs with logical SR-IOV ports cannot be moved (LPM). After the introduction of SR-IOV on POWER systems, a number of workarounds were suggested. However, all of these workarounds require both a special configuration and a number of reconfigurations before and after each LPM operation. In everyday practice, this unnecessarily complicates LPM operations.

With the introduction of vNICs, client LPARs can use SR-IOV adapters and still support LPM. As with VSCSI and VFC, a pair of adapters is used for this purpose: the so-called vNIC adapter is used in a virtual slot on the client LPAR and an associated vNIC server adapter is used on a virtual I/O server. The logical SR-IOV port is assigned to the virtual I/O server. The vNIC server adapter, also known as the vNIC backing device, serves as a proxy for the logical SR-IOV port. The interaction of the various adapters is shown in figure 7.19.

Figure 7.19: Communication path of vNIC for control information and data.

In order to achieve good performance, only control information is transmitted from the client's vNIC adapter to the vNIC server adapter on the virtual I/O server; the vNIC server adapter passes this on, via the associated logical SR-IOV port (ent adapter), to the corresponding logical port (virtual function) of the SR-IOV adapter. The data itself is transferred between the vNIC client adapter and the logical port of the SR-IOV adapter via DMA (Direct Memory Access) with the help of the hypervisor; in particular, the data is not copied through the virtual I/O server. The vNIC adapter on the client is a purely virtual adapter, so LPM works without any problems. The client does not own the logical SR-IOV port and does not itself access it via the PCIe bus (switch).
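
On the client LPAR, a vNIC shows up as a regular entX Ethernet device. A minimal sketch for identifying it (the device name ent1 and the exact description text are assumptions and may vary with the AIX level):

# lsdev -Cc adapter | grep -i vnic
ent1    Available    Virtual NIC Client Adapter (vnic)
#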

Shared Ethernet Adapter

Despite SR-IOV and vNIC, shared Ethernet is still the most widely used virtualization solution when it comes to virtualizing Ethernet. The POWER Hypervisor implements internal, IEEE 802.1Q-compatible virtual network switches, which, in conjunction with so-called Shared Ethernet Adapters (SEAs for short), provide the connection to external networks. The shared Ethernet adapters are implemented in software by the virtual I/O server, as a layer-2 bridge.

As shown in figure 8.2, a shared Ethernet adapter can have several so-called trunking adapters. The SEA shown has three trunking adapters, ent8, ent9 and ent10, all of which are connected to the virtual switch named ETHMGMT. In the case shown, all trunking adapters support VLAN tagging; in addition to their port VLAN IDs (PVIDs), the three trunking adapters serve further VLANs via VLAN tagging. Besides the connection to the virtual switch via the trunking adapters, the SEA also has a connection to an external network via the physical network adapter (ent0). Network packets from client LPARs to external systems reach the SEA via one of the trunking adapters and are then forwarded to the external network via the associated physical network adapter. Network packets from external systems to client LPARs are forwarded by the SEA via the trunking adapter with the matching VLAN to the virtual switch, which then delivers the packets to the client LPAR.

Figure 8.2: SEA with multiple trunking adapters and VLANs.

In the simplest case, a SEA consists of just one trunking adapter. A SEA can have up to 16 trunking adapters, and each trunking adapter can have up to 20 additional VLANs in addition to its port VLAN ID.
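
A SEA like the one in figure 8.2 is created on the virtual I/O server with the mkvdev command. A minimal sketch, assuming the adapter names from the figure (ent0 as physical adapter, ent8, ent9 and ent10 as trunking adapters, ent8 as default adapter, default PVID 1):

padmin> mkvdev -sea ent0 -vadapter ent8,ent9,ent10 -default ent8 -defaultid 1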

Which SEAs already exist on a virtual I/O server can be found out with the help of the command “vios lssea” (list SEAs):

$ vios lssea ms05-vio1
                                       TIMES   TIMES    TIMES    BRIDGE 
NAME   HA_MODE  PRIORITY  STATE       PRIMARY  BACKUP  FLIPFLOP  MODE
ent33  Sharing  1         PRIMARY_SH  1        1       0         Partial
ent34  Sharing  1         PRIMARY_SH  1        1       0         Partial
$

Some basic information is displayed for each SEA, such as the HA mode (more on this later), the priority of the SEA, and how often the SEA has already been primary or backup.