show_life_cycle: new URL for FLRT Lite data file

IBM has changed the URL for the FLRT Lite data file. The data file can no longer be obtained from the old URL

https://www14.software.ibm.com/support/customercare/flrt/liteTable

The new URL is:

https://esupport.ibm.com/customercare/flrt/liteTable

For users of our show_life_cycle script, we have made the updated version of the script with the new URL available in our download area.

(Many thanks to Lutz Leonhardt for the hint.)

What is the size of the internal log in JFS2

A trivial question we stumbled across recently:

How big is the internal JFS2 log currently?

The size of the internal JFS2 log must meet the following two conditions:

    1. The log cannot be larger than 10% of the file system size.
    2. The maximum size cannot exceed 2047 MB.

When creating a JFS2 file system with an internal log, if no size is specified for the log (-a logsize=value), 0.4% of the file system size is used by default. The value 0.4% is documented in the crfs manual page.
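
If a different log size is desired, it can be specified directly when creating the file system. A minimal sketch, assuming a volume group datavg and mount point /data (logsize is given in MB):

# crfs -v jfs2 -g datavg -m /data -a size=10G -a logname=INLINE -a logsize=64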

But how big is the internal JFS2 log right now?

This information is provided by the dumpfs command. It expects either the mount point of a JFS2 file system or the device file of the underlying logical volume as an argument. The command lists the superblock and additional control information. The output can be very long for larger file systems. Since we are only interested in the JFS2 log, it is advisable to filter the output using the grep command:

# dumpfs /data | grep -i log
aggregate block size    4096            log2 of aggregate block size    12
LVM I/O Transfer size   512             log2 of LVM transfer  size      9
log2 of block size/transfer size        3
Aggregate attributes    J2_GROUPCOMMIT J2_INLINELOG
log device      0x8000002700000001 log serial number    0x26
Inline Log: 541065216 (132096); 1024
fsck Service Log number of blocks: 50
Extendfs Inline Log Working Space: 541065216 (132096); 1024
#

The last value in the line “Inline Log:” indicates the size of the internal log in blocks. The block size of the file system can be found in the line “aggregate block size”. In our case, the internal log has a size of 1024 blocks, each with 4096 bytes. This gives a size of 4 MB (1024 * 4 KB).
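
If you want the result in MB directly, both values can be combined in a small one-liner (a sketch based on the output format shown above):

# dumpfs /data | awk '/^aggregate block size/ {bs=$4} /^Inline Log:/ {print $NF*bs/1048576, "MB"}'
4 MB
#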

If an external log is used, the output looks like this:

# dumpfs / | grep -i log
aggregate block size    4096            log2 of aggregate block size    12
LVM I/O Transfer size   512             log2 of LVM transfer  size      9
log2 of block size/transfer size        3
log device      0x8000000a00000003 log serial number    0xb
Inline Log: 0 (0); 0
fsck Service Log number of blocks: 50
Extendfs Inline Log Working Space: 0 (0); 0
#

The inline log has a size of 0 blocks, i.e. the file system uses an external log device instead.

However, this is not the easiest way. Chris Gibson points out the “-q” option of the lsfs command, which displays additional information for JFS and JFS2 file systems:

# lsfs -q /filesystem
Name            Nodename   Mount Pt               VFS   Size    Options    Auto Accounting
/dev/fslv01     --         /filesystem            jfs2  1048576 --         no   no
  (lv size: 1048576, fs size: 1048576, block size: 4096, sparse files: yes, inline log: yes, inline log size: 4, EAformat: v1, Quota: no, DMAPI: no, VIX: yes, EFS: no, ISNAPSHOT: no, MAXEXT: 0, MountGuard: no)
#

The size of the inline log is specified there directly in MB (inline log size: 4).

Determining the size of the internal JFS2 log is therefore no problem with the right command (dumpfs or lsfs)!

YUM with NIMHTTP

Starting with AIX 7.2, NIM supports the use of HTTP. The new NIM service handler nimhttp (port 4901) is available for this purpose. This makes it possible to serve YUM repositories from a NIM server with the help of this NIM service handler. To do this, the repositories must be stored under the document root (/export/nim by default). AIX clients can then access the repositories via HTTP on port number 4901.

The repositories must be configured on the AIX client under /opt/freeware/etc/yum/yum.conf or /opt/freeware/etc/yum/repos.d. All available YUM operations are supported in this way.

If nimhttp is already in use on the NIM server, this requires no additional effort.

The following shows the configuration for using YUM with nimhttp.

The first requirement is that the NIM service handler nimhttp is active on the NIM server:

aixnim # lssrc -s nimhttp
Subsystem         Group            PID          Status
 nimhttp                           19136996     active
aixnim #

If nimhttp has not yet been activated, this can be done using the nimconfig command on the NIM server:

aixnim # nimconfig -h
0513-077 Subsystem has been changed.
0513-059 The nimhttp Subsystem has been started. Subsystem PID is 19136996.
aixnim #

Note: The configuration of nimhttp is shown elsewhere.

For test purposes, we create a small text file on the NIM server under /export/nim (document root):

aixnim # echo "testfile for nimhttp" >/export/nim/testfile
aixnim #

Next, we check the functionality on the NIM client by downloading the test file from the NIM server with the NIM client command nimhttp:

aix01 # nimhttp -f testfile -o dest=/tmp -v
nimhttp: (source)       testfile
nimhttp: (dest_dir)     /tmp
nimhttp: (verbose)      debug
nimhttp: (master_ip)    aixnim
nimhttp: (master_port)  4901

sending to master...
size= 46
pull_request= "GET /testfile HTTP/1.1
Connection: close

"
Writing 21 bytes of data to /tmp/testfile
Total size of datalen is 21. Content_length size is 21.
aix01 #

(The ‘-v’ option produces the debug output shown.)

The test file was saved under /tmp/testfile.

Another test with the command curl (available from the AIX toolbox) also shows that nimhttp can be used successfully to download data:

aix01 # curl http://aixnim:4901/testfile
testfile for nimhttp
aix01 #

The use of nimhttp to access YUM repositories should therefore be possible in principle.

We have copies of the AIX Toolbox repositories provided by IBM on our NIM server in the following directories:

/export/nim/aixtoolbox/RPMS/noarch      AIX_Toolbox_noarch (AIX noarch repository)
/export/nim/aixtoolbox/RPMS/ppc         AIX_Toolbox (AIX generic repository)
/export/nim/aixtoolbox/RPMS/ppc-7.1     AIX_Toolbox_71 (AIX 7.1 specific repository)
/export/nim/aixtoolbox/RPMS/ppc-7.2     AIX_Toolbox_72 (AIX 7.2 specific repository)

In order for an AIX client system to be able to access these repositories, they must be referenced in the YUM configuration. For the sake of simplicity, we have entered the repositories in the configuration file /opt/freeware/etc/yum/yum.conf:

aix01 # vi /opt/freeware/etc/yum/yum.conf
[main]
cachedir=/var/cache/yum
keepcache=1
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1

[AIX_Toolbox]
name=AIX generic repository
baseurl=http://aixnim:4901/aixtoolbox/RPMS/ppc/
enabled=1
gpgcheck=0

[AIX_Toolbox_noarch]
name=AIX noarch repository
baseurl=http://aixnim:4901/aixtoolbox/RPMS/noarch/
enabled=1
gpgcheck=0

[AIX_Toolbox_72]
name=AIX 7.2 specific repository
baseurl=http://aixnim:4901/aixtoolbox/RPMS/ppc-7.2/
enabled=1
gpgcheck=0

aix01 #

Alternatively, a separate repo file can be created for each repository under /opt/freeware/etc/yum/repos.d.
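
A minimal sketch of such a repo file, with the same content as the [AIX_Toolbox] section above (the file name is arbitrary):

aix01 # cat /opt/freeware/etc/yum/repos.d/AIX_Toolbox.repo
[AIX_Toolbox]
name=AIX generic repository
baseurl=http://aixnim:4901/aixtoolbox/RPMS/ppc/
enabled=1
gpgcheck=0
aix01 #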

The key entry is the baseurl attribute. The URL uses HTTP. The host name of the NIM server is followed by the nimhttp port number (4901), separated by a colon. The path is then relative to the document root (/export/nim) on the NIM server.

A list of the available YUM repositories printed by the command “yum repolist” shows the expected repositories:

aix01 # yum repolist
repo id                                     repo name                                             status
AIX_Toolbox                                 AIX generic repository                                2740
AIX_Toolbox_72                              AIX 7.2 specific repository                            417
AIX_Toolbox_noarch                          AIX noarch repository                                  301
repolist: 3458
aix01 #

To demonstrate that installing RPMs in this way with nimhttp is also possible, we show the installation of wget:

aix01 # yum install wget
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package wget.ppc 0:1.21.1-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================================
Package              Arch                Version                     Repository                   Size
========================================================================================================
Installing:
wget                 ppc                 1.21.1-1                    AIX_Toolbox                 703 k

Transaction Summary
========================================================================================================
Install       1 Package

Total size: 703 k
Installed size: 1.4 M
Is this ok [y/N]: y
Downloading Packages:
Running Transaction Check
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : wget-1.21.1-1.ppc                                                                    1/1
From wget-1.21.1 onwards, symbolic link of wget in /usr/bin is removed.
The binary is shipped in /opt/freeware/bin. Please use absolute path or
add /opt/freeware/bin in PATH environment variable to use the binary.

Installed:
  wget.ppc 0:1.21.1-1                                                                                  

Complete!

aix01 #

The AIX system does not necessarily have to be a NIM client, as YUM does not use NIM; it only uses the HTTP server provided by the NIM master. The AIX version is also irrelevant: the AIX client can run on AIX 7.1 or AIX 7.2.

Note: For AIX 7.3, the Dandified YUM (DNF) is used instead of YUM.

oslevel -s does not show latest AIX version

The command “oslevel -s” shows the highest fully installed service pack on an AIX system, e.g.:

$ oslevel -s
7100-05-05-1939
$

However, a higher service pack can easily be partially (but not completely) installed. The easiest way to check this is with “oslevel -qs”:

$ oslevel -qs
Known Service Packs
-------------------
7100-05-06-2028
7100-05-06-2016
7100-05-06-2015
7100-05-05-1939
7100-05-05-1938
7100-05-05-1937
7100-05-04-1914
7100-05-04-1913
7100-05-03-1846
7100-05-03-1838
…
$

The output shows that 7100-05-06-2028 is installed. However, some filesets have not been updated to 7100-05-06-2028, which is why “oslevel -s” displays a lower version.

If you want to know which filesets are lower than 7100-05-06-2028, “oslevel -s -l” can be used:

$ oslevel -s -l 7100-05-06-2028
Fileset                                 Actual Level       Service Pack Level
-----------------------------------------------------------------------------
openssl.base                            1.0.2.1801         1.0.2.2002    
openssl.license                         1.0.2.1801         1.0.2.2002    
openssl.man.en_US                       1.0.2.1801         1.0.2.2002    
$

Here, the OpenSSL filesets are still installed at an older level (1.0.2.1801) and would have to be updated to 1.0.2.2002 so that service pack 7100-05-06-2028 is completely installed.
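
The update itself can then be performed, for example, from a directory containing the service pack filesets. A sketch, assuming the updates have been downloaded to /tmp/updates:

# install_all_updates -d /tmp/updates -Y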

0516-404 allocp: This system cannot fulfill the allocation request

When increasing the size of logical volumes or file systems, the error “0516-404 allocp: This system cannot fulfill the allocation request” often occurs. Many AIX administrators are at first at a loss as to the cause of the problem: is it due to max LPs or upper bound, the chosen strictness, or something else entirely? Unfortunately, the error message is generic and does not say what the real problem is. A good way to get to the bottom of the problem is the lvmt alog, in which all LVM actions, in particular the low-level commands including their error messages, are logged.
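
The log can be displayed with the alog command (output omitted here):

# alog -ot lvmt | more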

Our article Troubleshooting problems with extendlv and file system extension uses a series of examples to show how the cause of the problem can be found in many cases.

0516-787 extendlv: Maximum allocation for logical volume lv01 is 512

If the following message occurs while expanding a logical volume or file system:

# chfs -a size=+5G /fs01
0516-787 extendlv: Maximum allocation for logical volume fslv01
        is 512.
#

Then the cause is a limitation of the logical volume. The size of a logical volume is limited by the maximum number of LPs (logical partitions) that can be allocated for the logical volume. The error message says exactly that and even gives the current value for the maximum number of LPs. This is a changeable attribute of the logical volume and can be displayed with the lslv command:

$ lslv fslv01
...
MAX LPs:            512                    PP SIZE:        8 megabyte(s)
...
$

The attribute can be changed with the command chlv:

# chlv -x 768 fslv01
#

With a PP (physical partition) size of 8 MB, 768 LPs correspond to exactly 6 GB (768 * 8 MB). Of course, you could anticipate the next expansion and set the value correspondingly higher.

If there are enough free PPs in the underlying volume group and no other limits are exceeded, the file system or logical volume can now be expanded:

# chfs -a size=+5G /fs01
Filesystem size changed to 12582912
Inlinelog size changed to 24 MB.
#

secldapclntd: LDAP failed to bind to server

We recently had a minor outage on one of our systems: users could no longer log in or use a web GUI. The problem occurred sporadically and then disappeared again. During these times, there were many messages of the following type in the syslog:

Mar  4 07:56:05 aix01 daemon:err|error secldapclntd: LDAP failed to bind to server aixldapp11.
Mar  4 07:56:05 aix01 daemon:err|error secldapclntd: LDAP failed to bind to server aixldapp12.
Mar  4 07:56:10 aix01 daemon:err|error secldapclntd: LDAP failed to bind to server aixldapp11.
Mar  4 07:56:10 aix01 daemon:err|error secldapclntd: LDAP failed to bind to server aixldapp12.
...

These messages indicated that both LDAP servers were temporarily unavailable. Investigations of the logs on the two LDAP servers did not reveal any connection attempts that were rejected. An examination of the firewall logs also showed that the firewall did not block any packets to the LDAP servers.

We then opened a case at IBM and promptly received instructions back for setting up special LDAP tracing. However, the problem then no longer occurred and therefore the tracing could not determine the cause.

The system in question can be reached via the Internet and therefore the number of ephemeral ports has been restricted for security reasons. Only about 1500 ephemeral ports are allowed through the firewall and configured on AIX (tcp_ephemeral_high and tcp_ephemeral_low). We then simulated on a test system what happens when the available ephemeral ports are exhausted. The above messages came immediately as soon as the LDAP client tried to open a new connection.
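
The configured range can be checked with the no command. A brief sketch (the values shown are the AIX defaults; on the restricted system described above, the difference between the two tunables was only about 1500):

# no -o tcp_ephemeral_low -o tcp_ephemeral_high
tcp_ephemeral_low = 32768
tcp_ephemeral_high = 65535
#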

One reason for the LDAP client error message “LDAP failed to bind to server” can be the unavailability of ephemeral ports!

The “label” Attribute for FC Adapters

As of AIX 7.2 TL4 and VIOS 3.1.1.10, there is a new attribute “label” for physical FC adapters. The administrator can set this attribute to an arbitrary character string (maximum 255 characters). Although the attribute is purely informational, it can be extremely useful in PowerVM virtualization environments. With a large number of managed systems, it is not always obvious to which FC fabric a particular FC port is connected. This can of course be looked up in the documentation of your systems, but that involves a certain amount of effort. It is easier to attach this information directly to the FC adapters, which is exactly what the new “label” attribute allows in a simple way. On AIX:

# chdev -l fcs0 -U -a label="Fabric_1"
fcs0 changed
# lsattr -El fcs0 -a label -F value
Fabric_1
#

On virtual I/O servers, the attribute can also be set using the padmin account:

/home/padmin> chdev -dev fcs1 -attr label="Fabric_2" -perm
fcs1 changed
/home/padmin> lsdev -dev fcs1 -attr label                
value

Fabric_2
/home/padmin>

The attribute is also defined for older FC adapters.

If the “label” attribute is used consistently, it is always possible to determine online, for each FC adapter, to which fabric the adapter is connected. This information only needs to be stored once for each FC adapter.
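
For example, a small shell loop displays the label of every FC adapter at a glance (a sketch; the output shown is illustrative, following the examples above):

# for a in $(lsdev -Cc adapter -F name | grep '^fcs'); do
>   echo "$a: $(lsattr -El $a -a label -F value)"
> done
fcs0: Fabric_1
fcs1: Fabric_2
#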

(Note: The “label” attribute is not implemented for AIX 7.1, at least not until 7.1 TL5 SP6.)

Correcting “wrong” LVM mirroring

In Part III of our article series “AIX LVM: Mechanics of migratepv” we show how incorrect mirroring of logical volumes can be corrected online. All that is required is the migratelp and migratepv commands and a good understanding of how these commands work.

Here are the links to the articles in the series:

AIX LVM: Mechanics of migratepv (Part I)

AIX LVM: Mechanics of migratepv (Part II)

AIX LVM: Mechanics of migratepv (Part III)

Error: Mirror pools must be defined

The following error message occurred while creating a mirrored logical volume:

# mklv -c 2 -t jfs2 datavg01 10
0516-1814 lcreatelv: Mirror pools must be defined for each copy when strict mirror
        pools are enabled.
0516-822 mklv: Unable to create logical volume.
#

The reason for this is that the mirror pool strictness is set for the volume group. The article Mirror Pools: Understanding Mirror Pool Strictness examines and explains this in more detail.
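
With mirror pool strictness enabled, mklv must be given a mirror pool for each copy via the -p option. A minimal sketch, assuming mirror pools mp01 and mp02 have already been defined in datavg01:

# mklv -c 2 -t jfs2 -p copy1=mp01 -p copy2=mp02 datavg01 10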