Example: Installation of a new Power10 VIOS with an ISO image on the HMC
A new Virtual I/O Server, ms11-vio1, is to be installed on a new Power10 managed system (S1022). The LPAR for the Virtual I/O Server was created with the following configuration (profile: standard):
The Virtual I/O Server was created with 2 dedicated processors:
LPAR-Tool
$ lpar -p standard lsproc ms11-vio1
           PROC  PROC     PROCS              PROC_UNITS                          UNCAP   PROC
LPAR_NAME  MODE  COMPAT   MIN  DESIRED  MAX  MIN  DESIRED  MAX  SHARING_MODE     WEIGHT  POOL
ms11-vio1  ded   default  1    2        4    -    -        -    keep_idle_procs  -       -
$
The size of the main memory was chosen to be 16 GB:
LPAR-Tool
$ lpar -p standard lsmem ms11-vio1
           MEMORY       MEMORY                       HUGE_PAGES
LPAR_NAME  MODE    AME  MIN      DESIRED   MAX       MIN  DESIRED  MAX
ms11-vio1  ded     -    4.00 GB  16.00 GB  32.00 GB  -    -        -
$
Two PCIe3 4-port 16 Gb FC adapters and two NVMe SSDs were assigned to the Virtual I/O Server:
LPAR-Tool
$ lpar -p standard lsslot ms11-vio1
DRC_NAME                 DRC_INDEX  REQ  IOPOOL  DESCRIPTION
U78DA.ND0.WZS163D-P1-C2  21030212   No   none    1.6TB NVMe Gen4 U.2 SSD II
U78DA.ND0.WZS163D-P1-C3  21040213   No   none    1.6TB NVMe Gen4 U.2 SSD II
U78DA.ND0.WZS163D-P0-C3  21010038   No   none    PCIe3 4-Port 16Gb FC Adapter
U78DA.ND0.WZS163D-P0-C1  21010041   No   none    PCIe3 4-Port 16Gb FC Adapter
$
A logical SR-IOV port was configured for the connection to the network:
LPAR-Tool
$ lpar -p standard lssriov ms11-vio1
LPORT  ADAPTER  PPORT  CONFIG_ID  CAPACITY  MAX_CAPACITY  PVID  VLANS  MAC_ADDR  CLIENTS
-      1        0      0          10.0      100.0         0     all    -         -
$
On the HMC CLI, the above information can be displayed with just one command:
HMC-CLI
hscroot@hmc01:~> lssyscfg -r prof -m ms11 --filter profile_names=standard
name=standard,lpar_name=ms11-vio1,lpar_id=1,lpar_env=vioserver,all_resources=0,min_mem=4096,desired_mem=16384,max_mem=32768,mem_mode=ded,hpt_ratio=1:128,ppt_ratio=1:4096,proc_mode=ded,min_procs=1,desired_procs=2,max_procs=4,sharing_mode=keep_idle_procs,affinity_group_id=none,"io_slots=21010041/none/0,21030212/none/0,21040213/none/0,21010038/none/0",lpar_io_pool_ids=none,max_virtual_slots=300,"virtual_serial_adapters=0/server/1/any//any/1,1/server/1/any//any/1",virtual_scsi_adapters=none,virtual_eth_adapters=none,virtual_eth_vsi_profiles=none,virtual_fc_adapters=none,vnic_adapters=none,vtpm_adapters=none,boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,virtual_vasi_adapters=none,lpar_proc_compat_mode=default,sriov_eth_logical_ports=config_id=0:adapter_id=1:phys_port_id=0:logical_port_id=:diag_mode=0:huge_dma_window_mode=0:promisc_mode=1:allowed_vlan_ids=all:mac_addr=:allowed_os_mac_addrs=all:port_vlan_id=0:pvid_priority=0:capacity=10.0:max_capacity=100.0:allowed_priorities=none,sriov_roce_logical_ports=none
hscroot@hmc01:~>
The IP information for installation is as follows:
Client-IP (VIOS): 172.16.107.45
Netmask: 255.255.255.0
Gateway: 172.16.107.1
VLAN: 100
Since VLAN 100 does not match the port VLAN ID (PVID) of the logical SR-IOV port (see above), the VLAN must be specified explicitly for the installation using the vlan_tag attribute (vlan_tag=100).
The installation is to be carried out using an installation image stored on one of the HMCs. Therefore, we first display the available images:
LPAR-Tool
$ vios lsviosimg ms11-vio1
NAME           HMC    SIZE     IMAGE_FILES
VIOS_3.1.4.30  hmc01  6063.53  dvdimage.v1.iso,dvdimage.v2.iso
VIOS_4.1.0.0   hmc01  3487.68  dvdimage.v1.iso
VIOS_4.1.0.0   hmc02  3487.68  dvdimage.v1.iso
VIOS_3.1.4.30  hmc02  6063.53  dvdimage.v1.iso,dvdimage.v2.iso
$
HMC-CLI
hscroot@hmc01:~> lsviosimg
name=VIOS_3.1.4.30,"image_files=dvdimage.v1.iso,dvdimage.v2.iso",size=6063.53
name=VIOS_4.1.0.0,image_files=dvdimage.v1.iso,size=3487.68
hscroot@hmc01:~>
hscroot@hmc02:~> lsviosimg
name=VIOS_4.1.0.0,image_files=dvdimage.v1.iso,size=3487.68
name=VIOS_3.1.4.30,"image_files=dvdimage.v1.iso,dvdimage.v2.iso",size=6063.53
hscroot@hmc02:~>
In this case, all images are stored on both HMCs (here hmc01 and hmc02), so it does not really matter which of the two HMCs is used for the installation. If desired, one of the two HMCs can be selected using the “-h” option. If no HMC is explicitly specified, the LPAR tool chooses one itself.
For images stored on an HMC, you can simply specify the name of the image during installation. We will use VIOS_4.1.0.0 for the installation.
In principle, the installation can now be started:
LPAR-Tool
$ vios installios ms11-vio1 172.16.107.45 255.255.255.0 172.16.107.1 VIOS_4.1.0.0 vlan_tag=100
…
HMC-CLI
hscroot@hmc01:~> installios -s ms11 -p ms11-vio1 -r standard -i 172.16.107.45 -g 172.16.107.1 -S 255.255.255.0 -d /extra/viosimages/VIOS_4.1.0.0/dvdimage.v1.iso -V 100
…
Since no profile or HMC was specified, the LPAR tool selects the active profile (or the default profile) and an HMC. When using the HMC CLI, you have to select both yourself.
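If the HMC (or the profile) is to be given explicitly with the LPAR tool, the “-h” (or “-p”) option can be used. A sketch under the assumption that, as with “-p” in the earlier examples, the option is placed directly after the command name (see the tool's help for the exact syntax):
LPAR-Tool
$ vios -h hmc02 installios ms11-vio1 172.16.107.45 255.255.255.0 172.16.107.1 VIOS_4.1.0.0 vlan_tag=100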
Automatically determining the correct network interface on the HMC can take a long time, and in some cases the installation aborts because no interface was found. We therefore recommend explicitly specifying the HMC interface to be used. The command “hmc lsnet” makes it easy to find the right interface:
LPAR-Tool
$ hmc lsnet hmc01
INTER  IPV4           IPV4             IPV4  DHCP    DHCP                 JUMBO
FACE   ADDR           NETMASK          DHCP  SERVER  SERVERRANGE          FRAME  SPEED  DUPLEX  TSO
eth0   172.16.107.91  255.255.255.0    off   off     -                    off    auto   auto    -
eth1   10.0.0.1       255.255.255.0    off   on      10.0.0.2,10.0.0.254  off    auto   auto    -
eth2   0.0.0.0        255.255.255.0    off   off     -                    off    auto   auto    -
eth3   0.0.0.0        255.255.255.0    off   off     -                    off    auto   auto    -
eth4   0.0.0.0        255.255.255.255  off   off     -                    off    auto   auto    -
eth5   0.0.0.0        255.255.255.255  off   off     -                    off    auto   auto    -
$
HMC-CLI
hscroot@hmc01:~> lshmc -n
hostname=hmc01,domain=,description=,"ipaddr=172.16.107.91,10.0.0.1,0.0.0.0,0.0.0.0,0.0.0.0,0.0.0.0","networkmask=255.255.255.0,255.255.255.0,255.255.255.0,255.255.255.0,255.255.255.255,255.255.255.255",gateway=172.16.107.1,nameserver=,dns=enabled,domainsuffix=gdl.mex.ibm.com,slipipaddr=10.253.0.1,slipnetmask=255.255.0.0,"ipaddrlpar=172.16.107.91,10.0.0.1","networkmasklpar=255.255.255.0,255.255.255.0","clients=10.0.0.19,10.0.0.20,10.0.0.13,10.0.0.9,10.0.0.18,10.0.0.246,10.0.0.16,10.0.0.23,10.0.0.24,10.0.0.25,10.0.0.90,10.0.0.239,10.0.0.12,10.0.0.8,10.0.0.15,10.0.0.3,10.0.0.254,10.0.0.240,10.0.0.21,10.0.0.22,10.0.0.17,10.0.0.122,10.0.0.141,10.0.0.70,10.0.0.96,10.0.0.177,10.0.0.99,10.0.0.159,10.0.0.79,10.0.0.200",ipv6addrlpar=,ipv4addr_eth0=172.16.107.91,ipv4netmask_eth0=255.255.255.0,ipv4dhcp_eth0=off,dhcpserver_eth0=off,ipv6addr_eth0=,ipv6auto_eth0=off,ipv6privacy_eth0=off,ipv6dhcp_eth0=off,lparcomm_eth0=off,jumboframe_eth0=off,speed_eth0=auto,duplex_eth0=auto,tso_eth0=,ipv4addr_eth1=10.0.0.1,ipv4netmask_eth1=255.255.255.0,ipv4dhcp_eth1=off,dhcpserver_eth1=on,"dhcpserverrange_eth1=10.0.0.2,10.0.0.254",ipv6addr_eth1=,ipv6auto_eth1=off,ipv6privacy_eth1=off,ipv6dhcp_eth1=off,lparcomm_eth1=off,jumboframe_eth1=off,speed_eth1=auto,duplex_eth1=auto,tso_eth1=,ipv4addr_eth2=0.0.0.0,ipv4netmask_eth2=255.255.255.0,ipv4dhcp_eth2=off,dhcpserver_eth2=off,ipv6addr_eth2=,ipv6auto_eth2=off,ipv6privacy_eth2=off,ipv6dhcp_eth2=off,lparcomm_eth2=off,jumboframe_eth2=off,speed_eth2=auto,duplex_eth2=auto,tso_eth2=,ipv4addr_eth3=0.0.0.0,ipv4netmask_eth3=255.255.255.0,ipv4dhcp_eth3=off,dhcpserver_eth3=off,ipv6addr_eth3=,ipv6auto_eth3=off,ipv6privacy_eth3=off,ipv6dhcp_eth3=off,lparcomm_eth3=off,jumboframe_eth3=off,speed_eth3=auto,duplex_eth3=auto,tso_eth3=,ipv4addr_eth4=0.0.0.0,ipv4netmask_eth4=255.255.255.255,ipv4dhcp_eth4=off,dhcpserver_eth4=off,ipv6addr_eth4=,ipv6auto_eth4=off,ipv6privacy_eth4=off,ipv6dhcp_eth4=off,lparcomm_eth4=off,jumboframe_eth4=off,speed_eth4=auto,duplex_eth4=auto,tso_eth4=,ipv4addr_eth5=0.0.0.0,ipv4netmask_eth5=255.255.255.255,ipv4dhcp_eth5=off,dhcpserver_eth5=off,ipv6addr_eth5=,ipv6auto_eth5=off,ipv6privacy_eth5=off,ipv6dhcp_eth5=off,lparcomm_eth5=off,jumboframe_eth5=off,speed_eth5=auto,duplex_eth5=auto,tso_eth5=
hscroot@hmc01:~>
In our case, the public interface eth0 is even in the same network as the client IP address of the Virtual I/O Server.
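Incidentally, the rather unwieldy lshmc output can be reduced to individual attributes with the “-F” option; the attribute names correspond to those in the full output above. Shown here, as an illustration, for the IPv4 address of eth0:
HMC-CLI
hscroot@hmc01:~> lshmc -n -F ipv4addr_eth0
172.16.107.91
hscroot@hmc01:~>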
For the installation we also need the MAC address of the Virtual I/O Server's installation interface. To determine it, the LPAR must first be activated briefly:
LPAR-Tool
$ lpar -p standard activate -b sms ms11-vio1
$
HMC-CLI
hscroot@hmc01:~> chsysstate -m ms11 -r lpar -o on -n ms11-vio1 -b sms -f standard
hscroot@hmc01:~>
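The LPAR can simply be left at the SMS menu; as the installation log further below shows, installios powers the partition off and on again itself. Should you nevertheless want to power it off manually after reading the MAC address, the standard HMC CLI command can be used:
HMC-CLI
hscroot@hmc01:~> chsysstate -m ms11 -r lpar -o shutdown --immed -n ms11-vio1
hscroot@hmc01:~>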
The logical SR-IOV ports of the Virtual I/O Server, together with their MAC addresses, can then be displayed:
LPAR-Tool
$ lpar lssriov ms11-vio1
LPORT     REQ  ADAPTER  PPORT  CONFIG_ID  CAPACITY  MAX_CAPACITY  PVID  VLANS  CURR_MAC_ADDR  CLIENTS
27004001  Yes  1        0      0          10.0      100.0         0     all    4e411670ab00   -
$
HMC-CLI
hscroot@hmc01:~> lshwres -r sriov --rsubtype logport -m ms11 --level eth --filter lpar_names=ms11-vio1
config_id=0,lpar_name=ms11-vio1,lpar_id=1,lpar_state=Open Firmware,is_required=1,adapter_id=1,logical_port_id=27004001,logical_port_type=eth,drc_name=PHB 4097,location_code=U78DA.ND0.WZS163D-P0-C0-T0-S1,functional_state=1,phys_port_id=0,debug_mode=0,diag_mode=0,huge_dma_window_mode=0,capacity=10.0,max_capacity=100.0,promisc_mode=1,mac_addr=4e411670ab00,curr_mac_addr=4e411670ab00,allowed_os_mac_addrs=all,allowed_vlan_ids=all,port_vlan_id=0,is_vnic_backing_device=0
hscroot@hmc01:~>
The MAC address you are looking for is 4e411670ab00.
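If only the MAC address itself is needed, the “-F” option of lshwres reduces the output to exactly this field:
HMC-CLI
hscroot@hmc01:~> lshwres -r sriov --rsubtype logport -m ms11 --level eth --filter lpar_names=ms11-vio1 -F curr_mac_addr
4e411670ab00
hscroot@hmc01:~>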
We therefore start the installation with the additional arguments interface=eth0 and mac_addr=4e411670ab00:
LPAR-Tool
$ vios installios ms11-vio1 172.16.107.45 255.255.255.0 172.16.107.1 VIOS_4.1.0.0 vlan_tag=100 interface=eth0 mac_addr=4e411670ab00
HMC-CLI
hscroot@hmc01:~> installios -s ms11 -p ms11-vio1 -r standard -i 172.16.107.45 -g 172.16.107.1 -A eth0 -m 4e411670ab00 -S 255.255.255.0 -d /extra/viosimages/VIOS_4.1.0.0/dvdimage.v1.iso -V 100
The output is the same whether the LPAR tool or the HMC CLI was used:
Logging session output to /tmp/installios.1797914.log.
nimol_config MESSAGE: Added "REMOTE_ACCESS_METHOD /usr/bin/rsh" to the file "/etc/nimol.conf"
nimol_config MESSAGE: Removed "disable = yes" from the file "/etc/xinetd.d/tftp"
nimol_config MESSAGE: Added "disable = no" to the file "/etc/xinetd.d/tftp"
nimol_config MESSAGE: Removed "local2,local3.* -/var/log/localmessages;RSYSLOG_TraditionalFileFormat" from the file "/etc/rsyslog.conf"
nimol_config MESSAGE: Added "local3.* -/var/log/localmessages;RSYSLOG_TraditionalFileFormat" to the file "/etc/rsyslog.conf"
nimol_config MESSAGE: Added "local2.* /var/log/nimol.log" to the file "/etc/rsyslog.conf"
nimol_config MESSAGE: Executed /usr/sbin/nimol_bootreplyd -l -d -f /etc/nimoltab -s 172.16.107.91.
nimol_config MESSAGE: Successfully configured NIMOL.
nimol_config MESSAGE: target directory: /info/default2
nimol_config MESSAGE: source directory: /mnt/nimol
nimol_config MESSAGE: extract_nim_res location /mnt/nimol
nimol_config MESSAGE: search for the tar file in the directory
nimol_config MESSAGE: File list=bosinst.data
image.data
installp
ismp
mkcd.data
nimol
OSLEVEL
ppc
README.vios
root
RPMS
sbin
udi
usr
nimol_config MESSAGE: Copying /mnt/nimol/nimol/ioserver_res/booti.chrp.mp.ent.Z to /info/default2...
nimol_config MESSAGE: Copying /mnt/nimol/nimol/ioserver_res/ispot.tar.Z to /info/default2...
nimol_config MESSAGE: Copying /mnt/nimol/nimol/ioserver_res/mksysb to /info/default2/mksysb...
nimol_config MESSAGE: Copying /mnt/nimol/nimol/ioserver_res/bosinst.data to /info/default2/bosinst.data...
nimol_config MESSAGE: Added "/info/default2 *(rw,insecure,no_root_squash)" to the file "/etc/exports"
nimol_config MESSAGE: Successfully created "default2".
nimol_install MESSAGE: The hostname "172_16_107_45" will be used.
nimol_install MESSAGE: Added "CLIENT 172_16_107_45" to the file "/etc/nimol.conf"
nimol_install MESSAGE: Added "172_16_107_45:ip=172.16.107.45:ht=ethernet:gw=172.16.107.1:sm=255.255.255.0:bf=172_16_107_45:sa=172.16.107.91:ha=4e411670ab00" to the file "/etc/nimoltab"
nimol_install MESSAGE: Executed kill -HUP 1799116.
nimol_install MESSAGE: Created /tftpboot/172_16_107_45.
nimol_install MESSAGE: Executed /sbin/arp -s 172_16_107_45 4e411670ab00 -i eth0.
nimol_install MESSAGE: Executed /sbin/iptables -I INPUT 1 -s 172_16_107_45 -j ACCEPT.
nimol_install MESSAGE: Created /info/default2/scripts/172_16_107_45.script.
nimol_install MESSAGE: Created /tftpboot/172_16_107_45.info.
nimol_install MESSAGE: Successfully setup 172_16_107_45 for a NIMOL install
convert_hostname : begin : ip=172.16.107.45
# Connecting to ms11-vio1.
# Connected
# Checking for power off.
# Power off the node.
# Wait for power off.
# Power off complete.
# Power on ms11-vio1 to Open Firmware.
# Power on complete.
# Client IP address is 172.16.107.45.
# Server IP address is 172.16.107.91.
# Gateway IP address is 172.16.107.1.
# Subnetmask IP address is 255.255.255.0.
# Getting adapter location codes.
BOOTP initiated!
# bootp sent over network.
# Network boot proceeding, lpar_netboot is exiting.
…
Jul 17 12:54:28 172_16_107_45 nimol: ,info=LED 610: mount -r 172.16.107.91:/info/default2/SPOT/usr /SPOT/usr,
Jul 17 12:54:28 172_16_107_45 nimol: ,info=,
Jul 17 12:54:28 172_16_107_45 nimol: ,-S,booting,172_16_107_45,
Jul 17 12:54:59 172_16_107_45 nimol: ,info=LED 610: mount 172.16.107.91:/info/default2/mksysb /NIM_BOS_IMAGE,
Jul 17 12:54:59 172_16_107_45 nimol: ,info=LED 610: mount 172.16.107.91:/info/default2/bosinst.data /NIM_BOSINST_DATA,
Jul 17 12:54:59 172_16_107_45 nimol: ,info=LED 610: mount 172.16.107.91:/info/default2/lpp_source /SPOT/usr/sys/inst.images,
Jul 17 12:54:59 172_16_107_45 nimol: ,info=,
Jul 17 12:55:04 172_16_107_45 nimol: ,-R,success,172_16_107_45,
Jul 17 12:55:04 172_16_107_45 nimol: ,info=extract_data_files,
Jul 17 12:55:06 172_16_107_45 nimol: ,info=query_disks,
Jul 17 12:55:07 172_16_107_45 nimol: ,info=extract_diskette_data,
Jul 17 12:55:08 172_16_107_45 nimol: ,info=setting_console,
Jul 17 12:55:08 172_16_107_45 nimol: ,info=initialization,
Jul 17 12:55:09 172_16_107_45 nimol: ,info=verifying_data_files,
Jul 17 12:55:22 172_16_107_45 nimol: ,info=,
Jul 17 12:55:24 172_16_107_45 nimol: ,info=BOS install 1% complete : Making boot logical volume.,
Jul 17 12:55:25 172_16_107_45 nimol: ,info=BOS install 2% complete : Making paging logical volumes.,
Jul 17 12:55:27 172_16_107_45 nimol: ,info=BOS install 3% complete : Making logical volumes.,
Jul 17 12:55:34 172_16_107_45 nimol: ,info=BOS install 4% complete : Forming the jfs log.,
Jul 17 12:55:34 172_16_107_45 nimol: ,info=BOS install 5% complete : Making file systems.,
Jul 17 12:55:43 172_16_107_45 nimol: ,info=BOS install 6% complete : Mounting file systems.,
Jul 17 12:55:43 172_16_107_45 nimol: ,info=BOS install 7% complete,
Jul 17 12:55:44 172_16_107_45 nimol: ,info=BOS install 7% complete : Restoring base operating system.,
Jul 17 12:56:02 172_16_107_45 nimol: ,info=BOS install 19% complete : 16% of mksysb data restored.,
Jul 17 12:56:22 172_16_107_45 nimol: ,info=BOS install 30% complete : 31% of mksysb data restored.,
Jul 17 12:56:42 172_16_107_45 nimol: ,info=BOS install 43% complete : 48% of mksysb data restored.,
Jul 17 12:57:03 172_16_107_45 nimol: ,info=BOS install 54% complete : 62% of mksysb data restored.,
Jul 17 12:57:23 172_16_107_45 nimol: ,info=BOS install 64% complete : 76% of mksysb data restored.,
Jul 17 12:57:43 172_16_107_45 nimol: ,info=BOS install 71% complete : 86% of mksysb data restored.,
Jul 17 12:57:59 172_16_107_45 nimol: ,info=BOS install 82% complete,
Jul 17 12:57:59 172_16_107_45 nimol: ,info=BOS install 82% complete : Initializing disk environment.,
Jul 17 12:57:59 172_16_107_45 nimol: ,info=BOS install 83% complete : Over mounting /.,
Jul 17 12:58:33 172_16_107_45 nimol: ,info=BOS install 84% complete,
Jul 17 12:58:33 172_16_107_45 nimol: ,info=BOS install 85% complete : Copying Cu* to disk.,
Jul 17 12:58:33 172_16_107_45 nimol: ,info=BOS install 86% complete,
Jul 17 12:58:35 172_16_107_45 nimol: ,info=BOS install 87% complete,
Jul 17 12:58:36 172_16_107_45 nimol: ,info=BOS install 88% complete,
Jul 17 12:58:43 172_16_107_45 nimol: ,info=BOS install 89% complete,
Jul 17 12:59:39 172_16_107_45 nimol: ,info=BOS install 89% complete : Initializing dump device.,
Jul 17 12:59:39 172_16_107_45 nimol: ,info=recover_device_attributes,
Jul 17 12:59:39 172_16_107_45 nimol: ,-R,success,172_16_107_45,
Jul 17 12:59:39 172_16_107_45 nimol: ,info=BOS install 89% complete : Network Install Manager customization.,
Jul 17 13:00:43 172_16_107_45 nimol: ,-R,success,172_16_107_45,
Jul 17 13:00:43 172_16_107_45 nimol: ,info=bosboot,
Jul 17 13:00:43 172_16_107_45 nimol: ,info=BOS install 90% complete : Creating boot image.,
Jul 17 13:00:56 172_16_107_45 nimol: ,info=BOS install 100% complete,
Jul 17 13:00:57 172_16_107_45 nimol: ,-R,success,172_16_107_45,
Jul 17 13:00:57 172_16_107_45 nimol: ,-R,success,172_16_107_45,
Jul 17 13:00:57 172_16_107_45 nimol: ,-R,success,172_16_107_45,
Jul 17 13:00:57 172_16_107_45 nimol: ,-R,success,172_16_107_45,
Jul 17 13:00:57 172_16_107_45 nimol: ,-S,shutdown,172_16_107_45,
…
nimol_install MESSAGE: Removed "172_16_107_45:ip=172.16.107.45:ht=ethernet:gw=172.16.107.1:sm=255.255.255.0:bf=172_16_107_45:sa=172.16.107.91:ha=4e411670ab00" from the file "/etc/nimoltab"
nimol_install MESSAGE: Executed kill -HUP 1799116.
nimol_install MESSAGE: Removed /tftpboot/172_16_107_45.
nimol_install MESSAGE: Executed /sbin/arp -d 172_16_107_45.
nimol_install MESSAGE: Executed /sbin/iptables -D INPUT -s 172_16_107_45 -j ACCEPT.
nimol_install MESSAGE: Removed /tftpboot/172_16_107_45.info.
nimol_install MESSAGE: Removed /info/default2/scripts/172_16_107_45.script.
nimol_install MESSAGE: Removed "CLIENT 172_16_107_45" from the file "/etc/nimol.conf"
nimol_config MESSAGE: Removed "/info/default2 *(rw,insecure,no_root_squash)" from the file "/etc/exports"
nimol_config MESSAGE: Executed /usr/sbin/exportfs -ua.
nimol_config MESSAGE: Executed /usr/sbin/exportfs -a.
nimol_config MESSAGE: Removed /tftpboot/default2.chrp.mp.ent.
nimol_config MESSAGE: Removed /info/default2.
nimol_config MESSAGE: Removed "LABEL default2" from the file "/etc/nimol.conf"
nimol_config MESSAGE: Unconfiguring the NIMOL server...
nimol_config MESSAGE: Removed "disable = no" from the file "/etc/xinetd.d/tftp"
nimol_config MESSAGE: Added "disable = yes" to the file "/etc/xinetd.d/tftp"
nimol_config MESSAGE: Removed "local2.* /var/log/nimol.log" from the file "/etc/rsyslog.conf"
nimol_config MESSAGE: Removed "local3.* -/var/log/localmessages;RSYSLOG_TraditionalFileFormat" from the file "/etc/rsyslog.conf"
nimol_config MESSAGE: Added "local2,local3.* -/var/log/localmessages;RSYSLOG_TraditionalFileFormat" to the file "/etc/rsyslog.conf"
nimol_config MESSAGE: Executed kill 1799116.
nimol_config MESSAGE: Removed /var/tmp/nimol_original.
nimol_config MESSAGE: Executed /bin/systemctl stop nfs-server.
nimol_config MESSAGE: Removed /etc/nimol.conf.
nimol_config MESSAGE: Successfully unconfigured NIMOL.
After the installation is complete and the Virtual I/O Server has booted, you must first accept IBM’s license terms and assign a password for the user padmin. The best way to do this is to open a console:
LPAR-Tool
$ lpar console ms11-vio1
HMC-CLI
hscroot@hmc01:~> mkvterm -m ms11 -p ms11-vio1
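Should the virtual terminal already be occupied by an old session, it can be closed from the HMC side with rmvterm before opening a new console:
HMC-CLI
hscroot@hmc01:~> rmvterm -m ms11 -p ms11-vio1
hscroot@hmc01:~>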
When logging in as the user padmin, you are immediately prompted to set a password; there is no default password:
Open in progress
Open Completed.
IBM Virtual I/O Server
login: padmin
[compat]: 3004-610 You are required to change your password.
Please choose a new one.
padmin's New password: XXXXXXXXXX
Enter the new password again: XXXXXXXXXX
Indicate by selecting the appropriate response below whether you
accept or decline the software maintenance terms and conditions.
Accept (a) | Decline (d) | View Terms (v) > a
$ license -accept
Current system settings are different from the best practice recommendations for a VIOS.
To view the differences between system and the recommended settings, run the following:
$rules -o diff -s -d
To deploy the VIOS recommended default settings, run the following:
$rules -o deploy -d
$shutdown -restart
$
We accept the recommended default settings for VIOS and then reboot:
$ rules -o deploy -d
bosboot: Boot image is 67633 512 byte blocks.
A manual post-operation is required for the changes to take effect, please reboot the system.
$ shutdown -restart
Shutting down the VIO Server could affect Client Partitions. Continue [y|n]?
y
SHUTDOWN PROGRAM
Wed Jul 17 08:13:11 CDT 2024
Running /etc/rc.d/rc2.d/Ksshd stop
0513-044 The sshd Subsystem was requested to stop.
…
After the reboot, RMC should be active and the Virtual I/O Server should then also be reachable over the network (via SSH):
LPAR-Tool
$ lpar status ms11-vio1
LPAR                                                          PROC
NAME       ID  LPAR_ENV   STATE    PROFILE   SYNC  RMC     PROCS  UNITS  MEM    OS_VERSION
ms11-vio1  1   vioserver  Running  standard  0     active  2      -      16 GB  VIOS 4.1.0.00
$
HMC-CLI
hscroot@hmc01:~> lssyscfg -r lpar -m ms11
name=ms11-vio1,lpar_id=1,lpar_env=vioserver,state=Running,resource_config=1,os=vios,os_version=VIOS 4.1.0.00,logical_serial_num=89B35821,default_profile=standard,curr_profile=standard,work_group_id=none,shared_proc_pool_util_auth=0,allow_perf_collection=0,power_ctrl_lpar_ids=none,boot_mode=norm,lpar_keylock=norm,auto_start=0,redundant_err_path_reporting=0,rmc_state=active,rmc_ipaddr=172.16.107.45,msp=0,time_ref=0,lpar_avail_priority=191,desired_lpar_proc_compat_mode=default,curr_lpar_proc_compat_mode=POWER10,sync_curr_profile=0,affinity_group_id=none,vtpm_enabled=0,migr_storage_vios_data_status=unavailable,migr_storage_vios_data_timestamp=unavailable,powervm_mgmt_capable=0,pend_secure_boot=0,curr_secure_boot=0,keystore_kbytes=0,description=
hscroot@hmc01:~>
The output confirms that Virtual I/O Server version 4.1.0.00 has been installed and that RMC is active.
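As a final check, the Virtual I/O Server can now also be reached directly via SSH, for example to query the installed level with the ioslevel command (client IP as above; the level shown corresponds to the os_version from the status output):
$ ssh padmin@172.16.107.45 ioslevel
4.1.0.00
$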