6.2. Active Memory Sharing (AMS)

Similar to the case of shared processors, physical memory can also be shared between several LPARs. Depending on their current requirements, different amounts of physical memory are assigned to the LPARs, so the amount of physical memory assigned to an LPAR can vary over time.

As a rule of thumb, the administrator chooses the main memory size of an LPAR to be sufficient for the peak utilization, usually with some safety margin; otherwise, increased paging will result. The so-called working set (the memory pages that are actually used) is usually smaller. With PowerHA systems, a sufficient amount of memory must additionally be available for applications currently running on other nodes; otherwise, a failover could fail because there is not enough memory available to start the additional applications from the failing node. Especially in cluster systems, this means that the difference between the memory actually required and the configured memory can easily amount to half the main memory size. As long as there is no failover, the memory configured for these additional applications is not used (apart from file system caching).

Figure 6.2 shows the actual memory usage of some LPARs over the course of a day. The red line marks the sum of the configured main memory sizes of the LPARs (248 GB). Over the course of the day, however, the total memory usage of all LPARs together remains significantly below this value most of the time. The values shown come from real production systems. Although the figure does not show the pattern often cited in this context, where a usage peak of one LPAR coincides with a usage valley of another, between approx. 60 and 100 GB of allocated memory still remain unused by the LPARs. This memory could be used, for example, to run further LPARs on the managed system, or to assign more memory to individual LPARs.
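The gap between configured and used memory can be illustrated with a small calculation. The per-LPAR values below are hypothetical (they are not the data behind figure 6.2), chosen only so that the configured sizes sum to the 248 GB mentioned above:

```python
# Hypothetical snapshot: configured vs. actually used memory (GB) for
# six LPARs. All individual values are illustrative, not measured data.
configured = {"lpar1": 64, "lpar2": 48, "lpar3": 48,
              "lpar4": 40, "lpar5": 32, "lpar6": 16}
used = {"lpar1": 41, "lpar2": 30, "lpar3": 35,
        "lpar4": 28, "lpar5": 20, "lpar6": 10}

total_configured = sum(configured.values())  # 248 GB (the red line)
total_used = sum(used.values())              # 164 GB
unused = total_configured - total_used       # allocated but unused memory

print(f"configured: {total_configured} GB, "
      f"used: {total_used} GB, unused: {unused} GB")
```

With these sample values, 84 GB of allocated memory is unused, which lies within the 60-100 GB range observed in the figure.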

Figure 6.2: Actual (total) memory usage of some LPARs during a day.

Similar to the case of shared processors, a so-called shared memory pool can be created for sharing memory. In contrast to shared processor pools, there can be at most one shared memory pool. Up to 1000 LPARs can then be assigned to the shared memory pool, and the sum of the main memory sizes of these LPARs may exceed the size of the shared memory pool. For example, in the situation in figure 6.2, a shared memory pool of 200 GB could be created and all 6 LPARs shown, with a total of 248 GB of memory, could be assigned to it. Since it cannot be ruled out that all LPARs together use more than 200 GB of memory, the missing 248 GB – 200 GB = 48 GB must of course be provided in some form. This is where paging devices, known from operating systems, come into play: a separate paging device (with a capacity of at least the maximum memory size of the LPAR) must be provided for each LPAR that uses the shared memory pool. Missing physical main memory is then backed by memory on the paging devices.
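The sizing rule above can be sketched as a short calculation. The LPAR names and individual sizes are hypothetical; only the 248 GB total and the 200 GB pool size come from the example in the text:

```python
# Illustrative sizing check for a shared memory pool (names and
# per-LPAR sizes are hypothetical, summing to 248 GB as in the text).
lpar_max_mem_gb = {"lpar1": 64, "lpar2": 48, "lpar3": 48,
                   "lpar4": 40, "lpar5": 32, "lpar6": 16}
pool_size_gb = 200

total_lpar_mem = sum(lpar_max_mem_gb.values())          # 248 GB
overcommit_gb = max(0, total_lpar_mem - pool_size_gb)   # 48 GB not backed by RAM

# One paging device per LPAR, each at least as large as that LPAR's
# maximum memory size, so any LPAR can be paged out completely.
min_paging_device_gb = dict(lpar_max_mem_gb)

print(f"pool: {pool_size_gb} GB, LPAR total: {total_lpar_mem} GB, "
      f"overcommitted: {overcommit_gb} GB")
```

Note that the per-LPAR paging devices together provide far more than the missing 48 GB; the per-LPAR maximum-size requirement guarantees that even a single LPAR can be paged out entirely.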

Figure 6.3 shows a shared memory pool with 2 shared memory LPARs. The memory of the LPARs consists of physical main memory from the shared memory pool and possibly memory on paging devices. The LPARs themselves cannot determine whether a memory area resides in physical memory or on a paging device. The hypervisor assigns physical memory from the shared memory pool to the individual LPARs dynamically. After relocating physical memory pages to a paging device, the hypervisor can assign the released physical memory to another LPAR. If the original LPAR accesses the paged-out memory pages again, the hypervisor assigns free physical memory pages, starts I/O on the paging device to read the paged-out data, and stores it in the new physical memory pages. The processes on the associated LPAR are not aware of these background actions; the memory access simply takes a little longer.
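The page-stealing cycle described above can be modeled as a toy simulation. All names (frames, LPARs, pages) are invented for illustration; the real hypervisor works at page granularity with hardware support, not with Python dictionaries:

```python
# Toy model of hypervisor-level paging in a shared memory pool:
# physical frames can be taken from one LPAR by paging their content
# out to that LPAR's paging device, then handed to another LPAR.
pool_frames = ["f0", "f1"]                   # free physical frames in the pool
lpar_pages = {"lparA": {}, "lparB": {}}      # logical page -> physical frame
paging_device = {"lparA": {}, "lparB": {}}   # paged-out page contents
memory = {}                                  # physical frame -> content

def touch(lpar, page, content=None):
    """Access a logical page; page it in transparently if it was paged out."""
    frame = lpar_pages[lpar].get(page)
    if frame is None:                        # page not backed by a frame
        frame = pool_frames.pop()            # hypervisor assigns a free frame
        if page in paging_device[lpar]:      # read back via the paging device
            memory[frame] = paging_device[lpar].pop(page)
        lpar_pages[lpar][page] = frame
    if content is not None:
        memory[frame] = content
    return memory.get(frame)

def steal(lpar, page):
    """Hypervisor pages one of lpar's pages out to free a physical frame."""
    frame = lpar_pages[lpar][page]
    paging_device[lpar][page] = memory.pop(frame)
    lpar_pages[lpar][page] = None
    pool_frames.append(frame)

# lparA fills both frames; the hypervisor steals one for lparB;
# lparA's later access transparently triggers a page-in.
touch("lparA", "p0", "dataA0")
touch("lparA", "p1", "dataA1")
steal("lparA", "p0")
touch("lparB", "q0", "dataB0")
steal("lparB", "q0")
print(touch("lparA", "p0"))   # -> dataA0, restored by the page-in
```

As in the real mechanism, the accessing LPAR sees only the original data; the page-out, frame reassignment, and page-in happen entirely below it.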

Figure 6.3: Shared memory pool with 2 shared memory LPARs.

Since the hypervisor, as firmware, cannot itself start I/O on the paging devices directly, a virtual I/O server is required to perform the I/O. At least one virtual I/O server must therefore be assigned to a shared memory pool in order to perform I/O from and to the paging devices. Such a virtual I/O server is referred to as a paging virtual I/O server, and the paging devices must be visible and accessible on it. The hypervisor delegates I/O to the paging devices to a paging virtual I/O server.

The sharing of physical main memory using a shared memory pool is known as Active Memory Sharing (AMS).