Page Replacement Policies

For the operating system, the best page to remove is intuitively the page that will never be needed again, or at least not in the near future. Algorithms based on this idea (the MIN or Optimal replacement strategy, OPT) unfortunately require some kind of simulation run to obtain advance knowledge of future page faults. This simulation is expensive to compute, so OPT is not considered a practical replacement policy and is primarily of theoretical importance. At the other extreme, a random page can be picked for replacement at each page fault; this is known as Random replacement. Here, however, we will study some useful page replacement policies used with demand paging memory management on real systems.

FIGURE 4.15

An entry in a page table using NRU.

Not Recently Used Page Replacement (NRU)

The operating system (OS) in most computers with virtual memory keeps track of the usage status of pages with the help of two status bits associated with each page in the page table. A typical page table entry is shown in Figure 4.15. Here, the "R" bit is set when the page is referenced (read or written), and the "M" bit is set when the page is written to (i.e. modified). Since these bits must be updated on every memory reference, it is essential that they be set by the hardware. Once a bit is set to 1, it remains 1 until the OS resets it to 0 in software. If the hardware does not provide these bits, they can be simulated in software under the control of the OS.
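As a rough, purely illustrative sketch (the field names and widths below are assumptions, not those of any particular MMU), such an entry might be declared in C as follows:

```c
#include <stdint.h>

/* Hypothetical page table entry: field names and widths are
 * illustrative only, not those of any real hardware. */
typedef struct {
    uint32_t frame      : 20;  /* physical frame number            */
    uint32_t present    : 1;   /* page is resident in memory       */
    uint32_t referenced : 1;   /* R: set by hardware on any access */
    uint32_t modified   : 1;   /* M: set by hardware on a write    */
    uint32_t protection : 3;   /* access rights                    */
    uint32_t unused     : 6;
} pte_t;

/* Only the OS ever resets the bits back to 0, e.g. clearing R
 * periodically on each clock interrupt. */
static inline void clear_referenced(pte_t *pte) { pte->referenced = 0; }
```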

"R" and "M" bits in a combined way can be used to build a simple effective paging algorithm. When a process is started, both these page bits initially are set to 0 by OS for all its pages. Periodically (e.g. on each clock interrupt), the R bit is cleared, to distinguish pages that have not been referenced recently from those that have been. When a page fault occurs, the operating system scans all the pages, and divides them into four distinct categories based on the current values of R and M bits.

Class 0: not referenced (R = 0), not modified (M = 0)
Class 1: not referenced (R = 0), modified (M = 1)
Class 2: referenced (R = 1), not modified (M = 0)
Class 3: referenced (R = 1), modified (M = 1)

Although class 1 pages may at first glance seem impossible, they occur when a class 3 page has its R bit cleared by a clock interrupt. Clock interrupts do not clear the M bit, because this information is needed only to decide whether the page has to be rewritten to disk at the time of its removal.

The NRU algorithm removes a page at random from the lowest-numbered non-empty class. Implicit in this algorithm is that it is better to remove a modified page that has not been referenced in at least one clock period (typically about 20 ms) than a clean page that is in heavy use. The main attraction and distinct advantage of NRU is that it is easy to understand, efficient to implement, and offers a performance that, while certainly not optimal, is often adequate.
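The selection step can be sketched in a few lines of C. The page-state arrays below are hypothetical stand-ins for the real page table; the routine classifies each resident page by its R and M bits (class = 2R + M) and returns a random page from the lowest-numbered non-empty class:

```c
#include <stdio.h>
#include <stdlib.h>

#define NPAGES 8

/* Hypothetical per-page state; in a real OS these bits live in the
 * page table entries and are set by the hardware. */
static int present[NPAGES] = {1,1,1,1,1,1,1,1};
static int R[NPAGES]       = {0,1,0,1,0,0,1,1};
static int M[NPAGES]       = {0,0,1,1,0,1,0,1};

/* NRU: pick a random page from the lowest-numbered non-empty class,
 * where class = 2*R + M. Returns the victim page number, or -1. */
static int nru_select_victim(void)
{
    int candidates[NPAGES];
    for (int cls = 0; cls < 4; cls++) {
        int n = 0;
        for (int p = 0; p < NPAGES; p++) {
            if (present[p] && (R[p] * 2 + M[p]) == cls)
                candidates[n++] = p;
        }
        if (n > 0)
            return candidates[rand() % n];  /* random page in this class */
    }
    return -1;  /* no resident pages */
}

int main(void)
{
    printf("NRU victim: page %d\n", nru_select_victim());
    return 0;
}
```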

First-In–First-Out (FIFO)

Another low-overhead page removal algorithm is FIFO (First-In-First-Out), in which the page selected for removal is the one that has been in memory the longest. Is FIFO a good policy for the operating system? Probably not: the pages replaced for having been in memory the longest may well be the very ones the system has been using most heavily all along.

The pages in memory are kept on a list ordered so that the "oldest" page is always at the bottom and the "newest" page is at the top. On a page fault, the page at the bottom is removed and the new page is added to the top of the list. In an actual implementation of FIFO removal, the clock value indicating the time when a page was loaded can be stored, and for removal a search can be made for the "oldest" time. It is usually more efficient, though, to maintain an ordered list, usually by means of pointers, so that the page to be removed can be selected without a lengthy search. However, FIFO has three key disadvantages, namely:

1. If a page is frequently and continually used, it will eventually become the oldest and will be removed, even though it will be needed again immediately.
2. Some strange side effects can occur.
3. Other algorithms have been found to be more effective.

The most noted side effect, called the FIFO anomaly or Belady's anomaly, is that under certain circumstances adding more physical memory can result in poorer performance, when one would expect a larger memory to give better performance. Page traces that actually produce this anomaly are, of course, very rare. Nevertheless, this unusual phenomenon, coupled with the other objections noted above, has caused FIFO in its pure form to be dropped from favour. Pure FIFO has, however, been modified in many different ways to make it more acceptable. Of these schemes, Second chance and Clock page replacement (circular FIFO) are worthy of mention.
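The anomaly is easy to reproduce with a small FIFO simulation. The sketch below runs the well-known reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 through a pure FIFO queue with three and then four frames; the four-frame run incurs 10 page faults against 9 for the three-frame run:

```c
#include <stdio.h>

/* Count page faults for pure FIFO replacement on a reference string. */
static int fifo_faults(const int *refs, int nrefs, int nframes)
{
    int frames[16];                 /* resident pages (nframes <= 16) */
    int next = 0;                   /* index of the oldest page       */
    int used = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit)
            continue;
        faults++;
        if (used < nframes) {
            frames[used++] = refs[i];       /* free frame available  */
        } else {
            frames[next] = refs[i];         /* evict the oldest page */
            next = (next + 1) % nframes;
        }
    }
    return faults;
}

int main(void)
{
    const int refs[] = {1,2,3,4,1,2,5,1,2,3,4,5};
    const int n = sizeof(refs) / sizeof(refs[0]);
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
    return 0;
}
```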

A brief account of Second chance and Clock page replacement (circular FIFO), with respective figures, is given on the following web site: http://routledge.com/9780367255732.

Least Recently Used Page (LRU)

One of the most popular page removal techniques, and a good approximation to the optimal algorithm, is called Least Recently Used (LRU). It selects for removal the page that has not been referenced for the longest time. FIFO, in contrast, removes the page that has been in memory for the longest time, regardless of how often and when it was referenced. LRU is based on the observation that if a page has just been referenced, it is likely to be referenced again soon; conversely, if it has not been referenced for a long time, it is unlikely to be needed in the near future. This observation is also the basis of the theory of locality. According to this strategy, when a page fault occurs, the page that has remained unused for the longest time is thrown out. The LRU policy thus avoids the replacement of frequently used pages, which can occur with FIFO.

An LRU page trace analysis, as explained, puts the most recently referenced page at the top of the M list. The least recently used page thus falls to the bottom of the M list and becomes the candidate for removal. LRU has been found to possess many interesting theoretical properties; in particular, increasing the memory size can never cause the number of page faults to increase, in contrast to the FIFO anomaly.

Implementing a theoretically exact LRU is not cheap. It is necessary to maintain a linked list of all pages in memory, with the most recently used page at the front and the least recently used page at the rear. The difficulty is that the list must be updated on every memory reference. Moreover, at the time of page replacement, finding a page in the list, deleting it, adjusting the links and placing the new page at the front is a very time-consuming operation. Either expensive special hardware is needed, or LRU must be realized by a cheaper approximation in software.
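To make this cost concrete, the deliberately naive sketch below keeps the resident pages in an array ordered from most to least recently used; every single reference has to search the structure and shift entries to move the touched page to the front, which is exactly the per-reference overhead described above:

```c
#include <stdio.h>

#define NFRAMES 4

static int list[NFRAMES];   /* list[0] = most recently used page */
static int used = 0;
static int faults = 0;

/* Reference one page, maintaining strict LRU order on every access. */
static void reference(int page)
{
    int pos = -1;
    for (int i = 0; i < used; i++)
        if (list[i] == page) { pos = i; break; }

    if (pos < 0) {                        /* page fault                  */
        faults++;
        if (used < NFRAMES) used++;       /* grow until memory is full   */
        pos = used - 1;                   /* victim: least recently used */
    }
    for (int i = pos; i > 0; i--)         /* shift to make room at front */
        list[i] = list[i - 1];
    list[0] = page;                       /* page is now most recent     */
}

int main(void)
{
    const int refs[] = {1,2,3,4,1,2,5,1,2,3,4,5};
    for (unsigned i = 0; i < sizeof(refs) / sizeof(refs[0]); i++)
        reference(refs[i]);
    printf("LRU faults: %d\n", faults);
    return 0;
}
```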

Manipulating a linked list on every instruction is prohibitively slow, even when implemented in hardware. However, there are other ways to implement LRU with special hardware. One method equips the hardware with a 64-bit counter, C, that is automatically incremented after each instruction. Furthermore, each page table entry must have a field large enough to store the contents of this counter. After each memory reference, the current value of C is stored in the page table entry for the page just referenced. When a page fault occurs, the operating system examines all the counters in the page table to find the lowest one; that page is the least recently used and is selected as the victim for replacement.
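The counter scheme can be mimicked in software as in the sketch below, where the 64-bit counter and the per-page timestamp field are simulated with ordinary variables (all names are hypothetical):

```c
#include <stdint.h>
#include <stdio.h>

#define NPAGES 4

static uint64_t counter = 0;        /* the 64-bit counter C                */
static uint64_t last_use[NPAGES];   /* copy of C at each page's last use   */
static int      present[NPAGES];

/* Called on every memory reference to a resident page. */
static void touch(int page)
{
    counter++;                      /* C is bumped on each instruction     */
    last_use[page] = counter;       /* store C in the page table entry     */
}

/* On a page fault: the resident page with the lowest stored counter
 * value is the least recently used one. */
static int lru_victim(void)
{
    int victim = -1;
    for (int p = 0; p < NPAGES; p++) {
        if (!present[p]) continue;
        if (victim < 0 || last_use[p] < last_use[victim])
            victim = p;
    }
    return victim;
}

int main(void)
{
    for (int p = 0; p < NPAGES; p++) present[p] = 1;
    touch(2); touch(0); touch(3); touch(0);        /* page 1 never touched */
    printf("LRU victim: page %d\n", lru_victim()); /* prints page 1        */
    return 0;
}
```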

Performance Comparison

The effectiveness of a page removal algorithm depends to a large extent on the behaviour of the particular program being run: on its execution trace, its address trace and, finally, its page trace. The best policy described so far is the OPT algorithm, but it cannot be implemented in practice, because it is not possible to predict the future page references of a program in advance. The LRU algorithm is a popular policy, resting on the strong basis of the theory of locality, and it often results in a high hit ratio. If the available main memory is larger than the working set, which is usual and very common, LRU appears to perform very well. The Random and FIFO policies are very easy to implement, with almost no demand for additional facilities, but they may perform badly because they ignore the locality of the program. Second chance and circular FIFO attempt to optimize FIFO and work within its framework; at best they can approximate LRU, but they cannot perform beyond it. The performance of NRU lies between that of LRU and FIFO, at the cost of some additional overhead; it is efficient to implement and offers a performance that, while certainly not optimal, is often adequate. It is thus very difficult to declare any one policy the winner, because no policy is uniformly superior to the others: program behaviour, the run-time status of the page frames and many other similar parameters influence the ultimate outcome as a whole.
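One practical way to compare policies is simply to count the page faults each one incurs on the same page trace. The sketch below does this for FIFO and exact LRU (the FIFO routine is the same as in the earlier FIFO sketch); the trace and frame counts are arbitrary, and, as noted above, the relative results depend entirely on the program's behaviour:

```c
#include <stdio.h>

/* Count page faults for pure FIFO replacement on a reference string. */
static int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16], next = 0, used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) frames[used++] = refs[i];
        else { frames[next] = refs[i]; next = (next + 1) % nframes; }
    }
    return faults;
}

/* Count page faults for exact LRU, using per-frame timestamps. */
static int lru_faults(const int *refs, int n, int nframes)
{
    int page[16], stamp[16], used = 0, faults = 0, clock = 0;
    for (int i = 0; i < n; i++) {
        int hit = -1;
        clock++;
        for (int j = 0; j < used; j++)
            if (page[j] == refs[i]) { hit = j; break; }
        if (hit >= 0) { stamp[hit] = clock; continue; }
        faults++;
        if (used < nframes) { page[used] = refs[i]; stamp[used++] = clock; }
        else {
            int v = 0;                       /* victim: smallest timestamp */
            for (int j = 1; j < nframes; j++)
                if (stamp[j] < stamp[v]) v = j;
            page[v] = refs[i];
            stamp[v] = clock;
        }
    }
    return faults;
}

int main(void)
{
    /* A short trace with some locality; the outcome of the comparison
     * depends entirely on the trace and the number of frames. */
    const int refs[] = {1,2,3,1,2,3,4,5,4,5,1,2,1,2,3};
    const int n = sizeof(refs) / sizeof(refs[0]);
    for (int f = 3; f <= 5; f++)
        printf("%d frames: FIFO %d faults, LRU %d faults\n",
               f, fifo_faults(refs, n, f), lru_faults(refs, n, f));
    return 0;
}
```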

A few examples on page replacement policies with solutions are given in the following web site: http://routledge.com/9780367255732.

 