FRI JUN 22 2018




The main purpose of a computer system is to execute programs.  These programs, together with the data they access, must be at least partially in main memory during execution.  To improve both the utilization of the CPU and the speed of its response to users, a general-purpose computer must keep several processes in memory; that is, it must share memory among them.  Many memory-management schemes exist, reflecting various approaches, and the effectiveness of each depends on the situation.  In this section, we discuss various ways to manage memory.  The memory-management algorithms range from a primitive bare-machine approach to paging and segmentation strategies, and each approach has its own advantages and disadvantages.  Selection of a memory-management method for a specific system depends on many factors, especially on the hardware design of the system.  As we shall see, many algorithms require hardware support, leading many systems to have closely integrated hardware and operating-system memory management.

One of the most difficult aspects of operating system design is memory management.  Although the cost of memory has dropped dramatically and, as a result, the size of main memory on modern machines has grown, reaching into the gigabyte range, there is never enough main memory to hold all of the programs and data structures needed by active processes and by the operating system.  Accordingly, a central task of the operating system is to manage memory, which involves bringing in and swapping out blocks of data from secondary memory.  However, memory I/O is a slow operation, and its speed relative to the processor's instruction cycle time lags further and further behind with each passing year.  To keep the processor or processors busy and thus to maintain efficiency, the operating system must cleverly time the swapping in and swapping out to minimize the effect of memory I/O on performance.


Memory Management Requirements
Some basic requirements for memory management are listed below:

  • Relocation

  • Protection

  • Sharing

  • Logical organization

  • Physical organization

In a multiprogramming system, the available main memory is generally shared among a number of processes.  Typically, it is not possible for the programmer to know in advance which other programs will be resident in main memory at the time of execution of his or her program.  In addition, we would like to be able to swap active processes in and out of main memory to maximize processor utilization by providing a large pool of ready processes to execute.  Once a program has been swapped out to disk, it would be quite limiting to declare that when it is next swapped back in, it must be placed in the same main memory region as before.  Instead, we may need to relocate the process to a different area of memory.  Thus, we cannot know ahead of time where a program will be placed, and we must allow that the program may be moved about in main memory due to swapping.
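The relocation requirement can be sketched as dynamic translation through a base register. This is an illustrative model, not any specific hardware; the addresses are invented for the example. The point is that the same logical address works regardless of where the process is loaded:

```python
# Sketch of dynamic relocation with a base register: every logical
# address generated by the process is translated at run time by adding
# the base register, so the process can be swapped back in at a
# different region of main memory without being changed.

def relocate(logical_addr, base):
    """Map a logical address to a physical address via the base register."""
    return base + logical_addr

# Process loaded at physical address 4000: logical 100 -> physical 4100.
first = relocate(100, 4000)
# Swapped out, then swapped back in at 7000: the same logical address
# now refers to a new physical location, transparently to the program.
second = relocate(100, 7000)
```

Because translation happens on every reference, nothing in the program itself needs rewriting when the operating system chooses a new region.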

Each process should be protected against unwanted interference by other processes, whether accidental or intentional.  Thus, programs in other processes should not be able to reference memory locations in a process for reading or writing purposes without permission.  In one sense, satisfaction of the relocation requirement increases the difficulty of satisfying the protection requirement.  Because the location of a program in main memory is unpredictable, it is impossible to check absolute addresses at compile time to assure protection.  Furthermore, most programming languages allow the dynamic calculation of addresses at run time (for example, by computing an array subscript or a pointer into a data structure).  Hence all memory references generated by a process must be checked at run time to ensure that they refer only to the memory space allocated to that process.  Note that the memory protection requirement must be satisfied by the processor (hardware) rather than the operating system (software).  This is because the operating system cannot anticipate all of the memory references that a program will make.  Even if such anticipation were possible, it would be prohibitively time consuming to screen each program in advance for possible memory-reference violations.  Thus, it is only possible to assess the permissibility of a memory reference (data access or branch) at the time of execution of the instruction making the reference.  To accomplish this, the processor hardware must have that capability.
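The run-time check described above is usually modeled with a base register and a limit register: the hardware rejects any reference outside the process's allocation before the base is added. The sketch below is a hypothetical model with invented sizes, not a description of a particular processor:

```python
# Sketch of run-time memory protection with base and limit registers:
# every reference is checked against the limit before translation, so a
# process cannot read or write outside its own allocation.

class ProtectionFault(Exception):
    """Raised when a reference falls outside the process's memory space."""

def checked_translate(logical_addr, base, limit):
    if not 0 <= logical_addr < limit:
        # On real hardware this would trap to the operating system.
        raise ProtectionFault(f"address {logical_addr} outside 0..{limit - 1}")
    return base + logical_addr

ok = checked_translate(50, base=4000, limit=1024)      # legal reference
```

A reference such as `checked_translate(2000, base=4000, limit=1024)` would raise `ProtectionFault`, which is why the check must be performed by hardware on every reference rather than screened in advance by software.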

Any protection mechanism must have the flexibility to allow several processes to access the same portion of main memory.  For example, if a number of processes are executing the same program, it is advantageous to allow each process to access the same copy of the program rather than have its own separate copy.  Processes that are cooperating on some task may need to share access to the same data structure.  The memory management system must therefore allow controlled access to shared areas of memory without compromising essential protection.  Again, we will see that the mechanisms used to support relocation also support sharing capabilities.
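Controlled sharing is commonly realized by letting two per-process translation tables point at the same physical memory, as in the page-table sketch below. The page and frame numbers are purely illustrative:

```python
# Sketch of sharing via page tables: two processes each have a private
# mapping (page -> frame), but one entry in each table points to the
# same physical frame, e.g. a single read-only copy of program code.

page_table_a = {0: 10, 1: 11, 2: 50}   # process A's page table
page_table_b = {0: 20, 1: 21, 2: 50}   # process B: page 2 maps to frame 50 too

# Frames referenced by both tables are shared between the processes.
shared_frames = set(page_table_a.values()) & set(page_table_b.values())
```

Protection is preserved because each process can still reach only the frames named in its own table; sharing is simply an overlap that the operating system sets up deliberately.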


Logical Organization
Almost invariably, main memory in a computer system is organized as a linear, or one-dimensional, address space, consisting of a sequence of bytes or words.  Secondary memory, at its physical level, is similarly organized.  While this organization closely mirrors the actual machine hardware, it does not correspond to the way in which programs are typically constructed.  Most programs are organized into modules, some of which are unmodifiable (read only, execute only) and some of which contain data that may be modified.  If the operating system and computer hardware can effectively deal with user programs and data in the form of modules of some sort, then a number of advantages can be realized:

  • Modules can be written and compiled independently, with all references from one module to another resolved by the system at run time.

  • With modest additional overhead, different degrees of protection (read only, execute only) can be given to different modules.

  • It is possible to introduce mechanisms by which modules can be shared among processes.  The advantage of providing sharing on a module level is that this corresponds to the user’s way of viewing the problem, and hence it is easy for the user to specify the sharing that is desired.

Physical Organization
Computer memory is organized into at least two levels, referred to as main memory and secondary memory.  Main memory provides fast access at relatively high cost.  In addition, main memory is volatile; that is, it does not provide permanent storage.  Secondary memory is slower and cheaper than main memory and is usually not volatile.  Thus secondary memory of large capacity can be provided for long-term storage of programs and data, while a smaller main memory holds programs and data currently in use.  In this two-level scheme, the organization of the flow of information between main and secondary memory is a major system concern.  The responsibility for this flow could be assigned to the individual programmer, but this is impractical and undesirable for two reasons:

  • The main memory available for a program plus its data may be insufficient.  In that case, the programmer must engage in a practice known as overlaying, in which the program and data are organized in such a way that various modules can be assigned the same region of memory, with a main program responsible for switching the modules in and out as needed.  Even with the aid of compiler tools, overlay programming wastes programmer time.

  • In a multiprogramming environment, the programmer does not know at the time of coding how much space will be available or where that space will be.  It is clear, then, that the task of moving information between the two levels of memory should be a system responsibility.  This task is the essence of memory management.
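The overlay practice mentioned in the first point can be illustrated with a toy driver in which several modules take turns occupying one shared region of memory. The module names and the single-region layout are invented for the example; the point is the hand-written bookkeeping the programmer must supply:

```python
# Toy illustration of overlaying: several modules are assigned the same
# region of memory, and a small driver "loads" whichever module is
# needed next, evicting the previous occupant.

region_occupant = None     # which module currently sits in the overlay region
load_log = []              # record of simulated loads from secondary memory

def enter_module(module):
    """Ensure `module` occupies the overlay region before it runs."""
    global region_occupant
    if region_occupant != module:
        region_occupant = module   # simulate reading the module from disk
        load_log.append(module)

# A pass structure typical of overlaid programs: each phase displaces
# the previous one, and revisiting a phase costs another load.
for phase in ["init", "pass1", "pass2", "pass1"]:
    enter_module(phase)
```

Every phase transition here forces a (slow) load, and the switching logic itself is the programmer's burden, which is exactly why the text concludes that moving information between memory levels should be a system responsibility.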

The principal operation of memory management is to bring processes into main memory for execution by the processor.  In almost all modern multiprogramming systems, this involves a sophisticated scheme known as virtual memory.  Virtual memory is, in turn, based on the use of one or both of two basic techniques:

  • segmentation

  • paging



A user program can be subdivided using segmentation, in which the program and its associated data are divided into a number of segments.  It is not required that all segments of all programs be of the same length, although there is a maximum segment length.  As with paging, a logical address using segmentation consists of two parts, in this case a segment number and an offset.
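Segmented translation can be sketched as a lookup in a per-process segment table whose entries give each segment's base and length. The table contents below are illustrative values, not taken from any real system:

```python
# Sketch of segmented address translation: a logical address is a
# (segment number, offset) pair; the segment table maps each segment
# number to a (base, length) entry, and the offset is checked against
# the length before the base is added.

segment_table = [
    (1400, 1000),   # segment 0: base 1400, length 1000
    (6300, 400),    # segment 1: base 6300, length 400
    (4300, 1100),   # segment 2: base 4300, length 1100
]

def translate_segmented(segment, offset):
    base, length = segment_table[segment]
    if offset >= length:
        # Offsets beyond the segment length trap, as with protection above.
        raise ValueError("offset beyond segment length")
    return base + offset
```

For example, the logical address (segment 2, offset 53) maps to physical address 4300 + 53 = 4353, while (segment 1, offset 500) would be rejected because segment 1 is only 400 units long.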


Both unequal fixed-size and variable-size partitions are inefficient in the use of memory; the former results in internal fragmentation, the latter in external fragmentation.  Suppose, however, that main memory is partitioned into equal fixed-size chunks that are relatively small, and that each process is also divided into small fixed-size chunks of the same size.  Then the chunks of a process, known as pages, could be assigned to available chunks of memory, known as frames, or page frames.  The wasted space in memory for each process is then due only to internal fragmentation, amounting to at most a fraction of the last page of the process.  There is no external fragmentation.
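The paging arithmetic above can be made concrete with a short sketch: a logical address splits into a page number and an offset, and the only waste is the unused tail of the last page. The page size is an illustrative choice:

```python
# Sketch of paged address arithmetic with an illustrative 4 KiB page size.

PAGE_SIZE = 4096

def split_address(logical_addr):
    """Split a logical address into (page number, offset within page)."""
    return divmod(logical_addr, PAGE_SIZE)

def internal_fragmentation(process_size):
    """Wasted space in the last page of a process of the given size."""
    remainder = process_size % PAGE_SIZE
    return 0 if remainder == 0 else PAGE_SIZE - remainder
```

A 10,000-byte process, for instance, occupies three frames and wastes 4096 - (10000 mod 4096) = 2288 bytes in its final page; a process whose size is an exact multiple of the page size wastes nothing, and no external fragmentation arises in either case.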
