CPSC 410--Richard Furuta 2/24/99 1
Silberschatz and Galvin
Chapter 8
Memory Management
Memory Management
• Goal: permit different processes to share
memory--effectively keep several in
memory at the same time
• Eventual meta-goal: users develop programs
in what appears to be their own infinitely-
large address space (i.e., starting at address
0 and extending without limit)
Memory Management
- Initially we assume that the entire program must be in physical memory before it can be executed.
- How can we avoid including unnecessary routines? Programs consist of modules written by several people who may not [be able to] communicate their intentions to one another.
- Reality:
- primary memory has faster access time but limited size; secondary memory is slower but much cheaper.
- program and data must be in primary memory to be referenced by CPU directly
Multistep Processing of User Program
[Figure: the source program passes through the compiler or assembler to produce an object module; the linkage editor combines it with other object modules into a load module; the loader, together with system libraries, turns the load module into a memory image. Compile time covers the first step; load time covers the rest.]
Binding
- Typically
- compiler binds symbolic names (e.g., variable names) to relocatable addresses (i.e., relative to the start of the module)
- linkage editor may further modify relocatable addresses (e.g., relative to a larger unit than a single module)
- loader binds relocatable addresses to absolute addresses
- Actually, address binding can be done at any of these steps
When should binding occur?
- binding at compile time
- generates absolute code. Must know at compile time where the process (or object) will reside in memory. Example: *0 in C. Limits complexity of system.
- binding at load time
- converts compiler’s relocatable addresses into absolute addresses at load time. The most common case. The program cannot be moved during execution.
- binding at run time
- process can be moved during its execution from one memory segment to another. Requires hardware assistance (discussed later). Run-time overhead results from movement of process.
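Load-time binding, as described above, can be sketched as a tiny loader that patches the compiler's relocatable addresses once the load address is known. The module layout and fixup list here are invented for illustration, not a real object-file format:

```python
# Toy load-time binding: the compiler emitted addresses relative to the
# start of the module; the loader rebases each one to an absolute address.

def load(code, fixups, load_address):
    """Return a copy of `code` with every relocatable word rebased."""
    image = list(code)
    for i in fixups:                 # fixups: indices of words holding addresses
        image[i] += load_address     # relocatable -> absolute
    return image

module = [0, 5, 42, 2]               # words 1 and 3 are relocatable addresses
print(load(module, fixups=[1, 3], load_address=1000))   # [0, 1005, 42, 1002]
```

Once loaded this way the image cannot be moved, which is why run-time movement requires hardware support instead.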
When should loading occur?
- Recall that loading moves objects into memory
- Load before execution
- load all routines before runtime starts
- straightforward scheme
- Load during execution-- Dynamic loading
- loads routines on first use
- note that unused routines (ones that are not invoked) are not loaded
- Implement as follows: on call to routine, check if the routine is in memory. If not, load it.
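The check-then-load step above can be sketched as follows; `load_from_disk` and the routine names are hypothetical stand-ins for the real relocatable loader:

```python
# Dynamic loading sketch: a routine is loaded on first call only.

loaded = {}                            # routines currently in memory

def load_from_disk(name):              # stand-in for the relocatable loader
    return lambda: f"{name} ran"

def call(name):
    if name not in loaded:             # not resident: load on first use
        loaded[name] = load_from_disk(name)
    return loaded[name]()              # resident: just invoke

call("stats_report")                   # first call triggers a load
call("stats_report")                   # already resident, no second load
```

Routines that are never invoked (e.g., rarely used error handlers) never consume memory.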
When should linking occur?
- Recall that linking resolves references among objects.
- Standard implementation: link before execution (hence all references to library routines have been resolved before execution begins). Called static linking.
- Link during execution: dynamic linking
- memory resident library routines
- every process uses the same copy of the library routines
- hence linking is deferred to execution time, but loading is not necessarily deferred
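Deferred linking is often implemented with a stub per library reference; a minimal sketch, with an invented linkage table rather than any real linker's data structures:

```python
# Dynamic-linking sketch: every process shares one memory-resident copy
# of each library routine; a stub resolves the reference on first call.

resident_library = {"printf": lambda s: f"printed: {s}"}   # shared copy

table = {}                                 # per-process linkage table

def make_stub(name):
    def stub(*args):
        table[name] = resident_library[name]   # bind to the shared copy
        return table[name](*args)              # later calls bypass the stub
    return stub

table["printf"] = make_stub("printf")
print(table["printf"]("hi"))               # first call resolves, then runs
```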
Overlays
- So far, the entire program and data of process
must be in physical memory during execution.
- Ad hoc mechanism for permitting process to be
larger than the amount of memory allocated to it:
overlays
- In effect keeps only those instructions and data in
memory that are in current use
- Needed instructions and data replace those no
longer in use
Overlays
Example
[Figure: an overlay layout in which common routines, common data, and the overlay driver stay resident, while Main Routine A and Main Routine B take turns occupying a shared overlay area.]
Overlays
• Overlays do not require special hardware
support--can be managed by programmer
• Programmer must structure program
appropriately, which may be a difficulty
• Very common solution in early days of
computers. Today, dynamic loading and
binding are generally more flexible
• Example: Fortran common
Logical versus Physical
Address Space
- logical address : generated by the CPU (logical address space)
- physical address : loaded into the memory address register of the memory (physical address space)
- compile-time and load-time address binding: logical and physical addresses are the same
- execution-time address binding: logical and physical addresses may differ
- in this case, the logical address is referred to as a virtual address
Logical Address Space versus
Physical Address Space
• User programs only see the logical address
space, in range 0 to max
• Physical memory operates in the physical
address space, addresses in the range R+0 to
R+ max
• This distinction between logical and
physical address spaces is a key one for
memory management schemes.
Swapping
- What: temporarily move inactive process to
backing store (e.g., fast disk). At some later time,
return it to main memory for continued execution.
- Why: permit other processes to use memory
resources (hence each process can be bigger)
- Who: decision of what process to swap made by
medium-term scheduler
Schematic view of Swapping
[Figure: a process is swapped out of main memory to the backing store and later swapped back in.]
Swapping
- Some possibilities of when to swap
- with three processes, start swapping one out when its quantum expires, while the second is executing. The goal is to have the third process in place when the second’s quantum expires (i.e., overlap computation with disk I/O)
- context switch time is very high if you can’t achieve this
- Another option: roll out lower priority process in favor of higher priority process. Roll in the lower priority process when the higher priority one finishes
Contiguous Allocation
- Divide memory into partitions. Initially consider two partitions--one for the resident operating system and one for a user process.
- Where should the operating system go--low memory or high memory?
- Frequently put the operating system in low memory because this is where the interrupt vector is located. Also this permits the user partition to be expanded without running into the operating system (a factor when we have more than one partition or if we run the same binaries on different system configurations).
Memory Partitions
[Figure: memory from address 0 (low memory) holds the resident operating system; user processes (program and data) occupy the rest, up to high memory.]
Single Partition Allocation
- Initial location of the user’s process in memory is not 0
- The relocation register (base register) points to the first location in the user’s partition. User’s logical addresses are adjusted by the hardware to produce the physical address. (Address binding delayed until execution time.)
- Relocation register value is static during program execution, hence all of the OS must be present (it might be used). Otherwise have to relocate user code/data “on the fly”! In other words we cannot have transient OS code.
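The hardware adjustment above amounts to adding the relocation register to every logical address; a sketch with an assumed base value:

```python
# Relocation-register sketch: binding is delayed to execution time because
# the base value is added on every reference, not baked into the code.

RELOCATION = 140000                 # assumed start of the user partition

def to_physical(logical):
    return RELOCATION + logical     # done in hardware on each reference

print(to_physical(0))               # 140000: first location of the partition
print(to_physical(346))             # 140346
```

Moving the partition only requires changing RELOCATION; the user's logical addresses are untouched.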
Single Partition Allocation
- How about memory references passed from the
user process to the OS (for example, blocks of
memory passed as an argument to an I/O routine)?
- The address must be translated from user’s logical
address space to the physical address space. Other
arguments don’t get translated (e.g., counts).
- Hence OS software has to handle these
translations.
Multiple-Partition Allocation
- Goal: allocate memory to multiple processes (which permits rapid switches, for example)
- Simple scheme: fixed-size partition
- memory divided into several partitions of fixed size
- each partition holds one process
- partition becomes free when process terminates; another process picked from the ready queue gets the free partition
- number of partitions bounds the degree of multiprogramming
- originally used in the IBM OS/360 operating system (MFT)
- No longer used
Multiple-Partition Allocation
Dynamic Partition
- Memory is partitioned dynamically
- Hole : block of available memory
- Holes of various size are scattered throughout memory
- Process still must occupy contiguous memory
- OS keeps a table listing which parts of memory are available
  - allocated partitions
  - free partitions (holes)
- When a process arrives, the OS searches for a part of memory that is large enough to hold the process. Allocates only the amount of needed memory.
Multiple-Partition Allocation
Dynamic Partition
[Figure: successive snapshots of memory under dynamic partitioning; the operating system occupies low memory, and arriving processes (with sizes such as 500K and 800K) are allocated just enough space, leaving holes of various sizes.]
Multiple-Partition Allocation
Dynamic Partition
[Figure, continued: a 600K request cannot be satisfied by the remaining holes; when p2 terminates, its partition is freed and p4 can be allocated, showing how holes open and close over time.]
Multiple-Partition Allocation
Dynamic Partition
- first fit algorithm: allocate the first hole that is big enough. Searching can start either at beginning of set of holes or where the previous first-fit search ended. We quit when we find a free hole that is large enough.
- best fit : allocate the smallest hole that is big enough. Must search entire list to find it if you don’t keep free list ordered by size.
- worst fit : allocate the largest hole. Again may need to search the entire free list if it is not ordered. Produces the largest leftover hole, which may be more likely to remain usable than the small leftover hole from best fit.
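The three policies can be sketched over a free list of (start, size) holes; the list and request sizes below are made up for illustration:

```python
# Placement policies for dynamic partitions. Each returns the index of the
# chosen hole in `holes`, or None if no hole is large enough.

def first_fit(holes, request):
    for i, (_, size) in enumerate(holes):
        if size >= request:
            return i                                # first adequate hole
    return None

def best_fit(holes, request):
    fits = [i for i, (_, s) in enumerate(holes) if s >= request]
    return min(fits, key=lambda i: holes[i][1], default=None)   # smallest

def worst_fit(holes, request):
    fits = [i for i, (_, s) in enumerate(holes) if s >= request]
    return max(fits, key=lambda i: holes[i][1], default=None)   # largest

holes = [(0, 100), (300, 500), (900, 200)]          # (start, size) pairs
print(first_fit(holes, 150))    # 1: the 500K hole is the first that fits
print(best_fit(holes, 150))     # 2: the 200K hole leaves the least left over
print(worst_fit(holes, 150))    # 1: the 500K hole leaves the largest remainder
```

Keeping the free list ordered by size would let best fit and worst fit stop early instead of scanning every hole.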
Multiple-Partition Allocation
Dynamic Partition
- Simulation shows that first-fit and best-fit are
better than worst-fit for time and storage use.
- First-fit is faster than best-fit
- First-fit and best-fit are similar in storage use.
- 50% rule--up to 1/3 of memory is lost to external
fragmentation in first-fit: with N allocated blocks, another
1/2 N blocks are lost to fragmentation, so the unusable
fraction is (1/2 N) / (N + 1/2 N) = 1/3
Multiple-Partition Allocation
Dynamic Partition
- General comments:
- memory protection is necessary to prevent processes from interfering with one another. This is effected by the limit register.
- base registers are required to point to the current partition
- In general, blocks are allocated in some quantum (e.g., a power of 2). There is no point in leaving space free if you can’t address it or if it is too small to be of any use at all. Also there is an expense in keeping track of free space (free list; traversing the list; etc.).
- This results in lost space--allocated but not required by process
- Internal fragmentation : difference between required memory and allocated memory.
- Internal fragmentation also results from estimation error and management overhead.
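Quantum-based allocation and the internal fragmentation it causes can be shown in a few lines; the 256-byte quantum is an assumption:

```python
# Round each request up to the allocation quantum; the difference between
# what was granted and what was asked for is internal fragmentation.

QUANTUM = 256                                      # assumed allocation unit

def allocate(request):
    granted = -(-request // QUANTUM) * QUANTUM     # ceiling to a multiple
    return granted, granted - request              # (granted, internal frag)

print(allocate(300))    # (512, 212): 212 bytes allocated but never used
print(allocate(512))    # (512, 0): an exact multiple wastes nothing
```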
External Fragmentation
- External fragmentation can be controlled with compaction.
- requires dynamic address binding (have to move pieces around)
- can be quite expensive in time
- some schemes try to control expense by only doing certain kinds of coalescing--e.g., on power of 2 boundary. (Topic of a data structures class.)
- OS approach can also be to roll out/roll in all processes, returning processes to new addresses--no additional code required!
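Compaction itself is simple to sketch once addresses can be rebound at run time; the partitions here are hypothetical (name, start, size) triples:

```python
# Compaction sketch: slide every allocated partition toward address 0 so
# the scattered holes coalesce into one large block at the end of memory.
# Each move changes a partition's base, which is why dynamic (run-time)
# address binding is a prerequisite.

def compact(partitions):
    """partitions: list of (name, start, size) triples; returns the relocated list."""
    next_free = 0
    moved = []
    for name, _, size in sorted(partitions, key=lambda p: p[1]):
        moved.append((name, next_free, size))      # new base address
        next_free += size                          # one hole remains beyond here
    return moved

parts = [("p1", 0, 100), ("p3", 300, 200), ("p5", 700, 50)]
print(compact(parts))   # [('p1', 0, 100), ('p3', 100, 200), ('p5', 300, 50)]
```

The cost is copying every moved partition's contents, which is why compaction can be expensive in time.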