WHAT IS PAGING
The paging scheme solves problems faced with variable-sized partitions, most notably external fragmentation. Its principles of operation are discussed in the next section.
Principles of Operation
In a paged system, logical memory is divided into a number of fixed-size 'chunks' called pages. Physical memory is likewise pre-divided into blocks of the same fixed size, called page frames. Page sizes (and hence frame sizes) are always powers of 2, typically ranging from 512 bytes to 8192 bytes per page. The reason is that the paging mechanism is implemented using a page number and a page offset, as discussed in detail in the following sections.
Logical Memory (pages): Page 0, Page 1, Page 2, Page 3, Page 4

    Page Table
    Page No.   Frame No.
       0          1
       1          3
       2          6
       3          4
       4          2

    Physical Memory (frames)
    Frame 1: Page 0
    Frame 2: Page 4
    Frame 3: Page 1
    Frame 4: Page 3
    Frame 6: Page 2

Figure 10: Principle of operation of paging
Each process page is loaded into some memory frame. These pages can be loaded into contiguous frames, or into noncontiguous frames, as shown in Figure 10. External fragmentation is alleviated because any free frame can hold any page.
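The translation implied by Figure 10 can be sketched in Python. The page-to-frame mapping follows the figure; the 1024-byte page size is an assumed illustrative value, not one given in the text.

```python
# Page table from Figure 10: logical page -> physical frame
page_table = {0: 1, 1: 3, 2: 6, 3: 4, 4: 2}

PAGE_SIZE = 1024  # assumed illustrative page size (a power of 2)

def translate(logical_addr):
    """Translate a logical address to a physical address."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]       # raises KeyError for an invalid page
    return frame * PAGE_SIZE + offset

# Logical address 2100 lies in page 2 (offset 52); page 2 is in frame 6.
print(translate(2100))  # 6*1024 + 52 = 6196
```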
Page Allocation
In variable-sized partitioning of memory, every time a process of size n is to be loaded, the best location must be selected from the list of available/free holes. This dynamic storage allocation is necessary to increase the efficiency and throughput of the system. The most commonly used strategies for making this selection are:
1) Best-fit Policy: Allocating the hole in which the process fits most "tightly" i.e., the difference between the hole size and the process size is the minimum one.
2) First-fit Policy: Allocating the first available hole (according to memory order), which is big enough to accommodate the new process.
3) Worst-fit Policy: Allocating the largest hole, i.e., the one that leaves the maximum amount of unused (leftover) space after allocation.
Now the question arises: which strategy should be used? In practice, best-fit and first-fit perform better than worst-fit, and both are efficient in terms of time and storage requirements. Best-fit minimises leftover space, but it creates very small holes that can rarely be reused. First-fit, on the other hand, requires the least implementation overhead because of its simplicity. Worst-fit sometimes leaves holes large enough to accommodate other processes. Thus all these policies have their own merits and demerits.
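The three policies can be sketched in Python. Holes are represented here as hypothetical (start, size) pairs; the values and function names are illustrative.

```python
def best_fit(holes, size):
    """Smallest hole that fits (minimises leftover space)."""
    fits = [h for h in holes if h[1] >= size]
    return min(fits, key=lambda h: h[1]) if fits else None

def first_fit(holes, size):
    """First hole (in memory order) big enough for the process."""
    for hole in holes:
        if hole[1] >= size:
            return hole
    return None

def worst_fit(holes, size):
    """Largest hole (maximises leftover space)."""
    fits = [h for h in holes if h[1] >= size]
    return max(fits, key=lambda h: h[1]) if fits else None

# Free holes as (start_address, size) pairs -- illustrative values
holes = [(0, 300), (500, 120), (900, 700)]
print(best_fit(holes, 100))   # (500, 120)
print(first_fit(holes, 100))  # (0, 300)
print(worst_fit(holes, 100))  # (900, 700)
```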
Hardware Support for Paging
Every logical address in the paging scheme is divided into two parts:
1) A page number (p) in logical address space
2) The displacement (or offset) in page p at which item resides (i.e., from start of page).
This is known as Address Translation scheme. For example, a 16-bit address can be divided as given in Figure below:
    15         11 10                  0
    +------------+--------------------+
    |   00110    |    00000101010     |
    +------------+--------------------+
     Page No. (p)   Displacement (d)
Here, the page number takes 5 bits, so its range of values is 0 to 31 (i.e., 2^5 - 1). Similarly, the offset uses 11 bits, so its range is 0 to 2047 (i.e., 2^11 - 1). Summarising, this paging scheme supports 32 pages, each with 2048 locations.
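The split can be computed with a shift and a mask; a minimal Python sketch using the 16-bit address from the figure above:

```python
OFFSET_BITS = 11
addr = 0b0011000000101010   # 16-bit logical address from the figure

page = addr >> OFFSET_BITS               # top 5 bits
offset = addr & ((1 << OFFSET_BITS) - 1) # low 11 bits
print(page, offset)  # 6 42
```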
The table that holds virtual-address-to-physical-address translations is called the page table. As the displacement is constant, only the translation of the virtual page number to the physical frame number is required. This can be seen diagrammatically in Figure 11.
Figure 11: Address Translation scheme
The page number is used as an index into the page table, and the table contains the base address of the corresponding physical memory frame. This reduces dynamic relocation effort. The paging hardware support is shown diagrammatically in Figure 12:
Figure 12: Paging hardware support
Paging Address Translation by Direct Mapping
This is the case of direct mapping, as the page table maps directly to the physical memory page, as shown in Figure 12. The disadvantage of this scheme is its translation speed: the page table is kept in primary storage and can be considerably large, which increases instruction execution time (and access time) and hence decreases system speed. To overcome this, additional hardware support in the form of registers and buffers can be used, as explained in the next section.
Paging Address Translation with Associative Mapping
This scheme is based on the use of dedicated registers with high speed and efficiency. These small, fast-lookup caches hold part of the page table in content-addressed associative storage, speeding up the lookup. They are known as associative registers or Translation Look-aside Buffers (TLBs). Each register consists of two entries:
1) Key, which is matched against the logical page number p.
2) Value, which returns the page frame number corresponding to p.
It is similar to the direct-mapping scheme, but because the TLB contains only a few page table entries, the search is fast. It is, however, quite expensive due to the register support, so direct and associative mapping are often combined to get the benefits of both. Here, the page number is matched against all associative registers simultaneously. The percentage of times a page is found in the TLB is called the hit ratio. If the page is not found there, it is looked up in the page table and then added to the TLB; since the number of TLB entries is limited, a replacement policy is used when the TLB is already full. This combined scheme is shown in Figure 13.
Figure 13: Combined Associative/ Direct Mapping
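A rough Python sketch of the combined lookup: consult the TLB first, fall back to the full page table on a miss, and install the translation. The page table contents, TLB size, and FIFO replacement policy here are illustrative assumptions, not details fixed by the text.

```python
from collections import OrderedDict

page_table = {0: 1, 1: 3, 2: 6, 3: 4, 4: 2}  # full table in main memory
TLB_SIZE = 2
tlb = OrderedDict()   # small associative store: page -> frame
hits = misses = 0

def lookup(page):
    """Return the frame for `page`, consulting the TLB first."""
    global hits, misses
    if page in tlb:
        hits += 1
        return tlb[page]
    misses += 1
    frame = page_table[page]      # slow path: walk the page table
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)   # FIFO replacement when the TLB is full
    tlb[page] = frame
    return frame

for p in [0, 0, 1, 0, 2]:
    lookup(p)
print(hits, misses, hits / (hits + misses))  # hit ratio over 5 accesses
```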
Protection and Sharing
Paging hardware typically also provides a protection mechanism: a protection bit is associated with each entry in the page table. This bit can tell whether the page is read-only or read-write. Sharing of code and data takes place when page table entries in two different processes point to the same physical page; the processes then share that memory. If one process writes the data, the other process sees the changes. It is a very efficient way to communicate, but sharing must also be controlled so that one process cannot improperly modify or access another process's data. For this, programs are kept separated into procedures and data, where non-modifiable procedures and data can be shared. Non-modifiable procedures are also known as pure procedures or reentrant code (they cannot change during execution); reentrant code cannot modify itself and must ensure that it has a separate per-process copy of global variables. Modifiable data and procedures cannot be shared without concurrency controls. For example, only one copy of an editor's or compiler's code need be kept in memory, and all editor or compiler processes can execute that single copy. This improves memory utilisation. The major advantages of the paging scheme are:
1) The virtual address space can be greater than the main memory size, i.e., a program with a logical address space larger than the physical address space can still execute.
2) It avoids external fragmentation and hence the need for storage compaction.
3) Full utilisation of available main storage.
The disadvantages of paging include internal fragmentation, i.e., wastage within an allocated page when the process does not fill its last page up to the page boundary. There is also extra resource consumption and overhead for the paging hardware and for virtual-to-physical address translation.
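Internal fragmentation for a given process and page size can be computed directly; a small Python sketch with illustrative sizes:

```python
import math

def internal_fragmentation(process_size, page_size):
    """Bytes wasted in the last, partially filled page."""
    pages = math.ceil(process_size / page_size)
    return pages * page_size - process_size

# A 72,766-byte process with 2,048-byte pages needs 36 pages;
# the 36th page is only partly used.
print(internal_fragmentation(72766, 2048))  # 962 bytes wasted
```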
WHAT IS SEGMENTATION
In the earlier section we saw the memory management scheme called paging. In general, a user or programmer prefers to view system memory as a collection of variable-sized segments rather than as a linear array of words. Segmentation is a memory management scheme that supports this view of memory.
Principles of Operation
Segmentation presents an alternative scheme for memory management. It divides the logical address space into variable-length chunks, called segments, with no necessary ordering among them. Each segment has a name and a length; for simplicity, segments are referred to by a segment number rather than by a name. Thus a logical address is expressed as a pair: a segment number and an offset within the segment. This allows a program to be broken down into logical parts according to the user's view of memory, which is then mapped into physical memory. Though logical addresses are two-dimensional, actual physical addresses are still a one-dimensional array of bytes.
Address Translation
The mapping between the two is done by the segment table, which contains the segment base and its limit. The segment base holds the starting physical address of the segment, and the segment limit gives the length of the segment. This scheme is depicted in Figure 14.
Figure 14: Address translation in segmentation
The offset d must lie between 0 and the segment limit/length; otherwise an addressing error is generated. For example, consider the situation shown in Figure 15.
Figure 15: Example of segmentation address translation
This scheme is similar to the variable-partition allocation method, with the improvement that the process is divided into parts. For fast retrieval, registers can be used as in the paged scheme; one such register is the segment-table length register (STLR). The segments correspond to logical divisions of the process as defined by the programmer. Translation proceeds as follows: first extract the segment number and offset from the logical address; then use the segment number as an index into the segment table to obtain the segment's base address and limit/length; check that the offset is not greater than the limit given in the segment table; finally, obtain the physical address by adding the offset to the base address.
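These translation steps can be sketched in Python; the segment table's (base, limit) values below are illustrative, not taken from the text.

```python
# Segment table: segment number -> (base, limit) -- illustrative values
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate(segment, offset):
    """Map a (segment, offset) logical address to a physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:   # offset must lie within the segment
        raise MemoryError("address error: offset exceeds segment limit")
    return base + offset

# Offset 53 in segment 2: within the 400-byte limit, so valid.
print(translate(2, 53))  # 4300 + 53 = 4353
```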
Protection and Sharing
This method also allows segments that are read-only to be shared, so that two processes can use shared code for better memory efficiency. The implementation is such that no program can read from or write to segments belonging to another program, except for segments that have been explicitly set up to be shared. A protection bit in each segment-table entry can specify a segment as read-only or execute-only; illegal attempts to write into a read-only segment can thus be prevented.
Sharing of segments is done by making common entries in the segment tables of two different processes that point to the same physical location. Segmentation may suffer from external fragmentation, i.e., blocks of free memory may not be large enough to accommodate a segment. Storage compaction and coalescing can minimise this drawback.
Check your progress 2
1) What is the advantage of using Base and Limit registers?
They help in dynamic relocation and make it easy to move a job in memory.
2) How does lookup work with TLB's?
With TLB support, first determine the page number and offset, then look up the page number in the TLB. If it is there, add the offset to the physical frame number and access the memory location. Otherwise, trap to the OS; the OS performs a check, looks up the physical frame number, loads the translation into the TLB, and then restarts the instruction.
3) Why is page size always powers of 2?
Paging is implemented by breaking an address into a page number and an offset. It is most efficient to split the address into X page bits and Y offset bits, rather than computing the page number and offset by division. Because each bit position represents a power of 2, splitting an address between bit positions results in a page size that is a power of 2.
4) A system with an 18-bit address uses 6 bits for the page number and the next 12 bits for the offset. Compute the total number of pages and express the following address according to the paging scheme: 001011 (page number) and 000000111000 (offset).
Page number (6 bits) = 001011 = 11 (decimal)
Page number range = 0 to (2^6 - 1) = 0 to 63, so the total number of pages = 2^6 = 64
Displacement (12 bits) = 000000111000 = 56 (decimal)
Thus the address corresponds to page 11, offset 56.
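This answer can be checked with a quick Python sketch (variable names are illustrative):

```python
PAGE_BITS, OFFSET_BITS = 6, 12
total_pages = 2 ** PAGE_BITS   # 64 pages in the 18-bit address space

page = 0b001011                # 6-bit page number field
offset = 0b000000111000        # 12-bit offset field
print(total_pages, page, offset)  # 64 11 56
```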