OPERATING SYSTEM IMPORTANT QUESTION ANSWER PART 5
for IGNOU BCA MCA Students
56. What is a translation look-aside buffer? What does each entry contain?
A translation look-aside buffer (TLB) is a high-speed cache of recently used page table entries (implemented either in software or as part of the physical MMU). On MIPS, each entry consists of two parts, EntryHi and EntryLo. EntryHi contains a virtual page number and, on some architectures, an ASID. EntryLo contains the corresponding physical frame number and a number of status bits signifying whether the entry is 'dirty', 'valid' and so on.
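The entry layout described above can be modelled in a few lines. This is a hypothetical sketch, not a real hardware interface: the field names follow the MIPS EntryHi/EntryLo convention, and the 4 KiB page size is an assumption.

```python
# Software model of a TLB: each entry pairs an "EntryHi"-like key
# (virtual page number + ASID) with an "EntryLo"-like value
# (physical frame number + dirty bit). Illustrative only.

class TLB:
    def __init__(self):
        # (virtual page number, ASID) -> (physical frame number, dirty bit)
        self.entries = {}

    def insert(self, vpn, asid, pfn, dirty=False):
        self.entries[(vpn, asid)] = (pfn, dirty)

    def lookup(self, vaddr, asid, page_size=4096):
        vpn, offset = divmod(vaddr, page_size)
        hit = self.entries.get((vpn, asid))
        if hit is None:
            return None  # TLB miss: the page table must be walked instead
        pfn, _dirty = hit
        return pfn * page_size + offset

tlb = TLB()
tlb.insert(vpn=5, asid=1, pfn=42)
assert tlb.lookup(5 * 4096 + 12, asid=1) == 42 * 4096 + 12
assert tlb.lookup(5 * 4096 + 12, asid=2) is None  # different ASID: miss
```

Note how the final lookup misses purely because the ASID differs, which is exactly the point of question 57.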
57. Some TLBs support address space identifiers (ASIDS), why?
Because TLB entries are process-specific: a virtual page number is only meaningful within one address space. Without ASIDs, the TLB must be flushed (all entries invalidated) on every context switch, which is an expensive operation; on a system that context-switches frequently, this can hurt performance so badly that the TLB becomes almost useless. With ASIDs, entries belonging to several processes can coexist in the TLB and only the matching ones are used, so a multitasking operating system performs better.
58. Describe a two-level page table. How does it compare to a simple page table array?
A two-level page table is a nested page table structure. The virtual address is split into a top-level index, a second-level index and a page offset; the top-level table points to second-level tables, and the frame number found there is concatenated with the offset to form the physical address. Second-level tables covering entirely unmapped regions are simply left unallocated, which saves a substantial amount of memory compared to a single flat array that must cover the whole virtual address space.
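The base-and-offset arithmetic can be sketched directly. This assumes a 32-bit address split 10/10/12 (top-level index / second-level index / page offset), as on classic x86 without PAE; the split is an illustrative assumption, not universal.

```python
# Two-level page table walk for a 32-bit virtual address split 10/10/12.
PAGE_SIZE = 4096  # 12 offset bits

def translate(vaddr, top_level):
    top_idx    = (vaddr >> 22) & 0x3FF   # bits 31..22
    second_idx = (vaddr >> 12) & 0x3FF   # bits 21..12
    offset     = vaddr & 0xFFF           # bits 11..0
    second_level = top_level[top_idx]
    if second_level is None:
        raise LookupError("unmapped region: no second-level table allocated")
    frame = second_level[second_idx]
    if frame is None:
        raise LookupError("page fault: page not mapped")
    return frame * PAGE_SIZE + offset    # concatenate frame number and offset

# 1024 top-level slots, but only ONE second-level table actually allocated --
# this is where the memory saving over a flat array comes from.
top = [None] * 1024
second = [None] * 1024
second[3] = 7            # virtual page (0, 3) -> physical frame 7
top[0] = second

assert translate((3 << 12) | 0x2A, top) == 7 * PAGE_SIZE + 0x2A
```

A flat array for the same 32-bit space would need 2^20 entries up front; here, 1023 of the 1024 second-level tables were never allocated.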
59. What is an inverted page table? How does it compare to a two-level page table?
An inverted page table is an array with one entry per physical frame, recording which (process, virtual page) pair currently occupies that frame; hashing on the virtual page number (and process ID) is used to find the right entry quickly. Because it grows with the size of RAM rather than with the virtual address space, it saves a vast amount of space compared to a two-level page table, particularly on 64-bit systems.
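A minimal sketch of the idea, using a Python dict as the hash step (sizes and names are made up for illustration):

```python
# Inverted page table: one slot per physical frame, holding which
# (process, virtual page) currently occupies it. A hash table provides
# the fast (pid, vpn) -> frame lookup.

NUM_FRAMES = 8

frames = [None] * NUM_FRAMES   # frame -> (pid, vpn); size grows with RAM
index = {}                     # (pid, vpn) -> frame; the hashing part

def map_page(pid, vpn, frame):
    frames[frame] = (pid, vpn)
    index[(pid, vpn)] = frame

def translate(pid, vpn):
    return index.get((pid, vpn))   # None means page fault

map_page(pid=1, vpn=100, frame=3)
assert translate(1, 100) == 3
assert translate(2, 100) is None   # another process has its own mappings
```

The `frames` array is all there is per frame of RAM, however huge the virtual address space; a per-process forward table would instead scale with the address space itself.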
60. What are temporal locality and spatial locality?
Temporal locality is the concept (most commonly referred to in the subject of caching) that if you access a piece of data at any one time, you are likely to want to access that same piece again soon. Spatial locality, by contrast, is the concept that if you access a given piece of data, you are likely to want to access its neighbouring data.
61. What is the working set of a process?
The working set of a process is the set of pages/segments it has accessed during a recent time window (delta). It typically includes the current top of the stack, active areas of the heap, the current code segment and shared libraries in use.
62. How does page size of a particular architecture affect working set size?
The larger the pages, the more irrelevant data gets pulled in alongside the data actually referenced, vastly increasing the working set size for no good reason. If pages are smaller, the resident pages more accurately reflect the memory the process is actually using.
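A quick back-of-envelope calculation makes this concrete: each memory region occupies a whole number of pages, so on average about half a page per region is wasted (internal fragmentation), and larger pages inflate the resident footprint of the same data. The region sizes below are made up.

```python
# Resident footprint of the same regions under different page sizes.
def resident_bytes(region_sizes, page_size):
    # Each region must occupy a whole number of pages (ceiling division).
    return sum(-(-size // page_size) * page_size for size in region_sizes)

regions = [100, 5000, 300, 20000]        # hypothetical stack/heap/code areas
small = resident_bytes(regions, 1024)    # 27,648 bytes resident
large = resident_bytes(regions, 16384)   # 81,920 bytes resident
assert small < large                     # same data, far bigger footprint
```

The data is identical in both cases; only the page granularity changed.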
63. What is thrashing? How might it be detected? How might one recover from it once detected?
Thrashing is the phenomenon that occurs when the sum of all working set sizes becomes greater than the available physical memory. Pages are continually evicted and faulted straight back in, so the CPU spends most of its time servicing page faults rather than executing useful instructions, and productivity drops. It can be detected by monitoring the page fault rate or the working set sizes against a threshold, and recovery is as simple as suspending processes until the total working set size again fits in memory.
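The suspend-until-it-fits recovery can be sketched in a few lines. This is one possible heuristic (suspend the largest consumers first); the process names and working set sizes are made up.

```python
# Suspend processes until the remaining working sets fit in physical memory.
def suspend_until_fit(working_sets, total_frames):
    active = dict(working_sets)          # pid -> working set size in frames
    suspended = []
    while sum(active.values()) > total_frames:
        victim = max(active, key=active.get)   # biggest consumer first
        suspended.append(victim)               # swap it out entirely
        del active[victim]
    return active, suspended

active, suspended = suspend_until_fit({"A": 50, "B": 120, "C": 40},
                                      total_frames=100)
assert suspended == ["B"]                # suspending B makes the rest fit
assert sum(active.values()) <= 100
```

Other victim-selection policies (lowest priority, most recently started) work the same way; the invariant that matters is the final `sum <= total_frames`.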
64. Enumerate some pros and cons for increasing the page size.
Pros:
- Reduce total page table size, freeing some memory (hey, it uses up memory too!)
- Increase TLB coverage
- Increase swapping I/O throughput
Cons:
- Increase page fault latency (more data to transfer per fault)
- Increase internal fragmentation of pages
65. Describe two virtual memory page fetch policies. Which is less common in practice? Why?
Demand Paging – relevant pages are loaded as page faults occur
Pre-paging – try to load pages for processes before they are accessed
Pre-paging is less common in practice because it wastes bandwidth when its guesses are wrong (and they often are), and wastes even more if it evicts known-good pages to make room for speculative ones!
66. What operating system event might we observe and use as input to an algorithm that decides how many frames an application receives (i.e. an algorithm that determines the application's resident set size)?
This was not covered in the 2013 course. (The usual answer is the page-fault rate: a page fault frequency algorithm grows a process's resident set when it faults too often and shrinks it when it faults rarely.)
67. Name and describe four page replacement algorithms. Critically compare them with each other.
OPTIMAL: Impossible to achieve, but perfect. Works on the Nostradamus-like basis that we can predict which page won't be used for the longest time, and elect that that one should be replaced.
LEAST RECENTLY USED: Most commonly used, not quite optimal but better than the rest. We mark every page with a timestamp and the one which has been least recently accessed gets the flick. However we need to account for the extra overhead of adding/reading the time stamp, and who knows, the next page we want to access might just be the one we've sent back to the aether. It's quite a performer though.
CLOCK: Each page is marked with a 'usage' bit, and each is given a 'second chance' upon page replacement time. The one that becomes unmarked first disappears. Not quite as accurate as LRU, but has less of an overhead (one extra bit, as opposed to many for the timestamp).
FIFO: The simplest and dodgiest option: we toss the oldest page in the queue, on the foolish assumption that the oldest page is the least useful one. This is not always the case.
68. Describe buffering in the I/O subsystem of an operating system. Give reasons why it is required, and give a case where it is an advantage, and a case where it is a disadvantage.
Buffering is the technique the I/O subsystem uses to transfer data efficiently between devices and the operating system. Instead of directly transferring data bit-by-bit (literally) via system calls and registers, which is slow and requires context switch after context switch, we fill a buffer (or two, or three) with data, which is periodically transferred to or from main program control. We can also perform double or triple buffering, where two-way efficiency is gained. It is advantageous when we need to transfer large volumes of data, yet may be disadvantageous due to overheads – if the network is high-speed, the time to copy between buffers might be comparable to the time spent actually transferring the data!
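The double-buffering pattern mentioned above can be sketched sequentially: while the "device" fills one buffer, the "application" drains the other, and the two swap roles each cycle. The buffer size is an arbitrary illustrative choice, and real implementations overlap the two halves in time rather than interleaving them in one loop.

```python
# Toy double-buffering sketch: two buffers alternate between the
# "fill" role (device side) and the "drain" role (application side).
BUF_SIZE = 4

def double_buffered_copy(data):
    buffers = [[], []]
    out = []
    filling = 0
    for i in range(0, len(data), BUF_SIZE):
        buffers[filling] = list(data[i:i + BUF_SIZE])  # device fills one buffer
        out.extend(buffers[1 - filling])               # app drains the other
        filling = 1 - filling                          # swap roles each cycle
    out.extend(buffers[1 - filling])                   # drain the final buffer
    return out

assert double_buffered_copy(list(range(10))) == list(range(10))
```

The payoff in a real system is that the fill and drain steps run concurrently, so neither side sits idle waiting for a full buffer cycle.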