
OPERATING SYSTEM IMPORTANT QUESTION ANSWER Part 4


for IGNOU BCA MCA Students



45. The Berkeley Fast Filesystem (and Linux Ext2fs) use the idea of block groups. Describe what this idea is and what improvements block groups have over the simple filesystem layout of the System V file system (s5fs).


The System V file system laid the disk out in four regions: the boot block, the super block, the inode array and the data blocks. This was inefficient, because seek times could be massive: inodes sit at the start of the disk while the actual data could be anywhere from the start to the end. There was also only a single super block (the block containing the attributes of the entire filesystem); if it was corrupted, the whole filesystem was lost. The Berkeley Fast Filesystem (and ext2) extended the System V layout by dividing the disk into block groups, all equally sized and each somewhat replicating the System V structure (aside from the boot block). Each group contains a group descriptor, a data block bitmap, an inode bitmap and an inode table. This solves the major problems with s5fs: keeping inode tables close to their data blocks exploits spatial locality, and a corrupted superblock no longer destroys the entire filesystem, since each group carries a redundant copy.


46. What is the reference count field in the inode? You should consider its relationship to directory entries in your answer.


The reference count field is a counter of how many times the inode is "referenced" by name. Adding a directory entry increments this counter. When the count falls to zero, (i.e. there are no longer any directory entries to that file), its inode and all corresponding disk blocks can be safely deallocated.
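This life cycle can be sketched in a few lines of Python. The class and function names here are invented for illustration; they are not a real filesystem API.

```python
# Hypothetical sketch: an inode's reference count driven by directory entries.
class Inode:
    def __init__(self):
        self.ref_count = 0    # number of directory entries naming this inode
        self.allocated = True

def link(inode):
    """Adding a directory entry (e.g. creating a hard link) increments the count."""
    inode.ref_count += 1

def unlink(inode):
    """Removing a directory entry decrements it; at zero the inode is freed."""
    inode.ref_count -= 1
    if inode.ref_count == 0:
        inode.allocated = False  # inode and its data blocks can be deallocated

ino = Inode()
link(ino)      # file created: first name added
link(ino)      # hard link added: second name
unlink(ino)    # one name removed; the file survives
unlink(ino)    # last name removed; storage can be reclaimed
print(ino.allocated)  # False
```

Note that the file's data is only reclaimed when the *last* name disappears, which is why deleting one hard link leaves the others working.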


47. The filesystem buffer cache does both buffering and caching. Describe why buffering is needed. Describe how buffering can improve performance (potentially to the detriment of file system robustness). Describe how the caching component of the buffer cache improves performance.


Buffering is required to ensure that performance does not degrade as a file is sequentially read or written, perhaps from different parts of the physical disk: many of the disk reads occur before the file is actually "read" by the user (and in the case of writes, the data is flushed to disk long after the individual "writes" occur). This minimises expensive disk I/O operations, because I/O occurs in sequential bursts, but possibly at the expense of robustness: if a power failure occurs, the buffer cache is lost along with any contents not yet written to disk. The caching component further improves performance by storing frequently used parts of files in fast memory, avoiding lengthy disk access times.


48. What does flushd do on a UNIX system?


flushd forces a write of the contents of the buffer cache to the hard disk every 30 seconds, limiting data loss on an unexpected OS termination.


49. Why might filesystems managing external storage devices do write-through caching (avoid buffering writes) even though there is a detrimental effect on performance?


Write-through caching is necessary on external drives in order to maintain reliability and avoid data loss in situations where the drive controller is compromised through an event (a kernel panic, power failure, or most commonly – simply being unplugged) where the buffer cache is lost. It is always much safer to have critical data written to physical disk blocks, despite the high cost of disk I/O operations.


50. Describe the difference between external and internal fragmentation. Indicate which of the two is most likely to be an issue on a) a simple memory management machine using base-limit registers and static partitioning, and b) a similar machine using dynamic partitioning.


External fragmentation refers to unusable free space outside of allocated partitions (the holes left between them), while internal fragmentation refers to wasted space inside a partition (any unused space within an allocated partition is lost). In situation a, internal fragmentation is likely to be the issue; in situation b, external fragmentation is likely to be the issue.


51. List and describe the four memory allocation algorithms covered in lectures. Which two of the four are more commonly used in practice?


The four memory allocation algorithms (in the scheme of dynamic partitioning placement) are:

First-Fit: walking the linked list of free memory regions, we place the data in the first hole that will fit it. Its aim is to minimise the amount of searching, but it tends to accumulate external fragmentation near the start of memory over time.

Next-Fit: similar to first-fit, but instead of searching from the beginning each time, it resumes from the point of the last successful allocation. This reduces the amount of searching and spreads fragmentation across memory rather than concentrating it at the start.

Worst-Fit: allocates from the largest available hole, on the theory that the leftover fragment will still be large enough to be usable. It must search the complete list and is thus a poor performer.

Best-Fit: searches the whole list for the smallest hole that fits the request, so the leftover fragment is as small as possible. However, the exhaustive search is slow, and the tiny leftovers are often unusable.

We most commonly use first-fit and next-fit in practice: they are easier to implement and faster, to boot.
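The four placement policies above can be sketched over a free list of (start, size) holes. This is an illustrative model, not a real allocator; the function names and the free-list representation are assumptions made for the example.

```python
# Sketch of the four dynamic-partitioning placement algorithms.
# A hole is a (start_address, size) tuple; each function returns the
# chosen hole, or None if no hole is large enough.
def first_fit(holes, size):
    return next((h for h in holes if h[1] >= size), None)

def next_fit(holes, size, last=[0]):          # resumes after the last success
    n = len(holes)
    for i in range(n):
        j = (last[0] + i) % n                 # scan circularly from last point
        if holes[j][1] >= size:
            last[0] = j
            return holes[j]
    return None

def best_fit(holes, size):                    # smallest hole that still fits
    fits = [h for h in holes if h[1] >= size]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, size):                   # largest hole: biggest leftover
    fits = [h for h in holes if h[1] >= size]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 5), (10, 20), (40, 8)]
print(first_fit(holes, 6))   # (10, 20): the first hole large enough
print(best_fit(holes, 6))    # (40, 8): the tightest fit, leftover of 2
print(worst_fit(holes, 6))   # (10, 20): leaves the largest usable leftover
```

Note how first-fit stops at the first match while best-fit and worst-fit must scan every hole, which is why the list-walking cost dominates their performance.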

52. Base-limit MMUs can support swapping. What is swapping? Can swapping permit an application requiring 16M memory to run on a machine with 8M of RAM?


Swapping is the process of saving memory contents belonging to a particular process to a backing store (most commonly a fast and large disk) temporarily and reloading it for continued execution. Unfortunately, it does not permit an application requiring 16MiB of memory to run on a system with less RAM, as the program requires the full contents of its memory to be in the physical memory at any one time.


53. Describe page-based virtual memory. You should consider pages, frames, page tables, and Memory Management Units in your answer.


Page-based virtual memory divides physical memory into fixed-size frames, while each process owns a virtual address space divided into pages of the same size as the frames. The page table records which frame (if any) currently holds each page, and the Memory Management Unit (MMU) consults it to translate every virtual address into a physical one. Because a flat table for a large address space would be huge, page tables are commonly structured as multi-level or inverted tables.
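The translation step itself is simple arithmetic. The sketch below assumes 4 KiB pages and a flat, single-level page table purely for clarity; real MMUs do this in hardware with multi-level structures.

```python
# Minimal model of what an MMU does on each memory access.
PAGE_SIZE = 4096                    # assumed page/frame size: 4 KiB

page_table = {0: 7, 1: 3, 2: 9}     # page number -> frame number

def translate(vaddr):
    page   = vaddr // PAGE_SIZE     # which page of the virtual address space
    offset = vaddr %  PAGE_SIZE     # position within that page
    frame  = page_table[page]       # page-table lookup (a miss -> page fault)
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 3*4096 + 4 = 12292
```

The offset passes through unchanged; only the page number is remapped, which is why pages and frames must be the same size.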


54. Give some advantages of a system with page-based virtual memory compared to a simple system with base-limit registers that implements swapping.


Page-based virtual memory gives every process its own address space of up to 2^(address bits) bytes, and a program needs only the pages it is actually using resident at any one point, so it can be larger than physical RAM. Physical memory need not be contiguous, there is no external fragmentation and only minimal internal fragmentation, and sharing is possible (several pages can map to the same frame).


55. Describe segmentation-based virtual memory. You should consider the components of a memory address, the segment table and its contents, and how the final physical address is formed in your answer.


Segmentation-based virtual memory is a scheme supporting the user's view of memory: a program is divided into segments such as code, stack, heap and so on. A segmented memory address consists of a segment number and an offset within that segment. Each segment-table entry holds a base (the starting physical address where the segment resides) and a limit (the length of the segment). The final physical address is formed by adding the offset to the segment's base, after checking that the offset does not exceed the limit.
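That base-plus-offset translation, with its limit check, can be modelled in a few lines. The segment-table contents here are made-up example values.

```python
# Sketch of segment translation: each entry holds (base, limit);
# the physical address is base + offset, guarded by a limit check.
segment_table = [
    (1000, 500),   # segment 0: starts at physical address 1000, 500 bytes long
    (4000, 100),   # segment 1: starts at 4000, 100 bytes long
]

def translate(seg, offset):
    base, limit = segment_table[seg]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

print(translate(0, 42))   # 1000 + 42 = 1042
```

The limit check is what turns a stray pointer into a segmentation fault instead of silent corruption of a neighbouring segment.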

