OPERATING SYSTEM IMPORTANT QUESTIONS AND ANSWERS PART 6
for IGNOU BCA MCA Students
69. Device controllers are generally becoming more complex in the functionality they provide (e.g. think about the difference between implementing a serial port with a flip-flop controlled by the CPU and a multi-gigabit network adapter with the TCP/IP stack on the card itself). What effect might this have on the operating system and system performance?
Device controllers' improved functionality can substantially increase overall system performance, because tasks are offloaded from the CPU (think of the difference between running 3D games with and without a dedicated GPU). However, the operating system may lack support for the controller's added functionality, or the potential gains may be limited by slow interfaces between the controller and the rest of the system.
70. Compare I/O based on polling with interrupt-driven I/O. In what situation would you favour one technique over the other?
Polling means the CPU continually spends cycles checking whether any I/O has occurred. The CPU must busy-wait instead of working on other tasks so as not to miss any I/O. Interrupt-driven I/O lets the CPU work on other tasks and handle requests on demand, but each request requires a context switch into the interrupt handler. We normally favour interrupt-driven I/O for human interface devices, which are far slower than the processing of their data, so a few hundred interrupts per second do not matter. We might favour polling for high-bandwidth raw data transfer, where millions of context switches per second would slow things down.
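To make the trade-off concrete, below is a minimal Python sketch contrasting the two styles against a toy in-process "device". The ToyDevice class and its data_ready()/read()/on_data() methods are purely illustrative stand-ins for controller status registers and an interrupt line, not any real driver API.

import queue
import threading
import time

class ToyDevice:
    """Toy device: delivers one byte every 10 ms; supports polling and a callback."""
    def __init__(self):
        self._buf = queue.Queue()
        self._callback = None
        threading.Thread(target=self._produce, daemon=True).start()

    def _produce(self):
        for b in b"hello":
            time.sleep(0.01)
            self._buf.put(b)
            if self._callback:
                self._callback()          # "raise an interrupt"

    def data_ready(self):                 # pollable status flag
        return not self._buf.empty()

    def read(self):
        return self._buf.get()

    def on_data(self, callback):          # register an "interrupt handler"
        self._callback = callback

def polled_io(dev, count):
    """Polling: spin on data_ready(), burning CPU even while the device is idle."""
    got = []
    while len(got) < count:
        if dev.data_ready():              # busy-wait: low latency, high CPU cost
            got.append(dev.read())
    return got

def interrupt_io(dev, count):
    """Interrupt-driven: register a handler, then block until the work is done."""
    got, done = [], threading.Event()
    def handler():                        # analogous to an interrupt service routine
        got.append(dev.read())            # each invocation costs a context switch
        if len(got) == count:
            done.set()
    dev.on_data(handler)
    done.wait()                           # the "CPU" is free for other work meanwhile
    return got

print(bytes(polled_io(ToyDevice(), 5)))     # b'hello'
print(bytes(interrupt_io(ToyDevice(), 5)))  # b'hello'

Both approaches receive the same data; the difference is that polled_io keeps a core spinning the whole time, while interrupt_io sleeps between arrivals.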
71. Explain how the producer-consumer problem is relevant to operating system I/O.
The buffering the OS performs on input and output is a real-life instance of the producer-consumer problem. The side filling a buffer acts as the producer, while the OS or device draining the data becomes the consumer, both bounded by the buffer's limited capacity.
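A minimal sketch of that relationship, using a fixed-capacity queue as a stand-in for an OS I/O buffer (the capacity and block values are made up):

import queue
import threading

buf = queue.Queue(maxsize=4)              # the buffer's limited capacity

def producer():
    """Stands in for the side filling the buffer (e.g. a device delivering input)."""
    for block in range(10):
        buf.put(block)                    # blocks while the buffer is full
    buf.put(None)                         # sentinel: no more data

def consumer():
    """Stands in for the side draining the buffer (e.g. the OS handing data on)."""
    while True:
        block = buf.get()                 # blocks while the buffer is empty
        if block is None:
            break
        print("consumed block", block)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()

The put() and get() calls block exactly where the classic producer-consumer problem requires: a full buffer stalls the producer, an empty one stalls the consumer.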
72. What is disk interleaving? What problem is it trying to solve?
Disk interleaving is the practice of placing logically sequential data into non-adjacent physical sectors. It was used to compensate for the timing gap between reading a sector off the platter and getting that data out of the controller, so the next logical sector had not already passed under the head. Because controller buffers have become more than sufficient in recent times, interleaving has fallen out of use.
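As an illustration, the sketch below (with a made-up interleave factor and track size) shows how logical sectors can be laid out non-adjacently around a track, giving the controller one sector's worth of rotation time to dispose of the data it just read:

def interleave_layout(sectors_per_track, factor):
    """Place logical sector i at physical slot (i * factor) % sectors_per_track.
    The mapping is only a clean one-to-one layout when factor and
    sectors_per_track share no common divisor."""
    layout = [None] * sectors_per_track
    for logical in range(sectors_per_track):
        layout[(logical * factor) % sectors_per_track] = logical
    return layout

print(interleave_layout(9, 1))   # [0, 1, 2, 3, 4, 5, 6, 7, 8]  -> no interleaving
print(interleave_layout(9, 2))   # [0, 5, 1, 6, 2, 7, 3, 8, 4]  -> 2:1 interleave

With the 2:1 layout, one unrelated sector passes under the head between consecutive logical sectors, which is exactly the breathing room a slow controller needed.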
73. What is cylinder skew? What problem is it trying to solve?
Not covered in the 2013 COMP3231 course. (Briefly: cylinder skew offsets the starting sector of each successive track so that, after the head finishes seeking to the adjacent track, the next logical sector is just arriving under the head rather than having just passed it, avoiding the wait for an extra full rotation.)
74. Name four disk-arm scheduling algorithms. Outline the basic algorithm for each.
FIFO: Processes requests strictly in the order they arrive. With many competing processes the request stream effectively becomes random, so performance is poor, but it avoids starvation.
Shortest Seek Time First (SSTF): From the pending requests, always service the one that minimises seek time from the current head position. Performance is excellent, but requests far from the head are susceptible to starvation.
Elevator (SCAN): The head moves in one direction, servicing requests in track order until it reaches the last one in that direction, then reverses. Performance is good, not quite as good as SSTF, but it avoids starvation.
Modified Elevator (Circular SCAN): Similar to the elevator, but requests are serviced in only one direction; instead of scanning back down, the head returns to the first track and starts again. This gives better locality for sequential reads and reduces the maximum delay to read a particular sector. (A small comparison of these orderings follows below.)
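Here is a small sketch comparing the service order each policy produces for the same pending request queue; the track numbers are made up and the model considers only track position, ignoring rotation.

def fifo(requests, head):
    """Process requests strictly in arrival order."""
    return list(requests)

def sstf(requests, head):
    """Shortest Seek Time First: always pick the pending track nearest the head."""
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda track: abs(track - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

def scan(requests, head):
    """Elevator: sweep upward servicing requests in order, then sweep back down."""
    up = sorted(t for t in requests if t >= head)
    down = sorted((t for t in requests if t < head), reverse=True)
    return up + down

def cscan(requests, head):
    """Circular SCAN: service only on the upward sweep, then jump back and repeat."""
    up = sorted(t for t in requests if t >= head)
    wrapped = sorted(t for t in requests if t < head)
    return up + wrapped

pending = [98, 183, 37, 122, 14, 124, 65, 67]   # made-up pending track requests
for policy in (fifo, sstf, scan, cscan):
    print(policy.__name__, policy(pending, head=53))

Running it shows SSTF hopping to whatever is closest (65, 67, 37, 14, ...) while SCAN and C-SCAN sweep in track order, which is why they trade a little seek time for freedom from starvation.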
75. Why is it generally correct to favour I/O bound processes over CPU-bound processes?
We favour I/O-bound processes over CPU-bound processes because I/O devices are generally many orders of magnitude slower than the CPU, so delaying a CPU-bound process to service a small amount of I/O is hardly a problem. Choosing to run a CPU-bound process ahead of an I/O-bound one, however, delays the next I/O request hugely and leaves the slow device idle when it could already be working.
76. What is the difference between preemptive scheduling and non-preemptive scheduling? What is the issue with the latter?
Preemptive scheduling is based on timer interrupts: a running thread may be interrupted by the OS and switched to the ready state at will (usually because something more important has arrived) or when it has exceeded its timing allocation. Non-preemptive scheduling means that once a thread is in the running state, it continues until it completes or gives up the CPU voluntarily. The issue with the latter is that a thread which never completes or yields can monopolise the CPU, starving every other thread.
77. Describe round robin scheduling. What is the parameter associated with the scheduler? What is the issue in choosing the parameter?
Round-robin scheduling gives each process a "timeslice" to run in, implemented with a ready queue and a regular timer interrupt. When a timeslice expires, the next process pre-empts the current one and runs for its own timeslice, and the pre-empted process goes to the end of the queue. The parameter is the length of the timeslice, which has to be chosen carefully: if it is too short, the overhead of frequent timer interrupts and context switches becomes significant; if it is too long, the system becomes unresponsive.
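A toy simulation of the ready queue and timeslice mechanics described above (process names and burst lengths are made up):

from collections import deque

def round_robin(bursts, timeslice):
    """Simulate round robin: bursts maps process name -> remaining CPU demand."""
    ready = deque(bursts.items())           # the ready queue
    schedule = []
    while ready:
        name, remaining = ready.popleft()   # next process gets the CPU
        ran = min(timeslice, remaining)     # run until finished or the timer fires
        schedule.append((name, ran))
        remaining -= ran
        if remaining > 0:                   # pre-empted: rejoin the end of the queue
            ready.append((name, remaining))
    return schedule

print(round_robin({"A": 5, "B": 2, "C": 4}, timeslice=2))
# [('A', 2), ('B', 2), ('C', 2), ('A', 2), ('C', 2), ('A', 1)]

Shrinking the timeslice interleaves the processes more finely but multiplies the number of switches; growing it makes each process wait longer for its next turn.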
78. The traditional UNIX scheduler is a priority-based round robin scheduler (also called a multi-level round robin scheduler). How does the scheduler go about favouring I/O bound jobs over long-running CPU-bound jobs?
The traditional UNIX scheduler assigns each process a priority and places it in one of multiple ready queues. Priorities are recalculated over time to prevent starvation of low-priority processes, with a process's priority boosted the less CPU time it has recently consumed. I/O-bound jobs are therefore favoured naturally, as they consume very little CPU time before blocking again.
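A rough sketch of the idea; the constants and the halving rule below are simplified stand-ins for the real 4.3BSD calculation, chosen only to show the direction of the effect (a lower number means a better priority):

BASE_PRIORITY = 50                          # illustrative constant, not the real value

def recalculate_priority(recent_cpu_ticks, nice=0):
    """The more CPU a process has recently used, the worse (higher) its priority."""
    return BASE_PRIORITY + recent_cpu_ticks // 2 + nice

def decay(recent_cpu_ticks):
    """Each interval, past usage is partly forgotten, so a process blocked on
    I/O drifts back toward a good priority."""
    return recent_cpu_ticks // 2

io_bound_ticks, cpu_bound_ticks = 4, 200      # CPU used in the last interval
print(recalculate_priority(io_bound_ticks))   # 52 -> close to the base, runs soon
print(recalculate_priority(cpu_bound_ticks))  # 150 -> queued behind the I/O-bound job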
79. In a real-time system with a periodic task set, how are priorities assigned to each of the periodic tasks?
There are two common approaches: rate monotonic and earliest deadline first scheduling. In rate monotonic scheduling, each task is given a fixed priority based on its period (the shorter the period, the higher the priority); in earliest deadline first scheduling, priorities are assigned dynamically, with the task whose deadline is nearest given the highest priority.
80. What is an EDF scheduler? What is its advantage over a rate monotonic scheduler?
An EDF scheduler is an Earliest Deadline First scheduler: at every scheduling decision it runs the task whose deadline is closest. A rate monotonic scheduler, by comparison, assigns static priorities based on the period of each task, but it is only guaranteed to meet all deadlines if CPU utilisation is not too high. EDF is more difficult to implement, but it works for any task set that is actually possible to schedule, right up to full CPU utilisation.
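A small sketch of the two priority rules and their utilisation tests, on a made-up task set where each task's deadline equals its period:

tasks = [("T1", 50, 12), ("T2", 40, 10), ("T3", 30, 10)]   # (name, period, execution time)

# Rate monotonic: fixed priorities, shortest period = highest priority.
rm_order = [name for name, period, _ in sorted(tasks, key=lambda t: t[1])]
print("RM priority order:", rm_order)                        # ['T3', 'T2', 'T1']

# EDF: priorities are dynamic; at any instant, run the task whose deadline is
# nearest. At time 0 every task's first deadline is simply its period.
print("EDF first pick:", min(tasks, key=lambda t: t[1])[0])  # 'T3'

# Schedulability: U is total utilisation, the sum of execution time / period.
n = len(tasks)
U = sum(c / p for _, p, c in tasks)
rm_bound = n * (2 ** (1 / n) - 1)       # Liu & Layland bound, about 0.78 for n = 3
print(f"U = {U:.2f}, RM bound = {rm_bound:.2f}, EDF bound = 1.00")
# RM is only guaranteed to meet every deadline when U <= rm_bound (a sufficient,
# not necessary, test); EDF meets every deadline whenever U <= 1.

Here U is about 0.82: above the rate monotonic bound but below 1, so EDF can guarantee this set while the simple rate monotonic test cannot.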