Computer Science - Chapter 7: Scheduling

Different scheduling policies are covered. Time-sharing: multilevel adaptive scheduling and fair share scheduling. Real-time: deadline scheduling and rate monotonic scheduling. Performance analysis is used to study and tune the performance of scheduling policies.

Chapter 7: Scheduling
Operating Systems, by Dhananjay Dhamdhere. Copyright © 2008.

Introduction
This chapter covers scheduling terminology and concepts, nonpreemptive scheduling policies, preemptive scheduling policies, scheduling in practice, real-time scheduling, case studies, and performance analysis of scheduling policies.

Scheduling Terminology and Concepts
Scheduling is the activity of selecting the next request to be serviced by a server. In an OS, a request is the execution of a job or a process, and the server is the CPU.

Fundamental Techniques of Scheduling
Schedulers use three fundamental techniques:
- Priority-based scheduling: provides high throughput of the system.
- Reordering of requests: implicit in preemption; enhances user service and/or throughput.
- Variation of the time slice: smaller values of the time slice provide better response times but lower CPU efficiency; a larger time slice is used for CPU-bound processes.

The Role of Priority
Priority is the tie-breaking rule employed by the scheduler when many requests await the attention of the server. It may be static or dynamic. Some process reorderings can be obtained through priorities, e.g., servicing short processes before long ones; other reorderings would need complex priority functions. When several processes have the same priority, round-robin scheduling is used among them. Priority-based scheduling may lead to starvation of low-priority requests; the solution is aging of requests.

Nonpreemptive Scheduling Policies
A server always services a scheduled request to completion. This approach is attractive because of its simplicity. Some nonpreemptive scheduling policies:
- First-come, first-served (FCFS) scheduling.
- Shortest request next (SRN) scheduling: may cause starvation of long processes.
- Highest response ratio next (HRN) scheduling: the response ratio of a request grows with the time it has spent waiting, so using it counters starvation.

Preemptive Scheduling Policies
In preemptive scheduling, the server can switch to the next request before completing the current one. A preempted request is put back into the pending list, and its servicing is resumed when it is scheduled again; a request may therefore be scheduled many times before it is completed. Preemption incurs larger scheduling overhead than nonpreemptive scheduling. It is used in multiprogramming and time-sharing OSs.
- Round-robin scheduling with time-slicing (RR): in the example in the text, the time slice δ = 1. At small values of δ, the response time of a request may actually be higher for a smaller value of δ.
- Least completed next (LCN): short processes finish ahead of long processes, but the policy starves long processes of CPU attention and neglects existing processes if new processes keep arriving in the system.
- Shortest time to go (STG): since it is analogous to the SRN policy, long processes might face starvation.
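The round-robin policy described above is easy to simulate. The following is a minimal sketch, not taken from the text, that computes completion and turnaround times for a set of requests under RR with a configurable time slice δ; the process names, arrival times, and service times are illustrative assumptions.

```python
from collections import deque

def round_robin(processes, delta):
    """Simulate RR scheduling with time slice delta.
    processes: list of (name, arrival_time, service_time), sorted by arrival.
    Returns {name: (completion_time, turnaround_time)}."""
    remaining = {name: service for name, _, service in processes}
    pending = deque()            # ready queue of process names
    results = {}
    time, i = 0, 0               # i indexes the next arrival
    while len(results) < len(processes):
        # admit processes that have arrived by the current time
        while i < len(processes) and processes[i][1] <= time:
            pending.append(processes[i][0]); i += 1
        if not pending:          # no ready process: idle until the next arrival
            time = processes[i][1]
            continue
        name = pending.popleft()
        run = min(delta, remaining[name])
        time += run
        remaining[name] -= run
        # admit any processes that arrived while this one was running
        while i < len(processes) and processes[i][1] <= time:
            pending.append(processes[i][0]); i += 1
        if remaining[name] == 0:
            arrival = next(a for n, a, _ in processes if n == name)
            results[name] = (time, time - arrival)
        else:
            pending.append(name)  # preempted: back to the end of the queue
    return results

# illustrative workload: (name, arrival time, service time)
workload = [("P1", 0, 3), ("P2", 2, 3), ("P3", 3, 5), ("P4", 4, 2), ("P5", 8, 3)]
print(round_robin(workload, delta=1))
```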
Scheduling in Practice
To provide a suitable combination of system performance and user service, the OS has to adapt its operation to the nature and number of user requests and to the availability of resources. A single scheduler using a classical scheduling policy cannot address all these issues effectively, so modern OSs employ several schedulers, up to three, and some of the schedulers may use a combination of different scheduling policies.

Long-, Medium-, and Short-Term Schedulers
These schedulers perform the following functions:
- Long-term scheduler: decides when to admit an arrived process for scheduling, depending on its nature (whether CPU-bound or I/O-bound) and on the availability of resources such as kernel data structures and swapping space.
- Medium-term scheduler: decides when to swap out a process from memory and when to load it back, so that a sufficient number of ready processes are in memory.
- Short-term scheduler: decides which ready process to service next on the CPU and for how long. It is also called the process scheduler, or simply the scheduler.
The text illustrates these with an example of long-, medium-, and short-term scheduling in a time-sharing system.

Scheduling Data Structures and Mechanisms
The interrupt servicing routine invokes the context save function. The dispatcher loads two PCB fields, the PSW and the GPRs, into the CPU to resume operation of a process. The scheduler executes an idle loop if no ready processes exist.

Priority-Based Scheduling
The scheduling overhead depends on the number of distinct priorities, not on the number of ready processes. Priority-based scheduling can lead to starvation of low-priority processes; aging can be used to overcome this problem. It can also lead to priority inversion, which is addressed by using the priority inheritance protocol.

Round-Robin Scheduling with Time-Slicing
RR scheduling can be implemented through a single list of PCBs of ready processes, organized as a queue. The scheduler removes the first PCB from the queue and schedules the process described by it. If the time slice elapses, the PCB is put at the end of the queue; if the process starts an I/O operation, its PCB is added at the end of the queue when the I/O operation completes. The PCB of a ready process thus moves toward the head of the queue until the process is scheduled.

Multilevel Scheduling
A priority and a time slice are associated with each ready queue, and RR scheduling with time-slicing is performed within it. A high-priority queue has a small time slice, which gives good response times to its processes; a low-priority queue has a large time slice, which keeps process switching overhead low. A process at the head of a queue is scheduled only if the queues for all higher priority levels are empty. Scheduling is preemptive, and priorities are static.

Multilevel Adaptive Scheduling
Multilevel adaptive scheduling is also called multilevel feedback scheduling. The scheduler varies the priority of a process so that it receives a time slice consistent with its CPU requirement: it determines the "correct" priority level for a process by observing its recent CPU and I/O usage, and moves the process to this level. An example is CTSS, a time-sharing OS for the IBM 7094 in the 1960s, which used an eight-level priority structure.

Fair Share Scheduling
A fair share is the fraction of CPU time to be devoted to a group of processes from the same user or application; fair share scheduling ensures an equitable use of the CPU by processes belonging to different users or different applications. Lottery scheduling is a technique for sharing a resource in a probabilistically fair manner: tickets are issued to applications (or users) on the basis of their fair share of CPU time, and the actual share of the resource allocated to a process depends on the contention for the resource. A minimal sketch of lottery scheduling follows.
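The sketch below assumes a simple model in which each group holds a number of tickets proportional to its fair share; the group names and ticket counts are illustrative, not from the text.

```python
import random

def lottery_pick(tickets):
    """Pick the next group to run by drawing one ticket at random.
    tickets: dict mapping group name -> number of tickets held."""
    total = sum(tickets.values())
    draw = random.randrange(total)      # winning ticket number in [0, total)
    for group, count in tickets.items():
        if draw < count:
            return group
        draw -= count

# illustrative ticket allocation corresponding to shares of 50%, 30%, 20%
tickets = {"user_A": 50, "user_B": 30, "user_C": 20}

# over many scheduling decisions, CPU time received approaches the fair shares
wins = {g: 0 for g in tickets}
for _ in range(10_000):
    wins[lottery_pick(tickets)] += 1
print({g: round(w / 10_000, 2) for g, w in wins.items()})
```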
Kernel Preemptibility
Kernel preemptibility helps ensure the effectiveness of a scheduler. With a noninterruptible kernel, event handlers have mutually exclusive access to kernel data structures without having to use data access synchronization; however, if handlers have large running times, noninterruptibility causes large kernel latency and may even cause a situation analogous to priority inversion. A preemptible kernel solves these problems: a high-priority process that is activated by an interrupt would start executing sooner.

Scheduling Heuristics
Scheduling heuristics reduce overhead and improve user service.
- Use of a time quantum: after exhausting its quantum, a process is not considered for scheduling unless it is granted another quantum, which is done only after the active processes have exhausted their quanta.
- Variation of process priority: priority can be varied to achieve various goals. It may be boosted while a process is executing a system call, or varied to more accurately characterize the nature of a process.

Power Management
The idle loop is used when no ready processes exist. It wastes power, which is bad for power-starved systems such as embedded systems. The solution is to use special modes in the CPU. In sleep mode, the CPU does not execute instructions but accepts interrupts; some computers provide several sleep modes, "light" or "heavy". OSs like Unix and Windows have generalized power management to include all devices.

Real-Time Scheduling
Real-time scheduling must handle two special scheduling constraints while trying to meet the deadlines of applications. First, processes within real-time applications are interacting processes, so the deadline of an application should be translated into appropriate deadlines for its processes. Second, processes may be periodic: different instances of a process may arrive at fixed intervals, and all of them have to meet their deadlines.

Process Precedences and Feasible Schedules
Dependences between processes (e.g., Pi → Pj) are considered while determining deadlines and scheduling. Response requirements are guaranteed to be met in hard real-time systems, or are met probabilistically in soft real-time systems, depending on the type of RT system. RT scheduling focuses on implementing a feasible schedule for an application, if one exists. A process precedence graph (PPG) is a directed graph G ≡ (N, E) such that Pi ∈ N represents a process, and an edge (Pi, Pj) ∈ E implies Pi → Pj. Thus, a path Pi, ..., Pk in the PPG implies that Pi precedes Pk, and a process Pk is a descendant of Pi if such a path exists.
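A PPG can be represented as an adjacency list, with descendants found by a graph search. The sketch below is an illustration under that assumption; the edges correspond to a small hypothetical application, not to a figure in the text.

```python
def descendants(ppg, start):
    """Return all descendants of `start` in a process precedence graph.
    ppg: dict mapping a process to the list of processes it directly precedes."""
    seen = set()
    stack = list(ppg.get(start, []))
    while stack:                      # depth-first traversal of successors
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(ppg.get(p, []))
    return seen

# hypothetical precedence graph: P1 -> P2, P1 -> P3, P2 -> P4, P3 -> P4, P4 -> P5
ppg = {"P1": ["P2", "P3"], "P2": ["P4"], "P3": ["P4"], "P4": ["P5"]}
print(descendants(ppg, "P1"))   # {'P2', 'P3', 'P4', 'P5'}
print(descendants(ppg, "P3"))   # {'P4', 'P5'}
```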
Process Precedences and Feasible Schedules (continued)
Another dynamic scheduling policy is optimistic scheduling, which admits all processes but may miss some deadlines.

Deadline Scheduling
Two kinds of deadlines can be specified: a starting deadline, the latest instant of time by which operation of the process must begin, and a completion deadline, the time by which operation of the process must complete. Only completion deadlines are considered in the following. Deadline estimation is done by considering process precedences and working backward from the response requirement of the application:

    Di = Dapplication − Σ k ∈ descendant(i) xk

where xk is the service time of process Pk.

Example: Determining Process Deadlines
The total of the service times of the processes is 25 seconds. If the application has to produce a response in 25 seconds, the deadline of each process is obtained by subtracting the service times of its descendants from 25.

Deadline Scheduling (continued)
Deadline determination is actually more complex, because it must incorporate several other constraints as well, e.g., the overlap of I/O operations with CPU processing. Earliest deadline first (EDF) scheduling always selects the process with the earliest deadline. If pos(Pi) is the position of Pi in the sequence of scheduling decisions, deadline overrun does not occur if, for every process Pi, the sum of the service times of all processes scheduled up to and including Pi does not exceed Di; this condition holds whenever a feasible schedule exists. The advantages of EDF are its simplicity and nonpreemptive nature, which make it a good policy for static scheduling.

For the deadlines of Figure 7.13 (where P4 : 20 indicates that P4 has the deadline 20), P2, P3 and P5, P6 have identical deadlines, so three other schedules are possible; none of them would incur deadline overruns.

Example: Problems of EDF Scheduling
Consider the PPG of Figure 7.13 with the edge (P5, P6) removed, which yields two independent applications: one consisting of P1–P4 and P6, and the other consisting of P5 alone. If all processes are to complete by 19 seconds, a feasible schedule does not exist. EDF scheduling may schedule the processes as P1, P2, P3, P4, P5, P6 or as P1, P2, P3, P4, P6, P5; hence the number of processes that miss their deadlines is unpredictable.
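The deadline formula above can be applied mechanically by walking the PPG. This is a small sketch under the assumption that each process's service time is known; the graph, service times, and overall response requirement are illustrative (they reuse the shape of the earlier PPG sketch, not Figure 7.13).

```python
def descendants(ppg, start):
    """All processes reachable from `start` in the precedence graph."""
    seen, stack = set(), list(ppg.get(start, []))
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(ppg.get(p, []))
    return seen

def process_deadlines(ppg, service_times, d_application):
    """Di = D_application - sum of service times of Pi's descendants."""
    return {p: d_application - sum(service_times[k] for k in descendants(ppg, p))
            for p in service_times}

def edf_order(deadlines):
    """Earliest deadline first: schedule processes in nondecreasing deadline order."""
    return sorted(deadlines, key=lambda p: deadlines[p])

# illustrative data: service times sum to 25 seconds
ppg = {"P1": ["P2", "P3"], "P2": ["P4"], "P3": ["P4"], "P4": ["P5"]}
service_times = {"P1": 5, "P2": 4, "P3": 6, "P4": 7, "P5": 3}
deadlines = process_deadlines(ppg, service_times, d_application=25)
print(deadlines)            # e.g. P1's deadline is 25 - (4 + 6 + 7 + 3) = 5
print(edf_order(deadlines))
```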
Feasibility of Schedules for Periodic Processes
The fraction of CPU time used by a periodic process Pi is xi / Ti, where xi is its service time and Ti its period. In the example in the text, the fractions of CPU time used by the three processes add up to 0.93, so if the CPU overhead of OS operation is negligible, it is feasible to service these three processes. In general, a set of periodic processes P1, ..., Pn that do not perform I/O can be serviced by a hard real-time system that has a negligible overhead if

    Σ i xi / Ti ≤ 1

Rate Monotonic (RM) Scheduling
Rate monotonic scheduling determines the rate at which a process has to repeat, rate of Pi = 1 / Ti, and assigns the rate itself as the priority of the process, so a process with a smaller period has a higher priority. Because it employs priority-based scheduling, such a process can complete its operation early. RM scheduling is not guaranteed to find a feasible schedule in all situations; in the example of the text, it would fail if P3 had a period of 27 seconds. If an application has a large number of processes, RM scheduling may not be able to achieve more than 69 percent CPU utilization if it is to meet the deadlines of the processes. The deadline-driven scheduling algorithm, which dynamically assigns process priorities based on their current deadlines, can achieve 100 percent CPU utilization, but its practical performance is lower because of the overhead of dynamic priority assignment.
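The 69 percent figure above is the limit of the classic Liu and Layland utilization bound, n(2^(1/n) − 1). The sketch below applies that bound as a sufficient schedulability test for rate monotonic scheduling, together with the utilization test for deadline-driven (EDF) scheduling; the periods and service times are illustrative assumptions, not the example from the text.

```python
def utilization(tasks):
    """tasks: list of (service_time, period) pairs for periodic processes."""
    return sum(x / t for x, t in tasks)

def rm_feasible(tasks):
    """Sufficient test for rate monotonic scheduling (Liu-Layland bound):
    total utilization must not exceed n * (2**(1/n) - 1).
    Note: failing this bound does not prove infeasibility."""
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)

def edf_feasible(tasks):
    """Deadline-driven scheduling can use the CPU fully: utilization <= 1."""
    return utilization(tasks) <= 1

# illustrative periodic processes: (service time, period) in seconds
tasks = [(2, 10), (5, 15), (9, 27)]
print(round(utilization(tasks), 2))       # total fraction of CPU time used
print(rm_feasible(tasks), edf_feasible(tasks))
```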
Case Studies
The text examines scheduling in Unix, Solaris, Linux, and Windows.

Scheduling in Unix
Unix is a pure time-sharing operating system. In Unix 4.3 BSD, priorities are in the range 0 to 127: processes in user mode have priorities between 50 and 127, and processes in kernel mode have priorities between 0 and 49. Unix uses a multilevel adaptive scheduling policy, with process priority = base priority for user processes + f(CPU time used recently) + nice value. For fair share scheduling, a term f(CPU time used by processes in the group) is added. The text illustrates this with examples of process scheduling and of fair share scheduling in Unix.

Scheduling in Solaris
Solaris supports four classes of processes. Time-sharing and interactive processes have priorities between 0 and 59; their scheduling is governed by a dispatch table whose entries indicate how priority should change with the nature of the process and to avoid starvation. System processes have priorities between 60 and 99 and are not time-sliced. Real-time processes have priorities between 100 and 159 and are scheduled by an RR policy within a priority level. Interrupt servicing threads have priorities between 160 and 169. Solaris 9 also supports a fair share scheduling class.

Scheduling in Linux
Linux supports real-time and non-real-time applications. Real-time processes have static priorities between 0 and 100 and are scheduled FIFO or RR within each priority level; the scheduling of a process is determined by a flag. Non-real-time processes have dynamic priorities between -20 and 19, initially 0; the priority can be varied through nice system calls, and the kernel varies a process's priority according to its nature. These processes are scheduled using the notion of a time quantum. The 2.6 kernel uses a scheduler that incurs less overhead and scales better.

Scheduling in Windows
Scheduling is priority-driven and preemptive, with an RR policy with time-slicing within a priority level. The priorities of non-real-time threads are dynamically varied, hence they are also called the variable priority class; the variation favors interactive threads. Real-time threads are given higher priorities (16 to 31). The effective priority of a thread depends on the base priority of its process, the base priority of the thread, and a dynamic component. Windows provides a number of low power-consumption system states for responsiveness, e.g., hibernate and standby; Vista introduced a new state, sleep, which combines features of hibernate and standby.

Performance Analysis of Scheduling Policies
The set of requests directed at a scheduling policy is called its workload. The first step in the performance analysis of a policy is to accurately characterize its typical workload. Three approaches can be used for the performance analysis of scheduling policies: implementation of the scheduling policy in an OS, simulation, and mathematical modeling.

Performance Analysis through Implementation
The scheduling policy to be evaluated is implemented in a real OS that is used in the target operating environment. The OS receives real user requests, services them using the scheduling policy, and collects data for statistical analysis of the policy's performance. This is a disruptive approach; the disruption can be avoided by using virtual machine software.

Simulation
Simulation is achieved by coding the scheduling policy and the relevant OS functions as a simulator and using a typical workload as its input. The analysis may be repeated with many workloads to eliminate the effect of variations across workloads.

Mathematical Modeling
A mathematical model is a set of expressions for performance characteristics such as the arrival times and service times of requests. Queuing theory is employed to provide arrival and service patterns, and exponential distributions are used because of their memoryless property:
- Arrival times: F(t) = 1 − e^(−αt), where α is the mean arrival rate.
- Service times: S(t) = 1 − e^(−ωt), where ω is the mean execution rate.
The mean queue length is given by Little's formula, L = α × W, where L is the mean queue length and W is the mean wait time for a request. (A small worked example follows the summary below.)

Summary
The scheduler decides which process to service and for how long. It uses three techniques: priority-based scheduling, reordering of requests, and variation of the time slice. Scheduling can be nonpreemptive (e.g., SRN, HRN) or preemptive (e.g., RR, LCN, STG). An OS uses three schedulers: the long-term, medium-term, and short-term scheduler.

Summary (continued)
Different scheduling policies suit different settings. Time-sharing systems use multilevel adaptive scheduling and fair share scheduling; real-time systems use deadline scheduling and rate monotonic scheduling. Performance analysis is used to study and tune the performance of scheduling policies.
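The following is a small worked example of the mathematical modeling approach, assuming a single-server queue with exponential interarrival and service times (an M/M/1 model). The arrival and execution rates are illustrative, and the closed-form expressions used are the standard M/M/1 results rather than formulas quoted from the text.

```python
import random

# illustrative rates: alpha = mean arrival rate, omega = mean execution rate
alpha, omega = 8.0, 10.0          # requests per second; must satisfy alpha < omega

# standard M/M/1 closed forms (assumed model, not from the text)
rho = alpha / omega               # server utilization
W = 1.0 / (omega - alpha)         # mean time a request spends in the system
L = alpha * W                     # Little's formula: mean number of requests in system
print(f"utilization={rho:.2f}  W={W:.3f}s  L={L:.2f}")

# cross-check Little's formula by simulating exponential arrivals and services
random.seed(1)
t_arrive, finish, total_time_in_system = 0.0, 0.0, 0.0
n = 200_000
for _ in range(n):
    t_arrive += random.expovariate(alpha)         # next arrival time
    start = max(t_arrive, finish)                 # wait if the server is busy
    finish = start + random.expovariate(omega)    # service completes
    total_time_in_system += finish - t_arrive
W_sim = total_time_in_system / n
print(f"simulated W={W_sim:.3f}s  L=alpha*W={alpha * W_sim:.2f}")
```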
