Which scheduling technique is used in Windows XP?
Because the former number is much smaller than it should be, the scheduler assumes that thread A started running in the middle of a clock interval and may have additionally been interrupted. Thread A gets its quantum increased by another clock interval, and the quantum target is recalculated. Thread A now has its chance to run for a full clock interval. At the next clock interval, thread A has finished its quantum, and thread B now gets a chance to run. When a thread finishes running (either because it returned from its main routine, called ExitThread, or was killed with TerminateThread), it moves from the running state to the terminated state.
If there are no handles open on the thread object, the thread is removed from the process thread list and the associated data structures are deallocated and released. A typical context switch requires saving and reloading the following data: the instruction pointer, the kernel stack pointer, and a pointer to the address space in which the thread runs (the process's page table directory). If the new thread is in a different process, it loads the address of its page table directory into a special processor register so that its address space is available.
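The two conditions for cleanup described above (the thread must be terminated and no handles may remain open on it) can be illustrated with a small sketch. This is not Windows code; the class and method names are invented for illustration, and the real object manager logic is far more involved:

```python
class ThreadObject:
    """Toy model of a thread object's lifetime (illustrative only)."""

    def __init__(self):
        self.state = "running"
        self.handle_count = 1      # e.g., the creator still holds a handle
        self.deallocated = False

    def terminate(self):
        # Running -> terminated (returned from the main routine,
        # called ExitThread, or was killed with TerminateThread).
        self.state = "terminated"
        self._reap_if_possible()

    def close_handle(self):
        self.handle_count -= 1
        self._reap_if_possible()

    def _reap_if_possible(self):
        # Data structures are released only once the thread has
        # terminated AND no handles remain open on the object.
        if self.state == "terminated" and self.handle_count == 0:
            self.deallocated = True
```

A terminated thread whose handle is still open lingers until that handle is closed, which is why leaked thread handles keep these structures alive.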
See the description of address translation in Chapter 9. Various Windows process viewer utilities report the idle process using different names. Remember, only one thread per Windows system is actually running at priority 0—the zero page thread, explained in Chapter 9. Apart from priority, there are many other fields in the idle process or its threads that may be reported as 0.
This occurs because the idle process is not an actual full-blown object manager process object, and neither are its idle threads. Instead, the initial idle thread and idle process objects are statically allocated and used to bootstrap the system before the process manager initializes. Subsequent idle thread structures are allocated dynamically as additional processors are brought online.
Once process management initializes, it uses the special variable PsIdleProcess to refer to the idle process. Apart from some critical fields provided so that these threads and their process can have a PID and name, everything else is ignored, which means that query APIs may simply return zeroed data.
Although some details of the flow vary between architectures, the basic flow of control of the idle thread is as follows:

1. Enables and disables interrupts, allowing any pending interrupts to be delivered.
2. Checks whether any DPCs (described in Chapter 3) are pending on the processor. If DPCs are pending, clears the pending software interrupt and delivers them. This will also perform timer expiration, as well as deferred ready processing; the latter is explained in the upcoming multiprocessor scheduling section.
3. Checks whether a thread has been selected to run next on the processor and, if so, dispatches that thread.
4. Calls the registered power management processor idle routine (in case any power management functions need to be performed), which typically resides in the processor power driver (such as intelppm.sys).
5. On debug systems, checks if there is a kernel debugger trying to break into the system and, if so, gives it access.
6. If requested, checks for threads waiting to run on other processors and schedules them locally.
This operation is also explained in the upcoming multiprocessor scheduling section.

In six cases, the Windows scheduler can boost (increase) the current priority value of threads:

1. On completion of I/O operations
2. After waiting on executive events or semaphores
3. When a thread has been waiting on an executive resource for too long
4. When threads in the foreground process complete a wait operation
5. When GUI threads wake up because of windowing activity
6. When a thread that is ready to run has not been running for some time (CPU starvation)

The intent of these adjustments is to improve overall system throughput and responsiveness, as well as resolve potentially unfair scheduling scenarios.
Windows never boosts the priority of threads in the real-time range (16 through 31). Therefore, scheduling is always predictable with respect to other threads in the real-time range.
Windows Vista adds one more scenario in which a priority boost can occur: multimedia playback. Unlike the other priority boosts, which are applied directly by kernel code, multimedia playback boosts are managed by a user-mode service called the MultiMedia Class Scheduler Service (MMCSS).
Although the boosts are still done in kernel mode, the request to boost the threads is managed by this user-mode service. These values are listed in the accompanying table. As illustrated in the accompanying figure, after the boost is applied, the thread gets to run for one quantum at the elevated priority level. After the thread has completed its quantum, it decays one priority level and then runs another quantum.
A thread with a higher priority can still preempt the boosted thread, but the interrupted thread gets to finish its time slice at the boosted priority level before it decays to the next lower priority. As noted earlier, these boosts apply only to threads in the dynamic priority range (0 through 15). No matter how large the boost is, the thread will never be boosted beyond level 15 into the real-time priority range.
In other words, a priority 14 thread that receives a boost of 5 will go up to priority 15, not 19. A priority 15 thread that receives a boost will remain at priority 15. When a thread that was waiting on an executive event or a semaphore object has its wait satisfied (because of a call to SetEvent, PulseEvent, or ReleaseSemaphore), it receives a boost of 1. This adjustment helps balance the scales.
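The cap described here is simple clamping arithmetic. A minimal sketch (the function name is mine, not a kernel routine):

```python
def boosted_priority(base, boost):
    # Threads in the real-time range (16-31) are never boosted;
    # dynamic-range threads (0-15) are boosted but clamped at 15.
    if base >= 16:
        return base
    return min(base + boost, 15)
```

So `boosted_priority(14, 5)` yields 15, not 19, and a priority 15 thread stays at 15 regardless of the boost.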
The boost is always applied to the base priority (not the current priority). The thread gets to run at the elevated priority for its remaining quantum (as described earlier, quantums are reduced by 1 when threads exit a wait) before decaying one priority level at a time until it reaches its original base priority.
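The decay pattern, one priority level per quantum back down to the base, can be sketched as follows (illustrative only, assuming boosts are capped at level 15 as the text describes):

```python
def decay_sequence(base, boost):
    # Priority at the start of each successive quantum: the thread
    # starts at the boosted level (capped at 15) and drops one level
    # per quantum until it is back at its base priority.
    start = min(base + boost, 15)
    return list(range(start, base - 1, -1))
```

For example, a base priority 8 thread with a boost of 3 runs quanta at levels 11, 10, 9, and finally 8.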
A special boost is applied to threads that are awoken as a result of setting an event with the special function NtSetEventBoostPriority (used in Ntdll.dll). If the awakened thread's quantum is less than 4 quantum units, it is set to 4 quantum units. This boost is removed at quantum end. When a thread attempts to acquire an executive resource (ERESOURCE; see Chapter 3 for more information on kernel synchronization objects) that is already owned exclusively by another thread, it must enter a wait state until the other thread has released the resource.
To avoid deadlocks, the executive performs this wait in intervals of five seconds instead of doing an infinite wait on the resource. At the end of these five seconds, if the resource is still owned, the executive will attempt to prevent CPU starvation by acquiring the dispatcher lock, boosting the owning thread or threads, and performing another wait.
The boost is always applied to the base priority (not the current priority) of the owner thread. The quantum of the thread is reset so that the thread gets to run at the elevated priority for a full quantum, instead of only the quantum it had left.
Just like other boosts, at each quantum end, the priority boost will slowly decrease by one level. Because executive resources can be either shared or exclusive, the kernel will first boost the exclusive owner and then check for shared owners and boost all of them.
When the waiting thread enters the wait state again, the hope is that the scheduler will schedule one of the owner threads, which will have enough time to complete its work and release the resource. For example, if the resource has multiple shared owners, the executive will boost all those threads to priority 14, resulting in a sudden surge of high-priority threads on the system, all with full quantums.
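The surge-and-decay arithmetic can be sketched like this. The numbers mirror the text (owners boosted to level 14, decaying one level per quantum); the function is purely illustrative and assumes each owner's base priority is below the waiter's:

```python
def quanta_until_waiter_runs(owner_bases, waiter_priority, boost_level=14):
    # All owners surge to the boost level with full quantums, then
    # decay one level per quantum (never below their base priority).
    # The waiter runs once every owner has dropped below its priority.
    current = [boost_level] * len(owner_bases)
    quanta = 0
    while any(p >= waiter_priority for p in current):
        quanta += 1
        current = [max(p - 1, base) for p, base in zip(current, owner_bases)]
    return quanta
```

With two shared owners at base 8 and a waiter at priority 10, the owners run for five quanta (14, 13, 12, 11, 10) before decaying to 9 and letting the waiter in.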
Only after all the shared owners have had a chance to run, and their priorities have decayed below that of the waiting thread, will the waiting thread finally get its chance to acquire the resource.

Whenever a thread in the foreground process completes a wait operation on a kernel object, the kernel function KiUnwaitThread boosts its current (not base) priority by the current value of PsPrioritySeperation.
The windowing system is responsible for determining which process is considered to be in the foreground. As described in the section on quantum controls, PsPrioritySeperation reflects the quantum-table index used to select quantums for the threads of foreground applications.
However, in this case, it is being used as a priority boost value. The reason for this boost is to improve the responsiveness of interactive applications—by giving the foreground application a small boost when it completes a wait, it has a better chance of running right away, especially when other processes at the same base priority might be running in the background.
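As a sketch, the foreground boost is just an addition to the current priority, capped at the top of the dynamic range. The constant name mirrors the kernel variable's spelling; the function itself is invented for illustration:

```python
PsPrioritySeperation = 2   # value when the "Programs" option is selected

def foreground_wait_boost(current_priority):
    # Applied to the *current* (not base) priority when a thread in
    # the foreground process completes a wait on a kernel object;
    # dynamic-range priorities never exceed 15.
    return min(current_priority + PsPrioritySeperation, 15)
```

A normal-priority (8) foreground thread thus pops up to 10 when its wait completes, then decays back down as usual.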
Using the CPU Stress tool, you can watch priority boosts in action. Select the Programs option. This causes PsPrioritySeperation to get a value of 2. Run Cpustres.exe. Select the second thread (thread 1).
The first thread is the GUI thread. Select Properties from the Action menu. You should see the selected thread's priority rise when it wakes up; the boost is applied at that moment. Threads that own windows receive an additional boost of 2 when they wake up because of windowing activity, such as the arrival of window messages. The windowing system (Win32k.sys) applies this boost. The reason for this boost is similar to the previous one: to favor interactive applications. You can also see the windowing system apply its boost of 2 for GUI threads that wake up to process window messages by monitoring the current priority of a GUI application and moving the mouse across the window.
Just follow these steps:

1. Be sure that the Programs option is selected.
2. Scroll down until you see Notepad thread 0. Click it, click the Add button, and then click OK.
3. As in the previous experiment, select Properties from the Action menu.
You should see the priority of thread 0 in Notepad at 8, 9, or 10. (Because Notepad entered a wait state shortly after it received the boost of 2 that threads in the foreground process receive, it might not yet have decayed from 10 to 9 and then to 8.) Make both windows visible on the desktop. With Reliability and Performance Monitor in the foreground, move the mouse across the Notepad window. Now bring Notepad to the foreground.
You should see the priority rise to 12 and remain there (or drop to 11, because it might experience the normal priority decay that occurs for boosted threads at quantum end), because the thread is receiving two boosts: the boost of 2 applied to GUI threads when they wake up to process windowing input, and an additional boost of 2 because Notepad is in the foreground.

What does Windows do to address CPU starvation? We have previously seen how the executive code responsible for executive resources manages this scenario by boosting the owner threads so that they can have a chance to run and release the resource.
However, executive resources are only one of the many synchronization constructs available to developers, and the boosting technique will not apply to any other primitive.
Therefore, Windows also includes a generic CPU starvation relief mechanism as part of a thread called the balance set manager (a system thread that exists primarily to perform memory management functions and is described in more detail in Chapter 9). To minimize the CPU time it uses, it scans only 16 ready threads; if there are more threads at that priority level, it remembers where it left off and picks up again on the next pass.
Also, it will boost only 10 threads per pass: if it finds 10 threads meriting this particular boost (which would indicate an unusually busy system), it stops the scan at that point and picks up again on the next pass. We mentioned earlier that scheduling decisions in Windows are not affected by the number of threads, and that they are made in constant time, or O(1).
Because the balance set manager does need to scan ready queues manually, this operation does depend on the number of threads on the system, and more threads will require more scanning time. However, the balance set manager is not considered part of the scheduler or its algorithms and is simply an extended mechanism to increase reliability. Additionally, because of the cap on threads and queues to scan, the performance impact is minimized and predictable in a worst-case scenario.
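The two caps, 16 threads scanned and 10 threads boosted per pass, can be sketched as a single scan pass. Names and structure are mine; the real implementation walks kernel ready queues, not Python lists:

```python
SCAN_LIMIT = 16    # ready threads examined per pass
BOOST_LIMIT = 10   # threads boosted per pass

def starvation_scan_pass(ready_threads, resume_index, is_starved):
    # One balance-set-manager pass: examine at most SCAN_LIMIT threads,
    # starting where the previous pass left off, boost at most
    # BOOST_LIMIT of them, and return where the next pass should resume.
    boosted = []
    scanned = 0
    i = resume_index
    while scanned < SCAN_LIMIT and i < len(ready_threads):
        thread = ready_threads[i]
        scanned += 1
        i += 1
        if is_starved(thread):
            boosted.append(thread)
            if len(boosted) == BOOST_LIMIT:
                break   # unusually busy system: stop and resume later
    return boosted, i
```

Because both limits are constants, each pass does a bounded amount of work no matter how many threads exist, which is what keeps the worst-case impact predictable.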
Will this algorithm always solve the priority inversion issue? Not necessarily, but over time, CPU-starved threads should get enough CPU time to finish whatever processing they were doing and reenter a wait state. Change the activity level of the active thread (by default, Thread 1) from Low to Maximum.
Change the thread priority from Normal to Below Normal. Raise the priority of Performance Monitor to real time by running Task Manager, clicking the Processes tab, and selecting the Mmc.exe process. Right-click the process, select Set Priority, and then select Realtime.
If you receive a Task Manager Warning message box warning you of system instability, click the Yes button. If you have a multiprocessor system, you will also need to change the affinity of the process: right-click and select Set Affinity. Run another copy of CPU Stress. In this copy, change the activity level of Thread 1 from Low to Maximum. Now switch back to Performance Monitor. You should see CPU activity every 6 or so seconds because the thread is boosted to priority 15. Run Windows Media Player (or some other audio playback program), and begin playing some audio content.
Run Cpustres, and set the activity level of Thread 1 to Maximum. Raise the priority of Thread 1 from Normal to Time Critical. You should hear the music playback stop as the compute-bound thread begins consuming all available CPU time. Every so often, you should hear bits of sound as the starved thread in the audio playback process gets boosted to 15 and runs enough to send more data to the sound card.
Skipping and other audio glitches have been a common source of irritation among Windows users in the past, and the user-mode audio stack in Windows Vista would only have made the situation worse, since it offers even more chances for preemption. MMCSS defines several tasks with which applications can register; in turn, each of these tasks includes information about the various properties that differentiate them. The most important one for scheduling is called the Scheduling Category, which is the primary factor determining the priority of threads registered with MMCSS.
The scheduling categories are as follows:

- High: Pro Audio threads, running at a higher priority than any other thread on the system except for critical system threads.
- Medium: Threads that are part of a foreground application, such as Windows Media Player.
- Low: All other threads not part of the previous categories.
- Exhausted: Threads that have exhausted their share of the CPU and will continue running only if no other higher-priority threads are ready to run.
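One way to encode the ordering among these categories is shown below. The enum is my own encoding for illustration, not an MMCSS data structure:

```python
from enum import IntEnum

class SchedulingCategory(IntEnum):
    # Higher value == scheduled more urgently.
    EXHAUSTED = 0   # ran out of its CPU share
    LOW = 1         # all other registered threads
    MEDIUM = 2      # foreground application threads
    HIGH = 3        # Pro Audio threads

def most_urgent(categories):
    # Pick the category that should be serviced first.
    return max(categories)
```

Encoding the categories as an ordered enum makes "which class wins the CPU" a simple comparison.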
The main mechanism behind MMCSS boosts the priority of threads inside a registered process to the priority level matching their scheduling category (and relative priority within this category) for a guaranteed period of time.
It then lowers those threads to the Exhausted category so that other, nonmultimedia threads on the system can also get a chance to execute. By default, multimedia threads will get 80 percent of the CPU time available, while other threads will receive 20 percent (based on a sample of 10 ms: in other words, 8 ms and 2 ms).
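The 80/20 split is straightforward arithmetic over the 10 ms sampling period (the function name is invented for illustration):

```python
def mmcss_cpu_budget(period_ms=10, multimedia_share=0.80):
    # By default, registered multimedia threads get 80% of each
    # 10 ms period (8 ms) and everything else shares the rest (2 ms).
    multimedia_ms = period_ms * multimedia_share
    other_ms = period_ms - multimedia_ms
    return multimedia_ms, other_ms
```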
MMCSS itself runs at priority 27, since it needs to preempt any Pro Audio threads in order to lower their priority to the Exhausted category. It is important to emphasize that the kernel still does the actual boosting of the values inside the KTHREAD (MMCSS simply makes the same kind of system call any other application would), and the scheduler is still in control of these threads. It is simply their high priority that makes them run almost uninterrupted on a machine, since they are in the real-time range and well above the levels at which most user applications run.
As was discussed earlier, changing the relative thread priorities within a process does not usually make sense, and no tool allows this because only developers understand the importance of the various threads in their programs. On the other hand, because applications must manually register with MMCSS and provide it with information about what kind of thread this is, MMCSS does have the necessary data to change these relative thread priorities and developers are well aware that this will be happening.
We are now going to perform the same experiment as the prior one, but without disabling the MMCSS service. If you have a multiprocessor machine, be sure to set the affinity of the Wmplayer.exe process to a single CPU. Scroll down until you see Wmplayer, and then select all its threads. Click the Add button, and then click OK. You should see one or more priority 21 threads inside Wmplayer, which will be constantly running unless there is a higher-priority thread requiring the CPU after they are dropped to the Exhausted category.
You should notice the system slowing down considerably, but the music playback will continue. Use this time to stop Cpustres.
Scheduling is a key concept in computer multitasking, multiprocessing operating system, and real-time operating system designs.
It refers to the way processes are assigned to run on the available CPUs. This assignment is carried out by software known as a scheduler and a dispatcher.
That is not really a correct description of the Windows NT-family scheduler. In step 1, change "top-level FIFO queue" to "the queue for its current priority". For step 6, change "the base level queue" to "the queue for its base priority" (which is usually not the lowest-level queue in the system).
There is more; see the Windows Internals book. When I used XP, it had the behaviour of round robin without a multilevel feedback queue: a single CPU-intensive task would starve the system. I think I read somewhere that they added a multilevel feedback queue with SMP, and had two different kernels depending on whether you had multi-core or not. This would explain why Microsoft told us that we needed a dual-core system to do better multi-tasking; it was needed to get the basic performance that we were used to in Unix.
I think this better scheduler is now enabled even for single-core machines. It has been multilevel (32 priority levels) since NT 3.1. So is the performance still terrible when a process tries to hog the CPU (assuming one CPU core, or one hog per core)?

This preemption gives a real-time thread preferential access to the CPU when the thread needs such access.
The dispatcher uses a 32-level priority scheme to determine the order of thread execution. Priorities are divided into two classes. The variable class contains threads having priorities from 1 to 15, and the real-time class contains threads with priorities ranging from 16 to 31. There is also a thread running at priority 0 that is used for memory management.
The dispatcher uses a queue for each scheduling priority and traverses the set of queues from highest to lowest until it finds a thread that is ready to run. If no ready thread is found, the dispatcher will execute a special thread called the idle thread. The Win32 API identifies several priority classes to which a process can belong.
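The traversal described above (highest queue to lowest, falling back to the idle thread) can be sketched as a simplified single-processor dispatcher:

```python
from collections import deque

NUM_PRIORITY_LEVELS = 32
IDLE_THREAD = "idle thread"

def pick_next_thread(ready_queues):
    # ready_queues[p] is the FIFO queue of ready threads at priority p.
    # Traverse from priority 31 down to 0; if every queue is empty,
    # run the idle thread. Illustrative sketch, not the real dispatcher.
    for priority in range(NUM_PRIORITY_LEVELS - 1, -1, -1):
        if ready_queues[priority]:
            return ready_queues[priority].popleft()
    return IDLE_THREAD
```

Because the scan is over a fixed set of 32 queues, selection takes constant time regardless of how many threads are ready, matching the O(1) claim made earlier.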