Tutorial and Lecture BE / B Tech
Welcome to our comprehensive resource hub for BE/B Tech students seeking academic support and enrichment. Whether you are navigating the intricacies of engineering concepts or striving for excellence in your studies, our tutorials and lectures guide you every step of the way. Dive into a wealth of educational content curated by seasoned educators and industry experts, tailored to meet the unique needs of budding engineers.
Why Choose Our Tutorials and Lectures?
- Expert Guidance: Our tutorials and lectures are crafted by experienced faculty members and subject matter experts with extensive backgrounds in engineering education and industry practice.
- Comprehensive Coverage: From fundamental concepts to advanced topics, our content spans various disciplines within the BE/B Tech curriculum, ensuring you have access to resources relevant to your coursework and career aspirations.
- Interactive Learning: Engage with dynamic learning materials that go beyond traditional lectures. Our tutorials incorporate interactive elements, practical examples, and hands-on exercises to reinforce your understanding and encourage active participation.
- Flexible Accessibility: Access our tutorials and lectures anytime, anywhere, at your convenience. Whether you prefer to learn at your own pace or follow a structured study plan, our platform offers flexibility to accommodate your learning style.
- Supplementary Resources: Besides tutorials and lectures, our platform provides supplementary resources such as study guides, reference materials, and practice problems to support your academic journey and enhance learning outcomes.
Our Offerings:
- Core Subject Tutorials: Dive deep into core engineering subjects, including mathematics, physics, computer science, electrical engineering, mechanical engineering, and more. Our tutorials break down complex concepts into digestible modules, making learning accessible and engaging.
- Specialized Workshops: Explore specialized workshops designed to broaden your skill set and expose you to emerging trends and technologies in your field. Topics may include data science, artificial intelligence, renewable energy, cybersecurity, and other cutting-edge areas of engineering.
- Industry Insights: Gain valuable insights from industry practitioners through guest lectures, case studies, and real-world projects. Learn how theoretical knowledge translates into practical applications and explore potential career paths within the engineering industry.
- Exam Preparation: Use our resources to prepare for your BE/B Tech exams with confidence. Access practice tests, sample questions, and revision materials to reinforce your learning and excel in your assessments.
- Career Development: Elevate your career prospects with our career development resources. Learn essential skills such as resume writing, interview preparation, and professional networking to enhance your employability and succeed in today’s competitive job market.
Get Started Today:
- Embark on your journey to academic excellence and professional success with our tutorials and lectures for BE/B Tech students.
- Join our community of learners and empower yourself with the knowledge and skills to thrive in the dynamic engineering field.
- Start exploring our resources today and unlock your full potential!
Operating System
CPU Scheduling
In uniprogramming systems like MS-DOS, the CPU remains idle while a process waits for an I/O operation to finish. This is an overhead, since it wastes time and can cause starvation. In multiprogramming systems, however, the CPU does not remain idle during a process's waiting time; it starts executing other processes instead. The operating system has to decide which process the CPU is given to next.
In multiprogramming systems, the operating system schedules the processes on the CPU to maximize its utilization, and this procedure is called CPU scheduling. The Operating System uses various scheduling algorithms to schedule the processes.
The task of the short-term scheduler is to select, from the processes in the ready queue, the one the CPU will execute next. Whenever the running process requests an I/O operation, the short-term scheduler saves the current context of the process (in its Process Control Block, or PCB) and changes its state from running to waiting. While that process waits, the short-term scheduler picks another process from the ready queue and assigns the CPU to it. This procedure is called context switching.
What is saved in the Process Control Block?
The operating system maintains a process control block throughout the lifetime of a process; the PCB is deleted when the process terminates or is killed. Information such as the process state, program counter, CPU registers, and scheduling information is saved in the process control block and is updated as the state of the process changes.
Why do we need Scheduling?
In multiprogramming, if the long-term scheduler picks too many I/O-bound processes, then most of the time the CPU remains idle. The task of the operating system is to optimize the utilization of resources.
If most running processes change their state from running to waiting, the CPU may sit idle and there is always some possibility of deadlock in the system. Hence, to reduce this overhead, the OS needs to schedule jobs so as to optimize CPU utilization and avoid the possibility of deadlock.
Scheduling Algorithms in OS (Operating System)
There are various algorithms that the Operating System uses to schedule the processes on the processor efficiently.
The Purpose of a Scheduling Algorithm
- Maximum CPU utilization
- Fair allocation of CPU
- Maximum throughput
- Minimum turnaround time
- Minimum waiting time
- Minimum response time
The following algorithms can be used to schedule the jobs:
First Come First Serve
It is the most straightforward algorithm to implement. The process with the minimal arrival time gets the CPU first. The lesser the arrival time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
Round Robin
The OS defines a time quantum (slice) in the round-robin scheduling algorithm. All the processes are executed cyclically: each process gets the CPU for a small amount of time (the time quantum) and then returns to the ready queue to wait for its next turn. It is a preemptive type of scheduling.
Shortest Job First
The job with the shortest burst time gets the CPU first. The lower the burst time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
Shortest remaining time first
It is the preemptive form of SJF. In this algorithm, the OS schedules jobs according to their remaining execution time.
Priority-based Scheduling
In this algorithm, a priority is assigned to each process. The higher the priority, the sooner the process gets the CPU. If two processes have the same priority, they are scheduled according to their arrival time.
Highest Response Ratio Next
In this scheduling algorithm, the process with the highest response ratio is scheduled next. This reduces starvation in the system.
What is Process Scheduling?
Process scheduling is the activity of the process manager that handles the removal of a running process from the CPU and the selection of another process, based on a specific strategy.
Process scheduling is an integral part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
There are three types of process schedulers:
- Long term or Job Scheduler
- Short term or CPU Scheduler
- Medium-term Scheduler
Different Types of Process Schedulers
Process Scheduling handles the selection of a process for the processor based on a scheduling algorithm and the removal of a process from the processor. It is an essential part of a multiprogramming operating system.
Many scheduling queues are used in process scheduling. When the processes enter the system, they are put into the job queue. The processes ready to execute in the main memory are kept in the ready queue. The processes waiting for the I/O device are kept in the I/O device queue.
The different schedulers that are used for process scheduling are −
Long Term Scheduler
The job or long-term scheduler selects processes from the storage pool in the secondary memory and loads them into the ready queue in the main memory for execution.
The long-term scheduler controls the degree of multiprogramming. A careful mixture of I/O and CPU-bound processes must be selected to yield optimum system throughput. If it selects too many CPU-bound processes, then the I/O devices are idle, and if it selects too many I/O-bound processes, then the processor has nothing to do.
The job of the long-term scheduler is critical, since its decisions affect the system for a long time.
Short Term Scheduler
The short-term scheduler selects one of the processes from the ready queue and schedules it for execution. A scheduling algorithm decides which process will be scheduled for execution next.
The short-term scheduler executes much more frequently than the long-term scheduler, as a process may execute only for a few milliseconds.
The choices of the short-term scheduler are critical. If it selects a process with a long burst time, all the processes after that will have to wait for a long time in the ready queue. This is known as starvation, which may happen if the short-term scheduler makes a wrong decision.
A diagram that demonstrates long-term and short-term schedulers is given as follows −
Medium Term Scheduler
The medium-term scheduler swaps a process out of the main memory. It can later swap the process back in and resume it from where it stopped executing. This is also called suspending and resuming the process.
This helps reduce the degree of multiprogramming. Swapping also improves the mix of I/O-bound and CPU-bound processes in the memory.
Why do we need to schedule processes?
Scheduling is essential in many different computer environments. One of the most critical scheduling tasks is deciding which programs run on the CPU. This task is handled by the operating system (OS), and there are many different ways to configure which program runs when.
Process scheduling allows the OS to allocate CPU time to each process. Another essential reason to use a process scheduling system is that it keeps the CPU busy, which yields lower response times for programs.
- Considering that hundreds of programs may need to run, the OS must launch one program, stop it, switch to another program, and so on. The way the OS switches the CPU from one running program to another is called "context switching". If the OS context-switches programs in and out of the available CPUs quickly enough, it gives the user the illusion that all of their programs run simultaneously.
- So, given that only one program can run on a CPU at a time, and that the OS can swap programs in and out using a context switch, how do we choose which program to run next?
That is where scheduling comes in! First, you pick a metric, such as turnaround time, defined as "the interval from when a job enters the system until it is completed". Second, you schedule so as to optimize that metric: we want our tasks to finish as soon as possible.
What is the need for a CPU scheduling algorithm?
CPU scheduling decides which process will own the CPU while another process is suspended. The primary function of CPU scheduling is to ensure that whenever the CPU is free, the OS has selected one of the processes available in the ready queue to run.
In multiprogramming, if the long-term scheduler selects too many I/O-bound processes, then most of the time the CPU remains idle. The function of an effective scheduler is to improve resource utilization.
If most running processes change their state from running to waiting, the CPU may sit idle and there is always a chance of deadlock. So, to minimize this overhead, the OS needs to schedule tasks so as to make full use of the CPU and avoid the possibility of deadlock.
Objectives of Process Scheduling Algorithm:
- Utilization of the CPU at the maximum level: keep the CPU as busy as possible.
- Fair allocation of the CPU.
- Maximum throughput, i.e., the number of processes that complete their execution per time unit should be maximized.
- Minimum turnaround time, i.e., the time taken by a process to finish execution should be the least.
- Minimum waiting time: there should be minimal waiting, and no process should starve in the ready queue.
- Minimum response time: the time until a process produces its first response should be as short as possible.
What are the different terminologies to take care of in any CPU Scheduling algorithm?
Arrival Time: The time at which the process arrives in the ready queue.
Completion Time: The time at which the process completes its execution.
Burst Time: The time required by a process for CPU execution.
Turnaround Time: The difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time (WT): The difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
What are some things to consider while designing a CPU Scheduling algorithm?
Different CPU scheduling algorithms have different structures, and the choice of a particular algorithm depends on various factors. Many criteria have been proposed for comparing CPU scheduling algorithms.
The criteria include the following:
CPU utilization: The primary purpose of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it varies from about 40 to 90 percent, depending on the system load.
Throughput: The number of processes performed and completed per unit of time is called throughput. Throughput may vary depending on the length or duration of the processes.
Turnaround Time: For a particular process, an important criterion is how long that process takes to execute. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time includes the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
Waiting Time: The scheduling algorithm does not affect the time required to complete a process once it has started executing. It only affects the waiting time of the process, i.e., the time the process spends waiting in the ready queue.
Response Time: In an interactive system, turnaround time is not the best criterion. A process may produce some output early and continue computing new results while previous results are shown to the user. Therefore, another measure is the time from the submission of a request until the first response is produced. This measure is called response time.
What are the different types of CPU Scheduling Algorithms?
There are mainly two types of scheduling methods:
Preemptive Scheduling: Preemptive Scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state.
Non-Preemptive Scheduling: Non-Preemptive Scheduling is used when a process terminates or switches from running to waiting.
Different types of CPU Scheduling Algorithms
Let us now learn about these CPU scheduling algorithms in operating systems one by one:
1. First Come First Serve:
FCFS is considered the simplest of all operating system scheduling algorithms. The first come, first serve scheduling algorithm states that the process that requests the CPU first is allocated the CPU first, and it is implemented using a FIFO queue.
Characteristics of FCFS:
- FCFS is a non-preemptive CPU scheduling algorithm.
- Tasks are always executed on a First-come, First-serve concept.
- FCFS is easy to implement and use.
- This algorithm is not very efficient in performance, and the average wait time is quite high.
Advantages of FCFS:
- Easy to implement
- First come, first serve method
Disadvantages of FCFS:
- FCFS suffers from the convoy effect.
- The average waiting time is much higher than in the other algorithms.
- FCFS trades efficiency for simplicity: it is very easy to implement, but not very efficient.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on First come, First serve Scheduling.
2. Shortest Job First(SJF):
Shortest job first (SJF) is a scheduling policy that selects the waiting process with the smallest execution time to execute next. This scheduling method may or may not be preemptive. It significantly reduces the average waiting time for other processes waiting to be executed.
Characteristics of SJF:
- Shortest Job first has the advantage of having a minimum average waiting time among all operating system scheduling algorithms.
- It associates with each process the length of its next CPU burst, which it uses to pick the next process.
- It may cause starvation if shorter processes keep coming. This problem can be solved using the concept of ageing.
Advantages of Shortest Job first:
- As SJF reduces the average waiting time, it is better than the first come first serve scheduling algorithm.
- SJF is generally used for long-term scheduling.
Disadvantages of SJF:
- One demerit of SJF is starvation.
- It is often difficult to predict the length of the upcoming CPU burst.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Shortest Job First.
3. Longest Job First(LJF):
Longest Job First (LJF) is the opposite of shortest job first (SJF): as the name suggests, this algorithm processes the job with the largest burst time first. Longest Job First is non-preemptive in nature.
Characteristics of LJF:
- Among all the processes waiting in the ready queue, the CPU is always assigned to the process with the largest burst time.
- If two processes have the same burst time then the tie is broken using FCFS i.e. the process that arrived first is processed first.
- LJF itself is non-preemptive; its preemptive counterpart is Longest Remaining Time First (LRTF).
Advantages of LJF:
- No other task can be scheduled until the longest job or process has executed completely.
- All the jobs or processes finish at the same time approximately.
Disadvantages of LJF:
- Generally, the LJF algorithm gives a very high average waiting time and average turn-around time for a given set of processes.
- This may lead to convoy effect.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the Longest job first scheduling.
4. Priority Scheduling:
The preemptive priority CPU scheduling algorithm is a pre-emptive method that works based on the priority of a process. In this algorithm, the scheduler assigns each process a priority, and the most important process is run first. In the case of a conflict, that is, when there is more than one process with equal priority, the algorithm falls back on FCFS (First Come First Serve) ordering.
Characteristics of Priority Scheduling:
- Schedules tasks based on priority.
- When a higher-priority process arrives while a lower-priority process is executing, the higher-priority process takes the place of the lower-priority one, and the latter is suspended until the execution of the former is complete.
- By common convention, the lower the number assigned, the higher the priority level of a process.
Advantages of Priority Scheduling:
- The average waiting time is less than FCFS
- Less complex
Disadvantages of Priority Scheduling:
- The most common demerit of the preemptive priority CPU scheduling algorithm is starvation: a low-priority process may have to wait a very long time before it is scheduled onto the CPU.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Priority Preemptive Scheduling algorithm.
5. Round robin:
Round Robin is a CPU scheduling algorithm in which each process is cyclically assigned a fixed time slot. It is the preemptive version of the first come first serve CPU scheduling algorithm, and it is designed especially for time-sharing systems.
Characteristics of Round robin:
- It’s simple, easy to use, and starvation-free as all processes get the balanced CPU allocation.
- It is one of the most widely used methods in CPU scheduling.
- It is considered preemptive as the processes are given to the CPU for a very limited time.
Advantages of Round robin:
- Round robin seems to be fair as every process gets an equal share of CPU.
- The newly created process is added to the end of the ready queue.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the Round robin Scheduling algorithm.
6. Shortest Remaining Time First:
Shortest remaining time first is the preemptive version of the shortest job first algorithm discussed earlier, in which the processor is allocated to the job closest to completion. In SRTF, the process with the smallest amount of time remaining until completion is selected to execute.
Characteristics of Shortest remaining time first:
- The SRTF algorithm processes jobs faster than the SJF algorithm, provided its overhead cost is not counted.
- Context switches occur far more often in SRTF than in SJF, consuming the CPU's valuable processing time. This adds to its processing time and diminishes its advantage of fast processing.
Advantages of SRTF:
- In SRTF the short processes are handled very fast.
- The system also requires very little overhead since it only makes a decision when a process completes or a new process is added.
Disadvantages of SRTF:
- Like the shortest job first, it also has the potential for process starvation.
- Long processes may be held off indefinitely if short processes are continually added.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the shortest remaining time first.
7. Longest Remaining Time First:
Longest remaining time first is a preemptive version of the longest job first scheduling algorithm. The operating system uses it to schedule incoming processes in a systematic way: the process with the longest remaining processing time is executed first.
Characteristics of longest remaining time first:
- Among all the processes waiting in a waiting queue, the CPU is always assigned to the process having the largest burst time.
- If two processes have the same burst time then the tie is broken using FCFS i.e. the process that arrived first is processed first.
- LRTF is the preemptive counterpart of LJF scheduling.
- No other process can execute until the longest task executes completely.
- All the jobs or processes finish at the same time approximately.
Disadvantages of LRTF:
- This algorithm gives a very high average waiting time and average turn-around time for a given set of processes.
- This may lead to a convoy effect.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on the longest remaining time first.
8. Highest Response Ratio Next:
Highest Response Ratio Next is a non-preemptive CPU scheduling algorithm, and it is considered one of the most optimal scheduling algorithms. The name itself states that we need to find the response ratio of all available processes and select the one with the highest response ratio. A process, once selected, runs until completion.
Characteristics of Highest Response Ratio Next:
- The criterion for HRRN is the response ratio, and the mode is non-preemptive.
- HRRN is considered a modification of Shortest Job First that reduces the problem of starvation.
- In comparison with SJF, the HRRN scheduling algorithm allots the CPU to the next process with the highest response ratio, not the process with the least burst time.
Response Ratio = (W + S)/S
Here, W is the waiting time of the process so far and S is the Burst time of the process.
Advantages of HRRN:
- HRRN Scheduling algorithm generally gives better performance than the shortest job first Scheduling.
- There is a reduction in waiting time for longer jobs and also it encourages shorter jobs.
Disadvantages of HRRN:
- HRRN scheduling is hard to implement in practice, because the burst time of every job cannot be known in advance.
- This scheduling may place extra load on the CPU, since the response ratios must be recomputed at each decision.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Highest Response Ratio Next.
9. Multiple Queue Scheduling:
Processes in the ready queue can be divided into different classes where each class has its own scheduling needs. For example, a common division is a foreground (interactive) process and a background (batch) process. These two classes have different scheduling needs. For this kind of situation Multilevel Queue Scheduling is used.
The description of the processes in the above diagram is as follows:
- System Processes: The operating system itself has its own processes to run, generally termed system processes.
- Interactive Processes: An interactive process is one that requires frequent interaction with the user.
- Batch Processes: Batch processing is a technique in which the operating system collects programs and data together in the form of a batch before processing begins.
Advantages of multilevel queue scheduling:
- The main merit of the multilevel queue is that it has a low scheduling overhead.
Disadvantages of multilevel queue scheduling:
- Starvation problem
- It is inflexible in nature
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Multilevel Queue Scheduling.
10. Multilevel Feedback Queue Scheduling:
Multilevel Feedback Queue Scheduling (MLFQ) is like multilevel queue scheduling, but here processes can move between the queues, which makes it much more efficient than multilevel queue scheduling.
Characteristics of Multilevel Feedback Queue Scheduling:
- In a plain multilevel queue scheduling algorithm, processes are permanently assigned to a queue on entry to the system and are not allowed to move between queues.
- That permanent assignment gives low scheduling overhead, but has the disadvantage of being inflexible.
- Multilevel feedback queue scheduling, in contrast, lets a process move between queues, for example demoting a process that uses too much CPU time to a lower-priority queue.
Advantages of Multilevel feedback queue scheduling:
- It is more flexible
- It allows different processes to move between different queues
Disadvantages of Multilevel feedback queue scheduling:
- It also produces CPU overheads
- It is the most complex algorithm.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on Multilevel Feedback Queue Scheduling.
Exercise:
- Consider a process that requires 40 time units of burst time. Multilevel feedback queue scheduling is used, with a time quantum of 2 units for the top queue, incremented by 5 units at each lower level. In which queue will the process terminate its execution?
- Which of the following is false about SJF?
S1: It causes minimum average waiting time.
S2: It can cause starvation.
(A) Only S1 (B) Only S2 (C) Both S1 and S2 (D) Neither S1 nor S2
Answer: (D). S1 is true: SJF always gives the minimum average waiting time. S2 is true: SJF can cause starvation.
- Consider the following table of arrival time and burst time for three processes P0, P1 and P2. (GATE-CS-2011)
| Process | Arrival time | Burst Time |
|---------|--------------|------------|
| P0 | 0 ms | 9 ms |
| P1 | 1 ms | 4 ms |
| P2 | 2 ms | 9 ms |
- The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only at arrival or completion of processes. What is the average waiting time for the three processes?
(A) 5.0 ms (B) 4.33 ms (C) 6.33 ms (D) 7.33 ms
Answer: (A). Process P0 is allocated the processor at 0 ms as there is no other process in the ready queue. P0 is preempted after 1 ms as P1 arrives at 1 ms and the burst time of P1 is less than the remaining time of P0. P1 runs for 4 ms. P2 arrives at 2 ms, but P1 continues as the burst time of P2 is longer than that of P1. After P1 completes, P0 is scheduled again, as the remaining time for P0 is less than the burst time of P2. P0 waits for 4 ms, P1 waits for 0 ms and P2 waits for 11 ms. So the average waiting time is (0 + 4 + 11)/3 = 5.0 ms.
- Consider the following set of processes, with the arrival times and the CPU-burst times given in milliseconds (GATE-CS-2004)
| Process | Arrival time | Burst Time |
|---------|--------------|------------|
| P1 | 0 ms | 5 ms |
| P2 | 1 ms | 3 ms |
| P3 | 2 ms | 3 ms |
| P4 | 4 ms | 1 ms |
- What is the average turnaround time for these processes with the preemptive shortest remaining processing time first (SRPT) algorithm?
(A) 5.50 (B) 5.75 (C) 6.00 (D) 6.25
Answer: (A). The following is the Gantt chart of execution:

| P1 | P2 | P4 | P3 | P1 |
|----|----|----|----|-----|
| 0–1 | 1–4 | 4–5 | 5–8 | 8–12 |

Turn Around Time = Completion Time – Arrival Time. Avg Turn Around Time = (12 + 3 + 6 + 1)/4 = 5.50
- An operating system uses the Shortest Remaining Time First (SRTF) process scheduling algorithm. Consider the arrival times and execution times for the following processes:
| Process | Arrival time | Burst Time |
|---------|--------------|------------|
| P1 | 0 ms | 20 ms |
| P2 | 15 ms | 25 ms |
| P3 | 30 ms | 10 ms |
| P4 | 45 ms | 15 ms |
- What is the total waiting time for process P2?
(A) 5 (B) 15 (C) 40 (D) 55
Answer: (B). At time 0, P1 is the only process; P1 runs for 15 time units. At time 15, P2 arrives, but P1 has the shortest remaining time, so P1 continues for 5 more time units. At time 20, P2 is the only runnable process, so it runs for 10 time units. At time 30, P3 is the shortest-remaining-time process, so it runs for 10 time units. At time 40, P2 runs as it is the only runnable process; P2 runs for 5 time units. At time 45, P4 arrives, but P2 has the shortest remaining time, so P2 continues for 10 more time units. P2 completes its execution at time 55.
Total waiting time for P2
= Completion time – (Arrival time + Execution time)
= 55 – (15 + 25)
= 15
Difference between Turn Around Time (TAT) and Waiting Time (WT) in CPU Scheduling
In CPU scheduling, we often need to find the average turnaround and waiting time with the help of the arrival, burst and completion times. Let's have a brief look at them: Turnaround Time (TAT):
- It is the time interval from the time of submission of a process to the time of the completion of the process.
- The difference between Completion Time and Arrival Time is called turnaround time.
Completion Time (CT): This is the time when the process completes its execution. Arrival Time (AT): This is the time when the process has arrived in the ready state.
TAT = CT – AT
Waiting Time (WT):
- The time spent by a process waiting in the ready queue for getting the CPU.
- The difference between Turnaround Time and Burst Time is called waiting time.
Burst Time (BT): This is the time required by the process for its execution.
WT = TAT – BT
Now with Waiting Time and Burst Time, we can also calculate Turn Around Time via:
TAT = BT + WT
Example:
| Process | Burst Time (in sec.) |
|---------|----------------------|
| P1 | 24 |
| P2 | 3 |
| P3 | 4 |
Solution: Assuming all three processes arrive at time 0 and run in FCFS order, the Gantt chart is: P1 (0–24), P2 (24–27), P3 (27–31).
Avg. TAT = (24 + 27 + 31) / 3 = 27.33 sec; Avg. WT = (0 + 24 + 27) / 3 = 17.0 sec
Let’s see the difference between Turnaround Time and Waiting Time:
| Turn Around Time | Waiting Time |
|------------------|--------------|
| The time from when the process enters the ready queue until the process completes its execution. | The time the process spends waiting in the ready queue. |
| Different CPU scheduling algorithms produce different TAT for the same set of processes. | The CPU scheduling algorithm doesn't affect the amount of time during which a process executes or does I/O; it affects only the time the process spends waiting in the ready queue. |
| Turnaround time can be limited by the speed of I/O devices, since it includes time spent doing I/O. | Waiting time has no such dependence on device speed. |
| Turn Around Time = Burst time + Waiting time. | Waiting time = Turn Around Time – Burst time. |
```c
// C program for implementation of FCFS scheduling
#include <stdio.h>

// Function to find the waiting time for all processes
void findWaitingTime(int processes[], int n, int bt[], int wt[])
{
    // waiting time for the first process is 0
    wt[0] = 0;

    // calculating waiting time
    for (int i = 1; i < n; i++)
        wt[i] = bt[i - 1] + wt[i - 1];
}

// Function to calculate turn around time
void findTurnAroundTime(int processes[], int n, int bt[], int wt[], int tat[])
{
    // calculating turnaround time by adding bt[i] + wt[i]
    for (int i = 0; i < n; i++)
        tat[i] = bt[i] + wt[i];
}

// Function to calculate average time
void findavgTime(int processes[], int n, int bt[])
{
    int wt[n], tat[n], total_wt = 0, total_tat = 0;

    // Find waiting time of all processes
    findWaitingTime(processes, n, bt, wt);

    // Find turn around time for all processes
    findTurnAroundTime(processes, n, bt, wt, tat);

    // Display processes along with all details
    printf("Processes  Burst time  Waiting time  Turn around time\n");

    // Calculate total waiting time and total turn around time
    for (int i = 0; i < n; i++) {
        total_wt = total_wt + wt[i];
        total_tat = total_tat + tat[i];
        printf("   %d\t\t%d\t    %d\t\t%d\n", i + 1, bt[i], wt[i], tat[i]);
    }

    float s = (float)total_wt / (float)n;
    float t = (float)total_tat / (float)n;
    printf("Average waiting time = %f\n", s);
    printf("Average turn around time = %f\n", t);
}

// Driver code
int main()
{
    // process IDs
    int processes[] = {1, 2, 3};
    int n = sizeof processes / sizeof processes[0];

    // Burst time of all processes
    int burst_time[] = {10, 5, 8};

    findavgTime(processes, n, burst_time);
    return 0;
}
// This code is contributed by Shivi_Aggarwal
```
Output:
Processes Burst time Waiting time Turn around time
1 10 0 10
2 5 10 15
3 8 15 23
Average waiting time = 8.333333
Average turn around time = 16.000000