08.705 RTOS Module 2 Notes



RTOS Tasks, Processes and Threads:

Program: a general term for a unit of computation, typically used in the context of programming.

Process: a program in execution; an independently executable unit handled by an OS.

Threads: for better utilization of resources, a process is broken into sub-processes called threads. They are referred to as lightweight processes, because many threads can run concurrently (one at a time per processor) within a process without much overhead.

Tasks: a generic term for an independently scheduled unit of computation, used typically in the context of scheduling computation on the processor.

RTOS tasks and task states: The basic building block of software written under an RTOS is the task. Tasks are very simple to write: under most RTOSs a task is simply a subroutine. Most RTOSs allow multiple tasks at a time. Each task in an RTOS is always in one of three states:
1. Running: the microprocessor is executing the instructions of this task. In a single-processor system, only one task is in the running state at any given time.
2. Ready: some other task is in the running state, but this task is ready to execute as soon as the microprocessor becomes available. Any number of tasks can be in this state.
3. Blocked: the task has nothing to do right now, even if the processor is available, because it is waiting for some external event. Any number of tasks can be in this state.

Nowadays most RTOSs also have additional states such as: a. Suspended b. Pended c. Waiting d. Delayed

Scheduler: the part of the RTOS that tracks the state of each task and decides which task should go to the running state. The schedulers in most RTOSs are simple-minded: among the unblocked tasks, the one with the highest priority runs, and the rest wait in the ready state. Lower-priority tasks have to wait; the scheduler assumes that the user assigned the task priorities sensibly.
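The pick-the-highest-priority-ready-task rule can be sketched in C. This is a minimal model with made-up names, not a real RTOS API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch (not from the notes): a minimal model of the
 * three task states and the simple-minded scheduler rule described
 * above: among unblocked tasks, the highest-priority one runs. */
typedef enum { READY, RUNNING, BLOCKED } task_state_t;

typedef struct {
    const char  *name;
    int          priority;   /* higher number = higher priority (assumed) */
    task_state_t state;
} task_t;

/* Pick the highest-priority task that is not blocked; it becomes
 * RUNNING and every other unblocked task is left READY. Returns the
 * chosen task, or NULL if all tasks are blocked. */
task_t *schedule(task_t *tasks, size_t n) {
    task_t *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (tasks[i].state == BLOCKED)
            continue;
        tasks[i].state = READY;              /* demote any previous runner */
        if (best == NULL || tasks[i].priority > best->priority)
            best = &tasks[i];
    }
    if (best != NULL)
        best->state = RUNNING;               /* only one task runs at a time */
    return best;
}
```

Note that the scheduler never moves a task out of the blocked state; as the notes explain below, only another task or an ISR can do that.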
The transitions among the three task states are shown below:

Department of ECE, VKCET


A task blocks only because it decides for itself that it has run out of things to do; a task must be running just before it blocks. While a task is blocked, it never gets the microprocessor. Therefore an interrupt routine or some other task in the system must be able to signal that whatever the task was waiting for has happened; otherwise the task will be blocked forever. The shuffling of tasks between the ready and running states is entirely the work of the scheduler: tasks can block themselves, and tasks and ISRs can move other tasks from the blocked state to the ready state, but only the scheduler has control over the running state.

Tasks and Data: Each task has its own private context, which includes the register values, a program counter, and a stack. All other data (global and static variables, initialized and uninitialized) is shared among all of the tasks in the system. A typical system with three tasks and its data is shown below:

The RTOS typically has its own private data structures, which are not available to any of the tasks. Sharing data between different tasks causes the shared-data problem. Consider the following example:


Here both Task1 and Task2 call the function vCountErrors. This code has a potential bug: if Task1 calls vCountErrors and the RTOS stops Task1 and runs Task2, which then calls vCountErrors, the variable cErrors may be corrupted.

Reentrancy and reentrancy rules: Functions are called reentrant when they can be called by more than one task and will always work correctly, even if the RTOS switches from one task to another in the middle of executing the function. Shared functions that are not written this way are non-reentrant. There are three rules for deciding whether a function is reentrant:
1. A reentrant function may not use variables in a non-atomic way unless they are stored on the stack of the task that called the function or are otherwise private to that task.
2. A reentrant function may not call any other functions that are not themselves reentrant.
3. A reentrant function may not use the hardware in a non-atomic way.

Applying the reentrancy rules: Consider the example:
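The cErrors corruption can be reproduced deterministically. The sketch below uses hypothetical helper names of my own; it splits Task1's read-modify-write in two so that a simulated task switch falls exactly in the middle, the way the RTOS could preempt vCountErrors between its read and its write:

```c
#include <assert.h>

/* A deterministic simulation (not RTOS code) of the shared-data bug. */
static int cErrors = 0;

/* Task1 reads the shared counter and returns the value it saw. */
int task1_read(void)            { return cErrors; }
/* Task1 later writes back its incremented private copy. */
void task1_write(int seen)      { cErrors = seen + 1; }
/* Task2's whole increment, run without interruption. */
void task2_count_error(void)    { cErrors = cErrors + 1; }

/* Interleaving: Task1 reads, the RTOS switches to Task2, Task2 counts
 * an error, the RTOS switches back, Task1 writes its stale value. */
int run_interleaving(void) {
    cErrors = 0;
    int seen = task1_read();    /* Task1: reads cErrors (0)        */
    task2_count_error();        /* switch: Task2 increments to 1   */
    task1_write(seen);          /* Task1: writes 0 + 1, losing one */
    return cErrors;
}
```

Two errors occurred, but the counter ends at 1: Task2's increment is lost.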

The variable fError is fixed at a location in memory and is shared by any task that calls display. The use of fError is not atomic, because the RTOS may switch tasks between the time that it is tested and the time that it is set. This function therefore violates rule 1. The variable j is no problem, because it is on the stack. Another potential problem is the call to printf: if printf is not itself reentrant, the function also violates rule 2.

Semaphores and shared data: The semaphore is a tool used in an RTOS to avoid the shared-data problem. The concept comes from old railway signaling, where semaphores were used to keep one train from running into another. Consider the following figure:

When train A enters the section of the track, the semaphore behind it automatically lowers.


When train B arrives, its driver notes the lowered semaphore, stops the train, and waits for the semaphore to rise. When train A leaves the section of track, the semaphore rises, and the driver of train B knows that it is safe to proceed. An RTOS uses the same concept.

RTOS Semaphores: Instead of the raise and lower of the railroad semaphore, RTOS semaphores come in pairs of operations such as get and give, take and release, pend and post, p and v, or wait and signal. A typical RTOS semaphore is the binary semaphore, on which a task can call two functions, TakeSemaphore and ReleaseSemaphore. If one task has called TakeSemaphore to take the semaphore and has not yet called ReleaseSemaphore to release it, then any other task that calls TakeSemaphore will block until the first task calls ReleaseSemaphore. Only one task can have the semaphore at a time. The use of a semaphore to solve the shared-data problem is shown below:
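Separately from the figure, the take/release behaviour can be modelled as a small sketch. The names are made up, and where a real RTOS would block the caller, this single-threaded model simply reports failure:

```c
#include <assert.h>
#include <stdbool.h>

/* A toy model (an assumption, not a real RTOS API) of a binary
 * semaphore: taking succeeds only if nobody holds the semaphore;
 * otherwise the caller would block, which the model reports by
 * returning false. */
typedef struct { bool taken; } bin_sem_t;

bool take_semaphore(bin_sem_t *s) {
    if (s->taken)
        return false;    /* a real RTOS would block the caller here */
    s->taken = true;
    return true;
}

void release_semaphore(bin_sem_t *s) { s->taken = false; }
```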


The execution flow with semaphores for the above example is shown below:


Initializing Semaphores: Semaphores must be initialized in the right place in the code. Consider this part of the RTOS-based software for a nuclear reactor system:


Here vReadTemperatureTask calls a delay at the beginning, before taking the semaphore with OSSemPend, so that vControlTask gets enough time to create the semaphore with the statement p_semTemp = OSSemInit(1);. The problem with this code is that it works only as long as the higher-priority vReadTemperatureTask really stays away for the whole delay time. A better solution is to give vControlTask the higher priority, or to move the semaphore-initialization call OSSemInit into the start-up code, usually before OSStart.


Reentrancy and Semaphores: Semaphores can be used to make a function reentrant. The following code shows a semaphore making the function reentrant:

Here the static variable cErrors is protected by semaphore calls before and after its update. A second call to vCountErrors will block when it tries to take the semaphore. The function and data-structure names beginning with NU belong to the RTOS called Nucleus, a trademark of Accelerated Technology Inc.

Multiple Semaphores: Most RTOSs allow many semaphores. Each call to the RTOS must identify the semaphore on which it operates. The semaphores are all independent of one another: if one task takes semaphore A, another task can still take semaphore B; if one task is waiting for semaphore C, it remains blocked even if some other task releases semaphore D. The example below shows the use of two semaphores, X and Y, for synchronizing tasks I, J and M and tasks J, K and L respectively:


Advantages of multiple semaphores: the responsiveness of the system improves, and efficient resource sharing becomes possible.

Semaphore as a signal: Semaphores are also commonly used to communicate between tasks, or between an ISR and a task.

Problems with semaphores: Semaphores solve the shared-data problem, but when used imperfectly they cause problems of their own. Typical mistakes are:
1. Forgetting to take the semaphore: results in a shared-data bug.
2. Forgetting to release the semaphore: results in infinite blocking of the next task that tries to take it.
3. Taking the wrong semaphore: results in infinite blocking or a shared-data bug.
4. Holding a semaphore for too long: results in unnecessary waiting by other tasks, and can cause the priority inversion problem. Priority inversion can be solved by priority inheritance, which temporarily boosts the priority of the low-priority task holding the semaphore.
5. Causing deadlock. An example of deadlock:

vTask1 calls ajsmrsv and takes semaphore A; then the RTOS stops it and runs vTask2. vTask2 calls ajsmrsv and takes semaphore B, then calls ajsmrsv again to get semaphore A and blocks. The RTOS switches back to vTask1, which calls ajsmrsv to get semaphore B and also blocks. Both tasks are now blocked forever.
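A common cure for this deadlock is to impose one global order in which every task takes its semaphores, so the circular wait can never form. The sketch below is my own illustration of that lock-ordering discipline, not code from the notes:

```c
#include <assert.h>

/* Deadlock-avoidance sketch: if every task takes semaphores in one
 * global order (here: lower ID first), both vTask1 and vTask2 end up
 * contending for the same semaphore first, and the circular wait
 * between them cannot form. */

/* Compute the order in which a task should take two semaphores,
 * writing the IDs into first/second. */
void lock_order(int sem_x, int sem_y, int *first, int *second) {
    if (sem_x < sem_y) { *first = sem_x; *second = sem_y; }
    else               { *first = sem_y; *second = sem_x; }
}
```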


RTOS Services: Some services offered by commercial RTOSs are:
1. Message queues 2. Mailboxes 3. Pipes 4. Timer functions 5. Events 6. Memory management 7. Interaction with interrupt routines
Most of these are used for inter-task communication.

Message queues, mailboxes and pipes: All are used for inter-task communication, to coordinate different tasks and to share data between them.

Message queues: Some features of message queues are:
1. The OS provides calls for inserting and deleting messages or message pointers.
2. Each queue must be initialized before it is used.
3. Each created queue has an ID.
4. Each queue has a user-definable size.
5. When an OS call inserts a message into the queue, the specified number of bytes (or the message pointer) is copied into the queue.
6. When a queue is full, an error-handling function handles the condition.

A queue message block with the messages or message pointers is shown below:

Some standard message-queue functions used in RTOSs are:
1. OSQCreate: initializes a queue.
2. OSQPend: waits for a queue message.
3. OSQPost and OSQPostFront: insert a message into the queue; the second is for higher-priority messages, which go to the front.
4. OSQQuery: queries the queue for a message.
5. QErrorHandling: error-handling function.
6. OSQAccept: accepts a message from the queue if one is available, without waiting.
7. OSQFlush: flushes the messages from the queue without waiting.

Mailboxes: A mailbox used for inter-task communication can serve only a single destination task. It holds a message or a message pointer. The source (sender) is a task that sends the message pointer to the mailbox. The destination is the place where the OSMBoxPend function waits for the mailbox message and reads it when it is received. Three mailbox types found in different RTOSes are:


1. Multiple unlimited messages queuing up.
2. One message per mailbox.
3. Multiple messages with a priority parameter for each message.

Some standard OS mailbox functions used in RTOSs are:
1. OSMBoxCreate creates a mailbox and initializes its contents with a NULL pointer.
2. OSMBoxPost sends (writes) a message to the mailbox.
3. OSMBoxWait waits (pends) for a mailbox message.
4. OSMBoxAccept reads the current message pointer after checking for its presence, and removes the message when read.
5. OSMBoxQuery queries the mailbox.
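The post/pend behaviour of a fixed-size message queue such as the ones above can be sketched as a ring buffer. The names q_post and q_pend are made up for illustration, not the OSQPost/OSQPend API itself:

```c
#include <assert.h>
#include <stddef.h>

/* A minimal fixed-size message-pointer queue (a sketch, assuming a
 * ring buffer). q_post fails when the queue is full and q_pend fails
 * when it is empty, mirroring the error cases the notes mention;
 * a real RTOS would block the pending task instead. */
#define Q_SIZE 4

typedef struct {
    void  *msg[Q_SIZE];
    size_t head, tail, count;
} msg_queue_t;

void q_init(msg_queue_t *q) { q->head = q->tail = q->count = 0; }

int q_post(msg_queue_t *q, void *m) {         /* insert at the back */
    if (q->count == Q_SIZE)
        return -1;                            /* full: error case */
    q->msg[q->tail] = m;
    q->tail = (q->tail + 1) % Q_SIZE;
    q->count++;
    return 0;
}

int q_pend(msg_queue_t *q, void **m) {        /* remove from the front */
    if (q->count == 0)
        return -1;                            /* empty: a task would block */
    *m = q->msg[q->head];
    q->head = (q->head + 1) % Q_SIZE;
    q->count--;
    return 0;
}
```

Messages come out in FIFO order; a one-deep variant of the same structure would behave like the one-message-per-mailbox type.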

Pipes: Pipes are similar to the ones used for devices like files. A pipe is a device for inserting (writing) and deleting (reading) bytes between two interconnected tasks or two sets of tasks. Writing to and reading from a pipe is similar to using the C functions fwrite and fread with a file name. Writing to a pipe happens at the back-pointer address *pBACK and reading at the front-pointer address *pFRONT. A pipe is unidirectional: one task inserts into it and the other deletes from it. Pipe messages in a message buffer are shown below:

Some standard functions for pipes are:
1. PipeDevCreate for creating a device that functions as a pipe.
2. open ( ) for opening the device to enable its use from the beginning of its allocated buffer.
3. connect ( ) for connecting a thread or task inserting bytes to the thread or task deleting bytes from the pipe.
4. write ( ) for inserting bytes at the back of the empty memory space in the buffer allotted to the pipe.
5. read ( ) for deleting bytes from the front of the unread memory space in the buffer filled by writing into the pipe.
6. close ( ) for closing the device; it can be used again, from the beginning of its allocated buffer, only after opening it again.

Depending on the required flexibility, speed, memory space and the duration of interrupt-disabling periods, any one of these inter-task communication techniques can be chosen.

Problems with queues, mailboxes and pipes: They make it easy to share data and coordinate tasks, but they can also cause bugs:
1. Most RTOSs do not restrict which tasks can read from or write to any given queue, mailbox or pipe, so the tasks themselves must use them correctly; otherwise a shared-data bug results.
2. The RTOS cannot ensure that data written onto a queue, mailbox or pipe will be properly interpreted by the task that reads it.
3. Running out of space in a queue or pipe is usually a disaster for the software.

Timer functions: Most embedded systems must keep track of the passage of time. Timing is often a critical factor, and tracking time also helps reduce the power consumption of the system. A timer function is a simple service that delays a task for a period of time: it blocks the task until the predetermined period of time expires.


A standard RTOS timer function is TaskDelay( ), which blocks the calling task for the period given in its argument. Timer functions are accurate only to the nearest system tick: the RTOS relies on a single hardware timer that interrupts periodically, typically every few milliseconds. The example shown below illustrates timer-function accuracy.

Events: RTOSes also offer management of events. An event is essentially a Boolean flag that tasks can set or reset, and that other tasks can wait for. For example, when the user pulls the trigger on a cordless bar-code scanner, the task that turns on the laser scanning mechanism and tries to recognize the bar code must start. Some features of events are:
1. More than one task can block waiting for the same event, and the RTOS will unblock all of them when it occurs.
2. RTOSs typically manage events in groups, and a task can wait for any subset of events within a group.
3. Different RTOSs deal in different ways with resetting an event after it has occurred and the waiting tasks have been unblocked: some reset events automatically, others require the task software to do it.

Comparison of the inter-task communication methods:
1. Semaphores are the fastest and simplest method, but they carry little information to the tasks.
2. Events are a little more complicated and slightly slower than semaphores, but a task can wait for several events at once, whereas it can wait for only one semaphore; because of this, many RTOS applications use events instead of semaphores.
3. Queues, mailboxes and pipes can carry a lot of information, but they take more processor time and offer more opportunities for bugs in the code.

Memory Management: When a task is created, the memory-manager service of the RTOS allocates memory addresses to the task by mapping its address space. The memory manager of the OS has to be secure, robust and well protected. Some RTOSs offer the C functions malloc and free, but they are slow and their execution time is unpredictable, so RTOS designers avoid them. Typical RTOS functions for memory management are getbuf and reqbuf, which allocate a free memory buffer to the calling task.

To keep good accuracy, the system tick should be as short as possible (a high system-clock frequency). It is also possible to provide separate hardware timers for better accuracy; with separate hardware timers, any number of timings can be handled simultaneously.


The difference between these functions is: reqbuf returns a NULL pointer if no memory buffer is available, whereas getbuf blocks the calling task until a buffer becomes available. Some memory-management strategies are:
1. Fixed-block allocation
2. Dynamic block allocation
3. Dynamic page allocation
4. Dynamic data-memory allocation
5. Dynamic address relocation
6. Multiprocessor memory allocation
7. Memory protection for OS functions
8. Memory protection among tasks
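The reqbuf behaviour, combined with the first strategy in the list (fixed-block allocation), can be sketched as follows. The names my_reqbuf and my_relbuf are made up; a single-threaded sketch cannot show getbuf's blocking behaviour:

```c
#include <assert.h>
#include <stddef.h>

/* Fixed-block allocation sketch: a free list of equal-sized buffers.
 * my_reqbuf returns NULL when the pool is empty, which is the reqbuf
 * behaviour described in the notes (getbuf would instead block). */
#define NBUF     3
#define BUF_SIZE 32

static unsigned char pool[NBUF][BUF_SIZE];
static void *free_list[NBUF];
static size_t free_count;

void pool_init(void) {
    for (size_t i = 0; i < NBUF; i++)
        free_list[i] = pool[i];
    free_count = NBUF;
}

void *my_reqbuf(void) {
    if (free_count == 0)
        return NULL;                 /* no buffer available */
    return free_list[--free_count];
}

void my_relbuf(void *buf) {          /* return a buffer to the pool */
    free_list[free_count++] = buf;
}
```

Because every block has the same size, allocation and release are constant-time and free of fragmentation, which is why the execution time is predictable where malloc's is not.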

Interrupt Service Routines (ISRs) in an RTOS: ISRs in most RTOS environments must follow two rules that do not apply to task code:
1. Rule 1: An interrupt routine must not call any RTOS function that might block the caller. Therefore an ISR must not take semaphores, read from empty queues or mailboxes, wait for events, and so on.
2. Rule 2: An interrupt routine may not call any RTOS function that might cause the RTOS to switch tasks, unless the RTOS knows that an ISR, and not a task, is executing. This means that, unless the RTOS takes special precautions, an ISR may not write to mailboxes or queues on which tasks may be waiting, set events, release semaphores, and so on.


An example that violates rule 1:

This code violates rule 1: if the interrupt occurs while vTaskTemperatures holds the semaphore and the ISR then calls GetSemaphore, the RTOS notices that the semaphore is already taken and blocks the ISR, which also blocks the interrupted task, so the semaphore can never be released. The solution to this type of problem is to use a queue: the task reads the shared data from the queue, and the ISR writes the data to the queue. Consider the following figure to understand rule 2:

The ISR interrupts the low-priority task and calls the RTOS to send a message to a mailbox (legal under rule 1, since that function can't block). After the ISR, the tasks should execute according to their priorities.


What really happens is shown below:

If the higher-priority task is blocked waiting for a message and the ISR sends one to the mailbox, the RTOS unblocks the higher-priority task and runs it at once, doing nothing further for the interrupted ISR; only after the higher-priority task is done does execution return to the lower-priority task. This problem can be solved in different ways; one solution is shown below:

Here the RTOS intercepts all interrupts: it suspends the low-priority task, runs the ISR, holds the messages the ISR writes until the ISR completes, and only then switches among the tasks according to their priorities.

Designing with an RTOS: When designing an embedded system around an RTOS, the speed of the system is a critical factor. For hard real-time systems the deadlines are absolute; soft and firm real-time systems demand good response but allow some compromise on deadlines. The designer must have:
1. Some idea of the hardware.
2. The speed of the processor.
3. General software-engineering skills for designing embedded-system software.
4. The design tools and methodology used for embedded-system design.

Design principles: Embedded systems commonly do nothing until the passage of time or an external event requires a response. In a typical embedded system, RTOS tasks spend most of their time blocked, waiting for an interrupt routine or another task to send a message, cause an event, free a semaphore, and so on.


When an interrupt occurs, the ISR uses RTOS services to signal one or more tasks, each of which does its work and each of which may signal other tasks. In this way, each interrupt can create a cascade of signals and task activity.

Write short ISRs: There are two reasons to keep ISRs short:
1. ISR code effectively has higher priority than any task code, so a long ISR slows the response of every task, even the highest-priority one.
2. ISRs tend to be more bug-prone and harder to debug than task code.
An interrupt often demands several responses from the software: reset the port hardware, save the received data, reset the interrupt controller, analyze the received data, formulate a response, and so on. The deadlines for these may all be different, so it is better to keep in the ISR only the code whose deadline requires it.

Number of RTOS tasks: In an RTOS-based design the system's work is divided among a number of tasks. Having more tasks brings both advantages and disadvantages.
The advantages:
1. With more tasks you have better control of the relative response times of the different parts of the system.
2. With more tasks the system becomes more modular.
3. With more tasks data encapsulation becomes more effective.
The disadvantages:
1. With more tasks, more data is likely to be shared among two or more tasks; this requires more semaphores, and hence more microprocessor time and more semaphore-related bugs.
2. With more tasks come more requirements to pass messages from one task to another through pipes, mailboxes, queues and so on; this also takes more microprocessor time and offers more chances for bugs.
3. Each task needs its own stack, so more tasks need more memory.
4. With more tasks, the RTOS spends more microprocessor time switching between tasks, saving and restoring contexts.
5. With more tasks come more calls to the RTOS.

Priorities for tasks: The multiple tasks in an RTOS architecture must have priorities, and higher priorities must be assigned to the tasks with tighter response-time requirements. For example, a button-press task needs a faster response than a data-processing task, so the code and the priorities of the two tasks must differ. As another example, a task for shutting the system down must have a higher priority than the other tasks.

Tasks for encapsulation: A separate task should deal with hardware that is shared by different parts of the system. For example, the code that handles the buttons on the front panel of a printer uses the printer's display to respond to the user, while the code of the other task, which moves sheets of paper through the printing mechanism, uses the display to report empty paper trays and paper jams. If both try to write to the display at the same time, a wrong message may appear. Such problems can be solved by a single task that controls the display hardware.


When other tasks want to send information to the display, they send messages to the display task, and the RTOS ensures that the messages are properly queued. A typical example is shown below:

This type of data sharing is called task data encapsulation.

Creating and destroying tasks: Most RTOSs allow tasks to be created and destroyed while the system is running. There are two good reasons to avoid this:
1. The functions that create and destroy tasks are among the most time-consuming RTOS functions, much slower than taking a semaphore or writing a message to a mailbox. Using them hurts the system's throughput.
2. Creating a task is a relatively reliable operation, but it is difficult to destroy a task without leaving behind loose ends that cause bugs. For example, if a destroyed task owned a semaphore that another task needs, that other task may block forever.

Scheduling: Given a set of tasks J = {J1, J2, ..., Jn}, a schedule is an assignment of tasks to the processor so that each task is executed until completion. A schedule is called feasible if all tasks can be completed subject to a set of specified constraints. A set of tasks is called schedulable if there exists at least one algorithm that can produce a feasible schedule.

Scheduling constraints: The following types of constraints are considered:
1. Timing constraints: meet the deadlines.
2. Precedence constraints: respect prerequisites.
3. Resource constraints: access only available resources.

Task characterization:
- Arrival time ai: the time at which Ji becomes ready for execution; also called release time ri.
- Computation time Ci: the time necessary for executing the task without interruption.
- Deadline di: the time before which the task has to complete its execution.
- Start time si: the time at which Ji starts its execution.
- Finishing time fi: the time at which Ji finishes its execution.
- Lateness Li = fi - di: the delay of the task's completion with respect to its deadline.
- Tardiness Ei = max(0, Li): the time a task stays active after its deadline.
- Laxity or slack time Xi = di - ai - Ci: the maximum time a task can be delayed after its activation and still complete before its deadline.

Periodic and aperiodic tasks: A periodic task consists of an infinite sequence of identical activities, called instances or jobs, regularly activated at a constant rate. The activation time of the first instance is called the phase, and Ti is the period of the task. Each periodic task can be characterized by Ci, Ti and Di, which are constant for each instance; in most cases the relative deadline coincides with the period (Di = Ti). An aperiodic task Ji also consists of an infinite sequence of identical instances, but its activations are not regular. Examples of periodic (fig. a) and aperiodic (fig. b) tasks:


Scheduling metrics:


RTOS Scheduling: Commonly used algorithms are:
1. Cooperative scheduling
2. Static Cyclic Scheduling (SCS)
3. Earliest Deadline First (EDF)
4. Rate Monotonic Scheduling (RMS)
5. Deadline Monotonic Scheduling (DMS)

RTOS tasks may be aperiodic or periodic, and the scheduling algorithms are classified accordingly.
Aperiodic task scheduling:
1. Non-preemptive methods: a) EDD (Earliest Due Date) b) LDF (Latest Deadline First)
2. Preemptive methods: a) EDF (Earliest Deadline First) b) EDF* (EDF with precedence constraints)
Periodic task scheduling:
1. Static-priority assignments: a) RM (Rate Monotonic) b) DM (Deadline Monotonic)
2. Dynamic-priority assignments: a) EDF b) EDF*

Aperiodic task scheduling: Each of the different algorithms represents the solution to a particular scheduling problem. Algorithms with higher timing complexity achieve better optimization; complexity can be reduced, but at the cost of poorer optimization.


Non-preemptive methods:
1. Earliest Due Date (EDD), also known as Jackson's algorithm: given a set of n independent tasks with synchronous arrival times, executing the tasks in order of non-decreasing deadlines is optimal with respect to maximum lateness (Jackson's rule).

Proof: Let σ be a schedule produced by any algorithm A. If A differs from EDD, then there exist two tasks Ja and Jb, with deadlines da <= db, such that Jb immediately precedes Ja in σ. Now let σ' be the schedule obtained from σ by exchanging Ja and Jb, so that Ja immediately precedes Jb in σ'. Illustration:

In σ, the maximum lateness between Ja and Jb is Lmax(a,b) = fa - da. In σ', it is L'max(a,b) = max(L'a, L'b). Two cases are considered:

In both cases L'max(a,b) <= Lmax(a,b), so the interchange cannot increase the maximum lateness. Repeating the interchange, σ is transformed into the EDD schedule, which is therefore optimal.

Problem 1: Consider a set of five tasks, simultaneously activated at time t = 0, whose parameters (worst-case computation times Ci and deadlines di) are indicated in the table shown below.

Check whether the EDD algorithm produces a feasible schedule. Solution: The assumptions for applying EDD hold: the tasks have the same arrival time (synchronous arrivals), the tasks are independent, and EDD is non-preemptive.
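EDD itself reduces to a sort by deadline followed by a lateness check. A sketch (the task sets in the test are constructed examples, since the problem tables are not reproduced here):

```c
#include <assert.h>
#include <stddef.h>

/* EDD sketch: with synchronous arrivals, sort tasks by non-decreasing
 * deadline, execute them back to back, and report the maximum
 * lateness. The schedule is feasible iff the result is <= 0.
 * Sorts the array in place; assumes n >= 1. */
typedef struct { int C, d; } edd_task_t;

int edd_max_lateness(edd_task_t *t, size_t n) {
    /* insertion sort by deadline (non-decreasing) */
    for (size_t i = 1; i < n; i++) {
        edd_task_t key = t[i];
        size_t j = i;
        while (j > 0 && t[j - 1].d > key.d) { t[j] = t[j - 1]; j--; }
        t[j] = key;
    }
    int time = 0, lmax = t[0].C - t[0].d;
    for (size_t i = 0; i < n; i++) {
        time += t[i].C;                 /* finishing time fi */
        int L = time - t[i].d;          /* lateness Li = fi - di */
        if (L > lmax) lmax = L;
    }
    return lmax;
}
```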


The schedule, in non-decreasing deadline order J1, J5, J3, J4, J2, is shown below:

The lateness of each task is:
L1 = 1 - 3 = -2
L2 = 8 - 10 = -2
L3 = 4 - 7 = -3
L4 = 7 - 8 = -1
L5 = 3 - 5 = -2
The maximum lateness is Lmax = L4 = -1. Since the maximum lateness is negative, all tasks complete within their deadlines and the schedule is feasible.

Problem 2: Consider a set of five tasks, simultaneously activated at time t = 0, whose parameters are indicated in the table shown below.

Check whether the EDD algorithm produces a feasible schedule. Solution: The lateness values are L1 = -1, L2 = -1, L3 = -2, L4 = 2, L5 = 0. The maximum lateness is Lmax = L4 = 2; it is positive, so the schedule is not feasible.

2. Latest Deadline First (LDF): An optimal algorithm that minimizes the maximum lateness of tasks with precedence relations and simultaneous arrival times. It executes in polynomial time with respect to the number of tasks in the set. Rule: given n tasks and their precedence relations as a directed acyclic graph (DAG), LDF builds the scheduling queue from tail to head: among the tasks without successors, or whose successors have all been selected, LDF selects the task with the latest deadline to be scheduled last. This procedure is repeated until all tasks in the set have been selected. At run time, tasks are extracted from the head of the queue, so that the first task inserted in the queue is executed last, whereas the last task inserted is executed first.
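The tail-to-head selection rule can be sketched as follows. The three-task DAG in the test is a hypothetical example of mine; the sketch assumes the graph is acyclic:

```c
#include <assert.h>
#include <stdbool.h>

/* LDF sketch: build the schedule from tail to head. At each step,
 * among the tasks whose successors have all been selected, pick the
 * one with the latest deadline; it is placed as late as possible.
 * Assumes an acyclic precedence graph. */
#define MAXN 8

/* succ[i][j] != 0 means task i precedes task j.
 * order[] receives the run-time execution order (head first). */
void ldf_schedule(int n, int succ[MAXN][MAXN], const int d[MAXN], int order[MAXN]) {
    bool selected[MAXN] = { false };
    for (int slot = n - 1; slot >= 0; slot--) {   /* fill from the tail */
        int pick = -1;
        for (int i = 0; i < n; i++) {
            if (selected[i]) continue;
            bool leaf = true;                     /* all successors selected? */
            for (int j = 0; j < n; j++)
                if (succ[i][j] && !selected[j]) { leaf = false; break; }
            if (leaf && (pick < 0 || d[i] > d[pick]))
                pick = i;                         /* latest deadline goes last */
        }
        selected[pick] = true;
        order[slot] = pick;
    }
}
```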


Problem 1: Consider the parameters of six tasks together with their precedence graph shown below:

Check whether the LDF algorithm produces a feasible schedule. Solution: LDF constructs the schedule from tail to head using a queue:
1. From the current DAG, pick the task that has the latest deadline among those that do not precede any remaining task (a leaf).
2. Remove the selected task from the DAG and put it into the queue.
Repeat the two steps until the DAG contains no more tasks. The queue is then a potentially feasible schedule; the last task selected is run first.


The scheduling order is therefore: J1, J2, J4, J3, J5, J6. The LDF schedule:


Problem 2: Given the precedence graph and the table of task execution times and deadlines, determine the Latest Deadline First (LDF) schedule. Is the schedule feasible?

Solution: the queue is built from tail to head, step by step:

J8, J7


J8, J7, J6

J8, J7, J6, J4, J5

J8, J7, J6, J4, J5, J3
J8, J7, J6, J4, J5, J3, J2, J1
Scheduling order: J1, J2, J3, J5, J4, J6, J7, J8

Lmax = L4 = L5 = L7 = L8 = 0, hence feasible.

Preemptive methods:
1. EDF: If tasks are not synchronous but can have arbitrary arrival times, preemption becomes an important factor. Scheduling with preemption is always easier than its non-preemptive counterpart: in a non-preemptive scheduling algorithm, the scheduler must ensure that a newly arriving task will never need to interrupt a currently executing task in order to meet its own deadline, and this guarantee requires a considerable amount of searching. If preemption is allowed, the searching is unnecessary, since a task can simply be interrupted when a more important task arrives.

EDF is also called Horn's algorithm. It is similar to EDD, but preemptive.


Problem 1: Given are five tasks with arrival times, execution times and deadlines according to the following table. Determine the Earliest Deadline First (EDF) schedule. Is the schedule feasible?

Solution: EDF is optimal under the following assumptions: the scheduling algorithm is preemptive, the tasks are independent, and the tasks may have arbitrary arrival times. Horn's rule says that, given a set of n independent tasks with arbitrary arrival times, any algorithm that at any instant executes the task with the earliest absolute deadline among the ready tasks is optimal with respect to maximum lateness. The schedule is shown below:

Lateness:
L1 = 12 - 16 = -4
L2 = 3 - 7 = -4
L3 = 7 - 8 = -1
L4 = 10 - 11 = -1
L5 = 16 - 18 = -2
Lmax = L3 = L4 = -1, hence feasible.
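Horn's rule can be sketched as a unit-time simulation that at every instant runs the ready task with the earliest absolute deadline. The task set in the test is a constructed example, not the one from the table:

```c
#include <assert.h>

/* EDF (Horn) sketch: simulate one time unit at a time; at every
 * instant run the ready task with the earliest absolute deadline.
 * Returns the maximum lateness; feasible iff the result is <= 0. */
#define EDF_MAXN 8

typedef struct { int a, C, d; } edf_task_t;   /* arrival, computation, deadline */

int edf_max_lateness(edf_task_t *t, int n) {
    int remaining[EDF_MAXN], finish[EDF_MAXN];
    int done = 0, time = 0;
    for (int i = 0; i < n; i++) remaining[i] = t[i].C;
    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)          /* earliest deadline among ready */
            if (remaining[i] > 0 && t[i].a <= time &&
                (pick < 0 || t[i].d < t[pick].d))
                pick = i;
        if (pick < 0) { time++; continue; }  /* idle: no task has arrived yet */
        remaining[pick]--;                   /* run pick for one unit */
        time++;
        if (remaining[pick] == 0) { finish[pick] = time; done++; }
    }
    int lmax = finish[0] - t[0].d;
    for (int i = 1; i < n; i++)
        if (finish[i] - t[i].d > lmax) lmax = finish[i] - t[i].d;
    return lmax;
}
```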


2. EDF with precedence constraints (EDF*): Consider a set of n tasks with precedence constraints and dynamic activations. These tasks can be scheduled, in polynomial time, only if the tasks are preemptable. The approach transforms the set J of dependent tasks into a set J* of independent tasks by modifying their timing parameters; the tasks are then scheduled with the ordinary EDF algorithm. The transformation guarantees that J is schedulable, with the precedence constraints obeyed, if and only if J* is schedulable. The transformation modifies release times and deadlines so that each task cannot start before its predecessors and cannot preempt its successors.

Modification of the release times ri: Given two tasks Ja and Jb such that Ja -> Jb (that is, Ja is an immediate predecessor of Jb), in any valid schedule that meets the precedence constraints the following conditions must be satisfied:

Therefore, the release time rb of Jb can be replaced by the maximum of rb and (ra + Ca) without changing the problem. Let rb* be the new release time of Jb. Then rb* = max(rb, ra + Ca).

The algorithm implementation steps are:
1. For any initial node of the precedence graph, set ri* = ri.
2. Select a task Ji whose release time has not been modified but whose immediate predecessors Jh have all had their release times modified. If no such task exists, exit.
3. Set ri* = max(ri, max{rh* + Ch : Jh → Ji}).
4. Return to step 2.



Modification of deadline di
Given two tasks Ja and Jb such that Ja → Jb (that is, Ja is an immediate predecessor of Jb), in any feasible schedule that meets the precedence constraints the following conditions must be satisfied: Ja must finish within its own deadline da, and Ja must finish early enough, by (db - Cb), to leave room for Jb.

Therefore, the deadline da of Ja can be replaced by the minimum of da and (db - Cb) without changing the problem. Let da* be the new deadline of Ja. Then da* = min(da, db - Cb).

The algorithm implementation steps are:
1. For any terminal node of the precedence graph, set di* = di.
2. Select a task Ji whose deadline has not been modified but whose immediate successors Jk have all had their deadlines modified. If no such task exists, exit.
3. Set di* = min(di, min{dk* - Ck : Ji → Jk}).
4. Return to step 2.
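A minimal Python sketch of the two modification passes described above (the task and graph representations here are illustrative assumptions, not from the notes): release times are propagated forward through the precedence graph and deadlines backward.

```python
# EDF* transformation sketch: r_b* = max(r_b, r_a* + C_a) forward,
# d_a* = min(d_a, d_b* - C_b) backward over the precedence DAG.
def topo_order(preds):
    """Simple topological order for a DAG given predecessor lists."""
    order, done = [], set()
    while len(order) < len(preds):
        for j in preds:
            if j not in done and all(a in done for a in preds[j]):
                order.append(j)
                done.add(j)
    return order

def edf_star_transform(r, C, d, edges):
    """edges: list of (a, b) pairs meaning a -> b (a precedes b)."""
    preds = {j: [a for a, b in edges if b == j] for j in r}
    succs = {j: [b for a, b in edges if a == j] for j in r}
    r2, d2 = dict(r), dict(d)
    # Release times: predecessors are finalized before their successors.
    for j in topo_order(preds):
        for a in preds[j]:
            r2[j] = max(r2[j], r2[a] + C[a])
    # Deadlines: successors are finalized before their predecessors.
    for j in reversed(topo_order(preds)):
        for b in succs[j]:
            d2[j] = min(d2[j], d2[b] - C[b])
    return r2, d2

# Illustrative two-task chain A -> B.
r = {"A": 0, "B": 1}
C = {"A": 3, "B": 2}
d = {"A": 10, "B": 8}
r2, d2 = edf_star_transform(r, C, d, [("A", "B")])
print(r2, d2)
```

For the chain A → B, B's release time is pushed to rA + CA = 3 and A's deadline is pulled in to dB - CB = 6, after which plain EDF can schedule the two tasks independently.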

Problem: Given are seven tasks A, B, C, D, E, F, G with the following precedence constraints: A → C, B → C, C → E, D → F, B → D, C → F, D → G. All tasks arrive at time t0 = 0, have a common deadline d = 20, and have the following execution times:

1. Construct the precedence graph for this task set.
2. Draw the resulting EDF* schedule and compute the average response time of the tasks.

Solution:
1. The precedence graph for the given task set is:



2. EDF* schedules tasks with precedence constraints. The release time and deadline of each task are modified so that all precedence constraints are satisfied. This transforms the scheduling problem into one without precedence constraints, which can then be handled by a "normal" EDF scheduler. Several points must be taken into account while modifying the tasks' release times and deadlines:

Release time modification:
- a task must start execution not earlier than its release time
- a task must start execution not earlier than the minimum finishing time of its predecessors

Deadline modification:
- a task must finish within its deadline
- a task must finish not later than the maximum start time of its successors

The modified release times and deadlines are:

EDF* Scheduling:



Periodic Task Scheduling:
Periodic tasks typically arise from sensory data acquisition, low-level servoing, control loops, action planning, system monitoring, etc. Scheduling is priority driven and preemptive. Two types:
1. Static-priority based scheduling: RM and DM
2. Dynamic-priority based scheduling: EDF and EDF*

Processor utilization factor
Used to determine whether a set of periodic tasks is schedulable. For a set of n periodic tasks, the processor utilization factor U is the fraction of processor time spent in the execution of the task set. Since Ci/Ti is the fraction of processor time spent executing task τi, the utilization factor for n tasks is given by

U = Σ (i = 1..n) Ci/Ti

This provides a measure of the computational load on the CPU due to the periodic task set. The CPU utilization can be increased by increasing the tasks' computation times or by decreasing their periods. There exists a maximum value of U below which the task set is schedulable and above which it is not. This limit depends on the task set and on the algorithm used to schedule the tasks. Let Uub(Γ, A) be the upper bound of the processor utilization factor for a task set Γ under a given algorithm A. When U = Uub(Γ, A), the set Γ is said to fully utilize the processor: Γ is schedulable by A, but any increase in the computation time of any task makes the set infeasible. Consider two tasks τ1 and τ2 with C1 = C2 = 2 and periods T1 = 4 and T2 = 6. Then Uub = 2/4 + 2/6 = 5/6 = 0.833. The schedule is shown below:
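As a quick check, the utilization of the two-task example can be computed with exact fractions:

```python
from fractions import Fraction

# U = sum(C_i / T_i); example from the text: C1 = C2 = 2, T1 = 4, T2 = 6.
def utilization(tasks):
    return sum(Fraction(c, t) for c, t in tasks)

U = utilization([(2, 4), (2, 6)])
print(U, float(U))   # 5/6, i.e. about 0.833
```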



If any execution time is increased by a small amount, the task set becomes infeasible. For a given algorithm A, the least upper bound Ulub(A) of the processor utilization factor is the minimum of the utilization factors over all task sets that fully utilize the processor:

Ulub(A) = min over Γ of Uub(Γ, A)

Ulub defines an important characteristic of a scheduling algorithm, useful for easily verifying the schedulability of a task set: any task set whose processor utilization factor is less than or equal to Ulub is schedulable by the algorithm. On the other hand, when Ulub < U ≤ 1.0, schedulability can be achieved only if the task periods are suitably related.

RATE MONOTONIC SCHEDULING
The Rate Monotonic (RM) scheduling algorithm is a simple rule that assigns priorities to tasks according to their request rates: tasks with higher request rates (that is, with shorter periods) have higher priorities. RM is a fixed-priority assignment: a priority Pi is assigned to each task before execution and does not change over time. RM is intrinsically preemptive: the currently executing task is preempted by a newly arrived task with a shorter period.

Optimality of RM Scheduling
Consider Γ = {τ1, τ2, ..., τn}, the set of periodic tasks ordered by increasing periods, with τn the task with the longest period. According to RM, τn is the task with the lowest priority. The response time of τn is delayed by the interference of the higher-priority tasks τi.
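RM priority assignment is just a sort by period; a minimal sketch (task names and periods are illustrative, with a smaller priority number meaning higher priority):

```python
# Assign RM priorities: shorter period -> higher priority (1 = highest).
def rm_priorities(periods):
    """periods: dict of task name -> period T_i."""
    ordered = sorted(periods, key=lambda name: periods[name])
    return {name: i + 1 for i, name in enumerate(ordered)}

prio = rm_priorities({"t1": 4, "t2": 6, "t3": 10})
print(prio)
```

Because the assignment depends only on the fixed periods, it can be computed once before execution, which is what makes RM a static-priority scheme.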


Advancing the release time of τi may increase the completion time of τn.


The response time of τn is largest (worst) when it is released simultaneously with τi. Consider a set of two periodic tasks τ1 and τ2, with T1 < T2. If priorities are not assigned according to RM, then task τ2 has the highest priority. The schedule is:


Case 1:

At critical instants, the schedule is feasible only if C1 + C2 ≤ T1.

If priorities are assigned according to RM, task τ1 receives the highest priority. To guarantee a feasible schedule, two cases must be considered. Let F = ⌊T2/T1⌋ be the number of periods of τ1 entirely contained in T2.
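For the non-RM assignment just described (τ2 highest priority), the feasibility condition at a critical instant, C1 + C2 ≤ T1, and the quantity F = ⌊T2/T1⌋ can both be checked directly. The numeric values below are illustrative, not from the notes:

```python
# Non-RM priority assignment: tau2 (longer period) has the highest priority,
# so at a critical instant tau1 runs only after C2 and must still fit in T1.
def feasible_non_rm(C1, C2, T1):
    return C1 + C2 <= T1

C1, T1, C2, T2 = 2, 4, 2, 6   # illustrative parameters
F = T2 // T1                   # whole periods of tau1 contained in T2
print(feasible_non_rm(C1, C2, T1), F)
```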

