Unit IV: RTOS Based Embedded System Design
4.1 Introduction to basic concepts of RTOS- Task, process & threads, Interrupt routines in RTOS
4.2 Multiprocessing and Multitasking
4.3 Preemptive and non-preemptive scheduling
4.4 Task communication: shared memory
4.5 Message passing
4.6 Interprocess communication
4.7 Synchronization between processes: semaphores, mailbox, pipes
4.8 Priority inversion, priority inheritance
4.9 Comparison of Real-time Operating Systems: VxWorks, μC/OS-II, RT Linux
4.1 Introduction to basic concepts of RTOS
• A variant of OS that operates in a constrained environment in which computer memory and processing
power are limited. Moreover, it often needs to provide its services in a definite amount of time.
• Hard, Soft & Firm RTOS
• Example RTOS: VxWorks, pSOS, Nucleus, RTLinux…
4.1 Operating System
System software that provides:
• Task Management— Creation, block, run, delay, suspend, resume, deletion
• Memory Management— Allocation, Freeing, De-allocation
• Device Management—Configure, Initiate, register with OS, read, listen, write, accept, deregister
• I/O Devices subsystems management—Display (LCD, Touch Screen), Printer, USB ports
• Network Devices subsystems management — Ethernet, Internet, WiFi
• Includes Middleware — TCP/IP stack for telecommunications
• Includes Key-applications — Clock, Mail, Internet Explorer, Search, Access to the Maps external
library
4.1 RTOS
A real-time OS (RTOS) is an intermediate layer between hardware devices and application software.
“Real-time” means meeting deadlines, not raw speed.
Advantages of RTOS in SoC design
• Shorter development time
• Less porting efforts
• Better reusability
Disadvantages
• More system resources needed
• Future development confined to the chosen RTOS
• A multitasking operating system with hard or soft real-time constraints
• An OS for systems that have time limits for servicing tasks and interrupts
• Enables definition of time constraints
• Enables execution of concurrent tasks (or processes or threads)
• Enables setting of scheduling rules and assignment of priorities
• Provides predictable latencies
4.1 Soft and Hard Real Time OS
• Soft real-time
– Tasks are performed by the system as fast as possible, but tasks don’t have to finish by
specific times
– Priority scheduling
– Example: multimedia streaming
• Hard real-time
– Tasks have to be performed correctly and on time
– Deadline scheduling
– Examples: aircraft controller, nuclear reactor controller
4.1 Structure of a RTOS
4.1 Components of RTOS
• The most important component of an RTOS is its kernel (monolithic or microkernel).
• The BSP, or Board Support Package, makes an RTOS target-specific: it is the processor-specific
code for the processor on which we want the RTOS to run.
4.1 RTOS KERNEL
4.1 Tasks
• A task is defined as an embedded program’s computational unit that runs on the CPU under the state
control of the kernel of an OS.
• It has a state, which at any instant is defined by its status (running, blocked, or finished).
• Its structure comprises its data, objects, resources, and control block.
• Tasks implemented as threads share global and “static” variables, file descriptors, signal
bookkeeping, the code area, and the heap, but each has its own thread status, program counter,
registers, and stack.
• This gives shorter creation and context-switch times, and faster IPC.
A context switch saves the state of the currently running task (registers, stack pointer, PC, etc.)
and restores that of the new task.
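As a rough sketch of the idea, the state saved and restored on a context switch can be modeled with a toy task control block (TCB). The field names (`pc`, `sp`, `registers`) are illustrative, not taken from any particular RTOS:

```python
from dataclasses import dataclass, field

# Toy Task Control Block: the kernel saves CPU state here on a context
# switch and reloads it when the task is dispatched again.
@dataclass
class TCB:
    name: str
    state: str = "ready"            # running / ready / blocked / finished
    pc: int = 0                     # saved program counter
    sp: int = 0                     # saved stack pointer
    registers: dict = field(default_factory=dict)

def context_switch(current: TCB, nxt: TCB, cpu: dict) -> dict:
    """Save the live CPU state into `current`, load `nxt`'s saved state."""
    current.pc, current.sp = cpu["pc"], cpu["sp"]
    current.registers = dict(cpu["regs"])
    current.state = "ready"
    nxt.state = "running"
    return {"pc": nxt.pc, "sp": nxt.sp, "regs": dict(nxt.registers)}

t1 = TCB("t1", state="running", pc=100, sp=0x1000, registers={"r0": 7})
t2 = TCB("t2", pc=200, sp=0x2000, registers={"r0": 42})
cpu = {"pc": 123, "sp": 0x0FF0, "regs": {"r0": 9}}
cpu = context_switch(t1, t2, cpu)
print(cpu["pc"], t1.pc, t2.state)   # 200 123 running
```

In a real kernel this is a few instructions of assembly; the point here is only which pieces of state move where.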
Priority-Based Kernels
• There are two types
– Non-preemptive
– Preemptive
Non-Preemptive Kernels
• Perform “cooperative multitasking”
– Each task must explicitly give up control of the CPU
– This must be done frequently to maintain the illusion of concurrency
• Asynchronous events are still handled by ISRs
– ISRs can make a higher-priority task ready to run
– But ISRs always return to the interrupted task
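Cooperative multitasking can be sketched with Python generators: each "task" must explicitly give up the CPU with `yield`, mirroring a non-preemptive kernel that only switches tasks when the running task volunteers. This is a toy model, not a real scheduler:

```python
from collections import deque

trace = []

def task(name, steps):
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield                          # explicit, voluntary CPU release

def run(tasks):
    ready = deque(tasks)               # simple ready queue
    while ready:
        t = ready.popleft()
        try:
            next(t)                    # run until the task yields...
            ready.append(t)            # ...then requeue it
        except StopIteration:
            pass                       # task ran to completion; drop it

run([task("A", 2), task("B", 2)])
print(trace)   # ['A0', 'B0', 'A1', 'B1'] -- switching only at yield points
```

If a task never yields, no other task ever runs; that is exactly the "must release the CPU frequently" obligation described above.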
Advantages of Non-Preemptive Kernels
• Interrupt latency is typically low
• Can use non-reentrant functions without fear of corruption by another task
– Because each task can run to completion before it relinquishes the CPU
– However, non-reentrant functions should not be allowed to give up control of the CPU
• Task-level response time is determined by the duration of the longest task, which is much lower than with foreground/background systems
• Less need to guard shared data through the use of semaphores
– However, this rule is not absolute
– Shared I/O devices can still require the use of mutual exclusion semaphores
– A task might still need exclusive access to a printer
Disadvantages of Non-Preemptive Kernels
• Responsiveness
– A higher priority task might have to wait for a long time
– Response time is nondeterministic
• Very few commercial kernels are non-preemptive
Preemptive Kernels
• The highest-priority task ready to run is always given control of the CPU
– If an ISR makes a higher-priority task ready, the higher-priority task is resumed (instead
of the interrupted task)
• Most commercial real-time kernels are preemptive
Advantages of Preemptive Kernels
• Execution of the highest-priority task is deterministic
• Task-level response time is minimized
Disadvantages of Preemptive Kernels
• Should not use non-reentrant functions unless exclusive access to these functions is ensured
Non-Preemptive Scheduling
• Why non-preemptive?
Non-preemptive scheduling is more efficient than preemptive scheduling, since preemption incurs
context-switching overhead, which can be significant in fine-grained multithreading systems.
Basic Real-Time Scheduling
• First Come First Served (FCFS)
• Round Robin (RR)
• Shortest Job First (SJF)
First Come First Served (FCFS)
• Simple “first in first out” queue
• Long average waiting time
• Penalizes I/O-bound processes
• Non preemptive
Round-Robin Scheduling
Round Robin (RR)
• FCFS + preemption with time quantum
• Performance (average waiting time) depends strongly on the size of the time quantum.
Shortest Job First (SJF)
• Optimal with respect to average waiting time.
• Requires profiling of the execution times of tasks.
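The FCFS vs. SJF trade-off can be shown with a few lines of arithmetic. Assuming all jobs arrive at time 0, the burst times below are the classic illustrative values, not from any real workload:

```python
# Average waiting time for a run order of CPU bursts (all arrive at t=0).
def avg_wait(bursts):
    wait = elapsed = 0
    for b in bursts:
        wait += elapsed          # this job waited for every job before it
        elapsed += b             # then occupies the CPU for its burst
    return wait / len(bursts)

bursts = [24, 3, 3]
fcfs = avg_wait(bursts)          # FCFS runs them in arrival order
sjf = avg_wait(sorted(bursts))   # SJF runs the shortest bursts first
print(fcfs, sjf)                 # 17.0 3.0
```

Running the long burst first makes every short job wait behind it, which is exactly why SJF is optimal for average waiting time (and why it needs the burst-time profiling noted above).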
Shared Memory Communication
• Communication occurs by “simply” reading/writing to a shared address space
– Really low overhead communication
– Introduces complex synchronization problems
Message Passing Communication
• Messages are collection of data objects and their structures
• Messages have a header containing system dependent control information and a message body
that can be fixed or variable size.
• When a process interacts with another, two requirements have to be satisfied.
Message Passing Communication
Synchronization and Communication.
Fixed Length
• Easy to implement
• Minimizes processing and storage overhead.
Variable Length
• Requires dynamic memory allocation, so fragmentation could occur.
Basic Communication Primitives
• Two generic message passing primitives for sending and receiving messages.
send (destination, message)
receive (source, message) source or dest={ process name, link, mailbox, port}
Addressing - Direct and Indirect
1) Direct Send/ Receive communication primitives
Communication entities can be addressed by process names (global process identifiers)
Global Process Identifier can be made unique by concatenating the network host address with the locally
generated process id. This scheme implies that only one direct logical communication path exists
between any pair of sending and receiving processes.
Symmetric Addressing: Both the processes have to explicitly name in the communication primitives.
Asymmetric Addressing: Only sender needs to indicate the recipient.
2) Indirect Send/ Receive communication primitives
Messages are not sent directly from sender to receiver, but sent to shared data structure.
Multiple clients might request services from one of multiple servers. We use mail boxes.
Abstraction of a finite size FIFO queue maintained by kernel
P 2 P 2 P 3 ... P 4
m
m m m
P 1 P 1
unicast m u lticast
Interprocess communication
• OS provides interprocess communication mechanisms:
– various efficiencies;
– communication power.
• Interprocess communication (IPC): OS provides mechanisms so that processes can pass data.
• Two types of semantics:
– blocking: sending process waits for response;
– non-blocking: sending process continues.
– Shared memory:
– processes have some memory in common;
– must cooperate to avoid destroying/missing messages.
– Message passing:
– processes send messages along a communication channel---no common address space.
Blocking, deadlock, and timeouts
• Blocking operations issued in the wrong sequence can cause deadlocks.
• Deadlocks should be avoided. Alternatively, timeout can be used to detect deadlocks.
Deadlocks and Timeouts
• Connect and receive operations can result in indefinite blocking
• For example, a blocking connect request can result in the requesting process to be suspended
indefinitely if the connection is unfulfilled or cannot be fulfilled, perhaps as a result of a
breakdown in the network .
• It is generally unacceptable for a requesting process to “hang” indefinitely. Indefinite blocking
can be avoided by using timeout.
Indefinite blocking may also be caused by a deadlock
Semaphores
• A semaphore is a key that your code acquires in order to continue execution
• If the key is already in use, the requesting task is suspended until the key is released
• There are two types
– Binary semaphores
• 0 or 1
– Counting semaphores
• >= 0
• Initialize (or create)
– Value must be provided
– Waiting list is initially empty
• Wait (or pend)
– Used for acquiring the semaphore
– If the semaphore is available (the semaphore value is positive), the value is
decremented, and the task is not blocked
– Otherwise, the task is blocked and placed in the waiting list
– Most kernels allow you to specify a timeout
– If the timeout occurs, the task will be unblocked and an error code will be returned to
the task
• Signal (or post)
– Used for releasing the semaphore
– If no task is waiting, the semaphore value is incremented
– Otherwise, make one of the waiting tasks ready to run but the value is not incremented
– Which waiting task to receive the key?
• Highest-priority waiting task
• First waiting task
Semaphore
Semaphore serves as a key to the resource
A flag represent the status of the resource
Prevent re-entering Critical Region
Can extent to counting Semaphore
Priority inversion
• Typical characterization of priority inversion
– A medium-priority task preempts a lower-priority task which is using a shared resource
on which a higher priority task is blocked
– If the higher-priority task would be otherwise ready to run, but a medium-priority task
is currently running instead, a priority inversion is said to occur
Priority Inheritance
Basic protocol [Sha 1990]
1. A job J uses its assigned priority, unless it is in its CS and blocks higher priority jobs
In which case, J inherits PH, the highest priority of the jobs blocked by J
When J exits the CS, it resumes the priority it had at the point of entry into the CS
2. Priority inheritance is transitive
Advantage
• Transparent to scheduler
Disadvantage
• Deadlock possible in the case of bad use of semaphores
• Chained blocking: if P accesses n resources locked by processes with lower priorities, P must wait
for n CS
Chained Blocking
• A weakness of the priority inheritance protocol is that it does not prevent chained blocking.
• Suppose a medium priority thread attempts to take a mutex owned by a low priority thread, but
while the low priority thread's priority is elevated to medium by priority inheritance, a high
priority thread becomes runnable and attempts to take another mutex already owned by the
medium priority thread. The medium priority thread's priority is increased to high, but the high
priority thread now must wait for both the low priority thread and the medium priority thread to
complete before it can run again.
• The chain of blocking critical sections can extend to include the critical sections of any threads
that might access the same mutex. Not only does this make it much more difficult for the system
designer to compute overhead, but since the system designer must compute the worst case
overhead, the chained blocking phenomenon may result in a much less efficient system.
• These blocking factors are added into the computation time for tasks in the RMA analysis,
potentially rendering the system unschedulable.