ES UNIT 2 NOTES
Port 0 is used as multiplexed address and data lines. It carries the lower-order 8-bit address (A0-A7) in the initial T cycle and is then used as the data bus (D0-D7).
The lower-order 8-bit address is latched using an external latch and the ALE signal from the 8051.
Port 2 provides the higher-order 8-bit address (A8-A15).
PSEN is used to activate the output-enable signal of the external ROM/EPROM.
Note: RD & WR signals: generally the P3.6 (WR) and P3.7 (RD) pins of port 3 are used to generate the memory-read and memory-write signals. The remaining pins of port 3, i.e. P3.0-P3.5, can be used for other functions.
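As a small illustration of how this external bus is used from software, here is a minimal sketch assuming the Keil C51 toolchain (its reg51.h header and xdata keyword are toolchain-specific assumptions): variables placed in xdata live in external RAM and are accessed with MOVX cycles that drive the RD/WR signals described above.

#include <reg51.h>               /* Keil C51 SFR definitions for the 8051 */

unsigned char xdata buffer[16];  /* placed in external RAM, accessed via MOVX */

void main(void)
{
    unsigned char i;
    for (i = 0; i < 16; i++)     /* each write pulses the WR signal */
        buffer[i] = i;
    while (1);                   /* embedded programs never return */
}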
Solved Examples:
Example 1: Design a μController system using the 8051 to interface an external RAM of size 16K x 8.
Solution: Given, memory size = 16K. Since 2^14 = 16K, we require n = 14 address lines (A0-A13).
Address  A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0  Hex Addr
Start     0   0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0000H
End       0   0   1   1   1   1   1  1  1  1  1  1  1  1  1  1  3FFFH
Figure shows interfacing of 16k x 8 RAM to 8051.
Example 2: Interface an external ROM of size 4K x 8 to the 8051.
Solution: Given, memory size = 4K. Since 2^12 = 4K, we require 12 address lines (A0-A11).

Address  A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0  Hex Addr
Start     0   0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0000H
End       0   0   0   0   1   1   1  1  1  1  1  1  1  1  1  1  0FFFH

Figure shows interfacing of 4K x 8 ROM to 8051.
Example 3: Design a system using the 8051 to interface a 16K x 8 ROM and a 32K x 8 RAM.
Solution:
ROM (16K x 8), mapped from 0000H:

Address  A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0  Hex Addr
Start     0   0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0000H
End       0   0   1   1   1   1   1  1  1  1  1  1  1  1  1  1  3FFFH

RAM (32K x 8), mapped from 8000H:

Address  A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0  Hex Addr
Start     1   0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  8000H
End       1   1   1   1   1   1   1  1  1  1  1  1  1  1  1  1  FFFFH
Figure shows the interfacing of 16K x 8 ROM and 32K x 8 RAM with the 8051.
RTOS
Real-time operating systems (RTOS) are used in environments where a large number of
events, mostly external to the computer system, must be accepted and processed in a short
time or within certain deadlines. Examples of such applications include industrial control,
telephone switching equipment, flight control, and real-time simulation.
Advantages:
The advantages of real-time operating systems are as follows:
1. Maximum Consumption -
Maximum utilization of devices and the system, and thus more output from all the resources.
2. Task Shifting -
The time assigned for shifting between tasks in these systems is very small. For example,
shifting one task to another takes about 10 microseconds in older systems and about
3 microseconds in the latest systems.
3. Focus on Application -
The focus is on running applications, with less importance given to applications waiting in the queue.
4. Error Free -
These types of systems are designed to be error-free.
5. Memory Allocation -
Memory allocation is best managed in these types of systems.
Disadvantages:
The disadvantages of real-time operating systems are as follows:
1. Limited Tasks -
Very few tasks run simultaneously, and the system concentrates on only a few
applications in order to avoid errors.
2. Complex Algorithms -
The algorithms are very complex and difficult for the designer to write.
3. Thread Priority -
It is not good to set thread priorities, as these systems are not very prone to switching tasks.
Processes can have several different types of timing requirements imposed on them by the
application. The timing requirements on a set of processes strongly influence the type of
scheduling that is appropriate. A scheduling policy must define the timing requirements that
it uses to determine whether a schedule is valid. Before studying scheduling proper, we
outline the types of process timing requirements that are useful in embedded system design.
Two important requirements on processes: release time and deadline.
The release time is the time at which the process becomes ready to execute; this is not
necessarily the time at which it actually takes control of the CPU and starts to run. An
aperiodic process is by definition initiated by an event, such as external data arriving or data
computed by another process.
The release time is generally measured from that event, although the system may want to
make the process ready at some interval after the event itself. For a periodically executed
process, there are two common possibilities.
In simpler systems, the process may become ready at the beginning of the period. More
sophisticated systems, such as those with data dependencies between processes, may set the
release time at the arrival time of certain data, at a time after the start of the period.
A deadline specifies when a computation must be finished. The deadline for an aperiodic
process is generally measured from the release time, since that is the only reasonable time
reference. The deadline for a periodic process may in general occur at some time other than
the end of the period.
Rate requirements are also fairly common. A rate requirement specifies how quickly
processes must be initiated.
The period of a process is the time between successive executions. For example, the period of
a digital filter is defined by the time interval between successive input samples.
The process’s rate is the inverse of its period. In a multirate system, each process executes at
its own distinct rate.
The most common case for periodic processes is for the initiation interval to be equal to the
period. However, pipelined execution of processes allows the initiation interval to be less
than the period. Figure 3.3 illustrates process execution in a system with four CPUs.
CPU Metrics
We also need some terminology to describe how the process actually executes.
The initiation time is the time at which a process actually starts executing on the CPU.
The completion time is the time at which the process finishes its work.
The most basic measure of work is the amount of CPU time expended by a process. The CPU
time of process i is called Ci. Note that the CPU time is not equal to the completion time
minus the initiation time; several other processes may interrupt execution. The total CPU time
consumed by a set of n processes is
T = ∑ Ci (summing over i = 1, ..., n)
We need a basic measure of the efficiency with which we use the CPU. The simplest and
most direct measure is utilization:
Utilization is the ratio of the CPU time that is being used for useful computations to the total
available CPU time. This ratio ranges between 0 and 1, with 1 meaning that all of the
available CPU time is being used for useful work. Utilization is often expressed as a
percentage. If we measure the total execution time T of all processes over an interval of time t,
then the CPU utilization is
U = T/t.
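For instance (illustrative numbers, not from the source): if the processes together consume T = 60 ms of CPU time over an interval of t = 100 ms, then U = 60/100 = 0.6, i.e., 60% utilization.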
Context switching allows a single CPU to be shared across all processes by saving the status
of each task as it is suspended. When a process is reloaded into the system, its execution
resumes at the same point at which it was stopped.
Following are the reasons that describe the need for context switching in the Operating
system.
1. One process cannot switch directly to another in the system. Context switching helps
the operating system switch between multiple processes so they can share the CPU to
accomplish their tasks, storing each process's context so that its service can be resumed
at the same point later. If we did not store the context of the currently running process,
its data could be lost while switching between processes.
2. If a high-priority process arrives in the ready queue, the currently running process is
stopped so that the high-priority process can complete its task in the system.
3. If a running process requires I/O resources, the current process is switched out so that
another process can use the CPU. When the I/O requirement is met, the old process goes
back into the ready state to wait for its turn on the CPU. Context switching stores the
state of the process so that it can resume its task; otherwise, the process would need to
restart its execution from the initial level.
4. If an interrupt occurs while a process is running, the process status (its registers) is
saved using context switching. After the interrupt is resolved, the process switches from
the wait state to the ready state and later resumes its execution at the point where the
interrupt occurred.
5. A context switching allows a single CPU to handle multiple process requests
simultaneously without the need for any additional processors.
Suppose that the contexts of multiple processes are stored in Process Control Blocks (PCBs).
One process is in the running state, executing its task on the CPU. As it runs, another
process arrives in the ready queue with a higher priority for completing its task. Here
context switching is used to switch the current process with the new process that requires
the CPU to finish its task. While switching, the context switch saves the status of the old
process in its registers. When that process is later reloaded into the CPU, its execution
starts at the point where it was stopped by the new process. If we did not save the state of
the process, it would have to start its execution from the initial level. In this way, context
switching helps the operating system switch between processes, storing and reloading each
process when it is required to execute its task.
Context switching is triggered in the following situations:
1. Interrupts
2. Multitasking
3. Kernel/User switch
Interrupts: when the CPU requests data to be read from a disk, for example, an interrupt
occurs, and context switching automatically transfers control to the part of the system that
handles the interrupt, which requires less time.
Multitasking: when the CPU switches from one task to another, the state of the outgoing
task is saved so that it can be resumed later.
Kernel/User Switch: performed in operating systems when switching between user mode
and kernel mode.
A PCB (Process Control Block) is a data structure used in the operating system to store all
the information related to a process. For example, when a process is created in the
operating system, its updated information, its switching information, and its termination
status are all recorded in its PCB.
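A minimal sketch of what such a PCB might hold (the field names are illustrative assumptions, not the layout of any particular operating system):

/* Possible process states */
typedef enum { READY, RUNNING, WAITING, TERMINATED } proc_state_t;

/* Minimal Process Control Block: the context saved on a switch */
typedef struct {
    int           pid;             /* process identifier              */
    proc_state_t  state;           /* ready / running / waiting / ... */
    unsigned long program_counter; /* where execution should resume   */
    unsigned long registers[8];    /* saved general-purpose registers */
    int           priority;        /* scheduling priority             */
} pcb_t;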
There are several steps involved in context switching between processes. The following diagram
represents the context switching of two processes, P1 to P2, when an interrupt, an I/O need, or
a priority-based process arrival occurs in the ready queue.
As we can see in the diagram, initially, the P1 process is running on the CPU to execute its
task, and at the same time, another process, P2, is in the ready state. If an error or interruption
has occurred or the process requires input/output, the P1 process switches its state from
running to the waiting state. Before changing the state of the process P1, context switching
saves the context of the process P1 in the form of registers and the program counter to
the PCB1. After that, it loads the state of the P2 process from the ready state of the PCB2 to
the running state.
1. First, context switching needs to save the state of process P1, in the form of the program
counter and the registers, to its PCB (Process Control Block), while P1 is still in the running state.
2. Now update PCB1 for process P1 and move the process to the appropriate queue, such as the
ready queue, the I/O queue, or the waiting queue.
3. After that, another process gets into the running state: we select a new process from the
ready queue, typically one with high priority, to execute its task.
4. Now we update the PCB (Process Control Block) for the selected process P2. This includes
switching its process state from ready to running, or from another state such as blocked,
exit, or suspend.
5. If the CPU had previously executed process P2, we restore the status of process P2 so that it
resumes execution at the same point where it was interrupted.
Similarly, process P2 is later switched off the CPU so that process P1 can resume execution.
The P1 process is reloaded from PCB1 into the running state to resume its task at the same
point. Otherwise, this information is lost, and when the process is executed again, it starts
execution from the initial level.
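The five steps above can be condensed into code. The following is a simplified sketch (a real context switch saves and restores registers in assembly; save_cpu_context and load_cpu_context are hypothetical helpers standing in for that hardware-level code):

typedef struct {
    unsigned long program_counter; /* saved program counter           */
    unsigned long registers[8];    /* saved general-purpose registers */
    int           state;           /* 0 = ready, 1 = running          */
} pcb_t;

/* Hypothetical hardware-level helpers (assumed, not a real API) */
extern void save_cpu_context(pcb_t *pcb);        /* save PC and registers */
extern void load_cpu_context(const pcb_t *pcb);  /* restore saved context */

void context_switch(pcb_t *old_pcb, pcb_t *new_pcb)
{
    save_cpu_context(old_pcb);   /* step 1: save P1's context into PCB1     */
    old_pcb->state = 0;          /* step 2: move P1 to the ready queue      */
    new_pcb->state = 1;          /* steps 3-4: mark the selected P2 running */
    load_cpu_context(new_pcb);   /* step 5: resume P2 where it stopped      */
}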
Earliest-Deadline-First Scheduling
Earliest Deadline First (EDF) is one of the best-known algorithms for real-time processing. It
is an optimal dynamic-priority algorithm: in dynamic-priority algorithms, the priority of a task
can change during its execution. EDF produces a valid schedule whenever one exists.
EDF is a preemptive scheduling algorithm that dispatches the process with the earliest
deadline. If an arriving process has an earlier deadline than the running process, the system
preempts the running process and dispatches the arriving process.
A task with a shorter deadline has a higher priority, and the job with the earliest deadline is
executed. EDF can schedule task sets that cannot be scheduled by the rate monotonic algorithm.
EDF is optimal among all scheduling algorithms that do not keep the processor idle
unnecessarily. The upper bound of processor utilization is 100%.
Whenever a new task arrives, the ready queue is sorted so that the task closest to the end of its
period is assigned the highest priority. The system preempts the running task if it is not placed
first in the queue after this sorting.
If two tasks have the same absolute deadlines, choose one of the two at random (ties can be
broken arbitrarily). The priority is dynamic since it changes for different jobs of the same
task.
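The selection rule can be sketched in C as follows (task fields and function names are illustrative assumptions): scan the ready tasks and pick the one with the earliest absolute deadline, preempting the running task if a different task wins.

#include <stddef.h>

typedef struct {
    int           id;
    unsigned long abs_deadline;  /* absolute deadline of the current job */
    int           ready;         /* nonzero if the task is ready to run  */
} task_t;

/* Return the ready task with the earliest absolute deadline (EDF rule).
   Ties are broken arbitrarily by taking the first task found. */
task_t *edf_select(task_t tasks[], size_t n)
{
    task_t *best = NULL;
    size_t i;
    for (i = 0; i < n; i++) {
        if (tasks[i].ready &&
            (best == NULL || tasks[i].abs_deadline < best->abs_deadline))
            best = &tasks[i];
    }
    return best;  /* preempt the running task if this differs from it */
}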
EDF can also be applied to aperiodic task sets. Its optimality guarantees that the maximal
lateness is minimized when EDF is applied.
Many real-time systems do not provide hardware preemption, so other algorithms must be
employed.
In scheduling theory, a real-time system comprises a set of real-time tasks; each task consists
of an infinite or finite stream of jobs. The task set can be scheduled by a number of policies
including fixed priority or dynamic priority algorithms.
The success of a real-time system depends on whether all the jobs of all the tasks can be
guaranteed to complete their executions before their deadlines. If they can, then we say the
task set is schedulable.
The schedulability condition is that the total utilization of the task set must be less than or
equal to 1.
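For n periodic tasks, with Ci the CPU time and Ti the period of task i, this condition takes the standard form (stated here as a sketch, consistent with the utilization definition above):

U = ∑ (Ci / Ti) ≤ 1 (summing over i = 1, ..., n)

For example (illustrative numbers, not from the source): two tasks with C1 = 1, T1 = 4 and C2 = 2, T2 = 5 give U = 1/4 + 2/5 = 0.65 ≤ 1, so the set is schedulable under EDF.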
Implementation of earliest deadline first: is it really infeasible to implement EDF
scheduling? Two difficulties arise:
1. Absolute deadlines change for each new task instance, so the priority needs to
be updated every time the task moves back to the ready queue.
2. More importantly, absolute deadlines are always increasing; how can we associate a
finite priority value with an ever-increasing deadline value?
Advantages
1. It is an optimal algorithm.
2. Periodic, aperiodic, and sporadic tasks can all be scheduled using the EDF algorithm.
3. It gives the best CPU utilization.
Disadvantages
Rate Monotonic Priority Assignment (RM) is a so-called static-priority scheduling
algorithm.
In this algorithm, priority increases with the rate at which a process must be scheduled: the
process with the shortest period gets the highest priority.
The priorities are assigned to tasks before execution and do not change over time. RM
scheduling is preemptive, i.e., a task can be preempted by a task with higher priority.
In RM algorithms, the assigned priority is never modified during runtime of the system. RM
assigns priorities simply in accordance with periods: the shorter the period (i.e., the higher
the activation rate), the higher the priority. RM is therefore a scheduling algorithm for
periodic task sets.
If a lower-priority process is running and a higher-priority process becomes available to run,
it preempts the lower-priority process. Each periodic task is assigned a priority inversely
proportional to its period.
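This static assignment can be sketched in C (illustrative names; a sketch, not a production scheduler): sort the task set by period once, before execution, so that the shortest period receives the highest priority.

#include <stdlib.h>

typedef struct {
    int           id;
    unsigned long period;    /* shorter period => higher RM priority */
    int           priority;  /* 0 = highest priority                 */
} rm_task_t;

static int by_period(const void *a, const void *b)
{
    unsigned long pa = ((const rm_task_t *)a)->period;
    unsigned long pb = ((const rm_task_t *)b)->period;
    return (pa > pb) - (pa < pb);
}

/* Assign static priorities once, before execution (RM never changes them). */
void rm_assign_priorities(rm_task_t tasks[], size_t n)
{
    size_t i;
    qsort(tasks, n, sizeof tasks[0], by_period);
    for (i = 0; i < n; i++)
        tasks[i].priority = (int)i;  /* shortest period gets priority 0 */
}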
Advantages :
Disadvantages :
Priority Inversion
Priority inversion occurs when a low-priority job executes while some ready higher-priority
job waits.
Consider three tasks T1, T2, and T3 with decreasing priorities. Tasks T1 and T3 share some
data or a resource that requires exclusive access, while T2 does not interact with either of the
other two tasks.
Task T3 starts at time t0 and locks semaphore s at time t1. At time t2, T1 arrives and
preempts T3 inside its critical section. After a while, T1 requests the shared resource
by attempting to lock s, but it gets blocked, as T3 is currently using it. Hence, at time t3,
T3 continues to execute inside its critical section. Next, when T2 arrives at time t4, it preempts
T3, as it has a higher priority and does not interact with either T1 or T3.
The execution time of T2 increases the blocking time of T1, as it is no longer dependent
solely on the length of the critical section executed by T3.
When tasks share resources, there may be priority inversions.
Priority inversion is not avoidable; however, in some cases the priority inversion can become
too large.
Simple solutions: disallow preemption during critical sections, or use protocols such as
priority inheritance or priority ceiling.
Comment: comments are simple, readable text written in the code to make it more
understandable to the user. They are usually written using // or /* */.
Pre-processor directives: The Pre-Processor directives tell the compiler which
files to look in to find the symbols that are not present in the program.
Global Declaration: The part of the code where global variables are defined.
Local Declaration: The part of the code where local variables are defined.
Main function: Every C program has a main function that drives the whole code.
It basically has two parts: the declaration part, where all the variables are declared,
and the execution part, which defines the whole flow of execution in the program.
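Putting these parts together, a minimal Embedded C skeleton might look like the following (a sketch assuming the Keil C51 toolchain and an 8051 target; the pin choice and delay count are illustrative):

/* Comment: blink an LED connected to port pin P1.0 */

#include <reg51.h>        /* pre-processor directive: 8051 SFR symbols */

sbit LED = P1^0;          /* global declaration: the LED pin */

void main(void)
{
    unsigned int i;       /* local declaration */

    while (1) {           /* execution part: runs forever */
        LED = !LED;       /* toggle the LED */
        for (i = 0; i < 50000; i++)
            ;             /* crude software delay */
    }
}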
By nature, it uses a cross-platform development scheme, i.e., an application developed in it
is platform-independent and can be used on multiple platforms.
The following table list the major differences between the C and Embedded C
programming languages:
Parameters    | C                                     | Embedded C
Dependency    | C is a hardware-independent language. | Embedded C is a fully hardware-dependent language.
OS dependence | C compilers are OS-dependent.         | Embedded C is OS-independent.