ES UNIT 2 NOTES


EXTERNAL MEMORY INTERFACE

EXTERNAL ROM (PROGRAM MEMORY) INTERFACING

Interfacing of ROM/EPROM to 8051

The figure shows how external ROM/EPROM is interfaced to the 8051.

 Port 0 is used as a multiplexed address/data bus. It carries the lower-order 8-bit address (A7-A0) during the initial T-cycle and is then used as the data bus.
 The lower-order 8-bit address is latched using an external latch and the ALE signal from the 8051.
 Port 2 provides the higher-order 8-bit address (A15-A8).
 PSEN activates the output-enable signal of the external ROM/EPROM.

EXTERNAL RAM (DATA MEMORY) INTERFACING

Interfacing of RAM to 8051


The figure shows how external RAM (data memory) is connected to the 8051.

 Port 0 is used as a multiplexed address/data bus.
 The lower-order address lines (A7-A0) are demultiplexed using an external latch and the ALE signal from the 8051.
 Port 2 provides the higher-order address lines.
 The RD & WR signals from the 8051 select the memory-read and memory-write operations respectively.

Note: RD & WR signals: generally pins P3.6 & P3.7 of port 3 are used to generate the memory-write and memory-read signals. The remaining pins of port 3, i.e. P3.0-P3.5, can be used for other functions.

Solved Examples:
Example 1: Design a μController system using 8051 to interface an external RAM of size 16K x 8.
Solution: Given, memory size: 16K
This means we require n address lines such that 2^n = 16K.

 Here n = 14: address lines A0 to A13 are required.
 A14 and A15 are connected through an OR gate to the CS pin of the external RAM.
 When A14 and A15 are both low (logic '0'), the external data memory (RAM) is selected.

Address Decoding (Memory Map) for 16K x 8 RAM

Address   A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0   Hex Addr

Start      0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0    0000H
End        0   0   1   1   1   1  1  1  1  1  1  1  1  1  1  1    3FFFH
Figure shows interfacing of 16k x 8 RAM to 8051.

Figure 16Kx8 Memory (RAM) Interfacing with 8051


Example 2: Design a μController system using 8051 to interface an external ROM of size 4K x 8.
Solution: Given, memory size: 4K

i.e., we require n address lines such that 2^n = 4K.

 Here n = 12: address lines A0 to A11 are required.
 The remaining lines A12, A13, A14 & A15, together with PSEN, are connected through an OR gate to the CS & RD pins of the external ROM.
 Only when A12 to A15 are all low (logic '0') is the external ROM selected.

Address Decoding (Memory Map) for 4K x 8 ROM

Address   A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0   Hex Addr

Start      0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0    0000H
End        0   0   0   0   1   1  1  1  1  1  1  1  1  1  1  1    0FFFH
Figure shows interfacing of 4k x 8 ROM to 8051

Figure 4Kx8 Memory (ROM) Interfacing with 8051


Example 3: Design a μController system using 8051 with 16K bytes of ROM & 32K bytes of RAM.
Interface the memory such that the starting address for ROM is 0000H & for RAM is 8000H.

Solution:

Given, memory size for ROM: 16K

i.e., we require n address lines such that 2^n = 16K.
Here n = 14: address lines A0 to A13 are required.
A14, A15 and PSEN are ORed to drive CS; when the output is low, the ROM is selected.

Memory size for RAM: 32K

i.e., we require n address lines such that 2^n = 32K.
Here n = 15: address lines A0 to A14 are required.
A15, inverted through a NOT gate, drives CS; when A15 is high, the RAM is selected.

 PSEN is used as the chip-select signal for the ROM.
 RD is used as the read-control signal for the RAM.
 WR is used as the write-control signal for the RAM.
Address Decoding (Memory Map) for 16k x 8 ROM.

Address   A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0   Hex Addr

Start      0   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0    0000H
End        0   0   1   1   1   1  1  1  1  1  1  1  1  1  1  1    3FFFH

Address Decoding (Memory Map) for 32k x 8 RAM.

Address   A15 A14 A13 A12 A11 A10 A9 A8 A7 A6 A5 A4 A3 A2 A1 A0   Hex Addr

Start      1   0   0   0   0   0  0  0  0  0  0  0  0  0  0  0    8000H
End        1   1   1   1   1   1  1  1  1  1  1  1  1  1  1  1    FFFFH

Figure shows the interfacing of 16Kx8 Memory (ROM) and 32Kx8 RAM with 8051

Figure 16Kx8 Memory (ROM) and 32Kx8 RAM Interfacing with 8051
RTOS

Real-time operating systems (RTOS) are used in environments where a large number of events, mostly external to the computer system, must be accepted and processed in a short time or within certain deadlines. Such applications include industrial control, telephone switching equipment, flight control, and real-time simulation.

Real-time operating systems are of three types:

1. Hard Real-Time operating system:

These operating systems guarantee that critical tasks complete within a strict time bound.
For example, consider a robot hired to weld a car body. If the robot welds too early or too late, the car cannot be sold, so the welding must be completed exactly on time; this is a hard real-time system. Other examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, and air-traffic control systems.

2. Soft real-time operating system:

This type of operating system provides some relaxation in the time limit.
Examples include multimedia systems and digital audio systems. Explicit, programmer-defined and controlled processes are typical of real-time systems: a separate process is charged with handling a single external event, and the process is activated upon occurrence of the related event, signalled by an interrupt.

Multitasking operation is accomplished by scheduling processes for execution independently of each other. Each process is assigned a level of priority that corresponds to the relative importance of the event it services, and the processor is allocated to the highest-priority ready process. This type of schedule, called priority-based preemptive scheduling, is used by real-time systems.

3. Firm Real-time Operating System:

RTOS of this type also have to follow deadlines, but missing an occasional deadline has only a small impact; it can still have unintended consequences, such as a reduction in the quality of the product. Example: multimedia applications.

Advantages:
The advantages of real-time operating systems are as follows:

1. Maximum consumption –
Maximum utilization of devices and systems, and thus more output from all resources.

2. Task shifting –
The time needed to switch between tasks is very small. For example, older systems take about 10 microseconds to shift from one task to another, while the latest systems take about 3 microseconds.

3. Focus on application –
The focus is on running applications, with less importance given to applications waiting in the queue.

4. Real-time operating systems in embedded systems –
Since the programs are small in size, an RTOS can also be used in embedded systems such as transport systems and others.

5. Error free –
These systems are designed to be largely error-free.

6. Memory allocation –
Memory allocation is best managed in these types of systems.

Disadvantages:
The disadvantages of real-time operating systems are as follows:

1. Limited tasks –
Very few tasks run simultaneously, and concentration is kept on a few applications to avoid errors.

2. Heavy use of system resources –
The system resources used are sometimes not very good, and they can be expensive as well.

3. Complex algorithms –
The algorithms are very complex and difficult for the designer to write.

4. Device drivers and interrupt signals –
An RTOS needs specific device drivers and interrupt signals so that it can respond to interrupts as early as possible.

5. Thread priority –
Setting thread priorities is delicate, because these systems switch tasks very rarely.

6. Minimum switching –
RTOS performs minimal task switching.

Comparison of Regular and Real-Time operating systems:

Regular OS               Real-Time OS (RTOS)

Complex                  Simple
Best effort              Guaranteed response
Fairness                 Strict timing constraints
Average bandwidth        Minimum and maximum limits
Unknown components       Components are known
Unpredictable behavior   Predictable behavior
Plug and play            RTOS is upgradeable

MULTIPLE TASKS AND MULTIPLE PROCESSES:


Tasks and Processes
Many (if not most) embedded computing systems do more than one thing: the environment can cause mode changes that in turn cause the embedded system to behave quite differently. For example, when designing a telephone answering machine, we can define recording a phone call and operating the user's control panel as distinct tasks, because they perform logically distinct operations and must be performed at very different rates. These different tasks are part of the system's functionality, and that application-level organization of functionality is often reflected in the structure of the program as well.
A process is a single execution of a program. If we run the same program two different times,
we have created two different processes. Each process has its own state that includes not only
its registers but all of its memory. In some OSs, the memory management unit is used to keep
each process in a separate address space. In others, particularly lightweight RTOSs, the
processes run in the same address space. Processes that share the same address space are
often called threads.
Multirate Systems
Implementing code that satisfies timing requirements is even more complex when multiple
rates of computation must be handled. Multirate embedded computing systems are very
common, including automobile engines, printers, and cell phones. In all these systems,
certain operations must be executed periodically, and each operation is executed at its own
rate.
Timing Requirements on Processes

Processes can have several different types of timing requirements imposed on them by the
application. The timing requirements on a set of processes strongly influence the type of
scheduling that is appropriate. A scheduling policy must define the timing requirements that
it uses to determine whether a schedule is valid. Before studying scheduling proper, we
outline the types of process timing requirements that are useful in embedded system design.
Two important requirements on processes: release time and deadline.

The release time is the time at which the process becomes ready to execute; this is not
necessarily the time at which it actually takes control of the CPU and starts to run. An
aperiodic process is by definition initiated by an event, such as external data arriving or data
computed by another process.

The release time is generally measured from that event, although the system may want to
make the process ready at some interval after the event itself. For a periodically executed
process, there are two common possibilities.

In simpler systems, the process may become ready at the beginning of the period. More
sophisticated systems, such as those with data dependencies between processes, may set the
release time at the arrival time of certain data, at a time after the start of the period.
A deadline specifies when a computation must be finished. The deadline for an aperiodic
process is generally measured from the release time, since that is the only reasonable time
reference. The deadline for a periodic process may in general occur at some time other than
the end of the period.
Rate requirements are also fairly common. A rate requirement specifies how quickly
processes must be initiated.

The period of a process is the time between successive executions. For example, the period of
a digital filter is defined by the time interval between successive input samples.
The process’s rate is the inverse of its period. In a multirate system, each process executes at
its own distinct rate.

The most common case for periodic processes is for the initiation interval to be equal to the
period. However, pipelined execution of processes allows the initiation interval to be less
than the period. Figure 3.3 illustrates process execution in a system with four CPUs.
CPU Metrics
We also need some terminology to describe how the process actually executes.
The initiation time is the time at which a process actually starts executing on the CPU.
The completion time is the time at which the process finishes its work.
The most basic measure of work is the amount of CPU time expended by a process. The CPU time of process i is called Ci. Note that the CPU time is not equal to the completion time minus the initiation time; several other processes may interrupt execution. The total CPU time consumed by a set of n processes is

T = C1 + C2 + ... + Cn = ∑ Ci

We need a basic measure of the efficiency with which we use the CPU. The simplest and
most direct measure is utilization:

U=CPU time for useful work/total available CPU time

Utilization is the ratio of the CPU time that is being used for useful computations to the total
available CPU time. This ratio ranges between 0 and 1, with 1 meaning that all of the
available CPU time is being used for system purposes. The utilization is often expressed as a
percentage. If we measure the total execution time of all processes over an interval of time t,
then the CPU utilization is
U=T/t.

Context Switching in OS (Operating System)

Context switching is a technique used by the operating system to switch the CPU from one process to another so that each can execute its function. When a switch is performed, the system stores the status of the old running process in the form of its registers and assigns the CPU to a new process to execute its tasks. While the new process is running, the previous process waits in the ready queue. Execution of the old process later resumes at the point where the other process stopped it. Context switching is what characterizes a multitasking operating system, in which multiple processes share the same CPU to perform multiple tasks without the need for additional processors in the system.

The need for context switching

Context switching helps to share a single CPU across all processes so that each completes its execution, and it stores the status of the system's tasks. When a process is reloaded into the system, its execution resumes at the same point where it was interrupted.

Following are the reasons that describe the need for context switching in the operating system.

1. One process cannot be switched to another directly. Context switching helps the operating system switch among multiple processes that use the CPU's resources, storing each process's context so that its service can be resumed at the same point later. If we do not store the currently running process's data or context, that data may be lost when switching between processes.
2. If a high-priority process enters the ready queue, the currently running process is stopped so the high-priority process can complete its tasks in the system.
3. If a running process requires I/O resources, it is switched out so that another process can use the CPU. When the I/O requirement is met, the old process returns to the ready state to wait for its turn on the CPU. Context switching stores the state of the process so that its tasks can be resumed; otherwise, the process would need to restart its execution from the beginning.
4. If an interrupt occurs while a process is running, the process's status is saved as registers using context switching. After the interrupt is resolved, the process moves from the waiting state back to the ready state, to resume execution later at the same point where the operating system interrupted it.
5. Context switching allows a single CPU to handle multiple process requests concurrently without the need for any additional processors.

Example of Context Switching

Suppose that multiple processes are stored in Process Control Blocks (PCBs). One process is in the running state, executing its task on the CPU. While it runs, another process arrives in the ready queue with a higher priority for completing its task. Here context switching replaces the current process with the new process that requires the CPU to finish its tasks. While switching, the context switch saves the status of the old process in its PCB. When that process is later reloaded onto the CPU, it resumes execution from the point at which the new process stopped it. If we did not save the state of the process, it would have to start its execution again from the beginning. In this way, context switching helps the operating system switch between processes, storing and reloading each process when it needs to execute its tasks.

Context switching triggers

Following are the three types of context-switching triggers.

1. Interrupts
2. Multitasking
3. Kernel/user switch

Interrupts: If an interrupt occurs, for example when the CPU requests data to be read from a disk, a context switch is performed automatically, saving only the part of the processor state needed so the interrupt can be handled in less time.

Multitasking: Context switching is the characteristic of multitasking that allows a process to be switched off the CPU so that another process can run. When switching processes, the old state is saved so that the process's execution can resume at the same point in the system.

Kernel/User Switch: A context switch is used when the operating system switches between user mode and kernel mode.

What is the PCB?

A PCB (Process Control Block) is a data structure used by the operating system to store all information related to a process. For example, when a process is created, updated, switched, or terminated in the operating system, that information is recorded in the PCB.

Steps for Context Switching

There are several steps involved in context switching between processes. The following diagram represents the context switching of two processes, P1 and P2, when an interrupt, an I/O need, or a higher-priority process appears in the ready queue of the PCB.
As we can see in the diagram, initially the process P1 is running on the CPU to execute its task, and at the same time another process, P2, is in the ready state. If an error or interrupt occurs, or the process requires input/output, process P1 switches from the running state to the waiting state. Before changing the state of process P1, context switching saves the context of P1, in the form of its registers and program counter, to PCB1. After that, it loads the state of process P2 from PCB2 and moves it from the ready state to the running state.

The following steps are taken when switching from process P1 to process P2:

1. First, context switching saves the state of the running process P1, in the form of the program counter and the registers, to its PCB (Process Control Block).
2. PCB1 is then updated, and process P1 is moved to the appropriate queue, such as the ready queue, the I/O queue, or the waiting queue.
3. After that, another process enters the running state: a new process is selected from the ready state to be executed, for example one with a high priority for executing its task.
4. Now the PCB (Process Control Block) for the selected process P2 is updated. This includes switching its state from ready to running, or from another state such as blocked, exit, or suspend.
5. If the CPU has already executed process P2 before, its saved status is restored so that it resumes execution at the same point where the system interrupt occurred.

Similarly, process P2 is later switched off the CPU so that process P1 can resume execution. Process P1 is reloaded from PCB1 into the running state and resumes its task at the same point. Otherwise the information is lost, and when the process is executed again, it starts execution from the beginning.

Priority Based Scheduling

Earliest-Deadline-First Scheduling

Earliest Deadline First (EDF) is one of the best-known algorithms for real-time processing. It is an optimal dynamic algorithm: in dynamic-priority algorithms, the priority of a task can change during its execution, and EDF produces a valid schedule whenever one exists.
EDF is a preemptive scheduling algorithm that dispatches the process with the earliest deadline. If an arriving process has an earlier deadline than the running process, the system preempts the running process and dispatches the arriving process.
A task with a shorter deadline has a higher priority, so EDF always executes the job with the earliest deadline. It can even schedule task sets that cannot be scheduled by the rate-monotonic algorithm.
EDF is optimal among all scheduling algorithms that do not keep the processor idle unnecessarily; the upper bound of processor utilization is 100%.
Whenever a new task arrives, the ready queue is sorted so that the task closest to the end of its period is assigned the highest priority. The system preempts the running task if it is not at the front of the queue after the latest sort.
If two tasks have the same absolute deadline, one of the two is chosen at random (ties can be broken arbitrarily). The priority is dynamic, since it changes for different jobs of the same task.
EDF can also be applied to aperiodic task sets; its optimality guarantees that the maximum lateness is minimized when EDF is applied.
Many real-time systems do not provide hardware preemption, so another algorithm must be employed.
In scheduling theory, a real-time system comprises a set of real-time tasks; each task consists of an infinite or finite stream of jobs. The task set can be scheduled by a number of policies, including fixed-priority or dynamic-priority algorithms.
The success of a real-time system depends on whether all the jobs of all the tasks can be guaranteed to complete their executions before their deadlines. If they can, then we say the task set is schedulable.
The schedulability condition for EDF is that the total utilization of the task set must be less than or equal to 1.
Implementation of Earliest Deadline First: is it really feasible to implement EDF scheduling? Consider the following task set.

Task   Arrival   Duration   Deadline

T1     0         10         33
T2     4         3          28
T3     5         10         29

Problems for implementations:

1. Absolute deadlines change for each new task instance, so the priority needs to be updated every time the task moves back to the ready queue.
2. More important, absolute deadlines are always increasing; how can we associate a finite priority value with an ever-increasing deadline value?
3. Most important, absolute deadlines are impossible to compute a priori.


EDF properties:

1. EDF is optimal with respect to feasibility (i.e., schedulability).
2. EDF is optimal with respect to minimizing the maximum lateness.

Advantages

1. It is an optimal algorithm.
2. Periodic, aperiodic and sporadic tasks can all be scheduled using the EDF algorithm.
3. It gives the best CPU utilization.

Disadvantages

1. Needs a priority queue for storing deadlines.
2. Needs dynamic priorities.
3. Typically no OS support.
4. Behaves badly under overload.
5. Difficult to implement.

Rate Monotonic Scheduling

Rate-Monotonic priority assignment (RM) is a static-priority preemptive scheduling algorithm.
In this algorithm, priority increases with the rate at which a process must be scheduled: the process with the shortest period gets the highest priority.
The priorities are assigned to tasks before execution and do not change over time. RM scheduling is preemptive, i.e., a task can be preempted by a task with higher priority.
In RM algorithms, the assigned priority is never modified during the runtime of the system. RM assigns priorities simply in accordance with periods: the shorter the period, which means the higher the activation rate, the higher the priority. RM is thus a scheduling algorithm for periodic task sets.
If a lower-priority process is running and a higher-priority process becomes available to run, it will preempt the lower-priority process. Each periodic task is assigned a priority inversely related to its period:

1. The shorter the period, the higher the priority.
2. The longer the period, the lower the priority.

The algorithm was proven optimal under the following assumptions:
1. Tasks are periodic.
2. Each task must be completed before the next request occurs.
3. All tasks are independent.
4. The run time of each task request is constant.
5. Any non-periodic task in the system has no required deadlines.

RMS is optimal among all fixed-priority scheduling algorithms for scheduling periodic tasks where the deadlines of the tasks equal their periods.

Advantages :

1. Simple to understand. 2. Easy to implement. 3. Stable algorithm.

Disadvantages:

1. Lower CPU utilization.
2. Deals only with independent tasks.
3. Non-precise schedulability analysis.

Comparison between RMS and EDF

Parameters                                              RMS      EDF

Priorities                                              Static   Dynamic
Works with an OS with fixed priorities                  Yes      No
Uses the full computational power of the processor      No       Yes
Possible to exploit the full computational power
of the processor                                        No       Yes

Priority Inversion

Priority inversion occurs when a low-priority job executes while some ready higher-priority job waits.
Consider three tasks T1, T2 and T3 with decreasing priorities. Tasks T1 and T3 share some data or a resource that requires exclusive access, while T2 does not interact with either of the other two tasks.

Task T3 starts at time t0 and locks semaphore s at time t1. At time t2, T1 arrives and preempts T3 inside its critical section. After a while, T1 requests the shared resource by attempting to lock s, but it gets blocked, as T3 is currently using it. Hence, at time t3, T3 continues to execute inside its critical section. Next, when T2 arrives at time t4, it preempts T3, as it has a higher priority and does not interact with either T1 or T3.

The execution time of T2 increases the blocking time of T1, which is no longer dependent solely on the length of the critical section executed by T3.
When tasks share resources, there may be priority inversions.

Priority inversion is not avoidable; however, in some cases the priority inversion can become too large.

Simple solutions:

1. Make critical sections non-preemptable.
2. Execute critical sections at the highest priority of any task that could use them.

The solution to the problem is rather simple: while a low-priority task blocks a higher-priority task, it inherits the priority of the higher-priority task; in this way, no medium-priority task can preempt it.
Embedded C

Embedded C is an extension of the C language used to develop microcontroller-based applications. The extensions in the Embedded C language over the normal C programming language include I/O hardware addressing, fixed-point arithmetic operations, accessing address spaces, etc.

An Embedded C program has five layers in its basic structure. They are:

 Comment: Simple readable text, written in the code to make it more understandable to the user. Comments are usually written in // or /* */ form.
 Pre-processor directives: These tell the compiler which files to look in to find the symbols that are not present in the program.
 Global declaration: The part of the code where global variables are defined.
 Local declaration: The part of the code where local variables are defined.
 Main function: Every C program has a main function that drives the whole code. It basically has two parts: the declaration part, where all the variables are declared, and the execution part, which defines the whole structure of execution in the program.

Embedded C uses a cross-platform development scheme: the application is developed on a host platform and compiled to run on the target microcontroller, so development is not tied to a single platform.

Difference between Embedded C and C

The following table lists the major differences between the C and Embedded C programming languages:

Parameter: General
- C: A general-purpose programming language that can be used to design any type of desktop-based application; it is a high-level language.
- Embedded C: Simply an extension of the C language, used to develop microcontroller-based applications.

Parameter: Dependency
- C: A hardware-independent language; C compilers are OS-dependent.
- Embedded C: A fully hardware-dependent language; Embedded C compilers are OS-independent.

Parameter: Compilers
- C: Standard compilers can be used to compile and execute the program. Popular compilers include GCC (GNU Compiler Collection), Borland Turbo C, and the Intel C++ Compiler.
- Embedded C: Specific compilers that can generate output for the particular hardware/microcontroller are used. Popular tools include the Keil compiler, BiPOM Electronics tools, and Green Hills Software.

Parameter: Usability and Applications
- C: Free format of program coding; specifically used for desktop applications; normal level of optimization; very easy to read and modify; bug fixing is very easy; supports various other programming languages during application; input can be given to the program while it is running. Applications of C programs: logical programs, system software programs.
- Embedded C: Formatting depends on the type of microprocessor used; works with limited resources such as RAM and ROM; high level of optimization; not easy to read and modify; bug fixing is complicated; supports only the required processor of the application, not other programming languages; only pre-defined input can be given to the running program. Applications of Embedded C programs: DVD players, TVs, digital cameras.
