Embedded System: Single-Functioned


A system is an arrangement in which all of its units work together according to a
set of rules. It can also be defined as a way of working, organizing or doing one or many
tasks according to a fixed plan. For example, a watch is a time-displaying system. Its
components follow a set of rules to show time. If one of its parts fails, the watch will stop
working. So we can say that in a system, all the subcomponents depend on each other.

Embedded System
As its name suggests, Embedded means something that is attached to another thing.
An embedded system can be thought of as a computer hardware system having software
embedded in it. An embedded system can be an independent system or it can be a part
of a large system. An embedded system is a microcontroller or microprocessor based
system which is designed to perform a specific task. For example, a fire alarm is an
embedded system; it will sense only smoke.
An embedded system has three components −
• It has hardware.
• It has application software.
• It has a Real-Time Operating System (RTOS) that supervises the application software and
provides a mechanism to let the processor run processes as per the schedule, following a plan
to control latencies. The RTOS defines the way the system works; it sets the rules during the
execution of the application program. A small-scale embedded system may not have an RTOS.
So we can define an embedded system as a microcontroller-based, software-driven,
reliable, real-time control system.
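
As a rough illustration of this definition, here is a minimal sketch in C of a single-functioned fire-alarm loop. The read_smoke_level and sound_alarm routines and the threshold value are hypothetical placeholders for platform-specific drivers, not any particular product's code.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical hardware-access routines; on real hardware these would
     * read a smoke-sensor ADC channel and drive a buzzer or siren. */
    extern uint16_t read_smoke_level(void);
    extern void sound_alarm(bool on);

    #define SMOKE_THRESHOLD 300u  /* assumed ADC threshold for "smoke detected" */

    int main(void)
    {
        /* A small embedded system typically runs one dedicated task forever. */
        for (;;) {
            uint16_t level = read_smoke_level();
            sound_alarm(level > SMOKE_THRESHOLD);
        }
    }

On a very small system this "super loop" may be all the software there is; a larger design would run it as a task scheduled by an RTOS.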

Characteristics of an Embedded System


• Single-functioned − An embedded system usually performs a specialized operation and
does the same repeatedly. For example, a pager always functions as a pager.
• Tightly constrained − All computing systems have constraints on design metrics, but those
on an embedded system can be especially tight. A design metric is a measure of an
implementation's features such as its cost, size, power, and performance. The system must be of a size
to fit on a single chip, must perform fast enough to process data in real time, and must consume
minimum power to extend battery life.
• Reactive and Real time − Many embedded systems must continually react to changes in the
system's environment and must compute certain results in real time without delay.
Consider the example of a car cruise controller: it continually monitors and reacts to speed
and brake sensors. It must compute accelerations or decelerations repeatedly within a
limited time; a delayed computation can result in a failure to control the car (a sketch follows this list).
• Microprocessor-based − It must be microprocessor- or microcontroller-based.
• Memory − It must have memory, as its software is usually embedded in ROM. It usually does not need
any secondary memory.
• Connected − It must have peripherals to connect input and output devices.
• HW-SW systems − Software is used for more features and flexibility. Hardware is used for
performance and security.
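
To make the reactive, real-time characteristic concrete, the following is a minimal sketch of a fixed-rate cruise-control loop in C. The sensor, actuator, and timer routines (read_speed_kmh, brake_pressed, set_throttle, wait_for_next_tick) are hypothetical placeholders, and the simple proportional control law is illustrative only.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical hardware/driver routines, for illustration only. */
    extern uint16_t read_speed_kmh(void);
    extern bool     brake_pressed(void);
    extern void     set_throttle(int16_t percent);
    extern void     wait_for_next_tick(void);   /* blocks until the next 10 ms period */

    #define TARGET_SPEED_KMH 100
    #define GAIN             2      /* illustrative proportional gain */

    int main(void)
    {
        for (;;) {
            wait_for_next_tick();               /* fixed control period = the real-time deadline */

            if (brake_pressed()) {
                set_throttle(0);                /* driver input overrides cruise control */
                continue;
            }

            int16_t error = (int16_t)(TARGET_SPEED_KMH - read_speed_kmh());
            set_throttle(GAIN * error);         /* accelerate or decelerate toward the target
                                                   (a real controller would clamp and filter this) */
        }
    }

The essential point is the fixed period: every iteration must finish before the next tick, otherwise the system misses its deadline.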

Advantages
• Easily Customizable
• Low power consumption
• Low cost
• Enhanced performance

Disadvantages
• High development effort
• Longer time to market

Basic Structure of an Embedded System


The basic structure of an embedded system consists of the following components −
• Sensor − It measures a physical quantity and converts it to an electrical signal which can
be read by an observer or by an electronic instrument such as an A-D converter. The measured
quantity is stored in memory.
• A-D Converter − An analog-to-digital converter converts the analog signal sent by the sensor
into a digital signal.
• Processor & ASICs − The processor and ASICs process the data to compute the output and store it in
memory.
• D-A Converter − A digital-to-analog converter converts the digital data fed by the processor
into analog data.
• Actuator − An actuator compares the output given by the D-A converter to the expected
output stored in it and drives the approved output. (A code sketch of this signal chain follows the list.)
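
As a rough sketch of how these blocks cooperate in software, the loop below reads a sensor through an A-D converter, processes the sample, and writes the result out through a D-A converter to an actuator. The adc_read, process_sample, and dac_write names are hypothetical stand-ins for platform-specific drivers.

    #include <stdint.h>

    /* Hypothetical driver functions standing in for real hardware access. */
    extern uint16_t adc_read(uint8_t channel);                  /* sensor value via A-D converter */
    extern void     dac_write(uint8_t channel, uint16_t value); /* drive actuator via D-A converter */

    /* Processing step performed by the processor/ASIC block. Here it is just
     * a simple scaling, purely for illustration. */
    static uint16_t process_sample(uint16_t raw)
    {
        return (uint16_t)(raw * 2u);
    }

    void control_loop(void)
    {
        for (;;) {
            uint16_t raw       = adc_read(0);          /* Sensor -> A-D converter   */
            uint16_t processed = process_sample(raw);  /* Processor & ASICs         */
            dac_write(0, processed);                   /* D-A converter -> Actuator */
        }
    }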

Microprocessor vs Microcontroller

• Microprocessor − Multitasking in nature; it can perform multiple tasks at a time. For example, on a
  computer we can play music while writing text in a text editor.
  Microcontroller − Single-task oriented. For example, a washing machine is designed for washing clothes only.
• Microprocessor − RAM, ROM, I/O ports, and timers can be added externally and can vary in number.
  Microcontroller − RAM, ROM, I/O ports, and timers cannot be added externally; these components are
  embedded together on a chip and are fixed in number.
• Microprocessor − Designers can decide the number of memory or I/O ports needed.
  Microcontroller − The fixed amount of memory and I/O makes a microcontroller ideal for a limited but
  specific task.
• Microprocessor − External support for memory and I/O ports makes a microprocessor-based system
  heavier and costlier.
  Microcontroller − Microcontrollers are lightweight and cheaper than microprocessors.
• Microprocessor − External devices require more space, and their power consumption is higher.
  Microcontroller − A microcontroller-based system consumes less power and takes less space.


Embedded Systems - Architecture Types


The 8051 microcontroller works with an 8-bit data bus and a 16-bit address bus. So it can support up to
64 KB of external data memory and up to 64 KB of external program memory. Collectively, an 8051
microcontroller can address 128 KB of external memory.
When data and code lie in different memory blocks, the architecture is referred to as Harvard
architecture. When data and code lie in the same memory block, the architecture is referred to
as Von Neumann architecture.
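
For instance, with the Keil C51 toolchain (an assumption here; other 8051 compilers use different syntax), memory-type qualifiers make the 8051's separate code and data spaces visible directly in C:

    /* Keil C51 memory-type qualifiers (toolchain-specific, shown as an example):
     *   code  - program memory, read with MOVC
     *   xdata - external data memory, accessed with MOVX
     *   data  - internal on-chip RAM
     */
    unsigned char code  lookup_table[4] = {10, 20, 30, 40}; /* program memory */
    unsigned char xdata sample_buffer[64];                  /* external RAM   */
    unsigned char data  counter;                            /* internal RAM   */

    void store_sample(unsigned char index)
    {
        /* Code and data occupy separate 64 KB address spaces, so these two
         * accesses travel over different paths (MOVC vs MOVX on the 8051). */
        sample_buffer[index] = lookup_table[index & 3u];
        counter++;
    }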

Von Neumann Architecture


The Von Neumann architecture was first proposed by the computer scientist John von Neumann. In
this architecture, one data path or bus exists for both instructions and data. As a result, the CPU does
one operation at a time: it either fetches an instruction from memory or performs a read/write operation
on data. An instruction fetch and a data operation cannot occur simultaneously, since they share a common
bus.
The Von Neumann architecture supports simple hardware. It allows the use of a single,
sequential memory. Today's processing speeds vastly outpace memory access times,
so a very fast but small amount of memory (a cache) is employed local to the processor.

Von-Neumann Model
Von Neumann proposed his computer architecture design in 1945, and it later became known as
the Von-Neumann Architecture. It consists of a Control Unit, an Arithmetic and Logic
Unit (ALU), a Memory Unit, Registers and Inputs/Outputs.

The Von Neumann architecture is based on the stored-program computer concept, where
instructions and data are stored in the same memory. This design is still used
in most computers produced today.

A Von Neumann-based computer:


o Uses a single processor
o Uses one memory for both instructions and data.
o Executes programs following the fetch-decode-execute cycle
Components of Von-Neumann Model:
o Central Processing Unit
o Buses
o Memory Unit
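
The sketch below is a toy illustration (not any real instruction set) of the stored-program idea: a single memory array holds both the program and its data, and the CPU loops through fetch, decode, and execute over one shared path to that memory.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy opcodes, invented for this illustration only. */
    enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

    int main(void)
    {
        /* One memory holds both instructions (opcode/operand pairs) and data. */
        uint8_t mem[16] = {
            OP_LOAD,  10,      /* acc = mem[10]   */
            OP_ADD,   11,      /* acc += mem[11]  */
            OP_STORE, 12,      /* mem[12] = acc   */
            OP_HALT,  0,
            0, 0,
            7, 5, 0, 0, 0, 0   /* data area: mem[10]=7, mem[11]=5, mem[12]=result */
        };

        uint8_t pc = 0, acc = 0;
        for (;;) {
            uint8_t op  = mem[pc];       /* fetch uses the same memory as the data */
            uint8_t arg = mem[pc + 1];
            pc += 2;

            if (op == OP_LOAD)       acc = mem[arg];                /* decode + execute */
            else if (op == OP_ADD)   acc = (uint8_t)(acc + mem[arg]);
            else if (op == OP_STORE) mem[arg] = acc;
            else                     break;                          /* OP_HALT */
        }
        printf("result = %d\n", (int)mem[12]);   /* prints 12 */
        return 0;
    }

Because instruction fetches and data accesses go through the same array, they can never happen in the same step, which is exactly the limitation discussed in the next section.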

What’s the difference between Von-Neumann and Harvard architectures?
These two processor architectures can be classified by how they use memory.
Von-Neumann architecture
In a Von-Neumann architecture, the same memory and bus are used to store both data
and instructions that run the program. Since you cannot access program memory and
data memory simultaneously, the Von Neumann architecture is susceptible to
bottlenecks and system performance is affected.

Figure 2: The Harvard architecture has a separate bus for signals and storage. (Image:
Wikimedia Commons)
Harvard Architecture
The Harvard architecture stores machine instructions and data in separate memory
units that are connected by different buses. In this case, there are at least two memory
address spaces to work with, so there is a memory register for machine instructions and
another memory register for data. Computers designed with the Harvard architecture
are able to run a program and access data independently, and therefore
simultaneously. Harvard architecture has a strict separation between data and code.
Thus, Harvard architecture is more complicated but separate pipelines remove the
bottleneck that Von Neumann creates.
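
A commonly cited example of a Harvard-style split is the AVR microcontroller family. Assuming an AVR target built with avr-gcc and avr-libc, data placed in flash (program memory) must be read with dedicated routines, because it lives in a different address space than RAM:

    #include <avr/pgmspace.h>
    #include <stdint.h>

    /* Stored in flash (program memory), not in the data address space. */
    static const uint8_t sine_table[4] PROGMEM = {0, 90, 180, 255};

    uint8_t read_table(uint8_t index)
    {
        /* A plain pointer dereference would read RAM at the same numeric
         * address; pgm_read_byte issues the instruction that accesses the
         * separate program-memory space instead. */
        return pgm_read_byte(&sine_table[index & 3u]);
    }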

Modified Harvard Architecture


The majority of modern computers have no physical separation between the memory
spaces used by both data and programs/code/machine instructions, and therefore could
be described technically as Von Neumann for this reason. However, the better way to
represent the majority of modern computers is a “modified Harvard architecture.”
Modern processors might share memory but have mechanisms like special instructions
that keep data from being mistaken for code. Some call this “modified Harvard
architecture.” However, modified Harvard architecture does have two
separate pathways (buses) for signal (code) and storage (memory), while the memory
itself is one shared, physical piece. The memory controller is where the modification is
seated, since it handles the memory and how it is used.

Figure 1: The Von Neumann architecture has been around since the 1940s. A distinguishing trait is
that a single bus is used for both signal and storage. (Image: Wikimedia Commons)

The Von Neumann Bottleneck


If a Von Neumann machine wants to perform an operation on some data in memory, it
has to move the data across the bus into the CPU. When the computation is done, it
needs to move outputs of the computation to memory across the same bus. The
amount of data the bus can transfer at one time (speed and bandwidth) plays a large
part in how fast the Von Neumann architecture can be. The throughput of a computer is
related to how fast the processors are as well as the rate of data transfer across the
bus. The processor can be idle while waiting for a memory fetch, or it can perform
something called speculative processing, based on what the processor might next need
to do after the current computation is finished (once data is fetched and computations
are performed).

The Von Neumann bottleneck occurs when data taken in or out of memory must wait
while the current memory operation is completed. That is, if the processor just
completed a computation and is ready to perform the next, it has to write the finished
computation into memory (which occupies the bus) before it can fetch new data out of
memory (which also uses the bus). The Von Neumann bottleneck has increased over
time because processors have improved in speed while memory has not progressed as
fast. Some techniques to reduce the impact of the bottleneck are keeping frequently used data in
cache to minimize data movement, hardware acceleration, and speculative execution. It
is interesting to note that speculative execution is the conduit for one of the latest
security flaws discovered by Google Project Zero, named Spectre.
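
One of these mitigations, keeping data in cache, can be illustrated with ordinary C: the two loops below compute the same sum, but the row-major traversal touches memory in the order it is laid out and therefore reuses cache lines far better than the column-major one. The array size is an arbitrary choice for the example.

    #include <stdio.h>

    #define N 1024
    static int grid[N][N];   /* stored row by row in memory (row-major) */

    int main(void)
    {
        long sum_fast = 0, sum_slow = 0;

        /* Cache-friendly: consecutive accesses hit the same cache lines. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum_fast += grid[i][j];

        /* Cache-unfriendly: each access jumps a whole row ahead, so data
         * fetched over the bus is often evicted before it is reused. */
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum_slow += grid[i][j];

        printf("%ld %ld\n", sum_fast, sum_slow);  /* same result, different speed */
        return 0;
    }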

Harvard Architecture
The Harvard architecture offers separate storage and signal buses for instructions and
data. In the original Harvard machines, data storage was entirely contained within the CPU, and
there was no access to the instruction storage as data; programs had to be loaded by an
operator, and the processor could not boot itself. Computers designed with the Harvard
architecture have separate memory areas for program instructions and data, using internal
data buses that allow simultaneous access to both instructions and data. In a Harvard
architecture, there is no need to make the two memories share properties.

Von-Neumann Architecture vs Harvard Architecture


The following points distinguish the Von Neumann Architecture from the Harvard
Architecture.

• Von-Neumann − A single memory is shared by both code and data.
  Harvard − Separate memories for code and data.
• Von-Neumann − The processor needs to fetch code in one clock cycle and data in another, so it
  requires two clock cycles.
  Harvard − A single clock cycle is sufficient, as separate buses are used to access code and data.
• Von-Neumann − Slower in speed, thus more time-consuming.
  Harvard − Higher speed, thus less time-consuming.
• Von-Neumann − Simple in design.
  Harvard − Complex in design.

CISC and RISC
CISC stands for Complex Instruction Set Computer. It is a computer that supports a large
number of complex instructions.
In the early 1980s, computer designers recommended that computers should use fewer
instructions with simple constructs so that they could be executed much faster within the
CPU without having to access memory as often. Such computers are classified as Reduced
Instruction Set Computers, or RISC.
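
As a rough illustration of the difference, consider incrementing a value in memory. The C function below is the same either way; the comments sketch the kind of instruction sequences a typical CISC machine (x86-style) and a typical RISC machine (RISC-V-style) might use. The sequences are illustrative, not actual compiler output.

    /* Increment a counter that lives in memory. */
    void bump(int *counter)
    {
        *counter += 1;
        /* Typical CISC (x86-style) encoding: roughly one instruction that
         * reads, modifies and writes memory in a single step:
         *     add dword ptr [rdi], 1
         *
         * Typical RISC (RISC-V-style) encoding: a load/store sequence made
         * of simple, fixed-length instructions:
         *     lw   t0, 0(a0)
         *     addi t0, t0, 1
         *     sw   t0, 0(a0)
         */
    }

The CISC version packs the work into one complex instruction decoded by microcode, while the RISC version spreads it over several simple instructions that pipeline easily, which is the trade-off summarized in the table below.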

CISC vs RISC
The following points differentiate a CISC from a RISC −

• CISC − Larger set of instructions; easy to program.
  RISC − Smaller set of instructions; difficult to program.
• CISC − Simpler design of the compiler, considering the larger set of instructions.
  RISC − Complex design of the compiler.
• CISC − Many addressing modes, causing complex instruction formats.
  RISC − Few addressing modes; fixed instruction format.
• CISC − Instruction length is variable.
  RISC − Instruction length is fixed.
• CISC − Instructions take a higher number of clock cycles to execute.
  RISC − Instructions typically execute in a single clock cycle.
• CISC − Emphasis is on hardware.
  RISC − Emphasis is on software.
• CISC − The control unit implements the large instruction set using a micro-program unit.
  RISC − Each instruction is executed directly by hardware.
• CISC − Slower execution, as instructions are read from memory and decoded by the decoder unit.
  RISC − Faster execution, as each instruction is executed by hardware.
• CISC − Pipelining is difficult.
  RISC − Pipelining of instructions is possible, considering the single clock cycle.
