WSN UNIT 5 Q Material
These are general-purpose, low-power PCs, embedded PCs, and custom-designed PCs. These nodes
run off-the-shelf operating systems such as Windows CE, Linux, or real-time operating systems.
"Off-the-shelf" means that the required software tools are all readily available. They support
standard wireless communication protocols such as Bluetooth and IEEE 802.11. Because of their high
processing capability and large memory, these nodes accommodate a rich set of protocols and also
support a wide variety of sensors, ranging from microphones to video cameras.
Disadvantages:
More power hungry, due to the high-end processing capability.
Larger form factor (physical size of the node).
Note: Among the above categories, dedicated embedded sensor nodes are the ones used in wireless
sensor networks, because of their small form factor and commercial availability.
Sensor node programming challenges:
Generally, any programming environment provides facilities for networking, processing, I/O
interfacing, and user interaction with hardware. In sensor networks, some additional capabilities
are required along with these, such as message passing, event synchronization, and interrupt
handling. Several techniques are available to provide these facilities in resource-constrained
embedded systems such as sensor nodes:
Real-time scheduling:
Real-time scheduling executes events on a priority basis, so the most urgent events are handled first.
Event-driven execution:
Event-driven execution allows the processor to be operated in different modes, such as sleep mode,
idle mode, and active mode, so power consumption is reduced.
These techniques are very useful in small, standalone embedded systems, but they are not well
suited to sensor networks because:
Sensor networks are large-scale distributed systems, so executing distributed algorithms on them
is difficult.
Sensor networks are embedded in the physical world, so a sensor network must be able to
respond to multiple concurrent events.
To support these characteristics in sensor network design, two notions are used: the design
methodology and the design platform.
The design methodology gives the programmer a conceptual view: it is concerned with how to
decompose the problem in terms of events, synchronization, and message passing. Simply put, it
gives a block-diagram representation of the problem.
The design platform supports the design methodology by providing language constructs and
restrictions, and real-time execution services.
Note: The design methodology is not fixed; it changes depending on the application.
Demonstrate Berkeley mote architecture and specifications:
Berkeley motes are a family of embedded sensor nodes. The most popular Berkeley node is the MICA
mote.
Architecture:
Processor:
The MICA mote contains two processors. The main processor, an Atmel ATmega103 microcontroller, is
used for regular processing. The other is a co-processor, used as a standby processor: if a
software upgrade is in progress on the main processor, the co-processor handles the regular
processing work in the meantime. The main processor contains 128 KB of internal flash program
memory and 4 KB of data memory. Because of this small memory size, the software footprint used on
a MICA mote must be small; the most popular OS used on MICA motes is TinyOS. In addition to the
internal memory, the mote also supports a 512 KB external flash memory, interfaced through a
low-speed serial peripheral interface (SPI). The external memory is used for storing data that
will be processed later.
Transceiver:
The MICA mote uses the RFM TR1000 radio transceiver, which operates in the 916 MHz band. The basic
data rate is 40 kbps, and with hardware accelerators the data rate can be increased to about
50 kbps. The transmission power of the transceiver can be controlled: it is adjusted through a
potentiometer that is digitally controlled by software. The maximum transmission range of the
transceiver is about 300 feet.
I/O connector:
The Berkeley mote has a 51-pin I/O extension connector, which is used to interface sensors,
actuators, serial I/O boards, and parallel I/O boards. The sensors may be temperature sensors,
light sensors, microphones, accelerometers, and so on. A serial I/O board is used for real-time
communication between the mote and a PC, while a parallel I/O board is used for downloading
programs to the mote.
Note: Among all operations, radio transmission consumes the most power.
TinyOS features:
TinyOS is node-level system software; it is mainly used in resource-constrained environments such
as Berkeley motes. Its main features are:
Small footprint.
Supports only static memory allocation.
Simple task model.
Minimal device and networking abstractions.
TinyOS takes a language-based approach to application development. Just like a traditional OS,
TinyOS is organized in layers: the lower layers are closer to the hardware components and the
higher layers are closer to the application. TinyOS provides a library of system software
components. Almost all components are implemented in software, using the nesC language; a few
components are implemented in hardware with a software wrapper, and this software wrapper controls
the actual hardware.
Field Monitor example:
The figure shows a field monitor application built from sensing and sending operations. The
application continuously monitors events using a temperature sensor and a photo sensor, and the
collected data is transmitted through the transceiver. The lower layers are closer to the hardware
components, such as the ADC, the hardware clock, and the RFM radio; the higher layers are closer
to the application, such as the sensing and sending components. The blocks are TinyOS components
and the arrows represent function calls; the direction of an arrow points from the calling
component to the called component.
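To make this wiring concrete, the following is a minimal nesC configuration sketch of such a field
monitor, given purely for illustration: the component SenseAndSendM, the sensor components Photo
and Temperature, and the message type AM_FIELDMONITOR are assumed names, while Main, TimerC, and
GenericComm are standard TinyOS components. The provides/uses and wiring syntax used here is
explained in the following subsections.

configuration FieldMonitor {
}
implementation {
  // Blocks in the diagram become components; arrows become wirings (function calls).
  components Main, SenseAndSendM, TimerC, Photo, Temperature, GenericComm as Comm;

  Main.StdControl -> SenseAndSendM.StdControl;          // application start-up
  Main.StdControl -> TimerC.StdControl;
  SenseAndSendM.Timer -> TimerC.Timer[unique("Timer")]; // periodic sampling
  SenseAndSendM.PhotoADC -> Photo.ADC;                  // light readings via the ADC
  SenseAndSendM.TempADC -> Temperature.ADC;             // temperature readings via the ADC
  SenseAndSendM.Send -> Comm.SendMsg[AM_FIELDMONITOR];  // packet transmission over the radio
}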
Component interface:
To reflect the layered structure of TinyOS, nesC classifies the interfaces of a component into two
types: provides and uses. The provides list declares the interfaces that the component itself
implements and offers to higher layers, while the uses list declares the interfaces the component
needs from other (lower-layer) components. nesC also defines the direction of calls across
interfaces in terms of command calls and event calls: calls from higher layers to lower layers are
command calls, and calls from lower layers to higher layers are event calls.
module TimerModule {
  provides {
    interface StdControl;
    interface Timer01;
  }
  uses interface Clock as Clk;
}

interface StdControl {
  command result_t init();
}

interface Timer01 {
  command result_t start(char type, uint32_t interval);
  command result_t stop();
  event result_t timer0Fire();
  event result_t timer1Fire();
}

interface Clock {
  command result_t setRate(char interval, char scale);
  event result_t fire();
}
Component implementation:
Component implementation is done in two ways: with modules and with configurations.
A module implements the actual behavior of a component in nesC code, as in the sketch below.
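The original listing is not reproduced in this material, so the following is a minimal
reconstruction of the TimerModule implementation block, given for illustration only; the
clock-rate values and the eventFlag toggling scheme are assumed rather than taken from the source.

implementation {                     // implementation block of TimerModule (specification shown earlier)
  bool eventFlag;                    // toggles between the two logical timers (assumed scheme)

  command result_t StdControl.init() {
    eventFlag = FALSE;
    return SUCCESS;
  }

  command result_t Timer01.start(char type, uint32_t interval) {
    // Program the underlying hardware clock; the mapping of 'interval' to the
    // clock's (interval, scale) arguments is simplified for this sketch.
    return call Clk.setRate((char) interval, 1);
  }

  command result_t Timer01.stop() {
    return call Clk.setRate(0, 0);   // assumed convention for disabling the clock
  }

  event result_t Clk.fire() {
    eventFlag = !eventFlag;
    if (eventFlag) {
      signal Timer01.timer0Fire();   // 'signal' raises an event toward the higher layer
    } else {
      signal Timer01.timer1Fire();
    }
    return SUCCESS;
  }
}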
This code illustrates a timer component implementation in nesC. The signal keyword indicates the
triggering of an event toward a higher-layer component, while call invokes a command of a
lower-layer component.
Configurations are used to wire together already available component modules, as in the sketch below.
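Again, the original listing is not reproduced here; the following is a minimal sketch of such a
configuration, assuming HWClock is the name of the hardware clock component and TimerConfiguration
is the name of the resulting composite component.

configuration TimerConfiguration {
  provides {
    interface StdControl;
    interface Timer01;
  }
}
implementation {
  components TimerModule, HWClock;
  // Export the module's interfaces as this configuration's own interfaces.
  StdControl = TimerModule.StdControl;
  Timer01 = TimerModule.Timer01;
  // Wire the module's used Clock interface to the hardware clock component.
  TimerModule.Clk -> HWClock.Clock;
}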
This code illustrates how the timer component module is wired to the hardware clock.
Sensor node model:
To be simulated, a sensor node must be represented in software, together with its communication
interface. Depending on the application, the programmer decides how much detail the node model
needs. If the nodes are mobile, then the positions and motion properties of the nodes need to be
modeled. If energy characteristics are part of the design considerations, then the power
consumption of the nodes needs to be modeled.
Communication model:
Communication is a major part of a sensor network, and different communication protocols are
required at different layers. At the physical layer, models are required for analyzing delays and
collisions; at the data link layer, MAC protocols are required; and at the network layer, routing
protocols are required. So, to analyze all types of communication protocols, the simulator should
have a communication model.
Physical environment model:
The major goal of a sensor network is to interact with the physical world, whose parameters change
from time to time. So, to analyze sensor network performance under different physical environment
conditions, the simulator should have a physical environment model. The environment can be
simulated separately, or can even be stored in data files for the sensor nodes to read in. If, in
addition to sensing, the network also performs actions that influence the behavior of the
environment, then a more tightly integrated simulation mechanism is required.
TOSSIM simulator:
TOSSIM is a node-level simulator. It is dedicated to simulating nesC applications running on
Berkeley motes. The key design decisions in building TOSSIM were to make it scale to networks of
potentially thousands of nodes, and to be able to use the actual software code in the simulation.
To achieve these goals, TOSSIM takes a cross-compilation approach that compiles the nesC source
code into components of the simulation. The event-driven execution model of TinyOS greatly
simplifies the design of TOSSIM. By replacing a few low-level components, such as the A/D
conversion (ADC), the system clock, and the radio front end, TOSSIM translates hardware interrupts
into discrete-event simulator events. The simulator event queue delivers the interrupts that drive
the execution of a node. The upper-layer TinyOS code runs unchanged.
TOSSIM has a visualization package called TinyViz, a Java application that can connect to TOSSIM
simulations. TinyViz also provides mechanisms to control a running simulation by, for example,
modifying ADC readings, changing channel properties, and injecting packets. TinyViz is designed as
a communication service that interacts with the TOSSIM event queue. The exact visual interface
takes the form of plug-ins that can interpret TOSSIM events.
Network Simulator 2 (ns-2) is an object-oriented, discrete-event network simulator written in C++
and OTcl (an object-oriented extension of the Tool Command Language). ns-2 was initially designed
for wired networks, was later extended to wireless communication networks, and more recently has
been extended to wireless sensor networks. The original ns-2 supports logical addresses only, but
the extended versions also support node mobility and location.
ns-2 has been extended for sensor networks with the help of two organizations: UCLA (University of
California, Los Angeles) and NRL (the Naval Research Laboratory). UCLA introduced the SensorSim
model into ns-2. SensorSim aims at providing an energy model for sensor nodes and communication,
so that power properties can be simulated [175]. SensorSim also supports hybrid simulation, where
some real sensor nodes, running real applications, can be executed together with a simulation.
The NRL sensor network extension provides a flexible way of modeling physical phenomena in a
discrete-event simulator. Physical phenomena are modeled as network nodes that communicate with
real nodes through physical layers. Any interesting events are sent to the nodes that can sense
them as a form of communication. The receiving nodes simply have a sensor stack, parallel to the
network stack, that processes these events.
ns-2 is implemented using two languages: Tcl script (Tool Command Language) and C++. All the
protocols and the internal execution engine are written in C++, while the front-end network design
is written in Tcl script. C++ executes tasks much faster than Tcl script, but it is not convenient
for implementing the front-end design of a sensor network, because the front-end design changes
frequently as nodes run out of energy and new nodes enter the network, and implementing such
changes in C++ (with recompilation) takes a lot of time. Tcl script executes more slowly than C++,
but the front-end structure can be modified easily using Tcl.
The key advantage of ns-2 is its rich library of protocols for nearly all network layers and for
many routing mechanisms. Examples are TCP variants, ad hoc routing protocols such as DSDV, DSR,
AODV, and TORA, and MAC protocols such as IEEE 802.11.
ns-2 architecture:
The figure shows the ns-2 architecture. The front-end network design is written as a Tcl script,
which is interpreted by the object-oriented Tcl (OTcl) layer; the OTcl objects are then mapped
onto the corresponding internal C++ protocol implementations. Simulation results are analyzed
using the trace file or the network animator (NAM).
In some applications, such as moving-object tracking or intruder tracking, the state of the object
must be updated continuously. In a classical control problem, state-space equations are used for
this state update.
The state-space equations are:
x_{k+1} = f(x_k, u_k)
y_k = g(x_k, u_k)
where x is the state of the system, u are the inputs, y are the outputs, k is an integer update
index over space and/or time, f is the state update function, and g is the output or observation
function. This classical form of state update is not directly usable in distributed real-time
embedded systems such as sensor networks, because in a sensor network the state update has to
address several additional questions:
Where are the state variables stored?
Where do the inputs come from?
Where do the outputs go?
Where are the functions f and g evaluated?
How long does the acquisition of inputs take?
Are the inputs in u_k collected synchronously?
Do the inputs arrive in the correct order through communication?
What is the time duration between indices k and k + 1? Is it a constant?
Along with these questions, some nonfunctional aspects of computation also need to be considered:
concurrency, responsiveness, networking, and resource management are not well supported by
traditional programming models and languages. State-centric programming aims at providing design
methodologies and frameworks that give meaningful abstractions for these issues, so that the
programmer can write code that addresses the above factors. One such abstraction is the
collaboration group.
Collaboration group:
A collaboration group is a set of entities that contribute to a state update. These entities can be
physical sensor nodes, or they can be more abstract system components such as virtual sensors or
mobile agents hopping among sensors. A collaboration group provides two abstractions: its scope,
to encapsulate network topologies, and its structure, to encapsulate communication protocols. The
scope of a group defines the membership of the nodes with respect to the group. Grouping nodes
according to some physical attributes rather than node addresses is an important and
distinguishing characteristic of sensor networks.
The structure of a group defines the “roles” each member plays in the group, and thus the flow
of data. Are all members in the group equal peers? Is there a “leader” member in the group that
consumes data? Do members in the group form a tree with parent and children relations? For
example, a group may have a leader node that collects certain sensor readings from all followers.
A group is a 4-tuple:
G = (A, L, p, R)
where
A is a set of agents;
L is a set of labels, called roles;
p : A → L is a function that assigns each agent a role;
R ⊆ L × L are the connectivity relations among roles.
At run time, the scope and structural dynamics of groups are managed by group management
protocols, which are highly dependent on the types of groups.
nesC is an extension of the C language. It provides a set of language constructs and restrictions
for implementing TinyOS components. In TinyOS, two kinds of definitions are required: the
component interface definition and the component implementation.
Component interface:
To reflect the layered structure of TinyOS, nesC classifies the interfaces of a component into two
types: provides and uses. The provides list declares the interfaces that the component itself
implements and offers to higher layers, while the uses list declares the interfaces the component
needs from other (lower-layer) components. nesC also defines the direction of calls across
interfaces in terms of command calls and event calls: calls from higher layers to lower layers are
command calls, and calls from lower layers to higher layers are event calls.
module TimerModule {
  provides {
    interface StdControl;
    interface Timer01;
  }
  uses interface Clock as Clk;
}

interface StdControl {
  command result_t init();
}

interface Timer01 {
  command result_t start(char type, uint32_t interval);
  command result_t stop();
  event result_t timer0Fire();
  event result_t timer1Fire();
}

interface Clock {
  command result_t setRate(char interval, char scale);
  event result_t fire();
}
Component implementation:
Component implementation is done in two ways: with modules and with configurations.
Modules are used to implement the actual behavior of a component in nesC code, as illustrated by
the TimerModule implementation sketched in the TinyOS section above; there, the signal keyword
indicates the triggering of an event.
Configurations are used to wire together already available component modules, as illustrated by
the timer configuration sketched above, which connects the timer component module to the hardware
clock.
Memorize a few ns-2 commands required in wireless sensor networks:
Defining variables:
set a 43
set b 27
Creating nodes (assuming $ns holds the Simulator instance):
set n0 [$ns node]
set n1 [$ns node]
Creating a UDP connection between the nodes:
set udp0 [new Agent/UDP]
$ns attach-agent $n0 $udp0
A monitoring agent attached at the receiving node (for example, an Agent/LossMonitor) is then used
to count the packets that arrive and the packets that are dropped between node n0 and node n1.
In a cycle-driven (CD) simulator, the total time is divided into discrete slots. At each tick, the
physical phenomena are first simulated, and then all nodes are checked to see if they have
anything to sense, process, or communicate. Sensing and computation are assumed to be finished
before the next tick. Sending a packet is also assumed to be completed by then; however, the
packet will not be available to the destination node until the next tick. This split-phase
communication is a key mechanism for reducing the cyclic dependencies that may occur in
cycle-driven simulations. That is, there should be no two components such that one of them
computes y_k = f(x_k) and the other computes x_k = g(y_k) for the same tick index k. One of the
most subtle issues in designing a CD simulator is how to detect and deal with cyclic dependencies
among nodes or algorithm components.
In a discrete-event (DE) simulator, time is continuous while events are discrete. An event is a
2-tuple consisting of a value and a time stamp indicating when the event is supposed to be
handled. A DE simulator typically requires a global event queue. All events passed between nodes
or modules are put in the event queue and sorted in chronological order. At each iteration of the
simulation, the simulator removes the first event (the one with the earliest time stamp) from the
queue and triggers the component that reacts to that event.
A DE simulator is more accurate than a CD simulator and, as a consequence, runs more slowly. The
overhead of ordering all events and computations, in addition to the values and time stamps of
events, usually dominates the computation time.
Program execution contexts in TinyOS:
A program executed in TinyOS has two contexts: tasks and events. Tasks are posted by components to
the task scheduler. The default implementation of the TinyOS scheduler maintains a task queue and
invokes tasks in the order in which they were posted; thus tasks are a deferred computation
mechanism. Tasks always run to completion without preempting or being preempted by other tasks, so
tasks are non-preemptive. The scheduler invokes a new task from the task queue only when the
current task has completed. When no tasks are available in the task queue, the scheduler puts the
CPU into sleep mode to save energy.
The second context consists of events triggered by hardware: the clock, digital inputs, or other
kinds of interrupts. The execution of an interrupt handler is called an event context. The
processing of events also runs to completion, but it preempts tasks and can be preempted by other
events. Because there is no preemption mechanism among tasks, and because events always preempt
tasks, programmers are required to chop their code, especially the code in the event contexts,
into small execution pieces, so that it will not block other tasks for too long.
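The following minimal nesC sketch illustrates this task/event split; it is given for illustration
only: the component SenseM and the task processData are assumed names, the ADC and Timer
interfaces are used in a simplified TinyOS 1.x form, and async/atomic qualifiers are omitted for
brevity. The event handler only stores the sample and posts a task, so the lengthy processing is
deferred to task context and does not block other events.

module SenseM {
  provides interface StdControl;
  uses interface ADC;
  uses interface Timer;
}
implementation {
  uint16_t lastReading;              // shared between event context and task context

  // Deferred computation: runs later in task context, never preempted by other tasks.
  task void processData() {
    // ... lengthy processing of lastReading goes here ...
  }

  command result_t StdControl.init() {
    return SUCCESS;
  }

  event result_t Timer.fired() {
    return call ADC.getData();       // split-phase request; the result arrives via dataReady()
  }

  // Event context: keep it short, then defer the real work to a task.
  event result_t ADC.dataReady(uint16_t data) {
    lastReading = data;
    post processData();
    return SUCCESS;
  }
}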
In some applications, such as moving-object tracking or intruder tracking, the state of the object
must be updated continuously. In a classical control problem, state-space equations are used for
this state update.
The state-space equations are:
x_{k+1} = f(x_k, u_k)
y_k = g(x_k, u_k)
where x is the state of the system, u are the inputs, y are the outputs, k is an integer update
index over space and/or time, f is the state update function, and g is the output or observation
function. This classical form of state update is not directly usable in distributed real-time
embedded systems such as sensor networks, because in a sensor network the state update has to
address several additional questions:
Where are the state variables stored?
Where do the inputs come from?
Where do the outputs go?
Where are the functions f and g evaluated?
How long does the acquisition of inputs take?
Are the inputs in u_k collected synchronously?
Do the inputs arrive in the correct order through communication?
What is the time duration between indices k and k + 1? Is it a constant?