MC0068 SMU MCA Sem2 2011
Set-1
b) Ordered list:
A list in which the order of the items matters to the application; the implementer therefore cannot,
for example, rearrange the items to improve efficiency. An ordered list is a list in which the order of
the items is significant, but the items are not necessarily sorted. Consequently, it is possible to
change the order of the items and still have a valid ordered list.
Consider a list of the titles of the chapters in this book. The order of the items in the list
corresponds to the order in which they appear in the book. However, since the chapter titles are
not sorted alphabetically, we cannot consider the list to be sorted. Since it is possible to change
the order of the chapters in a book, we must be able to do the same with the items of the list. As a
result, we may insert an item into an ordered list at any position.
A searchable container is a container that supports the following additional operations:
insert
-used to put objects into the container;
withdraw
-used to remove objects from the container;
find
-used to locate objects in the container;
isMember
-used to test whether a given object instance is in the container.
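For illustration, these operations could be declared in C roughly as follows. The container is sketched here as an unsorted dynamic array of ints; the struct layout and all names other than insert, withdraw, find and isMember are assumptions made for the sketch, not part of the definition above.

#include <stdlib.h>

/* A minimal sketch of a searchable container of ints. */
typedef struct {
    int *items;
    int  count;
    int  capacity;
} Container;

/* insert: put an object into the container */
void insert(Container *c, int value) {
    if (c->count == c->capacity) {
        c->capacity = c->capacity ? 2 * c->capacity : 4;
        c->items = realloc(c->items, c->capacity * sizeof(int));
    }
    c->items[c->count++] = value;
}

/* find: locate an object; return its index, or -1 if absent */
int find(const Container *c, int value) {
    int i;
    for (i = 0; i < c->count; i++)
        if (c->items[i] == value)
            return i;
    return -1;
}

/* isMember: test whether a given value is in the container */
int isMember(const Container *c, int value) {
    return find(c, value) != -1;
}

/* withdraw: remove one occurrence of a value, if present */
void withdraw(Container *c, int value) {
    int i = find(c, value);
    if (i != -1)
        c->items[i] = c->items[--c->count];
}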
c) Linked list:
A linked list is an ordered collection of items from which items may be deleted, and into which
items may be inserted, at any place. It is a data structure in which each element contains a pointer
to the next element, thus forming a linear list.
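A minimal C sketch of such a node and of following the pointers is given below; the field names info and link are assumptions made for the sketch.

#include <stdio.h>

/* One element of a singly linked list: the data plus a pointer
   to the next element. */
struct node {
    int          info;   /* the item stored in this element        */
    struct node *link;   /* pointer to the next element, or NULL   */
};

/* Visit every element in order by following the link pointers. */
void traverse(struct node *first) {
    struct node *p;
    for (p = first; p != NULL; p = p->link)
        printf("%d\n", p->info);
}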
d) Queue:
An ordered list in which insertion always occurs at one end and deletion always occurs at the
other end.
Figure 2. FIFO
The following operations can be applied to a queue:
InitQueue(Queue): creates an empty queue;
Join(Item): inserts an item at the rear of the queue;
Leave(Queue): removes an item from the front of the queue;
isEmpty(Queue): returns true if the queue is empty.
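A sketch of these operations on a fixed-size circular array is given below; the array size, the field names, and passing the queue as an explicit parameter are assumptions made for the sketch.

#define QUEUE_SIZE 10

typedef struct {
    int items[QUEUE_SIZE];
    int front;   /* index of the element at the front of the queue */
    int count;   /* number of items currently in the queue         */
} Queue;

/* InitQueue: creates an empty queue */
void InitQueue(Queue *q) { q->front = 0; q->count = 0; }

/* isEmpty: returns true if the queue is empty */
int isEmpty(const Queue *q) { return q->count == 0; }

/* Join: inserts an item at the rear of the queue (if there is room) */
void Join(Queue *q, int item) {
    if (q->count < QUEUE_SIZE)
        q->items[(q->front + q->count++) % QUEUE_SIZE] = item;
}

/* Leave: removes and returns the item at the front of the queue */
int Leave(Queue *q) {
    int item = q->items[q->front];
    q->front = (q->front + 1) % QUEUE_SIZE;
    q->count--;
    return item;
}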
e) Stack:
A stack is a list in which all insertions and deletions are made at one end, called the top. The
last element to be inserted into the stack will be the first to be removed. Thus stacks are
sometimes referred to as Last In First Out (LIFO) lists.
Figure 3. LIFO
f) Binary tree:
Each element in a binary tree is stored in a "node" class (or struct). Each node contains pointers
to a left child node and a right child node. In some implementations, it may also contain a
pointer to the parent node. A tree may also have an object of a second "tree" class (or struct)
which acts as a header for the tree. The "tree" object contains a pointer to the root of the tree (the
node with no parent) and whatever other information the programmer wants to squirrel away in
it (e.g. number of nodes currently in the tree).
In a binary search tree, elements are kept sorted in left-to-right order across the tree. That is, if N is a
node, then the value stored in N must be larger than the value stored in left-child(N) and less
than the value stored in right-child(N). Variant trees may have the opposite order (smaller
values to the right rather than to the left) or may allow two different nodes to contain equal
values.
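The ordering rule above can be seen in a small C sketch of the node layout and of insertion into a binary search tree; the struct and function names are assumptions made for the sketch.

#include <stdlib.h>

typedef struct Node {
    int key;
    struct Node *left, *right;
} Node;

/* Insert a key, keeping smaller values in the left subtree and
   larger values in the right subtree. */
Node *bst_insert(Node *root, int key) {
    if (root == NULL) {
        Node *n = malloc(sizeof(Node));
        n->key = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < root->key)
        root->left = bst_insert(root->left, key);
    else if (key > root->key)
        root->right = bst_insert(root->right, key);
    /* equal keys are ignored in this basic variant */
    return root;
}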
g) Hash tables:
A hash table or hash map is a data structure that uses a hash function to map identifying
values, known as keys (e.g., a person's name), to their associated values (e.g., their telephone
number). Thus, a hash table implements an associative array. The hash function is used to
transform the key into the index (the hash) of an array element (the slot or bucket) where the
corresponding value is to be sought.
Ideally, the hash function should map each possible key to a unique slot index, but this ideal is
rarely achievable in practice (unless the hash keys are fixed; i.e. new entries are never added to
the table after it is created). Instead, most hash table designs assume that hash collisions—
different keys that map to the same hash value—will occur and must be accommodated in some
way.
In a well-dimensioned hash table, the average cost (number of instructions) for each lookup is
independent of the number of elements stored in the table. Many hash table designs also allow
arbitrary insertions and deletions of key-value pairs, at constant average (amortized) cost per
operation.
In many situations, hash tables turn out to be more efficient than search trees or any other table
lookup structure. For this reason, they are widely used in many kinds of computer software,
particularly for associative arrays, database indexing, caches, and sets.
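As a small C sketch of the idea, the fragment below maps a string key to a slot index and resolves collisions by chaining entries within the same bucket; the table size, the hash function, and all names are assumptions made for the sketch.

#include <string.h>

#define TABLE_SIZE 101

/* One entry of a chain: a key, its value, and the next entry that
   hashed to the same bucket. */
typedef struct Entry {
    const char   *key;     /* e.g. a person's name          */
    const char   *value;   /* e.g. their telephone number   */
    struct Entry *next;
} Entry;

Entry *table[TABLE_SIZE];  /* each slot is a (possibly empty) chain */

/* Transform the key into the index (the hash) of a slot. */
unsigned hash(const char *key) {
    unsigned h = 0;
    while (*key)
        h = 31 * h + (unsigned char)*key++;
    return h % TABLE_SIZE;
}

/* Look up a key: only the chain in its bucket has to be searched. */
const char *lookup(const char *key) {
    Entry *e;
    for (e = table[hash(key)]; e != NULL; e = e->next)
        if (strcmp(e->key, key) == 0)
            return e->value;
    return NULL;
}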
2. Illustrate the C program that represents the stack implementation of the PUSH and POP
operations.
Ans:2
#include<stdio.h>
#include<conio.h>
#define Max 5

int Staff[Max], top = -1;

void display()
{
    int i;
    if((top == -1) || (top == 0))
    {
        printf("\n stack is empty\n");
    }
    else
    {
        printf("\n stack elements are\n");
        for(i = top - 1; i >= 0; i--)
            printf("%d\n", Staff[i]);
    }
}

void push()
{
    int ele;
    char ch;
    if(top == -1)
        top = 0;
    do
    {
        if(top >= Max)
        {
            printf("\n stack is full");
            break;
        }
        else
        {
            clrscr();
            printf("\n enter the element to be inserted\n");
            scanf("%d", &ele);
            Staff[top++] = ele;
            display();
        }
        printf("\n Do you want to add more elements?\n");
        scanf(" %c", &ch);
    }
    while((ch == 'y') || (ch == 'Y'));
}

void pop()
{
    if((top == -1) || (top == 0))
    {
        printf("\n stack is underflow\n");
    }
    else
    {
        printf("%d is deleted from stack\n", Staff[--top]);
        display();
    }
}

void main()
{
    char c;
    int choice;
    do
    {
        clrscr();
        printf("\n enter the choice\n");
        printf("1->push\n");
        printf("2->pop\n");
        scanf("%d", &choice);
        if(choice == 1)
            push();
        else if(choice == 2)
            pop();
        else
            printf("\n invalid choice");
        printf("\n do you want to continue?");
        scanf(" %c", &c);
    }
    while((c == 'y') || (c == 'Y'));
}
Ans:3
Splay Trees
We shall describe the algorithm by giving three rewrite rules in the form of pictures. In these
pictures, x is the node that was accessed (that will eventually be at the root of the tree). By
looking at the local structure of the tree defined by x, x’s parent, and x’s grandparent we decide
which of the following three rules to follow. We continue to apply the rules until x is at the root
of the tree:
Notes
1) Each rule has a mirror image variant, which covers all the cases.
2) The zig-zig rule is the one that distinguishes splaying from just rotating x to the root of the
tree.
3) Top-down splaying is much more efficient in practice.
The Basic Bottom-Up Splay Tree
• A technique called splaying can be used so a logarithmic amortized bound can be
achieved.
• We use rotations such as we’ve seen before.
• The zig case.
o Let X be a non-root node on the access path on which we are rotating.
o If the parent of X is the root of the tree, we merely rotate X and the root as shown
in Figure 2.
• Given the rotations, consider the example in Figure 5, where we are splaying c.
Zig-Zig – Figure 8
Zig-Zag – Figure
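Since the pictures are not reproduced here, a rough C sketch of the simplest case may help: in the zig case, x's parent p is the root, so a single rotation brings x to the root. The Node layout and the function name are assumptions made for the sketch.

typedef struct Node {
    int key;
    struct Node *left, *right;
} Node;

/* Zig case for a left child: rotate x (the left child of the root p)
   up to the root and return the new root. */
Node *zig_left(Node *p) {
    Node *x = p->left;
    p->left  = x->right;   /* x's right subtree becomes p's left subtree */
    x->right = p;          /* p becomes x's right child                  */
    return x;              /* x is now the root                          */
}

The mirror-image case (x as a right child) is symmetric, and the zig-zig and zig-zag cases each apply two such rotations.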
4. Discuss the techniques for allowing a hash file to expand and shrink dynamically.
What are the advantages and disadvantages of each?
Ans:4
Dynamic Hashing:
As the database grows over time, we have three options:
1. Choose the hash function based on the current file size. Performance degrades as the file
grows.
2. Choose the hash function based on the anticipated file size. Space is wasted initially.
3. Periodically reorganize the hash structure as the file grows. This requires selecting a new hash
function, recomputing all addresses, and generating new bucket assignments.
Some hashing techniques allow the hash function to be modified dynamically to accommodate
the growth or shrinking of the database. These are called dynamic hash functions. Extendable
hashing is one form of dynamic hashing. Extendable hashing splits and coalesces buckets as
database size changes.
• Note that the i appearing over the bucket address table tells how many bits are required
to determine the correct bucket.
• It may be the case that several entries point to the same bucket.
• All such entries will have a common hash prefix, but the length of this prefix may be less
than i.
• So we give each bucket j an integer i_j giving the length of the common hash prefix of the entries that point to it.
• To insert a record whose key is K, compute h(K) and use the first i bits to reach a bucket j; if that bucket is full, it must be split.
1. If i_j = i, then only one entry in the bucket address table points to bucket j.
• Then we need to increase the size of the bucket address table so that we can include
pointers to the two buckets that result from splitting bucket j.
• We increment i by one, thus considering more of the hash, and doubling the size of the
bucket address table.
• Each entry is replaced by two entries, both of which contain the same pointer as the original entry.
• Now two entries in bucket address table point to bucket j.
• We allocate a new bucket z, and set the second pointer to point to z.
• Set i_j and i_z to i.
• Rehash all records in bucket j which are put in either j or z.
• Now insert new record.
• It is remotely possible, but unlikely, that the new hash will still put all of the records in
one bucket.
• If so, split again and increment i again.
2. If i_j < i, then more than one entry in the bucket address table points to bucket j.
• Then we can split bucket j without increasing the size of the bucket address table
(why?).
• Note that all entries that point to bucket j correspond to hash prefixes that have the same value on their first i_j bits.
• We allocate a new bucket z, and set i_j and i_z to the original i_j plus 1.
• Now adjust entries in the bucket address table that previously pointed to bucket j.
• Leave the first half pointing to bucket j, and make the rest point to bucket z.
• Rehash each record in bucket j as before.
• Reattempt new insert.
Note that in both cases we only need to rehash records in bucket j.
Deletion of records is similar. Buckets may have to be coalesced, and the bucket address table may
have to be halved.
Insertion is illustrated for the example deposit file of Figure 11.20.
• 32-bit hash values on bname are shown in Figure 11.21.
• An initial empty hash structure is shown in Figure 11.22.
• We insert records one by one.
• We (unrealistically) assume that a bucket can only hold 2 records, in order to illustrate
both situations described.
• As we insert the Perryridge and Round Hill records, this first bucket becomes full.
• When we insert the next record (Downtown), we must split the bucket.
• Since i_j = i, we need to increase the number of bits we use from the hash.
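A rough C sketch of the directory and bucket layout described above is given below. The bucket capacity of 2 records and the 32-bit hash values follow the example; the names, the struct layout, and the use of the high-order bits as the directory index are assumptions made for the sketch.

#include <stdint.h>

#define BUCKET_CAPACITY 2

typedef struct Bucket {
    int      local_depth;   /* i_j: prefix length shared by this bucket's entries */
    int      count;         /* records currently stored in the bucket             */
    uint32_t keys[BUCKET_CAPACITY];
} Bucket;

typedef struct {
    int      global_depth;  /* i: number of hash bits used by the directory  */
    Bucket **dir;           /* 2^i pointers; several may share one bucket    */
} Directory;

/* Select the bucket for a hash value using its first global_depth bits. */
Bucket *lookup_bucket(Directory *d, uint32_t hash) {
    uint32_t index = d->global_depth ? (hash >> (32 - d->global_depth)) : 0;
    return d->dir[index];
}

When a full bucket j must be split, the two cases above correspond either to doubling dir (incrementing global_depth) or to only allocating a new bucket and redistributing the directory entries whose prefix matched bucket j.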
Set-2
1. Explain the double ended queue with the help of suitable example.
Ans:1
A double-ended queue (dequeue, often abbreviated to deque, pronounced "deck") is an
abstract data structure that implements a queue for which elements can be added to or
removed from either the front (head) or the back (tail). It is also often called a head-tail linked list.
Deque is a special type of data structure in which insertions and deletions can be
done either at the front end or at the rear end of the queue. The operations that can
be performed on a deque are: insert at the front, insert at the rear, delete from the front, and
delete from the rear, as sketched below.
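A minimal C sketch of these four operations on a circular array follows; the array size, the field names, and the value returned on underflow are assumptions made for the sketch.

#define DEQUE_SIZE 10

typedef struct {
    int items[DEQUE_SIZE];
    int front;   /* index of the first element (initialise to 0) */
    int count;   /* number of elements stored  (initialise to 0) */
} Deque;

/* Insert at the rear end. */
void insert_rear(Deque *d, int x) {
    if (d->count == DEQUE_SIZE) return;              /* deque is full */
    d->items[(d->front + d->count) % DEQUE_SIZE] = x;
    d->count++;
}

/* Insert at the front end. */
void insert_front(Deque *d, int x) {
    if (d->count == DEQUE_SIZE) return;              /* deque is full */
    d->front = (d->front + DEQUE_SIZE - 1) % DEQUE_SIZE;
    d->items[d->front] = x;
    d->count++;
}

/* Delete from the front end. */
int delete_front(Deque *d) {
    int x;
    if (d->count == 0) return -1;                    /* deque is empty */
    x = d->items[d->front];
    d->front = (d->front + 1) % DEQUE_SIZE;
    d->count--;
    return x;
}

/* Delete from the rear end. */
int delete_rear(Deque *d) {
    if (d->count == 0) return -1;                    /* deque is empty */
    d->count--;
    return d->items[(d->front + d->count) % DEQUE_SIZE];
}

For example, inserting 10 and 20 at the rear and then 5 at the front yields the sequence 5 10 20; delete_rear then removes 20 and delete_front removes 5.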
2. Explain Insert/Delete a node at the rear, front end from circular singly linked list.
Ans:2
A singly linked circular list is a linked list where the last node in the list points to the first node
in the list. A circular list does not contain NULL pointers.
With a circular list, a pointer to the last node gives easy access also to the first node, by
following one link. Thus, in applications that require access to both ends of the list (e.g., in the
implementation of a queue), a circular structure allows one to handle the structure by a single
pointer, instead of two.
A circular list can be split into two circular lists, in constant time, by giving the addresses of the
last node of each piece. The operation consists in swapping the contents of the link fields of
those two nodes. Applying the same operation to any two nodes in two distinct lists joins the two
lists into one. This property greatly simplifies some algorithms and data structures, such as the
quad-edge and face-edge.
The simplest representation for an empty circular list (when such a thing makes sense) is a null
pointer, indicating that the list has no nodes. With this choice, many algorithms have to test for
this special case, and handle it separately. By contrast, the use of null to denote an empty
linear list is more natural and often creates fewer special cases.
Consider a list that contains 4 nodes, where last is a pointer variable that contains the address of the
last node.
50 20 45 10
Step 1: Obtain a free node temp from the availability list and store the item in its info field. This can
be accomplished using the following statements:
temp = getnode();
temp->info = item;
Step 2: Copy the address of the first node (i.e. last->link) into the link field of the newly obtained node
temp. The statement to accomplish this task is
temp->link = last->link;
Step 3: Establish a link between the newly created node temp and the last node. This is
achieved by copying the address of the node temp into the link field of the last node. The corresponding
code for this is
last->link = temp;
Step 4: The new node is made the last node using the statement
return temp;
(the caller stores the returned pointer back in last).
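Putting the four steps together, a function for insertion at the rear end could look roughly as follows; NODE, getnode(), and the info/link fields follow the conventions used in this answer, while the handling of an empty list is an added assumption.

NODE insert_rear(int item, NODE last)
{
    NODE temp;
    temp = getnode();                 /* Step 1 */
    temp->info = item;
    if (last == NULL)                 /* empty list: the node points to itself */
        temp->link = temp;
    else
    {
        temp->link = last->link;      /* Step 2 */
        last->link = temp;            /* Step 3 */
    }
    return temp;                      /* Step 4: temp becomes the last node */
}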
Consider a list that contains 4 nodes, where a pointer variable last contains the address of the last node.
20 45 10 80
Step 1: To insert an item 50 at the front of the list, obtain a free node temp from the availability
list and store the item in its info field. This can be accomplished using the following statements:
temp = getnode();
temp->info = item;
Step 2: Copy the address of the first node (i.e. last->link) into the link field of the newly obtained node
temp. The statement to accomplish this task is
temp->link = last->link;
Step 3: Establish a link between the newly created node temp and the last node. This is
achieved by copying the address of the node temp into the link field of the last node. The corresponding
code for this is
last->link = temp;
Now, an item is successfully inserted at the front of the list. All these steps were designed
assuming the list already exists. If the list is empty, make temp itself the last node and
establish a link between the first node and the last node. Repeatedly insert items using the
above procedure to create a list.
Function to insert an item at the front end of the list.
NODE insert_front(int item, NODE last)
{
    NODE temp;
    temp = getnode();
    temp->info = item;
    if(last == NULL)             /* list is empty: temp is the only node */
    {
        last = temp;
        temp->link = temp;       /* the node points to itself            */
    }
    else
    {
        temp->link = last->link; /* temp becomes the new first node      */
        last->link = temp;
    }
    return last;
}
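The question also asks for deletion. A sketch of deleting the node at the front end is given below; NODE and the info/link fields follow the conventions above, while freeing the node with free() and printing a message are assumptions made for the sketch.

NODE delete_front(NODE last)
{
    NODE first;
    if (last == NULL)
    {
        printf("List is empty\n");
        return NULL;
    }
    first = last->link;               /* the first node follows the last node */
    printf("%d is deleted\n", first->info);
    if (first == last)                /* only one node in the list            */
    {
        free(first);
        return NULL;
    }
    last->link = first->link;         /* bypass the first node                */
    free(first);
    return last;                      /* the last node is unchanged           */
}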
b. Discuss how a Splay Tree differs from a Binary Tree. Justify your answer with an
appropriate example.
Ans:3 a)
Once you've identified a problem as a search problem, it's important to choose the right type of
search. Here is some information that may be useful when making the decision.
Search   Time     Space    When to use
DFS      O(c^k)   O(k)     Must search the tree anyway, know the level the answers are on, or you aren't looking for the shallowest answer.
BFS      O(c^d)   O(c^d)   Know the answers are very near the top of the tree, or want the shallowest answer.
DFS+ID   O(c^d)   O(d)     Want to do BFS, don't have enough space, and can spare the time.

Here d is the depth of the answer, k is the depth searched, and d <= k.
Remember the ordering properties of each search. If the program needs to produce a list sorted
shortest solution first (in terms of distance from the root node), use breadth first search or
iterative deepening. For other orders, depth first search is the right strategy.
If there isn’t enough time to search the entire tree, use the algorithm that is more likely to find
the answer. If the answer is expected to be in one of the rows of nodes closest to the root, use
breadth first search or iterative deepening. Conversely, if the answer is expected to be in the
leaves, use the simpler depth first search. Be sure to keep space constraints in mind. If memory is
insufficient to maintain the queue for breadth first search but time is available, use iterative
deepening.
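A rough C sketch of DFS with iterative deepening on a binary tree is given below; the Node layout, the goal test on an integer key, and the depth cap are assumptions made for the sketch.

typedef struct Node {
    int key;
    struct Node *left, *right;
} Node;

/* Depth-limited DFS: report whether a node with the given key exists
   within 'limit' levels of the root. */
int dls(const Node *root, int key, int limit) {
    if (root == NULL || limit < 0) return 0;
    if (root->key == key) return 1;
    return dls(root->left, key, limit - 1) ||
           dls(root->right, key, limit - 1);
}

/* Iterative deepening: repeat the depth-limited search with limits
   0, 1, 2, ... so the shallowest answer is found first while using
   only O(d) space. */
int iddfs(const Node *root, int key, int max_depth) {
    int limit;
    for (limit = 0; limit <= max_depth; limit++)
        if (dls(root, key, limit))
            return 1;
    return 0;
}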
B)
Binary Tree:
A binary tree is simply a tree in which each node can have at most two children.
A binary search tree is a binary tree in which the nodes are assigned values, with the following
restrictions:
1) The value stored in a node is greater than every value stored in its left subtree.
2) The value stored in a node is less than every value stored in its right subtree.
3) In the basic form, no two nodes contain equal values.
Consider the following class of binary-tree based algorithms for processing a sequence of
requests:
on each access we pay the distance from the root to the accessed node.
Arbitrary rotations are also allowed, at a cost of 1 per rotation.
For a sequence Sigma, let C(T,Sigma) be the cost of the optimal algorithm in this class for
processing a sequence Sigma starting from tree T. Then the cost of splaying the sequence
(starting from any tree) is O(C(T,Sigma) + n).
What about dynamic operations that change the set of elements in the tree? Here’s how we can
do Insertion, Join, and Deletion.
Insertion
Say we want to insert a new item e. Proceed like ordinary binary tree insertion, but stop short of
linking the new item e into the tree. Let the node you were about to link e to be called p. We
splay p to the root. Now construct a tree according to one of the following pictures:
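Those pictures are not reproduced here; in outline, the new item e becomes the new root, taking one of p's subtrees, and p becomes its child. A rough C sketch of this final linking step is shown below; the Node layout and the key comparison are assumptions made for the sketch.

typedef struct Node {
    int key;
    struct Node *left, *right;
} Node;

/* p has already been splayed to the root; link the new item e in as
   the new root, splitting p's subtrees around e's key. */
Node *link_new_root(Node *p, Node *e) {
    if (e->key < p->key) {            /* e keeps p's left subtree; p goes right */
        e->left  = p->left;
        e->right = p;
        p->left  = NULL;
    } else {                          /* e keeps p's right subtree; p goes left */
        e->right = p->right;
        e->left  = p;
        p->right = NULL;
    }
    return e;                         /* e is the new root of the splay tree */
}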