
DSU Unit 4 Solved Question Bank

Q1. What are the different types of traversing?

Ans –
Traversing refers to visiting each node in a data structure in a systematic way. Traversal methods
differ between trees and graphs due to their structures. Here are the types of traversal:

Tree Traversal
1. Depth-First Traversal (DFT)
• Explores as far as possible along a branch before backtracking.
• Types:
o Preorder (NLR): Visit Node → Left subtree → Right subtree.
▪ Used for: Copying a tree, evaluating expressions in prefix notation.
o Inorder (LNR): Left subtree → Visit Node → Right subtree.
▪ Used for: Sorting in Binary Search Trees.
o Postorder (LRN): Left subtree → Right subtree → Visit Node.
▪ Used for: Deleting a tree, evaluating expressions in postfix notation.
• Example Of DFT –
1
/ \
2 3
/ \
4 5
Preorder: 1 → 2 → 4 → 5 → 3
Inorder: 4 → 2 → 5 → 1 → 3
Postorder: 4 → 5 → 2 → 3 → 1
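The three depth-first orders can be sketched in C. This is an illustrative sketch only: the `Node` type, `newNode` helper, and the collector-style signatures are assumptions, not part of the question bank.

```c
#include <stdlib.h>

/* Minimal binary-tree node (assumed for illustration). */
struct Node {
    int value;
    struct Node *left, *right;
};

struct Node *newNode(int v) {
    struct Node *n = malloc(sizeof *n);
    n->value = v;
    n->left = n->right = NULL;
    return n;
}

/* Each routine appends visited values to out[] and returns the new count. */
int preorder(struct Node *n, int out[], int k) {   /* N L R */
    if (!n) return k;
    out[k++] = n->value;
    k = preorder(n->left, out, k);
    return preorder(n->right, out, k);
}

int inorder(struct Node *n, int out[], int k) {    /* L N R */
    if (!n) return k;
    k = inorder(n->left, out, k);
    out[k++] = n->value;
    return inorder(n->right, out, k);
}

int postorder(struct Node *n, int out[], int k) {  /* L R N */
    if (!n) return k;
    k = postorder(n->left, out, k);
    k = postorder(n->right, out, k);
    out[k++] = n->value;
    return k;
}
```

Running the three routines on the example tree above reproduces the listed orders.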

2. Breadth-First Traversal (BFT)


• Visits nodes level by level, starting from the root.
• Also called Level Order Traversal.
• Implemented using a queue.
• Used for: Finding the shortest path in unweighted trees, traversing organizational
hierarchies.
• Example:
1
/ \
2 3
/ \ / \
4 5 6 7
Level Order: 1 → 2 → 3 → 4 → 5 → 6 → 7
Graph Traversal
1. Depth-First Search (DFS)
• Explores as deep as possible along each branch before backtracking.
• Implemented using recursion or a stack.
• Applications:
o Cycle detection.
o Pathfinding (e.g., mazes).
o Topological sorting in Directed Acyclic Graphs (DAGs).

2. Breadth-First Search (BFS)


• Explores all neighbors of a node before moving to the next level.
• Implemented using a queue.
• Applications:
o Shortest path in unweighted graphs (e.g., finding paths in social networks).
o Level order traversal in trees.

Example For DFS And BFS


A
/ \
B C
| |
D E

DFS (starting from A): A → B → D → C → E


BFS (starting from A): A → B → C → D → E
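The same example can be traced in code. Below is a minimal C sketch; the adjacency matrix, the vertex numbering A=0 … E=4, and the function names are assumptions made for illustration.

```c
#define N 5  /* vertices A=0, B=1, C=2, D=3, E=4 */

int adj[N][N] = {
    /* A  B  C  D  E */
    {  0, 1, 1, 0, 0 },  /* A: edges to B, C */
    {  1, 0, 0, 1, 0 },  /* B: edges to A, D */
    {  1, 0, 0, 0, 1 },  /* C: edges to A, E */
    {  0, 1, 0, 0, 0 },  /* D: edge to B */
    {  0, 0, 1, 0, 0 },  /* E: edge to C */
};

/* DFS: recurse as deep as possible before backtracking. */
void dfs(int u, int visited[], char *out, int *k) {
    visited[u] = 1;
    out[(*k)++] = 'A' + u;
    for (int v = 0; v < N; v++)
        if (adj[u][v] && !visited[v])
            dfs(v, visited, out, k);
}

/* BFS: visit all neighbors of a vertex before moving deeper, via a queue. */
void bfs(int start, char *out, int *k) {
    int visited[N] = {0}, queue[N], head = 0, tail = 0;
    visited[start] = 1;
    queue[tail++] = start;
    while (head < tail) {
        int u = queue[head++];
        out[(*k)++] = 'A' + u;
        for (int v = 0; v < N; v++)
            if (adj[u][v] && !visited[v]) {
                visited[v] = 1;
                queue[tail++] = v;
            }
    }
}
```

Starting both traversals at A yields DFS order A B D C E and BFS order A B C D E, matching the example.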

Q2. Define BST?


Ans –
Binary Search Tree (BST):
A Binary Search Tree (BST) is a type of binary tree that organizes data in a hierarchical
structure to support efficient searching, insertion, and deletion operations. Each node in a BST
follows a specific ordering property, making it suitable for applications requiring dynamic sorted
data.
A Binary Search Tree is a specialized type of binary tree where each node satisfies the
following properties:
1. Left Subtree Rule: All nodes in the left subtree of a node contain values less than the
node’s value.
2. Right Subtree Rule: All nodes in the right subtree of a node contain values greater than
the node’s value.
3. Unique Values: Each node has a unique key (no duplicates allowed).
Properties of a BST
1. Ordering Property:
o For each node N:
▪ All values in the left subtree are less than N.
▪ All values in the right subtree are greater than N.
2. No Duplicate Values:
o Typically, BSTs do not allow duplicate values to ensure efficient operations.
Variants like multiway trees handle duplicates.
3. Recursive Structure:
o A BST is defined recursively:
▪ Each subtree of a BST is itself a BST.
4. Dynamic Structure:
o BSTs can grow or shrink dynamically, adjusting to the size of the dataset.
5. Height:
o The height of a BST determines the efficiency of operations:
▪ Best Case (Balanced): O(log n) for operations.
▪ Worst Case (Skewed): O(n) for operations when the tree degenerates
into a linked list.
Basic Operations
1. Searching
• Start from the root.
• Compare the target value with the current node:
o If the value matches, the search is successful.
o If the value is smaller, move to the left subtree.
o If the value is larger, move to the right subtree.
• Time Complexity:
o O(h), where h is the height of the tree.
2. Insertion
• Locate the appropriate position for the new value by following the BST ordering
property.
• Insert the value as a new leaf node.
• Time Complexity: O(h).
3. Deletion
• Locate the node to be deleted and handle three cases:
1. No Child (Leaf Node): Directly remove the node.
2. One Child: Replace the node with its only child.
3. Two Children:
▪ Replace the node with its inorder successor (smallest value in the right
subtree) or inorder predecessor (largest value in the left subtree).
▪ Recursively delete the replacement node.
• Time Complexity: O(h).
4. Traversal
• Inorder Traversal: Retrieves nodes in sorted order.
• Preorder Traversal: Useful for copying the tree structure.
• Postorder Traversal: Useful for deleting the tree.
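The basic operations above can be sketched in C. This is a minimal illustration, not the only possible implementation; the `BSTNode` type and function names are assumptions.

```c
#include <stdlib.h>

/* Hypothetical node type for this sketch. */
struct BSTNode {
    int key;
    struct BSTNode *left, *right;
};

/* Insertion: walk down by the ordering rule until a NULL link is found. O(h). */
struct BSTNode *bstInsert(struct BSTNode *root, int key) {
    if (root == NULL) {
        struct BSTNode *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < root->key)
        root->left = bstInsert(root->left, key);
    else if (key > root->key)
        root->right = bstInsert(root->right, key);   /* duplicates are ignored */
    return root;
}

/* Searching: follow left/right links by comparison. O(h). */
int bstSearch(const struct BSTNode *root, int key) {
    while (root != NULL) {
        if (key == root->key) return 1;
        root = (key < root->key) ? root->left : root->right;
    }
    return 0;
}

/* Deletion: handles the three cases described above. O(h). */
struct BSTNode *bstDelete(struct BSTNode *root, int key) {
    if (root == NULL) return NULL;
    if (key < root->key)
        root->left = bstDelete(root->left, key);
    else if (key > root->key)
        root->right = bstDelete(root->right, key);
    else if (root->left == NULL || root->right == NULL) {
        /* Cases 1 and 2 (leaf or one child): splice the node out. */
        struct BSTNode *child = root->left ? root->left : root->right;
        free(root);
        return child;
    } else {
        /* Case 3 (two children): copy the inorder successor, then delete it. */
        struct BSTNode *succ = root->right;
        while (succ->left != NULL) succ = succ->left;
        root->key = succ->key;
        root->right = bstDelete(root->right, succ->key);
    }
    return root;
}
```

Inserting {10, 5, 15, 3, 7, 12, 18} builds the example tree below; deleting 10 replaces it with its inorder successor 12.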
Advantages of a BST
1. Efficient Searches: Searching, insertion, and deletion can be performed in O(log n) for
balanced trees.
2. Dynamic Nature: Unlike arrays, BSTs dynamically resize.
3. Sorted Output: Inorder traversal produces a sorted sequence of elements.

Disadvantages of a BST
1. Degeneration: If poorly balanced (e.g., inserting sorted elements), a BST can degrade
into a linked list with O(n) complexity for operations.
2. Balancing: Requires self-balancing (e.g., AVL or Red-Black trees) for optimal
performance.

Applications of BST
1. Databases: Indexing and searching records.
2. Compilers: Used in symbol tables to store variable names.
3. Network Routing: Efficient routing and IP lookup.
4. Searching and Sorting:
o BSTs act as a dynamic sorted collection.
5. Autocomplete: Variants like tries and BSTs are used for efficient prefix-based searches.

Example
A BST for the values {10, 5, 15, 3, 7, 12, 18}:
10
/ \
5 15
/ \ / \
3 7 12 18
• Left subtree of 10: {5, 3, 7} (all values < 10).
• Right subtree of 10: {15, 12, 18} (all values > 10).

Q3. Explain weighted and Directed Graph?

Ans – Weighted and Directed Graphs


Graphs are data structures consisting of nodes (vertices) and edges (connections). When
additional attributes like direction or weights are added, the graph becomes a Directed Graph or
a Weighted Graph (or both).

1. Weighted Graph
• Definition: A graph where every edge has a numerical value (called weight or cost).
• Representation:
o Adjacency Matrix: A 2D array where the value at position [i][j] represents the
weight of the edge from vertex i to j.
o Adjacency List: Each vertex has a list of pairs (neighbor, weight).
Example:
(A)---4---(B)
| |
6| |3
| |
(C)---5---(D)
• Edges have weights like 4, 6, 5, and 3.
• Adjacency Matrix (vertices A, B, C, D):
A B C D
A [ 0 4 6 0 ]
B [ 4 0 0 3 ]
C [ 6 0 0 5 ]
D [ 0 3 5 0 ]
• Adjacency List:
A → [(B, 4), (C, 6)]
B → [(A, 4), (D, 3)]
C → [(A, 6), (D, 5)]
D → [(B, 3), (C, 5)]
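The adjacency list above might be built in C as follows. The `WEdge` type, vertex numbering (A=0 … D=3), and function names are illustrative assumptions.

```c
#include <stdlib.h>

#define V 4   /* vertices A=0, B=1, C=2, D=3 */

/* One weighted edge in a vertex's adjacency list. */
struct WEdge {
    int to, weight;
    struct WEdge *next;
};

struct WEdge *adjList[V];

/* Add an undirected weighted edge u--v by inserting it into both lists. */
void addEdge(int u, int v, int w) {
    struct WEdge *e = malloc(sizeof *e);
    e->to = v; e->weight = w; e->next = adjList[u]; adjList[u] = e;
    e = malloc(sizeof *e);
    e->to = u; e->weight = w; e->next = adjList[v]; adjList[v] = e;
}

/* Returns the weight of edge u--v, or -1 if the edge is absent. */
int edgeWeight(int u, int v) {
    for (struct WEdge *e = adjList[u]; e; e = e->next)
        if (e->to == v) return e->weight;
    return -1;
}
```

Adding the four example edges (A-B:4, A-C:6, B-D:3, C-D:5) reproduces the adjacency list shown above.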

2. Directed Graph (Digraph)


• Definition: A graph where edges have a direction, meaning they point from one vertex to
another.
• Edge Representation: (u, v) means there is an edge from vertex u to v, but not
necessarily from v to u.
Example:
(X) → (Y)
↑ ↓
(Z) ← (W)
Edge X→Y indicates a connection from X to Y, but not the reverse.
Graph Details:
• Vertices: X,Y,Z,W
• Edges:
o X→Y
o Y→W
o W→Z
o Z→X
Edge Representation for Directed Graph (Unweighted):
• List of edges with only the direction:
[(X, Y), (Y, W), (W, Z), (Z, X)]

Applications of Weighted & Directed Graphs


1. Weighted Graphs:
o Transportation Networks: Calculate distances, times, or costs.
o Economics: Represent costs, profits, or risks.
2. Directed Graphs:
o Task Scheduling: Represent tasks with dependencies.
o Web Crawling: Links from one webpage to another.
Q4. Define Complete Binary Tree?

Ans –
Complete Binary Tree: Detailed Explanation
A Complete Binary Tree is a specialized form of a binary tree with specific structural
properties. Here's an in-depth look at its definition, properties, examples, and applications.
A Complete Binary Tree satisfies the following conditions:
1. All levels, except possibly the last, are completely filled with nodes.
2. Nodes in the last level are aligned as far left as possible.
Properties of a Complete Binary Tree
1. Height:
o The height of a complete binary tree with n nodes is ⌊log2(n)⌋+1.
o Example: A complete binary tree with 15 nodes has a height of ⌊log2(15)⌋+1=4.
2. Node Distribution:
o For a tree of height h, the first h − 1 levels are completely filled.
o Nodes in the last level (level h) are filled from left to right.
3. Number of Nodes:
o A complete binary tree of height h has between 2^(h−1) and 2^h − 1 nodes:
▪ Minimum nodes: 2^(h−1) (when the last level has only one node).
▪ Maximum nodes: 2^h − 1 (when all levels are fully filled).
4. Efficient Storage:
o A complete binary tree can be stored efficiently in an array (or list) without
pointers or references:
▪ Parent of a node at index i: (i−1)//2.
▪ Left child: 2i+1.
▪ Right child: 2i+2.
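The index formulas can be written directly as helper functions (the names are illustrative; indices are 0-based):

```c
/* Index arithmetic for a complete binary tree stored in a 0-based array. */
int parentIndex(int i) { return (i - 1) / 2; }   /* integer division floors */
int leftChild(int i)   { return 2 * i + 1; }
int rightChild(int i)  { return 2 * i + 2; }
```

For example, in the 9-node tree shown later in this answer, node 9 sits at index 8, and `parentIndex(8)` gives 3, the index of its parent (node 4).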

Advantages of Complete Binary Trees


1. Efficient Traversal:
o Nodes are stored sequentially in arrays, enabling fast access.
2. Memory Usage:
o No gaps or null pointers in storage representation.
3. Balanced Structure:
o Ensures operations like search, insertion, and deletion are more efficient than in
skewed trees.

Applications of Complete Binary Trees


1. Heap Data Structure:
o Min-Heaps and Max-Heaps are implemented as complete binary trees.
o Example: Priority queues in scheduling and graph algorithms.
2. Binary Search Algorithms:
o Complete binary trees are used in scenarios where balanced searching is required.
3. Huffman Coding:
o In data compression, Huffman trees are often complete binary trees.
Example Tree:

1
/ \
2 3
/ \ / \
4 5 6 7
/ \
8 9
Explanation:
1. Levels:
o Level 1: Fully filled (Node 1).
o Level 2: Fully filled (Nodes 2, 3).
o Level 3: Fully filled (Nodes 4, 5, 6, 7).
o Level 4: Partially filled (Nodes 8 and 9), aligned from left to right.
2. Nodes:
o Total number of nodes = 9.
o Height of the tree (h) = ⌊log2(9)⌋+1=4.

Q5. What are the two methods of binary tree implementation?

Ans –
A binary tree can be implemented in two ways:
1. Sequential (Array) Representation: Nodes are stored in an array using index
arithmetic — for a node at index i (0-based), the left child is at 2i + 1 and the
right child at 2i + 2. This is compact for complete binary trees but wastes slots
for sparse or skewed trees.
2. Linked Representation: Each node is a structure holding the data and two
pointers (left and right) to its children. This is the more flexible and commonly
used approach, since the tree can grow and shrink dynamically.
Q6. Explain the AVL tree insertion and deletion with suitable example.

Ans –
AVL Tree: Overview
An AVL tree is a self-balancing binary search tree where the difference between the
heights of the left and right subtrees of any node (called the balance factor) is at most 1. This
ensures that the tree remains approximately balanced, resulting in O(log n) time complexity for
search, insertion, and deletion operations.

Key Concepts
1. Balance Factor:
Balance Factor = Height of Left Subtree - Height of Right Subtree
o Valid balance factor: −1,0,+1.
o If the balance factor exceeds these limits, rotations are used to rebalance.
2. Rotations:
o Used to restore balance after insertion or deletion.
o Types of rotations:
▪ Right Rotation: corrects a left-left (LL) imbalance (left-heavy).
▪ Left Rotation: corrects a right-right (RR) imbalance (right-heavy).
▪ Left-Right Rotation (LR): corrects a left-right imbalance.
▪ Right-Left Rotation (RL): corrects a right-left imbalance.
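The two single rotations might be sketched in C as follows. The `AVLNode` type, the height convention (height of an empty subtree = −1), and the function names are assumptions for illustration; the double rotations (LR, RL) are just compositions of these two.

```c
#include <stddef.h>

struct AVLNode {
    int key, height;
    struct AVLNode *left, *right;
};

int nodeHeight(struct AVLNode *n) { return n ? n->height : -1; }
int imax(int a, int b) { return a > b ? a : b; }

/* Right rotation around y: fixes a left-heavy (LL) imbalance. */
struct AVLNode *rotateRight(struct AVLNode *y) {
    struct AVLNode *x = y->left;
    y->left = x->right;      /* x's right subtree moves under y */
    x->right = y;            /* y becomes x's right child */
    y->height = 1 + imax(nodeHeight(y->left), nodeHeight(y->right));
    x->height = 1 + imax(nodeHeight(x->left), nodeHeight(x->right));
    return x;                /* x is the new subtree root */
}

/* Left rotation around x: fixes a right-heavy (RR) imbalance. */
struct AVLNode *rotateLeft(struct AVLNode *x) {
    struct AVLNode *y = x->right;
    x->right = y->left;
    y->left = x;
    x->height = 1 + imax(nodeHeight(x->left), nodeHeight(x->right));
    y->height = 1 + imax(nodeHeight(y->left), nodeHeight(y->right));
    return y;
}
```

For instance, a left rotation applied to the chain 10 → 20 → 30 makes 20 the new root with 10 and 30 as children, exactly as in step 3 of the insertion example below.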

Insertion in an AVL Tree


When inserting a node, we follow the standard binary search tree (BST) insertion rules,
then check and restore balance using rotations if needed.
Steps for Insertion:
1. Insert the node as in a BST.
2. Update the height of the affected nodes.
3. Calculate the balance factor.
4. Perform rotations if the tree becomes unbalanced.
Example:
Insert the nodes 10, 20, 30, 40, 50, 25 into an AVL tree.
1. Insert 10:
Tree:
10
2. Insert 20:
Tree:
10
\
20
o Balance Factor of root (10): -1. Tree is balanced.
3. Insert 30:
Tree:
10
\
20
\
30
o Balance Factor of root (10): -2.
o Imbalance type: RR (right-heavy).
o Perform Left Rotation at 10.
Result:
20
/ \
10 30
4. Insert 40:
Tree:
20
/ \
10 30
\
40
o Balance Factor of root (20): -1. Tree is balanced.
5. Insert 50:
Tree:
20
/ \
10 30
\
40
\
50
o Balance Factor of node 30: -2 (the lowest unbalanced node).
o Imbalance type: RR (right-heavy).
o Perform Left Rotation at 30.
Result:
20
/ \
10 40
/ \
30 50
6. Insert 25:
Tree:
20
/ \
10 40
/ \
30 50
/
25
o Balance Factor of root (20): -2.
o Imbalance type: RL (Right-Left).
o Perform Right Rotation at 40, then Left Rotation at 20.
Result:
30
/ \
20 40
/ \ \
10 25 50
Deletion in an AVL Tree
Deletion in an AVL tree involves:
1. Removing the node as in a BST.
2. Updating the height of affected nodes.
3. Calculating the balance factor.
4. Performing rotations to restore balance if needed.
Example:
Delete 25 from the tree:
30
/ \
20 40
/ \ \
10 25 50
1. Remove 25:
Tree:
30
/ \
20 40
/ \
10 50
o Balance Factor of root (30): 0. Tree is balanced.
2. Delete 20:
20 has one child (10), so it is replaced by that child.
Tree:
30
/ \
10 40
\
50
o Balance Factor of root (30): -1. The tree is still balanced, so no rotation is
needed.

Comparison of AVL Tree Operations


Operation Time Complexity
Search O(log n)
Insertion O(log n)
Deletion O(log n)
Q7. What are the Terminologies of Tree?

Ans –
The Tree Terminologies:
1. Node
• A node is a fundamental unit of a tree.
• It contains:
o Data: Information stored in the node.
o Pointers: References to its child nodes (left, right, or more, depending on the tree
type).
• Example: In a binary tree:
10
/ \
5 20
o 10, 5, and 20 are nodes.

2. Root
• The root is the topmost node of the tree and serves as the entry point.
• A tree has exactly one root.
• It has no parent.
• Example: In the tree:
10
/ \
5 20
o 10 is the root.

3. Parent and Child


• A parent node has one or more child nodes connected to it.
• A child is any node connected to its parent node.
• Example:
10
/ \
5 20
o 10 is the parent of 5 and 20.
o 5 and 20 are children of 10.

4. Siblings
• Siblings are nodes that share the same parent.
• Example:
10
/ \
5 20
o 5 and 20 are siblings because they share the same parent (10).
5. Leaf (or External Node)
• A leaf node is a node that has no children.
• It is the end of a path in the tree.
• Example:
10
/ \
5 20
/
3
o 3 and 20 are leaf nodes because they have no children.

6. Internal Node
• An internal node is any node with at least one child.
• Example:
10
/ \
5 20
/
3
o 10 and 5 are internal nodes because they have children.

7. Degree
• The degree of a node is the number of children it has.
• The degree of a tree is the maximum degree of any node in the tree.
• Example:
10
/ \
5 20
/
3
o Degree of 10: 2 (it has two children, 5 and 20).
o Degree of 5: 1 (it has one child, 3).
o Degree of 20: 0 (it has no children).

8. Level
• The level of a node is its distance from the root, measured in edges.
• The root is at level 0, its children at level 1, and so on.
• Example:
10 (Level 0)
/ \
5 20 (Level 1)
/
3 (Level 2)

9. Height
• The height of a node is the number of edges in the longest path from that node to a leaf.
• The height of a tree is the height of the root node.
• Example:
10
/ \
5 20
/
3
o Height of 3: 0 (it’s a leaf).
o Height of 5: 1 (path: 5 → 3).
o Height of the tree: 2 (path: 10 → 5 → 3).

10. Depth
• The depth of a node is the number of edges from the root to that node.
• Example:
10
/ \
5 20
/
3
o Depth of 10: 0.
o Depth of 5: 1.
o Depth of 3: 2.

11. Path
• A path is a sequence of nodes connected by edges.
• Example: In the tree:
10
/ \
5 20
/
3
o Path from 10 to 3: 10 → 5 → 3.

12. Ancestors and Descendants


• Ancestors: Nodes on the path from the current node to the root.
o Example: In the above tree, ancestors of 3 are 5 and 10.
• Descendants: Nodes reachable from the current node.
o Example: Descendants of 10 are 5, 20, and 3.
13. Subtree
• A subtree is a tree consisting of a node and all its descendants.
• Example:
10
/ \
5 20
/
3
o Subtree rooted at 5:

5
/
3

Q8. Create a binary search tree for the following numbers start from an empty binary
search tree. 45,26,10,60,70,30,40 Delete keys 10,60 and 45 one after the other and show
the trees at each stage.

Ans –
Let's go step by step to create the Binary Search Tree (BST) and perform the deletions
as requested. We will start with an empty BST and insert the following numbers: 45, 26, 10, 60,
70, 30, 40. After that, we will delete 10, 60, and 45, one by one, and show the tree at each stage.

1. Insert 45:
Start with an empty tree and insert 45. Since it's the first node, it becomes the root.
45

2. Insert 26:
• 26 is less than 45, so it will be inserted to the left of 45.
45
/
26

3. Insert 10:
• 10 is less than 45, so we go left to 26.
• 10 is also less than 26, so it is inserted as the left child of 26.
45
/
26
/
10
4. Insert 60:
• 60 is greater than 45, so it is inserted as the right child of 45.
45
/ \
26 60
/
10

5. Insert 70:
• 70 is greater than 45, so we go right to 60.
• 70 is greater than 60, so it is inserted as the right child of 60.
45
/ \
26 60
/ \
10 70

6. Insert 30:
• 30 is less than 45, so we go left to 26.
• 30 is greater than 26, so it is inserted as the right child of 26.
45
/ \
26 60
/ \ \
10 30 70

7. Insert 40:
• 40 is less than 45, so we go left to 26.
• 40 is greater than 26, so we go right to 30.
• 40 is greater than 30, so it is inserted as the right child of 30.
45
/ \
26 60
/ \ \
10 30 70
\
40

Now the BST is as follows:


45
/ \
26 60
/ \ \
10 30 70
\
40
Delete 10:
• 10 is a leaf node (no children), so we simply remove it.
45
/ \
26 60
\ \
30 70
\
40

Delete 60:
• 60 has one child (right child 70), so we replace 60 with its right child 70.
45
/ \
26 70
\
30
\
40

Delete 45:
• 45 has two children (26 and 70). We replace 45 with its in-order successor — the
smallest node in its right subtree, which here is 70 (the only node there).
• After copying 70 into the root, we delete the original 70 node from the right subtree.
70
/
26
\
30
\
40

Final Tree after Deletions:


70
/
26
\
30
\
40

This is the final tree after performing all insertions and deletions. Each step has been shown to
clearly illustrate how the tree evolves.
Q9. Constructing an expression tree of given expression abc*+e*f+

Ans – (The explanation below is for understanding; in the exam, just draw the tree.)
Steps to Construct the Expression Tree:
Step 1: Start with an empty stack.
We process the expression character by character.

Step 2: Process each character.


1. Read a:
o It's an operand. Push it onto the stack.
Stack: [a]
2. Read b:
o It's an operand. Push it onto the stack.
Stack: [a, b]
3. Read c:
o It's an operand. Push it onto the stack.
Stack: [a, b, c]
4. Read *:
o It's an operator. Pop the top two operands (b and c) from the stack.
o Create a subtree with * as the root, b as the left child, and c as the right child.
o Push this subtree back onto the stack.
Stack: [a, (b * c)]
5. Read +:
o It's an operator. Pop the top two operands (a and (b * c)) from the stack.
o Create a subtree with + as the root, a as the left child, and (b * c) as the right
child.
o Push this subtree back onto the stack.
Stack: [(a + (b * c))]
6. Read e:
o It's an operand. Push it onto the stack.
Stack: [(a + (b * c)), e]
7. Read *:
o It's an operator. Pop the top two operands (e and (a + (b * c))) from the stack.
o Create a subtree with * as the root, (a + (b * c)) as the left child, and e as the
right child.
o Push this subtree back onto the stack.
Stack: [((a + (b * c)) * e)]
8. Read f:
o It's an operand. Push it onto the stack.
Stack: [((a + (b * c)) * e), f]
9. Read +:
o It's an operator. Pop the top two operands (f and ((a + (b * c)) * e)) from
the stack.
o Create a subtree with + as the root, ((a + (b * c)) * e) as the left child, and f
as the right child.
o Push this subtree back onto the stack.
Stack: [(((a + (b * c)) * e) + f)]

Final Expression Tree


The final tree represents the entire expression abc*+e*f+.
Tree Structure:
+
/ \
* f
/ \
+ e
/ \
a *
/ \
b c
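The stack procedure above can be sketched in C. The `ExprNode` type, the fixed-size stack, and the helper names are assumptions for illustration; operands are assumed to be single letters.

```c
#include <stdlib.h>
#include <ctype.h>
#include <string.h>

struct ExprNode {
    char symbol;
    struct ExprNode *left, *right;
};

/* Build an expression tree from a postfix string using the stack procedure
   described above: operands are pushed; an operator pops two subtrees
   (first pop = right child, second pop = left child). */
struct ExprNode *buildExprTree(const char *postfix) {
    struct ExprNode *stack[64];
    int top = -1;
    for (const char *p = postfix; *p; p++) {
        struct ExprNode *n = malloc(sizeof *n);
        n->symbol = *p;
        n->left = n->right = NULL;
        if (!isalnum((unsigned char)*p)) {   /* operator */
            n->right = stack[top--];
            n->left = stack[top--];
        }
        stack[++top] = n;
    }
    return stack[top];   /* one tree remains: the whole expression */
}

/* Fully parenthesized inorder print, to check the structure. */
void printInfix(const struct ExprNode *n, char *out) {
    if (!n) return;
    if (n->left) strncat(out, "(", 2);
    printInfix(n->left, out);
    strncat(out, &n->symbol, 1);
    printInfix(n->right, out);
    if (n->right) strncat(out, ")", 2);
}
```

Running `buildExprTree("abc*+e*f+")` yields the tree drawn above; printing it inorder gives (((a+(b*c))*e)+f).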

Q10. What is Hashing and explain its Types?

Ans – Hashing
Hashing is a process of mapping data (keys) into a fixed-size value (called a hash or hash
code) using a mathematical function called a hash function. This value serves as an index to
store and retrieve data efficiently in a data structure, commonly a hash table.

Purpose of Hashing
1. Fast Data Retrieval: Hashing allows for near-instantaneous data access.
2. Efficient Storage: Data is distributed across a table to reduce collisions and maximize
storage efficiency.
3. Applications: Hashing is widely used in:
o Databases for indexing.
o Cryptography.
o Caches.
o Hashmaps and sets in programming.
Advantages of Hashing
• Fast data retrieval and insertion (O(1) on average).
• Efficient memory utilization.

Disadvantages of Hashing
• Collisions may degrade performance.
• Poor choice of hash function can lead to uneven distribution.
• Not suitable for ordered data retrieval.
Types Of Hashing
Static Hashing and Dynamic Hashing are two approaches to implementing hash
tables based on how they handle data growth and storage management.
1. Static Hashing
Definition:
In static hashing, the size of the hash table is fixed at the time of creation. Keys are
mapped to a fixed number of slots using a hash function. Once the table is created, it cannot
grow or shrink dynamically.

How Static Hashing Works:


1. A hash function is used to compute an index for each key, mapping it to a specific slot in
the hash table.
2. If a collision occurs (i.e., two keys map to the same slot), collision resolution techniques
like separate chaining or open addressing are applied.
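A minimal sketch of a static hash table with separate chaining is shown below. The integer keys, the fixed table of 8 slots, and the simple modulo hash are all illustrative choices, not requirements of static hashing.

```c
#include <stdlib.h>

#define TABLE_SIZE 8   /* fixed at creation time: static hashing */

struct Entry {
    int key, value;
    struct Entry *next;   /* separate chaining resolves collisions */
};

struct Entry *table[TABLE_SIZE];

/* A simple modulo hash function (an illustrative choice). */
unsigned hashKey(int key) { return (unsigned)key % TABLE_SIZE; }

/* Insert or update a key in its bucket's chain. */
void put(int key, int value) {
    unsigned h = hashKey(key);
    for (struct Entry *e = table[h]; e; e = e->next)
        if (e->key == key) { e->value = value; return; }   /* update in place */
    struct Entry *e = malloc(sizeof *e);
    e->key = key;
    e->value = value;
    e->next = table[h];      /* push onto the front of the chain */
    table[h] = e;
}

/* Returns 1 and writes *out if key is present, else 0. */
int get(int key, int *out) {
    for (struct Entry *e = table[hashKey(key)]; e; e = e->next)
        if (e->key == key) { *out = e->value; return 1; }
    return 0;
}
```

Keys 3 and 11 both hash to slot 3 here, so they share one bucket's chain: a collision handled by chaining rather than by growing the (fixed-size) table.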

Features of Static Hashing:


• Fixed Size: The number of slots in the hash table remains constant.
• Simple Implementation: Easy to create and manage.
• Predictable Performance: Performance is consistent if the number of keys is within the
expected range.

Advantages of Static Hashing:


1. Simplicity: Easy to implement and manage since the table size is fixed.
2. Efficient for Small Data: Works well when the number of keys is known in advance and
does not change frequently.

Disadvantages of Static Hashing:


1. Poor Scalability:
o If the dataset grows beyond the size of the table, collisions increase, degrading
performance.
o If the dataset is smaller than the table, memory is wasted.
2. Table Overflow:
o When the table becomes full, rehashing (creating a new, larger table and
remapping all keys) is required, which is computationally expensive.
3. Static Nature:
o Cannot dynamically adjust to changes in the number of keys.

Applications of Static Hashing:


• Caches with a fixed number of entries.
• Small datasets where the maximum number of keys is predictable.

2. Dynamic Hashing
Definition:
In dynamic hashing, the hash table can grow or shrink dynamically as the number of
keys increases or decreases. This approach is especially useful when the size of the dataset is
unpredictable.
How Dynamic Hashing Works:
Dynamic hashing uses advanced techniques like extendible hashing or linear hashing to
resize the table dynamically, ensuring minimal collisions and efficient utilization of memory.
1. Extendible Hashing:
o Maintains a directory of pointers to buckets.
o The directory size grows when the table needs to expand.
o If a bucket overflows, it is split into two buckets, and the directory is updated.
2. Linear Hashing:
o Incrementally splits the hash table into new buckets as data grows.
o Instead of rehashing the entire table, new keys are distributed into a subset of the
table.

Features of Dynamic Hashing:


• Scalable Size: The table size adjusts dynamically based on the number of keys.
• Efficient Storage: Reduces wasted memory and minimizes collisions.

Advantages of Dynamic Hashing:


1. Handles Growth Gracefully:
o Automatically resizes the table, avoiding performance degradation due to
collisions.
2. Efficient Memory Utilization:
o Dynamically adjusts the size of the table to match the dataset, reducing memory
waste.

3. Incremental Resizing:
o With techniques like linear hashing, only a portion of the table is resized,
minimizing computational overhead.

Disadvantages of Dynamic Hashing:


1. Complex Implementation:
o More challenging to implement than static hashing due to the resizing mechanism.
2. Overhead:
o Directory management or bucket splitting can introduce slight overhead during
insertion.

Applications of Dynamic Hashing:


• Database indexing (e.g., extendible hashing in directory-based file organizations).
• Systems where the dataset size is highly variable and unpredictable.
Q11. Explain Depth first and breadth first traversal?

Ans –
Depth-First Traversal (DFS) and Breadth-First Traversal (BFS) are two fundamental
graph traversal algorithms used to explore nodes and edges systematically. These techniques
differ in their approach to visiting nodes.

1. Depth-First Traversal (DFS)


Definition:
DFS is a traversal method that explores as far as possible along a branch before
backtracking. It uses a stack (explicitly or through recursion) to remember the next vertex to
visit.

Steps of DFS:
1. Start from the given node (usually the root or any starting node).
2. Mark the node as visited.
3. Explore one adjacent node that hasn’t been visited.
4. Repeat the process recursively for the unvisited nodes.
5. Backtrack when no unvisited adjacent nodes remain.

DFS Example:
Consider the following graph:
A
/ \
B C
/ \
D E
Performing DFS starting from A:
• Start at A: Visited = {A}
• Go to B: Visited = {A, B}
• Go to D: Visited = {A, B, D}
• Backtrack to B, then visit E: Visited = {A, B, D, E}
• Backtrack to A, then visit C: Visited = {A, B, D, E, C}
DFS Output: A → B → D → E → C

Applications of DFS:
1. Finding Connected Components: Useful in undirected graphs.
2. Pathfinding: DFS is a key component in algorithms like backtracking and maze-solving.
3. Topological Sorting: Used in directed acyclic graphs.
4. Cycle Detection: Helps detect cycles in graphs.

Advantages of DFS:
1. Requires less memory than BFS (uses only the recursion stack or a single stack
explicitly).
2. Can find paths in scenarios where the target node is deep in the graph.
Disadvantages of DFS:
1. May explore unnecessary paths, especially in large graphs.
2. Cannot guarantee the shortest path in unweighted graphs.

2. Breadth-First Traversal (BFS)


Definition:
BFS is a traversal method that explores all neighbors at the current depth before moving
on to nodes at the next depth level. It uses a queue to track nodes to be visited.

Steps of BFS:
1. Start from the given node.
2. Mark the node as visited and enqueue it.
3. Dequeue a node from the queue and visit all its unvisited neighbors.
4. Mark the neighbors as visited and enqueue them.
5. Repeat until the queue is empty.

BFS Example:
Using the same graph as above:
A
/ \
B C
/ \
D E
Performing BFS starting from A:
• Start at A: Visited = {A}
• Enqueue neighbors B and C: Queue = [B, C]
• Dequeue B and enqueue neighbors D and E: Queue = [C, D, E]
• Dequeue C: Queue = [D, E]
• Dequeue D: Queue = [E]
• Dequeue E: Queue = []
BFS Output: A → B → C → D → E

Applications of BFS:
1. Shortest Path in Unweighted Graphs: Guarantees the shortest path between nodes.
2. Level Order Traversal: Commonly used in tree traversals.
3. Finding Connected Components: In undirected graphs.
4. Networking: Used in finding the shortest route between devices.

Advantages of BFS:
1. Guarantees the shortest path in unweighted graphs.
2. Explores all neighbors systematically.
Disadvantages of BFS:
1. Requires more memory than DFS (stores all neighbors in the queue).
2. Can be slower in very large graphs.
Q12. Write a FindMin and FindMax function in Tree?

Ans –

These functions assume a binary search tree — where the minimum key is the leftmost
node and the maximum key is the rightmost node — and the existence of a TreeNode
structure defined as:

#include <stdio.h>

struct TreeNode {
    int value;
    struct TreeNode* left;
    struct TreeNode* right;
};

FindMin Function

int findMin(struct TreeNode* root) {
    if (root == NULL) {
        printf("The tree is empty.\n");
        return -1;
    }
    /* In a BST, the minimum is the leftmost node. */
    struct TreeNode* current = root;
    while (current->left != NULL) {
        current = current->left;
    }
    return current->value;
}

FindMax Function

int findMax(struct TreeNode* root) {
    if (root == NULL) {
        printf("The tree is empty.\n");
        return -1;
    }
    /* In a BST, the maximum is the rightmost node. */
    struct TreeNode* current = root;
    while (current->right != NULL) {
        current = current->right;
    }
    return current->value;
}
Q13. Create of a BST of given stream 7,2,9,0,5,6,8,1

Ans – BST of given stream 7,2,9,0,5,6,8,1:

7
/ \
2 9
/ \ /
0 5 8
\ \
1 6
