Algorithm U1 Answer Key
2 Marks
1. **What is an Algorithm? What are the criteria for writing an algorithm?**
Ans: An algorithm is a step-by-step set of instructions for solving a particular
problem or performing a specific task. It's a precise and unambiguous sequence of
operations that, when executed, produces a desired result.
Criteria for writing an algorithm:
- **Input**: It should specify what data is needed as input.
- **Output**: It should define the expected result or output.
- **Finiteness**: The algorithm must terminate after a finite number of steps.
- **Definiteness**: Each step must be precisely defined and unambiguous.
- **Effectiveness**: Every step should be executable and achieve a specific task.
- **Generality**: It should be applicable to a range of input values or instances.
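For illustration, here is a short Python algorithm that satisfies all of these criteria (the function and its input are chosen purely as an example):
```python
def find_largest(numbers):
    # Input: a non-empty list of numbers.  Output: the largest element.
    largest = numbers[0]
    for x in numbers[1:]:        # finiteness: exactly one pass over the input
        if x > largest:          # definiteness: each step is unambiguous
            largest = x          # effectiveness: every step is executable
    return largest               # generality: works for any non-empty list of numbers
```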
3. Explain the consecutive integer checking method of finding the GCD of two numbers.
Ans: Consecutive Integer Checking Algorithm:
The Consecutive Integer Checking Algorithm is a simple, but inefficient, method to
find the GCD of two numbers. Here’s how it works:
1. Start with the smaller of the two numbers.
2. Check if it divides both numbers. If it does, it’s the GCD.
3. If not, subtract one and try again.
4. Repeat until you find a number that divides both numbers evenly.
For example, let’s find the GCD of 48 and 18:
• Start with 18 (the smaller number).
• Check if 18 divides both 48 and 18. It doesn’t (48 / 18 leaves a
remainder), so subtract 1 to get 17.
• 17 doesn’t divide both numbers, so subtract 1 to get 16.
• Continue this process until you reach 6.
• 6 divides both 48 and 18, so the GCD of 48 and 18 is 6.
This method is straightforward, but it can be slow for large numbers because it
potentially requires checking all integers down to 1. Euclid’s algorithm, on the
other hand, is much faster and more efficient for large numbers.
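A minimal Python sketch of this method (assuming both inputs are positive integers):
```python
def gcd_consecutive(m, n):
    # Start with the smaller of the two numbers.
    t = min(m, n)
    # Decrease t until it divides both m and n evenly.
    while m % t != 0 or n % t != 0:
        t -= 1
    return t
```
For example, `gcd_consecutive(48, 18)` returns 6, matching the trace above.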
6. Explain the following:
a. Graph problem
b. Combinatorial problems
c. Geometrical problems.
Ans: a. Graph Problem:
A graph problem involves the study and manipulation of graphs, which are
mathematical structures representing relationships between pairs of objects. Graph
problems can include finding paths, determining connectivity, and analyzing network
structures.
b. Combinatorial Problems:
Combinatorial problems deal with counting, arranging, and selecting objects. These
problems often involve combinations, permutations, and arrangements of elements.
Examples include the traveling salesman problem and the assignment problem.
c. Geometrical Problems:
Geometrical problems relate to spatial and geometric relationships. These problems
often involve finding measurements, angles, areas, or volumes in a geometric
context. Examples include finding the area of a polygon or determining the distance
between two points in space.
11. Write an algorithm to find the sum of two matrices and also calculate its time complexity.
Ans:
```python
def add_matrices(A, B):
    # A and B are m x n matrices (lists of lists) of the same size.
    m, n = len(A), len(A[0])
    # Each element of the result is the sum of the corresponding elements.
    C = [[A[i][j] + B[i][j] for j in range(n)] for i in range(m)]
    return C
```
Every one of the m x n elements is visited exactly once, so the time complexity is O(m x n), which is O(n^2) for square n x n matrices.
12. Write an algorithm to find the sum of n numbers and also calculate its space and time complexity.
Ans: Here is an algorithm in Python to find the sum of the first n numbers:
```python
def find_sum(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
```
In this algorithm, the time complexity is O(n) because there is a loop that runs n times. The space complexity is O(1) because only a constant amount of space is used (for the variables total and i).
14. Explain worst case, best case and average case with an example.
Ans: - **Worst case**: This refers to the most complex scenario for an algorithm. Worst-case analysis considers the scenario in which the algorithm takes the longest possible time to complete its task. This is typically the scenario in which the input data is in a specific order or configuration that causes the algorithm to perform the maximum number of operations.
For example, for a linear search algorithm, the worst case occurs when the element being searched for is at the end of the list (or not in the list at all), resulting in a time complexity of O(n).
- **Best case**: This refers to the input for which the algorithm performs the fewest operations. For linear search, the best case occurs when the element being searched for is the first element of the list, resulting in a time complexity of O(1).
- **Average case**: This refers to the expected behaviour over all possible inputs, assuming each is equally likely. For linear search, if the element is equally likely to be at any position, about n/2 comparisons are needed on average, which is still O(n).
15. Write an algorithm to perform sequential search and also calculate its worst-case, best-case and average-case complexity.
Ans: Sequential search is a simple algorithm for finding a specific element in a list of elements. It works by iterating through the list and comparing each element to the element being searched for until the element is found or the end of the list is reached.
Here is an algorithm in Python to perform a sequential search:
```python
def sequential_search(lst, item):
    for i in range(len(lst)):
        if lst[i] == item:
            return i
    return -1
```
In this algorithm:
- The **worst-case** scenario is when the item is not in the list or is the last element in the list. The function has to iterate through the entire list, resulting in a time complexity of O(n).
- The **best-case** scenario is when the item is the first element in the list. The function finds the item immediately, resulting in a time complexity of O(1).
- The **average-case** scenario, assuming the item is equally likely to be at any position in the list, requires about n/2 comparisons, which is still O(n).
- The space complexity of this algorithm is O(1) because it uses a constant amount of extra space.
16. Explain Big O notation with an example.
Ans: In computer science, Big O notation is a fundamental tool used to express the time complexity of algorithms. Big O notation allows programmers to classify algorithms depending on how their running time or space requirements grow as the input size grows.
Examples:
- Runtime complexity of linear search – O(n)
- Runtime complexity of binary search – O(log n)
- Runtime complexity of bubble sort, selection sort, insertion sort and (in the worst case) bucket sort – O(n^2)
- Runtime complexity of exponential algorithms such as Tower of Hanoi – O(2^n)
- Runtime complexity of heap sort and merge sort – O(n log n)
Big O notation gives the upper-bound runtime or worst-case complexity of an algorithm. It analyzes and classifies algorithms depending on their run time or space requirements.
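As a worked example of what the notation means: a function f(n) is in O(g(n)) if there exist constants c > 0 and n0 such that f(n) ≤ c · g(n) for all n ≥ n0. For instance, for \(f(n) = 3n^2 + 5n + 2\):
\(3n^2 + 5n + 2 \le 3n^2 + 5n^2 + 2n^2 = 10n^2\) for all \(n \ge 1\),
so with c = 10 and n0 = 1 we conclude that \(f(n) \in O(n^2)\).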
19. Explain the asymptotic notations Big O, Big Ω and Big Θ that are used to compare the order of growth of algorithms, with examples.
Ans: 1. **Big O Notation (O-notation)**: Big O notation is used to describe an asymptotic upper bound, which provides an upper limit on the time complexity of an algorithm. It is typically used to express the worst-case scenario in terms of execution time. For example, if we have a simple search algorithm that checks each element of a list to find a target value, the worst-case scenario is that the target is at the end of the list or not in the list at all. In this case, the algorithm runs in O(n) time, where n is the number of elements in the list.
2. **Big Omega Notation (Ω-notation)**: Big Omega notation provides an asymptotic lower bound, which provides a lower limit on the time complexity of an algorithm. It is typically used to express the best-case scenario. For example, in the best-case scenario for a simple search algorithm (where the target value is the first element in the list), the algorithm runs in Ω(1) time, meaning it will take at least constant time.
3. **Big Theta Notation (Θ-notation)**: Big Theta notation provides a tight bound on the time complexity of an algorithm. It is used when the algorithm's upper and lower bounds are the same. For example, an algorithm that always takes a constant amount of time, regardless of the size of the input, is said to run in Θ(1) time. This means that the algorithm's time complexity is exactly constant, neither growing with the size of the input nor depending on the specific values of the input.
These notations are used to compare the order of growth of algorithms as the input size increases. They provide a way to analyze an algorithm's running time by describing its behavior as its input size grows.
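As a small worked example of a tight (Θ) bound, consider \(f(n) = \frac{1}{2}n(n-1)\), the number of comparisons made by selection sort:
\(\frac{1}{4}n^2 \le \frac{1}{2}n(n-1) \le \frac{1}{2}n^2\) for all \(n \ge 2\),
so \(f(n) \in \Theta(n^2)\): the same function \(n^2\) works, up to constant factors, as both an upper and a lower bound.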
19. Write an algorithm to multiply two square matrices and also perform mathematical analysis.
Ans:
```python
def matrix_multiply(A, B):
    n = len(A)
    C = [[0 for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C
```
This algorithm assumes that `A` and `B` are square matrices of the same size. The time complexity of this algorithm is O(n^3), where n is the number of rows (or columns) in each matrix. This is because there are three nested loops, each of which runs `n` times, so the basic operation (one multiplication and one addition) is executed n · n · n = n^3 times.
22. Write a non-recursive algorithm to count the number of bits in a number, and also perform mathematical analysis.
Ans: Python code:
```python
def count_bits(n):
    count = 0
    while n:
        count += n & 1   # add the lowest bit to the count
        n >>= 1          # drop the lowest bit
    return count
```
This algorithm uses bitwise operations to count the ‘1’ bits of `n`. The time complexity of this algorithm is O(log n), where n is the number being examined, because the loop runs once per binary digit of `n`, and the number of binary digits in `n` is proportional to the logarithm (base 2) of `n`.
23. List the steps for analyzing the time efficiency of a recursive algorithm.
Ans: Steps for Analyzing the Time Efficiency of a Recursive Algorithm:
1. Define the Recurrence Relation: Identify the recursive relationship that describes how
the problem is decomposed into smaller subproblems. This recurrence relation is
often a key to understanding the algorithm's time complexity.
2. Write Down the Base Case(s): Identify the base case(s) of the recursion, which
represent the smallest subproblems that can be directly solved. Base cases are crucial
for determining when the recursion stops.
3. Determine the Size of Subproblems: Analyze how the size of the problem decreases
with each recursive call. Define the size of the subproblems in terms of the input size.
4. Express the Recurrence Relation: Write down the recurrence relation explicitly,
expressing the time complexity of the algorithm in terms of the size of the input and
the time complexity of smaller subproblems.
5. Solve or Simplify the Recurrence: Solve the recurrence relation or simplify it to
obtain a closed-form expression. This step involves expressing the time complexity of
the algorithm as a function of the input size without recursion.
6. Determine the Dominant Term: Identify the dominant term in the closed-form
expression. The dominant term indicates the growth rate of the time complexity and is
often the most significant factor in determining the overall efficiency.
7. Analyze the Time Complexity: Express the time complexity of the recursive
algorithm using Big O notation. The result should provide a clear understanding of
how the algorithm's efficiency scales with the input size.
8. Verify with Examples: Validate the derived time complexity by running the algorithm
on different inputs and comparing the observed behavior with the theoretical analysis.
9. Consider Best, Worst, and Average Cases: Analyze the best-case, worst-case, and
average-case time complexities separately, if applicable. Different input scenarios may
lead to distinct time complexities.
10. Optimize if Necessary: If the derived time complexity is not acceptable, consider
optimizing the algorithm or exploring alternative algorithmic approaches to achieve
the desired efficiency.
By following these steps, you can systematically analyze the time efficiency of a
recursive algorithm, gaining insights into its performance characteristics and enabling
informed decisions about its suitability for specific use cases.
23. Explain with an example the mathematical analysis of a recursive algorithm.
Ans: Let’s consider the recursive algorithm for factorial calculation:
```python
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)
```
The base case is n == 0, where the function returns 1. The recurrence relation can be defined as T(n) = T(n-1) + c, where c is a constant representing the time to perform the multiplication. This relation says that the time to compute factorial(n) is the time to compute factorial(n-1) plus the time for the multiplication operation.
Solving this recurrence relation gives T(n) = cn plus a constant, which means the time complexity of the algorithm is O(n).
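The way the recurrence is solved can be made explicit with backward substitution:
\(T(n) = T(n-1) + c = T(n-2) + 2c = \dots = T(0) + nc\),
and since T(0) is a constant, \(T(n) \in O(n)\).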
24. Write an algorithm to find the factorial of a number using recursion and also
perform mathematical analysis.
Ans: Recursive Algorithm for Factorial Calculation:
```python
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)
```
Mathematical Analysis:
The factorial of a non-negative integer n is the product of all positive integers from 1
to n. Mathematically, it is represented as:
n! = n * (n - 1) * (n - 2) * ... * 1
Using the recursive definition of factorial, we can derive the following recurrence
relation:
n! = n * (n - 1)!
This recurrence relation forms the basis of the recursive algorithm for calculating
factorial. The base case of the recursion is n = 0, for which factorial(0) = 1.
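To complete the analysis in the same style as the previous answer, let M(n) denote the number of multiplications performed:
\(M(n) = M(n-1) + 1\) for \(n > 0\), with \(M(0) = 0\),
which solves to \(M(n) = n\). Hence the algorithm runs in O(n) time, and because each recursive call adds one frame to the call stack, it also uses O(n) space.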
25. Write an algorithm to perform Towers of Hanoi using recursion and also perform mathematical analysis.
Ans: Towers of Hanoi Algorithm Using Recursion:
Here is a Python algorithm to solve the Towers of Hanoi problem using recursion:
```python
def hanoi(n, source, target, auxiliary):
    if n > 0:
        # Move n - 1 disks from source to auxiliary, so they are out of the way
        hanoi(n - 1, source, auxiliary, target)
        # Move the largest remaining disk from source to target
        print(f"Move disk {n} from {source} to {target}")
        # Move the n - 1 disks from auxiliary onto target
        hanoi(n - 1, auxiliary, target, source)
```
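A standard mathematical analysis of this algorithm: let M(n) denote the number of disk moves. Each call makes two recursive calls on n - 1 disks plus one move of the largest disk, so
\(M(n) = 2M(n-1) + 1\) for \(n > 1\), with \(M(1) = 1\).
Backward substitution gives \(M(n) = 2^n - 1\), so the time complexity is O(2^n) and the recursion depth (and hence the space used by the call stack) is O(n).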
26. State the recursive algorithm to count the bits of a decimal number in its
binary representation. Give its mathematical analysis.
Ans: Counting the Bits of a Decimal Number in Its Binary Representation:
Here’s a Python algorithm to count the bits of a decimal number in its binary
representation using recursion:
```python
def count_bits(n):
    if n == 0:
        return 0
    else:
        return 1 + count_bits(n & (n - 1))
```
In this algorithm, the base case is when n == 0, and it returns 0. For other
cases, it makes a recursive call with n & (n - 1), which has the effect of ‘turning off’
the rightmost ‘1’ bit in n, and adds 1 to the count.
The time complexity of this algorithm is O(log n), because each recursive call removes one ‘1’ bit from n, and n has at most ⌊log₂ n⌋ + 1 bits in its binary representation. The space complexity is also O(log n), because in the worst case that many recursive calls are active at once, leaving that many activation records on the call stack.
27. Consider the following algorithm.
GUESS[A[][]]
    for i <- 0 to n-1
        for j <- 0 to i
            A[i][j] <- 0
i) What does the algorithm compute?
ii) What is its basic operation?
iii) What is the efficiency of this algorithm?
Ans: i) The algorithm `GUESS[A[][]]` initializes the lower triangle of a square matrix
`A` (including the diagonal) with zeros. The lower triangle of a matrix includes all
the elements `A[i][j]` where `i >= j`.
ii) The basic operation of this algorithm is the assignment operation `A[i][j] <- 0`.
iii) The efficiency of this algorithm can be analyzed as follows:
The outer loop runs `n` times, and for each iteration `i` of the outer loop, the inner
loop runs `i+1` times. Therefore, the total number of operations is `1 + 2 + 3 + ... +
n = n*(n+1)/2`.
In terms of time complexity, we ignore lower order terms and multiplicative constants
for large `n`, so this simplifies to O(n^2). This means that the time complexity of
the algorithm is quadratic in the size of the input `n`. The space complexity of the
algorithm is O(1), which means the amount of space used does not change with
the size of the input, because it only uses a constant amount of space to store the
indices and the matrix remains the same size.
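For concreteness, an equivalent Python version of this pseudocode might look like:
```python
def guess(A):
    # Set every element of the lower triangle (including the diagonal) to zero.
    n = len(A)
    for i in range(n):
        for j in range(i + 1):
            A[i][j] = 0
```
Calling it on a 3 x 3 matrix performs 1 + 2 + 3 = 6 assignments, matching the n*(n+1)/2 count above.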
Solve the following recurrence relations:
Ans: a) The recurrence relation is given by: `x(n) = x(n - 1) + 5` for `n > 1`, `x(1) = 0`. To find `x(n)`, we repeatedly substitute the recurrence into itself. For example, `x(2) = x(1) + 5 = 0 + 5 = 5` and `x(3) = x(2) + 5 = 5 + 5 = 10`. From this pattern, the general solution is `x(n) = 5(n - 1)`.
b) The recurrence relation is given by: `x(n) = 3x(n - 1)` for `n > 1`, `x(1) = 4`. Similarly, we find the values of `x(n)` iteratively. For example, `x(2) = 3 * x(1) = 3 * 4 = 12` and `x(3) = 3 * x(2) = 3 * 12 = 36`. So the general solution is `x(n) = 4 * 3^(n - 1)`.
c) The recurrence relation is given by: `x(n) = x(n - 1) + n` for `n > 0`, `x(0) = 0`. Here, each term is the sum of the previous term and `n`. For example, `x(1) = x(0) + 1 = 0 + 1 = 1` and `x(2) = x(1) + 2 = 1 + 2 = 3`. From this, the general solution is `x(n) = n(n + 1) / 2`.
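These closed forms can also be obtained by backward substitution; for example, for part (a):
\(x(n) = x(n-1) + 5 = x(n-2) + 2 \cdot 5 = \dots = x(1) + 5(n-1) = 5(n-1)\).
Applying the same technique to (b) and (c) gives \(x(n) = 4 \cdot 3^{n-1}\) and \(x(n) = n(n+1)/2\), respectively.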
28. Find the time complexity of the following algorithm:
Fibonacci(n)
{
    if n <= 1 then
        output 'n'
    else
        f2 = 0;
        f1 = 1;
        for i = 2 to n do
        {
            f = f1 + f2;
            f2 = f1;
            f1 = f;
        }
        output 'f'
}
**Time Complexity:**
- The loop iterates \(n-1\) times, resulting in a linear dependence on \(n\).
- The constant time operations inside the loop do not affect the overall time
complexity.
Therefore, the time complexity of this algorithm is \(O(n)\), where \(n\) is the input to
the Fibonacci function.
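For reference, a direct Python rendering of this pseudocode might look like:
```python
def fibonacci(n):
    # Returns the nth Fibonacci number, with F(0) = 0 and F(1) = 1.
    if n <= 1:
        return n
    f2, f1 = 0, 1
    for _ in range(2, n + 1):
        f2, f1 = f1, f1 + f2   # slide the pair (F(i-2), F(i-1)) forward
    return f1
```
It performs the same n - 1 loop iterations, so its running time is likewise O(n), with O(1) extra space.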