Answer Format (CA2-DAA)


Problem Statement: Explain different design techniques.

Design and analysis of algorithms involve creating efficient and effective algorithms to solve computational problems. Various design techniques are employed to develop algorithms, each with its own advantages and applications. Here are some common design techniques:

Brute force:
Idea: Exhaustively try all possible solutions and select the one with the best outcome.
Examples: Simple searching algorithms like linear search, and brute-force string matching.
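
As a minimal sketch of the brute-force idea (the array contents and the helper name linearSearch are illustrative), linear search simply checks every element in turn:

#include <stdio.h>

/* Brute force: check every index until the key is found. */
int linearSearch(int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;   /* found at index i */
    return -1;          /* key not present  */
}

int main()
{
    int a[] = { 7, 3, 9, 1, 5 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Index of 9: %d\n", linearSearch(a, n, 9));
    return 0;
}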

Divide and Conquer:
Idea: It is a top-down approach. Algorithms that follow the divide and conquer technique involve three steps:
• Divide the original problem into a set of subproblems.
• Solve every subproblem individually, recursively.
• Combine the solutions of the subproblems (top level) into a solution of the whole original problem.
Examples: Merge Sort and Quick Sort are sorting algorithms that use divide and conquer; a minimal sketch on a simpler problem follows.
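
The sketch below applies the three steps to finding the maximum of an array (the function name findMax is illustrative): divide the range in half, conquer each half recursively, and combine by taking the larger result.

#include <stdio.h>

/* Divide and conquer: maximum of a[lo..hi]. */
int findMax(int a[], int lo, int hi)
{
    if (lo == hi)                            /* base case: one element */
        return a[lo];
    int mid = (lo + hi) / 2;                 /* divide                 */
    int left  = findMax(a, lo, mid);         /* conquer left half      */
    int right = findMax(a, mid + 1, hi);     /* conquer right half     */
    return (left > right) ? left : right;    /* combine                */
}

int main()
{
    int a[] = { 24, 9, 29, 14, 19, 27 };
    printf("Maximum: %d\n", findMax(a, 0, 5));
    return 0;
}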

Dynamic Programming:
Idea: Break the problem into overlapping subproblems and solve each subproblem only
once, storing the solutions in a table to avoid redundant computations.
Examples: The classic example is the computation of Fibonacci numbers using dynamic
programming.
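
A minimal sketch of the Fibonacci example: the table memo[] stores each subproblem's answer so it is computed only once (the array size and names are illustrative):

#include <stdio.h>

#define MAXN 50
long long memo[MAXN];   /* memo[i] = F(i); 0 means "not yet computed" */

/* Dynamic programming: each F(k) is computed once and reused. */
long long fib(int n)
{
    if (n <= 1)
        return n;
    if (memo[n] != 0)   /* subproblem already solved: reuse the answer */
        return memo[n];
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main()
{
    printf("F(40) = %lld\n", fib(40));   /* prints 102334155 */
    return 0;
}

Without the table, the same call tree would recompute each F(k) exponentially many times; with it, each value is computed once, giving O(n) time.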

Greedy Algorithms:
Idea: The greedy method is used to solve optimization problems. An optimization problem is one in which we are given a set of input values and an objective that is required to be either maximized or minimized, subject to some constraints or conditions.
• A greedy algorithm always makes the choice (the greedy criterion) that looks best at the moment, in order to optimize the given objective.
• A greedy algorithm does not always guarantee the optimal solution; however, it generally produces a solution that is very close in value to the optimal one.
Examples: Dijkstra's algorithm for finding the shortest path in a graph, and Huffman coding for data compression.
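
A simpler greedy sketch than Dijkstra or Huffman is making change: always take the largest coin that still fits. This is optimal for canonical denominations like {25, 10, 5, 1}, though, as noted above, greedy choices do not guarantee optimality for every coin system:

#include <stdio.h>

/* Greedy: repeatedly take the largest coin that does not exceed
 * the remaining amount. Coins must be sorted in decreasing order. */
void makeChange(int amount, const int coins[], int ncoins)
{
    for (int i = 0; i < ncoins; i++) {
        int count = amount / coins[i];   /* greedy choice */
        amount %= coins[i];
        if (count > 0)
            printf("%d x %d\n", count, coins[i]);
    }
}

int main()
{
    int coins[] = { 25, 10, 5, 1 };
    makeChange(67, coins, 4);   /* prints 2 x 25, 1 x 10, 1 x 5, 2 x 1 */
    return 0;
}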

Backtracking:
Idea: Systematically explore all possible solutions by trying out different
choices and backtracking when a solution is not viable.
Examples: N-Queens problem and Sudoku solving.
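
A compact backtracking sketch for the N-Queens example, assuming an 8x8 board (the names col, safe, and solve are illustrative): place one queen per row, and when no column in the current row is safe, abandon the current choice and try the next one.

#include <stdio.h>

#define N 8
int col[N];   /* col[r] = column of the queen placed in row r */

/* Is it safe to put a queen at (row, c)? */
int safe(int row, int c)
{
    for (int r = 0; r < row; r++)
        if (col[r] == c || col[r] - r == c - row || col[r] + r == c + row)
            return 0;   /* same column or same diagonal */
    return 1;
}

/* Backtracking: try every column in this row; undo and retry on failure. */
int solve(int row)
{
    if (row == N)
        return 1;                /* all queens placed */
    for (int c = 0; c < N; c++) {
        if (safe(row, c)) {
            col[row] = c;        /* make a choice */
            if (solve(row + 1))
                return 1;
            /* choice led to a dead end: the loop continues = backtrack */
        }
    }
    return 0;
}

int main()
{
    if (solve(0))
        for (int r = 0; r < N; r++)
            printf("row %d -> column %d\n", r, col[r]);
    return 0;
}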

Randomized Algorithms:
Idea: Introduce randomness into the algorithm to achieve a probabilistic guarantee on the
solution's correctness and efficiency.
Examples: Randomized Quicksort and algorithms for approximating solutions to NP-
hard problems.
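
A minimal sketch of the Randomized Quicksort example: the pivot index is drawn with the standard rand(), so no fixed input can consistently trigger the O(N^2) worst case. The partition step matches the scheme used later in this answer.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Randomized Quicksort: pick the pivot at a random index. */
void rquick(int a[], int start, int end)
{
    if (start >= end)
        return;
    int r = start + rand() % (end - start + 1);  /* random pivot index   */
    int t = a[r]; a[r] = a[end]; a[end] = t;     /* move pivot to the end */

    int pivot = a[end], i = start - 1;           /* usual partition step  */
    for (int j = start; j < end; j++)
        if (a[j] < pivot) {
            i++;
            t = a[i]; a[i] = a[j]; a[j] = t;
        }
    t = a[i + 1]; a[i + 1] = a[end]; a[end] = t; /* pivot to index i + 1  */

    rquick(a, start, i);      /* elements smaller than the pivot */
    rquick(a, i + 2, end);    /* elements larger than the pivot  */
}

int main()
{
    srand((unsigned) time(NULL));
    int a[] = { 24, 9, 29, 14, 19, 27 };
    rquick(a, 0, 5);
    for (int i = 0; i < 6; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}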

Parallel Algorithms:
Idea: Design algorithms to be executed concurrently on multiple processors or threads to
improve efficiency.
Examples: Parallel matrix multiplication and parallel sorting algorithms.
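
A minimal parallel sketch, assuming an OpenMP-capable compiler (e.g. gcc with -fopenmp): the iterations of a dot-product loop are split across threads, and the reduction clause combines the per-thread partial sums.

#include <stdio.h>
#include <omp.h>

#define N 1000000

int main()
{
    static double x[N], y[N];
    for (int i = 0; i < N; i++) {
        x[i] = 1.0;
        y[i] = 2.0;
    }

    double sum = 0.0;
    /* Each thread handles a chunk of the iterations; OpenMP combines
     * the per-thread partial sums into the shared variable sum. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += x[i] * y[i];

    printf("Dot product: %f\n", sum);   /* 2000000.000000 */
    return 0;
}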

Approximation Algorithms:
Idea: Provide fast solutions that may not be optimal but are close to the optimal solution.
Examples: The Traveling Salesman Problem has approximation algorithms that find
solutions close to the optimal tour.
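
A minimal approximation sketch (using vertex cover rather than TSP, since it fits in a few lines; the graph is illustrative): repeatedly pick an edge not yet covered and add both endpoints. The resulting cover is guaranteed to be at most twice the optimal size.

#include <stdio.h>

#define NV 6
#define NE 5

int main()
{
    /* Edge list of a small illustrative graph. */
    int edges[NE][2] = { {0,1}, {0,2}, {1,3}, {3,4}, {4,5} };
    int inCover[NV] = { 0 };

    /* 2-approximation: for each edge not yet covered,
     * take BOTH endpoints into the cover. */
    for (int e = 0; e < NE; e++) {
        int u = edges[e][0], v = edges[e][1];
        if (!inCover[u] && !inCover[v]) {
            inCover[u] = 1;
            inCover[v] = 1;
        }
    }

    printf("Vertex cover:");
    for (int v = 0; v < NV; v++)
        if (inCover[v])
            printf(" %d", v);
    printf("\n");   /* prints: Vertex cover: 0 1 3 4 */
    return 0;
}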

Reduction:
Idea: Transform one problem into another problem for which we already have an algorithm.
Examples: Many problems are solved by reducing them to sorting or to known graph problems; NP-hardness results are likewise established through reductions from other known hard problems.
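
A minimal reduction sketch: deciding whether an array contains a duplicate reduces to sorting (here via the standard library qsort), after which any duplicate must sit next to its twin.

#include <stdio.h>
#include <stdlib.h>

/* Comparator required by qsort. */
int cmp(const void *p, const void *q)
{
    int a = *(const int *) p, b = *(const int *) q;
    return (a > b) - (a < b);
}

/* Reduction: duplicate detection -> sorting + linear scan. */
int hasDuplicate(int a[], int n)
{
    qsort(a, n, sizeof(int), cmp);   /* the problem we already know how to solve */
    for (int i = 1; i < n; i++)
        if (a[i] == a[i - 1])
            return 1;
    return 0;
}

int main()
{
    int a[] = { 24, 9, 29, 14, 9, 27 };
    printf("Duplicate present? %s\n", hasDuplicate(a, 6) ? "yes" : "no");
    return 0;
}
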
Amortized Analysis:
Idea: Analyze the average performance of a sequence of operations, even if a single
operation may be expensive.
Examples: Dynamic arrays use amortized analysis to show that a sequence of n append
operations takes O(n) time.
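
A minimal sketch of the dynamic-array example (the struct and names are illustrative; error checking is omitted for brevity): the capacity doubles whenever the array is full, so across n appends the total copying work is O(n), i.e. O(1) amortized per append.

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int *data;
    int size;       /* elements stored */
    int capacity;   /* slots allocated */
} DynArray;

/* Amortized O(1): the occasional O(size) doubling is paid for
 * by the many cheap appends that preceded it. */
void append(DynArray *da, int value)
{
    if (da->size == da->capacity) {
        da->capacity = da->capacity ? 2 * da->capacity : 1;
        da->data = realloc(da->data, da->capacity * sizeof(int));
    }
    da->data[da->size++] = value;
}

int main()
{
    DynArray da = { NULL, 0, 0 };
    for (int i = 0; i < 10; i++)
        append(&da, i);
    printf("size = %d, capacity = %d\n", da.size, da.capacity); /* 10, 16 */
    free(da.data);
    return 0;
}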

Problem Statement: Write an algorithm for quick sort. Explain it with an example and show the analysis of the algorithm.
Theory:
Quicksort is a widely used sorting algorithm that makes O(n log n) comparisons in the average case when sorting an array of n elements. It is fast and highly efficient in practice. The algorithm follows the divide and conquer approach: breaking the problem down into subproblems, solving the subproblems, and combining the results to solve the original problem.

Divide: First pick a pivot element. Then partition (rearrange) the array into two sub-arrays such that each element in the left sub-array is less than or equal to the pivot element and each element in the right sub-array is larger than the pivot element.

Conquer: Recursively sort the two sub-arrays with Quicksort.

Combine: Nothing remains to be done, because the sub-arrays are sorted in place, so the whole array is already sorted.


Algorithm:
quickSort(array, leftmostIndex, rightmostIndex)
    if (leftmostIndex < rightmostIndex)
        pivotIndex <- partition(array, leftmostIndex, rightmostIndex)
        quickSort(array, leftmostIndex, pivotIndex - 1)
        quickSort(array, pivotIndex + 1, rightmostIndex)

partition(array, leftmostIndex, rightmostIndex)
    pivotElement <- array[rightmostIndex]
    storeIndex <- leftmostIndex - 1
    for i <- leftmostIndex to rightmostIndex - 1
        if array[i] < pivotElement
            storeIndex <- storeIndex + 1
            swap array[storeIndex] and array[i]
    swap array[storeIndex + 1] and array[rightmostIndex]
    return storeIndex + 1
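
Example:
Consider the array [24, 9, 29, 14, 19, 27] used in the program below. The pivot is the last element, 27.
- As i walks left to right, the elements 24, 9, 14, and 19 are each smaller than 27, so they are swapped into the growing "less than pivot" region, giving [24, 9, 14, 19, 29, 27].
- Finally the pivot is swapped with 29, giving [24, 9, 14, 19, 27, 29], and partition returns index 4: the pivot is now in its final sorted position.
- Quicksort then recurses on the left sub-array [24, 9, 14, 19] and the right sub-array [29]; repeating the process yields [9, 14, 19, 24, 27, 29].
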
Complexity Analysis:
Time Complexity Analysis of Quick Sort:
Let us consider the following terminology:

T(K): time complexity of quicksort on K elements.

P(K): time complexity of finding the position of the pivot among K elements. Since partition scans the whole range once, P(K) = constant * K; this is the "N * constant" term in the recurrences below.

Best Case:
The best case occurs when the pivot is always the median, so each partition splits the array into two halves. Then
T(N) = 2 * T(N / 2) + N * constant

Now T(N / 2) is itself 2 * T(N / 4) + (N / 2) * constant. So,

T(N) = 2 * (2 * T(N / 4) + (N / 2) * constant) + N * constant
     = 4 * T(N / 4) + 2 * constant * N

Continuing in this way, after k levels we can say that

T(N) = 2^k * T(N / 2^k) + k * constant * N

The recursion bottoms out when 2^k = N, i.e.
k = log2(N)
So T(N) = N * T(1) + constant * N * log2(N). Therefore, the time complexity is O(N * logN).

Worst Case:
The worst case occurs when the array is always divided into two maximally unbalanced parts: one part with N - 1 elements and the other with 0 elements (for example, when the array is already sorted and the last element is chosen as the pivot). So,

T(N) = T(N - 1) + N * constant
     = T(N - 2) + (N - 1) * constant + N * constant = T(N - 2) + 2 * N * constant - constant
     = T(N - 3) + 3 * N * constant - 2 * constant - constant
     ...
     = T(N - k) + k * N * constant - (k - 1) * constant - ... - 2 * constant - constant
     = T(N - k) + k * N * constant - constant * (k * (k - 1)) / 2

If we put k = N in the above equation, then

T(N) = T(0) + N * N * constant - constant * (N * (N - 1)) / 2

Taking T(0) = 0 and treating the constant as 1, this is
     = N^2 - N * (N - 1) / 2
     = N^2 / 2 + N / 2

So the worst case complexity is O(N^2).


Average Case:
On average the pivot lands at a uniformly random position, so each of the two sub-arrays is equally likely to have any size from 0 to N - 1:

T(N) = N * constant + (2 / N) * [T(0) + T(1) + ... + T(N - 1)]

Multiplying both sides by N gives

N * T(N) = N^2 * constant + 2 * [T(0) + T(1) + ... + T(N - 1)]

and, replacing N by N - 1,

(N - 1) * T(N - 1) = (N - 1)^2 * constant + 2 * [T(0) + T(1) + ... + T(N - 2)]

If we subtract the above two equations, we get

N * T(N) - (N - 1) * T(N - 1) = 2 * T(N - 1) + N^2 * constant - (N - 1)^2 * constant

N * T(N) = (N + 1) * T(N - 1) + 2 * N * constant - constant
         ≈ (N + 1) * T(N - 1) + 2 * N * constant

Divide both sides by N * (N + 1) and we get

T(N) / (N + 1) = T(N - 1) / N + 2 * constant / (N + 1)   — (i)

If we put N = N - 1, equation (i) becomes

T(N - 1) / N = T(N - 2) / (N - 1) + 2 * constant / N

Therefore, equation (i) can be written as

T(N) / (N + 1) = T(N - 2) / (N - 1) + 2 * constant / (N + 1) + 2 * constant / N

Similarly, we can substitute T(N - 2), T(N - 3), and so on using equation (i). Unrolling all the way down, it becomes

T(N) / (N + 1) = T(1) / 2 + 2 * constant * [1/3 + 1/4 + ... + 1/N + 1/(N + 1)]

The bracketed harmonic sum is approximately log(N), so

T(N) ≈ 2 * constant * (N + 1) * log(N)

If we ignore the constant it becomes

T(N) = (N + 1) * log(N)

So the average case time complexity is O(N * logN).


Problem Solving (Program):
#include <stdio.h>

/* Partition a[start..end] around the pivot a[end] (Lomuto scheme)
   and return the pivot's final index. */
int partition(int a[], int start, int end)
{
    int pivot = a[end];
    int i = (start - 1);   /* boundary of the "less than pivot" region */

    for (int j = start; j <= end - 1; j++)
    {
        if (a[j] < pivot)   /* a[j] belongs to the left of the pivot */
        {
            i++;
            int t = a[i];
            a[i] = a[j];
            a[j] = t;
        }
    }
    int t = a[i + 1];   /* place the pivot in its final position */
    a[i + 1] = a[end];
    a[end] = t;
    return (i + 1);
}

/* Sort a[start..end] with Quicksort. */
void quick(int a[], int start, int end)
{
    if (start < end)
    {
        int p = partition(a, start, end);
        quick(a, start, p - 1);   /* elements smaller than the pivot */
        quick(a, p + 1, end);     /* elements larger than the pivot  */
    }
}

/* Print the n elements of a. */
void printArr(int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}

int main()
{
    int a[] = { 24, 9, 29, 14, 19, 27 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    quick(a, 0, n - 1);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);
    return 0;
}
Output:

Before sorting array elements are -
24 9 29 14 19 27
After sorting array elements are -
9 14 19 24 27 29
Conclusion: Using Quick Sort, the numbers in the list are sorted successfully.
