11. Algorithm Design


6 June 2022 C-DAC 1

What are the different classes of algorithms?

An algorithm is the sequence of steps and processes that must be followed to
solve a problem.

Many kinds of algorithms can be devised to solve different problems, but in
programming the following classes are the most important:
• Brute Force algorithm
• Greedy algorithm
• Recursive algorithm
• Backtracking algorithm
• Divide & Conquer algorithm
• Dynamic programming algorithm
• Randomised algorithm
Brute Force Algorithm

The simplest possible algorithm that can be devised to solve a problem is
called the brute force algorithm.

To arrive at an optimal solution, we first need at least one working solution,
which we can then try to optimise.

Every problem can be solved by the brute force approach, although generally
not with acceptable space and time complexity.

Example – searching for an element in a sorted array by checking every position.
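The search example can be made concrete. A minimal sketch in Python (illustrative only; the slides do not prescribe a language):

```python
def brute_force_search(arr, key):
    """Check every position one by one -- the simplest possible approach."""
    for i, value in enumerate(arr):
        if value == key:
            return i          # index of the first match
    return -1                 # key not present

# Even though the array is sorted, brute force ignores that fact;
# binary search would exploit the ordering.
print(brute_force_search([2, 5, 8, 12, 16], 12))  # -> 3
```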



Greedy Algorithm

At each step, a decision is made that is good at that point, without considering
the future: a local best is chosen and treated as part of the global optimum.
Two properties must hold for this to work:
• Greedy choice property: the best option can be chosen greedily at each step.
• Optimal substructure: an optimal solution can be built from optimal solutions
to its subproblems.
This algorithm is easy to devise and is often the simplest one, but making
locally best decisions does not always work as well as it sounds. Where it fails,
it is replaced by the more reliable Dynamic Programming approach.
Applications – Selection Sort, Topological Sort, Prim’s & Kruskal’s
algorithms, the Coin Change problem, the Fractional Knapsack problem, Job
Scheduling
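The Coin Change application can be sketched in a few lines of Python. Note the greedy strategy is only optimal for canonical coin systems such as the one used here; that caveat is exactly the "locally best is not globally best" point above:

```python
def greedy_coin_change(amount, denominations):
    """Repeatedly take the largest coin that still fits -- the greedy choice."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            amount -= coin
            coins.append(coin)
    return coins

print(greedy_coin_change(48, [1, 5, 10, 25]))  # -> [25, 10, 10, 1, 1, 1]
# With denominations [1, 3, 4] and amount 6, greedy gives [4, 1, 1]
# even though [3, 3] uses fewer coins -- greedy is not always optimal.
```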
Recursive Algorithm

It does not require thinking about every sub-problem specifically:

we only need to handle the base cases and the solution of the simplest sub-
problem; the rest of the complexity is handled automatically.
Recursion is a powerful tool; the runtime’s call stack keeps track of every
pending call for us.
Recursion simply means a function calling itself to solve its subproblems.

For Example: Factorial of a number
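The factorial example as a minimal Python sketch:

```python
def factorial(n):
    """factorial(n) calls itself on the simpler sub-problem n - 1."""
    if n <= 1:          # simplest case: 0! = 1! = 1
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # -> 120
```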



Backtracking Algorithm

It is an improvement on the brute force approach.

Start with one of the many available options and try to solve the problem
with that move; if it leads to a solution, output it, else backtrack and
select some other option and try again.

It is a form of recursion: when a given option cannot lead to a solution,
we backtrack to the previous decision point and proceed with the remaining
options.
Applications
• Generating all Binary strings
• N-Queens Problem
• Knapsack Problem
• Graph colouring Problem
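The first application, generating all binary strings, shows the pattern in its simplest form: extend a partial solution one choice at a time, and backtracking happens implicitly as each recursive call returns. A minimal sketch:

```python
def binary_strings(n, prefix=""):
    """Extend the current partial string one bit at a time.
    When a branch is exhausted, the recursion unwinds (backtracks)
    and the other choice is tried."""
    if len(prefix) == n:        # a complete string is one solution
        return [prefix]
    results = []
    for bit in "01":            # try each option in turn
        results.extend(binary_strings(n, prefix + bit))
    return results

print(binary_strings(2))  # -> ['00', '01', '10', '11']
```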
Divide & Conquer Algorithm

As the name suggests it has two parts:
• Divide the problem into subproblems and solve them.
• Combine the solutions of those subproblems.
This approach is used extensively because it is quite stable and optimal for
many commonly asked problems.
Applications:
• Binary Search
• Merge Sort & Quick Sort
• Median Finding
• Matrix Multiplication
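Binary Search, the first application listed, shows the pattern directly: each step divides the array in half and conquers only the half that can contain the key. A minimal sketch:

```python
def binary_search(arr, key, lo=0, hi=None):
    """Divide: pick the middle element. Conquer: recurse into one half."""
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:
        return -1                      # empty sub-problem: key absent
    mid = (lo + hi) // 2
    if arr[mid] == key:
        return mid
    if key < arr[mid]:
        return binary_search(arr, key, lo, mid - 1)   # left half
    return binary_search(arr, key, mid + 1, hi)       # right half

print(binary_search([2, 5, 8, 12, 16], 8))  # -> 2
```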



Dynamic Programming Algorithm

It simply means remembering past results and applying them to future
computations, which makes this approach quite efficient in terms of time
complexity.
This algorithm has two versions:
• Bottom-Up approach: starts from the smallest sub-problems, solving them
first and using their results to solve the larger sub-problems above them.
• Top-Down approach: starts from the original problem, recursing down to
the required sub-problems and solving each using previously solved
(memoised) sub-problems.
Applications – Fibonacci sequence, Bellman–Ford algorithm, Matrix Chain
Multiplication, Subset Sum, the Knapsack problem
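Subset Sum, one of the listed applications, illustrates the bottom-up version: a table of solved sub-problems is filled from the smallest upward. A minimal sketch:

```python
def subset_sum(nums, target):
    """reachable[s] is True if some subset of the numbers
    processed so far sums to exactly s."""
    reachable = [False] * (target + 1)
    reachable[0] = True                     # the empty subset sums to 0
    for x in nums:
        for s in range(target, x - 1, -1):  # backwards: use each number once
            if reachable[s - x]:
                reachable[s] = True
    return reachable[target]

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # -> True (4 + 5)
```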



Efficient Algorithm Writing – Tricks and Examples

Effectively using loops – Initialisation

Choose the loop counter so that it can address the smallest possible instance
of the problem. Consider the problem of summing n numbers stored in an
array. The smallest possible instance is summing zero numbers; in that case
both the sum variable and the loop counter i start at zero.
• i=0
• s=0



Efficient Algorithm Writing – Tricks and Examples

Find the iterative construct

Ask how to go from one instance to the next smallest one. Careful inspection
of the logic shows that in each iteration the variable i must be incremented
and A[i] added to the partially computed sum:
i ← i + 1
sum ← sum + A[i]
The complete code segment would be:
i ← 0
sum ← 0
while i < n do
    i ← i + 1
    sum ← sum + A[i]   // assumes the array index starts from 1
end
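The pseudocode above can be turned into runnable form. A minimal Python sketch (Python arrays are 0-indexed, so the element is added before the counter is advanced, rather than after as in the 1-indexed pseudocode):

```python
def sum_array(a):
    """Grow the solution from the smallest instance: zero elements summed."""
    i = 0
    total = 0            # smallest instance: sum of no elements is 0
    while i < len(a):
        total += a[i]    # add the next element
        i += 1           # advance to the next instance
    return total

print(sum_array([1, 2, 3, 4, 5]))  # -> 15
```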



Efficient Algorithm Writing – Tricks and Examples

Termination
The number of iterations may be known in advance, or it may depend on a
condition being satisfied. To find 5!, the loop must iterate 5 times.
The loop count is known in advance:
for (i = 1; i <= 5; i++)
{ ...
}
The loop count is not predefined — sometimes the loop is forced to terminate
from in between:
for (i = 0; i < n; i++)
{
    if (A[i] == x)
        break;
}



Remove redundant operations
Redundant code repeated inside loops has a negative impact on execution time.
Algorithm DRAW_ELLIPSE(a, b)
for x ← 0 to a do
    y ← (b/a) * sqrt(a*a – x*x)
    PlotPixel(x, round(y))
end
By relocating the constant computations outside the loop:
Algorithm EFFICIENT_DRAW_ELLIPSE(a, b)
t ← b/a
m ← a * a
for x ← 0 to a do
    y ← t * sqrt(m – x*x)
    PlotPixel(x, round(y))
end
The constant division b/a and constant multiplication a*a are moved outside
the loop, saving n multiplications and n divisions.



Referencing Arrays

Each reference to an array element is a memory access. If the array is large,
this can add up to a significant number of memory accesses, slowing down the
program.

Arrays are typically used to hold large amounts of data and are accessed
inside loops. Consider the following two ways of locating the array’s smallest
element.
Algorithm POOR_FIND_MIN
k ← 0
for i ← 1 to n do
    if A[i] < A[k] then
        k ← i
end
Min ← A[k]



Referencing Arrays

An efficient version of this algorithm would be:

Algorithm EFFICIENT_FIND_MIN
Min ← A[0]
for i ← 1 to n do
    if A[i] < Min then
        Min ← A[i]
end
POOR_FIND_MIN requires two array accesses for each comparison, whereas
EFFICIENT_FIND_MIN needs only one. Every array access requires an address
calculation. EFFICIENT_FIND_MIN compares A[i] with the scalar Min, which
cuts the array accesses roughly in half.



Late termination

Inefficiency can also be introduced by unnecessary extra tests: the program
keeps iterating after it has already produced the useful result. Consider the
problem of linear search.
Algorithm POOR_LINEAR_SEARCH(A, Key)
Index ← -1
for i ← 1 to n do
    if A[i] == Key then
        Index ← i
end
if Index == -1 then
    print “Key not found”
else
    print “Key found at index Index”
end
Late termination

An efficient version of this algorithm would be:

Algorithm EFFICIENT_LINEAR_SEARCH(A, Key)
flag ← 0
for i ← 1 to n do
    if A[i] == Key then
        flag ← 1
        break
end
if flag == 0 then
    print “Key not found”
else
    print “Key found at index i”
end

POOR_LINEAR_SEARCH iterates n times even if Key is found at the first
location, whereas EFFICIENT_LINEAR_SEARCH breaks out of the loop as soon
as Key is found, which makes it more efficient.
Analysis of an Algorithm

The efficiency of an algorithm depends on the time, storage, and other
resources required to execute it. Efficiency is measured with the help of
asymptotic notations.

An algorithm may not perform the same for different types of input, and its
performance changes as the input size grows.

The study of how an algorithm’s performance changes with the order of the
input size is called asymptotic analysis.



Asymptotic Notations

These are mathematical notations used to describe the running time of an
algorithm as the input tends towards a particular (usually large) value.
For example, in bubble sort, when the input array is already sorted the
algorithm takes linear time — the best case. When the input array is in
reverse order, the algorithm takes the maximum (quadratic) time to sort the
elements — the worst case.
When the input array is neither sorted nor reversed, it takes an average
time. These running times are denoted using asymptotic notations.
There are three main asymptotic notations:
• Big-O notation
• Omega notation
• Theta notation
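The bubble sort example can be made concrete by counting comparisons. A minimal sketch (the early-exit variant is assumed, since that is what gives the linear best case):

```python
def bubble_sort_counted(arr):
    """Sort a copy of arr and count comparisons.
    The early exit on a swap-free pass gives the linear best case."""
    a = list(arr)
    comparisons = 0
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:          # already sorted: stop early
            break
    return a, comparisons

print(bubble_sort_counted([1, 2, 3, 4, 5])[1])  # best case:  n-1     = 4
print(bubble_sort_counted([5, 4, 3, 2, 1])[1])  # worst case: n(n-1)/2 = 10
```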
Big-O Notation (O-notation)

Big-O notation represents the upper bound of the running time of an algorithm.
Thus, it gives the worst-case complexity of an algorithm.
Since it gives the worst-case running time of an algorithm, it is widely used to
analyze an algorithm as we are always interested in the worst-case scenario.



Omega Notation (Ω-notation)

Omega notation represents the lower bound of the running time of an algorithm.
Thus, it provides the best-case complexity of an algorithm.
For any input of size n, the minimum time required by the algorithm is
Ω(g(n)).



Theta Notation (Θ-notation)

Theta notation bounds the function from both above and below.

Since it represents both the upper and the lower bound of the running time of
an algorithm, it is used for analysing the average-case complexity of an
algorithm.



Divide and Conquer Algorithm

A divide and conquer algorithm is a strategy for solving a large problem by:

• breaking the problem into smaller sub-problems,
• solving the sub-problems, and
• combining their solutions to get the desired output.

Divide and conquer algorithms are typically implemented using recursion.



How Divide and Conquer Algorithms Work?

Divide: Divide the given problem into sub-problems using recursion.


Conquer: Solve the smaller sub-problems recursively. If the sub-problem is
small enough, then solve it directly.
Combine: Combine the solutions of the sub-problems that are part of the
recursive process to solve the actual problem.





Divide and Conquer – Time Complexity

Consider computing Fibonacci numbers.
Divide and conquer approach:
fib(n)
    if n < 2, return 1
    else, return fib(n - 1) + fib(n - 2)
Dynamic approach:
mem = {}
fib(n)
    if n in mem, return mem[n]
    else,
        if n < 2, f = 1
        else, f = fib(n - 1) + fib(n - 2)
        mem[n] = f
        return f
In the dynamic approach, mem stores the result of each sub-problem so that
it is computed only once.
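The two versions can be run side by side. A minimal Python sketch, keeping the slide's convention that fib(0) = fib(1) = 1:

```python
def fib_plain(n):
    """Plain divide and conquer: exponential, re-solves the same sub-problems."""
    if n < 2:
        return 1
    return fib_plain(n - 1) + fib_plain(n - 2)

mem = {}
def fib_dynamic(n):
    """Top-down dynamic programming: each sub-problem is solved once, stored in mem."""
    if n in mem:
        return mem[n]
    f = 1 if n < 2 else fib_dynamic(n - 1) + fib_dynamic(n - 2)
    mem[n] = f
    return f

print(fib_plain(10), fib_dynamic(10))  # -> 89 89
```

Both return the same values, but fib_plain makes an exponential number of calls while fib_dynamic makes only O(n).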



Greedy algorithm

A greedy algorithm is an approach for solving a problem by selecting the best
option available at the moment, without worrying whether the current best
choice will bring the overall optimal result.

The algorithm never reverses an earlier decision, even if the choice turns out
to be wrong. It works in a top-down fashion.

This algorithm may not produce the best result for all problems, because it
always makes the locally best choice in the hope of reaching the globally best
result.



Greedy algorithm - Properties

1. Greedy Choice Property


If an optimal solution to the problem can be found by choosing the best choice
at each step without reconsidering the previous steps once chosen, the problem
can be solved using a greedy approach. This property is called greedy choice
property.

2. Optimal Substructure
If the optimal overall solution to the problem corresponds to the optimal
solution to its subproblems, then the problem can be solved using a greedy
approach. This property is called optimal substructure.
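The Fractional Knapsack problem satisfies both properties, which is why the greedy ratio rule is optimal for it. A minimal sketch (item format `(value, weight)` is an assumption of this example):

```python
def fractional_knapsack(items, capacity):
    """Greedy choice: always take the item with the highest value/weight ratio,
    splitting the last item if it does not fully fit."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)       # whole item, or the fraction that fits
        total += value * take / weight
        capacity -= take
    return total

# items as (value, weight); knapsack capacity 50
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # -> 240.0
```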



Greedy algorithm

[Figure: a weighted tree — root 20 with children 2 and 3; node 2 has child 10,
node 3 has child 1]

Greedy algorithm

Consider finding the root-to-leaf path with the largest total weight in the
tree above.

1. Start at the root node 20. The weight of its right child is 3 and the
weight of its left child is 2.

2. The problem is to find the largest path, and the locally optimal choice
is 3, so the greedy algorithm chooses 3.

3. Finally, the only child of 3 has weight 1. This gives the final result
20 + 3 + 1 = 24.

However, this is not the optimal solution: another path carries more weight
(20 + 2 + 10 = 32), as shown in the image below.
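The walkthrough can be checked in code. The tree below is reconstructed from the weights in the example (node weights are assumed distinct so they can double as keys):

```python
# Tree from the example: each node maps to its list of children.
tree = {20: [2, 3], 2: [10], 3: [1], 10: [], 1: []}

def greedy_path(node):
    """Always step to the heavier child -- the locally best move."""
    total = node
    while tree[node]:
        node = max(tree[node])
        total += node
    return total

def best_path(node):
    """Exhaustively explore all root-to-leaf paths."""
    if not tree[node]:
        return node
    return node + max(best_path(child) for child in tree[node])

print(greedy_path(20), best_path(20))  # -> 24 32
```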



Greedy algorithm

[Figure: the heavier path 20 → 2 → 10, with total weight 32]


Dynamic Programming algorithm

Dynamic Programming (DP) is an algorithmic technique for solving an
optimization problem by breaking it down into simpler sub-problems,
exploiting the fact that the optimal solution to the overall problem depends
upon the optimal solutions to its sub-problems.

From a dynamic programming point of view, Dijkstra’s algorithm for the
shortest-path problem can be seen as a successive approximation scheme that
solves the dynamic programming functional equation for the shortest path by
the Reaching method.



Dynamic Programming algorithm

Let’s find the Fibonacci sequence up to the 5th term. A Fibonacci series is a
sequence of numbers in which each number is the sum of the two preceding
ones, for example 0, 1, 1, 2, 3.

Algorithm
Let n be the term index, starting from 0.
1. If n <= 1, return n (so F(0) = 0 and F(1) = 1).
2. Else, return the sum of the two preceding terms.



Dynamic Programming algorithm
Calculating the Fibonacci sequence up to the 5th term:
• The first term is 0; the second term is 1.
• The third term is the sum of 0 (from step 1) and 1 (from step 2), which is 1.
• The fourth term is the sum of the third term and the second term: 1 + 1 = 2.
• The fifth term is the sum of the fourth term and the third term: 2 + 1 = 3.
Hence the sequence is 0, 1, 1, 2, 3. The results of the previous steps are
reused — this is the dynamic programming approach.
F(0) = 0
F(1) = 1
F(2) = F(1) + F(0)
F(3) = F(2) + F(1)
F(4) = F(3) + F(2)
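The step-by-step calculation above is exactly the bottom-up version of dynamic programming; a minimal sketch:

```python
def fib_bottom_up(n):
    """Tabulate from the smallest sub-problems upward: F(0), F(1), ..., F(n)."""
    table = [0, 1]                         # F(0) = 0, F(1) = 1
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])   # reuse previous results
    return table[:n + 1]

print(fib_bottom_up(4))  # -> [0, 1, 1, 2, 3]
```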
Brute force algorithm
This approach generates all possible candidate solutions in search of a
satisfactory solution to a given problem.
It can be of the following types:

Optimizing: in this case, the best solution is sought. The algorithm may
enumerate all possible solutions to pick the best, or, if the value of the best
solution is known in advance, stop as soon as that solution is found.
For example: finding the best route for the travelling salesman problem,
where the best route visits all the cities at minimum total travelling cost.
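The travelling salesman example can be sketched as an exhaustive search over all orderings. The 4-city distance matrix below is illustrative, not from the slides:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Try every ordering of the cities and keep the cheapest round trip."""
    n = len(dist)
    best = float("inf")
    for route in permutations(range(1, n)):        # fix city 0 as the start
        tour = (0,) + route + (0,)                 # close the cycle
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, cost)
    return best

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_brute_force(dist))  # -> 80

# (n-1)! tours are examined -- the exponential cost mentioned above.
```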



Brute force algorithm

Satisficing: It stops finding the solution as soon as the satisfactory solution is


found. Or example, finding the travelling salesman path which is within 10% of
optimal.
Often Brute force algorithms require exponential time. Various heuristics and
optimization can be used:
Heuristic: A rule of thumb that helps to decide which possibilities should look
at first.
Optimization: A certain possibilities are eliminated without exploring all of
them.



Brute force algorithm

Brute force search considers each and every state of the search tree, with
each state represented as a node. From the starting position there are two
choices, state A and state B; in the case of state B, there are two further
states, E and F.
In brute force search, each state is considered one by one. In the example
tree, brute force search takes 12 steps to find the solution.



Brute force algorithm

On the other hand, backtracking, which uses depth-first search, descends into
the states below a node only when that node can still lead to a feasible
solution. In the same tree, we start from the root, move to node A and then
to node C. If node C does not provide a feasible solution, there is no point
in considering states G and H, so we backtrack from node C to node A. We
then move from node A to node D; since node D does not provide a feasible
solution either, we discard this state and backtrack from node D to node A.

We move to node B, then from node B to node E, and from node E to node K.
Since K is a solution, it takes 10 steps to find it. In this way a large
number of states is eliminated in a single pass, so backtracking is faster
and more efficient than the brute force approach.
Backtracking algorithms

A backtracking algorithm is a problem-solving algorithm that uses a brute
force approach for finding the desired output.

The brute force approach tries out all the possible solutions and chooses the
desired/best ones.

The term backtracking suggests that if the current solution is not suitable,
we backtrack and try other solutions; thus, recursion is used in this
approach.

This approach suits problems that have multiple solutions; when a single
optimal solution is required, dynamic programming is usually the better
choice.



Backtracking algorithms

A state space tree is a tree representing all the possible states (solution or
non-solution) of the problem, from the root as the initial state to the leaves
as terminal states.



Branch-and-bound algorithms

Branch and bound is an algorithm design paradigm generally used for solving
combinatorial optimization problems. These problems are typically exponential
in time complexity and may require exploring all possible permutations in the
worst case. The branch and bound technique solves them relatively quickly by
pruning branches of the search space that provably cannot contain a better
solution.



Branch-and-bound algorithms

Consider a job sequencing instance with jobs, profits, and deadlines:
Jobs = {j1, j2, j3, j4}, P = {10, 5, 8, 3}, d = {1, 2, 1, 2}. A solution can
be represented in two ways:

1) as the subset of jobs that are done, e.g. S1 = {j1, j4};
2) as a 0/1 vector over all jobs — the first job is done, the second and
third jobs are not, and the fourth job is done: S2 = {1, 0, 0, 1}.

S1 is a variable-size representation, while S2 is a fixed-size
representation.
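A minimal branch-and-bound sketch for this instance, under assumptions not spelled out in the slides: each job takes unit time, and the bound used for pruning is the current profit plus the total profit of all jobs not yet decided:

```python
def feasible(subset, deadlines):
    """Unit-time jobs are schedulable iff, sorted by deadline,
    the k-th job (1-based) finishes by its deadline."""
    due = sorted(deadlines[j] for j in subset)
    return all(due[k] >= k + 1 for k in range(len(due)))

def job_branch_and_bound(profits, deadlines):
    n = len(profits)
    best = {"profit": 0, "jobs": ()}

    def dfs(i, chosen, profit):
        # Bound: even taking every remaining job cannot beat the best so far.
        if profit + sum(profits[i:]) <= best["profit"]:
            return
        if i == n:
            best["profit"], best["jobs"] = profit, chosen
            return
        if feasible(chosen + (i,), deadlines):      # branch 1: include job i
            dfs(i + 1, chosen + (i,), profit + profits[i])
        dfs(i + 1, chosen, profit)                  # branch 2: exclude job i

    dfs(0, (), 0)
    return best

# Jobs = {j1..j4}, P = {10, 5, 8, 3}, d = {1, 2, 1, 2}
print(job_branch_and_bound([10, 5, 8, 3], [1, 2, 1, 2]))
# -> {'profit': 15, 'jobs': (0, 1)}  i.e. do j1 then j2
```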



Branch-and-bound algorithms

6 June 2022 C-DAC 44
