DAA Module-1-2


Design and Analysis of Algorithms
Dr. Rajesh I S
Assistant Professor
BMSIT&M
Time Analysis

X = 5*a + 6*b  → this statement executes once, so its frequency count is 1 and its time is constant: O(1).

Space Analysis

The statement uses three variables (X, a, b), each occupying one word of memory, so S(n) = 3 words, i.e., constant space: O(1).
Frequency Count Method

Example: summing the elements of an array.

Algorithm Sum(A, n)
    s = 0                  → 1
    for i = 0 to n-1       → n+1
        s = s + A[i]       → n
    return s               → 1

Time: f(n) = 2n + 3, so the time complexity is O(n).

Space Complexity

A → n words, n → 1, s → 1, i → 1, so S(n) = n + 3 = O(n).

Example: adding two n×n matrices.

for i = 0 to n-1                      → n+1
    for j = 0 to n-1                  → n(n+1)
        C[i][j] = A[i][j] + B[i][j]   → n*n

Time: f(n) = (n+1) + n(n+1) + n² = 2n² + 2n + 1, so the time complexity is O(n²).
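
A runnable C version of the matrix-addition example (the 2×2 size in main is only for the demo; the frequency counts in the comments are for general n):

#include <stdio.h>
#define N 2

/* Matrix addition with frequency counts in comments (n = N). */
void matrix_add(int A[N][N], int B[N][N], int C[N][N]) {
    for (int i = 0; i < N; i++)            /* n + 1    */
        for (int j = 0; j < N; j++)        /* n(n + 1) */
            C[i][j] = A[i][j] + B[i][j];   /* n * n    */
}   /* total f(n) = 2n^2 + 2n + 1 -> O(n^2) */

int main(void) {
    int A[N][N] = {{1, 2}, {3, 4}}, B[N][N] = {{5, 6}, {7, 8}}, C[N][N];
    matrix_add(A, B, C);
    for (int i = 0; i < N; i++)
        printf("%d %d\n", C[i][0], C[i][1]);  /* prints: 6 8 / 10 12 */
    return 0;
}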
Time Complexity #1

A loop that counts down (i--) from n to 1 executes n times, so it is O(n).
A loop that steps by two (i = i + 2) from 0 to n executes n/2 times, which is still O(n), since constant factors are ignored.
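
A short C sketch of both loops, assuming the standard loop headers for(i = n; i > 0; i--) and for(i = 0; i < n; i = i + 2), with step counts in the comments:

#include <stdio.h>

int main(void) {
    int n = 10;

    for (int i = n; i > 0; i--)        /* runs n times   -> O(n) */
        printf("%d ", i);
    printf("\n");

    for (int i = 0; i < n; i = i + 2)  /* runs n/2 times -> O(n) */
        printf("%d ", i);
    printf("\n");
    return 0;
}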
Compare Class of Functions

Evaluating the common growth functions as n doubles:

n     log n    n    n log n    n^2    2^n
1       0      1       0        1      2
2       1      2       2        4      4
4       2      4       8       16     16
8       3      8      24       64    256

As n grows: log n < n < n log n < n^2 < 2^n.
Asymptotic Notations:
Asymptotic notation is a way of comparing functions that ignores constant
factors and small input sizes.

Three notations are used to express the running-time complexity of an
algorithm: Big O (O), Omega (Ω), and Theta (Θ).

The time required for the execution of an algorithm is categorized into three
types:

Worst case: the maximum time required for execution.
Average case: the average time taken for execution.
Best case: the minimum time taken for execution.
Best case: Define the input for which the algorithm takes the least time. The best
case gives a lower bound on the algorithm's running time. Example: in linear search,
the best case occurs when the search key is at the first position of a large list.
Worst case: Define the input for which the algorithm takes the longest time. The
worst case gives an upper bound on the algorithm's running time. Example: in linear
search, the worst case occurs when the search key is not present at all.
Average case: In the average case, take all random inputs, calculate the computation
time for each, and then divide by the total number of inputs:

Average case = (sum of running times over all inputs) / (total number of inputs)
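
As a worked example, assume in linear search that the key is equally likely to be at any of the n positions. A successful search for a key at position i costs i comparisons, so:

Average case = (1 + 2 + ... + n) / n = (n(n+1)/2) / n = (n+1)/2 comparisons,

which is about n/2, i.e., O(n).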
Big O notation (O): This notation provides an upper bound on the growth rate of an
algorithm's running time or space usage. It represents the worst-case scenario, i.e., the
maximum amount of time or space an algorithm may need to solve a problem. For
example, if an algorithm's running time is O(n), the running time grows at most
linearly with the input size n.
Omega notation (Ω): This notation provides a lower bound on the growth rate of an
algorithm's running time or space usage. It represents the best-case scenario, i.e., the
minimum amount of time or space an algorithm may need to solve a problem. For
example, if an algorithm's running time is Ω(n), the running time grows at least
linearly with the input size n.
Theta notation (Θ): This notation provides both an upper and a lower bound on the
growth rate of an algorithm's running time or space usage, i.e., it pins the growth rate
down exactly, up to constant factors. For example, if an algorithm's running time is
Θ(n), the running time grows exactly linearly with the input size n.

In general, the choice of asymptotic notation depends on the problem and the specific
algorithm used to solve it. It is important to note that asymptotic notation does not
provide an exact running time or space usage for an algorithm, but rather a description
of how the algorithm scales with respect to input size. It is a useful tool for comparing
the efficiency of different algorithms and for predicting how they will perform on large
input sizes.
Little o Notation
Little o notation is used to describe an upper bound that is not tight; in
other words, a strict (loose) upper bound on f(n).

Little ω Notation
Little omega notation, denoted ω, is used to describe a strict (loose) lower
bound on f(n).
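
For reference, the standard formal definitions of the five notations (c, c1, c2 denote positive constants and n0 a threshold input size):

f(n) = O(g(n)):  there exist c > 0 and n0 such that f(n) <= c*g(n) for all n >= n0
f(n) = Ω(g(n)):  there exist c > 0 and n0 such that f(n) >= c*g(n) for all n >= n0
f(n) = Θ(g(n)):  there exist c1, c2 > 0 and n0 such that c1*g(n) <= f(n) <= c2*g(n) for all n >= n0
f(n) = o(g(n)):  for every c > 0 there exists n0 such that f(n) < c*g(n) for all n >= n0
f(n) = ω(g(n)):  for every c > 0 there exists n0 such that f(n) > c*g(n) for all n >= n0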
What is the need for Asymptotic analysis?

Let’s compare linear search and binary search algorithms:

Execution time for linear search: a*n

Execution time for binary search: b*(log n)

a, b => machine-dependent constants, n => input size.

Since n > log n, linear search takes more time to execute than binary search. But what
if linear search is executed on a much faster machine?
One way to study the efficiency of an algorithm is to implement it and
experiment by running the program on various test inputs while recording the
time spent during each execution.
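
A minimal C sketch of such an experiment, using clock() from <time.h>; the workload function and the input sizes here are placeholders:

#include <stdio.h>
#include <time.h>

/* Placeholder workload: sum of the first n integers. */
long work(long n) {
    long s = 0;
    for (long i = 0; i < n; i++) s += i;
    return s;
}

int main(void) {
    long sizes[] = {100000, 1000000, 10000000};
    for (int k = 0; k < 3; k++) {
        clock_t start = clock();
        volatile long r = work(sizes[k]);  /* volatile keeps the call from being optimized away */
        clock_t end = clock();
        (void)r;
        printf("n = %8ld  time = %.4f s\n", sizes[k],
               (double)(end - start) / CLOCKS_PER_SEC);
    }
    return 0;
}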

Given two algorithms for a task, how do we find out which one is better?
One way of doing this is to implement both algorithms, run the two programs on your
computer for different inputs, and see which one takes less time. There are many problems with
this approach to the analysis of algorithms:

It might be possible that for some inputs the first algorithm performs better than the second,
and for some inputs the second performs better.
It might also be possible that for some inputs the first algorithm performs better on one
machine, while the second works better on another machine for some other inputs.

Asymptotic Analysis is the big idea that handles the above issues in analyzing algorithms. In
Asymptotic Analysis, we evaluate the performance of an algorithm in terms of input size (we
don't measure the actual running time). We calculate how the time (or space) taken by an
algorithm increases with the input size.
Let there be 'n' items to be searched. After the first comparison the list is divided in
two, and the search continues in one half, of length n/2. After the next comparison the
remaining list has length n/4, and so on. This successive halving has to stop when the
length of the list becomes 1. Let this happen after k steps.

After k steps: n / 2^k = 1, i.e., 2^k = n, so k = log2(n).

Hence the order is log(n).


The reason is that the order of growth of binary search with respect to input size is
logarithmic, while the order of growth of linear search is linear. So the
machine-dependent constants can always be ignored beyond a certain input size.
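
A minimal C sketch of binary search on a sorted array (ascending order assumed), illustrating the halving that gives the O(log n) bound:

#include <stdio.h>

/* Returns the index of key in the sorted array a[0..n-1], or -1 if absent. */
int binary_search(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;      /* avoids overflow of (lo + hi) */
        if (a[mid] == key) return mid;
        if (a[mid] < key)  lo = mid + 1;   /* search the right half */
        else               hi = mid - 1;   /* search the left half  */
    }
    return -1;
}

int main(void) {
    int a[] = {2, 3, 5, 7, 11, 13};
    printf("%d\n", binary_search(a, 6, 11));  /* prints 4 */
    return 0;
}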
Challenges of Experimental Analysis:
Experimental running times of two algorithms are difficult to directly compare unless
the experiments are performed in the same hardware and software environments.
Experiments can be done only on a limited set of test inputs; hence, they leave out the
running times of inputs not included in the experiment (and these inputs may be
important).

To overcome the challenges of experimental analysis, asymptotic analysis is used.
Does Asymptotic Analysis always work?
Asymptotic analysis is not perfect, but it is the best way available for analyzing
algorithms. For example, say there are two sorting algorithms that take 1000·n·log n and
2·n·log n time respectively on a machine. Both of these algorithms are asymptotically the
same (order of growth is n log n). So, with asymptotic analysis, we can't judge which
one is better, as we ignore constants in asymptotic analysis.
Also, in asymptotic analysis, we always talk about input sizes larger than a constant
value. It might be possible that those large inputs are never given to your software, and
an asymptotically slower algorithm always performs better for your particular
situation. So, you may end up choosing an algorithm that is asymptotically slower but
faster for your software.
Advantages and Disadvantages:
Advantages:
1. Asymptotic analysis provides a high-level understanding of how an algorithm performs
with respect to input size.
2. It is a useful tool for comparing the efficiency of different algorithms and selecting the
best one for a specific problem.
3. It helps in predicting how an algorithm will perform on larger input sizes, which is
essential for real-world applications.
4. Asymptotic analysis is relatively easy to perform and requires only basic mathematical
skills.
Disadvantages:
1. Asymptotic analysis does not provide an accurate running time or space usage
of an algorithm.
2. It assumes that the input size is the only factor that affects an algorithm's
performance, which is not always the case in practice.
3. Asymptotic analysis can sometimes be misleading, as two algorithms with the
same asymptotic complexity may have different actual running times or space
usage.
4. It is not always straightforward to determine the best asymptotic complexity for
an algorithm, as there may be trade-offs between time and space complexity.
Algorithm Design Techniques

The following is a list of several popular design approaches:

1. Divide and Conquer Approach: This is a top-down approach. Algorithms that
follow the divide-and-conquer technique involve three steps (a sketch follows this item):
•Divide the original problem into a set of subproblems.
•Solve every subproblem individually, recursively.
•Combine the solutions of the subproblems (top level) into a solution of the whole
original problem.
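
A minimal divide-and-conquer sketch in C (recursively finding the maximum of an array; the name array_max is illustrative):

#include <stdio.h>

/* Divide and conquer: maximum of a[lo..hi]. */
int array_max(const int a[], int lo, int hi) {
    if (lo == hi) return a[lo];             /* base case: one element        */
    int mid = (lo + hi) / 2;                /* divide                        */
    int left  = array_max(a, lo, mid);      /* conquer left half recursively */
    int right = array_max(a, mid + 1, hi);  /* conquer right half            */
    return left > right ? left : right;     /* combine                       */
}

int main(void) {
    int a[] = {3, 9, 1, 7, 5};
    printf("%d\n", array_max(a, 0, 4));  /* prints 9 */
    return 0;
}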
2. Greedy Technique: The greedy method is used to solve optimization problems. An
optimization problem is one in which we are given a set of input values, which are
required either to be maximized or minimized (known as the objective), subject to some
constraints or conditions.

•A greedy algorithm always makes the choice (the greedy criterion) that looks best at the
moment, to optimize the given objective.
•The greedy algorithm doesn't always guarantee the optimal solution; however, it
generally produces a solution that is very close in value to the optimal one. (A sketch
follows this item.)
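
A minimal greedy sketch in C: coin change with the denominations 25, 10, 5, 1, for which the greedy choice happens to be optimal (for arbitrary denominations, greedy can fail):

#include <stdio.h>

/* Greedy coin change: always take the largest coin that still fits. */
void coin_change(int amount) {
    int coins[] = {25, 10, 5, 1};
    for (int k = 0; k < 4; k++) {
        int count = amount / coins[k];  /* greedy choice */
        amount -= count * coins[k];
        if (count > 0)
            printf("%d x %d\n", count, coins[k]);
    }
}

int main(void) {
    coin_change(67);  /* prints: 2 x 25, 1 x 10, 1 x 5, 2 x 1 */
    return 0;
}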
3. Dynamic Programming: Dynamic programming is a bottom-up approach: we solve all possible small
problems and then combine them to obtain solutions for bigger problems. This is particularly helpful when
the number of overlapping subproblems is exponentially large. Dynamic programming is frequently related
to optimization problems. (A sketch follows this item.)
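
A minimal bottom-up dynamic-programming sketch in C (Fibonacci numbers, an illustrative choice; each subproblem is solved once and reused):

#include <stdio.h>

/* Bottom-up DP: table[i] is built from the already-solved table[i-1], table[i-2]. */
long fib(int n) {
    long table[100];  /* demo sizes only: assumes 0 <= n < 100 */
    table[0] = 0; table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}

int main(void) {
    printf("%ld\n", fib(10));  /* prints 55 */
    return 0;
}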

4. Randomized Algorithms: A randomized algorithm is defined as an algorithm that is allowed to access a
source of independent, unbiased random bits, and it is then allowed to use these random bits to influence
its computation. (A sketch follows this item.)
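
A minimal randomized sketch in C (a Monte Carlo estimate of pi using rand(); the sample count and seed are arbitrary choices for the demo):

#include <stdio.h>
#include <stdlib.h>

/* Monte Carlo: the fraction of random points falling inside the unit
   quarter-circle approximates pi/4. */
int main(void) {
    srand(42);  /* fixed seed for reproducibility */
    long trials = 1000000, hits = 0;
    for (long i = 0; i < trials; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0) hits++;
    }
    printf("pi ~ %f\n", 4.0 * hits / trials);
    return 0;
}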

5. Backtracking Algorithm: A backtracking algorithm tries each possibility until it finds the right one. It is
a depth-first search of the set of possible solutions. During the search, if an alternative doesn't work, the
algorithm backtracks to the choice point, the place which presented the different alternatives, and tries the
next alternative. (A sketch follows this item.)
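
A minimal backtracking sketch in C (counting solutions to the N-Queens problem, an illustrative choice; queens are placed row by row, and a failed placement is abandoned in favor of the next alternative):

#include <stdio.h>
#include <stdlib.h>

int n = 6, col[6], count = 0;

/* Is it safe to put a queen at (row, c), given queens in rows 0..row-1? */
int safe(int row, int c) {
    for (int r = 0; r < row; r++)
        if (col[r] == c || abs(col[r] - c) == row - r)
            return 0;  /* same column or same diagonal */
    return 1;
}

void place(int row) {
    if (row == n) { count++; return; }  /* all queens placed */
    for (int c = 0; c < n; c++)
        if (safe(row, c)) {
            col[row] = c;    /* choose         */
            place(row + 1);  /* explore deeper */
            /* backtrack: col[row] is overwritten by the next alternative */
        }
}

int main(void) {
    place(0);
    printf("%d solutions\n", count);  /* 6-queens has 4 solutions */
    return 0;
}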
Properties of Asymptotic Notations
The following are the properties of asymptotic notations:
1. General Properties
2. Reflexive Properties
3. Transitive Properties
4. Symmetric Properties
5. Transpose Symmetric Properties
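
Briefly, what each property states (a denotes a positive constant):

1. General: if f(n) = O(g(n)), then a·f(n) = O(g(n)); the same holds for Ω and Θ.
2. Reflexive: f(n) = O(f(n)); likewise f(n) = Ω(f(n)) and f(n) = Θ(f(n)).
3. Transitive: if f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
4. Symmetric: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).
5. Transpose symmetric: f(n) = O(g(n)) if and only if g(n) = Ω(f(n)).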
Selection Sort
Selection sort improves on bubble sort by making only a single swap for each
pass through the list. To do this, selection sort searches for the largest value as it
makes a pass and, after completing the pass, places it in its proper position. As
with bubble sort, after the first pass the largest item is in the right place; after the
second pass, the next largest is in place. This procedure continues and requires
n-1 passes to sort n items, since the final item is in place after the (n-1)th pass.
(A sketch follows below.)
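
A minimal C sketch of selection sort as described above (each pass selects the largest remaining value and swaps it to the end of the unsorted part):

#include <stdio.h>

void selection_sort(int a[], int n) {
    for (int pass = n - 1; pass > 0; pass--) {  /* n-1 passes */
        int max_idx = 0;
        for (int i = 1; i <= pass; i++)         /* find the largest in a[0..pass] */
            if (a[i] > a[max_idx]) max_idx = i;
        int tmp = a[max_idx];                   /* one swap per pass */
        a[max_idx] = a[pass];
        a[pass] = tmp;
    }
}

int main(void) {
    int a[] = {54, 26, 93, 17, 77};
    selection_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);  /* 17 26 54 77 93 */
    printf("\n");
    return 0;
}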
Sequential Search Algorithm
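
A minimal C sketch of sequential (linear) search, the algorithm referred to throughout this module:

#include <stdio.h>

/* Sequential (linear) search: scan a[0..n-1] left to right.
   Best case: key at position 0 (1 comparison).
   Worst case: key absent (n comparisons). */
int sequential_search(const int a[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key) return i;
    return -1;
}

int main(void) {
    int a[] = {8, 3, 5, 9, 1};
    printf("%d\n", sequential_search(a, 5, 9));  /* prints 3 */
    return 0;
}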
