Asymptotic Notation 1


Asymptotic Notation

The main idea of asymptotic analysis is to measure the efficiency of
algorithms in a way that does not depend on machine-specific constants,
and that does not require the algorithms to be implemented or the
running times of programs to be compared.
It describes how the size of the input data affects an algorithm's
usage of computational resources: the running time of an algorithm is
expressed as a function of the input size n, for large n.
• Asymptotic Notation
• Whenever we want to analyze an algorithm, we need to
calculate the complexity of that algorithm.
• But the calculated complexity does not give the exact
amount of resources required.
• So instead of the exact amount of resources, we represent
the complexity in a general form (a notation) that captures
the basic nature of the algorithm.
• We use that general form (notation) in the analysis process.
• The asymptotic notation of an algorithm is a
mathematical representation of its complexity.
• Note - In asymptotic notation, when we represent
the complexity of an algorithm, we use only the
most significant terms of that complexity and
ignore the less significant terms (here the
complexity can be space complexity or time
complexity).
• For example, consider the following time complexities of two
algorithms...

• Algorithm 1 : 5n² + 2n + 1
• Algorithm 2 : 10n² + 8n + 3
• Generally, when we analyze an algorithm, we consider the time
complexity for larger values of input data (i.e. 'n' value).
• In the above two time complexities, for larger values of 'n' the term
'2n + 1' in Algorithm 1 is far less significant than the term '5n²', and
the term '8n + 3' in Algorithm 2 is far less significant than the term
'10n²'.
• That is, for larger values of 'n' the most significant terms (5n² and
10n²) are much larger than the least significant terms (2n + 1 and
8n + 3).
• So for larger values of 'n' we ignore the least significant terms when
representing the overall time required by an algorithm. In asymptotic
notation, we use only the most significant term to represent the
time complexity of an algorithm.
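The effect of dropping the lower-order terms can be sketched in a few lines of Python (the function names here are illustrative, not from the slides):

```python
def algorithm1_steps(n):
    # Full cost expression of Algorithm 1 from the slide: 5n^2 + 2n + 1
    return 5 * n**2 + 2 * n + 1

def dominant_term(n):
    # Only the most significant term: 5n^2
    return 5 * n**2

# As n grows, the lower-order terms (2n + 1) contribute a vanishing
# fraction of the total cost, which is why asymptotic notation drops them.
for n in (10, 1_000, 1_000_000):
    print(n, dominant_term(n) / algorithm1_steps(n))
```

The printed ratio approaches 1 as n grows, showing that 5n² alone already accounts for essentially all of the cost for large inputs.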
Asymptotic Notation
Asymptotic analysis of an algorithm - refers to defining the
mathematical framing of its run-time performance.
Usually, the time required by an algorithm falls into three cases:
 Best Case − Minimum time required for program execution.
 Average Case − Average time required for program execution.
 Worst Case − Maximum time required for program execution.
Asymptotic analysis is input-bound, i.e., if there is no input to the
algorithm, it is concluded to work in constant time. Other than the
input, all other factors are considered constant.
Asymptotic analysis refers to computing the running time of any
operation in mathematical units of computation.
Asymptotic Notation

For example, the running time of one operation may be computed as
f(n) = n, and that of another operation as g(n) = n². This means the
running time of the first operation increases linearly with the
increase in n, while the running time of the second operation
increases quadratically as n increases.
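A minimal sketch of this comparison, assuming f(n) = n and g(n) = n² as in the text:

```python
# Illustrative comparison of linear vs. quadratic growth:
# f mirrors the linear cost f(n) = n, g the quadratic cost g(n) = n^2.
def f(n):
    return n        # grows linearly with n

def g(n):
    return n ** 2   # grows quadratically with n

for n in (1, 10, 100, 1_000):
    print(f"n={n:>5}  f(n)={f(n):>7}  g(n)={g(n):>9}")
```

At n = 1,000 the quadratic operation already costs a thousand times more than the linear one.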
The three asymptotic notations:
• The Big-O Notation
• Omega Ω Notation
• Theta Θ Notation
• Big - Oh notation is used to define the upper bound of an
algorithm in terms of time complexity.
• That means Big - Oh notation indicates the maximum
time required by an algorithm over all input values; it is
therefore used to describe the worst case of an algorithm's
time complexity.
• Big - Oh notation can be defined as follows...
• Consider a function f(n) as the time complexity of an algorithm,
and let g(n) be its most significant term. If f(n) <= C g(n) for all
n >= n0, for some C > 0 and n0 >= 1, then we can represent f(n)
as O(g(n)).
• f(n) = O(g(n))
• Consider the following graph drawn for the values of f(n) and
C g(n), with the input value (n) on the X-axis and the time
required on the Y-axis.

In the above graph, after a particular input value n0, C g(n) is
always greater than f(n), which indicates the algorithm's upper bound.
Asymptotic Notation - The Big-O
Big oh (O): Definition: f(n) = O(g(n)) (read as "f of n is big oh of g of n") if
there exist a positive integer n0 and a positive number c such that
|f(n)| ≤ c|g(n)| for all n ≥ n0. Here g(n) is an upper bound of the function f(n).
Big “oh”: O
 The asymptotic running time of an algorithm is defined in terms of
functions.
 The upper bound for the function ‘f’ is provided by the big oh notation
(O).
 Considering ‘g’ to be a function from the non-negative integers to the
positive real numbers, we define O(g) as the set of functions f such that
f(n) ≤ c·g(n) for all n ≥ n0, for some real constant c > 0 and some
non-negative integer constant n0.
The Big-O Notation
Example:

• Consider the following f(n) and g(n)...


• f(n) = 3n + 2
• g(n) = n
• If we want to represent f(n) as O(g(n)), then it must satisfy f(n) <= C g(n)
for some C > 0 and all n >= n0, with n0 >= 1
• f(n) <= C g(n)
• ⇒ 3n + 2 <= C n
• The above condition is TRUE for C = 4 and all values of n >= 2.
• By using Big - Oh notation we can represent the time complexity as
follows...
• 3n + 2 = O(n)
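The witness constants from this example can be checked mechanically; a minimal sketch in Python:

```python
# Verify the slide's Big-O witness: 3n + 2 <= 4n for every n >= 2,
# so 3n + 2 = O(n) with C = 4 and n0 = 2.
C, n0 = 4, 2

def f(n):
    return 3 * n + 2

def g(n):
    return n

# The inequality holds for every n from n0 up to a large test bound.
assert all(f(n) <= C * g(n) for n in range(n0, 10_000))
# It fails below n0 (f(1) = 5 > 4 = C * g(1)), which is why n0 = 2 matters.
print("3n + 2 = O(n) with C =", C, "and n0 =", n0)
```

The check below n0 illustrates that Big-O only constrains behaviour from some threshold onward, not for every n.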
The Big-O Notation

Time complexity of an algorithm using O notation:

Running time            Name
O(1)                    constant
O(n)                    linear
O(n²)                   quadratic
O(n³)                   cubic
O(2ⁿ), O(3ⁿ), O(kⁿ)     exponential
O(nᵏ)                   polynomial
O(log n)                logarithmic
O(n log n)              log-linear

Some of the commonly occurring time complexities, listed in
ascending order of magnitude:
O(1) ≤ O(log n) ≤ O(n) ≤ O(n log n) ≤ O(n²) ≤ O(n³) ≤ O(2ⁿ)
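This ordering can be made concrete by evaluating each growth function at a single input size; a short illustrative Python sketch:

```python
import math

# Evaluate the common growth functions at one value of n to see the
# ordering O(1) <= O(log n) <= O(n) <= O(n log n) <= O(n^2) <= O(n^3) <= O(2^n).
growth = [
    ("O(1)",       lambda n: 1),
    ("O(log n)",   lambda n: math.log2(n)),
    ("O(n)",       lambda n: n),
    ("O(n log n)", lambda n: n * math.log2(n)),
    ("O(n^2)",     lambda n: n ** 2),
    ("O(n^3)",     lambda n: n ** 3),
    ("O(2^n)",     lambda n: 2 ** n),
]

n = 30
values = [fn(n) for _, fn in growth]
for (name, _), v in zip(growth, values):
    print(f"{name:<12}{v:>15,.0f}")
```

Even at n = 30 the values are already in strictly increasing order, with 2ⁿ over a billion while n³ is only 27,000.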
The Big-O Notation
Let's draw the growth rates for the above functions.
[Figure: plot of the function values for the growth rates above]
Omega Ω Notation

Omega: Ω
 The lower bound for the function ‘f’ is provided by the
big omega notation (Ω).
 Considering ‘g’ to be a function from the non-negative
integers to the positive real numbers, we define Ω(g) as
the set of functions f such that f(n) ≥ c·g(n) for all n ≥ n0,
for some real constant c > 0 and some non-negative
integer constant n0.
f(n) = Ω(g(n)) iff there exist positive constants c and n0
such that f(n) ≥ c·g(n) for all n ≥ n0.
• Big - Omega Notation (Ω)
• Big - Omega notation is used to define the lower bound of an
algorithm in terms of time complexity.
• That means Big - Omega notation indicates the minimum
time required by an algorithm over all input values; it is
therefore used to describe the best case of an algorithm's time
complexity.
• Big - Omega notation can be defined as follows...
• Consider a function f(n) as the time complexity of an algorithm,
and let g(n) be its most significant term. If f(n) >= C g(n) for all
n >= n0, for some C > 0 and n0 >= 1, then we can represent f(n)
as Ω(g(n)).
• f(n) = Ω(g(n))
• Consider the following graph drawn for the values of
f(n) and C g(n), with the input value (n) on the X-axis
and the time required on the Y-axis.

In the above graph, after a particular input value n0,
C g(n) is always less than f(n), which indicates the
algorithm's lower bound.
Omega Ω Notation
Example:

• Consider the following f(n) and g(n)...


• f(n) = 3n + 2
• g(n) = n
• If we want to represent f(n) as Ω(g(n)), then it must satisfy f(n) >= C g(n)
for some C > 0 and all n >= n0, with n0 >= 1
• f(n) >= C g(n)
• ⇒ 3n + 2 >= C n
• The above condition is TRUE for C = 1 and all values of n >= 1.
• By using Big - Omega notation we can represent the time complexity as
follows...
• 3n + 2 = Ω(n)
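As with the Big-O example, the Omega witness can be checked in a short Python sketch:

```python
# Verify the slide's Big-Omega witness: 3n + 2 >= 1*n for every n >= 1,
# so 3n + 2 = Omega(n) with C = 1 and n0 = 1.
C, n0 = 1, 1

def f(n):
    return 3 * n + 2

# The inequality holds for every n from n0 up to a large test bound.
assert all(f(n) >= C * n for n in range(n0, 10_000))
print("3n + 2 = Omega(n) with C =", C, "and n0 =", n0)
```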
Theta Θ Notation
Definition: f(n) = Θ(g(n)) (read as "f of n is theta of g of n") if there
exist a positive integer n0 and two positive constants c1 and c2
such that c1|g(n)| ≤ |f(n)| ≤ c2|g(n)| for all n ≥ n0. The function g(n)
is both an upper bound and a lower bound for the function f(n) for
all values of n, n ≥ n0.
Theta: Θ
 The upper and lower bounds for the function ‘f’ are provided by the
big theta notation (Θ). Considering ‘g’ to be a function from the
non-negative integers to the positive real numbers, we define
Θ(g) as the set of functions f such that c1·g(n) ≤ f(n) ≤ c2·g(n)
for all n ≥ n0, for some real constants c1, c2 > 0 and some
non-negative integer constant n0.
• Big - Theta notation is used to define a tight bound of an
algorithm in terms of time complexity.
• That means Big - Theta notation bounds the time required by an
algorithm from both above and below: the running time grows at
exactly the order of g(n). (It is sometimes loosely called the
"average" bound, but strictly it is a tight bound, not the average
case.)
• Big - Theta Notation can be defined as follows...
• Consider function f(n) as time complexity of an algorithm and
g(n) is the most significant term. If C1 g(n) <= f(n) <= C2 g(n) for
all n >= n0, C1 > 0, C2 > 0 and n0 >= 1. Then we can represent
f(n) as Θ(g(n)).
• f(n) = Θ(g(n))
• Consider the following graph drawn for the values of
f(n), C1 g(n) and C2 g(n), with the input value (n) on the
X-axis and the time required on the Y-axis.

In the above graph, after a particular input value n0,
C1 g(n) is always less than f(n) and C2 g(n) is always
greater than f(n), which indicates the algorithm's tight
bound.
Theta Θ Notation
Example:

• Consider the following f(n) and g(n)...


• f(n) = 3n + 2
• g(n) = n
• If we want to represent f(n) as Θ(g(n)), then it must satisfy
C1 g(n) <= f(n) <= C2 g(n) for some C1 > 0, C2 > 0 and all n >= n0,
with n0 >= 1
• C1 g(n) <= f(n) <= C2 g(n)
• ⇒ C1 n <= 3n + 2 <= C2 n
• The above condition is TRUE for C1 = 1, C2 = 4 and all values of n >= 2.
• By using Big - Theta notation we can represent the time complexity as
follows...
• 3n + 2 = Θ(n)
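The two-sided Theta witnesses from this example can likewise be checked in a short Python sketch:

```python
# Verify the slide's Big-Theta witnesses: 1*n <= 3n + 2 <= 4*n for every
# n >= 2, so 3n + 2 = Theta(n) with C1 = 1, C2 = 4 and n0 = 2.
C1, C2, n0 = 1, 4, 2

def f(n):
    return 3 * n + 2

# Both inequalities hold simultaneously for every n from n0 upward.
assert all(C1 * n <= f(n) <= C2 * n for n in range(n0, 10_000))
print("3n + 2 = Theta(n) with C1 =", C1, ", C2 =", C2, ", n0 =", n0)
```

Note that the constants are the same ones used in the Big-O and Big-Omega examples, which is exactly why Θ(n) combines both bounds.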
