1 Algorithm: Design
Any algorithm must take an input provided by the user and must produce an output. Computational instructions in the algorithm must involve only basic algebraic (arithmetic) operations, and the algorithm must terminate after a finite number of steps. Finally, every instruction must be unambiguous, and the algorithm must produce the desired output. Mathematically, an algorithm can be represented as a function, F : I → O, where I is the set of inputs and O is the set of outputs generated by the algorithm. The word algorithm comes from the name of the Persian author "Abu Ja'far Muhammad ibn Musa al-Khwarizmi".
Note: Although the term algorithm is associated with computer science, the notion of computation (an algorithm) has existed for many centuries.
We shall address these questions in this lecture. Let us consider, as an example, finding the maximum element in an array of size n. An algorithm is typically described using pseudocode, as follows:
Algo Max-array(A, n)
{
    Max = A[1];    // arrays are 1-indexed in this pseudocode
    for i = 2 to n do
        if ( A[i] > Max ) then Max = A[i];
    return Max;
}
Note: The maximum can also be computed by sorting the array in increasing (decreasing) order and picking the last (first) element. There are at least five different algorithms to find a maximum element, and therefore it is natural to ask for an efficient algorithm. This calls for the study of analysis of algorithms.
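As a concrete illustration, here is a minimal C sketch of the Max-array pseudocode above (the 0-indexed C array and the names max_array and main are choices made for this illustration, not part of the original notes):

#include <stdio.h>

/* Returns the maximum element of A[0..n-1]; assumes n >= 1. */
int max_array(const int A[], int n) {
    int max = A[0];                 /* Max = A[1] in the 1-indexed pseudocode */
    for (int i = 1; i < n; i++)     /* for i = 2 to n */
        if (A[i] > max)
            max = A[i];
    return max;
}

int main(void) {
    int A[] = {3, 7, 2, 9, 4};
    printf("%d\n", max_array(A, 5));   /* prints 9 */
    return 0;
}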
2. Recursive:
Fact(n)
{
    if n = 1 then
        return 1;
    else
        return n * Fact(n - 1);
}
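For reference, a C sketch of this recursion (the name fact and the unsigned long type are assumptions made for this illustration; the pseudocode above is the authoritative form):

/* Recursive factorial; assumes n >= 1. Overflows for n > 20 with a
 * 64-bit unsigned long. */
unsigned long fact(unsigned long n) {
    if (n <= 1)
        return 1;               /* base case: Fact(1) = 1 */
    return n * fact(n - 1);     /* recursive step */
}

The recursion performs n − 1 multiplications for input n, so its step count grows linearly with n.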
2 Algorithm: Analysis
The analysis of algorithms involves the following measures, listed in order of priority.
1. Correctness: For any algorithm, a proof of correctness is important; it exhibits the fact that the algorithm indeed outputs the desired answer. Often, discovering the underlying combinatorics is quite challenging.
2. Amount of work done (time complexity): For a problem, there may exist many algorithms. Comparing two algorithms is inevitable in order to pick the better of the two. By analysis we mean the amount of time and space required to execute the algorithm. A computational problem can have many algorithms, but the estimation of time and space complexity provides an insight into reasonable directions of search for an efficient algorithm. Time complexity does not refer to the actual running time of an algorithm in milliseconds (system time); the actual running time depends on the system configuration. An algorithm taking 100 µs on an Intel machine may take 10 µs on an AMD machine. So, the time complexity must be defined independently of the system configuration. For each algorithm, we focus on the step count: the number of times each statement in the algorithm is executed, which in some sense reflects the time complexity. The step count focuses on primitive operations along with basic operations. Moreover, this number increases with the problem size; therefore, we express the step count (time complexity) as a function of the input size. The notions of input size and primitive operation vary from one problem to another. The popular ones are:
Note that the first 'for loop' is executed m + 1 times; that is, the first m calls are true calls, during which the inner loop is executed, and the last, (m + 1)-th, call is a false call (see the sketch below).
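The code being counted is not reproduced above; a minimal C sketch consistent with the stated count (a hypothetical doubly nested loop, assumed purely for illustration) is:

#include <stdio.h>

/* Hypothetical nested-loop example, assumed for illustration only:
 * the outer loop's condition is evaluated m + 1 times (m true calls
 * plus one final false call); the inner loop body runs m * n times. */
int main(void) {
    int m = 3, n = 4, count = 0;
    for (int i = 0; i < m; i++)        /* condition tested m + 1 times */
        for (int j = 0; j < n; j++)    /* body below runs m * n times */
            count++;
    printf("inner-loop executions: %d\n", count);   /* prints 12 */
    return 0;
}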
3. Fibonacci series
Fib(n)
{
    if n <= 1 then          ---- 1
        output 'n';         ---- 1
    else
    {
        f1 = 1; f2 = 0;     ---- 2
        for i = 2 to n do   ---- n
        {
            f = f1 + f2;    ---- n - 1
            f2 = f1;        ---- n - 1
            f1 = f;         ---- n - 1
        }
        output 'f';         ---- 1
    }
}                           --------
Total number of steps = 4n + 1
Note that if n ≤ 1 then the step count is just two, and 4n + 1 otherwise.
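A runnable C rendering of this step-count example (the function name fib and the seed values f1 = 1, f2 = 0 are assumptions consistent with the count above):

#include <stdio.h>

/* Iterative Fibonacci: fib(0) = 0, fib(1) = 1. Takes 4n + 1 "steps"
 * in the counting convention used above. */
unsigned long fib(unsigned int n) {
    if (n <= 1)
        return n;
    unsigned long f1 = 1, f2 = 0, f = 0;
    for (unsigned int i = 2; i <= n; i++) {
        f = f1 + f2;    /* next Fibonacci number */
        f2 = f1;
        f1 = f;
    }
    return f;
}

int main(void) {
    printf("%lu\n", fib(10));   /* prints 55 */
    return 0;
}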
Remark:
– Note that 3n + 2 ≠ O(1), i.e., there do not exist c and n₀ such that 3n + 2 ≤ c·1 for all n beyond n₀. However large c is, there exists n beyond which 3n + 2 > c·1.
– 10n^2 + 4n + 2 ≠ O(n).
– 3^n ≠ O(2^n). We shall prove this by contradiction. Suppose 3^n = O(2^n); then by definition, 3^n ≤ c·2^n. This implies that 1.5^n ≤ c where c is a positive constant, which is a contradiction.
– n! ≤ c·n^n for c = 1 and ∀n ≥ 2, i.e., n! = O(n^n). Moreover, from Stirling's approximation it follows that n! ≈ √(2πn)·n^n·e^(−n).
Next we present the notation which precisely captures the loose upper bounds.
o-notation
The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n^2 = O(n^2) is asymptotically tight, but the bound 2n = O(n^2) is not. We use o-notation (little-oh) to denote an upper bound that is not asymptotically tight. We formally define it as follows:
f(n) = o(g(n)) if for any positive constant c > 0, there exists a positive constant n₀ > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n₀.
Note that in the definition the inequality works for any positive constant c > 0. This is true because g(n) is a loose upper bound, and hence g(n) is polynomially larger than f(n) by a factor of n^ε for some ε > 0. Due to this n^ε, the contribution of c to the inequality is minimal, which is why the quantifier in o-notation is universal whereas in O-notation it is existential.
For example,
1. 2n = o(n^2), but 2n^2 ≠ o(n^2). Note that here n^2 is polynomially larger than 2n by n^ε, ε = 1.
2. 100n + 6 = o(n^1.2). Here n^1.2 is polynomially larger than 100n + 6 by n^ε, ε = 0.2. For any positive constant c, there exists n₀ such that ∀n ≥ n₀, 100n + 6 ≤ c·n^1.2.
3. 10n^2 + 4n + 2 = o(n^3). Here n^3 is polynomially larger than 10n^2 + 4n + 2 by n^ε, ε = 1.
4. 6·2^n + n^2 = o(3^n). Note that 3^n = 1.5^n × 2^n. So for any c > 0, 2^n ≤ c·3^n for sufficiently large n. The value of c is insignificant, as 1.5^n dominates any constant c > 0.
5. 3n + 3 = o(n^1.00001). Here n^1.00001 is polynomially larger than 3n + 3 by n^ε, ε = 0.00001.
6. n^3 + n + 5 = o(n^3.1). Here n^3.1 is polynomially larger than n^3 + n + 5 by n^ε, ε = 0.1.
Intuitively, in o-notation, the function f(n) becomes insignificant relative to g(n) as n approaches infinity; that is, lim_{n→∞} f(n)/g(n) = 0.
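A quick numerical illustration of this limit for example 1 above, f(n) = 2n and g(n) = n^2 (a small self-contained program written for these notes):

#include <stdio.h>

/* Prints f(n)/g(n) for f(n) = 2n, g(n) = n^2; the ratio tends to 0,
 * consistent with 2n = o(n^2). */
int main(void) {
    for (double n = 10; n <= 1e6; n *= 10)
        printf("n = %8.0f   f/g = %.6f\n", n, (2 * n) / (n * n));
    return 0;
}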
Having looked at upper bounds, in a similar way we shall now see asymptotic lower bounds.
2.4 Asymptotic lower bounds
Omega notation
The function f(n) = Ω(g(n)) if and only if there exist positive constants c, n₀ such that f(n) ≥ c·g(n), ∀n ≥ n₀. Omega can be used to denote all lower bounds of an algorithm. Omega notation also denotes the best-case analysis of an algorithm.
Examples:
1. 3n + 2
3n + 2 ≥ n, ∀n ≥ 1
⇒ 3n + 2 = Ω(n)
2. 10n^2 + 4n + 2 = Ω(n^2)
   10n^2 + 4n + 2 ≥ n^2, c = 1, ∀n ≥ 1
3. n^3 + n + 5 = Ω(n^3)
   n^3 + n + 5 ≥ n^3, c = 1, ∀n ≥ 0
4. 2n^2 + n log n + 1 = Ω(n^2)
   2n^2 + n log n + 1 ≥ 2·n^2, c = 2, ∀n ≥ 1
5. 6·2^n + n^2 = Ω(2^n) = Ω(n^2) = Ω(n) = Ω(1)
   6·2^n + n^2 ≥ 2^n, c = 1, ∀n ≥ 1
Of the above, Ω(2^n) is a tight lower bound, while all the others are loose lower bounds.
Remark:
– 3n^2 + 2 ≠ Ω(n^3). Reason: there does not exist a positive constant c such that 3n^2 + 2 ≥ c·n^3 for every n ≥ n₀, as we cannot bound n by a constant. That is, if on the contrary 3n^2 + 2 ≥ c·n^3 for all n ≥ 1, then dividing by n^2 gives c·n ≤ 3 + 2/n^2 ≤ 5, i.e., n ≤ 5/c, which bounds n by a constant, a contradiction.
– 3·2^n ≠ Ω(n!).
– 5 ≠ Ω(n).
ω-notation
We use ω-notation to denote a lower bound that is not asymptotically tight.
We define ω(g(n)) (little-omega) as follows:
f(n) = ω(g(n)) if for any positive constant c > 0, there exists a positive constant n₀ > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n₀.
For example, n^2/2 = ω(n), but n^2/2 ≠ ω(n^2).
The relation f(n) = ω(g(n)) implies that lim_{n→∞} f(n)/g(n) = ∞, if the limit exists. That is, f(n) becomes arbitrarily large relative to g(n) as n approaches infinity.
2.5 Asymptotic tight bound
Theta notation
The function f(n) = Θ(g(n)) if and only if there exist positive constants c₁, c₂, n₀ such that c₁·g(n) ≤ f(n) ≤ c₂·g(n), ∀n ≥ n₀. Theta can be used to denote tight bounds of an algorithm; that is, g(n) is a lower bound as well as an upper bound for f(n). Note that f(n) = Θ(g(n)) if and only if f(n) = Ω(g(n)) and f(n) = O(g(n)).
Examples
1. 3n + 10^10 = Θ(n)
   3n ≤ 3n + 10^10 ≤ 4n, ∀n ≥ 10^10
   Note that the first inequality captures 3n + 10^10 = Ω(n) and the latter captures 3n + 10^10 = O(n).
2. 10n^2 + 4n + 2 = Θ(n^2)
   10n^2 ≤ 10n^2 + 4n + 2 ≤ 20n^2, ∀n ≥ 1, c₁ = 10, c₂ = 20
3. 6·2^n + n^2 = Θ(2^n)
   6·2^n ≤ 6·2^n + n^2 ≤ 12·2^n, ∀n ≥ 1, c₁ = 6, c₂ = 12
4. 2n^2 + n log₂ n + 1 = Θ(n^2)
   2n^2 ≤ 2n^2 + n log₂ n + 1 ≤ 5·n^2, ∀n ≥ 2, c₁ = 2, c₂ = 5
5. n√n + n log₂ n + 2 = Θ(n√n)
   n√n ≤ n√n + n log₂ n + 2 ≤ 5·n√n, ∀n ≥ 2, c₁ = 1, c₂ = 5
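As a sanity check of example 2, here is a small program (written for these notes, not part of the original) verifying the sandwich 10·g(n) ≤ f(n) ≤ 20·g(n) for n = 1..1000:

#include <stdio.h>

/* Verifies c1*g(n) <= f(n) <= c2*g(n) for f(n) = 10n^2 + 4n + 2,
 * g(n) = n^2, c1 = 10, c2 = 20, over n = 1..1000. */
int main(void) {
    for (long n = 1; n <= 1000; n++) {
        long f = 10 * n * n + 4 * n + 2;
        long g = n * n;
        if (!(10 * g <= f && f <= 20 * g)) {
            printf("fails at n = %ld\n", n);
            return 1;
        }
    }
    printf("10*g(n) <= f(n) <= 20*g(n) holds for n = 1..1000\n");
    return 0;
}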
Remark:
1. Reflexivity: f(n) = Θ(f(n)). Similarly, f(n) = O(f(n)) and f(n) = Ω(f(n)).
2. Symmetry: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).
Proof:
Suppose g(n) = Θ(f(n)). By the definition of Θ, there exist positive constants c₁, c₂, n₀ such that c₁·f(n) ≤ g(n) ≤ c₂·f(n) for all n ≥ n₀
⇒ f(n) ≤ (1/c₁)·g(n) and f(n) ≥ (1/c₂)·g(n)
⇒ (1/c₂)·g(n) ≤ f(n) ≤ (1/c₁)·g(n)
By the definition of Θ, f(n) = Θ(g(n)); the other direction is identical.
This completes the proof of the symmetry property.
3. Transitivity: f(n) = O(g(n)) and g(n) = O(h(n)) imply f(n) = O(h(n)).
Proof:
By the definition of Big-Oh (O), there exist positive constants c₁, n₁ such that f(n) ≤ c₁·g(n) for all n ≥ n₁, and positive constants c₂, n₂ such that g(n) ≤ c₂·h(n) for all n ≥ n₂
⇒ f(n) ≤ c₁·c₂·h(n) for all n ≥ max(n₁, n₂)
By the definition, f(n) = O(h(n)). Note: Theta (Θ) and Omega (Ω) also satisfy the transitivity property.
4. Transpose Symmetry: f(n) = O(g(n)) if and only if g(n) = Ω(f(n)).
Proof:
Suppose f(n) = O(g(n)). Then there exist positive constants c, n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀
⇒ g(n) ≥ (1/c)·f(n)
By the definition of Omega (Ω), g(n) = Ω(f(n)).
Conversely, suppose g(n) = Ω(f(n)). Then g(n) ≥ c·f(n) for all n ≥ n₀
⇒ f(n) ≤ (1/c)·g(n)
By the definition of Big-Oh (O), f(n) = O(g(n)).
Therefore, transpose symmetry is proved.
Lemma 2. For two asymptotic functions f(n) and g(n), O(f(n)) + O(g(n)) = O(max(f(n), g(n))).
Proof. Without loss of generality, assume f(n) ≤ g(n) ⇒ max(f(n), g(n)) = g(n).
By the definition of O,
O(f(n)) + O(g(n)) ≤ c₁·f(n) + c₂·g(n)
≤ c₁·g(n) + c₂·g(n)
= (c₁ + c₂)·g(n)
≤ c·g(n), where c = c₁ + c₂
Therefore, O(f(n)) + O(g(n)) = O(g(n)) = O(max(f(n), g(n))).
Remarks:
1. If lim_{n→∞} f(n)/g(n) = c, c ∈ ℝ⁺, then f(n) = Θ(g(n)).
2. If lim_{n→∞} f(n)/g(n) ≤ c, c ∈ ℝ (c can be 0), then f(n) = O(g(n)).
3. If lim_{n→∞} f(n)/g(n) = 0, then f(n) = O(g(n)) and g(n) ≠ O(f(n)).
4. If lim_{n→∞} f(n)/g(n) ≥ c, c ∈ ℝ (c can be ∞), then f(n) = Ω(g(n)).
5. If lim_{n→∞} f(n)/g(n) = ∞, then f(n) = Ω(g(n)) and g(n) ≠ Ω(f(n)).
6. L'Hôpital's rule: if f(n) → ∞ and g(n) → ∞, then lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n).
Example: log n = O(√n), but √n ≠ O(log n).
Proof. Let f(n) = log n and g(n) = √n. Then
lim_{n→∞} f(n)/g(n) = lim_{n→∞} (log n)/√n.
Applying L'Hôpital's rule,
lim_{n→∞} f(n)/g(n) = lim_{n→∞} (1/n) / ((1/2)·n^(−1/2)) = lim_{n→∞} 2/√n = 0.
From Remark 3, f(n) = O(g(n)) ⇒ log n = O(√n).
Proof for √n ≠ O(log n): now let f(n) = √n and g(n) = log n. Then
lim_{n→∞} f(n)/g(n) = lim_{n→∞} √n/(log n).
Applying L'Hôpital's rule,
lim_{n→∞} f(n)/g(n) = lim_{n→∞} ((1/2)·n^(−1/2)) / (1/n) = lim_{n→∞} √n/2 = ∞.
From Remark 5, f(n) = Ω(g(n)) ⇒ √n = Ω(log n); since the ratio is unbounded, it also follows that √n ≠ O(log n).
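A numerical check of both limits (a small illustrative program written for these notes; the sampling points are arbitrary, and log() and sqrt() come from <math.h>):

#include <math.h>
#include <stdio.h>

/* Prints log(n)/sqrt(n) (tends to 0) and sqrt(n)/log(n) (tends to
 * infinity) for increasing n. Compile with -lm. */
int main(void) {
    for (double n = 1e2; n <= 1e12; n *= 100)
        printf("n = %.0e   log n / sqrt n = %8.5f   sqrt n / log n = %10.2f\n",
               n, log(n) / sqrt(n), sqrt(n) / log(n));
    return 0;
}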
Note: There are asymptotic functions which cannot be compared using any of the above notations. For example, f(n) = n and g(n) = n^(1 + sin n) are such that f(n) ≠ O(g(n)) and g(n) ≠ O(f(n)), since the exponent 1 + sin n oscillates between 0 and 2.
Acknowledgements: Lecture contents presented in this module and subsequent modules are based on the textbooks mentioned in the references; most importantly, the author has greatly benefited from lectures by algorithms experts affiliated with IIT Madras/IMSc: Prof. C. Pandu Rangan, Prof. N. S. Narayanaswamy, Prof. Venkatesh Raman, and Prof. Anurag Mittal. The author sincerely acknowledges all of them. Special thanks to the Teaching Assistants, Mr. Renjith P and Ms. Dhanalakshmi S, for their sincere and dedicated effort in making this scribe possible. The author has benefited a lot from teaching this course to senior and junior undergraduate students, who have also contributed to this scribe in many ways. The author sincerely thanks all of them.
References:
1. E. Horowitz, S. Sahni, S. Rajasekaran, Fundamentals of Computer Algorithms, Galgotia Publications.
2. T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein, Introduction to Algorithms, PHI.
3. Sara Baase, A. V. Gelder, Computer Algorithms, Pearson.