DAA Notes


Design and Analysis of

Algorithms
IT3202 Students Course Note

Prepared by Asmelash Girmay


Department of Information Technology
Mekelle Institute of Technology
Mekelle University
Mekelle

March 15, 2021


Chapter 1

Analysis of Algorithm

Algorithm: a computational procedure consisting of a finite set of instructions by which, given an
input from some set of possible inputs, we can obtain an output through a systematic execution
of the instructions.
An algorithm is a sequence of unambiguous instructions
for solving a problem, i.e., for obtaining a required output
for any legitimate input in a finite amount of time.
An algorithm can be implemented in more than one
programming language.
A problem can be solved in more than one way. Hence,
many solution algorithms can be derived for a given problem.
Given a computational problem, an algorithm describes a specific computational procedure for achieving
that input/output relationship.
Example of algorithms:
• Sorting – bubble sort, insertion sort, etc.
• Internet algorithms – routing algorithms, searching algorithms
• Security algorithms – public key cryptography
• Optimization algorithms – resources allocation
Expressing algorithms:
• flowchart
• pseudo code
How do we choose between the different sorting algorithms?
• We need to analyze them and choose the one with a better performance.
Analysis of Algorithms
Process of defining the performance of a program or algorithm
• Performance: amount of computer memory and time needed to run a program of given algorithm
• There are two methods to determine the performance of a program
◦ A Priori Analysis − This is a theoretical analysis of an algorithm, performed before
implementation. Efficiency is measured by assuming that all other factors, for
example processor speed, are constant and have no effect on the implementation.
◦ A Posteriori Analysis − This is an empirical analysis of an algorithm, performed after
implementation. The selected algorithm is implemented in a programming language and
executed on a target machine. In this analysis, actual statistics like running time
and space required are collected.
Characteristics of an Algorithm
• Unambiguous – Algorithm should be clear and unambiguous. Each of its steps (or phases), and
their inputs/outputs should be clear and must lead to only one meaning.
• Input − An algorithm should have 0 or more well-defined inputs.
• Output − An algorithm should have 1 or more well-defined outputs, and should match the
desired output.
• Finiteness − Algorithms must terminate after a finite number of steps.
• Feasibility − Should be feasible with the available resources.
• Independent − An algorithm should have step-by-step directions, which should be independent
of any programming code.
Algorithm Complexity
Suppose X is an algorithm and n is the size of input data, the time and space used by the algorithm X
are the two main factors, which decide the efficiency of X.
• Time Factor − Time is measured by counting the number of key operations such as
comparisons in the algorithm.
• Space Factor − Space is measured by counting the maximum memory space required by the
algorithm.
The complexity of an algorithm f(n) gives the running time and/or the storage space required by the
algorithm in terms of n as the size of input data.
Space complexity
Space complexity of an algorithm represents the amount of memory space required by the algorithm in
its life cycle. The space required by an algorithm is equal to the sum of the following two components
• A fixed part that is a space required to store certain data and variables, that are independent of
the size of the problem. For example, simple variables and constants used, program size, etc.
• A variable part is a space required by variables, whose size depends on the size of the problem.
For example, dynamic memory allocation, recursion stack space, etc.
Space complexity S(P) of any algorithm P is S(P) = C + S_P(I), where C is the fixed part and S_P(I) is
the variable part of the algorithm, which depends on instance characteristic I. Example:
with three variables, A, B, C, and one constant, 10,
S(P) = C + S_P(I) = 1 + 3 = 4.
The actual space depends on the data types of the given variables and
constants, and is multiplied by the corresponding type sizes.
Space complexity of a program – the amount of memory it needs to run to completion. A problem may
have several possible solutions with differing space requirements.
More generically, space needed by a program has the following components
a. Instruction space – space needed to store the compiled version of the program's instructions
b. Data space – space needed to store all constant and variable values. Two components:
• space needed by constants and simple variables
• space needed by component variables such as arrays, structures, and dynamically allocated
memory
c. Environment stack space – the stack is used to store information needed to resume execution of
partially completed functions.
Time Complexity
Time complexity of an algorithm represents the amount of time required by the algorithm to run to
completion. Time requirements can be defined as a numerical function T(n), where T(n) can be
measured as the number of steps, provided each step consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the total computational time is
T(n) = c ∗ n, where c is the time taken for the addition of two bits. Here, we observe that T(n) grows
linearly as the input size increases.
ASYMPTOTIC ANALYSIS
Asymptotic analysis of an algorithm refers to defining the mathematical framing of its run-time
performance. Using asymptotic analysis, we can very well conclude the best case, average case, and
worst case scenario of an algorithm.
Usually, the time required by an algorithm falls under three types
• Best Case − Minimum time required for program execution.
• Average Case − Average time required for program execution.
• Worst Case − Maximum time required for program execution.
Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the running time complexity of an
algorithm.
• Ο Notation
• Ω Notation
• θ Notation
Big Oh Notation, Ο
The notation Ο(n) is the formal way to express the upper bound of an
algorithm's running time. It measures the worst-case time complexity,
i.e., the longest amount of time an algorithm can possibly take to
complete.

Omega Notation, Ω
The notation Ω(n) is the formal way to express the lower bound of an
algorithm's running time. It measures the best-case time complexity, i.e., the
minimum amount of time an algorithm can possibly take to complete.

Theta Notation, θ
The notation θ(n) is the formal way to express both the lower bound and
the upper bound of an algorithm's running time.

Algorithm Design Techniques/Strategies


• Brute force
• Divide and conquer
• Decrease and conquer
• Transform and conquer
• Greedy approach
• Dynamic programming
• Backtracking and branch-and-bound
• Space and time trade-offs
• Iterative improvement
Chapter 2

Divide and Conquer

In the divide and conquer approach, the problem at hand is divided into smaller sub-problems, and then
each sub-problem is solved independently. If we keep dividing the sub-problems into even smaller
sub-problems, we eventually reach a stage where no more division is possible. Those "atomic", smallest
possible sub-problems are solved. The solutions of all the sub-problems are finally merged to obtain
the solution of the original problem.
The name is inspired by emperors and colonizers.
It has three steps:
1. Divide the problem into smaller problems
2. Conquer by solving problems in (1)
3. Combine the results in (2) together.
Example: Binary Search – search for x in a sorted array A.

Other examples: Merge sort, Quick sort, matrix multiplication, selection, convex hulls.
Binary Search
Let T (n) denote the worst-case time to binary search in an array of length n.
Recurrence is T (n) = T (n/2) + O(1).
T (n) = O(log n).
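As a concrete illustration, here is a minimal iterative Python sketch of binary search (function name
and 0-based indexing are my own choices, not from the notes); each iteration halves the search range,
which is exactly the T(n) = T(n/2) + O(1) recurrence:

def binary_search(A, x):
    """Return an index of x in sorted array A, or -1 if x is absent."""
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # halve the range: T(n) = T(n/2) + O(1)
        if A[mid] == x:
            return mid
        elif A[mid] < x:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1

assert binary_search([1, 3, 5, 7, 9], 7) == 3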
Merge Sort
Sort an unordered array of numbers A.
Let T (n) denote the worst-case time to merge sort an array of
length n.
Recurrence is T (n) = 2T (n/2) + O(n).
T (n) = O(n log n).
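A compact Python sketch of merge sort (an assumed implementation, not reproduced from the notes):
the two recursive calls account for the 2T(n/2) term, and the merge loop for the O(n) term:

def merge_sort(A):
    """Sort a list; T(n) = 2T(n/2) + O(n) = O(n log n)."""
    if len(A) <= 1:
        return A                       # base case: already sorted
    mid = len(A) // 2
    left = merge_sort(A[:mid])         # divide: solve both halves
    right = merge_sort(A[mid:])
    merged, i, j = [], 0, 0            # combine: merge two sorted lists in O(n)
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

assert merge_sort([5, 2, 8, 1]) == [1, 2, 5, 8]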
[Figure: merge sort illustration]
Multiplying Numbers
We want to multiply two n-bit numbers. Cost is the number of elementary bit steps. The grade
school method has Θ(n²) cost:
n² multiplies, n²/2 additions, plus some carries.
Doesn’t hardware provide multiply? It is fast, optimized,
and free. So, why bother?
• True for numbers that fit in one computer word.
But what if the numbers are very large?
• Cryptography (encryption, digital signatures) uses
big number “keys,” typically 256 to 1024 bits
long!
• n² multiplication is too slow for such large numbers.
• Karatsuba’s (1962) divide-and-conquer scheme multiplies two n-bit numbers in O(n^1.59) steps.

Karatsuba’s Algorithm
Let X and Y be two n-bit numbers. Split each into high and low halves: X = a·2^(n/2) + b and
Y = c·2^(n/2) + d, where a, b, c, d are n/2-bit numbers (assume n = 2^k). Hence,
XY = (a·2^(n/2) + b)(c·2^(n/2) + d) = ac·2^n + (ad + bc)·2^(n/2) + bd
Example: X = 4729, Y = 1326 (splitting by decimal digits, so powers of 10 play the role of 2^(n/2))
• a = 47, b = 29 and c = 13, d = 26.
• ac = 47 × 13 = 611; ad = 47 × 26 = 1222; bc = 29 × 13 = 377; and bd = 29 × 26 = 754
• XY = ac·10^4 + (ad + bc)·10^2 + bd = 6110000 + 159900 + 754 = 6270654
This is D&C: solve 4 sub-problems, each of size n/2; then perform O(n) shifts and additions to combine
the terms multiplied by 2^n and 2^(n/2). We can write the recurrence as T(n) = 4T(n/2) + O(n). But this
solves to T(n) = O(n²)!
• XY = ac·2^n + (ad + bc)·2^(n/2) + bd.
• Note that (a − b)(c − d) = (ac + bd) − (ad + bc).
• Solve 3 sub-problems: ac, bd, (a − b)(c − d).
We can get all the terms needed for XY by addition and subtraction! The recurrence for this algorithm
is T(n) = 3T(n/2) + O(n), which solves to T(n) = O(n^(log₂ 3)) ≈ O(n^1.59).
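A short Python sketch of Karatsuba’s scheme, splitting numbers by decimal digits as in the worked
example above (helper and variable names are mine; a production version would split by bits):

def karatsuba(x, y):
    """Multiply with 3 half-size products: T(n) = 3T(n/2) + O(n)."""
    if x < 10 or y < 10:                  # base case: single digit
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    a, b = divmod(x, 10 ** m)             # x = a*10^m + b
    c, d = divmod(y, 10 ** m)             # y = c*10^m + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a-b)(c-d) = ac + bd - (ad+bc), so one extra product suffices
    ad_plus_bc = ac + bd - karatsuba(a - b, c - d)
    return ac * 10 ** (2 * m) + ad_plus_bc * 10 ** m + bd

assert karatsuba(4729, 1326) == 4729 * 1326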
The recursion solution: review
T(n) = 2T(n/2) + cn, with T(1) = 1.
By term expansion:
T(n) = 2T(n/2) + cn = 4T(n/4) + 2cn = · · · = 2^i · T(n/2^i) + i · cn
Set i = log₂ n and use T(1) = 1. We get T(n) = n + cn·log n = O(n log n).
The Tree View
T (n) = 2T (n/2) + cn, with T (1) = 1.
# leaves = n; # levels = log n.
Work per level is O(n), so total is O(n log n).

Solving By Induction
Recurrence: T(n) = 2T(n/2) + cn.
Base case: T(1) = 1.
Claim: T(n) = cn log n + cn.
T(n) = 2T(n/2) + cn
= 2(c(n/2) log(n/2) + c(n/2)) + cn
= cn(log n − 1) + cn + cn
= cn log n + cn
More Examples
T(n) = 4T(n/2) + cn, T(1) = 1.
• Expansion stops when n/2^i = 1, i.e., i = log n.
• The recurrence solves to T(n) = O(n²).
By term expansion:
T(n) = 4T(n/2) + cn = 4^i · T(n/2^i) + cn(1 + 2 + · · · + 2^(i−1)) = 4^i · T(n/2^i) + cn(2^i − 1)
The expansion terminates when 2^i = n, or i = log n.
• 4^i = 2^i × 2^i = n × n = n².
• T(n) = n² + cn(n − 1) ≤ n² + cn² = O(n²).
Exercise
1. T(n) = 2T(n/4) + √n, T(1) = 1.

Master Method
For recurrences of the form T(n) = aT(n/b) + f(n), with a ≥ 1 and b > 1:
• The number of children multiplies by a factor a at
each level.
• The number of leaves is a^(log_b n) = n^(log_b a).
• Verify by taking the logarithm of both sides.
• By the recursion tree, comparing f(n) against n^(log_b a) gives three cases:
1. If f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some ε > 0 (and a·f(n/b) ≤ c·f(n) for some c < 1), then
T(n) = Θ(f(n)).
Applying Master Method
• Merge sort: T(n) = 2T(n/2) + O(n) has n^(log₂ 2) = n = f(n), case 2, so T(n) = Θ(n log n).
• Karatsuba: T(n) = 3T(n/2) + O(n) has n^(log₂ 3) ≈ n^1.59 growing faster than f(n) = n, case 1,
so T(n) = Θ(n^(log₂ 3)).
Exercise
• Work out the analysis of matrix multiplication, Quick Sort, linear-time selection, and convex
hulls.
Chapter 3

Graph Search, Shortest Paths, and Data Structures

Data structures (heaps, balanced search trees, hash tables, bloom filters), graph primitives (applications
of breadth-first and depth-first search, connectivity, shortest paths), and their applications (ranging from
deduplication to social network analysis).

Please refer to your data structures course materials.


Chapter 4

Greedy Algorithms

An algorithm is designed to achieve an optimal solution for a given problem. In the greedy algorithm
approach, decisions are made from the given solution domain. Being greedy, the closest solution that
seems to provide an optimum solution is chosen.
Greedy algorithms try to find a localized optimum solution, which may eventually lead to a globally
optimized solution. However, in general greedy algorithms do not provide globally optimized
solutions.
Counting Coins
This problem is to count up to a desired value by choosing the fewest possible coins, and the greedy
approach forces the algorithm to pick the largest possible coin first. If we are provided coins of $1, 2, 5
and 10 and we are asked to count $18, then the greedy procedure will be −
1. Select one $10 coin; the remaining count is 8
2. Then select one $5 coin; the remaining count is 3
3. Then select one $2 coin; the remaining count is 1
4. Finally, selecting one $1 coin solves the problem
Though it seems to work fine (for this count we need to pick only 4 coins), if we slightly
change the problem, the same approach may not produce the optimum result. For
a currency system with coins of value 1, 7, and 10, counting coins for value 18 is still
absolutely optimum (10 + 7 + 1), but for a count like 15 it may use more coins than necessary: the
greedy approach picks 10 + 1 + 1 + 1 + 1 + 1, 6 coins in total, whereas the same amount could be
made with only 3 coins, 7 + 7 + 1. Hence, we may conclude that the greedy approach picks an
immediately optimized solution and may fail where global optimization is the major concern.
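A small Python sketch of this greedy procedure (function name is hypothetical), reproducing both runs
discussed above:

def greedy_coins(coins, amount):
    """Repeatedly take the largest coin that still fits."""
    picked = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            picked.append(c)
            amount -= c
    return picked

print(greedy_coins([1, 2, 5, 10], 18))  # [10, 5, 2, 1]: 4 coins, optimal
print(greedy_coins([1, 7, 10], 15))     # [10, 1, 1, 1, 1, 1]: 6 coins, but 7+7+1 needs only 3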
Examples: Most networking algorithms use the greedy approach. Here is a list of few of them −
• Travelling Salesman Problem
• Prim's Minimal Spanning Tree Algorithm
• Kruskal's Minimal Spanning Tree Algorithm
• Dijkstra's Shortest Path Algorithm
• Graph - Map Coloring
• Graph - Vertex Cover
• Knapsack Problem
• Job Scheduling Problem
Minimum Spanning Trees and Prim’s Algorithm
Spanning Trees: A sub-graph T of an undirected graph G = (V, E) is a spanning tree of G if it is a tree and
contains every vertex of G. Example:

Every connected graph has a spanning tree. A weighted graph is a graph in which each edge has a
weight (some real number). The weight of a graph is the sum of the weights of all its edges.

A Minimum Spanning Tree in an undirected connected weighted graph is a spanning tree of minimum
weight (among all spanning trees). Example: in the above graph, Tree 2 with w=71 is the MST.
The minimum spanning tree may not be unique. However, if the weights of all the edges are pairwise
distinct, it is indeed unique. Example:
Generic Algorithm for MST problem
Let A be a set of edges such that A ⊆ T, where T is an MST. An edge (u, v) is a safe edge for A if
A ∪ {(u, v)} is also a subset of some MST.
If at each step, we can find a safe edge (u, v), we can ’grow’ a MST. This leads to the following generic
approach:
Generic-MST(G, w)
  A = ∅
  while A does not form a spanning tree
    find an edge (u, v) that is safe for A
    add (u, v) to A
  return A
How can we find a safe edge?
We first give some definitions. Let G = (V, E) be a connected and undirected graph. We define:
Cut: A cut (S, V – S) of G is a partition of V.
Cross: An edge (u, v) ∈ E crosses the cut (S, V – S) if one of its endpoints is in S and the other is in
V – S.
Respect: A cut respects a set A of edges if no edge in A crosses the cut.
Light edge: An edge is a light edge crossing a cut if its weight is the minimum of any edge crossing the
cut.
Lemma
Let G = (V, E) be a connected, undirected graph with a real-valued weight function defined on E. Let A
be a subset of E that is included in some minimum spanning tree for G, let (S, V – S) be any cut of G
that respects A, and let (u, v) be a light edge crossing the cut (S, V – S). Then, edge (u, v) is safe for A.
It means that we can find a safe edge by:
1. First finding a cut that respects A,
2. Then finding the light edge crossing that cut.
That light edge is a safe edge.
Prim’s Algorithm
The generic algorithm gives us an idea of how to ’grow’ an MST. If you read the lemma and its proof
carefully, you will notice that the choice of a cut (and hence the corresponding light edge) in each
iteration is immaterial. We can select any cut (that respects the selected edges) and find the light edge
crossing that cut to proceed.
Prim’s algorithm makes a natural choice of cut in each iteration – it grows a single tree and adds
a light edge in each iteration.
Prim’s Algorithm : How to grow a tree
• Start by picking any vertex r to be the root of the tree.
• While the tree does not contain all vertices in the graph, find the shortest edge leaving the tree
and add it to the tree.
Running time is O((|V| + |E|) log |V|).
Step 0: Choose any element r; set S = {r} and A = {}. (Take r as the root of our spanning tree.)
Step 1: Find a lightest edge such that one endpoint is in S and the other is in V \ S. Add this edge to A
and its (other) endpoint to S.
Step 2: If V \ S = {}, then stop and output (minimum) spanning tree (S, A). Otherwise go to Step 1.
The idea: expand the current tree by adding the lightest (shortest) edge leaving it and its endpoint.
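A minimal Python sketch of these steps using a binary heap (the adjacency-list format, the
weight-first tuple order, and the function name are my own assumptions, not from the notes):

import heapq

def prim(graph, r):
    """graph: {u: [(weight, v), ...]} for an undirected graph.
    Returns the MST edge set A, grown from root r."""
    S = {r}                                # vertices already in the tree
    A = []                                 # safe edges chosen so far
    frontier = [(w, r, v) for w, v in graph[r]]
    heapq.heapify(frontier)
    while frontier:
        w, u, v = heapq.heappop(frontier)  # lightest edge leaving S
        if v in S:
            continue                       # both endpoints already inside S
        S.add(v)                           # Step 1: add the endpoint to S
        A.append((u, v))                   # ... and the edge to A
        for w2, x in graph[v]:
            if x not in S:
                heapq.heappush(frontier, (w2, v, x))
    return A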
Exercise:
Find MST.
Answer: A={{a,b},{b,d},{c,d},{c,f},{f,g},{f,e}}
Show the necessary steps.

Analysis of Prim’s Algorithm
With a binary-heap priority queue, each vertex is extracted once and each edge can trigger one heap
update, each costing O(log |V|), which gives the O((|V| + |E|) log |V|) bound stated above.
Scheduling Classes
The next example is slightly less trivial. Suppose you decide to drop out of computer science at the last
minute and change your major to Applied Chaos. The Applied Chaos department has all of its classes
on the same day every week, referred to as “Soberday" by the students (but interestingly, not by the
faculty). Every class has a different start time and a different ending time: AC 101 (‘Toilet Paper
Landscape Architecture’) starts at 10:27pm and ends at 11:51pm; AC 666 (‘Immanentizing the
Eschaton’) starts at 4:18pm and ends at 7:06pm, and so on. In the interests of graduating as quickly as
possible, you want to register for as many classes as you can. (Applied Chaos classes don’t require any
actual work.) The University’s registration computer won’t let you register for overlapping classes, and
no one in the department knows how to override this ‘feature’. Which classes should you take?
More formally, suppose you are given two arrays S[1 .. n] and F[1 .. n] listing the start and finish times
of each class. Your task is to choose the largest possible subset X ⊆ {1, 2, . . . , n} so that for any pair i,
j ∈ X, either S[i] > F[j] or S[j] > F[i]. We can illustrate the problem by drawing each class as a
rectangle whose left and right x-coordinates show the start and finish times. The goal is to find a largest
subset of rectangles that do not overlap vertically.

This problem has a fairly simple recursive solution, based on the observation that either you take class
1 or you don’t. Let B4 be the set of classes that end before class 1 starts, and let L8 be the set of classes
that start later than class 1 ends:
B4 = {i | 2 ≤ i ≤ n and F[i] < S[1]}      L8 = {i | 2 ≤ i ≤ n and S[i] > F[1]}
If class 1 is in the optimal schedule, then so are the optimal schedules for B4 and L8 , which we can
find recursively. If not, we can find the optimal schedule for {2, 3, . . . , n} recursively. So we should
try both choices and take whichever one gives the better schedule. Evaluating this recursive algorithm
from the bottom up gives us a dynamic programming algorithm that runs in O(n²) time. I won’t bother
to go through the details, because we can do better.
Intuitively, we’d like the first class to finish as early as possible, because that leaves us with the most
remaining classes. If this greedy strategy works, it suggests the following very simple algorithm. Scan
through the classes in order of finish time; whenever you encounter a class that doesn’t conflict with
your latest class so far, take it!

We can write the greedy algorithm somewhat more formally as follows. (Hopefully the first line is
understandable.)
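A Python sketch of that algorithm (0-based arrays assumed; the first line sorts the classes by finish
time, which is the step the parenthetical refers to):

def greedy_schedule(S, F):
    """Pick classes in order of finish time, skipping conflicts."""
    order = sorted(range(len(S)), key=lambda i: F[i])  # earliest finish first
    X = [order[0]]                 # always take the class that finishes first
    for i in order[1:]:
        if S[i] > F[X[-1]]:        # starts after our latest class ends
            X.append(i)
    return X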

This algorithm clearly runs in O(n log n) time. To prove that this algorithm actually gives us a maximal
conflict-free schedule, we use an exchange argument, similar to the one we used for tape sorting. We
are not claiming that the greedy schedule is the only maximal schedule; there could be others. (See the
figures on the previous page.) All we can claim is that at least one of the maximal schedules is the one
that the greedy algorithm produces.
Lemma: At least one maximal conflict-free schedule includes the class that finishes first.
Theorem: The greedy schedule is an optimal schedule.
Huffman Codes
A binary code assigns a string of 0s and 1s to each character in the alphabet. A binary code is prefix
free if no code is a prefix of any other. 7-bit ASCII and Unicode’s UTF-8 are both prefix-free binary
codes. Morse code is a binary code, but it is not prefix-free; for example, the code for S (· · ·) includes
the code for E (·) as a prefix. Any prefix-free binary code can be visualized as a binary tree with the
encoded characters stored at the leaves. The code word for any symbol is given by the path from the
root to the corresponding leaf; 0 for left, 1 for right. The length of a codeword for a symbol is the depth
of the corresponding leaf. (Note that the code tree is not a binary search tree. We don’t care at all about
the sorted order of symbols at the leaves; in fact, the symbols may not have a well-defined order!)
Suppose we want to encode messages in an n-character alphabet so that the encoded message is as
short as possible. Specifically, given an array of frequency counts f[1 .. n], we want to compute a prefix-
free binary code that minimizes the total encoded length of the message: Σ f[i] · depth(i).
Let x and y be the two least frequent characters (breaking ties between equally frequent characters
arbitrarily). There is an optimal code tree in which x and y are siblings.
Huffman codes are optimal prefix-free binary codes.
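A short Python sketch of Huffman’s greedy construction using a binary heap (representing each subtree
as a symbol-to-codeword map is my own choice, not from the notes):

import heapq

def huffman(freq):
    """freq: {symbol: count}. Returns {symbol: codeword} for an
    optimal prefix-free code by merging the two least frequent trees."""
    heap = [(f, i, {s: ''}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)                        # tiebreaker for equal weights
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)  # two least frequent subtrees...
        f2, _, right = heapq.heappop(heap) # ...become siblings, as in the lemma
        merged = {s: '0' + c for s, c in left.items()}         # 0 = left edge
        merged.update({s: '1' + c for s, c in right.items()})  # 1 = right edge
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5})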
Exercises
Chapter 5

Dynamic Programming

The dynamic programming approach is similar to divide and conquer in breaking the problem down into
smaller and yet smaller possible sub-problems. But unlike divide and conquer, these sub-problems are
not solved independently. Rather, the results of these smaller sub-problems are remembered and used
for similar or overlapping sub-problems.
Dynamic programming is used where we have problems, which can be divided into similar sub-
problems, so that their results can be re-used. Mostly, these algorithms are used for optimization.
Before solving the in-hand sub-problem, dynamic algorithm will try to examine the results of the
previously solved sub-problems. The solutions of sub-problems are combined in order to achieve the
best solution.
So we can say that −
• The problem should be able to be divided into smaller, overlapping sub-problems.
• An optimum solution can be achieved by using an optimum solution of smaller sub-problems.
• Dynamic algorithms use memoization.
Comparison
In contrast to greedy algorithms, where local optimization is addressed, dynamic algorithms are
motivated for an overall optimization of the problem.
In contrast to divide and conquer algorithms, where solutions are combined to achieve an overall
solution, dynamic algorithms use the output of a smaller sub-problem and then try to optimize a bigger
sub-problem. Dynamic algorithms use memoization to remember the output of already solved sub-
problems.
Example
The following computer problems can be solved using the dynamic programming approach −
• Fibonacci number series
• Knapsack problem
• Tower of Hanoi
• All pair shortest path by Floyd-Warshall
• Shortest path by Dijkstra
• Project scheduling
Dynamic programming can be used in both a top-down and a bottom-up manner. And of course, most of
the time, referring to the previous solution output is cheaper than recomputing it in terms of CPU cycles.
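For example, the Fibonacci series from the list below can be computed top-down with memoization or
bottom-up with a table (a minimal Python sketch, not from the notes):

from functools import lru_cache

@lru_cache(maxsize=None)             # memoization: each fib(i) is computed once
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)   # overlapping sub-problems are reused

def fib_bottom_up(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

assert fib(30) == fib_bottom_up(30) == 832040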
Computing a Binomial Coefficient
Computing a binomial coefficient is a standard example of applying dynamic programming to a non-
optimization problem. You may recall from your studies of elementary combinatorics that the binomial
coefficient, denoted C(n, k), counts the ways to choose k elements from an n-element set; it satisfies
C(n, k) = C(n − 1, k − 1) + C(n − 1, k) for 0 < k < n, with C(n, 0) = C(n, n) = 1.
Algorithm: Binomial(n, k)
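A bottom-up Python sketch based on that recurrence (the table layout and names are mine):

def binomial(n, k):
    """Fill the table row by row: C(i, j) = C(i-1, j-1) + C(i-1, j)."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                       # base cases
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

assert binomial(5, 2) == 10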

What is the time efficiency of this algorithm?


Knapsack
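The 0/1 knapsack problem, listed among the DP examples above: given item weights w, values v, and a
capacity W, choose a subset of items of maximum total value whose total weight does not exceed W. A
minimal bottom-up sketch assuming integer weights (this formulation is mine, not from the notes):

def knapsack(w, v, W):
    """best[c] = maximum value achievable with capacity c; O(nW) time."""
    best = [0] * (W + 1)
    for wi, vi in zip(w, v):
        for c in range(W, wi - 1, -1):   # descending: each item used at most once
            best[c] = max(best[c], best[c - wi] + vi)
    return best[W]

assert knapsack([1, 3, 4], [15, 20, 30], 4) == 35   # items of weight 1 and 3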
Edit Distance
The edit distance between two words is the minimum number of letter insertions, deletions, and
substitutions required to transform one word into another.
Example: to transform FOOD into MONEY:
FOOD → MOOD → MOND → MONE → MONEY
Cost = 4 = 3 (substitutions) + 1 (insertion)
Gap representation:
F O O _ D
M O N E Y
Optimal substructure property:
Suppose we have the gap representation for the shortest edit sequence of two strings. If we remove the
last column, the remaining columns must represent the shortest edit sequence for the remaining sub-
strings. Because if the sub-strings had a shorter edit sequence, we could just glue the last column back
on to get a shorter edit sequence for the original strings.
Given two strings A[1...m] and B[1...n], define
Edit(i, j) = the smallest number of operations (insertions, deletions, substitutions) needed to change
A[1...i] into B[1...j].

Edit(i, j) = min {
    Edit(i−1, j−1)        if A[i] = B[j],
    Edit(i−1, j) + 1,
    Edit(i, j−1) + 1,
    Edit(i−1, j−1) + 1
}

Deletion: the last entry in the bottom row is empty, hence Edit(i, j) ← 1 + Edit(i−1, j)
Insertion: the last entry in the top row is empty, hence Edit(i, j) ← 1 + Edit(i, j−1)
Substitution: both rows have characters in their last column, hence
Edit(i, j) ← Edit(i−1, j−1) + [A[i] ≠ B[j]], where the bracket contributes 0 if false and 1 if true.

Edit(i, j) = j,  if i = 0
           = i,  if j = 0
           = min {
               Edit(i−1, j) + 1,
               Edit(i, j−1) + 1,
               Edit(i−1, j−1) + 1  if A[i] ≠ B[j],
               Edit(i−1, j−1) + 0  if A[i] = B[j]
             }
The edit distance between the original strings A and B is Edit(m, n).
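A bottom-up Python sketch of this recurrence (0-based string indexing; the table E has
(m + 1) × (n + 1) entries):

def edit_distance(A, B):
    """Fill the Edit(i, j) table bottom-up; O(mn) time and space."""
    m, n = len(A), len(B)
    E = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        E[i][0] = i                      # delete all of A[1..i]
    for j in range(n + 1):
        E[0][j] = j                      # insert all of B[1..j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if A[i - 1] == B[j - 1] else 1
            E[i][j] = min(E[i - 1][j] + 1,        # deletion
                          E[i][j - 1] + 1,        # insertion
                          E[i - 1][j - 1] + sub)  # match / substitution
    return E[m][n]

assert edit_distance("food", "money") == 4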

Exercise:
1. Given A = "MIT" and B = "MU", find:
a) The three ways of replacing A with B.
b) m
c) n
d) Edit (m, n).
Chapter 6

Shortest Paths Revisited, and NP-Complete Problems

Shortest paths (Bellman-Ford, Floyd-Warshall, Johnson), NP-completeness (what it means for the
algorithm designer, and strategies for coping with computationally intractable problems: analysis of
heuristics, local search).

Please refer to the course materials provided and read more on the
aforementioned special topics.
