Ch-1: Algorithm Analysis


Class 1 – Algorithm Analysis

In algorithm analysis, the rate of growth refers to how the running time of an
algorithm increases as the input size grows. It's a measure of the algorithm's
efficiency and scalability.

Key points to remember:

● Focus on the dominant term: the fastest-growing term determines the rate of growth,
so lower-order terms are ignored.

● Constant factors matter less: as n grows large, constant multipliers have little effect
compared to the rate of growth itself.

1 < log n < n < n log n < n² < n³ < ... < 2ⁿ < 3ⁿ < nⁿ
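For example, at n = 1,000: log n ≈ 10, n log n ≈ 10,000, n² = 10⁶, n³ = 10⁹, while 2ⁿ already exceeds 10³⁰⁰.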

Commonly used growth rates:

● Constant time: O(1)


● Logarithmic time: O(log n)
● Linear time: O(n)
● Quadratic time: O(n²)
● Exponential time: O(2^n)

Big O Notation (O) - Upper Bound
Definition: A function f(n) is said to be in O(g(n)) if there exist positive constants c and n₀ such
that
f(n) ≤ c * g(n) for all n ≥ n₀.
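For example, f(n) = 3n + 2 is in O(n): taking c = 4 and n₀ = 2 gives 3n + 2 ≤ 4n for all n ≥ 2.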

Big Omega Notation (Ω) - Lower Bound

Definition: A function f(n) is said to be in Ω(g(n)) if there exist positive constants c and n₀ such
that
f(n) ≥ c * g(n) for all n ≥ n₀.
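For example, f(n) = 2n² + 3n + 1 is in Ω(n²): taking c = 2 and n₀ = 1 gives 2n² + 3n + 1 ≥ 2n² for all n ≥ 1.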

Big Theta Notation (Θ) - Tight Bound


Definition: A function f(n) is said to be in Θ(g(n)) if f(n) is in O(g(n)) and f(n) is in Ω(g(n)).
● Meaning: f(n) grows at the same rate as g(n) as n approaches infinity.
● Example: If f(n) = 2n² + 3n + 1, then f(n) is in Θ(n²).

Key points:

● Big O notation describes an upper bound on a function's growth; it is commonly used for
worst-case analysis.
● Big Omega notation describes a lower bound on a function's growth; it is commonly used
for best-case analysis.
● Big Theta notation describes a tight bound: the function grows at the same rate as g(n),
from above and below.

Guidelines for Asymptotic Analysis


1) Loops: The running time of a loop is, at most, the running time of the statements

inside the loop (including tests) multiplied by the number of iterations.
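A minimal Java sketch (the variables n and count are illustrative):

for (int i = 0; i < n; i++) {
    count++; // constant work per iteration, n iterations: O(n)
}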

2) Nested loops: Analyze from the inside out. Total running time is the product of

the sizes of all the loops.
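A minimal Java sketch of two nested loops:

for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        count++; // constant work, executed n * n times: O(n²)
    }
}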

3) Consecutive statements: Add the time complexities of each statement.

total = n + n² = O(n²)
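A minimal Java sketch of consecutive statements matching this total:

for (int i = 0; i < n; i++) {
    count++;                 // first statement: O(n)
}
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        count++;             // second statement: O(n²)
    }
}
// total = n + n² = O(n²): the dominant term wins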

4) If-then-else statements: Worst-case running time is the time of the test, plus
the larger of the then part and the else part.

// test: constant
if (length() == 0) {
    return false; // then part: constant
}
else { // else part: (c + c) * n
    for (int n = 0; n < length(); n++) {
        // another if: c + c
        if (!list[n].equals(otherList.list[n]))
            return false; // constant
    }
}

Total time = c₀ + c₁ + (c₂ + c₃) * n = O(n)

5) Logarithmic complexity: An algorithm is O(log n) if it takes constant time to
cut the problem size by a constant fraction (usually ½). As an example, consider the
following loop:

for (int i = 1; i <= n; i = i * 2) {
    // i doubles each pass, so the body executes about log₂(n) times
}

Examples

total = (n/2) * (3n/2) * log(n) = O(n² log n)

total = (n/2) * log(n) * log(n) = O(n log² n)

total = n * c = O(n)
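The loop code behind these totals is not reproduced here; as an illustration, the following
Java sketch is one loop structure consistent with the second total:

for (int i = 0; i < n; i += 2) {            // n/2 iterations
    for (int j = 1; j <= n; j *= 2) {       // about log₂(n) iterations
        for (int k = 1; k <= n; k *= 2) {   // about log₂(n) iterations
            count++;                        // constant work
        }
    }
}
// total = (n/2) * log(n) * log(n) = O(n log² n)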

Find a key in a list:


● Linear Search - O(n)
● Binary Search - O(log n)
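A minimal iterative binary search sketch in Java (assumes a sorted array; the method name
and signature are illustrative):

static int binarySearch(int[] a, int key) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;        // midpoint, written to avoid overflow
        if (a[mid] == key) return mid;       // found the key
        else if (a[mid] < key) lo = mid + 1; // discard the left half
        else hi = mid - 1;                   // discard the right half
    }
    return -1; // not found; the loop halves the range each pass: O(log n)
}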

Case study 2 - Prefix Averages

Given an array X, compute the array A in which A[i] is the average of X[0], ..., X[i].

Algorithm: quadratic solution

Overall time complexity:


● The total number of operations is the sum of the iterations in the inner loop for
all outer loop iterations.
● This can be expressed as: 1 + 2 + 3 + ... + n = n(n+1)/2
● Using Big O notation, this simplifies to O(n²).
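A Java sketch of the quadratic solution (the method name prefixAverages is illustrative):

static double[] prefixAverages(double[] x) {
    int n = x.length;
    double[] a = new double[n];
    for (int i = 0; i < n; i++) {
        double sum = 0;
        for (int j = 0; j <= i; j++) {  // recomputes the prefix sum from scratch
            sum += x[j];
        }
        a[i] = sum / (i + 1);
    }
    return a; // inner loop runs 1 + 2 + ... + n times: O(n²)
}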

Algorithm: linear solution

Maintaining a running prefix sum lets each average be computed in O(1), so the whole
computation takes O(n); a sketch follows.
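A Java sketch (the method name prefixAveragesLinear is illustrative):

static double[] prefixAveragesLinear(double[] x) {
    int n = x.length;
    double[] a = new double[n];
    double sum = 0;
    for (int i = 0; i < n; i++) {
        sum += x[i];          // extend the running prefix sum: O(1)
        a[i] = sum / (i + 1);
    }
    return a; // single pass over the array: O(n)
}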

MCSS (Maximum Contiguous Subsequence Sum)

Given an array of integers, find the maximum sum taken over all contiguous subsequences.

● Cubic solution: try every pair of endpoints (i, j) and recompute each subsequence sum
from scratch: O(n³).
● Quadratic solution: for each starting index i, extend the subsequence one element at a
time, reusing the running sum: O(n²).

Divide and conquer


Base case: If the array has only one element, its sum is the maximum contiguous subsequence sum.
Divide: Divide the array into two halves.
Conquer: Recursively find the maximum contiguous subsequence sums in the left and right halves.
Combine:

● Find the maximum crossing sum: This is the sum of a subsequence that starts in the left half and
ends in the right half. Iterate from the middle towards the left and right ends, keeping track of the
maximum sum seen so far.
● Return the maximum of the left sum, right sum, and crossing sum.

Time complexity:

The algorithm has a time complexity of O(n log n). This is because the array is divided in half at each
recursion level, and the combining step takes linear time.
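A Java sketch of the divide-and-conquer approach described above (the method name
maxSubSum is illustrative; call it as maxSubSum(a, 0, a.length - 1)):

static int maxSubSum(int[] a, int left, int right) {
    if (left == right) return a[left];              // base case: one element
    int center = (left + right) / 2;
    int maxLeft = maxSubSum(a, left, center);       // conquer: left half
    int maxRight = maxSubSum(a, center + 1, right); // conquer: right half

    // Combine: best sum of a subsequence crossing the middle.
    // Walk from the middle toward each end, tracking the best sum seen so far.
    int leftBorder = Integer.MIN_VALUE;
    int sum = 0;
    for (int i = center; i >= left; i--) {
        sum += a[i];
        leftBorder = Math.max(leftBorder, sum);
    }
    int rightBorder = Integer.MIN_VALUE;
    sum = 0;
    for (int i = center + 1; i <= right; i++) {
        sum += a[i];
        rightBorder = Math.max(rightBorder, sum);
    }
    int crossing = leftBorder + rightBorder;

    return Math.max(Math.max(maxLeft, maxRight), crossing);
}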

Linear time
Tricky parts of this algorithm are:

▪ No MCSS will start or end with a negative number


▪ We only find the value of the MCSS
▪ But if we need the actual subsequence, we'll need to resort to at least divide and
conquer
Sample Questions
Exercise Questions
5. Find the worst-case running time in O-notation for the algorithms linearMCCS
and quadraticMCCS. Show all steps used in the calculations. What about
Big-O and Big-Omega?

ans:

The worst-case running time for the linearMCCS algorithm is O(n), while the
worst-case running time for the quadraticMCCS algorithm is O(n^2).

LinearMCCS (Kadane's Algorithm):

Kadane's algorithm is a linear-time algorithm for finding the maximum contiguous
subarray sum. It works by keeping track of the maximum sum ending at each index.
Its running time is the same on every input, since the array is always scanned exactly
once. Note that the variant below resets the running sum to 0, so on an all-negative
array it returns 0, the sum of the empty subsequence.

Steps:

1. Initialize current_sum and max_sum to 0.


2. Iterate through the array:
○ If current_sum + arr[i] > 0, update current_sum to current_sum + arr[i].
○ Otherwise, set current_sum to 0.
○ Update max_sum to the maximum of max_sum and current_sum.

Time complexity:

The algorithm iterates through the array once, performing constant-time operations
at each step. Therefore, the worst-case running time is O(n).
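A Java sketch of this variant (the method name linearMCCS is taken from the exercise):

static int linearMCCS(int[] arr) {
    int currentSum = 0, maxSum = 0;
    for (int i = 0; i < arr.length; i++) {
        currentSum += arr[i];            // extend the running subsequence
        if (currentSum < 0) {
            currentSum = 0;              // a negative prefix can never help: drop it
        }
        maxSum = Math.max(maxSum, currentSum); // best sum seen so far
    }
    return maxSum; // one pass, constant work per element: O(n)
}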

QuadraticMCCS (Brute Force):

The brute force algorithm for finding the maximum contiguous subarray sum checks
all possible subarrays and calculates their sums. It performs the same amount of work
on every input, since all O(n^2) subarrays are always examined.

Steps:

1. Iterate through the array:


○ For each starting index i:
■ Extend the subarray one element at a time from index i to the end of
the array, maintaining a running sum.
■ After adding the element at index j, update max_sum if the running
sum is greater than the current max_sum.

Time complexity:

The outer loop iterates n times, and the inner loop iterates at most n times for each
outer loop iteration. Therefore, the total number of iterations is O(n^2).
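A Java sketch (the method name quadraticMCCS is taken from the exercise; as with the
linear variant, the empty subsequence counts as 0):

static int quadraticMCCS(int[] arr) {
    int maxSum = 0;
    for (int i = 0; i < arr.length; i++) {
        int sum = 0;
        for (int j = i; j < arr.length; j++) {
            sum += arr[j];                  // extend the running sum: O(1)
            maxSum = Math.max(maxSum, sum); // best subarray starting at i, ending at j
        }
    }
    return maxSum; // n + (n-1) + ... + 1 inner iterations: O(n²)
}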

Big-O and Big-Omega:

● Big-O: The worst-case running times can be expressed in big-O notation as
O(n) for linearMCCS and O(n^2) for quadraticMCCS.
● Big-Omega: The running times are also Ω(n) and Ω(n^2), respectively:
linearMCCS must examine every element, and quadraticMCCS always runs its
nested loops in full. The bounds are therefore tight: Θ(n) and Θ(n^2).
Example:

Consider the following sorting algorithms:

● Bubble sort: O(n²)


● Merge sort: O(n log n)
● Quick sort: O(n log n) (average case)

The worst-case time complexity of an algorithm that halves the input in each iteration
and continues with only one half is O(log n).

This is because each iteration halves the size of the array, so the number of iterations
required to reduce the array size to 1 is approximately log base 2 of n (where n is the
initial size of the array). The time complexity therefore grows logarithmically with the
input size.
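For example, with n = 1,024, repeated halving reaches size 1 after log₂(1024) = 10 iterations.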

Binary search is the classic example of this pattern: the array is divided in half in each
iteration, only one half is searched further, and a constant amount of work is done per
iteration, giving O(log n).

Merge sort and quick sort also divide the array into two parts, but they recurse on both
parts and do linear work at each level of recursion, so their running times are
O(n log n) (always for merge sort; on average for quick sort, with an O(n²) worst case),
not O(log n). Halving the input yields logarithmic time only when just one of the two
parts is processed further.
