H16 Complexity


Department of Computer Science

Time complexity and sorting


Part 1: Iterative algorithms
Time complexity

- A correct algorithm is essential
- A fast algorithm is desirable
- But how do we determine whether an algorithm is fast (enough)?
  - Count the lines of code?
  - Measure it on my computer with an example?
  - Calculate the exact time it will take?
Strategy

- Instead of measuring the time itself, describe how the
  program behaves when the input becomes larger.
- For example, how much slower will it get if I make the input twice
  as large?
  - No slowdown?
  - Twice as slow?
  - Four times as slow?
  - Exponentially slower?
- (Time) complexity = characterize the performance T(n) as a
  function of the input size n
- Space complexity: do the same for memory (storage space)
- We're not really interested in exact numbers, but only in
  orders of magnitude
- Analysis for large values of n only
- Constants and lower-order terms are neglected
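
One way to probe the doubling question empirically (a minimal sketch of
my own: work is a hypothetical O(n) function and the sizes are arbitrary):

import time

def work(lst):                           # a hypothetical O(n) function
    total = 0
    for x in lst:
        total += x
    return total

for n in [100_000, 200_000, 400_000]:    # double the input size each step
    data = list(range(n))
    start = time.perf_counter()
    work(data)
    elapsed = time.perf_counter() - start
    print(n, elapsed)                    # for an O(n) function, the time roughly doubles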
Worst, average, best case

- Worst case
  - What is the maximum time taken by the algorithm?
    (often when the input fulfills some special conditions)
- Average case
  - How much time does the algorithm take when given random input?
    (i.e., without special assumptions or restrictions on the input)
- Best case
  - What is the minimum time taken by the algorithm?
    (often when the input fulfills some special conditions)
What is the input size n?

- If the input is a list, string, tuple, dictionary, set, …
  - The length of the input: n = len(lst)
- If the input is a number
  - The number itself: n = number
  - The number of decimal digits: n = log10(number)
  - The number of bits: n = log2(number)
- Multi-dimensional input is possible as well
  - E.g., an n×m matrix => T(n, m)
- Remember: always clearly specify what n is!
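
As a small illustration (my own sketch; the values are arbitrary),
Python can report each of these size measures directly:

lst = [3, 1, 4, 1, 5]
print(len(lst))             # n = length of the list: 5

number = 1000000
print(len(str(number)))     # n = number of decimal digits: 7
print(number.bit_length())  # n = number of bits: 20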


Measuring and plotting time

import time

measurements = []
for N in range(…):
    # … (create input of size N)
    start = time.perf_counter()
    # … (do something with the input)
    end = time.perf_counter()
    measurements.append((N, end - start))

import simpleplot as plt

plt.add_line(measurements)
plt.show_plot()
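
A concrete instantiation of this template (a sketch of my own: the
measured operation, list membership, and the size range are arbitrary
choices; simpleplot is assumed to be the course's plotting module with
the add_line/show_plot calls used above):

import time
import simpleplot as plt

measurements = []
for N in range(1000, 50000, 1000):
    lst = list(range(N))       # input of size N
    start = time.perf_counter()
    found = -1 in lst          # worst case: value absent, scans all N elements
    end = time.perf_counter()
    measurements.append((N, end - start))

plt.add_line(measurements)     # the curve should look roughly linear: O(n)
plt.show_plot()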
Big-O

We say that an algorithm has time complexity T(n) = O(F(n))
when F(n) is the 'least upper bound' for T(n)
for all 'large enough' values of n,
i.e., T(n) is at most some constant c > 0 times F(n): T(n) ≤ c·F(n)
for all n > n0, for some value of n0,
and F(n) is the smallest of all such bounds.

Remember this: we use Big-O to express an (asymptotic) least upper
bound on the time required by an algorithm.

Some examples:

T(n) = 25                        = O(1)
T(n) = 25 n^2 + 3000 n + 10      = O(n^2)
T(n) = 10^-50 · 2^n + 10^50 · n  = O(2^n)

Note: we always assume T(n) and F(n) are positive.
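
As a worked check of the second example (the choice c = 26, n0 = 3000
is just one valid pair, not the only one):

25 n^2 + 3000 n + 10 ≤ 26 n^2  whenever  n^2 ≥ 3000 n + 10,
which holds for all n ≥ 3001,

so T(n) ≤ 26·n^2 for all n > n0 = 3000, i.e., T(n) = O(n^2).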
Some typical bounds

From slow to fast:

  O(2^n)       (slow)
  O(n^3)
  O(n^2)
  O(n log n)
  O(n)
  O(log n)
  O(1)         (fast)
Calculating with Big O

If T(n) = O(F(n)), then k·T(n) = O(F(n)) (for any constant k)

Multiplying a time function by a constant does not affect
the time complexity.

Use: code executed a fixed number of times (or a variable
number of times, but not depending on n)

# Example
for i in range(3):
    # body: O(n)

# overall: O(n)
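
A runnable version of this pattern (a sketch; scan is a hypothetical
O(n) helper of my own):

def scan(lst):
    return sum(lst)            # O(n) body

def triple_scan(lst):
    total = 0
    for i in range(3):         # fixed count, independent of n
        total += scan(lst)     # 3 · O(n) is still O(n)
    return total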
Calculating with Big O

If T1(n) = O(F(n)) and T2(n) = O(G(n)), then
T1(n) + T2(n) = O( max( F(n), G(n) ) )

Adding two time functions together leads to the larger
time complexity of the two.

Use: two different pieces of code executed one after another

# Example
for i in range(n):
    # body: O(1)
for j in range(n*n):
    # body: O(1)

# for loop 1 (i): O(n)
# for loop 2 (j): O(n^2)
# for loops 1 and 2 together: O(n^2)
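
The same pattern as concrete, runnable code (a sketch; the loop bodies
just do O(1) work):

def sequential(n):
    count = 0
    for i in range(n):         # loop 1: O(n)
        count += 1
    for j in range(n * n):     # loop 2: O(n^2)
        count += 1
    return count               # O(n) + O(n^2) = O(n^2)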
Calculating with Big O

If T1(n) = O(F(n)) and T2(n) = O(G(n)), then
T1(n) · T2(n) = O( F(n) · G(n) )

Multiplying two time functions together leads to the
product of their time complexities.

Use: two different pieces of code where one is executed within
the other (e.g., a nested loop, function calls, …)

# Example 1:
for i in range(n // 2):
    # body: O(n^2)

# overall: O(n) · O(n^2) = O(n^3)

# Example 2:
def helper(n):
    # body: O(n)

def fun(n):
    for i in range(n * n):
        helper(n)

# overall for fun: O(n^2) · O(n) = O(n^3)
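
A runnable instance of Example 2 (a sketch; the helper body is my own
illustrative O(n) work):

def helper(n):
    return sum(range(n))       # body: O(n)

def fun(n):
    total = 0
    for i in range(n * n):     # executed n^2 times
        total += helper(n)     # each call: O(n)
    return total               # overall: O(n^2) · O(n) = O(n^3)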
A few simple examples

- Getting the length of a list/string/set/dictionary: O(1)
- Getting the i-th element of a list: O(1)
- Appending an element to the end of a list: O(1)
- Inserting an element in a list: O(n)
- Getting an element from a dictionary: O(1) !!!
- Checking whether a set contains an element: O(1) !!!
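
To see why those O(1) bounds earn the exclamation marks, compare them
against list membership (a sketch of my own; the size is arbitrary):

import time

n = 1_000_000
lst = list(range(n))
st = set(lst)

start = time.perf_counter()
found = -1 in lst              # O(n): scans the whole list
print("list:", time.perf_counter() - start)

start = time.perf_counter()
found = -1 in st               # O(1): a single hash lookup
print("set: ", time.perf_counter() - start)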


More simple examples

- Finding the maximum in a list of length n: O(n)
- Checking whether a list contains some value
  - Best case: O(1) – when the value is found at the first position
  - Worst case, average case: O(n)
  - On average, still twice as fast as finding the maximum!
    - But you don't see that in big-O notation
  - If the list is sorted, you can do better: binary search
    (see the sketch after this list)
- Generating all pairs of list elements
  - Worst case, average case, best case: O(n^2)
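
A minimal sketch of binary search on a sorted list (my own illustrative
implementation, not code from the course):

def binary_search(lst, value):
    # Return the index of value in the sorted list lst, or -1 if absent.
    lo, hi = 0, len(lst) - 1
    while lo <= hi:
        mid = (lo + hi) // 2   # halve the search range each step
        if lst[mid] == value:
            return mid         # best case: found in the middle, O(1)
        elif lst[mid] < value:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                  # worst case: O(log n) steps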
Even more examples

- Factorial
  - O(n), where n = the value to compute the factorial of
- Selection sort
  - O(n^2) best and worst case
- Insertion sort
  - O(n) best case (already sorted)
  - O(n^2) worst case
- Binary search
  - O(1) best case (element in the middle position)
  - O(log n) worst case (element not found)
- Bubble sort
  - Without optimization: O(n^2) best and worst case
  - With optimization: O(n) best case, O(n^2) worst case
- Merging 2 sorted lists
  - O(n+m), with n the length of the first list and m the length of
    the second (see the sketch below)
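
A sketch of that merge step (my own illustrative implementation): each
loop iteration consumes one element of a or b, so the total work is O(n+m).

def merge(a, b):
    # Merge two sorted lists into one sorted list in O(n+m) time.
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:       # take the smaller front element
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    result.extend(a[i:])       # at most one of these is non-empty
    result.extend(b[j:])
    return result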
