
Time Complexity

UC Berkeley
Fall 2004, E77
http://jagger.me.berkeley.edu/~pack/e77

Copyright 2005, Andy Packard. This work is licensed under the Creative Commons Attribution-ShareAlike
License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/2.0/ or send a letter to
Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
Time complexity of algorithms
Dependence of
– the time it takes to solve a problem
– on the problem dimension/size
Examples:
– Sorting a list of length n
– Searching a list of length n
– Multiplying an n×n matrix by an n×1 vector
Time to solve problem might depend on data
– Average-case time
– Best-case time
• data is well suited for algorithm (can’t be counted on)
– Worst-case time
• data is such that algorithm performs poorly (time-wise)
Worst-case time gives an upper bound on how much time will be
needed to solve any instance of the problem (see the linear-search sketch below).
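
To make the best-case/worst-case distinction concrete, here is a minimal linear-search sketch (an illustration added here, not part of the original slides): for a list of length n, the best case is the target sitting in position 1 (one comparison), and the worst case is the target being absent (n comparisons).

function idx = linsearch(list, target)
% Linear search, used only to illustrate best/worst-case behavior.
% Best case: target == list(1), one comparison.
% Worst case: target not in list, n comparisons.
idx = 0;                         % 0 means "not found"
for k = 1:length(list)
    if list(k) == target         % one comparison per element examined
        idx = k;
        return
    end
end
end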
Example: N-by-N matrix, N-by-1 vector, multiply

Y = zeros(N,1);                      % initialize space, c1·N
for i = 1:N                          % initialize "for" loop, c2·N; body runs N times
    Y(i) = 0.0;                      % scalar assignment, c3
    for j = 1:N                      % initialize "for" loop, c2·N; body runs N times
        Y(i) = Y(i) + A(i,j)*x(j);   % 3 accesses, 1 add, 1 multiply, c4
    end                              % end of loop, return/exit, c5
end                                  % end of loop, return/exit, c5

Total = c1·N + c2·N + N·(c3 + c2·N + N·(c4 + c5) + c5)
      = (c2 + c4 + c5)·N² + (c1 + c2 + c3 + c5)·N
      = c6·N² + c7·N
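
As a sanity check on the count above (a sketch added here, not from the original slides), one can instrument the loops with a counter and confirm that the innermost statement runs exactly N² times:

N = 50;                              % any problem size will do
A = rand(N);  x = rand(N,1);
Y = zeros(N,1);
innerCount = 0;                      % counts executions of the innermost statement
for i = 1:N
    Y(i) = 0.0;
    for j = 1:N
        Y(i) = Y(i) + A(i,j)*x(j);
        innerCount = innerCount + 1;
    end
end
fprintf('inner statement ran %d times; N^2 = %d\n', innerCount, N^2);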
Asymptotic Time complexity of algorithms
Dependence of
– the time it takes to solve a problem
– on the problem dimension/size
but
– the formula may only be valid for “large” problems

So, we keep track of “growth rates” of the computational
workload as a function of problem dimension
Order, called “big O” notation
Given two functions, f and g, say that “f is of order g” if
– there is a constant c, and
– a value x0
such that
f(x) ≤ c·g(x) for all x > x0

So, apart from a fixed multiplicative constant, the function g is an
– upper bound on the function f
– valid for large values of its argument.
Notation: write f = O(g) to mean “f is of order g”.

Sometimes write f(x) = O(g(x)) to remind us how the arguments are labeled.
Not equality, but “belongs to a class”

Recall that f(n) = O(g(n)) means that
– there is a constant c, and
– a value n0
such that f(n) ≤ c·g(n) for all n > n0

The = sign does not mean equality!
It means that f is an element of the set of functions which are
eventually bounded by (different) constant multiples of g.

Therefore, it is correct/ok to write
3n = O(n),  3n = O(n²),  3n = O(n³·log n)
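
A quick numerical spot-check of these three statements (a sketch added for illustration, not from the slides): with f(n) = 3n, the constant c = 3 and n0 = 1 work for each bound on a sampled range. Passing such a check is evidence, not a proof.

f  = @(n) 3*n;
gs = {@(n) n, @(n) n.^2, @(n) n.^3.*log(n)};   % the three candidate g's
n  = 2:1000;                                   % sample n > n0 = 1
for k = 1:numel(gs)
    ok = all(f(n) <= 3*gs{k}(n));              % f(n) <= c*g(n), with c = 3
    fprintf('bound %d holds on the sample: %d\n', k, ok);
end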
Big O: Examples
Example: f(n) = 31
         g(n) = 1

Note: for all n > 0, 31 ≤ 31·1, i.e., f(n) ≤ 31·g(n)

For all n, f is bounded above by 31g

Write f(n) = O(g(n))
or
31 = O(1)
Big O: Examples
Example: f(n) = 4n² + 7n + 12
         g(n) = n²

Note: for n > 8, 4n² + 7n + 12 < 5n², i.e., f(n) ≤ 5·g(n)

For large enough n, f is bounded above by 5g

Write f(n) = O(g(n))
or f = O(g)
or 4n² + 7n + 12 = O(n²)

[Figure: f(n) = 4n² + 7n + 12 plotted together with 5n² for 0 ≤ n ≤ 15]
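
A short sketch that reproduces the plot (assuming the figure showed f and 5g over 0 ≤ n ≤ 15, as the axis range suggests):

n  = 0:0.1:15;
f  = 4*n.^2 + 7*n + 12;
g5 = 5*n.^2;                         % 5*g(n)
plot(n, f, 'b-', n, g5, 'r--');      % 5n^2 overtakes f(n) a little above n = 8
legend('f(n) = 4n^2 + 7n + 12', '5 n^2', 'Location', 'northwest');
xlabel('n');  ylabel('value');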
Big O: Relationships
Suppose f1 and f2 satisfy: there is a value n0 with
f1(n) ≤ f2(n) for all n > n0

Then f1(n) + f2(n) ≤ 2·f2(n) for all n > n0

Hence f1 + f2 = O(f2)

Example 3: 27·log(n) + 19 + n = O(n)   (here f1(n) = 27·log(n) + 19 and f2(n) = n)

Generalization: f1 = O(f2) ⇒ f1 + f2 = O(f2)
Big O: Relationships
Suppose positive functions f1, f2, g1, g2 satisfy
f1 = O(g1) and f2 = O(g2)
Then f1·f2 = O(g1·g2)

Why? There exist c1, c2, n1, n2 such that
fi(n) ≤ ci·gi(n) for all n > ni

Take c0 = c1·c2 and n0 = max(n1, n2). Multiply to give
f1(n)·f2(n) ≤ c0·g1(n)·g2(n) for all n > n0

Example:
(12n² + 2·log n)·(27·log(n) + 19 + n) = O(n³)
(here f1(n) = 12n² + 2·log n and f2(n) = 27·log(n) + 19 + n)
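
A numerical illustration of this example (a sketch, not from the slides): the ratio f1(n)·f2(n)/n³ stays bounded as n grows, consistent with f1·f2 = O(n³).

n  = [10; 100; 1000; 1e4; 1e5];
f1 = 12*n.^2 + 2*log(n);
f2 = 27*log(n) + 19 + n;
disp([n, (f1.*f2)./n.^3])            % second column stays bounded (it approaches 12)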
Asymptotic: Relationships
Obviously, for any positive function g

g = O( g )

Let β be a positive constant. Then β ⋅ g = O( g ) as well.

Example 4: 13n³ = O(n³)

Message: bounding of growth rate. If n doubles, then the
bound grows by a factor of 8.

Example 5: (log n)/512 = O(log n)
Example: N-by-N matrix, N-by-1 vector, multiply
Y = zeros(N,1);                      % initialize space, c1·N
for i = 1:N                          % initialize "for" loop, c2·N; body runs N times
    Y(i) = 0.0;                      % scalar assignment, c3
    for j = 1:N                      % initialize "for" loop, c2·N; body runs N times
        Y(i) = Y(i) + A(i,j)*x(j);   % 3 accesses, 1 add, 1 multiply, c4
    end                              % end of loop, return/exit, c5
end                                  % end of loop, return/exit, c5

Total = c6·N² + c7·N


• Problem size determines (is, in fact) N
• Processor speed, processor and language
  architecture, i.e., technology, affect c6 and c7
Hence, “this algorithm for matrix-vector multiply has O(N²)
complexity.”
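
A timing sketch (illustration only, not from the slides; actual times depend on the machine and MATLAB version): doubling N should roughly quadruple the runtime of the loop-based multiply, consistent with O(N²).

for N = [500 1000 2000]
    A = rand(N);  x = rand(N,1);  Y = zeros(N,1);
    tic
    for i = 1:N
        Y(i) = 0.0;
        for j = 1:N
            Y(i) = Y(i) + A(i,j)*x(j);
        end
    end
    fprintf('N = %5d:  %8.4f seconds\n', N, toc);
end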
Time complexity of familiar tasks

Task                                                         Growth rate
Matrix/vector multiply                                       O(N²)
Getting a specific element from a list                       O(1)
Dividing a list in half, dividing one half in half, etc.     O(log₂ N)
Binary search                                                O(log₂ N)
Scanning (brute-force search) a list                         O(N)
Nested for loops (k levels)                                  O(N^k)
MergeSort                                                    O(N log₂ N)
BubbleSort                                                   O(N²)
Generate all subsets of a set of data                        O(2^N)
Generate all permutations of a set of data                   O(N!)
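
As one concrete entry from the table (a sketch added here, not part of the original slides): binary search on a sorted vector halves the remaining range each pass, so it examines O(log₂ N) elements.

function idx = binsearch(v, target)
% Binary search on a sorted vector v; returns 0 if target is not present.
lo = 1;  hi = length(v);  idx = 0;
while lo <= hi
    mid = floor((lo + hi)/2);        % halve the remaining range each pass
    if v(mid) == target
        idx = mid;
        return
    elseif v(mid) < target
        lo = mid + 1;
    else
        hi = mid - 1;
    end
end
end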
