Data and File Structure


Algorithm: an algorithm is a finite set of instructions which, if followed, accomplishes a particular task.
Every algorithm should have the following properties:

 It must be correct. In other words, it must compute the desired function, converting each input to the correct output.
 It is composed of a series of concrete steps. Concrete means that the action described by that step is completely understood, and doable, by the person or machine that must perform the algorithm.
 There can be no ambiguity as to which step will be performed next.
 It must be composed of a finite number of steps.
 It must terminate. In other words, it may not go into an infinite loop.

The most important factor that affects the running time of an algorithm is the size of its input.

Common growth-rate functions

 1 (constant): growth is independent of the problem size n
 log n (logarithmic): grows slowly compared to the problem size (e.g., binary search)
 n (linear): directly proportional to the size of the problem
 n log n: typical of some divide-and-conquer algorithms (e.g., merge sort)
 n^2 (quadratic): typical of algorithms with two nested loops over the input
 n! (factorial): typical of exhaustive enumeration of permutations

1 < log n < n < n log n < n^2 < n!
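As an illustration of logarithmic growth, binary search halves the search interval on every step, so it needs at most about log2(n) comparisons. This is a minimal C sketch (the array is assumed to be sorted in ascending order):

```c
#include <stddef.h>

/* Binary search on a sorted array: O(log n) comparisons.
   Returns the index of key, or -1 if it is not present. */
int binary_search(const int *a, size_t n, int key) {
    size_t lo = 0, hi = n;              /* search interval [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return (int)mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* discard the left half */
        else
            hi = mid;                   /* discard the right half */
    }
    return -1;                          /* key not present */
}
```

Each iteration discards half of the remaining interval, which is why the number of iterations grows only logarithmically with n.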


Best, worst and average cases

 Consider the sequential search algorithm, which looks for a key value k in an array of n integers:

- The algorithm begins at the first position in the array and looks at each value in turn until k is found.
- Once k is found, the algorithm stops.
- There is a wide range of possible running times for the sequential search algorithm.
- The first integer in the array could have value k, so only one integer is examined. In this case the running time is short. This is the best case for the algorithm.
- Alternatively, if the last position in the array contains k, then the running time is relatively long. This is the worst case.
- On average, k is found near the middle of the array, so about n/2 values are examined. This is the average case.
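The sequential search described above can be sketched in C as follows; the best case is k at index 0 (one comparison), and the worst case is k at the last position or absent (n comparisons):

```c
#include <stddef.h>

/* Sequential search for key k in an array of n integers.
   Best case:  k at position 0 -> 1 comparison.
   Worst case: k last or absent -> n comparisons.
   Returns the index of k, or -1 if it is not found. */
int sequential_search(const int *a, size_t n, int k) {
    for (size_t i = 0; i < n; i++) {
        if (a[i] == k)
            return (int)i;   /* stop as soon as k is found */
    }
    return -1;               /* every value examined, k absent */
}
```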

Methods of estimating/representing time complexity

 Operation counts
 Step counts (s/e = steps per execution)

   Statement                     s/e   frequency   total steps
   Algorithm sum(a, n) {          0        0            0
       sum = 0;                   1        1            1
       for (i = 1; i <= n; i++)   1      n + 1        n + 1
           sum = sum + a[i];      1        n            n
       return sum;                1        1            1
   }                              0        0            0

   Total = 2n + 3
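The sum algorithm tallied above is, in runnable C (note that the loop header executes n + 1 times, because the test is evaluated once more when it finally fails):

```c
/* Sum of n array elements: 2n + 3 steps by the tally above.
   C arrays are 0-based, so the loop runs i = 0 .. n-1. */
int sum(const int *a, int n) {
    int s = 0;                    /* 1 step  */
    for (int i = 0; i < n; i++)   /* header: n + 1 tests */
        s += a[i];                /* body:   n steps */
    return s;                     /* 1 step  */
}
```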
Ex: Step count of matrix addition:

   Statement                                s/e   frequency    total steps
   Algorithm add(a, b, c, m, n) {            0        0             0
       for (i = 1; i <= m; i++) {            1      m + 1         m + 1
           for (j = 1; j <= n; j++) {        1     m(n + 1)      mn + m
               c[i][j] = a[i][j] + b[i][j];  1       mn            mn
           }
       }
   }                                         0        0             0

   Total = 2mn + 2m + 1

Let m = n for a generalized answer:

   Total = 2n^2 + 2n + 1
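A runnable C version of the matrix-addition example (the fixed dimensions M and N are an assumption made here for illustration; the step count is dominated by the m*n executions of the inner body):

```c
#define M 2
#define N 3

/* Element-wise matrix addition: the inner body runs m*n times,
   giving the 2mn + 2m + 1 step count derived above. */
void mat_add(int a[M][N], int b[M][N], int c[M][N]) {
    for (int i = 0; i < M; i++)          /* outer header: m + 1 tests    */
        for (int j = 0; j < N; j++)      /* inner headers: m(n + 1) tests */
            c[i][j] = a[i][j] + b[i][j]; /* body: m*n assignments         */
}
```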

Eg: Step count of algorithm Fibonacci(n):

   Statement                  step count
   if n <= 1 then                 1
       output n
   else
       f2 = 0;                    1
       f1 = 1;                    1
       for i = 2 to n do {        n
           f = f1 + f2;         n - 1
           f2 = f1;             n - 1
           f1 = f;              n - 1
       }
       output f                   1

   Total = 4n + 1

Note that if n <= 1 the step count is just 2, and 4n + 1 otherwise.
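The iterative Fibonacci routine above can be written as C that returns the value instead of printing it:

```c
/* Iterative Fibonacci: about 4n + 1 steps, as counted above.
   fib(0) = 0, fib(1) = 1, fib(n) = fib(n-1) + fib(n-2). */
long fib(int n) {
    if (n <= 1)
        return n;            /* base cases: 2 steps total */
    long f2 = 0, f1 = 1, f = 0;
    for (int i = 2; i <= n; i++) {
        f = f1 + f2;         /* next term */
        f2 = f1;             /* shift the window forward */
        f1 = f;
    }
    return f;
}
```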

 Counting cache misses


 Asymptotic notations: they describe the behavior of the time or space complexity for large instance characteristics (i.e., for large n).
1. Big-oh (O) notation: it expresses an upper bound on an algorithm's running time; it measures the worst-case time complexity, or the longest time an algorithm can possibly take to complete.
   f(n) = O(g(n)) iff there exist positive constants c and n0 such that
   f(n) <= c * g(n) for all n >= n0
   Eg: 3n + 3 = O(n), since 3n + 3 <= 4n for all n >= 3 (c = 4, n0 = 3)
2. Omega (Ω) notation: the normal way to express the lower bound of an algorithm's running time. It measures the best-case time complexity, or the least time an algorithm can possibly take to complete.
   f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that
   f(n) >= c * g(n) for all n >= n0
3. Theta (Θ) notation: expresses both the lower bound and the upper bound of an algorithm's running time.
   f(n) = Θ(g(n)) iff there exist positive constants c1, c2 and n0 such that
   c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0
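The Big-O example above (3n + 3 = O(n) with c = 4, n0 = 3) can be checked numerically; the constants c1 = 3 and c2 = 4 also witness 3n + 3 = Θ(n). This sketch tests the Θ inequality over a finite range (a spot check, not a proof, since the definition quantifies over all n >= n0):

```c
#include <stdbool.h>

/* Check that c1*g(n) <= f(n) <= c2*g(n) for every n in [n0, limit],
   with f(n) = 3n + 3 and g(n) = n, matching the example above. */
bool theta_holds(long c1, long c2, long n0, long limit) {
    for (long n = n0; n <= limit; n++) {
        long f = 3 * n + 3;
        long g = n;
        if (!(c1 * g <= f && f <= c2 * g))
            return false;    /* inequality fails at this n */
    }
    return true;
}
```

For instance, c1 = 5 fails immediately: 5n <= 3n + 3 only holds for n <= 1, below n0 = 3.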
 Recurrence equations
