Problem Solving Techniques
Unit 1
Introduction to Algorithms
What is an Algorithm?
Informal Definition:
An algorithm is any well-defined computational procedure that takes some value or set
of values as input and produces some value or set of values as output. Thus an algorithm is a
sequence of computational steps that transforms the input into the output.
Formal Definition:
An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. In
addition, every algorithm must satisfy the following criteria: input (zero or more quantities are
externally supplied), output (at least one quantity is produced), definiteness (each instruction is
clear and unambiguous), finiteness (the algorithm terminates after a finite number of steps), and
effectiveness (every instruction is basic enough to be carried out).
Performance of a Program:
The performance of a program is the amount of computer memory and time needed to run the
program. We use two approaches to determine the performance of a program: one is analytical,
and the other is experimental. In performance analysis we use analytical methods, while in
performance measurement we conduct experiments.
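As a small illustration of performance measurement (the experimental approach), the following
C sketch times a piece of work with the standard clock() function; the loop and the iteration
count are arbitrary choices made for the experiment:

#include <stdio.h>
#include <time.h>

int main()
{
    clock_t start, end;
    long long i, sum = 0;

    start = clock();                 /* processor time before the work */
    for (i = 0; i < 10000000; i++)   /* the work being measured */
        sum = sum + i;
    end = clock();                   /* processor time after the work */

    printf("sum = %lld\n", sum);
    printf("time = %f seconds\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}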
Time Complexity:
The time needed by an algorithm expressed as a function of the size of a problem is called the
time complexity of the algorithm. The time complexity of a program is the amount of computer
time it needs to run to completion.
The limiting behavior of the complexity as the problem size increases is called the asymptotic
time complexity. It is the asymptotic complexity of an algorithm that ultimately determines the
size of problems the algorithm can solve.
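For example, in the following C sketch (the function linear_search and its parameters are
illustrative, not from the text) the loop body executes at most n times, so the running time grows
as a linear function of the problem size n:

/* linear search: the loop runs at most n times,
   so the time needed grows linearly with n */
int linear_search(int a[], int n, int key)
{
    int i;
    for (i = 0; i < n; i++)    /* executed up to n times */
        if (a[i] == key)
            return i;          /* best case: found early, few steps */
    return -1;                 /* worst case: n comparisons */
}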
Space Complexity:
The space complexity of a program is the amount of memory it needs to run to completion. The
space needed by a program has the following components:
Instruction space: Instruction space is the space needed to store the compiled version of the
program instructions.
Data space: Data space is the space needed to store all constant and variable values. Data space
has two components:
Space needed by constants and simple variables in the program.
Space needed by dynamically allocated objects such as arrays and class instances.
Environment stack space: The environment stack is used to save information needed to resume
execution of partially completed functions (a recursion sketch follows at the end of this section).
Instruction Space: The amount of instruction space that is needed depends on
factors such as:
The compiler used to compile the program into machine code.
The compiler options in effect at the time of compilation.
The target computer.
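To illustrate the environment stack component mentioned above, here is a small recursive C
sketch (illustrative): each call that has not yet completed keeps a frame on the stack, so
computing factorial(n) needs stack space proportional to n.

/* each recursive call adds one frame to the environment stack;
   the frame holds n and the return address until the inner call finishes */
long factorial(int n)
{
    if (n <= 1)
        return 1;                  /* base case: the recursion stops */
    return n * factorial(n - 1);   /* n stays saved on the stack meanwhile */
}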
Algorithm Specification
An algorithm can be described in three ways:
1. Natural language such as English: when this way is chosen, care should be taken to
ensure that each and every statement is definite.
2. Graphic representation, called a flowchart: this method works well when the algorithm is
small and simple.
3. Pseudo-code method: in this method we describe the algorithm as a program that
resembles programming-language constructs.
Pseudo-Code Conventions:
1. Comments begin with // and continue until the end of the line.
2. Blocks are indicated with matching braces { and }.
3. An identifier begins with a letter. The data types of variables are not explicitly declared.
4. Compound data types can be formed with records. Here is an example:
node = record
{
   datatype-1 data-1;
   ...
   datatype-n data-n;
   node *link;
}
Here link is a pointer to the record type node. Individual data items of a record can be accessed
with → and the period.
5. Assignment of values to variables is done using the assignment statement
<variable> := <expression>;
While loop:
while <condition> do
{
   <statement-1>
   ...
   <statement-n>
}
For loop:
for variable := value1 to value2 step step do
{
   <statement-1>
   ...
   <statement-n>
}
repeat-until:
repeat
   <statement-1>
   ...
   <statement-n>
until <condition>
Case statement:
case
{
   :<condition-1>: <statement-1>
   ...
   :<condition-n>: <statement-n>
   :else: <statement-n+1>
}
Input and output are done using the instructions read and write.
Growth function
Orders Of Growth
♦ A difference in running times on small inputs is not what really distinguishes efficient
algorithms from inefficient ones.
♦ When we have to compute, for example, the greatest common divisor of two small numbers, it
is not immediately clear how much more efficient Euclid's algorithm is compared to the other
two algorithms discussed in the previous section, or even why we should care which of them is
faster and by how much. It is only when we have to find the greatest common divisor of two
large numbers that the difference in algorithm efficiencies becomes both clear and important.
The complexity of an algorithm M is the function f(n) which gives the running time and/or
storage space requirement of the algorithm in terms of the size n of the input data. Mostly, the
storage space required by an algorithm is simply a multiple of the data size n, so "complexity"
shall refer to the running time of the algorithm.
The function f(n), which gives the running time of an algorithm, depends not only on the size n
of the input data but also on the particular data. The complexity function f(n) is considered for
the following cases:
1. Best Case: The minimum possible value of f(n) is called the best case.
2. Average Case: The expected value of f(n) over all inputs of size n is called the average case.
3. Worst Case: The maximum value of f(n) for any possible input is called the worst case.
The field of computer science that studies the efficiency of algorithms is known as analysis of
algorithms.
Algorithms can be evaluated by a variety of criteria. Most often we shall be interested in the rate
of growth of the time or space required to solve larger and larger instances of a problem. We
will associate with the problem an integer, called the size of the problem, which is a measure of
the quantity of input data.
Asymptotic notations
The following notations are commonly used in performance analysis to characterize the
complexity of an algorithm:
1. Big–OH (O): f(n) = O(g(n)) if there exist positive constants c and n0 such that
f(n) ≤ c·g(n) for all n ≥ n0; g(n) is an asymptotic upper bound for f(n).
2. Big–OMEGA (Ω): f(n) = Ω(g(n)) if there exist positive constants c and n0 such that
f(n) ≥ c·g(n) for all n ≥ n0; g(n) is an asymptotic lower bound for f(n).
3. Big–THETA (θ): f(n) = θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n)); g(n) is an
asymptotically tight bound for f(n).
4. Little–OH (o): f(n) = o(g(n)) if the ratio f(n)/g(n) tends to 0 as n tends to infinity;
g(n) is an upper bound that is not asymptotically tight.
This section reviews some standard mathematical functions and notations and explores the
relationships among them. It also illustrates the use of the asymptotic notations.
Monotonicity
A function f(n) is monotonically increasing if m ≤ n implies f(m) ≤ f(n), and monotonically
decreasing if m ≤ n implies f(m) ≥ f(n).
Floors and ceilings
For any real number x, we denote the greatest integer less than or equal to x by ⌊x⌋ (read "the
floor of x") and the least integer greater than or equal to x by ⌈x⌉ (read "the ceiling of x"). For all
real x,
x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1. (3.3)
For any integer n,
⌈n/2⌉ + ⌊n/2⌋ = n,
and for any integer n and integers a ≠ 0 and b ≠ 0,
⌈⌈n/a⌉/b⌉ = ⌈n/(ab)⌉, (3.4)
⌊⌊n/a⌋/b⌋ = ⌊n/(ab)⌋, (3.5)
⌈a/b⌉ ≤ (a + (b − 1))/b, (3.6)
⌊a/b⌋ ≥ (a − (b − 1))/b. (3.7)
The floor function f(x) = ⌊x⌋ is monotonically increasing, as is the ceiling function f(x) = ⌈x⌉.
Modular arithmetic
For any integer a and any positive integer n, the value a mod n is the remainder (or residue) of
the quotient a/n:
a mod n = a − ⌊a/n⌋·n. (3.8)
Given a well-defined notion of the remainder of one integer when divided by another, it is
convenient to provide special notation to indicate equality of remainders. If (a mod n) =
(b mod n), we write a ≡ b (mod n) and say that a is equivalent to b, modulo n. In other
words, a ≡ b (mod n) if a and b have the same remainder when divided by n.
Equivalently, a ≡ b (mod n) if and only if n is a divisor of b - a. We write a ≢ b (mod n) if a is
not equivalent to b, modulo n.
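As a small C sketch of these definitions (the helper name mod is an assumption of the example),
the following program computes a mod n as a residue in the range 0 to n − 1 and tests an
equivalence:

#include <stdio.h>

/* a mod n as defined above: always in 0 .. n-1, even when a is
   negative (C's % operator keeps the sign of a) */
int mod(int a, int n)
{
    int r = a % n;
    return (r < 0) ? r + n : r;
}

int main()
{
    /* 7 and 22 leave the same remainder when divided by 5 */
    printf("7 mod 5 = %d\n", mod(7, 5));     /* 2 */
    printf("22 mod 5 = %d\n", mod(22, 5));   /* 2 */
    printf("7 = 22 (mod 5)? %s\n",
           mod(7, 5) == mod(22, 5) ? "yes" : "no");
    return 0;
}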
Polynomials
Given a nonnegative integer d, a polynomial in n of degree d is a function p(n) of the form
p(n) = a_d·nᵈ + ··· + a₁·n + a₀,
where the constants a₀, a₁, ..., a_d are the coefficients of the polynomial and a_d ≠ 0. A polynomial
is asymptotically positive if and only if a_d > 0. For an asymptotically positive polynomial p(n) of
degree d, we have p(n) = Θ(nᵈ). For any real constant a ≥ 0, the function nᵃ is monotonically
increasing, and for any real constant a ≤ 0, the function nᵃ is monotonically decreasing. We say
that a function f(n) is polynomially bounded if f(n) = O(nᵏ) for some constant k.
Exponentials
For all n and a ≥ 1, the function aⁿ is monotonically increasing in n. When convenient, we shall
assume 0⁰ = 1.
The rates of growth of polynomials and exponentials can be related by the following fact. For all
real constants a and b such that a > 1,
lim (n → ∞) nᵇ/aⁿ = 0,
from which we can conclude that
nᵇ = o(aⁿ). (3.9)
Thus, any exponential function with a base strictly greater than 1 grows faster than any
polynomial function.
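A quick numeric check of this fact (an illustrative C sketch, not part of the text): printing n³
next to 2ⁿ shows the exponential overtaking the polynomial at n = 10 and pulling away rapidly.

#include <stdio.h>

int main()
{
    int n;
    long cube, power = 1;                /* power holds 2^n */

    for (n = 1; n <= 16; n++) {
        cube = (long)n * n * n;
        power = power * 2;
        printf("n = %2d   n^3 = %5ld   2^n = %6ld\n", n, cube, power);
    }
    /* by n = 10, 2^n = 1024 already exceeds n^3 = 1000,
       illustrating n^b = o(a^n) for any a > 1 */
    return 0;
}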
Using e to denote 2.71828..., the base of the natural logarithm function, we have for all real x,
eˣ = 1 + x + x²/2! + x³/3! + ···, (3.10)
where "!" denotes the factorial function defined later in this section. For all real x, we have the
inequality
eˣ ≥ 1 + x, (3.11)
where equality holds only when x = 0. When |x| ≤ 1, we have the approximation
1 + x ≤ eˣ ≤ 1 + x + x². (3.12)
When x → 0, the approximation of eˣ by 1 + x is quite good:
eˣ = 1 + x + Θ(x²).
(In this equation, the asymptotic notation is used to describe the limiting behavior as x → 0
rather than as x → ∞.) We have for all x,
lim (n → ∞) (1 + x/n)ⁿ = eˣ. (3.13)
Logarithms
We shall use the following notations: lg n = log₂ n (binary logarithm), ln n = logₑ n (natural
logarithm), lgᵏ n = (lg n)ᵏ (exponentiation), and lg lg n = lg(lg n) (composition).
An important notational convention we shall adopt is that logarithm functions will apply only to
the next term in the formula, so that lg n + k will mean (lg n) + k and not lg(n + k). If we
hold b > 1 constant, then for n > 0, the function log_b n is strictly increasing.
For all real a > 0, b > 0, c > 0, and n, and logarithm bases that are not 1,
a = b^(log_b a),
log_c (ab) = log_c a + log_c b,
log_b aⁿ = n·log_b a,
log_b a = log_c a / log_c b, (3.14)
log_b (1/a) = −log_b a,
log_b a = 1 / (log_a b),
a^(log_b c) = c^(log_b a). (3.15)
By equation (3.14), changing the base of a logarithm from one constant to another changes
the value of the logarithm by only a constant factor, and so we shall often use the notation "lg n"
when we don't care about constant factors, such as in O-notation. Computer scientists find 2 to
be the most natural base for logarithms because so many algorithms and data structures involve
splitting a problem into two parts.
We say that a function f(n) is polylogarithmically bounded if f(n) = O(lgᵏ n) for some constant k.
We can relate the growth of polynomials and polylogarithms by substituting lg n for n and
2ᵃ for a in equation (3.9), yielding
lgᵇ n = o(nᵃ)
for any constant a > 0. Thus, any positive polynomial function grows faster than any
polylogarithmic function.
Factorials
The notation n! (read "n factorial") is defined for integers n ≥ 0 as
n! = 1 if n = 0, and n! = n · (n − 1)! if n > 0.
Thus, n! = 1 · 2 · 3 ··· n.
A weak upper bound on the factorial function is n! ≤ nⁿ, since each of the n terms in the factorial
product is at most n. Stirling's approximation,
n! = √(2πn) (n/e)ⁿ (1 + Θ(1/n)), (3.17)
where e is the base of the natural logarithm, gives us a tighter upper bound, and a lower bound as
well. One can prove (see Exercise 3.2-3)
n! = o(nⁿ), n! = ω(2ⁿ), lg(n!) = Θ(n lg n), (3.18)
where Stirling's approximation is helpful in proving equation (3.18). The following equation also
holds for all n ≥ 1:
n! = √(2πn) (n/e)ⁿ e^(αₙ), (3.19)
where
1/(12n + 1) < αₙ < 1/(12n). (3.20)
Functional iteration
We use the notation f⁽ⁱ⁾(n) to denote the function f(n) iteratively applied i times to an initial value
of n. Formally, let f(n) be a function over the reals. For nonnegative integers i, we recursively
define
f⁽ⁱ⁾(n) = n if i = 0, and f⁽ⁱ⁾(n) = f(f⁽ⁱ⁻¹⁾(n)) if i > 0.
We use the notation lg* n (read "log star of n") to denote the iterated logarithm, which is defined
as follows. Let lg⁽ⁱ⁾ n be as defined above, with f(n) = lg n. Because the logarithm of a
nonpositive number is undefined, lg⁽ⁱ⁾ n is defined only if lg⁽ⁱ⁻¹⁾ n > 0. Be sure to distinguish
lg⁽ⁱ⁾ n (the logarithm function applied i times in succession, starting with argument n) from
lgⁱ n (the logarithm of n raised to the ith power). The iterated logarithm function is defined as
lg* n = min {i ≥ 0 : lg⁽ⁱ⁾ n ≤ 1}.
For example, lg* 2 = 1, lg* 4 = 2, lg* 16 = 3, lg* 65536 = 4, and lg* 2⁶⁵⁵³⁶ = 5.
Since the number of atoms in the observable universe is estimated to be about 10⁸⁰, which is
much less than 2⁶⁵⁵³⁶, we rarely encounter an input size n such that lg* n > 5.
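A direct C sketch of the definition (the function name lg_star is an assumption; log2 is from
the standard <math.h>):

#include <stdio.h>
#include <math.h>

/* iterated logarithm: how many times lg must be applied
   before the value drops to 1 or below */
int lg_star(double n)
{
    int i = 0;
    while (n > 1.0) {
        n = log2(n);    /* apply lg once more */
        i++;
    }
    return i;
}

int main()
{
    printf("lg* 16 = %d\n", lg_star(16.0));        /* 3 */
    printf("lg* 65536 = %d\n", lg_star(65536.0));  /* 4 */
    return 0;
}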
Fibonacci numbers
The Fibonacci numbers are defined by the recurrence F₀ = 0, F₁ = 1, and Fᵢ = Fᵢ₋₁ + Fᵢ₋₂ for
i ≥ 2. Thus, each Fibonacci number is the sum of the two previous ones, yielding the sequence
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ....
Fibonacci numbers are related to the golden ratio φ and to its conjugate φ̂, which are given by
the following formulas:
φ = (1 + √5)/2 = 1.61803...,
φ̂ = (1 − √5)/2 = −0.61803.... (3.22)
Specifically, we have
Fᵢ = (φⁱ − φ̂ⁱ)/√5. (3.23)
Since |φ̂| < 1, we have |φ̂ⁱ|/√5 < 1/2, which implies that Fᵢ is equal to φⁱ/√5 rounded to the
nearest integer.
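Equation (3.23) can be checked numerically. The following C sketch (illustrative) computes the
Fibonacci numbers by the recurrence and compares them with φⁱ/√5 rounded to the nearest
integer:

#include <stdio.h>
#include <math.h>

int main()
{
    double phi = (1.0 + sqrt(5.0)) / 2.0;   /* golden ratio */
    long f0 = 0, f1 = 1, fi;
    int i;

    for (i = 2; i <= 20; i++) {
        fi = f0 + f1;     /* recurrence F(i) = F(i-1) + F(i-2) */
        f0 = f1;
        f1 = fi;
        /* |conjugate|^i / sqrt(5) < 1/2, so rounding recovers F(i) exactly */
        printf("F(%d) = %ld, round(phi^%d / sqrt 5) = %.0f\n",
               i, fi, i, floor(pow(phi, i) / sqrt(5.0) + 0.5));
    }
    return 0;
}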
Designing algorithms
Many algorithms are recursive in nature: to solve a given problem, they call themselves
recursively to deal with closely related sub-problems.
In the divide-and-conquer approach, a problem is divided into smaller problems; the smaller
problems are solved independently, and finally the solutions of the smaller problems are
combined into a solution for the large problem.
Generally, divide-and-conquer algorithms have three parts −
Divide the problem into a number of sub-problems that are smaller instances of the
same problem.
Conquer the sub-problems by solving them recursively. If they are small enough, solve
the sub-problems as base cases.
Combine the solutions to the sub-problems into the solution for the original problem.
The divide-and-conquer approach supports parallelism, because the sub-problems are
independent. Hence, an algorithm designed using this technique can run on a multiprocessor
system, or the sub-problems can even be solved on different machines simultaneously.
In this approach, most of the algorithms are designed using recursion, so the memory cost can
be high: each recursive call keeps its state on the function-call stack until it completes.
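As a small C illustration of these three parts (the function max_element is a made-up example,
not one of the problems below), the maximum of an array can be found by divide and conquer:

/* divide-and-conquer maximum of a[low..high] (illustrative sketch) */
int max_element(int a[], int low, int high)
{
    int mid, left_max, right_max;

    if (low == high)          /* base case: a single element */
        return a[low];

    mid = (low + high) / 2;   /* divide into two halves */
    left_max = max_element(a, low, mid);         /* conquer left half */
    right_max = max_element(a, mid + 1, high);   /* conquer right half */

    /* combine: take the larger of the two sub-solutions */
    return (left_max > right_max) ? left_max : right_max;
}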
Following are some problems which are solved using the divide-and-conquer approach.
Merge Sort
Problem Statement
Sort a given list of numbers into ascending order.
Solution
In this algorithm, the numbers are stored in an array numbers[]. Here, p and r represent the
start and end index of a sub-array, and q is its midpoint.
Algorithm: Merge-Sort (numbers[], p, r)
if p < r then
   q = ⌊(p + r) / 2⌋
   Merge-Sort (numbers[], p, q)
   Merge-Sort (numbers[], q + 1, r)
   Merge (numbers[], p, q, r)

Function: Merge (numbers[], p, q, r)
n1 = q - p + 1
n2 = r - q
declare leftnums[1…n1 + 1] and rightnums[1…n2 + 1] temporary arrays
for i = 1 to n1
   leftnums[i] = numbers[p + i - 1]
for j = 1 to n2
   rightnums[j] = numbers[q + j]
leftnums[n1 + 1] = ∞
rightnums[n2 + 1] = ∞
i = 1
j = 1
for k = p to r
   if leftnums[i] ≤ rightnums[j]
      numbers[k] = leftnums[i]
      i = i + 1
   else
      numbers[k] = rightnums[j]
      j = j + 1
Analysis
Let T(n) be the running time of Merge-Sort on n numbers. Dividing takes constant time, the two
recursive calls take 2T(n/2) time, and merging n elements takes O(n) time. Hence
T(n) = 2T(n/2) + O(n), which solves to T(n) = O(n log n). The merge step also uses O(n) extra
space for the temporary arrays.
Example
The following example shows the Merge-Sort algorithm step by step. In each iteration the array
is divided into two sub-arrays, until each sub-array contains only one element. When the
sub-arrays cannot be divided further, merge operations are performed.
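One possible C translation of the Merge-Sort pseudocode above (a sketch: it assumes sub-arrays
of at most 100 elements and replaces the ∞ sentinels with explicit bound checks):

#include <stdio.h>

/* merge numbers[p..q] and numbers[q+1..r], both already sorted */
void merge(int numbers[], int p, int q, int r)
{
    int temp[100];    /* assumes at most 100 elements */
    int i = p, j = q + 1, k = 0;

    while (i <= q && j <= r)    /* take the smaller front element */
        temp[k++] = (numbers[i] <= numbers[j]) ? numbers[i++] : numbers[j++];
    while (i <= q)              /* copy any leftovers */
        temp[k++] = numbers[i++];
    while (j <= r)
        temp[k++] = numbers[j++];
    for (k = 0, i = p; i <= r; i++, k++)    /* copy back */
        numbers[i] = temp[k];
}

void merge_sort(int numbers[], int p, int r)
{
    if (p < r) {
        int q = (p + r) / 2;    /* q = floor((p + r) / 2) */
        merge_sort(numbers, p, q);
        merge_sort(numbers, q + 1, r);
        merge(numbers, p, q, r);
    }
}

int main()
{
    int a[] = { 14, 33, 27, 10, 35, 19, 42, 44 };
    int i, n = 8;

    merge_sort(a, 0, n - 1);
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);    /* prints 10 14 19 27 33 35 42 44 */
    return 0;
}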
Algorithm for swapping two numbers using a third variable:
STEP 1: START
STEP 2: ENTER A, B
STEP 3: TEMP = A
STEP 4: A = B
STEP 5: B = TEMP
STEP 6: PRINT A, B
STEP 7: END
C implementation
#include <stdio.h>
int main()
{
    int a, b, temp;
    printf("Enter a and b: ");
    scanf("%d %d", &a, &b);   /* read the two numbers */
    temp = a;                 /* save a before it is overwritten */
    a = b;
    b = temp;
    printf("After swap a=%d b=%d", a, b);
    return 0;
}
Algorithm for swapping two numbers without using a third variable:
STEP 1: START
STEP 2: ENTER A, B
STEP 3: PRINT A, B
STEP 4: A = A + B
STEP 5: B= A - B
STEP 6: A =A - B
STEP 7: PRINT A, B
STEP 8: END
C implementation
#include <stdio.h>
int main()
{
    int a = 10, b = 20;
    printf("Before swap a=%d b=%d", a, b);
    a = a + b;   /* a = 30 (10 + 20) */
    b = a - b;   /* b = 10 (30 - 20) */
    a = a - b;   /* a = 20 (30 - 10) */
    printf("\nAfter swap a=%d b=%d", a, b);
    return 0;
}
Algorithm for finding the sum of the digits of a number:
C implementation
#include <stdio.h>
int main()
{
    int n, sum = 0, m;
    printf("Enter a number:");
    scanf("%d", &n);
    while (n > 0)
    {
        m = n % 10;      /* extract the last digit */
        sum = sum + m;   /* add it to the running sum */
        n = n / 10;      /* drop the last digit */
    }
    printf("Sum is=%d", sum);
    return 0;
}
Algorithm for finding the sum of n numbers:
Step 1: Start
Step 2: Read n, the count of numbers to be summed, from the input device (such as the keyboard).
Step 3: Initialize the variable sum to zero.
Step 4: Initialize the variable count to one.
Step 5: While count is less than or equal to n, repeatedly do: read the number at position count
(when count is 1 read the first number, when count is 2 read the second number, and so on),
update the current sum by adding the number read to it, and increase count by one.
Step 6: Write the sum of the n numbers.
Step 7: Stop.
C implementation
#include <stdio.h>
int main()
{
    /* assume the maximum array size is 100;
       initialize sum to 0, otherwise it would hold a garbage value */
    int arr[100], size, i, sum = 0;
    printf("Enter the number of elements:");
    scanf("%d", &size);
    for (i = 0; i < size; i++)   /* read the numbers */
        scanf("%d", &arr[i]);
    for (i = 0; i < size; i++)   /* add them up */
        sum = sum + arr[i];
    printf("Sum is=%d", sum);
    return 0;
}
Factorial of a number:
C implementation
#include <stdio.h>
int main()
{
    int i, fact = 1, number;
    printf("Enter a number: ");
    scanf("%d", &number);
    for (i = 1; i <= number; i++) {
        fact = fact * i;   /* multiply 1, 2, ..., number together */
    }
    printf("Factorial of %d is: %d", number, fact);
    return 0;
}
Algorithm for printing the Fibonacci series:
Step 1: Start
Step 2: Declare variables i, a, b, show
Step 3: Initialize the variables: a = 0, b = 1, and show = 0
Step 4: Enter the number of terms of the Fibonacci series to be printed
Step 5: Print the first two terms of the series
Step 6: Use a loop for steps 7 to 11
Step 7: show = a + b
Step 8: a = b
Step 9: b = show
Step 10: Increase the value of i by 1 each time
Step 11: Print the value of show
Step 12: End
C implementation
#include <stdio.h>
int main()
{
    int n1 = 0, n2 = 1, n3, i, number;
    printf("Enter the number of elements:");
    scanf("%d", &number);
    printf("\n%d %d", n1, n2);     /* printing 0 and 1 */
    for (i = 2; i < number; ++i)   /* loop starts from 2 because 0 and 1 are already printed */
    {
        n3 = n1 + n2;
        printf(" %d", n3);
        n1 = n2;
        n2 = n3;
    }
    return 0;
}
Algorithm for reversing a number:
C implementation
#include <stdio.h>
int main()
{
    int n, reverse = 0, rem;
    printf("Enter a number: ");
    scanf("%d", &n);
    while (n != 0)
    {
        rem = n % 10;                  /* extract the last digit */
        reverse = reverse * 10 + rem;  /* append it to the reversed number */
        n /= 10;                       /* drop the last digit */
    }
    printf("Reversed Number: %d", reverse);
    return 0;
}
Algorithm for finding the square root of a number:
STEP 1: START
STEP 2: Read M, the number whose square root is required
STEP 3: Set the initial value i = 0.1
STEP 4: Repeat steps 5 and 6 for i = 0.1 to M/2, increasing i by 0.1 each time
STEP 5: If i*i is (approximately) equal to M, go to step 7
STEP 6: If i*i > M, set i = i - 0.1 and go to step 7
STEP 7: Print i as the square root
STEP 8: STOP
C implementation
#include <stdio.h>
#include <math.h>
int main()
{
    float m, i;
    printf("enter a number\n");
    scanf("%f", &m);
    for (i = 0.1f; i <= m / 2; i = i + 0.1f)
    {
        if (fabs(i * i - m) < 0.01)   /* i*i == m rarely holds exactly for floats */
        {
            break;
        }
        if (i * i > m)
        {
            i = i - 0.1f;   /* step back: i*i has just passed m */
            break;
        }
    }
    printf("square root of %f is %f", m, i);
    return 0;
}
Number conversion
Decimal to Other Base System
Steps
Step 1 − Divide the decimal number to be converted by the value of the new base.
Step 2 − Record the remainder from Step 1 as the rightmost digit (least significant digit) of
the new base number.
Step 3 − Divide the quotient of the previous division by the new base.
Step 4 − Record the remainder from Step 3 as the next digit (to the left) of the new base
number.
Repeat Steps 3 and 4, recording remainders from right to left, until the quotient becomes zero in
Step 3.
The last remainder thus obtained will be the Most Significant Digit (MSD) of the new base
number.
Example −
Decimal Number: 29₁₀
Calculating Binary Equivalent −
Step Operation Result Remainder
Step 1 29 / 2 14 1
Step 2 14 / 2 7 0
Step 3 7 / 2 3 1
Step 4 3 / 2 1 1
Step 5 1 / 2 0 1
As mentioned in Steps 2 and 4, the remainders have to be arranged in the reverse order, so that
the first remainder becomes the Least Significant Digit (LSD) and the last remainder becomes
the Most Significant Digit (MSD).
Decimal Number 29₁₀ = Binary Number 11101₂
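The repeated-division procedure above can be sketched in C as follows (the sample number 29,
the base 2, and the digit buffer size are assumptions of the example):

#include <stdio.h>

int main()
{
    int n = 29, base = 2;              /* the values from the example */
    int digits[32], count = 0, i;

    while (n > 0) {
        digits[count++] = n % base;    /* remainder = next digit, right to left */
        n = n / base;                  /* the quotient feeds the next division */
    }
    /* remainders are produced LSD first, so print them in reverse order */
    printf("29 in base 2 = ");
    for (i = count - 1; i >= 0; i--)
        printf("%d", digits[i]);       /* prints 11101 */
    printf("\n");
    return 0;
}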
Other Base System to Decimal System
Steps
Step 1 − Determine the column (positional) value of each digit (this depends on the
position of the digit and the base of the number system).
Step 2 − Multiply the obtained column values (in Step 1) by the digits in the
corresponding columns.
Step 3 − Sum the products calculated in Step 2. The total is the equivalent value in
decimal.
Example
Binary Number − 11101₂
Calculating Decimal Equivalent −
Step Binary Number Decimal Number
Step 1 11101₂ ((1 × 2⁴) + (1 × 2³) + (1 × 2²) + (0 × 2¹) + (1 × 2⁰))₁₀
Step 2 11101₂ (16 + 8 + 4 + 0 + 1)₁₀
Step 3 11101₂ 29₁₀
Binary Number 11101₂ = Decimal Number 29₁₀
Other Base System to Non-Decimal System
Steps
Step 1 − Convert the original number to a decimal number (base 10).
Step 2 − Convert the decimal number so obtained to the new base number.
Example
Octal Number − 25₈
Calculating Binary Equivalent −
Step 1 − Convert to Decimal
Step Octal Number Decimal Number
Step 1 25₈ ((2 × 8¹) + (5 × 8⁰))₁₀
Step 2 25₈ (16 + 5)₁₀
Step 3 25₈ 21₁₀
Octal Number 25₈ = Decimal Number 21₁₀
Step 2 − Convert Decimal to Binary
Step Operation Result Remainder
Step 1 21 / 2 10 1
Step 2 10 / 2 5 0
Step 3 5 / 2 2 1
Step 4 2 / 2 1 0
Step 5 1 / 2 0 1
Decimal Number 21₁₀ = Binary Number 10101₂
Octal Number 25₈ = Binary Number 10101₂