
Problem Solving Technique

Unit 1

Algorithm: Introduction

What is an Algorithm?

Informal Definition:

An algorithm is any well-defined computational procedure that takes some value or set of values as input and produces some value or set of values as output. Thus, an algorithm is a sequence of computational steps that transforms the input into the output.

Formal Definition:

An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for


obtaining a required output for any legitimate input in a finite amount of time.
Properties of an algorithm

INPUT: Zero or more quantities are externally supplied.
OUTPUT: At least one quantity is produced.
DEFINITENESS: Each instruction is clear and unambiguous.
FINITENESS: If we trace out the instructions of the algorithm, then for all cases the algorithm terminates after a finite number of steps.
EFFECTIVENESS: Every instruction must be very basic so that it can be carried out, in principle, by a person using only pencil and paper.

How is an algorithm a technology?

Algorithms are a technology, just as hardware is. We may buy the latest and greatest processor, but we need to run implementations of good algorithms on it in order to get full benefit from the money spent. Let us make this concrete by pitting a faster computer (computer A) running a sorting algorithm whose running time on n values grows like n², against a slower computer (computer B) running a sorting algorithm whose running time grows like n lg n. Each must sort an array of 10 million numbers. Suppose that computer A executes 10 billion instructions per second (faster than any single sequential computer at the time of this writing) and computer B executes only 10 million instructions per second, so that computer A is 1000 times faster than computer B in raw computing power. To make the difference even more dramatic, suppose that the world's craftiest programmer codes in machine language for computer A, and the resulting code requires 2n² instructions to sort n numbers. Suppose further that just an average programmer writes for computer B, using a high-level language with an inefficient compiler, and the resulting code takes 50n lg n instructions.

                    Computer A (faster)        Computer B (slower)
Running time        grows like n²              grows like n lg n
Speed               10 billion instr./sec      10 million instr./sec
Instruction count   2n²                        50n lg n

To sort 10 million numbers, computer A takes 2·(10⁷)² / 10¹⁰ = 20,000 seconds (more than 5.5 hours), while computer B takes 50·10⁷·lg(10⁷) / 10⁷ ≈ 1163 seconds (under 20 minutes). So choosing a good algorithm (one with a slower rate of growth), as used by computer B, matters a great deal.
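To check this arithmetic, here is a short C program (a minimal sketch of ours; the speeds and instruction counts are the figures assumed in the example above):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double n = 1e7;          /* numbers to sort */
    double speed_a = 1e10;   /* computer A: instructions per second */
    double speed_b = 1e7;    /* computer B: instructions per second */

    double time_a = 2.0 * n * n / speed_a;         /* 2n^2 instructions */
    double time_b = 50.0 * n * log2(n) / speed_b;  /* 50 n lg n instructions */

    printf("Computer A: %.0f seconds (about %.1f hours)\n", time_a, time_a / 3600.0);
    printf("Computer B: %.0f seconds (about %.1f minutes)\n", time_b, time_b / 60.0);
    return 0;
}

Running it prints roughly 20000 seconds (5.6 hours) for computer A and 1163 seconds (19.4 minutes) for computer B.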
How to decide which algorithm is best suited?
1. How efficient the algorithm is when inputs of higher order (larger size) are given.
2. The possible restrictions/constraints on the input values.
3. The architecture of the computer and the kind of storage devices to be used.
4. Another important aspect is the correctness of the algorithm: an algorithm is correct if, for every instance, it produces the correct output. An incorrect algorithm might not halt at all on some input instances, or might give incorrect output.
Practical applications of algorithms:

• The Internet, without which it is difficult to imagine a day, is the result of clever and efficient algorithms. With the aid of these algorithms, various sites on the Internet are able to manage and manipulate large volumes of data: finding good routes on which the data will travel, and using search engines to find pages on which particular information is present.

• Another great milestone is the Human Genome Project, which has made great progress towards the goals of identifying the 100,000 genes in human DNA, determining the sequences of the 3 billion chemical base pairs that make up human DNA, storing this huge amount of information in databases, and developing tools for data analysis. Each of these steps requires sophisticated and efficient algorithms.

• Day-to-day electronic commerce activities depend hugely on our personal information such as credit/debit card numbers, passwords, bank statements, OTPs and so on. The core technologies used include public-key cryptography and digital signatures, which are based on numerical algorithms and number theory.

• The approach of linear programming is also one such widely used technique, for example:
  - In manufacturing and other commercial enterprises, where scarce resources need to be allocated in the most beneficial way.
  - An institution may want to determine where to spend money on advertising in order to maximize the chances of the institution growing.

• Shortest-path algorithms also have extensive uses:
  - A transportation firm, such as a trucking or railroad company, may have a financial interest in finding shortest paths through a road or rail network, because taking shorter paths results in lower labour or fuel costs.
  - A routing node on the Internet may need to find the shortest path through the network in order to route a message quickly.

• Even an application that does not require algorithmic content at the application level relies heavily on algorithms, since the application depends on hardware, GUIs, networking or object orientation, all of which make extensive use of algorithms.

Performance of a program: time and space tradeoff

The performance of a program is the amount of computer memory and time needed to run a
program. We use two approaches to determine the performance of a program. One is analytical,
and the other experimental. In performance analysis we use analytical methods, while in
performance measurement we conduct experiments.
Time Complexity:
The time needed by an algorithm expressed as a function of the size of a problem is called the
time complexity of the algorithm. The time complexity of a program is the amount of computer
time it needs to run to completion.
The limiting behavior of the complexity as size increases is called the asymptotic time
complexity. It is the asymptotic complexity of an algorithm, which ultimately determines the
size of problems that can be solved by the algorithm.

Space Complexity:
The space complexity of a program is the amount of memory it needs to run to completion. The space needed by a program has the following components:

Instruction space: Instruction space is the space needed to store the compiled version of the program instructions. The amount of instruction space needed depends on factors such as:
• The compiler used to compile the program into machine code.
• The compiler options in effect at the time of compilation.
• The target computer.

Data space: Data space is the space needed to store all constant and variable values. Data space has two components:
• Space needed by constants and simple variables in the program.
• Space needed by dynamically allocated objects such as arrays and class instances.

Environment stack space: The environment stack is used to save information needed to resume execution of partially completed functions.

Algorithm Design Goals


The three basic design goals that one should strive for in a program are:
1. Try to save Time
2. Try to save Space
3. Try to save Face

A program that runs faster is a better program, so saving time is an obvious goal. Likewise, a program that saves space over a competing program is considered desirable. We want to "save face" by preventing the program from locking up or generating reams of garbled data.

Algorithm Specification
An algorithm can be described in three ways.

1. Natural language like English: When this way is chosen, care should be taken; we should ensure that each and every statement is definite.

2. Graphic representation called a flowchart: This method works well when the algorithm is small and simple.

3. Pseudo-code method: In this method, we describe the algorithm as a program that resembles programming-language constructs.

Pseudo-Code Conventions:

1. Comments begin with // and continue until the end of the line.

2. Blocks are indicated with matching braces { and }.

3. An identifier begins with a letter. The data types of variables are not explicitly declared.

4. Compound data types can be formed with records. Here is an example:

   node = record
   {
       data_type_1 data_1;
       ...
       data_type_n data_n;
       node *link;
   }

Here link is a pointer to the record type node. Individual data items of a record can be accessed with → and period.
5. Assignment of values to variables is done using the assignment statement:

   <variable> := <expression>;

6. There are two Boolean values, TRUE and FALSE.

   Logical operators: AND, OR, NOT
   Relational operators: <, <=, >, >=, =, !=

7. The following looping statements are employed: for, while and repeat-until.

While loop:

   while <condition> do
   {
       <statement-1>
       ...
       <statement-n>
   }

For loop:

   for variable := value-1 to value-2 step step do
   {
       <statement-1>
       ...
       <statement-n>
   }

repeat-until:

   repeat
       <statement-1>
       ...
       <statement-n>
   until <condition>

8. A conditional statement has the following forms:

   if <condition> then <statement>
   if <condition> then <statement-1> else <statement-2>

Case statement:

   case
   {
       :<condition-1>: <statement-1>
       ...
       :<condition-n>: <statement-n>
       :else: <statement-n+1>
   }

9. Input and output are done using the instructions read and write.
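As an illustration of these conventions, here is a small example of ours (not from the original text) that finds the maximum element of an array:

Algorithm Max(A, n)
// A is an array of size n; returns the largest value in A.
{
    result := A[1];
    for i := 2 to n step 1 do
    {
        if A[i] > result then result := A[i];
    }
    return result;
}

Note how the conventions are used: the // comment, braces for the block, := for assignment, and the for loop with an explicit step.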

Growth function
Orders Of Growth

♦ A difference in running times on small inputs is not what really distinguishes efficient algorithms from inefficient ones.

♦ When we have to compute, for example, the greatest common divisor of two small numbers, it is not immediately clear how much more efficient Euclid's algorithm is compared to the other two algorithms discussed in the previous section, or even why we should care which of them is faster and by how much. It is only when we have to find the greatest common divisor of two large numbers that the difference in algorithm efficiencies becomes both clear and important.

For large values of n, it is the function's order of growth that counts.

Analysis / Complexity of Algorithms

The complexity of an algorithm M is the function f(n) which gives the running time and/or storage space requirement of the algorithm in terms of the size n of the input data. Mostly, the storage space required by an algorithm is simply a multiple of the data size n. Here, complexity shall refer to the running time of the algorithm.

The function f(n), which gives the running time of an algorithm, depends not only on the size n of the input data but also on the particular data. The complexity function f(n) for certain cases is:

1. Best case: the minimum possible value of f(n).

2. Average case: the expected value of f(n).

3. Worst case: the maximum value of f(n) over all possible inputs.

The field of computer science, which studies efficiency of algorithms, is known as analysis of
algorithms.

Algorithms can be evaluated by a variety of criteria. Most often we shall be interested in the rate
of growth of the time or space required to solve larger and larger instances of a problem. We
will associate with the problem an integer, called the size of the problem, which is a measure of
the quantity of input data.

Asymptotic notations
The following notations are commonly used in performance analysis to characterize the complexity of an algorithm:

1. Big-OH (O)
2. Big-OMEGA (Ω)
3. Big-THETA (Θ)

Big-OH (upper bound)

f(n) = O(g(n)) (pronounced "order of" or "big oh") says that the growth rate of f(n) is less than or equal to that of g(n).

Big-OMEGA (lower bound)

f(n) = Ω(g(n)) (pronounced "omega") says that the growth rate of f(n) is greater than or equal to that of g(n).

Big-THETA (same order)

f(n) = Θ(g(n)) (pronounced "theta") says that the growth rate of f(n) equals the growth rate of g(n); that is, f(n) = Θ(g(n)) if f(n) = O(g(n)) and f(n) = Ω(g(n)).
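As a worked example (our addition, using the standard constant-based definition: f(n) = O(g(n)) if there exist positive constants c and n₀ with f(n) ≤ c·g(n) for all n ≥ n₀): consider f(n) = 3n + 2. For all n ≥ 2,

   3n + 2 ≤ 3n + n = 4n,

so taking c = 4 and n₀ = 2 shows 3n + 2 = O(n). Similarly, 3n + 2 ≥ 3n for all n ≥ 1, so 3n + 2 = Ω(n), and hence 3n + 2 = Θ(n).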

Standard notations and common functions

This section reviews some standard mathematical functions and notations and explores the
relationships among them. It also illustrates the use of the asymptotic notations.

Monotonicity

A function f(n) is monotonically increasing if m ≤ n implies f(m) ≤ f(n). Similarly, it is monotonically decreasing if m ≤ n implies f(m) ≥ f(n). A function f(n) is strictly increasing if m < n implies f(m) < f(n) and strictly decreasing if m < n implies f(m) > f(n).

Floors and ceilings

For any real number x, we denote the greatest integer less than or equal to x by ⌊x⌋ (read "the
floor of x") and the least integer greater than or equal to x by ⌈x⌉ (read "the ceiling of x"). For all
real x,
x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1.   (3.3)

For any integer n,

⌈n/2⌉ + ⌊n/2⌋ = n,

and for any real number n ≥ 0 and integers a, b > 0,

⌈⌈n/a⌉/b⌉ = ⌈n/(ab)⌉,   (3.4)
⌊⌊n/a⌋/b⌋ = ⌊n/(ab)⌋,   (3.5)
⌈a/b⌉ ≤ (a + (b − 1))/b,   (3.6)
⌊a/b⌋ ≥ (a − (b − 1))/b.   (3.7)

The floor function f(x) = ⌊x⌋ is monotonically increasing, as is the ceiling function f(x) = ⌈x⌉.
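As a side note of ours (not part of the original text): in C, integer division of nonnegative integers truncates, which gives ⌊n/2⌋ directly, while ⌈n/2⌉ can be obtained as (n + 1) / 2:

#include <stdio.h>

int main(void)
{
    int n = 7;
    int floor_half = n / 2;        /* floor(n/2) for n >= 0 */
    int ceil_half = (n + 1) / 2;   /* ceil(n/2)  for n >= 0 */

    /* checks the identity ceil(n/2) + floor(n/2) = n */
    printf("floor=%d ceil=%d sum=%d\n", floor_half, ceil_half, floor_half + ceil_half);
    return 0;
}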

Modular arithmetic

For any integer a and any positive integer n, the value a mod n is the remainder (or residue) of the quotient a/n:

a mod n = a − ⌊a/n⌋ n.   (3.8)

Given a well-defined notion of the remainder of one integer when divided by another, it is
convenient to provide special notation to indicate equality of remainders. If (a mod n) =
(b mod n), we write a ≡ b (mod n) and say that a is equivalent to b, modulo n. In other
words, a ≡ b (mod n) if a and b have the same remainder when divided by n.
Equivalently, a ≡ b (mod n) if and only if n is a divisor of b - a. We write a ≢ b (mod n) if a is
not equivalent to b, modulo n.
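A note for C programmers (our observation, not part of the original text): the C % operator agrees with the definition above only for nonnegative operands; for negative a it may return a negative remainder, so a nonnegative residue has to be computed explicitly:

#include <stdio.h>

/* Nonnegative residue a mod n, matching the mathematical definition. */
int mod(int a, int n)
{
    int r = a % n;            /* C remainder: may be negative when a < 0 */
    return r < 0 ? r + n : r;
}

int main(void)
{
    printf("%d %d\n", -7 % 3, mod(-7, 3));  /* prints -1 2 */
    return 0;
}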

Polynomials

Given a nonnegative integer d, a polynomial in n of degree d is a function p(n) of the form

p(n) = ∑_{i=0}^{d} a_i n^i,

where the constants a_0, a_1, ..., a_d are the coefficients of the polynomial and a_d ≠ 0. A polynomial is asymptotically positive if and only if a_d > 0. For an asymptotically positive polynomial p(n) of degree d, we have p(n) = Θ(n^d). For any real constant a ≥ 0, the function n^a is monotonically increasing, and for any real constant a ≤ 0, the function n^a is monotonically decreasing. We say that a function f(n) is polynomially bounded if f(n) = O(n^k) for some constant k.

Exponentials

For all real a > 0, m, and n, we have the following identities:

a^0 = 1,
a^1 = a,
a^(−1) = 1/a,
(a^m)^n = a^(mn),
(a^m)^n = (a^n)^m,
a^m · a^n = a^(m+n).

For all n and a ≥ 1, the function a^n is monotonically increasing in n. When convenient, we shall assume 0^0 = 1.

The rates of growth of polynomials and exponentials can be related by the following fact. For all real constants a and b such that a > 1,

lim_{n→∞} n^b / a^n = 0,   (3.9)

from which we can conclude that

n^b = o(a^n).

Thus, any exponential function with a base strictly greater than 1 grows faster than any
polynomial function.

Using e to denote 2.71828..., the base of the natural logarithm function, we have for all real x,

e^x = 1 + x + x^2/2! + x^3/3! + ··· = ∑_{i=0}^{∞} x^i / i!,   (3.10)

where "!" denotes the factorial function defined later in this section. For all real x, we have the inequality

e^x ≥ 1 + x,   (3.11)

where equality holds only when x = 0. When |x| ≤ 1, we have the approximation

1 + x ≤ e^x ≤ 1 + x + x^2.   (3.12)

When x → 0, the approximation of e^x by 1 + x is quite good:

e^x = 1 + x + Θ(x^2).

(In this equation, the asymptotic notation is used to describe the limiting behavior as x → 0 rather than as x → ∞.) We have for all x,

lim_{n→∞} (1 + x/n)^n = e^x.   (3.13)

Logarithms

We shall use the following notations:


lg n = log_2 n (binary logarithm),
ln n = log_e n (natural logarithm),
lg^k n = (lg n)^k (exponentiation),
lg lg n = lg(lg n) (composition).

An important notational convention we shall adopt is that logarithm functions will apply only to
the next term in the formula, so that lg n + k will mean (lg n) + k and not lg(n + k). If we
hold b > 1 constant, then for n > 0, the function logb n is strictly increasing.

For all real a > 0, b > 0, c > 0, and n,

log_b a = log_c a / log_c b,   (3.14)

a = b^(log_b a),
log_c(ab) = log_c a + log_c b,
log_b a^n = n log_b a,
log_b(1/a) = −log_b a,
log_b a = 1 / log_a b,
a^(log_b c) = c^(log_b a),   (3.15)

where, in each equation above, logarithm bases are not 1.

By equation (3.14), changing the base of a logarithm from one constant to another only changes
the value of the logarithm by a constant factor, and so we shall often use the notation "lg n" when
we don't care about constant factors, such as in O-notation. Computer scientists find 2 to be the
most natural base for logarithms because so many algorithms and data structures involve
splitting a problem into two parts.

There is a simple series expansion for ln(1 + x) when |x| < 1:

ln(1 + x) = x − x^2/2 + x^3/3 − x^4/4 + ··· .

We also have the following inequalities for x > −1:

x/(1 + x) ≤ ln(1 + x) ≤ x,   (3.16)

where equality holds only for x = 0.

We say that a function f(n) is polylogarithmically bounded if f(n) = O(lg^k n) for some constant k. We can relate the growth of polynomials and polylogarithms by substituting lg n for n and 2^a for a in equation (3.9), yielding

lim_{n→∞} lg^b n / (2^a)^(lg n) = lim_{n→∞} lg^b n / n^a = 0.

From this limit, we can conclude that

lg^b n = o(n^a)

for any constant a > 0. Thus, any positive polynomial function grows faster than any polylogarithmic function.

Factorials
The notation n! (read "n factorial") is defined for integers n ≥ 0 as

n! = 1                if n = 0,
n! = n · (n − 1)!     if n > 0.

Thus, n! = 1 · 2 · 3 ⋯ n.

A weak upper bound on the factorial function is n! ≤ n^n, since each of the n terms in the factorial product is at most n. Stirling's approximation,

n! = √(2πn) (n/e)^n (1 + Θ(1/n)),   (3.17)

where e is the base of the natural logarithm, gives us a tighter upper bound, and a lower bound as well. One can prove (see Exercise 3.2-3)

n! = o(n^n),
n! = ω(2^n),
lg(n!) = Θ(n lg n),   (3.18)

where Stirling's approximation is helpful in proving equation (3.18). The following equation also holds for all n ≥ 1:

n! = √(2πn) (n/e)^n e^(α_n),   (3.19)

where

1/(12n + 1) < α_n < 1/(12n).   (3.20)

Functional iteration

We use the notation f^(i)(n) to denote the function f(n) iteratively applied i times to an initial value of n. Formally, let f(n) be a function over the reals. For nonnegative integers i, we recursively define

f^(0)(n) = n,
f^(i)(n) = f(f^(i−1)(n))   for i > 0.

For example, if f(n) = 2n, then f^(i)(n) = 2^i n.

The iterated logarithm function

We use the notation lg* n (read "log star of n") to denote the iterated logarithm, which is defined as follows. Let lg^(i) n be as defined above, with f(n) = lg n. Because the logarithm of a nonpositive number is undefined, lg^(i) n is defined only if lg^(i−1) n > 0. Be sure to distinguish lg^(i) n (the logarithm function applied i times in succession, starting with argument n) from lg^i n (the logarithm of n raised to the ith power). The iterated logarithm function is defined as

lg* n = min {i ≥ 0 : lg^(i) n ≤ 1}.


The iterated logarithm is a very slowly growing function:

lg* 2 = 1,
lg* 4 = 2,
lg* 16 = 3,
lg* 65536 = 4,
lg* (2^65536) = 5.

Since the number of atoms in the observable universe is estimated to be about 10^80, which is much less than 2^65536, we rarely encounter an input size n such that lg* n > 5.
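A small C sketch of ours that computes lg* n by repeatedly applying lg until the value drops to 1 or below:

#include <stdio.h>
#include <math.h>

/* Iterated logarithm: the number of times lg must be applied
   to n before the result is <= 1. */
int lg_star(double n)
{
    int i = 0;
    while (n > 1.0) {
        n = log2(n);
        i++;
    }
    return i;
}

int main(void)
{
    printf("lg* 2 = %d\n", lg_star(2.0));         /* 1 */
    printf("lg* 16 = %d\n", lg_star(16.0));       /* 3 */
    printf("lg* 65536 = %d\n", lg_star(65536.0)); /* 4 */
    return 0;
}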

Fibonacci numbers

The Fibonacci numbers are defined by the following recurrence:

F_0 = 0,
F_1 = 1,
F_i = F_(i−1) + F_(i−2)   for i ≥ 2.   (3.21)

Thus, each Fibonacci number is the sum of the two previous ones, yielding the sequence

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ... .

Fibonacci numbers are related to the golden ratio φ and to its conjugate φ̂, which are given by the following formulas:

φ = (1 + √5)/2 = 1.61803...,
φ̂ = (1 − √5)/2 = −0.61803... .   (3.22)

Specifically, we have

F_i = (φ^i − φ̂^i) / √5,   (3.23)

which can be proved by induction (Exercise 3.2-6). Since |φ̂| < 1, we have |φ̂^i|/√5 < 1/√5 < 1/2, so that the ith Fibonacci number F_i is equal to φ^i/√5 rounded to the nearest integer. Thus, Fibonacci numbers grow exponentially.
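A short C sketch of ours verifying, for small i, that rounding φ^i/√5 reproduces the recurrence:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double phi = (1.0 + sqrt(5.0)) / 2.0;  /* golden ratio */
    long f0 = 0, f1 = 1;
    int i;

    for (i = 2; i <= 20; i++) {
        long fi = f0 + f1;  /* the recurrence F_i = F_(i-1) + F_(i-2) */
        long closed = (long)floor(pow(phi, i) / sqrt(5.0) + 0.5);  /* rounded phi^i / sqrt(5) */
        printf("F_%d = %ld (closed form: %ld)\n", i, fi, closed);
        f0 = f1;
        f1 = fi;
    }
    return 0;
}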

Designing algorithms
Many algorithms are recursive in nature: to solve a given problem, they deal recursively with sub-problems.

In the divide and conquer approach, a problem is divided into smaller problems, the smaller problems are solved independently, and finally the solutions of the smaller problems are combined into a solution for the large problem.

Generally, divide-and-conquer algorithms have three parts:

• Divide the problem into a number of sub-problems that are smaller instances of the same problem.
• Conquer the sub-problems by solving them recursively. If they are small enough, solve the sub-problems as base cases.
• Combine the solutions to the sub-problems into the solution for the original problem.

Pros and cons of the Divide and Conquer Approach

The divide and conquer approach supports parallelism, as the sub-problems are independent. Hence, an algorithm designed using this technique can run on a multiprocessor system or on different machines simultaneously.

In this approach, most of the algorithms are designed using recursion, hence memory usage is high: a stack is used for the recursive functions, and the function state needs to be stored on it.

Application of the Divide and Conquer Approach

The following are some problems which are solved using the divide and conquer approach:

• Finding the maximum and minimum of a sequence of numbers
• Strassen's matrix multiplication
• Merge sort
• Binary search

Problem Statement

The problem of sorting a list of numbers lends itself immediately to a divide-and-conquer


strategy: split the list into two halves, recursively sort each half, and then merge the two sorted
sub-lists.

Solution

In this algorithm, the numbers are stored in an array numbers[]. Here, p and r represent the start and end indices of a sub-array, and q is the midpoint.

Algorithm: Merge-Sort (numbers[], p, r)
if p < r then
    q = ⌊(p + r) / 2⌋
    Merge-Sort (numbers[], p, q)
    Merge-Sort (numbers[], q + 1, r)
    Merge (numbers[], p, q, r)

Function: Merge (numbers[], p, q, r)
n1 = q – p + 1
n2 = r – q
declare leftnums[1…n1 + 1] and rightnums[1…n2 + 1] temporary arrays
for i = 1 to n1
    leftnums[i] = numbers[p + i − 1]
for j = 1 to n2
    rightnums[j] = numbers[q + j]
leftnums[n1 + 1] = ∞
rightnums[n2 + 1] = ∞
i = 1
j = 1
for k = p to r
    if leftnums[i] ≤ rightnums[j]
        numbers[k] = leftnums[i]
        i = i + 1
    else
        numbers[k] = rightnums[j]
        j = j + 1

Analysis

Let us consider the running time of Merge-Sort as T(n). Hence,

T(n) = c                  if n ≤ 1,
T(n) = 2 T(n/2) + d·n     otherwise,

where c and d are constants.

Therefore, unrolling this recurrence relation,

T(n) = 2^i T(n/2^i) + i·d·n.

As i = log n,

T(n) = 2^(log n) T(n/2^(log n)) + (log n)·d·n = c·n + d·n·log n.

Therefore, T(n) = O(n log n).

Example

In the following example, we show the Merge-Sort algorithm step by step. First, in every iteration the array is divided into two sub-arrays, until each sub-array contains only one element. When the sub-arrays cannot be divided further, merge operations are performed.
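The document gives C implementations for the later algorithms; for completeness, here is a minimal C sketch of the Merge-Sort pseudocode above (our translation, 0-indexed, with INT_MAX standing in for the ∞ sentinels):

#include <stdio.h>
#include <limits.h>

void merge(int numbers[], int p, int q, int r)
{
    int n1 = q - p + 1, n2 = r - q;
    int leftnums[n1 + 1], rightnums[n2 + 1];
    int i, j, k;

    for (i = 0; i < n1; i++) leftnums[i] = numbers[p + i];
    for (j = 0; j < n2; j++) rightnums[j] = numbers[q + 1 + j];
    leftnums[n1] = INT_MAX;   /* sentinel playing the role of infinity */
    rightnums[n2] = INT_MAX;  /* sentinel playing the role of infinity */

    i = j = 0;
    for (k = p; k <= r; k++) {
        if (leftnums[i] <= rightnums[j])
            numbers[k] = leftnums[i++];
        else
            numbers[k] = rightnums[j++];
    }
}

void merge_sort(int numbers[], int p, int r)
{
    if (p < r) {
        int q = (p + r) / 2;
        merge_sort(numbers, p, q);
        merge_sort(numbers, q + 1, r);
        merge(numbers, p, q, r);
    }
}

int main(void)
{
    int a[] = {5, 2, 4, 7, 1, 3, 2, 6};
    int n = sizeof a / sizeof a[0], i;

    merge_sort(a, 0, n - 1);
    for (i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}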
Algorithm for swapping two numbers using a third variable:

Input: two variables
Output: exchanged values

Step 1: Start
Step 2: Declare variables a, b and c as integers
Step 3: Read two numbers a and b
Step 4: c = a
Step 5: a = b
Step 6: b = c
Step 7: Print a and b
Step 8: Stop

C implementation

#include <stdio.h>

int main()
{
    int a, b, Temp;

    printf("\nPlease Enter the value of a and b\n");
    scanf("%d %d", &a, &b);

    printf("\nBefore : a = %d and b = %d\n", a, b);

    Temp = a;
    a = b;
    b = Temp;

    printf("\nAfter : a = %d and b = %d\n", a, b);

    return 0;
}

Algorithm for Swapping two numbers without using third variable:

STEP 1: START
STEP 2: ENTER A, B
STEP 3: PRINT A, B
STEP 4: A = A + B
STEP 5: B= A - B
STEP 6: A =A - B
STEP 7: PRINT A, B
STEP 8: END

C implementation

#include <stdio.h>
int main()
{
    int a = 10, b = 20;
    printf("Before swap a=%d b=%d", a, b);
    a = a + b;  // a = 30 (10+20)
    b = a - b;  // b = 10 (30-20)
    a = a - b;  // a = 20 (30-10)
    printf("\nAfter swap a=%d b=%d", a, b);
    return 0;
}

Algorithm to get the sum of the digits of a number

Input: a number
Output: the sum of its digits

Step 1: Start
Step 2: Get the number from the user.
Step 3: Get the modulus/remainder of the number (number mod 10).
Step 4: Add the remainder to the sum.
Step 5: Divide the number by 10.
Step 6: Repeat steps 3 to 5 while the number is greater than 0.
Step 7: Stop

C implementation

#include <stdio.h>
int main()
{
    int n, sum = 0, m;
    printf("Enter a number:");
    scanf("%d", &n);
    while (n > 0)
    {
        m = n % 10;     /* extract the last digit */
        sum = sum + m;  /* add it to the running sum */
        n = n / 10;     /* drop the last digit */
    }
    printf("Sum is=%d", sum);
    return 0;
}

Algorithm: sum of n numbers (sum of array elements)

Step 1: Start
Step 2: Read n, the count of numbers to be summed, from the input device (such as the keyboard).
Step 3: Initialize the variable sum as zero.
Step 4: Initialize the variable count as one.
Step 5: While count is less than or equal to n, repeatedly do: read the number at position count (when count is 1 read the first number, when count is 2 read the second number, and so on), and update the current sum by adding the number read to it.
Step 6: Write the sum of the n numbers.
Step 7: Stop.

C implementation

#include <stdio.h>

int main()
{
    // let's assume the maximum array size as 100.
    // initialize sum as 0; otherwise it will take some garbage value.
    int arr[100], size, i, sum = 0;

    // get size input from user
    printf("Enter array size\n");
    scanf("%d", &size);

    // get all elements using a for loop and store them in the array
    printf("Enter array elements\n");
    for (i = 0; i < size; i++)
        scanf("%d", &arr[i]);

    // add all elements to the variable sum
    for (i = 0; i < size; i++)
        sum = sum + arr[i];  // same as sum += arr[i];

    // print the result
    printf("Sum of the array = %d\n", sum);

    return 0;
}

Algorithm for the factorial of a number

Step 1: Start
Step 2: Read a number n
Step 3: Initialize variables: i = 1, fact = 1
Step 4: If i <= n go to step 5, otherwise go to step 7
Step 5: Calculate fact = fact * i
Step 6: Increment i by 1 (i = i + 1) and go to step 4
Step 7: Print fact
Step 8: Stop

C implementation

#include <stdio.h>
int main()
{
    int i, fact = 1, number;
    printf("Enter a number: ");
    scanf("%d", &number);
    for (i = 1; i <= number; i++) {
        fact = fact * i;
    }
    printf("Factorial of %d is: %d", number, fact);
    return 0;
}

Fibonacci Series Algorithm:

Step 1: Start
Step 2: Declare variables i, a, b, show
Step 3: Initialize the variables: a = 0, b = 1, and show = 0
Step 4: Enter the number of terms of the Fibonacci series to be printed
Step 5: Print the first two terms of the series
Step 6: Use a loop to repeat steps 7 to 11
Step 7: show = a + b
Step 8: a = b
Step 9: b = show
Step 10: Increase the value of i by 1 each time
Step 11: Print the value of show
Step 12: End

C implementation

#include <stdio.h>
int main()
{
    int n1 = 0, n2 = 1, n3, i, number;
    printf("Enter the number of elements:");
    scanf("%d", &number);
    printf("\n%d %d", n1, n2);    // printing 0 and 1
    for (i = 2; i < number; ++i)  // loop starts from 2 because 0 and 1 are already printed
    {
        n3 = n1 + n2;
        printf(" %d", n3);
        n1 = n2;
        n2 = n3;
    }
    return 0;
}

Algorithm to print the reverse of a number

Input: num
(1) Initialize rev_num = 0
(2) Loop while num > 0
    (a) rem = num % 10;
        rev_num = rev_num * 10 + rem;
    (b) Divide num by 10;
(3) Return rev_num

C implementation

#include <stdio.h>
int main()
{
    int n, reverse = 0, rem;
    printf("Enter a number: ");
    scanf("%d", &n);
    while (n != 0)
    {
        rem = n % 10;
        reverse = reverse * 10 + rem;
        n /= 10;
    }
    printf("Reversed Number: %d", reverse);
    return 0;
}

Algorithm description: to print the square root of a number

Step 1: Start
Step 2: Establish m, the number whose square root is required
Step 3: Set the initial value i = 0.1
Step 4: Repeat steps 5 and 6 for i = 0.1 to m/2, increasing i by 0.1 each time
Step 5: If i*i = m, go to step 7
Step 6: If i*i > m, set i = i − 0.1 and go to step 7
Step 7: Print i as the (approximate) square root
Step 8: Stop

C implementation

#include <stdio.h>
#include <stdlib.h>

int main()
{
    float m, i;
    printf("enter a number\n");
    scanf("%f", &m);
    /* linear search in steps of 0.1; note that i*i == m rarely holds
       exactly for floats, so the result is an approximation */
    for (i = 0.1; i <= m / 2; i = i + 0.1)
    {
        if (i * i == m)
        {
            break;
        }
        if (i * i > m)
        {
            i = i - 0.1;
            break;
        }
    }
    printf("square root of %f is %f", m, i);
    return 0;
}

Number conversion

Decimal to Other Base System

Steps
• Step 1 − Divide the decimal number to be converted by the value of the new base.
• Step 2 − Record the remainder from Step 1 as the rightmost digit (least significant digit) of the new base number.
• Step 3 − Divide the quotient of the previous division by the new base.
• Step 4 − Record the remainder from Step 3 as the next digit (to the left) of the new base number.
Repeat Steps 3 and 4, recording remainders from right to left, until the quotient becomes zero in Step 3.
The last remainder thus obtained will be the Most Significant Digit (MSD) of the new base number.
Example −

Decimal Number: 29₁₀

Calculating Binary Equivalent −

Step     Operation   Result   Remainder
Step 1   29 / 2      14       1
Step 2   14 / 2      7        0
Step 3   7 / 2       3        1
Step 4   3 / 2       1        1
Step 5   1 / 2       0        1

As mentioned in Steps 2 and 4, the remainders have to be arranged in reverse order, so that the first remainder becomes the Least Significant Digit (LSD) and the last remainder becomes the Most Significant Digit (MSD).

Decimal Number 29₁₀ = Binary Number 11101₂.
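These steps translate directly into a short C sketch (our illustration, for bases 2 to 10; the function name to_base is ours):

#include <stdio.h>

/* Print n in the given base (2..10) by repeated division,
   collecting remainders and emitting them in reverse order. */
void to_base(int n, int base)
{
    char digits[32];
    int count = 0;

    if (n == 0) {
        printf("0");
        return;
    }
    while (n > 0) {
        digits[count++] = '0' + n % base;  /* remainder = next digit */
        n /= base;                         /* divide the quotient by the base */
    }
    while (count > 0)
        putchar(digits[--count]);          /* reverse order: MSD first */
}

int main(void)
{
    to_base(29, 2);  /* prints 11101 */
    printf("\n");
    return 0;
}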

Other Base System to Decimal System

Steps
• Step 1 − Determine the column (positional) value of each digit (this depends on the position of the digit and the base of the number system).
• Step 2 − Multiply the obtained column values (in Step 1) by the digits in the corresponding columns.
• Step 3 − Sum the products calculated in Step 2. The total is the equivalent value in decimal.
Example

Binary Number − 11101₂

Calculating Decimal Equivalent −

Step     Binary Number   Decimal Number
Step 1   11101₂          ((1 × 2⁴) + (1 × 2³) + (1 × 2²) + (0 × 2¹) + (1 × 2⁰))₁₀
Step 2   11101₂          (16 + 8 + 4 + 0 + 1)₁₀
Step 3   11101₂          29₁₀

Binary Number 11101₂ = Decimal Number 29₁₀
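The positional-value computation can likewise be sketched in C (our illustration; the digits are passed as a string in the given base, assumed to be 2 to 10):

#include <stdio.h>

/* Convert a digit string in the given base (2..10) to its decimal value
   by accumulating positional values left to right. */
int to_decimal(const char *s, int base)
{
    int value = 0;
    while (*s) {
        value = value * base + (*s - '0');  /* shift up one column, add digit */
        s++;
    }
    return value;
}

int main(void)
{
    printf("%d\n", to_decimal("11101", 2));  /* prints 29 */
    printf("%d\n", to_decimal("25", 8));     /* prints 21 */
    return 0;
}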

Other Base System to Non-Decimal System

Steps
• Step 1 − Convert the original number to a decimal number (base 10).
• Step 2 − Convert the decimal number so obtained to the new base number.
Example

Octal Number − 25₈

Calculating Binary Equivalent −

Step 1 − Convert to Decimal

Step     Octal Number   Decimal Number
Step 1   25₈            ((2 × 8¹) + (5 × 8⁰))₁₀
Step 2   25₈            (16 + 5)₁₀
Step 3   25₈            21₁₀

Octal Number 25₈ = Decimal Number 21₁₀

Step 2 − Convert Decimal to Binary

Step     Operation   Result   Remainder
Step 1   21 / 2      10       1
Step 2   10 / 2      5        0
Step 3   5 / 2       2        1
Step 4   2 / 2       1        0
Step 5   1 / 2       0        1

Decimal Number 21₁₀ = Binary Number 10101₂

Octal Number 25₈ = Binary Number 10101₂
