WHAT IS A COMPUTER?
Let us start this first lecture by asking the question "what is a computer?" and answering it.
A computer is a device capable of performing computations and making logical decisions at
speeds far greater than human beings can. Even today's personal computers (PCs) can perform
many millions of calculations per second.
Computers process data under the control of sets of instructions called computer programs.
Computer programs can be written for many different purposes, such as the simulation of
scientific problems or forecasting the outcome of economic and social activities. Professional
programs are written by people called computer programmers. Specific programs are often
written by the people who need them; these people do not have to be computer programmers
or scientists, because it is not difficult to learn a computer language and write programs.
Devices such as the keyboard, screen, disks (internal and external), and the processing units
that make up a computer are called hardware. The programs that run on a computer are
referred to as software. As technology has developed, software costs have risen while
hardware costs have fallen; at the beginning the reverse was true.
COMPUTER ORGANISATION
1. Input Unit: This is the "receiving" section of the computer. Both programs and the data on
which these programs will act are given to the computer through this section.
2. Output Unit: This is the "shipping" section of the computer. It takes information that has
been processed by the computer and places it on various output devices to make the
information available for use outside the computer.
3. Memory Unit: This is the rapid access "warehouse" section of the computer. It stores
information that has been entered through the input unit so that the information may be
immediately available for processing. Memory can also hold the information that has already
been processed until the information can be placed on output devices by the output unit.
Memory capacity is referred to as RAM (RANDOM ACCESS MEMORY) capacity.
4. Arithmetic and Logical Unit (ALU): This is the "manufacturing" section of the computer. It
is responsible for performing calculations (addition, subtraction, multiplication and division).
This unit contains the decision mechanism as well. Through this mechanism it compares two
items from the memory unit to determine whether or not they are equal and takes action
accordingly.
5. Central Processing Unit (CPU): This is the "administrative" section of the computer. It is
the computer's coordinator and is responsible for supervising the operation of other sections.
The CPU tells the input unit when information should be read into the memory unit, and tells
the ALU when information from the memory unit should be utilized in calculations, and tells
the output unit when to send the information from the memory unit to certain output devices.
6. Secondary Storage Unit: This is the long-term, high-capacity "warehouse" section of the
computer. Programs and/or data not actively being used by the other units are generally stored
on secondary storage devices. A short example program after this list shows how these units
cooperate in practice.
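As a simple illustration (a minimal C sketch; the variable names are invented for this
example), a small program exercises several of these units: values typed at the keyboard pass
through the input unit into memory, the ALU performs the calculation under the CPU's
direction, and the result reaches the screen through the output unit.

#include <stdio.h>

int main(void)
{
    int a, b, sum;

    /* Input unit: read two numbers from the keyboard into the memory unit */
    printf("Enter two integers: ");
    if (scanf("%d %d", &a, &b) != 2)
        return 1;

    /* ALU, directed by the CPU: perform the calculation */
    sum = a + b;

    /* Output unit: place the result on the screen */
    printf("Sum = %d\n", sum);
    return 0;
}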
Computer applications today require a single machine to perform many operations and the
applications may compete for the resources of the machine. This demands a high degree of
coordination. This coordination is handled by system software known as the operating system
(OS).
Early computers were capable of performing only one job or task at a time. This form of
computer operation is often called single-user batch processing: the computer runs a single
program at a time while processing data in groups, or batches. In batch processing, jobs are
queued up so that as soon as one completes, the next starts. Users often took their jobs to the
computer operators and came back later to collect the results; sometimes the waiting time was
as long as a day. Users had no interaction with the computer during program execution. This
may be acceptable for some applications, but not for all.
As computers became more powerful, it became evident that single-user batch processing
rarely utilized the computer's resources efficiently. It was then realized that many jobs or tasks
could be made to share the resources of the computer to achieve better utilization. This is
called multiprogramming. Multiprogramming involves the simultaneous operation of many
jobs on the same computer.
Distributed computing refers to the use of distributed systems to solve computational
problems. In distributed computing, a problem is divided into many tasks, each of which is
solved by one computer.
Programming languages can be divided into three general types:
1. Machine Languages
2. Assembly Languages
3. High-level Languages
Machine code or machine language is a system of instructions and data executed directly by
a computer's central processing unit. Machine code may be regarded as a primitive (and
cumbersome) programming language or as the lowest-level representation of a compiled
and/or assembled computer program. Every processor or processor family has its own
machine code instruction set.
One typical category of machine instructions is control flow instructions, such as goto,
if ... goto, call, and return.
Instructions are patterns of bits that by physical design correspond to different commands to
the machine.
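For example, on a simple hypothetical machine (not any real processor), the bit pattern
0001 0110 might mean "load the contents of memory cell 6 into the accumulator", with the
first four bits (0001) selecting the operation and the last four bits (0110) naming the memory
cell; a real instruction set assigns its own meanings to such patterns.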
A bit is the basic unit of information in computing and telecommunications; it is the amount
of information that can be stored by a device or other physical system that can normally exist
in only two distinct states. These may be the two stable positions of an electrical switch, two
distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two
directions of magnetization or polarization, etc.
In computing, a bit is defined as a variable or computed quantity that can have only two
possible values. These two values are often interpreted as binary digits and are usually
denoted by the Arabic numerical digits 0 and 1. Indeed, the term "bit" is a contraction of
binary digit. The two values can also be interpreted as logical values (true/false, yes/no),
algebraic signs (+/−), activation states (on/off), or any other two-valued attribute. In several
popular programming languages, numeric 0 is equivalent (or convertible) to logical false, and
1 to true. The correspondence between these values and the physical states of the underlying
storage or device is a matter of convention, and different assignments may be used even
within the same device or program.
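As a brief illustration (a minimal C sketch; the variable name is invented for this example),
C treats 0 as logical false and nonzero values, conventionally 1, as logical true.

#include <stdio.h>

int main(void)
{
    int flag = 0;                       /* a single two-valued quantity stored in an int */

    if (flag)
        printf("flag is true\n");
    else
        printf("flag is false\n");      /* 0 is interpreted as logical false */

    flag = 1;                           /* 1 is interpreted as logical true  */
    if (flag)
        printf("flag is true\n");
    return 0;
}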
In information theory, one bit is typically defined as the uncertainty of a binary random
variable that is 0 or 1 with equal probability, or the information that is gained when the value
of such a variable becomes known.
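For example, learning the outcome of a fair coin toss conveys exactly one bit of information,
because the two outcomes are equally likely beforehand.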
In quantum computing, a quantum bit or qubit is a quantum system that can exist in
superposition of two bit values, "true" and "false".
The symbol for bit, as a unit of information, is "bit" or (lowercase) "b"; the latter being
recommended by the IEEE 1541 Standard.
The instruction set is thus specific to a class of processors using (much) the same architecture.
Successor or derivative processor designs often include all the instructions of a predecessor
and may add additional instructions. Occasionally a successor design will discontinue or alter
the meaning of some instruction code (typically because it is needed for new purposes),
affecting code compatibility to some extent; even nearly completely compatible processors
may show slightly different behavior for some instructions but this is seldom a problem.
Systems may also differ in other details, such as memory arrangement, operating systems, or
peripheral devices; because a program normally relies on such factors, different systems will
typically not run the same machine code, even when the same type of processor is used.
Assembly Languages
A utility program called an assembler is used to translate assembly language statements into
the target computer's machine code. The assembler performs a more or less isomorphic
translation (a one-to-one mapping) from mnemonic statements into machine instructions and
data. This is in contrast with high-level languages, in which a single statement generally
results in many machine instructions.
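As a rough illustration (a sketch only; the steps in the comments are typical, not the exact
output of any particular compiler or assembler), a single high-level statement usually expands
into several machine-level steps, whereas each assembly statement maps to roughly one
instruction.

#include <stdio.h>

int main(void)
{
    int price = 100, tax = 18, total;

    /* This single high-level statement ...                               */
    total = price + tax;
    /* ... typically corresponds to several machine instructions, roughly:
       load price into a register, add tax to that register, and store
       the result back into total. An assembler, by contrast, translates
       each mnemonic statement into about one machine instruction.        */

    printf("total = %d\n", total);
    return 0;
}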
I will briefly discuss binary and hexadecimal representations. The programs that convert these
English-like instructions into their binary representation, so that the machine can understand
them, are called assemblers.
add al,[170]
This instruction means: take the value at the memory location numbered 170 and add it to the
value stored in the AL register. Assembly language is not a user-friendly language, and it is
cumbersome to write programs in it. Programs written in assembly languages are not portable.
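As a brief aside on number representations (a minimal C sketch; the value 170 is simply
reused from the example above), the same number can be written in decimal, hexadecimal, or
binary.

#include <stdio.h>

int main(void)
{
    unsigned int n = 170;              /* the memory cell number from the example above */

    printf("decimal:     %u\n", n);    /* prints 170      */
    printf("hexadecimal: %X\n", n);    /* prints AA       */

    /* print the binary representation, most significant bit first */
    printf("binary:      ");
    for (int i = 7; i >= 0; i--)
        printf("%u", (n >> i) & 1u);   /* prints 10101010 */
    printf("\n");
    return 0;
}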
High Level Languages
A high-level language offers a greater degree of abstraction from the details of the machine.
This abstraction and hiding of details is generally intended to make the language user-friendly,
as it includes concepts from the problem domain instead of those of the machine used. A
high-level language isolates the execution semantics of a computer architecture from the
specification of the program, making the process of developing a program simpler and more
understandable than with a low-level language. The amount of abstraction provided defines
how "high-level" a programming language is.
Programming languages such as C, FORTRAN, Pascal, BASIC, and Java enable a
programmer to write programs that are more or less independent of a particular type of
computer. Such languages are considered high-level because they are closer to human
languages and further from machine languages. The main advantage of high-level languages
over low-level languages is that they are easier to read, write, and maintain. Ultimately,
programs written in a high-level language must be translated into machine language by a
compiler or an interpreter.
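For example, a complete program in a high-level language such as C takes only a few
readable lines; a compiler translates it into the machine code of the target processor (a
minimal sketch):

#include <stdio.h>

int main(void)
{
    /* the compiler translates this readable statement into many
       machine instructions specific to the target processor */
    printf("Hello, world!\n");
    return 0;
}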
The first high-level programming languages were designed in the 1950s. Today there are
dozens of different languages, including Ada, ALGOL, BASIC, COBOL, C, C++, FORTRAN,
LISP, Pascal, and Prolog.