Programming Language 1
Programming Language 1
Programming Language 1
Level-6
S.RANI CLEMENTKING
Lecturer,
DEPARTMENT OF COMPUTER SCIENCE
UNIVERSITY CENTER FOR GIRL’S STUDY,
AL-SAMER, KING KHALID UNIVERSITY
______________________________________________________________________________
1|Page
PROGRAMMING LANGUAGE-278 CS
CONTENTS
CHAPTER PAGE
I.PRELIMINARIES 1
1.1 Reasons for studying concepts of Programming Languages
1.2 Programming Domains.
1.3 Language Evaluation Criteria
1.4 Computer Architecture
1.5 Programming Methodologies
1.6 Language Categories
1.7 Language Trade-Offs
1.8 Implementation Methods
1.8.1 Virtual Computer.
1.8.2 Compilation Process
1.8.3 Pure Interpretation.
1.9 Hybrid Implementation Systems.
______________________________________________________________________________
2|Page
IV.SUPPORT OF OBJECT ORIENTED PROGRAMMING CONCEPTS 49
V.EXCEPTION HANDLING 61
TEXT BOOKS
REFERENCES
2 . Wesely
1. www.computeronline.com.
______________________________________________________________________________
3|Page
Chapter 1 “Preliminaries”
______________________________________________________________________________
4|Page
Programming Domains
• Scientific applications
– In the early 1940s, computers were invented for scientific applications.
– These applications require large numbers of floating-point computations.
– Fortran was the first language developed for scientific applications.
– ALGOL 60 was intended for the same use.
• Business applications
– The first successful language for business was COBOL.
– Produce reports, use decimal arithmetic numbers and characters.
– The arrival of PCs started new ways for businesses to use computers.
– Spreadsheets and database systems were developed for business.
• Artificial intelligence
– Symbols, rather than numbers, are manipulated.
– Symbolic computation is more suitably done with linked lists than with arrays.
– LISP was the first widely used AI programming language.
• Systems programming
– The operating system and all of the programming support tools of a computer system are
collectively known as its system software.
– System software needs to be efficient because it is in almost continuous use.
• Scripting languages
– Put a list of commands, called a script, in a file to be executed.
– PHP is a scripting language used on Web server systems. Its code is embedded in
HTML documents. The code is interpreted on the server before the document is
sent to a requesting browser.
• Special-purpose languages
– RPG is an example of these languages.
Readability
• Software development was largely thought of in terms of writing code (measured in lines of code, “LOC”).
• Language constructs were designed more from the point of view of the computer than of the
users.
• Because ease of maintenance is determined in large part by the readability of programs,
readability became an important measure of the quality of programs and programming
languages. The result is a crossover from focus on machine orientation to focus on
human orientation.
• The most important criterion is “ease of use.”
• Overall simplicity strongly affects readability.
– Too many features make the language difficult to learn. Programmers tend to
learn a subset of the language and ignore its other features. “ALGOL 60”
– Multiplicity of features is also a complicating characteristic “having more than
one way to accomplish a particular operation.”
– Ex “Java”:
count = count + 1
count += 1
count++
++count
______________________________________________________________________________
5|Page
– Although the last two statements have slightly different meaning from each other
and from the others, all four have the same meaning when used as stand-alone
expressions.
– Operator overloading, in which a single operator symbol has more than one meaning, is
another form of multiplicity (a built-in Java case is sketched below).
– Although this is a useful feature, it can lead to reduced readability if users are
allowed to create their own overloadings and do not do so sensibly.
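As a small added illustration (not from the original notes), Java itself overloads the + operator: it means arithmetic addition for numbers and catenation for strings, and the reader must use the context to decide which meaning applies:

public class OverloadDemo {
    public static void main(String[] args) {
        int count = 2;
        System.out.println(count + 3);        // + means integer addition: prints 5
        System.out.println("count " + 3);     // + means string catenation: prints "count 3"
        System.out.println("" + count + 3);   // catenation again: prints "23", not "5"
    }
}

The last line shows why such overloading can hurt readability: the same symbol produces very different results depending on the operand types.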
• Orthogonality
– Makes the language easy to learn and read.
– Meaning is context independent. Pointers should be able to point to any type of
variable or data structure. The lack of orthogonality leads to exceptions to the
rules of the language.
– A relatively small set of primitive constructs can be combined in a relatively
small number of ways to build the control and data structures of the language.
– Every possible combination is legal and meaningful.
– Ex: page 11 in book.
– The more orthogonal the design of a language, the fewer exceptions the language
rules require.
– The most orthogonal programming language is ALGOL 68. Every language
construct has a type, and there are no restrictions on those types.
– This form of orthogonality leads to unnecessary complexity.
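As an added illustration of a lack of orthogonality (an example chosen here, not taken from the text), Java's primitive types are an exception to the rule that any type may be used as a type argument, so primitives do not combine freely with generic collections:

import java.util.ArrayList;
import java.util.List;

public class OrthogonalityDemo {
    public static void main(String[] args) {
        // List<int> counts = new ArrayList<int>();  // illegal: a primitive cannot be a type argument
        List<Integer> counts = new ArrayList<Integer>(); // the wrapper class must be used instead
        counts.add(5);                       // autoboxing converts the int 5 to an Integer
        int[] raw = new int[3];              // arrays, by contrast, do accept primitive element types
        System.out.println(counts.get(0) + raw.length);
    }
}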
• Control Statements
– It became widely recognized that indiscriminate use of goto statements severely
reduced program readability.
– Ex: Consider the following nested loops written in C
while (incr < 20)
{
while (sum <= 100)
{
sum += incr;
}
incr++;
}
______________________________________________________________________________
6|Page
if C didn’t have a loop construct, this would be written as follows:
loop1:
  if (incr >= 20) goto out;
loop2:
  if (sum > 100) goto next;
  sum += incr;
  goto loop2;
next:
  incr++;
  goto loop1;
out: ;
– Basic and Fortran in the early 70s lacked the control statements that allow strong
restrictions on the use of gotos, so writing highly readable programs in those
languages was difficult.
– Since then, languages have included sufficient control structures.
– The control statement design of a language is now a less important factor in
readability than it was in the past.
Syntax Considerations
– The syntax of the elements of a language has a significant effect on readability.
– The following are examples of syntactic design choices that affect readability:
• Identifier forms: Restricting identifiers to very short lengths detracts from
readability. In ANSI BASIC (1978), an identifier could consist only of a
single letter or a single letter followed by a single digit.
• Special Words: Program appearance, and thus program readability, is
strongly influenced by the forms of a language’s special words (ex: while,
class, for). C uses braces for pairing control structures, which makes it
difficult to determine which group is being ended. Fortran 95 allows
programmers to use special words as legal variable names.
• Form and Meaning: Designing statements so that their appearance at least
partially indicates their purpose is an obvious aid to readability.
• Semantics should follow directly from syntax, or form.
• Ex: In C, the meaning of the reserved word static depends on the context of its appearance.
If used on the definition of a variable inside a function, it means the variable is created at
compile time.
If used on the definition of a variable that is outside all functions, it means
the variable is visible only in the file in which its definition appears.
Writability
It is a measure of how easily a language can be used to create programs for a chosen
problem domain.
Most of the language characteristics that affect readability also affect writability.
• Simplicity and orthogonality
______________________________________________________________________________
7|Page
– A smaller number of primitive constructs and a consistent set of rules for
combining them is much better than simply having a large number of primitives.
• Support for abstraction
– Abstraction means the ability to define and then use complicated structures or
operations in ways that allow many of the details to be ignored.
– An example of process abstraction is the use of a subprogram to implement a sort algorithm
that is required several times in a program, instead of replicating the algorithm in all places
where it is needed (a small sketch follows this list).
• Expressivity
– It means that a language has relatively convenient, rather than cumbersome, ways
of specifying computations.
– Ex: ++count ⇔ count = count + 1 // more convenient and shorter
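The process-abstraction point above can be shown with a minimal Java sketch (the names are illustrative, not from the text): the sorting details live in one subprogram and are reused wherever sorting is needed.

import java.util.Arrays;

public class AbstractionDemo {
    // process abstraction: callers use the operation by name and ignore the algorithm's details
    static void sortScores(int[] scores) {
        Arrays.sort(scores);
    }

    public static void main(String[] args) {
        int[] exam1 = {88, 52, 97};
        int[] exam2 = {70, 65};
        sortScores(exam1);   // the same abstraction reused instead of replicating the sort code
        sortScores(exam2);
        System.out.println(Arrays.toString(exam1) + " " + Arrays.toString(exam2));
    }
}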
Reliability
A program is said to be reliable if it performs to its specifications under all conditions.
• Type checking: is simply testing for type errors in a given program, either by the
compiler or during program execution.
– The earlier errors are detected, the less expensive it is to make the required
repairs. Java requires type checking of nearly all variables and expressions at
compile time.
• Exception handling: the ability to intercept run-time errors, take corrective measures,
and then continue is a great aid to reliability.
• Aliasing: it is having two or more distinct referencing methods, or names, for the same
memory cell.
– It is now widely accepted that aliasing is a dangerous feature in a language.
• Readability and writability: Both readability and writability influence reliability.
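A minimal sketch of the aliasing hazard mentioned above (illustrative Java code, not from the text): two reference variables name the same array, so a change made through one alias silently affects the other name as well.

public class AliasDemo {
    public static void main(String[] args) {
        int[] cell = {10, 20, 30};
        int[] alias = cell;          // two distinct names now refer to the same memory
        alias[0] = 99;               // a modification through one name...
        System.out.println(cell[0]); // ...is visible through the other: prints 99
    }
}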
Cost
– Categories
– Training programmers to use language
– Writing programs “Writability”
– Compiling programs
– Executing programs
– Language implementation system: the availability of free compilers was a key to the success of Java.
– Reliability, does the software fail?
– Maintaining programs: Maintenance costs can be as high as two to four times as
much as development costs.
– Portability “standardization of the language”
– Generality (the applicability to a wide range of applications)
______________________________________________________________________________
8|Page
Programming methodologies
• 1950s and early 1960s: Simple applications; worry about machine efficiency
• Late 1960s: People efficiency became important; readability, better control structures
– Structured programming
– Top-down design and step-wise refinement
• Late 1970s: Process-oriented to data-oriented
– data abstraction
• Middle 1980s: Object-oriented programming
______________________________________________________________________________
9|Page
Language Categories
• Imperative
– Central features are variables, assignment statements, and iteration
– C, Pascal
Central Features of Imperative Languages
• Variables: model memory cells
• Assignment statements: model piping
• Iteration is fast on von Neumann computers because instructions are stored in adjacent
cells of memory and repeating the execution of a section of code requires only a simple
branch instruction
• Functional
– Main means of making computations is by applying functions to given parameters
– LISP, Scheme
– A functional language is one in which the primary means of computation is
applying functions to given parameters.
– Programming can be done in a functional language
– without the kind of variables that are used in imperative languages
– without assignment statements and without iteration.
– Although many computer scientists have expounded on the myriad benefits of
functional languages, it is unlikely that they will displace imperative languages
until a non-von Neumann computer is designed that allows efficient execution of
programs in functional languages.
• Logic
– Rule-based
– Rules are specified in no special order
– Prolog
• Object-oriented
– Encapsulate data objects with processing
– Inheritance and dynamic type binding
– Grew out of imperative languages
– C++, Java
Programming Environments
• The collection of tools used in software development
• UNIX
– An older operating system and tool collection
• Borland JBuilder
– An integrated development environment for Java
• Microsoft Visual Studio.NET
– A large, complex visual environment
– Used to program in C#, Visual BASIC.NET, Jscript, J#, or C++
• An operating system and language implementations are layered over the machine
language interface of a computer.
• These layers can be thought of as virtual computers, providing interface to the user at
higher levels.
______________________________________________________________________________
10 | P a g e
– e.g. an OS and a C compiler provide a virtual C computer
• Most computers provide several different virtual computers.
INTERPRETER:
• Programs are interpreted by another program called an interpreter, with no translation
whatever.
• The interpreter program acts as a software simulation of a machine whose fetch-execute
cycle deals with high-level language program statements rather than machine
instructions.
• This software simulation obviously provides a virtual machine for the language.
ADVANTAGES OF INTERPRETER:
• Allows easy implementation of many source-level debugging operations, because all
run-time error messages can refer to source-level units.
– For example, if an array index is found to be out of range, the error message can easily
indicate the source line and the name of the array.
DISADVANTAGES OF INTERPRETER:
______________________________________________________________________________
11 | P a g e
• Slower execution (10 to 100 times slower than compiled programs)
– The decoding of high-level language statements is far more complex than that of
machine language instructions.
– Regardless of how many times a statement is executed, it must be decoded every
time.
– Therefore, statement decoding, rather than the connection between the processor
and memory, is the bottleneck of a pure interpreter.
• Often requires more space
– In addition to the source program, the symbol table must be present during
interpretation.
– The source program may be stored in a form designed for easy access and
modification rather than one that provides for minimal size.
POPULARITY OF INTERPRETATION:
• Some simple early languages of the 1960s (APL, SNOBOL, and LISP) were purely
interpreted.
• By the 1980s, the approach was rarely used on high-level languages.
• In recent years, pure interpretation has made a significant comeback with some Web
scripting languages, such as JavaScript and PHP, which are now widely used.
COMPILATION:
• Translates a high-level program (the source language) into machine code (the machine language).
• Slow translation, fast execution.
PHASES OF COMPILER:
Compiler consists of 2 phases.
1. Analysis Phase
2. Synthesis phase
______________________________________________________________________________
12 | P a g e
1. Analysis Phase:
Analysis Phase performs 3 actions namely
a) Lexical analysis - gathers the characters of the source program into lexical units and produces a
sequence of tokens. Input is the source program; output is tokens.
Lexical units: identifiers, special words, operators and punctuation symbols
b) Syntax analysis - input is the tokens and the output is a parse tree; it transforms lexical units into
parse trees, which represent the syntactic structure of the program
c) Semantic analysis - input is the parse tree and the output is an expanded version of the parse tree
2. Synthesis Phase:
Synthesis Phase performs 3 actions namely
d) Intermediate Code generation - translates the source program into an intermediate-language form.
Semantic analysis is also performed here: it checks for errors that are difficult, if not impossible, to
detect during syntax analysis, such as type errors.
e) Code Optimization - the intermediate code is optimized here before the target program is produced.
Optimization improves programs (usually in their intermediate-code version) by making them smaller
or faster or both, and is often an optional part of compilation.
• Some compilers are incapable of doing any significant optimization.
• Optimization may omit some code in your program or change the execution order of code in
your program.
• P.S.: Sometimes, especially when synchronization between processes is
required, these transformations may create bugs in your programs which
cannot be detected by just checking the source code.
f) Code Generation - this is the final step; the target program, in machine code, is generated here.
______________________________________________________________________________
13 | P a g e
FIGURE: COMPILATION PHASES
HYBRID IMPLEMENTATION SYSTEMS
• A compromise between compilers and pure interpreters
• A high-level language program is translated to an intermediate language that allows easy
interpretation
• Faster than pure interpretation
• Perl programs are partially compiled to detect errors before interpretation to simplify the
interpreter.
• Initial implementations of Java were all hybrid.
• its intermediate form, byte code, provides portability to any machine that has a byte code
interpreter and a run-time system (together, these are called Java Virtual Machine)
• There are now systems that translate Java byte code into machine code for faster
execution.
Introduction
Who must use language definitions (descriptions of a P/L)?
o Other language designers, who scrutinize descriptions of new languages.
o Programming language implementors, who must determine how the language's constructs are
formed and what their intended effects are when executed.
o Programmers (the users of the language), through textbooks, manuals, and courses.
Syntax - the form or structure of the expressions, statements, and program units.
Semantics - the meaning of the expressions, statements, and program units.
Ex:
while (<Boolean_expr>)<statement>
The semantics of this statement form is that when the current value of the Boolean
expression is true, the embedded statement is executed.
The form of a statement should strongly suggest what the statement is meant to accomplish.
Ex: index = 2 * count + 17;
Lexemes Tokens
index identifier
= equal_sign
2 int_literal
* mult_op
count identifier
+ plus_op
17 int_literal
; Semicolon
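A minimal sketch (illustrative code, not part of the original notes) of how a lexical analyzer might produce the lexeme/token pairs listed above for the statement index = 2 * count + 17;:

import java.util.ArrayList;
import java.util.List;

public class ToyLexer {
    public static void main(String[] args) {
        String source = "index = 2 * count + 17 ;";   // lexemes separated by spaces for simplicity
        List<String> lexemes = new ArrayList<String>();
        for (String piece : source.split("\\s+")) {
            lexemes.add(piece);
        }
        for (String lex : lexemes) {
            System.out.println(lex + "\t" + tokenOf(lex));
        }
    }

    // classify a lexeme into one of the token categories shown in the table above
    static String tokenOf(String lex) {
        if (lex.matches("[0-9]+")) return "int_literal";
        if (lex.matches("[A-Za-z_][A-Za-z0-9_]*")) return "identifier";
        if (lex.equals("=")) return "equal_sign";
        if (lex.equals("*")) return "mult_op";
        if (lex.equals("+")) return "plus_op";
        if (lex.equals(";")) return "semicolon";
        return "unknown";
    }
}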
______________________________________________________________________________
15 | P a g e
Language Recognizers
Languages can, in general, be formally defined in two distinct ways: by recognition and by generation.
o The syntax analysis part of a compiler is a recognizer for the language the compiler
translates.
o Recognizers determine whether given programs are in the language.
Syntax analyzers determine whether given programs are syntactically correct.
Context-free Grammars
– Developed by Noam Chomsky in the mid-1950s who described four classes of generative
devices or grammars that define four classes of languages.
– Context-free and regular grammars are useful for describing the syntax of P/Ls.
– Tokens of P/Ls can be described by regular grammars.
– Whole P/Ls can be described by context-free grammars.
Fundamentals
<stmt> → <single_stmt>
| begin <stmt_list> end
______________________________________________________________________________
16 | P a g e
– Multiple definitions can be written as a single rule, with the different definitions
separated by the symbol |, meaning logical OR.
Describing Lists
<ident_list> → ident
| ident, <ident_list>
• The sentences of the language are generated through a sequence of applications of the
rules, beginning with a special nonterminal of the grammar called the start symbol.
• A derivation is a repeated application of rules, starting with the start symbol and ending
with a sentence (all terminal symbols)
• An example grammar:
<program> → <stmts>
<stmts> → <stmt> | <stmt> ; <stmts>
<stmt> → <var> = <expr>
<var> → a | b | c | d
<expr> → <term> + <term> | <term> - <term>
<term> → <var> | const
• An example derivation:
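For instance, a leftmost derivation of the sentence a = b + const (the sentence whose parse tree is shown below) proceeds as follows:
<program> => <stmts>
          => <stmt>
          => <var> = <expr>
          => a = <expr>
          => a = <term> + <term>
          => a = <var> + <term>
          => a = b + <term>
          => a = b + const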
Parse Trees
Parse tree for the sentence a = b + const:
<program>
  <stmts>
    <stmt>
      <var>   =   <expr>
        a         <term>  +  <term>
                  <var>        const
                    b
Ambiguity
– A grammar is ambiguous iff it generates a sentential form that has two or more
distinct parse trees.
– Ex: Two distinct parse trees for the same sentence, const – const / const
<expr> → <expr> <op> <expr> | const
<op> → / | -
______________________________________________________________________________
18 | P a g e
– Ex: Two distinct parse trees for the same sentence, A = B + A * C
Operator Precedence
– The fact that an operator in an arithmetic expression is generated lower in the parse
tree can be used to indicate that it has higher precedence than an operator generated
higher up in the tree.
– In the left parse tree above, one can conclude that the * operator has precedence over
the + operator. How about the tree on the right-hand side?
– An unambiguous grammar for expressions uses a separate nonterminal for each precedence
level (the BNF grammar shown in the Extended BNF section below has this form), so that
A = B + C * A
has only one parse tree.
______________________________________________________________________________
19 | P a g e
Associativity of Operators
– Do parse trees for expressions with two or more adjacent occurrences of operators
with equal precedence have those occurrences in proper hierarchical order?
– An example of an assignment using the previous grammar is:
A = B + C + A
– The figure above shows the left + operator lower than the right + operator. This is the
correct order if the + operator is meant to be left associative, which is typical.
______________________________________________________________________________
20 | P a g e
Extended BNF
Because of minor inconveniences in BNF, it has been extended in several ways. EBNF
extensions do not enhance the descriptive powers of BNF; they only increase its readability
and writability.
Optional parts are placed in brackets ([ ])
Put alternative parts of RHSs in parentheses and separate them with vertical bars
BNF:
<expr> → <expr> + <term>
| <expr> - <term>
| <term>
<term> → <term> * <factor>
| <term> / <factor>
| <factor>
EBNF:
<expr> → <term> {(+ | -) <term>}
<term> → <factor> {(* | /) <factor>}
Axiomatic Semantics
Axiomatic Semantics was defined in conjunction with the development of a method to prove
the correctness of programs.
Such correction proofs, when they can be constructed, show that a program performs the
computation described by its specification.
In a proof, each statement of a program is both preceded and followed by a logical
expression that specified constraints on program variables.
Approach: Define axioms or inference rules for each statement type in the language (to allow
transformations of expressions to other expressions.)
The expressions are called assertions.
Assertions
Axiomatic semantics is based on mathematical logic. The logical expressions are called
predicates, or assertions.
______________________________________________________________________________
21 | P a g e
An assertion before a statement (a precondition) states the relationships and constraints
among variables that are true at that point in execution.
An assertion following a statement is a postcondition.
A weakest precondition is the least restrictive precondition that will guarantee the
validity of the associated postcondition.
An example: a = b + 1 {a > 1}
Substituting b + 1 for a in the postcondition gives b + 1 > 1, so the weakest precondition is {b > 0}.
If the weakest precondition can be computed from the given postcondition for each
statement of a language, then correctness proofs can be constructed for programs in that
language.
Program proof process: The postcondition for the whole program is the desired result.
Work back through the program to the first statement. If the precondition on the first
statement is the same as the program spec, the program is correct.
An Axiom is a logical statement that is assumed to be true.
An Inference Rule is a method of inferring the truth of one assertion on the basis of the
values of other assertions.
Assignment Statements
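In general (this is the standard axiom for assignment, stated here because it is the rule used by the examples that follow), the weakest precondition of an assignment x = E with postcondition {Q} is obtained by substituting E for every occurrence of x in Q:

{Q with every x replaced by E}  x = E  {Q},   i.e.,   wp(x = E, Q) = Q[x → E]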
Ex:
a = b / 2 – 1 {a < 10}
b / 2 – 1 < 10
b / 2 < 11
b < 22
∴ the weakest precondition for the given assignment and the postcondition is {b < 22}
An assignment statement has a side effect if it changes some variable other than its left
side.
______________________________________________________________________________
22 | P a g e
Ex:
x = 2 * y – 3 {x > 25}
2 * y – 3 > 25
2 * y > 28
y > 14
∴ the weakest precondition for the given assignment and the postcondition is {y > 14}
Ex:
x = x + y – 3 {x > 10}
x + y – 3 > 10
y > 13 – x
Sequences
y = 3 * x + 1;
x = y + 3; {x < 10}
For the second statement: y + 3 < 10, so y < 7.
Using {y < 7} as the postcondition of the first statement: 3 * x + 1 < 7, so 3 * x < 6, so x < 2.
∴ the weakest precondition for the sequence is {x < 2}
Computing the weakest precondition (wp) for a while loop is inherently more difficult
than for a sequence b/c the number of iterations can’t always be predetermined.
The corresponding step in the axiomatic semantics of a while loop is finding an assertion
called a loop invariant, which is crucial to finding the weakest precondition.
It is helpful to treat the process of producing the wp as a function, wp.
To find I, we use the loop postcondition to compute preconditions for several different
numbers of iterations of the loop body, starting with none. If the loop body contains a
single assignment statement, the axiom for assignment statements can be used to compute
these cases.
Ex: (for a loop of the form while y <> x do y = y + 1 end with postcondition {y = x})
It is now obvious that {y < x} will suffice for cases of one or more iterations. Combining
this with {y = x} for the zero-iterations case, we get {y <= x}, which can be used as the loop
invariant.
Ex:
s is a nonnegative power of 2.
The loop invariant I is a weakened version of the loop postcondition, and it is also a
precondition.
I must be weak enough to be satisfied prior to the beginning of the loop, but when
combined with the loop exit condition, it must be strong enough to force the truth of
the postcondition.
______________________________________________________________________________
24 | P a g e
Chapter -3
Data Types
Data types that are not defined in terms of other types are called primitive types.
Some of these types are merely reflections of the hardware (e.g., integers); others require only a
little non-hardware support for their implementation.
Integer
Almost always a direct reflection of the hardware, so the mapping is trivial
There may be as many as eight different integer types in a language
Integers are represented as a string of bits (positive or negative)
Most computers now use two's complement notation to store negative integers (see any
assembly language programming book)
Floating Point
Model real numbers, but only as approximations
Languages for scientific use support at least two floating-point types; sometimes more
Usually modeled exactly on the hardware, but not always; some languages allow accuracy
specifications in code.
IEEE floating-point formats:
______________________________________________________________________________
25 | P a g e
a) Single precision,
b) Double precision
Decimal
- For business applications (money)
- Advantage: accuracy
They are stored very much like character strings, using binary coded decimal (BCD) codes for the digits.
Boolean
First introduced in ALGOL 60
Most general-purpose languages have a Boolean type
Could be implemented as bits, but often implemented as bytes
- Advantage: readability
Character Type
Character data are stored in computers as numeric codings.
ASCII: 128 different characters (0 .. 127)
A 16-bit character set named Unicode was developed to include the characters of most of the
world's natural languages.
______________________________________________________________________________
26 | P a g e
Character String Types
A character string type is one in which the values consist of sequences of characters.
Design issues:
1. Is it a primitive type or just a special kind of array?
Operations:
- Assignment
- Comparison (=, >, etc.)
- Catenation
- Substring reference
- Pattern matching
Examples:
Pascal - not primitive; assignment and comparison only (of packed arrays)
FORTRAN 77, FORTRAN 90, SNOBOL4 and BASIC
- somewhat primitive: assignment, comparison, catenation, substring reference
FORTRAN and SNOBOL4 have an intrinsic operator for pattern matching
C, C++, Ada - not primitive; in C and C++, strcpy, strcat, strlen, strcmp are commonly used string
manipulation library functions (string.h); in Ada, N := N1 & N2 (catenation) and N(2..4) (substring
reference)
JAVA: strings are supported by the String class (constant strings) and the
StringBuffer class (changeable strings)
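A small Java sketch of the operations listed above (assignment, comparison, catenation, substring reference) using the String and StringBuffer classes; the variable names are illustrative:

public class StringDemo {
    public static void main(String[] args) {
        String fixed = "abc";              // String objects are constant (immutable)
        String longer = fixed + "def";     // catenation builds a new String object
        System.out.println(longer.substring(1, 4));   // substring reference: prints "bcd"

        StringBuffer changeable = new StringBuffer("abc"); // StringBuffer values can change
        changeable.append("def");          // modified in place
        System.out.println(changeable.toString());
        System.out.println(fixed.compareTo("abd"));   // comparison: negative, zero, or positive
    }
}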
There are several design choices regarding the length of string values
1. Static length string (length is specified at the declaration time) - FORTRAN 90, Ada,
COBOL, Pascal
CHARACTER (LEN = 15) NAME; (FORTRAN 90)
Static length strings are always full
______________________________________________________________________________
27 | P a g e
2. Limited dynamic length strings can store any number of characters between 0 and a
maximum (specified by the variable's declaration)
- C and C++
- actual length is indicated by a null character
Enumeration Types
- One in which the user enumerates all of the possible values, which
are symbolic constants
Design Issue:
Should a symbolic constant be allowed to be in more than one type definition?
Examples:
Pascal
- cannot reuse constants; they can be used for array subscripts, for variables, case selectors; NO
input or output; can be compared
Ada
- constants can be reused (overloaded literals); ambiguities are resolved from context or with a
qualification of the form type_name'(literal); they can be used as in Pascal;
CAN be input and output
C and C++ -
______________________________________________________________________________
28 | P a g e
like Pascal, except that the values can be input and output as integers
Java does not include an enumeration type, but one can be implemented as a class (see the sketch below).
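A minimal sketch of the "enumeration as a class" idea (the class name Color is illustrative; note that later versions of Java also added a built-in enum construct):

public final class Color {
    public static final Color RED   = new Color("RED");
    public static final Color GREEN = new Color("GREEN");
    public static final Color BLUE  = new Color("BLUE");

    private final String name;

    private Color(String name) {   // private constructor: no values other than the constants above can be created
        this.name = name;
    }

    public String toString() {
        return name;
    }
}

A variable of type Color can then hold only one of the three named constants (e.g., Color c = Color.RED;), and values can be compared with ==.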
Subrange Type:
Subrange Type - an ordered contiguous subsequence of an ordinal type
Examples
Pascal - subrange types behave as their parent types; they can be used for variables and array
indices
e.g. type pos = 0 .. MAXINT;
Ada - Subtypes are not new types, just constrained existing types (so they are compatible); can
be used as in Pascal, plus case constants
subtype INDEX is INTEGER range 1 .. 100;
All of the operations defined for the parent type are also defined for the subtype, except assignment of
values outside the subrange.
Arrays
2. Fixed stack dynamic - range of subscripts is statically bound, but storage is bound at
elaboration time
e.g., Pascal locals and C locals that are not declared static
______________________________________________________________________________
29 | P a g e
Advantage: space efficiency
(Example) In a C function,
void func() {
int a[5];
...
}
3. Stack-dynamic - range and storage are dynamic, but once the subscript ranges are bound and
the storage is allocated they are fixed from then on for the variable’s lifetime
declare
STUFF : array (1..N) of FLOAT;
begin
...
end;
Advantage: flexibility - size need not be known until the array is about to be used
4. Heap-dynamic – The binding of subscript ranges and storage allocation is dynamic and can
change any number of times during the array’s life time
e.g. (FORTRAN 90
INTEGER, ALLOCATABLE, ARRAY (:,:) :: MAT
(Declares MAT to be a dynamic 2-dim array)
ALLOCATE (MAT (10, NUMBER_OF_COLS))
(Allocates MAT to have 10 rows and
NUMBER_OF_COLS columns)
DEALLOCATE MAT
Example
#include <iostream>
using namespace std;

int main() {
    char *array;
    int N;
    cout << "Type an array size:\n";
    cin >> N;
    array = new char[N];   // storage for the array is allocated on the heap at run time
    delete [] array;       // the storage is later reclaimed
    return 0;
}
Implementation of Arrays
An access function maps subscript expressions to an address in the array, using either row major
order (by rows) or column major order (by columns).
Column major order is used in FORTRAN, but most other languages use row major order.
Location of the [i, j] element of a matrix (row major order):
location(a[i, j]) = address of a[1, 1] + (i - 1) * (size of a row) + (j - 1) * (element size)
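For instance (an illustrative case, not from the text): for a 10 × 20 matrix of 4-byte elements with a[1, 1] stored at address 1000,
location(a[3, 5]) = 1000 + (3 - 1) * (20 * 4) + (5 - 1) * 4 = 1000 + 160 + 16 = 1176.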
______________________________________________________________________________
31 | P a g e
Associative Arrays
An associative array is an unordered collection of data elements that are indexed by an equal
number of values called keys.
- Each element of an associative array is in fact a pair of entities: a key and a value.
- Design Issues:
1. What is the form of references to elements?
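A small sketch of an associative array using Java's HashMap class (the key and value names are illustrative, not from the text); elements are referenced by key rather than by position:

import java.util.HashMap;
import java.util.Map;

public class AssocArrayDemo {
    public static void main(String[] args) {
        // each element is a (key, value) pair
        Map<String, Integer> salaries = new HashMap<String, Integer>();
        salaries.put("Perry", 57000);        // key "Perry" maps to value 57000
        salaries.put("Deborah", 48000);
        System.out.println(salaries.get("Perry"));           // reference by key
        salaries.remove("Deborah");                          // elements can be removed by key
        System.out.println(salaries.containsKey("Deborah")); // prints false
    }
}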
Records
- Records were first introduced in COBOL in 1960; since then almost all languages have supported
them (except pre-90 FORTRANs). In object-oriented languages, the class construct supports records.
______________________________________________________________________________
32 | P a g e
(fragment of a C++ record (struct) example: p_john is a pointer to the record john)
p_john = &john;
cout << p_john->grade << endl;
}
einstein> g++ simple_struct.C
einstein> a.out
A
Record Operations
1. Assignment
- Pascal, Ada, and C allow it if the types are identical
- In Ada, the RHS can be an aggregate constant
2. Initialization
- Allowed in Ada, using an aggregate constant
3. Comparison
- In Ada, = and /=; one operand can be an aggregate constant
4. MOVE CORRESPONDING
- In COBOL - it moves all fields in the source record to fields with the same names in the
destination record
1. Access to array elements is much slower than access to record fields, because subscripts are
dynamic (field names are static)
2. Dynamic subscripts could be used with record field access, but it would disallow type
checking and it would be much slower.
______________________________________________________________________________
33 | P a g e
Records are implemented using a compile-time descriptor; no run-time descriptors are needed.
Unions
A union is a type whose variables are allowed to store values of different types at different times
during execution
Examples:
1. FORTRAN provides free unions through EQUIVALENCE; in C and C++ the union construct
provides free unions
2. Algol 68 - discriminated unions
______________________________________________________________________________
34 | P a g e
- Use a hidden tag to maintain the current type
- Tag is implicitly set by assignment
- References are legal only in conformity clauses
(see book example p. 231)
- This runtime type selection is a safe method of accessing union objects
______________________________________________________________________________
35 | P a g e
Sets
A set is a type whose variables can store unordered collections of distinct values from some
ordinal type called base type
Design Issue:
What is the maximum number of elements in any set base type?
Examples:
1. Pascal
- no maximum size in the language definition (not portable; poor
writability if the implementation maximum is too small)
- operations: union (+), intersection (*), difference (-), =, <>, superset
2. Ada - does not include sets, but defines in as a set membership operator for all enumeration
types
3. Java - includes classes for set operations (see the sketch below)
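A minimal sketch of the Java set classes mentioned above, using java.util.HashSet to express the usual set operations (union, intersection, difference, and membership):

import java.util.HashSet;
import java.util.Set;

public class SetDemo {
    public static void main(String[] args) {
        Set<Integer> a = new HashSet<Integer>();
        Set<Integer> b = new HashSet<Integer>();
        a.add(1); a.add(2); a.add(3);
        b.add(2); b.add(3); b.add(4);

        Set<Integer> union = new HashSet<Integer>(a);
        union.addAll(b);        // union of a and b
        Set<Integer> inter = new HashSet<Integer>(a);
        inter.retainAll(b);     // intersection of a and b
        Set<Integer> diff = new HashSet<Integer>(a);
        diff.removeAll(b);      // difference a - b

        System.out.println(union + " " + inter + " " + diff);
        System.out.println(a.contains(2));   // membership test: prints true
    }
}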
Evaluation
______________________________________________________________________________
36 | P a g e
- If a language does not have sets, they must be simulated, either with enumerated types or with
arrays
- Arrays are more flexible than sets, but have much slower operations
Implementation
- Usually stored as bit strings and use logical operations for the set operations.
Pointers
A pointer type is a type in which the range of values consists of memory
addresses and a special value, nil (or null) The value nil indicates that a
pointer cannot currently be used to reference another object. In C and
C++, a value 0 is used as nil.
Uses:
1. Addressing flexibility ( indirect addressing )
2. Dynamic storage management ( access to heap)
Design Issues:
1. What is the scope and lifetime of pointer variables?
2. What is the lifetime of heap-dynamic variables?
3. Are pointers restricted to pointing at a particular type?
4. Are pointers used for dynamic storage management, indirect addressing, or both?
5. Should a language support pointer types, reference types, or both?
Example
______________________________________________________________________________
37 | P a g e
(Example in C++)
int j;
int *ptr; // pointer to integer variables
...
j = *ptr;
(Example 1) dangling.C - creating a dangling pointer
#include <iostream>
using namespace std;

int main() {
    int *x, *y;
    x = new int;
    *x = 777;
    delete x;      // the heap cell holding 777 is deallocated; x is now a dangling pointer
    y = new int;   // the freed cell may be reused for y...
    *y = 999;      // ...so this assignment can overwrite the cell x still points to
    cout << *x << endl;   // undefined behavior: may print 999 instead of 777
    return 0;
}
______________________________________________________________________________
39 | P a g e
Chapter 4
Support for Object-Oriented Programming
Introduction
Many object-oriented programming (OOP) languages
– Some support procedural and data-oriented programming (e.g., Ada 95 and C++)
– Newer languages do not support other paradigms but are built on the same basic imperative
structures (e.g., Java and C#)
Object-Oriented Programming
• Inheritance
• Polymorphism
Object-Oriented Concepts
• The class from which another class inherits is a parent class or superclass
• The entire collection of methods of an object is called its message protocol or message
interface
______________________________________________________________________________
40 | P a g e
• Messages have two parts--a method name and the destination object
• In the simplest case, a class inherits all of the entities of its parent
– A class can also hide entities from its clients while allowing its subclasses to see them
______________________________________________________________________________
41 | P a g e
• Nested Classes
• Initialization of Objects.
Initialization of Objects
• How are parent-class members initialized when a subclass object is created?
– If objects behave like ADTs, they can be allocated from anywhere
• allocated from the run-time stack
• explicitly created on the heap (via new)
– If they are all heap-dynamic, references can be uniform through a pointer or reference
variable
• simplifies assignment - dereferencing can be implicit
• Include an imperative-style typing system for primitives but make everything else objects
– Advantage - fast operations on simple objects and a relatively small typing system
– Disadvantage - still some confusion because of the two type systems
Dynamic Binding:
• A polymorphic variable can be defined in a class that is able to reference (or point to)
objects of the class and objects of any of its descendants
• When a class hierarchy includes classes that override methods and such methods are called
through a polymorphic variable, the binding to the correct method will be dynamic
• Allows software systems to be more easily extended during both development and
maintenance.
• An abstract method is one that does not include a definition (it only defines a protocol)
• Dynamic Binding
– A method can be defined to be virtual, which means that they can be called through
polymorphic variables and dynamically bound to messages
– A class that has at least one pure virtual function is an abstract class
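A minimal Java sketch of dynamic binding through a polymorphic variable (the class names are illustrative; in Java, instance methods are dynamically bound by default rather than being marked virtual as in C++):

class Shape {
    public double area() { return 0.0; }               // overridable method
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }   // overrides the parent method
}

public class DynamicBindingDemo {
    public static void main(String[] args) {
        Shape s = new Circle(2.0);     // polymorphic variable of the parent class type
        System.out.println(s.area());  // binds to Circle.area() at run time
    }
}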
Inheritance:
• Inheritance allows new classes to be defined in terms of existing ones, i.e., by allowing them to
inherit common parts
• Inheritance addresses two concerns: reusing ADTs after minor changes, and defining classes in a
hierarchy.
______________________________________________________________________________
43 | P a g e
Single and Multiple Inheritance:
• Multiple inheritance allows a new class to inherit from two or more classes
– Potential inefficiency - dynamic binding costs more with multiple inheritance (but
not much)
• Inheritance in C++:
– A class need not be the subclass of any class.
– Private members are visible only in the class and to friends; deriving with private access
disallows subclasses from being subtypes.
class base_class {
private:
int a;
float x;
protected:
int b;
float y;
public:
int c;
______________________________________________________________________________
44 | P a g e
float z;
};
Polymorphism may require dynamic type checking of parameters and the return value
• General characteristics of Java
– All data are objects except the primitive types
– All primitive types have wrapper classes that store one data value
– All objects are heap-dynamic, are referenced through reference variables, and most
are allocated with new
– A finalize method is implicitly called when the garbage collector is about to reclaim
the storage occupied by the object.
Inheritance
– Single inheritance supported only, but there is an abstract class category that
provides some of the benefits of multiple inheritance (interface)
– An interface can include only method declarations and named constants, e.g.,
______________________________________________________________________________
45 | P a g e
public interface Comparable<T> {
    public int compareTo(T b);
}
– Static binding is also used if the method is static or private, both of which disallow
overriding
• All are hidden from all classes in their package, except for the nesting class
Evaluation
– No parentless classes
______________________________________________________________________________
46 | P a g e
JavaScript
– Little in common with Java
– Dynamic typing
– An object has a collection of properties which are either data properties or method
properties
– A bare object can be created with new and a call to the constructor for Object
var my_object = new Object();
JavaScript Evaluation
– A scripting language
• No inheritance
______________________________________________________________________________
47 | P a g e
Chapter 5
Exception Handling
The try / catch blocks
try {
//statements – one of which is capable of throwing an exception
}
catch (ExceptionTypeName objName) {
//one or more statements to execute if this exception occurs
}
finally {
______________________________________________________________________________
48 | P a g e
//statements to be executed whether or not exception occurs
}
An optional finally block can be added at the end of the catch blocks to provide a set of
statements that are always executed whether or not an exception occurs.
Exceptions C++
Exceptions provide a way to handle the errors generated by our programs by
transferring control to functions called handlers.
To catch exceptions, we place the code for which we want exception handling in a
try block. If an exception occurs, control is passed to the matching handler;
otherwise the handlers are ignored.
The code to be executed, that may produce exceptions, is placed in the try
block and the error handlers are declared with the keyword catch.
Example
#include <iostream>
using namespace std;
int main ()
{
try
{
throw 10;
}
catch (int e)
{
cout << "We have a problem!!" << endl;
}
return 0;
}
Output: We have a problem!!
Throw
______________________________________________________________________________
49 | P a g e
The throw expression accepts one parameter as its argument and this is
passed to the exception handler. You can have a number of throw statements
at different parts of your try block with different values being thrown so that
the exception handler on receiving the parameter will know what restorative
actions to take.
try
{ // code
if ( x )
throw 10;
// code
if (y)
throw 20;
//code
}
catch
The exception handler can be identified by the keyword catch . catch always
takes only one parameter. The type of the catch parameter is important as the
type of the argument passed by the throw expression is checked against it
and the catch function with the correct parameter type is executed. This way
we can chain multiple exception handlers and only the one with the correct
parameter type gets executed.
try { // code here }
catch (int param) {
cout << "int exception";
}
catch (char param) {
cout << "char exception";
}
______________________________________________________________________________
50 | P a g e
catch (...) {
cout << "default exception";
}
The catch(…) handler catches all exceptions, no matter what type. It can be
used as a default handler if declared last.
Points to remember
You can nest the try-catch blocks.
After exception handling the program returns to the point after the try-catch
block, not after the throw statement.
Exception Handling- java
Most of Java’s I/O classes (and many others) throw exceptions.
For example, consider a function in which the message readLine( ) is sent to a
BufferedReader object.
import java.io.*; //needed for streams and IOException
public class ExceptionalExample {
public static void main(String [ ] args) throws IOException
{
InputStreamReader isr = new InputStreamReader(System.in);
BufferedReader br = new BufferedReader(isr);
System.out.println("Enter a number");
String str = br.readLine( );
The statement “throws IOException” is required when using readLine( ). It must
be included in the header of any function in which readLine( ) appears.
If we fail to include the “throws IOException” clause in the main function header
we will see the following error reported when we compile the program
ExceptionalExample:
c:\javacode>javac ExceptionalExample.java
ExceptionalExample.java:10: unreported exception java.io.IOException; must be
caught or declared to be thrown
______________________________________________________________________________
51 | P a g e
String str = readLine( );
1 error
c:\javacode>
Method readLine( ) throws an IOException. Any function that uses it must either
throw the same exception (and pass the buck to the operating system to handle the
exception) or catch the exception and provide its own error handling methodology.
In the previous example we chose the first strategy.
Types of exceptions
Checked exceptions – inability to acquire system resources (such as insufficient
memory, file does not exist)
Java checks at compile time that some mechanism is explicitly in place to receive
and process an exception object that may be created during runtime due to one of
these exceptions occurring.
Unchecked exceptions – exceptions that occur because of the user entering bad
data, or failing to enter data at all.
Unchecked exceptions can be avoided by writing more robust code that protects
against bad input values. Java does not check at compile time to ensure that there
is a mechanism in place to handle such errors.
It is often preferred to use Java’s exception handling capabilities to handle bad user
input rather than trying to avoid such circumstances by providing user-input
validation in the code.
The exception hierarchy (partial)
______________________________________________________________________________
52 | P a g e
Example
Consider implementing the BufferedReader readLine( ) example with try-catch
blocks.
import java.io.*;
public class ExceptionalExample {
public static void main(String [ ] args) //no throws clause used here
{
InputStreamReader isr = new InputStreamReader(System.in);
BufferedReader br = new BufferedReader(isr);
System.out.println("Enter a number");
try {
String str = br.readLine( );
double num = Double.parseDouble(str);
}
//add the catch blocks
catch (IOException ioe) {
//print a message to the screen
System.out.println(ioe.toString( ));
}
catch (Exception e) {
//catch any other exception and print a message to the screen
System.out.println(e.toString( ));
}
finally {
System.exit(0);
}
}
}
______________________________________________________________________________
54 | P a g e
Points to remember
After exception handling the program returns to the point after the try-catch block, not after
the throw statement.
______________________________________________________________________________
55 | P a g e