Programming Language 1


KING KHALID UNIVERSITY

DEPARTMENT OF INFORMATION SYSTEMS AND COMPUTER SCIENCE


COLLEGE OF COMPUTER SCIENCE,
UNIVERSITY CENTER FOR GIRL’S STUDY, AL-SAMER,
ABHA, SAUDI ARABIA

Learning Material for the subject

PROGRAMMING LANGUAGE I- 278 CS

Level-6

Prepared by the Instructor

S.RANI CLEMENTKING
Lecturer,
DEPARTMENT OF COMPUTER SCIENCE
UNIVERSITY CENTER FOR GIRL’S STUDY,
AL-SAMER, KING KHALID UNIVERSITY

Even Semester, 1433-1434

PROGRAMMING LANGUAGE-278 CS

CONTENTS

CHAPTER

I. PRELIMINARIES
1.1 Reasons for Studying Concepts of Programming Languages
1.2 Programming Domains
1.3 Language Evaluation Criteria
1.4 Computer Architecture
1.5 Programming Methodologies
1.6 Language Categories
1.7 Language Trade-Offs
1.8 Implementation Methods
1.8.1 Virtual Computers
1.8.2 Compilation Process
1.8.3 Pure Interpretation
1.9 Hybrid Implementation Systems

II. DESCRIBING SYNTAX AND SEMANTICS

2.1 Introduction to Syntax and Semantics
2.2 The General Problem of Describing Syntax
2.2.1 Language Recognizers
2.2.2 Language Generators
2.3 Formal Methods of Describing Syntax
2.3.1 Backus-Naur Form
2.3.2 Context-Free Grammars
2.4 Attribute Grammars
2.4.1 Static Semantics
2.4.2 Attribute Grammars Defined
2.4.3 Intrinsic Attributes
2.4.4 Examples of Attribute Grammars
2.4.5 Computing Attribute Values
2.4.6 Evaluation
2.5 Dynamic Semantics

III. DATA TYPES

3.1 Introduction to Data Types
3.2 Primitive Data Types
3.3 Character String Types
3.4 User-Defined Ordinal Types
3.5 Types of Arrays
3.6 Associative Arrays
3.7 Record Types
3.8 Union Types
3.9 Set Types
3.10 Pointer Types

IV. SUPPORT FOR OBJECT-ORIENTED PROGRAMMING CONCEPTS

4.1 Introduction to Object-Oriented Programming
4.2 Design Issues for Object-Oriented Languages
4.3 Support for Object-Oriented Programming in C++
4.4 Support for Object-Oriented Programming in Java
4.5 The Object-Oriented Model of JavaScript

V. EXCEPTION HANDLING

5.1 Introduction to Exception Handling
5.2 Exception Handling in PL/I
5.3 Exception Handling in C++
5.4 Exception Handling in Java

TEXT BOOKS

1. Sebesta, R. W. (2002). Concepts of Programming Languages, 5th and 6th Editions. Addison-Wesley.

REFERENCES

1. Sebesta, R. W. (2002). Concepts of Programming Languages, 5th and 6th Editions. Addison-Wesley.
2. www.computeronline.com

Chapter 1 “Preliminaries”

Reasons for Studying Concepts of Programming Languages

• Increased ability to express ideas.


• It is believed that the depth at which we think is influenced by the expressive
power of the language in which we communicate our thoughts. It is difficult for
people to conceptualize structures they can’t describe, verbally or in writing.
• The language in which they develop S/W places limits on the kinds of control
structures, data structures, and abstractions they can use.
• Awareness of a wider variety of P/L features can reduce such limitations in S/W
development.
• Can language constructs be simulated in other languages that do not support those
constructs directly?

• Improved background for choosing appropriate languages


• Many programmers, when given a choice of languages for a new project, continue
to use the language with which they are most familiar, even if it is poorly suited to
new projects.
• If these programmers were familiar with other languages available, they would be
in a better position to make informed language choices.

• Greater ability to learn new languages


• Programming languages are still in a state of continuous evolution, which means
continuous learning is essential.
• Programmers who understand the concepts of OO programming will have an easier
time learning Java.
• Once a thorough understanding of the fundamental concepts of languages is
acquired, it becomes easier to see how concepts are incorporated into the design
of the language being learned.

• Understand significance of implementation


• Understanding of implementation issues leads to an understanding of why
languages are designed the way they are.
• This in turn leads to the ability to use a language more intelligently, as it was
designed to be used.

• Ability to design new languages


• The more languages you know, the better your understanding of programming
language concepts becomes.

• Overall advancement of computing


• In some cases, a language became widely used, at least in part, because those in
positions to choose languages were not sufficiently familiar with P/L concepts.
• Many believe that ALGOL 60 was a better language than FORTRAN; however,
FORTRAN was the most widely used. This is attributed to the fact that the programmers
and managers didn’t understand the conceptual design of ALGOL 60.
• Do you think IBM has something to do with it?

Programming Domains

• Scientific applications
– In the early 1940s computers were invented for scientific applications.
– These applications require large numbers of floating-point computations.
– Fortran was the first language developed for scientific applications.
– ALGOL 60 was intended for the same use.
• Business applications
– The first successful language for business was COBOL.
– Produce reports, use decimal arithmetic numbers and characters.
– The arrival of PCs started new ways for businesses to use computers.
– Spreadsheets and database systems were developed for business.
• Artificial intelligence
– Symbolic rather than numeric computations are manipulated.
– Symbolic computation is more suitably done with linked lists than arrays.
– LISP was the first widely used AI programming language.
• Systems programming
– The O/S and all of the programming support tools are collectively known as its
system software.
– Need efficiency because of continuous use.
• Scripting languages
– Put a list of commands, called a script, in a file to be executed.
– PHP is a scripting language used on Web server systems. Its code is embedded in
HTML documents. The code is interpreted on the server before the document is
sent to a requesting browser.
• Special-purpose languages
– RPG is an example of these languages.

Language Evaluation Criteria

Readability
• Software development was largely thought of in terms of writing code (“LOC”, lines of code).
• Language constructs were designed more from the point of view of the computer than the
users.
• Because ease of maintenance is determined in large part by the readability of programs,
readability became an important measure of the quality of programs and programming
languages. The result is a crossover from focus on machine orientation to focus on
human orientation.
• The most important criterion is “ease of use.”
• Overall simplicity strongly affects readability:
– Too many features make the language difficult to learn. Programmers tend to
learn a subset of the language and ignore its other features. “ALGOL 60”
– Multiplicity of features is also a complicating characteristic “having more than
one way to accomplish a particular operation.”
– Ex (Java):
count = count + 1
count += 1
count++
++count

– Although the last two statements have slightly different meanings from each other
and from the others, all four have the same meaning when used as stand-alone
expressions.
– Operator overloading, where a single operator symbol has more than one meaning, is
another form of multiplicity.
– Although it is a useful feature, operator overloading can lead to reduced readability if users
are allowed to create their own overloadings and do not do it sensibly.

• Orthogonality
– Makes the language easy to learn and read.
– Meaning is context independent. Pointers should be able to point to any type of
variable or data structure. The lack of orthogonality leads to exceptions to the
rules of the language.
– A relatively small set of primitive constructs can be combined in a relatively
small number of ways to build the control and data structures of the language.
– Every possible combination is legal and meaningful.
– Ex: page 11 in book.
– The more orthogonal the design of a language, the fewer exceptions the language
rules require.
– The most orthogonal programming language is ALGOL 68. Every language
construct has a type, and there are no restrictions on those types.
– This form of orthogonality leads to unnecessary complexity.

• Control Statements
– It became widely recognized that indiscriminate use of goto statements severely
reduced program readability.
– Ex: Consider the following nested loops written in C:
while (incr < 20)
{
while (sum <= 100)
{
sum += incr;
}
incr++;
}

If C didn’t have a loop construct, this would be written as follows:

loop1:
if (incr >= 20) goto out;
loop2:
if (sum > 100) goto next;
sum += incr;
goto loop2;
next:
incr++;
goto loop1;
out: ;

– Basic and Fortran in the early 70s lacked the control statements that allow strong
restrictions on the use of gotos, so writing highly readable programs in those
languages was difficult.
– Since then, languages have included sufficient control structures.
– The control statement design of a language is now a less important factor in
readability than it was in the past.

• Data Types and Structures


– The presence of adequate facilities for defining data types and data structures in a
language is another significant aid to readability.
– Ex: a Boolean type; timeout = true is more readable than timeout = 1.

• Syntax Considerations
– The syntax of the elements of a language has a significant effect on readability.
– The following are examples of syntactic design choices that affect readability:
• Identifier forms: Restricting identifiers to very short lengths detracts from
readability. In ANSI BASIC (1978) an identifier could consist only of a
single letter or a single letter followed by a single digit.
• Special Words: Program appearance, and thus program readability, is
strongly influenced by the forms of a language’s special words, e.g., while,
class, for. C uses braces for pairing control structures, which makes it difficult to
determine which group is being ended. Fortran 95 allows programmers to
use special words as legal variable names.
• Form and Meaning: Designing statements so that their appearance at least
partially indicates their purpose is an obvious aid to readability.
• Semantic should follow directly from syntax, or form.
• Ex: In C, the meaning of the reserved word static depends on the context of its
appearance (a small sketch follows below).
If used on a variable declared inside a function, it means the variable’s storage is created at
compile time and persists for the whole run of the program.
If used on the definition of a variable that is outside all functions, it means
the variable is visible only in the file in which its definition appears.
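A minimal C/C++ sketch of the two uses of static just described (the variable and function names are illustrative, not from the text):

static int file_count = 0;      // outside all functions: visible only within this source file

void tally() {
    static int call_count = 0;  // inside a function: storage created once; the value
                                // persists across calls to tally()
    call_count++;
}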

Writability
• It is a measure of how easily a language can be used to create programs for a chosen
problem domain.
• Most of the language characteristics that affect readability also affect writability.
• Simplicity and orthogonality
– A smaller number of primitive constructs and a consistent set of rules for
combining them is much better than simply having a large number of primitives.
• Support for abstraction
– Abstraction means the ability to define and then use complicated structures or
operations in ways that allow many of the details to be ignored.
– A process abstraction is the use of a subprogram to implement a sort algorithm
that is required several times in a program instead of replicating it in all places
where it is needed.
• Expressivity
– It means that a language has relatively convenient, rather than cumbersome, ways
of specifying computations.
– Ex: ++count ⇔ count = count + 1 // more convenient and shorter

Reliability
• A program is said to be reliable if it performs to its specifications under all conditions.
• Type checking: is simply testing for type errors in a given program, either by the
compiler or during program execution.
– The earlier errors are detected, the less expensive it is to make the required
repairs. Java requires type checking of nearly all variables and expressions at
compile time.
• Exception handling: the ability to intercept run-time errors, take corrective measures,
and then continue is a great aid to reliability.
• Aliasing: it is having two or more distinct referencing methods, or names, for the same
memory cell.
– It is now widely accepted that aliasing is a dangerous feature in a language.
• Readability and writability: Both readability and writability influence reliability.

Cost
– Categories
– Training programmers to use language
– Writing programs “Writability”
– Compiling programs
– Executing programs
– Language implementation system: the availability of free compilers was a key to the
success of Java
– Reliability: does the software fail?
– Maintaining programs: Maintenance costs can be as high as two to four times as
much as development costs.
– Portability “standardization of the language”
– Generality (the applicability to a wide range of applications)

Influences on Language Design


• Computer architecture: Von Neumann
• We use imperative languages, at least in part, because we use von Neumann machines
– Data and programs stored in same memory
– Memory is separate from CPU
– Instructions and data are piped from memory to CPU
– Results of operations in the CPU must be moved back to memory
– Basis for imperative languages
• Variables model memory cells

• Assignment statements model piping
• Iteration is efficient

Programming methodologies
• 1950s and early 1960s: Simple applications; worry about machine efficiency
• Late 1960s: People efficiency became important; readability, better control structures
– Structured programming
– Top-down design and step-wise refinement
• Late 1970s: Process-oriented to data-oriented
– data abstraction
• Middle 1980s: Object-oriented programming

Language Categories
• Imperative
– Central features are variables, assignment statements, and iteration
– C, Pascal
Central Features of Imperative Languages
• Variables: model memory cells
• Assignment statements: model piping
• Iteration is fast on von Neumann computers because instructions are stored in adjacent
cells of memory and repeating the execution of a section of code requires only a simple
branch instruction

• Functional
– Main means of making computations is by applying functions to given parameters
– LISP, Scheme
– A functional language is one in which the primary means of computation is
applying functions to given parameters.
– Programming can be done in a functional language
– without the kind of variables that are used in imperative languages
– without assignment statements and without iteration.
– Although many computer scientists have expounded on the myriad benefits of
functional languages, it is unlikely that they will displace imperative languages
until a non-von Neumann computer is designed that allows efficient execution of
programs in functional languages.

• Logic
– Rule-based
– Rules are specified in no special order
– Prolog
• Object-oriented
– Encapsulate data objects with processing
– Inheritance and dynamic type binding
– Grew out of imperative languages
– C++, Java

Programming Environments
• The collection of tools used in software development
• UNIX
– An older operating system and tool collection
• Borland JBuilder
– An integrated development environment for Java
• Microsoft Visual Studio.NET
– A large, complex visual environment
– Used to program in C#, Visual BASIC.NET, Jscript, J#, or C++

VIRTUAL COMPUTERS:
• An operating system and language implementations are layered over the machine
language interface of a computer.
• These layers can be thought of as virtual computers, providing interfaces to the user at
higher levels.
– e.g., an OS and a C compiler provide a virtual C computer
• Most computers provide several different virtual computers.

INTERPRETER:
• Programs are interpreted by another program called an interpreter, with no translation
whatever.
• The interpreter program acts as a software simulation of a machine whose fetch-execute
cycle deals with high-level language program statements rather than machine
instructions.
• This software simulation obviously provides a virtual machine for the language.

ADVANTAGES OF INTERPRETER:
• Allowing easy implementation of many source-level debugging operations, because all
run-time error messages can refer to source-level units.
– For example, if an array index is found to be out of range, the error message can easily
indicate the source line and the name of the array.

DISADVANTAGES OF INTERPRETER:
• Slower execution (10 to 100 times slower than compiled programs)
– The decoding of high-level language statements is far more complex than that of
machine language instructions.
– Regardless of how many times a statement is executed, it must be decoded every
time.
– Therefore, statement decoding, rather than the connection between the processor
and memory, is the bottleneck of a pure interpreter.
– Often requires more space.
– In addition to the source program, the symbol table must be present during
interpretation
– The source program may be stored in a form designed for easy access and
modification rather than one that provides for minimal size.

POPULARITY OF INTERPRETATION:

• Some simple early languages of the 1960s (APL, SNOBOL, and LISP) were purely
interpreted.
• By the 1980s, the approach was rarely used on high-level languages.
• In recent years, pure interpretation has made a significant comeback with some Web
scripting languages, such as JavaScript and PHP, which are now widely used.

FIGURE: PURE INTERPRETATION


COMPILATION PROCESS

• Translate high-level program (source language) into machine code (machine language)
• Slow translation, fast execution
PHASES OF COMPILER:
A compiler consists of two phases:
1. Analysis Phase
2. Synthesis phase

1. Analysis Phase:
The Analysis Phase performs three actions, namely:

a) Lexical analysis - gathers the characters of the source program into lexical units and produces a
sequence of tokens. The input is the source program and the output is tokens.
Lexical units: identifiers, special words, operators and punctuation symbols.
b) Syntax analysis - the input is the tokens and the output is a parse tree; it transforms lexical units into
parse trees, which represent the syntactic structure of the program.
c) Semantic analysis - the input is a parse tree and the output is an expanded version of the parse tree.

2. Synthesis Phase:
The Synthesis Phase performs three actions, namely:
d) Intermediate code generation - translates the source program into a program in an intermediate
language. Semantic checks are also performed here: errors that are difficult, if not impossible, to detect
during syntax analysis, such as type errors, are detected.

e) Code optimization - the intermediate code is improved (usually in its intermediate-code version) by
making it smaller or faster or both, to get a better target program; optimization is often an optional part
of compilation.
• Some compilers are incapable of doing any significant optimization.
• Optimization may
– omit some code in your program or change the execution order of code in your
program
• P.S.: Sometimes, especially when synchronization between processes is
required, the above results may create some bugs in your programs which
cannot be detected by just checking the source code.

f) Code Generation - this is the final step & here the target program code is generated. Machine
code is generated.
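As a small worked illustration of these phases (the statement and the three-address intermediate form below are assumptions made for illustration, not taken from the text), consider compiling the assignment a = b + c * 2;

- Lexical analysis produces the tokens: identifier (a), assignment operator, identifier (b), plus operator, identifier (c), multiplication operator, integer literal (2), semicolon.
- Syntax analysis builds a parse tree in which * appears lower than +, so c * 2 is grouped first.
- Intermediate code generation might emit three-address code such as:
    t1 = c * 2
    t2 = b + t1
    a  = t2
- Code optimization could, for example, remove unneeded temporaries.
- Code generation translates the optimized intermediate code into machine instructions.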

FIGURE: COMPILATION PHASES
HYBRID IMPLEMENTATION SYSTEMS
• A compromise between compilers and pure interpreters
• A high-level language program is translated to an intermediate language that allows easy
interpretation
• Faster than pure interpretation
• Perl programs are partially compiled to detect errors before interpretation and to simplify the
interpreter.
• Initial implementations of Java were all hybrid.
• its intermediate form, byte code, provides portability to any machine that has a byte code
interpreter and a run-time system (together, these are called Java Virtual Machine)
• There are now systems that translate Java byte code into machine code for faster
execution.

FIGURE: HYBRID IMPLEMENTATION SYSTEMS


Chapter 2 “Describing Syntax and Semantics”

Introduction
• Who must use language definitions “descriptions of P/L”?
o Other language designers “scrutinized by potential users.”
o Programming language implementors “determine how definitions are formed, and
their intended effects when executed.”
o Programmers (the users of the language use textbooks, manuals, and courses.)
• Syntax - the form or structure of the expressions, statements, and program units.
• Semantics - the meaning of the expressions, statements, and program units.
Ex:
while (<Boolean_expr>) <statement>
• The semantics of this statement form is that when the current value of the Boolean
expression is true, the embedded statement is executed.
• The form of a statement should strongly suggest what the statement is meant to accomplish.

The General Problem of Describing Syntax


• A sentence “statement” is a string of characters over some alphabet. The syntax rules of a
language specify which strings of characters from the language’s alphabet are in the
language.
• A language is a set of sentences.
• A lexeme is the lowest-level syntactic unit of a language. It includes identifiers, literals,
operators, and special words (e.g., *, sum, begin). A program is a string of lexemes.
• A token is a category of lexemes (e.g., identifier). An identifier is a token that has lexemes,
or instances, such as sum and total.
• Ex:
index = 2 * count + 17;

Lexemes Tokens
index identifier
= equal_sign
2 int_literal
* mult_op
count identifier
+ plus_op
17 int_literal
; Semicolon

Language Recognizers
• In general, a language can be formally defined in two distinct ways: by recognition and by
generation.

Language Recognizers: used in compilers.

o The syntax analysis part of a compiler is a recognizer for the language the compiler
translates.
o They determine whether given programs are in the language.

Syntax Analyzers: determine whether the given programs are syntactically correct.

Language Generators: generate the sentences of a language.

Formal Methods of Describing Syntax

Backus-Naur Form and Context-Free Grammars


• BNF is a syntax description formalism that became the most widely used method for describing P/L syntax.

Context-free Grammars

– Developed by Noam Chomsky in the mid-1950s who described four classes of generative
devices or grammars that define four classes of languages.
– Context-free and regular grammars are useful for describing the syntax of P/Ls.
– Tokens of P/Ls can be described by regular grammars.
– Whole P/Ls can be described by context-free grammars.

Backus-Naur Form (1959)

– Invented by John Backus to describe ALGOL 58 syntax.

– BNF is equivalent to context-free grammars used for describing syntax.

Fundamentals

– A metalanguage is a language used to describe another language “Ex: BNF.”


– In BNF, abstractions are used to represent classes of syntactic structures--they act like
syntactic variables (also called nonterminal symbols)

<while_stmt> → while ( <logic_expr> ) <stmt>

– This is a rule; it describes the structure of a while statement


– A rule has a left-hand side (LHS) “The abstraction being defined” and a right-hand side
(RHS) “consists of some mixture of tokens, lexemes and references to other
abstractions”, and consists of terminal and nonterminal symbols.
– A grammar is a finite nonempty set of rules and the abstractions are called nonterminal
symbols, or simply nonterminals.
– The lexemes and tokens of the rules are called terminal symbols or terminals.
– An abstraction (or nonterminal symbol) can have more than one RHS

<stmt> → <single_stmt>
| begin <stmt_list> end
– Multiple definitions can be written as a single rule, with the different definitions
separated by the symbol |, meaning logical OR.

Describing Lists

• Syntactic lists are described using recursion.

<ident_list> → ident
| ident, <ident_list>

• A rule is recursive if its LHS appears in its RHS.

Grammars and derivations

• The sentences of the language are generated through a sequence of applications of the
rules, beginning with a special nonterminal of the grammar called the start symbol.
• A derivation is a repeated application of rules, starting with the start symbol and ending
with a sentence (all terminal symbols)
• An example grammar:

<program> → <stmts>
<stmts> → <stmt> | <stmt> ; <stmts>
<stmt> → <var> = <expr>
<var> → a | b | c | d
<expr> → <term> + <term> | <term> - <term>
<term> → <var> | const

• An example derivation:

<program> => <stmts> => <stmt>


=> <var> = <expr> => a = <expr>
=> a = <term> + <term>
=> a = <var> + <term>
=> a = b + <term>
=> a = b + const

• Every string of symbols in the derivation, including <program>, is a sentential form.


• A sentence is a sentential form that has only terminal symbols.
• A leftmost derivation is one in which the leftmost nonterminal in each sentential form is
the one that is expanded. The derivation continues until the sentential form contains no
nonterminals.
• A derivation may be neither leftmost nor rightmost.

Attribute Grammars

An attribute grammar is a context-free grammar that has been extended to provide
context-sensitive information by appending attributes to some of its nonterminals. Each distinct
symbol in the grammar has associated with it a finite, possibly empty, set of attributes.
• Each attribute has a domain of possible values.
• An attribute may be assigned values from its domain during parsing.
• Attribute values computed during parsing can be used for:
1. Static semantics specification
2. Compiler design (static semantics checking)

Parse Trees

3. Hierarchical structures of the language are called parse trees.

4. A parse tree for the simple statement a = b + const:

<program>
  <stmts>
    <stmt>
      <var>   =   <expr>
        a        <term>  +  <term>
                 <var>       const
                   b
Ambiguity

5. A grammar is ambiguous iff it generates a sentential form that has two or more
distinct parse trees.
6. Ex: Two distinct parse trees for the same sentence, const - const / const
<expr> → <expr> <op> <expr> | const
<op> → / | -

7. Ex: Two distinct parse trees for the same sentence, A = B + A * C

<assign> → <id> = <expr>

<id> → A | B | C
<expr> → <expr> + <expr>
       | <expr> * <expr>
       | ( <expr> )
       | <id>

Operator Precedence
8. The fact that an operator in an arithmetic expression is generated lower in the parse
tree can be used to indicate that it has higher precedence than an operator produced
higher up in the tree.
9. In the left parse tree above, one can conclude that the * operator has precedence over
the + operator. How about the tree on the right-hand side?
10. An unambiguous Grammar for Expressions

<assign> → <id> = <expr>

<id> → A | B | C
<expr> → <expr> + <term>
       | <term>
<term> → <term> * <factor>
       | <factor>
<factor> → ( <expr> )
         | <id>

A = B + C * A
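The unambiguous grammar above maps naturally onto a recursive-descent recognizer, one function per nonterminal, with the left-recursive rules rewritten in the iterative (EBNF-like) form shown later in this chapter. The following C++ sketch is an illustration only (single-character identifiers A, B, C are assumed); it reports whether an input such as A=B+C*A is a sentence of the grammar.

// Minimal recursive-descent recognizer for:
//   <assign> -> <id> = <expr>
//   <expr>   -> <term> { + <term> }        (iterative form of <expr> -> <expr> + <term> | <term>)
//   <term>   -> <factor> { * <factor> }    (iterative form of <term> -> <term> * <factor> | <factor>)
//   <factor> -> ( <expr> ) | <id>
//   <id>     -> A | B | C
#include <iostream>
#include <string>

static std::string input;   // the sentence being recognized
static size_t pos = 0;      // current position in the input

bool expr();                // forward declaration (needed by factor)

bool id() {
    if (pos < input.size() && (input[pos] == 'A' || input[pos] == 'B' || input[pos] == 'C')) {
        ++pos;
        return true;
    }
    return false;
}

bool factor() {
    if (pos < input.size() && input[pos] == '(') {       // <factor> -> ( <expr> )
        ++pos;
        if (!expr()) return false;
        if (pos < input.size() && input[pos] == ')') { ++pos; return true; }
        return false;
    }
    return id();                                          // <factor> -> <id>
}

bool term() {                                             // <factor> { * <factor> }
    if (!factor()) return false;
    while (pos < input.size() && input[pos] == '*') {
        ++pos;
        if (!factor()) return false;
    }
    return true;
}

bool expr() {                                             // <term> { + <term> }
    if (!term()) return false;
    while (pos < input.size() && input[pos] == '+') {
        ++pos;
        if (!term()) return false;
    }
    return true;
}

bool assign() {                                           // <id> = <expr>
    return id() && pos < input.size() && input[pos++] == '=' && expr();
}

int main() {
    input = "A=B+C*A";
    bool ok = assign() && pos == input.size();
    std::cout << (ok ? "valid" : "invalid") << std::endl;  // prints: valid
    return 0;
}

Because * is handled inside term(), which is called from expr(), multiplications are grouped below additions, mirroring the precedence encoded in the grammar.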

Associativity of Operators

11. Do parse trees for expressions with two or more adjacent occurrences of operators
with equal precedence have those occurrences in proper hierarchical order?
12. An example of an assignment using the previous grammar is:

A = B + C + A

13. A parse tree for this sentence (figure not reproduced here) has the left + operator lower than the
right + operator. This is the correct order if the + operator is meant to be left associative, which is
typical.

Extended BNF

• Because of minor inconveniences in BNF, it has been extended in several ways. EBNF
extensions do not enhance the descriptive power of BNF; they only increase its readability
and writability.
• Optional parts are placed in brackets ([ ])

<proc_call> -> ident [ ( <expr_list> ) ]

• Put alternative parts of RHSs in parentheses and separate them with vertical bars

<term> -> <term> (+ | -) const

• Put repetitions (0 or more) in braces ({ })

<ident> -> letter {letter | digit}

BNF:
<expr> → <expr> + <term>
| <expr> - <term>
| <term>
<term> → <term> * <factor>
| <term> / <factor>
| <factor>

EBNF:
<expr> → <term> {(+ | -) <term>}
<term> → <factor> {(* | /) <factor>}

Describing the Meanings of Programs: Dynamic Semantics

Axiomatic Semantics

• Axiomatic semantics was defined in conjunction with the development of a method to prove
the correctness of programs.
• Such correctness proofs, when they can be constructed, show that a program performs the
computation described by its specification.
• In a proof, each statement of a program is both preceded and followed by a logical
expression that specifies constraints on program variables.
• Approach: define axioms or inference rules for each statement type in the language (to allow
transformations of expressions to other expressions).
• The expressions are called assertions.

Assertions

• Axiomatic semantics is based on mathematical logic. The logical expressions are called
predicates, or assertions.

• An assertion before a statement (a precondition) states the relationships and constraints
among variables that are true at that point in execution.
• An assertion following a statement is a postcondition.
• A weakest precondition is the least restrictive precondition that will guarantee the
validity of the associated postcondition.

Pre-post form: {P} statement {Q}

An example: a = b + 1 {a > 1}

One possible precondition: {b > 10}


Weakest precondition: {b > 0}

• If the weakest precondition can be computed from the given postcondition for each
statement of a language, then correctness proofs can be constructed for programs in that
language.
• Program proof process: the postcondition for the whole program is the desired result.
Work back through the program to the first statement. If the precondition on the first
statement is the same as the program spec, the program is correct.
• An axiom is a logical statement that is assumed to be true.
• An inference rule is a method of inferring the truth of one assertion on the basis of the
values of other assertions.

Assignment Statements

• Ex:

a = b / 2 - 1 {a < 10}

The weakest precondition is computed by substituting b / 2 - 1 for a in the assertion {a < 10}, as
follows:

b / 2 - 1 < 10
b / 2 < 11
b < 22

∴ the weakest precondition for the given assignment and the postcondition is {b < 22}
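In general, for an assignment statement x = E and postcondition Q, the weakest precondition is obtained by substituting E for x in Q. In standard Hoare-logic notation (not part of the original text):

\{\, Q_{x \to E} \,\}\;\; x = E \;\;\{\, Q \,\}

where Q_{x \to E} denotes Q with every occurrence of x replaced by the expression E. The derivation of {b < 22} above is exactly this substitution followed by algebraic simplification.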

• An assignment statement has a side effect if it changes some variable other than its left
side.
• Ex:

x = 2 * y - 3 {x > 25}
2 * y - 3 > 25
2 * y > 28
y > 14

∴ the weakest precondition for the given assignment and the postcondition is {y > 14}

• Ex:

x = x + y - 3 {x > 10}
x + y - 3 > 10
y > 13 - x

Sequences

• The weakest precondition for a sequence of statements cannot be described by an axiom,
because the precondition depends on the particular kinds of statements in the sequence.
• In this case, the precondition can only be described with an inference rule.
• Ex:

y = 3 * x + 1;
x = y + 3; {x < 10}

y + 3 < 10
y < 7

3 * x + 1 < 7
3 * x < 6
x < 2

Logical Pretest Loops

• Computing the weakest precondition (wp) for a while loop is inherently more difficult
than for a sequence b/c the number of iterations can’t always be predetermined.
• The corresponding step in the axiomatic semantics of a while loop is finding an assertion
called a loop invariant, which is crucial to finding the weakest precondition.
• It is helpful to treat the process of producing the wp as a function, wp.
• To find I, we use the loop postcondition to compute preconditions for several different
numbers of iterations of the loop body, starting with none. If the loop body contains a
single assignment statement, the axiom for assignment statements can be used to compute
these cases.

wp(statement, postcondition) = precondition

• Ex:

while y <> x do y = y + 1 end {y = x}

For 0 iterations, the wp is {y = x}


For 1 iteration,
wp(y = y + 1, {y = x}) = {y + 1 = x} = {y = x - 1}
For 2 iterations,
wp(y = y + 1, {y = x - 1}) = {y + 1 = x - 1} = {y = x - 2}
For 3 iterations,
wp(y = y + 1, {y = x - 2}) = {y + 1 = x - 2} = {y = x - 3}

• It is now obvious that {y < x} will suffice for cases of one or more iterations. Combining
this with {y = x} for the 0 iterations case, we get {y <= x}, which can be used for the loop
invariant.

• Ex:

while s > 1 do s = s / 2 end {s = 1}

For 0 iterations, the wp is {s = 1}

For 1 iteration,
wp(s = s / 2, {s = 1}) = {s / 2 = 1} = {s = 2}
For 2 iterations,
wp(s = s / 2, {s = 2}) = {s / 2 = 2} = {s = 4}
For 3 iterations,
wp(s = s / 2, {s = 4}) = {s / 2 = 4} = {s = 8}

• s is a nonnegative power of 2.
• The loop invariant I is a weakened version of the loop postcondition, and it is also a
precondition.
• I must be weak enough to be satisfied prior to the beginning of the loop, but when
combined with the loop exit condition, it must be strong enough to force the truth of
the postcondition.

Chapter -3
Data Types

Design Issues for all data types:

What is the syntax of references to variables?

What operations are defined and how are they specified?

Def: A descriptor is the collection of the attributes of a variable.

In an implementation, a descriptor is a collection of the memory cells that store the
variable's attributes.

We will discuss the design issues for each specific type separately.

Primitive Data Types

Data types that are not defined in terms of other types are called primitive types.
Some of these types are merely reflections of the hardware (e.g., integers); others require only a little
non-hardware support for their implementation.
Integer
Almost always an exact reflection of the hardware, so the mapping is trivial.
There may be as many as eight different integer types in a language.
Integers are represented as a string of bits (positive or negative).
Most computers now use two's complement notation to store negative integers (see any
assembly language programming book).
Floating Point
Model real numbers, but only as approximations

Languages for scientific use support at least two floating-point types; sometimes more

Usually exactly like the hardware, but not always; some languages allow accuracy specs in
code.
IEEE floating-point formats:

a) Single precision,

b) Double precision

Decimal
- For business applications (money)

- Store a fixed number of decimal digits (coded)

- Advantage: accuracy

- Disadvantages: limited range, waste memory

Decimal types are the primary data types for business data processing (e.g., in COBOL).

They are stored very much like character strings, using binary coded decimal (BCD) for the digits.

Boolean
First introduced in ALGOL 60.
Most general-purpose languages have them. Could be implemented as bits, but often as
bytes.
- Advantage: readability

Character Type
Character data are stored in computers as numeric codings.
ASCII: 128 different characters (0 .. 127).
A 16-bit character set named Unicode was developed to include characters from most of the world's
languages.

Character String Types
A character string type is one in which the values consist of sequences of characters.

Design issues:
1. Is it a primitive type or just a special kind of array?

2. Is the length of objects static or dynamic?

Operations:
- Assignment
- Comparison (=, >, etc.)
- Catenation
- Substring reference
- Pattern matching

Examples:
Pascal - not primitive; assignment and comparison only (of packed arrays)
FORTRAN 77, FORTRAN 90, SNOBOL4 and BASIC
- somewhat primitive:
assignment, comparison, catenation, substring reference;
FORTRAN and SNOBOL4 have an intrinsic for pattern matching.

C, C++, Ada - not primitive; strcpy, strcat, strlen, strcmp are commonly used string manipulation
library functions (string.h). Ada: N := N1 & N2 (catenation), N(2..4) (substring reference).
Java: strings are supported through the String class (constant strings) and the
StringBuffer class (changeable strings).
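A minimal C++ sketch of the string.h library functions just mentioned (the string contents are illustrative):

#include <cstring>
#include <iostream>

int main() {
    char name[20];
    std::strcpy(name, "Ada");                 // assignment (copy)
    std::strcat(name, " Lovelace");           // catenation
    std::cout << std::strlen(name)            // length: 12
              << std::endl;
    std::cout << std::strcmp(name, "Ada")     // comparison: non-zero, the strings differ
              << std::endl;
    return 0;
}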

String Length Options

There are several design choices regarding the length of string values
1. Static length string (length is specified at the declaration time) - FORTRAN 90, Ada,
COBOL, Pascal
CHARACTER (LEN = 15) NAME; (FORTRAN 90)
Static length strings are always full
2. Limited dynamic length strings can store any number of characters between 0 and a
maximum (specified by the variable's declaration)
- C and C++
- actual length is indicated by a null character

3. Dynamic - SNOBOL4, Perl, JavaScript
- provides maximum flexibility, but requires the overhead of dynamic storage allocation and
deallocation.

Ordinal Types (user defined)

Definition: An ordinal type is one in which the range of possible values can be easily
associated with the set of positive integers.
In many languages, users can define two kinds of ordinal types:
1. Enumeration
2. Subrange

Enumeration Types
- One in which the user enumerates all of the possible values, which
are symbolic constants

• Design Issue:
Should a symbolic constant be allowed to be in more than one type definition?

Examples:
• Pascal
- cannot reuse constants; they can be used for array subscripts, for variables, case selectors; NO
input or output; can be compared
• Ada
- constants can be reused (overloaded literals); disambiguate with
context or type_name ' (one of them); can be used as in Pascal;
CAN be input and output
• C and C++

- like Pascal, except they can be input and output as integers

• Java does not include an enumeration type, but one can be implemented as a nice class.
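A minimal C++ sketch of the enumeration behaviour described for C and C++ above (the type and constant names are illustrative):

#include <iostream>

enum Day { MON, TUE, WED, THU, FRI, SAT, SUN };   // symbolic constants 0..6

int main() {
    Day today = WED;
    std::cout << today << std::endl;   // output as an integer: prints 2
    if (today < FRI)                   // enumeration values can be compared
        std::cout << "weekday" << std::endl;
    return 0;
}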
Subrange Type:
Subrange Type - an ordered contiguous subsequence of an ordinal type
Examples
Pascal - subrange types behave as their parent types; can be used for variables and array
indices
e.g. type pos = 0 .. MAXINT;
Ada - Subtypes are not new types, just constrained existing types (so they are compatible); can
be used as in Pascal, plus case constants
subtype INDEX is INTEGER range 1 .. 100;
All of operations defined for the parent type are also defined for subtype, except assignment of
values out of range.
Arrays

• An array is an aggregate of homogeneous data elements in which an individual element is
identified by its position in the aggregate, relative to the first element.

A specific element of an array is identified by:
i) the aggregate name;
ii) the index (subscript): position relative to the first element.
Array Categories

Four Categories of Arrays (based on subscript binding and binding to storage)

1. Static - range of subscripts and storage bindings are static
e.g., FORTRAN 77, some arrays in Ada
Advantage: execution efficiency (no allocation or deallocation)

2. Fixed stack-dynamic - range of subscripts is statically bound, but storage is bound at
elaboration time
e.g., Pascal locals and C locals that are not static
Advantage: space efficiency
(Example) In a C function,
void func() {
int a[5];
...
}

3. Stack-dynamic - range and storage are dynamic, but once the subscript ranges are bound and
the storage is allocated they are fixed from then on for the variable’s lifetime

e.g. Ada declare blocks

declare
STUFF : array (1..N) of FLOAT;
begin
...
end;
Advantage: flexibility - size need not be known until the array is about to be used

4. Heap-dynamic – The binding of subscript ranges and storage allocation is dynamic and can
change any number of times during the array’s life time

e.g., FORTRAN 90:
INTEGER, ALLOCATABLE, ARRAY (:,:) :: MAT
(Declares MAT to be a dynamic 2-dim array)
ALLOCATE (MAT (10, NUMBER_OF_COLS))
(Allocates MAT to have 10 rows and
NUMBER_OF_COLS columns)
DEALLOCATE MAT

(Deallocates MAT’s storage)

Example

(Example) program heap_array.C

#include <iostream>
using namespace std;

int main() {
    char *array;
    int N;
    cout << "Type an array size:\n";
    cin >> N;
    array = new char[N];   // heap-dynamic: size chosen at run time
    delete [] array;
    return 0;
}

- In APL & Perl, arrays grow and shrink as needed


- In Java, all arrays are objects (heap-dynamic)

Implementation of Arrays
The access function maps subscript expressions to an address in the array - row major (by rows) or
column major order (by columns).
Column major order is used in FORTRAN, but most other languages use row major order.
Location of the [i, j] element in a matrix (row major, 1-based):
location(a[i, j]) = address of a[1, 1] + (i - 1) * (size of a row) + (j - 1) * (element size)
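A small C++ sketch of this access function (the 1-based convention and parameter names follow the formula above and are illustrative, not from the text):

// Address of a[i][j] for a 1-based, row-major array stored from base_address.
long element_address(long base_address, int i, int j,
                     int row_size_in_bytes, int element_size_in_bytes) {
    return base_address
         + (i - 1) * row_size_in_bytes       // skip the i-1 complete rows before row i
         + (j - 1) * element_size_in_bytes;  // skip the j-1 elements before column j
}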

Associative Arrays
An associative array is an unordered collection of data elements that are indexed by an equal
number of values called keys.

- Each element of an associative array is in fact a pair of entities, a key and a value.

- Design Issues:
1. What is the form of references to elements?

2. Is the size static or dynamic?

Associative arrays are supported by the standard class library of Java,
but the main language that supports associative arrays directly is Perl.
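As a hedged illustration (C++ is used here even though the text names Java and Perl), the C++ standard library map behaves like the associative arrays described above; the keys and values below are made up:

#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, int> salary;          // each element is a (key, value) pair
    salary["Ali"] = 5000;                       // elements are added dynamically
    salary["Sara"] = 6500;
    std::cout << salary["Sara"] << std::endl;   // reference by key: prints 6500
    salary.erase("Ali");                        // elements can also be removed
    return 0;
}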
Records
A record is a possibly heterogeneous aggregate of data elements in which the individual
elements are identified by names.

- Records were first introduced in COBOL (1960); since then almost all languages support them
(except pre-90 FORTRANs). In OO languages, the class construct supports records.

Design Issues that are specific for records

1. What is the form of references?
2. What unit operations are defined?
Example
structure type in C and C++, program simple_struct.C
#include <iostream>
using namespace std;

struct student {
    int ssn;
    char grade;
};

int main() {
    struct student john;
    struct student *p_john;
    john.grade = 'A';
    p_john = &john;
    cout << p_john->grade << endl;   // field access through a pointer to the record
    return 0;
}
einstein> g++ simple_struct.C
einstein> a.out
A

Record Operations
1. Assignment
- Pascal, Ada, and C allow it if the types are identical
- In Ada, the RHS can be an aggregate constant

2. Initialization
- Allowed in Ada, using an aggregate constant

3. Comparison
- In Ada, = and /=; one operand can be an aggregate constant

4. MOVE CORRESPONDING
- In COBOL - it moves all fields in the source record to fields with the same names in the
destination record

Comparing records and arrays

1. Access to array elements is much slower than access to record fields, because subscripts are
dynamic (field names are static)

2. Dynamic subscripts could be used with record field access, but it would disallow type
checking and it would be much slower.

Implementation of Record Types

The fields of a record are stored in adjacent memory locations. The offset address relative to the
beginning of the record is associated with each field. Field accesses are handled using these offsets.

The compile-time descriptor for a record lists the name, type, and offset of each field; no run-time
descriptors are needed.

Unions
A union is a type whose variables are allowed to store different type values at different times
during execution.

Design Issues for unions:

1. What kind of type checking, if any, must be done?
2. Should unions be integrated with records?

Examples:
1. FORTRAN - via EQUIVALENCE
2. C and C++ - the union construct is used (free unions, no type checking)
3. ALGOL 68 - discriminated unions
- use a hidden tag to maintain the current type
- the tag is implicitly set by assignment
- references are legal only in conformity clauses
(see book example p. 231)
- This runtime type selection is a safe method of accessing union objects.

Example (free union in C++)

#include <iostream>
using namespace std;

union Value {
    char cval;
    float fval;
};

int main() {
    Value val;
    val.cval = 'a';
    cout << val.cval << endl;
    val.fval = 1.2;
    cout << val.fval << endl;
    cout << val.cval << endl;   // no type checking: the bytes of fval are read as a char
    return 0;
}
einstein> g++ union.C
classes> a.out

Implementation of Union Types

Discriminated union:
The tag entry is associated with a case table, each of whose entries points to a
descriptor of a particular variant.

Sets
A set is a type whose variables can store unordered collections of distinct values from some
ordinal type called base type

Design Issue:
What is the maximum number of elements in any set base type?

Examples:

1. Pascal
- No maximum size in the language definition (not portable, poor
writability if max is too small)
- Operations: union (+), intersection (*), difference (-), =, < >, superset

2. Modula-2 and Modula-3 - additional operations: INCL, EXCL, /
(symmetric set difference (elements in one but not both operands))

3. Ada - does not include sets, but defines "in" as a set membership operator for all enumeration
types
4. Java includes a class for set operations
Evaluation

- If a language does not have sets, they must be simulated, either with enumerated types or with
arrays
- Arrays are more flexible than sets, but have much slower operations

Implementation

- Sets are usually stored as bit strings, and logical (bitwise) operations are used for the set operations.
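A minimal C++ sketch of this bit-string implementation, using std::bitset (the base type is assumed to be the integers 0..7 for illustration):

#include <bitset>
#include <iostream>

int main() {
    std::bitset<8> a;                  // set over the base type 0..7
    std::bitset<8> b;
    a.set(1); a.set(3); a.set(5);      // a = {1, 3, 5}
    b.set(3); b.set(4);                // b = {3, 4}

    std::bitset<8> uni   = a | b;      // union:        {1, 3, 4, 5}
    std::bitset<8> inter = a & b;      // intersection: {3}
    std::bitset<8> diff  = a & ~b;     // difference:   {1, 5}

    std::cout << uni << " " << inter << " " << diff << std::endl;
    return 0;
}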
Pointers
A pointer type is a type in which the range of values consists of memory
addresses and a special value, nil (or null) The value nil indicates that a
pointer cannot currently be used to reference another object. In C and
C++, a value 0 is used as nil.

Uses:
1. Addressing flexibility ( indirect addressing )
2. Dynamic storage management ( access to heap)

Design Issues:
1. What is the scope and lifetime of pointer variables?
2. What is the lifetime of heap-dynamic variables?
3. Are pointers restricted to pointing at a particular type?
4. Are pointers used for dynamic storage management, indirect addressing, or both?
5. Should a language support pointer types, reference types, or both?

Fundamental Pointer Operations:

1. Assignment: sets a pointer variable to the address of some object.

2. References (explicit versus implicit dereferencing):
obtaining the value of the memory cell whose address is in the memory cell to which the
pointer variable is bound. In C and C++, dereferencing is specified by prefixing an
identifier of a pointer type with the dereferencing operator (*).

Example
(Example in C++)
int j;
int *ptr; // pointer to integer variables
...
j = *ptr;

Problems with pointers:

1. Dangling pointers (dangerous)
- A pointer points to a heap-dynamic variable that has been deallocated.
- Creating one:

a. Allocate a heap-dynamic variable and set a pointer to point at it.
b. Set a second pointer to the value of the first pointer.
c. Deallocate the heap-dynamic variable, using the first pointer.

(Example 1) dangling.C
#include <iostream>
using namespace std;

int main() {
    int *x, *y;
    x = new int;         // x points to a heap-dynamic variable
    *x = 777;
    delete x;            // x is now dangling
    y = new int;         // may reuse the cell x still points to
    *y = 999;
    cout << *x << endl;  // undefined behaviour: dereferencing a dangling pointer
    return 0;
}
(Diagram: x points to a cell holding 777; after delete x, that cell is freed; y is then allocated and set to
999, possibly reusing the same cell, so x is left dangling.)


Chapter 4
Support for Object-Oriented Programming

Introduction
Many object-oriented programming (OOP) languages

– Some support procedural and data-oriented programming (e.g., Ada 95 and C++)

– Some support functional programming (e.g., CLOS)

– Newer languages do not support other paradigms but use their imperative structures
(e.g., Java and C#)

– Some are pure OOP language (e.g., Smalltalk & Ruby)

Object-Oriented Programming

• Abstract data types

• Inheritance

– Inheritance is the central theme in OOP and languages that support it

• Polymorphism

Object-Oriented Concepts

• ADTs are usually called classes

• Class instances are called objects

• A class that inherits is a derived class or a subclass

• The class from which another class inherits is a parent class or superclass

• Subprograms that define operations on objects are called methods

• Calls to methods are called messages

• The entire collection of methods of an object is called its message protocol or message
interface

• Messages have two parts--a method name and the destination object

• In the simplest case, a class inherits all of the entities of its parent

• Inheritance can be complicated by access controls to encapsulated entities

– A class can hide entities from its subclasses

– A class can hide entities from its clients

– A class can also hide entities from its clients while allowing its subclasses to see them

• Besides inheriting methods as is, a class can modify an inherited method

– The new one overrides the inherited one

– The method in the parent is overridden

• There are two kinds of variables in a class:

– Class variables - one/class

– Instance variables - one/object

• There are two kinds of methods in a class:

– Class methods – accept messages to the class

– Instance methods – accept messages to objects

• Single vs. Multiple Inheritance

• One disadvantage of inheritance for reuse:

– Creates interdependencies among classes that complicate maintenance.

Design Issues for OOP Languages:

• The Exclusivity of Objects.

• Are Subclasses Subtypes?

• Type Checking and Polymorphism

• Single and Multiple Inheritance

• Object Allocation and Deallocation

• Dynamic and Static Binding

• Nested Classes

• Initialization of Objects.

Initialization of Objects

• Are objects initialized to values when they are created?

– Implicit or explicit initialization

• How are parent class members initialized when a subclass object is created?

Allocation and DeAllocation of Objects

• From where are objects allocated?

– If they behave like ADTs, they can be allocated from anywhere
• Allocated from the run-time stack
• Explicitly create on the heap (via new)

– If they are all heap-dynamic, references can be uniform through a pointer or reference
variable
• Simplifies assignment - dereferencing can be implicit

– If objects are stack dynamic, there is a problem with regard to subtypes

• Is deallocation explicit or implicit?

The Exclusivity of Objects


• Everything is an object
– Advantage - elegance and purity
– Disadvantage - slow operations on simple objects

• Add objects to a complete typing system

– Advantage - fast operations on simple objects
– Disadvantage - results in a confusing type system (two kinds of entities)

• Include an imperative-style typing system for primitives but make everything else objects
– Advantage - fast operations on simple objects and a relatively small typing system
– Disadvantage - still some confusion because of the two type systems

Dynamic and Static Binding:

• Should all binding of messages to methods be dynamic?

– If none are, you lose the advantages of dynamic binding

– If all are, it is inefficient

• Allow the user to specify

Dynamic Binding:

• A polymorphic variable can be defined in a class that is able to reference (or point to)
objects of the class and objects of any of its descendants

• When a class hierarchy includes classes that override methods and such methods are called
through a polymorphic variable, the binding to the correct method will be dynamic

• Allows software systems to be more easily extended during both development and
maintenance.

Dynamic Binding Concepts

• An abstract method is one that does not include a definition (it only defines a protocol).

• An abstract class is one that includes at least one abstract method.

• An abstract class cannot be instantiated.

Support for OOP in C++

• Dynamic Binding

– A method can be defined to be virtual, which means that it can be called through
polymorphic variables and dynamically bound to messages.

– A pure virtual function has no definition at all.

– A class that has at least one pure virtual function is an abstract class.
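A minimal C++ sketch of these ideas (the class names are illustrative, not from the text): area() is a pure virtual function, so Shape is an abstract class, and the call made through the polymorphic pointer is dynamically bound.

#include <iostream>

class Shape {                             // abstract class: has a pure virtual function
public:
    virtual double area() const = 0;      // pure virtual: no definition at all
    virtual ~Shape() {}
};

class Square : public Shape {
public:
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }   // overrides the inherited method
private:
    double side;
};

int main() {
    // Shape s;                           // error: an abstract class cannot be instantiated
    Shape* p = new Square(3.0);           // polymorphic variable
    std::cout << p->area() << std::endl;  // dynamically bound call: prints 9
    delete p;
    return 0;
}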
Inheritance:

• Productivity increases can come from reuse

– ADTs are difficult to reuse—always need changes

– All ADTs are independent and at the same level

• Inheritance allows new classes defined in terms of existing ones, i.e., by allowing them to
inherit common parts

• Inheritance addresses both of the above concerns--reuse ADTs after minor changes and
define classes in a hierarchy.

Single and Multiple Inheritance:

• Multiple inheritance allows a new class to inherit from two or more classes

• Disadvantages of multiple inheritance:

– Language and implementation complexity (in part due to name collisions)

– Potential inefficiency - dynamic binding costs more with multiple inheritance (but
not much)

• Advantage:

– Sometimes it is quite convenient and valuable.

Support for OOP in C++

• Inheritance
– A class need not be the subclass of any class.

– Access controls for members are:

– private (visible only in the class and to friends) (disallows subclasses from
being subtypes)

– public (visible in subclasses and clients)

– protected (visible in the class and in subclasses, but not to clients)

Inheritance Example in C++

class base_class {
private:
int a;
float x;
protected:
int b;
float y;
public:
int c;

float z;
};

class subclass_1 : public base_class { … };


// In this one, b and y are protected and
// c and z are public

class subclass_2 : private base_class { … };


// In this one, b, y, c, and z are private,
// and no derived class has access to any
// member of base_class
• Multiple inheritance is supported.
– If there are two inherited members with the same name, they can both be referenced
using the scope resolution operator (::), as sketched below.
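A small, hypothetical C++ sketch of the name collision just described; the scope resolution operator selects which inherited member is meant:

#include <iostream>

class A { public: int id() { return 1; } };
class B { public: int id() { return 2; } };

class C : public A, public B { };    // multiple inheritance: both id() members are inherited

int main() {
    C c;
    // c.id();                        // error: ambiguous, two inherited members named id
    std::cout << c.A::id() << " "     // scope resolution picks A's member: prints 1
              << c.B::id()            // and B's member: prints 2
              << std::endl;
    return 0;
}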
Type Checking and Polymorphism:

Polymorphism may require dynamic type checking of parameters and the return value.

– Dynamic type checking is costly and delays error detection.

If overriding methods are restricted to having the same parameter types and return type, the
checking can be static.
Support for OOP in Java
• Because of its close relationship to C++, focus is on the differences from that language

• General Characteristics
– All data are objects except the primitive types
– All primitive types have wrapper classes that store one data value
– All objects are heap-dynamic, are referenced through reference variables, and most
are allocated with new
– A finalize method is implicitly called when the garbage collector is about to reclaim
the storage occupied by the object.
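
A minimal Java sketch of these general characteristics (the variable names are invented for illustration):

public class GeneralDemo {
    public static void main(String[] args) {
        int n = 7;                                // primitive type: not an object
        Integer boxed = Integer.valueOf(n);       // wrapper class object storing one data value
        String greeting = new String("hello");    // heap-dynamic object allocated with new,
                                                  // reached through the reference variable greeting
        System.out.println(boxed + " " + greeting);
    }
}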

Inheritance
– Only single inheritance is supported, but there is an abstract class category, the
interface, that provides some of the benefits of multiple inheritance


An interface can include only method declarations and named constants, e.g., the following (a class implementing such an interface is sketched after this list):
public interface Comparable <T> {
public int compareTo (T b);
}

– Methods can be final (cannot be overridden)
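
As a sketch of how an interface like the one above is used (the Student class and its score field are invented for illustration):

class Student implements Comparable<Student> {
    private int score;
    Student(int score) { this.score = score; }

    // supplies a body for the method declared in the interface
    public int compareTo(Student b) {
        return this.score - b.score;    // negative, zero, or positive (simple sketch; ignores overflow)
    }
}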


Dynamic Binding
– In Java, all messages are dynamically bound to methods, unless the method is final
(i.e., it cannot be overridden, so dynamic binding would serve no purpose)

– Static binding is also used if the method is static or private, both of which disallow
overriding

• Several varieties of nested classes

• All are hidden from all classes in their package, except for the nesting class

• Nonstatic classes nested directly in another class are called inner classes


– An inner class can access members of its nesting class
– A static nested class cannot access members of its nesting class

• Nested classes can be anonymous

• A local nested class is defined in a method of its nesting class


– No access specifier is used
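
A compact Java sketch of these varieties (Outer, Inner, Helper, and Local are invented names):

class Outer {
    private int value = 42;

    class Inner {                                    // nonstatic nested (inner) class
        int readValue() { return value; }            // can access members of the nesting class
    }

    static class Helper {                            // static nested class
        int twice(int x) { return 2 * x; }           // cannot access Outer's instance members
    }

    void demo() {
        class Local {                                // local nested class: defined in a method,
            void show() { System.out.println(value); }  // no access specifier is used
        }
        new Local().show();

        Runnable anon = new Runnable() {             // anonymous nested class
            public void run() { System.out.println(value); }
        };
        anon.run();
    }
}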

Evaluation

– Design decisions to support OOP are similar to C++

– No support for procedural programming

– No parentless classes (every class is a descendant of Object)

– Dynamic binding is used as the “normal” way to bind method calls to method definitions

– Uses interfaces to provide a simple form of support for multiple inheritance.

The Object Model of JavaScript

General Characteristics of JavaScript

– Little in common with Java

• Similar to Java only in that it uses a similar syntax

– Dynamic typing

– No classes, class-based inheritance, or polymorphism

– Variables can reference objects or can directly access primitive values


JavaScript objects

– An object has a collection of properties which are either data properties or method
properties

– Appear as hashes, both internally and externally

– A list of property/value pairs

– Properties can be added or deleted dynamically

– A bare object can be created with new and a call to the constructor for Object
var my_object = new Object();

– References to properties are with dot notation

JavaScript Evaluation

• Effective at what it is designed to be

– A scripting language

• Inadequate for large scale development

• Lacks the encapsulation capability that classes provide

– Large programs cannot be effectively organized

• No inheritance

– Reuse will be very difficult

Chapter 5
Exception Handling

The try / catch blocks
try {
//statements – one of which is capable of throwing an exception
}
catch (ExceptionTypeName objName) {
//one or more statements to execute if this exception occurs
}
finally {
//statements to be executed whether or not exception occurs
}
An optional finally block can be added at the end of the catch blocks to provide a set of
statements that are always executed whether or not an exception occurs.
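
A small runnable Java sketch of this structure (the division-by-zero scenario is invented for illustration):

public class FinallyDemo {
    public static void main(String[] args) {
        try {
            int result = 10 / 0;                           // throws ArithmeticException
            System.out.println(result);                    // never reached
        }
        catch (ArithmeticException e) {
            System.out.println("Caught: " + e.getMessage());
        }
        finally {
            System.out.println("finally runs whether or not an exception occurred");
        }
    }
}
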
Exception Handling in C++
• Exceptions provide a way to handle the errors generated by our programs by
transferring control to functions called handlers.
• To catch exceptions, we place the code for which we want exception handling
inside a try block. If an exception occurs, control is passed to the matching
handler; otherwise the handlers are ignored.
• The code that may produce exceptions is placed in the try block, and the error
handlers are declared with the keyword catch.
Example
#include <iostream>
using namespace std;
int main ()
{
    try
    {
        throw 10;
    }
    catch (int e)
    {
        cout << "We have a problem!!" << endl;
    }
    return 0;
}

Output: We have a problem!!

Throw
• The throw expression accepts one operand, and its value is passed to the
exception handler. You can have a number of throw statements at different
points in your try block, with different values being thrown, so that the
exception handler can tell from the value it receives what restorative
actions to take.
try
{
    // code
    if (x)
        throw 10;
    // code
    if (y)
        throw 20;
    // code
}

Catch() {}
• The exception handler is introduced by the keyword catch, and a catch clause
always takes exactly one parameter. The type of that parameter is important:
the type of the value passed by the throw expression is checked against it,
and only the catch block with the matching parameter type is executed. This
way we can chain multiple exception handlers for a single try block.
try { // code here }
catch (int param) {
cout << "int exception";
}
catch (char param) {
cout << "char exception";
}

catch (...) {
cout << "default exception";
}
The catch(…) handler catches all exceptions, no matter what type. It can be
used as a default handler if declared last.
Points to remember
• You can nest try-catch blocks.
• After exception handling, the program resumes at the point after the try-catch
block, not after the throw statement.
Exception Handling in Java
Most of Java’s I/O classes (and many others) throw exceptions.
For example, consider a function that sends the message readLine( ) to a
BufferedReader object.
import java.io.*; //needed for streams and IOException
public class ExceptionalExample {
public static void main(String [ ] args) throws IOException
{
InputStreamReader isr = new InputStreamReader(System.in);
BufferedReader br = new BufferedReader(isr);
System.out.println("Enter a number");
String str = br.readLine( );
The statement “throws IOException” is required when using readLine( ). It must
be included in the header of any function in which readLine( ) appears.
If we fail to include the “throws IOException” clause in the main function header
we will see the following error reported when we compile the program
ExceptionalExample:
c:\javacode>javac ExceptionalExample.java
ExceptionalExample.java:10: unreported exception java.io.IOException; must be
caught or declared to be thrown

String str = br.readLine( );
1 error
c:\javacode>
Method readLine( ) throws an IOException. Any function that uses it must either
throw the same exception (and pass the buck to the operating system to handle the
exception) or catch the exception and provide its own error handling methodology.
In the previous example we chose the first strategy.

Types of exceptions
Checked exceptions – typically represent the inability to acquire a system resource
(for example, a file that does not exist)
Java checks at compile time that some mechanism is explicitly in place to receive
and process an exception object that may be created during runtime due to one of
these exceptions occurring.

Unchecked exceptions – exceptions that occur because of the user entering bad
data, or failing to enter data at all.

Unchecked exceptions can be avoided by writing more robust code that protects
against bad input values. Java does not check at compile time to ensure that there
is a mechanism in place to handle such errors.
It is often preferred to use Java’s exception handling capabilities to handle bad user
input rather than trying to avoid such circumstances by providing user-input
validation in the code.
The exception hierarchy (partial): Throwable is the root class, with subclasses Error and
Exception; RuntimeException and its subclasses are unchecked, while the remaining
Exception subclasses (such as IOException) are checked.
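
A brief Java sketch contrasting the two categories (the file name and the input string are invented for illustration):

import java.io.FileReader;
import java.io.IOException;

public class CheckedVsUnchecked {
    public static void main(String[] args) {
        // Checked: the compiler insists that IOException be caught or declared with throws.
        try {
            FileReader fr = new FileReader("data.txt");   // file may not exist
            fr.close();
        }
        catch (IOException ioe) {
            System.out.println("Checked exception: " + ioe);
        }

        // Unchecked: no compile-time requirement, but it can still be caught at run time.
        try {
            int n = Integer.parseInt("not a number");     // throws NumberFormatException
            System.out.println(n);
        }
        catch (NumberFormatException nfe) {
            System.out.println("Unchecked exception: " + nfe);
        }
    }
}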

Example
Consider implementing the BufferedReader readLine( ) example with try-catch
blocks.
import java.io.*; //needed for streams and IOException

public class ExceptionalExample {
    public static void main(String [ ] args) //no throws clause used here
    {
        InputStreamReader isr = new InputStreamReader(System.in);
        BufferedReader br = new BufferedReader(isr);
        System.out.println("Enter a number");
        try {
            String str = br.readLine( );
            double num = Double.parseDouble(str);
        }
        //add the catch blocks
        catch (IOException ioe) {
            //print a message to the screen
            System.out.println(ioe.toString( ));
        }
        catch (Exception e) {
            //catch any other exception and print a message to the screen
            System.out.println(e.toString( ));
        }
        finally {
            System.exit(0);
        }
    }
}

The anatomy of an exception

