14UIT305 - Database Systems
OBJECTIVES:
• To introduce the basic concepts of database system design and architecture
• To familiarize students with normal forms
• To demonstrate transaction management, recovery controls and storage techniques
UNIT I INTRODUCTION 9
Purpose of Database System – Views of data – Data Models – Database Languages – Database System Architecture – Database users and Administrator – Entity-Relationship Model (E-R model) – E-R Diagrams – Introduction to relational databases.
UNIT II RELATIONAL MODEL 9
The relational Model – The catalog – Types – Keys – Relational Algebra – Domain Relational Calculus – Tuple Relational Calculus – Fundamental operations – Additional Operations – SQL fundamentals – Integrity – Triggers – Security – Advanced SQL features – Embedded SQL – Dynamic SQL – Missing Information – Views – Introduction to Distributed Databases and Client/Server Databases.
UNIT III DATABASE DESIGN 9
Functional Dependencies – Non-loss Decomposition – Functional Dependencies – First, Second, Third Normal Forms – Dependency Preservation – Boyce/Codd Normal Form – Multi-valued Dependencies and Fourth Normal Form – Join Dependencies and Fifth Normal Form.
UNIT IV TRANSACTIONS 9
Transaction Concepts – Transaction Recovery – ACID Properties – System Recovery – Media Recovery – Two Phase Commit – Save Points – SQL Facilities for Recovery – Concurrency – Need for Concurrency – Locking Protocols – Two Phase Locking – Intent Locking – Deadlock – Serializability – Recovery Isolation Levels – SQL Facilities for Concurrency.
UNIT V IMPLEMENTATION TECHNIQUES 9
Overview of Physical Storage Media – Magnetic Disks – RAID – Tertiary Storage – File Organization – Organization of Records in Files – Indexing and Hashing – Ordered Indices – B+ Tree Index Files – B Tree Index Files – Static Hashing – Dynamic Hashing – Query Processing Overview – Catalog Information for Cost Estimation – Selection Operation – Sorting – Join Operation – Database Tuning – Multimedia Databases. Case study: FIRM, a database management system for real-time avionics.
TOTAL: 45 PERIODS
COURSE OUTCOMES:
After the successful completion of this course, the student will be able to
• Explain the basic needs of database systems
• Build queries with structured query language (SQL)
• Explain the need for security in database
• Construct a DBMS for an application
TEXT BOOKS:
1. Abraham Silberschatz, Henry F. Korth, S. Sudarshan, "Database System Concepts", Tata McGraw Hill, 5th Edition, 2006.
2. C.J. Date, A. Kannan, S. Swamynathan, "An Introduction to Database Systems", Pearson Education, 8th Edition, 2006.
REFERENCE BOOKS:
1. Ramez Elmasri, Shamkant B. Navathe, "Fundamentals of Database Systems", Pearson Addison Wesley, 4th Edition, 2007.
2. Raghu Ramakrishnan, "Database Management Systems", Tata McGraw Hill, 3rd Edition.
3. S.K. Singh, "Database Systems: Concepts, Design and Applications", Pearson Education, 1st Edition, 2006.
4. Hector Garcia-Molina, Jeffrey D. Ullman, Jennifer Widom, "Database Systems: The Complete Book", Pearson Education, 2nd Edition, 2009.
UNIT I
INTRODUCTION
Purpose of Database systems- Views of data- Data Models- Database Languages- Database system
Architecture– Database users and Administrator- Entity Relationship Model (ER model) – E-R
Diagram – Introduction to relational databases.
1. INTRODUCTION
Data: Known facts that can be recorded and that have implicit meaning.
E.g. student roll numbers, names, addresses, etc.
DBMS: A DBMS is a collection of interrelated data and a set of programs to access those data. The primary
goal of a DBMS is to provide a way to store and retrieve database information that is both convenient and
efficient.
Database Applications
Banking: all transactions
Airlines: reservations, schedules
Universities: registration, grades
Sales: customers, products, purchases
Online retailers: order tracking, customized recommendations
Manufacturing: production, inventory, orders, supply chain
Human resources: employee records, salaries, tax deductions
Credit card transactions
Telecommunications & Finance
For Example: Failure during a transfer of funds from account A to account B. The amount may be debited from A
but not credited to B, leading to an inconsistent state.
vi. Concurrent Access Anomalies
In order to improve the overall performance of the system and obtain a faster response time
many systems allow multiple users to update the data simultaneously. In such environment,
interaction of concurrent updates may result in inconsistent data.
For Example: Consider bank account A, containing $500. If two customers withdraw funds
(say $50 and $100, respectively) from account A at about the same time, the result of the
concurrent executions may leave the account in an incorrect (or inconsistent) state: the balance
may be $400 instead of $350. To protect against this possibility, the system must maintain
some form of supervision.
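The interleaving behind this anomaly can be sketched as two sessions issuing the same read-then-write sequence (the account table, its column names, and the timing are assumed for illustration):

-- Session T1: read the balance (sees 500)
select balance from account where acc_no = 'A';
-- Session T2: read the balance concurrently (also sees 500)
select balance from account where acc_no = 'A';
-- Session T1: write back 500 - 50
update account set balance = 450 where acc_no = 'A';
-- Session T2: write back 500 - 100, overwriting T1's update
update account set balance = 400 where acc_no = 'A';
-- Final balance is 400 instead of the correct 350: T1's withdrawal is lost.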
vii. Security problems
Not every user of the database system should be able to access all the data. System should be
protected using proper security.
For Example: In a banking system, pay roll personnel should be only given authority to see
the part of the database that has information about the various bank employees. They do not
need access to information about customer accounts.
Since application programs are added to the system in an ad-hoc manner, it is difficult to enforce
such security constraints.
viii. Integrity problems
The data values stored in the database must satisfy certain types of consistency constraints.
For Example: The balance of a bank account may never fall below a prescribed amount (say
$100).These constraints are enforced in the system by adding appropriate code in the various
application programs.
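By contrast, a DBMS lets such a rule be stated once, declaratively, instead of being re-coded in every application program. A minimal sketch, assuming an illustrative account table:

create table account (
    acc_no  char(10) primary key,
    balance number(12,2) check (balance >= 100)  -- balance may never fall below $100
);

Any insert or update that would make the balance fall below 100 is then rejected by the system itself.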
Advantages of a Database
A database is a way to consolidate and control the operational data centrally. The advantages of having centralized control of data are:
ii. Inconsistency can be avoided
When the same data is duplicated and changes are made at one site but are not propagated to the other site, inconsistency arises: the two entries regarding the same data will not agree. So, if the redundancy is removed, the chance of having inconsistent data is also removed.
iii. The data can be shared
The data stored for one application can be used by another application. Thus, the data
stored in the database for one application can be shared with new applications.
iv. Standards can be enforced
With central control of the database, the DBA can ensure that all applicable standards are
observed in the representation of the data.
v. Security can be enforced
The DBA can define access paths for the data stored in the database and can define
authorization checks to be carried out whenever access to sensitive data is attempted.
vi. Integrity can be maintained
Integrity means that the data in the database is accurate. Centralized control of the data helps
in permitting the administrator to define integrity constraints to the data in the database.
3. VIEW OF DATA
A major purpose of a database system is to provide users with an abstract view of the data. That is,
the system hides certain details of how the data are stored and maintained.
Data abstraction
The complexity is hidden from the users through several levels of abstraction. There are three levels
of data abstraction:
i. Physical level: It is the lowest level of abstraction that describes how the data are actually stored.
The physical level describes complex low-level data structures in detail.
ii. Logical level: It is the next higher level of abstraction that describes what data are stored in the
database and what relationships exist among those data.
iii. View level: It is the highest level of abstraction that describes only part of the entire database.
Data Independence
The ability to modify a scheme definition in one level without affecting a scheme definition in the
next higher level is called data independence. There are two levels of data independence:
1. Physical data independence is the ability to modify the physical scheme without causing application
programs to be rewritten. Modifications at the physical level are occasionally necessary in order to improve
performance.
2. Logical data independence is the ability to modify the conceptual scheme without causing application
programs to be rewritten. Modifications at the conceptual level are necessary whenever the logical structure
of the database is altered.
Logical data independence is more difficult to achieve than physical data independence since
application programs are heavily dependent on the logical structure of the data they access.
Databases change over time as information is inserted and deleted. The collection of information
stored in the database at a particular moment is called an instance of the database.
iii. Subschema: A database may also have several schemas at the view level, called subschemas,
that describe different views of the database.
4. DATA MODELS
The underlying structure of the database is called the data model.
It is a collection of conceptual tools for describing data, data relationships, data semantics, and
consistency constraints.
It is a way to describe the design of the database at physical, logical and view level.
Different types of data models are:
Entity relationship model
Relational model
Hierarchical model
Network model
Object Based model
Object Relational model
Semi Structured Data model
Entity relationship model
It is based on a collection of real world things or objects called entities and the relationship among
these objects.
The Entity relationship model is widely used in database design.
Relational Model
The relational model uses a collection of tables to represent both data and the relationship among
those data.
Each table has multiple columns and each column has a unique name.
Software such as Oracle, Microsoft SQL Server and Sybase are based on the relational model.
The relational model is an example of a record-based model; record-based models are based on fixed-format records of several types.
Hierarchical Model
The hierarchical model organizes data into a tree data structure such that each record type has only one owner.
Hierarchical structures were widely used in the first mainframe database management systems.
Links are possible vertically but not horizontally or diagonally.
Advantages
High speed of access to large datasets.
Ease of updates.
Simplicity: the design of a hierarchical database is simple.
Data security: Hierarchical model was the first database model that offered the data security
that is provided and enforced by the DBMS.
Efficiency: The hierarchical database model is a very efficient one when the database
contains a large number of transactions, using data whose relationships are fixed.
Disadvantages
Implementation complexity
Database management problems
Lack of structural independence
Network Model
The model is based on directed graph theory.
The network model replaces the hierarchical tree with a graph thus allowing more general
connections among the nodes.
The main difference of the network model from the hierarchical model is its ability to handle many-to-many (n:n) relationships; in other words, it allows a record to have more than one parent.
Example is, an employee working for two departments.
[Figure: sample network model]
Advantages:
Conceptual simplicity
5. DATABASE LANGUAGES
Data definition and data manipulation languages are not two separate languages but part of a single
database language such as SQL language.
Data definition language
DDL specifies the database schema and some additional properties to data.
The storage structure and access methods are specified using a special type of DDL called the data storage and definition language.
The data values stored in the database must satisfy certain consistency constraints. For example,
suppose the balance on an account should not fall below $100.
Database systems concentrate on integrity constraints that can be tested with minimal overhead.
1. Domain Constraints:
A domain of possible values should be associated with every attribute.
E.g. integer type, character type, date/time type
Declaring an attribute to be of a particular domain acts as a constraint on the values it can take.
They are tested as and when values are entered into the database.
2. Referential Constraints:
In some cases, a value that appears in one relation for a given set of attributes must also
appear for a certain set of attributes in another relation. Such a constraint is called a
referential constraint.
If any modification violates the constraints then the action that caused the violation should be
rejected.
3. Assertions
It is a condition that database should always satisfy.
Domain and referential-integrity constraints are special forms of assertions.
E.g. every loan should have a customer whose account balance is a minimum of $1000.00.
Modifications to database should not cause violation to assertion.
4. Authorization
Users are differentiated by the access permissions given to them on the different data of the
database. This is known as authorization.
The most common authorizations are
i. Read authorization
Allows reading but no modification of data.
ii. Insert authorization
Allows insertion of new data but no modification of existing data.
A data-manipulation language (DML) is a language that helps users to access or manipulate data.
A query is a statement requesting the retrieval of information.
The portion of DML that involves information retrieval is called as query language.
There are basically two types of DML:
Procedural DMLs
The user specifies what data are needed and how to get those data.
Declarative DMLs (nonprocedural DMLs)
The user specifies what data are needed without specifying how to get those data. This is
easier to learn and use than a procedural DML.
The data manipulations that can be performed using a DML are listed below; a short SQL sketch of each follows the list.
The retrieval of information stored in the database
The insertion of new information into the database
The deletion of information from the database
The modification of information stored in the database
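A minimal SQL sketch of the four operations, assuming the bank account table used elsewhere in these notes:

select balance from account where acc_no = 'A-101';                  -- retrieval
insert into account values ('A-102', 'Perryridge', 700);             -- insertion
delete from account where acc_no = 'A-102';                          -- deletion
update account set balance = balance + 100 where acc_no = 'A-101';   -- modification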
A database system is partitioned into modules that deal with each of the responsibilities of the
overall system. The functional components of a database system can be broadly divided into
Storage Manager
Query Processor
In a three-tier architecture, the client machines act as a front end and do not contain any direct database
calls.
The client end communicates with the application server through an interface.
The application server interacts with the database system to access data.
The business logic of the application, which says what actions to carry out under what conditions, is embedded in the application server.
Three-tier architecture is more appropriate for large applications.
[Figure: three-tier architecture]
Storage Manager
A storage manager is a program module that provides the interface between the low level data stored in
the database and the application programs and queries submitted to the system.
The storage manager is responsible for the interaction with the file manager.
The storage manager translates the various DML statements into low-level file system commands. Thus,
the storage manager is responsible for storing, retrieving, and updating data in the database.
Components of the storage manager are:
1. Authorization and integrity manager: It tests for satisfaction of various integrity constraints and
checks the authority of users accessing the data.
2. Transaction manager: It ensures that the database remains in a consistent state despite system
failures, and concurrent executions proceed without conflicting.
3. File manager: It manages the allocation of space on disk storage and the data structures used to
represent information stored on disk.
4. Buffer manager: It is responsible for fetching data from disk storage into main memory and for
deciding what data to cache in main memory. It enables the database to handle data sizes that are much
larger than the size of main memory. The storage manager implements several data structures as
part of the physical system implementation.
i. Data files: which store the database itself.
ii. Data dictionary: It contains metadata, that is, data about data. The schema of a table is an
example of metadata. A database system consults the data dictionary before reading or
modifying actual data.
iii. Indices: Which provide fast access to data items that hold particular values.
The Query Processor
The query processor is an important part of the database system. It helps the database system to simplify
and facilitate access to data. The query processor components include:
1. DDL interpreter, which interprets DDL statements and records the definitions in the data
dictionary.
2. DML compiler, which translates DML statements in a query language into an evaluation plan
consisting of low-level instructions that the query evaluation engine understands.
A query can be translated into any number of evaluation plans that all give the same result.
The DML compiler also performs query optimization; that is, it picks the lowest-cost
evaluation plan from among the alternatives (see the sketch after this list).
3. Query evaluation engine, which executes low-level instructions generated by the DML
compiler.
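Most systems let users inspect the plan the optimizer picked; a sketch using the commonly supported explain statement (the exact keyword and output format vary by DBMS):

explain select balance from account where branch_name = 'Perryridge';
-- The DBMS prints the evaluation plan it chose, e.g. an index scan versus a full table scan.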
• Schema definition. The DBA creates the original database schema by executing a set of data definition
statements in the DDL.
• Storage structure and access-method definition.
• Schema and physical-organization modification. The DBA carries out changes to the schema and
physical organization to reflect the changing needs of the organization.
• Granting of authorization for data access. By granting different types of authorization, the database
administrator can regulate which parts of the database various users can access.
Authorization information is kept in a special system structure that the database system consults whenever
someone attempts to access the data in the system.
• Routine maintenance. Examples of the database administrator's routine maintenance activities are:
1. Periodically backing up the database.
2. Ensuring that enough free disk space is available for normal operations.
3. Monitoring jobs running on the database and ensuring that performance is not degraded by very
expensive tasks submitted by some users.
For example, all persons who are customers at a given bank can be defined as the entity set customer.
The properties that describe an entity are called attributes.
Types of relationships
i) Unary relationship: A unary relationship exists when an association is maintained within a single entity.
[Figure: unary relationship: the Employee entity set related to itself (Boss), with manager and worker roles]
ii) Binary relationship: A binary relationship exists when two entities are associated.
iii) Ternary relationship: A ternary relationship exists when three entities are associated.
[Figure: ternary relationship among Student, Teacher, and Subject]
iv) Quaternary relationship: A quaternary relationship exists when four entities are associated.
The number of entity set participating in a relationship is called degree of the relationship set.
A binary relationship set is of degree 2; a ternary relationship set is of degree 3.
Entity role: The function that an entity plays in a relationship is called that entity's role. A role is one end
of an association.
3. Attributes
The properties that describe an entity are called attributes.
The attributes of the customer entity set are customer_id, customer_name and city.
Each attribute has a set of permitted values called the domain or value set.
Each entity has a value for each of its attributes.
Example:
Customer Name: John
Customer Id: 321
1) Simple attribute:
This type of attribute cannot be divided into subparts.
Example: Age, sex, GPA
2) Composite attribute:
This type of attribute can be subdivided.
Example: Address: street, city, state, zip
3) Single-valued attribute:
This type of attribute can have only a single value.
Example: Social security number
4) Multi-valued attribute:
A multi-valued attribute can have many values.
Example: A person may have several college degrees or phone numbers
5) Derived attribute:
A derived attribute can be calculated or derived from other related attributes or entities.
Example: Age can be derived from D.O.B.
6) Stored attribute:
The attributes stored in a database are called stored attributes.
An attribute takes a null value when an entity does not have a value for it.
Null values indicate that the value for the particular attribute does not exist or is unknown.
E.g.: 1. A middle name may not be present for a person (nonexistence case).
2. An apartment number may be missing or unknown.
CONSTRAINTS
An E-R enterprise schema may define certain constraints to which the contents of a database system
must conform.
Three types of constraints are
1. Mapping cardinalities
2. Key constraints
3. Participation constraints
1. Mapping cardinalities
Mapping cardinalities express the number of entities to which another entity can be associated via
a relationship set.
Cardinality in an E-R diagram is represented in two ways:
i) Directed line ( → ) ii) Undirected line ( --------- )
i) One-to-one: An entity in A is associated with at most one entity in B, and an entity in B is associated with at most one entity in A.
ii) One-to-many: An entity in A is associated with any number of entities (zero or more) in B. An
entity in B, however, can be associated with at most one entity in A.
iii) Many-to-one: An entity in A is associated with at most one entity in B. An entity in B, however, can
be associated with any number (zero or more) of entities in A.
Example: Many employees work for a company. This relationship is shown by many-to-one as given
below.
[Figure: Employees – Works-for – Company]
iv) Many-to-many: An entity in A is associated with any number (zero or more) of entities in B, and an
entity in B is associated with any number (zero or more) of entities in A.
Example: Employee works on number of projects and project is handled by number of employees.
Therefore, the relationship between employee and project is many-to-many as shown below.
[Figure: Employee – Works-on – Project]
2. Keys
A key allows us to identify a set of attributes and thus distinguishes entities from each other.
Keys also help uniquely identify relationships, and thus distinguish relationships from each other.
Superkey: Any attribute or combination of attributes that uniquely identifies a row in the table.
Example: The Roll_No attribute of the entity set student distinguishes one student entity
from another; Customer_name and Customer_id together form a superkey.
Candidate key (minimal superkey): A superkey that does not contain a subset of attributes that is itself a
superkey.
Example: Student_name and Student_street together are sufficient to uniquely identify one
particular student.
Primary key: The candidate key selected to uniquely identify all rows. It should rarely change and
cannot contain null values.
Secondary key: An attribute or combination of attributes used to make data retrieval more efficient.
3. Participation Constraint
The participation of an entity set E in a relationship set R is total if every entity in E participates in at least one relationship in R; if only some entities in E participate, the participation is partial.
9. ENTITY-RELATIONSHIP (E-R) DIAGRAMS
E-R diagram can express the overall logical structure of a database graphically.
E-R diagram consists of the following major components:
• Double rectangles represent weak entity sets.
• Double lines are used in an E-R diagram to indicate that the participation of an entity set in a
relationship set is total; that is, each entity in the entity set occurs in at least one relationship in that
relationship set.
The number of times an entity participates in a relationship can be specified using complex
cardinalities.
An edge between an entity set and binary relationship set can have an associated minimum and
maximum cardinality assigned in the form of l..h.
l - Minimum cardinality
h - Maximum cardinality
A minimum value of 1 indicates total participation of the entity set in the relationship set.
A maximum value of 1 indicates that the entity participates in at most one relationship.
A maximum value * indicates no limit.
A label 1..* on an edge is equivalent to a double line.
The discriminator of a weak entity set is a set of attributes that distinguishes the different entities
within the weak entity set; it is also called a partial key.
Extended E-R Features
An ER model that is supported with additional semantic concepts is called the extended entity
relationship model or EER model.
EER model deals with
1. Specialization
2. Generalization
3. Aggregation
1. Specialization:
The process of designating subgroupings within an entity set is called Specialization
Specialization is a top-down process.
Consider an entity set person. A person may be further classified as one of the following:
• Customer
• Employee
All persons have a set of attributes in common, with some additional attributes for each subgroup.
Specialization is depicted by a triangle component labeled ISA.
The label ISA stands for "is a"; for example, a customer "is a" person.
The ISA relationship may also be referred to as a super class-subclass relationship.
2. Generalization:
Generalization is a simple inversion of specialization.
Generalization is the process of defining a more general entity type from a set of more specialized
entity types.
Generalization is a bottom-up approach.
Generalization results in the identification of a generalized super class from the original subclasses.
If an entity set is involved as a lower-level entity set in more than one ISA relationship, then the
entity set has multiple inheritance and the resulting structure is said to be a lattice.
Constraints on Generalizations
1. One type of constraint determines which entities can be members of a lower-level entity set. Such
membership may be one of the following:
• Condition-defined. Membership in the lower-level entity set is evaluated on the basis of
whether or not an entity satisfies an explicit condition.
• User-defined. User-defined constraints are assigned by the user.
2. A second type of constraint relates to whether or not entities may belong to more than one lower-
level entity set within a single generalization. The lower-level entity sets may be one of the
following:
• Disjoint. A disjointness constraint requires that an entity belong to no more than one lower-
level entity set.
• Overlapping. Same entity may belong to more than one lower-level entity set within a single
generalization.
3. A final constraint, the completeness constraint specifies whether or not an entity in the higher-level
entity set must belong to at least one of the lower-level entity sets .This constraint may be one of the
following:
• Total generalization or specialization. Each higher-level entity must belong to a lower-level
entity set. It is represented by double line.
• Partial generalization or specialization. Some higher-level entities may not belong to any
lower-level entity set.
3. Aggregation
One limitation of the E-R model is that it cannot express relationships among relationships.
Consider the ternary relationship works-on between an employee, a branch, and a job. Now suppose
we want to record managers for tasks performed by an employee at a branch, so another
entity set, manager, is created.
The best way to model such a situation is to use aggregation.
Aggregation is an abstraction through which relationships are treated as higher-level entities.
In our example, works-on acts as a high-level entity.
[Figure: summary of E-R diagram notation]
UNIT II
RELATIONAL MODEL
The relational Model – The catalog- Types– Keys - Relational Algebra – Domain Relational Calculus –
Tuple Relational Calculus - Fundamental operations – Additional Operations- SQL fundamentals -
Integrity – Triggers - Security – Advanced SQL features –Embedded SQL– Dynamic SQL- Missing
Information– Views – Introduction to Distributed Databases and Client/Server Databases
Mathematically, a table is called a relation and the rows in a table are called tuples.
The tuples in a relation may appear in any order (sorted or unsorted).
Several attributes can have the same domain, e.g. customer_name and employee_name.
Attributes can also have distinct domains, e.g. balance and branch_name.
An attribute can have a null value if its value is unknown or does not exist.
A database schema name begins with an uppercase letter and a database relation name begins with a lowercase letter.
Account-schema = (account-number, branch-name, balance)
account (Account-schema)
Account Table
2. THE CATALOG
The catalog is a place where all the schemas and the corresponding mappings are kept.
The catalog contains detailed information, also called descriptor information or metadata.
Descriptor information is essential for the system to perform its job properly.
For example the authorization subsystem uses catalog information about users and security
constraints to grant or deny access to a particular user.
The catalog should be self-describing.
3. RELATIONAL ALGEBRA
The relational algebra is a procedural query language.
It consists of a set of operations that take one or two relations as input and produce a new relation as
their result.
Formal Definition
A basic expression in the relational algebra consists of either one of the following:
A relation in the database
A constant relation
Let E1 and E2 be relational-algebra expressions; the following are all relational-algebra expressions:
E1 ∪ E2
E1 − E2
E1 × E2
σP (E1), where P is a predicate on attributes in E1
ΠS (E1), where S is a list consisting of some of the attributes in E1
ρx (E1), where x is the new name for the result of E1
1. Select Operation (σ)
The select operation selects tuples that satisfy a given predicate.
Syntax: σ<predicate> (R)
Example 1: Select the tuples for all books whose publishing year is 2000.
Query 1: σyear=2000(Book)
The output of query 1 is shown below.
Example 2: Select the tuples for all books whose price is greater than 300.
Query 2: σprice>300(Book)
The output of query 2 is shown below.
Example 3: Select the tuples for all books whose publishing year is 2000 or price is greater than 300.
Query 3: σ(year=2000) OR (price>300)(Book)
The output of query 3 is shown below.
Example 4: Select the tuples for all books whose publishing year is 2000 and price is greater than 300.
Query 4: σ(year=2000) AND (price>300)(Book)
2. Project Operation (Π)
The project operation selects certain columns from a table while discarding others. It removes any
duplicate tuples from the result relation.
Syntax
Π<attributelist> ( R )
Example: The following are the examples of project operation on Book relation.
Example 1: Display all titles with author name.
Query 1: ΠTitle, Author (Book)
The output of query 1 is shown below.
Title Author
DBMS Korth
Compiler Ulman
OOMD Rambaugh
PPL Sabista
Example 2: Display all book titles with authors and price.
Query 2: ΠTitle, Author, Price ( Book )
The output of query 2 is shown below.
Title Author Price
DBMS Korth 250
Compiler Ulman 350
OOMD Rambaugh 450
PPL Sabista 500
Composition of select and project operations
The relational operations select and project can be combined to form a complicated query.
Π customer-name (σ customer-city = "Harrison" (customer))
Output:
Customer-name
Hayes
Example: Display the titles of books having price greater than 300.
Query: ΠTitle (σprice>300(Book))
The output of query 1 is shown below.
Title
Compiler
OOMD
PPL
5. Rename Operation (ρ)
The rename operation gives a name to the result of a relational-algebra expression. The expression ρx (E) returns the result of expression E under the name x; the form ρx(A1, A2, ..., An) (E) renames both the relation and its attributes.
6. Cartesian-Product Operation(X)
Cartesian product is also known as CROSS PRODUCT or CROSS JOINS.
Cartesian product allows us to combine information from any two relations.
Syntax: Relation1 x Relation 2
Example: Consider following two relations publisher_info and Book_info.
Publisher_Info
Publisher_code Name
P0001 McGraw_Hill
P0002 PHI
P0003 Pearson
Book_Info
Book_ID Title
B0001 DBMS
B0002 Compiler
The Cartesian product of Publisher_Info and Book_Info is given below.
Publisher_Info X Book_Info
Publisher_code Name Book_ID Title
P0001 McGraw_Hill B0001 DBMS
P0002 PHI B0001 DBMS
P0003 Pearson B0001 DBMS
P0001 McGraw_Hill B0002 Compiler
P0002 PHI B0002 Compiler
P0003 Pearson B0002 Compiler
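In SQL, the same result can be obtained by listing both relations in the from clause, or with an explicit cross join:

select * from Publisher_Info, Book_Info;
-- equivalently:
select * from Publisher_Info cross join Book_Info;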
2. Natural Join (⋈)
The natural join operation performs a selection on those attributes that appear in both relation
schemes and finally removes duplicate attributes.
Syntax: Relation1 ⋈ Relation2
Example: Consider the two relations Employee and Salary.
Emp_name Salary
Hari 2000
Om 5000
Smith 7000
Jay 10000
3. Division Operation (÷)
The division operation is suited to queries that include the phrase "for all".
Depositor Relation
Suppose that we wish to find all customers who have an account at all the branches located in Brooklyn.
Step 1: We can obtain all branches in Brooklyn by the expression
r1 = Π branch-name (σ branch-city = "Brooklyn" (branch))
The result relation for this expression is shown in figure.
Step 2: We can find all (customer-name, branch-name) pairs for which the customer has an account at a
branch by writing
r2 = Π customer-name, branch-name (depositor ⋈ account)
Figure shows the result relation for this expression.
Now, we need to find customers who appear in r2 with every branch name in r1. The operation that
provides exactly those customers is the divide operation.
Thus, the query is
Π customer-name, branch-name (depositor ⋈ account) ÷ Π branch-name (σ branch-city = "Brooklyn" (branch))
The result of this expression is a relation that has the schema (customer-name) and that contains the tuple
(Johnson).
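SQL has no direct division operator; the same "for all" query is commonly encoded with a double not exists. A sketch over the bank schema used above:

select distinct d.customer_name
from depositor d
where not exists (
    -- a Brooklyn branch at which this customer has no account
    select *
    from branch b
    where b.branch_city = 'Brooklyn'
      and not exists (
            select *
            from depositor d2, account a
            where d2.customer_name = d.customer_name
              and d2.account_number = a.account_number
              and a.branch_name = b.branch_name));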
4. The Assignment Operation (←)
The assignment operation works like assignment in a programming language.
The result of the expression to the right of the ← is assigned to the relation variable on the left of the ←; for instance, temp ← Π customer-name (depositor) assigns the projection result to the relation variable temp.
With the assignment operation, a query can be written as a sequential program consisting of a series
of assignments followed by an expression whose value is displayed as the result of the query.
1. Generalized projection
2. Aggregate operations
3. Outer join.
1. Generalized projection
The generalized-projection operation extends the projection operation by allowing arithmetic
functions to be used in the projection list.
The generalized projection operation has the form
ΠF1, F2, ..., Fn (E)
where E is any relational-algebra expression and each of F1, F2, ..., Fn is an arithmetic expression involving constants and attributes in the schema of E.
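For example, with an assumed relation credit-info (customer-name, limit, credit-balance), the credit still available to each customer can be computed directly in the projection list:

Π customer-name, limit − credit-balance (credit-info)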
2. Aggregate Functions
Aggregate functions take a collection of values and return a single value as a result. Few Aggregate
Function are,
1. Avg
2. Min
3. Max
4. Sum
5. Count
1. Avg: The aggregate function avg returns the average of the values.
Example: Use the pt-works relation in Figure
G avg (salary)(pt-works)
Result:
Salary
2062.5
2. Min and Max: The aggregate functions min and max return the minimum and maximum of the values. Grouping can be combined with aggregation, as in
branch-name G sum(salary) as sum-salary, max(salary) as max-salary (pt-works)
The attribute branch-name in the left-hand subscript of G indicates that the input relation pt-works
must be divided into groups based on the value of branch-name. The calculated sum is placed under
the attribute name sum-salary and the maximum salary is placed under the attribute name max-salary.
3. Sum:
The aggregate function sum returns the total of the values.
Example: Suppose that we want to find out the total sum of salaries.
G sum(salary)(pt-works)
The symbol G is the letter G in calligraphic font; read it as "calligraphic G".
Result:
Salary
16500
4. Count:
Returns the number of the elements in the collection,
1. Inner Join
2. Outer Join
3. Natural Join
Inner Join (⋈)
Inner join returns the matching rows from the tables that are being joined.
Example: Consider the two relations
Example:
Result:
Outer Join
The outer-join operation is an extension of the join operation to deal with missing information.
Outer-join operations avoid loss of information.
Outer Joins are classified into three types namely:
1. Left Outer Join
2. Right Outer Join
3. Full Outer Join
The left outer join (⟕) takes all tuples in the left relation that did not match with any tuple in the
right relation, pads the tuples with null values for all other attributes from the right relation, and adds them
to the result of the natural join.
Example:
Result:
The right outer join (⟖) is symmetric to the left outer join: it takes all tuples in the right relation that did not match any tuple in the left relation, pads them with null values for attributes of the left relation, and adds them to the result of the natural join.
Example:
Result:
The full outer join (⟗) does both of those operations, padding tuples from the left relation that did
not match any from the right relation, as well as tuples from the right relation that did not match any from
the left relation, and adding them to the result of the join.
Example:
Result:
4. RELATIONAL CALCULUS
Relational Calculus is a formal query language where we can write one declarative expression to
specify a retrieval request and hence there is no description of how to retrieve it.
A calculus expression specifies what is to be retrieved rather than how to retrieve it.
Relational Calculus is considered to be non procedural language.
Relational Calculus can be divided into
1. Tuple Relational Calculus
2. Domain Relational Calculus
1. s ∈ r, where s is a tuple variable and r is a relation
2. s[x] Θ u[y], where s and u are tuple variables, x is an attribute on which s is defined, y is
an attribute on which u is defined, and Θ is a comparison operator (<, ≤, =, ≠, >, ≥)
3. s[x] Θ c, where s is a tuple variable, x is an attribute on which s is defined, Θ is a
comparison operator, and c is a constant in the domain of attribute x
Rules to built formulas from atoms
An atom is a formula.
If P1 is a formula, then so are ¬P1 and (P1).
If P1 and P2 are formulae, then so are P1 ∨ P2, P1 ∧ P2, and P1 ⇒ P2.
Example Queries
1. Find the loan_number, branch_name, and amount for loans of over $1200:
{<l, b, a> | <l, b, a> ∈ loan ∧ a > 1200}
2. Find the names of all customers who have a loan of over $1200:
{<c> | ∃ l, b, a (<c, l> ∈ borrower ∧ <l, b, a> ∈ loan ∧ a > 1200)}
3. Find the names of all customers who have a loan from the Perryridge branch and the loan
amount:
{<c, a> | ∃ l (<c, l> ∈ borrower ∧ ∃ b (<l, b, a> ∈ loan ∧ b = "Perryridge"))}
{<c, a> | ∃ l (<c, l> ∈ borrower ∧ <l, "Perryridge", a> ∈ loan)}
Safety of Expressions
The expression {<x1, x2, ..., xn> | P(x1, x2, ..., xn)} is safe if all of the following hold:
1. All values that appear in tuples of the expression are values from dom(P) (that is, the values appear
either in P or in a tuple of a relation mentioned in P).
2. For every "there exists" subformula of the form ∃x (P1(x)), the subformula is true if and only if there
is a value of x in dom(P1) such that P1(x) is true.
3. For every "for all" subformula of the form ∀x (P1(x)), the subformula is true if and only if P1(x) is
true for all values x from dom(P1).
5. SQL FUNDAMENTALS
5.1. Introduction
SQL is a standard command set used to communicate with relational database
management systems.
All tasks related to relational data management (creating tables, querying, modifying data,
granting access to users, and so on) can be performed using SQL.
5.2. Advantages of SQL
SQL is a high level language that provides a greater degree of abstraction than procedural languages.
SQL enables the end-users and systems personnel to deal with a number of database management
systems where it is available.
Application written in SQL can be easily ported across systems.
SQL specifies what is required and not how it should be done.
SQL is simple and easy to learn, yet it can handle complex situations.
All SQL operations are performed at a set level.
5.3. Parts of SQL
The SQL language has several parts:
Data-definition language (DDL). The SQL DDL provides commands for defining relation
schemas, deleting relations, and modifying relation schemas.
Interactive data-manipulation language (DML). The SQL DML includes a query language based
on both the relational algebra and the tuple relational calculus. It also includes commands to insert
tuples into, delete tuples from, and modify tuples in the database.
View definition. The SQL DDL includes commands for defining views.
Transaction control. SQL includes commands for specifying the beginning and ending of
transactions.
Embedded SQL and dynamic SQL. Embedded and dynamic SQL define how SQL statements can
be embedded within general-purpose programming languages, such as C, C++, Java, PL/I, COBOL,
Pascal, and FORTRAN.
Integrity. The SQL DDL includes commands for specifying integrity constraints that the data stored
in the database must satisfy. Updates that violate integrity constraints are disallowed.
Authorization. The SQL DDL includes commands for specifying access rights to relations and
views.
5.4. Domain Types in SQL
1. Char (n): Fixed length character string, with user-specified length n.
2. varchar(n): Variable length character strings, with user-specified maximum length n.
3. int: Integer (a finite subset of the integers that is machine-dependent).
4. Smallint: Small integer (a machine-dependent subset of the integer domain type).
5. numeric (p,d): Fixed-point number, with user-specified precision of p digits, with d digits to the
right of the decimal point.
6. Real, double precision: Floating point and double-precision floating point numbers, with
machine-dependent precision.
7. float (n): Floating point number, with user-specified precision of at least n digits.
8. Date: Dates, containing a (4-digit) year, month and day.
Example: date '2005-7-27'
9. Time: Time of day, in hours, minutes and seconds.
Example: time '09:00:30', time '09:00:30.75'
10. Timestamp: Date plus time of day.
Example: timestamp '2005-7-27 09:00:30.75'
11. Interval: period of time
i) Syntax: create table <table name> (columnname1 data type (size), columnname2 data type (size), ...);
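A concrete instance of the create table syntax (table and column names are illustrative):

create table customer (
    cust_name   varchar2(20),
    cust_street varchar2(30),
    cust_city   varchar2(30)
);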
ii) Syntax: alter table <table name> modify (columnname new data type (size));
Example: alter table customer modify (social_security_no varchar2 (11));
iii) Syntax: alter table <table name> add (new columnname data type (size));
Example: alter table customer add (acc_no varchar2(5));
Example:
a) Find the names of all branches in the loan table
select branch_name from loan;
b) List all account numbers made by brighton branch
select acc_no from account where branch_name = 'brighton';
c) List the customers who are living in the city harrison
select cust_name from customer where cust_city = 'harrison';
The basic form of an SQL query is:
select A1, A2, ..., An from r1, r2, ..., rm where P
Rename Operation
The SQL allows renaming relations and attributes using the as clause:
Old-name as new-name
Example: Find the name, loan number and loan amount of all customers; rename the column name
loan_number as loan_id.
select customer_name, borrower.loan_number as loan_id, amount from borrower, loan where
borrower.loan_number = loan.loan_number;
Tuple Variables
Tuple variables are defined in the from clause via the use of the as clause.
Example: Find the customer names and their loan numbers for all customers having a loan at some
branch.
select customer_name, T.loan_number, S.amount from borrower as T, loan as S
where T.loan_number = S.loan_number
String Operation
SQL includes a string-matching operator for comparisons on character strings. The operator "like"
uses patterns that are described using two special characters:
o percent (%): The % character matches any substring.
o underscore (_): The _ character matches any character.
Example:
'Perry%' matches any string beginning with "Perry".
'%idge%' matches any string containing "idge" as a substring, for example, 'Perryridge', 'Rock
Ridge', 'Mianus Bridge', and 'Ridgeway'.
'___' matches any string of exactly three characters.
'___%' matches any string of at least three characters.
Example: select * from customer where customer_name like 'j%';
Example: select distinct customer_name from borrower, loan where borrower.loan_number =
loan.loan_number and branch_name = 'Perryridge' order by customer_name;
We may specify desc for descending order or asc for ascending order, for each attribute; ascending
order is the default.
Example: order by customer_name desc
Set Operations
Set operators combine the results of two queries into a single one.
1. Union – returns all distinct rows selected by either query.
Example: Find all customers having a loan, an account or both at the bank
Query: select cust_name from depositor union select cust_name from borrower;
2. Union all – returns all rows selected by either query, including duplicate rows.
3. Intersect – returns only rows that are common to both queries.
Example: Find all customers who have both a loan, and an account at the bank.
Query: select cust_name from depositor intersect select cust_name from borrower;
4. Minus – returns all distinct rows selected by the first query but not by the second.
Example: To find all customers who have an account but no loan at the bank.
Query: select cust_name from depositor minus select cust_name from borrower;
Aggregate Function
These functions operate on the multiset of values of a column of a relation and return a single value.
(a) AVG - To find the average of values.
Example: Find the average of account balance from the account table
Query: select avg (balance) from account;
(b) SUM – To find the sum of values.
Example: Find the sum of account balances from the account table.
Query: select sum (balance) from account;
Group by and having
Example: Find the average balance at the brighton branch.
Query: select branch_name, avg (balance) from account group by branch_name having
branch_name='brighton';
Null Values
It is possible for tuples to have a null value, denoted by null, for some of their attributes
Null signifies an unknown value or that a value does not exist.
The predicate is null can be used to check for null values.
Example: Find all loan numbers that appear in the loan relation with null values for
amount.
select loan_number from loan where amount is null
and: The result of true and unknown is unknown, false and unknown is false, while unknown and unknown
is unknown.
or: The result of true or unknown is true, false or unknown is unknown, while unknown or unknown is
unknown.
not: The result of not unknown is unknown.
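Because the where clause treats unknown like false, a tuple whose amount is null satisfies neither of the following complementary predicates, so it appears in neither result (loan table as above):

select loan_number from loan where amount > 1000;
select loan_number from loan where not (amount > 1000);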
Nested Subqueries
SQL provides a mechanism for the nesting of subqueries.
A subquery is a select-from-where expression that is nested within another query.
A common use of subqueries is to perform tests for set membership, set comparisons, and set
cardinality.
Example 1: Find all information about the customer whose account number is A-101.
Query: select * from customer where cust_name = (select cust_name
from depositor where acc_no = 'A-101');
Example 2: Find the names and loan numbers of all customers who have a loan from the bank.
Query: select cust_name, loan_no from borrower where loan_no in
(select loan_no from loan);
1. Set membership
IN Example: select * from customer where customer_name in ('Hays', 'Jones');
NOT IN Example: select * from customer where customer_name not in ('Hays', 'Jones');
2. Set comparisons
SQL uses various comparison operators such as <, <=, =, >, >=, <>, any, all, some, etc. to compare sets.
Example 1: select * from borrower where loan_number < any (select loan_number
from loan where branch_name = 'Perryridge');
Example 2: select loan_no from loan where amount <= 30000;
3. Test for empty relations
The exists construct returns true if the argument subquery is nonempty.
Example: select title from book where exists (select * from order where
book.book_id = order.book_id);
Similar to exists, we can use not exists.
Example: select title from book where not exists (select * from order where
book.book_id = order.book_id);
1. Derived Relations
SQL allows a subquery expression to be used in the from clause. If we use such an
expression, then we must give the result relation a name, and we can rename the attributes. The as
clause is used for renaming.
For example: "Find the average account balance of those branches where the average account balance is
greater than $1200."
select branch-name, avg-balance from (select branch-name, avg (balance) from account
group by branch-name) as branch-avg (branch-name, avg-balance) where avg-balance > 1200;
Here the subquery result is named branch-avg, with attributes branch-name and avg-balance.
2. with clause.
The with clause provides a way of defining a temporary view, whose definition is available
only to the query in which the with clause occurs.
Consider the following query, which selects accounts with the maximum balance; if there are
many accounts with the same maximum balance, all of them are selected.
with max-balance (value) as
(select max(balance) from account)
select account-number
from account, max-balance
where account.balance = max-balance.value
6. INTEGRITY
Integrity constraints ensure that changes made to the database by authorized users do not result in a
loss of data consistency.
It is a mechanism used to prevent invalid data entry into the table.
Prevents accidental damages of database.
Types
1. Domain integrity Constraints
2. Entity integrity Constraints
3. Referential integrity Constraints
1. Domain integrity Constraints
A Domain is a set of values that may be assigned to an attribute. All values that appear in a column
of a relation (table) must be taken from the same domain.
Types
Not Null Constraints
Check Constraints
Example: create table student (name char(15) not null, student_id char(10), degree_level char(15),
primary key (student_id), check (degree_level in ('bachelors', 'masters', 'doctorate')));
The create domain clause can be used to define new domains. For example, the statements:
create domain Dollars numeric(12,2)
create domain Pounds numeric(12,2)
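Once defined, a domain can be used wherever a built-in type could appear; a small sketch:

create table account (acc_no char(10), balance Dollars);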
2. Entity integrity Constraints
The entity integrity constraints state that no primary key value can be null. This is because the
primary key value is used to identify individual tuples in a relation.
Types
Unique Constraint
Primary key Constraint
a) UNIQUE – Avoid duplicate values
unique(Aj1,Aj2,……,Ajm)
The unique specification says that attributes Aj1, Aj2, ..., Ajm form a candidate key. These attributes
should have distinct values.
Syntax :create table <table name>(columnname data type (size) constraint
constraint_name unique);
b) Composite UNIQUE – Multicolumn unique key is called composite unique key
Syntax : create table <table name>(columnname1 data type (size), columnname2 data type
(size), constraint constraint_name unique (columnname1, columnname2));
c) PRIMARY KEY – It will not allow null values and avoid duplicate values.
Syntax : create table <table name>(columnname data type (size) constraint constraint_name
primary key);
d) Composite PRIMARY KEY – Multicolumn primary key is called composite primary key
Syntax : create table <table name>(columnname1 data type (size), columnname2 data type
(size), constraint constraint_name primary key (columnname1, columnname2));
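A concrete sketch of a composite primary key (table and column names are illustrative):

create table enrollment (
    student_id char(10),
    course_id  char(8),
    grade      char(2),
    constraint pk_enroll primary key (student_id, course_id)
);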
3. REFERENTIAL INTEGRITY
Ensures that a value that appears in one relation for a given set of attributes also appears for a certain set
of attributes in another relation. This condition is called referential integrity.
Reference key (foreign key): It represents relationships between tables. A foreign key is a column whose
values are derived from the primary key of the same or some other table.
Syntax: create table <table name>(columnname data type (size) constraint constraint_name
references parent_table_name);
Formal Definition
Let r1(R1) and r2(R2) be relations with primary keys K1 and K2 respectively.
A subset α of R2 is a foreign key referencing K1 in relation r1 if for every t2 in r2 there
must be a tuple t1 in r1 such that t1[K1] = t2[α].
The referential integrity constraint is also called a subset dependency, since it can be written as
Πα (r2) ⊆ ΠK1 (r1)
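A concrete sketch of a foreign-key declaration, following the bank schema used in these notes (constraint names are illustrative):

create table account (
    acc_no      varchar2(10) constraint pk_acc primary key,
    branch_name varchar2(15) constraint fk_branch references branch (branch_name),
    balance     number(12,2)
);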
Assertions
An assertion is a predicate expressing a condition that we wish the database always to satisfy.
An assertion in SQL takes the form
Create assertion <assertion-name> check <predicate>
When an assertion is made, the system tests it for validity, and tests it again on every update that
may violate the assertion
Asserting "for all X, P(X)" is achieved in a roundabout fashion using "not exists X such that not
P(X)".
Assertion Example
The sum of all loan amounts for each branch must be less than the sum of all account balances at
the branch.
create assertion sum-constraint check (not exists (select * from branch where (select
sum(amount) from loan where loan.branch-name = branch.branch-name) >= (select
sum(balance) from account where account.branch-name = branch.branch-name)))
7.TRIGGERS
A trigger is a statement that is executed automatically by the system as a side effect of a
modification to the database.
To design a trigger mechanism, we must:
Specify the conditions under which the trigger is to be executed.
Specify the actions to be taken when the trigger executes.
Trigger Example
Suppose that instead of allowing negative account balances, the bank deals with overdrafts by
setting the account balance to zero.
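A minimal sketch of such a trigger in SQL-standard trigger syntax (column names assumed from the bank schema):

create trigger overdraft_trigger after update on account
referencing new row as nrow
for each row
when (nrow.balance < 0)
begin atomic
    -- overdraft detected: reset the balance to zero
    update account
    set balance = 0
    where account.acc_no = nrow.acc_no;
end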
revoke select on
branch from U1, U2, U3 restrict
With restrict, the revoke command fails if cascading revokes are required.
Roles
Roles permit common privileges for a class of users to be specified just once, by creating a
corresponding "role".
Privileges can be granted to or revoked from roles, just like users.
Roles can be assigned to users, and even to other roles
o create role teller
o create role manager
o grant select on branch to teller
o grant update (balance) on account to teller
o grant all privileges on account to manager
o grant teller to manager
o grant teller to alice, bob
o grant manager to avi
Authorization and Views
Users can be given authorization on views, without being given any authorization on the relations
used in the view definition
Ability of views to hide data serves both to simplify usage of the system and to enhance security by
allowing users access only to data they need for their job
A combination of relation-level security and view-level security can be used to limit a user's
access to precisely the data that user needs.
Granting of Privileges
The passage of authorization from one user to another may be represented by an authorization grant
graph.
The nodes of this graph are the users.
The root of the graph is the database administrator.
Consider graph for update authorization on loan.
An edge Ui → Uj indicates that user Ui has granted update authorization on loan to Uj.
Authorization Grant Graph
If the database administrator revokes authorization from U2, U2 retains authorization through U3.
If authorization is revoked subsequently from U3, U3 appears to retain authorization through U2.
When the database administrator revokes authorization from U3, the edges from U3 to U2 and from
U2 to U3 are no longer part of a path starting with the database administrator.
The edges between U2 and U3 are deleted, and the resulting authorization graph is shown in the figure.
Audit Trails
An audit trail is a log of all changes (inserts/deletes/updates) to the database along with information
such as which user performed the change, and when the change was performed.
Used to track erroneous/fraudulent updates.
Can be implemented using triggers, but many database systems provide direct support.
Limitations of SQL Authorization
SQL does not support authorization at a tuple level
o E.g. we cannot restrict students to see only (the tuples storing) their own grades
With the growth in Web access to databases, database accesses come primarily from application
servers.
o End users don't have database user ids, they are all mapped to the same database user id
All end-users of an application (such as a web application) may be mapped to a single database
user
The task of authorization in above cases falls on the application program, with no support from
SQL
o Benefit: fine grained authorizations, such as to individual tuples, can be implemented by the
application.
o Drawback: Authorization must be done in application code, and may be dispersed all over
an application
o Checking for absence of authorization loopholes becomes very difficult since it requires
reading large amounts of application code
Encryption
Data Encryption Standard (DES) substitutes characters and rearranges their order on the basis of an
encryption key, which is provided to authorized users via a secure mechanism. The scheme is no more secure
than the key-transmission mechanism, since the key has to be shared.
Advanced Encryption Standard (AES) is a new standard replacing DES, and is based on the
Rijndael algorithm, but is also dependent on shared secret keys
Public-key encryption is based on each user having two keys:
o public key – publicly published key used to encrypt data, but cannot be used to decrypt data
o private key -- key known only to individual user, and used to decrypt data.
Need not be transmitted to the site doing encryption.
Encryption scheme is such that it is impossible or extremely hard to decrypt data given only the
public key.
The RSA public-key encryption scheme is based on the hardness of factoring a very large number
(100's of digits) into its prime components.
Authentication (Challenge response system)
Password based authentication is widely used, but is susceptible to sniffing on a network
Challenge-response systems avoid transmission of passwords
o DB sends a (randomly generated) challenge string to user
o User encrypts string and returns result.
o DB verifies identity by decrypting result
o Can use a public-key encryption system: the DB sends a message encrypted using the user's
public key, and the user decrypts and sends the message back.
Digital signatures are used to verify authenticity of data
o Private key is used to sign data and the signed data is made public.
o Anyone can read the data with the public key but cannot generate the data without the private key.
o Digital signatures also help ensure nonrepudiation: the sender
cannot later claim to have not created the data.
Digital Certificates
Digital certificates are used to verify authenticity of public keys.
Problem: when you communicate with a web site, how do you know if you are talking with the
genuine web site or an imposter?
o Solution: use the public key of the web site
o Problem: how to verify if the public key itself is genuine?
Solution:
o Every client (e.g. browser) has public keys of a few root-level certification authorities
o A site can get its name/URL and public key signed by a certification authority: signed
document is called a certificate
o Client can use public key of certification authority to verify certificate
o Multiple levels of certification authorities can exist. Each certification authority
presents its own public-key certificate signed by a higher-level authority, and
uses its private key to sign the certificates of other web sites/authorities.
9. EMBEDDED SQL
Embedded SQL statements are SQL statements included in a host programming language.
The SQL standard defines embeddings of SQL in a variety of programming languages such as C,
Java, and Cobol.
A language to which SQL queries are embedded is referred to as a host language, and the SQL
structures permitted in the host language comprise embedded SQL.
The embedded SQL program should be preprocessed prior to compilation.
The preprocessor replaces embedded SQLrequests with host language declarations and procedure
calls.
The resulting program is compiled by host language compiler.
EXEC SQL statement is used to identify embedded SQL request to the preprocessor
o EXEC SQL <embedded SQL statement > END_EXEC
Note: this varies by language (for example, the Java embedding uses #sql { ... };, while the C
embedding uses a semicolon instead of END_EXEC)
Example Query
From within a host language, find the names and cities of customers with more than the variable
amount dollars in some account.
Specify the query in SQL and declare a cursor for it
EXEC SQL
declare c cursor for
select depositor.customer_name, customer_city
from depositor, customer, account
where depositor.customer_name = customer.customer_name
and depositor.account_number = account.account_number
and account.balance > :amount
END_EXEC
The open statement causes the query to be evaluated
EXEC SQL open c END_EXEC
The fetch statement causes the values of one tuple in the query result to be placed on host
language variables.
EXEC SQL fetch c into :cn, :cc END_EXEC
Repeated calls to fetch get successive tuples in the query result
A variable called SQLSTATE in the SQL communication area (SQLCA) is set to '02000' to
indicate that no more data is available
The close statement causes the database system to delete the temporary relation that holds the result
of the query.
EXEC SQL close c END_EXEC
Note: above details vary with language. For example, the Java embedding defines Java iterators to step
through result tuples.
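The same declare/open/fetch/close pattern appears in modern database APIs. Below is a small illustrative sketch using Python's built-in sqlite3 module; the tables and data mirror the banking example above and are assumptions made for the demo.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table depositor(customer_name text, account_number text);
create table customer(customer_name text, customer_city text);
create table account(account_number text, balance real);
insert into depositor values ('Hayes', 'A-101');
insert into customer values ('Hayes', 'Harrison');
insert into account values ('A-101', 1500);
""")
amount = 1000                        # plays the role of the host variable :amount
cur = conn.cursor()                  # analogous to: declare c cursor for ...
cur.execute("""select depositor.customer_name, customer_city
from depositor, customer, account
where depositor.customer_name = customer.customer_name
and depositor.account_number = account.account_number
and account.balance > ?""", (amount,))          # analogous to: open c
row = cur.fetchone()                 # analogous to: fetch c into :cn, :cc
while row is not None:               # fetchone() returns None when no more data
    cn, cc = row
    print(cn, cc)
    row = cur.fetchone()
cur.close()                          # analogous to: close c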
Updates Through Cursors
Can update tuples fetched by a cursor by declaring that the cursor is for update:
o declare c cursor for select * from account
where branch_name = 'Perryridge'
for update
To update tuple at the current location of cursor c
update account set balance = balance + 100 where current of c
10. DYNAMIC SQL
SQLSOURCE and SQLPREPPED are SQL variables: SQLSOURCE holds the source form of an SQL
statement, and SQLPREPPED holds its compiled version.
The prepare statement takes the source statement and prepares it to produce an executable version,
which is stored in SQLPREPPED.
EXECUTE statement executes the SQLPREPPED version.
EXECUTE IMMEDIATE statement combines the functions of PREPARE and EXECUTE in a
single operation.
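As a rough analogue of PREPARE/EXECUTE, parameterized statements in Python's sqlite3 keep the statement source (the SQLSOURCE role) separate from its repeated execution; the account table here is an assumption for the demo.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table account(account_number text, balance real)")
conn.execute("insert into account values ('A-101', 500)")

sqlsource = "update account set balance = balance + ? where account_number = ?"
conn.execute(sqlsource, (100, "A-101"))      # prepare-and-execute in one call
conn.execute(sqlsource, (-50, "A-101"))      # re-execute with new parameter values
print(conn.execute("select balance from account").fetchone())   # (550.0,)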
Call Level Interface
The SQL Call Level Interface (SQL/CLI) is based on Microsoft's Open Database Connectivity
(ODBC).
It allows applications to be written for which the exact SQL code is not known until run
time.
Two principal reasons for using SQL/CLI:
Dynamic SQL is a source code statement and requires some kind of SQL compiler to
process operations like PREPARE and EXECUTE. SQL/CLI does not require any special
compiler; instead it uses the host language compiler, and the SQL is in object code form.
SQL/CLI is DBMS independent, i.e., it allows creation of applications that work with
several different DBMSs.
Example for SQL/CLI
strcpy(sqlsource, "DELETE FROM account WHERE amount > 10000");
rc = SQLExecDirect(hstmt, (SQLCHAR *) sqlsource, SQL_NTS);
strcpy copies the source form of the DELETE statement into the sqlsource variable.
SQLExecDirect executes the SQL statement contained in sqlsource and assigns the return code to
the variable rc.
Two standards connect to an SQL database and perform queries and updates:
Open Database Connectivity (ODBC) was initially developed for the C language and later
extended to other languages such as C++, C# and Visual Basic.
Java Database Connectivity (JDBC) is an application program interface for the Java language.
Users and applications connect to an SQL server, establishing a session, execute a series of
statements, and finally disconnect the session.
In addition to normal SQL commands, a session can also contain commands to commit the work
carried out in the session or to roll it back.
11. VIEWS
A view is an object that gives the user a logical view of data from an underlying table or tables.
It is not desirable for all users to see the entire logical model.
Security considerations may require that certain data be hidden from users.
Any relation that is not part of the logical model, but is made visible to a user as a virtual relation,
is called a view.
Creating a View
A view is defined using the create view statement:
create view v as <query expression>
Updating a View
Views can be used for data manipulation, i.e., the user can perform insert, update and delete
operations on the view.
The views on which data manipulation can be done are called updatable views; the views that do
not allow data manipulation are called read-only views.
Destroying a View
A view is removed with the drop view statement:
drop view v
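A short runnable illustration of creating, querying and destroying a view, using Python's sqlite3; the table, data and view name are assumptions for the demo.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table account(account_number text, branch_name text, balance real);
insert into account values ('A-101', 'Perryridge', 500);
insert into account values ('A-215', 'Brighton', 700);
create view perryridge_accounts as
    select account_number, balance from account where branch_name = 'Perryridge';
""")
# the view behaves like a relation, but is computed from the base table on demand
print(conn.execute("select * from perryridge_accounts").fetchall())
conn.execute("drop view perryridge_accounts")   # destroys the view; base table untouched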
12. DISTRIBUTED DATABASES
A distributed database is a database in which storage devices are not all attached to a common processing
unit such as the CPU; it is controlled by a distributed database management system (together sometimes
called a distributed database system).
Two Types :
• Homogeneous distributed databases
Same software/schema on all sites, data may be partitioned among sites
Goal: provide a view of a single database, hiding details of distribution
• Heterogeneous distributed databases
Different software/schema on different sites
Goal: integrate existing databases to provide useful functionality
• Differentiate between local and global transactions
A local transaction accesses data in the single site at which the transaction was initiated.
A global transaction either accesses data in a site different from the one at which the transaction was initiated
or accesses data in several different sites.
CLIENT/SERVER DATABASES :
• The user specifies which database(s) to query and formulates a query, using the client software.
• The client software then connects to the database(s) and submits the query, in a structure suitable for
communication between client and server.
• The server retrieves data from the database, orders the results, and returns them to the client.
• The client processes the incoming data and presents it to the user.
2 Mark Questions
16 Mark Questions
1. Discuss the various operations in relational algebra (fundamental operations – additional operations).
2. Discuss in detail integrity, triggers and security.
3. Explain embedded and dynamic SQL.
4. Explain the string operations and aggregate functions used in SQL.
5. Explain in detail the domain relational calculus.
6. Explain in detail the tuple relational calculus.
7. Explain in detail distributed databases and client/server databases.
UNIT III
DATABASE DESIGN
1. INTRODUCTION
Relational database design requires a "good" collection of relation schemas.
Pit-falls in Relational Database Design
A bad design may lead to
• Repetition of information
• Inability to represent certain information
Design Goals
a) Avoid redundant data.
b) Ensure that relationships among attributes are represented.
c) Facilitate the checking of updates for violation of database integrity constraints.
Example: Lending-schema = (branch_name, branch_city, assets, customer_name, loan_no, amount).
(a) Such a design repeats branch information (branch_city, assets) for every loan at a branch.
(b) Can use null values (e.g., to represent a branch with no loans), but they are difficult to handle.
2. FUNCTIONAL DEPENDENCIES
Functional dependencies are constraints on the set of legal relations.
A functional dependency α → β holds on R if and only if, for any legal relation r(R), whenever
any two tuples t1 and t2 of r agree on the attributes α, they also agree on the attributes β. That is,
t1[α] = t2[α] ⇒ t1[β] = t2[β]
It requires that the value for a certain set of attributes determines uniquely the value for another set
of attributes.
In a given relation R, X and Y are attributes. Attributes Y is functionally dependent on attribute X if
each value of X determines exactly one value of Y, which is represented as
X –> Y
i.e., “X determines Y” or “Y is functionally dependent on X”
X –> Y does not imply Y –> X
For example, in a student relation the value of an attribute “ Marks” is known then the value of an
attribute “Grade” is determined since
Marks –> Grade
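The definition can be checked mechanically: X → Y holds in a relation instance if no two tuples agree on X but disagree on Y. A minimal sketch in Python (the sample rows are hypothetical):

def fd_holds(rows, X, Y):
    # rows: list of dicts; X, Y: lists of attribute names
    seen = {}
    for t in rows:
        x_val = tuple(t[a] for a in X)
        y_val = tuple(t[a] for a in Y)
        if x_val in seen and seen[x_val] != y_val:
            return False        # two tuples agree on X but differ on Y
        seen[x_val] = y_val
    return True

students = [{"Marks": 92, "Grade": "A"},
            {"Marks": 75, "Grade": "B"},
            {"Marks": 92, "Grade": "A"}]
print(fd_holds(students, ["Marks"], ["Grade"]))   # True: Marks -> Grade holds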
Types
(a) Full functional dependency
(b) Partial functional dependency
(c) Transitive functional dependency
(a)Full dependencies
In a relation R, X and Y are attributes. X fully functionally determines Y if Y is functionally
determined by X and by no proper subset of X.
For example, in a relation with attributes {student_no, course_no, marks}, marks is fully
functionally dependent on student_no and course_no together, and not on any subset of
{student_no, course_no}.
This means marks cannot be determined either by student_no or by course_no alone; it can be
determined only by using student_no and course_no together.
Hence marks is fully functionally dependent on {student_no, course_no}.
(b) Partial dependencies
In a relation R with a composite key X, a non-key attribute Y is partially dependent on X if Y is
functionally determined by a proper subset of X alone.
(c) Transitive dependencies
If X → Y and Y → Z hold, then X → Z holds transitively:
X → Y
Y → Z
⇒ X → Z
For example, grade depends on marks and in turn marks depends on {student_no, course_no};
hence grade depends transitively on {student_no, course_no}.
Use of Functional Dependencies
We use functional dependencies to:
o Test relations to see if they are legal under a given set of functional dependencies.
If a relation r is legal under a set F of functional dependencies, we say that r satisfies
F.
o specify constraints on the set of legal relations
We say that F holds on R if all legal relations on R satisfy the set of functional
dependencies F.
2.1. CLOSURE OF A SET OF FUNCTIONAL DEPENDENCIES
Given a set of functional dependencies F, there are certain other functional dependencies that are
logically implied by F.
o For example: if A → B and B → C, then we can infer that A → C
The set of all functional dependencies logically implied by F is the closure of F.
We denote the closure of F by F+.
We can find all of F+ by applying Armstrong's Axioms:
o Reflexivity Rule
If α is a set of attributes and β ⊆ α, then α → β holds.
o Augmentation Rule
If α → β holds and γ is a set of attributes, then γα → γβ holds.
o Transitivity Rule
If α → β holds and β → γ holds, then α → γ holds.
These rules are sound (they generate only functional dependencies that actually hold) and
complete (they generate all functional dependencies that hold).
Procedure for Computing F+: To compute the closure of a set of functional dependencies F:
F+ = F
repeat
for each functional dependency f in F+
apply the reflexivity and augmentation rules on f and add the resulting functional dependencies to F+
for each pair of functional dependencies f1 and f2 in F+
if f1 and f2 can be combined using transitivity, then add the resulting functional dependency to F+
until F+ does not change any further
2.2. CLOSURE OF ATTRIBUTE SETS
Given a set of attributes α, the closure of α under F (denoted α+) is the set of attributes that are
functionally determined by α under F.
Algorithm:
result := α;
while (changes to result) do
for each functional dependency β → γ in F do
if β ⊆ result then result := result ∪ γ
Example: consider R = (A, B, C, G, H, I) with F = {A → B, A → C, CG → H, CG → I, B → H}.
To compute (AG)+:
1. result = AG
2. result = ABCG (A → B and A → C)
3. result = ABCGH (CG → H and CG ⊆ ABCG)
4. result = ABCGHI (CG → I and CG ⊆ ABCGH)
Uses of Attribute Closure
There are several uses of the attribute closure algorithm:
Testing for super key:
o To test if α is a super key, we compute α+ and check if α+ contains all attributes of R.
Testing functional dependencies
o To check if a functional dependency α → β holds (or, in other words, is in F+), just check if
β ⊆ α+.
o That is, we compute α+ by using attribute closure, and then check if it contains β.
o This is a simple and cheap test, and very useful
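The attribute-closure algorithm translates almost line for line into code. A small sketch, representing each functional dependency as a (left, right) pair of attribute sets; the relation and FDs are the R = (A, B, C, G, H, I) example above.

def closure(attrs, fds):
    # compute attrs+ under fds, where fds is a list of (frozenset, frozenset) pairs
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for left, right in fds:
            if left <= result and not right <= result:
                result |= right          # the dependency fires: add its right side
                changed = True
    return result

F = [(frozenset("A"), frozenset("B")), (frozenset("A"), frozenset("C")),
     (frozenset("CG"), frozenset("H")), (frozenset("CG"), frozenset("I")),
     (frozenset("B"), frozenset("H"))]
print(sorted(closure({"A", "G"}, F)))    # ['A','B','C','G','H','I']: AG is a superkey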
2.3. CANONICAL COVER
Suppose a relational schema R has a set F of functional dependencies.
Whenever a user performs an update on the relation, the database system must ensure that the update
does not violate any functional dependency.
The system must roll back the update if it violates any functional dependency in the set F.
The violation can be checked by testing a simplified set of functional dependencies.
If simplified set of functional dependency is satisfied then the original functional dependency is
satisfied and vice versa.
Sets of functional dependencies may have redundant dependencies that can be inferred from the
others.
A canonical cover of F is a “minimal” set of functional dependencies equivalent to F, having no
redundant dependencies or redundant parts of dependencies
Extraneous Attributes
An attribute of a functional dependency is said to be extraneous if we can remove it without
changing the closure of the set of functional dependencies.
Consider a set F of functional dependencies and the functional dependency α → β in F.
o Attribute A is extraneous in α if A ∈ α and F logically implies
(F – {α → β}) ∪ {(α – A) → β}.
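Using attribute closure, extraneousness of an attribute A in the left side α of α → β reduces to one closure computation: A is extraneous if (α − A)+ under F still contains β. A sketch (closure is the same routine as in the example above; F2 is a hypothetical FD set):

def closure(attrs, fds):
    result, changed = set(attrs), True
    while changed:
        changed = False
        for left, right in fds:
            if left <= result and not right <= result:
                result |= right
                changed = True
    return result

def extraneous_in_left(A, left, right, fds):
    # A is extraneous in left -> right if (left - {A})+ still covers right
    return frozenset(right) <= closure(frozenset(left) - {A}, fds)

# with F2 = {AB -> C, A -> B}, B is extraneous in AB -> C since A+ = {A, B, C}
F2 = [(frozenset("AB"), frozenset("C")), (frozenset("A"), frozenset("B"))]
print(extraneous_in_left("B", "AB", "C", F2))   # True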
3. NORMALIZATION
Normalization of data is a process of analyzing the given relational schema based on their functional
dependencies and primary key to achieve the desirable properties of
Minimize redundancy
Minimize insert, delete and update anomalies during database activities
Normalization is an essential part of database design.
The concept of normalization helps the designer to build an efficient design.
Purpose of Normalization:
Minimize redundancy in data.
Remove insert, delete and update anomaly during database activities.
Department
In our example the Department relation is not in 1NF because Dlocation is a multivalued attribute.
There are three main techniques to achieve 1NF for such a relation.
1. Remove the Dlocation that violates 1NF and place it in a separate relation Dept_location
along with primary key Dnumber of department. The primary key of this relation is the
combination of {Dnumber, Dlocation}.
Dept_location
Dnumber Dlocation
5 Bellaire
5 Sugarland
5 Houston
4 Stafford
1 Houston
2. Expand the key so that there is a separate tuple in the original Department relation for each
location. The primary key becomes {Dnumber, Dlocation}. This solution has the
disadvantage of introducing redundancy into the relation.
3. If a maximum number of values is known for the attribute (for example, if it is known that
at most three locations can exist for a department), then replace Dlocation by Dlocation1,
Dlocation2 and Dlocation3. This solution has the disadvantage of introducing null values if
most departments have fewer than three locations.
EMP_PROJ1 (Eid, Ename)          EMP_PROJ2 (Eid, Pnumber, Hours)
In the above example EMP_PROJ, Ssn and Pnumber together form the primary key.
The table is in 1NF.
FD1 is in 2NF, but FD2 and FD3 violate 2NF.
The attributes Ename, Pname and Plocation in FD2 and FD3 are partially dependent on the primary key
attributes Ssn and Pnumber.
A relation which is not in second normal form can be brought into 2NF by decomposing the
relation into a number of relations such that each nonprime attribute is fully functionally dependent
on the primary key.
EP1 (Ssn, Pnumber, Hours)         FD1: {Ssn, Pnumber} → Hours
EP2 (Ssn, Ename)                  FD2: Ssn → Ename
EP3 (Pnumber, Pname, Plocation)   FD3: Pnumber → {Pname, Plocation}
EMP_DEPT (Ename, Eid, DOB, Address, Dnumber, Dname, DMGRid)
ED1 (Ename, Eid, DOB, Address, Dnumber)
ED2 (Dnumber, Dname, DMGRid)
The dependency Eid → DMGRid is transitive through Dnumber in EMP_DEPT, because both the
dependencies Eid → Dnumber and Dnumber → DMGRid hold.
Dnumber is neither a key itself nor a subset of a key of EMP_DEPT; therefore the EMP_DEPT
relational schema is not in 3NF.
The relation is in 2NF because there are no partial dependencies on the key attribute.
We can normalize EMP_DEPT by decomposing it into the two 3NF relational schemas ED1 and ED2.
2NF
Test: For relations where the primary key contains multiple attributes, no non-key attribute
should be functionally dependent on a part of the primary key.
Remedy: Decompose and set up a new relation for each partial key with its dependent
attributes. Make sure to keep a relation with the original primary key and any attributes
that are fully functionally dependent on it.
3NF
Test: Relations should not have a non-key attribute functionally determined by another
non-key attribute; i.e., there should be no transitive dependency of a non-key attribute on
the primary key.
Remedy: Decompose and set up a relation that includes the non-key attribute(s) that
functionally determine the other non-key attributes.
If a relation is decomposed to satisfy a normal form, each resulting relation must itself satisfy the
corresponding normal-form rule.
8. DEPENDENCY PRESERVATION
Let F be a set of functional dependencies on a schema R, and let R1, R2, . . . , Rn be a decomposition
of R.
The restriction of F to Ri is the set Fi of all functional dependencies in F+ that include only
attributes of Ri.
Example
Consider R = (A, B, C) with F = {A → B, B → C}, decomposed into R1 = (A, B) and R2 = (A, C).
The restriction of F to R2 is A → C, since A → C is in F+, even though it is not in F.
Even though F' ≠ F, F'+ = F+, where F' = F1 ∪ F2 ∪ … ∪ Fn.
A decomposition having the property F'+ = F+ is a dependency-preserving decomposition.
Algorithm to test dependency preservation
compute F+;
for each schema Ri in D do
begin
Fi : = the restriction of F+ to Ri;
end
F' := ∅
for each restriction Fi do
begin
F' := F' ∪ Fi
end
compute F’+;
if (F’+= F+) then return (true)
else return (false);
The input to the algorithm is a set of decomposed relational schemas D = {R1, R2, R3…, Rn} and a
set F of functional dependencies.
This algorithm is expensive since it requires the computation of F+
The second, cheaper method to test dependency preservation is as follows.
The test is applied to each α → β in F:
result = α
while (changes to result) do
for each Ri in the decomposition
t = (result ∩ Ri)+ ∩ Ri
result = result ∪ t
If result contains all attributes in β, then the functional dependency α → β is preserved (see the
sketch below).
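This closure-based test is easy to implement. A sketch, with FDs as (left, right) frozenset pairs and the decomposition as a list of attribute sets; the data is the R = (A, B, C) example above, where B → C is lost.

def closure(attrs, fds):
    result, changed = set(attrs), True
    while changed:
        changed = False
        for left, right in fds:
            if left <= result and not right <= result:
                result |= right
                changed = True
    return result

def preserved(alpha, beta, decomposition, fds):
    result = set(alpha)
    changed = True
    while changed:
        old = set(result)
        for Ri in decomposition:
            result |= closure(result & Ri, fds) & Ri   # t = (result ∩ Ri)+ ∩ Ri
        changed = result != old
    return frozenset(beta) <= result

F = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
D = [frozenset("AB"), frozenset("AC")]                 # R1 = (A, B), R2 = (A, C)
print(preserved("A", "B", D, F))   # True: A -> B is preserved
print(preserved("B", "C", D, F))   # False: B -> C is lost, so D is not dependency preserving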
MULTIVALUED DEPENDENCIES AND FOURTH NORMAL FORM (4NF)
A multivalued dependency X →→ Y holds on R if, for each value of X, the set of associated Y
values is independent of the remaining attributes. In the EMP relation below, Ename →→ Pname
and Ename →→ Dname hold.
EMP
Ename Pname Dname
Smith X John
Smith Y Anna
Smith X Anna
Smith Y John
EMP_PROJECTS (Ename, Pname)          EMP_DEPENDENTS (Ename, Dname)
If the relation has nontrivial MVDs, then insert, delete and update operations on single tuple may
cause additional tuples to be modified.
To overcome these anomalies the relation is decomposed into 4NF.
Procedure for 4NF:
Input: A universal relation R and a set of functional and multivalued dependencies F
Set D :={ R}
While there is a relation schema Q in D that is not in 4NF, do
{
Choose a relation schema Q in D that is not in 4NF;
Find a nontrivial MVD X →→ Y in Q that violates 4NF;
Replace Q in D by two relation schemas (Q − Y) and (X ∪ Y);
};
The join dependency constraint states that every legal state r of R should have a non-additive
(lossless) join decomposition into R1, R2, …, Rn; i.e., for every such relation r we have
⋈(Π_R1(r), Π_R2(r), …, Π_Rn(r)) = r
A join dependency JD(R1, R2) implies an MVD (R1 ∩ R2) →→ (R1 − R2).
FIFTH NORMAL FORM (5NF)
A relational schema R is in fifth normal form, or Project-Join Normal Form (PJNF), with respect to a
set F of functional, multivalued and join dependencies if, for every nontrivial join dependency
JD(R1, R2, …, Rn) in F+, every Ri is a superkey of R.
SUPPLY (Sname, Part_name, Proj_name)
Smith    Bolt   X
Smith    Nut    Y
Adamsky  Bolt   Y
Walton   Nut    Z
Adamsky  Nail   X
Adamsky  Bolt   X
Smith    Bolt   Y
The SUPPLY relation is decomposed into three relations R1, R2 and R3 that are in 5NF:
R1 (Sname, Part_name)    R2 (Sname, Proj_name)    R3 (Part_name, Proj_name)
Smith    Bolt            Smith    X               Bolt   X
Smith    Nut             Smith    Y               Nut    Y
Adamsky  Bolt            Adamsky  Y               Bolt   Y
Walton   Nut             Walton   Z               Nut    Z
Adamsky  Nail            Adamsky  X               Nail   X
ANOMALIES IN DATABASES
There are three types of anomalies. They are
1. Insert Anomalies
2. Update Anomalies
3. Delete Anomalies
1. Insert Anomalies:
The inability to insert part of the information into a relational schema, due to the unavailability of
the remaining information, is called an insert anomaly.
Example: if there is a guide having no student registered under him, then we cannot insert the
guide's information into the schema Project.
2. Update Anomalies:
If information is stored redundantly, updating it in one tuple but not in all the tuples that repeat it
leaves the relation inconsistent; this is an update anomaly.
3. Delete Anomalies:
If the deletion of some information leads to loss of some other information, then we say there is a
deletion anomaly.
Example: if a guide guides only one student and that student discontinues the course, then the
information about the guide will be lost.
2 Mark Questions
16 Mark Questions
1. Explain in detail functional dependencies.
2. Explain in detail the first, second and third normal forms.
3. Explain in detail Boyce/Codd normal form and fifth normal form.
4. Explain in detail decomposition using functional dependencies.
5. Explain in detail decomposition using multi-valued dependencies.
UNIT IV
TRANSACTIONS
Transaction Concepts - Transaction Recovery – ACID Properties – System Recovery – Media Recovery
– Two Phase Commit - Save Points – SQL Facilities for recovery – Concurrency – Need for Concurrency
– Locking Protocols – Two Phase Locking – Intent Locking – Deadlock- Serializability – Recovery
Isolation Levels – SQL Facilities for Concurrency.
1. TRANSACTION CONCEPTS
A transaction is a logical unit of work. It begins with the execution of a BEGIN TRANSACTION
operation and ends with the execution of a COMMIT or ROLLBACK operation.
BEGIN TRANSACTION;
UPDATE ACC123 (BALANCE := BALANCE - $100);
IF any error occurred THEN GOTO UNDO; END IF;
UPDATE ACC456 (BALANCE := BALANCE + $100);
IF any error occurred THEN GOTO UNDO; END IF;
COMMIT;
GOTO FINISH;
UNDO:
ROLLBACK;
FINISH:
RETURN;
In our example an amount of $100 is transferred from account 123 to 456.
It is not a single atomic operation, it involves two separate updates on the database.
Transaction involves a sequence of database update operation.
The purpose of this transaction is to transform a correct state of the database into another correct state,
without necessarily preserving correctness at all intermediate points.
Transaction management guarantees a correct transaction and maintains the database in a correct
state.
It guarantees that if the transaction executes some updates and then a failure occurs before the
transaction reaches its planned termination, then those updates will be undone.
Thus the transaction either executes in its entirety or is totally cancelled.
The system component that provides this atomicity is called transaction manager or transaction
processing monitor or TP monitor.
ROLLBACK and COMMIT are key to the way it works.
1. COMMIT:
The COMMIT operation signals successful end of transaction.
It tells the transaction manager that a logical unit of work has been successfully completed and
database is in correct state and the updates can be recorded or saved.
2. ROLLBACK:
a. By contrast, the ROLLBACK operation signals unsuccessful end of transaction.
b. It tells the transaction manager that something has gone wrong, the database might be in
incorrect state and all the updates made by the transaction should be undone.
3. IMPLICIT ROLLBACK:
Explicit ROLLBACK cannot be issued in all cases of transaction failures or errors. So the
system issues implicit ROLLBACK for any transaction failure.
If the transaction does not reach the planned termination then we ROLLBACK the transaction
else it is COMMITTED.
4. MESSAGE HANDLING:
A typical transaction will not only update the database, it will also send some kind of message
back to the end user indicating what has happened.
Example: “Transfer done” if the COMMIT is reached, or Error “transfer not done”
5. RECOVERY LOG:
The system maintains a log or journal on disk, in which all details about updates are recorded.
The values before and after an update are called the before and after images.
This log is used to bring the database back to its previous state in case of an undo operation.
The log consists of two portions: an active or online portion, used during normal operation, and
an archive or offline portion.
8. NO NESTED TRANSACTIONS:
An application program can execute a BEGIN TRANSACTION statement only when it has no
transaction currently in progress.
i.e., no transaction has other transactions nested inside itself.
9. CORRECTNESS:
Consistent means "not violating any known integrity constraint."
Consistency and correctness of the system should be maintained.
If T is a transaction that transforms the database from state D1 to state D2, and if D1 is correct,
then D2 is correct as well.
10. MULTIPLE ASSIGNMENT:
A multiple assignment allows several individual assignments (updates) to be performed as a
single operation, with no integrity checking done until the entire statement has completed.
Database positioning means that at the time of execution each program will typically have addressability
to certain tuples in the database; this addressability is lost at a COMMIT point.
Transactions are not only a unit of work but also unit of recovery.
If a transaction successfully commits, then the system updates will be permanently recorded in the
database, even if the system crashes the very next moment.
If the system crashes before the updates are written physically to the database, the system‘s restart
procedure will still record those updates in the database.
The values can be discovered from the relevant records in the log.
The log must be physically written before the COMMIT processing can complete. This is called write-
ahead log rule.
The restart procedure thus recovers any transactions that completed successfully but whose updates
were not physically written prior to the crash.
Implementation issues
1. Database updates are kept in buffers in main memory and not physically written to disk until the
transaction commits. That way, if the transaction terminates unsuccessfully, there will be no need
to undo any disk updates.
2. Database updates are physically written to disk by the time the COMMIT operation completes.
That way, if the system subsequently crashes, there will be no need to redo any disk updates.
If there is not enough buffer space, a transaction may "steal" buffer space from another transaction,
forcing that transaction's updates to disk early; the system may also "force" updates to be written
physically at the time of COMMIT.
Write ahead log rule is elaborated as follows:
1. The log record for a given database update must be physically written to the log before that update
is physically written to the database.
2. All other log records for a given transaction must be physically written to the log before the
COMMIT log record for that transaction is physically written to the log.
3. COMMIT processing for a given transaction must not complete until the COMMIT log record for
that transaction is physically written to the log.
3. ACID PROPERTIES
ACID stands for Atomicity, Consistency (correctness), Isolation and Durability.
* Atomicity: Transactions are atomic.
Consider the following example
Transaction to transfer $50 from account A to account B:
read(A)
A := A – 50
write(A)
read(B)
B := B + 50
write(B)
read(X), which transfers the data item X from the database to a local buffer belonging to the
transaction that executed the read operation.
write(X), which transfers the data item X from the local buffer of the transaction that executed the
write back to the database.
Before the execution of transaction Ti the values of accounts A and B are $1000 and $2000,
respectively.
Suppose the transaction fails due to a power failure, hardware failure or system error; then the
transaction Ti will not execute successfully.
If the failure happens after the write(A) operation but before the write(B) operation, the database
will have the values $950 and $2000: $50 has been destroyed by the failure, leaving the system in
an inconsistent state.
The basic idea of atomicity is: The database system keeps track of the old values of any data on
which a transaction performs a write, if the transaction does not terminate successfully then the
database system restores the old values.
Atomicity is handled by transaction-management component.
* Correctness/ Consistency:
Transactions transform a correct state of the database into another correct state, without necessarily
preserving correctness at all intermediate points.
In our example the transaction is in consistent state if the sum of A and B is unchanged by the
execution of transaction.
*Isolation:
Transactions are isolated from one another.
Even though there are many transactions running concurrently, any given transaction‘s updates are
concealed from all the rest, until that transaction commits.
The database will be temporarily inconsistent while the transaction is in progress: when the amount
has been reduced from A but not yet added to B, the database is inconsistent.
If a second concurrently running transaction reads A and B at this intermediate point and computes
A+B, it will observe an inconsistent value.
If the second transaction performs updates on A and B based on the inconsistent values that it read,
the database will remain inconsistent even after both transactions are completed.
In order to avoid this problem serial execution of transaction is preferred.
The concurrency control component maintains the isolation of transactions.
*Durability:
Once a transaction commits, its updates persist in the database, even if there is a subsequent system
crash.
The computer system failure may lead to loss of data in main memory, but data written to disk are
not lost.
Durability is guaranteed by ensuring the following
o The updates carried out by the transaction should be written to the disk.
o Information stored in the disk should be sufficient to enable the database to reconstruct the
updates when the database system restarts after failure.
o Recovery management component is responsible for ensuring durability.
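Atomicity and durability can be observed directly in any transactional database. A minimal sketch with Python's sqlite3, replaying the $50 transfer above with a simulated failure between write(A) and write(B):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table account(name text primary key, balance integer)")
conn.executemany("insert into account values (?, ?)", [("A", 1000), ("B", 2000)])
conn.commit()

try:
    with conn:   # the with-block is one transaction: commit on success, rollback on error
        conn.execute("update account set balance = balance - 50 where name = 'A'")
        raise RuntimeError("simulated crash before crediting account B")
except RuntimeError:
    pass

# the partial update was rolled back: A is still 1000, B still 2000, A + B unchanged
print(conn.execute("select name, balance from account order by name").fetchall())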
4. SYSTEM RECOVERY
The system must be recovered not only from purely local failures, such as the failure of an individual
transaction, but also from "global" failures.
A local failure affects only the transaction in which the failure has actually occurred.
A global failure affects all of the transactions in progress at the time of the failure.
The failures fall into two broad categories:
1. System failures (e.g., power outage), which affect all transactions currently in progress but do not
physically damage the database. A system failure is sometimes called a soft crash.
2. Media failures (e.g., head crash on the disk), which cause damage to the database or some portion
of it. A media failure is sometimes called a hard crash.
System failure and recovery
During system failures the contents of main memory is lost.
Transactions in progress at the time of the failure will not have completed successfully, so they must
be undone, i.e., rolled back, when the system restarts.
It is also necessary at restart to redo certain transactions that did complete successfully prior to the
crash but did not manage to get their updates transferred from the buffers in main memory to the
physical database.
Whenever some prescribed number of records has been written to the log the system automatically takes
a checkpoint.
The checkpoint record contains a list of all transactions that were in progress at the time the checkpoint
was taken.
To see how a checkpoint works, consider the transactions in progress around the most recent
checkpoint: transactions that committed before the checkpoint need no action at restart;
transactions that committed after the checkpoint must be redone; and transactions still in progress
at the time of the crash must be undone.
5. TWO PHASE COMMIT
Two-phase commit is used when a transaction updates data managed by two or more independent
resource managers; a coordinator drives the protocol in two phases, prepare and commit.
Prepare:
The resource manager should get ready to "go either way" on the transaction.
The participant in the transaction should record all updates performed during the transaction from
temporary storage to permanent storage.
In order to perform either COMMIT or ROLLBACK as necessary.
The resource manager now replies "OK" or "Not OK" to the coordinator, based on whether the write operation succeeded.
Commit:
When the coordinator has received replies from all participants, it takes a decision regarding the
transaction and records it in the physical log.
If all replies were "OK", the decision is "commit"; if any reply was "Not OK", the decision is
"rollback".
The coordinator informs its decision to all the participants.
Each participant must then commit or roll back the transaction locally, as instructed by the
coordinator.
If the system fails at some point during the process, the restart procedure looks for the decision of the
coordinator.
If the decision is found then the two phase commit can start processing from where it has left off.
If the decision is not found then it assumes that the decision is ROLLBACK and the process can
complete appropriately.
If the participants are spread across several systems, as in a distributed system, some participants may
have to wait a long time for the coordinator's decision.
Data communication manager (DC manager) can act as a resource manager in case of a two-phase
commit process.
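The protocol is easy to simulate in a few lines. A toy sketch with in-memory participants (no real logging or networking), just to show the two phases and the all-or-nothing decision:

class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit = name, can_commit
    def prepare(self):
        # phase 1: record updates on permanent storage, then vote OK / Not OK
        return self.can_commit
    def commit(self):
        print(self.name, "commit")
    def rollback(self):
        print(self.name, "rollback")

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]   # phase 1: collect replies
    decision = all(votes)                         # coordinator records the decision in its log
    for p in participants:                        # phase 2: broadcast the decision
        p.commit() if decision else p.rollback()
    return decision

two_phase_commit([Participant("site1"), Participant("site2")])          # both commit
two_phase_commit([Participant("site1"), Participant("site2", False)])   # both roll back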
6. SAVEPOINTS
Transactions cannot be nested within one another.
Transactions cannot be broken down into smaller subtransactions.
Transactions can, however, establish intermediate savepoints while executing.
If there is a roll back operation executed in the transaction, instead of performing roll back all the way to
the beginning we can roll back to the previous savepoint.
Savepoint is not the same as performing a COMMIT, updates made by the transaction are still not
visible to other transaction until the transaction successfully executes a COMMIT.
7. MEDIA RECOVERY
Media recovery is different from transaction and system recovery.
A media failure is a failure such as a disk head crash or a disk controller failure in which some portion
of the database has been physically destroyed.
Recovery from such a failure basically involves reloading or restoring the database from a backup or
dump copy and then using the log.
There is no need to undo transactions that were still in progress at the time of the failure.
The dump portion of the dump/restore utility is used to make backup copies of the database on demand.
Such copies can be kept on tape or other archival storage; it is not necessary that they be on direct-access
media.
After a media failure, the restore portion of the utility is used to recreate the database from a specified
backup copy.
2 Mark Questions
1. What is transaction?
2. What are the two statements regarding transaction?
3. What are the properties of transaction?
4. What is recovery management component?
5. When is a transaction rolled back?
Any changes that the aborted transaction made to the database must be undone. Once the changes caused by
an aborted transaction have been undone, then the transaction has been rolled back.
6. What are the states of transaction?
7. What is a shadow copy scheme?
8. Give the reasons for allowing concurrency?
9. What is average response time?
10. What are the two types of serializability?
11. Define lock?
12. What are the different modes of lock?
13. Define deadlock?
14. Define the phases of two phase locking protocol
15. Define upgrade and downgrade?
16. What is a database graph?
17. What are the two methods for dealing deadlock problem?
18. What is a recovery scheme?
19. What are the two types of errors?
20. What are the storage types?
21. Define blocks?
22. What is meant by Physical blocks?
23. What is meant by buffer blocks?
24. What is meant by disk buffer?
25. What is meant by log-based recovery?
26. What are uncommitted modifications?
16 Mark Questions
UNIT V
IMPLEMENTATION TECHNIQUES
Overview of Physical Storage Media – Magnetic Disks – RAID – Tertiary storage – File
Organization – Organization of Records in Files – Indexing and Hashing –Ordered Indices – B+ tree
Index Files – B tree Index Files – Static Hashing – Dynamic Hashing – Query Processing Overview –
Catalog Information for Cost Estimation – Selection Operation – Sorting – Join Operation – Database
Tuning,Multimedia Database. Case Study:FIRM – a database management system for real time avionics.
1. OVERVIEW OF PHYSICAL STORAGE MEDIA
Storage media can be classified by:
» Accessing Speed
» Cost per unit of data
» Reliability
o data loss on power failure or system crash
o physical failure of the storage device
» Can differentiate storage into:
o volatile storage: loses contents when power is switched off
o non-volatile storage:
Contents persist even when power is switched off.
Includes secondary and tertiary storage, as well as battery-backed-up main memory.
Storage Hierarchy
» The various storage media can be organized in a hierarchy according to their speed and their cost.
» The higher levels are expensive but fast. As we move down the hierarchy, the cost per bit decreases,
whereas the access time increases.
» Storage hierarchy includes 3 main categories.
1. Primary storage: Fastest media but volatile (cache, main memory).
2. Secondary storage: next level in hierarchy, non-volatile, moderately fast access time also called on-line
storage
E.g. flash memory, magnetic disks
3. Tertiary storage: lowest level in hierarchy, non-volatile, slow access time also called off-line storage
» Magnetic disk
o Primary medium for the long-term storage of data (described in detail in Section 2 below).
» Optical storage
o non-volatile, data is read optically from a spinning disk using a laser
o CD-ROM (640 MB) and DVD (4.7 to 17 GB) most popular forms
o Write-one, read-many (WORM) optical disks used for archival storage (CD-R, DVD-R,
DVD+R)
o Multiple write versions also available (CD-RW, DVD-RW, DVD+RW, and DVD-RAM)
o Reads and writes are slower than with magnetic disk
o Juke-box systems, with large numbers of removable disks, a few drives, and a mechanism
for automatic loading/unloading of disks available for storing large volumes of data
» Tape storage
o non-volatile, used primarily for backup (to recover from disk failure), and for archival data
o sequential-access – much slower than disk
o very high capacity (40 to 300 GB tapes available)
o tape can be removed from drive storage costs much cheaper than disk, but drives are
expensive
o Tape jukeboxes available for storing massive amounts of data
hundreds of terabytes (1 terabyte = 10^12 bytes) to even a petabyte (1 petabyte = 10^15
bytes)
2. MAGNETIC-DISK
a. Data is stored on spinning disk, and read/written magnetically
b. Primary medium for the long-term storage of data; typically stores entire database.
c. Data must be moved from disk to main memory for access, and written back for storage
i. Much slower access than main memory (more on this later)
d. direct-access – possible to read data on disk in any order, unlike magnetic tape
e. Capacities range up to roughly 400 GB currently
i. Much larger capacity and cost/byte than main memory/flash memory
ii. Growing constantly and rapidly with technology improvements (factor of 2 to 3 every
2 years)
f. Survives power failures and system crashes
i. disk failure can destroy data, but is rare
Magnetic Hard Disk Mechanism
» Read-write head
a. Positioned very close to the platter surface (almost touching it)
b. Reads or writes magnetically encoded information.
» Surface of platter divided into circular tracks
a. Over 50K-100K tracks per platter on typical hard disks
» Each track is divided into sectors.
a. A sector is the smallest unit of data that can be read or written.
b. Sector size typically 512 bytes
c. Typical sectors per track: 500 (on inner tracks) to 1000 (on outer tracks)
» To read/write a sector
a. disk arm swings to position head on right track
b. platter spins continually; data is read/written as sector passes under head
» Head-disk assemblies
a. multiple disk platters on a single spindle (1 to 5 usually)
b. One head per platter, mounted on a common arm.
» Cylinder i consists of ith track of all the platters
» Earlier generation disks were susceptible to head-crashes
a. Surface of earlier generation disks had metal-oxide coatings which would disintegrate on
head crash and damage all data on disk
b. Current generation disks are less susceptible to such disastrous failures, although individual
sectors may get corrupted
» Disk controller – interfaces between the computer system and the disk drive hardware.
Disk Subsystem
Figure: Disk Subsystem
» Controller functionality (checksum computation, bad-sector remapping) is often carried out by
individual disks, reducing the load on the disk controller.
» Access time – the time it takes from when a read or write request is issued to when data transfer
begins. Consists of:
a. Seek time – time it takes to reposition the arm over the correct track.
b. Rotational latency – time it takes for the sector to be accessed to appear under the head.
» Data-transfer rate – the rate at which data can be retrieved from or stored to the disk.
a. 25 to 100 MB per second max rate, lower for inner tracks
» Mean time to failure (MTTF) – the average time the disk is expected to run continuously without
any failure.
a. Typically 3 to 5 years
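A quick back-of-the-envelope calculation with representative figures (the numbers below are assumptions within the ranges quoted above):

seek_ms = 5.0                 # average seek time
rpm = 10000                   # spindle speed
transfer_mb_s = 50.0          # sustained data-transfer rate
block_kb = 4.0                # block size

rot_latency_ms = 0.5 * (60000.0 / rpm)                      # half a revolution = 3 ms
transfer_ms = block_kb / 1024.0 / transfer_mb_s * 1000.0    # about 0.08 ms for 4 KB
access_ms = seek_ms + rot_latency_ms + transfer_ms
print(f"average access time per 4 KB block: {access_ms:.2f} ms")   # about 8.08 ms
# random block reads are therefore limited to roughly 1000 / 8 = 125 per second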
Optimization of Disk-Block Access
» File organization – optimize block access time by organizing the blocks to correspond to how data
will be accessed
a. E.g. Store related information on the same or nearby cylinders.
» Nonvolatile write buffers speed up disk writes by writing blocks to a non-volatile RAM buffer
immediately
a. Non-volatile RAM: battery backed up RAM or flash memory
i. Even if power fails, the data is safe and will be written to disk when power returns
b. Controller then writes to disk whenever the disk has no other requests or request has been
pending for some time
c. Database operations that require data to be safely stored before continuing can continue
without waiting for data to be written to disk
d. Writes can be reordered to minimize disk arm movement
3. RAID
RAID: Redundant Arrays of Independent Disks – disk organization techniques that manage a large
number of disks, providing a view of a single disk of high capacity and high reliability.
» Redundancy – store extra information that can be used to rebuild information lost in a disk failure
» E.g., Mirroring (or shadowing)
a. Duplicate every disk. Logical disk consists of two physical disks.
b. Every write is carried out on both disks
i. Reads can take place from either disk
c. If one disk in a pair fails, data still available in the other
i. Data loss would occur only if a disk fails, and its mirror disk also fails before the
system is repaired
1. Probability of combined event is very small
a. Except for dependent failure modes such as fire or building collapse or
electrical power surges
» Mean time to data loss depends on mean time to failure, and mean time to repair
» Parallelism is achieved by striping data across multiple disks:
1. Bit-level striping – split the bits of each byte across multiple disks
a. In an array of eight disks, write bit i of each byte to disk i.
b. Each access can read data at eight times the rate of a single disk.
c. But seek/access time worse than for a single disk
i. Bit level striping is not used much any more
2. Block-level striping – with n disks, block i of a file goes to disk (i mod n) + 1
d. Requests for different blocks can run in parallel if the blocks reside on different disks
e. A request for a long sequence of blocks can utilize all disks in parallel
RAID Levels
» Schemes to provide redundancy at lower cost by using disk striping combined with parity bits
a. Different RAID organizations, or RAID levels, have differing cost, performance and
reliability characteristics
» RAID Level 0: Block striping; non-redundant.
a. Used in high-performance applications where data loss is not critical.
» RAID Level 1: Mirrored disks with block striping.
a. Offers best write performance; popular for applications such as storing log files.
» RAID Level 2: Memory-style error-correcting codes (ECC) with bit striping.
» RAID Level 3: Bit-Interleaved Parity.
a. a single parity bit is enough for error correction, not just detection, since we know which
disk has failed
i. When writing data, corresponding parity bits must also be computed and written to a
parity bit disk
ii. To recover data in a damaged disk, compute XOR of bits from other disks (including
parity bit disk)
b. Faster data transfer than with a single disk, but fewer I/Os per second since every disk has to
participate in every I/O.
c. Subsumes Level 2 (provides all its benefits, at lower cost).
» RAID Level 4: Block-Interleaved Parity; uses block-level striping, and keeps a parity block on a
separate disk for corresponding blocks from N other disks.
a. Provides higher I/O rates for independent block reads than Level 3
i. block read goes to a single disk, so blocks stored on different disks can be read in
parallel
b. Provides high transfer rates for reads of multiple blocks than no-striping
c. Before writing a block, parity data must be computed
1. More efficient for writing large amounts of data sequentially
» RAID Level 5: Block-Interleaved Distributed Parity; partitions data and parity among all N + 1
disks, rather than storing data in N disks and parity in 1 disk.
a. E.g., with 5 disks, parity block for nth set of blocks is stored on disk (n mod 5) + 1, with the
data blocks stored on the other 4 disks.
b. Higher I/O rates than Level 4.
i. Block writes occur in parallel if the blocks and their parity blocks are on different
disks.
c. Subsumes Level 4: provides same benefits, but avoids bottleneck of parity disk.
» RAID Level 6: P+Q Redundancy scheme; similar to Level 5, but stores extra redundant information
to guard against multiple disk failures.
a. Better reliability than Level 5 at a higher cost; not used as widely.
» Software RAID: RAID implementations done entirely in software, with no special hardware
support
» Hardware RAID: RAID implementations with special hardware
a. Use non-volatile RAM to record writes that are being executed
b. Beware: power failure during write can result in corrupted disk
i. E.g. failure after writing one block but before writing the second in a mirrored system
ii. Such corrupted data must be detected when power is restored
1. Recovery from corruption is similar to recovery from failed disk
2. NV-RAM helps to efficiently detect potentially corrupted blocks
a. Otherwise all blocks of disk must be read and compared with
mirror/parity block
» Hot swapping: replacement of disk while system is running, without power down
a. Supported by some hardware RAID systems,
b. reduces time to recovery, and improves availability greatly
» Many systems maintain spare disks which are kept online, and used as replacements for failed disks
immediately on detection of failure
a. Reduces time to recovery greatly
» Many hardware RAID systems ensure that a single point of failure will not stop the functioning of
the system by using
a. Redundant power supplies with battery backup
b. Multiple controllers and multiple interconnections to guard against controller/interconnection
failures
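The parity used by Levels 3 to 5 is a plain XOR across the striped disks, so a lost block can be rebuilt from the survivors. A minimal sketch (the block contents are made-up bytes):

from functools import reduce

def xor_blocks(blocks):
    # byte-wise XOR across equal-sized blocks
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]   # blocks on 3 data disks
parity = xor_blocks(data)                                    # written to the parity disk

# disk 1 fails; recover its block by XOR-ing the remaining data blocks with parity
recovered = xor_blocks([data[0], data[2], parity])
print(recovered == data[1])   # True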
4. TERTIARY STORAGE
Optical Disks
a. DVD-5 holds 4.7 GB and DVD-9 holds 8.5 GB (single-sided formats)
b. DVD-10 and DVD-18 are double-sided formats with capacities of 9.4 GB and 17 GB
c. Other characteristics similar to CD-ROM
» Record once versions (CD-R and DVD-R) are becoming popular
a. data can only be written once, and cannot be erased.
b. high capacity and long lifetime; used for archival storage
c. Multi-write versions (CD-RW, DVD-RW and DVD-RAM) also available
Magnetic Tapes
Storage Access
» A database file is partitioned into fixed-length storage units called blocks. Blocks are units of both
storage allocation and data transfer.
» Database system seeks to minimize the number of block transfers between the disk and memory.
We can reduce the number of disk accesses by keeping as many blocks as possible in main
memory.
» Buffer – portion of main memory available to store copies of disk blocks.
» Buffer manager – subsystem responsible for allocating buffer space in main memory.
» Buffer Manager
» Programs call on the buffer manager when they need a block from disk.
a. If the block is already in the buffer, the requesting program is given the address of the block
in main memory
b. If the block is not in the buffer,
i. the buffer manager allocates space in the buffer for the block, replacing (throwing
out) some other block, if required, to make space for the new block.
ii. The block that is thrown out is written back to disk only if it was modified since the
most recent time that it was written to/fetched from the disk.
iii. Once space is allocated in the buffer, the buffer manager reads the block from the
disk to the buffer, and passes the address of the block in main memory to requester.
» Buffer-Replacement Policies
» Most operating systems replace the block least recently used (LRU strategy)
» Idea behind LRU – use past pattern of block references as a predictor of future references
» Queries have well-defined access patterns (such as sequential scans), and a database system can use
the information in a user's query to predict future references
a. LRU can be a bad strategy for certain access patterns involving repeated scans of data
i. e.g. when computing the join of 2 relations r and s by a nested loops
for each tuple tr of r do
for each tuple ts of s do
if the tuples tr and ts match …
b. Mixed strategy with hints on replacement strategy provided by the query optimizer is
preferable
» Pinned block – memory block that is not allowed to be written back to disk.
» Toss-immediate strategy – frees the space occupied by a block as soon as the final tuple of that
block has been processed
» Most recently used (MRU) strategy – system must pin the block currently being processed. After
the final tuple of that block has been processed, the block is unpinned, and it becomes the most
recently used block.
» Buffer manager can use statistical information regarding the probability that a request will reference
a particular relation
a. E.g., the data dictionary is frequently accessed. Heuristic: keep data-dictionary blocks in
main memory buffer
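An LRU buffer pool is a few lines with an ordered dictionary. A sketch, with disk reads faked by a function and pinning/dirty-page write-back reduced to a comment:

from collections import OrderedDict

class BufferManager:
    def __init__(self, capacity, read_from_disk):
        self.capacity = capacity
        self.read_from_disk = read_from_disk
        self.pool = OrderedDict()               # block_id -> contents, LRU first

    def get(self, block_id):
        if block_id in self.pool:               # block already buffered
            self.pool.move_to_end(block_id)     # mark as most recently used
            return self.pool[block_id]
        if len(self.pool) >= self.capacity:
            self.pool.popitem(last=False)       # evict LRU block (write back if modified)
        self.pool[block_id] = self.read_from_disk(block_id)
        return self.pool[block_id]

bm = BufferManager(2, read_from_disk=lambda b: f"<contents of block {b}>")
bm.get(1); bm.get(2); bm.get(1); bm.get(3)      # block 2 is the LRU victim
print(list(bm.pool))                            # [1, 3]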
5. FILE ORGANIZATION
» The database is stored as a collection of files. Each file is a sequence of records. A record is a
sequence of fields.
Types
Free Lists
» Store the address of the first deleted record in the file header.
» Use this first record to store the address of the second deleted record, and so on
» Can think of these stored addresses as pointers since they "point" to the location of a record.
» More space efficient representation: reuse space for normal attributes of free records to store
pointers. (No pointers stored in in-use records.)
Fixed-length representation:
o reserved space
o pointers
» Reserved space – can use fixed-length records of a known maximum length; unused space in shorter
records filled with a null or end-of-record symbol.
o good for queries involving the join of depositor and customer, and for queries involving one single
customer and his accounts
o bad for queries involving only customer
o results in variable size records
Clustering File Structure with Pointer Chains
8. ORDERED INDICES
o Indexing techniques are evaluated on the basis of:
» Access time
» Insertion time
» Deletion time
» Space overhead
» In an ordered index, index entries are stored sorted on the search key value. E.g., author catalog in
library.
» Primary index: in a sequentially ordered file, the index whose search key specifies the sequential
order of the file.
o Also called clustering index
o The search key of a primary index is usually but not necessarily the primary key.
» Secondary index: an index whose search key specifies an order different from the sequential order
of the file. Also called non-clustering index.
Primary index
» Index-sequential file: ordered sequential file with a primary index.
Figure: Sequential file for account records
Dense Index Files
» Dense index: an index record appears for every search-key value in the file.
Figure: Dense index
Sparse Index Files
» Sparse Index: contains index records for only some search-key values.
o Applicable when records are sequentially ordered on search-key
» To locate a record with search-key value K we:
o Find index record with largest search-key value < K
o Search file sequentially starting at the record to which the index record points
» Less space and less maintenance overhead for insertions and deletions.
» Generally slower than dense index for locating records.
» Good tradeoff: sparse index with an index entry for every block in file, corresponding to least
search-key value in the block.
Example of Sparse Index Files
Figure: Sparse index
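Locating a record through a sparse index is a binary search over the index entries followed by a short scan inside the block, as sketched below (one index entry per block; the branch names are made-up data):

import bisect

index = [("Brighton", 0), ("Mianus", 1), ("Redwood", 2)]   # (first key in block, block no)
blocks = {0: ["Brighton", "Downtown"],
          1: ["Mianus", "Perryridge"],
          2: ["Redwood", "Round Hill"]}

def lookup(key):
    keys = [k for k, _ in index]
    pos = bisect.bisect_right(keys, key) - 1    # largest indexed key <= search key
    if pos < 0:
        return None
    return key if key in blocks[index[pos][1]] else None   # scan within the block

print(lookup("Perryridge"))   # found via the 'Mianus' index entry
print(lookup("Zurich"))       # None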
Secondary Indices
» Frequently, one wants to find all the records whose values in a certain field (which is not the
search-key of the primary index) satisfy some condition.
o Example 1: In the account database stored sequentially by account number, we may want to
find all accounts in a particular branch
o Example 2: as above, but where we want to find all accounts with a specified balance or range
of balances
» We can have a secondary index with an index record for each search-key value; index record points
to a bucket that contains pointers to all the actual records with that particular search-key value.
Figure: Secondary index on the balance field of account
9. B+-TREE INDEX FILES
» Disadvantage of indexed-sequential files: performance degrades as file grows, since many overflow
blocks get created. Periodic reorganization of entire file is required.
» Advantage of B+-tree index files: automatically reorganizes itself with small, local, changes, in the
face of insertions and deletions. Reorganization of entire file is not required to maintain
performance.
» Disadvantage of B+-trees: extra insertion and deletion overhead, space overhead.
» Advantages of B+-trees outweigh disadvantages, and they are used extensively.
A B+-tree is a rooted tree satisfying the following properties:
» All paths from root to leaf are of the same length
» Each node that is not a root or a leaf has between ⌈n/2⌉ and n children.
» A leaf node has between ⌈(n–1)/2⌉ and n–1 values
» Special cases:
o If the root is not a leaf, it has at least 2 children.
o If the root is a leaf (that is, there are no other nodes in the tree), it can have between 0 and (n–1)
values.
B+-Tree Node Structure
Figure: Typical node of a B+-tree
Figure: A leaf node for account B+-tree index (n = 3)
Example of a B+-tree
Figure: B+-tree for account file (n = 3)
Figure: B+-tree for account file (n = 5)
» Leaf nodes must have between 2 and 4 values (⌈(n–1)/2⌉ and n–1, with n = 5).
» Non-leaf nodes other than root must have between 3 and 5 children (⌈n/2⌉ and n, with n = 5).
» Root must have at least 2 children.
Observations about B+-trees
» Since the inter-node connections are done by pointers, "logically" close blocks need not be
"physically" close.
» The non-leaf levels of the B+-tree form a hierarchy of sparse indices.
» The B+-tree contains a relatively small number of levels (logarithmic in the size of the main file),
thus searches can be conducted efficiently.
» Insertions and deletions to the main file can be handled efficiently, as the index can be restructured
in logarithmic time.
Queries on B+-Trees
» Find all records with a search-key value of k.
o Start with the root node
Examine the node for the smallest search-key value > k.
If such a value exists, assume it is Ki. Then follow Pi to the child node.
Otherwise k ≥ Km–1, where there are m pointers in the node. Then follow Pm to the
child node.
o If the node reached by following the pointer above is not a leaf node, repeat the above
procedure on the node, and follow the corresponding pointer.
o Eventually reach a leaf node. If for some i, key Ki = k follow pointer Pi to the desired record or
bucket. Else no record with search-key value k exists.
» In processing a query, a path is traversed in the tree from the root to some leaf node.
» If there are K search-key values in the file, the path is no longer than ⌈log⌈n/2⌉(K)⌉.
procedure find(value V)
set C = root node
while C is not a leaf node begin
Let Ki = smallest search-key value, if any, greater than V
if there is no such value then begin
Let m = the number of pointers in the node
set C = node pointed to by Pm
end
else set C = node pointed to by Pi
end
if there is a key value Ki in C such that Ki = V
then pointer Pi directs us to the desired record or bucket
else no record with key value V exists
Deletion on B+-Trees
» Find the record to be deleted, and remove it from the main file and from the bucket (if present)
» Remove (search-key value, pointer) from the leaf node if there is no bucket or if the bucket has
become empty
» If the node has too few entries due to the removal, and the entries in the node and a sibling fit into a
single node, then
o Insert all the search-key values in the two nodes into a single node (the one on the left), and
delete the other node.
o Delete the pair (Ki–1, Pi), where Pi is the pointer to the deleted node, from its parent, recursively
using the above procedure.
» Otherwise, if the node has too few entries due to the removal, and the entries in the node and a
sibling fit into a single node, then
o Redistribute the pointers between the node and a sibling such that both have more than the
minimum number of entries.
o Update the corresponding search-key value in the parent of the node.
» The node deletions may cascade upwards till a node which has ⌈n/2⌉ or more pointers is found. If
the root node has only one pointer after deletion, it is deleted and the sole child becomes the root.
» The removal of the leaf node containing "Downtown" did not result in its parent having too few
pointers. So the cascaded deletions stopped with the deleted leaf node's parent.
» Node with "Perryridge" becomes underfull (actually empty, in this special case) and is merged with its
sibling.
» As a result "Perryridge" node's parent became underfull, and was merged with its sibling (and an
entry was deleted from their parent)
» Root node then had only one child, and was deleted and its child became the new root node
» Parent of leaf containing Perryridge became underfull, and borrowed a pointer from its left sibling
» Search-key value in the parent's parent changes as a result
10. B-TREE INDEX FILES
» Similar to B+-tree, but B-tree allow search-key values to appear only once; eliminates redundant
storage of search keys.
» Search keys in nonleaf nodes appear nowhere else in the B-tree; an additional pointer field for each
search key in a nonleaf node must be included.
» Generalized B-tree leaf node
12. DYNAMIC HASHING (EXTENDABLE HASHING)
» Each bucket j stores a value ij; all the entries that point to the same bucket have the same values on
the first ij bits.
» To locate the bucket containing search-key Kj:
o Compute h(Kj) = X
» Use the first i high order bits of X as a displacement into bucket address table, and follow the pointer to
appropriate bucket
» To insert a record with search-key value Kj
o Follow same procedure as look-up and locate the bucket, say j.
o If there is room in the bucket j inserts record in the bucket.
o Else the bucket must be split and insertion re-attempted
Overflow buckets used instead in some cases
o To split a bucket j when inserting a record with search-key value Kj:
» If i > ij (more than one pointer to bucket j)
o Allocate a new bucket z, and set ij and iz to the old ij + 1.
o Make the second half of the bucket address table entries pointing to j point to z.
o Remove and reinsert each record in bucket j.
o Recompute the bucket for Kj and insert the record in that bucket (further splitting is required if the
bucket is still full).
» If i = ij (only one pointer to bucket j)
o Increment i and double the size of the bucket address table.
o Replace each entry in the table by two entries that point to the same bucket.
o recompute new bucket address table entry for Kj
Now i > ij so use the first case above.
» When inserting a value, if the bucket is full after several splits (that is, i reaches some limit b), create
an overflow bucket instead of splitting the bucket address table further.
» To delete a key value,
o Locate it in its bucket and remove it.
o The bucket itself can be removed if it becomes empty (with appropriate updates to the bucket
address table).
o Coalescing of buckets can be done (a bucket can coalesce only with a "buddy" bucket having the
same value of ij and the same ij − 1 prefix, if it is present).
o Decreasing bucket address table size is also possible
Note: decreasing the bucket address table size is an expensive operation and should be
done only if the number of buckets becomes much smaller than the size of the table.
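A minimal Python sketch of extendable hashing follows, assuming integer keys and a tiny bucket capacity so splits are easy to observe. For simplicity it indexes with the low-order bits of h(K), where the text above uses the high-order bits; overflow buckets and table shrinking are omitted.

BUCKET_SIZE = 2                          # assumed capacity per bucket

class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth   # i_j: bits this bucket agrees on
        self.keys = []

class ExtendableHash:
    def __init__(self):
        self.global_depth = 0            # i: number of bits of h(K) used
        self.table = [Bucket(0)]         # bucket address table

    def _index(self, key):
        # low-order global_depth bits of h(key) (the text uses high-order bits)
        return hash(key) & ((1 << self.global_depth) - 1)

    def insert(self, key):
        b = self.table[self._index(key)]
        if len(b.keys) < BUCKET_SIZE:
            b.keys.append(key)
            return
        if b.local_depth == self.global_depth:
            # only one pointer to b: double the table, duplicating the pointers
            self.table += self.table[:]
            self.global_depth += 1
        # split: allocate z, bump depths, redirect half of b's table entries
        z = Bucket(b.local_depth + 1)
        b.local_depth += 1
        for idx, bucket in enumerate(self.table):
            if bucket is b and (idx >> (b.local_depth - 1)) & 1:
                self.table[idx] = z
        old, b.keys = b.keys, []
        for k in old:                    # remove and reinsert each record
            self.table[self._index(k)].keys.append(k)
        self.insert(key)                 # re-attempt (may split again)

h = ExtendableHash()
for k in [1, 5, 9, 13, 3, 7]:
    h.insert(k)
print(h.global_depth, [b.keys for b in h.table])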
Use of Extendable Hash Structure: Example
» Hash structure after insertion of one Brighton and two Downtown records (figure omitted).
Query Processing Overview
» Each relational algebra operation can be evaluated using one of several different algorithms
o Correspondingly, a relational-algebra expression can be evaluated in many ways.
» Annotated expression specifying detailed evaluation strategy is called an evaluation-plan.
o E.g., can use an index on balance to find accounts with balance < 2500,
or can perform a complete relation scan and discard accounts with balance ≥ 2500.
Selection Operation
» Algorithm A1 (linear search). Scan each file block and test all records to see whether they satisfy the
selection condition.
o Cost estimate = br block transfers + 1 seek
br denotes number of blocks containing records from relation r
o If selection is on a key attribute, can stop on finding record
cost = (br /2) block transfers + 1 seek
o Linear search can be applied regardless of
selection condition or
ordering of records in the file, or
availability of indices
» A2 (binary search). Applicable if selection is an equality comparison on the attribute on which file
is ordered.
o Assume that the blocks of a relation are stored contiguously
o Cost estimate (number of disk blocks to be scanned):
– cost of locating the first tuple by a binary search on the blocks: ⌈log2(br)⌉ * (tT + tS)
– if there are multiple records satisfying the selection, add the transfer cost of the number of
blocks containing records that satisfy the selection condition.
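As a quick numeric illustration of the A1/A2 formulas in Python, the figures below for br, tT, and tS are assumed values, not from the text:

import math

br, tT, tS = 10_000, 0.1, 4.0                      # assumed, in milliseconds

a1_scan   = br * tT + 1 * tS                       # A1: full linear scan
a1_key    = (br / 2) * tT + 1 * tS                 # A1: stop at first match (key attribute)
a2_binary = math.ceil(math.log2(br)) * (tT + tS)   # A2: locate first tuple

print(f"A1 full scan : {a1_scan:.1f} ms")
print(f"A1 key attr  : {a1_key:.1f} ms")
print(f"A2 binary    : {a2_binary:.1f} ms")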
Selections Using Indices
» Index scan – search algorithms that use an index
Selection condition must be on search-key of index.
» A3 (primary index on candidate key, equality). Retrieve a single record that satisfies the corresponding
equality condition
Cost = (hi + 1) * (tT + tS), where hi denotes the height of the index
» A4 (primary index on nonkey, equality) Retrieve multiple records.
Records will be on consecutive blocks
o Let b = number of blocks containing matching records
Cost = hi * (tT + tS) + tS + tT * b
» A8 (conjunctive selection using one index).
o Select a combination of θi and one of algorithms A1 through A7 that results in the least cost for σθi(r).
o Test other conditions on the tuple after fetching it into the memory buffer.
» A9 (conjunctive selection using multiple-key index).
o Use appropriate composite (multiple-key) index if available.
» A10 (conjunctive selection by intersection of identifiers).
o Requires indices with record pointers.
o Use corresponding index for each condition, and take intersection of all the obtained sets of
record pointers.
o Then fetch the records from the file (see the sketch after this list).
o If some conditions do not have appropriate indices, apply the test in memory.
» Disjunctive selection: σθ1 ∨ θ2 ∨ ... ∨ θn(r): take the union of the record-pointer sets obtained from
the individual indices, if all conditions have available indices; otherwise use linear scan.
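A minimal sketch of A10 in Python, assuming each index is a dictionary from attribute value to a set of record identifiers and the file is a dictionary from record id to tuple; all names and data are illustrative:

# hypothetical secondary indices: attribute value -> set of record ids
index_branch  = {"Perryridge": {1, 4, 7}, "Downtown": {2, 5}}
index_balance = {500: {1, 2, 7}, 900: {4}}

def conjunctive_select(conditions, heap_file):
    """conditions: list of (index, value) pairs; returns matching records."""
    rid_sets = [idx.get(val, set()) for idx, val in conditions]
    rids = set.intersection(*rid_sets) if rid_sets else set()
    # fetch surviving records, sorted by rid to favor sequential I/O
    return [heap_file[rid] for rid in sorted(rids)]

heap = {1: ("Perryridge", 500), 2: ("Downtown", 500),
        4: ("Perryridge", 900), 7: ("Perryridge", 500)}
print(conjunctive_select([(index_branch, "Perryridge"),
                          (index_balance, 500)], heap))   # records 1 and 7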
14. SORTING
» We may build an index on the relation, and then use the index to read the relation in sorted order.
May lead to one disk block access for each tuple.
» For relations that fit in memory, techniques like quicksort can be used. For relations that don't fit in
memory, external sort-merge is a good choice.
External Sort-Merge
o Let M denote memory size (in pages).
» Create sorted runs.
Let i be 0 initially.
Repeatedly do the following till the end of the relation:
(a) Read M blocks of relation into memory
(b) Sort the in-memory blocks
(c) Write sorted data to run Ri; increment i.
Let the final value of i be N
» Merge the runs (N-way merge).
We assume (for now) that N < M.
o Use N blocks of memory to buffer input runs, and 1 block to buffer output. Read the first
block of each run into its buffer page
o Repeat:
– Select the first record (in sort order) among all buffer pages.
– Write the record to the output buffer; if the output buffer is full, write it to disk.
– Delete the record from its input buffer page; if the buffer page becomes empty, read the next
block (if any) of that run into the buffer.
until all input buffer pages are empty.
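A compact in-Python sketch of external sort-merge; "blocks" are simulated by slicing a list, M and the block size are assumed toy values, and the N-way merge leans on heapq.merge rather than explicit buffer management:

import heapq

M = 3          # memory size in blocks (assumed)
BLOCK = 2      # tuples per block (assumed)

def external_sort_merge(relation):
    # Phase 1: create sorted runs of M blocks each
    runs, step = [], M * BLOCK
    for i in range(0, len(relation), step):
        runs.append(sorted(relation[i:i + step]))
    # Phase 2: N-way merge (assumes N < M; otherwise merge in multiple passes)
    return list(heapq.merge(*runs))

data = [27, 3, 18, 9, 42, 1, 33, 12, 6]
print(external_sort_merge(data))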
» Cost analysis:
o Total number of merge passes required: ⌈logM−1(br / M)⌉.
o Block transfers for initial run creation as well as in each pass is 2br
– for the final pass, we don't count the write cost
– we ignore the final write cost for all operations, since the output of an operation
may be sent to the parent operation without being written to disk.
Thus the total number of block transfers for external sorting:
br (2⌈logM−1(br / M)⌉ + 1)
» Cost of seeks:
o During run generation: one seek to read each run and one seek to write each run:
2⌈br / M⌉
o During the merge phase:
– Buffer size bb (read/write bb blocks at a time)
– Need 2⌈br / bb⌉ seeks for each merge pass, except the final one, which does not require a write
Total number of seeks:
2⌈br / M⌉ + ⌈br / bb⌉ (2⌈logM−1(br / M)⌉ − 1)
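Plugging assumed figures (br = 10000 blocks, M = 25 pages, bb = 5) into the formulas above gives a quick Python sanity check:

import math

br, M, bb = 10_000, 25, 5   # assumed figures

passes    = math.ceil(math.log(br / M, M - 1))
transfers = br * (2 * passes + 1)
seeks     = 2 * math.ceil(br / M) + math.ceil(br / bb) * (2 * passes - 1)

print(passes, transfers, seeks)   # 2 passes, 50000 transfers, 6800 seeks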
Join Operation: Nested-Loop Join
» In the worst case, if there is enough memory only to hold one block of each relation, the estimated
cost is
nr * bs + br block transfers, plus
nr + br seeks
(where nr is the number of tuples in r, and br, bs the number of blocks of r and s).
» If the smaller relation fits entirely in memory, use that as the inner relation.
o Reduces cost to br + bs block transfers and 2 seeks
» Assuming worst-case memory availability, the cost estimate is
o with depositor as the outer relation:
5000 * 400 + 100 = 2,000,100 block transfers,
5000 + 100 = 5100 seeks
o with customer as the outer relation:
10000 * 100 + 400 = 1,000,400 block transfers and 10,400 seeks
» If smaller relation (depositor) fits entirely in memory, the cost estimate will be 500 block transfers.
Block Nested-Loop Join
» Variant of nested-loop join in which every block of the inner relation is paired with every block of
the outer relation:
for each block Br of r do begin
    for each block Bs of s do begin
        for each tuple tr in Br do begin
            for each tuple ts in Bs do begin
                Check if (tr, ts) satisfy the join condition;
                if they do, add tr • ts to the result.
            end
        end
    end
end
» Worst case estimate: br * bs + br block transfers + 2 * br seeks.
» Best case: br + bs block transfers + 2 seeks.
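A direct Python rendering of the pseudocode above, assuming relations are lists of tuples chunked into fixed-size blocks; the depositor/account data is illustrative:

def blocks(relation, size):
    return [relation[i:i + size] for i in range(0, len(relation), size)]

def block_nested_loop_join(r, s, block_size, match):
    result = []
    for Br in blocks(r, block_size):          # outer relation, block at a time
        for Bs in blocks(s, block_size):      # inner relation read once per Br
            for tr in Br:
                for ts in Bs:
                    if match(tr, ts):
                        result.append(tr + ts)
    return result

depositor = [("Jones", "A-101"), ("Smith", "A-215")]
account   = [("A-101", 500), ("A-215", 700)]
print(block_nested_loop_join(depositor, account, 1,
                             lambda tr, ts: tr[1] == ts[0]))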
Indexed Nested-Loop Join
» Index lookups can replace file scans if
o the join is an equi-join or natural join, and
o an index is available on the inner relation's join attribute.
» For each tuple tr in the outer relation r, use the index to look up tuples in s that satisfy the join
condition with tuple tr.
Hybrid merge-join: applicable if one relation is sorted and the other has a secondary B+-tree index on
the join attribute:
o merge the sorted relation with the leaf entries of the B+-tree, then sort the result on the addresses
of the unsorted relation's tuples, fetch the tuples in physical address order, and merge them in.
Hash-Join
» Applicable for equi-joins and natural joins. A hash function h is used to partition the tuples of both
relations.
» r tuples in ri need only be compared with s tuples in si; they need not be compared with s tuples in
any other partition.
Hash-Join Algorithm
» Partition the relation s using hashing function h. When partitioning a relation, one block of memory
is reserved as the output buffer for each partition.
» Partition r similarly.
» For each i:
170
Load si into memory and build an in-memory hash index on it using the join attribute. This hash
index uses a different hash function than the earlier one h.
Read the tuples in ri from the disk one by one. For each tuple tr locate each matching tuple ts in
si using the in-memory hash index. Output the concatenation of their attributes.
» Relation s is called the build input and r is called the probe input.
» The value n and the hash function h are chosen such that each si fits in memory.
» Recursive partitioning is required if the number of partitions n is greater than the number of pages M
of memory (a sketch of the basic algorithm follows).
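The sketch below renders the algorithm in Python, with a dict standing in for the in-memory hash index on si; the partition count N and the sample relations are assumptions:

N = 4  # number of partitions (assumed small; chosen so each s_i fits in memory)

def hash_join(r, s, r_key, s_key):
    h = lambda v: hash(v) % N
    r_parts = [[] for _ in range(N)]
    s_parts = [[] for _ in range(N)]
    for t in r: r_parts[h(r_key(t))].append(t)     # partition probe input
    for t in s: s_parts[h(s_key(t))].append(t)     # partition build input
    result = []
    for ri, si in zip(r_parts, s_parts):
        build = {}                                 # in-memory index on s_i
        for ts in si:                              # (different "hash function")
            build.setdefault(s_key(ts), []).append(ts)
        for tr in ri:                              # probe with r_i, one by one
            for ts in build.get(r_key(tr), []):
                result.append(tr + ts)             # concatenate attributes
    return result

depositor = [("Jones", "A-101"), ("Smith", "A-215")]
account   = [("A-101", 500), ("A-215", 700)]
print(hash_join(depositor, account, lambda t: t[1], lambda t: t[0]))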
Handling of Overflows
» Partitioning is said to be skewed if some partitions have significantly more tuples than others.
» Hash-table overflow occurs in partition si if si does not fit in memory. Reasons could be:
o many tuples in s with the same value for the join attributes, or
o a bad hash function.
» Overflow resolution can be done in build phase
» Partition si is further partitioned using different hash function.
» Partition ri must be similarly partitioned.
» Overflow avoidance performs partitioning carefully to avoid overflows during build phase
» E.g. partition build relation into many partitions, then combine them
» Both approaches fail with large numbers of duplicates
» Fallback option: use block nested loops join on overflowed partitions
Cost of Hash-Join
» If recursive partitioning is not required, the cost of hash join is
3(br + bs) + 4 * nh block transfers
» If recursive partitioning is required, the total cost estimate is:
2(br + bs)⌈logM−1(bs) − 1⌉ + br + bs block transfers +
2(⌈br / bb⌉ + ⌈bs / bb⌉)⌈logM−1(bs) − 1⌉ seeks
» If the entire build input can be kept in main memory, no partitioning is required;
» the cost estimate goes down to br + bs.
Hybrid Hash–Join
» Useful when memory sizes are relatively large, and the build input is bigger than memory.
» Main feature of hybrid hash join:
Keep the first partition of the build relation in memory.
171
» E.g. With memory size of 25 blocks, depositor can be partitioned into five partitions, each of size 20
blocks.
» Division of memory:
The first partition occupies 20 blocks of memory
1 block is used for input, and 1 block each for buffering the other 4 partitions.
» customer is similarly partitioned into five partitions each of size 80
» the first is used right away for probing, instead of being written out
» Cost: 3(80 + 320) + 20 + 80 = 1300 block transfers for
hybrid hash join, instead of 1500 with plain hash join.
Complex Joins
» Join with a conjunctive condition:
r ⋈θ1 ∧ θ2 ∧ ... ∧ θn s
» Either use nested loops/block nested loops, or
» Compute the result of one of the simpler joins r ⋈θi s;
the final result comprises those tuples in the intermediate result that satisfy the remaining
conditions θ1 ∧ ... ∧ θi−1 ∧ θi+1 ∧ ... ∧ θn
» Join with a disjunctive condition:
r ⋈θ1 ∨ θ2 ∨ ... ∨ θn s
» Either use nested loops/block nested loops, or compute as the union of the records in the
individual joins:
(r ⋈θ1 s) ∪ (r ⋈θ2 s) ∪ ... ∪ (r ⋈θn s)
Other Operations
» Duplicate elimination can be implemented via hashing or sorting.
» On sorting, duplicates come adjacent to each other, and all copies of identical tuples except one
can be deleted.
» Hashing is similar – duplicates will come into the same bucket.
» Projection:
» Perform projection on each tuple followed by duplicate elimination.
» Aggregation can be implemented in a manner similar to duplicate elimination.
» Sorting or hashing can be used to bring tuples in the same group together, and then the
aggregate functions can be applied on each group.
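A minimal Python sketch of hash-based aggregation as just described; grouping account tuples by branch name and summing balances is an assumed example (duplicate elimination is the same loop with a set):

def hash_aggregate(relation, group_key, value, agg=sum):
    groups = {}                              # hash table: group -> values
    for t in relation:                       # one pass brings groups together
        groups.setdefault(group_key(t), []).append(value(t))
    return {g: agg(vals) for g, vals in groups.items()}

account = [("Perryridge", 500), ("Downtown", 600), ("Perryridge", 900)]
print(hash_aggregate(account, lambda t: t[0], lambda t: t[1]))
# {'Perryridge': 1400, 'Downtown': 600}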
» Set operations (∪, ∩ and −): can either use a variant of merge-join after sorting, or a variant of
hash-join.
» E.g., set operations using hashing: partition both relations with the same hash function; for each
partition i, build an in-memory hash index on ri, and then process si as follows.
r ∪ s:
o Add tuples in si to the hash index if they are not already in it.
o At the end of si, add the tuples in the hash index to the result.
r ∩ s:
o Output tuples in si to the result if they are already there in the hash index.
r − s:
o For each tuple in si, if it is there in the hash index, delete it from the index.
o At the end of si, add the remaining tuples in the hash index to the result.
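The three hash-based set operations above, sketched in Python for a single partition pair (ri, si), with Python sets standing in for the hash index:

def hash_union(ri, si):
    index = set(ri)
    for t in si:
        index.add(t)                       # add s tuples not already present
    return index

def hash_intersect(ri, si):
    index = set(ri)
    return {t for t in si if t in index}   # output s tuples found in the index

def hash_difference(ri, si):               # r - s
    index = set(ri)
    for t in si:
        index.discard(t)                   # delete matching tuples from index
    return index                           # remaining tuples belong to r - s

r, s = {("A-101",), ("A-215",)}, {("A-215",), ("A-307",)}
print(hash_union(r, s), hash_intersect(r, s), hash_difference(r, s))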
» Outer join can be computed either as
o a join followed by addition of null-padded non-participating tuples, or
o by modifying the join algorithms.
» Modifying merge join to compute r ⟕ s:
o In r ⟕ s, the non-participating tuples are those in r − ΠR(r ⋈ s).
o During merging, for every tuple tr from r that does not match any tuple in s, output tr padded
with nulls.
o Right outer-join and full outer-join can be computed similarly.
» Modifying hash join to compute r ⟕ s:
o If r is the probe relation, output non-matching r tuples padded with nulls.
o If r is the build relation, when probing keep track of which r tuples matched s tuples; at the end
of si, output the non-matched r tuples padded with nulls.
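A minimal Python sketch of the build-side bookkeeping in the last bullet, computing a left outer join with r as the build relation; s_width (the number of null columns to pad) and the sample data are assumptions:

def left_outer_hash_join(r, s, r_key, s_key, s_width):
    build, matched = {}, set()
    for tr in r:                               # build hash index on r
        build.setdefault(r_key(tr), []).append(tr)
    result = []
    for ts in s:                               # probe with s
        for tr in build.get(s_key(ts), []):
            result.append(tr + ts)
            matched.add(tr)                    # remember which r tuples matched
    for tuples in build.values():              # null-pad non-participating r tuples
        for tr in tuples:
            if tr not in matched:
                result.append(tr + (None,) * s_width)
    return result

depositor = [("Jones", "A-101"), ("Lee", "A-999")]
account   = [("A-101", 500)]
print(left_outer_hash_join(depositor, account,
                           lambda t: t[1], lambda t: t[0], s_width=2))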
2 Mark Questions
1. Define query optimization.
2. What is an index?
3. What are called jukebox systems?
4. What are the types of storage devices?
5. What is called remapping of bad sectors?
6. Define access time.
7. Define seek time.
8. Define average seek time.
9. Define rotational latency time.
10. Define average latency time.
11. What is meant by data-transfer rate?
12. What is meant by mean time to failure?
13. What is a block and a block number?
14. What are called journaling file systems?
15. What is the use of RAID?
16. What is called mirroring?
17. What is called mean time to repair?
18. What is called bit-level striping?
19. What is called block-level striping?
20. What are the two main goals of parallelism?
21. What are the factors to be taken into account when choosing a RAID level?
22. What is meant by software and hardware RAID systems?
23. Define hot swapping.
24. What are the ways in which the variable-length records arise in database systems?
25. What is the use of a slotted-page structure and what is the information present in the header?
26. What are the two types of blocks in the fixed-length representation? Define them.
27. What is known as heap file organization?
28. What is known as sequential file organization?
29. What is hashing file organization?
30. What is known as clustering file organization?
31. What are the types of indices?
32. What are the techniques to be evaluated for both ordered indexing and hashing?
16 Mark Questions