Data Management Using Microsoft SQL Server - CPINTL
No part of this book may be reproduced or copied in any form or by any means – graphic, electronic or
mechanical, including photocopying, recording, taping, or storing in information retrieval system or sent
or transferred without the prior written permission of copyright owner Aptech Limited.
APTECH LIMITED
Contact E-mail: [email protected]
Edition 2 - 2013
Dear Learner,
Aptech Ltd. designs its courses using a sound instructional design model – from conceptualization
to execution, incorporating the following key aspects:
Scanning the user system and needs assessment
Needs assessment is carried out to find the educational and training needs of the learner
Technology trends are regularly scanned and tracked by core teams at Aptech Ltd. TAG*
analyzes these on a monthly basis to understand the emerging technology training needs for
the Industry.
The skill requirements are then mapped with the learner profile (user system) to derive the
Learning objectives for the different roles.
Needs analysis and design of curriculum
The Learning objectives are then analyzed and translated into learning tasks. Each learning
task or activity is analyzed in terms of knowledge, skills and attitudes that are required to
perform that task. Teachers and domain experts do this jointly. These are then grouped in
clusters to form the subjects to be covered by the curriculum.
In addition, the society, the teachers, and the industry expect certain knowledge and skills
that are related to abilities such as learning-to-learn, thinking, adaptability, problem solving,
positive attitude etc. These competencies would cover both cognitive and affective domains.
A precedence diagram for the subjects is drawn where the prerequisites for each
subject are graphically illustrated. The number of levels in this diagram is determined
by the duration of the course in terms of number of semesters etc. Using the precedence
diagram and the time duration for each subject, the curriculum is organized.
The content outlines are developed by including additional topics that are required for the
completion of the domain and for the logical development of the competencies identified.
Evaluation strategy and scheme is developed for the subject. The topics are arranged/organized
in a meaningful sequence.
The detailed instructional material – training aids, learner material, reference material, project
guidelines, and so on – is then developed. Rigorous quality checks are conducted at every stage.
Strategies for delivery of instruction
Careful consideration is given for the integral development of abilities like thinking, problem
solving, learning-to-learn etc. by selecting appropriate instructional strategies (training
methodology), instructional activities and instructional materials.
The area of IT is fast changing and nebulous. Hence considerable flexibility is provided in the
instructional process by specially including creative activities with group interaction between
the students and the trainer. The positive aspects of web-based learning – acquiring information,
organizing information, and acting on the basis of insufficient information – are incorporated
in the instructional process.
Assessment of learning
The learning is assessed through different modes – tests, assignments & projects. The
assessment system is designed to evaluate the level of knowledge & skills as defined by the
learning objectives.
Evaluation of instructional process and instructional materials
Technology heads of Aptech Ltd. meet on a monthly basis to share and evaluate the technology
trends. The group interfaces with the representatives of the TAG thrice a year to review and
validate the technology and academic directions and endeavors of Aptech Ltd.
*TAG – Technology & Academics Group comprises members from Aptech Ltd., professors from
reputed Academic Institutions, Senior Managers from Industry, Technical gurus from Software
Majors, and representatives from regulatory organizations/forums.
Aptech New Products Design Model – Key Aspects:
1. Scanning the user system and needs assessment
2. Needs analysis and design of curriculum
3. Design and development of instructional material
4. Strategies for delivery of instruction
5. Assessment of learning
6. Evaluation of instructional processes and material
“A little learning is a dangerous thing,
but a lot of ignorance is just as bad”
Preface
SQL Server 2012 is the latest client-server based Relational Database Management System (RDBMS) from
Microsoft. It provides an enterprise-level data management platform for an organization. SQL Server includes
numerous features and tools that make it an outstanding database and data analysis platform. It is also targeted for
large-scale Online Transactional Processing (OLTP), data warehousing, and e-commerce applications. One of the
key features of this version of SQL Server is that it is available on the cloud platform.
The book begins with an introduction to RDBMS concepts and moves on to introduce SQL Azure briefly. The book
then covers various SQL Server 2012 topics such as data types, usage of Transact-SQL, and database objects such
as indexes, stored procedures, functions, and so on. The book also describes transactions, programming elements
with Transact-SQL, and finally troubleshooting errors with error handling techniques.
This book is the result of a concentrated effort of the Design Team, which is continuously striving to bring you the
best and the latest in Information Technology. The process of design has been a part of the ISO 9001 certification for
Aptech-IT Division, Education Support Services. As part of Aptech’s quality drive, this team does intensive research
and curriculum enrichment to keep it in line with industry trends.
Design Team
“Nothing is a waste of time if you
use the experience wisely”
Table of Contents
Module
1. RDBMS Concepts
4. SQL Azure
5. Transact-SQL
“Learning is not compulsory
but neither is survival”
Session - 1
RDBMS Concepts
Welcome to the Session, RDBMS Concepts.
This session deals with the concepts related to databases and database
management systems, explores various database models, and introduces
the concept of an RDBMS.
1.1 Introduction
Organizations often maintain large amounts of data, which are generated as a result of day-to-day
operations. A database is an organized form of such data. It may consist of one or more related data items
called records. Think of a database as a data collection to which different questions can be asked. For
example, 'What are the phone numbers and addresses of the five nearest post offices?' or 'Do we have
any books in our library that deal with health food? If so, on which shelves are they located?' or 'Show
me the personnel records and sales figures of five best-performing sales people for the current quarter,
but their address details need not be shown'.
In large multi-user systems, the same file or record may need to be accessed by multiple
users simultaneously. Handling this in a file-based system is difficult.
Security problems
In data-intensive applications, security of data is a major concern. Users should be given
access only to required data and not to the whole database.
For example, in a banking system, payroll personnel need to view only that part of the
database that has information about the various bank employees. They do not need access
to information about customer accounts. Since application programs are added to the system
in an ad-hoc manner, it is difficult to enforce such security constraints. In a file-based system,
this can be handled only by additional programming in each application.
Integrity problems
In any application, there will be certain data integrity rules, which need to be maintained.
These could be in the form of certain conditions/constraints on the elements of the data
records. In the savings bank application, one such integrity rule could be 'Customer ID, which
is the unique identifier for a customer record, should not be empty'. There can be several
such integrity rules. In a file-based system, all these rules need to be explicitly programmed
in the application program.
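A DBMS lets such an integrity rule be declared once, in the schema, instead of being re-programmed in every application. A minimal sketch of the 'Customer ID should not be empty' rule, using SQLite from Python purely for illustration (the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The rule is declared once, in the schema.
conn.execute("""
    CREATE TABLE Customer (
        CustomerID TEXT NOT NULL PRIMARY KEY,  -- 'should not be empty'
        Name       TEXT
    )
""")
conn.execute("INSERT INTO Customer VALUES ('C001', 'Larry Finner')")

# The DBMS itself rejects a record that violates the rule;
# no application code has to check it.
try:
    conn.execute("INSERT INTO Customer VALUES (NULL, 'No ID')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```

The same declaration in SQL Server would use a NOT NULL primary key column; the point is that the constraint lives in the database, not in each application program.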
Though all these are common issues of concern to any data-intensive application, each application
had to handle all these problems on its own. The application programmer had to implement not
only the application's business rules but also the handling of these common issues.
Some of the benefits of using such a centralized database system are as follows:
The amount of redundancy in the stored data can be reduced
In an organization, several departments often store the same data. Maintaining a centralized
database helps the same data to be accessed by many departments. Thus, duplication of data
or 'data redundancy' can be reduced.
No more inconsistencies in data
When data is duplicated across several departments, any modifications to the data have to
be reflected across all departments. Sometimes, this can lead to inconsistency in the data.
As there is a central database, it is possible for one person to take up the task of updating
the data on a regular basis. Consider that Mr. Larry Finner, an employee of an organization, is
promoted from Manager to Senior Manager. In such a case, there is just one record in the
database that needs to be changed. As a result, data inconsistency is reduced.
The stored data can be shared
A central database can be located on a server, which can be shared by several users. In this
way all users can access the common and updated information all the time.
Standards can be set and followed
A central control ensures that a certain standard in the representation of data can be set and
followed. For example, the name of an employee has to be represented as 'Mr. Larry Finner'.
This representation can be broken down into the following components:
A title (Mr.)
First name (Larry)
Last name (Finner)
It is certain that all the names stored in the database will follow the same format if the
standards are set in this manner.
Data Integrity can be maintained
Data integrity refers to the accuracy of data in the database. For example, when an employee
resigns and leaves the organization, consider that the Accounts department has updated its
database and the HR department has not updated its records. The data in the company's
records is hence inaccurate.
Centralized control of the database helps in avoiding these errors. It is certain that if a record
is deleted from one table, its linked record in the other table is also deleted.
Concepts
It is not necessary to use general-purpose DBMS for implementing a computerized database. The users
can write their own set of programs to create and maintain the database, in effect creating their own
special-purpose DBMS software. The database and the software together are called a database system.
The end user accesses the database system through application programs and queries. The DBMS
software enables the user to process the queries and programs placed by the end user. The software
accesses the data from the database.
Once created, the database has to be manipulated, which includes querying the database to retrieve
specific data, updating the database, and finally, generating reports.
These reports are the source of information, that is, processed data. A DBMS is also responsible for
data security and integrity.
The benefits of a typical DBMS are as follows:
Data storage
A DBMS handles the physical storage of data by creating complex data structures; this process
is called data storage management.
Data definition
A DBMS provides functions to define the structure of the data in the application. These include
defining and modifying the record structure, the type and size of fields, and the various
constraints/conditions to be satisfied by the data in each field.
Data manipulation
Once the data structure is defined, data needs to be inserted, modified, or deleted. The functions
that perform these operations are also part of a DBMS. These functions can handle planned and
unplanned data manipulation needs. Planned queries are those that form part of the application.
Unplanned queries are ad-hoc queries that are performed on a need basis.
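The insert, modify, and delete functions described above correspond to SQL's INSERT, UPDATE, and DELETE statements, and an ad-hoc query is simply a SELECT written on demand. A small sketch, using SQLite from Python for illustration (the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (EmpNo INTEGER PRIMARY KEY, Name TEXT, Grade TEXT)")

# Insert new records
conn.execute("INSERT INTO Employee VALUES (142, 'John', 'A')")
conn.execute("INSERT INTO Employee VALUES (168, 'Lisa', 'B')")

# Modify an existing record
conn.execute("UPDATE Employee SET Grade = 'A' WHERE EmpNo = 168")

# Delete a record
conn.execute("DELETE FROM Employee WHERE EmpNo = 142")

# An unplanned (ad-hoc) query, written on a need basis
rows = conn.execute("SELECT Name, Grade FROM Employee").fetchall()
print(rows)  # [('Lisa', 'A')]
```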
Performance
Optimizing the performance of the queries is one of the important functions of a DBMS. Hence,
the DBMS has a set of programs forming the Query Optimizer, which evaluates the different
implementations of a query and chooses the best among them.
This model is very efficient when a database contains a large volume of data. For example, a bank's
customer account system fits the hierarchical model well because each customer's account is
subject to a number of transactions.
The components of the language used with network models are as follows:
1. A Data Definition Language (DDL) that is used to create and remove databases and database objects.
It enables the database administrator to define the schema components.
2. A sub-schema DDL that enables the database administrator to define the database components.
3. A Data Manipulation Language (DML), which is used to insert, retrieve, and modify database
information. All database users use these commands during the routine operation of the
database.
4. Data Control Language (DCL) is used to administer permissions on the databases and database
objects.
The term 'Relation' is derived from the set theory of mathematics. In the Relational Model, unlike the
Hierarchical and Network models, there are no physical links. All data is maintained in the form of tables
consisting of rows and columns. Data in two tables is related through common columns and not physical
links. Operators are provided for operating on rows in tables.
The popular relational DBMSs are Oracle, Sybase, DB2, Microsoft SQL Server, and so on.
This model represents the database as a collection of relations. In this model's terminology, a row is
called a tuple, a column is called an attribute, and the table is called a relation. The list of values
applicable to a particular field is called its domain. It is possible for several attributes to have the
same domain. The number of attributes of a relation is called the degree of the relation, and the
number of tuples determines its cardinality.
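These terms can be read directly off any table; a tiny illustration with a relation modeled as plain Python data (the student rows are invented for the example):

```python
# A relation modeled as a list of attribute names plus a list of tuples.
attributes = ["RollNumber", "Name", "Marks"]
tuples = [
    (2, "John", 87),
    (4, "Lisa", 90),
    (6, "Peter", 65),
]

degree = len(attributes)   # degree: the number of attributes
cardinality = len(tuples)  # cardinality: the number of tuples
print(degree, cardinality)  # 3 3
```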
In order to understand the relational model, consider tables 1.3 and 1.4.
Roll Number   Name    Marks
2             John    87
4             Lisa    90
6             Peter   65
7             Joe     89
Table 1.5: Displaying Student Names and Marks
It was possible to get this information because of two facts: First, there is a column common to both the
tables - Roll Number. Second, based on this column, the records from the two different tables could be
matched and the required information could be obtained.
In a relational model, data is stored in tables. A table in a database has a unique name that identifies its
contents. Each table can be defined as an intersection of rows and columns.
Figure 1.5 shows two tables related to one another through a common key (data value) in a relational
database.
Country Capital
Greece Athens
Italy Rome
USA Washington
China Beijing
Japan Tokyo
Australia Canberra
France Paris
Table 1.6: Capitals
Country Currency
Greece Drachma
Italy Lira
USA Dollar
China Renminbi (Yuan)
Japan Yen
Australia Australian Dollar
France Francs
Table 1.7: Currency
Both the tables have a common column, that is, the Country column. Now, if the user wants to
display the information about the currency used in Rome, first find the name of the country to
which Rome belongs. This information can be retrieved from table 1.6. Next, that country should
be looked up in table 1.7 to find out the currency.
It is possible to get this information because it is possible to establish a relation between the two
tables through a common column called Country.
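This two-step lookup is exactly what a relational join performs in a single statement. A sketch using SQLite from Python for illustration, with a couple of rows drawn from tables 1.6 and 1.7:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Capitals (Country TEXT, Capital TEXT);
    CREATE TABLE Currency (Country TEXT, Currency TEXT);
    INSERT INTO Capitals VALUES ('Japan', 'Tokyo'), ('China', 'Beijing');
    INSERT INTO Currency VALUES ('Japan', 'Yen'), ('China', 'Renminbi (Yuan)');
""")

# 'Which currency is used in Tokyo?' - relate the two tables
# on their common column, Country.
row = conn.execute("""
    SELECT cu.Currency
    FROM Capitals ca JOIN Currency cu ON ca.Country = cu.Country
    WHERE ca.Capital = 'Tokyo'
""").fetchone()
print(row[0])  # Yen
```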
Every table has a set of attributes that are taken together as a 'key' (technically, a 'superkey'),
which uniquely identifies each entity.
For example, a company might have an Employee table with a row for each employee. What attributes
might be interesting for such a table? This will depend on the application and the type of use the data will
be put to, and is determined at database design time.
Consider the scenario of a company maintaining customer and order information for products being sold
and customer-order details for a specific month, such as, August.
The tables 1.8, 1.9, 1.10, and 1.11 are used to illustrate this scenario. These tables depict tuples and
attributes in the form of rows and columns. Various terms related to these tables are given in table
1.12.
Administering these resources is the responsibility of the Database Administrator (DBA). The DBA is
responsible for authorizing access to the database, for coordinating and monitoring its use, and for
acquiring software and hardware resources as needed. The DBA is also accountable for problems such
as breaches of security or poor system response time.
Database Designer
Database Designers are responsible for identifying the data to be stored in the database and for
choosing appropriate structures to represent and store this data. It is the responsibility of database
designers to communicate with all prospective database users, in order to understand their
requirements, and to come up with a design that meets the requirements.
End User
The end user invokes an application to interact with the system, or writes a query for easy retrieval,
modification, or deletion of data.
1.7.1 Entity
An entity is a person, place, thing, object, event, or even a concept, which can be distinctly identified. For
example, the entities in a university are students, faculty members, and courses.
Each entity has certain characteristics known as attributes. For example, the student entity might include
attributes such as student number, name, and grade. Each attribute should be named appropriately.
A grouping of related entities becomes an entity set. Each entity set is given a name. The name of the
entity set reflects the contents. Thus, the attributes of all the students of the university will be stored in
an entity set called Student.
Each table must have a key known as primary key that uniquely identifies each row.
All values in a column must conform to the same data format. For example, if the attribute is
assigned a decimal data format, all values in the column representing that attribute must be in
decimals.
Each column has a specific range of values known as the attribute domain.
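These three characteristics map directly onto SQL column definitions: a PRIMARY KEY for uniqueness, a declared data type for the format, and a CHECK constraint for the attribute domain. A sketch (names and bounds invented), using SQLite from Python for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Student (
        RollNumber INTEGER PRIMARY KEY,   -- uniquely identifies each row
        Name       TEXT NOT NULL,         -- all values share one data format
        Marks      INTEGER CHECK (Marks BETWEEN 0 AND 100)  -- attribute domain
    )
""")
conn.execute("INSERT INTO Student VALUES (2, 'John', 87)")

# A value outside the attribute domain is rejected by the DBMS.
try:
    conn.execute("INSERT INTO Student VALUES (4, 'Lisa', 190)")
    accepted = True
except sqlite3.IntegrityError:
    accepted = False
print(accepted)  # False
```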
DBMS: It does not need to have data in a tabular structure, nor does it enforce tabular
relationships between data items.
RDBMS: A tabular structure is a must, and table relationships are enforced by the system. These
relationships enable the user to apply and manage business rules with minimal coding.

DBMS: Only small amounts of data can be stored and retrieved.
RDBMS: An RDBMS can store and retrieve large amounts of data.

DBMS: A DBMS is less secure than an RDBMS.
RDBMS: An RDBMS is more secure than a DBMS.

DBMS: It is a single-user system.
RDBMS: It is a multi-user system.

DBMS: Most DBMSs do not support client/server architecture.
RDBMS: It supports client/server architecture.

Table 1.13: Difference between DBMS and RDBMS
In an RDBMS, a relation is given more importance. Thus, the tables in an RDBMS are dependent and
the user can establish various integrity constraints on these tables so that the ultimate data used by the
user remains correct. In case of a DBMS, entities are given more importance and there is no relation
established among these entities.
(B) Data Manipulation Language (DML) (D) Data Control Language (DCL)
5. A ________________ describes a container for storing data and the process of storing and retrieving
data from that container.
1.9.1 Answers
1. C
2. D
3. B
4. B
5. C
Summary
A database is a collection of related data stored in the form of a table.
A data model describes a container for storing data and the process of storing and retrieving data
from that container.
A DBMS is a collection of programs that enables the user to store, modify, and extract information
from a database.
A Relational Database Management System (RDBMS) is a suite of software programs for creating,
maintaining, modifying, and manipulating a relational database.
A relational database is divided into logical units called tables. These logical units are interrelated
within the database.
In an RDBMS, a relation is given more importance, whereas, in case of a DBMS, entities are given
more importance and there is no relation established among these entities.
This session talks about Data Modeling, the E-R model, its components,
symbols, diagrams, relationships, Data Normalization, and Relational
Operators.
2.1 Introduction
A data model is a group of conceptual tools that describes data, its relationships, and semantics. It also
consists of the consistency constraints that the data adheres to. The Entity-Relationship, Relational,
Network, and Hierarchical models are examples of data models. The development of every database
begins with the basic step of analyzing its data in order to determine the data model that would best
represent it. Once this step is completed, the data model is applied to the data.
belongs to, owns, works for, saves in, purchased, and so on.
Attributes
Attributes are features that an entity has. Attributes help distinguish every entity from
another. For example, the attributes of a student would be roll_number, name, stream,
semester, and so on.
Relationships between entities of the same entity set are called self-relationships. For example, a
manager and his team member, both belong to the employee entity set. The team member works
for the manager. Thus, the relation, 'works for', exists between two different employee entities of
the same employee entity set.
Relationships that exist between three entities of different entity sets are called ternary relationships.
For example, an employee works in the accounts department at the regional branch. The relation,
'works' exists between all three, the employee, the department, and the location.
One-to-Many
This kind of mapping exists when an entity of one set can be associated with more than one entity
of another entity set.
Consider the relation between a customer and the customer's vehicles. A customer can have more
than one vehicle. Therefore, the mapping is a one to many mapping, that is, one customer - one or
more vehicles.
The mapping cardinality can be seen in figure 2.8.
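In SQL, a one-to-many mapping is typically realized with a foreign key on the 'many' side of the relationship. A sketch of the customer and vehicles example (the schema and data are invented), using SQLite from Python for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
    CREATE TABLE Customer (
        CustomerID INTEGER PRIMARY KEY,
        Name       TEXT
    );
    CREATE TABLE Vehicle (
        VehicleID  INTEGER PRIMARY KEY,
        -- the 'many' side carries a reference back to the 'one' side
        CustomerID INTEGER REFERENCES Customer(CustomerID)
    );
    INSERT INTO Customer VALUES (1, 'Larry Finner');
    INSERT INTO Vehicle  VALUES (10, 1);
    INSERT INTO Vehicle  VALUES (11, 1);  -- one customer, many vehicles
""")
count = conn.execute(
    "SELECT COUNT(*) FROM Vehicle WHERE CustomerID = 1").fetchone()[0]
print(count)  # 2
```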
Weak entity sets
Entity sets that do not have enough attributes to establish a primary key are called weak entity
sets.
The symbols used for various components can be seen in table 2.2.
Weak Entity
Attribute
Relationship
Key Attribute
A multi-valued attribute, which has more than one value for at least one instance of its entity, is
illustrated with a double-lined ellipse. This attribute may have upper and lower bounds specified
for any individual entity value.
The telephone attribute of an individual may have one or more values, that is, an individual can
have one or more telephone numbers. Hence, the telephone attribute is a multi-valued attribute.
The symbol and example of a multi-valued attribute can be seen in figure 2.12.
2.4 Normalization
Initially, all databases are characterized by a large number of columns and records. This approach has
certain drawbacks. Consider the following details of the employees in a department. Table 2.3 consists of
the employee details as well as the details of the project they are working on.
Repetition anomaly
The data such as Project_id, Project_name, Grade, and Salary repeat many times. This
repetition hampers both performance during retrieval of data and the storage capacity. This
repetition of data is called the repetition anomaly.
The repetition is shown in table 2.4 with the help of shaded cells.
Update anomaly
Suppose John was given a hike in Salary or John was demoted. The change in John's Salary
or Grade needs to be reflected in all the projects John works for. This problem in updating all
the occurrences is called the update anomaly.
The Department Employee Details table is called an unnormalized table. These drawbacks lead to
the need for normalization.
Project_id Project_name
113 BLUE STAR
124 MAGNUM
The Emp_no attribute is the primary key for the Employee Details table. Therefore, in first normal
form, the initial Employee Project Details table has been reduced to the Project Details and
Employee Details tables.
Project_id Project_name
113 BLUE STAR
124 MAGNUM
Table 2.10: Project Details
Emp_no Project_id
142 113
142 124
168 113
263 113
109 124
Table 2.12: Employee Project Details
The attributes, Emp_no and Project_id, of the Employee Project Details table combine to
form the primary key. Such primary keys are called composite primary keys.
Grade Salary
A 20,000
B 15,000
C 10,000
Table 2.14: Grade Salary Details Table
Thus, at the end of the three normalization stages, the initial Employee Project Details table has
been reduced to the Project Details, Employee Project Details, Employee Details, and
Grade Salary Details tables as shown in tables 2.15, 2.16, 2.17, and 2.18.
Project_id Project_name
113 BLUE STAR
124 MAGNUM
Emp_no Project_id
142 113
142 124
168 113
263 113
109 124
Grade Salary
A 20,000
B 15,000
C 10,000
Table 2.18: Grade Salary Details
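The four resulting tables can be declared with their keys made explicit, including the composite primary key of the Employee Project Details table. A sketch of the normalized schema (column names taken from the tables above, the rest invented), using SQLite from Python for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ProjectDetails (
        Project_id   INTEGER PRIMARY KEY,
        Project_name TEXT
    );
    CREATE TABLE GradeSalaryDetails (
        Grade  TEXT PRIMARY KEY,
        Salary INTEGER
    );
    CREATE TABLE EmployeeDetails (
        Emp_no INTEGER PRIMARY KEY,
        Grade  TEXT REFERENCES GradeSalaryDetails(Grade)
    );
    -- Composite primary key: Emp_no and Project_id together identify a row.
    CREATE TABLE EmployeeProjectDetails (
        Emp_no     INTEGER REFERENCES EmployeeDetails(Emp_no),
        Project_id INTEGER REFERENCES ProjectDetails(Project_id),
        PRIMARY KEY (Emp_no, Project_id)
    );
    INSERT INTO ProjectDetails VALUES (113, 'BLUE STAR'), (124, 'MAGNUM');
    INSERT INTO GradeSalaryDetails VALUES ('A', 20000), ('B', 15000), ('C', 10000);
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```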
2.4.4 Denormalization
By normalizing a database, redundancy is reduced. This, in turn, reduces the storage requirements for
the database and ensures data integrity. However, it has some drawbacks. They are as follows:
Complex join queries may have to be written often to combine the data in multiple tables.
Joins may practically involve more than three tables depending on the need for information.
If such joins are used very often, the performance of the database will become very poor. The CPU time
required to execute such queries will also be very large. In such cases, storing a few fields redundantly
can be tolerated to increase the performance of the database. The databases that possess such minor
redundancies in order to increase performance are called denormalized databases and the process of
doing so is called denormalization.
The SELECT operator is used to extract data that satisfies a given condition. The lowercase Greek
letter sigma, 'σ', is used to denote selection. A select operation, on the Branch Reserve Details
table, to display the details of the branches in London would result in table 2.20.
A selection on the Branch Reserve Details table to display branches with reserve greater than
20 billion Euros would result in table 2.21.
Washington BS-05 30
Table 2.21: Details of Branches with Reserves Greater Than 20 Billion Euros
PROJECT
The PROJECT operator is used to project certain details of a relational table. The PROJECT operator
only displays the required details, leaving out certain columns. The PROJECT operator is denoted
by the Greek letter pi, 'π'. Assume that only the Branch_id and Reserve amounts need to be
displayed.
A project operation to do the same, on the Branch Reserve Details table, would result in table
2.22.
The product operation combines each record from the first table with all the records in the second
table, generating all possible combinations of the table records.
UNION
Suppose an official of the bank with the data given in tables 2.19 and 2.23 wanted to know which
branches had reserves below 20 billion Euros or loans. The resultant table would consist of branches
with either reserves below 20 billion Euros or loans or both.
This is similar to the union of two sets of data: first, the set of branches with reserves less than 20 billion
Euros, and second, the set of branches with loans. Branches with both reserves below 20 billion Euros and
loans would be displayed only once. The UNION operator does just that: it collects the data from
the different tables and presents a unified version of the complete data. The union operation is
represented by the symbol, 'U'. The union of the Branch Reserve Details and Branch Loan
Details tables would generate table 2.25.
Branch Branch_id
London BS-01
London BS-02
Paris BS-03
Table 2.25: Unified Representation of Branches with Low Reserves or Loans
INTERSECT
Suppose the same official, after seeing this data, wanted to know which of these branches had
both low reserves and loans. The answer would be obtained using the INTERSECT operation. The
INTERSECT operator generates data that holds true in all the tables it is applied on. It is based on
the intersection set theory and is represented by the '∩' symbol. The result of the intersection of
the Branch Reserve Details and Branch Loan Details tables would be a list of branches that
have both reserves below 20 billion Euros and loans in their account. The resultant table generated
is table 2.26.
Branch Branch_id
London BS-01
London BS-02
Table 2.26: Branches with Low Reserves and Loans
DIFFERENCE
If the same official now wanted the list of branches that had low reserves but no loans, then the
official would have to use the difference operation. The DIFFERENCE operator, symbolized as '-',
generates data from different tables too, but it generates data that holds true in one table and not
the other. Thus, the branch would have to have low reserves and no loans to be displayed.
Branch Branch_id
Paris BS-03
Table 2.27: Branches with Low Reserves but No Loans
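SQL exposes all three of these set operators directly: UNION, INTERSECT, and EXCEPT (the difference operator). A sketch of the low-reserve and loan examples above (the rows are invented to mirror tables 2.25 to 2.27), using SQLite from Python for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE LowReserve (Branch TEXT, Branch_id TEXT);  -- reserves below 20 billion
    CREATE TABLE HasLoan    (Branch TEXT, Branch_id TEXT);  -- branches with loans
    INSERT INTO LowReserve VALUES ('London','BS-01'), ('London','BS-02'), ('Paris','BS-03');
    INSERT INTO HasLoan    VALUES ('London','BS-01'), ('London','BS-02');
""")

# UNION: branches in either set; duplicates appear only once.
union = conn.execute(
    "SELECT * FROM LowReserve UNION SELECT * FROM HasLoan ORDER BY Branch_id").fetchall()

# INTERSECT: branches present in both sets.
both = conn.execute(
    "SELECT * FROM LowReserve INTERSECT SELECT * FROM HasLoan ORDER BY Branch_id").fetchall()

# EXCEPT (difference): low-reserve branches that have no loans.
only_low = conn.execute(
    "SELECT * FROM LowReserve EXCEPT SELECT * FROM HasLoan ORDER BY Branch_id").fetchall()

print(len(union))  # 3
print(both)        # [('London', 'BS-01'), ('London', 'BS-02')]
print(only_low)    # [('Paris', 'BS-03')]
```

T-SQL in SQL Server supports the same UNION, INTERSECT, and EXCEPT keywords.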
JOIN
The JOIN operation is an enhancement to the product operation. It allows a selection to be
performed on the product of tables. For example, if the reserve values and loan amounts of branches
with low reserves and loans were needed, the product of the Branch Reserve Details and
Branch Loan Details would be required. Once the product of tables 2.19 and 2.23 would be
generated, only those branches would be listed which have both reserves below 20 billion Euros
and loans. Table 2.28 is generated as a result of the JOIN operation.
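The idea that a join is a product followed by a selection can be seen directly in SQL, where the two steps collapse into a single JOIN ... ON clause. A sketch with invented reserve and loan figures, using SQLite from Python for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Reserve (Branch_id TEXT, Reserve INTEGER);
    CREATE TABLE Loan    (Branch_id TEXT, Loan INTEGER);
    INSERT INTO Reserve VALUES ('BS-01', 15), ('BS-03', 18);
    INSERT INTO Loan    VALUES ('BS-01', 7);
""")

# Product: every Reserve row paired with every Loan row (2 x 1 = 2 rows here).
product = conn.execute("SELECT * FROM Reserve CROSS JOIN Loan").fetchall()

# Join: the product restricted to rows whose Branch_ids match.
joined = conn.execute("""
    SELECT r.Branch_id, r.Reserve, l.Loan
    FROM Reserve r JOIN Loan l ON r.Branch_id = l.Branch_id
""").fetchall()

print(len(product))  # 2
print(joined)        # [('BS-01', 15, 7)]
```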
(A) a, b, c (C) a, c, e
(B) a, d, c (D) a, b, c, e
2.6.1 Answers
1. A
2. B
3. C
4. D
5. D
Summary
Data modeling is the process of applying an appropriate data model to the data at hand.
E-R model views the real-world as a set of basic objects and relationships among them.
Entity, attributes, entity set, relationships, and relationship sets form the five basic components of
E-R model.
Mapping cardinalities express the number of entities that an entity is associated with.
The process of removing redundant data from the tables of a relational database is called
normalization.
Relational Algebra consists of a collection of operators that help retrieve data from the relational
databases.
SELECT, PRODUCT, UNION, and DIVIDE are some of the relational algebra operators.
This session explains the basic architecture of SQL Server 2012 and lists the
versions and editions of SQL Server. It also explains the role and structure of
SQL Server along with the new features added in SQL Server 2012. Finally,
the session explains the process to connect to SQL Server instances, create
and organize script files, and execute Transact-SQL queries.
3.1 Introduction
SQL Server is an RDBMS developed by Microsoft. It provides an enterprise-level data management
platform for an organization. SQL Server includes numerous features and tools that make it an outstanding
database and data analysis platform. It is also targeted for large-scale Online Transactional Processing
(OLTP), data warehousing, and e-commerce applications.
SQL Server 2012 is the new version of SQL Server and was launched by Microsoft on March 6, 2012. One
of the major features of this version of SQL Server is that it is available on the cloud platform. Using SQL
Server 2012 not only helps an organization to store and manage huge amounts of information, but also
helps to protect and utilize this data at different locations as required.
Tools
There are a number of tools that are provided in SQL Server 2012 for development and query management
of a database. The SQL Server Installation Center must be used to install SQL Server program features and
tools. Features can also be modified or removed using the SQL Server Installation Center. Table 3.1 lists
the different tools available in SQL Server 2012.
SQL Server Management Studio (SSMS) - One of the most important tools available in SQL Server 2012. SSMS is an application provided with SQL Server 2012 that helps to create databases, database objects, query data, and manage the overall working of SQL Server.
SQLCMD - A command-line tool that can be used in place of SSMS. It performs functions similar to SSMS, but in command format only.
SQL Server Installation Center - A tool that can also be used to add, remove, and modify SQL Server programs.
SQL Server Configuration Manager - Used by database administrators to manage the features of the SQL Server software installed on client machines. This tool is not available to all users. It can be used to configure the services, server protocols, client protocols, client aliases, and so on.
SQL Server Profiler - Used to monitor an instance of the Database Engine or Analysis Services.
SQL Server Data Tools (SSDT) - An Integrated Development Environment (IDE) used for Business Intelligence components. It helps to design the database using Visual Studio.
Connectivity Tools - Include DB-Library, Open Database Connectivity (ODBC), Object Linking and Embedding Database (OLE DB), and so on. These tools are used to communicate between the clients, servers, and network libraries.
Table 3.1: Different Tools in SQL Server 2012
Services
There are various services that are executed on a computer running SQL Server. These services run along
with the other Windows services and can be viewed in the task manager. Some of the SQL Server 2012
services are as follows:
SQL Server Database Engine - Database Engine is a core service that is used for storing,
processing, and securing data. It is also used for replication, full-text search, and Data Quality
Services (DQS). It contains tools for managing relational and eXtensible Markup Language (XML)
data.
SQL Server Analysis Services - Analysis Services contains tools that help to create and manage
Online Analytical Processing (OLAP) solutions for personal, team, and corporate business
intelligence purposes. Analysis Services is also used in data mining applications and helps to
collaborate with PowerPivot, Excel, and even the SharePoint Server environment.
SQL Server Reporting Services - Reporting Services help to create, manage, publish, and deploy
reports. These reports can be in tabular, matrix, graphical, or free-form format. Report
applications can also be created using Reporting Services.
SQL Server Integration Services - Integration Services are used for moving, copying, and
transforming data using different graphical tools and programmable objects. The DQS component
is also included in Integration Services. Integration services help to build high-performance data
integration solutions.
SQL Server Master Data Services - Master Data Services (MDS) is used for master data
management. MDS is used for analyzing, managing, and reporting information such as hierarchies,
granular security, transactions, business rules, and so on.
Instances
All the programs and resource allocations are saved in an instance. An instance can include memory,
configuration files, and CPU. Multiple instances can be used for different users in SQL Server 2012. Even
though many instances may be present on a single computer, they do not affect the working of other
instances. This means that all instances work in isolation. Each instance can be customized as per the
requirement, and permissions for each instance can be granted on an individual basis. Resources can
also be allocated to each instance accordingly, for example, the number of databases allowed.
In other words, an instance can be thought of as a large container that holds sub-containers in the form
of databases, security options, server objects, and so on.
Note - Books Online (BOL) is an indirect part of the architecture of SQL Server 2012 that provides
information about different concepts related to SQL Server 2012. This not only includes the new
features and components of SQL Server 2012, but also has information about development aspects
such as syntax creation and query generation. BOL is available online and can also be accessed through
the Help menu in SSMS.
Version Year
SQL Server 1.0 1989
SQL Server 1.1 1991
SQL Server 4.2 1992
SQL Server 6.0 1995
SQL Server 6.5 1996
SQL Server 7.0 1998
SQL Server 2000 2000
SQL Server 2005 2005
SQL Server 2008 2008
SQL Server 2008 R2 2010
SQL Server 2012 2012
Table 3.2: Different Versions of SQL Server
Enterprise – This is the full edition of SQL Server, which contains all the features of SQL Server
2012 and has been released with most versions of SQL Server. The enterprise
edition of SQL Server 2012 supports features such as Power View, xVelocity, Business Intelligence
services, virtualization, and so on.
Standard – The standard edition is the basic edition of SQL Server that supports fundamental
database, reporting, and analytics functionality. However, it does not support advanced features
for mission-critical applications, advanced security, or data warehousing.
Business Intelligence – This is a new edition introduced for the first time in SQL Server 2012. This
edition supports basic database, reporting and analytics functionality, and also business
intelligence services. This edition supports features such as PowerPivot, PowerView, Business
Intelligence Semantic Model, Master Data Services, and so on.
Table 3.3 shows a comparison of the features available for the different editions of SQL Server 2012.
Feature                 Enterprise   Business Intelligence   Standard
Database management     Yes          Yes                     Yes
Reporting               Yes          Yes                     Yes
Analytics               Yes          Yes                     Yes
Table 3.3: Features in Different Editions of SQL Server 2012
Databases – Contains a collection of databases that stores a specific set of structured data.
Security – Used to provide flexible and trustworthy security configuration in SQL Server 2012. This
includes logins, roles, credentials, audits, and so on.
Server Objects – Used to monitor activity in computers running an instance of SQL Server.
Replication – Used to copy and distribute data and database objects from one database to
another, and then, to synchronize between databases to maintain consistency.
AlwaysOn High Availability – Used for high availability and disaster recovery. It is generally used
for applications that require high uptime and failure protection.
Management – Used to manage policies, resources, events, maintenance plans, and so on.
Integration Services Catalogs – Integration Services Catalogs store all the objects of a project
after the project has been deployed.
Statistics properties – Information about the statistics of objects can be viewed in SQL Server
2012 by using the sys.dm_db_stats_properties function.
Failover clustering enhancements – SQL Server 2012 provides multi-subnet failover clusters. It
has also introduced indirect checkpoints and a flexible failover policy for cluster health detection.
This has strengthened the existing disaster recovery solution in SQL Server 2012.
SQL Azure – Microsoft SQL Azure is a cloud-based relational database service that leverages
existing SQL Server technologies. SQL Azure can be used to store and manage data using queries
and other functions that are similar to SQL Server 2012. Reporting services and a backup feature
have been added in SQL Azure. Also, the database size in SQL Azure can now be increased up to
150 gigabytes (GB).
Data-tier Applications – A new version of the Data-tier Application (DAC) has been introduced in
SQL Server 2012. A DAC is a logical database management entity defining SQL Server objects
associated with a user's database. This DAC upgrade alters the existing database to match the
schema to the new version of DAC.
Data Quality Services – To maintain the integrity, conformity, consistency, accuracy, and validity
of the data, the Data Quality Services (DQS) has been integrated with SQL Server 2012. DQS uses
techniques such as monitoring, data cleansing, matching and profiling, and so on to maintain the
data quality and correctness.
Big data support – Microsoft has announced a partnership with Cloudera to use Hadoop as the
platform to support big data. Big data refers to collections of data sets that are too large and
complex to process with traditional applications. The data stored as big data can include
information from social networking Web sites, roadway traffic and signaling data, and so on.
Such large and complex data requires specific platforms such as Hadoop for processing. SQL Server
2012, in collaboration with Hadoop, is now able to support big data.
SQL Server Installation – The SQL Server Installation Center now includes SQL Server Data Tools
(SSDT) and Server Message Block (SMB) file server. SSDT provides an IDE for building business
intelligence solutions. SMB file server is a supported storage option that can be used for system
databases and database engines.
Server mode – The server mode concept has been added for Analysis Services installation. An
Analysis Services instance has three server modes that are Multidimensional, Tabular, and
SharePoint.
Audit features – Customized audit specifications can be defined to write custom events in the
audit log. New filtering features have also been added in SQL Server 2012.
Selective XML Index – This is a new type of XML index that is introduced in SQL Server 2012. This
new index has a faster indexing process, improved scalability, and enhanced query performance.
Master Data Services – This feature provides a central data hub to ensure consistency and
integrity across different applications. In SQL Server 2012, an Excel add-in has been created to
support master data when working with Excel. This add-in makes it easy to transfer and manage
the master data in Excel. This data can be easily edited on Excel and it can also be published back
to the database.
PowerView – A new business intelligence toolkit named PowerView has been introduced in SQL
Server 2012. This toolkit helps to create Business Intelligence reports for an entire organization.
PowerView is an add-in of SQL Server 2012 that works in collaboration with Microsoft SharePoint
Server 2010. This add-in helps to present and visualize SQL Server 2012 data in a compatible view
on the SharePoint platform. PowerView is a business intelligence tool that can be used to make
customer presentations by using models, animations, visualization, and so on.
Full Text Search - In SQL Server 2012, the data stored in extended properties and metadata is also
searched and indexed. All the additional properties of a document are searched along with data
present in the document. These additional properties include Name, Type, Folder path, Size, Date,
and so on.
1. Click Start → All Programs → Microsoft SQL Server 2012 → SQL Server Management Studio.
2. In the Connect to Server dialog box, select the Server type as Database Engine.
3. In the Server name box, type or select the name of the SQL Server instance to connect to.
4. Select either Windows Authentication or SQL Server Authentication, provide the required Login
and Password, and click Connect.
Note - The two authentication methods provided by SQL Server 2012 are SQL Server Authentication
and Windows Authentication. SQL Server Authentication requires a login name and password for each
user account; hence, multiple user accounts can access the information using their respective
usernames and passwords. With Windows Authentication, the operating system credentials of the
currently logged-in user are used to log in to the SQL Server database; these credentials apply only
to that machine and cannot be used from any other computer.
The conceptual layers in which the script files must be organized are shown in figure 3.5.
OR
On the Query menu, click Execute.
OR
Press F5, Alt+X, or Ctrl+E.
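As a quick check, a trivial query such as the following can be typed in the query window and run with any of these options (a minimal sketch; the exact output depends on the installed version):

```sql
-- Returns the version string of the connected SQL Server instance
SELECT @@VERSION;
```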
2. The first version of SQL Server was released in the year ______________.
3. Which edition of SQL Server 2012 supports features such as PowerPivot, PowerView, Business
Intelligence Semantic Model, Master Data Services, and so on?
5. Which of the following statements about the tools in SQL Server 2012 are true?
a. The SQL Server Installation Center tool can be used to add, remove, and modify SQL Server
programs.
b. SQLCMD is an IDE used for Business Intelligence Components. It helps to design the database
using Visual Studio.
c. SQL Server Profiler is used to monitor an instance of the Database Engine or Analysis
Services.
d. SQL Server Installation Center is an application provided with SQL Server 2012 that helps to
develop databases, query data, and manage the overall working of SQL Server.
3.10.1 Answers
1. B
2. A
3. C
4. C
5. A
Summary
The basic architecture of SQL Server 2012 includes tools, services, and instances.
The three editions of SQL Server are Enterprise, Standard, and Business Intelligence.
The structure of SQL Server includes databases, security, server objects, replication, AlwaysOn
High Availability, Management, Integration Services Catalogs, and so on.
SSMS is used to connect to SQL Server Instances. SSMS is a tool used for developing, querying,
and managing the databases.
The script files should be stored in .sql format in SQL Server 2012.
The queries typed in Transact-SQL and saved as .sql files can be executed directly in the SSMS
query window.
This session explains SQL Azure and its benefits. It also lists the differences
between SQL Azure and on-premises SQL Server. Finally, the session
explains the process to connect SQL Azure with SSMS.
4.1 Introduction
Cloud computing is a technology trend that involves the delivery of software, platforms, and infrastructure
as services through the Internet or networks. Windows Azure is a key offering in Microsoft's suite of cloud
computing products and services. The database functions of Microsoft's cloud platform are provided by
Windows Azure SQL Database, which is commonly known as SQL Azure.
SQL Azure can be used to store and manage data using queries and other functions that are similar to SQL
Server 2012. The data on SQL Azure does not have the constraint of being location-specific. This means
that the data stored in SQL Azure can be viewed and edited from any location, as the entire data is stored
on cloud storage platform.
The process of SQL Azure operation is explained in the model as shown in figure 4.2.
Account – An SQL Azure account must first be created before adding servers that will help to store
and manage the data. This account is created for billing purposes. The subscription for an account
is recorded and metered and an individual is charged according to the usage. To create an account,
the credentials need to be provided. After the user account is created, the requirements need to
be provided for the SQL Azure database. This includes the number of databases required, database
size, and so on.
Server – The SQL Azure server is the object that helps to interact between the account and the
database. After the account is registered, the databases are configured using the SQL Azure server.
Other settings such as firewall settings and Domain Name System (DNS) assignment are also
configured in the SQL Azure server.
Database – The SQL Azure database stores all the data in a similar manner as any on-premises SQL
Server database would store the data. Though present on the cloud, the SQL Azure database has
all the functionalities of a normal RDBMS such as tables, views, queries, functions, security
settings, and so on.
In addition to these core objects, there is an additional object in SQL Azure. This object is the SQL Azure
Data Sync technology. The SQL Azure Data Sync technology is built on Microsoft Sync Framework and SQL
Azure database.
SQL Azure Data Sync helps to synchronize data on the local SQL Server with the data on SQL Azure as
shown in figure 4.3.
Lower cost – SQL Azure provides several functions similar to on-premises SQL Server at a lower
cost compared to on-premises instances of SQL Server. Also, as SQL Azure is on the cloud
platform, it can be accessed from any location. Hence, there is no additional cost required to
develop a dedicated IT infrastructure and department to manage the databases.
Usage of TDS – Tabular Data Stream (TDS) is used in on-premises SQL Server databases for client
libraries. Hence, most developers are familiar with TDS and its use. The same kind of TDS interface
is used in SQL Azure to build client libraries. Hence, it is easier for developers to work on SQL Azure.
Automatic failover measures – SQL Azure stores multiple copies of data on different physical
locations. Even if there is a hardware failure due to heavy usage or excessive load, SQL Azure helps
to maintain the business operations by providing availability of data through other physical
locations. This is done by using the automatic failover measures that are provided in SQL Azure.
Flexibility in service usage – Even small organizations can use SQL Azure as the pricing model for
SQL Azure is based on the storage capacity that is used by an organization. If the organization
needs more storage, the price can be altered to suit the need. This helps the organizations to be
flexible in the investment depending on the service usage.
Transact-SQL support – As SQL Azure is completely based on the relational database model, it
also supports Transact-SQL operations and queries. This concept is similar to the working of the
on-premises SQL Servers. Hence, administrators do not need any additional training or support to
use SQL Azure.
Tools – On-premises SQL Server provides a number of tools for monitoring and management. Not
all of these tools are supported by SQL Azure, as only a limited set of tools is available in this
version.
Backup – The backup and restore functions are supported in on-premises SQL Server for disaster
recovery. For SQL Azure, as all the data is on the cloud platform, backup and restore are not
required.
USE statement – The USE statement is not supported by SQL Azure. Hence, unlike in on-premises
SQL Server, the user cannot switch between databases in SQL Azure.
Authentication – SQL Azure supports only SQL Server authentication and on-premises SQL Server
supports both SQL Server authentication and Windows Authentication.
Transact-SQL support – Not all Transact-SQL functions are supported by SQL Azure.
Accounts and Logins – In SQL Azure, administrative accounts are created in the Azure
management portal. Hence, there are no separate instance-level user logins.
Firewalls – Firewall settings for allowed ports and IP addresses can be managed on physical
servers for on-premises SQL Server. As an SQL Azure database is present on the cloud, authentication
6. Click Connect. The connection to the Database is successfully established as shown in figure 4.5.
Note - The master database is the default database to which SSMS connects via SQL Azure. To
connect to another database, on the Connect to Server box, click Options to reveal the Connection
Properties tab and enter the name of the desired database in the Connect to database text box. After
a connection to a user-defined database is established, a user cannot switch to another database without
disconnecting and reconnecting to the next database. Users can switch from the master database to
another database only through SSMS because the USE statement is not supported.
(A) Windows Authentication (C) System Administrator Authentication
(B) SQL Server Authentication (D) No Authentication
4. Which of the following helps to synchronize data on the local SQL Server with the data on SQL
Azure?
4.6.1 Answers
1. C
2. A
3. B
4. C
Summary
Microsoft SQL Azure is a cloud-based relational database service that leverages existing SQL Server
technologies.
SQL Azure enables users to perform relational queries, search operations, and synchronize data
with mobile users and remote back offices.
SQL Azure can store and retrieve both structured and unstructured data.
Applications retrieve data from SQL Azure through a protocol known as Tabular Data Stream
(TDS).
The three core objects in the SQL Azure operation model are account, server, and database.
SQL Azure Data Sync helps to synchronize data on the local SQL Server with the data on SQL
Azure.
Users can connect to SQL Azure using SSMS.
Try It Yourself
1. List out some organizational scenarios where using SQL Azure database would be more advantageous
than using on-premises SQL Server database.
Explain Transact-SQL
5.1 Introduction
SQL is the universal language used in the database world. Most modern RDBMS products use some
type of SQL dialect as their primary query language. SQL can be used to create or destroy objects,
such as tables, on the database server and to do things with those objects, such as put data into
them or query for data. Transact-SQL is Microsoft's implementation of the standard SQL. Usually
referred to as T-SQL, this language implements a standardized way to communicate to the database.
The Transact-SQL language is an enhancement to SQL, the American National Standards Institute (ANSI)
standard relational database language. It provides a comprehensive language that supports defining
tables, inserting, deleting, updating, and accessing the data in the table.
5.2 Transact-SQL
Transact-SQL is a powerful language offering features such as data types, temporary objects, and extended
stored procedures. Scrollable cursors, conditional processing, transaction control, and exception and
error-handling are also some of the features which are supported by Transact-SQL.
The Transact-SQL language in SQL Server 2012 provides improved performance, increased functionality,
and enhanced features. Enhancements include scalar functions, paging, sequences, meta-data discovery,
and better error handling support.
Code Snippet 1 shows the Transact-SQL statement, SELECT, which is used to retrieve all records of
employees with 'Design Engineer' as the JobTitle from the Employee table.
Code Snippet 1:
SELECT LoginID
FROM Employee
WHERE JobTitle = 'Design Engineer'
Figure 5.1 shows the result that retrieves all records of employees with 'Design Engineer' as the JobTitle
from the Employee table.
SELECT statement
INSERT statement
UPDATE statement
DELETE statement
GRANT statement
REVOKE statement
DENY statement
Variables
Transact-SQL functions that return one or more data values of a specific data type
Stored procedures that have a return code belonging to the integer data type
Various items in SQL Server 2012 such as columns, variables, and expressions are assigned data types.
SQL Server 2012 supports three kinds of data types: system data types, alias data types, and
user-defined types.
Alias data types
Several tables may need to store the same type of data in a column and ensure that these columns
have the same base data type, length, and nullability. In such cases, an alias data type can be
created that can be used commonly by all these tables.
Alias data types can be created using the CREATE TYPE statement. The syntax for the CREATE
TYPE statement is as follows:
Syntax:
CREATE TYPE [ schema_name. ] type_name FROM base_type [ ( precision [ , scale ] ) ] [ NULL | NOT NULL ]
where,
schema_name: identifies the name of the schema in which the alias data type is being created.
A schema is a collection of objects such as tables, views, and so forth in a database.
type_name: identifies the name of the alias type being created.
base_type: identifies the name of the system-defined data type based on which the alias
data type is being created.
precision and scale: specify the precision and scale for numeric data.
NULL | NOT NULL: specifies whether the data type can hold a null value or not.
Code Snippet 2 shows how to create an alias data type using CREATE TYPE statement.
Code Snippet 2:
In the code, the built-in data type varchar is stored as a new data type named usertype by using
the CREATE TYPE statement.
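Such a statement can be sketched as follows; the length 20 and the NOT NULL constraint are assumed values for illustration:

```sql
-- Create an alias data type named usertype based on the built-in varchar type
-- (the length 20 and NOT NULL are illustrative assumptions)
CREATE TYPE usertype FROM varchar(20) NOT NULL;
```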
User-defined types
These are created using programming languages supported by the .NET Framework.
CONTAINS - Searches for precise or less precise matches to single words and phrases, words
within a certain distance of one another, or weighted matches.
Predicate Example
IN SELECT UserID, FirstName, LastName, Salary FROM Employee WHERE Salary
IN (5000, 20000);
BETWEEN SELECT UserID, FirstName, LastName, Salary FROM Employee WHERE Salary
BETWEEN 5000 AND 20000;
LIKE SELECT UserID, FirstName, LastName, Salary FROM Employee WHERE FirstName
LIKE '%h%';
CONTAINS SELECT UserID, FirstName, LastName, Salary FROM Employee WHERE
CONTAINS(FirstName, 'John');
Table 5.2: Predicate Examples
Operators are used to perform arithmetic, comparison, concatenation, or assignment of values. For
example, data can be tested to verify that the COUNTRY column for the customer data is populated (or
has a NOT NULL value). In queries, anyone who can see the data in a table can perform operations on
that data; appropriate permissions are required before data can be successfully changed. SQL Server
has seven categories of operators. Table 5.3 describes the different operators supported in SQL
Server 2012.
Order Operators
1 () (Parentheses)
2 *, /, % (Multiplication, division, modulo)
3 +, - (Addition, subtraction)
4 =, <, >, >=, <=, <>, !=, !>, !< (Comparison)
5 NOT
6 AND
7 BETWEEN, IN, CONTAINS, LIKE, OR
8 = (Assignment)
Table 5.4: Precedence of Predicates and Operators
1. 2 + 2 * (4 + (5 – 3))
2. 2 + 2 * (4 + 2)
3. 2 + 2 * 6
4. 2 + 12
5. 14
Hence, the code will display 14.
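The same evaluation can be verified directly in Transact-SQL with a one-line sketch:

```sql
-- Parentheses are evaluated first, then *, then +; the result is 14
SELECT 2 + 2 * (4 + (5 - 3)) AS Result;
```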
5.5.2 Functions
A function is a set of Transact-SQL statements that is used to perform some task. Transact-SQL includes a
large number of functions. These functions can be useful when data is calculated or manipulated. In SQL,
functions work on the data, or group of data, to return a required value. They can be used in a SELECT
list, or anywhere in an expression. The four types of functions in SQL Server 2012 are as follows:
Rowset functions - In Transact-SQL, the rowset function is used to return an object that can be
used in place of a table reference. For example, OPENDATASOURCE, OPENQUERY, OPENROWSET,
and OPENXML are rowset functions.
Aggregate functions - Transact-SQL provides aggregate functions to assist with the summarization
of large volumes of data. For example, SUM, MIN, MAX, AVG, COUNT, COUNT_BIG, and so on are
aggregate functions.
Ranking functions - Many tasks, such as creating arrays, generating sequential numbers, finding
ranks, and so on can be implemented in an easier and faster way by using ranking functions. For
example, RANK, DENSE_RANK, NTILE, and ROW_NUMBER are ranking functions.
Scalar functions - In scalar functions, the input is a single value and the output received is also a
single value.
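The following sketch illustrates an aggregate, a ranking, and a scalar function side by side; the Employee table and its Salary column are assumptions for illustration:

```sql
-- Aggregate functions: summarize many rows into one value
SELECT COUNT(*) AS EmployeeCount, MAX(Salary) AS HighestSalary
FROM Employee;

-- Ranking function: number the rows by descending salary
SELECT FirstName, Salary,
       ROW_NUMBER() OVER (ORDER BY Salary DESC) AS SalaryRank
FROM Employee;

-- Scalar function: single input value, single output value
SELECT UPPER('transact-sql') AS Result;
```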
5.5.3 Variables
A variable is an object that can hold a data value. In Transact-SQL, variables can be classified into local
and global variables.
In Transact-SQL, local variables are created and used for temporary storage while SQL statements are
executed. Data can be passed to SQL statements using local variables. The name of a local variable must
be prefixed with '@' sign.
Global variables are in-built variables that are defined and maintained by the system. Global variables in
SQL Server are prefixed with two '@' signs. The value of any of these variables can be retrieved with a
simple SELECT query.
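A short sketch of both kinds of variables follows; the variable name @Counter is illustrative, while @@VERSION is a built-in global variable:

```sql
-- Local variable: declared, assigned, and read within the session
DECLARE @Counter int;
SET @Counter = 10;
SELECT @Counter AS CounterValue;

-- Global variable: maintained by the system, prefixed with @@
SELECT @@VERSION AS ServerVersion;
```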
5.5.4 Expressions
An expression is a combination of identifiers, values, and operators that SQL Server can evaluate in order
to obtain a result. Expressions can be used in several different places when accessing or changing data.
Code Snippet 4 shows an expression that operates on a column to add an integer to the results of the
YEAR function on a datetime column.
Code Snippet 4:
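An expression of this kind, assuming the AdventureWorks2012 Sales.SalesOrderHeader table and its OrderDate datetime column (assumptions for illustration), can be sketched as:

```sql
-- Adds an integer (1) to the result of the YEAR function on a datetime column
SELECT SalesOrderID, OrderDate,
       YEAR(OrderDate) + 1 AS FollowUpYear
FROM Sales.SalesOrderHeader;
```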
Table 5.6 shows some of the commonly used control-of-flow statements in Transact-SQL.
Control-of-Flow Description
Statement
IF. . .ELSE Provides branching control based on a logical test.
WHILE Repeats a statement or a block of statements as long as the
condition is true.
BEGIN. . .END Defines the scope of a block of Transact-SQL statements.
TRY. . . CATCH Defines the structure for exception and error handling.
BEGIN TRANSACTION Marks a block of statements as part of an explicit transaction.
Table 5.6: Control-of-Flow Statements
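A minimal sketch combining several of these statements is shown below; the variable name and loop bound are illustrative:

```sql
-- BEGIN...END scopes the block; IF...ELSE branches; WHILE loops
DECLARE @i int = 1;
WHILE @i <= 3
BEGIN
    IF @i % 2 = 0
        PRINT 'Even: ' + CAST(@i AS varchar(10));
    ELSE
        PRINT 'Odd: ' + CAST(@i AS varchar(10));
    SET @i = @i + 1;
END
```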
5.5.6 Comments
Comments are descriptive text strings, also known as remarks, in program code that will be ignored
by the compiler. Comments can be included inside the source code of a single statement, a batch, or
a stored procedure. Comments explain the purpose of the program, special execution conditions, and
provide revision history information. Microsoft SQL Server supports two types of commenting styles:
- - (double hyphens)
A complete line of code or a part of a code can be marked as a comment, if two hyphens (- -) are
placed at the beginning. The remainder of the line becomes a comment.
Code Snippet 5 displays the use of this style of comment.
Code Snippet 5:
USE AdventureWorks2012
-- HumanResources.Employee table contains the details of an employee.
-- This statement retrieves all the rows of the table
-- HumanResources.Employee.
SELECT * FROM HumanResources.Employee
/* ... */ (forward slash-asterisk character pairs)
To include a multiple-line comment, the open-comment character pair (/*) must begin the comment,
and the close-comment character pair (*/) must end the comment.
Code Snippet 6 displays the use of this style of comment.
Code Snippet 6:
USE AdventureWorks2012
/* HumanResources.Employee table contains the details of an employee.
This statement retrieves all the rows of the table
HumanResources.Employee. */
SELECT * FROM HumanResources.Employee
Code Snippet 7 shows two statements that are submitted together as one batch.
Code Snippet 7:
USE AdventureWorks2012
SELECT * FROM HumanResources.Employee
GO
In Code Snippet 7, the two statements will be grouped into one execution plan, but executed one
statement at a time. The GO keyword signals the end of a batch.
A set is a collection of distinct objects considered as a whole. For example, all the employees under
an Employee table can be considered as one set. The employees are the different objects that form a
part of the set in the Employee table.
Table 5.7 shows the different applications in the set theory and their corresponding application in SQL
Server queries.
Defining subqueries
Element Description
SELECT <select list> Defines the columns to be returned
FROM <table source> Defines the table to be queried
WHERE <search condition> Filters the rows by using predicates
GROUP BY <group by list> Arranges the rows by groups
HAVING <search condition> Filters the groups using predicates
ORDER BY <order by list> Sorts the output
Table 5.8: Elements of SELECT Statement
Code Snippet 8 shows a SELECT statement.
Code Snippet 8:
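A SELECT statement that uses all of the clauses listed in Table 5.8, assuming a hypothetical Employee table with Department and Salary columns, can be sketched as:

```sql
-- Logical processing order: FROM, WHERE, GROUP BY, HAVING, SELECT, ORDER BY
SELECT Department, COUNT(*) AS HeadCount, AVG(Salary) AS AvgSalary
FROM Employee
WHERE Salary > 3000
GROUP BY Department
HAVING COUNT(*) > 1
ORDER BY AvgSalary DESC;
```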
In the example, the order in which SQL Server will execute the SELECT statement is as follows:
1. First, the FROM clause is evaluated to define the source table that will be queried.
2. Next, the WHERE clause is evaluated to filter the rows in the source table. This filtering is defined by
the predicate mentioned in the WHERE clause.
3. After this, the GROUP BY clause is evaluated. This clause arranges the filtered values received from
the WHERE clause.
4. Next, the HAVING clause is evaluated based on the predicate that is provided.
5. Next, the SELECT clause is executed to determine the columns that will appear in the query
results.
6. Finally, the ORDER BY clause is evaluated to sort the rows in the output.
(A) a-4, b-2, c-3, d-1, e-5 (C) a-1, b-4, c-5, d-3, e-2
(B) a-1, b-2, c-4, d-3, e-5 (D) a-5, b-3, c-4, d-2, e-1
5. Which of the following are the two mathematical fundamentals that are used in SQL Server 2012?
(B) 62 (D) 26
5.8.1 Answers
1. A
2. C
3. B
4. D
5. B
6. C
Summary
Transact-SQL is a powerful language which offers features such as data types, temporary objects,
and extended stored procedures.
SQL Server supports three types of Transact-SQL statements, namely, DDL, DML, and DCL.
A data type is an attribute defining the type of data that an object can contain.
The Transact-SQL language elements include predicates, operators, functions, variables,
expressions, control-of-flow statements, errors and transactions, comments, and batch separators.
Sets and Predicate Logic are the two mathematical fundamentals that are used in SQL Server
2012.
Set theory is a mathematical foundation used in the relational database model, where a set is a
collection of distinct objects considered as a whole.
Predicate logic is a mathematical framework consisting of logical tests that give a result.
Try It Yourself
1. Use the Query Editor to execute a query. Ensure that a connection to a new server instance is
established. Then, in the AdventureWorks2012 database, execute a query to select the columns
namely, ProductID, Name, and ProductNumber from Production.Product table, and Product
ID and ModifiedDate from Production.ProductDocument table.
This session describes system and user defined databases. It also lists the
key features of the AdventureWorks2012 database. Finally, the session
describes types of database modification.
6.1 Introduction
A database is a collection of data stored in data files on a disk or some removable medium.
An SQL Server database is made up of a collection of tables that store sets of specific structured data.
A table includes a set of rows (also called records or tuples) and columns (also called attributes).
Each column in the table is intended to store a specific type of information, for example, dates, names,
currency amounts, and numbers.
A user can install multiple instances of SQL Server on a computer. Each instance of SQL Server can include
multiple databases. Within a database, there are various object ownership groups called schemas. Within
each schema, there are database objects such as tables, views, and stored procedures. Some objects
such as certificates and asymmetric keys are contained within the database, but are not contained within
a schema.
SQL Server databases are stored as files in the file system. These files are grouped into file groups. When
people gain access to an instance of SQL Server, they are identified as a login. When people gain access
to a database, they are identified as a database user.
A user who has access to a database can be given permission to access the objects in the database.
Though permissions can be granted to individual users, it is recommended to create database roles, add
the database users to the roles, and then grant access permissions to the roles. Granting permissions to
roles instead of users makes it easier to keep permissions consistent and understandable as the number
of users grows and continually changes.
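This role-based approach can be sketched as follows (the role, user, and schema names here are hypothetical):

```sql
-- Create a role, add an existing database user to it, and grant
-- permissions to the role rather than to the user directly.
CREATE ROLE SalesReaders;
ALTER ROLE SalesReaders ADD MEMBER Mary;          -- Mary is an existing database user
GRANT SELECT ON SCHEMA::Sales TO SalesReaders;    -- all members can now read Sales tables
```

New users then receive the same permissions simply by being added to the role.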
SQL Server 2012 supports three kinds of databases, which are as follows:
System Databases
User-defined Databases
Sample Databases
Table 6.1 shows the system databases that are supported by SQL Server 2012.
Database: Description
master: Records all system-level information of an instance of SQL Server.
msdb: Used by SQL Server Agent for scheduling database alerts and various jobs.
model: Used as the template for all databases to be created on the particular instance of SQL Server 2012.
resource: A read-only database that contains the system objects included with SQL Server 2012.
tempdb: Holds temporary objects and intermediate result sets.
Table 6.1: System Databases
Administration utilities: From SQL Server 2005 onwards, several SQL Server administrative
utilities are integrated into SSMS, the core administrative console for SQL Server installations.
It enables administrators to perform high-level administrative functions, schedule routine maintenance
tasks, and so forth. Figure 6.1 shows the SQL Server 2012 Management Studio window.
SQL Server Management Objects (SQL-SMO) API: Includes complete functionality for
administering SQL Server in applications.
Transact-SQL scripts and stored procedures: These use system stored procedures and Transact-
SQL DDL statements. Figure 6.2 shows a Transact-SQL query window.
System catalog views: Views displaying metadata for describing database objects in an SQL Server
instance.
SQL-SMO: New managed code object model, providing a set of objects used for managing
Microsoft SQL Server.
Catalog functions, methods, attributes, or properties of the data API: Used in ActiveX Data
Objects (ADO), OLE DB, or ODBC applications.
where,
DATABASE_NAME: is the name of the database to be created.
ON: indicates the disk files to be used to store the data sections of the database and data files.
PRIMARY: is the associated <filespec> list defining the primary file.
<filespec>: controls the file properties.
<filegroup>: controls filegroup properties.
LOG ON: indicates the disk files to be used for storing the database log and log files.
COLLATE collation_name: is the default collation for the database. A collation defines the rules
for comparing and sorting character data based on the standard of a particular language and locale.
The collation name can be either a Windows collation name or a SQL collation name.
Code Snippet 1 shows how to create a database with database file and transaction log file with collation
name.
Code Snippet 1:
After executing the code in Code Snippet 1, SQL Server 2012 displays the message 'Command(s) completed
successfully'.
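A CREATE DATABASE statement of this kind can be sketched as follows (the file names, paths, and size values shown are illustrative assumptions):

```sql
-- Create Customer_DB with an explicit data file, log file, and collation
CREATE DATABASE Customer_DB
ON
( NAME = 'Customer_DB_data',
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\Customer_DB.mdf',
  SIZE = 3MB, FILEGROWTH = 1MB )
LOG ON
( NAME = 'Customer_DB_log',
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\Customer_DB_log.ldf',
  SIZE = 1MB, FILEGROWTH = 10% )
COLLATE SQL_Latin1_General_CP1_CI_AS;
```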
Figure 6.3 shows the database Customer_DB listed in the Object Explorer.
where,
database_name: is the original name of the database.
MODIFY NAME = new_database_name: is the new name of the database to which it is to be
renamed.
COLLATE collation_name: is the collation name of the database.
<add_or_modify_files>: is the file to be added, removed, or modified.
<add_or_modify_filegroups>: is the filegroup to be added, modified, or removed from the
database.
<set_database_options>: is the database-level option influencing the characteristics of the
database that can be set for each database. These options are unique to each database and do not
affect other databases.
Code Snippet 2 shows how to rename a database Customer_DB with a new database name, CUST_DB.
Code Snippet 2:
Figure 6.4 shows database Customer_DB is renamed with a new database name, CUST_DB.
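A rename of this kind can be written along the following lines (a sketch):

```sql
-- Rename the database Customer_DB to CUST_DB
ALTER DATABASE Customer_DB
MODIFY NAME = CUST_DB;
```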
where,
login: is the name of an existing login that will become the new owner of the database.
After sp_changedbowner is executed, the new owner is known as the dbo user inside the selected
database. The dbo receives permissions to perform all activities in the database. The owner of the
master, model, or tempdb system databases cannot be changed.
Code Snippet 3, when executed, makes the login 'sa' the owner of the current database, maps 'sa'
to existing aliases assigned to the old database owner, and displays 'Command(s) completed
successfully'.
Code Snippet 3:
USE CUST_DB
EXEC sp_changedbowner 'sa'
Note - Server-wide settings are set using the sp_configure system stored procedure or SQL Management
Studio.
Code Snippet 4, when executed, sets the AUTO_SHRINK option for the CUST_DB database to ON.
When AUTO_SHRINK is set to ON, database files that have free space are shrunk automatically.
Code Snippet 4:
USE CUST_DB;
ALTER DATABASE CUST_DB
SET AUTO_SHRINK ON
The AdventureWorks2012 database consists of around 100 features. Some of the key features are as
follows:
A database engine that includes administration facilities, data access capabilities, Full-Text Search
facility, Common Language Runtime (CLR) integration advantage, SMO, Service Broker, and XML.
Analysis Services
Integration Services
Notification Services
Reporting Services
Replication Facilities
A set of integrated samples built around two multiple-feature-based samples: HRResume and Storefront.
The sample database consists of these parts:
AdventureWorks2012: Sample OLTP database
AdventureWorks2012DW: Sample Data warehouse
AdventureWorks2012AS: Sample Analysis Services database
6.4.2 Filegroups
In SQL Server, data files are used to store a database's data. The data files are further subdivided into
filegroups for the sake of performance. Each filegroup is used to group related files that together store a
database object. Every database has a primary filegroup by default. This filegroup contains the primary
data file. The primary filegroup and data files are created automatically with default property values at
the time of creation of the database. User-defined filegroups can then be created to group data files
together for administrative, data allocation, and placement purposes.
For example, three files named Customer_Data1.ndf, Customer_Data2.ndf, and
Customer_Data3.ndf can be created on three disk drives respectively. These can then be assigned
to the filegroup Customer_fgroup1. A table can then be created specifically on the filegroup
Customer_fgroup1. Queries for data from the table will be spread across the three disk drives,
thereby improving performance.
Table 6.3 shows the filegroups that are supported by SQL Server 2012.
Filegroup: Description
Primary: The filegroup that consists of the primary file. All system tables are placed inside the primary filegroup.
User-defined: Any filegroup that is created by the user at the time of creating or modifying databases.
A file cannot be a member of more than one filegroup at the same time. A maximum of 32,767 filegroups
can be created for each database. Filegroups can contain only data files. Transaction log files cannot
belong to a filegroup.
The following is the syntax to add filegroups while creating a database.
Syntax:
where,
database_name: is the name of the new database.
ON: indicates the disk files to store the data sections of the database, and data files.
PRIMARY and associated <filespec> list: define the primary file. The first file specified in the
<filespec> entry in the primary filegroup becomes the primary file.
LOG ON: indicates the disk files used to store the database log files.
COLLATE collation_name: is the default collation for the database.
Code Snippet 5 shows how to add a filegroup (PRIMARY as default) while creating a database, called
SalesDB.
Code Snippet 5:
CREATE DATABASE SalesDB
ON PRIMARY
-- primary data file; the name, path, and size values here are illustrative
( NAME = 'SalesDB_data', FILENAME = 'C:\Program Files\Microsoft SQL Server\
MSSQL11.MSSQLSERVER\MSSQL\DATA\SalesDB.mdf' , SIZE = 3072KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = 'SalesDB_log', FILENAME = 'C:\Program Files\Microsoft SQL Server\
MSSQL11.MSSQLSERVER\MSSQL\DATA\SalesDB_log.ldf' , SIZE = 2048KB , MAXSIZE =
2048GB , FILEGROWTH = 10%)
COLLATE SQL_Latin1_General_CP1_CI_AS
Figure 6.5 shows the file groups when creating SalesDB database.
Code Snippet 6 shows how to add a filegroup to an existing database, called CUST_DB.
Code Snippet 6:
USE CUST_DB;
ALTER DATABASE CUST_DB
ADD FILEGROUP FG_ReadOnly
After executing the code, SQL Server 2012 displays the message 'Command(s) completed successfully'
and the filegroup FG_ReadOnly is added to the existing database, CUST_DB.
Default Filegroup
Objects are assigned to the default filegroup when they are created in the database. The PRIMARY
filegroup is the default filegroup. The default filegroup can be changed using the ALTER DATABASE
statement. System objects and tables remain within the PRIMARY filegroup, but do not go into the new
default filegroup.
To make the FG_ReadOnly filegroup as default, it should contain at least one file inside it.
Code Snippet 7 shows how to create a new file, add it to the FG_ReadOnly filegroup and make the
FG_ReadOnly filegroup that was created in Code Snippet 6 as the default filegroup.
Code Snippet 7:
USE CUST_DB
ALTER DATABASE CUST_DB
ADD FILE (NAME = Cust_DB1, FILENAME = 'C:\Program Files\Microsoft SQL Server\
MSSQL11.MSSQLSERVER\MSSQL\DATA\Cust_DB1.ndf')
TO FILEGROUP FG_ReadOnly
ALTER DATABASE CUST_DB
MODIFY FILEGROUP FG_ReadOnly DEFAULT
After executing the code in Code Snippet 7, SQL Server 2012 displays the message saying the filegroup
property 'DEFAULT' has been set.
Rolling a restored database, file, filegroup, or page forward to the point of failure
The database can be restored to the point of failure after a hardware loss or disk failure affecting
the database files.
[;]
<add_or_modify_files>::=
{
ADD FILE <filespec> [ ,...n ]
[ TO FILEGROUP { filegroup_name | DEFAULT } ]
| ADD LOG FILE <filespec> [ ,...n ]
| REMOVE FILE logical_file_name
| MODIFY FILE <filespec>
}
Note - By default, the data and transaction logs are put on the same drive and path to accommodate
single-disk systems; this may not be optimal for production environments.
Note - The 'Use full-text indexing' option is always checked and dimmed because, from SQL
Server 2008 onwards, all user-defined databases are full-text enabled.
6. To change the default values of the primary data and transaction log files, in the Database files grid,
click the appropriate cell and enter the new value.
7. To change the collation of the database, select the Options page, and then, select a collation from
the list as shown in figure 6.9.
8. To change the recovery model, select the Options page and then, select a recovery model from the
list as shown in figure 6.10.
10. To add a new filegroup, click the Filegroups page. Click Add and then, enter the values for the
filegroup as shown in figure 6.11.
2. Expand Databases, right-click the database to delete, and then, click Delete, as shown in figure
6.12.
(
NAME = logical_file_name,
FILENAME = 'os_file_name'
) [ ,...n ]
AS SNAPSHOT OF source_database_name
[;]
where,
database_snapshot_name: is the name of the new database snapshot.
ON ( NAME = logical_file_name, FILENAME = 'os_file_name' ) [ ,... n ]: is the
list of files in the source database. For the snapshot to work, all the data files must be specified
individually.
AS SNAPSHOT OF source_database_name: specifies that the database being created is a
database snapshot of the source database specified by source_database_name.
Code Snippet 9 creates a database snapshot on the CUST_DB database.
Code Snippet 9:
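Such a snapshot can be created along the following lines (a sketch; every data file of CUST_DB must be listed, and here a single data file with the logical name Cust_DB1 and an illustrative snapshot path are assumed):

```sql
-- Create a read-only, point-in-time snapshot of CUST_DB
CREATE DATABASE CUST_DB_Snapshot
ON
( NAME = Cust_DB1,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\CUST_DB_Snapshot.ss' )
AS SNAPSHOT OF CUST_DB;
```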
6.5.1 Answers
1. A
2. B
3. B
4. A
5. C
Summary
An SQL Server database is made up of a collection of tables that store sets of specific structured
data.
SQL Server uses system databases to support different parts of the DBMS.
SQL Server data files are used to store a database's data and are further subdivided into
filegroups for the sake of performance.
Objects are assigned to the default filegroup when they are created in the database. The PRIMARY
filegroup is the default filegroup.
Try It Yourself
1. Create a database named UnitedAir using Transact-SQL statements with the following
properties:
Primary filegroup with the files UnitedAir1_dat and UnitedAir2_dat. The size, maximum
size, and file growth should be 5, 10, and 15% respectively.
A filegroup named UnitedAirGroup1 with the files UnitedAirGrp1F1 and
UnitedAirGrp1F2.
This session explores the various data types provided by SQL Server 2012
and describes how to use them. The techniques for creation, modification,
and removal of tables and columns are also discussed.
7.1 Introduction
One of the most important types of database objects in SQL Server 2012 is a table. Tables in SQL Server
2012 contain data in the form of rows and columns. Each column may have data of a specific type and
size.
Alias data types - These are based on the system-supplied data types. One of the typical uses of
alias data types is when more than one table stores the same type of data in a column and has
similar characteristics such as length, nullability, and type. In such cases, an alias data type can be
created that can be used commonly by all these tables.
User-defined types - These are created using programming languages supported by the .NET
Framework, which is a software framework developed by Microsoft.
Table 7.1 shows various data types in SQL Server 2012 along with their categories and description.
where,
schema_name: identifies the name of the schema in which the alias data type is being created.
type_name: identifies the name of the alias type being created.
base_type: identifies the name of the system-defined data type based on which the alias data
type is being created.
precision and scale: specify the precision and scale for numeric data.
NULL|NOT NULL: specifies whether the data type can hold a null value or not.
Code Snippet 1 shows how to create an alias data type named usertype using the CREATE TYPE
statement.
Code Snippet 1:
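A statement like the following creates such an alias type (a sketch; the base type and length chosen here are assumptions):

```sql
-- usertype becomes a reusable alias for varchar(20) that disallows NULLs;
-- any table column declared as usertype inherits these characteristics
CREATE TYPE usertype FROM varchar(20) NOT NULL;
```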
where,
database_name: is the name of the database in which the table is created.
table_name: is the name of the new table. table_name can be a maximum of 128 characters.
column_name: is the name of a column in the table. column_name can be up to 128
characters. A column_name is not specified for columns that are created with a timestamp data
type; the default column name of a timestamp column is timestamp.
data_type: specifies the data type of the column.
Code Snippet 2 demonstrates creation of a table named dbo.Customer_1.
Code Snippet 2:
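The creation of dbo.Customer_1 might look as follows (a sketch; the column list is an assumption for illustration):

```sql
USE [CUST_DB]
-- Create a simple customer table in the dbo schema
CREATE TABLE [dbo].[Customer_1]
(
    [Customer_id] [numeric](12, 0) NOT NULL,   -- customer identifier
    [Customer_name] [varchar](40) NOT NULL     -- customer name
)
GO
```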
The next few sections take a look at various features associated with tables such as column nullability,
default definitions, constraints, and so forth.
where,
ALTER COLUMN: specifies that the particular column is to be changed or modified.
ADD: specifies that one or more column definitions are to be added.
DROP COLUMN <column_name>: specifies that column_name is to be removed from the
table.
Code Snippet 3 demonstrates altering the Customer_id column.
Code Snippet 3:
USE [CUST_DB]
ALTER TABLE [dbo].[Customer_1]
ALTER Column [Customer_id number] [numeric](12, 0) NOT NULL;
USE [CUST_DB]
ALTER TABLE [dbo].[Table_1]
ADD [Contact_number] [numeric](12, 0) NOT NULL;
USE [CUST_DB]
ALTER TABLE [dbo].[Table_1]
DROP COLUMN [Contact_number];
Before attempting to drop columns, however, it is important to ensure that the columns can be dropped.
Under certain conditions, columns cannot be dropped, such as, if they are used in a CHECK, FOREIGN
KEY, UNIQUE, or PRIMARY KEY constraint, associated with a DEFAULT definition, and so forth.
where,
<Table_Name>: is the name of the table to be dropped.
Code Snippet 6 demonstrates how to drop a table.
Code Snippet 6:
USE [CUST_DB]
DROP TABLE [dbo].[Table_1]
INSERT Statement - The INSERT statement adds a new row to a table. The syntax for INSERT
statement is as follows:
Syntax:
where,
<Table_Name>: is the name of the table in which row is to be inserted.
[INTO]: is an optional keyword used between INSERT and the target table.
<Values>: specifies the values for columns of the table.
USE [CUST_DB]
INSERT INTO [dbo].[Table_2] VALUES (101, 'Richard Parker', 'Richy')
GO
The outcome of this will be that one row with the given data is inserted into the table.
UPDATE Statement - The UPDATE statement modifies the data in the table. The syntax for UPDATE
statement is as follows:
Syntax:
UPDATE <Table_Name>
SET <Column_Name = Value>
[WHERE <Search condition>]
where,
<Table_Name>: is the name of the table where records are to be updated.
<Column_Name>: is the name of the column in the table in which record is to be updated.
<Value>: specifies the new value for the modified column.
<Search condition>: specifies the condition to be met for the rows to be updated.
Code Snippet 8 demonstrates the use of the UPDATE statement to modify the value in column
Contact_number.
Code Snippet 8:
USE [CUST_DB]
UPDATE [dbo].[Table_2] SET Contact_number = 5432679 WHERE Contact_name LIKE
'Richy'
GO
DELETE Statement - The DELETE statement removes rows from a table. The syntax for DELETE
statement is as follows:
Syntax:
where,
<Table_Name>: is the name of the table from which the records are to be deleted.
The WHERE clause is used to specify the condition. If the WHERE clause is not included in the
DELETE statement, all the records in the table will be deleted.
Code Snippet 9 demonstrates how to delete a row from the Customer_2 table whose
Contact_number value is 5432679.
Code Snippet 9:
USE [CUST_DB]
DELETE FROM [dbo].[Customer_2] WHERE Contact_number = 5432679
GO
In Code Snippet 10, the CREATE TABLE statement uses the NULL and NOT NULL keywords with column
definitions.
Code Snippet 10:
USE [CUST_DB]
CREATE TABLE StoreDetails ( StoreID int NOT NULL, Name varchar(40) NULL)
GO
The result of the code is that the StoreDetails table is created with StoreID and Name columns
added to the table.
USE [CUST_DB]
CREATE TABLE StoreProduct( ProductID int NOT NULL, Name varchar(40) NOT NULL,
Price money NOT NULL DEFAULT (100))
GO
When a row is inserted using a statement as shown in Code Snippet 12, the value of Price will not be
blank; it will have a value of 100.00 even though a user has not entered any value for that column.
Code Snippet 12:
USE [CUST_DB]
INSERT INTO dbo.StoreProduct (ProductID, Name) VALUES (111, 'Rivets')
GO
Figure 7.2 shows the output of Code Snippet 12, where though values are added only to the ProductID
and Name columns, the Price column will still show a value of 100.00. This is because of the DEFAULT
definition.
A column having IDENTITY property must be defined using one of the following data types:
decimal, int, numeric, smallint, bigint, or tinyint.
A column having IDENTITY property need not have a seed and increment value specified. If they
are not specified, a default value of 1 will be used for both.
A table cannot have more than one column with IDENTITY property.
The identifier column in a table must not allow null values and must not contain a DEFAULT
definition or object.
Columns defined with IDENTITY property cannot have their values updated.
Values can be explicitly inserted into the identity column of a table only if the
IDENTITY_INSERT option is set to ON. When IDENTITY_INSERT is ON, INSERT statements
must supply a value for the identity column.
The advantage of identifier columns is that SQL Server can automatically provide key values, thus reducing
costs and improving performance. Using identifier columns simplifies programming and keeps primary
key values short.
Once the IDENTITY property has been set, retrieving the value of the identifier column can be done
by using the IDENTITYCOL keyword with the table name in a SELECT statement. To know if a table
has an IDENTITY column, the OBJECTPROPERTY() function can be used. To retrieve the name of the
IDENTITY column in a table, the COLUMNPROPERTY function is used.
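These metadata checks can be sketched as follows (assuming a table HRContactPhone in the default schema with an identity column Person_ID):

```sql
-- Does the table have an IDENTITY column? (1 = yes, 0 = no)
SELECT OBJECTPROPERTY(OBJECT_ID('HRContactPhone'), 'TableHasIdentity');

-- Is a specific column the IDENTITY column?
SELECT COLUMNPROPERTY(OBJECT_ID('HRContactPhone'), 'Person_ID', 'IsIdentity');

-- Retrieve the identifier column values using the IDENTITYCOL keyword
SELECT IDENTITYCOL FROM HRContactPhone;
```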
The syntax for IDENTITY property is as follows:
Syntax:
where,
seed_value: is the seed value from which to start generating identity values.
increment_value: is the increment value by which to increase each time.
Code Snippet 13 demonstrates the use of IDENTITY property. HRContactPhone is created as a table
with two columns in the schema Person that is available in the CUST_DB database. The Person_ID
column is an identity column. The seed value is 500, and the increment value is 1.
USE [CUST_DB]
GO
CREATE TABLE HRContactPhone ( Person_ID int IDENTITY(500,1) NOT NULL,
MobileNumber bigint NOT NULL )
GO
While inserting rows into the table, if IDENTITY_INSERT is not turned on, then, explicit values for the
IDENTITY column cannot be given. Instead, statements similar to Code Snippet 14 can be given.
Code Snippet 14:
USE [CUST_DB]
INSERT INTO HRContactPhone (MobileNumber) VALUES(983452201)
INSERT INTO HRContactPhone (MobileNumber) VALUES(993026654)
GO
Figure 7.3 shows the output where IDENTITY property is incrementing Person_ID column values.
To know whether a table has a ROWGUIDCOL column, the OBJECTPROPERTY function is used. The
COLUMNPROPERTY function is used to retrieve the name of the ROWGUIDCOL column. Code Snippet 15
demonstrates how to use the CREATE TABLE statement to create the EMP_CellularPhone table.
The Person_ID column automatically generates a GUID for each new row added to the table.
USE [CUST_DB]
CREATE TABLE EMP_CellularPhone( Person_ID uniqueidentifier DEFAULT NEWID() NOT
NULL, PersonName varchar(60) NOT NULL)
GO
USE [CUST_DB]
INSERT INTO EMP_CellularPhone(PersonName) VALUES ('William Smith')
SELECT * FROM EMP_CellularPhone
GO
Figure 7.4 shows the output where a unique identifier is displayed against a specific PersonName.
7.4 Constraints
One of the important functions of SQL Server is to maintain and enforce data integrity. There are a number
of means to achieve this but one of the commonly used and preferred methods is to use constraints.
A constraint is a property assigned to a column or set of columns in a table to prevent certain types
of inconsistent data values from being entered. Constraints are used to apply business logic rules and
enforce data integrity.
Constraints can be created when a table is created, as part of the table definition by using the CREATE
TABLE statement or can be added at a later stage using the ALTER TABLE statement. Constraints can
be categorized as column constraints and table constraints. A column constraint is specified as part of a
column definition and applies only to that column. A table constraint can apply to more than one column
in a table and is declared independently from a column definition. Table constraints must be used when
more than one column must be included in a constraint. SQL Server supports the following types of
constraints:
PRIMARY KEY
UNIQUE
FOREIGN KEY
CHECK
NOT NULL
Code Snippet 17 demonstrates how to create a table EMPContactPhone to store the contact telephone
details of a person. Since the column EMP_ID must be a primary key for identifying each row uniquely, it
is created with the primary key constraint.
USE [CUST_DB]
CREATE TABLE EMPContactPhone ( EMP_ID int PRIMARY KEY, MobileNumber bigint,
ServiceProvider varchar(30), LandlineNumber bigint)
GO
Having created a primary key for EMP_ID, a query is written to insert rows into the table, as shown in
Code Snippet 18.
USE [CUST_DB]
INSERT INTO dbo.EMPContactPhone values (101, 983345674,'Hutch', NULL)
INSERT INTO dbo.EMPContactPhone values (102, 989010002,'Airtel', NULL)
GO
The first statement shown in Code Snippet 18 is executed successfully but the next INSERT statement
will fail because the value for EMP_ID is duplicate as shown in figure 7.5.
7.4.2 UNIQUE
A UNIQUE constraint is used to ensure that only unique values are entered in a column or set of columns.
It allows developers to make sure that no duplicate values are entered. Primary keys are implicitly unique.
Unique key constraints enforce entity integrity because, once the constraints are applied, no two rows in
the table can have the same value for the columns. UNIQUE constraints allow null values.
A single table can have more than one UNIQUE constraint.
The syntax to create UNIQUE constraint is as follows:
Syntax:
Code Snippet 19 demonstrates how to make the MobileNumber and LandlineNumber columns as
unique.
Code Snippet 19:
USE [CUST_DB]
GO
CREATE TABLE EMP_ContactPhone(Person_ID int PRIMARY KEY, MobileNumber bigint
UNIQUE,ServiceProvider varchar(30),LandlineNumber bigint UNIQUE)
USE [CUST_DB]
INSERT INTO EMP_ContactPhone values (111, 983345674, 'Hutch', NULL)
INSERT INTO EMP_ContactPhone values (112, 983345674, 'Airtel', NULL)
GO
Though a value of NULL has been given for the LandlineNumber columns, which are defined as UNIQUE, the
command will execute successfully because UNIQUE constraints check only for the uniqueness of values
but do not prevent null entries. The first statement shown in Code Snippet 20 is executed successfully
but the next INSERT statement will fail even though the primary key value is different because the value
for MobileNumber is a duplicate as shown in figure 7.7. This is because the column MobileNumber is
defined to be unique and disallows duplicate values.
where,
table_name: is the name of the table from which to reference primary key.
<pk_column_name>: is the name of the primary key column.
Code Snippet 21 demonstrates how to create a foreign key constraint.
Code Snippet 21:
USE [CUST_DB]
GO
CREATE TABLE EMP_PhoneExpenses ( Expense_ID int PRIMARY KEY, MobileNumber bigint
FOREIGN KEY REFERENCES EMP_ContactPhone (MobileNumber), Amount bigint)
A row is inserted into the table such that the mobile number is the same as one of the mobile numbers
in EMP_ContactPhone. The command that will be written is shown in Code Snippet 22.
Code Snippet 22:
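Such an insert might look as follows (a sketch; the values are illustrative, and the MobileNumber must already exist in EMP_ContactPhone for the foreign key check to pass):

```sql
USE [CUST_DB]
-- 983345674 matches a MobileNumber already present in EMP_ContactPhone,
-- so the FOREIGN KEY constraint allows the row to be inserted
INSERT INTO EMP_PhoneExpenses VALUES (201, 983345674, 500)
GO
```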
7.4.4 CHECK
A CHECK constraint limits the values that can be placed in a column. Check constraints enforce integrity
of data. For example, a CHECK constraint can be given to check if the value being entered into VoterAge
is greater than or equal to 18. If the data being entered for the column does not satisfy the condition,
then, insertion will fail.
A CHECK constraint operates by specifying a search condition, which can evaluate to TRUE, FALSE, or
unknown. Values that evaluate to FALSE are rejected. Multiple CHECK constraints can be specified for
a single column. A single CHECK constraint can also be applied to multiple columns by creating it at the
table level.
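A table-level CHECK constraint spanning multiple columns can be sketched as follows (the table and column names here are hypothetical):

```sql
-- The condition references two columns, so the constraint
-- must be declared at the table level, not on a single column
CREATE TABLE PhonePlan
(
    MonthlyLimit int NOT NULL,
    UsedMinutes  int NOT NULL,
    CONSTRAINT CK_Usage CHECK (UsedMinutes <= MonthlyLimit)
);
```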
Code Snippet 23 demonstrates creating a CHECK constraint to ensure that the Amount value will always
be greater than 10. A NULL value can, however, be added into the Amount column if the value of Amount
is not known.
Code Snippet 23:
USE [CUST_DB]
CREATE TABLE EMP_PhoneExpenses ( Expense_ID int PRIMARY KEY, MobileNumber bigint
FOREIGN KEY REFERENCES EMP_ContactPhone (MobileNumber), Amount bigint CHECK
(Amount >10))
GO
Once a CHECK constraint has been defined, if an INSERT statement is written with data that violates the
constraint, it will fail as shown in Code Snippet 24.
Code Snippet 24:
USE [CUST_DB]
INSERT INTO dbo.EMP_PhoneExpenses values (101, 983345674, 9)
GO
The error message that appears when Code Snippet 24 attempts to insert an Amount value that violates
the constraint is shown in figure 7.10.
2. A ______________ in a table is a column that points to a primary key column in another table.
3. Which of the following code is used to drop a table from CUST _ DB database?
(A) USE [CUST_DB]
    DROP TABLE [dbo].[Table_1]
(B) USE [CUST_DB]
    GO
    DROP TABLE [dbo].[Table_1]
(C) USE [CUST_DB]
    GO
    DELETE TABLE [dbo].[Table_1]
(D) USE [CUST_DB]
    GO
    SUBTRACT [dbo].[Table_1]
4. Which of the following property of SQL Server is used to create identifier columns that can contain
auto-generated sequential values to uniquely identify each row within a table?
5. A ______________ constraint is used to ensure that only unique values are entered in a column or
set of columns.
7.5.1 Answers
1. D
2. A
3. B
4. B
5. A
Summary
A data type is an attribute that specifies the storage capacity of an object and the type of data it
can hold, such as numeric data, character data, monetary data, and so on.
SQL Server 2012 supports three kinds of data types:
System data types
Alias data types
User-defined types
Most tables have a primary key, made up of one or more columns of the table that identifies
records uniquely.
The nullability feature of a column determines whether rows in the table can contain a null value
for that column.
A DEFAULT definition for a column can be created at the time of table creation or added at a later
stage to an existing table.
The IDENTITY property of SQL Server is used to create identifier columns that can contain
auto-generated sequential values to uniquely identify each row within a table.
Constraints are used to apply business logic rules and enforce data integrity.
A UNIQUE constraint is used to ensure that only unique values are entered in a column or set of
columns.
A foreign key in a table is a column that points to a primary key column in another table.
A CHECK constraint limits the values that can be placed in a column.
Try It Yourself
1. Saint Clara Insurance (SCI) Services is a leading insurance company based in New York, USA. SCI
Services wanted a faster, more accurate, and less expensive way to handle insurance claims adjusting
for its insurance company customers. With an ever-increasing customer base, they decided to create
a Web-based application that will be used not only by employees who work in the field, but also by
the administrators in the head office.
SCI handles approximately 650 claims per month, but that can soar to 15,000 or more when a
hurricane or some other disaster strikes. Officers can use the software on the device type of their
choice: Tablet PCs or laptops in the field, or desktop PCs back in their offices. The use of Microsoft
SQL Server 2005 as the software's database enables the company to receive and update all the
necessary information regarding a customer or claimant.
With thousands of customers expected every month, the integrity of the data in the database is
very important. You need to perform the following tasks:
a. Create a database called SaintClaraServices to store the details of the company. Create
a table CustomerHeader with the details given in table 7.2.
b. Create a table CustomerDetails with the specifications given in table 7.3.
Field Name Data Type Description
ClientID int Stores client id. This column is the Primary Key
FatherName char Stores the name of the client's father
MotherName char Stores the name of the client's mother
Amount money Stores the principal amount
Period int Stores period for insurance
Plan char Stores plan for insurance
Premium money Stores premium
NomineeName char Stores nominee name
Date datetime Stores the date on which insurance is made
Table 7.3: CustomerDetails Table
8.1 Introduction
The SELECT statement is a core command used to access data in SQL Server 2012. XML allows developers
to develop their own set of tags and makes it possible for other programs to understand these tags. XML
is the preferred means for developers to store, format, and manage data on the Web.
where,
table_name: is the table from which the data will be displayed.
<column_name1>...<column_nameN>: are the columns that are to be displayed.
Note - In SQL Server 2012, it is not mandatory to end commands with a semicolon, although terminating statements with one is recommended practice.
SELECT LEFT('International',5)
The code will display only the first five characters from the extreme left of the word 'International'.
Figure 8.1: First Five Characters from the Extreme Left of the Word
where,
*: specifies all columns of the named tables in the FROM clause.
<table_name>: is the name of the table from which the information is to be retrieved. It is
possible to include any number of tables. When two or more tables are used, every row of each table
is combined with every row of the other tables, producing a Cartesian product. This can take a long time
if the tables hold large amounts of data. Hence, it is recommended to use this syntax with a condition.
Code Snippet 2 demonstrates the use of ' * ' in the SELECT statement.
Code Snippet 2:
USE AdventureWorks2012
SELECT * FROM HumanResources.Employee
GO
The partial output of Code Snippet 2 with some columns of HumanResources.Employee table is shown
in figure 8.2.
where,
<column_name1>..<column_nameN>: are the columns that are to be displayed.
For example, to display the cost rates in various locations from Production.Location table in
AdventureWorks2012 database, the SELECT statement is as shown in Code Snippet 3.
Code Snippet 3:
USE AdventureWorks2012
SELECT LocationID,CostRate FROM Production.Location
GO
Figure 8.3 shows LocationID and CostRate columns from AdventureWorks2012 database.
USE AdventureWorks2012
SELECT [Name] +':'+ CountryRegionCode +'→'+ [Group] FROM Sales.SalesTerritory
GO
Figure 8.4 displays the country name, country region code, and corresponding group from
Sales.SalesTerritory of AdventureWorks2012 database.
Figure 8.4: Country Name, Country Region Code, and Corresponding Group
Code Snippet 5 demonstrates a SELECT statement that displays 'ChangedDate' as the heading for the
ModifiedDate column in the dbo.Individual table.
Code Snippet 5:
USE CUST_DB
SELECT ModifiedDate as 'ChangedDate' FROM dbo.Individual
GO
The output displays 'ChangedDate' as the heading for the ModifiedDate column in the dbo.Individual
table. Figure 8.5 shows the original heading and the changed heading.
Note - The table used in the FROM clause of a query is called a base table.
For example, consider the table Production.ProductCostHistory from the AdventureWorks2012
database. Suppose the production people decide to give a 15% discount on the standard cost of all the
products. The discount amount does not exist as a column but can be calculated by executing the
SELECT statement shown in Code Snippet 6.
Code Snippet 6:
USE AdventureWorks2012
SELECT ProductID,StandardCost,StandardCost * 0.15 as Discount FROM
Production.ProductCostHistory
GO
Figure 8.6 shows the output where discount amount is calculated using SELECT statement.
To eliminate duplicate values from a column in the resultset, the DISTINCT keyword is used, as Code Snippet 7 demonstrates.
Code Snippet 7:
USE AdventureWorks2012
SELECT DISTINCT StandardCost FROM Production.ProductCostHistory
GO
The SELECT statement has various clauses associated with it. In this section, each clause is discussed in
detail.
where,
new_table: is the name of the new table that is to be created.
Code Snippet 8 uses an INTO clause which creates a new table Production.ProductName with details
such as the product's ID and its name from the table Production.ProductModel.
Code Snippet 8:
USE AdventureWorks2012
SELECT ProductModelID,Name INTO Production.ProductName FROM Production.
ProductModel
GO
After executing the code, a message stating '(128 row(s) affected)' is displayed.
If a query is written to display the rows of the new table, the output will be as shown in figure 8.7.
where,
search_condition: is the condition to be met by the rows.
Table 8.1 shows the different operators that can be used with the WHERE clause.
Operator Description
= Equal to
<> Not equal to
BETWEEN Between a range
LIKE Search for a pattern
IN Match any value in a list of specified values
Table 8.1: Operators
Code Snippet 9 demonstrates the equal to operator with WHERE clause to display data with EndDate
6/30/2007 12:00:00 AM.
Code Snippet 9:
USE AdventureWorks2012
SELECT * FROM Production.ProductCostHistory WHERE EndDate = '6/30/2007 12:00:00
AM'
GO
Code Snippet 9 will return all records from the table Production.ProductCostHistory which has
the end date as '6/30/2007 12:00:00 AM'.
The output of the SELECT with the WHERE clause is shown in figure 8.8.
Code Snippet 10 demonstrates the equal to operator with WHERE clause to display data with address
having Bothell city.
Code Snippet 10:
USE AdventureWorks2012
SELECT * FROM Person.Address WHERE City = 'Bothell'
GO
Code Snippet 11:
USE AdventureWorks2012
SELECT * FROM HumanResources.Department WHERE DepartmentID < 10
GO
The query in Code Snippet 11 displays all those records where the value in DepartmentID is less than
10.
The AND operator displays a row only if all the conditions listed in the WHERE clause are satisfied. Code Snippet 12 demonstrates the AND operator.
Code Snippet 12:
USE AdventureWorks2012
SELECT * FROM Sales.CustomerAddress WHERE AddressID > 900 AND AddressTypeID = 5
GO
The OR operator returns TRUE and displays a row if any one of the conditions is satisfied. Code Snippet
13 demonstrates the OR operator.
Code Snippet 13:
USE AdventureWorks2012
SELECT * FROM Sales.CustomerAddress WHERE AddressID < 900 OR AddressTypeID = 5
GO
The query in Code Snippet 13 will display all the rows whose AddressID is less than 900 or whose
AddressTypeID is equal to five.
The NOT operator negates the search condition. Code Snippet 14 demonstrates the NOT operator.
Code Snippet 14:
USE AdventureWorks2012
SELECT * FROM Sales.CustomerAddress WHERE NOT AddressTypeID = 5
GO
Code Snippet 14 will display all the records whose AddressTypeID is not equal to 5. Multiple logical
operators can be used in a single SELECT statement. When more than one logical operator is used, NOT
is evaluated first, then AND, and finally OR.
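As a sketch of this precedence, using the same Sales.CustomerAddress table (the AddressTypeID values here are illustrative), the two queries below are equivalent; the parentheses in the second merely make the implicit evaluation order explicit:

```sql
USE AdventureWorks2012
-- NOT binds first, then AND, then OR.
SELECT * FROM Sales.CustomerAddress
WHERE NOT AddressTypeID = 5 AND AddressID < 900 OR AddressTypeID = 3

-- Equivalent query with the implicit grouping written out:
SELECT * FROM Sales.CustomerAddress
WHERE ((NOT AddressTypeID = 5) AND AddressID < 900) OR AddressTypeID = 3
GO
```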
The GROUP BY clause can have more than one grouped column.
where,
column_name1: is the name of the column according to which the resultset should be grouped.
For example, if the total number of resource hours has to be found for each work order, the
query in Code Snippet 15 would retrieve the required resultset.
Code Snippet 15:
USE AdventureWorks2012
SELECT WorkOrderID,SUM(ActualResourceHrs) FROM Production.WorkOrderRouting
GROUP BY WorkOrderID
GO
Code Snippet 16 creates a table, inserts a row into it, and then uses a WITH clause to define a temporary named resultset that can be used for further querying and processing.
Code Snippet 16:
USE CUST_DB
CREATE TABLE NewEmployees (EmployeeID smallint,FirstName char(10), LastName
char(10), Department varchar(50), HiredDate datetime, Salary money );
INSERT INTO NewEmployees
VALUES(11,'Kevin','Blaine', 'Research', '2012-07-31', 54000);
WITH EmployeeTemp (EmployeeID,FirstName,LastName,Department,
HiredDate,Salary)
AS
(
SELECT * FROM NewEmployees
)
SELECT * FROM EmployeeTemp
The statement in Code Snippet 16 inserts a new row into the NewEmployees table and transfers the
temporary resultset to EmployeeTemp as shown in figure 8.12.
Code Snippet 17 demonstrates how to use an UPDATE statement together with an INSERT statement.
Code Snippet 17:
USE CUST_DB;
GO
CREATE TABLE dbo.table_3
(
id INT,
employee VARCHAR(32)
)
go
INSERT INTO dbo.table_3 VALUES
(1, 'Matt')
,(2, 'Joseph')
,(3, 'Renny')
,(4, 'Daisy');
GO
DECLARE @updatedTable TABLE
(
id INT, olddata_employee VARCHAR(32), newdata_employee VARCHAR(32)
);
UPDATE dbo.table_3
Set employee= UPPER(employee)
OUTPUT
inserted.id,
deleted.employee,
inserted.employee
INTO @updatedTable
After executing Code Snippet 17, the output where rows are affected by an INSERT statement and
an UPDATE statement is shown in figure 8.13.
where,
expression: is the character string which is to be placed into the large value data type
column.
@offset: is the starting value (units) where the replacement is to be done.
@Length: is the length of the portion in the column, starting from @offset that is replaced
by expression.
Code Snippet 18 demonstrates how the .WRITE clause is used in an UPDATE statement.
Code Snippet 18:
USE CUST_DB;
GO
CREATE TABLE dbo.table_5
(
Employee_role VARCHAR(max),
Summary VARCHAR(max)
)
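The remainder of Code Snippet 18 is not reproduced in this excerpt. A minimal sketch of how the .WRITE clause could then be applied to dbo.table_5, assuming a row such as ('Research', 'This a very long non-unicode string') has already been inserted, is:

```sql
USE CUST_DB
-- Assumes dbo.table_5 exists with Employee_role and Summary columns,
-- and holds the row ('Research', 'This a very long non-unicode string').
UPDATE dbo.table_5
SET Summary .WRITE('n incredibly', 6, 5)   -- replace 5 characters starting at offset 6
WHERE Employee_role LIKE 'Research'

SELECT * FROM dbo.table_5
GO
```

With these arguments, the five characters beginning at zero-based offset 6 are replaced, turning 'This a very long...' into 'This an incredibly long...'.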
The SELECT statement in Code Snippet 19 sorts the query results on the SalesLastYear column of the
Sales.SalesTerritory table.
Code Snippet 19:
USE AdventureWorks2012
SELECT * FROM Sales.SalesTerritory ORDER BY SalesLastYear
GO
Easy Data Search and Management - All the XML data is stored locally in one place, thus making it
easier to search and manage.
Better Performance - Queries from a well-implemented XML database are faster than queries over
documents stored in a file system. Also, the database essentially parses each document when
storing it.
The xml data type is used to store XML documents and fragments in an SQL Server database. An XML
fragment is an XML instance with the top-level element missing from its structure.
The following is the syntax to create a table with columns of type xml.
Syntax:
Code Snippet 20 creates a new table named PhoneBilling with one of the columns belonging to xml
data type.
Code Snippet 20:
USE AdventureWorks2012
CREATE TABLE Person.PhoneBilling (Bill_ID int PRIMARY KEY, MobileNumber bigint
UNIQUE, CallDetails xml)
GO
A column of type xml can be added to a table at the time of creation or after its creation. The xml data
type columns support DEFAULT values as well as the NOT NULL constraint.
Data can be inserted into the xml column in the Person.PhoneBilling table as shown in Code Snippet
21.
Code Snippet 21:
USE AdventureWorks2012
INSERT INTO Person.PhoneBilling VALUES (100,9833276605,
'<Info><Call>Local</Call><Time>45 minutes</Time><Charges>200</Charges></
Info>')
SELECT CallDetails FROM Person.PhoneBilling
GO
The xml data type columns cannot be used as a primary key, foreign key, or as a unique constraint.
USE SampleDB
CREATE XML SCHEMA COLLECTION CricketSchemaCollection
AS N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" >
<xsd:element name="MatchDetails">
<xsd:complexType>
<xsd:complexContent>
<xsd:restriction base="xsd:anyType">
<xsd:sequence>
<xsd:element name="Team" minOccurs="0" maxOccurs="unbounded">
<xsd:complexType>
<xsd:complexContent>
<xsd:restriction base="xsd:anyType">
<xsd:sequence />
The CREATE XML SCHEMA COLLECTION statement creates a named collection of schemas, any of which can
be used to validate typed XML data. This example shows a new schema collection called
CricketSchemaCollection being added to the SampleDB database. Once a schema is registered,
the schema can be used in new instances of the xml data type.
Code Snippet 24 creates a table with an xml type column and specifies a schema for the column.
Code Snippet 24:
USE SampleDB
CREATE TABLE CricketTeam ( TeamID int identity not null, TeamInfo xml(CricketSchemaCollection) )
GO
To create new rows with the typed XML data, the INSERT statement can be used as shown in Code
Snippet 25.
Code Snippet 25:
USE SampleDB
INSERT INTO CricketTeam (TeamInfo) VALUES ('<MatchDetails><Team
A typed XML variable can also be created by specifying the schema collection name. For instance,
in Code Snippet 26, a variable team is declared as a typed XML variable with schema name as
CricketSchemaCollection. The SET statement is used to assign the variable as an XML fragment.
Code Snippet 26:
USE SampleDB
DECLARE @team xml(CricketSchemaCollection)
SET @team = '<MatchDetails><Team country="Australia"></Team></MatchDetails>'
SELECT @team
GO
8.5.3 XQuery
After XML data has been stored using the xml data type, it can be queried and retrieved using a
language named XQuery. XML Query or XQuery is a query language that combines syntax familiar to
developers who work with relational databases with the XPath language, which is used to select
individual sections or collections of elements from an XML document. XQuery can query structured or
semi-structured XML data. To query an XML instance stored in a variable or column of xml type, xml data
type methods are used. For example, a variable of xml type is declared and queried by using methods
of the xml data type. Developers need to query XML documents, and this involves transforming XML
documents in the required format. XQuery makes it possible to perform complex queries against an XML
data source over the Web.
Some of the xml data type methods used with XQuery are described as follows:
exist()
This method is used to determine if one or more specified nodes are present in the XML
document. It returns 1 if the XQuery expression returns at least one node, 0 if the XQuery
expression evaluated to an empty result, and NULL if the xml data type instance against which the
query was executed is NULL.
Code Snippet 27 demonstrates the use of exist() method. It is assumed that many records have
been inserted into the table.
Code Snippet 27:
USE SampleDB
This will return only those TeamID values where the Team element has been specified in the
TeamInfo. The output is shown in figure 8.17.
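The body of Code Snippet 27 is not reproduced in this excerpt; a query of the following shape, run against the CricketTeam table created in Code Snippet 24, would produce such a result:

```sql
USE SampleDB
-- exist() returns 1 for rows where the Team element is present in TeamInfo.
SELECT TeamID FROM CricketTeam
WHERE TeamInfo.exist('/MatchDetails/Team') = 1
GO
```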
query()
The query() method can be used to retrieve either the entire contents of an XML document or a
selected section of the XML document. Code Snippet 28 shows the use of query() method.
Code Snippet 28:
USE SampleDB
SELECT TeamInfo.query('/MatchDetails/Team') AS Info FROM CricketTeam
GO
value()
The value() method can be used to extract scalar values from an xml data type. Code Snippet 29
demonstrates this method.
Code Snippet 29:
USE SampleDB
SELECT TeamInfo.value('(/MatchDetails/Team/@score)[1]', 'varchar(20)')
AS Score FROM CricketTeam where TeamID=1
GO
2. The ________________ statement retrieves rows and columns from one or more tables.
3. Which of the following is the general format of the .WRITE clause query?
(A) ADD INTO dbo.table_5(Employee_role, Summary) VALUES
    ('Research', 'This a very long non-unicode string')
    SELECT * FROM dbo.table_5
    UPDATE dbo.table_5 SET Summary
    .WRITE('n incredibly')
    WHERE Employee_role LIKE 'Research'
    SELECT * FROM dbo.table_5

(B) INSERT INTO dbo.table_5(Employee_role, Summary) VALUES
    ('Research', 'This a very long non-unicode string')
    SELECT * FROM dbo.table_5
    UPDATE dbo.table_5 SET Summary
    .WRITE('n incredibly', 6,5)
    WHERE Employee_role LIKE 'Research'
    SELECT * FROM dbo.table_5

(C) INSERT INTO dbo.table_5(Employee_role, Summary) VALUES
    ('Research', 'This a very long non-unicode string')
    SELECT * FROM dbo.table_5
    UPDATE dbo.table_5 SET Summary
    .WRITE('n incredibly', 6,5)
    WHERE Employee_role LIKE 'Research'
    SELECT * FROM dbo.table_5

(D) INSERT INTO dbo.table_5(Employee_role, Summary) VALUES
    ('Research', 'This a very long non-unicode string')
    SELECT * FROM dbo.table_5
    dbo.table_5 SET Summary('n incredibly', 6,5)
    WHERE Employee_role LIKE 'Research'
Concepts
4. Which of the following clauses of the SELECT statement is used to specify the tables from which
records are to be retrieved?
5. ________ is used to improve the efficiency of queries on XML documents that are stored in an XML
column.
8.6.1 Answers
1. D
2. A
3. B
4. B
5. A
Summary
The SELECT statement retrieves rows and columns from tables.
The SELECT statement allows users to specify different expressions in order to view the resultset in
an ordered manner.
A SELECT statement can contain mathematical expressions by applying operators to one or more
columns.
XML allows developers to develop their own set of tags and makes it possible for other programs
to understand these tags.
A typed XML instance is an XML instance which has a schema associated with it.
Try It Yourself
1. Transcorp United Inc. is an import export company in USA. The database of the company is created
in SQL Server 2012. Transcorp has around 3000 employees worldwide. The details of the employees
such as Employee Code, Employee Name, Employee Department, Date of Joining, and so on are
stored in the EMP_Details table.
As the database administrator of Transcorp, you have to perform the following tasks:
Retrieve data of the employees who joined the company after 2010 and before 2012.
Edit the name of a female employee Julia Drek to Julia Dean using the .WRITE clause.
Get the data of all the employees who are from Houston.
Describe subqueries
Explain joins
9.1 Introduction
SQL Server 2012 includes several powerful query features that help you to retrieve data efficiently and
quickly. Data can be grouped and/or aggregated together in order to present summarized information.
Using the concept of subqueries, a resultset of a SELECT can be used as criteria for another SELECT
statement or query. Joins help you to combine column data from two or more tables based on a logical
relationship between the tables. On the other hand, set operators such as UNION and INTERSECT help
you to combine row data from two or more tables. The PIVOT and UNPIVOT operators are used to
transform the orientation of data from column-oriented to row-oriented and vice versa. The GROUPING
SET subclause of the GROUP BY clause helps to specify multiple groupings in a single query.
Syntax:
where,
column_name: is the name of the column according to which the resultset should be grouped.
Consider the WorkOrderRouting table in the AdventureWorks2012 database. The total resource
hours per work order needs to be calculated. To achieve this, the records need to be grouped by work
order number, that is, WorkOrderID.
Code Snippet 1 retrieves and displays the total resource hours per work order along with the work
order number. In this query, a built-in function named SUM() is used to calculate the total. SUM() is an
aggregate function. Aggregate functions will be covered in detail in a later section.
Code Snippet 1:
Executing this query will return all the work order numbers along with the total number of resource
hours per work order.
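The body of Code Snippet 1 is not reproduced in this excerpt; based on the description, it would take roughly this form:

```sql
USE AdventureWorks2012
-- Group the routing rows by work order and total the resource hours per group.
SELECT WorkOrderID, SUM(ActualResourceHrs) AS TotalResourceHrs
FROM Production.WorkOrderRouting
GROUP BY WorkOrderID
GO
```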
Using a GROUP BY on a query for this table will take into consideration the NULL values too. For
example, Code Snippet 3 retrieves and displays the average of the list price for each Class.
Code Snippet 3:
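The body of Code Snippet 3 is not reproduced here; a query of this shape (assuming the Production.Product table, whose Class column is nullable) would compute the average list price for each Class, including a group for NULL values:

```sql
USE AdventureWorks2012
-- NULL values of Class form their own group in the output.
SELECT Class, AVG(ListPrice) AS AvgListPrice
FROM Production.Product
GROUP BY Class
GO
```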
Consider the Sales.SalesTerritory table. This table has a column named Group indicating
the geographic area to which the sales territory belongs. Code Snippet 4 calculates and displays
the total sales for each group. The output needs to display all the groups regardless of whether
they had any sales or not. To achieve this, the code makes use of GROUP BY with ALL.
Code Snippet 4:
Apart from the rows that are displayed in Code Snippet 4, it will also display the group 'Pacific' with
null values as shown in figure 9.4. This is because the Pacific region did not have any sales.
Code Snippet 5 displays the row with the group 'Pacific' as it has total sales less than 6000000.
Code Snippet 5:
The output of this is only one row, with Group name Pacific and total sales of 5977814.9154.
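The body of Code Snippet 5 is not shown in this excerpt; a HAVING clause of roughly this shape (assuming the totals come from the SalesYTD column) would return only the Pacific row:

```sql
USE AdventureWorks2012
-- HAVING filters the groups after aggregation, keeping only totals below 6000000.
SELECT [Group], SUM(SalesYTD) AS TotalSales
FROM Sales.SalesTerritory
GROUP BY [Group]
HAVING SUM(SalesYTD) < 6000000
GO
```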
CUBE: CUBE is an aggregate operator that produces a super-aggregate row. In addition to the
usual rows provided by the GROUP BY, it also provides the summary of the rows that the GROUP BY
clause generates. The summary row is displayed for every possible combination of groups in the
resultset. The summary rows display NULL for the grouped columns while returning aggregate totals
computed across all the values of those columns. The following is the syntax of CUBE.
Syntax:
Code Snippet 6 retrieves and displays the total sales of each country and also, the total of the sales
of all the countries' regions.
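The body of Code Snippet 6 is not reproduced above; a CUBE query of this shape (column choices are an assumption based on the description) would produce the per-country totals plus super-aggregate summary rows:

```sql
USE AdventureWorks2012
-- CUBE adds summary rows for every combination of the grouped columns;
-- NULL in a grouped column marks a summary row.
SELECT CountryRegionCode, [Group], SUM(SalesYTD) AS TotalSales
FROM Sales.SalesTerritory
GROUP BY CUBE(CountryRegionCode, [Group])
GO
```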
ROLLUP: In addition to the usual rows that are generated by the GROUP BY, it also introduces
summary rows into the resultset. It is similar to CUBE operator but generates a resultset that
shows groups arranged in a hierarchical order. It arranges the groups from the lowest to the
highest. Group hierarchy in the result is dependent on the order in which the columns that are
grouped are specified.
The following is the syntax of ROLLUP.
Syntax:
Code Snippet 7 demonstrates the use of ROLLUP. It retrieves and displays the total sales of each
country, the total of the sales of all the countries' regions and arranges them in order.
Code Snippet 7:
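The body of Code Snippet 7 is not reproduced in this excerpt; a ROLLUP version of the same aggregation (again assuming the SalesYTD column) would arrange the summary rows hierarchically, from the most detailed level up to a grand total:

```sql
USE AdventureWorks2012
-- ROLLUP produces subtotals per CountryRegionCode and a final grand total row.
SELECT CountryRegionCode, [Group], SUM(SalesYTD) AS TotalSales
FROM Sales.SalesTerritory
GROUP BY ROLLUP(CountryRegionCode, [Group])
GO
```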
Since the query does not use a GROUP BY clause, all rows in the table will be summarized by the aggregate
formulas in the SELECT clause.
The output is shown in figure 9.7.
Columns that appear in the SELECT list but not in the GROUP BY clause must, therefore, be used as
inputs to aggregate functions. To correct or prevent the error, one needs to remove SalesOrderID
from the query.
Besides using numeric data, aggregate expressions can also include date, time, and character data for
summarizing.
Code Snippet 10 returns the earliest and latest order date, using MIN and MAX.
Code Snippet 10:
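The body of Code Snippet 10 is not shown in this excerpt; assuming the order dates come from the Sales.SalesOrderHeader table, it would take roughly this form:

```sql
USE AdventureWorks2012
-- MIN and MAX also work on date columns, returning the earliest and latest values.
SELECT MIN(OrderDate) AS EarliestOrder, MAX(OrderDate) AS LatestOrder
FROM Sales.SalesOrderHeader
GO
```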
Method Description
STUnion Returns an object that represents the union of a geometry/geography instance with
another geometry/geography instance.
STIntersection Returns an object that represents the points where a geometry/geography instance
intersects another geometry/geography instance.
STConvexHull Returns an object representing the convex hull of a geometry/geography instance. A
set of points is called convex if for any two points, the entire segment is contained in
the set. The convex hull of a set of points is the smallest convex set containing the set.
For any given set of points, there is only one convex hull.
Table 9.2: Spatial Aggregate Methods
Figures 9.9, 9.10, and 9.11 visually depict an example of these methods.
Here, two variables are declared of the geography type and appropriate values are assigned to them.
Then, they are combined into a third variable of geography type by using the STUnion() method.
The output of the code is shown in figure 9.13.
Union Aggregate
Envelope Aggregate
Collection Aggregate
These aggregates are implemented as static methods, which work for either the geography or the
geometry data types. Although aggregates are applicable to all classes of spatial data, they can be best
described with polygons.
Union Aggregate
It performs a union operation on a set of geometry objects. It combines multiple spatial objects
into a single spatial object, removing interior boundaries, where applicable. The following is the
syntax of UnionAggregate.
Syntax:
SELECT Geography::UnionAggregate(SpatialLocation)
AS AVGLocation
FROM Person.Address
WHERE City = 'London';
To view a visual representation of the spatial data, you can click the Spatial results tab in the output
window. This will display the output as shown in figure 9.15.
Envelope Aggregate
It returns a bounding area for a given set of geometry or geography objects.
The Envelope Aggregate exhibits different behaviors for geography and geometry types. Based
on the type of object it is applied to, it returns different results. For the geometry type, the result
is a 'traditional' rectangular polygon, which closely bounds the selected input objects. For the
geography type, the result is a circular object, which loosely bounds the selected input objects.
Furthermore, the circular object is defined using the new CurvePolygon feature. The following is
the syntax of EnvelopeAggregate.
Syntax:
where,
geometry_operand: is a geometry type table column comprising the set of geometry
objects.
geography_operand: is a geography type table column comprising the set of geography
objects.
Code Snippet 14 returns a bounding box for a set of objects in a table variable column.
Code Snippet 14:
SELECT Geography::EnvelopeAggregate(SpatialLocation)
AS Location
FROM Person.Address
WHERE City = 'London'
Collection Aggregate
It returns a GeometryCollection/GeographyCollection instance with one geometry/
geography part for each spatial object(s) in the selection set. The following is the syntax of
CollectionAggregate.
Syntax:
where,
geometry_operand: is a geometry type table column comprising the set of geometry
objects.
geography_operand: is a geography type table column comprising the set of geography
objects.
Code Snippet 15 returns a GeometryCollection instance that contains a CurvePolygon and
a Polygon.
Code Snippet 15:
SELECT geometry::CollectionAggregate(shape)
FROM @CollectionDemo;
where,
geometry_operand: is a geometry type table column comprising the set of
geometry objects.
geography_operand: is a geography type table column comprising the set of
geography objects.
Code Snippet 16 demonstrates the use of ConvexHullAggregate.
Code Snippet 16:
SELECT Geography::ConvexHullAggregate(SpatialLocation)
AS Location
FROM Person.Address
WHERE City = 'London'
9.6 Subqueries
You can use a SELECT statement or a query to return records that will be used as criteria for another
SELECT statement or query. The outer query is called the parent query and the inner query is called a
subquery. The purpose of a subquery is to return results to the outer query. In other words, the inner query
statement should return the column or columns used in the criteria of the outer query statement.
The simplest form of a subquery is one that returns just one column. The parent query can use the results
of this subquery using an = sign.
The syntax for the most basic form of a subquery using just one column with an = sign is shown.
Syntax:
In a subquery, the innermost SELECT statement is executed first and its result is passed as criteria to the
outer SELECT statement.
Consider a scenario where it is required to determine the due date and ship date of the most recent
orders.
Here, a subquery has been used to achieve the desired output. The inner query or subquery retrieves the
most recent order date. This is then passed to the outer query, which displays due date and ship date for
all the orders that were made on that particular date.
A part of the output of the code is shown in figure 9.19.
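The query described above is not reproduced in this excerpt; a sketch of the scalar-subquery form, assuming the order data lives in Sales.SalesOrderHeader, is:

```sql
USE AdventureWorks2012
-- The inner query runs first and yields a single value: the most recent order date.
SELECT SalesOrderID, DueDate, ShipDate
FROM Sales.SalesOrderHeader
WHERE OrderDate =
    (SELECT MAX(OrderDate) FROM Sales.SalesOrderHeader)
GO
```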
Scalar subqueries return a single value. Here, the outer query needs to be written to process a
single result.
Multi-valued subqueries return a result similar to a single-column table. Here, the outer query
needs to be written to handle multiple possible results.
These keywords, also called predicates, are used with multi-valued queries.
For example, consider that all the first names and last names of employees whose job title is 'Research
and Development Manager' need to be displayed. Here, the inner query may return more than one row
as there may be more than one employee with that job title. To ensure that the outer query can use the
results of the inner query, the IN keyword will have to be used.
Code Snippet 18 demonstrates this.
Code Snippet 18:
Here, the inner query retrieves the BusinessEntityID from the HumanResources.Employee
table for those records having job title 'Research and Development Manager'. These results
are then passed to the outer query, which matches the BusinessEntityID with that in the
Person.Person table. Finally, from the records that are matching, the first and last names are extracted
and displayed.
The output is displayed in figure 9.20.
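Code Snippet 18's body is not reproduced in this excerpt; based on the description, it would look roughly like this:

```sql
USE AdventureWorks2012
-- The inner query may return several BusinessEntityID values,
-- so the outer query matches them with IN rather than =.
SELECT FirstName, LastName
FROM Person.Person
WHERE BusinessEntityID IN
    (SELECT BusinessEntityID
     FROM HumanResources.Employee
     WHERE JobTitle = 'Research and Development Manager')
GO
```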
The ntext, text, and image data types cannot be used in the SELECT list of subqueries.
The SELECT list of a subquery introduced with a comparison operator can include only one
expression or column name.
Subqueries that are introduced by a comparison operator not followed by the keyword ANY or
ALL cannot include GROUP BY and HAVING clauses.
You cannot use DISTINCT keyword with subqueries that include GROUP BY.
Self-contained subqueries are written as standalone queries, without any dependencies on the
outer query. A self-contained subquery is processed once when the outer query runs and passes
its results to the outer query.
Correlated subqueries reference one or more columns from the outer query and therefore,
depend on the outer query. Correlated subqueries cannot be run separately from the outer query.
The EXISTS keyword is used with a subquery to check the existence of rows returned by the
subquery. The subquery does not actually return any data; it returns a value of TRUE or FALSE.
The following is the syntax of a subquery containing the word EXISTS.
Syntax:
where,
Subquery_Statement: specifies the subquery.
The code in Code Snippet 18 can be rewritten as shown in Code Snippet 19 using the EXISTS
keyword to yield the same output.
Code Snippet 19:
Here, the inner subquery retrieves all those records that match job title as 'Research and Development
Manager' and whose BusinessEntityId matches with that in the Person table. If there are no
records matching both these conditions, the inner subquery will not return any rows. Thus, in that
case, the EXISTS will return false and the outer query will also not return any rows. However, the
code in Code Snippet 19 will return two rows because the given conditions are satisfied. The output
will be the same as figure 9.20.
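Code Snippet 19 itself is not shown in this excerpt; an EXISTS version matching the description would be roughly:

```sql
USE AdventureWorks2012
-- The subquery returns TRUE or FALSE per outer row rather than data;
-- it is correlated with the outer query through p.BusinessEntityID.
SELECT FirstName, LastName
FROM Person.Person p
WHERE EXISTS
    (SELECT * FROM HumanResources.Employee e
     WHERE e.JobTitle = 'Research and Development Manager'
       AND e.BusinessEntityID = p.BusinessEntityID)
GO
```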
Similarly, one can use the NOT EXISTS keyword. The WHERE clause in which it is used is satisfied
if there are no rows returned by the subquery.
However, if the subquery refers to a parent query, the subquery needs to be re-evaluated for every iteration
in the parent query. This is because the search criterion in the subquery is dependent upon the value of
a particular record in the parent query.
When a subquery takes parameters from its parent query, it is known as Correlated subquery. Consider
that you want to retrieve all the business entity ids of persons whose contact information was last modified
not earlier than 2012. To do this, you can use a correlated subquery as shown in Code Snippet 21.
Code Snippet 21:
SELECT e.BusinessEntityID
FROM Person.BusinessEntityContact e
WHERE e.ContactTypeID IN
(
SELECT c.ContactTypeID
FROM Person.ContactType c
WHERE YEAR(e.ModifiedDate) >=2012
)
In Code Snippet 21, the inner query retrieves contact type ids for all those persons whose contact
information was modified in or after 2012. These results are then passed to the outer query, which
matches these contact type ids with those in the Person.BusinessEntityContact table and displays
the business entity IDs of those records. Figure 9.22 shows part of the output.
9.7 Joins
Joins are used to retrieve data from two or more tables based on a logical relationship between tables.
A join typically specifies foreign key relationship between the tables. It defines the manner in which two
tables are related in a query by:
Specifying the column from each table to be used for the join. A typical join specifies a foreign key
from one table and its associated key in the other table.
Specifying a logical operator such as =, <> to be used in comparing values from the columns.
where,
<ColumnName1>, <ColumnName2>: Is a list of columns that need to be displayed.
Table_A: Is the name of the table on the left of the JOIN keyword.
Table_B: Is the name of the table on the right of the JOIN keyword.
AS Table_Alias: Is a way of giving an alias name to the table. An alias defined for the table in a
query can be used to denote a table so that the full name of the table need not be used.
<CommonColumn>: Is a column that is common to both the tables. In this case, the join succeeds
only if the columns have matching values.
Consider that you want to list employee first names, last names, and their job titles from the
HumanResources.Employee and Person.Person tables. To extract this information from the two tables,
you need to join them based on BusinessEntityID as shown in Code Snippet 22.
Code Snippet 22:
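The snippet is not reproduced in the source; a query matching this description, assuming the AdventureWorks2012 schema and the aliases mentioned below, would be:

```sql
SELECT B.FirstName, B.LastName, A.JobTitle
FROM HumanResources.Employee A
JOIN Person.Person B
ON A.BusinessEntityID = B.BusinessEntityID;
```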
Here, the tables HumanResources.Employee and Person.Person are given aliases A and B. They
are joined together on the basis of their business entity ids. The SELECT statement then retrieves the
first names, last names, and job titles from the joined tables.
Code Snippet 23 demonstrates the use of inner join. The scenario for this is similar to Code Snippet 22.
Code Snippet 23:
SELECT A.FirstName, A.LastName, B.JobTitle -- column list assumed; the original line is missing
FROM Person.Person A
INNER JOIN HumanResources.Employee B
ON
A.BusinessEntityID = B.BusinessEntityID;
Consider that you want to retrieve all the customer ids from the Sales.Customer table along with
order information such as ship dates and due dates, even if the customers have not placed any
orders. Since the record count would be very large, it is to be restricted to only those orders that
were placed before 2012. To achieve this, you perform a left outer join as shown in Code Snippet
24.
Code Snippet 24:
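The snippet is not reproduced in the source; a query matching this description, assuming the AdventureWorks2012 schema, would be:

```sql
SELECT A.CustomerID, B.ShipDate, B.DueDate
FROM Sales.Customer A
LEFT OUTER JOIN Sales.SalesOrderHeader B
ON A.CustomerID = B.CustomerID
-- the date filter is placed in the ON clause so that customers
-- without any orders are still returned
AND B.OrderDate < '2012-01-01';
```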
In Code Snippet 24, the left outer join is constructed between the tables Sales.Customer and
Sales.SalesOrderHeader. The tables are joined on the basis of customer ids. In this case, all
records from the left table, Sales.Customer and only matching records from the right table,
Sales.SalesOrderHeader, are returned. Figure 9.24 shows the output.
Syntax:
SELECT <ColumnList>
FROM Table_A AS Table_Alias_A
RIGHT OUTER JOIN Table_B AS Table_Alias_B
ON Table_Alias_A.<CommonColumn> = Table_Alias_B.<CommonColumn>
Consider that you want to retrieve all the product names from Product table and all the
corresponding sales order ids from the SalesOrderDetail table even if there is no matching
record for the products in the SalesOrderDetail table. To do this, you will use a right outer join
as shown in Code Snippet 25.
Code Snippet 25:
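The snippet is not reproduced in the source; a query matching this description, assuming the AdventureWorks2012 schema, would be:

```sql
SELECT B.Name, A.SalesOrderID
FROM Sales.SalesOrderDetail A
RIGHT OUTER JOIN Production.Product B
ON A.ProductID = B.ProductID;
```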
In the code, all the records from the Product table are shown regardless of whether the products have
been sold or not.
9.7.3 Self-Join
A self-join is used to find records in a table that are related to other records in the same table. A table is
joined to itself in a self-join.
Consider that an Employee table in a database named Sterling has a column named mgr_id to denote
information for managers whom employees are reporting to. Assume that the table has appropriate
records inserted in it.
A manager is also an employee. This means that the mgr_id in the table is the emp_id of an
employee.
For example, Anabela with emp_id as ARD36773F is an employee but Anabela is also a manager for
Victoria, Palle, Karla, and other employees as shown in figure 9.25.
Code Snippet 26:
SELECT TOP 7 A.fname + ' ' + A.lname AS 'Employee Name', B.fname + ' ' + B.lname AS 'Manager'
FROM Employee AS A
INNER JOIN Employee AS B
ON A.mgr_id = B.emp_id
In Code Snippet 26, the Employee table is joined to itself based on the mgr_id and emp_id columns.
Insert a new row from the source if the row is missing in the target
Compare last and first names of customers from both source and target tables
Update customer information in target table if the last and first names match
Insert new records in target table if the last and first names in source table do not exist in target
table
Delete existing records in target table if the last and first names do not match with those of source
table
The MERGE statement accomplishes the tasks in a single statement. MERGE also allows you to optionally
display those records that were inserted, updated, or deleted by using an OUTPUT clause. The following
is the syntax of the MERGE statement.
Syntax:
MERGE target_table
USING source_table
ON match_condition
WHEN MATCHED THEN UPDATE SET Col1 = val1 [, Col2 = val2 ...]
WHEN NOT MATCHED [BY TARGET] THEN INSERT (Col1 [, Col2 ...]) VALUES (Val1 [,
Val2 ...])
WHEN NOT MATCHED BY SOURCE THEN DELETE
[OUTPUT $action, Inserted.Col1, Deleted.Col1,...] ;
where,
target_table: is the table where changes are being made.
source_table: is the table whose rows are used to insert into, update, or delete from the target
table.
match_condition: is the JOIN condition along with any other comparison operators.
MATCHED: true if a row in the target_table and source_table matches the
match_condition.
NOT MATCHED: true if a row from the source_table does not exist in the target_table.
NOT MATCHED BY SOURCE: true if a row exists in the target_table but not in the
source_table.
OUTPUT: an optional clause that allows you to view those records that have been inserted/deleted/
updated in the target_table.
MERGE statements are terminated with a semi-colon (;).
Code Snippet 27 shows how to use the MERGE statement. It makes use of the Sterling database.
Code Snippet 27:
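The snippet is not reproduced in the source; a sketch matching the steps listed above, with the table and column names assumed, would be:

```sql
MERGE Customer_Details AS T      -- target table name assumed
USING Customer_Source AS S       -- source table name assumed
ON T.LastName = S.LastName AND T.FirstName = S.FirstName
WHEN MATCHED THEN
UPDATE SET T.Address = S.Address
WHEN NOT MATCHED BY TARGET THEN
INSERT (FirstName, LastName, Address)
VALUES (S.FirstName, S.LastName, S.Address)
WHEN NOT MATCHED BY SOURCE THEN
DELETE
OUTPUT $action, Inserted.FirstName, Deleted.FirstName;
```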
Note - Common Table Expressions (CTEs) were first introduced in SQL Server 2005.
Key advantages of CTEs are improved readability and ease in maintenance of complex queries.
The following is the syntax to create a CTE.
Syntax:
WITH <CTE_name>
AS ( <CTE_definition> )
For example, to retrieve and display the customer count year-wise for orders present in the
Sales.SalesOrderHeader table, the code will be as given in Code Snippet 28.
Code Snippet 28:
WITH CTE_OrderYear
AS
(
SELECT YEAR(OrderDate) AS OrderYear, CustomerID
FROM Sales.SalesOrderHeader
)
SELECT OrderYear, COUNT(DISTINCT CustomerID) AS CustomerCount
FROM CTE_OrderYear
GROUP BY OrderYear;
Here, CTE_OrderYear is specified as the CTE name. The WITH...AS keywords begin the CTE definition.
Then, the CTE is used in the SELECT statement to retrieve and display the desired results.
Figure 9.29 shows the output.
CTEs are limited in scope to the execution of the outer query. Hence, when the outer query ends,
the lifetime of the CTE will end.
You need to define a name for a CTE and also, define unique names for each of the columns
referenced in the SELECT clause of the CTE.
A single CTE can be referenced multiple times in the same query with one definition.
Multiple CTEs can also be defined in the same WITH clause. For example, consider Code Snippet
29. It defines two CTEs using a single WITH clause. This snippet assumes that three tables named
Student, City, and Status are created.
Code Snippet 29:
WITH CTE_Students
AS
(
SELECT S.StudentCode, S.Name, C.CityName, St.Status
FROM Student S
INNER JOIN City C
ON S.CityCode = C.CityCode
INNER JOIN Status St
ON S.StatusId = St.StatusId
),
StatusRecord -- This is the second CTE being defined
AS
(
SELECT Status, COUNT(Name) AS CountofStudents
FROM CTE_Students
GROUP BY Status
)
SELECT * FROM StatusRecord
Assuming some records are inserted in all three tables, the output may be as shown in figure
9.30.
UNION
INTERSECT
EXCEPT
Syntax:
Query_Statement1
UNION [ALL]
Query_Statement2
where,
Query_Statement1 and Query_Statement2 are SELECT statements.
Code Snippet 30 demonstrates the UNION operator.
Code Snippet 30:
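The snippet is not reproduced in the source; a UNION query of this kind, with the two tables assumed for illustration, would be:

```sql
SELECT ProductID FROM Production.Product
UNION
SELECT ProductID FROM Sales.SalesOrderDetail;
```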
This will list all the distinct product ids from both tables combined. If you include the ALL clause,
all rows are included in the resultset, including duplicate records.
Code Snippet 31 demonstrates the UNION ALL operator.
Code Snippet 31:
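The snippet is not reproduced in the source; the same query with the ALL clause, tables again assumed, would be:

```sql
SELECT ProductID FROM Production.Product
UNION ALL
SELECT ProductID FROM Sales.SalesOrderDetail;
```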
By default, the UNION operator removes duplicate records from the resultset. However, if you use the
ALL clause with UNION operator, then all the rows are returned. Apart from UNION, the other operators
that are used to combine data from multiple tables are INTERSECT and EXCEPT.
Syntax:
Query_statement1
INTERSECT
Query_statement2
where,
Query_Statement1 and Query_Statement2 are SELECT statements.
Code Snippet 32 demonstrates the INTERSECT operator.
Code Snippet 32:
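The snippet is not reproduced in the source; an INTERSECT query of this kind, with the tables assumed for illustration, would be:

```sql
-- returns only the product ids that appear in both tables
SELECT ProductID FROM Production.Product
INTERSECT
SELECT ProductID FROM Sales.SalesOrderDetail;
```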
The number of columns and the order in which they are given must be the same in both the
queries, and the data types of the corresponding columns must be compatible.
The EXCEPT operator returns all of the distinct rows from the query given on the left of the EXCEPT
operator and removes all the rows from the resultset that match the rows on the right of the EXCEPT
operator.
Syntax:
Query_statement1
EXCEPT
Query_statement2
where,
Query_Statement1 and Query_Statement2 are SELECT statements.
The two rules that apply to INTERSECT operator are also applicable for EXCEPT operator.
Code Snippet 33 demonstrates the EXCEPT operator.
Code Snippet 33:
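The snippet is not reproduced in the source; an EXCEPT query of this kind, with the tables assumed for illustration, would be:

```sql
-- returns product ids that exist in the Product table but
-- have no matching rows in SalesOrderDetail
SELECT ProductID FROM Production.Product
EXCEPT
SELECT ProductID FROM Sales.SalesOrderDetail;
```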
where,
table_source: is a table or table expression.
aggregate_function: is a user-defined or in-built aggregate function that accepts one or more
inputs.
value_column: is the value column of the PIVOT operator.
pivot_column: is the pivot column of the PIVOT operator. This column must be of a type that can
implicitly or explicitly be converted to nvarchar().
IN (column_list): are values in the pivot_column that will become the column names
of the output table. The list must not include any column names that already exist in the input
table_source being pivoted.
table_alias: is the alias name of the output table.
The output of this will be a table containing all columns of the table_source except the
pivot_column and value_column. These columns of the table_source, excluding the
pivot_column and value_column, are called the grouping columns of the pivot operator.
In simpler terms, to use the PIVOT operator, you need to supply three elements to the operator:
Grouping: In the FROM clause, the input columns must be provided. The PIVOT operator uses
those columns to determine which column(s) to use for grouping the data for aggregation.
Spreading: Here, a comma-separated list of values that occur in the source data is provided that
will be used as the column headings for the pivoted data.
Aggregation: An aggregate function, such as SUM(), to be applied to the value column.
SELECT [Northwest], [Northeast], [Central], [Southwest], [Southeast] -- column list assumed
FROM
(SELECT TOP 5 Name, SalesYTD
FROM Sales.SalesTerritory
) AS SourceTable
PIVOT
(
SUM(SalesYTD)
FOR Name IN ([Northwest], [Northeast], [Central], [Southwest], [Southeast])
) AS PivotTable;
A name for the new column that will display the unpivoted values
A name for the column that will display the names of the unpivoted values
Consider the earlier scenario. Code Snippet 36 shows the code to convert the temporary pivot table into
a permanent one so that the same table can be used for demonstrating UNPIVOT operations.
Code Snippet 36:
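The snippet is not reproduced in the source; a sketch of this step, with the permanent table name and unpivoted column names assumed, would be:

```sql
-- materialize the pivoted result into a permanent table (name assumed)
SELECT [Northwest], [Northeast], [Central], [Southwest], [Southeast]
INTO PivotedTable
FROM (SELECT TOP 5 Name, SalesYTD FROM Sales.SalesTerritory) AS SourceTable
PIVOT (SUM(SalesYTD)
       FOR Name IN ([Northwest], [Northeast], [Central], [Southwest], [Southeast])
      ) AS PivotTable;

-- the permanent table can then be rotated back with UNPIVOT
SELECT TerritoryName, SalesYTD
FROM PivotedTable
UNPIVOT (SalesYTD
         FOR TerritoryName IN ([Northwest], [Northeast], [Central], [Southwest], [Southeast])
        ) AS UnpivotTable;
```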
Consider that you need a report that groups several columns of a table. Further, you want aggregates of the
columns. In earlier versions of SQL Server, you would have to write several distinct GROUP BY clauses followed
by UNION clauses to achieve this. First introduced in SQL Server 2008, the GROUPING SETS operator allows
you to group together multiple groupings of columns followed by an optional grand total row, denoted by
parentheses, (). It is more efficient to use GROUPING SETS operators instead of multiple GROUP BY with
UNION clauses because the latter adds more processing overheads on the database server.
GROUP BY
GROUPING SETS ( <grouping set list> )
where,
grouping set list: consists of one or more columns, separated by commas.
A pair of parentheses, (), without any column name denotes grand total.
Code Snippet 38 demonstrates the GROUPING SETS operator. It assumes that a table Students is
created with fields named Id, Name, and Marks respectively.
Code Snippet 38:
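The snippet is not reproduced in the source; a query matching this description, assuming the Students table named above, would be:

```sql
-- one grouping per student name, plus a grand total row denoted by ()
SELECT Name, SUM(Marks) AS TotalMarks
FROM Students
GROUP BY
GROUPING SETS ((Name), ());
```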
2. The ________________ operator is used to display only the rows that are common to both the
tables.
4. ________________ is formed when records from two tables are combined only if the rows from
both the tables are matched based on a common column.
5. _______________ return all rows from at least one of the tables in the FROM clause of the SELECT
statement, as long as those rows meet any WHERE or HAVING conditions of the SELECT
statement.
6. Consider you have two tables, Products and Orders that are already populated. Based on new
orders, you want to update Quantity in Products table. You write the following code:
MERGE Products AS T
USING Orders AS S
ON S.ProductID = T.ProductID
___________________________
___________________________
___________________________
Which of the following code, when inserted into the blank space, will allow you to achieve this?
(A) WHEN MATCHED THEN
    UPDATE SET T.Quantity = S.Quantity
(B) WHEN MATCHED THEN
    UPDATE SET Quantity = Quantity
(C) WHEN NOT MATCHED THEN
    UPDATE SET T.Quantity = S.Quantity
(D) WHEN MATCHED THEN
    UPDATE SET S.Quantity = T.Quantity
9.11.1 Answers
1. C
2. B
3. B
4. B
5. B
6. A
7. A
Summary
The GROUP BY clause and aggregate functions enable you to group and/or aggregate data
in order to present summarized information.
Spatial aggregate functions are newly introduced in SQL Server 2012.
A subquery allows the resultset of one SELECT statement to be used as criteria for another SELECT
statement.
Joins help you to combine column data from two or more tables based on a logical relationship
between the tables.
Set operators such as UNION and INTERSECT help you to combine row data from two or more
tables.
The PIVOT and UNPIVOT operators help to change the orientation of data from column-oriented
to row-oriented and vice versa.
The GROUPING SET subclause of the GROUP BY clause helps to specify multiple groupings in a
single query.
Try It Yourself
1. Write a query to display the employee names and their departments from the AdventureWorks2012
database.
2. Using the tables Sales.SalesPerson and Sales.SalesTerritory, retrieve the IDs of all the
sales persons who operate in Canada.
3. Using the tables Sales.SalesPerson and Sales.SalesTerritory, retrieve the IDs of all the
sales persons who operate in Northwest or Northeast.
4. Compare the bonus values of salespersons in the Sales.SalesPerson table to find out the sales
persons earning more bonuses. Display the SalesPersonID and bonus values in descending order.
(Hint: Use a self-join and ORDER BY ...DESC).
5. Retrieve all the values of SalesPersonID from Sales.SalesPerson table, but leave out those
values, which are present in the Sales.Store table. (Hint: Use EXCEPT operator).
7. Retrieve all the sales person IDs and territory IDs from Sales.SalesPerson table regardless of
whether they have matching records in the Sales.SalesTerritory table. (Hint: Use a left outer
join).
8. Retrieve a distinct set of Territory IDs that are present in both Sales.SalesPerson and
Sales.SalesTerritory tables. (Hint: Use INTERSECT operator).
This session explains about views and describes creating, altering, and
dropping views. The session also describes stored procedures in detail.
The session concludes with an explanation of the techniques to query
metadata.
10.1 Introduction
An SQL Server database has two main categories of objects: those that store data and those that access,
manipulate, or provide access to data. Views and stored procedures belong to this latter category.
10.2 Views
A view is a virtual table that is made up of selected columns from one or more tables. The tables from
which the view is created are referred to as base tables. These base tables can be from different databases.
A view can also include columns from other views created in the same or a different database. A view can
have a maximum of 1024 columns.
The data inside the view comes from the base tables that are referenced in the view definition. The rows
and columns of views are created dynamically when the view is referenced.
Note - While creating a view, test the SELECT statement that defines the view to make sure that SQL
Server returns the expected result. You create the view only after the SELECT statement is tested and
the resultset has been verified.
The following syntax is used to create a view.
Syntax:
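The syntax block is not reproduced in the source; the general form described by the parameter list below is:

```sql
CREATE VIEW <view_name>
AS
<select_statement>
```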
where,
view_name: specifies the name of the view.
select_statement: specifies the SELECT statement that defines the view.
Code Snippet 1 creates a view from the Product table to display only the product id, product number,
name, and safety stock level of products.
Code Snippet 1:
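The snippet is not reproduced in the source; a view matching this description, assuming the AdventureWorks2012 Production.Product table, would be:

```sql
CREATE VIEW vwProductInfo
AS
SELECT ProductID, ProductNumber, Name, SafetyStockLevel
FROM Production.Product;
```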
Note - The prefix vw is added to a view name as per the recommended coding conventions.
The following code in Code Snippet 2 is used to display the details of the vwProductInfo view.
Code Snippet 2:
The result will show the specified columns of all the products from the Product table. A part of the
output is shown in figure 10.1.
The JOIN keyword can also be used to create views. The CREATE VIEW statement is used along with the
JOIN keyword to create a view using columns from multiple tables.
The following syntax is used to create a view with the JOIN keyword.
Syntax:
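The syntax block is not reproduced in the source; the general form described by the parameter list below is:

```sql
CREATE VIEW <view_name>
AS
SELECT <column_list>
FROM <table_name1>
JOIN <table_name2>
ON <join_condition>
```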
where,
view_name: specifies the name of the view.
table_name1: specifies the name of first table.
JOIN: specifies that two tables are joined using JOIN keyword.
table_name2: specifies the name of the second table.
Code Snippet 3 creates a view named vwPersonDetails with specific columns from the Person
and Employee tables. The JOIN and ON keywords join the two tables based on BusinessEntityID
column.
Code Snippet 3:
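The snippet is not reproduced in the source; a view matching this description, assuming the AdventureWorks2012 schema, would be:

```sql
CREATE VIEW vwPersonDetails
AS
SELECT A.Title, A.FirstName, A.MiddleName, A.LastName, B.JobTitle
FROM Person.Person A
JOIN HumanResources.Employee B
ON A.BusinessEntityID = B.BusinessEntityID
```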
GO
This view will contain the columns Title, FirstName, MiddleName, and LastName from the Person
table and JobTitle from the Employee table. Once the view is created, you can retrieve records from
it, and manipulate and modify records as well.
When this view is queried with a SELECT statement, the output will be as shown in figure 10.3.
A view is created only in the current database. The base tables and views from which the view is
created can be from other databases or servers.
View names must be unique and cannot be the same as the table names in the schema.
The CREATE VIEW statement can include the ORDER BY clause only if the TOP keyword is used.
The CREATE VIEW statement cannot be combined with other Transact-SQL statements in a single
batch.
If a view contains columns that have values derived from an expression, such columns have to be given
alias names. Also, if a view contains similarly-named columns from different tables, to distinguish these
columns, alias names must be specified.
Code Snippet 6 reuses the code given in Code Snippet 5 with an ORDER BY clause. The TOP keyword
displays the name of the first ten employees with their first names in ascending order.
Code Snippet 6:
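Neither Code Snippet 5 nor 6 is reproduced in the source; a sketch matching the description, with the view name and source table assumed, would be:

```sql
CREATE VIEW vwEmployeeNames  -- view name assumed
AS
SELECT TOP 10 FirstName, LastName
FROM Person.Person
ORDER BY FirstName;
```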
INSERT
UPDATE
DELETE
Designation varchar(30),
Salary int NOT NULL
)
Code Snippet 10 uses the INSERT statement to insert data through the view vwEmployee_Details.
However, the data is not inserted as the view is created from two base tables.
Code Snippet 10:
The following error message is displayed when the INSERT statement is executed.
'Msg 4405, Level 16, State 1, Line 1
View or function 'vwEmployee_Details' is not updatable because the
modification affects multiple base tables.'
The following restrictions apply when inserting data through a view:
The INSERT statement must specify values for all columns in a view in the underlying table that
do not allow null values and have no DEFAULT definitions.
When there is a self-join with the same view or base table, the INSERT statement does not work.
This insert is not allowed as the view does not contain the LastName column from the base table and
that column does not allow null values.
Assume some records are added in the table as shown in figure 10.4.
Code Snippet 15 updates the view to change all rates of DVD writers to 3000.
Code Snippet 15:
UPDATE vwProduct_Details
SET Rate=3000
WHERE ProductName='DVD Writer'
The outcome of this code affects not only the view, vwProduct_Details, but also the underlying table
from which the view was created.
Figure 10.5 shows the updated table which was automatically updated because of the view.
Large value data types include varchar(max), nvarchar(max), and varbinary(max). To update
data having large value data types, the .WRITE clause is used. The .WRITE clause specifies that a
section of the value in a column is to be modified. The .WRITE clause cannot be used to update a NULL
value in a column. Also, it cannot be used to set a column value to NULL.
Syntax:
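The syntax block is not reproduced in the source; the general form described by the parameter list below, per SQL Server documentation, is:

```sql
UPDATE <view_or_table_name>
SET column_name .WRITE (expression, @Offset, @Length)
[WHERE <search_condition>]
```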
where,
column_name: specifies the name of the large value data-type column.
Expression: specifies the value that is copied to the column.
@Offset: specifies the starting point in the value of the column at which the expression is
written.
@Length: specifies the length of the section in the column.
@Offset and @Length are specified in bytes for varbinary and varchar data types and in characters
for the nvarchar data type.
Assume that the table Product_Details is modified to include a column Description having data
type nvarchar(max).
A view is created based on this table, having the columns ProductName, Description, and Rate as
shown in Code Snippet 16.
Code Snippet 16:
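The snippet is not reproduced in the source; a view matching this description would be:

```sql
CREATE VIEW vwProduct_Details
AS
SELECT ProductName, Description, Rate
FROM Product_Details;
```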
Code Snippet 17 uses the UPDATE statement on the view vwProduct_Details. The .WRITE clause is
used to change the value of Internal in the Description column to External.
Code Snippet 17:
UPDATE vwProduct_Details
SET Description .WRITE (N'External', 0, 8) -- offset/length assumed: 'Internal' occupies the first 8 characters
WHERE ProductName = 'Portable Hard Drive'
As a result of the code, all the rows in the view that had 'Portable Hard Drive' as the product name will be
updated with External instead of Internal in the Description column.
Figure 10.6 shows a sample output of the view after the update.
While updating a row, if a constraint or rule is violated, the statement is terminated, an error is
returned, and no records are updated.
When there is a self-join with the same view or base table, the UPDATE statement does not work.
Assume that a table named Customer_Details and a view vwCustDetails based on the table are
created.
Code Snippet 18 is used to delete the record from the view vwCustDetails that has CustID C0004.
Code Snippet 18:
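The snippet is not reproduced in the source; a statement matching this description would be:

```sql
DELETE FROM vwCustDetails
WHERE CustID = 'C0004'
```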
Note - After a view is created, if the structure of its underlying tables is altered by adding columns, the
new columns do not appear in the view. This is because the column list is interpreted only when you
first create the view. To see the new columns in the view, you must alter the view.
Code Snippet 19 alters the view, vwProductInfo to include the ReOrderPoint column.
Code Snippet 19:
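The snippet is not reproduced in the source; a statement matching this description, assuming the view from Code Snippet 1, would be:

```sql
ALTER VIEW vwProductInfo
AS
SELECT ProductID, ProductNumber, Name, SafetyStockLevel, ReorderPoint
FROM Production.Product;
```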
Note - The owner of the view has the permission to drop a view and this permission is nontransferable.
However, the system administrator or database owner can drop any object by specifying the owner
name in the DROP VIEW statement.
The following syntax is used to drop a view.
Syntax:
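The DROP VIEW syntax itself is not reproduced in the source; its general form is:

```sql
DROP VIEW <view_name>
```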
Syntax:
sp_helptext <view_name>
The execution of the code will display the definition about the view as shown in figure 10.8.
Here, the AVG() function calculates the average rate of similar products by using a GROUP BY clause.
Figure 10.9 shows the result when the view is queried.
where,
WITH CHECK OPTION: specifies that the modified data in the view continues to satisfy the view
definition.
Code Snippet 23 re-creates the view vwProductInfo having SafetyStockLevel less than or equal
to 1000.
Code Snippet 23:
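The snippet is not reproduced in the source; a sketch matching this description, re-creating the view with a CHECK OPTION (column list assumed), would be:

```sql
ALTER VIEW vwProductInfo
AS
SELECT ProductID, ProductNumber, Name, SafetyStockLevel
FROM Production.Product
WHERE SafetyStockLevel <= 1000
WITH CHECK OPTION;
```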
In Code Snippet 24, the UPDATE statement is used to modify the view vwProductInfo by changing the
value of the SafetyStockLevel column for the product having id 321 to 2500.
Code Snippet 24:
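The snippet is not reproduced in the source; a statement matching this description would be:

```sql
UPDATE vwProductInfo
SET SafetyStockLevel = 2500
WHERE ProductID = 321;
```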
The UPDATE statement fails to execute as it violates the view definition, which specifies that
SafetyStockLevel must be less than or equal to 1000. Thus, no rows are affected in the view
vwProductInfo.
Note - Any updates performed on the base tables are not verified against the view, even if CHECK
OPTION is specified.
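The syntax block is not reproduced in the source; the general form described by the parameter list below is:

```sql
CREATE VIEW <view_name>
WITH SCHEMABINDING
AS
<select_statement>
```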
where,
view_name: specifies the name of the view.
WITH SCHEMABINDING: specifies that the view must be bound to a schema.
select_statement: Specifies the SELECT statement that defines the view.
Code Snippet 25 creates a view vwNewProductInfo with SCHEMABINDING option to bind the view to
the Production schema, which is the schema of the table Product.
Code Snippet 25:
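The snippet is not reproduced in the source; a view matching this description (column list assumed; note that a schema-bound view must use two-part table names) would be:

```sql
CREATE VIEW vwNewProductInfo
WITH SCHEMABINDING
AS
SELECT ProductID, ProductNumber, Name
FROM Production.Product;
```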
sp_refreshview '<view_name>'
Code Snippet 26 creates a table Customers with the CustID, CustName, and Address columns.
Code Snippet 26:
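The snippet is not reproduced in the source; a table matching this description, with the data types assumed, would be:

```sql
CREATE TABLE Customers
(
CustID varchar(10),
CustName varchar(50),
Address varchar(100)
);
```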
The output of Code Snippet 28 shows the three columns, CustID, CustName, and Address.
Code Snippet 29 uses the ALTER TABLE statement to add a column Age to the table Customers.
Code Snippet 29:
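The snippet is not reproduced in the source; statements matching this description (the data type of Age and the refresh call are assumptions based on the surrounding text) would be:

```sql
ALTER TABLE Customers
ADD Age int;
GO
-- refresh the view's metadata so that the new column becomes visible
EXEC sp_refreshview 'vwCustomers';
```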
When a SELECT query is run again on the view, the column Age is seen in the output. This is because the
sp_refreshview procedure refreshes the metadata for the view vwCustomers.
Tables that are schema-bound to a view cannot be dropped unless the view is dropped or changed such
that it no longer has schema binding. If the view is not dropped or changed and you attempt to drop the
table, the Database Engine returns an error message.
Also, when an ALTER TABLE statement affects the view definition of a schema-bound view, the ALTER
TABLE statement fails.
Consider the schema-bound view that was created in Code Snippet 25. It is dependent on the
Production.Product table.
Code Snippet 32 tries to modify the data type of ProductID column in the Production.Product
table from int to varchar(7).
Code Snippet 32:
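The snippet is not reproduced in the source; a statement matching this description would be:

```sql
ALTER TABLE Production.Product
ALTER COLUMN ProductID varchar(7);
```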
The Database Engine returns an error message because the table is schema-bound to the vwNewProductInfo
view and hence cannot be altered in a way that violates the view definition.
A stored procedure is a group of Transact-SQL statements that together perform
a specific task. This block of code is identified by an assigned name and is stored in the database in a
compiled form. A stored procedure may also be a reference to a .NET Framework Common Language
Runtime (CLR) method.
Stored procedures are useful when repetitive tasks have to be performed. They eliminate the need to
repeatedly type out and recompile the same Transact-SQL statements.
Stored procedures can accept values in the form of input parameters and return output values as defined
by the output parameters.
Using stored procedures offers numerous advantages over using Transact-SQL statements. These are as
follows:
Improved Security
The database administrator can improve the security by associating database privileges with stored
procedures. Users can be given permission to execute a stored procedure even if the user does not
have permission to access the tables or views.
Precompiled Execution
Stored procedures are compiled during the first execution. For every subsequent execution,
SQL Server reuses this precompiled version. This reduces the time and resources required for
compilation.
Reuse of code
Stored procedures can be used multiple times. This eliminates the need to repetitively type out
hundreds of Transact-SQL statements every time a similar task is to be performed.
CLR stored procedures are based on .NET Framework CLR methods. Both types of stored procedures can take and return
user-defined parameters.
Note - System tables are created by default at the time of creating a new database. These tables store
the metadata information about user-defined objects such as tables and views. Users cannot access
or update the system tables using system stored procedures except through permissions granted by a
database administrator.
Note - A session is established when a user connects to the database and is ended when the user
disconnects.
The complete name of a global temporary stored procedure, including the prefix ##, cannot exceed 128
characters. The complete name of a local temporary stored procedure, including the prefix #, cannot
exceed 116 characters.
EXECUTE <procedure_name>
Code Snippet 33 executes the extended stored procedure xp_fileexist to check whether the
MyTest.txt file exists or not.
Code Snippet 33:
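The snippet is not reproduced in the source; a call matching this description (the file path is assumed) would be:

```sql
-- the procedure is qualified with master.dbo as advised in the note below
EXECUTE master.dbo.xp_fileexist 'C:\MyTest.txt'
```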
Note - When you execute an extended stored procedure, either in a batch or in a module, qualify the
stored procedure name with master.dbo.
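The syntax block is not reproduced in the source; the general form of the CREATE PROCEDURE statement described by the parameter list below is:

```sql
CREATE PROCEDURE <procedure_name>
@parameter data_type [OUTPUT] [, ...]
AS
sql_statement
```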
where,
procedure_name: specifies the name of the procedure.
@parameter: specifies the input/output parameters in the procedure.
data_type: specifies the data types of the parameters.
sql_statement: specifies one or more Transact-SQL statements to be included in the
procedure.
Code Snippet 34 creates and then executes a custom stored procedure, uspGetCustTerritory, which
will display the details of customers such as customer id, territory id, and territory name.
Code Snippet 34:
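The snippet is not reproduced in the source; a procedure matching this description, assuming the AdventureWorks2012 schema, would be:

```sql
CREATE PROCEDURE uspGetCustTerritory
AS
SELECT C.CustomerID, C.TerritoryID, T.Name AS TerritoryName
FROM Sales.Customer C
JOIN Sales.SalesTerritory T
ON C.TerritoryID = T.TerritoryID;
```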
To execute the stored procedure, the EXEC command is used as shown in Code Snippet 35.
Code Snippet 35:
EXEC uspGetCustTerritory
Input Parameters
Input parameters allow the calling program to pass values to a stored procedure. These values are
accepted into variables defined in the stored procedure.
Output Parameters
Output parameters allow a stored procedure to pass values back to the calling program. These
values are accepted into variables by the calling program.
These are now described in detail.
Input Parameters
Values are passed from the calling program to the stored procedure and these values are accepted
into the input parameters of the stored procedure. The input parameters are defined at the time of
creation of the stored procedure. The values passed to input parameters could be either constants
or variables. These values are passed to the procedure at the time of calling the procedure. The
stored procedure performs the specified tasks using these values.
Code Snippet 36 creates a stored procedure, uspGetSales with a parameter territory to accept
the name of a territory and display the sales details and salesperson id for that territory. Then, the
code executes the stored procedure with Northwest being passed as the input parameter.
Code Snippet 36:
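The snippet is not reproduced in the source; a procedure matching this description, with the column list assumed from the AdventureWorks2012 schema, would be:

```sql
CREATE PROCEDURE uspGetSales @territory varchar(50)
AS
SELECT A.BusinessEntityID, A.SalesYTD, A.SalesLastYear
FROM Sales.SalesPerson A
JOIN Sales.SalesTerritory B
ON A.TerritoryID = B.TerritoryID
WHERE B.Name = @territory;
GO
EXEC uspGetSales 'Northwest';
```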
Output Parameters
Stored procedures occasionally need to return output back to the calling program. This transfer of
data from the stored procedure to the calling program is performed using output parameters.
Output parameters are defined at the time of creation of the procedure. To specify an output
parameter, the OUTPUT keyword is used while declaring the parameter. Also, the calling statement
has to have a variable specified with the OUTPUT keyword to accept the output from the called
procedure.
The following syntax is used to pass output parameters in a stored procedure and then, execute the
stored procedure with the OUTPUT parameter specified.
Syntax:
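The syntax block is not reproduced in the source; its general form, approximately, is:

```sql
CREATE PROCEDURE <procedure_name>
@parameter data_type OUTPUT
AS
sql_statement
GO
DECLARE @variable data_type
EXECUTE <procedure_name> @variable OUTPUT
```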
Code Snippet 38 declares a variable sumsales to accept the output of the procedure
uspGetTotalSales.
Code Snippet 38:
DECLARE @sumsales money  -- declaration assumed; only the tail of the snippet survives
EXEC uspGetTotalSales 'Northwest', @sumsales OUTPUT
PRINT 'The total sales for Northwest: ' + convert(varchar(100),@sumsales);
GO
The code passes Northwest as the input to the uspGetTotalSales stored procedure and
accepts the output in the variable sumsales. The output is printed using the PRINT command.
5. Expand Programmability, right-click Stored Procedures, and then, click New Stored Procedure.
6. On the Query menu, click Specify Values for Template Parameters. The Specify Values for Template
Parameters dialog box is displayed as shown in figure 10.12.
7. In the Specify Values for Template Parameters dialog box, enter the following values for the
parameters as shown in table 10.2.
Parameter Value
Author Your name
Create Date Today's date
Description Returns year to sales data for a territory
Procedure_Name uspGetTotals
@Param1 @territory
@Datatype_For_Param1 varchar(50)
Default_Value_For_Param1 NULL
@Param2
@Datatype_For_Param2
Default_Value_For_Param2
Table 10.2: Parameter Values
9. In the Query Editor, replace the SELECT statement with the following statement:
SELECT A.BusinessEntityID, B.SalesYTD, B.SalesLastYear
FROM Sales.SalesPerson A
JOIN Sales.SalesTerritory B
ON A.TerritoryID = B.TerritoryID
WHERE B.Name = @territory;
10. To test the syntax, on the Query menu, click Parse. If an error message is returned, compare the
statements with the information and correct as needed.
11. To create the procedure, from the Query menu, click Execute. The procedure is created as an object
in the database.
12. To see the procedure listed in Object Explorer, right-click Stored Procedures and select Refresh.
The procedure name will be displayed in the Object Explorer tree as shown in figure 10.13.
13. To run the procedure, in Object Explorer, right-click the stored procedure name uspGetTotals and
select Execute Stored Procedure.
14. In the Execute Procedure window, enter Northwest as the value for the parameter
@territory.
Note - In SQL Server 2012, a stored procedure can be up to 250 MB in size. In other words, the bytes in
source text of a stored procedure cannot exceed 250 MB.
Syntax:
sp_helptext '<procedure_name>'
Code Snippet 39 displays the definition of the stored procedure named uspGetTotals.
Code Snippet 39:
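Neither the snippet body nor the ALTER PROCEDURE syntax that the parameter list below describes is reproduced in the source; both are sketched here:

```sql
-- a call matching the description of Code Snippet 39 (assumed):
EXEC sp_helptext 'uspGetTotals'

-- general form of ALTER PROCEDURE, described by the parameter list below:
ALTER PROCEDURE <procedure_name>
[ @parameter data_type ]
[ WITH ENCRYPTION | RECOMPILE ]
AS
sql_statement
```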
where,
ENCRYPTION: encrypts the stored procedure definition.
RECOMPILE: indicates that the procedure is compiled at run-time.
sql_statement: specifies the Transact-SQL statements to be included in the body of the
procedure.
Code Snippet 40 modifies the definition of the stored procedure named uspGetTotals to add a new
column CostYTD to be retrieved from Sales.SalesTerritory.
Code Snippet 40:
ALTER PROCEDURE uspGetTotals
    @territory varchar(50) = NULL
AS
SELECT A.BusinessEntityID, B.SalesYTD, B.SalesLastYear, B.CostYTD
FROM Sales.SalesPerson A
JOIN Sales.SalesTerritory B
ON A.TerritoryID = B.TerritoryID
WHERE B.Name = @territory;
GO
Note - When you change the definition of a stored procedure, the dependent objects may fail when
executed. This happens if the dependent objects are not updated to reflect the changes made to the
stored procedure.
When the procedure NestedProcedure is executed, this procedure in turn invokes the
uspGetCustTerritory and uspGetSales stored procedures and passes the value France as the
input parameter to the uspGetSales stored procedure.
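The nesting described above can be sketched as follows; the bodies of uspGetCustTerritory and uspGetSales are assumed to exist already:

```sql
CREATE PROCEDURE NestedProcedure
AS
BEGIN
    -- Invoke the first procedure, then pass 'France'
    -- as the input parameter to the second one.
    EXECUTE uspGetCustTerritory;
    EXECUTE uspGetSales 'France';
END
GO
```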
Note - Although there can be a maximum of 32 levels of nesting, there is no limit on the number of
stored procedures that can be called from a given stored procedure.
Syntax:
@@NESTLEVEL
where,
@@NESTLEVEL: Is a function that returns an integer value specifying the level of nesting.
Code Snippet 43 creates and executes a procedure Nest_Procedure that executes the @@NESTLEVEL
function to determine the level of nesting in three different scenarios.
Code Snippet 43:
EXECUTE Nest_Procedure
Three outputs are displayed in figure 10.14 for the three different methods used to call the @@NESTLEVEL
function.
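A minimal sketch of this behavior (the procedure names are illustrative): @@NESTLEVEL returns 0 when called directly in a batch, 1 inside a procedure executed from the batch, and 2 inside a procedure called by another procedure.

```sql
CREATE PROCEDURE Inner_Proc
AS
    SELECT @@NESTLEVEL AS Level_Inside_Nested_Call;  -- 2 when called below
GO
CREATE PROCEDURE Outer_Proc
AS
BEGIN
    SELECT @@NESTLEVEL AS Level_Inside_Procedure;    -- 1
    EXECUTE Inner_Proc;
END
GO
SELECT @@NESTLEVEL AS Level_In_Batch;                -- 0
EXECUTE Outer_Proc;
```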
Note - The sp_executesql stored procedure can also be used to execute Transact-SQL statements.
There are over 230 different system views, and these are automatically present in every user-created
database. These views are grouped into several different schemas.
First introduced in SQL Server 2005, Dynamic Management Views (DMVs) and Dynamic Management
Functions (DMFs) are dynamic management objects that return server and database state information.
DMVs and DMFs are collectively referred to as dynamic management objects. They provide useful
insight into the inner workings of SQL Server and can be used for examining the state of a SQL Server
instance, troubleshooting, and performance tuning.
Both DMVs and DMFs return data in tabular format but the difference is that while a DMF normally
accepts at least one parameter, a DMV does not accept parameters. SQL Server 2012 provides nearly 200
dynamic management objects. In order to query DMVs, it is required to have VIEW SERVER STATE or
VIEW DATABASE STATE permission, depending on the scope of the DMV.
Here, is_user_process is a column in the view that determines if the session is a system session or
not. A value of 1 indicates that it is not a system session but rather a user session. The program_name
column determines the name of client program that initiated the session. The login_time column
establishes the time when the session began. The output of Code Snippet 48 is shown in figure 10.15.
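A query along these lines (a sketch of what Code Snippet 48 likely resembles) lists the active user sessions with the three columns just described:

```sql
SELECT session_id, login_time, program_name
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
```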
b. Only columns from a table can be selected for a view, rows cannot be.
(A) a, c, d, e (C) a, b, d
2. You are creating a view Supplier _ View with FirstName, LastName, and City columns from
the Supplier _ Details table. Which of the following code is violating the definition of a view?
CREATE VIEW Supplier_View
AS

CREATE VIEW Supplier_View
AS
3. Which of these statements about CHECK OPTION and SCHEMABINDING options are true?
b. The SCHEMABINDING option binds the view to the schema of the base table.
c. When a row is modified, the WITH CHECK OPTION makes sure that the data remains visible
through the view.
d. SCHEMABINDING option ensures the base table cannot be modified in a way that would
affect the view definition.
e. SCHEMABINDING option cannot be used with ALTER VIEW statements.
(A) a, b, c (C) b, c, d
(B) b, c (D) c, d, e
4. You want to create a view Account _ Details with the SCHEMABINDING option. Which of the
following code will achieve this objective?
(A)
CREATE VIEW Account_Details
AS
SELECT AccNo, City
FROM dbo.Customer_Details
WITH SCHEMABINDING

(B)
CREATE VIEW Account_Details
SCHEMABINDING
AS
SELECT AccNo, City
FROM Customer_Details

(C)
CREATE VIEW Account_Details
WITH SCHEMABINDING
AS
SELECT AccNo, City
FROM dbo.Customer_Details

(D)
CREATE VIEW Account_Details
WITH SCHEMABINDING
AS
SELECT AccNo, City
FROM Customer_Details
5. A table Item _ Details is created with ItemCode, ItemName, Price, and Quantity columns.
The ItemCode column is defined as the PRIMARY KEY, ItemName is defined with UNIQUE and
NOT NULL constraints, Price is defined with the NOT NULL constraint, and Quantity is defined
with the NOT NULL constraint and having a default value specified. Which of the following views
created using columns from the Item _ Details table can be used to insert records in the table?
(A)
CREATE VIEW ItemDetails
AS
SELECT ItemCode, ItemName, Price
FROM Item_Details

(B)
CREATE VIEW ItemDetails
AS
SELECT ItemCode, Price, Quantity
FROM Item_Details

(C)
CREATE VIEW ItemDetails
AS
SELECT ItemName, Price, Quantity
FROM Item_Details

(D)
CREATE VIEW ItemDetails
AS
SELECT ItemCode, ItemName, Quantity
FROM Item_Details
6. Which of these statements about stored procedures are true?
(A) a, d (C) a, c, d, e
(B) b, c, e (D) d
10.18.1 Answers
1. B
2. C
3. C
4. B
5. A
6. C
Summary
A view is a virtual table that is made up of selected columns from one or more tables and is
created using the CREATE VIEW command in SQL Server.
Users can manipulate the data in views, such as inserting into views, modifying the data in views,
and deleting from views.
A stored procedure is a group of Transact-SQL statements that act as a single block of code that
performs a specific task.
SQL Server supports various types of stored procedures, such as User-Defined Stored Procedures,
Extended Stored Procedures, and System Stored Procedures.
System stored procedures can be classified into different categories such as Catalog Stored
Procedures, Security Stored Procedures, and Cursor Stored Procedures.
Input and output parameters can be used with stored procedures to pass and receive data from
stored procedures.
The properties of an object such as a table or a view are stored in special system tables and are
referred to as metadata.
DMVs and DMFs are dynamic management objects that return server and database state
information. DMVs and DMFs are collectively referred to as dynamic management objects.
Try It Yourself
1. In SQL Server 2012 Management Studio, locate the extended stored procedures defined under the
master database and execute the following procedures in a query window:
sys.xp_readerrorlog
sys.xp_getnetname
sys.xp_fixeddrives
2. ShoezUnlimited is a trendy shoe store based in Miami. It stocks various kinds of footwear in its store
and sells them for profits. ShoezUnlimited maintains the details of all products in an SQL Server 2012
database. The management wants their developer to make use of stored procedures for commonly
performed tasks. Assuming that you are the developer, perform the following tasks:
Create the Shoes table having structure as shown in table 10.6 in the database, ShoezUnlimited.
3. Add at least 5 records to the table. Ensure that the value of the column QtyOnHand is more than
20 for each of the shoes.
4. Write statements to create a stored procedure named PriceIncrease that will increment the
unitprice of all shoes by 10 dollars.
5. Write statements to create a stored procedure QtyOnHand that will decrease the quantity on hand
of specified brands by 25. The brand name should be supplied as input.
11.1 Introduction
SQL Server 2012 makes use of indexes to find data when a query is processed. The SQL Server engine
uses an index in much the same way as a student uses a book index. For example, suppose you need to
find all references to INSERT statements in an SQL book. One approach is to scan each page of the book
from the beginning, marking each occurrence of the word INSERT until the end of the book is reached.
This approach is time-consuming and laborious. The second approach is to use the index at the back of
the book to find the page numbers for each occurrence of the INSERT statements. The second approach
produces the same results as the first, but saves a tremendous amount of time.
When no index is defined for searching, the process is similar to the first approach in the example; the
SQL engine needs to visit every row in a table. In database terminology, this behavior is called a table
scan, or simply a scan.
A table scan is not always a problem, and is sometimes unavoidable. However, as a table grows to
thousands or millions of rows and beyond, scans become slower and more expensive. In such cases,
indexes are strongly recommended.
Creating or removing indexes from a database schema does not affect an application's code. Indexes
operate behind the scenes with the support of the database engine. Moreover, creating an appropriate
index can significantly increase the performance of an application.
Page number
Page type
Note - A data page is the smallest unit of data storage. An allocation unit is a collection of data pages
grouped together based on the page type. This grouping is done for efficient management of data.
Log Files
Log files contain information about modifications carried out in the database. This information is
useful in recovery of data in contingencies such as sudden power failure or the need to shift the
database to a different server. There is at least one log file for each database. The recommended
file extension for log files is .ldf.
11.1.4 Indexes
In a table, records are stored in the order in which they are entered. Their storage in the database is
unsorted. When data is to be retrieved from such tables, the entire table needs to be scanned. This slows
down the query retrieval process. To speed up query retrieval, indexes need to be created.
When an index is created on a table, the index creates an order for the data rows or records in the table
as shown in figure 11.4. This assists in faster location and retrieval of data during searches.
where,
index_name: specifies the name of the index.
table_name: specifies the name of the table.
column_name: specifies the name of the column.
Code Snippet 1 creates an index, IX_Country on the Country column in the Customer_Details
table.
Code Snippet 1:
USE CUST_DB
CREATE INDEX IX_Country ON Customer_Details(Country);
GO
Indexes increase the speed of queries that join tables or perform sorting operations.
Indexes enforce the uniqueness of rows if the UNIQUE option is specified when the index is created.
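For example, a unique index prevents duplicate values in the indexed column (the Email column here is illustrative):

```sql
CREATE UNIQUE INDEX IX_Email ON Customer_Details(Email);
-- Any INSERT or UPDATE that would duplicate an existing
-- Email value now fails with an error.
```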
11.1.5 Scenario
In a telephone directory, where a large amount of data is stored and is frequently accessed, the storage
of data is done in an alphabetical order. If such data were unsorted, it would be nearly impossible to
search for a specific telephone number.
Similarly, in a database table having a large number of records that are frequently accessed, the data is to
be sorted for fast retrieval. When an index is created on the table, the index either physically or logically
sorts the records. Thus, searching for a specific record becomes faster and there is less strain on system
resources.
Here, you wish to make the changes for all employees in one department before moving on to employees
in another department. In this case, an index can be created as shown in figure 11.6 on the Department
column before accessing the records.
This index will create logical chunks of data rows based on the department. This again will limit the
amount of data actually scanned during query retrieval.
Hence, retrieval will be faster and there will be less strain on system resources.
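Such an index could be created as follows, assuming the employee table and its Department column as described:

```sql
CREATE NONCLUSTERED INDEX IX_Department
ON Employee_Details(Department);
```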
11.2.1 B-Tree
In SQL Server, all indexes are structured in the form of B-Trees. A B-Tree structure can be visualized as
an inverted tree with the root right at the top, splitting into branches and then, into leaves right at the
bottom as shown in figure 11.8.
Root Node - Contains an index page with pointers pointing to index pages at the first intermediate
level.
Intermediate Nodes - Contain index pages with pointers pointing either to index pages at the next
intermediate level or to index or data pages at the leaf level.
Leaf Nodes - Contain either data pages or index pages that point to data pages.
Note - If an allocation unit contains extents from more than one file, there will be multiple IAM pages
linked together in an IAM chain to map these extents.
Uniqueness of a value in a clustered index is maintained explicitly using the UNIQUE keyword or implicitly
using an internal unique identifier as shown in figure 11.12.
where,
CLUSTERED: Specifies that a clustered index is created.
Code Snippet 2 creates a clustered index, IX_CustID on the CustID column in Customer_Details
table.
Code Snippet 2:
USE CUST_DB
CREATE CLUSTERED INDEX IX_CustID ON Customer_Details(CustID)
GO
Note - Before you create a clustered index, you need to make sure the free space in your system is at
least 1.2 times the amount of data in the table.
A clustered index is automatically created on a table when a primary key is defined on the table. In a table
without a primary key column, a clustered index should ideally be defined on:
Note - Two or more tables can be logically joined through columns that are common to the tables. Data
can then be retrieved from these tables as if they were a single table.
When a clustered index is re-created or the DROP _ EXISTING option is used, SQL Server rebuilds
the existing nonclustered indexes.
Syntax:
where,
NONCLUSTERED: specifies that a nonclustered index is created.
Code Snippet 3 creates a nonclustered index IX_State on the State column in Customer_Details
table.
Code Snippet 3:
USE CUST_DB
CREATE NONCLUSTERED INDEX IX_State ON Customer_Details(State)
GO
For example, if there is a table with ten columns (C1 to C10), the data of all the ten columns from each
row gets stored together contiguously on the same page as shown in figure 11.15.
For example, the data of column C1 of all the rows gets stored together on one page and the data for
column C2 of all the rows gets stored on another page and so on as shown in figure 11.16.
Assume that a table named ResellerSalesPtnd has been created in AdventureWorks2012 database.
Code Snippet 4 demonstrates how to create a column store index on this table.
Code Snippet 4:
-- the index name is illustrative; it is not shown in the extract
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_ResellerSalesPtnd_ColumnStore
ON ResellerSalesPtnd
(
[ProductKey],
[OrderDateKey],
[DueDateKey],
[ShipDateKey],
[CustomerKey],
[EmployeeKey],
[PromotionKey],
[CurrencyKey],
[SalesTerritoryKey],
[SalesOrderNumber],
[SalesOrderLineNumber],
[RevisionNumber],
[OrderQuantity],
[UnitPrice],
[ExtendedAmount],
[UnitPriceDiscountPct],
[DiscountAmount],
[ProductStandardCost],
[TotalProductCost],
[SalesAmount],
[TaxAmt],
[Freight],
[CarrierTrackingNumber],
[CustomerPONumber],
[OrderDate],
[DueDate],
[ShipDate]
);
Note - COLUMNSTORE indexes work only in the Enterprise edition of SQL Server 2012.
The partition scheme or filegroup specified in the MOVE TO clause must already exist.
The table will be located in the same partition scheme or filegroup of the dropped clustered
index.
The following is the syntax to drop a clustered index.
Syntax:
where,
index_name: specifies the name of the index.
partition_scheme_name: specifies the name of the partition scheme.
filegroup_name: specifies the name of the filegroup to store the partitions.
default: specifies the default location to store the resulting table.
Code Snippet 5 drops the index IX_SuppID created on the SuppID column of the Supplier_Details
table.
Code Snippet 5:
DROP INDEX IX_SuppID ON Supplier_Details
WITH (MOVE TO 'default')
The data in the resulting Supplier_Details table is moved to the default location.
Code Snippet 6 drops the index IX_SuppID created on the SuppID column of the Supplier_Details
table.
Code Snippet 6:
DROP INDEX IX_SuppID ON Supplier_Details
WITH (MOVE TO FGCountry)
The data in the resulting Supplier_Details table is moved to the FGCountry filegroup.
Note - XML is a plain-text, Unicode-based language that provides mechanisms for describing document
structures using markup tags. For example, consider an organization having the details of the employees
stored in XML document. The information per employee is stored in the following form:
<Employees>
<Name>John</Name>
<Age>34</Age>
<Salary>500000</Salary>
</Employees>
Here, <Employees> is the root node and <Name>, <Age>, and <Salary> are the child nodes.
Primary XML Indexes - The process of carrying out queries within an XML column can sometimes
be slow. A primary XML index is created on each XML column to speed up these queries. It is a
special index that shreds the XML data to store information. The following is the syntax to create a
primary XML index.
Syntax:
USE AdventureWorks2012;
CREATE PRIMARY XML INDEX PXML_ProductModel_CatalogDescription
ON Production.ProductModel (CatalogDescription);
GO
Secondary XML Indexes - Secondary XML indexes are specialized XML indexes that help with
specific XML queries. The features of secondary XML indexes are as follows:
Searching for values anywhere in the XML document.
Retrieving particular object properties from within an XML document.
Secondary XML indexes can only be created on columns that already have a primary XML index.
Code Snippet 8 demonstrates how to create a secondary XML index on
the CatalogDescription column in the Production.ProductModel table.
Code Snippet 8:
USE AdventureWorks2012;
CREATE XML INDEX IXML_ProductModel_CatalogDescription_Path
ON Production.ProductModel (CatalogDescription)
USING XML INDEX PXML_ProductModel_CatalogDescription FOR PATH ;
GO
Selective XML Indexes (SXI) - This is a new type of XML index introduced in SQL Server 2012. This
index improves querying performance over data stored as XML in SQL Server, allows faster indexing
of large XML data workloads, and improves scalability by reducing the storage costs of the index.
The following is the syntax to create a selective XML index.
Syntax:
USE CUST_DB
CREATE SELECTIVE XML INDEX SXI_index
ON BooksBilling(BookDetails)
FOR
(
pathTitle = '/book/title/text()' AS XQUERY 'xs:string',
pathAuthors = '/book/authors' AS XQUERY 'node()',
pathId = '/book/id' AS SQL NVARCHAR(100)
)
GO
Note - A SELECTIVE XML INDEX will work only in the Enterprise edition of SQL Server 2012.
where,
xml_index_name: specifies the name of the XML index.
Code Snippet 10 rebuilds the primary XML index PXML_DocumentStore created on the XMLDocument
table.
Code Snippet 10:
ALTER INDEX PXML_DocumentStore ON XMLDocument REBUILD
GO
The following is the syntax to remove an XML index using the DROP INDEX statement.
Syntax:
Code Snippet 11 removes the primary XML index PXML_DocumentStore created on the XMLDocument
table.
Code Snippet 11:
DROP INDEX PXML_DocumentStore ON XMLDocument
GO
IN_ROW_DATA
It is used to manage data or index rows that contain all types of data except large object (LOB)
data.
LOB_DATA
It is used to manage large object data, which is stored in one or more of the following data types:
varbinary(max), varchar(max), and xml.
ROW_OVERFLOW_DATA
It is used to manage data of variable length, which is stored in varchar, nvarchar, varbinary,
or sql_variant columns.
Figure 11.18 shows the allocation units.
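The allocation units behind a table can be inspected through catalog views. A simplified sketch (this join covers in-row and row-overflow units; LOB units join through partition_id instead):

```sql
SELECT au.type_desc, au.total_pages
FROM sys.allocation_units AS au
JOIN sys.partitions AS p
    ON au.container_id = p.hobt_id
WHERE p.object_id = OBJECT_ID('Customer_Details');
```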
Note - A heap can have only one allocation unit of each type in a particular partition of a table.
11.4 Partitioning
Partitioning divides data into subsets. This makes large tables or indexes more manageable. Partitioning
enables you to access data quickly and efficiently. Maintenance operations on subsets of data are
performed more efficiently because they target only the required subset of data instead of the entire
table.
By default, a table or an index has only one partition that contains all the data or index pages. When a
table or index uses multiple partitions, the data is partitioned horizontally into groups of rows as shown
in figure 11.19.
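A minimal sketch of horizontal partitioning (all names and boundary values are illustrative): a partition function defines the boundaries, a partition scheme maps partitions to filegroups, and the table is created on the scheme.

```sql
-- Rows are split into partitions by OrderYear:
-- <=2010, 2011, 2012, >2012.
CREATE PARTITION FUNCTION pfOrderYear (int)
AS RANGE LEFT FOR VALUES (2010, 2011, 2012);

CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear ALL TO ([PRIMARY]);

CREATE TABLE Orders_Partitioned
(
    OrderID int NOT NULL,
    OrderYear int NOT NULL
) ON psOrderYear (OrderYear);
```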
descriptions:
USE CUST_DB
CREATE TABLE Employee_Details
(
EmpID int NOT NULL,
FirstName varchar(20) NOT NULL,
-- the remaining columns are inferred from later snippets in this session
LastName varchar(20) NOT NULL,
City varchar(20) NOT NULL
)
GO
Assume that multiple records are inserted in the table, Employee_Details. The SELECT statement is
used to search for records having the FirstName as John.
Since there is no index associated with the FirstName column, SQL Server will perform a full table
scan.
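The search described above can be written as:

```sql
USE CUST_DB
SELECT EmpID, FirstName, LastName, City
FROM Employee_Details
WHERE FirstName = 'John';
GO
```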
Code Snippet 13 demonstrates how to create a nonclustered index IX_EmployeeCity on the City
column of the Employee_Details table.
Code Snippet 13:
USE CUST_DB
CREATE NONCLUSTERED INDEX IX_EmployeeCity ON Employee_Details(City);
GO
Assume that multiple records are inserted in the table, Employee_Details. The SELECT statement is
used to search for records of employees from city, Boston as shown in Code Snippet 14.
Code Snippet 14:
USE CUST_DB
SELECT EmpID,FirstName,LastName,City FROM Employee_Details WHERE City='Boston'
GO
Since there is a nonclustered index associated with City column, SQL Server will use the
IX_EmployeeCity index to extract the records as shown in figure 11.23.
Note - In a table having a nonclustered index, there is no specific storage order of the data. Data is
retrieved directly from its physical location.
For finding rows using clustered indexes, a SELECT statement is used with the clustered index column
specified in the WHERE clause.
Figure 11.24 shows the process of finding rows with clustered index.
USE CUST_DB
CREATE UNIQUE CLUSTERED INDEX IX_EmployeeID ON Employee_Details(EmpID);
GO
Assume that multiple records are inserted in the table, Employee_Details. The SELECT statement is
used to search for records of employees having EmpID between 102 and 105 as shown in Code Snippet
16.
Code Snippet 16:
USE CUST_DB
SELECT EmpID,FirstName,LastName,City FROM Employee_Details WHERE EmpID >= 102
AND EmpID <= 105;
GO
Since there is a clustered index associated with the EmpID column, SQL Server will use the IX_EmployeeID
index to extract the records as shown in figure 11.25.
where,
column_name: specifies the name of the column on which the index is to be created.
UNIQUE: specifies that no duplicate values are allowed in the column.
Code Snippet 17 creates a unique index on the CustID column in the Customer_Details table.
Code Snippet 17:
-- the index name is illustrative; it is not shown in the extract
CREATE UNIQUE INDEX IX_UniqueCustID ON Customer_Details(CustID)
GO
where,
table_name: Specifies the name of the table.
column_name AS computed_column_expression: Specifies the name of the computed
column as well as the expression that defines the values in the column.
Code Snippet 18 creates a computed column Area whose values are calculated from the values entered
in the Length and Breadth fields.
Code Snippet 18:
USE SampleDB
CREATE TABLE Calc_Area(Length int, Breadth int, Area AS Length*Breadth)
GO
Note - A computed column cannot be used as a part of any PRIMARY KEY, UNIQUE KEY, FOREIGN KEY,
or CHECK constraint definition.
where,
computed_column_name specifies the name of the computed column.
USE SampleDB
CREATE INDEX IX_Area ON Calc_Area(Area);
GO
11.6 Cursors
A cursor is a database object used to retrieve data from a resultset one row at a time. Cursors are used
instead of Transact-SQL commands that operate on all the rows in the resultset at once. They are useful
when records in a database table need to be updated one row at a time.
Types of Cursors
Static Cursors - These cursors populate the resultset when the cursor is created and cache the
query result. This is the slowest of all the cursor types. A static cursor can scroll in both backward
and forward directions, but the data it displays cannot be updated or deleted through it.
Dynamic Cursors - These cursors reflect insert, update, and delete operations made to the
underlying rows while the cursor is open. This is the most sensitive of the cursor types and is
scrollable.
Forward Only Cursors - These cursors also support updating and deleting data. This is the
fastest cursor type, although it does not support backward scrolling. The three types of forward-only
cursors are FORWARD_ONLY KEYSET, FORWARD_ONLY STATIC, and FAST_FORWARD.
Keyset Driven Cursors - These cursors create a set of unique identifiers, known as keys, in a keyset
that is used to control the cursor. This is also a sensitive cursor that can update and delete data. The
keyset depends on all the rows that qualify for the SELECT statement at the time the cursor is
opened.
The following is the syntax to declare a cursor.
Syntax:
where,
cursor_name: is the name of the cursor being defined. cursor_name must comply with the rules
for identifiers.
LOCAL: specifies that the cursor can be used locally to the batch, stored procedure, or trigger in
which the cursor was created.
GLOBAL: specifies that the cursor can be used globally to the connection.
FORWARD_ONLY: specifies that the cursor can only be scrolled from the first to the last row.
STATIC: defines a cursor that makes a temporary copy of the data to be used by the cursor.
KEYSET: specifies that the membership and order of rows in the cursor are fixed when the cursor
is opened.
DYNAMIC: defines a cursor that reflects all data changes made to the rows in its resultset as you
scroll around the cursor.
FAST_FORWARD: specifies a FORWARD_ONLY, READ_ONLY cursor with performance optimizations
enabled.
READ_ONLY: prevents updates made through this cursor.
SCROLL_LOCKS: specifies that positioned updates or deletes made through the cursor are
guaranteed to succeed.
Code Snippet 20 creates an Employee table in SampleDB database.
Code Snippet 20:
USE SampleDB
CREATE TABLE Employee
(
EmpID int PRIMARY KEY,
EmpName varchar (50) NOT NULL,
Salary int NOT NULL,
Address varchar (200) NOT NULL
)
GO
USE SampleDB
SET NOCOUNT ON
DECLARE @Id int
DECLARE @name varchar(50)
DECLARE @salary int
DECLARE cur_emp CURSOR
STATIC FOR
SELECT EmpID,EmpName,Salary from Employee
/*A Cursor is opened and populated by executing the SQL statement defined by the
cursor.*/
OPEN cur_emp
IF @@CURSOR_ROWS > 0
BEGIN
/*Rows are fetched from the cursor one by one or in a block to do data manipulation*/
FETCH NEXT FROM cur_emp INTO @Id,@name,@salary
WHILE @@Fetch_status = 0
BEGIN
PRINT 'ID : '+ convert(varchar(20),@Id)+', Name : '+@name+', Salary : '+convert(varchar(20),@salary)
FETCH NEXT FROM cur_emp INTO @Id,@name,@salary
END
END
CLOSE cur_emp
DEALLOCATE cur_emp
In the code, the details are retrieved one row at a time. This approach helps in processing rows from
large tables sequentially.
First, a cursor is declared by defining the SQL statement that returns a resultset. Then, it is opened and
populated by executing the SQL statement defined by the cursor. Rows are then fetched from the cursor
one by one or in a block to perform data manipulation.
The cursor is then closed and finally, the cursor definition is deleted and all the system resources associated
with the cursor are released.
3. Which of the following code moves the data in the resulting Supplier _ Details table to the
default location?
(A)
DROP INDEX IX_SuppID ON Supplier_Details

(B)
MOVE INDEX IX_SuppID ON Supplier_Details
WITH ('default')

(C)
DROP INDEX IX_SuppID ON Supplier_Details
WITH (MOVE TO 'default')

(D)
DELETE INDEX IX_SuppID ON Supplier_Details
WITH ('default')
4. Which of the following code is used to create a clustered INDEX on CustID in
Customer _ Details table?
(A)
USE CUST_DB
CREATE CLUSTERED INDEX Customer_Details(CustID)
GO

(C)
USE CUST_DB
CREATE INDEX Customer_Details ON IX_CustID
GO

(B)
USE CUST_DB

(D)
USE CUST_DB
5. A clustered index can be created on a table using a column without ________________ values.
11.7.1 Answers
1. D
2. A
3. C
4. D
5. B
Summary
Indexes increase the speed of the querying process by providing quick access to rows or columns
in a data table.
SQL Server 2012 stores data in storage units known as data pages.
All input and output operations in a database are performed at the page level.
SQL Server performs a table scan to find rows when an index is not created on a table.
A clustered index causes records to be physically stored in a sorted or sequential order.
A nonclustered index is defined on a table that has data either in a clustered structure or a heap.
XML indexes can speed up queries on tables that have XML data.
Column Store Index enhances performance of data warehouse queries extensively.
Try It Yourself
1. Houston State Library is one of the renowned libraries in Houston, Texas. The library has a stock of
around 1,000,000 books of different genres. The library issues books to the students of the college
nearby. With the number of students coming to the library growing exponentially, Houston State
Library has decided to automate the entire process of issuing books to the students. The library has
increased the quantity of each book by 10 copies, depending upon the demand made by the
students.
Create a table named BooksMaster to store the details of the books in the library as shown
in table 11.3.
Create a clustered index named IX _ Title on the Title column in the BooksMaster
table.
Create a table BooksMaster1 having field names BookCode, Title, and Book Details.
Specify the data type for BookDetails as xml. Create an XML document with details of
ISBN, Author, Price, Publisher, and NumPages.
The library wants to retrieve the publisher name of the company which prints a specific
author's book. Create a primary XML index PXML _ Books on the BookCode column of the
BooksMaster table.
This session explains triggers and the different types of triggers. The session
also describes the procedure to create and alter DML triggers, nested triggers,
update functions, handling multiple rows in a session, and the
performance implications of triggers.
Explain triggers
12.1 Introduction
A trigger is a stored procedure that is executed when an attempt is made to modify data in a
table protected by the trigger. Unlike standard system stored procedures, triggers cannot be executed
directly, nor do they pass or receive parameters. Triggers are defined on specific tables and these tables
are referred to as trigger tables.
If a trigger is defined on the INSERT, UPDATE, or DELETE action on a table, it fires automatically when
these actions are attempted. This automatic execution of the trigger cannot be circumvented. In SQL
Server 2012, triggers are created using the CREATE TRIGGER statement. Figure 12.1 displays an example
of triggers.
DML Triggers
DML triggers execute when data is inserted, modified, or deleted in a table or a view using the
INSERT, UPDATE, or DELETE statements.
DDL Triggers
DDL triggers execute when a table or a view is created, modified, or deleted using the CREATE,
ALTER, or DROP statements.
Logon Triggers
Logon triggers execute stored procedures when a session is established with a LOGON event.
These triggers are invoked after the login authentication is complete and before the actual session
is established. Logon triggers control server sessions by restricting invalid logins or limiting the
number of sessions.
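A sketch of a logon trigger that limits concurrent sessions (the login name and the session limit are illustrative):

```sql
CREATE TRIGGER trgLimitSessions
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = 'report_user' AND
       (SELECT COUNT(*) FROM sys.dm_exec_sessions
        WHERE is_user_process = 1
          AND original_login_name = 'report_user') > 3
        ROLLBACK;  -- refuse the new session
END;
```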
INSERT trigger
UPDATE trigger
DELETE trigger
Inserted Table
The Inserted table contains copies of records that are modified with the INSERT and UPDATE
operations on the trigger table. Trigger table is the table on which the trigger is defined. The INSERT
and UPDATE operations insert new records into the Inserted and Trigger tables.
Deleted Table
The Deleted table contains copies of records that are modified with the DELETE and UPDATE
operations on the trigger table. Trigger table is the table on which the trigger is defined. These
operations delete the records from the trigger table and insert them in the Deleted table.
Note - The Inserted and Deleted tables do not physically remain present in the database. They are
created and dropped whenever any triggering events occur.
An INSERT trigger is executed when a new record is inserted in a table. The INSERT trigger ensures that
the value being entered conforms to the constraints defined on that table.
When a user inserts a record in the table, the INSERT trigger saves a copy of that record in the
Inserted table. It then checks whether the new value in the Inserted table conforms to the specified
constraints.
If the record is valid, the INSERT trigger inserts the row in the trigger table; otherwise, it displays an
error message.
An INSERT trigger is created using the INSERT keyword in the CREATE TRIGGER and ALTER TRIGGER
statements.
The following is the syntax for creating an INSERT trigger.
Syntax:
where,
schema_name: specifies the name of the schema to which the table/trigger belongs.
trigger_name: specifies the name of the trigger.
table_name: specifies the table on which the DML trigger is created.
WITH ENCRYPTION: encrypts the text of the CREATE TRIGGER statement.
FOR: specifies that the DML trigger executes after the modification operations are complete.
INSERT: specifies that this DML trigger will be invoked by insert operations.
UPDATE: Returns a Boolean value that indicates whether an INSERT or UPDATE attempt was made
on a specified column.
column_name: Is the name of the column to test for the UPDATE action.
AND: Combines two Boolean expressions and returns TRUE when both expressions are TRUE.
OR: Combines two Boolean expressions and returns TRUE if at least one expression is TRUE.
sql_statement: specifies the SQL statements that are executed in the DML trigger.
Concepts
Code Snippet 1 creates an INSERT trigger on a table named Account_Transactions. When a new
record is inserted, and if the withdrawal amount exceeds 80000, the insert trigger displays an error
message and rolls back the transaction using the ROLLBACK TRANSACTION statement.
A transaction is a set of one or more statements that is treated as a single unit of work. A transaction can
succeed or fail as a whole, therefore all the statements within it succeed or fail together. The ROLLBACK
TRANSACTION statement cancels or rolls back a transaction.
Code Snippet 1:
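The snippet body does not appear here; the following is a sketch based on the description above. The trigger name and the Withdrawal column are assumptions, as the original identifiers are not shown.

```sql
CREATE TRIGGER CheckWithdrawal              -- trigger name assumed
ON Account_Transactions
FOR INSERT
AS
BEGIN
    -- The inserted table holds the new row(s); reject the insert
    -- if any withdrawal amount exceeds 80000
    IF EXISTS (SELECT * FROM inserted WHERE Withdrawal > 80000)
    BEGIN
        PRINT 'Withdrawal amount cannot exceed 80000.';
        ROLLBACK TRANSACTION;
    END
END;
```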
Code Snippet 2 inserts a record where the Withdrawal amount exceeds 80000. This causes the INSERT
trigger to display an error message and roll back the transaction.
Code Snippet 2:
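A sketch of such an INSERT statement follows; the column names and values are assumptions for illustration.

```sql
-- Column names and values are assumed; only the Withdrawal value
-- exceeding 80000 is taken from the description
INSERT INTO Account_Transactions (TransactionID, EmployeeID, Deposit, Withdrawal)
VALUES ('T01', 'E01', 0, 90000);
```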
An UPDATE trigger copies the original record into the Deleted table and the new record into the Inserted
table when a record is updated. It then evaluates the new record to determine whether the values conform to
the constraints specified in the trigger table.
If the new values are valid, the record from the Inserted table is copied to the trigger table. However,
if the new values are invalid, an error message is displayed. Also, the original record is copied from the
Deleted table back into the trigger table.
An UPDATE trigger is created using the UPDATE keyword in the CREATE TRIGGER and ALTER TRIGGER
statements. The following is the syntax for creating an UPDATE trigger at the table-level.
Syntax:
CREATE TRIGGER [ schema_name. ] trigger_name
ON [ schema_name. ] table_name
[ WITH ENCRYPTION ]
FOR UPDATE
AS
    sql_statement [ ...n ]
where,
FOR UPDATE: specifies that this DML trigger will be invoked after the update operations.
Code Snippet 3 creates an UPDATE trigger at the table level on the EmployeeDetails table. When a
record is modified, the UPDATE trigger is activated. It checks whether the date of birth is greater than
today's date. It displays an error message for invalid values and rolls back the modification operation
using the ROLLBACK TRANSACTION statement.
Code Snippet 3:
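A sketch of such a trigger follows; the trigger name and the BirthDate column name are assumptions based on the description and on Code Snippet 4.

```sql
CREATE TRIGGER CheckBirthDate               -- trigger name assumed
ON EmployeeDetails
FOR UPDATE
AS
BEGIN
    -- Reject the update if any new BirthDate lies in the future
    IF EXISTS (SELECT * FROM inserted WHERE BirthDate > GETDATE())
    BEGIN
        PRINT 'Date of birth cannot be later than today''s date.';
        ROLLBACK TRANSACTION;
    END
END;
```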
Code Snippet 4 updates a record where an invalid date of birth is specified. This causes the update trigger
to display the error message and roll back the transaction.
Code Snippet 4:
UPDATE EmployeeDetails
SET BirthDate='2015/06/02'
WHERE EmployeeID='E06'
Code Snippet 6 updates a record where the value in the EmployeeID column is being modified.
This causes the update trigger to fire. The update trigger displays an error message and rolls back
the transaction.
Code Snippet 6:
UPDATE EmployeeDetails
SET EmployeeID='E12'
WHERE EmployeeID='E04'
The record is deleted from the trigger table and inserted in the Deleted table.
If there is a constraint on the record to prevent deletion, the DELETE trigger displays an error
message.
The deleted record stored in the Deleted table is copied back to the trigger table.
A DELETE trigger is created using the DELETE keyword in the CREATE TRIGGER statement. The following
is the syntax for creating a DELETE trigger.
Syntax:
CREATE TRIGGER [ schema_name. ] trigger_name
ON [ schema_name. ] table_name
[ WITH ENCRYPTION ]
FOR DELETE
AS
    sql_statement [ ...n ]
where,
DELETE: specifies that this DML trigger will be invoked by delete operations.
Code Snippet 8 attempts to delete records from the Account_Transactions table where Deposit is
50000.
Code Snippet 8:
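The statement described above can be sketched as follows.

```sql
-- Attempt to delete all transactions with a deposit of 50000
DELETE FROM Account_Transactions
WHERE Deposit = 50000;
```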
The following is the syntax for creating a DML trigger that responds to more than one action.
Syntax:
CREATE TRIGGER [ schema_name. ] trigger_name
ON [ schema_name. ] table_name
[ WITH ENCRYPTION ]
{ FOR | AFTER } { [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }
AS
    sql_statement [ ...n ]
where,
FOR | AFTER: specifies that the DML trigger executes after the modification operations are
complete.
{ [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }: specifies the operations that invoke the DML
trigger.
Code Snippet 9 creates an AFTER DELETE trigger on the EmployeeDetails table. If any employee
record is deleted from the table, the AFTER DELETE trigger activates. The trigger displays the number of
employee records deleted from the table.
Code Snippet 9:
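A sketch of such a trigger follows; the trigger name is an assumption, and the count of deleted rows is read from the Deleted table.

```sql
CREATE TRIGGER CountDeletion                -- trigger name assumed
ON EmployeeDetails
AFTER DELETE
AS
BEGIN
    -- The deleted table holds one copy of each removed row
    DECLARE @rows int = (SELECT COUNT(*) FROM deleted);
    PRINT CAST(@rows AS varchar(10)) + ' employee record(s) deleted.';
END;
```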
Note - Users cannot create INSTEAD OF DELETE or INSTEAD OF UPDATE triggers on tables that have
foreign keys defined with the ON DELETE CASCADE or ON UPDATE CASCADE option.
The following is the syntax for creating an INSTEAD OF trigger.
Syntax:
CREATE TRIGGER [ schema_name. ] trigger_name
ON { table_name | view_name }
[ WITH ENCRYPTION ]
INSTEAD OF { [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }
AS
    sql_statement [ ...n ]
where,
view_name: specifies the view on which the DML trigger is created.
INSTEAD OF: specifies that the DML trigger executes in place of the modification operations. These
triggers are not defined on updatable views using WITH CHECK OPTION.
Code Snippet 11 creates an INSTEAD OF DELETE trigger on the Account_Transactions table.
If any record in the Account_Transactions table is deleted, the corresponding records in the
EmployeeDetails table will be removed.
Code Snippet 11:
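A sketch of such a trigger follows; the trigger name and the EmployeeID join column are assumptions, as the original identifiers are not shown.

```sql
CREATE TRIGGER RemoveEmployees              -- trigger name assumed
ON Account_Transactions
INSTEAD OF DELETE
AS
BEGIN
    -- Instead of deleting the Account_Transactions rows, remove the
    -- corresponding EmployeeDetails rows (join column assumed)
    DELETE FROM EmployeeDetails
    WHERE EmployeeID IN (SELECT EmployeeID FROM deleted);
END;
```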
Code Snippet 16 deletes a row from the view Employee_Details_View where EmpID='2'.
Code Snippet 16:
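The statement described above can be sketched as follows.

```sql
-- Deleting through the view fires the INSTEAD OF trigger defined on it
DELETE FROM Employee_Details_View
WHERE EmpID = '2';
```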
Note - A single AFTER trigger can be invoked by more than one triggering action.
The following is the syntax for the sp_settriggerorder stored procedure, which specifies which
AFTER trigger fires first or last.
Syntax:
sp_settriggerorder [ @triggername = ] '[ triggerschema. ] triggername'
    , [ @order = ] 'value'
    , [ @stmttype = ] 'statement_type'
where,
[ triggerschema.] triggername: is the name of the DML or DDL trigger and the schema
to which it belongs and whose order needs to be specified.
value: specifies the execution order of the trigger as FIRST, LAST, or NONE. If FIRST is
specified, then the trigger is fired first. If LAST is specified, the trigger is fired last. If NONE is
specified, the order of the firing of the trigger is undefined.
statement_type: specifies the type of SQL statement (INSERT, UPDATE, or DELETE) that
invokes the DML trigger.
Code Snippet 17 first executes the Employee_Deletion trigger defined on the table when the
DELETE operation is performed on the Withdrawal column of the table.
Code Snippet 17:
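A sketch of such a call follows, using the trigger name given in the description.

```sql
-- Make Employee_Deletion the first AFTER trigger to fire for DELETE
EXEC sp_settriggerorder
    @triggername = 'Employee_Deletion',
    @order = 'FIRST',
    @stmttype = 'DELETE';
```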
The sp_helptext stored procedure displays the definition of a trigger. The following is the syntax
for sp_helptext.
Syntax:
sp_helptext '<DML_trigger_name>'
where,
DML_trigger_name: specifies the name of the DML trigger whose definitions are to be
displayed.
Code Snippet 18 displays the definitions of the trigger, Employee_Deletion, created on the table.
Code Snippet 18:
sp_helptext 'Employee_Deletion'
Trigger parameters are defined at the time of creating a trigger. These parameters include the type of
triggering action that invokes the trigger and the SQL statements that are executed.
If the user wants to modify any of these parameters for a DML trigger, a user can do so in either of two
ways: drop and re-create the trigger with the new parameters, or modify it in place using the ALTER
TRIGGER statement. The following is the syntax for the ALTER TRIGGER statement.
Syntax:
ALTER TRIGGER [ schema_name. ] trigger_name
ON { table_name | view_name }
[ WITH ENCRYPTION ]
{ FOR | AFTER | INSTEAD OF } { [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }
AS
    sql_statement [ ...n ]
where,
WITH ENCRYPTION: specifies that the DML trigger definitions are not displayed.
FOR | AFTER: specifies that the DML trigger executes after the modification operations are
complete.
INSTEAD OF: specifies that the DML trigger executes in place of the modification operations.
Code Snippet 19 alters the CheckEmployeeID trigger created on the EmployeeDetails table using
the WITH ENCRYPTION option.
Code Snippet 19:
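A sketch of such an ALTER TRIGGER statement follows; the trigger body is an assumption based on the earlier description of CheckEmployeeID, which prevents modification of the EmployeeID column.

```sql
ALTER TRIGGER CheckEmployeeID
ON EmployeeDetails
WITH ENCRYPTION                             -- hides the trigger definition
FOR UPDATE
AS
BEGIN
    -- Body assumed: reject updates that modify the EmployeeID column
    IF UPDATE(EmployeeID)
    BEGIN
        PRINT 'The EmployeeID column cannot be modified.';
        ROLLBACK TRANSACTION;
    END
END;
```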
Now, if the user tries to view the definition of the CheckEmployeeID trigger using the sp_helptext
stored procedure, the following error message is displayed:
The text for object CheckEmployeeID is encrypted.
Note - When the DML trigger is deleted from the table, the information about the trigger is also removed
from the catalog views.
The following is the syntax for dropping DML triggers.
Syntax:
DROP TRIGGER DML_trigger_name [ ,...n ]
where,
DML_trigger_name: specifies the name of the DML trigger to be dropped.
[ ,...n ]: specifies that multiple DML triggers can be dropped.
Code Snippet 20 drops the CheckEmployeeID trigger created on the EmployeeDetails table.
Code Snippet 20:
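The statement described above can be sketched as follows.

```sql
DROP TRIGGER CheckEmployeeID;
```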
Data Definition Language (DDL) triggers execute stored procedures when DDL events such as CREATE,
ALTER, and DROP statements occur in the database or the server. DDL triggers can operate only on
completion of the DDL events.
DDL triggers can be used to prevent modifications in the database schema. A schema is a collection of
objects such as tables, views, and so forth in a database.
DDL triggers can invoke an event or display a message based on the modifications attempted on the
schema. DDL triggers are defined either at the database level or at the server level. Figure 12.10 displays
the types of DDL triggers.
The following is the syntax for creating a DDL trigger.
Syntax:
CREATE TRIGGER trigger_name
ON { ALL SERVER | DATABASE }
[ WITH ENCRYPTION ]
{ FOR | AFTER } { event_type [ ,...n ] }
AS
    sql_statement [ ...n ]
where,
ALL SERVER: specifies that the DDL trigger executes when DDL events occur in the current server.
DATABASE: specifies that the DDL trigger executes when DDL events occur in the current
database.
event_type: specifies the name of the DDL event that invokes the DDL trigger.
Code Snippet 21 creates a DDL trigger for dropping and altering a table.
Code Snippet 21:
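A sketch of such a database-level DDL trigger follows; the trigger name and message text are assumptions.

```sql
CREATE TRIGGER Secure_Tables                -- trigger name assumed
ON DATABASE
FOR DROP_TABLE, ALTER_TABLE
AS
BEGIN
    PRINT 'Tables in this database cannot be dropped or altered.';
    ROLLBACK;                               -- undo the attempted DDL operation
END;
```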
In this code, the DDL trigger is created for DROP TABLE and ALTER TABLE statements.
Thus, the Employee_Deletion and the Deletion_Confirmation triggers are seen to be nested.
12.11 UPDATE()
The UPDATE() function returns a Boolean value that specifies whether an UPDATE or INSERT action was
performed on a specific column of a table or view.
The UPDATE() function can be used anywhere inside the body of a Transact-SQL UPDATE or INSERT trigger
to test whether the trigger should execute certain actions.
The following is the syntax for UPDATE ().
Syntax:
UPDATE ( column )
where,
column: is the name of the column to test for either an INSERT or UPDATE action.
Code Snippet 24 creates a trigger Accounting on the Account_Transactions table that tests whether
the TransactionID or EmployeeID columns are being updated.
Code Snippet 24:
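A sketch of such a trigger follows; the trigger action taken when the columns are updated is an assumption, as only the column tests are described.

```sql
CREATE TRIGGER Accounting
ON Account_Transactions
AFTER UPDATE
AS
-- UPDATE() returns TRUE if the named column appears in the UPDATE statement
IF UPDATE(TransactionID) OR UPDATE(EmployeeID)
BEGIN
    PRINT 'TransactionID and EmployeeID cannot be updated.';  -- message assumed
    ROLLBACK TRANSACTION;
END;
```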
When the functionality of a DML trigger involves automatically recalculating summary values of one
table and storing the result in another table, then multirow considerations are important.
USE AdventureWorks2012;
GO
CREATE TRIGGER PODetails
ON Purchasing.PurchaseOrderDetail
AFTER INSERT AS
UPDATE Purchasing.PurchaseOrderHeader
SET SubTotal = SubTotal + LineTotal
FROM inserted
WHERE PurchaseOrderHeader.PurchaseOrderID = inserted.PurchaseOrderID;
In this code, the subtotal is calculated and stored for a single-row insert operation.
Code Snippet 26 stores a running total for a multi-row or single-row insert.
Code Snippet 26:
USE AdventureWorks2012;
GO
CREATE TRIGGER PODetailsMultiple
ON Purchasing.PurchaseOrderDetail
AFTER INSERT AS
UPDATE Purchasing.PurchaseOrderHeader
SET SubTotal = SubTotal +
(SELECT SUM(LineTotal)
FROM inserted
WHERE PurchaseOrderHeader.PurchaseOrderID
= inserted.PurchaseOrderID)
WHERE PurchaseOrderHeader.PurchaseOrderID IN
(SELECT PurchaseOrderID FROM inserted);
In this code, the subtotal is calculated and stored for a multi-row or single-row insert operation.
a. Triggers retrieve information from tables of the same as well as other databases.
b. DDL triggers operate only after a table or a view is modified.
c. DML triggers execute on INSERT, UPDATE, and DELETE statements.
d. DDL triggers execute either while modifying the data or after the data is modified.
(A) a, b, c (C) a, b, d
(B) b, c, d (D) a, c, d
2. Match the types of DML triggers in SQL Server 2012 against their corresponding descriptions.
(A) a-1, b-4, c-2, d-3, e-5 (C) a-2, b-4, c-3, d-5, e-1
(B) a-2, b-4, c-5, d-1, e-3 (D) a-1, b-2, c-3, d-4, e-5
3. Which of these statements about DML triggers in SQL Server 2012 are true?
(A) a, b (C) a, d
(B) c, d (D) b, d
4. Which of these statements about working with DML triggers in SQL Server 2012 are true?
a. Each triggering action cannot have multiple AFTER triggers.
b. Two triggering actions on a table can have the same first and last triggers.
c. DML trigger definition can be modified by dropping and recreating the trigger.
d. DML trigger definition can be viewed using the sp_helptext stored procedure.
(A) a, c (C) b, d
(B) b, c (D) c, d
5. ________________ triggers can be used to perform functions such as storing a backup of the
rows that are affected by the previous actions.
12.14.1 Answers
1. A
2. B
3. C
4. D
5. A
Summary
A trigger is a stored procedure that is executed when an attempt is made to modify data in a table
that is protected by the trigger.
Logon triggers execute stored procedures when a session is established with a LOGON event.
DML triggers are executed when DML events occur in tables or views.
The UPDATE trigger copies the original record into the Deleted table and the new record into the
Inserted table when a record is updated.
The DELETE trigger can be created to restrict a user from deleting a particular record in a table.
Try It Yourself
1. Galaxy Airlines is a newly launched airline service that operates flights to and from various cities all
over Europe. The company maintains the data pertaining to day-to-day transactions regarding flight
services in SQL Server 2012 databases. To enable efficient and faster performance, Galaxy Airlines
has decided to incorporate the use of triggers in its database applications. The detailed list of operations
to be performed is as follows:
a. Create the following tables in GalaxyAirlines database. Table 12.2 lists the Flight table.
b. Write statements to create a trigger trgCheckSeats that will activate whenever a new row
is being inserted into the Flight_Details table. The maximum limit of seats that a flight
can contain is 150. The trigger should check for the value of seats being inserted. If it is more
than 150, the INSERT operation is not allowed to succeed.
Try It Yourself
c. Insert at least five records in each table.
d. Write statements to create a trigger UpdateValid that will activate whenever a row is being
updated in the Flight_Details table. The trigger should determine if the Seats column is
present in the list of columns being updated. If yes, the UPDATE operation should not succeed
because the Seats column is defined as a constant and cannot be changed.
e. Write statements to create a DDL trigger ProhibitDelete that will activate whenever a
user is trying to delete a table from the Galaxy Airlines database. The trigger must not allow
a user to perform deletes and must display a message "You are not allowed to delete tables
in this database".
13.1 Introduction
Transact-SQL is a procedural language extension to SQL. It extends SQL by adding subroutines and
programming structures similar to those of high-level languages. Like high-level languages, Transact-SQL
also has rules and syntax that control and enable programming statements to work together. Users can
control the flow of programs by using conditional statements such as IF and loops such as WHILE.
Batches
A batch is a collection of one or more Transact-SQL statements that are sent as one unit from an
application to the server.
Stored Procedures
A stored procedure is a collection of Transact-SQL statements that are precompiled and predefined
on the server.
Triggers
A trigger is a special type of stored procedure that is executed when the user performs an event
such as an INSERT, DELETE, or UPDATE operation on a table.
Scripts
A script is a series of Transact-SQL statements stored in a file that is used as input to the SSMS code
editor or the sqlcmd utility.
The following features enable users to work with Transact-SQL statements:
Variables
A variable allows a user to store data that can be used as input in a Transact-SQL statement.
Control-of-flow
Control-of-flow language elements, such as the IF…ELSE conditional and the WHILE loop, control
the order in which Transact-SQL statements execute.
Error Handling
Error handling is a mechanism for handling errors that provides information to users about the
error that occurred.
Most run-time errors stop the current statement and the statements that follow it in the
batch.
Certain run-time errors, such as a constraint violation, stop only the current statement; the
remaining statements in the batch are executed.
The SQL statements that execute before the run-time error is encountered are unaffected. The only
exception is when the batch is in a transaction and the error results in the transaction being rolled back.
For example, suppose there are 10 statements in a batch and the sixth statement has a syntax error; then
none of the statements in the batch execute, because the batch fails to compile. If the batch compiles and
the third statement then fails to run, the results of the first two statements remain unaffected, as they
have already executed.
The following rules are applied to use batches:
1. CREATE FUNCTION, CREATE DEFAULT, CREATE RULE, CREATE TRIGGER, CREATE PROCEDURE,
CREATE VIEW, and CREATE SCHEMA statements cannot be jointly used with other statements in a
batch. The CREATE statement must start the batch; all other statements inside the batch are
considered to be a part of the CREATE statement definition.
2. A table cannot be altered and the new columns referenced within the same batch.
3. If an EXECUTE statement is the first statement in a batch, the EXECUTE keyword is not
required. It is required when the EXECUTE statement is not the first statement in
the batch.
Code Snippet 1 creates a view in a batch.
Code Snippet 1:
USE AdventureWorks2012;
GO
CREATE VIEW dbo.vProduct
AS
SELECT ProductNumber, Name
FROM Product;
GO
SELECT *
FROM dbo.vProduct;
GO
In this code snippet, a view is created in a batch. Because CREATE VIEW must be the only statement in
its batch, the GO commands are essential to separate the CREATE VIEW statement from the SELECT and USE
statements. This is a simple example to demonstrate the use of a batch. In the real world, a large
number of statements may be used within a single batch. It is also possible to combine two or more
batches within a transaction.
Code Snippet 2 shows an example of this.
Code Snippet 2:
BEGIN TRANSACTION
GO
USE AdventureWorks2012;
GO
CREATE TABLE Company
(
Id_Num int IDENTITY(100, 5),
Company_Name nvarchar(100)
)
GO
INSERT Company (Company_Name)
VALUES (N'A Bike Store')
INSERT Company (Company_Name)
VALUES (N'Progressive Sports')
GO
SELECT * FROM Company
GO
COMMIT
GO
In this code snippet, several batches are combined into one transaction. The BEGIN TRANSACTION and
COMMIT statements enclose the transaction statements. The CREATE TABLE, BEGIN TRANSACTION,
SELECT, COMMIT, and USE statements are in single-statement batches. The INSERT statements are all
included in one batch.
DECLARE
Variables are declared with the DECLARE statement in the body of a batch. These variables are
assigned values by using the SELECT or SET statement. The variables are initialized with NULL
values if the user has not provided a value at the time of the declaration.
The following is the basic syntax to declare a local variable.
Syntax:
DECLARE { @local_variable [ AS ] data_type [ = value ] } [ ,...n ]
where,
@local_variable: specifies the name of the variables and begins with @ sign.
data_type: specifies the data type. A variable cannot be of image, text, or ntext data
type.
=value: Assigns an inline value to a variable. The value can be an expression or a constant
value. The value should match with the variable declaration type or it should be implicitly
converted to that type.
Code Snippet 3 uses a local variable to retrieve contact information for the last names starting with
Man.
Code Snippet 3:
USE AdventureWorks2012;
GO
DECLARE @find varchar(30) = 'Man%';
SELECT p.LastName, p.FirstName, ph.PhoneNumber
FROM Person.Person AS p
JOIN Person.PersonPhone AS ph ON p.BusinessEntityID = ph.BusinessEntityID
WHERE LastName LIKE @find;
In this code snippet, a local variable named @find is used to store the search criteria, which will
then be used to retrieve the contact information. Here, the criteria include all last names beginning
with Man. Figure 13.1 displays the output.
SET
The SET statement sets the local variable created by the DECLARE statement to the specified
value.
SET
{ @local_variable = { expression}
}
|
{ @local_variable
{+= | -= | *= | /= | %= | &= | ^= | |= } expression
}
where,
@local_variable: specifies the name of the variable and begins with @ sign.
=: Assigns the value on the right-hand side to the variable on the left-hand side.
{= | += | -= | *= | /= | %= | &= | ^= | |= }: specifies the compound
assignment operators. They are as follows:
+= Add and then, assign
-= Subtract and then, assign
*= Multiply and then, assign
/= Divide and then, assign
%= Modulo and then, assign
&= Bitwise AND and then, assign
^= Bitwise XOR and then, assign
|= Bitwise OR and then, assign
expression: specifies any valid expression which can even include a scalar subquery.
Code Snippet 4 demonstrates the use of SET to assign a string value to a variable.
Code Snippet 4:
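A sketch of such an assignment follows; the variable name and string value are assumptions.

```sql
DECLARE @MyVar nvarchar(30);     -- variable name assumed
SET @MyVar = N'This is a test';  -- assign a string value with SET
PRINT @MyVar;
```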
SELECT
The SELECT statement indicates that the specified local variable that was created using DECLARE
should be set to the given expression.
The following is the basic syntax to assign a value to a variable using SELECT.
Syntax:
SELECT { @local_variable { = | += | -= | *= | /= | %= | &= | ^= | |= } expression } [ ,...n ]
where,
@local_variable: specifies the local variable to which a value will be assigned.
=: Assigns the value on the right-hand side to the variable on the left-hand side.
{= | += | -= | *= | /= | %= | &= | ^= | |= }: specifies the compound
assignment operators.
expression: specifies any valid expression which can even include a scalar subquery.
Code Snippet 5 shows how to use SELECT to return a single value.
Code Snippet 5:
USE AdventureWorks2012;
GO
DECLARE @var1 nvarchar(30);
SELECT @var1 = 'Unnamed Company';
-- No store has this ID (value assumed), so zero rows return and @var1 keeps its value
SELECT @var1 = Name FROM Sales.Store WHERE BusinessEntityID = 1000;
SELECT @var1 AS 'Company Name';
In this code snippet, the variable @var1 is assigned Unnamed Company as its value.
The query against the Store table will return zero rows because the value specified for the
BusinessEntityID does not exist in the table. The variable will then retain the Unnamed
Company value and will be displayed with the heading Company Name. Figure 13.2 displays the
output.
Though both the SET and SELECT statements look similar, they are not. Here are a few differences
between the two:
It is possible to assign only one variable at a time using SET. However, using SELECT you can make
multiple assignments at once.
SET can only assign a scalar value when assigning from a query. It raises an error and does not
work if the query returns multiple values/rows. However, SELECT assigns one of the returned
values to the variable and the user will not even know that multiple values were returned.
13.3 Synonyms
Synonyms are database objects that serve the following purposes:
They offer another name for a different database object, also called the base object, which may
exist on a local or remote server.
They present a layer of abstraction that guards a client application from the modifications made
to the location and the name of the base object.
For example, consider that the Department table of AdventureWorks2012 is located on the first
server named Server1. To reference this table from the second server, Server2, a client application
would have to use the four-part name Server1.AdventureWorks2012.Person.Department. If the
location of the table was modified, for example, to another server, the client application would have to be
rectified to reflect that change. To address both these issues, users can create a synonym, DeptEmpTable,
on Server2 for the Department table on Server1. Now, the client application only has to use the
single name, DeptEmpTable, to refer to the Department table.
Similarly, if the location of the Department table changes, users have to modify the
synonym, DeptEmpTable, to point to the new location of the Department table. Since there is no
ALTER SYNONYM statement, you first have to drop the synonym, DeptEmpTable, and then, re-create
the synonym with the same name, but point the synonym to the new location of Department.
Note - A synonym is a part of schema, and similar to other schema objects, the synonym name must
be unique.
Table 13.1 lists the database objects for which the users can create synonyms.
Database Objects
Extended stored procedure
SQL table-valued function
SQL stored procedure
Table (user-defined)
Replication-filter-procedure
SQL scalar function
SQL inline table-valued function
View
Table 13.1: Database Objects
Permissions
DELETE
INSERT
TAKE OWNERSHIP
VIEW DEFINITION
CONTROL
EXECUTE
SELECT
UPDATE
Table 13.2: Permissions
Users can work with synonyms in SQL Server 2012 using either Transact-SQL or SSMS.
To create a synonym using SSMS, perform the following steps:
1. In Object Explorer, expand the database where you want to create a new synonym.
2. Select the Synonyms folder, right-click it and then, click New Synonym… as shown in
figure 13.3.
3. In the New Synonym dialog box, provide the information as shown in figure 13.4.
Server name: is the name of the server hosting the base object; here, it is given
as 10.2.110.140.
Database name: is the database name to connect the object. Here, AdventureWorks2012
is the database name.
Schema: is the schema that owns the object.
Object type and Object name: is the object type and name respectively. Here,
the object type selected is view and the object name that refers the synonym is
vEmployeeDepartmentHistory.
To create a synonym using Transact-SQL, perform the following steps:
1. Connect to the Database Engine.
2. Click New Query in the Standard bar.
3. Write the query to create the synonym in the query window.
The following is the syntax to create a synonym.
Syntax:
CREATE SYNONYM [ schema_name_1. ] synonym_name FOR <object>

<object> :: =
{
    [ server_name. [ database_name ]. [ schema_name_2 ]. | database_name . [
    schema_name_2 ]. | schema_name_2. ] object_name
}
where,
schema_name_1: specifies the schema in which the synonym is created.
synonym_name: specifies the new synonym name.
server_name: specifies the server name where the base object is located.
database_name: specifies the database name where the base object is located.
schema_name_2: specifies the schema name of the base object.
object_name: specifies the base object name, which is referenced by the synonym.
Code Snippet 6 creates a synonym from an existing table.
Code Snippet 6:
USE tempdb;
GO
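The CREATE SYNONYM statement itself can be sketched as follows; the synonym name and the base table are assumptions, chosen to match the explanation below.

```sql
-- Synonym name and base table are assumed for illustration
CREATE SYNONYM dbo.MyProduct
FOR AdventureWorks2012.Production.Product;
GO
```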
In this code snippet, a synonym is created from an existing table present in the AdventureWorks2012
database.
BEGIN
{
sql_statement | statement_block
}
END
where,
{ sql_statement| statement_block }: Is any valid Transact-SQL statement that is
defined using a statement block.
Code Snippet 7 shows the use of BEGIN and END statements.
Code Snippet 7:
USE AdventureWorks2012;
GO
BEGIN TRANSACTION;
GO
IF @@TRANCOUNT = 0
BEGIN
SELECT FirstName, MiddleName
FROM Person.Person WHERE LastName = 'Andy';
ROLLBACK TRANSACTION;
PRINT N'Rolling back the transaction two times would cause an error.';
END;
ROLLBACK TRANSACTION;
PRINT N'Rolled back the transaction.';
GO
In this code snippet, the BEGIN and END statements define a sequence of Transact-SQL statements
that execute together. If the BEGIN and END statements were not included, both ROLLBACK
TRANSACTION statements would execute and both PRINT messages would be displayed.
IF…ELSE
The IF…ELSE statement imposes a condition on the execution of a Transact-SQL statement. The
Transact-SQL statement that follows the IF keyword executes only if the condition is satisfied, that
is, returns TRUE. The optional ELSE keyword introduces an alternative Transact-SQL statement
that executes only when the IF condition is not satisfied, that is, returns FALSE.
The following is the syntax for the IF…ELSE statement.
Syntax:
IF Boolean_expression
{ sql_statement | statement_block }
[ ELSE
{ sql_statement | statement_block } ]
where,
Boolean_expression: specifies the expression that returns TRUE or FALSE value.
{ sql_statement | statement_block }: Is any Transact-SQL statement or statement
grouping that is defined using a statement block. Unless a statement block is used, the IF
or ELSE condition can apply to only one Transact-SQL statement. To define the statement
block, the BEGIN and END keywords are used.
Code Snippet 8 shows the use of IF…ELSE statements.
Code Snippet 8:
USE AdventureWorks2012
GO
DECLARE @ListPrice money;
SET @ListPrice = (SELECT MAX(p.ListPrice)
FROM Production.Product AS p
JOIN Production.ProductSubcategory AS s
ON p.ProductSubcategoryID = s.ProductSubcategoryID
WHERE s.[Name] = 'Mountain Bikes');
PRINT @ListPrice
IF @ListPrice < 3000
PRINT 'All the products in this category can be purchased for an amount less
than 3000'
ELSE
PRINT 'The prices for some products in this category exceed 3000'
In this code snippet, the IF…ELSE statement is used to form a conditional statement. First, a
variable @ListPrice is defined and a query is created to return the maximum list price of the
product category Mountain Bikes. Then, this price is compared with a value of 3000 to determine if
products can be purchased for an amount less than 3000. If yes, an appropriate message is printed
using the first PRINT statement. If not, then the second PRINT statement executes.
WHILE
The WHILE statement specifies a condition for the repetitive execution of the statement block. The
statements are executed repetitively as long as the specified condition is true. The execution of
statements in the WHILE loop can be controlled by using the BREAK and CONTINUE keywords.
WHILE Boolean_expression
{ sql_statement | statement_block | BREAK | CONTINUE }
where,
Boolean_expression: specifies the expression that returns TRUE or FALSE values.
{sql_statement | statement_block}: Is any Transact-SQL statement that defines
the statement block.
BREAK: Causes an exit from the innermost WHILE loop. All statements that appear
after the END keyword, which marks the end of the loop, are executed.
CONTINUE: Causes the WHILE loop to restart. The statements after the CONTINUE
keyword within the body of the loop are not executed.
Code Snippet 9 shows the use of WHILE statement.
Code Snippet 9:
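A sketch of such a loop follows; the counter variable name is an assumption, as the original snippet is not shown.

```sql
DECLARE @counter int = 10;          -- variable name assumed
WHILE @counter < 95
BEGIN
    IF @counter % 2 = 0
        PRINT @counter;             -- print only the even numbers
    SET @counter = @counter + 1;
END;
```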
In this code snippet, all the even numbers from 10 up to 95 are displayed. This is
achieved using a WHILE loop along with an IF statement.
Similarly, a WHILE loop can also be used with queries and other Transact-SQL statements.
[Fragment of a table contrasting deterministic functions (ASIN, ACOS, SIGN) with non-deterministic
functions (@@IDLE, @@IOBUSY, @@PACK_RECEIVED), followed by a table noting that the CONVERT
function is deterministic only if certain conditions exist.]
Functions calling extended stored procedures are non-deterministic because the extended stored
procedures may result in side effects on the database. Changes made to the global state of a
database such as a change to an external resource, or updates to a table, file, or a network are
called side effects. For example, sending an e-mail, or deleting a file can cause side effects. While
executing an extended stored procedure from a user-defined function, the user cannot assure that
it will return a consistent resultset.
Therefore, the user-defined functions that create side effects on the database are not
recommended.
Scalar-Valued Functions
A Scalar-Valued Function (SVF) always returns a single scalar value, such as an int, bit, or
string. The return type and the input parameters of an SVF can be of any data type except text,
ntext, image, cursor, and timestamp.
An inline scalar function has a single statement and no function body. A multi-statement scalar
function encloses the function body in a BEGIN...END block.
Table-Valued Functions
Table-valued functions are user-defined functions that return a table. Similar to an inline scalar
function, an inline table-valued function has a single statement and no function body.
Code Snippet 10 shows the creation of a table-valued function.
Code Snippet 10:
USE AdventureWorks2012;
GO
IF OBJECT_ID (N'Sales.ufn_CustDates', N'IF') IS NOT NULL
DROP FUNCTION Sales.ufn_CustDates;
GO
CREATE FUNCTION Sales.ufn_CustDates ()
RETURNS TABLE
AS
RETURN
(
SELECT A.CustomerID, B.DueDate, B.ShipDate
FROM Sales.Customer A
LEFT OUTER JOIN
Sales.SalesOrderHeader B
ON A.CustomerID = B.CustomerID
);
GO
Here, an inline table-valued function defines a left outer join between the tables Sales.Customer
and Sales.SalesOrderHeader.
The tables are joined based on customer ids. In this case, all records from the left table and only
matching records from the right table are returned. The resultant table is then returned from the
table-valued function.
The function is invoked as shown in Code Snippet 11.
Code Snippet 11:
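The invocation can be sketched as follows; the column list mirrors the function definition above.

```sql
-- A table-valued function is queried like a table
SELECT CustomerID, DueDate, ShipDate
FROM Sales.ufn_CustDates();
```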
The result will be the outcome of the join represented in a tabular format.
Permissions
The ALTER permission is required on the schema or the function. If the function specifies a
user-defined type, then it requires the EXECUTE permission on the type.
Table-valued Functions
Scalar-valued Functions
Aggregate Functions
System Functions
4. Right-click the function to be modified and then, select Modify. The code for the function appears
in a query editor window.
5. In the query editor window, make the required changes to the ALTER FUNCTION statement body.
6. Click Execute on the toolbar to execute the ALTER FUNCTION statement.
USE [AdventureWorks2012]
GO
ALTER FUNCTION [dbo].[ufnGetAccountingEndDate]()
RETURNS [datetime]
AS
BEGIN
RETURN DATEADD(millisecond, -2, CONVERT(datetime, '20040701', 112));
END;
Partitioning
Partitioning limits the window of the current calculation to those rows of the resultset that
have the same values in the partition columns as the current row. It uses the PARTITION BY
clause.
Code Snippet 13 demonstrates the use of the PARTITION BY and OVER clauses with aggregate
functions. Here, using the OVER clause is more efficient than using subqueries to calculate the
aggregate values.
Code Snippet 13:
USE AdventureWorks2012;
GO
SELECT SalesOrderID, ProductID, OrderQty
,SUM(OrderQty) OVER(PARTITION BY SalesOrderID) AS Total
,MAX(OrderQty) OVER(PARTITION BY SalesOrderID) AS MaxOrderQty
FROM Sales.SalesOrderDetail
WHERE ProductId IN(776, 773);
GO
Ordering
The ordering element defines the order of calculation within the partition. Standard SQL supports
the ordering element for all window functions. Earlier versions of SQL Server did not support the
ordering element with aggregate functions, as they supported only partitioning. SQL Server 2012
adds support for the ordering element with aggregate functions. The meaning of the ordering element
differs somewhat between function categories. With ranking functions, ordering is intuitive.
Code Snippet 14 demonstrates an example of the ordering element.
Code Snippet 14:
This code snippet makes use of the RANK() function which returns the rank of each row in the
partition of a resultset. The rank of a row is determined by adding 1 to the number of ranks that
come before the specified row. For example, when descending ordering is used, the RANK() function
returns one more than the number of rows in the respective partition that have a greater ordering
value than the specified one.
Figure 13.6 displays the output of Code Snippet 14.
This code snippet makes use of the RANK() function which returns the rank of each row in the
partition of a resultset. In general, the rank of a row is determined by adding 1 to the number of
ranks that come before the specified row. Here in this code, the first RANK() function generates
the attribute Rnk_One that depends on the default partitioning, and the second RANK function
generates Rnk_Two that uses explicit partitioning by TerritoryID.
Figure 13.7 displays the partitions defined for a sample of three calculation results in the query:
one Rnk_One value and two Rnk_Two values.
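The snippet producing Rnk_One and Rnk_Two is not reproduced above; based on the description, a query of this shape (a sketch against Sales.SalesOrderHeader, with an assumed ordering column) would match it:

```sql
-- Sketch: Rnk_One uses the default (single) partition,
-- Rnk_Two partitions explicitly by TerritoryID
SELECT SalesOrderID, TerritoryID, TotalDue,
       RANK() OVER (ORDER BY TotalDue DESC) AS Rnk_One,
       RANK() OVER (PARTITION BY TerritoryID
                    ORDER BY TotalDue DESC) AS Rnk_Two
FROM Sales.SalesOrderHeader;
```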
Framing
Framing enables you to specify a further division of rows within a window partition.
This is done by assigning upper and lower boundaries for the window frame that presents rows to
the window function. In simple terms, a frame is similar to a moving window over the data that
starts and ends at specified positions. Window frames are defined using the ROWS or RANGE
subclauses together with starting and ending boundaries.
Code Snippet 16 displays a query against the Production.ProductInventory table, calculating the
running total of quantities for each product.
Code Snippet 16:
SELECT ProductID, LocationID, Quantity,
SUM(Quantity) OVER(PARTITION BY ProductID
ORDER BY LocationID
ROWS BETWEEN UNBOUNDED PRECEDING
AND CURRENT ROW) AS RunQty
FROM Production.ProductInventory;
In this code snippet, the window function applies the SUM aggregate to the attribute Quantity, partitions
the window by ProductID, orders the partition rows by LocationID, and frames the partition rows
depending on the given ordering between unbounded preceding (no low boundary point) and the current
row. In other words, the result will be the sum of all prior rows in the frame, including the current row.
Figure 13.8 displays the output of Code Snippet 16.
Ranking functions
These functions return a rank value for each row in a partition. Depending on the function used,
several rows may receive the same rank value as other rows. Ranking functions are non-deterministic.
Table 13.6 lists the various ranking functions.
Ranking Function Description
NTILE Spreads rows in an ordered partition into a given number of groups, beginning at 1. For each
row, the function returns the number of the group to which the row belongs.
ROW_NUMBER Retrieves the sequential number of a row in a partition of a resultset, starting at 1 for
the first row in each partition.
DENSE_RANK Returns the rank of rows within the partition of a resultset, without any gaps in the
ranking. The rank of a row is one plus the number of distinct ranks that come before the row in
question.
Table 13.6: Ranking Functions
Code Snippet 17 demonstrates the use of ranking functions.
Code Snippet 17:
USE AdventureWorks2012;
GO
SELECT p.FirstName, p.LastName
,ROW_NUMBER() OVER (ORDER BY a.PostalCode) AS 'Row Number'
,NTILE(4) OVER (ORDER BY a.PostalCode) AS 'NTILE'
,s.SalesYTD, a.PostalCode
FROM Sales.SalesPerson AS s
INNER JOIN Person.Person AS p
ON s.BusinessEntityID = p.BusinessEntityID
INNER JOIN Person.Address AS a
ON a.AddressID = p.BusinessEntityID
WHERE TerritoryID IS NOT NULL
AND SalesYTD <> 0;
The NTILE() function breaks a given input collection into N equal sized logical groups. To determine
how many rows belong in each group, SQL Server has to determine the total number of rows in
the input collection. The OVER clause decides the order of the rows when they have been divided
into groups. It is possible to perform the grouping in one order and return the resultset in another
order.
OFFSET functions
The different types of offset functions are as follows:
SWITCHOFFSET
This function returns a DATETIMEOFFSET value that is modified from the stored time zone
offset to a specific new time zone offset.
The following is the syntax for the SWITCHOFFSET function.
Syntax:
SWITCHOFFSET ( DATETIMEOFFSET, time_zone )
where,
DATETIMEOFFSET: is an expression that is resolved to a datetimeoffset(n)
value.
time_zone: specifies a character string in the format [+|-]TZH:TZM, or a signed
integer (of minutes), that represents the time zone offset and is assumed to be
daylight-saving aware and adjusted.
Code Snippet 18 displays the use of the SWITCHOFFSET function.
Code Snippet 18:
CREATE TABLE Test
(
ColDatetimeoffset datetimeoffset
);
GO
INSERT INTO Test
VALUES ('1998-09-20 7:45:50.71345 -5:00');
GO
SELECT SWITCHOFFSET (ColDatetimeoffset, '-08:00')
FROM Test;
GO
--Returns: 1998-09-20 04:45:50.7134500 -08:00
SELECT ColDatetimeoffset
FROM Test;
DATETIMEOFFSETFROMPARTS
This function returns a datetimeoffset value for the specified date and time with specified
precision and offset.
The following is the syntax for the DATETIMEOFFSETFROMPARTS function.
Syntax:
DATETIMEOFFSETFROMPARTS ( year, month, day, hour, minute, seconds, fractions, hour_offset, minute_offset, precision )
where,
year: specifies the integer expression for a year.
month: specifies the integer expression for a month.
day: specifies the integer expression for a day.
hour: specifies the integer expression for an hour.
minute: specifies the integer expression for a minute.
seconds: specifies the integer expression for seconds.
fractions: specifies the integer expression for fractions.
hour_offset: specifies the integer expression for the hour portion of the time zone
offset.
minute_offset: specifies the integer expression for the minute portion of the time zone
offset.
precision: specifies the integer literal precision of the datetimeoffset value to be
returned.
Code Snippet 19 displays the use of DATETIMEOFFSETFROMPARTS function.
Code Snippet 19:
The code displays a datetimeoffset value for the given date and time with the specified precision
and offset.
Figure 13.11 displays the output of Code Snippet 19.
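Code Snippet 19 is not reproduced above; a call of this form (argument values are illustrative) demonstrates the function:

```sql
-- Arguments: year, month, day, hour, minute, seconds, fractions,
--            hour_offset, minute_offset, precision
SELECT DATETIMEOFFSETFROMPARTS (2012, 8, 15, 14, 30, 0, 5, 5, 30, 1) AS Result;
-- With precision 1, fractions are tenths of a second:
-- 2012-08-15 14:30:00.5 +05:30
```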
SYSDATETIMEOFFSET
This function returns a datetimeoffset(7) value that contains the date and time of the
computer on which the instance of SQL Server is running.
The following is the syntax for SYSDATETIMEOFFSET.
Syntax:
SYSDATETIMEOFFSET ()
Code Snippet 20 displays the different formats used by the date and time functions.
Code Snippet 20:
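The snippet is not reproduced above; a sketch that contrasts the formats returned by the system date and time functions could look like this:

```sql
SELECT SYSDATETIME()       AS [SYSDATETIME]        -- datetime2(7)
      ,SYSDATETIMEOFFSET() AS [SYSDATETIMEOFFSET]  -- datetimeoffset(7), with time zone offset
      ,SYSUTCDATETIME()    AS [SYSUTCDATETIME]     -- datetime2(7) in UTC
      ,CURRENT_TIMESTAMP   AS [CURRENT_TIMESTAMP]  -- datetime
      ,GETDATE()           AS [GETDATE]            -- datetime
      ,GETUTCDATE()        AS [GETUTCDATE];        -- datetime in UTC
```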
Analytic Functions
SQL Server 2012 supports several analytic functions. These functions compute an aggregate value
based on a group of rows. Analytic functions can compute running totals, moving averages, or top-N
results within a group.
Table 13.7 lists some of the analytic functions.
Function Description
LEAD Provides access to data from a subsequent row in the same resultset without using a self-join.
LAST_VALUE Retrieves the last value in an ordered set of values.
LAG Provides access to data from a previous row in the same resultset without using a self-join.
FIRST_VALUE Retrieves the first value in an ordered set of values.
CUME_DIST Computes the cumulative distribution of a value in a group of values.
PERCENTILE_CONT Computes a percentile based on a continuous distribution of the column value in SQL.
PERCENTILE_DISC Calculates a particular percentile for sorted values in an entire rowset or within
distinct partitions of a rowset.
Table 13.7: Analytic Functions
Code Snippet 21 demonstrates the use of LEAD() function.
Code Snippet 21:
USE AdventureWorks2012;
GO
SELECT BusinessEntityID, YEAR(QuotaDate) AS QuotaYear, SalesQuota AS
NewQuota,
LEAD(SalesQuota, 1,0) OVER (ORDER BY YEAR(QuotaDate)) AS FutureQuota
FROM Sales.SalesPersonQuotaHistory
WHERE BusinessEntityID = 275 and YEAR(QuotaDate) IN ('2007','2008');
In this code snippet, the LEAD() function returns, for each year, the sales quota of the subsequent
year for a particular employee, so that the quotas across years can be compared.
Code Snippet 22 demonstrates the use of FIRST_VALUE() function.
Code Snippet 22:
USE AdventureWorks2012;
GO
SELECT Name, ListPrice,
FIRST_VALUE(Name) OVER (ORDER BY ListPrice ASC) AS LessExpensive
FROM Production.Product
WHERE ProductSubcategoryID = 37;
In this code snippet, the FIRST_VALUE() function compares products in product subcategory 37
and returns, for each row, the name of the least expensive product.
2. Which of the following are used to declare and set local variables in SQL Server?
a. DECLARE
b. SET
c. DELETE
d. INSERT
(A) a, d
(B) b, c
(C) a, b
(D) c, d
4. Which of the following code snippets uses a local variable to retrieve contact information for last
names starting with Per?
(A)
USE AdventureWorks2012;
GO
DECLARE @find varchar(30);
SET @find = 'Per%';
SELECT p.LastName, p.FirstName, ph.PhoneNumber
FROM Person.Customer AS p
JOIN Person.Phone AS ph ON p.BusinessEntityID = ph.BusinessEntityID
WHERE LastName LIKE @find;

(B)
USE AdventureWorks2012;
GO
DECLARE find varchar(30);
SET find = 'Per%';
SELECT p.LastName, p.FirstName, ph.PhoneNumber
FROM Person.Customer AS p
JOIN Person.Phone AS ph ON p.BusinessEntityID = ph.BusinessEntityID
WHERE LastName LIKE find;

(C)
USE AdventureWorks2012;
GO
@find varchar(30);
SET @find = 'Per%';
SELECT p.LastName, p.FirstName, ph.PhoneNumber
FROM Person.Customer AS p
JOIN Person.Phone AS ph ON p.BusinessEntityID = ph.BusinessEntityID
WHERE LastName LIKE @find;

(D)
USE AdventureWorks2012;
GO
SET @find varchar(30);
SET @find = 'Per';
SELECT p.LastName, p.FirstName, ph.PhoneNumber
FROM Person.Customer AS p
JOIN Person.Phone AS ph ON p.BusinessEntityID = ph.BusinessEntityID
WHERE LastName LIKE @find;
13.9.1 Answers
1. A
2. C
3. A
4. A
5. D
Summary
Transact-SQL provides basic programming elements such as variables, control-of-flow elements,
conditional, and loop constructs.
A batch is a collection of one or more Transact-SQL statements that are sent as one unit from an
application to the server.
Variables allow users to store data for using as input in other Transact-SQL statements.
Synonyms provide a way to have an alias for a database object that may exist on a remote or local
server.
Deterministic functions return the same result every time they are called with a specific
set of input values and the same state of the database.
Non-deterministic functions may return different results each time they are called with a specific
set of input values, even though the database that is accessed remains the same.
A window function is a function that applies to a collection of rows.
Try It Yourself
1. Zen Technologies is a leading textiles company located in California. The management of the
company wants to give a loyalty award to all employees completing a tenure of five years in the
organization. Using the same Employee table, create a Transact-SQL batch to return the EmployeeID,
FirstName, Department, and HireDate of all such employees. Assume that the management
gives a twenty-five percent salary increment to everybody completing one year in the organization.
The chairman of the organization wants to participate in a national salary survey.
Write a batch to determine the total salary paid to each department that employs more than one
employee. The management also wants to find out which department has hired the largest number of
employees in the last five years.
Hints:
14.1 Introduction
A transaction is a single unit of work. A transaction is successful only when all data modifications that
are made in a transaction are committed and are saved in the database permanently. If the transaction
is rolled back or cancelled, then it means that the transaction has encountered errors and there are no
changes made to the contents of the database. Hence, a transaction can be either committed or rolled
back.
Defining Transactions
A logical unit of work must exhibit four properties, called the atomicity, consistency, isolation, and
durability (ACID) properties, to qualify as a transaction.
Atomicity: If a transaction consists of many operations, all of them must be committed; if any
operation in the group fails, the entire transaction must be rolled back.
Consistency: The sequence of operations must leave the database in a consistent state.
Isolation: The operations performed must be isolated from other operations on
the same server or on the same database.
Durability: The operations that are performed on the database must be saved and stored in
the database permanently.
Implementing Transactions
SQL Server supports transactions in several modes. Some of these modes are as follows:
Autocommit Transactions: Every single-line statement is automatically committed as soon
as it completes. In this mode, one does not need to write any specific statements to start
and end transactions. It is the default mode for the SQL Server Database Engine.
Explicit Transactions: Every transaction explicitly starts with the BEGIN TRANSACTION
statement and ends with a COMMIT or ROLLBACK TRANSACTION statement.
BEGIN TRANSACTION
The BEGIN TRANSACTION statement marks the beginning point of an explicit or local transaction.
The following is the syntax for the BEGIN TRANSACTION statement.
Syntax:
BEGIN { TRAN | TRANSACTION } [ { transaction_name | @tran_name_variable } [ WITH MARK [ 'description' ] ] ] [ ; ]
where,
transaction_name: specifies the name that is assigned to the transaction. It must conform
to the rules for identifiers; identifiers longer than 32 characters are not allowed.
@tran_name_variable: specifies the name of a user-defined variable that contains a valid
transaction name.
WITH MARK['description']: specifies the transaction that is marked in the log. The
description string defines the mark.
Code Snippet 1 shows how to create and begin a transaction.
Code Snippet 1:
USE AdventureWorks2012;
GO
DECLARE @TranName VARCHAR(30);
SELECT @TranName = 'FirstTransaction';
BEGIN TRANSACTION @TranName;
DELETE FROM HumanResources.JobCandidate
WHERE JobCandidateID = 13;
In this code snippet, a transaction name is declared using a variable with the value
FirstTransaction. A new transaction with this name is then created, containing a DELETE
statement. The transaction remains open until it is ended with a COMMIT TRANSACTION or
ROLLBACK TRANSACTION statement.
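To make the deletion permanent, the named transaction can be committed; a sketch continuing Code Snippet 1:

```sql
-- Commit the transaction by name; this must run in the same batch
-- as the DECLARE, since @TranName is a local variable
COMMIT TRANSACTION @TranName;
```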
COMMIT TRANSACTION
The COMMIT TRANSACTION statement marks the end of a successful implicit or explicit
transaction. If @@TRANCOUNT is 1, COMMIT TRANSACTION makes all data
modifications performed on the database a permanent part of the database.
Further, it releases the resources held by the transaction and decrements @@TRANCOUNT to 0. If
@@TRANCOUNT is greater than 1, COMMIT TRANSACTION decrements
@@TRANCOUNT by 1 and keeps the transaction in the active state.
The following is the syntax for the COMMIT TRANSACTION statement.
Syntax:
COMMIT { TRAN | TRANSACTION } [ transaction_name | @tran_name_variable ] [ ; ]
where,
transaction_name: specifies the name assigned by the previous BEGIN
TRANSACTION statement. It must conform to the rules for identifiers; identifiers
longer than 32 characters are not allowed.
@tran_name_variable: specifies the name of a user-defined variable that contains a valid
transaction name. The variable can be declared as char, varchar, nchar, or nvarchar
data type. If more than 32 characters are passed to the variable, then only 32 characters are
used and the remaining characters will be truncated.
Code Snippet 2 shows how to commit a transaction in the HumanResources.JobCandidate
table of AdventureWorks2012 database.
Code Snippet 2:
BEGIN TRANSACTION;
GO
DELETE FROM HumanResources.JobCandidate
WHERE JobCandidateID = 11;
GO
COMMIT TRANSACTION;
GO
This code snippet defines a transaction that will delete a job candidate record having
JobCandidateID as 11.
COMMIT WORK
The COMMIT WORK statement marks the end of a transaction.
The following is the syntax for the COMMIT WORK statement.
Syntax:
COMMIT [ WORK ]
[ ; ]
COMMIT TRANSACTION and COMMIT WORK are identical except for the fact that COMMIT
TRANSACTION accepts a user-defined transaction name.
Marking a Transaction
Code Snippet 3 shows how to mark a transaction in the HumanResources.JobCandidate table
of AdventureWorks2012 database.
Code Snippet 3:
In this code snippet, a transaction named DeleteCandidate is created and marked in the log.
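Code Snippet 3 is not reproduced above; based on the description, it could be sketched as follows (the mark description and candidate ID are assumptions):

```sql
-- Sketch: a named transaction marked in the transaction log
BEGIN TRANSACTION DeleteCandidate
    WITH MARK N'Deleting a job candidate';
GO
DELETE FROM HumanResources.JobCandidate
    WHERE JobCandidateID = 13;
GO
COMMIT TRANSACTION DeleteCandidate;
GO
```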
ROLLBACK TRANSACTION
This statement rolls back or cancels an implicit or explicit transaction to the beginning of
the transaction, or to a savepoint inside the transaction. A savepoint is a mechanism to roll back
parts of a transaction. ROLLBACK TRANSACTION deletes all data modifications made
from the beginning of the transaction or back to the savepoint. It also releases the resources held
by the transaction.
The following is the syntax for the ROLLBACK TRANSACTION statement.
Syntax:
ROLLBACK { TRAN | TRANSACTION } [ transaction_name | @tran_name_variable | savepoint_name | @savepoint_variable ] [ ; ]
where,
transaction_name: specifies the name assigned to the transaction in the BEGIN TRANSACTION
statement. It must conform to the rules for identifiers; identifiers longer than 32
characters are not allowed.
@tran_name_variable: specifies the name of a user-defined variable that contains a valid
transaction name. The variable can be declared as char, varchar, nchar, or nvarchar
data type.
savepoint_name: specifies the savepoint_name from a SAVE TRANSACTION statement.
Use savepoint_name only when a conditional roll back affects a part of a transaction.
@savepoint_variable: specifies the name of a savepoint variable that contains a valid
savepoint name. The variable can be declared as char, varchar, nchar, or nvarchar
data type.
Consider an example that demonstrates the use of ROLLBACK. Assume that a database named
Sterling has been created. A table named ValueTable is created in this database as shown
in Code Snippet 4.
Code Snippet 4:
USE Sterling;
GO
CREATE TABLE ValueTable ([value] char)
GO
Code Snippet 5 creates a transaction that inserts two records into ValueTable. Then, it rolls
back the transaction and again inserts one record into ValueTable. When a SELECT statement
is used to query the table, you will see that only a single record with value C is displayed. This is
because the earlier INSERT operations have been rolled back or cancelled.
Code Snippet 5:
BEGIN TRANSACTION
INSERT INTO ValueTable VALUES('A');
INSERT INTO ValueTable VALUES('B');
GO
ROLLBACK TRANSACTION
INSERT INTO ValueTable VALUES('C');
SELECT [value] FROM ValueTable;
ROLLBACK WORK
This statement rolls back a user-specified transaction to the beginning of the transaction.
The following is the syntax for the ROLLBACK WORK statement.
Syntax:
ROLLBACK [ WORK ]
[ ; ]
SAVE TRANSACTION
The SAVE TRANSACTION statement sets a savepoint within a transaction. The following is the
syntax for the SAVE TRANSACTION statement.
Syntax:
SAVE { TRAN | TRANSACTION } { savepoint_name | @savepoint_variable } [ ; ]
where,
savepoint_name: specifies the name assigned to the savepoint. Savepoint names must conform
to the rules for identifiers and are restricted to 32 characters.
@savepoint_variable: specifies the name of a user-defined variable that contains a valid
savepoint name. The variable can be declared as char, varchar, nchar, or nvarchar
data type. If more than 32 characters are passed to the variable, only the first 32
characters are used.
In this code snippet, a savepoint transaction is created within a stored procedure. This will then be
used to roll back only the changes made by the stored procedure if an active transaction has started
before the stored procedure executes.
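The snippet described above is not reproduced; a sketch of the pattern it describes (procedure and savepoint names are illustrative):

```sql
CREATE PROCEDURE SaveTranExample
    @InputCandidateID INT
AS
    -- Detect whether the procedure was called from an active transaction
    DECLARE @TranCounter INT;
    SET @TranCounter = @@TRANCOUNT;
    IF @TranCounter > 0
        -- Called inside an active transaction: set a savepoint so that
        -- only this procedure's work can be rolled back
        SAVE TRANSACTION ProcedureSave;
    ELSE
        -- No active transaction: start one
        BEGIN TRANSACTION;
    BEGIN TRY
        DELETE HumanResources.JobCandidate
            WHERE JobCandidateID = @InputCandidateID;
        IF @TranCounter = 0
            COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @TranCounter = 0
            ROLLBACK TRANSACTION;                  -- the procedure owns the transaction
        ELSE IF XACT_STATE() <> -1
            ROLLBACK TRANSACTION ProcedureSave;    -- undo only this procedure's changes
    END CATCH;
GO
```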
14.4 @@TRANCOUNT
The @@TRANCOUNT system function returns the number of BEGIN TRANSACTION statements that have
occurred in the current connection. Figure 14.2 displays an example of using @@TRANCOUNT.
Syntax:
@@TRANCOUNT
Code Snippet 7 shows the effect that nested BEGIN and COMMIT statements have on the @@TRANCOUNT
variable.
Code Snippet 7:
PRINT @@TRANCOUNT
BEGIN TRAN
PRINT @@TRANCOUNT
BEGIN TRAN
PRINT @@TRANCOUNT
COMMIT
PRINT @@TRANCOUNT
COMMIT
PRINT @@TRANCOUNT
This code snippet displays the value of @@TRANCOUNT as the nested BEGIN TRAN and COMMIT
statements execute in the current connection.
Figure 14.3 displays the output of Code Snippet 7.
PRINT @@TRANCOUNT
BEGIN TRAN
PRINT @@TRANCOUNT
BEGIN TRAN
PRINT @@TRANCOUNT
ROLLBACK
PRINT @@TRANCOUNT
In this case, the code snippet displays the value of @@TRANCOUNT as the BEGIN and ROLLBACK
statements execute in the current connection. Note that a single ROLLBACK rolls back to the
outermost BEGIN TRAN and resets @@TRANCOUNT to 0.
To create a marked transaction, users can use the BEGIN TRANSACTION statement with the WITH
MARK [description] clause. The optional description is a textual description of the mark. A name
is required for a marked transaction, and mark names can be reused. The transaction log records the
mark description, name, user, database, datetime information, and the Log Sequence Number (LSN).
The datetime information is used with the mark name for unique identification of the mark.
To create a marked transaction in a set of databases, the following steps are required:
1. Name the transaction in the BEGIN TRAN statement and use the WITH MARK clause.
USE AdventureWorks2012
GO
BEGIN TRANSACTION ListPriceUpdate
WITH MARK 'UPDATE Product list prices';
GO
UPDATE Production.Product
SET ListPrice = ListPrice * 1.20
WHERE ProductNumber LIKE 'BK-%';
GO
Code Snippet 10 shows how to restore a transaction log. It assumes that a backup named
AdventureWorksBackups has been created.
Code Snippet 10:
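The restore itself could be sketched as follows, stopping at the mark (the backup device name AdventureWorksBackups is taken from the text; the FILE number is an assumption):

```sql
-- Sketch: restore the log and stop at the marked transaction
RESTORE LOG AdventureWorks2012
    FROM AdventureWorksBackups
    WITH FILE = 3, RECOVERY,
    STOPATMARK = 'ListPriceUpdate';
```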
Implicit: These transactions are maintained by SQL Server for each and every DML and DDL
statement. These DML and DDL statements execute under implicit transactions. SQL Server will
roll back the entire statement.
Explicit: These transactions are defined by programmers. DML statements are included to execute
as a unit. SELECT statements are not included as they do not modify data.
Table 14.1: Differences between Implicit and Explicit Transactions
When data is read, are any locks taken, and what types of locks are requested?
A read operation that references a row modified by another transaction may behave in one of these ways:
Blocking until the exclusive lock on the row is freed.
Retrieving the committed version of the row that existed at the time the transaction or
statement started.
Reading the uncommitted data modification.
Choosing a transaction isolation level does not affect the locks that protect data modifications.
A transaction always acquires an exclusive lock on any data that it modifies and holds that lock
until the transaction completes, irrespective of the isolation level set for that transaction.
Transaction isolation levels mainly describe the level of protection, for read operations, from the
effects of changes made by other transactions. A lower isolation level increases the ability of several
users to access data at the same time, but increases the number of concurrency effects, such
as dirty reads or lost updates, that users might encounter. Conversely, a higher isolation level
decreases the types of concurrency effects that users may encounter, but requires additional system
resources and increases the chance of one transaction blocking another.
Selecting a suitable isolation level is a trade-off between the data integrity requirements of the
application and the overhead of each isolation level. The highest isolation level, serializable,
guarantees that a transaction will retrieve exactly the same data each time it repeats a read
operation; it does this by performing a level of locking that is likely to affect other users in a
multi-user system. The lowest isolation level, read uncommitted, may retrieve data that has been
modified but not yet committed by other transactions. All concurrency side effects can occur under
read uncommitted, but because there is no read versioning or locking, the overhead is minimized.
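An isolation level is chosen per session with SET TRANSACTION ISOLATION LEVEL; for example, a sketch:

```sql
-- Request the lowest isolation level for the current session
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM Production.Product;  -- may read uncommitted (dirty) data
COMMIT TRANSACTION;
-- Restore the default level
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```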
Table 14.2 lists the concurrency effects that are allowed by the different isolation levels.
Update Locks
These locks avoid a common form of deadlock. In a repeatable read or serializable transaction, the
transaction reads data, acquiring a shared lock on the row or page, and then modifies the data, which
requires converting the lock to an exclusive lock. If two transactions acquire shared locks on a
resource and then attempt to update data simultaneously, each transaction attempts the conversion
to an exclusive lock. The shared-to-exclusive conversion must wait, because the exclusive lock of
one transaction is not compatible with the shared lock of the other, so a lock wait occurs.
Similarly, the second transaction tries to acquire an exclusive lock for its update. Because both
transactions are converting to exclusive locks and each waits for the other to release its shared
lock, a deadlock occurs.
Figure 14.5 depicts the concept of such a deadlock.
Shared Locks
These locks allow concurrent transactions to read a resource under pessimistic concurrency control.
No other transaction can modify the data while shared locks exist on the resource. Shared locks on
a resource are released as soon as the read operation completes, unless the isolation level is set
to repeatable read or higher.
Exclusive Locks
These locks prevent access to a resource by concurrent transactions. With an exclusive
lock, no other transaction can modify the data, and read operations can take place only with the
read uncommitted isolation level or the NOLOCK hint. DML statements such as INSERT, DELETE, and
UPDATE combine modification and read operations: they first perform a read operation
to get data before modifying it, and so usually request both exclusive and
shared locks. For example, an UPDATE statement may modify rows in
one table based on a join with another table. In that case, the UPDATE statement requests
shared locks on the rows it reads from the joined table and exclusive locks on the rows it
modifies.
Intent Locks
The Database Engine uses intent locks to protect the placement of a shared or exclusive lock on a
resource lower in the lock hierarchy. They are named intent locks because they
are acquired before a lock at the lower level and hence signal the intent to place locks at the
lower level. An intent lock serves two purposes:
To prevent other transactions from modifying the higher-level resource in a way that would
invalidate the lock at the lower level.
To improve the efficiency of the Database Engine in detecting lock conflicts at the higher
level of granularity.
For example, a shared intent lock is requested at the table level before shared locks are requested
on rows or pages within that table. Setting the intent lock at the table level prevents another
transaction from subsequently acquiring an exclusive lock on the table containing those pages.
Intent lock modes include Intent Shared (IS), Intent Exclusive (IX), and Shared with Intent
Exclusive (SIX).
Table 14.4 lists the lock modes in Intent locks.
Shared intent update (SIU): Provides the combination of S and IU locks, as a result of acquiring
these locks separately and simultaneously holding both locks.
Update intent exclusive (UIX): Provides the combination of U and IX locks, as a result of acquiring
these locks separately and simultaneously holding both locks.
Table 14.4: Lock Modes in Intent Locks
Schema Locks
Schema modification locks are used by the Database Engine while performing a DDL operation on a
table, such as dropping a table or a column. Schema modification locks prevent concurrent access to
the table: a schema lock blocks all external operations until the lock is released.
Some DML operations, such as truncating a table, use schema modification locks to prevent access
to the affected tables by concurrent operations.
Schema stability locks are used by the Database Engine while compiling and executing queries.
These stability locks do not block any transaction locks, including exclusive locks. Hence,
transactions that hold X locks on the table continue to execute during query compilation.
However, concurrent DML and DDL operations that require schema modification locks cannot
be performed on the table.
Key-Range Locks
These locks protect a collection of rows implicitly present in a recordset being read by a
Transact-SQL statement under the serializable transaction isolation level.
Key-range locks prevent phantom reads. By protecting the ranges of keys between rows, they also
prevent phantom insertions or deletions into a recordset accessed by a transaction.
The transaction log is a critical component of the database, and, if a system failure occurs, the
transaction log is required to bring the database back to a consistent state. The transaction log
should not be moved or deleted until the consequences of doing so are understood. The operations
supported by the transaction log are as follows:
Recovery of individual transactions
Recovery of all incomplete transactions when SQL Server starts
Transactional replication support
Disaster recovery solutions and high availability support
Rolling a restored database, file, filegroup, or page forward to the point of failure
3. Identify the function that returns a number of BEGIN TRANSACTION statements that occur in the
current connection.
4. Which of the following is not the concurrency effect allowed by the different isolation levels?
5. Match the types of resource lock modes in SQL Server 2012 against their corresponding
descriptions.
(A) a-1, b-4, c-2, d-3, e-5 (C) a-2, b-4, c-3, d-5, e-1
(B) a-5, b-4, c-3, d-2, e-1 (D) a-1, b-2, c-3, d-4, e-5
14.11.1 Answers
1. D
2. B
3. C
4. D
5. B
Summary
A transaction is a sequence of operations that works as a single unit.
Transactions can be controlled by an application by specifying a beginning and an ending.
BEGIN TRANSACTION marks the beginning point of an explicit or local transaction.
COMMIT TRANSACTION marks an end of a successful implicit or explicit transaction.
ROLLBACK with an optional keyword WORK rolls back a user-specified transaction to the beginning
of the transaction.
@@TRANCOUNT is a system function that returns the number of BEGIN TRANSACTION statements
that have occurred in the current connection.
Isolation levels describe the extent to which a single transaction
must be isolated from changes made by other transactions.
The SQL Server Database Engine locks the resources using different lock modes, which determine
the resources that are accessible to concurrent transactions.
Try It Yourself
1. Zamora Electronics Ltd. employs more than 500 workers in its units. Some of these are at junior level
while some are at senior level depending upon their expertise and years of experience. Each employee
is given annual leave based on the designation. The management at Zamora Electronics Ltd. is
planning to computerize their human resources department and all the data pertaining to employees
will now be stored in an SQL Server 2012 database. The company has made some changes in the
leave policy of the employees and wants to update the same in their tables. Assume that you are
the database administrator and that you are assigned the following tasks:
Create a transaction to update the records in the table as per the new leave policy.
Check if the transactions are updated in the appropriate table.
If the transactions are not updated, ensure that they are rolled back with the
appropriate error messages.
Table 14.6 lists the EmployeeDetails table.
15.1 Introduction
SQL Server offers a number of techniques that make error handling easier and more efficient. Often,
it is not possible to capture errors that occur at the user's end. SQL Server provides the TRY…CATCH
statement, which helps to handle errors effectively at the back end. There are also a number of system
functions that print error-related information, which can help fix errors easily.
Syntax Errors
Syntax errors are the errors that occur when code cannot be parsed by SQL Server. Such errors are
detected by SQL Server before beginning the execution process of a Transact-SQL block or stored
procedure.
Some scenarios where syntax errors occur are as follows:
If a user types an operator or a keyword incorrectly, the code editor displays a tooltip
showing the error. Figure 15.1 displays an example of a syntax error.
In figure 15.1, the SET operator is used incorrectly in the Transact-SQL statement; hence, a syntax
error will be raised.
If a user types a keyword or an operator incorrectly because the user does not remember its
valid usage, the code editor indicates this appropriately.
Run-time Errors
Run-time errors are errors that occur when the application tries to perform an action that is
supported neither by SQL Server nor by the operating system. Run-time errors are sometimes
difficult to fix as they are not clearly identified or are external to the database.
Some instances where run-time errors can occur are as follows:
Performing a calculation such as division by 0
Trying to execute code that is not clearly defined
When executing some DML statements such as INSERT, DELETE, and UPDATE, users can handle
errors to ensure correct output.
When a transaction fails and the user needs to roll back the transaction, an appropriate error
message can be displayed.
When working with cursors in SQL Server, users can handle errors to ensure correct results.
15.4 TRY…CATCH
TRY…CATCH statements are used to implement exception handling in Transact-SQL. One or more
Transact-SQL statements can be enclosed within a TRY block. If an error occurs in the TRY block, the
control is passed to the CATCH block that may contain one or more statements.
Syntax:
BEGIN TRY
{ sql_statement | statement_block }
END TRY
BEGIN CATCH
[ { sql_statement | statement_block } ]
END CATCH
[;]
where,
sql_statement: specifies any Transact-SQL statement.
statement_block: specifies the group of Transact-SQL statements in a BEGIN…END block.
A TRY…CATCH construct will catch all run-time errors that have severity higher than 10 and that do not
close the database connection. A TRY block is followed by a related CATCH block. A TRY…CATCH block
cannot span multiple batches or multiple blocks of Transact-SQL statements.
If there are no errors in the TRY block, after the last statement in the TRY block has executed, control is
passed to the next statement following the END CATCH statement. If there is an error in the TRY block,
control is passed to the first statement inside the CATCH block. If END CATCH is the last statement in a
trigger or a stored procedure, control is passed back to the calling block.
BEGIN TRY
DECLARE @num int;
SELECT @num=217/0;
END TRY
BEGIN CATCH
PRINT 'Error occurred, unable to divide by 0'
END CATCH;
In this code, an attempt is made to divide a number by zero. This will cause an error, hence, the
TRY…CATCH statement is used here to handle the error.
Both TRY and CATCH blocks can contain nested TRY…CATCH constructs. For example, a CATCH block
can have an embedded TRY…CATCH construct to handle errors faced by the CATCH code. Errors that are
encountered in a CATCH block are treated just like errors that are generated elsewhere. If the CATCH
block encloses a nested TRY…CATCH construct, any error in the nested TRY block passes the control
to the nested CATCH block. If there is no nested TRY…CATCH construct the error is passed back to the
caller.
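The nesting behavior described here can be sketched as follows. This is an illustrative example, not one of the book's numbered code snippets: the outer CATCH block contains its own TRY…CATCH construct to handle an error raised by the error-handling code itself.

```sql
BEGIN TRY
    SELECT 217/0;   -- outer error: control passes to the outer CATCH block
END TRY
BEGIN CATCH
    -- A nested TRY...CATCH inside the CATCH block handles errors
    -- raised by the error-handling code itself.
    BEGIN TRY
        RAISERROR ('Error raised inside the CATCH block.', 16, 1);
    END TRY
    BEGIN CATCH
        PRINT 'Nested CATCH handled: ' + ERROR_MESSAGE();
    END CATCH;
END CATCH;
```

If the nested TRY…CATCH construct were removed, the RAISERROR error would instead be passed back to the caller, as described above.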
TRY…CATCH constructs can also catch unhandled errors from triggers or stored procedures that execute
through the code in TRY block. However, as an alternative approach, triggers or stored procedures can
also enclose their own TRY…CATCH constructs to handle errors generated through their code.
GOTO statements can be used to jump to a label inside the same TRY…CATCH block or to leave the
TRY…CATCH block. The TRY…CATCH construct should not be used in a user-defined function.
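A minimal sketch of the GOTO behavior described above (an illustrative example, not one of the book's numbered snippets; the label name Finish is arbitrary):

```sql
BEGIN TRY
    SELECT 217/0;   -- raises a divide-by-zero error
END TRY
BEGIN CATCH
    PRINT 'Error handled; leaving the construct.';
    GOTO Finish;    -- GOTO can be used to leave the TRY...CATCH block
END CATCH;
PRINT 'This statement is skipped because of the GOTO.';
Finish:
PRINT 'Execution continues after the label.';
```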
It is a good practice to display error information along with the error, so that it can help to solve the error
quickly and efficiently.
To achieve this, system functions need to be used in the CATCH block to find information about the error
that initiated the CATCH block to execute.
ERROR_PROCEDURE(): returns the name of the trigger or stored procedure where the error
occurred.
ERROR_LINE(): returns the line number that caused the error.
ERROR_MESSAGE(): returns the complete text of the error. The text contains the values supplied
for the parameters such as object names, lengths, or times.
The functions return NULL when they are called outside the scope of the CATCH block.
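This scoping rule can be demonstrated with a short sketch (illustrative, not one of the book's numbered snippets): inside the CATCH block the function returns error details, while the same call after END CATCH returns NULL.

```sql
BEGIN TRY
    SELECT 217/0;   -- raises a divide-by-zero error
END TRY
BEGIN CATCH
    SELECT ERROR_LINE() AS LineInsideCatch;   -- returns the line number of the error
END CATCH;

SELECT ERROR_LINE() AS LineOutsideCatch;      -- returns NULL outside the CATCH scope
```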
USE AdventureWorks2012;
GO
BEGIN TRY
SELECT 217/0;
END TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_LINE() AS ErrorLine,
ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
GO
In this code, the SELECT statement will cause a divide-by-zero error that is handled using the
TRY…CATCH statement. The error causes the execution to jump to the associated CATCH block
within which the error information will be displayed.
Figure 15.6 displays the result of the error information. The first resultset is blank because the
statement fails.
USE AdventureWorks2012;
GO
IF OBJECT_ID ( 'sp_ErrorInfo', 'P' ) IS NOT NULL
DROP PROCEDURE sp_ErrorInfo;
GO
CREATE PROCEDURE sp_ErrorInfo
AS
SELECT
ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState,
ERROR_PROCEDURE() AS ErrorProcedure,
ERROR_LINE() AS ErrorLine,
ERROR_MESSAGE() AS ErrorMessage;
GO
BEGIN TRY
SELECT 217/0;
END TRY
BEGIN CATCH
EXECUTE sp_ErrorInfo;
END CATCH;
In this code, when an error occurs, the CATCH block of the TRY…CATCH construct is called and the
error information is returned.
USE AdventureWorks2012;
GO
BEGIN TRANSACTION;
BEGIN TRY
DELETE FROM Production.Product
WHERE ProductID = 980;
END TRY
BEGIN CATCH
SELECT
ERROR_SEVERITY() AS ErrorSeverity
,ERROR_NUMBER() AS ErrorNumber
,ERROR_PROCEDURE() AS ErrorProcedure
,ERROR_STATE() AS ErrorState
,ERROR_MESSAGE() AS ErrorMessage
,ERROR_LINE() AS ErrorLine;
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
END CATCH;
IF @@TRANCOUNT > 0
COMMIT TRANSACTION;
GO
In this code, the TRY…CATCH block works within the transaction. The statement inside the TRY
block generates a constraint violation error as follows:
The DELETE statement is conflicted with the REFERENCE constraint
"FK_BillOfMaterials_Product_ProductAssemblyID". The conflict occurred in
database "AdventureWorks2012", table "Production.BillOfMaterials", column
'ProductAssemblyID'.
Uncommittable Transactions
If an error is generated in a TRY block, it causes the state of the current transaction to become
invalid, and the transaction is then considered an uncommittable transaction. An uncommittable
transaction can perform only ROLLBACK TRANSACTION or read operations. It cannot execute any
Transact-SQL statement that performs a COMMIT TRANSACTION or a write operation.
The XACT_STATE function returns -1 if the transaction has been classified as an uncommittable
transaction. When a batch is completed, the Database Engine rolls back any uncommittable
transactions. If no error message was sent when the transaction entered the uncommittable state,
then when the batch finishes, an error message is sent to the client. This indicates that an
uncommittable transaction was detected and rolled back.
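A common pattern is to test XACT_STATE() in the CATCH block and roll back only when the transaction is uncommittable. The following is an illustrative sketch, reusing the DELETE statement from the earlier transaction example:

```sql
BEGIN TRANSACTION;
BEGIN TRY
    -- This DELETE may fail with a constraint violation, as shown earlier
    DELETE FROM Production.Product
    WHERE ProductID = 980;
END TRY
BEGIN CATCH
    IF XACT_STATE() = -1
        ROLLBACK TRANSACTION;   -- uncommittable: only a rollback is possible
    ELSE IF XACT_STATE() = 1
        COMMIT TRANSACTION;     -- an error occurred, but the transaction is still committable
END CATCH;
IF @@TRANCOUNT > 0
    COMMIT TRANSACTION;         -- commit if the TRY block completed without error
GO
```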
15.6 @@ERROR
The @@ERROR function returns the error number for the last Transact-SQL statement executed.
The following is the syntax for the @@ERROR function.
Syntax:
@@ERROR
The @@ERROR system function returns a value of the integer type. It returns 0 if the previous
Transact-SQL statement encountered no errors, and an error number if the previous statement
encountered an error. If the error was one of the errors listed in the sys.messages catalog
view, @@ERROR contains the value from the sys.messages.message_id column for that error. Users can view
the text associated with an @@ERROR error number in the sys.messages catalog view.
Code Snippet 5 demonstrates how to use @@ERROR to check for a constraint violation.
Code Snippet 5:
USE AdventureWorks2012;
GO
BEGIN TRY
UPDATE HumanResources.EmployeePayHistory
SET PayFrequency = 4
WHERE BusinessEntityID = 1;
END TRY
BEGIN CATCH
IF @@ERROR = 547
PRINT N'Check constraint violation has occurred.';
END CATCH
In this code, @@ERROR is used to check for a check constraint violation (which has error number 547) in
an UPDATE statement.
It displays the following error message:
Check constraint violation has occurred.
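Because @@ERROR is reset by every statement that executes, its value must either be tested immediately or saved into a local variable first. The following is an illustrative sketch of the save-first pattern, reusing the UPDATE statement from Code Snippet 5 (the variable name @SavedError is an assumption):

```sql
DECLARE @SavedError int;

UPDATE HumanResources.EmployeePayHistory
SET PayFrequency = 4
WHERE BusinessEntityID = 1;

-- Capture @@ERROR immediately; any subsequent statement resets it
SET @SavedError = @@ERROR;

IF @SavedError = 547
    PRINT N'Check constraint violation has occurred.';
ELSE IF @SavedError <> 0
    PRINT N'An unexpected error occurred.';
```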
15.7 RAISERROR
The RAISERROR statement starts the error processing for a session and displays an error message.
RAISERROR can reference a user-defined message stored in the sys.messages catalog view or build
dynamic error messages at run-time. The message is returned as a server error message to the calling
application or to the associated CATCH block of a TRY…CATCH construct.
The following is the syntax for the RAISERROR statement.
Syntax:
where,
msg_id: specifies the user-defined error message number that is stored in the sys.messages
catalog view, added using sp_addmessage.
msg_str: Specifies the user-defined messages with formatting. msg_str is a string of characters
with optional embedded conversion specifications. A conversion specification has the following
format:
% [[flag] [width] [. precision] [{h | l}]] type
The parameters that can be used in msg_str are as follows:
{h | l} type: Used with the character types d, i, o, s, x, X, or u to
create short int (h) or long int (l) values.
The following are some of the type specifications:
d or i: Specifies the signed integer
o: Specifies the unsigned octal
x or X: Specifies the unsigned hexadecimal
flag: Specifies the code that determines the spacing and justification of the substituted value.
This can include symbols such as - (minus) and + (plus) to specify left-justification or to
indicate the value is a signed type respectively.
precision: Specifies the maximum number of characters taken from the argument value for
string values. For example, if a string has five characters and the precision is 2, only the first
two characters of the string value are used.
width: Specifies an integer that defines the minimum width for the field in which the
argument value is placed.
@local_variable: Specifies a variable of any valid character data type that contains string
formatted in the same way as msg_str.
severity: Severity levels from 0 through 18 can be specified by any user. Severity levels from 19
through 25 can be specified only by members of the sysadmin fixed server role or users with ALTER
TRACE permissions. For severity levels from 19 through 25, the WITH LOG option is required.
option: Specifies the custom option for the error.
Table 15.1 lists the values for the custom options.
Value Description
LOG Records the error in the error log and the application log for the
instance of the Microsoft SQL Server Database Engine.
NOWAIT Sends message directly to the client
SETERROR Sets the ERROR_NUMBER and @@ERROR values to msg_id or 5000
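As an illustrative sketch of how a stored message number works together with RAISERROR and the options above (the message number 50005 and its text are assumptions, not from the original snippets):

```sql
-- Register a user-defined message; numbers 50001 and above are available to users
EXEC sp_addmessage
    @msgnum = 50005,
    @severity = 16,
    @msgtext = N'The item %s could not be processed.';
GO
-- Raise the stored message by msg_id; the argument replaces the %s specification
RAISERROR (50005, 16, 1, N'Widget') WITH NOWAIT;
GO
-- Remove the message when it is no longer needed
EXEC sp_dropmessage @msgnum = 50005;
```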
The following errors are returned to the caller if RAISERROR executes:
Code Snippet 6:
In this code, the RAISERROR statement substitutes the first argument, N'serial number', for the first
conversion specification, %s, and the second argument, 23, for the second conversion specification, %d.
The code snippet displays 'This is error message serial number 23'. Code Snippet 7
demonstrates how to use the RAISERROR statement to return the same string.
Code Snippet 7:
In this code, the RAISERROR statements return the same string, Hel. The first statement specifies the
width and the precision values and the second statement specifies the conversion specification.
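Since the snippets themselves appear as figures, the substitutions described above might be reconstructed as follows. This is an illustrative sketch consistent with the surrounding text, not the original figures:

```sql
-- The first argument replaces %s, the second replaces %d
RAISERROR (N'This is error message %s %d.', 10, 1, N'serial number', 23);
-- Displays: This is error message serial number 23.

-- A precision of 3 takes only the first three characters of the string argument
RAISERROR (N'%.3s', 10, 1, N'Hello');
-- Displays: Hel
```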
Users can also return error information from a CATCH block.
Code Snippet 8 demonstrates how to use RAISERROR statement inside the TRY block.
Code Snippet 8:
BEGIN TRY
RAISERROR ('Raises Error in the TRY block.', 16, 1 );
END TRY
BEGIN CATCH
DECLARE @ErrorMessage NVARCHAR(4000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
SELECT
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState );
END CATCH;
In this code, the RAISERROR statement used inside the TRY block has severity 16, which causes the
execution to jump to the associated CATCH block.
RAISERROR is then used inside the CATCH block to return the error information about the original error.
15.8 ERROR_STATE
The ERROR_STATE system function returns the state number of the error that causes the CATCH block of
a TRY…CATCH construct to execute. The following is the syntax for the ERROR_STATE system function.
Syntax:
ERROR_STATE ( )
When called in a CATCH block, it returns the state number of the error message that caused the CATCH
block to be run. This returns a NULL when it is called outside the scope of a CATCH block.
Specific error messages are raised at various points in the code for the SQL Server Database
Engine. For example, error 1105 is raised for several different conditions. Each specific condition that
raises the error assigns a unique state code.
ERROR_STATE can be called from anywhere within the scope of a CATCH block. It returns the
error state regardless of how many times it is executed or where it is executed within the scope of
the CATCH block. This is in contrast to functions such as @@ERROR, which returns the error
number only in the statement immediately after the one that caused the error, or in the first
statement of a CATCH block.
Users can use ERROR_STATE in a CATCH block. Code Snippet 9 demonstrates how to use the
ERROR_STATE function inside a CATCH block.
Code Snippet 9:
BEGIN TRY
SELECT 217/0;
END TRY
BEGIN CATCH
SELECT ERROR_STATE() AS ErrorState;
END CATCH;
GO
In this code, the SELECT statement generates a divide-by-zero error. The CATCH block then
returns the state of the error, and the ERROR_STATE is displayed as 1.
15.9 ERROR_SEVERITY
The ERROR_SEVERITY function returns the severity of the error that causes the CATCH block of a
TRY…CATCH construct to be executed.
The following is the syntax for ERROR_SEVERITY.
Syntax:
ERROR_SEVERITY ( )
It returns a NULL value if called outside the scope of the CATCH block. ERROR_SEVERITY can be called
anywhere within the scope of a CATCH block. In nested CATCH blocks, ERROR_SEVERITY will return the
error severity that is specific to the scope of the CATCH block where it is referenced. Users can use the
ERROR_SEVERITY function in a CATCH block.
Code Snippet 10 shows how to display the severity of the error.
Code Snippet 10:
BEGIN TRY
SELECT 217/0;
END TRY
BEGIN CATCH
SELECT ERROR_SEVERITY() AS ErrorSeverity;
END CATCH;
GO
In this code, an attempt to divide by zero generates the error and causes the CATCH block to display the
severity error as 16.
15.10 ERROR_PROCEDURE
The ERROR_PROCEDURE function returns the trigger or a stored procedure name where the error has
occurred that has caused the CATCH block of a TRY…CATCH construct to be executed. The following is
the syntax of the ERROR_PROCEDURE.
Syntax:
ERROR_PROCEDURE ( )
It returns the nvarchar data type. When the function is called in a CATCH block, it will return the name
of the stored procedure where the error occurred. The function returns a NULL value if the error has not
occurred within a trigger or a stored procedure. ERROR_PROCEDURE can be called from anywhere in the
scope of a CATCH block. The function also returns NULL if this function is called outside the scope of a
CATCH block.
In nested CATCH blocks, the ERROR_PROCEDURE returns the trigger or stored procedure name specific
to the scope of the CATCH block where it is referenced.
Code Snippet 11 shows the use of the ERROR_PROCEDURE function.
Code Snippet 11:
USE AdventureWorks2012;
GO
IF OBJECT_ID ( 'usp_Example', 'P' ) IS NOT NULL
DROP PROCEDURE usp_Example;
GO
CREATE PROCEDURE usp_Example
AS
SELECT 217/0;
GO
BEGIN TRY
EXECUTE usp_Example;
END TRY
BEGIN CATCH
SELECT ERROR_PROCEDURE() AS ErrorProcedure;
END CATCH;
GO
In this code, the stored procedure usp_Example generates a divide-by-zero error. The ERROR_PROCEDURE
function accordingly returns the name of this stored procedure where the error has occurred.
Code Snippet 12 demonstrates the use of ERROR_PROCEDURE function along with other functions.
Code Snippet 12:
USE AdventureWorks2012;
GO
IF OBJECT_ID ( 'usp_Example', 'P' ) IS NOT NULL
DROP PROCEDURE usp_Example;
GO
CREATE PROCEDURE usp_Example
AS
SELECT 217/0;
GO
BEGIN TRY
EXECUTE usp_Example;
END TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState,
ERROR_PROCEDURE() AS ErrorProcedure,
ERROR_MESSAGE() AS ErrorMessage,
ERROR_LINE() AS ErrorLine;
END CATCH;
GO
This code makes use of several error handling system functions that can help to detect and rectify an
error easily.
15.11 ERROR_NUMBER
The ERROR_NUMBER system function when called in a CATCH block returns the error number of the error
that causes the CATCH block of a TRY…CATCH construct to be executed. The following is the syntax of
ERROR_NUMBER.
Syntax:
ERROR_NUMBER ( )
The function can be called from anywhere inside the scope of a CATCH block. The function will return
NULL when it is called out of the scope of a CATCH block.
ERROR_NUMBER returns the error number irrespective of how many times it executes or where it
executes within the scope of a CATCH block. This is different from @@ERROR, which returns the
error number only in the statement immediately after the one that caused the error, or in the first
statement of a CATCH block.
Code Snippet 13 demonstrates the use of ERROR_NUMBER in a CATCH block.
Code Snippet 13:
BEGIN TRY
SELECT 217/0;
END TRY
BEGIN CATCH
SELECT ERROR_NUMBER() AS ErrorNumber;
END CATCH;
GO
As a result of this code, the error number is displayed when the attempted division by zero occurs.
15.12 ERROR_MESSAGE
The ERROR_MESSAGE function returns the text message of the error that causes the CATCH block of a
TRY…CATCH construct to execute.
The following is the syntax of ERROR_MESSAGE.
Syntax:
ERROR_MESSAGE ( )
When the ERROR_MESSAGE function is called in the CATCH block, it returns the full text of the error
message that causes the CATCH block to execute. The text includes the values that are supplied for any
parameter that can be substituted such as object names, times, or lengths. It also returns NULL if it is
called outside the scope of a CATCH block.
Code Snippet 14 demonstrates the use of ERROR_MESSAGE in a CATCH block.
Code Snippet 14:
BEGIN TRY
SELECT 217/0;
END TRY
BEGIN CATCH
SELECT ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
GO
In this code, similar to other examples, the SELECT statement generates a divide-by-zero error. The
CATCH block displays the error message.
15.13 ERROR_LINE
The ERROR_LINE function returns the line number at which the error occurred in the TRY…CATCH
block.
The following is the syntax of ERROR_LINE.
Syntax:
ERROR_LINE ( )
When this function is called in the CATCH block, it returns the line number where the error has occurred.
If the error has occurred within a trigger or a stored procedure, it returns the line number in that trigger
or stored procedure. Similar to other functions, this function returns a NULL if it is called outside the
scope of a CATCH block.
Code Snippet 15 demonstrates the use of ERROR_LINE in a CATCH block.
Code Snippet 15:
BEGIN TRY
SELECT 217/0;
END TRY
BEGIN CATCH
SELECT ERROR_LINE() AS ErrorLine;
END CATCH;
GO
As a result of this code, the line number at which the error has occurred will be displayed.
An error that has a severity of 20 or higher that stops the SQL Server Database Engine task
processing for the session. If an error that has a severity of 20 or higher occurs and the database
connection is not disrupted, TRY…CATCH will handle the error.
When the session is ended by a KILL statement issued by the system administrator.
The following types of errors are not handled by a CATCH block that occur at the same execution level as
that of the TRY…CATCH construct:
Compile errors such as syntax errors that restrict a batch from running
Errors that arise during statement-level recompilation, such as object name resolution errors
that occur after compilation because of deferred name resolution.
Code Snippet 16 demonstrates how an object name resolution error is generated by the SELECT
statement.
Code Snippet 16:
USE AdventureWorks2012;
GO
BEGIN TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() AS ErrorNumber,
ERROR_MESSAGE() AS ErrorMessage;
END CATCH
This code will cause the object name resolution error in the SELECT statement. It will not be
caught by the TRY…CATCH construct.
Running a similar SELECT statement inside a stored procedure causes the error to occur at a level
lower than the TRY block. The error is handled by the TRY…CATCH construct.
Code Snippet 17 demonstrates how the error message is displayed in such a case.
15.15 THROW
The THROW statement raises an exception and transfers control of the execution to a CATCH block of a
TRY…CATCH construct.
where,
error_number: specifies a constant or variable that represents the error number as int.
message: specifies a variable or string that describes the exception as nvarchar(2048).
state: specifies a constant or variable between 0 and 255 that indicates the state to associate
with the message as tinyint.
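Since the formal syntax appears as a figure, a minimal illustrative use of THROW with explicit arguments might look as follows (the error number 50000 and the message text are assumptions; user-defined error numbers for THROW must be 50000 or higher):

```sql
-- The statement before THROW must be terminated by a semicolon
THROW 50000, N'The record does not exist.', 1;
```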
Code Snippet 18 demonstrates the use of THROW statement to raise an exception again.
Code Snippet 18:
USE tempdb;
GO
CREATE TABLE dbo.TestRethrow
( ID INT PRIMARY KEY
);
BEGIN TRY
INSERT dbo.TestRethrow(ID) VALUES(1);
INSERT dbo.TestRethrow(ID) VALUES(1);
END TRY
BEGIN CATCH
THROW;
END CATCH;
In this code, the THROW statement is used to raise once again the exception that had last occurred.
2. Which of the following constructs can catch unhandled errors from triggers or stored procedures?
3. Which of the following functions returns the error number for the last Transact-SQL statement
executed?
4. Which of these functions returns the severity of the error that causes the CATCH block of a
TRY…CATCH construct to be executed?
5. The ________________ statement raises an exception and transmits the execution to a CATCH block
of a TRY…CATCH construct in SQL Server 2012.
15.16.1 Answers
1. (A)
2. (B)
3. (C)
4. (D)
5. (C)
Summary
Syntax errors are the errors that occur when code cannot be parsed by SQL Server.
Run-time errors occur when the application tries to perform an action that is supported neither
by Microsoft SQL Server nor by the operating system.
TRY…CATCH constructs can also catch unhandled errors from triggers or stored procedures that
execute through the code in a TRY block.
GOTO statements can be used to jump to a label inside the same TRY…CATCH block or to leave a
TRY…CATCH block.
Various system functions are available in Transact-SQL to print error information about the error
that occurred.
The RAISERROR statement is used to start the error processing for a session and displays an error
message.
Try It Yourself
1. For the transactions created in the Try It Yourself of Session 10 and 15, add error-handling statements
to take care of the errors.
2. Acme Technologies Private Limited is a leading software company located in New York. The company
has won many awards as one of the best providers of software technologies. The
company has received many new projects on mobile and Web development. At present, they are
working on a database project for Payroll Management System in SQL Server 2012.
They have created the database on Payroll management system for the employees. While creating
the tables, they receive different types of errors. Assume that you are the database administrator
of Acme Technologies and the Technical head has assigned you the task of rectifying the errors.
Perform the following steps:
a. Write error-handling statements using the TRY…CATCH construct for both normal statements
as well as stored procedures.