
Chapter 2: Standards

Overview
Standards are the keystone of an SQS. They provide the basis against which activities can be
measured and evaluated. Further, they provide common methods and practices so that the same
task can be accomplished the same way each time it is done.

Standards applied to software development provide uniform direction on how the development is
to be conducted. Standards also apply to the balance of the SLC. They may prescribe everything
from the form on which an original system concept is submitted for consideration to where
four-ply printer paper is stored in the computer center. The degree of standardization is, of
course, a company decision. It is important, however, that the development portion of the SLC be
standardized as much as is practical. Through the use of intelligent standards, productivity
gains can be realized, since many mundane decisions need not be made anew every time a software
system is undertaken.

Standards arise from many sources. They may come from the day-to-day activities within the
organization, as a "best way to do it" surfaces in some area. An example of this might be the
method by which access to the interactive software development facility is allocated. Companies
within a given industry often band together to set standards so that their products can be used
together or so that information passing between them means the same thing to everyone (e.g., the
telephone industry standards for interconnection). Computer user groups and computer-industry
associations often work on standards dealing with software development. A subgroup of the
IEEE, the Software Engineering Standards Committee, develops standards dealing with software
development. These standards deal with topics ranging from the SLC as a whole down through
individual activities within the life cycle, such as testing and documentation. Still another source
of standards is outside consulting firms that can be retained to study an individual company's
specific situation and develop a set of standards especially tailored to that company's needs.

More and more organizations are recognizing the importance of stable, identified processes and
their relationship to the overall direction of the organization. Standards play an important role in
the development, maintenance, and execution of organizational mission statements, policies, and
process procedures. External standards often define or limit the breadth of an organization's
freedom in the conduct of its business. Internal standards define the organization's own
performance expectations and requirements. Definition of processes is often in the context of
standards. These are all subject to evaluation during process reviews.

Standards are one of the yardsticks against which the processes of software
development and usage can be evaluated. Deviation from the various applicable
standards is an indication that the software development process is veering away
from the production of quality software.
2.1 Areas of standardization
Standardization may be applied to any or all of the areas of software development and
maintenance. Such broad use of standards is rarely the case and usually not appropriate. Areas
that are usually involved in the standardization effort include, but are by no means limited to,
the following:

SLC;

Documentation;

Coding;

Naming;

Operating procedures and protocols;

User development.

2.1.1 The software life cycle

The SLC describes the whole software process from conception through retirement of a given
system. Two life cycles are used in discussing software. The overall SLC for a system, an
example of which is shown in Figure 2.1, begins with the original idea for the software system,
or its conception, and the evaluation of that concept for necessity and feasibility. The life cycle
ends when the software system is retired from use and set aside. Figure 2.1 also shows that,
within the full life cycle, there is the SDLC. This is the portion of the overall SLC that deals
expressly with the development of the software system. It begins with the formation of the
formal requirements documentation, which states specifically what the system will do, and ends
with the implementation of the system into full use. Clearly, there are other software
development paradigms; the one shown is simply a commonly used example.

Figure 2.1: Two software life cycles.

The SLC, and thus the SDLC, is usually divided into portions of work or effort called phases. In
the life of a software system, many functions or activities are performed. These activities are
grouped into phases so that they can be conveniently referenced, monitored, and managed. In
Figure 2.1, the SLC is divided into six major phases, plus the effort required to retire a system at
the end of its useful life. The SDLC is composed of the middle four major phases. In any
particular organization, the various activities may be grouped differently, or the phases may be
combined, further divided, or given different names.

It is appropriate at this point to recognize the methodology called prototyping. Prototyping, a
simplified overview of which is presented in Figure 2.2, is an increasingly popular adjunct to the
SDLC as we present it in this text. Prototyping has as its goal the quick analysis of the problem
to be solved and experimentation with potential solutions. Used properly, it is a powerful
requirements analysis and design tool. Used improperly, it can lead to undocumented and
unmaintainable software.

Figure 2.2: General prototyping approach.

A detailed discussion of prototyping is beyond the intent and scope of this text. It is the subject
of much current literature and the interested reader is encouraged to pursue the topic. It is
sufficient to observe that, while the development of a prototype system can support activities
within the SDLC, the prototyping development itself is expected to follow a standard SDLC.
Prototyping is often used when the requirements determination and expression techniques
include use cases. A use case is a description of what the software must do, or how it must
behave, if the user is exercising some particular functionality. As discussed in Chapter 4, use
cases are also popular means of determining test cases for the software.
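As a sketch of how a use case can feed testing, the fragment below models a use case as a small data structure and derives a skeleton test case from it. The field names and the withdraw_cash example are illustrative assumptions, not drawn from the text or from any use-case standard:

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """A use case: what the software must do when a user exercises
    one particular functionality. Field names are illustrative."""
    name: str
    actor: str
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    expected_result: str = ""

def derive_test_case(uc: UseCase) -> dict:
    """Turn a use case into a skeleton test case: setup from the
    preconditions, actions from the steps, and a pass criterion
    from the expected result."""
    return {
        "test_name": f"test_{uc.name}",
        "setup": uc.preconditions,
        "actions": uc.steps,
        "pass_criterion": uc.expected_result,
    }

withdraw = UseCase(
    name="withdraw_cash",
    actor="account holder",
    preconditions=["account exists", "balance >= amount"],
    steps=["insert card", "enter PIN", "request amount"],
    expected_result="cash dispensed and balance reduced",
)
print(derive_test_case(withdraw)["test_name"])  # -> test_withdraw_cash
```

The mapping is mechanical on purpose: each element of the use case has an obvious home in the test case, which is why use cases make convenient test-case sources.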

The SLC is the basis for many standards applicable to the development and use of quality
software. One of the first standards that should be prepared is the life-cycle description,
sometimes called the software development methodology. Which phases comprise the SLC and
the SDLC and which activities comprise each of the phases must be clearly delineated. Once the
life-cycle phases are defined, the process of determining proper subjects for standardization
within the life-cycle activities can begin.

Most standards will be applicable to activities during the SDLC, since that is where the heaviest
concentration of tasks is found. This in no way means that standards for the other phases should
be ignored or not prepared. As the SQS matures, it will determine, together with the rest of the
software organization, new areas to which standards can be usefully applied.

The arrival of computer-aided software engineering tools has opened another opportunity and
necessity for SLC standardization. Which tools to use; how to specify, acquire, and apply them;
and the interfaces among them may all need to be addressed by standards.
2.1.2 Documentation

Comprehensive documentation standards are a basic necessity for thorough design, test,
operation, and maintenance.

A major complaint against most software systems is that they are poorly documented. Too often,
documentation is done, if at all, after the software system is delivered. Thus,
while in the best of worlds the documentation describes the delivered system, it often fails to
describe what was originally requested by the customer. Further, there is often little
documentation of the test program applied to the software system, which makes the software's
ability to perform as desired suspect. Also, user documentation (how to use the software system)
is frequently accused of being unusable itself.

Standards for documentation should address two fronts: the documentation required for each
software system, and the format and content requirements for that documentation. Documents
prepared in nonstandard formats increase the likelihood of misunderstanding or incorrect
interpretation.

A comprehensive set of documentation standards can help assure that the documentation of a
software system is capable of the tasks for which it is intended. Without standards with regard to
what to document and how to document it, the system may well go into production and fail for
one of the following reasons:

It is not what the customer really wanted.

The users and operators do not know how it works.

The most important document, and frequently the least standardized and least well done, is the
requirements document. The requirements document is intended to spell out specifically the
problem or need to be addressed by the software. It must describe the intended software system
from an external, operational point of view. Once the requirements have been determined and
expressed, they must be managed. Every system being developed will undergo requirements
changes. Some will be necessary, some just "nice to have"; others actually may be harmful or
detrimental to the system as a whole. Without rigorous standards for the analysis, definition,
expression, and control of the requirements, a software development project is in danger of
failing to satisfy its users.

2.1.3 Coding

Coding standards can help reduce artistry and enhance clarity and maintainability.

Some coding standards take effect earlier than others, sometimes reaching back into the design
phases. A standard that calls for structured coding techniques will usually imply the imposition
of a standard that calls for structured design techniques. Similarly, standards requiring object-
oriented development techniques will often lead to standards for coding in one or another of the
newer languages that support object development.
Standards such as these are intended to permit greater understanding throughout the balance of
the SLC. Peers who are involved in walkthroughs and inspections are better able to understand
the code as they prepare for the review. Maintainers have a much easier time of correcting and
enhancing code that is well structured and follows adequate standards.

Some coding standards deal with which specific language is to be used. Many shops relying on
large mainframe-based systems still use Cobol or PL/I as their standard application language.
Another, differently oriented, development organization may standardize on Visual Basic, C++,
or Java. Some organizations have several standard languages, depending on which type of
application is being developed, or even specific characteristics of a given application.

Beyond standards specifying a given language, an organization may prepare standards for
subroutine calls, reentrant or recursive coding techniques, reuse of existing code, or standards
covering restrictions on verbs or coding constructs.

Most organizations have specific approaches that are preferred or, perhaps, prohibited. The
coding standards will reflect the needs and personality of the organization. A set of standards is
useful in creating an environment in which all programmers know the rules that govern their
work. If a coding convention is beneficial to the performance of the coding staff, it should be
made a standard so that all coding can benefit from it. On the other hand, if a particular coding
technique is found to be detrimental, a standard prohibiting its use is appropriate so that all
programmers know to avoid it.
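A prohibition of the kind just described can be enforced mechanically rather than by memory. The sketch below assumes a hypothetical house rule banning two constructs; the specific constructs and the check itself are illustrative, not taken from any published standard:

```python
# Hypothetical house rules: the prohibited constructs listed here are
# assumptions for illustration only.
PROHIBITED = ("goto", "eval(")

def check_line(line: str) -> list:
    """Return the coding-standard violations found on one line of source."""
    return [f"prohibited construct: {c}" for c in PROHIBITED if c in line]

print(check_line("result = eval(user_input)"))  # flags the eval( construct
print(check_line("total = a + b"))              # clean line -> []
```

A check like this, run as part of a build or review step, is one way a standard "prohibiting use" becomes something every programmer encounters automatically rather than a rule they must remember.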

2.1.4 Naming

Standard naming conventions assist in readability and CM. The standardization of naming
conventions for system components (units, modules), data, and even entry points within the code
is both easy and beneficial. There is usually little resistance to a consistent naming or labeling
scheme, and its major benefit is the ease of identifying the object being named or labeled.
Beyond that, CM, especially configuration identification, is much more difficult if there are no
consistent rules or standards for component identification.

Naming standards are based on consistent identifiers in specific locations within the name. As
Figure 2.3 shows, identifiers may be assigned to decreasing hierarchical levels within a system,
with the first characters specifying the system itself and subsequent characters defining lower
levels within the system. Data can be similarly named, as can subroutines and even external interfaces.
Figure 2.3: Identification based on hierarchy.
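One way such a convention can be put to work is a small routine that decomposes a component name into its hierarchical levels. The convention assumed here (three characters for the system, two for the subsystem, the remainder for the unit) is purely illustrative; any real organization would define its own fields:

```python
def parse_component_name(name: str) -> dict:
    """Split a component name into its hierarchical levels.

    Assumed convention (illustrative only): characters 1-3 name the
    system, 4-5 the subsystem, and the remainder the unit.
    """
    if len(name) < 6:
        raise ValueError(f"name too short for the convention: {name!r}")
    return {"system": name[:3], "subsystem": name[3:5], "unit": name[5:]}

def same_system(a: str, b: str) -> bool:
    """True if two components belong to the same software system."""
    return parse_component_name(a)["system"] == parse_component_name(b)["system"]

print(parse_component_name("PAY01CALC"))     # system PAY, subsystem 01, unit CALC
print(same_system("PAY01CALC", "PAY02RPT"))  # -> True
```

Because the system identifier sits in a fixed position, tools for configuration identification, test bookkeeping, and delivery manifests can all recover it without consulting any external list.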

The important point in naming conventions is that all components of a given software system can
be identified as belonging to that system. This, in turn, can simplify the bookkeeping for testing,
integrating, and delivering the system, since each component is uniquely identified. This, as will
be discussed later, is also very important for managing the overall configuration or version of a
product. To have the user or customer accept one version of the system and then mistakenly
deliver a different version obviously is undesirable.

Configuration identification, while going beyond the basic naming standards and conventions,
depends on unique identifiers for all components of a particular software system. It can perform
its function with whatever naming standards, conventions, schemes, or methods are used.
However, a standard naming convention greatly eases the configuration identification task.

The tasks of the software developer and tester are also simplified if standard naming conventions
are used. Confusion and doubt as to exactly which interface is to be exercised or which module is
to be tested are minimized. The developer can easily determine the subroutine to call or entry
point to use if there are standard rules and formats for the names of those items.

2.1.5 Operating procedures and protocols

Operating procedures are standardized so that everyone does the same thing the same way with a
given software system.

Standardizing the operational environment is important to software system results. Correct data,
entered in different ways, can give different, yet seemingly correct, results. Sequencing
subsystem operations in nonstandard ways may lead to varied results, all of which might be
taken as correct. To be sure, much of the opportunity for variation in use or operation of a
software system can be eliminated by the software itself. On the other hand, software cannot
easily control procedures, so standards are used to govern the remaining variables.

Standard user procedures tend to reduce errors and defects, improve system response time,
simplify user education, and increase understanding of system outputs.
Standards applied to users may address time of day or cycle considerations with respect to the
running of the system. A payroll system may be run on Friday as a standard to permit proper
interface with the timecard reporting system. A corresponding standard may call for running on
Thursday in holiday situations. By having standards for use, the user is not put in the position of
making decisions that could conflict with those made by someone else. Further, it reduces the
likelihood that a person making the same decision will make it differently from time to time.
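The payroll example above illustrates the point of a use standard: the decision is made once, in one place, and never re-decided by individual users. The sketch below encodes the assumed Friday/holiday-Thursday rule; the rule details and dates are illustrative, not from the text:

```python
import datetime

def payroll_run_day(week_friday: datetime.date, holidays: set) -> datetime.date:
    """Return the standard run date for the payroll week ending on Friday.

    Assumed standard (illustrative): payroll runs on Friday, or on the
    preceding Thursday when that Friday is a holiday.
    """
    if week_friday in holidays:
        return week_friday - datetime.timedelta(days=1)  # run Thursday instead
    return week_friday

friday = datetime.date(2024, 7, 5)        # a Friday in a holiday week
holidays = {datetime.date(2024, 7, 5)}
print(payroll_run_day(friday, holidays))  # -> 2024-07-04 (Thursday)
```

Encoding the standard this way means two different operators, or the same operator on two different weeks, cannot reach different conclusions about when to run.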

The standardization of operating procedures and protocols applies to large centralized data
centers, client-server installations, standalone and networked workstations, and specific
application systems. Specific application systems standards can regulate when the system is run,
how to recover from system crashes, and the like. Equally important, though, the overall
operation of the data center or network should have governing standards. Such things as
scheduled maintenance time, job-entry rules, mass storage allocation, remote job-entry
procedures, log-on and log-off procedures, password use, data access, and distributed computing
are all subjects for appropriate standardization. Such standards have high payback in smoother
operation, reduced errors and defects, and easier education of personnel.

2.1.6 User development

User development of software needs strict standards so that the actions of one user do not affect
other users or the data center itself.

The rapidly growing capability for user-developed software provides a fast, easy method of
providing quick service for small tasks. Another, associated area is the availability of off-the-
shelf software from both regular commercial suppliers and on-line bulletin boards. Software can
be purchased or downloaded and made into an integral part of larger systems being developed.
Users have the ability to buy a package, merge it with another package, write some special code
for their own needs, and run this amalgam of software without the intervention of the regular
software organization. While convenient and often productive, this has opened the door for
uncontrolled software development, potentially damaging access to the organizational database,
excessive loading of the data processing facilities, and wasteful duplication of effort and
resources. Standards for user development of software are needed to address these potential
conditions.

User development can take many forms. The development can be done by someone on behalf of
the user (but not via the established software development processes). For example, some years
ago, a company that made computer systems for restaurants and resorts had a salesman who
fancied himself a wizard programmer and configured some function keys at the top of the
keyboard to do specific tasks such as create an end-of-shift report. Customers were impressed by
his programming prowess and his ability to take a simple request and automate it quickly.

The trouble came later when a customer called the company's customer support center after a
power failure at the restaurant wiped out all the fancy function key programs created by the
salesman. When the customer began complaining that the F8 function did not work anymore,
customer support was baffled. It took some time, and the tracking down of that particular
salesman, to figure out what had gone wrong.
This is a simple case of uncontrolled, undocumented, and unsupervised development. Instead of
enhancing the customer relationship, as the salesman thought he was doing, this case of rogue
programming actually put the account in peril and potentially cost the company thousands of
dollars and a customer.

User understanding and observation of standards is required to avoid negative impact on the
overall data processing facility. Uncontrolled purchase of small, local (departmental)
computational facilities can be an unnecessary drain on a company's resources and can lead to
incompatibilities between local facilities and the main data center or network. Further, as
software is developed, it can, if unregulated, lead to data interface, integrity, and security
problems. Acquisition of software from nonstandard sources or suppliers also increases the
likelihood of virus infections and other security concerns.

User development of software can be a beneficial addition to the computational capabilities of an
organization. However, standards are easier to develop and enforce in traditional mainframe
environments, since all processing is done under a central operating system. As control and
processing move toward a decentralized environment, enforcement becomes more difficult. Not
only are user development standards more necessary, but increased surveillance of storage and
files is appropriate to reduce the chances of misuse of unauthorized or nonlicensed commercial
software. Standards for user development, ranging from equipment and language selection to
data security and networking, will permit maximum user flexibility and still maintain central
control for efficient overall data processing.

New standards governing user development of software may also change standards affecting user
work flow and tasks. Users should have the opportunity to participate in the standardization
activities, perhaps including a trial-use period. The quality practitioner will want to assure
that standards addressing the dangers inherent in uncontrolled user development do not create
unnecessary restrictions.

2.1.7 Emerging technologies

The software development and maintenance world is in a period of great expansion and change.
While most of the new technologies can be traced back to old methods and look more like
changes than innovations, the applications of the technologies are often new or at least different.
Some of us see object-oriented design and development as little more than a refinement of
subroutines and independent modules. Client-server technology probably really began when
IBM introduced its CICS transaction processing system; the clients were terminals, and the
server was a big mainframe. GUIs are no longer new but are now necessary. Further, the
proliferation of software for e-commerce has added a new dimension of technology.

In any event, developers are having a hard time finding standards to govern these technologies.
That is not to say that there are no standards available; some new concepts have gained wide
industrial acceptance and are more de facto standards than formal standards. This places the
burden on the users of these technologies to develop their own approaches and standards. The
alternative is to gamble on adopting one or another of the de facto standards and hope that the
industry as a whole goes in the same direction. The same is true for the burgeoning field of
multimedia software.

2.2 Sources of standards


It was stated in Section 2.1 that standards should cover as much of the overall SLC as is practical
and appropriate for a given organization. That is clearly a large, as well as important, task.
Certainly, in an organization of more than minimum size, it will involve more than just one or
two persons. Even then, to create all the standards needed can be an overwhelming task and one
that cannot be accomplished in a timely manner. The goal should be to identify the minimum set
of standards that will serve the organization's actual needs and then identify the sources for those
standards.

Software standards can come from many sources. The standards coordinator (or whoever has
responsibility for standards) can make use of any of the following standards-acquisition
methods and sources. The three main standards-acquisition methods are as follows:

1. Developing them in-house;

2. Buying them from consultants or other similar organizations;

3. Adapting industry-prepared and consensus-approved standards.

The three main standards sources are as follows:

1. External standards developers;

2. Purchased standards;

3. In-house development.

2.2.1 External standards developers

Standards can be found in several locations and are available from several sources. Some
externally available standards are useful as starting points for an in-house standards program.
Some are likely to be imposed as a condition of commerce with a particular business field or
with other countries.

One advantage to the standards developed by the various industry and other groups is that they
reflect the consensus of the groups involved. Several industry-segment points of view are usually
represented so that a wide range of applications is available. Table 2.1 presents some typical
standards subjects and representative external standards and sources applicable to them. The list
is certainly not all-inclusive, but it does indicate the breadth of standards available.

Table 2.1: Representative Standards Sources

Major Subject        Specific Area                Standards Developer and Number
-------------------  ---------------------------  ------------------------------
Software life cycle  Life-cycle processes         IEEE 1074, 1074.1; ISO/IEC 12207
                     Project management           IEEE 1058
                     Development                  DoD 498; IEEE 1074; ISO 12207
                     Reviews                      IEEE 1028, 1059; NIST 500-165
                     Testing                      IEEE 829, 1008, 1012, 1059; NIST 500-75, 500-165
                     Quality program              IEEE 1298; AS 3563.1, 3563.2; ISO 9000 et al.; NRC NUREG/CR-4640
                     Metrics                      IEEE 982.1, 982.2, 1044, 1044.1, 1045, 1061; ISO/IEC 9126
                     CASE tools                   IEEE 1175, 1209, 1343
Documentation        Quality plans                IEEE 730, 730.1; IEEE/EIA 1498/IS 640
                     Requirements specifications  IEEE 830; ISO/IEC 12207
                     Design specifications        IEEE 1016, 1016.1; ISO/IEC 12207
                     User documentation           IEEE 1063
Naming               CM                           IEEE 1042, 828; EIA 649
User development     Software packages            ISO/IEC 12119

International standards Of increasing interest is the activity within the international sector with
respect to software standards. The ISO has published what is called the 9000 series of standards
(ISO 9000, 9001, 9002, 9003, 9004) dealing with quality. Standard 9001, "Quality Systems-
Model for Quality Assurance in Design, Development, Production, Installation and Servicing," is
the one most often applied to software development since it includes all aspects of product
development from requirements control through measurements and metrics applied to the quality
program and the control of the products. (The 2000 edition merges 9002 and 9003 into 9001.)

Written from a primarily manufacturing point of view, ISO 9001 has been difficult to relate to
the software realm. Therefore, ISO subsequently added an annex to ISO 9000, called Part 3. This
annex explains how the requirements of ISO 9001 can be applied to software. A working group
of Standards Australia subsequently prepared AS 3563, which is a better interpretation of ISO
9001 for software.

More recently, the responsibility for 9000 Part 3 has been transferred to ISO/IEC JTC1/SC7
(JTC1), another international group dedicated to information technology standards. JTC1
includes several subcommittees that undertake the development of software standards.
Organizations that do, or expect to do, business with countries other than their own are well
advised to seek out and comply with the tenets of the international standards. At the time of this
writing, JTC1's subcommittee SC7 had published IS 12207, "Information Technology-Software
Life Cycle Processes," and was working on several standards that will support 12207.

It is important to recognize that simply adhering to a process is no guarantee of quality if the
process itself is flawed. For example, a concrete life preserver built to an established, fully
ISO 9000-compliant, and carefully followed process would probably fail to garner customer kudos
for its quality.

Industrial and professional groups Several industry and professional societies are developing
generic standards that can be used as is or tailored as appropriate.

A number of professional and technical societies are increasingly active in the preparation of
software standards. The IEEE and the Electronic Industries Association (EIA) have ongoing
working groups addressing standards and guidelines in the software engineering area. The
American National Standards Institute (ANSI) is the coordinating body for standards of all types
within the United States. Another group becoming active in the software area is the American
Society for Quality.

These, and other, groups are preparing generic software engineering standards that can be
adopted as they are written or adapted by an individual organization. Many of them are also
suitable for inclusion in software acquisition contracts, as well as in-house use.

Government agencies In addition to the previously mentioned groups, many software standards
are available from various government agencies. In particular, the Department of Defense (DoD),
the Nuclear Regulatory Commission, the National Institute of Standards and Technology (NIST), and
various other governmental agencies are standards sources. As a large buyer of software, as well
as a developer, the federal government has prepared and is still generating standards meant
primarily for software acquisition contracts. These are frequently applicable, in whole or in part,
to a specific organization's internal standards needs. The DoD recently declared its intention to
cease active writing of its own standards and to adopt existing software standards.

Manufacturers' user groups Another source of software standards can be found in computer
user groups. GUIDE International and SHARE (IBM), DECUS (DEC), and other major user
groups often address the question of software standards as a part of their activities. The standards
generated by these user groups are usually generic in nature. They are sometimes standalone, but
frequently benefit from adaptation.

2.2.2 Purchased standards

Standards can be purchased and then adapted to the needs of a specific organization. Companies
sometimes will provide their standards manuals to other, similar companies for a fee. This is
especially true in industries in which there is a great deal of interaction or interfacing such as
between telephone companies. Although there may be strong resistance to this in industries in
which competition is strong, in general, most organizations are willing to share, even on an
informal basis, their software standards in the interest of overall improvement in the software
area. It must be remembered, though, that standards received from another company, no matter
how similar, are specific to that company's specific situation and needs. Standards obtained in
this way represent only a starting point for the standards effort in the receiving company.

Another avenue for purchased standards is through a consultant or consulting company. Many
consultants will prepare a full set of software standards specific to a client company. This can be
an easy way to get an initial set of the most critical standards in place quickly. The consultant can
then continue the standards development effort or turn the rest of the task over to the client
company. The advantage of this approach is rapid standards development, usually using the
consultant's prior experience in the particular industry and software area. The main disadvantage
is the perception of the consultant as an outsider who "doesn't really understand the situation."
Involvement of the affected departments, as "internal consultants" to, or joint participants with,
the real consultant can usually diminish this perception. In some cases, the consultant could even
be used to gain acceptance of in-house-developed standards that were being resisted. However,
paying an outside consultant to deliver standards, regardless of the actual source of those
standards, sometimes lends undeserved value to them.

2.2.3 In-house development

Standards, from whatever external source, usually need to be adapted to the individual needs and
environment of the specific organization or project. In-house development is the only way to
assure that each standard reflects the organization's actual needs and environment. There are at
least three major approaches to in-house standards development, in addition to an enormous
number of variations and combinations. The three major approaches are as follows:

1. Ad hoc standardization;
2. Standards groups;

3. Standards committees.

Ad hoc standardization Any of the SLC staff may be assigned, as an additional but temporary
part of their job, the responsibility and authority to create and institute software standards. Since
the affected staff is involved with various parts of the entire life cycle of all projects, they have a
high degree of insight into the SLC and its standardization needs. They can become aware of
areas and tasks that need standardization, observe the various methods in use, and propose the
most appropriate methods as candidates to be standards. As the monitor of SLC activities, the
software quality practitioner is in the proper position to determine the appropriateness of the
standard once it is in use.

An advantage of ad hoc standardization is that the experts in the area being standardized can be
called on to write, and then to follow, the standard. A disadvantage is that the writers may not be
aware of side issues or potential conflicts with other standards. The "corporate memory" of the
continuity and consistency of the standards program may also be lost.

In general, it is not recommended that the software quality practitioner write standards. That role
could place practitioners in the position of imposing standards on tasks and activities that they
themselves do not perform. For example, the software quality practitioner does little coding,
rarely operates the data center, and usually does not perform data entry.

Standards groups A second method of in-house development is through a separately chartered
standards group (SG). Since the SG has, as its whole task, the management of the body of
standards, it can often spend more time researching a standard or looking to outside sources for a
particular standard. The advantage of having an SG is maintaining the continuity and corporate
memory of the standards program.
The SG, however, suffers the same disadvantage as the software quality practitioner:
standardizing from outside a task or activity. That is, the SG members usually are not in a
position to follow the standards they create. In addition, an SG usually does not have the insight
available to the software quality practitioner as to needed standards or standards appropriateness.

Standards committees The third major approach to standards development is the chartering of a
standards committee (SC). Usually the SC comprises the managers of each department in the
data processing organization: applications development, operations, systems programming, and
so on, as shown in Figure 2.4. The SC is responsible for identifying needed standards and those
requiring modification or replacement. The specific generation of a standard is assigned to the
manager of the department that will be most affected by it: a language standard to applications
development, a database definition standard to database administration, and so on.
The advantage of this approach is that the most knowledgeable and affected department prepares
the standard with inputs from all other interested departments. The disadvantage is the usually
difficult task of involving the actual department managers so that full departmental visibility is
ensured.

Figure 2.4: Standards committee.

Standards coordinator In any of the previously mentioned approaches, or combinations
thereof, a specific person should have the job of ensuring that needed standards are identified,
created, and followed. That person, the standards coordinator, may be a software quality
practitioner, manager of the SG, or chairperson of the SC. The important matter is that he or she
has the ear of the software quality practitioner or upper management to ensure that standards
receive the attention they merit and require.

It is the role of the standards coordinator to ascertain the standards requirements for each
installation and situation and then arrange for those needed standards to be available, invoked,
and followed. As is the case with the software quality practitioner, the standards coordinator
usually should not prepare the standards.

It is the responsibility of the standards coordinator to provide standards as needed and monitor
compliance with them. It is the role and responsibility of everyone in the organization to identify
potential standards needs and adhere to those standards that have been applied. It is the role of
management to enforce the application of and compliance with the implemented standards.
2.3 Selection of standards
Standards must be selected that apply to a company's specific needs, environment, and resources.

Standards for almost everything are available, often from several different sources. As described
in Section 2.2, standards can be obtained from industry and professional groups, other
companies, private consultants, or the government. They can also be prepared in-house. The
major concern, then, is not where to find standards but how to be sure that the ones being
obtained are necessary and proper for the given organization. Even in a particular company, all
the standards in use in one data center may not apply to a second data center. For example, a
company's scientific data processing center may call for different standards than its financial
data processing center.

Many things can affect the need for standards and the standards needed. As stated, different data
processing orientations may call for specific standards. Such things as runtimes, terminal
response times, language selection, operating systems, and telecommunications protocols are all
subject to standardization on different terms in different processing environments. Even such
things as programmer workstation size and arrangement, data input and results output locations,
training and educational needs, and data access rights often are determined on a basis that
includes the type of processing as a consideration.

Not all standards available for a given subject or topic may apply in every situation.
Language standards, particularly selecting the languages to be used, may have
exceptions or not be applied at all. A particular standard life-cycle model may be
inappropriate in a given instance or for a specific project. Data access or
telecommunications standards may be modified or waived to fit a particular project
or installation.

2.4 Promulgation of standards


Standards must be available to the intended users. They must also be followed and kept up-to-
date with the user environment.
2.4.1 Availability

Two methods of providing standards to their intended users are currently popular.

The foremost method of publishing standards is by way of a standards manual. This is, usually, a
large loose-leaf binder with the organization's standards filed in some logical order. It can also be
a set of binders, each covering some subset of the total standards set. The loose-leaf binder
approach is a convenient and generally inexpensive way of keeping the standards filed, up-to-
date, and accessible.

This approach has some drawbacks, however. Unless some official updating method is used, the
holders of the manuals may be careless about making changes to their copies as new, revised, or
obsolete standards are added, replaced, or removed. Using an incorrect standard is sometimes
worse than using none at all. Loose-leaf pages, especially in a heavily used book or section,
frequently are torn out of the book and lost. A common excuse for not following a standard
is that the offender's standards book was misplaced, borrowed, or never issued.

Finally, in a large organization, the cost of providing and maintaining a set of standards books
may be significant. One way to cut this cost is to restrict the distribution of manuals to some
subset of the using population. However, that solution has the usual effect of diminishing the use
of the standards because of the increased difficulty of access.

As more organizations adopt Internet and intranet technology, one way to counter some of the
more severe drawbacks of the book-style manual is to make the standards available on-line. In
organizations with widespread use of terminals, there is an increasing trend toward having the full set
of standards available for access through the terminal network. That way, employees who want
to look up a standard need only call it up on their screen. In addition, on-line access eliminates
the problems associated with correcting and updating the manual; the only copy of the standard
that is affected is the one in the database. Once that is changed, everyone has access to the new
version without having to manually update his or her own book.

Like all methods, though, the on-line one has its drawbacks, not the least of which is cost. In a
large organization that already has widespread terminal usage and a large database capability,
automating a standards manual will be relatively inexpensive. For organizations that
do not have the facilities already in place, putting the standards on-line probably is not cost-
justifiable, since there will be limited automated access and the book method will probably still
have to be used as well.

2.4.2 Compliance

Standards that are not followed are frequently worse than no standards at all.

Standards are intended to improve the overall quality of software as it progresses through the life
cycle. When a standard has been established and implemented for a particular subject, the
organization as a whole expects the standard to be followed. If it is not followed, some things
may be done incorrectly, or errors may be introduced that affect those who work in other
portions of the life cycle.

The role of the software quality practitioner, and specifically the standards coordinator, is to
monitor and report to management on the adherence of the entire computational organization to
the standards that have been implemented. It is not the role of the software quality practitioner or
the SC to enforce the standards. Enforcement is the responsibility of management.

Not every case of noncompliance with a standard represents disregard for, or lack of knowledge
of, the standard. In some cases, lack of compliance may be a signal that the standard is no longer
appropriate or applicable. While it is not practical to investigate every case of noncompliance
with a standard, it is necessary to look for trends in noncompliance that may indicate missing,
faulty, or even incorrect standards. Observation of noncompliance trends can give clues that may
indicate the need for companion standards to those that already exist, additional standards that
complement those in place, or modification or replacement of existing standards. The software
quality practitioner or the standards coordinator is responsible for identifying such cases through
an ongoing review of the standards and their continuing applicability.
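The trend watching described above can be sketched as a simple tally: count noncompliance reports per standard over a review period and flag any standard whose count reaches a threshold. This is an illustrative sketch only; the function name, report format, and threshold value are assumptions, not part of any established tool.

```python
from collections import Counter

def flag_noncompliance_trends(reports, threshold=5):
    """Flag standards whose noncompliance count reaches a review
    threshold; each report is a (standard_id, project) pair.
    The threshold value is an illustrative assumption."""
    counts = Counter(standard for standard, _ in reports)
    # Frequently violated standards are candidates for review:
    # they may be missing something, faulty, or obsolete.
    return sorted(s for s, n in counts.items() if n >= threshold)

reports = [("STD-12", "A"), ("STD-12", "B"), ("STD-07", "A"),
           ("STD-12", "C"), ("STD-12", "A"), ("STD-12", "B")]
flag_noncompliance_trends(reports)  # ["STD-12"]
```

A tally like this does not replace judgment; it merely points the standards coordinator at the standards worth investigating.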

2.4.3 Maintenance

Standards must be kept current with the changing computational environment.

No matter where the standards have come from or how they are made available, they will
quickly fall into disuse if they do not continue to reflect the organization's needs. Standards
become obsolete as mainframes, operating systems, federal regulations, business emphases, and
the like change and evolve. Probably no installation is the same today as it was as little as a year
ago. Further, some of the subjects of standards have also changed. Thus, some method of
keeping standards up-to-date must be provided. Clues that it is time to review a standard include
increasing instances of noncompliance, installation of new equipment or support software,
expansion into a new area of business emphasis, the advent of new government regulations, and
so on.

The standards coordinator is the person primarily responsible for standard maintenance, but
anyone in the organization can point out the potential need for change. Just as in the sequence for
requesting changes to software, there should be a formal standards change request and a standard
method for processing that change request. Once the request is received, the standards
coordinator can verify the need for the change, present it to the standards generating group, and
get the change made and distributed.
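As a sketch of such a formal change process, a request might be modeled as a record that moves through fixed states: submitted by anyone, verified by the coordinator, changed by the generating group, and finally distributed. The class, field names, and state names below are hypothetical illustrations, not a prescribed format.

```python
from dataclasses import dataclass

# Hypothetical workflow states mirroring the text: anyone may submit,
# the coordinator verifies the need, the standards-generating group
# makes the change, and the revised standard is distributed.
WORKFLOW = ("submitted", "verified", "changed", "distributed")

@dataclass
class StandardsChangeRequest:
    standard_id: str
    reason: str
    state: str = "submitted"

    def advance(self):
        """Move the request to the next state; stop at the last one."""
        i = WORKFLOW.index(self.state)
        if i + 1 < len(WORKFLOW):
            self.state = WORKFLOW[i + 1]
        return self.state

req = StandardsChangeRequest("STD-12", "new operating system release")
req.advance()  # coordinator verifies the need -> "verified"
```

The point of the fixed state sequence is the same as for software change control: no standard changes hands without a verifiable record of who asked, why, and where the request stands.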
2.5 Nonstandard standards
It should be noted that not everything in the software arena must be standardized. There may be
preferred practices or suggested practices on which there is no agreement as to a single standard
way to behave. In these cases, recommended approaches or guidelines may be appropriate.

A recommended approach can be seen as the preferred, but not required, situation. In that case,
words such as should or ought to, or other similarly suggestive phrases, might be used. A
guideline would be even less restrictive, using verbs such as could, might, or may. Use of
recommended approaches and guidelines permits some control of activity performance without
unnecessarily restricting otherwise proper or beneficial behavior.

2.6 Summary
Standards are the keystone of the SQS. They provide the basis against which reviewing and
monitoring are conducted. Areas of standardization cover the entire SLC from the definition of
the SLC itself through the methods by which software is withdrawn from use. All aspects of the
SLC are open to standardization, even the process by which the standards themselves are created.
Standards may be purchased, obtained from professional and user groups, and specifically
developed for or by the organization.

No matter how standards come into being, however, they must be relevant to the organization
and the software development process; that is, they must be reflective of the needs of the
organization. Standards must be appropriate to the environment in which the software is to be
developed and used.

Recommended approaches and guidelines are alternatives to standards in those situations in


which some consistency is needed but absolute consistency may be inappropriate.

Finally, the application of standards must be uniform and enforced across the full
organization, at least at the project level. While it is desirable from a consistency
point of view to impose the same standards on all software development groups
within an organization, it is not always feasible from a business standpoint. Within a
single project, however, there must be standards uniformity.

Chapter 3: Reviews
Overview
Reviews are the first and primary form of quality control activity. Quality control is concerned
with the search for faults or defects in the various products of software development. While
testing of code, as will be discussed in Chapter 4, is also concerned with the search for faults,
reviews are (as has been demonstrated in countless studies) far more effective because they look
for faults sooner than testing does. Reviews are also more cost-effective because they consume fewer
resources than do tests. They are short, require small groups of reviewers, and can be scheduled
and rescheduled with minimum impact. Reviews are conducted during the process of the
development, not at the end, as is the case with testing. Quality control has as its mission the
detection and elimination of defects in the product. Reviews are the front line in this mission.

Reviews take place throughout the SLC and verify that the products of each phase are correct
with respect to phase inputs and activities. They are sometimes referred to as precode testing. In
one sense, reviews really are testing, or challenging, the product against its requirements. The
only difference between testing by means of reviewing and testing by executing code is the
method.

Reviews take on many forms. They may be informal peer reviews, such as one-on-ones, walk-
throughs, inspections, or audits. They may also be formal reviews such as verification reviews,
decision-point reviews, physical or functional audits, or post-implementation reviews.
Regardless of their form, the primary purpose of a review is to identify defects in the product
being considered.

Boehm (see 1.5) and others have developed graphs similar to that in Figure 3.1, which show that
the costs of defects rise very steeply the longer they remain in the products. Reviews are aimed at
finding the defects as they are created, rather than depending on the test and operation phases to
uncover them.

Figure 3.1: Costs of identified defects.
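A small worked example makes the shape of that cost curve concrete. The multipliers below are commonly quoted rough approximations of Boehm-style cost escalation, assumed here purely for illustration; they are not figures taken from this text.

```python
# Assumed, commonly quoted relative repair costs by the phase in
# which a defect is found; actual ratios vary widely by study.
RELATIVE_COST = {"requirements": 1, "design": 5, "code": 10,
                 "test": 20, "operation": 100}

def repair_cost(phase_found, base_cost=200.0):
    """Estimated cost to repair one defect, given an assumed base
    cost for fixing it during the requirements phase."""
    return base_cost * RELATIVE_COST[phase_found]

repair_cost("design")     # 1000.0
repair_cost("operation")  # 20000.0
```

Whatever the exact ratios, the conclusion is the same: a defect caught in a requirements or design review costs a small fraction of the same defect found in operation.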

Each review has a specific purpose, objective, audience, and cast of participants. Some reviews
may be held multiple times during the development, such as design walk-throughs. Others, such
as the functional audit, are of such magnitude that they normally are one-time events that form
the basis for major decisions about the product. In each case, however, a format and procedure
for the review should be reflected in the organization's standards.

A major role of the software quality practitioner is to confirm that the reviews are scheduled as
appropriate throughout the SLC and that they are held as scheduled. In some organizations, a
software quality practitioner is tasked to be the chair of the review. This is up to each individual
organization. It is imperative, however, that for all reviews, except perhaps very informal one-
on-ones or early walk-throughs, the software quality practitioner is an active participant. The
practitioner should also make sure that minutes and action items are recorded as necessary, and
that any action items are suitably addressed and closed before approving and recording the
review as complete. Of course, keeping management informed of the reviews and their results is
imperative.

It must be noted that the entire goal of a SQS is to increase the quality of the
delivered product. This, of course, entails the intentional seeking of faults and
defects. It also entails an opportunity for the unskilled manager to make personnel
decisions based on defects found. Some managers will be tempted to use the
number of errors made by a developer as the basis for performance evaluation. This
is a self-defeating approach for two reasons. First, the employees may begin to
react to the stress of reviews and try to minimize the reviews' defect-finding effectiveness
so as to not look bad. Second, as the effectiveness of the reviews goes down, the
defects being delivered to the customer will increase, which undermines the
customer's confidence.

3.1 Types of reviews


Reviews take on various aspects depending on their type. The two broad types of reviews are as
follows:

1. In-process (generally considered informal) review;


2. Phase-end (generally considered formal) review.

3.1.1 In-process reviews

In-process reviews are informal reviews intended to be held during the conduct of each SDLC
phase. Informal implies that there is little reporting to the customer on the results of the review.
Further, they are often called peer reviews because they are usually conducted by the producer's
peers. Peers may be coworkers or others who are involved at about the same level of product
effort or detail.
Scheduling of the reviews, while intentional and a part of the overall project plan, is rather
flexible. This allows for reviews to be conducted when necessary: earlier if the product is ready,
later if the product is late. One scheduling rule of thumb is to review no more than the producer
is willing to throw away. Another rule of thumb is to have an in-process review every two weeks.
Figure 3.2 offers suggestions on the application of these two rules of thumb. Each project will
determine the appropriate rules for its own in-process reviews.

Figure 3.2: Scheduling rules of thumb.
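The two rules of thumb can be combined in a simple scheduling check: hold a review when a fixed interval has elapsed, or sooner, when the unreviewed work at risk exceeds what the producer is willing to throw away. The limits below (14 days between reviews, 10 days of effort at risk) are illustrative assumptions only; each project would set its own.

```python
def review_due(days_since_last_review, unreviewed_effort_days,
               max_interval=14, max_at_risk=10):
    """True when either rule of thumb triggers: the two-week
    interval has elapsed, or the unreviewed effort exceeds the
    amount the producer is willing to discard and redo."""
    return (days_since_last_review >= max_interval or
            unreviewed_effort_days >= max_at_risk)

review_due(7, 3)    # False: neither limit reached
review_due(15, 3)   # True: the interval rule triggers
review_due(7, 12)   # True: the work-at-risk rule triggers
```

Keeping both triggers explicit preserves the intent of the rules: reviews happen regularly, and a fast-moving producer never accumulates more unreviewed work than the project can afford to lose.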

There is a spectrum of rigor across the range of in-process reviews. The least rigorous of these
reviews is the peer review. It is followed, in increasing rigor, by walk-throughs of the in-process
product, such as a requirements document or a design specification. The most rigorous is the
inspection. These are discussed more completely next. Table 3.1 summarizes some of the
characteristics of in-process reviews.

Table 3.1: In-Process Review Characteristics

Review Type              Records                           CM Level        Participants                 Stress Level
One-on-one               None                              None            Coworker                     Very low
Walk-through             Marked-up copy or defect reports  Probably none   Interested project members   Low to medium
Structured walk-through  Defect reports                    Informal        Selected project members     Medium
Inspection               Defect report database            Formal          Specific role players        High

One-on-one reviews The one-on-one review is the least rigorous of the reviews and, thus,
usually the least stressful. In this review, the producer asks a coworker to check the product to
make sure that it is basically correct. Questions like, "Did I get the right information in the right
place?" and "Did I use the right formula?" are the main concerns of the one-on-one review. The
results of the review are often verbal, or at the most, a red mark or two on the draft of the
product.

One-on-one reviews are most often used during the drafting of the product, or the correcting of a
defect, and cover small parts at a time. Since there is virtually no distribution of records of the
defects found or corrections suggested, the producer feels little stress or threat that he or she will
be seen as having done a poor job.

Walk-throughs As the producer of a particular product gets to convenient points in his or her
work, a group of peers should be requested to review the work as the producer describes, or
walks through, the product with them. In this way, defects can be found and corrected
immediately, before the product is used as the basis for the next-phase activities. Since it is
usually informal and conducted by the producer's peers, there is less tendency on the part of the
producer to be defensive and protective, leading to a more open exchange with correspondingly
better results.

Participants in the walk-through are usually chosen for their perceived ability to find defects. The
leader must make sure that the presence of a lead or senior person does not inhibit participation
by others.

In a typical walk-through, the producer distributes the material to be reviewed at the meeting.
While earlier distribution of the material would probably enhance the value of the walk-through,
most descriptions of walk-throughs do not require prior distribution. The producer then walks the
reviewers through the material, describing his or her thought processes, intentions, approaches,
assumptions, and perceived constraints as they apply to the product. The author is not to
justify or explain his or her work. If there is confusion about a paragraph or topic, the author may
explain what he or she meant so that the review can continue. This, however, is a defect and will
be recorded for the author to clarify in the correction process.

Reviewers are encouraged to comment on the producer's approach and results with the intent of
exposing defects or shortcomings in the product. On the other hand, only deviations from the
product's requirements are open to criticism. Comments such as "I could do it faster or more
elegantly," are out of order. Unless the product violates a requirement, standard, or constraint or
produces the wrong result, improvements in style and so on are not to be offered in the walk-
through meeting. Obviously, some suggestions are of value and should be followed through
outside of the review.

It must be remembered that all reviews take resources and must be directed at the ultimate goal
of reducing the number of defects delivered to the user or customer. Thus, it is not appropriate to
try to correct defects that are identified. Correction is the task of the author after the review.

Results of the walk-through should be recorded on software defect reports such as those
discussed in Chapter 6. This makes the defects found a bit more public and can increase the
stress and threat to the producer. It also helps ensure that all defects found in the walk-through
are addressed and corrected.
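A minimal defect report record for a walk-through might look like the following sketch. The field set here is an assumption for illustration; the actual report format is the subject of Chapter 6.

```python
from dataclasses import dataclass

@dataclass
class DefectReport:
    """Hypothetical minimal fields for a walk-through defect report."""
    report_id: int
    product: str        # e.g., "design specification, Section 3.2"
    description: str
    severity: str       # e.g., "minor" or "major"
    status: str = "open"   # closed once the author corrects it

def open_defects(reports):
    """Defects still to be corrected before the review is closed."""
    return [r for r in reports if r.status == "open"]
```

Even this small a record supports the goal stated above: every defect found in the walk-through stays visible until it is addressed and corrected.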

Inspections Another, more rigorous, in-process review is the inspection, as originally described
by Fagan (see Additional reading). While its similarities with walk-throughs are great, the
inspection requires a more specific cast of participants and more elaborate minutes and action
item reporting. Unlike the walk-through, which may be documented only within the UDF (unit
development folder), the inspection requires a written report of its results and strict recording of
trouble reports.

Whether you use an inspection as specifically described by Fagan or some similar review, the
best results are obtained when the formality and roles are retained. Having a reader who is
sufficiently familiar with the product to paraphrase, rather than read, the documentation is a
necessity. The best candidate for reader is likely the person for whom the product being
reviewed will serve as input. Who has more interest in having a quality product on which to base
his or her efforts?

Fagan calls for a moderator who is charged with logistics preparation, inspection facilitation, and
formal follow-up on implementing the inspection's findings. Fagan also defines the role of the
recorder as the person who creates the defect reports, maintains the defect database for the
inspection, and prepares the required reports. Other participants have various levels of personal
interest in the quality of the product.

Typically, an inspection has two meetings. The first is a mini walk-through of the material by the
producer. This is not on the scale of a regular walk-through but is intended to give an overview
of the product and its place in the overall system. At this meeting, the moderator sees that the
material to be inspected is distributed. Individual inspectors are then scheduled to review the
material prior to the inspection itself. At the inspection, the reader leads the inspectors through
the material and leads the search for defects. At the inspection's conclusion, the recorder prepares
the required reports and the moderator oversees the follow-up on the corrections and
modifications found to be needed by the inspectors.

Because it is more rigorous, the inspection tends to be more costly in time and resources than the
walk-through and is generally used on projects with higher risk or complexity. However, the
inspection is usually more successful at finding defects than the walk-through, and some
companies use only the inspection as their in-process review.

It can also be seen that the inspection, with its regularized recording of defects, will be the most
stressful and threatening of the in-process reviews. Skilled managers will remove the defect
histories from their bases of performance evaluations. By doing so, and treating each discovered
defect as one that didn't get to the customer, the manager can reduce the stress associated with
the reviews and increase their effectiveness.

The software quality role with respect to inspections is also better defined. The software quality
practitioner is a recommended member of the inspection team and may serve as the recorder. The
resolution of action items is carefully monitored by the software quality practitioner, and the
results are formally reported to project management.
In-process audits Audits, too, can be informal as the SDLC progresses. Most software
development methodologies impose documentation content and format requirements. Generally,
informal audits are used to assess the product's compliance with those requirements for the
various products.

While every document and interim development product is a candidate for an audit, one common
informal audit is that applied to the UDF or software development notebook. The notebook or
folder is the repository of the notes and other material that the producer has collected during the
SDLC. Its required contents should be spelled out by a standard, along with its format or
arrangement. Throughout the SDLC, the software quality practitioner should audit the UDFs to
make sure that they are being maintained according to the standard.
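Such an audit amounts to comparing the UDF's actual contents against the required set named in the standard. The required-section names below are invented for illustration; an organization's own standard would define the real list and arrangement.

```python
# Illustrative required contents; the governing UDF standard would
# define the actual set and arrangement.
REQUIRED_SECTIONS = {"requirements notes", "design notes",
                     "test results", "review records"}

def audit_udf(actual_sections):
    """Return the required sections missing from a UDF, so the
    auditor can report noncompliance with the standard."""
    return sorted(REQUIRED_SECTIONS - set(actual_sections))

audit_udf({"requirements notes", "design notes"})
# -> ["review records", "test results"]
```

A mechanical check like this handles only presence or absence; judging whether the contents are current and adequate remains the auditor's job.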

3.1.2 Phase-end reviews

Phase-end reviews are formal reviews that usually occur at the end of the SDLC phase, or work
period, and establish the baselines for work in succeeding phases. Unlike the in-process reviews,
which deal with a single product, phase-end reviews include examination of all the work
products that are to have been completed during that phase. A review of project management
information, such as budget, schedule, and risk, is especially important.

For example, the software requirements review (SRR) is a formal examination of the
requirements document and sets the baseline for the activities in the design phase to follow. It
should also include a review of the system test plan, a draft of the user documentation, perhaps
an interface requirements document, quality and CM plans, any safety and risk concerns or
plans, and the actual status of the project with regard to budget and schedule plans. In general,
anything expected to have been produced during that phase is to be reviewed. The participants
include the producer and software quality practitioner as well as the user or customer. Phase-end
reviews are a primary means of keeping the user or customer aware of the progress and direction
of the project. A phase-end review is not considered finished until the action items have been
closed, software quality has approved the results, and the user or customer has approved going
ahead with the next phase. These reviews permit the user or customer to verify that the project is
proceeding as intended or to give redirection as needed. They are also major reporting points for
software quality to indicate to management how the project is adhering to its standards,
requirements, and resource budgets.

Phase-end reviews should be considered go-or-no-go decision points. At each phase-end review,
there should be sufficient status information available to examine the project's risk, including its
viability, the likelihood that it will be completed on or near budget and schedule, its continued
need, and the likelihood that it will meet its original goals and objectives. History has
documented countless projects that have ended in failure or delivered products that were
unusable. One glaring example is that of a bank finally canceling a huge project. The bank
publicly admitted to losing more than $150 million and laying off approximately 400 developers.
The bank cited inadequate phase-end reviews as the primary reason for continuing the project as
long as it did. Properly conducted phase-end reviews can identify projects in trouble and prompt
the necessary action to recover. They can also identify projects that are no longer viable and
should be cancelled.
Figure 3.3 shows various phase-end reviews throughout the SDLC. Formal reviews, such as the
software requirements review, preliminary design review (PDR), and the critical design review
(CDR) are held at major milestone points within the SDLC and create the baselines for
subsequent SDLC phases. The test readiness review (TRR) is completed prior to the onset of
acceptance or user testing.

Figure 3.3: Typical phase-end reviews.

Also shown are project completion audits conducted to determine readiness for delivery and the
post-implementation review, which assesses project success after a period of actual use.

Table 3.2 presents the typical subjects of each of the four major development phase-end reviews.
Those documents listed as required are considered the minimum acceptable set for successful
software development and maintenance.

Table 3.2: Phase-End Review Subject Documents

Review                  Required                               Others
Software requirements   Software requirements specification    Software development plan
                        Interface requirements specification   Quality system plan
                        Software test plan                     CM plan
                                                               Standards and procedures
                                                               Cost/schedule status report
Preliminary design      Software top-level design              Interface design
                        Software test description              Database design
                        Cost/schedule status report
Critical design         Software detailed design               Interface design
                        Cost/schedule status report            Database design
Test readiness          Software product specification         User's manual
                        Software test procedures               Operator's manual
                        Cost/schedule status report

3.1.3 Project completion analyses

Two formal audits, the functional audit (FA) and the physical audit (PA), are held as indicated in
Figure 3.3. These two events, held at the end of the SDLC, are the final analyses of the software
product to be delivered against its approved requirements and its current documentation,
respectively.

The FA compares the software system being delivered against the currently approved
requirements for the system. This is usually accomplished through an audit of the test records.
The PA is intended to assure that the full set of deliverables is an internally consistent set (i.e.,
the user manual is the correct one for this particular version of the software). The PA relies on the
CM records for the delivered products. The software quality practitioner frequently is charged
with conducting these two audits. In any case, the software quality practitioner must be sure
that the audits are conducted and must report the findings of the audits to management.

The postimplementation review (PIR) is held once the software system is in production. The PIR
usually is conducted 6 to 9 months after implementation. Its purpose is to determine whether the
software has, in fact, met the user's expectations for it in actual operation. Data from the PIR is
intended for use by the software quality practitioner to help improve the software development
process. Table 3.3 presents some of the characteristics of the PIR.

Table 3.3: PIR Characteristics

Timing                 6 to 9 months after software system implementation

Subjects               Return on investment
                       Software system goals versus experience
                       Schedule results
                       User response
                       Defect history

Review results usage   Input to process analysis and improvement

Caution                Too often ignored

The role of the software quality practitioner is often that of conducting the PIR. This review is
best used as an examination of the results of the development process. Problems with the system
as delivered can point to opportunities to improve the development process and verify the
validity of previous process changes.

3.2 Review subjects


Reviews are conducted throughout the SDLC, with development reviews focusing on the code and its
related products.

Design and code reviews held during the course of the various phases are usually in the form of
walk-throughs and inspections. These are held to get an early start on the elimination of defects
in the products being examined. They are generally informal, which makes them more
productive and less threatening to the egos and feelings of the producers of the products being
reviewed.

Test reviews are much the same as code reviews, covering the test program products rather than
the software products. They include the same types of formal and informal reviews and are held
throughout the SDLC. Their function is to examine the test program as it is being developed and
to make sure that the tests will exercise the software in such a manner as to find defects and to
demonstrate that the software complies with the requirements.

3.3 Documentation reviews


One of the two goals of the SQS is to facilitate the building of quality into the software products
as they are produced. Documentation reviews provide a great opportunity to realize this goal. By
maintaining a high standard for conducting and completing these reviews and establishing the
respective baselines, the software quality practitioner can make significant contributions to the
attainment of a quality software product.

Software quality practitioners have much to do with the documentation reviews. They must make
sure that the proper reviews are scheduled throughout the development life cycle. This includes
determining the appropriate levels of formality, as well as the actual reviews to be conducted.
The software quality practitioner also monitors the reviews to see that they are conducted and
that defects in the documents are corrected before the next steps in publication or development
are taken. In some cases, software quality itself is the reviewing agency, especially where there is
not a requirement for in-depth technical analysis. In all cases, the software quality practitioner
will report the results of the reviews to management.

There are a number of types of documentation reviews, both formal and informal, that are
applicable to each of the software documents. The appendixes present suggested outlines for
most of the development documents. These outlines can be used as they are or adapted to the
specific needs of the organization or project.

The most basic of the reviews is the peer walk-through. As discussed in Section 3.1, this is a
review of the document by a group of the author's peers who look for defects and weaknesses in
the document as it is being prepared. Finding defects as they are introduced avoids more
expensive corrective action later in the SDLC, and the document is more correct when it is
released.

Another basic document review is the format review or audit. This can be either formal or
informal. When it is a part of a larger set of document reviews, the format audit is usually an
informal examination of the overall format of the document to be sure that it adheres to the
minimum standards for layout and content. In its informal style, little attention is paid to the
actual technical content of the document. The major concern is that all required paragraphs are
present and addressed. In some cases, this audit is held before or in conjunction with the
document's technical peer walk-through.

A more formalized approach to the format audit is taken when there is no content review
scheduled. In this case, the audit takes the form of a review and will also take the technical
content into consideration. A formal format audit usually takes place after the peer walk-throughs
and may be a part of the final review scheduled for shortly before delivery of the document. In
that way, it serves as a quality-oriented audit and may lead to formal approval for publication.

When the format audit is informal in nature, a companion content review should evaluate the
actual technical content of the document. There are a number of ways in which the content
review can be conducted. First is a review by the author's supervisor, which generally is used
when formal customer-oriented reviews, such as the PDR and CDR, are scheduled. This type of
content review serves to give confidence to the producer that the document is a quality product
prior to review by the customer.
A second type of content review is one conducted by a person or group outside the producer's
group but still familiar enough with the subject matter to be able to critically evaluate the
technical content. Also, there are the customer-conducted reviews of the document. Often, these
are performed by the customer or an outside agency (such as an independent verification and
validation contractor) in preparation for an upcoming formal, phase-end review.

Still another type of review is the algorithm analysis. This examines the specific approaches,
called out in the document, that will be used in the actual solutions of the problems being
addressed by the software system. Algorithm analyses are usually restricted to very large or
critical systems because of their cost in time and resources. Such things as missile guidance,
electronic funds transfer, and security algorithms are candidates for this type of review. Payroll
and inventory systems rarely warrant such in-depth study.

3.3.1 Requirements reviews

Requirements reviews are intended to show that the problem to be solved is completely spelled
out. Informal reviews are held during the preparation of the document. A formal review is
appropriate prior to delivery of the document.

The requirements specification (see Appendix D) is the keystone of the entire software system.
Without firm, clear requirements, there will be no way to determine whether the software
successfully performs its intended functions. For this reason, the informal requirements review
looks not only at the problem to be solved, but also at the way in which the problem is stated. A
requirement that says "compute the sine of x in real time" certainly states the problem to be
solved: the computation of the sine of x. However, it leaves a great deal for the designer to
determine, for instance, the range of x, the accuracy to which the value of sine x is to be
computed, the dimension of x (radians or degrees), and the definition of real time.

Requirements statements must meet a series of criteria if they are to be considered adequate to be
used as the basis of the design of the system. Included in these criteria are the following:

Necessity;
Feasibility;

Traceability;

Absence of ambiguity;

Correctness;

Completeness;

Clarity;

Measurability;
Testability.

A requirement is sometimes included simply because it seems like a good idea; it may add
nothing useful to the overall system. The requirements review will assess the necessity of each
requirement. In conjunction with the necessity of the requirement is the feasibility of that
requirement. A requirement may be thought to be necessary, but if it is not achievable, some
other approach will have to be taken or some other method found to address the requirement. The
necessity of a requirement is most often demonstrated by its traceability back to the business
problem or business need that initiated it.

Every requirement must be unambiguous. That is, every requirement should be written in such a
way that the designer or tester need not try to interpret or guess what the writer meant. Terms like
usually, sometimes, and under normal circumstances leave the door open to interpretation of
what to do under unusual or abnormal circumstances. Failing to describe behavior in all possible
cases leads to guessing on the part of readers, and Murphy's Law suggests that the guess will be
wrong a good portion of the time.

Completeness, correctness, and clarity are all criteria that address the way a given requirement is
stated. A good requirement statement will present the requirement completely; that is, it will
present all aspects of the requirement. The sine of x example was shown to be lacking several
necessary parts of the requirement. The statement also must be correct. If, in fact, the
requirement should call for the cosine of x, a perfectly stated requirement for the sine of x is not
useful. And, finally, the requirement must be stated clearly. A statement that correctly and
completely states the requirement but cannot be understood by the designer is as useless as no
statement at all. The language of the requirements should be simple, straightforward, and use no
jargon. That also means that somewhere in the requirements document the terms and acronyms
used are clearly defined.

Measurability and testability also go together. Every requirement will ultimately have to be
demonstrated before the software can be considered complete. Requirements that have no
definite measure or attribute that can be shown as present or absent cannot be specifically
tested. The sine of x example uses the term real time. This is hardly a measurable or testable
quality. A more acceptable statement would be "every 30 milliseconds, starting at the receipt of
the start pulse from the radar." In this way, the time interval for real time is defined, as is the
starting point for that interval. When the test procedures are written, this interval can be
measured, and the compliance or noncompliance of the software with this requirement can be
shown exactly.
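As a sketch of how the reworded requirement becomes testable, the following checks a list of recorded output timestamps against the stated 30-millisecond period. The tolerance is an invented value; a real specification would state the permitted deviation itself.

```python
# Hedged sketch: verifying the "every 30 milliseconds" requirement from
# recorded timestamps (in seconds). TOLERANCE_S is hypothetical; the
# specification would have to state the permitted deviation explicitly.

PERIOD_S = 0.030       # the specified interval
TOLERANCE_S = 0.001    # assumed permitted deviation per interval

def intervals_ok(timestamps):
    """True if every successive interval equals PERIOD_S within TOLERANCE_S."""
    return all(abs((b - a) - PERIOD_S) <= TOLERANCE_S
               for a, b in zip(timestamps, timestamps[1:]))
```

A test procedure built this way yields an exact pass/fail result rather than an argument over what "real time" meant.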

The formal SRR is held at the end of the requirements phase. It is a demonstration that the
requirements document is complete and meets the criteria previously stated. It also creates the
first baseline for the software system. This is the approved basis for the commencement of the
design efforts. All design components will be tracked back to this baseline for assurance that all
requirements are addressed and that nothing not in the requirements appears in the design.

The purpose of the requirements review, then, is to examine the statements of the requirements
and determine if they adhere to the criteria for requirements. For the software quality
practitioner, it may not be possible to determine the technical accuracy or correctness of the
requirements, and this task will be delegated to those who have the specific technical expertise
needed for this assessment. Software quality or its agent (perhaps an outside contractor or
another group within the organization) will review the documents for the balance of the criteria.

Each nonconformance to a criterion will be recorded along with a suggested correction. These
will be returned to the authors of the documents, and the correction of the nonconformances
tracked. The software quality practitioner also reports the results of the review and the status of
the corrective actions to management.

3.3.2 Design reviews

Design reviews verify that the evolving design is both correct and traceable back to the approved
requirements. Appendix E suggests an outline for the design documentation.

3.3.2.1 Informal reviews

Informal design reviews closely follow the style and execution of informal requirements reviews.
Like the requirements, all aspects of the design must adhere to the criteria for good requirements
statements. The design reviews go further, though, since there is more detail to be considered, as
the requirements are broken down into smaller and smaller pieces in preparation for coding.

The topic of walk-throughs and inspections has already been addressed. These are in-process
reviews that occur during the preparation of the design. They look at design components as they
are completed.

Design documents describe how the requirements are apportioned to each subsystem and module
of the software. As the apportionment proceeds, there is a tracing of the elements of the design
back to the requirements. The reviews that are held determine if the design documentation
describes each module according to the same criteria used for requirements.

3.3.2.2 Formal reviews

There are at least two formal design reviews, the PDR and the CDR. In addition, for larger or
more complex systems, the organization standards may call for reviews with concentrations on
interfaces or database concerns. Finally, there may be multiple occurrences of these reviews if
the system is very large, critical, or complex.

The number and degree of each review are governed by the standards and needs of the specific
organization.

The first formal design review is the PDR, which takes place at the end of the initial design
phase and presents the functional or architectural breakdown of the requirements into executable
modules. The PDR presents the design philosophy and approach to the solution of the problem as
stated in the requirements. It is very important that the customer or user take an active role in this
review. Defects in the requirements, misunderstandings of the problem to be solved, and needed
redirections of effort can be resolved in the course of a properly conducted PDR.

Defects found in the PDR are assigned for solution to the appropriate people or groups, and upon
closure of the action items, the second baseline of the software is established. Changes made to
the preliminary design are also reflected as appropriate in the requirements document, so that the
requirements are kept up to date as the basis for acceptance of the software later on. The new
baseline is used as the foundation for the detailed design efforts that follow.

At the end of the detailed design, the CDR is held. This, too, is a time for significant customer or
user involvement. The result of the CDR is the code-to design that is the blueprint for the coding
of the software. Much attention is given in the CDR to the adherence of the detailed design to the
baseline established at PDR. The customer or user, too, must approve the final design as being
acceptable for the solution of the problem presented in the requirements. As before, the criteria
for requirements statements must be met in the statements of the detailed design.

So that there is assurance that nothing has been left out, each element of the detailed design is
mapped back to the approved preliminary design and the requirements. The requirements are
traced forward to the detailed design, as well, to show that no additions have been made along
the way that do not address the requirements as stated. As before, all defects found during CDR
are assigned for solution and closure. Once the detailed design is approved, it becomes the
baseline for the coding effort. A requirements traceability matrix is an important tool to monitor
the flow of requirements into the preliminary and detailed design and on into the code. The
matrix can also help show that the testing activities address all the requirements.
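The idea of the traceability matrix can be sketched as a simple table keyed by requirement. The identifiers below are hypothetical, standing in for entries a real project would draw from its CM records.

```python
# Minimal requirements traceability matrix (RTM) sketch. Each requirement is
# traced forward into preliminary design, detailed design, and code; None
# marks a broken trace that the review should flag. Identifiers are invented.

rtm = {
    "REQ-1": {"preliminary": "CSC-A", "detailed": "MOD-A1", "code": "a1.c"},
    "REQ-2": {"preliminary": "CSC-B", "detailed": "MOD-B1", "code": None},
    "REQ-3": {"preliminary": None,    "detailed": None,     "code": None},
}

def gaps(rtm):
    """Return (requirement, stage) pairs where traceability is broken."""
    return [(req, stage)
            for req, stages in rtm.items()
            for stage, element in stages.items()
            if element is None]
```

Even in this toy form, the matrix exposes exactly where a requirement failed to flow forward, which is the question the PDR and CDR must answer.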

3.3.2.3 Additional reviews

Another review that is sometimes held is the interface design review. The purpose of this review
is to assess the interface specification that will have been prepared if there are significant
interface concerns on a particular project. The format and conduct of this review are similar to
the PDR and CDR, but there is no formal baseline established as a result of the review. The
interface design review will contribute to the design baseline.

The database design review also may be conducted on large or complex projects. Its intent is to
ascertain that all data considerations have been made as the database for the software system has
been prepared. This review will establish a baseline for the database, but it is an informal
baseline, subordinate to the baseline from the CDR.

3.3.3 Test documentation reviews

Test documentation is reviewed to ensure that the test program will find defects and will test the
software against its requirements.

The objective of the test program as a whole is to find defects in the software products as they
are developed and to demonstrate that the software complies with its requirements. Test
documentation is begun during the requirements phase with the preparation of the initial test
plans. Test documentation reviews also begin at this time, as the test plans are examined for their
comprehensiveness in addressing the requirements. See Appendix G for a suggested test plan
outline.

The initial test plans are prepared with the final acceptance test in mind, as well as the
intermediate tests that will examine the software during development. It is important, therefore,
that each requirement be addressed in the overall test plan. By the same token, each portion of
the test plan must specifically address some portion of the requirements. It is understood that the
requirements, as they exist in the requirements phase, will certainly undergo some evolution as
the software development process progresses. This does not negate the necessity for the test
plans to track the requirements as the basis for the testing program. At each step further through
the SDLC, the growing set of test documentation must be traceable back to the requirements. The
test program documentation must also reflect the evolutionary changes in the requirements as
they occur. Figure 3.4 shows how requirements may or may not be properly addressed by the
tests. Some requirements may get lost, some tests may just appear. Proper review of the test
plans will help identify these mismatches.

Figure 3.4: Matching tests to requirements.
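The two mismatches in Figure 3.4 amount to a pair of set differences, which can be sketched as follows. All identifiers are invented for illustration.

```python
# Hedged sketch of the Figure 3.4 check: requirements no planned test
# addresses ("lost" requirements) and tests that trace to no stated
# requirement (tests that "just appear"). Identifiers are hypothetical.

requirement_ids = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

planned_tests = {            # test case -> requirement it claims to cover
    "TC-01": "REQ-1",
    "TC-02": "REQ-2",
    "TC-03": "REQ-9",        # traces to a requirement that does not exist
}

lost_requirements = requirement_ids - set(planned_tests.values())
orphan_tests = {t for t, r in planned_tests.items() if r not in requirement_ids}
```

A review of the test plan that performs both comparisons catches lost requirements and orphan tests before the test program is executed.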

As the SDLC progresses, more of the test documentation is prepared. During each phase of the
SDLC, additional parts of the test program are developed. Test cases (see Appendix H) with their
accompanying test data are prepared, followed by test scenarios and specific test procedures to
be executed. For each test, pass/fail criteria are determined, based on the expected results from
each test case or scenario. Test reports (see Appendix I) are prepared to record the results of the
testing effort.

In each instance, the test documentation is reviewed to ascertain that the test plans, cases,
scenarios, data, procedures, reports, and so on are complete, necessary, correct, measurable,
consistent, traceable, and unambiguous. In all, the most important criterion for the test
documentation is that it specifies a test program that will find defects and demonstrate that the
software requirements have been satisfied.
Test documentation reviews take the same forms as the reviews of the software documentation
itself. Walk-throughs of the test plans are conducted during their preparation, and they are
formally reviewed as part of the SRR. Test cases, scenarios, and test data specifications are also
subject to walk-throughs and sometimes inspections. At the PDR and CDR, these documents are
formally reviewed.

During the development of test procedures, there is a heavy emphasis on walk-throughs,
inspections, and even dry runs to show that the procedures are comprehensive and actually
executable. By the end of the coding phase, the acceptance test should be ready to be performed,
with all documentation in readiness.

The acceptance test is not the only test with which the test documentation is concerned, of
course. All through the coding and testing phases, there have been unit, module, integration, and
subsystem tests going on. Each of these tests has also been planned and documented, and that
documentation has been reviewed. These tests have been a part of the overall test planning and
development process and the plans, cases, scenarios, data, and so on have been reviewed right
along with the acceptance test documentation. Again, the objective of all of these tests is to find
the defects that prevent the software from complying with its requirements.

3.3.4 User documentation reviews

User documentation must not only present information about the system; it must also be meaningful
to the reader.

The reviews of the user documentation are meant to determine that the documentation meets the
criteria already discussed. Just as important, however, is the requirement that the documentation
be meaningful to the user. The initial reviews will concentrate on completeness, correctness, and
readability. The primary concern will be the needs of the user to understand how to make the
system perform its function. Attention must be paid to starting the system, inputting data and
interpreting or using output, and the meaning of error messages that tell the user something has
been done incorrectly or is malfunctioning and what the user can do about it.

The layout of the user document (see Appendix K for an example) and the comprehensiveness of
the table of contents and the index can enhance or impede the user in the use of the document.
Clarity of terminology and avoiding system-peculiar jargon are important to understanding the
document's content. Reviews of the document during its preparation help to uncover and
eliminate errors and defects of this type before they are firmly embedded in the text.

A critical step in the review of the user documentation is the actual trial use of the
documentation, by one or more typical users, before the document is released. In this way,
omissions, confusing terminology, inadequate index entries, unclear error messages, and so on
can be found. Most of these defects are the result of the authors' close association with the
system rather than outright mistakes. By having representatives of the actual using community
try out the documentation, such defects are more easily identified and recommended corrections
obtained.
The new software system may also change user work flow and tasks. To the extent that these are
minor changes to input, control, or output actions using the system, they may be summarized in
the user documentation. Major changes to behavior or responsibilities may require training or
retraining.

Software products are often beta-tested. User documents should also be tested. Hands-on trial
use of the user documentation can point out the differences from old to new processes and
highlight those that require more complete coverage than will be available in the documentation
itself.

3.3.5 Other documentation reviews

Other documents are often produced during the SDLC and must be reviewed as they are
prepared.

In addition to the normally required documentation, other documents are produced during the
software system development. These include the software development plan, the software quality
system plan, the CM plan, and various others that may be contractually invoked or called for by
the organization's standards. Many of these other documents are of an administrative nature and
are prepared prior to the start of software development.

The software development plan (see Appendix A), which has many other names, lays out the
plan for the overall software development effort. It will discuss schedules, resources, perhaps
work breakdown and task assignment rules, and other details of the development process as they
are to be followed for the particular system development.

The software quality system plan and the CM plan (see Appendixes B and C, respectively)
address the specifics of implementing these two disciplines for the project at hand. They, too,
should include schedule and resource requirements, as well as the actual procedures and
practices to be applied to the project. There may be additional documents called out by the
contract or the organization's standards as well.

If present, the safety and risk management plans (see Appendixes L and M) must undergo the
same rigor of review as all the others. The software maintenance plan (see Appendix N), if
prepared, is also to be reviewed.

Since these are the project management documents, it is important that they be reviewed at each
of the formal reviews during the SLC, with modifications made as necessary to the documents or
overall development process to keep the project within its schedule and resource limitations.

Reviews of all these documents concentrate on the basic criteria and completeness of the
discussions of the specific areas covered. Attention must be paid to complying with the format
and content standards imposed for each document.

Finally, the software quality practitioner must ascertain that all documents required by standards
or the contract are prepared on the required schedule and are kept up to date as the SLC
progresses. Too often, documentation that was appropriate at the time of delivery is not
maintained as the software is maintained in operation. This leads to increased difficulty and cost
of later modification. It is very important to include resources for continued maintenance of the
software documentation, especially the maintenance documentation discussed in Chapter 8. To
ignore the maintenance of the documentation will result in time being spent reinventing or
reengineering the documentation each time maintenance of the software is required.

3.4 Summary
Reviews take on many forms. Each review has a specific purpose, objective, audience, and cast
of participants.

Informal, in-process peer reviews generally occur during the execution of each SDLC phase.
They concentrate on single products or even small parts of single products. It is the intention of
in-process reviews to detect defects as quickly as possible after their insertion into the product.

Formal, phase-end reviews usually occur at the ends of the SDLC phases and establish the
baselines for work in succeeding phases. They also serve as points in the development when the
decision to continue (go) or terminate (no go) a project can be made. The phase-end reviews are
much broader in scope than in-process reviews. They cover the entire family of products to be
prepared in each major SDLC phase, as well as the various documented plans for the project.

Formal project completion audits include the FA and PA. These assess readiness to ship or
deliver the software. The PIR assesses the success of the software in meeting its goals and
objectives once the software is in place and has been used for a time.

Documentation reviews and audits, both formal and informal, are applicable to each of the
software documents. The most basic of the reviews is the peer walk-through. Another basic
document review is the format review. This can be either formal or informal. Requirements
reviews are intended to show that the problem to be solved is completely spelled out. Design
reviews verify that the evolving design is both correct and traceable back to the approved
requirements. Test documentation is reviewed to assure that the test program will find defects
and will test the software against its requirements. The reviews of the user documentation are
meant to determine that the documentation meets the criteria that have been discussed.

Other documents are often produced during the SLC and must be reviewed as they are prepared.
Reviews of all of these documents concentrate on the basic criteria and on the completeness of
the discussions of the specific areas covered.
Design and code reviews held during the course of the various phases are usually in the form of
walk-throughs and inspections. These generally informal reviews are held to get an early start on
eliminating defects in the products being examined.

Test reviews are much the same as code reviews, covering the test program products rather than
the software products.

Implementation reviews are conducted just prior to placing the software system into full use.

Reviews take place throughout the SLC and verify that the products of each phase
are correct with respect to its phase inputs and activities.
