SQA Unit 1-5
(IT)
SEMESTER - VI (CBCS)
SOFTWARE QUALITY
ASSURANCE
Unit - I
1. Introduction to Quality
2. Software Quality
Unit - II
3. Fundamentals of Software Testing
4. Challenges in Software Testing
Unit - III
5. Unit Testing: Boundary Value Testing, Equivalence Testing
6. Decision Table Based Testing, Path Testing, Data Flow Testing
Unit - IV
7. Software Verification and Validation
8. V-Test Model
Unit - V
9. Levels of Testing
10. Special Tests
*****
B. Sc. (Information Technology) Semester – VI
Course Name: Software Quality Assurance Course Code: 88701
Periods per week (1 Period is 50 minutes): 5
Credits: 2
Evaluation System:
  Theory Examination: 2½ hours, 75 marks
  Internal: 25 marks
1
INTRODUCTION TO QUALITY
Unit Structure
1.0 Objectives
1.1 Introduction
1.2 Historical Perspective of Quality
1.3 What is Quality? (Is it a fact or perception?)
1.4 Definitions of Quality
1.5 Core Components of Quality
1.6 Quality View
1.7 Financial Aspect of Quality
1.8 Definition of Quality
1.9 Customers, Suppliers and Processes
1.10 Total Quality Management (TQM)
1.11 Quality Principles of ‘Total Quality Management’
1.12 Quality Management Through Statistical Process Control
1.13 Quality Management Through Cultural Changes
1.14 Continual (Continuous) Improvement Cycle
1.15 Quality in Different Areas
1.16 Benchmarking and Metrics
1.17 Problem Solving Techniques
1.18 Problem Solving Software Tools
1.19 Quality Tips
1.20 Summary
1.21 Exercises
1.22 References
1.0 OBJECTIVES
Objectives of Quality Assurance are as follows:
It guarantees that problems which are not solved within the software are addressed by the higher administration.
1.1 INTRODUCTION
Evolution of mankind can be seen as a continuous effort to make things better and more convenient through improvements linked with the invention of new products. Since time immemorial, humans have used various products
to enhance their lifestyle. Initially, mankind was completely dependent on
nature for satisfying their basic needs for food, shelter and clothing. With
evolution, human needs increased from the basic level to substantial
addition of derived needs to make life more comfortable. As civilisation
progressed, humans started converting natural resources into things which
could be used easily to satisfy their basic as well as derived needs.
Earlier, product quality was governed by the individual skill and it differed
from instance to instance depending upon the creator and the process used
to make it at that instance. Every product was considered as a separate
project and every instance of the manufacturing process led to products of
different quality attributes. Due to the increase in demand for the same or similar products, which were expected to satisfy the same or similar needs, and the mechanisation of manufacturing processes, the concepts of product specialisation and mass production came into existence. Technological developments made production faster and repetitive in nature.
Now, we are into the era of specialised mass production where producers
have specialised domain knowledge and manufacturing skill required for
particular product creation, and use this knowledge and skill to produce
the product in huge quantities. The market is changing considerably from
monopoly to fierce competition but still maintaining the individual
product identity in terms of attributes and characteristics. There are large
number of buyers as well as sellers in the market, providing and
demanding similar products, which satisfy similar demands. Products may
or may not be exactly the same but may be similar or satisfying similar
needs/demands of the users. They may differ from one another to some
extent on the basis of their cost, delivery schedule to acquire it, features,
functionalities present/absent, etc. These may be termed as attributes of
the quality of a product.
We will be using the word ‘product’ to represent ‘product’ as well as
‘service’, as both are intended to satisfy needs of users/customers. Service
may be considered as a virtual product without physical existence but
satisfying some needs of users.
1.3 WHAT IS QUALITY? (IS IT A FACT OR
PERCEPTION?)
What is quality, is an important question but does not have a simple
answer. Some people define it as a fact while others define it as a
perception of customer/user.
We often talk about quality of a product to shortlist or select the best
product among the equals when we wish to acquire one. We may not have
a complete idea about the meaning of quality or what we are looking for
while selecting a product, if somebody questions us about the reason for
choosing one product over the other. This is a major issue faced by people working in the quality field even today, as it is very difficult to decide what contributes to customer loyalty or to a first-time sale and subsequent repeat sales. The term 'quality' means different things to different people at different times, in different places and for different products. For example, to some users, a quality product may be one which has no or few defects, works exactly as expected, and matches their concept of cost and delivery schedule along with the services offered. Such a thought may be a definition of quality – 'Quality is fitness for use'.
However, some other definitions of quality are also widely discussed.
Quality defined as, ‘Conformance to specifications’ is a position that
people in the engineering industry often promote because they can do very
little to change the design of a product and have to make the product as per the design, which will best suit the user's other expectations like low cost,
fast delivery and good service support. Others promote wider views,
which may include the attribute of a product which satisfies /exceeds the
expectations of the customer. Some believe that quality is a judgement or
perception of the customer/user about the attributes of a product, as all the
features may not be known or used during the entire life of a product.
Quality is the extent to which the customers/users believe that the product
meets or surpasses their needs and expectations. Others believe that
quality means delivering product that,
satisfaction. This definition is mainly derived from an approach to quality management through 'Quality is fitness for use'.
5. Transcendent Quality:
To many users/customers, it is not clear what is meant by a ‘quality
product’, but as per their perception it is something good and they may
want to purchase it because of some quality present/absent in the product.
The customer will derive the value and may feel the pride of ownership.
Definitions 2, 3 and 4 are traditionally associated with the idea of a product quality that is:
- Less expensive, with higher returns or higher value for the customer, satisfying cost-benefit analysis. Directly or indirectly, every owner would perform a cost-benefit analysis before arriving at a decision to purchase something.
- Without any defect, or with so few defects that its usage is uninhibited and failures or repairs are as rare as possible. If the product is very reliable, it may be liked by prospective buyers.
Define:
There must be some definition of what is required in the product, in terms
of attributes or characteristics of a product, and in how much quantity it is
required to derive customer satisfaction. Features, functionalities and
attributes of the product must be measured in quantitative terms, and it
must be a part of requirement specification as well as acceptance criteria
defined for it. The supplier as well as the customer must know what ‘Must
be’, what ‘Should be’ and what ‘Could be’ present in the product so
delivered and also what ‘Must not be’, ‘Should not be’ and ‘Could not be’
present in the product.
Measure:
The quantitative measures must be defined as an attribute of quality of a
product. Presence or absence of these attributes in required quantities acts
as an indicator of product quality achievement. Measurement also reveals the gap between what is expected by a customer and what is delivered when the product is sold. This gap may be considered a lack of quality in that product, and may cause customer dissatisfaction or rejection by the customer.
Monitor:
Ability of the product to satisfy customer expectations defines the quality
of a product. There must be some mechanism available with the
manufacturer to monitor the processes used in development, testing and
delivering a product to a customer and their outcome, i.e., attributes of
product produced using the processes, to ensure that customer satisfaction
is incorporated in the deliverables given to the customer. Derivations from
the specifications must be analysed and reasons of these derivations must
be sorted out to improve product and process used for producing it. An
organisation must have correction as well as corrective and preventive
action plans to remove the reasons of deviations/deficiencies in the
product as well as improve the processes used for making it.
Control:
Control gives the ability to provide desired results and avoid the undesired
things going to a customer. Controlling function in the organisation,
properly called as ‘quality control’ or ‘verification and validation’, may be
given a responsibility to control product quality at micro level while the
final responsibility of overall organisational control is entrusted with the
management, popularly called as ‘quality assurance’. Management must
put some mechanism in place for reviewing and controlling the progress
of product development and testing, initiating actions on
deviations/deficiencies observed in the product as well as the process.
Improve:
Continuous/continual improvements are necessary to maintain ongoing
customer satisfaction and overcome the possible competition, customer
complaints, etc. If some producer enjoys very high customer satisfaction
and huge demand for his product, competitors will try to enter the market.
They will try to improve their products further to beat the competition.
Improvement may follow either of two different approaches, viz. continuous improvement or continual improvement, as the case may be.
1.5.3 Management Must Lead The Organisation Through
Improvement Efforts:
Quality must be perceived by a customer to realise customer satisfaction.
Many factors must be controlled by a manufacturer in order to attain
customer satisfaction. Management is the single strongest force existing in
an organisation to make the changes as expected by a customer; it
naturally becomes the leader in achieving customer satisfaction, quality of
product and improvement of the processes used through various programs
of continuous/continual improvement. Quality management must be
driven by the management and participated by all employees.
Management should lead the quality improvement programme in the organisation by defining its vision, mission, policies, objectives, strategies, goals and values, and demonstrate these by personal example. The entire organisation should imitate the behavioural and leadership aspects of the management. Every word and action of the management may be seen and adopted by the employees. Quality improvement is also termed a 'cultural change brought in by management'.
Organisation based policies, procedures, methods, standards, systems etc.
are defined and approved by the management. Adherence to these systems
must be monitored continuously and deviation/deficiencies must be
tracked. Actions resulting from the observed deviations/deficiencies shall
be viewed as the areas which need improvements. The improvements may
be required in enforcement or definition of policies and procedures,
methods, standards, etc. Management must have quality planning at
organisation level to support improvement actions.
Customer:
Customer is the main stakeholder for any product/project. The customer
will be paying for the product to satisfy his requirements. He/she must
benefit by acquiring a new product. Sometimes, the customer and user can
be different entities but here, we are defining both as same entity
considering the customer as a user. Though late-delivery penalty clauses are sometimes included in the contract, the customer is interested in delivery of the product with all features on the defined schedule, and may not be interested in being compensated for failures or delayed deliveries.
Supplier:
Suppliers give inputs for making a project/product. As an organisation
becomes successful, more and more projects are executed, and suppliers
can make more business, profit and expansion. Suppliers can be external
or internal to the organisation. External suppliers may include people
supplying machines, hardware, software, etc. for money while internal
suppliers may include other functions such as system administrator,
training provider, etc, which are supporting projects/product development.
Employee:
People working in a project or an organisation may be termed employees. These people may be permanent or temporary workers, but not contract labour having no stake in product success. (Contract workers may come under the supplier category.) As the projects/organisations become successful, the people working on these projects get more recognition, satisfaction, pride, etc. They feel proud to be part of a successful mission.
Management:
People managing the organization / project may be termed as management
in general. Management may be divided further into project management,
staff management, senior management, investors, etc. Management needs
more profit, recognition, turnover improvements, etc to make their vision
and mission successful. Successful projects give management many
benefits like expanding customer base, getting recognition, more profit,
more business, etc.
There are two more stakeholders in the success as well as failure of any
project / product / organisation. Many times, we do not feel their existence
at project level or even at organisation level. But they do exist at macro
level.
Society:
Society benefits as well as suffers due to successful projects
/organisations. It is more of a perception of an individual looking towards
the success of the organisation. Successful organisations / projects
generate more employment, and wealth for the people who are in the
category of customer, supplier, employee, management, etc. It also affects
the resource availability at local as well as global level like water, roads,
power supply, etc. It also affects economics of a society to a larger extent.
Major price rise has been seen in industry-dominated areas, as the paying capacity of people in these areas is higher than in other areas where there is no such industry.
Government:
Government may be further categorised as local government, state
government, central government, etc. Government benefits as well as
suffers due to successful projects/organisations. Government may get higher taxes, export benefits, foreign currency, etc., from successful projects/organisations. People living in those areas may get employment, and the overall wealth of the nation improves. At the same time, there may be pressure on resources like water, power, etc. There may be some problems in terms of money availability and flow, as success leads to more buying power and inflation.
The quality perspective of all these stakeholders defines their expectations from organisations/projects. We may feel that superficially these views differ from each other, though finally they may lead to the same outcome. If these views match perfectly and there is no gap in the stakeholders' expectations, then organisational performance and effectiveness can be improved significantly through the collective efforts of all stakeholders. If the views differ significantly, this may lead to discord and hamper improvement. Let us discuss two important views of quality which mainly define the expectations from a project and its success at unit level, viz. the customer's view and the developer's view of quality.
Doing It on Time:
All resources for developing a new product are scarce, and the time factor has a cost associated with it. The value of money changes with time; thus, money received late is as good as less money received. If the customer is expected to pay on each milestone, then the producer has to deliver milestones on time to realise money on time. Delay in delivery represents a problem with the processes, starting from requirement gathering and estimation of effort and schedule, and extending up to development and delivery.
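The point that "money received late is as good as less money received" can be illustrated with a simple present-value calculation. This is a generic finance sketch, not from the text; the amount and the 10% annual rate are hypothetical:

```python
# Present value of a delayed payment (hypothetical figures).
def present_value(amount, annual_rate, years_delayed):
    """Discount a future receipt back to today's terms."""
    return amount / (1 + annual_rate) ** years_delayed

# A 100,000 milestone payment received one year late, at 10% per annum,
# is worth only about 90,909 in today's terms: effectively less money.
pv = present_value(100_000, 0.10, 1)
print(round(pv, 2))  # 90909.09
```

The later the milestone slips, the deeper the discount, which is why on-time delivery matters financially and not just contractually.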
The difference between the two views discussed above (customer's view vs supplier's view) creates problems for the manufacturer as well as the customer when it comes to requirement gathering, acceptance testing, etc. It may result in a mismatch of expectation and service level, which would be responsible for introducing defects in the product in terms of functionalities, features, cost, delivery schedule, etc. Sometimes these views are considered two opposite sides of a coin, or two poles of the Earth which can never meet. The difference between the two views is treated as a gap.
In many cases, the customer wants to dictate the terms as he is going to
pay for the product and the processes used for making it, while the
supplier wants customer to accept whatever is produced. Wherever the gap
is smaller, the ability of a product to satisfy customer needs is considered better. At the same time, it helps the development organisation as well as the customer to get their share of benefits from steady processes. On the contrary, a larger gap causes more problems in development and distorted relations between the two parties, and may result in losses for both. Quality processes must reduce the gap between the two views effectively.
Effectiveness of a quality process may be defined as the ability of the
processes and the product to satisfy both or all stakeholders in achieving
their expectations.
Figure 1.1 explains a gap between actual product, requirements for the
product and customer expectations from the product. It gives two types of
gaps, viz. user's gap and producer's gap.
Customer Survey:
Customer surveys are essential when an organisation is producing a
product for a larger market where a mismatch between the product and
expectation can be a major problem to the producer. It may also be essential
for projects undertaken for a single customer such as mission critical
projects where failure of the project has substantial impact on the
customer as well as producer. For a larger market, survey may be
conducted by marketing function or business analyst to understand user
requirements and collate them into specification documents for the
product. Survey teams decide present and future requirements for the
product and the features required by the potential customers.
For a single customer, a survey is conducted by business analyst and
system analyst to analyse the intended use of the product under
development and the possible environment of usage. Domain experts from
the development side may visit the customer organisation to understand
the specific requirements and work flow related to domain to incorporate
it into the product to be developed.
requirements. The product so produced and the requirement specifications used may differ significantly, creating the producer's gap.
Process Definition:
Development and testing processes must be sufficiently mature to handle the transfer of information from one person or one stage to another during the software development life cycle. There must be continuous 'Do and Check' processes to build better products and get feedback about process performance. Such product development has lifecycle testing activities associated with development.
1.7.1 Cost of Manufacturing:
Cost of Prevention:
An organisation may have defined processes, guidelines, standards of
development, testing, etc. It may define a program of imparting training to
all people involved in development and testing. This may represent a cost
of prevention. Creation and use of formats, templates, etc., and acquiring various process models and standards also represent a cost of prevention. This is an investment by an organisation and is supposed to yield returns. This is also termed 'Green money'. Generally, it is believed that 1 part of cost of prevention can reduce 10 parts of cost of appraisal and 100 parts of cost of failure.
Cost of Appraisal:
An organisation may perform various levels of reviews and testing to appraise the quality of the product and the process followed for developing it. The cost incurred in first-time reviews and testing is called the cost of appraisal. There is no return on this investment, but it helps in identifying process capabilities and process-related problems, if any. This is termed 'Blue money', as it can be recovered from the customer. Generally, it is believed that 1 part of cost of appraisal can reduce 10 parts of cost of failure.
Cost of Failure:
Cost of failure starts when there is any defect or violation detected at any
stage of development including post-delivery efforts spent on defect
fixing. Any extent of rework, retesting, sorting, scrapping, regression testing, late payments, sales under concession, etc. represents cost of failure. There may be some indirect costs, such as loss of goodwill, not getting customer references, not getting repeat orders, and customer dissatisfaction associated with the failure to produce the right product. The cost incurred due to any kind of failure is represented as cost of failure. This is termed 'Red money'. This cost badly affects the profitability of the project/organisation.
On the basis of quality focus, organisations may be placed in two categories, viz. organisations which are less quality conscious (termed 'q') and organisations which are more quality conscious (termed 'Q'). The distribution of cost of quality for these two types may be represented as below. Please refer to Fig. 1.2.
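The 1:10:100 relationship quoted above can be expressed as a small arithmetic sketch. The leverage factors come from the rule of thumb in the text; the spending figure is hypothetical:

```python
# Rule of thumb from the text: 1 part of prevention cost can reduce
# 10 parts of appraisal cost and 100 parts of failure cost.
APPRAISAL_PER_PREVENTION = 10
FAILURE_PER_PREVENTION = 100

def avoided_costs(prevention_spend):
    """Appraisal and failure costs notionally avoided by prevention spend."""
    return (prevention_spend * APPRAISAL_PER_PREVENTION,
            prevention_spend * FAILURE_PER_PREVENTION)

# Spending 5 hypothetical units on prevention (training, templates,
# standards) notionally avoids 50 units of appraisal and 500 of failure.
appraisal_saved, failure_saved = avoided_costs(5)
print(appraisal_saved, failure_saved)  # 50 500
```

This is why 'Green money' is treated as an investment: a quality-conscious ('Q') organisation shifts spend toward prevention, where each unit has the largest notional leverage.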
Internal Customer:
Internal customers are the functions and projects serviced and supported by other functions/projects. System administration may have projects as its customer, while purchasing may have system administration as its customer. Along the value chain, each function must understand its customers and suppliers, and must try to fulfil its customer's requirements. This is one of the important considerations behind 'Total Quality Management', where each and every individual in the supply chain must identify and support his customer. If internal customers are satisfied, this will automatically satisfy the external customer, as it sets the tone and perspective for everybody.
External Customer:
External customers are the people external to the organisation who will be paying for the services offered by the organisation. These are the people who will actually buy products from the organisation. As the organisation concentrates on external customers for their satisfaction, it must improve the quality of its output.
by management must have relationships with each other and must be able to satisfy the vision of the organisation.
1.11.2 Adapting to New Philosophy of Managing People/Stakeholders
by Building Confidence and Relationships:
Management must adapt to the new philosophies of doing work and
getting the work done from its people and suppliers. We are in a new
economic era where skills make an individual indispensable. The process
started in Japan and is perceived as a model throughout the world for
improving quality of working as well as products.
We can no longer live with commonly accepted levels of delays, mistakes,
defective materials, rejections and poor workmanship. Transformation of
management style to total quality management is necessary to take the
business and industry on the path of continued improvements.
1.11.3 Declare Freedom from Mass Inspection of Incoming/Produced
Output:
It was a common belief earlier that for improving quality of a product, one
needs to have rigorous inspection program followed by huge rework,
sorting, scrapping, etc. It was believed that one must check everything to
ensure that no defect goes to the customer. But there is a need for a change in the thinking of management and people, as mass inspection results in huge cost overruns and the product produced is of inferior quality. There must be an approach for eliminating mass inspection, followed by cost of failure, as the way to achieve product quality. Improving product quality requires setting up the right development processes, measurement of process capabilities, and statistical evidence of built-in quality in all departments and functions.
1.11.5 Improve Every Process Used for Development and Testing of Product:
Improve every process of planning, production and service to the customer
and other support processes constantly. Processes have interrelationships
with each other and one process improvement affects other processes
positively. Search continually for problems in these processes, in terms of variations, in order to improve every activity in the organisation: to improve quality and productivity and to continuously decrease the cost of production as well as the cost of quality. Institutionalise innovations and improvements of products and of the processes used to make them. It is management's job to work continually on optimising processes.
Management should not stop or discount feedback coming to them, even if it is negative. Giving positive as well as negative feedback should be encouraged, and it must be used to perform SWOT analysis followed by actions. Fear is something which creates stress in the minds of people, prohibiting them from working on new ways of doing things. Fear can cause disruption in the decision-taking process, which may result in excessive defensiveness and also major defects in the product. People may not be able to perform under stress.
must be changed from sheer numbers to quality of output. Management must understand managing by facts.
to the extent required shall define the quality level of the product or services.
Plan:
An organisation must plan for improvements on the basis of its vision and mission definition. Planning includes answering questions like who, when, where, why, what, how, etc., about various activities, and setting expectations. Expected results must be defined in quantitative terms, and actions must be planned to achieve answers to these questions. Quality planning at unit level must be in sync with quality planning at organisation level. Baseline studies are important for planning: a baseline study defines where one is standing, and the vision defines where one wishes to reach.
Do:
An organisation must work in the direction set by the plan devised in the earlier phase for improvements. A plan is not everything but a roadmap: it sets the direction, but execution is also important. Actual execution of a plan determines whether the expected results are achieved or not. The plan sets the tone while execution makes the plan work. The 'Do' process needs inputs like resources, hardware, software, training, etc., for execution of a plan.
Check:
An organisation must compare the actual outcome of the 'Do' stage with the reference or expected results, which are the planned outcomes. This must be done periodically to assess whether the progress is in the proper direction or not, and whether the plan is right or not. Expected and actual results must be in numerical terms, and compared at some periodicity as defined in the plan.
Act:
If any deviations (positive or negative) are observed in the actual outcome with respect to the planned results, the organisation may need to decide actions to correct the situation. The actions may include changing the plan, the approach or the expected outcome, as the case may be. One may have to initiate corrective and/or preventive actions as per the outcome of 'Check'. When expected results and actuals match within a given degree of variation, one may understand that the plan is going in the right direction. Running faster or slower than the plan will need action. Figure 1.4 shows the PDCA cycle of continual improvement diagrammatically.
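The 'Check' and 'Act' steps above can be sketched as a comparison of planned and actual results. The metric names, targets and allowed degree of variation below are hypothetical, invented for this sketch:

```python
# PDCA 'Check' sketch: flag metrics deviating beyond the allowed variation.
# Metric names, targets and tolerance are hypothetical.
plan = {"defect_density": 0.5, "on_time_delivery": 0.95}    # planned outcomes
actual = {"defect_density": 0.8, "on_time_delivery": 0.96}  # 'Do' stage results
TOLERANCE = 0.05  # given degree of variation

def check(plan, actual, tolerance):
    """Return metrics whose deviation exceeds the tolerance ('Act' candidates)."""
    return {m: actual[m] - plan[m]
            for m in plan
            if abs(actual[m] - plan[m]) > tolerance}

for metric, delta in check(plan, actual, TOLERANCE).items():
    print(f"Act on {metric}: deviation {delta:+.2f}")
# prints: Act on defect_density: deviation +0.30
```

Metrics within tolerance need no action; those outside it trigger the 'Act' step, which may adjust the plan, the approach or the expected outcome.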
1.15 QUALITY IN DIFFERENT AREAS
There are some common denominators in all these examples which may be considered the quality factors. Although the terms used to describe each product in different domain areas vary to some extent, almost all areas can be explained in terms of a few basic quality parameters. These have been defined as the basic quality parameters by stalwarts of quality management.
1.16 BENCHMARKING AND METRICS
Benchmarking is an important concept used in Quality Function Deployment (QFD). It is the concept of creating qualitative/quantitative metrics, or measurable variables, which can be used to assess product quality on several scales against a benchmark. Typical variables of benchmarking may include the price of a product paid by the customer, the time required to acquire it, customer satisfaction, defects or failures, attributes and features of the product, etc. Metrics are defined for collecting information about product capabilities, process variability and the outcome of the process in terms of product attributes. A metric is a relative measurement of some parameters of a product which are related to the product and the processes used to make it. An organisation must develop a consistent set of metrics derived from its strategic business plan and the performance of its benchmark partner.
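As an illustration, the sketch below scores a product against a benchmark partner on a few of the variables mentioned above. All figures are invented; for 'lower is better' variables the ratio is inverted so that a score above 1.0 always means better than the benchmark:

```python
# Hypothetical benchmarking sketch. All figures are invented.
ours = {"price": 90, "defects_per_kloc": 2.0, "satisfaction": 4.2}
benchmark = {"price": 100, "defects_per_kloc": 2.5, "satisfaction": 4.0}
LOWER_IS_BETTER = {"price", "defects_per_kloc"}

def relative_scores(ours, benchmark):
    """Score each variable against the benchmark; > 1.0 means better."""
    scores = {}
    for var, value in ours.items():
        ratio = value / benchmark[var]
        # Invert 'lower is better' ratios so every score reads the same way.
        scores[var] = 1 / ratio if var in LOWER_IS_BETTER else ratio
    return scores

for var, score in relative_scores(ours, benchmark).items():
    print(f"{var}: {score:.2f}")
```

Normalising all variables onto one scale like this is what lets a QFD exercise compare otherwise incommensurable things (price, defects, satisfaction) side by side.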
- These programs and tools need training before they can be used. Training incurs cost as well as time, and some tools need specific training to understand and use them.
- All software/hardware is prone to defects, and these tools are no exception. There can be mistakes while building or using them, and sometimes these mistakes can affect decisions drastically.
- Decisions have to be taken by human beings, not by the tool. Tools may offer some options which may be used as a guide, and some tools can take decisions within a limited range.
- Tools may mean more cost and time to learn and implement. Every tool has a learning curve.
1.18.1 Tools:
Tools are an organisation's analytical assets that assist in understanding a problem through data and try to indicate possible solutions. Quality tools are more specific tools which can be applied to solving problems faced by projects and functional teams while improving quality in organisations. Tools may be hardware/software and physical/logical tools. We will learn more about quality tools in Chapter 16 on 'Qualitative and Quantitative Analysis'.
1.18.2 Techniques:
Techniques describe more about a process used in measurement, analysis and decision-making during problem solving. Techniques are independent of tools, but they drive tool usage. Techniques do not need tools for application, while tools need techniques for their use. The same tool can be used for different purposes if the techniques differ. Table 1.3 gives the difference between tools and techniques.
1.20 SUMMARY
In this chapter, we have seen various definitions of quality as understood
by different people and different stakeholders. It also covered the
definitions by quality stalwarts and different international standards. Then,
we studied the basic components to produce quality, and the views of
customers and producer on quality. As a tester, one must understand
different gaps like user gap, and producer gap, and how to close them to
achieve customer satisfaction.
We have described various cost components like manufacturing cost and
cost of quality. Cost of quality concepts with its three components viz.
preventive cost of quality, appraisal cost of quality and failure cost of
quality are described along with the importance of cost of prevention, and
how it affects cost of quality and improves profitability.
We have seen different approaches to continually (continuously)
improving quality. We have covered 'TQM principles of quality
management', and 'DMAIC principles of continual improvement' through
quality planning, quality control and quality improvement. We have also
discussed a theory of ‘Cultural change principles of quality management’.
Finally, we have introduced the concept of problem solving through usage
of tools and techniques. We have also briefly elucidated the concept of
benchmarking.
1.21 EXERCISES
1) Explain ‘quality’ in terms of the generic expectations from any
product.
2) Differentiate between continuous improvement and continual
improvement.
3) Define the stakeholders for successful projects at micro level and for
successful organisations at macro level.
4) Define ‘quality’ as viewed by different stakeholders of software
development and usage.
5) Explain customer’s view of quality.
6) Explain supplier’s view of quality.
7) Define 'User's gap' and 'Producer's gap' and explain how these gaps
can be closed effectively.
8) Describe various definitions of quality as per international standards.
9) Describe definition of quality as per Dr Deming, Dr Juran and Philip
Crosby.
10) Describe ‘Total Quality Management’ principles of continual
improvement.
11) Describe cultural change requirement for quality improvement.
12) Differentiate between tools and techniques.
1.22 REFERENCES
Software Testing and Continuous Quality Improvement by William E.
Lewis, CRC Press, 3rd Edition, 2016.
https://www.altexsoft.com/whitepapers/quality-assurance-quality-
control-and-testing-the-basics-of-software-quality-management/
https://www.softwaretestinghelp.com/software-quality-assurance
*****
2
SOFTWARE QUALITY
Unit Structure
2.0 Objectives
2.1 Introduction
2.2 Constraints of Software Product Quality Assessment
2.3 Customer is a King
2.4 Quality and Productivity Relationship
2.5 Requirements of a Product
2.6 Organization Culture
2.7 Characteristics of Software
2.8 Software Development Process
2.9 Types of Products
2.10 Schemes of Criticality Definitions
2.11 Problematic Areas of Software Development Life Cycle
2.12 Software Quality Management
2.13 Why Software Has Defects?
2.14 Processes Related to Software Quality
2.15 Quality Management System Structure
2.16 Pillars of Quality Management System
2.17 Important Aspects of Quality Management
2.18 Summary
2.19 Exercise
2.20 References
2.0 OBJECTIVES
This chapter focuses on software quality parameters.
2.1 INTRODUCTION
In the previous chapter, we have discussed various definitions of quality
and how they fit the perspective of different stakeholders. One of the
definitions, i.e., 'Conformance to explicitly stated and agreed functional
and non-functional requirements', may be termed as 'quality' for the software
product offered to customers/final users from their perspective. Customers
may or may not be the final users and sometimes, developers have to
understand requirements of final users in addition to the customer. If the
customer is a distributor or retailer or facilitator who is paying for the
product directly, then the final user’s requirements may be interpreted by
the customer to make a right product.
It is not feasible to put all requirements in requirement specifications. A
large number of requirements remain undefined or implied as they may be
basic requirements for the domain and the customer perceives them as
basic things one must know. It gives importance to the documented and
approved software and system requirement specifications on which quality
of final deliverables can be decided. It shifts the responsibility of making a
software specification document and getting it approved from the customer
to the producer of a product. There are many constraints in achieving
quality for a software application being developed using such software
requirement specifications.
Invention:
Inventions may be accidental in nature. They are generally unplanned.
Inventions are done by scientists, and implementation and acceptance by
people can be cumbersome, as the general level of acceptance is very low.
An invention may or may not be immediately acceptable to the people
doing the work.
Inventions may not be directly applicable to everyday work. An invention
may need heavy customisation to make it suitable for normal usage.
Breakthrough changes are possible due to inventions. An invention may
lead to major changes in technology, way of doing work, etc.
An invention may lead to scrapping of old technologies and old skills,
and sometimes it meets with heavy resistance.

Innovation:
Innovation is a planned activity leading to changes.
Innovation is done by people in a team, possibly cross-functional teams,
involved in doing the work. There is higher acceptability by people as
they are involved in it.
Innovations can be applied to everyday work easily. The existing methods
and processes are improved to eliminate waste.
Changes in small steps are possible by innovation. Innovation generally
leads to administrative improvements, whereas technological or
breakthrough improvements are not possible.
Innovation may lead to rearrangement of things but there may not be a
fundamental change. It generally works on elimination of waste.
Many organisations have a separate 'Research and Development' function
responsible for doing inventions. These functions are dedicated to making
breakthrough changes by developing new technologies and techniques for
accomplishing work. They are supposed to drive major changes in the
approaches of development, implementation, testing, etc. 'Six Sigma'
improvements also talk about breakthrough improvements where
processes may be redesigned or redefined. It may add new processes and
eliminate old processes, if they are found to be problematic. But an
organisation should also create an environment which helps in innovation
or rearrangement of tasks to make small improvements every day.
Continuously identifying any type of waste in day-to-day activities and
removing all nonessential things can refine the processes.
2.6.1 Shift in Focus from 'q' to 'Q':
As the organisation grows from 'q' to 'Q', there is a cultural change in the
attitude of the management and employees towards quality and the
customer. In initial stages, at the higher side of 'q', a product is subjected to
heavy inspection, rework, sorting, scrapping, etc., to ensure that no defects
are present in the final deliverable to the customer, while the final stages of a
'Q' organisation concentrate on defect prevention through process
improvements. It targets first-time right. Figure 2.1 shows an
improvement process where the focus of quality changes gradually.
Quality Management Approach:
There are three kinds of systems in the universe, viz. completely closed
systems, completely open systems and systems with semi-permeable
boundaries. A completely closed system means that nothing can enter the
system and nothing can go out of it. On the other hand, a completely open
system has a direct influence of the universe on the system and vice versa.
Completely closed systems or completely open systems do not exist in
reality. Systems with semi-permeable boundaries are the reality; they
allow the system to be impacted by external changes and also have some
effect on the external environment. Anything coming from outside may
have an impact on the organisation but the level of impact may be
controlled by taking some actions. Similarly, anything going out can also
affect the universe but the impact is controlled. Organisations try to assess
the impact of the changes on the system and try to adapt to the changes in
the environment to get the benefits. They are highly mature when they
implement Quality Management as a management approach. There are
virtually no defects in processes as they are optimised continuously, and
products are delivered to customers without any deficiency. The
organisation starts working on defect prevention mechanism and
continuous improvement plans. The organisation defines methods,
processes and techniques for future technologies and training programs for
process improvements. Management includes planning, organising,
staffing, directing, coordinating, and controlling to get the desired output.
It also involves mentoring, coaching, and guiding people to do better work
to achieve organisational objectives.
2.8 SOFTWARE DEVELOPMENT PROCESS
The software development process defines how the software is built.
Some people also refer to SDLC as the system development life cycle,
with a view that a system is made of several components and software is
one of these components. There are various approaches to build software.
Every approach has some positive and some negative points. Let us talk
about a few basic approaches of developing software from requirements.
It is also possible that different people may call the same or similar
approach by different names.
Arrows in the waterfall model are unidirectional. It assumes that the
developers shall get all requirements from a customer in a single go. The
requirements are converted into high level as well as low level designs.
Designs are implemented through coding. Code is integrated and
executables are created. Executables are tested as per test plan. The final
output in the form of an executable is deployed at customer premises.
Future activities are handled through maintenance. If followed in reality,
the waterfall model is the shortest-route model, which can give the highest
efficiency and productivity. Waterfall models are used extensively in
fixed-price/fixed-schedule projects where estimation is based on initial
requirements. As the requirements change, the estimation is also revised.
elicitation is done, the logic is built behind it to implement the elicited
requirements.
Scrum
Extreme Programming
Bug fixing, where the defects present in the given software are fixed.
This may involve retesting and regression testing. During bug fixing,
analysis of the bug is an important consideration. There is always a
possibility that while fixing a bug, new bugs may have been added in
the product.
Reengineering, where there is some change in the logic or algorithm
used due to changes in business environment.
Any product failure resulting in the death of a person. These will be the
most critical products as they can affect human life directly.
Any product failure which may cause minor injury and does not result
in anything as defined above.
2.9.3 Products Which Can Be Tested Only By Simulators:
A product's failure which disrupts the entire business can be very critical
from the business point of view. There is no fallback arrangement
available or possible in case of failure of a product on which the
business is completely dependent.
A product's failure which does not affect business at all is the other
option. If it fails, one may have another method to achieve the same
result, and the rearrangement may not cause significant distortion of the
business process.
2.10.2 Another Way of Defining User's Perspective:
This classification considers the environment in which the product is
operating. It may range from very complex user environment to very easy
user environment.
Form based software where user inputs are taken and stored in some
database. As and when required, those inputs are manipulated and
shown to a user on screen or in form of a report. There is not much
manipulation of data and no heavy calculations/algorithms are
involved.
Artificial intelligence systems, which learn things and use them as per
circumstances, are very complex systems. An important consideration
is that the 'learnings' acquired by the system must be stored and used
when required. This makes the system very complicated.
There may be a possibility of a combination of criticalities to various
extents for different products at a time. A software used in aviation can
affect human life, and also huge money, at a time. It may not be feasible to
test it in a real-life scenario. It may involve some extent of artificial
intelligence where the system is expected to learn and use those 'learnings'.
Thus, the combination increases the severity of failure further.
Technical Requirements:
Technical requirements talk about the platform, language, database,
operating system, etc. required for the application to work. Many times,
the customer may not understand the benefits of selecting a specific
technology over the other options, or the problems of using these
technically specified configurations.
Selection of technology may be done as directed by the development team
or as a fashion. The development organisation is mainly responsible for
the definition of technical requirements for the software product under
development, on the basis of product usage. (Technical requirements also
cover the type of system, whether standalone, client-server or web
application, etc., the tiers present in the system, and processing options
such as online or batch processing.) It also talks about configuration of
machines, routers, printers, operating systems, databases, etc.
Economical Requirements:
The economics of a software system depend on its technical and system
requirements. The technical as well as system requirements may be
governed by the money that the customer is ready to put into software
development, implementation and use. It is governed by cost-benefit
analysis.
These requirements are defined by development team along with the
customer. The customer must understand the benefits and problems
associated with different approaches, and select the approach on the basis
of some decision-analysis process. The consequences of any specific
selection may be a responsibility of the customer but development
organisations must share their knowledge and experience to help and
support the customer in making such a selection.
Legal Requirements:
There are many statutory and regulatory requirements for software product
usage. For any software application, there may be some rules and
regulations by government, regulatory bodies, etc. applicable to the
business. There may be some rules which keep on changing as per
decisions made by government, regulatory authorities, and statutory
authorities from time to time. There may be numerous business rules
which are defined by customers or users for doing business. Development
team must understand the rules and regulations applicable for a particular
product and business.
Operational Requirements:
Mostly operational requirements are defined by customers or users on the
basis of business needs. These may be functional as well as non-functional
requirements. They tell the development team, what the intended software
must do/must not do when used by the normal user. Operational
requirements are derived from the business requirements. This may
include non-functional requirements like security, performance, user
interface, etc.
System Requirements:
System requirements including physical/logical security requirements are
defined by a customer with the help of a development team. These include
requirements for hardware, machine configurations, types of backups,
restoration, physical access control, etc. These requirements are defined by
customer's management and it affects economies of the system. There may
be some specific security requirements such as strong password,
encryption, and privilege definitions, which are also declared by the
customer.
the techniques used to develop applications by accommodating changes
suggested by customers.
Maintainability:
Efforts required to locate and fix errors in an operational system must be
as low as possible to improve its suitability for maintenance. There may be
some possibilities of enhancements and re-engineering, where good
maintainable software has the least cost associated with such activities.
Software may need maintenance activity some time or the other to
improve its current level of working. The ability of software to facilitate
maintenance is termed maintainability. Good documentation, well-
commented code, and a requirement traceability matrix improve the
maintainability of a product.
Portability:
Efforts required in transferring software from one hardware configuration
and/or software system environment to another define the portability of a
system. It may be essential to install the same software in different
environments and configurations as per business needs. If the efforts
required are less, then the system may be considered highly portable. On
the other hand, if the system cannot be put in a different environment, it
may limit the market.
Coupling:
Coupling talks about the efforts required in interconnecting components
within an application, and interconnecting the system as a whole with all
other applications in a production environment. Software may have to
communicate with the operating system, database, and other applications
when a common user is working with it. Good coupling ensures better use
of the environment and good communication, while bad coupling creates
limitations on software usage.
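To make the contrast concrete, here is a minimal Python sketch of a tightly coupled component versus a loosely coupled one. All class names are invented for illustration; this is not a prescribed design from the text.

```python
# Illustrative sketch of tight vs. loose coupling; all names are hypothetical.

class PostgresStore:
    """A stand-in for a real database client (no actual database is used)."""
    def save(self, record):
        return f"saved {record} to postgres"

class ReportTightlyCoupled:
    """Tightly coupled: constructs its own PostgresStore internally,
    so it cannot work with any other storage backend."""
    def generate(self, data):
        store = PostgresStore()
        return store.save(data)

class ReportLooselyCoupled:
    """Loosely coupled: any object with a save() method can be injected,
    so the report works with databases, files, or test doubles."""
    def __init__(self, store):
        self.store = store
    def generate(self, data):
        return self.store.save(data)

class InMemoryStore:
    """A test double that satisfies the same save() interface."""
    def __init__(self):
        self.records = []
    def save(self, record):
        self.records.append(record)
        return f"saved {record} in memory"

report = ReportLooselyCoupled(InMemoryStore())
print(report.generate("q1-sales"))  # works without any real database
```

The loosely coupled version is easier to test and to move between environments, which is exactly the benefit the paragraph above attributes to good coupling.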
Performance:
The amount of resources required to perform stated functions defines the
performance of a system. For better and faster performance requirements,
more system resources and/or an optimised design may be required.
Better performance improves customer satisfaction through better
experience for users. The performance attribute may cover performance,
stress and volume. Details about these will be discussed in later chapters of
this book.
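As a rough illustration of checking a performance requirement, the sketch below times a single operation with Python's standard `time.perf_counter`. The `measure` helper and the one-second threshold are invented examples, not part of any standard tool.

```python
import time

def measure(fn, *args):
    """Return (result, elapsed_seconds) for one call of fn(*args);
    a simplified sketch of checking a stated performance requirement."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

result, elapsed = measure(sum, range(1_000_000))
assert result == 499999500000
# A hypothetical service level: the operation must finish within 1 second.
assert elapsed < 1.0
```

Real performance, stress and volume testing would repeat such measurements under controlled load; this only shows the basic measure-and-compare idea.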
Ease of Operations:
Effort required in integrating the system into the operating environment
may define ease of operations. Ease of operations also talks about the help
available to a user for doing any operation on the system (for example,
online help, user manuals, operations manual). 'Ease of operations' is
different from 'ease of use', where the former considers user experience
while using the system, while the latter considers how fast working
knowledge of the system can be acquired.
Reliability:
Reliability means that the system will perform its intended functions
correctly over an extended time. Consistent results are produced again and
again, and data losses are as low as possible in a reliable system.
Reliability testing may be a base of 'Build Verification Testing (BVT)',
where the test manager tries to analyse whether the system generates
consistent results again and again.
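The consistency idea behind such testing can be sketched as a simple repeated-run check. The `is_consistent` helper below is a hypothetical illustration of the principle, not a standard BVT tool.

```python
def is_consistent(fn, inputs, runs=5):
    """Check that fn produces identical results for the same inputs
    across repeated runs; a simplified sketch of the consistency idea
    behind build verification testing."""
    baseline = [fn(x) for x in inputs]
    return all([fn(x) for x in inputs] == baseline for _ in range(runs - 1))

# A deterministic function passes the repeated-run check.
assert is_consistent(lambda x: x * x, [1, 2, 3])
```

Real reliability testing also runs the system over extended periods and under realistic data volumes; repeatability of results is only one facet.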
Authorisation:
The data is processed in accordance with the intents of the user
management. The authorisation may be required for highly secured
processes which deal with huge sum of money or which have classified
information. Applications dealing with classified information may need
authorisation as the sensitivity of information is very important.
File Integrity:
Integrity means that data will remain unaltered in the system and whatever
goes inside the system will be reproduced back in the same way.
Accepting data in correct format, storing it in the same way, processing it
in a way so that data does not get altered and reproducing it again and
again are covered under file integrity. Data communication within and
from one system to another may also be considered under the scope of file
integrity.
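One common way to detect alteration of stored or transmitted data is to compare checksums of the data before and after. The sketch below uses Python's standard `hashlib`; the invoice strings are invented example data.

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest of the data; any alteration changes the digest."""
    return hashlib.sha256(data).hexdigest()

original = b"invoice #1001: amount=250.00"
stored = original  # written to storage and read back unchanged
assert checksum(stored) == checksum(original)   # integrity preserved

tampered = b"invoice #1001: amount=999.00"
assert checksum(tampered) != checksum(original)  # alteration detected
```

The same compare-digests idea applies to data communicated between systems, which the text also places under the scope of file integrity.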
Audit Trail:
Audit trail talks about the capability of software to substantiate
the processing that has occurred. Retention of evidential information about
the activities done by users for further reference may be maintained.
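A minimal sketch of such evidential retention is an append-only log of user actions. The `AuditTrail` class below is an invented illustration; a real audit subsystem would add persistence and tamper protection.

```python
from datetime import datetime, timezone

class AuditTrail:
    """Append-only record of user activity; a minimal sketch only
    (no persistence, no tamper protection)."""
    def __init__(self):
        self._entries = []

    def record(self, user, action):
        # Each entry retains who did what, and when (UTC timestamp).
        self._entries.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "action": action,
        })

    def entries_for(self, user):
        """Retrieve the evidential entries for one user."""
        return [e for e in self._entries if e["user"] == user]

trail = AuditTrail()
trail.record("alice", "approved payment #42")
trail.record("bob", "viewed report")
assert len(trail.entries_for("alice")) == 1
```

Querying by user, as `entries_for` does, is what lets the software substantiate, after the fact, which activities a given user performed.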
Continuity of Processing:
Availability of necessary procedures, methods and backup information to
recover operations, system, data, etc. when integrity of the system is lost
due to problems in the system or the environment define continuity of
processing. Timeliness of recovery operations must be defined by the
customer and implemented by developing organisations.
Service Levels:
Service levels mean that the desired results must be available within the
time frame acceptable to the user, accuracy of the information must be
reliable, and processing completeness must be achieved. For some
applications in business, service level definition may be a legal
requirement. Service level may have direct relationship with performance.
Access Control:
The application system resources will be protected against accidental or
intentional modification, destruction, misuse or disclosure by authorised as
well as unauthorised people. Access control generally talks about logical
access control as well as physical access control for information, assets,
etc.
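A simple sketch of logical access control is a deny-by-default role and permission table. The roles and actions below are hypothetical examples, not a prescribed scheme.

```python
# A minimal role-based access control sketch; roles and the permission
# table are invented examples.

PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions get no access."""
    return action in PERMISSIONS.get(role, set())

assert is_allowed("editor", "write")
assert not is_allowed("viewer", "delete")
assert not is_allowed("guest", "read")  # unknown role is denied
```

Denying by default is the safer design choice here: a misconfigured or missing role results in no access rather than accidental disclosure.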
Correction:
Correction talks about the condition where defects found in the product or
service are immediately sorted and fixed. This is a natural phenomenon
which occurs when a tester reports any problem found during testing.
Many organisations stop at fixing the defect, though it may be defined as
corrective action by them. Responsibility of finding and fixing defects
may be given to a line function. This is mainly a quality control approach.
Corrective Actions:
Every defect needs an analysis to find the root causes for introduction of
the defect in the system. The situation where root cause analysis of the
defects is done and actions are initiated to remove the root causes, so that
the same defect does not recur in future, is termed corrective action.
Corrective action identification and implementation is a responsibility of
the operations management group. Generally, project leads are given the
responsibility of initiating corrective actions. This is a quality assurance
approach where process-related problems are found and resolved to avoid
recurrence of similar problems again and again.
Preventive Actions:
On the basis of root causes of the problems, other potential weak areas are
identified. Preventive action means that there are potential weak areas
where defect has not been found till that point, but there exists a
probability of finding the defect. In this situation, similar scenarios are
checked and actions are initiated so that other potential defects can be
eliminated before they occur. Generally, identification and initiation of
preventive actions are a responsibility of senior management. Project
managers are responsible for initiating preventive actions for the projects.
This is a quality management approach where an organisation takes
preventive action so that there is no defect in the first place.
Quality management is a set of planned and systematic activities which
ensures that the software processes and products conform to requirements,
standards and processes defined by management, customer, and regulatory
authorities. The output of the process must match the expectations of the
users.
Customers may not be aware of all requirements, and ideas develop
as the product is used. Prototyping is used for clarifying requirements
to overcome this problem to some extent.
2.14.1 Vision:
The vision of an organisation is established by the policy management.
Vision defines in brief what the organisation wishes to achieve in
the given time horizon. 'To become a billion-dollar company within 3
years' can be a vision for some organisations. Every organisation must
have a vision statement, clearly defining the ultimate aim it wishes to
achieve with respect to a time span.
2.14.2 Mission:
In an organisation, there are several initiatives defined as missions which
will eventually help the organisation realise its vision. Success of all these
missions is essential for achieving the organisation's vision. The missions
are expected to support each other to achieve the overall vision put by
management. Missions may have different lifespans and completion dates.
2.14.3 Policy:
Policy statement talks about a way of doing business as defined by senior
management. This statement helps employees, suppliers and customers to
understand the thinking and intent of management. There may be several
policies in an organisation which define a way of achieving missions.
Examples of policies may be security policy, quality policy, and human
resource development policy.
2.14.4 Objectives:
Objectives define quantitatively what is meant by a successful mission. It
defines an expectation from each mission and can be used to measure the
success/failure of it. The objectives must be expressed in numerals along
with the time period defined for achieving them. Every mission must have
a minimum of one objective.
2.14.5 Strategy:
Strategy defines the way of achieving a particular mission. It talks about
the actions required to realize the mission and way of doing things. Policy
is converted into actions through strategy. Strategy must have a time frame
and objectives along with goals associated with it. There may be an action
owner to lead the strategy.
2.14.6 Goals:
Goals define the milestones to be achieved to make the mission successful.
While a mission is declared successful or failed at the end of the defined
time frame, in terms of whether the objectives are achieved or not, one
needs milestone reviews along the way to understand whether the progress
is in the proper direction. Goals provide these milestone definitions.
2.14.7 Values:
Values can be defined as the principles, or way of doing business, as
perceived by the management. 'Treating customers with courtesy' can be a
value for an organisation. The manner in which the organisation and
management think and behave is governed by the values it believes in.
2.15.1 1st Tier - Quality Policy:
Quality policy sets the wish, intent and direction by the management about
how activities will be conducted by the organisation. Since management is
the strongest driving force in an organisation, its intents are most
important. It is a basic framework on which the quality temple rests.
Table 2.3 Difference Between Guidelines and Standards
Guidelines:
Guidelines are suggested ways of doing things. They are made by experts
in individual fields.
Guidelines may be overruled and there is no issue if somebody does not
follow them.
Guidelines may or may not be written. Generally, it is recommended that
one writes the guidelines to capture tacit knowledge.

Standards:
Standards are mandatory ways of doing things. These are also described
by experts in respective fields.
Overruling of standards is a punishable offence. It may lead to
nonconformance during reviews and audits.
Standards must be written to avoid any misunderstanding or loss of
communication.
Guidelines and standards may need revisions from time to time. Revision
must be done to maintain suitability over a time period.
Project plan must define all aspects of quality plan at project level, and
may have a relation with the organisation's quality planning. The quality
objectives of the project may be inherited from organisation level
objectives or may be defined separately for the project. Project level
objectives must be in sync with organisation level objectives.
Validation:
Validation ensures that the right product has been built. It involves testing
a software product against requirement specifications, design
specifications, and customer needs. The method followed for development
may or may not be correct or capable, but the final output should be as per
customer requirements.
2.17.11 Software Quality Audits:
2.17.13 Information Security Management:
Information is one of the biggest assets of the organisation. An
organisation must protect the information assets available in various
databases and all information that has been developed, captured, and used.
It must be able to use that information for its continuous/continual
improvement. Tacit knowledge is very important from the point of view of
information security. Information security is associated with three
buzzwords, viz. confidentiality, integrity and availability.
Detect and Remove Defects as Early as Possible, Prevention is Better
Than Cure:
Any change in work product must go through the stages of draft, review,
approve and baseline. Version control and labelling is used effectively
during development process to identify work products. Many tools are
available for managing change, though it can also be done manually.
Configuration management is very important to give the right product to
the customer.
An organisation must define simple metrics at the start which are useful
for planning improvements and tracking them. The main purpose of
metrics is to define the improvement actions needed and to measure how
much has been achieved.
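A simple example of such a metric is defect density, i.e., defects found per thousand lines of code (KLOC). The release figures below are hypothetical, used only to show how the metric tracks improvement between releases.

```python
def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC); one of the simplest
    quality metrics an organisation can track over time."""
    if size_kloc <= 0:
        raise ValueError("size must be positive")
    return defects_found / size_kloc

# Hypothetical figures for two releases of the same product:
release_1 = defect_density(defects_found=45, size_kloc=30.0)  # 1.5
release_2 = defect_density(defects_found=24, size_kloc=32.0)  # 0.75
assert release_2 < release_1  # the trend shows improvement
```

Comparing the metric across releases is what turns a raw defect count into a basis for planning and tracking improvement actions.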
2.18 SUMMARY
This chapter is based upon the foundation of the earlier chapter where we
have seen quality perspectives. In this chapter, we have studied different
constraints faced while building and testing a software product. It tries to
link the relationship between quality improvement and productivity
improvement. We have seen the cultural difference between quality
conscious organisations ‘Q’ and less quality conscious organisations ‘q’.
2.19 EXERCISES
1) What are the constraints of software requirement specifications?
2.20 REFERENCES
https://www.geeksforgeeks.org/software-engineering-software-quality-assurance
https://www.simplilearn.com/software-quality-assurance-article
https://www.javatpoint.com/software-quality-assurance
*****
UNIT - II
3
FUNDAMENTALS OF SOFTWARE
TESTING
Unit Structure
3.0 Objectives
3.1 Introduction
3.2 Necessity of testing
3.3 What is testing?
3.4 Fundamental Test process
3.5 Psychology of testing
3.6 Historical Perspective of Testing
3.6.1 Debugging-Oriented Testing
3.6.2 Demonstration-Oriented Testing
3.6.3 Destruction-Oriented Testing
3.6.4 Evaluation-Oriented Testing
3.6.5 Prevention-Oriented Testing
3.7 Definition of Testing
3.7.1 Why Testing is Necessary?
3.8 Approaches to Testing
3.8.1 Big Bang Approach to Testing
3.8.2 Total Quality Management Approach
3.8.3 Total Quality Management as Big Bang Approach
3.8.4 TQM in Cost Perspective
3.8.5 Characteristics of Big Bang Approach
3.9 Popular Definitions of Testing
3.9.1 Traditional Definition of Testing
3.9.2 What is testing?
3.9.3 Manager's View of Software Testing
3.9.4 Tester's View of Software Testing
3.9.5 Customer's View of Software Testing
3.9.6 Objectives of Testing
3.9.7 Basic Principles of Software Testing
3.9.8 Successful Testers
3.9.9 Successful Test case
3.10 Testing During Development Life Cycle
3.11 Requirement Traceability Matrix
3.11.1 Advantages of Requirement Traceability Matrix
3.11.2 Problems with Requirement Traceability Matrix
3.11.3 Horizontal Traceability
3.0 OBJECTIVES
After studying this chapter the learner would be able to:
3.1 INTRODUCTION
Software testing is the process of examining the correctness of the
software product by considering all its attributes like reliability,
scalability, portability, re-usability and usability. It also evaluates the
execution of software components to find the bugs or defects. Software
development activities during a life cycle have corresponding verification and validation activities at each stage of development.
Software verification involves comparing a work product with processes, standards, and guidelines; in simple words, it checks "are we building the software correctly?"
Software validation activities are associated with checking the outcome of the developed product, and the processes used, against standards and the expectations of a customer; in other words, validation checks "are we building the correct system?" It is considered a subset of software quality assurance activities, though there is a significant difference between quality assurance and quality control. The cost of software verification and validation comes under appraisal cost when it is done for the first time. When repeat verification/validation (such as retesting or regression testing) is done, it is counted as cost of failure.
There are many stages of software testing as per software development life
cycle. It begins with feasibility testing at the start of the project, followed
by contract testing and requirements testing, then goes through design
testing and coding testing till final acceptance testing, which is performed
by customer/user.
The main aim of most testing methods is to systematically and actively locate the defects/errors in the program so that they can be repaired. Debugging is usually the next stage after testing; it begins with some indication of the existence of an error.
involved; and at the system level when all components are combined together. At early or late stages, a product or service may also be tested for usability.
finding errors and reporting them can be an achievement for him but is merely an inconvenience for programmers, designers, developers and other stakeholders of the project. So, it is important for testers to report defects and failures objectively and politely, so as to avoid upsetting and hurting other members of the project.
This approach is based on the assumption that no amount of testing can show that a software product is defect free. If no defect is found during testing, it only shows that the scenarios and test cases used for testing did not discover any defect. From the user's point of view, it is not sufficient to demonstrate that software is doing what it is supposed to do. This is already done by system architects in system architecture design and testing, and by developers in code reviews and unit testing. Testers are involved mainly to ensure that the system is not doing what it is not supposed to do. Their work includes assurance that the system will not be exposed to any major risks of failure when a normal user is using it. Some people call this the negative approach to testing. This negative approach is built upon a few assumptions and risks for the software being developed and tested. These assumptions and risks must be documented in the test plan while deciding the test strategy or test approach.
In the big bang approach, software is tested before delivery using the executable or final product. It may not be able to detect all defects, as all permutations and combinations cannot be tested in system testing due to various constraints like time. In such testing, one may find a cascading effect or a camouflage effect, and all defects may not be detected. It may discover the failures but cannot find the underlying problems effectively. Sometimes, defects found may not be fixed correctly, as analysis and defect fixing can be a problem.
Stage 1:
Even if some defects are produced during any stage of development in such a quality environment, an organisation may have very good verification processes which detect the defects at the earliest possible stage and prevent defect percolation. There will be a small cost of verification and fixing, but stage contamination can be saved, which helps in finding and fixing the defects very fast. This is an appraisal cost
represented by '10'. This is the cost which reduces the profitability of the organisation due to scrap, rework and reverification.
Stage 2:
If some defects escape the verification process, there are still capable validation processes for filtering the defects before the product goes to a customer.
The cost of validation and subsequent defect fixing is much higher than that of verification. This cost is represented by '100'. One may have to go back to the stage where the defect was introduced in the product and correct all the stages from that point onward till the defect-detection point. The cost is much higher, but since the defect has not yet reached the customer, it may not affect the customer's feelings or goodwill.
Stage 3:
At the bottom of the pyramid, there is the highest cost associated with the defects found by the customer during production or acceptance testing. This is represented by '1000', showing that the cost paid for fixing such a defect is huge. There may be customer complaints, selling under concession, sending people onsite for fixing defects in front of the customer, loss of goodwill, etc. This may result in premature closure of the relationship and bad advertisement by the customer.
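The cost relationship across the three stages can be put into simple arithmetic. The defect counts below are invented purely for illustration; only the 10/100/1000 relative cost units come from the pyramid described above.

```python
# Relative cost units per defect, as per the pyramid described in the text:
# caught in verification = 10, caught in validation = 100, found by customer = 1000.
COST = {"verification": 10, "validation": 100, "customer": 1000}

# Hypothetical distribution of 100 defects across the detection stages.
defects = {"verification": 70, "validation": 25, "customer": 5}

# Total relative cost: the 5 customer-found defects dominate the bill.
total = sum(COST[stage] * count for stage, count in defects.items())
print(total)  # 70*10 + 25*100 + 5*1000 = 8200 cost units
```

Even with only 5% of defects escaping to the customer, they account for well over half of the total cost in this sketch, which is the point the pyramid makes.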
The major part of a software build never gets tested, as coverage cannot be guaranteed in random testing. Software is generally tested by ad hoc methods and the intuition of testers. Generally, positive testing is done to prove that software is correct, which represents the second level of maturity.
Organisations following the big bang approach are less mature and pay a huge cost of failure because of several retesting and regression-testing cycles along with defect-fixing cycles. Success is completely dependent on good development, the testing team and the type of customer. The following can be observed.
Validation in terms of system testing can find about 10% of the total number of defects. System testing must be intended to validate system-level requirements along with some aspects of design. Some people term system testing as certification testing, while others term acceptance testing as certification testing. If the exit criteria of system testing (acceptance criteria by the customer) are met, the system may be released to the customer.
The remaining defects (about 5-10%) go to the customer, unless the organisation makes some deliberate efforts to prevent them from leaking to the production phase. The big bang approach may not be useful in preventing defects from going to the customer, as it can find only about 5% of the total defects present in the product. In other words, theoretically, to achieve the effectiveness of life-cycle testing, one may need about 18 cycles of system testing. This can prove to be a costly affair.
Testing is done to establish confidence that the program does what it is supposed to do. It generally covers the functionalities and features expected in the software under testing. It covers only positive testing.
Establish that the software performs its functions correctly and is fit
for use
An activity of identification of the differences between expected results and actual results produced during execution of a software application. A difference between these two results suggests that there is a possibility of a defect in the process and/or work product. One must note that this may or may not be a defect.
We have previously discussed different stakeholders and their interests in software development and testing. Let us try to analyse the expectations or views of different stakeholders about testing.
The product must be safe and reliable during use, and must work under normal as well as adverse conditions when it is actually used by the intended users.
The product must exactly meet the user's requirements. These may
include implied as well as defined requirements.
Testing must be able to find all possible defects in the software, along
with related documentation so that these defects can be removed.
Customer must be given a product which does not have defects (or
has minimum defects).
Testing must give a confidence that software users are protected from
any unreasonable failure of a product. Mean time between failures
must be very large so that failures will not occur, or will occur very
rarely.
Find a scenario where the product does things it is not supposed to do.
This includes risk.
The first part refers to specifications which were not satisfied by the
product while the second part refers to unwanted side effects while using
the product.
Define the expected output or result for each test case executed, to determine whether the expected and actual outputs match. Mismatches may indicate possible defects. Defects may be in the product, the test cases or the test plan.
Inspect the results of each test completely and carefully. This helps in root cause analysis and can be used to find weak processes, which in turn helps in building processes correctly and improving their capability.
Do not plan tests assuming that no errors will be found. There must be a targeted number of defects for testing, and the testing process must be capable of finding that targeted number of defects.
The probability of locating more errors in any one module is directly proportional to the number of errors already found in that module.
Testers must ensure that user risks are identified before deploying the
software in production.
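The principle of defining expected output for each test case, listed above, can be sketched as a minimal assertion-style check. The function, the threshold and the values here are all hypothetical:

```python
# Hypothetical function under test: applies a 10% discount to orders above 100.
def apply_discount(amount):
    if amount > 100:
        return amount * 0.9
    return amount

# Each test case pairs an input with an expected output defined BEFORE execution.
test_cases = [
    (50, 50),       # below threshold: no discount expected
    (200, 180.0),   # above threshold: 10% discount expected
]

for given, expected in test_cases:
    actual = apply_discount(given)
    # A mismatch indicates a possible defect in the product, the test case or the plan.
    assert actual == expected, f"mismatch: input={given} expected={expected} actual={actual}"
print("all expected and actual outputs match")
```

The key point is that the expected value is written down in advance, so a mismatch is detected mechanically rather than judged after the fact.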
Requirement Testing:
Requirement testing involves a mock run of the future application using the requirement statements, to ensure that the requirements meet their acceptance criteria. This type of testing is used to evaluate whether all requirements are covered in the requirement statement or not.
This type of testing is similar to building use cases from the requirement
statement. If the use case can be built without making any assumption
about the requirements, by referring to the requirements defined and
documented in requirement specification documents, they are considered
to be good. The gaps in the requirements may generate queries or
assumptions, which may possibly lead to risks that the application may not perform correctly. Gaps may also indicate an implied requirement, where the customer may be contacted to get insight into the business processes. It is the responsibility of the business analyst to convert (as many as possible) implied requirements into expressed requirements. The target is 100%, though it is difficult to achieve.
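The gap analysis described above, checking whether every documented requirement is exercised by at least one test case, can be sketched with a hypothetical traceability mapping (all the requirement and test-case identifiers are invented):

```python
# Hypothetical set of documented requirements.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Hypothetical test cases, each mapped to the requirements it exercises.
test_cases = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-2", "REQ-3"},
}

# Requirements touched by at least one test case.
covered = set().union(*test_cases.values())

# Uncovered requirements: gaps that may generate queries, assumptions or risks.
gaps = requirements - covered
print(sorted(gaps))  # ['REQ-4']
```

Each gap found this way is a candidate for a query to the customer or a new test case, moving an implied requirement towards an expressed one.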
Requirement testing differs from verification of requirements. Verification is about reviewing the statement containing the requirements against some standards and guidelines, while requirement testing is about dummy execution of the requirements to find the consistency between them; i.e., achieving the expected results must be possible from the requirements without any assumption. Verification of requirements may address the compliance of an output with defined standards or guidelines.
The characteristics of requirements verification or review may include the
following.
Design Testing:
Design testing involves testing of high-level design (system architecture)
as well as low-level design (detail design). High-level design testing
covers mock running of future application with other prerequisites, as if it
is being executed by the targeted user in production environment. This
testing is similar to developing flow diagrams from the designs, where
flow of information is tracked from start to finish. When the flow is complete, the design may be considered good. Wherever the flow is not defined, or it is not clear where it will lead, there are defects in the design which must be corrected. For low-level design, system requirements and technical requirements are mapped with the entities created in the design to ensure adequacy of the detail design.
Design verification talks about reviewing the design, generally by the
experts who may be termed as subject-matter experts. It involves usage of
standards, templates, and guidelines defined for creating these designs.
Design verification ensures that designs meet their exit criteria.
Code Testing:
Code files, tables, stored procedures, etc. are written by developers as per guidelines, standards, and detail design specifications. In reality, developers do not implement requirements directly; they implement the detail design as delivered by the designer. Code testing (unit testing) is done by using stubs/drivers as required. Code review is done to ensure that the code files written are,
Optimised to ensure better working of software. Reusability creates a lighter system.
Test scenarios and test cases must be prioritised so that in case of less
time availability, the major part of the system (where priority is
higher) is tested.
A customer may not find value in it and may not pay for it.
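The stubs and drivers mentioned under code testing can be sketched as follows; the unit under test, the stub behaviour and the names are all hypothetical:

```python
# Unit under test: depends on an inventory service that is not built yet.
def can_ship(order_qty, inventory_service):
    # Returns True when the warehouse holds enough stock for the order.
    return inventory_service.stock_level() >= order_qty

# Stub: a stand-in for the unbuilt dependency, returning a fixed value.
class InventoryStub:
    def stock_level(self):
        return 10

# Driver: the harness that exercises the unit with the stub in place
# of the real service.
def driver():
    stub = InventoryStub()
    assert can_ship(5, stub)        # enough stock: should be shippable
    assert not can_ship(11, stub)   # insufficient stock: should be rejected
    return "unit test passed"

print(driver())
```

The stub isolates the unit so its logic can be tested before the real dependency exists; the driver supplies inputs and checks outputs, as a unit-test framework would.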
3.12 ESSENTIALS OF SOFTWARE TESTING
Software testing is a disciplined approach. It executes software work products and finds defects in them. The intention of software testing is to find all possible failures, so that eventually these are eliminated and a good product is given to the customer. It intends to find all possible defects and/or identify risks which the final user may face in real life while using the software. It works on the principle that no software is defect free, but less risky software is better and more acceptable to users. The tester's job is to find defects so that they will eventually be fixed by developers before the product goes to a customer. Completion of testing must yield a number of defects which can be analysed to find the weaker areas in the process of software development. No amount of testing can show that a product is defect free, as nobody can test all permutations and combinations possible in the given software. Software testing is also viewed as an exercise of doing a SWOT analysis of a software product, where we can build the software on the basis of the strengths of the development and testing processes, and overcome weaknesses in the processes to the maximum extent possible.
Strengths:
Some areas of software are very strong, and no (or very few) defects are found during testing of such areas. These may be areas in terms of some modules, screens and algorithms, or processes like requirement definition, design, coding and testing. This represents strong processes present in these areas supporting the development of a good product. We can always rely on these processes and try to deploy them in other areas.
Weakness:
The areas of software where requirement compliance is on the verge of failure may represent weak areas. It may not be a failure at that moment, but it may be on the boundary condition of compliance, and if something goes wrong in the production environment, it will result in a defect or failure of the software product. The processes in these areas represent some problems. An organisation needs to analyse such processes and define the root causes of the problems leading to these possible failures. They may be attributed to some aspects of the organisation such as training, communication, etc.
Opportunity:
Some areas of the software satisfy the requirements as defined by the customer, or the implied requirements, but with enough space available for improving them further. This improvement can lead to customer delight (it must not surprise the customer). These improvements represent the ability of the developing organisation to help the customer and gain competitive advantage. It indicates the capability of the developing organisation to provide expert advice and help to the customer for doing something better.
Threats:
Threats are the problems or defects in the software which result in failures. They represent problems associated with some processes in the organisation, such as requirement clarity, knowledge base and expertise. An organisation must invest in making these processes stronger. Threats clearly indicate the failure of an application and eventually may lead to customer dissatisfaction.
3.13 WORKBENCH
Workbench is a term derived from the engineering set-up of mass
production. Every workbench has a distinct identity as it takes part in the
entire development life cycle. It receives something as an input from
previous workbench, and gives output to the next workbench. This can be
viewed as a huge conveyor belt where people are working their part while
the belt is moving forward. The complete production and testing process is
defined as set of interrelated activities where input of one is obtained from
the output of previous activity, and output of that activity acts as an input
to the next. Each activity represents a workbench. A workbench comprises some procedures defined for doing work, and some procedures defined to check the outcome of the work done. The work may be anything during
software development life cycle such as collecting the requirements,
making designs, coding, testing, etc. Organisational process database
refers to the methods, procedures, processes, standards, and guidelines to
be followed for doing work in the workbench as well as for checking
whether the processes are effective and capable of satisfying what
customer is looking for in the outcome. There are standards and tools available for doing the work and checking the work in the workbench.
While checking/testing a work product in a workbench, if one finds deviations between the expected result and the actual result, it may be considered a defect, and the work product and the process used for development need to be reworked.
Do Process:
The software undergoes testing as per the defined test cases and test procedures. This may be guided by the organisational process database defining the testing process. The 'do process' must guide the tester while doing the work.
Check Process:
Evaluation of the testing process, comparing achievements against the defined test objectives, is done by 'check processes'. A check process helps in finding whether the 'do processes' have worked correctly or not.
Output:
Output must be available as required, in the form of a test report and test log from the test process. The output of the tester's workbench needs to have an exit criteria definition.
Remark:
If 'check processes' find that 'do processes' are not able to achieve the objectives defined for them, the route of rework must be followed. This is rework of the 'do process' and not of the work product under testing. This ensures that all incapable processes are captured so that they can be taken up for improvement.
There may be two more criteria in the workbench, viz. suspension criteria for the workbench and resumption criteria for the workbench, guided by organisational policies and standards. Suspension criteria define when the testing process needs to be suspended or halted, whereas resumption criteria define when it can be restarted after such a halt or suspension. If there are some major problems in the inputs or in the standards and tools required by the workbench, 'do/check' processes may be suspended. When such problems are resolved, the processes may be restarted.
Testing process may be defined as a process used to verify and validate
that the system structurally and functionally behaves correctly as defined
by expected result. Components of testing process may include the
following.
Produce output (test results and test log) which may act as an input to the next workbench
The check process must work to ensure that results meet specifications and standards, and also that the test process is followed correctly. If no problem is found in the test process, one can release the output in terms of test results. If problems are found in the test process, it may need rework.
Fig 3.2 shows a schematic diagram of a workbench.
3.14 IMPORTANT FEATURES OF TESTING PROCESS
Testing is characterised by some special features, as given below.
Testers Can Test Quality of Product at the End of Development Process:
This is a typical approach where system testing or acceptance testing is considered as qualification testing for the software. A few test cases out of an infinite set of possibilities are used for certifying whether the software application works or not. The customer may be dissatisfied, as the application does not perform well as per his expectations. Sometimes, defects remain hidden for the entire life cycle of the software without anybody knowing that there was a defect.
Thoroughly Inspect Results of Each Test to Find Potential Improvements:
Test results show possibilities of weaker areas in the work product and the problems associated with the processes used for developing a work product. The defects found in the test log do not form an exhaustive list of all problems with the application, but indicate the areas where the development team and management must perform a root cause analysis. Corrective actions are to be planned and executed to prevent any possible recurrence of a similar defect and make the software better. Defects indicate process failures.
Initiate Actions for Correction, Corrective Action and Preventive
Actions:
Defect identification, fixing, and initiation of actions to prevent further problems are the natural ways of making better products and improving processes. Corrections of defects are done by the developers, but one must ensure that corrective and preventive actions are initiated for making better products again and again.
Establishing that a program does what it is supposed to do is not even half of the battle, and is the easier part compared with establishing that the program does not do what it is not supposed to do. The latter is negative testing, driven by risk assessment for the final users. Roughly, testing may involve 5% positive testing and 95% negative testing.
Design Objectives:
Design objectives state why a particular approach has been selected for building the software. The selection process indicates the reasons and the criteria framework used for development and testing. An application's functional requirements, user interface requirements and performance requirements may be defined in an approach document.
User Interfaces:
User interfaces are the ways in which the user interacts with the system.
This includes screens and other ways of communication with the system
as well as displays and reports generated by the system. User interfaces
should be simple, so that the user can understand what he is supposed to
do and what the system is doing. Users' ability to interact with the system,
receive error messages, and act according to instructions given is defined
in the user interfaces.
Internal Structures:
Internal structures are mainly guided by software designs and guidelines
or standards used for designing and development. Internal structures may
be defined by development organisation or sometimes defined by
customer. It may talk about reusability, nesting, etc. to analyse the
software product as per standards or guidelines. It may include
commenting standards to be used for better maintenance. Every approach
may have some advantages/disadvantages, and one needs to weigh the
benefits and costs associated with them to get a better solution.
Execution of Code:
Testing is execution of a work product to ensure that it works as intended by the customer or user, and is protected from any probable misuse or risk of failure. Execution can only prove that the application, module and program work correctly as defined in the requirements and interpreted in the design. Negative testing shows that the application does not do anything which is detrimental to the usage of the software product.
3.19 TEST STRATEGY OR TEST APPROACH
Test strategy defines the action part of test policy. It defines the ways and
means to achieve the test policy. Generally, there is a single test policy at
organisation level for product organisations while test strategy may differ
from product to product, customer to customer and time to time. Some of
the examples of test strategy may be as follows.
Definition of coverage like requirement coverage or functional
coverage or feature coverage defined for particular product, project
and customer.
Level of testing, starting from requirements and going up to
acceptance phases of the product.
How much testing would be done manually and what can be
automated?
Ratio of developers to testers.
Successful Tester is Not One Who Appreciates Development but One Who Finds Defects in the Product:
Success in testing lies in finding a defect, not in certifying that the application or the development process is good. Successful testers can find more defects with higher probabilities of occurrence and higher severities of failure. A tester should find defects which have a probability of affecting common users, and thus contribute to a successful application.
Testing is Not a Formality to be completed at the End of Development
Cycle:
Testing is not a certifying process. It is a life-cycle activity and should not be only the last part of a development life cycle before giving the application to the customer/user. Acceptance testing and system testing are integral parts of software development, where certification of the application is done by the customer, but the complete test cycle is much more than black box testing.
Software testing includes the following.
Verification or checking whether a right process is followed or not
during development life cycle.
Validation or checking whether a right product is made or not as per
customer's need.
Some differences between verification and validation are shown in
Table 3.4.
3.21 TESTING PROCESS AND NUMBER OF DEFECTS FOUND IN TESTING
Testing is intended to find as many defects as possible. Generally, it is believed that there is a fixed number of defects in a product, and as testing finds more defects, the chances of the customer finding a defect will reduce. Actually, the scenario is the reverse: as we find more and more defects in a product, there is a higher probability of finding still more defects. This is based on the principle that every application has defects and every test team has some efficiency of finding defects. It is governed by the test team's defect-finding ability. Let us say organisational statistics show that, after considerable testing and use of an application by users, the number of defects found is three per KLOC; test planning must then intend to find three defects per KLOC for the program under testing. The number of defects found after considerable testing will always indicate the possibilities of the existing number of defects. Fig 3.3 shows a relationship between the number of defects found and the probability of finding more defects.
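The per-KLOC targeting described above reduces to simple arithmetic; the three-defects-per-KLOC baseline is the illustrative figure from the text, and the program size below is invented:

```python
# Organisational baseline: 3 defects per KLOC found after considerable
# testing and use (the illustrative figure from the text).
DEFECTS_PER_KLOC = 3

def target_defects(loc):
    """Planned number of defects to find for a program of `loc` lines of code."""
    return DEFECTS_PER_KLOC * loc / 1000

# A hypothetical 25 KLOC program: test planning should target 75 defects.
print(target_defects(25_000))  # 75.0
```

If testing closes well below the target, that is evidence the testing was insufficient, not evidence the product is defect free.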
Ideally, the test team efficiency must be 100%, but in reality it may not be possible to have test teams with an efficiency of 100%. It must be very close to 100% in order to represent a good test team. As it deviates away from 100%, the test team becomes more and more unreliable. Test team efficiency is dependent on organisation culture and may not be improved easily unless the organisation makes some deliberate efforts.
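Test team efficiency can be estimated as the share of all eventually known defects that the team found before release (this ratio is often called defect removal efficiency); the numbers below are hypothetical:

```python
def team_efficiency(found_by_team, found_after_release):
    """Percentage of all known defects that the test team caught before release."""
    total_known = found_by_team + found_after_release
    return 100.0 * found_by_team / total_known

# Hypothetical figures: the team found 90 defects; 10 escaped to the field.
print(team_efficiency(90, 10))  # 90.0 (percent)
```

The measure can only be computed in hindsight, once field defects are known, which is why it is used to calibrate test planning rather than to judge an ongoing test cycle.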
3.23.1 Reasons for Deviation of Test Team Efficiency From 100% for
Test Team As Well As Mutation Analysis:
Though desirable, it is very difficult to get a test team with 100%
efficiency of finding defects and test cases with 100% efficiency of
finding defects. Some of the reasons for deviation are listed below.
Camouflage Effect:
It may be possible that one defect may camouflage another defect, and the
tester may not be able to see that defect, or test case may not be able to
locate the hidden defect. It is called „camouflage effect‟ or „compensating
defects‟ as two defects compensate each other. Thus, defect introduced by
developer may not be seen by the tester while executing a test case.
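Compensating defects can be illustrated with a contrived example: two bugs whose effects cancel exactly, so every test case passes and neither defect is visible in isolation:

```python
# Intended behaviour: sum of the integers 1..n.
def sum_to_n(n):
    total = n                  # defect 1: the accumulator should start at 0
    for i in range(1, n):      # defect 2: the range misses n itself
        total += i
    return total

# The two defects compensate: the missed final term n is exactly the wrong
# seed value, so the result is always correct and no test case can expose
# either defect on its own.
assert sum_to_n(5) == 15
assert sum_to_n(100) == 5050
print("all test cases pass despite two defects")
```

Fixing only one of the two defects would make the function visibly wrong, which is why a later, seemingly unrelated change can suddenly "introduce" a failure that was latent all along.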
Cascading Effect:
It may be possible that, due to the existence of a certain defect, a few more defects are introduced or seen by the tester. Though there is no problem in the modification, defects are seen due to the cascading effect of one defect. Thus, defects not introduced by the developer may be seen by the tester while executing a test case.
Coverage Effect:
It is understood that nobody can test 100%, and there may be a few lines of code or a few combinations which are not tested at all due to some reasons. If a defect is introduced in such a part, which is not executed by the given set of test cases, then the tester may not be able to find the defect.
Redundant Code:
There may be parts of the code which do not get executed under any condition, as the conditions may be impossible to occur, or some other conditions may take precedence over them. If a developer introduces a defect in such parts, testers will not be able to find the defect, as that part of the code will never get executed.
3.24 SUMMARY
This chapter establishes the basics of software testing. It starts with a historical perspective of testing and then explains how testing evolved from mere debugging to a defect prevention technique. It then discusses the benefits of independent testing. The 'TQM' concept of testing, the 'Big Bang' approach of testing, and the benefits of 'TQM' testing are elucidated in detail.
3.25 EXERCISES
1) Explain the evolution of software testing from debugging to
prevention based testing.
2) Explain why independent testing is required.
3) Explain the big bang approach of software testing.
4) Explain total quality management approach of software testing.
5) Explain concept of TQM cost perspective.
6) Explain testing as a process of software certification.
7) Explain the basic principles on which testing is based.
3.26 REFERENCES
1. Software Testing and Continuous Quality Improvement by William E. Lewis, CRC Press, 3rd Edition, 2016.
2. Software Testing: Principles, Techniques and Tools by M. G. Limaye, TMH, 2017.
3. Foundations of Software Testing by Dorothy Graham, Erik van Veenendaal, Isabel Evans, and Rex Black, Cengage Learning, 3rd Edition.
4. Software Testing: A Craftsman's Approach by Paul C. Jorgenson, CRC Press, 4th Edition, 2017.
5. https://www.techtarget.com/whatis/definition/software-testing
6. https://www.softwaretestinghelp.com/types-of-software-testing/
7. https://prepinsta.com/software-engineering/big-bang-approach/
*****
4
CHALLENGES IN SOFTWARE TESTING
Unit Structure
4.0 Objectives
4.1 Challenges in Testing
4.2 Test Team Approach
4.3 Process Problems Faced by Testing
4.4 Cost Aspect of Testing
4.5 Establishing Testing Policy
4.6 Methods
4.7 Structured Approach to Testing
4.8 Categories of Defect
4.9 Defect, Error or Mistake in Software
4.10 Developing Test Strategy
4.11 Developing Testing Methodologies (Test Plan)
4.12 Testing Process
4.13 Attitude towards Testing (Common People Issues)
4.14 Test Methodologies/Approaches
4.15 People Challenges in Software Testing
4.16 Raising Management Awareness for Testing
4.17 Skills Required by Tester
4.18 Software life cycle
4.19 Software development models
4.20 Test levels
4.21 Test Types
4.22 Targets of testing
4.23 Maintenance testing
4.24 Testing Tips
4.25 Summary
4.26 Exercises
4.27 References
4.0 OBJECTIVES
After studying this chapter, the learner would be able to:
Code logic may be difficult to capture. Often, testers are not able to
understand the code due to lack of technical knowledge. On the other
hand, sometimes, testers do not have access to code files.
Badly written code introduces many defects. The code may not be readable, maintainable or optimisable, and may create problems in future. Defects may not be fixed correctly, and fixing of defects may introduce more defects, called 'regression defects'.
The test team is not under delivery pressure. They can take sufficient time to execute complete testing as per definitions of iterations, coverage, etc.
The test team is not under the pressure of 'not finding' a defect. They are considered the certifiers of a product and must be able to find every conceivable fault in the product before delivery.
Often, testers start perceiving the product from the developer's angle and their defect-finding ability reduces.
Matrix Organisation:
In a matrix organisation, an effort is made to achieve the advantages of both approaches and to get rid of the disadvantages of both. The test team may report functionally to the development manager while administratively it reports to the test manager.
Developers may not find value in performing testing. They may put
more time in developing/optimising code than executing serious
testing.
Special skills required for doing special tests may be available in such
independent teams.
'Fitness for use' can be tested in this approach, where the actual user's perspective may be obtained. Domain experts will be testing software from the user's perspective.
Domain experts may have prejudices about the domain which may
reflect in testing. Domain experts may have knowledge about a
domain but may not understand exactly what a particular customer is
looking for.
It may mean huge cost for the organisation as these experts cost much
more than normal developers/testers.
A combination of all three approaches (3.21.2, 3.21.3, 3.21.4) can be advantageous for the organisation. One has to do a cost-benefit analysis to arrive at any decision about test team formation. A highly complex user environment and complex algorithms may make 'domain experts doing testing' more effective. On the contrary, if an organisation has done many projects in a similar domain in the past, 'developers becoming testers' may be recommended, as developers may have sufficient knowledge about the subject.
In addition to a test team, there are many other agencies involved in
software testing as per phases of software development.
Customer/User:
Customers or users generally do acceptance testing to declare formal acceptance/rejection/changes in requirements for the product. The customer perspective is most important in software acceptance. Organisations creating a prototype may rely on customer approval of the prototype.
Developers:
Developers do unit testing before the units are integrated. Generally, units
require stubs and drivers for testing, and developers can create the same.
Sometimes, integration testing is also done by developers if it needs stubs
and drivers.
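The stubs and drivers mentioned above can be sketched in code. The following is a minimal illustration; every name in it is hypothetical and not taken from any particular project:

```python
# A minimal sketch of a driver and a stub for unit testing.
# All names here are hypothetical, for illustration only.

def invoice_total(amount, tax_service):
    """Unit under test: adds tax obtained from a collaborating service."""
    return amount + tax_service.tax_for(amount)

class TaxServiceStub:
    """Stub: stands in for a real tax service that is not yet available,
    returning a fixed, predictable value."""
    def tax_for(self, amount):
        return round(amount * 0.10, 2)  # fixed 10% rate for the test

# Driver: throwaway code that calls the unit and checks the result.
if __name__ == "__main__":
    total = invoice_total(100.0, TaxServiceStub())
    assert total == 110.0
```

Here the stub isolates the unit from a dependency that does not exist yet, and the driver supplies the call that a higher-level module would normally make.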
Tester:
Testers perform module, integration, and system testing as independent
testing. They may oversee acceptance testing. Tester's view of system
testing is very close to user's view while accepting software.
Senior Management/Auditors:
Senior management, or auditors appointed by senior management such as Software Quality Assurance (SQA), perform pre-delivery audits, smoke testing, and sample testing to ensure that the proper product is delivered to the customer.
scope' type of defects. The basic constituents of processes are people, material, machines and methods.
People:
Many people are involved in software development and testing, such as
customer/user specifying requirements; business analysts/system analysts
documenting requirements; test managers or test leads defining test plans
and test artifacts; and testers defining test scenarios, test cases, and test
data. There is a possibility that in a few instances some personal attributes and capabilities may create problems in development and testing. Proper skill sets, such as domain knowledge and knowledge about the development and testing process, may not be available.
Material:
Testers need requirement documents, development standards and test
standards, guidelines, and other material which add to their knowledge
about a prospective system. These documents may not be available, or
may not be clear and complete. Similarly, other documents which act as a
framework for testing such as test plans, project plan, and organisational
process documents may be faulty. Test tools and defect tracking tools may
not be available. All of these may be responsible for introducing defects in
the product.
Machines:
Testers try to build real-life scenarios using various machines, simulators
and environmental factors. These may include computers, hardware,
software, and printers. The scenarios may or may not represent real-life
conditions. There may be problems induced due to wrong environmental
configurations, usage of wrong tool, etc.
Methods:
Methods for doing test planning, risk analysis, defining test scenarios, test
cases, and test data may not be proper. These methods undergo revisions
and updating as the organisation matures.
Economics of Testing:
As one progresses in testing, more and more defects are uncovered, and the probability of the customer facing a problem reduces, while the cost of testing goes up. The cost of customer dissatisfaction is inversely proportional to testing efforts: more investment in testing efforts reduces the cost of customer unhappiness. On the other hand, the cost of testing increases exponentially. If the first ten defects are found in one hour, finding the next ten defects may need two hours, and so on. If we plot both curves, at some point the two curves intersect each other. This point is the optimum testing point. The area before this point represents under-testing, where a defective product goes to the customer and the customer dissatisfaction cost is higher than the cost of testing, while the area after this point represents over-testing, where the cost of customer dissatisfaction is less than the cost of testing.
If test team efficiency and defect fixing efficiency are 100%, and the defect introduction index is zero, we need only two iterations of testing. As these move away from the ideal values, the number of iterations increases exponentially.
It is a very rare scenario that a project will have 100 people working for all 10 months of the development project. Generally, development projects never have the same number of resources throughout the life cycle. Initially, fewer people may be needed, and as one passes through different phases of development, the number of resources required increases exponentially. Once the peak activities are over, the number of resources required goes down.
The same cycle is followed by testing resource requirements. As the testing phases progress, the number of resources required increases exponentially. Once the peak activities are over, the number of test resources goes down. Thus, for a development project, costing is always dynamic.
In case of maintenance or production support type of work, the number of resources remains fairly constant over a long time horizon. Fig 4.4 indicates the resources required for a development project.
Fig 4.4: Number of resources for a development project
Cost of development/manufacturing includes the cost spent in the
following.
Cost spent in writing code, integrating and creating the final product
development has more testing costs as there is a change in requirements, design, coding, etc.
Cost spent in writing test scenarios, test cases, creating guidelines for
verification and validation, and creating checklists for doing
verification and validation activities. Organisation baselines must
define the time and effort required for this activity.
Time and effort required to conduct the given number of test cases may be derived from the productivity numbers, and the number of test cases required can be derived from the size of an application.
Wastage as Wrong Specifications, Designs, Codes and Documents Must Be Replaced by Correct Specifications, Designs, Codes and Documents:
The cost of fixing defects may be very high in the later part of testing, as there are more phases between the defect introduction phase and the defect detection phase, and a defect may percolate through the development phases. Correcting the specifications, designs, codes and documents, and the respective retesting/regression testing, is a costly process. It can lead to schedule variance, effort variance and customer dissatisfaction. One defect fix can introduce another defect in the system.
Wastage as System Must Be Retested to Ensure That the Corrections Are Correct:
For every defect fix, there is a possibility of some other part of the software getting affected in a negative manner. One needs to test the software again to ensure that the fix has been correct and has not affected the other parts negatively. Regression testing and retesting are essential when defects are found and fixed.
Data entry errors caused by the users while using a product are possible when users are not protected adequately. This indicates design problems.
4.10 DEVELOPING TEST STRATEGY
Test planning includes developing a strategy about how the test team will
perform testing. Some key components of testing strategy are as follows.
4.11.1 Acquire and Study the Test Strategy as Defined Earlier:
A test strategy is developed by a test team familiar with the business risks associated with software usage. Testing must address the critical success factors for the project and the risks involved in not achieving them.
Spiral development, where new things are added to the system again and again. The generally followed methodology in spiral development is modular design, development and testing.
4.11.4 Identify Tactical Risks Related to Development:
Risks may be introduced in software due to its nature, the type of customer, the type of developing organisation, the development methodologies used, and the skills of teams. Risks may differ from project to project.
Build a tactical test plan which will be used by the test team in the execution of testing-related activities.
Identify business risks associated with the system under development. If quality factors are not met, these may induce some risk in a product.
Place risks in a matrix so that one may be able to analyse them. This may be used to devise preventive, corrective and detective measures to control risks. Table 4.10 shows how a test strategy matrix can be developed.
4.12 TESTING PROCESS
Testing is a process made of many milestones. Testers need to achieve them, one by one, to achieve the final goal of testing. Each milestone forms a basis on which the next stage is built. The milestones may vary from organisation to organisation and project to project. Following are a few milestones commonly used by many organisations.
Writing/Reviewing Test Cases:
Writing and reviewing test cases along with test scenarios, and updating the requirement traceability matrix accordingly, are tasks done by senior testers or test leads for the project. The traceability matrix gets completed by adding the test cases and, finally, the test results: one more column in the traceability matrix holds the results of the execution of the test cases.
Test Result:
Logging the results of testing in the test log is the last part of the testing iteration. There may be several iterations of testing planned and executed. The defect database may be populated if the expected results do not match the actual results. One needs to make sure that the expected results are traceable to requirements.
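The completed traceability matrix can be sketched as a small data structure. The requirement and test-case identifiers below are illustrative, not taken from any real project:

```python
# A sketch of a requirement traceability matrix (RTM) extended with
# test cases and a results column; all identifiers are hypothetical.
rtm = [
    {"req": "REQ-01", "test_case": "TC-01",
     "expected": "login succeeds", "actual": "login succeeds"},
    {"req": "REQ-02", "test_case": "TC-02",
     "expected": "error shown", "actual": "no error shown"},
]

# The results column is derived by comparing expected and actual outcomes.
for row in rtm:
    row["result"] = "pass" if row["expected"] == row["actual"] else "fail"

# Mismatches populate the defect database, each traceable to a requirement.
defects = [(r["req"], r["test_case"]) for r in rtm if r["result"] == "fail"]
print(defects)
```

Because every test case carries its requirement, each failure can be traced straight back to the requirement it was meant to verify.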
Performing Retesting/Regression Testing When Defects Are Resolved by the Development Team:
When defects are given to a development team, they perform analysis and
fix the defects. Retesting is done to find out whether the defects declared
as fixed and verified by the development team are really fixed or not.
Regression testing is done to confirm that the changed part has not
affected (in a negative way) any other parts of software, which were
working earlier.
4.14 TEST METHODOLOGIES/APPROACHES
The two major disciplines in testing are given below:
Black box testing is the only method to prove that software does what it is supposed to do and does not do something which can cause a problem to the user/customer.
Equivalence partitioning
Error guessing
Advantages of 'White Box Testing':
White box testing is a primary method of verification.
Only white box testing can ensure that the defined processes, procedures, and methods of development have really been followed during software development. It can check whether coding standards, commenting and reuse have been followed or not.
Disadvantages of 'White Box Testing':
It does not ensure that user requirements are met correctly. There is no execution of code, and one does not know whether it will really work or not.
Statement coverage
Decision coverage
Condition coverage
Path coverage
Logic coverage
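The difference between the first two criteria in this list can be shown with a small illustrative function (the function itself is hypothetical):

```python
# Illustrative unit for comparing statement coverage with decision coverage.
def classify(x):
    label = "small"
    if x > 10:          # a single decision
        label = "large"
    return label

# One test with x = 20 executes every statement (100% statement coverage),
# yet the 'false' outcome of the decision is never exercised.
assert classify(20) == "large"

# A second test is needed for 100% decision (branch) coverage:
# it takes the 'false' outcome of the decision as well.
assert classify(5) == "small"
```

This is why decision coverage is a stronger criterion than statement coverage: the first test alone satisfies statement coverage but leaves one branch untested.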
Grey box testing tries to combine the advantages of white box testing and black box testing. It checks whether the work product works in a correct manner, both functionally as well as structurally.
Testing needs trained and skilled people who can deliver products
with minimum defects to the stakeholders. Testers have to improve
their skills through continuous learning.
Testers hunt for defects; they pursue defects, not people (including developers). Every defect is considered a process shortcoming.
Defect closure needs retesting and regression testing to find whether
the defect is really fixed or not, and to ensure that there is no negative
impact of a defect on existing functions.
Get involved in test budgeting. Testing needs people, money, time, training and other resources. The organisation may have to develop a budget to procure all of these.
Facilitation Skill:
Facilitation of the development team as well as the customer is done by testers, so that defects are taken in the proper spirit. Testers must be able to tell the exact nature of a defect, how it is happening and how it will affect the users. Testers must contribute to improving the development process and take part in building a better product.
Continuous Education:
Testers must undergo continuous education and training to build and
enforce quality practices in development processes. They need to undergo
training for test planning, test case definition, test data definition, methods
and processes applied for testing, and reporting defects.
Developing Test Plan:
Test plan development is generally done by test managers or test leads, while the implementation of these plans is done by testers. Individual testers must plan for their part in the overall test plan for the project.
Purpose:
The purpose of the SDLC is to deliver a high-quality product which is as per the customer's requirements.
The SDLC has defined phases: Requirement Gathering, Designing, Coding, Testing, and Maintenance. It is important to adhere to the phases to deliver the product in a systematic manner.
For example, suppose a software product has to be developed, and a team is divided to work on a feature of the product and is allowed to work as they want. One of the developers decides to design first, whereas another decides to code first, and another works on the documentation part.
This can lead to project failure, which is why it is necessary to have good knowledge and understanding among the team members to deliver the expected product.
SDLC Cycle:
SDLC Cycle represents the process of developing software.
SDLC Phases
Given below are the various phases:
Requirement gathering and analysis
Design
Implementation or coding
Testing
Deployment
Maintenance
2) Design:
In this phase, the requirements gathered in the SRS document are used as input, and the software architecture that is used for implementing system development is derived.
3) Implementation or Coding:
Implementation/Coding starts once the developer gets the Design
document. The Software design is translated into source code. All the
components of the software are implemented in this phase.
4) Testing:
Testing starts once the coding is complete and the modules are released for
testing. In this phase, the developed software is tested thoroughly and any
defects found are assigned to developers to get them fixed.
Retesting and regression testing are done until the point at which the software is as per the customer's expectation. Testers refer to the SRS document to make sure that the software is as per the customer's standard.
5) Deployment:
Once the product is tested, it is deployed in the production environment or
first UAT (User Acceptance testing) is done depending on the customer
expectation.
In the case of UAT, a replica of the production environment is created and
the customer along with the developers does the testing. If the customer
finds the application as expected, then sign off is provided by the customer
to go live.
6) Maintenance:
After the deployment of a product in the production environment, maintenance of the product, i.e., fixing any issue that comes up or implementing any enhancement, is taken care of by the developers.
1) Waterfall Model:
Waterfall model is the very first model that is used in SDLC. It is also
known as the linear sequential model.
In this model, the outcome of one phase is the input for the next phase.
Development of the next phase starts only when the previous phase is
complete.
First, requirement gathering and analysis is done. Once the requirements are frozen, only then can the System Design start. Herein, the SRS document created is the output of the Requirement phase, and it acts as an input for the System Design.
In System Design, the software architecture and design documents, which act as an input for the next phase, i.e., Implementation and Coding, are created.
In the Implementation phase, coding is done and the software
developed is the input for the next phase i.e. testing.
In the testing phase, the developed code is tested thoroughly to detect the defects in the software. Defects are logged into the defect tracking tool and are retested once fixed. Bug logging, retesting, and regression testing go on until the software is in a go-live state.
In the Deployment phase, the developed code is moved into production after the sign-off is given by the customer.
Any issues in the production environment are resolved by the developers; this comes under maintenance.
a) Verification Phase:
(i) Requirement Analysis:
In this phase, all the required information is gathered & analyzed.
Verification activities include reviewing the requirements.
(v) Coding:
Code development is done in this phase.
b) Validation Phase:
(i) Unit Testing:
Unit testing is performed using the unit test cases that are designed in the Low-Level Design phase. Unit testing is performed by the developers themselves. It is performed on individual components, which leads to early defect detection.
Advantages of V – Model:
It is a simple and easily understandable model.
The V-model approach is good for smaller projects wherein the requirements are well defined and frozen at an early stage.
It is a systematic and disciplined model which results in a high-quality
product.
Disadvantages of V-Model:
The V-shaped model is not good for ongoing projects.
Requirement changes at a later stage would cost too much.
Unit Testing:
This type of testing is performed by developers before the setup is handed over to the testing team to formally execute the test cases. Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is different from the test data of the quality assurance team.
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct in terms of requirements and functionality.
Integration Testing:
Integration testing is defined as the testing of combined parts of an
application to determine if they function correctly. Integration testing can
be done in two ways: Bottom-up integration testing and Top-down
integration testing.
1 Bottom-up integration:
This testing begins with unit testing, followed by tests of
progressively higher-level combinations of units called
modules or builds.
2 Top-down integration:
In this testing, the highest-level modules are tested first and
progressively, lower-level modules are tested thereafter.
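A single top-down integration step can be sketched as follows; the module names are hypothetical, and a stub replaces the lower-level module that is not yet integrated:

```python
# Sketch of top-down integration: the high-level module is tested first,
# with a stub standing in for a lower-level module. Names are illustrative.

def report_stub(data):
    """Stub for the not-yet-integrated report module."""
    return "stub-report"

def dashboard(data, render_report=report_stub):
    """High-level module under test; delegates to a lower-level module."""
    return {"summary": len(data), "report": render_report(data)}

# The top level is tested now; the real report module replaces the stub
# at the next integration step.
result = dashboard([1, 2, 3])
assert result == {"summary": 3, "report": "stub-report"}
```

Bottom-up integration would do the reverse: test the low-level report module first, with a driver standing in for the dashboard that calls it.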
System Testing:
System testing tests the system as a whole. Once all the components are
integrated, the application as a whole is tested rigorously to see that it
meets the specified Quality Standards. This type of testing is performed by
a specialized testing team.
The application is tested in an environment that is very close to the production environment where the application will be deployed.
System testing enables us to test, verify, and validate both the
business requirements as well as the application architecture.
Regression Testing:
Whenever a change in a software application is made, it is quite possible
that other areas within the application have been affected by this change.
Regression testing is performed to verify that a fixed bug hasn't resulted in
another functionality or business rule violation. The intent of regression
testing is to ensure that a change, such as a bug fix should not result in
another fault being uncovered in the application.
Acceptance Testing:
This is arguably the most important type of testing, as it is conducted by the Quality Assurance Team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.
More ideas will be shared about the application and more tests can be
performed on it to gauge its accuracy and the reasons why the project was
initiated. Acceptance tests are not only intended to point out simple
spelling mistakes, cosmetic errors, or interface gaps, but also to point out
any bugs in the application that will result in system crashes or major
errors in the application.
By performing acceptance tests on an application, the testing team will deduce how the application will perform in production. There are also legal and contractual requirements for acceptance of the system.
Alpha Testing:
This test is the first stage of testing and will be performed amongst the
teams (developer and QA teams). Unit testing, integration testing and
system testing, when combined together, are known as alpha testing. During this phase, the following aspects will be tested in the application:
Spelling Mistakes
Broken Links
Cloudy Directions
The Application will be tested on machines with the lowest
specification to test loading times and any latency problems.
Beta Testing:
This test is performed after alpha testing has been successfully performed.
In beta testing, a sample of the intended audience tests the application.
Beta testing is also known as pre-release testing. Beta test versions of
software are ideally distributed to a wide audience on the Web, partly to
give the program a "real-world" test and partly to provide a preview of the
next release. In this phase, the audience will be testing the following:
Users will install, run the application and send their feedback to the
project team.
Typographical errors, confusing application flow, and even crashes.
After getting the feedback, the project team can fix the problems before releasing the software to the actual users.
The more issues you fix that solve real user problems, the higher the
quality of your application will be.
Having a higher-quality application when you release it to the general
public will increase customer satisfaction.
Confirmation Testing:
After a defect is fixed, the software may be tested with all test cases that
failed due to the defect, which should be re-executed on the new software
version.
Regression Testing:
It is possible that a change made in one part of the code, whether a fix or
another type of change, may accidentally affect the behavior of other parts
of the code, whether within the same component, in other components of
the same system, or even in other systems.
Changes may include changes to the environment, such as a new version
of an operating system or database management system. Such unintended
side-effects are called regressions.
Regression testing involves running tests to detect such unintended side
effects. Confirmation testing and regression testing are performed at all
test levels.
Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation. Automation of these tests should start early in the project.
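An automated regression suite can be sketched with Python's standard unittest module. The function under test and the defect number below are hypothetical, chosen only to illustrate the pattern of one test per fixed defect:

```python
import unittest

def discount(price, is_member):
    # Function under regression; suppose hypothetical defect #123 was a
    # missing member check that has since been fixed.
    return price * 0.9 if is_member else price

class RegressionSuite(unittest.TestCase):
    # One test per fixed defect, re-run on every build so the fix
    # cannot silently regress.
    def test_defect_123_member_discount(self):
        self.assertEqual(discount(100, True), 90)

    def test_defect_123_non_member_unchanged(self):
        self.assertEqual(discount(100, False), 100)

# Run the suite programmatically, as an automated build step would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the suite runs unattended on every build, a change that reintroduces the defect is caught immediately rather than at the next manual test pass.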
1. Immediate Goals
2. Long-term Goals
3. Post-Implementation Goals
1. Immediate Goals:
These objectives are the direct outcomes of testing. These objectives
may be set at any time during the SDLC process. Some of these are
covered in detail below:
Bug Discovery: This is the immediate goal of software testing: to find errors at any stage of software development. A greater number of bugs is discovered in the early stages of testing. The primary purpose of software testing is to detect flaws at any step of the development process. The higher the number of issues detected at an early stage, the higher the software testing success rate.
Bug Prevention: This is the consequent action of bug discovery. Everyone in the software development team learns how to code from the behaviour and analysis of the issues detected, ensuring that bugs are not duplicated in subsequent phases or future projects.
2. Long-Term Goals:
These objectives have an impact on product quality in the long run after
one cycle of the SDLC is completed. Some of these are covered in detail
below:
Quality: This goal enhances the quality of the software product. Because software is also a product, the user's priority is its quality. Superior quality is ensured by thorough testing. Correctness, integrity, efficiency, and reliability are all aspects that influence quality. To attain quality, you must achieve all of the above-mentioned quality characteristics.
Customer Satisfaction: This goal verifies the customer's satisfaction with a developed software product. The primary purpose of software testing, from the user's standpoint, is customer satisfaction. Testing should be extensive and thorough if we want the client and customer to be happy with the software product.
Reliability: It is a matter of confidence that the software will not
fail. In short, reliability means gaining the confidence of the
customers by providing them with a quality product.
Risk Management: Risk is the probability of occurrence of
uncertain events in the organization and the potential loss that could
result in negative consequences. Risk management must be done to
reduce the failure of the product and to manage risk in different
situations.
In simple words, if it is not done, bugs will be missed and the fixes won't work with new features.
Testers must have a good knowledge about the domain under testing. This
is true for system testers where it is expected that they would be working
as normal users. They must use business logic to improve efficiency and
effectiveness of testing, and also testing must give enough confidence to
users that there will not be any accidental failures.
Testing Must Cover Functional/Structural Parts:
Often, people consider functional testing as complete testing. One must keep in mind that there are five types of requirements covered by 'TELOS' (Technical, Economic, Legal, Operational, Scheduling). Only operational requirements may cover functional as well as non-functional requirements. Testing must also cover the way software is built, to give better screen designs, optimum performance, and security.
4.25 SUMMARY
This chapter presents the definition of a successful tester and the basic principles of software testing. It offers a detailed exposition of the process of creating a test policy, test strategy, and test plan. It also gives an in-depth understanding of 'Black Box Testing', 'White Box Testing' and 'Grey Box Testing'. The chapter concludes with the skills required by a good tester and the challenges faced by a tester.
4.26 EXERCISES
1) Explain the concept of test team's defect finding efficiency.
2) Explain test case's defect finding efficiency.
3) What are the challenges faced by testers?
4) Explain the process of developing test strategy.
5) Explain the process of developing test methodology.
6) Which skills are expected in a good tester?
4.27 REFERENCES
1. William E. Lewis, Software Testing and Continuous Quality Improvement, CRC Press, 3rd Edition, 2016.
2. https://www.softwaretestinghelp.com/writing-test-strategy-document-template/
3. https://www.softwaretestinghelp.com/10-qualities-that-can-make-you-a-good-tester/
*****
UNIT - III
5
UNIT TESTING: BOUNDARY VALUE
TESTING, EQUIVALENCE CLASS
TESTING
Unit Structure
5.0 Objectives
5.1 How does software testing work?
5.1.1 Importance of Software Testing
5.1.2 What are the categories of test design techniques?
5.1.3 Types of Dynamic Testing
5.2 Black box testing
5.2.1 Generic steps of black box testing
5.2.2 Test procedure
5.2.3 Types of Black Box Testing
5.2.4 Techniques Used in Black Box Testing
5.3 Boundary value testing
5.4 Normal Boundary Value Testing
5.5 Robust Boundary Value Testing
5.6 Worst-Case Boundary Value Testing
5.7 Special Value Testing
5.8 Random Testing
5.9 Guidelines for Boundary Value Testing
5.10 Equivalence Class Testing
5.11 Traditional Equivalence Class testing
5.12 Improved Equivalence Class testing
5.13 Edge Testing
5.14 Guidelines for Equivalence Class Testing and Observation
5.15 Equivalence Partitioning Example
Example 1: Grocery Store Example
Example 2: Equivalence and Boundary Value
Example 3: Equivalence and Boundary Value
Example 4: Input Box should accept the Numbers 1 to 10
5.16 Why Equivalence & Boundary Analysis Testing
5.17 Summary
5.18 Exercises
5.19 References
5.0 OBJECTIVES
This chapter will make the readers understand the following concepts:
How software works
Test cases
Description:
Software testing is the process of verifying a system with the purpose of identifying any errors, gaps or missing requirements versus the actual requirements. Software testing is broadly categorised into two types: functional testing and non-functional testing.
When to start test activities: Testing should be started as early as possible to reduce the cost and time of rework and to produce software that is bug-free when it is delivered to the client. In the Software Development Life Cycle (SDLC), testing can be started from the Requirements Gathering phase and continued till the software is out in production. It also depends on the development model being used. For example, in the Waterfall model, testing starts only in the testing phase, which comes quite late in the cycle; but in the V-model, testing is performed in parallel to the development phase.
Consider Nissan having to recall over 1 million cars due to a software defect in the airbag sensor detectors, or a software bug that caused the failure of a USD 1.2 billion military satellite launch. The numbers speak for themselves: software failures in the US cost the economy USD 1.1 trillion in assets in 2016. What's more, they impacted 4.4 billion customers.
Though testing itself costs money, companies can save millions per year in
development and support if they have a good testing technique and QA
processes in place. Early software testing uncovers problems before a
product goes to market. The sooner development teams receive test
feedback, the sooner they can address issues such as:
Architectural flaws
Security vulnerabilities
Scalability issues
A test design technique basically helps us to select a good set of tests from
the total number of all possible tests for a given system. There are many
different types of software testing techniques, each with its own strengths
and weaknesses. Each individual technique is good at finding particular
types of defects and relatively poor at finding other types.
For example, a technique that explores the upper and lower limits of a
single input range is more likely to find boundary value defects than
defects associated with combinations of inputs. Similarly, testing
performed at different stages in the software development life cycle will
find different types of defects; component testing is more likely to find
coding logic defects than system design defects.
1. Static techniques
2. Dynamic techniques, which include:
Structure-based (white-box or structural) techniques
Experience-based techniques
Suppose we are testing a login page with two fields, "Username" and
"Password", where the Username is restricted to alphanumeric characters.
When the user enters the Username "Guru99", the system accepts it;
whereas when the user enters "Guru99@123", the application throws
an error message. This result shows that the code acts
dynamically based on the user input.
Dynamic testing is when you are working with the actual system by
providing an input and comparing the actual behaviour of the application
to the expected behaviour. In other words, working with the system with
the intent of finding errors.
Test cases:
Test cases are created considering the specification of the requirements.
They are generally created from working descriptions of the
software, including requirements, design parameters and other
specifications. For the testing, the test designer selects both positive test
scenarios, taking valid input values, and adverse test scenarios, taking
invalid input values, to determine the correct output. Test cases are mainly
designed for functional testing but can also be used for non-functional
testing. Test cases are designed by the testing team; the development
team is not involved.
Functional testing:
This black box testing type is related to the functional requirements of a
system; it is done by software testers.
Non-functional testing:
This type of black box testing is not related to testing of specific
functionality, but non-functional requirements such as performance,
scalability, usability.
Regression testing:
Regression testing is done after code fixes, upgrades or any other system
maintenance to check that the new code has not affected the existing code.
Use Case Technique: a technique used to identify test cases from
the beginning to the end of the system, as per the usage
of the system. Using this technique, the test team
creates test scenarios that can exercise the entire
software based on the functionality of each function
from start to end.
5.3 BOUNDARY VALUE TESTING
Boundary value analysis is a software testing technique in which tests are
designed to include representatives of boundary values. The idea comes
from the concept of a boundary in topology.
A greater number of errors occur at the boundaries of the input domain
than in the "center"
Boundary value analysis is a test case design method that complements
equivalence partitioning
It selects test cases at the edges of a class
It derives test cases from both the input domain and output domain
Boundary testing is the process of testing between extreme ends, or
boundaries, of partitions of the input values.
These extreme ends, such as Start-End, Lower-Upper, Maximum-
Minimum and Just Inside-Just Outside values, are called boundary values,
and the testing is called "boundary testing".
Types of Boundary Value Testing:
Normal Boundary Value Testing
Robust Boundary Value Testing
Worst-case Boundary Value Testing
Robust Worst-case Boundary Value Testing
1. Minimum
2. Just above the minimum
3. A nominal value
4. Just below the maximum
5. Maximum
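These five values can be derived mechanically. A minimal sketch for an integer input range follows (taking the midpoint as the nominal value is an assumption, not from the text):

```python
def normal_bva_points(lo, hi):
    """Return the five normal boundary value test points for an
    integer input range [lo, hi]: min, min+, nominal, max-, max.
    Using the midpoint as the nominal value is an assumption."""
    nominal = (lo + hi) // 2
    return [lo, lo + 1, nominal, hi - 1, hi]

print(normal_bva_points(0, 100))  # [0, 1, 50, 99, 100]
```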
The range of X: 0 to 100
The range of Y: 20 to 60
The range of Z: 80 to 100
Test Cases:
Total test cases = (Number of variables * Number of testing points
without nominal) + 1 (for the nominal case)
These testing points are min-, min, min+, max-, max and max+.
19 = (3 * 6) + 1
We can generate 19 test cases from the three variables X, Y and Z:
There are a total of 3 variables: X, Y and Z
There are 6 possible values for each: min-, min, min+, max-, max and
max+
1 is for the nominal case
Logic:
When we make test cases, we will fix the nominal value of the two
variables and change the values of the third variable.
For example
We will fix the nominal values of X and Y and make a combination
of these values with each value of the Z variable.
The fixed nominal values of X and Y are 50 and 40, and we combine
them with the Z values 79, 80, 81, 90, 99, 100 and 101.
We will fix the nominal values of X and Z and make a
combination of these values with each value of the Y variable.
The fixed nominal values of X and Z are 50 and 90, and we combine
them with the Y values 19, 20, 21, 40, 59, 60 and 61.
We will fix the nominal values of Y and Z and make a
combination of these values with each value of the X variable.
The fixed nominal values of Y and Z are 40 and 90, and we combine
them with the X values -1, 0, 1, 50, 99, 100 and 101.
Test cases generated in Robust Boundary Value Testing:
Test Case#   X     Y     Z     Comment
1            50    40    79    Fix Nominal of X and Y
2            50    40    80    Fix Nominal of X and Y
3            50    40    81    Fix Nominal of X and Y
4            50    40    90    Fix Nominal of X and Y
5            50    40    99    Fix Nominal of X and Y
6            50    40    100   Fix Nominal of X and Y
7            50    40    101   Fix Nominal of X and Y
8            50    19    90    Fix Nominal of X and Z
9            50    20    90    Fix Nominal of X and Z
10           50    21    90    Fix Nominal of X and Z
11           50    59    90    Fix Nominal of X and Z
12           50    60    90    Fix Nominal of X and Z
13           50    61    90    Fix Nominal of X and Z
14           -1    40    90    Fix Nominal of Y and Z
15           0     40    90    Fix Nominal of Y and Z
16           1     40    90    Fix Nominal of Y and Z
17           99    40    90    Fix Nominal of Y and Z
18           100   40    90    Fix Nominal of Y and Z
19           101   40    90    Fix Nominal of Y and Z
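The 6n + 1 robust test cases described above can be generated mechanically. The following sketch assumes Z ranges over 80 to 100, as the values in the text imply, and uses the midpoint as the nominal value:

```python
def robust_bva_cases(ranges):
    """Generate the 6n + 1 robust boundary value test cases for n
    integer variables given as (lo, hi) pairs: each variable takes
    min-, min, min+, max-, max and max+ in turn while the other
    variables are held at their nominal (midpoint) values."""
    nominals = [(lo + hi) // 2 for lo, hi in ranges]
    cases = [tuple(nominals)]  # the single all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for value in (lo - 1, lo, lo + 1, hi - 1, hi, hi + 1):
            case = list(nominals)
            case[i] = value
            cases.append(tuple(case))
    return cases

# X: 0..100, Y: 20..60, Z: 80..100 (the Z range is inferred).
cases = robust_bva_cases([(0, 100), (20, 60), (80, 100)])
print(len(cases))  # (3 * 6) + 1 = 19
```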
There are many benefits of robustness testing. Some of them are
mentioned below:
2. Better design:
Robustness testing results in more options and better software designs,
and it is completed before the design of the product is finalized.
3. Achieve consistency:
Robustness testing helps to increase the consistency, reliability, accuracy
and efficiency of the software.
In an electronic circuit this is called Worst Case Analysis.
In Worst-Case testing we use this idea to create test cases.
To generate test cases, we take the original 5-tuple set (min, min+, nom,
max-, max) and perform the Cartesian product of these values. The end
product is a much larger set of results than we have seen before.
The range of x1: 10 to 90
The range of x2: 20 to 70
Testing points used in Worst-Case Boundary Value Testing:

         x1    x2
Min      10    20
Min+     11    21
Nominal  50    45
Max-     89    69
Max      90    70
Test cases:
Total test cases = A * A
25 = 5 * 5
where A is the number of testing points: min, min+, nominal, max- and
max.
We can generate 25 test cases from the two variables x1 and x2 by
combining each value of one variable with each value of the other
variable. These 25 test cases are sufficient to test the input values for
these variables.
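The Cartesian-product construction above can be sketched in a few lines (midpoint nominal is an assumption):

```python
from itertools import product

def worst_case_bva(ranges):
    """Worst-case boundary value testing: form the Cartesian product
    of the five normal test points of every variable, giving 5**n
    test cases for n variables."""
    points = [(lo, lo + 1, (lo + hi) // 2, hi - 1, hi) for lo, hi in ranges]
    return list(product(*points))

cases = worst_case_bva([(10, 90), (20, 70)])
print(len(cases))  # 5 * 5 = 25
```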
5.7 SPECIAL VALUE TESTING
Special value testing is a defined and applied form of functional testing,
a type of testing that verifies whether each function of the software
application operates in conformance with the required specification.
Special value testing is probably the most extensively practiced form of
functional testing; it is the most intuitive and the least uniform.
This technique is performed by experienced professionals who are experts
in the field and have profound knowledge of the test and the data required
for it. They continuously participate and apply great effort to deliver
test results that suit the client's requirements.
Monkey Testing:
Monkey testing is a software testing technique in which the testing is
performed on the system under test randomly.
In monkey testing the tester (and sometimes the developer too) is
considered the 'monkey'.
If a monkey uses a computer, he will randomly perform any task on the
system out of his understanding.
In the same way, the tester applies random test cases on the system under
test to find bugs/errors without predefining any test case.
In some cases, Monkey Testing is dedicated to unit testing
This testing is so random that the tester may not be able to reproduce the
error/defect.
The scenario may NOT be definable and may NOT be the correct business
case.
Monkey Testing needs testers with very good domain and technical
expertise.
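A minimal sketch of monkey-style random testing follows, assuming a trivial function under test (abs_diff is illustrative) and checking invariants rather than predefined expected values:

```python
import random

# Toy function under test (an assumption for illustration).
def abs_diff(a, b):
    return abs(a - b)

random.seed(0)  # seeding makes an otherwise random run reproducible
for _ in range(1000):
    a = random.randint(-100, 100)
    b = random.randint(-100, 100)
    result = abs_diff(a, b)
    # Invariants: non-negative, symmetric, zero only when a == b.
    assert result >= 0
    assert result == abs_diff(b, a)
    assert (result == 0) == (a == b)
print("1000 random cases passed")
```

Note the trade-off mentioned above: without the seed, a failing input might not be reproducible.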
2. One test case for just below the boundary values of the input domains.
3. One test case for just above the boundary values of the input domains.
The equivalence classes that are formed perform the same operation
and produce the same characteristics or behaviour for the inputs provided.
The test cases are created on the basis of the different attributes of the
classes, and one input from each class is used to execute the test
cases, validating the software functions and the working principles of
the software product for the inputs of the respective classes.
It is also referred to as a logical step in the functional testing model
approach that enhances the quality of the test classes by removing
any redundancy or faults that may exist in the testing approach.
Types of Equivalence Class Testing:
Following four types of equivalence class testing are presented here:
1) Weak Normal Equivalence Class Testing.
2) Strong Normal Equivalence Class Testing.
3) Weak Robust Equivalence Class Testing.
4) Strong Robust Equivalence Class Testing.
1) Weak Normal Equivalence Class Testing:
The word 'weak' means 'single fault assumption'. This type of testing is
accomplished by using one variable from each equivalence class in a test
case. We would, thus, end up with the weak equivalence class test cases as
shown in the following figure.
Each dot in the above graph indicates a test datum. From each class we
have one dot, meaning that there is one representative element per test
case. In fact, we will always have the same number of weak equivalence
class test cases as there are classes in the partition.
Just as we have truth tables in digital logic, there are similarities between
truth tables and our pattern of test cases. The Cartesian product
guarantees that we have a notion of "completeness" in the following two
ways:
a) We cover all equivalence classes.
b) We have one of each possible combination of inputs.
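The contrast between weak and strong normal equivalence class testing can be sketched as follows (the class representatives below are assumed for illustration, one value per equivalence class of each variable):

```python
from itertools import product

classes_a = [5, 50, 95]   # variable a has three equivalence classes
classes_b = [10, 30]      # variable b has two equivalence classes

# Weak (single fault assumption): one value from each class per test;
# the shorter list repeats its last representative, so the number of
# tests equals the size of the largest partition.
padded_b = classes_b + [classes_b[-1]] * (len(classes_a) - len(classes_b))
weak = list(zip(classes_a, padded_b))

# Strong: the Cartesian product covers every combination of classes.
strong = list(product(classes_a, classes_b))

print(len(weak), len(strong))  # 3 6
```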
We find here that we have 8 robust (invalid) test cases and 12 strong, or
valid, inputs. Each one is represented with a dot. So, in total we have 20
test cases (represented as 20 dots) using this technique.
5.13 EDGE TESTING
A hybrid of boundary value testing and equivalence class testing gives
rise to the name "edge testing".
It is used when contiguous ranges of a particular variable constitute
equivalence classes of valid values.
When a programmer makes an error, it results in a defect in the software
source code. If this defect is executed, the system will produce wrong
results, causing a failure. A defect may also be called a fault or a bug.
Once the set of edge values is determined, edge testing can follow any of
the four forms of equivalence class testing.
The number of test cases obviously increases with the variations of BVT
and ECT.
5.15 EQUIVALENCE PARTITIONING EXAMPLE
Example 1: Grocery Store Example
Consider a software module that is intended to accept the name of a
grocery item and a list of the different sizes the item comes in, specified in
ounces. The specifications state that the item name is to be alphabetic
characters 2 to 15 characters in length. Each size may be a value in the
range of 1 to 48, whole numbers only. The sizes are to be entered in
ascending order (smaller sizes first). A maximum of five sizes may be
entered for each item. The item name is to be entered first, followed by a
comma, and then followed by a list of sizes. A comma will be used to
separate each size. Spaces (blanks) are to be ignored anywhere in the
input.
Derived Equivalence Classes:
Item name is alphabetic (valid)
Item name is not alphabetic (invalid)
Item name is less than 2 characters in length (invalid)
Item name is 2 to 15 characters in length (valid)
Item name is greater than 15 characters in length (invalid)
Size value is less than 1 (invalid)
Size value is in the range 1 to 48 (valid)
Size value is greater than 48 (invalid)
Size value is a whole number (valid)
Size value is a decimal (invalid)
Size value is numeric (valid)
Size value includes nonnumeric characters (invalid)
Size values entered in ascending order (valid)
Size values entered in non-ascending order (invalid)
No size values entered (invalid)
One to five size values entered (valid)
More than five sizes entered (invalid)
Item name is first (valid)
Item name is not first (invalid)
A single comma separates each entry in list (valid)
A comma does not separate two or more entries in the list (invalid)
The entry contains no blanks (???)
The entry contains blanks (????)
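The specification above can be turned into a single validity check that the derived equivalence classes partition. This is an illustrative sketch: the helper name and parsing details are assumptions, and blanks are stripped per the spec:

```python
def validate_entry(entry):
    """Rough validity check for the grocery-store input described
    above: '<name>,<size>,<size>,...' with blanks ignored."""
    entry = entry.replace(" ", "")        # spaces ignored anywhere
    parts = entry.split(",")
    name, sizes = parts[0], parts[1:]
    if not (name.isalpha() and 2 <= len(name) <= 15):
        return False                      # invalid item name
    if not 1 <= len(sizes) <= 5:
        return False                      # wrong number of sizes
    values = []
    for s in sizes:
        if not s.isdigit():
            return False                  # non-numeric or decimal size
        values.append(int(s))
    if any(not 1 <= v <= 48 for v in values):
        return False                      # size out of range
    return values == sorted(values)       # sizes must be ascending

print(validate_entry("milk, 8, 16, 32"))  # True
print(validate_entry("a,8"))              # False: name too short
```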
Advantages:
1. ECT requires less effort, since only one test case is performed per partition.
2. Performing one test case for each partition consumes less time.
Disadvantages:
1. If there is any mistake in accurately defining any class or if the value
selected for finding errors does not properly represent a class, then the
errors will be difficult to find.
2. If any class has subclasses, then selecting a value from any one
subclass may not represent each subclass.
Order Pizza:
5.18 SUMMARY
Boundary value analysis is used when it is practically impossible to
test the large pool of test cases individually.
Two techniques are used: boundary value analysis and equivalence
partitioning.
In equivalence partitioning, you first divide the set of test conditions
into partitions that can be considered the same.
In boundary value analysis you then test the boundaries between the
equivalence partitions.
These techniques are appropriate for calculation-intensive applications
with variables that represent physical quantities.
If the range condition is given as an input, then one valid and two
invalid equivalence classes are defined.
If a specific value is given as input, then one valid and two invalid
equivalence classes are defined.
If a member of set is given as an input, then one valid and one
invalid equivalence class is defined.
If a Boolean value is given as an input condition, then one valid and
one invalid equivalence class is defined.
The whole success of equivalence class testing relies on the
identification of equivalence classes. The identification of these
classes relies on the ability of the testers who create the classes and
the test cases based on them.
In the case of complex applications, it is very difficult to identify all
the sets of equivalence classes, and it requires a great deal of expertise
on the tester's side.
Incorrectly identified equivalence classes can lead to lower test
coverage and the possibility of defect leakage.
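The range-condition rule summarized above (one valid, two invalid classes) can be sketched as a small helper; the function name and the dictionary representation are illustrative assumptions:

```python
def range_equivalence_classes(lo, hi):
    """For a range condition [lo, hi] on an integer input, one valid
    and two invalid equivalence classes are defined."""
    return {
        "valid": (lo, hi),
        "invalid_below": ("-infinity", lo - 1),
        "invalid_above": (hi + 1, "+infinity"),
    }

# "Input box should accept the numbers 1 to 10":
print(range_equivalence_classes(1, 10))
```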
5.19 EXERCISES
1. Explain Boundary Value Analysis.
2. What are the important criteria for Boundary Value Analysis?
3. Explain static and dynamic testing.
4. Explain the different types of equivalence classes.
5.20 REFERENCES
The Art of Software Testing, 3rd Edition. Glenford J. Myers, Corey
Sandler, Tom Badgett.
*****
6
DECISION TABLE BASED TESTING,
PATH TESTING, DATA FLOW TESTING
Unit Structure
6.0 Objectives
6.1 What is a Decision Table
6.1.1 Components of a Decision Table
6.2 What is State transition testing in software testing?
6.3 What is Use case testing in software testing?
6.4 Path Testing
6.4.1 Path Testing Process:
6.4.2 Cyclomatic Complexity
6.4.3 Independent Paths:
6.4.4 Design Test Cases:
6.5 Data Flow Testing
6.5.1 Types of data flow testing
6.5.2 Steps of Data Flow Testing
6.5.3 Types of Data Flow Testing
6.5.4 Data Flow Testing Coverage
6.5.5 Data Flow Testing Strategies
6.5.6 Data Flow Testing Applications
6.6 Conclusion
6.7 Exercise
6.8 References
6.0 OBJECTIVES
Understand the data flow testing.
Visualize the decision table
Understand the need and appreciate the usage of the testing methods.
Identify the complications in a transaction flow testing method and
anomalies in data flow testing.
Interpret the data flow anomaly state graphs and control flow graphs
and represent the states of the data objects.
Understand the limitations of Static analysis in data flow testing.
Compare and analyze various strategies of data flow testing.
6.1 WHAT IS A DECISION TABLE
It is a table which shows different combinations of inputs with their
associated outputs; it is also known as a cause-effect table. In EP and
BVA we have seen that those techniques can be applied only to specific
conditions or inputs. However, we may have different inputs which result
in different actions being taken, or in other words a business rule to test
where different combinations of inputs result in different actions.
For testing such rules or logic, decision table testing is used.
It is a black box test design technique.
2. Action
3. Stub
4. Entry
5. Rules
When both Fly From and Fly To are not set, the Flights button is
disabled. In the decision table, we register the value False for Fly From
and Fly To, and the outcome, the Flights button, will be disabled,
i.e. FALSE.
Next, when Fly From is set but Fly To is not set, the Flights button is
disabled. Correspondingly, you register True for Fly From in the
decision table, and the rest of the entries are False.
When Fly From is not set but Fly To is set, the Flights button is
disabled, and you make the entries in the decision table.
Lastly, only when Fly To and Fly From are both set is the Flights
button enabled, and you make the corresponding entry in the decision
table.
If you observe, the outcomes for Rules 1, 2 and 3 remain the same, so
you can select any one of them plus Rule 4 for your testing.
The significance of this technique becomes immediately clear as the
number of inputs increases. The number of possible combinations is
given by 2^n, where n is the number of inputs.
For n = 10, which is very common in web-based testing with big
input forms, the number of combinations will be 1024. Obviously,
you cannot test all of them, but you can choose a rich subset of the
possible combinations using decision-table-based testing.
A decision table is based on logical relationships, just like a truth table.
It is a tool that helps us look at the "complete" combination of conditions
technique.
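The Fly From / Fly To rule above can be sketched in code, and the 2^n rule columns enumerated programmatically (the function and condition names are illustrative assumptions):

```python
from itertools import product

# The Flights button is enabled only when both conditions are true.
def flights_button_enabled(fly_from_set, fly_to_set):
    return fly_from_set and fly_to_set

# Enumerate all 2**n combinations of the n conditions -- the rule
# columns of the decision table.
for rule, (fly_from, fly_to) in enumerate(product([False, True], repeat=2), 1):
    enabled = flights_button_enabled(fly_from, fly_to)
    print(f"Rule {rule}: Fly From={fly_from}, Fly To={fly_to} -> {enabled}")
```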
Advantages:
1. Decision-table-based testing works iteratively: if the leading table
could not deliver the required result, it helps to develop a new
decision table or tables.
2. The decision table assures complete testing.
3. The decision table does not impose any particular order of
occurrence of conditions and actions.
Disadvantages:
1. Larger decision tables need to be divided into smaller tables to
reduce redundancy and complexity.
2. Decision tables do not scale proportionally as the number of
conditions grows.
6.2 WHAT IS STATE TRANSITION TESTING IN
SOFTWARE TESTING?
State transition testing is used where some aspect of the system can be
described in what is called a 'finite state machine'. This simply means
that the system can be in a (finite) number of different states, and the
transitions from one state to another are determined by the rules of the
'machine'. This is the model on which the system and the tests are
based.
Any system where you get a different output for the same input,
depending on what has happened before, is a finite state system.
A finite state system is often shown as a state diagram (see Figure
4.2).
One of the advantages of the state transition technique is that the
model can be as detailed or as abstract as you need it to be. Where a
part of the system is more important (that is, requires more testing) a
greater depth of detail can be modeled. Where the system is less
important (requires less testing), the model can use a single state to
signify what would otherwise be a series of different states.
Test conditions can be derived from the state graph in various ways. Each
state can be noted as a test condition, as can each transition. However this
state diagram, even though it is incomplete, still gives us information on
which to design some useful tests and to explain the state transition
technique.
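A finite state machine of this kind can be sketched as a transition table; the states and events below are assumed for illustration (loosely a card-and-PIN flow), not taken from the text's figure:

```python
# Transition table: (current state, event) -> next state.
transitions = {
    ("locked", "insert_card"): "awaiting_pin",
    ("awaiting_pin", "valid_pin"): "authenticated",
    ("awaiting_pin", "invalid_pin"): "locked",
}

def next_state(state, event):
    # Events with no matching transition leave the state unchanged.
    return transitions.get((state, event), state)

state = "locked"
for event in ["insert_card", "valid_pin"]:
    state = next_state(state, event)
print(state)  # authenticated
```

Each entry of the table, and each (state, event) pair with no entry, is a candidate test condition.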
Use cases must also specify postconditions, that is, observable results
and a description of the final state of the system after the use case has
been executed successfully.
The ATM PIN example is shown below in Figure 4.3. We show successful
and unsuccessful scenarios. In this diagram we can see the interactions
between the A (actor – in this case it is a human being) and S (system).
Steps 1 to 5 form the success scenario: the card and the PIN are both
validated, allowing the actor to access the account. But in the
extensions there can be three other cases, 2a, 4a and 4b, as shown in
the diagram below.
For use case testing, we would have one test for the success scenario and
one test for each extension. In this example, we may give extension 4b a
higher priority than 4a from a security point of view.
System requirements can also be specified as a set of use cases. This
approach can make it easier to involve the users in the requirements
gathering and definition process.
Cyclomatic complexity is used to determine the number of linearly
independent paths, and test cases are then generated for each path.
It gives complete branch coverage but achieves it without covering all
possible paths of the control flow graph. McCabe's cyclomatic
complexity is used in path testing. It is a structural testing method that
uses the source code of a program to find every possible executable path.
Since this testing is based on the control structure of the program, it
requires complete knowledge of the program's structure. To design test
cases using this technique, four steps are followed:
1. Construct the Control Flow Graph
2. Compute the Cyclomatic Complexity of the Graph
3. Identify the Independent Paths
4. Design Test cases from Independent Paths
Control flow graph notations (figures): sequential statements,
If-Then-Else, Do-While, While-Do and Switch-Case.
V(G) = d + P
where d is the number of decision nodes and P is the number of
connected components.
For example, consider the first graph given above, where d = 1 and
P = 1. So,
Cyclomatic Complexity V(G) = d + P = 1 + 1 = 2
Note:
1. For one function [e.g. Main( ) or Factorial( )], only one flow graph
is constructed. If a program has multiple functions, then a
separate flow graph is constructed for each of them. Also, in the
cyclomatic complexity formula, the value of P is set depending on
the number of graphs present in total.
2. If a decision node has exactly two arrows leaving it, it is
counted as one decision node. However, if there are more than 2
arrows leaving a decision node, its contribution is computed using
this formula:
d = k - 1
Here, k is the number of arrows leaving the decision node.
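A tiny helper can capture the computation above: a sketch of the d + P formula and the k - 1 rule for multi-way decision nodes (function names are illustrative):

```python
def decision_count(arrows_leaving):
    """Contribution of one decision node: a node with k outgoing
    arrows counts as k - 1 decision nodes (k = 2 counts as one)."""
    return arrows_leaving - 1

def cyclomatic_complexity(d, p=1):
    """V(G) = d + P: d decision nodes, P connected components."""
    return d + p

print(cyclomatic_complexity(1, 1))  # first graph above -> 2
print(decision_count(3))            # 3 arrows -> counts as 2 decisions
```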
Path 2:
C -> D
Note:
Independent paths are not unique. In other words, if for a graph the
cyclomatic complexity comes out to be N, it is possible to obtain two
different sets of N paths which are each independent in nature.
Advantages:
Basis Path Testing can be applicable in the following cases:
1. More Coverage:
Basis path testing provides the best code coverage as it aims to achieve
maximum logic coverage instead of maximum path coverage. This
results in an overall thorough testing of the code.
2. Maintenance Testing:
When a software is modified, it is still necessary to test the changes
made in the software which as a result, requires path testing.
3. Unit Testing:
When a developer writes the code, he or she tests the structure of the
program or module themselves first. This is why basis path testing
requires enough knowledge about the structure of the code.
4. Integration Testing:
When one module calls other modules, there are high chances of
Interface errors. In order to avoid the case of such errors, path testing is
performed to test all the paths on the interfaces of the modules.
5. Testing Effort:
Since the basis path testing technique takes into account the complexity
of the software (i.e., program or module) while computing the
cyclomatic complexity, therefore it is intuitive to note that testing effort
in case of basis path testing is directly proportional to the complexity of
the software or program.
What is Data Flow Testing?
The programmer can perform numerous tests on data values and
variables. This type of testing is referred to as data flow testing.
It is performed at two abstract levels: static data flow testing and
dynamic data flow testing.
The static data flow testing process involves analyzing the source
code without executing it.
Static data flow testing exposes possible defects known as data flow
anomalies.
Dynamic data flow testing identifies program paths from the source code.
Let us understand this with the help of an example.
There are 8 statements in this code. We cannot cover all 8 statements in a
single path: if the condition at statement 2 is true, then statements 4, 5, 6
and 7 are not traversed, and if statement 4 is taken, then statement 3 is
not traversed.
Hence we consider two paths so that we can cover all the statements.
x = 1
Path: 1, 2, 3, 8
Output = 2
If we consider x = 1: in step 1, x is assigned the value 1; then we move to
step 2, and since x > 0 we move to statement 3 (a = x + 1); at the end,
control goes to statement 8 and prints the output 2.
For the second path, we assign x as -1.
Set x = -1
Path: 1, 2, 4, 5, 6, 5, 6, 5, 7, 8
Output = 2
x is set to -1 in step 1; control then moves to step 2, which is false as x is
smaller than 0 (x > 0 fails for x = -1). Control therefore jumps to step 4;
as step 4 is true (x <= 0, and here x is less than 0), it moves on to step 5
(x < 1), which is true, and then to step 6 (x = x + 1), where x is increased
by 1.
So,
x=-1+1
x=0
x becomes 0 and control goes back to step 5 (x < 1); as it is true, it jumps
to step 6 (x = x + 1)
x=x+1
x= 0+1
x=1
x is now 1 and control jumps to step 5 (x < 1); now the condition is false,
so it moves to step 7 (a = x + 1) and sets a = 2, as x is 1. At the end the
value of a is 2, and at step 8 we get the output 2.
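The eight-statement program traced above is not listed in the text; the following sketch reconstructs it from the description, with the statement numbering as assumed in the trace:

```python
def compute(x):          # statement 1: x is assigned (by the caller)
    if x > 0:            # statement 2
        a = x + 1        # statement 3
    else:                # statement 4 (x <= 0)
        while x < 1:     # statement 5
            x = x + 1    # statement 6
        a = x + 1        # statement 7
    return a             # statement 8: the printed output

print(compute(1))   # path 1, 2, 3, 8                -> 2
print(compute(-1))  # path 1, 2, 4, 5, 6, 5, 6, 5, 7, 8 -> 2
```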
Deletion:
Deletion of the Memory allocated to the variables.
6.5.3 Types of Data Flow Testing:
6.5.5 Data Flow Testing Strategies:
8. All uses: it is a combination of the all p-uses criterion and the all
c-uses criterion.
9. All du-paths: for every variable x and node i such that x has a global
definition in node i, pick a complete path including all du-paths from
node i to all nodes j having a global c-use of x in j, and to all edges
(j, k) having a p-use of x on edge (j, k).
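The criteria above revolve around definition-use (du) pairs. A toy sketch of collecting them follows; the listing and its def/use annotations are assumed for illustration, not taken from the text:

```python
# An annotated straight-line listing: (line, kind, variable).
statements = [
    (1, "def", "x"),    # x = input()
    (2, "p-use", "x"),  # if x > 0:   (predicate use)
    (3, "c-use", "x"),  # a = x + 1   (computation use)
]

du_pairs = []
last_def = {}
for line, kind, var in statements:
    if kind == "def":
        last_def[var] = line            # remember the latest definition
    elif var in last_def:
        du_pairs.append((last_def[var], line, kind))

print(du_pairs)  # [(1, 2, 'p-use'), (1, 3, 'c-use')]
```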
6.6 CONCLUSION
Data is a very important part of software engineering. The testing
performed on data and variables plays an important role in software
engineering. Hence this activity should be properly carried out to
ensure the best working of your product.
6.7 EXERCISE
1. Explain the decision table.
2. How do you calculate cyclomatic complexity?
3. What are independent paths?
4. What is static and dynamic testing?
6.8 REFERENCES
The Art of Software Testing, 3rd Edition. Glenford J. Myers, Corey
Sandler, Tom Badgett.
www.guru.com
www.istbq.com
www.tutorial.com
*****
UNIT - IV
7
SOFTWARE VERIFICATION
AND VALIDATION
Unit Structure
7.0 Objectives
7.1 Introduction
7.2 Verification
7.3 Verification Workbench
7.4 Methods of Verification
7.5 Types of reviews on the basis of Stage Phase
7.6 Entities involved in verification
7.7 Reviews in testing lifecycle
7.8 Coverage in Verification
7.9 Concerns of Verification
7.10 Validation
7.11 Validation Workbench
7.12 Levels of Validation
7.13 Coverage in Validation
7.14 Acceptance Testing
7.15 Management of Verification and Validation
7.16 Software development verification and validation activities
7.17 Let us Sum Up
7.18 Exercises
7.19 References
7.0 OBJECTIVES
After going through this chapter, you will be able to understand:
Verification and its Workbench
Methods of Verification
Types of Review
Validation and its Workbench
Levels of Validation
7.1 INTRODUCTION
Verification and validation (V & V) have become important, especially in
software, as the complexity of software in systems has increased, and
planning for V & V is necessary from the beginning of the development
life cycle. Over the past 20 to 30 years, software development has evolved
from small tasks involving a few people to enormously large tasks
involving many people. Because of this change, verification and
validation has similarly undergone a change. Previously, verification and
validation was an informal process performed by the software engineer
himself. However, as the complexity of systems increased, it became
obvious that continuing this type of testing would result in unreliable
products. It became necessary to look at V & V as a separate activity in
the overall software development life cycle.
7.2 VERIFICATION
Verification is the process of evaluating work-products of a development
phase to determine whether they meet the specified requirements.
Verification ensures that the product is built according to the
requirements and design specifications.
It also answers the question, Are we building the product right?
Verification is also called a "static technique" or "conformance to
requirements", as it does not involve execution of any code, program
or work product.
Input:
Verification Process:
Check:
Production output:
It is the final stage of a workbench, reached when the check confirms that the process was conducted properly.
Reworking:
If the outcome parameters are not in compliance with the desired result, it
is necessary to return to the verification process and conduct it from the
beginning.
A. Self Review:
1. Self Review may not be referred to as an official review.
2. Everybody does a self check before giving a work product for further
verification.
3. One must capture the self review records and defects found in self review to improve the process.
4. It is a self learning and retrospection process.
B. Peer Review:
The easiest and most informal way of reviewing documents or programs/software for the purpose of finding faults during the verification process is the peer review method. In this method, we give the document or software program to others and ask them to review it; we expect their views about the quality of our product and also expect them to find faults in the program/document.
Online Peer Review:
In this review, the author and the reviewer meet and review the work jointly. Any explanation required by the reviewer may be provided by the author. The defects are found and corrected jointly by the peers.
Offline Peer Review:
In this kind of review, the author informs the reviewer that the work product is ready for review. The reviewer reviews the work product as per time availability. The review report, along with the defects, is sent to the author, who may then decide to accept or reject the findings.
C. Walkthrough:
Walkthrough is more formal than peer review but less formal than inspection; it can be called a semi-formal review. In a typical walkthrough, some members of the project team are involved in examining an artifact under review. The author of the software document presents the document to a group of other persons, which can range from 2 to 7. Participants are not expected to prepare anything; the presenter is responsible for preparing the meeting. The document(s) is/are distributed to all participants. At the walkthrough meeting, the author introduces the content in order to make the participants familiar with it, and all the participants are free to ask their doubts.
Advantages:
1. Walkthrough is useful for making joint decisions and each member
must be involved in making decisions.
2. Defects are recorded and suggestions can be received from the team
for improving the work product.
Disadvantages:
1. Availability of people can be an issue when teams are large.
2. Time can be a constraint.
3. Members in the team may not be experts in giving comments, so they may need some training and basic knowledge about the project.
D. Inspection:
Advantages of Inspection:
1. Helps in the Early removal of major defects.
2. This inspection enables a numeric quality assessment of any technical
document.
3. Software inspection helps in process improvement.
4. It helps in staff training on the job.
5. Software inspection helps in gradual productivity improvement.
Phases of Inspection:
Planning:
The planning phase starts when the entry criteria for the inspection stage are met. The moderator plans the inspection and verifies that the product entry criteria are met.
Kick-off Inspection:
In the overview phase, a presentation is given to the inspectors with some background information needed to review the software product properly. Here the objective of the inspection and the process to be followed are explained.
Preparation:
This is considered an individual activity. In this part of the process, the
inspector collects all the materials needed for inspection, reviews that
material, and notes any defects.
Meeting:
The moderator conducts the meeting. In the meeting, the defects are
collected and reviewed within a defined time frame.
Rework:
The author performs this part of the process in response to defect
disposition determined at the meeting.
Follow-up:
In follow-up, the moderator verifies the corrections and then compiles the inspection management and defects summary report. The findings can be used to gather statistics about the work product, project and progress.
Moderator:
The moderator runs the inspection and enforces the protocols of the
meeting. The moderator's job is mainly one of controlling interactions and
keeping the group focused on the purpose of the meeting—to discover
(but not fix) deficiencies. The moderator also ensures that the group does
not go off on tangents and sticks to a schedule.
Reader:
The reader calls attention to each part of the document in turn, and thus
paces the inspection.
Recorder:
Whenever any problem is uncovered in the document being inspected, the
recorder describes the defect in writing. After the inspection, the recorder
and the moderator prepare the inspection report.
Inspectors:
Inspectors raise questions, suggest problems, and criticize the document.
Inspectors are not supposed to “attack” the author or the document but
should be objective and constructive. Everyone except the author can act
as an inspector.
E. Audit:
An audit is an independent examination of a software product or processes to assess compliance with specifications, standards, contractual agreements, or other criteria. A software development and testing process audit is an examination of the product to ensure that the product, as well as the process used to build it, meets predefined criteria. The outcome of the audit is a report comprising the following.
Observations:
Observations are findings which may turn into future non-conformances if proper care is not taken. They may be termed conformances on the verge of breaking.
Achievement or Good Practices:
These are good achievements by areas under audit which can be used by
others.
In software development and testing phases, various audits are conducted, as mentioned below:
A. Kick Off Audit:
This audit covers the areas that ensure whether all the processes required
at the start of the project are covered or not. It may start from the proposal, contract, risk analysis, scope definition, team size and skill requirements, and authorities and responsibilities of various roles in the team. Kick-off audits
cover checking documentation and compliances required at the start of the
project.
B. Periodic Software Quality Assurance Audit:
A software quality assurance audit is commonly known as an 'SQA audit' or 'SQA review'. It is conducted for a product under development and the processes used, as defined by the quality assurance process definition for these work products. The auditor is expected to check whether the different work products meet the defined exit criteria or not, and whether the processes used for building these work products are correctly implemented or not.
C. Phase-End Audit:
Phase-end audit checks whether the phase defined in the development life
cycle achieves its expected outcome or not. It is also used as a gate to
decide whether the next phase can be started or not. These are also termed
'gate reviews'.
D. Pre Delivery Audit:
Pre Delivery audit checks whether all the requisite processes of delivery
are followed or not, and whether the work product meets the expected
delivery criteria or not. Only those work products which are successful in
pre delivery audits can be given to a customer.
E. Product Audit:
Product audits can be covered in the SQA audit. A product audit is done by executing sample test cases to find whether the product meets its defined exit criteria or not. Sometimes, this is also considered 'smoke testing'.
Percent Completion:
Percent completion review is a combination of periodic and phase-end
review where the project activities or product development activities are
assessed on the basis of percent completion.
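The percent-completion idea described above can be computed by weighting each activity by its planned effort. The sketch below is illustrative only; the activity names and figures are hypothetical:

```python
# Percent-completion sketch: each activity weighted by planned effort.
# Activity names and figures are hypothetical.
activities = [
    # (activity, planned effort in person-days, fraction complete)
    ("requirements", 10, 1.0),
    ("design",       15, 1.0),
    ("coding",       30, 0.5),
    ("testing",      20, 0.0),
]

planned = sum(effort for _, effort, _ in activities)
earned = sum(effort * done for _, effort, done in activities)
print(f"percent completion: {earned / planned:.0%}")  # prints: percent completion: 53%
```

A weighted measure of this kind prevents many small, finished activities from masking one large, unfinished one.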
And for organizations benefiting from the project, it makes sense to ensure that all desired benefits have been realized, and to understand what additional benefits can be achieved.
review work to ensure that the documentation is free of errors.
Prerequisites Testing:
Software installation has some prerequisites, such as the operating system, database, and reporting services, as the case may be. If prerequisites are not available, the installation may prompt for them, or the prerequisites may be installed during installation.
Updation Testing:
There must be a check whether a similar version of the application already exists on the system or there is a need for a new version. Sometimes it also prompts for repair or a new installation.
Un-Installation Testing:
It is done to check whether the uninstallation is clean. When an application is uninstalled, all the files installed must be removed from the disk.
○ Second is creating a test summary report & communicating it to stakeholders.
○ Next comes finalizing and archiving the test environment, the test
data, the test infrastructure, and other test ware for later reuse.
○ In addition to the above, lessons learned from the finished test activities are analysed to determine changes needed for future iterations, releases, and projects.
Cyclomatic Complexity:
Cyclomatic Complexity for a flow graph is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the Cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = E – N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph G is also defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.
Example: Consider the following flow graph
Region, R = 6
Number of Nodes = 13
Number of Edges = 17
Number of Predicate Nodes = 5
Cyclomatic Complexity, V(G):
V(G) = R = 6
Or
V(G) = E – N + 2 = 17 – 13 + 2 = 6
Or
V(G) = P + 1 = 5 + 1 = 6
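As a quick cross-check, all three formulas can be evaluated together. The sketch below simply re-computes the counts from the example; the flow graph itself is assumed, not derived:

```python
# Cyclomatic complexity computed three ways for the example flow graph.
# The counts below are taken from the worked example in the text.
regions = 6       # R: number of regions in the flow graph
edges = 17        # E: number of flow graph edges
nodes = 13        # N: number of flow graph nodes
predicates = 5    # P: number of predicate (decision) nodes

v_by_regions = regions                  # V(G) = R
v_by_edges_nodes = edges - nodes + 2    # V(G) = E - N + 2
v_by_predicates = predicates + 1        # V(G) = P + 1

# All three formulas must agree for a well-formed flow graph.
assert v_by_regions == v_by_edges_nodes == v_by_predicates
print(v_by_edges_nodes)  # 6
```

If the three values ever disagree, the edge, node, or predicate counts were taken incorrectly from the graph.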
7.10 VALIDATION
Validation is the process of checking whether the software product is up to the mark or, in other words, whether the product meets the high-level requirements.
It is the process of checking the validity of a product.
It also answers the question, are we building the right product?
It is a comparison of the actual and the expected product.
Validation is dynamic testing.
Validation is done during testing like feature testing, integration testing, system testing, load testing, compatibility testing, stress testing, etc.
Validation helps in building the right product as per the customer's requirement and helps in satisfying their needs.
Inputs:
There must be some entry criteria defined when inputs are entering the workbench. This definition should match the exit criteria of the earlier workbench.
Outputs:
Similarly, there must be some exit criteria from the workbench which
should match with input criteria for the next workbench. Outputs may
include validated work products, defects, and test logs.
Validation:
Validation process must describe step-by-step activities to be conducted
in a workbench. It must also describe the activities done while validating
the work product under testing.
Check Process:
Check process must describe how the validation process has been
checked. Test plan must define the objectives to be achieved during
validation and check processes must verify that the objectives have been
really achieved.
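The workbench described above, with its entry criteria, process, check, and exit criteria, can be modelled in miniature. This is a toy sketch, and the gating rules shown are hypothetical, not from the text:

```python
# A toy model of the validation workbench: inputs must satisfy entry
# criteria, and outputs must satisfy exit criteria before they move on
# to the next workbench (criteria shown are hypothetical).
def workbench(work_product, entry_ok, do_work, exit_ok):
    """Run one workbench: gate on entry, transform, gate on exit."""
    if not entry_ok(work_product):
        raise ValueError("entry criteria not met")
    result = do_work(work_product)
    if not exit_ok(result):
        raise ValueError("exit criteria not met: rework needed")
    return result

# Hypothetical example: code enters testing only if it was reviewed;
# it exits only if all of its test cases pass.
product = {"reviewed": True, "tests_passed": 0, "tests_total": 3}
tested = workbench(
    product,
    entry_ok=lambda p: p["reviewed"],
    do_work=lambda p: {**p, "tests_passed": p["tests_total"]},
    exit_ok=lambda p: p["tests_passed"] == p["tests_total"],
)
print(tested["tests_passed"])  # 3
```

The exit criteria of one workbench matching the entry criteria of the next is what chains the workbenches into a lifecycle.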
2) Integration testing
Integration means combining. For Example, in this testing phase,
different software modules are combined and tested as a group to
make sure that integrated system is ready for system testing.
Integration testing checks the data flow from one module to other modules. This kind of testing is performed by testers.
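As a minimal illustration of checking the data flow across a module boundary, the sketch below combines two hypothetical modules and tests the hand-off between them (the module names and rules are invented for the example):

```python
# Integration test sketch: two hypothetical modules combined, and the
# data flow between them checked.
def parse_order(raw):
    """Module A: parse a raw order line into a structured record."""
    item, qty = raw.split(",")
    return {"item": item.strip(), "qty": int(qty)}

def price_order(order, unit_price):
    """Module B: compute the total price from module A's output."""
    return order["qty"] * unit_price

def test_order_pipeline():
    # The integration test exercises the hand-off, not each unit alone:
    # module B must accept exactly the structure module A produces.
    order = parse_order("widget, 3")
    assert price_order(order, unit_price=2.5) == 7.5

test_order_pipeline()
print("integration test passed")
```

Each module may pass its unit tests in isolation and still fail here, which is exactly the class of defect integration testing targets.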
3) System testing:
System testing is performed on a complete, integrated system. It
allows checking the system's compliance as per the requirements. It
tests the overall interaction of components. It involves load,
performance, reliability and security testing.
System testing is most often the final test to verify that the system meets the specification. It evaluates both the functional and non-functional needs of the system.
4) Acceptance testing
Acceptance testing is a test conducted to determine if the requirements of a specification or contract are met at the time of its delivery.
Acceptance testing is basically done by the user or customer. However, other stakeholders can be involved in this process.
5) Interface Testing
Interface Testing is defined as a software testing type which verifies
whether the communication between two different software systems is
done correctly.
An interface is actually software that consists of sets of commands,
messages, and other attributes that enable communication between a
device and a user.
1. Requirement Coverage:
Requirements are defined in the requirement specification document. The traceability matrix starts with requirements and goes forward up to test results.
All requirements are not mandatory. They are put into different classes: 'must', 'should be' and 'could be'. The highest priority requirements are expressed as 'must', while some lower priority ones are expressed as 'should be' and 'could be'.
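A traceability matrix makes requirement coverage mechanically checkable. The sketch below is illustrative only; the requirement IDs, priorities, and test-case links are hypothetical:

```python
# A minimal requirement-coverage check over a traceability matrix.
# Requirement IDs, priorities, and test-case links are hypothetical.
requirements = {
    "REQ-1": {"priority": "must",   "tests": ["TC-1", "TC-2"]},
    "REQ-2": {"priority": "should", "tests": ["TC-3"]},
    "REQ-3": {"priority": "must",   "tests": []},   # untested gap!
    "REQ-4": {"priority": "could",  "tests": ["TC-4"]},
}

def coverage(reqs, priority=None):
    """Fraction of requirements (optionally of one priority class)
    traced to at least one test case."""
    selected = [r for r in reqs.values()
                if priority is None or r["priority"] == priority]
    covered = [r for r in selected if r["tests"]]
    return len(covered) / len(selected)

print(f"overall coverage: {coverage(requirements):.0%}")          # 75%
print(f"'must' coverage:  {coverage(requirements, 'must'):.0%}")  # 50%
```

Reporting coverage per priority class surfaces the most serious gaps: here an untested 'must' requirement drags that class down to 50% even though overall coverage looks healthy.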
2. Functionality Coverage:
Sometimes requirements are expressed in terms of functionality. Functionality with higher priority must be tested to a larger extent than functionality with lower priority.
3. Feature Coverage:
Features are groups of functionality doing the same or similar things. Feature coverage means covering at least one of the functionalities that represents a feature, even if there are multiple ways of doing it.
Features with higher priority are indicated with 'must' and lower ones as 'should be' or 'could be'.
7.14 ACCEPTANCE TESTING
Acceptance testing is generally done by the users and/or customers to
understand whether the software satisfies requirements or not, and to
ascertain whether it is fit for use. Users/customers execute test cases to show whether the acceptance criteria for the application have been met or not.
There are three levels of acceptance testing.
7.15.1 Defining the Processes for Verification and Validation:
An organisation must define the processes applied for verification and
validation activities during the development life cycle phase of the project.
The processes involved may be as follows.
testing, integration testing, interface testing, system testing, and acceptance testing.
Conceptualisation:
Conceptualisation is the first phase of developing a product or project for a customer. Conceptualisation means converting thoughts or concepts into reality. It can be done through a proof of concept or a prototyping model. During the proposal, the supplier may give an outline of the solution, and the customer may have to evaluate the feasibility of such a solution from a product perspective. The supplier must understand what is expected by the customer, and whether it can be provided or not by the organisation. Here, verification and validation determine the feasibility of the project based on technical feasibility, economic feasibility and skill availability.
Requirement Analysis:
The requirement analysis phase starts from conceptualisation. Requirement understanding is done through communication with the customer. It can be done by various approaches such as joint application development and customer surveys. In requirement analysis, verification and validation check the feasibility of requirements and give inputs to the design approach. They can help in finding the gaps in proposals and requirements so that further clarifications can be obtained.
Design:
Requirements are implemented through design. Verification and validation of design ensure that the design is complete in all respects and matches the requirements, and that all requirements are converted into design. It involves design through data flow diagrams and prototyping.
Coding:
It involves code review and testing of the units, as part of verification and validation, to make sure that requirements and design are correctly implemented. Units must be traceable to requirements through design.
Integration:
Integration validation and verification show that the individually tested
units work correctly when brought together to form a module or sub
module. It involves testing of modules/sub modules to ensure proper
working with respect to requirements and designs. Integration with other
hardware/ software is defined in architectural design. Interface testing
must satisfy architectural design.
Testing:
Test artifacts such as test plan, test scenario, test bed, test cases, and test
data must be subjected to verification and validation activities. Test plan
must be complete, covering all aspects of testing expected by the
customer. Test cases must cover the application completely. Test cases for
acceptance testing must be validated by customer/user/business analyst.
Installation:
The application must be tested for installation. If installation is required by the customer, the documentation giving instructions for installation must be complete and sufficient to install the application. Verification and validation must ensure that there is adequate support and help available to common users for installation of the application.
Documentation:
There are many documents given along with a software product, such as the installation guide and user manual. They must be complete, detailed and informative, so that they can be referred to by a common user. The list of
documents must be mentioned in contract or statement of work so that
compliance can be checked.
7.17 LET US SUM UP
This chapter provides a clear exposition of verification and validation
activities. It covers methods of verification such as reviews, walkthrough
and inspection and various stages of verification such as requirement
verification, design verification, and test artifacts verification. Advantages
of different levels of reviews such as self review, peer review,
walkthrough, and inspection have been dealt with in detail. It also presents audits as an independent means of verification. In-process reviews and post-implementation reviews have also been evaluated. The second half of the chapter focuses
on validation techniques. The chapter concludes with a clear overview of
acceptance testing.
7.18 EXERCISES
1. Discuss the advantages and disadvantages of verification
2. Describe different types of verification on the basis of parties
involved in verification.
3. Explain self review
4. Describe the advantages and disadvantages of peer review.
5. Describe about walkthrough review
6. Explain formal review(Inspection).
7. Describe the process of inspection
8. Describe the auditing process.
9. What are the different audits planned during the development life
cycle?
10. Explain various types of in-process review
11. Explain the concept of post-mortem review. Why is it essential for a learning organisation?
12. Explain test readiness review.
13. What are the concerns of verification?
14. Discuss the advantages of validation
15. How is coverage measured in case of verification?
16. How is coverage measured in case of validation?
17. Discuss the different levels of validation.
18. Explain different levels of acceptance testing.
7.19 REFERENCES
● [Andriole86] Andriole, Stephen J., editor, Software Validation,
Verification, Testing, and Documentation, Princeton, NJ: Petrocelli
Books, 1986.
● http://tryqa.com/what-is-verification-in-software-testing-or-what-is-
software-verification/
● https://www.tutorialspoint.com/software_testing_dictionary/audit.htm
● https://www.tutorialspoint.com/verification-and-validation-with-
example
● https://qatestlab.com/resources/knowledge-center/alpha-beta-
gamma/#:~:text=Gamma%20testing%20is%20the%20final,any%20in
-house%20QA%20activities.
● LIMAYE, M.G. (2009) Software Testing: Principles, Techniques and Tools. Tata McGraw-Hill Publishing Ltd.
*****
8
V-TEST MODEL
Unit Structure
8.0 Objectives
8.1 Introduction
8.2 V-model for software
8.3 Testing during Proposal stage
8.4 Testing during requirement stage
8.5 Testing during test planning phase
8.6 Testing during design phase
8.7 Testing during coding
8.8 VV Model
8.9 Critical Roles and Responsibilities.
8.10 Let us Sum Up
8.11 Exercises
8.12 References
8.0 OBJECTIVES
After going through this chapter, you will be able to describe:
V-Model (Validation Model)
VV Model (Verification and Validation Model)
Roles and Responsibilities of three critical entities in software
development.
8.1 INTRODUCTION
Testing is a lifecycle activity. It starts when the proposal of software
development is made to a prospect, and ends only when the application is
finally delivered and accepted by the customer/end user. For product
development, we may define each iteration of development as a separate
project, and several projects may come together to make a complete
product. For a customer, it starts from a problem statement or
conceptualization of a new product, and ends with satisfactory product
receipt, acceptance and usage. For every development activity, there is a
testing activity associated with it, so that the phase achieves its milestone
deliverable with minimum problems (theoretically, no problems). This is
also termed 'certification approach of testing' or 'gate approach of testing'.
8.2 V MODEL FOR SOFTWARE
Validation model describes the validation activities associated with
different phases of software development.
Documents and work products produced during a life cycle phase must be
analysed beforehand to understand their coverage, relationship with
different entities, structure in overall development and traceability. An
organization must have definitions of processes, guidelines and standards which can be used for making such documents and artifacts.
Functional test scenarios and test cases are generally developed based
upon the functional requirements of the software. Functional requirements
refer to the operational requirements of an application. Structural test
scenarios and test cases are developed from the structures defined in the
design specifications. They must correspond to the structures of the work
product produced during development. Requirements/ designs are
verified and validated by preparing use case diagrams, data flow diagrams,
and prototypes with the question 'what happens when' to identify the
completeness of design and requirements. For validation testing scenarios,
one must use techniques like boundary value analysis, equivalence
partitioning, error guessing.
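As an illustration of boundary value analysis, the sketch below derives the classic five test inputs (min, min+1, nominal, max−1, max) for a numeric field; the 18–60 range is a hypothetical example, not from the text:

```python
# Sketch: deriving boundary-value-analysis inputs for a numeric field
# using the classic min / min+1 / nominal / max-1 / max selection.
def boundary_values(lo, hi):
    """Return the standard five boundary value analysis test inputs."""
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

# Hypothetical example: an age field specified to accept values 18..60.
print(boundary_values(18, 60))  # [18, 19, 39, 59, 60]
```

Values just outside the range (17 and 61 here) would additionally be used as invalid-class inputs under equivalence partitioning.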
Testing in low-level design and coding phases must confirm that first
phase output of capturing the requirements and developing architecture
matches with the inputs and outputs required by the low-level design
phase. High-level design and low-level design must ensure that
requirements are completely covered so that software developed covers all
requirements. Verification and validation of design must ensure that the
requirement verification and validation is proper and can be handled
through the structures created for the purpose. Similarly, verification and
validation of coding must ensure that all aspects of designs are covered by the code developed.
8.2.7 Refine And Redefine Test Sets Generated Earlier:
Test artifacts such as test scenarios, test cases, and test data may be
generated in each phase of software development life cycle from
requirements, design, and coding, as the case may be. The test artifacts so
generated must be reviewed and updated continuously as per the change in
requirement, design etc.
situation and perform risk-benefit analysis of lesser coverage with the help of the customer, and take corrective measures if required. Practically, it may not be possible, or it may not be required, to cover all requirements.
Module Interface Mismatch:
The data input/output from one module to another must be checked for
consistency with design. Parameter passing is a major area of defects in
software, where communication in two modules is affected. Interface test
cases must test the scenario where one system is communicating with
another system.
Erroneous Input/Output:
If the system needs to be protected from erroneous operations such as
huge/invalid input/output, the design must describe how it will handle the
situation.
Coding Optimization:
Coding standards must also talk about optimization of code: how nesting must be done, and how variables and functions must be declared.
Code Interpreting Design:
Coding must interpret designs correctly. Coding files and what they are
supposed to implement must be defined in low-level design.
Unit Testing:
Unit testing must be done by the developers to ensure that written code is
working as expected. Sometimes, unit testing is done by a peer of the author (of the code) to maintain independence of testing with respect to
development. Unit test case logs must be prepared and available for peer
review, SQA review as well as customer audits.
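A unit test log of this kind can be as simple as a table of inputs and expected outputs checked by assertions. The function and its discount rule below are purely illustrative:

```python
# A unit test "log" in miniature: inputs, expected results, and actual
# results checked by assertions. The discount rule is invented for the
# example and is not from the text.
def discount(total):
    """Unit under test: apply a 10% discount on totals of 100 or more."""
    return round(total * 0.9, 2) if total >= 100 else total

cases = [(99.0, 99.0), (100.0, 90.0), (150.0, 135.0)]  # (input, expected)
for given, expected in cases:
    actual = discount(given)
    assert actual == expected, f"discount({given}) = {actual}, want {expected}"
print("all unit tests passed")
```

Keeping the cases as data makes the log easy to review by a peer, by SQA, or during a customer audit, and easy to extend when requirements change.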
8.8 VV MODEL
Quality checking involves verification as well as validation activities. The 'VV model' considers all the activities related to verification as well as validation. It is also termed the 'Verification and Validation Model' or 'Quality Model'. The 'VV model' talks about verification and validation activities associated with software development during the entire life cycle.
Requirements:
Requirements are obtained from the customer through means such as customer surveys and prototyping. The intention is to find whether any gap exists between user requirements and the requirement definition.
1. Requirement Verification:
a. Verification tests check to ensure the program is built according to the
stated requirements. The verification process includes activities such
as reviewing the code and doing walkthroughs and inspections.
b. Missing requirements or invalid requirements can be discovered
during this phase, which can minimize the risk of rework and the cost
associated with overruns. It is far more effective to fix a small bug upfront than in the future, when hundreds of lines of code must be identified and corrected.
2. Requirement Validation:
a. It ensures that the requirements have achieved the business objectives,
meet the needs of any relevant stakeholders and are clearly
understood by developers. Validation is a critical step to finding
missing requirements and ensuring that requirements have a variety of
important characteristics.
b. Software validation addresses the following:
I. Correctly outlines the end user’s needs.
II. Has only one exact meaning.
III. Can be modified as necessary.
Design:
Design may include high-level design or architectural design, and low-
level design or detail design. Designs are created by architects (high-level
designs)/designers (low-level designs) as the case may be.
a. Design Verification:
Verification of design may be a walkthrough of the design document by design experts, team members and stakeholders of the project. The project team, along with the architect/designer, may walk through the design to check its completeness and give comments, if any. Traceability of design with requirements must be established in the requirement traceability matrix.
b. Design Validation:
Validation of design can happen at two stages.
i. The first stage is at the time of creation of the data flow diagram. If the flow of data is complete, the design is complete. Any problem in the flow of data indicates a gap in the design. This is termed a flow anomaly.
ii. The second stage of validation is at integration or interface testing. Integration testing is an activity to bring the units together and test them as a module. Interface testing is an activity of testing the connectivity and communication of the application with the outside world, for example, the module's connection with the database, browser, and operating system.
Coding:
Coding is an activity of writing individual units as defined in low level
design.
a. Code Verification:
As coding is done, it undergoes a code review (generally a peer review). The peer helps in identifying errors with respect to coding standards, indenting standards, commenting standards and variable declaration issues. A checklist approach is used in code review.
b. Code Validation:
Validation of coding happens through unit testing, where individual units are separately tested. The developer may have to write special programs (such as stubs and drivers) so that units can be tested. The executable is created by combining stubs, drivers and the units under testing.
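The stub-and-driver idea can be sketched as follows; the price-lookup collaborator and the amounts are hypothetical, the point being that the stub substitutes for an unfinished dependency while the driver exercises the unit:

```python
# Stub-and-driver sketch for unit testing (names are hypothetical).
# The unit under test normally calls a database module that is not
# built yet, so a stub stands in for it, and a driver invokes the
# unit and checks the result.
def fetch_price_stub(item):
    """Stub: returns canned data in place of the real database call."""
    return {"widget": 2.5}.get(item, 0.0)

def order_total(item, qty, fetch_price):
    """Unit under test: depends on a price-lookup collaborator."""
    return fetch_price(item) * qty

def driver():
    """Driver: exercises the unit with the stub and checks the verdict."""
    result = order_total("widget", 4, fetch_price=fetch_price_stub)
    assert result == 10.0
    return result

print(driver())  # 10.0
```

When the real database module becomes available, the stub is replaced and the same driver re-run, which is exactly how unit tests feed into integration testing.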
Development:
Development team may have various roles under them. They may be
performing various activities as per the role and responsibilities at various
stages. Some of the activities related to the development group may be as follows:
Project planning activities include requirement elicitation, project planning and scheduling. Project planning is the foundation of success.
Resources may include identification and organization of adequate
number of people, machine, hardware, software and tools required by
the project. It may also include assessment of skills required by plans.
Interacting with customers and stakeholders as per project
requirement.
Defining policies and procedures for creating development work and performing verification and validation activities so that quality is built in properly, and delivering the product to the test team/customer, as the case may be.
Testing:
Testing team may include test manager, test leads, and testers as per scope
of testing, size of project, and type of customer. Generally, it is expected
that a test team would have independence of working and they do not have
to report to the development team.
Roles and responsibilities of test team may include the following:
Test planning including test strategy definition, and test case writing.
Test planning may include estimation of efforts and resources
required for testing.
Resourcing may include identification and organisation of adequate
numbers of people, machines, hardware, software, and tools as
required by the project. It may also involve an assessment of skills
required by the project, skills already available with them and any
training needs.
Defining policies and procedures for creating and executing tests as
per test strategy and test plan. Testers may have to take part in
verification and validation activities related to test artifacts.
Doing acceptance testing related activities such as training and
mentoring to users from customer side before/during acceptance
testing activities.
Customer:
Customer may be the final user group, or people who are actually
sponsoring the project. Customers can be internal to an organisation or
external to the organisation. Roles and responsibilities of a customer may
include the following.
Specifying requirements and signing off requirement statements and
designs as per contract. It may also include solving any queries or
issues raised by the development/test team.
Participating in acceptance testing as per roles and responsibilities
defined in the acceptance test plan. A customer may be responsible
for alpha, beta and gamma testing, as the case may be.
8.11 EXERCISES
1. What are the characteristics of good requirements?
2. Describe V & V activities during the proposal.
3. Describe V & V activities during requirement generation.
4. Describe V & V activities for test artifacts.
5. Describe V & V activities during designs.
6. Describe V & V activities during coding.
7. Explain the V-V model.
8. What are the roles and responsibilities in software verification and
validation?
8.12 REFERENCES
● [Andriole86] Andriole, Stephen J., editor, Software Validation,
Verification, Testing, and Documentation, Princeton, NJ: Petrocelli
Books, 1986.
● https://www.softwaretestinghelp.com/what-is-stlc-v-model/
● Limaye, M. G. (2009) Software Testing: Principles, Techniques and
Tools. Tata McGraw-Hill Publishing Ltd.
*****
UNIT - V
9
LEVELS OF TESTING
Unit Structure
9.0 Objectives
9.1 Introduction
9.2 Proposal Testing
9.3 Requirement Testing
9.4 Design Testing
9.5 Code Review
9.6 Unit Testing
9.6.1 Difference between debugging and unit testing
9.7 Module Testing
9.8 Integration Testing
9.8.1 Bottom-Up Testing
9.8.2 Top-Down Testing
9.8.3 Modified Top-Down Approach
9.9 Big-Bang Testing
9.10 Sandwich Testing
9.11 Critical Path First
9.12 Subsystem Testing
9.13 System Testing
9.14 Testing Stages
9.15 Let us Sum Up
9.16 Exercises
9.0 OBJECTIVES
After going through this chapter, you will be able to:
• describe various verification and validation activities associated with
various stages of SDLC starting from proposal.
• discuss various approaches of integration testing.
9.1 INTRODUCTION
Testing is a life-cycle activity which begins with a proposal for a
software/system application and ends when the product is finally delivered
to the customer. Different agencies/stakeholders are involved in
conducting the specialised testing required for a specific application. The
definition of these stakeholders/agencies depends on the type of testing
involved and the level of
testing to be done in various stages of software development life cycle.
Table 9.1 shows in brief the level of testing performed by different
agencies. It is an indicative list which may differ from customer to
customer, product to product and organisation to organisation.
Commercial Review:
A proposal undergoes financial feasibility and other types of feasibilities
involved with respect to the business. Commercial review may stress on
the gross margins of the project and fund flow in terms of money going
out and coming into the development organisation. Generally, the total
payment is split into instalments depending upon completion of certain
phases of development activity. As different phases are completed, money
is realised at those points.
Several iterations of proposals and scope changes may happen before all
the parties involved in Request for Quotation (RFQ) and proposal agree on
some terms and conditions for doing a project.
Validation of Proposal:
A proposal sometimes involves development of prototypes or proofs of
concept to explain the proposed approach to the customer's problem. The
approach to handling the customer's problem must be defined in the model
or prototype.
In case of product organisation, the product development group along with
marketing and sales functions decides about the new release of a product.
Complete:
Requirement statement must be complete and must talk about all business
aspects of the application under development. Any assumption must be
documented and approved by the customer.
Measurable:
Requirements must be measurable in numeric terms as far as possible.
Such definition helps to understand whether the requirement has been met
or not while testing the application. Qualitative terms like ‘good’,
‘sufficient’, and ‘high’ must be avoided as different people may attach
different meanings to these words.
Testable:
The requirement must help in creating use cases/scenarios which can be
used for testing the application. Any requirement which cannot be tested
must be drilled down further to understand what is to be achieved by
implementing it.
Not Conflicting:
The requirements must not conflict with each other. There is a possibility
of trade-off between quality factors which needs to be agreed by the
customer. Conflicting requirements indicate problem in requirement
collection process.
Identifiable:
The requirements must be distinctively identifiable. They must have
numbers or some other way which can help in creating requirement
traceability matrix. Indexing requirements is very important.
Validation of Requirements:
In requirement testing, clear and complete use cases are developed using
requirement statement. Any assumption made indicates lacunae in
requirement statement and it should be updated by verifying the
assumptions made with the customer.
Complete:
A design must be complete in all respects. It must define the parameters to
be passed/received, formats of data handled, etc. Once the design is
finalised, programmers should be able to implement the design
mechanically, and the system should work properly.
Traceable:
A design must be traceable to requirements. The second column of
requirement traceability matrix is a design column. The project manager
must check if there is any requirement which does not have corresponding
design or vice versa. It indicates a defect if traceability cannot be
established completely.
Implementable:
A design must be made in such a way that it can be implemented easily
with selected technology and system. It must guide the developers in
coding and compiling the code. The design must include interface
requirements where different components are communicating with each
other and with other systems.
Testable:
A good design must help testers in creating structural test cases.
Validation of Design:
Design testing includes creation of data flow diagrams, activity diagrams,
information flow diagram, and state transition diagram to show that
information can flow through the system completely. Flow of data and
information in the system must complete the loop. Another way of testing
design is creating prototypes from design.
Clarity:
Code must be written correctly as per coding standards and syntax
requirements for the given platform. It must follow the guidelines and
standards defined by the organisation/project/customer, as the case may
be. It must declare variables, functions, and loops very clearly with proper
comments. Code commenting must include which design part has been
implemented, author, date, revision number, etc. so that it can be traced to
requirements and design.
Complete:
Code, class, procedure, and method must be complete in all respects. One
class doing multiple things, or multiple objects created for the same
purpose, indicates a problem in design.
Traceable:
Code must be traceable with design components. Code files having no
traceability to design can be considered as redundant code which will
never get executed even if design is correct.
Maintainable:
Code must be maintainable. Any developer with basic knowledge and
training about coding must be able to handle the code in future while
maintaining or bug fixing it.
Individual components and units are tested to ensure that they work
correctly individually, as defined in the design.
Unit test cases must be derived from use cases/design component used
at lowest levels of design.
9.6.1 Difference between Debugging and Unit Testing:
Many developers consider unit testing and debugging to be the same
thing. In reality, they are two different activities. The difference between
them can be illustrated as shown in Table 9.2
Units at the lowest level are tested using stubs/drivers. Stubs and
drivers are designed for a special purpose. They must be tested before
using them for unit testing.
Once the units are tested and found to be working, they are combined
to form modules. Modules may need only drivers, as the low-level
units which are already tested and found to be working may act as
stubs.
If required, drivers and stubs are designed for testing as one goes
upward. Developers may write the stubs and drivers as the
input/output parameters must be known while designing.
Stubs/Drivers:
Stub:
A stub is a piece of code emulating a called function. In the absence of the
called function, the stub takes its place for testing purposes.
Driver:
A driver is a piece of code emulating a calling function. In the absence of
the actual function that calls the code under testing, the driver works as the
calling function.
Stubs/drivers must be very simple to develop and use. They must not
introduce any defect in the application. They should be eliminated
before delivering code to customer.
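The stub/driver idea can be sketched in a few lines. Here a hypothetical discount calculator is unit-tested before its price-lookup service or its real callers exist; `fetch_price_stub` and `run_driver` are illustrative names, not from the text.

```python
# Unit under test: applies a 10% discount to a price fetched by a
# collaborating function that does not exist yet.
def apply_discount(item_id, fetch_price):
    price = fetch_price(item_id)
    return round(price * 0.9, 2)

def fetch_price_stub(item_id):
    # Stub: emulates the called function with canned data.
    return {"A1": 100.0, "B2": 50.0}[item_id]

def run_driver():
    # Driver: emulates the (missing) calling function and checks results.
    results = {item: apply_discount(item, fetch_price_stub)
               for item in ("A1", "B2")}
    assert results == {"A1": 90.0, "B2": 45.0}
    return results

run_driver()  # the unit works in isolation
```

Both pieces are throwaway test scaffolding, which is why the text insists they stay simple and are removed before delivery.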
Disadvantages of Bottom-Up Approach:
It is a slower process.
Advantages of Top-Down Approach:
Generally, this approach does not need drivers, as the top layers can
work as drivers for the bottom layers.
Disadvantages of Top-Down Approach:
This approach can create a false belief among customers that the
software is ready, as a prototype of the software is shown to the
customer even before the design phase is complete.
9.8.3 Modified Top-Down Approach:
Important units are tested individually, then combined to form
modules, and finally the modules are combined and tested to form the
system. This is done as unit testing followed by integration testing.
Systems tested using this approach are more efficient as compared to
the top-down and bottom-up approaches.
Advantages of Big-Bang Approach:
Since testing is done only at the end, the time required for writing test
cases and defining test data at unit level, integration level, etc. may be
saved.
It is a fast approach.
9.10 SANDWICH TESTING
In sandwich testing, a big-bang approach is followed for the middle
layer. From this layer, bottom-up testing moves upward and top-down
testing moves downward.
Disadvantages of Sandwich Testing:
Higher testing costs, as it requires more resources and big teams for
performing top-down and bottom-up testing simultaneously.
It is not suitable for smaller projects.
Skilled testers with varying skill sets are required, as different domains
representing different functional areas have to be handled.
9.14 TESTING STAGES
First, unit testing is performed to identify and eliminate any defects at unit
level. Second, integration testing or module testing is done on integrated
units. After module testing, subsystem testing is executed. Then system
testing is performed after the system completes all levels of integration. In
some cases, interface testing is done instead of system testing to fix any
issues with intersystem communication. Lastly, acceptance testing is done
to check whether the acceptance criteria are fulfilled or not. Testing stages
are shown in Fig. 9.6.
9.15 SUMMARY
This chapter describes verification and validation activities related to
various stages of software development phases. It discusses merits and
demerits of various integration testing approaches.
9.16 EXERCISES
1. Describe proposal review process.
2. Describe requirement verification and validation process.
3. Describe design verification and validation process.
4. Describe code review and unit testing process.
5. Describe difference between debugging and unit testing.
6. Differentiate between integration testing and interface testing.
7. Describe bottom-up approach for integration.
8. Describe top-down approach for integration.
9. Describe modified top-down approach for integration.
*****
10
SPECIAL TESTS
Unit Structure
10.0 Objectives
10.1 Introduction
10.2 GUI Testing
10.3 Compatibility Testing
10.3.1 Multiplatform Testing
10.4 Security Testing
10.5 Performance Testing
10.6 Volume Testing
10.7 Stress Testing
10.8 Recovery Testing
10.8.1 System Recovery
10.8.2 Machine Recovery
10.9 Installation Testing
10.10 Requirement Testing
10.11 Regression Testing
10.12 Error Handling Testing
10.13 Manual Support Testing
10.14 Intersystem Testing
10.15 Control Testing
10.16 Smoke Testing
10.17 Adhoc Testing
10.18 Parallel Testing
10.19 Execution Testing
10.20 Operations Testing
10.21 Compliance Testing
10.22 Usability Testing
10.23 Decision Table Testing
10.24 Documentation Testing
10.25 Training Testing
10.26 Rapid Testing
10.27 Control Flow Graph
10.27.1 Program Dependence Graph
10.27.2 Category Partition Method
10.27.3 Test Generation from Predicate
10.27.4 Fault Model for Predicate Testing
10.27.5 Difference between Control flow and Data flow
10.28 Generating Tests on the Basis of Combinatorial Designs
10.28.1 Combinatorial Test Design Process
10.28.2 Generating Fault Model from Combinatorial Designs
10.29 State Graph
10.29.1 Characteristics of Good State Graphs
10.29.2 Number of States
10.29.3 Matrix of Graphs
10.30 Risk Associated with New Technologies
10.31 Process Maturity Level of Technology
10.32 Testing Adequacy of Control in New Technology Usage
10.33 Object-Oriented Application Testing
10.34 Testing of Internal Controls
10.34.1 Testing of Transaction Processing Control
10.34.2 Testing Security Control
10.35 'COTS' Testing
10.36 Client-Server Testing
10.37 Web Application Testing
10.38 Mobile Application Testing
10.39 eBusiness/eCommerce Testing
10.40 Agile Development Testing
10.41 Data Warehousing Testing
10.42 Let us Sum Up
10.43 Exercises
10.0 OBJECTIVES
After going through this chapter, you will be able to:
10.1 INTRODUCTION
Testing is not limited to functionality testing. Depending on the
application and requirement specifications, some specialized tests may be
conducted. Special testing may need special testing skills, tools, and
techniques.
Some systems are intended for a specific purpose and need to fulfill
special characteristics mentioned in the requirement statement. Testing of
such systems is included in this type of testing. Examples include
eBusiness systems, security systems, antivirus applications, administrative
systems, operating systems, and databases. Such systems are created for
specific users and may or may not cater to all in general.
The platform on which software development is done is referred to as
the 'base platform', and the software is expected to behave in a similar
manner on a range of platforms. This range of platforms must be
clearly defined in the requirement statement, and they should be
already existing platforms so that testing can be performed.
When software is working on different platforms, some performance
variations are allowed, and the allowable range of performance
variations must be mentioned in the requirement statement.
Assess the test laboratory configuration with respect to the base
platform and target platform configurations mentioned in the
requirement statement. It must include the service packs or hot fixes to
be included/excluded in testing and any configuration of the system
which can impact the application.
Types of Compatibility:
Friend Compatibility:
There is no change in application behavior between the parent platform
and the new platform. The application utilizes all facilities and services
available on the new platform efficiently.
Neutral Compatibility:
The application behavior on the new platform is similar to its working on
the parent platform. The only difference is that the application does not
use the facilities provided by the new platform at all. Instead, it uses its
own utilities and services, thus overburdening the system.
Enemy Compatibility:
The application does not perform as expected, or does not perform at all,
on the new platform.
Examples of Volume Testing Process:
rejects, transactions are permanently deleted and the user begins from the
initial point. In this method, memory space is required to store
transactions, and the space requirement increases with an increase in
users. This is an advanced way of handling disaster.
Cold Recovery:
Data is backed up at a defined frequency, such as once a week, on an
external device like a tape/CD. When disaster occurs, recovery is done
by loading this backup data on a new machine, and this machine
replaces the original machine. However, the last updated data is
permanently lost, as backup is done at specified intervals only.
Warm Recovery:
Backup is taken from one machine to another machine directly in this
recovery.
Frequent and automatic backup is possible. Data is already on the
hard drive of both machines, if the backup is successful.
In case of disaster, the backup machine is introduced into the system
temporarily.
The backup machine configuration may differ from the original
machine. Upgradation of the backup machine is required; otherwise,
the user faces problems with an inferior backup machine.
Maintenance cost of two machines is involved.
Data may be lost in case data is updated after last backup.
Hot Recovery:
In this recovery scenario, both, original and backup machines are
present in the system.
One machine is primary and another is backup machine. Both have
similar configuration and capabilities.
In case of disaster, control is shifted from primary machine to backup
machine.
Backup frequency is defined as per the backup plan.
Uninstallation Testing:
Uninstallation testing is used when the requirement specification mentions
the need for uninstallation. In this testing, all components and files of the
application must be cleaned such that no trace of the application is left.
The process is as follows: an image of the hard disk is captured before
software installation, noting all the files existing before installation. Then,
the tester installs the application, uninstalls it, and again captures the hard
disk image. These two images are compared, and they should match
exactly to prove clean uninstallation.
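The image-comparison idea above can be sketched with file listings instead of full disk images. This is a minimal sketch; the helper names (`snapshot`, `leftover_files`) are illustrative, not part of any real tool.

```python
import os

def snapshot(root):
    """Record every file path under `root`, relative to it."""
    files = set()
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            files.add(os.path.relpath(os.path.join(dirpath, name), root))
    return files

def leftover_files(before, after):
    """Files present after uninstallation that were not there before."""
    return after - before
```

Usage mirrors the described process: take `snapshot(install_root)` before installation, install and then uninstall the application, snapshot again, and require `leftover_files(before, after)` to be empty to prove a clean uninstall.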
Upgradation Testing:
Software applications require upgradation to newer versions by installing
updates released by the product manufacturer from time to time. The
application must already exist on the disk before upgradation. During
upgradation, the installer must be able to identify the older version of the
application already available on the disk. If the existing application is
more up to date than the available upgrade, then upgradation should not
be done. Upgradation is similar to the installation process. It may be
automatic, semi-automatic, or manual. It may be done using a CD, floppy
disk, etc. The user must be given adequate support in case of any problem
in the upgradation process.
Unit level to ensure unit-level changes do not affect other units of the
system and the unit fulfils its intended purpose.
Module level to ensure module behavior is not affected by changing
of individual units.
System level to ensure system is performing correctly according to
requirements even after changes made in some system parts.
Regression testing is performed when there is high risk that changes made
in one part of software may affect unchanged components or system
adversely. It is done by rerunning previously conducted successful tests to
ensure that unchanged components function correctly after change is
incorporated. It also involves reviewing previously prepared documents to
ensure they contain correct information even after changes have been
made in the system.
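The rerun-previously-successful-tests idea can be sketched minimally. The unit (`area`) and its recorded test data are illustrative; in practice the suite would be the project's accumulated passing tests.

```python
def area(w, h):
    """Unit after a change has been made elsewhere in the system."""
    return w * h

# Regression suite: previously successful tests, rerun verbatim
# after every change to confirm unchanged behaviour.
regression_suite = [
    ((2, 3), 6),
    ((5, 5), 25),
    ((0, 7), 0),
]

for (w, h), expected in regression_suite:
    assert area(w, h) == expected, f"regression at area({w}, {h})"
```

The point is that the tests themselves are not rewritten: a failure signals that a change in one part of the software has adversely affected an unchanged component.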
between two or more systems to ensure they work correctly together and
support information transfer amongst them.
It determines whether:
Parameters and data are correctly passed between systems.
Documentation is complete and accurate, with expected inputs and
outputs from the system.
Intersystem testing is conducted each time there is a change in
parameters, the application, or other external systems.
Manual verification of documentation is done to understand the
relationship between the different systems.
There is consistency or inconsistency between the two systems.
Consistency in terms of user interaction, user capabilities, transaction
processing and controls designed for validation is also evaluated.
Outputs of both systems are compared by providing same inputs to
both.
Both systems are used in parallel for a certain time period for thorough
comparison.
Security, productivity, and effectiveness of the new system must be
comparable with the old system. If there is any lacuna in the new
system with respect to the old system, the new system may get rejected.
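Feeding the same inputs to both systems and comparing outputs, as described above, can be sketched as follows. The `old_system`/`new_system` callables and the tax routines are hypothetical stand-ins for the two implementations being run in parallel.

```python
def compare_in_parallel(old_system, new_system, inputs):
    """Run identical inputs through both systems; collect mismatches."""
    mismatches = []
    for item in inputs:
        old_out, new_out = old_system(item), new_system(item)
        if old_out != new_out:
            mismatches.append((item, old_out, new_out))
    return mismatches

# Illustrative pair: a legacy tax routine and its rewrite.
legacy = lambda amount: round(amount * 0.18, 2)
rewrite = lambda amount: round(amount * 18 / 100, 2)

assert compare_in_parallel(legacy, rewrite, [100, 250.5, 0]) == []
```

An empty mismatch list over a representative period of real inputs is the evidence that the new system is comparable to the old one.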
Usability testing is done by,
10.23 DECISION TABLE TESTING
A decision table contains triggering conditions, often combinations of true
and false for all input conditions, and the resulting actions for each
combination of conditions. Each table column corresponds to a certain
business rule that defines a unique combination of conditions which
results in the execution of the actions associated with that rule. The
coverage standard commonly used with decision table testing is to have at
least one test per column, which typically involves covering all
combinations of triggering conditions. One may use equivalence
partitioning and boundary value analysis for testing such a system.
Table 10.2 shows a sample decision table.
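The one-test-per-column standard can be sketched directly in code. The loan-approval rule and its conditions below are illustrative, not taken from Table 10.2.

```python
def approve_loan(has_income, good_credit):
    # Hypothetical unit under test.
    return has_income and good_credit

# Each tuple is one decision-table column: a unique combination of the
# triggering conditions, paired with the expected resulting action.
decision_table = [
    # (has_income, good_credit) -> expected approval
    ((True,  True),  True),
    ((True,  False), False),
    ((False, True),  False),
    ((False, False), False),
]

# Coverage standard: at least one test per column.
for (income, credit), expected in decision_table:
    assert approve_loan(income, credit) == expected
```

With n boolean conditions there are 2^n columns, which is why equivalence partitioning is often applied first to keep the table manageable.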
Have people with strong language skills review all the documentation
for professionalism and readability.
Try out all alternative ways mentioned in the document to accomplish
a task. In many cases, the primary way works, but the alternatives do
not work as expected.
Documentation should strictly be aligned with company policies and
these policies and standards must be reviewed and approved by
appropriate authorities in the organization.
Ensure the accuracy of any manual forms, checklists, and templates.
Dominators:
There exists a set of code in the program which will always be executed
whenever a path is selected. This is termed a 'dominator'. Such code is
executed independently of any decision or branch.
Post-Dominator:
When the end of the application is reached from any point, there exists a
set of code that is always encountered; this is termed a 'post-dominator'.
There may be several possible paths depending upon the number of
decisions.
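Dominators can be computed on a control-flow graph with the classic iterative data-flow algorithm. This is a sketch on a small, made-up diamond-shaped CFG; node names are illustrative.

```python
def dominators(cfg, entry):
    """cfg maps each node to its successors; returns node -> set of dominators."""
    nodes = set(cfg)
    preds = {n: {p for p in nodes if n in cfg[p]} for n in nodes}
    dom = {n: set(nodes) for n in nodes}   # start from "everything dominates"
    dom[entry] = {entry}
    changed = True
    while changed:                          # iterate to a fixed point
        changed = False
        for n in nodes - {entry}:
            if preds[n]:
                new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            else:
                new = {n}
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# Diamond CFG: entry branches to a or b, both rejoin at exit.
cfg = {"entry": ["a", "b"], "a": ["exit"], "b": ["exit"], "exit": []}
dom = dominators(cfg, "entry")
assert dom["exit"] == {"entry", "exit"}  # entry dominates exit; a and b do not
```

The branch nodes `a` and `b` do not dominate `exit` because neither lies on every path, which matches the definition above: dominator code is executed regardless of which branch is taken.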
Analyze Requirements:
Requirements are categorized into functions, user interface, and
performance requirements. They are placed in different test cases to be
tested independently.
Identify Categories:
Inputs producing specific categories of output are analyzed. Decision
tables can be used to identify such categories.
Partition Categories:
Respective input-output pairs are placed into different partitions.
Identification of Constraints:
There may be some input-output categories which cannot exist together.
They must be identified and excluded from testing.
Evaluate the Output:
The output is verified by comparing it with the expected output.
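The partition-and-constrain steps above can be sketched as follows. The categories, their values, and the constraint are all illustrative; a real application would derive them from the requirement statement.

```python
from itertools import product

# Categories identified from the (hypothetical) requirements,
# each partitioned into its possible choices.
categories = {
    "user_type": ["guest", "member"],
    "payment":   ["cash", "credit"],
}

def violates_constraint(frame):
    # Constraint: these choices cannot exist together (illustrative rule:
    # guests cannot pay by credit), so the frame is excluded from testing.
    return frame["user_type"] == "guest" and frame["payment"] == "credit"

# Test frames: every combination of choices, minus constrained ones.
frames = [
    dict(zip(categories, choice))
    for choice in product(*categories.values())
    if not violates_constraint(dict(zip(categories, choice)))
]

assert len(frames) == 3  # 2 x 2 combinations minus the one excluded frame
```

Each surviving frame becomes one test case whose actual output is then compared with the expected output, completing the evaluation step.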
10.29 STATE GRAPH
State graph and state table are useful models for describing software
behavior under various input conditions. State testing approach is based
upon the finite state machine model for the structures and specifications of
an application under testing. In a state graph, states are represented by
nodes. The system changes its state when an input is provided; this is
called a 'state transition'.
When a state graph has many states and transitions, drawing and
understanding it becomes difficult. To remove this complexity, state tables
are used. State tables may be defined as follows:
Each row indicates a state of an application.
Each column indicates the input provided to an application.
The intersection of a row and a column indicates the next state of an
application.
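A state table maps naturally onto a nested dictionary: rows are states, columns are inputs, and each cell holds the next state. The door-with-lock model below is illustrative.

```python
# Row = current state, column = input event, cell = next state.
state_table = {
    "closed": {"open": "opened", "lock": "locked"},
    "opened": {"close": "closed"},
    "locked": {"unlock": "closed"},
}

def run(state, inputs):
    """Apply a sequence of inputs, one state transition per input."""
    for event in inputs:
        state = state_table[state][event]  # KeyError = no such transition
    return state

assert run("closed", ["open", "close", "lock", "unlock"]) == "closed"
```

State testing then amounts to choosing input sequences that exercise every cell of the table at least once.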
Impossible States:
States which appear to be impossible in reality due to some limitations
outside the application in a real-life scenario.
Equivalent States:
When transitions of two or multiple states result in the same final state,
they are called 'equivalent states'. Equivalent states can be identified as
follows:
The rows corresponding to two states are identical with respect to the
inputs/outputs/next state of an application.
There are two sets of rows which have identical state graphs.
Dead States:
A dead state is a state from which no reversal is possible. Such states
cannot be exited even if the user wishes to come out of them; there is no
possibility of coming back to the original state.
Transitive Relationship:
Consider a relation R between three nodes 'A', 'B', and 'C' such that
A-R-B and B-R-C imply A-R-C; then R is called a 'transitive
relationship'.
Reflexive Relationship:
Reflexive relationships are self-loops.
Symmetric Relationship:
Consider a relationship between nodes 'A' and 'B' where A-R-B indicates
that B-R-A also holds. This is considered a symmetric relationship or an
undirected relationship.
Equivalence Relationship:
If a relationship between two nodes is reflexive, transitive, and symmetric,
then it is called an 'equivalence relationship'.
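The three properties can be checked mechanically on a relation given as a set of (node, node) pairs. The node set and relation below are illustrative.

```python
def is_reflexive(nodes, rel):
    # Every node relates to itself (self-loop).
    return all((n, n) in rel for n in nodes)

def is_symmetric(rel):
    # A-R-B implies B-R-A.
    return all((b, a) in rel for (a, b) in rel)

def is_transitive(rel):
    # A-R-B and B-R-C imply A-R-C.
    return all((a, d) in rel
               for (a, b) in rel for (c, d) in rel if b == c)

nodes = {"A", "B", "C"}
rel = {(a, b) for a in nodes for b in nodes}  # the complete relation

# Reflexive + symmetric + transitive = an equivalence relationship.
assert is_reflexive(nodes, rel) and is_symmetric(rel) and is_transitive(rel)
```

Dropping any pair that breaks one of the three checks turns the relation into a non-equivalence relation, which the same functions will report.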
10.30 RISK ASSOCIATED WITH NEW
TECHNOLOGIES
Organizations adopt new technologies due to the several benefits of these
technologies. A new technology may be one which is completely new to
the world, or not new to the world but new to the organization. In any
case, these new technologies introduce some risks along with benefits.
Risks may be faced by the development team as well as end users.
Following are the different risks:
1. Unproven technology:
New technologies are used by organizations due to their advantages. But a
new technology may have some drawbacks or lacunae which the
organization might not be aware of. Also, it may not be possible to use all
the advantages offered by the new technology. Sometimes, the new
technology is not assessed properly for positive and negative factors
before using it. A technology needs some time to mature; as users use it,
they provide feedback to the technology owners and thus help in the
maturing process.
268
6. Variation between technology delivered and documentation Special Tests
provided:
Every technology comes with documentation like user manuals and
troubleshooting manuals. Users and developers refer to this documentation
to understand the usage of the new technology. If a gap exists between the
documentation and the actual working of the technology, then people may
not be able to use the technology by referring to the documentation. Thus,
documentation not being in sync with the technology may hamper usage
of the new technology, as people may not know how to use it.
269
Software Quality 11. Inadequate support for technology from vendor:
Assurance
When an organization adopts a new technology, it relies on the vendor
providing service support for the technology, at least during the initial
phases of usage. Service support may include troubleshooting and
training, which may not be available as and when required by the
organisation/users. Many aspects depend upon people, both internal as
well as external, to make the project successful. Technology may become
a problem if vendor support is inadequate or completely unavailable.
270
decisions about the selection of technology on the basis of benefits and Special Tests
shortcomings analysis.
2. Test the adequacy of the current process definition available to control
technology:
An organisation needs process definition to support new technology
implementation and usage. If support is unavailable, then testers may have
to raise issues and get the process definition. It can be a risky situation and
customer must be involved in decision making or customer must be
informed about the possible lacunae of the process definition. If the
processes are available, their usage must be tested. Testers must identify
potential weak areas and create a mitigation action plan in case a risk
turns into reality.
10.34.1 Testing of Transaction Processing Control:
An application may process the data received from different inputs and
provide outputs in different forms. Application controls must be designed
to ensure data accuracy in all phases of data receipt, processing and
output. The control placement may include the following:
Transaction Origination:
The points where the data originates must be controlled. Sometimes, data
originates in one system and is transferred to the given system for
processing through different methods. Data may originate in manual
operations also. Generally, these controls may not be applicable to the
given system if data preparation happens outside the system, either
manually or automatically, but data entry must be controlled in such cases.
Transaction Output:
Processed output may be transferred from one system to another or may be
delivered to users in different ways including printing, displaying or
sending data from one location to another or from one system to another.
Transaction output must show the processing results correctly and data
input and output must be matched in data transfer process.
10.34.2 Testing Security Control:
Security controls are designed to protect the system from internal or
external attacks by possible penetrators. Security testing is a thought
process where one needs to understand the thinking of an intruder and try
to devise control mechanisms accordingly. The following analysis should
be done:
Points where Security is more likely to be Penetrated (Points of
Penetration):
These are the areas where the system is exposed to the outside world and
there are possibilities of outside attacks. At these points, the attacker's
entry probability is high. These are also called 'threat points'. A few threat
points may be:
Data preparation stage which can be a manual operation or it may
occur in some other system.
Computer operations where the system processes or transfers data
from one place to another. It may include server rooms, processors,
etc.
Non-IT areas where people working with system do not understand
system security concept. They may be ignorant of security practices
and may cause failure unknowingly.
Software development, maintenance and enhancement phases where
data is used for building, maintaining or testing of an application.
Online data preparation facilities where data is not validated
adequately before entering the system.
Digital media storage facilities which may be subject to
electromagnetic fields, temperature, humidity, etc. Storage facilities
may not be secured enough, and storage media may get damaged due
to wrong handling.
Online operations where data is manually entered in system without
any verification and validation or data is transferred from one system
to another without any control.
Simplicity of Control and Usage: Controls must be simple for the
users to understand and use.
Failure Safe Controls: Controls must be safe and free from failure.
Open Design for Controls: Control design must be flexible enough
to accept technology and process changes to modify controls
accordingly.
Separation of Privileges of Users: System must enforce separation
of user privileges to avoid any unauthorised entries bypassing the
controls.
Psychological Acceptability of Controls by Users: An organisation
must plan for adequate training for users. Security and controls and
their purpose must be well explained to people so that they can
psychologically accept the controls and do not bypass them.
Layered Defense in System: The entire system must not collapse if
any system part fails; there must be an alternative defense mechanism
to detect and prevent complete system failure. If an unauthorised
transaction entry occurs, it must be detected and proper actions must
be initiated.
Compromised Recording: Transactions made by users must be
recorded and audited. This assists in early detection of problems in
transactions and processing. However, it may hamper users' privacy
and should be incorporated only if the requirements specify the same.
An audit trail is an example of compromised recording in a system.
Line of Business:
An organisation's line of business might differ from the software that it
requires.
Example: An organisation developing software for the banking domain is
not in the line of business to develop the automation software required for
testing. In such cases, the organisation might purchase this testing
software from the market and use it directly, without investing resources
and time in creating it.
Cost-Benefit Analysis:
Sometimes it is very costly to develop software in-house due to constraints such as lack of knowledge, skills, resources, or budget. Buying the software becomes more convenient and cost-effective than creating it. Usually, 'COTS' products are much cheaper than bespoke projects because the cost is distributed over a large number of users.
Expertise/Domain Knowledge:
An organisation may know how to use software without any knowledge of how to create it. Sometimes it is sufficient to understand how to use the software without going into the details of its development. The organisation may simply purchase it from the market and use it.
Delivery Schedules:
'COTS' products are available immediately in the market by paying for them. This saves development time and effort, which is beneficial when in-house effort and schedule do not justify the expected use and benefits. The cost of purchasing software may be less than that of developing it.
Software Compatibility: There may be existing software in the system where COTS will be implemented. COTS should not affect its existence or working. Other software may provide input to or accept output from the COTS software; COTS must be able to integrate and communicate with it effectively.
Data Compatibility: Data transfer may happen from one system to another during COTS implementation. COTS should not hamper or interfere with the data, its format, or the transfer process. Mismatches in data format, style, and frequency can cause severe system problems.
Communication Compatibility: The protocols used in communication must be compatible to prevent any communication loss. If the communication protocols are not compatible, adopters will be required to convert protocols to facilitate communication.
Evaluate People Fit:
Testers need to analyse whether people will be able to work with the system effectively. It must be understood whether the software can be used as is, or whether it requires configuration, modification or external changes to make it usable. Sometimes users may require additional training and support.
Component Testing:
One needs to define the approach and test plan for testing the client and the server individually. Simulators may have to be devised to replace the corresponding components while testing the target component: server testing may need a client simulator, and client testing may need a server simulator. The network may be tested by using client and server simulators together.
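The simulator idea can be sketched in-process. The `server_handle` function and the request shapes below are hypothetical stand-ins for a real networked server component:

```python
# Hypothetical server handler under test; in a real system this would
# sit behind a network listener rather than be called directly.
def server_handle(request):
    if request.get("op") == "ping":
        return {"status": "ok"}
    return {"status": "error", "reason": "unknown op"}

class ClientSimulator:
    """Replays canned client requests so the server can be tested in isolation."""
    def __init__(self, handler):
        self.handler = handler

    def run(self, requests):
        # Drive the server with each scripted request and collect responses.
        return [self.handler(r) for r in requests]

sim = ClientSimulator(server_handle)
responses = sim.run([{"op": "ping"}, {"op": "bogus"}])
print([r["status"] for r in responses])  # prints ['ok', 'error']
```

The same shape applies in reverse: a server simulator returning canned responses lets the client component be exercised alone.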
Integration Testing:
After clients, servers and the network have been tested successfully in isolation, they are brought together to form the system and the system test cases are executed. Client-server communication is tested during integration testing. Regular testing methods such as functionality testing and user interface testing ensure that the system meets the requirement specifications and design specifications correctly. In addition to these, several special testing techniques are applied, as follows:
Performance Testing:
Concurrency Testing:
Multiple users may access the same record at the same time. Concurrency testing is required to understand the system behaviour under such circumstances.
Compatibility Testing:
Clients and servers are subjected to different networks when used in production. Servers may run on hardware, software or operating system environments other than the recommended ones, and clients may differ significantly from the expected environmental variables. Testing must ensure that performance is maintained across the range of hardware and software configurations, and users must be adequately protected in case of a configuration mismatch. Similarly, any limiting factors must be communicated to prospective users.
Other testing, such as security testing and compliance testing, may be done if needed, as per the scope of testing and the type of system.
Component Testing:
One must define the approach and test plan for testing the web application individually at the client side and at the server side. Simulators may have to be designed to replace the corresponding components: server testing will need a client simulator, and vice versa. Network testing is also required.
Integration Testing:
Successfully tested servers and clients are brought together to form the web system and the system test cases are executed. Communication between client and server is tested in integration testing.
Other testing methods, such as functionality testing and user interface testing, ensure that the system meets the requirement specifications and design specifications correctly. In addition to them, several other kinds of testing are involved, as follows:
Performance Testing:
Concurrency Testing:
Multiple users may access the same record simultaneously. Concurrency testing is needed to understand system behaviour under such circumstances. The probability of concurrency increases with the number of users.
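The concurrent-update scenario can be sketched with threads contending for one shared record. The `Record` class and the user/update counts here are illustrative assumptions:

```python
import threading

class Record:
    """A shared record guarded by a lock so concurrent updates stay correct."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # without this lock, concurrent updates could be lost
            self.value += 1

def concurrency_test(record, users=10, updates_each=1000):
    # Simulate many users updating the same record at the same time.
    threads = [
        threading.Thread(
            target=lambda: [record.increment() for _ in range(updates_each)]
        )
        for _ in range(users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Every update must survive: users * updates_each in total.
    return record.value

rec = Record()
print(concurrency_test(rec))  # prints 10000: no updates lost under contention
```

A concurrency test for a real application would replace the lock-protected counter with the system's own record-update path and check for lost updates the same way.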
Security Testing:
As the communication occurs through a virtual network, security is important. Applications may use communication protocols, coding and decoding mechanisms, and other schemes to maintain system security. The system must be tested for possible weak areas, called 'vulnerabilities', and against possible intruders trying to attack the system, called 'penetrators'.
Compatibility Testing:
Web applications may be placed in different environments when users use them in production. Servers may be exposed to hardware, software or operating system environments other than the expected ones, and client browsers may differ significantly from the expected environmental variables. Testing must ensure that performance is maintained across the range of hardware and software configurations, and users must be adequately protected in case of a configuration mismatch. Similarly, any limiting factors must be communicated to the prospective user.
10.38 MOBILE APPLICATION TESTING (PDA
DEVICES)
Today's generation uses pocket devices extensively for communication and computing due to the mobility they offer. There has been a tremendous increase in the memory capacities and technologies adopted by such appliances. With the advent of Bluetooth and Wi-Fi, many PDAs are replacing desktop computers due to their ease of use. New technologies have converted the ordinary communication device into an internet-driven palmtop.
disaster recovery ability are important aspects from the user's perspective.
E-business/E-commerce Development:
Development of e-business/e-commerce applications may not follow the waterfall or iterative model; agile methodologies and spiral development models are used extensively. Such development is characterized by the following:
Rapid and easy assembly of application modules is essential as the system size increases, as defined by the spiral methodology. Initially, some parts of an application are delivered and used; based on user response and expectations, later parts are then delivered in multiple instances.
Testing of component functionality and performance at each
increment is required to ensure correct integration. Huge regression
testing cycles may be required.
Since real-time testing may not be possible, simulation is done by designing models that mimic real-world scenarios, and testing is then performed against them.
Usually, such applications are deployed in a distributed environment, running 24 x 7 x 365 for an extended period, with no allowable downtime. MTTR and MTBF are important parameters for assuring service to users.
Monitoring performance and transactions over an extended period is
essential to understand if there is any deterioration of service level
over a time span.
Analysing system effectiveness and gathering business intelligence
may be essential to maximize sale and profit.
from time to time as per policy and strategy decisions of the government and the categories of industry that they belong to.
All critical business functions with respect to common users are
identified and evaluated on the basis of their criticality.
Mechanisms must be in place to ensure connectivity and handling
connection loss. Disaster recovery procedures and business continuity
procedures must be developed.
Systems are tested to assess security of online transactions. User
information must be protected and privacy practices must be applied
stringently.
Privacy audits must be conducted to confirm correct working of
strategies to protect customer privacy and confidentiality.
Vulnerabilities are analysed to prevent hacker attacks and virus attacks. Users may not visit a site if it has possible vulnerabilities and threats.
Disaster avoidance measures are developed such as redundant
systems, alternative routing, precise change controls, encryption,
capacity planning, load and stress testing, and access control.
Agility Testing:
with high maturity and technical competence to adapt to changing customer needs.
Every process has an inherent variability. One may have to attack the common causes of variation, while some controls may exist to identify special causes of variation.
Different test plans are made to test the software. However, as the process begins, many changes may be required, so test plans should be flexible enough to adapt to them.
Stakeholder Maturity/Involvement:
An agile process demands maturity from stakeholders and all the people involved, since there is time pressure, pressure to adapt to dynamic requirements, and pressure for faster delivery.
Data Warehouse Definition:
Subject-oriented:
Subject-oriented data warehouses are designed to help the user in
analysing data. The data is organised so that all the data elements relating
to the same real world event or object are linked together.
Integrated:
Database integration is closely related to subject orientation. Data
warehouses must place data from different sources into a consistent
format. The database contains data from most or all of an organisation‟s
operational applications and is made consistent.
Time-variant:
Changes made to data in the database are tracked and recorded so that reports can show how data varies over time. To discover trends in business, analysts need large amounts of data. Time variance is the data warehouse's tracking of changes made to data over time.
Non-volatile:
Data in the database is never overwritten or deleted once committed; the data is static and read-only, retained for future reporting. Once data enters the warehouse, it should not change, as the purpose of the data warehouse is to be able to analyse what has occurred.
Testing Process for Data Warehouse:
Testing for a data warehouse includes requirements testing, unit testing, and integration testing, followed by acceptance testing.
Requirements Testing:
Unit Testing:
Integration Testing:
The test team must understand the linkages for the fields displayed in the report, trace them back, and compare them with the source systems.
Creating Queries:
Create queries to fetch and verify the data from source and target. Sometimes it is not possible to reproduce the complex transformations done in ETL; in such cases, the data can be transferred to a file and the calculations performed there.
Data Completeness:
The basic test of data completeness is to verify that all expected data is loaded into the data warehouse. This includes validating that all records, all fields, and the full contents of each field are loaded. This may cover:
Comparing record counts between source data, data loaded to the
warehouse and rejected records.
Comparing unique values of key fields between source data and data
loaded to the warehouse.
Utilizing a data profiling tool that shows the range and value
distributions of fields in a data set.
Populating the full contents of each field to validate that no truncation
occurs at any step in the process.
Testing the boundaries of each field to find any database limitations.
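The first two checks above — comparing record counts and comparing unique key values between source and warehouse — can be sketched against a toy database; the table names and columns are hypothetical:

```python
import sqlite3

# An in-memory database standing in for the source system and the warehouse.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE source (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE warehouse (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO source VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])
cur.executemany("INSERT INTO warehouse VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])

def count(table):
    # Record counts must match between source data and data loaded.
    return cur.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

assert count("source") == count("warehouse")

# Unique values of the key field must also match: nothing dropped, nothing duplicated.
src_keys = {r[0] for r in cur.execute("SELECT id FROM source")}
wh_keys = {r[0] for r in cur.execute("SELECT id FROM warehouse")}
print(src_keys == wh_keys)  # prints True
```

A real completeness test would also account for the rejected-records table, as the first bullet above notes.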
Data Transformation:
Validating that data is transformed correctly according to business rules can be the most complex part of testing.
Create a spreadsheet of scenarios of input data and expected results
and validate these with the business customer.
Create test data that includes all scenarios.
Utilize data profiling results to compare range and distribution of
values in each field between source and target data.
Validate correct processing.
Validate that data types in the warehouse are as specified in the design
and/or data model.
Set up data scenarios that test referential integrity between tables.
Validate parent-to-child relationships in the data.
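The scenario-spreadsheet approach above can be sketched as a table of input rows paired with expected outputs. The `transform` rule here is a hypothetical example, not a rule from the text:

```python
# Hypothetical ETL transformation: normalise a name and derive a flag.
def transform(row):
    return {
        "name": row["name"].strip().upper(),
        "is_adult": row["age"] >= 18,
    }

# Scenario table: input rows paired with expected warehouse output, as would
# be agreed with the business customer in a spreadsheet of scenarios.
scenarios = [
    ({"name": " alice ", "age": 30}, {"name": "ALICE", "is_adult": True}),
    ({"name": "bob", "age": 12},     {"name": "BOB",   "is_adult": False}),
]

# Collect every scenario whose actual output differs from the expected one.
failures = [(inp, transform(inp), exp)
            for inp, exp in scenarios if transform(inp) != exp]
print(len(failures))  # prints 0: every scenario transformed as expected
```

Boundary ages such as exactly 18 would be added as further scenarios to test the rule's edge cases.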
Data Quality:
Data quality rules are defined during data warehouse design. They may include:
Rejecting a record if a certain decimal field contains non-numeric data.
Substituting null if a certain decimal field contains non-numeric data.
Handling duplicate records.
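These rules can be sketched as a small cleansing pass. The record layout, the `amount` field, and the rule choices are illustrative assumptions:

```python
def apply_quality_rules(records, mode="reject"):
    """Apply hypothetical quality rules to a decimal 'amount' field."""
    clean, rejected, seen = [], [], set()
    for rec in records:
        if rec["id"] in seen:          # duplicate records are set aside
            rejected.append(rec)
            continue
        seen.add(rec["id"])
        try:
            rec = {**rec, "amount": float(rec["amount"])}
        except (TypeError, ValueError):
            if mode == "reject":       # rule 1: reject the non-numeric record
                rejected.append(rec)
                continue
            rec = {**rec, "amount": None}  # rule 2: substitute null instead
        clean.append(rec)
    return clean, rejected

data = [
    {"id": 1, "amount": "10.5"},
    {"id": 2, "amount": "abc"},   # non-numeric decimal field
    {"id": 1, "amount": "10.5"},  # duplicate of the first record
]
clean, rejected = apply_quality_rules(data)
print(len(clean), len(rejected))  # prints 1 2
```

Whether a warehouse rejects or substitutes is decided during design, which is why the mode is a parameter here.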
Performance and Scalability:
As data volume in a data warehouse grows, load times can be expected to
increase and performance of queries can degrade. The aim of performance
testing is to point out any potential weaknesses in the design such as
reading a file multiple times or creating unnecessary intermediate files.
Load the database with peak expected production volumes to ensure that this volume of data can be loaded within the specified time period. The time period may be defined in an SLA.
Compare these loading times to loads performed with a smaller
amount of data to anticipate scalability issues.
Monitor the timing of the reject process and consider how large
volumes of rejected data will be handled.
Perform simple and multiple join queries to validate query
performance on large database volumes.
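Comparing load times at a small volume and at a larger volume, as the bullets above suggest, can be sketched as follows; the table schema and row counts are illustrative, and a real test would use production-scale volumes:

```python
import sqlite3
import time

def timed_load(n_rows):
    """Load n_rows into a fresh in-memory table and return elapsed seconds."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE facts (id INTEGER, val REAL)")
    rows = [(i, i * 0.5) for i in range(n_rows)]
    start = time.perf_counter()
    conn.executemany("INSERT INTO facts VALUES (?, ?)", rows)
    conn.commit()
    return time.perf_counter() - start

small = timed_load(1_000)
peak = timed_load(100_000)  # stand-in for peak expected production volume

# A load time growing much faster than the row count hints at scalability
# issues; the ratio would be compared against the SLA-defined window.
print(f"small: {small:.4f}s  peak: {peak:.4f}s")
```

The same timing harness can wrap the reject-handling step and the join queries mentioned above to monitor them at volume.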
Object-oriented development
Internal controls
Web application
PDAs/Mobile applications
10.43 EXERCISES
1. Describe the risk associated with new technology usage.
2. Explain technology process maturity.
3. Explain the test process for new technology.
4. Explain the process of testing object-oriented development.
5. Explain the process of testing of internal controls.
*****