R20CSE2207 Software Engineering
SOFTWARE ENGINEERING
Syllabus:
UNIT I :
Introduction to Software Engineering : The evolving role of software, Changing Nature of
Software, Software myths.
A Generic view of process : Software engineering- A layered technology, a process
framework, The Capability Maturity Model Integration (CMMI), Process patterns, process
assessment, personal and team process models
Process models : The waterfall model, Incremental process models, Evolutionary process
models, The Unified process.
UNIT II :
UNIT III:
Design Engineering : Design process and Design quality, Design concepts, the design model.
Creating an architectural design : Software architecture, Data design, Architectural styles
and patterns, Architectural Design.
Performing User interface design : Golden rules, User interface analysis and design,
interface analysis, interface design steps, Design evaluation.
UNIT IV : Testing Strategies : A strategic approach to software testing, test strategies for
conventional software, Black-Box and White-Box testing, Validation testing, System testing,
the art of Debugging.
Product metrics : Software Quality, Metrics for Analysis Model, Metrics for Design Model,
Metrics for source code, Metrics for testing, Metrics for maintenance.
Metrics for Process and Products : Software Measurement, Metrics for software quality.
UNIT V :
Risk management : Reactive vs. Proactive Risk strategies, software risks, Risk identification,
Risk projection, Risk refinement, RMMM, RMMM Plan.
TEXT BOOKS :
REFERENCES :
1. Software Engineering- K.K. Agarwal & Yogesh Singh, New Age International Publishers
1. INTRODUCTION TO SOFTWARE ENGINEERING
◼ What is Software?
• Computer Software is the product that software professionals design and build. It includes
• Programs
• Content
• Documents
• Related to the process: a systematic procedure used for the analysis, design,
implementation, test and maintenance of software.
• Related to the product: the software should be efficient, reliable, usable, modifiable,
portable, testable, reusable, maintainable, interoperable, and correct.
• The study of approaches: in 1993, the joint IEEE Computer Society and ACM Steering
Committee was established to develop software engineering as a profession.
• The Product
• Software is a product
• Delivers computing potential
• Produces, manages, acquires, modifies, displays, or transmits information.
• The Law of Continuing Change (1974): E-type systems must be continually adapted else they
become progressively less satisfactory.
• The Law of Increasing Complexity (1974): As an E-type system evolves its complexity increases
unless work is done to maintain or reduce it.
• The Law of Self Regulation (1974): The E-type system evolution process is self-regulating with
distribution of product and process measures close to normal.
• The Law of Conservation of Organizational Stability (1980): The average effective global activity
rate in an evolving E-type system is invariant over product lifetime.
• The Law of Conservation of Familiarity (1980): As an E-type system evolves all associated with
it, developers, sales personnel, users, for example, must maintain mastery of its content and
behavior to achieve satisfactory evolution.
• The Law of Continuing Growth (1980): The functional content of E-type systems must be
continually increased to maintain user satisfaction over their lifetime.
• The Law of Declining Quality (1996): The quality of E-type systems will appear to be declining
unless they are rigorously maintained and adapted to operational environment changes.
• The Feedback System Law (1996): E-type evolution processes constitute multi-level, multi-loop,
multi-agent feedback systems and must be treated as such to achieve significant improvement
over any reasonable base.
• Most software is custom-built, rather than being assembled from existing components.
◼ Software Components
• In the hardware world, component reuse is a natural part of the engineering process.
• In the software world, it is something that has yet to be achieved on a broad scale.
• Reusability is an important characteristic of a high-quality software component.
• A software component should be designed and implemented so that it can be reused in many
different programs.
• In the 1960s, we built scientific subroutine libraries that were reusable in a broad array of
engineering and scientific applications.
• These subroutine libraries reused well defined algorithms in an effective manner, but had a
limited domain of application.
• Today, we have extended our view of reuse to encompass not only algorithms, but also data
structures.
Software Applications
• System software: system software is a collection of programs written to service other programs.
E.g. Compilers, editors, file management utilities, operating systems, drivers, networking etc.
• Application software: Application software consists of standalone programs that solve a specific
business need. E.g. point-of-sale transaction processing, real-time manufacturing control etc.
• Engineering/scientific software: Computer-aided design, system simulation, and other
interactive applications have begun to take on real-time and even system software
characteristics.
• Embedded software: Embedded software resides within a product or system and is used to
implement and control features and functions for the end-user and for the system itself.
Embedded software can perform limited and esoteric functions. e.g. digital functions in an
automobile such as fuel control, dashboard displays, braking systems, etc.
• Product-line software: Designed to provide a specific capability for use by many different
customers product-line software can focus on a limited and esoteric marketplace or address
mass consumer markets. e.g. inventory control, word processing, spreadsheets, computer
graphics, multimedia etc.
• WebApps (Web applications): WebApps span a wide array of applications. In their simplest
form, WebApps can be little more than a set of linked hypertext files that present information
using text and limited graphics.
• AI software: AI software makes use of nonnumerical algorithms to solve complex problems that
are not amenable to computation or straightforward analysis. e.g. robotics.
◼ Software—New Categories
• Open source: ”free” source code open to the computing community (a blessing, but also a
potential curse!)
• Also …
• Data mining
• Grid computing
• Cognitive machines
• Unlike ancient myths, software myths propagate misinformation and confusion that have
caused serious problems for managers, technical people and customers.
Management Myths
• Myth: We already have a book that’s full of standards and procedures for
building software. Won’t that provide my people with everything they need to
know?
• Reality: The book of standards may very well exist, but is it used? Is it
complete? In many cases, the answer is no.
• Myth: If we get behind schedule, we can add more programmers and catch
up.
• Reality: Software development is not a mechanistic process like manufacturing.
In the words of Brooks: “adding people to a late software project makes it later.”
Customer Myths
Practitioner’s Myths
• Myth: Once we write the program and get it to work our job is done.
• Reality: “The sooner you begin writing code, the longer it’ll take you to get
done.” Industry data indicate that between 60 and 80 percent of all effort
expended on software will be expended after it is delivered to the customer for
the first time.
• Myth: Until I get the program running, I have no way of assessing its quality.
• Myth: The only deliverable work product for a successful project is the
working program.
________
2. A GENERIC VIEW OF PROCESS
◼ Chapter Overview
• What is it? A software process: a series of predictable steps that leads to a timely, high-quality
product.
• What are the steps? A handful of activities are common to all software processes; the details vary.
• Process model: the foundation for software engineering is the process layer; it provides a
framework and produces work products such as models, documents, data and reports.
• Tools: provide automated or semi-automated support for the process and the methods.
2.2 A PROCESS FRAMEWORK
◼ Process framework
• Framework activities
• Work tasks
• Work products
• Milestones & deliverables
• QA checkpoints
• Umbrella Activities
Framework Activities
• Planning: It describes the technical tasks to be conducted, the risks that are likely, the
resources that will be required, the work products to be produced, and a work schedule.
• Deployment: The software is delivered to the customer who evaluates the delivered product
and provides feedback based on the evaluation.
Umbrella Activities
• Software project tracking and control: allows the software team to assess progress against
the project plan and take necessary action to maintain schedule.
• Software quality assurance: Defines and conducts the activities required to ensure software
quality.
• Software configuration management: manages the effects of change throughout the software
process.
• Work product preparation and production: encompasses the activities required to create
work products such as models, documents, logs, forms, and lists.
• Reusability management: defines criteria for work product reuse and establishes mechanisms
to achieve reusable components.
• Measurement: defines and collects process, project, and product measures that assist the
team in delivering software that meets customers’ needs; can be used in conjunction with all
other framework and umbrella activities.
• Risk management: assesses risks that may affect the outcome of the project or the quality of
the product.
◼ The Process Model: Adaptability
• the framework activities will always be applied on every project ... BUT
• the tasks (and degree of rigor) for each activity will vary based on:
• the type of project
• characteristics of the project
• common sense judgment; concurrence of the project team
• The CMMI defines each process area in terms of “specific goals” and the “specific practices”
required to achieve these goals.
• Specific goals establish the characteristics that must exist if the activities implied by a
process area are to be effective.
• Specific practices refine a goal into a set of process-related activities.
• The CMMI represents a process meta-model in two different ways: (1) as a continuous model
and (2) as a staged model.
• Capability levels:
• Level 0: Incomplete. The process area is either not performed or does not achieve all
goals and objectives defined by the CMMI for level 1 capability.
• Level 1: Performed. All of the specific goals of the process area have been satisfied. Work
tasks required to produce defined work products are being conducted.
• Level 2: Managed. All level 1 criteria have been satisfied. In addition, all work associated
with the process area conforms to an organizationally defined policy; all people doing the
work have access to adequate resources to get the job done; stakeholders are actively
involved in the process area as required; all work tasks and work products are “monitored,
controlled, and reviewed; and are evaluated for adherence to the process description”
• Level 3: Defined. All level 2 criteria have been achieved. In addition, the process is
“tailored from the organization’s set of standard processes according to the
organization’s tailoring guidelines, and contributes work products, measures, and other
process-improvement information to the organizational process assets”
• Level 4: Quantitatively managed. All level 3 criteria have been achieved. In addition, the
process area is controlled and improved using measurement and quantitative
assessment. “Quantitative objectives for quality and process performance are established
and used as criteria in managing the process”
• Level 5: Optimized. All capability level 4 criteria have been achieved. In addition, the
process area is adapted and optimized using quantitative means to meet changing
customer needs and to continually improve the efficacy of the process area under
consideration”.
2.4 PROCESS PATTERNS
• Process patterns define a set of activities, actions, work tasks, work products and/or related
behaviors.
• Typical examples:
• Customer communication (a process activity)
• Analysis (an action)
• Requirements gathering (a process task)
• Reviewing a work product (a process task)
• Design model (a work product)
• The process should be assessed to ensure that it meets a set of basic process criteria that have
been shown to be essential for successful software engineering.
(Figure) Software process assessment: the software process is examined by a software process
assessment, which leads to both capability determination and software process improvement;
capability determination in turn motivates process improvement.
• CBA IPI (CMM-Based Appraisal for Internal Process Improvement): provides a diagnostic
technique for assessing the relative maturity of a software organization.
• ISO 9001:2000: for software is a generic standard that applies to any organization that
wants to improve the overall quality of the products, systems, or services that it provides.
3. PROCESS MODELS
◼ Prescriptive Models
• If prescriptive process models strive for structure and order, are they inappropriate
for a software world that thrives on change?
• Yet, if we reject traditional process models (and the order they imply) and replace
them with something less structured, do we make it impossible to achieve
coordination and coherence in software work?
• The Waterfall model sometimes called the classic life cycle, suggests a systematic, sequential
approach to software development.
• It is the oldest paradigm for software engineering.
• There are many situations in which initial software requirements are reasonably well defined,
but the overall scope of the development effort precludes a purely linear process.
• In addition, there may be a compelling need to provide a limited set of software functionality
to users quickly and then refine and expand on that functionality in later software releases.
• In such cases, a process model that is designed to produce the software in increments is
chosen.
• Divide the functionality of the system into increments and use a linear sequence of development
on each increment.
• First increment delivered is usually the core product, i.e. only basic functionality.
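The incremental scheme described above can be sketched as code: each release delivered to the customer contains all functionality produced so far, starting from the core product. The increment names below are illustrative, not from the text.

```python
def deliver_increments(increments):
    """Sketch of the incremental model: each release is cumulative,
    refining and expanding the functionality of earlier releases."""
    releases = []
    delivered = []
    for feature_set in increments:
        delivered = delivered + [feature_set]  # expand on earlier increments
        releases.append(list(delivered))       # a release = all functionality so far
    return releases

rels = deliver_increments(["core product", "advanced editing", "reporting"])
# rels[0] == ["core product"]  -- the first increment delivered is the core product
```

The key design point is that each increment passes through its own linear sequence of activities, but planning for later increments can react to reviews of earlier ones.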
• Reviews of each increment impact on design of later increments.
• Extreme Programming (XP), and other Agile Methods, are incremental, but they do not
implement the waterfall model steps in the standard order.
• The RAD (Rapid Application Development) model is similar to the waterfall model but uses a
very short development cycle (60 to 90 days to completion).
3.3 EVOLUTIONARY MODELS
• Evolutionary models are iterative. They are characterized in a manner that enables software
engineers to develop increasingly more complete versions of the software.
◼ Prototyping
• Users like the method, get a feeling for the actual system.
• May use a less-than-ideal platform to deliver, e.g. Visual Basic: excellent for prototyping,
but may not be as effective in actual operation.
• Users don’t know exactly what they want until they see it.
• Prototyping involves building a mock-up of the system and using it to obtain user feedback.
(Figure) The prototyping paradigm: communication → quick plan → modeling (quick design) →
construction of prototype → deployment, delivery & feedback, and back to communication.
◼ The Spiral Model: framework activities (task regions)
• Customer communication
• Planning
• Risk analysis
• Engineering
• Construction and release
• Customer evaluation
• Incremental releases
• Early releases may be paper or prototypes.
◼ Concurrent Development Model
• The concurrent process model defines a series of events that will trigger transitions from state
to state for each of the software engineering activities, actions, or tasks.
• The concurrent process model is applicable to all types of software development and provides
an accurate picture of the current state of a project.
• Rather than confining software engineering activities, actions, and tasks to a sequence of
events, it defines a network of activities.
• Each activity, action, or task on the network exists simultaneously with other activities,
actions, or tasks.
• Events generated at one point in the process network trigger transitions among the states.
(Figure) One element of the concurrent process model: each software engineering activity or
task moves among the states none, under development, awaiting changes, under review,
under revision, baselined, and done.
3.4 SPECIALIZED PROCESS MODELS
◼ Unified Process
◼ Unified Process Phases
• The inception phase of the UP encompasses both customer communication and planning
activities and emphasizes the development and refinement of use-cases as a primary
model.
• An elaboration phase that encompasses the customer communication and modeling
activities, refining the preliminary use-cases and expanding the architectural representation.
• A construction phase that refines and translates the design model into implemented
software components.
• A transition phase that transfers the software from the developer to the end-user for beta
testing and acceptance.
• A production phase in which on-going monitoring and support are conducted.
UP Phases
(Figure) Inception, elaboration, construction, transition and production phases, with the
workflows (requirements, analysis, design, implementation, test, support) cutting across
them over iterations #1 … #n.
◼ UP Work Products
• The following figure illustrates the key work products produced as a consequence of each of
the four technical UP phases.
(Figure) UP work products:
• Inception phase: vision document; initial use-case model; initial project glossary; initial
business case; initial risk assessment; project plan (phases and iterations); business
model, if necessary; one or more prototypes.
• Elaboration phase: use-case model; supplementary requirements, including non-functional;
analysis model; software architecture description; executable architectural prototype;
preliminary design model; revised risk list; project plan, including iteration plan, adapted
workflows, milestones and technical work products; preliminary user manual.
• Construction phase: design model; software components; integrated software increment;
test plan and procedure; test cases; support documentation (user manuals, installation
manuals, description of current increment).
• Transition phase: delivered software increment; beta test reports; general user feedback.
________
4. SOFTWARE REQUIREMENTS
◼ Contents:
• Functional and non-functional requirements
• User requirements
• System requirements
• Interface specification
• The Software Requirements Document
◼ Requirements engineering:
• The process of establishing the services that the customer requires from a system and the
constraints under which it operates and is developed
• The requirements themselves are the descriptions of the system services and constraints that
are generated during the requirements engineering process
◼ What is a requirement?
• May be the basis for a bid for a contract - therefore must be open to interpretation
• May be the basis for the contract itself - therefore must be defined in detail
• Both these statements may be called requirements
“If a company wishes to let a contract for a large software development project, it must
define its needs in a sufficiently abstract way that a solution is not pre-defined. The
requirements must be written so that several contractors can bid for the contract,
offering, perhaps, different ways of meeting the client organisation’s needs. Once a
contract has been awarded, the contractor must write a system definition for the client
in more detail so that the client understands and can validate what the software will do.
Both of these documents may be called the requirements document for the system.”
◼ Types of requirement
• User requirements
• Statements in natural language plus diagrams of the services the system
provides and its operational constraints. Written for customers
• System requirements
• A structured document setting out detailed - descriptions of the system
services. Written as a contract between client and contractor
• Software specification
• A detailed software description which can serve as a basis for a design or
implementation. Written for developers.
◼ Definitions and specifications
◼ Requirements readers:
• Functional requirements
• Non-functional requirements
• Domain requirements
• Functional requirements: describe functionality or system services, how the system should
react to particular inputs and how the system should behave in particular situations.
• Depend on the type of software, expected users and the type of system where the software
is used
• Functional user requirements may be high-level statements of what the system should do but
functional system requirements should describe the system services in detail.
• Examples:
• The user shall be able to search either all of the initial set of databases or select a
subset from it.
• The system shall provide appropriate viewers for the user to read documents in the
document store.
• Every order shall be allocated a unique identifier (ORDER_ID) which the user shall be
able to copy to the account’s permanent storage area.
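The third example above (every order shall be allocated a unique identifier, ORDER_ID) is the kind of functional requirement that maps directly to code. A minimal sketch follows; the counter-based identifier scheme and class names are illustrative assumptions, not part of the requirement itself.

```python
import itertools

class OrderRegistry:
    """Allocates a unique identifier (ORDER_ID) to every order,
    as the sample functional requirement demands."""

    def __init__(self):
        self._counter = itertools.count(1)  # assumption: a simple sequence suffices
        self.orders = {}

    def place_order(self, details):
        order_id = f"ORDER-{next(self._counter):06d}"
        self.orders[order_id] = details
        return order_id

reg = OrderRegistry()
a = reg.place_order("10 widgets")
b = reg.place_order("3 gadgets")
# a != b -- every order receives a distinct ORDER_ID
```

A requirement written this precisely is also easy to verify: a test can simply check that no identifier repeats.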
• Problems arise when requirements are not precisely stated.
• Ambiguous requirements may be interpreted in different ways by developers and users.
• Consider the term ‘appropriate viewers’ in the previous example.
• User intention - special purpose viewer for each different document type
• Developer interpretation - Provide a text viewer that shows the contents of
the document
• Non-functional requirements: define system properties and constraints, e.g. reliability,
response time and storage requirements. Constraints include I/O device capability, system
representations, etc.
◼ Non-functional classifications
• Product requirements
• Requirements which specify that the delivered product must behave in a particular
way e.g. execution speed, reliability, etc.
• Organisational requirements
• Requirements which arise from factors which are external to the system and its
development process e.g. interoperability requirements, legislative requirements,
etc.
• Product requirement
• It shall be possible for all necessary communication between the APSE and the user to
be expressed in the standard Ada character set
• Organisational requirement
• The system development process and deliverable documents shall conform to the
process and deliverables defined in XYZCo-SP-STAN-95
• External requirement
• The system shall not disclose any personal information about customers apart from
their name and reference number to the operators of the system
◼ Examples
• A system goal
• The system should be easy to use by experienced controllers and should be organised
in such a way that user errors are minimised.
• A verifiable non-functional requirement: Experienced controllers shall be able to use all the
system functions after a total of two hours of training. After this training, the average
number of errors made by experienced users shall not exceed two per day.
◼ Requirements measures
• Speed: processed transactions/second; user/event response time; screen refresh time
• Size: KBytes; number of RAM chips
• Ease of use: training time; number of help frames
• Reliability: mean time to failure; probability of unavailability; rate of failure occurrence;
availability
• Robustness: time to restart after failure; percentage of events causing failure; probability
of data corruption on failure
• Portability: percentage of target-dependent statements; number of target systems
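Several of the reliability measures above are simple ratios, so a requirement stated in these terms can be checked mechanically. The formulas below are the standard textbook definitions (not taken from this text), and the sample figures are invented.

```python
def mean_time_to_failure(uptimes):
    """MTTF: average operating time between observed failures (hours)."""
    return sum(uptimes) / len(uptimes)

def availability(mttf, mttr):
    """Steady-state availability: fraction of time the system is usable,
    given mean time to failure and mean time to repair."""
    return mttf / (mttf + mttr)

def rate_of_failure_occurrence(num_failures, total_hours):
    """ROCOF: failures per unit time of operation."""
    return num_failures / total_hours

mttf = mean_time_to_failure([95.0, 105.0, 100.0])  # -> 100.0 hours
print(availability(mttf, mttr=1.0))                 # 100 / 101, about 0.99
```

Writing a non-functional requirement in terms of such a measure (e.g. "availability shall exceed 0.99") is what makes it verifiable rather than a vague goal.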
◼ Maintainability ?
◼ Domain requirements
• Derived from the application domain and describe system characteristics and features that
reflect the domain
• May be new functional requirements, constraints on existing requirements or define specific
computations
• If domain requirements are not satisfied, the system may be unworkable
◼ Domain requirements (examples)
• Library system
• There shall be a standard user interface to all databases which shall be based on the
Z39.50 standard.
• Because of copyright restrictions, some documents must be deleted immediately on
arrival. Depending on the user’s requirements, these documents will either be printed
locally on the system server for manually forwarding to the user or routed to a
network printer.
• Train protection system: The deceleration of the train shall be computed as
D_train = D_control + D_gradient, where D_gradient is 9.81 m/s² * compensated gradient / alpha,
and where the values of 9.81 m/s² / alpha are known for different types of train
◼ Problems with domain requirements
• Understandability
• Requirements are expressed in the language of the application domain
• This is often not understood by software engineers developing the system
• Implicitness
• Domain specialists understand the area so well that they do not think of making the
domain requirements explicit.
◼ User requirements
• Should describe functional and non-functional requirements so that they are understandable
by system users who don’t have detailed technical knowledge
• User requirements are defined using natural language, tables and diagrams.
• Lack of clarity
• Precision is difficult without making the document difficult to read
• Requirements confusion
• Functional and non-functional requirements tend to be mixed-up
• Requirements amalgamation
• Several different requirements may be expressed together
• Grid facilities To assist in the positioning of entities on a diagram, the user may turn
on a grid in either centimetres or inches, via an option on the control panel. Initially,
the grid is off. The grid may be turned on and off at any time during an editing session
and can be toggled between inches and centimetres at any time. A grid option will be
provided on the reduce-to-fit view but the number of grid lines shown will be reduced
to avoid filling the smaller diagram with grid lines.
◼ Requirement problems
◼ Structured presentation
Grid facilities: The editor shall provide a grid facility where a matrix of horizontal
and vertical lines provides a background to the editor window. This grid shall be a passive
grid where the alignment of entities is the user's responsibility.
Rationale: A grid helps the user to create a tidy diagram with well-spaced entities.
Although an active grid, where entities 'snap-to' grid lines can be useful, the positioning is
imprecise. The user is the best person to decide where entities should be positioned.
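The structured grid requirement above translates naturally into a small state model, which shows why the structured form is easier to implement and to verify than the earlier amalgamated version. The class and method names below are illustrative assumptions.

```python
class Grid:
    """Passive background grid, following the structured requirement:
    off initially, can be toggled on/off and between units at any time;
    aligning entities remains the user's responsibility (no snap-to)."""
    UNITS = ("centimetres", "inches")

    def __init__(self):
        self.on = False              # "Initially, the grid is off."
        self.units = "centimetres"

    def toggle(self):
        """Turn the grid on or off at any time during an editing session."""
        self.on = not self.on

    def switch_units(self):
        """Toggle between centimetres and inches at any time."""
        i = self.UNITS.index(self.units)
        self.units = self.UNITS[1 - i]

g = Grid()
g.toggle()        # grid is now on
g.switch_units()  # grid is now measured in inches
```

Note that the rationale (a passive rather than active grid) is a design decision recorded with the requirement; the sketch deliberately omits any snap-to behaviour.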
• System requirements may be expressed using system models
• In principle, requirements should state WHAT the system should do (and the design should
describe how it does this)
• In practice, requirements and design are inseparable
• A system architecture may be designed to structure the requirements
• The system may inter-operate with other systems that generate design requirements
• The use of a specific design may be a domain requirement
• Ambiguity
• The readers and writers of the requirement must interpret the same words in the
same way. NL is naturally ambiguous so this is very difficult
• E.g. signs on an escalator:
• ‘Shoes must be worn’
• ‘Dogs must be carried’
• Over-flexibility
• The same thing may be said in a number of different ways in the specification
• Lack of modularisation
• NL structures are inadequate to structure system requirements
◼ Alternatives to NL specification
• Structured natural language: depends on defining standard forms or templates to express
the requirements specification.
• Design description languages: use a language like a programming language, but with more
abstract features, to specify the requirements by defining an operational model of the
system.
• A limited form of natural language may be used to express requirements
• This removes some of the problems resulting from ambiguity and flexibility and imposes a
degree of uniformity on a specification
• Often supported using a forms-based approach
◼ Form-based specifications
• Requirements may be defined operationally using a programming-language-like notation but
with more flexibility of expression
• Most appropriate in two situations
• Where an operation is specified as a sequence of actions and the order is important
(when nested conditions and loops are involved)
• When hardware and software interfaces have to be specified. Allows interface objects
and types to be specified
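The first situation above (an operation specified as a sequence of actions whose order matters) can be illustrated with a short operational sketch. Python stands in here for a program design language; the card-validation example and its names are assumptions for illustration, not from the text.

```python
class Card:
    """Minimal stand-in for a bank card (illustrative only)."""
    def __init__(self, pin, valid=True):
        self.pin = pin
        self.valid = valid

def validate_card(card, entered_pins, max_attempts=3):
    """Operation specified as a sequence of actions: the order matters,
    and the nested conditions and loop are written out explicitly,
    as a design description language would show them."""
    if not card.valid:
        return "reject card"
    for pin in entered_pins[:max_attempts]:
        if pin == card.pin:
            return "accept"
    return "retain card"  # after repeated PIN failures

print(validate_card(Card(pin="1234"), ["1111", "1234"]))  # accepted on 2nd attempt
```

The value of the operational form is precisely that ordering and control flow are unambiguous, which plain natural language often fails to convey.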
◼ PDL disadvantages
4.6 Interface specification
• Most systems must operate with other systems and the operating interfaces must be
specified as part of the requirements
• Three types of interface may have to be defined
• Procedural interfaces
• Data structures that are exchanged
• Data representations
• Formal notations are an effective technique for interface specification
(Figure) A PDL-style interface definition for a PrintServer.
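The stray closing fragment suggests the original showed a formally specified procedural interface for a print server. A sketch of such an interface in Python follows; the operation names are reconstructed on the assumption that the figure followed the common PrintServer example, so treat them as illustrative.

```python
from abc import ABC, abstractmethod

class PrintServer(ABC):
    """Procedural interface specification: the operations that other
    subsystems may call are named, with parameters made explicit,
    but no implementation is given at the requirements stage."""

    @abstractmethod
    def initialise(self, printer: str) -> None: ...

    @abstractmethod
    def print_document(self, printer: str, document: str) -> None: ...

    @abstractmethod
    def display_print_queue(self, printer: str) -> list: ...

    @abstractmethod
    def cancel_print_job(self, printer: str, document: str) -> None: ...
```

Declaring the interface abstractly lets contractors develop against it independently: the specification fixes what may be called, while leaving how it works to the design.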
• The requirements document is the official statement of what is required of the system
developers
• Should include both a definition and a specification of requirements
• It is NOT a design document. As far as possible, it should state WHAT the system should do
rather than HOW it should do it
• Introduction
• General description
• Specific requirements
• Appendices
• Index
• This is a generic structure that must be instantiated for specific systems
• Introduction
• Glossary
• User requirements definition
• System architecture
• System requirements specification
• System models
• System evolution
• Appendices
• Index
_______
◼ Objectives
The objective of this chapter is to discuss the activities involved in the requirements
engineering process. When you study this chapter, you will:
▪ understand the importance of requirements validation and how requirements reviews are
used in this process;
◼ Contents
• Feasibility studies
• Requirements elicitation and analysis
• Requirements validation
• Requirements management
• Questions for people in the organisation
• What if the system wasn’t implemented?
• What are current process problems?
• How will the proposed system help?
• What will be the integration problems?
• Is new technology needed? What skills?
• What facilities must be supported by the proposed system?
• The feasibility study report should recommend whether or not the system development should
continue.
• It may propose changes to the scope, budget, schedule and also suggest requirement changes
◼ Requirements elicitation and analysis (also called requirements discovery)
• Involves technical staff working with customers to find out about the application domain, the
services that the system should provide and the system’s operational constraints
• May involve end-users, managers, engineers involved in maintenance, domain experts, trade
unions, etc. These are called stakeholders
• The requirements change during the analysis process. New stakeholders may emerge and the
business environment may change
◼ The requirements analysis process
• Domain understanding
• Requirements collection
• Classification
• Conflict resolution
• Prioritisation
• Requirements checking
• viewpoint-oriented elicitation
• ethnography
• scenarios
• structured analysis methods (system models)
• prototyping
• There is no universal method for requirement analysis!
◼ System models
◼ Viewpoint-oriented elicitation
• Stakeholders represent different ways of looking at a problem or problem viewpoints
• This multi-perspective analysis is important as there is no single correct way to analyse system
requirements
◼ Scenarios
◼ Scenario descriptions
• use cases!
• event scenarios
• Event scenarios may be used to describe how a system responds to the occurrence of
some particular event
• Used in the Viewpoint Oriented Requirements Definition (VORD) method.
◼ Ethnography
• An analyst spends time observing and analysing how people actually work
• People do not have to explain their work
• Social and organisational factors of importance may be observed
• Identifies implicit system requirements
• Ethnographic studies have shown that work is usually richer and more complex than suggested
by simple system models
◼ Scope of ethnography
• Requirements that are derived from the way that people actually work rather than the way
in which process definitions suggest that they ought to work
• e.g. air-traffic controllers switch off flight path conflict alarms
• Requirements that are derived from cooperation and awareness of other people’s activities
• e.g. predict no. of aircraft entering their sector by getting information from
neighbouring controllers and plan accordingly
• Not suitable for use alone; it has to be combined with some other technique
◼ Requirements validation
• Concerned with showing that the requirements define the system that the customer really
wants
• Requirements error costs are high so validation is very important
• Fixing a requirements error after delivery may cost up to 100 times the cost of fixing
an implementation error
• Validity. Does the system provide the functions which best support the customer’s needs?
• Consistency. Are there any requirements conflicts?
• Completeness. Are all functions required by the customer included?
• Realism. Can the requirements be implemented given the available budget and technology?
• Verifiability. Can the requirements be checked?
• Requirements reviews
• Systematic manual analysis of the requirements
• Prototyping
• Using an executable model of the system to check requirements.
• Test-case generation
• Developing tests for requirements to check testability
• Automated consistency analysis
• Checking the consistency of a structured requirements description
◼ Requirements reviews
◼ Review checks
• Verifiability. Is the requirement realistically testable?
• Comprehensibility. Is the requirement properly understood?
• Traceability. Is the origin of the requirement clearly stated?
• Adaptability. Can the requirement be changed without a large impact on other requirements?
• The priority of requirements from different viewpoints changes during the development
process
• System customers may specify requirements from a business perspective that conflict
with end-user requirements
• The business and technical environment of the system changes during its development
◼ Requirements evolution
◼ Enduring and volatile requirements
• Enduring requirements. Stable requirements derived from the core activity of the
customer organisation.
• May be derived from domain models
• E.g. a hospital will always have doctors, nurses, etc.
• Mutable requirements
• Requirements that change due to the system’s environment
• Emergent requirements
• Requirements that emerge as understanding of the system develops
• Consequential requirements
• Requirements that result from the introduction of the computer system
• Compatibility requirements
• Requirements that depend on other systems or organisational processes
• The tool support required to help manage requirements change
◼ Traceability
• Traceability is concerned with the relationships between requirements, their sources and the
system design
• Source traceability
• Links from requirements to stakeholders who proposed these requirements
• Requirements traceability
• Links between dependent requirements
• Design traceability
• Links from the requirements to the design
▪ A traceability matrix
(Figure: a traceability matrix for requirements 1.1 through 3.2; each cell records that the
row requirement depends on (D) or is otherwise related to (R) the column requirement.)
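To make these link types concrete, the sketch below (illustrative names only, not any tool described here) keeps source, requirements and design traceability links in simple maps so that the impact of changing one requirement can be queried.

```java
import java.util.*;

// Minimal traceability store: maps a requirement id to its stakeholder
// sources, dependent requirements and design elements.
class TraceabilityStore {
    private final Map<String, Set<String>> sources = new HashMap<>();    // source traceability
    private final Map<String, Set<String>> dependents = new HashMap<>(); // requirements traceability
    private final Map<String, Set<String>> design = new HashMap<>();     // design traceability

    void addSource(String req, String stakeholder) {
        sources.computeIfAbsent(req, k -> new HashSet<>()).add(stakeholder);
    }
    void addDependency(String req, String dependentReq) {
        dependents.computeIfAbsent(req, k -> new HashSet<>()).add(dependentReq);
    }
    void addDesignLink(String req, String designElement) {
        design.computeIfAbsent(req, k -> new HashSet<>()).add(designElement);
    }

    // Everything that may be affected if 'req' changes: dependent
    // requirements plus the design elements that realise them.
    Set<String> impactOfChange(String req) {
        Set<String> impacted = new HashSet<>(dependents.getOrDefault(req, Set.of()));
        impacted.addAll(design.getOrDefault(req, Set.of()));
        for (String d : dependents.getOrDefault(req, Set.of())) {
            impacted.addAll(design.getOrDefault(d, Set.of()));
        }
        return impacted;
    }
    Set<String> sourcesOf(String req) {
        return sources.getOrDefault(req, Set.of());
    }
}
```

Automated link retrieval of this kind is what the "traceability management" tool support above refers to.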
• Requirements storage
• Requirements should be managed in a secure, managed data store
• We need requirement databases!
• Change management
• The process of change management is a workflow process whose stages can be
defined and information flow between these stages partially automated
• Traceability management
• Automated retrieval of the links between requirements
◼ Requirements change management
________
6. SYSTEM MODELS
◼ Objectives
The objective of this chapter is to introduce a number of system models that may be
developed during the requirements engineering process. When you have studied the chapter,
you will:
▪ Understand why it is important to establish the boundaries of a system and model its
context;
▪ Understand the concepts of behavioural modeling, data modeling and object modeling;
▪ Have been introduced to some of the notations defined in the Unified Modeling Language
(UML) and how these notations may be used to develop system models.
◼ Contents
• Context models
• Behavioural models
• Data models
• Object models
• Structured methods
◼ System models
◼ Different perspectives
◼ Model types
• Data flow model showing how the data is processed at different stages.
• Used to illustrate the operational context of a system - show what lies outside the system
boundaries.
• Social and organisational concerns
• Architectural models show the system and its relationship with other systems.
• Architectural models do not, however, describe the relationships between the other systems
in the environment and the system being specified.
• Process models
• Process models show the overall process and the processes that are supported by the
system
• Data flow models may be used to show the processes and the flow of information
from one process to another
◼ Data-processing models
• Data flow diagrams (DFD) are used to model the system’s data processing
• These show the processing steps as data flows through a system
• Intrinsic part of many analysis methods
• Simple and intuitive notation that customers can understand easily
• (different notations are used in different methods for drawing DFDs)
◼ Elements of a DFD
• Processes
• Change the data. Each process has one or more inputs and outputs
• Data stores
• used by processes to store and retrieve data (files, DBs)
• Data flows
• movement of data among processes and data stores
• External entities
• outside things which are sources or destinations of data to the system
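The four DFD elements can be mimicked directly in code. The sketch below is an illustrative miniature (the order-handling flow and all names are assumptions, not from the text): two processes transform the data, a list stands in for a data store, and the external entity is the caller.

```java
import java.util.*;
import java.util.function.Function;

// A data-flow-style pipeline: each "process" transforms its input and
// passes the result downstream; a List stands in for a data store.
class DfdSketch {
    static List<String> dataStore = new ArrayList<>();    // data store

    static String validate(String order) {                // process 1
        return order.trim().toUpperCase();
    }
    static String price(String order) {                   // process 2
        return order + ":PRICED";
    }
    static void archive(String order) {                   // data flow into the store
        dataStore.add(order);
    }

    static String handle(String rawOrder) {               // external entity -> system
        Function<String, String> flow = ((Function<String, String>) DfdSketch::validate)
                .andThen(DfdSketch::price);
        String result = flow.apply(rawOrder);
        archive(result);
        return result;                                    // system -> external entity
    }
}
```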
◼ State machine models
• These model the behaviour of the system in response to external and internal events.
• They show the system’s responses to stimuli so are often used for modelling real-time
systems.
• State charts are an integral part of the UML and are used to represent state machine models.
◼ State charts
◼ Microwave oven state description
State Description
Waiting The oven is waiting for input. The display shows the current time.
Half power The oven power is set to 300 watts. The display shows ‘Half power’.
Full power The oven power is set to 600 watts. The display shows ‘Full power’.
Set time The cooking time is set to the user’s input value. The display shows the cooking
time selected and is updated as the time is set.
Disabled Oven operation is disabled for safety. Interior oven light is on. Display shows
‘Not ready’.
Enabled Oven operation is enabled. Interior oven light is off. Display shows ‘Ready to
cook’.
Operation Oven in operation. Interior oven light is on. Display shows the timer countdown.
On completion of cooking, the buzzer is sounded for 5 seconds. Oven light is on.
Display shows ‘Cooking complete’ while buzzer is sounding.
◼ Microwave oven stimuli
Stimulus Description
Half power The user has pressed the half power button
Full power The user has pressed the full power button
Timer The user has pressed one of the timer buttons
Number The user has pressed a numeric key
Door open The oven door switch is not closed
Door closed The oven door switch is closed
Start The user has pressed the start button
Cancel The user has pressed the cancel button
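The state and stimulus tables above can be encoded directly as a state machine. The Java sketch below implements a simplified subset of the oven's transitions; the exact transition set chosen here is an assumption for illustration, not the full state chart.

```java
// Minimal state machine for the microwave oven described above.
// Only a representative subset of the transitions is encoded.
class MicrowaveOven {
    enum State { WAITING, HALF_POWER, FULL_POWER, SET_TIME, DISABLED, ENABLED, OPERATION }
    enum Stimulus { HALF_POWER, FULL_POWER, TIMER, DOOR_OPEN, DOOR_CLOSED, START, CANCEL }

    private State state = State.WAITING;

    State getState() { return state; }

    void handle(Stimulus s) {
        switch (s) {
            case HALF_POWER:
                if (state == State.WAITING) state = State.HALF_POWER; break;
            case FULL_POWER:
                if (state == State.WAITING) state = State.FULL_POWER; break;
            case TIMER:
                if (state == State.HALF_POWER || state == State.FULL_POWER) state = State.SET_TIME;
                break;
            case DOOR_OPEN:
                state = State.DISABLED; break;            // safety: always disable when open
            case DOOR_CLOSED:
                if (state == State.DISABLED || state == State.SET_TIME) state = State.ENABLED;
                break;
            case START:
                if (state == State.ENABLED) state = State.OPERATION; break;
            case CANCEL:
                state = State.WAITING; break;
        }
    }
}
```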
◼ Data models
• Used to describe the logical structure of data processed by the system.
• An entity-relation-attribute model sets out the entities in the system, the relationships
between these entities and the entity attributes
• Widely used in database design. Can readily be implemented using relational databases.
◼ Library semantic model
◼ Data dictionaries
• Data dictionaries are lists of all of the names used in the system models. Descriptions of the
entities, relationships and attributes are also included.
• Advantages
• Support name management and avoid duplication;
• Store of organisational knowledge linking analysis, design and implementation;
• Many CASE workbenches support data dictionaries.
Name            Type       Date        Description
authors         Attribute  30.12.2002  The names of the authors of the article who may be due a share of the fee.
Buyer           Entity     30.12.2002  The person or organisation that orders a copy of the article.
fee-payable-to  Relation   29.12.2002  A 1:1 relationship between Article and the Copyright Agency who should be paid the copyright fee.
Address (Buyer) Attribute  31.12.2002  The address of the buyer. This is used to send any paper billing information that is required.
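A data dictionary like the one above is essentially a table keyed by name. This Java sketch (illustrative, not a real CASE workbench API) records each name with its type, description and date, and supports name management by rejecting duplicates.

```java
import java.util.*;

// A data dictionary: every name used in the system models is recorded
// with its kind (Entity, Relation, Attribute), a description and a date.
class DataDictionary {
    record Entry(String name, String kind, String description, String date) {}

    private final Map<String, Entry> entries = new LinkedHashMap<>();

    void define(String name, String kind, String description, String date) {
        if (entries.containsKey(name))
            throw new IllegalArgumentException("Duplicate name: " + name); // name management
        entries.put(name, new Entry(name, kind, description, date));
    }
    Entry lookup(String name) { return entries.get(name); }
    int size() { return entries.size(); }
}
```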
◼ Object models
• Object models describe the system in terms of object classes and their associations.
• An object class is an abstraction over a set of objects with common attributes and the services
(operations) provided by each object.
• Various object models may be produced
• Inheritance models;
• Aggregation models;
• Interaction models.
• Natural ways of reflecting the real-world entities manipulated by the system
• More abstract entities are more difficult to model using this approach
• Object class identification is recognised as a difficult process requiring a deep understanding
of the application domain
• Object classes reflecting domain entities are reusable across systems
◼ Inheritance models
• The UML is a standard representation devised by the developers of widely used object-
oriented analysis and design methods.
• It has become an effective standard for object-oriented modelling.
• Notation
• Object classes are rectangles with the name at the top, attributes in the middle
section and operations in the bottom section;
• Relationships between object classes (known as associations) are shown as lines
linking objects;
• Inheritance is referred to as generalisation and is shown ‘upwards’ rather than
‘downwards’ in a hierarchy.
◼ Library class hierarchy
◼ Multiple inheritance
• Rather than inheriting the attributes and services from a single parent class, a system which
supports multiple inheritance allows object classes to inherit from several super-classes.
• This can lead to semantic conflicts where attributes/services with the same name in different
super-classes have different semantics.
• Multiple inheritance makes class hierarchy reorganisation more complex.
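Java sidesteps this problem by forbidding multiple class inheritance, but the same semantic conflict appears with default methods on interfaces, where the compiler forces an explicit resolution. The class and method names below are illustrative.

```java
// Two super-types supply a method with the same name but different
// semantics; the implementing class must resolve the conflict itself.
interface TalkingDevice {
    default String describe() { return "talking device"; }
}
interface Clock {
    default String describe() { return "clock"; }
}
// Without this override the class would not compile: the two inherited
// describe() methods clash.
class TalkingClock implements TalkingDevice, Clock {
    @Override public String describe() {
        return TalkingDevice.super.describe() + " + " + Clock.super.describe();
    }
}
```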
▪ Multiple inheritance
◼ Object aggregation
• An aggregation model shows how classes that are collections are composed of other classes.
• Aggregation models are similar to the part-of relationship in semantic data models.
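A minimal aggregation sketch in Java, with illustrative names: the collection class is composed of parts that exist as objects in their own right.

```java
import java.util.*;

// Aggregation ("part-of"): a StudyPack is a collection composed of
// other objects which can also exist independently of it.
class Assignment {
    final String title;
    Assignment(String title) { this.title = title; }
}
class LectureNotes {
    final String topic;
    LectureNotes(String topic) { this.topic = topic; }
}
class StudyPack {
    private final List<Assignment> assignments = new ArrayList<>();
    private final List<LectureNotes> notes = new ArrayList<>();

    void add(Assignment a) { assignments.add(a); }
    void add(LectureNotes n) { notes.add(n); }
    int parts() { return assignments.size() + notes.size(); }
}
```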
◼ Object aggregation
• A behavioural model shows the interactions between objects to produce some particular
system behaviour that is specified as a use-case.
• Sequence diagrams (or collaboration diagrams) in the UML are used to model interaction
between objects.
6.5 STRUCTURED METHODS
◼ Method weaknesses
◼ CASE workbenches
• A coherent set of tools that is designed to support related software process activities such as
analysis, design or testing.
• Analysis and design workbenches support system modelling during both requirements
engineering and system design.
• These workbenches may support a specific design method or may provide support for creating
several different types of system model.
• Diagram editors
• Model analysis and checking tools
• Repository and associated query language
• Data dictionary
• Report definition and generation tools
• Forms definition tools
• Import/export translators
• Code generation tools
______
7. DESIGN ENGINEERING
• The goal of design engineering is to produce a model or representation that exhibits firmness,
commodity, and delight. To accomplish this, a designer must practice diversification and then
convergence.
• Diversification and convergence demand intuition and judgment, qualities that are based on
experience in building similar entities.
• Principles and heuristics that guide the way in which the model evolves.
• A set of criteria that enables quality to be judged.
• A process of iteration that ultimately leads to a final design representation.
• Design engineering for computer software changes continually as new methods, better
analysis, and broader understanding evolve. Even today, most software design methodologies
lack the depth, flexibility, and quantitative nature that are normally associated with more
classical engineering design disciplines.
• The software design is an iterative process through which requirements are translated into a
blueprint for constructing the software.
• Throughout the design process, the quality of the evolving design is assessed with a series
of formal technical reviews or design walkthroughs. Three characteristics serve as a guide
for the evaluation of a good design:
• The design must implement all of the explicit requirements contained in the analysis
model, and it must accommodate all of the implicit requirements desired by the
customer.
• The design must be a readable, understandable guide for those who generate code
and for those who test and subsequently support the software.
• The design should provide a complete picture of the software, addressing the data,
functional, and behavioral domains from an implementation perspective.
◼ Quality Guidelines
In order to evaluate the quality of a design representation, we must establish technical criteria
for good design.
• A design should be modular; that is, the software should be logically partitioned into elements
or subsystems.
• A design should contain distinct representations of data, architecture, interfaces, and
components.
• A design should lead to data structures that are appropriate for the classes to be implemented
and are drawn from recognizable data patterns.
• A design should lead to components that exhibit independent functional characteristics.
• A design should lead to interfaces that reduce the complexity of connections between
components and with the external environment.
• A design should be derived using a repeatable method that is driven by information obtained
during software requirements analysis.
• A design should be represented using a notation that effectively communicates its meaning.
◼ Quality Attributes
Hewlett-Packard developed a set of software quality attributes named FURPS: Functionality,
Usability, Reliability, Performance and Supportability. The FURPS quality attributes represent
a target for all software design.
• Functionality is assessed by evaluating the feature set and capabilities of the program, the
generality of the functions that are delivered, and the security of the overall system.
• Reliability is evaluated by measuring the frequency and severity of failure, the accuracy of
output results, the mean-time-to-failure (MTTF), the ability to recover from failure and the
predictability of the program.
• Supportability combines the ability to extend the program (extensibility), adaptability, and
serviceability (these three attributes represent a more common term, maintainability), along
with testability, compatibility, and configurability.
Fundamental software design concepts provide the necessary framework for “getting it
right.”
• Architecture – provides the overall structure of the software and the ways in which the
structure provides conceptual integrity for a system. The goal of software design is to derive
an architectural rendering of a system, which serves as a framework from which more detailed
design activities can be conducted.
• Process Model – Focuses on design of business or technical process that system must
accommodate.
• Patterns – A pattern is a named nugget of insight which conveys the essence of a proven
solution to a recurring problem within a certain context amidst competing concerns. A design
pattern provides a description that enables a designer to determine whether the pattern is
applicable to the current work, whether it can be reused, and whether it can serve as a guide
for developing a similar pattern.
• Modularity – the software is divided into separately named and addressable components
called modules that are integrated to satisfy problem requirements.
• Information Hiding – modules should be specified and designed so that information contained
within a module is inaccessible to other modules that have no need for such information.
Information hiding provides greatest benefits when modifications are required during testing
and software maintenance.
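Information hiding can be illustrated with a small Java module (an illustrative example, not taken from the text): clients see only push, pop and size, while the array representation stays private and can be replaced, say by a linked list, without touching any client code.

```java
// Information hiding: the representation of the stack is inaccessible
// to other modules; only the operations are visible.
class BoundedStack {
    private int[] items;      // hidden representation
    private int top = 0;

    BoundedStack(int capacity) { items = new int[capacity]; }

    void push(int v) {
        if (top == items.length)
            throw new IllegalStateException("stack full");
        items[top++] = v;
    }
    int pop() {
        if (top == 0)
            throw new IllegalStateException("stack empty");
        return items[--top];
    }
    int size() { return top; }
}
```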
• Design Classes – describe elements of the problem domain, focusing on aspects of the
problem that are user- or customer-visible. The software team must define a set of design
classes that
(1) Refine the analysis classes by providing design detail that will enable the classes to be
implemented and
(2) Create a new set of design classes that implement a software infrastructure to support the
business solution. Five different types of design classes, each representing a different layer
of the design architecture, are suggested:
• User Interface classes define all abstractions that are necessary for Human Computer
Interaction (HCI). In many cases, HCI occurs within the context of a metaphor (e.g., a
checkbook, an order form, a fax machine) and the design classes for the interface may
be visual representations of the elements of the metaphor.
• Business domain classes are often refinements of the analysis classes defined earlier.
The classes identify the attributes and services (methods) that are required to
implement some element of the business domain.
• Persistent classes represent data stores (e.g., a database) that will persist beyond the
execution of the software.
• System classes implement software management and control functions that enable
the system to operate and communicate within its computing environment and with
the outside world.
• The design model can be viewed in two different dimensions. The process dimension
indicates the evolution of the design model as design tasks are executed as part of the
software process. The abstraction dimension represents the level of detail as each element
of the analysis model is transformed into a design equivalent and then refined iteratively.
(Figure: the design model viewed along the process dimension and the abstraction dimension;
the analysis model sits at the high end of the abstraction dimension.)
• Like other software engineering activities, data design creates a model of data and/or
information that is represented at a high level of abstraction.
• At the program component level, the design of data structures and the associated
algorithms required to manipulate them is essential to the creation of high-quality
applications.
• At the application level, the translation of a data model into a database is pivotal to
achieving the business objectives of a system.
• At the business level, the collection of information stored in disparate databases and
reorganized into a “data warehouse” enables data mining or knowledge discovery that
can have an impact on the success of the business itself.
• The architectural design for software is equivalent to the floor plan of a house. The
floor plan depicts the overall layout of the rooms, their size, shape, and relationship to
one another, and the doors and windows that allow movement into and out of the rooms.
The floor plan gives us an overall view of the house. Architectural design elements give us
an overall view of the software.
The architectural model is derived from:
(1) information about the application domain for the software to be built;
(2) specific analysis model elements such as data flow diagrams or analysis classes, their
relationships and collaborations for the problem at hand; and
(3) the availability of architectural patterns and styles.
• The interface design for software is equivalent to a set of detailed drawings for the
doors, windows and external utilities of a house. These drawings depict the size and shape
of doors and windows, the manner in which they operate, and the way in which utility
connections (e.g., water, electrical, gas, and telephone) come into the house and are
distributed among the rooms depicted in the floor plan.
There are three important elements of interface design: (1) the user interface (UI);
(2) external interfaces to other systems, devices, or networks; and (3) internal interfaces
between various design components. These interface design elements allow the software to
communicate externally and enable internal communication and collaboration among the
components that populate the software architecture.
(Figure: UML interface representation of ControlPanel, with attributes LCDdisplay,
LEDindicators, keyPadCharacteristics, speaker and wirelessInterface and operations
readKeyStroke(), decodeKey(), displayStatus(), lightLEDs() and sendControlMsg(); a KeyPad
interface providing readKeystroke() and decodeKey() is shared with MobilePhone and
WirelessPDA.)
• The component-level design for software is equivalent to a set of detailed drawings for
each room in a house. These drawings depict wiring and plumbing within each room, the
location of electrical receptacles and switches, faucets, sinks, showers, tubs, drains,
cabinets, and closets. They also describe the flooring to be used, the moldings to be
applied, and every other detail associated with a room. The component-level design for
software fully describes the internal detail of each software component.
(Figure: component-level design showing a SensorManagement component and the Sensor class.)
• Deployment-level design elements indicate how software functionality and subsystems will
be allocated within the physical computing environment that will support the software.
(Figure: deployment diagram allocating the Security, Surveillance, homeManagement and
communication functionality across homeownerAccess and externalAccess environments.)
_______
• Design is an activity concerned with making major decisions, often of a structural nature.
8.1 SOFTWARE ARCHITECTURE
• Effective software architecture and its explicit representation and design have become
dominant themes in software engineering.
◼ Why Architecture?
• The architecture is not the operational software. Rather, it is a representation that enables a
software engineer to:
(1) analyze the effectiveness of the design in meeting its stated requirements,
(2) consider architectural alternatives at a stage when making design changes is still
relatively easy, and
(3) reduce the risks associated with the construction of the software.
The data design action translates data objects defined as part of the analysis model into data
structures at the software component level and, when necessary, a database architecture at the
application level.
• The systematic analysis principles applied to function and behavior should also
be applied to data.
• All data structures and the operations to be performed on each should be
identified.
• A data dictionary should be established and used to define both data and
program design.
• Low level data design decisions should be deferred until late in the design
process.
• The representation of data structure should be known only to those modules
that must make direct use of the data contained within the structure.
• A library of useful data structures and the operations that may be applied to
them should be developed.
• A software design and programming language should support the specification
and realization of abstract data types.
Each architectural style describes a system category that encompasses:
(1) a set of components (e.g., a database, computational modules) that perform a function
required by a system;
(2) a set of connectors that enable communication, coordination and cooperation among
components;
(3) constraints that define how components can be integrated to form the system; and
(4) semantic models that enable a designer to understand the overall properties of
a system by analyzing the known properties of its constituent parts.
• Data-centered architectures
• Data flow architectures
• Call and return architectures
• Object-oriented architectures
• Layered architectures
◼ Data-Centered Architecture
• A data store (e.g., a file or database) resides at the center of this architecture and is accessed
frequently by other components that update, add, delete, or otherwise modify data within
the store. The following figure illustrates a typical data-centered style.
• This architectural style enables a software designer (system architect) to achieve a program
structure that is relatively easy to modify and scale. Two substyles exist within this category:
◼ Object-Oriented Architecture
• The components of a system encapsulate data and the operations that must be applied to
manipulate the data. Communication and coordination between components are accomplished
via message passing.
◼ Layered Architecture
• The basic structure of a layered architecture is illustrated in the following figure. A number of
different layers are defined, each accomplishing operations that progressively become closer
to the machine instruction set. At the outer layer, components service user interface
operations. At the inner layer, components perform operating system interfacing.
Intermediate layers provide utility services and application software functions.
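A toy Java rendering of this structure, with illustrative layer and operation names: each layer calls only the layer directly beneath it, moving from user-facing operations toward the machine.

```java
// Layered architecture in miniature: UI layer -> service layer -> core.
class CoreLayer {                       // innermost: closest to the machine
    String store(String data) { return "stored(" + data + ")"; }
}
class ServiceLayer {                    // intermediate: utility services
    private final CoreLayer core = new CoreLayer();
    String save(String data) { return core.store(data.trim()); }
}
class UiLayer {                         // outermost: user interface operations
    private final ServiceLayer service = new ServiceLayer();
    String onSubmit(String input) { return service.save(input); }
}
```

Because each layer depends only on the one below, an inner layer can be replaced (for example, swapping the storage mechanism in CoreLayer) without changing the outer layers.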
8.4 ARCHITECTURAL PATTERNS
• Persistence—Data persists if it survives past the execution of the process that created it. Two
patterns are common:
• a database management system pattern that applies the storage and
retrieval capability of a DBMS to the application architecture
• an application level persistence pattern that builds persistence features into
the application architecture
◼ Architectural Context
• A system engineer must model context. A system context diagram accomplishes this
requirement by representing the flow of information into and out of the system, the user
interface and relevant support processing. At the architectural design level, a software
architect uses an architectural context diagram (ACD) to model the manner in which software
interacts with entities external to its boundaries. The generic structure of the architectural
context diagram is illustrated in Figure.
(Figure: architectural context diagram for the SafeHome security function; the target system
(Security Function) uses the SafeHome product's control panel and the sensors, is used by the
homeowner, and has the internet-based surveillance function as a peer.)
◼ Archetypes
• For the SafeHome home security function, we might define the following archetypes:
• Node – Represents a cohesive collection of input and output elements of the home
security function. For example, a node might be composed of
(1) various sensors and
(2) a variety of alarm (output) indicators.
• Indicator – An abstraction that represents all mechanisms (e.g., alarm siren, flashing
lights, bell) for indicating that an alarm condition is occurring.
• Controller – An abstraction that depicts the mechanism that allows the arming or
disarming of a node. If controllers reside on a network, they have the ability to
communicate with one another.
(Figure 10.7: UML relationships for the SafeHome security function archetypes, including
Controller and Node.)
◼ Refining the Architecture into Components
(adapted from [BOS00])
• Continuing the SafeHome home security function example, we might define the set of top-
level components that address the following functionality.
• External communication management – coordinates communication of the security
function with external entities, for example, internet-based systems. External alarm
notification.
• Control panel processing – manages all control panel functionality.
• Detector management – coordinates access to all detectors attached to the system.
• Alarm processing – verifies and acts on all alarm conditions.
(Figure: overall architectural structure for SafeHome with top-level components; the
SafeHome Executive performs function selection, the External Communication Management
component handles communication with external entities, and the Security function is refined
into control panel processing, detector management and alarm processing, supported by keypad
processing, CP display functions, a scheduler, phone communication and the sensors.)
_________
9. OBJECT-ORIENTED DESIGN
OBJECTIVES
• To explain how a software design may be represented as a set of interacting objects that
manage their own state and operations.
• To describe the activities in the object-oriented design process.
• To introduce various models that can be used to describe an object-oriented design.
• To show how the UML may be used to represent these models.
• Not about implementation, but about design.
◼ TOPICS COVERED
◼ OBJECT-ORIENTED DEVELOPMENT
◼ Characteristics of OOD
◼ Interacting objects
◼ Advantages of OOD
• Objects are entities in a software system which represent instances of real-world and
system entities.
• Object classes are templates for objects. They may be used to create objects.
• Object classes may inherit attributes and services from other object classes.
An object is an entity that has a state and a defined set of operations which operate on
that state. The state is represented as a set of object attributes. The operations associated
with the object provide services to other objects (clients) which request these services
when some computation is required.
Objects are created according to some object class definition. An object class definition
serves as a template for objects. It includes declarations of all the attributes and services
which should be associated with an object of that class.
• Several different notations for describing object-oriented designs were proposed in the 1980s
and 1990s.
• The Unified Modeling Language is an integration of these notations.
• It describes notations for a number of different models that may be produced during OO
analysis and design.
• It is now a de facto standard for OO modelling.
◼ Employee object class (UML)
◼ Object communication
◼ Message examples
v = circularBuffer.Get () ;
// Call the method associated with a
// thermostat object that sets the
// temperature to be maintained
thermostat.setTemp (20) ;
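The message examples above assume two server objects. A minimal implementation of both is sketched below so the fragment can actually run; the buffer internals and sizes are assumptions for illustration.

```java
// Server objects for the message examples: a circular buffer that
// answers Get() requests and a thermostat that accepts setTemp().
class CircularBuffer {
    private final int[] values = new int[8];
    private int head = 0, tail = 0, count = 0;

    void Put(int v) {
        values[tail] = v;
        tail = (tail + 1) % values.length;
        if (count < values.length) count++;
        else head = (head + 1) % values.length;   // overwrite oldest when full
    }
    int Get() {                                   // the circularBuffer.Get() message
        int v = values[head];
        head = (head + 1) % values.length;
        count--;
        return v;
    }
}
class Thermostat {
    private int temp;
    void setTemp(int t) { temp = t; }             // the thermostat.setTemp(20) message
    int getTemp() { return temp; }
}
```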
• Objects are members of classes that define attribute types and operations.
• Classes may be arranged in a class hierarchy where one class (a super-class) is a generalisation
of one or more other classes (sub-classes).
• A sub-class inherits the attributes and operations from its super class and may add new
methods or attributes of its own.
• Generalisation in the UML is implemented as inheritance in OO programming languages.
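These points can be shown in a few lines of Java. The library item classes below are illustrative (their attributes and operations are assumed), loosely echoing the library class hierarchy discussed earlier.

```java
// Generalisation as inheritance: the sub-class inherits attributes and
// operations from its super-class and may add or refine its own.
class LibraryItem {
    private final String catalogueNumber;
    LibraryItem(String catalogueNumber) { this.catalogueNumber = catalogueNumber; }
    String identify() { return "Item " + catalogueNumber; }
}
class Book extends LibraryItem {
    private final String author;                  // attribute added by the sub-class
    Book(String catalogueNumber, String author) {
        super(catalogueNumber);
        this.author = author;
    }
    @Override String identify() {                 // operation refined by the sub-class
        return super.identify() + " by " + author;
    }
}
```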
◼ A generalisation hierarchy
◼ Advantages of inheritance
• Object classes are not self-contained. They cannot be understood without reference to their
super-classes.
• The inheritance graphs of analysis, design and implementation have different functions and
should be separately maintained.
9.2 AN OBJECT-ORIENTED DESIGN PROCESS
◼ Process stages
• The area computer system validates the collected data and integrates it with the data
from different sources. The integrated data is archived and, using data from this
archive and a digitised map database a set of local weather maps is created. Maps
may be printed for distribution on a special-purpose map printer or may be displayed
in a number of different formats.
• Develop an understanding of the relationships between the software being designed and its
external environment
• System context
• A static model that describes other systems in the environment. Use a subsystem
model to show other systems. Following slide shows the systems around the weather
station system.
• Model of system use
• A dynamic model that describes how the system interacts with its environment. Use
use-cases to show interactions
◼ Layered architecture
◼ Use-case models
• Use-case models are used to represent each interaction with the system.
• A use-case model shows the system features as ellipses and the interacting entity as a stick
figure.
▪ Use-case description
System: Weather station
Use-case: Report
Data: The weather station sends a summary of the weather data that has been collected
from the instruments in the collection period to the weather data collection system. The data
sent are the maximum, minimum and average ground and air temperatures; the maximum,
minimum and average air pressures; the maximum, minimum and average wind speeds; the
total rainfall; and the wind direction as sampled at 5-minute intervals.
Stimulus: The weather data collection system establishes a modem link with the weather
station and requests transmission of the data.
Response: The summarised data is sent to the weather data collection system.
Comments: Weather stations are usually asked to report once per hour, but this
frequency may differ from one station to another and may be modified in future.
◼ Architectural design
• Once interactions between the system and its environment have been understood, you use
this information for designing the system architecture.
• A layered architecture as discussed in Chapter 11 is appropriate for the weather station
• Interface layer for handling communications;
• Data collection layer for managing instruments;
• Instruments layer for collecting data.
• There should normally be no more than 7 entities in an architectural model.
• Identifying objects (or object classes) is the most difficult part of object oriented design.
• There is no 'magic formula' for object identification. It relies on the skill, experience and
domain knowledge of system designers.
• Object identification is an iterative process. You are unlikely to get it right first time.
◼ Approaches to identification
• Use a grammatical approach based on a natural language description of the system (used in
the HOOD method).
• Base the identification on tangible things in the application domain.
• Use a behavioural approach and identify objects based on what participates in what
behaviour.
• Use a scenario-based analysis. The objects, attributes and methods in each scenario are
identified.
• When a command is issued to transmit the weather data, the weather station processes
and summarises the collected data. The summarised data is transmitted to the mapping
computer when a request is received.
◼ Weather station object classes
• WeatherStation
• WeatherData, with operations collect () and summarise ()
◼ Design models
• Design models show the objects and object classes and relationships between these entities.
• Static models describe the static structure of the system in terms of object classes and
relationships.
• Dynamic models describe the dynamic interactions between objects.
◼ Examples of design models
• Sub-system models that show logical groupings of objects into coherent subsystems.
• Sequence models that show the sequence of object interactions.
• State machine models that show how individual objects change their state in response to
events.
• Other models include use-case models, aggregation models, generalisation models, etc.
◼ Subsystem models
• Shows how the design is organised into logically related groups of objects.
• In the UML, these are shown using packages - an encapsulation construct. This is a logical
model. The actual organisation of objects in the system may be different.
• Sequence models show the sequence of object interactions that take place
• Objects are arranged horizontally across the top;
• Time is represented vertically so models are read top to bottom;
• Interactions are represented by labelled arrows; different styles of arrow represent
different types of interaction;
• A thin rectangle in an object lifeline represents the time when the object is the
controlling object in the system.
• [Sequence diagram: the interactions request (report), acknowledge (), report () and
summarise () are exchanged between the data collection system and the weather station
objects.]
◼ Statecharts
• Show how objects respond to different service requests and the state transitions triggered by
these requests
• Object interfaces have to be specified so that the objects and other components can be
designed in parallel.
• Designers should avoid designing the interface representation but should hide this in the
object itself.
• Objects may have several interfaces which are viewpoints on the methods provided.
• The UML uses class diagrams for interface specification but Java may also be used.
◼ Weather station interface
interface WeatherStation {
// operations such as reportWeather () would be declared here
} //WeatherStation
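As a hypothetical illustration of specifying this interface in Java, the sketch below uses operation names that appear later in these notes (reportWeather, calibrate, test, startup, shutdown); the interface name, parameter types and return types are assumptions for illustration, not part of the original design.

```java
// Hypothetical Java sketch of a weather station interface. Operation names
// follow the class model shown later in these notes; all signatures
// (String parameters, String return type) are assumed for illustration.
interface WeatherStationInterface {
    void startup(String instruments);
    String reportWeather();              // summarised weather data
    void calibrate(String instruments);
    void test();
    void shutdown(String instruments);
}

// Minimal stub implementation, showing how clients can be designed against
// the interface while the real object is developed in parallel.
class StubWeatherStation implements WeatherStationInterface {
    public void startup(String instruments) { }
    public String reportWeather() { return "no data collected yet"; }
    public void calibrate(String instruments) { }
    public void test() { }
    public void shutdown(String instruments) { }
}
```

Because the interface hides the representation, the stub can later be replaced by the real weather station object without changing client code.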
• Hiding information inside objects means that changes made to an object do not affect other
objects in an unpredictable way.
• Assume pollution monitoring facilities are to be added to weather stations. These sample the
air and compute the amount of different
pollutants in the atmosphere.
• Pollution readings are transmitted with weather data.
◼ Changes required
◼ Pollution monitoring
[Class diagram: WeatherStation (attribute: identifier; operations: reportWeather (),
reportAirQuality (), calibrate (instruments), test (), startup (instruments),
shutdown (instruments)) is associated with an Air quality class (attributes: NOData,
smokeData, benzeneData; operations: collect (), summarise ()) and with a BenzeneMeter
instrument class.]
10. PERFORMING USER INTERFACE DESIGN
◼ User interface design creates an effective communication medium between a human and a
computer. Following a set of interface design principles, design identifies interface objects
and actions and then creates a screen layout that forms the basis for a user interface
prototype.
◼ Interface Design
• Questions to ask of any interface: Is it easy to learn? Easy to use? Easy to understand?
• Common failings: no guidance/help, no context sensitivity, poor response, an
arcane/unfriendly style.
• The overall process for analyzing and designing a user interface begins with the creation of
different models of system function.
• Mental model (system perception) — the user’s mental image of what the interface is
• Implementation model — the interface “look and feel” coupled with supporting information
that describe interface syntax and semantics.
• The analysis and design process for user interfaces is iterative and can be represented using
a spiral model.
• The user interface analysis and design process encompasses four distinct framework
activities.
• The information gathered as part of the analysis activity is used to create an analysis model for
the interface.
• The goal of interface design is to define a set of interface objects and actions (and their screen
representations) that enable a user to perform all defined tasks in a manner that meets every
usability goal defined for the system.
• Validation focuses on
(1) the ability of the interface to implement every user task correctly, to accommodate
all task variations, and to achieve all general user requirements;
(2) the degree to which the interface is easy to use and easy to learn;
(3) the user's acceptance of the interface as a useful tool in their work.
◼ User Analysis
• Are users trained professionals, technicians, clerical, or manufacturing workers?
• What level of formal education does the average user have?
• Are the users capable of learning from written materials or have they expressed a desire
for classroom training?
• Are users expert typists or keyboard phobic?
• What is the age range of the user community?
• Will the users be represented predominately by one gender?
• How are users compensated for the work they perform?
• Do users work normal office hours or do they work until the job is done?
• Is the software to be an integral part of the work users do or will it be used only
occasionally?
• What is the primary spoken language among users?
• What are the consequences if a user makes a mistake using the system?
• Are users experts in the subject matter that is addressed by the system?
• Do users want to know about the technology that sits behind the interface?
◼ Swimlane Diagram
• [Swimlane diagram for a prescription-refill function: the patient requests that a
prescription be refilled, receives an out-of-stock notification, a pick-up time/date, or a
request to contact the physician, and picks up the prescription; the pharmacist
determines the status of the prescription (refills remaining, out of stock, or none) and
fills the prescription; the physician approves the refill or evaluates alternative
medication, depending on whether an alternative is available, in stock, or out of stock.]
• Are different types of data assigned to consistent geographic locations on the screen (e.g.,
photos always appear in the upper right hand corner)?
• Can the user customize the screen location for content?
• Is proper on-screen identification assigned to all content?
• If a large report is to be presented, how should it be partitioned for ease of understanding?
• Will mechanisms be available for moving directly to summary information for large collections
of data?
• Will graphical output be scaled to fit within the bounds of the display device that is used?
• How will color be used to enhance understanding?
• How will error messages and warnings be presented to the user?
• Using information developed during interface analysis, define interface objects and actions
(operations).
• Define events (user actions) that will cause the state of the user interface to change. Model
this behavior.
• Depict each interface state as it will actually look to the end-user.
• Indicate how the user interprets the state of the system from information provided through
the interface.
A design pattern is an abstraction that prescribes a design solution to a specific, well-bounded
design problem. User interface patterns have been catalogued for:
• The complete UI
• Page layout
• Forms and input
• Tables
• Direct data manipulation
• Navigation
• Searching
• Page elements
• e-Commerce
◼ Design Issues
• Response time
• Help facilities
• Error handling
• Menu and command labeling
• Application accessibility
• Internationalization
10.5 DESIGN EVALUATION
• Design evaluation determines whether the interface meets the needs of the user. Evaluation
can span a formality spectrum that ranges from an informal “test drive,” in which a user
provides impromptu feedback, to a formally designed study that uses statistical methods
for the evaluation of questionnaires completed by a population of end-users.
• After the design model has been completed, a first level prototype is created.
• Once the design model of the interface has been created, a number of evaluation criteria can
be applied during early design reviews.
• The length and complexity of the written specification of the system and its interface
provide an indication of the amount of learning required by users of the system.
• The number of user tasks specified and the average number of actions per task
provide an indication of interaction time and the overall efficiency of the system.
• The number of actions, tasks and system states indicated by the design model implies
the memory load on users of the system.
• Interface style, help facilities and error handling protocol provide indication of the
complexity of the interface and the degree to which it will be accepted by the user.
• Questionnaires distributed to users may solicit (1) simple yes/no responses, (2) numeric
responses, (3) scaled (subjective) responses, (4) Likert scales, (5) percentage (subjective)
responses, or (6) open-ended feedback.
• If quantitative data are desired, a form of time study analysis can be conducted.
11. TESTING STRATEGIES
◼ Software Testing
• Testing is the process of exercising a program with the specific intent of finding errors prior
to delivery to the end user.
• A number of software testing strategies have been proposed in the literature. All provide the
software developer with a template for testing and all have the following generic
characteristics:
• To perform effective testing, a software team should conduct effective formal technical
reviews. By doing this, many errors will be eliminated before testing commences.
• Testing begins at the component level and works “outward” toward the integration of the
entire computer-based system.
• Different testing techniques are appropriate at different points in time.
• Testing is conducted by the developer of the software and (for large projects) an
independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in
any testing strategy.
◼ Verification and Validation
• Verification refers to the set of activities that ensure that software correctly implements a
specific function. Validation refers to a different set of activities that ensure that the
software that has been built is traceable to customer requirements.
• Verification and validation encompass a wide array of SQA activities that include formal
technical reviews, quality and configuration audits, performance monitoring, simulation,
feasibility study, documentation review, database review, algorithm analysis, development
testing, usability testing, qualification testing, and installation testing.
• There are often a number of misconceptions about testing:
(1) that the developer of software should do no testing at all,
(2) that the software should be “tossed over the wall” to strangers who will test it
mercilessly,
(3) that testers get involved with the project only when the testing steps are about to
begin.
• In many cases, the developer also conducts integration testing—a testing step that leads
to the construction of the complete software architecture. Only after the software
architecture is complete does an independent test group become involved.
• The role of an independent test group (ITG) is to remove the inherent problems
associated with letting the builder test the thing that has been built.
• The developer and the ITG work closely throughout a software project to ensure that
thorough tests will be conducted. While testing is conducted, the developer must be
available to correct errors that are uncovered.
◼ STRATEGIC ISSUES
• A software team could wait until the system is fully constructed and then conduct
tests on the overall system in hopes of finding errors. This approach, although
appealing, simply does not work. It will result in buggy software that disappoints the
customer and end-user.
• A software engineer could conduct tests on a daily basis, whenever any part of the
system is constructed. This approach, although less appealing to many, can be very
effective.
◼ Unit Testing
• Unit testing focuses verification effort on the smallest unit of software design- the software
component or module.
▪ Unit Test Environment
◼ Integration Testing
◼ Bottom-Up Integration
• Bottom-up integration testing as its name implies, begins construction and testing with
atomic modules.
• A bottom-up integration strategy may be implemented with the following four steps:
• Low-level components are combined into clusters that perform a specific
software subfunction.
• A driver is written to coordinate test case input and output.
• The cluster is tested.
• Drivers are removed and clusters are combined moving upward in the program
structure.
◼ Regression testing
• Regression testing may be conducted manually, by re-executing a subset of all test cases or
using automated capture/playback tools.
• The regression test suite contains three different classes of test cases.
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected by the
change.
• Tests that focus on the software components that have been changed.
◼ Smoke Testing
• A common approach for creating “daily builds” for product software.
• Smoke testing steps:
• Software components that have been translated into code are integrated into a
“build.”
• A build includes all data files, libraries, reusable modules, and engineered
components that are required to implement one or more product functions.
• A series of tests is designed to expose errors that will keep the build from properly
performing its function.
• The intent should be to uncover “show stopper” errors that have the highest
likelihood of throwing the software project behind schedule.
• The build is integrated with other builds and the entire product (in its current form)
is smoke tested daily.
• The integration approach may be top down or bottom up.
• Black-Box Testing alludes to tests that are conducted at the software interface. A black-box
test examines some fundamental aspect of a system with little regard for the internal logical
structure of the software.
• How is functional validity tested?
• How is system behavior and performance tested?
• What classes of input will make good test cases?
• Is the system particularly sensitive to certain input values?
• How are the boundaries of a data class isolated?
• What data rates and data volume can the system tolerate?
• What effect will specific combinations of data have on system operation?
◼ Why Cover?
• Logic errors and incorrect assumptions are inversely proportional to a path’s execution
probability
• We often believe that a path is not likely to be executed when, in fact, reality is often
counterintuitive.
• Typographical errors are random; it’s likely that untested paths will contain some.
• Debugging occurs as a consequence of successful testing. That is, when a test case uncovers
an error, debugging is an action that results in the removal of the error.
• The symptom and the cause may be geographically remote. That is, the symptom
may appear in one part of a program, while the cause may actually be located at
a site that is far removed. Highly coupled components exacerbate this situation.
• The symptom may disappear when another error is corrected.
• The symptom may actually be caused by non-errors.
• The symptom may be caused by human error that is not easily traced.
• The symptom may be a result of timing problems, rather than processing
problems.
• It may be difficult to accurately reproduce input conditions.
• The symptom may be intermittent. This is particularly common in embedded
systems that couple hardware and software inextricably.
• The symptom may be due to causes that are distributed across a number of tasks
running on different processors.
• During debugging, we encounter errors that range from mildly annoying to catastrophic.
◼ Psychological Considerations
• Debugging is one of the more frustrating parts of programming. It has elements of
problem solving or brain teasers, coupled with the annoying recognition that you
have made a mistake. Heightened anxiety and an unwillingness to accept the
possibility of error increase the difficulty of the task. Fortunately, there is a great sigh of
relief and a lessening of tension when the bug is ultimately corrected.
◼ Debugging Strategies
• Regardless of the approach that is taken, debugging has one overriding objective: to
find and correct the cause of a software error. The objective is realized by a
combination of systematic evaluation, intuition and luck.
• Debugging tactics:
• The brute force category of debugging is probably the most common and
least efficient method of isolating the cause of a software error.
• Backtracking is a fairly common debugging approach that can be used
successfully in small programs.
• Cause elimination is manifested by induction or deduction and introduces the
concept of binary partitioning.
• Automated debugging
• Once a bug has been found, it must be corrected. But, as we have already noted, the
correction of a bug can introduce other errors and therefore do more harm than good. Van
Vleck suggests three simple questions that every software engineer should ask before
making the “correction” that removes the cause of a bug: Is the cause of the bug
reproduced in another part of the program? What “next bug” might be introduced by the
fix? What could we have done to prevent this bug in the first place?
_______
12. PRODUCT METRICS
12.1 SOFTWARE QUALITY
• Software quality is conformance to explicitly stated functional and performance
requirements, explicitly documented development standards, and implicit
characteristics that are expected of all professionally developed software.
• The definition serves to emphasize three important points:
• Software requirements are the foundation from which quality is measured. Lack
of conformance to requirements is lack of quality.
• Specified standards define a set of development criteria that guide the manner
in which software is engineered. If the criteria are not followed, lack of quality
will almost surely result.
• There is a set of implicit requirements that often goes unmentioned. If software
conforms to its explicit requirements but fails to meet implicit requirements,
software quality is suspect.
• Software quality is a complex mix of factors that will vary across different applications
and the customers who request them.
• McCall’s quality factors were proposed in the early 1970s. They are as valid today as they
were in that time. It’s likely that software built to conform to these factors will exhibit high
quality well into the 21st century, even if there are dramatic changes in technology.
• Function-based metrics: use the function point as a normalizing factor or as a measure of the
“size” of the specification
• Specification metrics: used as an indication of quality by measuring number of requirements
by type
◼ Function-Based Metrics
• The function point metric (FP), first proposed by Albrecht, can be used effectively as a means
for measuring the functionality delivered by a system.
• Function points are derived using an empirical relationship based on countable (direct)
measures of software's information domain and assessments of software complexity
• Information domain values are defined in the following manner:
• Number of external inputs (EIs)
• Number of external outputs (EOs)
• Number of external inquiries (EQs)
• Number of internal logical files (ILFs)
• Number of external interface files (EIFs)
• Function Points
• To compute function points (FP), the following relationship is used:
FP = count total × [0.65 + 0.01 × Σ(Fi)]    (1)
Where count total is the sum of all FP entries obtained from the figure.
[Table: each information domain value (EIs, EOs, EQs, ILFs, EIFs) is counted and
multiplied by a simple, average or complex weighting factor; the weighted entries
are summed to give count total.]
• The Fi (i = 1 to 14) are value adjustment factors based on responses to a set of questions
about the system.
The constant values in equation (1) and the weighting factors that are applied to information
domain counts are determined empirically.
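A minimal sketch of equation (1), assuming the weighted information-domain entries have already been summed into count total; the sample counts used below are illustrative, not measurements from a real project.

```java
// Sketch of the function point computation FP = count total x [0.65 + 0.01 x sum(Fi)].
// countTotal and sumFi values are illustrative only.
public class FunctionPoints {
    // countTotal: weighted sum of the five information domain counts
    // sumFi: sum of the 14 value adjustment factors, each rated 0..5
    public static double compute(double countTotal, double sumFi) {
        return countTotal * (0.65 + 0.01 * sumFi);
    }

    public static void main(String[] args) {
        // e.g. count total of 50 with all 14 factors rated "average" (3): sumFi = 42
        System.out.println(compute(50, 42)); // ≈ 53.5
    }
}
```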
• Design metrics for computer software, like all other software metrics, are not perfect. Debate
continues over their efficacy and the manner in which they should be applied.
• Data complexity provides an indication of the complexity in the internal interface for
a module i and is defined as
D(i) = v(i) / [fout(i) + 1]
Where v(i) is the number of input and output variables that are passed to and
from module i.
Finally, system complexity is defined as the sum of structural and data complexity,
specified as
C(i) = S(i) + D(i)
where the structural complexity is S(i) = [fout(i)]^2 in the Card and Glass formulation.
• As each of these complexity values increases, the overall architectural complexity of the
system also increases. This leads to a greater likelihood that integration and testing effort will
also increase.
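The structural, data and system complexity measures can be sketched as below; S(i) = fout(i)^2 follows the Card and Glass formulation, and the module values used in main are illustrative, not from a real design.

```java
// Card and Glass complexity measures for a module i:
//   structural complexity S(i) = fout(i)^2
//   data complexity       D(i) = v(i) / (fout(i) + 1)
//   system complexity     C(i) = S(i) + D(i)
// fout = fan-out of the module, v = number of input/output variables.
public class ModuleComplexity {
    public static double structural(int fout) { return (double) fout * fout; }
    public static double data(int v, int fout) { return (double) v / (fout + 1); }
    public static double system(int v, int fout) { return structural(fout) + data(v, fout); }

    public static void main(String[] args) {
        // illustrative module: fan-out of 2, six interface variables
        System.out.println(system(6, 2)); // 4.0 + 2.0 = 6.0
    }
}
```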
• Fenton suggests a number of simple morphology metrics that enable different program
architectures to be compared using a set of straightforward dimensions. Referring to the
call-and-return architecture in the figure, the following metric can be defined:
size = n + a
where n is the number of nodes (modules) and a is the number of arcs (lines of control).
• The U.S. Air Force Systems Command has developed a number of software quality indicators
that are based on measurable design characteristics of a computer program. The Air Force
uses information obtained from data and architectural design to derive a design structure
quality index (DSQI) that ranges from 0 to 1. The following values must be ascertained to
compute the DSQI:
S2 = the number of modules whose correct function depends on the source of data input or
that produce data to be used elsewhere.
• Program Structure: D1, where D1 is defined as follows: If the architectural design was developed
using a distinct method, then D1= 1, otherwise D1=0.
With these intermediate values determined, the DSQI is computed in the following
manner:
DSQI = ∑wiDi
Where i = 1 to 6, wi is the relative weighting of the importance of each of the
intermediate values, and ∑wi = 1
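The DSQI weighted sum can be sketched as follows; the six weights and intermediate values below are illustrative only (the relative weights are left to the organization, subject to summing to 1).

```java
// DSQI = sum(wi * Di) for i = 1..6, where the relative weights wi sum to 1.
// The weights and intermediate values used here are illustrative only.
public class DesignStructureQualityIndex {
    public static double dsqi(double[] w, double[] d) {
        double sum = 0.0;
        for (int i = 0; i < w.length; i++) {
            sum += w[i] * d[i]; // weighted contribution of intermediate value Di
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] w = {0.2, 0.2, 0.2, 0.2, 0.1, 0.1}; // weights, sum to 1
        double[] d = {1.0, 0.8, 0.9, 1.0, 0.5, 0.7}; // intermediate values D1..D6
        System.out.println(dsqi(w, d)); // ≈ 0.86, between 0 and 1
    }
}
```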
• Size
▪ Size is defined in terms of four views: population, volume, length, and
functionality
• Complexity
▪ The degree to which the classes of an OO design are interrelated with one another
• Coupling
▪ The physical connections between elements of the OO design
• Sufficiency
▪ “the degree to which an abstraction possesses the features required of it, or
the degree to which a design component possesses features in its abstraction,
from the point of view of the current application.”
• Completeness
▪ An indirect implication about the degree to which the abstraction or design
component can be reused
• Cohesion
▪ The degree to which all operations work together to achieve a single, well-
defined purpose
• Primitiveness
▪ Applied to both operations and classes, the degree to which an operation is
atomic
• Similarity
▪ The degree to which two or more classes are similar in terms of their
structure, function, behavior, or purpose
• Volatility
▪ Measures the likelihood that a change will occur
◼ Class-Oriented Metrics
The class is the fundamental unit of an object-oriented system.
• Cohesion metrics: a function of data objects and the locus of their definition
• Coupling metrics: a function of input and output parameters, global variables, and modules
called
• Complexity metrics: hundreds have been proposed (e.g., cyclomatic complexity)
◼ Operation-Oriented Metrics
• Layout appropriateness: a function of layout entities, the geographic position and the “cost”
of making transitions among entities.
• Metrics for source code are often based on Halstead’s Software Science: a comprehensive
collection of metrics all predicated on the number (count and occurrence) of operators and
operands within a component or program.
• It should be noted that Halstead’s “laws” have generated substantial controversy, and
many believe that the underlying theory has flaws. However, experimental
verification for selected programming languages has been performed. The measures
are
▪ n1 = the number of distinct operators that appear in a program.
▪ n2 = the number of distinct operands that appear in a program.
▪ N1 = the total number of operator occurrences.
▪ N2 = the total number of operand occurrences.
▪ Halstead shows that program length N can be estimated as
N = n1 log2 n1 + n2 log2 n2
and program volume may be defined as
V = N log2 (n1 + n2)
• It should be noted that V will vary with programming language and represents the volume of
information required to specify a program.
• Theoretically, a minimum volume must exist for a particular algorithm. Halstead defines a
volume ratio L as the ratio of volume of the most compact form of a program to the volume
of the actual program.
L must always be less than 1. In terms of primitive measures, the volume ratio may
be expressed as
L = (2/n1) × (n2/N2)
• Modules with high cyclomatic complexity are more likely to be error prone than modules
whose cyclomatic complexity is lower. For this reason, the tester should expend above-
average effort to uncover errors in such modules before they are integrated in a system.
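Cyclomatic complexity itself is not defined in these notes; as a reminder, McCabe's measure for a connected flow graph with E edges and N nodes is V(G) = E − N + 2, sketched below with illustrative graph sizes.

```java
// McCabe's cyclomatic complexity V(G) = E - N + 2 for a connected flow graph
// with E edges and N nodes (equivalently, the number of binary decisions + 1).
public class Cyclomatic {
    public static int vg(int edges, int nodes) {
        return edges - nodes + 2;
    }

    public static void main(String[] args) {
        // a straight-line module: 3 nodes, 2 edges -> V(G) = 1 (one path)
        System.out.println(vg(2, 3)); // 1
        // an illustrative module with 9 nodes and 11 edges -> V(G) = 4
        System.out.println(vg(11, 9)); // 4
    }
}
```

V(G) also gives an upper bound on the number of independent paths that basis path testing must exercise.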
• Testing effort can also be estimated using metrics derived from Halstead measures
• Using the definitions for program volume, V, and program level, PL, Halstead effort, e, can be
computed as
PL = 1 / [(n1/2) × (N2/n2)] and e = V/PL
• The percentage of overall testing effort to be allocated to a module k can be estimated using
the following relationship: Percentage of testing effort (k) = e(k) / ∑e(i)
Where e(k) is computed for module k using equations and the summation in the denominator
of equation is the sum of Halstead effort across all modules of the system.
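The Halstead measures used above can be sketched as follows; the operator and operand counts in main are illustrative values, not taken from a real program.

```java
// Halstead software science measures:
//   estimated length  N  = n1*log2(n1) + n2*log2(n2)
//   volume            V  = N * log2(n1 + n2)
//   program level     PL = 1 / [(n1/2) * (N2/n2)]
//   effort            e  = V / PL
// n1/n2: distinct operators/operands; bigN2: total operand occurrences.
public class Halstead {
    static double log2(double x) { return Math.log(x) / Math.log(2.0); }

    public static double length(int n1, int n2) {
        return n1 * log2(n1) + n2 * log2(n2);
    }
    public static double volume(double N, int n1, int n2) {
        return N * log2(n1 + n2);
    }
    public static double level(int n1, int n2, int bigN2) {
        return 1.0 / ((n1 / 2.0) * ((double) bigN2 / n2));
    }
    public static double effort(double V, double PL) { return V / PL; }

    public static void main(String[] args) {
        int n1 = 10, n2 = 20, bigN2 = 60; // illustrative counts
        double N = length(n1, n2);
        double V = volume(N, n1, n2);
        double PL = level(n1, n2, bigN2);
        System.out.println(effort(V, PL));
    }
}
```

Computing e for every module and dividing by the total gives the percentage of testing effort to allocate to each, as in the relationship above.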
• Binder suggests a broad array of design metrics that have a direct influence on the
“testability” of an OO system.
• Lack of cohesion in methods (LCOM): The higher the value of LCOM, the more states must be tested
to ensure that methods do not generate side effects.
• Percent public and protected (PAP): This metric indicates the percentage of class attributes that are
public or protected.
• Public access to data members (PAD): This metric indicates the number of classes that can access
another class’s attributes, a violation of encapsulation.
• Number of root classes (NOR): This metric is a count of the distinct class hierarchies that are
described in the design model.
• Fan-in (FIN): When used in the OO context, fan-in for the inheritance hierarchy is an indication of
multiple inheritance. FIN>1 indicates that a class inherits its attributes and operations from more
than one root class.
• Number of children (NOC) and depth of the inheritance tree (DIT): Superclass methods will have to
be retested for each subclass.
• IEEE Std. 982.1-1988 suggests a software maturity index (SMI) that provides an indication of the
stability of a software product. The following information is determined:
MT = the number of modules in the current release.
Fc = the number of modules in the current release that have been changed.
Fa = the number of modules in the current release that have been added.
Fd = the number of modules from the preceding release that were deleted in the
current release.
The software maturity index is computed as
SMI = [MT - (Fa + Fc + Fd)] / MT
As SMI approaches 1.0, the product begins to stabilize. SMI may also be used as a metric for
planning software maintenance activities. The mean time to produce a release of a software
product can be correlated with SMI, and empirical models for maintenance effort can be
developed.
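Under the IEEE 982.1 formulation SMI = [MT − (Fa + Fc + Fd)] / MT, the index is a simple ratio; the module counts in main are illustrative.

```java
// Software maturity index SMI = [MT - (Fa + Fc + Fd)] / MT.
// MT = modules in current release; Fa/Fc/Fd = modules added/changed/deleted.
// The counts below are illustrative only.
public class MaturityIndex {
    public static double smi(int mt, int fa, int fc, int fd) {
        return (double) (mt - (fa + fc + fd)) / mt;
    }

    public static void main(String[] args) {
        // 100 modules; 5 added, 10 changed, 5 deleted since the last release
        System.out.println(smi(100, 5, 10, 5)); // 0.8 -> product is stabilizing
    }
}
```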
13. METRICS FOR PROCESS & PROJECTS
• Software process and project metrics are quantitative measures that enable software
engineers to gain insight into the efficacy of the software process and the projects that are
conducted using the process as a framework.
◼ Size-Oriented Metrics
• errors per KLOC (thousand lines of code)
• defects per KLOC
• $ per LOC
• pages of documentation per KLOC
• errors per person-month
• Errors per review hour
• LOC per person-month
• $ per page of documentation
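Size-oriented metrics such as those above are straightforward ratios normalized by KLOC or effort; the project data in main are illustrative values, not measurements.

```java
// Simple size-oriented metric ratios; the counts and sizes are illustrative.
public class SizeMetrics {
    public static double perKloc(int count, int loc) {
        return count / (loc / 1000.0);       // e.g. errors or defects per KLOC
    }
    public static double perPersonMonth(int count, double personMonths) {
        return count / personMonths;         // e.g. LOC per person-month
    }

    public static void main(String[] args) {
        System.out.println(perKloc(12, 24000));        // 12 errors in 24 KLOC -> 0.5
        System.out.println(perPersonMonth(24000, 48)); // 500.0 LOC per person-month
    }
}
```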
◼ Function-Oriented Metrics
• errors per FP
• defects per FP
• $ per FP
• pages of documentation per FP
• FP per person-month
• The relationship between lines of code and function points depends upon the
programming language that is used to implement the software and the quality of the
design. A number of studies have attempted to relate FP and LOC measures.
• The following table provides rough estimates of the average number of lines of code
required to build one function point in various programming languages.
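Given an LOC-per-FP factor of the kind such tables provide, an FP estimate can be converted to a rough size estimate; the factor of 64 LOC per FP used below is illustrative, not a value from the (absent) table.

```java
// Rough conversion from a function point estimate to lines of code using a
// language-dependent LOC-per-FP factor. The factor used here is illustrative.
public class FpToLoc {
    public static long estimatedLoc(double functionPoints, double locPerFp) {
        return Math.round(functionPoints * locPerFp);
    }

    public static void main(String[] args) {
        // e.g. 120 FP at an assumed 64 LOC per FP
        System.out.println(estimatedLoc(120, 64)); // 7680
    }
}
```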
• Why Opt for FP?
◼ Object-Oriented Metrics
• Number of Key classes: Key classes are the “highly independent components” that are
defined early in object-oriented analysis.
• Number of support classes: Supports Classes are required to implement the system but
are not immediately related to the problem domain.
• Average number of support classes per key class (analysis class): If the average number of
support classes per key class is known for a given problem domain, estimation is much
simplified.
• Number of static Web pages (the end-user has no control over the content displayed on
the page)
• Number of dynamic Web pages (end-user actions result in customized content displayed
on the page)
• Number of internal page links (internal page links are pointers that provide a hyperlink to
some other Web page within the WebApp)
• Number of persistent data objects
• Number of external systems interfaced
• Number of static content objects
• Number of dynamic content objects
• Number of executable functions
• These metrics indicate the effectiveness of individual and group software quality
assurance and control activities.
• Error data can also be used to compute defect removal efficiency for each process
framework activity.
◼ Measuring Quality
• A quality metric that provides benefits at both the project and process level is defect
removal efficiency.
• When considered for a project as a whole, DRE is defined in the following manner:
DRE = E / (E + D)
Where E is the number of errors found before delivery of the software to the end-user
and D is the number of defects found after delivery.
______
14. RISK MANAGEMENT
◼ Project Risks
• Technical risks threaten the quality and timeliness of the software to be produced. If a
technical risk becomes a reality, implementation may become difficult or impossible.
Technical risks occur because the problem is harder to solve than we thought it would be.
• Business risks threaten the viability of the software to be built. Candidates for the top
business risks are
(1) Building an excellent product or system that no one really wants (market risk),
(2) Building a product that no longer fits into the overall business strategy for the
company,
(3) Building a product that the sales force doesn’t understand how to sell,
(4) Losing the support of senior management due to a change in focus or a change in
people, and
(5) Losing budgetary or personnel commitment (budget risks).
• Unpredictable risks are the joker in the deck. They can and do occur, but they are extremely
difficult to identify in advance.
1. Maintain a global perspective—view software risks within the context of the system and the
business problem
2. Take a forward-looking view—think about the risks that may arise in the future; establish
contingency plans
3. Encourage open communication—if someone states a potential risk, don’t discount it.
4. Integrate—a consideration of risk must be integrated into the software process
5. Emphasize a continuous process—the team must be vigilant throughout the software process,
modifying identified risks as more information is known and adding new ones as better insight
is achieved.
6. Develop a shared product vision—if all stakeholders share the same vision of the software, it
is likely that better risk identification and assessment will occur.
7. Encourage teamwork—the talents, skills and knowledge of all stakeholders should be pooled
14.3. RISK IDENTIFICATION
• Product size—risks associated with the overall size of the software to be built or modified.
• Customer characteristics—risks associated with the sophistication of the customer and the
developer's ability to communicate with the customer in a timely manner.
• Process definition—risks associated with the degree to which the software process has been
defined and is followed by the development organization.
• Development environment—risks associated with the availability and quality of the tools to
be used to build the product.
• Technology to be built—risks associated with the complexity of the system to be built and the
"newness" of the technology that is packaged by the system.
• Staff size and experience—risks associated with the overall technical and project experience
of the software engineers who will do the work.
◼ Assessing Overall Project Risk
• The following questions have been derived from risk data obtained by surveying experienced software project managers:
• Have top software and customer managers formally committed to support the project?
• Are end-users enthusiastically committed to the project and the system/product to be built?
• Are requirements fully understood by the software engineering team and their customers?
• Have customers been involved fully in the definition of requirements?
• Do end-users have realistic expectations?
• Is project scope stable?
• Does the software engineering team have the right mix of skills?
• Are project requirements stable?
• Does the project team have experience with the technology to be implemented?
• Is the number of people on the project team adequate to do the job?
• Do all customer/user constituencies agree on the importance of the project and on the
requirements for the system/product to be built?
◼ Risk Components
• performance risk—the degree of uncertainty that the product will meet its requirements and
be fit for its intended use.
• cost risk—the degree of uncertainty that the project budget will be maintained.
• support risk—the degree of uncertainty that the resultant software will be easy to correct,
adapt, and enhance.
• schedule risk—the degree of uncertainty that the project schedule will be maintained and that
the product will be delivered on time.
14.4. RISK PROJECTION
• Risk projection, also called risk estimation, attempts to rate each risk in two ways:
• the likelihood or probability that the risk is real
• the consequences of the problems associated with the risk, should it occur.
• There are four risk projection steps:
• establish a scale that reflects the perceived likelihood of a risk
• delineate the consequences of the risk
• estimate the impact of the risk on the project and the product, and
• note the overall accuracy of the risk projection so that there will be no misunderstandings.
◼ Risk Mitigation, Monitoring, and Management
• The overall risk exposure, RE, is determined using the following relationship:
RE = P x C
where P is the probability of occurrence for a risk, and C is the cost to the project should the risk occur.
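As a worked illustration, risk exposure can be tabulated for a set of identified risks. The risk names, probabilities, and cost figures below are hypothetical, not taken from the text:

```python
# Hypothetical risk table; P is probability of occurrence, C is the cost
# (in dollars) to the project should the risk occur.
risks = [
    {"name": "Key staff will leave mid-project", "P": 0.60, "C": 34000},
    {"name": "Hardware component delivery delayed", "P": 0.60, "C": 25000},
]

for risk in risks:
    risk["RE"] = risk["P"] * risk["C"]  # RE = P x C
    print(f'{risk["name"]}: RE = ${risk["RE"]:,.0f}')
```

Summing RE over all identified risks gives a rough figure that can be compared against the project's cost contingency.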
• During early stages of project planning, a risk may be stated quite generally. As time passes
and more is learned about the project and the risk, it may be possible to refine the risk into a
set of more detailed risks, each somewhat easier to mitigate, monitor, and manage.
• One way to do this is to represent the risk in condition-transition-consequence (CTC) format. That is, the risk is stated in the following form:
Given that <condition> then there is concern that (possibly) <consequence>.
• Monitoring—what factors can we track that will enable us to determine if the risk is becoming
more or less likely?
• The RMMM plan documents all work performed as part of risk analysis and is used by the
project manager as part of the overall project plan.
• Some software teams do not develop a formal RMMM document. Rather, each risk is
documented individually using a Risk Information Sheet (RIS).
• In most cases, the RIS is maintained using a database system, so that creation and information
entry, priority ordering, searches, and other analysis may be accomplished easily.
• Once RMMM has been documented and the project has begun, risk mitigation and
monitoring steps commence.
• Risk monitoring is a project tracking activity with three primary objectives:
1. To assess whether predicted risks do, in fact, occur.
2. To ensure that risk aversion steps defined for the risk are being properly applied.
3. To collect information that can be used for future risk analysis.
Risk factor: Project completion will depend on tests that require a hardware component still
under development. Delivery of the hardware component may be delayed.
Probability: 60 %
Impact: Project completion will be delayed for each day that hardware is unavailable for use in
software testing.
Monitoring approach:
Contingency plan:
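A Risk Information Sheet like the one above can be modeled directly when the RIS is kept in a database. This is a minimal sketch; the class and field names are assumptions based on the example shown, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class RiskInformationSheet:
    """One risk per sheet, mirroring the example RIS above."""
    risk_id: str
    description: str
    probability: float           # e.g. 0.60 for 60 %
    impact: str
    mitigation: list[str] = field(default_factory=list)
    monitoring: list[str] = field(default_factory=list)
    contingency: list[str] = field(default_factory=list)

    def exposure(self, cost: float) -> float:
        """Risk exposure RE = P x C for a given cost estimate."""
        return self.probability * cost

ris = RiskInformationSheet(
    risk_id="R-07",
    description="Hardware component needed for testing may be delayed",
    probability=0.60,
    impact="Project completion delayed one day per day of unavailability",
)
print(ris.exposure(25000))  # hypothetical cost figure
```

Storing each sheet as a record of this shape makes priority ordering and searches straightforward, as the text notes.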
15. QUALITY MANAGEMENT
• Variation control is the heart of quality control. A manufacturer wants to minimize the
variation among the products that are produced.
◼ Quality
• Quality of design refers to the characteristics that designers specify for an item. Quality of
conformance is the degree to which the design specifications are followed during manufacturing.
◼ Quality Control
• Quality control involves the series of inspections, reviews, and tests used throughout the
software process to ensure each work product meets the requirements placed upon it.
• Quality control includes a feedback loop to the process that created the work product.
• A key concept of quality control is that all work products have defined, measurable
specifications to which we may compare the output of each process.
• The feedback loop is essential to minimize the defects produced.
◼ Quality Assurance
• Quality assurance consists of a set of auditing and reporting functions that assess the
effectiveness and completeness of quality control activities.
• The goal of quality assurance is to provide management with the data necessary to be
informed about product quality, thereby gaining insight and confidence that product quality
is meeting its goals.
◼ Cost of Quality
• The cost of quality includes all costs incurred in the pursuit of quality or in performing
quality-related activities. Quality costs may be divided into costs associated with prevention,
appraisal, and failure.
◼ Software Quality
• Software quality can be defined as conformance to explicitly stated functional and performance
requirements, explicitly documented development standards, and implicit characteristics that are
expected of all professionally developed software.
• This definition serves to emphasize three important points:
• Software requirements are the foundation from which quality is measured. Lack of
conformance to requirements is lack of quality.
• Specified standards define a set of development criteria that guide the manner in which
software is engineered. If the criteria are not followed, lack of quality will almost surely
result.
• A set of implicit requirements often goes unmentioned. If software conforms to its explicit
requirements but fails to meet implicit requirements, software quality is suspect.
◼ SQA Activities
• The SQA group reviews the process description for compliance with organizational policy, internal
software standards, externally imposed standards (e.g., ISO-9001), and other parts of the
software project plan.
• Reviews software engineering activities to verify compliance with the defined software process.
• Identifies, documents, and tracks deviations from the process and verifies that
corrections have been made.
• Audits designated software work products to verify compliance with those defined as part of the
software process.
• Reviews selected work products; identifies, documents, and tracks deviations; verifies
that corrections have been made
• Periodically reports the results of its work to the project manager.
• Ensures that deviations in software work and work products are documented and handled
according to a documented procedure.
◼ Software Reviews
• Software reviews are a “filter” for the software process. That is, reviews are applied at various
points during software engineering and serve to uncover errors and defects that can then be
removed.
• The primary objective of formal technical reviews is to find errors during the process so that they
do not become defects after release of the software.
• The obvious benefit of formal technical reviews is the early discovery of errors so that they do not
propagate to the next step in the software process.
• To illustrate the cost impact of early error detection, we consider a series of relative costs that
are based on actual cost data collected for large software projects.
• A defect amplification model can be used to illustrate the generation and detection of errors
during the preliminary design, detail design, and coding steps of a software engineering process.
• During each step, errors may be inadvertently generated. A review may fail to uncover newly
generated errors as well as errors from previous steps, resulting in some number of errors that
are passed through.
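The defect amplification idea can be sketched numerically. All amplification factors, error counts, and review efficiencies below are assumed for illustration; they are not the collected cost data mentioned above:

```python
def errors_passed_through(incoming, amplification, newly_generated, review_efficiency):
    """Errors leaving a step: incoming errors (possibly amplified by the work
    of the step) plus newly generated ones, reduced by the fraction the
    step's review catches."""
    total = incoming * amplification + newly_generated
    return total * (1 - review_efficiency)

# Preliminary design -> detail design -> coding, each reviewed at 50% efficiency.
errors = 0.0
for amplification, generated in [(1.0, 10), (1.5, 25), (1.0, 25)]:
    errors = errors_passed_through(errors, amplification, generated, 0.5)

print(errors)  # errors still latent after the coding step
```

Rerunning the loop with `review_efficiency=0.0` shows how quickly errors accumulate when no reviews are conducted, which is the point the amplification model is meant to make.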
• To conduct reviews, a software engineer must expend time and effort, and the development
organization must spend money.
• A formal technical review is a software quality control activity performed by software engineers.
• It is important to establish a follow-up procedure to ensure that items on the issues list have been
properly corrected.
◼ Review Guidelines
• The following represents a minimum set of guidelines for formal technical reviews:
1. Review the product, not the producer.
2. Set an agenda and maintain it.
3. Limit debate and rebuttal.
4. Enunciate problem areas, but don’t attempt to solve every problem noted.
5. Take written notes.
6. Limit the number of participants and insist upon advance preparation.
7. Develop a checklist for each product that is likely to be reviewed.
8. Allocate resources and schedule time for FTRs.
9. Conduct meaningful training for all reviewers.
10. Review your early reviews.
• Sample-driven reviews (SDRs) attempt to quantify those work products that are primary targets for full FTRs.
• To accomplish this …
• Inspect a fraction ai of each software work product, i. Record the number of faults, fi
found within ai.
• Develop a gross estimate of the number of faults within work product i by multiplying fi
by 1/ai.
• Sort the work products in descending order according to the gross estimate of the number
of faults in each.
• Focus available review resources on those work products that have the highest estimated
number of faults.
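The four SDR steps above can be sketched as follows. The work-product names, inspected fractions a_i, and fault counts f_i are illustrative:

```python
# (name, a_i = fraction of the work product inspected, f_i = faults found in that fraction)
work_products = [
    ("Requirements spec", 0.25, 4),
    ("Design document",   0.50, 2),
    ("Module source",     0.125, 3),
]

# Gross fault estimate for each product i: f_i * (1 / a_i)
estimates = [(name, f / a) for name, a, f in work_products]

# Sort descending so full-FTR effort goes to the worst products first.
estimates.sort(key=lambda item: item[1], reverse=True)
for name, est in estimates:
    print(f"{name}: ~{est:.0f} faults estimated")
```

The ranking, not the absolute estimate, is what matters: it tells the team where to focus its limited review resources.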
15.5 STATISTICAL SOFTWARE QUALITY ASSURANCE
• Six Sigma is the most widely used strategy for statistical quality assurance in industry today.
• The term “six sigma” is derived from six standard deviations—3.4 instances (defects) per million
occurrences—implying an extremely high quality standard.
• The Six Sigma methodology defines three core steps:
• Define customer requirements and deliverables and project goals via well-defined
methods of customer communication
• Measure the existing process and its output to determine current quality performance
(collect defect metrics)
• Analyze defect metrics and determine the vital few causes.
• If an existing software process is in place but improvement is required, Six Sigma suggests two
additional steps:
• Improve the process by eliminating the root causes of defects.
• Control the process to ensure that future work does not reintroduce the causes of defects.
• These core and additional steps are sometimes referred to as the DMAIC (define, measure,
analyze, improve, and control) method.
• If an organization is developing a software process (rather than improving an existing one), the
core steps are augmented as follows, sometimes called the DMADV (define, measure, analyze,
design, and verify) method:
• Design the process to (1) avoid the root causes of defects and (2) meet customer
requirements.
• Verify that the process model will, in fact, avoid defects and meet customer requirements.
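The “3.4 defects per million” figure quoted for six sigma is a defects-per-million-opportunities (DPMO) rate. A quick calculation with assumed inspection numbers shows how the metric is computed:

```python
# Hypothetical inspection data (not from the text).
defects = 17
units_inspected = 1000
opportunities_per_unit = 5   # distinct ways each unit could be defective

# DPMO = defects / (units * opportunities per unit), scaled to a million.
dpmo = defects * 1_000_000 / (units_inspected * opportunities_per_unit)
print(dpmo)
```

A process producing this rate (3400 DPMO) is still three orders of magnitude away from the 3.4 DPMO that the six-sigma level demands.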
15.6 SOFTWARE RELIABILITY
• Software reliability is defined in statistical terms as the probability of failure-free operation
of a computer program in a specified environment for a specified time.
• The acronyms MTTF and MTTR are mean-time-to-failure and mean-time-to-repair, respectively.
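These two measures are commonly combined. Assuming the standard definitions MTBF = MTTF + MTTR and availability = MTTF / (MTTF + MTTR) x 100% (the hour figures below are invented):

```python
mttf = 980.0  # mean time to failure, hours (illustrative)
mttr = 20.0   # mean time to repair, hours (illustrative)

mtbf = mttf + mttr                          # mean time between failures
availability = mttf * 100 / (mttf + mttr)   # steady-state availability, percent

print(mtbf, availability)
```

Availability is the more useful figure for interactive systems, since it accounts for how long a failure keeps the system out of service, not just how often failures occur.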
◼ Software Safety
• Software safety is a software quality assurance activity that focuses on the identification and
assessment of potential hazards that may affect software negatively and cause an entire system
to fail.
• If hazards can be identified early in the software process, software design features can be
specified that will either eliminate or control potential hazards.
• A modeling and analysis process is conducted as part of software safety. Initially, hazards are
identified and categorized by criticality and risk. For example, some of the hazards associated
with a computer-based cruise control for an automobile might be: causes uncontrolled acceleration
that cannot be stopped; does not respond to depression of the brake pedal (by turning off); does
not engage when the switch is activated; slowly loses or gains speed.
◼ The ISO 9000 Quality Standards
• ISO 9000 describes a quality assurance system in generic terms that can be applied to any business,
regardless of the products or services offered.
• ISO 9001:2000 is the quality assurance standard that applies to software engineering.
• The standard contains 20 requirements that must be present for an effective quality assurance
system.