LECTURE NOTES
DEPARTMENT OF COMPUTER
SCIENCE AND ENGINEERING
(Affiliated to JNTUH, Hyderabad, Approved by AICTE - Accredited by NBA & NAAC – ‘A+’ Grade)
II Year B.Tech AIML, DS and SE - II SEM    L: 3   T/P/D: -/-/-   C: 3
SOFTWARE ENGINEERING
Objectives:
The students will be able:
1. To comprehend the various software process models.
2. To understand the types of software requirements and SRS document.
3. To know the different software design and architectural styles.
4. To learn the software testing approaches and metrics used in software development.
5. To know about quality control and risk management.
UNIT - I:
Introduction to Software Engineering: The evolving role of software, Changing Nature of
Software, Software myths.
A Generic view of process: Software engineering- A layered technology, a process framework,
Process patterns, process assessment.
Process models: The waterfall model, Incremental process models, Evolutionary process models,
The Unified process, Agility and Agile Process model, Extreme Programming, Other process models
of Agile Development and Tools
UNIT - II:
Software Requirements: Functional and non-functional requirements, User requirements, System
requirements, Interface specification, the software requirements document.
Requirements engineering process: Feasibility studies, Requirements elicitation and
analysis, Requirements validation, Requirements management.
System models: Context Models, Behavioral models, Data models, Object models,
structured methods. UML Diagrams.
UNIT - III:
Design Engineering: Design process and Design quality, Design concepts, the design model.
Creating an architectural design: Software architecture, Data design, Architectural
styles and patterns, Architectural Design.
Object-Oriented Design: Objects and classes, An Object-Oriented design process, Design evolution.
Performing User interface design: Golden rules, User interface analysis and design, interface
analysis, interface design steps, Design evaluation.
UNIT - IV:
Testing Strategies: A strategic approach to software testing, test strategies for conventional
software, Black-Box and White-Box testing, Validation testing, System testing, the art of Debugging.
Product metrics: Software Quality, Metrics for Analysis Model, Metrics for Design Model,
Metrics for source code, Metrics for testing, Metrics for maintenance.
Metrics for Process and Products: Software Measurement, Metrics for software quality.
UNIT - V:
Risk Management: Reactive vs. Proactive Risk strategies, software risks, Risk identification, Risk projection, Risk refinement, RMMM, RMMM Plan.
Quality Management: Quality concepts, Software quality assurance, Software Reviews, Formal technical reviews, Statistical Software quality Assurance, The Capability Maturity Model Integration (CMMI), Software reliability, The ISO 9000 quality standards.
TEXT BOOKS:
1. Software Engineering: A Practitioner’s Approach, Roger S Pressman, 6th edition,
McGraw Hill International Edition.
2. Software Engineering, Ian Sommerville, 7th edition, Pearson education.
REFERENCE BOOKS:
1. Software Engineering, A Precise Approach, Pankaj Jalote, Wiley India, 2010.
2. Software Engineering: A Primer, Waman S Jawadekar, Tata McGraw-Hill, 2008
3. Software Engineering, Principles and Practices, Deepak Jain, Oxford University Press.
4. Software Engineering 1: Abstraction and Modelling, Dines Bjorner, Springer International Edition, 2006.
5. Software Engineering 2: Specification of Systems and Languages, Dines Bjorner, Springer International Edition, 2006.
6. Software Engineering Principles and Practice, Hans Van Vliet, 3rd edition, John Wiley & Sons Ltd.
7. Software Engineering 3: Domains, Requirements, and Software Design, Dines Bjorner, Springer International Edition.
8. Introduction to Software Engineering, R. J. Leach, CRC Press.
Course Outcomes:
Students will have the ability:
1. To compare and select a process model for a business system.
2. To identify and specify the requirements for the development of an application.
3. To develop and maintain efficient, reliable and cost effective software solutions.
4. To critically think and evaluate assumptions and arguments.
INDEX
Unit I: Process models
Unit II: Software Requirements
Unit II: System models
Unit IV: Testing Strategies
Unit IV: Product metrics
Unit V: Risk management
Unit V: Quality Management
INTRODUCTION:
Engineering is the branch of science and technology concerned with the design, building, and use of
engines, machines, and structures. It is the application of science, tools and methods to find cost-effective solutions to simple and complex problems.
SOFTWARE ENGINEERING is defined as a systematic, disciplined and quantifiable approach for the
development, operation and maintenance of software.
Characteristics of software
• Software is developed or engineered, but it is not manufactured in the classical sense.
• Software does not wear out, but it deteriorates due to change.
• Software is custom built rather than assembling existing components.
Types (application domains) of software include:
• System software. System software is a collection of programs written to service other programs.
• Embedded software. Resides in read-only memory and is used to control products and systems for the consumer and industrial markets.
• Artificial intelligence software. Artificial intelligence (AI) software makes use of nonnumeric algorithms to solve complex problems that are not amenable to computation or straightforward analysis.
• Engineering and scientific software. Engineering and scientific software has been characterized by "number crunching" algorithms.
LEGACY SOFTWARE
Legacy software systems are older programs that were developed decades ago. The quality of legacy software is often poor because it has an inextensible design, convoluted code, poor or nonexistent documentation, and test cases and results that were never archived.
As time passes legacy systems evolve due to following reasons:
The software must be adapted to meet the needs of new computing environment or technology.
The software must be enhanced to implement new business requirements.
The software must be extended to make it interoperable with more modern systems or database
The software must be rearchitected to make it viable within a network environment.
SOFTWARE MYTHS
Myths are widely held but false beliefs and views which propagate misinformation and confusion.
Three types of myth are associated with software:
- Management myth
- Customer myth
- Practitioner’s myth
MANAGEMENT MYTHS
• Myth (1): The available standards and procedures for software are enough.
• Myth (2): Each organization feels that it has state-of-the-art software development tools because it has the latest computers.
• Myth (3): Adding more programmers when the work is behind schedule will help the project catch up.
• Myth (4): If we outsource the software project to a third party, we can relax and let that party build it.
CUSTOMER MYTHS
• Myth (1): A general statement of objectives is enough to begin writing programs; the details can be filled in later.
• Myth (2): Software is easy to change because software is flexible.
PRACTITIONER’S MYTH
• Myth(1)-Once the program is written, the job has been done.
• Myth(2)-Until the program is running, there is no way of assessing the quality.
• Myth(3)-The only deliverable work product is the working program
• Myth(4)-Software Engineering creates voluminous and unnecessary documentation and invariably
slows down software development.
A PROCESS FRAMEWORK
• Establishes the foundation for a complete software process
• Identifies a number of framework activities applicable to all software projects
• Also include a set of umbrella activities that are applicable across the entire software process.
The generic view of software engineering is complemented by a number of umbrella activities:
Software project tracking and control
Formal technical reviews
Software quality assurance
Software configuration management
Document preparation and production
Reusability management
Measurement
Risk management
Continuous model:
- Lets an organization select the specific improvements that best meet its business objectives and minimize risk.
- Levels are called capability levels.
- Describes a process in two dimensions.
- Each process area is assessed against specific goals and practices and is rated according to the following capability levels.
CMMI
• Six levels of CMMI
– Level 0:Incomplete
– Level 1:Performed
– Level 2:Managed
– Level 3:Defined
– Level 4:Quantitatively managed
– Level 5:Optimized
CMMI
• Incomplete - Process is ad hoc. The objectives and goals of the process area are not known.
• Performed -Goal, objective, work tasks, work products and other activities of software process are
carried out
• Managed -Activities are monitored, reviewed, evaluated and controlled
• Defined -Activities are standardized, integrated and documented
• Quantitatively Managed -Metrics and indicators are available to measure the process and quality
• Optimized - Continuous process improvement based on quantitative feedback from the user
- Uses innovative ideas and techniques, statistical quality control and other methods for process improvement.
PROCESS PATTERNS
A software process can be defined as a collection of patterns. A process pattern provides a template. It comprises:
• Process Template
-Pattern Name
-Intent
-Types
-Task pattern
- Stage pattern
-Phase Pattern
• Initial Context
• Problem
• Solution
• Resulting Context
• Related Patterns
PROCESS ASSESSMENT
Process assessment does not specify the quality of the software, whether the software will be delivered on time, or whether it will stand up to the user requirements. It attempts to keep a check on the current state of the software process with the intention of improving it.
[Figure: The software process is examined by software process assessment, which leads to software process improvement and capability determination; these in turn motivate further assessment.]
APPROACHES TO SOFTWARE ASSESSMENT
• Standard CMMI assessment (SCAMPI)
• CMM based appraisal for internal process improvement
• SPICE(ISO/IEC 15504)
• ISO 9001:2000 for software
Personal and Team Software Process
Personal software process
PLANNING
HIGH LEVEL DESIGN
HIGH LEVEL DESIGN REVIEW
DEVELOPMENT
POSTMORTEM
THE WATERFALL MODEL
[Figure: Waterfall model - Communication, Planning, Modeling, Construction, Deployment in sequence]
This Model suggests a systematic, sequential approach to SW development that begins at the system
level and progresses through analysis, design, code and testing
PROBLEMS IN THE WATERFALL MODEL
• Real projects rarely follow the sequential flow since they are always iterative
• The model requires requirements to be explicitly spelled out in the beginning, which is often
difficult
• A working model is not available until late in the project time plan
• Linear sequential model is not suited for projects which are iterative in nature
• Incremental model suits such projects
• Used when initial requirements are reasonably well-defined and compelling need to provide limited
functionality quickly
• Functionality expanded further in later releases
• Software is developed in increments
The Incremental Model
[Figure: Incremental model - each increment (increment 1, increment 2, ..., increment n) goes through Communication, Planning, Modeling, Construction and Deployment, delivering additional functionality with each release.]
RAD (Rapid Application Development) is an incremental process model that emphasizes a very short development cycle.
Problems in RAD
• Requires a number of RAD teams
• Requires commitment from both developer and customer for rapid-fire completion of activities
• Requires modularity
• Not suited when technical risks are high
EVOLUTIONARY PROCESS MODELS
PROTOTYPING
STEPS IN PROTOTYPING
• Begins with requirement gathering
• Identify whatever requirements are known
• Outline areas where further definition is mandatory
• A quick design occurs
• The quick design leads to the construction of a prototype
• Prototype is evaluated by the customer
THE SPIRAL MODEL
An evolutionary model which combines the best features of the classical life cycle with the iterative nature of the prototyping model. It includes a new element: risk analysis. It starts in the middle and continually revisits the basic tasks of communication, planning, modeling, construction and deployment.
THE UNIFIED PROCESS
Evolved from the work of Rumbaugh, Booch and Jacobson. It combines the best features of their object-oriented models and adopts additional features proposed by other experts. This work resulted in the Unified Modeling Language (UML). The Unified Process, developed by Jacobson, Rumbaugh and Booch, is a framework for object-oriented software engineering using UML.
1. Inception Phase
*Vision document
*Initial use-case model
2. Elaboration Phase
*Use-Case model
*Analysis model
*Software Architecture description
*Preliminary design model
*Preliminary model
3. Construction Phase
*Design model
*System components
*Test plan and procedure
*Test cases
*Manual
4. Transition Phase
*Delivered software increment
*Beta test results
*General user feedback
The meaning of Agile is swift or versatile. "Agile process model" refers to a software development approach based on iterative development. Agile methods break tasks into smaller iterations, or parts, and do not directly involve long-term planning. The project scope and requirements are laid down at the beginning of the development process. Plans regarding the number of iterations, and the duration and scope of each iteration, are clearly defined in advance.
1. Requirements gathering: In this phase, you must define the requirements. You should
explain business opportunities and plan the time and effort needed to build the project.
Based on this information, you can evaluate technical and economic feasibility.
2. Design the requirements: When you have identified the project, work with stakeholders to
define requirements. You can use the user flow diagram or the high-level UML diagram to
show the work of new features and show how it will apply to your existing system.
3. Construction/ iteration: When the team defines the requirements, the work begins.
Designers and developers start working on their project, which aims to deploy a working
product. The product will undergo various stages of improvement, so it includes simple,
minimal functionality.
4. Testing: In this phase, the Quality Assurance team examines the product's performance and looks for bugs.
5. Deployment: In this phase, the team issues a product for the user's work environment.
6. Feedback: After releasing the product, the last step is feedback. In this, the team receives
feedback about the product and works through the feedback.
Advantages:
1. Frequent Delivery
2. Face-to-Face Communication with clients.
3. Efficient design that fulfils the business requirements.
4. Anytime changes are acceptable.
5. It reduces total development time.
Disadvantages:
1. Due to the shortage of formal documentation, confusion can arise, and crucial decisions taken in various phases can be misinterpreted at any time by different team members.
Extreme Programming
XP is a lightweight, efficient, low-risk, flexible, predictable, scientific, and fun way to develop
software.
Extreme Programming (XP) was conceived and developed to address the specific needs of
software development by small teams in the face of vague and changing requirements.
Extreme Programming is one of the Agile software development methodologies. It provides
values and principles to guide the team behavior. The team is expected to self-organize.
Extreme Programming provides specific core practices where −
Each practice is simple and self-complete.
Other Agile process models include Scrum and Crystal.
Scrum
Scrum is aimed at sustaining strong collaboration between people working on complex
products, and details are being changed or added. It is based upon the systematic interactions
between the three major roles: Scrum Master, Product Owner, and the Team.
Crystal
Crystal is an agile methodology for software development. It places focus on people over
processes, empowering teams to find their own solutions for each project rather than being constrained by rigid methodologies.
Crystal methods focus on:-
People involved
Interaction between the teams
Community
Skills of people involved
Their Talents
Communication between all the teams
UNIT II
SOFTWARE REQUIREMENTS
SOFTWARE REQUIREMENTS
• Encompasses both the user's view of the requirements (the external view) and the developer's view (inside characteristics)
User’s Requirements
--Statements in a natural language plus diagram, describing the services the system is expected to
provide and the constraints
• System Requirements --Describe the system’s function, services and operational condition
SOFTWARE REQUIREMENTS
• System Functional Requirements
--Statement of services the system should provide
--Describe the behavior in particular situations
--Defines the system reaction to particular inputs
• Nonfunctional Requirements
- Constraints on the services or functions offered by the system
--Include timing constraints, constraints on the development process and standards
--Apply to system as a whole
• Domain Requirements
--Requirements relate to specific application of the system
--Reflect characteristics and constraints of that system
FUNCTIONAL REQUIREMENTS
• Should be both complete and consistent
• Completeness
-- All services required by the user should be defined
• Consistent
-- Requirements should not have contradictory definition
• Difficult to achieve completeness and consistency for large system
NON-FUNCTIONAL REQUIREMENTS
Types of non-functional requirements:
1. Product Requirements
- Specify product behaviour
- Include usability, efficiency, reliability and portability requirements
STRUCTURED LANGUAGE SPECIFICATION
• Requirements are written in a standard way
• Ensures degree of uniformity
• Provide templates to specify system requirements
• Include control constructs and graphical highlighting to partition the specification
Interface Specification
• The working of the new system must match that of the existing systems it interacts with
• Interfaces provide this capability and must be precisely specified
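A minimal sketch of how an interface might be specified precisely in code, assuming a hypothetical report-generation subsystem (the names ReportSource, LegacyDatabaseSource and fetch_records are illustrative, not from these notes):

    from abc import ABC, abstractmethod
    from typing import List, Dict

    class ReportSource(ABC):
        """Hypothetical interface that any existing or new data subsystem must satisfy."""

        @abstractmethod
        def fetch_records(self, since: str) -> List[Dict[str, str]]:
            """Return records created on or after the ISO date 'since'."""
            raise NotImplementedError

    class LegacyDatabaseSource(ReportSource):
        """New component written to match the precisely specified interface of the existing system."""

        def fetch_records(self, since: str) -> List[Dict[str, str]]:
            # In a real system this would query the legacy database.
            return [{"id": "1", "created": since}]

    # Usage: the rest of the system depends only on the specified interface.
    source: ReportSource = LegacyDatabaseSource()
    print(source.fetch_records("2024-01-01"))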
Purpose of SRS
• Communication between the Customer, Analyst, system developers, maintainers,
• firm foundation for the design phase
• support system testing activities
• Support project management and control
• controlling the evolution of the system
Process activities
1. Requirement Discovery -- Interaction with stakeholder to collect their requirements including
domain and documentation
2. Requirements classification and organization -- Coherent clustering of requirements from
unstructured collection of requirements
3. Requirements prioritization and negotiation -- Assigning priority to requirements
--Resolves conflicting requirements through negotiation
4. Requirements documentation -- Requirements are documented and fed into the next round of the spiral
2. Interviewing--Puts questions to stakeholders about the system that they use and the system to be
developed. Requirements are derived from the answers.
Two types of interview
– Closed interviews where the stakeholders answer a pre-defined set of questions.
– Open interviews discuss a range of issues with the stakeholders for better understanding their needs.
3. Scenarios --Easier to relate to real life examples than to abstract description. Starts with an outline of
the interaction and during elicitation, details are added to create a complete description of that
interaction
Scenario includes:
• 1. Description at the start of the scenario
• 2. Description of normal flow of the event
• 3. Description of what can go wrong and how this is handled
• 4.Information about other activities parallel to the scenario
• 5.Description of the system state when the scenario finishes
LIBSYS scenario
• Initial assumption: The user has logged on to the LIBSYS system and has located the journal
containing the copy of the article.
• Normal: The user selects the article to be copied. He or she is then prompted by the system to either
provide subscriber information for the journal or to indicate how they will pay for the article.
Alternative payment methods are by credit card or by quoting an organizational account number.
• The user is then asked to fill in a copyright form that maintains details of the transaction and they
then submit this to the LIBSYS system.
• The copyright form is checked and, if OK, the PDF version of the article is downloaded to the
LIBSYS working area on the user’s computer and the user is informed that it is available. The user is
asked to select a printer and a copy of the article is printed
LIBSYS scenario
• What can go wrong: The user may fail to fill in the copyright form correctly. In this case, the form
should be re-presented to the user for correction. If the resubmitted form is still incorrect then the user’s
request for the article is rejected.
• The payment may be rejected by the system. The user’s request for the article is rejected.
• The article download may fail. Retry until successful or until the user terminates the session.
• Other activities: Simultaneous downloads of other articles.
• System state on completion: User is logged on. The downloaded article has been deleted from
LIBSYS workspace if it has been flagged as print-only.
4. Use cases -- scenario based technique for requirement elicitation. A fundamental feature of UML,
notation for describing object-oriented system models. Identifies a type of interaction and the actors
involved. Sequence diagrams are used to add information to a Use case
REQUIREMENTS VALIDATION
Concerned with showing that the requirements define the system that the customer wants. Important
because errors in requirements can lead to extensive rework cost
Validation checks
1. Validity checks -- Verify that the system provides the functions intended by the user
2. Consistency checks -- Requirements should not conflict
3. Completeness checks -- The requirements should define all functions and constraints intended by the system user
4. Realism checks --Ensures that the requirements can be actually implemented
5. Verifiability -- Testable to avoid disputes between customer and developer.
Requirements validation techniques include requirements reviews, prototyping and test-case generation.
Requirements management
Requirements are likely to change for large software systems and as such requirements management
process is required to handle changes.
Reasons for requirements changes
(a) Diverse Users community where users have different requirements and priorities
(b) System customers and end users are different
(c) Change in the business and technical environment after installation
Two classes of requirements:
(a) Enduring requirements: Relatively stable requirements
(b) Volatile requirements: Likely to change during the system development process or during operation
Traceability
Maintains three types of traceability information.
1. Source traceability--Links the requirements to the stakeholders
2. Requirements traceability--Links dependent requirements within the requirements document
3. Design traceability-- Links from the requirements to the design module
SYSTEM MODELS
Used in analysis process to develop understanding of the existing system or new system. Excludes
details. An abstraction of the system
Types of system models:
1. Context models
2. Behavioural models
3. Data models
4. Object models
5. Structured models
CONTEXT MODELS
A type of architectural model. Consists of the sub-systems that make up the entire system.
First step: identify the sub-systems.
Represent the high-level architectural model as a simple block diagram:
• Depict each sub-system as a named rectangle
• Lines between rectangles indicate associations between sub-systems
Disadvantage:
-- Concerned with the system environment only; it does not take into account other systems which may supply data to, or take data from, the modelled system
Behavioral models
Describes the overall behaviour of a system. Two types of behavioural model
1.Data Flow models 2.State machine models
Data flow models --Concentrate on the flow of data and functional transformation on that data. Show the
processing of data and its flow through a sequence of processing steps. Help analyst understand what is
going on
DATA MODELS
Used to describe the logical structure of data processed by the system. An entity-relation-attribute model sets out the entities in the system, the relationships between these entities and the entity attributes. Widely used in database design. Can readily be implemented using relational databases. No specific notation is provided in the UML, but objects and associations can be used.
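As an illustration only (the Student and Course entities below are hypothetical, not from these notes), an entity-relation-attribute model maps almost directly onto relational tables or in-memory structures:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Course:                      # entity
        code: str                      # attribute (acts as the key)
        title: str                     # attribute

    @dataclass
    class Student:                     # entity
        roll_no: str                   # attribute (acts as the key)
        name: str                      # attribute
        enrolled_in: List[Course] = field(default_factory=list)  # relationship to Course

    # Usage: build a tiny model instance.
    se = Course("CS301", "Software Engineering")
    s = Student("21A01", "Asha", enrolled_in=[se])
    print(s.name, "->", [c.code for c in s.enrolled_in])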
OBJECT-BEHAVIORAL MODEL
-- Shows the operations provided by the objects
-- Sequence diagram of UML can be used for behavioral modeling
DESIGN ENGINEERING
QUALITY GUIDELINES
• Uses recognizable architectural styles or patterns
• Modular; that is logically partitioned into elements or subsystems
• Distinct representation of data, architecture, interfaces and components
• Appropriate data structures for the classes to be implemented
• Independent functional characteristics for components
• Interfaces that reduce the complexity of connections
• Repeatable method
QUALITY ATTRIBUTES
FURPS quality attributes
• Functionality
* Feature set and capabilities of programs
* Security of the overall system
• Usability
* user-friendliness
* Aesthetics
* Consistency
* Documentation
• Reliability
* Evaluated by measuring the frequency and severity of failure
* MTTF
• Supportability
* Extensibility
* Adaptability
* Serviceability
DESIGN CONCEPTS
1. Abstractions
2. Architecture
3. Patterns
4. Modularity
5. Information Hiding
DESIGN CONCEPTS
ABSTRACTION
Many levels of abstraction.
Highest level of abstraction: the solution is stated in broad terms using the language of the problem environment.
Lower levels of abstraction: a more detailed description of the solution is provided.
• Procedural abstraction -- Refers to a named sequence of instructions that has a specific and limited function
• Data abstraction -- A named collection of data that describes a data object
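A small sketch, with hypothetical names, contrasting procedural abstraction (a named sequence of instructions with a specific, limited function) and data abstraction (a named collection of data describing a data object):

    from dataclasses import dataclass

    # Data abstraction: 'Door' names a collection of data describing the object,
    # without exposing how it is stored or manipulated internally.
    @dataclass
    class Door:
        width_cm: int
        height_cm: int
        is_open: bool = False

    # Procedural abstraction: 'open_door' names a sequence of instructions with a
    # specific and limited function; callers need not know the individual steps.
    def open_door(door: Door) -> None:
        if not door.is_open:
            door.is_open = True   # turn handle, push, hold... hidden behind the name

    d = Door(width_cm=90, height_cm=210)
    open_door(d)
    print(d.is_open)  # True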
DESIGN CONCEPTS
ARCHITECTURE -- The structure or organization of program components (modules) and their interconnections.
Architecture Models
(a) Structural Models-- An organized collection of program components
(b) Framework Models-- Represents the design in more abstract way
(c) Dynamic Models-- Represents the behavioral aspects indicating changes as a function of external
events
(d). Process Models-- Focus on the design of the business or technical process
PATTERNS
Provides a description that enables a designer to determine the following:
(a) whether the pattern is applicable to the current work
(b) whether the pattern can be reused
(c) whether the pattern can serve as a guide for developing a similar but functionally or structurally different pattern
MODULARITY
Divides software into separately named and addressable components, sometimes called modules.
Modules are integrated to satisfy the problem requirements. Consider two problems p1 and p2. If the complexity of p1 is cp1 and the complexity of p2 is cp2, then the effort to solve p1 is ep1 and the effort to solve p2 is ep2.
If cp1 > cp2 then ep1 > ep2.
The complexity (and hence the effort) of two problems when they are combined is often greater than the sum of the perceived complexity when each is taken separately.
• Based on the divide-and-conquer strategy: it is easier to solve a complex problem when it is broken into sub-modules.
INFORMATION HIDING
Information contained within a module is inaccessible to other modules that have no need for such information. It is achieved by defining a set of independent modules that communicate with one another only the information necessary to achieve the software function. Information hiding provides the greatest benefits when modifications are required during testing and later maintenance, because errors introduced during modification are less likely to propagate to other locations within the software.
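A brief sketch (the Account class is hypothetical, not from these notes) of information hiding: other modules interact only through the public operations, so the internal representation can change without propagating errors:

    class Account:
        """Only deposit/withdraw/balance are visible; the ledger layout is hidden."""

        def __init__(self) -> None:
            self._entries = []          # hidden internal data structure

        def deposit(self, amount: float) -> None:
            self._entries.append(amount)

        def withdraw(self, amount: float) -> None:
            self._entries.append(-amount)

        def balance(self) -> float:
            return sum(self._entries)

    # Client modules use only the interface; replacing the list with a running
    # total (or a database) later would not affect them.
    acc = Account()
    acc.deposit(100.0)
    acc.withdraw(30.0)
    print(acc.balance())  # 70.0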
DESIGN CLASSES
Design classes represent a different layer of the design architecture. Five types of design classes:
1. User interface class -- Defines all abstractions that are necessary for human computer interaction
2. Business domain class -- Refinements of the analysis classes that identify the attributes and services needed to implement some element of the business domain
3. Process class -- implements lower level business abstractions required to fully manage the business
domain classes
4. Persistent class -- Represent data stores that will persist beyond the execution of the software
5. System class -- Implements management and control functions to operate and communicate within
the computer environment and with the outside world.
Software Architecture is not the operational software. It is a representation that enables a software
engineer to
• Analyze the effectiveness of the design in meeting its stated requirements.
• Consider architectural alternatives at a stage when making design changes is still relatively easy.
• Reduce the risks associated with the construction of the software.
Data Design
The data design action translates data objects defined as part of the analysis model into data structures at
the component level and database architecture at application level when necessary.
ARCHITECTURAL STYLES
Describes a system category that encompasses:
(1) a set of components
(2) a set of connectors that enable communication and coordination among components
(3) Constraints that define how components can be integrated to form the system
(4) Semantic models to understand the overall properties of a system
Data-flow architectures
Shows the flow of input data, its computational components and the output data. The structure is also called pipe-and-filter. A pipe provides a path for the flow of data; filters manipulate the data and work independently of their neighbouring filters. If the data flow degenerates into a single line of transforms, it is termed batch sequential.
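A minimal pipe-and-filter sketch in Python (generator functions act as filters, and chaining them acts as the pipe); the specific filters are illustrative only:

    def read_lines(text):               # source filter
        for line in text.splitlines():
            yield line

    def strip_blanks(lines):            # filter: works independently of its neighbours
        for line in lines:
            if line.strip():
                yield line

    def to_upper(lines):                # filter
        for line in lines:
            yield line.upper()

    # The "pipe" is simply the chaining of filters; data flows through in sequence.
    pipeline = to_upper(strip_blanks(read_lines("alpha\n\nbeta\n")))
    print(list(pipeline))   # ['ALPHA', 'BETA']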
Call and return architectures
Achieves a program structure that is relatively easy to modify and scale. Two sub-styles:
(1) Main program/subprogram architecture
-- Classic program structure: a main program invokes a number of subprogram components
(2) Remote procedure call architecture
-- The components of a main program/subprogram architecture are distributed across multiple computers on a network
Layered architectures
A number of different layers are defined:
• Inner layer (interfaces with the operating system)
• Intermediate layers (utility services and application software functions)
• Outer layer (user interface)
[Figure: Layered architecture]
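A compact sketch of the layered idea (all class names are hypothetical): each layer calls only the layer immediately beneath it.

    # Inner layer: closest to the platform/OS services.
    class StorageLayer:
        def load(self, key: str) -> str:
            return f"<data for {key}>"      # stands in for file/OS access

    # Intermediate layer: utility services and application functions.
    class ServiceLayer:
        def __init__(self, storage: StorageLayer) -> None:
            self._storage = storage

        def get_report(self, key: str) -> str:
            return self._storage.load(key).upper()

    # Outer layer: user interface.
    class UILayer:
        def __init__(self, service: ServiceLayer) -> None:
            self._service = service

        def show(self, key: str) -> None:
            print(self._service.get_report(key))

    UILayer(ServiceLayer(StorageLayer())).show("sales")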
ARCHITECTURAL PATTERNS
A template that specifies approach for some behavioral characteristics of the system Patterns are
imposed on the architectural styles
Pattern domains:
1. Concurrency
-- Handles multiple tasks in a manner that simulates parallelism
-- Approaches (patterns):
(a) Operating system process management pattern
(b) Task scheduler pattern
2. Persistence
-- Data survives past the execution of the process that created it
-- Approaches (patterns):
(a) Database management system pattern
(b) Application-level persistence pattern (e.g., word-processing software)
Object-Oriented Design: Objects and object classes, An Object-Oriented design process, Design
evolution.
• Performing User interface design: Golden rules, User interface analysis and design, interface
analysis, interface design steps, Design evaluation.
System context and models of use: these specify the context of the system and the relationships between the software that is being designed and its external environment.
• If the system context is a static model, it describes the other systems in that environment.
• If the system context is a dynamic model, it describes how the system actually interacts with its environment.
System Architecture
Once the interactions between the software system being designed and the system environment have been defined, we can use this information as a basis for designing the system architecture.
Object Identification--This process is actually concerned with identifying the object classes. We can
identify the object classes by the following
1) Use grammatical analysis 2) Identify tangible entities in the application domain 3) Use a behavioural approach
4) Use a scenario-based approach
Golden Rules
1. Place the user in control
2. Reduce the user’s memory load
3. Make the interface consistent
Make the Interface Consistent. Allow the user to put the current task into a meaningful context.
Maintain consistency across a family of applications. If past interactive models have created user
expectations, do not make changes unless there is a compelling reason to do
so.
Interface analysis
- Understanding the users who interact with the system, based on their skill levels (i.e., requirements gathering).
-The task the user performs to accomplish the goals of the system are identified, described and
elaborated. Analysis of work environment.
Interface design
In interface design, all interface objects and actions that enable a user to perform all desired task are
defined
Implementation
A prototype is initially constructed and then later user interface development tools may be used to
complete the construction of the interface.
• Validation
The correctness of the system is validated against the user requirement
User Analysis
• Are users trained professionals, technicians, clerical, or manufacturing workers?
• What level of formal education does the average user have?
• Are the users capable of learning from written materials or have they expressed a desire for
classroom training?
• Are users expert typists or keyboard phobic?
• What is the age range of the user community?
• Will the users be represented predominately by one gender?
• How are users compensated for the work they perform?
• Do users work normal office hours or do they work until the job is done?
UNIT IV
Testing Strategies: A strategic approach to software testing, test strategies for conventional
software, Black-Box and White-Box testing, Validation testing, System testing, the art of
Debugging.
Product metrics: Software Quality, Metrics for Analysis Model, Metrics for Design Model,
Metrics for source code, Metrics for testing, Metrics for maintenance.
Metrics for Process and Products: Software Measurement, Metrics for software quality.
Testing Strategies
Software is tested to uncover errors introduced during design and construction. Testing often accounts for more project effort than any other software engineering activity. Hence it has to be done carefully using a testing strategy.
The strategy is developed by the project manager, software engineers and testing specialists.
Testing is the process of execution of a program with the intention of finding errors
Involves 40% of total project cost
Testing Strategy provides a road map that describes the steps to be conducted as part of
testing.
It should incorporate test planning, test case design, test execution and resultant data collection and evaluation.
Validation refers to a different set of activities that ensure that the software built is traceable to the customer requirements.
V&V encompasses a wide array of software quality assurance activities.
Testing is a set of activities that can be planned in advance and conducted systematically.
Testing strategy
Should have the following characteristics:
-- usage of Formal Technical reviews(FTR)
-- Begins at component level and covers entire system
-- Different techniques at different points
-- conducted by developer and test group
-- should include debugging
Software testing is one element of verification and validation.
Verification refers to the set of activities that ensure that software correctly implements a
specific function.
( Ex: Are we building the product right? )
Validation refers to the set of activities that ensure that the software built is traceable to
customer requirements.
( Ex: Are we building the right product ? )
Testing Strategy
Unit Testing begins at the vortex of the spiral and concentrates on each unit of software in
source code.
It uses testing techniques that exercise specific paths in a component and its control structure to
ensure complete coverage and maximum error detection. It focuses on the internal processing logic
and data structures. Test cases should uncover errors.
Boundary testing also should be done as s/w usually fails at its boundaries. Unit
tests can be designed before coding begins or after source code is generated.
Integration testing: In this the focus is on design and construction of the software architecture.
It addresses the issues associated with problems of verification and program construction by
testing inputs and outputs. Though modules function independently problems may arise
because of interfacing. This technique uncovers errors associated with interfacing. We can use
top-down integration wherein modules are integrated by moving downward through the control
hierarchy, beginning with the main control module. The other strategy is bottom-up, which begins construction and testing with atomic modules which are combined into clusters as we move up the hierarchy. A combined approach called the sandwich strategy can be used, i.e., top-down for higher-level modules and bottom-up for lower-level modules.
System Testing: In system testing, s/w and other system elements are tested as a whole. This is
the last high-order testing step which falls in the context of computer system engineering.
Software is combined with other system elements like H/W, People, Database and the overall
functioning is checked by conducting a series of tests. These tests fully exercise the computer
based system. The types of tests are:
1. Recovery testing: Systems must recover from faults and resume processing within a
prespecified time.
It forces the system to fail in a variety of ways and verifies that recovery is properly performed.
Here the Mean Time To Repair (MTTR) is evaluated to see if it is within acceptable limits.
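As a related worked example (figures hypothetical): Pressman's availability measure combines these quantities as Availability = MTTF / (MTTF + MTTR) x 100%. With MTTF = 990 hours and MTTR = 10 hours, Availability = 990 / (990 + 10) x 100% = 99%.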
2. Security Testing: This verifies that protection mechanisms built into a system will protect it
from improper penetrations. Tester plays the role of hacker. In reality given enough resources
and time it is possible to ultimately penetrate any system. The role of system designer is to
make penetration cost more than the value of the information that will be obtained.
3. Stress testing: It executes a system in a manner that demands resources in abnormal quantity,
frequency or volume and tests the robustness of the system.
4. Performance Testing: This is designed to test the run-time performance of s/w within the
context of an integrated system. They require both h/w and s/w instrumentation.
Testing Tactics:
The goal of testing is to find errors and a good test is one that has a high probability of finding
an error.
A good test is not redundant and it should be neither too simple nor too complex.
Two major categories of software testing
Black box testing: It examines some fundamental aspect of a system, tests whether each
function of product is fully operational.
White box testing: It examines the internal operations of a system and the procedural detail.
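A small illustrative example (the grade function is hypothetical): a black-box test checks only inputs against expected outputs from the specification, while white-box tests are chosen by looking at the code so that each internal branch is exercised.

    def grade(marks: int) -> str:
        if marks < 0 or marks > 100:
            return "invalid"
        if marks >= 50:
            return "pass"
        return "fail"

    # Black-box view: derived from the specification only (input -> expected output).
    assert grade(75) == "pass"
    assert grade(20) == "fail"

    # White-box view: chosen by inspecting the code so every branch is executed,
    # including the invalid and boundary paths.
    assert grade(-1) == "invalid"
    assert grade(101) == "invalid"
    assert grade(50) == "pass"     # boundary of the marks >= 50 branch
    print("all tests passed")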
1) Graph-based testing methods: Identify the objects that are modelled in the software and the links (relationships) that connect them, then derive tests that verify each object and relationship.
2) Equivalence partitioning: This divides the input domain of a program into classes of data
from which test
cases can be derived. Define test cases that uncover classes of errors, so that the number of test cases is reduced. This is based on equivalence classes, which represent a set of valid or invalid states for input conditions. It reduces the cost of testing.
Example
Input consists of 1 to 10
Then classes are n<1,1<=n<=10,n>10
Choose one valid class with value within the allowed range and two invalid classes where
values are greater than maximum value and smaller than minimum value.
3) Boundary Value Analysis: complements equivalence partitioning by choosing test cases at the edges of each class.
Example
If 0.0 <= x <= 1.0
Then test cases are (0.0, 1.0) for valid input and (-0.1, 1.1) for invalid input
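Continuing the 1-to-10 example above, a sketch of how the three equivalence classes and their boundaries might be turned into executable tests (the validate function is hypothetical):

    def validate(n: int) -> bool:
        """Accept only inputs in the range 1..10."""
        return 1 <= n <= 10

    # One representative per equivalence class:
    assert validate(5) is True      # valid class 1 <= n <= 10
    assert validate(0) is False     # invalid class n < 1
    assert validate(11) is False    # invalid class n > 10

    # Boundary value analysis: values at and just beyond the edges of the valid class.
    for value, expected in [(1, True), (10, True), (0, False), (11, False)]:
        assert validate(value) is expected
    print("equivalence/boundary tests passed")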
White-box testing broadens testing coverage and improves the quality of testing. It uses the following methods:
a) Condition testing: Exercises the logical conditions contained in a program module.
Focuses on testing each condition in the program to ensure that it does not contain errors
Simple condition: E1 <relational operator> E2
Compound condition: simple condition <Boolean operator> simple condition
Types of errors include Boolean operator errors, variable errors, arithmetic expression errors, etc.
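A short sketch (hypothetical discount rule) of condition testing: test cases are chosen so that each simple condition inside the compound condition takes both a true and a false outcome.

    def eligible_for_discount(age: int, is_member: bool) -> bool:
        # Compound condition: simple condition <Boolean operator> simple condition
        return (age >= 60) or is_member

    # Each simple condition is driven to both outcomes across the test set:
    assert eligible_for_discount(65, False) is True    # age >= 60 true, is_member false
    assert eligible_for_discount(30, True) is True     # age >= 60 false, is_member true
    assert eligible_for_discount(30, False) is False   # both simple conditions false
    assert eligible_for_discount(60, True) is True     # both true (boundary on age)
    print("condition tests passed")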
b) Data flow Testing
This selects test paths according to the locations of definitions and use of variables in a
program Aims to ensure that the definitions of variables and subsequent use is tested
First construct a definition-use graph from the control flow of a program
DEF(definition):definition of a variable on the left-hand side of an assignment statement
USE: Computational use of a variable, e.g., in a read, a write, or on the right-hand side of an assignment statement.
Every DU (definition-use) chain should be tested at least once.
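A tiny annotated sketch (hypothetical code) showing DEF and USE points and the DU chains a data-flow test would cover:

    def total_price(quantity: int, unit_price: float) -> float:
        subtotal = quantity * unit_price   # DEF(subtotal); USE(quantity), USE(unit_price)
        if subtotal > 1000:                # USE(subtotal): first DU chain from the DEF above
            subtotal = subtotal * 0.9      # USE(subtotal), then a new DEF(subtotal)
        return subtotal                    # USE(subtotal): reached by either DEF

    # Data-flow testing selects paths so that every definition reaches each of its uses:
    print(total_price(2, 100.0))    # discount branch not taken: original DEF -> USE at return
    print(total_price(20, 100.0))   # branch taken: original DEF -> USE in condition, new DEF -> USE at return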
c) Loop Testing
This focuses on the validity of loop constructs. Four categories can be defined:
1. Simple loops
2. Nested loops
3. Concatenated loops
4. Unstructured loops
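A sketch of simple-loop testing on a hypothetical function: exercise the loop zero times, once, twice, a typical number of times, and at (and just beyond) its maximum number of passes.

    def sum_first(values, n):
        """Sum the first n items of values (a simple loop of up to n passes)."""
        total = 0
        for i in range(min(n, len(values))):
            total += values[i]
        return total

    data = [1, 2, 3, 4, 5]
    assert sum_first(data, 0) == 0          # skip the loop entirely
    assert sum_first(data, 1) == 1          # exactly one pass
    assert sum_first(data, 2) == 3          # two passes
    assert sum_first(data, 4) == 10         # typical number of passes (m < n)
    assert sum_first(data, 5) == 15         # maximum number of passes
    assert sum_first(data, 6) == 15         # attempt to exceed the maximum
    print("loop tests passed")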
The Art of Debugging:
Debugging is an innate human trait. Some are good at it and some are not.
Debugging Strategies:
The objective of debugging is to find and correct the cause of a software error which is realized
by a combination of systematic evaluation, intuition and luck. Three strategies are proposed:
1)Brute Force Method.
2)Back Tracking
3)Cause Elimination
Brute Force: The most common and least efficient method for isolating the cause of a software error. It is applied when all else fails. Memory dumps are taken, run-time traces are invoked and the program is loaded with output statements. The debugger then tries to find the cause in this mass of information, which leads to a waste of time and effort.
Back Tracking: Beginning at the site where a symptom has been uncovered, the source code is traced backward manually until the cause is found. Useful for small programs, but the number of potential backward paths becomes unmanageably large as program size grows.
Cause Elimination: Based on the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes. A "cause hypothesis" is devised and the data are used to prove or disprove it. Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each.
Automated Debugging: This supplements the above approaches with debugging tools that provide semi-automated support, such as debugging compilers, dynamic debugging aids, test case generators and mapping tools.
Regression Testing: When a new module is added as part of integration testing the software
changes.
This may cause problems with the functions which worked properly before. This testing is the
re-execution of some subset of tests that are already conducted to ensure that changes have
not propagated unintended side effects. It ensures that changes do not introduce unintended behaviour or errors. It can be done manually or using automated tools.
Software Quality
Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software. Quality factors (ISO 9126) include:
1. Functionality
2.Reliability
3.Usability
4.Efficiency
5.Maintainability
6.Portability
Product metrics
• Information domain values and weighting factors used in the Function Point (FP) metric:
Information Domain Value          Weight: Simple / Average / Complex
External Inputs (EIs)                       3      /    4    /    6
External Outputs (EOs)                      4      /    5    /    7
External Inquiries (EQs)                    3      /    4    /    6
Internal Logical Files (ILFs)               7      /   10    /   15
External Interface Files (EIFs)             5      /    7    /   10
(The count of each domain value is multiplied by its weight and the products are summed to give the count total.)
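As a worked illustration (the figures are hypothetical): the function point value is computed from the weighted count total and the fourteen value adjustment factors Fi using FP = count total x [0.65 + 0.01 x sum(Fi)]. If the weighted count total comes to 320 and the Fi ratings sum to 46, then FP = 320 x [0.65 + 0.01 x 46] = 320 x 1.11 ≈ 355.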
DSQI (Design Structure Quality Index) = Σ wiDi, where D1 through D6 are intermediate design values and the weights wi sum to 1.
Metrics for source code are based on primitive measures that may be derived after the code is generated, or estimated once design is complete.
Software Measurement:
Software measurement can be categorized as:
1) Direct measures - e.g., cost and effort applied, lines of code (LOC) produced, execution speed, defects reported.
2) Indirect measures - e.g., functionality, quality, complexity, efficiency, reliability, maintainability.
UNIT – V
Risk management: Reactive vs. Proactive Risk strategies, software risks, Risk
identification, Risk projection, Risk refinement, RMMM, RMMM Plan.
Quality Management: Quality concepts, Software quality assurance, Software Reviews,
Formal technical reviews, Statistical Software quality Assurance, The Capability Maturity
Model Integration (CMMI), Software reliability, The ISO 9000 quality standards.
Risk Management
Risk is an undesired event or circumstance that may occur while a project is underway. It is necessary for the project manager to anticipate and identify the different risks that a project may be susceptible to. Risk management aims at reducing the impact of all kinds of risk that may affect a project by identifying, analyzing and managing them.
Software Risk
It involves two characteristics:
Uncertainty: the risk may or may not happen
Loss: if the risk becomes a reality, unwanted loss or consequences will occur
It includes
1)Project Risk
2)Technical Risk
3)Business Risk
4)Known Risk
5)Unpredictable Risk
6) Predictable risk
Project risk: Threaten the project plan and affect schedule and resultant cost
Technical risk: Threaten the quality and timeliness of software to be produced
Business risk: Threaten the viability of software to be built
Known risks: Risks that can be uncovered by careful evaluation of the project plan and the environment in which the project is being developed
Predictable risks: Risks that are extrapolated from past project experience
Unpredictable risks: Risks that can and do occur, but are extremely difficult to identify in advance
Risk Projection
Also called risk estimation. It estimates the impact of risk on the project and the product. Estimation is done using a risk table. Risk projection addresses risk in two ways:
(1) the likelihood or probability that the risk is real, and
(2) the consequences of the problems associated with the risk, should it occur.
Sample risk table entries:
Risk                                     Category   Probability   Impact   RMMM
Size estimate may be significantly low   PS         60%           2
Larger number of users than planned      PS         30%           3
(Impact values: 1 - catastrophic, 2 - critical, 3 - marginal, 4 - negligible)
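A related worked example (not part of these notes; figures hypothetical): Pressman quantifies a risk-table row as risk exposure, RE = P x C, where P is the probability of occurrence and C is the cost to the project should the risk occur. For the second row above, if P = 30% and the estimated cost of supporting the larger user base is 20,000 (in some currency unit), then RE = 0.30 x 20,000 = 6,000.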
Risk Refinement
Also called Risk assessment
Refines the risk table in reviewing the risk impact based on the following three factors
a.Nature: Likely problems if risk occurs
b.Scope: Just how serious is it?
c.Timing: When and how long
Quality Concepts
Variation control is the heart of quality control
From one project to another, we want to minimize the difference between the predicted resources needed to complete a project and the actual resources used, including staff, equipment, and calendar time.
Quality of design
Refers to characteristics that designers specify for the end product
Quality of conformance
Degree to which design specifications are followed in manufacturing the product
Quality control
Series of inspections, reviews, and tests used to ensure conformance of a work product to its
specifications
Quality assurance
Consists of a set of auditing and reporting functions that assess the effectiveness and
completeness of quality control activities
Cost of Quality
Prevention costs
Quality planning, formal technical reviews, test equipment, training
Appraisal costs
In-process and inter-process inspection, equipment calibration and maintenance, testing
Internal failure costs
Rework, repair, failure mode analysis
External failure costs
Complaint resolution, product return and replacement, help line support, warranty work
Software Quality Assurance
Software quality assurance (SQA) is the concern of every software engineer to reduce
cost and improve product time-to-market.
A Software Quality Assurance Plan is not merely another name for a test plan, though
test plans are
included in an SQA plan.
SQA activities are performed on every software project.
Use of metrics is an important part of developing a strategy to improve the quality of
both software processes and work products.
SQA Activities
Prepare SQA plan for the project.
Participate in the development of the project's software process description.
Review software engineering activities to verify compliance with the defined software
process.
Audit designated software work products to verify compliance with those defined as part
of the software process.
Ensure that any deviations in software or work products are documented and handled
according to a documented procedure.
Record any evidence of noncompliance and report it to management.
Software Reviews
Purpose is to find errors before they are passed on to another software engineering
activity or released to the customer.
Software engineers (and others) conduct formal technical reviews (FTRs) for software
quality assurance.
Using formal technical reviews (walkthroughs or inspections) is an effective means for
improving software quality.
Review Guidelines
Review the product, not the producer
Set an agenda and maintain it
Limit debate and rebuttal
Enunciate problem areas, but don’t attempt to solve every problem noted
Take written notes
Limit the number of participants and insist upon advance preparation
Develop a checklist for each product that is likely to be reviewed
Allocate resources and schedule time for FTRs
Conduct meaningful training for all reviewers
Review your early reviews
Software Defects
Industry studies suggest that design activities introduce 50-65% of all defects or errors during the software process. Review techniques have been shown to be up to 75% effective in uncovering design flaws, which ultimately reduces the cost of subsequent activities in the software process.
An organization that decides to obtain ISO 9000 certification applies to an ISO registrar for registration. The process consists of the following stages:
1. Application: Once an organization decides to go for ISO certification, it applies to the registrar for registration.
2. Pre-Assessment: During this stage, the registrar makes a rough assessment of the organization.
3. Document review and Adequacy of Audit: During this stage, the registrar reviews the document
submitted by the organization and suggest an improvement.
4. Compliance Audit: During this stage, the registrar checks whether the organization has complied with the suggestions made during the review.
5. Registration: The Registrar awards the ISO certification after the successful completion of all the
phases.
6. Continued Inspection: The registrar continues to monitor the organization from time to time.