Se Monograph CST-220 PDF
UNIT-1
Introduction: Definition of software and software engineering, difference between program and product, software development life cycle, different life cycle models (waterfall, iterative waterfall, prototype, evolutionary and spiral models), agile software development and its characteristics.
Function and object-oriented design: Structured analysis, data flow diagrams, basic object orientation concepts, Unified Modeling Language (UML), use case model, class diagrams, interaction diagrams, activity diagrams, state chart diagrams.
UNIT-2
Software design: Design process and concepts, effective modular design, the design model, design documentation, approaches to software design.
Software project management: Software project planning, project estimation techniques, COCOMO model, project scheduling, risk analysis and management, software quality management, staffing, software configuration management.
User interface design: Characteristics of good user interface design, command language user interfaces, menu-based and direct manipulation interfaces, fundamentals of command-based user interfaces.
UNIT-3
Software testing: Testing levels, activities, verification and validation, unit testing, system testing, integration testing, validation testing, black box and white box testing.
Quality management: Software quality, software reliability, software reviews, formal technical reviews, statistical SQA, the ISO 9000 coding standards, SQA plan, SEI CMM.
1. Important Definitions
This makes Risk Analysis an essential tool when your work involves risk. It can help you
identify and understand the risks that you could face in your role. In turn, this helps you
manage these risks, and minimize their impact on your plans.
1.6. Software Quality
In the context of software engineering, software quality refers to two related but distinct
notions that exist wherever quality is defined in a business context:
Software functional quality reflects how well it complies with or conforms to a given
design, based on functional requirements or specifications. That attribute can also be
described as the fitness for purpose of a piece of software or how it compares to
competitors in the marketplace as a worthwhile product. It is the degree to which the
correct software was produced.
Software structural quality refers to how the software meets the non-functional requirements
that support the delivery of the functional requirements, such as robustness or maintainability.
It is the degree to which the software works as needed.
1.7. Prototyping-
It is the process of building a model of a system. In terms of an information system,
prototypes are employed to help system designers build an information system that is
intuitive and easy to manipulate for end users. Prototyping is an iterative process that is
part of the analysis phase of the systems development life cycle.
During the requirements determination portion of the systems analysis phase, system
analysts gather information about the organization's current procedures and business
processes related to the proposed information system. In addition, they study the current
information system, if there is one, and conduct user interviews and collect
documentation. This helps the analysts develop an initial set of system requirements.
of object relationships. After each data object or item is given a descriptive name, its
relationship is described (or it becomes part of some structure that implicitly describes
relationship), the type of data (such as text or image or binary value) is described,
possible predefined values are listed, and a brief textual description is provided. This
collection can be organized for reference into a book called a data dictionary.
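To make the idea concrete, here is a minimal Java sketch of how one data dictionary entry could be represented. The class and field names are illustrative assumptions, not a standard API; they simply mirror the attributes listed above (descriptive name, relationship, data type, predefined values, textual description):

```java
import java.util.List;

// Illustrative sketch: one entry of a data dictionary.
public class DataDictionaryEntry {
    private final String name;                // descriptive name of the data item
    private final String relationship;        // how it relates to other items or structures
    private final String type;                // e.g. "text", "image", "binary"
    private final List<String> allowedValues; // possible predefined values
    private final String description;         // brief textual description

    public DataDictionaryEntry(String name, String relationship, String type,
                               List<String> allowedValues, String description) {
        this.name = name;
        this.relationship = relationship;
        this.type = type;
        this.allowedValues = allowedValues;
        this.description = description;
    }

    public String getName() { return name; }
    public String getType() { return type; }
}
```

A collection of such entries, indexed by name, would then form the data dictionary itself.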
Level of Data Flow Diagrams
Note that the letters along the arrows in the example diagrams represent the name of the
information being transferred in each direction. You should use a description of what the
information is, e.g. Patient name, and these names should match across the different levels of your
DFD. Likewise, each process should be given a name, with the number being used only in the top
half of the process box. A sample DFD fragment appears below.
Context Diagram
The context diagram just shows the system in the context of the external entities, and
what information flows in and out of the system as a whole.
[Diagram: context diagram. The Information System (process 0) exchanges data flows X and Y with Entity A and data flow Z with Entity B.]
The system is then subsequently broken down into its component parts – which are
themselves broken down, until each process represents a single step in your system.
Level 0 DFD
[Diagram: Level 0 DFD. The system is decomposed into numbered processes (e.g. Process 1), which exchange data flows X, Y, V and Z with Entity A, Entity B and the other processes.]
Level 1 DFD
For process 2
[Diagram: Level 1 DFD for process 2. Process 2 is decomposed into processes 2.1, 2.2 and 2.3, connected to Data store N and exchanging data flows V, G, J, H, N, W and Y.]
Level 2 DFD
For process 2.2
[Diagram: Level 2 DFD for process 2.2. Process 2.2 is decomposed into processes 2.2.1, 2.2.2 and 2.2.3, connected to Data store N and exchanging data flows H, H1, H2, J, J1, J2, K, L and N.]
[Sample DFD fragment: process 2, "Maintain patient appointments". The Patients entity supplies a patient name and receives a patient appointment; the process exchanges patient information and possible/available appointments with the Patients and Appointments data stores.]
1.10. Object Orientation Concepts-
Object means a real-world entity such as a pen, chair, or table. Object-oriented
programming is a methodology or paradigm for designing a program using classes and
objects. It simplifies software development and maintenance by providing the following
concepts:
Object
Class
Inheritance
Polymorphism
Abstraction
Encapsulation
1.1.1. Object
Any entity that has state and behavior is known as an object. For example: a chair, pen,
table, keyboard, or bike. An object can be physical or logical.
1.1.2. Class
A class is a collection of objects. It is a logical entity.
1.1.3. Inheritance
When one object acquires all the properties and behaviors of a parent object, this is known as
inheritance. It provides code reusability and is used to achieve runtime polymorphism.
1.1.4. Polymorphism
When one task can be performed in different ways, this is known as polymorphism. For example:
convincing a customer in different ways, or drawing different shapes such as a rectangle or a circle.
In Java, we use method overloading and method overriding to achieve polymorphism.
Another example is speech: a cat says meow, while a dog barks woof.
1.1.5. Abstraction
Hiding internal details and showing only the functionality is known as abstraction. For
example, when making a phone call we do not know the internal processing.
In Java, we use abstract classes and interfaces to achieve abstraction.
1.1.6. Encapsulation
Binding (or wrapping) code and data together into a single unit is known as
encapsulation. For example, a capsule is wrapped around different medicines.
A Java class is an example of encapsulation; a Java bean is a fully encapsulated
class because all of its data members are private.
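The concepts above can be sketched in a few lines of Java. This is a minimal illustrative example (the Animal, Cat and Dog names are assumptions for demonstration): an abstract class gives abstraction, Cat and Dog use inheritance, overriding speak() gives runtime polymorphism, and the private field gives encapsulation:

```java
// Abstraction: callers see only speak() and getName(), not any internal details.
abstract class Animal {
    private String name;                 // Encapsulation: state is private...

    Animal(String name) { this.name = name; }

    String getName() { return name; }    // ...and exposed only through a method.

    abstract String speak();             // behavior every Animal must provide
}

// Inheritance: Cat and Dog acquire the properties and behavior of Animal.
class Cat extends Animal {
    Cat(String name) { super(name); }
    @Override String speak() { return "meow"; }   // Polymorphism by overriding
}

class Dog extends Animal {
    Dog(String name) { super(name); }
    @Override String speak() { return "woof"; }
}

public class OopDemo {
    public static void main(String[] args) {
        // One task ("speak") performed in different ways at runtime.
        Animal[] pets = { new Cat("Tom"), new Dog("Rex") };
        for (Animal pet : pets) {
            System.out.println(pet.getName() + " says " + pet.speak());
            // prints "Tom says meow" then "Rex says woof"
        }
    }
}
```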
1.11. Modularity-
In software engineering, modularity refers to the extent to which a software or Web
application may be divided into smaller modules, with the set of modules together serving
the specified business domain.
Modularity is successful because developers can reuse prewritten code, which saves resources.
Overall, modularity makes software development more manageable.
1.12. Software Design-
Software design may be seen as a software engineering artifact derived from the available
specifications and requirements of the software product to be developed. It transforms the
requirements specified in the software requirement specification (SRS) into a viewable
form, so as to assist developers in carrying out the development process in a particular
direction and building the desired product.
Software designing is one of the early phases of the Software Development Life Cycle
(SDLC), which provides the necessary outputs for the next phase, i.e. coding &
development.
Further, these designs may be structured using different strategies, and are available in
multiple variants so as to view the logical structure of a software product from multiple
perspectives. Let's go through each of them.
Reengineering: The examination and alteration of an existing subject system to
reconstitute it in a new form. This process encompasses a combination of sub-processes
such as reverse engineering, restructuring, re-documentation, forward engineering, and
retargeting.
Software Development Life Cycle (SDLC) is a process used by the software industry to
design, develop and test high-quality software. The SDLC aims to produce high-quality
software that meets or exceeds customer expectations and reaches completion within time
and cost estimates.
SDLC is the acronym of Software Development Life Cycle. It is also called the Software
Development Process. SDLC is a framework defining the tasks performed at each step in the
software development process.
ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be
the standard that defines all the tasks required for developing and maintaining software.
2.2. Phase2: Defining Requirements
Once the requirement analysis is done, the next step is to clearly define and document the
product requirements and get them approved by the customer or the market analysts.
This is done through an SRS (Software Requirement Specification) document which
consists of all the product requirements to be designed and developed during the project
life cycle.
code. Different high-level programming languages such as C, C++, Pascal, Java and PHP
are used for coding. The programming language is chosen with respect to the type of
software being developed.
3. Important Rules
3.1. Classical Waterfall Model
The Waterfall Model was the first Process Model to be introduced. It is also referred to as
a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall
model, each phase must be completed before the next phase can begin and there is no
overlapping in the phases.
The Waterfall model is the earliest SDLC approach that was used for software
development.
The waterfall Model illustrates the software development process in a linear sequential
flow. This means that any phase in the development process begins only if the previous
phase is complete. In this waterfall model, the phases do not overlap.
3.1.1. Waterfall Model - Design
The Waterfall approach was the first SDLC model to be used widely in software engineering to
ensure the success of a project. In the Waterfall approach, the whole process of software
development is divided into separate phases, where typically the outcome of one phase acts
as the input for the next phase sequentially.
The following illustration is a representation of the different phases of the Waterfall
Model.
Implementation − With inputs from the system design, the system is first developed in
small programs called units, which are integrated in the next phase. Each unit is
developed and tested for its functionality, which is referred to as Unit Testing.
Integration and Testing −All the units developed in the implementation phase are
integrated into a system after testing of each unit. Post integration the entire system is
tested for any faults and failures.
Deployment of system − Once the functional and non-functional testing is done, the
product is deployed in the customer environment or released into the market.
Maintenance − Some issues come up in the client environment; to fix them, patches are
released. Better versions are also released to enhance the product. Maintenance is done to
deliver these changes in the customer environment.
All these phases are cascaded to each other, so that progress is seen as flowing steadily
downwards (like a waterfall) through the phases, hence the name "Waterfall Model". The next
phase is started only after the defined set of goals is achieved for the previous phase and it
is signed off. In this model, phases do not overlap.
3.1.3. Waterfall Model - Advantages
The advantages of waterfall development are that it allows for departmentalization and
control. A schedule can be set with deadlines for each stage of development and a
product can proceed through the development process model phases one by one.
Development moves from concept, through design, implementation, testing, installation,
troubleshooting, and ends up at operation and maintenance. Each phase of development
proceeds in strict order.
Some of the major advantages of the Waterfall Model are as follows −
o Simple and easy to understand and use
o Easy to manage due to the rigidity of the model. Each phase has specific
deliverables and a review process.
o Phases are processed and completed one at a time.
o Works well for smaller projects where requirements are very well understood.
o Clearly defined stages.
o Well understood milestones.
o Easy to arrange tasks.
o Process and results are well documented.
Some of the major disadvantages of the Waterfall Model are as follows −
o High amounts of risk and uncertainty.
o Not a good model for complex and object-oriented projects.
o Poor model for long and ongoing projects.
o Not suitable for projects where requirements are at a moderate to high risk of changing.
o It is difficult to measure progress within stages.
o Cannot accommodate changing requirements.
o Adjusting scope during the life cycle can end a project.
o Integration is done as a "big bang" at the very end, which does not allow identifying any
technological or business bottlenecks or challenges early.
3.2.2. Advantages of Prototype model:
o Users are actively involved in the development
o Since in this methodology a working model of the system is provided, the users
get a better understanding of the system being developed.
o Errors can be detected much earlier.
o Quicker user feedback is available leading to better solutions.
o Missing functionality can be identified easily
3.3.1.1. Identification
This phase starts with gathering the business requirements in the baseline spiral. In
the subsequent spirals, as the product matures, the identification of system requirements,
subsystem requirements and unit requirements is done in this phase.
This phase also includes understanding the system requirements by continuous
communication between the customer and the system analyst. At the end of the spiral,
the product is deployed in the identified market.
3.3.1.2. Design
The Design phase starts with the conceptual design in the baseline spiral and involves
architectural design, logical design of modules, physical product design and the final
design in the subsequent spirals.
Based on the customer evaluation, the software development process enters the next
iteration and subsequently follows the linear approach to implement the feedback
suggested by the customer. The process of iterations along the spiral continues
throughout the life of the software.
o New product line which should be released in phases to get enough customer
feedback.
o Significant changes are expected in the product during the development cycle.
o Not suitable for small or low risk projects and could be expensive for small
projects.
o Process is complex
o Spiral may go on indefinitely.
o Large number of intermediate stages requires excessive documentation.
4. Important statements
4.1. Software Evolution Laws
Lehman has given laws for software evolution. He divided the software into three
different categories:
S-type (static-type) - This is software that works strictly according to defined
specifications and solutions. The solution, and the method to achieve it, are both
understood immediately, before coding. S-type software is the least subject to change and
hence is the simplest of all. For example, a calculator program for mathematical
computation.
P-type (practical-type) - This is software with a collection of procedures, defined by
exactly what those procedures can do. In this software, the specifications can be
described, but the solution is not instantly obvious. For example, gaming software.
E-type (embedded-type) - This software works closely with the requirements of the real-world
environment. It has a high degree of evolution, as there are various changes in
laws, taxes, etc. in real-world situations. For example, online trading software.
Conservation of familiarity - The familiarity with the software, that is, the knowledge of
how and why it was developed in that particular manner, must be retained at any cost in
order to implement changes in the system.
Continuing growth - For an E-type system intended to resolve some business problem, its
size must continue to grow to accommodate the changes in the business over its
lifetime.
Reducing quality - An E-type software system declines in quality unless rigorously
maintained and adapted to a changing operational environment.
Feedback systems- The E-type software systems constitute multi-loop, multi-level
feedback systems and must be treated as such to be successfully modified or improved.
Self-regulation - E-type system evolution processes are self-regulating with the
distribution of product and process measures close to normal.
Organizational stability - The average effective global activity rate in an evolving E-type
system is invariant over the lifetime of the product.
5. Formulae/formulations
5.1. Basic COCOMO:
The basic COCOMO equations take the form:
Effort Applied (E) = a_b × (KLOC)^(b_b) [man-months]
Development Time (D) = c_b × (Effort Applied)^(d_b) [months]
People Required (P) = Effort Applied / Development Time [count]
where KLOC is the estimated number of delivered lines of code for the project (expressed
in thousands). The coefficients a_b, b_b, c_b and d_b are given in the following table (note:
the values listed below are from the original analysis, with a modern reanalysis producing
different values):
Software project    a_b    b_b    c_b    d_b
Organic             2.4    1.05   2.5    0.38
Semi-detached       3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32
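The basic equations can be evaluated directly in code. The sketch below is illustrative (the class and method names are mine, and the 32 KLOC size is an arbitrary example); it uses the standard organic-mode coefficients a_b = 2.4, b_b = 1.05, c_b = 2.5, d_b = 0.38:

```java
// Basic COCOMO estimate for a hypothetical organic-mode project.
public class BasicCocomo {
    static double effort(double a, double b, double kloc) {
        return a * Math.pow(kloc, b);       // E = a_b * (KLOC)^b_b  [man-months]
    }

    static double devTime(double c, double d, double effort) {
        return c * Math.pow(effort, d);     // D = c_b * E^d_b  [months]
    }

    public static void main(String[] args) {
        double kloc = 32.0;                     // hypothetical 32,000-line project
        double e = effort(2.4, 1.05, kloc);     // about 91.3 man-months
        double dTime = devTime(2.5, 0.38, e);   // about 13.9 months
        double people = e / dTime;              // about 6.6 people
        System.out.printf("E=%.1f PM, D=%.1f months, P=%.1f people%n", e, dTime, people);
    }
}
```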
Ratings
Cost Drivers                    Very Low   Low    Nominal   High   Very High   Extra High
Product attributes
Hardware attributes
Personnel attributes
Software engineer capability    1.42       1.17   1.00      0.86   0.70
Project attributes
o Memory constraints
o Volatility of the virtual machine environment
o Required turnaround time
Personnel attributes
o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience
Project attributes
o Use of software tools
o Application of software engineering methods
o Required development schedule
Each of the 15 attributes receives a rating on a six-point scale that ranges from
"very low" to "extra high" (in importance or value). An effort multiplier from the
table below applies to the rating. The product of all effort multipliers results in
an effort adjustment factor (EAF). Typical values for EAF range from 0.9 to 1.4.
The Intermediate COCOMO formula now takes the form:
E = a_i × (KLoC)^(b_i) × EAF
where E is the effort applied in person-months, KLoC is the estimated number of
thousands of delivered lines of code for the project, and EAF is the factor
calculated above. The coefficient a_i and the exponent b_i are given in the next table.
Software project    a_i    b_i
Organic             3.2    1.05
Semi-detached       3.0    1.12
Embedded            2.8    1.20
The Development time D calculation uses E in the same way as in the Basic COCOMO.
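As a hedged sketch (the class name is mine, and the 32 KLoC size and EAF of 1.10 are arbitrary example inputs), the intermediate formula can be evaluated like this, using the standard organic-mode coefficients a_i = 3.2 and b_i = 1.05 from Boehm's intermediate model:

```java
// Intermediate COCOMO: E = a_i * (KLoC)^b_i * EAF.
public class IntermediateCocomo {
    static double effort(double a, double b, double kloc, double eaf) {
        return a * Math.pow(kloc, b) * eaf;
    }

    public static void main(String[] args) {
        // Organic project: a_i = 3.2, b_i = 1.05; assume the product of the
        // 15 effort multipliers gave EAF = 1.10.
        double e = effort(3.2, 1.05, 32.0, 1.10);
        System.out.printf("Effort = %.1f person-months%n", e);  // about 134.0
    }
}
```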
A Function Point (FP) is a unit of measurement to express the amount of business
functionality an information system (as a product) provides to a user. FPs measure
software size. They are widely accepted as an industry standard for functional sizing.
For sizing software based on FP, several recognized standards and/or public
specifications have come into existence. As of 2013, these are −
Automating FP counting according to the guidelines of the International Function Point
User Group (IFPUG).
Function Point Analysis (FPA) technique quantifies the functions contained within
software in terms that are meaningful to the software users. FPs consider the number of
functions being developed based on the requirements specification.
Function Points (FP) Counting is governed by a standard set of rules, processes and
guidelines as defined by the International Function Point Users Group (IFPUG). These
are published in Counting Practices Manual (CPM).
o Constitutes a complete transaction.
o Is self-contained and leaves the business of the application being counted in a
consistent state.
5.8. Functions
There are two types of functions −
Data Functions
Transaction Functions
5.8.1. Data Functions
Internal Logical Files
Internal Logical File (ILF) is a user identifiable group of logically related data or control
information that resides entirely within the application boundary. The primary intent of
an ILF is to hold data maintained through one or more elementary processes of the
application being counted. An ILF has the inherent meaning that it is internally
maintained, it has some logical structure and it is stored in a file. (Refer Figure 1)
External Interface Files
External Interface File (EIF) is a user identifiable group of logically related data or
control information that is used by the application for reference purposes only. The data
resides entirely outside the application boundary and is maintained in an ILF by another
application. An EIF has the inherent meaning that it is externally maintained, an interface
has to be developed to get the data from the file. (Refer Figure 1)
5.8.2. Transaction Functions
There are three types of transaction functions.
External Inputs
External Outputs
External Inquiries
Transaction functions are made up of the processes that are exchanged between the user,
the external applications and the application being measured.
External Inputs
External Input (EI) is a transaction function in which data goes "into" the application from
outside the boundary to inside. This data comes from a source external to the application.
o Data may be used to maintain one or more Internal Logical Files.
o If the data is control information, it does not have to update an Internal Logical
File. (Refer Figure 1)
External Outputs
External Output (EO) is a transaction function in which data comes "out" of the system.
Additionally, an EO may update an ILF. The data creates reports or output files sent to
other applications. (Refer Figure 1)
External Inquiries
External Inquiry (EQ) is a transaction function with both input and output components
that result in data retrieval. (Refer Figure 1)
The transaction functions EI, EO, EQ are measured by counting FTRs and DETs that
they contain following counting rules. Likewise, data functions ILF and EIF are
measured by counting DETs and RETs that they contain following counting rules. The
measures of transaction functions and data functions are used in FP counting which
results in the functional size or function points.
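The counting idea can be sketched numerically. Using the IFPUG average complexity weights (EI = 4, EO = 5, EQ = 4, ILF = 10, EIF = 7), an unadjusted function point count looks like the example below. The counts themselves are arbitrary; in real FPA each function is individually rated low, average or high from its DETs, RETs and FTRs per the CPM rules:

```java
// Unadjusted function point (UFP) count using IFPUG *average* weights only.
// Real FPA rates each function's complexity individually per the CPM rules.
public class FunctionPoints {
    static int unadjustedFp(int ei, int eo, int eq, int ilf, int eif) {
        return ei * 4 + eo * 5 + eq * 4 + ilf * 10 + eif * 7;
    }

    public static void main(String[] args) {
        // Hypothetical application: 10 EIs, 5 EOs, 4 EQs, 3 ILFs, 2 EIFs.
        int ufp = unadjustedFp(10, 5, 4, 3, 2);
        System.out.println("UFP = " + ufp);  // 40 + 25 + 16 + 30 + 14 = 125
    }
}
```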
5.9. COUPLING
An indication of the strength of interconnections between program units.
Highly coupled systems have program units that are dependent on each other; loosely coupled
systems are made up of units that are independent or almost independent.
Modules are independent if each can function completely without the presence of the
other. Obviously, modules cannot be completely independent of each other: they must
interact so that the desired outputs can be produced. The more connections between modules,
the more dependent they are, in the sense that more information about one module is
required to understand the other module.
Three factors determine coupling: the number of interfaces, the complexity of the
interfaces, and the type of information flow along the interfaces.
In general, modules are tightly coupled if they use shared variables or if they exchange
control information.
Coupling is loose if information is held within a unit and units interface with other units
via parameter lists; coupling is tight if units share global data.
If a module needs only one field of a record, do not pass the entire record. Keep each
interface as simple and small as possible.
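The "pass only what is needed" advice can be made concrete. In the illustrative Java sketch below (the PatientRecord type and method names are hypothetical), the tightly coupled version depends on shared global state, while the loosely coupled version receives just the one field it needs through its parameter list:

```java
// Hypothetical record type used only for this illustration.
class PatientRecord {
    String name;
    int birthYear;
    String address;
}

public class CouplingDemo {
    // Tight coupling: reads a shared global variable, so every module that
    // touches 'current' becomes interdependent with this one.
    static PatientRecord current;
    static int ageTightlyCoupled(int currentYear) {
        return currentYear - current.birthYear;
    }

    // Loose coupling: only the single field needed crosses the interface,
    // and only as data (no control flags, no shared state).
    static int age(int birthYear, int currentYear) {
        return currentYear - birthYear;
    }

    public static void main(String[] args) {
        System.out.println(age(1990, 2024));  // prints 34
    }
}
```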
There are two types of information flow: data and control.
Passing or receiving back control information means that the action of the module will
depend on this control information, which makes the module more difficult to understand.
Interfaces with only data communication result in the lowest degree of coupling,
followed by interfaces that transfer only control data; coupling is highest if the data is hybrid.
The strongest form of coupling occurs when one module modifies local data values or
instructions in another module (this can happen in assembly language).
coupling because fewer modules will have to be modified if a shared data structure
is modified. Pass entire data structure but need only parts of it.
5.10. COHESION
Cohesion is a measure of how well a module fits together. A component should implement a
single logical function or a single logical entity, and all of its parts should contribute to
that implementation.
i) Coincidental cohesion: the parts of a component are not related but are simply bundled
into a single component. Such a component is harder to understand and is not reusable.
ii) Logical association: similar functions, such as input and error handling, are put
together because they fall into the same logical class. A flag may be passed to determine
which function is executed. The interface is difficult to understand, and code for more
than one function may be intertwined, leading to severe maintenance problems. Such
components are difficult to reuse.
iii) Temporal cohesion: statements activated at a single time, such as start-up or
shut-down (initialization, clean-up), are brought together. The functions are weakly related
to one another but more strongly related to functions in other modules, so maintenance may
require changing many modules.
iv) Procedural cohesion: the parts form a single control sequence, e.g., a loop or a
sequence of decision statements. This often cuts across functional lines; the component may
contain only part of a complete function, or parts of several functions. The functions are
still weakly connected, and again unlikely to be reusable in another product.
v) Communicational cohesion: the parts operate on the same input data or produce the same
output data, and may be performing more than one function. This is generally acceptable if
alternative structures with higher cohesion cannot easily be identified, but there are still
problems with reusability.
vi) Sequential cohesion: the output from one part serves as the input for another part.
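To illustrate the difference between the weakest and a strong form of cohesion, the sketch below (class and method names are mine, chosen for demonstration) contrasts a coincidentally cohesive "utilities" class with a component that implements a single logical function:

```java
// Coincidental cohesion: unrelated parts bundled into one component,
// hard to understand as a unit and not reusable as a whole.
class MiscUtils {
    static String formatName(String first, String last) { return last + ", " + first; }
    static double circleArea(double r) { return Math.PI * r * r; }  // unrelated to names
}

// High cohesion: every part contributes to one logical function
// (interest calculation), so the component is easy to understand and reuse.
class InterestCalculator {
    private final double rate;   // e.g. 0.05 for 5%

    InterestCalculator(double rate) { this.rate = rate; }

    double interestFor(double principal) { return principal * rate; }
    double balanceAfter(double principal) { return principal + interestFor(principal); }
}

public class CohesionDemo {
    public static void main(String[] args) {
        InterestCalculator calc = new InterestCalculator(0.05);
        System.out.println(calc.balanceAfter(100.0));  // prints 105.0
    }
}
```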
5.11.1. Recovery Testing
Recovery testing checks the recovery of the software by forcing it to fail in various ways.
It ensures that proper steps are documented to verify the compatibility of backup facilities,
and demonstrates the ability of the organization to recover from all critical failures.
5.11.2. Security testing
It checks the system's protection mechanisms and secures the system against improper penetration.
The debugging process gives one of two results: either the cause is found and corrected, or
the cause is not found.
5.12.1.2. Backtracking
Backtracking is used successfully in small programs. The source code is traced manually,
backwards from the point where the symptom appeared, until the cause is found.
It indicates the use of induction or deduction. The data related to the error occurrence is
organized so as to isolate the potential causes.
A test should have the highest probability of uncovering the errors of a complete class of
errors. Each test should be executed separately, and a test should be neither too simple nor
too complex.
White Box Testing | Black Box Testing
White-box testing is also known as glass-box testing. | Black-box testing is also called behavioral testing.
It starts early in the testing process. | It is applied in the final stages of testing.
Knowledge of the implementation is needed. | Knowledge of the implementation is not needed.
It is mainly done by the developer. | It is done by the testers.
The tester must be technically sound. | Testers may or may not be technically sound.
5.16.1. Difference between Verification and Validation
Verification | Validation
Verification is the process of finding out whether the software meets the specified requirements for a particular phase. | Validation is the process of checking whether the software meets the requirements and expectations of the customer.
The objective of verification is to check whether the software is constructed according to the requirement and design specifications. | The objective of validation is to check whether the specifications are correct and satisfy the business need.
It describes whether the outputs are as per the inputs or not. | It explains whether the product is accepted by the user or not.
Plans, requirements, specifications and code are evaluated during verification. | The actual product or software is tested during validation.
It involves manual checking of files and documents. | It involves computer-based checking, executing the developed program against files and documents.
Unit Testing
Unit testing starts at the center of the testing spiral, where each unit is implemented in
source code and tested individually.
Integration testing
Integration testing focuses on the construction and design of the software as the units are
assembled.
Validation testing
Validation testing checks that all requirements (functional, behavioral and performance) are
validated against the constructed software.
System testing
System testing confirms that all system elements and overall performance are tested entirely.
Software project management focuses on the four P's. They are as follows:
People
Product
The product objectives and the scope should be established before project planning.
Process
Planned and controlled software projects are managed for one reason: it is the known
way of managing complexity.
Project
To avoid project failure, the developer should heed a set of common warning signs and
develop a common-sense approach for planning, monitoring and controlling the project.
5.17. Problem Decomposition
Problem decomposition, also known as partitioning or problem elaboration, is an activity
that takes place during software requirements analysis. During software scoping, the
problem is not completely decomposed.
A software team must have a significant level of flexibility in choosing the software
process model that is best for the project, and in populating the model after it is chosen.
The following work tasks are needed in simple and small projects for the communication
activity:
Gather the use cases into a scoping document and review it with all concerned; modify the
use cases as needed.
Metrics collected from previous projects act as a base from which effort and time estimates
are created for the current software work. As the project goes on, the actual time and
effort are compared to the original estimates.
If quality is improved, then defects are minimized; and if the defect count goes down, the
amount of rework needed during the project is also reduced.
6. Important contents beyond syllabus
6.1. Advance Agile Project Management
6.1.1. Introduction
Agile Project Management is one of the revolutionary methods introduced to the
practice of project management. It is one of the latest project management
strategies and is mainly applied to project management in software development.
Therefore, it is best to relate agile project management to the software development
process when understanding it.
From the inception of software development as a business, a number of processes have been
followed, such as the waterfall model. With the advancement of software development
technologies and business requirements, the traditional models are no longer robust enough
to cater to the demands.
Therefore, more flexible software development models were required in order to address
the agility of the requirements. As a result of this, the information technology community
developed agile software development models.
'Agile' is an umbrella term used to identify the various models used for agile
development, such as Scrum. Since the agile development model differs from
conventional models, agile project management is a specialized area of project
management.
There are many differences in the agile development model when compared to
traditional models:
The agile model emphasizes that the entire team should be a tightly integrated unit,
including the developers, quality assurance, project management, and the customer.
Frequent communication is one of the key factors that makes this integration possible.
Therefore, daily meetings are held in order to determine the day's work and
dependencies.
Deliveries are short-term. Usually a delivery cycle ranges from one week to four weeks.
These are commonly known as sprints.
Agile project teams follow open communication techniques and tools which enable the
team members (including the customer) to express their views and feedback openly
and quickly. These comments are then taken into consideration when shaping the
requirements and implementation of the software.
6.1.3. Scope of Agile Project Management
In an agile project, the entire team is responsible for managing the project; it is not
just the project manager's responsibility. When it comes to processes and procedures,
common sense is used over written policies.
This makes sure that there is no delay in management decision making, and therefore
things can progress faster.
In addition to being a manager, the agile project management function should also
demonstrate leadership and skill in motivating others. This helps retain the
spirit among the team members and gets the team to follow discipline.
The agile project manager is not the 'boss' of the software development team. Rather, this
function facilitates and coordinates the activities and resources required for quality
and speedy software development.
Software
Hardware
People
Database
Documentation
Procedures.
Q.3 What are the factors to be considered in the System Model Construction?
Assumption
Simplification
Limitation
Constraints
Preferences
Ans. A framework is a code skeleton that can be fleshed out with specific classes or
functionality, and is designed to address the specific problem at hand.
Q.6 What are the important roles of Conventional Component within the Software
Architecture?
Ans. The important roles of Conventional component within the Software Architecture are:
Q.7 Differentiate Software Engineering methods, tools and procedures.
Ans. Methods: Provide the technical "how-to" for a broad array of tasks, like project planning, cost estimation, etc.
Tools: Provide automated or semi-automated support for the methods (for example, CASE tools).
Procedures: Hold the methods and tools together. They enable the timely development of
computer software.
Ans. Stakeholder is anyone in the organization who has a direct business interest in the system
or product to be built.
Ans. A real-time system (RTS) provides a specified amount of computation within fixed time
intervals. RTS sense and control external devices, respond to external events, and share
processing time between tasks.
Embedded software.
Web applications.
Artificial Intelligence software.
Ans. A software process is defined as the structured set of activities that are required to
develop a software system.
Specification
Design and Implementation
Validation
Evolution
Reusability management, Measurement.
Ans. Work breakdown structure is the decomposition of the project into smaller and more
manageable parts with each part satisfying the following criteria-
Q.19 What are the issues that get discussed during project closure?
Ans. The issues that get discussed during project closure are:
Was our estimation of the hardware correct?
Ans. They are the measurable and quantifiable attributes of progress. They are
intermediate points in the project which ensure that we are on the right track. They are
under the control of the project manager.
Ans. The overall responsibility for ensuring satisfactory progress on a project is the role of
the project board.
Ans. The project manager is responsible for day to day administration of the project.
Ans. Closed systems are those that do not interact with the environment.
Ans. A system that is part of a larger system whose primary purpose is non-computational.
Q.28 What are the Generic Framework Activities?
Communication.
Planning.
Modeling.
Construction.
Deployment.
Ans. Stakeholder is anyone who has stake in successful outcome of project such as:
Business Managers,
End-users,
Software Engineer,
Support People
Ans. Process Model differ from one another due to the following reasons:
Q.31 Write out the reasons for the failure of the Waterfall Model.
Real projects rarely follow sequential flow; iterations are made in an indirect manner.
It is difficult for the customer to state all requirements explicitly.
The customer needs more patience, as working products are available only at the deployment
phase.
Ans. Scripts specify process activities and other detailed work functions that are part of the
team process.
Real projects rarely follow sequential flow. Iteration always occurs and creates
problems.
It is difficult for the customer to state all requirements.
A working version of the program is not available until late, so the customer must have
patience.
Ans. Each of the regions in the spiral model is populated by a set of work tasks, called a task
set, that are adapted to the characteristics of the project to be undertaken.
Ans. The customer and the developer enter into a process of negotiation, where the
customer may be asked to balance functionality, performance and other product attributes
against cost and time to market.
Q.37. Which of the software engineering paradigms would be most effective? Why?
Reasons :
The incremental model can be adopted when there is a small number of people
involved in the project.
Technical risks can be managed with each increment.
Even within a very small time span, at least the core product can be delivered to the customer.
Customer Evaluation: Customer feedback is obtained, and based on this evaluation the
required tasks are performed and implemented at the installation stage.
Incremental model
Spiral model
WIN-WIN spiral model
Concurrent Development
Ans. Software prototyping is defined as rapid software development for validating the
requirements.
Ans. The prototyping approaches in software process are :
Ans. This prototyping is used to pre-specify the look and feel of the user interface in an
effective way.
Idea generation: Ideas come from various sources like customers, suppliers,
employees, and marketplace demands.
Prototype development phase: This entails building a simplistic model of the final
product.
Beta phase: This irons out the kinks in the product and adds the necessary supporting
infrastructure to roll out the product.
Production phase: In this phase the product is ready for prime time.
Maintenance and obsolescence phase: In this phase, critical bugs are fixed, after which
the product goes into obsolescence.
Ans. The project is divided into a sequence of well-defined phases. One phase is completed
before the next starts. There is a feedback loop between adjacent phases. What the actual
phases are depends on the project.
Advantages :
Simplicity
Lining up resources with appropriate skills is easy
Disadvantages :
Ans. The customer and developer agree on breaking the product into small units.
Development is carried out using modeling tools and CASE tools. The customer is kept in
touch so that changes are reflected in time. Quality assurance is imposed.
Advantages:
Responsiveness to change
Ability to capture user requirements effectively.
Application turnaround time is shorter.
Disadvantages:
Need for modeling tools, which adds expense.
Places restrictions on the type and structure of the application.
Ans. A prototype is built to quickly demonstrate to the customer what the product would
look like. Only minimal functionality of the actual product is provided during the
prototyping phase.
Ans. The main advantage of the spiral model is that it is realistic and typifies most software
development projects. It combines the best features of most of the earlier models. It strikes
a good balance between early problem identification and correction and proactive problem
prevention.
Ans. Here different teams have specialization and responsibility in different life cycle phases.
Ans. Formal Methods are not widely used due to the following reasons:
It establishes a basis for the creation of the software design.
It defines a set of requirements that can be validated once the software is built.
Inception
Elaboration
Specification
Management
Elicitation
Negotiation
Validation
Q.59. What are the Difficulties in Elicitations?
Problem of Scope
Problem of Volatility
Problem of Understanding
Ans. Quality Function Deployment (QFD) is a technique that translates the needs of the
customer into technical requirements. It concentrates on maximizing customer satisfaction
from the software engineering process.
Ans. These are used in architectural design to document the hierarchical structure, parameters
and interconnections in a system. There is no decision box. The chart can be augmented with
a module-by-module specification of input/output and processing attributes.
Q.62. What are the contents of HIPO diagrams?
Ans. Requirement engineering is the process of establishing the services that the customer
requires from the system and the constraints under which it operates and is developed.
Correct: The SRS should be kept up to date as appropriate requirements are identified.
Unambiguous: Only when the requirements are correctly understood is it possible to
write unambiguous software.
Complete: To make the SRS complete, it should specify everything the software designer
needs to create the software.
Traceable: The need for each mentioned requirement should be correctly identified.
Q.66 What are the objectives of Analysis modeling?
Ans. To devise a set of valid requirements after which the software can be built.
Ans. Entity Relationship Diagram is the graphical representation of the object relationship
pair. It is mainly used in database application.
Ans. Data Flow Diagram depicts the information flow and the transforms that are applied on
the data as it moves from input to output.
Ans. A Level-0 DFD is called the fundamental system model or context model. In the context
model the entire software system is represented by a single bubble, with input and
output indicated by incoming and outgoing arrows.
Ans. State transition diagram is basically a collection of states and events. The events cause
the system to change its state. It also represents what actions are to be taken on the
occurrence of particular events.
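The state/event/action structure described above can be sketched as a transition table. The states, events, and actions below are hypothetical (a simple vending machine), purely to illustrate the idea.

```python
# A minimal sketch of a state transition diagram as a lookup table:
# (current_state, event) -> (next_state, action). All names are illustrative.

transitions = {
    ("Idle",       "coin"):   ("Ready",      "light on"),
    ("Ready",      "select"): ("Dispensing", "dispense item"),
    ("Dispensing", "done"):   ("Idle",       "light off"),
}

def step(state, event):
    """Apply an event; return (next_state, action), or stay put if undefined."""
    return transitions.get((state, event), (state, None))

state = "Idle"
for event in ["coin", "select", "done"]:
    state, action = step(state, event)
    print(state, action)
```

Each row of the table corresponds to one arrow in the diagram: the event causes the state change, and the action is what the system performs on that occurrence.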
Ans. The data dictionary can be defined as an organized collection of all the data elements of
the system with precise and rigorous definitions so that user and system analyst will
have a common understanding of inputs, outputs, components of stores and
intermediate calculations.
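A data dictionary entry can be pictured as a structured record per data element. The element names, types, and usages below are hypothetical; the sketch only shows how precise, shared definitions might be organized.

```python
# Hypothetical data dictionary: one precise record per data element, so the
# user and the system analyst share a common understanding of each item.

data_dictionary = {
    "customer_id": {
        "type": "integer",
        "description": "Unique identifier assigned to each customer",
        "where_used": ["place_order", "customer_file"],
    },
    "order_total": {
        "type": "decimal(10,2)",
        "description": "Sum of line-item amounts for one order",
        "where_used": ["billing", "order_file"],
    },
}

def lookup(element):
    """Return a one-line rigorous definition of a data element."""
    entry = data_dictionary[element]
    return f"{element}: {entry['type']} -- {entry['description']}"

print(lookup("customer_id"))
```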
Data Dictionary
Entity Relationship Diagram
Data flow Diagram
State Transition Diagram
Control Specification
Process Specification
Data Design
Architectural design
Interface design
Component-level design.
Responsibilities: Commitments on either side. Requirements form the basis for the further
success of a project.
Current system requirements
Functionality requirements
Performance requirements
Availability needs
Security
Environmental definitions
Targets
Acceptance criteria
Ongoing needs: Documentation
Training
Ongoing support
Q.75 List the skill sets required during the requirements phase.
Ans. The skill sets required during the requirements phase are:
Ability to look at the requirements
Domain expertise
Strong interpersonal skills
Ability to tolerate ambiguity
Technology awareness
Strong negotiation skills
Strong communication skills
Ans. Evaluating the effectiveness of the original project goals and providing input to improve
the system.
Responsibilities
Targets
Current system needs
Ongoing needs
Functionality Requirements
Availability needs
Performance requirements
Security
Q.79 List some of the skills essential for requirements gathering phase.
Quality costs include:
Appraisal costs.
Prevention costs.
Ans. Software Quality Control involves a series of inspections, reviews and tests which are
used throughout the software process to ensure that each work product meets the
requirements placed upon it.
Q.83 What is Software Quality Assurance?
Ans. Software Quality Assurance is a set of auditing and reporting functions that assess the
effectiveness and completeness of quality control activities.
Ans. The SQA Plan provides a road map for instituting SQA, and it serves as a template for the
SQA activities that are instituted for each software project.
Question 86. Explain the SE paradigms. Explain each model with a diagram, and
compare the models.
Answer. The software engineering paradigm, which is also referred to as a software process
model or Software Development Life Cycle (SDLC) model, is the development strategy that
encompasses the process, methods and tools. In simple words, the software engineering
paradigm is the Software Development Life Cycle itself. The objectives of the use of software
engineering paradigms include the following points:
1. Waterfall Model:
The Waterfall Model is the most basic of all software development models. It is also known as
the Linear Sequential Model, as every stage is executed sequentially. The waterfall model is
considered the classic life cycle model, as it is the basic and the first model to be proposed
and accepted worldwide.
Below is an image of the different stages of the Waterfall Model.
We can see that every stage is executed only after the previous stage has completed. No
backward movement is possible in this model; therefore we cannot correct an error made in
an earlier stage until the whole software has been executed.
Advantages of Waterfall Model:
The model has well-defined phases with well-defined inputs.
The model recognizes the sequences of software engineering activities that result in a
software product.
The Spiral Model involves the same steps as the waterfall model, but it addresses the
disadvantages of the waterfall model. The spiral model is motivated by the idea that the
requirements given at the beginning are incomplete, so the requirement analysis and design
phases need to evolve periodically as the project progresses. Below is an image of the
structure of the Spiral Model:
From the above image we can see that the stages repeat periodically as needed.
The spiral model focuses on constant re-evaluation of the design and the risks involved. It can
be described as being both iterative and sequential: every stage is executed sequentially and
is repeated after a period of time to re-evaluate some points or some requirements.
It is flexible and can be tailored for a variety of situations, such as reuse, component-based
development and prototyping.
It can be adjusted to be used as any other model.
It takes into consideration the change of the requirements during the development process.
The incremental process model uses the same phases as the waterfall process model. The
difference between the incremental process model and the waterfall model is that in this model
the phases are much smaller than the waterfall phases. This model is similar to the spiral model
in that the requirements are not complete at project start, but unlike the spiral model's cyclical
structure, the incremental model contains something similar to multiple waterfalls: the
Incremental Process Model follows a sequential execution of the phases, and after the last
phase is completed, if any update is required, it moves back to the first phase or to the phase
where the update is required.
Advantages of Incremental Process Model:
It is easier to test and debug during a smaller iteration.
It is easier to manage risk in incremental process model because risky pieces are
identified and handled during iteration.
It lowers the initial delivery cost.
This model is more flexible, less costly to change scope and requirements.
Disadvantages of Incremental Process Model:
Incremental process model needs a clear and complete definition of the whole system
before it can be broken down and built incrementally.
The cost needed for this model is higher than the Waterfall model.
To build Incremental Process Model we need good planning and design.
Question 87. Draw a level 0 or level 1 DFD for the Railway Reservation System. -4
Answer.
Level 0 DFD of Railway Reservation System:
Question 88. Draw the E-R Diagram of a banking system.
Answer. The ER diagram for a bank system is:
Question 89. Outline the major goals & key challenges faced by software engineering.
Answer. The Major Goals of Software Engineering are:
Readability
Reusability
Correctness
Reliability
Flexibility
Efficiency
Question 90. Distinguish between generic and customized software production.
Answer: Generic software products are developed for a general market and sold to many
different customers, whereas customized software is developed for, and commissioned by, a
particular customer. The Software Engineering Institute defines a software product line as "a
set of software-intensive systems that share a common, managed set of features satisfying the
specific needs of a particular market segment or mission and that are developed from a
common set of core assets in a prescribed way".
In software engineering, a software development methodology (also known as a system
development methodology, software development life cycle, software development process,
or software process) is a splitting of software development work into distinct phases (or
stages) containing activities, with the intent of better planning and management.
A milestone is a significant event in the course of a project that is used to give visibility of
progress in terms of achievement of predefined milestone goals. Failure to meet a milestone
indicates that a project is not proceeding to plan and usually triggers corrective action by
management.
Question 92. Difference between logical and physical DFD -2
Answer: Data flow diagrams (DFDs) are categorized as either logical or physical. A logical
DFD captures the data flows that are necessary for a system to operate. It describes the
processes that are undertaken, the data required and produced by each process, and the stores
needed to hold the data. A physical DFD, in contrast, shows how the system is actually
implemented, including the hardware, software, people and files involved.
Question 93. What do you mean by Black Box View? -3
Answer: The black box view is the view in which we are concerned only with the external
behaviour of the project, not with its interior; what happens inside the project is not our
concern. The black box view basically deals with the inputs and outputs of the software we
are developing. Black box testing is derived from this view: we supply an input and check the
output we get for that particular input, checking the robustness and boundary values of the
software. The black box view is like looking at an opaque body: we see only the exterior of
the body, not the interior. Likewise, in the black box view, only the outside things, the inputs
and outputs, are seen.
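The boundary-value checking mentioned above can be sketched concretely. The `grade` function below is a hypothetical unit under test; the point is that we exercise it only through its inputs and outputs, concentrating on values just below, on, and just above each boundary.

```python
# Black-box sketch: test a function purely through inputs/outputs.
# `grade` and its pass mark of 40 are illustrative assumptions.

def grade(score):
    """Return 'pass' for 40..100 inclusive, 'fail' below 40, else 'invalid'."""
    if score < 0 or score > 100:
        return "invalid"
    return "pass" if score >= 40 else "fail"

# Boundary-value cases: just below, on, and just above each boundary.
cases = {-1: "invalid", 0: "fail", 39: "fail", 40: "pass", 100: "pass", 101: "invalid"}

for value, expected in cases.items():
    assert grade(value) == expected, (value, grade(value))
print("all boundary cases pass")
```

Note that nothing in the test depends on how `grade` is implemented internally, which is exactly the black box view.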
Now we can relate all this to phase containment of errors. Phase containment of errors means
detecting and removing errors in the same phase in which they are introduced. Some errors
escape the phase in which they were made and are found only in later phases, which affects
the cost and productivity of the software: the cost of removing such an error increases the
budget of the software. There are many ways to remove these errors or to deal with them.
Question 95: What is a structured design? Draw structure chart, HIPO diagram.
Answer:Structured analysis and design technique (SADT) is a systems engineering and
software engineering methodology for describing systems as a hierarchy of functions. SADT is
a structured analysis modeling language, which uses two types of diagrams: activity models
and data models.
A Structure Chart (SC) in software engineering and organizational theory, is a chart which
shows the breakdown of a system to its lowest manageable levels. They are used in structured
programming to arrange program modules into a tree. Each module is represented by a box,
which contains the module's name.
Que 96. Software doesn't wear out. Elaborate it.
Ans: Software doesn't "wear out":
This figure 01 is often called the "bathtub curve". It indicates that, at the beginning of the life of
hardware, it shows a high failure rate as it contains many defects. In time, the manufacturers or the
designers repair these defects, and it becomes idealized, gets into the steady state, and continues.
But after that, as time passes, the failure rate rises again; this may be caused by excessive
temperature, dust, vibration, improper use and so on, and at some point the hardware becomes
totally unusable. This state is the "wear out" state. On the other hand, software does not wear
out. Like hardware, software also shows a high failure rate in its infant state. Then it gets
modifications, the defects get corrected, and thus it comes to the idealized state. But even
though a software product may not have any defects, it may still need modification, as the
users' demands on the software may change. When that occurs, the unfulfilled demands will be
considered defects and the failure rate will increase. After one modification, another may
become necessary. In that way, slowly, the minimum failure rate level begins to rise, causing
the software to deteriorate due to change, but it does not "wear out".
What is the advantage of using the prototype software development model instead of the
waterfall model? Also explain the effect of defining a prototype on the overall cost of the
software project. What is the difference between function oriented and object oriented design?
Ques 97. Differentiate between iterative Enhancement Model and Evolutionary
Development model.
Ans: Iterative Enhancement Model: This model has the same phases as the waterfall model, but
with fewer restrictions. Generally the phases occur in the same order as in the waterfall model,
but these may be conducted in several cycles. A useable product is released at the end of
each cycle, with each release providing additional functionality. Evolutionary Development
Model: Evolutionary development model resembles iterative enhancement model. The same
phases as defined for the waterfall model occur here in a cyclical fashion. This model differs
from iterative enhancement model in the sense that this does not require a useable product at the
end of each cycle. In evolutionary development, requirements are implemented by category
rather than by priority.
Ques 98.How does the risk factor affect the spiral model of software development?
Ans: Risk Analysis phase is the most important part of "Spiral Model". In this phase all possible
(and available) alternatives, which can help in developing a cost effective project are analyzed
and strategies are decided to use them. This phase has been added specially in order to identify
and resolve all the possible risks in the project development. If risks indicate any kind of
uncertainty in requirements, prototyping may be used to proceed with the available data and
find out possible solution in order to deal with the potential changes in the requirements.
Ques 99. Why is SRS also known as the black box specification of system?
Ans: The SRS document is a contract between the development team and the customer. Once
the SRS document is accepted by the customer, any subsequent controversies are settled by
referring to the SRS document. The SRS document is called a black-box specification because
the system is considered a black box whose internal details are not known and only its visible
external (i.e. input/output) behavior is recognized.
Ques 100. Consider a program which registers students for different programs. The
students fill a form and submit it. This is sent to the departments for confirmation. Once it
is confirmed, the form and the fees are sent to the account section. Draw a data flow
diagram using SRD technique.
Ans:
Ques101.What is modularity? List the important properties of modular system?
ANS) Modularity is the degree to which a system's components may be separated and
recombined. The meaning of the word, however, can vary somewhat by context: In
biology, modularity is the concept that organisms or metabolic pathways are composed
of modules.
Self- Contained: "Agile & Autonomous"
A module is a self-contained component of a larger software system. This doesn't mean that it
is an atomic component; in fact a module consists of several smaller pieces which collectively
contribute to the functionality/performance of the module.
We cannot remove or modify any of these tiny (compared to the larger software system)
components; if we do so, the module will cease its expected functionality. A module can
be installed, un-installed or moved as a whole (single unit), and it won't affect the functionality
of the other modules.
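The "whole-unit replacement behind a clean interface" idea above can be sketched in code. The module names and the tax computation are hypothetical; the point is that a client depends only on the interface, so a module can be swapped as a single unit.

```python
# Sketch of self-contained modules: internals are hidden behind a small
# interface, so a module can be replaced without touching its clients.
# All names and figures here are illustrative assumptions.

class TaxModule:
    """Self-contained unit: clients depend only on compute(), not internals."""
    _RATE = 0.18  # internal detail; free to change without affecting clients

    def compute(self, amount):
        return round(amount * self._RATE, 2)

class FlatFeeTaxModule:
    """Drop-in replacement honouring the same interface."""
    def compute(self, amount):
        return 5.0

def invoice_total(amount, tax_module):
    # Client code works with any module exposing compute().
    return amount + tax_module.compute(amount)

print(invoice_total(100.0, TaxModule()))
print(invoice_total(100.0, FlatFeeTaxModule()))
```

Swapping `TaxModule` for `FlatFeeTaxModule` changes nothing in `invoice_total`, which is exactly the install/uninstall-as-a-whole property described above.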
'High Cohesiveness' means that a component (module) is strongly related or focused on
carrying out a specific task, and does not perform any unrelated tasks. Therefore, cohesive
modules are fine-grained, robust, reusable and less complex.
In our compartment example, you can see that each compartment (module) contains a
predefined set of sub-components which are responsible for carrying out a well-defined task,
and doing it with absolute efficiency.
A given module's internal implementation is not dependent on the other modules that it
interacts with. Modules interact through a well-defined, clean interface, and any module can
change its internal implementation without affecting the other modules.
It's vital to define the interfaces between the modules with extreme care. In the ideal case, an
interface should be defined based on what a given module offers to other modules and what it
requires from other modules.
So, back to our real-world scenario: in the compartment example, we can clearly see that the
interfaces are well defined, and any internal modification inside a compartment would not
affect the other modules.
Ques102. Define cohesion & coupling. Explain various types of cohesion & coupling. What
are the effects of module cohesion & coupling?
ANS) Coupling: Coupling is a measure of the degree of interdependence between modules.
Two modules are considered independent if one can function completely without the presence
of the other. Obviously, if two modules are independent, they are solvable and modifiable
separately. However, all the modules in a system cannot be independent of each other, as they
must interact so that together they produce the desired external behavior of the system. The
commonly identified levels of coupling, from lowest (best) to highest (worst), are data, stamp,
control, common, and content coupling.
Cohesion: Cohesion is the concept that tries to capture this intra-module relationship. With
cohesion we are interested in determining how closely the elements of a module are related to
each other.
Cohesion of a module represents how tightly bound the internal elements of the module are to
one another. Cohesion of a module gives the designer an idea about whether the different
elements of a module belong together in the same module. Cohesion and coupling are clearly
related. Usually the greater the cohesion of each module in the system, the lower the coupling
between modules is. There are several levels of Cohesion:
Coincidental
Logical
Temporal
Procedural
Communicational
Sequential
Functional
Coincidental is the lowest level, and functional is the highest. Coincidental Cohesion occurs
when there is no meaningful relationship among the elements of a module. Coincidental
Cohesion can occur if an existing program is modularized by chopping it into pieces and
making different pieces modules.
A module has logical cohesion if there is some logical relationship between the elements of the
module, and the elements perform functions that fall in the same logical class. A typical
example of this kind of cohesion is a module that performs all the inputs or all the outputs.
Temporal cohesion is the same as logical cohesion, except that the elements are also related in
time and are executed together. Modules that perform activities like "initialization",
"clean-up" and "termination" are usually temporally bound.
A procedurally cohesive module contains elements that belong to a common procedural unit. For
example, a loop or a sequence of decision statements in a module may be combined to form a
separate module. A module with communicational cohesion has elements that are related by a
reference to the same input or output data. That is, in a communicationally bound module, the
elements are together because they operate on the same input or output data.
When the elements are together in a module because the output of one forms the input to another,
we get sequential cohesion. Functional cohesion is the strongest cohesion. In a functionally
bound module, all the elements of the module are related to performing a single function. By
function, we do not mean simply mathematical functions; modules accomplishing a single goal
are also included.
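The contrast between the strongest and weakest levels can be sketched in code. The function names below are hypothetical: one module is functionally cohesive (every element serves a single goal), while the other is coincidentally cohesive (unrelated elements lumped together).

```python
# Functional vs. coincidental cohesion, with illustrative function names.

import math

# Functional cohesion: every element serves one goal, computing the
# (population) standard deviation of a list of values.
def std_dev(values):
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

# Coincidental cohesion (an anti-pattern): unrelated elements in one unit.
def misc_utils(values, text):
    dev = std_dev(values)   # statistics...
    shout = text.upper()    # ...and string handling: no meaningful relationship
    return dev, shout

print(round(std_dev([2, 4, 4, 4, 5, 5, 7, 9]), 1))
```

A designer seeing `misc_utils` cannot say what the module is *for*, which is the practical symptom of coincidental cohesion; `std_dev` answers that question in its name.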
Ques103. It is possible to estimate software size before coding. Justify your answer?
ANS) A Function Point (FP) is a unit of measurement to express the amount of business
functionality, an information system (as a product) provides to a user. FPs measure software size.
They are widely accepted as an industry standard for functional sizing.
For sizing software based on FP, several recognized standards and/or public specifications have
come into existence. As of 2013, these are −
ISO Standards
Object Management Group (OMG), an open membership and not-for-profit computer industry
standards consortium, has adopted the Automated Function Point (AFP) specification led by the
Consortium for IT Software Quality. It provides a standard for automating FP counting
according to the guidelines of the International Function Point User Group (IFPUG).
Function Point Analysis (FPA) technique quantifies the functions contained within software in
terms that are meaningful to the software users. FPs count the number of functions being
developed, based on the requirements specification.
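A sketch of the counting idea: each function type found in the requirements is weighted and summed into an unadjusted function point (UFP) total. The weights below are the IFPUG average-complexity values; the example counts are hypothetical.

```python
# Hedged sketch of unadjusted function point (UFP) counting, using the
# IFPUG average-complexity weights. Example counts are made up.

WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,      # internal logical files
    "external_interfaces": 7,  # external interface files
}

def unadjusted_fp(counts):
    """counts: mapping of function type -> number found in the requirements."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

counts = {"external_inputs": 6, "external_outputs": 4, "external_inquiries": 3,
          "internal_files": 2, "external_interfaces": 1}
print(unadjusted_fp(counts))  # 24 + 20 + 12 + 20 + 7 = 83
```

In full FPA the UFP would then be adjusted by technical complexity factors, but this unadjusted sum is already enough to show that size can be estimated from the specification, before any code is written.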
Ques104. Discuss various types of COCOMO model? Explain the phase wise distribution of
effort?
ANS) Any cost estimation model can be viewed as a function that outputs the cost estimate. The
basic idea of having a model or procedure for cost estimation is that it reduces the problem of
estimation to determining the value of the "key parameters" that characterize the project, based
on which the cost can be estimated.
The primary factor that controls the cost is the size of the project. That is, the larger the project,
the greater the cost & resource requirement. Other factors that affect the cost include
programmer ability, experience of developers, complexity of the project, & reliability
requirements.
The goal of a cost model is to determine which of these many parameters have a significant
effect on cost, and then to discover the relationships between them and the cost. The most
common approach is to make effort a function of a single variable. Often this variable is the
project size, and the effort equation is:
EFFORT = a × (SIZE)^b
If the size estimate is in KDLOC, the total effort, E, in person-months can be given by the
equation:
E = 5.2 (KDLOC)^0.91
The Constructive cost model (COCOMO) was developed by Boehm. This model also estimates
the total effort in terms of person-months of the technical project staff. The effort estimate
includes development, management, and support tasks but does not include the cost of the
secretarial and other staff that might be needed in an organization. The basic steps in this model
are: -
Obtain an initial estimate of the development effort from the estimate of thousands of
delivered lines of source code (KDLOC).
Adjust the effort estimate by multiplying the initial estimate with all the multiplying factors.
The initial estimate is determined by an equation of the form used in the static single-variable
models, using KDLOC as the measure of size. The values of the constants a and b depend on
the project type. In COCOMO, projects are categorized into three types: organic, semidetached,
and embedded.
Organic projects are in an area in which the organization has considerable experience and
requirements are less stringent. A small team usually develops such systems. Examples of this
type of project are simple business systems, simple inventory management systems, and data
processing systems.
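The basic COCOMO effort equation can be sketched directly, using Boehm's standard coefficients for the three project classes; the 32-KDLOC example project is hypothetical.

```python
# Basic COCOMO sketch: effort (person-months) = a * (KDLOC)^b, with
# Boehm's standard coefficients for the three project types.

COEFFICIENTS = {
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def basic_cocomo_effort(kdloc, mode):
    """Return the initial effort estimate for a project of `kdloc` KDLOC."""
    a, b = COEFFICIENTS[mode]
    return a * kdloc ** b

print(round(basic_cocomo_effort(32, "organic"), 1))
```

For the same size, the embedded coefficients give a noticeably larger estimate than the organic ones, reflecting the tighter constraints of embedded projects; in full COCOMO this initial value would then be scaled by the cost-driver multipliers described above.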
used to measure the product performance. These attributes can be used for quality assurance
as well as quality control: quality assurance activities are oriented towards preventing the
introduction of defects, while quality control activities are aimed at detecting defects in
products and services.
Reliability
Reliability measures whether the product is dependable enough to sustain operation in any
condition and give consistently correct results. Product reliability is measured in terms of
the working of the product under different working environments and conditions.
Maintainability
Different versions of the product should be easy to maintain. For development, it should be easy
to add code to the existing system and easy to upgrade for new features and technologies from
time to time. Maintenance should be cost effective and easy: the system should be simple to
maintain while correcting defects or making a change in the software.
Usability
This can be measured in terms of ease of use. The application should be user friendly, easy
to learn, and simple to navigate.
2. Conciseness
Clarity in user interface design does not mean that you should overflow your design with
information. It is not difficult to add definitions and explanations, but an overflow of content
will annoy users by asking them to spend too much time reading. It is highly advisable to keep
things clear and concise: if something can be explained without sparing extra words, do so.
The idea is to save the valuable time of the users by keeping things as concise as possible.
3. Consistency
Consistency is yet another characteristic of a good user interface design. It enables users to
develop usage patterns: they learn what the different buttons, tabs, icons and other interface
elements look like, and thereby easily recognize them. A design that behaves uniformly from
the user's point of view speaks for a good user interface.
4. Legibility
While designing a user interface what must be kept in mind is legibility which means you need
not use complicated words which might be difficult to read and understand instead use easy
language and make sure your design includes information that is easy to read.
5. Responsiveness
By responsive user interface design we mean that there should be no time lag in loading. It
should be quite fast! Witnessing good loading speed is sure to enhance the user experience.
Besides, the interface should provide some form of feedback and keep the user informed about
the task in hand: let users know what is happening. It is a wise idea to display a spinning wheel
or a progress bar to show the current status.
6. Efficiency
An efficient user interface figures out what exactly the user is trying to achieve and then lets
them do exactly that, without any fuss. Prepare an interface that enables people to easily
accomplish what they want, instead of presenting fussy listings that annoy them and mar the
overall experience.
7. Attractiveness
Last but not least, your user interface design should focus mainly on user experience, which
besides cool user-friendly features includes visual appeal. If visual appeal is missing from your
user interface design, the overall effort goes to waste. So, while preparing the user interface
design, do not underestimate the value of visual appeal.
ANS) Staffing is the process of hiring, positioning and overseeing employees in an
organisation. Nature of the Staffing Function
Staffing is an important managerial function- Staffing is one of the key managerial
activities, along with planning, organizing, directing and controlling. The operations of
these four functions depend upon the manpower which is made available through the staffing function.
Staffing is a pervasive activity- The staffing function is carried out by all managers and in
all types of concerns where business activities are carried out.
Staffing is a continuous activity- This is because staffing function continues throughout the life
of an organization due to the transfers and promotions that take place.
The basis of the staffing function is efficient management of personnel- Human resources can be
efficiently managed through a proper system or procedure, that is, recruitment, selection, placement,
training and development, providing remuneration, etc.
Staffing helps in placing right men at the right job. It can be done effectively through proper
recruitment procedures and then finally selecting the most suitable candidate as per the job
requirements.
Staffing is performed by all managers, depending upon the nature of the business, the size of the
company, the qualifications and skills of managers, etc. In small companies, the top management
generally performs this function. In medium and large scale enterprises, it is performed especially
by the personnel department of the concern.
Command Line Interface (CLI)
CLI was a great tool of interaction with computers until video display monitors
came into existence. CLI is the first choice of many technical users and programmers, and it is
the minimum interface a software can provide to its users.
CLI provides a command prompt, the place where the user types a command and feeds it to the
system. The user needs to remember the syntax of each command and its use. Earlier CLIs were
not programmed to handle user errors effectively.
A command is a text-based reference to a set of instructions which are expected to be executed
by the system. There are mechanisms, like macros and scripts, that make it easier for the user to
operate.
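A minimal command dispatcher illustrates the idea: the user's text is mapped to a set of instructions, a macro expands into a longer command line, and unknown commands are handled as user errors. All command names and handlers below are invented for illustration.

```python
# Each command name maps to the instructions it stands for (the handlers
# are hypothetical; here they simply return text).
COMMANDS = {
    "echo": lambda args: " ".join(args),
    "ver":  lambda args: "CST-220 shell 0.1",
}
MACROS = {"hi": "echo hello world"}   # a macro expands to a full command line

def run(line):
    line = MACROS.get(line, line)     # expand macros before parsing
    name, *args = line.split()
    if name not in COMMANDS:          # handle user errors explicitly
        return f"unknown command: {name}"
    return COMMANDS[name](args)
```

A real shell adds history, quoting and scripting on top of this same dispatch loop.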
Risk Identification- Risk management outlines the various categories of risk faced by a new
business, including operational, financial, strategic, compliance-related, environmental,
political, and safety and health risks.
Risk Management- Clarifies the importance of, and events for, tackling the risks that your new
business establishment may face. This includes information about the evaluation of various
risks and four options for managing each risk. It also helps in outlining some preventive ideas
to decrease the likelihood of risks immobilizing your business.
Business recovery planning- Outlines disaster planning and also minimizes the impact of the
disaster on your business and this includes aspects such as data security, employees, insurance
policies and equipment.
Prevention of crime- This outlines crimes disturbing small businesses and derives some
simple steps to tackle it.
Scams- Risk management discusses scams and how they could hamper your business. It also lists
methods that could help to avoid scams, such as investigating the source of the scam, keeping
and maintaining records, and filtering the scam.
Shop Theft- Risk management discusses theft problems in a business and the areas to protect,
such as adopting simple safety measures and by keeping track of the staff and inventory.
Data Security- This offers a variety of information, which protects the businesses and also
secures data. Includes disaster recovery, risk assessment, backups and policies regarding data
security.
Reduced redundant work.
Effective management of simultaneous updates.
Avoids configuration-related problems.
Facilitates team coordination.
Helps in build management and in managing the tools used in builds.
Every module has a well defined single purpose
Modules can be separately compiled and kept in library
Modules can use other modules
Modules should be simpler to use than to build
Modules should have an easy interface
Ques114: What are the objectives of software design? How do we transform an informal design
to a detailed design?
Ans: Objectives of software design
The purpose of the design phase is to plan a solution of the problem specified by the
requirements document. This phase is the first step in moving from the problem domain to the
solution domain. In other words, starting with what is needed; design takes us toward how to
satisfy the needs, so the basic objectives are:
Identify different types of software, based on the usage.
Show differences between design and coding.
Define concepts of structured programming.
Illustrate some basic design concepts.
See how to design for testability and maintainability.
Non-formal methods of specification can lead to problems during coding, particularly if the
coder is a different person from the designer, which is often the case. Software designers do not
arrive at a finished design document immediately but develop the design iteratively through a
number of different phases. The design process involves adding details as the design is
developed with constant backtracking to correct earlier, less formal, designs. The transformation
is done as per the following diagram.
Ques115: Explain the cost drivers and EAF of the intermediate COCOMO model.
Ans: There are 15 different attributes, called cost driver attributes, that determine the
multiplying factors. These factors depend on product, computer, personnel, and project
attributes. Among them are required software reliability (RELY), product complexity (CPLX),
analyst capability (ACAP), application experience (AEXP), use of modern tools (TOOL), and
required development schedule (SCED). Each cost driver has a rating scale, and for each rating
a multiplying factor is provided. For example, for the product attribute RELY, the rating scale
is very low, low, nominal, high, and very high.
The multiplying factors for these ratings are 0.75, 0.88, 1.00, 1.15, and 1.40,
respectively. So, if the reliability requirement for the project is judged to be very low, the
multiplying factor is 0.75, while if it is judged to be very high the factor is 1.40. The attributes
and their multiplying factors for different ratings are shown in the table below.
Cost Drivers                                      Very Low   Low   Nominal   High   Very High   Extra High
Product attributes
  Required software reliability                      0.75    0.88    1.00    1.15     1.40
  Size of application database                               0.94    1.00    1.08     1.16
  Complexity of the product                          0.70    0.85    1.00    1.15     1.30        1.65
Hardware attributes
  Run-time performance constraints                                   1.00    1.11     1.30        1.66
  Memory constraints                                                 1.00    1.06     1.21        1.56
  Volatility of the virtual machine environment              0.87    1.00    1.15     1.30
  Required turnaround time                                   0.87    1.00    1.07     1.15
Personnel attributes
  Analyst capability                                 1.46    1.19    1.00    0.86     0.71
  Applications experience                            1.29    1.13    1.00    0.91     0.82
  Software engineer capability                       1.42    1.17    1.00    0.86     0.70
  Virtual machine experience                         1.21    1.10    1.00    0.90
  Programming language experience                    1.14    1.07    1.00    0.95
Project attributes
  Use of software tools                              1.24    1.10    1.00    0.91     0.82
  Application of software engineering methods        1.24    1.10    1.00    0.91     0.83
  Required development schedule                      1.23    1.08    1.00    1.04     1.10
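The effort adjustment factor (EAF) is simply the product of the multipliers chosen from the table; the sample ratings below describe an assumed project, not a prescribed one.

```python
def eaf(multipliers):
    """Effort Adjustment Factor: the product of all cost-driver multipliers."""
    result = 1.0
    for m in multipliers:
        result *= m
    return result

# Assumed project: very high reliability (1.40), high complexity (1.15),
# high analyst capability (0.86), low use of software tools (1.10).
factor = eaf([1.40, 1.15, 0.86, 1.10])   # about 1.52
# Intermediate COCOMO multiplies the nominal effort estimate by this factor.
```

Note how a capable analyst (0.86) partially offsets the penalties for high reliability and complexity.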
Ques116: Compare the following (i) Productivity and difficulty (ii) Manpower and
development time (iii) Static single variable model and static multivariable model
(iv) Intermediate and Detailed COCOMO model
Ans: (i) Productivity and difficulty
Productivity refers to metrics and measures of output from production processes per unit of
input. Productivity P may be conceived of as a metric of the technical or engineering efficiency
of production. In software project planning, productivity is defined as the number of lines of
code developed per person-month.
Difficulty: The ratio K/td², where K is the software development cost and td is the peak
development time, is called difficulty, denoted by D and measured in person/year:
D = K/td²
The relationship shows that a project is more difficult to develop when the manpower demand
is high or when the time schedule is short.
Putnam has observed that productivity is proportional to the difficulty:
P ∝ D^β
The average productivity may be defined as
P = (lines of code produced) / (cumulative manpower used to produce the code) = S/E
where S is the lines of code produced and E is the cumulative manpower used from t = 0 to td
(inception of the project to the delivery time).
(ii) Time and cost
In software projects, time cannot be freely exchanged against cost; such a trade-off is limited by
the nature of software development. For a given organization developing software of size S, the
quantity obtained is constant. We know
K^(1/3) · td^(4/3) = S/C
If we raise both sides to the power 3, then K·td^4 is constant for software of constant size. A
compression of the development time td will therefore produce an increase of the manpower
cost K. If compression is excessive, not only would the software cost much more, but the
development would become so difficult that it would increase the risk of being unmanageable.
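Because K·td^4 stays constant for a fixed size, solving the equation for K shows how sharply schedule compression raises cost; the S and C values below are arbitrary illustrative numbers.

```python
def manpower_cost(S, C, td):
    """Solve S = C * K**(1/3) * td**(4/3) for the manpower cost K."""
    return (S / C) ** 3 / td ** 4

# Halving the development time td multiplies K by 2**4 = 16.
k_normal     = manpower_cost(100_000, 5_000, td=2.0)
k_compressed = manpower_cost(100_000, 5_000, td=1.0)
```

The fourth-power dependence is exactly why excessive compression makes a project unmanageable.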
Ques117: Discuss the various strategies of design. Which design strategy is most popular
and practical?
Ans: Software design is a process to conceptualize the software requirements into software
implementation. Software design takes the user requirements as challenges and tries to find
optimum solution. While the software is being conceptualized, a plan is chalked out to find the
best possible design for implementing the intended solution.
There are multiple variants of software design. Let us study them briefly:
Structured Design
Structured design is a conceptualization of problem into several well-organized elements of
solution. It is basically concerned with the solution design. Benefit of structured design is, it
gives better understanding of how the problem is being solved. Structured design also makes it
simpler for designer to concentrate on the problem more accurately.
Structured design is mostly based on the 'divide and conquer' strategy, where a problem is
broken into several small problems and each small problem is individually solved until the
whole problem is solved.
The small pieces of the problem are solved by means of solution modules. Structured design
emphasizes that these modules be well organized in order to achieve a precise solution.
These modules are arranged in a hierarchy and communicate with each other. A good
structured design always follows some rules for communication among multiple modules,
namely: Cohesion - grouping of all functionally related elements; Coupling - communication
between different modules.
A good structured design has high cohesion and low coupling arrangements.
Function Oriented Design
In function-oriented design, the system comprises many smaller sub-systems known as
functions. These functions are capable of performing significant tasks in the system. The
system is considered to be the top view of all functions.
Function oriented design inherits some properties of structured design where divide and
conquer methodology is used.
This design mechanism divides the whole system into smaller functions, which provides means
of abstraction by concealing the information and its operation. These functional modules can
share information among themselves by means of information passing and by using
information available globally.
Another characteristic of functions is that when a program calls a function, the function changes
the state of the program, which sometimes is not acceptable by other modules. Function oriented
design works well where the system state does not matter and program/functions work on input
rather than on a state.
Design Process
The whole system is seen in terms of how data flows through it by means of a data flow diagram.
The DFD depicts how functions change the data and the state of the entire system.
The entire system is logically broken down into smaller units known as functions on the basis of
their operation in the system.
Each function is then described at large.
Object Oriented Design
Object oriented design works around the entities and their characteristics instead of the
functions involved in the software system. This design strategy focuses on entities and their
characteristics. The whole concept of the software solution revolves around the engaged entities.
Let us see the important concepts of Object Oriented Design:
Objects - All entities involved in the solution design are known as objects. For example, persons,
banks, companies and customers are treated as objects. Every entity has some attributes
associated with it and some methods that operate on those attributes.
Classes - A class is a generalized description of an object, and an object is an instance of a class.
A class defines all the attributes which an object can have and the methods which define the
functionality of the object.
In the solution design, attributes are stored as variables and functionalities are defined by means
of methods or procedures.
Encapsulation - In OOD, the attributes (data variables) and methods (operations on the data) are
bundled together; this is called encapsulation. Encapsulation not only bundles important
information of an object together, but also restricts access to the data and methods from the
outside world. This is called information hiding.
Inheritance - OOD allows similar classes to stack up in hierarchical manner where the lower or
sub-classes can import, implement and re-use allowed variables and methods from their
immediate super classes. This property of OOD is known as inheritance. This makes it easier to
define specific classes and to create generalized classes from specific ones.
Polymorphism - OOD languages provide a mechanism whereby methods performing similar
tasks but varying in arguments can be assigned the same name. This is called polymorphism,
and it allows a single interface to perform tasks for different types. Depending upon how the
function is invoked, the respective portion of the code gets executed.
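The four concepts above can be seen together in one short sketch; the Account/SavingsAccount classes and the 1% bonus rule are invented purely for illustration.

```python
class Account:                      # class: generalized description of an object
    def __init__(self, owner, balance):
        self.owner = owner
        self._balance = balance     # encapsulated attribute (information hiding)

    def deposit(self, amount):      # method operating on the attributes
        self._balance += amount

    def balance(self):
        return self._balance

class SavingsAccount(Account):      # inheritance: reuses the super class
    def deposit(self, amount):      # polymorphism: same name, refined behaviour
        super().deposit(amount * 1.01)   # hypothetical 1% bonus on deposits

acct = SavingsAccount("Asha", 100)  # object: an instance of the class
acct.deposit(100)                   # which deposit runs depends on the object
```

The caller uses one interface, `deposit`, while the object's class decides which code executes.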
Ques118: Why does the software design improve when we use object-oriented concepts?
Ans: Object oriented design:-Object oriented design transforms the analysis model created using
object-oriented analysis into a design model that serves as a blueprint for software construction.
It is a design strategy based on information hiding. Object oriented design is concerned with
developing an object-oriented model of a software system to implement the identified
requirements. Object oriented design establishes a design blueprint that enables a software
engineer to define object oriented architecture in a manner that maximizes reuse, thereby
improving development speed and end product quality.
Ques119: Define coupling. Discuss various types of coupling.
Ans: Coupling is the measure of the degree of interdependence between modules.
Types of coupling: Different types of coupling are content, common, external, control, stamp and
data. The strength of coupling from the lowest (best) to the highest (worst) is given in the
figure.
Data coupling Best
Stamp coupling
Control coupling
External coupling
Common coupling
Content coupling (Worst)
Given two procedures A and B, we can identify a number of ways in which they can be coupled.
Data coupling
The dependency between module A and B is said to be data coupled if their dependency is based
on the fact that they communicate only by passing data. Other than communicating through data,
the two modules are independent. A good strategy is to ensure that no module communication
contains "tramp data": only the necessary data is passed. A student's name, address, and course
are examples of tramp data that are unnecessarily communicated between modules. By ensuring
that modules communicate only necessary data, module dependency is minimized.
Stamp coupling: Stamp coupling occurs between module A and B when a complete data
structure is passed from one module to another. Since not all of the data making up the structure
is usually necessary for communication between the modules, stamp coupling typically involves
tramp data. If one procedure only needs part of a data structure, the calling module should pass
just that part, not the complete data structure.
Control coupling: Module A and B are control coupled if they communicate by passing control
information. This is usually accomplished by means of flags that are set by one module and
reacted upon by the dependent module.
External coupling: A form of coupling in which a module has a dependency on another module
external to the software being developed, or on a particular type of hardware. This is basically
related to communication with external tools and devices.
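The contrast between stamp and data coupling can be shown in a few lines; the student record and its field names are hypothetical.

```python
# Stamp coupling: the complete record (a data structure) is passed,
# although the callee needs only the name field.
def name_from_record(student_record):
    return student_record["name"]

# Data coupling: only the necessary elementary item is passed, so the
# modules share nothing beyond it and tramp data is avoided.
def format_name(name):
    return name.upper()

record = {"name": "Ravi", "address": "Pune", "course": "CST-220"}
label = format_name(name_from_record(record))
```

`format_name` can be tested and reused without knowing the record layout, which is exactly the benefit of the weaker coupling.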
Ques120. Explain the concept of bottom-up, top-down and hybrid design.
Ans: Top-down and bottom-up are both strategies of information processing and knowledge
ordering, used in a variety of fields including software, humanistic and scientific theories
(see systemics), and management and organization. In practice, they can be seen as a style of
thinking, teaching, or leadership.
A top-down approach (also known as stepwise design and in some cases used as a synonym of
decomposition) is essentially the breaking down of a system to gain insight into its compositional
sub-systems in a reverse engineering fashion. In a top-down approach an overview of the system is
formulated, specifying but not detailing any first-level subsystems. Each subsystem is then refined in
yet greater detail, sometimes in many additional subsystem levels, until the entire specification is
reduced to base elements. A top-down model is often specified with the assistance of "black boxes",
which make it easier to manipulate. However, black boxes may fail to elucidate elementary
mechanisms or be detailed enough to realistically validate the model. The top-down approach
starts with the big picture and breaks it down from there into smaller segments. A bottom-up
approach is the piecing
together of systems to give rise to more complex systems, thus making the original systems sub-
systems of the emergent system. Bottom-up processing is a type of information processing based on
incoming data from the environment to form a perception. From a Cognitive Psychology perspective,
information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into
an image by the brain that can be interpreted and recognized as a perception (output that is "built up"
from processing to final cognition). In a bottom-up approach the individual base elements of the
system are first specified in great detail. These elements are then linked together to form larger
subsystems, which then in
turn are linked, sometimes in many levels, until a complete top-level system is formed. This
strategy often resembles a "seed" model, by which the beginnings are small but eventually
grow in complexity and completeness. However, "organic strategies" may result in a tangle of
elements and subsystems, developed in isolation and subject to local optimization as opposed to
meeting a global purpose.
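In code, a top-down design writes the overview first and leaves the first-level subsystems as simple routines to be refined later; the payroll example and its placeholder deduction rule are assumptions made only for illustration.

```python
# Top level: the big picture, specifying but not detailing the subsystems.
def pay_slip(employee):
    gross = compute_gross(employee)
    net = apply_deductions(gross)
    return (employee["name"], net)

# First-level subsystems, each refined in greater detail in later steps.
def compute_gross(employee):
    return employee["hours"] * employee["rate"]

def apply_deductions(gross):
    return gross * 0.9        # placeholder rule, to be refined further
```

A bottom-up designer would instead write and test `compute_gross` and `apply_deductions` first, then assemble `pay_slip` from them.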
Ques 11: Explain the following Software Metrics (i) Lines of Code (ii) Function Count (iii)
Token Count (iv) Equivalent size measure
Ans: Lines of code (LOC) is a software metric used to measure the size of a software program
by counting the number of lines in the text of the program's source code. LOC is typically used
to predict the amount of effort that will be required to develop a program, as well as to estimate
programming productivity or effort once the software is produced. Advantages:
Scope for Automation of Counting: Since a line of code is a physical entity, manual counting
effort can easily be eliminated by automating the counting process. Small utilities may be
developed for counting the LOC in a program. However, a code counting utility developed for
a specific language cannot be used for other languages, due to the syntactical and structural
differences among languages.
An Intuitive Metric: Line of code serves as an intuitive metric for measuring the size of
software because it can be seen and its effect can be visualized. Function point, by contrast, is
a more abstract metric which cannot be imagined as a physical entity; it exists only in the
logical space. This way, LOC comes in handy for expressing the size of software among
programmers with low levels of experience.
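A small counting utility of the kind mentioned above might look like this; the convention of skipping blank and comment-only lines is an assumption, since LOC counting conventions vary.

```python
def count_loc(source_text):
    """Count non-blank, non-comment lines (one common LOC convention)."""
    loc = 0
    for line in source_text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

program = """# demo program
x = 1

y = x + 1  # lines with trailing comments still count
"""
```

For this four-line sample, the utility reports 2 LOC: the comment-only line and the blank line are skipped.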
Ques121. Software project planning entails what activities? What are the difficulties faced
in measuring the Software Costs?
Ans: Software project planning entails the following activities:
• Estimation: effort, cost, resource, and project duration
• Project scheduling
• Staff organization: staffing plans
• Risk handling: identification, analysis, and abatement procedures
• Miscellaneous plans: quality assurance plan, configuration management plan, etc.
Software costs are due to the requirement for software, hardware and human resources. One
can perform cost estimation at any point in the software life cycle.
As the cost of software depends on the nature and characteristics of the project, the accuracy of
an estimate will depend on the amount of reliable information we have about the final product.
When the product is delivered, the costs can be determined accurately, as everything spent is
known by then. However, when the software is being initiated, or during the feasibility study,
we have only some idea about the functionality of the software. There is very high uncertainty
about the actual specifications of the system, hence cost estimates are based on uncertain
information.
Ques123. Compute the function point value for a project with the following domain
characteristics: No. of inputs = 30, No. of outputs = 62, No. of user inquiries = 24, No. of files
= 8, No. of external interfaces = 2. Assume that all the complexity adjustment values are
average and that 14 algorithms have been counted.
Ans: We know
UFP=Σ Wij Zij where j=2 because all weighting factors are average.
=30*4+62*5+24*4+8*10+2*7
=120+310+96+80+14
=620
CAF=(0.65+0.01Σ Fi)
=0.65+0.01(14*3)
=0.65+0.42
=1.07
FP=UFP*CAF
=620*1.07
=663.4≈663
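The computation above can be reproduced step by step; the average-complexity weights (4, 5, 4, 10, 7) are the standard ones used in the solution.

```python
# Average-complexity weights for the five domain characteristics
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}
COUNTS  = {"inputs": 30, "outputs": 62, "inquiries": 24, "files": 8, "interfaces": 2}

ufp = sum(WEIGHTS[k] * COUNTS[k] for k in WEIGHTS)   # unadjusted FP = 620
caf = 0.65 + 0.01 * (14 * 3)   # all 14 adjustment factors rated average (3)
fp  = ufp * caf                # 620 * 1.07 = 663.4, rounded to 663
```

Changing any single count or adjustment rating re-derives the whole answer, which is why this calculation is usually scripted or put in a spreadsheet.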
Ques124. What is the difference between "Known Risks" and "Knowable Risks"?
Ans: Known risk - This is actually the easiest risk to cope with for most people, for one reason:
it is controllable. You can do qualitative and quantitative risk analysis with known risks. You
can make millions doing this, even if you get it really wrong - just talk to Moody's about the risk
Knowable risk - These may be the toughest. You don't know these risks right now, but they are
knowable. I try to mitigate them through education. I read voraciously and try to stay up to date
on recent research and trends. As I gather data, more risks become known, so I can move them
to the known bucket and manage them accordingly. A recent example of this is Apple stock (AAPL).
Unknowable risk - This is by definition unknowable and will always be present in some form. It
is a sunk cost associated with participating in life. Small bits of the unknowable may eventually
become knowable.
Ques125. Explain the types of COCOMO Models and give phase-wise distribution of
effort.
Ans: COCOMO stands for Constructive Cost Model. It is an empirical model based on project
experience. It is a well-documented, independent model which is not tied to a specific software
vendor. It has a long history, from the initial version published in 1981 (COCOMO-81) through
various instantiations to COCOMO 2. COCOMO 2 takes into account different approaches to
software development, reuse, etc. This model gives 3 levels of estimation, namely basic,
intermediate and detailed.
1) Basic COCOMO model: It gives an order of magnitude of cost. This model uses the estimated
size of the software project and the type of software being developed. The estimation varies for
various types of projects:
• Organic-mode project: These projects involve relatively small teams working in a familiar
environment, developing a well understood application. The features of such projects are: 1) the
communication overheads are low; 2) the team members know what they can achieve; 3) such
projects are much more common in nature.
• Semi-detached mode project: The project team consists of a mix of experienced and fresh
engineers. The team has limited experience of related system development, and some of them
are unfamiliar with the output and with some aspects of the system being developed.
• Embedded-mode project: There is very strongly coupled hardware, software, regulations and
operational procedures. Validation costs are very high, e.g. system programs and the
development of OCR for English.
2) Intermediate COCOMO model: The intermediate COCOMO model estimates the software
development effort by using 15 cost driver variables besides the size variable used in basic
COCOMO.
3) Detailed COCOMO model: The detailed COCOMO model can estimate the staffing cost and
duration of each of the development phases, subsystems, and modules. It allows you to
experiment with different development strategies, to find the plan that best suits your needs
and resources.
Ques126 Illustrate of Risk Management-Activity and explain various Software
risks.
The goal of most software development and software engineering projects is to be distinctive,
often through new features, more efficiency, or exploiting advancements in software
engineering. Any software project executive will agree that the pursuit of such opportunities
cannot move forward without risk.
Ques127 Explain COCOMO model with its relevant equations. Explain various
attributes of cost drivers used in COCOMO model.
Ans: COCOMO stands for Constructive Cost Model. It is an empirical model based on project
experience. It is a well-documented, independent model which is not tied to a specific software
vendor. It has a long history, from the initial version published in 1981 (COCOMO-81) through
various instantiations to COCOMO 2. COCOMO 2 takes into account different approaches to
software development, reuse, etc.
This model gives 3 levels of estimation, namely basic, intermediate and detailed.
1) Basic COCOMO model: It gives an order of magnitude of cost. This model uses the
estimated size of the software project and the type of software being developed. The estimation
varies for various types of projects:
Organic-mode project: These projects involve relatively small teams working in a familiar
environment, developing a well understood application. The features of such projects are:
1) The communication overheads are low.
2) The team members know what they can achieve.
3) Such projects are much more common in nature.
Semi-detached mode project: The project team consists of a mix of experienced and fresh
engineers. The team has limited experience of related system development, and some of them
are unfamiliar with the output and with some aspects of the system being developed.
Embedded-mode project: There is very strongly coupled hardware, software, regulations and
operational procedures. Validation costs are very high, e.g. system programs and the
development of OCR for English.
2) Intermediate COCOMO model:-
The intermediate COCOMO model estimates the software development effort by using 15 cost
driver variables besides the size variable used in basic COCOMO.
3) Detailed COCOMO model:-
The detailed COCOMO model can estimate the staffing cost and duration of each of the
development phases, subsystems, and modules. It allows you to experiment with different
development strategies, to find the plan that best suits your needs and resources.
Ques128: What is a function point? Explain its importance. What are function-oriented
metrics?
Ans: Function points: A function point measures functionality from the user's point of view,
that is, on the basis of what the user requests and receives in return. Therefore, it deals with the
functionality being delivered, and not with the lines of code, source modules, files, etc.
Measuring size in this way has the advantage that the size measure is independent of the
technology used to deliver the function.
Importance of function point:
Function points are independent of the language, tools, or methodology used
for implementation.
They can be estimated from requirement specification or
design specification.
They are directly linked to the statement of request.
Ques129 Explain the following with the help of an example (i) Common coupling (ii)
Communicational cohesion (iii) Class diagram (iv) Structure chart.
Ans: With common coupling, module A and module B have shared data. Global data areas are
commonly found in programming languages. Making a change to the common data means
tracing back to all the modules which access that data to evaluate the effect of change. With
common coupling, it can be difficult to determine which module is responsible for having set a
variable to a particular value.
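A short sketch of common coupling: both routines below depend on a shared global data area, so a change to it must be traced through every module that touches it. The `current_user` variable and the two routines are invented for illustration.

```python
# Shared global data area: both modules below read or write it directly.
current_user = {"name": "guest", "logged_in": False}

def login(name):              # module A sets the shared data
    current_user["name"] = name
    current_user["logged_in"] = True

def audit_line():             # module B silently depends on the same data
    return f"{current_user['name']}: {current_user['logged_in']}"
```

If `audit_line` ever returns an unexpected value, nothing in its signature tells you that `login` (or any other module) is responsible for having set the global.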
(ii) Communicational cohesion: Communicational cohesion is when parts of a module are
grouped because they operate on the same data (e.g. a module which operates on the same
record of information). All of the elements of such a component operate on the same input
data or produce the same output data. In short, a module is communicationally cohesive if it
performs a series of actions that all work on the same data.
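A small sketch of the idea, using a hypothetical employee record: every function in the module is grouped there because it operates on that same record.

```python
# Communicational cohesion sketch: all functions operate on the same
# input data (one employee record), which is why they belong together.

record = {"name": "Asha", "basic": 30000, "allowance": 5000}

def gross_pay(rec):
    return rec["basic"] + rec["allowance"]

def tax(rec):
    return 0.1 * gross_pay(rec)     # assumed flat 10% rate, for illustration

def pay_slip(rec):
    # Grouped with the functions above because it uses the same record.
    return (rec["name"], gross_pay(rec), tax(rec))

print(pay_slip(record))             # prints: ('Asha', 35000, 3500.0)
```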
(iii) Class diagram: A class diagram in the Unified Modeling Language (UML) is a type of
static structure diagram that describes the structure of a system by showing the system's
classes, their
attributes, and the relationships between the classes. The UML specifies two types of scope for
members: instance and classifier. For instance members, the scope is a specific instance. For
attributes, this means that the value can vary between instances. For methods, it means that an
invocation affects the instance state, in other words, the instance attributes. For classifier
members, by contrast, the scope is the class itself. For attributes, this means that the value is
the same for all instances. For methods, it means that an invocation does not affect the
instance state. Classifier members are commonly known as "static" members in many
programming languages.
Ques130: What are the steps of a software project? Why is it important to
assign different roles to team members?
Ans: A software project is the complete procedure of software development, from requirement
gathering to testing and maintenance, carried out according to the chosen execution
methodologies in a specified period of time to achieve the intended software product.
Need for software project management:
Software is an intangible product. Software development is a relatively new stream in world
business, and there is very little experience in building software products. Most software
products are tailor-made to fit a client's requirements. Most importantly, the underlying
technology changes and advances so frequently and rapidly that experience with one product
may not apply to another. All such business and environmental constraints bring risk into
software development; hence it is essential to manage software projects efficiently.
[Figure: the triple constraint triangle of Time, Cost, and Quality]
The figure above shows the triple constraints for software projects. It is an essential part of a
software organization's job to deliver a quality product, keep the cost within the client's budget
constraint, and deliver the project on schedule. There are several factors, both internal and
external, which may impact this triple constraint triangle, and any one of the three factors can
severely impact the other two.
Therefore, software project management is essential to incorporate user requirements along
with budget and time constraints.