Software Engineering Project
Practical
File
SOFTWARE ENGINEERING
III SEM
2021-22
List of Experiments
The selection of a life cycle model has a very high impact on the testing that is carried out. It defines
the what, where and when of our planned testing, influences regression testing and largely
determines which test techniques to use.
There are various Software development models or methodologies. They are as follows:
Waterfall Model
Prototype Model
Incremental Model
Iterative Model
Spiral Model
WATERFALL MODEL:
The Waterfall Model was the first process model to be introduced. It is very simple to
understand and use. In a waterfall model, each phase must be completed fully before the
next phase can begin. At the end of each phase, a review takes place to determine whether the
project is on the right path and whether to continue or discard the project. In the
waterfall model, phases do not overlap. The waterfall model contains the following stages:
requirements analysis, system design, implementation, testing, deployment, and maintenance.
Disadvantages of the waterfall model:
Once an application is in the testing stage, it is very difficult to go back and change
something that was not well-thought out in the concept stage.
No working software is produced until late during the life cycle.
High amounts of risk and uncertainty.
Not a good model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Not suitable for projects where requirements are at a moderate to high risk of
changing.
PROTOTYPE MODEL:
The basic idea here is that instead of freezing the requirements before design or coding
can proceed, a throwaway prototype is built to understand the requirements. This
prototype is developed based on the currently known requirements. By using this
prototype, the client can get an “actual feel” of the system, since interacting with the
prototype enables the client to better understand the requirements of the desired
system. Prototyping is an attractive idea for complicated and large systems for which there
is no manual process or existing system to help determine the requirements. The
prototypes are usually not complete systems, and many of the details are not built into the
prototype. The goal is to provide a system with overall functionality.
The prototype model should be used when the desired system needs to have a lot of
interaction with the end users.
Typically, online systems and web interfaces, which have a very high amount of interaction
with end users, are best suited for the prototype model.
Prototyping ensures that the end users constantly work with the system and provide
feedback, which is incorporated into the prototype, resulting in a usable system.
Prototypes are excellent for designing good human-computer interface systems.
INCREMENTAL MODEL:
In the incremental model, the whole requirement is divided into various builds. Multiple
development cycles take place, and the cycles are divided into smaller, more easily
managed modules. Each module passes through the requirements, design,
implementation and testing phases. A working version of the software is produced during
the first module, so you have working software early on in the software life cycle.
Each subsequent release of the module adds function to the previous release. The
process continues until the complete system is achieved.
DIAGRAM OF INCREMENTAL MODEL:
Generates working software quickly and early during the software life cycle.
More flexible – less costly to change scope and requirements.
Easier to test and debug during a smaller iteration.
The customer can respond to each build.
Needs a clear and complete definition of the whole system before it can be broken down
and built incrementally.
ITERATIVE MODEL:
An iterative life cycle model does not attempt to start with a full specification of
requirements. Instead, development begins by specifying and implementing just part of the
software, which can then be reviewed in order to identify further requirements. This
process is then repeated, producing a new version of the software for each cycle of the
model.
In the iterative model, we create only a high-level design of the application before we
actually begin to build the product, rather than defining the complete design solution for the
entire product up front. Later on, we design and build a skeleton version of it, and then
evolve the design based on what has been built.
In the iterative model we build and improve the product step by step. Hence we can track
defects at early stages. This avoids the downward flow of defects.
Reliable user feedback: when we present only sketches and blueprints of the product to users
for their feedback, we are effectively asking them to imagine how the product will work,
whereas each iteration gives them working software to react to.
Less time is spent on documenting and more time is given to designing.
Costly system architecture or design issues may arise because not all requirements are
gathered up front for the entire lifecycle.
Major requirements must be defined; however, some details can evolve with time.
SPIRAL MODEL:
The spiral model is similar to the incremental model, with more emphasis placed on
risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering
and Evaluation. A software project repeatedly passes through these phases in iterations
(called spirals in this model). In the baseline spiral, starting in the planning phase,
requirements are gathered and risk is assessed. Each subsequent spiral builds on the
baseline spiral. Requirements are gathered during the planning phase. In the risk
analysis phase, a process is undertaken to identify risks and alternative solutions. A
prototype is produced at the end of the risk analysis phase. Software is produced in the
engineering phase, along with testing at the end of the phase. The evaluation phase
allows the customer to evaluate the output of the project to date before the project
continues to the next spiral.
INTRODUCTION:
A data flow diagram (DFD) is a graphical representation of the "flow" of data
through an information system, modelling its process aspects. Often they are a
preliminary step used to create an overview of the system which can later be
elaborated. DFDs can also be used for the visualization of data processing
(structured design).
At its simplest, a data flow diagram looks at how data flows through a system. It
concerns things like where the data will come from and go to as well as where it
will be stored. But you won't find information about the processing timing (e.g.
whether the processes happen in sequence or in parallel).
REPRESENTATION OF COMPONENTS:
Process
Data Flow
Data Store
External Entity
Process
Transforms incoming data flow(s) to outgoing data flow(s).
Data Flow
Movement of data in the system.
Data Store
A repository for data that are not moving. It may be as simple as a buffer or a queue,
or as sophisticated as a relational database.
External Entity
A source or destination of data that lies outside the boundary of the system.
A context diagram is a top level (also known as Level 0) data flow diagram. It only
contains one process node (process 0) that generalizes the function of the entire system in
relationship to external entities.
Context Diagram
LEVEL 0 DFD:
LEVEL 1 DFD:
LEVEL 2 DFD:
ADVANTAGES:
DFDs are easy to understand, check and change.
DFDs help tremendously in depicting information about how an organization
operates.
They give a very clear and simple look at the organization of the interfaces between
an application and the people or other applications that use it.
DISADVANTAGES:
Modification to a data layout in DFDs may cause the entire layout to be changed.
This is because the changed data will bring different data to the units that access it.
Therefore, the possible effects of the modification must be evaluated first.
The number of units in a DFD of a large application is high. Therefore, maintenance
is harder, more costly and error prone. This is because the ability to access the data
is passed explicitly from one component to the other. This is why changes are
impractical to make in DFDs, especially in large systems.
Balancing a DFD
The data that flow into or out of a bubble must match the data flows at the next level of the
DFD. This is known as balancing a DFD. The concept of balancing a DFD is
illustrated in fig. 10.3. In the level 1 DFD, data items d1 and d3 flow out of
bubble 0.1 and data item d2 flows into bubble 0.1. In the next level, bubble 0.1 is
decomposed. The decomposition is balanced, as d1 and d3 flow out of the level 2 diagram
and d2 flows in.
An example showing balanced decomposition
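The balancing rule can also be checked mechanically. Below is a minimal Python sketch, assuming the bubble and data item names from the example above; the function check_balanced and the variable names are illustrative assumptions, not part of any standard tool.

# Minimal sketch of a DFD balancing check.
# The data items d1, d2, d3 and bubble 0.1 follow the example above;
# the function and variable names are illustrative assumptions.

def check_balanced(parent_inputs, parent_outputs, child_inputs, child_outputs):
    """A decomposition is balanced when the parent bubble and its
    child diagram exchange exactly the same data with the outside."""
    return (set(parent_inputs) == set(child_inputs) and
            set(parent_outputs) == set(child_outputs))

# Bubble 0.1 in the level 1 DFD: d2 flows in, d1 and d3 flow out.
parent_in, parent_out = ["d2"], ["d1", "d3"]

# Its level 2 decomposition must show the same net inflows and outflows.
child_in, child_out = ["d2"], ["d1", "d3"]

print(check_balanced(parent_in, parent_out, child_in, child_out))  # True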
EXPERIMENT: 3
AIM: To Study Risk Management.
Risk management comprises two broad sets of activities: risk assessment and risk control.
RISK ASSESSMENT
Risk assessment is an activity that must be undertaken during project planning. This involves
identifying the risks, analysing them, and prioritizing them on the basis of the analysis. The
goal of risk assessment is to prioritize the risks so that risk management can focus attention
and resources on the more risky items.
The risk assessment consists of three steps:
Risk identification
Risk analysis
Risk prioritization
RISK IDENTIFICATION is the first step in risk assessment; it identifies all the different
risks for a particular project. These risks are project dependent, and their identification is
clearly necessary before any risk management can be done for the project.
Based on surveys of experienced project managers, Boehm has produced a list of the top 10
risk items likely to compromise the success of a software project. These risks and their
management techniques are described as follows:
The top-ranked risk item is Personnel Shortfalls. This involves just having fewer people than
necessary or not having people with the specific skills that a project may require. Some of the
ways to manage this risk are to get the top talent possible and to match the needs of the
project with the skills of the available personnel.
The second item, Unrealistic Schedules And Budgets, happens very frequently due to
business and other reasons. It is very common that high level management imposes a
schedule for a software project that is not based on the characteristics of the project and is
unrealistic.
A project runs the risk of Developing The Wrong Software if the requirements analysis is not
done properly and if development begins too early.
Similarly, an Improper User Interface may often be developed. This requires extensive rework of
the user interface later, or the software benefits are not obtained because users are reluctant
to use it.
Some Requirements Changes are to be expected in any project, but sometimes frequent
requirement changes are requested, which is often a reflection of the fact that the client has not
yet understood or settled on its own requirements.
Gold Plating refers to adding features to the software that are only marginally useful. This
adds unnecessary risk to the project because gold plating consumes resources and time with
little return.
Performance Shortfalls are critical in real-time systems, and poor performance can mean the
failure of the project.
The project might be delayed if the External Component Is Not Available on time. The
project would also suffer if the quality of the external component is poor or if the component
turns out to be incompatible with the other project components or with the environment in
which the software is developed or is to operate.
If a project relies on Technology That Is Not Well Developed, it may fail. This is a risk due to
straining the computer science capabilities.
RISK ANALYSIS includes studying the probability and the outcome of possible decisions,
understanding the task dependencies to decide critical activities and the probability and cost
of their not being completed on time, analyzing risks to the various quality factors like reliability and
usability, and evaluating performance early through simulation, etc., if there are strong
performance constraints on the system.
One approach for RISK PRIORITIZATION is through the concept of risk exposure (RE), which is
sometimes called risk impact. RE is defined by the relationship
RE = Prob(UO) * Loss(UO),
where Prob(UO) is the probability of the risk materializing (i.e., of an unsatisfactory outcome, UO)
and Loss(UO) is the total loss incurred due to the unsatisfactory outcome.
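As an illustration, the risk exposure formula above can be applied to a few risk items and the results used to rank them. The Python sketch below assumes made-up probability and loss figures purely for demonstration.

# Illustrative risk prioritization using risk exposure RE = Prob(UO) * Loss(UO).
# The risk names, probabilities and loss figures below are hypothetical examples.

risks = [
    ("Personnel shortfalls",              0.30, 200_000),
    ("Unrealistic schedules and budgets", 0.50, 150_000),
    ("Developing the wrong software",     0.10, 400_000),
]

# Compute risk exposure for each item and sort in decreasing order of RE.
prioritized = sorted(
    ((name, prob * loss) for name, prob, loss in risks),
    key=lambda item: item[1],
    reverse=True,
)

for name, exposure in prioritized:
    print(f"{name}: RE = {exposure:,.0f}")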
RISK CONTROL
Risk control includes three tasks which are
Risk management planning
Risk resolution
Risk monitoring
Risk control starts with Risk Management Planning. Plans are developed for each identified
risk that needs to be controlled. This activity, like other planning activities, is done during
the initiation phase. A basic risk management plan has five components. These are:
Why the risk is important and why it should be managed.
What should be delivered regarding risk management, and when.
Who is responsible for performing the different risk management activities.
How the risk will be abated, i.e., the approach to be taken.
How many resources are needed.
The main focus of risk management planning is to enumerate the risks to be controlled and
specify how to deal with a risk.
The actual elimination or reduction is done in the Risk Resolution step. Risk resolution is
essentially implementation of the risk management plan.
Risk Monitoring is the activity of monitoring the status of various risks and their control
activities. Like project monitoring, it is performed through the entire duration of the project.
EXPERIMENT: 4
AIM: To Study Function Point Analysis.
FUNCTION POINT ANALYSIS:
Function points are a unit of measure for software, much as an hour is a unit for measuring
time, a mile for measuring distance, or a degree Celsius for measuring temperature. Function
points are an ordinal measure, much like other measures such as kilometres, degrees
Fahrenheit, or hours.
The functional user requirements of the software are identified and each one is categorized
into one of five types: outputs, inquiries, inputs, internal files, and external interfaces. Once
the function is identified and categorized into a type, it is then assessed for complexity and
assigned a number of function points.
Since function points measure systems from a functional perspective, they are independent
of technology. Regardless of the language, development method, or hardware platform used, the
number of function points for a system will remain constant. The only variable is the amount
of effort needed to deliver a given set of function points; therefore, Function Point Analysis
can be used to determine whether a tool, an environment, or a language is more productive
than others within an organization or among organizations. This is a critical point
and one of the greatest values of Function Point Analysis.
Function Point Analysis can provide a mechanism to track and monitor scope creep.
Function Point Counts at the end of requirements, analysis, design, code, testing and
implementation can be compared. The function point count at the end of requirements and/or
designs can be compared to function points actually delivered. If the project has grown, there
has been scope creep. The amount of growth is an indication of how well requirements were
gathered by and/or communicated to the project team. If the amount of growth of projects
declines over time it is a natural assumption that communication with the user has improved.
Number of user inputs: Each user input that provides distinct application-oriented data to the
software is counted.
Number of user outputs: Each user output that provides application-oriented information
to the user is counted. In this context, "output" refers to reports, screens, error messages,
etc. Individual data items within a report are not counted separately.
Number of user inquiries: An inquiry is defined as an on-line input that results in the
generation of some immediate software response in the form of an on-line output. Each
distinct inquiry is counted.
Number of files: Each logical internal file (a logical grouping of data that may be part of a
database or a separate file) is counted.
Number of external interfaces: All machine-readable interfaces that are used to transmit
information to another system are counted.
Once this data has been collected, a complexity rating is associated with each count
according to the following table:
Each count is multiplied by its corresponding complexity weight and the results are
summed to give the unadjusted function point count (UFC). The adjusted function point
count (FP) is calculated by multiplying the UFC by a technical complexity factor (TCF),
also referred to as the Value Adjustment Factor (VAF). The components of the TCF are
listed in the table below; a worked example of the full calculation is given later in this
experiment.
F1 Reliable back-up and recovery
F2 Data communications
F3 Distributed functions
F4 Performance
F5 Heavily used configuration
F6 Online data entry
F7 Operational ease
F8 Online update
F9 Complex interface
F10 Complex processing
F11 Reusability
F12 Installation ease
F13 Multiple sites
F14 Facilitate change
Alternatively, the following questionnaire could be utilized:
Is performance critical?
Does the on-line data entry require the input transaction to be built over multiple screens or
operations?
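The sketch below works through the calculation described above. The simple/average/complex weights and the FP = UFC * (0.65 + 0.01 * sum of Fi) adjustment formula are the commonly published Albrecht/IFPUG values and are an assumption here, since the original weight table is not reproduced; all counts and the F1-F14 ratings are made-up example figures.

# Illustrative function point calculation.
# Weights assume average complexity using the commonly published values;
# all counts and the F1-F14 ratings (0-5 each) are hypothetical examples.

counts = {               # (count, weight) pairs, average complexity assumed
    "user inputs":         (20, 4),
    "user outputs":        (15, 5),
    "user inquiries":      (10, 4),
    "files":               (8, 10),
    "external interfaces": (4, 7),
}

# Unadjusted function point count (UFC): sum of count * weight.
ufc = sum(count * weight for count, weight in counts.values())

# Fourteen factors F1-F14, each rated 0 (no influence) to 5 (essential).
f_ratings = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 2, 3, 1, 4]

# Technical complexity factor (value adjustment factor).
tcf = 0.65 + 0.01 * sum(f_ratings)

fp = ufc * tcf
print(f"UFC = {ufc}, TCF = {tcf:.2f}, FP = {fp:.1f}")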
Function Points can be used to size software applications accurately. Sizing is an important
component in determining productivity (outputs/inputs).
They can be counted by different people, at different times, to obtain the same measure
within a reasonable margin of error.
Function Points are easily understood by the non-technical user. This helps communicate
sizing information to a user or customer.
Function Points can be used to determine whether a tool, a language, or an environment is
more productive when compared with others.
EXPERIMENT: 5
AIM: To Study Software Testing, Blackbox and Whitebox Testing and Different Types of
Testing.
DEFINITION:
Software testing is performed to verify that the completed software package functions
according to the expectations defined by the requirements/specifications. The overall
objective is not to find every software bug that exists, but to uncover situations that could
negatively impact the customer, usability and/or maintainability. Software testing is the
process of evaluating a software item to detect differences between given input and expected
output. Testing assesses the quality of the product. Software testing is a process that should
be carried out during the development process. In other words, software testing is a verification
and validation process.
BLACKBOX TESTING:
Black box testing is a testing technique that ignores the internal mechanism of the system
and focuses on the output generated against any input and execution of the system. It is also
called functional testing.
The technique of testing without having any knowledge of the interior workings of the
application is Black Box testing. The tester is oblivious to the system architecture and does
not have access to the source code. Typically, when performing a black box test, a tester will
interact with the system's user interface by providing inputs and examining outputs without
knowing how and where the inputs are worked upon.
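As a small illustration of the black box approach, the sketch below exercises a function purely through its inputs and expected outputs, with no reference to how it is implemented. The function is_leap_year and the chosen cases are hypothetical examples, not part of any system described above.

# Black box style test: only inputs and expected outputs from the specification
# are used; the internal logic of the unit under test is never inspected.
# is_leap_year is a hypothetical unit under test, included so the example
# is self-contained.

def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Test cases chosen from the specification (equivalence classes and boundaries).
spec_cases = {
    2000: True,    # divisible by 400
    1900: False,   # divisible by 100 but not 400
    2024: True,    # divisible by 4 only
    2023: False,   # not divisible by 4
}

for year, expected in spec_cases.items():
    assert is_leap_year(year) == expected, f"failed for {year}"
print("All black box cases passed")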
Advantages:
Well suited and efficient for large code segments.
Code Access not required.
Clearly separates user's perspective from the developer's perspective through visibly defined
roles.
Large numbers of moderately skilled testers can test the application with no knowledge of
implementation, programming language or operating systems.
Disadvantages:
Limited Coverage since only a selected number of test scenarios are actually performed.
Inefficient testing, due to the fact that the tester only has limited knowledge about an
application.
Blind Coverage, since the tester cannot target specific code segments or error prone areas.
The test cases are difficult to design.
WHITEBOX TESTING:
White box testing is a testing technique that takes into account the internal mechanism of a
system. It is also called structural testing and glass box testing.
White box testing is the detailed investigation of the internal logic and structure of the code.
White box testing is also called glass box testing or open box testing. In order to perform white
box testing on an application, the tester needs to possess knowledge of the internal
working of the code.
The tester needs to have a look inside the source code and find out which unit/chunk of the
code is behaving inappropriately.
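A small white box sketch follows: here the test cases are chosen by looking at the code itself so that every branch of the if/elif/else structure is executed at least once. The classify function and its test data are illustrative assumptions.

# White box (branch testing) sketch: test data is derived from the source code
# itself so that every branch is executed at least once.
# classify is a hypothetical unit under test.

def classify(marks: int) -> str:
    if marks >= 60:
        return "first division"
    elif marks >= 45:
        return "second division"
    elif marks >= 33:
        return "third division"
    else:
        return "fail"

# One test case per branch of the if/elif/else chain.
branch_cases = [(75, "first division"),
                (50, "second division"),
                (40, "third division"),
                (10, "fail")]

for marks, expected in branch_cases:
    assert classify(marks) == expected
print("All branches exercised")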
Advantages:
As the tester has knowledge of the source code, it becomes very easy to find out which type
of data can help in testing the application effectively.
It helps in optimizing the code.
Extra lines of code, which can bring in hidden defects, can be identified and removed.
Due to the tester's knowledge about the code, maximum coverage is attained during test
scenario writing.
Disadvantages:
Due to the fact that a skilled tester is needed to perform white box testing, the costs are
increased.
Sometimes it is impossible to look into every nook and corner to find out hidden errors that
may create problems as many paths will go untested.
It is difficult to maintain white box testing, as the use of specialized tools like code
analyzers and debugging tools is required.
Black box testing is often used for validation and white box testing is often used for
verification.
Accessibility Testing: Type of testing which determines the usability of a product to the
people having disabilities (deaf, blind, mentally disabled etc). The evaluation process is
conducted by persons having disabilities.
Age Testing: Type of testing which evaluates a system's ability to perform in the future.
The evaluation process is conducted by testing teams.
Alpha Testing: Type of testing a software product or system conducted at the developer's
site. Usually it is performed by the end user.
Backward Compatibility Testing: Testing method which verifies the behavior of the
developed software with older versions of the test environment. It is performed by testing
teams.
Beta Testing: Final testing before releasing application for commercial purpose. It is
typically done by end-users or others.
Bottom Up Integration Testing: In bottom-up integration testing, the modules at the lowest
levels are developed first, and the other modules leading towards the 'main' program are
integrated and tested one at a time. It is usually performed by the testing teams.
Branch Testing: Testing technique in which all branches in the program source code are
tested at least once. This is done by the developer.
Compatibility Testing: Testing technique that validates how well software performs in a
particular hardware/software/operating system/network environment. It is performed by
the testing teams.
Component Testing: Testing technique similar to unit testing but with a higher level of
integration - testing is done in the context of the application instead of just directly testing
a specific method. Can be performed by testing or development teams.
Compliance Testing: Type of testing which checks whether the system was developed in
accordance with standards, procedures and guidelines. It is usually performed by external
companies which offer "Certified OGC Compliant" brand.
Destructive Testing: Type of testing in which the tests are carried out to the specimen's
failure, in order to understand a specimen's structural performance or material behavior
under different loads. It is usually performed by QA teams.
Dynamic Testing: Term used in software engineering to describe the testing of the
dynamic behavior of code. It is typically performed by testing teams.
Error-Handling Testing: Software testing type which determines the ability of the system to
properly process erroneous transactions. It is usually performed by the testing teams.
Gray Box Testing: A combination of Black Box and White Box testing methodologies:
testing a piece of software against its specification but using some knowledge of its
internal workings. It can be performed by either development or testing teams.
Integration Testing: The phase in software testing in which individual software modules
are combined and tested as a group. It is usually conducted by testing teams.
Load Testing: Testing technique that puts demand on a system or device and measures its
response. It is usually conducted by the performance engineers.
Regression Testing: Type of software testing that seeks to uncover software errors after
changes to the program (e.g. bug fixes or new functionality) have been made, by retesting
the program. It is performed by the testing teams.
Recovery Testing: Testing technique which evaluates how well a system recovers from
crashes, hardware failures, or other catastrophic problems. It is performed by the testing
teams.
Unit Testing: Software verification and validation method in which a programmer tests if
individual units of source code are fit for use. It is usually conducted by the development
team.
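To make this concrete, a minimal unit test sketch using Python's built-in unittest module is given below; the add function and the test cases are hypothetical examples, not part of any system described above.

# Minimal unit test sketch using Python's built-in unittest module.
# The function under test (add) and the test cases are hypothetical examples.
import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()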
EXPERIMENT: 6
AIM: To Use Information Gathering Tools (Questionnaires, Interviews, On-site Surveys).
Information Gathering: A Problem-Solving Approach
Information gathering is an art and a science. The approach and manner in which
information is gathered require persons with sensitivity, common sense, and knowledge
of what to gather, when to gather it, and which channels to use to secure the information.
KINDS OF INFORMATION REQUIRED
Before one determines where to go for information or what tools to use, the first
requirement is to figure out what information to gather. Much of the information we need
to analyze relates to the organization in general, the user staff, and the workflow.
Another kind of information for analysis is knowledge about the people who run the
present system, their job functions and information requirements, the relationships of
their jobs to the existing system, and the interpersonal network that holds the user group
together. The main focus is on the roles of the people, authority relationships, and
interpersonal relations. Information of this kind highlights the organization chart and
establishes a basis for determining the importance of the existing system to the
organization. Thus, the major focus is to find out the expectations of the people before
going in for the design of the candidate system.
Information about the Work Flow
The workflow focuses on what happens to the data through various points in a system.
This can be shown by a data flow diagram or a system flow chart.
A data flow diagram represents the information generated at each processing point in the
system and the direction it takes from source to destination.
A system flowchart describes the physical system. The information available from such
charts explains the procedures used for performing tasks and work schedules.
Information Gathering Techniques
No two projects are ever the same. This means that the analyst must decide on the
information-gathering tool and how it must be used. Although there are no standard
rules for specifying their use, an important rule is that information must be acquired
accurately, methodically, under the right conditions, and with minimum interruption to
user personnel. There are various information-gathering tools. Each tool has a special
function depending on the information needed.
Review of Literature, Procedures and Forms
Review of existing records, procedures, and forms helps the analyst gain insight into the
current system's capabilities, operations, and activities.
Advantages
It helps the analyst to gain some knowledge about the organization or its operations before
imposing upon others.
It helps in documenting current operations within a short span of time, as the procedure
manuals and forms describe the format and functions of the present system.
It can provide a clear understanding about the transactions that are handled in the
organization, identifying input for processing, and evaluating performance.
It can help an analyst to understand the system in terms of the operations that must be
supported.
It describes the problem, its affected parts, and the proposed solution.
Disadvantages:
The primary drawback of this search is time.
Sometimes it will be difficult to get certain reports.
Publications may be expensive and the information may be outdated due to a time lag in
publication.
On-site Observation
A fact-finding method used by the systems analyst is on-site or direct observation. It is
the process of recognizing and noting people, objects and occurrences to obtain
information. The major objective of on-site observation is to get as close as possible to
the “real” system
being studied. For this reason it is important that the analyst is knowledgeable about the
general makeup and activities of the system. The analyst's role is that of an information
seeker. As an observer, the analyst follows a set of rules.
While making observations he/she is more likely to listen than talk and has to listen with
interest when information is passed on.
The analyst has to observe the physical layout of the current system, the location and
movement of the people and the workflow.
The analyst has to be alert to the behavior of the user staff and the people to whom they
come into contact. A change in behavior provides a clue to an experienced analyst. The
clue can be used to identify the problem.
Direct or indirect: A direct observation takes place when the analyst actually observes
the subject or the system at work. In an indirect observation, the analyst uses mechanical
devices such as cameras and videotapes to capture information.
Reliability means that the information gathered is trustworthy enough to be used for
making decisions about the system being studied. Validity means that the questions to be
asked are worded in such a way as to elicit (obtain) the intended information. So the
reliability and validity of the data collected depends on the design of the interview or
questionnaire and the manner in which each instrument is administered.
The interview is a face-to-face interpersonal role situation in which a person called the
interviewer asks a person being interviewed questions designed to gather information
about a problem area. The interview is the oldest and most often used device for
gathering information in system work. It can be used for two main purposes
As an exploratory device to identify relations or verify information
To capture information, as it exists.
In an interview, since the analyst and the person interviewed meet face to face, there is
an opportunity for greater flexibility in eliciting information. The interviewer is also in a
natural position to observe the subjects and the situation to which they are responding. In
contrast the information obtained through a questionnaire is limited to the written
responses of the subjects to predefined questions.
The art of interviewing:
Interviewing is an art. The analyst learns the art by experience. The interviewer's art
consists of creating a permissive situation in which the answers offered are
reliable. The respondent's opinions are offered with no fear of being criticized by others.
Primary requirements for a successful interview are to create a friendly atmosphere and
to put the respondent at ease. Then the interview proceeds with asking questions
properly, obtaining reliable responses and recording them accurately and completely.
Arranging the interview:
The interview should be arranged so that the physical location, time of the interview and
order of interviewing assure privacy and minimal interruption. A common area that is
non- threatening to the respondent is chosen. Appointments should be made well in
advance and a fixed time period adhered to as closely as possible. Interview schedules
generally begin at the top of the organization structure and work down so as not to
offend anyone.
Stage setting: This is a relaxed, informal phase where the analyst opens the interview by
focusing on
The purpose of the interview
Why the subject was selected
The confidential nature of the interview.
After a favorable introduction, the analyst asks the first question and the respondent
answers it and goes right through the interview. The job of the analyst should be that of a
reporter rather than a debater. Discouraging distracting conversation controls the
direction of the interview.
Establishing rapport:
Some of the pitfalls to be avoided are
Do not deliberately mislead the user staff about the purpose of the study. A careful
briefing is required. Too many technical details will confuse the user, so only
information that is necessary should be given to the participants.
Assure interviewees confidentiality that no information they offer will be released to
unauthorized personnel. The promise of anonymity is very important.
Avoid personal involvement in the affairs of the user's department or identification with
one section at the cost of another.
Avoid showing off your knowledge or sharing information received from other sources.
Avoid acting like an expert consultant and confidant. This can reduce the objectivity of
the approach and discourage people from freely giving information
Respect the time schedules and preoccupations of your subjects. Do not make an
extended social event out of the meeting.
Do not promise anything you cannot or should not deliver, such as advice or feedback.
Dress and behave appropriately for the setting and the circumstances of the user contact.
Do not interrupt the interviewee. Let him/her finish talking.
Asking the questions: Except in unstructured interviews, it is important that each
question is asked exactly as it is worded. Rewording may provoke a different answer. The
questions must also be asked in the same order as they appear on the interview schedule.
Reversing the sequence destroys the comparability of the interviews. Finally, each
question must be asked unless the respondent, in answering a previous question, has
already answered it.
Obtaining and recording the response: Interviews must be prepared well in order to
collect further information when necessary. The information received during the
interview must be recorded for later analysis.
Data recording and the notebook: Many system studies fail because of poor data
recording. Care must be taken to record the data, their source and the time of collection.
If there is no record of a conversation, the analyst may not remember enough
details, may attribute information to the wrong source, or may distort the data. The form of
the notebook varies according to the type of study, the amount of data, the number of
analysts, and their individual preferences. The “notebook” may be a card file or a set of
carefully coded file folders. It should be bound and the pages numbered.
Questionnaire
This method is used by the analyst to gather information about various issues of the system
from a large number of persons. This tool is a collection of questions to which individuals
respond.
The advantages of questionnaire are
It is economical and requires less skill to administer than the interview.
Unlike the interview, which generally questions one subject at a time, a questionnaire
can be administered to large numbers of individuals simultaneously.
The standardized wording and order of the questions and the standardized instructions
for reporting responses ensure uniformity of questions.
The respondents feel greater confidence in the anonymity of a questionnaire than in that
of an interview. In an interview, the analyst usually knows the user staff by name,
job function or other identification. With a questionnaire, respondents give opinions
without fear that the answer will be connected to their names.
The questionnaire places less pressure on subjects for immediate responses. Respondents
have time to think the questions over and do calculations to provide more accurate data.
Interviews and Questionnaires vary widely in form and structure. Interviews range from
highly unstructured to the highly structured alternative in which the questions and
responses are fixed.
The unstructured Interview:
The unstructured interview is a non-directive information-gathering technique. It allows
respondents to answer questions freely in their own words. The responses in this case are
spontaneous and self-revealing. The role of the analyst as an interviewer is to encourage
the respondent to talk freely and serve as a catalyst to the expression of feelings and
opinions. This method works well in a permissive atmosphere in which subjects have no
feeling of disapproval.
The structured Interview:
In this alternative the questions are presented with exactly the same wordings and in the
same order to all subjects. Standardized questions improve the reliability of the
responses by ensuring that all subjects are responding to the same questions.
Structured interviews and questionnaires may differ in the amount of structuring of the
questions.
Questions may be either
Open-ended questions
Close-ended questions
An open-ended question requires no response direction or specific response. In a
questionnaire, space is provided for writing the response. Such questions are more
often used in interviews than in questionnaires because scoring them takes time.
Close-ended questions are those, in which the responses are presented as a set of
alternatives.
There are five major types of closed questions.
Fill-in-the-blank questions: These request specific information. The responses can
be statistically analyzed.
Dichotomous (Yes/No) questions: These offer two answer choices. This type has
advantages similar to those of the multiple-choice type. Here the question sequence and
content are also important.
Ranking scales questions: Ask the respondent to rank a list of items in order of
importance or preference
Multiple-choice questions: Offer respondents specific answer choices. This offers the
advantage of faster tabulation and less analyst bias due to the order in which the
questions are given. Respondents have a favorable bias toward the first alternative
item. Alternating the order in which answer choices are listed may reduce bias but at the
expense of additional time to respond to the questionnaire.
Rating scales – These types of questions are an extension of the multiple-choice
design. The respondent is offered a range of responses along a single dimension
Open-ended questions are ideal in exploratory situations where new ideas and
relationships are sought.
Disadvantages of open-ended questions:
The main drawback is the difficulty of interpreting the subjective answers and the tedious
task of tabulating responses to open-ended questions.
Other drawbacks are potential analyst bias in interpreting the data and time-consuming
tabulation.
Closed questions are quick to analyze.
Disadvantages of close-ended questions:
They are costly to prepare.
They do, however, have the additional advantage of ensuring that answers are given in a
frame of reference consistent with the line of inquiry.
EXPERIMENT: 7
Feasibility Study
A feasibility study can be considered a preliminary investigation that helps the
management decide whether the proposed system is feasible for development or not.
It identifies the possibility of improving an existing system or developing a new system, and
produces refined estimates for further development of the system.
It is used to obtain an outline of the problem and decide whether a feasible or appropriate
solution exists or not.
The main objective of a feasibility study is to acquire a sense of the problem's scope rather
than to solve the problem.
The output of a feasibility study is a formal system proposal that acts as a decision document
and includes the complete nature and scope of the proposed system.
Steps Involved in Feasibility Analysis
The following steps are to be followed while performing feasibility analysis −
Form a project team and appoint a project leader.
Develop system flowcharts.
Identify the deficiencies of current system and set goals.
Enumerate the alternative solution or potential candidate system to meet goals.
Determine the feasibility of each alternative such as technical feasibility, operational
feasibility, etc.
Weigh the performance and cost effectiveness of each candidate system.
Rank the other alternatives and select the best candidate system.
Prepare a system proposal of final project directive to management for approval.
Types of Feasibilities
Economic Feasibility
It evaluates the effectiveness of the candidate system by using the cost/benefit analysis
method (a small illustrative calculation is given after this list).
It demonstrates the net benefit of the candidate system in terms of the benefits and costs to
the organization.
The main aim of economic feasibility analysis is to estimate the economic
requirements of the candidate system before investment funds are committed to the proposal.
It prefers the alternative that will maximize the net worth of the organization through the
earliest and highest return of funds, along with the lowest level of risk involved in developing
the candidate system.
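To make the cost/benefit idea concrete, the sketch below compares two candidate systems using a simple payback-period style calculation. All names and figures are invented for illustration; a real study would also weigh intangible benefits and discounting.

# Illustrative cost/benefit comparison of candidate systems.
# All figures are hypothetical; a real study would also consider
# discounting (net present value), intangible benefits and risk.

candidates = {
    "Candidate A": {"development_cost": 500_000, "annual_benefit": 200_000, "annual_cost": 50_000},
    "Candidate B": {"development_cost": 300_000, "annual_benefit": 120_000, "annual_cost": 40_000},
}

for name, c in candidates.items():
    net_annual = c["annual_benefit"] - c["annual_cost"]
    payback_years = c["development_cost"] / net_annual
    print(f"{name}: net annual benefit = {net_annual:,}, "
          f"payback period = {payback_years:.1f} years")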
Technical Feasibility
It investigates the technical feasibility of each implementation alternative.
It analyzes and determines whether the solution can be supported by existing technology
or not.
The analyst determines whether the current technical resources can be upgraded or
supplemented to fulfill the new requirements.
It also assesses to what extent the candidate system can support the proposed technical
enhancements.
Operational Feasibility
It determines whether the system will operate effectively once it is developed and
implemented.
It checks whether management supports the proposed system and whether its operation is
feasible in the current organizational environment.
It analyzes whether the users will be affected and whether they will accept the modified or
new business methods, which in turn affects the possible system benefits.
It also ensures that the computer resources and network architecture of the candidate system
are workable.
Behavioral Feasibility
It evaluates and estimates the user attitude or behavior towards the development of the new
system.
It helps in determining whether the system will require special effort to educate and retrain
staff, or transfers and changes in employees' job status, for the new ways of conducting business.
Schedule Feasibility
It ensures that the project can be completed within the given time constraint or schedule.
It also verifies and validates whether the deadlines of the project are reasonable or not.
EXPERIMENT: 8
AIM: To Create a Data Dictionary for Some Applications.
Data Dictionary
A data dictionary lists all data items appearing in the DFD model of a system. The data
items listed include all data flows and the contents of all data stores appearing on the
DFDs in the DFD model of a system. A data dictionary lists the purpose of all data items
and the definition of all composite data items in terms of their component data items. For
example, a data dictionary entry may represent that the data grossPay consists of the
components regularPay and overtimePay.
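As a concrete illustration, the grossPay entry mentioned above can be held in a simple lookup structure. The Python sketch below is a minimal assumption-laden example: the purpose strings and the helper function components are illustrative, not part of any standard notation or tool.

# Minimal sketch of a data dictionary held as a Python mapping.
# grossPay and its components come from the example above; the
# purpose strings and helper function are hypothetical illustrations.

data_dictionary = {
    "grossPay":    {"composition": ["regularPay", "overtimePay"],
                    "purpose": "total pay before deductions"},
    "regularPay":  {"composition": [],   # elementary data item
                    "purpose": "pay earned for regular hours"},
    "overtimePay": {"composition": [],
                    "purpose": "pay earned for overtime hours"},
}

def components(item: str) -> list[str]:
    """Return the component data items of a composite data item."""
    return data_dictionary[item]["composition"]

print(components("grossPay"))  # ['regularPay', 'overtimePay']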
Fig 10.2 (a) Level 0 (b) Level 1 DFD for Tic-Tac-Toe game
It may be recalled that the DFD model of a system typically consists of several DFDs:
level 0, level 1, etc. However, a single data dictionary should capture all the data
appearing in all the DFDs constituting the model. Figure 10.2 represents the level 0 and
level 1 DFDs for the tic-tac-toe game. The data dictionary for the model is given below.
The use case model for the Tic-tac-toe problem is shown in fig. 13.1. This software has
only one use case “play move”. Note that the use case “get-user-move” is not used here.
The name “get-user-move” would be inappropriate because use cases should be
named from the user’s perspective.
Fig. 13.1: Use case model for tic-tac-toe game
Text Description
Each ellipse on the use case diagram should be accompanied by a text description. The
text description should define the details of the interaction between the user and the
computer and other aspects of the use case. It should include all the behavior associated
with the use case in terms of the mainline sequence, different variations to the normal
behavior, the system responses associated with the use case, the exceptional conditions
that may occur in the behavior, etc. The behavior description is often written in a
conversational style describing the interactions between the actor and the system. The text
description may be informal, but some structuring is recommended. The following are
some of the information which may be included in a use case text description in addition
to the mainline sequence, and the alternative scenarios.
Contact persons: This section lists the personnel of the client organization with whom the
use case was discussed, date and time of the meeting, etc.
Actors: In addition to identifying the actors, some information about actors using this use
case which may help the implementation of the use case may be recorded.
Pre-condition: The preconditions would describe the state of the system before the use
case execution starts.
Post-condition: This captures the state of the system after the use case has successfully
completed.
Non-functional requirements: This could contain the important constraints for the design
and implementation, such as platform and environment conditions, qualitative statements,
response time requirements, etc.
Exceptions, error situations: This contains only the domain-related errors such as lack of
user’s access rights, invalid entry in the input fields, etc. Obviously, errors that are not
domain related, such as software errors, need not be discussed here.
Sample dialogs: These serve as examples illustrating the use case.
Specific user interface requirements: These contain specific requirements for the user
interface of the use case. For example, it may contain forms to be used, screen shots,
interaction style, etc.
Document references: This part contains references to specific domain-related documents
which may be useful to understand the system operation
Example 2:
A supermarket needs to develop the following software to encourage regular customers.
For this, the customer needs to supply his/her residence address, telephone number, and
the driving license number. Each customer who registers for this scheme is assigned a
unique customer number (CN) by the computer. A customer can present his CN to the
checkout staff when he makes any purchase. In this case, the value of his purchase is
credited against his CN. At the end of
each year, the supermarket intends to award surprise gifts to the 10 customers who make the
highest total purchase over the year. Also, it intends to award a 22 carat gold coin to
every customer whose purchase exceeds Rs. 10,000. The entries against each CN are
reset on the last day of every year, after the prize winners’ lists are generated.
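The select-winners use case described above (the top 10 purchasers plus every customer whose yearly purchase exceeds Rs. 10,000) can be sketched in code as shown below; the sample purchase totals and the function name select_winners are hypothetical.

# Sketch of the select-winners logic for the Supermarket Prize Scheme:
# surprise gifts go to the 10 customers with the highest yearly purchase,
# and a gold coin goes to every customer whose purchase exceeds Rs. 10,000.
# The sample data and function name are hypothetical.

def select_winners(yearly_purchase: dict[str, float]):
    top_ten = sorted(yearly_purchase, key=yearly_purchase.get, reverse=True)[:10]
    gold_coin = [cn for cn, total in yearly_purchase.items() if total > 10_000]
    return top_ten, gold_coin

# yearly_purchase maps a customer number (CN) to the year's purchase total.
yearly_purchase = {"CN001": 15_000, "CN002": 8_500, "CN003": 22_300, "CN004": 9_900}

gifts, coins = select_winners(yearly_purchase)
print("Surprise gifts:", gifts)
print("Gold coins:", coins)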
The use case model for the Supermarket Prize Scheme is shown in fig. 13.2. As discussed
earlier, the use cases correspond to the high-level functional requirements. From the
problem description, we can identify three use cases: “register-customer”, “register-sales”,
and “select-winners”. As a sample, the text description for the use case
“register-customer” is shown.
Includes
The includes relationship in the older versions of UML (prior to UML 1.1) was known as
the uses relationship. The includes relationship involves one use case including the
behavior of another use case in its sequence of events and actions. The includes
relationship occurs when there is a chunk of behavior that is similar across a number of use
cases. Factoring out such behavior helps avoid repeating the specification and
implementation across different use cases. Thus, the includes relationship explores the
issue of reuse by factoring out the commonality across use cases. It can also be gainfully
employed to decompose a large and complex use case into more manageable parts. As
shown in fig. 13.4, the includes relationship is represented using the predefined stereotype
<<include>>. In the includes relationship, a base use case compulsorily and automatically
includes the behavior of the common use cases. As shown in the example in fig. 13.5, issue-book
and renew-book both include the check-reservation use case. The base use case may include
several use cases. In such cases, it may interleave their associated common use cases
together. The common use case becomes a separate use case and an independent text
description should be provided for it.
PERT Chart
A PERT (Program Evaluation and Review Technique) chart is a tool that depicts a
project as a network diagram. It is capable of graphically representing the main
events of a project in both parallel and consecutive ways. Events which occur one
after another show the dependency of the later event on the previous one.
Events are shown as numbered nodes. They are connected by labeled arrows
depicting the sequence of tasks in the project.
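Because the PERT network is a directed graph of events and dependencies, the longest path through it (the critical path) determines the minimum project duration. The sketch below computes this for a small, entirely hypothetical task network; the task names, durations and dependencies are invented for illustration.

# Sketch of a critical path calculation over a small PERT-style task network.
# Tasks, durations (in days) and dependencies are hypothetical examples.
from functools import cache

durations = {"spec": 3, "design": 5, "code": 7, "test": 4, "docs": 2}
depends_on = {
    "spec":   [],
    "design": ["spec"],
    "code":   ["design"],
    "docs":   ["design"],
    "test":   ["code", "docs"],
}

@cache
def earliest_finish(task: str) -> int:
    """Earliest finish time of a task: its duration plus the latest
    earliest-finish among its predecessors (computed recursively)."""
    return durations[task] + max(
        (earliest_finish(dep) for dep in depends_on[task]), default=0)

project_duration = max(earliest_finish(t) for t in durations)
print("Minimum project duration:", project_duration, "days")  # 3 + 5 + 7 + 4 = 19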
Resource Histogram
This is a graphical tool that contains bars representing the number of resources
(usually skilled staff) required over time for a project event (or phase). The resource
histogram is an effective tool for staff planning and coordination.
Software Reliability