SAD Notes
QUESTION AND ANSWER SET
OF
SYSTEM ANALYSIS AND DESIGN
1. How can a data dictionary be used during software development? (May-06,
5 Marks)
Data Dictionary
a) Data dictionaries are integral components of structured analysis, since data
flow diagrams by themselves do not fully describe the subject of the
investigation.
b) A data dictionary is a catalog, a repository of the elements in the system.
c) In a data dictionary one will find a list of all the elements composing the data
flowing through a system. The major elements are -
Data flows
Data stores
Processes
d) The dictionary is developed during data flow analysis and assists the analysts
involved in determining system requirements, and its contents are used during
system design as well.
e) The data dictionary contains the following descriptions of the data (see the
sketch at the end of this answer):-
1 The name of the data element.
2 The physical source/ Destination name.
3 The type of the data element.
4 The size of data element.
5 Usage, such as input, output or update.
6 Reference(s) of the DFD process nos. where used.
7 Any special information useful for system specification, such as
validation rules.
f) This is appropriately called a system development data dictionary, since it is
created during system development, facilitates the development
functions, is used by the developers and is designed with the developers'
information needs in mind.
g) For every data element mentioned in the DFD there should be one
and only one unique entry in the data dictionary.
h) The type of a data element may be numeric, textual, image, audio,
etc.
i) Usage should specify whether the referenced DFD process uses the element
as input data (read only), creates it as output data (e.g. insert), or updates it.
j) A data element can be mentioned with reference to multiple DFD processes,
but if the usages differ, then there should be one entry
for each usage.
k) The data dictionary serves as important basic information during the
development stages.
l) Importance of data dictionary:
To manage the details in large systems.
To communicate a common meaning for all system elements.
To document the features of the system.
To facilitate analysis of the details in order to evaluate characteristics
and determine where system changes should be made.
To locate errors and omissions in the system.
m) Points to be considered while constructing a data dictionary
Each unique data flow in the DFD must have one data dictionary entry.
There is also a data dictionary entry for each data store and process.
Definitions must be readily accessible by name.
There should be no redundancy or unnecessary definitions in the data
definitions. It must also be simple to make updates.
The procedures for writing definitions should be straightforward but
specific; there should be only one way of defining words.
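For illustration, below is a minimal sketch (in Python) of one entry of such a system development data dictionary, following the seven description fields of point e). The class name, field names and example values are assumptions invented for this sketch, not part of any standard:

from dataclasses import dataclass, field
from typing import List

@dataclass
class DataDictionaryEntry:
    name: str                  # 1. name of the data element
    source_destination: str    # 2. physical source/destination name
    data_type: str             # 3. type: numeric, textual, image, audio, ...
    size: int                  # 4. size of the data element
    usage: str                 # 5. input (read only), output (insert) or update
    dfd_processes: List[str] = field(default_factory=list)  # 6. DFD process references
    validation_rules: str = ""  # 7. special information, e.g. validation rules

# One entry per unique data element and usage combination (points g and j):
entry = DataDictionaryEntry(
    name="customer_id",
    source_destination="Customer master file",
    data_type="numeric",
    size=8,
    usage="input",
    dfd_processes=["P1.2", "P3.1"],
    validation_rules="must be a positive 8-digit number",
)
print(entry)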
2. Under what circumstances or for what purposes would one use an
interview rather than other data collection methods? Explain.
(May-06, 12 Marks)
OR
Discuss how the interview technique can be used to determine the needs of a new
system. (May-03)
a) An interview is a fact-finding technique whereby the system analyst collects
information from individuals through face-to-face interaction.
b) There are two types of interviews:
a. Unstructured interviews: an interview that is conducted with
only a general goal or subject in mind and with few, if any, specific
questions.
b. Structured interviews: an interview in which the interviewer
has a specific set of questions to ask the interviewee.
c) Unstructured interviews tend to involve asking open-ended questions, while
structured interviews tend to involve asking more close-ended questions.
d) ADVANTAGES:
e) Interviews give the analyst an opportunity to motivate the
interviewee to respond freely and openly to questions.
f) Interviews allow the system analyst to probe (conduct a penetrating
investigation) for more feedback from the interviewee.
g) Interviews permit the system analyst to adapt or reword questions
for each individual.
h) A good system analyst may be able to obtain information by
observing the interviewee's body movements and facial
expressions as well as listening to verbal replies to questions.
i) This technique is advised in the following situations:-
j) Where the application system under consideration is highly specific
to the user organization.
k) When the application system may not be very specialized, but the
practices followed by the organization are specific.
l) When the organization does not have any documentation in which
part of the information requirements is recorded, or any such
documentation is irrelevant, unavailable, or cannot be shared
with the developers due to privacy issues.
m) When the organization has not decided on the details of the practices
the new application system would demand or would be used to
implement as new practices, but wants to decide on them while
responding to the information requirements determination.
n) A structured interview meeting is useful in the following situations:-
a) When the development team and the user team
members know the broad system environment with high
familiarity. This reduces the amount of communication
significantly, to just a few words or a couple of sentences.
b) When responding to the questions involves collecting data from
different sources or persons, and/or analyzing it, and/or
making the analysis ready for discussion at the meeting,
in order to save on the duration of the meeting.
c) When costly communication media, such as international
phone or conference calls, are to be used.
d) When some or all of the members of the user team represent
top management, external consultants or specialists.
o) A semi-structured interview meeting is useful in the following situations:-
p) Usually, in the initial stage of information requirements
determination, the users and the developers need to
exchange a significant amount of basic information.
Structured interview meetings may not be effective here,
since there is not enough information to frame questions of
importance.
q) Also, very early on, or with a new organization, personal
meetings are important because they help the
development team not only in knowing the decisions but
also in understanding the decision-making process of the
organization, the role of every user team member in the
decisions, and their organizational and personal interests.
These observations during the meetings can help the
software development team significantly in future
communications.
r) When the new application system is not a standard one and/or
the users follow radically different or highly specialized
business practices, it is very difficult to predict which
questions are important from the point of view of determining
the information requirements.
s) When the members of the development team expect to
participate in formulating some new information
requirements or freezing them, as advisors say, the
interview meeting has to be personal and, therefore,
semi-structured.
t) If the users are generally available, located very close by, and
the number of top management users or external consultants
is nil or negligible, then personal meetings are
conducted.
u) When there are no warranted reasons for conducting only
structured interviews.
3. Describe the steps in the SDLC model with an example. (Nov-03, May-06,
Sies-04, Dec-04)
The steps in the SDLC model are as follows:
a) System analysis:-
It is a set of activities in which the system analyst gathers
the information requirements of the users and analyses
them systematically in the form of the functionality of the
application system, the input data requirements and their
sources, and the output data and their presentation
requirements.
The system analyst also gathers data about the performance
expectations of the users, such as expected response
time, the total turnaround time, etc. The system analyst
finally prepares a document called the system
requirement specification (SRS), which documents all the
agreements reached between the users and the system
analyst.
b) System design:-
It involves preparing the blueprint of the new software
system. Taking the SRS as a base to start with, it
prepares various diagrammatic representations of the
logical and physical artifacts to be developed during the
software development stages to follow. The major
artifacts include data models, process models and
presentation models. Finally, the system design is
documented.
c) Coding or Construction:-
This involves programming and testing individual
programs on the basis of the design document. The
developers responsible for programming also create
test data sets for inputs and verify that the programs
generate the expected output for these input data sets.
The individual programs are also reviewed to ensure that
they meet programming standards as expected by the
users. This is the only phase where the conceptual system
is first translated into computer-executable program
sources.
d) Testing:-
It is to demonstrate to the development team members
that the software system works exactly to meet the user
expectations of information requirements as well as the
performance expectations. It involves planning the tests,
creating the test data, executing test runs, matching the
test results with the expected results, analyzing the
differences, fixing the bugs and re-testing the bug fixes
repeatedly until a satisfactory number of mismatches
are removed.
e) Implementation:-
It involves installing the software system on the user's
computer system, conducting user training on the new
software system, data preparation, parallel running and
going live as core activities. This is the stage where the
software system is first transferred to the user's
premises and the users get a chance to work on the new
software system for the first time. It also involves the
most important step of user acceptance testing,
which marks the technical and commercial
milestone of the software development project.
f) Maintenance:-
It involves keeping the software system always up
to date, to ensure that it is in line with current
information requirements, considering even the latest
changes in the same. It helps keep the software system
up to date, thereby ensuring the users a high return on
their investment at the operational level of the business.
The developer analyses the changes in the light of the
latest design, identifies the new changes needed in the
system design, and verifies quickly that the system works
as expected.
E.g.: the library management system done as the assignment.
5. What are CASE tools? Explain some CASE tools used for
prototyping. (May-06, 15 Marks; Nov-03, M-05, Dec-04).
Ans. Computer-assisted software engineering (CASE)
Computer-assisted software engineering (CASE) is the use of
automated software tools that support the drawing and
analysis of system models and associated specifications.
Some tools also provide prototyping and code generation
facilities.
At the center of any true CASE tool's architecture is a
developers' database called a CASE repository, where
developers can store system models, detailed descriptions
and specifications, and other products of systems
development.
A CASE tool enables people working on a software project to
store data about the project, its plan and schedules, to
track its progress and make changes easily, to analyze and store
data about users, and to store the design of a system through
automation.
A CASE environment makes system development
economical and practical. The automated tools and
environment provide a mechanism for system personnel to
capture, document and model an information system.
A CASE environment is a number of CASE tools which use an
integrated approach to support the interaction between the
environment's components and the users of the environment.
CASE Components
CASE tools generally include five components:
diagrammatic tools, an information repository, interface
generators, code generators, and management tools.
Diagrammatic Tools
Typically, they include the capabilities to produce data flow
diagrams, data structure diagrams, and program structure
charts.
These high-level tools are essential for support of structured
analysis methodology, and CASE tools incorporate structured
analysis extensively.
They support the capability to draw diagrams and charts and to
store the details internally. When changes must be made,
the nature of the change is described to the system, which can
then redraw the entire diagram automatically.
The ability to change and redraw eliminates an activity that
analysts find both tedious and undesirable.
Centralized Information Repository
1 A centralized information repository or data dictionary aids
the capture, analysis, processing and distribution of all
system information.
2 The dictionary contains the details of system components such
as data items, data flows and processes, and also includes
information describing the volumes and frequency of each
activity.
3 While dictionaries are designed so that the information is
easily accessible, they also include built-in controls and
safeguards to preserve the accuracy and consistency of the
system details.
4 The use of authorization levels, process validation and
procedures for testing the consistency of descriptions
ensures that access to definitions, and the revisions made to
them in the information repository, occur properly according
to the prescribed procedures.
Interface Generators
System interfaces are the means by which users interact
with an application, both to enter information and data
and to receive information.
Interface generators provide the capability to prepare
mockups and prototypes of user interfaces.
Typically they support the rapid creation of demonstration
system menus, presentation screens and report layouts.
Interface generators are an important element of
application prototyping, although they are useful with all
development methods.
Code Generators:
1 Code generators automate the preparation of
computer software.
2 They incorporate methods that allow the conversion of
system specifications into executable source code.
3 The best generators will produce approximately 75
percent of the source code for an application. The rest
must be written by hand. This hand coding, as the
process is termed, is still necessary.
4 Because CASE tools are general-purpose tools, not
limited to any specific area such as manufacturing control,
investment portfolio analysis, or accounts management,
the challenge of fully automating software generation is
substantial.
5 The greatest benefits accrue when the code generators are
integrated with the central information repository; such
a combination achieves the objective of creating reusable
computer code.
6 When specifications change, code can be regenerated by
feeding details from the data dictionary through the code
generators. The dictionary contents can be reused to
prepare the executable code (a toy sketch follows).
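As a toy illustration of point 6, here is a sketch in Python of regenerating source code by feeding entries through a generator. The function, its field layout and the example entries are assumptions invented for this sketch, not the behavior of any real CASE product:

def generate_record_class(class_name, fields):
    """Emit a Python class definition from (name, type note) dictionary entries."""
    lines = [f"class {class_name}:"]
    params = ", ".join(name for name, _ in fields)
    lines.append(f"    def __init__(self, {params}):")
    for name, type_note in fields:
        lines.append(f"        self.{name} = {name}  # {type_note}")
    return "\n".join(lines)

# When specifications change, rerun the generator over the repository entries:
print(generate_record_class("Customer", [("customer_id", "numeric, size 8"),
                                         ("name", "textual, size 40")]))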
Management Tools: CASE systems also assist project managers in maintaining
efficiency and effectiveness throughout the application development process.
1 The CASE components assist development managers in
the scheduling of analysis and design activities and the
allocation of resources to different project activities.
2 Some CASE systems support the monitoring of project
development schedules against actual progress, as
well as the assignment of specific tasks to individuals.
3 Some CASE management tools allow project managers
to specify custom elements. For example, they can
select the graphic symbols to describe processes, people,
departments, etc.
4. What is cost-benefit analysis? Describe any two methods of
performing the same. (May-06, May-04)
Cost -benefit analysis:
Cost-benefit analysis is a procedure that gives a picture of the various costs, benefits,
and rules associated with each alternative system.
Cost-benefit analysis is a part of the economic feasibility study of a system.
The basic tasks involved in cost-benefit analysis are as follows:
1 To compute the total costs involved.
2 To compute the total benefits from the project.
3 To compare the two, to decide whether the project provides more net gains or
no net gains.
Procedure for cost and benefit determination:
The determination of costs and benefits entails the following steps-
1 Identify the costs and benefits pertaining to a given project.
2 Categorize the various costs and benefits for analysis.
3 Select a method for evaluation.
4 Interpret the results of the analysis.
5 Take action.
Costs and benefits are classified as follows:
i. Tangible or intangible cost and benefits:
Tangibility refers to the ease with which costs or benefits can be
measured. The following are the examples of tangible costs and benefits:
Purchase of hardware or software.(tangible cost)
Personnel training.(tangible cost)
Reduced expenses.(tangible benefit)
Increased sales. (tangible benefit)
Costs and benefits that are known to exist but whose financial value
cannot be accurately measured are referred to as intangible costs and
benefits. The following are the examples of intangible costs and
benefits:
1 Employee morale problems caused by a new system.(intangible
cost)
2 Lowered company image. (intangible cost)
3 Satisfied customers. (intangible benefit)
4 Improved corporate image. (intangible benefit)
ii. Direct or Indirect costs and benefits:
Direct costs are those with which a money figure can be directly
associated in a project. They are directly applied to a particular operation.
Direct benefits can also be specifically attributable to a given project.
The following are the examples:
1 The purchase of a box of diskettes for $35 is a direct cost.
2 A new system that can handle 25% more transactions per day is a
direct benefit.
Indirect costs are the results of operations that are not directly
associated with a given system or activity. They are often referred to as
overhead.
Indirect benefits are realized as a by-product of another activity or
system.
iii. Fixed or variable costs and benefits:
Fixed costs are sunk costs. They are constant and do not change.
Once encountered, they will not recur.
For example: straight-line depreciation of hardware, exempt employee
salary.
Fixed benefits are also constant and do not change.
For example: A decrease in the number of personnel by 20% resulting
from the use of a new computer.
Variable costs are incurred on a regular basis. They are usually proportional
to work volume and continue as long as the system is in operation.
For example: The costs of computer forms vary in proportion to amount of
processing or the length of the reports required.
Variable benefits are realized on a regular basis.
For example: Consider a safe deposit tracking system that saves 20 minutes in
preparing customer records compared with the manual system.
The following are the methods of performing cost and benefit analysis:
1 Net benefit analysis.
2 Present value analysis.
3 Payback analysis.
4 Break-even analysis.
5 Cash flow analysis.
6 Return on investment analysis.
Net benefit analysis
Net benefit analysis simply involves subtracting total costs from total benefits. It is
easy to calculate, easy to interpret and easy to present. The main drawback is that it
does not account for the time value of money and does not discount future cash
flow.
The time value of money is usually expressed in the form of interest on the funds
invested to realize the future value. Assuming compounded interest, the formula is:
F = P(1 + i)^n
where
F = future value of an investment,
P = present value of the investment,
i = interest rate per compounding period,
n = number of years.
Present value analysis
In developing long-term projects, it is often difficult to compare today's costs with
the full value of tomorrow's benefits. The time value of money allows for interest
rates, inflation and other factors that alter the value of the investment. Present value
analysis controls for these problems by calculating the costs and benefits of the
system in terms of today's value of the investment and then comparing across
alternatives.
Present value = Future value / (1 + i)^n
Net present value is equal to discounted benefits minus discounted costs.
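As a numeric sketch of the two formulas above, here are a few lines of Python; the interest rate and cash-flow figures are invented for illustration:

def future_value(p, i, n):
    """F = P(1 + i)^n, compounded annually."""
    return p * (1 + i) ** n

def present_value(f, i, n):
    """Present value = future value / (1 + i)^n."""
    return f / (1 + i) ** n

def net_present_value(benefits, costs, i):
    """Discounted benefits minus discounted costs.
    benefits/costs are lists of yearly cash flows, year 1 onwards."""
    pv = lambda flows: sum(present_value(f, i, n) for n, f in enumerate(flows, start=1))
    return pv(benefits) - pv(costs)

print(future_value(1000, 0.10, 3))    # ~1331.0
print(present_value(1331, 0.10, 3))   # ~1000.0
print(net_present_value([600, 600, 600], [1000, 100, 100], 0.10))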
5. Explain the concept of Normalization with examples. Why would you
denormalize? (May-04)
Normalization:
Normalization is a process of simplifying the relationship between data elements in a
record. Through normalization a collection of data in a record structure is replaced by
successive record structures that are simpler and more predictable and therefore
more manageable. Normalization is carried out for four reasons:
1 To structure the data so that any important relationships between entities can
be represented.
2 To permit simple retrieval of data in response to query and report requests.
3 To simplify the maintenance of the data through updates, insertions, and
deletions.
4 To reduce the need to restructure or reorganize data when new application
requirements arise.
A great deal of research has been conducted to develop methods for carrying out
normalization. Systems analysts should be familiar with the steps in normalization,
since this process can improve the quality of design for an application.
1. Decompose all data groups into two- dimensional records.
2. Eliminate any relationships in which data elements do not fully depend on
the primary key of the record.
3. Eliminate any relationships that contain transitive dependencies.
There are three normal forms. Research in database design has also identified other
forms, but they are beyond what analysts use in application design.
First Normal Form:
First normal form is achieved when all repeating groups are removed so that a
record is of fixed length. A repeating group, the recurrence of a data item or group
of data items within a record, is actually another relation. Hence, it is removed from
the record and treated as an additional record structure, or relation.
Consider the information contained in a customer order: order number, customer
name, customer address, order date, as well as the item number, item description,
price and quantity of the item ordered. Designing a record structure to handle an
order containing such data is not difficult.
The analyst must consider how to handle the order. The order can be treated
as four separate records with the order and customer information included in each
record. However, this increases the complexity of changing the details of any part of
the order and uses additional space.
Another alternative is to design the record to be of variable length. So when four
items are ordered, the item details are repeated four times. This portion is termed a
repeating group.
First normal form is achieved when a record is designed to be of fixed length.
This is accomplished by removing the repeating group and creating a separate file or
relation containing the repeating group. The original record and the new records are
interrelated by a common data item.
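As a small sketch of this step, here is the order example in Python; the record layout and values are invented for illustration:

# A variable-length record with a repeating item group:
order_with_repeating_group = {
    "order_no": 101, "customer": "Acme", "order_date": "2006-05-01",
    "items": [  # repeating group: occurs once per item ordered
        {"item_no": "A1", "description": "Widget", "price": 5.0, "qty": 4},
        {"item_no": "B7", "description": "Bolt",   "price": 0.5, "qty": 100},
    ],
}

# First normal form: one ORDERS relation and one ORDER_ITEMS relation,
# interrelated by the common data item order_no.
orders = [{"order_no": 101, "customer": "Acme", "order_date": "2006-05-01"}]
order_items = [
    {"order_no": rec["order_no"], **item}
    for rec in [order_with_repeating_group]
    for item in rec["items"]
]
print(order_items)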
Second Normal Form:
Second normal form is achieved when a record is in first normal form and each item
in the record is fully dependent on the primary record key for identification. In other
words, the analyst seeks functional dependency.
For example: State motor vehicle departments go to great lengths to ensure that
only one vehicle in the state is assigned a specific license tag number. The license
number uniquely identifies a specific vehicle; a vehicle's serial number is
associated with one and only one state license number. Thus if you know the serial
number of a vehicle, you can determine the state license number. This is functional
dependency.
In contrast, if a motor vehicle record contains the names of all individuals who
drive the vehicle, functional dependency is lost. If we know the license number, we
do not know who the driver is - there can be many. And if we know the name of the
driver, we do not know the specific license number or vehicle serial number, since a
driver can be associated with more than one vehicle in the file.
Thus to achieve second normal form, every data item in a record that is not
dependent on the primary key of the record should be removed and used to form a
separate relation.
Third Normal Form:
Third normal form is achieved when transitive dependencies are removed from the
record design. The following is an example of a transitive dependency:
1 A, B, C are three data items in a record.
2 If C is functionally dependent on B and
3 B is functionally dependent on A,
4 then C is functionally dependent on A.
5 Therefore, a transitive dependency exists.
In data management, transitive dependency is a concern because data can
inadvertently be lost when the relationship is hidden. In the above case, if A is
deleted, then B and C are deleted also, whether or not this is intended. This problem
is eliminated by designing the record for third normal form. Conversion to third
normal form removes the transitive dependency by splitting the relation into two
separate relations.
Denormalization
Performance needs dictate very quick retrieval capability for data stored in relational
databases. To accomplish this, the decision is sometimes made to denormalize the
physical implementation. Denormalization is the process of putting one fact in
numerous places. This speeds data retrieval at the expense of data modification.
Of course, a normalized set of relational tables is the optimal environment and
should be implemented wherever possible. Yet, in the real world, denormalization
is sometimes necessary. Denormalization is not necessarily a bad decision if
implemented wisely. You should always consider these issues before denormalizing:
Can the system achieve acceptable performance without denormalizing?
Will the performance of the system after denormalizing still be unacceptable?
Will the system be less reliable due to denormalization?
If the answer to any of these questions is "yes," then you should avoid
denormalization, because any benefit that is accrued will not exceed the cost. If,
after considering these issues, you decide to denormalize, be sure to adhere to the
general guidelines that follow.
The reasons for denormalization
Only one valid reason exists for denormalizing a relational design: to enhance
performance. However, there are several indicators which help to identify
systems and tables which are potential denormalization candidates. These are:
Many critical queries and reports exist which rely upon data from more than
one table. Oftentimes these requests need to be processed in an on-line
environment.
Repeating groups exist which need to be processed in a group instead of
individually.
Many calculations need to be applied to one or many columns before queries
can be successfully answered.
Tables need to be accessed in different ways by different users during the
same timeframe.
Many large primary keys exist which are clumsy to query and consume a
large amount of DASD when carried as foreign key columns in related tables.
Certain columns are queried a large percentage of the time. Consider 60% or
greater to be a cautionary number flagging denormalization as an option.
Be aware that each new RDBMS release usually brings enhanced performance and
improved access options that may reduce the need for denormalization. However,
most of the popular RDBMS products on occasion will require denormalized data
structures. There are many different types of denormalized tables which can resolve
the performance problems caused when accessing fully normalized data. The
following topics will detail the different types and give advice on when to implement
each of the denormalization types.
A table that is not sufficiently normalized can suffer from logical
inconsistencies of various types, and from anomalies involving data operations. In
such a table:
The same fact can be expressed on multiple records; therefore updates to the
table may result in logical inconsistencies. For example, each record in an
unnormalized "DVD Rentals" table might contain a DVD ID, Member ID, and
Member Address; thus a change of address for a particular member will
potentially need to be applied to multiple records. If the update is not carried
through successfully - if, that is, the member's address is updated on some
records but not others - then the table is left in an inconsistent state.
Specifically, the table provides conflicting answers to the question of what this
particular member's address is. This phenomenon is known as an update
anomaly.
There are circumstances in which certain facts cannot be recorded at all. In
the above example, if it is the case that Member Address is held only in the
"DVD Rentals" table, then we cannot record the address of a member who
has not yet rented any DVDs. This phenomenon is known as an insertion
anomaly.
There are circumstances in which the deletion of data representing certain
facts necessitates the deletion of data representing completely different facts.
For example, suppose a table has the attributes Student ID, Course ID, and
Lecturer ID (a given student is enrolled in a given course, which is taught by
a given lecturer). If the number of students enrolled in the course temporarily
drops to zero, the last of the records referencing that course must be deleted -
meaning, as a side-effect, that the table no longer tells us which lecturer has
been assigned to teach the course. This phenomenon is known as a deletion
anomaly.
Ideally, a relational database should be designed in such a way as to exclude the
possibility of update, insertion, and deletion anomalies. The normal forms of
relational database theory provide guidelines for deciding whether a particular design
will be vulnerable to such anomalies. It is possible to correct an unnormalized design
so as to make it adhere to the demands of the normal forms: this is normalization.
Normalization typically involves decomposing an unnormalized table into two or more
tables which, were they to be combined (joined), would convey exactly the same
information as the original table.
Background to normalization: definitions
Functional dependency:Attribute B has a functional dependency on
attribute A if, for each value of attribute A, there is exactly one value of
attribute B. For example, Member Address has a functional dependency on
Member ID, because a particular Member Address value corresponds to every
Member ID value. An attribute may be functionally dependent either on a
single attribute or on a combination of attributes. It is not possible to
determine the extent to which a design is normalized without understanding
what functional dependencies apply to the attributes within its tables;
understanding this, in turn, requires knowledge of the problem domain.
Trivial functional dependency: A trivial functional dependency is a
functional dependency of an attribute on a superset of itself. {Member ID,
Member Address} → {Member Address} is trivial, as is {Member Address} →
{Member Address}.
Full functional dependency: An attribute is fully functionally dependent on
a set of attributes X if it is a) functionally dependent on X, and b) not
functionally dependent on any proper subset of X. {Member Address} has a
functional dependency on {DVD ID, Member ID}, but not a full functional
dependency, for it is also dependent on {Member ID}.
Multivalued dependency: A multivalued dependency is a constraint
according to which the presence of certain rows in a table implies the
presence of certain other rows: see the Multivalued Dependency article for a
rigorous definition.
Superkey: A superkey is an attribute or set of attributes that uniquely
identifies rows within a table; in other words, two distinct rows are always
guaranteed to have distinct superkeys. {DVD ID, Member ID, Member
Address} would be a superkey for the "DVD Rentals" table; {DVD ID, Member
ID} would also be a superkey.
Candidate key: A candidate key is a minimal superkey, that is, a superkey
for which we can say that no proper subset of it is also a superkey. {DVD ID,
Member ID} would be a candidate key for the "DVD Rentals" table.
Non-prime attribute: A non-prime attribute is an attribute that does not
occur in any candidate key. Member Address would be a non-prime attribute
in the "DVD Rentals" table.
Primary key: Most DBMSs require a table to be defined as having a single
unique key, rather than a number of possible unique keys. A primary key is a
candidate key which the database designer has designated for this purpose.
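A small sketch in Python may make the functional dependency definition above concrete: B is functionally dependent on A if each value of A maps to exactly one value of B. The table contents are invented for illustration:

def functionally_dependent(rows, a, b):
    """True if attribute b is functionally dependent on attribute a."""
    seen = {}
    for row in rows:
        key, value = row[a], row[b]
        if seen.setdefault(key, value) != value:
            return False  # same A value paired with two different B values
    return True

dvd_rentals = [
    {"dvd_id": 1, "member_id": 7, "member_address": "12 High St"},
    {"dvd_id": 2, "member_id": 7, "member_address": "12 High St"},
    {"dvd_id": 3, "member_id": 9, "member_address": "4 Oak Ave"},
]
print(functionally_dependent(dvd_rentals, "member_id", "member_address"))  # True
print(functionally_dependent(dvd_rentals, "member_id", "dvd_id"))          # False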
History
Edgar F. Codd first proposed the process of normalization and what came to be
known as the 1st normal form:
There is, in fact, a very simple elimination procedure which we shall call
normalization. Through decomposition non-simple domains are replaced by
"domains whose elements are atomic (non-decomposable) values."
- Edgar F. Codd, A Relational Model of Data for Large Shared Data Banks
In his paper, Edgar F. Codd used the term "non-simple" domains to describe a
heterogeneous data structure, but later researchers would refer to such a structure
as an abstract data type.
Normal forms
The normal forms (abbrev. NF) of relational database theory provide criteria for
determining a table's degree of vulnerability to logical inconsistencies and anomalies.
The higher the normal form applicable to a table, the less vulnerable it is to such
inconsistencies and anomalies. Each table has a "highest normal form" (HNF): by
definition, a table always meets the requirements of its HNF and of all normal forms
lower than its HNF; also by definition, a table fails to meet the requirements of any
normal form higher than its HNF.
The normal forms are applicable to individual tables; to say that an entire database
is in normal form n is to say that all of its tables are in normal form n.
Newcomers to database design sometimes suppose that normalization proceeds in
an iterative fashion, i.e. a 1NF design is first normalized to 2NF, then to 3NF, and so
on. This is not an accurate description of how normalization typically works. A
sensibly designed table is likely to be in 3NF on the first attempt; furthermore, if it is
3NF, it is overwhelmingly likely to have an HNF of 5NF. Achieving the "higher"
normal forms (above 3NF) does not usually require an extra expenditure of effort on
the part of the designer, because 3NF tables usually need no modification to meet
the requirements of these higher normal forms.
Edgar F. Codd originally defined the first three normal forms (1NF, 2NF, and 3NF).
These normal forms have been summarized as requiring that all non-key attributes
be dependent on "the key, the whole key and nothing but the key". The fourth and
fifth normal forms (4NF and 5NF) deal specifically with the representation of
many-to-many and one-to-many relationships among attributes. Sixth normal form (6NF)
incorporates considerations relevant to temporal databases.
First normal form
The criteria for first normal form (1NF) are:
A table must be guaranteed not to have any duplicate records;
therefore it must have at least one candidate key.
There must be no repeating groups, i.e. no attributes which occur a
different number of times on different records. For example, suppose
that an employee can have multiple skills: a possible representation of
employees' skills is {Employee ID, Skill1, Skill2, Skill3 ...}, where
{Employee ID} is the unique identifier for a record. This
representation would not be in 1NF.
Second normal form
The criteria for second normal form (2NF) are:
The table must be in 1NF.
None of the non-prime attributes of the table are functionally
dependent on a part (proper subset) of a candidate key; in other
words, all functional dependencies of non-prime attributes on
candidate keys are full functional dependencies. For example, consider
a "Department Members" table whose attributes are Department ID,
Employee ID, and Employee Date of Birth; and suppose that an
employee works in one or more departments. The combination of
Department ID and Employee ID uniquely identifies records within the
table. Given that Employee Date of Birth depends on only one of those
attributes - namely, Employee ID - the table is not in 2NF.
Note that if none of a 1NF table's candidate keys are composite - i.e.
every candidate key consists of just one attribute - then we can say
immediately that the table is in 2NF.
Third normal form
The criteria for third normal form (3NF) are:
The table must be in 2NF.
There are no non-trivial functional dependencies between non-prime
attributes. A violation of 3NF would mean that at least one non-prime
attribute is only indirectly dependent (transitively dependent) on a candidate
key, by virtue of being functionally dependent on another non-prime
attribute. For example, consider a "Departments" table whose attributes are
Department ID, Department Name, Manager ID, and Manager Hire Date; and
suppose that each manager can manage one or more departments.
{Department ID} is a candidate key. Although Manager Hire Date is
functionally dependent on {Department ID}, it is also functionally dependent
on the non-prime attribute Manager ID. This means the table is not in 3NF.
Boyce-Codd normal form
The criteria for Boyce-Codd normal form (BCNF) are:
The table must be in 3NF.
Every non-trivial functional dependency must be a dependency on a
superkey.
Fourth normal form
The criteria for fourth normal form (4NF) are:
The table must be in BCNF.
There must be no non-trivial multivalued dependencies on something
other than a superkey. A BCNF table is said to be in 4NF if and only if
all of its multivalued dependencies are functional dependencies.
Fifth normal form
The criteria for fifth normal form (5NF and also PJ/NF) are:
The table must be in 4NF.
There must be no non-trivial join dependencies that do not follow from
the key constraints. A 4NF table is said to be in the 5NF if and only if
every join dependency in it is implied by the candidate keys.
Domain/key normal form
Domain/key normal form (or DKNF) requires that a table not be subject to
any constraints other than domain constraints and key constraints.
Sixth normal form
This normal form was, as of 2005, only recently proposed: the sixth normal form
(6NF) was only defined when extending the relational model to take into account the
temporal dimension. Unfortunately, most current SQL technologies as of 2005 do not
take into account this work, and most temporal extensions to SQL are not relational.
See work by Date, Darwen and Lorentzos for a relational temporal extension, or
see TSQL2 for a different approach.
Denormalization
Databases intended for Online Transaction Processing (OLTP) are typically more
normalized than databases intended for On Line Analytical Processing (OLAP). OLTP
Applications are characterized by a high volume of small transactions such as
updating a sales record at a super market checkout counter. The expectation is that
each transaction will leave the database in a consistent state. By contrast, databases
intended for OLAP operations are primarily "read only" databases. OLAP applications
tend to extract historical data that has accumulated over a long period of time. For
such databases, redundant or "denormalized" data may facilitate Business
Intelligence applications. Specifically, dimensional tables in a star schema often
contain denormalized data. The denormalized or redundant data must be carefully
controlled during ETL processing, and users should not be permitted to see the data
until it is in a consistent state. The normalized alternative to the star schema is the
snowflake schema.
Denormalization is also used to improve performance on smaller computers as in
computerized cash-registers. Since these use the data for look-up only (e.g. price
lookups), no changes are to be made to the data and a swift response is crucial.
Non-first normal form (NF²)
In recognition that denormalization can be deliberate and useful, the non-first
normal form is a definition of database designs which do not conform to the first
normal form, by allowing "sets and sets of sets to be attribute domains" (Schek
1982). This extension introduces hierarchies in relations.
Consider the following table:
Non-First Normal Form
Person | Favorite Colors
Bob    | blue, red
Jane   | green, yellow, red
Assume a person has several favorite colors. Obviously, favorite colors consist of a
set of colors modeled by the given table.
To transform this NF² table into 1NF, an "unnest" operator is required, which
extends the relational algebra of the higher normal forms. The reverse operator is
called "nest"; "nest" is not always the mathematical inverse of "unnest", although
"unnest" is the mathematical inverse of "nest". A further constraint requires the
operators to be bijective, which is covered by the Partitioned Normal Form (PNF).
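Here is a sketch of "unnest" in Python on the table above, flattening the set-valued Favorite Colors attribute into 1NF pairs; the data mirrors the example:

nfnf = {"Bob": {"blue", "red"}, "Jane": {"green", "yellow", "red"}}

unnested = [(person, color)
            for person, colors in nfnf.items()
            for color in sorted(colors)]
print(unnested)
# [('Bob', 'blue'), ('Bob', 'red'), ('Jane', 'green'), ('Jane', 'red'), ('Jane', 'yellow')]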
6. Write a detailed note about the different levels and methods of
testing software. (May-06)
Ans:
The most useful and practical approach is with the understanding that
testing is the process of executing a program with the explicit intention of finding
errors, that is, making the program fail.
TESTING STRATEGIES:
A test case is a set of data that the system will process
as normal input. However, the data are created with the express intent of
determining whether the system will process them correctly.
There are two logical strategies for testing software: the strategies of code
testing and specification testing.
CODE TESTING:
The code testing strategy examines the logic of the
program. To follow this testing method, the analyst develops test cases that result in
executing every instruction in the program or module; that is, every path through the
program is tested.
This testing strategy does not indicate whether the
code meets its specifications, nor does it determine whether all aspects are even
implemented. Code testing also does not check the range of data that the program
will accept, even though, when software failures occur in actual use, it is frequently
because users submitted data outside of expected ranges (for example, a sales order
for $1 million, the largest in the history of the organization).
SPECIFICATION TESTING:
To perform specification testing, the analyst
examines the specifications stating what the program should do and how it should
perform under various conditions. Then test cases are developed for each condition
or combination of conditions and submitted for processing. By examining the
results, the analyst can determine whether the program performs according to its
specified requirements.
LEVELS OF TESTING:
Systems are not designed as entire systems, nor are
they tested as single systems. The analyst must perform both unit and system
testing.
UNIT TESTING:
In unit testing the analyst tests the programs making up a system. (For this reason
unit testing is sometimes also called program testing.)
Unit testing focuses first on the modules, independently of one another, to locate
errors.
This enables the tester to detect errors in coding and logic that are contained within
each module alone.
Unit testing can be performed from the bottom up, starting with the smallest and
lowest-level modules and proceeding one at a time. For each module in bottom-up
testing, a short program executes the module and provides the needed data, so that
the module is asked to perform the way it will when embedded within the larger
system. When the bottom-level modules are tested, attention turns to those on the
next level that use the lower ones. They are tested individually and then linked with
the previously examined lower-level modules.
SYSTEM TESTING:
System testing does not test the software per se, but rather the integration of each
module in the system. It also tests to find discrepancies between the system and its
original objectives, current specifications, and system documentation. The primary
concern is the compatibility of individual modules. The analyst tries to find areas
where modules have been designed with different specifications for data length,
type and data element name. For example, one module may expect the data item
for customer identification number to be a character data item while another treats
it as numeric.
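As a minimal sketch of unit testing, here is a Python unittest exercising a single module in isolation; the discount rule and its out-of-range test case are assumptions invented for this sketch:

import unittest

def discount(order_total):
    """Hypothetical module under test: 10% discount on orders over 100."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return order_total * 0.9 if order_total > 100 else order_total

class TestDiscount(unittest.TestCase):
    def test_small_order_unchanged(self):
        self.assertEqual(discount(50), 50)

    def test_large_order_discounted(self):
        self.assertAlmostEqual(discount(200), 180.0)

    def test_rejects_out_of_range_input(self):
        # in the spirit of specification testing: data outside expected ranges
        with self.assertRaises(ValueError):
            discount(-1)

if __name__ == "__main__":
    unittest.main()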
7. What are structured walkthroughs and how are they carried out?
Describe the composition of the walkthrough team. (May-06, Nov-03,
May-05).
A structured walkthrough is a planned review of a system or its software by
persons involved in the development effort. Sometimes a structured walkthrough is
called a peer walkthrough, because the participants are colleagues at the same level
in the organization.
PURPOSE:
1. The purpose of a structured walkthrough is to find areas where improvement
can be made in the system or the development process.
2. Structured walkthroughs are often employed to enhance quality and to provide
guidance to systems analysts and programmers.
3. A walkthrough should be viewed by the programmers and analysts as an
opportunity to receive assistance, not as an obstacle to be avoided or tolerated.
4. The structured walkthrough can be used as a constructive and cost-effective
management tool after detailed investigation, following design and
during program development.
PROCESS OF STRUCTURED WALKTHROUGH:
1. The walkthrough concept recognizes that system development is a team
process. The individuals who formulated the design specifications or created
the program code are part of the review team.
2. A moderator is chosen to lead the review.
3. A scribe or recorder is also needed to capture the details of the discussion
and the ideas that are raised.
4. Maintenance should be addressed during the walkthrough.
5. Generally no more than seven persons should be involved, including the
individuals who actually developed the product under review, the recorder
and the review leader.
6. Structured reviews rarely exceed 90 minutes in length.
REQUIREMENT REVIEW:
1. A requirement review is a structured walkthrough conducted to examine the
requirement specifications formulated by the analyst.
2. It is also called a specification review.
3. It aims at examining the functional activities and processes that the new
system will handle.
4. It includes documentation that participants read and study prior to the actual
walkthrough.
DESIGN REVIEW:
1. A design review focuses on design specifications for meeting previously
identified system requirements.
2. The purpose of this type of structured walkthrough is to determine whether
the proposed design will meet the requirements effectively and efficiently.
3. If the participants find discrepancies between the design and the requirements,
they will point them out and discuss them.
CODE REVIEW:
1. A code review is a structured walkthrough conducted to examine the program
code developed in a system, along with its documentation.
2. It is used for new systems and for systems under maintenance.
3. A code review does not deal with the entire software, but rather with
individual modules or major components in a program.
4. When programs are reviewed, the participants also assess execution
efficiency, use of standard data names and modules, and program errors.
8. What is user interface design? What tools are used to chart a user
interface design? (May-06).
Ans:
User interface design is the specification of a conversation between the system
user and the computer. It generally results in the form of input or output. There are
several types of user interface styles, including menu selection, instruction sets,
question-answer dialogue and direct manipulation.
1. Menu selection: It is a strategy of dialogue design that presents a
list of alternatives or options to the user. The system user selects the
desired alternative or option by keying in the number
associated with the option (see the sketch below).
2. Instruction sets: It is a strategy where the application is designed
using a dialogue syntax that the user must learn. There are three types
of syntax: structured English, mnemonic syntax and natural language.
3. Question-answer dialogue strategy: It is a style that was primarily used
to supplement either menu-driven or syntax-driven dialogues. The
system question typically requires a yes or no answer. It was also popular
in developing interfaces for character-based screens in mainframe
applications.
4. Direct manipulation: It allows graphical objects to appear on a
screen. Essentially, this user interface style focuses on using icons,
small graphical images, to suggest functions to the user.
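As a sketch of the menu selection style of point 1, here is a minimal dialogue in Python: the system lists numbered options and the user keys in the number of the desired alternative. The menu items are invented for illustration:

def menu_select(options):
    """Present numbered options; return the one the user keys in."""
    for number, text in enumerate(options, start=1):
        print(f"{number}. {text}")
    choice = int(input("Enter option number: "))
    if not 1 <= choice <= len(options):
        raise ValueError("no such option")
    return options[choice - 1]

if __name__ == "__main__":
    print("You chose:", menu_select(["Add record", "Update record", "Quit"]))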
9. Describe the different methods of file organization. Illustrate with
examples. For what type of system can each file organization
method be used? (May-06, Nov-03)
Ans. File Organization:
A file is organized to ensure that records are available for processing. It
should be designed in line with the activity and volatility of the information and the
nature of the storage media and devices.
There are four basic file organization methods:
1. Sequential organization:
i. Sequential organization simply means storing data in physically
contiguous blocks within files on tape or disk. Records are also in
sequence within each block.
ii. To access a record, previous records within the block are
scanned. Thus sequential record design is best suited for "get next"
activities, reading one record after another without a search delay.
iii. In a sequential organization, records can be added only at the end
of the file. It is not possible to insert a record in the middle of the
file without rewriting the file.
iv. In a file update, transaction records are in the same sequence as in
the master file. Records from both files are matched, one record at
a time, resulting in an updated master file.
v. Advantages:
- simple to design
- easy to program
- variable-length and blocked records are available
- best use of disk storage
vi. Disadvantages:
Records cannot be added to the middle of the file.
2. Indexed Sequential Organization:
vii. Like sequential organization, indexed sequential organization stores
data in physically contiguous blocks.
viii. The difference is in the use of indexes to locate the records (see the
sketch at the end of this answer).
ix. Disk storage is divided into three areas:
a. Prime area:
The prime area contains file records stored by key or
ID numbers. All records are initially stored in the
prime area.
b. Overflow area:
The overflow area contains records added to the file
that cannot be placed in logical sequence in the
prime area.
c. Index area:
The index area is more like a data dictionary. It
contains the keys of records and their locations on the
disk. A pointer associated with each key is an
address that tells the system where to find a record.
x. Advantages:
a. Indexed sequential organization reduces the
magnitude of the sequential search and provides
quick access for sequential and direct processing.
b. Records can be inserted or updated in the middle of
the file.
xi. Disadvantages:
a. The prime drawback is the extra storage required for
the index.
b. It also takes longer to search the index for data
access or retrieval.
c. Periodic reorganization of the file is required.
3. Inverted List Organization:
xii. Like the indexed-sequential storage method, the inverted list
organization maintains an index.
xiii. The two methods differ, however, in the index level and record
storage.
xiv. The indexed sequential method has a multiple index for a given
key, whereas the inverted list method has a single index for each
key type.
xv. In an inverted list, records are not necessarily stored in a particular
sequence. They are placed in the data storage area, but indexes are
updated for the record keys and locations.
xvi. Advantage:
Inverted lists are best for applications that request specific data on
multiple keys. They are ideal for static files, as additions and
deletions cause expensive pointer updating.
4. Direct-access Organization:
xvii. In direct-access file organization, records are placed randomly
throughout the file.
xviii. Records need not be in sequence, because they are updated
directly and rewritten back in the same location.
xix. New records are added at the end of the file or inserted in specific
locations based on software commands.
xx. Records are accessed by addresses that specify their disk location.
An address is required for locating a record, for linking records or
for establishing relationships.
xxi. Advantages:
a. Records can be inserted or updated in the middle of the
file.
b. Better control over record location.
xxii. Disadvantages:
a. Address calculation is required for processing.
b. Variable-length records are nearly impossible to
process.
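As a sketch of the indexed sequential idea of method 2, here is a Python fragment where an index maps record keys to positions in an otherwise sequential file, so a record can be found without scanning every preceding record. In-memory lists stand in for disk areas; all names and values are invented for illustration:

records = [  # prime area: records stored in key sequence
    {"key": 100, "data": "first"},
    {"key": 205, "data": "second"},
    {"key": 310, "data": "third"},
]
index = {rec["key"]: pos for pos, rec in enumerate(records)}  # index area

def direct_read(key):
    return records[index[key]]   # one index probe, no sequential scan

def sequential_read():
    for rec in records:          # "get next" processing
        yield rec

print(direct_read(205))
print(list(sequential_read()))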
10. What is meant by a prototype? What is its use in application
prototyping? (May-06).
Prototyping:
a. Prototyping is a technique for quickly building a functioning but incomplete
model of the information system.
b. A prototype is a small, representative or working model of the users'
requirements or of a proposed design for an information system.
c. The development of the prototype is based on the fact that the
requirements are seldom fully known at the beginning of the project. The
idea is to build a first, simplified version of the system and seek feedback
from the people involved, in order to then design a better subsequent
version. This process is repeated until the system meets the client's
condition of acceptance.
d. Any given prototype may omit certain functions or features until such a
time as the prototype has sufficiently evolved into an acceptable
implementation of requirements.
e. There are two types of prototyping models:
a. Throw-away prototyping - in this model, the prototype is discarded
once it has served the purpose of clarifying the system
requirements.
b. Evolutionary prototyping - in this model, the prototype evolves into
the main system.
Need for Prototyping:
a. Information requirements are not always well defined. Users may know
only that certain business areas need improvement or that the
existing procedures must be changed. Or they may know that they need
better information for managing certain activities but are not sure
what that information is.
b. The users' requirements may be too vague to even begin
formulating a design.
c. Developers may have neither information nor experience of some
unique situations or some high-cost or high-risk situations, in which
the proposed design is new and untested.
d. Developers may be unsure of the efficiency of an algorithm, the
adaptability of an operating system, or the form that human-machine
interaction should take.
e. In these and many other situations, a prototyping approach may offer
the best approach.
Steps in Prototyping:
a. Requirement analysis:
Identify the users' information and operating requirements.
b. Prototype creation or modification:
Develop a working prototype that focuses on only the most important
functions, using a basic database.
c. Customer evaluation:
Allow the customers to use the prototype and evaluate it. Gather the
feedback, reviews, comments and suggestions.
d. Prototype refinement:
Integrate the most important changes into the current prototype on
the basis of customer feedback.
e. System development and implementation:
Once the refined prototype satisfies all the client's conditions of
acceptance, it is transformed into the actual system.
Advantages :
i. Shorter development time.
ii. More accurate user requirements.
iii. Greater user participation and support.
iv. Relatively inexpensive to build as compared with the cost of a
conventional system.
Disadvantages :
i. An operating system or programming language may be
used simply because it is available.
ii. The completed system may not contain all the features and final
touches. For instance, headings, titles and page numbers in a report
may be missing.
iii. File organization may be temporary and record structures may be
left incomplete.
iv. Processing and input controls may be missing, and documentation of
the system may have been avoided entirely.
v. Development of the system may become a never-ending process, as
changes will keep happening.
vi. It adds to the cost and time of developing the system if left uncontrolled.
Application :
i. This method is most useful for unique applications where developers
have little information or experience, or where the risk of error may be
high.
ii. It is useful to test the feasibility of the system or to identify user
requirements.
11. Distinguish between reliability and validity. How are they
related? (May-06, Nov-03)
Ans:
Reliability and validity are two faces of information gathering. The term reliability is
synonymous with dependability, consistency and accuracy. Concern for reliability
comes from the necessity for dependability in measurement, whereas validity is
concerned with what is being measured rather than with consistency and accuracy.
Reliability can be approached in three ways:
1. It is defined as stability, dependability, and predictability.
2. It focuses on the accuracy aspect.
3. Errors of measurement - these are random errors stemming from fatigue or
fortuitous conditions at a given time.
The most common question that defines validity is: does the instrument measure
what it is intended to measure? It refers to the notion that the questions asked are
worded to produce the information sought. In validity the emphasis is on what is
being measured.
12. Write short notes on (May-06):
a. Structure charts (May-04)(May-03)(Dec-04)
b. HIPO charts (May-04, May-03, May-05, Dec-04)
c. Security and Disaster Recovery (Dec-04)
d. List of Deliverables (Dec-04)
e. Warnier/Orr diagrams (M-05, Dec-04, May-04)
HIPO charts
1. HIPO is a commonly used method for developing software. An
acronym for Hierarchical Input Process Output, this method was
developed by IBM for its large, complex operating systems.
2. Purpose: The assumption on which HIPO is based is that one
often loses track of the intended function of a large system. This
is one reason why it is difficult to compare existing systems
against their original specifications, and therefore why failure can
occur even in systems that are technically well formulated. From
the user's view, a single function can often extend across several
modules. The concern of the analyst is understanding,
describing and documenting the modules and their interaction in
a way that provides sufficient detail but does not lose sight
of the larger picture.
3. HIPO diagrams are graphic, rather than prose or narrative,
descriptions of the system.
4. They assist the analyst in answering three questions:
a. What does the system or module do?
b. How does it do it?
c. What are the inputs and outputs?
5. A HIPO description for a system consists of:
a. a visual table of contents (VTOC), and
b. functional diagrams.
6. Advantages:
a. HIPO diagrams are effective for documenting a system.
b. They also aid designers and force them to think about how
specifications will be met and where activities and components
must be linked together.
7. Disadvantages:
a. They rely on a set of specialized symbols that require
explanation - an extra concern when compared to the simplicity
of, for example, data flow diagrams.
b. HIPO diagrams are not as easy to use for communication purposes as
many people would like. And, of course, they do not guarantee
error-free systems. Hence, their greatest strength is the
documentation of a system.
List of Deliverables:
1. The deliverables are those artifacts of the software system that are to be
delivered to the users.
2. It is wrong to consider that only the executable code is deliverable. Some
count additional documentation as part of the deliverables, but this too is
an incomplete list of deliverables.
3. The deliverables of a software system development project are numerous
and vary with the user organization.
4. The broad classes of system deliverables are as follows:
a. Document deliverables.
b. Source code deliverables.
5. Some of the document deliverables are listed as follows:
a. System Requirements Specifications.
b. System design documents:
i. Data flow diagrams with process descriptions,
data dictionary, etc.
ii. System flow chart.
iii. Entity Relationship Diagrams.
iv. Structure charts.
v. Forms design of input data forms.
vi. Forms design of outputs such as query
responses/reports.
vii. Error messages: list, contents, suggested
corrective actions.
viii. Database schema.
c. Testing documents:
i. Test plan with test cases, test data and
expected results.
d. User training manual:
i. Operational manual - containing routine operations,
exception processing, housekeeping functions, etc.
6. The source code deliverables - these are also called soft copy
deliverables, because they are delivered in soft form (maybe in addition
to print form). Major deliverables in this category are listed as follows:
a. The source code of programs.
b. The source code of libraries/sharable code.
c. The database schema source code.
d. The database triggers, stored procedures, etc., if not
covered above.
e. The online help page contents.
7. The list of deliverables is generally a part of the proposal. However, the
final list of deliverables may include additional elements or
clarifications after the system requirements specifications are
finalized.
Structure Charts:
It is described as follows:
1. The structure chart is a graphical representation of the hierarchical
organization of the functions of a program and the
communication between program components.
2. It is drawn to describe a process or program component shown in the
system flow chart in more detail. Structure charts create a
further top-down decomposition; thus they are another, lower level of
abstraction of the design before implementation.
3. A computer program is modularized so that it is easy to
understand, avoids repetition of the same code in multiple places
of the same (or different) programs, and can be reused to save
development time, effort and cost. The structure chart makes it
possible to model even at this lower level of design.
4. The modules of a program, called subprograms, are placed
physically one after the other, serially. However, they are
referenced in the order that the functionality requires them. Program
control is transferred from a line in the calling subprogram to the
first line in the called subprogram. Thus modules perform the role
of calling and/or called subprograms at different times during
program execution.
5. The calling subprogram is referred to as a parent and the called
one as a child subprogram. Thus a program can be logically
arranged into a hierarchy of subprograms. Structure charts
represent these parent-child relationships between subprograms
effectively.
6. The subprograms also communicate with each other, in either
direction. The structure chart can describe these data flows
effectively. The individual data items passed between program
modules are called data couples. They are represented by an
arrow starting from a hollow circle, as shown in the diagram. The
arrow is labeled with the name of the data item passed.
7. The subprograms also communicate among themselves through a type of
data item called a flag, which is purely internal information
between subprograms, used to indicate some result. Flags can be
binary values, indicating the presence or absence of a thing.
(Diagram: symbol of a calling module invoking a called module.)
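A minimal Python sketch of points 4-7 - a parent (calling) subprogram passing
data couples down to a child (called) subprogram, which passes a flag back.
The names and the credit rule are illustrative assumptions:

    def check_credit(customer_balance: float, order_amount: float) -> bool:
        # 'customer_balance' and 'order_amount' arrive as data couples;
        # the boolean returned plays the role of a flag indicating the result.
        return customer_balance + order_amount <= 10_000.0

    def process_order(customer_balance: float, order_amount: float) -> str:
        # Parent subprogram: control transfers to the child and returns here.
        credit_ok = check_credit(customer_balance, order_amount)   # flag
        return "accept order" if credit_ok else "refer to credit department"

    print(process_order(2_500.0, 1_200.0))   # -> accept order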
Warnier/Orr Diagrams:
1. The ability to show the relationship between processes and steps in a
process is not unique to Warnier/Orr diagrams, nor is the use of
iteration or the treatment of individual cases.
2. Both the structured flowchart and structured-English methods do
this equally well. However, the approach used to develop systems
definitions with Warnier/Orr diagrams is different and fits well with
those used in logical system design.
3. To develop a Warnier/Orr diagram, the analyst works backwards,
starting with the system's output and using an output-oriented analysis.
On paper, the development moves from left to right. First, the
intended output or results of processing are defined.
4. At the next level, shown by inclusion with a bracket, the steps
needed to produce the output are defined. Each step in turn is
further defined. Additional brackets group the processes required to
produce the result on the next level.
5. A complete Warnier/Orr diagram includes both process groupings
and data requirements. Both elements are listed for each process or
process component. These data elements are the ones needed to
determine which alternative or case should be handled by the system
and to carry out the process.
6. The analyst must determine where each data element originates, how
it is used, and how individual elements are combined. When the
definition is complete, the data structure for each process is
documented. It, in turn, is used by the programmers, who work from
the diagrams to code the software.
7. Advantages:
a. Warnier/Orr diagrams offer some distinct advantages to
systems experts. They are simple in appearance and easy to
understand, yet they are powerful design tools.
b. They have the advantage of showing the grouping of processes
and the data that must be passed from level to level.
c. In addition, the sequence of working backwards ensures that
the system will be result-oriented.
d. This method also follows a natural design process. With structured
flowcharts, for example, it is often necessary to determine the
innermost steps before iteration and modularity can be addressed.
Security and Disaster Recovery:
The system security problem can be divided into four related
issues:
1. System Security:
System security refers to the technical innovations and
procedures applied to the hardware and operating systems to
protect against deliberate or accidental damage from a defined threat.
2. System Integrity:
System integrity refers to the proper functioning of hardware and
programs, appropriate physical security, and safety against
external threats such as eavesdropping and wiretapping.
3. Privacy:
Privacy defines the rights of the users or organizations to
determine what information they are willing to share with or
accept from others and how the organization can be
protected against unwelcome, unfair, or excessive
dissemination of information about it.
4. Confidentiality:
The term confidentiality is a special status given to sensitive
information in a database to minimize the possible invasion of
privacy. It is an attribute of information that characterizes its
need for protection.
Disaster/Recovery Planning:
1. Disaster/recovery planning is a means of addressing the
concern for system availability by identifying potential
exposure, prioritizing applications, and designing safeguards
that minimize loss if disaster occurs. It means that no
matter what the disaster, the system can be recovered. The
business will survive because a disaster/recovery plan
allows quick recovery under the circumstances.
2. In disaster/recovery planning, management's primary role
is to accept the need for contingency planning, select
an alternative measure, and recognize the benefits that
can be derived from establishing a disaster/recovery
plan. Top management should establish a
disaster/recovery policy and commit corporate support
staff to its implementation.
3. The user's role is also important. The user's responsibilities
include the following:
a. Identifying critical applications, why they are critical,
and how computer unavailability would affect the
department.
b. Approving data protection procedures and
determining how long and how well operations will
continue without the data.
c. Funding the costs of backup.
13. What are the roles of the system analyst in system analysis and
design? (Nov-03, May-01)
The various roles of a system analyst are as follows:
1 Change Agent:
The analyst may be viewed as an agent of change. A candidate system is designed to
introduce change and reorientation in how the user organization handles information
or makes decisions. In the role of a change agent, the system analyst may select various
styles to introduce change to the user organization. The styles range from that of the
persuader to the imposer; in between are the catalyst and confronter roles. When the
user appears to have a tolerance for change, the persuader or catalyst (helper) style
is appropriate. On the other hand, when drastic changes are required, it may be
necessary to adopt the confronter or even the imposer style. No matter what style is
used, however, the goal is the same: to achieve acceptance of the candidate system with a
minimum of resistance.
2 Investigator and Monitor:
In defining a problem, the analyst pieces together the information gathered to
determine why the present system does not work well and what changes will correct
the problem. In one respect, this work is similar to that of an investigator. Related to
the role of investigator is that of monitor: the analyst must monitor programs in
relation to time, cost, and quality. Of these resources, time is the most important; if
time slips away, the project suffers from increased costs and wasted human resources.
3 Architect:
Just as an architect relates the client's abstract design requirements to the
contractor's detailed building plan, an analyst relates the user's logical design
requirements to the detailed physical system design. As an architect, the analyst also
creates a detailed physical design of the candidate system.
4 Psychologist:
In system development, systems are built around people. The analyst plays the role
of a psychologist in the way he/she reaches people, interprets their thoughts, assesses
their behaviour and draws conclusions from these interactions. Understanding
interfunctional relationships is important. It is important that the analyst be aware of
people's feelings and be prepared to get around things in a graceful way. The art of
listening is important in evaluating responses and feedback.
5 Salesperson:
Selling change can be as crucial as initiating change. Selling the system actually
takes place at each step in the system life cycle. Sales skills and persuasiveness are
crucial to the success of the system.
6 Motivator:
The analyst's role as a motivator becomes obvious during the first few weeks after
implementation of the new system and during times when turnover results in new people
being trained to work with the candidate system. The amount of dedication it takes
to motivate the users often taxes the analyst's ability to maintain the pace.
7 Politician:
Related to the role of motivator is that of politician. In implementing a candidate
system, the analyst tries to appease all parties involved. Diplomacy and finesse in
dealing with people can improve acceptance of the system. Just as a
politician must have the support of his/her constituency, a good analyst's goal is to
have the support of the users' staff.
14. What are the requirements of a good system analyst? (M-04, May-01,
M-06)
Requirements of a good System Analyst:
An analyst must possess various skills to effectively carry out the job. The
skills required by a system analyst can be divided into the following categories:
i) Technical Skills:
Technical skills focus on procedures and techniques for operational analysis,
systems analysis and computer science. The technical skills relevant to systems
work include the following:
a. Working knowledge of information technologies:
The system analyst must be aware of both existing and
emerging information technologies. They should also stay
current through disciplined reading and participation in
appropriate professional societies.
b. Computer programming experience and expertise:
System analysts must have some programming experience.
Most system analysts need to be proficient in one or more high-
level programming languages.
ii) Interpersonal Skills:
Interpersonal skills deal with relationships and the interface of the analyst
with people in the business. The interpersonal skills relevant to systems work
include the following:
a. Good interpersonal communication skills:
An analyst must be able to communicate, both orally and in
writing. Communication is not just reports, telephone
conversations and interviews. It is people talking, listening,
feeling and reacting to one another - their experiences and
reactions. Open communication channels are a must for
system development.
b. Good interpersonal relations skills:
An analyst interacts with all stakeholders in a system
development project. This interaction requires effective
interpersonal skills that enable the analyst to deal with group
dynamics, business politics, conflict and change.
iii) General knowledge of business processes and terminology:
System analysts must be able to communicate with business
experts to gain an understanding of their problems and needs.
They should avail themselves of every opportunity to complete
basic business literacy courses such as financial accounting,
management or cost accounting, finance, marketing,
manufacturing or operations management, quality
management, economics and business law.
iv) General problem-solving skills:
The system analyst must be able to take a large business
problem, break it down into its parts, determine the problem's
causes and effects, and then recommend a solution. Analysts
must avoid the tendency to suggest a solution before analyzing
the problem.
v) Flexibility and Adaptability:
No two projects are alike. Accordingly, there is no single, magical approach or
standard that is equally applicable to all projects. Successful system
analysts learn to be flexible and to adapt to unique challenges and
situations.
vi) Character and Ethics:
The nature of the systems analyst's job requires a strong character and a
sense of right and wrong. Analysts often gain access to sensitive and
confidential facts and information that are not meant for public
disclosure.
15. Build a current Admission for MCA system. Draw the context-level
diagram, a DFD up to two levels, an ER diagram, and a data flow, data
stores and a process. Draw input and output screens. (May-04)
EVENT LIST FOR THE CURRENT MCA ADMISSION SYSTEM
1. Administrator enters the college details.
2. Issue of admission forms.
3. Administrator enters the student details into the system.
4. Administrator verifies the details of the student.
5. System generates the hall tickets for the student.
6. Administrator updates the CET score of student in the system.
7. System generates the score card for the students.
8. Student enters his list of preferences of colleges into the system.
9. System generates college-wise student lists according to CET score.
10. System sends the list to the colleges as well as the students.
Data Stores used in MCA admission:
1) Student_Details:
i) Student_id
ii) Student_Name
iii) Student_Address
iv) Student_ContactNo
v) Student_Qualification
vi) Student_Marks10th
vii) Student_Marks12th
viii) Student_MarksDegree
2) Stud_CET_Details:
i) Student_id
ii) Student_RollNo
iii) Student_CETScore
iv) Student_Percentile
3) Stud_Preference_List:
i) Student_id
ii) Student_RollNo
iii) Student_Preference1
iv) Student_Preference2
v) Student_Preference3
vi) Student_Preference4
vii) Student_Preference5
4) College_List:
i) College_id
ii) College_Name
iii) College_Address
iv) Seats_Available
v) Fee
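These data stores map naturally onto relational tables. A minimal sketch using
Python's built-in sqlite3 module - the column types are assumptions, and the
five preference fields are normalized into one row per preference:

    import sqlite3

    conn = sqlite3.connect(":memory:")   # throwaway database, for illustration
    conn.executescript("""
    CREATE TABLE student_details (
        student_id      INTEGER PRIMARY KEY,
        name            TEXT,
        address         TEXT,
        contact_no      TEXT,
        qualification   TEXT,
        marks_10th      REAL,
        marks_12th      REAL,
        marks_degree    REAL            -- optional, may be NULL
    );
    CREATE TABLE stud_cet_details (
        student_id      INTEGER REFERENCES student_details(student_id),
        roll_no         INTEGER,
        cet_score       INTEGER,
        percentile      REAL
    );
    CREATE TABLE college_list (
        college_id      INTEGER PRIMARY KEY,
        name            TEXT,
        address         TEXT,
        seats_available INTEGER,
        fee             REAL
    );
    CREATE TABLE stud_preference_list (
        student_id      INTEGER REFERENCES student_details(student_id),
        preference_no   INTEGER,        -- 1 to 5
        college_id      INTEGER REFERENCES college_list(college_id)
    );
    """)

Normalizing the preferences into rows makes the college-wise merit list
(event 9) a simple join over these tables, ordered by CET score.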
Input Files
1) Student Details Form:
Student Name: ___________________________
Student Address:
Student Contact No:
Student Qualification:
Student's Percentage 12th:
Student's Percentage 10th:
Student's Degree Percentage: (optional)
2) Student Preference List:
Student_RollNo: _________
Preference No 1:
Preference No 2:
Preference No 3:
Preference No 4:
Preference No 5:
3) Student CET Details:
Student id:
Student roll no:
Student Score:
Student Percentile:
4) College List:
College Name:
College Address:
Seats Available:
Fees:
OUTPUT FILES
1) Student Score Card
Student RollNo:
Student Name:
Student Score:
Percentile:
2) Student List to College
RollNo Name Score Percentile
1
2
3
4
5
6
7
8
9
16. Give the importance of formal methods. Especially give the importance
of the spiral development model. (M-03)
DEFINITION - The spiral model, also known as the spiral lifecycle model, is a
systems development method (SDM) used in information technology (IT). This model
of development combines the features of the prototyping model and the waterfall
model. The spiral model is favored for large, expensive, and complicated projects.
The steps in the spiral model can be generalized as follows:
1. The new system requirements are defined in as much detail as possible. This
usually involves interviewing a number of users representing all the external
or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary
design. This is usually a scaled-down system, and represents an
approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first
prototype in terms of its strengths, weaknesses, and risks; (2) defining the
requirements of the second prototype; (3) planning and designing the second
prototype; (4) constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is
deemed too great. Risk factors might involve development cost overruns,
operating-cost miscalculation, or any other factor that could, in the
customer's judgment, result in a less-than-satisfactory final product.
6. The existing prototype is evaluated in the same manner as was the previous
prototype, and, if necessary, another prototype is developed from it according
to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the
refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is
carried out on a continuing basis to prevent large-scale failures and to
minimize downtime.
Advantages
Estimates (i.e. budget, schedule, etc.) get more realistic as work progresses,
because important issues are discovered earlier.
It is more able to cope with the (nearly inevitable) changes that software
development generally entails.
Software engineers (who can get restless with protracted design processes)
can get their hands in and start working on a project earlier.
The spiral model is a realistic approach to the development of large-scale
software products because the software evolves as the process progresses. In
addition, the developer and the client better understand and react to risks at
each evolutionary level.
The model uses prototyping as a risk reduction mechanism and allows for the
development of prototypes at any stage of the evolutionary development.
It maintains a systematic stepwise approach, like the classic life cycle model,
but incorporates it into an iterative framework that more closely reflects the
real world.
If employed correctly, this model should reduce risks before they become
problematic, as technical risks are considered at all stages.
Disadvantages
1 Demands considerable risk-assessment expertise.
2 It has not been employed as much as proven models (e.g. the waterfall
model), and hence it may prove difficult to convince the client (especially
where a contract is involved) that this model is controllable and efficient.
More study needs to be done in this regard.
3 It may be difficult to convince customers that the evolutionary approach is
controllable.
17. What is the 4GL model? What are its advantages and disadvantages?
(M-03)
Fourth Generation Techniques:
1) The term 'Fourth Generation Techniques' (4GT) encompasses a broad array of
software tools that have one thing in common: each tool enables
the software developer to specify some characteristics of software at a
high level. The tool then automatically generates source code based on
the developer's specification.
2) There is little debate that the higher the level at which software can be
specified to a machine, the faster a program can be built.
3) The 4GL model focuses on the ability to specify software to a machine
at a level that is close to natural language, or using a notation that
imparts significant function.
4) Currently, a software development environment that supports the 4GL
model includes some or all of the following tools: non-procedural
languages for database query, report generation, data
manipulation, screen interaction and definition, code generation, high-
level graphics capability, and spreadsheet capability.
5) There is no 4GT environment today that may be applied with equal
facility to each of the software application categories.
Steps in 4GT:
1) Requirements gathering:
i) Like other paradigms, 4GT begins with the requirements gathering
step. Ideally, the customer would describe requirements and these would be
directly translated into an operational prototype. But this is
unworkable.
ii) The customer may be unsure of what is required, may be ambiguous in
specifying facts that are known, and may be unable or unwilling to
specify information in a manner that a 4GT tool can consume.
iii) In addition, current 4GT tools are not sophisticated enough to
accommodate truly 'natural language' and won't be for some time. At this
time, the customer-developer dialogue described for the other paradigms
remains an essential part of the 4GT model.
2) Design strategy:
i) For small applications, it may be possible to move directly from the
requirements gathering step to implementation using a non-procedural
fourth-generation language (4GL).
ii) However, for large projects it is necessary to develop a design strategy
for the system, even if a 4GL is to be used.
iii) The use of 4GT without design (for large projects) will cause the same
difficulties (poor quality, poor maintainability, etc.) that we
have encountered when developing software using conventional
approaches.
3) Implementation using a 4GL:
Implementation using a 4GL enables the software developer to represent
desired results in a manner that leads to automatic generation of code to
produce those results. Obviously, a data structure with relevant information
must exist and be readily accessible to the 4GL.
4) Testing:
To transform a 4GT implementation into a product, the developer must
conduct thorough testing, develop meaningful documentation and perform all
the other 'transition' activities that are also required in other paradigms. In
addition, 4GT-developed software must be built in a manner that enables
maintenance to be performed expeditiously.
Merits:
i) Drastic reduction in software development time.
ii) Improved productivity for software developers.
Demerits:
i) Not much easier to use as compared to programming languages.
ii) Large software systems built using 4GT are very difficult to maintain.
18. Discuss prototyping as a way to test a new idea. (M-03)
1. Prototyping is a technique for quickly building a functioning but incomplete model
of the information system.
2. A prototype is a small, representative, or working model of users' requirements or
of a proposed design for an information system.
3. The development of the prototype is based on the fact that the requirements are
seldom fully known at the beginning of the project. The idea is to build a first,
simplified version of the system and seek feedback from the people involved, in
order to then design a better subsequent version. This process is repeated until the
system meets the client's conditions of acceptance.
4. Any given prototype may omit certain functions or features until such time as
the prototype has sufficiently evolved into an acceptable implementation of
requirements.
Reasons for Prototyping:
1. Information requirements are not always well defined. Users may know only
that certain business areas need improvement or that the existing procedures
must be changed. Or, they may know that they need better information for
managing certain activities but are not sure what that information is.
2. The users' requirements may be too vague to even begin formulating a design.
3. Developers may have neither information nor experience of some unique
situations or some high-cost or high-risk situations, in which the proposed
design is new and untested.
4. Developers may be unsure of the efficiency of an algorithm, the adaptability
of an operating system, or the form that human-machine interaction should
take.
5. In these and many other situations, a prototyping approach may offer the
best way forward.
Advantages:
1. Shorter development time.
2. More accurate user requirements.
3. Greater user participation and support.
4. Relatively inexpensive to build as compared with the cost of a conventional
system.
This method is most useful for unique applications where developers have little
information or experience, or where the risk of error may be high. It is useful to test
the feasibility of the system or to identify user requirements.
19. Discuss the features of good user interface design, using the login
context. (M-03)
The features of a good user interface design are as follows:
1 It provides the best way for people to interact with computers. This is
commonly known as Human Computer Interaction (HCI).
2 The presentation should be user friendly. A user-friendly interface is helpful:
e.g. it should not only tell the user that he has committed an error, but also
provide guidance as to how to rectify it quickly.
3 It should provide information on what the error is and how to fix it.
4 User-friendliness also includes being tolerant and adaptable.
5 A good GUI makes the user more productive.
6 A good GUI is more effective, because it finds the best solutions to a
problem.
7 It is also efficient, because it helps the user find such solutions more
quickly and with the fewest errors.
8 For a user using a computer system, the workspace is the computer
screen. The goal of a good GUI is to make the best use, if not all, of a user's
workspace.
9 A good GUI should be robust. High robustness implies that the interface
should not fail because of some action taken by the user. Also, a user
error should not lead to a system breakdown.
10 The usability of the GUI is expected to be high. Usability is measured
in various terms.
11 A good GUI has high analytical capability; most of the information
needed by the user appears on the screen.
12 With a good GUI, the user finds the work easier and more pleasant.
13 The user should be happy and confident using the interface.
14 A good GUI imposes a low cognitive workload, i.e. the mental effort
required of the user to use the system should be minimal. In fact, the
GUI should closely approximate the user's mental model of, and reactions to,
the screen.
15 For a good GUI, user satisfaction is high.
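To ground these points in the login context the question asks for, here is a
minimal Python sketch of guidance-style error handling - messages that say
both what is wrong and how to fix it. The validation rules and message
wording are illustrative assumptions:

    def validate_login(username: str, password: str) -> list:
        # Collect helpful messages instead of a bare "login failed".
        errors = []
        if not username.strip():
            errors.append("Username is empty - enter the e-mail address "
                          "you registered with.")
        if len(password) < 8:
            errors.append("Password is too short - it must be at least "
                          "8 characters long.")
        return errors

    messages = validate_login("", "abc")
    print("\n".join(messages) if messages else "Credentials accepted.")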
20. Discuss the use and abuse of multiwindow displays. (M-03)
Multiple Window Design:
1 Designers use multiple screens to give users the information they need.
The first screen gives general information.
2 By pressing a specified key, the user retrieves a second screen
containing details.
3 It allows users to browse through the details quickly and identify each
item about which more detail is needed. At the same time, the
explosion into detail for one specific item on a second (or even a
third) screen maintains the readability of the first screen by requiring
display of only enough detail to identify the item wanted.
4 Display different data or report sets simultaneously.
5 Switch between several programs, alternately displaying the output
from each.
6 Move information from one window to another (within the same or
different programs).
21. Write short notes on (M-03):
f. Design of input and control
g. Economic feasibility analysis
h. Structured walkthroughs
i. Design reviews
Q 21 (a) Short note on design of input and control?
Ans: The design of inputs involves the following four tasks:
1. Identify the data input devices and mechanisms.
2. Identify the input data and attributes.
3. Identify input controls.
4. Prototype the input forms.
We study the detailed activities as follows.
1) Identify the data input devices and mechanisms:
To reduce input errors, use the following guidelines:
Apart from entering data through an electronic form, newer ways of
entering data are through scanning, reading and transmitting devices.
They are faster, more efficient and less error-prone.
Capture the data as close to where it originates as possible. These
days application systems eliminate the use of paper forms and
encourage entry through laptops etc., which the user can carry to the
origin of the data and enter data through directly. This reduces
data entry errors dramatically.
Automate the data entry and avoid human intervention in the data
capture as much as possible. This gives users less chance to make
typing errors and increases the speed of data capture.
In case the information is available in electronic form, prefer it
over entering data manually. This will reduce errors further.
Validate the data completely and correctly at the location where it is
entered. Reject wrong data at its source only.
2) Identify the input data and attributes:
The activities involved in this step are as follows:
1 This is to ensure that all system inputs are identified and have been
specified correctly.
2 Basically, it involves identifying the information flows across the system
boundary.
3 Using the DFD models, at the lowest levels of the system, the developers
mark the system boundary, considering what part of the system is to be
automated and what is not.
4 Each input data flow in the DFD may translate into one or more of the
physical inputs to the system. Thus it is easy to identify the DFD
processes which would input the data from outside sources.
5 Knowing the details of what data elements to input, the GUI
designer prepares a list of input data elements.
6 When the input form is designed, the designer ensures that all these data
elements are provided for entry, validation and storage.
3) Identify input controls:
Input integrity controls are used with all kinds of input mechanisms.
They help reduce data entry errors at the input stage. One more
control is required on the input data to ensure its completeness.
Various error detection and correction methods are employed these
days. Some of them are listed as follows (a code sketch of the first
two follows below):
i. Data validation controls introduce an additional digit, called a
check digit, which is computed from the other digits by some
mathematical formula. The data verification step recalculates it;
if there has been a data entry error, the check digit will not match
and an error is flagged.
ii. A range of acceptable values for a data item can also be used to
check for defects going into the data. If the data value being
entered is beyond the acceptable range, an error is flagged.
iii. References to master data tables already stored can be used
to validate codes, such as customer codes, product codes,
etc. If there is an error, the reference will either not be
available or will be wrong.
iv. Some of these controls are used in combination, depending upon
the business knowledge.
Transaction logging is another technique of input control. It logs
important database update operations by the user. The transaction log
records the user id, date, time, and location of the user. This information is
useful for controlling fraudulent transactions, and it also provides a
recovery mechanism for erroneous transactions.
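A minimal Python sketch of controls (i) and (ii). The notes do not name a
specific check-digit formula, so the Luhn algorithm - one widely used scheme -
stands in here, together with an illustrative range check:

    def luhn_checksum(digits):
        # Double every second digit from the right; subtract 9 if the result
        # exceeds 9; a valid number's total is divisible by 10.
        total = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10

    def add_check_digit(payload: str) -> str:
        # Append the digit that makes the full number pass the Luhn test.
        digits = [int(c) for c in payload] + [0]
        return payload + str((10 - luhn_checksum(digits)) % 10)

    def is_valid_code(number: str) -> bool:
        return luhn_checksum([int(c) for c in number]) == 0

    def in_range(value, low, high):
        # Range-of-acceptable-values control (ii).
        return low <= value <= high

    code = add_check_digit("7992739871")   # -> '79927398713'
    print(is_valid_code(code))             # True
    print(is_valid_code("79927398710"))    # False: entry error detected
    print(in_range(250, 0, 200))           # False: value out of range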
4) Prototype input forms for better design:
i. Use a paper form for recording the transaction mainly if evidence is
required or direct data capture is not feasible.
ii. Provide help for every data element to be entered.
iii. The form should be easy to use, appear natural to the user and be
complete.
iv. The data elements asked for should be arranged in a logical order. Typically,
a left-to-right and top-to-bottom order of entry is most natural.
v. A form should not have too many data elements in one screen display.
The prototype form can be provided to the users. The steps are as follows:
a) The users should be provided with the form prototype and related
functionality, including validations, help and error messages.
b) The users should be invited to use the prototype forms and the designers
should observe them.
c) The designers should observe the users' body language, in terms of ease of use,
comfort levels, satisfaction levels, help asked for, etc. They should also note
the errors made in data entry.
d) The designers should ask the users for their critical feedback.
e) The designers should improve the GUI design and associated functionality
and run a second round of tests with the users in the same way.
f) This exercise should be continued until the input form delivers the expected
levels of usability.
Q 21 (b) Economic feasibility analysis?
Ans: Economic feasibility consists of two tests:
1 Is the anticipated value of the benefits greater than the projected costs of
development?
2 Does the organization have adequate cash flow to fund the project during the
development period?
As soon as specific requirements and solutions have been identified, the analyst can
weigh the costs and benefits of each alternative. This is called cost-benefit analysis.
o Variable costs occur in proportion to some usage factor. For example:
Costs of computer usage (e.g. CPU time used), which vary with
the workload.
Supplies (e.g. printer paper used), which vary with the workload.
Overhead costs (e.g. utilities), which can be allocated throughout the
lifetime of the system using standard techniques of cost
accounting.
Benefits
Benefits are classified as tangible and intangible.
1 Tangible benefits: These are benefits that can be easily quantified.
Tangible benefits are usually measured in terms of monthly or annual savings
or of profits to the firm, e.g. fewer processing errors, decreased response
time, etc.
2 Intangible benefits: These are benefits believed to be difficult or
impossible to quantify. Unless these benefits are at least identified, it is
entirely possible that many projects would not be found feasible, e.g. improved
customer goodwill, better decision making, etc.
Cost-benefit analysis:
a) The cost-benefit analysis is a part of economic feasibility analysis.
b) The basic tasks here are as follows:
i. To compute the total costs involved.
ii. To compute the total benefits from the project.
iii. To compare the two, to decide whether the project provides net
gains.
c) The costs are classified under two heads:
i. Development costs:
Although the project manager has final responsibility for estimating the
costs of development, senior analysts always assist with the calculations.
Generally, project costs come in the following categories:
Salaries and wages
Software and licenses
Training
Facilities
Support staff
ii) Operational costs:
Once the system is up and running, normal operating costs are incurred
every year. These annual operating costs must also be accounted for when
calculating the costs and benefits of the new system. The following is a list of
different operational costs:
1 Connectivity
2 Equipment maintenance
3 Computer operations
4 Supplies
Sources of benefits:
Benefits usually come from two major sources: decreased costs or increased
revenues.
Cost savings come from greater efficiency in the operations of the company. Areas
to look at for reduced costs include the following:
1 Reducing staff by automating manual functions or increasing efficiency.
2 Maintaining constant staff with increasing volumes of work.
3 Decreasing operating expenses, such as shipping charges for emergency
shipments.
4 Reducing error rates through automated editing or validation.
5 Reducing bad accounts or bad credit losses.
6 Collecting accounts receivable more quickly.
7 Reducing the cost of goods through volume discounts on purchases.
Financial calculations:
There are two popular techniques to assess economic feasibility, also called cost
effectiveness:
1 Payback analysis: The payback analysis technique is a simple and popular
method for determining if and when an investment will be beneficial.
The payback period, sometimes called the break-even point, is the point in
time at which the increased cash flow exactly pays off the costs of
development and operation.
2 Return on investment analysis: The return on investment (ROI) analysis
technique compares the lifetime profitability of alternative solutions or
projects. The ROI for a solution or project is a percentage rate that measures
the relationship between the amount the business gets back from an
investment and the amount invested. The lifetime ROI for a potential solution
or project is calculated as follows:
Lifetime ROI = (estimated lifetime benefits - estimated lifetime costs) / estimated lifetime costs
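A minimal Python sketch of both calculations; the figures are made up purely
for the example:

    def lifetime_roi(lifetime_benefits: float, lifetime_costs: float) -> float:
        # Percentage return relative to the amount invested.
        return (lifetime_benefits - lifetime_costs) / lifetime_costs * 100

    def payback_period(development_cost: float,
                       annual_net_benefit: float) -> float:
        # Years until cumulative net benefits repay the development cost,
        # assuming a constant annual net benefit.
        return development_cost / annual_net_benefit

    print(lifetime_roi(500_000, 400_000))    # 25.0 -> 25% lifetime ROI
    print(payback_period(120_000, 40_000))   # 3.0  -> break-even in 3 years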
Q 21 (c) Structured walkthroughs?
Ans: It is described as follows:
a) The structured walkthrough is a technique to verify and validate the system
requirements.
b) Verification checks that the right requirements have been incorporated into
the system requirements documentation, and validation checks that the
models used to represent these requirements communicate them correctly.
c) Thus it is a technique used to build quality into system development
very early in the SDLC, right at the stage of requirements
specification.
d) It is called structured because most system analysts follow it as part of a set
of quality assurance procedures, and also because it is slightly
formal in terms of the specific procedures for inviting the meeting, recording
the major agreements, etc.
e) Systems analysts carry out a structured walkthrough with the help of
documented system requirements, typically in the form of the System
Requirements Specifications.
f) The objective is to highlight inconsistencies and incompleteness in the
system requirements.
g) Typically, the presenting systems analyst walks the team line-by-line or
model-by-model through the documentation and also describes the system
requirements in simple English.
h) The other systems analysts think along and seek clarifications where
required. The presenting analyst's responses reveal the details if the
requirements are strong; otherwise, the group jointly discovers the
weaknesses.
i) Since this meeting is generally with peers, it is informal in its conduct. The
bosses are generally kept away, to foster free and open discussions.
j) The merit of the structured walkthrough is that it reveals important
deficiencies in the system requirements specifications, which, being
abstract, are otherwise very difficult to discover.
k) Also, for new or inexperienced system analysts, this technique provides a
learning and growing opportunity. It is useful for the development
organization too, to bring maturity to its system analysts.
l) The drawback of this technique is that since the systems analysts are
usually invited from outside the team, other projects suffer from the
absence of their analysts. This may be costly, or the walkthrough may be
cut short at times, hampering its effectiveness.
Q 21 (d) Short note on Design reviews?
Ans:
1 A design review focuses on the design specifications for meeting
previously identified system requirements.
2 The purpose of this type of walkthrough is to determine whether the
proposed design will meet the requirements effectively and efficiently.
3 If the participants find discrepancies between the design and the
requirements, they will point them out and discuss them.
22. What are the most important reasons why analysts use a data
dictionary? Give at least one example illustrating each reason.
(M-03)
Ans:
A data dictionary is a catalog - a repository - of the elements in the
system. The data dictionary is a list of all the elements composing the data
flowing through a system. The major elements are data flows, data stores,
and processes.
Analysts use the data dictionary for five important reasons:
1. To manage the details in large systems: Large systems have huge
volumes of data flowing through them in the form of documents,
reports, and even conversations. Similarly, many different activities take
place that use existing data or create new details. All systems are
ongoing all of the time, and management of all the descriptive details is a
challenge. So the best way is to record the information.
2. To communicate a common meaning for all system elements: Data
dictionaries assist in ensuring common meanings for system elements
and activities.
For example: order processing (sales orders from customers are
processed so that the specified items can be shipped) has 'invoice' as one
of its data elements. This is a common business term, but does it mean
the same to all the people referring to it?
Does invoice mean the amount owed by the supplier?
Does the amount include tax and shipping costs?
How is one specific invoice identified among others?
Answers to these questions will clarify and define system requirements
by more completely describing the data used or produced in the system.
Data dictionaries record additional details about the data flows in a
system so that all persons involved can quickly look up the description
of data flows, data stores, or processes.
3. To document the features of the system: Features include the parts or
components and the characteristics that distinguish each. We want to
know about the processes and data stores. But we also need to know
under what circumstances each process is performed and how often the
circumstances occur. Once the features have been articulated and
recorded, all participants in the project will have a common source of
information about the system.
For example: in a payment voucher, the vendor details include the field
vendor telephone, in which the area code can be optional for a local
phone. Item details can be repeated for each item. A Purchasing
Authorization field may be added after the invoice arrives, if it is a
special order.
4. To facilitate analysis of the details in order to evaluate characteristics
and determine where system changes should be made: The fourth
reason for using data dictionaries is to determine whether new features
are needed in a system or whether changes of any type are in order.
For example: a university is considering allowing its students to register
for courses by dialing into an online registration system over touch-tone
telephones. A systems analyst will then focus on the following system
characteristics:
Nature of transactions: What additional features are needed to permit
registration by touch-tone phone? How will payments be received if
students do not choose to pay by credit card? Will the system permit the
processing of course registration transactions where payment is by bank
credit card?
Inquiries: Student and course descriptive data are in two separate files
that are currently not linked together. How can we make the data jointly
available for advisors who wish to assist students in program planning and
course scheduling?
Output and report generation: How can we identify those students who will
register for courses over touch-tone telephones so that they can be
listed on a separate report? How do we provide these same students
with a signed registration record, as we now do for those registering on-
site?
Files and databases: What data must be captured to verify the accuracy
and authenticity of transactions arriving over telephones?
System capacity: How many students can register simultaneously over
touch-tone telephones? What are the current and anticipated numbers
of students that can be registered in one hour?
5. To locate errors and omissions: We use data dictionaries to confirm that
the information itself is complete and accurate. They help locate errors in
the system description. Conflicting data flow descriptions, processes that
neither receive input nor generate output, data stores that are never
updated, etc., indicate incomplete or incorrect analysis that we want to
correct before determining what changes are needed.
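As a sketch, one entry for the 'invoice' data flow discussed above might be
recorded like this in Python - the field names and values are illustrative
assumptions, not a prescribed layout:

    invoice_entry = {
        "name": "invoice",
        "element_type": "data flow",
        "description": "Bill for items shipped against a customer's order",
        "composition": ["invoice_number", "customer_number",
                        "amount_owed", "tax", "shipping_cost"],
        "identified_by": "invoice_number",
        "used_in_processes": ["Order processing"],
    }
    # Everyone on the project looks up this single definition, which settles
    # questions such as whether the amount includes tax and shipping costs.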
23. Consider a road-side newspaper shop. Give a typical data
dictionary for its operations. (M-03)
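No answer is given in the notes; a minimal illustrative sketch of a few entries
such a dictionary might contain (all names and fields are assumptions):

    newspaper_shop_dictionary = [
        {"name": "PUBLICATIONS", "element_type": "data store",
         "composition": ["title", "publisher", "unit_price",
                         "copies_received"]},
        {"name": "DAILY_SALES", "element_type": "data store",
         "composition": ["date", "title", "copies_sold",
                         "amount_collected"]},
        {"name": "customer payment", "element_type": "data flow",
         "composition": ["title", "copies", "cash_tendered"],
         "used_in_processes": ["Sell publication"]},
        {"name": "Sell publication", "element_type": "process",
         "description": "Hand over copies and record the sale in "
                        "DAILY_SALES"},
    ]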
24. Describe, with examples, the Entity-Relationship Diagram (ERD).
Use those examples to show how you would derive a Data
Structure Diagram (DSD) from them. (M-03)
The object-relationship pair is the cornerstone of the data model. A set of
primary components is identified for the ER diagram: data objects, attributes,
relationships and various type indicators. The primary purpose of the ERD is to represent
data objects and their relationships.
Data objects are represented by a labeled rectangle. Relationships are
indicated with a labeled line connecting objects. In some variations of ER diagrams,
the connecting line contains a diamond shape that is labeled with the relationship.
Connections between data objects and relationships are established using a variety
of special symbols that indicate cardinality and modality.
The following is an example of an ERD with two entities, customer and order.
(Diagram: example object-relationship pairs among Manufacturer, Dealership, Shipper and Car - e.g. Manufacturer builds Car, Dealership stocks Car under license, Shipper transports Car under contract.)
Ans: Entity relationship diagrams show the entities and relationships
graphically. For example, taking an order requires relating the three distinct
entities of order, customer and inventory.
(Diagram: CUSTOMER places ORDER.)
(Entity relationship diagram: one customer may place many orders; each order is for one customer. One order may include many items; items may be included on many orders.)
Dependencies between entities
Entity relationships are described by their dependence on each other as well
as by the extent of the relationship.
Entity dependencies are of two types: existence dependency - one entity is unable to exist
in the database unless the other is first present - and identification dependency - an
entity cannot be uniquely identified by its own attributes. In the given example, orders
cannot exist unless there is first a customer.
Extent of dependency includes two interrelated concerns: the direction of the
relationship and the type of association between the entities. In the given example, the
customer entity points to the order entity, as indicated by the crow's foot. The
relationship means that customers own/have orders.
Once the entities and relationships are determined, we need to focus on the data
requirements for each entity.
In addition to the basic components we have already identified in a data structure
diagram - entities, attributes, and records - two additional elements are essential:
Attribute pointers: link two entities by common information, usually a key attribute
in one and a (non-key) attribute in the other.
Logical pointers: identify the relationship between entities; they serve to gain immediate
access to the information in one entity by defining a key attribute in another entity.
(Data structure diagram for the order example: the ORDER record carries order number as its key attribute, plus customer number, item number, item description, item price and quantity ordered. The CUSTOMER record carries customer number as its key, plus customer name, address, and current, 30-day, 60-day, 90-day and over-120-day balances. The ITEM record carries item number as its key, plus item description, item cost and item rental. Customer number and item number in ORDER act as attribute pointers to CUSTOMER and ITEM; a logical pointer from CUSTOMER captures that one customer may have many orders, so customer-order information is available only through the order record. An order may contain many items, so there is no attribute pointer from ITEM back to ORDER.)
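A sketch of the same structure as Python dataclasses, with the attribute
pointers expressed as key values held in the order record (field names follow
the diagram; the types are assumptions):

    from dataclasses import dataclass

    @dataclass
    class Customer:
        customer_number: int      # key attribute
        customer_name: str
        address: str
        current_balance: float

    @dataclass
    class Item:
        item_number: int          # key attribute
        item_description: str
        item_cost: float
        item_rental: float

    @dataclass
    class Order:
        order_number: int         # key attribute
        customer_number: int      # attribute pointer -> Customer
        item_number: int          # attribute pointer -> Item
        item_description: str
        item_price: float
        quantity_ordered: int

    # One customer may have many orders: the link is navigated through the
    # order records via their customer_number pointers, as the diagram notes.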
25. Consider a factory making garments. One sequence of actions
starts with the factory receiving an order. Raw materials are bought
and work is distributed among the workers. Finished goods are checked
and packed. When the total number of garments is ready, the
packed cartons are counted, a packing list is made, an invoice is
prepared, and the goods are shipped. Give the ERD and DSD for this
sequence of actions. Give a sample, single record of the resulting
database that will result from this ERD and DSD. (M-03)
The ERD will be:
(ERD diagram)
And the DSD will be:
(DSD diagram)
26. Write short notes on (M-03):
j. Use case tools in the requirements phase
k. Types of documentation
l. Rapid application development
m. Requirements of a good system analyst
Types of Documentation
Documentation is an important part of software engineering that is often overlooked.
Types of documentation include:
Architecture/Design - Overview of software. Includes relations to an
environment and construction principles to be used in design of software
components.
Technical - Documentation of code, algorithms, interfaces, and APIs.
End User - Manuals for the end-user, system administrators and support staff.
Marketing - Product briefs and promotional collateral.
Architecture/Design Documentation
Architecture documentation is a special breed of design document. In a way,
architecture documents are a third derivative of the code (design documents being
the second derivative, and code documents the first). Very little in the architecture
documents is specific to the code itself. These documents do not describe how to
program a particular routine, or even why that particular routine exists in the form
that it does, but instead merely lay out the general requirements that would
motivate the existence of such a routine. A good architecture document is short on
details but thick on explanation. It may suggest approaches for lower-level design,
but it leaves the actual exploration trade studies to other documents.
Technical Documentation
This is what most programmers mean when using the term software documentation.
When creating software, code alone is insufficient. There must be some text along
with it to describe various aspects of its intended operation. This documentation is
usually embedded within the source code itself so it is readily accessible to anyone
who may be traversing it.
User Documentation
Unlike code documents, user documents are usually far divorced from the source
code of the program, and instead simply describe how it is used.
In the case of a software library, the code documents and user documents could be
effectively equivalent and are worth conjoining, but for a general application this is
not often true. On the other hand, the Lisp machine grew out of a tradition in which
every piece of code had an attached documentation string. In combination with
strong search capabilities (based on a Unix-like apropos command), and online
sources, Lispm users could look up documentation and paste the associated function
directly into their own code. This level of ease of use is unheard of in putatively more
modern systems.
Marketing Documentation
For many applications it is necessary to have some promotional materials to
encourage casual observers to spend more time learning about the product. This
form of documentation has three purposes:-
1. To excite the potential user about the product and instill in
them a desire for becoming more involved with it.
2. To inform them about what exactly the product does, so that
their expectations are in line with what they will be receiving.
3. To explain the position of this product with respect to other alternatives.
Requirements/Characteristics of Good Requirements
As described above, a list of system requirements contains a complete description of
the important requirements for a product design. From this list, subsequent design
decisions can be made. Of course, if one is to place one's trust in them, we must
assume that all the requirements in such a list are good. This raises the question:
how does one differentiate between good requirements and those that are not so
good?
It turns out that good requirements have the following essential qualities:
1. A good requirement contains one idea. If a requirement is found to contain more
than one idea, then it should be broken into two or more new requirements.
2. A good requirement is clear; that is, the idea contained within it is not open to
interpretation. If any aspects of a requirement are open to interpretation, then the
designer should consult the relevant parties and clarify the statement.
3. Requirements should remain as general as possible. This ensures that the scope of
the design is not unnecessarily limited.
4. A good requirement is easily verifiable; that is, at the end of the design process it
is possible to check whether the requirement has been met.
These criteria apply to both user and system requirements. Examples are given in
the section below.
In addition to the qualities listed above, a set of good requirements should
completely describe all aspects relevant to a product's design.
Examples
User Requirement
A good user requirement is listed below:
"The seat shall be comfortable for 95% of the population of each country in which the
vehicle is sold."
It meets the four criteria as follows:
1. It contains one idea. If the requirement had also made reference to leg room,
then it would need to have been broken into two requirements.
2. It is clear. The statement gives quantitative limits from which the seat can be
designed. Stating that "The seat shall be comfortable for the majority of its intended
users" would be unacceptable.
3. The requirement is general. It does not state how the seat should be made so as
to be comfortable for 95% of the population. To do so would unnecessarily narrow
the scope and limit the design process.
4. The requirement is verifiable. To test whether the seat is comfortable or
uncomfortable for the right number of people, you need only get people to sit in it
and see if, during normal operation, they experience discomfort.
Use Case Tool
What is a use case?
A use case is a description of how users will perform tasks on your Web site.
A use case includes two main parts:
the steps a user will take to accomplish a particular task on your site
the way the Web site should respond to a user's actions
A use case begins with a user's goal and ends when that goal is fulfilled.
What does a use case describe?
A use case describes a sequence of interactions between a user and a Web site,
without specifying the user interface.
Each use case captures:
The actor (who is using the Web site?)
The interaction (what does the user want to do?)
The goal (what is the user's goal?)
How do you write a use case?
Generally, you write the steps in a use case in an easy-to-understand narrative. This
engages members of the design team and encourages them to be actively involved
in defining the requirements.
Kenworthy (1997) outlines eight steps to developing use cases:
1. Identify who is going to be using the Web site.
2. Pick one of those actors.
3. Define what that actor wants to do on the site. Each thing the actor does on
the site becomes a use case.
4. For each use case, decide on the normal course of events when that actor is
using the site.
5. Describe the basic course in the description for the use case. Describe it in
terms of what the actor does and what the system does in response that the
actor should be aware of.
6. When the basic course is described, consider alternate courses of events and
add those to "extend" the use case.
7. Look for commonalities among the use cases. Extract these and note them as
common course use cases.
8. Repeat steps 2 through 7 for all other actors.
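For illustration, a minimal use case in this style (the example is invented, not from the notes):

Use case: Reserve a book
Actor: Library member
Goal: Reserve a book that is currently on loan
Basic course:
1. The member searches the site for a book title.
2. The site displays the book's details and availability.
3. The member selects "Reserve".
4. The site records the reservation and confirms it on screen.
Alternate course:
4a. If the book is on the shelf, the site offers "Borrow" instead and the use case ends.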
27. Consider the DFD given in the figure below.
If an item is available, the customer buys it and makes a cash payment. Add
these details and redraw the DFD. Give a sample entry in each of the
above 3 databases in the original DFD. In your modification, do you
need a new database? If yes, describe it in terms of a sample entry. If
no, justify that the 3 databases can contain the additional information
and give a sample entry in the modified 3 databases.
(DFD figures, not reproduced: the original DFD shows 'Catalogue item' and
'Availability Details' data flows with the Product item and Inventory item data
stores; the redrawn DFD adds a Customer entity and a cash flow.)
Yes, we need an extra database for the sale of a product to a customer after he/she
has paid. The sale details are recorded in the Customer database so that we can
later check whether the product was returned and whether the sale was made on a
cash or credit basis; the customer records are also needed for future use. This is
clearly seen from the sample entry of the Customer database below.
Product Item

Name    Availability    Quantity Available    Price/unit
Milk    Y               100 liters            15
Soap    Y               2 pieces              20
Pen     N               -                     10
Customer

Name   Address   Phone No   Products Purchased   Quantity Purchased   Total Amount   Cash/Credit   Return
ABC    Xyz       5641       Milk, Soap           1 liter, 1 piece     35             Cash          Y
LMN    Jkn       323        Milk                 2 liters             30             Credit        N
PQR    Jsur      6543       Soap                 1 piece              20             Cash          N
28. Explain RAD model? (m-04)
Rapid Application Development
Rapid Application Development (RAD) is an incremental software development
process model that emphasises a very short development cycle (typically 60-90
days). The RAD model, shown in Fig. 1.5, is a high-speed adaptation of the waterfall
model, where the result of each cycle is a fully functional system.
RAD is used primarily for information systems applications. The RAD approach
encompasses the following phases:
Business modeling
The information flow among business functions is modeled in a way that answers the
following questions:
What information drives the business process?
What information is generated?
Who generates it?
Where does the information go?
Who processes it?
Data modeling
The information flow defined as part of the business modeling phase is refined into a
set of data objects that are needed to support the business. The characteristics
(called attributes) of each object are identified and the relationships between these
objects are defined.
Process modeling
The data objects defined in the data-modeling phase are transformed to achieve the
information flow necessary to implement a business function. Processing descriptions
are created for adding, modifying, deleting, or retrieving a data object.
Application generation
RAD assumes the use of fourth generation techniques and tools such as VB,
VC++, Delphi, etc., rather than creating software using conventional third generation
programming languages. The RAD process works to reuse existing program components
(when possible) or create reusable components (when necessary). In all cases,
automated tools are used to facilitate construction of the software.
Testing and turnover
Since the RAD process emphasizes reuse, many of the program components have
already been tested. This minimizes the testing and development time.
If a business application can be modularized so that each major function can be
completed within the development cycle, then it is a candidate for the RAD model. In
this case, each team can be assigned a module, and the modules are then integrated
to form a whole.
Disadvantages
For large (but scalable) projects, RAD requires sufficient resources to create
the right number of RAD teams.
RAD projects will fail if there is no commitment by the developers or the
clients to the rapid-fire activities necessary to complete a system in a much
abbreviated time frame.
If a system cannot be properly modularized, building components for RAD
will be problematic.
RAD is not appropriate when technical risks are high, e.g. this occurs when a
new application makes heavy use of new technology.
29. What is the difference between system analysis and system design?
How does the focus of information system analysis differ from
information system design?(m-04,m-05)
System Analysis:
System analysis is a problem solving technique that decomposes a system
into its component pieces for the purpose of studying how well those component
parts work and interact to accomplish their purpose
System design:
System design is a complementary problem solving technique (to system
analysis) that reassembles a system's component pieces back into a complete system.
This may involve adding, deleting, and changing pieces relative to the original
system.
Information system analysis:
Information system Analysis primarily focuses on the business problems and
requirements, independent of any technology that can or will be used to implement a
solution to that problem .
Information system design:
Information system design is defined as those tasks that follow system
analysis and focus on the specification of a detailed computer-based solution.
Whereas system analysis emphasizes the business problems, system design focuses
on the technical implementation concerns of the system.
30. What are the elements of the cost benefit analysis?(m-05)
Cost/Benefit analysis:
Cost benefit analysis is a procedure that gives the picture of various
costs, benefits and rules associated with each alternative system.
Cost and benefit categories:
In developing cost estimates for a system, we need to consider several
cost elements; among them are the following:
1. Hardware costs:
Hardware costs relate to the actual purchase or lease of the
computer and peripherals (e.g. printer, disk drive, tape unit, etc.). Determining
the actual cost of hardware is generally more difficult when the system is
shared by many users than for a dedicated stand-alone system.
2. Personnel costs:
Personnel costs include EDP staff salaries and benefits (health
insurance, vacation time, sick pay, etc.) as well as payment of those involved
in developing the system. Costs incurred during the development of a system
are one-time costs and are labeled development costs.
3. Facility cost:
Facility costs are costs incurred in the preparation of the
physical site where the computer application will be in operation. This
includes wiring, flooring, lighting, and air conditioning. These costs are treated
as one-time costs.
4. Operating costs:
Operating costs include all costs associated with the day-to-day
operation of the system. The amount depends on the number of shifts, the
nature of the applications, and the caliber of the operating staff. The amount
charged is based on computer time, staff time, and the volume of output
produced.
5. Supply cost:
Supply costs are variable costs that increase with increased use of
paper, ribbons, disks and the like.
Procedure for cost benefit determination:
The determination of costs and benefits entails the following steps:
1. Identify the costs and benefits pertaining to a given project
2. Categorize the various costs and benefits for analysis
3. Select a method for evaluation
4. Interpret the results of the analysis
5. Take action
Classification of costs and benefits:
1. Tangible and intangible costs and benefits
a. Tangibility refers to the ease with which costs or benefits can
be measured. An outlay of cash for a specific item or activity is
referred to as a tangible cost. The purchase of hardware or
software, personnel training, and employee salaries are
examples of tangible costs.
b. Costs that are known to exist but whose financial value cannot be
accurately measured are referred to as intangible costs. For
example, employee morale problems caused by a new system or
a lowered company image are intangible costs.
c. Benefits can also be classified as tangible and intangible.
Tangible benefits include, for example, completing jobs in fewer hours.
d. Intangible benefits, such as more satisfied customers or an
improved corporate image, are not easily quantified.
2. Direct or indirect costs and benefits :
a. Direct costs are those with which a money figure can be directly
associated in a project.
b. Indirect costs are the results of operations that are not directly associated
with a given system or activity.
3. Fixed or variable cost and benefits
a. Fixed costs are sunk costs. They are constant and do not change.
b. Variable costs are incurred on a regular basis. They are usually
proportional to work volume and continue as long as the system is in
operation.
Examples of tangible benefits
1 Fewer processing errors
2 Increased throughput
3 Elimination of job steps
Examples of intangible benefits
1 Improved customer goodwill
2 Improved employee morale
3 Better service to community
Evaluation methods:
1. Net benefit analysis:
Net benefit analysis simply involves subtracting total costs from
total benefits. It is easy to calculate, easy to interpret, and easy to
present. Its main drawback is that it does not account for the time
value of money and does not discount future cash flows.
The time value of money is usually expressed in the form of
interest on the funds invested to realize the future value. Assuming compound
interest, the formula is:
F = P(1 + i)^n
where F = future value of an investment
P = present value of the investment
i = interest rate per compounding year
n = number of years
2. Present value analysis:
In developing long-term projects, it is often difficult to compare
today's costs with the full value of tomorrow's benefits. Present value
analysis controls for this problem by calculating the costs and
benefits of the system in terms of today's value of the investment, and
then comparing across alternatives:
Present value = Future value / (1 + i)^n
Net present value = Discounted benefits - Discounted costs
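As a small worked sketch (Python used purely for illustration; the figures are invented), Rs. 13,310 receivable three years from now is worth Rs. 10,000 today at 10% interest:

    def present_value(future_value, i, n):
        """Discount a future amount at interest rate i for n years."""
        return future_value / (1 + i) ** n

    def net_present_value(benefits, costs, i):
        """Discounted benefits minus discounted costs; each list holds
        one cash amount per year, year 1 first."""
        discount = lambda flows: sum(f / (1 + i) ** (t + 1)
                                     for t, f in enumerate(flows))
        return discount(benefits) - discount(costs)

    print(present_value(13310, 0.10, 3))    # -> 10000.0 (approximately)
    print(net_present_value([6000, 6000], [5000, 1000], 0.10))  # -> about 5041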
3. Payback analysis:
The payback method is a common measure of the relative time value
of a project. It is easy to calculate and allows two or more activities to
be ranked. In its simplest form:
Payback period = Overall cash outlay / Annual cash return
The computed period, plus the installation time, gives the years needed to
recover the investment. The detailed form of the formula is built from the
following elements:
A = capital investment
B = investment credit
C = cost of investment
D = company's income tax
E = state and local taxes
F = life of capital
G = time to install the system
H = benefits and savings
4. Break-even analysis:
The break-even point is the point where the cost of the candidate system
and that of the current one are equal. Break-even analysis compares the costs
of the current and candidate systems. When a candidate system is developed,
initial costs usually exceed those of the current system; this is the investment
period. When the two costs become equal, break-even is reached.
5. Cash flow analysis:
Cash flow analysis keeps track of accumulated costs and revenues on
a regular basis.
6. Return on investment analysis:
The ROI technique compares the lifetime profitability of
alternative solutions and projects:
ROI = (Estimated lifetime benefits - Estimated lifetime costs) / Estimated lifetime costs
31. Summarize the procedure for developing a DFD? (m-05)
Using your own example, illustrate? (m-06)
Developing a DFD
Step 1. Make a list of business activities and use it to determine:
a. External entities, i.e. sources and sinks
b. Data flows
c. Processes
d. Data stores
Step 2. Draw a context level diagram:
The context level diagram is a top level diagram and contains only
one process, representing the entire system. It determines the
boundaries of the system. Anything that is not inside the
diagram will not be part of the system study.
Step 3. Develop a process chart:
It is also called a hierarchy chart or decomposition diagram. It
shows the top-down functional decomposition of the system.
Step 4. Develop the first level DFD:
It is also known as diagram 0 or the level 0 diagram. It is the explosion
of the context level diagram. It includes data stores and
external entities. Here the processes are numbered.
Step 5. Draw more detailed levels:
Each process in diagram 0 may in turn be exploded to create a
more detailed DFD. New data flows and data stores are added; there is
further decomposition/leveling of processes. A worked example follows.
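As an illustration (an invented example in the spirit of the question), consider a simple library system:
Step 1: External entities: Member, Librarian. Processes: Issue book, Return book. Data stores: Book master, Issue register. Data flows: issue request, book details, return slip.
Step 2 (context diagram): a single process "Library System" exchanging "issue request / book" with Member and "reports" with Librarian.
Step 4 (level 0): process 1.0 "Issue book" reads the Book master and writes to the Issue register; process 2.0 "Return book" updates the Issue register and sends a fine receipt to the Member.
Step 5: process 1.0 may be exploded into 1.1 "Check membership" and 1.2 "Record issue".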
32. What is the reason for selecting the prototype development
method? What is the desired impact on the application development
process? (m-05)
1. Prototyping is a technique for quickly building a functioning but incomplete model
of the information system.
2. A prototype is a small, representative, or working model of users' requirements or
a proposed design for an information system.
3. The development of a prototype is based on the fact that the requirements are
seldom fully known at the beginning of a project. The idea is to build a first,
simplified version of the system and seek feedback from the people involved, in
order to then design a better subsequent version. This process is repeated until the
system meets the client's conditions of acceptance.
4. Any given prototype may omit certain functions or features until such time as
the prototype has sufficiently evolved into an acceptable implementation of the
requirements.
Reasons for prototyping:
1. Information requirements are not always well defined. Users may know only
that certain business areas need improvement or that the existing procedures
must be changed. Or, they may know that they need better information for
managing certain activities but are not sure what that information is.
2. The users' requirements may be too vague to even begin formulating a design.
3. Developers may have neither information nor experience of some unique
situations or some high-cost or high-risk situations, in which the proposed
design is new and untested.
4. Developers may be unsure of the efficiency of an algorithm, the adaptability
of an operating system, or the form that human-machine interaction should
take.
5. In these and many other situations, a prototyping approach may offer the
best approach.
Advantages:
1. Shorter development time.
2. More accurate user requirements.
3. Greater user participation and support.
4. Relatively inexpensive to build as compared with the cost of a conventional
system.
This method is most useful for unique applications where developers have little
information or experience or where risk of error may be high. It is useful to test
the feasibility of the system or to identify user requirements.
33. What is a feasibility study? What are different types of feasibility
study? (m-05)
Feasibility study
A feasibility study is a preliminary study undertaken before the real work of a
project starts to ascertain the likelihood of the project's success. It is an analysis of
possible alternative solutions to a problem and a recommendation on the best
alternative. It can, for example, decide whether order processing can be carried out
by a new system more efficiently than by the previous one.
A feasibility study could be used to test the case for a new working system, which
may be needed because:
The current system may no longer suit its purpose,
Technological advancement may have rendered the current system
redundant,
The business is expanding and must cope with an extra workload,
Customers are complaining about the speed and quality of work the business
provides,
Competitors are now winning a big enough market share due to an effective
integration of a computerized system.
Types of Feasibility
Within a feasibility study, several areas must be reviewed, including Economic,
Technical, Schedule, Organizational, Cultural, Legal, and Marketing feasibility.
Economic feasibility study
This involves questions such as whether the firm can afford to build the system,
whether its benefits will substantially exceed its costs, and whether the project
has higher priority and profits than other projects that might use the same
resources. It also includes whether the project is in a condition to fulfill all the
eligibility criteria and the responsibilities of both sides, in case there are two parties
involved in performing the project.
Technical feasibility study
This involves questions such as whether the technology needed for the system
exists, how difficult it will be to build, and whether the firm has enough experience
using that technology. The assessment is based on an outline design of system
requirements in terms of input, output, fields, programs, and procedures. This can
be quantified in terms of volumes of data, trends, frequency of updating, etc., in
order to give an introduction to the technical system.
Schedule Feasibility study
This involves questions such as how much time is available to build the new system,
when it can be built (i.e. during holidays), whether it interferes with normal business
operation, etc.
Organizational Feasibility study
This involves questions such as whether the system has enough support to be
implemented successfully, whether it brings an excessive amount of change, and
whether the organization is changing too rapidly to absorb it.
Cultural Feasibility study
In this stage, the project's alternatives are evaluated for their impact on the local
and general culture. For example, environmental factors need to be considered.
Legal Feasibility study
Not necessarily last, but all projects must face legal scrutiny. When an organization
has legal counsel on staff or on retainer, such reviews are typically standard.
However, any project may face legal issues after completion too.
Marketing Feasibility study
This will include an analysis of single and multi-dimensional market forces that could
affect the commercial viability of the project.
34. Explain briefly by example:
a. Decision table
b. Decision tree
c. Structured English
d. Data dictionary
Decision Trees, Decision Tables, and Structured English are tools used to represent
process logic.
Decision table
Decision tables are a precise yet compact way to model complicated logic. Decision
tables, like if-then-else and switch-case statements, associate conditions with actions
to perform. But, unlike the control structures found in traditional programming
languages, decision tables can associate many independent conditions with several
actions in an elegant way.
Structure
Decision tables are typically divided into four quadrants, as shown below.
The four quadrants:

Conditions | Condition alternatives
Actions    | Action entries
Each decision corresponds to a variable, relation or predicate whose possible values
are listed among the condition alternatives. Each action is a procedure or operation
to perform, and the entries specify whether (or in what order) the action is to be
performed for the set of condition alternatives the entry corresponds to. Many
decision tables include in their condition alternatives the don't care symbol, a
hyphen. Using don't cares can simplify decision tables, especially when a given
condition has little influence on the actions to be performed. In some cases, entire
conditions thought to be important initially are found to be irrelevant when none of
the conditions influence which actions are performed.
Aside from the basic four quadrant structure, decision tables vary widely in the way
the condition alternatives and action entries are represented. Some decision tables
use simple true/false values to represent the alternatives to a condition (akin to if-
then-else), other tables may use numbered alternatives (akin to switch-case), and
some tables even use fuzzy logic or probabilistic representations for condition
alternatives. In a similar way, action entries can simply represent whether an action
is to be performed (check the actions to perform), or in more advanced decision
tables, the sequencing of actions to perform (number the actions to perform).
Example
The limited-entry decision table is the simplest to describe. The condition
alternatives are simple Boolean values, and the action entries are check-marks,
representing which of the actions in a given column are to be performed.
A technical support company writes a decision table to diagnose printer problems
based upon symptoms described to them over the phone from their clients.
Printer troubleshooter

                                        Rules
                                        1  2  3  4  5  6  7  8
Conditions
  Printer does not print                Y  Y  Y  Y  N  N  N  N
  A red light is flashing               Y  Y  N  N  Y  Y  N  N
  Printer is unrecognized               Y  N  Y  N  Y  N  Y  N
Actions
  Check the power cable                       X
  Check the printer-computer cable      X     X
  Ensure printer software is installed  X     X     X     X
  Check/replace ink                     X  X        X  X
  Check for paper jam                      X     X
Of course, this is just a simple example (and it does not necessarily correspond to
the reality of printer troubleshooting), but even so, it is possible to see how decision
tables can scale to several conditions with many possibilities.
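Because every rule is explicit, a decision table translates almost mechanically into code. The sketch below is illustrative only (Python, written for these notes; the action placements follow the table as reconstructed above); it maps each tuple of condition values to the actions of that rule column:

    # Conditions, in table order:
    # (printer_does_not_print, red_light_flashing, printer_unrecognized)
    RULES = {
        (True,  True,  True ): ["Check the printer-computer cable",
                                "Ensure printer software is installed",
                                "Check/replace ink"],
        (True,  True,  False): ["Check/replace ink", "Check for paper jam"],
        (True,  False, True ): ["Check the power cable",
                                "Check the printer-computer cable",
                                "Ensure printer software is installed"],
        (True,  False, False): ["Check for paper jam"],
        (False, True,  True ): ["Ensure printer software is installed",
                                "Check/replace ink"],
        (False, True,  False): ["Check/replace ink"],
        (False, False, True ): ["Ensure printer software is installed"],
        (False, False, False): [],  # nothing to check
    }

    def diagnose(does_not_print, red_light, unrecognized):
        """Look up the actions for one rule column of the decision table."""
        return RULES[(does_not_print, red_light, unrecognized)]

    print(diagnose(True, False, True))

Because the mapping enumerates every combination of conditions, a missing rule is immediately visible, which is exactly the auditing benefit described below.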
Software engineering benefits
Decision tables make it easy to observe that all possible conditions are accounted
for. In the example above, every possible combination of the three conditions is
given. In decision tables, when conditions are omitted, it is obvious even at a glance
that logic is missing. Compare this to traditional control structures, where it is not
easy to notice gaps in program logic with a mere glance; sometimes it is difficult
to follow which conditions correspond to which actions!
Just as decision tables make it easy to audit control logic, decision tables demand
that a programmer think of all possible conditions. With traditional control structures,
it is easy to forget about corner cases, especially when the else statement is
optional. Since logic is so important to programming, decision tables are an excellent
tool for designing control logic. In one incredible anecdote, after a failed 6 man-year
attempt to describe program logic for a file maintenance system using flow charts,
four people solved the problem using decision tables in just four weeks. Choosing the
right tool for the problem is fundamental.
Decision tree
In operations research, specifically in decision analysis, a decision tree is a decision
support tool that uses a graph or model of decisions and their possible
consequences, including chance event outcomes, resource costs, and utility. A
decision tree is used to identify the strategy most likely to reach a goal. Another use
of trees is as a descriptive means for calculating conditional probabilities.
In data mining and machine learning, a decision tree is a predictive model; that is,
a mapping from observations about an item to conclusions about its target value.
More descriptive names for such tree models are classification tree or reduction
tree. In these tree structures, leaves represent classifications and branches
represent conjunctions of features that lead to those classifications [1]. The machine
learning technique for inducing a decision tree from data is called decision tree
learning, or (colloquially) decision trees.
Four major steps in building Decision Trees:
1. Identify the conditions
2. Identify the outcomes (condition alternatives) for each decision
3. Identify the actions
4. Identify the rules.
Decision Tree Example
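(The original notes include a figure here that is not reproduced; the following invented tree illustrates the four steps above.)

Order amount > 5,000?
  |-- Yes --> Regular customer?
  |             |-- Yes --> Apply 10% discount
  |             +-- No  --> Apply 5% discount
  +-- No  --> Apply no discount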
Structured English
The two building blocks of Structured English are (1) structured logic or instructions
organized into nested or grouped procedures, and (2) simple English statements
such as add, multiply, move, etc. (strong, active, specific verbs)
Five conventions to follow when using Structured English:
1. Express all logic in terms of sequential structures, decision structures, or
iterations.
2. Use and capitalize accepted keywords such as: IF, THEN, ELSE, DO, DO
WHILE, DO UNTIL, PERFORM
3. Indent blocks of statements to show their hierarchy (nesting) clearly.
4. When words or phrases have been defined in the Data Dictionary, underline
those words or phrases to indicate that they have a specialized, reserved
meaning.
5. Be careful when using "and" and "or" as well as "greater than" and "greater
than or equal to" and other logical comparisons.
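For illustration, a short fragment that follows these conventions (the discount policy is invented, matching the decision tree example above):

DO WHILE there are more order records
    READ the next order record
    IF order amount is greater than 5,000
        THEN IF customer is a regular customer
                THEN apply a 10 percent discount
             ELSE apply a 5 percent discount
        ELSE apply no discount
    COMPUTE net amount
END DO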
Data dictionary
A data dictionary is a set of metadata that contains definitions and representations
of data elements. Within the context of a DBMS, a data dictionary is a read-only set
of tables and views. Amongst other things, a data dictionary holds the following
information:
Precise definition of data elements
Usernames, roles and privileges
Schema objects
Integrity constraints
Stored procedures and triggers
General database structure
Space allocations
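Within a real DBMS this dictionary is queryable. As a small, hedged illustration (the books table is invented), SQLite exposes per-table metadata through PRAGMA statements that can be read like any other query:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE books (book_id INTEGER PRIMARY KEY,"
                 " bk_name TEXT, bk_price REAL)")

    # PRAGMA table_info is SQLite's built-in catalogue view for a table;
    # each row describes one column: (cid, name, type, notnull, default, pk).
    for row in conn.execute("PRAGMA table_info(books)"):
        print(row)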
One benefit of a well-prepared data dictionary is a consistency between data items
across different tables. For example, several tables may hold telephone numbers;
using a data dictionary the format of this telephone number field will be consistent.
When an organization builds an enterprise-wide data dictionary, it may include both
semantics and representational definitions for data elements. The semantic
components focus on creating precise meaning of data elements. Representation
definitions include how data elements are stored in a computer structure such as an
integer, string or date format (see data type). Data dictionaries are one step along a
pathway of creating precise semantic definitions for an organization.
Initially, data dictionaries are sometimes simply a collection of database columns and
definitions of the meanings and types of data those columns contain. Data dictionaries
are more precise than glossaries (terms and definitions) because they frequently
have one or more representations of how data is structured. Data dictionaries are
usually separate from data models since data models usually include complex
relationships between data elements.
Data dictionaries can evolve into full ontologies (in the computer science sense) when
discrete logic has been added to data element definitions.
Example of a data dictionary
Library Management System (data dictionary of book details)

Attribute    Data type   Size   Integrity     Description
Book_Id      Number      4      Primary key   Book identity
Bk_name      Text        30                   Name of book
Stud_Id      Number      4      Foreign key   Student identity
Bk_author    Text        20                   Name of author
Bk_price     Number      3                    Price of book
Bk_edition   Text        6                    Number of edition
35. What is normalization? What is the purpose of normalization of a
database? (m-06)
Normalization
Broadly, normalization (also spelled normalisation) is any process that makes
something more normal, which typically means conforming to some regularity or
rule, or returning from some state of abnormality.
It has specific meanings in various fields:
Audio normalization
Database normalization, used in database theory.
Knowledge normalization
Normalization of a wave function in quantum mechanics
Normalization (people with disabilities)
Normalizing constant, used in mathematics, perhaps most often in probability theory
Normalization (Czechoslovakia), the restoration of the conditions prevalent before
the reform in Czechoslovakia, 1969
Normalization (economics), which pertains when only relative prices matter
Normalization (image processing)
Normalization (metallurgy)
Normalization (sociology), used in sociology.
Normalization (statistics)
Normalization model (visual neuroscience)
Normalization of a function (in general calculus) is the process of removing a
discontinuity (or singularity).
Normalization property, used in Raymond's term rewriting systems
Range normalization
Text normalization
Normalization of relations, a concept in diplomacy
Normalization of speech sounds in speech perception
Database normalization
Database normalization is a data design and organization process applied to data
structures based on rules that help build relational databases. In common terms it
defines the buckets, fields, columns or blank spots on a form and how that is to be
kept in the file, table or form. In sequential files a field is a component of a file and
in Data Base terms a column belongs to a table and in data design terms an element
belongs to an entity. Normalization defines the entity and the elements in it
according to a series of rules. These rules are called normal forms and they are
numbered. The first rule of normalization is also called first normal form. The term
normalization has come to imply the data is in third normal form or more.
Normalization helps prevent data anomalies, supports a single consistent version of
the truth, and reduces input and output delays as well as memory usage.
This generally speeds up response time. It is an industry best-practice method of file,
table, or entity design.
Purpose
The primary purpose of database normalization is to improve data quality through
the elimination of redundancy. This involves identification and isolation of repeating
data, so that the repeated information may be reduced down to a single record, then
conveniently retrieved wherever it is needed, reducing the potential for anomalies
during data operations. Maintenance of normalized data is simpler because the user
need only modify the repeated information in one place, with confidence that the
new information will be immediately available wherever it is needed. This is because
all data duplication is system maintained by key field inheritance.
Uses
Database normalization is a useful tool for requirements analysis and data modeling
processes in software development. The process of database normalization provides
many opportunities to improve understanding of the information which the data
represents, leading to the development of a logical data model which may be used
for design of tables in a relational database, classes in an object database, or
elements in an XML schema, to offer just a few examples.
Description
A non-normalized database can suffer from data anomalies:
A non-normalized database may store data representing a particular referent
in multiple locations. An update to such data in some but not all of those
locations results in an update anomaly, yielding inconsistent data. A
normalized database prevents such an anomaly by storing such data (i.e.
data other than primary keys) in only one location.
A non-normalized database may have inappropriate dependencies, i.e.
relationships between data with no functional dependencies. Adding data to
such a database may require first adding the unrelated dependency. A
normalized database prevents such insertion anomalies by ensuring that
database relations mirror functional dependencies.
Similarly, such dependencies in non-normalized databases can hinder
deletion. That is, deleting data from such databases may require deleting data
from the inappropriate dependency. A normalized database prevents such
deletion anomalies by ensuring that all records are uniquely identifiable and
contain no extraneous information.
Normalized databases have a design that reflects the true dependencies between
tracked quantities, allowing quick updates to data with little risk of introducing
inconsistencies. Instead of attempting to lump all information into one table, data is
spread out logically into many tables. Normalizing the data is decomposing a single
relation into a set of smaller relations which satisfy the constraints of the original
relation. Redundancy can be solved by decomposing the tables. However certain new
problems are caused by decomposition.
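For example (an invented schema), an un-normalized relation
  Order(order_no, customer_name, customer_address, item, qty)
repeats the customer's name and address on every order row. Decomposing it into
  Customer(cust_id, customer_name, customer_address)
  Order(order_no, cust_id, item, qty)
records each customer's details exactly once; a change of address then becomes an update in a single place, and the update anomaly described above cannot arise.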
One can only describe a database as having a normal form if the relationships
between quantities have been rigorously defined. It is possible to use set theory to
express this knowledge once a problem domain has been fully understood, but most
database designers model the relationships in terms of an "idealized schema". (The
mathematical support came back into play in proofs regarding the process of
transforming from one form to another.)
36. What are the major threats to system security? Which one is the
most serious and important, and why?
System Security Threats
ComAir's system crash on December 24, 2004, was just one example showing that
the availability of data and system operations is essential to ensure business
continuity. Due to resource constraints, organizations cannot implement unlimited
controls to protect their systems. Instead, they should understand the major threats,
and implement effective controls accordingly. An effective internal control structure
cannot be implemented overnight, and internal control over financial reporting must
be a continuing process.
The term system security threats refers to the acts or incidents that can and will
affect the integrity of business systems, which in turn will affect the reliability and
privacy of business data. Most organizations are dependent on computer systems to
function, and thus must deal with systems security threats. Small firms, however,
are often understaffed for basic information technology (IT) functions as well as
system security skills. Nonetheless, to protect a company's systems and ensure
business continuity, all organizations must designate an individual or a group with
the responsibilities for system security. Outsourcing system security functions may
be a less expensive alternative for small organizations.
Top System Security Threats
The 2005 CSI/FBI Computer Crime and Security Survey of 700 computer security
practitioners revealed that the frequency of system security breaches has been
steadily decreasing since 1999 in almost all threats except the abuse of wireless
networks.
Viruses
A computer virus is a software code that can multiply and propagate itself.
A virus can spread into another computer via e-mail, downloading files from
the Internet, or opening a contaminated file. It is almost impossible to
completely protect a network computer from virus attacks; the CSI/FBI
survey indicated that virus attacks were the most widespread attack for six straight
years since 2000.
Insider Abuse of Internet Access
Annual U.S. productivity growth was 2.5% during the second half of the 1990s, as
compared to 1.5% from 1973 to 1995, a jump that has been attributed to the use of
IT (Stephen D. Oliner and Daniel E. Sichel, "Information Technology and
Productivity: Where Are We Now and Where Are We Going?", Federal Reserve Bank of
Atlanta Economic Review, Third Quarter 2002). Unfortunately, IT tools can be
abused. For example, e-mail and Internet connections are available in
almost all offices to improve productivity, but employees may use them for
personal reasons, such as online shopping, playing games, and sending
instant messages to friends during work hours.
Laptop or Mobile Theft
Because they are relatively expensive, laptops and PDAs have become the targets of
thieves. Although the percentage has declined steadily since 1999, about half of
network executives indicated that their corporate laptops or PDAs were stolen in
2005 (Network World Technology Executive Newsletter, 02/21/05). Besides being
expensive, they often contain proprietary corporate data and access codes.
Denial of Service
A denial of service (DoS) attack is specifically designed to interrupt normal system
functions and affect legitimate users' access to the system. Hostile users send a flood
of fake requests to a server, overwhelming it and making a connection between the
server and legitimate clients difficult or impossible to establish. A distributed denial
of service (DDoS) attack allows the hacker to launch a massive, coordinated attack from
thousands of hijacked (zombie) computers remotely controlled by the hacker. A
massive DDoS attack can paralyze a network system and bring down giant websites.
For example, the 2000 DDoS attacks brought down websites such as Yahoo! and
eBay for hours. Unfortunately, any computer system can be a hacker's target as long
as it is connected to the Internet.
Unauthorized Access to Information
To control unauthorized access to information, access controls, including passwords
and a controlled environment, are necessary. Computers installed in a public area,
such as a conference room or reception area, can create serious threats and should
be avoided if possible. Any computer in a public area must be equipped with a
physical protection device to control access when there is no business need. The
LAN should be in a controlled environment accessed by authorized
employees only. Employees should be allowed to access only the data necessary
for them to perform their jobs.
Abuse of Wireless Networks
Wireless networks offer the advantage of convenience and flexibility, but
system security can be a big issue. Attackers do not need to have physical
access to the network. Attackers can take their time cracking the passwords
and reading the network data without leaving a trace. One option to prevent
an attack is to use one of several encryption standards that can be built into wireless
network devices. One example, wired equivalent privacy (WEP) encryption, can be
effective at stopping amateur snoopers, but it is not sophisticated enough to foil
determined hackers. Consequently, any sensitive information transmitted over
wireless networks should be encrypted at the data level as if it were being sent over
a public network.
System Penetration
Hackers penetrate systems illegally to steal information, modify data, or harm the
system.
Telecom Fraud
In the past, telecom fraud involved fraudulent use of telecommunication (telephone)
facilities. Intruders often hacked into a company's private branch exchange (PBX) and
its administration or maintenance port for personal gain, including free long-distance
calls, stealing (changing) information in voicemail boxes, diverting calls illegally,
wiretapping, and eavesdropping.
Theft of Proprietary Information
Information is a commodity in the e-commerce era, and there are always buyers for
sensitive information, including customer data, credit card information, and trade
secrets. Data theft by an insider is common when access controls are not
implemented. Outside hackers can also use Trojan viruses to steal information
from unprotected systems. Beyond installing firewall and anti-virus software to
secure systems, a company should encrypt all of its important data.
Financial Fraud
The nature of financial fraud has changed over the years with information
technology. System-based financial fraud includes trick e-mails, identity theft, and
fraudulent transactions. With spam, con artists can send scam e-mails to thousands
of people in hours. Victims of the so-called 419 scam are often promised a lottery
winning or a large sum of unclaimed money sitting in an offshore bank account, but
they must pay a fee first to get their shares. Anyone who gets this kind of e-mail is
recommended to forward a copy to the U.S. Secret Service.
Misuse of Public Web Applications
The nature of e-commerce-convenience and flexibility-makes Web applications
vulnerable and easily abused. Hackers can circumvent traditional network firewalls
and intrusion-prevention systems and attack web applications directly. They can
inject commands into databases via the web application user interfaces and
surreptitiously steal data, such as customer and credit card information.
Website Defacement
Website defacement is the sabotage of webpages by hackers inserting or altering
information. The altered webpages may mislead unknowing users and represent
negative publicity that could affect a company's image and credibility. Web
defacement is in essence a system attack, and the attackers often take advantage of
undisclosed system vulnerabilities or unpatched systems.
Most Serious and Important: Viruses
As described above, virus attacks were the most widespread attack for six straight
years since 2000, and a networked computer can hardly be completely protected
against them.
Viruses are just one of several programmed threats or malicious codes (malware) in
today's interconnected system environment. Programmed threats are computer
programs that can create a nuisance, alter or damage data, steal information, or
cripple system functions. Programmed threats include computer viruses, Trojan
horses, logic bombs, worms, spam, spyware, and adware.
According to a recent study by the University of Maryland, more than 75% of
participants received e-mail spam every day. There are two problems with spam:
Employees waste time reading and deleting spam, and it increases the system
overhead to deliver and store junk data. The average daily spam is 18.5 messages,
and the average time spent deleting them all is 2.8 minutes.
Spyware is a computer program that secretly gathers users' personal information
and relays it to third parties, such as advertisers. Common functionalities of spyware
include monitoring keystrokes, scanning files, snooping on other applications such as
chat programs or word processors, installing other spyware programs, reading
cookies, changing the default homepage on the Web browser, and consistently
relaying information to the spyware home base. Users often unknowingly install
spyware as the result of visiting a website, clicking on a disguised pop-up window, or
downloading a file from the Internet.
Adware is a program that can display advertisements such as pop-up windows or
advertising banners on webpages. A growing number of software developers offer free
trials for their software until users pay to register. Free-trial users view sponsored
advertisements while the software is being used. Some adware does more than just
present advertisements, however; it can report users' habits, preferences, or even
personal information to advertisers or other third parties, similar to spyware.
To protect computer systems against viruses and other programmed threats,
companies must have effective access controls and install and regularly update
quarantine software. With effective protection against unauthorized access and by
encouraging staff to become defensive computer users, virus threats can be reduced.
Some viruses can infect a computer through operating system vulnerabilities. It is
critical to install system security patches as soon as they are available. Furthermore,
effective security policies can be implemented with server operating systems such as
Microsoft Windows XP and Windows Server 2003. Other kinds of software (e.g., Deep
Freeze) can protect and preserve original computer configurations. Each system
restart eradicates all changes, including virus infections, and resets the computer to
its original state. The software eliminates the need for IT professionals to perform
time-consuming and counterproductive rebuilding, re-imaging, or troubleshooting
when a computer becomes infected.
Fighting against programmed threats is an ongoing and ever-changing battle.
Many organizations, especially small ones, are understaffed and underfunded
for system security. Organizations can use one of a number of effective security
suites (e.g., Norton Internet Security 2005, ZoneAlarm Security Suite 5.5,
McAfee VirusScan) that offer firewall, anti-virus, anti-spam, anti-spyware, and
parental controls (for home offices) at the desktop level. Firewalls and routers
should also be installed at the network level to eliminate threats before they
reach the desktop. Anti-adware and anti-spyware software are signature-based,
and companies are advised to install more than one to ensure effective
protection. Installing anti-spam software on the server is important because
increasing spam results in productivity loss and a waste of computing
resources. Important considerations for selecting anti-spam software include a
system's effectiveness, impact on mail delivery, ease of use, maintenance, and
cost. Many Internet service providers conveniently reduce spam on their
servers before it reaches subscribers. Additionally, companies must maintain in-
house and off-site backup copies of corporate data and software so that data
and software can be quickly restored in the case of a system failure.
37. Define data structure? What are the major types of data structure?
Illustrate?
Data are structured according to the data model. An entity is a conceptual
representation of an object. Relationships between entities make up a data structure.
A data model represents a data structure that is described to the DBMS in a DDL
(data definition language).
Types of Data Structure
Data structuring determines whether the system can create 1:1, 1:m, or m:m
relationships among entities. Although all DBMSs have a common approach to data
management, they differ in the way they structure data:
i. Hierarchical
ii. Network and
iii. Relational
Hierarchical Structuring
i. Hierarchical (also called tree) structuring specifies that an entity can have no
more than one owning entity; that is, we can establish a 1:1 or a 1:m
relationship.
ii. The owning entity is called the parent; the owned entity, the child. A parent
can have many children (1:m), whereas a child can have only one parent.
iii. A parent with no owner is called the root. There is only one root in a
hierarchical model.
iv. Elements at the ends of the branches with no children are called leaves.
v. Trees are normally drawn upside down, with the root at the top and the
leaves at the bottom.
vi. The hierarchical model is easy to design and understand. Some applications,
however, do not conform to such a scheme. The problem is sometimes
resolved by using a network structure.
Network-Structuring
i. A network structure allows 1:1, 1:m, or m:m relationships among entities.
For example, an auto parts shop may have dealings with more than one
automaker (parent).
ii. A network structure reflects the real world, although the program
structure can become complex. The solution is to separate the
network into several hierarchies with duplicates; this simplifies the
relationships to no more complex than 1:m. A hierarchy then becomes a
subview of the network structure.
Relational Structuring
i. In relational structuring, all data and relationships are represented in flat,
two-dimensional tables called relations.
ii. A relation is equivalent to a file, where each row represents a record.
iii. All entries in each column are of the same kind; furthermore, each column
has a unique name.
iv. Finally, no two rows in the table are identical; a row is referred to as a tuple.
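As a loose sketch (Python chosen only for illustration; the part names are invented), the same 1:m data can be held as a tree or flattened into a relation:

    # Hierarchical (tree): every child has exactly one parent; the entity
    # with no owner ("Automaker") is the root, and childless nodes
    # ("Piston", "Valve", "Chassis") are the leaves.
    tree = {
        "Automaker": {
            "Engine": {"Piston": {}, "Valve": {}},
            "Chassis": {},
        }
    }

    # Relational: all data and relationships in a flat two-dimensional
    # table; each row (tuple) is unique and each column has a unique name.
    components = [
        # (child_part, parent_part)
        ("Engine",  "Automaker"),
        ("Piston",  "Engine"),
        ("Valve",   "Engine"),
        ("Chassis", "Automaker"),
    ]

    # An m:m (network) relationship, e.g. a parts shop dealing with many
    # automakers, needs a separate link table in the relational model.
    supplies = [("PartsShop", "Automaker1"), ("PartsShop", "Automaker2")]

    print(tree["Automaker"]["Engine"])                   # navigate the hierarchy
    print([c for c, p in components if p == "Engine"])   # query the relation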
38. What cost elements are considered in the cost/benefit analysis?
Which do you think is most difficult to estimate? Why?
The cost-benefit analysis is a part of the economic feasibility study of a system. The
basic elements of cost-benefit analysis are as follows.
I. Compute the total costs involved.
II. Compute the total benefits from the project.
III. Compare the two to decide whether the project provides a net gain
or not.
The costs are classified under two heads as follows.
i. Initial Costs- They are mainly the development costs.
ii. Recurring Costs- They are the costs incurred in running the
application system or operating the system (usually per
month or per year)
The initial costs include the following major heads of expenses:-
Salary of development staff
Consultant fees
Costs of software Development tools-Licensing Fees.
Infrastructure development costs
Training and Training material costs
Travel costs
Development hardware and networking costs
Salary of support staff
The development costs are applied for the entire duration of the software
development; therefore, they are incurred over the initial, but fairly long, development
period. To keep these costs low, reducing the development time is the key.
The Recurring costs include the following major costs:-
1 Salary of operation users
2 License fees for software tools used for running the software systems, if any.
3 Hardware/Networking Maintenance charges
4 Costs of hard discs, magnetic tapes, being used as media for data storage.
5 The furniture and fixture leasing charges
6 Electricity and other charges
7 Rents and Taxes
8 Infrastructure Maintenance charges
9 Spare Parts and Tools
The benefits are classified under two heads, as follows:
i. Tangible benefits - benefits that can be measured in terms of rupee
value.
ii. Intangible benefits - benefits that cannot be measured in terms of
rupee value.
The common heads of tangible benefits vary from organisation to organisation, but
some of them are as follows:-
i. Benefits due to reduced business cycle times i.e production cycle, marketing
cycle, etc.
ii. Benefits due to increased efficiency
iii. Savings in the salary of operational users
iv. Savings in the space rent and taxes
v. Savings in the cost of stationary, telephone and other communication costs,
etc.
vi. Savings in the costs due to the use of storage media
vii. Benefits due to overall increase in profits before tax.
The common heads of intangible benefits vary from organisation to organisation, but
some of them are as follows:-
i. Benefits due to increased working satisfaction of employees.
ii. Benefits due to increased quality of products and/or services provided to the
end-customer.
iii. Benefits in terms of business growth due to better customer services
iv. Benefits due to increased brand image
v. Benefits due to captured errors, which could not be captured in the current
system.
vi. Savings on costs of extra activities that were carried out and now not
required to be carried out in the new system.
The net gain or loss is worked out after calculating the difference in the costs
and benefits.
The advantages of cost/benefit analysis are many. Some of them are
listed below:
i. Since it is a computed value, go/no-go decisions based
on it are likely to be more effective than just going ahead without it.
ii. It compels the user management and the proposal developer to think through
the costs and benefits ahead of time, proactively, and to assign some
numbers to these heads of accounts.
iii. Since it assigns amount values to each of these heads, they can
be verified against, after the go-ahead decision is implemented, to
test the correctness of the assumptions made during the planning
stage. This acts as a very effective control tool.
iv. If used properly, the cost/benefit analysis can be a very good
opportunity for organisational learning, which goes into building up the
organisation's maturity in that key performance area (KPA).
Some common drawbacks of the cost/benefit analysis are as follows:
i. The bases for computing costs and benefits should be
balanced; e.g. salary expenses incurred and salary expenses saved
should have some bearing on reality. Unfortunately, these bases
are subjective and therefore may be difficult to balance.
ii. The personal bias of the team members may reflect upon the
computations. If a person/team is in favour of developing the new
system, they may overlook some important cost elements or
underestimate the costs, and vice versa.
iii. Some of the intangible benefits may be very hard to compute
accurately.
iv. Many times, in practice, the top management's decision does not
depend upon the outcome of the cost/benefit analysis. If the
project manager/system analyst knows about the decision, they
may not take all the hard efforts to collect data and carry out the
comparative analysis.
v. Many times, software development projects are business
compulsions rather than a matter of choice. Therefore, the
cost/benefit analysis may be either a futile exercise or act only as a
paper horse.
39. There are 2 ways of debugging program software: bottom-up and top-
down. How do they differ?
It is a long-standing principle of programming style that the functional elements of a
program should not be too large. If some component of a program grows beyond the
stage where it is readily comprehensible, it becomes a mass of complexity which
conceals errors. Such software will be hard to read, hard to test, and hard to debug.
In accordance with this principle, a large program must be divided into pieces, and
the larger the program, the more it must be divided. How do you divide a program?
The traditional approach is called top-down design: you say "the purpose of the
program is to do these seven things, so I divide it into seven major subroutines. The
first subroutine has to do these four things, so it in turn will have four of its own
subroutines," and so on. This process continues until the whole program has the
right level of granularity: each part large enough to do something substantial, but
small enough to be understood as a single unit.
As well as top-down design, experienced Lisp programmers follow a principle which
could be called bottom-up design: changing the language to suit the problem. It is
worth emphasizing that bottom-up design does not mean just writing the same
program in a different order. When you work bottom-up, you usually end up with a
different program. Instead of a single, monolithic program, you will get a larger
language with more abstract operators, and a smaller program written in it. Instead
of a lintel, you'll get an arch.
This brings several advantages:
1. By making the language do more of the work, bottom-up design yields programs
which are smaller and more agile. A shorter program doesn't have to be divided into
so many components, and fewer components means programs which are easier to
read or modify. Fewer components also means fewer connections between
components, and thus less chance for errors there. As industrial designers strive to
reduce the number of moving parts in a machine, experienced Lisp programmers use
bottom-up design to reduce the size and complexity of their programs.
2. Bottom-up design promotes code re-use. When you write two or more programs,
many of the utilities you wrote for the first program will also be useful in the
succeeding ones. Once you've acquired a large substrate of utilities, writing a new
program can take only a fraction of the effort it would require if you had to start with
raw Lisp.
3. Bottom-up design makes programs easier to read. An instance of this type of
abstraction asks the reader to understand a general-purpose operator; an instance
of functional abstraction asks the reader to understand a special-purpose subroutine.
4.Because it causes you always to be on the lookout for patterns in your code,
working bottom-up helps to clarify your ideas about the design of your program. If
two distant components of a program are similar in form, you'll be led to notice the
similarity and perhaps to redesign the program in a simpler way.
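By contrast, a minimal Python sketch of working bottom-up (the utilities here are hypothetical): general-purpose operators are written first, and the program itself becomes a short composition of them:

# Bottom-up design: build a small vocabulary of general-purpose
# utilities first ...
def words(path):
    # Utility: yield every word in a text file.
    with open(path) as f:
        for line in f:
            yield from line.split()

def tally(items):
    # Utility: count occurrences of each item.
    counts = {}
    for item in items:
        counts[item] = counts.get(item, 0) + 1
    return counts

# ... and the program written "in" that larger language stays small:
# a word-frequency report is one line.
def word_frequencies(path):
    return tally(words(path))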
Top-Down
Advantages
1. Design errors are trapped earlier.
2. A working (prototype) system is available early.
3. Enables early validation of the design.
4. No test drivers are needed.
5. The control program plus a few modules forms a basic early prototype.
6. Interface errors are discovered early.
7. Modular features aid debugging.
Disadvantages
1. Difficult to produce a stub for a complex component.
2. Difficult to observe test output from top-level modules.
3. Test stubs are needed.
4. Extended early phases dictate a slow manpower buildup.
5. Errors in critical modules at low levels are found late.
Bottom-Up
Advantages
1. Easier to create test cases and observe output.
2. Uses simple drivers for low-level modules to provide data and the interface.
3. Natural fit with OO development techniques.
4. No test stubs are needed.
5. Easier to adjust manpower needs.
6. Errors in critical modules are found early.
Disadvantages
1. No working program until all modules are tested.
2. High-level errors may cause changes in lower modules.
3. Test drivers are needed.
4. Many modules must be integrated before a working program is available.
5. Interface errors are discovered late.
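The test stubs and test drivers named in the lists above can be pictured with a minimal Python sketch; the interest-calculation routines are hypothetical:

# Top-down testing: the upper-level module is real; the unwritten
# lower-level routine is replaced by a stub.
def interest_stub(account_id):
    # Stub: a module shell that simply returns a fixed value so the
    # calling module can be exercised.
    return 0.0

def monthly_statement(account_id, balance, interest_fn=interest_stub):
    # Upper-level module under test.
    return balance + interest_fn(account_id)

# Bottom-up testing: the low-level module is real; a short driver
# supplies test data and checks the output.
def calculate_interest(balance, rate):
    return balance * rate

def driver():
    assert calculate_interest(1000.0, 0.05) == 50.0
    print("calculate_interest: OK")

driver()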
40. Discuss the six special system tests. Give suitable examples.
Special System Tests:
Tests which do not focus on the normal running of the system, but fall
under a special category and are used for performing specific checks related to
specific tasks, are termed Special System Tests.
They are listed as follows:
1. Peak Load Test:
This is used to determine whether the system will handle the volume of activities
that occur when the system is at the peak of its processing demand, for instance
when all terminals are active at the same time. This test applies mainly to on-line
systems.
For example, in a banking system, the analyst wants to know what will happen if all
the tellers sign on at their terminals at the same time before the start of the business
day. Will the system handle them one at a time without incident, will it attempt to
handle all of them at once and be so confused that it locks up and must be
restarted, or will terminal addresses be lost? The only sure way to find out is to test
for it.
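As a rough sketch of how such a test might be scripted (sign_on is a hypothetical stand-in for a real terminal request), Python threads can simulate all tellers signing on at once:

import threading

def sign_on(terminal_id, results):
    # Hypothetical stand-in for a terminal's sign-on request.
    results[terminal_id] = "ok"

def peak_load_test(num_terminals=100):
    results = {}
    threads = [threading.Thread(target=sign_on, args=(i, results))
               for i in range(num_terminals)]
    for t in threads:
        t.start()   # every terminal becomes active at the same time
    for t in threads:
        t.join()
    # The system should have handled every simultaneous sign-on.
    assert len(results) == num_terminals

peak_load_test()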
2. Storage Testing:
This test is carried out to determine the capacity of the system to store
transaction data on a disk or in other files. Capacities here are measured in terms of
the number of records that a disk will handle or a file can contain. If this test is not
carried out, one may discover during installation that there is not enough storage
capacity for transactions and master-file records.
3. Performance Time Testing:
This test refers to the response time of the system being installed. Performance time
testing is conducted prior to implementation to determine how long it takes to
receive a response to an inquiry, make a backup copy of a file, or send a
transmission and receive a response.
It also includes test runs to time the indexing or restoring of large files of the size the
system will have during a typical run, or to prepare a report. A system that runs well
with only a handful of test transactions may be unacceptably slow when fully loaded.
This test should be done using the entire volume of live data.
4. Recovery Testing:
The analyst must never be too sure of anything. He must always be prepared for the
worst. One should assume that the system will fail and data will be damaged or lost.
Even though plans and procedures are written to cover these situations, they also
must be tested.
5. Procedure Testing:
Documentation and manuals telling the user how to perform certain functions can be
tested quite easily by asking the user to follow them exactly through a series of
events.
Not including instructions about aspects such as when to press the enter key, or
removing the diskettes before switching off the power, could cause
problems. This type of testing brings out what is not mentioned in the
documentation, as well as the errors in it.
6. Human Factors:
If the screen goes blank during processing, the operator may start to wonder
what is happening, and may do things like press the enter key a
number of times or switch off the system. If a message is displayed
saying that processing is in progress and asking the operator to wait, these
types of problems can be avoided.
Thus, during this test we determine how users will use the system when processing
data or preparing reports.
As we have noticed, these special tests are used for some special situations,
hence the name Special System Tests.
41. Define the following types of maintenance. Give examples for each.
a) Corrective Maintenance
b) Adaptive Maintenance
c) Perfective Maintenance
d) Preventive Maintenance
Solution
Corrective Maintenance:
Corrective maintenance means repairing processing or performance
failures, or making changes because of previously uncorrected problems or false
assumptions.
For example:
- fixing cosmetic problems, like correcting a misspelled word in the user interface;
- fixing functional errors that don't obviously affect processing, like correcting a
mathematical function so that it is calculated correctly;
- fixing algorithmic errors that cause severe performance problems, like changing a
program to avoid crashes or infinite loops;
- fixing algorithmic problems that damage, lose, corrupt, or destroy data in the
program or in files.
Adaptive Maintenance:
Adaptive maintenance is a type of software maintenance where the
software is changed in response to changes in the working environment, a system
upgrade, or a hardware replacement; in short, changing the program to function in
the changed environment.
For example:
- porting to newer versions of the development tools and/or components;
- porting the product to a different operating system;
- adapting a program for new locales;
- modifying code to take full advantage of hardware-supported operations;
- most year-2000 fixes;
- adding support for network access or web access.
Perfective Maintenance:
Perfective maintenance means enhancing the performance or modifying the
program(s) to respond to the user's additional or changing needs.
For example:
- changing the GUI to streamline user interactions;
- replacing algorithms to speed up processing;
- adding colour, higher resolution, better sound, better graphics, animation, and
other multimedia enhancements;
- adding security features;
- making a program more customizable and adaptable to user preferences.
Preventive Maintenance:
Preventive maintenance is a type of software maintenance where work is
done in order to try to prevent malfunctions or to improve maintainability.
42. What are the different methods of file organization? Explain the advantages
and disadvantages of each.
File Organization:
A file is organized to ensure that records are available for processing. It should be
designed in line with the activity and volatility of the information and the nature of
the storage media and devices. Other considerations are the cost of the media and
file privacy, security, and confidentiality.
There are four methods of organizing files:
1. Sequential organization:
- Sequential organization simply means storing data in physical, contiguous blocks
within files on tape or disk. Records are also in sequence within each block.
- To access a record, previous records within the block are scanned. Thus this design
is best suited for "get next" activities, reading one record after another without a
search delay.
- Records can be added only at the end of the file. It is not possible to insert
a record in the middle of the file without rewriting the file.
- During a file update, transaction records are in the same sequence as in the master
file. Records from both files are matched, one record at a time, resulting in an
updated master file (see the sketch after the lists below).
Advantages:
- Simple to design and easy to program.
- Variable-length and blocked records are available.
- Best use of disk storage.
Disadvantages:
- Records cannot be added to the middle of the file.
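A minimal sketch of the matching update described above, assuming both files are simple Python lists of (key, data) pairs already sorted by key:

def update_master(master, transactions):
    # Records from both sorted files are matched one at a time,
    # producing a rewritten, updated master file.
    updated, i, j = [], 0, 0
    while i < len(master) and j < len(transactions):
        if master[i][0] < transactions[j][0]:
            updated.append(master[i])        # no transaction: copy as-is
            i += 1
        elif master[i][0] == transactions[j][0]:
            updated.append(transactions[j])  # matching key: apply update
            i += 1
            j += 1
        else:
            updated.append(transactions[j])  # new key: add new record
            j += 1
    updated.extend(master[i:])
    updated.extend(transactions[j:])
    return updated

# The whole file is rewritten, since records cannot be inserted in place.
print(update_master([(1, "A"), (3, "C")], [(2, "B"), (3, "C2")]))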
2. Indexed Sequential Organization:
- Like sequential organization, it also stores data in physically contiguous blocks.
- The difference is in the use of indexes to locate the records.
-Disk storage is divided into three areas:
(a). Prime area: It contains file records stored by key or ID numbers. All records
are initially stored in the prime area.
(b). Overflow area: It contains records added to the files that cannot be placed in
logical sequence in the prime area.
(c) Index area: This is more like a data dictionary. It contains keys of records and
their locations on the disks. A pointer associated with each key is an address that
tells the system where to find a record.
Advantages:
- Indexed sequential organization reduces the magnitude of the sequential
search and provides quick access for sequential and direct processing.
- Records can be inserted or updated in the middle of the file.
Disadvantages:
- The prime drawback is the extra storage required for the index.
- It also takes longer to search the index for data access or retrieval.
- Periodic reorganization of the file is required.
3. Inverted List Organization:
- Like the indexed sequential storage method, the inverted list organization
maintains an index.
- The two methods differ, however, in the index level and record storage.
- The indexed sequential method has a multiple index for a given key, whereas the
inverted list method has a single index for each key type.
- In an inverted list, records are not necessarily stored in a particular sequence. They
are placed in the data storage area, but indexes are updated for record keys and
locations.
Advantage:
- Inverted lists are best for applications that request specific data on multiple
keys. They are ideal for static files, because additions and updates cause
expensive pointer updating.
4. Direct Access Organization:
- In direct-access file organization, records are placed randomly throughout the file.
- Records need not be in sequence, because they are updated directly and rewritten
back in the same location.
- New records are added at the end of the file or inserted in specific locations based
on software commands.
- Records are accessed by addresses that specify their disk locations. An address is
required for locating a record, for linking records, or for establishing relationships.
Advantages:
- Records can be inserted or updated in the middle of the file.
- Better control over a record's location.
Disadvantages:
- Address calculation is required for processing.
- Variable-length records are nearly impossible to process.
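A toy Python sketch of the address calculation (ignoring collisions, which a real scheme must handle):

FILE_SIZE = 100
file_slots = [None] * FILE_SIZE  # stand-in for disk locations

def address_of(key):
    # Address calculation: map a record key to a "disk" location.
    return hash(key) % FILE_SIZE

def write_record(key, data):
    # The record is placed at, and rewritten in, the same computed
    # location.
    file_slots[address_of(key)] = (key, data)

def read_record(key):
    slot = file_slots[address_of(key)]
    return slot[1] if slot and slot[0] == key else None

write_record("ACC-42", {"balance": 500})
print(read_record("ACC-42"))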
43. Consider an online railway reservation system and perform output
design. Show some sample screen layouts. Write a detailed note about the
different levels and methods of testing software.
Levels of tests:
1) Unit Test
2) System Testing
Unit Testing:
In unit testing the analyst tests the programs making up a system. The
software units in a system are the modules and routines that are assembled and
integrated to perform a specific function.
Unit testing focuses first on the modules, independently of one
another, to locate errors. This enables the tester to detect errors in coding and logic
that are contained within that module only. The test cases needed for unit testing
should exercise each condition and option.
Unit testing can be performed from the bottom up, starting with the
smallest and lowest-level modules and proceeding one at a time. For each module
in bottom-up testing, a short program (a driver) executes the module and provides
the needed data, so that the module is asked to perform the way it will when
embedded within the larger system. When the bottom-level modules have been
tested, attention turns to the modules at the next level, which use the lower ones.
These are tested individually and then linked with the previously examined
lower-level modules.
Top-down testing begins with the upper-level modules. However, since the
detailed activities usually performed in the lower-level routines are not provided,
stubs are written. A stub is a module shell that can be called by the upper-level
module and that, when reached properly, returns a message to the calling module,
indicating that proper interaction occurred. No attempt is made to verify the
lower-level modules.
System Testing:
System testing does not test the software per se, but rather the
integration of each module in the system. It also tests to find discrepancies between
the system and its original objective, current specifications, and systems
documentation. The primary concern is the compatibility of individual modules.
System testing must also verify that file sizes are adequate and that indices
have been built properly. Sorting and reindexing procedures assumed to be present
in the lower-level modules must be tested at the system level to see that they in
fact exist and achieve the results the modules expect.
Methods of testing software:
There are two general methods for testing software:
1) Code Testing
2) Specification Testing
Code Testing:
The code-testing strategy examines the logic of the program. To follow this
testing method, the analyst develops test cases that result in executing every
instruction in the program or module; that is, every path through the program is
tested. A path is a specific combination of conditions that is handled by the program.
Code testing may seem the ideal method for testing software, but this
strategy does not indicate whether the code meets its specification, nor
does it determine whether all aspects are even implemented. It also
does not check the range of data that the program will accept.
Specification Testing:
In this, the analyst examines the specifications stating what the
program should do and how it should perform under various conditions. Test
cases are then developed for each condition or combination of
conditions and submitted for processing. By examining the results, the analyst
can determine whether the program performs according to its
specified requirements.
This strategy treats the program as if it were a black box: the
analyst does not look into the program to study the code and is not concerned about
whether every instruction or path through the program is tested.
Specification testing is a more efficient strategy, since it focuses on the way the
software is expected to be used.
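A small Python illustration of the black-box idea (the discount rule is hypothetical): the test cases come from the stated conditions, not from reading the code:

import unittest

def discount(amount):
    # Hypothetical specification: orders of 1000 or more get 10% off.
    return amount * 0.9 if amount >= 1000 else amount

class SpecificationTest(unittest.TestCase):
    # Each test case is derived from a condition in the specification;
    # the program is treated as a black box.
    def test_below_threshold(self):
        self.assertEqual(discount(999), 999)

    def test_at_threshold(self):
        self.assertEqual(discount(1000), 900)

if __name__ == "__main__":
    unittest.main()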
44. Write a note on:
a) Software design tools
b) GUI design
- The concept of using tools in software engineering is similar to that of the tools
used in other engineering and production technologies, such as the automobile
industry.
- A software tool is general-purpose software which carries out a highly specific task
for a human being, completely or partially eliminating the need for human skill in
that task.
- Though there are various types of software tools available, we study here only
those which can automate some or many of the software system development tasks.
These are broadly termed Software development (design) tools. CASE (Computer
Aided Software Engineering) tools are another, broader set of tools.
- Software tools can be useful in the following SDLC phases:

Sr No. | SDLC Phase | Software tools can be used for
1 | System Analysis | DFD, ERD and other models; word processors and editors; drawing/graphics.
2 | High-Level Design | DFD, ERD and other models; word processors and editors; drawing/graphics.
3 | Low-Level Design | Integrated Development Environment (IDE).
4 | Construction | IDE, DBMS, code generation, report generators, query builders, form developers, control development tools, web page builders.
5 | Testing | Test data generation tools; tools to execute and monitor test runs; debugging tools; change impact analysis tools.
6 | Implementation | Reverse engineering tools, remote deployment tools, e-tutors, on-line help development management tools.
7 | Project Management | PERT/CPM schedule generation tools, spreadsheets, charting tools, reporting tools, appointment/schedule management tools.
8 | Software Configuration & Control Management | Software inventory management, version control management, source code safes.
The basic purposes of using software tools are as follows:
1. Increased speed and accuracy.
2. A high level of automation brings in a lot of standardization.
3. Less dependence on human skills.
4. For a high volume of work, the costs are considerably low.
5. Work can be iterated several times.
6. Versatile, because of high programmability.
Thus the use of tools increases development productivity several times.
(b) GUI Design:
- GUI stands for Graphical User Interface. Since the PC was
invented, computer systems and application systems have considered
the GUI one of the most important aspects of system design. The GUI design
component in every application system will vary depending upon the importance of
user interaction to the success of the system.
- However, by now, we have definitely shifted from purely non-interactive,
batch-oriented application systems, having almost no GUI component,
to present-day applications such as games, animations and internet-based
application systems, where the success of the application rests almost
completely on the effectiveness of the GUI.
The goals of a good GUI:
1. Provide the best way for people to interact with computers. This is commonly
known as Human Computer Interaction (HCI).
2. The presentation should be user friendly. A user-friendly interface is helpful:
it should not only tell the user that he has committed an error, but also provide
guidance as to how he/she can rectify it soon.
3. It should provide information on what the error is and how to fix it.
4. User-friendliness also includes being tolerant and adaptable.
5. A good GUI makes the user more productive.
6. A good GUI is more effective, because it finds the best solutions to a problem.
7. It is also efficient, because it helps the user find such solutions more quickly
and with the least error.
8. For a user using a computer system, the workspace is the computer screen.
The goal of a good GUI is to make the best use of most, if not all, of a user's
workspace.
9. A good GUI should be robust. High robustness implies that the interface
should not fail because of some action taken by the user. Also, any user error
should not lead to a system breakdown.
10. The usability of a GUI is expected to be high. Usability is measured in
various terms.
11. A good GUI has high analytical capability: most of the information needed by
the user appears on the screen.
12. With a good GUI, the user finds the work easier and more pleasant.
13. The user should be happy and confident using the interface.
14. A good GUI keeps the cognitive workload low, i.e. the mental effort
required of the user to use the system should be the least. In fact, the GUI
closely approximates the user's mental model of, and reactions to, the screen.
15. For a good GUI, user satisfaction is HIGH.
These are the characteristics, or goals, of a good GUI.
45. Describe in brief the spiral model of development.
DEFINITION - The spiral model, also known as the spiral lifecycle model, is a
systems development method (SDM) used in information technology (IT). This model
of development combines the features of the prototyping model and the waterfall
model. The spiral model is favored for large, expensive, and complicated projects.
The steps in the spiral model can be generalized as follows:
1. The new system requirements are defined in as much detail as possible. This
usually involves interviewing a number of users representing all the external
or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary
design. This is usually a scaled-down system, and represents an
approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first
prototype in terms of its strengths, weaknesses, and risks; (2) defining the
requirements of the second prototype; (3) planning and designing the second
prototype; (4) constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is
deemed too great. Risk factors might involve development cost overruns,
operating-cost miscalculation, or any other factor that could, in the
customer's judgment, result in a less-than-satisfactory final product.
6. The existing prototype is evaluated in the same manner as was the previous
prototype, and, if necessary, another prototype is developed from it according
to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the
refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is
carried out on a continuing basis to prevent large-scale failures and to
minimize downtime.
Advantages
- Estimates (i.e. budget, schedule, etc.) get more realistic as work progresses,
because important issues are discovered earlier.
- It is more able to cope with the (nearly inevitable) changes that software
development generally entails.
- Software engineers (who can get restless with protracted design processes)
can get their hands in and start working on a project earlier.
- The spiral model is a realistic approach to the development of large-scale
software products, because the software evolves as the process progresses. In
addition, the developer and the client better understand and react to risks at
each evolutionary level.
- The model uses prototyping as a risk-reduction mechanism and allows for the
development of prototypes at any stage of the evolutionary development.
- It maintains a systematic stepwise approach, like the classic life-cycle model,
but incorporates it into an iterative framework that more closely reflects the
real world.
- If employed correctly, this model should reduce risks before they become
problematic, since technical risks are considered at all stages.
Disadvantages
- It demands considerable risk-assessment expertise.
- It has not been employed as widely as proven models (e.g. the waterfall model),
and hence it may prove difficult to convince the client (especially where a
contract is involved) that this model is controllable and efficient. More study
needs to be done in this regard.
- It may be difficult to convince customers that an evolutionary approach is
controllable.
46. What is the purpose of a feasibility study? Explain in detail the parameters
used to decide the different feasibilities of a system.
A feasibility study is an analysis of possible alternative solutions to a problem and a
recommendation on the best alternative. It can, for example, decide whether order
processing can be carried out by a new system more efficiently than by the previous
one.
A feasibility study is conducted to find out whether the proposed system would be:
1. Possible to build with the given technology and resources,
2. Affordable given the time and cost constraints of the organisation, and
3. Acceptable for use by the eventual users of the system.
Purpose of the feasibility study:
1. Need analysis - to determine the need for a change in an organisation.
2. Cost-benefit analysis - to study the effect of the change on the economics of
the organisation.
3. Technical feasibility - to evaluate the various technologies that can be used
for implementing the suggested change, given the cost and resource
constraints of the organisation.
4. Legal feasibility - to evaluate the legal procedures, if any, that come into play
to implement the suggested change.
5. Evaluation of alternatives - to evaluate the various alternatives available for
resolving the problems of an organisation and to recommend the best-suited
one.
Parameters used to decide the different feasibilities:
1. Economic feasibility study
This involves questions such as whether the firm can afford to build the system,
whether its benefits will substantially exceed its costs, and whether the project
has higher priority and profits than other projects that might use the same
resources. It also covers whether the project is in a position to fulfil all the
eligibility criteria and the responsibilities of both sides in case two parties are
involved in performing the project.
2. Technical feasibility study
This involves questions such as whether the technology needed for the system
exists, how difficult it will be to build, and whether the firm has enough experience
using that technology. The assessment is based on an outline design of system
requirements in terms of input, output, fields, programs, and procedures. This can
be quantified in terms of volumes of data, trends, frequency of updating, etc., in
order to give an introduction to the technical system.
3. Schedule feasibility study
This involves questions such as how much time is available to build the new system,
when it can be built (i.e. during holidays), whether it interferes with normal business
operations, etc.
4. Organizational feasibility study
This involves questions such as whether the system has enough support to be
implemented successfully, whether it brings an excessive amount of change, and
whether the organization is changing too rapidly to absorb it.
5. Cultural feasibility study
In this stage, the project's alternatives are evaluated for their impact on the local
and general culture. For example, environmental factors need to be considered.
6. Legal feasibility study
Not necessarily last, but all projects must face legal scrutiny. When an organization
has legal counsel either on staff or on retainer, such reviews are typically standard.
However, any project may face legal issues after completion too.
7. Marketing feasibility study
This will include an analysis of single and multi-dimensional market forces that could
affect the commercial viability of the system.
47. Explain how the waterfall model and the prototyping model can be
accommodated in the spiral process model.
The spiral model, also known as the spiral lifecycle model, is a systems development
method (SDM) used in information technology (IT). This model of development
combines the features of the prototyping model and the waterfall model, and is
favored for large, expensive, and complicated projects.
The waterfall model is accommodated through the spiral's systematic, stepwise
approach: each loop of the spiral passes through waterfall-like phases of defining
requirements, creating a preliminary design, constructing, and then evaluating and
testing. The prototyping model is accommodated as the spiral's risk-reduction
mechanism: a first prototype is built from the preliminary design, and successive
prototypes are evaluated, redefined, redesigned, and rebuilt until the customer is
satisfied that the refined prototype represents the final product desired, after which
the final system is constructed from it. The detailed steps, advantages, and
disadvantages of the spiral model are the same as those listed under Question 45.
48. Discuss the advantages of graphical information displays, and
suggest four application areas where it would be more appropriate
to use graphical rather than digital display of numeric information.
The advantages of graphical information display are:
- Improved effectiveness of output.
- Easier management of information volume.
- Fulfilment of personal preferences.
- Use of different graphic forms ensures better readability of
information.
Use of ICONS (pictorial representations of entities described by data):
- Properly selected icons communicate information immediately, since
they duplicate images that users are already familiar with.
- An appropriate icon ensures that the right words and phrases, that is, the
intended meaning, are conveyed.
COLOUR REPRESENTATION:
- Consistent colour usage enhances a good output design,
e.g. RED for exceptions, GREEN/BLUE for normal situations.
- The brightest colours emphasize the most important information on the
display screen, e.g. TURQUOISE and PINK.
Application areas:
i. BUSINESS REPORTS - pie diagrams and bar charts help the reader understand
the information better than just printing the values and figures.
ii. AUTOMOBILE SHOWROOMS - a comparison of different models using
different graphics gives a better output value to the user.
iii. HOSPITALS - diagrams showing different dosages and instrument
outputs on paper (scans) give a quick idea about the patient's condition and
needs.
49. What are the design principles of a good user interface?
The UI is everything that the end user comes into contact with while
using the system.
The 3 aspects of the UI are:
1. Physical 2. Perceptual 3. Conceptual.
The general design principles of a good USER INTERFACE are:
1. VISIBILITY: All controls should be visible and provide feedback to
indicate that the control is responding to the user's action.
2. AFFORDANCE: The appearance of any control should suggest its
FUNCTIONALITY.
3. ROBUSTNESS: The ability to prevent interface errors from corrupting the
system.
4. USABILITY: How EASY it is to use the interface.
The 8 golden rules for DESIGNING INTERACTIVE INTERFACES are:
1. STRIVE for CONSISTENCY.
2. ENABLE USE of SHORTCUTS.
3. OFFER INFORMATIVE FEEDBACK.
4. DESIGN DIALOGUE to YIELD CLOSURE.
5. OFFER SIMPLE ERROR HANDLING.
6. SUPPORT INTERNAL LOCUS of CONTROL.
7. PERMIT EASY REVERSAL of ACTIONS.
8. REDUCE SHORT-TERM MEMORY LOAD.
50. What do you mean by structured analysis? Describe the various
tools used for structured analysis, with the pros and cons of each.
STRUCTURED ANALYSIS:
A set of techniques and graphical tools that allow the analyst to develop a new
kind of system specification that is understandable to the user.
Tools of structured analysis are:
1: DATA FLOW DIAGRAM (DFD) [Bubble Chart]
Modular design;
Symbols:
i. External entities: a rectangle or cube - represents a SOURCE or
DESTINATION.
ii. Data flows/arrows: show the MOTION of data.
iii. Processes/circles/bubbles: transform INCOMING data flows to OUTGOING
ones.
iv. Open rectangles: a file or data store
(data at rest, or a temporary repository of data).
- A DFD describes data flow logically rather than how it is processed.
- It is independent of hardware, software, data structure or file organization.
ADV - Ability to represent data flow.
Useful in HIGH- and LOW-level analysis.
Provides good system documentation.
DISADV - Weak input and output details.
Confusing initially.
Requires several iterations.
2: DATA DICTIONARY - a STRUCTURED REPOSITORY of DATA
(METADATA: data about data).
The 3 items of data present in a data dictionary are:
i. DATA ELEMENTS.
ii. DATA STRUCTURES.
iii. DATA FLOWS & DATA STORES.
ADV - Supports documentation.
Improves communication between analyst and user.
Provides a common database for control.
Easy to locate errors.
DISADV - No functional details.
Not easily accepted by non-technical users.
3: DECISION TREE
ADV - Used to verify logic.
Used where complex decisions are involved.
Used to calculate discounts or commissions, e.g. in an inventory control
system.
DISADV - A large number of branches with MANY THROUGH PATHS will make
structured analysis difficult rather than easy.
4: DECISION TABLE - a matrix of rows and columns that shows conditions and
actions. It may be a LIMITED-, EXTENDED- or MIXED-ENTRY decision table.

CONDITION STUB | CONDITION ENTRY
ACTION STUB    | ACTION ENTRY

ADV - The condition statements identify the relevant conditions.
The condition entries tell which value applies for any particular
condition.
Actions are based on the above condition statements and entries.
DISADV - Drawing the table becomes cumbersome if there are many
conditions and respective entries.
5: STRUCTURED ENGLISH - three kinds of statement are used:
i. Sequence - all statements written as a sequence are executed
sequentially.
The execution does not depend upon the existence of any
other statement.
ii. Selection - make a choice from given options.
The choice is made based on conditions.
Normally the condition is written after 'IF' and the action after
'THEN'.
For a two-way selection, 'ELSE' is used.
iii. Iteration - when a set of statements is to be performed a number of
times, the statements are put in a loop.
ADV - Easy to understand.
DISADV - The natural language used to describe the logic can still be
ambiguous.
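A short illustrative example combining the three kinds of statement (the order-processing rule itself is hypothetical):

FOR each order received
    IF the invoice amount exceeds 5000
        THEN compute the discount as 5 percent of the invoice amount
        ELSE set the discount to zero
    subtract the discount from the invoice amount
END FOR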
51. Describe how you would expect documentation to help analysts and
designers.
Introduction:
Documentation is not a step in the SDLC. It is an activity ongoing in every phase of
the SDLC. It is about developing documents, initially as a draft, later as a reviewed
document, and then as a signed-off document.
A document is born either after it is signed off by an authority or after its review.
It carries an initial version number. However, the document also undergoes changes,
and then the only way to keep the document up to date is to incorporate these
changes.
Software Documentation helps Analysts and Designers in the following
ways:
1. The development of software starts with abstract ideas in the minds of the
Top Management of User organization, and these ideas take different forms
as the software development takes place. The Documentation is the only link
between the entire complex processes of software development.
2. The documentation is a written communication, therefore, it can be used for
future reference as the software development advances, or even after the
software is developed, it is useful for keeping the software up to date.
3. The documentation carried out during a SDLC stage, say system analysis, is
useful for the respective system developer to draft his/her ideas in the form
which is shareable with the other team members or Users. Thus it acts as a
very important media for communication.
4. The document reviewer(s) can use the document for pointing out the
deficiencies in it, precisely because the abstract ideas or models are
documented. Thus, documentation provides the facility to make abstract ideas
tangible.
5. When the draft document is reviewed and recommendations incorporated, the
same is useful for the next stage developers, to base their work on. Thus
documentation of a stage is important for the next stage.
6. Documentation is very important because it records important
decisions about freezing the system requirements, the system design and
implementation decisions, agreed between the Users and Developers or
amongst the developers themselves.
7. Documentation provides a lot of information about the software system. This
makes it very useful tool to know about the software system even without
using it.
8. Since members keep joining a software development team as the
project goes on, the documentation acts as an
important source of detailed and complete information for the newly joined
members.
9. Also, the User organization may spread implementation of a successful
software system to few other locations in the organization. The
documentation will help the new Users to know the operations of the software
system. The same advantage can be drawn when a new User joins the
existing team of Users. Thus documentation makes the Users productive on
the job, very fast and at low cost.
10. Documentation is live and important as long as the software is in use by the
User organization.
11. When the User organization starts developing a new software system to
replace the current one, even then the documentation is useful; e.g. the
system analysts can refer to it as a starting point for discussions on the new
system requirements.
Hence, we can say that software documentation is a very important aspect of the
SDLC.
52. What are the elements of cost/benefit analysis? Take a suitable
example and give a system proposal for it.
Solution: Cost-benefit analysis:
Cost-benefit analysis is a procedure that gives a picture of the various
costs, benefits, and rules associated with each alternative system.
Cost-benefit categories:
In developing cost estimates for a system, we need to consider several cost
elements. Among them are the following:
1. Hardware costs:
Hardware costs relate to the actual purchase or lease of the computer and
peripherals (e.g. printer, disk drive, tape unit, etc.). Determining the actual costs of
hardware is generally more difficult when the system is shared by many users than
for a dedicated stand-alone system.
2.Personnel Costs:
Personnel costs include EDP staff salaries and benefits (health insurance,
vacation time, sick pay, etc.) as well as payment of those involved in developing the
system . Costs incurred during the development of a system are one-time costs and
are labeled development costs.
3. Facility Costs:
Facility costs are expenses incurred in the preparation of the physical site
where the application or computer will be in operation. This includes wiring, flooring,
lighting and air conditioning. These costs are treated as one-time costs.
4. Operating costs:
Operating costs include all costs associated with the day-to-day operation of
the system. The amount depends on the number of shifts, the nature of the
application, and the caliber of the operating staff. The amount charged is based on
computer time, staff time, and the volume of output produced.
5. Supply Costs:
Supply costs are variable costs that increase with increased use of paper,
ribbons, disks and the like.
Procedure for cost-benefit determination:
The determination of costs and benefits entails the following steps:
1. Identify the costs and benefits pertaining to a given project.
2. Categorize the various costs and benefits for analysis.
3. Select a method of evaluation.
4. Interpret the results of the analysis.
5. Take action.
Classification of costs and benefits:
1. Tangible and intangible costs and benefits:
a. Tangible refers to the ease with which costs or benefits can be
measured. An outlay of cash for a specific item or activity is referred to
as tangible costs. The purchase of a hardware or software, personnel
training and employee salaries are examples of tangible costs.
b. Costs that are known to exist but whose financial value cannot be
accurately measured are referred to as intangible costs. For
example, employee morale problems caused by a new system or a
lowered company image are intangible costs.
c. Benefits can also be classified as tangible or intangible. Tangible
benefits, such as completing jobs in fewer hours or producing reports
with no errors, are quantifiable.
d. Intangible benefits such as more satisfied customers or an improved
corporate image are not easily quantified.
2. Direct and Indirect Cost and benefits:
e. Direct costs are those with which a money figure can be directly
associated in a project. They are applied directly to a particular
operation. For example the purchase of a box of diskettes for $35 is a
direct cost.
f. Indirect costs are the result of operation that are not directly
associated with a given system or activity. They are often referred to
as overhead.
g. Direct benefits also can be specifically attributable to a given project.
For example a new system that can handle 25 percent more
transaction per day is a direct benefit.
h. Indirect benefits are realized as a by-product of another activity or
system.
3. Fixed or variable costs and benefits:
i. Fixed costs are sunk costs. They are constant and do not change. Once
encountered, they will not recur. Examples are straight-line
depreciation of hardware, exempt employee salaries, and insurance.
j. Variable costs are incurred on a regular basis. They are usually
proportional to work volume and continue as long as the system is in
operation. For example, the costs of computer forms vary in
proportion to the amount of processing or the length of the reports
required.
k. Fixed benefits are also constant and do not change. An example is a
decrease in the number of personnel by 20 percent resulting from the
use of a new computer.
l. Variable benefits are realized on a regular basis. For example, consider a
safe deposit tracking system that saves 20 minutes preparing
customer notices compared with the manual system.
Examples of tangible benefits:
- Fewer processing errors.
- Increased throughput.
- Decreased response time.
- Elimination of job steps.
- Reduced expenses.
- Increased sales.
- Faster turnaround.
- Better credit.
- Reduced credit losses.
Examples of intangible benefits:
- Improved customer goodwill.
- Improved employee morale.
- Improved employee job satisfaction.
- Better service to the community.
- Better decision making.
Evaluation methods:
1. Net benefit analysis:
Net benefit analysis simply involves subtracting total costs from total
benefits. It is easy to calculate, easy to interpret and easy to present. The main
drawback is that it does not account for the time value of money and does not
discount future cash flows.
The time value of money is usually expressed in the form of interest on the
funds invested to realize the future value. Assuming compound interest, the
formula is:
F = P(1 + i)^n
where
F = future value of an investment,
P = present value of the investment,
i = interest rate per compounding year,
n = number of years.
2. Present value analysis:
In developing long-term projects, it is often difficult to compare today's
costs with the full value of tomorrow's benefits. The time value of
money allows for interest rates, inflation and other factors that alter
the value of the investment. Present value analysis controls for these
problems by calculating the costs and benefits of the system in terms
of today's value of the investment and then comparing across
alternatives:
Present Value = Future Value / (1 + i)^n
Net present value is equal to discounted benefits minus discounted
costs.
3. Payback analysis:
The payback method is a common measure of the relative time value
of a project. It determines the time it takes for the accumulated
benefits to equal the initial investment. It is easy to calculate and
allows two or more activities to be ranked. The payback period may be
computed by the following formula:
Overall cash outlay (A*B) + (C*D)
--------------------------------- = years + installation time = years to recover
        Annual cash return
where
A = capital investment,
B = investment credit,
C = cost investment,
D = company's income tax,
E = state and local taxes,
F = life of capital,
G = time to install the system,
H = benefits and savings.
4. Break-even analysis:
Break-even is the point where the cost of the candidate system and that
of the current one are equal. Break-even analysis compares the costs of the
current and the candidate system. When a candidate system is
developed, initial costs usually exceed those of the current system.
This is an investment period. When both costs are equal, it is
break-even. Beyond that point, the candidate system provides greater
benefits than the old one - a return period.
5. Cash flow analysis:
Cash-flow analysis keeps track of accumulated costs and revenues on
a regular basis. The spreadsheet format also provides break-even
analysis and payback information.
6. Return on investment analysis:
The ROI analysis technique compares the lifetime profitability of
alternative solutions and projects. The ROI for a solution or project is a
percentage rate that measures the relationship between the amount
the business gets back from an investment and the amount invested.
The ROI is calculated as follows:
ROI = (Estimated lifetime benefits - Estimated lifetime costs) / Estimated lifetime costs
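The formulas above can be tied together in a small worked sketch in Python; the figures are invented purely for illustration:

def present_value(future_value, i, n):
    # Present Value = Future Value / (1 + i)^n
    return future_value / (1 + i) ** n

def payback_years(cash_outlay, annual_return):
    # Years to recover = overall cash outlay / annual cash return
    return cash_outlay / annual_return

def roi(lifetime_benefits, lifetime_costs):
    # ROI = (benefits - costs) / costs
    return (lifetime_benefits - lifetime_costs) / lifetime_costs

# Example: a system costing 50,000 returns 12,000 per year for
# 6 years, discounted at 10% per year.
costs = 50_000
benefits = sum(present_value(12_000, 0.10, year) for year in range(1, 7))
print(f"Net present value: {benefits - costs:,.0f}")
print(f"Payback period:    {payback_years(costs, 12_000):.1f} years")
print(f"ROI:               {roi(benefits, costs):.1%}")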
53. What are the sources of information used for evaluating hardware
and software? Which source do you consider the most reliable, and
why?
Hardware Selection:
1. Determining Size and Capacity Requirements:
The starting point in the equipment decision process is the size and capacity
requirements. One particular system may be appropriate for one workload
and inappropriate for another. System capacity is frequently the determining factor.
Relevant features to consider include the following:
1. Internal memory size.
2. Cycle speed of system for processing.
3. Number of channels for input, output and communication.
4. Characteristics of display and communication components.
5. Types and numbers of auxiliary storage units that can be attached.
6. Systems support and utility software provided or available.
Auxiliary storage capacity is generally determined by file storage and processing
needs. To estimate the disk storage needed for a system, the analyst must consider
the space needed for each master file, the space for programs and software,
including systems software, and the method by which backup copies will be made.
When using flexible diskettes on a small business system, the analyst must determine
whether master and transaction files will be maintained on the same diskette and on
which diskette programs will be stored. Backup considerations, as well as file size,
guide the decision about how many disk drives are needed.
2. Design of Synthetic Programs:
A synthetic job is a program written to exercise a computer's resources in a
way that allows the analyst to imitate the expected job stream and determine the
results. The artificial job stream can then be adjusted and rerun to determine the
impact. The process can be repeated as many times as necessary to see which tasks
a comparison set of computers handles well and which they do not handle as well.
The synthetic jobs can be adjusted to produce the same type of activity as actual
programs, including random access of files and sequential searching of files with
varying record sizes.
3. Plug-Compatible Equipment:
For reasons of cost, analysts frequently consider using equipment for a
particular make of computer that is not manufactured by the computer vendor. Such
components are called plug-compatible equipment. Some companies specialize in
manufacturing system components, such as printers, disk drives, or memory units,
that can be connected to a vendor's system in place of the same equipment
manufactured by the vendor. The central processing unit does not care or know that
the equipment is not the same make.
The benefit of plug-compatible equipment is the lower cost of an item compared
with one produced by a major computer vendor. Because firms specializing in
specific components can develop manufacturing expertise, or are likely to have a
smaller investment in research and development (they are duplicating components
developed by another firm), they are able to offer the same product at a lower cost.
Although there is a large market for plug-compatible equipment because of price
differences, the analyst must ensure that the equipment will meet necessary quality
levels, that it will perform as well as (or possibly better than) the original equipment,
and that the computer vendor will not disallow warranties and service agreements on
the rest of the system. The analyst must also reach agreements on maintenance
responsibilities and methods for resolving possible disputes about malfunctions.
4. Financial Factors:
The acquisition of and payment for a computer system are usually handled
through one of three common methods: rental, lease, or purchase. Determining
which option is appropriate depends on the characteristics and plans of the
organization at the time the acquisition is made. No one option is always better than
the others.
5. Maintenance and Support:
An additional factor in hardware decisions concerns the maintenance and
support of the system after it is installed. Primary concerns are the source of
maintenance, the terms, and the response times.
Maintenance Source:
Once the system is delivered and installed, there is a brief warranty period
during which time the sales unit is responsible for maintenance. This is typically a
90-day period. After that time, the purchaser has the option of acquiring
maintenance from various sources.
The most common source of maintenance for new equipment is the firm from
which it was purchased.
Service is also available from companies specializing in providing maintenance
service. Third-party maintenance companies, as these firms are called, frequently
provide service in smaller communities, where manufacturers do not find it cost
effective to maintain offices. When a used computer system is purchased from an
independent sales organization, the purchaser may have no choice but to use a
third-party maintenance firm. Many manufacturers do not service equipment they
did not sell.
Terms:
In formulating a maintenance agreement, the terms of agreement are as
important as the cost. The type of contract desired depends on the expenditures the
organization is willing to make in comparison with how frequently it estimates
service will be required.
The analyst should also consider how maintenance costs will change.
Service and Response:
Maintenance support is useful only if it is available when needed. Two concerns
in maintenance are the response time when service is requested and the hours of
support.
The user has a right to expect a reasonable response time after making an
emergency call. Organizations often specify in the contract how quickly a response
to a telephone call must be made. Whenever contracting for
maintenance, a schedule of preventive maintenance must be agreed on in advance.
Evaluation of Software:
One of the most difficult tasks in selecting software, once system requirements
are known, is determining whether a particular software package fits those
requirements. After initial selection, further scrutiny is needed to determine the
desirability of a particular package compared with other candidates.
1. Application Requirements Questions:
When analysts evaluate software for possible adoption, they do so by
comparing software features with previously developed application requirements.
Representative requirements considerations include the following:
What transactions and what data about each transaction must be handled?
What reports, documents, and other output must the system produce?
What files and database drive the system? What transaction files are needed
to maintain them?
What is the volume of data to be stored? What volume of transactions will be
processed?
Are there unique features about this application that require special
consideration when selecting software?
What inquiry requirements must the software support?
What are the limitations of the software?
Working from this basic set of questions, coupled with mandated cost and
expenditure limitations, the analyst is able to quickly remove from consideration
those packages that do not meet the requirements. It is then necessary to further
examine the remaining candidates for adoption on the basis of other attributes such
as flexibility, capacity and vendor support.
2. Flexibility:
The flexibility of a software system includes the ability to meet
changing requirements and varying user needs. Software that is flexible is generally
more valuable than a program that is totally inflexible. However, excessive
flexibility is not desirable, since it requires the user or analyst to define many
details in the system that could have been included in the design as standard
features.
Areas where flexibility is needed are data storage, reporting options,
definition of parameters, and input/output. The flexibility of software varies
according to the types of hardware it will support. The capability to instruct the
system to handle one of several optional formats is another dimension of software
flexibility.
3. Audit and Reliability Provisions:
Users often have a tendency to trust systems more than they should, to the
extent that they frequently believe the results produced by a computer-based
information system without sufficient skepticism. Therefore, ensuring that
adequate controls are included in the system is an essential step in the selection of
software. Auditors must have the ability to validate reports and output and to
test the authenticity and accuracy of data and information.
System reliability means that the data are reliable: accurate and
believable. It also includes the element of security, which the analyst evaluates by
determining the method and suitability of protecting the system against
unauthorized use. Ensuring that the system has passwords is not sufficient access
protection. Multiple levels of passwords are often needed to allow different staff
members access to those files, databases, or capabilities that they need.
4. Capacity:
System capacity refers to the number of files that can be stored and the
amount each file will hold. To show complete capacity, it may be necessary to
consider the specific hardware on which the software will be used. Capacity also
depends on the language in which the software is written.
Capacity is also determined by the following:
- The maximum size of each record, measured in number of bytes.
- The maximum size of each file, measured in number of bytes.
- The maximum size of each record, measured in number of fields per record.
- The number of files that can be active at one time.
- The number of files that can be registered in a file directory.
54. STAR HOTEL is a medium-size hotel having a capacity of 100 rooms
belonging to different categories such as AC/Non-AC/Deluxe/Super
Deluxe/Suite, etc.
The main purpose of the hotel is to computerize its billing
system and get some finance-related MIS reports.
The hotel wants to keep the following information in its database:
1. Customers who are booking and checking in at the hotel.
2. The hotel offers different kinds of services such as bar and
restaurant, laundry, room service, rent-a-car, etc.
3. The customers are charged daily for the given services, through
vouchers, and such transactions are maintained.
4. The room tariff depends on the category of the room selected by
the customer.
5. When the customer checks out, the bill is generated and details of
payments received are maintained for generating various financial
reports.
a. Draw a context-level DFD for the above case.
b. Draw an ER diagram mentioning the key attributes of the
entities. (May-04)
55. Draw screen layouts for capturing the information written on the
following input documents:
i. Purchase Order
ii. Case paper of patients admitted in the hospital
iii. Savings Bank Account opening form [M-04]
Purchase Order
Purchase Order No.:-                  Date:-
Supl No.:-
Supplier:-
Address:-
Pin:-
Sr. No    Class    Title    Qty    Rate    Value
1.
2.
3.
4.
5.
Case paper of patient admitted in the hospital
|| SHREE HOSPITAL ||
Name of patient:-
Address:-
Telephone No:-
Age:-
Sex (M/F):-
Disease:-
Ref of Dr.:-
Case Handled by Dr.:-
Admit Date:-
Saving Bank Account Opening Form
ICICI Bank
Photo
Name:-
Address:-
Telephone:-
Age:-
Sex (M/F):-
Nationality:-
Marital status:-
Email:-
Deposit Amt:-
ATM(Y/N):-
Internet Banking(Y/N):-
Signature
For office use only
Name:-
A/c No.:-
Signature
56. i) Explain with examples Specialization/Generalization.
ii) Explain with examples Aggregation.
iii) Explain with example Decision Table. [May-04]
iv) Discuss the importance of the Functional Decomposition Diagram.
Decision table:-
A decision table is a matrix of rows and columns, rather than
a tree, that shows conditions and actions. Decision rules, included in a
decision table, state what actions to follow when certain conditions exist.
Decision table characteristics:-
a) Condition statement:-
The condition statement identifies the relevant
conditions.
b) Condition entries:-
Condition entries tell which value, if any, applies for a
particular condition.
c) Action statement:-
Action statements list the set of all steps that can be
taken when a certain condition occurs.
d) Action entries:-
Action entries show which specific actions in the set to
take when selected conditions or combinations of conditions are true.
For example:-
Conditions                                  Decision rules
                                            1    2    3    4
C1 patient has basic health insurance       Y    Y    N    N
C2 patient has social health assurance      Y    N    Y    N
2) Specialization:-
The term specialization often comes up when discussing
inheritance.
Specialization is defined as taking on features in addition to those inherited
from another object. We can say that item offers and volume offers are specializations of
offers. They inherit all the features of offers but are also specialized with additional
features.
Decision table
1. This technique uses a tabular structure of specification.
2. It is a tabular representation of processing logic containing the decision
variables, decision variable values, and actions.
3. The decision variables are the parameters on the basis of which a decision
alternative is chosen. A decision variable can take various values.
4. Decision variable values are those values a decision variable is likely to take,
one at a time. All the different values that a decision variable can take and that can
impact the decision are tabulated.
5. An action or a decision to be taken by the process is a direct result of a set of
decision values of different decision variables in combinations such as AND, OR
and NOT, or other combinations of them.
6. Typically, one set of decision values generates only one set of action(s) or
decision(s). However, the converse may not be true.
Advantages
1. The tabular structure is well understood by users as well as developers.
2. Less or no chance of communication error.
3. Highly specific and therefore no room for ambiguity.
4. Usually, it provides a self-check to ensure the completeness of the specification.
5. Since it provides for combinations of decision values, complex conditions can
also be represented using this technique.
Disadvantages
1. This is a reasonably good technique up to a certain number of decision variables,
but when the number of decision variables exceeds 5, the decision table typically
becomes difficult to communicate.
2. Even for a smaller number of decision variables, the possible number of
alternatives for the decision values of all the decision variables may increase
the complexity of the decision table.
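The logic of such a table translates directly into code. A minimal sketch, based on the two-condition patient example above (the action names are illustrative assumptions, not from the source):

# Sketch: the patient-insurance decision table as a lookup structure.
# The keys enumerate every combination of condition entries (the rules);
# each set of decision values yields exactly one action.
RULES = {
    # (C1: basic health insurance, C2: social health assurance) -> action
    (True,  True):  "full coverage",         # rule 1 (illustrative action)
    (True,  False): "basic coverage only",   # rule 2
    (False, True):  "social coverage only",  # rule 3
    (False, False): "patient pays in full",  # rule 4
}

def decide(has_basic, has_social):
    """Look up the single action for one set of decision values."""
    return RULES[(has_basic, has_social)]

print(decide(True, False))   # basic coverage only

Because the dictionary must enumerate every combination of condition entries, it gives the same completeness self-check mentioned in advantage 4 above.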
57. Solve
v. Draw the report of the pay-slip given to employees in a payroll system.
w. Draw the layout of the hotel bill given to customers for lodging and
boarding.
Employee No:          Employee Name:          Designation:
Branch Name:          Month:          Basic:          Days:
Sal Code   Earnings              Sal Code   Deduction             PAN No:
           Current   Arrears                Current   Arrears
Basic                            NPF
DA                               EDB
TSA                              PT
MMA
HRA
Total                                                             Signature
COIN B/F:      COIN C/F:      PF-ADV-BAL:      Net Rs:
RC:            POST-EXP:      PF-AC-NO:
                                               Officer In Charge
58. Write short notes.
x. Importance of Documentation in SSAD
y. Feasibility study
z. Requirements gathering techniques
Importance of documentation in SSAD
1. The development of software starts with abstract ideas in the minds of the top
management of the user organization, and these ideas take different forms as the
software development takes place. The documentation is the only link between
the entire complex processes of software development.
2. The documentation is a written communication; therefore it can be used for
future reference as the software development advances, or even after the
software is developed, and it is useful for keeping the software up to date.
3. The documentation carried out during a SDLC stage, say a system analysis is
useful for the respective system developer to draft his/her ideas in the form
which is sharable with the other team members or users. Thus it acts as a
very important media for communication.
4. The documentation reviewer can use the documents for pointing out the
deficiencies in them, only because the abstract ideas or models are
documented. Thus the documentation provides a facility to make abstract
ideas tangible.
5. When the draft document is reviewed and recommendations incorporated, the
same is useful for the next stage developers, to base their work on, thus
documentation of stage is important for the next stage.
6. Documentation is very important because it documents very important
decision about freezing the system requirements, the system design and
implementation decision, agreed between the user and developer or among
the developers itself.
7. Documentation provides lots of information about software system. This
makes it very useful tool to know about the software system even without
using it.
8. Since the team members in a software development team keep getting added as
the software development project goes on, the documentation acts as an
important source of detailed and complete information for the newly joined
members.
9. Also, the user organization may spread implementation of a successful
software system to a few other locations in the organization. The
documentation will help the new users to know the operation of the software
system. The same advantage can be drawn when a new user joins an existing
team of users. Thus documentation makes the users productive on the job,
very fast and at low cost.
10. Documentation is live and important as long as software is in use by the user
organization.
11. When the user organization starts developing a new software system to
replace older one, even then the documentation is useful.
b) Feasibility Study
The feasibility of a project can be ascertained in terms of technical factors, economic
factors, or both. A feasibility study is documented with a report showing all the
ramifications of the project. In project finance, the pre-financing work (sometimes
referred to as due diligence) is to make sure there is no "dry rot" in the project and
to identify project risks ensuring they can be mitigated and managed in addition to
ascertaining "debt service" capability.
Technical Feasibility
Technical feasibility refers to the ability of the process to take advantage of the
current state of the technology in pursuing further improvement. The technical
capability of the personnel as well as the capability of the available technology
should be considered. Technology transfer between geographical areas and cultures
needs to be analyzed to understand productivity loss (or gain) due to differences
(see Cultural Feasibility).
Managerial Feasibility
Managerial feasibility involves the capability of the infrastructure of a process to
achieve and sustain process improvement. Management support, employee
involvement, and commitment are key elements required to ascertain managerial
feasibility.
Economic Feasibility
This involves the feasibility of the proposed project to generate economic benefits. A
benefit-cost analysis and a breakeven analysis are important aspects of evaluating
the economic feasibility of new industrial projects. The tangible and intangible
aspects of a project should be translated into economic terms to facilitate a
consistent basis for evaluation.
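A benefit-cost and break-even calculation of the kind mentioned above can be sketched as follows (all monetary figures are hypothetical examples):

development_cost = 500_000      # one-time cost of building the system
annual_benefit = 180_000        # yearly tangible benefit
annual_operating_cost = 30_000  # yearly running cost

net_annual_benefit = annual_benefit - annual_operating_cost
breakeven_years = development_cost / net_annual_benefit
five_year_bcr = (annual_benefit * 5) / (development_cost
                                        + annual_operating_cost * 5)

print(f"Break-even after {breakeven_years:.1f} years")    # 3.3 years
print(f"5-year benefit-cost ratio: {five_year_bcr:.2f}")  # 1.38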
Financial Feasibility
Financial feasibility should be distinguished from economic feasibility. Financial
feasibility involves the capability of the project organization to raise the appropriate
funds needed to implement the proposed project. Project financing can be a major
obstacle in large multi-party projects because of the level of capital required. Loan
availability, credit worthiness, equity, and loan schedule are important aspects of
financial feasibility analysis.
Cultural Feasibility
Cultural feasibility deals with the compatibility of the proposed project with the
cultural setup of the project environment. In labor-intensive projects, planned
functions must be integrated with the local cultural practices and beliefs. For
example, religious beliefs may influence what an individual is willing to do or not do.
Social Feasibility
Social feasibility addresses the influences that a proposed project may have on the
social system in the project environment. The ambient social structure may be such
that certain categories of workers may be in short supply or nonexistent. The effect
of the project on the social status of the project participants must be assessed to
ensure compatibility. It should be recognized that workers in certain industries may
have certain status symbols within the society.
Safety Feasibility
Safety feasibility is another important aspect that should be considered in project
planning. Safety feasibility refers to an analysis of whether the project is capable of
being implemented and operated safely with minimal adverse effects on the
environment. Unfortunately, environmental impact assessment is often not
adequately addressed in complex projects. As an example, the North American Free
Trade Agreement (NAFTA) between the U.S., Canada, and Mexico was temporarily
suspended in 1993 because of the legal consideration of the potential environmental
impacts of the projects to be undertaken under the agreement.
Political Feasibility
A politically feasible project may be referred to as a "politically correct project."
Political considerations often dictate direction for a proposed project. This is
particularly true for large projects with national visibility that may have significant
government inputs and political implications. For example, political necessity may be
a source of support for a project regardless of the project's merits. On the other
hand, worthy projects may face insurmountable opposition simply because of
political factors. Political feasibility analysis requires an evaluation of the
compatibility of project goals with the prevailing goals of the political system.
Environmental Feasibility
Often a killer of projects through long, drawn-out approval processes and outright
active opposition by those claiming environmental concerns. This is an aspect
worthy of real attention in the very early stages of a project. Concern must be
shown and action must be taken to address any and all environmental concerns
raised or anticipated. A perfect example was the recent attempt by Disney to build a
theme park in Virginia. After a lot of funds and efforts, Disney could not overcome
the local opposition to the environmental impact that the Disney project would have
on the historic Manassas battleground area.
Market Feasibility
Another concern is market variability and impact on the project. This area should
not be confused with the Economic Feasibility. The market needs analysis to view
the potential impacts of market demand, competitive activities, etc. and "divertible"
market share available. Price war activities by competitors, whether local, regional,
national or international, must also be analyzed for early contingency funding and
debt service negotiations during the start-up, ramp-up, and commercial start-up
phases of the project.
58.c
Information Gathering
1 Gathering all the relevant information is one of the most crucial tasks in the
analysis of a system.
2 The steps followed in gathering information are to first identify the
information sources and then find the appropriate method of obtaining
information from each identified source.
3 The most important source of information, both qualitative and quantitative,
are the end-users of the system at all levels.
4 The other secondary sources can be:
o Forms and reports used by the organization
o Procedure manual
o Book of rules
5 Information is gathered from the top down. An overview is obtained at the top.
Details are collected from the people at the working level. Gaining the confidence
of working-level users is vital for the success of the project.
6 General strategy used by an analyst to gather information:
o Identifying information sources
o Evolving a method of obtaining information from the identified
sources, and
o Using an informational flow model of the organization
7 Main sources of information of an organization:
o Interview with users of systems
o Group Discussion
o Forms and documents used in the organization
o Procedure manuals and rule books (if any)
o Any existing automated applications
o Systems used in other similar organizations
o Trade Journals
o Conference Proceedings
o Trade Statistics
o Conversation with other system analyst
8 Methods of Gathering Information:
o Interviewing various levels of managers
o Interviewing persons who will operate the system
o Using questionnaires distributed to the users
9 Interviews
o Both qualitative and quantitative information can be obtained
o Suggestions based on experience should be incorporated
o Reading of the background material and preparation of the checklist
prior to interview is always recommended
o Taking prior appointment, informing the purpose and time needed for
interview always help.
o Interview should be brief and should not exceed 40 minutes (ideally)
10 Group Discussions
o Useful to obtain consensus on priorities
o Many facts collected from individuals serve as useful input to the
project
11 Questionnaires
o Can gather quantitative data quickly from a large number of
respondents
o Short questionnaires elicit quick responses.
o Anonymity, if needed, of respondents can be preserved
o Follow-up may be required to get questionnaires back.
o Questionnaires should not be used when qualitative information is
needed
59. What is a data structure? What is its relation to data elements,
data processes, data flows, and data stores? [Nov-04]
Data Elements:-
The most fundamental level is the data element; e.g. invoice no., invoice
date, and amount due are data elements included in the invoice data flow.
These serve as building blocks for all other data in the system. By
themselves, they do not convey enough meaning to any user; e.g. the
meaning of the data item DATE on an invoice may be well understood: it means the
date the invoice was issued. However, out of this context it is meaningless. It might
pertain to a pay date, graduation date, starting date or invoice date.
Data Structures:-
It is a set of data items that are related to one another and that collectively
describe a component in the system.
Both data flows and data stores are data structures. They consist of the relevant
elements that describe the activity or entity being studied.
Relationship between data structures, data elements, data processes, data
flows, and data stores:-
1 Data element is the smallest unit of data that provides no further
decomposition.
2 Data structure is a group of data elements handled as a unit e.g. Student
Data Structure consist of student_id, student_name, gender, prog_enrolled,
blood_grp, Address, contact_no.
3 Data flows and data stores are data structures in motion and data structures
at rest respectively.
4 And, Process is a procedure that transforms incoming data flows to outgoing
Data flows.
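A minimal sketch of these relationships, using the Student data structure named in point 2 (the process and store names are illustrative assumptions):

from dataclasses import dataclass

@dataclass
class Student:            # data structure: related elements handled as a unit
    student_id: str       # each field below is a single data element
    student_name: str
    gender: str
    prog_enrolled: str
    blood_grp: str
    address: str
    contact_no: str

students_store = []       # data store: a data structure "at rest"

def enroll(record):
    """Process: transforms an incoming data flow into an outgoing one."""
    students_store.append(record)  # the moving data structure comes to rest
    return f"Enrolled {record.student_name} ({record.student_id})"

print(enroll(Student("S01", "Asha", "F", "BSc IT", "O+", "Pune", "9800000000")))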
60. Build an airline reservation system. Draw a context level diagram, DFD
up to two levels, ER diagram, and draw input and output screens. [Nov-04]
61. What are the goals of input and output design? [Apr-04]
Goals of Input Design:
1. The data which is input to a computerized system should be correct. If the
data is carelessly entered, it will lead to erroneous results. The system designer
should ensure that the system prevents such errors.
2. The volume of data should be considered, since if any error occurs, it will take
a long time to find out the source of the error.
The design of inputs involves the following four tasks:
a. Identify data inputs devices and mechanism.
b. Identify the input data and attributes
c. Identify input controls
d. Prototype the input forms
a. Identify data inputs devices and mechanism.
To reduce input error:
1. Automate the data entry process for reducing human errors.
2. Validate the data completely and correctly at the location where it is
entered. Reject the wrong data at its source only.
b. Identify the input data and attributes
1. It involves identifying information flow across the system
boundary.
2. When the input form is designed the designer ensures that all
these data elements are provided for entry, validations and
storage.
c. Identify input controls
1. Input integrity controls are used with all kinds of mechanisms
which help reduce data entry errors at the input stage and ensure
completeness.
2. Various error detection and correction techniques are applied.
d. Prototype the input forms
1. The users should be provided with the form prototype and related
functionality including validation, help, and error messages.
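A minimal sketch of rejecting wrong data at its source, as tasks (a) and (c) describe (the field rules below are hypothetical examples):

def validate_entry(field, value):
    """Return a list of error messages; an empty list means accepted."""
    errors = []
    if field == "age":
        if not value.isdigit() or not (0 < int(value) < 120):
            errors.append("age must be a number between 1 and 119")
    elif field == "sex":
        if value.upper() not in ("M", "F"):
            errors.append("sex must be M or F")
    return errors

print(validate_entry("age", "250"))  # rejected at the point of entry
print(validate_entry("sex", "m"))    # accepted: [] (no errors)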
Goals of Output Design:
In order to select an appropriate output device and to design a proper output format,
it is necessary to understand the objectives the output is expected to serve.
Following questions can address these objectives:
1. Who will use the report?
2. What is the proposed use of the report?
3. What is the volume of the output?
4. How often is the output required?
a. If the report is for top management, it must be summarized,
highlighting important results. Graphical outputs e.g. bar charts and pie
charts convey information in a form useful for decision making.
b. If the report is for middle management it should highlight exceptions.
For example, in a stores management system, information on whether items
are rapidly consumed or not consumed for a longer period should be
highlighted. Exceptions convey information for tactical management.
c. At the operational level all relevant information needs to be printed. For
example, in a payroll system, pay slips for all employees have to be printed
monthly.
d. The system should provide the volume of information appropriate for user
requirement. The total volume of printed output should be minimal. Irrelevant
reports should be avoided.
62. Describe the concepts and procedure used in constructing
DFDs, using an example of your own to illustrate. [Apr-04]
Data flow diagram (DFD):
1 DFD is a process model used to depict the flow of data through a system & the
work or processing performed by the system.
2 It is also known as a bubble chart, transformation graph & process model.
3 DFD is a graphic tool to describe & analyze the movement of data through a
system, using the processes, stores of data & delays in the system.
4 DFD are of two types:
A) Physical DFD
B) Logical DFD
1 Physical DFD:
Represents an implementation-dependent view of the current system & shows
what tasks are carried out & how they are performed. Physical
characteristics are:
1. names of people
2. form & document names or numbers
3. names of departments
4. master & transaction files
5. locations
6. names of procedures
2 Logical DFD:
They represent an implementation-independent view of the system & focus
on the flow of data independent of the specific devices, storage locations, or
people in the system. They do not specify the physical characteristics
listed above for physical DFDs.
The most useful approach to develop an accurate & complete description of the
system begins with the development of a physical DFD, which is then
converted to a logical DFD.
Procedure:
Step 1: make a list of business activities & use it to determine:
1 External entities
2 Data flows
3 Processes
4 Data stores
Step 2: Draw a context level diagram:
The context level diagram is a top level diagram & contains
only one process representing the entire system. Anything not inside the
context diagram will not be part of the system study.
Step 3: Develop a process chart:
It is also called a hierarchy chart or decomposition diagram. It shows the
top-down functional decomposition of the system.
Step 4: Develop the first level DFD:
It is also known as diagram 0 or the level 0 diagram. It is the explosion of
the context level diagram. It includes data stores & external entities. Here the
processes are numbered.
Step 5: Draw more detailed levels:
Each process in diagram 0 may in turn be exploded to create a more
detailed DFD. New data flows & data stores are added. This is the
decomposition/leveling of processes.
E.g. library management system
63. What considerations are involved in feasibility analysis? Which
consideration do you think is not crucial? Why?
Feasibility study
1. Feasibility study is the measure of how beneficial or practical development of
an information system will be to an organization.
2. Feasibility analysis is the process by which feasibility is measured
3. Feasibility study should be performed throughout the system development life
cycle.
4. A feasibility study is a study conducted to find out whether the proposed
system would be
1 Possible to build with given technology and given resources
2 Affordable given the time and cost constraints of the organization
3 Acceptable for the use by an eventual user of the system
Purpose of the feasibility study
1. Need analysis : to determine the need for a change in an organization
2. cost benefit analysis : to study the effect of the change on the economics of
the organization
3. Technical Feasibility : to evaluate various technologies that can be used for
implementing suggested change given the cost and resources constraints of
an organization
4. Legal feasibility : to evaluate the legal procedure , if any should come into
play to implement the suggested change
Feasibility considerations:
1. Need analysis
A need analysis is conducted with the following
objectives:
1 seeking background information of the organization
2 understanding the current issues to be tackled
3 understanding the user profile
Economic feasibility
Economic analysis is the most frequently used method for evaluating
the effectiveness of the candidate system. It is the most important phase in the
development of the project. It is also known as cost-benefit analysis.
While doing economic feasibility, one attempts to weigh the cost of
developing and implementing a new system against the benefits to be accrued from
having the new system in place.
Tangible benefits are those that can be measured in money value,
whereas intangible benefits are difficult to quantify.
Technical feasibility
Technical feasibility is a measure of the practicality of a technical
solution and the availability of technical resources and expertise.
It helps in understanding what level and kind of technology is needed
for the system.
It entails an understanding of the different technologies involved in the
proposed system, the existing technology levels within the organization, and the level of
expertise needed to use the suggested technology.
Legal feasibility
It entails checking for copyright violations for systems that have to be developed for
the open market, framing the contract for large systems, violation of terms, etc.
In order to ascertain legal feasibility, legal experts have to be called in.
In many organizations company secretaries can help out.
Personal/behavioral feasibility
People are inherently resistant to change, and computers have
been known to facilitate change.
It is common knowledge that computer installations have something
to do with turnover, transfers, retraining and changes in employee job status.
It is understandable that the introduction of a candidate system requires
special effort to educate, sell and train the staff on new ways of conducting business.
Political feasibility
It depends on the political environment of the organization
Steps in feasibility analysis
1 Form a project team and appoint a project leader
2 Prepare system flow charts
3 Enumerate potential candidate systems
4 Describe and identify the characteristics of the candidate systems
5 Determine and evaluate the performance of each candidate system
6 Weigh system performance and cost data
7 Select the best candidate system
8 Prepare and report the final project directives to management
64. What is normalization? What is the purpose of normalization? Illustrate
the method of normalization of databases.
Normalization is a data analysis technique that organizes data attributes such that
they are grouped to form non-redundant, stable, flexible and adaptive entities.
Normalization is a three step technique that places the data model into first normal
form (1NF), second normal form (2NF), and third normal form (3NF).
It is a process of analyzing the given relation schemas based on their
functional dependencies and primary key to achieve the desirable properties of:-
1 Minimizing Redundancy.
2 Minimizing insertion, deletion, and update anomalies.
Method of Normalization of Databases
The process of normalization was first proposed by Codd. The process
proceeds in a top-down fashion by evaluating each relation against the criteria of the
normal forms and decomposing relations as necessary.
A relation schema is said to be in first normal form (1NF) if the values in the
domain of each attribute of the relation are atomic. In other words, only one value is
associated with each attribute, and the value is not a set of values. A database schema
is in 1NF if every relation schema included in the database schema is in 1NF.
A relation schema is in second normal form (2NF) if it is in 1NF and if all the
non-prime attributes are fully functionally dependent on the relation keys. A database
schema is in 2NF if every relation schema included in the database schema is in 2NF.
A relation schema is in 3NF if for all non-trivial functional dependencies in F+
(the closure of the given set of functional dependencies) of the form X->A, either X
contains a key (i.e. X is a superkey) or A is a prime attribute. A database schema is
in 3NF if every relation schema included in the database schema is in 3NF.
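A minimal sketch of these decomposition steps on a hypothetical orders relation (the attributes and dependencies below are illustrative assumptions, not from the source):

# Unnormalized: a repeating group of items inside one row (violates 1NF).
unnormalized = {"order_no": 101, "cust_id": "C1", "cust_city": "Pune",
                "items": [("I1", 2), ("I2", 5)]}  # set of values in one field

# 1NF: atomic values only -- one row per (order, item) combination.
orders_1nf = [
    {"order_no": 101, "cust_id": "C1", "cust_city": "Pune",
     "item_id": "I1", "qty": 2},
    {"order_no": 101, "cust_id": "C1", "cust_city": "Pune",
     "item_id": "I2", "qty": 5},
]

# 2NF: cust_id and cust_city depend on order_no alone, a partial dependency
# on the key {order_no, item_id}, so they move to their own relation.
orders_2nf  = [{"order_no": 101, "cust_id": "C1", "cust_city": "Pune"}]
order_items = [{"order_no": 101, "item_id": "I1", "qty": 2},
               {"order_no": 101, "item_id": "I2", "qty": 5}]

# 3NF: cust_city depends on cust_id, not directly on the key order_no
# (a transitive dependency), so it moves to a customer relation.
orders_3nf = [{"order_no": 101, "cust_id": "C1"}]
customers  = [{"cust_id": "C1", "cust_city": "Pune"}]

Each step removes one kind of redundancy: after 3NF, the customer's city is stored exactly once, so the insertion, deletion and update anomalies mentioned above disappear.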
65. Discuss the six special system tests. Explain the purpose of each.
Give examples to show the purpose of the tests.
Following are the types of system testing:
1 Functional Requirement Testing
2 Regression Testing
3 Parallel Testing
4 Execution Testing
5 Recovery Testing
6 Operations Testing
The first three types of system testing focus on testing the functional aspects of
the system, i.e. examining if the system is doing all that it was expected to do,
and doing it completely and accurately.
The last three types of system testing focus on testing the structural aspects of
the system, i.e. examining if the structure of the system as built meets
expectations, and how well.
1) Functional Requirement Testing
This is described as follows:-
a. The focus of this system testing is to ensure that the system
requirements and specifications are all met by the system. It also
examines, if the application system meets the user standards.
b. The test conditions are created directly from the user requirements
and they aim to prove the correctness of the system functionality.
c. The development team may prepare a list of core functionality to
check and follow it rigorously during the system testing for completion.
d. Every software system must be tested for the functional requirements
testing. It is a mandatory system testing.
e. The testing activities should begin at the system analysis phase and
continue in every phase, as expected.
2) Regression Testing
This is described as follows:-
f. Software testing is peculiar in a way that a system change carried out
in one part of the application system may impact an unchanged part of
the application system and change its (previously tested) functionality.
g. The regression testing examines that the changes carried out
in one part have not changed the functionality in the other parts of the
system.
h. This may happen if a change (in software requirements or due to
testing) is implemented incorrectly - either in the wrong program (or
lines) or wrongly.
i. In order to carry out the regression testing, the previously run test-
data-packs are applied at the input of the tested and/or unchanged
programs and their actual outputs are compared to match exactly
the expected outputs.
j. It also checks that there is no change in the manual procedures of
unchanged parts of the application system.
k. Regression testing is more useful during system testing of large
complex systems, where the development team is very large, multilocation,
and/or team communication may be weak.
l. It is also advisable in frequently changing systems, such as application
systems with fast-changing requirements, where the regression
system testing not only is an effective technique but also saves
system testing time and effort significantly, controlling the costs.
m. Also, with the changing requirements, if the test data is not kept
current, the quality of regression testing is poor.
n. Regression testing is optional.
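The replay-and-compare procedure in point (i) can be sketched as a tiny harness (the billing function and test-data pack below are hypothetical stand-ins):

def compute_bill(units):
    """Stand-in for a previously tested part of the application."""
    rate = 4.0 if units <= 100 else 6.0
    return units * rate

# Previously run test-data pack: (input, expected output) pairs.
test_data_pack = [(50, 200.0), (100, 400.0), (150, 900.0)]

def run_regression(pack):
    """Return the cases whose actual output no longer matches."""
    return [(inp, want, compute_bill(inp))
            for inp, want in pack if compute_bill(inp) != want]

mismatches = run_regression(test_data_pack)
print("regression OK" if not mismatches else f"failures: {mismatches}")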
3) Parallel Testing
It is described as follows:-
o. The objective of the parallel system testing is to ensure that the new
application system generates exactly similar output data as the just
previous or the current application system.
p. Since the current application system is working live, the input data for
that system is readily available for parallel testing the new application
system. Also, the expected results for that data as input are also
computed by the current application system.
q. The parallel testing for an existing computerized current system
involves mainly setting up the environment for accepting the same
input data, running the new application system and matching the
results or highlighting the differences.
r. The purpose is to develop the user confidence in the new application
system. The users use the current system, so they are more
confident about the functionality of the current system. If the new
system's results match the current one's, users derive confidence from it.
s. The parallel testing is advantageous because the testing activities are
very minimal and user confidence, which is otherwise very difficult to
build, is built.
t. The drawback of the parallel testing is that the variation in the new
and current system functionality may make it difficult. Usually, the
user builds newer systems to draw many more benefits and therefore,
typically, there are many differences. Therefore, analyzing the
mismatches is a very complex task.
u. Since the users are not trained so well on the new system, or they are
new to the new system, they take time to analyze the differences and
hence it may be a costly activity.
v. Sometimes the changes may require changing the input data before
the test run. This may not be simple, and is therefore error-prone, time-
consuming and frustrating to many users.
w. The parallel run testing is optional.
4) Execution Testing
This is described as follows:-
x. The execution testing is to examine the new system by executing it.
The purpose is to check, how far it meets the operational expectations
of the users.
y. It focuses on measuring the actual performance of the system, by
measuring the response time, turn around time, etc.
z. The response time is measured by timing the following activities as
follows:-
The time taken by the new system to respond to critical
queries of the users, where a large database is accessed,
computations performed and the query response is displayed
on the user screen.
The time taken by the new system to respond to some on-line
transactions processing requests of the users, involving
Updating of the database and/or printing a document and/or
any other activity, as a part of the same transaction
processing.
Similar operations carried out by secondary User, such as a
Customer of the User organization, etc who is located in
different local and/or remote locations of the installations etc.
aa. The turn-around-time is measured in some situations as follows:-
For some batch processing part of the system, it measures
the end-to-end time taken for completely executing a certain
number of transactions.
The above processing may include exclusive dedicated
processing involving one or more of the following
activities, such as report printing or document printing (e.g.
bills) and/or mass database updating/insertions.
The above processing, in different combinations, can be
carried out from local and remote locations.
bb. The execution testing will also examine the system performance under
the User stated constraints. Various possible situations are as follows:-
Shortage of memory-either internal or external.
Use of slightly old and/or slower technologies to accept the
external input data or to send the output data to.
Restrictions of execution time-windows, e.g. a specific
processing must complete end-to-end in 45-60
minutes during the lunch hours in the Head Office, etc., or
night-processing/week-end-processing windows, etc.
cc. The execution testing is also used to find out the resource
requirements of the new system, such as the internal memory, storage
space, etc; this can also be used to plan for capacity expansion, e.g.
targeted speed of the proposed printer to be purchased soon.
dd. Sometimes it is not possible to create the actual scenarios during the
execution testing, for various reasons. Then the execution testing uses
simulations and models, and uses them effectively.
ee. The execution testing is advantageous, because it can be carried out
with little modification along with the other types of testing,
e.g. requirements testing. Therefore, it saves time, effort and costs of
total testing.
ff. The drawback of execution testing is that creating situations like
remote processing, or the use of new hardware/software proposed to be
bought but not yet in possession at execution time, may be difficult.
gg. Also, the simulation may not be possible at times.
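Measuring response time as described in point (z) can be sketched like this (the query function is a simulated stand-in, not a real database call):

import time

def critical_query():
    """Stand-in for a query that accesses a large database."""
    time.sleep(0.05)          # simulated processing delay
    return "result rows"

start = time.perf_counter()
critical_query()
elapsed = time.perf_counter() - start
print(f"response time: {elapsed * 1000:.1f} ms")  # roughly 50 ms here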
5) Recovery Testing
This is described as follows:-
hh. Due to any attempts of attacks on the software system's integrity,
such as a virus or unauthorized intrusion, etc., the system's integrity
may be threatened or sometimes harmed. In that case, the software
system's reliability is said to be in danger.
ii. The recovery testing is expected to examine how far the system can
recover from such a disintegrated state, and how fast.
jj. The recovery testing is also aimed at the following:-
Establishing the procedures for successful recovery.
Creating operational documentation for the recovery procedure.
Training the users on the same and providing them an opportunity
to develop/build confidence in the system security.
kk. The integrity loss may happen at any time for any long duration.
Therefore the system recovery processing must identify the time of
failure, duration of failure and the scope of damages carried out. The
recovery testing should examine various commonly occurring
situations and even some exceptional situations.
ll. The recovery procedure essentially involves the procedures for backing up
data, and documenting and training on the same.
mm. The recovery requirements differ from one application system
to another. Therefore it may not be carried out for some application
systems.
nn. The drawback of the recovery system testing is that the number of
security-failure scenarios it has been built for may be inadequate.
oo. Also, time and budget inadequacies may lead to avoiding it or to an
inadequate focus on it.
6) Operation Testing
This is described as follows:-
pp. The operation testing is to examine the operations of the new system
in the operational environment of the organization, along with the
other systems being executed simultaneously.
qq. Typically, it can be used for other purposes also, as mentioned
below:-
To examine if the operations documents are complete,
unambiguous and user friendly.
To examine the effectiveness of the User training.
To examine the completeness and correctness of the Job
Control source codes developed to automate system
operations.
To examine the operations of external interfaces and their
Users efficiency in handling them at work.
rr. The advantage of the operation testing is that it helps surface
otherwise hidden problems related to user training, documentation,
and other operational flaws.
ss. Also, if planned properly, it need not be carried out separately; it can
be combined to save testing time, effort and costs.
tt. The drawback is that the users' availability is always for a very limited
time. Therefore, this testing is either avoided or inadequately
performed.
66. Consider a payroll system for a college. Explain the system to be
developed, thinking the task through. Develop a context level
DFD, draw physical and logical DFDs, a data dictionary, and draw an ERD
diagram.
67. What is the purpose of the system study? Whom should it involve? What
outcome is expected?
Different stages in the System Study (SDLC) address the following key points:
1. Recognition of needs:
Identify the business problem and opportunities.
2. Feasibility Study:
Check whether the problem is worth solving. Redefine the problem.
3. Analysis:
Select appropriate solution for solving the problem.
4. Design:
Design the system to address how the problem must be solved and define the
system flow. The user's approval is important.
Proper testing should be exercised over each and every program/module.
5. Implementation:
Actual operations should be identified. Manuals should be provided to the
user.
6. Post Implementation and Maintenance:
Proper maintenance support should be provided. Modifications are done if
some change occurs.
The primary source of information for the functional system requirements is the
various types of stakeholders of the new system. Stakeholders are all the people
who have an interest in the successful implementation of the system.
Stakeholders are classified in three groups:
1. The users: who actually work on the system on a daily basis.
2. The clients: who pay for or own the system.
3. The technical staff: who ensure that the system operates in the computing
environment of the organization.
The next important step, after identifying the stakeholders, is to identify the critical
persons from each stakeholder type.
68. What is structured analysis? How is it related to fact-finding
techniques?
1) Structured analysis is a model-driven, process-centered technique used
to either analyze an existing system, or define business requirements for a new
system, or both.
2) The models are pictures that illustrate the system's component pieces:
processes and their associated inputs, outputs, and files.
3) The traditional approach focuses on cost-benefit and feasibility analysis,
project management, hardware and software selection, and personnel
considerations. In contrast, structured analysis considers new goals and
structured tools for analysis.
4) The new goals specify the following:
i. Use graphics wherever possible to help communicate better with the
user.
ii. Differentiate between logical and physical systems.
iii. Build a logical system model to familiarize the user with system
characteristics and interrelationships before implementation.
5) The structured tools focus on the following:
i. Data flow diagrams (DFDs)
ii. Data dictionary
iii. Structured English
iv. Decision trees and
v. Decision tables.
6) The objective is to build a new document, called system specifications. This
document provides the basis for design and implementation.
Step 1: Study affected user areas, resulting in a physical DFD.
Step 2: Remove the physical checkpoints and replace them with logical
equivalents, resulting in the logical DFD.
Step 3: Model the new logical system.
Step 4: Establish the man-machine interface.
Step 5: Quantify costs and benefits and select hardware.
Features of Structured Analysis:
1. It is graphic. The DFD for example, presents a picture of what is being
specified and is a conceptually easy-to-understand presentation of the
application.
2. The process is partitioned so that we have a clear picture of the progression
from general to specific in the system flow.
3. It is logical rather than physical. The elements of the system do not depend on
vendor or hardware. They specify in a precise, concise, and highly readable
manner the working of the system and how it hangs together.
4. It calls for a rigorous study of the user area, a commitment that is often taken
lightly in the traditional approach to system analysis.
5. Certain tasks that are normally carried out late in the system development life
cycle are moved to the analysis phase. For example, user procedures are
documented during analysis rather than later in implementation.
Following are the fact finding techniques:
1. Existing documents or record review
2. On-site observation
3. Interview
4. Questionnaires
Existing documents or record review:
1. Many kinds of records and reports can provide analysts with valuable
information about the organization and its operation.
2. In record reviews the analyst examines information that has been recorded
about the system and its users.
3. Records include written policy manual, regulation and standard operation
procedures used by most organization as a guide for managers and
employees.
4. They describe the format and functions of the present system. Included in
most manuals are system requirements that help determine how well
various objectives are met.
5. Up-to-date manuals save hours of information gathering time. Unfortunately,
in many cases, manuals do not exist or are seriously out of date.
On-site observation:
1. On-site observation is the process of recognizing and noting people, objects
and occurrences to obtain information.
2. The analyst role is that of an information seeker who is expected to be
detached
from the system being observed. This role permits participation with the user
staff
openly and freely.
3. The major objective of on-site observation is to get as close as possible to the
real system being studied. For this reason it is important that the analyst is
knowledgeable about the general make-up and activities of the system.
Advantages:
1. Data gathered based on the observation can be reliable. Sometimes
observations are conducted to check the validity of the data obtained directly
from individuals.
2. The system analyst is able to see exactly what is being done. Through
observation the analyst can identify tasks that have been missed or
inaccurately described by other fact-finding techniques.
3. Observation is relatively inexpensive compared with other fact-finding
techniques, which usually require substantially more employee release time and
copying expenses.
4. Observation allows the system analyst to do work measurements.
Disadvantages:
1. Because people usually feel uncomfortable when being watched they may
unwittingly perform differently when being observed.
2. The work being observed may not involve the level of difficulty or volume
normally experienced during that time period.
3. Some system activities may take place at odd times, causing a scheduling
inconvenience for the system analyst.
4. If people have been performing tasks in a manner that violates standard
operating procedures, they may temporarily perform their job correctly while
they are being observed. In other words people may let you see what they
want you to see.
Interviews:
1. Interview is a fact-finding technique whereby the system analyst collects
information from individuals through face-to-face interaction.
2. There are two roles assumed in an interview:
a. Interviewer: The system analyst is the interviewer responsible for
organizing interviews.
b. Interviewee: The system user or system owner is the Interviewee,
who is asked to respond to a series of questions.
3. There are two types of interviews:
c. Unstructured interviews: This is an interview that is conducted with
only a general goal or subject in mind and with few if any specific
questions. The interviewer counts on the interviewee to provide a
framework and direct the conversation.
d. Structured interviews: This is an interview in which the interviewer
has a specific set of questions to ask of the interviewee.
4. Unstructured interviews tend to involve asking open-ended questions, while
structured interviews tend to involve asking more closed-ended questions.
Advantages:
1. Interviews give the analyst an opportunity to motivate the interviewee to
respond freely and openly to questions.
2. Interviews allow the system analyst to probe for more feedback from the
interviewee.
3. Interviews permit the system analyst to adapt or reword questions for each
individual.
4. A good system analyst may be able to obtain information by observing the
interviewee's body movements and facial expressions as well as by listening to
verbal replies to questions.
Disadvantages:
1. Interviewing is a very time-consuming fact-finding approach.
2. It is costlier than other approaches.
3. Success of interviews is highly dependent on the system analyst's human
skills.
4. Interviewing may be impractical due to the location of the interviewees.
Questionnaires:
1. A questionnaire is a special-purpose document that allows the analyst to
collect information and opinions from respondents.
2. There are two formats for questionnaires:
a. Free-format questionnaires: these are designed to offer the respondent
greater latitude in answering. A question is asked, and the respondent
records the answer in the space provided after the question.
b. Fixed-format questionnaires: these are more rigid and contain
questions that require selecting an answer from predefined available
responses.
Advantages:
1. Most questionnaires can be answered quickly. People can complete and return
questionnaires at their convenience.
2. Questionnaires allow individuals to maintain anonymity. Therefore, individuals
are more likely to provide the real facts, rather than telling you what they think
their boss would want them to.
3. Questionnaires are a relatively inexpensive means of gathering data from a
large number of individuals.
4. Responses can be tabulated and analyzed quickly.
Disadvantages:
1. The number of respondents is often low.
2. There is no guarantee that an individual will answer or expand on all of the
questions.
3. Questionnaires tend to be inflexible. There is no opportunity for the system
analyst to obtain voluntary information from individuals or to reword
questions that may have been misinterpreted.
4. It is impossible for the system analyst to observe and analyze the respondent's
body language.
5. Good questionnaires are difficult to prepare.
69. What are the basic components of a file? Give an example of each.
Explain how files differ.
70. Explain how you would expect documentation to help analysts and
designers.
Introduction:
Documentation is not a step in the SDLC. It is an activity ongoing in every phase of
the SDLC. It is about developing documents initially as a draft, later on as a reviewed
document and then as a signed-off document.
The document is born either after it is signed off by an authority or after its review.
It carries an initial version number. However, the document also undergoes changes,
and then the only way to keep your document up to date is to incorporate these
changes.
Software Documentation helps Analysts and Designers in the following
ways:
1. The development of software starts with abstract ideas in the minds of the
Top Management of the User organization, and these ideas take different forms
as the software development takes place. The Documentation is the only link
between the entire complex processes of software development.
2. The documentation is a written communication; therefore, it can be used for
future reference as the software development advances, or even after the
software is developed, it is useful for keeping the software up to date.
3. The documentation carried out during a SDLC stage, say system analysis, is
useful for the respective system developer to draft his/her ideas in the form
which is shareable with the other team members or Users. Thus it acts as a
very important medium for communication.
4. The document reviewer(s) can use the document for pointing out the
deficiencies in them, only because the abstract ideas or models are
documented. Thus, documentation provides a facility to make abstract ideas
tangible.
5. When the draft document is reviewed and recommendations incorporated, the
same is useful for the next stage developers, to base their work on. Thus
documentation of a stage is important for the next stage.
6. Documentation is very important because it documents very important
decisions about freezing the system requirements, the system design and
implementation decisions, agreed between the Users and Developers or
amongst the developers themselves.
7. Documentation provides a lot of information about the software system. This
makes it a very useful tool to know about the software system even without
using it.
8. Since the team members in a software development team keep getting added
as the software development project goes on, the documentation acts as an
important source of detailed and complete information for the newly joined
members.
9. Also, the User organization may spread implementation of a successful
software system to a few other locations in the organization. The
documentation will help the new Users to know the operations of the software
system. The same advantage can be drawn when a new User joins the
existing team of Users. Thus documentation makes the Users productive on
the job, very fast and at low cost.
10. Documentation is live and important as long as the software is in use by the
User organization.
11. When the User organization starts developing a new software system to
replace this one, even then the documentation is useful, e.g. the system
analysts can refer to the documentation as a starting point for discussions on
the new system requirements.
Hence, we can say that Software documentation is a very important aspect of SDLC.
71. Discuss the six special system tests. Give examples.
The system testing examines the entire system to check how far it meets user
expectations, especially in terms of meeting the functional requirements and also
meeting the performance requirements.
Six different types of testing are
1 Functional requirements testing
2 Regression testing
3 Parallel testing
4 Execution testing
5 Recovery testing
6 Operations testing
Functional requirements testing:
This is described as follows:
1 The focus of this system testing is to ensure that the system requirements
and specifications are all met by the system. It also examines, if the
application system meets the user standards.
2 The test conditions are created directly from the user requirements and
they aim to prove the correctness of the system functionality.
3 The development team may prepare a list of core functionality to check
and follow it rigorously during the system testing for completion.
4 Every software system must be tested for the functional requirements
testing. It is a mandatory system testing.
5 The testing activities should begin at the system analysis phase and
continue in every phase, as expected.
Regression Testing:
This is described as follows:-
1 Software testing is peculiar in a way that a system change carried out in
one part of the application system may impact the unchanged part of the
application system and change its functionality.
2 The regression testing examines that the changes carried out in
one part have not changed the functionality in the other parts of the
system.
3 This may happen if a change is implemented incorrectly - either in the
wrong programs or wrongly.
4 In order to carry out the regression testing, the previously run test data
packs are applied at the input of the tested and/or unchanged programs
and their actual outputs are compared to match exactly the
expected outputs.
5 This testing is more useful during system testing of large complex systems,
where the development team is very large, multilocation, and/or team
communication may be weak.
Parallel Testing:
It is described as below:
1 The objective of the parallel system testing is to ensure that the new
application system generates exactly similar output data as the just
previous or the current application system.
2 The parallel testing for an existing computerized current system involves
mainly setting up the environment for accepting the same input data, running
the new application system and matching the results to highlight the
differences.
3 The parallel testing is advantageous because the testing activities are
minimum and user confidence, which is otherwise very difficult to build, is
built.
4 The drawback of the parallel testing is that the variations in the new and
current system functionality may make it difficult.
Execution testing:
It is described as below:
1 The execution testing is to examine the new system by executing it. The
purpose is to check, how far it meets the operational expectations of the
users.
2 It focuses on measuring the actual performance of the system, by measuring
the response time, turn around time etc.
3 The execution testing is also used to find out the resource requirements of
the new system, such as the internal memory, storage space, etc. this can
also be used to plan for capacity expansion.
4 The execution testing is advantageous, because it can be carried out with
little modifications along with the other types of testing. Therefore it saves
time effort and cost of total testing.
5 The drawback of execution testing is that creating situations like remote
processing, or the use of new hardware/software proposed to be bought but
not yet in possession at execution time, may be difficult.
Recovery testing:
It is described as below:
1 Due to any attempts of attacks on the software system's integrity, such as
a virus or unauthorized intrusion, the system's integrity may be threatened
or sometimes harmed. In that case, the software system's reliability is said
to be in danger.
2 The recovery testing is also aimed at the following:
- Establishing the procedures for successful recovery.
- Creating operational documentation for the recovery procedure.
- Training the users on the same and providing them an opportunity to
develop/build confidence in the system security.
1 The integrity loss may happen at any time for any long duration. Therefore
the system recovery processing must identify the time of failure, duration of
failure and the scope of damages carried out. The recovery testing should
examine various commonly occurring situations and even some exceptional
situations.
Operations testing:
It is described as below:
2 The operation testing is to examine the operations of the new system in the
operational environment of the organization, along with the other systems
being executed simultaneously.
3 Typically it can be used for the other purposes also, as mentioned below:
- To examine if the operations documents are complete, unambiguous
and user friendly
- To examine the effectiveness of the user training
- To examine the completeness and correctness of the job control
source codes developed to automate system operations
- To examine the operations of external interfaces and their users
efficiency in handling them at work.
1 The advantage of the operations testing is that it helps surface otherwise
hidden problems related to user training, documentation and any other
operational flaws.
2 The drawback is that the users' availability is always for a very limited time.
Therefore this testing is either avoided or inadequately performed.
72. Build a buying and selling system for an import business of engineering
products, using the following problem definition:
aa. Recording details of sales.
bb. Details of stock.
cc. Evaluation of stock and placement of orders to appropriate suppliers.
Develop a context-level DFD. Draw the physical and logical DFDs, the data
dictionary and the ER diagram.
73. Explain the difference between:
dd.Logical and physical record.
ee.Data item and field
ff.File activity and file volatility.
gg.Sequential and indexed sequential.
74. Logical and Physical record.
75. Data item and field.
76. File activity and file volatility.
77. Sequential and indexed sequential.
Logical record:-
1) A logical record maintains a logical relationship among all the data items in the
record.
2) It is the way the program or user sees the data.
3) The software presents the logical record in the required sequence.
Physical record:-
1) A physical record is the way data is recorded on the storage medium.
2) The programmer is not concerned with the physical layout of the data on the disk.
Data Item:-
1) Individual elements of data are called data items (also known as fields). Each data
item is identified by a name and has a specific value associated with it. The association
of a value with a field creates one instance of the data item.
2) Data items can comprise subitems or subfields. For example, Date is often used
as a single data item, consisting of the subfields month, day and year.
3) Whenever a subfield is referenced by name, it automatically includes only that
subfield and excludes all other subfields in the data item. Therefore the subfield
day excludes month and year.
Data Fields:
1) Fields are the smallest units of meaningful data stored in a file or database. There
are four types of fields that can be stored: primary keys, secondary keys, foreign
keys, and descriptive fields.
1 Primary key: A primary key is a field that uniquely identifies a record in a file.
2 Secondary key: A secondary key is a field that identifies a single record or a
subset of related records.
3 Foreign key: A foreign key is a field that points to records in a different file in
a database.
4 Descriptive fields: A descriptive field is a nonkey field that stores business
data.
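A minimal sketch of the four field types on a single record layout (the order-record fields are a hypothetical example):

    from dataclasses import dataclass

    @dataclass
    class OrderRecord:
        order_number: int    # primary key: uniquely identifies this record
        customer_name: str   # secondary key: may identify a subset of orders
        product_id: int      # foreign key: points to a record in the product file
        quantity: int        # descriptive field: nonkey business data
        amount: float        # descriptive field

    # Example usage:
    # rec = OrderRecord(order_number=1001, customer_name="Acme",
    #                   product_id=42, quantity=5, amount=1250.0)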
File Activity:
1) It specifies the percentage of actual records processed in a single run.
2) If a small percentage of records is accessed at any given time, the file should be
organized on disk for direct access; if a fair percentage is affected regularly,
then storing the file on tape would be more efficient and less costly.
File Volatility:-
1) It addresses how frequently records change.
2) File records with substantial changes are highly volatile, meaning a disk design
would be more efficient than tape.
3) The higher the volatility, the more attractive the disk design.
Sequential:-
1) In computer science, sequential access means that a group of elements (e.g.
data in a memory array, a disk file or a tape) is accessed in a predetermined,
ordered sequence. Sequential access is sometimes the only way of accessing the
data, for example if it is on a tape. It may also be the access method of choice, for
example if we simply want to process a sequence of data elements in order.
2) Write operations require that the data first be arranged in order; thus data needs
to be sorted before entry. Append - adding at the end of the file - is simple.
3) To access a record, the previous records within the block are scanned; thus the
sequential design is best suited for "get next" activities, reading one record
after another without a search delay.
4) Advantages:
1 Simple to design
2 Easy to program
3 Variable-length and blocked records are available
4 Best use of disk storage
5) Disadvantage:
1 Records cannot be added to the middle of the file.
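A minimal sketch of sequential access and "get next" reading (the comma-separated record format and file names are assumptions for illustration):

    def sequential_search(path: str, key: str):
        """Scan records in order; every lookup reads from the start of the file."""
        with open(path) as f:
            for line in f:                       # "get next" record
                record_key, _, value = line.partition(",")
                if record_key == key:
                    return value.strip()
        return None                              # key not present

    # Appending at the end is simple; inserting in the middle is not:
    # with open("orders.txt", "a") as f:
    #     f.write("1002,new record\n")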
Indexed Sequential:-
1) ISAM stands for Indexed Sequential Access Method, a method for storing data for
fast retrieval. ISAM was originally developed by IBM and today forms the basic data
store of almost all databases, both relational and otherwise.
2) The difference is in the use of indexes to locate the records.
3) Disk storage is divided into three parts:
1) Prime area: contains the file records stored by key or id number.
2) Overflow area: contains records added to the file that cannot be placed in
logical sequence in the prime area.
3) Index area: contains the keys of records and their locations on the disk.
4) Advantages:
1 Reduces the magnitude of the sequential search
2 Records can be inserted or updated in the middle of the file.
5) Disadvantages:
1 Extra storage is required
2 Searching takes longer than a direct lookup
3 Periodic reorganization of the file is required.
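A minimal sketch of the index-then-scan lookup idea, with in-memory dictionaries standing in for the three disk areas (all keys and records are illustrative):

    # Illustrative in-memory stand-ins for the three areas of disk storage.
    prime_area = {1: {"A101": "record A101", "A205": "record A205"},
                  2: {"B310": "record B310"}}
    overflow_area = {2: {"B377": "record B377"}}  # added later, out of sequence
    index_area = {"A101": 1, "A205": 1, "B310": 2, "B377": 2}  # key -> block

    def isam_lookup(key: str):
        """Use the index to find the block, then check prime and overflow areas."""
        block = index_area.get(key)              # index lookup, not a full scan
        if block is None:
            return None
        record = prime_area.get(block, {}).get(key)
        if record is not None:
            return record
        return overflow_area.get(block, {}).get(key)

    # Example: isam_lookup("B377") is found via the overflow area of block 2.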
74. Describe some ways of questioning during an interview. Describe
some typical users and the most appropriate questioning for them.
The development team asks questions of different users with the purpose of seeking
their exact and complete information requirements.
The types of questions to be asked: in both structured and semi-structured interview
meetings, questioning is the most powerful tool the software development team uses
to determine information requirements. Therefore, it is important to ask the right
questions, at the right time, to the right user members; to confirm, time and again,
the development team's understanding of the exact information requirements with the
users during the meeting; and to document them correctly for future reference.
It is also important to know how the questions should be framed.
There are two types of questions, as follows:
1. Open questions - These are questions which seek a description of some
aspect of the system. E.g. "What is the purpose of the new software system?"
is an open-ended question, since it expects an elaborate, descriptive answer.
Generally, this type of question starts with words such as what, how or why;
sometimes answers to questions starting with when and who may also be
descriptive.
The open questions are very useful in information requirements
determination because, if answered properly by the user, they provide a lot of
information to the developers. This helps very significantly in ending a lot of
uncertainty about the proposed application system with just a single question.
That means that, used properly, open-ended questions are very efficient for
information requirements determination.
2. Closed questions - These can be used effectively to freeze the
understanding of the user team and the development team on a long-discussed
topic. E.g., in a payroll application there was a long discussion on whether the
new information system should provide for two components of the salary
structure, a fixed component and a variable component. The meeting nearly
concluded that the new application system should provide the same. The
developers may then ask the users a question such as "Can this meeting
confirm to the development team that the new payroll application should
provide for a two-component salary structure, fixed and variable?", which will
force the meeting to answer either Yes, No, or "To be confirmed later by HR".
This summarizes the status of the agreement on this issue - an important issue
of the payroll application development, which can be concluded very
effectively by asking a closed question as above.
There are two types of interviews, unstructured and structured. Unstructured
interviews are characterized as involving general questions that allow the
interviewee to direct the conversation. This type of interview frequently gets off
track, and the analyst must be prepared to redirect the interview back to the main
goal or subject. For this reason, unstructured interviews don't usually work well for
systems analysis and design.
Structured interviews involve the interviewer asking specific questions designed
to elicit specific information from the interviewee. Depending on the interviewee's
responses, the interviewer will ask additional questions to obtain clarification or
amplification.
Some of these questions may be planned and others spontaneous.
Unstructured interviews tend to involve asking open-ended questions, which give
the interviewee significant latitude in their answers. An example of an open-ended
question is "Why are you dissatisfied with the report of uncollectible accounts?".
Structured interviews tend to involve asking more closed-ended questions that are
designed to elicit short, direct responses from the interviewee. Examples of such
questions are "Are you receiving the report of uncollectible accounts on time?" and
"Does the report of uncollectible accounts contain accurate information?".
Realistically, most questions fall between the two extremes.
75. What is decoupling? What is its role in system handling?
DECOUPLING AND ITS ROLE IN SYSTEM HANDLING
1 Decoupling enables each module of the system to work independently of the
others.
2 Decoupling enhances the adaptability of the system by helping to isolate the
impact of potential changes; i.e. the more the decoupling, the easier it is to
make changes to a subsystem without affecting the rest of the system.
3 The concept of decoupling is also used in:
Separation of functions between human beings and machines.
Defining the human-computer interfaces.
4 Decoupling is achieved by:
Defining the subsystems such that each performs a single
complete function.
Minimizing the degree of interconnection (exchange of data and
control parameters).
5 Key decoupling mechanisms in system designs are:
Inventory, buffer, and waiting line.
Provision of slack resources.
Extensive use of standards.
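A minimal sketch of one of the decoupling mechanisms listed above - a buffer/waiting line between two subsystems (the subsystem functions are illustrative):

    from queue import Queue

    buffer: Queue = Queue()   # the buffer decouples the two subsystems

    def order_entry(order: str) -> None:
        """Subsystem A: performs one complete function, then hands off."""
        buffer.put(order)     # the only interconnection is one data parameter

    def order_fulfilment() -> None:
        """Subsystem B: consumes at its own pace, independent of A."""
        while not buffer.empty():
            print("fulfilling", buffer.get())

    # Example usage:
    # order_entry("order-1"); order_entry("order-2"); order_fulfilment()

Because the two functions share only the buffer, either can be changed or rescheduled without affecting the other, which is exactly the adaptability benefit described above.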
76. Draw a context diagram for a purchasing system. Also draw two
levels of detail for the same. Write data dictionary entries for any 2
data elements, 2 data stores, 2 processes and 2 data structures of
your choice for the above system.
DATA DICTIONARY
Data Elements:
1. Shipment Request
Order number
Product id
Product name
Quantity
Order date
2. Low inventory notice
Order number
Product id
Product name
Quantity ordered
Quantity available
Data stores:
1. Order details
Order number
Product code
Product name
Quantity demanded
Quantity supplied
Customer name
Customer id
2. Stock Master
Product code
Product name
Quantity
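A minimal sketch of how such entries might be represented in code (the dictionary structure below is illustrative, not a standard):

    # Illustrative, hypothetical representation of two data dictionary entries.
    data_dictionary = {
        "Shipment Request": {
            "kind": "data element",
            "composition": ["Order number", "Product id", "Product name",
                            "Quantity", "Order date"],
        },
        "Stock Master": {
            "kind": "data store",
            "composition": ["Product code", "Product name", "Quantity"],
        },
    }

    # Definitions are accessible by name:
    # print(data_dictionary["Stock Master"]["composition"])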
77. Develop a decision tree and a decision table for the following:
hh. If the person is under 3 years of age, there is no admission
fee. If a person is under 16, half the full admission is charged,
and this admission is reduced to a quarter of the full admission if the
person is accompanied by an adult (the reduction applies only if
the person is under 12). Between 16 and 18, half the full
admission fee is charged if the person is a student; otherwise the
full admission is charged.
ii. Over 18, the full admission fee is charged.
jj. A discount of 10% is allowed for a person over 16 if they are in a
group of 10 or more.
kk. There are no student concessions during weekends. On
weekends, those under 12 get one free ride.
DECISION TREE
[Decision tree diagram. The root branches on age: age < 3 - free; 3 < age < 12 -
1/4 fee if accompanied by an adult, otherwise 1/2 fee; 12 < age < 16 - 1/2 fee;
16 < age < 18 - if a student (weekdays only), 1/2 fee, or 9/20 fee in a group of
10 or more; otherwise full fee, or 9/10 fee in a group of 10 or more; age > 18 -
full fee, or 9/10 fee in a group of 10 or more. Weekend branches remove the
student concession.]
DECISION TABLE
[Decision table. Condition rows: age band (age < 3, 3 < age < 12, 12 < age < 16,
16 < age < 18, age > 18), accompanied by adult (Y/N), student (Y/N), group of 10
or more (Y/N), weekend (Y/N). Action rows: free, 1/4 fee, 1/2 fee, full fee,
9/10 fee and 9/20 fee; each rule column marks exactly one action.]
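As a minimal executable rendering of these rules (a sketch: the handling of the exact age boundaries at 3, 12, 16 and 18 is an assumption, and the weekend free-ride rule is noted but not priced, since it affects rides rather than the fee):

    def admission_fee(age: int, with_adult: bool = False, student: bool = False,
                      group_size: int = 1, weekend: bool = False) -> float:
        """Admission fee as a fraction of the full fee (boundaries assumed)."""
        if age < 3:
            return 0.0                                       # no admission fee
        if age < 12:
            fee = 0.25 if with_adult else 0.5                # 1/4 fee with adult
        elif age < 16:
            fee = 0.5                                        # half admission
        elif age <= 18:
            fee = 0.5 if (student and not weekend) else 1.0  # no weekend concession
        else:
            fee = 1.0                                        # full admission
        if age > 16 and group_size >= 10:
            fee *= 0.9                                       # 10% group discount
        return fee

    # On weekends, under-12s also get one free ride (not a fee change).
    # Examples: admission_fee(17, student=True, group_size=12) -> 0.45 (9/20 fee)
    #           admission_fee(25, group_size=10) -> 0.9 (9/10 fee)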
What is the difference between system analysis and system design? How does
the focus of information system analysis differ from information system
design?
System analysis:
System analysis is a problem-solving technique that decomposes a system into its
component pieces for the purpose of studying how well those component parts work
and interact to accomplish their purpose.
System design:
System design is the process of planning a new business system or one to replace or
complement an existing system.
Information system analysis:
Information system analysis primarily focuses on the business problem and
requirements, independent of any technology that can or will be used to implement a
solution to that problem.
Information system design:
Information system design is defined as those tasks that follow system analysis and
focus on the specification of a detailed computer-based solution.
System analysis emphasizes the business problem; system design focuses on the
technical or implementation concerns of the system.
What are the elements of cost benefit analysis?
Cost benefit analysis is a procedure that gives a picture of the various costs, benefits
and rules associated with each alternative system.
Cost benefit categories:
In developing cost estimates for a system, we need to consider several cost elements.
They are:
1 Hardware costs: These relate to the actual purchase or lease of the computer and
peripherals (e.g. printer, disk drive, tape unit). Determining the actual
cost of hardware is generally more difficult when the system is shared by various
users than for a dedicated stand-alone system. In some cases, the
best way to control for this cost is to treat it as an operating cost.
2 Personnel costs: These include EDP staff salaries and benefits (health insurance,
vacation time, sick pay, etc.) as well as pay for those involved in developing the
system. Costs incurred during the development of a system are one-time costs
and are labeled developmental costs. Once the system is installed, the costs of
operating and maintaining the system become recurring costs.
3 Facility costs: These are the expenses incurred in the preparation of the physical
site where the application or the computer will be in operation. This includes
wiring, flooring, acoustics, lighting and air conditioning. These costs are treated as
one-time costs and are incorporated into the overall cost estimate of the candidate
system.
4 Operating costs: These include all costs associated with the day-to-day operation
of the system; the amount depends on the number of shifts, the nature of the
applications and the caliber of the operating staff. There are various ways of
covering operating costs. One approach is to treat operating costs as overhead.
Another method is to charge each authorized user for the amount of processing
they request from the system; the amount charged is based on computer time and
the volume of the output produced. In any case, some accounting is necessary to
determine how operating costs should be handled.
5 Supply costs: These are variable costs that increase with increased use of paper,
ribbons, disks, etc. They should be estimated and included in the overall cost of the
system.
The two major benefits are improving performance and minimizing the cost of
processing. The performance category emphasizes improvement in the accuracy of,
or access to, information and easier access to the system by authorized users.
Minimizing costs through an efficient system, error control or reduction of staff is a
benefit that should be measured and included in the cost benefit analysis.
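A minimal sketch of weighing costs against benefits - here a simple payback calculation (all figures in the usage note are invented for illustration):

    def payback_period(one_time_cost: float, annual_recurring_cost: float,
                       annual_benefit: float) -> float:
        """Years until cumulative net benefits cover the development cost."""
        net_annual = annual_benefit - annual_recurring_cost
        if net_annual <= 0:
            raise ValueError("costs exceed benefits; the system never pays back")
        return one_time_cost / net_annual

    # Example with invented figures: development cost 500,000, operating cost
    # 50,000/yr, tangible benefits 150,000/yr:
    # payback_period(500_000, 50_000, 150_000) -> 5.0 years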
Summarize the procedure for developing a DFD; illustrate using your own
example.
Structured analysis is a model-driven, process-centered technique used to analyze
an existing system, define business requirements for a new system, or both.
One of the tools of structured analysis is the DFD.
A DFD is a process model used to depict the flow of data through a system and the
work or processing performed by the system.
DFDs are of 2 types:
Physical DFD
Logical DFD
Physical DFDs represent an implementation-dependent view of the current system and
show what tasks are carried out and how they are performed.
Logical DFDs represent an implementation-independent view of the system and focus
on the flow of data, independent of the specific devices, storage locations or people
in the system.
The most comprehensive and useful approach to developing an accurate and complete
description of the current system begins with the development of physical DFDs,
which are then converted to logical DFDs.
Developing a DFD:
1. Make a list of business activities and use it to determine:
External entities (i.e. sources and sinks)
Data flows
Processes
Data stores
2. Draw a context-level diagram.
The context-level diagram is a top-level diagram and contains only one process,
representing the entire system. It determines the boundaries of the system;
anything that is not inside the context diagram will not be part of the system
study.
3. Develop a process chart.
This is also called a hierarchy chart or decomposition diagram. It shows the
top-down functional decomposition of the system.
4. Develop the first-level DFD.
This is also known as diagram 0 or the level-0 diagram. It is the explosion of
the context-level diagram: more processes are included, and data stores and
external entities appear. Here the processes are numbered.
5. Draw more detailed levels.
Each process in diagram 0 may in turn be exploded to create a more detailed
DFD. New data flows and data stores are added, and processes are further
decomposed/leveled.
Explain briefly, with examples:
Decision tree
A decision tree is a diagram that presents conditions and actions sequentially
and thus shows which conditions to consider first, which second, and so
on. It is also a method of showing the relationship of each condition and its
permissible actions.
The root of the tree, on the left of the diagram, is the starting point of the
decision sequence. The particular branch to follow depends on the conditions
that exist and the decision to be made. Progression from left to right
along a particular branch is the result of making a series of decisions.
Developing decision trees is beneficial to the analyst in two ways:
The need to describe conditions and actions forces analysts to formally
identify the actual decisions that must be made. It becomes difficult to
overlook an integral step in the decision process, whether it depends on
quantitative or qualitative values.
Decision trees also force analysts to consider the sequence of decisions.
E.g. a discount policy:
Condition:                        Action:
Size of order over $10,000        take 3% discount from invoice total
Size of order $5,000 to $10,000   take 2% discount
Size of order less than $5,000    no discount
Decision tables
A decision table is a matrix of rows and columns, rather than a tree, that
shows conditions and actions.
The decision table is made up of four sections: condition statements,
condition entries, action statements and action entries.
1 Condition statements identify the relevant conditions.
2 Condition entries tell which value, if any, applies for a particular
condition.
3 Action statements list the set of all steps that can be taken when a
certain condition occurs.
4 Action entries show what specific action in the set to take when
selected conditions or combinations of conditions are true.
The columns on the right side of the table, linking conditions and actions,
form decision rules, which state the conditions that must be satisfied for a
particular set of actions to be taken.
E.g. a decision table using the Y/N format for the payment discount:
Condition / Action         R1  R2  R3  R4  R5  R6
< 10 days                  Y   Y   Y   N   N   N
> $10,000                  Y   N   N   Y   N   N
$5,000 - $10,000           N   Y   N   N   Y   N
< $5,000                   N   N   Y   N   N   Y
Take 3% discount           X
Take 2% discount               X
Pay full invoice amount            X   X   X   X
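A minimal table-driven rendering of the decision table above (a sketch: encoding the rules as a lookup keyed on condition entries is one of several possible designs, and the handling of the exact $5,000/$10,000 boundaries is an assumption):

    # Each rule: (paid within 10 days, amount band) -> action entry.
    RULES = {
        (True,  "over_10000"):    "take 3% discount",
        (True,  "5000_to_10000"): "take 2% discount",
        (True,  "under_5000"):    "pay full invoice amount",
        (False, "over_10000"):    "pay full invoice amount",
        (False, "5000_to_10000"): "pay full invoice amount",
        (False, "under_5000"):    "pay full invoice amount",
    }

    def band(amount: float) -> str:
        """Map an invoice amount to its condition-entry band."""
        if amount > 10_000:
            return "over_10000"
        return "5000_to_10000" if amount >= 5_000 else "under_5000"

    def payment_action(days: int, amount: float) -> str:
        return RULES[(days < 10, band(amount))]

    # Example: payment_action(7, 12_000) -> "take 3% discount" (rule R1)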
Structured English
This technique is described as follows:
1 The structured English description of a process combines structured
programming techniques with simple English.
2 The statements used are very brief.
3 The process description is articulated very carefully.
4 This method does not use trees or tables, but rather narrative statements to
describe a procedure.
5 It does not show decision rules; it states them.
6 The terminology used in the structured description of an application consists
largely of data names for elements that are defined in the data dictionary
developed for the project.
7 It makes rich use of indentation to denote the nesting of blocks of
statements.
8 There is no scope for ambiguity or representation errors.
E.g. accounts payable processing:
Accept invoice for processing
Prepare payment voucher using invoice
Revise account balance due
Mail check to vendor
Data Dictionary
Data dictionaries are integral components of structured analysis, since data flow
diagrams by themselves do not fully describe the subject of the investigation.
A data dictionary is a catalog - a repository of the elements in a system.
In a data dictionary one will find a list of all the elements composing the data flowing
through a system. The major elements are:
Data flows
Data stores
Processes
The dictionary is developed during data flow analysis and assists the analysts
involved in determining system requirements; its contents are used during system
design as well.
Importance
1 To manage the details in large systems.
2 To communicate a common meaning for all system elements.
3 To document the features of the system.
4 To facilitate analysis of the details in order to evaluate characteristics and
determine where system changes should be made.
5 To locate errors and omissions in the system.
Contents of data dictionary
Data elements
The most fundamental data level is the data element. Data elements are the building
blocks for all other data in the system.
Data structures
A data structure is a set of data items that are related to one another and that
collectively describe a component in the system.
What is the reason for selecting the prototype development method? What
are the desired impacts on the application development process?
1 The system prototype method involves the user more directly in the analysis
and design experience than does the SDLC or structured analysis method.
2 A prototype is a working system - not just an idea on paper - that is developed
to test ideas and assumptions about the new system. Like any computer-based
system, it consists of working software that accepts input, performs calculations,
produces printed or displayed information, or performs other meaningful
activities. It is the first version or iteration of an information system - an
original model.
3 The design and the information produced by the system are evaluated by
users. This can be done effectively only if the data are real and the situations
live. Changes are expected as the system is used.
Reasons for system prototyping:
1 Information requirements are not always well defined. Users may know only
that certain business areas need improvement or that existing procedures must
be changed. Or they may know that they need better information for managing
certain activities but are not sure what that information is.
2 The users' requirements might be too vague to even begin formulating a
design. In other cases, a well-managed systems investigation may produce a
comprehensive set of system requirements, but building a system that will meet
those requirements may require the development of new technology.
3 Unique situations, about which developers have neither information nor
experience, and high-cost or high-risk situations, in which the proposed design
is new and untested, are often evaluated through prototypes.
4 The prototype is actually a pilot or test model; the design evolves through
use.
5 Although the prototype is a working system, it is designed to be easily changed.
Information gained through its use is applied to a modified design that may
again be used as a prototype to reveal still more valuable design information.
6 The process is repeated as many times as necessary to reveal the essential
design requirements.
7 System prototyping is an iterative process. It begins with only a few functions
and is expanded to include others that are identified later.
Steps in the prototyping process:
1. Identify the user's known information requirements and the features needed in
the system.
2. Develop a working prototype.
3. Use the prototype, noting needed enhancements and changes; these expand
the list of known system requirements.
4. Revise the prototype based on information gained through user experience.
5. Repeat these steps as needed to achieve a satisfactory system.
What is a feasibility study? What are the different types of feasibility study?
What considerations are involved in feasibility analysis? Which
consideration do you think is most crucial? Why?
Feasibility study:
1. Feasibility is the measure of how beneficial or practical the development of an
information system will be to an organization
2. Feasibility study (feasibility analysis) is the process by which feasibility is
measured.
3. Feasibility study should be performed throughout the system development life
cycle
4. A feasibility study is a study conducted to find out whether the proposed
system would be:
Possible to build with the given technology and resources,
Affordable given the time and cost constraints of the organization, and
Acceptable for use by the eventual users of the system.
Purpose of feasibility study:
1 Need analysis - to determine the need for a change in an organization.
2 Cost benefit analysis - to study the effect of the change on the economics of
the organization.
3 Technical feasibility - to evaluate various technologies that can be used for
implementing the suggested change given the cost and resources constraints
of an organization
4 Legal feasibility - to evaluate the legal procedures, if any should come into
play to implement the suggested change
5 Evaluation of alternatives - to evaluate the various alternatives that would be
thrown up with regard to resolving the problems of an organization and
recommend the best suited one.
Feasibility considerations:
1 Need analysis: A need analysis is conducted with the following objectives in
mind:
- Seeking background information of the organization.
- Understanding current issues to be tackled, and
- Understanding the user profile.
2 Economic feasibility:
- Economic analysis is the most frequently used method for evaluating
the effectiveness of a candidate system. It is the most important
phase in the development of the project. It is also known as cost
benefit analysis.
- While doing economic feasibility, one attempts to weigh the costs of
developing and implementing a new system against the benefits that
would accrue from having the new system in place. Several costs as
well as benefits of a system are considered when studying the
economic feasibility of a system.
- When benefits outweigh costs, a system is said to be economically
feasible. This includes both tangible and intangible benefits.
- Tangible benefits are those that can be measured in money value,
whereas intangible benefits are difficult to quantify.
3 Technical feasibility:
- Technical feasibility is a measure of the practicality of a technical
solution and the availability of the technical resources and expertise.
- It helps in understanding what level and kind of technology is needed
for a system.
- It includes functions, performance issues and constraints that may
affect the availability of the technical resources and expertise.
- Technical feasibility entails an understanding in different technologies
involved in the proposed system, existing technology levels within the
organization and the level of expertise to use the suggested
technology.
4 Legal feasibility:
- Legal feasibility entails copyright violations for systems that have to be
developed for the open market, framing of the contract for large
systems, violation of terms, etc.
- In order to ascertain legal feasibility, legal experts have to be called in.
- In many organizations, company secretaries can help out.
- It is beyond the expertise level of the analyst conducting the study.
5 Personal/behavioral feasibility:
- People are inherently resistant to change, and computers have been
known to facilitate change.
- An estimate should be made of how strong a reaction the user staff is
likely to have towards the development of a computerized system.
- It is common knowledge that computer installations have something to
do with turnover, transfers, retraining and changes in employee job
status.
- Therefore, it is understandable that the introduction of a candidate
system requires special effort to educate, sell and train the staff in new
ways of conducting business.
6 Political feasibility:
- It depends on the political environment of an organization.
7 Evaluation of alternatives:
- This includes an evaluation of alternative approaches to the development
of the system.
- The option with the lowest cost and maximum returns is considered
the most feasible option.
- However, a number of qualitative and intangible issues also greatly
influence this decision.
Steps in feasibility analysis:
1. Form a project team and appoint a project leader.
2. Prepare system flow charts.
3. Enumerate potential candidate systems.
4. Describe and identify the characteristics of the candidate systems.
5. Determine and evaluate the performance and effectiveness of each candidate
system.
6. Weigh system performance and cost data.
7. Select the best candidate system.
8. Prepare and report the final project directive to management.
What is normalization? What is purpose of normalization of database?
Normalization is a process of simplifying the relationships between data elements in a
record. Through normalization, a collection of data in a record structure is replaced by
successive record structures that are simpler and more predictable, and therefore
more manageable. Normalization is carried out for four reasons:
1 To structure the data so that any pertinent relationships between entities can
be represented.
2 To permit simple retrieval of data in response to query and report requests.
3 To simplify the maintenance of the data through updates, insertions and
deletions.
4 To reduce the need to restructure or reorganize data when new application
requirements arise.
Systems analysts should be familiar with the steps in normalization, since this
process can improve the quality of design for an application.
1 Decompose all data groups into two dimensional records.
2 Eliminate any relationships in which data elements do not fully depend on the
primary key of the record.
3 Eliminate any relationships that contain transitive dependence.
There are three normal forms. They are
First normal form:
One of the most basic improvements the analyst can make is to design the record
structure so that every record in a file is the same length. Variable-length records
create special problems, since the system must always check to see where a record
ends. Fixing the record length eliminates this problem.
First normal form is achieved when all repeating groups are removed so that a
record is of fixed length.
Second normal form:
Second normal form is achieved when a record is in first normal form and each item
in the record is fully dependent on the primary record key for identification. In other
words, the analyst seeks functional dependency: a data item is functionally
dependent if its value is uniquely associated with a specific data item.
Third normal form :
Third normal form is achieved when transitive dependencies are removed from a
record design. The general case is as follows:
A, B and C are three data items in a record.
If C is functionally dependent on B, and B is functionally dependent on A,
then C is functionally dependent on A.
Therefore, a transitive dependency exists.
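A minimal worked example of removing such a transitive dependency (the employee/department data is invented for illustration):

    # Before 3NF: dept_location depends on dept, which depends on emp_no
    # (emp_no -> dept -> dept_location is a transitive dependency).
    employees_2nf = [
        ("E1", "Sales", "Mumbai"),
        ("E2", "Sales", "Mumbai"),
        ("E3", "Stores", "Pune"),
    ]

    # Third normal form: split into two relations so that each nonkey item
    # depends only on the key of its own record.
    employees_3nf = [("E1", "Sales"), ("E2", "Sales"), ("E3", "Stores")]
    departments = [("Sales", "Mumbai"), ("Stores", "Pune")]

    # Updating a department's location now changes one row, not many.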
What are major threats of system security? Which one is more serious?
Why?
System security:
The system security problem can be divided into four related issues:
System security:
System security refers to the technical innovations and procedures applied to
the hardware and operating systems to protect against deliberate or accidental
damage from a defined threat.
System integrity:
System integrity refers to the proper functioning of hardware and programs,
appropriate physical security, and safety against external threats such as
eavesdropping and wiretapping.
Privacy:
Privacy defines the rights of users or organizations to determine what
information they are willing to share with, or accept from, others, and how the
organization can be protected against unwelcome, unfair or excessive dissemination
of information about it.
Confidentiality:
Confidentiality is a special status given to sensitive information in a
database to minimize possible invasions of privacy. It is an attribute of
information that characterizes its need for protection.
Threats to system security:
1 Errors and omissions: errors and omissions cover a broad range of miscues;
some result in incredible but short-lived damage.
2 Disgruntled and dishonest employees: when huge quantities of information are
stored in one database, sensitive data can easily be copied and stolen. A
dishonest programmer can bypass controls and surreptitiously authorize his/her
own transactions. Dishonest employees have an easier time identifying the
vulnerabilities of a software system than outside hackers, because they have
access to the system for a much longer time and can capitalize on its
weaknesses.
3 Fire: fire and other man-made disasters that deny the system power, air
conditioning or needed supplies can have a crippling effect. The design of the
system facility should therefore provide for fire-fighting equipment.
4 Natural disasters: natural disasters include floods, hurricanes, snowstorms,
lightning and other calamities. Although there is no way to prevent them
from occurring, there are measures to protect computer-based systems from
being wiped out.
5 External attack: outside hackers can get into the system and gain access to
confidential, sensitive data. This is possible because of bugs or vulnerabilities
in the current system.
According to one survey, an estimated $70 billion is lost each year to computer-related
crime, fraud and embezzlement; 75 percent of this is attributed to insiders of
organizations. Therefore disgruntled and dishonest employees are the most
serious threat to system security.
Define data structure. What are the major types of data structures?
An entity is a conceptual representation of an object. Relationships between entities
make up a data structure.
Three types of relationships exist among entities: one-to-one, one-to-many and
many-to-many.
A one-to-one (1:1) relationship is an association between two entities.
A one-to-many (1:M) relationship describes an entity that may have two or more
entities related to it.
A many-to-many (M:M) relationship describes entities that may have many
relationships in both directions.
Types of data structures:
Data structuring determines whether the system can create 1:1, 1:M, M:M
relationships among entities. There are three types of data structures: hierarchical,
network, and relational
Hierarchical structuring: hierarchical (also called tree) structuring specifies that
an entity can have no more than one owning entity; that is, we can establish a 1:1 or
1:M relationship. The owning entity is called the parent or root. There is only one root
in a hierarchical model.
A parent can have many children (1:M), whereas a child can have only one parent.
Network structuring: a network structure allows 1:1, 1:M or M:M relationships
among entities. A network structure reflects the real world.
Relational structuring: in relational structuring, all data and relationships are
represented in a flat, two-dimensional table called a relation. A relation is equivalent
to a file. It allows the user to update the table's contents.
Employee number   Degree
211               MBA
212               MCA
213               High school
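A minimal sketch of the relation above as a flat, two-dimensional table in code (rows taken from the table; the update examples are illustrative):

    # A relation as a flat, two-dimensional table: rows of (key, value) tuples.
    employee_degree = [
        (211, "MBA"),
        (212, "MCA"),
        (213, "High school"),
    ]

    # Updating the table's contents is a simple row operation:
    # employee_degree.append((214, "B.Tech"))
    # degrees = {emp: deg for emp, deg in employee_degree}  # lookup by key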