SE - Solution - Sample Questions
1. Explain COCOMO II model with a suitable example. A project of size 200 KLOC is to be developed.
The software development team has average experience on similar types of projects, and the project
schedule is not very tight. Calculate the effort, development time, average staff size and productivity of the project.
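A worked sketch of the calculation follows. Since the team has average experience and the schedule is not tight, the project is treated here as an organic-mode project and the basic COCOMO organic-mode coefficients (a = 2.4, b = 1.05, c = 2.5, d = 0.38) are assumed; a full COCOMO II estimate would additionally apply scale factors and effort multipliers.

# Sketch: organic-mode COCOMO estimate for a 200 KLOC project (assumed coefficients).
kloc = 200
effort = 2.4 * kloc ** 1.05          # ~625.6 person-months
duration = 2.5 * effort ** 0.38      # ~28.9 months
staff = effort / duration            # ~21.7 persons on average
productivity = kloc / effort         # ~0.32 KLOC per person-month
print(f"Effort = {effort:.1f} PM, Time = {duration:.1f} months, "
      f"Avg staff = {staff:.1f}, Productivity = {productivity:.2f} KLOC/PM")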
2. Explain different size estimation metrics with their advantages and disadvantages.
Estimation of the size of the software is an essential part of Software Project Management. It helps the
project manager to further predict the effort and time which will be needed to build the
project. Various measures are used in project size estimation. Some of these are:
● Lines of Code
● Number of entities in ER diagram
● Total number of processes in detailed data flow diagram
● Function points
1. Lines of Code (LOC): As the name suggests, LOC counts the total number of lines of source code in a
project. The units of LOC are:
● KLOC - Thousand lines of code
● NLOC - Non-comment lines of code
● KDSI - Thousands of delivered source instructions
The size is estimated by comparing it with the existing systems of the same kind. The experts use it to
predict the required size of various components of software and then add them to get the
total size.
It’s tough to estimate LOC by analysing the problem definition. Only after the whole code has been
developed can accurate LOC be estimated. This statistic is of little utility to project managers
because project planning must be completed before development activity can begin.
Two separate source files having a similar number of lines may not require the same effort. A file with
complicated logic would take longer to create than one with simple logic. Proper estimation
may not be attainable based on LOC.
LOC measures only the amount of code written, not the difficulty of the problem being solved. This
statistic will differ greatly from one programmer to the next: a seasoned programmer can write the
same logic in fewer lines than a newbie coder.
Advantages:
● Universally accepted and is used in many models like COCOMO.
● Estimation is closer to the developer’s perspective.
● Simple to use.
Disadvantages:
● Different programming languages require a different number of lines for the same functionality.
● No proper industry standard exists for this technique.
● It is difficult to estimate the size using this technique in the early stages of the project.
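As a small illustration of the LOC and NLOC units above, the sketch below counts total lines and non-comment lines of a source file; the file name is only a placeholder.

# Sketch: count LOC and NLOC (non-comment lines) of a source file.
# "example.py" is a placeholder path, not a file referenced in the question.
def count_loc(path):
    total = nloc = 0
    with open(path) as source:
        for line in source:
            total += 1
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                nloc += 1            # count only non-blank, non-comment lines
    return total, nloc

loc, nloc = count_loc("example.py")
print(f"LOC = {loc}, NLOC = {nloc}")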
2. Number of entities in ER diagram: The ER model provides a static view of the project. It describes the
entities and their relationships. The number of entities in the ER model can be used to estimate the
size of the project: the more entities there are, the more classes/structures are needed, which leads
to more coding.
Advantages:
● Size estimation can be done during the initial stages of planning.
● The number of entities is independent of the programming technologies used.
Disadvantages:
● No fixed standards exist. Some entities contribute more to project size than others.
● Just like FPA, it is less used in cost estimation models. Hence, it must be converted to LOC.
3. Total number of processes in detailed data flow diagram: A Data Flow Diagram (DFD) represents the
functional view of the software. The model depicts the main processes/functions involved in the
software and the flow of data between them. The number of processes in the DFD can be used to
predict software size: already existing processes of a similar type are studied and used to estimate
the size of each process, and the sum of the estimated sizes of the individual processes gives the
final estimated size.
Advantages:
● It is independent of the programming language.
● Each major process can be decomposed into smaller processes. This will increase the accuracy of
estimation
Disadvantages:
● Studying similar kinds of processes to estimate size takes additional time and effort.
● Construction of a DFD is not required for all software projects.
4. Function Point Analysis: In this method, the number and type of functions supported by the software
are utilized to find the FPC (Function Point Count). The steps in function point analysis are:
● Count the number of functions of each proposed type.
● Compute the Unadjusted Function Points (UFP).
● Find the Total Degree of Influence (TDI).
● Compute the Value Adjustment Factor (VAF).
● Find the Function Point Count (FPC).
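To make the steps concrete, here is a small sketch of the calculation; the function counts, the average-complexity weights and the TDI value are assumed figures for illustration only. It uses the standard relationships VAF = 0.65 + 0.01 * TDI and FPC = UFP * VAF.

# Sketch of a function point count with assumed, illustrative inputs.
counts  = {"EI": 30, "EO": 20, "EQ": 10, "ILF": 8, "EIF": 4}    # assumed function counts
weights = {"EI": 4,  "EO": 5,  "EQ": 4,  "ILF": 10, "EIF": 7}   # average-complexity weights

ufp = sum(counts[t] * weights[t] for t in counts)   # Unadjusted Function Points = 368
tdi = 35                                            # assumed Total Degree of Influence (0-70)
vaf = 0.65 + 0.01 * tdi                             # Value Adjustment Factor = 1.0
fpc = ufp * vaf                                     # Function Point Count = 368.0
print(f"UFP = {ufp}, VAF = {vaf}, FPC = {fpc}")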
Abstraction
An abstraction is a tool that enables a designer to consider a component at an abstract level without
bothering about the internal details of the implementation. Abstraction can be applied to existing
elements as well as to the component being designed.
There are two common abstraction mechanisms:
1. Functional Abstraction
2. Data Abstraction
Functional Abstraction
A module is specified by the function it performs.
The details of the algorithm to accomplish the functions are not visible to the user of the function.
Functional abstraction forms the basis for Function oriented design approaches.
Data Abstraction
Details of the data elements are not visible to the users of data. Data Abstraction forms the basis for
Object Oriented design approaches.
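A minimal sketch of data abstraction is given below: users of the stack interact only with push, pop and peek, while the way the elements are stored remains an internal detail (the class and names are illustrative).

# Sketch: data abstraction - callers never touch the internal list directly.
class Stack:
    def __init__(self):
        self._items = []              # hidden implementation detail

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def peek(self):
        return self._items[-1]

s = Stack()
s.push(10)
s.push(20)
print(s.pop())                        # 20 - the caller never sees the list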
Modularity
Modularity refers to the division of software into separate modules which are differently named and
addressed and are later integrated to obtain the completely functional software. It is the only
property that allows a program to be intellectually manageable. Single large programs are difficult to
understand and read due to the large number of reference variables, control paths, global variables, etc.
Strategy of Design
A good system design strategy is to organize the program modules in such a way that they are easy to
develop and, later, to change. Structured design methods help developers to deal with the size and
complexity of programs. Analysts generate instructions for the developers about how code should be
written and how pieces of code should fit together to form a program.
● Selection of structural elements and their interfaces by which the system is composed.
● Behavior as specified in collaborations among those elements.
● Composition of these structural and behavioral elements into a large subsystem.
● Architectural decisions align with business objectives.
● Architectural styles guide the organization.
Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or "low
cohesion."
Types of Module Cohesion
1. Functional Cohesion: Functional cohesion is said to exist if the different elements of a module
cooperate to achieve a single function.
2. Sequential Cohesion: A module is said to possess sequential cohesion if its elements form the
components of a sequence, where the output from one component of the sequence is input to the
next.
3. Communicational Cohesion: A module is said to have communicational cohesion, if all tasks of
the module refer to or update the same data structure, e.g., the set of functions defined on an
array or a stack.
4. Procedural Cohesion: A module is said to possess procedural cohesion if the functions of the
module are all parts of a procedure in which a particular sequence of steps has to be carried out
to achieve a goal, e.g., the algorithm for decoding a message.
5. Temporal Cohesion: When a module includes functions that are related by the fact that they must
all be executed within the same time span, the module is said to exhibit temporal cohesion.
6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the module
perform similar operations, for example, error handling, data input and data output.
7. Coincidental Cohesion: A module is said to have coincidental cohesion if it performs a set of
tasks that are associated with each other very loosely, if at all.
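As an illustration of the two ends of this scale, the sketch below contrasts a functionally cohesive module with a coincidentally cohesive one; the function names and tasks are made up for the example.

# Functional cohesion: every statement cooperates to achieve one function,
# computing the real roots of a quadratic equation.
def solve_quadratic(a, b, c):
    d = b * b - 4 * a * c
    if d < 0:
        return None                    # no real roots
    root = d ** 0.5
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

# Coincidental cohesion: unrelated tasks grouped together only by chance.
def misc_utilities(text, numbers):
    print(text.upper())                # a formatting task
    return sum(numbers)                # an unrelated arithmetic task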
Integration Testing: This testing exercises a collection of the modules of the software, where the
relationships and the interfaces between the different components are also tested. It needs
coordination with the project-level activity of integrating the constituent components a few at a
time. Integration and integration testing must adhere to a build plan that defines the order of
integration, so that bugs are identified in the early stages. However, an integrator or integration
tester must have programming knowledge, unlike a system tester.
11. Differentiate Verification and Validation with examples. Which comes first?
Verification is the process of reviewing the intermediate work products of a software development
lifecycle to ensure that we are on track to complete the final result.
In other words, verification is a process of evaluating the intermediate products of software development
to see whether they meet the requirements set out at the start of the phase.
These may comprise documentation like requirements specifications, design documents, database
table design, ER diagrams, test cases, traceability matrix, and so on that are created throughout the
development stages.
Verification uses review or non-executable techniques to guarantee that the system (software,
hardware, documentation, and staff) meets an organization's standards and procedures.
Validation is the process of assessing the completed product to see whether it fits the business
requirements. To put it another way, the test execution that we undertake on a daily basis is
essentially a validation activity that comprises smoke testing, functional testing, regression testing,
systems testing, and so on.
Validation encompasses all types of testing that involve interacting with the product and putting it
to the test. Verification comes first: it is performed on the intermediate work products throughout
development, before the final product exists, whereas validation is performed on the built product
afterwards.
The validation activities are listed below:
● Unit Testing
● Integration Testing
● System Testing
● User Acceptance Testing
1) Unit testing
Unit testing focuses on the smallest unit of software design, i.e., a module or software component.
The test strategy is conducted on each module interface to assess the flow of input and output.
The local data structure is examined to verify its integrity during execution.
Boundary conditions are tested.
All error handling paths are tested.
All independent paths are tested.
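A minimal unit-test sketch in the spirit of the points above is shown here; the divide function and its tests are illustrative, exercising a normal path and the boundary/error-handling path.

# Sketch: unit test of a hypothetical module function.
import unittest

def divide(a, b):
    if b == 0:
        raise ValueError("division by zero")     # error-handling path
    return a / b

class TestDivide(unittest.TestCase):
    def test_normal_path(self):
        self.assertEqual(divide(10, 2), 5)       # independent/normal path

    def test_boundary_condition(self):
        with self.assertRaises(ValueError):      # boundary and error handling
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()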
2) Integration testing
Integration testing is a systematic technique for constructing the software architecture while, at the
same time, conducting tests to uncover errors associated with interfacing.
13. How can effective equivalence partitioning based testing be done?
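No worked answer is provided in these notes; purely as an illustration, the sketch below partitions an assumed input domain (an age field that accepts 18-60) into one valid and two invalid equivalence classes and tests one representative value from each class, which is the essence of the technique.

# Sketch: equivalence partitioning for an assumed age field accepting 18-60.
# Classes: below range (< 18), in range (18-60), above range (> 60).
def is_valid_age(age):
    return 18 <= age <= 60

representatives = {"below_range": 10, "in_range": 35, "above_range": 75}
for name, value in representatives.items():
    print(name, value, is_valid_age(value))      # False, True, False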
14. Compute the cyclomatic complexity for the given code. Also compute the basis set and determine test
data for each basis path.
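The code referred to in the question is not reproduced in these notes. Purely to illustrate the method, the hypothetical function below has two decision points, so its cyclomatic complexity is V(G) = number of decisions + 1 = 3, and the basis set contains three independent paths, each covered by one test input.

# Hypothetical example (not the code from the question).
# Two decisions (if, elif) => V(G) = 2 + 1 = 3 basis paths.
def classify(x):
    if x < 0:                 # decision 1
        return "negative"
    elif x == 0:              # decision 2
        return "zero"
    return "positive"

print(classify(-5))           # basis path 1 -> "negative"
print(classify(0))            # basis path 2 -> "zero"
print(classify(7))            # basis path 3 -> "positive"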
15. Discuss various testing methods applicable for Web application.
Web Testing, or website testing, is checking your web application or website for potential bugs before
it is made live and accessible to the general public.
In Software Engineering, the following testing types/techniques may be performed depending on
your web testing requirements.
1. Functionality Testing of a Website
Functionality Testing of a Website is a process that includes several testing parameters like user
interface, APIs, database testing, security testing, client and server testing and basic website
functionalities.
2. Usability testing:
Usability Testing has now become a vital part of any web based project. It can be carried out by
testers like you or a small focus group similar to the target audience of the web application.
3. Interface Testing:
Three areas to be tested here are the application, the web server and the database server.
Application: Test that requests are sent correctly to the database and that output at the client side is
displayed correctly. Errors, if any, must be caught by the application and shown only to the
administrator, not the end user.
Web Server: Test that the web server handles all application requests without any service denial (a
minimal sketch of such a check follows this list).
Database Server: Make sure queries sent to the database give expected results.
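A minimal sketch of the web-server check mentioned above might look like this; the URL and expected status code are assumptions for illustration, and the requests library is used only as one possible HTTP client.

# Sketch: verify that the web server serves an application request.
# The URL below is a placeholder, not a real endpoint from the question.
import requests

response = requests.get("https://example.com/api/health", timeout=5)
assert response.status_code == 200, "web server did not serve the request"
print("request handled, response length:", len(response.text))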
4. Database Testing:
The database is one critical component of your web application, and stress must be laid on testing it
thoroughly. Testing activities will include checking for errors while executing queries, verifying that
data integrity is maintained while creating, updating or deleting data, and checking the response time
of queries.
5. Browser Compatibility Testing: The same website will display differently in different browsers. You
need to test whether your web application is displayed correctly across browsers and whether
JavaScript, AJAX and authentication are working fine.
6. Performance Testing:
This will ensure your site works under all loads. Software testing activities will include, but are not
limited to, load testing and stress testing of the web application under normal and peak loads.
Adaptive maintenance:
This includes modifications and updates when the customers need the product to run on new
platforms or new operating systems, or when they need the product to interface with new hardware
or software.
Perfective maintenance:
A software product needs maintenance to support the new features that the users want or to change
different types of functionalities of the system according to the customer demands.
Preventive maintenance:
This type of maintenance includes modifications and updates to prevent future problems with the
software. It aims to attend to problems which are not significant at this moment but may cause serious
issues in the future.
18. Mention reasons for project delay. Prepare an RMMM plan for the same.
19. Explain steps of Version control and Change control
Version control – creating versions/specifications of the existing product in order to build new products
with the help of the SCM system. A description of versioning is given below:
Suppose after some changes, the version of configuration object changes from 1.0 to 1.1. Minor
corrections and changes result in versions 1.1.1 and 1.1.2, which is followed by a major update that
is object 1.2. The development of object 1.0 continues through 1.3 and 1.4, but finally, a noteworthy
change to the object results in a new evolutionary path, version 2.0. Both versions are currently
supported.
Change control – controlling changes to Configuration Items (CIs). The change control process is
explained below:
A change request (CR) is submitted and evaluated to assess technical merit, potential side effects,
overall impact on other configuration objects and system functions, and the projected cost of the
change. The results of the evaluation are presented as a change report, which is used by a change
control board (CCB) —a person or group who makes a final decision on the status and priority of the
change. An Engineering Change Request (ECR) is generated for each approved change.
The CCB also notifies the developer, with a proper reason, if the change is rejected. The ECR describes
the change to be made, the constraints that must be respected, and the criteria for review and audit.
The object to be changed is “checked out” of the project database, the change is made, and then the
object is tested again. The object is then “checked in” to the database and appropriate version
control mechanisms are used to create the next version of the software.
1. Risk Identification:
Risk identification involves brainstorming activities. It also involves the preparation of a risk list.
Brainstorming is a group discussion technique where all the stakeholders meet together. This
technique produces new ideas and promotes creative thinking.
Preparation of the risk list involves identifying risks that have occurred repeatedly in previous
software projects.
2. Risk Analysis and Prioritization:
It is a process that consists of the following steps:
● Identifying the problems causing risk in projects
● Identifying the probability of occurrence of the problem
● Identifying the impact of the problem
● Preparing a table consisting of all the values and ordering the risks on the basis of the risk exposure factor
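The last step above can be illustrated with a small sketch: the risk exposure factor is computed as probability x impact and the risks are ordered by it; the risk names and numbers are assumed for illustration.

# Sketch: prioritize risks by risk exposure = probability x impact (assumed values).
risks = [
    {"risk": "key staff turnover",    "probability": 0.3, "impact": 8},
    {"risk": "requirements change",   "probability": 0.6, "impact": 5},
    {"risk": "third-party API delay", "probability": 0.2, "impact": 9},
]

for r in risks:
    r["exposure"] = r["probability"] * r["impact"]

for r in sorted(risks, key=lambda item: item["exposure"], reverse=True):
    print(f'{r["risk"]}: exposure = {r["exposure"]:.1f}')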
3. Risk Avoidance and Mitigation:
The purpose of this technique is to altogether eliminate the occurrence of risks. One method to
avoid risks is to reduce the scope of the project by removing non-essential requirements.
4. Risk Monitoring:
In this technique, the risk is monitored continuously by reevaluating the risks, the impact of risk, and
the probability of occurrence of the risk.
This ensures that:
● Risk has been reduced
● New risks are discovered
● The impact and magnitude of risks are measured