Manual Testing With Ans
b. Peer review: It is done by another tester who hasn't written those test cases but is familiar with the system under test. It is also known as Maker and Checker review.
c. Review by a supervisor: It is done by a team lead or manager who is senior to the tester who has written the test cases and who has deep knowledge of the requirements and the system under test.
5. What is a test suite?
A test suite is a set of several test cases designed for a component of a software or system under test, where the postcondition of one test case is normally used as the precondition for the next test.
software testing, which is performed without any sort of planning and/or documen
tation.
These tests are intended to run only once. However in case of a defect found it
can be
carried out again. It is also said to be a part of exploratory testing.
10. Explain performance testing.
It is one of the non-functional types of software testing. Performance of software is the degree to which a system or a component of a system accomplishes its designated functions within given constraints regarding processing time and throughput rate. Performance testing, therefore, is the process of testing to determine the performance of the software.
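A minimal performance check might look like the following; the `workload` function and the 0.5-second budget are made-up illustrations of a processing-time constraint, not a real benchmark.

```python
# Assert that a function meets a response-time constraint and estimate
# its throughput. The sort-based workload stands in for real work.
import time

def workload(n):
    return sorted(range(n, 0, -1))

runs = 100
start = time.perf_counter()
for _ in range(runs):
    workload(10_000)
elapsed = time.perf_counter() - start

avg = elapsed / runs          # average processing time per call
throughput = runs / elapsed   # calls completed per second
assert avg < 0.5, f"too slow: {avg:.4f}s per call"
print(f"avg {avg * 1000:.2f} ms/call, ~{throughput:.0f} calls/s")
```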
11. What is integration testing?
One of the software testing types, where tests are conducted to test the interfaces between components, and the interactions of the different parts of the system with the operating system, file system, hardware and other software. It may be carried out by the integrator of the system, but should ideally be carried out by a specific integration tester or a test team.
12. What is meant by functional defects and usability defects in general? Give an appropriate example.
We will take the example of a login window to understand functionality and usability defects. A functionality defect occurs when a user enters a valid user name but an invalid password, clicks the Login button, and the application accepts the credentials and displays the main window, where an error should have been displayed instead. On the other hand, a usability defect occurs when the user enters a valid user name but an invalid password, clicks the Login button, and the application throws up an error message saying "Please enter valid user name" when the message should have been "Please enter valid Password".
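The login example can be encoded as a quick check; the `USERS` store, the `login` function and the message strings are hypothetical stand-ins for the window described above.

```python
# A login routine free of both defects: invalid credentials never reach
# the main window (functional), and the error names the right field
# (usability).
USERS = {"alice": "s3cret"}  # made-up credential store

def login(username, password):
    if USERS.get(username) == password:
        return "main window"
    if username in USERS:
        # valid user, wrong password: complain about the password
        return "Please enter valid Password"
    return "Please enter valid user name"

assert login("alice", "s3cret") == "main window"
assert login("alice", "wrong") == "Please enter valid Password"
assert login("bob", "x") == "Please enter valid user name"
```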
13. What is pilot testing?
It is a test of a component of the software, or of the entire system, under real-time operating conditions. It helps to find faults in the system and prevent costly bugs, by having a selected group of end users use the system before its complete deployment and give feedback about the system.
15. What is regression testing?
Regression testing is the testing of a particular component of the software, or of the entire software, after modifications have been made to it. The aim of regression testing is to ensure that no new defects have been introduced into the component or software, especially in the areas where no changes have been made. In short, regression testing ensures that nothing has changed that should not have changed as a result of the modifications.
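A regression test in miniature, assuming a made-up `slugify` function whose crash on empty input was previously fixed; keeping the test ensures a later change cannot silently reintroduce the defect.

```python
# slugify turns a title into a URL fragment. An earlier version crashed
# on None/empty input; the guard below is the fix under regression test.
def slugify(title):
    if not title:
        return ""
    return title.strip().lower().replace(" ", "-")

# Regression checks: old behaviour that must not change...
assert slugify("Hello World") == "hello-world"
# ...and the fixed defect that must stay fixed.
assert slugify(None) == ""
assert slugify("") == ""
```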
16. What is system testing?
System testing is testing carried out on an integrated system to verify that the system meets the specified requirements. It is concerned with the behavior of the whole system, according to the scope defined. More often than not, system testing is the final test carried out by the development team, in order to verify that the system developed meets the specifications and also to identify any defects that may be present.
17. What is the difference between retest and regression testing?
Retesting, also known as confirmation testing, is testing which re-runs the test cases that failed the last time they were run, in order to verify the success of the corrective actions taken on the defect found. On the other hand, regression testing is testing of a previously tested program after modifications, to make sure that no new defects have been introduced. In other words, it helps to uncover defects in the unchanged areas of the software.
18. Explain priority, severity in software testing.
Priority is the level of business importance which is assigned to a defect found. Severity, on the other hand, is the degree of impact the defect can have on the development or operation of the component or the system.
19. Explain the test case life cycle.
On average, a test case goes through the following phases. The first phase of the test case life cycle is identifying the test scenarios, either from the specifications or from the use cases designed to develop the system. Once the scenarios have been identified, test cases apt for the scenarios have to be developed. The test cases are then reviewed, and approval for them has to be obtained from the concerned authority. After the test cases have been approved, they are executed. When execution of the test cases starts, the results of the tests have to be recorded. The test cases which pass are marked accordingly. If a test case fails, a defect has to be raised, and when the defect is fixed the failed test case has to be executed again.
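The phases above can be sketched as a small state machine; the state names are our own labels for the phases in the answer, not from any specific tool.

```python
# Allowed transitions through the test case life cycle. A failed case
# loops back to execution once its defect is fixed.
TRANSITIONS = {
    "Identified": ["Developed"],
    "Developed": ["Reviewed"],
    "Reviewed": ["Approved"],
    "Approved": ["Executed"],
    "Executed": ["Passed", "Failed"],
    "Failed": ["Executed"],   # re-execute after the defect is fixed
    "Passed": [],
}

def advance(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"cannot go from {state} to {new_state}")
    return new_state

state = "Identified"
for step in ["Developed", "Reviewed", "Approved",
             "Executed", "Failed", "Executed", "Passed"]:
    state = advance(state, step)
print(state)  # Passed
```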
20. What is Validation?
Validation is the process of evaluating software at the end of the software development process to ensure compliance with the software requirements. The techniques for validation are testing, inspection and reviewing.
21. What is Verification?
Verification is the process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced back to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.
22. What is the difference between the QA and software testing?
The role of QA (Quality Assurance) is to monitor the quality of the process used to produce a quality product. Software testing, on the other hand, is the process of examining the final product, checking its functionality and seeing whether the final product meets the users' requirements.
23. What is Testware?
Testware is the subset of software which helps in performing the testing of an application. It is the term given to the combination of software applications and utilities which are required for testing a software package.
24. What is the difference between build and release?
Build: It is a number given to installable software that is given to the testing team by the development team.
Release: It is a number given to installable software that is handed over to the customer by the tester or developer.
25. Explain the steps for Bug Cycle?
Once the bug is identified by the tester, it is assigned to the development manager in OPEN status.
If the bug is a valid defect, the development team will fix it; if it is not a valid defect, it will be ignored and marked as REJECTED.
If the defect has been raised earlier, the tester will assign it DUPLICATE status.
Once the defect is repaired, the status is changed to FIXED; at the end, the tester will give it CLOSED status if it passes the final test.
26. What does the test strategy include?
The test strategy includes the introduction, resources, scope and schedule for test activities, test tools, test priorities, test planning and the types of tests that have to be performed.
27. Mention the different types of software testing?
Unit testing
Integration testing and regression testing
Smoke testing
Functional testing
Performance testing
White box and Black box testing
Alpha and Beta testing
Load testing and stress testing
System testing
28. What is Agile testing and what is the importance of Agile testing?
Agile testing is software testing which involves testing the software from the customer's point of view. Its importance is that, unlike the normal testing process, this testing does not wait for the development team to complete the coding before testing starts; coding and testing proceed simultaneously. It requires continuous customer interaction.
It works on SDLC (Systems Development Life Cycle) methodologies, meaning that the task is divided into different segments and compiled at the end of the task.
29. What is Test case?
A test case is a set of conditions and steps used to test a specific element of an application. It contains information on the test steps, prerequisites, test environment and expected outputs.
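The contents of a test case can be captured as a small record; the field names below are our own sketch of the information listed above, not a standard schema.

```python
# A test case as a plain data record: steps, prerequisites, environment
# and the expected output, per the answer above.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    id: str
    prerequisites: list = field(default_factory=list)
    environment: str = ""
    steps: list = field(default_factory=list)
    expected_output: str = ""

tc = TestCase(
    id="TC-001",
    prerequisites=["user account exists"],
    environment="staging",
    steps=["open login page", "enter valid credentials", "click Login"],
    expected_output="main window is displayed",
)
assert tc.id == "TC-001" and len(tc.steps) == 3
```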
30. What is the difference between Load, Stress and Volume testing?
Load Testing: Testing an application under a heavy but expected load is known as Load Testing. Here, the load refers to a large volume of users, messages, requests, data, etc.
Stress Testing: When the load placed on the system is raised or accelerated beyond the normal range, it is known as Stress Testing.
Volume Testing: The process of checking whether the system can handle the required amounts of data, user requests, etc. is known as Volume Testing.
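A toy load test in the spirit of the definition above; `handle_request` is a stand-in function in place of a real endpoint, and the figures (50 workers, 1000 requests) are arbitrary.

```python
# Fire many concurrent requests at a function and check they all succeed,
# mimicking a load of simultaneous users.
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    # stand-in for a real service call
    return {"status": 200, "echo": payload}

with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(handle_request, range(1000)))

assert len(results) == 1000
assert all(r["status"] == 200 for r in results)
print("load of 1000 requests handled")
```

Raising the request count or payload size well beyond the expected range would turn this into a stress or volume test, respectively.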
35. What are the five common solutions for software development problems?
Set a realistic schedule, allowing time for planning, designing, testing, fixing bugs and re-testing.
Do adequate testing; start testing immediately after one or more modules are developed.
Use rapid prototyping during the design phase, so that it is easy for customers to find out what to expect.
Constraint Check
Stored procedure
Types of joins:
Natural Join
Inner Join
Outer Join
Cross Join
The outer join is divided again in two: Left Outer Join and Right Outer Join.
Types of indexes:
B-Tree index
Bitmap index
Clustered index
Covering index
Non-unique index
Unique index
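The difference between an inner and a (left) outer join can be seen with an in-memory SQLite database; the tables and rows below are made up for illustration.

```python
# Inner join drops unmatched rows; left outer join keeps them with NULLs.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp(id INTEGER, name TEXT, dept_id INTEGER);
    CREATE TABLE dept(id INTEGER, name TEXT);
    INSERT INTO emp VALUES (1, 'Ana', 10), (2, 'Bo', NULL);
    INSERT INTO dept VALUES (10, 'QA');
""")

inner = con.execute(
    "SELECT emp.name, dept.name FROM emp "
    "JOIN dept ON emp.dept_id = dept.id ORDER BY emp.id"
).fetchall()
outer = con.execute(
    "SELECT emp.name, dept.name FROM emp "
    "LEFT OUTER JOIN dept ON emp.dept_id = dept.id ORDER BY emp.id"
).fetchall()

assert inner == [('Ana', 'QA')]                  # Bo has no department
assert outer == [('Ana', 'QA'), ('Bo', None)]    # Bo kept, dept is NULL
```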
Database Testing
47. While testing stored procedures, what steps does a tester take?
The tester checks the standard format of the stored procedures, and also checks that fields such as updates, joins, indexes and deletions are correct, as mentioned in the stored procedure.
48. In database testing, how would you know whether a trigger has fired or not?
By querying the common audit log you would know whether a trigger has fired or not. It is in the audit log that you can see the triggers fired.
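The audit-log check can be simulated with SQLite; the `accounts`/`audit_log` schema and the trigger are made-up examples of the idea in the answer.

```python
# A trigger writes to an audit log; querying that log shows it fired.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE accounts(id INTEGER, balance REAL);
    CREATE TABLE audit_log(event TEXT, account_id INTEGER);
    CREATE TRIGGER trg_balance AFTER UPDATE ON accounts
    BEGIN
        INSERT INTO audit_log VALUES ('balance updated', NEW.id);
    END;
    INSERT INTO accounts VALUES (1, 100.0);
""")

con.execute("UPDATE accounts SET balance = 150.0 WHERE id = 1")
rows = con.execute("SELECT event, account_id FROM audit_log").fetchall()
assert rows == [('balance updated', 1)]   # the trigger fired exactly once
```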
49. In database testing, what are the steps to test data loading?
The following steps need to be followed to test data loading:
In SQL Enterprise Manager, open the corresponding DTS package and run it.
After updating data in the source, check whether the changes appear in the target or not.
It identifies, amongst others, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, the entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
a. Introduction:
Provide an overview of the test plan.
Specify the goals/objectives.
Specify any constraints.
b. References:
List the related documents, with links to them if available, including the following:
Project Plan
Configuration Management Plan
c. Test Items:
List the test items (software/products) and their versions.
d. Features to be Tested:
List the features of the software/product to be tested.
Provide references to the Requirements and/or Design specifications of the features to be tested.
e. Features Not to Be Tested:
List the features of the software/product which will not be tested.
Specify the reasons these features won't be tested.
f. Approach:
Mention the overall approach to testing.
Specify the testing levels [if it's a Master Test Plan], the testing types, and the testing methods [Manual/Automated; White Box/Black Box/Gray Box].
g. Item Pass/Fail Criteria:
Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.
h.Suspension Criteria and Resumption Requirements:
Specify criteria to be used to suspend the testing activity.
Specify testing activities which must be redone when testing is resumed.
i. Test Deliverables:
List test deliverables, and links to them if available, including the following:
Test Plan (this document itself)
Test Cases
Test Scripts
Defect/Enhancement Logs
Test Reports
j.Test Environment:
Specify the properties of the test environment: hardware, software, network, etc.
List any testing or related tools.
k. Estimate:
Provide a summary of test estimates (cost or effort) and/or provide a link to the detailed estimation.
l. Schedule:
Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the detailed schedule.
m. Staffing and Training Needs:
Specify staffing needs by role and required skills.
Identify training that is necessary to provide those skills, if not already acquired.
n. Responsibilities:
List the responsibilities of each team/role/individual.
o. Risks:
List the risks that have been identified.
Specify the mitigation plan and the contingency plan for each risk.
p. Assumptions and Dependencies:
List the assumptions that have been made during the preparation of this plan.
List the dependencies.
q. Approvals:
Specify the names and roles of all persons who must approve the plan.
Provide space for signatures and dates (if the document is to be printed).
57. Software Testing Levels
a. Unit Testing is a level of the software testing process where individual units/components of a software/system are tested. The purpose is to validate that each unit of the software performs as designed.
b. Integration Testing is a level of the software testing process where individual units are combined and tested as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units.
c. System Testing is a level of the software testing process where a complete, integrated system/software is tested. The purpose of this test is to evaluate the system's compliance with the specified requirements.
d. Acceptance Testing is a level of the software testing process where a system is tested for acceptability. The purpose of this test is to evaluate the system's compliance with the business requirements and assess whether it is acceptable for delivery.
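A unit test at the first level above, using Python's unittest; the `add` function is a trivial stand-in for a real unit under test.

```python
# Two unit tests validating that a single unit (add) performs as designed.
import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-1, 1), 0)

# Run the tests programmatically and check the outcome.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
)
assert result.wasSuccessful()
```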
58. What is an Entry Criterion?
An entry criterion is used to determine when a given test activity should start. It also covers the beginning of a level of testing, and when test design or test execution is ready to start.
Examples of Entry Criteria:
Verify that the test environment is available and ready for use.
Verify that the test tools installed in the environment are ready for use.
Verify that testable code is available.
What is an Exit Criterion?
An exit criterion is used to determine whether a given test activity has been completed or not. Exit criteria can be defined for all of the test activities, right from planning through specification and execution. Exit criteria should be part of the test plan and decided in the planning stage.
Examples of Exit Criteria:
Verify that all planned tests have been run.
Verify that the required level of requirement coverage has been met.
Verify that no critical or high-severity defects are left outstanding.
Verify that all high-risk areas are completely tested.
Verify that software development activities are completed within the projected cost.
Verify that software development activities are completed within the projected timelines.
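Exit criteria like those listed above can be tracked mechanically as a checklist; the criterion names and their values here are illustrative only.

```python
# A simple exit-criteria checklist: testing may stop only when every
# criterion is satisfied.
criteria = {
    "all planned tests run": True,
    "requirement coverage met": True,
    "no critical defects outstanding": True,
    "high-risk areas fully tested": True,
    "within projected cost": True,
    "within projected timeline": True,
}

unmet = [name for name, ok in criteria.items() if not ok]
can_exit = not unmet
assert can_exit, f"exit blocked by: {unmet}"
print("exit criteria satisfied")
```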