Evaluation: Quality Assurance Testing
Quality Assurance vs Software Testing:
• Quality Assurance covers activities designed to ensure the software conforms to its specification; Software Testing is the process of exploring a system in order to find defects.
• Quality Assurance involves activities related to the implementation of processes, procedures, and standards (e.g. audits, training); Software Testing involves activities concerning verification of the product (e.g. reviews, testing).
• The scope of SQA applies to all products created by the organization; the scope of Software Testing applies to the particular product being tested.
Defect Priority vs Defect Severity:
• Defect Priority defines the order in which the developer should resolve a defect; Defect Severity is the degree of impact a defect has on the operation of the product.
• Priority is categorized into three levels: Low, Medium, High. Severity is categorized into five levels: Critical, Major, Moderate, Minor, Cosmetic.
• Priority indicates how soon the bug should be fixed; severity indicates the seriousness of the defect with respect to product functionality.
• The priority of a defect is decided in consultation with the manager/client; the QA engineer determines the severity level of the defect.
• A high-priority, low-severity status indicates a defect that has to be fixed immediately even though it barely affects the application; a high-severity, low-priority status indicates a defect that has to be fixed, but not immediately.
• Priority is based on customer requirements; severity is based on the technical aspects of the product.
• During UAT the development team fixes defects based on priority; during SIT, defects are fixed based on severity first and then priority.
High Severity, Low Priority: a web page is not found when the user clicks on a link, but users generally do not visit that page. For example, customers who use very old browsers cannot complete their purchase of a product; because the number of customers with very old browsers is very low, fixing the issue is not a high priority.
Low Severity, High Priority: the company name is misspelled on the home page of the website, so the severity is low but the priority to fix it is high. Similarly, if the logo or name of the company is not displayed on the website, it is important to fix the issue as soon as possible, even though it may not cause much damage.
4) “QA activity should start at the beginning of the project”. Justify the statement.
I agree with the statement: the right time to start QA activities is at the very beginning of the project. Starting early allows the process to be planned so that the product that comes out meets the customer's quality expectations. Indeed, QA also plays a major role in the communication between teams.
Software testing life cycle (STLC) is a sequence of specific activities conducted during the testing
process to ensure software quality goals are met. STLC involves both verification and validation
activities.
Phases of the STLC:
➢ Requirement Analysis
During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements.
The QA team may interact with various stakeholders (Client, Business Analyst, Technical Leads, System
Architects etc.) to understand the requirements in detail.
Requirements could be either functional (defining what the software must do) or non-functional (defining system performance, security, and availability).
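As a rough illustration of this phase, the sketch below (with made-up requirement and test-condition IDs such as REQ-001 and TC-101) shows one way the testable requirements identified here could be traced to the test conditions derived from them, so that requirements with no test coverage stand out.

```python
# Hypothetical requirements, used only for illustration.
requirements = {
    "REQ-001": {"type": "functional", "text": "User can reset password via email"},
    "REQ-002": {"type": "non-functional", "text": "Search results return within 2 seconds"},
}

# A simple traceability mapping from each requirement to the test conditions
# the QA team derives while studying requirements from a testing point of view.
traceability = {
    "REQ-001": ["TC-101: valid email receives reset link",
                "TC-102: unknown email shows a generic message"],
    "REQ-002": ["TC-201: search latency under load stays below 2 s"],
}

# Any requirement without derived test conditions is flagged for follow-up.
untested = [rid for rid in requirements if not traceability.get(rid)]
print("Requirements without test conditions:", untested)
```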
➢ Test Planning
Typically, in this stage, a Senior QA manager will determine effort and cost estimates for the project and
would prepare and finalize the Test Plan. In this phase, Test Strategy is also determined.
➢ Test Case Development
This phase involves the creation, verification, and rework of test cases and test scripts. Test data is identified/created, reviewed, and then reworked as well.
➢ Test Environment Setup
The test environment determines the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development stage. The test team may not be involved in this activity if the customer/development team provides the test environment; in that case the test team is required to do a readiness check (smoke testing) of the given environment, as sketched below.
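A minimal sketch of such a readiness (smoke) check is shown below, assuming hypothetical environment URLs such as https://test-env.example.com/health; a real check would use the endpoints listed in the environment handover.

```python
import urllib.request
import urllib.error

# Hypothetical endpoints; in practice these come from the environment handover notes.
SMOKE_CHECKS = [
    "https://test-env.example.com/health",
    "https://test-env.example.com/login",
]

def environment_ready(urls, timeout=5):
    """Readiness (smoke) check: every critical endpoint must answer with HTTP 200."""
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status != 200:
                    return False
        except (urllib.error.URLError, TimeoutError):
            return False
    return True

if __name__ == "__main__":
    print("Environment ready:", environment_ready(SMOKE_CHECKS))
```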
➢ Test Execution
During this phase, the testers will carry out the testing based on the test plans and the test cases
prepared. Bugs will be reported back to the development team for correction and retesting will be
performed.
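The sketch below illustrates an execution run in miniature: a stubbed, hypothetical login function stands in for the system under test, the prepared test cases are executed, and any failures are collected as defect records to be reported back to the development team.

```python
import traceback

# Hypothetical system under test, stubbed here so the sketch is self-contained.
def login(user, password):
    return user == "alice" and password == "secret"

# Test cases prepared during the Test Case Development stage.
def tc_login_valid_user():
    assert login("alice", "secret") is True

def tc_login_wrong_password():
    assert login("alice", "guess") is False

def execute(test_cases):
    """Run every test case; failed cases become defect reports for the dev team."""
    defects = []
    for tc in test_cases:
        try:
            tc()
        except AssertionError:
            defects.append({"test_case": tc.__name__, "status": "New",
                            "details": traceback.format_exc()})
    return defects

print(execute([tc_login_valid_user, tc_login_wrong_password]))
```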
➢ Test Cycle Closure
The testing team meets, discusses, and analyzes the testing artifacts to identify strategies that should be implemented in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and to share best practices with similar projects in the future.
Agile testing is all about accommodating change: requirements may keep changing even in later development phases, so it is very important to understand the basics of the Agile methodology.
The main objective of Agile testing is to deliver the product to the client early, with a minimal set of functionalities. This only happens when the team has fully adapted to Agile development.
Agile testing is normally a continuous process carried out in multiple phases. Every new delivery adds more features, smoother functions, and more capabilities to the product. This method results in higher customer satisfaction.
When a testing team uses the Agile methodology, testing does not finish in just one phase; it is done at various intervals. Testing earlier ensures minimal damage and risk in the final application.
Applications with a minimal set of functionalities are ready faster, which satisfies the customer/client.
Testing is a process which ensures that the product meets all the requirements in each iteration, giving a visible sign of progress.
The process of testing is conducted by the entire team, not just by the testers.
Instead of heavyweight test documentation, lightweight documents and checklists suffice.
The developers and testers only consider the application complete once the tests have been implemented and run.
Verification vs Validation:
1. Verification is a static practice of verifying documents, design, code, and program; Validation is a dynamic mechanism of validating and testing the actual product.
2. Verification does not involve executing the code; Validation always involves executing the code.
3. Verification uses methods like inspections, reviews, walkthroughs, and desk-checking; Validation uses methods like black box (functional) testing, gray box testing, and white box (structural) testing.
4. Verification can catch errors that validation cannot catch and is a low-level exercise; Validation can catch errors that verification cannot catch and is a high-level exercise.
A Defect life cycle, also known as a Bug life cycle, is the cycle a defect goes through, covering the different states in its entire life. It starts as soon as a new defect is found by a tester and ends when the tester closes that defect, assured that it will not be reproduced again.
The number of states that a defect goes through varies from project to project.
• New: When a new defect is logged and posted for the first time, it is assigned the status "New".
• Assigned: Once the bug is posted by the tester, the tester's lead approves the bug and assigns it to the developer team.
• Open: The developer starts analyzing and working on the defect fix.
• Fixed: When the developer makes the necessary code change and verifies the change, he or she can set the bug status to "Fixed".
• Pending retest: Once the defect is fixed, the developer hands the code back to the tester for retesting. Since the retesting is still pending on the tester's end, the status assigned is "Pending retest".
• Retest: The tester retests the code at this stage to check whether the developer has fixed the defect, and changes the status to "Retest".
• Verified: The tester re-tests the bug after it has been fixed by the developer. If no bug is detected in the software, the bug is considered fixed and the status assigned is "Verified".
• Reopen: If the bug persists even after the developer has fixed it, the tester changes the status to "Reopened" and the bug goes through the life cycle once again.
• Closed: If the bug no longer exists, the tester assigns the status "Closed".
• Duplicate: If the defect is reported twice or corresponds to an already reported bug, the status is changed to "Duplicate".
• Rejected: If the developer feels the defect is not a genuine defect, he or she changes the status to "Rejected".
• Deferred: If the present bug is not of prime priority and is expected to be fixed in an upcoming release, the status "Deferred" is assigned to such bugs.
• Not a bug: If the reported issue does not affect the functionality of the application, the status assigned is "Not a bug".
i. White box Testing: Testing technique based on knowledge of the internal logic of an
application's code and includes tests like coverage of code statements, branches, paths,
conditions. It is performed by software developers.
ii. Black box Testing: A method of software testing that verifies the functionality of an
application without having specific knowledge of the application's code/internal structure.
Tests are based on requirements and functionality. It is performed by QA teams.
iii. Usability Testing: Testing technique which verifies the ease with which a user can learn to
operate, prepare inputs for, and interpret outputs of a system or component. It is usually
performed by end users.
iv. Unit Testing: Software verification and validation method in which a programmer tests whether individual units of source code are fit for use (a short sketch follows this list).
v. Top Down Integration Testing: Testing technique that involves starting at the top of a
system hierarchy at the user interface and using stubs to test from the top down until the
entire system has been implemented. It is conducted by the testing teams.
vi. System Testing: The process of testing an integrated hardware and software system to
verify that the system meets its specified requirements. It is conducted by the testing teams
in both development and target environment.
vii. Structural Testing: White box testing technique which takes into account the internal
structure of a system or component and ensures that each program statement performs its
intended function. It is usually performed by the software developers.
viii. Security Testing: A process to determine that an information system protects data and
maintains functionality as intended. It can be performed by testing teams or by specialized
security-testing companies.
ix. Integration Testing: The phase in software testing in which individual software modules are
combined and tested as a group.
x. Glass box Testing: Similar to white box testing, based on knowledge of the internal logic of
an application's code. It is performed by development teams.
xi. Dependency Testing: Testing type which examines an application's requirements for pre-
existing software, initial states and configuration in order to maintain proper functionality. It
is usually performed by testing teams.
xii. Bottom Up Integration Testing: In bottom-up integration testing, the modules at the lowest level are tested first, and the modules that lead towards the 'main' program are integrated and tested one at a time. It is usually performed by the testing teams.
xiii. Beta Testing: Final testing before releasing application for commercial purpose. It is typically
done by end-users or others.
xiv. Alpha Testing: Testing of a software product or system conducted at the developer's site. It is usually performed by end users.
xv. Agile Testing: Software testing practice that follows the principles of the agile manifesto,
emphasizing testing from the perspective of customers who will utilize the system. It is
usually performed by the QA teams.
xvi. Ad-hoc Testing: Testing performed without planning and documentation - the tester tries to
'break' the system by randomly trying the system's functionality. It is performed by the
testing team.
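To make items i, ii, and iv concrete, the sketch below uses a hypothetical shipping_fee function (not from the notes): two unit tests are derived purely from the stated requirement (black box), while a third targets the internal boundary between the two branches (white box).

```python
# Hypothetical function under test: orders of 50 or more ship free,
# otherwise a flat fee of 5 applies.
def shipping_fee(order_total):
    if order_total >= 50:
        return 0
    return 5

# Black box style unit tests: derived only from the stated requirement,
# without looking at the implementation.
def test_free_shipping_over_threshold():
    assert shipping_fee(80) == 0

def test_flat_fee_below_threshold():
    assert shipping_fee(20) == 5

# White box style unit test: written with knowledge of the internal branch,
# targeting the boundary value that decides which path executes.
def test_boundary_exactly_at_threshold():
    assert shipping_fee(50) == 0

if __name__ == "__main__":
    for test in (test_free_shipping_over_threshold,
                 test_flat_fee_below_threshold,
                 test_boundary_exactly_at_threshold):
        test()
    print("all unit tests passed")
```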