Testing Training (1) - Jan - Batch
System
Firmware
Programming
Driver
Freeware
Shareware
What is testing?
Defect/bug: the variance between the expected and actual result is known as a bug.
Product: a product is a ready-to-use solution which is built by the company and sold to different customers.
Purpose of testing
To prevent defects
To make sure that the end result meets business and user requirements
To ensure that it satisfies the BRS (Business Requirement Specification) and SRS (System Requirement Specification)
Software Quality
Steps taken to build the product starting from scratch, i.e., from gathering the requirements till deploying the product.
Planning
Design (review)
Coding
Testing
Deploy
Maintenance
Testing has to start from first phase itself and it has to be conducted at each and every
stage of SDLC
Stage containment: testing at each and every stage of SDLC is known as stage containment
Without stage containment: loss of time, money, raw materials (resources), effort, etc.
Testing
Though nothing is executed, sometimes tools are used.
Artifact: The document which has to be reviewed is the artifact in this scenario
C. Technical reviews
A. White box testing: a testing strategy in which test design is based on the internal logic, structure and implementation of the AUT (application under test).
Other names: structural testing, glass box testing, clear box testing, open box testing
Techniques:
1. Statement coverage
Execution of all statements of the source code at least once. It is used to calculate the number of statements in the source code that have been executed, out of the total statements present in the source code.
Test (int a) {
    If (a > 4)
        a = a * 3;
    Print (a);
}
Conclusion: only 1 test case (a = 5) is enough to achieve 100% statement coverage.
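A minimal runnable sketch of this statement-coverage example in Python (not from the original notes; the function mirrors the pseudocode above):

    def test(a):
        if a > 4:       # statement 1 (the decision)
            a = a * 3   # statement 2
        print(a)        # statement 3

    # With a = 5 the condition is true, so all three statements execute:
    # statement coverage = executed statements / total statements = 3/3 = 100%
    test(5)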
100% branch/decision coverage implies 100% statement coverage, but 100% statement coverage does not imply 100% branch/decision coverage. Branch coverage and decision coverage are the same.
2. Decision coverage
For each if statement, both the true and the false outcome have to be checked compulsorily: one test case checks the true outcome and another checks the false outcome.
If (a || b) {
    Test1 = true;
} else {
    If (c) {
        Test2 = true;
    }
}
With a = -1, b = 0, c = 3 (treating non-zero values as true), only the true outcome of (a || b) is exercised, i.e. 1 of the 4 decision outcomes, so DC = 25%.
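Because the second decision is nested inside the else branch, two tests are not enough for 100% decision coverage here. A runnable Python sketch (values other than the notes' a = -1, b = 0, c = 3 are illustrative, with non-zero treated as true):

    def check(a, b, c):
        test1 = test2 = False
        if a or b:          # decision 1
            test1 = True
        else:
            if c:           # decision 2 (only reachable when decision 1 is false)
                test2 = True
        return test1, test2

    print(check(-1, 0, 3))  # decision 1 true           -> (True, False)
    print(check(0, 0, 3))   # decision 1 false, 2 true  -> (False, True)
    print(check(0, 0, 0))   # decision 1 false, 2 false -> (False, False)
    # Three tests cover all four decision outcomes: 100% decision coverage.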
3. Branch coverage
4. Condition coverage
5. Multiple condition coverage
7. Path coverage
Exhaustive testing is not possible due to time and budget, so black box testing techniques help to reduce the number of test cases and also help in conducting the testing in an effective way.
Techniques:
1. Equivalence partitioning:
Applied to all levels of testing, like unit, integration, system, etc.
In this technique, the input data values are divided into equivalent partitions that can be used to derive test cases, which reduces the time required for testing because of the smaller number of test cases.
Question: a bank is offering a loan in the range of $0 to $100 with 5% interest, a balance over $100 up to $10,000 with 10% interest, and a balance over $10,000 with 20% interest.
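A sketch of how equivalence partitioning reduces the tests for this question (a hypothetical Python implementation; one representative value per partition is enough, plus one invalid value):

    def interest_rate(balance):
        if balance < 0:
            raise ValueError("invalid balance")  # invalid partition
        if balance <= 100:
            return 0.05       # partition 1: $0 - $100 -> 5%
        if balance <= 10_000:
            return 0.10       # partition 2: over $100 up to $10,000 -> 10%
        return 0.20           # partition 3: over $10,000 -> 20%

    # one representative test value per partition (plus an invalid value)
    for balance in (-1, 50, 5_000, 20_000):
        try:
            print(balance, interest_rate(balance))
        except ValueError as err:
            print(balance, err)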
2. Boundary value analysis:
Testing between extreme ends or boundaries between partitions of the input values.
If (Min, Max) is the range given for a field validation, then the boundary values come as follows: {Min - 1, Min, Min + 1} and {Max - 1, Max, Max + 1}.
Question:
Requirements: a user name field allows only lower case letters, min 6 and max 8 characters.
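A sketch of the boundary values for this question (a hypothetical Python validator; for min 6 / max 8 the interesting lengths are 5, 6, 7, 8 and 9):

    import re

    def valid_username(name):
        # lower case letters only, min 6 and max 8 characters
        return re.fullmatch(r"[a-z]{6,8}", name) is not None

    # boundary lengths: 5 (min-1), 6 (min), 7 (nominal), 8 (max), 9 (max+1)
    for name in ("abcde", "abcdef", "abcdefg", "abcdefgh", "abcdefghi"):
        print(len(name), name, valid_username(name))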
Decision Table:
Causes of software defects:
Changing requirements
Programming errors
Miscommunication or no communications
Software complexity
Time pressures
SDLC Models
1. Waterfall model
Requirement—design—development—testing—deployment—maintenance
Suitable for projects where the requirements are very clear; not suitable for changing or unclear requirements.
Advantage: very little chance of a defect being carried from one stage to another.
Drawbacks: a very rigid, inflexible model; testing starts at a very late stage; defects found at a later stage are very difficult to fix.
Prototyping model
A prototype of the end product is first developed, tested and refined as per customer feedback repeatedly till a final acceptable prototype is achieved, which forms the basis for developing the final product.
Iterative model
A new technology is used and learnt by the development team while working on the project.
Spiral model
Long-term projects may not be feasible due to altered economic priorities.
V-shape model:
4 levels of testing:
1. Unit testing: testing the individual units/components of the software in isolation.
2. Integration testing: integrating all the units and checking whether the units have been integrated in the proper order, and checking the correct flow of data from one module to another.
3. System testing: testing the complete, integrated system against the specified requirements.
4. User acceptance testing: the client should accept your product; whether the product is according to the customer/client specification or not.
-------------------------------------------------------------------------------------------------------------------------------------
Positive testing: checking for valid conditions is known as positive testing.
Negative testing: checking for invalid conditions is known as negative testing.
5. Pesticide paradox:
The Pesticide Paradox principle says that if the same set of test cases is executed again and again over a period of time, then this set of tests is no longer capable of identifying new defects in the system.
In order to overcome this "Pesticide Paradox", the set of test cases needs to be regularly reviewed and revised. If required, a new set of test cases can be added, and the existing test cases can be deleted if they are not able to find any more defects in the system.
6. Testing is context dependent:
Different domains are tested differently; thus testing is purely based on the context of the domain or application. Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
7. Absence of errors fallacy:
If the software is fully tested and no defects are found before release, then we can say that the software is 99% defect free. But what if this software is tested against the wrong requirements? In such cases, even finding defects and fixing them on time would not help, as the testing is performed on wrong requirements which are not as per the needs of the end user.
-------------------------------------------------------------------------------------------------------------------------------------
Agile
Scrum
Kanban
--------------------------------------------------------------------------------------------------------------------------------
How many test cases are needed for 100% statement coverage? -- 1
How many test cases are needed for 100% decision coverage? -- 2
Read p
Read q
If p + q > 100 then
    Print "large"
Endif
If p > 50 then
    Print "p large"
Endif

Test 1: p = 71, q = 30 (both decisions true, so every statement executes)
Test 2: p = 30, q = 70 (both decisions false)

SC -- 1 (Test 1 alone gives 100% statement coverage)
DC -- 2 (Test 1 and Test 2 together cover both outcomes of each decision)
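A runnable Python translation of the example above (assuming the reconstructed p + q > 100 condition):

    def check(p, q):
        if p + q > 100:
            print("large")
        if p > 50:
            print("p large")

    check(71, 30)  # both decisions true: every statement runs -> 100% SC with 1 test
    check(30, 70)  # both decisions false: with the first test -> 100% DC with 2 tests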
-----------------------------------------------------------------------------------------------------------------------------------------
Integration testing:
Top-down approach: the higher-level modules are tested first, and then the lower-level modules are tested and integrated, in order to check the software functionality. Lower-level modules that are not ready yet are replaced by stubs (dummy code).
Main() {
    Add();  // calling function
    Sub();  // calling function
    Mul();  // calling function
}
Add() {
    Return 0;  // stub: dummy code for the unfinished lower-level module
}
Bottom-up approach: the lower-level modules are tested first and integrated upwards; higher-level modules that are not ready yet are replaced by drivers (dummy calling code).
Driver() {  // driver: dummy code that calls the modules under test
    Add();  // calling function
    Sub();  // calling function
}
Add() {
    Int c;
    c = a + b;
}
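A minimal Python sketch of a stub and a driver (illustrative stand-ins for the C-like fragments above, not from the original notes):

    # Top-down: the high-level module is real; the unfinished low-level module is a stub.
    def add_stub(a, b):
        return 0                # stub: dummy code standing in for the real add()

    def main(add=add_stub):
        return add(2, 3)        # high-level flow under test calls the (stubbed) module

    # Bottom-up: the low-level module is real; a throwaway driver plays the caller.
    def add(a, b):
        return a + b            # real lower-level module

    def driver():
        assert add(2, 3) == 5   # driver: dummy calling code replacing main()
        print("add() integrated OK")

    print(main())  # exercises main() against the stub (prints 0)
    driver()       # exercises add() via the driver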
Sandwich approach:
Combination of top down and bottom up
Big bang testing:
In which all the components or modules are integrated together at once and then tested as a unit. This
combined set of components is considered as an entity while testing. If all of the components in the unit
are not completed, the integration process will not execute.
Acceptance testing
Performance testing:
Performance Testing is a software testing process used for testing the speed, response time, stability,
reliability, scalability and resource usage of a software application under particular workload
Load testing – checks the application’s ability to perform under anticipated user loads. The objective is
to identify performance bottlenecks before the software application goes live.
•Stress testing – involves testing an application under extreme workloads to see how it handles high
traffic or data processing. The objective is to identify the breaking point of an application.
•Endurance testing – is done to make sure the software can handle the expected load over a long period
of time.
•Spike testing – tests the software’s reaction to sudden large spikes in the load generated by users.
•Volume testing – under volume testing, a large volume of data is populated in a database and the overall software system's behaviour is monitored. The objective is to check the software application's performance under varying database volumes.
•Scalability testing – The objective of scalability testing is to determine the software application’s
effectiveness in “scaling up” to support an increase in user load. It helps plan capacity addition to your
software system.
•Confidentiality
•Integrity
•Authentication
•Authorization
•Availability
•Non-repudiation
•Encryption/Decryption
In the authentication process, the identity of users is checked before providing access to the system, using a username and password. In the authorization process, the user's authorities are checked for accessing resources, like read permission, write permission and execute permission. Authentication is done before the authorization process, whereas the authorization process is done after the authentication process.
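A minimal sketch of authentication-before-authorization (all names and data are illustrative):

    USERS = {"alice": "s3cret"}                 # username -> password
    PERMISSIONS = {"alice": {"read", "write"}}  # username -> granted permissions

    def authenticate(username, password):
        # authentication: verify who the user is
        return USERS.get(username) == password

    def authorize(username, permission):
        # authorization: verify what the authenticated user may do
        return permission in PERMISSIONS.get(username, set())

    if authenticate("alice", "s3cret"):         # authenticate first
        print("read allowed:", authorize("alice", "read"))        # True
        print("execute allowed:", authorize("alice", "execute"))  # False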
Recovery testing
it is the activity of testing how well an application is able to recover from crashes, hardware failures and
other similar problems
Compatibility testing
Checking the functionality of an application on different software, hardware platforms, network, and
browsers is known as compatibility testing
Sanitation Testing:
It is also called garbage testing. During this test, the test engineers find extra features in the application build with respect to the customer requirements.
1) Application enhancements:
2) Performance enhancements:
REGRESSION TESTING:
•Repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes, in the software being tested or in other related or unrelated software components.
3. Defect fixing
Retesting
Testing the same test cases again to confirm whether the bug has been fixed.
Retesting runs the test cases that failed last time, to check that they run fine after the bugs have been fixed, in order to verify the success of the corrective actions.
To ensure that the defects which were found and posted in the earlier build were fixed in the current build.
Adhoc testing:
Testing is carried out with the knowledge of the tester about the application; the tester tests randomly without following the specifications/requirements.
Monkey Testing:
Smoke testing:
Smoke testing is the means by which a software build’s stability is tested. It’s used to determine whether
a new build is ready for the next phase of testing, or whether that new build needs to be made stable
first.
•smoke test: A test suite that covers the main functionality of a component or system to determine
whether it works properly before planned testing begins.
Sanity testing:
At times the testing is even done randomly with no test cases. But remember, the sanity test should only
be done when you are running short of time, so never use this for your regular releases. Theoretically,
this testing is a subset of Regression Testing.
Exploratory testing:
Graphical User Interface Testing (GUI) Testing is the process for ensuring proper functionality of the
graphical user interface (GUI) for a specific application. GUI testing generally evaluates a design of
elements such as layout, colors and also fonts, font sizes, labels, text boxes, text formatting, captions,
buttons, lists, icons, links, and content. GUI testing processes may be either manual or automatic and
are often performed by third-party companies, rather than developers or end users.
Usability testing:
Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to users under specified conditions.
Install/Uninstall Testing
Installation and Uninstallation Testing is done on full, partial, or upgraded install/uninstall processes on
different operating systems under different hardware or software environments.
Production testing
• Globalization Testing (or) Internationalization Testing (I18N):
• Checks if the application has a provision for setting and changing the language, date and time format, currency, etc. If it is designed for global users, it is called globalization testing.
Localization Testing:
• Checks the default language, currency, date and time format, etc. If it is designed for a particular locality of users, it is called localization testing.
Functional requirements:
Nonfunctional requirements:
These requirements describe how each feature should behave under what conditions, what limitations
there should be, and so on.
The constraints: how many processes the system can handle (performance), what (security) issues the system needs to take care of, such as SQL injections, the rate of failure (reliability), what languages and tools will be used (development), and what rules you need to follow to ensure the system operates within the law of the organization (legislative).
Requirement engineering
It is a process of gathering and defining what services should be provided by the system.
Feasibility: check whether it is worth implementing, whether it can be implemented under the current budget, technical skills and schedule, and whether it contributes to the overall organization objectives, etc.
Specification: standard format (BRS)
Validation: customer satisfaction
It focuses on assessing if the system is useful to the business (feasibility study), discovering requirements
(elicitation and analysis), converting these requirements into some standard format (specification), and
checking that the requirements define the system that the customer wants (validation).
User requirements:
BRS
It describes the services that the system should provide and the constraints under which it must operate. We don't expect to see any level of detail, or what exactly the system will do; it's more of a set of generic requirements.
System requirements:
SRS
The system requirements mean a more detailed description of the system services and the operational
constraints such as how the system will be used and development constraints such as the programming
languages.
This level of detail is needed by those who are involved in the system development, like engineers,
system architects, testers, etc
However, more specific functional system requirements describe the system functions, its inputs,
processing; how it’s going to react to a particular input, and what’s the expected output.
1. Feasibility study
2. Requirement gathering
3. Software requirement specification
4. Software requirement validation
Feasibility study
When the client approaches the organization for getting the desired product developed, it comes up
with rough idea about what all functions the software must perform and which all features are expected
from the software.
Referring to this information, the analysts do a detailed study about whether the desired system and its functionality are feasible to develop.
This feasibility study is focused towards goal of the organization. This study analyzes whether the
software product can be practically materialized in terms of implementation, contribution of project to
organization, cost constraints and as per values and objectives of the organization. It explores technical
aspects of the project and product such as usability, maintainability, productivity and integration ability.
The output of this phase should be a feasibility study report that should contain adequate comments
and recommendations for management about whether or not the project should be undertaken.
Requirement Gathering
If the feasibility report is positive towards undertaking the project, next phase starts with gathering
requirements from the user. Analysts and engineers communicate with the client and end-users to know
their ideas on what the software should provide and which features they want the software to include.
SRS is a document created by system analyst after the requirements are collected from various
stakeholders.
SRS defines how the intended software will interact with hardware, external interfaces, speed of
operation, response time of system, portability of software across various platforms, maintainability,
speed of recovery after crashing, Security, Quality, Limitations etc.
Technical requirements are expressed in structured language, which is used inside the organization.
After the requirement specifications are developed, the requirements mentioned in this document are validated. Users might ask for an illegal or impractical solution, or experts may interpret the requirements incorrectly. This results in a huge increase in cost if not nipped in the bud. Requirements can be checked against the following conditions -
Test independence
As this type of testing is mainly performed by individuals, who are not related to the project directly or
are from a different organization, they are hired mainly to test the quality as well as the effectiveness of
the developed product. Test independence, therefore, helps developers and other stakeholders get
more accurate test results, which helps them build a better software product, with innovative and
unique features and functionality
Functional testing
It is a kind of black-box testing that is performed to confirm that the functionality of an application or
system is behaving as expected.
1. unit
2. component
3. smoke
4. sanity
5. regression
6. integration
7. API
8. UI
9. System
10. White box
11. Blackbox
12. Acceptance
13. Alpha
14. Beta
15. Production
Steps Involved
1. The very first step is to determine the functionality of the product that needs to be tested; this includes testing the main functionalities, error conditions and messages, and usability testing, i.e. whether the product is user-friendly or not, etc.
2. The next step is to create the input data for the functionality to be tested as per the
requirement specification.
3. Later, from the requirement specification, the output is determined for the functionality under
test.
4. Prepared test cases are executed.
5. Actual output i.e. the output after executing the test case and expected output (determined
from requirement specification) are compared to find whether the functionality is working as
expected or not.
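A sketch of steps 2-5 in Python (the login function and its expected outputs are hypothetical):

    def login(username, password):
        # hypothetical function under test
        return username == "admin" and password == "pass123"

    # (input data, expected output) pairs derived from the requirement specification
    test_cases = [
        (("admin", "pass123"), True),   # valid credentials
        (("admin", "wrong"), False),    # invalid password
        (("", ""), False),              # empty input
    ]

    for args, expected in test_cases:
        actual = login(*args)           # execute the test case
        verdict = "PASS" if actual == expected else "FAIL"
        print(args, "->", actual, verdict)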
1. Performance
Load
Stress
Volume Tests.
2. Security Tests.
3. Upgrade & Installation Tests.
4. Recovery Tests
5. Compatibility testing
6. Configuration testing
7. Sanitation testing
Black box Testing Types
1. GUI Testing
2. Usability Testing
3. Functional Testing
4. Non Functional Testing
QA Vs QC
Definition: QA is a set of activities for ensuring quality in the processes by which products are developed. QC is a set of activities for ensuring quality in products; the activities focus on identifying defects in the actual products produced.
Focus: QA aims to prevent defects, with a focus on the process used to make the product; it is a proactive quality process. QC aims to identify (and correct) defects in the finished product; quality control, therefore, is a reactive process.
Goal: the goal of QA is to improve development and test processes so that defects do not arise when the product is being developed. The goal of QC is to identify defects after a product is developed and before it's released.
How: QA - establish a good quality management system and assess its adequacy; periodic conformance audits of the operations of the system. QC - finding and eliminating sources of quality problems through tools and equipment so that the customer's requirements are continually met.
What: QA - prevention of quality problems through planned and systematic activities, including documentation. QC - the activities or techniques used to achieve and maintain the product quality, process and service.
Verification vs Validation
Verification includes checking documents, design, code and programs. Validation includes testing and validating the actual product.
Verification does not include the execution of the code. Validation includes the execution of the code.
Methods used in verification are reviews, walkthroughs, inspections and desk-checking. Methods used in validation are black box testing, white box testing and non-functional testing.
Verification checks whether the software conforms to specifications or not. Validation checks whether the software meets the requirements and expectations of a customer or not.
Positive testing is a process where the system or application is tested against valid input data.
Negative testing is implemented to check how the application gracefully handles invalid input or unexpected user behaviour.
Quality: the degree to which a component, system or process meets specified requirements and customer/user needs and expectations.
Debugging: the process of finding, analysing and removing the causes of failures or defects.
1. Requirement analysis
2. Test planning
3. Test case development
4. Environment setup
5. Test execution
6. Test cycle closure
Test Plan
A test plan is a detailed document which describes software testing areas and activities. It outlines the
test strategy, objectives, test schedule, required resources (human resources, software, and hardware),
test estimation and test deliverables.
The test plan is the basis of every software's testing.
A document describing the scope, approach, resources and schedule of intended test activities.
Test level: A group of test activities that are organized and managed together.
Test strategy: a high level description of the test levels for an organization
Test approach: the implementation of the test strategy for the specific project
1. Risks: testing is about risk management, so consider the risks and levels of risk.
3. Objective
4. Business
Test manager/test lead: the person responsible for project management of the testing activities and resources. The test lead tends to be involved in planning, monitoring and controlling the test activities.
RBT is testing carried out based on product risks. The aim of RBT is to find out, well in advance, the risk of failure of a particular feature or functionality in production, and its impact on the business in terms of cost and other damages, by using a prioritization technique for the test cases.
Hence, Risk-Based testing uses the principle of prioritizing the tests of the features, modules, and
functionalities of a product or software. The prioritization is based on the risk of the likelihood of
failure of that feature or functionality in production and its impact on the customers.
#1) Be Skeptical
PUBLIC - Certified software testers shall act consistently with the public interest.
•CLIENT AND EMPLOYER - Certified software testers shall act in a manner that is in the best interests
of their client and employer, consistent with the public interest.
•PRODUCT - Certified software testers shall ensure that the deliverables they provide (on the
products and systems they test) meet the highest professional standards possible.
•JUDGMENT - Certified software testers shall maintain integrity and independence in their
professional judgment.
•MANAGEMENT - Certified software test managers and leaders shall subscribe to and promote an
ethical approach to the management of software testing.
•PROFESSION - Certified software testers shall advance the integrity and reputation of the profession
consistent with the public interest.
•COLLEAGUES - Certified software testers shall be fair to and supportive of their colleagues, and
promote cooperation with software developers.
•SELF - Certified software testers shall participate in lifelong learning regarding the practice of their
profession and shall promote an ethical approach to the practice of the profession.
The psychology of testing improves mutual understanding among team members and helps them
work towards a common goal.
1. Proper documentation
2. Software application size
3. Software life cycle
4. Process maturity
5. Time
6. Skilled team
7. Team and work relationship
Use case
A use case describes 'how a system will respond to a given scenario'. It is 'user-oriented', not 'system-oriented'. It is 'user-oriented': we specify the actions done by the user and what the actors see in the system. It is not 'system-oriented': we do not specify what inputs are given to the system or what outputs are produced by the system.
Who uses 'Use Case' documents?
This documentation gives a complete overview of the distinct ways in which the user interacts with a
system to achieve the goal. Better documentation can help to identify the requirement for a software
system in a much easier way.
This documentation can be used by Software developers, software testers as well as Stakeholders.
Developers use the documents for implementing the code and designing it.
Business stakeholders use the document for understanding the software requirements.
Sunny day use cases: these are the primary cases that are most likely to happen when everything goes well. These are given higher priority than the other cases. Once we have completed the cases, we give them to the project team for review and ensure that we have covered all the required cases.
Rainy day use cases: these can be defined as the list of edge cases. The priority of such cases comes after the 'sunny' use cases. We can seek the help of stakeholders and product managers to prioritize the cases.
4) Basic Flow: 'Basic Flow' or 'Main Scenario' is the normal workflow in the system. It is the flow of
transactions done by the Actors on accomplishing their goals. When the actors interact with the system,
as it's the normal workflow, there won't be any error and the Actors will get the expected output.
5) Alternative flow: Apart from the normal workflow, a system can also have an 'Alternate workflow'.
This is the less common interaction done by a user with the system.
6) Exception: The flow that prevents a user from achieving the goal.
7) Post conditions: the conditions that need to be checked after the case is completed.
Example: school management system
Admin and Staff are considered as secondary actors, so we place them on the
right side of the rectangle. Actors can log in to the system, so we connect the
actors and login case with a connector.
Points to be noted
A common mistake that participants make with use cases is that the document either contains too many details about a particular case or not enough details at all.
Example:
Actors: buyers, sellers, wholesale dealers, auditors, suppliers, distributors, customer care, etc.
A System is ‘whatever you are developing’. It can be a website, an app or any other software component.
It is generally represented by a rectangle. It Contains Use Cases. Users are placed outside the ‘rectangle’.
Use Cases are generally represented by Oval shapes specifying the Actions inside it.
Actors/Users are the people who use the system. But sometimes it can be other systems, person or any
other organization.
It comes under the Functional Black Box testing technique. As it is a black box testing, there won’t be any
inspection of the codes. Several interesting facts about this are briefed in this section.
It ensures if the path used by the user is working as intended or not. It makes sure that the user can
accomplish the task successfully.
For Example, Consider the ‘Show Student Marks’ case, in a School Management System.
Use case Name: Show Student Marks
Actors: Students, Teachers, Parents
Pre-Condition:
1) The system must be connected to the network.
2) Actors must have a ‘Student ID’.
Use Case for ‘Show Student Marks’:
Main scenario (serial number - steps):
3 - Enter Student ID
There are broadly 2 kinds of testing that take place on mobile devices:
The device itself, including the internal processors, internal hardware, screen sizes, resolution, space or memory, camera, radio, Bluetooth, WIFI, etc. This is sometimes referred to as simply "Mobile Testing".
The applications that work on mobile devices and their functionality are tested. This is called "Mobile Application Testing" to differentiate it from the earlier method. Even in mobile applications, there are a few basic differences that are important to understand:
a) Native apps: A native application is created for use on a platform like mobile and tablets.
b) Mobile web apps are server-side apps to access website/s on mobile using different browsers like
Chrome, Firefox by connecting to a mobile network or wireless network like WIFI.
c) Hybrid apps are combinations of a native app and a web app. They run on the device like native apps but are built with web technologies.
Native apps have single platform affinity while mobile web apps have the cross-platform affinity.
Native apps are written in platforms like SDKs while Mobile web apps are written with web technologies
like HTML, CSS, asp.net, Java, PHP.
For a native app, installation is required but for mobile web apps, no installation is required.
A native app can be updated from the Play Store or App Store, while mobile web apps are updated centrally, on the server side.
Many native apps don’t require an Internet connection but for mobile web apps, it’s a must.
Native apps are installed from app stores like the Google Play Store or App Store, whereas mobile web apps are websites and are only accessible through the Internet.
Use Case
The developers use standard symbols to write a use case so that everyone will understand it easily. They use the Unified Modeling Language (UML) to create the use cases.
There are various tools available that help to write a use case, such as Rational Rose. This tool has predefined UML symbols; we need to drag and drop them to write a use case, and the developer can also use these symbols to develop the use case.
Test Scenario: Test Scenario gives the idea of what we have to test. Test Scenario is like a high-level
description of what you are going to test
1. Enter valid User Name and valid Password 2. Enter valid User Name and invalid Password 3. Enter invalid User Name and valid Password
Test Scenario is "what is to be tested" and test cases are "how it is to be tested"
3. Enough space should be provided between field labels, columns, rows, error messages, etc.
5. Font size, style, and color for headline, description text, labels, infield data, and grid info should be
standard as specified in SRS.
7. Disabled fields should be greyed out and users should not be able to set focus on these fields.
8. Upon clicking on the input text field, the mouse arrow pointer should get changed to the cursor.
9. The user should not be able to type in the drop-down select list.
11. Check if proper field labels are being used in error messages.
17. Check if the drop-down list options are readable and not truncated due to field size limits.
18. All buttons on the page should be accessible with keyboard shortcuts and the user should be able to perform all operations using them.
1. Check if the correct data is getting saved in the database upon a successful page submit.
2. Check values for columns that are not accepting null values.
3. Check for data integrity. Data should be stored in single or multiple tables based on the design.
Check if index names are given as per the standard, e.g. IND_<Tablename>_<ColumnName>.
6. Table columns should have description information available (except for audit columns like created
date, created by, etc.)
7. For every database add/update operation logs should be added.
9. Check if data is committed to the database only when the operation is successfully completed.
3. Check image upload functionality with image files of different extensions (For Example, JPEG, PNG,
BMP, etc.)
4. Check image upload functionality with images that have space or any other allowed special character
in the file name.
6. Check the image upload with an image size greater than the max allowed size. Proper error message
should be displayed
7. Check image upload functionality with file types other than images (For Example, txt, doc, pdf, exe, etc.). A proper error message should be displayed.
8. Check if images of specified height and width (if defined) are accepted or otherwise rejected.
3. Page crash should not reveal application or server info. A generic error page should be displayed instead.
Verify that the login page has the basic web elements which every login page contains, such as: Username, Password, Login button, Forgot password, Remember me, Cancel button, Logo, Create an account, etc.
Verify the login by entering the valid username and valid password.
Check the login functionality by entering the valid username and invalid password (An error
message must be displayed).
Verify the login functionality by entering the invalid username and valid password (An error
message must be displayed).
Verify the functionality of login without entering the username and password (An error message
must be displayed).
Verify by entering only username (An error message must be displayed).
Check the login functionality by entering only password (An error message must be displayed).
Verify that the password is encrypted (masked) when the user enters it.
Verify that the "Remember me" check box is unselected by default (or as per SRS).
Verify whether the user's login credentials are saved in the browser when the user logs in with the "Remember me" check box checked.
Verify the "Forgot password" functionality: whether the user is able to click on the "Forgot password" link.
Verify the functionality of the "Reset" button: whether the user is able to click it.
Verify that the text boxes get cleared when clicking on the reset button.
Verify the keyboard Tab functionality: whether the user is able to select the web elements through the Tab key.
Verify that the user is able to click on the login button through the Enter key.
Verify whether the cursor blinks in the text box by default when the login page loads.
Verify the placeholder text for the text boxes (username, password).
Verify that the user is able to login with the new password after he/she has changed the password.
Verify the spelling, font size, etc. of alert messages.
Verify the login screen will appear after clicking on a login link or login button.
Verify all login related elements and fields are present on the login page.
Verify the alignment of displayed elements on the login screen should be compatible in cross
browsers testing.
Verify that the size, colour and UI of different elements should match with the specifications.
Verify that the login page of the application is responsive and align properly on different screen
resolutions and devices.
Verify login page title.
After the login page is open, the cursor should remain in the username text box by default.
Verify that there is a checkbox with the label "Remember password" on the login page.
Verify that the "Remember me" checkbox gets marked as checked after clicking on the label text and on the check box.
Test suite: (diagram/image omitted)
Mutation Testing
Mutation testing, also known as code mutation testing, is a form of white box testing in which testers change specific components of an application's source code to ensure a software test suite will be able to detect the changes. Changes introduced to the software are intended to cause errors in the program: errors are inserted purposely into a program (under test) to verify whether the existing test cases can detect them or not.
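A minimal sketch of the idea (hypothetical function and mutant; a suite that tests the boundary value kills this mutant):

    def is_adult(age):
        return age >= 18        # original code

    def is_adult_mutant(age):
        return age > 18         # mutant: '>=' deliberately changed to '>'

    def suite(fn):
        # includes the boundary value 18, which distinguishes original from mutant
        return fn(17) is False and fn(18) is True and fn(19) is True

    print("original passes:", suite(is_adult))           # True
    print("mutant killed:", not suite(is_adult_mutant))  # True -> change was detected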
It's a high-level document to map and trace user requirements with test cases, to ensure that an adequate level of testing is achieved for each and every requirement.
Requirement Traceability Matrix helps to link the requirements, Test cases, and defects accurately. The
whole of the application is tested by having Requirement Traceability (End to End testing of an
application is achieved).
Requirement Traceability assures good 'Quality' of the application as all the features are tested. Quality
control can be achieved as software gets tested for unforeseen scenarios with minimal defects and all
Functional and non-functional requirements being satisfied.
Advantages:
Very little chance of requirements getting missed out: as each test case is mapped to the requirements, the test lead can easily make out if any requirements are missed.
RTM assures good quality as we make sure that all features are tested as they can be easily traced from
this document
Forward Traceability
In forward traceability, requirements are mapped to the test cases. It ensures that the project progresses in the described direction and that every requirement is tested thoroughly.
Backward Traceability
The test cases are mapped to the requirements in 'Backward Traceability'. Its main purpose is to ensure that the current product being developed is on the right track. It also helps to determine that no extra, unspecified functionalities are added, so that the scope of the project is not affected.
A good traceability matrix has references from test cases to requirements and vice versa (requirements to test cases). This is referred to as 'Bi-Directional Traceability'. It ensures that all the test cases can be traced to requirements and that each and every requirement specified has accurate and valid test cases.
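A toy RTM sketch showing both directions (requirement and test case IDs are hypothetical):

    # forward traceability: requirement -> test cases
    rtm = {
        "REQ-1": ["TC-1", "TC-2"],
        "REQ-2": ["TC-3"],
        "REQ-3": [],            # a requirement with no test case should be flagged
    }
    all_test_cases = {"TC-1", "TC-2", "TC-3", "TC-4"}

    # forward check: every requirement has at least one test case
    print("untested requirements:", [r for r, tcs in rtm.items() if not tcs])

    # backward check: every test case maps back to some requirement
    mapped = {tc for tcs in rtm.values() for tc in tcs}
    print("unmapped test cases:", sorted(all_test_cases - mapped))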
•Test Coverage states which requirements of the customers are to be verified when the testing phase
starts. Test Coverage is a term that determines whether the test cases are written and executed to
ensure to test the software application completely, in such a way that minimal or NIL defects are
reported.
• The maximum test coverage can be achieved by establishing good 'Requirement Traceability'.
• Mapping all the Customer Reported Defects (CRD) to individual test cases for the future regression test suite.
Test monitoring: a test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to what was planned.
Test management: the planning, estimating, monitoring and control of test activities, typically carried
out by test manager or test lead
Test summary report: a document summarizing testing activities and results. It also contains an
evaluation of the corresponding test items against the exit criteria.
Test control: a test management task that deals with developing and applying a set of corrective actions
to get a test project on track.
Software metrics: Software Metrics are used to measure the quality of the project and the product. Simply, a Metric is a unit used for describing an attribute; a Metric is a scale for measurement. Generation of Software Test Metrics is the most important responsibility of the Software Test Lead/Manager.
1. Take decisions for the next phase of activities, such as estimating the cost and schedule of future projects.
If metrics are involved in the project, then the exact status of the work can be published with proper numbers/data:
• %ge of work completed
• %ge of work yet to be completed
• Time to complete the remaining work
• Whether the project is going as per the schedule or lagging, etc.
Based on the metrics, if the project is not going to complete as per the schedule, then the manager will raise the alarm to the client and other stakeholders, providing the reasons for the lag, to avoid last-minute surprises.
#1) %ge Test cases Executed: This metric is used to obtain the execution status of the test cases in terms
of %ge.
%ge Test cases Executed = (No. of Test cases executed / Total no. of Test cases written) * 100. So, from
the above data,
# 2) %ge Test cases not executed: This metric is used to obtain the pending execution status of the test
cases in terms of %ge.
%ge Test cases not executed = (No. of Test cases not executed / Total no. of Test cases written) *
100.
#7) Defect Removal Efficiency (DRE) = (No. of Defects found during QA testing / (No. of Defects found during QA testing + No. of Defects found by End-user)) * 100
DRE is used to identify the test effectiveness of the system.
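A sketch computing these metrics in Python with illustrative numbers (the DRE figures match the worked example later in these notes):

    executed, total_written = 75, 100        # test cases (illustrative)
    defects_qa, defects_end_user = 100, 40   # defects (illustrative)

    pct_executed = executed / total_written * 100                        # 75%
    pct_not_executed = (total_written - executed) / total_written * 100  # 25%
    dre = defects_qa / (defects_qa + defects_end_user) * 100             # ~71%

    print(f"% executed: {pct_executed:.0f}%")
    print(f"% not executed: {pct_not_executed:.0f}%")
    print(f"DRE: {dre:.0f}%")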
Project managers have a wide variety of metrics to choose from. We can classify the most commonly used metrics into the following groups:
1. Process Metrics: metrics that pertain to process quality. They are used to measure the efficiency and effectiveness of various processes.
2. Project Metrics: metrics that relate to project quality. They are used to quantify defects, cost, schedule, productivity and estimation of various project resources and deliverables.
3. Product Metrics: metrics that pertain to product quality. They are used to measure cost, quality, and the product's time-to-market.
4. Organizational Metrics
5. Software Development Metrics examples: these metrics enable management to understand the quality of the software, the productivity of the development team, code complexity, customer satisfaction, agile process, and operational metrics.
Clarify the difference between 'Perform Quality Assurance' and 'Perform Quality Control' in project quality management.
Suppose 100 defects were found during QA testing. After the QA testing, during Alpha & Beta testing, the end-user/client identified 40 defects which could have been identified during the QA testing phase. The DRE will be calculated as:
DRE = [100 / (100 + 40)] * 100 = [100 / 140] * 100 = 71%
#8) Defect Leakage: Defect Leakage is the metric used to identify the efficiency of the QA testing, i.e., how many defects are missed/slipped during the QA testing.
Cyclomatic complexity (CYC) is a software metric used to determine the complexity of a program. It is a
count of the number of decisions in the source code. The higher the count, the more complex the code.
CYC = E - N + 2P
In this equation:
• E = Number of edges of the flow graph
• N = Number of nodes (a sequential group of statements containing only one transfer of control)
• P = Number of disconnected parts of the flow graph (e.g. a calling program and a subroutine)
Second method: sum the number of binary decision statements (e.g. if, while, for) and add 1.
CYC > 5 --- more complex
CYC < 5 --- less complex
A = 10
If B > C then
    A = B
Else
    A = C
Endif
Print A
Print B
Print C
Edges = 7, Nodes = 7, P = 1
CYC = E - N + 2P = 7 - 7 + 2(1) = 2
If A = 354
    Then If B > C
        Then A = B
        Else A = C
    Endif
Endif
Second method: 2 binary decisions (If A = 354 and If B > C) + 1 gives CYC = 3.
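A sketch applying both methods from above in Python (the decision counts mirror the two snippets):

    def cyclomatic(edges, nodes, parts):
        # first method: CYC = E - N + 2P
        return edges - nodes + 2 * parts

    def cyclomatic_from_decisions(binary_decisions):
        # second method: number of binary decision statements + 1
        return binary_decisions + 1

    print(cyclomatic(7, 7, 1))           # first snippet: 7 - 7 + 2(1) = 2
    print(cyclomatic_from_decisions(1))  # first snippet: one decision (B > C) -> 2
    print(cyclomatic_from_decisions(2))  # second snippet: two nested ifs -> 3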
Enlisted below are the widely used terminologies in Orthogonal Array Testing:
• Runs: the number of rows, which represents the number of test conditions to be performed. As the rows represent the number of test conditions (experiments) to be performed, the goal is to minimize the number of rows as much as possible.
• Factors: the number of columns, which represents the number of variables to be tested.
• Levels: the maximum number of values for a factor (0 to levels - 1). Together, the values in Levels and Factors are called LRuns (Levels**Factors).
#1) Decide the number of variables that will be tested for interaction. Map these variables to the factors
of the array.
#2) Decide the maximum number of values that each independent variable will have. Map these values
to the levels of the array.
#3) Find a suitable orthogonal array with the smallest number of runs. The number of runs can be
derived from various websites. One such website is listed here.
#6) Look out for the leftover or special Test Cases (if any)
After performing the above steps, your Array will be ready for testing with all the possible combinations
covered.
Example 1
Let’s say that the pages or links have three dynamic frames (Sections) that can be made as hidden or
visible.
Step 1: Determine the number of independent variables. There are three independent variables
(sections on the page) = 3 Factors.
Step 2: Determine the maximum number of values for each variable. There are two values (hidden and
visible) = 2 Levels.
Step 3: Determine the Orthogonal Array with 3 Factors and 2 Levels. Referring to the link we have
derived the number of rows required i.e. 4 Rows.
Orthogonal arrays follow the pattern LRuns(Levels^Factors). Hence in this example, the orthogonal array will be L4(2^3).
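The standard L4(2^3) array written out, with levels 0/1 mapped to hidden/visible for the three sections (a small Python sketch, not from the original notes):

    # standard L4(2^3) orthogonal array: 4 runs x 3 factors, 2 levels each
    L4 = [
        (0, 0, 0),
        (0, 1, 1),
        (1, 0, 1),
        (1, 1, 0),
    ]
    levels = {0: "hidden", 1: "visible"}

    for run, (f1, f2, f3) in enumerate(L4, start=1):
        print(f"Run {run}: section1={levels[f1]}, "
              f"section2={levels[f2]}, section3={levels[f3]}")
    # every pair of factors sees all four level combinations exactly once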
Error handling refers to the response and recovery procedures from error conditions present in a
software application. In other words, it is the process comprised of anticipation, detection and
resolution of application errors, programming errors or communication errors. Error handling helps in
maintaining the normal flow of program execution. In fact, many applications face numerous design
challenges when considering error-handling techniques.
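A minimal sketch of anticipation, detection and resolution in Python (the file name is hypothetical):

    def read_config(path):
        try:
            with open(path) as f:        # anticipate: the file may be missing
                return f.read()
        except FileNotFoundError:        # detect the specific error condition
            return ""                    # resolve: fall back so normal flow continues

    print(read_config("missing.cfg") == "")  # True: the program keeps running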
Cookie: temporary internet files which are created on the client side when we open web sites; these files contain user data.
Sessions: time slots which are allocated to the user on the server side.
On the desktop, the application is tested on a central processing unit. On a mobile device, the
application is tested on handsets like Samsung, Nokia, Apple, and HTC.
Mobiles use network connections like 2G, 3G, 4G or WIFI, whereas desktops use broadband or dial-up connections.
The automation tool used for desktop application testing might not work on mobile applications.
To address all the above technical aspects, the following types of testing are performed on
Mobile applications.
Usability testing– To make sure that the mobile app is easy to use and provides a satisfactory
user experience to the customers
Compatibility testing– Testing of the application in different mobiles devices, browsers, screen
sizes and OS versions according to the requirements.
Interface testing– Testing of menu options, buttons, bookmarks, history, settings, and navigation
flow of the application.
Services testing– Testing the services of the application online and offline.
Low-level resource testing: testing of memory usage, auto-deletion of temporary files, and local database growing issues is known as low-level resource testing.
Performance testing– Testing the performance of the application by changing the connection
from 2G, 3G to WIFI, sharing the documents, battery consumption, etc.
Operational testing– Testing of backups and recovery plan if a battery goes down, or data loss
while upgrading the application from a store.
Installation tests– Validation of the application by installing /uninstalling it on the devices.
Security Testing– Testing an application to validate if the information system protects data or
not.
Object-based mobile testing tools– automation by mapping elements on the device screen into objects.
This approach is independent of screen size and mainly used for Android devices.
Image-based mobile testing tools– create automation scripts based on screen coordinates of
elements.
•Software Life Cycle: selection of the software development life cycle model is the most influential factor in the test effort. For each and every addition of new functionalities or features by the developers, the tester is bound to perform repetitive regression testing of the application.
•Process Maturity: in any software development, the maturity of the process significantly impacts the test effort. Introducing and managing changes in an already existing mature process is a difficult task, as these changes need to be introduced with utmost care and need a considerable amount of time.
•Time: everyone wants to reduce time-related factors to stay ahead of the competition in the market. The amount of time required to perform the various tests by the testing team should be decided appropriately; otherwise the whole system will face a shortage of time, putting the project at risk.
•Skilled Team
Defect report:
Also known as a bug report, it is a document that identifies and describes a defect detected by a tester. The purpose of a defect report is to state the problem as clearly as possible so that developers can replicate the defect easily and fix it.
• Clear Quest
• DevTrack
Jira
•Product Version – This includes the product version of the application in which the defect is found.
•Detail Steps – This includes the detailed steps of the issue with the screenshots attached so that
developers can recreate it.
•Date Raised – This includes the Date when the bug is reported
•Reported By – This includes the details of the tester who reported the bug like Name and ID
•Status – This field includes the Status of the defect like New, Assigned, Open, Retest, Verification,
Closed, Failed, Deferred, etc.
•Fixed by – This field includes the details of the developer who fixed it like Name and ID
Status of Defect
• New: Tester provides new status while Reporting (for the first time)
Rejected: Developer / Dev lead /DTT rejects if the defect is invalid or defect is duplicate.
Re-open: Tester Re-opens the defect with valid reasons and proofs
Severity of a bug/defect: how badly the defect/bug affects the application. Testers decide the severity of the bug.
Priority: the effect/impact of the defect/bug from the business point of view; the priority of fixing the bug is decided only from the business perspective.
Will all high severity bugs have high priority (i.e., have to be fixed urgently)? Not necessarily.
Defect Severity: severity describes the seriousness of a defect. In software testing, defect severity is the degree of impact a defect has on the development or operation of a component or application being tested.
• Critical: this defect indicates complete shut-down of the process; nothing can proceed further (show-stopper defect).
• High: a highly severe defect that collapses the system; however, certain parts of the system remain functional.
• Medium: it causes some undesirable behaviour, but the system is still functional.
• Low: it won't cause any major breakdown of the system.
Low Severity and Low Priority:
Any spelling mistakes / font casing / misalignment in a paragraph on the 3rd or 4th page of the application, and not on the main or front page / title.
• These defects are classified in the green lines as shown in the figure and occur when there is no functionality impact, but the standards are still not met to a small degree. Generally cosmetic errors, or say the dimensions of a cell in a table on the UI, are classified here.
Defect Priority
• Defect priority states the order in which defects should be fixed. The higher the priority, the sooner the defect should be resolved.
• P1 (High): the defect must be resolved as soon as possible, as it affects the system severely and the system cannot be used until it is fixed.
• P2 (Medium): the defect should be resolved during the normal course of the development activities. It can wait until a new version is created.
• P3 (Low): the defect is an irritant, but repair can be done once the more serious defects have been fixed.
Severity - Impact
1 (Critical):
• This bug is critical enough to crash the system, cause file corruption, or cause potential data loss.
• It causes an abnormal return to the operating system (a crash or a system failure message appears).
• It causes the application to hang and requires re-booting the system.
2 (High):
• It causes a lack of vital program functionality, with a workaround.
3 (Medium):
• This bug will degrade the quality of the system; however, there is an intelligent workaround for achieving the desired functionality, for example through another screen.
• This bug prevents other areas of the product from being tested; however, other areas can be independently tested.
4 (Low):
• There is an insufficient or unclear error message, which has minimum impact on product use.
5 (Cosmetic):