STQA Bankai
Focus:
○ Black-Box Testing: External behavior; the tester is unaware of internal logic or code structure.
○ White-Box Testing: Internal logic; the tester has knowledge of internal code structure and logic.
Testing Level:
○ Black-Box Testing: Higher level; primarily focuses on functional and user interface testing.
○ White-Box Testing: Lower level; concentrates on code, logic, and internal structures.
1. Error Guessing:
○ Definition: Testers use their experience to anticipate likely error-prone areas.
○ Approach: Informal and intuition-based, leveraging tester's domain knowledge.
○ Advantage: Complements formal testing methods by targeting specific potential issues.
○ Example: Testers might focus on complex calculations or frequently modified code.
2. Exploratory Testing:
○ Definition: Simultaneous learning, test design, and test execution.
○ Approach: Tester explores the application, creating and executing tests dynamically.
○ Advantage: Unearths unexpected defects and provides rapid feedback.
○ Example: Tester navigates the software, varying inputs to uncover unforeseen issues.
4. Unit Testing:
5. Integration Testing:
1. Top-Down Integration:
○ Strategy: Testing starts from the top of the hierarchy and progresses downward.
○ Advantage: Early focus on critical functionalities.
○ Disadvantage: Need for stubs for lower-level modules.
2. Bottom-Up Integration:
○ Strategy: Testing starts from the bottom of the hierarchy and moves upward.
○ Advantage: Early testing of basic functionalities.
○ Disadvantage: Requires drivers for higher-level modules.
3. Big Bang Integration:
○ Strategy: All components are integrated simultaneously.
○ Advantage: Quick integration assessment.
○ Disadvantage: Defect identification may be challenging.
4. Incremental Integration:
○ Strategy: System is built, tested, and integrated incrementally.
○ Advantage: Early detection of defects in smaller, manageable pieces.
○ Disadvantage: Complex coordination as components are added.
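The stubs needed for top-down integration can be sketched in Python; the module names here (`checkout_total`, `tax_service_stub`) are hypothetical examples, not from the notes:

```python
# Top-down integration sketch: the high-level module is tested first,
# with a stub standing in for an unfinished lower-level module.

def tax_service_stub(amount):
    # Stub: returns a canned value instead of real tax logic.
    return 0.0

def checkout_total(items, tax_service):
    # High-level module under test; delegates to a lower-level module.
    subtotal = sum(items)
    return subtotal + tax_service(subtotal)

# The top-level checkout logic is exercised before the real tax module exists.
assert checkout_total([10, 20], tax_service_stub) == 30
```

Bottom-up integration inverts this picture: the real low-level module exists first, and a small driver script calls it in place of the not-yet-built higher-level code.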
7. System Testing:
8. Acceptance Testing:
9. Non-Functional Testing:
1. Overview:
○ Definition: Testing methods based on testers' knowledge, skills, and intuition.
○ Rationale: Leverages experience to identify potential defects and effective testing
approaches.
2. Error Guessing:
○ Approach: Testers use intuition and experience to anticipate likely error-prone areas.
○ Execution: Informal, unscripted, and often ad-hoc testing based on experience.
3. Exploratory Testing:
○ Approach: Simultaneous learning, test design, and test execution.
○ Execution: Tester explores the application dynamically, adapting test cases on the fly.
○ Benefits: Unearths unexpected defects, provides rapid feedback, and encourages
creativity.
4. Checklist-Based Testing:
○ Approach: Testers follow a predefined checklist of test conditions.
○ Execution: Systematic verification of features against a list of expected behaviors.
○ Benefits: Ensures coverage of critical test scenarios and reduces oversight.
5. Ad-Hoc Testing:
○ Approach: Informal and unplanned testing without predefined test cases.
○ Execution: Testers use intuition and domain knowledge to identify and explore potential
issues.
○ Benefits: Quick identification of defects, especially in scenarios not covered by formal
testing.
6. Scenario-Based Testing:
○ Approach: Testing based on realistic usage scenarios or user stories.
○ Execution: Testers create and execute test cases that mimic real-world usage.
○ Benefits: Aligns testing with user expectations and helps validate end-to-end
functionality.
● Definition: Re-running previously executed test cases to ensure that existing functionalities
remain unaffected after code changes.
● Process:
○ Select Test Cases: Choose a set of test cases covering critical functionalities.
○ Execute Tests: Run selected test cases on the modified code.
○ Compare Results: Compare new test results with baseline results to identify any
deviations.
○ Address Defects: If issues are found, address them before releasing the updated
software.
Overall Significance: Regression testing is crucial for maintaining the stability and reliability of a
software application, especially in dynamic development environments. It provides confidence that code
changes do not introduce new defects and that the software continues to meet user expectations over
time.
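The execute-and-compare steps above can be sketched in a few lines of Python; the `add` function and baseline values are hypothetical stand-ins for real test cases and recorded results:

```python
# Minimal regression-testing sketch: re-run recorded test cases against
# the current code and compare results with a saved baseline.

def add(a, b):  # hypothetical function under test
    return a + b

# Baseline: inputs paired with the results recorded before the code change.
baseline = [((2, 3), 5), ((-1, 1), 0), ((0, 0), 0)]

def run_regression(func, cases):
    """Return the cases whose current result deviates from the baseline."""
    deviations = []
    for args, expected in cases:
        actual = func(*args)
        if actual != expected:
            deviations.append((args, expected, actual))
    return deviations

print(run_regression(add, baseline))  # → [] when nothing regressed
```

A non-empty result pinpoints exactly which previously passing cases the code change broke, which is the "Compare Results" step in practice.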
15. Statement Coverage Testing (Focuses on ensuring each line of code is executed):
1. Definition: Measures the extent to which each statement in the code has been executed during
testing.
2. Objective: Ensure that every line of code has been executed at least once.
3. Metric Calculation: Ratio of executed statements to total statements in the code.
4. Advantage: Reveals unexecuted or dead code segments.
5. Limitation: Does not ensure all logical conditions have been exercised.
Example:
if a > 0:
    result = a + b
else:
    result = a - b
return result
● Statement Coverage: Both branches of the if-else statement and the return statement should
be executed during testing.
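Wrapping the snippet in a function (`compute`, a hypothetical name) shows that full statement coverage needs at least two tests, one entering each branch:

```python
def compute(a, b):
    if a > 0:
        result = a + b
    else:
        result = a - b
    return result

# One test per branch executes every statement at least once.
assert compute(1, 2) == 3    # a > 0: runs the 'if' branch and the return
assert compute(-1, 2) == -3  # a <= 0: runs the 'else' branch and the return
```

With only the first test, the `else` statement never executes, so statement coverage would sit at less than 100%.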
16. Branch Coverage Testing (Ensures that all decision points and branches in the code have been exercised):
1. Definition: Measures the percentage of branches (decision points) that have been executed.
2. Objective: Ensure that each possible branch outcome has been exercised.
3. Metric Calculation: Ratio of executed branches to total branches in the code.
4. Advantage: Provides a more comprehensive assessment of code execution paths.
5. Limitation: Does not guarantee coverage of all possible combinations of conditions.
Example:
if a > 0:
    result = a + b
else:
    result = a - b
return result
● Branch Coverage: Both the true and false outcomes of the if-else statement should be
covered during testing.
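The difference from statement coverage is clearest for an `if` without an `else`: a single test can reach 100% statement coverage while leaving branch coverage at 50%. A sketch of that distinction (the function name is an invented example):

```python
def clamp_negative(a):
    if a < 0:
        a = 0
    return a

# One test with a < 0 executes every statement (100% statement coverage)...
assert clamp_negative(-5) == 0
# ...but exercises only the True outcome of the decision, so branch
# coverage is 50%. A second test covers the False outcome as well.
assert clamp_negative(7) == 7
```

This is why branch coverage is the stronger criterion: 100% branch coverage implies 100% statement coverage, but not the other way around.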
1. Definition:
○ Objective: Evaluate system performance under various conditions.
○ Focus Areas: Response time, scalability, reliability, and resource utilization.
○ Purpose: Identify and address performance bottlenecks and ensure the system meets
specified criteria.
2. Types of Performance Testing:
○ Load Testing: Assess system behavior under expected load.
○ Stress Testing: Determine system behavior at or beyond maximum capacity.
○ Performance Testing: Measure response time, throughput, and resource usage.
○ Volume Testing: Evaluate system performance with a large volume of data.
○ Scalability Testing: Assess the system's ability to scale with increased load.
3. Performance Testing Process:
○ Identify Performance Metrics: Define criteria such as response time, throughput, and
resource usage.
○ Develop Test Scenarios: Create realistic scenarios simulating user interactions.
○ Select Testing Tools: Choose tools for load generation, monitoring, and analysis.
○ Execute Tests: Run tests under various conditions to collect performance data.
○ Analyze Results: Evaluate performance metrics and identify bottlenecks.
○ Optimize and Retest: Address issues, optimize code or configurations, and retest.
4. Example: Load Testing for an E-commerce Website:
○ Scenario: An e-commerce website preparing for a Black Friday sale.
○ Load Test: Simulate a high number of simultaneous users accessing the website.
○ Metrics: Measure response time, server CPU usage, and database performance.
○ Outcome: Ensure the website handles peak loads without slowdowns or errors.
○ Considerations: Test scenarios include browsing products, adding items to the cart, and
completing transactions.
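The core of such a load test can be sketched with a thread pool; `send_request` below is a hypothetical stand-in for a real HTTP call (e.g. fetching a product page), with a fixed sleep simulating server processing time:

```python
# Minimal load-test sketch: fire many concurrent "requests" and record
# response times, then report the metrics of interest.
import time
from concurrent.futures import ThreadPoolExecutor

def send_request():
    # Hypothetical stand-in for a real call such as requests.get(url).
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server processing time
    return time.perf_counter() - start

# 50 concurrent virtual users issue 200 requests in total.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(lambda _: send_request(), range(200)))

print(f"max response time: {max(latencies):.3f}s, "
      f"avg: {sum(latencies) / len(latencies):.3f}s")
```

Dedicated tools like JMeter or Gatling do essentially this at much larger scale, adding ramp-up schedules, realistic user journeys, and reporting dashboards.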
5. Tools for Performance Testing:
○ Apache JMeter: Open-source tool for load and performance testing.
○ LoadRunner: Performance testing tool from Micro Focus.
○ Gatling: Open-source load testing framework based on Scala.
6. Challenges in Performance Testing:
○ Realistic Simulation: Creating scenarios that accurately represent actual user behavior.
○ Dynamic Environments: Testing in agile or continuously changing environments.
○ Data Management: Handling large datasets in volume testing.
7. Benefits:
○ Early Issue Identification: Discover and address performance issues in the
development phase.
○ User Satisfaction: Ensure a responsive and reliable application under various
conditions.
○ Cost Savings: Identify and fix performance bottlenecks before deployment.
Unit 4:
1. Differentiate between Quality Assurance and Quality Control.
Timing:
○ Quality Assurance: Proactive; takes place before or during the development process.
○ Quality Control: Reactive; occurs during or after the development phase.
Metrics:
○ Quality Assurance: Process metrics; focus on process efficiency and effectiveness.
○ Quality Control: Defect metrics; measure the number and severity of defects found.
Cost Implication:
○ Quality Assurance: Upfront investment; may require initial investment in process improvement and training.
○ Quality Control: Operational cost; involves costs related to testing and defect correction.
1. Requirements Phase:
○ Impact: Misunderstood or incomplete requirements can lead to incorrect functionalities.
○ Consequence: May result in rework, delays, and increased development costs.
○ Mitigation: Thoroughly review and validate requirements with stakeholders.
2. Design Phase:
○ Impact: Flaws in the design may result in inefficient system architecture.
○ Consequence: Higher chances of rework, scalability issues, and compromised system
performance.
○ Mitigation: Conduct design reviews and leverage design patterns for robust
architectures.
3. Coding Phase:
○ Impact: Bugs and coding errors can lead to functional and logical issues.
○ Consequence: Reduced code quality, increased maintenance efforts, and potential
system failures.
○ Mitigation: Implement coding standards, conduct code reviews, and utilize automated
testing.
4. Testing Phase:
○ Impact: Undetected defects can make their way into the final product.
○ Consequence: Reduced product quality, negative user experience, and potential post-release defects.
○ Mitigation: Rigorous testing, including unit, integration, and system testing, to identify
and fix defects.
5. Deployment Phase:
○ Impact: Deployment issues may lead to system downtimes or service disruptions.
○ Consequence: User dissatisfaction, loss of revenue, and damage to the organization's
reputation.
○ Mitigation: Conduct thorough deployment testing, use staging environments, and have
rollback plans.
6. Post-Release Phase:
○ Impact: Defects discovered after release can result in patches and hotfixes.
○ Consequence: Increased support and maintenance costs, customer frustration, and
potential loss of trust.
○ Mitigation: Establish effective monitoring, collect user feedback, and implement agile
release cycles.
1. Definition:
○ Overall System: Framework of policies, processes, procedures, and resources.
○ Objective: Ensure consistent product or service quality, customer satisfaction, and
compliance with standards.
2. Key Components:
○ Policies: Documented guidelines outlining the organization's commitment to quality.
○ Processes: Defined workflows and procedures for delivering products or services.
○ Procedures: Detailed instructions for specific tasks within processes.
○ Resources: Allocation of personnel, tools, and technologies to support quality goals.
3. Standards and Regulations:
○ Compliance: Adherence to industry-specific standards (e.g., ISO 9001) and regulatory
requirements.
○ Certification: Pursuing and maintaining certifications to validate compliance.
4. Continuous Improvement:
○ Feedback Loops: Gathering feedback from customers, employees, and stakeholders.
○ Corrective and Preventive Actions: Addressing issues and preventing their recurrence.
5. Documentation:
○ Quality Manual: Document outlining the QMS structure and policies.
○ Work Instructions: Detailed instructions for specific tasks.
○ Records: Documented evidence of compliance and performance.
6. Roles and Responsibilities:
○ Quality Management Representative: Oversees the QMS and ensures compliance.
○ Process Owners: Responsible for specific processes within the QMS.
○ Employees: Implement and follow defined processes.
7. Audits and Assessments:
○ Internal Audits: Regular self-assessment of QMS effectiveness.
○ External Audits: Independent assessments by third-party organizations for certifications.
8. Customer Focus:
○ Understanding Requirements: Ensuring products or services meet customer needs.
○ Customer Feedback: Utilizing feedback for continuous improvement.
9. Risk Management:
○ Identification: Identifying and assessing potential risks to quality.
○ Mitigation: Implementing strategies to manage and mitigate identified risks.
10. Technology Integration:
○ QMS Software: Implementing tools to streamline document management, audits, and
reporting.
○ Automation: Using technology for process automation and data analysis.
11. Training and Competence:
○ Training Programs: Ensuring employees are trained in relevant processes.
○ Competency Assessments: Assessing and maintaining employee competence.
12. Benefits:
○ Consistency: Ensures consistent and predictable product or service quality.
○ Customer Satisfaction: Enhanced customer satisfaction through quality products or
services.
○ Regulatory Compliance: Meeting industry and regulatory standards.
4. Quality Plan:
1. Definition:
○ Documented Strategy: Outlines the approach, processes, and resources for ensuring
product or service quality.
○ Tailored to Project: Customized to the specific needs and characteristics of a particular
project.
2. Key Components:
○ Objectives: Clearly defined quality objectives aligned with project goals.
○ Scope: Defines the boundaries and limitations of the quality management efforts.
○ Roles and Responsibilities: Assigns responsibilities for quality-related tasks to
individuals or teams.
3. Quality Standards and Criteria:
○ Adherence to Standards: Specifies relevant quality standards (e.g., ISO) and
compliance criteria.
○ Acceptance Criteria: Defines the criteria for accepting or rejecting deliverables.
4. Quality Processes:
○ Development Processes: Describes how the development processes ensure quality.
○ Testing and Inspection: Details testing methodologies, inspection procedures, and
quality control points.
5. Documentation and Reporting:
○ Document Control: Describes how documentation will be managed and controlled.
○ Reporting Frequency: Specifies the frequency and format of quality reports.
6. Resources and Training:
○ Resource Allocation: Outlines the allocation of personnel, tools, and technologies to
support quality goals.
○ Training Programs: Identifies training needs and plans for the development of
necessary skills.
7. Quality Assurance Activities:
○ Audits and Reviews: Specifies plans for internal and external audits and reviews.
○ Process Improvement: Outlines activities for continuous process improvement.
8. Testing and Inspection Activities:
○ Test Planning: Details the approach to testing, including test strategies and test plans.
○ Inspection Procedures: Describes procedures for inspecting deliverables.
9. Risk Management:
○ Identification: Outlines how risks to quality will be identified.
○ Mitigation Plans: Specifies plans for mitigating identified risks.
10. Customer Satisfaction Measures:
○ Customer Feedback: Describes how customer feedback will be collected and used.
○ Customer Surveys: Outlines plans for conducting customer satisfaction surveys.
11. Change Control:
○ Change Management Procedures: Describes how changes to the project will be
managed.
○ Impact Assessment: Specifies how changes may impact quality and how those impacts
will be assessed.
12. Communication Plan:
○ Stakeholder Communication: Details how communication about quality will be
managed.
○ Issue Resolution: Outlines processes for resolving quality-related issues.
13. Closure Criteria:
○ Project Closure: Specifies the criteria for determining when the project has achieved the
desired level of quality.
○ Sign-off Procedures: Describes the procedures for obtaining acceptance or sign-off.
14. Monitoring and Control:
○ Performance Metrics: Identifies key performance indicators (KPIs) for monitoring
quality.
○ Control Mechanisms: Describes how deviations from the quality plan will be addressed.
15. Review and Update:
○ Frequency: Specifies how often the quality plan will be reviewed and updated.
○ Feedback Mechanisms: Describes how feedback from quality assurance activities will
be incorporated.
Purpose:
● Guidance: Provides a roadmap for achieving and maintaining the desired level of quality.
● Communication: Communicates the quality expectations and processes to all stakeholders.
● Control: Enables monitoring and control of quality throughout the project lifecycle.
1. Definition:
○ ISO 9001: An international standard for Quality Management Systems (QMS).
○ Objective: Establishes criteria for a systematic approach to managing processes,
ensuring consistency, and delivering products or services that meet customer
expectations.
2. Importance in Software Testing:
○ Quality Assurance: ISO 9001 emphasizes quality assurance processes, aligning with
the principles of software testing to ensure the delivery of high-quality software products.
○ Process Improvement: ISO 9001 promotes continuous improvement, encouraging
organizations to regularly assess and enhance their processes. This aligns with the
iterative and feedback-driven nature of software testing.
○ Customer Satisfaction: ISO 9001 prioritizes customer satisfaction through the delivery
of quality products. Effective software testing contributes directly to meeting customer
expectations and requirements.
○ Documentation and Records: ISO 9001 mandates documentation and record-keeping,
ensuring that software testing activities, methodologies, and results are well-documented.
This helps in traceability and auditability.
○ Risk Management: The standard emphasizes risk-based thinking. Software testing
inherently involves identifying and mitigating risks related to defects and failures, aligning
with ISO 9001 principles.
○ Resource Management: ISO 9001 requires efficient resource management, including
personnel and tools. Effective resource allocation is critical in software testing to conduct
thorough and effective testing activities.
○ Consistency and Standardization: ISO 9001 promotes consistency in processes and
procedures. In software testing, having standardized testing processes ensures
repeatability and reliability in testing efforts.
○ Supplier Relationships: The standard addresses relationships with external providers.
In software testing, this can include collaboration with testing tool vendors or outsourcing
testing services, requiring effective management of these relationships.
○ Management Commitment: ISO 9001 emphasizes top management's commitment to
quality. In software testing, strong leadership support is crucial for prioritizing testing
activities and allocating necessary resources.
○ Audits and Reviews: ISO 9001 requires regular audits and reviews to ensure
compliance and effectiveness. In software testing, conducting internal and external audits
ensures that testing processes align with best practices.
○ Customer Feedback: The standard emphasizes the importance of customer feedback.
In software testing, gathering user feedback and addressing reported issues contribute to
ongoing improvement.
3. Certification:
○ ISO 9001 Certification: Organizations can obtain ISO 9001 certification, demonstrating
their commitment to quality management. This certification can be particularly important
for software development and testing companies, instilling confidence in clients and
stakeholders.
4. Competitive Advantage:
○ Market Recognition: ISO 9001 certification is often recognized globally, providing a
competitive edge for organizations in the software industry. Clients may prefer working
with certified companies to ensure quality standards are met.
5. Continuous Improvement:
○ Feedback Loops: ISO 9001's emphasis on continuous improvement aligns with the
iterative nature of software testing. Feedback loops in testing help identify areas for
improvement and drive ongoing enhancements.
7. Important Aspects of Quality Management System (QMS):
Key Features:
1. Recording a Test:
○ Illustration: Click the record button and perform actions on the web application.
○ Explanation: Selenium IDE captures user actions and generates corresponding
commands in the script.
2. Enhancing Test Scripts:
○ Illustration: Refine the generated script by adding validations, loops, and other
commands.
○ Explanation: Testers can modify and enhance the recorded script using Selenium
commands to meet specific testing requirements.
3. Playback and Debugging:
○ Illustration: Use playback controls to execute the script and identify any issues.
○ Explanation: Testers can play, pause, or step through the script to observe the browser
interactions and identify potential problems.
4. Exporting and Integration:
○ Illustration: Export the test script in the desired programming language.
○ Explanation: Testers can export the script for integration with different testing frameworks
or execution environments.
Limitations:
● While Selenium IDE is a powerful tool for quick test script creation, it may not be suitable for
complex testing scenarios.
● It lacks robust features for handling dynamic elements and advanced scripting logic.
● Selenium IDE is primarily used for rapid prototyping and initial test script development.
● Definition:
○ Methodology: Uses statistical techniques to monitor and control a process.
○ Objective: Detect and prevent variations in the production process that could lead to
defects.
● Key Components:
○ Control Charts: Graphical representation of process data over time, helping identify
trends or unusual patterns.
○ Process Capability Analysis: Assessing the ability of a process to produce products
within specified limits.
○ Statistical Tools: Measures such as mean, standard deviation, and range are used to
analyze process data.
● Process:
○ Data Collection: Gather data from the production process.
○ Control Chart Construction: Plot data points on a control chart.
○ Analysis: Monitor the control chart for trends, cycles, or abnormal patterns.
○ Response: Take corrective actions if the process is out of control.
● Benefits:
○ Early Detection: Identifies process variations before they result in defects.
○ Continuous Improvement: Provides data for ongoing process improvement initiatives.
○ Objective Decision-Making: Data-driven decisions enhance the overall quality control
process.
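The control-chart logic above can be sketched in a few lines; the measurement values are invented example data, and the limits are estimated from a baseline period when the process was known to be stable:

```python
# Control-chart sketch: flag samples outside the mean ± 3σ control limits.
import statistics

# Baseline data from a period when the process was in control.
baseline = [10.0, 10.1, 9.9, 10.2, 9.8, 10.0, 10.1, 9.9, 10.0, 10.2]
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # upper/lower control limits

# New production samples are checked against the control limits.
samples = [10.1, 9.9, 13.5, 10.0]
out_of_control = [(i, x) for i, x in enumerate(samples)
                  if not lcl <= x <= ucl]
print(f"limits: [{lcl:.2f}, {ucl:.2f}], out of control: {out_of_control}")
```

Here the third sample (13.5) falls above the upper control limit and would trigger the "Response" step: investigate and correct the process before more defective units are produced.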
● Definition:
○ Methodology: Involves examining and testing products at various stages of the
production process.
○ Objective: Identify and eliminate defects or deviations from quality standards.
● Key Components:
○ In-Process Inspection: Examining products during different stages of manufacturing.
○ Final Inspection: Comprehensive assessment of finished products.
○ Testing: Conducting specific tests to ensure product performance and reliability.
● Process:
○ Establish Quality Criteria: Define acceptance criteria and quality standards.
○ In-Process Inspection: Regular checks during manufacturing to catch defects early.
○ Final Inspection: Thorough examination of finished products against defined criteria.
○ Testing: Conduct tests such as functionality, performance, and reliability.
○ Disposition: Accept, reject, or rework products based on inspection and testing results.
● Benefits:
○ Defect Identification: Detects defects early in the production process.
○ Conformance to Standards: Ensures that products meet predefined quality criteria.
○ Customer Satisfaction: Reduces the likelihood of delivering defective products to
customers.
11. The Capability Maturity Model (CMM) is a framework that describes the evolutionary stages of an
organization's software development and process improvement practices. The CMM defines five maturity
levels, each representing a level of organizational capability and process maturity. Here are the different
levels of CMM:
Key Concepts:
● Process Areas:
○ Each maturity level is associated with specific process areas that represent a cluster of
related practices in the organization's processes.
● Key Process Areas (KPAs):
○ Within each maturity level, there are key process areas that the organization must
address to achieve that level.
● Institutionalization:
○ The concept of institutionalization refers to the extent to which a process is ingrained in
the organization and consistently applied.
● Continuous Improvement:
○ CMM emphasizes the importance of continuous improvement at higher maturity levels,
where organizations strive for optimization and innovation.
Unit 5:
1. Definition:
○ Overview: RPA is a technology that uses software robots or "bots" to automate repetitive
and rule-based tasks within business processes.
○ Objective: Improve operational efficiency, reduce manual effort, and enhance accuracy
by automating routine tasks.
2. Key Components:
○ Software Robots (Bots): Virtual agents that mimic human actions to interact with digital
systems.
○ Bot Development Tools: Platforms for designing, configuring, and deploying automation
scripts.
○ Control Room: Centralized management console for monitoring and controlling bots.
○ Integration Tools: Interfaces for connecting RPA with existing software applications and
systems.
3. Benefits:
○ Efficiency: Speeds up task execution, allowing organizations to accomplish more in less
time.
○ Accuracy: Reduces human errors associated with repetitive tasks, improving data
accuracy.
○ Cost Savings: Automating routine tasks leads to reduced operational costs and
increased productivity.
○ Scalability: Easily scales to handle increasing workloads without a proportional increase
in workforce.
○ Auditability: Provides detailed logs and audit trails, enhancing accountability and
compliance.
4. Use Cases:
○ Data Entry and Migration: Automates data entry and migration between systems.
○ Invoice Processing: Extracts data from invoices and updates financial systems.
○ Customer Onboarding: Automates steps in the customer onboarding process.
○ HR Processes: Streamlines tasks like employee onboarding and payroll processing.
○ Report Generation: Automates report generation and distribution.
○ Data Extraction: Extracts information from documents or websites.
5. RPA Workflow:
○ Bot Development: Develop automation scripts using RPA development tools.
○ Bot Deployment: Deploy bots to the production environment.
○ Task Execution: Bots execute tasks by interacting with applications, databases, and
systems.
○ Monitoring and Control: Centralized control room monitors bot activities, logs, and
exceptions.
○ Exception Handling: Bots handle exceptions or errors based on predefined rules.
○ Reporting and Analytics: Analyze performance metrics, identify bottlenecks, and
optimize processes.
6. Integration with AI and Cognitive Technologies:
○ AI Integration: Combines RPA with artificial intelligence to handle more complex
decision-making tasks.
○ Cognitive Automation: Enables bots to understand and process unstructured data, like
natural language.
7. Challenges:
○ Complexity: Automation of highly complex processes may require advanced
programming skills.
○ Process Changes: Rapid changes in business processes can pose challenges to RPA
implementation.
○ Security Concerns: Bots interacting with sensitive data require robust security
measures.
8. Future Trends:
○ Hyperautomation: Integration of RPA with AI, machine learning, and other automation
technologies for end-to-end automation.
○ Process Discovery: AI-driven tools to identify and prioritize processes suitable for
automation.
○ Cloud-Based RPA: Increasing adoption of RPA solutions delivered as cloud services.
9. Key RPA Tools:
○ UiPath: Popular RPA platform with a visual design interface.
○ Automation Anywhere: Offers features like bot analytics and cognitive automation.
○ Blue Prism: Known for its scalability and enterprise-grade RPA capabilities.
2. Automation Testing:
1. Definition:
○ Overview: Automation testing involves using specialized tools and scripts to execute predefined test cases on software applications, comparing actual outcomes with expected results, and reporting test results.
○ Objective: Improve testing efficiency, accuracy, and coverage by automating repetitive
and time-consuming manual testing tasks.
2. Key Concepts:
○ Test Scripts: Sets of instructions written in programming languages to perform
automated testing.
○ Automation Tools: Software tools that facilitate the creation, execution, and analysis of
automated test scripts.
○ Test Frameworks: Organized structures providing guidelines and practices for creating
and managing automated test scripts.
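A test script in this sense can be as small as a single unittest case; the `login` function below is a hypothetical stand-in for real application code:

```python
# Minimal automated test script using Python's built-in unittest framework.
import unittest

def login(username, password):
    # Hypothetical function under test.
    return username == "admin" and password == "secret"

class LoginTests(unittest.TestCase):
    def test_valid_credentials(self):
        self.assertTrue(login("admin", "secret"))

    def test_invalid_password(self):
        self.assertFalse(login("admin", "wrong"))

# Run the suite programmatically so the script reports its own results.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(LoginTests))
print("failures:", len(result.failures))
```

Frameworks such as JUnit, TestNG, and pytest follow the same pattern: test cases as code, assertions comparing actual to expected outcomes, and a runner that collects and reports results.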
3. Benefits:
○ Efficiency: Rapid execution of test cases, saving time and resources.
○ Repeatability: Consistent execution of tests with minimal variations.
○ Accuracy: Reduces human errors associated with manual testing.
○ Regression Testing: Efficiently executes repetitive tests after code changes.
○ Parallel Execution: Enables simultaneous execution of tests across multiple
environments.
4. When to Use Automation Testing:
○ Repetitive Tasks: Tasks that require frequent execution.
○ Regression Testing: Ensuring existing functionality works after changes.
○ Large Data Sets: Testing scenarios with extensive datasets.
○ Parallel Execution: Validating application behavior across different configurations.
○ Performance Testing: Simulating user loads for performance assessment.
5. Popular Automation Testing Tools:
○ Selenium: Open-source framework for web application testing.
○ Appium: For mobile application testing.
○ JUnit: Framework for Java applications.
○ TestNG: Testing framework inspired by JUnit for Java.
○ Cypress: JavaScript-based framework for web applications.
6. Challenges:
○ Initial Setup: Setting up automation infrastructure requires effort.
○ Maintenance: Regular updates and changes may necessitate script modifications.
○ Script Creation Time: Creating automated test scripts can initially be time-consuming.
○ Not Suitable for Exploratory Testing: Manual testing is often more suitable for
exploratory testing scenarios.
7. Best Practices:
○ Selecting Appropriate Test Cases: Choose test cases suitable for automation.
○ Regular Maintenance: Keep test scripts up-to-date with application changes.
○ Parallel Execution: Execute tests concurrently to save time.
○ Version Control: Use version control systems to manage test scripts.
○ Collaboration: Foster collaboration between developers and testers.
8. Automation Testing Life Cycle:
○ Test Planning: Identify test cases suitable for automation.
○ Test Script Development: Create automated test scripts using selected tools.
○ Execution: Execute automated test scripts on the application.
○ Result Analysis: Analyze test results and identify issues.
○ Defect Reporting: Report defects found during automation testing.
○ Maintenance: Regularly update and maintain test scripts.
1. Test Planning:
○ Objective: Define the scope, objectives, and criteria for automation.
○ Tasks:
■ Identify test cases suitable for automation.
■ Establish criteria for selecting automation tools.
■ Define the testing environment and configurations.
■ Develop a timeline and resource plan for automation.
2. Tool Selection:
○ Objective: Choose appropriate automation tools based on project requirements.
○ Tasks:
■ Evaluate and compare automation tools (e.g., Selenium, Appium, JUnit).
■ Consider factors like compatibility, scripting languages, and reporting capabilities.
■ Select tools that align with the technology stack of the application.
3. Test Case Design:
○ Objective: Design test cases suitable for automation.
○ Tasks:
■ Identify high-priority and high-impact test scenarios.
■ Define input data and expected outcomes for each test case.
■ Structure test cases to be modular and reusable.
■ Include positive and negative test scenarios.
4. Script Development:
○ Objective: Create automated test scripts using selected tools.
○ Tasks:
■ Write script code based on the designed test cases.
■ Use scripting languages supported by the chosen automation tool.
■ Implement error handling and reporting mechanisms.
■ Develop scripts to interact with the application's user interface or APIs.
5. Test Data Setup:
○ Objective: Prepare and manage test data required for automated testing.
○ Tasks:
■ Create datasets for various test scenarios.
■ Ensure data integrity and relevance for test cases.
■ Implement mechanisms for data variation and parameterization.
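Data variation and parameterization can be sketched with the standard library; the browser/locale pools below are illustrative, not tied to any particular application:

```python
import itertools

# Sketch of test-data parameterization: generate dataset variations by
# combining value pools instead of hand-writing each combination.
# The browser/locale fields are illustrative.

BROWSERS = ["chrome", "firefox"]
LOCALES = ["en-US", "de-DE"]

def build_datasets(browsers, locales):
    """Cartesian product of data pools: one dict per test variation."""
    return [{"browser": b, "locale": l}
            for b, l in itertools.product(browsers, locales)]

datasets = build_datasets(BROWSERS, LOCALES)   # 2 x 2 = 4 variations
```

Each dict in `datasets` would drive one run of a parameterized test, so adding a third browser automatically adds two more variations.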
6. Environment Setup:
○ Objective: Configure the test environment for automated execution.
○ Tasks:
■ Set up the test infrastructure, including servers and databases.
■ Install and configure the application under test.
■ Establish connections to external systems or APIs.
■ Validate the readiness of the testing environment.
7. Execution:
○ Objective: Execute automated test scripts on the application.
○ Tasks:
■ Trigger the automation tool to run test scripts.
■ Monitor script execution for any errors or failures.
■ Capture screenshots or logs for further analysis.
■ Implement parallel execution for faster results.
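Parallel execution of independent test callables can be sketched with Python's concurrent.futures; the three trivial tests here are placeholders for real scripted scenarios:

```python
from concurrent.futures import ThreadPoolExecutor

# Parallel test execution sketch using only the standard library.
# Each "test" is a plain callable returning True/False; in a real suite
# these would drive independent browser sessions or API calls.

def test_login():
    return True

def test_search():
    return True

def test_checkout():
    return True

def run_parallel(tests, workers=3):
    """Run test callables concurrently; map each test name to its result."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {t.__name__: pool.submit(t) for t in tests}
        return {name: f.result() for name, f in futures.items()}

results = run_parallel([test_login, test_search, test_checkout])
failed = [name for name, passed in results.items() if not passed]
```

Parallelism only helps when the tests are independent (no shared browser session or mutable test data); otherwise results become order-dependent.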
8. Result Analysis:
○ Objective: Analyze test results and identify issues.
○ Tasks:
■ Review test logs, error messages, and screenshots.
■ Identify failed test cases and classify issues.
■ Log defects in the defect tracking system.
■ Analyze patterns and trends in test results.
9. Defect Reporting:
○ Objective: Report defects found during automation testing.
○ Tasks:
■ Document defect details, including steps to reproduce.
■ Assign severity and priority to each defect.
■ Communicate defects to the development team.
■ Verify fixes and update defect status.
10. Maintenance:
○ Objective: Regularly update and maintain test scripts.
○ Tasks:
■ Update scripts to accommodate changes in the application.
■ Modify scripts for new features or enhancements.
■ Re-run regression tests after each code change.
■ Review and optimize existing test scripts.
11. Continuous Integration/Continuous Deployment (CI/CD) Integration:
○ Objective: Integrate automated testing into CI/CD pipelines.
○ Tasks:
■ Automate the execution of tests triggered by code commits.
■ Integrate test reporting into CI/CD tools.
■ Implement automated deployment of test environments.
■ Ensure seamless testing as part of the software delivery pipeline.
12. Documentation:
○ Objective: Maintain documentation for the automated testing process.
○ Tasks:
■ Document test plans, test cases, and automation scripts.
■ Capture configurations and settings for the testing environment.
■ Update documentation with changes in tools or processes.
■ Share knowledge and best practices with the testing team.
4. Selenium WebDriver:
1. Overview:
○ Definition: Selenium WebDriver is a powerful automation tool for controlling web
browsers through programs and performing browser automation.
○ Purpose: Facilitates the automation of web application testing, interacting with web
elements and simulating user actions.
2. Key Features:
○ Cross-Browser Compatibility: Supports multiple browsers such as Chrome, Firefox,
Safari, and Edge.
○ Programming Language Support: Works with various programming languages,
including Java, Python, C#, and others.
○ Dynamic Element Interaction: Enables interaction with dynamic web elements through
locators and methods.
○ Parallel Execution: Allows parallel execution of test scripts for faster testing.
○ Integration with Testing Frameworks: Easily integrates with testing frameworks like
TestNG and JUnit.
3. Basic Architecture:
○ WebDriver Interface: Core interface providing methods to interact with browsers.
○ Browser Drivers: Browser-specific executable files (e.g., chromedriver.exe,
geckodriver.exe) responsible for browser communication.
○ Client Libraries: Language-specific bindings (e.g., Selenium WebDriver for Java)
enabling communication between code and WebDriver.
4. Key Components:
○ WebDriver Interface: Defines methods for browser interactions (e.g., open URL, find
element, click).
○ WebElement Interface: Represents individual elements on a web page, allowing
interaction (e.g., click, type).
○ Browser Drivers: Acts as a bridge between WebDriver and browsers, translating
commands.
5. Basic WebDriver Commands:
○ Initialization: Instantiate a WebDriver object for a specific browser.
○ Navigation: Open URLs and navigate between pages.
○ Find Elements: Locate web elements using various locators (e.g., ID, name, XPath).
○ Interactions: Simulate user actions like clicking, typing, and submitting forms.
○ Assertions and Verifications: Verify expected conditions and outcomes.
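The five command groups above can be tied together in a short Python sketch. It assumes Selenium 4+ with a locally available Chrome driver, and the URL is a placeholder; the Selenium imports are deferred inside the function so the file can be read without Selenium installed:

```python
# Hedged sketch combining the basic WebDriver commands. Assumes
# Selenium 4+ and a locally available Chrome driver; the URL is a
# placeholder. The Selenium imports are deferred so this module can be
# loaded (and the pure helper tested) without Selenium installed.

def first_link_locator():
    """Pure helper: the (strategy, value) locator pair used below."""
    return ("tag name", "a")

def smoke_test(url="https://example.com"):
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()                       # initialization
    try:
        driver.get(url)                               # navigation
        link = driver.find_element(By.TAG_NAME, "a")  # find element
        link.click()                                  # interaction
        assert driver.title != ""                     # verification
    finally:
        driver.quit()                                 # always close the session
```

The try/finally ensures the browser session is released even when an assertion fails, which keeps parallel runs from leaking orphaned browser processes.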
6. Pros:
○ Cross-Browser Compatibility: Supports various browsers, ensuring broad compatibility.
○ Extensive Community Support: Large community and documentation for problem-
solving.
○ Flexibility: Integrates with multiple programming languages and testing frameworks.
7. Cons:
○ Learning Curve: Requires understanding of programming and WebDriver concepts.
○ Flakiness: Sensitive to changes in web page structure, requiring updates to scripts.
○ Limited Support for Non-Web Applications: Primarily designed for web application
testing.
8. Use Cases:
○ Automated Testing: Automate functional testing of web applications.
○ Regression Testing: Ensure existing features work after changes.
○ Performance Testing: Simulate user interactions for performance assessments.
○ Browser Compatibility Testing: Validate application behavior across different browsers.
Unit 6:
Six Sigma is a set of techniques and tools for process improvement, aiming to minimize defects and variation. The term "Six Sigma" refers to the goal of processes that operate at no more than 3.4 defects per million opportunities. Six Sigma relies on data-driven quality tools; two of the most common, the cause-and-effect (fishbone) diagram and the histogram, are compared below:
Cause-and-Effect (Fishbone) Diagram vs. Histogram:
○ Process Steps:
■ Fishbone Diagram: 1. Define the problem or effect. 2. Identify major categories (branches) of potential causes. 3. Identify sub-causes under each major category. 4. Analyze and prioritize causes.
■ Histogram: 1. Gather and organize data. 2. Define data ranges (bins) for analysis. 3. Create histogram bars representing data frequency. 4. Analyze distribution patterns.
○ Decision Support:
■ Fishbone Diagram: Root cause analysis: helps teams identify the most significant factors contributing to a problem.
■ Histogram: Data distribution: assists in understanding the central tendency and variation in the data.
○ Example Use Case:
■ Fishbone Diagram: Manufacturing defects: identifying causes of defects in a manufacturing process.
■ Histogram: Exam scores: analyzing the distribution of exam scores to understand performance patterns.
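The histogram steps can be sketched directly; the scores and bin edges below are made-up example data:

```python
# Histogram sketch following the steps above: define bins, count
# frequencies per bin, then inspect the shape. Scores are made-up data.

def histogram(data, bin_edges):
    """Count values falling into each [edge[i], edge[i+1]) bin;
    the last bin also includes its upper edge."""
    counts = [0] * (len(bin_edges) - 1)
    for x in data:
        for i in range(len(counts)):
            last = (i == len(counts) - 1)
            if bin_edges[i] <= x < bin_edges[i + 1] or (last and x == bin_edges[i + 1]):
                counts[i] += 1
                break
    return counts

scores = [55, 62, 67, 71, 74, 78, 81, 85, 92]
bins = [50, 60, 70, 80, 90, 100]   # five bins of width 10
freq = histogram(scores, bins)     # -> [1, 2, 3, 2, 1]
```

The roughly symmetric frequencies peaking in the 70-80 bin are exactly the "distribution pattern" the analysis step looks for.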
1. Clear Requirements:
○ Definition: Well-defined and unambiguous software requirements.
○ Importance: Clear requirements serve as the foundation for designing and building
software that meets user expectations.
2. Effective Planning:
○ Definition: Comprehensive project planning, including resource allocation and timelines.
○ Importance: A well-structured plan ensures organized development, testing, and delivery
phases.
3. Skilled Team:
○ Definition: Competent and well-trained team members.
○ Importance: Skilled professionals contribute to efficient development, reduced errors,
and effective problem-solving.
4. Robust Architecture:
○ Definition: Well-designed software architecture that supports scalability and
maintainability.
○ Importance: A solid architecture ensures the software can evolve, adapt, and
accommodate changes over time.
5. Effective Communication:
○ Definition: Open and transparent communication among team members.
○ Importance: Clear communication minimizes misunderstandings, reduces errors, and
fosters collaboration.
6. Continuous Testing:
○ Definition: Comprehensive and continuous testing throughout the development life
cycle.
○ Importance: Ongoing testing identifies and rectifies defects early, ensuring higher
software reliability.
7. Automated Testing:
○ Definition: Implementation of automated testing tools and frameworks.
○ Importance: Automation enhances testing efficiency, coverage, and consistency.
8. Code Review and Quality Checks:
○ Definition: Regular code reviews and adherence to coding standards.
○ Importance: Code reviews catch errors, promote best practices, and maintain code
quality.
9. Version Control:
○ Definition: Implementation of version control systems.
○ Importance: Version control ensures traceability, collaboration, and the ability to revert to
previous states.
10. Documentation:
○ Definition: Comprehensive and up-to-date documentation.
○ Importance: Documentation facilitates understanding, maintenance, and future
development.
11. User Feedback Incorporation:
○ Definition: Integration of user feedback into the development process.
○ Importance: User feedback ensures the software aligns with user expectations and
needs.
12. Security Measures:
○ Definition: Implementation of robust security measures.
○ Importance: Ensuring data integrity, protection against vulnerabilities, and safeguarding
against cyber threats.
13. Scalability and Performance:
○ Definition: Designing software to handle growth and ensuring optimal performance.
○ Importance: Scalability and performance are crucial for user satisfaction, especially as
user loads increase.
14. Regulatory Compliance:
○ Definition: Adherence to relevant industry standards and regulations.
○ Importance: Compliance ensures ethical practices, data protection, and legal alignment.
15. Continuous Improvement:
○ Definition: Embracing a culture of continuous improvement.
○ Importance: Regularly assessing processes, seeking feedback, and implementing
enhancements.
16. Risk Management:
○ Definition: Identification and mitigation of potential risks.
○ Importance: Proactive risk management minimizes the impact of uncertainties on
software quality.
17. Customer Satisfaction:
○ Definition: Prioritizing customer satisfaction through user-centric design.
○ Importance: Satisfied customers contribute to software success and positive brand
perception.
18. Adherence to Deadlines:
○ Definition: Meeting project deadlines and milestones.
○ Importance: Timely delivery is crucial for project success and stakeholder satisfaction.
1. Definition:
● Overview: TQM is a holistic approach to management that aims to enhance the quality of
products and services throughout an organization.
● Principles: Focuses on customer satisfaction, continuous improvement, employee involvement,
and the integration of quality into all aspects of the business.
2. Core Principles:
● Customer Focus:
○ Understand and meet customer needs and expectations.
○ Solicit customer feedback for improvement.
● Continuous Improvement:
○ Strive for ongoing improvement in all processes.
○ Use data-driven methods like Plan-Do-Check-Act (PDCA) cycles.
● Employee Involvement:
○ Involve all employees in quality improvement initiatives.
○ Encourage a culture of ownership and responsibility.
● Process-Centric Approach:
○ View the organization as a series of interconnected processes.
○ Optimize processes for efficiency and effectiveness.
● Decision-Making Based on Data:
○ Use data and statistical methods for informed decision-making.
○ Monitor and analyze key performance indicators.
● Supplier Relationships:
○ Establish strong relationships with suppliers.
○ Collaborate for mutual benefit and improved product/service quality.
● Leadership Involvement:
○ Leadership plays a crucial role in driving TQM.
○ Create a supportive environment for quality initiatives.
1. Definition:
○ Overview: Defect Removal Effectiveness (DRE) is a metric used in software engineering
to measure the efficiency of the testing process in identifying and removing defects
before the software is released.
○ Calculation: DRE is calculated as the percentage of defects identified and removed
during the testing phase compared to the total number of defects, including those found
later in the development life cycle.
2. Calculation Formula:
○ DRE (%) = (Number of Defects Removed in Testing / Total Number of Defects) * 100
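The formula translates directly into code; the defect counts below are invented for illustration:

```python
# Direct translation of the DRE formula above; the defect counts are
# invented for illustration.

def dre(defects_removed_in_testing, total_defects):
    """Defect Removal Effectiveness as a percentage."""
    return defects_removed_in_testing / total_defects * 100

# 90 defects caught during testing, 10 more found after release:
release_dre = dre(90, 90 + 10)   # -> 90.0
```

Note that the denominator only becomes final after field defects are counted, so DRE is typically computed retrospectively, some time after release.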
3. Key Concepts:
○ Defect Identification:
■ DRE focuses on defects identified and addressed during the testing phase.
■ Includes defects found in various testing activities such as unit testing, integration
testing, system testing, and acceptance testing.
○ Total Number of Defects:
■ The denominator includes all defects found throughout the development life
cycle, including those identified by users after the software is released.
○ Quality Assurance Impact:
■ DRE provides insights into the effectiveness of quality assurance processes in
preventing defects from reaching the end-users.
4. Interpretation:
○ Higher DRE:
■ A higher DRE percentage indicates that a significant portion of defects has been
identified and addressed during testing.
■ Suggests a more effective testing process and a lower likelihood of defects
reaching production.
○ Lower DRE:
■ A lower DRE percentage indicates that a significant number of defects are
escaping into later phases or production.
■ May indicate gaps in the testing process, insufficient test coverage, or
inadequate test case design.
5. Benefits and Use Cases:
○ Process Improvement:
■ DRE is used as a key metric for assessing the effectiveness of testing practices.
■ Identifies areas for process improvement and optimization.
○ Decision-Making:
■ Helps in decision-making regarding release readiness.
■ Higher DRE provides confidence in the software's stability and quality.
○ Benchmarking:
■ Allows organizations to benchmark their defect removal effectiveness against
industry standards or best practices.
■ Facilitates comparisons with previous projects or competitors.
○ Resource Allocation:
■ Assists in optimizing resource allocation for testing efforts.
■ Helps determine the balance between testing activities to achieve better defect
identification.
6. Challenges:
○ Incomplete Data:
■ Calculating DRE requires comprehensive data on defects throughout the
development life cycle.
■ Incomplete or inaccurate defect data can lead to misleading DRE values.
○ Subjectivity:
■ Defining what constitutes a defect and when it is considered removed can be
subjective.
■ Consistency in defect identification and resolution criteria is essential.
○ External Factors:
■ External factors, such as changes in project scope or requirements, can impact
the calculation of DRE.
■ Understanding and accounting for such factors is important.
7. Continuous Improvement:
○ Trend Analysis:
■ Organizations can perform trend analysis over multiple projects to assess the
effectiveness of process improvements.
■ Identifying trends helps in refining testing strategies.
○ Feedback Loop:
■ Establishing a feedback loop based on DRE results allows teams to learn from
each project's performance.
■ Continuous improvement efforts can be guided by insights gained from DRE
analysis.
Run Chart vs. Control Chart:
○ Focus:
■ Run Chart: Visualizing trends, patterns, and shifts.
■ Control Chart: Identifying and controlling special-cause variation.
○ Data Points:
■ Run Chart: Simple plot of individual data points.
■ Control Chart: Plots central tendency (mean) and variability.
○ Lines on Chart:
■ Run Chart: Centerline, with an optional median line.
■ Control Chart: Centerline, upper control limit, and lower control limit.
○ Variability Detection:
■ Run Chart: Limited; does not explicitly show control limits.
■ Control Chart: Explicitly shows control limits for process stability.
○ Rules for Variation:
■ Run Chart: No specific rules for identifying out-of-control points.
■ Control Chart: Rules such as 8 points in a row above/below the centerline.
○ Common Cause vs. Special Cause:
■ Run Chart: Not explicitly differentiated.
■ Control Chart: Clearly distinguishes between common and special causes of variation.
○ Decision Criteria:
■ Run Chart: Relies on visual inspection for patterns.
■ Control Chart: Uses statistical control limits and rules.
○ Statistical Analysis:
■ Run Chart: Minimal statistical analysis.
■ Control Chart: In-depth statistical analysis for control limit determination.
○ Application Stage:
■ Run Chart: Early stages of process improvement.
■ Control Chart: Continuous monitoring in ongoing production.
○ Sensitivity:
■ Run Chart: Less sensitive to small process shifts.
■ Control Chart: More sensitive to small process shifts.
○ Commonly Used With:
■ Run Chart: Initial data exploration and basic trend analysis.
■ Control Chart: Statistical process control in manufacturing and other processes.
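The control-chart side of the comparison can be sketched in a few lines: compute the centerline and plus/minus 3-sigma limits, then flag points outside them. The sample measurements are made up, with the final value simulating a special-cause spike:

```python
import statistics

# Control-limit sketch: centerline = mean, limits = mean +/- 3 sigma,
# then flag out-of-control points. The measurements are made-up data;
# the final value simulates a special-cause spike.

def control_limits(data):
    """Return (LCL, centerline, UCL) using +/- 3 population std devs."""
    mean = statistics.mean(data)
    sigma = statistics.pstdev(data)
    return mean - 3 * sigma, mean, mean + 3 * sigma

def out_of_control(data):
    """Values falling outside the control limits (special-cause signals)."""
    lcl, _, ucl = control_limits(data)
    return [x for x in data if x < lcl or x > ucl]

samples = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0,
           10.1, 9.9, 10.0, 10.1, 9.9, 14.5]
# out_of_control(samples) -> [14.5]
```

In practice, limits are usually estimated from an in-control baseline period (or from subgroup ranges), not from data that already contains suspect points; this sketch recomputes them from the full series for brevity.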