UNIT 1
SOFTWARE TESTING
Topics : Overview of Software Evolution, SDLC, Testing Process, Terminologies in Testing: Error, Fault, Failure, Verification, Validation, Difference Between Verification and Validation, Test Cases, Testing Suite, Test Oracles, Impracticality of Testing All Data; Impracticality of Testing All Paths. Verification: Verification Methods, SRS Verification, Source Code Reviews, User Documentation Verification, Software Project Audit, Tailoring Software Quality Assurance Program by Reviews, Walkthrough, Inspection and Configuration Audits.
Software evaluation is the process of systematically assessing and reviewing software to ensure that it meets the required standards and fulfills its intended purpose. This process involves examining various aspects such as functionality, performance, security, usability, and maintainability. It is crucial in selecting, developing, and deploying software solutions in both personal and organizational contexts.
1. Purpose of Software Evaluation
The primary goal of software evaluation is to:
  • Assess Quality : Ensure that the software meets user needs, quality standards, and business goals.
  • Mitigate Risks : Identify potential risks related to performance, security, and integration before full deployment.
  • Optimize Resources : Ensure that the software is the most effective use of time, money, and other resources.
  • Support Decision-Making : Provide data and insights to help stakeholders make informed decisions about acquiring, developing, or maintaining software.
2. Types of Software Evaluation
Software evaluation can be divided into several categories based on different stages and aspects of software.
  • Pre-Evaluation : Conducted before software is procured or developed. It assesses available options and alternatives.
  • Post-Evaluation : Performed after the software has been implemented to evaluate its actual performance and benefits.
  • Technical Evaluation : Focuses on the software’s technical specifications such as compatibility, integration capabilities, and performance under load.
  • User Evaluation : Assesses user satisfaction, usability, and user experience through feedback, surveys, and testing.
  • Security Evaluation : Focuses on assessing how secure the software is, including identifying vulnerabilities and compliance with security standards.

3. Evaluation Criteria
Several key criteria are used during the software evaluation process to determine the overall suitability of a software solution:
  • Functionality : This is about assessing whether the software provides the required features and functions needed by the users. Does the software do what it promises?
  • Performance : Evaluates how well the software performs under different conditions, including speed, stability, and responsiveness. Factors like scalability and system load handling are considered here.
  • Usability : Determines how easy the software is to use. It includes user interface design, user experience, ease of navigation, and the learning curve. Usability testing can involve end users in trials or simulations.
  • Security : Assesses whether the software protects data and is resistant to threats like hacking, viruses, and malware. It checks compliance with security regulations and standards (e.g., GDPR, HIPAA).
  • Compatibility & Integration : Ensures that the software can integrate seamlessly with existing systems, databases, and other software. It also looks at hardware compatibility and operating system support.
  • Maintainability : Evaluates how easy it is to maintain the software in the long term, including aspects like code clarity, documentation, ease of debugging, and future updates.
  • Use scoring models, checklists, or weighted scoring systems to rank the software against each criterion (a minimal scoring sketch appears after this process list).

d. Pilot Testing:
  • Implement the software on a smaller scale before full deployment.
  • Assess the results and gather feedback from users.
  • Resolve any issues or bugs that surface during the pilot phase.

e. Decision & Selection:
  • Analyze all data from testing and pilots.
  • Perform a cost-benefit analysis.
  • Select the best software solution based on the evaluation criteria.

f. Post-Implementation Review:
  • Conduct a review after full deployment to assess whether the software meets the organization’s needs.
  • Collect user feedback to understand areas for improvement.
  • Reassess performance, security, and maintainability based on real-world use.
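To make the weighted scoring idea mentioned in the process above concrete, here is a minimal sketch in Python. The criteria, weights, candidate names, and 1-5 scores are all made-up placeholders for illustration, not values from these notes.

```python
# Minimal weighted-scoring sketch (hypothetical weights and scores).
# Each candidate's total = sum(weight * score) over all criteria.

weights = {                        # relative importance of each criterion
    "functionality": 0.30,
    "performance": 0.20,
    "usability": 0.15,
    "security": 0.20,
    "maintainability": 0.15,
}

candidates = {                     # scores on a 1-5 scale (placeholder data)
    "Tool A": {"functionality": 4, "performance": 3, "usability": 5,
               "security": 4, "maintainability": 3},
    "Tool B": {"functionality": 5, "performance": 4, "usability": 3,
               "security": 3, "maintainability": 4},
}

def weighted_score(scores):
    """Combine per-criterion scores into a single weighted total."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank candidates from best to worst weighted score.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```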
5. Tools & Techniques for Software Evaluation
There are several tools and techniques commonly used in software evaluation:
  • Benchmarking : Comparing the software against industry standards or competitor solutions.
  • Prototyping : Developing a simplified version of the software to test core features and functionality.
  • Surveys and Questionnaires : Collecting user feedback on the software's ease of use, performance, and overall satisfaction.
  • Performance Testing Tools : Using tools like JMeter or LoadRunner to simulate high-load conditions and assess the software's performance (a minimal sketch follows this list).
  • Security Testing Tools : Tools like Nessus, OWASP ZAP, or Burp Suite can be used to test for security vulnerabilities.
  • User Acceptance Testing (UAT) : A formalized testing process where users validate that the software meets their requirements before full deployment.
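To illustrate the kind of load check that tools like JMeter or LoadRunner automate at much larger scale, here is a minimal sketch using only Python's standard library. The target URL, request count, and worker count are hypothetical placeholders; a real evaluation would use a dedicated tool in a controlled test environment.

```python
# Minimal load-test sketch: fire concurrent GET requests and report latency.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/health"   # placeholder endpoint under test
REQUESTS = 50                          # total requests to send
WORKERS = 10                           # concurrent threads

def timed_request(_):
    """Send one GET request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    latencies = list(pool.map(timed_request, range(REQUESTS)))

print(f"requests sent: {len(latencies)}")
print(f"mean latency:  {statistics.mean(latencies):.3f}s")
print(f"max latency:   {max(latencies):.3f}s")
```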
6. Challenges in Software Evaluation
  • Complexity of Requirements : Ensuring all user requirements are captured accurately can be difficult, especially in large or diverse organizations.
  • Bias in Evaluation : Subjective biases from evaluators can impact the software's assessment, leading to suboptimal decisions.
  • Inadequate Testing : Skipping thorough testing or pilot phases can lead to problems post-deployment, such as unexpected bugs or integration issues.
  • Vendor Influence : Vendors may provide incomplete or biased information, making it hard to assess the software objectively.

The Software Development Life Cycle (SDLC) is a systematic process used by software engineers and project managers to design, develop, test, and deploy software. It ensures that the software meets the business requirements, is of high quality, and is delivered within time and budget constraints. SDLC outlines a structured framework of stages, from the initial idea of the software to its final release and subsequent maintenance. Here is a detailed breakdown of the SDLC process, its stages, and the methodologies involved:

Phases of the SDLC
1. Planning and Requirement Analysis
   o Purpose : This is the foundation of the SDLC process. The objective is to understand the requirements of the client or stakeholders, set the project scope, and plan resources.
   o Activities :
     ▪ Feasibility Study : Assesses the technical, operational, and financial feasibility of the project.

     ▪ Unit Testing : Testing individual components or modules to ensure they work as expected (a minimal sketch follows these activities).
     ▪ Version Control : Managing and controlling code versions to handle changes and avoid conflicts, typically using tools like Git.
     ▪ Continuous Integration : Frequent integration of code into a shared repository to detect errors early.
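To show what a unit test looks like in practice, here is a minimal sketch using Python's built-in unittest module. The add function and its expected behaviour are made up purely for illustration; they are not part of any system described in these notes.

```python
# Minimal unit-test sketch using Python's built-in unittest module.
import unittest

def add(a, b):
    """Hypothetical unit under test: returns the sum of two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_positive_numbers(self):
        # The module is exercised in isolation and the result is
        # compared against the expected value.
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```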

4. Testing
   o Purpose : To ensure that the software is functional, bug-free, and meets the requirements.
   o Activities :
     ▪ Test Planning : Develop a strategy and plan for the various types of testing.
     ▪ Unit Testing : Testing individual components or modules.
     ▪ Integration Testing : Testing how different modules or systems interact with each other.
     ▪ System Testing : Verifying that the entire system works as expected.
     ▪ User Acceptance Testing (UAT) : Engaging end-users to test whether the software meets their needs and business requirements.
     ▪ Bug Tracking and Fixing : Identifying, logging, and resolving bugs and defects.
     ▪ Performance Testing : Checking system performance under various conditions (e.g., load, stress, scalability).
     ▪ Security Testing : Assessing the software for security vulnerabilities.
5. Deployment
   o Purpose : Releasing the software to the production environment where end-users can begin using it.

   o Activities :
     ▪ Deployment Planning : Scheduling and managing the release process. It can involve a phased release, a pilot, or full deployment.
     ▪ Configuration Management : Setting up the infrastructure needed for the software to run in production.
     ▪ Installation : Deploying the software to the client or the intended server.
     ▪ User Training : Providing documentation or training to users on how to use the new system.
     ▪ Data Migration : Moving data from legacy systems (if applicable) to the new system.

6. Maintenance
   o Purpose : Ensuring the software continues to work as expected after deployment, and making improvements or updates as needed.
   o Activities :
     ▪ Bug Fixes : Identifying and fixing issues that arise after deployment.
     ▪ Enhancements : Implementing new features or changes based on user feedback or evolving requirements.
     ▪ Performance Monitoring : Continuously monitoring the performance of the software in production.
     ▪ Patch Management : Updating the software with security patches or other necessary fixes.
     ▪ Support : Providing technical support to end-users and handling operational issues.
     ▪ Retirement : Eventually, the software might need to be retired or replaced as newer technology evolves.

SDLC Models/Methodologies

     ▪ Allows for partial implementation and feedback before the full system is completed.
     ▪ Flexible for evolving requirements.
   o Weaknesses :
     ▪ More complex management and planning due to multiple iterations.
     ▪ Needs continuous feedback from stakeholders.

3. Agile Model
   o Overview : Emphasizes flexibility and customer satisfaction by delivering software in short, iterative cycles called sprints (often 2-4 weeks).
   o Strengths :
     ▪ Highly flexible and adaptive to change.
     ▪ Continuous collaboration with stakeholders.
     ▪ Quick delivery of functional software.
   o Weaknesses :
     ▪ Requires active client involvement.
     ▪ Can lead to scope creep without careful management.
   Popular Agile Frameworks :
   o Scrum : Divides development into time-boxed iterations (sprints) and involves specific roles such as Product Owner and Scrum Master.
   o Kanban : A visual framework for managing work in progress (WIP) by limiting the amount of work and focusing on flow.
4. Spiral Model
   o Overview : Combines iterative development with risk management. The software is developed in cycles (or spirals) that assess risks at each stage.
   o Strengths :

     ▪ Excellent for large, complex projects with high risk.
     ▪ Continuous risk assessment ensures issues are identified early.
   o Weaknesses :
     ▪ Requires considerable expertise in risk analysis.
     ▪ Can be costly due to the repetitive nature of its cycles.

5. DevOps Model
   o Overview : Focuses on integrating development and operations to improve collaboration, automate processes, and ensure continuous integration/continuous deployment (CI/CD).
   o Strengths :
     ▪ Streamlines the software development and release process.
     ▪ Ensures rapid and continuous delivery of features.
   o Weaknesses :
     ▪ Needs a high level of automation and collaboration.
     ▪ Requires cultural and organizational changes.

Key Roles in SDLC
  2. Project Manager : Oversees the entire SDLC process, manages resources, timelines, and ensures project goals are met.
  3. Business Analyst : Engages with stakeholders to gather and document business requirements.
  4. Software Architect : Designs the overall system architecture and ensures the technical framework meets business needs.
  5. Developers : Write and maintain the source code based on the design specifications.
  6. Testers/QA Engineers : Test the software to ensure it is functional, bug-free, and meets user expectations.
3. Failure
  • Definition : A failure occurs when the software does not perform as expected during execution. It is the observable result of a fault or defect in the software system. Failures typically occur during testing or in a production environment.
  • Relationship with Fault : A fault can cause a failure, but not all faults will immediately result in failures. Failures depend on execution and the specific conditions under which the fault is triggered.
  • Example : A website fails to load when the user clicks a certain button, which is caused by a bug (fault) in the code.
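The sketch below illustrates the fault/failure relationship described above: the defect is always present in the code, but an observable failure only occurs when an input that triggers it is executed. The average function is a made-up example, not part of these notes.

```python
# A fault (defect in the code) that only produces a failure for some inputs.
def average(values):
    # Fault: divides by len(values) without checking for an empty list.
    return sum(values) / len(values)

print(average([2, 4, 6]))   # works: the fault stays hidden, no failure observed
print(average([]))          # failure: ZeroDivisionError is raised at run time
```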
4. Verification
  • Definition : Verification is the process of checking whether the software product meets its specified requirements. It is a static process that involves reviews, walkthroughs, and inspections, ensuring that the software is being built correctly according to the specifications.
  • Objective : Ensure the product is built right.
  • Techniques :
    o Code Reviews
    o Requirement Reviews
    o Design Inspections
    o Walkthroughs
  • Example : A team reviews the design documents and checks that they meet the functional requirements as stated in the specification.

5. Validation
  • Definition : Validation is the process of evaluating the software during or at the end of the development process to ensure that it meets the user's needs and requirements. It is a dynamic process that involves actual testing of the product in a real or simulated environment.
  • Objective : Ensure the right product is built.
  • Techniques :
    o Functional Testing
    o System Testing
    o User Acceptance Testing (UAT)

  • Example : A customer tests the final product to confirm that it works as expected and meets their needs.

Key Differences Between Verification and Validation:
  • Verification checks that the product is being built right; validation checks that the right product is being built.
  • Verification is a static process (reviews, walkthroughs, inspections) applied to documents, design, and code; validation is a dynamic process that involves actually testing the product in a real or simulated environment.
  • Verification is performed against the specified requirements during development; validation is performed against the user's needs, during or at the end of the development process.

Test Case: Detailed Overview
A test case is a set of actions executed to verify a particular feature or functionality of a software application. It defines the conditions, inputs, execution steps, and expected outcomes for validating a specific aspect of the system. Test cases are essential components of software testing as they help ensure that the software behaves as expected and meets the specified requirements.

Key Components of a Test Case
1. Test Case ID :
   o A unique identifier for the test case. It helps in tracking and referencing the test case easily.
   o Example: TC_001, Login_
2. Test Case Title/Name :
   o A short, descriptive name summarizing the test scenario.
   o Example: "Verify successful login with valid credentials."
3. Test Description :
   o A detailed explanation of what is being tested and why. It provides context for the test case.
   o Example: "This test case verifies that a user can successfully log in to the application using a valid username and password."
4. Preconditions :
   o Conditions that must be met before the test case can be executed. These can include setting up test data, navigating to a specific page, or configuring certain settings.
1. Pass/Fail Status :
   o After executing the test case, the tester marks it as either "Pass" (if the expected and actual results match) or "Fail" (if they don't match).
   o Example: "Pass" or "Fail."
2. Postconditions :
   o Any state or conditions that should be achieved after the test execution, often used for cleanup or resetting data.
   o Example: "User remains logged in after successful login."
3. Test Priority :
   o Indicates the importance of the test case. High-priority test cases should be executed before low-priority ones. This helps in risk management and efficient test execution.
   o Example: "High," "Medium," or "Low."
4. Type of Test :
   o Specifies whether the test is functional, non-functional, UI, integration, system, etc.
   o Example: "Functional Test" or "Regression Test."
5. Environment :
   o Specifies the test environment where the test will be executed (e.g., operating system, browser, hardware).
   o Example: "Windows 10, Chrome browser v85."
6. Test Execution Date and Time :
   o The date and time when the test case was executed. This helps in tracking and audit purposes.
   o Example: "Execution Date: 2024-10-12, Time: 10:30 AM."
7. Executed By :
   o The name of the tester who executed the test case.
   o Example: "Executed by: John Doe." (A structured sketch collecting these components follows.)
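To show how these components can be captured in a structured form, here is a minimal sketch using a Python dataclass. The field names follow the components listed above; the steps and expected_result fields, and all of the sample values, are illustrative assumptions rather than content from these notes.

```python
# Minimal sketch: representing a test case as a structured record.
from dataclasses import dataclass

@dataclass
class TestCase:
    test_case_id: str             # unique identifier, e.g. TC_001
    title: str                    # short descriptive name
    description: str              # what is being tested and why
    preconditions: list           # conditions required before execution
    steps: list                   # execution steps (assumed field)
    expected_result: str          # expected outcome (assumed field)
    priority: str = "Medium"      # High / Medium / Low
    test_type: str = "Functional Test"
    environment: str = "Windows 10, Chrome browser v85"
    status: str = "Not Executed"  # later set to "Pass" or "Fail"
    executed_by: str = ""         # tester's name after execution

# Illustrative instance mirroring the login example used in these notes.
login_test = TestCase(
    test_case_id="TC_001",
    title="Verify successful login with valid credentials",
    description="Checks that a user can log in with a valid username and password.",
    preconditions=["User account exists", "Login page is reachable"],
    steps=["Open the login page", "Enter valid credentials", "Click Login"],
    expected_result="User is redirected to the dashboard",
)

print(login_test.test_case_id, "-", login_test.title)
```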

Types of Test Cases
1. Functional Test Case :
   o Focuses on verifying that the system functions according to the specified requirements.
   o Example: Testing login functionality, form submissions, or data processing.
2. Non-Functional Test Case :
   o Verifies non-functional attributes like performance, scalability, usability, and reliability.
   o Example: Load testing, security testing, UI responsiveness.
3. Positive Test Case :
   o Tests a system using valid inputs to ensure expected behavior.
   o Example: Entering a valid email and password combination for login.
4. Negative Test Case :
   o Tests a system using invalid or incorrect inputs to ensure that it handles errors gracefully.
   o Example: Entering an invalid password or leaving a mandatory field blank during login.
5. Boundary Test Case :
   o Verifies how the system behaves at the boundary values of input domains.
   o Example: Testing a password field with 6 (minimum) and 12 (maximum) characters (a minimal sketch of positive, negative, and boundary cases follows this list).
6. Regression Test Case :
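The positive, negative, and boundary test cases above (items 3-5) can be expressed directly as automated checks. Here is a minimal sketch using Python's unittest; the validate_password function and its 6-12 character rule are hypothetical, chosen only to mirror the boundary example in item 5.

```python
# Minimal sketch of positive, negative, and boundary test cases
# for a hypothetical password-length rule (6 to 12 characters).
import unittest

def validate_password(password):
    """Hypothetical unit under test: accepts passwords of 6-12 characters."""
    return 6 <= len(password) <= 12

class TestPasswordLength(unittest.TestCase):
    def test_valid_password_is_accepted(self):        # positive case
        self.assertTrue(validate_password("secret12"))

    def test_empty_password_is_rejected(self):         # negative case
        self.assertFalse(validate_password(""))

    def test_boundary_values(self):                    # boundary cases
        self.assertTrue(validate_password("a" * 6))    # minimum length
        self.assertTrue(validate_password("a" * 12))   # maximum length
        self.assertFalse(validate_password("a" * 5))   # just below minimum
        self.assertFalse(validate_password("a" * 13))  # just above maximum

if __name__ == "__main__":
    unittest.main()
```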

o A test suite typically contains multiple test cases, each designed to validate a particular aspect of the software. The test cases within a test suite can be related to a single module, feature, or behavior of the system.

1. Execution Sequence :
   o Test suites often define an execution sequence in which the test cases need to be run. For example, setup and configuration test cases might run first, followed by functional test cases, and then cleanup cases.
2. Grouped by Purpose :
   o Test suites can be categorized by the purpose of the test cases they contain (a minimal sketch follows this item):
     ▪ Smoke Test Suite : Contains basic tests to verify that critical functionality works after a new build.
     ▪ Regression Test Suite : Contains test cases to ensure that existing functionality is not broken after changes are introduced.
     ▪ Functional Test Suite : Focuses on verifying specific functional requirements.
     ▪ Performance Test Suite : Used to measure how the application performs under different loads.
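To make the grouping and execution-sequence ideas concrete, here is a minimal sketch using Python's built-in unittest.TestSuite. The smoke and regression test classes and their checks are hypothetical placeholders, not tests for any real system.

```python
# Minimal test-suite sketch: grouping test cases and fixing their
# execution order with Python's built-in unittest.TestSuite.
import unittest

class SmokeTests(unittest.TestCase):
    """Hypothetical basic checks run first after a new build."""
    def test_application_starts(self):
        self.assertTrue(True)          # placeholder check

class RegressionTests(unittest.TestCase):
    """Hypothetical checks that existing functionality still works."""
    def test_existing_feature_still_works(self):
        self.assertEqual(2 + 2, 4)     # placeholder check

def build_suite():
    suite = unittest.TestSuite()
    # Execution sequence: smoke tests first, then regression tests.
    suite.addTest(SmokeTests("test_application_starts"))
    suite.addTest(RegressionTests("test_existing_feature_still_works"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```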
Components of a Test Suite
1. Test Suite ID :
   o A unique identifier for the test suite. It helps in tracking and referencing the suite.
   o Example: TS_001, Login_Module_TestSuite
2. Test Suite Name/Title :
   o A descriptive title indicating the functionality or module being tested.
   o Example: "User Authentication Test Suite."
3. Test Suite Description :
   o A detailed description of what the test suite covers, such as the purpose and scope of the test suite.
   o Example: "This test suite validates the user authentication functionality including login, logout, and password recovery."
4. Test Cases :
   o The list of all test cases included in the test suite, each identified by their unique test case ID.
   o Example:
     ▪ TC_001: Verify successful login with valid credentials.
     ▪ TC_002: Verify error message with invalid login credentials.
     ▪ TC_003: Verify password recovery functionality.
5. Preconditions :
   o Any prerequisite conditions that must be fulfilled before executing the test cases in the suite.
   o Example: "Database should be populated with user data. The server must be up and running."
6. Execution Flow :
   o The sequence in which the test cases should be executed, if the test cases are interdependent or need to follow a specific flow.
   o Example: Execute TC_001 (Login) first, then TC_003 (Password Recovery), followed by TC_002 (Invalid Login).
7. Test Data :
   o Any test data that is required for the execution of the test cases within the suite.
   o Example: Username and password combinations for various test scenarios.
8. Expected Outcome :