Test Techniques & Events Training
Finally, let’s talk about test techniques. Test techniques are approaches used to design and execute software tests. Some commonly used test techniques include equivalence partitioning, boundary value analysis, decision table testing, exploratory testing, risk-based testing, and more. Each technique has its benefits and can help uncover different types of defects.
Equivalence Partitioning: This technique involves dividing the input data into partitions, or groups, where each group is expected to behave similarly. It is used to reduce the number of test cases by selecting one representative test case from each group. For example, consider a website that requires users to enter their age to verify they are old enough to access certain content. Equivalence partitioning could be used to divide the possible ages into three groups: below 18, 18-60, and above 60.
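The age-verification example can be sketched in a few lines of Python. The partition boundaries (18 and 60) come from the text; the representative values picked from each partition are illustrative.

```python
def age_partition(age: int) -> str:
    """Classify an age into one of the three equivalence partitions."""
    if age < 18:
        return "below 18"
    elif age <= 60:
        return "18-60"
    else:
        return "above 60"

# One representative value per partition is enough to cover that group.
representatives = {"below 18": 10, "18-60": 35, "above 60": 75}

for expected_partition, value in representatives.items():
    assert age_partition(value) == expected_partition
```

Three test cases cover the whole input space, instead of one test per possible age.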
Boundary Value Analysis: This technique involves selecting test cases at or near the boundaries of the input domain. It is used to ensure that the system behaves correctly at the limits of its input range. For example, if a system requires a value between 1 and 100, boundary value analysis would select values such as 1, 2, 99, and 100 as test cases, often along with the just-out-of-range values 0 and 101.
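A minimal sketch of generating boundary values for the 1-100 example, using the common three-point scheme (at, just inside, and just outside each boundary):

```python
def boundary_values(low: int, high: int) -> list[int]:
    """Return boundary test values for an inclusive input range:
    the value at each boundary, just inside it, and just outside it."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# For the 1..100 example in the text:
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```

The out-of-range values 0 and 101 check that the system rejects invalid input, not just that it accepts valid input.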
Decision Table Testing: This technique involves creating a table that maps inputs to outputs based on the system’s rules or requirements. It is used to test combinations of inputs and their expected outputs. For example, consider a system that calculates the cost of a hotel room based on the number of nights stayed, the type of room, and any additional services requested. A decision table could be used to test different combinations of these inputs against their expected costs.
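The hotel example can be expressed as a small decision table in code. The text names the inputs but not actual prices, so the rates below are illustrative only.

```python
# Hypothetical pricing rules for the hotel example.
NIGHTLY_RATE = {"standard": 100, "deluxe": 150}
SERVICE_FEE = {"breakfast": 15, "none": 0}

def room_cost(nights: int, room_type: str, service: str) -> int:
    return nights * (NIGHTLY_RATE[room_type] + SERVICE_FEE[service])

# Each row pairs one input combination with its expected output.
decision_table = [
    ((2, "standard", "none"),      200),
    ((2, "standard", "breakfast"), 230),
    ((3, "deluxe",   "none"),      450),
    ((1, "deluxe",   "breakfast"), 165),
]

for inputs, expected in decision_table:
    assert room_cost(*inputs) == expected
```

Adding a new rule to the system means adding a row to the table, which keeps the test set aligned with the requirements.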
Exploratory Testing: This technique involves testing a system without a predefined plan or script, relying instead on the tester’s intuition, experience, and creativity. It is used to discover defects and evaluate the system’s usability, user experience, and overall quality. For example, a tester could use exploratory testing to navigate a new mobile app and identify any issues with its navigation, layout, or functionality.
Experience-Based Testing: This technique involves using the knowledge and expertise of experienced testers to guide testing activities. It leverages the tester’s skills, intuition, and experience to identify defects and optimize the testing process. For example, an experienced tester could use their knowledge of similar systems to identify potential risks or defects in a new system.
Use Case Testing: This technique involves testing the system’s behavior in response to user scenarios or use cases. It is used to validate that the system meets the user’s requirements and expectations. For example, if the system is a shopping cart application, use case testing could involve testing scenarios such as adding items to the cart, updating quantities, and completing the purchase process.
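The shopping cart scenario above can be sketched as a use case walked end to end. The `Cart` class here is a toy stand-in for the real application, invented for illustration:

```python
class Cart:
    """Toy shopping cart used to demonstrate a use case test."""
    def __init__(self):
        self.items = {}          # item name -> quantity
        self.purchased = False

    def add(self, item: str, qty: int = 1):
        self.items[item] = self.items.get(item, 0) + qty

    def update_quantity(self, item: str, qty: int):
        if qty <= 0:
            self.items.pop(item, None)   # removing is a valid update
        else:
            self.items[item] = qty

    def checkout(self):
        if not self.items:
            raise ValueError("cannot purchase an empty cart")
        self.purchased = True

# Use case: add items, update a quantity, complete the purchase.
cart = Cart()
cart.add("phone")
cart.add("case", 2)
cart.update_quantity("case", 1)
cart.checkout()
assert cart.purchased and cart.items == {"phone": 1, "case": 1}
```

The test follows the user’s path through the system rather than exercising each method in isolation.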
Checklist-Based Testing: This technique involves creating a list of items to be tested and checking them off as they are completed. It is used to ensure that all necessary testing activities are completed and that no important areas are missed. For example, a checklist for testing a new software application could include items such as functionality, performance, security, and user interface.
Risk-Based Testing: This technique involves prioritizing testing activities based on the level of risk associated with each system component or function. It is used to ensure that the most critical areas of the system are thoroughly tested and that potential risks are identified and mitigated. For example, if the system handles sensitive customer data, risk-based testing could focus on areas such as data encryption, access controls, and data storage.
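One common way to operationalize risk-based prioritization is to score each component as risk = likelihood × impact and test the highest scores first. The component names and scores below are illustrative, not from any real project:

```python
# Likelihood and impact scored 1-5; risk = likelihood * impact.
components = [
    {"name": "data encryption", "likelihood": 3, "impact": 5},
    {"name": "access controls", "likelihood": 4, "impact": 5},
    {"name": "report layout",   "likelihood": 2, "impact": 1},
]

for c in components:
    c["risk"] = c["likelihood"] * c["impact"]

# Test the highest-risk components first.
priority_order = sorted(components, key=lambda c: c["risk"], reverse=True)
print([c["name"] for c in priority_order])
# ['access controls', 'data encryption', 'report layout']
```

The scoring scale matters less than applying it consistently, so the team allocates effort to the same areas it would flag in a risk review.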
Assessing software quality and testing effectiveness is another important part of the test effort. Metrics such as defect density, defect arrival rate, and test coverage can help assess how effective software testing is for the project.
Consider these metrics for measuring testing effectiveness:
Defect density is a metric used to measure the number of defects per unit of software code or functionality. It is calculated by dividing the total number of defects found in a software application by the size of the code or functionality that was tested. The result is usually expressed as defects per KLOC (thousand lines of code) or defects per function point. Equivalently, divide the number of defects by the total number of lines of code and multiply the result by 1,000 (one KLOC is 1,000 lines of code) to express it per KLOC.
It may help to emphasize that defect density is a useful metric for measuring the quality of a software system. A low defect density indicates that the software has fewer defects, which means it is more reliable and less likely to fail. On the other hand, a high defect density can indicate poor software quality and may call for additional testing and debugging. Defect density can also be part of the exit criteria: for example, once the metric falls below an agreed threshold, no more testing is required.
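The defect density calculation described above, with an illustrative defect count and code size:

```python
def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Defects per KLOC: defects divided by lines of code, times 1,000."""
    return defects_found / lines_of_code * 1000

# For example, 25 defects found in 50,000 lines of code:
print(defect_density(25, 50_000))  # 0.5 defects per KLOC

# Used as an exit criterion against a hypothetical threshold of 1.0:
assert defect_density(25, 50_000) < 1.0
```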
Defect arrival rate is a metric used to measure the rate at which defects are being found in a software application over time. It is calculated by dividing the total number of defects found during a given period by the amount of time spent testing during that period. The result is usually expressed as defects per hour or defects per day. A related calculation is the defect rate: the number of defective units observed divided by the total number of units tested. For example, if 10 out of 200 tested units are defective, the defect rate is 10 divided by 200, or 5 percent.
When learning about defect arrival rate, it’s important to emphasize that this metric can help identify trends in defect detection over time. A high defect arrival rate may indicate that the software is unstable and may require additional testing and debugging. Conversely, a low defect arrival rate can indicate that the software is stable and reliable.
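Both calculations described above are simple ratios; the 12-defects-in-40-hours figure is illustrative, while the 10-in-200 figure matches the example in the text:

```python
def defect_arrival_rate(defects: int, hours_tested: float) -> float:
    """Defects found per hour of testing."""
    return defects / hours_tested

def defect_rate(defective_units: int, units_tested: int) -> float:
    """Fraction of tested units found defective."""
    return defective_units / units_tested

print(defect_arrival_rate(12, 40))   # 0.3 defects per hour
print(defect_rate(10, 200) * 100)    # 5.0 percent
```

Tracking the arrival rate per test cycle, rather than as a single number, is what reveals the stability trend discussed above.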
Test coverage is a metric used to measure the degree to which a software application has been tested. This metric is calculated by dividing the amount of code or functionality that has been tested by the total amount of code or functionality in the application. The result is usually expressed as a percentage.
When teaching about test coverage, it’s important to emphasize that it is a useful metric for assessing the completeness of testing. A high test-coverage percentage indicates that a large portion of the software application has been tested, which means that the likelihood of undetected defects is reduced. Conversely, a low test-coverage percentage may indicate that there are areas of the software application that have not been adequately tested and may require additional testing.
Some additional metrics are the following:
- Total number of test cases: The total number of test cases is a metric used to measure the overall size of a software testing effort. It represents the total number of individual tests that are executed as part of the testing process.
- For example, if a testing effort includes 100 test cases, then the total number of test cases would be 100.
- Number of test cases passed: The number of test cases passed is a metric used to measure the number of tests that were executed successfully without any issues. It represents the count of tests that have passed and met their acceptance criteria.
- For example, if a testing effort includes 100 test cases and 80 of them have passed, then the number of test cases passed would be 80.
- Number of test cases failed: The number of test cases failed is a metric used to measure the number of tests that did not meet their acceptance criteria and failed during execution. It represents the count of tests that have failed and require further investigation and debugging.
- For example, if a testing effort includes 100 test cases and 20 of them have failed, then the number of test cases failed would be 20.
- Number of test cases blocked: The number of test cases blocked is a metric used to measure the number of tests that could not be executed due to a system issue or dependency. It represents the count of tests that were blocked and could not be executed but may still be necessary for comprehensive testing.
- For example, if a testing effort includes 100 test cases and 5 of them were blocked due to a system issue, then the number of test cases blocked would be 5.
- Number of defects found: The number of defects found is a metric used to measure the number of issues or bugs that were discovered during testing. It represents the total number of defects identified and reported to the development team for resolution.
- For example, if a testing effort includes 100 test cases and 10 defects were identified, then the number of defects found would be 10.
- Number of defects accepted: The number of defects accepted is a metric used to measure the number of issues or bugs that were considered valid and accepted by the development team. It represents the count of defects that were deemed significant and required resolution.
- For example, if a testing effort includes 100 test cases and 10 defects were identified, but only 5 of them were considered significant and accepted by the development team, then the number of defects accepted would be 5.
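These execution metrics all fall out of a tally over test results. A minimal sketch, with illustrative result data:

```python
from collections import Counter

# Hypothetical outcomes of one test run: one string per executed test case.
results = ["passed"] * 80 + ["failed"] * 15 + ["blocked"] * 5

counts = Counter(results)
total = len(results)

print("Total test cases:", total)        # 100
print("Passed:", counts["passed"])       # 80
print("Failed:", counts["failed"])       # 15
print("Blocked:", counts["blocked"])     # 5
print("Pass rate:", counts["passed"] / total * 100, "percent")  # 80.0
```

In practice the `results` list would come from the test management tool’s export rather than being hard-coded.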
Improving quality and measuring progress are probably the most important benefits of taking test measurements during testing activities. Metrics tell you how well the testing is progressing and give the team evidence that quality is being achieved. Without test metrics, you don’t know whether you are getting good results or whether the test strategy is accomplishing its goals.
There are some testing strategy principles that can guide the design and implementation of a useful software testing strategy, one that helps ensure the software meets quality standards and customer requirements. Let’s review some common principles:
- Test Early: Testing should start as early as possible in the software development life cycle to detect defects and issues early and reduce the cost of fixing them.
- Test Continuously: Testing should be an ongoing process throughout the software development life cycle, not just at the end of the project. This helps identify issues early and ensure that the software meets the quality standards.
- Use Appropriate Test Methods and Techniques: Use a variety of testing methods and techniques to identify different types of defects and vulnerabilities in the software, such as white-box, black-box, grey-box, smoke, and sanity testing. And don’t leave out the techniques discussed above, such as equivalence partitioning and boundary value analysis.
- Prioritize Testing: Prioritize testing efforts based on the software’s criticality, complexity, and risk factors. Allocate more people, time, and skill resources to the critical and high-risk areas of the software.
- Automate Testing: Use automated testing tools and frameworks to increase testing efficiency and accuracy, especially for repetitive or tedious testing tasks like regression testing and smoke testing or sanity testing.
- Maintain Test Documentation: Maintain proper documentation of test cases, test results, and other relevant information to improve communication and ensure that testing is reproducible.
- Continuously Evaluate and Improve Testing: Continuously evaluate and improve the testing process by analyzing test results, identifying issues, and incorporating feedback from stakeholders.
Understanding and implementing the different types of software testing, test events, test methods, and test techniques can help developers create better software products that meet their customers’ needs. Through principle-driven use of test metrics, test analysts gain the insight to answer the key questions: Is the application stable? Is the application reliable?
A website has been selected as the application under test (AUT). The objective is to give you some exposure to testing activities. The training modules so far have familiarized you with the processes of manually testing software. In this assessment exercise you will be asked to get familiar with two business functions:
- The Store Front function of the OpenCart application. You will be testing its Phones & PDAs sales scenario.
- The Administration function of OpenCart. You will be testing its Login Security scenario.
- The application URL is: https://www.opencart.com/index.php?route=cms/demo.
- Go to the website and get familiar with it. Put all responses in an email or attach them as an MS Word document. (I am assessing your approach to reporting for this exercise.)
- You are given two business scenarios to test: Phones & PDAs and Login Security.
- Phones & PDAs Scenario
- Displays three items that are selected from the main menu using a button
- It can return to the home page by clicking the home icon button
- It has three functional buttons under each item displayed
- It has several buttons across the top of the items displayed
- It has two clickable links to display a detail page for the item
- You can hover over images, icons, and other objects to display labels
- If you select Add to Cart for an item, it will change the cart value at the top of the screen
- The cart has several options that need testing. When you click the Continue Shopping option from the cart display, it should return to the Phones & PDAs page
- Check the search process
- Administration Login Security Scenario
- The login screen allows user credentials entry. Check for valid and invalid data.
- Error messages should identify which entry is in error for an invalid entry
- Valid login should display dashboard
- User profile should be accessible
- Check the forgot password feature
- Check the logout function
- With the information in step 2, develop adequate test cases to test the application.
- Run your test and record results.
- Answer the following questions.
- How many test cases did you develop?
- How many defects did you find? Any system failures? Any requirement failures?
- What method of testing did you conduct?
- What test events could your test cases satisfy?
- Did you use any test techniques?