Standards for Testing and Tools for Testing
QA standards simply specify that testing should be performed; industry-specific standards specify what level of testing to perform; and testing standards specify how to perform testing. Ideally, the testing standards should be referenced from the other two.
Tool Support for Testing (CAST, Computer Aided Software Testing):
Requirements testing tools provide automated support for the verification and validation of requirements models, such as consistency checking and animation.

Static analysis tools provide information about the quality of the software by examining the code, rather than by running test cases through it. Static analysis tools usually give objective measurements of various characteristics of the software, such as the cyclomatic complexity measure and other quality metrics.
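As a flavour of the kind of measurement a static analysis tool makes without executing the code, the sketch below (Python, using the standard ast module) counts decision points in a piece of source and reports an approximate cyclomatic complexity of decisions + 1; a real tool handles far more constructs, languages and metrics.

import ast

# Approximate cyclomatic complexity: one plus the number of decision points.
# A real static analysis tool covers far more constructs (try/except, boolean
# operators, comprehensions) and reports many other quality metrics as well.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

example = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(x):
        if x % 2 == 0:
            return "even"
    return "odd"
"""

print(cyclomatic_complexity(example))  # 4: three decision points + 1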
Test design tools generate test cases from a specification that must normally be held in a CASE tool repository, or from formally specified requirements held in the tool itself. Some tools generate test cases from an analysis of the code.
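A toy sketch of what such generation can look like: from a simple field specification (an inline dict here, standing in for a CASE tool repository or a formal requirements model), boundary-value test cases are derived automatically.

# Hypothetical mini-specification: each field has a valid inclusive range.
# A real test design tool would read this from a CASE tool repository or a
# formal requirements model rather than from an inline dict.
spec = {
    "age":    {"min": 18, "max": 65},
    "weight": {"min": 1,  "max": 999},
}

def boundary_value_cases(field, lo, hi):
    """Generate classic boundary-value test cases for one field."""
    return [
        (field, lo - 1, "invalid: below minimum"),
        (field, lo,     "valid: at minimum"),
        (field, lo + 1, "valid: just above minimum"),
        (field, hi - 1, "valid: just below maximum"),
        (field, hi,     "valid: at maximum"),
        (field, hi + 1, "invalid: above maximum"),
    ]

for name, bounds in spec.items():
    for case in boundary_value_cases(name, bounds["min"], bounds["max"]):
        print(case)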
Test data preparation tools enable data to be selected from existing databases or created, generated, manipulated and edited for use in tests. The most sophisticated tools can deal with a range of file and database formats.
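A minimal sketch of test data preparation in Python: synthetic records are generated, a subset is selected and edited, and the result is written out as CSV (the field names and selection rule are illustrative only).

import csv
import random

random.seed(42)  # reproducible test data

# Generate synthetic customer records (names and fields are illustrative only).
rows = [{"id": i,
         "name": f"customer_{i}",
         "balance": round(random.uniform(0, 10_000), 2)}
        for i in range(1, 101)]

# Select a subset of interest (e.g. high-balance accounts) and edit a field,
# as a data preparation tool would when tailoring existing data to a test.
selected = [dict(r, name=r["name"].upper())
            for r in rows if r["balance"] > 5_000]

with open("test_data.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["id", "name", "balance"])
    writer.writeheader()
    writer.writerows(selected)

print(f"wrote {len(selected)} of {len(rows)} generated rows")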
Character-based test running tools provide test capture and replay facilities for dumb-terminal-based applications. The tools simulate user-entered terminal keystrokes and capture screen responses for later comparison. Test procedures are normally captured in a programmable script language; data, test cases and expected results may be held in separate test repositories. These tools are most often used to automate regression testing.
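For a feel of capture/replay against a terminal application, here is a small sketch assuming the third-party pexpect library on a Unix-like system, with the python3 interpreter standing in for the application under test: keystrokes are sent, the screen response is captured, and it is compared with an expected result.

# Minimal capture/replay sketch for a terminal application (assumes pexpect).
import pexpect

child = pexpect.spawn("python3", encoding="utf-8", timeout=10)
child.expect(">>> ")            # wait for the interpreter prompt

child.sendline("2 + 3")         # simulate user-entered keystrokes
child.expect(">>> ")            # capture output up to the next prompt

actual = child.before.strip().splitlines()[-1]
expected = "5"
print("PASS" if actual == expected else f"FAIL: got {actual!r}")

child.sendline("exit()")
child.close()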
GUI test running tools provide test capture and replay facilities for WIMP-interface-based applications. The tools simulate mouse movement, button clicks and keyboard inputs and can recognise GUI objects such as windows, fields, buttons and other controls. Object states and bitmap images can be captured for later comparison. Test procedures are normally captured in a programmable script language; data, test cases and expected results may be held in separate test repositories. These tools are most often used to automate regression testing.
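The data-driven structure these tools encourage can be sketched as below; the FakeGui class is a stub standing in for a commercial tool's object-recognition and replay API, and the point is the separation of the replay script from the repository of test cases and expected results.

# Sketch of a data-driven GUI regression script. FakeGui is a stand-in for a
# GUI test running tool's API; the calculation it performs is illustrative.
class FakeGui:
    def __init__(self): self.fields = {}
    def set_field(self, name, value): self.fields[name] = value
    def click(self, button): pass   # the real application would recalculate here
    def read_field(self, name):
        amount = float(self.fields["amount"])
        rate = float(self.fields["rate"])
        return f"{amount * (1 + rate / 100):.2f}"

gui = FakeGui()

# Test cases and expected results held separately from the script itself.
cases = [
    {"amount": "40",  "rate": "5", "expected_total": "42.00"},
    {"amount": "100", "rate": "0", "expected_total": "100.00"},
]

for case in cases:
    gui.set_field("amount", case["amount"])
    gui.set_field("rate", case["rate"])
    gui.click("calculate")
    actual = gui.read_field("total")
    verdict = "PASS" if actual == case["expected_total"] else "FAIL"
    print(verdict, case)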
Test harnesses and drivers are used to execute software under test that may not have a user interface, or to run groups of existing automated test scripts, which can be controlled by the tester. Some commercially available tools exist, but custom-written programs also fall into this category. Simulators are used to support tests where code or other systems are either unavailable or impracticable to use (e.g. testing software designed to cope with nuclear meltdowns).
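A custom-written driver can be as simple as the sketch below: it exercises a function with no user interface and substitutes a simulator for a system that would be impracticable to use for real (the reactor example and threshold are purely illustrative).

# Minimal custom test driver with a simulator standing in for real hardware.
def shutdown_reactor(temperature_sensor):
    """Unit under test: decide whether to trigger a shutdown."""
    return temperature_sensor.read() > 1000

class SensorSimulator:
    """Simulator for hardware we cannot (and should not) actually overheat."""
    def __init__(self, reading): self.reading = reading
    def read(self): return self.reading

test_cases = [
    ("below threshold", 900,  False),
    ("above threshold", 1200, True),
]

failures = 0
for name, reading, expected in test_cases:
    actual = shutdown_reactor(SensorSimulator(reading))
    if actual != expected:
        failures += 1
        print(f"FAIL {name}: expected {expected}, got {actual}")
    else:
        print(f"PASS {name}")

print(f"{len(test_cases) - failures}/{len(test_cases)} passed")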
Performance test tools have two main facilities: load generation and test transaction measurement. Load is generated either by driving the application through its user interface or by using test drivers that simulate the load the application generates on the architecture; records of the number of transactions executed are logged. When the application is driven through its user interface, response time measurements are taken for selected transactions and these are logged. Performance testing tools normally provide reports based on test logs, and graphs of load against response times.
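The sketch below gives the flavour of both facilities on a very small scale: threads act as virtual users generating load against a stand-in transaction, response times are logged, and a summary is printed per load level (a real tool would drive the actual application and plot load against response time).

# Tiny load-generation sketch: the transaction is a placeholder sleep.
import threading
import time
import random

log = []
log_lock = threading.Lock()

def transaction():
    """Stand-in for one application transaction."""
    time.sleep(random.uniform(0.01, 0.05))

def virtual_user(n_transactions):
    for _ in range(n_transactions):
        start = time.perf_counter()
        transaction()
        elapsed = time.perf_counter() - start
        with log_lock:
            log.append(elapsed)

for load in (1, 5, 10):                 # number of concurrent virtual users
    log.clear()
    users = [threading.Thread(target=virtual_user, args=(20,)) for _ in range(load)]
    for u in users: u.start()
    for u in users: u.join()
    avg_ms = 1000 * sum(log) / len(log)
    print(f"load={load:2d} users  transactions={len(log)}  avg response={avg_ms:.1f} ms")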
Dynamic analysis tools provide run-time information on the state of executing software. These tools are most commonly used to monitor the allocation, use and de-allocation of memory, and to flag memory leaks, unassigned pointers, pointer arithmetic and other errors that are difficult to find statically.
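Python's standard tracemalloc module gives a taste of this kind of run-time monitoring: it records allocations while the program executes so that growth at a particular call site, a possible leak, can be spotted. (Tools for C/C++ additionally catch pointer errors.)

import tracemalloc

cache = []

def leaky_operation():
    # Keeps appending without ever releasing: memory use grows on every call.
    cache.append(bytearray(100_000))

tracemalloc.start()
before = tracemalloc.take_snapshot()

for _ in range(50):
    leaky_operation()

after = tracemalloc.take_snapshot()
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)   # top allocation sites and how much they grew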
Debugging tools are used mainly by programmers to reproduce bugs and investigate the state of programs. Debuggers enable programmers to execute programs line by line, to halt the program at any program statement and to set and examine program variables.
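A minimal pdb session illustrates the idea: execution halts at a chosen statement, and the programmer can step line by line and examine variables.

import pdb

def average(values):
    total = sum(values)
    pdb.set_trace()          # halt here: type 'p total' to inspect, 'n' to step, 'c' to continue
    return total / len(values)

if __name__ == "__main__":
    print(average([2, 4, 6]))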
Comparison tools are used to detect differences between actual results and expected results. Standalone comparison tools normally deal with a range of file or database formats. Test running tools usually have built-in comparators that deal with character screens, GUI objects or bitmap images. These tools often have filtering or masking capabilities, whereby they can 'ignore' rows or columns of data or areas on screens.
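A sketch of a comparator with a simple masking capability: columns that legitimately differ on every run (a timestamp here) are ignored before actual and expected rows are compared; the column names and data are invented for illustration.

# Compare actual and expected rows, ignoring masked columns.
MASKED_COLUMNS = {"timestamp"}

def mask(row):
    return {k: v for k, v in row.items() if k not in MASKED_COLUMNS}

def compare(actual_rows, expected_rows):
    mismatches = []
    for i, (a, e) in enumerate(zip(actual_rows, expected_rows)):
        if mask(a) != mask(e):
            mismatches.append((i, a, e))
    return mismatches

actual   = [{"id": 1, "total": "42.00", "timestamp": "2024-05-01T10:03:12"}]
expected = [{"id": 1, "total": "42.00", "timestamp": "1999-01-01T00:00:00"}]

print("PASS" if not compare(actual, expected) else "FAIL")   # PASS: timestamp is masked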
Test management tools may have several capabilities. Testware management is concerned with the creation, management and control of test documentation, e.g. test plans, specifications, and results. Some tools support the project management aspects of testing, for example the scheduling of tests, the logging of results and the management of incidents raised during testing. Incident management tools may also have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents. Most test management tools provide extensive reporting and analysis facilities.
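The workflow side of incident management can be sketched as a small state machine: each incident moves through allocation, correction and re-testing, and illegal transitions are rejected (the states and transitions here are illustrative, not taken from any particular tool).

# Illustrative incident workflow: allowed transitions between states.
ALLOWED = {
    "raised":    {"allocated"},
    "allocated": {"fixed"},
    "fixed":     {"retested"},
    "retested":  {"closed", "allocated"},   # a failed re-test re-opens the work
}

class Incident:
    def __init__(self, ident, summary):
        self.ident, self.summary, self.state = ident, summary, "raised"

    def move_to(self, new_state):
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"{self.ident}: cannot go from {self.state} to {new_state}")
        self.state = new_state

inc = Incident("INC-101", "total field rounds incorrectly")
for step in ("allocated", "fixed", "retested", "closed"):
    inc.move_to(step)
    print(inc.ident, "->", inc.state)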
Coverage measurement (or analysis) tools provide objective measures of structural test coverage when tests are executed. Programs to be tested are instrumented before compilation. Instrumentation code dynamically captures the coverage data in a log file without affecting the functionality of the program under test. After execution, the log file is analysed and coverage statistics generated. Most tools provide statistics on the most common coverage measures such as statement or branch coverage.
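A hand-rolled flavour of the same idea in Python: sys.settrace records which lines of a function execute during a test, much as instrumentation code logs coverage data at run time; a real tool also knows the full set of executable statements, so it can report percentage statement or branch coverage rather than just the lines hit.

import sys

executed = set()

def tracer(frame, event, arg):
    # Record every line executed inside the function under test.
    if event == "line" and frame.f_code.co_name == "grade":
        executed.add(frame.f_lineno)
    return tracer

def grade(score):
    if score >= 50:
        return "pass"
    return "fail"       # never reached by the test below: a coverage gap

sys.settrace(tracer)
grade(80)               # only exercises the 'pass' branch
sys.settrace(None)

print("lines executed in grade():", sorted(executed))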
Labels: Debugging Tools, Dynamic Analysis Tools, GUI test running Tools, Performance Test Tools, Requirement Testing Tools, Standards of Testing, Test Data preparation Tools, Test Design Tools