Monday, May 10, 2010

Documentation Hierarchy

In order to identify and create the required tests, execute them and manage the process, we need to produce a detailed set of test plans. These plans vary in strategic aim, use, level of detail, and content.



During the planning stage, the documentation set to be produced comprises:
  • Test Policy [a one-off, company-wide document].
  • Test Strategy [a high-level, one-off document].
  • Project Test Plan [one for each project].
  • Phase Test Plan [one for each phase, if required].

Test Policy Document

  • A document describing the organisation's philosophy towards software testing
  • Normally a short, high-level document that reflects the company's business, products, marketplace, customers, business risks, mission statements, etc.
Test Policy statements will reflect the nature of the business, the risks associated with the products and market place, and the business attitude regarding the required quality of products and deliverables. The test policy will dictate the overall approach to testing and the strategies employed.

Example Test Policy Statements
  • A company manufacturing circuit boards for the space shuttle might include policy statements like:
    • The build and test processes will be automated and amalgamated wherever possible
    • All individual components will be tested for accuracy of tolerance prior to insertion into a build

Test Strategy Document

  • A high-level document defining the test phases to be performed and the testing within those phases for a programme (one or more projects)
  • Details the overall approach to testing
  • Meets the policy requirements
  • Integrates the test function with other areas at the strategic level.
The test strategy details the overall testing approach and what will be done to satisfy the criteria set out in the test policy. It is a strategic document and as such must complement the organisation's other IT strategic working practices and development procedures. Testing is part of the development process and must be fully integrated with the other project teams in order to succeed. Test strategies cannot be developed in isolation; they need buy-in from the other project areas and must work in conjunction with the other teams - remember, testing cannot do it alone!

The test strategy will detail:
  • Test approach (V-model)
  • Test team structure (independent, role based)
  • Ownership and responsibilities
  • Test tools strategy
  • Reporting process
  • Fault and change management process etc.
Some examples of the type of information you might see in the strategy of a company that designs web sites:

"In order to meet the company's test policy on quality deliverables within tight timescales the company has adopted the extreme programming lightweight development methodology."

The independent test function will provide the XP coach to the development teams for each project.

All project baseline documentation and code will be subject to appropriate review and sign off.

The independent test function will specify, create and execute the acceptance test cases in conjunction with the customer.
Automated test tools will be considered at the start of [and throughout] each project and will be used wherever advantage can be identified.


Friday, March 5, 2010

Tool Selection and Implementation

There are many test activities which can be automated and test execution tools are not necessarily the first or only choice. Identify your test activities where tool support could be of benefit and prioritise the areas of most importance.

The fit with your test process may be more important than choosing the tool with the most features when deciding whether you need a tool, and which one to choose. The benefits of tools usually depend on a systematic and disciplined test process. If testing is chaotic, the tools may not be useful and may even hinder testing. You must either have a good process now, or recognise that your process must improve in parallel with tool implementation. The ease with which CAST tools can be implemented might be called ‘CAST readiness’.

Tools may have interesting features, but may not necessarily be available on your platforms, e.g. ‘works on 15 flavours of Unix, but not yours…’. Some tools, e.g. performance testing tools, require their own hardware, so the cost of procuring this hardware should be a consideration in your cost-benefit analysis. If you already have tools, you may need to consider the level and usefulness of integration with other tools, e.g. you may want a test execution tool to integrate with your existing test management tool (or vice versa). Some vendors offer integrated toolkits, e.g. test execution, test management and performance-testing bundles. The integration between some tools may bring major benefits; in other cases the level of integration is cosmetic only.

Once automation requirements are agreed, the selection process has four stages:
1. Create a candidate tool shortlist.
2. Arrange demonstrations.
3. Evaluate the selected tool(s).
4. Review and select the tool.
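
A simple way to compare the shortlisted tools during stages 3 and 4 is a weighted scoring matrix. The sketch below is only an illustration: the criteria, weights and tool names are invented and not part of any standard selection method.

    # Hedged sketch: weighted scoring of candidate tools (criteria and weights are invented).
    criteria = {"fit with test process": 5, "platform support": 4, "integration": 3, "cost": 3}

    candidates = {
        "Tool A": {"fit with test process": 4, "platform support": 2, "integration": 3, "cost": 4},
        "Tool B": {"fit with test process": 3, "platform support": 5, "integration": 4, "cost": 2},
    }

    def weighted_score(scores, weights):
        # Multiply each raw score (1-5) by its weight and sum the results.
        return sum(scores[c] * w for c, w in weights.items())

    for name, scores in candidates.items():
        print(name, weighted_score(scores, criteria))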

Before making a commitment to implementing the tool across all projects, a pilot project is usually undertaken to ensure the benefits of using the tool can actually be achieved. The objectives of the pilot are to gain some experience in use of the tools, identify changes in the test process required and assess the actual costs and benefits of implementation. Roll out of the tool should be based on a successful result from the evaluation of the pilot. Roll-out normally requires strong commitment from tool users and new projects, as there is an initial overhead in using any tool in new projects.


Monday, February 1, 2010

Standards for Testing And Tools for Testing

Explain that QA standards simply specify that testing should be performed, while industry-specific standards specify what level of testing to perform, and testing standards specify how to perform testing. Ideally testing standards should be referenced from the other two.

Tool Support for Testing (CAST):

Requirements testing tools provide automated support for the verification and validation of requirements models, such as consistency checking and animation.

Static analysis tools provide information about the quality of the software by examining the code, rather than by running test cases through the code. Static analysis tools usually give objective measurements of various characteristics of the software, such as the cyclomatic complexity measure and other quality metrics.

Test design tools generate test cases from a specification that is normally held in a CASE tool repository, or from formally specified requirements held in the tool itself. Some tools generate test cases from an analysis of the code.
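
As a rough illustration of deriving test cases from a specification rather than from code, the sketch below generates boundary-value cases for a single numeric field. The field definition (an age accepting 18 to 65) is an assumed example, not the output of any particular CASE tool.

    # Hedged sketch: derive boundary-value test cases from a simple field specification.
    def boundary_values(minimum, maximum):
        # Valid boundaries plus the first invalid value on each side.
        return sorted({minimum - 1, minimum, minimum + 1, maximum - 1, maximum, maximum + 1})

    # Assumed specification: an 'age' field accepting 18..65 inclusive.
    for value in boundary_values(18, 65):
        expected = "accept" if 18 <= value <= 65 else "reject"
        print(f"age={value} -> expected: {expected}")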

Test data preparation tools enable data to be selected from existing databases or created, generated, manipulated and edited for use in tests. The most sophisticated tools can deal with a range of file and database formats.
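
The kind of selection and manipulation such tools perform can be sketched with an in-memory SQLite database; the table and the anonymisation rule below are assumptions for illustration only.

    import sqlite3

    # Hedged sketch: select existing rows and anonymise them for use as test data.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, balance REAL)")
    conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                     [(1, "Alice", 120.0), (2, "Bob", -30.0), (3, "Carol", 0.0)])

    # Select a subset of production-like data and mask the personal field.
    rows = conn.execute("SELECT id, name, balance FROM customers WHERE balance >= 0").fetchall()
    test_data = [(row_id, f"customer_{row_id}", balance) for row_id, name, balance in rows]
    print(test_data)   # [(1, 'customer_1', 120.0), (3, 'customer_3', 0.0)]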

Character-based test running tools provide test capture and replay facilities for dumb-terminal based applications. The tools simulate user-entered terminal keystrokes and capture screen responses for later comparison. Test procedures are normally captured in a programmable script language; data, test cases and expected results may be held in separate test repositories. These tools are most often used to automate regression testing.

GUI test running tools provide test capture and replay facilities for WIMP interface based applications. The tools simulate mouse movement, button clicks and keyboard inputs, and can recognise GUI objects such as windows, fields, buttons and other controls. Object states and bitmap images can be captured for later comparison. Test procedures are normally captured in a programmable script language; data, test cases and expected results may be held in separate test repositories. These tools are most often used to automate regression testing.
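
The separation of a programmable script from its test data and expected results can be sketched as below. The FakeGuiDriver and its methods are hypothetical stand-ins for a real capture/replay tool's API, invented purely to keep the example runnable.

    # Hedged sketch: a data-driven regression script; the driver is a hypothetical stand-in.
    class FakeGuiDriver:
        def __init__(self):
            self.log = []
        def type_into(self, field, value):
            self.log.append(("type", field, value))
        def click(self, button):
            self.log.append(("click", button))
        def read(self, field):
            return "Welcome"          # canned value standing in for a captured screen response

    # Test data and expected results are held separately from the procedure.
    test_cases = [
        {"user": "alice", "password": "secret1", "expected": "Welcome"},
        {"user": "bob",   "password": "wrong",   "expected": "Invalid login"},  # will report FAIL here
    ]

    def login_procedure(driver, case):
        driver.type_into("username", case["user"])
        driver.type_into("password", case["password"])
        driver.click("login")
        return driver.read("message") == case["expected"]

    for case in test_cases:
        print(case["user"], "PASS" if login_procedure(FakeGuiDriver(), case) else "FAIL")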

Test harnesses and drivers are used to execute software under test which may not have a user interface or to run groups of existing automated test scripts which can be controlled by the tester. Some commercially available tools exist, but custom-written programs also fall into this category. Simulators are used to support tests where code or other systems are either unavailable or impracticable to use (e.g. testing software to cope with nuclear meltdowns).
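
A minimal driver for a component with no user interface can be written with the standard unittest module; the calculate_interest function is an invented example of software under test.

    import unittest

    # Invented software under test: a component with no user interface.
    def calculate_interest(balance, annual_rate):
        if balance < 0:
            raise ValueError("balance must be non-negative")
        return round(balance * annual_rate, 2)

    class InterestHarness(unittest.TestCase):
        # The harness drives the component directly, without any UI.
        def test_typical_balance(self):
            self.assertEqual(calculate_interest(1000.0, 0.05), 50.0)

        def test_negative_balance_rejected(self):
            with self.assertRaises(ValueError):
                calculate_interest(-1.0, 0.05)

    if __name__ == "__main__":
        unittest.main()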

Performance test tools have two main facilities: load generation and test transaction measurement. Load generation is done either by driving the application through its user interface or by test drivers that simulate the load the application generates on the architecture. Records of the numbers of transactions executed are logged. When the application is driven through its user interface, response time measurements are taken for selected transactions and logged. Performance testing tools normally provide reports based on test logs, and graphs of load against response times.
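
A very simplified load generator and response-time logger might look like the sketch below. The transaction function and the load figures are assumptions; real performance tools do far more (coordinated virtual users, ramp-up profiles, detailed reporting).

    import time
    import statistics
    from concurrent.futures import ThreadPoolExecutor

    # Invented transaction standing in for a call to the system under test.
    def transaction():
        start = time.perf_counter()
        time.sleep(0.01)                      # placeholder for the real request
        return time.perf_counter() - start    # response time in seconds

    # Generate a small load: 5 concurrent users, 100 transactions in total.
    with ThreadPoolExecutor(max_workers=5) as pool:
        response_times = list(pool.map(lambda _: transaction(), range(100)))

    print(f"transactions: {len(response_times)}")
    print(f"mean response: {statistics.mean(response_times):.4f}s")
    print(f"95th percentile: {sorted(response_times)[int(0.95 * len(response_times))]:.4f}s")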

Dynamic analysis tools provide run-time information on the state of executing software. These tools are most commonly used to monitor the allocation, use and de-allocation of memory, and to flag memory leaks, unassigned pointers, pointer arithmetic and other errors that are difficult to find statically.

Debugging tools are used mainly by programmers to reproduce bugs and investigate the state of programs. Debuggers enable programmers to execute programs line by line, to halt the program at any program statement and to set and examine program variables.

Comparison tools are used to detect differences between actual results and expected results. Standalone comparison tools normally deal with a range of file or database formats. Test running tools usually have built-in comparators that deal with character screens, GUI objects or bitmap images. These tools often have filtering or masking capabilities, whereby they can 'ignore' rows or columns of data or areas on screens.
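
The filtering idea can be illustrated with a few lines of Python; the record layout and the masked column below are assumptions for the example.

    # Hedged sketch: compare actual and expected records while masking volatile columns.
    def compare(expected_rows, actual_rows, masked_columns):
        mismatches = []
        for index, (expected, actual) in enumerate(zip(expected_rows, actual_rows)):
            for column in expected:
                if column in masked_columns:
                    continue                      # 'ignore' volatile data such as timestamps
                if expected[column] != actual.get(column):
                    mismatches.append((index, column, expected[column], actual.get(column)))
        return mismatches

    expected = [{"id": 1, "total": 100, "timestamp": "09:00"}]
    actual   = [{"id": 1, "total": 101, "timestamp": "09:05"}]
    print(compare(expected, actual, masked_columns={"timestamp"}))
    # [(0, 'total', 100, 101)] - the timestamp difference is filtered out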

Test management tools may have several capabilities. Testware management is concerned with the creation, management and control of test documentation, e.g. test plans, specifications and results. Some tools support the project management aspects of testing, for example the scheduling of tests, the logging of results and the management of incidents raised during testing. Incident management tools may also have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents. Most test management tools provide extensive reporting and analysis facilities.

Coverage measurement (or analysis) tools provide objective measures of structural test coverage when tests are executed. Programs to be tested are instrumented before compilation. Instrumentation code dynamically captures the coverage data in a log file without affecting the functionality of the program under test. After execution, the log file is analysed and coverage statistics generated. Most tools provide statistics on the most common coverage measures such as statement or branch coverage.
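
The instrumentation idea can be sketched with Python's built-in tracing hook; this measures only statement (line) coverage of one invented function and is nowhere near a real coverage tool.

    import dis
    import sys

    # Invented code under test.
    def classify(n):
        if n < 0:
            return "negative"
        return "non-negative"

    executed = set()

    def trace(frame, event, arg):
        # Record each line of classify() as it executes.
        if event == "line" and frame.f_code is classify.__code__:
            executed.add(frame.f_lineno)
        return trace

    # All traceable statement lines in classify(), taken from its bytecode.
    all_lines = {line for _, line in dis.findlinestarts(classify.__code__) if line is not None}

    sys.settrace(trace)
    classify(5)                    # exercises only the 'non-negative' branch
    sys.settrace(None)

    print(f"statement coverage: {len(executed & all_lines)} of {len(all_lines)} lines")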


Tuesday, January 26, 2010

Test Management

Organisation - organisational structures for testing; team composition
Explain that organisations may have different testing structures: testing may be the developer’s responsibility, or may be the team’s responsibility (buddy testing), or one person on the team is the tester, or there is a dedicated test team (who do no development), or there are internal test consultants providing advice to projects, or a separate organisation does the testing.

A multi-disciplinary team with specialist skills is usually needed. Most of the following roles are required: test analysts to prepare strategies and plans, test automation experts, database administrator or designer, user interface experts, test environment management, etc.

Configuration Management
Describe typical symptoms of poor CM such as: unable to match source and object code, unable to identify which version of a compiler generated the object code, unable to identify the source code changes made in a particular version of the software, simultaneous changes are made to the same source code by multiple developers (and changes lost), etc.

Configuration identification requires that all configuration items (CI) and their versions in the test system are known. Configuration control is maintenance of the CIs in a library and maintenance of records on how CIs change over time.

Status accounting is the function of recording and tracking problem reports, change requests, etc.
Explain that configuration auditing is the function to check on the contents of libraries, etc. for standards compliance, for instance.

CM can be very complicated in environments where mixed hardware and software platforms are being used, but sophisticated cross-platform CM tools are increasingly available.

Test Estimation, Monitoring and Control

Test estimation - explain that the effort required to perform activities specified in the high-level test plan must be calculated in advance and that rework must be planned for.

Test monitoring – describe useful measures for tracking progress (e.g. number of tests run, tests passed/failed, incidents raised and fixed, retests, etc.). Explain that the test manager may have to report on deviations from the project/test plans such as running out of time before completion criteria achieved.
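
A few of these progress measures can be computed trivially from the test log; the figures below are invented for illustration only.

    # Hedged sketch: simple test-progress measures from invented figures.
    tests_planned = 200
    tests_run = 140
    tests_passed = 120
    incidents_raised = 35
    incidents_fixed = 28

    print(f"execution progress: {tests_run / tests_planned:.0%}")      # 70%
    print(f"pass rate of tests run: {tests_passed / tests_run:.0%}")   # 86%
    print(f"open incidents: {incidents_raised - incidents_fixed}")     # 7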

Test control – explain that the re-allocation of resources may be necessary, such as changes to the test schedule, test environments, number of testers, etc.

Incident Management

An incident is any significant, unplanned event that occurs during testing that requires subsequent investigation and/or correction. Incidents are raised when expected and actual test results differ.

Incidents may be raised against documentation as well as code or a system under test.

Incidents may be analysed to monitor the test process and to aid in test process improvement.

Incidents should be logged when someone other than the author of the product under test performs the testing. Typically the information logged for an incident will include the expected and actual results, the test environment, the software-under-test ID, the name of the tester(s), severity, scope, priority and any other information deemed relevant to reproducing and fixing the potential fault.
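
A minimal incident record capturing the fields listed above might be sketched as follows; the field names, severity scale and status flow are assumptions, not a prescribed format.

    from dataclasses import dataclass, field
    from datetime import date

    # Hedged sketch: an incident record holding the information typically logged.
    @dataclass
    class Incident:
        incident_id: str
        software_under_test_id: str
        tester: str
        expected_result: str
        actual_result: str
        test_environment: str
        severity: str = "medium"        # assumed scale: low / medium / high
        priority: str = "normal"
        status: str = "open"            # open -> assigned -> fixed -> retested -> closed
        raised_on: date = field(default_factory=date.today)

    incident = Incident("INC-001", "billing-v2.3", "A. Tester",
                        expected_result="total = 100.00", actual_result="total = 99.99",
                        test_environment="UAT server 1")
    print(incident.status)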

Incidents should be tracked from inception through various stages to eventual close-out and resolution.


Monday, January 18, 2010

Static Testing and Static Analysis

Why, when and what to review?

Any document can be reviewed. For instance, requirement specifications, design specifications, code, test plans, user guides, etc. Ideally review as soon as possible.

Costs – ongoing review costs are typically around 15% of the development budget. The cost of reviews includes activities such as the review process itself, metrics analysis and process improvement.

Benefits – include areas such as development productivity improvements, reduced development time-scales, testing cost and time reductions, lifetime cost reductions, reduced fault levels, etc.

Types of Reviews

Walkthroughs – scenarios, dry runs, peer group, led by author.

Inspections – led by trained moderator (not author), defined roles, includes metrics, formal process based on rules and checklists with entry and exit criteria.

Informal reviews – undocumented, but useful, cheap, widely-used.

Technical reviews (also known as peer reviews) – documented, defined fault-detection process, includes peers and technical experts, no management participation.

Goals – validation and verification against specifications and standards (and process improvement). Achieve consensus.

Activities – planning, overview meeting, preparation, review meeting, and follow-up (or similar).

Roles and responsibilities – moderators, authors, reviewers/inspectors and managers (planning activities).

Deliverables – product changes, source document changes, and improvements (both review and development).

Pitfalls – lack of training, lack of documentation, lack of management support (and failure to improve process).

Static Analysis

- compiler-generated information; dataflow analysis; control-flow graphing; complexity analysis

Explain that static analysis involves no dynamic execution and can detect possible faults such as unreachable code, undeclared variables, parameter type mismatches, uncalled functions and procedures, possible array bound violations, etc.
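
A short fragment containing two of the faults listed above is shown below; a static analysis tool (or linter) could flag both without executing the code. The fragment itself is invented for illustration.

    # Invented fragment: faults a static analysis tool could report without running the code.
    def lookup(values, index):
        unused_limit = 10              # variable defined but never used
        return values[index]
        print("done")                  # unreachable code: appears after the return statement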

Explain that any faults found by compilers are found by static analysis. Compilers find faults in the syntax. Many compilers also provide information on variable use, which is useful during maintenance.

Explain that data flow analysis considers the use of data on paths through the code, looking for possible anomalies, such as ‘definitions’ with no intervening ‘use’, and ‘use’ of a variable after it is ‘killed’.
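
Both anomalies can be seen in the invented fragment below: total is defined and then redefined with no intervening use, and rate is used after it has been 'killed' (deleted).

    # Invented fragment illustrating data flow anomalies.
    def price(quantity):
        total = 0              # definition...
        total = quantity * 5   # ...redefined with no intervening use (define-define anomaly)
        rate = 1.2
        del rate               # 'rate' is killed here
        return total * rate    # use after kill - raises UnboundLocalError if this line is reached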

Explain the use of, and provide an example of the production of, a control flow graph for a program.

Introduce complexity metrics, including cyclomatic complexity.
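
Cyclomatic complexity can be computed directly from a control flow graph using V(G) = E - N + 2P (edges, nodes, connected components). The small graph below, representing a single if/else, is an invented example.

    # Hedged sketch: cyclomatic complexity V(G) = E - N + 2P for an invented control flow graph.
    # Graph for:  if condition: A  else: B  followed by a common exit node.
    nodes = ["entry", "A", "B", "exit"]
    edges = [("entry", "A"), ("entry", "B"), ("A", "exit"), ("B", "exit")]
    connected_components = 1            # a single program

    v_of_g = len(edges) - len(nodes) + 2 * connected_components
    print(v_of_g)                       # 2: one decision gives two independent paths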


Monday, January 11, 2010

Types of Testing - Part II

Functional System Testing - functional requirements; requirements-based testing; business process-based testing

A functional requirement, as per the IEEE definition, is “A requirement that specifies a function that a system or system component must perform”.

Requirements-based testing – where the user requirements specification and the system requirements specification (as used for contracts) may be used to derive test cases.

Business process-based testing – based on expected user profiles (e.g. scenarios, use cases, etc.).

Non-Functional System Testing - non-functional requirements; non-functional test types: load, performance and stress; security; usability; storage; volume; installability; documentation; recovery

Explain that non-functional requirements are as important as functional requirements.

Integration Testing in the Large - testing the integration of systems and packages; testing interfaces to external organisations (e.g. Electronic Data Interchange, Internet)

Integration with other (complete) systems.

Identification of, and risk associated with, interfaces to these other systems.

Incremental/non-incremental approaches to integration.

Integration Testing in the Small - assembling components into sub-systems; sub-systems to systems; stubs and drivers; big-bang, top-down, bottom-up, other strategies

Integration testing tests interfaces and interaction of modules/subsystems.

Role of stubs and drivers.
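
Stubs and drivers can be sketched with the standard library's unittest.mock; the payment service dependency and the order function below are invented for the example.

    from unittest.mock import Mock

    # Invented module under test: it depends on a payment service that is not yet available.
    def place_order(amount, payment_service):
        if not payment_service.charge(amount):   # the real service may not exist yet
            return "payment declined"
        return "order placed"

    # Stub: stands in for the missing lower-level component and returns canned answers.
    stub_payment = Mock()
    stub_payment.charge.return_value = True

    # Driver: top-level code that exercises the module under test during integration testing.
    print(place_order(25.0, stub_payment))            # order placed
    stub_payment.charge.assert_called_once_with(25.0)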

Incremental strategies, to include: top-down, bottom-up and functional incrementation. Non-incremental approach (“big bang”).

Maintenance Testing - problems of maintenance; testing changes; risks of changes and regression testing

Testing old code – with poor/missing specifications.

Scope of testing with respect to changed code.

Impact analysis is difficult – so higher risk when making changes – and difficult to decide how much regression testing to do.
