Monday, December 28, 2009

Economics of Testing

The cost of faults escalates as the product moves towards field use. The later a fault is detected, the more dramatically the cost of rework rises, because more than one previous stage of design, coding and testing may have to be repeated. If the fault surfaces during field use, the potential cost might be catastrophic. If faults in documentation go undetected, then development based on that documentation can generate many related faults, multiplying the effect of the original one.

Early test design can prevent fault multiplication. Analysis of specifications during test preparation often brings faults in specifications to light.

The cost of testing is generally lower than the cost associated with major faults (such as poor quality product and/or fixing faults), although few organisations have figures to confirm this.

What to consider, based on the IEEE 829-1998 Test Plan Outline:
  • Test plan identifier
  • Introduction
  • Test items
  • Features to be tested
  • Features not to be tested
  • Approach
  • Item pass/fail criteria
  • Suspension criteria and resumption requirements
  • Test deliverables
  • Testing tasks
  • Environmental needs
  • Responsibilities
  • Staffing and training needs
  • Schedule
  • Risks and contingencies
  • Approvals
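The outline above can be made concrete as a simple data structure. This is a minimal sketch only: the field names mirror the IEEE 829-1998 sections listed, but the `TestPlan` class and all example values are hypothetical, not part of the standard.

```python
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    """Hypothetical structure mirroring the IEEE 829-1998 outline above."""
    identifier: str
    introduction: str = ""
    test_items: list = field(default_factory=list)
    features_to_be_tested: list = field(default_factory=list)
    features_not_to_be_tested: list = field(default_factory=list)
    approach: str = ""
    pass_fail_criteria: str = ""
    suspension_and_resumption: str = ""
    deliverables: list = field(default_factory=list)
    testing_tasks: list = field(default_factory=list)
    environmental_needs: str = ""
    responsibilities: dict = field(default_factory=dict)
    staffing_and_training: str = ""
    schedule: str = ""
    risks_and_contingencies: list = field(default_factory=list)
    approvals: list = field(default_factory=list)

# Example values are invented for illustration.
plan = TestPlan(
    identifier="TP-2009-001",
    test_items=["billing-module v1.2"],
    features_to_be_tested=["invoice generation"],
    features_not_to_be_tested=["legacy report export"],
)
print(plan.identifier)
```

Writing the plan down in a structured form like this makes it easy to check that no section of the outline has been skipped.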


Tuesday, December 22, 2009

Whenever a fault is detected and fixed then the software should be re-tested to ensure that the original fault has been successfully removed. You should also consider testing for similar and related faults.

Tests should be repeatable, to allow re-testing / regression testing.
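The two points above can be sketched in code: the test that exposed a fault is kept as a permanent, repeatable test, alongside tests for similar inputs. The `parse_amount` function and its (now fixed) fault are hypothetical.

```python
def parse_amount(text):
    # Fixed version: a hypothetical earlier version failed on
    # leading/trailing whitespace around the value.
    return float(text.strip().lstrip("$"))

def test_original_fault_removed():
    # This exact input once triggered the fault; re-run it after every change.
    assert parse_amount(" $19.99 ") == 19.99

def test_related_inputs():
    # Also test for similar and related faults, as recommended above.
    assert parse_amount("$0.50") == 0.5
    assert parse_amount("3") == 3.0

test_original_fault_removed()
test_related_inputs()
print("re-tests passed")
```

Because the tests take no manual input and assert their own outcome, they are repeatable and can join the regression suite unchanged.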

Regression testing attempts to verify that modifications have not caused unintended adverse side effects in the unchanged software (regression faults) and that the modified system still meets its requirements. It is performed whenever the software, or its environment, is changed.

Regression test suites are run many times and generally evolve slowly, so regression testing is ideal for automation. If automation is not possible, or the regression test suite is very large, it may be necessary to prune the suite: drop repetitive tests, reduce the number of tests on fixed faults, combine test cases, designate some tests for periodic runs only, and so on. A subset of the regression test suite may also be run as a quick check that a change has not broken core behaviour.
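One possible pruning strategy is sketched below: keep every high-priority test, keep any test that adds new coverage, and move the rest to a periodic schedule. The test records, priorities and coverage tags are all hypothetical.

```python
# Hypothetical regression suite: each record names a test, its priority,
# and the areas it covers.
suite = [
    {"name": "login_ok",      "priority": "high", "covers": {"auth"}},
    {"name": "login_ok_copy", "priority": "low",  "covers": {"auth"}},     # repetitive
    {"name": "report_totals", "priority": "high", "covers": {"reports"}},
    {"name": "old_fix_123",   "priority": "low",  "covers": {"billing"}},
]

def prune(suite):
    """Keep high-priority or coverage-adding tests; defer the rest."""
    kept, periodic, seen_coverage = [], [], set()
    for test in suite:
        adds_coverage = test["covers"] - seen_coverage
        if test["priority"] == "high" or adds_coverage:
            kept.append(test["name"])
            seen_coverage |= test["covers"]
        else:
            periodic.append(test["name"])  # run occasionally, not every cycle
    return kept, periodic

kept, periodic = prune(suite)
print(kept)      # tests run every cycle
print(periodic)  # tests designated for periodic runs
```

Here the duplicate login test is deferred because it adds no coverage, while the low-priority billing test survives because it is the only test covering that area.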

Monday, December 21, 2009

The Psychology of Testing

Testing is performed with the primary intent of finding faults in the software, rather than of proving correctness. Testing can therefore be perceived as a destructive process. The mindset required of a tester is different from that of a developer.

There are right and wrong ways of presenting faults to authors or management: for example, reporting a fault objectively against the specification rather than as criticism of the author. It is important that developer and tester communicate about, for example, changes to the application or menu structures that might affect the tests, areas where the developer suspects the code might be buggy, or cases where reported bugs are difficult to reproduce.

Objective, independent testing is generally believed to be more effective. If the author tests their own work, the assumptions made during development are carried into testing; people see what they want to see; there may be an emotional attachment to the code; and there may be a vested interest in not finding faults.
There are levels of independence, such as:
  • test cases are designed by the person(s) who writes the software under test;
  • test cases are designed by another person(s);
  • test cases are designed by a person(s) from a different section;
  • test cases are designed by a person(s) from a different organisation;
  • test cases are not chosen by a person.


Monday, December 14, 2009

Fundamental Test Process

The fundamental test process comprises planning, specification, execution, recording and checking for completion. In more detail:
  • Test planning: The test plan should specify how the test strategy and project test plan apply to the software under test. This should include identification of all exceptions to the test strategy and of all software with which the software under test will interact during test execution, such as drivers and stubs.
  • Test specification: Test cases should be designed using the test case design techniques selected in the test planning activity.
  • Test execution: Each test case should be executed.
  • Test recording: The test records for each test case should unambiguously record the identities and versions of the software under test and the test specification. The actual outcome should be recorded. It should be possible to establish that all of the specified testing activities have been carried out by reference to the test records.

    The actual outcome should be compared against the expected outcome. Any discrepancy found should be logged and analysed in order to establish where its cause lies and the earliest test activity that should be repeated, e.g. in order to remove the fault in the test specification or to verify the removal of the fault in the software.

    The test coverage levels achieved for those measures specified as test completion criteria should be recorded.
  • Checking for test completion:
    The test records should be checked against the previously specified test completion criteria. If these criteria are not met, the earliest test activity that must be repeated in order to meet the criteria should be identified and the test process should be restarted from that point.

    It may be necessary to repeat the Test Specification activity to design further test cases to meet a test coverage target.
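The test recording step above can be sketched as a small record: the identities and versions of the software under test and the test specification, the expected and actual outcomes, and the comparison between them. All names and values here are hypothetical.

```python
def record_test(case_id, sut_version, spec_version, expected, actual):
    """Build a test record as described in the Test recording step."""
    return {
        "case_id": case_id,
        "sut_version": sut_version,      # identity/version of software under test
        "spec_version": spec_version,    # identity/version of test specification
        "expected": expected,
        "actual": actual,
        "verdict": "pass" if actual == expected else "fail",
    }

rec = record_test("TC-042", "billing v1.2.3", "spec v7", expected=20, actual=19)
print(rec["verdict"])  # a discrepancy: to be logged and analysed
```

Because each record carries the versions it was produced against, it is possible to establish later exactly which build and which specification a discrepancy belongs to.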
As the objective of a test is to detect faults, a 'successful' test is one that does detect a fault. This is counter-intuitive, because faults delay progress, so a successful test is one that may cause delay. In the long run, however, it is a good thing: the fault it reveals might have been many times more costly to correct if found later.

Completion or exit criteria are used to determine when testing (at any test stage) is complete. These criteria may be defined in terms of cost, time, faults found or coverage criteria. Coverage criteria are defined in terms of items that are exercised by test suites, such as branches, user requirements, most frequently used transactions, etc.
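Checking achieved results against completion criteria can be sketched as follows. The criteria names and target figures are hypothetical; the point is only the comparison described above.

```python
# Hypothetical completion criteria for a test stage, and the levels achieved.
criteria = {"branch_coverage": 0.85, "requirements_covered": 1.00}
achieved = {"branch_coverage": 0.80, "requirements_covered": 1.00}

# Any criterion not yet met means the relevant test activity must be repeated.
unmet = [name for name, target in criteria.items() if achieved[name] < target]
if unmet:
    print("repeat earliest relevant test activity for:", unmet)
else:
    print("test stage complete")
```

In this example the requirements-coverage criterion is met but the branch-coverage criterion is not, so testing is not complete and further test cases would be designed to close the gap.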


Tuesday, December 8, 2009

Why is Software Testing Necessary?

An error is a human action that produces an incorrect result. A fault is a manifestation of an error in software (also known as a defect or bug). A fault, if encountered, may cause a failure, which is a deviation of the software from its expected delivery or service. Reliability is the probability that software will not cause the failure of a system for a specified time under specified conditions. Errors occur because we are not perfect and, even if we were, we are working under constraints such as delivery deadlines.

Testing identifies faults, whose removal increases the software quality by increasing the software’s potential reliability. Testing is the measurement of software quality. We measure how closely we have achieved quality by testing the relevant factors such as correctness, reliability, usability, maintainability, reusability, testability, etc.

Other factors that may determine the testing performed include contractual or legal requirements, normally defined in industry-specific standards, or agreed best practice (or, more realistically, non-negligent practice).

It is very difficult to determine how much testing is enough, because a single failure can cost nothing or a great deal. Software in safety-critical systems can cause death or injury if it fails, so the cost of a failure in such a system may be measured in human lives.

The amount of testing performed depends on the risks involved. Risk must be used as the basis for allocating the test time that is available and for selecting what to test and where to place emphasis.
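Risk-based allocation of test time can be sketched numerically: rank each area by exposure (likelihood of failure times impact) and divide the available time in proportion. The areas and figures below are invented purely for illustration.

```python
# Hypothetical product areas with estimated likelihood of failure (0-1)
# and impact of failure (1-10).
areas = {
    "payment processing": {"likelihood": 0.6, "impact": 9},
    "report layout":      {"likelihood": 0.3, "impact": 2},
    "user login":         {"likelihood": 0.4, "impact": 7},
}

total_hours = 100  # the test time that is available

# Exposure = likelihood x impact; allocate hours in proportion to exposure.
exposure = {name: a["likelihood"] * a["impact"] for name, a in areas.items()}
total = sum(exposure.values())
allocation = {name: round(total_hours * e / total) for name, e in exposure.items()}
print(allocation)
```

With these figures, payment processing receives the bulk of the effort and report layout very little, which is the intended behaviour: emphasis goes where a failure would hurt most and is most likely.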