An acceptance test is a formal description of the behavior of a software product, generally expressed as an example or a usage scenario. A number of different notations and approaches have been proposed for such examples or scenarios. In many cases the aim is that it should be possible to automate the execution of such tests by a software tool, either one built by the development team itself or an off-the-shelf tool.
Similar to a unit test, an acceptance test generally has a binary result, pass or fail. A failure suggests, though does not prove, the presence of a defect in the product.
Teams that are mature in their Agile practice use acceptance tests as the main form of functional specification and the only formal expression of business requirements. Other teams use acceptance tests as a complement to specification documents containing use cases or more narrative text.
The terms “functional test”, “acceptance test”, and “customer test” are used more or less interchangeably. A more specific term, “story test”, referring to user stories, is also used, as in the phrase “story test-driven development”.
Acceptance testing has the following benefits, complementing those which can be obtained from unit tests:
- encouraging closer collaboration between developers on the one hand and customers, users, or domain experts on the other, since writing the tests requires that business requirements be expressed precisely
- providing a clear and unambiguous “contract” between customers and developers; a product that passes acceptance tests will be considered adequate (though customers and developers might refine existing tests or suggest new ones as necessary)
- decreasing the chance and severity both of new defects and regressions (defects impairing functionality previously reviewed and declared acceptable)
Expressing acceptance tests in an overly technical manner
Customers and domain experts, the primary audience for acceptance tests, find tests that contain implementation details difficult to review and understand. To keep acceptance tests from becoming overly concerned with technical implementation, involve customers and/or domain experts in their creation and discussion. See Behavior Driven Development for more information.
Acceptance tests that are unduly focused on technical implementation also run the risk of failing due to minor or cosmetic changes which in reality do not have any impact on the product’s behavior. For example, if an acceptance test references the label for a text field and that label changes, the acceptance test fails even though the actual functioning of the product is not impacted.
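This fragility can be sketched in a few lines of code. The following is a hypothetical illustration, not part of any real testing framework: `SignupForm`, `find_by_label`, and `find_by_id` are invented names standing in for a UI and its test hooks. A test that locates a field by its visible label breaks when the label is reworded, while a test that uses a stable identifier does not:

```python
# Hypothetical sketch of the "fragile test" problem: the form, its fields,
# and the lookup helpers below are invented for illustration only.

class SignupForm:
    """A toy form whose fields have a stable id and a cosmetic display label."""

    def __init__(self):
        self.fields = {"email": {"label": "E-mail address", "value": ""}}

    def find_by_label(self, label):
        # Locate a field by its visible label text (cosmetic, may change).
        for field in self.fields.values():
            if field["label"] == label:
                return field
        raise LookupError(f"no field labelled {label!r}")

    def find_by_id(self, field_id):
        # Locate a field by its stable identifier.
        return self.fields[field_id]


def fragile_test(form):
    # Coupled to the label: breaks as soon as a designer rewords it,
    # even though the product's behavior is unchanged.
    form.find_by_label("E-mail address")["value"] = "ann@example.com"
    return form.fields["email"]["value"] == "ann@example.com"


def robust_test(form):
    # Coupled to the stable id: survives cosmetic relabeling.
    form.find_by_id("email")["value"] = "ann@example.com"
    return form.fields["email"]["value"] == "ann@example.com"


print(fragile_test(SignupForm()))           # True while the label matches
relabeled = SignupForm()
relabeled.fields["email"]["label"] = "Email"  # purely cosmetic change
print(robust_test(relabeled))               # True: still passes
try:
    fragile_test(relabeled)                 # now fails on the new label
except LookupError as exc:
    print("fragile test broke:", exc)
```

The same distinction applies to real UI-driving tools: targeting stable identifiers rather than display text keeps acceptance tests from failing on cosmetic changes.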
Unlike automated unit tests, automated acceptance tests are not universally viewed as a net benefit, and some controversy arose after experts such as Jim Shore and Brian Marick questioned whether the practice’s benefits outweighed the following costs:
- many teams report that the creation of automated acceptance tests requires significant effort
- teams often find the maintenance of automated acceptance tests burdensome, in part because of the “fragile test” problem
- the first generation of tools in the Fit/FitNesse tradition resulted in acceptance tests that customers or domain experts could not understand.
The BDD approach may hold promise for resolving this controversy.
- 1996: Automated tests identified as a practice of Extreme Programming, without much emphasis on the distinction between unit and acceptance testing, and with no particular notation or tool recommended
- 2002: Ward Cunningham, one of the inventors of Extreme Programming, publishes Fit, a tool for acceptance testing based on a tabular, Excel-like notation
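As an illustration of that tabular notation, a Fit column fixture reads roughly like the classic division example shipped with Fit: each row lists input values and an expected computed output, and the tool colors each expected cell green or red after running it against the fixture code. (The exact wiki-style rendering below follows FitNesse conventions and is shown for flavor, not as authoritative syntax.)

```
| eg.Division              |
| numerator | denominator | quotient() |
| 10        | 2           | 5          |
| 12.6      | 3           | 4.2        |
```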
- 2003: Bob Martin combines Fit with Wikis (another invention of Cunningham’s), creating FitNesse
- 2003-2006: the Fit/FitNesse combination eclipses most other tools and becomes the mainstream model for Agile acceptance testing
For a comprehensive survey, see Automated Acceptance Testing: A Literature Review and an Industrial Case Study