Generating effective tests for concurrent programs via AI automated planning techniques



Bibliographic Details
Published in: International Journal on Software Tools for Technology Transfer, 2014-02, Vol. 16 (1), pp. 49-65
Authors: Razavi, Niloofar; Farzan, Azadeh; McIlraith, Sheila A.
Format: Article
Language: English
Description
Abstract: Testing concurrent programs is a challenging problem due to interleaving explosion: even for a fixed set of inputs, there is a huge number of concurrent runs that need to be tested to account for scheduler behavior. Testing all possible schedules is not practical. Consequently, most effective testing algorithms only test a select subset of runs. For example, limiting testing to runs that contain data races or atomicity violations has been shown to capture a large proportion of concurrency bugs. In this paper we present a general approach to concurrent program testing that is based on techniques from artificial intelligence (AI) automated planning. We propose a framework for predicting concurrent program runs that violate a collection of generic correctness specifications for concurrent programs, namely runs that contain data races, atomicity violations, or null-pointer dereferences. Our prediction is based on observing an arbitrary run of the program, and using information collected from this run to model the behavior of the program, and to predict new runs that contain bugs with one of the above-noted violation patterns. We characterize the problem of predicting such new runs as an AI sequential planning problem with the temporally extended goal of achieving a particular violation pattern. In contrast to many state-of-the-art approaches, in our approach feasibility of the predicted runs is guaranteed and, therefore, all generated runs are fully usable for testing. Moreover, our planning-based approach has the merit that it can easily accommodate a variety of violation patterns which serve as the selection criteria for guiding search in the state space of concurrent runs. This is achieved by simply modifying the planning goal. We have implemented our approach using state-of-the-art AI planning techniques and tested it within the Penelope concurrent program testing framework [35]. Nevertheless, the approach is general and is amenable to a variety of program testing frameworks. Our experiments with a benchmark suite showed that our approach is very fast and highly effective, finding all known bugs.
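The abstract's core idea is to take the events observed in one run, treat the feasible reorderings of those events as the planning search space, and use a violation pattern as the goal. The Python sketch below illustrates that idea in miniature for an atomicity-violation pattern; the event model, the hard-coded pattern, and the naive depth-first search are assumptions made for this illustration, not the authors' planner encoding or the Penelope infrastructure.

# Illustrative sketch only (not the authors' implementation): predict a buggy
# interleaving by searching the feasible reorderings of events observed in one
# run, with an atomicity-violation pattern as the goal.
from collections import namedtuple

Event = namedtuple("Event", "thread op var")  # op is "read" or "write"

# Events collected from one observed run, kept in per-thread program order.
# Thread 1's read and write of x are meant to form one atomic block.
observed = {
    1: [Event(1, "read", "x"), Event(1, "write", "x")],
    2: [Event(2, "write", "x")],
}

def violates_atomicity(run):
    """Goal test: some other thread writes x between thread 1's read and write."""
    first = next(i for i, e in enumerate(run) if e.thread == 1 and e.op == "read")
    last = next(i for i, e in enumerate(run) if e.thread == 1 and e.op == "write")
    return any(e.thread != 1 and e.op == "write" and e.var == "x"
               for e in run[first + 1:last])

def plan(prefix, remaining):
    """Depth-first search over interleavings that respect per-thread order;
    returns the first complete run satisfying the goal, or None."""
    if not any(remaining.values()):
        return prefix if violates_atomicity(prefix) else None
    for tid, events in remaining.items():
        if events:
            rest = {t: (evs[1:] if t == tid else evs) for t, evs in remaining.items()}
            found = plan(prefix + [events[0]], rest)
            if found:
                return found
    return None

witness = plan([], observed)
if witness:
    print("Predicted run matching the atomicity-violation pattern:")
    for e in witness:
        print(f"  thread {e.thread}: {e.op} {e.var}")

The paper's approach additionally guarantees that the predicted runs are feasible; this toy search does not attempt that beyond preserving per-thread program order.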
ISSN: 1433-2779, 1433-2787
DOI: 10.1007/s10009-013-0277-y