An Empirical Exploration of the Difficulty Function

Bibliographic Details
Main Authors: Bentley, Julian G. W., Bishop, Peter G., van der Meulen, Meine
Format: Conference paper
Language: English
Description
Summary: The theory developed by Eckhardt and Lee (and later extended by Littlewood and Miller) utilises the concept of a “difficulty function” to estimate the expected gain in reliability of fault-tolerant architectures based on diverse programs. The “difficulty function” is the likelihood that a randomly chosen program will fail for any given input value. To date this has been an abstract concept that explains why dependent failures are likely to occur. This paper presents an empirical measurement of the difficulty function based on an analysis of over six thousand program versions implemented to a common specification. The study derived a “score function” for each version. Several different program versions were found to produce identical score functions, which, when analysed, usually turned out to be due to common programming faults. The score functions of the individual versions were combined to derive an approximation of the difficulty function. For this particular (relatively simple) problem specification, the difficulty function derived from the program versions was fairly flat, and the reliability gain from using multi-version programs would be close to that expected under the independence assumption.
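As a rough illustration of the approach described in the summary, the sketch below estimates an empirical difficulty function from per-version score functions and compares a 1-out-of-2 pair's coincident-failure probability with the independence prediction. The data, array names, and uniform-input assumption are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): each row of `failures` is one
# program version's score function over a set of test inputs, with 1 marking
# an input on which that version fails and 0 a correct result.
failures = np.array([
    [0, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 1, 0, 0, 0],
])

# Empirical difficulty function: for each input, the fraction of versions
# that fail on it, i.e. an estimate of the probability that a randomly
# chosen version fails for that input.
difficulty = failures.mean(axis=0)

# Assuming all inputs are equally likely, compare the probability that two
# independently chosen versions fail on the same random input
# (E[theta(X)^2], the Eckhardt-Lee pair estimate) with what the independence
# assumption predicts ((E[theta(X)])^2). A flat difficulty function makes
# the two values nearly equal.
single_version_pfd = difficulty.mean()
pair_pfd = (difficulty ** 2).mean()
print("difficulty function:", difficulty)
print("independence prediction:", single_version_pfd ** 2)
print("Eckhardt-Lee pair estimate:", pair_pfd)
```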
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-540-30138-7_6