Consistency for Partially Defined Constraints

Bibliographic Details
Main Authors: Legtchenko, Andreï; Lallouet, Arnaud
Format: Book chapter
Language: English
Online Access: Full text
Description
Summary: Partially defined or Open Constraints [2] can be used to model incomplete knowledge of a concept or a relation. In an Open Constraint, some tuples are known to be true, some others are known to be false, and some are simply unknown. We propose to complete its definition by using Machine Learning techniques. The idea of the technique we use for learning comes directly from the classical model of solvers computing a chaotic iteration of reduction operators [1]. We begin by learning the constraint. Instead of learning it with a single classifier that takes all its variables as input and answers "yes" if the tuple belongs to the constraint and "no" otherwise, we learn a support function n of the constraint for each value of its variables' domains. A tuple is part of the constraint if it is accepted by the support functions of all of its values, and it is rejected as soon as one of them rejects it. As a representation for learning, we use an Artificial Neural Network (ANN) with one intermediate hidden layer, trained by the classical backpropagation algorithm [4]. When placed in a CSP, a constraint should contribute to domain reduction, so we also use the learned classifiers for solving. To do this, we take the natural extension to intervals [3] of the learned classifiers. Let N be the natural interval extension of n. Then, by using the current domains of the variables as input, we obtain a range for its output. Since a 0.5 threshold is placed after the output neuron, we can reject the value a for X if the maximum of the output range is less than 0.5, which means that all tuples within the current domain intervals are rejected. Otherwise, the value remains in the domain. Our experiments show that the learned consistency is weaker than more classical consistencies but still reduces the search space notably. We show that our technique not only has good learning performance but also yields a very efficient solver for the learned constraint.
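
The per-value support-function decomposition described in the summary can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code: the names tuple_in_constraint and supports, and the way each support function is fed the whole tuple, are assumptions made for the example; the 0.5 acceptance threshold follows the summary.

    # Minimal sketch (assumed names) of the per-value support-function test.
    # supports[i][a] stands for a learned classifier (the support function n
    # for value a of variable X_i) returning a score in [0, 1].

    def tuple_in_constraint(tup, supports, threshold=0.5):
        """Accept a tuple only if every per-value support function accepts it."""
        for i, a in enumerate(tup):
            if supports[i][a](tup) < threshold:
                return False   # rejected as soon as one support function rejects it
        return True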
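
For the solving step, a natural interval extension of a one-hidden-layer sigmoid network can be obtained by propagating interval bounds layer by layer, which is what the following sketch does. Again this is only an illustration under stated assumptions, not the authors' implementation: the names interval_affine, interval_forward, prune_value and support_net are hypothetical, and the sketch assumes the support function for the pair (X, a) is evaluated on the whole domain box with X fixed to a.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def interval_affine(lo, hi, W, b):
        # Sound interval propagation through the affine map W @ x + b:
        # each output bound combines the matching input bounds through
        # the positive and negative parts of W.
        W_pos = np.clip(W, 0.0, None)
        W_neg = np.clip(W, None, 0.0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def interval_forward(lo, hi, W1, b1, W2, b2):
        # Natural interval extension N of the learned one-hidden-layer network n.
        h_lo, h_hi = interval_affine(lo, hi, W1, b1)
        h_lo, h_hi = sigmoid(h_lo), sigmoid(h_hi)   # sigmoid is monotone
        o_lo, o_hi = interval_affine(h_lo, h_hi, W2, b2)
        return float(sigmoid(o_lo)[0]), float(sigmoid(o_hi)[0])

    def prune_value(domains, var, value, support_net, threshold=0.5):
        # Reject `value` for variable `var` when even the best tuple in the
        # current domain box cannot reach the 0.5 acceptance threshold.
        lo = np.array([d[0] for d in domains], dtype=float)
        hi = np.array([d[1] for d in domains], dtype=float)
        lo[var] = hi[var] = value
        W1, b1, W2, b2 = support_net
        _, out_hi = interval_forward(lo, hi, W1, b1, W2, b2)
        return out_hi < threshold    # True means the value can be removed

A propagator built from such per-value tests can then be iterated together with the solver's other reduction operators until a fixpoint is reached, in the spirit of the chaotic-iteration model cited in the summary.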
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/11564751_92