Learning in the Presence of Self-Interested Agents
Main Authors: , ,
Format: Conference Paper
Language: English
Subjects:
Online Access: Order full text
Abstract: In many situations a principal gathers a data sample containing positive and negative examples of a concept in order to induce a classification rule with a machine learning algorithm. Although learning algorithms differ in various respects, one essential assumption is common to all: that there is no strategic behavior inherent in the process that generates the sample data. We therefore ask: what if the observed attributes are deliberately modified by self-interested agents who gain a preferred classification by doing so? In such cases, this strategic behavior must be anticipated and incorporated into the learning process, yet classical learning approaches do not consider its existence. In this paper we study the need for such a paradigm and outline related research issues.
ISSN: 1530-1605, 2572-6862
DOI: 10.1109/HICSS.2006.250
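
To make the strategic setting in the abstract concrete, here is a minimal Python sketch; it is not the paper's method, and the names (`best_response`, `budget`) and the Gaussian score model are illustrative assumptions. Agents can raise a single observed score at bounded cost to cross a classifier's threshold; a learner that anticipates this can shift its threshold by the manipulation budget, while a naive learner fit as if the data were non-strategic admits the gamed agents.

```python
import numpy as np

# Hypothetical illustration of strategic attribute modification:
# a 1-D score, a threshold classifier, and self-interested agents
# who can raise their observed score by at most `budget` to gain a
# preferred (positive) classification.

rng = np.random.default_rng(0)

def best_response(scores, threshold, budget):
    """Agents below the threshold report the smallest score that wins a
    positive label, if it is reachable within the manipulation budget;
    everyone else reports truthfully."""
    gamed = scores.copy()
    reachable = (scores < threshold) & (scores + budget >= threshold)
    gamed[reachable] = threshold  # move just enough to cross the boundary
    return gamed

# True qualification: positives have higher latent scores on average.
n = 1000
labels = rng.integers(0, 2, n)
scores = rng.normal(loc=labels.astype(float), scale=0.7)

budget = 0.5
naive_threshold = 0.5                        # ignores strategic behavior
robust_threshold = naive_threshold + budget  # anticipates the manipulation

for name, thr in [("naive", naive_threshold), ("robust", robust_threshold)]:
    observed = best_response(scores, thr, budget)  # agents react to thr
    preds = observed >= thr
    acc = (preds == labels.astype(bool)).mean()
    print(f"{name:6s} threshold={thr:.2f} accuracy={acc:.3f}")
```

Under the naive threshold, every agent whose true score lies within `budget` of the boundary games its way to a positive label, inflating false positives; raising the threshold by the budget restores the intended decision on true scores. This toy best-response model is only one way to formalize the strategic data-generation process the abstract describes.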