Autonomy and interdependence in human-agent-robot teams
Saved in:
Published in: | IEEE Intelligent Systems 2012-03, Vol.27 (2), p.43-51 |
---|---|
Main authors: | , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | There is a common belief that making systems more autonomous will improve the system and is therefore a desirable goal. Though small-scale, simple tasks can often benefit from automation, this does not necessarily generalize to more complex joint activity. When designing today's more sophisticated systems to work closely with humans, it is important to consider not only the machine's ability to work independently through autonomy, but also its ability to support interdependence with those involved in the joint activity. We posit that to truly improve systems and have them reach their full potential, designing systems that support interdependent activity between participants is the key. Our claim is that increasing autonomy, even in a simple and benign environment, does not always result in an improved system. We present results from an experiment that demonstrates this phenomenon and explain why increasing autonomy can sometimes negatively impact performance. |
---|---|
ISSN: | 1541-1672, 1941-1294 |
DOI: | 10.1109/MIS.2012.1 |