Autonomous human-machine teams: Reality constrains logic, but hides the complexity of data dependency
Saved in:
Published in: | Data science in finance and economics 2022-12, Vol.2 (4), p.464-499 |
---|---|
Author: | |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
Abstract: | In this review, we examine how scientists have struggled to apply logic to the rational beliefs of collectives, concluding that belief logics fail in the face of conflict and uncertainty, where reality governs. We generalize this finding by concluding that traditional social science, based on independent concepts about individuals and interpretations of reality, requires too many fixes to address its replication crisis, yet ignores the generalization from individuals to teams, for which social science has become largely irrelevant. This problem extends to traditional social science's inability to process the data dependencies of autonomous human-machine teammates, whose orthogonal roles in successful teams produce zero correlations (illustrated in the sketch below the record); it is predicated on the belief that perceptions in closed systems (laboratories) are reality. But, as the National Academy of Sciences has noted, this assumption fails in open spaces. Thus, the study of group processes has devolved into an overly narrow focus on individuals (e.g., their biases), which does not generalize to teams. For a theory of autonomous human-machine teams and systems, generalization is critical. By using an open-systems approach, we are able to explain the failures of social science and its lack of success in the field, and we generalize to autonomous human-machine teams and human-human teams. We extend our theory to conclude that traditional belief logics use assumptions that, if not tested against reality (e.g., with debates), can be lethal (e.g., the DoD's drone tragedy in Afghanistan in 2021). We conclude that an AI machine operating interdependently with a human teammate, each jointly challenging the other's beliefs about reality while sharing and shaping their experiences, is the path to autonomy in the open, justifying our research program. |
---|---|
ISSN: | 2769-2140 |
DOI: | 10.3934/DSFE.2022023 |
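
The abstract's claim that teammates in orthogonal roles produce zero correlations can be made concrete with a minimal sketch. This is our illustration, not the paper's method: we assume each role can be modeled as an independent random performance signal (the names `human`, `machine`, and `team` are hypothetical), with the team outcome depending jointly on both signals via their product.

```python
# Minimal sketch (ours, not the paper's) of the "zero correlations"
# claim: two orthogonal role signals are statistically independent,
# and the team outcome depends jointly on both of them.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

human = rng.standard_normal(n)    # hypothetical human-role signal
machine = rng.standard_normal(n)  # hypothetical machine-role signal
team = human * machine            # interdependent team outcome

# Pearson correlations are ~0 in both cases, even though the team
# outcome depends entirely on each teammate's contribution.
print(np.corrcoef(human, machine)[0, 1])  # orthogonal roles: ~0
print(np.corrcoef(human, team)[0, 1])     # role vs. outcome: also ~0
```

The product is only one assumed interdependence model; the point is that an analysis treating the signals as independent, correlation-bearing variables registers nothing, which is the data-dependency problem the abstract attributes to traditional social science.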