Improving Visual Perception of Artificial Social Companions Using a Standardized Knowledge Representation in a Human-Machine Interaction Framework
Published in: International Journal of Social Robotics, 2023-03, Vol. 15 (3), p. 425-444
Format: Article
Language: English
Online access: Full text
Abstract: In Human-Machine Interaction for Artificial Social Companions (ASCs), we must incorporate features that allow an agent to deliver a sociable experience to the user. The associated technological challenges include active perception, mobility in unstructured environments, understanding human actions, detecting human behaviours, predicting human intentions, accessing large repositories of personal and social data, and adapting to changing context. These features are paramount for applications in the field of Active and Assisted Living (AAL), where the primary goal is to provide solutions that help people through ageing by promoting active and healthy living. The research questions being addressed can be stated as: What strategy could be developed to mitigate low specificity? How can we adopt standards in the implementation of ASCs (Social Robots)? We believe that part of the answer to these questions is to improve the way the user's needs and expectations are described and represented in the knowledge models used in ASCs, and that these knowledge models should adhere to flexible, extensible and standardized knowledge representations. In these knowledge models we incorporate representations of decision processes to cope with redundancy and fall-back mechanisms in terms of interaction functionalities, resulting in the agent's self-adaptation to its context (e.g. user model and environment conditions). To test our hypothesis, we designed a framework that captures the expected behaviour of the agent in descriptive scenarios, translates these into the agent's information model, and uses the resulting representation in probabilistic planning and decision-making to control interaction. Our expectation was that adopting this framework would reduce errors and faults in the agent's operation, resulting in improved performance while interacting with the user.
The results from our experiment confirmed that our framework is effective to a certain level and can improve the agent's performance by improving specificity. However, designing and implementing interaction workflows in artificial social companions remains challenging. Considering the landscape of Artificial Social Companions (i.e. Social Robots) for Active and Assisted Living, and the associated barriers to the adoption of such solutions, we believe this study will contribute to this field of application, in particular by contributing to the demons…
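The abstract's idea of probabilistic decision-making over redundant interaction functionalities, with fall-backs driven by context, can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's actual model: the modality names, probabilities, and utilities are invented, and the paper's framework uses a richer standardized knowledge representation.

```python
# Hypothetical sketch: pick an interaction modality by expected utility.
# Redundant modalities act as fall-backs when the context (e.g. a noisy
# room degrading speech recognition) lowers a modality's success probability.

def expected_utility(success_prob: float, reward: float, cost: float) -> float:
    """Expected utility of attempting one interaction modality."""
    return success_prob * reward - cost

def select_modality(modalities: dict, context: dict) -> str:
    """Return the modality with the highest expected utility in this context.

    `modalities` maps a name to (base_success_prob, reward, cost);
    `context` maps a name to a multiplier in [0, 1] that degrades its
    success probability.
    """
    scored = {
        name: expected_utility(p * context.get(name, 1.0), r, c)
        for name, (p, r, c) in modalities.items()
    }
    return max(scored, key=scored.get)

# Invented example values: speech is preferred, a touchscreen is the fall-back.
modalities = {
    "speech":      (0.9, 1.0, 0.1),
    "touchscreen": (0.8, 0.8, 0.2),
}

print(select_modality(modalities, {}))               # quiet room -> speech
print(select_modality(modalities, {"speech": 0.3}))  # noisy room -> touchscreen
```

The point of the sketch is the self-adaptation pattern: the same decision rule, fed with context-dependent probabilities, switches the agent to a redundant channel instead of failing outright.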
ISSN: 1875-4791, 1875-4805
DOI: 10.1007/s12369-021-00859-6