Abstract
The development of data-driven behaviour-generating systems has recently become the focus of considerable attention in the fields of human–agent interaction and human–robot interaction. Although rule-based approaches were dominant for years, they proved inflexible and expensive to develop. The difficulty of authoring production rules, as well as the need for manual configuration to generate artificial behaviours, places a limit on how complex and diverse rule-based behaviours can be. In contrast, actual human–human interaction data collected using tracking and recording devices makes humanlike multimodal co-speech behaviour generation possible using machine learning and, in recent years, deep learning in particular. This survey provides an overview of the state of the art of deep learning-based co-speech behaviour generation models and offers an outlook for future research in this area.
Original language | English |
---|---|
Article number | 2 |
Pages (from-to) | 1-39 |
Number of pages | 39 |
Journal | ACM Transactions on Human-Robot Interaction |
Volume | 13 |
Issue number | 1 |
Early online date | 30 Jan 2024 |
DOIs | |
Publication status | Published - Mar 2024 |
ASJC Scopus subject areas
- Human-Computer Interaction
- Artificial Intelligence
Keywords
- Datasets
- data-driven behaviour generation
- neural networks