EMRES: A New EMotional RESpondent Robot
dc.authorid | Battini Sonmez, Elena/0000-0003-0090-984X|Han, Hasan/0000-0002-7105-4351|Sarioglu, Baykal/0000-0002-7433-3823 | |
dc.authorwosid | Battini Sonmez, Elena/AAZ-6358-2021 | |
dc.contributor.author | Sonmez, Elena Battini | |
dc.contributor.author | Han, Hasan | |
dc.contributor.author | Karadeniz, Oguzcan | |
dc.contributor.author | Dalyan, Tugba | |
dc.contributor.author | Sarioglu, Baykal | |
dc.date.accessioned | 2024-07-18T20:47:27Z | |
dc.date.available | 2024-07-18T20:47:27Z | |
dc.date.issued | 2022 | |
dc.department | İstanbul Bilgi Üniversitesi | en_US |
dc.description.abstract | The aim of this work is to design an artificial empathetic system and to implement it in a new EMotional RESpondent robot, called EMRES. Rather than mimicking the expression detected in the human partner, the proposed system achieves a coherent and consistent emotional trajectory, resulting in a more credible human-agent interaction. Inspired by developmental robotics theory, EMRES has an internal state and a mood, which contribute to the evolution of the flow of emotions; at every episode, the next emotional state of the agent is affected by its internal state, mood, current emotion, and the expression read in the human partner. As a result, EMRES does not imitate, but synchronizes with, the emotion expressed by the human companion. The agent has been trained to recognize expressive faces from the FER2013 database and achieves 78.3% performance on wild images. Our first prototype has been implemented in a robot created for this purpose. An empirical study conducted with university students evaluated the newly proposed artificial empathetic system positively. | en_US |
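The abstract outlines an episodic update in which the agent's next emotion is shaped by its internal state, mood, current emotion, and the expression perceived on the human partner. The following minimal Python sketch illustrates one way such an update could look; the class, the weights, and the blending rule are assumptions made for illustration only, not the model described in the paper.

# Minimal illustrative sketch (not the authors' code) of the episodic update the
# abstract describes: the next emotion blends the agent's internal state, mood,
# current emotion, and the expression perceived on the human partner. All names,
# weights, and the mixing rule are assumptions made for illustration only.
from dataclasses import dataclass, field

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def uniform():
    # start every distribution flat over the seven FER2013 classes
    return {e: 1.0 / len(EMOTIONS) for e in EMOTIONS}

@dataclass
class EmresState:
    internal_state: dict = field(default_factory=uniform)   # slow-changing drives/needs
    mood: dict = field(default_factory=uniform)             # medium-term affective bias
    current_emotion: dict = field(default_factory=uniform)  # emotion expressed right now

def next_emotion(state: EmresState, perceived: dict,
                 w_internal=0.2, w_mood=0.2, w_current=0.3, w_perceived=0.3) -> str:
    """One episode: blend the four influences, renormalize, and let the mood drift.

    `perceived` is a probability distribution over EMOTIONS, e.g. the softmax
    output of a facial-expression classifier trained on FER2013.
    """
    blended = {e: (w_internal * state.internal_state[e]
                   + w_mood * state.mood[e]
                   + w_current * state.current_emotion[e]
                   + w_perceived * perceived[e]) for e in EMOTIONS}
    total = sum(blended.values()) or 1.0
    state.current_emotion = {e: v / total for e, v in blended.items()}
    # the mood follows the emotion slowly, so the agent synchronizes rather than imitates
    state.mood = {e: 0.9 * state.mood[e] + 0.1 * state.current_emotion[e] for e in EMOTIONS}
    return max(state.current_emotion, key=state.current_emotion.get)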
dc.identifier.doi | 10.1109/TCDS.2021.3120562 | |
dc.identifier.endpage | 780 | en_US |
dc.identifier.issn | 2379-8920 | |
dc.identifier.issn | 2379-8939 | |
dc.identifier.issue | 2 | en_US |
dc.identifier.scopus | 2-s2.0-85117791138 | en_US |
dc.identifier.scopusquality | Q1 | en_US |
dc.identifier.startpage | 772 | en_US |
dc.identifier.uri | https://doi.org/10.1109/TCDS.2021.3120562 | |
dc.identifier.uri | https://hdl.handle.net/11411/7789 | |
dc.identifier.volume | 14 | en_US |
dc.identifier.wos | WOS:000809402600050 | en_US |
dc.identifier.wosquality | Q2 | en_US |
dc.indekslendigikaynak | Web of Science | en_US |
dc.indekslendigikaynak | Scopus | en_US |
dc.language.iso | en | en_US |
dc.publisher | IEEE-Inst Electrical Electronics Engineers Inc | en_US |
dc.relation.ispartof | IEEE Transactions on Cognitive and Developmental Systems | en_US |
dc.relation.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Faculty Member | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | Robots | en_US |
dc.subject | Computational Modeling | en_US |
dc.subject | Face Recognition | en_US |
dc.subject | Mood | en_US |
dc.subject | Faces | en_US |
dc.subject | Synchronization | en_US |
dc.subject | Three-Dimensional Displays | en_US |
dc.subject | Computational Affective Models | en_US |
dc.subject | Deep Learning | en_US |
dc.subject | Developmental Robotics | en_US |
dc.subject | Facial Expressions | en_US |
dc.subject | Virtual Human | en_US |
dc.subject | Core Affect | en_US |
dc.title | EMRES: A New EMotional RESpondent Robot | en_US |
dc.type | Article | en_US |