Daniela Seucan is working towards a PhD in Applied Cognitive Psychology in Romania.
AI is now good at making autonomous decisions. It can even beat us at Go. Next step: human emotions?
Traditionally, machine learning has been used to model natural intelligence. Emerging research shows that robots can be trained to have emotional intelligence. This has potential benefits for the caregiving sector, but it remains to be seen how humans will respond to human-like robots in their homes.
Teaching robots emotional intelligence is a different task from training them to make smart decisions, and it requires new neural network models. How do you teach a robot to detect anxiety or confidence, for example? Researchers are developing emotional neural networks that let robots pick up social cues from facial expressions or pupil dilation.
As a first step, scientists want AI to read how we feel, for use in caring for autistic children, the elderly or psychiatric patients, for example. Robots can now be programmed to identify emotions from our facial expressions and the tone of our voice.
Indeed, we are still far from building AI that's able to experience the full spectrum of human emotions, but more and more emotional features are being developed to improve human-machine communication in the context of patient-centred care and other services.
The 'Uncanny Valley' theory holds that humans feel increasing unease as a robot's appearance comes close to, but falls just short of, human likeness. Recent studies support the idea that people become uneasy when they believe a robot can experience emotions. So perhaps get out more and mix with humans?
Abstract: This article describes a pilot study in which a novel experimental setup, involving an autonomous humanoid robot, KASPAR, participating in a collaborative, dyadic video game, was implemented and tested with children with autism, all of whom had impairments in playing socially and communicating with others. The children alternated between playing the collaborative video game with a neurotypical adult and playing the same game with the humanoid robot, being exposed to each condition twice. The equipment and experimental setup were designed to observe whether the children would engage in more collaborative behaviours while playing the video game and interacting with the adult than while performing the same activities with the humanoid robot. The article describes the development of the experimental setup and its first evaluation in a small-scale exploratory pilot study. The purpose of the study was to gain experience with the operational limits of the robot as well as the dyadic video game, to determine what changes should be made to the systems, and to gain experience with analyzing the data from this study in order to conduct a more extensive evaluation in the future. Based on our observations of the children's experiences in playing the cooperative game, we determined that while the children enjoyed both playing the game and interacting with the robot, the game should be made simpler to play as well as more explicitly collaborative in its mechanics. Also, the robot should be more explicit in its speech as well as more structured in its interactions. Results show that the children found the activity to be more entertaining, appeared more engaged in playing, and displayed better collaborative behaviours with their partners (for the purposes of this article, 'partner' refers to the human or robotic agent which interacts with the children with autism; we are not using the term's other meanings that refer to specific relationships or emotional involvement between two individuals) in the second sessions of playing with human adults than during their first sessions. One way of explaining these findings is that the children's intermediary play session with the humanoid robot impacted their subsequent play session with the human adult. However, another longer and more thorough study would have to be conducted in order to better interpret these findings. Furthermore, although the children with autism were more interested in and entertained by the robotic partner, they showed more examples of collaborative play and cooperation while playing with the human adult.
Pub.: 11 Sep '13, Pinned: 31 May '17
Abstract: Speech Emotion Recognition (SER) represents one of the emerging fields in human-computer interaction. The quality of a human-computer interface that mimics human speech emotions relies heavily on the types of features used and on the classifier employed for recognition. The main purpose of this paper is to present a wide range of features employed for speech emotion recognition and the acoustic characteristics of those features. We also analyze the performance of these features in terms of important parameters such as precision, recall, F-measure and recognition rate, using two commonly used emotional speech databases, namely the Berlin emotional database and the Danish emotional database. Emotional speech recognition is being applied in modern human-computer interfaces, and an overview of 10 interesting applications is also presented to illustrate the importance of this technique.
Pub.: 02 Sep '11, Pinned: 30 May '17
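Paraphrasing the evaluation described in the abstract above, the sketch below shows how the named metrics (precision, recall, F-measure and recognition rate) would be computed for a speech-emotion classifier. The feature matrix here is synthetic; in a real run it would hold acoustic features such as MFCCs, pitch and energy extracted from corpora like the Berlin or Danish emotional speech databases, and the SVM is just one commonly used classifier, not necessarily the paper's.

```python
# Hedged sketch: evaluating an SER classifier with precision, recall,
# F-measure and recognition rate (overall accuracy). Features are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import precision_recall_fscore_support, accuracy_score

rng = np.random.default_rng(0)
n_samples, n_features, n_emotions = 600, 39, 5   # e.g. 13 MFCCs + deltas; 5 emotion classes
X = rng.normal(size=(n_samples, n_features))      # stand-in for acoustic features
y = rng.integers(0, n_emotions, size=n_samples)   # emotion labels (angry, happy, ...)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)           # one commonly used SER classifier
y_pred = clf.predict(X_te)

precision, recall, f_measure, _ = precision_recall_fscore_support(
    y_te, y_pred, average="macro", zero_division=0)
recognition_rate = accuracy_score(y_te, y_pred)   # "recognition rate" = accuracy
print(precision, recall, f_measure, recognition_rate)
```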
Abstract: In this paper, we outline the approach we have developed to construct an emotion-recognising system. It is based on guidance from psychological studies of emotion, as well as from the nature of emotion in its interaction with attention. A neural network architecture is constructed to handle the fusion of different modalities (facial features, prosody and lexical content in speech). Results from the network are given and their implications discussed, as are implications for future directions of the research.
Pub.: 01 Jun '05, Pinned: 30 May '17
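A minimal sketch of the fusion idea in the abstract above, assuming a simple late-fusion design: each modality (facial features, prosody, lexical content) gets its own small encoder, and the hidden representations are concatenated before classification. The dimensions and layer sizes are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch of multimodal late fusion for emotion recognition.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, face_dim=128, prosody_dim=32, lexical_dim=300, n_emotions=6):
        super().__init__()
        self.face = nn.Sequential(nn.Linear(face_dim, 64), nn.ReLU())
        self.prosody = nn.Sequential(nn.Linear(prosody_dim, 16), nn.ReLU())
        self.lexical = nn.Sequential(nn.Linear(lexical_dim, 64), nn.ReLU())
        self.head = nn.Linear(64 + 16 + 64, n_emotions)  # classify fused features

    def forward(self, face, prosody, lexical):
        fused = torch.cat([self.face(face), self.prosody(prosody),
                           self.lexical(lexical)], dim=-1)
        return self.head(fused)  # emotion logits

net = FusionNet()
logits = net(torch.randn(8, 128), torch.randn(8, 32), torch.randn(8, 300))
print(logits.shape)  # torch.Size([8, 6])
```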
Abstract: In order to avoid the complex explicit feature extraction process and the problem of low-level data operation involved in traditional facial expression recognition, we propose a Faster R-CNN (Faster Regions with Convolutional Neural Network Features) method for facial expression recognition in this paper. Firstly, the facial expression image is normalized and the implicit features are extracted by using the trainable convolution kernel. Then, maximum pooling is used to reduce the dimensions of the extracted implicit features. After that, RPNs (Region Proposal Networks) are used to generate high-quality region proposals, which are used by Faster R-CNN for detection. Finally, the Softmax classifier and regression layer are used to classify the facial expressions and predict the bounding box of the test sample, respectively. The dataset is provided by the Chinese Linguistic Data Consortium (CLDC), and is composed of multimodal emotional audio and video data. Experimental results show the performance and the generalization ability of the Faster R-CNN for facial expression recognition. The value of the mAP is around 0.82.
Pub.: 08 Apr '17, Pinned: 30 May '17
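The pipeline in the abstract above (region proposals from an RPN, then per-region softmax classification and bounding-box regression) can be approximated with torchvision's off-the-shelf Faster R-CNN, as in this sketch. The seven expression classes and the input size are assumptions for illustration; this is not the authors' code.

```python
# Hedged sketch: adapting torchvision's Faster R-CNN to expression classes.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 8  # assumed: 7 facial expressions + background
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # pretrained backbone + RPN
in_features = model.roi_heads.box_predictor.cls_score.in_features
# Swap the box head so softmax classification covers the expression classes.
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 480, 640)])  # one normalized RGB frame
print(detections[0].keys())  # dict_keys(['boxes', 'labels', 'scores'])
```

In practice the new box predictor would be fine-tuned on labelled face crops before its scores mean anything; the sketch only shows where the expression classes plug into the detection pipeline.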
Abstract: The current work addresses a virtual environment with self-replicating agents whose decisions are based on a form of "somatic computation" (soma - body) in which basic emotional responses, taken in parallelism to actual living organisms, are introduced as a way to provide the agents with greater reflexive abilities. The work provides a contribution to the fields of Artificial Intelligence (AI) and Artificial Life (ALife) in connection to a neurobiology-based cognitive framework for artificial systems and virtual environment simulations. The performance of the agents capable of emotional responses is compared with that of self-replicating automata, and the implications of research on emotions and AI, in connection to both virtual agents and robots, are addressed regarding possible future directions and applications.
Pub.: 09 Jan '14, Pinned: 31 May '17
Abstract: We have developed convolutional neural networks (CNN) for a facial expression recognition task. The goal is to classify each facial image into one of the seven facial emotion categories considered in this study. We trained CNN models of different depths using gray-scale images. We developed our models in Torch and exploited Graphics Processing Unit (GPU) computation in order to expedite the training process. In addition to networks operating on raw pixel data, we employed a hybrid feature strategy by which we trained a novel CNN model on the combination of raw pixel data and Histogram of Oriented Gradients (HOG) features. To reduce overfitting of the models, we utilized different techniques including dropout and batch normalization in addition to L2 regularization. We applied cross validation to determine the optimal hyper-parameters and evaluated the performance of the developed models by examining their training histories. We also present visualizations of different layers of a network to show what features of a face can be learned by CNN models.
Pub.: 22 Apr '17, Pinned: 30 May '17
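A minimal sketch of the hybrid-feature strategy described above: CNN features computed from raw gray-scale pixels are concatenated with HOG features before the classifier, with batch normalization and dropout as regularizers and L2 applied through the optimizer's weight decay. It uses PyTorch rather than the original Torch, and all sizes are illustrative, not the paper's.

```python
# Hedged sketch of the raw-pixel + HOG hybrid CNN for 7 expression classes.
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import hog  # one common HOG implementation

class HybridCNN(nn.Module):
    def __init__(self, hog_dim, n_emotions=7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(64 * 12 * 12 + hog_dim, n_emotions),  # 48x48 input -> 12x12 maps
        )

    def forward(self, images, hog_feats):
        x = self.conv(images).flatten(1)
        # Hybrid step: concatenate learned CNN features with hand-crafted HOG.
        return self.classifier(torch.cat([x, hog_feats], dim=1))

img = np.random.rand(48, 48).astype(np.float32)            # one face crop
h = hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2)).astype(np.float32)
net = HybridCNN(hog_dim=h.shape[0])
logits = net(torch.from_numpy(img)[None, None], torch.from_numpy(h)[None])
opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4)  # L2 regularization
print(logits.shape)  # torch.Size([1, 7])
```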
Abstract: Pupil diameter (PD) has been suggested as a reliable parameter for identifying an individual's emotional state. In this paper, we introduce a machine learning technique to detect and differentiate between positive and negative emotions. We presented 30 participants with positive and negative sound stimuli and recorded pupillary responses. The results showed a significant increase in pupil dilation during the processing of both negative and positive sound stimuli, with a greater increase for negative stimuli. We also found a more sustained dilation for negative compared to positive stimuli at the end of the trial, which was utilized to differentiate between positive and negative emotions using a machine learning approach that gave an accuracy of 96.5% with a sensitivity of 97.93% and a specificity of 98%. The obtained results were validated using another dataset, designed for a different study, which was recorded while 30 participants processed word pairs with positive and negative emotions.
Pub.: 07 Jan '16, Pinned: 30 May '17
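A minimal sketch of the classification step described above, assuming two per-trial features: overall pupil dilation and the sustained late-window dilation that distinguished negative stimuli. The pupil traces are simulated; real data would come from an eye tracker, and the SVM stands in for whichever learner the authors used.

```python
# Hedged sketch: classify positive vs. negative trials from pupil features.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n_trials, n_samples = 60, 200                          # 200 PD samples per trial
labels = rng.integers(0, 2, n_trials)                  # 0 = positive, 1 = negative
traces = rng.normal(3.5, 0.1, (n_trials, n_samples))   # baseline ~3.5 mm diameter
traces += labels[:, None] * 0.15                       # negative: larger, sustained dilation

features = np.column_stack([
    traces.mean(axis=1),                               # mean dilation over the trial
    traces[:, -50:].mean(axis=1),                      # sustained dilation at trial end
])
pred = cross_val_predict(SVC(), features, labels, cv=5)
tn, fp, fn, tp = confusion_matrix(labels, pred).ravel()
print("accuracy", (tp + tn) / n_trials,
      "sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
```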
Abstract: Some robots have been given emotional expressions in an attempt to improve human-computer interaction. In this article we analyze what it would mean for a robot to have emotion, distinguishing emotional expression for communication from emotion as a mechanism for the organization of behavior. Research on the neurobiology of emotion yields a deepening understanding of interacting brain structures and neural mechanisms rooted in neuromodulation that underlie emotions in humans and other animals. However, the chemical basis of animal function differs greatly from the mechanics and computations of current machines. We therefore abstract from biology a functional characterization of emotion that does not depend on physical substrate or evolutionary history, and is broad enough to encompass the possible emotions of robots.
Pub.: 24 Nov '04, Pinned: 30 May '17
Abstract: Robots are currently utilized by various civilian and military agencies, and are becoming more common in human environments. These machines can vary in form and function, but require an interface supporting naturalistic social interactions. Emotion is a key component of social interaction that conveys states and action tendencies, and standard design protocol is necessary to guide the research and development of emotive display systems so that reliable implementations are supported. This work suggests a framework for conveying emotion based on the analogous physical features of emotive cues and their associations with the dimensions of emotion. Sound, kinesics, and color can be manipulated according to their speed, intensity, regularity, and extent to convey the emotive states of a robot. Combinations of cues can enhance human recognition accuracy of robot emotion, but further research is necessary to understand the extent of these interactions and establish each parameter space.
Pub.: 08 Sep '16, Pinned: 30 May '17
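A minimal sketch of the framework's core mapping, under the assumption of a valence-arousal emotion space: the robot's emotional state is translated into the four manipulable properties of each cue channel (speed, intensity, regularity, extent) for sound, kinesics and color. The specific coefficients are invented for illustration, not the paper's calibrated parameter spaces.

```python
# Hedged sketch: mapping emotion dimensions onto emotive cue parameters.
from dataclasses import dataclass

@dataclass
class CueParameters:
    speed: float       # e.g. tempo of sound, velocity of gesture
    intensity: float   # e.g. loudness, color saturation
    regularity: float  # e.g. rhythmic vs. erratic patterning
    extent: float      # e.g. amplitude of movement, size of lit area

def cue_for(valence: float, arousal: float) -> CueParameters:
    """Map a (valence, arousal) state in [-1, 1]^2 onto cue properties."""
    return CueParameters(
        speed=0.5 + 0.5 * arousal,        # higher arousal -> faster cues
        intensity=0.5 + 0.5 * arousal,    # higher arousal -> stronger cues
        regularity=0.5 + 0.5 * valence,   # negative states -> more erratic
        extent=0.5 + 0.25 * (valence + arousal),
    )

def emotion_to_cues(valence: float, arousal: float) -> dict:
    # Combining channels is what the framework suggests improves recognition;
    # a real system would calibrate each channel's mapping separately.
    return {channel: cue_for(valence, arousal)
            for channel in ("sound", "kinesics", "color")}

print(emotion_to_cues(valence=-0.8, arousal=0.9)["sound"])
```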
Abstract: This paper presents a method of mood transition design for a robot capable of autonomous emotional interaction with humans. A 2-D emotional model is proposed to combine robot emotion, mood, and personality in order to generate emotional expressions. In this design, the robot personality is programmed by adjusting the factors of the five factor model proposed by psychologists. From the Big Five personality traits, the influence factors of robot mood transition are determined. Furthermore, a method to fuse basic robotic emotional behaviors is proposed in order to manifest robotic emotional states via continuous facial expressions. An artificial face on a screen is one way to provide a robot with a humanlike appearance, which might be useful for human-robot interaction. An artificial face simulator has been implemented to show the effectiveness of the proposed methods. Questionnaire surveys have been carried out to evaluate the effectiveness of the proposed method by observing robotic responses to a user's emotional expressions. Preliminary experimental results on a robotic head show that the proposed mood state transition scheme appropriately responds to a user's emotional changes in a continuous manner.
Pub.: 01 Aug '13, Pinned: 30 May '17
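A minimal sketch of a 2-D (valence-arousal) mood state in the spirit of the abstract above: mood is pulled toward the user's perceived emotion and drifts back toward a personality-dependent baseline, so facial expressions change continuously. Deriving the baseline from two Big Five traits, and the particular coefficients, are illustrative assumptions rather than the paper's model.

```python
# Hedged sketch: continuous mood transition in a 2-D emotional space.
import numpy as np

class MoodModel:
    def __init__(self, extraversion=0.6, neuroticism=0.3, rate=0.1):
        # Personality fixes the resting mood and how quickly mood shifts.
        self.baseline = np.array([extraversion - neuroticism,  # valence
                                  0.5 * extraversion])         # arousal
        self.rate = rate
        self.mood = self.baseline.copy()

    def step(self, perceived_emotion):
        """perceived_emotion: user's (valence, arousal), each in [-1, 1]."""
        target = np.asarray(perceived_emotion, dtype=float)
        self.mood += self.rate * (target - self.mood)      # pulled toward the user
        self.mood += 0.02 * (self.baseline - self.mood)    # slow return to baseline
        return np.clip(self.mood, -1.0, 1.0)

robot = MoodModel()
for emo in [(-0.9, 0.7), (-0.9, 0.7), (0.2, 0.1)]:  # user angry, then calming down
    print(robot.step(emo))  # mood moves smoothly, never jumps
```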
Abstract: The objective of a socially assistive robot is to create a close and effective interaction with a human user for the purpose of giving assistance. In particular, the social interaction, guidance, and support that a socially assistive robot can provide a person can be very beneficial to patient-centered care. However, there are a number of research issues that need to be addressed in order to design such robots. This paper focuses on developing effective emotion-based assistive behavior for a socially assistive robot intended for natural human-robot interaction (HRI) scenarios with explicit social and assistive task functionalities. In particular, in this paper, a unique emotional behavior module is presented and implemented in a learning-based control architecture for assistive HRI. The module is utilized to determine the appropriate emotions of the robot to display, as motivated by the well-being of the person, during assistive task-driven interactions in order to elicit suitable actions from users to accomplish a given person-centered assistive task. A novel online updating technique is used in order to allow the emotional model to adapt to new people and scenarios. Experiments presented show the effectiveness of utilizing robotic emotional assistive behavior during HRI scenarios.
Pub.: 10 Nov '15, Pinned: 30 May '17
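One way to read the online-updating idea above is as a running estimate of how well each displayed emotion promotes the user's well-being, updated after every interaction. The sketch below implements that reading with a simple value-update rule and occasional exploration; it is an illustrative stand-in, not the authors' learning-based control architecture.

```python
# Hedged sketch: online-adapting selection of an assistive robot's displayed emotion.
import random

EMOTIONS = ["happy", "encouraging", "concerned", "neutral"]

class EmotionModule:
    def __init__(self, alpha=0.2, explore=0.1):
        self.value = {e: 0.0 for e in EMOTIONS}  # expected well-being per emotion
        self.alpha = alpha                        # online learning rate
        self.explore = explore                    # occasional exploration of alternatives

    def select(self):
        if random.random() < self.explore:
            return random.choice(EMOTIONS)        # try something else now and then
        return max(self.value, key=self.value.get)

    def update(self, emotion, wellbeing_feedback):
        """wellbeing_feedback: e.g. observed change in the user's affect."""
        self.value[emotion] += self.alpha * (wellbeing_feedback - self.value[emotion])

module = EmotionModule()
shown = module.select()
module.update(shown, wellbeing_feedback=0.7)  # user responded positively
```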
Abstract: In the continuing effort to model natural intelligence and emotions in machine learning, many research works have emerged with different methods, often driven by engineering concerns, that share the common goal of modeling human perception in machines. This paper aims to go further in that direction by investigating the integration of emotion at the structural level of cognitive systems using the novel emotional DuoNeural Network (DuoNN). This network has hidden-layer DuoNeurons, each of which has two embedded neurons: a dorsal neuron and a ventral neuron for cognitive and emotional data processing, respectively. When input visual stimuli are presented to the DuoNN, the dorsal cognitive neurons process local features while the ventral emotional neurons process the entire pattern. We present the computational model and the learning algorithm of the DuoNN, the parallel streaming method for cognitive and emotional input information, and a comparison between the DuoNN and a recently developed emotional neural network. Experimental results show that the DuoNN architecture, configuration, and additional emotional information processing yield higher recognition rates and faster learning and decision making.
Pub.: 03 Aug '10, Pinned: 30 May '17
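A loose reconstruction of the DuoNeuron idea from the abstract above: each hidden unit pairs a "dorsal" neuron that sees only a local patch of the input with a "ventral" neuron that sees the entire pattern, and their responses are summed. The patch choice, sizes and activation are assumptions; the published DuoNN model and its learning algorithm are not reproduced here.

```python
# Hedged sketch: a DuoNeuron-style layer with dorsal (local) and ventral
# (whole-pattern) streams per hidden unit.
import torch
import torch.nn as nn

class DuoLayer(nn.Module):
    def __init__(self, input_dim, n_units, patch_size):
        super().__init__()
        self.patch_size = patch_size
        self.dorsal = nn.Linear(patch_size, n_units)   # cognitive, local-feature stream
        self.ventral = nn.Linear(input_dim, n_units)   # emotional, whole-pattern stream

    def forward(self, x):
        local = x[:, : self.patch_size]                # a local region of the stimulus
        # Each DuoNeuron combines its cognitive and emotional responses.
        return torch.tanh(self.dorsal(local) + self.ventral(x))

layer = DuoLayer(input_dim=784, n_units=64, patch_size=196)  # e.g. 28x28 image, one quarter
out = layer(torch.randn(4, 784))
print(out.shape)  # torch.Size([4, 64])
```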
Abstract: In this paper, we propose a limbic-based artificial emotional neural network (LiAENN) for pattern recognition. LiAENN is a novel computational neural model of the emotional brain that models emotional situations such as anxiety and confidence in the learning process, as well as the short paths, forgetting processes, and inhibitory mechanisms of the emotional brain. In the model, the learning weights are adjusted by the proposed anxious confident decayed brain emotional learning rules (ACDBEL). In engineering applications, LiAENN is utilized in face detection and emotion recognition. According to comparative results on the ORL and Yale datasets, LiAENN shows higher accuracy than other applied emotional networks such as brain emotional learning (BEL) and emotional back propagation (EmBP) based networks.
Pub.: 01 Aug '14, Pinned: 30 May '17
Abstract: In computerized technology, artificial speech is becoming increasingly important, and is already used in ATMs, online gaming and healthcare contexts. However, today's artificial speech typically sounds monotonous, a main reason for this being the lack of meaningful prosody. One particularly important function of prosody is to convey different emotions. This is because successful encoding and decoding of emotions is vital for effective social cognition, which is increasingly recognized in human-computer interaction contexts. Current attempts to artificially synthesize emotional prosody are much improved relative to early attempts, but there remains much work to be done due to methodological problems, lack of agreed acoustic correlates, and lack of theoretical grounding. If the addition of synthetic emotional prosody is not of sufficient quality, it may risk alienating users instead of enhancing their experience. So the value of embedding emotion cues in artificial speech may ultimately depend on the quality of the synthetic emotional prosody. However, early evidence on reactions to synthesized non-verbal cues in the facial modality bodes well. Attempts to implement the recognition of emotional prosody into artificial applications and interfaces have perhaps been met with greater success, but the ultimate test of synthetic emotional prosody will be to critically compare how people react to synthetic emotional prosody vs. natural emotional prosody, at the behavioral, socio-cognitive and neural levels.
Pub.: 01 Dec '15, Pinned: 30 May '17
Abstract: The uncanny valley (the unnerving nature of humanlike robots) is an intriguing idea, but both its existence and its underlying cause are debated. We propose that humanlike robots are not only unnerving, but are so because their appearance prompts attributions of mind. In particular, we suggest that machines become unnerving when people ascribe to them experience (the capacity to feel and sense), rather than agency (the capacity to act and do). Experiment 1 examined whether a machine's humanlike appearance prompts both ascriptions of experience and feelings of unease. Experiment 2 tested whether a machine capable of experience remains unnerving, even without a humanlike appearance. Experiment 3 investigated whether the perceived lack of experience can also help explain the creepiness of unfeeling humans and philosophical zombies. These experiments demonstrate that feelings of uncanniness are tied to perceptions of experience, and also suggest that experience, but not agency, is seen as fundamental to humans, and fundamentally lacking in machines.
Pub.: 13 Jul '12, Pinned: 30 May '17
Abstract: Using a hypothetical graph, Masahiro Mori proposed in 1970 the relation between the human likeness of robots and other anthropomorphic characters and an observer's affective or emotional appraisal of them. The relation is positive apart from a U-shaped region known as the uncanny valley. To measure the relation, we previously developed and validated indices for the perceptual-cognitive dimension humanness and three affective dimensions: interpersonal warmth, attractiveness, and eeriness. Nevertheless, the design of these indices was not informed by how the untrained observer perceives anthropomorphic characters categorically. As a result, scatter plots of humanness vs. eeriness show the stimuli cluster tightly into categories widely separated from each other. The present study applies a card sorting task, laddering interview, and adjective evaluation (N = 30) to revise the humanness, attractiveness, and eeriness indices and validate them via a representative survey (N = 1311). The revised eeriness index maintains its orthogonality to humanness (r = .04, p = .285), but the stimuli show much greater spread, reflecting the breadth of their range in human likeness and eeriness. The revised indices enable empirical relations among characters to be plotted similarly to Mori's graph of the uncanny valley. Accurate measurement with these indices can be used to enhance the design of androids and 3D computer animated characters.
Pub.: 28 Oct '16, Pinned: 30 May '17
Abstract: For more than 40 years, the uncanny valley model has captivated researchers from various fields of expertise. Still, explanations as to why slightly imperfect human-like characters can evoke feelings of eeriness remain the subject of controversy. Many experiments exploring the phenomenon have emphasized specific visual factors in connection to evolutionary psychological theories or an underlying categorization conflict. More recently, studies have also shifted focus away from the appearance of human-like entities, instead exploring their mental capabilities as the basis for observers' discomfort. In order to advance this perspective, we introduced 92 participants to a virtual reality (VR) chat program and presented them with two digital characters engaged in an emotional and empathic dialogue. Using the same pre-recorded 3D scene, we manipulated the perceived control type of the depicted characters (human-controlled avatars vs. computer-controlled agents), as well as their alleged level of autonomy (scripted vs. self-directed actions). Statistical analyses revealed that participants experienced significantly stronger eeriness if they perceived the empathic characters to be autonomous artificial intelligences. As human likeness and attractiveness ratings did not result in significant group differences, we present our results as evidence for an "uncanny valley of mind" that relies on the attribution of emotions and social cognition to non-human entities. A possible relationship to the philosophy of anthropocentrism and its "threat to human distinctiveness" concept is discussed.
Pub.: 04 Jan '17, Pinned: 30 May '17
Abstract: The work presented in this paper was part of our investigation in the ROBOSKIN project. The project has developed new robot capabilities based on the tactile feedback provided by novel robotic skin, with the aim to provide cognitive mechanisms to improve human–robot interaction capabilities. This article presents two novel tactile play scenarios developed for robot-assisted play for children with autism. The play scenarios were developed against specific educational and therapeutic objectives that were discussed with teachers and therapists. These objectives were classified with reference to the ICF-CY, the International Classification of Functioning—version for Children and Youth. The article presents a detailed description of the play scenarios, and case study examples of their implementation in HRI studies with children with autism and the humanoid robot KASPAR.
Pub.: 04 Apr '14, Pinned: 30 May '17
Abstract: Haptic sensors are essential devices that facilitate human-like sensing systems such as implantable medical devices and humanoid robots. The availability of conducting thin films with haptic properties could lead to tactile sensing systems that stretch reversibly, sense pressure (not just touch), and integrate with collapsible devices. In this study, a nanocomposite-based hemispherical artificial fingertip was fabricated to enhance the tactile sensing systems of humanoid robots. To validate the hypothesis, the proposed method was used in a robot-like finger system to classify ripe and unripe tomatoes by recording the metabolic growth of the tomato as a function of resistivity change during a controlled indentation force. Prior to fabrication, finite element modeling (FEM) was carried out to obtain the stress distribution and failure point of the tomato under different external loads. The extracted computational information was then used to design and fabricate the nanocomposite-based artificial fingertip for tomato maturity analysis. The obtained results demonstrate that the fabricated conformable and scalable artificial fingertip shows different electrical properties for ripe and unripe tomatoes. The artificial fingertip is compatible with the development of brain-like systems for artificial skin, producing a periodic response under an applied load.
Pub.: 07 Apr '17, Pinned: 30 May '17
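A minimal sketch of the classification idea in the abstract above: under a controlled indentation force, read the fingertip's relative resistance change and separate ripe from unripe fruit by a threshold. The resistance values and the threshold are invented for illustration; in the study they come from the nanocomposite film's measured electrical response.

```python
# Hedged sketch: threshold classification of fruit ripeness from the
# fingertip's resistance change under a fixed indentation force.
def classify_tomato(r_baseline_ohm: float, r_loaded_ohm: float,
                    threshold: float = 0.15) -> str:
    """Relative resistance change between unloaded and loaded states."""
    delta = abs(r_loaded_ohm - r_baseline_ohm) / r_baseline_ohm
    # Assumption: riper, softer tissue yields differently under the same
    # force, producing a different resistivity change in the film.
    return "ripe" if delta > threshold else "unripe"

print(classify_tomato(1200.0, 1450.0))  # delta ~0.21 -> "ripe"
```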