

CURATOR
A pinboard by
Endre Szvetnik

Daniela Seucan is working towards a PhD in Applied Cognitive Psychology in Romania.

PINBOARD SUMMARY

AI is now good at making autonomous decisions. It can even beat us at Go. Next step: human emotions?

Traditionally, machine learning has been used to model natural intelligence. Emerging research shows that robots can be trained to have emotional intelligence. This has potential benefits for the care-giving sector, but it remains to be seen how humans will respond to human-like robots in their homes.

How can we encode emotions into machines?

Teaching robots emotional intelligence is a different task from training them to make smart decisions, and it requires new neural network models. How do you teach a robot to detect anxiety or confidence, for example? New research on emotional neural networks is exploring how robots can pick up social cues via facial recognition or pupil dilation.
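To make the idea concrete, here is a minimal, purely illustrative sketch of cue-based emotion classification. The feature names, emotion labels, and prototype values are all hypothetical inventions for this example (not from any of the pinned papers): each observed face is reduced to two numeric cues, mouth curvature and pupil dilation, and matched to the nearest emotion prototype.

```python
# Toy sketch: nearest-prototype emotion classification from two facial cues.
# All labels and numbers below are hypothetical illustrations, not real data.

from math import dist

# Hypothetical prototype feature vectors: (mouth_curvature, pupil_dilation)
PROTOTYPES = {
    "happy":   (0.8, 0.5),
    "anxious": (-0.2, 0.9),
    "calm":    (0.1, 0.2),
}

def classify_emotion(mouth_curvature: float, pupil_dilation: float) -> str:
    """Return the emotion whose prototype is closest (Euclidean distance)
    to the observed pair of facial cues."""
    sample = (mouth_curvature, pupil_dilation)
    return min(PROTOTYPES, key=lambda label: dist(PROTOTYPES[label], sample))

print(classify_emotion(0.7, 0.4))    # close to the "happy" prototype
print(classify_emotion(-0.1, 0.85))  # close to the "anxious" prototype
```

Real systems replace the hand-picked prototypes with representations learned from labelled video and audio, but the underlying idea is the same: map raw social cues into a feature space where emotional states become separable.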

What benefit will emotionally intelligent machines bring society?

As a first step, scientists want AI to read how we feel, for use in caring for autistic children, the elderly or psychiatric patients, for example. Robots can now be programmed to identify emotions from our facial expressions and the tone of our voice.

Surely a robot's emotions will never match the complexity of a human's?

Indeed, we are still far from building AI that's able to experience the full spectrum of human emotions, but more and more emotional features are being developed to improve human-machine communication in the context of patient-centred care and other services.

How would you respond to a robot in your living room?

The 'Uncanny Valley' theory holds that humans feel increasing unease as a robot comes to closely, but imperfectly, resemble them. Recent studies support the idea that people become unsettled when they believe a robot can experience emotions. So perhaps get out more and mix with humans?

20 ITEMS PINNED

A Pilot Study with a Novel Setup for Collaborative Play of the Humanoid Robot KASPAR with Children with Autism

Abstract: This article describes a pilot study in which a novel experimental setup, involving an autonomous humanoid robot, KASPAR, participating in a collaborative, dyadic video game, was implemented and tested with children with autism, all of whom had impairments in playing socially and communicating with others. The children alternated between playing the collaborative video game with a neurotypical adult and playing the same game with the humanoid robot, being exposed to each condition twice. The equipment and experimental setup were designed to observe whether the children would engage in more collaborative behaviours while playing the video game and interacting with the adult than performing the same activities with the humanoid robot. The article describes the development of the experimental setup and its first evaluation in a small-scale exploratory pilot study. The purpose of the study was to gain experience with the operational limits of the robot as well as the dyadic video game, to determine what changes should be made to the systems, and to gain experience with analyzing the data from this study in order to conduct a more extensive evaluation in the future. Based on our observations of the children's experiences in playing the cooperative game, we determined that while the children enjoyed both playing the game and interacting with the robot, the game should be made simpler to play as well as more explicitly collaborative in its mechanics. Also, the robot should be more explicit in its speech as well as more structured in its interactions. Results show that the children found the activity to be more entertaining, appeared more engaged in playing, and displayed better collaborative behaviours with their partners (For the purposes of this article, 'partner' refers to the human/robotic agent which interacts with the children with autism. 
We are not using the term’s other meanings that refer to specific relationships or emotional involvement between two individuals.) in the second sessions of playing with human adults than during their first sessions. One way of explaining these findings is that the children’s intermediary play session with the humanoid robot impacted their subsequent play session with the human adult. However, another longer and more thorough study would have to be conducted in order to better re-interpret these findings. Furthermore, although the children with autism were more interested in and entertained by the robotic partner, the children showed more examples of collaborative play and cooperation while playing with the human adult.

Pub.: 11 Sep '13, Pinned: 31 May '17

What is the Value of Embedding Artificial Emotional Prosody in Human-Computer Interactions? Implications for Theory and Design in Psychological Science.

Abstract: In computerized technology, artificial speech is becoming increasingly important, and is already used in ATMs, online gaming and healthcare contexts. However, today's artificial speech typically sounds monotonous, a main reason for this being the lack of meaningful prosody. One particularly important function of prosody is to convey different emotions. This is because successful encoding and decoding of emotions is vital for effective social cognition, which is increasingly recognized in human-computer interaction contexts. Current attempts to artificially synthesize emotional prosody are much improved relative to early attempts, but there remains much work to be done due to methodological problems, lack of agreed acoustic correlates, and lack of theoretical grounding. If the addition of synthetic emotional prosody is not of sufficient quality, it may risk alienating users instead of enhancing their experience. So the value of embedding emotion cues in artificial speech may ultimately depend on the quality of the synthetic emotional prosody. However, early evidence on reactions to synthesized non-verbal cues in the facial modality bodes well. Attempts to implement the recognition of emotional prosody into artificial applications and interfaces have perhaps been met with greater success, but the ultimate test of synthetic emotional prosody will be to critically compare how people react to synthetic emotional prosody vs. natural emotional prosody, at the behavioral, socio-cognitive and neural levels.

Pub.: 01 Dec '15, Pinned: 30 May '17

Measuring the Uncanny Valley Effect

Abstract: Using a hypothetical graph, Masahiro Mori proposed in 1970 the relation between the human likeness of robots and other anthropomorphic characters and an observer's affective or emotional appraisal of them. The relation is positive apart from a U-shaped region known as the uncanny valley. To measure the relation, we previously developed and validated indices for the perceptual-cognitive dimension humanness and three affective dimensions: interpersonal warmth, attractiveness, and eeriness. Nevertheless, the design of these indices was not informed by how the untrained observer perceives anthropomorphic characters categorically. As a result, scatter plots of humanness vs. eeriness show the stimuli cluster tightly into categories widely separated from each other. The present study applies a card sorting task, laddering interview, and adjective evaluation (N = 30) to revise the humanness, attractiveness, and eeriness indices and validate them via a representative survey (N = 1311). The revised eeriness index maintains its orthogonality to humanness (r = .04, p = .285), but the stimuli show much greater spread, reflecting the breadth of their range in human likeness and eeriness. The revised indices enable empirical relations among characters to be plotted similarly to Mori's graph of the uncanny valley. Accurate measurement with these indices can be used to enhance the design of androids and 3D computer-animated characters.

Pub.: 28 Oct '16, Pinned: 30 May '17

Venturing into the uncanny valley of mind-The influence of mind attribution on the acceptance of human-like characters in a virtual reality setting.

Abstract: For more than 40 years, the uncanny valley model has captivated researchers from various fields of expertise. Still, explanations as to why slightly imperfect human-like characters can evoke feelings of eeriness remain the subject of controversy. Many experiments exploring the phenomenon have emphasized specific visual factors in connection to evolutionary psychological theories or an underlying categorization conflict. More recently, studies have also shifted focus away from the appearance of human-like entities, instead exploring their mental capabilities as the basis for observers' discomfort. In order to advance this perspective, we introduced 92 participants to a virtual reality (VR) chat program and presented them with two digital characters engaged in an emotional and empathic dialogue. Using the same pre-recorded 3D scene, we manipulated the perceived control type of the depicted characters (human-controlled avatars vs. computer-controlled agents), as well as their alleged level of autonomy (scripted vs. self-directed actions). Statistical analyses revealed that participants experienced significantly stronger eeriness if they perceived the empathic characters to be autonomous artificial intelligences. As human likeness and attractiveness ratings did not result in significant group differences, we present our results as evidence for an "uncanny valley of mind" that relies on the attribution of emotions and social cognition to non-human entities. A possible relationship to the philosophy of anthropocentrism and its "threat to human distinctiveness" concept is discussed.

Pub.: 04 Jan '17, Pinned: 30 May '17

Development of haptic based piezoresistive artificial fingertip: Toward efficient tactile sensing systems for humanoids

Abstract: Haptic sensors are essential devices that facilitate human-like sensing systems such as implantable medical devices and humanoid robots. The availability of conducting thin films with haptic properties could lead to the development of tactile sensing systems that stretch reversibly, sense pressure (not just touch), and integrate with collapsible structures. In this study, a nanocomposite-based hemispherical artificial fingertip was fabricated to enhance the tactile sensing systems of humanoid robots. To validate the hypothesis, the proposed method was used in a robot-like finger system to classify ripe and unripe tomatoes by recording the metabolic growth of the tomato as a function of resistivity change during a controlled indentation force. Prior to fabrication, finite element modeling (FEM) was carried out to obtain the stress distribution and failure point of the tomato under different external loads. The extracted computational information was then used to design and fabricate the nanocomposite-based artificial fingertip and examine tomato maturity. The results demonstrate that the fabricated conformable and scalable artificial fingertip exhibits different electrical properties for ripe and unripe tomatoes. The artificial fingertip is compatible with the development of brain-like systems for artificial skin, producing a periodic response under applied load.

Pub.: 07 Apr '17, Pinned: 30 May '17