

CURATOR

Hi! I am currently working on my PhD in Applied Cognitive Psychology.

PINBOARD SUMMARY

Robotic systems contribute significantly to a better understanding of the human brain, and vice versa.

The mind-body dichotomy has been a subject of debate since antiquity, from Plato through Descartes and Kant. But we cannot separate the mind from the body without a clear understanding of how the brain really works. And although the brain has been studied for several centuries now, it still holds mysteries at the molecular level as well as at the cognitive and behavioral levels.

To fully understand the mechanisms of the brain, we must also learn about the functions of the body. This view is supported by robotics as well: it seems that for an artificial system (a robot) to develop intelligent behavior, a compliant body with a muscular-skeletal structure is essential. Neurocognitive robotics, or the science of embodiment, is an interdisciplinary field that combines neurocognitive science, robotics, and artificial intelligence. On one hand, this science and technology of embodied autonomous neural systems allows us to better understand the human brain. The interactions between neurons and neural networks in our brain generate cognitive processes such as perception, memory, language, and reasoning. By modeling human cognition and programming it into robots that act in a non-simulated environment, we can gain precious knowledge about the functions and malfunctions of the human brain; neurocognitive robots are required to live in the real world because they imitate humans, who gain their knowledge and representations of the world by interacting with their environments. On the other hand, the robot becomes a mechanical agent (a robot, a prosthetic, or a wearable machine) that can actively help elderly people, disabled people, and even healthy people perform day-to-day activities.

Follow this pinboard to find out about:

  • Recent findings and developments in robotics based on neurocognitive science;
  • New methods and novel ideas in various domains relevant to neurocognitive robotics, and how they interact with each other;
  • New robotic models designed to help children and adults with cognitive impairments or support elderly people with their daily chores;
  • Humanoids and social robots designed for entertainment, teaching, or building the next generation of human-machine symbiosis.
58 ITEMS PINNED

What is the Value of Embedding Artificial Emotional Prosody in Human-Computer Interactions? Implications for Theory and Design in Psychological Science.

Abstract: In computerized technology, artificial speech is becoming increasingly important, and is already used in ATMs, online gaming and healthcare contexts. However, today's artificial speech typically sounds monotonous, a main reason for this being the lack of meaningful prosody. One particularly important function of prosody is to convey different emotions. This is because successful encoding and decoding of emotions is vital for effective social cognition, which is increasingly recognized in human-computer interaction contexts. Current attempts to artificially synthesize emotional prosody are much improved relative to early attempts, but there remains much work to be done due to methodological problems, lack of agreed acoustic correlates, and lack of theoretical grounding. If the addition of synthetic emotional prosody is not of sufficient quality, it may risk alienating users instead of enhancing their experience. So the value of embedding emotion cues in artificial speech may ultimately depend on the quality of the synthetic emotional prosody. However, early evidence on reactions to synthesized non-verbal cues in the facial modality bodes well. Attempts to implement the recognition of emotional prosody into artificial applications and interfaces have perhaps been met with greater success, but the ultimate test of synthetic emotional prosody will be to critically compare how people react to synthetic emotional prosody vs. natural emotional prosody, at the behavioral, socio-cognitive and neural levels.
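
One concrete way to embed emotion cues in artificial speech is to drive a synthesizer's prosody controls from a categorical emotion label, for example via the W3C SSML <prosody> element. Below is a minimal sketch; the parameter values are purely illustrative assumptions, not validated acoustic correlates of emotion.

```python
# Minimal sketch: mapping categorical emotions to SSML prosody settings.
# The values below are illustrative assumptions, not validated acoustic
# correlates of emotion.

EMOTION_PROSODY = {
    # emotion: (pitch shift, speaking rate, volume) -- rough, hypothetical values
    "happy":   ("+15%", "110%", "+2dB"),
    "sad":     ("-10%", "85%",  "-3dB"),
    "angry":   ("+5%",  "120%", "+4dB"),
    "neutral": ("+0%",  "100%", "+0dB"),
}

def to_ssml(text: str, emotion: str = "neutral") -> str:
    """Wrap text in a W3C SSML <prosody> element for a given emotion."""
    pitch, rate, volume = EMOTION_PROSODY[emotion]
    return (
        f'<speak><prosody pitch="{pitch}" rate="{rate}" volume="{volume}">'
        f"{text}</prosody></speak>"
    )

print(to_ssml("Your appointment is confirmed.", "happy"))
```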

Pub.: 01 Dec '15, Pinned: 30 May '17

Emotional Interaction between Artificial Companion Agents and the Elderly

Abstract: Artificial companion agents are defined as hardware or software entities designed to provide companionship to a person. The senior population faces a particular demand for companionship. Artificial companion agents have been demonstrated to be useful in therapy, offering emotional companionship and facilitating socialization. However, there is a lack of empirical studies on what artificial agents should do and how they can communicate better with human beings. To address these functional research problems, we attempt to establish a model that guides artificial companion designers in meeting the emotional needs of the elderly by fulfilling absent roles in their social interactions. We call this model the Role Fulfilling Model. It uses role as a key concept to analyse, from an emotional perspective, the elderly's demands for functionality in artificial companion agent designs and technologies. To evaluate the effectiveness of this model, we propose a serious game platform named Happily Aging in Place. This game will help us involve a large number of senior users through crowdsourcing to test our model and hypotheses. To improve emotional communication between artificial companion agents and users, this book draft also addresses an important but largely overlooked aspect of affective computing: how to enable companion agents to express mixed emotions with facial expressions, and whether individual heterogeneity affects how different users perceive the same facial expressions. Some preliminary results about gender differences have been found. The perception of facial expressions across different age groups and cultural backgrounds will be examined in future studies.
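
As an illustration of the mixed-emotions question the abstract raises, one simple approach is to blend the facial action unit (AU) intensities of two basic emotions. This sketch assumes hypothetical AU intensity profiles (the AU sets follow common FACS descriptions, e.g. AU6+AU12 for happiness) and a plain linear blend; it is not the authors' method.

```python
# Minimal sketch of a mixed emotion as a linear blend of two basic
# emotions' facial action unit (AU) intensities. Profiles and weights
# are hypothetical, illustrative values.

AU_PROFILES = {
    "happiness": {"AU6": 0.8, "AU12": 1.0},
    "sadness":   {"AU1": 0.7, "AU4": 0.6, "AU15": 0.9},
}

def blend(emotion_a, emotion_b, w=0.5):
    """Linearly blend two AU intensity profiles with weight w on emotion_a."""
    aus = set(AU_PROFILES[emotion_a]) | set(AU_PROFILES[emotion_b])
    return {au: w * AU_PROFILES[emotion_a].get(au, 0.0)
                + (1 - w) * AU_PROFILES[emotion_b].get(au, 0.0)
            for au in aus}

# e.g., a bittersweet expression: 60% happiness, 40% sadness
print(blend("happiness", "sadness", w=0.6))
```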

Pub.: 21 Jan '16, Pinned: 30 May '17

Fast learning method for convolutional neural networks using extreme learning machine and its application to lane detection

Abstract: (Neural Networks, available online 10 December 2016; authors: Jihun Kim, Jonghong Kim, Gil-Jin Jang, Minho Lee.) Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a large dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate network weights between output and hidden layers in a single iteration and thus can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied only to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance.
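
For intuition, the closed-form training step that makes ELMs fast can be sketched in a few lines: the hidden weights stay random and fixed, and only the output weights are solved via a pseudoinverse. This is a minimal single-hidden-layer sketch, not the paper's stacked, CNN-embedded variant.

```python
import numpy as np

# Minimal single-hidden-layer ELM: random fixed hidden weights, output
# weights solved in closed form via the Moore-Penrose pseudoinverse.
rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=100):
    """X: (n_samples, n_features) inputs, T: (n_samples, n_outputs) targets."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input->hidden weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: learn y = sin(x) from noisy samples in a single training step.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X) + 0.05 * rng.normal(size=X.shape)
W, b, beta = elm_train(X, T, n_hidden=50)
print(np.mean((elm_predict(X, W, b, beta) - np.sin(X)) ** 2))  # small MSE
```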

Pub.: 17 Dec '16, Pinned: 30 May '17

A Pilot Study with a Novel Setup for Collaborative Play of the Humanoid Robot KASPAR with Children with Autism

Abstract: This article describes a pilot study in which a novel experimental setup, involving an autonomous humanoid robot, KASPAR, participating in a collaborative, dyadic video game, was implemented and tested with children with autism, all of whom had impairments in playing socially and communicating with others. The children alternated between playing the collaborative video game with a neurotypical adult and playing the same game with the humanoid robot, being exposed to each condition twice. The equipment and experimental setup were designed to observe whether the children would engage in more collaborative behaviours while playing the video game and interacting with the adult than while performing the same activities with the humanoid robot. The article describes the development of the experimental setup and its first evaluation in a small-scale exploratory pilot study. The purpose of the study was to gain experience with the operational limits of the robot as well as the dyadic video game, to determine what changes should be made to the systems, and to gain experience with analyzing the data from this study in order to conduct a more extensive evaluation in the future. Based on our observations of the children's experiences in playing the cooperative game, we determined that while the children enjoyed both playing the game and interacting with the robot, the game should be made simpler to play as well as more explicitly collaborative in its mechanics. Also, the robot should be more explicit in its speech as well as more structured in its interactions. Results show that the children found the activity to be more entertaining, appeared more engaged in playing, and displayed better collaborative behaviours with their partners (for the purposes of this article, 'partner' refers to the human/robotic agent which interacts with the children with autism; we are not using the term's other meanings that refer to specific relationships or emotional involvement between two individuals) in the second sessions of playing with human adults than during their first sessions. One way of explaining these findings is that the children's intermediary play session with the humanoid robot impacted their subsequent play session with the human adult. However, another longer and more thorough study would have to be conducted in order to better interpret these findings. Furthermore, although the children with autism were more interested in and entertained by the robotic partner, the children showed more examples of collaborative play and cooperation while playing with the human adult.

Pub.: 11 Sep '13, Pinned: 09 May '17

Upper limb robot-assisted therapy in cerebral palsy: a single-blind randomized controlled trial.

Abstract: Several pilot studies have evoked interest in robot-assisted therapy (RAT) in children with cerebral palsy (CP). The objective was to assess the effectiveness of RAT in children with CP through a single-blind randomized controlled trial. Sixteen children with CP were randomized into 2 groups. Eight children performed 5 conventional therapy sessions per week over 8 weeks (control group). Eight children completed 3 conventional therapy sessions and 2 robot-assisted sessions per week over 8 weeks (robotic group). For both groups, each therapy session lasted 45 minutes. Throughout each RAT session, the patient attempted to reach several targets consecutively with the REAPlan. The REAPlan is a distal effector robot that allows for displacements of the upper limb in the horizontal plane. A blinded assessment was performed before and after the intervention with respect to the International Classification of Functioning framework: body structure and function (upper limb kinematics, Box and Block test, Quality of Upper Extremity Skills Test, strength, and spasticity), activities (Abilhand-Kids, Pediatric Evaluation of Disability Inventory), and participation (Life Habits). During each RAT session, patients performed 744 movements on average with the REAPlan. Among the variables assessed, the smoothness of movement (P < .01) and manual dexterity assessed by the Box and Block test (P = .04) improved significantly more in the robotic group than in the control group. This single-blind randomized controlled trial provides the first evidence that RAT is effective in children with CP. Future studies should investigate the long-term effects of this therapy.
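
The abstract does not specify which smoothness measure was used; as an illustration, one common jerk-based kinematic smoothness metric can be computed as in the sketch below, on a toy 1-D reaching trajectory (the metric choice is an assumption, not the study's actual measure).

```python
import numpy as np

# Minimal sketch of a dimensionless (normalized) jerk smoothness metric:
# integral of squared jerk, scaled by duration^5 / amplitude^2.
# Lower values indicate smoother movement.

def normalized_jerk(position, dt):
    """position: (n,) samples of a 1-D reaching trajectory; dt: sample period."""
    vel = np.gradient(position, dt)
    acc = np.gradient(vel, dt)
    jerk = np.gradient(acc, dt)
    duration = dt * (len(position) - 1)
    amplitude = abs(position[-1] - position[0])
    return np.trapz(jerk ** 2, dx=dt) * duration ** 5 / amplitude ** 2

t = np.linspace(0, 1, 500)
smooth = np.sin(np.pi * t / 2)                  # smooth reach
jerky = smooth + 0.01 * np.sin(40 * np.pi * t)  # same reach with tremor
print(normalized_jerk(smooth, t[1] - t[0]) < normalized_jerk(jerky, t[1] - t[0]))  # True
```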

Pub.: 13 Jul '14, Pinned: 06 May '17

Clinical Application of a Humanoid Robot in Pediatric Cancer Interventions

Abstract: This paper propounds a novel approach, exploring the effect of utilizing a social humanoid robot as a therapy-assistive tool in dealing with pediatric distress. The study aims to create a friendship bond between a humanoid robot and young oncology patients to alleviate their pain and distress. Eleven children, ages 7–12, diagnosed with cancer were randomly assigned to two groups at two specialized hospitals in Tehran: a social robot-assisted therapy (SRAT) group with six children and a psychotherapy (control) group with five children. A NAO robot was programmed and employed as a robotic assistant to a psychologist in the SRAT group to perform various scenarios in eight intervention sessions. These sessions were aimed at instructing the children about their affliction and its symptoms, sympathizing with them, and providing a space for them to express their fears and worries. The same treatment was conducted by the psychologist alone in the control group. The children's anxiety, anger, and depression were measured before and after the treatment with three standard questionnaires obtained from the literature (March et al., in J Am Acad Child Adolesc Psychiatry 36:554–565, 1997; Nelson and Finch, in Children's inventory of anger, 2000; Kovacs, in Psychopharmacol Bull 21:995–1124, 1985). The results of descriptive statistics and MANOVA indicated that the children's stress, depression, and anger were considerably alleviated during SRAT treatment, and significant differences were observed between the two groups. Considering the children's positive reactions to the robot assistant's presence at the intervention sessions, and the numerical results, one can anticipate that utilizing a humanoid robot with different communication abilities can be beneficial, both in improving the efficacy of interventions and in encouraging children to be more interactive and cooperative in their treatment sessions. In addition, the humanoid robot was significantly useful in teaching children about their affliction and instructing them in techniques such as relaxation and desensitization, helping them confront and manage their distress themselves and take control of their situation.
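
For readers unfamiliar with the analysis, a MANOVA of the kind reported (several dependent measures, one grouping factor) can be run in Python with statsmodels, as sketched below. The data frame values are placeholders, not the study's dataset.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical post-treatment scores for the two groups (placeholder data).
df = pd.DataFrame({
    "group":      ["SRAT"] * 6 + ["control"] * 5,
    "anxiety":    [12, 10, 9, 11, 8, 10, 15, 16, 14, 17, 15],
    "anger":      [20, 18, 17, 19, 16, 18, 24, 25, 23, 26, 24],
    "depression": [7, 6, 5, 6, 4, 5, 9, 10, 8, 11, 9],
})

# Three dependent variables, one grouping factor.
fit = MANOVA.from_formula("anxiety + anger + depression ~ group", data=df)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```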

Pub.: 18 Mar '15, Pinned: 06 May '17

Development of haptic based piezoresistive artificial fingertip: Toward efficient tactile sensing systems for humanoids

Abstract: Haptic sensors are essential devices that facilitate human-like sensing systems such as implantable medical devices and humanoid robots. The availability of conducting thin films with haptic properties could lead to the development of tactile sensing systems that stretch reversibly, sense pressure (not just touch), and integrate with collapsible devices. In this study, a nanocomposite-based hemispherical artificial fingertip was fabricated to enhance the tactile sensing systems of humanoid robots. To validate the hypothesis, the proposed method was used in a robot-like finger system to classify ripe and unripe tomatoes by recording the metabolic growth of the tomato as a function of resistivity change during a controlled indentation force. Prior to fabrication, finite element modeling (FEM) was used to obtain the stress distribution and failure point of the tomato under different external loads. The extracted computational information was then utilized to design and fabricate the nanocomposite-based artificial fingertip for the tomato maturity analysis. The obtained results demonstrate that the fabricated conformable and scalable artificial fingertip shows different electrical properties for ripe and unripe tomatoes. The artificial fingertip is compatible with the development of brain-like systems for artificial skin, providing a periodic response under an applied load.
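
The classification idea can be sketched simply: under a fixed indentation force, a riper (softer) tomato deforms more, producing a larger piezoresistive change in the fingertip. The threshold and resistance readings in this sketch are hypothetical, not values from the paper.

```python
# Minimal sketch of ripeness classification from a piezoresistive
# fingertip reading under a controlled indentation force.
# Threshold and readings are hypothetical, illustrative values.

RIPENESS_THRESHOLD = 0.15  # assumed relative resistance change

def relative_resistance_change(r_loaded, r_unloaded):
    """Fractional change in fingertip resistance under the controlled load."""
    return abs(r_loaded - r_unloaded) / r_unloaded

def classify(r_loaded, r_unloaded):
    dr = relative_resistance_change(r_loaded, r_unloaded)
    return "ripe" if dr > RIPENESS_THRESHOLD else "unripe"

print(classify(r_loaded=1180.0, r_unloaded=1000.0))  # -> "ripe" (18% change)
```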

Pub.: 07 Apr '17, Pinned: 06 May '17

Balancing While Executing Competing Reaching Tasks: An Attractor-Based Whole-Body Motion Control System Using Gravitational Stiffness

Abstract: (International Journal of Humanoid Robotics, Volume 13, Issue 1, March 2016.) Whole-body control (WBC) systems represent a wide range of complex movement skills in the form of low-dimensional task descriptors which are projected onto the robot's actuator space. These methods make it possible to exploit the full capabilities of the entire body of redundant, floating-base robots in compliant multi-contact interaction with the environment, and to execute any single task as well as multiple tasks simultaneously. This paper presents an attractor-based whole-body motion control (WBMC) system, developed for torque control of floating-base robots. The attractors are defined as atomic control modules that work in parallel with, and independently from, the other attractors, generating joint torques that aim to modify the state of the robot so that the error in a target condition is minimized. Balance of the robot is guaranteed by the simultaneous activation of an attractor to the minimum-effort configuration and of an attractor to zero joint momentum. A novel formulation of the minimum effort is proposed, based on the assumption that whenever the gravitational stiffness is maximized, the effort is consequently minimized. The effectiveness of the WBMC was demonstrated with the COMAN humanoid robot in physical simulation, in scenarios where multiple conflicting tasks had to be accomplished simultaneously.
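
The core attractor idea, each module independently generating a torque that pulls the robot toward its own target condition with the contributions summed, can be sketched as follows. The simple proportional form and the gains are assumptions; the paper's attractors (minimum effort, zero joint momentum, task attractors) are richer.

```python
import numpy as np

# Minimal sketch: each attractor contributes a joint torque proportional
# to the error toward its own target; contributions are simply summed.

def attractor_torque(q, q_target, gain):
    """Proportional attractor in joint space: tau = K * (q_target - q)."""
    return gain * (q_target - q)

def whole_body_torque(q, attractors):
    """Sum the torques of all active attractors, each with its own target/gain."""
    return sum(attractor_torque(q, q_t, k) for q_t, k in attractors)

q = np.zeros(7)  # current joint configuration of a toy 7-DoF robot
attractors = [
    (np.full(7, 0.3), 5.0),  # e.g., a posture / minimum-effort-like target
    (np.full(7, 0.0), 1.0),  # e.g., a damping-like zero-motion target
]
print(whole_body_torque(q, attractors))
```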

Pub.: 01 Apr '16, Pinned: 03 May '17

Learning assistive strategies for exoskeleton robots from user-robot physical interaction

Abstract: Social demand for exoskeleton robots that physically assist humans has been increasing in various situations due to the demographic trend of aging populations. For exoskeleton robots, the assistive strategy is a key ingredient. Since interactions between users and exoskeleton robots are bidirectional, the assistive strategy design problem is complex and challenging. In this paper, we explore a data-driven learning approach for designing assistive strategies for exoskeletons from user-robot physical interaction. We formulate the learning of assistive strategies as a policy search problem and exploit a data-efficient model-based reinforcement learning framework. Instead of explicitly providing the desired trajectories in the cost function, our cost function only considers the user's muscular effort, measured by electromyography (EMG) signals, to learn the assistive strategies. The key underlying assumption is that the user is instructed to perform the task by his/her own intended movements. Since the EMGs are observed when the intended movements are achieved by the user's own muscle efforts rather than the robot's assistance, EMGs can be interpreted as the "cost" of the current assistance. We applied our method to a 1-DoF exoskeleton robot and conducted a series of experiments with human subjects. Our experimental results demonstrated that our method learned proper assistive strategies that explicitly considered the bidirectional interactions between a user and a robot with only 60 seconds of interaction. We also showed that our proposed method can cope with changes in both the robot dynamics and movement trajectories.
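
The paper's key ingredient, a cost defined only by the user's remaining muscular effort, can be sketched as an EMG-based cost function like the one below. The rectified moving-average envelope and the sum-of-squares form are assumptions, not the authors' exact formulation.

```python
import numpy as np

# Minimal sketch: cost of an assistive strategy = magnitude of the user's
# (rectified, smoothed) EMG over a trial. Lower cost = better assistance.

def emg_effort_cost(emg, window=50):
    """emg: (n_samples, n_channels) raw EMG recording for one trial."""
    rectified = np.abs(emg)
    kernel = np.ones(window) / window  # moving-average envelope (assumed)
    envelope = np.apply_along_axis(
        lambda ch: np.convolve(ch, kernel, mode="same"), 0, rectified)
    return float(np.sum(envelope ** 2))  # total residual muscular effort

# A policy search would pick assistance parameters minimizing this cost.
rng = np.random.default_rng(1)
trial = rng.normal(scale=0.3, size=(1000, 2))  # hypothetical 2-channel EMG
print(emg_effort_cost(trial))
```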

Pub.: 10 Apr '17, Pinned: 29 Apr '17