Hi! I currently work on my PhD in Applied Cognitive Psychology.
Robotic systems significantly contribute to a better understanding of the human brain, and vice versa.
The mind-body dichotomy is an ancient subject of debate, running from Plato through Descartes and Kant. But we cannot separate the mind from the body without a thorough understanding of how the brain really works. And although the brain has been studied for several centuries now, it still holds mysteries at the molecular level as well as at the cognitive and behavioral levels.
To fully understand the mechanisms of the brain, we must at the same time learn about the functions of the body. This view is also supported by robotics: it seems that for an artificial system – a robot – to develop intelligent behavior, it is essential to have a compliant body with a musculoskeletal structure. Neurocognitive robotics, or the science of embodiment, is an interdisciplinary field that combines neurocognitive science, robotics, and artificial intelligence. On one hand, this science and technology of embodied autonomous neural systems allows us to better understand the human brain. The interactions between neurons and neural networks in our brain generate cognitive processes such as perception, memory, language, and reasoning. By modeling human cognition and programming it into robots that act in a non-simulated environment (neurocognitive robots are required to live in the real world because they have to imitate humans, who gain their knowledge and representations of the world by interacting with their environments), we can gain valuable knowledge about the functions and malfunctions of the human brain. On the other hand, the robot becomes a mechanical agent (robot, prosthetic, wearable machine) that can actively help elderly, disabled, and even healthy people perform day-to-day activities.
Follow this pinboard to find out more:
Abstract: We have been advocating cognitive developmental robotics to obtain new insight into the development of human cognitive functions by utilizing synthetic and constructive approaches. Among the different emotional functions, empathy is difficult to model, but essential for robots to be social agents in our society. In my previous review on artificial empathy (Asada, 2014b), I proposed a conceptual model for empathy development, ranging from emotional contagion to envy/schadenfreude, along with self/other differentiation. In this article, the focus is on two aspects of this developmental process: emotional contagion in relation to motor mimicry, and the cognitive/affective aspects of empathy. It begins with a summary of the previous review (Asada, 2014b) and an introduction to affective developmental robotics as a part of cognitive developmental robotics focusing on the affective aspects. This is followed by a review and discussion of several approaches to the two focused aspects of affective developmental robotics. Finally, future issues involved in the development of a more authentic form of artificial empathy are discussed.
Pub.: 17 Dec '14, Pinned: 30 May '17
Abstract: The current work addresses a virtual environment with self-replicating agents whose decisions are based on a form of "somatic computation" (soma - body) in which basic emotional responses, drawn in parallel with actual living organisms, are introduced as a way to provide the agents with greater reflexive abilities. The work provides a contribution to the field of Artificial Intelligence (AI) and Artificial Life (ALife) in connection to a neurobiology-based cognitive framework for artificial systems and virtual environment simulations. The performance of the agents capable of emotional responses is compared with that of self-replicating automata, and the implications of research on emotions and AI, in connection to both virtual agents and robots, are addressed with regard to possible future directions and applications.
Pub.: 09 Jan '14, Pinned: 30 May '17
Abstract: Emotion estimation in music listening is confronting challenges to capture the emotion variation of listeners. Recent years have witnessed attempts to exploit multimodality, fusing information from musical contents and physiological signals captured from listeners to improve the performance of emotion recognition. In this paper, we present a study of decision-level fusion of signals of electroencephalogram (EEG), a tool to capture brainwaves at a high temporal resolution, and musical features in recognizing the time-varying binary classes of arousal and valence. Our empirical results showed that the fusion could outperform emotion recognition using the EEG modality alone, which suffered from inter-subject variability; this suggests the promise of multimodal fusion in improving the accuracy of music-emotion recognition.
Pub.: 30 Nov '16, Pinned: 30 May '17
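A minimal sketch of the decision-level fusion described above, assuming one pre-trained classifier per modality and an equal-weight average of their class posteriors (the feature arrays, classifier choice, and 50/50 weighting are illustrative assumptions, not details from the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative random data: 200 time windows, binary arousal labels.
rng = np.random.default_rng(0)
eeg_features = rng.normal(size=(200, 32))    # e.g., band power per EEG channel
music_features = rng.normal(size=(200, 20))  # e.g., timbre/tempo descriptors
labels = rng.integers(0, 2, size=200)        # high vs. low arousal

# One classifier per modality, trained independently.
eeg_clf = LogisticRegression(max_iter=1000).fit(eeg_features, labels)
music_clf = LogisticRegression(max_iter=1000).fit(music_features, labels)

# Decision-level fusion: average the posteriors, then threshold.
p_eeg = eeg_clf.predict_proba(eeg_features)[:, 1]
p_music = music_clf.predict_proba(music_features)[:, 1]
fused_prediction = ((p_eeg + p_music) / 2 >= 0.5).astype(int)
```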
Abstract: Music emotion recognition (MER) is usually regarded as a multi-label tagging task, in which each segment of music can inspire specific emotion tags. Most researchers extract acoustic features from music and explore the relations between these features and their corresponding emotion tags. Given the inconsistency of the emotions that the same music segment inspires in different listeners, identifying the key acoustic features that actually affect emotions is a challenging task. In this paper, we propose a novel MER method that applies a deep convolutional neural network (CNN) to music spectrograms, which contain both the original time and frequency domain information. With the proposed method, no additional effort is required to extract specific features; this is left to the training procedure of the CNN model. Experiments are conducted on the standard CAL500 and CAL500exp datasets. Results show that, for both datasets, the proposed method outperforms state-of-the-art methods.
Pub.: 19 Apr '17, Pinned: 30 May '17
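The abstract leaves feature extraction entirely to the CNN; a toy version of such a network over spectrogram patches might look as follows (the layer sizes and tag count are placeholders, not the paper's architecture):

```python
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Toy CNN over (frequency x time) spectrograms with multi-label tags."""
    def __init__(self, n_tags: int = 18):          # tag count is a placeholder
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(32 * 4 * 4, n_tags)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = SpectrogramCNN()
spec = torch.randn(8, 1, 128, 128)               # batch of spectrogram patches
target = torch.randint(0, 2, (8, 18)).float()    # multi-label emotion tags
loss = nn.BCEWithLogitsLoss()(model(spec), target)
```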
Abstract: In computerized technology, artificial speech is becoming increasingly important, and is already used in ATMs, online gaming and healthcare contexts. However, today's artificial speech typically sounds monotonous, a main reason for this being the lack of meaningful prosody. One particularly important function of prosody is to convey different emotions. This is because successful encoding and decoding of emotions is vital for effective social cognition, which is increasingly recognized in human-computer interaction contexts. Current attempts to artificially synthesize emotional prosody are much improved relative to early attempts, but there remains much work to be done due to methodological problems, lack of agreed acoustic correlates, and lack of theoretical grounding. If the addition of synthetic emotional prosody is not of sufficient quality, it may risk alienating users instead of enhancing their experience. So the value of embedding emotion cues in artificial speech may ultimately depend on the quality of the synthetic emotional prosody. However, early evidence on reactions to synthesized non-verbal cues in the facial modality bodes well. Attempts to implement the recognition of emotional prosody into artificial applications and interfaces have perhaps been met with greater success, but the ultimate test of synthetic emotional prosody will be to critically compare how people react to synthetic emotional prosody vs. natural emotional prosody, at the behavioral, socio-cognitive and neural levels.
Pub.: 01 Dec '15, Pinned: 30 May '17
Abstract: Described is a system for object detection from dynamic visual imagery. Dynamic visual input obtained from a stationary sensor is processed by a surprise-based module. The surprise-based module detects a stationary object in a scene to generate surprise scores. The dynamic visual input is also processed by a motion-based saliency module which detects foreground in the scene to generate motion scores. The surprise scores and motion scores are fused into a single score, and the single score is used to determine the presence of an object of interest.
Pub.: 03 Nov '15, Pinned: 30 May '17
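The patent abstract does not specify the fusion rule; one plausible reading is a convex combination of normalized score maps followed by a threshold, as in this sketch (the weight and threshold values are assumptions):

```python
import numpy as np

def fuse_scores(surprise: np.ndarray, motion: np.ndarray,
                w_surprise: float = 0.5, threshold: float = 0.6) -> np.ndarray:
    """Combine per-pixel surprise and motion scores into one detection map."""
    def normalize(a: np.ndarray) -> np.ndarray:
        return (a - a.min()) / (a.max() - a.min() + 1e-9)

    fused = w_surprise * normalize(surprise) + (1 - w_surprise) * normalize(motion)
    return fused >= threshold            # boolean map: object of interest?

# Example on random score maps standing in for the two modules' outputs.
detections = fuse_scores(np.random.rand(240, 320), np.random.rand(240, 320))
```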
Abstract: In this paper, we propose a limbic-based artificial emotional neural network (LiAENN) for pattern recognition problems. LiAENN is a novel computational neural model of the emotional brain that models emotional situations such as anxiety and confidence in the learning process, as well as the short paths, forgetting processes, and inhibitory mechanisms of the emotional brain. In the model, the learning weights are adjusted by the proposed anxious confident decayed brain emotional learning (ACDBEL) rules. In engineering applications, LiAENN is utilized in face detection and emotion recognition. According to the comparative results on the ORL and Yale datasets, LiAENN shows higher accuracy than other applied emotional networks such as brain emotional learning (BEL) and emotional back propagation (EmBP) based networks.
Pub.: 01 Aug '14, Pinned: 30 May '17
Abstract: Artificial companion agents are defined as hardware or software entities designed to provide companionship to a person. The senior population faces a particular demand for companionship. Artificial companion agents have been demonstrated to be useful in therapy, offering emotional companionship and facilitating socialization. However, there is a lack of empirical studies on what artificial agents should do and how they can communicate better with human beings. To address these functional research problems, we attempt to establish a model to guide artificial companion designers in meeting the emotional needs of the elderly by fulfilling absent roles in their social interactions. We call this the Role Fulfilling Model. It uses role as a key concept to analyse the elderly's demands for functionality, from an emotional perspective, in artificial companion agent designs and technologies. To evaluate the effectiveness of this model, we propose a serious game platform named Happily Aging in Place. This game will help us involve senior users at a large scale through crowdsourcing to test our model and hypothesis. To improve the emotional communication between artificial companion agents and users, this book draft also addresses an important but largely overlooked aspect of affective computing: how to enable companion agents to express mixed emotions with facial expressions. Furthermore, does individual heterogeneity affect how different users perceive the same facial expressions? Some preliminary results about gender differences have been found. The perception of facial expressions across age groups and cultural backgrounds will be examined in future studies.
Pub.: 21 Jan '16, Pinned: 30 May '17
Abstract: Neural Networks, available online 10 December 2016. Authors: Jihun Kim, Jonghong Kim, Gil-Jin Jang, Minho Lee. Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a big dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate network weights between output and hidden layers in a single iteration and thus, can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance.
Pub.: 17 Dec '16, Pinned: 30 May '17
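The stacked-ELM and modified backpropagation details are not reproducible from the abstract alone, but the core ELM step it relies on, computing output weights in closed form in a single iteration, can be sketched (dimensions and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))                       # e.g., flattened patches
T = rng.integers(0, 2, size=(500, 1)).astype(float)  # training targets

# ELM: random, fixed hidden layer; only the output weights are learned.
n_hidden = 128
W = rng.normal(size=(64, n_hidden))   # input weights, never trained
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                # hidden-layer activations

# The "single iteration": solve H @ beta = T by least squares.
beta, *_ = np.linalg.lstsq(H, T, rcond=None)

predictions = np.tanh(X @ W + b) @ beta
```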
Abstract: In our continuous attempts to model natural intelligence and emotions in machine learning, many research works emerge with different methods that are often driven by engineering concerns and share the common goal of modeling human perception in machines. This paper aims to go further in that direction by investigating the integration of emotion at the structural level of cognitive systems using the novel emotional DuoNeural Network (DuoNN). This network has hidden-layer DuoNeurons, each of which has two embedded neurons: a dorsal neuron and a ventral neuron, for cognitive and emotional data processing, respectively. When input visual stimuli are presented to the DuoNN, the dorsal cognitive neurons process local features while the ventral emotional neurons process the entire pattern. We present the computational model and the learning algorithm of the DuoNN, the parallel streaming method for the cognitive and emotional input information, and a comparison between the DuoNN and a recently developed emotional neural network. Experimental results show that the DuoNN architecture, configuration, and additional emotional information processing yield higher recognition rates and faster learning and decision making.
Pub.: 03 Aug '10, Pinned: 30 May '17
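A structural sketch of the DuoNeuron idea described above, assuming each hidden unit simply sums its two streams (the combination rule, patch layout, and activation are assumptions; the paper's learning algorithm is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def duo_layer(x: np.ndarray, w_dorsal: list, w_ventral: list,
              patches: list) -> np.ndarray:
    """One hidden layer of DuoNeurons: the dorsal neuron sees a local
    patch of the input, the ventral neuron sees the entire pattern."""
    out = []
    for wd, wv, sl in zip(w_dorsal, w_ventral, patches):
        dorsal = np.tanh(x[sl] @ wd)    # local (cognitive) stream
        ventral = np.tanh(x @ wv)       # global (emotional) stream
        out.append(dorsal + ventral)    # combination rule assumed
    return np.array(out)

x = rng.normal(size=64)
patches = [slice(i * 16, (i + 1) * 16) for i in range(4)]
w_d = [rng.normal(size=16) for _ in range(4)]
w_v = [rng.normal(size=64) for _ in range(4)]
hidden = duo_layer(x, w_d, w_v, patches)
```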
Abstract: Systems and methods for reducing the computational load of processing measurements of affective response of a user to content. A content emotional response analyzer (content ERA) receives a segment of content, analyzes it, and outputs an indication regarding whether a value related to a predicted emotional response to the segment reaches a predetermined threshold. Based on the indication, a controller selects a processing level, from among at least first and second processing levels, for a processor to process measurements of affective response. The first level may be selected when the value does not reach the predetermined threshold, while the second level may be selected when the value reaches it. The processor is configured to utilize significantly fewer computation cycles to process data operating at the first processing level, compared to the number of computation cycles it utilizes to process data operating at the second processing level.
Pub.: 16 Jun '15, Pinned: 30 May '17
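The controller logic the patent describes reduces to a threshold test on the predicted emotional response; a minimal sketch (the threshold value and function name are illustrative):

```python
def select_processing_level(predicted_response: float,
                            threshold: float = 0.7) -> int:
    """Pick a processing level from the content ERA's prediction.
    Level 1 uses far fewer computation cycles; level 2 is full analysis."""
    return 2 if predicted_response >= threshold else 1

# A segment predicted to evoke a strong response gets full processing.
assert select_processing_level(0.9) == 2
assert select_processing_level(0.3) == 1
```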
Abstract: Pupil diameter (PD) has been suggested as a reliable parameter for identifying an individual's emotional state. In this paper, we introduce a machine learning technique to detect and differentiate between positive and negative emotions. We presented 30 participants with positive and negative sound stimuli and recorded pupillary responses. The results showed a significant increase in pupil dilation during the processing of negative and positive sound stimuli, with a greater increase for negative stimuli. We also found a more sustained dilation at the end of the trial for negative compared to positive stimuli, which was utilized to differentiate between positive and negative emotions using a machine learning approach that gave an accuracy of 96.5%, with a sensitivity of 97.93% and a specificity of 98%. The obtained results were validated using another dataset, designed for a different study, which was recorded while 30 participants processed word pairs with positive and negative emotions.
Pub.: 07 Jan '16, Pinned: 30 May '17
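A hedged sketch of the classification step, using the two pupillary cues the abstract highlights (peak dilation and sustained late-trial dilation) as features; the classifier type and the random data here are placeholders, not the authors' pipeline:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Illustrative per-trial features: [peak_dilation, late_trial_dilation].
n_trials = 60
features = rng.normal(size=(n_trials, 2))
labels = rng.integers(0, 2, size=n_trials)     # 0 = positive, 1 = negative

clf = SVC(kernel="rbf")
scores = cross_val_score(clf, features, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```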
Abstract: The development of robots that learn from experience is a relentless challenge confronting artificial intelligence today. This paper describes a robot learning method which enables a mobile robot to simultaneously acquire the ability to avoid objects, follow walls, seek goals and control its velocity as a result of interacting with the environment without human assistance. The robot acquires these behaviors by learning how fast it should move along predefined trajectories with respect to the current state of the input vector. This enables the robot to perform object avoidance, wall following and goal seeking behaviors by choosing to follow fast trajectories near the forward direction, the closest object, or the goal location, respectively. Learning trajectory velocities can be done relatively quickly because the required knowledge can be obtained from the robot's interactions with the environment without incurring the credit assignment problem. We provide experimental results to verify our robot learning method by using a mobile robot to simultaneously acquire all three behaviors.
Pub.: 01 Sep '00, Pinned: 09 May '17
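A hedged sketch of the core idea, a learned mapping from a discretized input state to a velocity for each predefined trajectory, with a purely local update (which is why no credit assignment is needed); the table sizes, update rule, and learning rate are assumptions, not the paper's:

```python
import numpy as np

n_states, n_trajectories = 100, 7
velocity = np.full((n_states, n_trajectories), 0.1)   # initial slow speeds

def update(state: int, trajectory: int, safe: bool,
           lr: float = 0.2, v_max: float = 1.0) -> None:
    """Speed up trajectories that proved safe, slow down those that didn't."""
    target = v_max if safe else 0.0
    velocity[state, trajectory] += lr * (target - velocity[state, trajectory])

# Behaviours then just pick which trajectory to follow at the learned speed:
# object avoidance -> the fastest trajectory near the forward direction,
# wall following   -> the trajectory nearest the closest object, and so on.
update(state=42, trajectory=3, safe=True)
```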
Abstract: The objective of a socially assistive robot is to create a close and effective interaction with a human user for the purpose of giving assistance. In particular, the social interaction, guidance, and support that a socially assistive robot can provide a person can be very beneficial to patient-centered care. However, there are a number of research issues that need to be addressed in order to design such robots. This paper focuses on developing effective emotion-based assistive behavior for a socially assistive robot intended for natural human-robot interaction (HRI) scenarios with explicit social and assistive task functionalities. In particular, in this paper, a unique emotional behavior module is presented and implemented in a learning-based control architecture for assistive HRI. The module is utilized to determine the appropriate emotions of the robot to display, as motivated by the well-being of the person, during assistive task-driven interactions in order to elicit suitable actions from users to accomplish a given person-centered assistive task. A novel online updating technique is used in order to allow the emotional model to adapt to new people and scenarios. Experiments presented show the effectiveness of utilizing robotic emotional assistive behavior during HRI scenarios.
Pub.: 10 Nov '15, Pinned: 09 May '17
Abstract: Robot-assisted minimally invasive surgery (RMIS) holds great promise for improving the accuracy and dexterity of a surgeon and minimizing trauma to the patient. However, widespread clinical success with RMIS has been marginal. It is hypothesized that the lack of haptic (force and tactile) feedback presented to the surgeon is a limiting factor. This review explains the technical challenges of creating haptic feedback for robot-assisted surgery and provides recent results that evaluate the effectiveness of haptic feedback in mock surgical tasks. Haptic feedback systems for RMIS are still under development and evaluation. Most provide only force feedback, with limited fidelity. The major challenge at this time is sensing forces applied to the patient. A few tactile feedback systems for RMIS have been created, but their practicality for clinical implementation needs to be shown. It is particularly difficult to sense and display spatially distributed tactile information. The cost-benefit ratio for haptic feedback in RMIS has not been established. The designs of existing commercial RMIS systems are not conducive to force feedback, and creative solutions are needed to create compelling tactile feedback systems. Surgeons, engineers, and neuroscientists should work together to develop effective solutions for haptic feedback in RMIS.
Pub.: 06 Dec '08, Pinned: 09 May '17
Abstract: This article describes a pilot study in which a novel experimental setup, involving an autonomous humanoid robot, KASPAR, participating in a collaborative, dyadic video game, was implemented and tested with children with autism, all of whom had impairments in playing socially and communicating with others. The children alternated between playing the collaborative video game with a neurotypical adult and playing the same game with the humanoid robot, being exposed to each condition twice. The equipment and experimental setup were designed to observe whether the children would engage in more collaborative behaviours while playing the video game and interacting with the adult than while performing the same activities with the humanoid robot. The article describes the development of the experimental setup and its first evaluation in a small-scale exploratory pilot study. The purpose of the study was to gain experience with the operational limits of the robot as well as the dyadic video game, to determine what changes should be made to the systems, and to gain experience with analyzing the data from this study in order to conduct a more extensive evaluation in the future. Based on our observations of the children's experiences in playing the cooperative game, we determined that while the children enjoyed both playing the game and interacting with the robot, the game should be made simpler to play as well as more explicitly collaborative in its mechanics. Also, the robot should be more explicit in its speech as well as more structured in its interactions. Results show that the children found the activity to be more entertaining, appeared more engaged in playing, and displayed better collaborative behaviours with their partners (for the purposes of this article, 'partner' refers to the human/robotic agent which interacts with the children with autism; we are not using the term's other meanings that refer to specific relationships or emotional involvement between two individuals) in the second sessions of playing with human adults than during their first sessions. One way of explaining these findings is that the children's intermediary play session with the humanoid robot impacted their subsequent play session with the human adult. However, another, longer and more thorough study would have to be conducted in order to better interpret these findings. Furthermore, although the children with autism were more interested in and entertained by the robotic partner, the children showed more examples of collaborative play and cooperation while playing with the human adult.
Pub.: 11 Sep '13, Pinned: 09 May '17
Abstract: In this article we describe a human–robot interaction study, focusing on tactile aspects of interaction, in which children with autism interacted with the child-like humanoid robot KASPAR. KASPAR was equipped with touch sensors in order to be able to distinguish gentle from harsh touch, and to respond accordingly. The study investigated a novel scenario for robot-assisted play, with the goal to increase body awareness of children with autism spectrum condition (hereafter ASC) by teaching them how to identify human body parts, and to promote a triadic relationship between the child, the robot and the experimenter. Data obtained from the video analysis of the experimental sessions showed that children treated KASPAR as an object of shared attention with the experimenter, and performed more gentle touches on the robot along the sessions. The children also learned to identify body parts. The study showed the potential that teaching children with autism about body parts and appropriate physical interaction using a humanoid robot has, and highlighted the issues of scenario development, data collection and data analysis that will inform future studies.
Pub.: 24 Aug '14, Pinned: 09 May '17
Abstract: The work presented in this paper is part of our investigation in the ROBOSKIN project. The project aims to develop and demonstrate a range of new robot capabilities based on robot skin tactile feedback from large areas of the robot body. The main objective of the project is to develop cognitive mechanisms exploiting tactile feedback to improve human-robot interaction capabilities. The project also aims to investigate the possible use of this technology in robot-assisted play in the context of autism therapy. This article reports progress made in investigating tactile child-robot interactions in which children with autism interacted with the humanoid robot KASPAR equipped with the first prototype of skin patches, and introduces a new algorithm for tactile event recognition that will enhance the observational data analysis used in past studies.
Pub.: 19 Jan '12, Pinned: 09 May '17
Abstract: Background: Several pilot studies have evoked interest in robot-assisted therapy (RAT) in children with cerebral palsy (CP). Objective: To assess the effectiveness of RAT in children with CP through a single-blind randomized controlled trial. Methods: Sixteen children with CP were randomized into 2 groups. Eight children performed 5 conventional therapy sessions per week over 8 weeks (control group). Eight children completed 3 conventional therapy sessions and 2 robot-assisted sessions per week over 8 weeks (robotic group). For both groups, each therapy session lasted 45 minutes. Throughout each RAT session, the patient attempted to reach several targets consecutively with the REAPlan. The REAPlan is a distal effector robot that allows for displacements of the upper limb in the horizontal plane. A blinded assessment was performed before and after the intervention with respect to the International Classification of Functioning framework: body structure and function (upper limb kinematics, Box and Block test, Quality of Upper Extremity Skills Test, strength, and spasticity), activities (Abilhand-Kids, Pediatric Evaluation of Disability Inventory), and participation (Life Habits). Results: During each RAT session, patients performed 744 movements on average with the REAPlan. Among the variables assessed, the smoothness of movement (P < .01) and manual dexterity assessed by the Box and Block test (P = .04) improved significantly more in the robotic group than in the control group. Conclusions: This single-blind randomized controlled trial provides the first evidence that RAT is effective in children with CP. Future studies should investigate the long-term effects of this therapy.
Pub.: 13 Jul '14, Pinned: 06 May '17
Abstract: People with ASD (Autism Spectrum Disorders) have difficulty in managing interpersonal relationships and common social situations in daily life. A modular platform for Human Robot Interaction and Human Machine Interaction studies has been developed to manage and analyze therapeutic sessions in which subjects are guided by a psychologist through simulated social scenarios. This innovative therapeutic approach uses a humanoid robot called FACE, capable of expressing and conveying emotions and empathy. Using FACE as a social interlocutor, the psychologist can emulate real-life scenarios where the emotional state of the interlocutor is adaptively adjusted through a semi-closed-loop control algorithm which uses the ASD subject's inferred "affective" state as input. Preliminary results demonstrate that the platform is well accepted by ASD subjects and can consequently be used as a novel therapy for social skills training.
Pub.: 19 Jan '12, Pinned: 06 May '17
Abstract: This paper propounds a novel approach by exploring the effect of utilizing a social humanoid robot as a therapy-assistive tool in dealing with pediatric distress. The study aims to create a friendship bond between a humanoid robot and young oncology patients to alleviate their pain and distress. Eleven children, ages 7–12, diagnosed with cancer were randomly assigned into two groups: a social robot-assisted therapy (SRAT) group with six children and a psychotherapy group with five children at two specialized hospitals in Tehran. A NAO robot was programmed and employed as a robotic assistant to a psychologist in the SRAT group to perform various scenarios in eight intervention sessions. These sessions were aimed at instructing the children about their affliction and its symptoms, sympathizing with them, and providing a space for them to express their fears and worries. The same treatment was conducted by the psychologist alone in the control group. The children's anxiety, anger, and depression were measured with three standard questionnaires obtained from the literature before and after the treatment (March et al., in J Am Acad Child Adolesc Psychiatry 36:554–565, 1997; Nelson and Finch, in Children's inventory of anger, 2000; Kovacs, in Psychopharmacol Bull 21:995–1124, 1985). The results of descriptive statistics and MANOVA indicated that the children's stress, depression, and anger were considerably alleviated during SRAT treatment, and significant differences were observed between the two groups. Considering the positive reactions of the children to the robot assistant's presence at the intervention sessions, and the numerical results, one can anticipate that a humanoid robot with different communication abilities can be beneficial, both in increasing the efficacy of interventions and in encouraging children to be more interactive and cooperative in their treatment sessions. In addition, employing the humanoid robot was significantly useful in teaching children about their affliction and instructing them in techniques such as relaxation or desensitization, in order to help them confront and manage their distress themselves and take control of their situation.
Pub.: 18 Mar '15, Pinned: 06 May '17
Abstract: Haptic sensors are essential devices that facilitate human-like sensing systems such as implantable medical devices and humanoid robots. The availability of conducting thin films with haptic properties could lead to the development of tactile sensing systems that stretch reversibly, sense pressure (not just touch), and integrate with collapsible devices. In this study, a nanocomposite-based hemispherical artificial fingertip was fabricated to enhance the tactile sensing systems of humanoid robots. To validate the hypothesis, the proposed method was used in a robot-like finger system to classify ripe and unripe tomatoes by recording the metabolic growth of the tomato as a function of resistivity change during a controlled indentation force. Prior to fabrication, finite element modeling (FEM) was carried out to obtain the stress distribution and failure point of the tomato under different external loads. The extracted computational information was then utilized to design and fabricate the nanocomposite-based artificial fingertip and to examine the maturity of tomatoes. The obtained results demonstrate that the fabricated conformable and scalable artificial fingertip exhibits different electrical properties for ripe and unripe tomatoes. The artificial fingertip is compatible with the development of brain-like systems for artificial skin, producing a periodic response under applied load.
Pub.: 07 Apr '17, Pinned: 06 May '17
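The abstract implies that ripeness is read off the fingertip's resistance change under a controlled indentation; a toy decision rule under that assumption (the threshold ratio, units, and values are invented for illustration):

```python
import numpy as np

def classify_ripeness(resistance_trace: np.ndarray,
                      baseline: float, drop_ratio: float = 0.15) -> str:
    """Toy rule: a ripe (softer) tomato deforms the nanocomposite fingertip
    more, shifting its resistance further from baseline during indentation.
    The 15% ratio is an assumed placeholder, not a value from the paper."""
    change = abs(resistance_trace.min() - baseline) / baseline
    return "ripe" if change >= drop_ratio else "unripe"

trace = np.array([10.0, 9.6, 8.2, 8.0, 8.1])     # kOhm during indentation
print(classify_ripeness(trace, baseline=10.0))   # -> "ripe"
```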
Abstract: International Journal of Humanoid Robotics, Volume 13, Issue 01, March 2016. Whole-body control (WBC) systems represent a wide range of complex movement skills in the form of low-dimensional task descriptors which are projected onto the robot's actuator space. These methods make it possible to exploit the full capabilities of the entire body of redundant, floating-base robots in compliant multi-contact interaction with the environment, to execute any single task or multiple tasks simultaneously. This paper presents an attractor-based whole-body motion control (WBMC) system developed for torque control of floating-base robots. The attractors are defined as atomic control modules that work in parallel to, and independently from, the other attractors, generating joint torques that aim to modify the state of the robot so that the error in a target condition is minimized. Balance of the robot is guaranteed by the simultaneous activation of an attractor to the minimum-effort configuration and of an attractor to zero joint momentum. A novel formulation of the minimum effort is proposed, based on the assumption that whenever the gravitational stiffness is maximized, the effort is consequently minimized. The effectiveness of the WBMC was experimentally demonstrated with the COMAN humanoid robot in a physical simulation, in scenarios where multiple conflicting tasks had to be accomplished simultaneously.
Pub.: 01 Apr '16, Pinned: 03 May '17
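A hedged sketch of the attractor idea: each attractor is an independent module mapping state error to joint torques, and the controller simply sums their contributions (the proportional form, gains, and joint count are assumptions, not the paper's formulation):

```python
import numpy as np

def attractor(error: np.ndarray, gain: float) -> np.ndarray:
    """Torque contribution pulling the state toward the attractor's target."""
    return -gain * error

def wbmc_step(q: np.ndarray, q_min_effort: np.ndarray,
              joint_momentum: np.ndarray) -> np.ndarray:
    tau_effort = attractor(q - q_min_effort, gain=5.0)   # min-effort posture
    tau_moment = attractor(joint_momentum, gain=2.0)     # zero joint momentum
    return tau_effort + tau_moment   # parallel, independent contributions

tau = wbmc_step(q=np.zeros(23) + 0.1,
                q_min_effort=np.zeros(23),
                joint_momentum=np.full(23, 0.05))
```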
Abstract: The work presented in this paper was part of our investigation in the ROBOSKIN project. The project has developed new robot capabilities based on the tactile feedback provided by novel robotic skin, with the aim to provide cognitive mechanisms to improve human–robot interaction capabilities. This article presents two novel tactile play scenarios developed for robot-assisted play for children with autism. The play scenarios were developed against specific educational and therapeutic objectives that were discussed with teachers and therapists. These objectives were classified with reference to the ICF-CY, the International Classification of Functioning—version for Children and Youth. The article presents a detailed description of the play scenarios, and case study examples of their implementation in HRI studies with children with autism and the humanoid robot KASPAR.
Pub.: 04 Apr '14, Pinned: 03 May '17
Abstract: Humanoids have potential in the augmentation of rehabilitation programmes for children with cerebral palsy. To make a humanoid programme applicable and clinically compliant, correct interactive scenarios must be developed, and the development of Human Robot Interaction (HRI) scenarios is the main focus of this study. Through discussions with clinicians and therapists, four interactive scenarios were formulated. The researchers designed and developed the interactive scenarios around measurement items in the Gross Motor Function Measure (GMFM) that are suitable for the humanoid robot NAO to administer. Choregraphe, a multi-platform desktop application that allows programmers to create and compile robot animations, behaviors, and dialogs, was used in this study. The developed interactive scenarios underwent a face validity process; this method of validation is used to confirm through peer review that the content of the interactive scenarios is suitable for use with children with cerebral palsy. Thirty peer reviewers, a group of physiotherapists and occupational therapists, validated the suitability of the interactive scenarios. The results of the validation are explained in this paper.
Pub.: 28 Feb '17, Pinned: 03 May '17
Abstract: Social demand for exoskeleton robots that physically assist humans has been increasing in various situations due to the demographic trends of aging populations. With exoskeleton robots, an assistive strategy is a key ingredient. Since interactions between users and exoskeleton robots are bidirectional, the assistive strategy design problem is complex and challenging. In this paper, we explore a data-driven learning approach for designing assistive strategies for exoskeletons from user-robot physical interaction. We formulate the learning problem of assistive strategies as a policy search problem and exploit a data-efficient model-based reinforcement learning framework. Instead of explicitly providing the desired trajectories in the cost function, our cost function only considers the user’s muscular effort measured by electromyography signals (EMGs) to learn the assistive strategies. The key underlying assumption is that the user is instructed to perform the task by his/her own intended movements. Since the EMGs are observed when the intended movements are achieved by the user’s own muscle efforts rather than the robot’s assistance, EMGs can be interpreted as the “cost” of the current assistance. We applied our method to a 1-DoF exoskeleton robot and conducted a series of experiments with human subjects. Our experimental results demonstrated that our method learned proper assistive strategies that explicitly considered the bidirectional interactions between a user and a robot with only 60 seconds of interaction. We also showed that our proposed method can cope with changes in both the robot dynamics and movement trajectories.
Pub.: 10 Apr '17, Pinned: 29 Apr '17
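The paper's central trick is using summed muscular effort (EMG) as the cost of the current assistance. This sketch keeps only that cost-shaping idea; the naive random search stands in for the paper's data-efficient model-based policy search, and the one-gain policy and simulated rollout are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def emg_cost(emg: np.ndarray) -> float:
    """The abstract's key idea: summed muscular effort is the cost."""
    return float(np.sum(emg ** 2))

def rollout(gain: float) -> float:
    """Placeholder for one user-robot interaction: residual user effort
    shrinks as the assistive gain approaches a (hidden) ideal value."""
    emg = rng.normal(loc=abs(gain - 0.7), scale=0.05, size=100)
    return emg_cost(emg)

candidates = np.linspace(0.0, 1.0, 21)
best_gain = min(candidates, key=rollout)
print(f"assistive gain with least user effort: {best_gain:.2f}")
```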
Abstract: In the study of theology relevant to contemporary advances in science and technology, the underpinnings with regard to religious and spiritual outcomes have to be considered. In the case of humanoids for the spiritual augmentation of children with various brain impairments, the religious implications for the children and their families require adequate support prior to the sessions. Hence, this paper provides a review from the perspective of a monotheistic religion, Islam: that is, perceptions of the use of robots for the spiritual augmentation of special-needs children within the context of the Islamic faith. This is important to teachers and researchers in anticipating better outcomes and in contradicting the debate on psychedelic consequences.
Pub.: 28 Feb '17, Pinned: 28 Apr '17
Abstract: Assistive technologies in the form of humanoids have gained mileage in the area of rehabilitation, in particular for children with various mental disabilities such as autism. The uses of humanoids in augmenting these children are numerous, yet social inclusiveness in the form of religious values, spirituality, and ethics has hardly been explored. In these new and ambiguous dimensions, evidence of inclusiveness from repeated observations and interviews, as well as secondary data analyses, formed the hybrid methodology for this research project. The findings revealed a positive influence of humanizing humanoids on social skill augmentation and on religious and spiritual enhancement. In attempting such a sensitive project, proper ethical procedures have to be in place because of the focus group. The implications of the findings are important for drafting relevant policies, not just for educating the children, but for improving their quality of life, enriching family well-being, and enhancing societal awareness of social inclusiveness.
Pub.: 28 Feb '17, Pinned: 28 Apr '17
Abstract: Interactions between humans and service robots are more natural when emotions can be synthesized in those robots. Cognitive appraisal theory of emotions provides a theoretical basis for designing artificial emotion generation systems for robots. Computational algorithms to implement the cognitive appraisal theory were proposed in this research. The algorithms were based on a probabilistic description of the world. The proposed model was applied to a sample of interactive tasks, and the robot's emotions during task execution can lead to a more positive human–robot interaction experience.
Pub.: 22 Apr '10, Pinned: 28 Apr '17
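A hedged sketch of appraisal-based emotion generation over a probabilistic world description: appraisal variables are computed from goal-success probabilities and then mapped to emotion intensities. The variable names and the OCC-style mapping are illustrative assumptions, not the paper's algorithm:

```python
def appraise(p_goal_success: float, expected_p: float) -> dict[str, float]:
    """Map a probabilistic appraisal of the situation to emotion intensities."""
    desirability = p_goal_success - expected_p    # better/worse than expected
    certainty = abs(p_goal_success - 0.5) * 2     # how sure the robot is
    return {
        "joy":      max(0.0, desirability) * certainty,
        "distress": max(0.0, -desirability) * certainty,
        "hope":     max(0.0, desirability) * (1 - certainty),
        "fear":     max(0.0, -desirability) * (1 - certainty),
    }

print(appraise(p_goal_success=0.9, expected_p=0.5))  # mostly joy
```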
Abstract: We present a model for anchoring categorical conceptual information which originates from physical perception and the web. The model is an extension of the anchoring framework which is used to create and maintain over time semantically grounded sensor information. Using the augmented anchoring framework that employs complex symbolic knowledge from a commonsense knowledge base, we attempt to ground and integrate symbolic and perceptual data that are available on the web. We introduce conceptual anchors which are representations of general, concrete conceptual terms. We show in an example scenario how conceptual anchors can be coherently integrated with perceptual anchors and commonsense information for the acquisition of novel concepts.
Pub.: 29 Sep '12, Pinned: 28 Apr '17
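A minimal sketch of the anchoring data structures the abstract implies: a perceptual anchor ties a symbol to sensor data over time, while a conceptual anchor represents a general concrete term and links commonsense knowledge to its perceptual instances. The field names are assumptions for illustration, not the framework's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class PerceptualAnchor:
    symbol: str                      # e.g., "cup-22" (a specific percept)
    percept: list                    # latest sensor feature vector
    timestamp: float                 # when the percept was last matched

@dataclass
class ConceptualAnchor:
    concept: str                     # e.g., "cup" (general, concrete term)
    commonsense: dict = field(default_factory=dict)   # e.g., web knowledge
    instances: list = field(default_factory=list)     # grounded percepts

cup = ConceptualAnchor("cup", {"usedFor": "drinking"})
cup.instances.append(PerceptualAnchor("cup-22", [0.1, 0.7], timestamp=3.2))
```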