
Autonomous human–robot proxemics: socially aware navigation based on interaction potential

Research paper by Ross Mead and Maja J. Matarić

Indexed on: 08 Jun '16
Published on: 07 Jun '16
Published in: Autonomous Robots



Abstract

To enable situated human–robot interaction (HRI), an autonomous robot must both understand and control proxemics—the social use of space—to employ natural communication mechanisms analogous to those used by humans. This work presents a computational framework of proxemics based on data-driven probabilistic models of how social signals (speech and gesture) are produced (by a human) and perceived (by a robot). The framework and models were implemented as autonomous proxemic behavior systems for sociable robots, including: (1) a sampling-based method for robot proxemic goal state estimation with respect to human–robot distance and orientation parameters, (2) a reactive proxemic controller for goal state realization, and (3) a cost-based trajectory planner for maximizing automated robot speech and gesture recognition rates along a path to the goal state. Evaluation results indicate that the goal state estimation and realization significantly improve upon past work in human–robot proxemics with respect to “interaction potential”—predicted automated speech and gesture recognition rates as the robot enters into and engages in face-to-face social encounters with a human user—illustrating their efficacy in supporting richer robot perception and autonomy in HRI.
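To make the first component concrete, below is a minimal Python sketch of sampling-based proxemic goal state estimation: candidate (distance, orientation) poses are drawn at random and the pose with the highest predicted "interaction potential" is kept. The Gaussian scoring function, its constants, and all names here are illustrative assumptions standing in for the paper's data-driven recognition-rate models, not the authors' implementation.

```python
import math
import random

def interaction_potential(distance, orientation):
    """Hypothetical stand-in for the paper's data-driven models: the
    predicted joint speech/gesture recognition rate for a candidate
    robot pose. Modeled here as a Gaussian peaked near a comfortable
    social distance (~1.5 m) and a face-to-face orientation (0 rad)."""
    return (math.exp(-((distance - 1.5) ** 2) / (2 * 0.4 ** 2))
            * math.exp(-(orientation ** 2) / (2 * (math.pi / 6) ** 2)))

def estimate_proxemic_goal(n_samples=1000, max_distance=4.0):
    """Sampling-based goal state estimation: draw candidate
    (distance, orientation) pairs and keep the one with the highest
    predicted interaction potential."""
    best_pose, best_score = None, -1.0
    for _ in range(n_samples):
        d = random.uniform(0.3, max_distance)      # human-robot distance (m)
        theta = random.uniform(-math.pi, math.pi)  # relative orientation (rad)
        score = interaction_potential(d, theta)
        if score > best_score:
            best_pose, best_score = (d, theta), score
    return best_pose, best_score

if __name__ == "__main__":
    (d, theta), score = estimate_proxemic_goal()
    print(f"goal: distance={d:.2f} m, "
          f"orientation={math.degrees(theta):.1f} deg, "
          f"predicted potential={score:.3f}")
```

In the paper's pipeline, a goal pose estimated this way would then be handed to the reactive proxemic controller for realization, with the cost-based planner shaping the path to it so that recognition rates stay high along the way.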