What does self-motion mean in robotics?

Gestures consist of movements of body parts and are a means of communication that conveys information or intent to an observer. They can therefore be used effectively in human-robot interaction, and in human-machine interaction in general, as a way for a robot or a machine to infer meaning. For people to use gestures intuitively and to understand gestures performed by robots, it is necessary to define mappings between gestures and their associated meanings: a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary specifies which robot gestures are deemed fitting for a particular meaning. Effective use of such vocabularies depends on gesture recognition, that is, the classification of body motion into discrete gesture classes by means of pattern recognition and machine learning.

This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, with a focus on hand and arm gestures. Attentional models for humanoid robots are developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. Building on the robot gesture vocabulary experiment, an evolutionary approach to refining robot gestures, based on interactive genetic algorithms, is introduced. A robust and well-performing gesture recognition algorithm based on dynamic time warping is developed; most importantly, it employs one-shot learning, meaning that it can be trained with a small number of samples and used in real-life scenarios, reducing the influence of environmental conditions and gesture properties. Finally, an approach for learning the relation between self-motion and pointing gestures is presented.
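
As a minimal illustration of the vocabulary idea above, a gesture vocabulary can be thought of as a mapping between gestures and meanings. The sketch below uses invented labels (none of these gestures come from the thesis's experiments) to show the two directions: a human vocabulary mapping observed gestures to meanings, and a robot vocabulary mapping meanings to the robot gestures judged fitting for them.

```python
# Illustrative sketch only: a gesture vocabulary as a plain mapping.
# All labels are hypothetical examples, not results from the thesis.

# human vocabulary: observed human gesture -> meaning it conveys
human_vocabulary = {
    "wave_hand": "greeting",
    "point_at_object": "select object",
    "palm_forward": "stop",
}

# robot vocabulary: meaning -> robot gesture deemed fitting by users
robot_vocabulary = {
    "greeting": "raise_arm_oscillate_wrist",
    "stop": "extend_arm_palm_forward",
}

def meaning_of(gesture: str) -> str:
    """Interpret an observed (already classified) human gesture."""
    return human_vocabulary.get(gesture, "unknown")
```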
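
The evolutionary refinement step can be sketched as an interactive genetic algorithm in which each candidate gesture is a vector of motion parameters (for example, joint-angle keyframes) and the fitness function is a human rating collected interactively rather than an analytic objective. The encoding, truncation selection, one-point crossover, and Gaussian mutation below are illustrative assumptions, not the exact operators used in the thesis.

```python
# Hedged sketch of interactive-genetic-algorithm gesture refinement.
# A gesture is encoded as a list of floats (e.g. joint-angle keyframes);
# rate(gesture) -> float stands in for a human rating the executed gesture.
import random

def refine_gestures(population, rate, generations=10, sigma=0.05):
    for _ in range(generations):
        # in practice ratings are cached: each call means a person
        # watches the robot perform the gesture and scores it
        ranked = sorted(population, key=rate, reverse=True)
        parents = ranked[:len(ranked) // 2]        # truncation selection
        children = []
        while len(parents) + len(children) < len(population):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))      # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([g + random.gauss(0.0, sigma) for g in child])
        population = parents + children
    return max(population, key=rate)
```

Because every fitness evaluation costs a participant's attention, interactive genetic algorithms of this kind typically use small populations and few generations.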
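
The recognition component can be illustrated as a nearest-template classifier under a dynamic time warping (DTW) distance: with one-shot learning, a single recorded example per gesture class serves as the template. This is a minimal sketch of that idea, assuming sequences of feature vectors such as tracked hand positions; the thesis's actual algorithm may use different features, local costs, and rejection thresholds.

```python
# Minimal sketch: one-shot gesture classification with DTW.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two sequences of feature vectors
    (shape: [time, features]), with Euclidean local cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

class OneShotDTWClassifier:
    """Stores a single template per gesture class ("one-shot" learning)
    and classifies a query by its nearest template under DTW."""
    def __init__(self):
        self.templates = {}  # class label -> template sequence

    def fit(self, label: str, sequence: np.ndarray):
        self.templates[label] = sequence

    def predict(self, sequence: np.ndarray) -> str:
        return min(self.templates,
                   key=lambda lbl: dtw_distance(self.templates[lbl], sequence))

# toy usage with synthetic 2-D trajectories
t = np.linspace(0, 1, 50)[:, None]
clf = OneShotDTWClassifier()
clf.fit("wave", np.hstack([t, np.sin(8 * t)]))   # oscillating trajectory
clf.fit("point", np.hstack([t, t]))              # straight-line trajectory
print(clf.predict(np.hstack([t, 0.9 * np.sin(8 * t)])))  # -> "wave"
```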
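
Finally, the relation between self-motion and pointing can be pictured as learning a map from the robot's own arm configuration to the direction in which it points. The sketch below fits such a map by ordinary least squares on synthetic data; the linear model, the four-joint arm, and all data shapes are assumptions made purely for illustration, not the thesis's actual learning method.

```python
# Hedged sketch: learn a joints -> pointing-direction map by least squares.
import numpy as np

rng = np.random.default_rng(0)
joints = rng.uniform(-1.0, 1.0, size=(200, 4))      # 4 joint angles per sample
true_W = rng.normal(size=(4, 3))                    # unknown ground-truth map
directions = joints @ true_W + rng.normal(scale=0.01, size=(200, 3))

# least-squares estimate of the map from self-motion to pointing direction
W, *_ = np.linalg.lstsq(joints, directions, rcond=None)

new_pose = rng.uniform(-1.0, 1.0, size=4)
predicted_direction = new_pose @ W                  # where is the arm pointing?
print(predicted_direction)
```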