Center for Robotics and Embedded Systems, University of Southern California, Viterbi School of Engineering

This dissertation presents a framework that enables robots to learn complex tasks from one or several demonstrations by a teacher, based on a set of available underlying robot capabilities. The framework, inspired by the approach people use when teaching each other through demonstration, employs an action-embedded representation of tasks, and has the robot learn by performing the task alongside the teacher during the demonstration.

Among humans, teaching skills or tasks is a complex process that relies on multiple means of interaction and learning, on the part of both the teacher and the learner. In robotics, however, skill teaching has largely been addressed using only one or very few of these modalities. This dissertation presents a novel approach that uses multiple modalities for instruction and learning, allowing a robot to learn representations of high-level, sequential tasks from instructive demonstrations, to generalize over multiple but sparse demonstrations, and to refine its knowledge by practicing under a teacher's supervision.

To enable this type of learning, the underlying robot control architecture should exhibit the following key properties: modularity, reusability of existing modules, robustness and real-time response, support for learning, and the ability to encode complex task representations. Behavior-based control (BBC) is an effective methodology for robot control that provides modularity and robust real-time properties, but is limited with respect to the other capabilities listed above. This dissertation presents a hierarchical abstract behavior architecture that extends standard BBC in that: i) it allows for the representation and execution of complex, hierarchically structured tasks within a behavior-based framework; ii) it enables reusability of behaviors through the use of abstract behaviors; and iii) it provides support for automatic generation of a behavior-based system.
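The hierarchical structure described above can be illustrated with a minimal sketch: primitive behaviors activate when their preconditions hold, while an abstract behavior groups child behaviors into a reusable, sequentially executed unit. All class and method names here are illustrative assumptions, not the dissertation's actual interfaces.

```python
# Hypothetical sketch of a hierarchical abstract behavior architecture.
# Names (Behavior, AbstractBehavior, step) are illustrative only.

class Behavior:
    """A primitive behavior: acts when its precondition holds."""
    def __init__(self, name, precondition, action):
        self.name = name
        self.precondition = precondition  # callable: state -> bool
        self.action = action              # callable: state -> new state

    def step(self, state):
        if self.precondition(state):
            return self.action(state)
        return state

class AbstractBehavior(Behavior):
    """Groups child behaviors into a reusable, hierarchical unit;
    here the children are simply stepped in sequence."""
    def __init__(self, name, children):
        self.name = name
        self.children = children

    def step(self, state):
        for child in self.children:
            state = child.step(state)
        return state

# Example: a "fetch" task composed from reusable primitives.
goto = Behavior("goto",
                lambda s: not s["at_object"],
                lambda s: {**s, "at_object": True})
pickup = Behavior("pickup",
                  lambda s: s["at_object"] and not s["holding"],
                  lambda s: {**s, "holding": True})
fetch = AbstractBehavior("fetch", [goto, pickup])

state = fetch.step({"at_object": False, "holding": False})
print(state)  # → {'at_object': True, 'holding': True}
```

Because `fetch` is itself a `Behavior`, it can in turn be nested inside a higher-level abstract behavior, which is the sense in which abstraction enables both hierarchy and reuse.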

The experimental validation and evaluation presented in the dissertation demonstrate that the proposed task representation, in conjunction with multiple instructional modalities, provides an effective approach for robot learning of high-level sequential tasks, based on a set of underlying capabilities (behaviors).
