Intelligent User Interface for Roombots

 

Abstract

The Roombots (RB) platform developed at the Biorobotics Laboratory is a modular robot platform designed to study both robotic reconfiguration and reconfigurable locomotion. Each RB module is composed of two connected “sphere-like” structures and has three rotational Degrees of Freedom (DOF): one located between the two sphere-like structures, and one along the diameter of each sphere-like structure. Active Connection Mechanisms (ACMs) can be installed on each face, letting the module connect to structured grid tiles and to other modules. The structured grid is designed such that one sphere-like structure of a module fits exactly on a grid tile. A single module can move anywhere on this grid one tile at a time, using its ACMs to latch onto the next tile before releasing the previous one. Beyond this, free locomotion off the grid and coordinated motion of structures built from more than one connected module are also possible. A major goal of the platform, which brings all of these concepts together, is the autonomous creation of furniture, with modules moving both inside and outside the structured grid environment.
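
As a rough illustration of the tile-by-tile locomotion described above, the following Python sketch models a single module that latches onto the next tile before releasing the previous one. All names (GridModule, step_to, and the ACM calls) are hypothetical and do not correspond to the actual RB control software.

```python
# Illustrative sketch only: names and classes are hypothetical, not the RB control API.
from dataclasses import dataclass

@dataclass
class GridModule:
    """A single RB module occupying one tile of the structured grid."""
    tile: tuple  # (row, col) of the tile the module is currently latched onto

    def step_to(self, next_tile):
        """Move one tile over: latch onto the next tile before releasing the previous one."""
        self._latch(next_tile)    # an ACM on a free face grabs the neighbouring tile
        self._release(self.tile)  # only then is the previous tile released
        self.tile = next_tile

    def _latch(self, tile):
        print(f"ACM latching onto tile {tile}")

    def _release(self, tile):
        print(f"ACM releasing tile {tile}")


# A single module walks across the grid one tile at a time.
module = GridModule(tile=(0, 0))
for target in [(0, 1), (1, 1), (1, 2)]:
    module.step_to(target)
```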
 
Up to now, RB modules have mainly been controlled by sending hardcoded motion command sequences, through a classical Graphical User Interface (GUI), or through an augmented display on a mobile tablet PC. The first method is restricted to developers. The second requires the user to focus on a computer, typically through a monitor and a mouse/keyboard pair, in order to command RB modules that are spatially located elsewhere. The last method requires the user to carry an additional device. To construct a User Interface (UI) for robots that coexist with humans, a more natural interface paradigm must be considered. One instance of such an interface lets the user control the robots with physical gestures and receive sensory feedback in the very space the robots occupy, without carrying additional devices.
 
Motivated by existing studies on natural control interfaces and the lack of their application to modular reconfigurable robots, we implemented and studied such an interface for RB. For simplicity, we limited our study to the structured grid. The user can select individual RB modules and move them to target locations by pointing at the modules and at the targets. The pointing gestures and the grid state are recognized by a dual depth-sensor setup. Pointing gestures trigger visual feedback on LEDs mounted on both the grid tiles and the RB modules, indicating where the user is pointing and which object is selected. In addition, the system state is displayed through the same feedback setup, further enhancing the user interface experience.
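
The two-step point-and-select interaction can be sketched as a small state machine: the first pointing gesture selects a module, the second designates its target tile. The code below is an illustrative sketch only; the class, events, and LED messages are hypothetical and are not taken from the actual implementation.

```python
# Illustrative sketch of the two-step point-and-select interaction; all names, events,
# and LED messages are hypothetical and not taken from the actual implementation.

class SelectionStateMachine:
    """First pointing gesture selects a module; the second designates its target tile."""

    def __init__(self):
        self.selected = None  # grid tile of the currently selected module, if any

    def on_point(self, tile, is_module):
        """Handle a recognized pointing gesture at the given grid tile."""
        if self.selected is None:
            if is_module:
                self.selected = tile
                return f"module at {tile} selected (module LED: selected colour)"
            return f"tile {tile} highlighted (tile LED: pointed colour)"
        source, self.selected = self.selected, None
        return f"move module from {source} to {tile} (LEDs confirm source and target)"


# Simulated gestures: point at a module first, then at an empty target tile.
fsm = SelectionStateMachine()
print(fsm.on_point((2, 3), is_module=True))
print(fsm.on_point((4, 1), is_module=False))
```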

Demonstration

Link to Demonstration Video

Files