Mobile control interface for modular robots

Abstract

The goal of this project was to create an augmented reality (AR) interface for modular robots called
Roombots. A 3D simulation of Roombots already exists, but it does not use the capabilities of
augmented reality to improve the user experience. We envision an application that allows the user to
watch the simulated Roombots superimposed on the real view from the camera, from any point of view,
simply by moving around the environment as if real Roombots were in the room. We considered several
possibilities and chose a SLAM-like algorithm because it relies on no sensors or external beacons
other than the camera. I first began to implement such a tool myself and finally decided to adapt an
augmented reality software from Oxford called PTAM (Parallel Tracking and Mapping) to match our
needs.

SLAM and PTAM

SLAM (Simultaneous Localization and Mapping) refers to a family of algorithms that build a map of the environment while localizing the video capture device within it. PTAM (Parallel Tracking and Mapping) is a piece of software from Oxford that works like SLAM but performs the tracking and the mapping in separate threads.
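
This split between a fast tracking thread and a slower mapping thread is the heart of PTAM's design: tracking must keep up with the camera at frame rate, while mapping can take its time. The sketch below shows the two-thread structure in a heavily simplified form; the types and function names are our own illustration, not PTAM's actual classes.

    #include <atomic>
    #include <chrono>
    #include <functional>
    #include <mutex>
    #include <thread>
    #include <vector>

    struct MapPoint { double x, y, z; };    // one reconstructed 3D feature

    struct SharedMap {
        std::mutex mutex;                   // both threads access the map
        std::vector<MapPoint> points;
    };

    // Tracking runs at frame rate and reads the map to estimate the camera
    // pose; mapping refines and extends the map at its own, slower pace.
    void TrackingLoop(SharedMap& map, std::atomic<bool>& running) {
        while (running) {
            { std::lock_guard<std::mutex> lock(map.mutex);
              /* estimate the camera pose against map.points (omitted) */ }
            std::this_thread::sleep_for(std::chrono::milliseconds(33));
        }
    }

    void MappingLoop(SharedMap& map, std::atomic<bool>& running) {
        while (running) {
            { std::lock_guard<std::mutex> lock(map.mutex);
              /* add newly tracked points, refine old ones (omitted) */ }
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
        }
    }

    int main() {
        SharedMap map;
        std::atomic<bool> running{true};
        std::thread tracker(TrackingLoop, std::ref(map), std::ref(running));
        std::thread mapper(MappingLoop, std::ref(map), std::ref(running));
        std::this_thread::sleep_for(std::chrono::seconds(1));  // stand-in for a session
        running = false;
        tracker.join();
        mapper.join();
    }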

These algorithms work on a similar model. They recognize features (points of interest) in the pictures coming from the camera. During the initialization phase, the real positions of these features are computed by assuming that the displacement of the camera between two or more frames is known.
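
To make this concrete, here is a minimal triangulation sketch (our own illustration, not PTAM code): given the two camera centers and the viewing rays toward the same feature, the feature's 3D position is taken as the midpoint between the closest points of the two rays.

    #include <array>
    #include <cstdio>

    using Vec3 = std::array<double, 3>;

    static Vec3 add(Vec3 u, Vec3 v) { return {u[0]+v[0], u[1]+v[1], u[2]+v[2]}; }
    static Vec3 sub(Vec3 u, Vec3 v) { return {u[0]-v[0], u[1]-v[1], u[2]-v[2]}; }
    static Vec3 mul(Vec3 u, double k) { return {u[0]*k, u[1]*k, u[2]*k}; }
    static double dot(Vec3 u, Vec3 v) { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }

    // Midpoint triangulation: c1, c2 are the camera centers (the known
    // displacement between two frames), d1, d2 the ray directions toward
    // the same feature as seen from each camera.
    Vec3 Triangulate(Vec3 c1, Vec3 d1, Vec3 c2, Vec3 d2) {
        const double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
        const Vec3 w = sub(c2, c1);
        const double e = dot(d1, w), f = dot(d2, w);
        const double det = b * b - a * c;          // near zero if rays are parallel
        const double s = (b * f - c * e) / det;    // parameter along ray 1
        const double t = (a * f - b * e) / det;    // parameter along ray 2
        return mul(add(add(c1, mul(d1, s)), add(c2, mul(d2, t))), 0.5);
    }

    int main() {
        // Two cameras 1 m apart along x, both seeing a feature at (0.5, 0, 2).
        Vec3 p = Triangulate({0, 0, 0}, {0.5, 0, 2}, {1, 0, 0}, {-0.5, 0, 2});
        std::printf("%.2f %.2f %.2f\n", p[0], p[1], p[2]);  // 0.50 0.00 2.00
    }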

Then, since the camera's projection is known, its position can be estimated. New points are added to the known map once they can be tracked across several frames. The accumulated position error is then corrected so that all the stored pairs of camera pose and feature positions, each pair called a keyframe, remain mutually consistent.
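
The quantity driven down in that correction step is the reprojection error: the distance between where a mapped point is predicted to appear in a keyframe and where it was actually observed. A minimal sketch, assuming a simple pinhole camera with its axes aligned to the world (orientation handling omitted) and our own hypothetical types:

    #include <array>
    #include <cstdio>
    #include <vector>

    using Vec3 = std::array<double, 3>;
    using Vec2 = std::array<double, 2>;

    struct Keyframe {
        Vec3 camera_position;            // camera center in world coordinates
        std::vector<Vec3> points;        // mapped 3D feature positions
        std::vector<Vec2> observations;  // pixel where each feature was seen
    };

    // Pinhole projection with focal length f; the camera is assumed to look
    // down +z with no rotation, a deliberate simplification for this sketch.
    Vec2 Project(const Vec3& p, const Vec3& cam, double f) {
        const double x = p[0] - cam[0], y = p[1] - cam[1], z = p[2] - cam[2];
        return {f * x / z, f * y / z};
    }

    // Sum of squared reprojection errors over one keyframe: the mismatch a
    // SLAM system reduces when it corrects the stored positions.
    double ReprojectionError(const Keyframe& kf, double f) {
        double err = 0.0;
        for (size_t i = 0; i < kf.points.size(); ++i) {
            const Vec2 u = Project(kf.points[i], kf.camera_position, f);
            const double du = u[0] - kf.observations[i][0];
            const double dv = u[1] - kf.observations[i][1];
            err += du * du + dv * dv;
        }
        return err;
    }

    int main() {
        Keyframe kf{{0, 0, 0}, {{0, 0, 2}}, {{5.0, 0.0}}};
        std::printf("%.2f\n", ReprojectionError(kf, 500.0));  // prints 25.00
    }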

Why PTAM

A SLAM-like algorithm is what we want because it needs no patterns, no sensors, and no prior knowledge of the room in which we construct the AR scene. The user can enter the room with only a laptop and a camera.

Instead of creating complete AR software ourselves, we chose to adapt an existing one to our needs. To verify that PTAM is a good solution, we tested it in a regular room with a low-end Logitech camera. Different environments were used: an empty room with few features to track, and a room with many such features. For each environment the tracking was tested from different viewing angles and distances from the scene.

After testing we found that PTAM is a good choice but needs some changes. The main problem is that the scene is drawn even when the camera does not point at the known map. For example, the user looks toward the augmented scene, then turns around and sees the scene he was watching before appear out of nowhere. The solution is to measure how much the current frame differs from the stored keyframes and to find empirically below which value the scene should be drawn.
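
A minimal sketch of that test, with our own simplified types (PTAM keeps a small blurred thumbnail per keyframe that lends itself to this kind of comparison; the helpers below are our illustration, and the threshold is exactly the value that has to be tuned empirically):

    #include <cstdint>
    #include <vector>

    // A tiny downsampled, blurred grayscale copy of a frame.
    struct Thumbnail { std::vector<uint8_t> pixels; };

    // Zero-mean sum of squared differences: small when two views look alike
    // even if their overall brightness differs.
    double ZeroMeanSSD(const Thumbnail& a, const Thumbnail& b) {
        const size_t n = a.pixels.size();   // both thumbnails share one size
        double meanA = 0, meanB = 0;
        for (size_t i = 0; i < n; ++i) { meanA += a.pixels[i]; meanB += b.pixels[i]; }
        meanA /= n; meanB /= n;
        double ssd = 0;
        for (size_t i = 0; i < n; ++i) {
            const double d = (a.pixels[i] - meanA) - (b.pixels[i] - meanB);
            ssd += d * d;
        }
        return ssd;
    }

    // Draw the augmented scene only if the current frame resembles at least
    // one stored keyframe; the threshold is chosen empirically per camera.
    bool ShouldDrawScene(const Thumbnail& current,
                         const std::vector<Thumbnail>& keyframes,
                         double threshold) {
        for (const Thumbnail& kf : keyframes)
            if (ZeroMeanSSD(current, kf) < threshold)
                return true;
        return false;
    }

    int main() {
        Thumbnail view{{10, 20, 30, 40}}, kf{{12, 22, 32, 42}};  // same view, brighter
        return ShouldDrawScene(view, {kf}, 1.0) ? 0 : 1;         // resembles: returns 0
    }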

Augmented scene

PTAM comes as a complete piece of software rather than an API for augmented reality. However, the included scenery lives entirely in its own class, which provides a function to draw it, and all the transformations needed to obtain the correct projection, with the vector basis in the middle of the scene, are already computed. With the example class removed, we added a new class that creates a complete Roombots simulation, along with an OBJ (3D object file format) loader and a texture loader. The calls to the example class were replaced by calls to the functions of our new class.
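
The interface of the replacement class ends up roughly like the sketch below. The names and paths are ours and purely illustrative, not PTAM's actual API; the only contract the render loop needs is an Init-once, Draw-every-frame pair, since the projection and the scene-centered basis are already on the OpenGL stack when Draw is called.

    #include <string>

    // Sketch of the class that replaced PTAM's example scenery.
    class RoombotsScene {
    public:
        // Load the Roombot meshes (OBJ format) and their textures once.
        bool Init(const std::string& objPath, const std::string& texturePath) {
            /* parse OBJ vertices and faces, decode and upload textures (omitted) */
            return true;
        }

        // Called every frame after the camera transform has been set up;
        // issues the OpenGL calls that render the simulated Roombots.
        void Draw() { /* render the loaded meshes (omitted) */ }

        // Advance the Roombots simulation by dt seconds.
        void Update(double dt) { (void)dt; /* step the module controllers */ }
    };

    int main() {
        RoombotsScene scene;
        scene.Init("roombot.obj", "roombot.png");  // hypothetical asset paths
        scene.Update(1.0 / 30.0);
        scene.Draw();
    }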

We now have all the tools to move the existing 3D Roombot simulation into our augmented scenery system.

Report and presentations