Roel Pieters. Photo | Bart van Overbeeke

Watch it!

When finding our way, we chiefly rely on our trusted eyes. For robots, on the other hand, relying on visual input for orientation is far from self-evident. Doctoral candidate ir. Roel Pieters does see a great future for vision-based robot control, however. He’ll be receiving his doctoral degree at Mechanical Engineering on March 25.

Most robots are blind. That’s not an issue as long as they’re operating in a predictable environment. Assembly line robots that do nothing but pick up and tighten screws or weld metal plates together don’t need to see their environment. Since they repeat the exact same maneuver over and over, they can find the next screw with their eyes closed, so to speak.

Using robots in a home environment, as care robots in nursing homes for example, is quite a different matter. After all, safety is the top priority in these surroundings. A care robot must never crash into an elderly person, of course, which is why it has to be able to respond quickly whenever something or someone crosses its path.

We rely mostly on sight whenever we want to orient ourselves, search for objects, or avoid obstacles, so it may seem obvious to equip robots with the gift of eyesight as well. Compact video cameras are widely available these days. Yet visual perception in robots is still in its infancy.

The main reason for that is the enormous computing power needed to process visual data – video – says doctoral candidate Roel Pieters. “About forty percent of our brain’s capacity is used for image processing. Our brain is a kind of parallel processor that can run millions of processes at the same time. A computer can’t top that.” Until recently, the visual control of robots was greatly hindered by the time it took to convert images into control signals. Over the past few years, however, computers have become powerful enough to operate robots in real time based on video input.
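To give a feel for what “video input to control signals” means in practice, a vision-to-control loop can be sketched in a few lines of Python with OpenCV. This is purely illustrative and not Pieters’ actual method: the color-based target detection, the proportional gain, and the send_velocity stub are all assumptions standing in for a real robot interface.

    # Illustrative sketch of a vision-based control loop (not Pieters' method):
    # detect a colored target in each camera frame, compute its pixel offset
    # from the image center, and turn that error into a velocity command.
    import cv2
    import numpy as np

    GAIN = 0.002                      # proportional gain: pixels -> m/s (hand-tuned)
    LOWER = np.array([40, 80, 80])    # HSV lower bound for a green target (assumed)
    UPPER = np.array([80, 255, 255])  # HSV upper bound

    def send_velocity(vx, vy):
        """Placeholder for the robot's velocity interface (hypothetical)."""
        print(f"command: vx={vx:+.3f} m/s, vy={vy:+.3f} m/s")

    cap = cv2.VideoCapture(0)         # assumes a camera at index 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        m = cv2.moments(mask)
        if m["m00"] > 1e-3:           # target visible: compute its centroid
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            h, w = mask.shape
            err_x, err_y = cx - w / 2, cy - h / 2      # pixel error from center
            send_velocity(GAIN * err_x, GAIN * err_y)  # proportional control
        else:
            send_velocity(0.0, 0.0)   # target lost: stop, safety first
        cv2.imshow("camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()

Even this toy loop shows where the computing power goes: every frame must be converted, filtered, and measured before a single command can be sent, and all of it has to finish before the next frame arrives.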

As far as that’s concerned, we’ve come a long way since the very first visually controlled robot forty years ago. It took that specimen ten seconds to recognize a cardboard box. Today, even Pieters’ laptop can serve as the ‘brain’ of a robotic arm, enabling it to recognize and grab objects.

The arm in question measures approximately one meter and is normally part of ROSE, the Remotely Operated Service Robot being developed at TU/e. Unlike AMIGO, the other Eindhoven care robot, ROSE isn’t autonomous, says Pieters. “ROSE is operated from a kind of cockpit. An operator, a nursing home employee for example, can see what the robot is looking at on a screen and can then select an action for the robot from a menu.” Obviously, it’s impossible for the person in the cockpit to control ROSE’s movements down to the tiniest detail. It’s up to the machine itself to determine the route and speed by which to approach its goal.
