Newswise — Movies portray robots that can move through the world as easily as humans, and use their hands to operate everything from dishwashers to computers with ease. But in reality, the creation of robots with these skills remains a major challenge. Researchers at the University of Massachusetts Amherst are solving this problem by giving a mobile robotic arm the ability to "see" its environment through a digital camera.

"Mobile robots play an important role in many settings, including planetary exploration and manufacturing," says Dov Katz, a doctoral student of computer science. "Giving them the ability to manipulate objects will extend their use in medical care and household assistance."

Results of experiments performed by Katz and Oliver Brock, a professor of computer science, were presented at the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Robotics and Automation on May 21 in Pasadena, Calif.

So far, the team has successfully taught its creation, dubbed UMan, or UMass Mobile Manipulator, to approach unfamiliar objects, such as scissors, garden shears and jointed wooden toys, and to learn how they work by pushing on them and observing how they change, the same process children use as they explore the world.

Like a child forming a memory, UMan then stores this knowledge of how the objects move as a "kinematic model," which can be used to perform specific tasks, such as opening scissors and shears to a 90-degree angle. Video shot by the team shows UMan easily completing this task.
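
As an illustration only, the sketch below (written in Python, with hypothetical names and structure, not the team's actual software) shows how such a learned kinematic model might be stored and reused: an object becomes a set of rigid parts connected by joints, and a task like "open the scissors to 90 degrees" reduces to setting a joint angle in that model.

```python
# A minimal sketch, assuming a simple parts-and-joints representation;
# this is not the UMass team's code.
from dataclasses import dataclass, field

@dataclass
class Joint:
    part_a: str             # name of the first rigid part
    part_b: str             # name of the second rigid part
    joint_type: str         # "revolute" (rotates) or "prismatic" (slides)
    angle_deg: float = 0.0  # current opening angle, for revolute joints

@dataclass
class KinematicModel:
    name: str
    parts: list = field(default_factory=list)
    joints: list = field(default_factory=list)

    def set_joint_angle(self, part_a, part_b, target_deg):
        """Record the desired opening angle for the joint between two parts."""
        for j in self.joints:
            if {j.part_a, j.part_b} == {part_a, part_b}:
                j.angle_deg = target_deg
                return j
        raise ValueError("no joint between those parts")

# Usage: a pair of scissors learned as two blades sharing one revolute joint.
scissors = KinematicModel(
    name="scissors",
    parts=["blade_left", "blade_right"],
    joints=[Joint("blade_left", "blade_right", "revolute")],
)
scissors.set_joint_angle("blade_left", "blade_right", 90.0)  # the task described in the article
```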

According to Katz, teaching UMan to "walk" was the easy part. "UMan sits on a base with four wheels that allow it to move in any direction, and a system of lasers keeps it from bumping into objects by judging their distance from the base," says Katz, who filmed UMan taking its first trip around the laboratory, navigating through a maze of boxes.
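
The laser-based obstacle check Katz describes amounts to a distance threshold; the few lines below are a hedged sketch of that idea (the function name and safety margin are assumptions, not UMan's actual controller), stopping the base whenever any laser range reading falls below a minimum clearance.

```python
# A hedged sketch of laser-based obstacle avoidance; values are illustrative.
def safe_to_move(laser_ranges_m, min_clearance_m=0.5):
    """laser_ranges_m: one sweep of distances (in meters) from the laser scanner."""
    return all(r > min_clearance_m for r in laser_ranges_m)

scan = [2.4, 1.9, 0.7, 3.1]            # example readings from a single sweep, in meters
if not safe_to_move(scan, 0.8):
    print("obstacle too close: stop the base")
```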

What turned out to be harder was teaching the robotic arm to manipulate objects. "Robots in factories perform complex tasks with ease, but one screw out of place can shut down the entire assembly line," says Katz, who recently met with representatives from Toyota Motors. "Giving robots the same skills as humans turned out to be much more difficult than we imagined, which is why we don't have robots working in unstructured environments like homes."

The key was giving UMan eyes in the form of a digital camera that sits on its wrist. Once the camera coupled manipulating objects with the ability to "see," the complex computer algorithms needed to instruct UMan to perform specific tasks became much simpler.

A video shot by the team shows what the UMan "sees" as it approaches a jointed wooden toy on a wooden table, which appears as a uniform field of green dots. The first gentle touch from the hand quickly separates the toy from the background, and moving the various parts eventually labels each section with a specific color, identifying all the moving pieces and the joints holding them together. UMan then stores this knowledge, and can use it to put the object in a specific shape.
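
The segmentation step in that video can be illustrated with a simplified sketch (an assumption about the general idea, not the published algorithm): track visual feature points while the arm pushes, then group together points whose mutual distances stay constant, since those lie on the same rigid part; groups whose separation changes belong to parts connected by a joint.

```python
# A simplified sketch, assuming tracked 2-D feature points per video frame;
# this is not the team's published method.
import math

def rigid_groups(tracks, tol=2.0):
    """tracks: {point_id: [(x, y) per frame]}; returns lists of point ids
    that moved together, i.e. candidate rigid parts."""
    ids = list(tracks)

    def same_part(a, b):
        # On a rigid body, the distance between two points stays (nearly) constant.
        dists = [math.dist(pa, pb) for pa, pb in zip(tracks[a], tracks[b])]
        return max(dists) - min(dists) < tol

    groups = []
    for pid in ids:
        for g in groups:
            if all(same_part(pid, other) for other in g):
                g.append(pid)
                break
        else:
            groups.append([pid])
    return groups

# Toy example: points 1 and 2 ride on one part, point 3 on another part that rotates away.
tracks = {
    1: [(0, 0), (0, 0), (0, 0)],
    2: [(10, 0), (10, 0), (10, 0)],
    3: [(20, 0), (14, 10), (4, 12)],
}
print(rigid_groups(tracks))  # -> [[1, 2], [3]]: two rigid parts, with a joint between them
```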

Future research by Katz and Brock will focus on teaching UMan to operate other kinds of mechanisms, including doorknobs and light switches, and on extending UMan's manipulation skills into three dimensions.

"Once robots learn to combine movement, perception and the manipulation of objects, they will be able to perform meaningful work in environments that are unstructured and constantly changing," says Katz. "At that point, we will have robots that can explore new planets and clean houses in a flexible way."

Video is available by contacting the source.

CITATIONS

IEEE International Conference on Robotics and Automation