Real-world robotic-manipulation system


Russ Tedrake, a professor of electrical engineering and computer science and head of the Robot Locomotion Group at MIT, received his first Amazon Research Award (ARA) in 2017 — the first year that robotics was included as one of the ARA research areas.

In a succession of ARA awards since then, Tedrake has continued to explore the challenge of robotic manipulation — the grasping and manipulation of objects in arbitrary spatial configurations.

“There’s one level of manipulation that is basically just looking for big flat areas to attach to, and you don’t think very much about the objects,” Tedrake says. “And there is a big step where you understand, not just that this is a flat surface, but that it has inertia distributed a certain way. If there was a big, heavy book, for instance, it would be much better to pick in the middle than at the edge. We’ve been trying to take the revolution in computer vision, take what we know about control, understand how to put those together, and push forward.”

Self-supervised learning in robotics


With their first ARA award, Tedrake’s group worked on applying self-supervised learning to problems of robotic manipulation. Today, self-supervised learning is all the rage, but at the time, it was little explored in robotics.

The basic method in self-supervised learning is to use unlabeled — but, often, algorithmically manipulated — data to train a machine learning model to represent data in a way that’s useful for some task. The model can then be fine-tuned on that task with very little labeled data.

In computer vision, for instance, self-supervised learning often involves taking two copies of the same image, randomly modifying one of them — cropping it, rotating it, changing its colors, adding noise, and so on — and training the model to recognize that both images are of the same object.
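The recipe above can be sketched in a few lines. Everything in this example (the toy linear "encoder," the particular augmentations, the single negative sample) is illustrative, not the actual pipeline; real systems use deep networks and batches of negatives, but the core idea is the same: embeddings of two augmented copies of one image should be more similar than embeddings of different images.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Randomly modify a copy of the image: add noise, maybe flip it."""
    noisy = img + rng.normal(scale=0.05, size=img.shape)
    return noisy[:, ::-1] if rng.random() < 0.5 else noisy

def embed(img, W):
    """A stand-in 'encoder': flatten the image and project it linearly."""
    return W @ img.ravel()

def contrastive_loss(z1, z2, z_neg):
    """Pull embeddings of the same image together; push a negative away."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos, neg = np.exp(cos(z1, z2)), np.exp(cos(z1, z_neg))
    return -np.log(pos / (pos + neg))

img_a, img_b = rng.random((8, 8)), rng.random((8, 8))
W = rng.normal(size=(16, 64))
loss = contrastive_loss(embed(augment(img_a), W),   # two copies of img_a
                        embed(augment(img_a), W),
                        embed(augment(img_b), W))   # a different image
```

Training would adjust the encoder's weights to drive this loss down across many image pairs.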

In Tedrake’s case, his team allowed a sensor-laden robotic arm to move around an object, simultaneously photographing it and measuring the distance to points on its surface using a depth camera. From the depth readings, software could construct a 3-D model of the object and use it to map points from one 2-D photo onto others.
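The geometry behind that mapping is standard pinhole-camera math: a pixel plus a depth reading lifts to a 3-D point, which can then be projected into any other calibrated view. A minimal sketch, with made-up intrinsics and a made-up relative camera pose:

```python
import numpy as np

# Assumed pinhole intrinsics: focal lengths 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def backproject(u, v, depth, K):
    """Lift a pixel with a depth reading to a 3-D point in the camera frame."""
    return depth * np.linalg.inv(K) @ np.array([u, v, 1.0])

def project(p_cam, K):
    """Project a 3-D camera-frame point back to pixel coordinates."""
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Map a pixel seen by camera 1 into camera 2, whose pose relative to
# camera 1 is rotation R and translation t (here: 10 cm to the side).
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])
p1 = backproject(320, 240, depth=1.0, K=K)
u2, v2 = project(R @ p1 + t, K)
```

Running this over every pixel with a valid depth reading is what lets software label the same physical surface point in many different photos without any human annotation.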

Self-supervision to learn invariant object representations

From the point-mapped images, a neural network could then learn an invariant representation of the object, one that allows it to identify parts of the object regardless of perspective — for instance, to identify the handle of a coffee mug whether it was viewed from the top, the side, or straight on.

The goal: enable a robot to grasp objects at specified points — to, say, pick up coffee mugs by their handles. That, however, requires the robot to generalize from a canonical instance of an object — a mug with its handle labeled — to variants of the object — mugs that are squatter or tapered or have differently shaped handles.

Keypoint correspondences

So Tedrake and his students’ next ARA-sponsored project was to train a neural network to map keypoints across different instances of the same type of object. For instance, the points at which a mug’s handle joins the mug could constitute a set of keypoints; keypoints might also be points in free space, defined relative to the object, such as the opening left by the mug handle.
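Once keypoints are detected, a grasp can be computed from them with simple geometry. The coordinates below are hypothetical, as is the particular grasp rule; the sketch only shows how a handful of keypoints, including a free-space one, can pin down a grasp target and an approach direction:

```python
import numpy as np

# Hypothetical keypoints for one mug, in the robot's frame (metres):
# the two points where the handle meets the body, and one free-space
# point in the middle of the handle's opening.
handle_top = np.array([0.50, 0.10, 0.12])
handle_bottom = np.array([0.50, 0.10, 0.06])
handle_opening = np.array([0.50, 0.14, 0.09])

# Grasp target: midway along the handle.
grasp_point = 0.5 * (handle_top + handle_bottom)

# Approach direction: from the opening toward the handle, so the
# gripper closes on the handle rather than colliding with the mug body.
approach = grasp_point - handle_opening
approach /= np.linalg.norm(approach)
```

Because the rule is written in terms of keypoints rather than raw pixels, the same code works for any mug the keypoint detector can handle, squat or tapered.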

Tedrake’s group began with a neural network pretrained through self-supervision and fine-tuned it using multiple instances of the same types of objects — mugs and shoes of all shapes and sizes, for example. Instances within the same category had been labeled with corresponding keypoints, so that the model could learn category-level structural principles, as opposed to simply memorizing diverse shapes. Tedrake’s group also augmented their training images of real objects with computer-generated images of objects in the same categories.

Learning keypoint correspondences

After training the model, the group tested it on a complete end-to-end robotic-manipulation task. “We can do the task with 99% confidence,” Tedrake says. “People would just come into the lab and take their shoes off, and we’d try to put a shoe on the rack. Daniela [Rus, a roboticist, the director of MIT’s Computer Science and Artificial Intelligence Laboratory, and fellow ARA recipient] had these super shiny black Italian shoes, and they did totally fool our system. But we just added them to the training set and trained the model, and then it worked fine.”

This system worked well so long as the object to be grasped (a shoe or, in a separate set of experiments, a coffee cup) remained stationary after the neural model had identified the grasp point. “But if the object slipped, or if someone moved it as the robot reached for it, it would still air ball in the way robots have done for far too long,” Tedrake says.

Adapting on the fly


So the next phase of the project was to teach the robot to use video feedback to adjust trajectories on the fly. Until then, Tedrake’s team had been using machine learning only for the robot’s perceptual system; they’d designed the control algorithms using traditional control-theoretical optimization. But now they switched to machine learning for controller design, too.

To train the controller model, Tedrake’s group used data from demonstrations in which one of the lab members teleoperated the robotic arm while other members knocked the target object around, so that its position and orientation changed. During training, the model took as input sensor data from the demonstrations and tried to predict the teleoperator’s control signals.
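This is behavior cloning: supervised learning in which the demonstrator's actions are the labels. A minimal sketch on synthetic data, using a linear least-squares fit where the real system would use a neural network (the data and dimensions here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the demonstrations: each row of X is the
# sensor reading at one timestep, each row of U the teleoperator's
# command at that timestep.
true_policy = rng.normal(size=(4, 10))          # mapping to recover
X = rng.normal(size=(200, 10))                  # 200 steps, 10 sensor dims
U = X @ true_policy.T + 0.01 * rng.normal(size=(200, 4))

# Behavior cloning: fit a policy that predicts the operator's commands
# from the sensor data.
W, *_ = np.linalg.lstsq(X, U, rcond=None)
mse = np.mean((X @ W - U) ** 2)                 # training error
```

Because the demonstrations include runs where the object was knocked around, the fitted policy sees examples of recovering from perturbations, which is what makes the resulting controller reactive rather than open-loop.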

“By the end, we had versions that were just super robust, where you’re antagonizing the robot, trying to knock objects away just as it reaches for them,” Tedrake says.

Still, producing those robust models required around 100 runs of the teleoperation experiment for each object, a resource-intensive data acquisition procedure. This led to the next step: generalizing the feedback model, so that the robot could learn to handle perturbations from just a handful of examples, or even a single one.


“From all that data, we’re now trying to learn, not the policy directly, but a dynamics model, and then you compute the policy after the fact,” Tedrake explains.

This requires a combination of machine learning and the more traditional, control-theoretical analysis that Tedrake’s group has specialized in. From data, the machine learning model learns vector representations of both the input and the control signal, but hand-tooled algorithms constrain the representation space to optimize the control signal selection. “It’s basically turning it back into a planning and control problem, but in the feature space that was learned,” Tedrake explains.
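The division of labor can be sketched concretely. Suppose the learned dynamics model in the feature space were linear, z' = Az + Bu (the matrices below are made up, standing in for learned ones); then computing the policy "after the fact" is an ordinary planning problem, here solved by unrolling the dynamics and solving a least-squares system for the inputs:

```python
import numpy as np

# Stand-ins for a learned latent dynamics model z' = A @ z + B @ u.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])

def plan(z0, z_goal, horizon=20):
    """Plan a control sequence in the learned feature space: unroll
    z_T = A^T z0 + sum_k A^(T-1-k) B u_k and solve for the inputs u_k."""
    blocks = [np.linalg.matrix_power(A, horizon - 1 - k) @ B
              for k in range(horizon)]
    G = np.hstack(blocks)                     # maps stacked inputs to z_T
    rhs = z_goal - np.linalg.matrix_power(A, horizon) @ z0
    u, *_ = np.linalg.lstsq(G, rhs, rcond=None)  # min-norm input sequence
    return u.reshape(horizon, -1)

u_seq = plan(np.zeros(2), np.array([1.0, 0.0]))
```

Machine learning supplies the feature space and the dynamics matrices; classical planning and control, as in this least-squares rollout, supplies the policy.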

And indeed, with his current ARA grant, Tedrake is pursuing ever more sophisticated techniques for analyzing planning and control problems. In a recent paper, he and two of his students, Tobia Marcucci and Jack Umenberger, together with Pablo Parrilo, a professor in MIT’s Laboratory for Information and Decision Systems, consider a variation on the classic shortest-path problem, the problem of finding the shortest route through a graph whose edges have varying lengths.

In Tedrake and his colleagues’ version of the problem, the locations of the graph nodes vary according to some function, and as a consequence, so do the edge lengths. This formalism lends itself to a wide range of problems, including motion planning for robots and autonomous vehicles.

Figure: An example of Tedrake and his colleagues’ variation of the shortest-path problem. White circles represent locations of vertices, which can vary anywhere within the pale-blue polygons; the dotted blue lines represent the current distances between vertices along the shortest route through the graph. Black arrows represent the direction of flow through the graph.

Computing the shortest path through such a graph is an NP-complete problem, meaning that no known algorithm can solve it efficiently as the graph grows large. But the MIT researchers showed how to find an approximate solution efficiently.
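A toy version conveys the flavor of the continuous half of the problem. Here the route through the graph is already fixed, each vertex is confined to its own axis-aligned box (the boxes are made up), and a simple heuristic shortens the path by nudging each free vertex toward the midpoint of its neighbors and projecting it back into its box. The paper's actual method instead solves a convex relaxation with guarantees; this sketch only illustrates the setup.

```python
import numpy as np

# Each vertex may sit anywhere in its box ((xlo, xhi), (ylo, yhi)).
boxes = [((0.0, 0.0), (0.0, 0.0)),   # start, pinned at the origin
         ((1.0, 2.0), (2.0, 3.0)),
         ((3.0, 4.0), (0.0, 1.0)),
         ((5.0, 5.0), (0.0, 0.0))]   # goal, pinned

def clip(p, box):
    """Project a point into an axis-aligned box."""
    (xlo, xhi), (ylo, yhi) = box
    return np.array([np.clip(p[0], xlo, xhi), np.clip(p[1], ylo, yhi)])

def path_length(pts):
    return sum(np.linalg.norm(b - a) for a, b in zip(pts, pts[1:]))

pts = [clip(np.zeros(2), b) for b in boxes]   # crude initial guess
initial = path_length(pts)

# Heuristic shortening: move each free vertex toward the midpoint of
# its neighbours, projected back into its box.
for _ in range(100):
    for i in (1, 2):
        pts[i] = clip(0.5 * (pts[i - 1] + pts[i + 1]), boxes[i])
```

The hard, combinatorial part that this sketch omits is choosing the route through the graph in the first place while the vertex positions, and hence the edge lengths, are still undetermined; that coupling is what makes the full problem NP-complete.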

This continued focus on traditional optimization techniques puts Tedrake at odds with the prevailing shift toward machine learning in so many branches of AI.

“Learning is working extremely well, but too often, I think, people have thrown the baby out with the bathwater,” he says. “There are some things that we still know how to do very, very well with control and optimization, and I’m trying to push the boundary back towards everything we do know how to do.”




