Friday, August 3, 2018

Robot Hands Are Evolving

How Robot Hands Are Evolving to Do What Ours Can

By MAE RYAN, CADE METZ and RUMSEY TAYLOR

Robotic hands once could do only what vast teams of engineers programmed them to do. Now they can learn more complex tasks on their own.

A robotic hand? Four autonomous fingers and a thumb that can do anything your own flesh and blood can do? That is still the stuff of fantasy.

But inside the world’s top artificial intelligence labs, researchers are getting closer to creating robotic hands that can mimic the real thing.


THE SPINNER

Inside OpenAI, the San Francisco artificial intelligence lab founded by Elon Musk and several other big Silicon Valley names, you will find a robotic hand called Dactyl. It looks a lot like Luke Skywalker’s mechanical prosthetic in the latest Star Wars film: digits that bend and straighten like a human hand’s.

If you give Dactyl an alphabet block and ask it to show you particular letters — let’s say the red O, the orange P and the blue I — it will show them to you and spin, twist and flip the toy in nimble ways.

For a human hand, that is a simple task. But for an autonomous machine, it is a notable achievement: Dactyl learned the task largely on its own. Using the mathematical methods that allow Dactyl to learn, researchers believe they can train robotic hands and other machines to perform far more complex tasks.

This remarkably nimble hand represents an enormous leap in robotic research over the last few years. Until recently, researchers were still struggling to master much simpler tasks with much simpler hands.

THE GRIPPER

Created by researchers at the Autolab, a robotics lab at the University of California, Berkeley, this system represents where the technology stood just a few years ago.

Equipped with a two-fingered “gripper,” the machine can pick up items like a screwdriver or a pair of pliers and sort them into bins.

The gripper is much easier to control than a five-fingered hand, and building the software needed to operate a gripper is not nearly as difficult.


It can deal with objects that are slightly unfamiliar. It may not know what a restaurant-style ketchup bottle is, but the bottle has the same basic shape as a screwdriver — something the machine does know.

But when this machine is confronted with something that is different from what it has seen before — like a plastic bracelet — all bets are off.

THE PICKER

What you really want is a robot that can pick up anything, even stuff it has never seen before. That is what other Autolab researchers have built over the last few years.

This system still uses simple hardware: a gripper and a suction cup. But it can pick up all sorts of random items, from a pair of scissors to a plastic toy dinosaur.

The system benefits from dramatic advances in machine learning. The Berkeley researchers modeled the physics of more than 10,000 objects, identifying the best way to pick up each one. Then, using an algorithm called a neural network, the system analyzed all this data, learning to recognize the best way to pick up any item. In the past, researchers had to program a robot to perform each task. Now it can learn these tasks on its own.
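To make that concrete, here is a minimal, hypothetical sketch of the approach in Python. It is not the Autolab’s actual code; the feature sizes, names and training data are invented for illustration. A small neural network is trained on simulated grasp attempts to predict whether a candidate grasp will hold.

```python
# Toy illustration of learning grasp quality from simulated data.
# All names, shapes and data here are hypothetical, not the Autolab's code.
import torch
import torch.nn as nn

class GraspScorer(nn.Module):
    """Maps a feature vector describing a candidate grasp (for example,
    a depth-image patch and gripper pose) to a probability of success."""
    def __init__(self, n_features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# Stand-in training set: features of sampled grasps, plus a label saying
# whether a physics model judged each grasp likely to hold the object.
features = torch.randn(10_000, 32)
labels = torch.randint(0, 2, (10_000, 1)).float()

model = GraspScorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# At pick time, the robot would score many candidate grasps and execute
# the highest-scoring one, or fall back to the suction cup.
```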

When confronted with, say, a plastic Yoda toy, the system recognizes it should use the gripper to pick the toy up.

But when it faces the ketchup bottle, it opts for the suction cup.

The picker can do this with a bin full of random stuff. It is not perfect, but because the system can learn on its own, it is improving at a far faster rate than machines of the past.

THE BED MAKER

This robot may not make perfect hospital corners, but it represents notable progress. Berkeley researchers pulled the system together in just two weeks, using the latest machine learning techniques. Not long ago, this would have taken months or years.

Now the system can learn to make a bed in a fraction of that time, just by analyzing data. In this case, the system analyzes the movements that lead to a made bed.

THE PUSHER

Across the Berkeley campus, at a lab called BAIR, another system is applying different learning methods. It can push an object with a gripper and predict where the object will go. That means it can move toys across a desk much as you or I would.

The system learns this behavior by analyzing vast collections of video images showing how objects get pushed. In this way, it can deal with the uncertainties and unexpected movements that come with this kind of task.
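The rough shape of that idea, in a hypothetical Python sketch rather than the BAIR lab’s actual models: a network learns, from logged pushes, to map an object’s position and a push action to where the object ends up. The names and dimensions below are invented.

```python
# Toy sketch of learning push outcomes from logged interaction data.
# Variable names, sizes and data are hypothetical, not BAIR's code.
import torch
import torch.nn as nn

# Each example: the object's current (x, y) position, a push action
# (direction and length), and the object's observed position afterwards.
states = torch.randn(5_000, 2)
pushes = torch.randn(5_000, 3)
next_states = states + 0.1 * pushes[:, :2]   # stand-in for real observations

model = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    pred = model(torch.cat([states, pushes], dim=1))
    loss = nn.functional.mse_loss(pred, next_states)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# With a learned predictor, the robot can search over candidate pushes
# and choose the one whose predicted outcome lands closest to the goal.
```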

THE FUTURE

These are all simple tasks. And the machines can only handle them in certain conditions. They fail as much as they impress. But the machine learning methods that drive these systems point to continued progress in the years to come.

Like those at OpenAI, researchers at the University of Washington are training robotic hands that have all the same digits and joints that our hands do.

That is far more difficult than training a gripper or suction cup. An anthropomorphic hand moves in so many different ways.

So, the Washington researchers train their hand in simulation, a digital recreation of the real world. That streamlines the training process.

At OpenAI, researchers are training their Dactyl hand in much the same way. The system can learn to spin the alphabet block through what would have been 100 years of trial and error. The digital simulation, running across thousands of computer chips, crunches all that learning down to two days.

It learns these tasks by repeated trial and error. Once it learns what works in the simulation, it can apply this knowledge to the real world.
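In spirit, that loop resembles the toy sketch below. Everything in it, including the pretend simulator and its reward, is a simplified stand-in rather than OpenAI’s actual training code: propose an action, check the simulated result, keep whatever improves the outcome, and repeat.

```python
# Toy trial-and-error loop in a simulated environment.
# The 'simulator' and reward are simplified stand-ins, not OpenAI's code.
import numpy as np

def simulate(policy_params, target=np.array([0.8, -0.5])):
    """Pretend simulator: reward is higher when the policy's output
    lands closer to a target configuration of the block."""
    action = np.tanh(policy_params)
    return -np.linalg.norm(action - target)

rng = np.random.default_rng(0)
params = rng.normal(size=2)
best_reward = simulate(params)

for step in range(10_000):
    candidate = params + 0.05 * rng.normal(size=2)   # perturb and retry
    reward = simulate(candidate)
    if reward > best_reward:                         # keep what worked
        params, best_reward = candidate, reward

# Once the policy works in simulation, it is run on the physical hand.
```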

Many researchers have questioned whether this kind of simulated training will transfer to the physical realm. But like researchers at Berkeley and other labs, the OpenAI team has shown that it can.

They introduce a certain amount of randomness to the simulated training. They change the friction between the hand and the block. They even change the simulated gravity. After learning to deal with this randomness in a simulated world, the hand can deal with the uncertainties of the real one.
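A minimal sketch of what that randomization can look like in code; the parameter names and ranges are invented for illustration, not OpenAI’s actual values.

```python
# Toy illustration of domain randomization: each simulated episode uses
# slightly different physics, so the learned behavior cannot depend on
# any one exact setting. Ranges here are invented for illustration.
import random

def randomized_physics():
    return {
        "friction": random.uniform(0.5, 1.5),     # hand-block friction scale
        "gravity": random.uniform(-11.0, -8.0),   # m/s^2, varied around -9.8
        "block_mass": random.uniform(0.03, 0.06), # kg
    }

def run_training_episode(policy, physics):
    # Placeholder: step a simulator configured with `physics` and update
    # `policy` from the result.
    pass

policy = object()   # stand-in for the learned controller
for episode in range(10_000):
    run_training_episode(policy, randomized_physics())
```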

Today, all Dactyl can do is spin a block. But researchers are exploring how these same techniques can be applied to more complex tasks. Think manufacturing. And flying drones. And maybe even driverless cars.

Additional production by Keith Collins, Tim Hussin, Whitney Richardson, and Josh Williams.
