A robotic hand? Four autonomous fingers and a thumb that can do anything your own flesh and blood can do? That's still the stuff of imagination.
But within the best artificial intelligence laboratories in the world, researchers are approaching robotic hands that can mimic the real thing.
Inside OpenAI, the San Francisco artificial intelligence lab created by Elon Musk and several other big Silicon Valley names, researchers are building one such hand, called Dactyl.
When you give Dactyl an alphabet block and ask it for certain letters – let's say the red O, the orange P and the blue I – it shows them to you, spinning, twisting and flipping the toy in nimble ways.
This is a simple task for a human hand. But for an autonomous machine, it is a remarkable achievement: Dactyl largely learned the task on its own. Using the mathematical methods that allow Dactyl to learn, the researchers believe they can train robotic hands and other machines to perform far more complex tasks.
This remarkably nimble hand represents a huge leap forward in robotics research in recent years. Until recently, researchers struggled to get much simpler hands to handle much simpler tasks.
Developed by researchers at the Autolab, a robotics laboratory at the University of California, Berkeley, this system represented the limits of the technology just a few years ago.
Equipped with a two-pronged "gripper," the machine can pick up items such as a screwdriver or a pair of pliers and sort them into bins.
The gripper is much easier to control than a five-fingered hand, and creating the software needed to operate one is not nearly as difficult.
It can deal with objects that are slightly unfamiliar. It may not know what a restaurant-style ketchup bottle is, but the bottle has the same basic shape as a screwdriver – something the machine does know.
But when this machine is faced with something different from what it has seen before – like a plastic bracelet – all bets are off.
What you really want is a robot that can pick up anything, even things it has never seen before. That is what other Autolab researchers have been building in recent years.
This system still uses simple hardware: a gripper and a suction cup. But it can handle all sorts of random items, from scissors to a plastic toy dinosaur.
The system benefits from dramatic advances in machine learning. The Berkeley researchers modeled the physics of more than 10,000 objects and identified the best way to grab each one. Then, using an algorithm called a neural network, the system analyzed all of this data and learned to recognize the best way to pick up any object. In the past, researchers had to program a robot to perform each task. Now it can learn those tasks on its own.
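The selection step described above can be sketched in a few lines of code. This is a toy illustration, not the Autolab system: the network, its weights, the function names and the grasp features are all invented here, and the real model would be trained on the thousands of simulated grasps rather than initialized at random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the learned model: a tiny two-layer network that maps a
# candidate grasp's features to a predicted success score. In practice
# the weights would be learned from simulated grasp data; here they are
# random, purely to show the selection step.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8,))

def predict_success(features):
    """Score one candidate grasp (higher = more likely to succeed)."""
    return float(np.tanh(features @ W1) @ W2)

def best_grasp(candidates):
    """Return the index of the candidate the network scores highest."""
    return int(np.argmax([predict_success(c) for c in candidates]))

# Each row is one hypothetical candidate: (x, y, approach angle, jaw width).
candidates = rng.uniform(size=(5, 4))
print("chosen grasp:", best_grasp(candidates))
```

The key idea is that the robot never looks up a stored answer for a known object; it scores many candidate grasps with a learned function and executes the most promising one, which is why it can cope with objects it has never seen.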
For example, when confronted with a plastic Yoda toy, the system recognizes that it should use the gripper to pick up the toy.
But when it faces the ketchup bottle, it opts for the suction cup.
The picker can do this with a bin full of random stuff. It is not perfect, but because the system can learn on its own, it is improving far faster than machines of the past.
The Bed Maker
While this robot may not make perfect hospital corners, it represents a remarkable advance. Berkeley researchers assembled the system in just two weeks, using the latest machine learning techniques. Not so long ago, this would have taken months or years.
Now the system can learn to make a bed in a fraction of that time, just by analyzing data. In this case, the system analyzes the movements that lead to a made bed.
On the Berkeley campus, in a lab called BAIR, another system is using other learning methods. It can push an object with a gripper and predict where it will go. That means it can move toys across a desk, much like you or I would.
The system learns this behavior by analyzing vast collections of video footage showing how objects are moved. In this way it can handle the uncertainties and unexpected movements that come with this kind of task.
These are all simple tasks, and the machines can handle them only in certain conditions. They fail as often as they impress. But the machine learning methods driving these systems point to continued progress in the years to come.
Like OpenAI, researchers at the University of Washington are training robotic hands that have the same fingers and joints as our own.
This is much more difficult than training a gripper or sucker. An anthropomorphic hand moves in so many different ways.
So the Washington researchers train their hand in simulation – a digital recreation of the real world. This eases the training process.
At OpenAI, researchers train their Dactyl hand the same way. The system can learn to spin the alphabet block through the equivalent of 100 years of trial and error. The digital simulation, running across thousands of computer chips, crunches all that learning into just two days.
It learns these tasks through repeated trials. Once it has learned what works in the simulation, it can apply that knowledge to the real world.
Many researchers have questioned whether this kind of simulated training transfers to the physical world. But like researchers at Berkeley and other labs, the OpenAI team has shown that it can.
They introduce randomness into the simulated training. They change the friction between the hand and the block. They even change the simulated gravity. Having learned to handle this randomness in a simulated world, the hand can handle the vagaries of the real one.
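The randomization idea above amounts to drawing new physical parameters for every training episode. The sketch below is a minimal illustration under invented assumptions: the parameter names, ranges and the commented-out training call are all hypothetical, standing in for whatever the real simulator exposes.

```python
import random

# A minimal sketch of domain randomization: every episode gets freshly
# perturbed physics, so a policy that succeeds across all of them is more
# likely to survive the messiness of the real world. Names and ranges are
# invented for illustration.
def randomized_params(rng):
    return {
        "friction": rng.uniform(0.5, 1.5),      # hand-block friction scale
        "gravity": rng.uniform(8.8, 10.8),      # m/s^2, perturbed around 9.8
        "block_size": rng.uniform(0.04, 0.06),  # block edge length in meters
    }

rng = random.Random(42)
for episode in range(3):
    params = randomized_params(rng)
    # policy.train(simulate(params))  # hypothetical training step
    print(episode, params)
```

Because the policy never sees the same physics twice, it cannot overfit to one simulated world; the real world then looks like just one more draw from the distribution it has already mastered.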
Today, Dactyl can only spin a block. But researchers are exploring how these same techniques can be applied to more complex tasks. Think manufacturing. And flying drones. And maybe even driverless cars.