
OpenAI's AI-powered robot learned how to solve a Rubik's cube with one hand



OpenAI has reached a new milestone in its quest for general-purpose, self-learning robots. According to the group's robotics division, Dactyl, the humanoid robotic hand it first unveiled last year, has learned to solve a Rubik's Cube with one hand. OpenAI sees the feat as a leap forward both for the dexterity of robotic appendages and for its own AI software, which lets Dactyl learn new tasks in virtual simulations before it is confronted with a real, physical challenge.

In a demonstration video showing off Dactyl's new talent, you can see the robotic hand inch toward a solved cube with clumsy but precise maneuvers. It takes many minutes, but Dactyl eventually solves the puzzle. It's somewhat unsettling to watch, if only because the movements are noticeably less fluid than a human's, and they look especially halting compared with the blinding speed and raw dexterity on display when a human speedcuber solves the cube in a matter of seconds.

But for OpenAI, Dactyl's success brings it one step closer to a goal the AI and robotics industries have long sought: a robot that can learn to perform a variety of real-world tasks without having to train in the physical world for months or years, and without needing to be specifically programmed.



Image: OpenAI

"Many robots can solve Rubik's dice very quickly. The important difference between what they did there and what we do here is that these robots are very purpose-built, "says Peter Welinder, a research scientist and head of robotics at OpenAI. "Obviously, there is no way to use the same robot or the same approach to perform another task." The robot team at OpenAI has quite different ambitions, we are trying to build a general-purpose robot, much as humans and our human hands can do a lot of things "And not just a specific task, we try to build something that is much more general in scope."

Welinder points to the series of robots that in recent years have solved Rubik's Cubes far faster than any human hand ever could. Semiconductor maker Infineon developed a robot specifically to solve a Rubik's Cube at superhuman speed in 2016, and the bot managed it in under a second, smashing the then world record of just under five seconds. MIT engineers later got that down to less than 0.4 seconds. In 2018, a Japanese YouTube channel called Human Controller even developed a self-solving Rubik's Cube with a 3D-printed core attached to programmable servo motors.

In other words, a robot built for one specific task and programmed to perform it as efficiently as possible can usually best a human, and solving the Rubik's Cube is something software has long since mastered. So developing a robot to solve the cube, even a humanoid one, is not especially remarkable on its own, particularly at the slow pace at which Dactyl works.

But OpenAI's Dactyl robot, and the software that powers it, differ markedly in design and purpose from a dedicated cube-solving machine. As Welinder says, OpenAI's ongoing robotics work is not aimed at achieving superior results on narrow tasks, since that only requires building a better robot and programming it accordingly, something that can be done without modern artificial intelligence.

Instead, Dactyl was designed from the ground up as a self-learning robotic hand that approaches new tasks much as a human would. It is trained with software that tries, in a rudimentary way for now, to replicate the millions of years of evolution that let us learn to use our hands instinctively as children. OpenAI hopes that could one day help humanity develop the kind of humanoid robots we know only from science fiction: robots that can safely operate in society without endangering us and that can perform a variety of tasks in environments as chaotic as city streets and factory floors.

To learn how to solve a Rubik's Cube one-handed, OpenAI did not explicitly program Dactyl to solve the toy; software freely available on the web can do that for you. It also chose not to program individual movements for the hand, because it wanted the hand to discover those motions on its own. Instead, the robotics team gave the hand's underlying software the end goal of solving a scrambled cube and used modern AI, specifically an incentive-based, trial-and-error technique known as reinforcement learning, to guide it toward finding a solution on its own. The same approach to training AI agents is how OpenAI developed its world-class Dota 2 bot.
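As a rough, purely illustrative sketch of how such a reinforcement-learning loop works, the Python snippet below has an agent propose joint movements, receive a reward tied to progress toward a solved cube, and update itself from that feedback. The environment interface, the random placeholder policy, and the reward shaping are assumptions made for illustration, not OpenAI's actual system.

import numpy as np

class RandomPolicy:
    """Placeholder agent: samples random joint actions.
    A real agent would be a neural network updated from the rewards it collects."""
    def __init__(self, action_dim):
        self.action_dim = action_dim

    def act(self, observation):
        # Propose torques/angles for each joint of the hand.
        return np.random.uniform(-1.0, 1.0, size=self.action_dim)

    def update(self, observation, action, reward):
        pass  # a real policy-gradient or value-based update would go here

def train(env, policy, episodes=1000):
    # env is assumed to follow the common reset()/step() convention and to
    # return a reward that grows as the cube gets closer to being solved.
    for episode in range(episodes):
        obs = env.reset()          # start from a freshly scrambled cube
        done, total_reward = False, 0.0
        while not done:
            action = policy.act(obs)
            obs, reward, done, info = env.step(action)
            policy.update(obs, action, reward)
            total_reward += reward
        print(f"episode {episode}: return {total_reward:.2f}")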

Until recently, however, it was far easier to train an AI agent to do things virtually, such as play a computer game, than to perform a real-world task. That's because training software can be accelerated in a virtual world, allowing an AI agent to accumulate tens of thousands of years' worth of experience in just a few months across thousands of high-end CPUs and ultra-powerful GPUs working in parallel.

Doing that same kind of accelerated training with a physical robot is not feasible. That is why OpenAI is trying to pioneer new methods of robot training that use simulated environments in place of the real world, something the robotics industry has barely experimented with. The software can then practice at an accelerated pace across many different machines at once, with the hope that it retains that knowledge when it starts controlling a real robot.
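To give a flavor of how simulated training can be parallelized and run faster than real time, here is a hedged, self-contained Python sketch that steps many copies of a toy simulated environment at once using the standard multiprocessing module. The SimulatedHandEnv class and its episode lengths are placeholder assumptions, not OpenAI's simulator.

from multiprocessing import Pool
import random

class SimulatedHandEnv:
    """Toy stand-in for a physics simulation of the robotic hand."""
    def run_episode(self, seed):
        random.seed(seed)
        # Pretend each simulated episode gathers a variable amount of experience.
        return random.randint(100, 500)

def collect(seed):
    # Each worker process builds its own simulator and runs one episode.
    return SimulatedHandEnv().run_episode(seed)

if __name__ == "__main__":
    # Eight worker processes step eight simulations concurrently, so experience
    # accumulates roughly eight times faster than a single real-time copy would allow.
    with Pool(processes=8) as pool:
        experience = pool.map(collect, range(64))  # 64 episodes, run in parallel batches
    print(f"collected {sum(experience)} simulated timesteps")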

Because of these training limitations, as well as obvious safety concerns, robots in commercial use today employ very little AI and are instead programmed with highly specific instructions. "In the past, you used very specialized algorithms to solve tasks where you have an exact model of both the robot and the environment you are working in," says Welinder. "For a factory robot, you have very accurate models and know exactly the environment you are working in. You know exactly how the particular part will work."

That's why the robots in use today are far less versatile than humans. It takes a great deal of time, effort, and money to reprogram a robot that assembles, say, one particular part of an automobile or a computer component to do something else, to say nothing of getting a robot to take on a task it was never trained for in the first place. With modern AI techniques, however, robots could be modeled after humans, so that they could use the same intuitive understanding of the world to do everything from opening doors to far more complex tasks. At least, that's the dream.

We're still decades away from that level of sophistication, and the leaps the AI community has made on the software side, in areas like self-driving cars, machine translation, and image recognition, have not exactly carried over to next-generation robots. Right now, OpenAI is simply trying to mimic the complexity of a single human body part and to make its robotic analogue behave more naturally.

That's why Dactyl is a 24-joint robotic hand modeled on a human hand, instead of the claw- or pincer-style grippers you see in factories. And the software that lets Dactyl learn to use all of those joints the way a human would was given the equivalent of thousands of years of simulation training before ever attempting the physical cube solve.


Image: OpenAI

"When you train things on the real robot, obviously everything you learn works on what you really want to implement your algorithm for. That way, it's a lot easier. However, algorithms today require a lot of data. Training a robot in the real world and completing complex tasks requires years of experience, "says Welinder. "Even for a human, it takes a few years, and humans have millions of years of evolution to have the ability to learn to operate a hand."

In a simulation, however, Welinder says training can be sped up, much as it can with games and other tasks popular as AI benchmarks. "It takes on the order of thousands of years to train the algorithm. But that only takes a few days because we can parallelize the training. And you do not have to worry about the robots breaking or hurting someone while you're training these algorithms," he adds. Researchers have historically run into significant problems, however, when trying to transfer virtual training onto physical robots. According to OpenAI, it is one of the first organizations to make real progress on that front.

When Dactyl was given a real cube, it drew on its training and solved the puzzle on its own under a variety of conditions it had never been explicitly trained for. That includes solving the cube while wearing a glove and with two fingers taped together, and while OpenAI team members continually interfered with it by nudging it with other objects and showering it with bubbles and confetti-like bits of paper.

"We found that in all these disturbances, the robot was still able to successfully turn the Rubik's cube. But that did not happen in training, "says Matthias Plappert, Welinders OpenAI team leader for robotics. "The robustness we found when we tried this on the physical robot was surprising to us."

For this reason, OpenAI views Dactyl's newly acquired skill as an advance for both robot hardware and its approach to AI training. Even the world's most advanced robots, such as the humanoid and canine robots developed by industry leader Boston Dynamics, cannot operate autonomously and require extensive task-specific programming as well as frequent human intervention to carry out even basic actions.

According to OpenAI, Dactyl is a small but crucial step toward the kind of robot that might one day perform manual labor or household chores and even work alongside humans, rather than in closed-off environments, all without explicit programming governing its actions.

In this vision for the future, the ability of robots to learn new tasks and adapt to changing environments will depend on both the flexibility of the AI and the robustness of the physical machine. "These methods really begin to demonstrate that these are the solutions to overcoming all the inherent complications and disorder of the physical world we live in," says Plappert.

