
Why Uber's self-driving crash is confusing to humans



Anyone who works in the autonomous vehicle space will tell you it was inevitable. In America, and in the rest of the world, cars kill people: around 40,000 in the US and 1.25 million worldwide every year. High speeds, metal boxes. Self-driving cars would be better. But nobody promised perfection. Eventually, one of them would hurt somebody.

Yet the death of Elaine Herzberg, struck two weeks ago by a self-driving Uber in Tempe, Arizona, came as a shock. Even more so after the Tempe Police Department released a video of the incident showing both the outside view, in which a low-quality dash camera captures the victim suddenly emerging from the roadside darkness, and the inside view of Uber's safety driver, the woman tasked with watching the road and taking control of the vehicle if the technology failed, looking away from it.

But the shock cut in different ways. Most of the world saw the video and thought: gee, this crash seemed unavoidable. In an interview with the San Francisco Chronicle, Tempe's police chief even suggested as much. "It's very clear that it would have been difficult to avoid this collision in any mode (autonomous or human-driven) based on how she came out of the shadows directly onto the road," Chief Sylvia Moir said. (The police department later said some of Moir's comments were "taken out of context.")

But autonomous vehicle developers were disturbed. Many experts say the car should have detected a pedestrian crossing a wide, open road. "I think the sensors on the vehicles should have seen the pedestrian in advance," UC Berkeley research engineer Steven Shladover told WIRED. "This should have been easy." Something went wrong here.

This gap, between a public accustomed to the particular weaknesses of human drivers and the engineers developing self-driving technology, matters, because Uber's self-driving crash will not be the last. As companies like General Motors, Ford, Aptiv, Zoox, and Waymo continue to test their vehicles on public roads, there will be more dust-ups, fender benders, and, yes, crashes that maim and kill.

"The argument is that the accident rate should decrease when autonomy has matured to a certain level," says Mike Wagner, co-founder and CEO of Edge Case Research, helping robotics companies develop more robust software. "But how we get from here to there is not always clear, especially when it requires a lot of testing on the road."

As companies test and tweak their machine learning algorithms, expect these vehicles to get confused in particular ways. The crashes that seem unavoidable to humans are exactly the ones this technology should prevent. Maneuvers that seem easy to humans will stump the robots. One day, self-driving cars may well be much safer than human drivers. In the meantime, it helps to understand how these vehicles work, and the strange ways in which they go wrong.

Sensors

Sensors are a self-driving car's eyes; they help the vehicle understand what is around it. Cameras are great for capturing lane lines and signs, but they capture data in 2-D. Radar is cheap, works well over long distances, and can "see" through some obstructions, but it offers little detail.

This is where lidar comes into play. The laser sensor uses light pulses to draw a 3-D picture of the world around it. The lidar Uber uses, built by a company called Velodyne, is considered by many to be the best system on the market. (At around $80,000, it is also one of the most expensive.) But even the best lidar works a bit like the game Battleship. The laser pulses must hit enough parts of an object, within a few seconds, to build a detailed understanding of its shape. Ideally, the sensor delivers an accurate reading of the world, the kind of well-informed guess that could help one player sink (or, in this case, avoid) another's fleet.

But it is possible, especially when the vehicle is moving at high speed, for the lasers not to land on the right things. That may be especially true, experts say, when an object like Herzberg is moving perpendicular to the vehicle. This will sound strange to human drivers, who are much more likely to recognize a person or a bicycle when their full forms are revealed in profile. But a system that gets a less consistent view of an object as it moves from second to second has a harder time interpreting and classifying what it is seeing.
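To make the Battleship analogy concrete, here is a minimal, hypothetical sketch of the kind of check a perception system might run: did this sweep return enough lidar points on an object to work out its shape? The names and thresholds are invented for illustration, not taken from Uber's or Velodyne's systems.

```python
# Hypothetical sketch: does a lidar sweep land enough returns on an object
# to describe its shape? Threshold and names are illustrative only.
import numpy as np

MIN_POINTS_FOR_SHAPE = 30  # assumed cutoff; real systems tune this carefully

def enough_returns(point_cloud: np.ndarray, bbox_min: np.ndarray, bbox_max: np.ndarray) -> bool:
    """Count lidar returns that fall inside an object's bounding box."""
    inside = np.all((point_cloud >= bbox_min) & (point_cloud <= bbox_max), axis=1)
    return int(inside.sum()) >= MIN_POINTS_FOR_SHAPE

# Simulated sweep: 2,000 returns scattered over the scene (x, y, z in meters).
cloud = np.random.uniform(-20, 20, size=(2000, 3))
# Check a narrow, person-sized region a few meters ahead of the car.
print(enough_returns(cloud, np.array([4.0, -0.5, 0.0]), np.array([4.6, 0.5, 1.8])))
```

A narrow object crossing perpendicular to the car can leave only a handful of points per sweep, which is exactly when a check like this comes up short.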

Classification

And classification is key. These systems "learn" about the world through machine learning: they must be fed a giant set of labeled road images (curbs, pedestrians, cyclists, lane markings) before they can identify objects on their own.
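For readers curious what that training process looks like in practice, here is a toy sketch of the supervised learning loop described above. It uses PyTorch; random tensors stand in for the labeled road images, and the tiny network and class names are illustrative rather than anything from Uber's system.

```python
# Toy sketch of supervised learning on labeled road imagery.
# Random tensors stand in for real camera frames; classes are illustrative.
import torch
import torch.nn as nn

CLASSES = ["pedestrian", "cyclist", "curb", "lane_marking", "plastic_bag"]

model = nn.Sequential(                       # deliberately tiny stand-in network
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, len(CLASSES)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                      # loop: images in, labels out, weights updated
    images = torch.randn(16, 3, 64, 64)      # placeholder for labeled road images
    labels = torch.randint(0, len(CLASSES), (16,))
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Real systems differ in scale, not in kind: the loop is the same, but it runs over millions of carefully labeled images.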

But the system can go wrong. It is possible that Uber's engineers messed up the machine learning process, and that the self-driving car's software interpreted a pedestrian and her bike as a plastic bag or a piece of cardboard. Even small things have been shown to fool these systems, like a few strips of tape on a stop sign. Self-driving cars have also been known to see shimmering exhaust as a solid object.

Wagner, who has studied this problem, found that one system could not see through certain types of weather, even when objects remained completely visible to the human eye. "If there was the least amount of fog, the neural network lost it," he says.

If the classification is off, the system's predictions may be off too. These systems expect people to move in certain ways and plastic bags to move in others, so that kind of prediction can be botched as well. If classification is the problem, Uber might need to collect an extra 100,000-plus images to retrain its system.
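The dependence of prediction on classification can be shown with a small, hypothetical sketch: the label an object receives selects the motion model used to forecast it, so a pedestrian mislabeled as a plastic bag inherits the wrong expectations. All names and numbers below are invented for illustration, not drawn from any real planner.

```python
# Hypothetical class-dependent motion models: the label picks the forecast.
EXPECTED_BEHAVIOR = {
    "pedestrian":  {"max_speed_mps": 2.5, "keeps_heading": True},
    "cyclist":     {"max_speed_mps": 8.0, "keeps_heading": True},
    "plastic_bag": {"max_speed_mps": 1.0, "keeps_heading": False},  # drifts, safe to ignore
}

def predict_position(label: str, position: float, speed_mps: float, horizon_s: float) -> float:
    """Project an object forward along one axis under its class's motion model."""
    model = EXPECTED_BEHAVIOR[label]
    speed = min(abs(speed_mps), model["max_speed_mps"])
    return position + speed * horizon_s if model["keeps_heading"] else position

# The same observation yields very different forecasts under different labels.
print(predict_position("pedestrian", position=0.0, speed_mps=1.4, horizon_s=3.0))   # 4.2 m ahead
print(predict_position("plastic_bag", position=0.0, speed_mps=1.4, horizon_s=3.0))  # stays put
```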

The software

Or crashes like Uber's could be caused by bugs. Autonomous vehicles run on hundreds of thousands of lines of code, and an engineer could have introduced a problem somewhere. Or maybe the system mistakenly discarded the sensor data that would have identified, and then tracked, the woman.
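As a purely hypothetical illustration of that last failure mode, consider a filtering step that drops low-confidence detections before the tracker ever sees them; set the cutoff too high and a real pedestrian can vanish from the pipeline. Nothing here reflects Uber's actual code; the names and threshold are invented.

```python
# Hypothetical filtering bug: low-confidence detections are discarded
# before tracking, so a faint but real object is never followed.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str
    confidence: float

CONFIDENCE_FLOOR = 0.6  # assumed cutoff; too high, and real objects disappear

def keep_for_tracking(detections: List[Detection]) -> List[Detection]:
    """Pass only confident detections to the tracker, silently dropping the rest."""
    return [d for d in detections if d.confidence >= CONFIDENCE_FLOOR]

frame = [Detection("pedestrian", 0.42), Detection("lane_marking", 0.95)]
print(keep_for_tracking(frame))  # the low-confidence pedestrian never reaches the tracker
```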

Most likely, this crash and future crashes will be combinations of many things. "I suspect that this is the result of a complex sequence of events that have never happened before," says Raj Rajkumar, who studies autonomous systems at Carnegie Mellon University. In other words, a perfect storm: one system failed, and the backup failed too. The last fail-safe, the element that should intervene at the final moment to avoid danger, also failed. That was the human safety driver.

"One of the processes of building a robot that has to do real things is that real things are incredibly complicated and difficult to understand," says Wagner. Robots do not understand eye contact or waves or nodding. You may think that random things are walls or bushes or paper bags. Their mistakes will seem mysterious and alarming to the human eye. But those who develop the Tech will survive – for being drunk, sleepy, or distracted will make the robots seem mysterious.

