
What virtual reality can teach an autonomous vehicle

Source – lasvegassun.com

An undated handout image of a simulation for the virtual testing of Waymo’s driverless technology.

SAN FRANCISCO — As the computers that operate driverless cars digest the rules of the road, some engineers think it might be nice if they could learn from mistakes made in virtual reality rather than on real streets.

Companies like Toyota, Uber and Waymo have discussed at length how they are testing autonomous vehicles on the streets of Mountain View, California, Phoenix and other cities. What is not as well known is that they are also testing vehicles inside computer simulations of these same cities. Virtual cars, equipped with the same software as the real thing, spend thousands of hours driving through these digital worlds.

Think of it as a way of identifying flaws in the way the cars operate without endangering real people. If a car makes a mistake on a simulated drive, engineers can tweak its software accordingly, laying down new rules of behavior. On Monday, Waymo, the autonomous car company that spun out of Google, is expected to show off its simulator tests when it takes a group of reporters to its secretive testing center in California’s Central Valley.

Researchers are also developing methods that would allow cars to actually learn new behavior from these simulations, gathering skills more quickly than human engineers could ever lay them down with explicit software code. “Simulation is a tremendous thing,” said Gill Pratt, chief executive of the Toyota Research Institute, one of the artificial intelligence labs exploring this kind of virtual training for autonomous vehicles and other robotics.

These methods are part of a sweeping effort to accelerate the development of autonomous cars through machine learning. When Google designed its first self-driving cars nearly a decade ago, engineers built most of the software line by line, carefully coding each tiny piece of behavior. But increasingly, thanks to recent improvements in computing power, autonomous carmakers are embracing complex algorithms that can learn tasks on their own, like identifying pedestrians on the roadways or predicting what will happen next around the vehicle.

“This is why we think we can move fast,” said Luc Vincent, who recently started an autonomous vehicle project at Lyft, Uber’s main rival. “This stuff didn’t exist 10 years ago when Google started.”

There are still questions hanging over this research. Most notably, because these algorithms learn by analyzing more information than any human ever could, it is sometimes difficult to audit their behavior and understand why they make particular decisions. But in the years to come, machine learning will be essential to the continued progress of autonomous vehicles.

Today’s vehicles are not nearly as autonomous as they may seem. After 10 years of research, development and testing, Google’s cars are poised to offer public rides on the streets of Arizona. Waymo, which operates under Google’s parent company, Alphabet, is preparing to start a taxi service near Phoenix, according to a recent report, and unlike other services, it will not put a human behind the wheel as a backup. But its cars will still be on a tight leash.

For now, if it doesn’t carry a backup driver, any autonomous vehicle will probably be limited to a small area with large streets, little precipitation, and relatively few pedestrians. And it will drive at low speeds, often waiting for extended periods before making a left-hand turn or merging into traffic without the help of a stoplight or street sign — if it doesn’t avoid these situations altogether.

At the leading companies, the belief is that these cars can eventually handle more difficult situations with help from continued development and testing, new sensors that can provide a more detailed view of the surrounding world and machine learning.

Waymo and many of its rivals have embraced deep neural networks, complex algorithms that can learn tasks by analyzing data. By analyzing photos of pedestrians, for example, a neural network can learn to identify a pedestrian. These kinds of algorithms are also helping to identify street signs and lane markers, predict what will happen next on the road, and plan routes forward.
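
To make the idea concrete, here is a minimal sketch of the kind of supervised training described above: a deliberately small neural network learning to label images as “pedestrian” or “not pedestrian” from labeled examples. The network, the training loop, and the random stand-in data are illustrative only and do not reflect any company’s actual perception system.

```python
# Minimal sketch of supervised image classification, the kind of task the
# article describes. The tiny network and random stand-in data are
# illustrative only -- not any company's actual perception stack.
import torch
import torch.nn as nn

class TinyPedestrianNet(nn.Module):
    """A deliberately small CNN that maps a 3x64x64 image to two scores:
    'pedestrian' vs 'not pedestrian'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyPedestrianNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a labeled dataset: random images with random 0/1 labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

for step in range(5):          # a real system trains on millions of images
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

In a production system, the same loop would run over millions of labeled camera images rather than a handful of random tensors, which is exactly where the data problem below comes in.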

The trouble is that this requires enormous amounts of data collected by cameras, radar and other sensors that document real-world objects and situations. And humans must label this data, identifying pedestrians, street signs and the like. Gathering and labeling data describing every conceivable situation is an impossibility. Data on accidents, for instance, is hard to come by. This is where simulations can help.

Recently, Waymo unveiled a roadway simulator it calls Carcraft. Today, the company said, this simulator provides a way of testing its cars at a scale that is not possible in the real world. Its cars can spend far more time on virtual roads than the real thing. Presumably, like other companies, Waymo is also exploring ways its algorithms can learn new behavior from this kind of simulator.

Pratt said Toyota is using images of simulated roadways to train neural networks, and this approach has yielded promising results. In other words, the simulations are similar enough to the physical world to reliably train the systems that operate the cars.

Part of the advantage with a simulator is that researchers have complete control over it. They need not spend time and money labeling images — and potentially making mistakes with these labels. “You have ground truth,” Pratt explained. “You know where every car is. You know where every pedestrian is. You know where every bicycler is. You know the weather.”
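
Pratt’s point about “ground truth” can be sketched in a few lines: because a simulator places every object itself, each rendered frame comes with exact labels attached, with no human annotators required. The ToySimulator and SceneObject types below are hypothetical stand-ins, not part of any real tool.

```python
# Sketch of why a simulator gives "ground truth" for free: the virtual world
# already knows the class and position of every object it renders, so a
# labeled training example falls out of each frame with no human annotation.
# The ToySimulator/SceneObject types here are hypothetical, for illustration.
import random
from dataclasses import dataclass

@dataclass
class SceneObject:
    label: str        # "pedestrian", "car", "bicycle", ...
    x: float          # position in metres, in the simulator's own frame
    y: float

@dataclass
class LabeledFrame:
    pixels: list            # stand-in for a rendered camera image
    annotations: list       # ground-truth objects, known exactly

class ToySimulator:
    def step(self) -> LabeledFrame:
        objects = [
            SceneObject("pedestrian", random.uniform(-5, 5), random.uniform(5, 30)),
            SceneObject("car", random.uniform(-5, 5), random.uniform(10, 50)),
        ]
        rendered = [0.0] * (64 * 64)    # placeholder for the rendered image
        return LabeledFrame(pixels=rendered, annotations=objects)

sim = ToySimulator()
frame = sim.step()
for obj in frame.annotations:
    print(f"{obj.label} at ({obj.x:.1f}, {obj.y:.1f})  <- label known exactly, no human needed")
```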

Others are exploring a more complex method called reinforcement learning. This is a major area of research inside many of the world’s top artificial intelligence labs, including DeepMind (the London-based lab owned by Google), the Berkeley AI Research Lab, and OpenAI (the San Francisco-based lab founded by Tesla Chief Executive Elon Musk and others). These labs are building algorithms that allow machines to learn tasks inside virtual worlds through intensive trial and error.
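
A minimal illustration of the trial-and-error idea, assuming a toy three-lane “road” with an obstacle in the middle lane: the agent starts with no knowledge, tries actions at random, and gradually learns which lane choices earn reward. Real systems use far richer simulated environments and neural networks in place of the small lookup table here.

```python
# Minimal tabular Q-learning sketch: an agent learns by trial and error in a
# toy "road" with three lanes, where an obstacle sits in the middle lane.
# This shows the bare idea behind reinforcement learning, not a driving system.
import random
from collections import defaultdict

LANES = [0, 1, 2]            # left, middle, right
ACTIONS = [-1, 0, 1]         # steer left, keep lane, steer right
OBSTACLE_LANE = 1

def step(lane, action):
    """Apply an action; reward is -1 for ending in the obstacle lane, +1 otherwise."""
    new_lane = min(max(lane + action, 0), 2)
    reward = -1.0 if new_lane == OBSTACLE_LANE else 1.0
    return new_lane, reward

q = defaultdict(float)       # q[(lane, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.1

lane = random.choice(LANES)
for episode in range(5000):
    # Epsilon-greedy: mostly exploit what has worked, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(lane, a)])
    new_lane, reward = step(lane, action)
    best_next = max(q[(new_lane, a)] for a in ACTIONS)
    q[(lane, action)] += alpha * (reward + gamma * best_next - q[(lane, action)])
    lane = new_lane

# After training, the learned policy avoids the middle lane from every start.
print({l: max(ACTIONS, key=lambda a: q[(l, a)]) for l in LANES})
```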

DeepMind used this method to build a machine that could play the ancient game Go better than any human. In essence, the machine played thousands upon thousands of Go games against itself, carefully recording which moves proved successful and which didn’t. And now, DeepMind and other leading labs are using similar techniques in building machines that can play complex video games like “StarCraft.”

That may seem frivolous. But the thinking is that if machines can learn to navigate these virtual worlds, those skills will eventually help them make their way through the physical world.

Inside Uber’s autonomous car operation, for example, researchers have trained systems to play the popular open-world driving game “Grand Theft Auto,” with an eye toward applying these methods, eventually, to real-world cars. Training systems in simulations of physical locations is the next step.

Bridging the gap between the virtual and the physical is no easy task, Pratt said. And companies must also ensure that algorithms don’t learn unexpected or harmful behavior while learning on their own. That is a big worry among artificial intelligence researchers.

For this and other reasons, companies like Toyota and Waymo are not building these cars solely around machine learning. They also hand-code software in more traditional ways in an effort to guarantee certain behaviors. Waymo’s cars do not learn to stop at stoplights, for example; there is a hard-and-fast rule that they stop.
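
That hybrid approach can be sketched roughly as follows: a learned component proposes an action, and a hand-written rule overrides it in safety-critical cases such as a red light. The function and type names below are hypothetical, used only to illustrate the division of labor.

```python
# Sketch of the hybrid approach described above: a learned component proposes
# a driving action, but a hand-coded rule overrides it in safety-critical
# cases such as a red light. The names are illustrative, not any company's
# actual control stack.
from dataclasses import dataclass

@dataclass
class Observation:
    light_is_red: bool
    clear_ahead: bool

def learned_policy(obs: Observation) -> str:
    """Stand-in for a machine-learned planner; here just a trivial heuristic."""
    return "proceed" if obs.clear_ahead else "slow"

def drive(obs: Observation) -> str:
    action = learned_policy(obs)
    # Hard-and-fast rule, written by hand: never run a red light,
    # regardless of what the learned component suggests.
    if obs.light_is_red:
        return "stop"
    return action

print(drive(Observation(light_is_red=True, clear_ahead=True)))    # -> "stop"
print(drive(Observation(light_is_red=False, clear_ahead=True)))   # -> "proceed"
```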

But the industry is headed toward more machine learning, not less. It provides a better way to train the car to do tasks like identifying lane markers, said Dmitri Dolgov, Waymo’s vice president of engineering. But it becomes even more important, he explained, when a car needs a much deeper understanding of the world around it. “Robotics and machine learning go hand in hand,” he said.
