Machine Learning to Help Optimize Traffic and Reduce Pollution

Source: newscenter.lbl.gov

Applying artificial intelligence to self-driving cars to smooth traffic, reduce fuel consumption, and improve air quality predictions may sound like the stuff of science fiction, but researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have launched two research projects to do just that.

In collaboration with UC Berkeley, Berkeley Lab scientists are using deep reinforcement learning, a computational tool for training controllers, to make transportation more sustainable. One project uses deep reinforcement learning to train autonomous vehicles to drive in ways to simultaneously improve traffic flow and reduce energy consumption. A second uses deep learning algorithms to analyze satellite images combined with traffic information from cell phones and data already being collected by environmental sensors to improve air quality predictions.

“Thirty per cent of energy use in the U.S. is to transport people and goods, and this energy consumption contributes to air pollution, including approximately half of all nitrogen oxide emissions – a precursor to particulate matter and ozone – and black carbon (soot) emissions,” said Tom Kirchstetter, director of Berkeley Lab’s Energy Analysis and Environmental Impacts Division, an adjunct professor at UC Berkeley, and a member of the research team.

“Applying machine learning technologies to transportation and the environment is a new frontier that could pay significant dividends – for energy as well as for human health.”

Traffic smoothing with Flow

The traffic-smoothing project, dubbed CIRCLES, or Congestion Impact Reduction via CAV-in-the-loop Lagrangian Energy Smoothing, is led by Berkeley Lab researcher Alexandre Bayen, who is also a professor of electrical engineering and computer science at UC Berkeley and director of UC Berkeley’s Institute of Transportation Studies. CIRCLES is based on a software framework called Flow, developed by Bayen’s team of students and postdoctoral researchers.

Flow is a first-of-its-kind software framework that allows researchers to discover and benchmark schemes for optimizing traffic. Using a state-of-the-art open-source traffic microsimulator, Flow can simulate hundreds of thousands of vehicles – some driven by humans, others autonomous – in custom traffic scenarios.
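
As a rough illustration of what a "custom traffic scenario" with a mixed fleet means, the sketch below describes a ring road shared by 21 human-modeled cars and one learned controller. The names here (ScenarioConfig, VehicleSpec) are hypothetical placeholders for illustration only, not Flow's actual API; the real framework drives the microsimulator under the hood.

```python
from dataclasses import dataclass, field

@dataclass
class VehicleSpec:
    controller: str   # "human" (car-following model) or "rl" (learned policy)
    count: int

@dataclass
class ScenarioConfig:
    network: str                        # e.g. "ring", "figure-eight", "merge"
    length_m: float                     # road length in meters
    vehicles: list = field(default_factory=list)
    horizon_s: float = 600.0            # simulated time per training episode

# 21 human-modeled cars plus one learned controller on a 230 m ring,
# mirroring the closed-track experiments described below.
scenario = ScenarioConfig(
    network="ring",
    length_m=230.0,
    vehicles=[VehicleSpec("human", 21), VehicleSpec("rl", 1)],
)
print(scenario)
```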

“The potential for cities is enormous,” said Bayen. “Experiments have shown that the energy savings with just a small percentage of vehicles on the road being autonomous can be huge. And we can improve it even further with our algorithms.”

Flow was launched in 2017 and released to the public in September, and the benchmarks are being released this month. With funding from the Laboratory Directed Research and Development program, Bayen and his team will use Flow to design, test, and deploy the first connected and autonomous vehicle (CAV)-enabled system to actively reduce stop-and-go phantom traffic jams on freeways.

How reinforcement learning can reduce congestion

Some of the current research into using autonomous vehicles to smooth traffic was inspired by a simple experiment Japanese researchers ran 10 years ago, in which about 20 human drivers were instructed to drive around a ring at 20 mph. At first everyone proceeded smoothly, but within 30 seconds traffic waves formed and cars came to a standstill.

“You have stop-and-go oscillation within less than a minute,” Bayen said. “This experiment led to hundreds if not thousands of research papers to try to explain what is happening.”
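
The instability can be reproduced with a classic car-following model. The sketch below runs the optimal-velocity model of Bando et al. on a closed ring with 22 simulated cars. It is a qualitative illustration, not the researchers' code, and the parameters are assumed rather than taken from the experiment; with them, a one-car perturbation grows into stop-and-go waves much like the ones the human drivers produced.

```python
import numpy as np

N = 22          # vehicles on the ring
L = 230.0       # ring circumference (m)
a = 2.0         # driver sensitivity (1/s); too low to damp the waves here
v_max = 12.0    # maximum desired speed (m/s)
h_c = 10.0      # headway at which desired speed is about half of v_max (m)
dt = 0.05       # integration step (s)

def desired_speed(headway):
    """Optimal-velocity function: desired speed as a function of the gap ahead."""
    return v_max * (np.tanh(headway - h_c) + np.tanh(h_c)) / (1.0 + np.tanh(h_c))

# Start evenly spaced at the equilibrium speed, then nudge one car.
x = np.linspace(0.0, L, N, endpoint=False)
v = np.full(N, desired_speed(L / N))
v[0] -= 1.0

for _ in range(int(300 / dt)):           # simulate 300 seconds
    headway = (np.roll(x, -1) - x) % L   # gap to the car ahead, wrapped around the ring
    v = np.maximum(v + a * (desired_speed(headway) - v) * dt, 0.0)
    x = (x + v * dt) % L

# A large spread between the fastest and slowest car indicates stop-and-go waves.
print(f"speed spread after 300 s: {v.max() - v.min():.1f} m/s")
```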

A team of researchers led by Dan Work of Vanderbilt University repeated the experiment last year with one change: they added a single autonomous vehicle to the ring. As soon as the automation was turned on, the oscillations were immediately smoothed out.

Why? “The automation essentially understands to not accelerate and catch up with the previous person – which would amplify the instability – but rather to behave as a flow pacifier, essentially smoothing down by restraining traffic so that it doesn’t amplify the instability,” Bayen said.
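
A toy controller makes the idea concrete. This is not the controller used in the experiments or the field test; it simply encodes the rule Bayen describes: hold a steady pacing speed rather than accelerating to close every gap, and slow down only when the car ahead is genuinely close.

```python
def pacifier_speed(gap_m, lead_speed_ms, pace_ms, safe_gap_m=10.0):
    """Commanded speed (m/s) for the automated vehicle.

    gap_m         -- distance to the vehicle ahead (m)
    lead_speed_ms -- current speed of the vehicle ahead (m/s)
    pace_ms       -- steady speed chosen to pace the traffic behind (m/s)
    """
    if gap_m < safe_gap_m:
        # Close to the leader: never go faster than it, and scale down with the gap.
        return min(pace_ms, lead_speed_ms) * (gap_m / safe_gap_m)
    # Plenty of room: do NOT accelerate to catch up -- holding a steady pace
    # is what keeps the stop-and-go wave from being amplified behind the vehicle.
    return pace_ms
```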

Deep reinforcement learning has been used to train computers to play chess and to teach a robot how to run an obstacle course. It trains by “taking observations of the system, and then iteratively trying out a bunch of actions, seeing if they’re good or bad, and then picking out which actions it should prioritize,” said Eugene Vinitsky, a graduate student working with Bayen and one of Flow’s developers.

In the case of traffic, Flow trains vehicles to check what the cars directly in front of and behind them are doing. “It tries out different things – it can accelerate, decelerate, or change lanes, for example,” Vinitsky explained. “You give it a reward signal, like, was traffic stopped or flowing smoothly, and it tries to correlate what it was doing to the state of the traffic.”
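
Schematically, the training loop Vinitsky describes looks like the following. The `env` and `policy` interfaces here are assumed, gym-style placeholders rather than Flow's actual classes: the observation is the speeds and gaps of the neighboring cars, the action is to accelerate, decelerate, or change lanes, and the reward reflects whether traffic was stopped or flowing smoothly.

```python
def train(env, policy, episodes=100, horizon=600):
    """Generic deep-reinforcement-learning loop (assumed interfaces, not Flow's API).

    env    -- exposes reset() and step(action), returning the speeds and gaps
              of the cars directly in front of and behind the automated vehicle
    policy -- exposes act(), record(), and update()
    """
    for _ in range(episodes):
        obs = env.reset()                       # take observations of the system
        for _ in range(horizon):
            action = policy.act(obs)            # try an action: accelerate, brake, change lanes
            obs, reward, done, _ = env.step(action)
            policy.record(obs, action, reward)  # correlate the action with the state of traffic
            if done:
                break
        policy.update()                         # prioritize the actions that earned reward
```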

With the CIRCLES project, Bayen and his team plan to first run simulations to confirm that significant energy savings result from using the algorithms in autonomous vehicles. Next they will run a field test of the algorithm with human drivers responding to real-time commands.

DeepAir

The pollution project, named DeepAir (Deep Learning and Satellite Imagery to Estimate Air Quality Impact at Scale), is led by Berkeley Lab researcher Marta Gonzalez, who is also a professor in UC Berkeley’s City & Regional Planning Department. In past research, she has used cell phone data to study how people move around cities and to recommend electric vehicle charging schemes to save energy and costs.

For this project, she will take advantage of the power of deep learning algorithms to analyze satellite images combined with traffic information from cell phones and data already being collected by environmental monitoring stations.

“The novelty here is that while the environmental models, which show the interaction of pollutants with weather – such as wind speed, pressure, precipitation, and temperature – have been developed for years, there’s a missing piece,” Gonzalez said. “In order to be reliable, those models need to have good inventories of what’s entering the environment, such as emissions from vehicles and power plants.

“We bring novel data sources such as mobile phones, integrated with satellite images. In order to process and interpret all this information, we use machine learning models applied to computer vision. The integration of information technologies to better understand complex natural system interactions at large scale is the innovative piece of DeepAir.”
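
As a purely illustrative sketch of that kind of data fusion (not the DeepAir model itself, whose architecture is not described here), a small network might combine a satellite image patch around a monitoring station with tabular traffic and weather features to predict a pollutant concentration. All layer sizes and feature counts below are assumptions.

```python
import torch
import torch.nn as nn

class AirQualityNet(nn.Module):
    """Toy fusion model: satellite image patch + tabular traffic/weather features."""

    def __init__(self, image_channels=4, n_tabular_features=6):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(image_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32 features per patch
        )
        self.tabular_branch = nn.Sequential(
            nn.Linear(n_tabular_features, 32), nn.ReLU(),
        )
        self.head = nn.Linear(32 + 32, 1)            # predicted concentration, e.g. NO2

    def forward(self, image_patch, tabular):
        img_feats = self.image_branch(image_patch)
        tab_feats = self.tabular_branch(tabular)
        return self.head(torch.cat([img_feats, tab_feats], dim=1))

# Example shapes: a batch of 8 stations, 64x64-pixel 4-band satellite patches,
# and 6 tabular features (traffic volume, wind speed, temperature, ...).
model = AirQualityNet()
patches = torch.randn(8, 4, 64, 64)
traffic_and_weather = torch.randn(8, 6)
print(model(patches, traffic_and_weather).shape)  # torch.Size([8, 1])
```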

The researchers anticipate that the resulting analysis will allow them to gain insights into the sources and distribution of pollutants, and ultimately allow for the design of more efficient and more timely interventions. For example, the Bay Area has “Spare the Air” days, in which traffic restrictions are voluntary, and other cities have schemes to restrict traffic or industry.

While the idea of using algorithms to control cars and traffic may sound incredible at the moment, Bayen believes technology is headed in that direction. “I do believe that within 10 years the things we’re coming up with here, like flow smoothing, will be standard practice, because there will be more automated vehicles on the road,” he said.

Lawrence Berkeley National Laboratory addresses the world’s most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab’s scientific expertise has been recognized with 13 Nobel Prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy’s Office of Science. For more, visit www.lbl.gov.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.
