
Google AI beats experienced human players at real-time strategy game StarCraft II

Source: nature.com

Players of the science-fiction video game StarCraft II faced an unusual opponent this summer. An artificial intelligence (AI) known as AlphaStar — which was built by Google’s AI firm DeepMind — achieved a grandmaster rating after it was unleashed on the game’s European servers, placing within the top 0.15% of the region’s 90,000 players.

The result, published on 30 October in Nature1, shows that an AI can compete at the highest levels of StarCraft II, a massively popular online strategy game in which players compete in real time as one of three factions — the human Terran forces or the alien Protoss and Zerg — battling against each other in a futuristic warzone.

DeepMind, which previously built world-leading AIs that play chess and Go, targeted StarCraft II as its next benchmark in the quest for a general AI — a machine capable of learning or understanding any task that humans can — because of the game’s strategic complexity and rapid pace.

“I did not expect AI to essentially be superhuman in this domain so quickly, maybe not for another couple of years,” says Jon Dodge, an AI researcher at Oregon State University in Corvallis.

In StarCraft II, experienced players multitask by managing resources, executing complex combat manoeuvres and ultimately out-strategizing their opponents. Professionals play the game at a breakneck pace, making more than 300 actions per minute. The machine-learning techniques underlying DeepMind’s AI rely on artificial neural networks, which learn to recognize patterns from large data sets, rather than being given specific instructions.
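The contrast between hand-coded instructions and learned pattern recognition can be illustrated with a toy example. The sketch below uses invented features and data to train a simple perceptron that picks an action from labelled examples rather than from an explicit if/else rule; it is purely illustrative and has no connection to AlphaStar's actual architecture.

```python
# Minimal sketch with invented toy data: a perceptron learns a decision rule
# from labelled examples instead of being given hand-coded instructions.
# Features are hypothetical, e.g. (own army size, enemy army size).
import random

def train_perceptron(examples, epochs=50, lr=0.1):
    """Learn weights from (features, label) pairs; label is 0 or 1."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Toy training set: attack (1) when our army outnumbers the enemy, else hold (0).
data = [([10, 4], 1), ([3, 9], 0), ([8, 2], 1), ([2, 7], 0), ([6, 5], 1), ([1, 8], 0)]
w, b = train_perceptron(data)
print("learned weights:", w, "bias:", b)
```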

DeepMind first pitted AlphaStar against high-level players in December 2018, in a series of laboratory-based test games. The AI played — and beat — two professional human players. But critics asserted that these demonstration matches weren’t a fair fight, because AlphaStar had superhuman speed and precision.

Before the team let AlphaStar out of the lab and onto the European StarCraft II servers, they restricted the AI’s reflexes to make it a fairer contest. In July, players received notice that they could opt in for a chance to be matched against the AI. To keep the trial blind, DeepMind masked AlphaStar’s identity.

“We wanted this to be like a blind experiment,” says David Silver, who co-leads the AlphaStar project. “We really wanted to play under those conditions and really get a sense of, ‘how well does this pool of humans perform against us?’”

AlphaStar’s training paid off: it crushed low-ranking opponents and ultimately amassed 61 wins out of 90 games against high-ranking players.

Challenging complexity

StarCraft II’s complexity poses immense challenges to AIs. Unlike chess, StarCraft II has hundreds of ‘pieces’ — soldiers in the factions’ armies — that move simultaneously in real time, not in an orderly, turn-based fashion. Whereas a chess piece has a limited number of legal moves, AlphaStar has 10^26 actions to choose from at any moment. And StarCraft II, unlike chess, is a game of imperfect information — players often cannot see what their opponent is doing. This makes it unpredictable.
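To get a feel for why that choice count dwarfs chess’s few dozen legal moves, note that a single StarCraft II action bundles a unit selection, an order type and a target location. The numbers in the sketch below are invented purely for illustration and are not DeepMind’s parameterization; the point is only that the choices multiply into astronomically large figures.

```python
# Back-of-the-envelope illustration with invented numbers (not DeepMind's
# action parameterization): one action = which units to select, what order
# to give them, and where on the map to aim it.
unit_subsets = 2 ** 20        # ways to select a subset of ~20 controllable units
order_types = 30              # move, attack, build, cast ability, ...
targets = 256 * 256           # discretized map coordinates for the order
choices_per_step = unit_subsets * order_types * targets
print(f"illustrative choices for a single action: {choices_per_step:.2e}")
# Even these modest toy numbers give ~2e12 options for one action; richer
# parameterizations push estimates toward the 10^26 figure cited above.
```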

For nearly a decade, researchers have pitted StarCraft- and StarCraft II-playing AIs against one another in an annual competition. However, unlike AlphaStar, most of these ‘bots’ relied on hard-coded rules, rather than neural networks that can self-train. Oriol Vinyals, who now co-leads the AlphaStar project, was on the team from the University of California, Berkeley, that won the first competition in 2010.

“Back then, I kind of started thinking maybe we should just do [machine] learning, but it was just too early,” says Vinyals.

In 2016, Vinyals joined DeepMind, where he began working on AIs that could teach themselves how to play StarCraft II. AlphaStar started its training by learning to imitate human play from a set of nearly one million games. To improve AlphaStar’s play further, DeepMind created a league in which versions of the AI competed against one another, as sketched below. This method makes sense for a game like StarCraft II in which no single strategy is best — as well as for many other real-life applications of AI, says Kai Arulkumaran, an AI researcher at Imperial College London.
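The sketch below is a loose illustration of the league idea: a learner trains against a rolling pool of its own past versions. All names and numbers (Agent, play_match, the strength updates) are hypothetical stand-ins; this is not DeepMind’s training code, which combines imitation learning, reinforcement learning and a far more elaborate league.

```python
# Minimal self-play "league" loop (toy illustration, hypothetical names).
import copy
import random

class Agent:
    """Toy agent whose 'policy' is a single strength number it can improve."""
    def __init__(self, strength=0.0):
        self.strength = strength

    def improve(self, amount):
        self.strength += amount

def play_match(a, b):
    """Return True if agent a beats agent b (noisy, strength-based outcome)."""
    return random.gauss(a.strength, 1.0) > random.gauss(b.strength, 1.0)

def train_league(generations=10, league_size=5):
    league = [Agent()]                      # seed agent, e.g. from imitation learning
    learner = copy.deepcopy(league[0])
    for _ in range(generations):
        opponent = random.choice(league)    # train against past versions, not just the latest
        if not play_match(learner, opponent):
            learner.improve(0.2)            # losing drives a (toy) policy update
        league.append(copy.deepcopy(learner))
        league = league[-league_size:]      # keep a rolling pool of past agents
    return learner, league

best, pool = train_league()
print(f"final strength: {best.strength:.1f}, league size: {len(pool)}")
```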

Perceptive players

DeepMind also put constraints on AlphaStar to make sure the AI was truly out-thinking and not just out-clicking its human opponents. Because the game rewards an ability to click rapidly, a computer that clicks at superhuman speed might beat humans without being more intelligent or making better decisions. So DeepMind limited the speed of AlphaStar’s reflexes to that of experienced human players.
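One simple way to cap an agent’s click rate is a sliding-window limiter, sketched below with invented limits (the exact budgets DeepMind used are not given here). Actions beyond the budget for the current window are simply refused, which keeps the agent’s effective actions per minute near a human level.

```python
# Minimal sketch of an action-rate cap (hypothetical limits): a sliding-window
# limiter that refuses actions once the agent has used its budget for the
# current window.
from collections import deque

class ActionRateLimiter:
    def __init__(self, max_actions=22, window_s=5.0):
        self.max_actions = max_actions      # e.g. at most 22 actions per 5 s (toy budget)
        self.window_s = window_s
        self.timestamps = deque()

    def allow(self, now):
        """Return True if an action issued at time `now` stays within the budget."""
        while self.timestamps and now - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()       # drop actions outside the sliding window
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True
        return False

limiter = ActionRateLimiter()
allowed = sum(limiter.allow(t * 0.1) for t in range(100))   # one attempt every 100 ms
print(f"actions allowed in 10 s of spamming: {allowed}")
```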

Under those conditions, and after 27 days of training, AlphaStar placed within the top 0.5% of all players on the European server.

After 50 games, however, DeepMind hit a snag. Some players had noticed that three user accounts on the Battle.net gaming platform had played the exact same number of StarCraft II games over a similar time frame — the three accounts that AlphaStar was secretly using. When watching replays of these matches, players noticed that the account owner was performing actions that would be extremely difficult, if not impossible, for a human. In response, DeepMind began using a number of tricks to keep the trial blind and stop players spotting AlphaStar, such as switching accounts regularly.

The final version of AlphaStar relied on a cumulative 44 days of training and frequently ran into professional players. The AI wasn’t able to beat the best player in the world, as AIs have in chess and Go, but DeepMind considers its benchmark met, and says it has completed the StarCraft II challenge.

Other AI scientists aren’t yet convinced that AlphaStar can claim complete victory. Dave Churchill, an AI researcher at Memorial University of Newfoundland in St John’s, Canada, thinks that AlphaStar still has a number of weaknesses, such as a vulnerability to strategies it hasn’t seen before.

“AlphaStar is very impressive, and is definitely the strongest AI system for any StarCraft game to date,” he says. “That being said, StarCraft is nowhere near being ‘solved’, and AlphaStar is not yet even close to playing at a world champion level.”
