Why An AI Beating Pro STARCRAFT II Players Is (And Isn’t) A Big Deal

Powered by Geek & Sundry

If you’ve been keeping up with gaming news over the last day, you might have heard the story of Google’s DeepMind creating an artificial intelligence program (referred to as an “agent”) called AlphaStar, and how it handily beat some professional StarCraft II players.

Now if you’re anything like me, losing to a computer in StarCraft II isn’t especially spectacular or newsworthy (I’m an easily distracted player with a cat who likes to lie on my keyboard), but AlphaStar’s accomplishment is very much a big deal, so much so that technology and culture outlets are talking about it. Make no mistake: it is a big deal, but there are also several caveats to this story that shouldn’t be overlooked.

The basic facts of the story are pretty straightforward: DeepMind is the Google-owned AI company that created artificial intelligence agents that have mastered (and beaten human masters of) games like chess and Go, as well as a variety of video games. The company took the leap from turn-based games, where players alternate taking actions, to the real-time strategy game StarCraft II, where players take actions simultaneously. This past December, DeepMind had trained an agent with the equivalent of 200 years of playing experience and determined it was ready to be tested against professional StarCraft II players.

They revealed the results of these tests yesterday, and also livestreamed a “rematch” game with a new agent.

With all the spectacle around the games (and the obviously clickable headlines), a few points seem to have been overlooked. They contextualize what happened, highlight the ways this event is meaningful and important, and also explain why it comes with several caveats.

1. The AI Didn’t “Cheat” In Any Of Its Games (Mostly)

I don’t know a video gamer out there who hasn’t at some point used some derivative of the phrase, “The computer cheated.” Sometimes it’s pure conjecture; sometimes the computer really does have an advantage that a human doesn’t. For example, the algorithm-driven computer opponents of StarCraft II can build and control units simultaneously (they don’t suffer the limitation of having to click to control or build, which forces a human’s actions to happen sequentially rather than concurrently). Similarly, such computer opponents have perfect, real-time information about the human player, such as their base location, force composition, and tech tree development. Human players, by contrast, have imperfect information about their opponents and are limited by the fog of war, which restricts their view of the map, and their knowledge of their opponents, to what their own forces can see.

DeepMind made it clear that they did their best to limit AlphaStar’s computer advantages. For example, the agent wouldn’t be able to take more actions than professional human players. In the December test games, however, AlphaStar did have access to the raw interface (read: the entire map at once, though it still suffered the fog of war). That meant it could micromanage units on a scale that professional human players could not, selecting and controlling multiple units in different parts of the map with a precision and speed that would be nearly impossible for a human player limited by a mouse-and-keyboard input system, a point-and-click unit-control interface, and the need to move the map camera.

In those December games, AlphaStar went 10-0 against professional human players. Technically, at least. It was actually five different agents in the AlphaStar project each playing a game against Team Liquid’s Dario “TLO” Wünsch, and then, a week later, AlphaStar playing against Team Liquid’s Grzegorz “MaNa” Komincz.

However, in the livestreamed game yesterday, the agent also had to control the camera view of the map, and it should be noted that the human player, MaNa, won.

2. Yes, Professional Player MaNa Beat AlphaStar, But It Wasn’t a Rematch

The livestreamed game that MaNa won was billed as a rematch, but the agent that DeepMind fielded was an entirely new one (described as “started from scratch” by the DeepMind team; perhaps they should have called it BetaStar).

The rematch started roughly for MaNa. The commentators and MaNa agreed in post-game commentary that the first 7 minutes of the game went badly for the professional player, and it looked like the AI would prevail. The professional player, however, talked about how he adjusted his approach to playing this AI, making better information about his opponent a priority so that he could make better decisions.

MaNa had played 5 games against an AI and adjusted his approach substantially. For the DeepMind researchers, that kind of rapid adaptation is one of the elements they have to account for, and something machine learning is abysmal at: machine learning is a trial-and-error process that takes a lot of time and even more input data.
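To make that last point concrete, here’s a minimal, purely illustrative sketch (in Python) of what “trial and error” means in this context. This is not DeepMind’s code or anything close to the real AlphaStar training setup; the toy game, the win rates, and the update rule below are hypothetical stand-ins for an agent that plays an enormous number of games and slightly reinforces whatever happened to work.

```python
import random

# Hypothetical toy "game": the agent must learn which of three openings
# wins most often, but it can only find out by playing, over and over.
WIN_RATES = {"rush": 0.30, "expand": 0.60, "turtle": 0.45}  # hidden from the agent

preferences = {opening: 1.0 for opening in WIN_RATES}  # the agent's evolving "strategy"

def play_one_game(opening: str) -> bool:
    """Simulate one game; the agent only ever observes a win or a loss."""
    return random.random() < WIN_RATES[opening]

for _ in range(100_000):  # trial and error at scale: a huge number of games
    # Pick an opening in proportion to how well it has worked so far.
    pick = random.choices(list(preferences), weights=list(preferences.values()))[0]
    if play_one_game(pick):
        preferences[pick] += 0.01  # reinforce whatever happened to work, a little at a time

print(preferences)  # "expand" ends up heavily favored, but only after many thousands of games
```

A human like MaNa, by contrast, can rework his entire approach after watching just a handful of games, which is exactly the kind of fast adaptation described above.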

Observing the game, one turning point was the way MaNa was able to perpetually harass AlphaStar and distract it from initiating an all-out assault by annoying the AI with minor attacks on its base. Instead of committing to an attack, the AI would pull its forces back to defend its base. The strategy worked brilliantly: it bought MaNa the time he needed to build out his tech tree and create a better attacking force. It seemed that AlphaStar’s more limited view of the map (because of its new camera parameters), along with the fact that its designers admitted it had a slower response time than most professional human players, meant it couldn’t make the right decision to commit to an assault instead of defending.

3. AlphaStar’s Limitations Were on Display

First and foremost, AlphaStar is a Protoss main (meaning it trains competitively with that particular race, as opposed to the other two available races, Terran and Zerg), and it has only ever played pros who were also playing Protoss (TLO is a Zerg main and admitted he wasn’t playing Protoss at a pro level, though likely still within the top 1% of players). Essentially, that cuts out two-thirds of the game, as Terran and Zerg bring different tactics and require different strategies and responses.

Additionally, in the post-match commentary of the live game that MaNa won, the commentators, the DeepMind designers, and even MaNa joked about how the agent had terrible manners in not GG’ing (“GG,” typed into the game chat, is shorthand for “good game” and effectively concedes the match). It demonstrates something that the designers did not train, program, or account for, but something that human intelligence does intuitively: foresee the inevitable.

A human player learning StarCraft simultaneously develops an intuition for when a game has reached the point where the outcome is decided, well before the actual end. By becoming more skilled at StarCraft, the human player becomes better able to see when a game hits that tipping point, one way or the other. AlphaStar doesn’t yet seem to have that ability: it can only train on the parameters set by its designers, and the agent that ended up playing MaNa was obviously trained for victory, not concurrently trained to recognize its own defeat.

At the very least, it means that AlphaStar needs to work on its manners, and we’re still really far away from having ourselves a proper protocol droid. At worst, AlphaStar may become the insufferable troll player who builds a pylon in the middle of nowhere and forces pro players to hunt down a rogue building.


4. The Gaming Community Is Advancing Humanity

I grew up in a time when politicians were demonizing video games and correlating them with societal decay. We’ve come a long way since then (mostly), but one part of this story is Blizzard’s support of the initiative: the studio provided the DeepMind team with a specific version of StarCraft II that facilitated the development of AlphaStar. The collaboration between Blizzard and DeepMind was announced over two years ago.

This collaboration can lead to architectures and algorithms that can handle hundreds of possible actions, branching options, and decision-making with imperfect information. The practical applications of AlphaStar go well beyond gaming and could improve people’s lives in the future. Just be aware that video games might make your AI more prone to violence (KIDDING!).

5. It Should Give Us Hope

Let’s be real: computers have been showing up humans in many capacities for a long time (when was the last time you needed to dial a phone number from memory?), so there has always been a certain inevitability to an AI beating someone at a complex video game. That moment is now.

So yes, some professional StarCraft II players lost to an AI. But that isn’t really the point, because this isn’t just a story about technology beating a human; it’s about a group of brilliant people coming together and creating technology that can do what we previously thought was impossible. This isn’t just a story about StarCraft, artificial intelligence, or gaming.

It matters because it is a benchmark of human achievement. It’s a story of human ingenuity, the same ingenuity we need to address bigger and ever more pressing issues, like our planet’s changing climate. We should have hope, as DeepMind CEO Demis Hassabis himself reflected.


Image Credits: Blizzard

Teri Litorco is a fangirl whose relationship with games (of the video and tabletop variety) was significantly shaped by StarCraft. She makes YouTube videos and overshares on social media: Facebook, Twitter, and Instagram.


