
Musk-backed OpenAI's "AlphaGo" defeats the strongest human Dota 2 player

Author: DeepTech

On August 11, artificial intelligence surprised humanity again: OpenAI, the artificial intelligence research lab co-founded and backed by Elon Musk, announced that a bot it had built had defeated the professional player Dendi at the esports game Dota 2, and the format could hardly have been more direct: 1v1.


The two sides had agreed to play three games. In the first, the AI crushed its opponent in under ten minutes; the second also went to the AI; and that was enough for Dendi to concede the third.


Figure丨 Dendi, the top Dota 2 player who was defeated by the AI

"This guy is horrible," Dendi inhaled cold air during the game.


Figure丨 Musk promptly retweeted the news of OpenAI's victory

OpenAI explained that the bot was entirely self-taught: researchers trained it from scratch through self-play rather than from recordings of past matches. Greg Brockman, OpenAI's CTO, said that after just two weeks of training the AI was already beating the top 1v1 players, including the world number one.

Brockman added: "Through 1v1 training, we've been able to target the AI's strengths and weaknesses. Next, OpenAI will train the AI to play 5v5, so that it can field a complete team." The lab also plans to open the bot up to the outside world, so that everyone has a chance to play against the artificial intelligence.

Artificial intelligence moving into games is nothing new: DeepMind and Facebook have both set out to explore the real-time strategy game StarCraft II, collecting large amounts of replay data from human players and using it to train deep learning algorithms, in the hope of eventually defeating human players in a man-versus-machine showdown.
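To make the replay-driven approach concrete, here is a minimal, hypothetical sketch of behavior cloning: supervised training on state-action pairs extracted from human games. Everything below is an illustrative stand-in (randomly generated placeholder data and a toy softmax classifier in place of the deep networks mentioned above), not any team's actual pipeline.

```python
import numpy as np


def load_replay_pairs(n_samples=1000, n_features=64, n_actions=10):
    """Placeholder: pretend these are (state, action) pairs mined from human replays."""
    rng = np.random.default_rng(0)
    states = rng.random((n_samples, n_features))      # e.g. flattened feature layers
    acts = rng.integers(0, n_actions, n_samples)      # e.g. discretized action ids
    return states, acts


def train_behavior_cloning(states, acts, n_actions=10, lr=0.5, epochs=50):
    """Fit a toy softmax classifier to imitate the recorded human actions."""
    rng = np.random.default_rng(1)
    w = rng.normal(scale=0.01, size=(states.shape[1], n_actions))
    for _ in range(epochs):
        logits = states @ w
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        # Cross-entropy gradient: predicted probabilities minus one-hot labels.
        probs[np.arange(len(acts)), acts] -= 1.0
        w -= lr * states.T @ probs / len(acts)
    return w


states, acts = load_replay_pairs()
policy = train_behavior_cloning(states, acts)
print("training accuracy:", ((states @ policy).argmax(axis=1) == acts).mean())
```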

Just one day earlier, on August 10, DeepMind and Blizzard had officially released the StarCraft II machine learning toolset SC2LE (StarCraft II Learning Environment), which they hope will help researchers speed up the development of StarCraft II AI.

SC2LE includes:

A machine learning API developed by Blizzard that gives researchers and developers programmatic access to the game. For the first time, it also includes a full set of tools for Linux. (GitHub Address: https://github.com/Blizzard/s2client-proto)

An open-source release of DeepMind's PySC2 toolset, which makes it easy for researchers to use Blizzard's feature-layer API in their own models; a minimal usage sketch follows this list. (GitHub Address: https://github.com/deepmind/pysc2)

A series of mini-games that let researchers benchmark the performance of their systems on specific tasks.

A dataset of 65,000 anonymized game replays, which will grow to more than 500,000 in the coming weeks.

A paper that describes the environment and reports baseline results. The baselines come from the mini-games, from supervised learning on game replays, and from 1v1 matches against StarCraft II's built-in computer opponents. (Address: https://deepmind.com/documents/110/sc2le.pdf)
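For a sense of what working with PySC2 looks like, here is a minimal agent sketch. The base class and the no-op action mirror PySC2's public examples, but class paths, map names, and flags can differ between releases, so treat this as an illustration rather than a reference.

```python
# A minimal PySC2 agent: it subclasses the library's BaseAgent and issues a
# no-op every step. A real agent would inspect obs.observation (feature
# layers, available actions) and choose meaningful actions instead.
from pysc2.agents import base_agent
from pysc2.lib import actions


class IdleAgent(base_agent.BaseAgent):
    """Does nothing; a starting point for StarCraft II experiments."""

    def step(self, obs):
        super(IdleAgent, self).step(obs)
        return actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])
```

Saved as, say, idle_agent.py, it could be run against one of the mini-games with PySC2's bundled launcher, e.g. python -m pysc2.bin.agent --map MoveToBeacon --agent idle_agent.IdleAgent (map and flag names taken from the project's documentation).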


Of course, StarCraft II also poses problems that current techniques cannot yet solve, chief among them "strategy". In a strategy game, the decisions a player makes often only pay off ten or even several dozen minutes later. For an AI to learn "strategy", it therefore has to be able to "plan" and to "remember". "Memory is critical," says Oriol Vinyals, head of DeepMind's StarCraft II project.

Because the games run so long, DeepMind's reinforcement learning is not yet a good fit for StarCraft II: "what I do now may only have consequences much later," Vinyals says. Since existing techniques cannot overcome this problem, DeepMind hopes to lower the barrier to developing StarCraft II AI and pool new approaches to crack the "strategy" problem.
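To get a rough feel for why such long delays trouble standard reinforcement learning, the snippet below computes how much weight a typical discount factor assigns to a payoff that arrives minutes later. The discount factor, game-step rate, and step multiplier are assumptions chosen for illustration, not values used by DeepMind.

```python
# Back-of-the-envelope: with discounting, the credit assigned to a decision
# shrinks geometrically with how many steps later its payoff arrives.
gamma = 0.99                        # a commonly used discount factor (assumed)
agent_steps_per_second = 22.4 / 8   # SC2 runs ~22.4 game steps/s; acting every 8 steps is a common choice
for minutes in (1, 5, 15):
    k = int(minutes * 60 * agent_steps_per_second)
    print(f"payoff {minutes:>2} min later ({k:>4} decisions away): weight {gamma ** k:.1e}")
```

A reward fifteen minutes away ends up weighted on the order of 10^-11, which is one way of seeing why DeepMind is looking for new ideas around planning and memory rather than relying on discounted rewards alone.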

It is fair to say that the tools jointly developed by DeepMind and Blizzard have thrown the door to an ultimate StarCraft II AI wide open. Several of the world's top StarCraft II players have already expressed their willingness to play against AI.

Beyond StarCraft II, it is worth remembering that before this, during the "AlphaGo" boom that swept the world, artificial intelligence had already defeated humanity's top players at Go, the most complex of board games, Ke Jie among them, which made the "AI threat" argument extremely popular at the time.


Figure丨 Musk believes that Dota 2 is far more complex than chess and Go

We cannot yet weigh OpenAI's experiment against the StarCraft II AI efforts of Facebook and DeepMind, or against DeepMind's Go AI AlphaGo. But until now, no AI research team had beaten professional players at a competitive video game, and OpenAI has set that precedent.

For humans, though, the picture may not be so reassuring. "It feels a little like a human player, but it has advantages humans don't have," Dendi said of OpenAI's artificial intelligence.
