South Korean Lee Sedol, the world champion of the ancient Chinese board game Go, said Tuesday that he felt slightly nervous ahead of a match with Google's computer program AlphaGo.
Lee told a press conference in Seoul that he had become slightly "nervous," though he remains confident of victory in the five-game match scheduled to run from Wednesday to next Tuesday. The winner will receive 1 million U.S. dollars in prize money; if AlphaGo wins, the prize will be donated to charities.
"(My) winning rate does not seem to go as far as 5-0," the 33-year-old said, a slight retreat from his Feb. 22 press conference, at which he said AlphaGo could by no means defeat him.
Explaining his lowered confidence, Lee said that after hearing explanations of the AlphaGo algorithm, he had come to believe AI could mimic human intuition to an extent, though it would still be unreasonable to say AI can replicate human intuition and sense completely.
Lee added that his strength against AlphaGo would be precisely that human intuition and sense, which the program can only mimic to a degree.
AlphaGo, developed by Google's London-based subsidiary DeepMind, demonstrated a major step forward in artificial intelligence (AI) by defeating European Go champion Fan Hui in October 2015.
The result thrilled the public, as some experts had predicted it would take decades before an AI program could beat human professionals at the ancient Chinese board game.
Go originated in China more than 2,500 years ago and has been viewed as a grand challenge for AI because of its complexity and intuitive nature. AI researchers often use games as a testing ground for inventing smart, flexible algorithms that can tackle problems in ways similar to humans.
Demis Hassabis, CEO of DeepMind, held a press conference in Seoul to explain the principles of the AlphaGo algorithm, saying the program is stronger now than in October because of the many upgrades it has received since then.
AlphaGo combines an advanced tree search with neural networks. The networks take a description of the Go board as input and process it through 12 different network layers containing millions of neuron-like connections.
The "policy" neural network selects the next move to play, while the "value" network predicts the probability of winning, mimicking human intuition. The developers trained the neural networks on 30 million moves from games played by human experts.
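The division of labor between the two networks can be sketched in a few lines of Python. This is only an illustrative stand-in: the function names are invented here, and the random scores merely stand where a real network's learned weights would produce its outputs.

```python
import random

BOARD_SIZE = 19

def legal_moves(board):
    """A move is legal on an empty point (capturing rules omitted for brevity)."""
    return [(r, c) for r in range(BOARD_SIZE) for c in range(BOARD_SIZE)
            if board[r][c] == 0]

def policy_network(board):
    """Stand-in for the policy network: assigns each legal move a
    probability. A real network computes these from learned weights."""
    moves = legal_moves(board)
    scores = {m: random.random() for m in moves}
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}

def value_network(board):
    """Stand-in for the value network: estimates the probability that the
    player to move eventually wins, as a number between 0 and 1."""
    return random.random()

# The tree search uses both: the policy narrows which moves to explore,
# while the value network judges positions without playing them out.
empty = [[0] * BOARD_SIZE for _ in range(BOARD_SIZE)]
probs = policy_network(empty)
best_move = max(probs, key=probs.get)
```

The key design point is that the policy network prunes the enormous branching factor of Go, and the value network replaces exhaustive look-ahead with a single evaluation of a position.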
The algorithm then learned to discover new strategies for itself by playing thousands of games between its neural networks, adjusting the connections through a trial-and-error process known as reinforcement learning.
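The trial-and-error idea can be shown with a toy example. The sketch below is not DeepMind's training procedure; it is a minimal bandit-style illustration in which a program, choosing between two invented moves with hidden win rates, reinforces whichever choice leads to wins.

```python
import random

random.seed(1)

# Two candidate moves with win rates the program does not know in advance;
# the rates are invented for this illustration.
TRUE_WIN_RATE = {"A": 0.9, "B": 0.1}

# Preference weights: the program's current "policy", starting undecided.
counts = {"A": 1.0, "B": 1.0}

def choose():
    """Pick a move with probability proportional to its preference weight."""
    total = counts["A"] + counts["B"]
    return "A" if random.random() < counts["A"] / total else "B"

def play(move):
    """Simulate a game: the move wins with its (hidden) true win rate."""
    return random.random() < TRUE_WIN_RATE[move]

# Trial and error: moves that led to wins are reinforced, so the program
# gradually prefers the stronger move without ever being told which it is.
for _ in range(5000):
    move = choose()
    if play(move):
        counts[move] += 1.0

preferred = max(counts, key=counts.get)
```

After thousands of simulated games the preference shifts decisively toward the stronger move, which is the essence of learning from self-play rather than from human examples.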
Hassabis told reporters that intuition is important in Go, known as Weiqi in China and Baduk in South Korea, and that the neural-network approach to approximating human intuition is at the core of the AlphaGo system.
Any human player facing Lee Sedol would get nervous, Hassabis said, but AlphaGo cannot, and that is the computer program's strength.
Hassabis stressed Google's goal of putting technology honed on games like Go to general-purpose use, saying that artificial general intelligence (AGI) could be applied to health care, robotics and smart systems, as well as disease analysis.
Computers have a long history of competing with human rivals in games. The first game mastered by a computer program was noughts and crosses, also known as tic-tac-toe, in 1952. AI later conquered chess, with IBM's Deep Blue famously beating Garry Kasparov in 1997.
Go had long been considered the last game in which humans could defeat computer programs. In the early days, AI could not win a single game even against amateur players, which made AlphaGo's victory over Fan Hui all the more surprising.
"If humans lose (in a match with AlphaGo), it will have a bad influence on the baduk world, but it would be inevitable during the current era. Artificial intelligence will win (humans) sometime in the future," said Lee.