Do Machines Think? Reflections 20 Years After Kasparov’s Defeat

Just 20 years ago, an event occurred that marked an important milestone in artificial intelligence.

In May 1997, for the first time, a reigning world chess champion was beaten by a computer in a match under tournament conditions.

In 2016, another significant milestone was reached: a program defeated one of the best Go players in the world. This event received less media coverage than Kasparov’s defeat, but it caused a great deal of wonder among artificial intelligence experts.

Why did it take almost 20 years to go from the victory in chess to the victory in the game of Go? Will machines overtake humans in every activity, even the most complex ones? Can we now say that machines think?

1997: Kasparov vs Deep Blue

The leading characters of the 1997 match were Garry Kasparov, then the undisputed World Chess Champion, and Deep Blue, a supercomputer whose hardware and software were designed by IBM. The match consisted of six games and, as usually happens in chess tournaments, after each game the winner gained one point or, in case of a draw, each player received half a point.

A similar match had been held the previous year: although Kasparov lost one game, the man won with a score of 4-2.

The second time the computer prevailed, winning two games, drawing three and losing just one (the first), so the final score was 3.5-2.5 in favor of Deep Blue.

Garry Kasparov leaves the table after being defeated in the last game of the match.

How did Deep Blue choose its moves?

The computer based its analysis on an algorithm that took a board position as input and returned as output a value quantifying the advantage (or disadvantage) with respect to the opponent.

This algorithm was created by IBM engineers with the help of professional chess players and took into account material advantages (resulting from piece captures) and positional advantages (such as placing a piece on a key square).

Given this evaluation function, Deep Blue implemented a brute-force approach: the supercomputer calculated the function on all the positions reachable within a certain depth and chose the move that guaranteed the best result.

Deep Blue evaluated 200 million positions per second. This enabled it to analyze a position up to a depth of 6 to 8 moves when the board still held many pieces, or 20 moves and more when only a few pieces remained.
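
To make the idea concrete, here is a minimal Python sketch of this kind of fixed-depth, brute-force search. The position interface (legal_moves, play, material_balance) and the evaluation function are hypothetical placeholders for illustration, not Deep Blue’s actual code.

```python
# A minimal fixed-depth minimax sketch of the brute-force search described above.
# The position object is assumed to expose hypothetical helpers:
#   position.legal_moves() -> list of legal moves
#   position.play(move)    -> the new position after the move
# evaluate() stands in for the hand-crafted evaluation function.

def evaluate(position):
    # Placeholder: a real evaluation weighs material, piece activity,
    # king safety, pawn structure, and so on.
    return position.material_balance()

def minimax(position, depth, maximizing):
    """Best achievable evaluation from `position`, exploring
    every variation down to `depth` half-moves."""
    if depth == 0 or not position.legal_moves():
        return evaluate(position)
    values = [
        minimax(position.play(move), depth - 1, not maximizing)
        for move in position.legal_moves()
    ]
    return max(values) if maximizing else min(values)

def best_move(position, depth):
    """Choose the move that guarantees the best evaluation for the side to move."""
    return max(
        position.legal_moves(),
        key=lambda move: minimax(position.play(move), depth - 1, maximizing=False),
    )
```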

Since then, chess programs have become a bit smarter: instead of searching through all possible moves, they analyze only the most promising variations. That is more similar to how humans decide on their next move: they evaluate fewer positions, but experience tells them which sequences are the most significant.
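
The article does not name specific techniques, but the classical building block behind this smarter search is alpha-beta pruning: with a good move-ordering heuristic (here a hypothetical move_promise score), branches that provably cannot affect the final choice are cut off early, so far fewer positions need to be evaluated; modern engines add further selective heuristics on top. A sketch, reusing the evaluate() placeholder above:

```python
def alphabeta(position, depth, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning: lines that cannot change the final
    decision are skipped. Trying promising moves first makes the cutoffs
    happen sooner."""
    if depth == 0 or not position.legal_moves():
        return evaluate(position)

    # Hypothetical ordering heuristic: most promising moves first.
    moves = sorted(position.legal_moves(), key=position.move_promise, reverse=True)

    if maximizing:
        value = float("-inf")
        for move in moves:
            value = max(value, alphabeta(position.play(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # the opponent will never allow this line
                break
        return value
    else:
        value = float("inf")
        for move in moves:
            value = min(value, alphabeta(position.play(move), depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value
```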

For example, in 2006 the program Deep Fritz, running on a standard personal computer, beat the world champion Vladimir Kramnik while evaluating “only” eight million positions per second, far fewer than Deep Blue.

Today, no human is able to win a game against the strongest chess programs.

2016: Lee Sedol vs AlphaGo

Go is a board game with very simple rules. The players take turns placing pieces called “stones” on a square board with 19×19 intersections. If a player occupies with his stones all the intersections adjacent to an opponent’s group, that group is captured and removed from the board.

The winner is the player who scores more points, adding up the stones he captured and the number of intersections surrounded by his own stones.
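
To make the capture rule concrete, here is a small Python sketch that flood-fills a group of connected stones and counts its liberties (adjacent empty intersections); a group left with no liberties is captured. The board representation is a simple assumption for illustration, not how any particular Go program stores positions.

```python
# Sketch of the capture rule. The board is assumed to be a dict mapping
# (row, col) intersections to 'B', 'W', or None (empty) on a 19x19 grid.

SIZE = 19

def neighbors(point):
    r, c = point
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE]

def group_and_liberties(board, start):
    """Flood-fill the group of stones connected to `start` (which must hold
    a stone) and collect its liberties (adjacent empty intersections)."""
    color = board[start]
    group, liberties, frontier = {start}, set(), [start]
    while frontier:
        point = frontier.pop()
        for nb in neighbors(point):
            if board.get(nb) is None:
                liberties.add(nb)
            elif board.get(nb) == color and nb not in group:
                group.add(nb)
                frontier.append(nb)
    return group, liberties

def captured(board, start):
    """A group with no liberties has been surrounded and is removed."""
    _, liberties = group_and_liberties(board, start)
    return len(liberties) == 0
```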

Despite its simple rules (much simpler than chess), the game is extremely complex for these reasons:

  1. the larger size of the Go board means that the number of possible moves is much greater than in chess; the tree of game variations is therefore so vast that it makes brute-force approaches unfeasible (a rough estimate of the difference follows this list)
  2. the opening moves in Go are much more varied than in chess, so it is not possible to build a database of openings for a program to rely on
  3. move choices in Go are often guided by abstract principles of balance between the various parts of the board, and it is very difficult to create a function that interprets these principles to evaluate the quality of a move.
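
As a rough back-of-envelope comparison (the figures below are commonly cited averages, not exact values): chess offers about 35 legal moves per position over a game of roughly 80 half-moves, while Go offers about 250 moves over roughly 150 half-moves, so the naive game trees differ by hundreds of orders of magnitude.

```python
from math import log10

# Commonly cited rough averages (assumptions for a back-of-envelope estimate):
# branching factor (moves per position) and typical game length in half-moves.
chess_branching, chess_length = 35, 80
go_branching, go_length = 250, 150

# The naive game tree has about branching_factor ** game_length leaves,
# so we compare exponents of 10 instead of the astronomically large numbers.
chess_exp = chess_length * log10(chess_branching)   # about 124
go_exp = go_length * log10(go_branching)            # about 360

print(f"Chess: roughly 10^{chess_exp:.0f} possible games")
print(f"Go:    roughly 10^{go_exp:.0f} possible games")
```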

As a consequence, the creation of a program capable of competing with the best Go players has been considered an ambitious challenge in the field of artificial intelligence.

AlphaGo was developed by Google DeepMind and, unlike Deep Blue, was programmed according to the machine learning approach: a program is fed examples so that it learns to make decisions based on them, without being given explicit instructions.

More specifically, AlphaGo consists of two neural networks and was trained on a large number of professional games, for a total of 60 million moves. One network is used to figure out which future moves are the most promising (the policy network), while the other assigns a value to a position representing the probability of victory for either player (the value network).
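
As an illustration only (this is a toy sketch, not the actual AlphaGo architecture, which used much deeper convolutional networks and richer input features), here is what a two-headed policy/value network can look like in Python with PyTorch: one head outputs a probability for each of the 19×19 possible moves, the other a single number estimating who is likely to win.

```python
import torch
import torch.nn as nn

BOARD = 19

class TinyPolicyValueNet(nn.Module):
    """Toy two-headed network: a shared convolutional trunk feeds a policy
    head (probabilities over the 19x19 moves) and a value head (a number
    in [-1, 1] estimating who is winning)."""

    def __init__(self, channels=32):
        super().__init__()
        # Input: 3 feature planes (own stones, opponent stones, empty points)
        # -- an assumption made for this sketch.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.policy_head = nn.Sequential(
            nn.Conv2d(channels, 2, kernel_size=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(2 * BOARD * BOARD, BOARD * BOARD),
        )
        self.value_head = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(BOARD * BOARD, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh(),
        )

    def forward(self, planes):
        features = self.trunk(planes)
        move_logits = self.policy_head(features)   # which moves look promising
        value = self.value_head(features)          # expected outcome of the game
        return torch.softmax(move_logits, dim=1), value

# Example: evaluate a random position encoded as 3 feature planes.
net = TinyPolicyValueNet()
probs, value = net(torch.randn(1, 3, BOARD, BOARD))
print(probs.shape, value.item())   # torch.Size([1, 361]) and a number in [-1, 1]
```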

After this first phase of learning from human games, AlphaGo was further trained by playing against different versions of itself, improving its play even more.

The result of the match against Lee Sedol, one of the strongest Go players in the world, was quite clear-cut: AlphaGo won 4 out of 5 games.

Lee Sedol vs. AlphaGo

Recently a new match has been held, this time between AlphaGo and Ke Jie, the latter considered the world’s strongest Go player. AlphaGo won this match too, with a score of 3-0.

Given these stunning results, we should now begin to ask ourselves one question:

Have Machines Started to Think?

Judging by the results they have achieved, there is no doubt that, in a certain sense, machines are thinking.

Compared to humans, they think differently, that’s for sure. But planes also fly differently from birds, and we still describe what they do as flying. Why shouldn’t we say that computers think?

Machines are not yet able to perform more artistic tasks like composing music or writing coherent texts, but that is just a matter of reaching higher levels of complexity that sooner or later will be achieved. I’m pretty sure that in a few years we’ll be listening to the first symphony composed entirely by a computer.

For now, let’s make do with the first pop song written by a computer imitating the style of the Beatles, and then performed and mixed by humans.

Some argue that machines cannot create anything genuinely new, because whatever they do is an imitation of some human activity or something they have been taught to do.

I’d like to offer a couple of considerations to counter this reasoning.

First, consider AlphaGo. The program began by imitating the moves of professional players, but then also improved by playing against itself, just as if it had “studied” to find stronger-than-human strategies.

In a sense, AlphaGo’s playing style is new: in fact, according to professional players, AlphaGo occasionally chooses moves considered rather original.

AlphaGo’s move number 19 in the second game against Lee Sedol, considered particularly original by professional players.

Second, even human creativity doesn’t come out of nowhere. In an artist’s style you can find influences of other artists, elements linked to the place where the artist grew up, events they took part in, things they heard or read during their life. Artists basically transform personal experiences into pictures, sounds, or words.

Nothing prevents us from imagining a process whereby a very complex neural network starts by imitating the works of one or more artists and then develops a personal style, perhaps with random elements inserted into its evolution, or under the influence of something similar to “personal experiences” linked to the images, texts, or music it takes its cues from.

The situation is quite clear to me: machines are going to perform every human task the way we do, or better than we do. In the coming years we’ll become more and more aware of this trend.

And yes, I find no reason to say that machines don’t think.

What’s your opinion?

At this time I am particularly interested in human points of view, but if some machine wants to post a comment or join the newsletter, it is welcome!

EnricoDeg

I live in Verona, Italy and I teach mathematics and physics at high school trying to make students understand the beauty and usefulness of STEM subjects. Before being a teacher, I worked for 12 years in the financial industry dealing with risk management, stochastic models for derivative pricing, and IT banking applications. My interests include chess, go, guitar, volleyball, trekking, snowboarding and of course reading and writing about mathematics and physics!

This Post Has 2 Comments

  1. Matteo

    I think the current machines do not think.
    I think the real difference between us and the machines is that we have consciousness and the machines do not. Consciousness is difficult to define in formal terms but we all have direct experience of it. Perhaps it’s the only thing we can say of having direct experience. According to recent theories still under development, consciousness emerges from quantum effects. Aside from the details, I fully agree with the importance of this general question:

    “Nobody understands what science is or how it works. Nobody understands quantum mechanics either. Could that be more than coincidence?”
    (From the article http://www.bbc.com/earth/story/20170215-the-strange-link-between-the-human-mind-and-quantum-physics.)

    I think that there is no room in classical mechanics for an explanation of consciousness.

    Thanks,
    bye

    Matteo

  2. Matteo

    “Nobody understands what consciousness is or how it works. Nobody understands quantum mechanics either. Could that be more than coincidence?”
