## The minimax algorithm in chess

*Hello, Habr!*

Today, when you hear about AI in chess, the first thing that probably comes to mind is AlphaZero, which taught itself the game from nothing but the rules and went on to crush the strongest engines. But that is only the tip of the iceberg!

We won’t waste time explaining how the pieces move – you already know that. In today’s article we will analyze the **Minimax algorithm**.


#### A bit of history

It all began when the mathematician **Ernst Zermelo** laid the foundations of game theory, and **John von Neumann**, who would later become the father of modern computer architecture, introduced the concept of Minimax.

He showed that in two-player games with perfect information (such as chess), there is an optimal strategy that minimizes the maximum possible loss.

In the 1950s, Claude Shannon proposed the idea of programming a computer to play chess. He described the basic strategy for chess programs built on Minimax and introduced the concept of an “evaluation function” for analyzing positions. It was a real breakthrough, because now computers could “think” several moves ahead, evaluating each of them.

In the 1960s, the first chess programs using Minimax appeared. Programs such as MIT’s Mac Hack VI were beginning to show impressive results, although they were still far from grandmaster level. These programs used Minimax to analyze thousands of positions, but were held back by the limited computing power of the era.

In the 1990s, IBM introduced Deep Blue. In 1997, this program defeated world champion Garry Kasparov. Deep Blue used an advanced Minimax with alpha-beta pruning and could analyze up to 200 million positions per second.

It was the moment the world realized that in chess a computer could compete with the best on equal footing.

### Minimax

Minimax is an algorithm used in game theory, especially in two-player games such as chess. Simply put, you play so that your opponent gets as little advantage as possible, even when he always replies with the best possible move.

#### Decision tree

A decision tree is a visual and conceptual representation of all possible moves in the game, starting from the current moment. It is the structure that lets the algorithm make informed decisions.

Each **node** of the tree represents a game state, or a position on the board. In chess, this is the arrangement of the pieces after a certain move.

**Branches** coming out of a node represent the possible moves that can be made from that state. Each branch leads to a new node – the new game state after that move.

The **root node** is the current state of the game, where the analysis begins.

**Leaf nodes** are the endpoints of the tree – positions where the game is over (in chess: checkmate, stalemate, or a draw) or where the analysis stops at a given depth.
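The tree described above can be sketched with a tiny data structure. This is only an illustrative skeleton, not a real chess engine: the `position` string and the move labels are placeholders.

```python
# A minimal sketch of a decision-tree node; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Node:
    position: str                                  # some encoding of the board state
    children: list = field(default_factory=list)   # states reachable in one move

    def is_leaf(self):
        # A leaf: no children generated (game over, or search depth reached)
        return not self.children

# The root is the current position; branches are the moves available from it.
root = Node("start")
root.children = [Node("after e4"), Node("after d4")]
```

A real engine would generate `children` on demand from the rules of chess instead of storing them up front.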

#### Evaluation of positions

Position evaluation is usually done with a heuristic function. This function assigns a numerical value to each position based on factors such as material advantage, positional considerations, and other strategic elements.

The simplest form of assessment is the calculation of the total value of the pieces on the board. For example, a pawn may be valued at 1 point, a knight and bishop at 3, a rook at 5, a queen at 9. Thus, if one player has a material advantage, his position is valued higher.

More advanced heuristics take into account positional aspects: center control, king safety, piece activity, pawn structure, etc. For example, active pieces that control multiple squares increase the position rating.

**A simple material assessment**: Let’s imagine that White has a queen, a rook, and five pawns on the board (a total of 19 points), while Black has only a queen and six pawns (a total of 15 points). The heuristic function would evaluate this position as +4 in favor of White.
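This simple material count is easy to express in code. A minimal sketch, assuming a position is given as two lists of piece letters (the representation is an assumption for illustration):

```python
# Standard material values from the text: pawn 1, knight/bishop 3, rook 5, queen 9.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(white_pieces, black_pieces):
    """Positive values favor White, negative favor Black."""
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    return white - black

# The example from the text: White has Q, R and five pawns (19 points),
# Black has Q and six pawns (15 points).
score = material_score(["Q", "R"] + ["P"] * 5, ["Q"] + ["P"] * 6)
print(score)  # 4
```

A real evaluation function would add positional terms on top of this material baseline.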

**Positional assessment**: Suppose White has pawns on c4 and d4 controlling the center, while Black’s pawns are on a7 and b7. Although materially the situation is equal, positionally White is better because of control of the center and more active pawns.

**Combined assessment**: In a more complex case where both material and positional factors are considered, White may have a small material advantage, but Black may have a better position, such as a safer king or better piece coordination.

#### Min and Max levels

At the **Max** level, the player seeks to maximize his payoff. In the context of chess, this means choosing the move that leads to the most advantageous position for the player. At this level, the algorithm evaluates all possible moves and chooses the one with the highest score in terms of its strategy and goals.

Let’s imagine that you are playing for white in chess. You can make a move that either attacks an opponent’s piece or develops your own piece. The algorithm will evaluate both moves and choose the one that maximizes your position – for example, capturing an opponent’s piece without risking your own pieces.

At the **Min** level, Max’s opponent seeks to minimize Max’s payoff. In chess, this means the opponent will choose moves that either reduce Max’s potential winnings or create difficulties for him.

Now let’s imagine that you are playing for black. White has just made a move that threatens your queen. Your task is to minimize White’s potential win. You can protect the queen by moving it to a safe place or offer an exchange of pieces that will reduce white’s material advantage.

In Minimax, the algorithm alternates between Min and Max levels, representing the moves of both players. It starts with Max level (player’s turn), then goes to Min level (opponent’s turn), and so on. This alternation continues until reaching a given depth of analysis or the end of the game.
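The alternation of Max and Min levels can be written as a short recursive function. Here is a sketch over an explicit game tree: real chess code would generate moves on the fly, but for illustration the tree is a nested dict whose leaves are heuristic scores from White’s point of view.

```python
# Plain Minimax over an explicit tree: dicts are internal nodes
# (move label -> subtree), numbers are leaf evaluations for White.
def minimax(node, maximizing):
    if not isinstance(node, dict):           # leaf: already an evaluation
        return node
    values = (minimax(child, not maximizing) for child in node.values())
    return max(values) if maximizing else min(values)

# White to move (Max level); Black replies (Min level).
tree = {
    "move A": {"reply 1": 3, "reply 2": 7},  # Black will pick 3
    "move B": {"reply 1": 5, "reply 2": 6},  # Black will pick 5
}
print(minimax(tree, True))  # 5 -> White should prefer move B
```

The recursion flips `maximizing` on every level, which is exactly the alternation described above; the depth limit here is implicit in the depth of the dict.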

#### Backtracking and move selection

Backtracking is the process of returning from the leaf nodes of a decision tree to the root in order to choose the best move. After the algorithm reaches a certain depth or endgame (a leaf node), it begins to “backtrack” by evaluating and comparing the results obtained.

At each level of the decision tree, the algorithm evaluates positions using a heuristic function. This function can take into account various factors such as material advantage, positional strength, king safety, and so on.

The evaluations at the lower levels of the tree propagate upward. At each level the algorithm chooses accordingly: Max picks the move with the maximum score, Min the one with the minimum.

When the algorithm returns to the root node, it chooses the move that leads to the best evaluated position.

Suppose you are considering two possible moves for White. After analyzing the possible black responses to each of these moves, you arrive at leaf nodes with positional values.

Now you start the backtracking process. For each of the possible black moves, you choose the one that minimizes your score (Min level). Then you go back to White’s moves and choose the one that maximizes the score (Max level).

Suppose White’s first move leads to a score of +3 (after a better response by Black), and the second move leads to a score of +5. When choosing between these two moves, the algorithm will choose the second one because it leads to a more advantageous position.
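The max-of-mins selection in this example fits in a few lines. A sketch, with the reply scores chosen to mirror the +3 and +5 from the text (the move names and numbers are illustrative):

```python
# For each White move, Black picks the reply that minimizes White's score;
# White then picks the move whose guaranteed minimum is highest.
replies_after_move = {
    "first move":  [3, 8],   # Black's best reply leaves White at +3
    "second move": [5, 9],   # Black's best reply leaves White at +5
}

best_move = max(replies_after_move,
                key=lambda move: min(replies_after_move[move]))
print(best_move)  # second move
```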

#### Alpha-Beta Pruning

Alpha-Beta Pruning is an optimization method for the Minimax algorithm that reduces the number of nodes that need to be evaluated in the game tree. The basic idea is to “prune” branches of the tree that don’t need to be explored because they won’t lead to better choices.

The algorithm uses two variables: Alpha and Beta. Alpha is the best estimate already found on the path to the root for player Max, and Beta is for player Min. Initially, Alpha is minus infinity, and Beta is plus infinity.

If at some point Beta becomes less than or equal to Alpha (Beta ≤ Alpha), it means the current node cannot improve on the best move already found, and further exploration of this branch is pointless.

For example, you are playing for white and you have two possible moves. You begin the analysis with the first move and calculate that Black’s best response to this move gives a score of +2 (for White). You set Alpha to +2 since that is the best move for you right now.

Now you are considering the second move. Suppose Black’s first response to it gives White a score of +1. Since +1 is less than the current value of Alpha (+2), further exploration of this branch makes no sense: Black already has a reply that makes this move worse for White than the first one.

You “prune” this branch and skip Black’s remaining responses to White’s second move. None of them matters: Black will choose the reply that is worst for White, so this move can never score better than +1 and will lose out to the first move anyway.
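The same idea in code, again over an explicit tree with leaf scores from White’s point of view (a sketch, not production chess code):

```python
# Minimax with alpha-beta pruning: dicts are internal nodes, numbers are leaves.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, dict):       # leaf: already an evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node.values():
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if beta <= alpha:            # a Min ancestor will never allow this line
                break                    # prune the remaining children
        return value
    else:
        value = float("inf")
        for child in node.values():
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:            # a Max ancestor already has something better
                break
        return value

# The example from the text: the first move guarantees +2; in the second,
# Black's first reply already gives +1 <= Alpha, so the "b" reply is never visited.
tree = {"move 1": {"a": 2, "b": 4}, "move 2": {"a": 1, "b": 9}}
print(alphabeta(tree, True))  # 2
```

Note that the result is identical to plain Minimax; pruning only skips work that cannot change the answer.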

Alpha-Beta Pruning significantly reduces the number of nodes that need to be evaluated, making Minimax much faster and more efficient. In chess, with good move ordering it lets an engine search roughly twice as deep in the same amount of time.

I hope this article inspires you not only to study algorithms, but also to apply strategic thinking in your daily work and life. If you would like an article in this format about AlphaZero, let me know in the comments.

I am also a big fan of chess myself. If anyone wants to play a game, send me a message.

*You can get practical skills in algorithms, mathematics, machine learning, and more in the online courses from my colleagues at OTUS. Check out the **catalog** and **choose a suitable direction**.*