The Advance of Chess Engines with Deep Learning

Authors

  • Haoran Wang

DOI:

https://doi.org/10.61173/c5arxh08

Keywords:

Chess, Machine learning, Deep Learning, Interpretability

Abstract

Since IBM’s “Deep Blue” computer defeated the world champion Garry Kasparov, chess has been a vital evaluation scenario for verifying the learning ability of artificial intelligence algorithms. Recently, with the rapid development of neural network technology, deep learning and reinforcement learning based on neural networks have transformed chess artificial intelligence. Several mainstream neural network architectures play distinct roles: Convolutional Neural Networks (CNNs) excel at recognizing chess pieces and extracting game features, while Recurrent Neural Networks (RNNs) analyze complex move sequences. AlphaZero, based on deep reinforcement learning, has even surpassed human champions in Go through self-play learning, demonstrating the great potential of artificial intelligence in intellectual games. Although artificial intelligence has greatly enhanced the competitiveness and accessibility of the game, the interpretability of deep learning models remains a limitation, especially in high-risk or high-trust areas, where understanding model behaviour, the decision-making process, and transparency is essential. In this paper, the development of deep learning in chess systems is studied in depth, the challenges of interpretability are explored, and the potential of causal reasoning to enhance interpretability and the overall application value of chess artificial intelligence is discussed.

Published

2024-12-31

Section

Articles