Chess is a game designed for two players. All the experiments with ever more advanced computers playing either against humans or against each other naturally conform to this assumption. But what if the game were changed so that each piece were artificially intelligent (AI), made its own moves, and the decision about which piece on a side moves next were negotiated among the pieces on that side? What could this tell us about ways we can use distributed AI (DAI) or develop a complex swarm intelligence (SI), and about whether and how the “wisdom of crowds” dynamic might apply to groups or teams of AI processes?
The idea presently ventured is not a computerized version of the “Wizard’s chess” in the Harry Potter books and movies, since, like real chess, that is a two-player game. The only similarity might be that each piece would indeed have its own opinion about its next move. It might be a bit more like a virtual chess version of the sci-fi drama Westworld, in that pieces interact (although not with people) and learn. In any event, this line of thinking was actually sparked by an image (right) of a couple of robot figures on a chessboard* at the just concluded Future Fest 2018 conference in London.
While being quite aware of the advantage of a single mind or computer directing a side in chess, I’ve also become interested (as a non-expert in the field) in how intelligent agents with different though perhaps complementary goals might interact in a defined environment toward a shared objective. Since so much work has been done with computer/AI chess in the standard 2-player mode, I wondered what might be possible with AI on the level of all 32 individual pieces on a virtual chessboard.
Autonomous pieces work out the moves
According to the scenario I’m imagining, there is no overseeing player controlling the pieces on a side. Each AI chess piece:
- is autonomous
- knows the rules of the game and cannot break them
- knows the main object of the game
- plans its own moves and will normally avoid a move resulting in its being taken
- can communicate with all the other pieces on its side (but not the other side)
Another design parameter involves a choice: either each piece would know no more than the above, or it would be given data on how it has (been) moved throughout many actual games. In the latter case, it would have a repertoire of possible moves in various situations to choose (or depart) from, but not a generalized overview of the games.
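The parameters above can be sketched as a small data structure. This is a purely hypothetical outline in Python – all names (PieceAgent, propose_move, and so on) are illustrative, not from any existing chess library – showing how each piece might bundle its identity, its optional per-piece history, and its same-side-only communication channel:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the per-piece agent described above.
# A real implementation would plug legal_moves/propose_move into a rules engine.
@dataclass
class PieceAgent:
    kind: str                 # "pawn", "knight", ...
    square: str               # starting square, e.g. "e2"
    side: str                 # "white" or "black"
    history: list = field(default_factory=list)  # optional data from past games

    def legal_moves(self, board):
        """Moves this piece may make; the rules cannot be broken by design."""
        raise NotImplementedError  # would query a rules engine

    def propose_move(self, board):
        """Return a (move, score) proposal; normally avoid being captured."""
        raise NotImplementedError

    def broadcast(self, message, teammates):
        """Communicate with pieces on its own side only."""
        for peer in teammates:
            peer.receive(message)

    def receive(self, message):
        self.history.append(message)
```

The choice between the two knowledge levels maps onto whether `history` starts empty or is preloaded with the piece’s moves from many actual games.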
The communication among pieces on a side prior to a move would of course be a critical aspect. For the opening move on each side, only 10 of the 16 pieces can move – the eight pawns and the two knights – and each has exactly two choices, for a total of 20 potential moves. Beyond that, the numbers and combinations – and hence the complexity – increase rapidly. There have been experiments in which AI bots interact, but this 16-piece negotiation would be significantly more complex.
With each piece able to consider its own possible moves – plus the option of not moving – on each turn, even given a specific arrangement of the board after each previous move, there is no single obvious decision for the side from the point of view of the individual pieces (except where a piece wants to escape being taken, or the king is threatened). Unless instructed in the set-up how to arrive at a decision, the pieces would have to develop their own criteria or method for choosing which piece on the side makes the next move. Some protocol for communication would likely be necessary, especially to facilitate human study of the decision process.
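One minimal protocol – offered only as a sketch, since the post deliberately leaves the decision method open – is a single round of scored proposals: each piece submits its preferred move with a self-assessed score (or abstains), and the side plays the highest-scored proposal. Escaping capture or defending the king could be modeled as proposals with boosted scores. All names here are hypothetical:

```python
import random

def choose_side_move(proposals):
    """One-round negotiation among same-side pieces.

    proposals: list of (piece_id, move, score); move may be None to abstain.
    Returns the (piece_id, move) with the highest score, ties broken at random,
    or None if every piece abstained.
    """
    live = [p for p in proposals if p[1] is not None]
    if not live:
        return None
    best_score = max(score for _, _, score in live)
    winners = [p for p in live if p[2] == best_score]
    piece_id, move, _ = random.choice(winners)
    return piece_id, move

# Example: the e-pawn's proposal outscores the knight's, and a2 abstains.
result = choose_side_move([("Ng1", "Nf3", 0.6), ("e2", "e4", 0.9), ("a2", None, 0.0)])
print(result)  # ("e2", "e4")
```

Richer protocols are obviously possible – multi-round bargaining, auctions, or weighted voting – and part of the experiment would be seeing whether the pieces converge on one themselves; logging each proposal round would also give human observers the study trail mentioned above.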
Learning from learning chess pieces
If we treat the AI chess pieces as learning programs (like AlphaZero and Leela Chess Zero), then each piece would probably best always occupy the same starting position – such as the e2 white pawn or the b8 black knight. That would simplify reference to past games if we choose to give pieces that data, and it would in any case presumably facilitate learning the role of the piece over many games.
One could also try various experiments, such as switching the position of a piece (that e2 white pawn to, say, h2) or putting a veteran piece on a rookie team, to study how such changes affect team function.
It would be interesting to evaluate how much computing power each AI piece needs, and whether and how much that varies by type of piece or position – and naturally also what the aggregate of those demands is per side.
As with AI in the 2-player game, one would watch for unexpected outcomes in the 32-player (but still 2-side) game. Would pieces, for instance, develop a willingness to sacrifice themselves in scenarios that might lead to their side winning?
Although the object of such an effort would not necessarily be to develop a “team” of pieces that could win against accomplished players, it might be useful at some point to have the AI pieces play against single players – human or computer – so that they can learn in different settings and so that the effectiveness of their “teamwork” can be measured.
* Drikybot, the creation of Audrick Fausta, dancer and engineer in mechatronics. Image was copied from Twitter. The caption on the tweet that triggered my thinking on this was something like “what are their thoughts?” – unfortunately I was unable to retrieve that specific tweet for this post.