So far I have mentioned two very general methods to play better in most games and to grow your playing skills like a science.

The first one is heuristics — and heuristics to find them and improve on them — so that you develop a well-structured theory about how to augment your view of the playing field with useful concepts abstractly derived from the game specifications and constraint relaxations. Its efficiency relies on your general past experience with games and combinatorial searches, on your meta-thinking abilities, and more generally on your scientific intelligence before and while playing.

The second general method is deepening combinatorial minimax, so that you explore most pertinent moves and their counter-responses farther and farther in the branching-off immediate future. Its efficiency relies on your awareness and brute-force thinking while playing, and on specialized experience with the game coupled with general or specialized heuristics and time-saving algorithms.

It is time to introduce Monte Carlo — a third general method that has recently emerged and has proved to be surprisingly powerful, especially with the game of Go. It consists in randomly playing out many games up to their end and counting how many you have won, starting from a few board states you wish to compare. You may think sampling tons of games may only yield a dull average. But something unexpected may as well come out. Think of how our unexpected physical world comes out of tons of games...
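To make the method concrete, here is a minimal sketch in Python on a toy game -- simple Nim, where you remove 1 to 3 sticks and taking the last stick wins -- rather than Go; the helper names are illustrative only, not an established API:

```python
import random

def random_playout(sticks, to_move, rng):
    """Play simple Nim (remove 1-3 sticks, taking the last stick wins)
    with uniformly random moves; return the winning player (0 or 1)."""
    while sticks > 0:
        take = rng.randint(1, min(3, sticks))
        sticks -= take
        if sticks == 0:
            return to_move          # the player who took the last stick wins
        to_move = 1 - to_move

def estimate_win_rate(sticks, to_move, player, n=20000, seed=0):
    """Monte Carlo evaluation: fraction of random playouts won by `player`,
    all starting from the given board state."""
    rng = random.Random(seed)
    wins = sum(random_playout(sticks, to_move, rng) == player for _ in range(n))
    return wins / n

# Compare two candidate states from player 0's point of view:
# with 4 sticks left and the opponent to move, player 0 is theoretically
# winning; with 5 sticks left and the opponent to move, player 0 is losing.
print(estimate_win_rate(4, to_move=1, player=0))  # clearly above 0.5
print(estimate_win_rate(5, to_move=1, player=0))  # noticeably below 0.5
```

Note how the raw random win rates already separate a theoretically won state from a lost one, without any Nim-specific heuristic at all.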

We'll see.

# Game Strategy Research

Discovering and refining new simple games and strategies is a joy and a model of poetic science

## Saturday, March 29, 2014

## Monday, February 11, 2013

### Meta-games motivate contradictory efforts

When you play, why should you *strive to win fast, or lose slowly if you cannot win*, and why should your opponent do the same from her point of view and counter your very efforts? I am still looking for simple and sound grounds to motivate such a *general principle*.

We may try to look higher at meta-strategies. Consider the meta-game of playing a series of ten consecutive games with the rule that you win the series iff you win more than half of the games in it.

Now, if it were true that you should always strive to win fast or else lose slowly, whatever the game, then you should as well make the same effort for the meta-game under consideration, which is a game too.

If you happen to be losing one of the games in the series and you are sure to win the whole meta-game, because there is no doubt about your superior strength or because you've already won six games in the series, then you should lose that specific game as soon as possible, so as to win the final meta-game as fast as possible. Shorten any local battle to shorten the final victory.

Or if you happen to be winning one of the games in the series and you are sure to lose the whole meta-game, because there is no doubt about your inferior strength or because you've already lost six games in the series, then you should win that specific game as late as possible, so as to lose the final meta-game as slowly as possible. Lengthen any local battle to lengthen the final defeat.

These local efforts would contradict the *general principle* I am trying to justify.

Hence, meta-strategies justify radically different local strategies, and nobody can consistently follow the general principle I started the post with! Unless in unrealistic isolation from sub-games and meta-games.
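The clinching arithmetic in the two paragraphs above can be pinned down in a few lines (an illustrative sketch; the function name and labels are not standard):

```python
def series_status(my_wins, opp_wins, total=10):
    """In a series of `total` games won by taking more than half of them,
    report whether the outcome is already decided."""
    remaining = total - my_wins - opp_wins
    need = total // 2 + 1                 # wins needed to take the series
    if my_wins >= need:
        return "clinched"                 # shorten every remaining game
    if my_wins + remaining < need:
        return "out of reach"             # lengthen every remaining game
    return "undecided"

print(series_status(6, 2))   # clinched
print(series_status(1, 6))   # out of reach
print(series_status(4, 4))   # undecided
```

With ten games, six wins clinch the series, and once six games are lost the series cannot be won anymore -- exactly the two situations where the local and global efforts pull in opposite directions.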

## Tuesday, July 31, 2012

### Science grows on heuristic failures

As I wrote before, the simplest strategy that seems to make sense while being very general and easy to follow in many games consists in aiming straight at your goal while fully ignoring your opponent -- both

- as the past or present owner of anything on the board,
- and as an active agent having any potential present or future influence on the board.

It does look dumb! Let's call it *Strategy 0*. Still, you'll often beat total beginners with it in many games, provided you couple it with well-known basic short-term tactics, deep enough to obfuscate your opponent's vision of what is going on.

I mean, consider Strategy 0 as instructions to evaluate how far you seem to be from your goal. Let's call such an evaluation function *Heuristic 0*. For example, suppose you play ikabegamá as Light and your goal is the pattern as before. On any of the three board states below, Heuristic 0 would say you seem to be 6 moves away from your goal -- since you are ignoring the Dark stones and any future influence of the Dark player.

Now don't really ignore your opponent. Evaluate instead how far she too seems to be from her goal according to Heuristic 0. On the three board states above, that would be infinitely far, 1 move away and 1 move away respectively -- admittedly with many more ways for her to win in 2 moves on the 2nd board state than on the 3rd; we'll refine heuristics later on with such considerations.

Then consider something like the difference between what the heuristic says about you and what it says about your opponent, so that you get a feeling of which player seems to be winning and by how much -- we'll refine that difference computation later on. As Light you would then favor the first board state among the three above, because Dark seems infinitely farther from winning there than on the two other boards.

Then explore the close future of all possible interactions between the two players and minimax, which means: choose the move that would look best to you according to the difference above, after your opponent would choose the answering move that would look best to her according to the difference from her point of view, after you would answer her answer, and she would answer your answer to her answer -- stopping at some predefined depth.

Try it with ikabegamá or some other games you love. You should make it your first step when building up any game engine or the theory of any game. It's quite exciting already. But don't play too long with those deep short-term tactics equipped with such a simple strategy, although they certainly feel as if they were on the verge of showing you some hidden knowledge. You should soon try instead to jump beyond and transcend them.

The three board states above are extreme cases showing how Heuristic 0 can get it wrong, since you are indeed infinitely far from your goal on the 2nd board state -- you can't move -- and you are indeed only 1 move away from your goal on the 3rd -- you only have to jump into the middle of Dark's formation to win immediately.

By finding where Heuristic 0 can get really wrong with the game, exploring what difference it makes to apply or not apply one of its over-simplifying assumptions, you'll understand something highly rewarding, something transcending most ill-equipped tactical obfuscation. You'll surf through games at a higher level.

As a human, don't feel that much superior to machines either, because you may as well equip your game engines with the same understanding. That's the beauty of an improving sequence of well-defined heuristics -- well-defined enough to enter a computer program.
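The whole recipe -- Heuristic 0 for each player, their difference, then depth-limited minimax on that difference -- fits in a short sketch. The toy game below (each player advances a counter by 1 or 2, first to reach a goal square wins) and all the names in it are illustrative, not part of ikabegamá:

```python
import math

GOAL = 10

def h0(pos):
    """Heuristic 0: my apparent distance to the goal, fully ignoring the
    opponent (in this toy race, I may advance my counter by 1 or 2)."""
    return math.ceil((GOAL - pos) / 2)

def evaluate(state, player):
    """Difference of the two Heuristic-0 distances from `player`'s point
    of view: positive means `player` seems closer to winning."""
    return h0(state[1 - player]) - h0(state[player])

def minimax(state, to_move, player, depth):
    """Depth-limited minimax on the toy race game.
    Returns (value for `player`, best step for `to_move`)."""
    if state[1 - to_move] >= GOAL:    # the previous mover has already won
        return (math.inf if 1 - to_move == player else -math.inf), None
    if depth == 0:
        return evaluate(state, player), None
    best_value, best_move = None, None
    for step in (1, 2):
        child = list(state)
        child[to_move] += step
        value, _ = minimax(tuple(child), 1 - to_move, player, depth - 1)
        if best_move is None or \
           (to_move == player and value > best_value) or \
           (to_move != player and value < best_value):
            best_value, best_move = value, step
    return best_value, best_move

# From positions (8, 9) with player 0 to move, stepping +2 wins on the spot,
# so minimax prefers it over the heuristically identical +1.
print(minimax((8, 9), to_move=0, player=0, depth=3))  # (inf, 2)
```

Notice that Heuristic 0 alone rates the two steps equally here; it is the look-ahead over both players' replies that separates them.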

## Monday, March 19, 2012

### Assuming passive opponents - Beginners meet Pólya and Pearl

When you are playing a game you've never played before, or when you are designing a new playing engine, a good, universal and natural starting point is to try to get as close as possible to your winning goal as if there weren't going to be any other player acting on the game.

It makes your first strategic concepts easier to design, because it is often about finding shortest paths or solving a solitaire puzzle.

I think most children do that when learning a game. I do. I would like to see it applied as well in any basic discussion about understanding a game.

More generally, you might look at the game rules and ignore some part of them. First the part that says "there is another player", as I said. Then some other part if needed, such as "there are some other player's stones on the board", or some spatial dimension, flattening the board. It's like the *Generalization* or *Constraint Relaxation* suggested by George Pólya in mathematical research or by Judea Pearl in computer search.

In Chess for instance, it makes you focus on quickly threatening your opponent's King while cutting its escape routes and other anti-check answers in one move. It's infinitely better than playing randomly.

The next step is then to add some concern about your opponent's distance to her own goal. How close is she to her goal? How can you slow her down? How can you keep getting closer to your own goal at the same time? How can you prevent her from slowing you down? How can she prevent you from preventing her... ?
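Here is a minimal sketch of that relaxation on a square board with king-style moves (an illustrative stand-in, not one of the games above): the same breadth-first search answers both the real question and the relaxed one, simply by forgetting the opponent's stones:

```python
from collections import deque

def distance(start, goal, blocked, size=8, relaxed=False):
    """Fewest king-style steps from `start` to `goal` on a size x size board.
    With relaxed=True the opposing stones in `blocked` are simply ignored,
    as if there were no other player -- a Pólya/Pearl-style relaxation."""
    obstacles = set() if relaxed else set(blocked)
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        (x, y), d = frontier.popleft()
        if (x, y) == goal:
            return d
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nxt = (x + dx, y + dy)
                if (dx, dy) == (0, 0) or nxt in seen or nxt in obstacles:
                    continue
                if 0 <= nxt[0] < size and 0 <= nxt[1] < size:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
    return float("inf")

# A wall of opposing stones forces a detour in the real game but not in the
# relaxed one, so the relaxed distance is an optimistic lower bound.
wall = {(3, y) for y in range(7)}
print(distance((0, 3), (7, 3), wall, relaxed=True))   # 7
print(distance((0, 3), (7, 3), wall))                 # 8
```

The relaxed count never overestimates the true one, which is exactly what makes such relaxed distances useful as optimistic guiding heuristics in Pearl's sense.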

## Thursday, January 19, 2012

### ikabegamá 2

Here is a new game with seemingly more interesting rules.

No tiles are distinguished from any others. The players agree on some starting setup consisting of some of their stones already on the board. For example:

The players also agree on some goal pattern or patterns, which make them win as soon as some or all are achieved on the board. For example, Light might win as soon as the following pattern of Light stones shows up locally anywhere on the board:

and Dark might win as soon as the same pattern with Dark stones shows up locally anywhere on the board:

The other rules are otherwise left unchanged from the first game ikabegamá in this blog. Let's call this game *ikabegamá* as well, adding 2 to the name only when the distinction is needed.

Here is a sample game with the starting setup and goal patterns as above. The two players naively build up their own patterns apart from each other, until Dark realises Light is going to win and it is too late to change that, except to postpone it one turn. The pictures show the board position after every move, from move 0 to move 13; the moves are shown by transparent, reduced previous versions of every tile that has just changed.

(game over, Light wins on move 13)
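Checking the winning condition is itself a nice little exercise. A minimal sketch, assuming axial coordinates for the hexagonal tiles and checking translated copies of the pattern only (no rotations or reflections -- whether those should count is left open here):

```python
def normalize(cells):
    """Translate a set of axial hex coordinates so its minimum corner is (0, 0)."""
    qmin = min(q for q, r in cells)
    rmin = min(r for q, r in cells)
    return frozenset((q - qmin, r - rmin) for q, r in cells)

def pattern_appears(stones, pattern):
    """True iff some translated copy of `pattern` is contained in `stones`.
    Deliberately simple: translations only, anchored on each stone in turn."""
    pat = normalize(pattern)
    for q0, r0 in stones:                  # try anchoring every pattern cell
        for pq, pr in pat:                 # on every stone of the board
            shifted = {(q0 - pq + q, r0 - pr + r) for q, r in pat}
            if shifted <= stones:
                return True
    return False

stones = {(2, 3), (3, 3), (2, 4), (5, 0)}
goal = {(0, 0), (1, 0), (0, 1)}            # a small triangle of stones
print(pattern_appears(stones, goal))       # True
print(pattern_appears({(2, 3), (3, 3)}, goal))  # False
```

Any translation of the pattern that fits must place each of its cells on a stone, so anchoring every pattern cell on every stone explores all candidates.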

## Thursday, December 29, 2011

### The time to be spent as a measure of optimal play

The rules in my previous post seem to lack substance when they are played out. If the players have decided to set up the two opposing nests close enough to each other, the first player always wins with the strategy of simply rushing forward. Otherwise, if the nests are set up far enough apart, the two players have time to first secure their nests by surrounding them with two layers of their stones, and I don't see any way then for them to reach their respective winning goals. We'd better forget about those rules.

More generally, avoid rules or positions where one player may force the game to last endlessly.

Even more generally, try to win in as few moves as possible, or else lose in as many moves as possible. Renowned researchers in Game Theory and developers of game engines, such as Jonathan Schaeffer and his colleagues at The University of Alberta GAMES Group, are well aware that time is part of optimal play.

I don't know, however, how to justify it. Is it an axiom we can hardly legitimate? Or can it be derived from some other principle that would be more widely accepted?

What I do know is that it doesn't show up in most discussions about developing basic AI and game engines.

Most chess-playing engines start evaluating positions with a materialistic account of the pieces on the board: adding 1 for every White Pawn, subtracting 1 for every Black Pawn, adding about 3 for every White Knight, subtracting the same amount for every Black Knight, and so on. Positional and dynamic advantages and weaknesses are also carefully weighted and added or subtracted, yielding a final score in the form of a real number. The closer the score is to some big positive number, the closer White is to *winning easily*. Symmetrically, the closer the score is to some big negative number, the closer Black is to *winning easily*.

In Go, the basic unit of measure might be the stone and its equivalent territory point, all the more as they directly contribute to the winning condition. In Arimaa, it might be the Rabbit or the Elephant or anything else -- there is no consensus yet. And so on.

While this heterogeneous approach has made it fairly easy for every engine to compare positions within the same game, there are general flaws with it:

- You can't easily compare any Chess position with any Go position.
- You can't easily compare what two different Arimaa engines think of the same position.
- You must invent specific formulae to combine the score of a leaf and its depth in the tree search -- *winning easily* is not an explicit and well-defined part of the score definition.

We'll keep that in mind while exploring future games.
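One standard such formula, sketched below, is the mate-score convention used by chess engines: fold the ply at which the winning leaf was found into the score itself, so that quicker wins score higher and longer resistance in lost positions scores higher too:

```python
MATE = 1_000_000   # a score magnitude no material evaluation can ever reach

def terminal_score(winner, ply, player):
    """Fold time into the score the way chess engines do: a win found at
    a shallower ply scores higher (win as fast as possible), a loss found
    deeper scores higher (lose as slowly as possible)."""
    if winner == player:
        return MATE - ply        # prefer the fastest win
    else:
        return -MATE + ply       # prefer the slowest loss

# Between two winning lines, a maximizer picks the quicker mate...
assert terminal_score("me", ply=3, player="me") > terminal_score("me", ply=9, player="me")
# ...and between two losing lines, the one that resists longest.
assert terminal_score("you", ply=9, player="me") > terminal_score("you", ply=3, player="me")
print("depth-aware scores behave as intended")
```

This is precisely the kind of ad hoc glue the third bullet complains about: time enters the score through a trick rather than as an explicit part of its definition.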

## Wednesday, January 19, 2011

### ikabegamá - a game to explore

Here is a game I would like to explore. Let's call it *Ikabegamá*, which means *"nesting over the other"* in Kotava, because it is about swarming into the other player's nest:

- The features listed in my previous post apply.
- There are two players. Let's call them *Light* and *Dark* and color their respective tokens accordingly.
- Before the game really starts, the players agree on some board tile to be called the *Light nest* and another to be called the *Dark nest*. I'll picture them with a hexagonal ring in their owner's color. The farther the two nests lie from each other, the longer the game is going to last.
- The game starts with only two tokens on the board -- a Light token in the Light nest and a Dark token in the Dark nest. All the other tiles start empty. For example:
- The goal for each player is to get a token of his or her own into the other player's nest.
- The game is turn-based, the players alternating moves on the board. Light starts the game with a valid move of his choice, changing the board accordingly, then Dark follows with a valid move of her choice, changing the board accordingly, then Light follows, and so on.
- The rules for valid moves are the same as for the game of Hexxagon. In other words, a valid move for Light is
  - either a *growth* -- Light fills in some empty tile of his choice with a new Light token, provided at least one of the 6 adjacent tiles is already occupied by a Light token. For example, the position above yields 3 valid growths for Light, which I'll picture with small Light tokens hovering above the empty tiles. Light is allowed to choose only one of them during a single turn.
  - or a *jump* -- Light moves some Light token of his choice two tiles away from the tile it currently occupies, provided the destination is empty. (Two tiles away means adjacent to an adjacent tile, but not immediately adjacent nor the same tile; there can be from none to twelve possible destinations.) Except for the source and destination, it doesn't matter whether any other tiles are empty or not. For example, the position above yields 12 valid jumps for Light, which I'll picture with small Light tokens hovering above the empty destination tiles and a small empty tile hovering above the jumping token. Light is allowed to choose only one of them during a single turn.
- After Light has either *grown* or *jumped* and changed the board accordingly, some *automatic flipping* occurs: any Dark token adjacent to the newly occupied tile is turned into a Light token, thus changing owner and color from Dark to Light. I'll picture flipped tokens with small Light tokens hovering above them. For example, if Light chooses to grow onto the left, two Dark tokens are replaced by Light tokens. Here are the position, the chosen move, the resulting flipping and the resulting position:
- It is then Dark's turn, and the same rule applies to Dark, with Dark tokens instead of Light tokens and Light tokens instead of Dark tokens. For example, on the last position above, Dark may now choose to jump downwards, thus turning two Light tokens into Dark tokens:
- The players must always choose a valid move as above if they can -- either a growth or a jump, not both, and only one. If they can't, because they don't have any token left on the board or their tokens are completely surrounded by opposing tokens at distances 1 and 2, they pass and it is the other player's turn.
- The game is over as soon as
  - either some Light token occupies Dark's nest, in which case Light has won the game,
  - or some Dark token occupies Light's nest, in which case Dark has won the game.