These sequences are also known as "cubefree," so you might want to continue researching along those lines.
In particular, the game discussed is trying to find cubefree words over a two-letter alphabet. The sample infinite game seems to agree with the listed sequence on OEIS for the lexicographically earliest infinite cubefree word, though your method of generation appears to be different from the one in the comments. (I haven't analyzed it in detail.)
Cubefree! Oh, that’s a good keyword to find more results about this.
From your link it seems it is a conjecture that this is the lexicographically earliest one. Very interesting!
This is just a thought, but I wonder if there is a mathematical connection between this game and something like the binary representation of irrational (or maybe transcendental) numbers.
The article is also notable for its consistency in spelling "lose" as "loose".
That's a neat thought!
One could interpret the outcome of the game as a number by ○ being the digit 0 and ● being 1. For fun we could also say that if there is a repeating subsequence at the end (someone lost), then that is repeated infinitely. I suggest this because any won game has a sub-string repeated three times at the end, so we might as well repeat it to infinity!
Say the example game, ● ● ○ ○ ● ● ○ ○ ● ● ○ ● ● ○ ● ○ ○ ● ● ○ ○ ● ● ○ ○ ●, would be 0.11001100110110100110011001, or perhaps 0.11001100110110(1001) where the parentheses express infinite repetition. If we choose the first, it is 53701017/67108864; the second would be 65553/81920.
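Just to make the arithmetic reproducible, here is a small Haskell sketch (my own, with made-up names, not from the post) that evaluates a binary digit string with an optional infinitely repeated tail as an exact rational:

import Data.Ratio ((%))

-- 0.prefix(block)(block)... as an exact rational. Digits are given as
-- lists of 0s and 1s; an empty block means the expansion just terminates.
value :: [Integer] -> [Integer] -> Rational
value prefix block = asInt prefix % (2 ^ n) + rep
  where
    n     = length prefix
    asInt = foldl (\acc b -> 2 * acc + b) 0
    rep | null block = 0
        | otherwise  = asInt block % ((2 ^ length block - 1) * 2 ^ n)

For example, value [1,1,0,0,1,1,0,0,1,1,0,1,1,0] [1,0,0,1] gives 65553 % 81920 (the second reading above), and the full 26-digit prefix with an empty block gives 53701017 % 67108864 (the first).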
In any case, a won game would be a rational number, while a game which goes on forever would be an irrational! (The expansion of an infinite game can never become eventually periodic, since a periodic tail would repeat some block three times in a row; so the number cannot be rational.) One could then wonder which irrational numbers are represented by such games.
A neat, but perhaps difficult question just occurred to me: Is every irrational number which is represented by an anti-pattern game transcendental?
Without having a proof ready at hand, I am quite sure that the `generate` sequence from my post represents a transcendental number.
Just adding to the OP: for some (myself included) it is quite painful to see "loose" in place of "lose"; this should be fixed ASAP, as it distracts the reader from the content. Lose = opposite of win; loose = opposite of tight.
Should be fixed now! I hope the writing is easier to bear now.
A few other typos ("though" instead of "thought" in the first sentence, "trice" instead of "thrice", "simlpe" instead of "simple"). Hope this helps!
Also, weather -> whether
> One could then wonder which irrational numbers are represented by such games.
Seems like there should be a bijection there, no?
Some irrational numbers are not valid games. For instance, I am sure the binary expansion of, let us say, π/4 has 000 as a substring somewhere. But that could never happen in a game, because the rules disallow repeating a substring three times in a row.
Ah, you are right! At least wrt the "naive" mapping of irrationals to binary representation. My gut still tells me it's bijective, but the mapping has to be a bit more involved :D
There is a simple intuitive explanation for how an "infinite" game is possible:
We can define two different sequences of three characters that start with 0 and end with 1: 001 and 011. Because each of them starts with one character and ends with the other, we can never create a run of three identical characters by chaining two of these sequences.
Now we can go one step deeper and encode the "001" sequence as 0, and the "011" sequence as 1. We can generate our 001 and 011 patterns again, but with our encoded versions of 0 and 1, giving us these sequences: 001001011 and 001011011. These sequences again have the same characteristic: they start and end with different sub-sequences (001 and 011), so they can be chained without ever producing three identical sub-sequences in a row.
We can now use these larger sequences and encode these as 0 and 1, and so forth ad infinitum.
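To make that concrete, here is a minimal Haskell sketch (mine, not from the article) of the substitution just described:

-- The substitution 0 -> 001, 1 -> 011 described above.
expand :: [Int] -> [Int]
expand = concatMap step
  where
    step 0 = [0, 0, 1]
    step _ = [0, 1, 1]

-- Since 001 itself starts with 0, iterating from [0] produces longer and
-- longer prefixes of one fixed infinite word:
-- [0], [0,0,1], [0,0,1,0,0,1,0,1,1], ...
levels :: [[Int]]
levels = iterate expand [0]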
Great explanation!
I guess it also hints at why the sequences keep getting smaller when you compress them several times (as mentioned in a footnote). Each compression peels away a layer in this expansion. That's my intuition at least.
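A hedged sketch of that intuition (not what the actual compressor does): decoding 001 back to 0 and 011 back to 1 peels away exactly one expansion layer, shrinking the word to a third of its length.

-- Inverse of the expand function from the sketch above: peel (expand w) == w.
peel :: [Int] -> [Int]
peel (0 : 0 : 1 : rest) = 0 : peel rest
peel (0 : 1 : 1 : rest) = 1 : peel rest
peel _                  = []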
One relatively common way to remove first-player advantage is to have the first player place one stone, and then after that each person places two stones on their turn. So:
After player 1's first move there is one stone in the sequence.
After player 2's first move there are three stones.
After player 1's next move there are five stones.
After player 2's next move there are seven stones.
Etc. Usually this completely removes the first-player advantage. It's obvious that this removes player 1's potential advantage, since whether he plays black or white, his first move is symmetric without loss of generality.
Player 2 actually has the first consequential move, with three possible options — again, taking player 1's move as a given color, player 2 can play
DD DS SD
where S means “same” and D means “different”. Technically player 2 has four options: he could also play SS and lose immediately :-)
Cool! I hadn't thought about varying the number of stones one can place. It's definitely a variation to look into!
Would your intuition be that this makes it so that neither player has a winning strategy? That perfect play would yield an infinite sequence?
My intuition would be that either there is an infinite game possible and perfect play produces a stalemate, or games must inevitably end, and someone must have a winning strategy — in which case I'm betting it's player 2, who makes the first “real” move.
It's not intuitively clear to me how the game can go on forever -- I would expect that, eventually, you would hit upon some valid pattern. The explanation in the text didn't really make sense to me. Could anyone help with this?
The game could use a better definition of what constitutes a pattern.
>> A pattern is a sequence of pebbles repeated three times in a row.
By that logic, player 2 would have lost at their fifth turn. ... And 'in a row' is open to interpretation, since it adds a constraint beyond 'repeated three times'.
I was confused also. My guess was that maybe a “sequence” has to be 4 or more pebbles?
I think it's pretty clear that a string s of pebbles is a pattern iff there exist strings x and y such that s = x|y|y|y, where | denotes string concatenation and y is nonempty.
EDIT: Clarify that only y is nonempty.
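A small Haskell sketch (mine, hypothetical name) of that definition as a losing-position test:

-- s is a losing position iff s = x|y|y|y with y nonempty: some nonempty
-- block is repeated three times at the very end of s.
endsInCube :: Eq a => [a] -> Bool
endsInCube s = any cubeAtEnd [1 .. length s `div` 3]
  where
    r = reverse s
    cubeAtEnd n = take n r == take n (drop n r)
               && take n r == take n (drop (2 * n) r)

-- e.g. endsInCube "abcbcbc" == True (it ends in bc|bc|bc),
--      endsInCube "aabbaabb" == False.

Since the position before the last move was not losing, it is enough to look for cubes ending at the final pebble.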
> To solve this question I wrote a short Haskell program which does a brute force search to find a winning strategy.
Could you tell us more about this? I am curious how this problem was formulated using modal logic. Seems fascinating!
> The file is a mere 512 bytes, and unpacks to a 26kb file, which again unpacks to 3Mb.
My brain hurts when thinking about that. How could 512 bytes be enough to store ~3 million bytes? I know that compression is basically about finding patterns, and this sequence should be very compressible.
If it was a file filled entirely with one character, the compression could simply be to write a file saying "this character copied 3 million times", which is less than 512 bytes.
This is not exactly what happens here, but many compression algorithms work by recognising that certain substrings are very common, and give them a "shorter code". In this game, there are some quite long such strings, giving a good compression rate. Furthermore, because of the recursive nature, it can find such patterns again after the common substrings are replaced by shorter codes, because these codes again form patterns with repeated substrings. This goes on until there is almost just a bit of meta data and an "ur-pattern".
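For fun, here is a toy Haskell sketch (mine, and emphatically not what a real compressor does exactly) of that "shorter code" idea: find the most frequent adjacent pair of symbols and replace each occurrence with a fresh code. Applied repeatedly, the codes themselves start forming repeated pairs, mirroring the recursion described above.

import qualified Data.Map as M
import Data.List (maximumBy)
import Data.Ord (comparing)

-- One round of pair replacement (the input needs at least two symbols).
-- Left marks an occurrence of the replaced pair, Right a literal symbol.
compressStep :: Ord a => [a] -> ((a, a), [Either (a, a) a])
compressStep xs = (best, replace xs)
  where
    counts = M.fromListWith (+) [((a, b), 1 :: Int) | (a, b) <- zip xs (tail xs)]
    best   = fst (maximumBy (comparing snd) (M.toList counts))
    replace (a : b : rest) | (a, b) == best = Left best : replace rest
    replace (y : rest)                      = Right y : replace rest
    replace []                              = []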
Compression is fascinating in many ways. For instance, since there are a fixed number of files of a certain size and some bigger files are made smaller, some smaller files must be made bigger by the compression! Of course, this could be as simple as attaching a header or flag which says "I could not compress this. Here is the old file verbatim." But that is still a bit longer than the original!
In some sense, the program itself is a ~512 byte compression of an infinite stream of bytes.
This is the idea behind Kolmogorov complexity[0]: that the complexity of a string (finite or infinite) can be measured, relative to a programming language, as the length of the shortest program which produces it.
Precisely computing the Kolmogorov complexity of a given string could be very difficult, though. In general, it is uncomputable because we cannot decide if a given program will output a given string.
It's also always relative to some specific programming language; it's not an intrinsic property of strings. (You can, of course, convert to another programming language where it's simpler, but then you incur the (constant) cost of the transpiler from language 1 to language 2.)
And by adding a constant to the specification of your programming language, any sequence can have complexity 1! (But of course not every sequence can have its own constant.)
You can think about the compressed size of some file as approximating the amount of information (in the Shannon sense[1]) there is in the file. A perfect compression would reduce the file to exactly the size of the amount of information it contains.
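If you want to play with that, here is a small Haskell sketch (mine) of a zeroth-order entropy estimate; real files also have higher-order structure, so this is only a crude bound on compressibility.

import qualified Data.Map as M

-- Shannon entropy of the symbol distribution, in bits per symbol.
entropy :: Ord a => [a] -> Double
entropy xs = negate (sum [p * logBase 2 p | p <- probs])
  where
    n      = fromIntegral (length xs)
    counts = M.fromListWith (+) [(x, 1 :: Int) | x <- xs]
    probs  = [fromIntegral c / n | c <- M.elems counts]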
This reminds me of primitive words [1]: a primitive word is a word that is not the (2+ times) repetition of any other word. This is slightly different from a non-pattern word from the article, which is a word that is not a 3+ times repetition of any other word.
The anti-pattern game is about extending words such that they do not contain a pattern word.
I wonder how the situation changes if 2 times repetitions would count as pattern (i.e. non-primitive words).
For primitive words, it is an open problem whether the language of primitive words (over any non-trivial finite alphabet) is context-free.
I wonder if the language of words that don't contain patterns (or non-primitive words) is context-free.
> I wonder how the situation changes if 2 times repetitions would count as pattern
I might be misunderstanding, but do you mean that you cannot even have two of the same colour in a row? This is a very simple win for first player:
W B W ?
Yes, seems like there are only finitely many words over a binary alphabet that do not contain a non-primitive word (0, 01, 010 and 1, 10, 101). How would it change if the alphabet has three symbols?
It certainly seems like you can get much longer words. I just had a quick go and came up with
0 1 0 2 0 1 2 0 1 0 2 0 2 1
but I stopped there because it gets tedious to check manually for repetition. Might be worth writing a little script to produce the word where each letter is the smallest possible number that doesn't create repetition.
Just checked with AI: Thue showed in 1906 that there are infinitely many squarefree words (:= words that don't contain a non-primitive word) over an alphabet with at least 3 symbols.
Cool! This paper is also quite readable: https://arxiv.org/pdf/2104.04841
On p.2 they follow my idea of adding the lowest possible letter at the end, although they generalise it to adding the letter as close to the end as possible. They conjecture that this process does not stop. I'm always amazed by how quickly you arrive, in combinatorics, at questions that no one knows the answer to.
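Here is a quick Haskell sketch (mine) of the append-only version of that greedy rule. Note that a new square can only appear as a suffix, since the word was squarefree before the new letter.

-- Does some nonempty block occur twice in a row at the very end?
endsInSquare :: Eq a => [a] -> Bool
endsInSquare s = any squareAtEnd [1 .. length s `div` 2]
  where
    r = reverse s
    squareAtEnd n = take n r == take n (drop n r)

-- One greedy step: append the smallest letter of {0,1,2} that keeps the
-- word squarefree; Nothing means we are stuck.
step :: [Int] -> Maybe [Int]
step w = case [c | c <- [0, 1, 2], not (endsInSquare (w ++ [c]))] of
           (c : _) -> Just (w ++ [c])
           []      -> Nothing

Amusingly, iterating step from [] gets stuck already at 0102010 (each of 01020100, 01020101 and 01020102 contains a square), which is presumably why the paper lets the new letter be inserted slightly before the end rather than only appended.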
This little game I made might be one of the most tedious little games to actually play. But I found it great fun to analyse!
(For the initiated, I should mention that it is related to Thue–Morse sequences.)
But what is the winning strategy? It would be interesting to know whether the decision tree has a compact algorithmic description, since that would serve as an explanation of the strategy. Even if not, I'd be curious if there is some other simple proof of its existence. And failing that, it would be interesting to see an optimal game: a game in which player 2, though doomed, makes the game take as long as possible (presumably 21 moves).
Yes, sorry for just teasing its existence. I shall see if I can find a good way to present it when I have some time!
This reminds me of Borel Games
https://gowers.wordpress.com/2013/08/23/determinacy-of-borel...
This reminds me of the movie "The Oxford Murders" with Elijah Wood, where a maths professor and their student argue about whether any pattern can be predicted by logic. Well worth a watch.
https://claude.ai/public/artifacts/4e8bcfee-f333-4f27-87cc-a...
runnable version
Neat!
The AI game seems to go on forever. It hasn’t found the winning strategy. Which is fine. The infinite sequences seem to be getting the most juice in the discussion here.
Very interesting, once I realized that the longer sequences were wrapping on my mobile. At first, it appeared that the “top” line was player 1 and the “bottom” line was player 2.
Ah, I hadn't tested this on mobile. I could try to find some way of preventing the line break.
Does this have anything to do with modal logic?
Yes, but the connection is not clear from what I wrote. I keep intending to make a little post about the connection; I want it to highlight some Haskell code I wrote, but I haven't polished it yet. I want to make an update to it, using my applicative logic library[0][1].
The short version: By representing the game coalgebraically, one can use modal logic to solve it (find a winning strategy for player 1) by brute force.
The old code looks like:
-- Modal operators
e = modal any' (Coalg possible)  -- ◇: some possible move
a = modal all' (Coalg possible)  -- □: every possible move
-- Test for winning strategy within a limited number of moves.
winning :: Integer -> Player -> State -> Bool
winning 0 _ _ = False
winning n p s = wonAlready || e (a (winning (n-1) p)) s where
wonAlready = (winner s == Just p)
We can translate this into more standard modal logic. Let ◇ mean "there is a move the player can make such that", and □ mean "for every move the player makes". We define the existence of a winning strategy for player 1 inductively:

S(0) = ⊥
S(n+1) = W(p₁) ∨ ◇□S(n)
Intuitively, you have a winning strategy if you have won already, or if there is a move you can make such that, whatever move the opponent makes, you still have a winning strategy.

[0]: https://hakon.gylterud.net/programming/applicative-logic.htm...
[1]: https://github.com/typeterrorist/applicative-logic/blob/moda... – this is a branch with the modal logic operators defined.
The above just tests for existence, but a slight modification, based on the same logical expression, gets us an actual winning strategy:
-- Game data
data Player = P1 | P2
deriving (Eq, Show)
data Color = Red | Blue
deriving (Eq, Show)
data State = S [Color]
deriving (Eq, Show)
data Strategy = Won                          -- the game is already won
              | Force Color Strategy         -- our move, then the rest of the strategy
              | Choice (Color -> Strategy)   -- a continuation for each move the opponent could make
-- Modal operators
e' = modal sany' (Coalg possible')  -- ◇, but producing a Strategy
a' = modal sall' (Coalg possible')  -- □, but producing a Strategy
-- Test for winning strategy within a limited number of moves.
winning' :: Integer -> Player -> State -> Maybe Strategy
winning' 0 _ _ = Nothing
winning' n p s = wonAlready <|> (e' (a' (winning' (n-1) p)) s) where
wonAlready = if (winner s == Just p) then Just Won else Nothing
You can try making a chess or other physical game box, using two animals to represent the two colours of pebbles.