THREAD: USING AI TO WIN NUMBER GUESSING GAME

(But really just using logistic regression)

I recently learned about Thomas Cover's game from @fermatslibrary & really liked it! It's a great example for learning about games, probability, ML & RL. Let's use ML to "beat" the game.
Consider Mark and Sally playing a 2-person random number guessing game.

Mark picks two numbers and keeps them hidden from Sally. Sally randomly selects one of the numbers.
Now Sally has to decide if the number NOT shown is bigger than the revealed number. If she guesses right, she wins. Otherwise, she loses.
It would seem that Sally has a 50% chance of winning. But Thomas Cover, a pioneer in information theory and statistics, came up with a strategy for her to win with probability greater than 50%!
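Cover's strategy, for reference: Sally draws her own random threshold T from any distribution with full support; if the revealed number is below T she guesses the hidden one is bigger, otherwise smaller. A quick Monte Carlo sketch (the standard-normal choices here are just illustrative assumptions):

```python
import random

def cover_round():
    # Mark picks two numbers (here: iid standard normals)
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    # Sally randomly sees one of them
    shown, hidden = (a, b) if random.random() < 0.5 else (b, a)
    # Sally draws a random threshold with full support
    t = random.gauss(0, 1)
    guess_hidden_bigger = shown < t
    return guess_hidden_bigger == (hidden > shown)

random.seed(0)
wins = sum(cover_round() for _ in range(100_000))
print(wins / 100_000)  # consistently above 0.5
```

Intuitively, whenever T happens to land between Mark's two numbers, Sally is guaranteed to win; otherwise she's at 50/50 — so overall she's strictly above 50%.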
But super smart folks like Thomas Cover are rare; if we wanted a fresh strategy for every new problem, we would somehow have to clone them. In other words, how can we scale Thomas Cover?

Use Machine Learning of course!!
Unlike Thomas Cover, ML needs lots of examples to work. We will teach our machine to play this game by having it "self play" many times and guide it to get better. The ideas used here are the same ones that ML uses to beat humans at games like Go and Chess.
Here is how we will simulate the game.

We will generate 2 random numbers from a random number generator & only show one of them to the classifier. If the other hidden number is bigger than the shown number, then this is a “positive” example. Otherwise, it's a “negative” example.
We can generate as many of these examples as we like & have our classifier play against itself.
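The example-generation step above can be sketched like this (a hypothetical `generate_examples` helper; I'm assuming numbers drawn uniformly from [0, 1], which the notebook may do differently):

```python
import numpy as np

def generate_examples(n, rng):
    """Each example: the revealed number; label 1 if the hidden number is bigger."""
    pairs = rng.random((n, 2))           # Mark's two hidden numbers
    reveal_idx = rng.integers(0, 2, n)   # Sally's random pick
    shown = pairs[np.arange(n), reveal_idx]
    hidden = pairs[np.arange(n), 1 - reveal_idx]
    X = shown.reshape(-1, 1)             # feature: the revealed number
    y = (hidden > shown).astype(int)     # label: is the hidden one bigger?
    return X, y

rng = np.random.default_rng(0)
X, y = generate_examples(5, rng)
```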
We can do this the usual way by generating a training set of these examples to train the classifier (or “self-play” to get more buzz) and then evaluate the classifier (or “agent” for even more buzz) on a held out test set.
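A minimal train/evaluate loop, assuming scikit-learn and uniform-[0, 1] numbers (a sketch — the notebook's exact setup may differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_split(n):
    pairs = rng.random((n, 2))
    idx = rng.integers(0, 2, n)
    shown = pairs[np.arange(n), idx]
    hidden = pairs[np.arange(n), 1 - idx]
    return shown.reshape(-1, 1), (hidden > shown).astype(int)

X_train, y_train = make_split(50_000)   # "self-play" examples
X_test, y_test = make_split(10_000)     # held-out evaluation

clf = LogisticRegression().fit(X_train, y_train)
print(clf.score(X_test, y_test))        # roughly 0.75 for uniform numbers
```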
We have made plenty of over-simplifying assumptions (iid, stationarity, etc). But now that we have the basics down, we can begin asking how our algorithm degrades when we start relaxing some of those assumptions. We can also explore how to make our algorithms more robust!
Here are some questions to consider:

- violation of statistical independence: Mark picks the two numbers strategically (eg: gonna pick 2 numbers super close!)
- what if instead of Sally picking the number, Mark selects which number to reveal?
- violation of stationarity: Mark is getting tired of losing to Sally. He starts changing his strategy for selecting the random numbers.
- change the rewards: winning or losing for Sally has equal reward/loss. What if we change this? Make Sally lose more when she loses?
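The last variation (asymmetric rewards) can be explored, for instance, by reweighting the classes during training — a sketch assuming scikit-learn's `class_weight` and the uniform-[0, 1] setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
pairs = rng.random((n, 2))
idx = rng.integers(0, 2, n)
shown = pairs[np.arange(n), idx]
hidden = pairs[np.arange(n), 1 - idx]
X, y = shown.reshape(-1, 1), (hidden > shown).astype(int)

# Suppose a wrong "bigger" guess costs Sally twice as much: upweight the
# negative class so the classifier is more conservative about predicting
# "bigger". The learned decision threshold shifts below 0.5 accordingly.
clf = LogisticRegression(class_weight={0: 2.0, 1: 1.0}).fit(X, y)
threshold = -clf.intercept_[0] / clf.coef_[0][0]
print(threshold)
```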
Here is the Google colab notebook to solve the game!

I kinda didn't want to give the answer away, but our "AI agent" (again, really just logistic regression) is able to win this random number guessing game about 74% of the time.

This is not just better than random, but also better than the strategy that Thomas Cover came up with!
This may seem surprising (it was to me), but it turns out there is a good reason why that is the case. In some sense, the conditional question "given this revealed number, is the hidden number bigger?" gives much better odds than guessing without looking at the number at all.
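A quick way to see where ~74% comes from, assuming numbers drawn iid uniform on [0, 1] as in the simulation: given a revealed number x, the hidden one is bigger with probability 1 - x, so the best rule is "guess bigger iff x < 0.5", which wins with probability E[max(x, 1 - x)] = 3/4:

```python
import random

random.seed(0)
n, wins = 100_000, 0
for _ in range(n):
    a, b = random.random(), random.random()
    shown, hidden = (a, b) if random.random() < 0.5 else (b, a)
    guess_bigger = shown < 0.5           # optimal rule for uniform numbers
    wins += guess_bigger == (hidden > shown)
print(wins / n)  # ≈ 0.75
```

Logistic regression converges to essentially this threshold rule, which is why its ~74% accuracy sits right at the theoretical optimum for this setup.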
What's cool though is that our AI learns the best strategy for guessing this!

See the Google colab notebook to get more details :)
