3 * 4 = 2 * 6. You can multiply different bunches of numbers and get the same result. But you can't do this if the numbers you multiply are all prime. Why is that?
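To make the contrast concrete, here's a quick brute-force check in Python (a sanity check, not a proof; the prime list and multiset size cap are arbitrary choices): composites can collide, but small multisets of primes never share a product.

```python
from itertools import combinations_with_replacement
from math import prod

def collisions(numbers, max_len=4):
    """Products that arise from two different multisets drawn from `numbers`.

    combinations_with_replacement yields sorted tuples, so each tuple
    canonically represents one multiset.
    """
    seen = {}
    for k in range(1, max_len + 1):
        for combo in combinations_with_replacement(numbers, k):
            seen.setdefault(prod(combo), set()).add(combo)
    return {p: s for p, s in seen.items() if len(s) > 1}

# Composites collide, echoing 3 * 4 = 2 * 6 ...
print(12 in collisions(range(2, 11)))  # True
# ... but multisets of primes never do.
print(collisions([2, 3, 5, 7, 11, 13]))  # {}
```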
The answer is Alice. I'll post the exact probabilities and much more after work, but for now, here is a rigorous argument for why the answer is Alice.
Another way of phrasing the question is "Who is more likely to be the last one to see a present?".
There are only 25 odd numbers below 50, so at least one present is either an even number or a number above 50. In the former case, Bob sees it on turn 51 or later. In the latter case, Alice sees it on turn 51 or later. Either way, the last present is seen on turn 51 or later.
I've had a response sitting in my drafts since April 29, 2024, but for some reason I never posted it. Presumably because Twitter is the worst medium ever invented for discussing math. Still, here I go, clearing out my drafts:
Before discussing the OP, I want to observe how I would prefer to think about this:
First of all, it's easy to see there is at most one solution (up to constant re-scaling) to f' = f: Given any two solutions f and g, consider f(x) g(-x). Its derivative is f'(x) g(-x) - f(x) g'(-x) = f(x) g(-x) - f(x) g(-x) = 0. Thus, it is constant. In particular, the same applies with g in both slots, so g(x) g(-x) is the constant g(0)²; when g(0) ≠ 0, dividing the two constants gives f(x)/g(x) = f(0)/g(0) everywhere.
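As a numerical sanity check (using exp and a rescaled copy of it as two stand-in solutions of f' = f; the factor 3 is an arbitrary choice), the product f(x) g(-x) indeed doesn't move with x:

```python
import math

# Two solutions of f' = f: f(x) = e^x and g(x) = 3 e^x.
f = lambda x: math.exp(x)
g = lambda x: 3 * math.exp(x)

# f(x) * g(-x) should be the same constant at every sample point.
values = [f(x) * g(-x) for x in (-2.0, -0.5, 0.0, 1.0, 3.7)]
print(values)  # all ≈ 3.0
```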
I've been playing with the new o3-mini-high model which came out this weekend, marketed as "Great at coding and logic". A surprising start to its chain of thought here.
(Some of you may enjoy thinking about this question yourself.)
ChatGPT is not happy with me. How it started, how it ended. I unfortunately can no longer share a link to the full conversation but I'll share the salient points in posts below.
Before sharing more of that forbidden conversation, here's one I CAN share a link to, which shows the frustrations of ChatGPT for discussing logic questions that have come up in my life. Verbose yet repeatedly fallacious is not a combo I have much use for. chatgpt.com/share/679ed1ee…
o1-mini (the latest OpenAI offering, ChatGPT with advanced reasoning skills) can be quite impressive, when you know the correct answers to accept and incorrect answers to reject. Here's a simple question, which some of you may enjoy figuring out.
Its answers in more detail. Can you figure out which is correct?
In case you would like a second opinion from o1-preview.
In general, I find it easier to think about problems by abstracting, to hide the irrelevant specifics and emphasize the relevant patterns. That's what I will do in this case as well.