A probability paradox from @leiyu

in #steemstem · 6 years ago (edited)

@leiyu asks about a probability paradox. Here's my attempt to restate it.

Two gamblers and an infinite deck

Suppose you have infinitely many cards in a deck. Each card has a number N on one side and the number N+1 on the other. There is just 1 card numbered 0 and 1, 10 cards numbered 1 and 2, 100 cards numbered 2 and 3, and in general 10^N cards numbered N and N+1. A card is drawn with its orientation unknown, so either side is equally likely to be the one you see.

Gambler A draws a card at random and looks at the top side. He sees some number K, and reasons as follows:

  • There are 10^K cards with numbers K and K+1
  • There are 10^(K-1) cards with numbers K-1 and K
  • Therefore, it is 10 times more likely that this card is in the former set, and so I have 10 to 1 odds that I'm looking at the lower of the two numbers on the card.

For example, if he sees a 2, then there are 100 possibilities that the card is labelled (2,3) and only 10 possibilities that it is labelled (1,2). So he should bet that the other side of the card is a 3.
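
The same count written as a quick calculation (a minimal snippet, assuming each side of a drawn card is equally likely to be face up, so the 1/2 factors cancel):

K = 2
# Seeing K, the card is either (K, K+1) -- 10^K copies -- or (K-1, K) -- 10^(K-1) copies.
p_lower = 10.0**K / (10.0**K + 10.0**(K - 1))
print(p_lower)   # 0.9090... = 10/11, independent of K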

Gambler B sees the bottom side of the card. He reasons identically, so he also believes he is much more likely to have seen the lower of the two numbers on the card.

Can they both be correct? What's wrong?

There is no infinite deck

One problem is the assumption that you can draw uniformly from an infinite deck. You can't; the very notion is contradictory, and so any conclusion you draw is nonsense.

Try this: how likely are you to draw any individual card? If we assign any positive probability p, it will be too large, because we can easily count more than 1/p cards. For example, suppose we decided that the probability of an individual card was 10^(-100). But there are 10^(101) cards labelled (101,102), so if they all have that probability, the total is already greater than 1!
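
A quick check of that arithmetic (a minimal snippet, using exact fractions to avoid rounding):

from fractions import Fraction

# If every card had probability 10^(-100), the cards labelled (101, 102)
# alone would already contribute a total "probability" of 10.
p = Fraction(1, 10**100)
print(10**101 * p)   # 10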

What about a range of cards? How about any card with a number less than, say, a million? Same problem: this probability can only be zero. The range holds finitely many cards, and there are infinitely many cards outside it, so under a uniform distribution the ratio N / total goes to zero as the total goes to infinity. We can always show that the probability, if it is uniform, has to be less than any positive number.

So the probability of each card must be zero, and then the total probability of all events (cards) is 0 + 0 + 0 + ... = 0, not 1. We can't sum infinitely many zeros and get something that isn't zero.

If we were working on a continuous set of events (like the real numbers) then we could use a probability density function and ensure that even though the probability of any specific event was zero, the probability over all events was one, as it should be. But we can't do calculus on the integers.
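
For a standard continuous example (not specific to this problem), take the uniform density on [0, 1]:

f(x) = 1 \ \text{on } [0,1], \qquad P(X = x) = 0 \ \text{for every } x, \qquad \int_0^1 f(x)\,dx = 1

Every individual point has probability zero, yet the integral, which replaces the infinite sum, is one.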

Fixing the paradox

There are two ways to make this situation real, and they both eliminate the paradox.

We could come up with a nonuniform distribution on the cards, such that the probabilities form a convergent series summing to one. How the game behaves will depend very much on what that distribution is, but it cannot play the same trick of making larger numbers ever more probable: the card weights must eventually shrink if they are to sum to one, so eventually P( smaller | N ) must drop.
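
Here is a minimal sketch of that first fix, assuming (purely for illustration) that card (n, n+1) gets weight proportional to (1/2)^n, which sums to a finite total:

# Nonuniform deck: card (n, n+1) gets weight r**n with r < 1, so the
# weights can be normalized into a genuine probability distribution.
r = 0.5
weights = [r**n for n in range(51)]   # truncate far out; the tail is negligible

# Seeing K, the card is either (K, K+1) or (K-1, K); each side of a card
# is equally likely to be face up, so the 1/2 factors cancel.
for K in range(1, 6):
    p_lower = weights[K] / (weights[K] + weights[K - 1])
    print("see {}: P( lower ) = {:.3f}".format(K, p_lower))
# Every line prints 0.333: once the weights decrease, seeing K makes the
# card (K-1, K) the favorite, and the paradox evaporates.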

More simply, we could assume there is some maximum card (M, M+1). Then the gambler's reasoning is correct for every number up to M, but fallacious when they see M+1. We can run a quick Python simulation to demonstrate this.

Code:

import random

M = 6    # the largest card is (M, M+1)

# Build the deck: 10**n copies of the card (n, n+1) for each n from 0 to M.
cards = []

for n in xrange( 0, M+1 ):
    lotsOfCards = [(n,n+1)] * (10**n)
    cards += lotsOfCards

numTrials = 1000000

# For each visible number, count how often it appears and how often it
# turns out to be the lower of the two numbers on the card.
numLower = [0] * (M+2)
numOccur = [0] * (M+2)

for i in xrange( numTrials ):
    myCard = random.choice( cards )       # draw a card uniformly from the deck
    mySide = random.choice( myCard )      # look at a random side of it
    numOccur[mySide] += 1
    if mySide == min( myCard[0], myCard[1] ):
        numLower[mySide] += 1

# Report the estimated P( lower | number seen ) for each visible number.
for n in xrange( 0, M+2 ):
    if numOccur[n] != 0:
        print "number seen = {}, prob( lower ) = {}/{} = {}".format(
            n, numLower[n], numOccur[n], numLower[n] * 1.0 / numOccur[n] )

Output:

number seen = 1, prob( lower ) = 7/8 = 0.875
number seen = 2, prob( lower ) = 31/34 = 0.911764705882
number seen = 3, prob( lower ) = 463/501 = 0.924151696607
number seen = 4, prob( lower ) = 4465/4897 = 0.911782724117
number seen = 5, prob( lower ) = 44925/49336 = 0.910592670666
number seen = 6, prob( lower ) = 450435/495519 = 0.90901660683
number seen = 7, prob( lower ) = 0/449705 = 0.0

Gambler A does conclude his chances are 10/11 (about 0.91) of having seen the lower number when he sees a 1, 2, 3, 4, 5, or 6. But he can't conclude that on a 7.

But what are his chances of winning, from the start of the game? If he sees a 0 they're 100%, on 1 through 6 they're 91%, and on a 7 they're zero. Because the distribution is now well-defined, we can calculate A's overall chance of winning. By symmetry, A has a 50% chance of winning on any given card that is drawn, so the overall probability should come out to 0.50.

side | P(side)           | P(win given side)
  0  |       1 / 2222222 | 1
  1  |      11 / 2222222 | 10/11
  2  |     110 / 2222222 | 10/11
  3  |    1100 / 2222222 | 10/11
  4  |   11000 / 2222222 | 10/11
  5  |  110000 / 2222222 | 10/11
  6  | 1100000 / 2222222 | 10/11
  7  | 1000000 / 2222222 | 0

There are 1,111,111 cards and thus 2,222,222 sides that Gambler A might see.

Summing everything up, we get 0.50. So the game is fair, as expected. All of those times A is near-certain that he's going to win are balanced out by the case when he's 100% sure to have lost.
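
A quick check of that sum, using the table above and exact fractions:

from fractions import Fraction

sides = [1, 11, 110, 1100, 11000, 110000, 1100000, 1000000]   # sides showing 0..7
p_win = [Fraction(1)] + [Fraction(10, 11)] * 6 + [Fraction(0)]

total = sum(sides)                                            # 2222222
print(sum(s * p for s, p in zip(sides, p_win)) / total)       # prints 1/2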

Conditional probability

We are still left with both A and B convinced they're going to win, when the number is somewhere in the middle. That's still a little paradoxical, but that is the nature of conditional probability. The players aren't reasoning from the same information, so there is no contradiction if their conclusions disagree.

We can't assume P(A win | A's information ) = 1 - P(B win | B's information ).

If A is actually going to win, then the actual outcome is the one A judged more likely. If B is actually going to lose, then the actual outcome is the one B judged less likely. So the fact that B is wrong in this case is outweighed by all the cases B actually wins. B really does win 91% of the time when he sees a 5, as the simulation shows. His assessment is out of whack with reality because he got unlucky, not because the reasoning is unsound.

Suppose you were playing a poker variant where a hand with the Queen of Spades automatically loses at showdown. Two players each start with a 50% probability of winning. After the hands are dealt, both of their conditional probabilities of winning may go up, if both avoid the Queen of Spades, even though one player will have the advantage in reality. That's because while A didn't see the Q♠, he doesn't know that B also didn't see the Q♠.
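
A minimal sketch of that scenario, assuming a deliberately simplified model (each player is dealt a single card from a 52-card deck, holding the Q♠ is an automatic loss, and otherwise a fair coin decides the showdown):

from fractions import Fraction

# Before looking at his card, each player wins with probability 1/2 by symmetry.
prior = Fraction(1, 2)

# After a player sees that his own card is not the Queen of Spades:
# the opponent holds it with probability 1/51 (an automatic win for us);
# otherwise the showdown is still a coin flip.
p_opp_has_queen = Fraction(1, 51)
posterior = p_opp_has_queen * 1 + (1 - p_opp_has_queen) * Fraction(1, 2)

print("prior = {}, posterior = {}, sum over both players = {}".format(
    prior, posterior, 2 * posterior))   # 1/2, 26/51, 52/51

Both players reach the same posterior, and the two conditional probabilities sum to more than one. That is fine: they are conditioned on different information.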

So, the paradox relies upon two misunderstandings:

  • That we can draw uniformly from an infinite deck, and
  • That conditional probabilities can be compared based on our global knowledge of the "real" state of the game.

Note that the "mismatched" conditional probabilities can't be exploited by a third party. If a bookie offers to lay 1-to-10 odds (or better) to each player, a player will gladly take the bet whenever he doesn't see a 7. But only one of the two players will take the bet when the card is (6,7), and an individual player sees a 7 on 1,000,000 of the 2,222,222 possible sides, which is about 45% of the time. The bookie can't arbitrage, because his action dries up that 45% of the time.


Ah, these are fun. Probability "paradoxes" that confuse intuition are plentiful even without poking at infinity in some way, as witnessed by the Monty Hall problem (ok, yes, I know this one is trivial to anyone studying probability). It reminds me of the time I spent musing over variants of https://en.m.wikipedia.org/wiki/Two_envelopes_problem . That one shares a common theme of not having a well-defined prior distribution.



Oh man...I saw code and you got me hooked :D I started to read and got kinda lost after like 10 rows. I mean...not lost..but somehow bored. I've never been that much into math and stuff. Passed it pretty easily in uni, but I'm not enjoying it hah :) So yeah, I'm giving up pretty early on, just as you introduced the number K :D Sorry for it :D I just like coding, not the math itself.

But hey! Congrats on the upvote!

That sounds very entertaining. Obviously you need to be good at math in the first place. Those who are interested in such games, if they want to be good, need to be able to count quickly, and a good or photographic memory would be beneficial. Luckily I never had the chance to get into the depths of such games, but it was interesting to read your post about it :)

Wow! It's been ages since I last paid attention to source code as intently as I did on your post. I have yet to pay more attention; I need to run back to work, as break time is over. But I hope to get back and figure this paradox out.. 😀


@leiyu says that this comes from "Littlewood's Miscellany", which I managed to find on Google Books.

The same problem occurs in (3) and (4). There is an intuitive sense that you can draw from an infinite deck, but examined closely there is no way to do so uniformly. If we assume nonuniformity, then the problem becomes much more difficult to solve.
