PDA

View Full Version : Some funky paradox...


RocketManJames
09-30-2004, 02:26 PM
A friend brought this up, and I am pretty sure I know why this doesn't work, but I wanted to throw this out at all of you.

Two distinct real numbers are chosen randomly from the universe of real numbers. So we have number A and number B.

Now, your goal is to pick the bigger number. You are allowed to look at A... then you must decide if you want to keep it or take number B instead.

This is obviously a 50/50 proposition.

Now, here's a strategy that seems to give you better than 50/50. It shouldn't work, but it's sort of clever.

Choose any number... let's call it C.

Now, when you look at A, you compare it to C. If A is bigger than C, take A, otherwise take B.

There is some probability P_bigger that your number C is bigger than A and B. There is some P_smaller that your number C is smaller than both A and B. There is some P_between that C is between A and B.

So, using this strategy, the probability that you pick the bigger number is:

P_smaller * 0.5 + P_bigger * 0.5 + P_between * 1.00

which is

P_between + (1 - P_between) * 0.5

So, only if P_between is 0, do we end up with 50/50. If P_between is anything greater than 0, then we have better than 50% chance at picking the bigger number.

In Edit: In case it wasn't clear... the reason the P_between term is multiplied by 1.00 is that if your number C is between A and B, then you are guaranteed to choose the bigger number by following the above strategy, given how A relates to C.

This can't be right, because you cannot create information out of non-information.

I am pretty sure the reason this doesn't work has to do with the fact that the random numbers A and B are chosen from all real numbers. If they were chosen from a bounded set of numbers, then the strategy does give you a boost in probability.
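A quick way to see the boost on a bounded set is to simulate it. This is a sketch, not from the thread: it assumes A and B are uniform on [0, 1] and fixes C at 0.5.

```python
import random

def trial(c):
    # A and B drawn uniformly from a bounded set, here [0, 1)
    a, b = random.random(), random.random()
    pick = a if a > c else b   # keep A only if it beats the threshold C
    return pick == max(a, b)

random.seed(0)
n = 100_000
rate = sum(trial(0.5) for _ in range(n)) / n
print(rate)   # near 0.75: P_between = 1/2 here, so 0.5 + 0.5 * P_between
```

With C at the midpoint, P_between is 1/2, so the formula above predicts a 75% success rate, which the simulation reproduces.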

Any thoughts?

-RMJ

TomCollins
09-30-2004, 02:53 PM
This is not a paradox, but it has been discussed on this forum before.

The problem is you are assuming it is a 50-50 proposition. It is not.

TomCollins
09-30-2004, 03:04 PM
I only have a few minutes, but I can put some math behind it showing that it is not 50-50.

Suppose I have some increasing function that maps R -> [0,1]. I can't think of any off the top of my head, but at least grant me that one exists.

So f(x) > f(y) iff x > y.

If I compute f(A) and keep my number with probability f(A), I will come out ahead.

I will let others elaborate (ok, I just have to get to lunch). I'll have the rest of the proof later.

TomCollins
09-30-2004, 04:02 PM
So say you randomly choose one of the two numbers. Suppose you chose x, and x > y.

Then you would stay f(x) of the time, and switch 1-f(x) of the time.

Your probability of being right would then be f(x) of the time.

If you chose y, you would switch (1-f(y)) of the time.

So you will end up being right .5f(x) -.5f(y) + .5.

Since x is bigger than y, we know that f(x) > f(y). So your probability of being correct is .5(f(x)-f(y)) + .5.

Since f(x) - f(y) > 0 for all x > y, p(correct guess) > .5.

QED

Jman28
09-30-2004, 04:30 PM
[ QUOTE ]
Since x is bigger than y, we know that f(x) > f(y).

[/ QUOTE ]

I think this is a mistake.

You're saying that since the random number X is greater than the random number Y, the probability of you picking and staying with X is greater than the probability of you picking and staying with Y?

I don't see how you proved this, but I may be wrong.

-Jman28

Jman28
09-30-2004, 04:38 PM
[ QUOTE ]
So, only if P_between is 0, do we end up with 50/50. If P_between is anything greater than 0, then we have better than 50% chance at picking the bigger number.


[/ QUOTE ]

This statement is true. I think here's the solution...

P_between = 0.

Hear me out on this one...

Let Z = B - A

The probability of C falling between A and B = (Z / infinity)

Z/infinity = 0

So, P_between = 0

I think this is right... Unless we say that Z = infinity, which it sort of does?

-Jman28

RocketManJames
09-30-2004, 05:58 PM
[ QUOTE ]

This statement is true. I think here's the solution...

P_between = 0.

-Jman28

[/ QUOTE ]

This is how I figured it also... that P_between is 0, because of the infinity argument. I don't follow Tom Collins' proof. I guess if he were to provide the mapping function for R to [0,1], I'd be more inclined to believe it. As it stands, it seems that you've got a "range" for all Reals, when there really isn't one. You're essentially creating information out of thin air.

But, that said, I'm not in a position to judge its correctness, so I'll let others discuss.

-RMJ

TomCollins
09-30-2004, 08:05 PM
You can be, and are, wrong.

But it's ok, I wasn't as clear as I could have been.

Suppose you have two numbers A and B. One of these two is larger, we will call it x, and the smaller one y. You have A, which is either x or y. Since A and B are equally likely to be x or y, we have a 50% chance at each. First we must find an increasing function that maps from R-> [0,1]. Since you would not take my word on it, I will find one.

So the function I will choose is f(x) = tan^(-1)(x)/4 + .5. This function is increasing, defined for all real numbers, and has a range within [0,1]. This is just one example, but any function that is increasing and maps from R -> [0,1] works.

So back to the real proof. We will have x 50% of the time, and y 50% of the time. (If it were any more or any less, then the proof is done, since we would simply stay with the first number we picked if it were more, or always switch if it were less.) So we take f(A) and stay with that probability. (We could take a random number between 0 and 1, and if it is greater than f(A) we switch; if less, we stay.)

So 50% of the time f(A) = f(x), and 50% of the time f(A) = f(y). Again, this MUST be true, otherwise the proof is trivial. When we end up with x, we are correct f(x) of the time, and wrong 1-f(x) of the time. When we end up with y, we switch 1-f(y) of the time and are correct, and are wrong f(y) of the time.

So our odds of being correct come down to .5f(x) + .5 - .5f(y). Simplified, this is .5 + .5(f(x) - f(y)). We know f(x) > f(y) since f is increasing, so P(correct) > .5.

QED
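A simulation bears this out. The following sketch uses the arctan map from the post; the choice of a standard normal distribution for A and B is an assumption made only for the demo.

```python
import math
import random

def f(x):
    # Tom's example map: increasing, defined on all of R, range inside [0, 1]
    return math.atan(x) / 4 + 0.5

def trial():
    # Assumption for the demo only: A and B drawn iid from a standard normal
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    pick = a if random.random() < f(a) else b   # keep A with probability f(A)
    return pick == max(a, b)

random.seed(1)
n = 200_000
rate = sum(trial() for _ in range(n)) / n
print(rate)   # strictly above 0.5, matching .5 + .5*(f(x) - f(y))
```

Any other iid continuous distribution for A and B gives the same qualitative result, since the proof never uses the normal shape.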

TomCollins
09-30-2004, 08:11 PM
This argument doesn't work because there are infinitely many real numbers inside any range. So we are really dividing infinity by infinity. This is also why it is possible to map all the real numbers into a bounded range, R -> [0,1].

Leo99
09-30-2004, 08:13 PM
You choose A. Now there is an infinite number of real numbers smaller and an infinite number of real numbers bigger. I don't see how probability applies when you're dealing with infinite numbers like this.

By the way, what do you guys do for a living that you remember all this math stuff?

RocketManJames
09-30-2004, 08:39 PM
So, Tom... I'm trying to follow this a bit better.

You have provided me with an increasing function that maps Reals to [0,1]... the range of your function is:

(.5 - PI/8) to (.5 + PI/8); or approximately +0.1073 to +0.8927.

So, this function can map every real number to exactly one number in that range. An input of 0 yields +0.500, which is the center point of this range.

So essentially, if you're playing this game with A and B, and you see that A is greater than 0, then you should always keep A. And, if A is less than 0, you should switch. And, if A = 0, then you're just as well off either keeping or switching.

Interesting... so it's really not 50/50. Just seems like you're creating information out of nowhere, but maybe I'm mistaken. Can you try (if possible) to explain this further in layman's terms (using some non-mathematical conceptual reasoning)? Because, intuitively, it sure seems like a 50/50 proposition.

And I have a follow-up question...

If 0 is the cut-off point (using your mapping function), it sort of makes sense, because you'd think this is the midpoint of all Reals. What if the A and B were strictly non-negative Reals? Does this change anything? And what value is the "midpoint" if this is the new game?

-RMJ

Leo99
09-30-2004, 09:17 PM
Good follow up RMJ,

I guess in simple terms you don't need any advanced math. If A is greater than 0 you keep it. If A is less than zero you switch. What happens when you don't have a middle point?

TomCollins
09-30-2004, 09:51 PM
This is not what I said. I said you switch with a certain probability. If you always switch at 0, you are at a 50-50 proposition. You switch (in the long run) 1 - f(x) of the time. So for 0, you switch half of the time; for 1, you switch about 30% of the time; and for 1 million, you switch about 11% of the time. Follow the math and it works. For your followup, it still does not matter where I am more likely to switch than not, just that the larger the number I have, the less likely it is that I switch.

RocketManJames
10-01-2004, 01:46 AM
I got it now (I think). So you map the reals using an increasing function to [0,1]... then this new [0,1] is like a probability map. So, it tells you to keep the number with higher probability the higher it is. Thus, the bigger A is, the more likely you are to keep it.

I think this is making sense now. Neat. I would never have imagined that a better than 50/50 strategy existed.

-RMJ

benthehen
10-01-2004, 02:02 AM
[ QUOTE ]
So say you randomly choose one of the two numbers. Suppose you chose x, and x > y.

Then you would stay f(x) of the time, and switch 1-f(x) of the time.

Your probability of being right would then be f(x) of the time.

If you chose y, you would switch (1-f(y)) of the time.

So you will end up being right .5f(x) -.5f(y) + .5.

Since x is bigger than y, we know that f(x) > f(y). So your probability of being correct is .5(f(x)-f(y)) + .5.

Since f(x) - f(y) > 0 for all x > y, p(correct guess) > .5.

QED

[/ QUOTE ]

I followed everything except for the switching part. Suppose I'm given x, how do I switch 1-f(x) of the time? I have to choose one or the other, there is no choice for f(x) of one and 1-f(x) of the other.

Piz0wn0reD!!!!!!
10-01-2004, 07:33 AM
This thread always gets long.....

Leo99
10-01-2004, 10:15 AM
I don't get it. The middle point concept I understand. I don't understand the concept of "the bigger A is." How big is big when you're dealing with infinity? Say you only include positive numbers, so you're dealing with 1 to infinity. Whatever A is, there is a finite number of numbers smaller and an infinite number of numbers bigger.

TomCollins
10-01-2004, 12:31 PM
One way to do this is to choose another random number between 0 and 1, and if it is bigger than f(A), switch; if not, don't switch. So you are varying your strategy based on a random number. This, surprisingly, makes the strategy work. This, by the way, is the same as picking another random number without bounds and switching if A is smaller than it, because you can take f(C), and if f(C) > f(A), switch, which is exactly what I said to begin with. My point is, if you run this test a million times, you will get it above 50% correct. You can switch 50% of the time if A is 0. You just have to randomly decide.

This is no different than TOP, which has you bluffing x% of the time. You either bluff a particular hand or you don't. But in the long run, you bluff x% of the time, based on some random event (ideally). This provides an optimal strategy, much like in the number switching game.

TomCollins
10-01-2004, 12:34 PM
With an increasing function, the larger A is, the larger f(A) becomes. There is no concept of "big", just "bigger". Every pair of real numbers has one that is "bigger".

And no, there is not a finite number of numbers less than A, even if A is bounded below by 1. We are talking about real numbers here. The set of integers from 1 to infinity is smaller than the set of reals. I forget the name for this, but they are different sizes. Anyone with a math background is encouraged to give the name of this.

Leo99
10-01-2004, 01:56 PM
Density?

RocketManJames
10-01-2004, 02:28 PM
[ QUOTE ]
Density?

[/ QUOTE ]

I think he's referring to Cardinality.

-RMJ

theTourne
10-01-2004, 02:38 PM
[ QUOTE ]
I think he's referring to Cardinality.

[/ QUOTE ]

I think so too. The set of integers is countable (by definition). The set of real numbers is not.

TomCollins
10-01-2004, 02:40 PM
Correct. R has a higher cardinality than Z.

Leo99
10-01-2004, 02:55 PM
I stole this from a website:

The concept of cardinality is of interest to set theoreticians because it has been used to demonstrate that some infinite sets are larger than others. The cardinality of the set of real numbers is greater than the cardinality of the set of integers, even though both sets are infinite. The cardinality of the set of integers is called aleph-null or aleph-nought; the cardinality of the set of real numbers is called aleph-one.

One of the great mysteries of mathematics is contained in the question, "What is the cardinality of the set of points on a geometric line?" Generally it is presumed to be aleph-one; the set of points on a line is thought to correspond one-to-one with the set of real numbers. This is by no means a trivial supposition, and has become known as the Continuum Hypothesis.

TomCollins
10-01-2004, 02:55 PM
To be more specific, the "number" of natural numbers can be represented by the Hebrew letter Aleph-null, which sort of looks like an N. Aleph-null (A0) is smaller than Aleph-one (A1).

RiverTheNuts
10-05-2004, 10:03 PM
I wrote this program, with all the numbers between 0 and 1

I told it to pick 3 random numbers... if A<C ... then do B-A... and if that is >0, add one to the win counter

If A>C, then do A-B, and if that is >0, add one to win counter

divide win counter by the total cycle counter... and I'm getting numbers around 68% ...

I don't get how this works because I'm mathematically retarded, but it does work

Also, I'm sure my calculator numbers are pseudo-random, so my results may be skewed
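For what it's worth, a reconstruction of the described procedure in Python (assuming, as the post suggests, that all three numbers are uniform on (0, 1)) lands close to the reported figure:

```python
import random

random.seed(2)
n = 100_000
wins = 0
for _ in range(n):
    # three independent random numbers between 0 and 1, as described
    a, b, c = random.random(), random.random(), random.random()
    if a < c:
        wins += (b - a) > 0   # took B; win if B > A
    else:
        wins += (a - b) > 0   # kept A; win if A > B
print(wins / n)   # about 0.667 (= 2/3), close to the reported 68%
```

So the result is not an artifact of a weak random number generator; the true value for this setup is 2/3.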

TomCollins
10-05-2004, 11:32 PM
A computer only knows a finite number of numbers between 0 and 1. So you could simply swap if below .5, and it would be correct. I'm really amazed it is 68%, I honestly would expect it to be much closer to 51% or so.

danderso8
10-06-2004, 04:17 PM
[ QUOTE ]
You have provided me with an increasing function that maps Reals to [0,1]... the range of your function is:

(.5 - PI/8) to (.5 + PI/8); or approximately +0.1073 to +0.8927.

[/ QUOTE ]



to map to [0,1], the function should have been (atan(x/4))/pi + .5

that might have been a little more clear, but it works either way

--dan

jason1990
10-06-2004, 04:28 PM
As has been pointed out in gruesome detail, the "paradox" arises from the belief that, after seeing A, you only have a 50% chance of choosing the larger number. It would be a 50-50 situation if whoever originally selected the random real numbers was able to do it in such a way that every real number was equally likely to be chosen. But this is not possible.

We say that there is no "uniform" distribution on the entire real line. Similarly, there is no uniform distribution on the set of all whole numbers. You cannot choose a random whole number in a way that makes every whole number equally likely.

It is the mistaken belief that a uniform distribution exists in these cases that leads many people to dream up these supposed "paradoxes".

jimymat
10-07-2004, 02:24 AM
It is and always will be a 50-50 guess. Everybody that responded with some type of formula to answer this is the reason all the casinos put up the number board at the roulette wheel. If a ball lands on black 1,000,000 times in a row, then what are the odds it will land on red the 1,000,001st time? Sometimes the simplest answer is the right answer.

danderso8
10-07-2004, 04:22 PM
I was thinking about this a little more, and I think the flaw in Tom's proof is that it assumes that there is a midpoint of the reals, and that the midpoint is 0.

That is, the mapping assumes that for a given x, we can know the probability that A is less than x, i.e., f(x). For example, take a number n that is large enough that the mapping suggests there is a 99.9999% chance that A lies below it. If instead of the original mapping we use (atan((x-n)/4))/pi + .5, we now would say there is a 50% chance that A is below that number. But why isn't the new map just as valid as the old one? That's the thing... it is. So the source of the "information creation" in the paradox lies in the choice of a mapping.

--dan

TomCollins
10-07-2004, 07:55 PM
I did not assume that at all. I also did not suggest there was a 99.9999% chance that the number is below it either. My map is not a probability of being correct.

Find a flaw with my math, besides a gut feeling that you are right and I am wrong. Math does not lie, my friend.

TomCollins
10-07-2004, 07:57 PM
This question has nothing to do with roulette, and it is not 50%. Prove my math wrong. This also has nothing to do with 10,000,000 times in a row either.

TomCollins
10-07-2004, 08:01 PM
All you need to say is that one number is as likely to be there as another. Or that I have no way of predicting what the real number is. The method of randomization is not important. Also, uniform distribution has nothing to do with this.

jason1990
10-07-2004, 08:15 PM
Hi Tom,

I had not replied to any of your posts, and this post of mine has nothing to do with anything you've had to say on this subject. The fact that people mistakenly believe there is a uniform distribution on the set of real numbers is the source of a whole family of supposed paradoxes, and this is one example. For instance, if it were possible to choose a random number X uniformly from the reals, then given any real number A, it would be true that P(X < A) = P(X > A) = 1/2. Of course, it is not possible to choose a random number in this way. But many people mistakenly believe it is, and that is the basis for a whole lot of confusion. This was the point of my post and has nothing to do with anything you've posted in this thread.

Now, as for your other posts, I agree that you have proven that there is a way of playing this game which gives you a greater than 50% chance of winning, regardless of the distribution of the random numbers. Your chosen function works just fine, as do many others, of course. Can you determine which function gives you the greatest chance of winning?

Jason

SomethingClever
10-08-2004, 04:32 PM
Consider this:

If the field of numbers is *truly* unbounded, and stretches out to infinity, the odds that you are going to get a comprehensible number are infinitesimal.

See, since there are an infinite number of numbers, the probability that you're going to get a number with a near-infinite number of digits is incredibly high.

The number of numbers with a reasonable number of digits, say fewer than a million, is microscopic compared to the scope you're choosing from.

You're going to get two numbers that are nearly infinitely long and incomprehensible.

If you got a number that's actually under a million digits, that would probably be the most improbable event in the history of the universe.

Edit: I guess this doesn't matter in terms of the functions you're talking about; I just thought it was interesting.

TomCollins
10-09-2004, 05:35 PM
There are the same number of rational numbers as there are whole numbers. You cannot quantify how many more real numbers than rational numbers there are, though. Either way, it's extremely simple to figure out which number is larger.

bigpooch
10-09-2004, 07:52 PM
This problem was addressed much earlier! Strangely, many people don't think there exists a switching strategy that has a strictly positive EV (see TomCollins' posts or search for the "two envelope paradox"), but obviously there exist infinitely many such strategies, as there are infinitely many functions f on the positive real numbers with the property that f(0)=1, f is strictly decreasing, and f(x)>0 for all x. Also, for the original post, if the range of real numbers that may be selected includes all the real numbers, then one can simply find a bijection (a one-to-one and onto function) from all the reals to just the nonnegative reals.

The only requirement is that the two real numbers be distinct.

A friend (who just graduated with a Ph.D. in geology and obviously not in math) even contended that no such strategy could exist, arguing somewhat vaguely that any strategy would be in some sense a linear combination of always switching and never switching (both of these strategies have EV=0). The most convincing argument is the EV calculation, and it is something most intelligent teenagers could follow.

Also, for those who are interested, there obviously isn't a "best function" that in some sense maximizes this positive expectation.

jason1990
10-10-2004, 04:48 AM
[ QUOTE ]
Also, for those that are interested, there obviously isn't a "best function" that in some sense maximizes this positive expectation.

[/ QUOTE ]

Okay, let's make things concrete. Suppose two random, independent, identically distributed (iid) real numbers, A and B, are chosen. Further assume that the probability they are equal is 0.

You get to see A and, based on that, choose either A or B. If you choose the larger of the two numbers, you win 1 unit. If you choose the smaller of the two, you lose 1 unit.

It has been shown that, by selecting a nondecreasing function f:R->[0,1] and choosing A with probability f(A), the EV of this game is positive.

I claim that, no matter what function you choose, the EV of this game is never greater than 1/2 (i.e. your probability of winning is never greater than 3/4.) I also claim that there is one and only one function you can choose in order to achieve an EV of 1/2. (Note: I'm not claiming that, as the player of this game, you have enough information to find this function.)

Moreover, this function is of the form

f(x) = 0 for x < c and f(x) = 1 for x >= c

for some real number c. (Note again: I'm not claiming that, as the player, you have enough information to determine the actual value of c.)

And, finally, c satisfies

P(A < c) = P(A > c) = P(B < c) = P(B > c) = 1/2.

The proof is quite nice, but probably beyond the average intelligent teenager, especially the proof that this is the only such function that produces an EV of 1/2. The only proof of this fact that I can come up with uses Lebesgue measure and Stieltjes integrals.
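A simulation is consistent with these claims. The sketch below assumes A and B are iid standard normal (so the median c is 0); the distribution and the function names are mine, not from the thread.

```python
import math
import random

def play(keep_prob, trials=200_000):
    # Demo assumption: A and B iid standard normal, so the median c is 0
    wins = 0
    for _ in range(trials):
        a, b = random.gauss(0, 1), random.gauss(0, 1)
        pick = a if random.random() < keep_prob(a) else b
        wins += pick == max(a, b)
    return wins / trials

random.seed(3)
step   = lambda x: 1.0 if x >= 0 else 0.0         # the claimed optimal form, c = 0
smooth = lambda x: 0.5 + math.atan(x) / math.pi   # a smooth increasing alternative
print(play(step))     # about 0.75, the claimed maximum
print(play(smooth))   # above 0.5, but short of 0.75
```

The step function at the median wins in all cases except when A and B fall on the same side of c, where it is a coin flip; that accounting gives 1/4 + 1/4 + 1/8 + 1/8 = 3/4.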

jason1990
10-10-2004, 04:59 AM
[ QUOTE ]
I wrote this program, with all the numbers between 0 and 1

I told it to pick 3 random numbers... if A<C ... then do B-A... and if that is >0, add one to the win counter

If A>C, then do A-B, and if that is >0, add one to win counter

divide win counter by the total cycle counter... and Im getting numbers around 68% ...

[/ QUOTE ]

What you have done here is to estimate the probability that

(A<C and A<B) or (A>C and A>B).

Given A=x, this probability is (1-x)^2 + x^2. Integrating this from 0 to 1 gives 2/3. So the true probability is 66.666...%, which explains the numbers you're getting.

jason1990
10-10-2004, 05:08 AM
This is a little misleading. The Continuum Hypothesis (CH) is the claim that there does not exist a set with cardinality strictly between the cardinality of the integers and the cardinality of the reals. It is known that CH is unprovable in the sense that both CH and its negation are consistent with the "standard" axioms of set theory.

jason1990
10-10-2004, 04:20 PM
I would just like to point out that your proof assumes that A and B are chosen independently. It does not work without that assumption.

For example, suppose I am the one picking A and B. I choose A randomly in some way (let's say I generate a standard normal with my computer). Then I choose an increasing function f:R->[0,1] (let's say I choose f(x) = 0.5 + (1/pi)*tan^{-1}(x)). I then generate an independent uniform random number U between 0 and 1 and let B = A + 1 if U < f(A), and I let B = A - 1 if U > f(A). So I let B be the larger number with probability f(A). (Note that, here, A and B are dependent.)

Now suppose you play the game, and by chance you happen to select the same function f(x) in order to apply your strategy. So you generate a uniform random number V, independent of my numbers, and choose A if V < f(A) and choose B if V > f(A). (So you are choosing A with probability f(A).) You then win only if U < f(A) < V or V < f(A) < U. Given A = x, this probability is 2*f(x)(1 - f(x)). Since 0 < f(x) < 1 and the maximum value of 2*x(1 - x) on the interval [0,1] is 1/2, it follows that your probability of winning is not greater than 1/2. (It is actually less than 1/2, but this requires some calculus.)
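Here is a sketch of this adversarial setup, using the function and distribution named in the post:

```python
import math
import random

def f(x):
    # the increasing map named in the post: 0.5 + (1/pi)*arctan(x)
    return 0.5 + math.atan(x) / math.pi

random.seed(4)
n = 200_000
wins = 0
for _ in range(n):
    a = random.gauss(0, 1)              # the chooser's A: a standard normal
    u = random.random()
    b = a + 1 if u < f(a) else a - 1    # B is the larger number with probability f(A)
    v = random.random()
    pick = a if v < f(a) else b         # the player keeps A with probability f(A)
    wins += pick == max(a, b)
print(wins / n)   # below 0.5, as the 2*f(x)*(1 - f(x)) bound predicts
```

The player wins only when U and V land on opposite sides of f(A), so the dependence between A and B turns the player's own randomization against him.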

Carl_William
10-10-2004, 05:02 PM
Hi RocketManJames;

"Any thoughts," you asked. I cannot take the time to read all of the replies -- not because it is not interesting, but because it is too time consuming and not really a productive task. It's just an interesting hobby type thing.

So here is my thought on your request or suggestion for thoughts:


I'll assume there exists an infinite universe of real numbers. So after blindly and randomly selecting two numbers (A and B) from this universe, we peek at number A. If A is greater than infinity divided by two, then we can guess that A is probably bigger than B; likewise, if A is less than infinity divided by two, then we can guess that B is greater than A. But....

CONCLUSION:

This technique is absurd: because infinity divided by two is still infinity -- you know that, I hope. Likewise, infinity is not less than infinity; infinity is equal to infinity. Therefore the whole "Some funky paradox" is really absurd. But it might still be fun to think about things like this from time to time. It does not have a sound math foundation, though, and is really just a waste of time. It is just one of those things that fakes out ignorant people from time to time.

Just my thoughts:

Carl_William

bigpooch
10-10-2004, 07:57 PM
In my post, I was thinking about the two envelope paradox
wherein the numbers were positive real numbers and the EV
was the difference in expected value of the final number
chosen and the first number seen.

Also, in your post, you are making an assumption about A and
B which may not necessarily hold. For example, A and B
could be determined in advance but you may have no knowledge
whatsoever about A and B. On the other hand, obviously if
you have some knowledge of the distribution of A and B then
the answer is as you stated.

elwoodblues
10-11-2004, 02:07 PM
[ QUOTE ]
I wrote this program, with all the numbers between 0 and 1

[/ QUOTE ]

No you didn't. Probably a lot of numbers between 0 and 1, but nowhere near all the numbers between 0 and 1 (as that number is itself infinite).

TomCollins
10-11-2004, 02:35 PM
On a computer, you cannot choose all real numbers between 0 and 1. It can only represent a finite subset of these numbers.

But I think he decided that the random numbers will all be between 0 and 1, not that he tried every number.

jimdmcevoy
11-05-2004, 05:26 AM
I don't know if you still care, but the answer is P_between=0, giving you a 50/50. This is if the numbers are randomly chosen and could be anything, as in the chance of choosing 1 is the same as the chance of choosing 7887444 or any other finite number. I will elaborate if you are still interested.

TomCollins
11-05-2004, 04:12 PM
P_between is not 0. What is a "finite" number? When you said that, you instantly showed me you obviously don't know what you are talking about. Do you mean an integer? Do you mean a finite interval?

P_between > 0. Period.

jimdmcevoy
11-05-2004, 06:45 PM
Woops, P_between is not 0 indeed.

"What is a "finite" number? When you said that, you instantly showed me you obviously don't know what you are talking about. Do you mean an integer? Do you mean a finite interval? "

Whenever I tell someone "Ok, so you got a number, it could be any finite number," they instantly know that I am talking about x where x ∈ (-infinity, infinity)

As in Any real number.

Okay, so now if A, B and C are randomly chosen from any interval containing infinitely many numbers, for instance (0,1), even if numbers closer to 1 are more likely to be picked than numbers closer to 0, then using the method prescribed in the original post, P_between = 1/3. This will pick you the right number 2/3 of the time.

I'm kinda surprised no one has worked this out. Something simpler than that: as long as A and B are equally likely to be positive or negative, and again if they are randomly chosen from an interval containing infinitely many numbers with any type of weighting, then if you choose C=0, P_between=1/2, which is the optimal strategy. I am even more surprised no one spotted that.

In summary, if A is greater than 0, stick with it; if A is less than 0, switch, and this will pick the right answer 75% of the time. You can't do better than that. Come to think of it, I see some proposition betting with my friends on this one...

**WARNING** - Do not keep reading unless you are a math nerd who likes to think about math for fun. Here is a math 'paradox'

suppose you choose A,B and C randomly on the interval (-infinity, infinity).

The parabola Ax^2+Bx+C will be concave up half the time and concave down the other half of the time. Also, its vertex will be above the x-axis half the time and below it the other half.

Given these two statements, there will be a real solution(s) to the equation Ax^2+Bx+C=0 half the time. If you don't see what I'm saying, draw the real plane, draw some random parabolas, and figure out what is required for the parabola to cross the x-axis.

Fair enough.

But look at this another way:

The solution of Ax^2+Bx+C=0

is the good old quadratic formula:

-B/2A +- sqrt(B^2-4AC)/2A

which means there will be a real solution when B^2>=4AC.

Now since A and C are equally likely to be positive or negative, AC is also. So half the time AC is negative, and since B^2 is always nonnegative, half the time B^2>=4AC.

But what about when AC>=0? In this case sometimes B^2>4AC and sometimes B^2<4AC. Let's call P the probability that B^2>4AC given that A, B and C are random numbers on the interval (-infinity,infinity) like in this example.

So with a little thought, this makes the probability of a real solution(s) .5+.5*P

So as long as P>0 then you have a paradox. You might say P=0, but how? Sometimes B^2>4AC even if A and C are positive right?

I assumed that this problem was essentially the same as the problem rocketmanjames posted, so I blurted out P_between=0. Silly me, shoulda thought more.
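For the quadratic version, one concrete way to make the question well-posed is to fix a uniform-on-[-n,n] model and simulate. This sketch is mine, not from the thread; it suggests the probability of real roots does not depend on n at all, because the inequality B^2 >= 4AC is unchanged when A, B, C are all scaled by the same factor.

```python
import random

def p_real_roots(n_scale, trials=200_000):
    # estimate P(B^2 >= 4AC) with A, B, C independent and uniform on [-n, n]
    hits = 0
    for _ in range(trials):
        a = random.uniform(-n_scale, n_scale)
        b = random.uniform(-n_scale, n_scale)
        c = random.uniform(-n_scale, n_scale)
        hits += b * b >= 4 * a * c
    return hits / trials

random.seed(5)
print(p_real_roots(1))      # about 0.627
print(p_real_roots(1000))   # the same: scaling (A,B,C) by n leaves B^2 >= 4AC unchanged
```

So here the "limit as the interval grows" is harmless: every n gives the same answer, a bit above the 1/2 that the concavity argument suggests.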

jason1990
11-05-2004, 08:29 PM
[ QUOTE ]
suppose you choose A,B and C randomly on the interval (-infinity, infinity).

[/ QUOTE ]

This is not mathematics. If you want to formulate a rigorous mathematical statement of this kind, then A, B, and C must be random variables, and you must specify the joint cumulative distribution function of A, B, and C.

jimdmcevoy
11-06-2004, 12:35 PM
Okay, I'll say this and see if this satisfies you:

The probability density distribution of A, B and C are:

p(x)=
{0 x < -n
1/2n -n < x < n
0 x > n}

Do all the maths you need, and in the end take the limit as n approaches infinity.

I'm not sure what you mean by joint cumulative distribution function, but I'll say that these random variables are independent, and if you want the cumulative distribution function, integrate the probability density function I gave you. Are you studying a maths major by chance? You strike me as picky...

jason1990
11-07-2004, 11:06 AM
[ QUOTE ]
The probability density distribution of A, B and C are:

p(x)=
{0 x < -n
1/2n -n < x < n
0 x > n}

[/ QUOTE ]
So if I'm understanding you correctly, we start with A, B, and C independent and uniformly distributed on [-n,n].

[ QUOTE ]
in the end take the limit as n approaches infinity.

[/ QUOTE ]
Unfortunately, you can't do that. This function approaches the zero function as n goes to infinity. And the zero function is not a probability density function, since it doesn't integrate to 1.

Another way of saying this is that there is no uniform distribution on the entire real line. This is exactly why there is no "paradox". If you specify a valid probability density function for A, B, and C, it will not be uniform, and you will get the same probability for the existence of roots no matter how you approach the problem.

This is a point I made earlier in this thread, and you have illustrated it quite nicely.
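This pointwise flattening is easy to check numerically. A minimal sketch (mine, not from the thread), using the uniform[-n,n] CDF F_n(x) = (x+n)/(2n) clipped to [0,1]: for every fixed x the value drifts to 1/2, and the constant function 1/2 is not a valid CDF.

```python
# Sketch (not from the thread): F(x, n) is the CDF of a uniform[-n, n] random
# variable. For every fixed x, F(x, n) -> 1/2 as n -> infinity, and the
# constant function 1/2 is not a valid CDF (it never reaches 0 or 1).

def F(x, n):
    """CDF of the uniform distribution on [-n, n]."""
    return min(1.0, max(0.0, (x + n) / (2.0 * n)))

for x in (-1000.0, 0.0, 1000.0):
    print(x, [F(x, n) for n in (10**3, 10**6, 10**9)])  # each row drifts to 0.5
```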

jimdmcevoy
11-07-2004, 11:27 AM
I disagree. The integral of this function is independent of n and always 1. Using a very very similar process one defines the Dirac Delta function:

limit as n approaches infinity of:

p(x)=
{0 x < -1/n
2n -n < x < n
0 x > 1/n}

is the Dirac Delta function.

I don't see how you can accept this function and not the one I stated.

" Unfortunately, you can't do that. This function approaches the zero function as n goes to infinity. And the zero function is not a probability density function, since it doesn't integrate to 1. "

I'm not sure you completely understand limits. As you said yourself, the function approaches the zero function.

Let me put it this way:

The limit as n approaches infinity of:

integral from -infinity to infinity of p(x) dx

=1

if p(x)=
{0 x < -n
1/2n -n < x < n
0 x > n}


If you really don't believe me I can prove this last statement from the definition of a limit.

jason1990
11-07-2004, 12:48 PM
[ QUOTE ]
I disagree. The integral of this function is independent of n and always 1.

[/ QUOTE ]
True.

[ QUOTE ]
Using a very very similar process one defines the Dirac Delta function

[/ QUOTE ]
Not really. The Dirac Delta function is not a function. It is a Schwartz distribution.

[ QUOTE ]
limit as n approaches infinity of:

p(x)=
{0 x < -1/n
2n -n < x < n
0 x > 1/n}

is the Dirac Delta function.

[/ QUOTE ]
It depends what you mean by "limit". This statement (with the typo corrected) is true if you are taking the limit in the space of Schwartz distributions. In other words, this sequence converges to the delta "function" because, for every Schwartz function f (or just take f to be a C^infty function with compact support if that makes things easier)

lim_{n->infty} int_{-infty}^{infty} {p(t)f(t)dt} = f(0)

and the delta "function" is defined to be the linear functional such that Delta(f)=f(0).

[ QUOTE ]
I don't see how you can accept this function and not the one I stated.

[/ QUOTE ]
This "function" is not a function. It is a Schwartz distribution. It also happens to be a measure - the delta point mass measure at the origin. The delta point mass measure is a perfectly good probability measure on the reals since its total mass is 1. However, the limit of your original sequence of functions in the space of Schwartz distributions is the zero measure. You can check for yourself that for every f of the kind given above

lim_{n->infty} int_{-infty}^{infty} {p(t)f(t)dt} = 0

where p is the function in your original post. The zero measure is not a valid probability measure.

[ QUOTE ]
I'm not sure you completely understand limits.

[/ QUOTE ]
/images/graemlins/smile.gif Okay. Well, for what it's worth (if anything), I am a university professor of mathematics and a professional research mathematician, specializing in probability. But, maybe you're right. Maybe I have no idea what I'm talking about. /images/graemlins/wink.gif

[ QUOTE ]
Let me put it this way:

The limit as n approaches infinity of:

integral from -infinity to infinity of p(x) dx

=1

if p(x)=
{0 x < -n
1/2n -n < x < n
0 x > n}


If you really don't believe me I can prove this last statement from the definition of a limit.

[/ QUOTE ]
That's all right. I grade enough proofs from the definition of a limit as it is. /images/graemlins/wink.gif

Your statement is true, but it is irrelevant. We've danced around all kinds of convergence here: pointwise, as Schwartz distributions, as measures, etc. But the real kind of convergence you need here is convergence in law (or as probability distributions).

You have created a sequence of random variables, A_n, such that A_n is uniform on [-n,n]. You want to claim that there is a random variable, A, such that A_n converges to A. The kind of convergence you need is the kind that appears, for example, in the Central Limit Theorem. You need

P(A_n <= x) --> P(A <= x)

for all x such that the function F_A(x) = P(A <= x) is continuous. You need this because, in your argument, you're going to be making claims about limits of this kind, not just limits of p(x) integrated over the entire real line.

But for this to work, we would need F_A(x)=1/2 for all x. (Should I include a proof?) The problem with this is that F_A(x)=1/2 is not a valid cumulative distribution function, since it does not approach 0 at -infty, nor does it approach 1 at infty. So there does not exist a random variable A such that A_n converges to A in law. (Note also that if F_A(x)=1/2 for all x, then dF is the zero measure.)

This is really just a special case of the fact that there does not exist a real-valued random variable, A, with the property that P(A <= x) = 1/2 for all x. In other words, there is no uniform distribution on the entire real line.

Note that you do not run into this problem when you deal with the random variables whose densities are given by the functions which converge to the Dirac Delta. These do have a limit in law, namely the random variable A which is always 0.

If you're still not comfortable with all of this, then please tell me precisely: for each x, what is

P(A <= x)?

Is it

lim_{n->infty} int_{-infty}^x {p(t) dt}

where p is the function in your original post? Is this what you meant to say? Do you agree that, for any real number x, this limit is 1/2? Are you aware that, for every well-defined, real-valued random variable A,

lim_{x->infty} P(A <= x) = 1?

This is the heart of the matter in both your "paradox" and the original one. I can't say it enough: in modern probability theory, there is no random variable A such that

P(A <= x) = 1/2

for all x. If you want to work with objects that have this property, then you must work outside of current mathematical theory. If you want to do that, fine. But it's not mathematics. At least not yet. But maybe you can start a revolution! /images/graemlins/smile.gif

eldynamite
11-07-2004, 02:23 PM
[ QUOTE ]
Two distinct real numbers are chosen randomly from the universe of real numbers.

[/ QUOTE ]

It seems to me that this premise is absurd. It is not possible to choose an item randomly from an infinite set.

"Randomly" implies that every number has the same probability of being chosen. That probability must be 1/infinity, which is simply nonsense.

Am I missing something?


Tim

jason1990
11-07-2004, 02:46 PM
You can choose a number "randomly" from the interval [0,1] in the sense that, for every subinterval I, the probability that this number lies in I depends only on the length of I. In the same way, you can choose a number "randomly" from any bounded interval. But you cannot choose a number "randomly" (in this sense) from the entire real line.

Another way of looking at it is this: someone tells you they are choosing a number X "randomly" from the entire real line. You have to try to understand what they mean by "randomly". Whatever they mean, they probably want it to be true that the numbers

p_n = P(n <= X < n+1)

are all the same and do not depend on n. But these numbers must add up to 1. So they cannot be positive (since then they add up to infinity) and they cannot be zero (since then they add up to zero). So we have a contradiction and it is therefore not possible to pick a number randomly from the entire real line so that all these numbers are the same.

jason1990
11-07-2004, 04:34 PM
[ QUOTE ]
I'm not sure what you mean by joint cumulative distribution function, but I'll say that these random variables are independent, and if you want the cumulative distribution function, integrate the probability density function I gave you. Are you studying a maths major by chance? You strike me as picky...

[/ QUOTE ]

I didn't see this part earlier. Did you add it after the fact?

The joint cumulative distribution function of A, B, and C is the function of 3 variables

F(x,y,z) = P(A <= x, B <= y, C <= z).

If A, B, and C are independent and identically distributed, then it is enough to specify the cumulative distribution function of A, F_A(x) = P(A <= x).

As I discussed at length in another post, if I want the cumulative distribution function of A, I cannot simply integrate the probability density function you gave me. I must either

(a) take the limit and then integrate, or
(b) integrate and then take the limit.

If I do (a), I get F_A(x) = 0 for all x. If I do (b), I get F_A(x) = 1/2 for all x. Neither of these is a valid cumulative distribution function. So you have not adequately described the random variables you're talking about. If you ever do provide a valid description, you will find that there is no paradox.

As for being picky, paradoxes have never been resolved by being sloppy. Being sloppy is what produces paradoxes. In your paradox, you are talking about certain random variables and you are implying that they have certain properties. There is only one way to resolve this paradox: by realizing that random variables with these properties do not exist.

Being "picky" was my attempt to get you to realize this for yourself. I was hoping you would discover that it is impossible to provide a valid description for these random variables, because they do not exist! This is the resolution of the paradox. If you want to resolve it, this is what you must come to terms with.

And as for my mathematical background, I mentioned it in another post.

jimdmcevoy
11-07-2004, 05:33 PM
[ QUOTE ]


/images/graemlins/smile.gif Okay. Well, for what it's worth (if anything), I am a university professor of mathematics and a professional research mathematician, specializing in probability. But, maybe you're right. Maybe I have no idea what I'm talking about. /images/graemlins/wink.gif



[/ QUOTE ]

I knew I smelled a mathematician /images/graemlins/wink.gif

You are right that the Dirac Delta function is not a function, I knew better but was lazy with words. But I think there may be a miscommunication. I agree that:

lim as n approaches infinity of:
p(x)=
{0 x < -n
1/2n -n < x < n
0 x > n}

is not an appropriate probability density, but I never use this. To see what I mean I will edit my original paradox post to say what I meant in terms of proper mathematics. (In my opinion you took what I said too literally, but objectively you are right; I said many things that make no sense mathematically.)

But I am still interested in this uniform distribution over all reals. I think we agree about the Dirac Delta function so I'll leave that alone.

[ QUOTE ]


You can check for yourself that for every f of the kind given above

lim_{n->infty} int_{-infty}^{infty} {p(t)f(t)dt} = 0

where p is the function in your original post. The zero measure is not a valid probability measure.



[/ QUOTE ]

when I try for example f(t)=
{0, t<1
1/t^2, t>1}

I get your above expression equal to 1/4. Is this not a valid f(t)?

Anyways, I agree that P(A <= x) = 1/2 for all x. And

lim_{n->infty} int_{-infty}^x {p(t) dt} = 1/2

So if this limit has to be 1 then that makes my distribution no good according to you, and I'll take your word for it since I have never actually taken a stats course.

I guess that puts it outside of mathematics, and there's probably something I need to understand, but in my mind this distribution is still OK. I mean, if you plotted the cumulative distribution function from -infinity to infinity (as in, scale the x-axis so that -infinity becomes x=-5 and infinity becomes x=5), then it would start with the point (-5,0), have a jump discontinuity, equal 1/2 on the interval (-5,5) (notice parentheses there, not brackets), and then there would be a point at (5,1). But as you say, it doesn't reach 1 soon enough.

In my opinion the problem for me comes down to the fact that infinity is not a number while 0 is a number. This is why the Dirac Delta is valid and mine is not. It's funny because I've often thought before that one day when I have nothing to do I'm gonna try and create a 'new type of maths' where 0 doesn't always equal 0, just like infinity doesn't always equal infinity.

But enough of my non-mathematical rambling, I was glad to hear you say this:

[ QUOTE ]


That's all right. I grade enough proofs from the definition of a limit as it is. /images/graemlins/wink.gif



[/ QUOTE ]

Because I have only ever proved a limit formally about 5 times in my life, and that was 4 times too many in my opinion.

Btw, I was arguing with my friend over this one a couple of weeks ago; would you happen to know the answer?

int_-1^1 {1/x dx}

I think it is undefined.

I've been taught that int_-infinity^infinity {x dx} is undefined, and I agree with that; my own interpretation is that the two tails could be 'different infinities'. But what about this other integral? 0 or undefined? My friend actually came up with Pi*i or -Pi*i, I can't remember (he used a contour integral in the complex plane, which didn't sit right with me...)

And one last thing, do you happen to know much about Nash Equilibria?

Btw if you don't feel like answering off topic maths questions no worries, I'm just curious.

jimdmcevoy
11-07-2004, 05:39 PM
Well, it didn't let me edit the post since it was too long ago, so here it is again in proper mathematics:

**WARNING** - Do not keep reading unless you are a math nerd who likes to think about math for fun. Here is a math 'paradox'

Suppose you choose A, B and C randomly on the interval (-n, n). A, B and C are independent.

As n approaches infinity, the parabola Ax^2+Bx+C will be concave up half the time and concave down the other half of the time. Also, its vertex will be above the x-axis half the time and below it the other half.

Given these two statements, as n approaches infinity there will be a real solution to the equation Ax^2+Bx+C=0 half the time. If you don't see what I'm saying, draw the real plane, draw some random parabolas, and figure out what is required for the parabola to cut the x-axis.

Fair enough.

But look at this another way:

The solution of Ax^2+Bx+C=0

is the good old quadratic formula:

-B/2A +- sqrt(B^2-4AC)/2A

which means there will be a real solution when B^2>=4AC.

Now since, as n approaches infinity, A and C are equally likely to be positive or negative, AC is also. So half the time AC is negative, and since B^2 is always positive, half the time B^2>=4AC.

But what about when AC>=0? In this case sometimes B^2>4AC and sometimes B^2<4AC. Let's call P the probability that B^2>4AC given that A, B and C are random numbers on the interval (-n,n), as in this example.

So with a little thought, this makes the probability of a real solution .5+.5*P.

So as long as P>0 as n approaches infinity, you have a paradox. You might say P=0 as n approaches infinity, but how? Sometimes B^2>4AC even when A and C are positive, right?

I assumed that this problem was essentially the same as the problem rocketmanjames posted, so I blurted out P_between=0. Silly me, shoulda thought more.
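A quick Monte Carlo check of the restated paradox (a sketch of mine, not from the thread): sampling A, B, C uniformly on (-n, n) suggests P(B^2 > 4AC) is the same for every n, around 0.627, consistent with the ln(2) answer discussed later in the thread (the exact value appears to be 41/72 + ln(2)/12).

```python
import random

def p_real_roots(n, trials=200_000, seed=1):
    """Estimate P(B^2 > 4AC) with A, B, C independent uniform on (-n, n)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b, c = (rng.uniform(-n, n) for _ in range(3))
        if b * b > 4 * a * c:
            hits += 1
    return hits / trials

for n in (1, 100, 10**6):
    print(n, p_real_roots(n))  # roughly 0.627 for every n: no dependence on n
```

Scaling (A,B,C) by n rescales both sides of B^2 > 4AC by n^2, which is why the answer cannot depend on n.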

jimdmcevoy
11-07-2004, 05:45 PM
[ QUOTE ]


I didn't see this part earlier. Did you add it after the fact?



[/ QUOTE ]

Well, I didn't say it until that post; before that I assumed people would assume it. Seems I've been doing that too much.

[ QUOTE ]


Being "picky" was my attempt to get you to realize this for yourself. I was hoping you would discover that it is impossible to provide a valid description for these random variables, because they do not exist! This is the resolution of the paradox. If you want to resolve it, this is what you must come to terms with.



[/ QUOTE ]

Well, in my mind this improper use of maths is not the reason that the paradox is not actually a paradox (don't get me wrong, I don't actually believe this is a paradox); it is something else (which I wasn't going to post, in the hope that someone might actually try to figure it out). This is the reason I viewed your criticism as 'picky'; it was also half in jest. I am finishing off my physics major, and my physics friends and I usually make fun of mathematicians given the chance, in good nature.

jason1990
11-07-2004, 06:13 PM
[ QUOTE ]
when I try for example f(t)=
{0, t<1
1/t^2, t>1}

I get your above expression equal to 1/4. Is this not a valid f(t)?

[/ QUOTE ]
Strictly speaking, it is not, because of the discontinuity at 1, and the fact that it does not decay rapidly enough near +infty. But nonetheless, I think it still works:

int_{-infty}^{infty} {p(t)f(t) dt}
= int_1^n{1/(2nt^2) dt}
= -1/(2nt)|_1^n
= 1/(2n) - 1/(2n^2)
= (n-1)/(2n^2) --> 0.
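The computation above can be double-checked with a crude midpoint quadrature (my own sketch, not from the thread): the numeric integral of 1/(2n t^2) over [1, n] matches the closed form (n-1)/(2n^2), and both shrink toward 0.

```python
# Sketch: numerically confirm int_1^n {1/(2n t^2) dt} = (n - 1)/(2n^2) -> 0.

def quad_midpoint(f, a, b, steps=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

def closed_form(n):
    return (n - 1) / (2.0 * n * n)

for n in (10, 100, 1000):
    num = quad_midpoint(lambda t: 1 / (2 * n * t * t), 1, n)
    print(n, num, closed_form(n))  # the two columns agree and shrink toward 0
```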

[ QUOTE ]
Anyways, I agree that P(A <= x) = 1/2 for all x. And

lim_{n->infty} int_{-infty}^x {p(t) dt} = 1/2

So if this limit has to be 1 then that makes my distribution no good according to you, and I'll take your word for it since I have never actually taken a stats course.

[/ QUOTE ]
Certain facts about limits are built into the foundations of probability theory. If you construct an example in which these facts do not hold, then none of the tools of probability will be applicable - not even the very basics. For example, the foundations of probability imply that for every real-valued random variable

P(A > 0) = lim_{n->infty} P(0 < A <=n).

But if P(A <= x) = 1/2 for all x, then the left hand side is 1/2, but for all n

P(0 < A <= n) = P(A <= n) - P(A <= 0) = 1/2 - 1/2 = 0.

As for improper integrals, the normal way of defining integrals in general is to first define them only for positive functions. Then, given a function f which takes on both positive and negative values,

int_a^b{f(x)dx}

is only defined when

int_a^b{|f(x)|dx}

is finite. When this is finite, the original integral is defined to be

int_a^b{ max(f(x),0) dx } - int_a^b{ max(-f(x),0) dx}.

Using this definition, the integral you mentioned is undefined. This definition must be used whenever you want to invoke the more powerful theorems of integration theory. However, there are different approaches. For example, there is something called the "principal value" integral. Using this definition, the integral you mentioned is defined and is equal to zero. But these alternative definitions are not very common (precisely because they are not very powerful) and when someone uses them, they usually explicitly say so.
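The distinction can be illustrated numerically (a sketch of mine, not from the thread): the symmetric truncation int_{-1}^{-eps} + int_{eps}^{1} of 1/x stays at 0 as eps shrinks (the principal value), while the integral of |1/x| over the same region grows like 2*ln(1/eps), which is why the standard definition calls the integral undefined.

```python
import math

# Sketch: the principal value of int_{-1}^1 {1/x dx} via symmetric truncation.
# The two halves cancel, while int of |1/x| over the same region diverges.

def pv_piece(a, b, steps=100_000):
    """Midpoint-rule approximation of int_a^b {1/x dx}."""
    h = (b - a) / steps
    return h * sum(1.0 / (a + (i + 0.5) * h) for i in range(steps))

for eps in (0.1, 0.01, 0.001):
    pv = pv_piece(-1.0, -eps) + pv_piece(eps, 1.0)
    abs_part = 2 * math.log(1 / eps)   # exact value of int |1/x| over both pieces
    print(eps, pv, abs_part)  # pv stays near 0; abs_part keeps growing
```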

And I only know enough about Nash Equilibria to get by. But I'd be happy to discuss it if I can.

jason1990
11-07-2004, 06:27 PM
[ QUOTE ]
I am finishing off my physics major, and me and my physics friends usually make fun of mathematicians given the chance, in good nature.

[/ QUOTE ]
I have a very nice article, written by a physicist, that outlines seven very subtle "paradoxes" in quantum mechanics. They all arise from physicists being too "sloppy" with mathematics. Many of them come from being too careless about the domains of certain operators, an issue most physicists I know would lump into the "picky" category.

The author then goes on to resolve these paradoxes by appealing to mathematical formalism. If you're interested in reading it, let me know and I'll email it to you. They are all very challenging paradoxes. You might have fun stumping your friends (or teachers!) with them.

jimdmcevoy
11-07-2004, 06:40 PM
[ QUOTE ]


Strictly speaking, it is not, because of the discontinuity at 1, and the fact that it does not decay rapidly enough near +infty. But nonetheless, I think it still works:

int_{-infty}^{infty} {p(t)f(t) dt}
= int_1^n{1/(2nt^2) dt}
= -1/(2nt)|_1^n
= 1/(2n) - 1/(2n^2)
= (n-1)/(2n^2) --> 0.



[/ QUOTE ]

/images/graemlins/blush.gif woops, nevermind me saying that was 1/4.

And thanks for the answer to that integral, I was pretty sure it came down to how you define your integral.

Btw, I just read a post by Helios which clarifies my thoughts on my "badly behaved function": my function says that half the time the number is infinity and the other half negative infinity.

Helios
11-07-2004, 06:56 PM
The answer is that any random number between 1 and infinity is effectively infinite. Think about it: because there are always infinitely many numbers above any possible number, the probability of any particular number occurring is 1 divided by infinity, or zero, making any number chosen effectively infinite.

This also works for any string of numbers because any fixed number divided by infinity equals zero.

jimdmcevoy
11-07-2004, 06:57 PM
Yah if it's not too much trouble that article would interest me, my email is jimdmcevoy@yahoo.com

So anyways, are you in agreement with me that the 'paradox' does not arise from my 'uniform over the reals' distribution function? In other words, is my edited post mathematically OK? If it is, then all we had was a miscommunication.

And I was wondering if you happened to know how to do this:

Suppose that Player A gets to choose 10 variables; each variable has to be between 0 and 1, say. Call these variables a_1, a_2, a_3, ..., a_10. Same for Player B; call his variables b_1, b_2, b_3, ..., b_10.

Now suppose that in one round of a game, Player B pays Player A f(a_1,a_2,a_3,...,a_10,b_1,b_2,b_3,...,b_10) dollars. If f is negative, that means Player A pays Player B |f| dollars, which makes this a zero-sum game. Both players know the function f.

As it happens, f can be expressed as a_1*g + h, where g and h are functions of all 20 variables except a_1. This is true for all 20 variables: f can be expressed as a_n*j_n+k_n, where j_n and k_n are functions of the other 19 variables, and f can also be expressed as b_n*l_n+m_n, where l_n and m_n are functions of the other 19 variables. I'm not sure I said this clearly; hopefully you get what I mean.

Anyway, how do Player A and Player B play to win/save themselves the most money? Is there some sort of mathematical process/algorithm to figure this out? From what I know, the two sets of 10 variables will form a Nash equilibrium, but I can't figure out a general method to find the Nash equilibrium. Has Nash done this?

If you haven't figured it out by now, this has applications to solving for perfect strategies for poker games.

jimdmcevoy
11-07-2004, 07:04 PM
[ QUOTE ]
The answer is that any random number between 1 and infinity is effectively infinite. Think about it: because there are always infinitely many numbers above any possible number, the probability of any particular number occurring is 1 divided by infinity, or zero, making any number chosen effectively infinite.


[/ QUOTE ]

I think I agree here, but as jason1990 has shown, the probability distribution you are talking about is 'outside' of mathematics, as in the probability distribution is not a valid probability distribution.

jason1990
11-07-2004, 07:40 PM
[ QUOTE ]
So anyways are you in agreement with me that the 'paradox' does not arise from my 'uniform over the reals' distribution function? In other words, is my edited post mathematically ok? If it is then all we had was a misscommunication.

[/ QUOTE ]
Yes, I think the edited post is okay, for the most part. But then, where's the paradox? Clearly, the paradox lies in the fact that we have certain contradictory expectations about the limiting values of these probabilities. But where do those expectations come from? They come from the fact that we are still thinking of these limiting objects as "probabilities", which they are not. And they are not "probabilities" precisely because of the 'uniform over the reals' issue.

Put another way, I like the way you rephrased it, but I think the paradox is intimately tied to this issue no matter how we slice it up.

[ QUOTE ]
As it happens, f can be expressed as a_1*g + h where g and h are functions of all the 20 variables except a_1. This is true for all 20 variables

[/ QUOTE ]
I think I would rather it be the case that

f = a_1 g_1 + ... + a_{10} g_{10} + c, and
f = b_1 h_1 + ... + b_{10} h_{10} + c,

where each g_i is an affine linear function of only the b_i's and each h_i is an affine linear function of the a_i's. If that were the case, I think you would only need to solve a simple linear program and some version of the simplex method would do it for you.

jimdmcevoy
11-07-2004, 08:11 PM
[ QUOTE ]


Yes, I think the edited post is okay, for the most part. But then, where's the paradox? Clearly, the paradox lies in the fact that we have certain contradictory expectations about the limiting values of these probabilities.



[/ QUOTE ]

Yep that's the paradox,

The probability that B^2>4AC, given that A, B and C are random numbers on the interval (-n,n), is 0 as n approaches infinity. But intuitively some would think this has to be greater than 0. You think this is obvious, but you are a professor of mathematics /images/graemlins/wink.gif

[ QUOTE ]


But where do those expectations come from? They come from the fact that we are still thinking of these limiting objects as "probabilities", which they are not. And they are not "probabilities" precisely because of the 'uniform over the reals' issue.



[/ QUOTE ]

Yah I guess I didn't think of it that way.

[ QUOTE ]


I think I would rather it be the case that

f = a_1 g_1 + ... + a_{10} g_{10} + c, and
f = b_1 h_1 + ... + b_{10} h_{10} + c,



[/ QUOTE ]

Unfortunately this is not the case; there could be a term like a_1*a_2*b_1*b_2.

What do you think about this approach?

Suppose f can be expressed as f=a_n*j_n+k_n and f=b_n*l_n+m_n for n=1,2,3,...10

set a_n=Hermitite(j_n)

and set b_n=Hermitite(-l_n)

now you have 20 equations with 20 variables. Can anything be done from here?

Helios
11-07-2004, 08:18 PM
Yeah, that's what I was trying to show: that both numbers are equal, and that it is outside mathematics to get an answer.

jason1990
11-07-2004, 08:48 PM
[ QUOTE ]
The probability that B^2>4AC, given that A, B and C are random numbers on the interval (-n,n), is 0 as n approaches infinity.

[/ QUOTE ]
Are you sure about that? What's wrong with the following:

P(B^2 > 4AC) = P(A > 0, C < 0) + P(A < 0, C > 0)
+ P(B^2 > 4AC, A > 0, C > 0)
+ P(B^2 > 4AC, A < 0, C < 0)
= 1/2 + 2P(B^2 > 4AC, A > 0, C > 0)
= 1/2 + (2/(2n)^2)int_0^n int_0^n{P(B^2 > 4xy)dxdy}

P(B^2 > 4xy) = P(B > 2sqrt{xy}) + P(B < -2sqrt{xy})
= 2P(B > 2sqrt{xy})
= 2[(n - 2sqrt{xy})/(2n)]
= 1 - (2/n)sqrt{xy}

P(B^2 > 4AC)
= 1/2 + (1/2n^2)int_0^n int_0^n{[1 - (2/n)sqrt{xy}]dxdy}
= 1/2 + (1/2n^2)[n^2 - (2/n)(int_0^n{sqrt{x}dx})^2]
= 1/2 + 1/2 - (1/n^3)[(2/3)n^{3/2}]^2
= 1/2 + 1/2 - (1/n^3)(4n^3/9)
= 1 - 4/9
= 5/9.

If this is right, then P(B^2 > 4AC) = 5/9 for all n. Perhaps the intuitive mistake is thinking that

[ QUOTE ]
its vertex will be above the x-axis half the time and below it the other half.

[/ QUOTE ]

As for the "Hermitite" stuff, I don't know what that is. Sorry I can't help you with that.

jimdmcevoy
11-07-2004, 09:43 PM
The first problem is I said something wrong:

[ QUOTE ]


The probability that B^2>4AC, given that A, B and C are random numbers on the interval (-n,n), is 0 as n approaches infinity.



[/ QUOTE ]

What I meant to say was:

The probability that B^2>4AC and AC>0, given that A, B and C are random numbers on the interval (-n,n), is 0 as n approaches infinity.

Taking this into account your answer will become 1/18. I worked out the answer a slightly different way, and also got 1/18. But then I think we may have both made a mistake in our calculation.

It seems you have said that:

P(B > 2sqrt{xy})=[(n - 2sqrt{xy})/(2n)]

I am thinking that it should be:

P(B > 2sqrt{xy})=max(0,[(n - 2sqrt{xy})/(2n)])

And as for the Hermitite function, it is just the function defined as:

Hermitite(x):=

{0 x<0
1 x>=0}

As it happens, if you have a probability density function of Dirac(a), then your cumulative distribution function will be Hermitite(a).

It may actually be called Hermite(x); I can't remember.

jason1990
11-07-2004, 10:19 PM
[ QUOTE ]
It seems you have said that:

P(B > 2sqrt{xy})=[(n - 2sqrt{xy})/(2n)]

I am thinking that it should be:

P(B > 2sqrt{xy})=max(0,[(n - 2sqrt{xy})/(2n)])

[/ QUOTE ]
You're right. But I just calculated it the other way (too much to type up) and I don't believe it changes the fact that this probability does not depend on n, which means it doesn't vanish in the limit.

We want to calculate P(B^2 > 4AC) conditioned on the fact that A > 0 and C > 0. This is the same as computing P(B^2 > 4UV) where U and V are uniform on [0,n]. To compute this, we take the function (defined for x and y in [0,n])

f(x,y) = P(B^2 > 4xy) = 2P(B > 2sqrt{xy})
= 2max(0,(n - 2sqrt{xy})/(2n))

and compute E[f(U,V)]. But if X and Y are uniform on [0,1], then we can write U = nX and V = nY. Hence,

P(B^2 > 4UV) = E[f(nX,nY)]
= 2E[max(0,(n - 2sqrt{(nX)(nY)})/(2n))]
= 2E[max(0,(1 - 2sqrt{XY})/2)],

and this does not at all depend on n.
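A Monte Carlo check of this last expression (my own sketch, not from the thread): the estimate lands near 0.254, and evaluating the integral by hand appears to give 5/36 + ln(2)/6, consistent with the ln(2) that shows up later in the thread.

```python
import math
import random

def cond_prob(trials=400_000, seed=7):
    """Monte Carlo for 2*E[max(0, (1 - 2*sqrt(X*Y))/2)], X, Y uniform on [0, 1].
    This is P(B^2 > 4AC) given A > 0 and C > 0, with no n anywhere in it."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        total += 2 * max(0.0, (1 - 2 * math.sqrt(x * y)) / 2)
    return total / trials

exact = 5 / 36 + math.log(2) / 6   # my hand evaluation of the integral, approx 0.2544
print(cond_prob(), exact)
```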

[ QUOTE ]
Hermitite(x):=

{0 x<0
1 x>=0}

[/ QUOTE ]

Okay, I see. So are you saying that any solution to the equations

a_n=Hermitite(j_n)
b_n=Hermitite(-l_n)

will be a Nash equilibrium? Is this obvious? (Sorry, I told you I didn't know much about Nash.)

jimdmcevoy
11-07-2004, 11:09 PM
Yeah, I agree; it seems that B^2>4AC and AC>0 sometimes, and it doesn't depend on n. And I think that the odds that the vertex is above the x-axis are greater than 1/2, and independent of n. Now I find that pretty interesting.

I worked out the odds that the vertex is above the x-axis as:

1/(2n)^3 * int_-n^n int_-n^n (n - max(y^2/4z, -n)) dy dz

since the quadratic formula rearranged is A(x+B/2A)^2+C-B^2/4A

and with a change of variables y=A*n and z=B*n it shows it's independent of n, and the fact that sometimes y^2/4z<-n (when z is sufficiently small and negative) shows that the vertex is more likely to be above the x-axis. I think I should have created a paradox based on this alone.

So unless I made a mistake, I can say that there is no paradox. I tried to work out the values of these integrals exactly but was not confident in my answer; it had ln(2) in it. Plus I'm too lazy to do both.

[ QUOTE ]


Okay, I see. So are you saying that any solution to the equations

a_n=Hermitite(j_n)
b_n=Hermitite(-l_n)

will be a Nash equilibrium? Is this obvious? (Sorry, I told you I didn't know much about Nash.)



[/ QUOTE ]

Well I was half saying half asking, here are my thoughts on it:

If you are Player A, you know Player B's strategy (b_n=Hermitite(-l_n)), and you already have your own strategy set out (a_n=Hermitite(j_n)), but now you want to 'fiddle' with a_1 to make sure you are choosing your best a_1. It seems to me you can't get better than a_1=Hermitite(j_1), so you leave it. You go through the process with all a_n and come to the conclusion you should change nothing. Player B goes through the same process. So in the end, if you fix any 19 variables, you cannot change the 20th variable to make a better strategy; hence this is a Nash Equilibrium.

But maybe there are solutions not of the form mentioned?

In any case, looking at it from a purely maths point of view, can you think of any sure fire way to find the solution(s) of

a_n=Hermitite(j_n)
b_n=Hermitite(-l_n)

?
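One way to picture the proposed fixed-point equations is best-response iteration on a toy payoff. This is a hypothetical example of mine, not from the thread: f is bilinear, A maximizes and B minimizes, and each player's best response is a step ('Hermitite', usually called Heaviside) of the coefficient multiplying that player's variable. In general such iteration can cycle rather than converge; this particular toy happens to settle.

```python
# Hypothetical toy payoff (mine): f is linear in each single variable, as in
# the posts above. A picks a in [0,1] to maximize f, B picks b to minimize it.

def step(x):
    """jimdmcevoy's 'Hermitite' (usually called the Heaviside step)."""
    return 1.0 if x >= 0 else 0.0

def f(a, b):
    return a * b + 0.2 * a - 0.6 * b   # coeff of a is (b + 0.2), of b is (a - 0.6)

def best_a(b):
    return step(b + 0.2)       # a_n = Hermitite(j_n): A maximizes
def best_b(a):
    return step(-(a - 0.6))    # b_n = Hermitite(-l_n): B minimizes

a, b = 0.0, 0.0
for _ in range(20):            # best-response iteration
    a = best_a(b)
    b = best_b(a)
print(a, b)  # settles at (1.0, 0.0), a Nash equilibrium for this toy f
```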

jason1990
11-07-2004, 11:24 PM
[ QUOTE ]
I tried to work out the values of these integrals exactly but was not confident with my answer, it had ln(2) in it.

[/ QUOTE ]
I had ln(2) in mine also. It's probably right or close to it.

[ QUOTE ]
In any case, looking at it from a purely maths point of view, can you think of any sure fire way to find the solution(s) of

a_n=Hermitite(j_n)
b_n=Hermitite(-l_n)

[/ QUOTE ]
Not off the top of my head. But any solution is just going to be a bunch of 1's and 0's. And I'm guessing the j's and l's are all just linear combinations of products of the variables. So my first instinct is that it should be doable, under some conditions on the coefficients of the j's and l's.

I'll have to think some more about your heuristics as to why this should be a Nash eq.

jimdmcevoy
11-07-2004, 11:43 PM
Come to think of it, I think my method will miss a lot of Nash Equilibria, but anyway, I'll try and figure it out one day. Game theory is my latest hobby, since I like poker and maths (Australians usually refer to mathematics as maths), so it will always be in the back of my mind.

Anyway, if you ever get bored one day, I would be very curious to hear your opinion on what's called the Doomsday Theorem. I wrote about it on this forum a few weeks ago

http://forumserver.twoplustwo.com/showthreaded.php?Cat=&Number=1170570&page=3&view=c ollapsed&sb=5&o=14&fpart=1

but as I have been told by several people, I am very bad at explaining things, so here is a link to someone who probably explains it much better than I do:

http://www.anthropic-principle.com/preprints/lit/

jason1990
11-08-2004, 12:08 AM
This Doomsday thing is quite interesting. Thanks for showing me. I'll be thinking about it. (Whoa, another time draining distraction. Just what I need! /images/graemlins/smile.gif)

TomCollins
11-08-2004, 11:40 AM
Switching only when negative will not result in a +EV.

jimdmcevoy
11-08-2004, 11:52 AM
The way I see it, if A and B are independent, just as likely to be positive or negative, and have the same probability distribution, then you will pick the winner 75% of the time.

Think of it this way: there are four possibilities, each with a 25% probability of occurring.

A>0 B>0 my method will pick the winner 50% of the time here

A>0 B<0 my method will pick the winner 100% of the time here

A<0 B>0 my method will pick the winner 100% of the time here


A<0 B<0 my method will pick the winner 50% of the time here

hence you have a 75% chance of picking the winner. This will work as long as there is some midpoint such that A and B are just as likely to be higher or lower than this midpoint (and never actually the midpoint), and in this example the midpoint is 0.

If you would like, I will prove this with rigorous mathematics that even jason1990 would approve of /images/graemlins/wink.gif
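[Editor's note: a Monte Carlo sketch of this 75% claim, not code from the thread. It assumes A and B are i.i.d. standard normals, so the "midpoint" is 0.]

```python
import random

# Assumed setup: A, B i.i.d. standard normals (equally likely to be
# positive or negative). Strategy: keep A if A > 0, otherwise take B.
random.seed(1)
trials = 200_000
wins = 0
for _ in range(trials):
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    pick = a if a > 0 else b
    wins += pick == max(a, b)
print(wins / trials)  # close to 0.75
```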

TomCollins
11-09-2004, 11:11 AM
Wrong.
The strategy you are using (I believe):
If A>0, you are not switching.
If A<0, you are switching.

So there are 4 cases:

1) A>0 A>B No switch. You win. 100% of the time.
2) A>0 A<B No switch. You lose. 100% of the time.
3) A<0 A>B Switch. You lose. 100% of the time.
4) A<0 A<B Switch. You win. 100% of the time.

So you are right 50% of the time, and wrong 50% of the time. Nowhere near 75%.

Maybe throw in more big words like Nash Equilibrium and Quadratic Equation and it might confuse those of us not astute enough to counter your ill-formed logic.

jason1990
11-09-2004, 12:26 PM
If A and B are independent and identically distributed, and P(A<0)=P(A>0)=1/2, then he is absolutely right. And the fact that he is right in this case is completely trivial and doesn't rely on any Nash Equilibrium or Quadratic Equation or anything like that.

Your argument relies on the hidden premise that the cases you outlined are equally likely. Essentially, you are saying:

In case 1, he wins with probability 1.
In case 2, he wins with probability 0.
In case 3, he wins with probability 0.
In case 4, he wins with probability 1.
So his probability of winning is
P(case 1)*1 + P(case 2)*0 + P(case 3)*0 + P(case 4)*1
= .25*1 + .25*0 + .25*0 + .25*1
= .5

But this is not correct. These cases are not equally likely under his assumptions. For example,

P(case 1) = P(A>0, A>B)
= P(A>0, A>B, B>0) + P(A>0, A>B, B<0)
= P(A>0, B>0)*P(A>B | A>0, B>0) + P(A>0, B<0)
= P(A>0)*P(B>0)*P(A>B | A>0, B>0) + P(A>0)*P(B<0)
= .5*.5*.5 + .5*.5
= .125 + .25
= .375

Similarly, you can compute that

P(case 2) = P(case 3) = .125

and P(case 4) = .375.

So his probability of winning is actually

.375*1 + .125*0 + .125*0 + .375*1 = .75

It baffles me why you insist on inserting such attitude into your posts. In this case, it is your argument which is not valid because you have relied on an implicit assumption which is false. What you should have been arguing with were his assumptions, not his logic. And you shouldn't berate someone else's use of logic if you can't even use it effectively yourself.
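[Editor's note: the unequal case probabilities computed above can be checked by simulation. This is an illustrative sketch under the same assumption of i.i.d. standard normals; it is not code from the thread.]

```python
import random

# Estimate the probabilities of the four cases for i.i.d. standard
# normals (assumed distribution with median 0).
# Expected values: .375, .125, .125, .375.
random.seed(2)
trials = 400_000
counts = [0, 0, 0, 0]
for _ in range(trials):
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    if a > 0 and a > b:
        counts[0] += 1  # case 1: A>0, A>B
    elif a > 0:
        counts[1] += 1  # case 2: A>0, A<B
    elif a > b:
        counts[2] += 1  # case 3: A<0, A>B
    else:
        counts[3] += 1  # case 4: A<0, A<B
print([c / trials for c in counts])
```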

jimdmcevoy
11-09-2004, 01:00 PM
I think there may be some confusion: I gave the wrong impression that I was talking about my own paradox. I am talking about the paradox RocketManJames posted originally.

And for his paradox the optimal strategy is:

switch when A<0

don't switch when A>0

which will give 75% success.

The things I was trying to 'confuse' you with were for a separate paradox I made up. And I have no sympathy for you, because I clearly stated:

[ QUOTE ]

**WARNING** - Do not keep reading unless you are a math nerd who likes to think about math for fun. Here is a math 'paradox'


[/ QUOTE ]

jason1990
11-09-2004, 01:13 PM
I said in my reply to TomCollins, he should have been arguing with your assumptions. So I'll take on the burden and argue against them myself.

I see no reason why you shouldn't be able to assume that A and B (in the OP's paradox) are independent, identically distributed, and distinct with probability 1. But you cannot assume anything about their individual distributions. (The most natural assumption is to assume they are "uniform" on the real line, but we discussed that.) So, for example, A and B may be uniform on [0,1]. If they are, then your strategy only gives a 50% chance of winning. If they are both standard normals, then you're right, you have a 75% chance of winning. But you have no idea what the distributions of these numbers are!

Your strategy does not give you a 75% chance of winning, regardless of the distribution of these random numbers. Sometimes it does. Sometimes it gives you only 50%. Sometimes it gives you something in between. The strategy TomCollins outlined, however, always produces a chance of winning which is greater than 50%, regardless of the distributions.
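[Editor's note: both halves of this point are easy to check numerically. A sketch, where `win_rate` is an illustrative helper, not from the thread: with standard normals the "switch when A<0" rule wins about 75% of the time, while with uniforms on [0,1] the number A is never negative, so the rule always keeps A and wins only 50%.]

```python
import random

# Illustrative helper: win rate of the strategy "keep A iff A > 0"
# when A, B are drawn i.i.d. from the given sampler.
def win_rate(draw, trials=200_000):
    wins = 0
    for _ in range(trials):
        a, b = draw(), draw()
        pick = a if a > 0 else b
        wins += pick == max(a, b)
    return wins / trials

random.seed(3)
print(win_rate(lambda: random.gauss(0, 1)))  # median 0: about 0.75
print(win_rate(random.random))               # uniform on [0,1]: about 0.50
```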

jimdmcevoy
11-09-2004, 01:26 PM
What I said in my reply to TomCollins was:

[ QUOTE ]


This will work as long as there is some midpoint such that A and B are just as likely to be higher or lower than this midpoint (and never actually the midpoint), and in this example the midpoint is 0.



[/ QUOTE ]

And you are right, what I should have said (and what I was thinking, but was too lazy to add in the extra bit) was:

This will work as long as there is some midpoint such that A and B are just as likely to be higher or lower than this midpoint (and never actually the midpoint). You then choose A if it is greater than the midpoint, if A is less than the midpoint you switch. In this example the midpoint is 0.

[ QUOTE ]


I said in my reply to TomCollins, he should have been arguing with your assumptions. So I'll take on the burden and argue against them myself.



[/ QUOTE ]

Thanks for that, good luck.

Btw do you agree with what I said in a post way back?

[ QUOTE ]


Okay, so now if A, B and C are randomly chosen from any interval where there is an infinite amount of numbers, for instance (0,1), even if numbers closer to 1 are more likely to be picked than numbers closer to 0, then using the method prescribed in the original post, P_between = 1/3. This will pick you the right number 2/3 of the time.



[/ QUOTE ]

(Assume A,B,C are independent and have the same distributions)

TomCollins
11-09-2004, 01:42 PM
Sorry for being a stickler, Jason. Yes, my proof was flawed, based on my incorrect reading of his assumption that each case was equally likely. I forgot I changed the cases.

My problem is when people use fancy math speak and really have no clue. You obviously know what you are talking about. When he insulted you by saying you didn't understand the basics, I took him to be a fool who knows lots of math terms but has no clue how to use them. So perhaps he isn't as foolish as I thought.

The distribution is the key, and there are too many assumptions about it, many of which lead to contradictions.

jason1990
11-09-2004, 03:00 PM
[ QUOTE ]
This will work as long as there is some midpoint such that A and B are just as likely to be higher or lower than this midpoint (and never actually the midpoint). You then choose A if it is greater than the midpoint, if A is less than the midpoint you switch. In this example the midpoint is 0.

[/ QUOTE ]
There is always such a midpoint (when the distribution is continuous). It is called the median of the distribution. And you're right, this will work if you assume the median is 0. But if you don't assume the median is 0, then you don't know what the median is, so you can't actually apply this strategy. However, there is a strategy you can apply, without knowing the median, which gives you a probability of winning which is greater than 50%. This strategy is more interesting, in my opinion, since it makes fewer assumptions about the random variables being discussed.

[ QUOTE ]
Btw do you agree with what I said in a post way back?


Quote:
--------------------------------------------------------------------------------
Okay, so now if A, B and C are randomly chosen from any interval where there is an infinite amount of numbers, for instance (0,1), even if numbers closer to 1 are more likely to be picked than numbers closer to 0, then using the method prescribed in the original post, P_between = 1/3. This will pick you the right number 2/3 of the time.
--------------------------------------------------------------------------------
(Assume A,B,C are independent and have the same distributions)

[/ QUOTE ]
Yes, this is true. However, C must be chosen by the player. He will have no problem choosing it independently of A and B. But it will be quite difficult for him to choose it with the same distribution, if he doesn't know what that distribution is.
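[Editor's note: in the idealized case where C somehow could be drawn from the same distribution, the 2/3 figure is easy to confirm by simulation. A sketch, assuming A, B, C are all uniform on (0,1); not code from the thread.]

```python
import random

# Assumed setup: A, B, C i.i.d. uniform on (0,1).
# Strategy from the original post: keep A if A > C, else take B.
random.seed(4)
trials = 300_000
wins = 0
for _ in range(trials):
    a, b, c = random.random(), random.random(), random.random()
    pick = a if a > c else b
    wins += pick == max(a, b)
print(wins / trials)  # close to 2/3
```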

jimdmcevoy
11-09-2004, 03:28 PM
[ QUOTE ]


There is always such a midpoint (when the distribution is continuous). It is called the median of the distribution.



[/ QUOTE ]

Oh yeah, median /images/graemlins/blush.gif (Did I mention I've never taken a stats course?)

[ QUOTE ]


However, there is a strategy you can apply, without knowing the median, which gives you a probability of winning which is greater than 50%



[/ QUOTE ]

I am interested, please tell!

jason1990
11-09-2004, 03:42 PM
It was already described by TomCollins. Take a continuous, strictly increasing function, f, which approaches 0 at -infty and 1 at +infty. Look at the number A, keep it with probability f(A), switch with probability 1-f(A). The probability that you win is a function of f and F (the unknown cumulative distribution of A and B - which is continuous by the assumption that A and B are never equal). Call it p(f,F).

It can be proven that for any fixed f with the above properties, p(f,F)>0.5 for all continuous cumulative distribution functions F.

(However, it should be noted that there may exist a sequence of continuous CDF's F_n - which might depend on f - such that

lim_{n->infty} p(f,F_n) = 0.5.

In other words, you are guaranteed to have a probability of winning which is greater than 0.5, but you are not necessarily guaranteed that it exceeds 0.5 by any fixed amount. Proving that, for every f, such a sequence of F_n's exists would be a nice challenge for those who feel inspired.)
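[Editor's note: a sketch of this randomized strategy, not code from the thread. The logistic function is one admissible choice of f, and uniform on [1,3] stands in for a distribution whose median the player does not know; both are illustrative assumptions.]

```python
import math
import random

def f(x):
    # Logistic function: continuous, strictly increasing,
    # tends to 0 at -infinity and 1 at +infinity.
    return 1.0 / (1.0 + math.exp(-x))

def win_rate(draw, trials=400_000):
    wins = 0
    for _ in range(trials):
        a, b = draw(), draw()
        # Keep A with probability f(A), otherwise switch to B.
        pick = a if random.random() < f(a) else b
        wins += pick == max(a, b)
    return wins / trials

random.seed(5)
print(win_rate(lambda: random.uniform(1, 3)))  # strictly above 0.5
```

With this particular distribution the edge is small (a few percent), which matches the remark above: the win probability always exceeds 0.5, but not necessarily by any fixed amount.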

TomCollins
11-09-2004, 03:50 PM
I believe the strategy he is talking about is the one I outlined earlier.

jimdmcevoy
11-09-2004, 03:58 PM
Oops, sorry Tom, I read that before but forgot about it.

AONN_Records
11-29-2004, 12:20 AM
Ah! The funky paradox. The funky paradox and I go way back. I once knew this guy who was a real math genius--a natural, true prodigy right outta the movies. One summer I studied a little math here and there under his direction--more like picked up on a few good pointers, as math has never been my strong point. Back to the prodigy... He had a small chateau trimmed (by way of apartment managers) with light-purple and other multi-colored bavarian decorations, where he'd play poker and other parlor games while drinking cocktails with invited guests. This was my first experience with organized underground gambling...I mean "card playing." We called this haunt of his, "The Alps." This is where I was first schooled on Zeno's Paradox, which is sometimes used as an ancient ritual of fraternal initiation into higher learning and thus higher society. I didn't exactly understand the precise significance at the time, but I understand now. You never really ever get to 1. Mathematical phenomena of this type are fascinating and I wish that I could shake the hand of the mathematician who taught me something useful.

MortalWombatDotCom
12-04-2004, 05:23 AM
The thing that blows my mind is that the probability of me making the last post on this thread is zero (this follows directly from the proof jason1990 gave in post #1228065, which I'll leave as an exercise for the reader).

This is why i prefer discrete mathematics.

jason_t
12-05-2004, 02:05 AM
There is no paradox, because there is no uniform probability distribution on the real numbers.

TomCollins
12-05-2004, 02:44 PM
Replace uniform with the word "unknown", and it is still true.