Two Plus Two Older Archives  

  #1  
Old 06-25-2005, 04:37 PM
BruceZ
Senior Member
 
Join Date: Sep 2002
Posts: 1,636
Using probability to solve other types of problems

Sometimes insight from probability lets us solve problems or prove theorems in branches of mathematics outside of probability. Sklansky gave an example of this with a problem that asked, "If 1 billion Chinese couples continued to have children until they had a boy, and then stopped, how many children would be born on average, assuming boys and girls are equally likely?" You could solve this problem if you knew how to sum the series 1*1/2 + 2*1/4 + 3*1/8 + .... But even if you didn't know how to sum this series, you could simply notice that each family will have exactly one boy, and since half of the children will be boys, there must be a billion boys and a billion girls, for a total of 2 billion children born, or an average of 2 per family. Therefore we can conclude that the series equals 2, which is correct.
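A quick sanity check of that counting argument (a Python sketch; the function name is mine, and the number of couples is scaled down from a billion for speed):

import random

def family_size(p_boy=0.5):
    # children born to one couple that stops at the first boy
    children = 0
    while True:
        children += 1
        if random.random() < p_boy:   # this child is a boy, so the family stops
            return children

families = 100_000                    # scaled down from a billion for speed
total = sum(family_size() for _ in range(families))
print(total / families)               # close to 2, the value of 1*1/2 + 2*1/4 + 3*1/8 + ...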

Here is one I just discovered. I claim that for all 0 < x < 1, and all integers n > 1,

(1 - x/n)^n > 1 - x

This could probably be proved for all values of n by induction, but then it would have to be proved for all x, perhaps by considering derivatives and logarithms. Here is a simple probabilistic argument:

Proof: Suppose we have a set of n possible events, each with an equal probability of occurring on a given trial. Take x to be the probability that at least one of the events fails to occur in some number of independent trials N > n. For example, the n events could be the 38 numbers on a roulette wheel, and x would be the probability that at least one of them does not appear in N > 38 spins. Then the probability that any particular event fails to occur in N trials must be >= x/n, with equality when the failure events are mutually exclusive. This is because summing the n individual failure probabilities always overestimates (or equals) the probability that some event fails to occur, since it over-counts the cases where more than one event fails, unless those failures are mutually exclusive. Since the probability that some event fails to occur in N trials is by definition x, and the n individual failure probabilities are equal by symmetry, each of them must be >= x/n.

Now consider the probability that all of the n events occur in N trials, which is 1 - x. The probability that a particular event occurs in N trials must be > (1 - x)^(1/n). This is because raising the single-event probability to the nth power would give the probability that all of the events occur only if the events occurred independently of one another in the N trials. That is not the case here: the occurrence of one event leaves fewer chances for another event to occur, so raising the single-event probability to the nth power overestimates the probability that all events occur. Since the probability that all events occur is exactly 1 - x, the probability that a particular event occurs must be > (1 - x)^(1/n), and so the probability that a particular event does not occur in N trials must be < 1 - (1 - x)^(1/n). Since we already proved that this probability is >= x/n, we get x/n < 1 - (1 - x)^(1/n), which rearranges to the claimed inequality (spelled out below). QED.
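Written out, that final rearrangement is just (a worked restatement, nothing new):

\[
\frac{x}{n} < 1 - (1-x)^{1/n}
\;\Longrightarrow\;
(1-x)^{1/n} < 1 - \frac{x}{n}
\;\Longrightarrow\;
1 - x < \Bigl(1 - \frac{x}{n}\Bigr)^{n},
\]

where the last step raises both (positive) sides to the nth power.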

The interesting thing is that this inequality doesn't depend on probability theory; we just have some functions of numbers. Yet we are able to say something about how these functions behave because of our knowledge of independence and mutually exclusive events from probability theory.

I have verified this for n = 2 up to 1,000,000,000 and x = 1/1,000,000,000, at which point Excel can no longer see that the theorem holds; the Windows calculator, with its 32 digits of accuracy, still maintains the small difference between the two sides, at that point on the order of 10^-13 (smallest for n = 2). It also verifies the case x = 1/1,000,000,000,000, for which the difference is on the order of 10^-25 at n = 2. This inequality is a good way to check the precision of your calculator.
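For anyone without a 32-digit calculator handy, here is a quick check using Python's decimal module (a sketch; any arbitrary-precision tool would do), for the x = 1/1,000,000,000,000 case:

from decimal import Decimal, getcontext

getcontext().prec = 50          # plenty of digits to see the tiny gap

x = Decimal("1e-12")
for n in (2, 1000, 10**9):
    lhs = (1 - x / n) ** n      # (1 - x/n)^n with an exact integer exponent
    rhs = 1 - x
    print(n, lhs - rhs)         # positive every time, about 2.5e-25 at n = 2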

This was used in the solution to this problem.
  #2  
Old 06-25-2005, 07:29 PM
pzhon
Member
 
Join Date: Mar 2004
Posts: 66
Re: Using probability to solve other types of problems

That is a nice inequality.

Here are a few applications of probability to other parts of mathematics:

♦ The Stone-Weierstrass Approximation Theorem
Every continuous function f(x) on the interval [0,1] can be uniformly approximated by polynomials f_1(x), f_2(x), f_3(x), etc. That is, for any small number z>0, you can find some n so that for all x, -z < f(x)-f_n(x) < z. This is an important fundamental result in analysis.

You can prove this by constructing the Bernstein polynomials:
f_n(x) = Expected value of f(X(x,n)/n),
where X(x,n) is a sum of n independent 0-1 random variables which are 1 with probability x.
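A small sketch of that construction (the function names are mine): since X(x,n) is Binomial(n, x), the expected value above is an explicit polynomial in x, and numerically it hugs f:

from math import comb

def bernstein(f, n, x):
    # degree-n Bernstein polynomial of f at x: E[f(X/n)] with X ~ Binomial(n, x)
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1))

f = lambda t: abs(t - 0.5)                 # continuous, but not smooth at 0.5
for x in (0.1, 0.3, 0.5, 0.9):
    print(x, f(x), bernstein(f, 200, x))   # the approximation improves as n grows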

♦ The probabilistic method in combinatorics.
This is a nonconstructive technique for showing that combinatorial objects with certain properties exist. The idea is to construct probability measures on collections of objects, then use simple results or calculations in probability to show that the probability of encountering a structure with the desired property is greater than 0. This technique is due to Paul Erdős.

Here is a probabilistic proof that there are infinitely many primes: For any finite list of primes p1...pn, take an interval of integers (starting past 1) whose length is a multiple of p1*p2*...*pn, and find the probability that a random integer from this interval is not divisible by any of these primes: it is exactly (1-1/p1)(1-1/p2)...(1-1/pn). This is greater than 0, so there is some integer greater than 1 in that interval not divisible by any of p1...pn; its prime factors are therefore not on the list, so no finite list contains all the primes.
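A concrete illustration (a sketch; the primes and the interval are my choice), with the primes 2, 3, 5, 7 and an interval whose length is their product, so the probability is exact:

from math import prod

primes = [2, 3, 5, 7]
length = prod(primes)                               # 210
interval = range(2, 2 + length)                     # any interval of this length works
survivors = [m for m in interval if all(m % p for p in primes)]
print(len(survivors))                               # 48
print(length * prod(1 - 1 / p for p in primes))     # also 48.0
print(survivors[:5])                                # 11, 13, 17, 19, 23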

Packing unit spheres densely in higher dimensions is important for coding theory (ideal transmission of information, not cryptography). In 2 dimensions, we know the best packing for unit circles: every circle has 6 neighbors. In 3 dimensions, there is more flexibility, but it has been shown recently (Hales) that the most natural "cannonball" packing has the highest density. In dimensions up to 24, we know some fairly good packings of spheres. In many more dimensions, say 10,000, we don't have any reasonable candidates for the densest packing of spheres. The unit 10,000-dimensional cube has room for a lot of unit-diameter spheres (with wrap-around), yet explicit constructions are poor. In fact, we can prove that denser packings exist by throwing in a lot of sphere centers at random, then estimating the probability that at least two of them are within distance 1 of each other. If that probability is less than 1, there is some way of packing that many spheres without overlap.
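Here is a back-of-the-envelope version of that estimate as a sketch (the helper name and the specific numbers are mine). It uses two crude facts: two independent uniform centers in the unit torus are within distance 1 with probability at most the volume of a radius-1 ball, and the union bound over pairs. If we drop m random centers and the expected number of pairs closer than 1 is below 1, some configuration has no such pair.

from math import pi, lgamma, log

def log_ball_volume(d, r=1.0):
    # log of the volume of a Euclidean ball of radius r in d dimensions
    return (d / 2) * log(pi) - lgamma(d / 2 + 1) + d * log(r)

d = 10_000
log_v = log_ball_volume(d)       # two random centers in the unit torus are within
                                 # distance 1 with probability at most exp(log_v)
log_m = 0.5 * (log(2) - log_v)   # C(m,2) * exp(log_v) < 1 when m < sqrt(2 / exp(log_v))
print(f"log of the number of non-overlapping unit-diameter spheres: about {log_m:.0f}")
# i.e., with positive probability none of roughly e^15900 random centers
# are within distance 1 of each other, so that many spheres can be packed.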

♦ Random Matrices.
Much of quantum mechanics in practice involves linear algebra. If you study a hydrogen atom, this is not too hard. If you study a uranium atom, it is extremely complicated. Even for a simple model of the lower energy states, you may have to understand the eigenvalues of a matrix with hundreds of dimensions, whose entries you cannot easily estimate with much accuracy. People decided to analyze the properties of random matrices, to tell what can be expected of a typical matrix, a typical symmetric matrix, or a typical Hermitian matrix. Then you hope the specific matrix corresponding to uranium isn't particularly unusual.
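A minimal sketch of that approach, assuming numpy is available (this is not the uranium computation, just a generic random symmetric matrix whose spectrum one can study):

import numpy as np

rng = np.random.default_rng(0)
n = 500
a = rng.standard_normal((n, n))
h = (a + a.T) / np.sqrt(2 * n)    # random symmetric matrix, scaled so the spectrum stays bounded
eigs = np.linalg.eigvalsh(h)      # eigenvalues of a real symmetric matrix
print(eigs.min(), eigs.max())     # for this ensemble the bulk of the spectrum fills roughly [-2, 2]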

Actually, I haven't seen it published that many spectral properties of random matrices are really just properties of random polynomials. After all, the characteristic polynomial of a random matrix is a random polynomial.
  #3  
Old 06-25-2005, 10:27 PM
Siegmund
Senior Member
 
Join Date: Feb 2005
Posts: 415
Re: Using probability to solve other types of problems

[ QUOTE ]

Here is one I just discovered. I claim that for all 0 < x < 1, and all integers n > 1,

(1 - x/n)^n > 1 - x

This could probably be proved for all values of n by induction, but then it would have to be proved for all x, perhaps by considering derivatives and logarithms.

[/ QUOTE ]

The usual algebraic proof is a very short one: Multiply out the left side.

1 - n * x/n + nC2 * x^2/n^2 - nC3 * x^3/n^3 ...

The first two terms simplify to 1-x. The third term is positive since n > 1. For all subsequent terms, x^k is less than x^(k-1), and nCk/n^k is less than half of nC(k-1)/n^(k-1), so each term is less than half the magnitude of the one before it. The terms from the third onward therefore alternate in sign with shrinking magnitudes, so their total is positive and at most the third term, which is less than x^2/2. Hence the whole sum lies between 1-x and 1-x+x^2/2.

Of course this too has a probabilistic interpretation: each term is one more level of inclusion-exclusion; for sufficiently rare events, successive levels of inclusion-exclusion contribute less and less to the final answer; and at most n levels are needed to obtain an exact probability.
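A quick numerical spot-check of that sandwich (a sketch only, not a proof):

# spot-check: 1 - x < (1 - x/n)^n < 1 - x + x^2/2 for a few n and x
for n in (2, 3, 10, 1000):
    for x in (0.01, 0.3, 0.9):
        middle = (1 - x / n) ** n
        assert 1 - x < middle < 1 - x + x**2 / 2, (n, x)
print("bounds hold on these samples")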

My favorite probability-inspired tricks are those that greatly simplify the evaluation of certain classes of integrals, by recognizing that the integrand is closely related to a PDF, i.e. a function that must integrate to 1.
(I always did think integrals were more fun than sums!)
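One standard instance of the trick (my choice of example): to evaluate the integral of x^2 e^(-x^2/2) over the real line, recognize the standard normal density inside:

\[
\int_{-\infty}^{\infty} x^2 e^{-x^2/2}\,dx
= \sqrt{2\pi}\int_{-\infty}^{\infty} x^2 \,\frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\,dx
= \sqrt{2\pi}\,\mathrm{E}[X^2]
= \sqrt{2\pi},
\]

since the remaining integral is just the second moment (the variance, which is 1) of a standard normal X; no antiderivative is needed.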
  #4  
Old 06-25-2005, 11:52 PM
BruceZ
Senior Member
 
Join Date: Sep 2002
Posts: 1,636
Re: Using probability to solve other types of problems

[ QUOTE ]
[ QUOTE ]

Here is one I just discovered. I claim that for all 0 < x < 1, and all integers n > 1,

(1 - x/n)^n > 1 - x

This could probably be proved for all values of n by induction, but then it would have to be proved for all x, perhaps by considering derivatives and logarithms.

[/ QUOTE ]

The usual algebraic proof is a very short one: Multiply out the left side.

1 - n * x/n + nC2 * x^2/n^2 - nC3 * x^3/n^3 ...

The first two terms simplify to 1-x. The third term is positive since n > 1. For all subsequent terms, x^k is less than x^(k-1), and nCk/n^k is less than half of nC(k-1)/n^(k-1), so each term is less than half the magnitude of the one before it. The terms from the third onward therefore alternate in sign with shrinking magnitudes, so their total is positive and at most the third term, which is less than x^2/2. Hence the whole sum lies between 1-x and 1-x+x^2/2.

[/ QUOTE ]

I was being overly dramatic for effect. :-) Here's an even shorter proof. The derivative of the left side is

-(1 - x/n)^(n-1), which is greater than -1 for 0 < x < 1, since 0 < 1 - x/n < 1. The slope of 1 - x is exactly -1, and the two sides are equal at x = 0, so the left side must be greater than 1 - x throughout 0 < x < 1.

The probability proof is only long with the detailed explanation. It's really almost a one-liner. The important thing is that we can "discover" this algebraic truth from only probability considerations.
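In display form (just restating the calculus argument above): let

\[
f(x) = \Bigl(1 - \frac{x}{n}\Bigr)^{n} - (1 - x), \qquad f(0) = 0,
\qquad
f'(x) = 1 - \Bigl(1 - \frac{x}{n}\Bigr)^{n-1} > 0 \ \text{ for } 0 < x < 1,
\]

since 0 < 1 - x/n < 1; so f is increasing and f(x) > 0 on (0, 1), which is the claimed inequality.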