#25
[ QUOTE ]
Suppose m is a measure on [0,1] such that m([0,1]) = 1 and m({x}) = m({y}) for any x, y in [0,1]. That's what we mean by a uniform distribution. [/ QUOTE ] No it's not. That would allow too many measures. For example, any probability measure on [0,1] which is absolutely continuous with respect to Lebesgue measure has this property. A uniform distribution is one which is invariant under translations, rotations, and reflections. [ QUOTE ] If m({x}) > 0 for some x in [0,1] choose n so large that n * m({x}) > 1. Then choose n distinct x_1, ... x_n in [0,1]. We'd have 1 = m([0,1]) >= m({x_1, ..., x_n}) = m({x_1}) + ... + m({x_n}) = n * m({x_1}) > 1 so that 1 > 1. This is a contradiction. Hence m({x}) = 0 for any x in [0,1]. Oops. [/ QUOTE ] What does "oops" mean here? Do you think you have arrived at a contradiction to the existence of m? "Uniform distribution on [0,1]" is standard terminology for Lebesgue measure. And if m is Lebesgue measure, then m([0,1])=1 and m({x})=0 for all x. The statement of the problem makes perfect sense and the existence of the uniform distribution on [a,b] is proven in any first-year graduate real analysis course, and some undergraduate ones. Are you, by chance, an algebraist? |