#1
Re: A hypothesis worth testing
Once again this comes down to a muck problem: you have to watch a very significant number of hands to catch two all-ins with those types of cards. I also don't have PokerTracker; otherwise I'd just observe 4 SnGs and let it run all day, replacing each one as soon as it finished. But considering how rare those situations are, and the fact that I need a much larger sample size than 300, I might ask for PokerTracker data that others have collected. It's an idea though, so I appreciate the input. Also, a deviation of .3 in the odds over 300 hands is nothing to worry about; the same deviation over 300k hands is a different story.
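To put numbers on that last point, here is a minimal sketch (Python) of how the standard error of an observed win rate shrinks with sample size, and roughly how many hands a given true deviation needs before it clears two standard errors. The figures are purely illustrative (the ".3" above is interpreted loosely as a 0.03 shift in win probability around an expected rate of 0.5), not taken from any real data.

```python
import math

def standard_error(p, n):
    """Standard error of an observed win-rate estimate over n hands."""
    return math.sqrt(p * (1 - p) / n)

def hands_needed(p, delta, z=2.0):
    """Smallest n at which a true deviation of `delta` from the
    expected win rate p exceeds z standard errors."""
    # delta > z * sqrt(p*(1-p)/n)  =>  n > (z/delta)**2 * p*(1-p)
    return math.ceil((z / delta) ** 2 * p * (1 - p))

p = 0.5  # illustrative expected win rate
print(standard_error(p, 300))      # ~0.029 -- a 0.03 shift is invisible at 300 hands
print(standard_error(p, 300_000))  # ~0.0009 -- the same shift is ~33 SEs at 300k hands
print(hands_needed(p, 0.03))       # 1112 -- hands needed just to reach the 2-SE mark
```

The point is the square-root scaling: cutting the standard error by a factor of 10 costs a factor of 100 in hands.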
Unfortunately my prof. knows very little about cards, so his input has been of limited use, but he offered an idea similar to this earlier. He proposed I look at the win rates of certain hands and see how they compare to how often they should be winning. As stated, there are far too many variables in that idea, chiefly the number of players in the hand with you, but heads-up play would solve quite a few problems.

Anyway, I'm still brainstorming ideas and looking at the data I've already collected, to see if I can think of any interesting questions. I'm also on a slight time constraint, which means I can't run as exhaustive a search on my own data as I'd like. But if I get some decent results, I might continue the experiment out to a few hundred thousand hands just for the benefit of the community.

Don't worry though, I've got a backup plan for my project if I can't secure a poker hypothesis: looking at exactly how "random" the digits of pi are, since pi is conjectured (though not proven) to be a normal number.

Matt
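That backup plan is easy to prototype. Below is a minimal sketch (Python) of a chi-square uniformity test on the decimal digits of pi; the first 100 digits are hardcoded for self-containment, and 16.92 is the standard 5% critical value for 9 degrees of freedom. A serious run would use millions of digits from a high-precision source.

```python
from collections import Counter

# First 100 decimal digits of pi (after the "3."), hardcoded for self-containment.
PI_DIGITS = ("14159265358979323846264338327950288419716939937510"
             "58209749445923078164062862089986280348253421170679")

def chi_square_uniform(digits):
    """Chi-square statistic for digit frequencies against a uniform distribution."""
    n = len(digits)
    expected = n / 10  # each digit 0-9 should appear n/10 times if uniform
    counts = Counter(digits)
    return sum((counts.get(str(d), 0) - expected) ** 2 / expected
               for d in range(10))

stat = chi_square_uniform(PI_DIGITS)
# Critical value for 9 degrees of freedom at the 5% level is about 16.92;
# a statistic below that is consistent with uniform-looking digits.
print(f"chi-square = {stat:.2f} (5% critical value: 16.92)")  # chi-square = 4.20
```

With only 100 digits this can't detect subtle bias, which is exactly the sample-size point being argued in this thread.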
#2
Re: A hypothesis worth testing
If you observe 3 tourneys at a time, you will see this event about every 5th hand in the late stages. That gets you roughly one event every minute. Collecting data is time-consuming, but this is the one experiment that might actually lead somewhere.
As for sample size, try this: you go to a new poker site and the first hand you get AA. It gets cracked. Next hand, AA again, cracked again. Repeat. Repeat. How many times would it take before you believe there's some problem? I'm gone after 4, maybe 3. Sample size must be large to detect tiny differences from the expected mean; it need not be huge to hint at less subtle problems. As soon as any event is running 2 SDs out (regardless of sample size), you can raise an eyebrow, collect more data, and recheck your work; if your work checks out and the deviation persists, then something really isn't behaving as expected.
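The "2 SDs out" eyebrow test is a one-liner against the binomial expectation. A minimal sketch (Python); the counts in the example are made up for illustration, not observed data.

```python
import math

def z_score(observed, n, p):
    """How many standard deviations an observed count of a p-probability
    event sits from its binomial expectation over n trials."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    return (observed - mean) / sd

# Illustration: an event expected 20% of the time, seen 75 times in 300 trials.
z = z_score(75, 300, 0.20)
print(f"z = {z:.2f}")  # ~2.17 -- past the 2-SD threshold: collect more and recheck
```

Note that at small n a 2-SD excursion happens by chance about 5% of the time, which is why the advice is "raise an eyebrow and collect more data," not "conclude the site is rigged."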
#3
Re: A hypothesis worth testing
[ QUOTE ]
As for sample size try this. You go to a new poker site and the first hand you get AA. It gets cracked. Next hand AA, again cracked. Repeat. Repeat. How many times will it take before you believe there's some problem here? I'm gone after 4, maybe 3.
[/ QUOTE ]

How much is enough? How many times in a row would you have to lose AA heads-up before you suspect something fishy?
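For a rough answer: all-in before the flop, AA against a random hand wins roughly 85% of the time (an approximate equity, assumed here for illustration), so losing streaks get implausible fast. A minimal sketch (Python), assuming independent hands:

```python
# Assumed pre-flop all-in loss rate of AA versus a random hand (~15%).
P_AA_LOSES = 0.15

def p_losing_streak(k, p_lose=P_AA_LOSES):
    """Probability of losing k consecutive independent all-ins."""
    return p_lose ** k

for k in range(1, 6):
    print(f"{k} losses in a row: {p_losing_streak(k):.5f}")
# 2 in a row is already below 5% (0.02250); 3 in a row is below 1% (0.00338) --
# consistent with the "gone after 4, maybe 3" gut feel above.
```

The caveat is selective attention: if you only start counting *after* a bad run, any streak looks significant, which is why a pre-registered sample matters.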