A Problem with Calculating Variance
The variance calculation (Malmuth, "Computing Your Standard Deviation"; 2+2 Publishing, originally published in "Gambling Theory and Other Topics") implies that if you have a series of winning sessions with few corresponding losing sessions, your variance will increase. This is because standard deviation is unconcerned with the DIRECTION of the variance - i.e., a large gain is treated the same as a large loss.

I have a problem with this as an operational tool to help you decide how large your bankroll should be, and with the implication that lower variance is better than higher variance. To illustrate, suppose you never have a losing session. As your winnings increase, this formula, as I understand it, says you should keep on increasing your bankroll!

I suspect the problem with this concept is that it assumes a normal, or at least a symmetrical, distribution about the mean return. In fact, an expert player's session returns will be skewed to the right, while a poor player's distribution will be skewed to the left. (This is not unlike the analysis done on investment returns, where high variance that is positively skewed is not used to penalize the risk-adjusted returns of that set of results. In that world, distributions such as the three-parameter lognormal are used, in conjunction with alternative risk measures, such as semi-deviation, to eliminate the bias against high-return, high-variance time series.)

Comments?
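To make the point concrete, here is a small sketch (the session figures are made up for illustration) comparing standard deviation with semi-deviation, where semi-deviation is taken as the root-mean-square of only the shortfalls below a break-even target. Standard deviation is identical for a player whose results are all wins and a player whose results are the mirror-image losses, while semi-deviation separates the two:

```python
import statistics

def semi_deviation(returns, target=0.0):
    """Downside risk measure: root-mean-square of shortfalls
    below the target (here break-even), dividing by n.
    Gains above the target contribute nothing."""
    shortfalls = [min(r - target, 0.0) ** 2 for r in returns]
    return (sum(shortfalls) / len(returns)) ** 0.5

# Hypothetical session results (in bets): same spread of outcomes,
# opposite skew - one player never loses, the other never wins.
winning_player = [5, 8, 2, 12, 1, 20, 3]
losing_player = [-5, -8, -2, -12, -1, -20, -3]

for name, sessions in [("winning", winning_player),
                       ("losing", losing_player)]:
    sd = statistics.pstdev(sessions)       # direction-blind
    semi = semi_deviation(sessions)        # penalizes losses only
    print(f"{name}: std dev = {sd:.2f}, semi-deviation = {semi:.2f}")
```

The direction-blind standard deviation prints the same number for both players, while the winning player's semi-deviation is zero, which is exactly the distinction the bankroll argument above is reaching for.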