\"i got an idea...how bout n-1!\" (nerds rejoice)
Why is the variance of a sample defined as the sum of the squared deviations of the measurements about their mean divided by (n-1), where n is the number of elements in the sample? My textbook states: "Dividing the sum of squares of deviations by n produces estimates that tend to underestimate sigma^2 (for a population). Division by (n-1) eliminates this difficulty."
Is this the only reason...just because it works better in practice? Am I missing something?
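You can actually see the underestimation the textbook mentions by simulation. Here's a quick sketch (names and parameters are my own, not from the textbook): draw lots of small samples from a population whose variance is known to be 1, and average the two estimators. Dividing by n comes out low by a factor of (n-1)/n, while dividing by (n-1) lands on the true value.

```python
# Monte Carlo check of Bessel's correction (illustrative sketch).
# Population: standard normal, so the true variance sigma^2 = 1.
import random

random.seed(0)
n = 5            # sample size
trials = 20000   # number of repeated samples

sum_div_n = 0.0          # running total for the divide-by-n estimator
sum_div_n_minus_1 = 0.0  # running total for the divide-by-(n-1) estimator
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(sample) / n
    ss = sum((x - mean) ** 2 for x in sample)  # sum of squared deviations
    sum_div_n += ss / n
    sum_div_n_minus_1 += ss / (n - 1)

avg_biased = sum_div_n / trials
avg_unbiased = sum_div_n_minus_1 / trials
print(avg_biased, avg_unbiased)
# Dividing by n averages near (n-1)/n * sigma^2 = 0.8;
# dividing by (n-1) averages near sigma^2 = 1.0.
```

The intuition: the deviations are measured from the *sample* mean, which is always closer to the data than the true population mean is, so the sum of squares comes out systematically too small. On average it equals (n-1) sigma^2, not n sigma^2, which is exactly why dividing by (n-1) fixes the bias.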
ERBY