08-16-2005, 06:35 PM
erby
Junior Member
 
Join Date: Nov 2004
Location: PA
Posts: 20
Default \"i got an idea...how bout n-1!\" (nerds rejoice)

Why is the variance of a sample defined as the sum of the squared deviations of the measurements about their mean, divided by (n-1), where n is the number of elements in the sample? My textbook states: "Dividing the sum of squares of deviations by n produces estimates that tend to underestimate sigma^2 (for a population). Division by (n-1) eliminates this difficulty."

Is this the only reason...just cuz it works better? Am I missing something?
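
To see what the textbook is claiming, here's a quick simulation sketch (Python with numpy; the normal distribution, sample size, and trial count are just illustrative choices, not anything from the book). It draws many small samples from a population whose variance is known, then averages both estimators:

import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0     # true population variance (std dev = 2)
n = 5            # small sample size, where the bias is most visible
trials = 200_000

# Each row is one sample of n draws from the population.
samples = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(trials, n))

# Deviations are taken about each sample's own mean, as in the definition.
deviations = samples - samples.mean(axis=1, keepdims=True)
ss = (deviations ** 2).sum(axis=1)  # sum of squared deviations per sample

print("divide by n:   ", (ss / n).mean())        # ~ (n-1)/n * sigma2 = 3.2
print("divide by n-1: ", (ss / (n - 1)).mean())  # ~ sigma2 = 4.0

The divide-by-n version comes out low by a factor of (n-1)/n, which matches the known identity E[sum of squared deviations about the sample mean] = (n-1)*sigma^2: the sample mean is the point that minimizes the sum of squared deviations, so measuring from it instead of the true mean makes the sum systematically a bit too small, and dividing by (n-1) cancels that exactly.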

ERBY