In statistics and probability theory, the standard deviation (represented by the Greek letter sigma, σ) shows how much variation or dispersion from the average (mean, also called expected value) exists. A low standard deviation indicates that the data points tend to be very close to the mean; a high standard deviation indicates that the data points are spread out over a large range of values.
The standard deviation of a random variable, statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler though in practice less robust than the average absolute deviation. A useful property of the standard deviation is that, unlike the variance, it is expressed in the same units as the data. Note, however, that for measurements with percentage as the unit, the standard deviation will have percentage points as the unit.
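The relationship described above can be written out as a formula. Here σ is the standard deviation, Var(X) the variance, and μ = E[X] the mean (expected value) of the random variable X:

\[
\sigma = \sqrt{\operatorname{Var}(X)} = \sqrt{\operatorname{E}\!\left[(X-\mu)^2\right]}
\]

Because the squared differences carry squared units, taking the square root returns the result to the same units as the data.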
Put simply, the standard deviation gives a "standard" way of knowing what is normal, and how far from normal a given value is (its deviation).
To find the standard deviation of a set of numbers, work step by step:
1. Calculate the mean (the simple average of the numbers).
2. Calculate the variance: for each number, subtract the mean and square the result (the squared difference), then take the average of those squared differences.
3. Take the square root of the variance. For example, if the variance is 2.8, the standard deviation is √2.8 ≈ 1.67 (rounded to the nearest hundredth).
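The steps above can be sketched in a few lines of Python. The example values in `data` are hypothetical, chosen only to illustrate the calculation; they are not from the article:

```python
import math

def std_dev(values):
    """Population standard deviation, following the three steps above."""
    # Step 1: calculate the mean (the simple average of the numbers).
    mean = sum(values) / len(values)
    # Step 2: calculate the variance (the average of the squared
    # differences from the mean).
    variance = sum((x - mean) ** 2 for x in values) / len(values)
    # Step 3: the standard deviation is the square root of the variance.
    return math.sqrt(variance)

# Hypothetical example values (mean 5, variance 4):
data = [2, 4, 4, 4, 5, 5, 7, 9]
print(round(std_dev(data), 2))        # → 2.0
print(round(math.sqrt(2.8), 2))       # → 1.67, as in the example above
```

Note this is the *population* standard deviation (divide by the number of values); when the data are a sample from a larger population, the sum of squared differences is usually divided by n − 1 instead.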