Statistical regularity is the notion in statistics and probability theory that random events exhibit regularity when repeated often enough, or that sufficiently many similar random events exhibit regularity. It is an umbrella term covering the law of large numbers, central limit theorems, and ergodic theorems.
If one throws a die once, it is difficult to predict the outcome, but if one repeats the experiment many times, the relative frequency of each result (the number of times it occurs divided by the number of throws) will eventually stabilize around a specific value.
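As an illustration, the following minimal Python sketch (not drawn from the article's sources; the seed and the number of throws are arbitrary choices) simulates repeated throws of a fair die and prints the relative frequency of each face, which settles near the theoretical value 1/6.

```python
import random

random.seed(0)  # fixed seed, chosen only so the output is reproducible

n_throws = 100_000
counts = {face: 0 for face in range(1, 7)}

for _ in range(n_throws):
    counts[random.randint(1, 6)] += 1

# Each relative frequency settles near the theoretical value 1/6 ≈ 0.167.
for face in range(1, 7):
    print(face, counts[face] / n_throws)
```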
Repeating a series of trials produces similar, but not identical, results for each series: the average, the standard deviation, and other distributional characteristics will be roughly the same from one series to the next.
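A short sketch of this point, again illustrative only (the number of series, series length, and seed are arbitrary assumptions): several independent series of die throws yield summary statistics close to one another and to the theoretical values (mean 3.5, standard deviation about 1.71 for a fair die).

```python
import random
import statistics

random.seed(1)  # illustrative seed for reproducibility

def run_series(n_trials):
    """One series of n_trials throws of a fair die."""
    return [random.randint(1, 6) for _ in range(n_trials)]

# Independent series give similar, but not identical, summary statistics.
for i in range(5):
    outcomes = run_series(10_000)
    print(f"series {i}: mean={statistics.mean(outcomes):.3f} "
          f"stdev={statistics.pstdev(outcomes):.3f}")
```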
The notion is used in games of chance, demographic statistics, quality control of manufacturing processes, and many other areas.
Observations of this phenomenon provided the initial motivation for the concept of what is now known as frequency probability.
This phenomenon should not be confused with the gambler's fallacy: statistical regularity refers only to the (possibly very) long run and describes the aggregate behaviour of many trials, so it says nothing about the outcome of any individual trial.
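The distinction can be illustrated with a hedged simulation sketch (the seed, number of flips, and the choice of a five-tail streak are illustrative assumptions): the overall frequency of heads tends to 1/2, yet the frequency of heads immediately after a streak of tails is also about 1/2, so a streak does not make heads "due".

```python
import random

random.seed(2)  # illustrative seed for reproducibility

flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Outcomes that immediately follow a run of five tails.
after_five_tails = [flips[i] for i in range(5, len(flips))
                    if not any(flips[i - 5:i])]

# Both frequencies are close to 1/2: long-run regularity emerges even though
# a streak of tails does not make heads more likely on the next flip.
print("overall frequency of heads:       ", sum(flips) / len(flips))
print("frequency of heads after 5 tails: ",
      sum(after_five_tails) / len(after_five_tails))
```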