In common usage, randomness is the apparent or actual lack of definite pattern or predictability in information.[1][2] A random sequence of events, symbols or steps often has no order and does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable, but if there is a known probability distribution, the frequency of different outcomes over repeated events (or "trials") is predictable.[note 1] For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will tend to occur twice as often as 4. In this view, randomness is not haphazardness; it is a measure of uncertainty of an outcome. Randomness applies to concepts of chance, probability, and information entropy.
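The dice claim can be checked with a short simulation. The following is a minimal Python sketch (illustrative only, using the standard library): it rolls two dice many times and compares how often sums of 7 and 4 occur.

```python
# Illustrative sketch: sums of two fair dice. A sum of 7 (6 of the 36
# equally likely outcomes) should occur about twice as often as a sum
# of 4 (3 of the 36 outcomes).
import random
from collections import Counter

rolls = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(1_000_000))
print(rolls[7] / rolls[4])  # tends toward 2.0 as the number of rolls grows
```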
The fields of mathematics, probability, and statistics use formal definitions of randomness, typically assuming that there is some 'objective' probability distribution. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space. This association facilitates the identification and the calculation of probabilities of the events. Random variables can appear in random sequences. A random process is a sequence of random variables whose outcomes do not follow a deterministic pattern, but follow an evolution described by probability distributions. These and other constructs are extremely useful in probability theory and the various applications of randomness.
Randomness is most often used in statistics to signify well-defined statistical properties. Monte Carlo methods, which rely on random input (such as from random number generators or pseudorandom number generators), are important techniques in science, particularly in the field of computational science.[3] By analogy, quasi-Monte Carlo methods use quasi-random number generators.
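As an illustration of the Monte Carlo idea, the following minimal Python sketch (not tied to any particular cited method) estimates π by sampling random points in the unit square and counting how many fall inside the quarter circle of radius 1.

```python
# Illustrative Monte Carlo sketch: estimate pi from the fraction of random
# points in the unit square that land inside the quarter circle.
import random

def estimate_pi(n_samples: int = 1_000_000) -> float:
    inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                 for _ in range(n_samples))
    return 4.0 * inside / n_samples

print(estimate_pi())  # approaches 3.14159... as n_samples grows
```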
Random selection, when narrowly associated with a simple random sample, is a method of selecting items (often called units) from a population where the probability of choosing a specific item is the proportion of those items in the population. For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. A random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue. In situations where a population consists of items that are distinguishable, a random selection mechanism requires equal probabilities for any item to be chosen. That is, if the selection process is such that each member of a population, say research subjects, has the same probability of being chosen, then we can say the selection process is random.[2]
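The marble example can be made concrete with a small simulation. The sketch below (Python, illustrative only) repeatedly draws 10 marbles from a bowl of 10 red and 90 blue ones; exactly one red marble is merely the most likely outcome, not a guaranteed one.

```python
# Hedged sketch of the marble example: draw 10 marbles at random from a
# bowl of 10 red and 90 blue, many times over, and count the reds drawn.
import random
from collections import Counter

bowl = ["red"] * 10 + ["blue"] * 90
red_counts = Counter(Counter(random.sample(bowl, 10))["red"] for _ in range(100_000))
print(sorted(red_counts.items()))  # 1 red is most common, but 0, 2, 3, ... also occur
```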
According to Ramsey theory, pure randomness (in the sense of there being no discernible pattern) is impossible, especially for large structures. Mathematician Theodore Motzkin suggested that "while disorder is more probable in general, complete disorder is impossible".[4] Misunderstanding this can lead to numerous conspiracy theories.[5] Cristian S. Calude stated that "given the impossibility of true randomness, the effort is directed towards studying degrees of randomness".[6] It can be proven that there is an infinite hierarchy (in terms of quality or strength) of forms of randomness.[6]
History
In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance. Most ancient cultures used various methods of divination to attempt to circumvent randomness and fate.[7][8] Beyond religion and games of chance, randomness has been used for sortition since at least the time of ancient Athenian democracy, in the form of the kleroterion.[9]
The odds and chances associated with games were perhaps first formalized by the Chinese some 3,000 years ago. The Greek philosophers discussed randomness at length, but only in non-quantitative forms. It was only in the 16th century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention of calculus had a positive impact on the formal study of randomness. In the 1888 edition of his book The Logic of Chance, John Venn wrote a chapter on "The conception of randomness" that included his view of the randomness of the digits of pi (π), by using them to construct a random walk in two dimensions.[10]
The early part of the 20th century saw a rapid growth in the formal analysis of randomness, as various approaches to the mathematical foundations of probability were introduced. In the mid-to-late-20th century, ideas of algorithmic information theory introduced new dimensions to the field via the concept of algorithmic randomness.
Although randomness had often been viewed as an obstacle and a nuisance for many centuries, in the 20th century computer scientists began to realize that the deliberate introduction of randomness into computations can be an effective tool for designing better algorithms. In some cases, such randomized algorithms even outperform the best deterministic methods.[11]
In science
Many scientific fields are concerned with randomness:
In the physical sciences
In the 19th century, scientists used the idea of random motions of molecules in the development of statistical mechanics to explain phenomena in thermodynamics and the properties of gases.
According to several standard interpretations of quantum mechanics, microscopic phenomena are objectively random.[12] That is, in an experiment that controls all causally relevant parameters, some aspects of the outcome still vary randomly. For example, if a single unstable atom is placed in a controlled environment, it cannot be predicted how long it will take for the atom to decay—only the probability of decay in a given time.[13] Thus, quantum mechanics does not specify the outcome of individual experiments, but only the probabilities. Hidden variable theories reject the view that nature contains irreducible randomness: such theories posit that in the processes that appear random, properties with a certain statistical distribution are at work behind the scenes, determining the outcome in each case.
In biology
The modern evolutionary synthesis ascribes the observed diversity of life to random genetic mutations followed by natural selection. The latter retains some random mutations in the gene pool due to the systematically improved chance for survival and reproduction that those mutated genes confer on individuals who possess them. The location of the mutation is not entirely random, however: biologically important regions, for example, may be more protected from mutations.[14][15][16]
Several authors also claim that evolution (and sometimes development) requires a specific form of randomness, namely the introduction of qualitatively new behaviors. Instead of the choice of one possibility among several pre-given ones, this randomness corresponds to the formation of new possibilities.[17][18]
The characteristics of an organism arise to some extent deterministically (e.g., under the influence of genes and the environment), and to some extent randomly. For example, the density of freckles that appear on a person's skin is controlled by genes and exposure to light; whereas the exact location of individual freckles seems random.[19]
As far as behavior is concerned, randomness is important if an animal is to behave in a way that is unpredictable to others. For instance, insects in flight tend to move about with random changes in direction, making it difficult for pursuing predators to predict their trajectories.
In mathematics
The mathematical theory of probability arose from attempts to formulate mathematical descriptions of chance events, originally in the context of gambling, but later in connection with physics. Statistics is used to infer an underlying probability distribution of a collection of empirical observations. For the purposes of simulation, it is necessary to have a large supply of random numbers—or means to generate them on demand.
Algorithmic information theory studies, among other topics, what constitutes a random sequence. The central idea is that a string of bits is random if and only if it is shorter than any computer program that can produce that string (Kolmogorov randomness), which means that random strings are those that cannot be compressed. Pioneers of this field include Andrey Kolmogorov and his student Per Martin-Löf, Ray Solomonoff, and Gregory Chaitin. For the notion of infinite sequence, mathematicians generally accept Per Martin-Löf's semi-eponymous definition: An infinite sequence is random if and only if it withstands all recursively enumerable null sets.[20] The other notions of random sequences include, among others, recursive randomness and Schnorr randomness, which are based on recursively computable martingales. It was shown by Yongge Wang that these randomness notions are generally different.[21]
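The "random means incompressible" idea can be illustrated, with caveats, using an off-the-shelf compressor: Kolmogorov complexity itself is uncomputable, and a general-purpose compressor such as zlib only gives an upper bound, but the contrast between a patterned string and typical random bytes is still visible in the following minimal Python sketch.

```python
# Rough illustration of incompressibility: zlib is only a crude proxy for
# Kolmogorov complexity, but it separates a highly regular string from
# typical output of an entropy source.
import os
import zlib

patterned = b"01" * 5000          # highly regular, compresses very well
random_like = os.urandom(10000)   # typical random bytes

print(len(zlib.compress(patterned)))    # much shorter than 10000
print(len(zlib.compress(random_like)))  # roughly 10000 or slightly more
```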
Apparent statistical randomness occurs in the digits of numbers such as log(2) and pi. The decimal digits of pi constitute an infinite sequence and "never repeat in a cyclical fashion." Numbers like pi are also considered likely to be normal:
Pi certainly seems to behave this way. In the first six billion decimal places of pi, each of the digits from 0 through 9 shows up about six hundred million times. Yet such results, conceivably accidental, do not prove normality even in base 10, much less normality in other number bases.[22]
In statistics
In statistics, randomness is commonly used to create simple random samples. This allows surveys of completely random groups of people to provide realistic data that is reflective of the population. Common methods of doing this include drawing names out of a hat or using a random digit chart (a large table of random digits).
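In software, drawing a simple random sample is the analogue of pulling names out of a hat or reading a random digit chart. A minimal Python sketch (the subject names here are purely illustrative):

```python
# Minimal sketch of a simple random sample: each member of the population
# is equally likely to be chosen.
import random

population = [f"subject_{i}" for i in range(1000)]
sample = random.sample(population, k=50)
print(sample[:5])
```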
In information science
In information science, irrelevant or meaningless data is considered noise. Noise consists of numerous transient disturbances, with a statistically randomized time distribution.
In communication theory, randomness in a signal is called "noise", and is opposed to that component of its variation that is causally attributable to the source, the signal.
In the theory of random networks used to model communication, randomness rests on the two simple assumptions of Paul Erdős and Alfréd Rényi: that the network has a fixed number of nodes, which remains fixed for the life of the network, and that all nodes are equal and are linked to one another at random.[23]
In finance
The random walk hypothesis considers that asset prices in an organized market evolve at random, in the sense that the expected value of their change is zero but the actual value may turn out to be positive or negative. More generally, asset prices are influenced by a variety of unpredictable events in the general economic environment.
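A minimal sketch of the random walk idea follows (Python, with an assumed zero-mean Gaussian daily change chosen purely for illustration, not as a model of real markets):

```python
# Hedged sketch of a random walk: each day's price change is drawn at
# random with expected value zero, so the expected change in price is
# zero even though any particular path drifts up or down.
import random

price = 100.0
for _ in range(250):                 # roughly one trading year
    price += random.gauss(0.0, 1.0)  # zero-mean random daily change
print(price)                         # unpredictable, but centred on 100 on average
```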
In politics
Random selection can be an official method to resolve tied elections in some jurisdictions.[24] Its use in politics is ancient: many offices in ancient Athens were filled by lot rather than by vote.
Randomness and religion
Randomness can be seen as conflicting with the deterministic ideas of some religions, such as those where the universe is created by an omniscient deity who is aware of all past and future events. If the universe is regarded as having a purpose, then randomness can be seen as impossible. This is one of the rationales for religious opposition to evolution, which holds that non-random selection is applied to the results of random genetic variation.
Hindu and Buddhist philosophies state that any event is the result of previous events, as is reflected in the concept of karma. As such, this conception is at odds with the idea of randomness, and any reconciliation between both of them would require an explanation.[25]
In some religious contexts, procedures that are commonly perceived as randomizers are used for divination. Cleromancy uses the casting of bones or dice to reveal what is seen as the will of the gods.
Applications
In most of its mathematical, political, social and religious uses, randomness is used for its innate "fairness" and lack of bias.
Politics: Athenian democracy was based on the concept of isonomia (equality of political rights) and used complex allotment machines to ensure that the positions on the ruling committees that ran Athens were fairly allocated. Allotment is now largely restricted to situations where "fairness" is approximated by randomization, such as selecting jurors in Anglo-Saxon legal systems and military draft lotteries.
Games: Random numbers were first investigated in the context of gambling, and many randomizing devices, such as dice, shuffled playing cards, and roulette wheels, were first developed for use in gambling. The ability to produce random numbers fairly is vital to electronic gambling, and, as such, the methods used to create them are usually regulated by government Gaming Control Boards. Random drawings are also used to determine lottery winners. Throughout history, randomness has been used for games of chance and to select individuals for an unwanted task in a fair way (see drawing straws).
Sports: Some sports, including American football, use coin tosses to randomly select starting conditions for games or seed tied teams for postseason play. The National Basketball Association uses a weighted lottery to order teams in its draft.
Mathematics: Random numbers are also employed where their use is mathematically important, such as sampling for opinion polls and for statistical sampling in quality control systems. Computational solutions for some types of problems use random numbers extensively, such as in the Monte Carlo method and in genetic algorithms.
Medicine: Random allocation of a clinical intervention is used to reduce bias in controlled trials (e.g., randomized controlled trials).
Religion: Although not intended to be random, various forms of divination such as cleromancy see what appears to be a random event as a means for a divine being to communicate their will (see also Free will and Determinism for more).
Generation
It is generally accepted that there exist three mechanisms responsible for (apparently) random behavior in systems:
- Randomness coming from the environment (for example, Brownian motion, but also hardware random number generators).
- Randomness coming from the initial conditions. This aspect is studied by chaos theory, and is observed in systems whose behavior is very sensitive to small variations in initial conditions (such as pachinko machines and dice).
- Randomness intrinsically generated by the system. This is also called pseudorandomness, and is the kind used in pseudo-random number generators. There are many algorithms (based on arithmetic or cellular automata) for generating pseudorandom numbers. The behavior of the system can be determined by knowing the seed state and the algorithm used. These methods are often quicker than getting "true" randomness from the environment; a minimal generator sketch follows this list.
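The following minimal Python sketch illustrates the third mechanism with a linear congruential generator; the constants are the widely cited Numerical Recipes parameters, shown only as an example, and real applications would use a better-studied generator.

```python
# Minimal pseudorandomness sketch: a linear congruential generator.
# Knowing the seed and the algorithm determines every output, yet the
# stream looks "random" on casual inspection.
def lcg(seed: int, n: int):
    state = seed
    for _ in range(n):
        state = (1664525 * state + 1013904223) % 2**32
        yield state / 2**32  # scale to [0, 1)

print(list(lcg(seed=42, n=3)))
print(list(lcg(seed=42, n=3)))  # identical: fully determined by the seed
```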
The many applications of randomness have led to many different methods for generating random data. These methods may vary as to how unpredictable or statistically random they are, and how quickly they can generate random numbers.
Before the advent of computational random number generators, generating large amounts of sufficiently random numbers (which is important in statistics) required a lot of work. Results would sometimes be collected and distributed as random number tables.
Measures and tests
There are many practical measures of randomness for a binary sequence. These include measures based on frequency, discrete transforms, complexity, or a mixture of these, such as the tests by Kak, Phillips, Yuen, Hopkins, Beth and Dai, Mund, and Marsaglia and Zaman.[26]
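As a hedged illustration of the simplest frequency-style measure (a monobit-style count, not any of the specific tests cited above), the following Python sketch standardizes the surplus of ones in a bit sequence:

```python
# Frequency-style check: for a long sequence of fair random bits, the
# standardized surplus of ones is approximately standard normal.
import random

bits = [random.getrandbits(1) for _ in range(100_000)]
ones = sum(bits)
z = (2 * ones - len(bits)) / len(bits) ** 0.5
print(z)  # values far outside roughly [-3, 3] suggest bias
```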
Quantum nonlocality has been used to certify the presence of a genuine or strong form of randomness in a given string of numbers.[27]
Misconceptions and logical fallacies
Popular perceptions of randomness are frequently mistaken, and are often based on fallacious reasoning or intuitions.
Fallacy: a number is "due"
This argument is, "In a random selection of numbers, since all numbers eventually appear, those that have not come up yet are 'due', and thus more likely to come up soon." This logic is only correct if applied to a system where numbers that come up are removed from the system, such as when playing cards are drawn and not returned to the deck. In this case, once a jack is removed from the deck, the next draw is less likely to be a jack and more likely to be some other card. However, if the jack is returned to the deck, and the deck is thoroughly reshuffled, a jack is as likely to be drawn as any other card. The same applies in any other process where objects are selected independently, and none are removed after each event, such as the roll of a die, a coin toss, or most lottery number selection schemes. Truly random processes such as these do not have memory, which makes it impossible for past outcomes to affect future outcomes. In fact, there is no finite number of trials that can guarantee a success.
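The difference between drawing without replacement (where the odds genuinely shift after each draw) and drawing with replacement (where the process has no memory) can be seen in a small Python sketch (illustrative only):

```python
# Why a card is not "due": without replacement the odds change; with
# replacement and reshuffling, every draw has the same chance.
import random

deck = list(range(52))   # say 0-3 represent the four jacks
jacks = set(range(4))

# Without replacement: after one jack is removed, another jack is less likely.
remaining = [c for c in deck if c != 0]
print(sum(c in jacks for c in remaining) / len(remaining))  # 3/51 < 4/52

# With replacement: every fresh draw from the full deck has the same chance.
draws = [random.choice(deck) in jacks for _ in range(100_000)]
print(sum(draws) / len(draws))  # stays near 4/52 regardless of past draws
```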
Fallacy: a number is "cursed" or "blessed"
In a random sequence of numbers, a number may be said to be cursed because it has come up less often in the past, and so it is thought that it will occur less often in the future. A number may be assumed to be blessed because it has occurred more often than others in the past, and so it is thought likely to come up more often in the future. This logic is valid only if the randomization might be biased; for example, if a die is suspected to be loaded, then its failure to roll enough sixes would be evidence of that loading. If the die is known to be fair, then previous rolls can give no indication of future events.
In nature, events rarely occur with a frequency that is known a priori, so observing outcomes to determine which events are more probable makes sense. However, it is fallacious to apply this logic to systems designed and known to make all outcomes equally likely, such as shuffled cards, dice, and roulette wheels.
Fallacy: odds are never dynamic
In the beginning of a scenario, one might calculate the probability of a certain event. However, as soon as one gains more information about the scenario, one may need to re-calculate the probability accordingly.
For example, when being told that a woman has two children, one might be interested in knowing if either of them is a girl, and if yes, the probability that the other child is also a girl. Considering the two events independently, one might expect that the probability that the other child is female is ½ (50%), but by building a probability space illustrating all possible outcomes, one would notice that the probability is actually only ⅓ (33%).
To be sure, the probability space does illustrate four ways of having these two children: boy-boy, girl-boy, boy-girl, and girl-girl. But once it is known that at least one of the children is female, this rules out the boy-boy scenario, leaving only three ways of having the two children: boy-girl, girl-boy, girl-girl. From this, it can be seen that only ⅓ of these scenarios would have the other child also be a girl[28] (see Boy or girl paradox for more).
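The probability space described above is small enough to enumerate directly; the following Python sketch simply lists the four equally likely families and conditions on there being at least one girl:

```python
# Enumerate the four equally likely two-child families and condition on
# at least one girl: the other child is a girl in 1 of the 3 remaining cases.
from itertools import product

families = list(product("BG", repeat=2))                 # BB, BG, GB, GG
at_least_one_girl = [f for f in families if "G" in f]
both_girls = [f for f in at_least_one_girl if f == ("G", "G")]
print(len(both_girls) / len(at_least_one_girl))          # 1/3
```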
In general, by using a probability space, one is less likely to miss out on possible scenarios, or to neglect the importance of new information. This technique can be used to provide insights in other situations such as the Monty Hall problem, a game show scenario in which a car is hidden behind one of three doors, and two goats are hidden as booby prizes behind the others. Once the contestant has chosen a door, the host opens one of the remaining doors to reveal a goat, eliminating that door as an option. With only two doors left (one with the car, the other with another goat), the player must decide either to keep their original choice or to switch to the other door. Intuitively, one might think the player is choosing between two doors with equal probability, and that the opportunity to choose another door makes no difference. However, an analysis of the probability space would reveal that the contestant has received new information, and that changing to the other door would increase their chances of winning.[28]
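A short simulation makes the Monty Hall conclusion concrete. The Python sketch below (illustrative only) uses the fact that switching wins exactly when the first pick missed the car:

```python
# Monty Hall simulation: switching wins whenever the first pick was wrong,
# which happens about 2/3 of the time.
import random

def play(switch: bool) -> bool:
    car = random.randrange(3)
    pick = random.randrange(3)
    # The host opens a door that is neither the pick nor the car, so
    # switching wins exactly when the original pick missed the car.
    return (pick != car) if switch else (pick == car)

trials = 100_000
print(sum(play(switch=True) for _ in range(trials)) / trials)   # about 0.667
print(sum(play(switch=False) for _ in range(trials)) / trials)  # about 0.333
```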
See also
- Chaitin's constant
- Chance (disambiguation)
- Frequentist probability
- Indeterminism
- Nonlinear system
- Probability interpretations
- Probability theory
- Pseudorandomness
- Random.org—generates random numbers using atmospheric noise
- Sortition
Notes
- ^ Strictly speaking, the frequency of an outcome will converge almost surely to a predictable value as the number of trials becomes arbitrarily large. Non-convergence or convergence to a different value is possible, but has probability zero. Consistent non-convergence is thus evidence of the lack of a fixed probability distribution, as in many evolutionary processes.
References
- ^ The Oxford English Dictionary defines "random" as "Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice; haphazard."
- ^ a b "Definition of randomness | Dictionary.com". www.dictionary.com. Retrieved 21 November 2019.
- ^ Third Workshop on Monte Carlo Methods, Jun Liu, Professor of Statistics, Harvard University
- ^ Hans Jürgen Prömel (2005). "Complete Disorder is Impossible: The Mathematical Work of Walter Deuber". Combinatorics, Probability and Computing. 14. Cambridge University Press: 3–16. doi:10.1017/S0963548304006674. S2CID 37243306.
- ^ Ted.com (May 2016). The origin of countless conspiracy theories.
- ^ a b Cristian S. Calude (2017). "Quantum Randomness: From Practice to Theory and Back", in The Incomputable: Journeys Beyond the Turing Barrier, eds. S. Barry Cooper and Mariya I. Soskova, pp. 169–181. doi:10.1007/978-3-319-43669-2_11.
- ^ Handbook to life in ancient Rome by Lesley Adkins 1998 ISBN 0-19-512332-8 page 279
- ^ Religions of the ancient world by Sarah Iles Johnston 2004 ISBN 0-674-01517-7 page 370
- ^ Hansen, Mogens Herman (1991). The Athenian Democracy in the Age of Demosthenes. Wiley. p. 230. ISBN 9780631180173.
- ^ Annotated readings in the history of statistics by Herbert Aron David, 2001 ISBN 0-387-98844-0 page 115. The 1866 edition of Venn's book (on Google books) does not include this chapter.
- ^ Reinert, Knut (2010). "Concept: Types of algorithms" (PDF). Freie Universität Berlin. Retrieved 20 November 2019.
- ^ Zeilinger, Anton; Aspelmeyer, Markus; Żukowski, Marek; Brukner, Časlav; Kaltenbaek, Rainer; Paterek, Tomasz; Gröblacher, Simon (April 2007). "An experimental test of non-local realism". Nature. 446 (7138): 871–875. arXiv:0704.2529. Bibcode:2007Natur.446..871G. doi:10.1038/nature05677. ISSN 1476-4687. PMID 17443179. S2CID 4412358.
- ^ "Each nucleus decays spontaneously, at random, in accordance with the blind workings of chance." Q for Quantum, John Gribbin
- ^ "Study challenges evolutionary theory that DNA mutations are random". U.C. Davis. Retrieved 12 February 2022.
- ^ Monroe, J. Grey; Srikant, Thanvi; Carbonell-Bejerano, Pablo; Becker, Claude; Lensink, Mariele; Exposito-Alonso, Moises; Klein, Marie; Hildebrandt, Julia; Neumann, Manuela; Kliebenstein, Daniel; Weng, Mao-Lun; Imbert, Eric; Ågren, Jon; Rutter, Matthew T.; Fenster, Charles B.; Weigel, Detlef (February 2022). "Mutation bias reflects natural selection in Arabidopsis thaliana". Nature. 602 (7895): 101–105. Bibcode:2022Natur.602..101M. doi:10.1038/s41586-021-04269-6. ISSN 1476-4687. PMC 8810380. PMID 35022609.
- ^ Belfield, Eric J.; Ding, Zhong Jie; Jamieson, Fiona J.C.; Visscher, Anne M.; Zheng, Shao Jian; Mithani, Aziz; Harberd, Nicholas P. (January 2018). "DNA mismatch repair preferentially protects genes from mutation". Genome Research. 28 (1): 66–74. doi:10.1101/gr.219303.116. PMC 5749183. PMID 29233924.
- ^ Longo, Giuseppe; Montévil, Maël; Kauffman, Stuart (1 January 2012). "No entailing laws, but enablement in the evolution of the biosphere". Proceedings of the 14th annual conference companion on Genetic and evolutionary computation. GECCO '12. New York, NY, US: ACM. pp. 1379–1392. arXiv:1201.2069. CiteSeerX 10.1.1.701.3838. doi:10.1145/2330784.2330946. ISBN 9781450311786. S2CID 15609415.
- ^ Longo, Giuseppe; Montévil, Maël (1 October 2013). "Extended criticality, phase spaces and enablement in biology". Chaos, Solitons & Fractals. Emergent Critical Brain Dynamics. 55: 64–79. Bibcode:2013CSF....55...64L. doi:10.1016/j.chaos.2013.03.008. S2CID 55589891.
- ^ Breathnach, A. S. (1982). "A long-term hypopigmentary effect of thorium-X on freckled skin". British Journal of Dermatology. 106 (1): 19–25. doi:10.1111/j.1365-2133.1982.tb00897.x. PMID 7059501. S2CID 72016377. "The distribution of freckles seems entirely random, and not associated with any other obviously punctuate anatomical or physiological feature of skin."
- ^ Martin-Löf, Per (1966). "The definition of random sequences". Information and Control. 9 (6): 602–619. doi:10.1016/S0019-9958(66)80018-9.
- ^ Yongge Wang: Randomness and Complexity. PhD Thesis, 1996. http://webpages.uncc.edu/yonwang/papers/thesis.pdf
- ^ "Are the digits of pi random? researcher may hold the key". Lbl.gov. 23 July 2001. Archived from the original on 20 October 2007. Retrieved 27 July 2012.
- ^ Albert-László Barabási (2003). Linked, "Rich Gets Richer", p. 81.
- ^ Municipal Elections Act (Ontario, Canada) 1996, c. 32, Sched., s. 62 (3) : "If the recount indicates that two or more candidates who cannot both or all be declared elected to an office have received the same number of votes, the clerk shall choose the successful candidate or candidates by lot."
- ^ Reichenbach, Bruce (1990). The Law of Karma: A Philosophical Study. Palgrave Macmillan UK. p. 121. ISBN 978-1-349-11899-1.
- ^ Terry Ritter, Randomness tests: a literature survey. ciphersbyritter.com
- ^ Pironio, S.; et al. (2010). "Random Numbers Certified by Bell's Theorem". Nature. 464 (7291): 1021–1024. arXiv:0911.3427. Bibcode:2010Natur.464.1021P. doi:10.1038/nature09008. PMID 20393558. S2CID 4300790.
- ^ a b Johnson, George (8 June 2008). "Playing the Odds". The New York Times.
Further reading
- Randomness by Deborah J. Bennett. Harvard University Press, 1998. ISBN 0-674-10745-4.
- Random Measures, 4th ed. by Olav Kallenberg. Academic Press, New York, London; Akademie-Verlag, Berlin, 1986. MR0854102.
- The Art of Computer Programming. Vol. 2: Seminumerical Algorithms, 3rd ed. by Donald E. Knuth. Reading, MA: Addison-Wesley, 1997. ISBN 0-201-89684-2.
- Fooled by Randomness, 2nd ed. by Nassim Nicholas Taleb. Thomson Texere, 2004. ISBN 1-58799-190-X.
- Exploring Randomness by Gregory Chaitin. Springer-Verlag London, 2001. ISBN 1-85233-417-7.
- Random by Kenneth Chan includes a "Random Scale" for grading the level of randomness.
- The Drunkard’s Walk: How Randomness Rules our Lives by Leonard Mlodinow. Pantheon Books, New York, 2008. ISBN 978-0-375-42404-5.
External links
- QuantumLab Quantum random number generator with single photons as interactive experiment.
- HotBits generates random numbers from radioactive decay.
- QRBG Quantum Random Bit Generator
- QRNG Fast Quantum Random Bit Generator
- Chaitin: Randomness and Mathematical Proof
- A Pseudorandom Number Sequence Test Program (Public Domain)
- Dictionary of the History of Ideas: Chance
- Chance versus Randomness, from the Stanford Encyclopedia of Philosophy