Is it possible to design conceptually definite processes with unpredictable
outcomes? Can indeterminacy be implemented without invoking "nature",
and without shifting artistic decisions to curators, performing artists,
or the public itself? The obvious answer to this challenge is the
use of chance procedures, a method that may
be summarized as follows: (1) define a space of possibilities in explicit,
mathematical terms; (2) define a probability distribution over this
space; (3) draw random samples from the space, in accordance with
the probability distribution.
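To make the three steps concrete, here is a minimal sketch in Python (the
space of grid positions and colours, the weights, and the function name are
purely illustrative assumptions, not a description of any particular work):

    import random

    # Step 1: an explicitly defined space of possibilities --
    # here, a grid position and a colour for a single mark.
    positions = [(x, y) for x in range(10) for y in range(10)]
    colours = ["black", "red", "blue", "yellow"]

    # Step 2: a probability distribution over the space --
    # uniform over positions, weighted towards black for colours.
    colour_weights = [0.7, 0.1, 0.1, 0.1]

    # Step 3: draw random samples in accordance with the distribution.
    def sample_mark(rng=random):
        position = rng.choice(positions)
        colour = rng.choices(colours, weights=colour_weights, k=1)[0]
        return position, colour

    for _ in range(5):
        print(sample_mark())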
This probabilistic art generation strategy highlights one artistic
problem with relentless clarity: How to define the space of possible
outcomes (and the concomitant probability distribution)? This problem
is discussed on our page on chance art. The strategy also raises some slightly esoteric philosophical/physical
questions: What is chance, and does it exist?
For the practice of chance art, the answers to these questions are
largely immaterial, but for an appreciation of its conceptual dimensions,
they are indispensable.
What is chance?
The common-sense notion of chance refers to real-life
unpredictability. (William Wollaston, 1722: "Chance seems to
be only a term, by which we express our ignorance of the cause of
any thing.") For predictions
about an ongoing sequence of events that must be based on observations
of an initial segment, a mathematical counterpart of unpredictability
can be developed: unpredictability = the absence of regularity = the
impossibility of a gambling strategy. This analysis was first proposed
by Richard von Mises in 1919. It was perfected by Abraham Wald (1936/1937)
and Alonzo Church (1940), criticized by Ville (1939), and saved by
Per Martin-Löf (1966).
A different perspective on this matter, based on Shannon's
information theory, is due
to Andrey Kolmogorov, who focussed directly on the absence of regularities
in initial segments of a random sequence. Since any regularity
in a sequence allows it to be encoded in a more efficient way, randomness
may be equated with incompressibility.
This idea was further developed
by Gregory Chaitin. (Cf. Li & Vitanyi, 1993; Calude, 1994; Chaitin,
2001.)
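The idea can be made tangible with an everyday compressor (a rough sketch
only: Kolmogorov complexity is defined relative to universal machines and is
not computable, so an off-the-shelf compressor merely approximates the
intuition):

    import os
    import zlib

    # A highly regular sequence compresses to a small fraction of its size.
    regular = b"01" * 5000

    # Bytes from the operating system's entropy source barely compress at all.
    unpredictable = os.urandom(10000)

    for name, data in [("regular", regular), ("unpredictable", unpredictable)]:
        compressed = zlib.compress(data, level=9)
        print(name, len(data), "->", len(compressed), "bytes")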
Randomness implies various kinds of statistical uniformity,
and for many practical purposes that is all one needs from a "random"
sequence. Effective criteria for statistical uniformity were first
proposed by Kermack & McKendrick (1936/1937)
and Kendall & Babington Smith (1938).
See Meyer (1956) for a bibliography of early work in this area. The
current state of the art is the Diehard test-suite (cf. Marsaglia & Tsang, 2002).
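To give the flavour of such criteria, the simplest of them is a frequency
test: do all digits occur about equally often? A small sketch (our own
illustration, not one of the published test batteries):

    import random

    def chi_square_uniformity(digits, categories=10):
        # Chi-square statistic for the hypothesis that every digit
        # is equally likely; large values signal non-uniformity.
        expected = len(digits) / categories
        counts = [0] * categories
        for d in digits:
            counts[d] += 1
        return sum((c - expected) ** 2 / expected for c in counts)

    sample = [random.randrange(10) for _ in range(100000)]
    # With 9 degrees of freedom, values far above about 21.7 (the 99th
    # percentile of the chi-square distribution) would be suspicious.
    print(chi_square_uniformity(sample))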
Does it exist?
Unpredictability is often operationalized through uncontrolled physical
processes, such as casting dice, tossing coins, and spinning the roulette
wheel. For practical purposes, this works fine. We know, however,
that events of this sort can in principle be predicted by
measuring initial conditions and applying the laws of classical mechanics.
For roulette wheels this is even practically feasible (Bass, 1985).
But prediction becomes increasingly difficult if we look at modern
devices for random number generation, which generate fast bit streams
from small-scale physical phenomena such as thermal noise (electric
potential fluctuations in conducting materials) or atmospheric radio
noise (cf. random.org).
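In practice such sources usually reach a program through the operating
system, which mixes hardware noise into an entropy pool; in Python, for
example, one can read from that pool directly (this illustrates the
programming interface, not the physical devices themselves):

    import os
    import secrets

    # Raw bytes from the operating system's entropy pool, which is
    # typically seeded from hardware noise sources.
    print(os.urandom(16).hex())

    # The secrets module is a convenient layer over the same source.
    print(secrets.randbelow(1_000_000))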
Physical measurements at the quantum
level are not predicted by any known theory; they are thus "random" in an unusually strong sense of that word. It is sometimes asserted
that they are absolutely random, i.e., that we know that no
conceivable deterministic theory could predict their outcomes. Von
Neumann (1932) presented a formal proof to this effect, which was,
however, based on an incorrect assumption (cf. Hermann, 1935; Bell,
1966). By now, there is experimental evidence for the reality
of quantum entanglement, which implies that quantum measurements cannot
be accounted for by local hidden variables. HotBits
is an online source of random numbers that uses a quantum effect:
radioactive decay.
Can we fake it?
An old challenge in computer
science: can a deterministic computer be programmed to yield number
sequences which are "random" in the mathematical sense of
that word? In the strict sense demanded by Von Mises and Kolmogorov,
this is obviously out of the question: the generating algorithm defines
both a perfect gambling strategy and an extremely efficient compressed
code. (John von Neumann, 1951: "Anyone who considers arithmetical
methods of producing random digits is, of course, in a state of sin.") Mere
statistical uniformity, on the other hand, is a difficult but not
impossible challenge. Divisions between large incommensurable numbers
often yield sequences with reasonable statistical properties (Knuth,
1969). Several other methods have been developed over the years; see
Coddington (1996) for an overview. The current state of the art is
the "Mersenne Twister" (Matsumoto & Nishimura, 1998).