Description
Intuitively, a sequence such as 101010101010101010... does not seem random, whereas 101101011101010100..., obtained using coin tosses, does. How can we reconcile this intuition with the fact that both are statistically equally likely? What does it mean to say that an individual mathematical object such as a real number is random, or that one real is more random than another? And what is the relationship between randomness and computational power?

The theory of algorithmic randomness uses tools from computability theory and algorithmic information theory to address questions such as these. Much of this theory can be seen as exploring the relationships between three fundamental concepts: relative computability, as measured by notions such as Turing reducibility; information content, as measured by notions such as Kolmogorov complexity; and randomness of individual objects, as first successfully defined by Martin-Löf. Although algorithmic randomness has been studied for several decades, a dramatic upsurge of interest in the area, starting in the late 1990s, has led to significant advances.

This is the first comprehensive treatment of this important field, designed to be both a reference tool for experts and a guide for newcomers. It surveys a broad section of work in the area and presents most of its major results and techniques in depth. Its organization is designed to guide the reader through this large body of work, providing context for its many concepts and theorems, discussing their significance, and highlighting their interactions. It includes a discussion of effective dimension, which allows us to assign concepts such as Hausdorff dimension to individual reals, and a focused but detailed introduction to computability theory. It will be of interest to researchers and students in computability theory, algorithmic information theory, and theoretical computer science.
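As a concrete illustration of the reconciliation the opening questions point at (this sketch is not from the book itself): under independent fair coin tosses, every particular length-n bit string has probability exactly 2^-n, so the two sequences are statistically equally likely; what separates them is that the periodic one admits a much shorter description. The minimal Python sketch below uses zlib compression as a crude, computable stand-in for Kolmogorov complexity; the length n and the variable names are illustrative choices, not anything defined in the text.

```python
import os
import zlib

n = 100_000  # illustrative string length

# A highly regular sequence: 101010...10
periodic = ("10" * (n // 2)).encode()

# A sequence of fair "coin tosses" drawn from the OS entropy source
coin_toss = "".join(str(byte & 1) for byte in os.urandom(n)).encode()

# Statistically, the two outcomes are equally likely: under independent
# fair coin tosses, every particular length-n bit string has probability
# exactly 2**-n.
print(f"probability of each specific string: 2^-{n}")

# Compressed size is a crude, computable upper bound on Kolmogorov
# complexity: a string is intuitively "random" when no description
# much shorter than the string itself exists.
print("periodic :", len(zlib.compress(periodic)), "bytes")
print("coin toss:", len(zlib.compress(coin_toss)), "bytes")
```

On a typical run the periodic string compresses to a few hundred bytes, while the coin-toss string stays near n/8 bytes (one bit of entropy per '0'/'1' character), matching the Kolmogorov-complexity view of randomness as incompressibility. Kolmogorov complexity itself is uncomputable, so a real compressor only ever supplies an upper bound on it.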