In mathematics, the real numbers may be described informally in several different ways. The real numbers include both rational numbers, such as 42 and -23/129, and irrational numbers, such as pi and the square root of two; or, a real number can be given by an infinite decimal representation, such as 2.4871773339…, where the digits continue in some way; or, the real numbers may be thought of as points on an infinitely long number line.
These descriptions of the real numbers, while intuitively accessible, are not sufficiently rigorous for the purposes of pure mathematics. The discovery of a suitably rigorous definition of the real numbers (indeed, the realisation that a better definition was needed) was one of the most important developments of 19th-century mathematics. Popular definitions in use today include equivalence classes of Cauchy sequences of rational numbers; Dedekind cuts; a more sophisticated version of "decimal representation"; and an axiomatic definition of the real numbers as the unique complete Archimedean ordered field. These definitions are all described in detail below.
A real number may be either rational or irrational; either algebraic or transcendental; and either positive, negative, or zero. Real numbers measure continuous quantities. They may in theory be expressed by decimal representations that have an infinite sequence of digits to the right of the decimal point, written in a form such as 324.823122147…, where the ellipsis (three dots) indicates that more digits are still to come.
More formally, the real numbers have two basic properties: they form an ordered field, and they have the least upper bound property. The first says that the real numbers form a field, with addition and multiplication as well as division by nonzero numbers, which can be totally ordered on a number line in a way compatible with addition and multiplication. The second says that if a nonempty set of real numbers has an upper bound, then it has a least upper bound. These two properties together characterise the real numbers completely, and allow all their other properties to be deduced. For instance, one can prove from these properties that every polynomial of odd degree with real coefficients has a real root, and that adjoining the square root of -1 to the real numbers yields the complex numbers, which are algebraically closed.
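The least upper bound property described above can be stated symbolically; the following is one standard formulation, not tied to any particular construction:

```latex
% Completeness (least upper bound property):
% every nonempty set of reals that is bounded above has a supremum in R.
\forall S \subseteq \mathbb{R}:\quad
\bigl( S \neq \emptyset \;\wedge\; \exists b \in \mathbb{R}\ \forall x \in S\ (x \le b) \bigr)
\;\Longrightarrow\;
\exists\, s \in \mathbb{R}\ \bigl( s = \sup S \bigr).
```

The rational numbers lack this property: the set of rationals x with x^2 < 2 is nonempty and bounded above in the rationals, but it has no least upper bound there, since its supremum in the reals is the irrational number sqrt(2).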
Measurements in the physical sciences are almost always conceived of as approximations to real numbers. While the numbers used for this purpose are generally decimal fractions representing rational numbers, writing them in decimal terms suggests they are an approximation to a theoretical underlying real number.
A real number is said to be computable if there exists an algorithm that yields its digits. Because there are only countably many algorithms, but an uncountable number of reals, most real numbers are not computable. Some constructivists accept the existence of only those reals that are computable. The set of definable numbers is broader, but still only countable.
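As an illustration of computability, the square root of 2 is a computable real: an algorithm can produce any requested number of its decimal digits. The sketch below (the function name is our own, not a standard one) uses integer square roots to extract digits exactly:

```python
from math import isqrt


def sqrt2_digits(n):
    """Return the first n decimal digits of sqrt(2) after the decimal point.

    Since floor(sqrt(2 * 10**(2*n))) equals floor(sqrt(2) * 10**n),
    an exact integer square root yields the digits with no rounding error,
    demonstrating that sqrt(2) is a computable real number.
    """
    digits = str(isqrt(2 * 10 ** (2 * n)))
    return digits[1:]  # drop the leading "1" that precedes the decimal point


# sqrt(2) = 1.4142135623...
print(sqrt2_digits(10))
```

The same digit-extraction idea works for any real number defined by an algorithmically checkable property; it is the uncountability of the reals, against the countability of algorithms, that forces most reals to escape this treatment.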
Computers can only approximate most real numbers. Most commonly, they can represent a certain subset of the rationals exactly, via either floating point numbers or fixed-point numbers, and these rationals are used as an approximation for other nearby real values. Arbitrary-precision arithmetic is a method to represent arbitrary rational numbers, limited only by available memory, but more commonly one uses a fixed number of bits of precision determined by the size of the processor registers. In addition to these rational values, computer algebra systems are able to treat many (countable) irrational numbers exactly by storing an algebraic description (such as "sqrt(2)") rather than their rational approximation. Note that a few programming languages, such as AppleScript, use "real" to describe their main numeric data type.
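The contrast between floating-point approximation and exact rational arithmetic can be seen directly in, for example, Python, whose standard library includes arbitrary-precision rationals:

```python
from fractions import Fraction

# Binary floating point represents only dyadic rationals exactly,
# so 0.1 and 0.2 are stored as nearby approximations:
print(0.1 + 0.2 == 0.3)   # False: the sum carries rounding error

# Arbitrary-precision rational arithmetic, limited only by memory,
# represents these values exactly:
a = Fraction(1, 10) + Fraction(2, 10)
print(a == Fraction(3, 10))   # True: no approximation involved
```

Even exact rational arithmetic, of course, only covers a countable subset of the reals; irrational values such as sqrt(2) still require either an approximation or a symbolic description as used in computer algebra systems.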
Mathematicians use the symbol R (or alternatively, ℝ, the letter "R" in blackboard bold) to represent the set of all real numbers.
In mathematics, real is used as an adjective, meaning that the underlying field is the field of real numbers, as in real matrix, real polynomial and real Lie algebra. As a noun, the term is used almost exclusively for the real numbers themselves (e.g., "the set of all reals").
Vulgar fractions had been used by the Egyptians around 1000 BC; the Vedic "Sulba Sutras" ("rules of chords", c. 600 BC) include what may be the first use of irrational numbers. The concept of irrationality was implicitly accepted by early Indian mathematicians since Manava (c. 750–690 BC), who was aware that the square roots of certain numbers such as 2 and 61 could not be exactly determined. Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2.
The Middle Ages saw the acceptance of zero, negative, integral and fractional numbers, first by Indian and Chinese mathematicians, and then by Arabic mathematicians, who were also the first to treat irrational numbers as algebraic objects, which was made possible by the development of algebra. Arabic mathematicians merged the concepts of "number" and "magnitude" into a more general idea of real numbers. The Egyptian mathematician Abū Kāmil Shujā ibn Aslam (c. 850–930) was the first to accept irrational numbers as solutions to quadratic equations or as coefficients in an equation, often in the form of square roots, cube roots and fourth roots.
In the 18th and 19th centuries there was much work on irrational and transcendental numbers. Lambert (1761) gave a flawed proof that π cannot be rational; Legendre (1794) completed the proof, and showed that π is not the square root of a rational number. Ruffini (1799) and Abel (1824) both constructed proofs of the Abel–Ruffini theorem: that the general quintic or higher equations cannot be solved by a general formula involving only arithmetical operations and roots.
Évariste Galois (1832) developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory. Joseph Liouville (1840) showed that neither e nor e² can be a root of an integer quadratic equation, and then established the existence of transcendental numbers; the proof was subsequently displaced by that of Georg Cantor (1874). Charles Hermite (1873) first proved that e is transcendental, and Ferdinand von Lindemann (1882) showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass (1885), still further by David Hilbert (1893), and was finally made elementary by Adolf Hurwitz and Paul Gordan.
The development of calculus in the 1700s used the entire set of real numbers without having defined them rigorously. The first rigorous definition was given by Georg Cantor in 1871. In 1874 he showed that the set of all real numbers is uncountably infinite, but the set of all algebraic numbers is countably infinite. Contrary to widely held beliefs, his first method was not his famous diagonal argument, which he published in 1891. See Cantor's first uncountability proof.
See main article: Construction of the real numbers.
The real numbers can be constructed as a completion of the rational numbers in such a way that a sequence defined by a decimal or binary expansion like