Combinatorics

Combinatorics is a branch of mathematics that studies finite collections of objects satisfying given criteria. It is in particular concerned with "counting" the objects in those collections (enumerative combinatorics) and with deciding whether certain "optimal" objects exist (extremal combinatorics). One of the most prominent combinatorialists of recent times was Gian-Carlo Rota, who helped formalize the subject beginning in the 1960s. The prolific problem solver Paul Erdős worked mainly on extremal questions. The study of how to count objects is sometimes considered separately as the field of enumeration. A quite comprehensive listing of the subject is given by the Wikipedia page list of combinatorics topics.

An example of a combinatorial question is the following: how many orderings of a deck of 52 playing cards are possible? The answer is 52! ("fifty-two factorial"), the product of all the natural numbers from one to fifty-two. It may seem surprising that this number, about 8.0658 × 10^{67} (a little more than 8 followed by 67 zeros), is so large. For comparison, it is greater than the square of Avogadro's number, 6.022 × 10^{23}, "the number of atoms, molecules, etc., in a gram mole."
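The size of 52! is easy to verify directly; a minimal Python sketch (not part of the original article):

```python
import math

# Number of possible orderings of a 52-card deck: 52!
orderings = math.factorial(52)
print(f"{orderings:.4e}")  # about 8.0658e+67

# Compare with the square of Avogadro's number, 6.022e23
avogadro_squared = 6.022e23 ** 2
print(orderings > avogadro_squared)  # True
```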
Counting functions

Calculating the number of ways that certain patterns can be formed is the beginning of combinatorics. Let S be a set with n objects. Combinations of k objects from S are subsets of S having k elements each (where the order of listing the elements does not distinguish two subsets). Permutations of k objects from S are sequences of k different elements of S (where two sequences containing the same elements in a different order are considered different). Formulas for the number of permutations and combinations are readily available and are important throughout combinatorics.

More generally, given an infinite collection of finite sets {S_{i}}, typically indexed by the natural numbers, enumerative combinatorics seeks ways of describing a counting function f(n), which counts the number of objects in S_{n} for any n. Although counting the number of elements in a set is a rather broad mathematical problem, in a combinatorial problem the sets S_{i} will usually have a relatively simple combinatorial description and little additional structure.

The simplest such functions are closed formulas, which can be expressed as a composition of elementary functions such as factorials, powers, and so on. As noted above, the number of possible different orderings of a deck of n cards is f(n) = n!. This approach may not always be entirely satisfactory (or practical) for every combinatorial problem. For example, let f(n) be the number of distinct subsets of the integers in the interval [1,n] that do not contain two consecutive integers; thus with n = 4, we have {}, {1}, {2}, {3}, {4}, {1,3}, {1,4}, {2,4}, so f(4) = 8. It turns out that f(n) is the (n+2)nd Fibonacci number, which can be expressed in closed form as

    f(n) = (φ^{n+2} − ψ^{n+2}) / √5,  where φ = (1 + √5)/2 and ψ = (1 − √5)/2.
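The claim that f(n) is the (n+2)nd Fibonacci number can be checked by brute force for small n; a short Python sketch (the function names are illustrative, not from the article):

```python
from itertools import combinations

def count_no_consecutive(n):
    """Brute-force count of subsets of {1,...,n} with no two consecutive elements."""
    total = 0
    for k in range(n + 1):
        for subset in combinations(range(1, n + 1), k):
            # subset is sorted, so adjacent entries suffice for the check
            if all(b - a > 1 for a, b in zip(subset, subset[1:])):
                total += 1
    return total

def fib(n):
    """Fibonacci numbers with F(1) = F(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(count_no_consecutive(4))  # 8, matching the list in the text
for n in range(1, 10):
    assert count_no_consecutive(n) == fib(n + 2)
```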
Another approach is to find an asymptotic formula f(n) ~ g(n), where g(n) is a "familiar" function and the ratio f(n)/g(n) approaches 1 as n approaches infinity. In some cases, a simple asymptotic function may be preferable to a horribly complicated closed formula that yields no insight into the behaviour of the counted objects. In the above example, an asymptotic formula would be

    f(n) ~ φ^{n+2} / √5  as n → ∞,

since |ψ| < 1, so the ψ^{n+2} term in the closed formula vanishes in the limit.
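How quickly this asymptotic estimate converges can be seen numerically; a small Python sketch (f is computed here by its recurrence, an assumption consistent with the Fibonacci identity above):

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2

def f(n):
    """f(n) via the recurrence f(n) = f(n-1) + f(n-2), with f(0) = 1, f(1) = 2."""
    a, b = 1, 2
    for _ in range(n):
        a, b = b, a + b
    return a

# The ratio f(n) / (phi^(n+2) / sqrt(5)) tends to 1 as n grows
for n in (5, 10, 20):
    approx = phi ** (n + 2) / sqrt(5)
    print(n, f(n), f(n) / approx)
```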
Finally, and most usefully, f(n) may be expressed by a formal power series, called its generating function, which is most commonly either the ordinary generating function

    ∑_{n ≥ 0} f(n) x^{n}

or the exponential generating function

    ∑_{n ≥ 0} f(n) x^{n} / n!.
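For the non-consecutive-subset example above, the ordinary generating function is the rational function (1 + x)/(1 − x − x^2), whose power-series coefficients reproduce f(n). A Python sketch checking this by long division of power series (the helper name is illustrative):

```python
def series_coeffs(num, den, n_terms):
    """First n_terms coefficients of num(x)/den(x) as a power series.
    num, den are coefficient lists, constant term first; den[0] must be nonzero."""
    coeffs = []
    rem = list(num) + [0] * n_terms  # working remainder, padded
    for i in range(n_terms):
        c = rem[i] / den[0]
        coeffs.append(c)
        # subtract c * x^i * den(x) from the remainder
        for j, d in enumerate(den):
            if i + j < len(rem):
                rem[i + j] -= c * d
    return coeffs

# (1 + x) / (1 - x - x^2) should reproduce f(n) = 1, 2, 3, 5, 8, 13, ...
print(series_coeffs([1, 1], [1, -1, -1], 8))
```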
Results

Some very subtle patterns can be developed and some surprising theorems proved. One example of a surprising theorem is due to Frank P. Ramsey: suppose 6 people meet each other at a party. Some of them already know each other, some do not. One can always find 3 people out of the 6 such that they either all know each other or are all strangers to each other.

The proof is a short proof by contradiction: suppose that there are no 3 people who either all know each other or all don't know each other. Consider any one person at the party, hereafter called person A. Among the remaining 5 people, there must be at least three who either all know A or all do not know A (by the pigeonhole principle). Without loss of generality, assume three such people all know A. Among those three, at least two must know each other (otherwise we would have 3 people who all don't know each other). But those two also know A, so we have 3 people who all know each other, a contradiction. (This is a special case of Ramsey's theorem.)

The idea of finding order in random configurations gives rise to Ramsey theory. Essentially this theory says that any sufficiently large configuration will contain at least one instance of some other given type of configuration.

See also: finite mathematics, inclusion-exclusion principle
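The party argument can also be verified exhaustively: with 6 people there are only 2^15 ways to colour the 15 pairs "know"/"don't know", so a computer can check them all. A Python sketch (function names are illustrative):

```python
from itertools import combinations

def has_mono_triangle(n, colouring):
    """colouring maps each pair (i, j) with i < j to 0 or 1."""
    return any(
        colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_colouring_has_triangle(n):
    """True if every 2-colouring of the pairs among n people
    contains 3 mutual acquaintances or 3 mutual strangers."""
    edges = list(combinations(range(n), 2))
    for bits in range(2 ** len(edges)):
        colouring = {e: (bits >> i) & 1 for i, e in enumerate(edges)}
        if not has_mono_triangle(n, colouring):
            return False
    return True

print(every_colouring_has_triangle(5))  # False: 5 people are not enough
print(every_colouring_has_triangle(6))  # True: the theorem above
```

That the check fails for 5 people shows the theorem is sharp: 6 is the smallest party size that forces such a triple.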
copyright © 2004 FactsAbout.com |