Artificial Intelligence, or AI, is the ability of a computer to act like a human being. It has several applications, including software simulations and robotics. However, artificial intelligence is most commonly used in video games, where the computer is made to act as another player.
Nearly all video games include some level of artificial intelligence. The most basic type of AI produces characters that move in standard formations and perform predictable actions. More advanced artificial intelligence enables computer characters to act unpredictably and make different decisions based on a player's actions. For example, in a first-person shooter (FPS), an AI opponent may hide behind a wall while the player is facing it. When the player turns away, the AI opponent may attack. In modern video games, multiple AI opponents can even work together, making the gameplay even more challenging.
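Such opponent behavior is often implemented as a simple finite-state machine that switches between a few states depending on what the player is doing. The following is a minimal sketch in Python; the state names and the sensor inputs (player_is_facing_me, has_cover) are illustrative assumptions, not code from any particular game or engine.

    # Minimal finite-state machine sketch for an FPS opponent (illustrative only).
    # player_is_facing_me and has_cover are hypothetical values a game engine
    # would supply each frame.
    class OpponentAI:
        def __init__(self):
            self.state = "HIDE"

        def update(self, player_is_facing_me, has_cover):
            if self.state == "HIDE" and not player_is_facing_me:
                self.state = "ATTACK"   # player looked away: break cover and attack
            elif self.state == "ATTACK" and player_is_facing_me and has_cover:
                self.state = "HIDE"     # player turned back: retreat behind cover
            return self.state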
Artificial intelligence is used in a wide range of video games, including board games, side-scrollers, and 3D action games. AI also plays a large role in sports games, such as football, soccer, and basketball games. Since the competition is only as good as the computer's artificial intelligence, the AI is a crucial aspect of a game's playability. Games that lack a sophisticated and dynamic AI are easy to beat and therefore less fun to play. If the artificial intelligence is too good, a game might be impossible to beat, which would be discouraging for players. Therefore, video game developers often spend a long time striking the right balance of artificial intelligence to make games both challenging and fun to play. Most games also include different difficulty levels, such as Easy, Medium, and Hard, which allow players to select an appropriate level of artificial intelligence to play against.
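One simple way to realize such difficulty levels is to map each setting to a small set of AI parameters. The sketch below is hypothetical Python; the parameter names (reaction_time_s, aim_accuracy) are illustrative assumptions rather than settings from any real game.

    # Hypothetical mapping from difficulty level to AI parameters (illustrative only).
    DIFFICULTY_PRESETS = {
        "Easy":   {"reaction_time_s": 1.0, "aim_accuracy": 0.4},
        "Medium": {"reaction_time_s": 0.5, "aim_accuracy": 0.7},
        "Hard":   {"reaction_time_s": 0.2, "aim_accuracy": 0.9},
    }

    def configure_opponent(level):
        # Return the AI parameters the opponent should use for the chosen level.
        return DIFFICULTY_PRESETS[level]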
Highlights of AI History—From Gödel (1931) to 2010
Gödel and Lilienfeld. In 1931, just a few years after Julius Lilienfeld patented the transistor, Kurt Gödel laid the foundations of theoretical computer science (CS) with his work on universal formal languages and the limits of proof and computation [19]. He constructed formal systems allowing for self-referential statements that talk about themselves, in particular, about whether they can be derived from a set of given axioms through a computational theorem proving procedure. Gödel went on to construct statements that claim their own unprovability, to demonstrate that traditional math is either flawed in a certain algorithmic sense or contains unprovable but true statements. Gödel's incompleteness result is widely regarded as the most remarkable achievement of 20th century mathematics, although some mathematicians say it is logic, not math, and others call it the fundamental result of theoretical computer science, a discipline that did not yet officially exist back then but was effectively created through Gödel's work. It had enormous impact not only on computer science but also on philosophy and other fields. In particular, since humans can "see" the truth of Gödel's unprovable statements, some researchers mistakenly thought that his results show that machines and Artificial Intelligences (AIs) will always be inferior to humans. Given the tremendous impact of Gödel's results on AI theory, it does make sense to date AI's beginnings back to his 1931 paper.
Zuse and Turing. In 1936 Alan Turing [91] introduced the Turing machine to reformulate Gödel's results and Alonzo Church's extensions thereof. TMs are often more convenient than Gödel's integer-based formal systems, and later became a central tool of CS theory. Simultaneously Konrad Zuse built the first working program-controlled computers (1935-1941), using the binary arithmetic and the bits of Gottfried Wilhelm von Leibniz (1701) instead of the more cumbersome decimal system used by Charles Babbage, who pioneered the concept of program-controlled computers in the 1840s, and tried to build one, although without success. By 1941, all the main ingredients of 'modern' computer science were in place, a decade after Gödel's paper, a century after Babbage, and roughly three centuries after Wilhelm Schickard, who started the history of automatic computing hardware by constructing the first non-program-controlled computer in 1623. In the 1940s Zuse went on to devise the first high-level programming language (Plankalkül), which he used to write the first chess program. Back then chess-playing was considered an intelligent activity, hence one might call this chess program the first design of an AI program, although Zuse did not really implement it back then. Soon afterwards, in 1948, Claude Shannon [82] published information theory, recycling several older ideas such as Ludwig Boltzmann's entropy from 19th century statistical mechanics, and the bit of information (Leibniz, 1701).
Relays, Tubes, Transistors. Variants of transistors, the concept pioneered and patented by Julius Edgar Lilienfeld (1920s) and Oskar Heil (1935), were built by William Shockley, Walter H. Brattain & John Bardeen (1948: point contact transistor) as well as Herbert F. Mataré & Heinrich Welker (1948, exploiting transconductance effects of germanium diodes observed in the Luftwaffe during WW-II). Today, however, most transistors are of the old field-effect type à la Lilienfeld & Heil. In principle a switch remains a switch no matter whether it is implemented as a relay or a tube or a transistor, but transistors switch faster than relays (Zuse, 1941) and tubes (Colossus, 1943; ENIAC, 1946). This eventually led to significant speedups of computer hardware, which was essential for many subsequent AI applications.
The I in AI. In 1950, some 56 years ago, Turing invented a famous subjective test to decide whether a machine or something else is intelligent. Six years later, and 25 years after Gödel's paper, John McCarthy finally coined the term "AI". 50 years later, in 2006, this prompted some to celebrate the 50th birthday of AI, but this section's title should make clear that its author cannot agree with this view—it is the thing that counts, not its name [72].
Roots of Probability-Based AI. In the 1960s and 1970s Ray Solomonoff combined theoretical CS and probability theory to establish a general theory of universal inductive inference and predictive AI [85] closely related to the concept of Kolmogorov complexity [29]. His theoretically optimal predictors and their Bayesian learning algorithms only assume that the observable reactions of the environment in response to certain action sequences are sampled from an unknown probability distribution contained in a set M of all enumerable distributions. That is, given an observation sequence we only assume there exists a computer program that can compute the probabilities of the next possible observations. This includes all scientific theories of physics, of course. Since we typically do not know this program, we predict using a weighted sum of all distributions in M, where the sum of the weights does not exceed 1. It turns out that this is indeed the best one can possibly do, in a very general sense [85, 25]. Although the universal approach is practically infeasible since M contains infinitely many distributions, it does represent the first sound and general theory of optimal prediction based on experience, identifying the limits of both human and artificial predictors, and providing a yardstick for all prediction machines to come.
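In symbols, and only as a simplified sketch of the cited theory rather than a quotation from it, the universal predictor is a Bayes mixture over the class M of enumerable (semi)distributions:

\xi(x_{t+1} \mid x_{1:t}) \;=\; \frac{\sum_{\nu \in M} w_\nu \, \nu(x_{1:t} x_{t+1})}{\sum_{\nu \in M} w_\nu \, \nu(x_{1:t})}, \qquad \sum_{\nu \in M} w_\nu \le 1,

where each \nu is a computable model of the environment and w_\nu its prior weight; the predictions of \xi provably converge to those of the true (unknown) \mu \in M, with a cumulative error overhead bounded in terms of \ln w_\mu^{-1}.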
AI vs Astrology? Unfortunately, failed prophecies of human-level AI with just a tiny fraction of the brain's computing power discredited some of the AI research in the 1960s and 70s. Many theoretical computer scientists actually regarded much of the field with contempt for its perceived lack of hard theoretical results. ETH Zurich's Turing award winner and creator of the Pascal programming language, Niklaus Wirth, did not hesitate to compare AI to astrology. Practical AI of that era was dominated by rule-based expert systems and Logic Programming. That is, despite Solomonoff's fundamental results, a main focus of that time was on logical, deterministic deduction of facts from previously known facts, as opposed to (probabilistic) induction of hypotheses from experience.
Evolution, Neurons, Ants. Largely unnoticed by mainstream AI gurus of that era, a biology-inspired type of AI emerged in the 1960s when Ingo Rechenberg pioneered the method of artificial evolution to solve complex optimization tasks [44], such as the design of optimal airplane wings or combustion chambers of rocket nozzles. Such methods (and later variants thereof, e.g., Holland [23], 1970s) often gave better results than classical approaches. In the following decades, other types of "subsymbolic" AI also became popular, especially neural networks. Early neural net papers include those of McCulloch & Pitts, 1940s (linking certain simple neural nets to old and well-known, simple mathematical concepts such as linear regression); Minsky & Papert [35] (temporarily discouraging neural network research); Kohonen [27] and Amari, 1960s; Werbos [97], 1970s; and many others in the 1980s. Orthogonal approaches included fuzzy logic (Zadeh, 1960s), Rissanen's practical variants [45] of Solomonoff's universal method, "representation-free" AI (Brooks [5]), Artificial Ants (Dorigo & Gambardella [13], 1990s), statistical learning theory (in less general settings than those studied by Solomonoff) & support vector machines (Vapnik [94] and others). As of 2006, this alternative type of AI research is receiving more attention than "Good Old-Fashioned AI" (GOFAI).
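To give the flavor of Rechenberg-style artificial evolution, here is a minimal (1+1) evolution strategy sketch in Python; the sphere objective and the classic 1/5-success step-size rule are standard textbook choices used purely for illustration, not details taken from the cited work.

    import random

    def evolve(objective, x, sigma=1.0, generations=1000):
        # Minimal (1+1) evolution strategy: mutate the current solution and
        # keep the child if it is no worse; adapt the mutation step size sigma
        # with the classic 1/5-success rule.
        best = objective(x)
        successes = 0
        for g in range(1, generations + 1):
            child = [xi + random.gauss(0, sigma) for xi in x]
            f = objective(child)
            if f <= best:                   # accept improvements (and ties)
                x, best = child, f
                successes += 1
            if g % 20 == 0:                 # step-size adaptation every 20 generations
                sigma *= 1.5 if successes > 4 else 0.82
                successes = 0
        return x, best

    # Usage: minimize a simple sphere function in 5 dimensions.
    x_opt, f_opt = evolve(lambda v: sum(vi * vi for vi in v), [5.0] * 5)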
Mainstream AI Marries Statistics. A dominant theme of the 1980s and 90s was the marriage of mainstream AI and old concepts from probability theory. Bayes networks, Hidden Markov Models, and numerous other probabilistic models found wide application in areas ranging from pattern recognition and medical diagnosis to data mining, machine translation, and robotics.
Hardware Outshining Software: Humanoids, Robot Cars, Etc. In the 1990s and 2000s, much of the progress in practical AI was due to better hardware, getting roughly 1000 times faster per Euro per decade. In 1995, a fast vision-based robot car by Ernst Dickmanns (whose team built the world's first reliable robot cars in the early 1980s with the help of Mercedes-Benz, e. g., [12]) autonomously drove 1000 miles from Munich to Denmark and back, up to 100 miles without intervention of a safety driver (who took over only rarely in critical situations), in traffic at up to 120 mph, visually tracking up to 12 other cars simultaneously, automatically passing other cars. Japanese labs (Honda, Sony) and Pfeiffer's lab at TU Munich built famous humanoid walking robots. Engineering problems often seemed more challenging than AI-related problems.
Another source of progress was the dramatically improved access to all kinds of data through the WWW, created by Tim Berners-Lee at the European particle collider CERN (Switzerland) in 1990. This greatly facilitated and encouraged all kinds of "intelligent" data mining applications. However, there were few if any obvious fundamental algorithmic breakthroughs; improvements and extensions of already existing algorithms seemed less impressive and less crucial than hardware advances. For example, chess world champion Kasparov was beaten by a fast IBM computer running a fairly standard algorithm. Rather simple but computationally expensive probabilistic methods for speech recognition, statistical machine translation, computer vision, optimization, virtual realities etc. started to become feasible on PCs, mainly because PCs had become 1000 times more powerful within a decade or so.
As noted by Stefan Artmann (personal communication, 2006), today's AI textbooks seem substantially more complex and less unified than those of several decades ago, e. g., [39], since they have to cover so many apparently quite different subjects. There seems to be a need for a new unifying view of intelligence. Today the importance of embodied, embedded AI (real robots living in real physical environments) is almost universally acknowledged (e. g., [41]). While the extension of AI into the realm of the physical body seems to be a step away from formalism, the new millennium's formal point of view is actually taking this step into account in a very general way, through the first mathematical theory of universal embedded AI, combining "old" theoretical computer science and "ancient" probability theory to derive optimal behavior for embedded, embodied rational agents living in unknown but learnable environments.
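Schematically, and only as a rough sketch rather than the exact formulation of that theory, such an agent picks at each time t the action that maximizes expected future reward under an action-conditional universal mixture \xi like the one sketched above:

a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big( r_t + \cdots + r_m \big)\, \xi\big(o_t r_t \ldots o_m r_m \mid a_1 \ldots a_m,\, o_1 r_1 \ldots o_{t-1} r_{t-1}\big),

where o_\tau and r_\tau denote observations and rewards, a_\tau actions, and m the horizon.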