Artificial intelligence (AI)
A term applied to the study and use of computers that can simulate some
of the characteristics normally ascribed to human intelligence, such as learning,
deduction, intuition, and self-correction. The subject encompasses many branches
of computer science, including cybernetics, knowledge-based systems, natural
language processing, pattern recognition, and robotics. Progress has been
made in several areas, notably problem-solving, language comprehension, and vision.
There are a number of steps along the way to artificial intelligence. At
the lower end, we may find chess programs, where the alternative game
positions are evaluated according to some artificial
rule. This can result in a computer doing the equivalent of playing ''Pawn
to King 4!!'' because the rules of evaluation give this move a value of 305.2
points, rather than because the machine is aware that this move will infuriate
the opponent it is playing.
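The kind of evaluation described above can be sketched as follows. The piece values and candidate moves here are illustrative inventions, not a real chess engine: the point is only that the program picks whichever move scores highest, with no notion of why the move is "good".

```python
# A minimal sketch of rule-based move evaluation: each candidate move
# gets a numeric score, and the program simply picks the maximum.

def evaluate(position):
    """Score a position by counting material (illustrative weights only)."""
    piece_values = {"P": 1.0, "N": 3.0, "B": 3.0, "R": 5.0, "Q": 9.0}
    return sum(piece_values.get(piece, 0.0) for piece in position)

def best_move(moves):
    """moves maps a move name to the resulting position (a list of pieces)."""
    return max(moves, key=lambda m: evaluate(moves[m]))

# Hypothetical candidate moves and the positions they lead to
moves = {
    "Pawn to King 4": ["P", "P", "N", "Q"],
    "Knight to Bishop 3": ["P", "P", "N"],
}
print(best_move(moves))  # picks the highest-scoring move
```

Real chess programs search many moves ahead before applying such a scoring rule, but the principle is the same: a number, not an understanding.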
We do not perceive a chessboard
as a computer perceives it: rather, experts engage in a procedure called
''chunking''. If you ask an expert player to look briefly at a game in progress,
and then to reproduce the whole board, there will be subtle differences in
the two versions. The relationships of whole sets of pieces will be
reconstructed, so the facts of the game are the same, but a pawn or a knight
may be placed on the wrong square, still threatening the king as before,
but from a new position.
The binary logic
of a computer only allows two possibilities: right or wrong, with no
opportunity to offer a value for ''maybe'' or ''possibly'', and this will
remain a major stumbling block for artificial or machine
intelligence for some years to come.
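What a value for "maybe" might look like can be sketched with fuzzy logic, one attempt to soften this limitation: truth becomes a number anywhere between 0 and 1, combined with the standard min/max operators. The example values are illustrative.

```python
# Fuzzy logic assigns truth values anywhere in [0, 1], giving a machine
# a way to represent "maybe". Standard min/max fuzzy operators:

def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

# "The move is probably good AND the opponent is possibly distracted"
truth = fuzzy_and(0.8, 0.5)
print(truth)  # 0.5: neither fully true nor fully false
```

Whether such graded values amount to genuine judgement, rather than just more arithmetic, is exactly the question the article raises.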
Other games which have been explored include Go, draughts (checkers), and
a variety of card games, from Solitaire to poker, Blackjack, and bidding
in Bridge. In
each of these cases, the ''play'' is based on the fast calculation of
probabilities, giving the program an advantage over humans.
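The fast probability calculation described above can be sketched for blackjack. This is a deliberate simplification that assumes a full 52-card deck on every draw; a real program would track the cards already dealt.

```python
# The chance that one more card busts a blackjack hand, assuming a full
# 52-card deck (a simplification: real programs track dealt cards).

def bust_probability(hand_total):
    # Card ranks 1..13, four suits each; 10, J, Q, K are all worth 10
    values = [min(rank, 10) for rank in range(1, 14)]
    busting = sum(4 for v in values if hand_total + v > 21)
    return busting / 52

# On 16, any 6 or higher busts: 8 of the 13 ranks
print(round(bust_probability(16), 3))
```

A program that recomputes such figures after every card has a mechanical edge over a human estimating the same odds by feel.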
The most challenging aspects of artificial intelligence come in communication.
There must be a limited set of rules that we use to construct a sentence,
and another limited set of rules that we use to interpret a statement, but
what are these rules? Noam Chomsky offered us a perfectly constructed sentence,
''colourless green ideas sleep furiously'', but what would a machine intelligence
make of this? How could we stop machines from generating such statements,
unless they did so for valid reasons like Chomsky's?
On the other hand, what would speech recognition software make of ''I threw
a stare at the bare hare''? The distinction between ''I'' and ''eye'' may
be easy enough, but after that, the going gets hard. In most cases, the
threw/through problem would be resolved by deciding that a verb must follow
a pronoun, but the stair/stare, bear/bare and hair/hare problems would be
much harder. Humans deal with such problems by reviewing what has passed
in the conversation, identifying an attribute of stairs (they are usually
attached, and so hard to throw), and an attribute of bear hairs (you would
be unlikely to throw things at them) to work out what the sentence really means.
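The pronoun-then-verb rule from the paragraph above can be sketched as a filter over homophones. The tiny lexicon is illustrative, not a real dictionary, and it also shows where the rule runs out: stair and stare are both nouns, so grammar alone cannot separate them.

```python
# Among homophones, prefer the spelling whose part of speech fits
# what came before; after a pronoun, expect a verb.

LEXICON = {
    "threw": "verb", "through": "preposition",
    "stare": "noun", "stair": "noun",   # same class: the rule can't decide
    "bare": "adjective", "bear": "noun",
}

def pick(homophones, previous_class):
    """Return the candidates still plausible after the previous word."""
    if previous_class == "pronoun":
        verbs = [w for w in homophones if LEXICON[w] == "verb"]
        if verbs:
            return verbs
    return list(homophones)

print(pick(["threw", "through"], "pronoun"))   # ['threw']
print(pick(["stare", "stair"], "determiner"))  # ambiguous: both remain
```

Resolving the leftover ambiguity needs exactly the bank of word attributes the next paragraph describes.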
The problem here is amassing the bank of data about words and concepts, and
their attributes. We do it naturally, without thinking, right through our
lives, but nobody has yet worked out just what mental rules we use in classifying
the things we encounter. It is unlikely that any speech application will
ever use a particular word in a poem because it has a value of 305.2 in this
context. If one ever does, it is unlikely that the poetry so produced will
win any prizes.
Another interesting challenge for machine intelligence will be the recognition
of ''tone of voice''. The Internet community has long recognised this,
with the introduction of ''smileys'' or ''emoticons'', small collections
of punctuation marks such as :-) (a smile) or ;-) (a winking smile) to indicate
what written text cannot easily convey. Tall tale tellers, for example, indicate
that they are telling a tale by keeping an extremely straight face, and showing
no emotion at all. In this case, the machine may need to gather information
about the attributes of speakers as well as of words.
Visual recognition works rather better, with neural networks that have been
''trained'' on large numbers of writing samples showing quite good capability
to read hand-writing, but mistakes can still occur: visual equivalents
of the bare hare problem.
The future of artificial intelligence
Artificial intelligence is the branch of computing which looks at the ways
in which computers may match human intelligence. As we do not yet understand
what human intelligence is, or how it works, there is probably a long way
to go. The term itself was coined by John McCarthy in 1956, and he and Marvin
Minsky founded the AI laboratory at MIT (Massachusetts Institute of Technology)
in 1957. Other laboratories were also set up in the early 1960s.
Many of the people working in the area still adhere to ''strong AI'', and
believe that in almost no time at all, computers will be doing everything
humans do. Marvin Minsky, for example, referred once to people as ''computers
made of meat''. This school of thought considers that pain, love, consciousness,
a feeling for beauty and humour can all arise from a sufficiently complex
computer with sufficiently complex algorithms.
Others disagree, seeing the computer as an overgrown mechanical calculator,
able to carry out very complex calculations, but without any true
''understanding''. To this group, the computer which plays chess at grandmaster
level still has no knowledge of, or love of, the game of chess.
Most public attention has been given to the popular idea of ''robots'', usually
seen as electro-mechanical servants, able to carry out all of the useful
and annoying tasks like vacuum cleaning the carpet without taking the pet
hamster into the cleaner as well, and able to bath the baby without getting
soap in its eyes. In this case, the intelligence needed is mainly to distinguish
between objects while manipulating them, and while
''strong AI'' might be useful, it is not really necessary for such a robot
to function effectively.
The near future is rather more likely to see intelligent software, rather
than intelligent hardware, with neural nets, agents and expert systems being
at the top of everybody's wish list. One state-of-the-art project announced
during 1997 was to build robots which would collect slugs. The idea behind
this plan is that slugs move a small amount, making them more recognisable,
but they move slowly enough to be easy to catch, and there are plenty of them.
It probably did not escape the planners that nobody really loves slugs
all that much, so objections should be rare, even if people find out that
the slugs are to be ''digested'' to provide the energy to charge the robots'
batteries.
Neural nets- These are artificial networks (either in hardware, or
simulated by software) which mimic the biological networks of neurons in
the brains of animals. A large number of simple processors, each with a small
allocation of memory, are linked to each other by unidirectional connectors.
These connectors transmit numeric data, and each unit operates solely on
the basis of local data stored in memory, and on the inputs received.
As a general rule, neural networks need to be ''trained'', for example by
being given some hundreds of hand-written samples to ''read'', and then being
told what each sample actually says, so that the network is trained to decipher
a wider range of hand-writing. The network will ideally begin to generalise
from what it ''learns''.
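The training described above can be sketched with a single artificial neuron, the smallest possible network. The neuron adjusts its weights after each labelled example until its outputs match the labels; the toy task (learning logical AND from four samples) stands in for the hundreds of handwriting samples a real system would use.

```python
# A single neuron "trained" by the classic perceptron rule: nudge the
# weights towards whatever would have reduced the last error.

def step(x):
    return 1 if x > 0 else 0

def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron logical AND from four labelled samples
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(samples)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in samples])
# learns to reproduce the labels: [0, 0, 0, 1]
```

Generalisation, the ideal mentioned above, is what happens when a large network of such units gives sensible outputs on samples it was never trained on.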
Agents- An agent is a piece of software which runs in the background,
analysing patterns, and using these to make judgements about the best future
actions. The software would also be able to communicate with other agents
on your behalf, and using your known characteristics and constraints, book
you the best available airline ticket, restaurant table, or whatever. In
most media presentations, the agent is compared with a butler who knows your
tastes, and arranges things to suit you.
Agents are likely to feature first on the Internet, where they may be able
to select news
stories for you to read, refining the search criteria based on a profile
created from stories that you have previously identified as interesting. Such
an agent will probably have a small degree of randomness thrown in, so that
it occasionally offers you something completely novel. Agents are also likely
to be used in Internet marketing, where they will use data about the products
you have bought, along with information about things like your clothing sizes,
to alert you to good offers in selected areas.
Expert systems- These are computer programs which contain a knowledge
base of some sort, along with algorithms which allow the system to infer
new facts from what it knows already, and from new data as they are received.
Ideally, the expert system will work as well as a human expert, but the
performance of such systems usually varies wildly. The 1987 share market
crash, for example, is often blamed on computer programs which overreacted
to market fluctuations, making the situation worse, and triggering a stronger
reaction. This, of course, is one of the better examples of what computerists
call GIGO: Garbage In, Garbage Out.
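The inference step described above can be sketched as forward chaining: the system applies if-then rules repeatedly, adding each conclusion to its knowledge base until nothing new can be derived. The medical rules and facts here are invented for illustration.

```python
# Forward chaining over if-then rules: keep firing any rule whose
# conditions are all known, until no rule adds a new fact.

RULES = [
    ({"fever", "rash"}, "measles suspected"),
    ({"measles suspected", "recent travel"}, "notify health authority"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "rash", "recent travel"}))
```

Note how the second rule only fires because the first one ran: chains of such inferences are what let the system connect symptoms a human might not think to combine. GIGO applies with full force, since bad rules chain just as readily as good ones.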
Typical applications of expert systems include medical diagnosis, where the
system may well be better able to connect several unusual symptoms than a
trained medical practitioner who is looking at a more ''obvious'' explanation,
investment and financial planning, and even in the routine winnowing of large
numbers of suspects in a case of serious crime, such as serial rape or murder.
Several police forces are using expert systems to predict the areas where
crime is most likely at a given time, in order to locate spare officers in
those areas.
Case-based reasoning is often used to develop the ''skills'' of these systems.
Most experts are less than willing to write a set of ''rules'' for working
in their field of expertise, but more willing to describe the steps they
would take in tackling a specific case. By assembling a set of case studies,
programmers can fine-tune their systems. Some case-based systems aim to collect
an exhaustive set of cases to hold ready for use.
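The retrieval step at the heart of case-based reasoning can be sketched as finding the stored case that shares the most features with the new problem, then reusing the action that worked before. The fault-diagnosis cases below are invented for illustration.

```python
# Case retrieval: match a new problem against a library of past cases
# by counting shared features, and reuse the remembered action.

CASES = [
    ({"symptom:no-boot", "beeps"}, "reseat the memory"),
    ({"symptom:no-boot", "burning-smell"}, "replace the power supply"),
]

def closest_case(problem):
    """Return the (features, action) pair with the largest overlap."""
    return max(CASES, key=lambda case: len(case[0] & problem))

features, action = closest_case({"symptom:no-boot", "beeps", "new-ram"})
print(action)  # the remembered fix from the most similar past case
```

Real systems also adapt the retrieved solution to the new situation and store the outcome as a fresh case, which is how the library grows.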
The analogy here is with people who have a set of stories ready to tell,
and at a given stimulus, will tell one of those stories. As they do so, they
observe how effective that response was, and may learn from this to modify
their future behaviour.
The Frankenstein threat
The popular notion of creations turning on their creators remains alive and
kicking, especially where robots and computers with some degree of AI are involved.
The term ''robotics'' was first used in the March 1942 issue of Astounding
Science Fiction, where a character says ''Now, look, let's start with
the three fundamental Rules of Robotics''. The story was, of course, the
work of Isaac Asimov, who rejected the notion that robots must always destroy
their creators. Together with editor John W. Campbell, Asimov worked out
what would be needed in the way of safeguards, and the story ''Runaround''
goes on to list the rules, which remain a valid starting point for AI today:
1. A robot may not injure a human being, or, through
inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except when such
orders conflict with the First Law.
3. A robot must protect its own existence as long as such protection does
not conflict with the First or Second Law.
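The three laws form an ordered filter, and that ordering can be sketched in a few lines. This is a deliberately toy encoding: the attribute flags are invented, and deciding whether a real action "harms a human" is precisely the hard part the sketch leaves out.

```python
# Asimov's rules as a priority-ordered filter: an action is permitted
# only if no law, checked from highest priority down, forbids it.
# The flags are hypothetical; judging them is the real AI problem.

def permitted(action):
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False  # First Law
    if action.get("disobeys_order"):
        return False  # Second Law (orders themselves already vetted
                      # against the First Law)
    if action.get("self_destructive"):
        return False  # Third Law
    return True

print(permitted({"self_destructive": False}))  # True
print(permitted({"harms_human": True}))        # False
```

HAL's failure, discussed below, amounts to reshuffling this priority order so that mission preservation outranks human safety.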
An interesting addendum comes in the form of the ''Hobson block''. Proposed
by Kip S. Thorne in his book Black Holes and Time Warps, this is a
''block'' which may in future need to be placed on computers, in order to
prevent the computers from warning humans every time the humans act in such
a way as to endanger themselves. The lack of this block, suggests Thorne,
would make life lose its zest and richness. He is uncertain whether the idea
originated with him, and thinks it may have come from a science fiction source.
Even with Asimov's rules, the question of artificial intelligence running
amok is a popular one in fiction, from Frankenstein's monster through to
HAL, the renegade computer in Arthur C. Clarke's 2001: A Space Odyssey,
which promotes the Third Law of Robotics to first place in the list of
priorities (because it assumes that the most important law is to preserve
the spacecraft's ''mission'', even if that means killing humans who may elect
to interfere with it).
At the moment, this is an interesting theme for science fiction, but by the
year 2025, when Marvin Minsky believes that there could be real (''strong
AI'') machine intelligence (see The Turing option, a science fiction
novel by Minsky and Harry Harrison), the question will probably raise more
fears and worries than genetic engineering does today.
As a side issue, in the plot line of 2001: A Space Odyssey, HAL was
commissioned in very early 1997. HAL was able to do a number of things which
were not generally available at the start of 1997, including providing
intonations in speech and the ability to ''lip-read'' speech.
Interestingly, lip-reading is currently being explored as a way of resolving
apparent ambiguities in words such as ''me'' and ''knee'', which sound similar
but look very different when spoken; even so, no straightforward lip-reading
computer software is yet on the horizon.
Written by Peter Macinnis
©WebsterWorld Pty Ltd/contributors 2002