Monday 2 July 2012

Artificial Intelligence: A Beginner's Guide

I've finished reading 'Artificial Intelligence: A Beginner's Guide' by Blay Whitby, which I borrowed from Bristol Central Library, and here are my thoughts, along with some useful extracts and the recommended further reading (which is extensive).

The book provides a great starting point for someone (like me) who is new to artificial intelligence. Topics covered include knowledge-based (expert) systems, neural nets, evolutionary computing, the main problems facing AI, philosophical debates about the nature of consciousness and intelligence, holism, and the future of artificial intelligence.

The main success of this book is in displaying the field of AI as one in which anyone with a good idea can make a big impact, and the reader is encouraged, helped by stories of previous successes, to develop their own ideas. Everything is explained clearly, and the only vagueness arises in the philosophical chapters, but that's philosophy for you. There's a fair amount of humour sprinkled throughout the book, making reading it a pleasure. The chapter on neural nets is quite thorough and more specific than the rest of the book, which is nice.

A downside is the lack of detailed explanations of how the AI systems actually work (neural nets aside), and there isn't a single piece of code in the book.

There are a few pages relevant to chess playing, and alpha-beta pruning is explained well, but overall, little of the book was relevant to my extended project.

Extracts:

"when we look in detail at the way in which computers play chess we shall see that it is probably very different from the ways in which humans play chess." p4
"Nowadays it is possible to buy computers that play chess very well indeed in almost every mall or high street. This had led some people to assume that it must be easy to get programs to play chess or that chess is  a basically computational game. Neither of these assumptions is correct and they fail to do justice to the brilliance and perseverance of those AI pioneers who made the early breakthroughs in the field of computer game playing." p25
"In fact chess is an extremely difficult game to play using computational techniques. Firstly, it turns out that chess is an extremely good example of combinatorial explosion [...]. In the middle part of the average chess game the branching factor is about 36. That is to say that you have a choice of about 36 legal moves. [...] Your opponent can respond in about 36 ways to each of your moves. So if you want to consider your next move, you have a field of 1,296 moves to choose from. If you want to think about the move after that then the number of possible moves has risen to 1,679,616. So the number of moves to be considered continues to grow at an impossible rate even for the most powerful computer." p25-26
"When a game of chess starts there are about 10^123 (that is 10 followed by 123 noughts) possible board positions in the game. This is an alarmingly large number, more than the number of electrons in the known universe." p26
"[Arthur Samuel] introduced the idea of a static evaluation function. This is yet another heuristic in that it enables the program to take a guess about the best sort of move to make in a given situation. The idea is that the program looks at the board positions available from the present position and comes up with an evaluation as to how good each of them looks. This is a static evaluation in that it does not ask whether or not this board position will lead to a win, merely how good it looks. A board position, for example, in which one's opponent has fewer pieces looks relatively good and one in which it is your own pieces that have been taken looks relatively bad." p26-27
"If you consider your own playing of board games such as chess, you might be thinking at this point that this is still a rather wasteful way of doing things. You probably don't examine all the moves available to you at a given point in the game. Many will be obviously stupid. Well, a way of enabling programs not to waste time considering stupid moves was also devised. This is known as alpha-beta pruning. The pruning here is a horticultural metaphor suggesting that unproductive branches are being lopped of the search tree. When programs looks at possible board positions there are two classes that can be discounted. The first class is those that have high static evaluation functions but are unobtainable because the opposing player will not let the program move so as to attain them. The second class is those that are disastrous. If either of these situations is detected early in the search, then the search can be stopped at that point. There is no point, for example, in expanding the possible board positions which follow from a disastrous board position. This move is not going to be made and any effort put in to exploring just how bad things might go on to become is obviously wasted effort. Similarly, effort used to explore board positions that our opponent is capable of preventing us from attaining is the computational equivalent of day dreaming. Effort put into exploring how good things could be "if only" is also wasted effort". p27-28

Further Reading:

(Preface):
  • A good general AI textbook is Artificial Intelligence: A Modern Approach (Russell and Norvig, 2003). Since it adopts an agent-based approach to AI, it is an excellent starting point in pursuing many of the ideas in this book.
  • A less technical book about many early AI programs is Artificial Intelligence and Natural Man by Margaret Boden (1987).
(What is AI?):
  • An historical account of the early enthusiasm about AI in the US with many anecdotes and details of the personalities involved is provided in Pamela McCorduck's book, Machines Who Think (1979).
  • The best place to find out about Alan Turing and his achievements is in Alan Turing: The Enigma of Intelligence by Andrew Hodges (1983). Hodges also maintains a large and truly comprehensive Alan Turing website at http://www.turing.org.uk/turing/index.html
  • Turing's "Computing Machinery and Intelligence" (published originally in Mind in 1950) is an easy to read and non-technical paper. It has been published in many places - including a good collection of thought-provoking papers: The Mind's I (Hofstadter and Dennett, 1981).
  • A must-read on the amazing history of Bletchley Park is Britain's Best Kept Secret by Ted Enever (1994).
  • You can read all about the Loebner prize online at: http://www.loebner.net/Prizef/loebner-prize.html
  • A wonderful book which makes clear just how the same principles of aerodynamics apply to both birds and aircraft is The Simple Science of Flight: from Insects to Jumbo Jets by Henk Tennekes (1997).
(AI at work):
  • NASA has a tremendous commitment to all sorts of AI research. Details of many current AI projects can be found by exploring the website: http://www-aig.jpl.nasa.gov
  • Further details of how search works as an AI technique are in Thornton and du Boulay's Artificial Intelligence through Search (1992).
  • A good general introduction to the technology of knowledge-based systems is P. Jackson's Introduction to Expert Systems (1990).
  • Most AI textbooks cover chess-playing and Samuel's work. My favourite is Luger and Stubblefield's Artificial Intelligence: Structures and Strategies for Complex Problem Solving (1993).
  • There's a web-site which details many of the achievements of the Clementine data-mining program at: http://www.spss.com/spssbi/clementine/
(AI and biology):
  • Artificial neural nets are covered in general AI textbooks. A specific and readable introduction to them is provided in An Introduction to Neural Computing (Aleksander and Morton, 1990).
  • A good explanation of why there is much more to evolution than popular accounts suggest can be found in Darwin's Dangerous Idea (Dennett, 1995).
  • A good textbook which details these biologically inspired approaches to AI is Understanding Intelligence (Pfeifer and Scheier, 1999).
(Some challenges):
  • A witty and readable account of why the frame problem matters can be found in Cognitive Wheels by Dan Dennett (1984).
  • A non-technical introduction to what philosophers have to say about some of these questions is Tim Crane's The Mechanical Mind (1995).
  • Andy Clark explains the 007 principle in Microcognition (1989) but his later book Being There (1997) would perhaps make a better companion to this chapter.
  • John Searle describes the Chinese Room thought experiment in a number of places including Minds, Brains and Programs (1980); Minds, Brains, and Science (1991); and The Rediscovery of the Mind (1994).
  • John Lucas made his original claim in a 1961 paper which is included in Anderson, 1964 in the bibliography. Roger Penrose sets out a more detailed and wide-ranging attack on AI in The Emperor's New Mind (1989).
  • For an introduction to the enthusiasm surrounding Artificial Life the best book is Steven Levy's Artificial Life, The Quest for a New Creation (1993).
  • The paper in which Rod Brooks uses the 747 parable ("Intelligence Without Representation", 1991) is available online together with many details of the Cog project at: http://www.ai.mit.edu/people/brooks/
  • Rod Brooks's latest book, Robot: the Future of Flesh and Machines (2002) is a general-audience account of his approach and work.
  • Steve Grand sets out his stall in Creation: Life and How to Make it (2000). You can keep up to date with Steve's progress at http://www.cyberlife-research.com/people/steve/
(AI diffuses):
  • If you are interested in understanding the differences between human and artificial forms of intelligence then Geoffrey Miller's The Mating Mind is a very good starting point.
  • Consciousness has, over the last decade, become a large multi-disciplinary field of study. If you want to look into it further, I'd suggest Consciousness Explained (Dennett, 1993) as one way into the subject.
  • A book which takes a different approach, drawing more on human emotion and less on the computational metaphor, is The Search for Mind: A New Foundation for Cognitive Science by Seán Ó Nualláin (2002).
  • A good general overview of Cognitive Science and its relationship to AI is Minds, Brains, and Computers by Robert Harnish (2002).
(Present and future trends):
  • One place to pursue the social implications of AI further is in Reflections on AI: the Legal, Moral and Ethical Dimensions (Whitby, 1996).
  • Hans Moravec makes the case for robots taking over the world in Mind Children (1990).
  • Margaret Boden has written a book (The Creative Mind, 1990) which applies AI to artistic (and other) creativity. This has become the starting point for just about anybody who is interested in the relationship between AI and Art.
  • Stelarc has a website at: http://www.stelarc.va.com.au/ This gives a taste of his work and thinking.
