Monday 12 November 2012

Claude Shannon Paper

I read the paper 'Programming a Computer for Playing Chess' by Claude Shannon (1949). I thought it would be difficult to read, but it was fine, and certainly very useful from a theory point of view. It's remarkable how relevant most of it is today, more than 60 years later.

Download link

Move Generation

Move generation is sorted, apart from special cases like castling, en passant and not allowing moves into check. I have to be careful with these: they're rare cases, so if I implement them badly they'll slow the search down for next to no gain. Here are some pictures, with green squares illustrating the available moves for a certain piece:



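To give a flavour of how the generation works, here's a minimal sketch of sliding-piece (rook) moves on the border-square board I'm planning to use (see the board representation post from July). I'm assuming the usual 120-square layout here, and the empty-board set-up and square values are just placeholders for the sketch, not the engine's real ones.

    Imports System
    Imports System.Collections.Generic

    Module RookMovesSketch
        Const OffBoard As Integer = -1
        Const EmptySquare As Integer = 0

        ' Index changes for the four rook directions on a 10-wide board.
        Private ReadOnly RookDirections As Integer() = {-10, -1, 1, 10}

        ' Lists the squares a rook on fromSquare could move to. The scan in each
        ' direction stops at the first non-empty square, and the border squares
        ' guarantee it stops at the edge without any separate bounds checking.
        Function RookDestinations(board As Integer(), fromSquare As Integer) As List(Of Integer)
            Dim destinations As New List(Of Integer)
            For Each direction In RookDirections
                Dim target As Integer = fromSquare + direction
                While board(target) = EmptySquare
                    destinations.Add(target)
                    target += direction
                End While
            Next
            Return destinations
        End Function

        Sub Main()
            ' An empty board: the 8x8 playing area surrounded by OffBoard padding.
            Dim board(119) As Integer
            For i As Integer = 0 To 119
                board(i) = OffBoard
            Next
            For rank As Integer = 0 To 7
                For file As Integer = 0 To 7
                    board(21 + file + rank * 10) = EmptySquare
                Next
            Next
            ' A rook on e4 (index 55) has 14 destinations on an otherwise empty board.
            Console.WriteLine(RookDestinations(board, 55).Count)
        End Sub
    End Module

Captures and blocking pieces would need a little more logic (a square holding an enemy piece ends the scan but still counts as a destination), but the border trick stays the same.
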
Monday 5 November 2012

New Pieces Icons

I've found some good-looking icons to replace my horrible home-made icons for the chess pieces.

Look how pretty they are:












The icons are from Clker, where all the images are public domain.

Sunday 4 November 2012

First Version of Something

I finally got round to making an actual thing:

All it does so far is let the user place pieces and remove them (all the pieces are included, not just white queens).

Making this is quite difficult, because you have to be very forward-thinking: you don't want to implement a clumsy mechanism early on, because it'll probably make things much harder later in the project.

The next thing to do is move generation, which I'll then test thoroughly for as many different configurations as I can be bothered to do. If I can implement it successfully, I may use Perft, which is an algorithm designed to test the move generation system.
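
For reference, here's roughly what Perft looks like. GenerateMoves, MakeMove and UndoMove below are fake stand-ins of my own (every position pretends to have exactly three legal moves) so the sketch actually runs; the real engine would supply its own versions, and the totals from the starting position could then be compared against published Perft tables to catch missing or illegal moves.

    Imports System
    Imports System.Collections.Generic

    Module PerftSketch
        ' Fake stand-ins, just so the sketch runs: pretend every position has
        ' exactly three legal moves and that making a move changes nothing.
        Function GenerateMoves(board As Integer()) As List(Of Integer)
            Return New List(Of Integer) From {1, 2, 3}
        End Function

        Sub MakeMove(board As Integer(), move As Integer)
            ' The real engine would update the board here...
        End Sub

        Sub UndoMove(board As Integer(), move As Integer)
            ' ...and restore it here.
        End Sub

        ' Counts every position reachable in exactly 'depth' plies.
        Function Perft(board As Integer(), depth As Integer) As Long
            If depth = 0 Then Return 1
            Dim nodes As Long = 0
            For Each move In GenerateMoves(board)
                MakeMove(board, move)
                nodes += Perft(board, depth - 1)
                UndoMove(board, move)
            Next
            Return nodes
        End Function

        Sub Main()
            ' With a constant branching factor of 3, Perft(4) counts 3 ^ 4 = 81 leaves.
            Console.WriteLine(Perft(New Integer(119) {}, 4))
        End Sub
    End Module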

After that, I'll make sure the whole thing is optimised as much as reasonably possible, and that everything is done properly (as in, no messy shortcuts that may cause problems in the future).

Wednesday 31 October 2012

And Another Book

It’s hard to find books that cover chess AIs. I read ‘The Binary Revolution’ by Neil Barrett. It would have been very useful to read this before I did Computing at AS level, as it covers a lot of the content in an easier way. There was a small bit on chess at the end, which I may as well copy here:


“The principle of playing chess was actually found to be reasonably straightforward for a computer. At any stage of the game, each player has only a limited set of available moves, creating a small set of alternative board positions. From each one of these, a further set of moves can be made and board positions created; and from those again, a small set of yet further moves is possible. At each stage, a ‘tree’ of possible board positions is produced. Chess programs work by producing a complete set of ‘If I do this, then you can do this, and then I can do this’ sequences of anticipated moves; the number of future moves calculated is called the ‘depth’. For each of the board positions then produced, the program calculates a score for itself and for its opponent; it chooses the move which minimizes the opponent’s advantage whilst maximizing its own. This is a strategy called ‘minimax’, created by Claude Shannon in 1949 and perfected by Alan Turing the following year. The first computer program written to execute minimax for chess was produced in 1956 at Los Alamos, followed two years later by a more powerful system at IBM.
Using minimax and a depth of only five moves, chess programs can trounce all but the most experienced of club players; given a slightly greater depth, the programs can beat all but the most experienced chess grandmasters. In 1997, when a dedicated computer system developed by IBM (called ‘Deep Blue’ and capable of examining 200 million chess positions per second) was allowed a depth of 14 moves, it beat the then world champion, Gary Kasparov. Although chess is indeed a striking example of human intelligence, and although computers are not programmed to play in anything like the same way as humans apparently do, it seems to be at least one human-like thing that can better performed by computers.”

That “five moves” bit is interesting; I hadn't heard that before.
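
To check I've got the minimax idea straight, here's a toy sketch of it over a hand-made game tree rather than real chess positions. The tree, the scores and the Node class are all made up for the example; in a chess program the recursion would stop at a fixed depth and call a static evaluation of the board instead of reading scores off the leaves.

    Imports System
    Imports System.Collections.Generic

    Module MinimaxToy
        ' A node is either a leaf holding a score (from the first player's point
        ' of view) or an internal node with child positions.
        Class Node
            Public Score As Integer
            Public Children As New List(Of Node)
        End Class

        Function Leaf(score As Integer) As Node
            Dim n As New Node
            n.Score = score
            Return n
        End Function

        ' The maximising player picks the child with the highest minimax value;
        ' the minimising player (the opponent) picks the lowest.
        Function Minimax(node As Node, maximising As Boolean) As Integer
            If node.Children.Count = 0 Then Return node.Score
            Dim best As Integer = If(maximising, Integer.MinValue, Integer.MaxValue)
            For Each child In node.Children
                Dim value As Integer = Minimax(child, Not maximising)
                best = If(maximising, Math.Max(best, value), Math.Min(best, value))
            Next
            Return best
        End Function

        Sub Main()
            ' Two moves for us; the opponent has two replies to each.
            Dim root As New Node
            Dim a As New Node, b As New Node
            a.Children.Add(Leaf(3)) : a.Children.Add(Leaf(5))
            b.Children.Add(Leaf(9)) : b.Children.Add(Leaf(1))
            root.Children.Add(a) : root.Children.Add(b)
            ' The opponent would answer a with 3 and b with 1, so we prefer a.
            Console.WriteLine(Minimax(root, True)) ' 3
        End Sub
    End Module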

Thursday 20 September 2012

Made a Google calendar

I've set a rough schedule up on Google calendar. Hopefully I'll remember to keep to it.

Saturday 28 July 2012

Board Representation

I'm going to use a one-dimensional array with border squares to represent my board, as described here:

http://www.cis.uab.edu/hyatt/boardrep.html

0x88 representation seems to be the most commonly used, but I don't believe it's worth me using it. The difference it would make seems minimal, and I would have to work with bitwise operations, which I'm not at all comfortable with. The 0x88 representation is described at these links, as well as at the first link:

http://diovo.com/2012/01/chess-programming-the-0x88-board-representation/
http://mediocrechess.blogspot.co.uk/2006/12/0x88-representation.html

I can't really build a prototype for the board representation on its own; I'd have to work out move generation first, so I can do something with it.
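
Still, here's a minimal sketch of what I have in mind, assuming the usual 120-square layout: the 8x8 playing area sits inside a one-dimensional array arranged as 10 columns by 12 rows, with the padding squares marked as off-board. Two spare ranks top and bottom mean even a knight jump from the edge still lands inside the array, so move generation never has to bounds-check. The constants and the file/rank mapping are just my placeholders for now.

    Imports System

    Module BoardRepresentation
        Const OffBoard As Integer = -1
        Const EmptySquare As Integer = 0

        ' Maps file (0-7 for a-h) and rank (0-7 for ranks 1-8) to an array index.
        Function SquareIndex(file As Integer, rank As Integer) As Integer
            Return 21 + file + rank * 10
        End Function

        ' Builds an empty board with every border square marked OffBoard.
        Function NewBoard() As Integer()
            Dim board(119) As Integer
            For i As Integer = 0 To 119
                board(i) = OffBoard
            Next
            For rank As Integer = 0 To 7
                For file As Integer = 0 To 7
                    board(SquareIndex(file, rank)) = EmptySquare
                Next
            Next
            Return board
        End Function

        Sub Main()
            Dim board As Integer() = NewBoard()
            Console.WriteLine(SquareIndex(4, 3))        ' e4 lands on index 55
            Console.WriteLine(board(SquareIndex(4, 3))) ' 0: an empty playing square
            Console.WriteLine(board(0))                 ' -1: border
        End Sub
    End Module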

Plan and Schedule

I figured it's about time I wrote a rough plan/schedule for this project.

August and September: Read more books, papers and articles on AI, specifically chess AI, and the different methods I could use for my program. I'd like to do some small tests along the way to try out certain ideas.
October: Start building the program, putting all the ideas together.
November to January: Keep building, testing, improving, all that stuff.

This plan came out even vaguer than I'd expected. I'm not sure what else I can do though, that really is all there is to my project: do research, build the program.

Virtual Organisms

I've read 'Virtual Organisms' by Mark Ward.

There is next to nothing in the book that's relevant to this project, except for a couple of passing references to chess playing computers. I'll quickly summarise the book, so I can refer back to this if I need to, for some reason. I'm aware this post is an incoherent mess, but it's really just for my reference.

Chapter 1) Biological stuff, the theories about how life started on Earth. DNA, RNA, genes, that sort of thing.
Chapter 2) Von Neumann's automata ideas, cellular automata, Conway's Game of Life, CA categorisation (classes I-IV), Sugarscape, quantum cellular automata to process information and make smaller CPUs.
Chapter 3) Turing machines, Gödel's incompleteness theorem/Hilbert's problem, 'On Computable Numbers', theorem-proving programs, Shakey, William Grey Walter's tortoises (exploring what complexity of behaviour can be achieved with the smallest number of elements), the previous belief that thinking involves manipulating symbols that represent the world. Tilden's BEAM robots.
Chapter 4) Genetic algorithms (Tierra, Avida) and evolution. Ants/agents in BT's CSS program.
Chapter 5) FPGAs (Field Programmable Gate Arrays): Adrian Thompson explored how evolution via GAs can create useful circuits on these. A working tone discriminator in 32 cells, when 100 was thought to be the minimum. Thompson doesn't know how it works.
Chapter 6) Humans playing God; better be careful.



And a quote from Kasparov on Deep Blue: "I could feel — I could smell — a new kind of intelligence across the table."


It was quite an interesting book.

Tuesday 3 July 2012

Tic Tac Toe Board Storage Idea

I'm using a Tic Tac Toe (Noughts and Crosses) AI to test certain ideas for my chess AI, because Tic Tac Toe is an awful lot easier to build. Whilst designing this, I was thinking about the best way to represent the board and where the noughts and crosses are on it. My first thought was to have two 2D arrays, one for each player, with each position in the array representing a position on the board. The values would be true or false: true if that player had a piece in that position, and false if they didn't. I thought this would be a better idea than having one array with either "X", "O" or "nothing" in each position, as booleans are faster to compare than strings.

Checking for a win with this system involves testing the 8 possible lines in which there could be three of the same piece in a row, and doing this for each player. This means (8 * 3 * 2) = 48 checks need to be performed on the board.
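
Here's a quick sketch of that win check, just as one way of writing it while the exact layout isn't settled: each player gets a 3x3 Boolean grid, and the rows, columns and diagonals are tested square by square.

    Imports System

    Module BooleanTicTacToe
        ' True means this player occupies that square; one 3x3 grid per player.
        Function HasWon(player As Boolean(,)) As Boolean
            For i As Integer = 0 To 2
                ' Row i and column i.
                If player(i, 0) AndAlso player(i, 1) AndAlso player(i, 2) Then Return True
                If player(0, i) AndAlso player(1, i) AndAlso player(2, i) Then Return True
            Next
            ' The two diagonals.
            Return (player(0, 0) AndAlso player(1, 1) AndAlso player(2, 2)) OrElse
                   (player(0, 2) AndAlso player(1, 1) AndAlso player(2, 0))
        End Function

        Sub Main()
            Dim crosses(2, 2) As Boolean
            ' Crosses down the middle column.
            crosses(0, 1) = True : crosses(1, 1) = True : crosses(2, 1) = True
            Console.WriteLine(HasWon(crosses)) ' True
        End Sub
    End Module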

I came up with another system (which is almost definitely not original, but I hadn't heard of anything like it before), in which each player has a number that represents their board. Both numbers start at 1. The positions on the board are represented as prime numbers, as shown in the picture below:

When a player puts a piece in a position, their number is multiplied by the prime number representing that place. So if a player were to put a piece in the middle, they would multiply their number by 11. Because every number has a unique set of prime factors, every board layout can be represented by two numbers: one for the positions of the noughts, and the other for the positions of the crosses. It should be noted that this approach is heavily influenced by Gödel numbering (http://en.wikipedia.org/wiki/Gödel_numbering).


In the board opposite, the number representing the crosses would be (3 * 7 * 13 * 23) = 6279. The number representing the noughts would be (2 * 11 * 19) = 418. Obviously, this approach greatly reduces the number of values that need to be stored in the computer's memory in order to represent the board. However, more interesting is the way in which this can be used to check for a win. Say you wanted to check whether a certain player had won down the middle column. If they had, their number would at some point have been multiplied by 3, 11 and 19, so (3 * 11 * 19) would be a factor of their number. Every line can be checked by this method, in only one operation, instead of the three it would take with the conventional approach.
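
And here's a minimal sketch of the prime method (not the actual test program mentioned below). I've assigned the primes row by row, 2, 3, 5 across the top, 7, 11, 13 across the middle and 17, 19, 23 across the bottom, which matches the middle square being 11 and the middle column being 3, 11 and 19. Each winning line is stored as the product of its three primes, so the whole check is one division per line.

    Imports System

    Module PrimeTicTacToe
        ' Prime for each square, top-left to bottom-right.
        Private ReadOnly Primes As Integer() = {2, 3, 5, 7, 11, 13, 17, 19, 23}

        ' The eight winning lines, each stored as the product of its three primes.
        Private ReadOnly LineProducts As Long() = {
            2 * 3 * 5, 7 * 11 * 13, 17 * 19 * 23,
            2 * 7 * 17, 3 * 11 * 19, 5 * 13 * 23,
            2 * 11 * 23, 5 * 11 * 17
        }

        ' Placing a piece multiplies the player's number by that square's prime.
        Function Place(playerNumber As Long, square As Integer) As Long
            Return playerNumber * Primes(square)
        End Function

        ' The player has completed a line if that line's product divides their
        ' number exactly: one Mod per line, rather than three square checks.
        Function HasWon(playerNumber As Long) As Boolean
            For Each product In LineProducts
                If playerNumber Mod product = 0 Then Return True
            Next
            Return False
        End Function

        Sub Main()
            ' The crosses from the example board: 3 * 7 * 13 * 23 = 6279. No line yet.
            Console.WriteLine(HasWon(6279)) ' False
            ' Noughts down the middle column: 3 * 11 * 19 = 627.
            Console.WriteLine(HasWon(Place(Place(Place(1, 1), 4), 7))) ' True
        End Sub
    End Module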

What is surprising is just how much more efficient this prime method is. I wrote a short program in Visual Basic that generates a number of tic tac toe boards with noughts and crosses randomly distributed in them, and then the user can use either the conventional boolean array method, or the prime method, to check all of the boards for which player has won, if any. (The source code for this program can be found at  http://pastebin.com/MpnrPKjs). In all cases, both methods returned the same results for the number of wins for each player, so it's safe to say that they're accurate. Here are the results of the tests:



Now, it's difficult to see how this exact method could be of use in my chess AI: on an 8x8 board with six different types of piece in two colours, any system of numbers like this would be far too large to work with. However, this experiment does demonstrate the importance of using efficient methods, especially ones that aren't so obvious to begin with. The chess AI will involve looking ahead across many possible moves and analysing the positions; the more efficient the methods used, the quicker this can be done, and the further ahead the AI can look.

Monday 2 July 2012

Artificial Intelligence: A Beginner's Guide

I've finished reading 'Artificial Intelligence: A Beginner's Guide' by Blay Whitby, which I borrowed from Bristol Central Library. Here are my thoughts, as well as some useful extracts and the recommended further reading (which is extensive).

The book provides a great starting point for someone (like me) who is new to artificial intelligence. Topics covered include knowledge-based (expert) systems, neural nets, evolutionary computing, the main problems facing AI, philosophical debates about the nature of consciousness and intelligence, holism, and the future of artificial intelligence.

The main success of this book is in displaying the field of AI as one in which anyone with a good idea can make a big impact, and the reader is encouraged, helped by stories of previous successes, to develop their own ideas. Everything is explained clearly, and the only vagueness arises in the philosophical chapters, but that's philosophy for you. There's a fair amount of humour sprinkled throughout the book, making reading it a pleasure. The chapter on neural nets is quite thorough and more specific than the rest of the book, which is nice.

A downside is the lack of any detailed explanation of the workings of the AI systems (neural nets excepted), and there's not a single piece of code in the book.

There are a few pages relevant to chess playing, and alpha-beta pruning is explained well, but overall, little of the book was relevant to my extended project.

Extracts:

"when we look in detail at the way in which computers play chess we shall see that it is probably very different from the ways in which humans play chess." p4
"Nowadays it is possible to buy computers that play chess very well indeed in almost every mall or high street. This had led some people to assume that it must be easy to get programs to play chess or that chess is  a basically computational game. Neither of these assumptions is correct and they fail to do justice to the brilliance and perseverance of those AI pioneers who made the early breakthroughs in the field of computer game playing." p25
"In fact chess is an extremely difficult game to play using computational techniques. Firstly, it turns out that chess is an extremely good example of combinatorial explosion [...]. In the middle part of the average chess game the branching factor is about 36. That is to say that you have a choice of about 36 legal moves. [...] Your opponent can respond in about 36 ways to each of your moves. So if you want to consider your next move, you have a field of 1,296 moves to choose from. If you want to think about the move after that then the number of possible moves has risen to 1,679,616. So the number of moves to be considered continues to grow at an impossible rate even for the most powerful computer." p25-26
"When a game of chess starts there are about 10^123 (that is 10 followed by 123 noughts) possible board positions in the game. This is an alarmingly large number, more than the number of electrons in the known universe." p26
"[Arthur Samuel] introduced the idea of a static evaluation function. This is yet another heuristic in that it enables the program to take a guess about the best sort of move to make in a given situation. The idea is that the program looks at the board positions available from the present position and comes up with an evaluation as to how good each of them looks. This is a static evaluation in that it does not ask whether or not this board position will lead to a win, merely how good it looks. A board position, for example, in which one's opponent has fewer pieces looks relatively good and one in which it is your own pieces that have been taken looks relatively bad." p26-27
"If you consider your own playing of board games such as chess, you might be thinking at this point that this is still a rather wasteful way of doing things. You probably don't examine all the moves available to you at a given point in the game. Many will be obviously stupid. Well, a way of enabling programs not to waste time considering stupid moves was also devised. This is known as alpha-beta pruning. The pruning here is a horticultural metaphor suggesting that unproductive branches are being lopped of the search tree. When programs looks at possible board positions there are two classes that can be discounted. The first class is those that have high static evaluation functions but are unobtainable because the opposing player will not let the program move so as to attain them. The second class is those that are disastrous. If either of these situations is detected early in the search, then the search can be stopped at that point. There is no point, for example, in expanding the possible board positions which follow from a disastrous board position. This move is not going to be made and any effort put in to exploring just how bad things might go on to become is obviously wasted effort. Similarly, effort used to explore board positions that our opponent is capable of preventing us from attaining is the computational equivalent of day dreaming. Effort put into exploring how good things could be "if only" is also wasted effort". p27-28

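That description of alpha-beta pruning is worth pinning down in code. Here's a toy sketch of it over a small hand-made game tree (the tree, the scores and the Node class are made up for the example, not anything from the book or my engine); it also counts how many leaf positions actually get evaluated, to show that some branches really are never explored.

    Imports System
    Imports System.Collections.Generic

    Module AlphaBetaToy
        Class Node
            Public Score As Integer
            Public Children As New List(Of Node)
        End Class

        Private LeavesEvaluated As Integer = 0

        Function Leaf(score As Integer) As Node
            Dim n As New Node
            n.Score = score
            Return n
        End Function

        ' Minimax with alpha-beta pruning. Alpha is the best score the maximiser
        ' is already guaranteed, beta the best the minimiser is; once alpha >= beta
        ' the remaining children cannot affect the result, so they are skipped.
        Function AlphaBeta(node As Node, alpha As Integer, beta As Integer, maximising As Boolean) As Integer
            If node.Children.Count = 0 Then
                LeavesEvaluated += 1
                Return node.Score
            End If
            For Each child In node.Children
                Dim score As Integer = AlphaBeta(child, alpha, beta, Not maximising)
                If maximising Then
                    alpha = Math.Max(alpha, score)
                Else
                    beta = Math.Min(beta, score)
                End If
                If alpha >= beta Then Exit For ' prune the rest of this node's children
            Next
            Return If(maximising, alpha, beta)
        End Function

        Sub Main()
            ' A depth-two tree: three moves for us, each with three opponent replies.
            Dim root As New Node
            Dim replies()() As Integer = {
                New Integer() {3, 12, 8},
                New Integer() {2, 4, 6},
                New Integer() {14, 5, 2}
            }
            For Each scores In replies
                Dim moveNode As New Node
                For Each s In scores
                    moveNode.Children.Add(Leaf(s))
                Next
                root.Children.Add(moveNode)
            Next
            Console.WriteLine(AlphaBeta(root, Integer.MinValue, Integer.MaxValue, True)) ' 3
            Console.WriteLine(LeavesEvaluated) ' 7: two of the nine leaves were pruned
        End Sub
    End Module
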
Further Reading:

(Preface):
  • A good general AI textbook is Artificial Intelligence: A Modern Approach (Russell and Norvig, 2003). Since it adopts an agent-based approach to AI, it is an excellent starting point in pursuing many of the ideas in this book.
  • A less technical book about many early AI programs is Artificial Intelligence and Natural Man by Margaret Boden (1987).
(What is AI?):
  • An historical account of the early enthusiasm about AI in the US with many anecdotes and details of the personalities involved is provided in Pamela McCorduck's book, Machines Who Think (1979).
  • The best place to find out about Alan Turing and his achievements is in Alan Turing: The Enigma of Intelligence by Andrew Hodges (1983). Hodges also maintains a large and truly comprehensive Alan Turing website at http://www.turing.org.uk/turing/index.html
  • Turing's "Computing Machinery and Intelligence" (originally published in Mind in 1950) is an easy-to-read and non-technical paper. It has been published in many places, including a good collection of thought-provoking papers: The Mind's I (Hofstadter and Dennett, 1981).
  • A must-read on the amazing history of Bletchley Park is Britain's Best Kept Secret by Ted Enever (1994).
  • You can read all about the Loebner prize online at: http://www.loebner.net/Prizef/loebner-prize.html
  • A wonderful book which makes clear just how the same principles of aerodynamics apply to both birds and aircraft is The Simple Science of Flight: from Insects to Jumbo Jets by Henk Tennekes (1997).
(AI at work):
  • NASA has a tremendous commitment to all sorts of AI research. Details of many current AI projects can be found by exploring the website: http://www-aig.jpl.nasa.gov
  • Further details of how search works as an AI technique are in Thornton and du Boulay's Artificial Intelligence through Search (1992).
  • A good general introduction to the technology of knowledge-based systems is P. Jackson's Introduction to Expert Systems (1990).
  • Most AI textbooks cover chess-playing and Samuel's work. My favourite is Luger and Stubblefield's Artificial Intelligence: Structures and Strategies for Complex Problem Solving (1993).
  • There's a web-site which details many of the achievements of the Clementine data-mining program at: http://www.spss.com/spssbi/clementine/
(AI and biology):
  • Artificial neural nets are covered in general AI textbooks. A specific and readable introduction to them is provided in An Introduction to Neural Computing (Aleksander and Morton, 1990).
  • A good explanation of why there is much more to evolution than in popular accounts can be found in Dennett's Darwin's Dangerous Idea (1995).
  • A good textbook which details these biologically inspired approaches to AI is Understanding Intelligence (Pfeifer and Scheier, 1999).
(Some challenges):
  • A witty and readable account of why the frame problem matters can be found in Cognitive Wheels by Dan Dennett (1984).
  • A non-technical introduction to what philosophers have to say about some of these questions is Tim Crane's The Mechanical Mind (1995).
  • Andy Clark explains the 007 principle in Microcognition (1989) but his later book Being There (1997) would perhaps make a better companion to this chapter.
  • John Searle describes the Chinese Room thought experiment in a number of places including Minds, Brains and Programs (1980); Minds, Brains, and Science (1991); and The Rediscovery of the Mind (1994).
  • John Lucas made his original claim in a 1961 paper which is included in Anderson, 1964 in the bibliography. Roger Penrose sets out a more detailed and wide-ranging attack on AI in The Emperor's New Mind (1989).
  • For an introduction to the enthusiasm surrounding Artificial Life the best book is Steven Levy's Artificial Life, The Quest for a New Creation (1993).
  • The paper in which Rod Brooks uses the 747 parable ("Intelligence Without Representation", 1991) is available online together with many details of the Cog project at: http://www.ai.mit.edu/people/brooks/
  • Rod Brooks's latest book, Robot: the Future of Flesh and Machines (2002) is a general-audience account of his approach and work.
  • Steve Grand sets out his stall in Creation: Life and How to Make it (2000). You can keep up to date with Steve's progress at http://www.cyberlife-research.com/people/steve/
(AI diffuses):
  • If you are interested in understanding the differences between human and artificial forms of intelligence then Geoffrey Miller's The Mating Mind is a very good starting point.
  • Consciousness has, over the last decade, become a large multi-disciplinary field of study. If you want to look into it further, I'd suggest Consciousness Explained (Dennett, 1993) as one way into the subject.
  • A book which takes a different approach, drawing more on human emotion and less on the computational metaphor, is The Search for Mind: A New Foundation for Cognitive Science by Seán Ó Nualláin (2002).
  • A good general overview of Cognitive Science and its relationship to AI is Minds, Brains, and Computers by Robert Harnish (2002).
(Present and future trends):
  • One place to pursue the social implications of AI further is in Reflections on AI: the Legal, Moral and Ethical Dimensions (Whitby, 1996).
  • Hans Moravec makes the case for robots taking over the world in Mind Children (1990).
  • Margaret Boden has written a book (The Creative Mind, 1990) which applies AI to artistic (and other) creativity. This has become the starting point for just about anybody who is interested in the relationship between AI and Art.
  • Stelarc has a website at: http://www.stelarc.va.com.au/ This gives a taste of his work and thinking.

Sunday 1 July 2012

Wikipedia Links

Here are quite a lot of Wikipedia articles that give a decent overview of some of the ideas involved and, as always on Wikipedia, have a huge number of citations. Wikipedia is renowned for being inaccurate, but I'm only using these pages to get a general understanding of the concepts. If I wanted to investigate a specific piece of information, I would follow the citation for it and evaluate the trustworthiness of that source.

General chess stuff:
Algorithms:
Notation:
Board storage:

Friday 29 June 2012

Thinking Machine Website

I found an interesting website. You can play chess against it, and it shows what the computer is thinking as you play, in the form of pretty lines going across the board. It's a lovely idea, and it demonstrates the complexity of the AI well. Don't try playing a full game though; it takes ages to actually move.

http://www.turbulence.org/spotlight/thinking/chess.html

Initial Idea

For my extended project, I'm going to design and program a chess AI: a program capable of playing chess against a human player. I've looked around on the internet (mainly at a lot of Wikipedia articles), and I reckon I can make it and implement quite a few interesting features. It fits in pretty well with the subjects I'm doing; there's definitely maths involved, although it would be a lot more relevant if I were doing Computer Science at university. Ah well.

I've programmed a few simple games, such as Connect Four, so making a playable chess game shouldn't be a problem, but making the AI actually work to a decent level, and not take hours to make a move, will be a good challenge.