This sounds like something of an "all-win" situation, but it needed a significant and focussed investment of effort to create this generic resource in the first place.
First we should examine the underlying mechanisms of game engine tree search: the commonly used minimax and alpha-beta algorithms. These have become the bedrock of many game engines, and the complexity of their behaviour has been the source of a huge wealth of academic papers describing variations and enhancements of this core technology. Looking at these in turn:
The basis of the minimax algorithm is simply to minimise the maximum possible loss by choosing the move that offers your opponent the minimum gain in reply.
In this simple example player A can choose between 2 moves, and player B has 2 replies to each. Player A's first move A1 has a value of 5: player B can choose moves B1 and B2, leading to 9 points or 5 points with respect to A, so B chooses B2 with the lower value of 5. Player A's move A2 leads to player B's moves B3 and B4, with values 3 and 1, of which B will choose B4. Therefore the second move A2 is worth only 1 to player A. The move chosen is therefore A1, with value 5.
This example is only to depth 2, but the same procedure applies at any depth, potentially creating huge search trees.
This minimax procedure can be expressed with the following program:
You might well ask: what is all the fuss about? This rather disarmingly short piece of code has all that is required to do a minimax evaluation. Of course it needs a move generator to deliver legal moves in next_move(), and the other support functions for evaluation and making moves, but this is all you need to determine the minimax value of a position. What it does not do here is record the best move, but that is a simple modification. The parameter "n" is the search depth.
Looking at this, you can see that it depends heavily on recursion. Note that the returned values are negated as the function returns, so that each side works with a score relative to itself (unlike the diagram above). Note also that the search is depth-first, so a depth 6 search will consider the first move in each list down to depth 6 (1,1,1,1,1,1). It will then consider the next move at depth 6 (1,1,1,1,1,2), then (1,1,1,1,1,3) and so on. Therefore many moves will have been assessed before the next move at the root (first) depth is examined.
This leads on to alpha beta, which is a powerful optimisation of minimax.
In the diagram above you can see that move B4 is shown as a dotted line. This is a beta cut, made by the alpha-beta algorithm. Its purpose is to avoid evaluating moves that have already been proven not worth considering. "Proven" in this context means it has been shown that assessing that branch cannot back up a better value.
In the diagram above, you can see that after evaluating move B3, player B can already force the position to one of only value 3. However player A can already play move A1, which is proven to have value 5. Player B has therefore already shown that move A2 cannot be better than move A1, so searching move B4 is pointless. Move B3 has caused a beta cut.
This trick causes huge savings. In a search with 30 moves at each position to depth 6, there are 30**6 = 729 million possible positions. With a perfect ideal alpha beta, this is reduced to roughly 2x30**(6/2) = 54,000 positions, a gigantic saving.
Now, without experience, trying to write code to implement this optimisation sounds like a bit of a brain-teaser. However minimax with alpha beta is implemented by the following code:
Once again, this apparently complex algorithm is reduced to just a few lines of code. However this is misleading as the behaviour of this simple algorithm shows great complexity, perhaps analogous to the way that fractals generate complex structures with just a very simple algorithm.
Of course you will find few, if any, programs that implement alpha beta in such simple form. In practice it needs many enhancements to deliver high performance. A key aspect of this is move ordering: with perfect move ordering you can achieve the optimum savings, but move ordering requires careful management. Five key methods in wide use are:
The Killer Heuristic - which saves the best move at each depth, to try again later
Hash tables - which record the best move found in any position, to try first if that position is reached again
Plausibility analysis - which examines the moves list and orders moves on merit
Forward Pruning - which performs "unsafe" removal of moves that probably need not be searched
Iterative Deepening - which searches to depth 1, then repeats for depth 2 and so on up to depth "n". This allows the search to discover better moves earlier and therefore search deeper trees more efficiently.
Minimax Alpha Beta - AI Factory style
The minimax used by AI Factory is highly developed, drawing on 25 years' direct practical experience, backed up by the thousands of academic papers published on this topic. It contains the key elements above, but also includes many other techniques, as well as a proprietary algorithm that searches only probable variations. The latter technique is under wraps, and is crucial to the success of AI Factory's game engines.
The implementation also takes one further great step, which was referenced in the earlier article "Writing cpu intensive AI without multi-threading". This is the reduction of alpha beta to a non-recursive function. This unfortunately breaks the natural structure of the algorithm and requires emulation of the recursive processing, making this already tricky algorithm even more difficult to visualise. However this rare "flattening" of alpha beta has great benefits: it removes the need for stack space and allows an alpha beta search to be divided into thousands of small steps, rather than run as one single long search. It is this latter rare feature that has greatly attracted many 3rd party developers, as it allows the engine to be used easily in a single-threaded environment.
The net impact of this expanded implementation is a main module that is not the 13 lines of code above, but now 1742 lines. Not all of this is used at the same time, as the code has floating-point and non-floating-point alternative sections for different platforms.
It is the algorithms that drive this search module that have given the Shogi program "Shotest" (Japanese Chess) the edge that allowed it to twice come 3rd in the World Computer Shogi championships in Japan.
Using the Generic Pluggable Minimax
The generic plug-in minimax module can be plugged into any new product to provide tree searching. It uses C++ inheritance to allow the common module to be easily shared by multiple game engines. The search is more general than minimax, also allowing non-minimax one-player search, which is needed for route-searching and also puzzle solving.
To gain access to this a new engine needs to define just the following core elements:
With this minimum set of 3 core elements defined, the minimax module can be plugged in and minimax evaluation run. In practice this usually needs other secondary procedures defined, but this minimum entry to minimax allows development of a new game to proceed very quickly.
So what has been gained?
It is worth reviewing the benefits of this particular piece of engineering.
1. Reduced project pool complexity
Having just one shared minimax module avoids the proliferation of a dozen different flavours of minimax, each with its own peculiarities and opaque flaws. If we had only a couple of products this would be less of an issue, but with many engines such a "richness" of diversity would not be a help.
2. Synchronised product enhancement
New ideas developed with one game engine can be shared by all the others with little fuss and the minimum of effort. Of course we do not automatically impose each new tweak on every engine, as each has its own copy of minimax, but as each product is developed it can inherit the latest minimax for testing, to see whether it yields benefits for that game. In practice engines are generally kept up-to-date, as algorithm enhancements are usually tied to search options, so are not necessarily imposed.
3. High reliability and high maturity
Debugging a single module has got to be better than debugging a whole family of minimax variants. The concentration of development on a single module greatly increases the chance of it being bug free. It also encourages us to refine minimax, as the gains are greater when the module is shared by many engines.
4. New projects can plug in off-the-shelf components
This is perhaps the greatest boon. Any new game engine can be put together very easily: you need only define a few key functions for the game and right away you have access to a sophisticated and mature search program. This has allowed some new game engines to be developed from zero to a good working program in just a few days, where otherwise many weeks might have been required.
This final feature encourages us to expand the diversity of game engines supported and allows us to easily respond to publisher requests for new games.
Jeff Rollason: October 2005