The most tangible change to our range of apps has been the addition of speech bubbles to our Spades app. This has been a welcome diversion from AI improvements, which deliver no visible change, just the more abstract "better play". Speech bubbles also offered a chance to design a new framework to manage them. It would have been of no value to have AI opponents simply make random remarks: these needed to be tied into the game situation. This set us up for a protocol design task to allow the game engine to talk to the UI. The game engine flags when a significant game event has occurred, and the UI registers this event and decides what response might be delivered: perhaps a comment to all players, or just to that player's partner. In all we identified 36 possible events and, since any one event was not going to be tied to a single response, we provided a pseudo-random selection from a number of pre-determined responses. Even this selection was not purely random, nor a round-robin, but a system that randomly chose a response, excluding whichever "N" responses had been played most recently. This avoided the danger of repeated or clustered responses, and also any familiar response ordering. In total the app has 812 possible different speech responses.
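The selection scheme described above can be sketched in a few lines. This is a hypothetical illustration, not AI Factory's actual framework: the event name, class name, and parameters are invented for the example. The key idea is that each event keeps a short memory of its last "N" responses and picks randomly from the remainder.

```python
import random
from collections import deque

class ResponseSelector:
    """Picks a speech response for a game event, pseudo-randomly,
    excluding whichever N responses were played most recently.
    Illustrative sketch only; names and structure are assumptions."""

    def __init__(self, responses_by_event, avoid_last_n=3, rng=None):
        self.responses = responses_by_event
        # One recent-history buffer per event, holding the last N picks.
        self.recent = {event: deque(maxlen=avoid_last_n)
                       for event in responses_by_event}
        self.rng = rng or random.Random()

    def pick(self, event):
        pool = self.responses[event]
        recent = self.recent[event]
        # Exclude the last N responses from the candidate set.
        candidates = [r for r in pool if r not in recent]
        if not candidates:  # pool smaller than N: fall back to the full pool
            candidates = pool
        choice = self.rng.choice(candidates)
        recent.append(choice)  # remember it so it is skipped next time
        return choice

# Usage: a single (invented) event with a handful of canned responses.
selector = ResponseSelector(
    {"NIL_BID_BROKEN": ["Ouch!", "That hurts.", "So close...", "Unlucky!"]},
    avoid_last_n=2,
)
```

Because the history buffer is a bounded deque, a response drops back into the candidate set once N newer responses have been played, so no response is retired permanently and no fixed ordering ever emerges.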
Having released this, we are now assessing how users react and find, to our surprise, that users mostly want to talk back. We are now considering an Eliza-based response system so that users can type and have the AI players make some kind of reply.
Of course, as the two lead articles show, Spades AI has been a topic, and again our Spades program has made some significant improvements in play performance. A driving force behind this has been the improved testing toolkit. In general AI developers discuss AI coding and little is said about test frameworks, but our experience has shown that good testing tools are vital. From this we could postulate a maxim: "Bad AI and good tools will beat good AI and bad tools", which holds a good deal of truth. Part of this is reliability, and part is the ability to compare the performance of any change against multiple earlier versions. Getting these tools right is a must, not a luxury.
As ever, we attended GDC San Francisco, which is a good chance to take off the blinkers and put your workbench out of reach, so no distractions. GDC has some 30,000 or so delegates and so many presentation tracks, full of motivated people with plenty of ideas. It is the perfect opportunity to network and feel your way around what people are doing. My notepad is full of notes gleaned directly or indirectly from inspirational talks and contacts. It all adds to the company game plan for the year ahead.
In among these inputs are seemingly goofball and impractical ideas that actually work surprisingly well (one is shown below). It is also inspiring to attend great talks, and my personal nomination goes to Christine Love's talk on "Ladykillers in a bind", which is about the best postmortem on a game development I have seen: completely brutal and utterly honest.
Our mid-year conference is Develop Brighton, where one of our games was featured in the excellent talk by Dr Sam Devlin of the University of York. They have been key to getting our ISMCTS-based search off the ground, and in return we provide game data that they can use for research. This close collaboration with York has been excellent in allowing us to create a much stronger product. The team there has plenty of AI expertise, and just talking to them is profoundly helpful in exploring possible ideas.
Finally, in this age of media trolls, we are paying GamingLabs to test and verify that our Android app "Backgammon Free" does not cheat. Of course we already know it does not, but we have attracted a huge number of users who are convinced we do cheat, and no rebuttal by us will satisfy them. It is time to turn to a professional body whose job it is to certify games and other such programs against cheating. We have thousands of e-mails and review comments accusing us of cheating, and we need a solution!
All in all, it has been another good year for AI Factory. We seem to be getting it about right!