The best thing about AIIDE has to be the industry participation. In my experience, AI researchers are prone to a kind of hubris that expresses itself as “Here is a problem (in games or engineering or medicine) that could be solved using AI. I know all about AI. Therefore, I can solve this problem.” The ‘solutions’ thus produced are often laughably inappropriate, or otherwise bad, when examined by someone who actually works in the field.
In the field of game AI, the temptation is always to make a really clever AI that can play the game ‘properly’, which generally means that it can beat the player and that it doesn’t need to cheat. Neither of these things is necessarily of interest to game designers, whose goal is to entertain the player. The role of a good enemy AI is to put up a good fight and then lose. The role of a good ally AI is to support the player’s decisions. In neither case does this necessarily mean playing the game as well as possible. If the player can’t tell the difference between cheating and ‘proper’ play, then by all means cheat. In fact, cheating often allows us to craft the play experience more finely, tailoring it for drama and difficulty.
The industry presence at AIIDE gives the academic researchers a much-needed reality check. After almost every talk, the questions were “How does this improve the player experience?” or “How does this make the designer/developer’s job easier?”. Sadly, in many cases the presenter did not have a good answer to these questions. Often it seemed that the game was merely being used as a simulation for testing the AI research, rather than being the focus of the research itself. This is, of course, totally valid: AI researchers need simulations in which to test their algorithms, and games provide some of the richest simulated worlds available. However, I would have expected a conference on “AI for Interactive Digital Entertainment” to contain more work actively trying to advance the art of games.
That said, there was some very interesting work presented. The standout theme for me this year was “social games”. There seemed to be a strong interest, among both academics and industry, in producing better character AI: simulating emotion, knowledge, language use, or deal-making. These areas of AI are becoming mature enough to start appearing in games, and they offer the promise of more meaningful characters and stories.
This presents us with a problem: as our NPCs become more complex, the poverty of the player’s expressive power becomes increasingly evident. As someone pointed out, Gordon Freeman never smiles. The input pipe is very thin – two joysticks and twelve buttons do not allow for much expression – and this, I think, is going to be our next major hurdle in gaming.