My research works at the interface between artificial intelligence and game design. I am interested in using AI tools to give designers more power to create richer player experiences in games. One of the critical limitations of current computer games is our inability to build rich emergent models of social systems. As a result, social interaction in games is obvious and clunky; it lacks the subtlety, playfulness and room for mastery that we can provide in physical systems.
My work is based around a theme of abstraction. Abstraction is the process of finding useful high-level chunks that enable us to talk and reason about complex concrete systems without having to go into complete detail. In AI, abstraction can be used as a divide-and-conquer approach, allowing us to factor a problem into independent parts and solve them individually. In education, abstraction is our fundamental tool for learning. We take concrete experiences and use them to build abstract mental models with which we understand the world. In game design, abstraction is the key force behind emergent gameplay.
Active research projects:
- Game design and analysis
- Emergence and Games-based learning
- Character Modelling for Narrative Generation
Past research projects:
- Multiagent path planning
- Hierarchical reinforcement learning
Game design and analysis
Game design is still a very young discipline and has little established knowledge. Practice far outstrips theory. We need to analyse games and reflect on the processes we use for design to better understand how we do the things we do.
- Ryan, M R & Costello, B, 2012, ‘My Friend Scarlet: Interactive Tragedy in The Path’, Games and Culture, vol. 7, no. 2, pp. 111 – 126
- Ryan, M R, 2009, ‘Illuminati: The Game of Conspiracy — A Close Reading’, presented at the Australasian Conference on Interactive Entertainment 2009, Sydney, Australia, December 2009
- Ryan, M R, 2007, ‘Eleven programmers, seven artists and five kilograms of play-doh: games for teaching games design’, in Proceedings of the 2007 Australasian Conference on Interactive Entertainment, Melbourne, 3 – 5 December 2007
Emergence and Games-based learning
Experiential learning theory tells us that we learn by constructing abstract mental models of the concrete world through observation and reflection, and by testing these models through active experimentation. Academic learning often fails to complete this cycle between concrete experience and abstract knowledge. Games provide an opportunity to learn about systems through hands-on experience, but only if the game is designed to support playful discovery and mastery.
- Ryan, M, Costello, B & Stapleton, A, 2012, ‘Deep Learning Games through the Lens of the Toy’, in Proceedings of Meaningful Play 2012
Character Modelling for Narrative Generation
Computer games are much better at simulating physical systems than social systems. As a result, it is much easier to make a playful physics game than a playful social or story-based game. Story is very difficult to model; it involves understanding the world in terms of action, character and plot. The same action can have several different ‘causes’ in a game:
- The World: The state of the world that precipitated the action.
- The Character: The intent of the character who performed the action.
- The Author: The intent of the author who wrote the action.
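This three-way distinction can be made concrete as a data structure. A minimal sketch in Python (the class and field names are illustrative, not taken from any of the systems below):

```python
from dataclasses import dataclass

@dataclass
class StoryAction:
    """An action in a story world, annotated with its three kinds of 'cause'."""
    name: str
    world_cause: str       # the state of the world that precipitated it
    character_cause: str   # the intent of the character who performed it
    author_cause: str      # the intent of the author who wrote it

# The same action, explained three different ways:
betrayal = StoryAction(
    name="open_the_gate",
    world_cause="the city is under siege and food has run out",
    character_cause="the guard wants to save his starving family",
    author_cause="test the protagonist's capacity for forgiveness",
)
```

Keeping the three levels separate is the point: a reasoner that tracks only world state cannot tell this betrayal apart from an accident with the same physical effect.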
- Sarlej, M K & Ryan, M, 2012, ‘Representing Morals in Terms of Emotion’, in Proceedings of the Eighth Artificial Intelligence and Interactive Digital Entertainment Conference, AAAI Press
- Sarlej, M K & Ryan, M, 2011, ‘A Discrete Event Calculus Implementation of the OCC Theory of Emotion’, Intelligent Narrative Technologies Workshop, in Proceedings of the Seventh Artificial Intelligence and Interactive Digital Entertainment Conference, AAAI Press
- Ryan, M R, 2007, ‘The Tale of Peter Rabbit: A Case-study in Story-sense Reasoning’, in Proceedings of the 2007 Australasian Conference on Interactive Entertainment, Melbourne, 3 – 5 December 2007
Multiagent path planning
Planning paths for multiple agents (robots, game characters, network packets) to simultaneously move around a shared space without colliding is a computationally difficult problem. However, real-world maps have structure that we can exploit to make the problem much easier.
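The papers below exploit subgraph structure in the map; as a simpler illustration of the underlying problem, here is a sketch of prioritized planning on a grid, where each agent in turn searches a time-expanded graph and treats earlier agents' paths as moving obstacles (the function and variable names are mine, not from the papers):

```python
from collections import deque

def plan_agents(grid, starts, goals, max_t=50):
    """Plan collision-free paths for several agents on a 4-connected grid.

    Prioritized planning: each agent in turn runs BFS through a
    time-expanded graph, avoiding (cell, time) pairs reserved by earlier
    agents. '#' cells are walls. Returns a list of paths, or None on failure.
    """
    rows, cols = len(grid), len(grid[0])
    reserved = set()  # (cell, t) pairs occupied by earlier agents
    paths = []
    for start, goal in zip(starts, goals):
        frontier = deque([(start, 0, [start])])
        seen = {(start, 0)}
        found = None
        while frontier:
            (r, c), t, path = frontier.popleft()
            if (r, c) == goal:
                found = path
                break
            if t == max_t:
                continue
            for dr, dc in ((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)):  # wait or move
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                if grid[nr][nc] == '#' or ((nr, nc), t + 1) in reserved:
                    continue
                # conservatively forbid head-on swaps with earlier agents
                if ((nr, nc), t) in reserved and ((r, c), t + 1) in reserved:
                    continue
                if ((nr, nc), t + 1) in seen:
                    continue
                seen.add(((nr, nc), t + 1))
                frontier.append(((nr, nc), t + 1, path + [(nr, nc)]))
        if found is None:
            return None
        for t, cell in enumerate(found):
            reserved.add((cell, t))
        for t in range(len(found), max_t + 1):
            reserved.add((found[-1], t))  # the agent waits at its goal
        paths.append(found)
    return paths
```

Prioritized planning is fast but incomplete: an unlucky ordering of agents can make a solvable instance fail, which is part of why exploiting the map's structure matters.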
- Ryan, M, 2008, ‘Exploiting subgraph structure in multi-robot path planning’, Journal of Artificial Intelligence Research, vol. 31, pp. 497 – 542
- Ryan, M, 2010, ‘Constraint-based multi-robot path planning’, in Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), pp. 922 – 928, IEEE Press
Hierarchical reinforcement learning
My PhD research investigated the synthesis of semi-Markov reinforcement learning with teleo-reactive planning. I built a system, called Rachel, which could construct an abstract plan based on user-specified teleo-operators (TOPs) and then learn a concrete implementation of that plan using reinforcement learning.
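Rachel itself is built on teleo-operators and semi-Markov learning; as a toy illustration of the general idea, the sketch below hand-specifies an abstract plan as a sequence of subgoals on a 1-D corridor and uses tabular Q-learning to learn a primitive policy for each abstract step (the environment and all names are illustrative, not from the thesis):

```python
import random

def q_learn_reach(goal, n_states=10, episodes=500,
                  alpha=0.5, gamma=0.9, eps=0.1):
    """Learn a policy for one abstract step ('reach goal') on a 1-D
    corridor of n_states cells, using tabular Q-learning over the
    primitive actions -1 (left) and +1 (right)."""
    Q = {(s, a): 0.0 for s in range(n_states) for a in (-1, 1)}
    for _ in range(episodes):
        s = random.randrange(n_states)
        for _ in range(50):
            if random.random() < eps:           # epsilon-greedy exploration
                a = random.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == goal else -0.01    # goal reward, small step cost
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, 1)])
                                  - Q[(s, a)])
            s = s2
            if s == goal:
                break
    return Q

def execute_plan(plan, start, n_states=10):
    """Follow an abstract plan (a list of subgoal states), running the
    learned greedy policy for each abstract step until its subgoal holds."""
    s, trace = start, [start]
    for goal in plan:
        Q = q_learn_reach(goal, n_states)
        for _ in range(100):                    # safety cap per subgoal
            if s == goal:
                break
            a = max((-1, 1), key=lambda act: Q[(s, act)])
            s = min(max(s + a, 0), n_states - 1)
            trace.append(s)
    return trace
```

The division of labour mirrors the hybrid approach: the plan supplies the abstract structure (which subgoal to pursue next), while reinforcement learning fills in the concrete behaviour for each step.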
- Ryan, M R, 2004, Hierarchical reinforcement learning: a hybrid approach, PhD Thesis, University of New South Wales
- Ryan, M R, 2004, ‘Hierarchical Decision Making’, in Jennie Si, Andrew G. Barto, Warren Buckler Powell & Don Wunsch (eds), Handbook of Learning and Approximate Dynamic Programming, Wiley-IEEE Press, pp. 203 – 227
- Ryan, M R, 2002, ‘Using Abstract Models of Behaviours to Automatically Generate Reinforcement Learning Hierarchies’, in Proceedings of the 19th International Conference on Machine Learning, Morgan Kaufmann, Sydney, Australia, July 2002