Books: The Tale of Peter Rabbit

The Tale of Peter Rabbit, by Beatrix Potter.

I know that this may seem like an unlikely title for game designers, but I firmly believe that anyone who wants to work in the area of Narrative or Expressive AI should become intimately familiar with this story and regularly ask themselves the question “Could my storytelling system possibly produce works as richly complex as this?” I’ve found it to be a valuable exercise in humility.

I gave my notorious Peter Rabbit talk again at EIS last week. It always makes me feel kind of guilty. I first presented it at the AAAI Symposium in 2007 and after the talk Erik Mueller (author of the text on Commonsense Reasoning) told me that I made him want to give up his research.

In the talk I take a single page from the book as my text:

‘Now my dears,’ said old Mrs. Rabbit one morning, ‘you may go into the fields or down the lane, but don’t go into Mr. McGregor’s garden: your Father had an accident there; he was put in a pie by Mrs. McGregor.’

If we were to build an AI that could understand or (more ambitiously) generate a text such as this, what would it need to understand? Setting aside issues of language and looking just at the events of this scene there are so many forces at work. Ask yourself: Why does Mrs Rabbit forbid the children to enter the garden? The answers are many:

  1. She remembers that something bad happened to Mr Rabbit there.
  2. She assumes, by analogy, that something similarly bad could also happen to them.
  3. She is their parent and cares for them, so she does not want this to happen.
  4. She will be away and unable to prevent them from going to the garden.
  5. She knows that going to the garden is something that they might otherwise want to do (she does not forbid them from other misdeeds).
  6. She believes that, as their parent, she will be obeyed.
  7. As the author, Beatrix Potter wants to set up a scenario in which Peter will disobey. This is the nature of his character.
  8. By analogy with his father’s story, he will indeed get into trouble. He will escape, but not without some loss.
  9. This event is part of a greater narrative pattern of disobedience leading to adventure but ultimate comeuppance.
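As a toy illustration (a sketch of my own; every predicate, name, and rule here is invented for the example), points 1–6 can be forced into a naive beliefs-and-goals encoding. What the sketch makes obvious is everything it leaves out: the authorial and narrative-level points 7–9 have no place in such a scheme at all.

```python
# Toy sketch: Mrs Rabbit's prohibition as naive propositional reasoning.
# All predicates and the derivation rule are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    beliefs: set = field(default_factory=set)
    goals: set = field(default_factory=set)

mrs_rabbit = Agent(
    name="Mrs Rabbit",
    beliefs={
        ("harmed_at", "Mr Rabbit", "garden"),      # 1. memory of the accident
        ("analogous_risk", "children", "garden"),  # 2. analogy to the children
        ("tempted_by", "children", "garden"),      # 5. they might want to go
        ("will_obey", "children", "Mrs Rabbit"),   # 6. parental authority
    },
    goals={("keep_safe", "children")},             # 3. parental care
)

def decide_prohibition(agent, place="garden"):
    """Forbid a place iff the agent wants the children safe, believes the
    place is risky, believes they are tempted to go there, and believes a
    prohibition will be obeyed."""
    needed = {
        ("analogous_risk", "children", place),
        ("tempted_by", "children", place),
        ("will_obey", "children", agent.name),
    }
    if ("keep_safe", "children") in agent.goals and needed <= agent.beliefs:
        return ("forbid", "children", place)
    return None

print(decide_prohibition(mrs_rabbit))  # ('forbid', 'children', 'garden')
```

Even granting this caricature its premises, nothing in it can represent point 7 (Potter's authorial intent), point 8 (the analogical foreshadowing of Peter's escape-with-loss), or point 9 (the disobedience-adventure-comeuppance pattern), which is rather the point.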

There is a lot at play here (without venturing anywhere near a deeper psychoanalysis of Peter’s role as the only male in a fatherless household and Mrs Rabbit’s ambiguous relationship with the baker), and I have never seen an AI system that could begin to handle such complexity. And yet a lot of people tout their fabulous* ‘story generation’ systems.

There is a lot of danger in AI of falling into a kind of arrogance that says “This is a problem that could be tackled with a computer. I know a lot about computers. Therefore, I can solve this problem.” The results are invariably ‘solutions’ which real experts from the field find laughably naive. If we want to make real progress in these areas (and I think we can) we need to work hand-in-hand with actual experts from the field. In the case of Narrative AI these people are writers and narrative theorists.

This is not an easy ask. I know from first-hand experience that it is very hard to sell Narrative AI to real experts in the humanities. Part of the difficulty is that computer scientists naturally take a structuralist approach to any other discipline, and humanities research shifted from structuralism to post-structuralism several decades ago. However, as more and more people begin to see the roles computers can play in diverse disciplines, opportunities are becoming more widely available. Taking advantage of them will involve learning to think and speak in unfamiliar ways, but the alternatives are ignorance and irrelevance.

* (pun intended)

Comments
  • Max Battcher

    This is a brilliant example. It captures the goal-driven nature of game dialogue, and yet it points to where games can improve in their planning for such examples.


  • Yusuf

    Any casualties from the most recent talk? Maybe you could single-handedly convert commonsense reasoning researchers one at a time! 🙂

  • Russell

    Wonderful read. I’ve half-heartedly followed your blog for several months; in my opinion, this reads as one of your most stimulating posts.

    Who better to reference humanities than, say: a human, a writer, one who thinks and breathes in narrative structures?

    It seems that open dialog and earnest, collaborative thought–particularly early in the design process–affects the success of the above. My background is in the humanities. Quite frankly, I’m fascinated by the application of narrative theory and semiotics in a gaming framework.

    Bravo, Malcolm.

  • Gian Mancuso

    I think the brick wall that many in our field face is mortared in the assumption that characters in a story are psychological agents with thoughts and feelings. I think an actantial model would serve us much better, but I’ve yet to see it applied.

    If computer scientists naturally take a structuralist approach to other disciplines, I always wonder why they approach story and narrative using a high school English class approach rather than looking into the theories of literary structuralism developed over the last 50 years.

    If we ever hope to create a system that generates compelling stories, the answers will be found in the study of stories as systems.

    So, you’re absolutely right. I can’t wait for there to be a collaboration between those that study Narrative AI and actual experts in Narrative theory, and I hope to be a part of it.

  • Malcolm

    @Yusuf: My aim really isn’t to discourage commonsense reasoners. The questions I pose aren’t going to be answered without solving the commonsense problem one way or another.

    @Russell: Thank you for your (somewhat backhanded) compliment. I’m glad that I can occasionally produce something interesting among all the dross. 😉

    @Gian: The answer is “because high school English is all they know”. And narrative theory is not easily accessible to someone who has not studied it for many years (as, I’m sure, is artificial intelligence). I’ve tried reading Greimas’ work on the Actantial model and learnt nothing. He uses words in ways that have no meaning to me.

    There is no point in waiting until we understand each other before we work together. We must begin to work together now so that we might eventually understand each other.

  • Mark Reid

    Is there a name for the type of fallacy you described: “X can solve Y. I know a lot about X, therefore I can solve Y”? If not, I propose Ryan’s Fallacy. 🙂

    In the scenario from Peter Rabbit you analyse, it seems that reasoning by analogy constitutes much of the difficulty for computers. Does Hofstadter’s work on analogies offer any hope for building Narrative AI?

    I know a thing or two about machine learning so I’m going to knowingly commit Ryan’s Fallacy and wonder if some of the techniques from natural language processing might not be applied to inferring narrative analogy. This would of course require the input of narrative experts to provide examples of various tropes in stories and see whether a machine can be trained to generate them (or at least do prediction so generate-and-test can be used).

    As a starting point, Peter Turney has been doing some interesting work on analogies in natural language.

  • Malcolm

    @Mark: Yes, getting a grip on analogy is a very important part of this problem. The big question is: how far does the analogy extend? I haven’t read much of the research in this area, Hofstadter’s or otherwise, so I’m not sure how applicable it is.

    The most profound problem I see with understanding Peter Rabbit is the anthropomorphisation of the rabbits. Sometimes they act like human beings; at other times they are clearly rabbits, not people. We have some ability to apply either model as need be, without delving too far into the obvious contradictions this raises (how does Peter do up the buttons on his coat?).

