One of the central problems faced by autonomous agents is selecting which action to do next. In AI, three approaches have been used to address this problem: the programming-based approach, where behavior is hardwired; the learning-based approach, where behavior is learned; and the model-based approach, where behavior follows from a predictive model of the environment and the agent's goals. Planning is the model-based approach, with the model representing the situation, the actions, and the sensors. The main challenge in planning is computational: all of these models, whether or not they accommodate feedback and uncertainty, are intractable in the worst case. Planners must therefore automatically recognize and exploit the structure of problems in order to scale up.
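To make the model-based view concrete, here is a minimal sketch of a classical planning model and a blind search over it. All names and the toy domain are illustrative assumptions, not taken from the talk: a state is a set of facts, an action has preconditions, add effects, and delete effects, and a plan is found by breadth-first search.

```python
from collections import deque

def plan(init, goal, actions):
    """Breadth-first search for an action sequence reaching a goal state.

    init: frozenset of facts; goal: set of facts that must hold;
    actions: dict name -> (preconditions, adds, deletes), each a set of facts.
    """
    frontier = deque([(init, [])])
    seen = {init}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:          # all goal facts hold
            return path
        for name, (pre, add, dele) in actions.items():
            if pre <= state:       # action is applicable
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None                    # no plan exists

# Toy logistics domain: move a package from A to B with a truck.
acts = {
    "load":   ({"pkg_at_A", "truck_at_A"}, {"pkg_in_truck"}, {"pkg_at_A"}),
    "drive":  ({"truck_at_A"}, {"truck_at_B"}, {"truck_at_A"}),
    "unload": ({"pkg_in_truck", "truck_at_B"}, {"pkg_at_B"}, {"pkg_in_truck"}),
}
print(plan(frozenset({"pkg_at_A", "truck_at_A"}), {"pkg_at_B"}, acts))
# → ['load', 'drive', 'unload']
```

Blind search like this illustrates the model but not the computational point above: scalable planners replace it with heuristics and other structure-exploiting techniques derived automatically from the model.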
In the talk, I'll review some of the models considered in planning research, the progress achieved in solving these models, and some of the ideas that have turned out to be most useful computationally. I will also discuss applications in video games and how the work fits with the overall goal of a general artificial intelligence.