
Dissertation: Integrating Learning in a Multi-Scale Agent

Building expert-level artificial intelligence for real-time strategy games remains an open research challenge. StarCraft in particular provides an excellent environment for AI research, because the game has many real-world properties and is played at an extremely competitive level. It is also an environment in which human decision making can be observed, emulated, and evaluated. During gameplay, professional players demonstrate a broad range of reasoning capabilities including estimation, anticipation, and adaptation.

My approach to building expert-level AI for StarCraft is to emulate a subset of these capabilities in an agent that plays complete games of StarCraft. The core of the agent is built on the ABL reactive planning language, which was used to author the autonomous agents in Façade. To incorporate additional reasoning capabilities, I integrate the reactive planner with external components that perform estimation using a particle model, anticipation using machine learning, and adaptation using case-based reasoning. The resulting system, EISBot, learns from demonstrations and models player behavior. An evaluation of EISBot against human opponents demonstrates that the agent performs at the level of a competitive amateur player.
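The estimation component tracks enemy units that have moved out of the agent's view, maintaining hypotheses about their likely positions. A minimal sketch of the particle-model idea follows; the class, parameter values, and pruning threshold here are illustrative assumptions, not EISBot's actual implementation:

```python
import random

class Particle:
    """One hypothesis about where an out-of-view enemy unit might be."""
    def __init__(self, x, y, weight=1.0):
        self.x, self.y, self.weight = x, y, weight

def update_particles(particles, dt, spread=8.0, decay=0.95):
    """Diffuse each particle and decay its weight as the last sighting ages."""
    survivors = []
    for p in particles:
        p.x += random.gauss(0, spread) * dt
        p.y += random.gauss(0, spread) * dt
        p.weight *= decay ** dt
        if p.weight > 0.05:  # prune hypotheses that are too stale to trust
            survivors.append(p)
    return survivors

def best_guess(particles):
    """Weighted average of the surviving position hypotheses."""
    total = sum(p.weight for p in particles)
    if total == 0:
        return None
    x = sum(p.x * p.weight for p in particles) / total
    y = sum(p.y * p.weight for p in particles) / total
    return (x, y)
```

In use, the agent would seed particles at a unit's last observed map position and query `best_guess` when deciding where to scout or attack.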

The dissertation is available for download here, and the source code for EISBot is available here. A paperback version is also available on Amazon.

Abstract:

Video games are complex simulation environments with many real-world properties that need to be addressed in order to build robust intelligence. StarCraft is a real-time strategy (RTS) game that exhibits both cognitive complexity and task environment complexity. Expert StarCraft gameplay involves many cognitive processes including estimation, anticipation, and adaptation. Achieving the objective of destroying all enemy forces requires managing a number of concurrent subtasks while working towards higher-level objectives. Building expert-level agents for RTS games is therefore a multi-scale AI problem, which motivates the need for integrative AI systems.

This thesis investigates the capabilities necessary to realize expert StarCraft gameplay in an agent. My central claim is that in order to perform at the level of an expert player, a StarCraft agent must utilize heterogeneous reasoning capabilities. This requirement is motivated by the structure of RTS gameplay, which involves both deliberative and reactive decision making, and by analysis of professional gameplay, which demonstrates the need for estimation, adaptation, and anticipation capabilities. Additionally, StarCraft gameplay involves decision making across multiple scales, or levels of coordination. My approach for supporting these capabilities in an agent is to identify the competencies necessary for RTS gameplay, and to develop techniques for implementing and integrating them. The resulting agent, EISBot, integrates reactive planning for plan execution and monitoring, machine learning for opponent modeling, and case-based reasoning for goal formulation and strategy learning. EISBot plays StarCraft at the same action and sensing granularity as human players, and is evaluated against AI and human opponents.
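The case-based reasoning component for goal formulation retrieves, from a library of cases learned from gameplay demonstrations, the strategy whose recorded game state most closely matches the current one. A hedged sketch of such nearest-neighbor retrieval follows; the feature vector (minerals, supply, enemy units seen), the example cases, and the Euclidean distance metric are illustrative assumptions, not EISBot's actual case representation:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Each case pairs a game-state feature vector with the strategy a
# demonstrator chose in that state. Features here are illustrative:
# (minerals, supply, enemy_units_seen).
case_library = [
    ((400, 20, 5),  "expand"),
    ((100, 9,  0),  "rush"),
    ((800, 60, 30), "attack"),
]

def retrieve_strategy(state, cases):
    """Return the strategy of the case nearest to the current state."""
    _, strategy = min(cases, key=lambda case: dist(case[0], state))
    return strategy
```

In a full system the retrieved strategy would be passed to the reactive planner as a high-level goal, and new cases would be added as further demonstrations are observed.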

The contributions of this thesis are idioms for authoring agents for multi-scale AI problems, techniques for learning domain knowledge from gameplay demonstrations, and methods for integrating a variety of learning algorithms in a real-time, multi-scale agent.


About the author:  Ben Weber is a PhD student at UC Santa Cruz.
