Current Research

This page describes the current directions of my research on Soar with my students. Other pages on my web site give a more complete view of my previous research.

The following paper gives an overview of how all of these pieces fit together:

  • Laird, J. E. 2008. Extending the Soar Cognitive Architecture. Artificial General Intelligence Conference, Memphis, TN.

My research centers on cognitive architecture – the fixed, primitive computational structures that are the building blocks of human-level intelligence. Over the last five years, my research group has made a concerted effort to increase the cognitive capabilities of cognitive architectures by greatly expanding the set of architectural components in the Soar architecture. Our goal was not only to incorporate proven components from other architectures that were missing from Soar, but also to expand the set of components studied in cognitive architecture to include components found in humans that have been largely ignored in the cognitive architecture community. The components we have added include the following:

Reinforcement learning (RL) provides the ability to learn about statistical regularities in the environment related to reward. Previously in Soar, learning this type of knowledge was cumbersome and usually required an internal model of the environment (or at least a model of the agent’s own actions). With RL, Soar can now learn in domains where its only knowledge is how to initiate action; a minimal sketch of this style of learning follows the references below.

  • Wang, Y. (2011). Hierarchical Functional Category Learning for Efficient Value Function Approximation in Object-Based Environments. PhD Thesis, University of Michigan
  • Wang, Y., Laird, J. E. (2010). Efficient Value Function Approximation with Unsupervised Hierarchical Categorization for a Reinforcement Learning Agent. International Conference on Intelligent Agent Technology (IAT-10), Toronto. (Best Paper Award Nomination)
  • Gorski, N.A. & Laird, J.E. (2009). Learning to Use Episodic Memory. Proceedings of the 9th International Conference on Cognitive Modeling (ICCM-09). Manchester, UK.
  • Wang, Y., and Laird, J.E. 2007. The Importance of Action History in Decision Making and Reinforcement Learning. Proceedings of the Eighth International Conference on Cognitive Modeling. Ann Arbor, MI. http://www-personal.umich.edu/~yongjiaw/publications/ICCM_2007.pdf
  • Nason, S. and Laird, J. E., Soar-RL: Integrating Reinforcement Learning with Soar, Cognitive Systems Research, 6 (1), 2005, pp. 51-59. Also in International Conference on Cognitive Modeling, 2004.
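
This sketch is architecture-independent and is not Soar-RL itself: it uses a tabular Q-learning update to learn action values from reward alone, with no model of the environment, and the state encoding, parameters, and function names are placeholders.

    import random
    from collections import defaultdict

    # Tabular Q-learning sketch: learn action values from reward alone,
    # without any model of the environment's dynamics. State and action
    # encodings and all parameters are illustrative placeholders.
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
    q_values = defaultdict(float)              # (state, action) -> estimated value

    def choose_action(state, actions):
        """Epsilon-greedy selection over the current value estimates."""
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: q_values[(state, a)])

    def update(state, action, reward, next_state, actions):
        """One temporal-difference update after observing a transition."""
        best_next = max(q_values[(next_state, a)] for a in actions)
        target = reward + GAMMA * best_next
        q_values[(state, action)] += ALPHA * (target - q_values[(state, action)])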

Semantic learning/memory provides the ability to store and retrieve declarative facts about the world. This capability has always been central to ACT-R, and adding it to Soar should allow us to create agents that are better able to reason about and use general knowledge about the world; a toy sketch of the basic store-and-retrieve cycle follows the references below.

  • Derbinsky, N., Laird, J. E., Smith, B.: Towards Efficiently Supporting Large Symbolic Declarative Memories. International Conference on Cognitive Modeling, ICCM (2010)
  • Derbinsky, N., Laird, J.E. Extending Soar with Dissociated Symbolic Memories. Symposium on Human Memory for Artificial Agents, AISB (2010)
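
This sketch is deliberately loose and is not the actual Soar semantic memory interface: it stores attribute-value fact sets and returns the most recent one that matches a partial cue; the data structures and the recency bias are assumptions made for illustration.

    # Toy declarative memory: store attribute-value facts and retrieve the
    # most recent fact set matching a partial cue. The structures and the
    # recency bias are illustrative, not Soar's semantic memory mechanism.
    memory = []                                # list of (facts dict, timestamp)

    def store(facts, time):
        memory.append((dict(facts), time))

    def retrieve(cue):
        """Return the most recently stored fact set matching every cue feature."""
        matches = [(t, f) for f, t in memory
                   if all(f.get(k) == v for k, v in cue.items())]
        if not matches:
            return None
        return max(matches, key=lambda m: m[0])[1]

    store({"name": "dog", "isa": "mammal", "legs": 4}, time=1)
    store({"name": "spider", "isa": "arachnid", "legs": 8}, time=2)
    print(retrieve({"isa": "mammal"}))         # -> the 'dog' fact set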

Episodic learning/memory provides the ability to remember past experiences. Although similar mechanisms have been studied in case-based reasoning, episodic memory is distinguished by being task-independent and available for every problem. We have demonstrated that episodic memory enables many advanced cognitive capabilities, such as learning action models, internal simulation, and retrospective reasoning and learning. Our emphasis has been on both functionality and efficiency; a toy sketch of automatic storage and cue-based recall follows the references below.

  • Derbinsky, N., Laird, J.E.: Efficiently Implementing Episodic Memory. Proceedings of the 8th International Conference on Case-Based Reasoning, ICCBR (2009)
  • Gorski, N.A. & Laird, J.E. (2009). Learning to Use Episodic Memory. Proceedings of the 9th International Conference on Cognitive Modeling (ICCM-09). Manchester, UK.
  • Laird, J.E., Derbinsky, N.: A Year of Episodic Memory. Workshop on Grand Challenges for Reasoning from Experiences, 21st IJCAI (2009)
  • Nuxoll, A. M. and Laird, J. E. (2007). Extending Cognitive Architecture with Episodic Memory. In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence (AAAI-07). http://ai.eecs.umich.edu/soar/sitemaker/docs/pubs/AAAI2007_NuxollLaird_ver14(final).pdf
  • Nuxoll, A., Laird, J., A Cognitive Model of Episodic Memory Integrated With a General Cognitive Architecture, International Conference on Cognitive Modeling 2004.
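
This toy version is not the Soar implementation: every state snapshot is recorded automatically, and the episode that best matches a partial cue is recalled, with ties broken in favor of recency; the state representation is invented for the example.

    # Toy episodic memory: automatically record every state snapshot and
    # recall the episode that best matches a partial cue, preferring more
    # recent episodes on ties. Purely illustrative.
    episodes = []                              # chronological list of state dicts

    def record(state):
        episodes.append(dict(state))

    def retrieve(cue):
        """Return the most recent episode sharing the most features with the cue."""
        best, best_score = None, -1
        for episode in episodes:               # later episodes win ties
            score = sum(1 for k, v in cue.items() if episode.get(k) == v)
            if score >= best_score:
                best, best_score = episode, score
        return best

    record({"location": "corner-of-5th", "saw": "gas-station", "fuel": "full"})
    record({"location": "highway", "saw": "exit-sign", "fuel": "low"})
    print(retrieve({"saw": "gas-station"}))    # recalls the earlier episode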

Visual imagery provides an agent with the ability to exploit the unique representational and processing characteristics of its visual system for internal processing. We have demonstrated that, on certain tasks, this can yield orders of magnitude improvements in processing time over pure symbolic reasoning, and that it is a critical component of spatial reasoning. Moreover, Sam Wintermute has identified additional functional roles for spatial imagery and demonstrated them across many different tasks. A small sketch of computing a spatial predicate directly from quantitative representations follows the references below.

  • Wintermute, S. (2010). Abstraction, Imagery, and Control in Cognitive Architecture. PhD Thesis, University of Michigan, Ann Arbor.
  • Wintermute, S. (2009). Integrating Reasoning and Action through Simulation. In Proceedings of the Second Conference on Artificial General Intelligence (AGI-09). Arlington, VA.
  • Wintermute, S. (2009). An Overview of Spatial Processing in Soar/SVS (Report No. CCA-TR-2009-01). Ann Arbor, MI: Center for Cognitive Architecture, University of Michigan.
  • Wintermute, S. (2009). Representing Problems (and Plans) Using Imagery. In Papers from the 2009 AAAI Fall Symposium Series: Multi-Representational Architectures for Human-Level Intelligence, Arlington, VA, November 2009. AAAI Press.
  • Wintermute, S., and Laird, J.E. (2009). Imagery as Compensation for an Imperfect Abstract Problem Representation. In Proceedings of The 31st Annual Conference of the Cognitive Science Society (CogSci-09)
  • Wintermute, S. and Laird, J. E. (2008). Bimodal Spatial Reasoning with Continuous Motion. Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (AAAI-08), Chicago, Illinois
  • Lathrop, S.D., and Laird, J.E. (2007). Towards Incorporating Visual Imagery into a Cognitive Architecture. Proceedings of the Eighth International Conference on Cognitive Modeling. Ann Arbor, MI.
    http://www.eecs.umich.edu/~slathrop/publications/ICCM07_paper_final.pdf
  • Lathrop, S., and Laird, J.E. 2006. Incorporating Visual Imagery into a Cognitive Architecture: An Initial Theory, Design and Implementation. http://ai.eecs.umich.edu/soar/sitemaker/docs/pubs/cca_tech_report_2006-01.pdf
  • Wintermute, S., and Laird, J. E. (2007). Predicate Projection in a Bimodal Spatial Reasoning System. Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence (AAAI-07), Vancouver, B.C., Canada.
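
This example is not SVS: it simply shows how an 'intersects' predicate can be computed on demand from continuous coordinates rather than being asserted symbolically for every object pair; the scene and geometry are invented.

    # Computing a spatial predicate on demand from a quantitative
    # (imagery-like) representation: axis-aligned boxes with continuous
    # coordinates. A purely symbolic encoding would need an explicit
    # relation asserted for every object pair; here none is stored.
    def intersects(a, b):
        """True if two boxes (x_min, y_min, x_max, y_max) overlap."""
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

    scene = {
        "table":  (0.0, 0.0, 2.0, 1.0),
        "block":  (1.5, 0.5, 2.5, 1.5),
        "window": (4.0, 3.0, 5.0, 4.0),
    }

    print(intersects(scene["table"], scene["block"]))    # True
    print(intersects(scene["table"], scene["window"]))   # False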

Emotion, mood, and feeling (EMF) are often considered detrimental to pure rational thought; however, there is significant evidence from humans that they play a critical role in providing direction for problem solving, reasoning, and learning. We have demonstrated that a computational model of emotion, mood, and feeling in Soar can help direct problem solving and greatly speed reinforcement learning; a minimal sketch of appraisal-derived intrinsic reward follows the references below.

  • Marinier, R. and Laird, J. E. (2008). Emotion-Driven Reinforcement Learning. CogSci 2008, Washington, D.C.
  • Marinier, R., Laird, J. E., and Lewis, R. L. (2008). A Computational Unification of Cognitive Behavior and Emotion. Journal of Cognitive Systems Research.
  • Marinier, R.P., Laird, J.E. 2007. Computational Modeling of Mood and Feeling from Emotion. CogSci 2007. Nashville, TN. http://sitemaker.umich.edu/marinier/files/Marinier_Laird_CogSci_2007_ComputationalModeling.pdf
  • Marinier, R. and Laird, J. A Cognitive Architecture Theory of Comprehension and Appraisal. Agent Construction and Emotion 2006, Vienna, Austria, April 2006. http://sitemaker.umich.edu/marinier/files/Marinier_Laird_ACE_2006_ComprehensionAndAppraisal.pdf
  • Marinier, R., Laird, J. Toward a Comprehensive Computational Model of Emotions and Feelings, International Conference on Cognitive Modeling 2004.
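
This is a hypothetical sketch rather than the model described in the papers above: a feeling intensity is combined from a few appraisal dimensions and added to the external reward used by a reinforcement learner, and the dimensions, weights, and scaling are placeholders.

    # Hypothetical appraisal-based intrinsic reward: combine a few appraisal
    # dimensions into a feeling intensity and add it to the external reward
    # used by a reinforcement learner. Dimensions, weights, and scaling are
    # placeholders, not the published model.
    def feeling_intensity(appraisals):
        """Combine appraisal dimensions (each roughly in [-1, 1]) into a scalar."""
        weights = {"goal_relevance": 0.5, "conduciveness": 1.0, "novelty": 0.3}
        return sum(weights[k] * appraisals.get(k, 0.0) for k in weights)

    def total_reward(external_reward, appraisals, scale=0.5):
        """External reward plus a scaled intrinsic (feeling-based) component."""
        return external_reward + scale * feeling_intensity(appraisals)

    # Even with no external reward, progress toward a goal yields a signal:
    print(total_reward(0.0, {"goal_relevance": 1.0, "conduciveness": 0.8}))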

Clustering provides a way of automatically building up new representations of regularities in the environment. Few existing cognitive architectures have the capability to create new symbolic structures – they are prisoners of the original encodings provided by a human programmer. Clustering of perceptual data, and even of internal representations, may provide a partial answer as to the source of symbolic structures; a generic sketch of online clustering follows the references below.

  • Wang, Y. (2011). Hierarchical Functional Category Learning for Efficient Value Function Approximation in Object-Based Environments. PhD Thesis, University of Michigan
  • Wang, Y., Laird, J. E. (2010). A Computational Model of Functional Category Learning in a Cognitive Architecture. Proceedings of the Tenth International Conference on Cognitive Modeling (ICCM-10), Philadelphia, PA.
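
This is a generic online-clustering sketch, not the hierarchical method in the papers above: each feature vector joins the nearest existing cluster or founds a new one when nothing is close enough, and the distance threshold is an arbitrary assumption.

    import math

    # Generic online clustering sketch: each perceptual feature vector joins
    # the nearest existing cluster or founds a new one, so new category
    # "symbols" (cluster ids) emerge from regularities in the input.
    # The distance threshold is an arbitrary placeholder.
    THRESHOLD = 1.0
    clusters = []                              # list of (centroid, member count)

    def assign(vector):
        """Return a cluster id for the vector, creating a new cluster if needed."""
        for i, (centroid, n) in enumerate(clusters):
            if math.dist(centroid, vector) < THRESHOLD:
                # move the centroid toward the new member (running mean)
                updated = tuple((c * n + v) / (n + 1) for c, v in zip(centroid, vector))
                clusters[i] = (updated, n + 1)
                return i
        clusters.append((tuple(vector), 1))
        return len(clusters) - 1

    print(assign((0.1, 0.2)))                  # 0: founds the first cluster
    print(assign((0.2, 0.1)))                  # 0: close enough to join it
    print(assign((5.0, 5.0)))                  # 1: far away, founds a new category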

Performance Evaluation

  • Laird, J. E., Derbinsky, N., Voigt, J. (2011). Performance Evaluation of Declarative Memory Systems in Soar, Proceedings of BRIMS 2011, Sundance, UT.

Architecture Evaluation

  • Laird, J. E., Wray III, R. E. (2010). Cognitive Architecture Requirements for Achieving AGI. Proceedings of the Third Conference on Artificial General Intelligence (AGI)
  • Gorski, N.A., and Laird, J.E. (2009). Evaluating Evaluations: A Comparative Study of Metrics for Comparing Learning Performances (Report No. CCA-TR-2009-05). Ann Arbor, MI: Center for Cognitive Architecture, University of Michigan.
  • Laird, J.E., Wray, R.E., Marinier, R.P., Langley, P. (2009) Claims and Challenges in Evaluating Human-Level Intelligent Systems, Proceedings of the Second Conference on Artificial General Intelligence.

Cognitive Robotics

  • Laird, J.E. (2009). Towards Cognitive Robotics, SPIE Defense and Sensing Conferences, Orlando, FL.

In addition to developing and integrating these architectural components, we have been actively engaged in using them to provide new high-level cognitive capabilities. Cognitive capabilities are general reasoning and learning abilities that are supported by a combination of architectural components, general knowledge for using the capability, and task knowledge (which enables the cognitive capability for a specific task). For example, we have studied how an agent can use episodic memory to remember the locations of objects it has sensed in the past, which, at the time they were experienced, were not relevant to any of the agent’s goals. Normally an agent would ignore these objects, but with episodic memory the agent can recall the information in the future when it does become relevant to solving a problem. An example is when you are about to run out of gas and then remember that there is a gas station on a nearby corner, even though you have never visited it. Episodic memory also gives an agent the ability to create a crude internal model of its own actions or those of others, so that in the future it can evaluate alternative actions by asking itself, “What will happen if I do X?” A toy sketch of this kind of prediction from remembered experiences follows the reference below.

  • Laird, J. E., Xu, J. Z., and Wintermute, S. (2010). Using Diverse Cognitive Mechanisms for Action Modeling. Proceedings of the Tenth International Conference on Cognitive Modeling, Philadelphia, PA.
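
This toy version is not the mechanism in the paper above: previously experienced (state, action, next-state) transitions are recorded, and a prediction is made by recalling the transition whose state and action best match the current situation; the representations are placeholders.

    # Toy action model built from remembered transitions: to evaluate an
    # action, recall the stored transition whose state and action best match
    # the current situation and use its outcome as the prediction.
    # Representations are illustrative placeholders.
    transitions = []                           # list of (state, action, next_state)

    def remember(state, action, next_state):
        transitions.append((dict(state), action, dict(next_state)))

    def predict(state, action):
        """Best-match recall: 'What will happen if I do this action here?'"""
        best, best_score = None, -1
        for s, a, s_next in transitions:
            if a != action:
                continue
            score = sum(1 for k, v in state.items() if s.get(k) == v)
            if score > best_score:
                best, best_score = s_next, score
        return best

    remember({"door": "closed", "holding": "key"}, "unlock", {"door": "open"})
    print(predict({"door": "closed", "holding": "key"}, "unlock"))   # {'door': 'open'}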

To date we have only scratched the surface in developing these new general cognitive capabilities, in part because we have been studying the new architectural components individually (within the context of Soar), which is a natural outcome of having individual graduate students with specific thesis topics. A major thrust of future research will be to explore the cognitive capabilities that emerge from the interactions among these new architectural components. Below are a few examples we are interested in pursuing:

  1. The intermixing of visual and symbolic representations promises to greatly enhance spatial reasoning abilities, which would be further enhanced by adding mental imagery to episodic memory. This could lead to the ability to internally simulate not just changes to symbolic structures, but changes over a combination of symbolic and visual structures, using both current sensory data and memories to drive the internal simulations.

     • Wintermute, S. (2010). Using Imagery to Simplify Perceptual Abstraction in Reinforcement Learning Agents. Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), Atlanta, Georgia.
     • Wintermute, S. (2009). Representing Problems (and Plans) Using Imagery. In Papers from the 2009 AAAI Fall Symposium Series: Multi-Representational Architectures for Human-Level Intelligence, Arlington, VA, November 2009. AAAI Press.
     • Wintermute, S. (2009). Integrating Reasoning and Action through Simulation. In Proceedings of the Second Conference on Artificial General Intelligence (AGI-09). Arlington, VA.

  2. Our work on emotion, mood, and feeling (EMF) has already had a direct impact on reinforcement learning, and we expect that impact to broaden to the other memories. EMF provides an intrinsic reward for reinforcement learning, and it also has the potential to provide saliency signals that bias storage and retrieval in episodic and semantic memory.
  3. Reinforcement learning may have synergistic interactions with episodic memory, where RL is used to learn cues and strategies for retrieving the most appropriate memories in different situations.

     • Gorski, N.A. & Laird, J.E. (2009). Learning to Use Episodic Memory. Proceedings of the 9th International Conference on Cognitive Modeling (ICCM-09). Manchester, UK.

  4. Social interactions can stress every aspect of a cognitive system. In the past, we have done research on anticipating an opponent, based in large part on using the agent’s own knowledge of itself to drive expectations of the opponent’s behavior, modulated by knowledge of the opponent’s specific goals and knowledge. Without emotion, mood, and feeling, without improved memory and learning, and without mental imagery, our agents were unable to model the complexities of human behavior. Adding these new components will greatly improve our agents’ ability to predict and anticipate complex opponents that have those same abilities, especially opponents that are adapting to our agent’s behavior.

To conclude, we are interested in studying the components of cognitive architecture, their interactions, and how they support high-level cognitive capabilities that span the range of human-level intelligent behavior. We are always open to considering new architectural components, but we are also at a point where much can be learned by studying the interactions and uses of the newly added architectural components to support cognitive capabilities in knowledge-rich agents.