Artificial Intelligence Research

  1. Agent Architecture Development: Soar
  2. Agent Architecture Evaluation
  3. Agent Behavior Validation
  4. Agent Learning
  5. Agent Development
  6. Rapid Knowledge Acquisition

Cognitive Architecture:

Laird, J. E., Lebiere, C., & Rosenbloom, P. S. (2017). A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. AI Magazine, 38(4).

Langley, P., & Laird, J. E. (2002). Cognitive architectures: Research issues and challenges (Technical Report). Institute for the Study of Learning and Expertise, Palo Alto, CA. A revised version was later published as Langley, Laird, & Rogers (2009) in Cognitive Systems Research.

1. Agent Architecture Development: Soar

The best reference on Soar is: Laird, J. E. (2012). The Soar Cognitive Architecture. MIT Press.

For historical references on Soar development:

  • pre-1990: The Soar Papers: Readings on Integrated Intelligence, Rosenbloom, Laird, and Newell (1993), and Unified Theories of Cognition, Newell (1990). 
  • A review of its evolution up through Soar 6 (current version is 9.6) is in: Laird, J.E., & Rosenbloom, P.S. (1996) The evolution of the Soar cognitive architecture. In T. Mitchell (ed.) Mind Matters. 

Soar 8 incorporates changes to the semantics of Soar to enhance its ability to maintain consistency in its reasoning as the world changes. Some of this has been described in the following papers:

  • Robert E. Wray and John E. Laird. An architectural approach to consistency in hierarchical execution. Journal of Artificial Intelligence Research. 19. 355–398. 2003.
  • Wray, R. E. (1998). Ensuring Reasoning Consistency in Hierarchical Architectures. Ph. D. Thesis. University of Michigan. Ann Arbor, MI. (also published as Technical Report CSE-TR-379-98.)
  • Wray, R. E., and Laird, J. (1998). Maintaining consistency in hierarchical reasoning. Fifteenth National Conference on Artificial Intelligence. 928-935. Madison, WI. July, 1998.
  • Wray, R. E., Laird, J., and Jones, R. M. (1996). Compilation of non-contemporaneous constraints. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, 771-778. Portland, Oregon. August 1996.

Soar 9 is “under development” and will include reinforcement learning, episodic memory, semantic memory and emotion, as well as activation. We are developing these pieces independently, but plan on working on integration in fall 2005. Some aspects may be available for release in spring 2006.

  • Nason, S. and Laird, J. E., Soar-RL, Integrating Reinforcement Learning with Soar, Cognitive Systems Research, 6 (1), 2005, pp. 51-59. Also in International Conference on Cognitive Modeling, 2004.
  • Nuxoll, A., Laird, J., A Cognitive Model of Episodic Memory Integrated With a General Cognitive Architecture, International Conference on Cognitive Modeling 2004.
  • Nuxoll, A., Laird, J., James, M. Comprehensive Working Memory Activation in Soar. International Conference on Cognitive Modeling, Poster, 2004.
  • Marinier, R., Laird, J. Toward a Comprehensive Computational Model of Emotions and Feelings, International Conference on Cognitive Modeling 2004.

2. Agent Architecture Evaluation

I’m also very interested in developing methodologies for evaluating and comparing agent architectures. Among the issues in doing this is identifying the set of desired architectural capabilities.

  • Wallace, S., Laird, J. E., Coulter, K. Examining the Resource Requirements of Artificial Intelligence Architectures. Conference on Computer Generated Forces and Behavior Representation, May 2000.
  • Bhattacharyya, S. & Laird, J. E., Lessons for Empirical AI in Plan Execution. Accepted to the IJCAI-99 workshop on Empirical AI.
  • Wallace, S. & Laird, J. E., Toward a Methodology for AI Architecture Evaluation: Comparing Soar and CLIPS. ATAL-99, July, 1999.
  • Laird, J., Pearson, D. J., Jones, R. M., and Wray, R. E. (1996). Dynamic Knowledge Integration During Plan Execution. In Papers from the 1996 AAAI Fall Symposium on Plan Execution: Problems and Issues, 92-98. Cambridge, MA. November, 1996.

The students in a class of mine created an extensive web document that attempts to classify and analyze many existing AI Agent Architectures.

  • Robert E. Wray, Ronald Chong, Joseph Phillips, Seth Rogers, William Walsh, and John Laird. Organizing information in Mosaic: A classroom experiment. Computer Networks and ISDN Systems, 28:167–178, 1995. Originally published in: Proceedings of the Second International World Wide Web Conference 1994: Mosaic and the Web, 475-485. Chicago, Illinois. October, 1994.

3. Agent Behavior Validation

Scott Wallace did his thesis (completed in June 2003) on validating agent behavior. This is a very tough problem, which he simplified by comparing the agent’s behavior to that of a human expert.

  • Wallace, S. Validating Complex Agent Behavior, Ph.D. Thesis University of Michigan, Ann Arbor, MI, 2003.
  • Wallace, S. and Laird, J. E. Behavior Bounding: Toward Effective Comparisons of Agents & Human Behavior, International Joint Conference on Artificial Intelligence, 2003.
  • Wallace, S., and Laird, J. E. Toward Automatic Knowledge Validation. In Proceedings of the Eleventh Conference on Computer Generated Forces and Behavioral Representation. pp. 447-456. May 2002.
  • Wallace, S., and Laird, J. E. Intelligence and Behavioral Boundaries. NIST Workshop on Performance Metrics for Intelligent Systems (PerMIS 2002). Gaithersburg, MD. 2002

4. Agent Learning

My interests in learning center on integrating learning with performance (planning and execution) in architectures for general intelligent agents who interact with complex environments. Over the years, my students have looked at different aspects of this problem, always trying to understand the integration and architectural issues. We have tried to build up more and more learning capabilities, within a single architecture. One cut at what we’ve done is that we’ve continually looked at different sources of knowledge (experience, instruction, examples) and how they can improve different aspects of performance. This material is based upon work supported by the National Science Foundation under Grant No. 0413013.

1. Reinforcement Learning

We are in the process of adding reinforcement learning to Soar. This paper is the first publication on the work. Nason, S. and Laird, J. E., Soar-RL, Integrating Reinforcement Learning with Soar, International Conference on Cognitive Modeling, 2004.

This paper presents a model of rat learning using Soar’s RL mechanism and compares it to an ACT-R model. Wang, Y., and Laird, J.E. 2007. The Importance of Action History in Decision Making and Reinforcement Learning. Proceedings of the Eighth International Conference on Cognitive Modeling. Ann Arbor, MI. http://www-personal.umich.edu/~yongjiaw/publications/ICCM_2007.pdf
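Soar-RL adjusts numeric preferences on operators based on reward. As a rough illustration of the flavor of update involved, here is ordinary tabular Q-learning in Python; this is a generic sketch with invented state and operator names, not Soar-RL code:

```python
def q_update(q, state, op, reward, next_state, ops, alpha=0.1, gamma=0.9):
    """One temporal-difference (Q-learning) update of the numeric value for
    choosing operator `op` in `state`. `q` maps (state, op) pairs to values;
    `ops` lists the operators available in `next_state`."""
    best_next = max(q.get((next_state, o), 0.0) for o in ops)  # greedy lookahead
    old = q.get((state, op), 0.0)
    q[(state, op)] = old + alpha * (reward + gamma * best_next - old)

q = {}
for _ in range(50):
    q_update(q, 'at-door', 'open-door', 1.0, 'done', ['open-door'])
# the value for ('at-door', 'open-door') approaches the reward of 1.0
```

In Soar-RL the analogous values are stored on numeric preference rules and updated by the architecture itself during operator selection.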

2. Episodic Memory

We are also adding an episodic memory to Soar. Nuxoll, A. M. and Laird, J. E. (2007). Extending Cognitive Architecture with Episodic Memory. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI). http://ai.eecs.umich.edu/soar/sitemaker/docs/pubs/AAAI2007_NuxollLaird_ver14(final).pdf

3. Original Work on Chunking in Soar

The first integration of chunking in Soar led to the study of learning search control knowledge and is described in the AAAI-84 paper below. This was followed by looking at learning macro-operators using chunking. We also did a comparison of chunking in Soar to Explanation-based Generalization/Learning, concluding that chunking is one form of EBL.

  • Laird, J. E., Rosenbloom, P. S., and Newell, A. Towards chunking as a general learning mechanism. In Proceedings of AAAI-84, American Association for Artificial Intelligence, Austin, TX, 188-192, 1984.
  • Laird, J. E., Rosenbloom, P. S., and Newell, A. Chunking in Soar: the anatomy of a general learning mechanism. Machine Learning, 1(1), 11-46, 1986.
  • Rosenbloom, P. S. and Laird, J. E. Mapping explanation-based generalization onto Soar. In Proceedings of AAAI-86, American Association for Artificial Intelligence, Philadelphia, PA, 561-567, 1986.

4. Learning Integrated with Performance

Although chunking has always been integrated with performance in Soar, the following papers examine that issue directly.

  • Laird, J. E. and Rosenbloom, P. S. Integrating Execution, Planning, and Learning in Soar for External Environment. In Proceedings of National Conference of Artificial Intelligence, 1022-1029, July 1990, Boston, MA.
  • Laird, J. E., Hucka, M., Yager, E. S., and Tuck, C. M. Robo-Soar: An integration of external interaction, planning and learning using Soar. Robotics and Autonomous Systems, Vol. 8, 1991, pp 113-129. This also appears as a chapter in Toward Learning Robots, W. Van de Velde (Editor), MIT Press, Boston, MA, 1993.

5. Concept learning and inductive learning

In this work, we’ve looked at inductive learning of concepts using symbolic mechanisms, while still preserving many of the typicality and graded performance behaviors seen in humans. Craig Miller developed an approach to inductive learning, called SCA, that does not require any modifications to chunking. This research demonstrates that it is possible to use analytic methods, such as chunking (or EBL) to do more than just speed-up learning. SCA is notable because it is incremental, noise tolerant, can make use of many different sources of knowledge (not just examples), and matches human typicality data for concept learning. SCA has been integrated into many of the follow-on learning systems listed below.

  • Miller, C. S., and Laird, J. E. A Constraint-Motivated Lexical Acquisition Model. In Proceedings of the Eighth International Workshop on Machine Learning, 95-99, 1991.
  • Miller, C. S. and Laird, J. E., Accounting for graded performance within a discrete search framework, Cognitive Science, 20 (4), 1996, pp.499-537.
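To give a feel for this style of learning, here is a toy incremental, noise-tolerant symbolic classifier in the same spirit. It is only a sketch, far simpler than Miller’s SCA, and the class name and data format are invented:

```python
from collections import Counter

class IncrementalSymbolicLearner:
    """Toy incremental concept learner: predict the majority label among
    stored examples sharing the most features with the query.
    Incremental and noise-tolerant, but much simpler than SCA."""
    def __init__(self):
        self.examples = []  # list of (frozenset of features, label)

    def train(self, features, label):
        self.examples.append((frozenset(features), label))

    def predict(self, features):
        features = frozenset(features)
        if not self.examples:
            return None
        best = max(len(f & features) for f, _ in self.examples)
        votes = Counter(lbl for f, lbl in self.examples
                        if len(f & features) == best)
        return votes.most_common(1)[0][0]

learner = IncrementalSymbolicLearner()
learner.train({'wings', 'feathers', 'flies'}, 'bird')
learner.train({'fins', 'scales', 'swims'}, 'fish')
```

Because prediction is by degree of feature overlap, the learner degrades gracefully on partial or noisy feature sets, which is one source of the graded, typicality-like behavior discussed above.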

6. Learning from instruction

To further demonstrate the ability to integrate multiple sources of knowledge in learning, as well as the integration of learning and performance, we took a page from Winograd’s book and explored how to integrate instruction with performance. This work was done by Scott Huffman for his thesis and resulted in a system called Instructo-Soar, which could dynamically request instruction whenever it was unable to make progress on a problem. It could accept a variety of types of instructions, perform the task, and learn from the instructions so that in similar future situations the agent would perform the task without needing instruction.

  • S. B. Huffman and J. E. Laird, Learning procedures from interactive natural language instructions, in P. E. Utgoff, ed., Machine Learning: Proceedings of the Tenth International Conference (ML-93), 1993.
  • S. B. Huffman, The requirements of instructability, in Working notes of the 1994 AAAI Spring Symposium on Active Natural Language Processing, ed. C. Martin, J. Lehman, and K. Eiselt, March 1994.
  • S. B. Huffman and J. E. Laird. Learning from highly flexible tutorial instruction, in Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94).
  • S. B. Huffman and J. E. Laird. Flexibly Instructable Agents. In Journal of Artificial Intelligence Research, Volume 3, Pages 271-324, 1995.
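The instruct-when-stuck loop described above can be sketched in a few lines; all names and the goal/instruction format here are hypothetical simplifications, not Instructo-Soar’s actual representation:

```python
def run_task(task, knowledge, ask_instructor):
    """Sketch of the loop: for each goal, use a known procedure if one
    exists; otherwise request an instruction, execute it, and cache it so
    the same situation never needs instruction again."""
    trace = []
    for goal in task:
        if goal not in knowledge:
            knowledge[goal] = ask_instructor(goal)  # learn from instruction
        trace.append(knowledge[goal])               # perform the (learned) step
    return trace

requests = []
def instructor(goal):
    requests.append(goal)
    return f"how-to:{goal}"

kb = {}
run_task(['pick-up-block', 'move-to-table'], kb, instructor)  # asks twice
run_task(['pick-up-block', 'move-to-table'], kb, instructor)  # asks nothing
```

The second run completes without any instruction requests, which is the behavioral signature the papers above describe: learning converts instructed behavior into autonomous behavior.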

7. Learning to correct errors in planning knowledge

This work started out with the goal of showing that it is possible to correct knowledge using EBL-like techniques if the architecture supports the right kind of deliberation mechanism for selecting operators. It then expanded as Doug Pearson pursued it for his thesis. In the end, his system, IMPROV, is pretty amazing in that it is able to detect and correct errors in its planning knowledge through interaction with dynamic environments in which actions take time and sensors may be noisy. It uses SCA extensively.

  • Laird, J. E. Recovery from Incorrect Knowledge in Soar. In Proceedings of AAAI-88, American Association for Artificial Intelligence, Minneapolis, MN, 618-623, 1988.
  • S. B. Huffman, D. J. Pearson, and J. E. Laird, Correcting Imperfect Domain Theories: A Knowledge-Level Analysis. in Machine Learning: Induction, Analogy and Discovery, edited by Susan Chipman and Alan Meyrowitz, Kluwer Academic Press, 1993. This is about the types of errors you can have in an agent’s knowledge of the world.
  • D. J. Pearson, Learning Procedural Planning Knowledge in Complex Environments. Ph.D. Thesis, 1996.
  • Active Learning in Correcting Domain Theories: Help or Hindrance? in AAAI Symposium on Active Learning (1995). This is about the relative merits of learning by doing, rather than learning by being given carefully chosen sample problems.
  • Toward Incremental Knowledge Correction for Agents in Complex Environments. In Machine Intelligence 15 (1998). This is a general overview of Doug Pearson’s system, IMPROV, which learns to correct mistakes as it solves problems.
  • Dynamic Knowledge Integration during Plan Execution. in AAAI-96 Fall Symposium on Plan Execution: Problems and Issues (1996). This is about how tasks and environments constrain the way plans can be built and used.

8. Integration of learning from instruction and learning to correct errors

Instructo-Soar and IMPROV have been integrated. The papers below cover their integration; the last takes a broader view of adaptation in intelligent agents.

  • Pearson, D. J., Laird, J. E., “Incremental Learning of Procedural Planning Knowledge in Challenging Environments,” Computational Intelligence, 2005, 21(4), 414.
  • D. J. Pearson and S. B. Huffman, “Combining learning from instruction with recovery from incorrect knowledge.” ML-95 workshop on Agents that learn from other agents, July 1995.
  • Laird, J. E., Pearson, D. J., Huffman, S. B., Knowledge-Directed Adaptation in Intelligent Agents. AAAI Workshop on Intelligent Adaptive Agents, August 1996. Published in Imam, I. F., and Kodratoff, Y., Intelligent Adaptive Agents: A Highlight on the Field and A Report on the AAAI-96 Workshop, A Technical Report of the Machine Learning and Inference Laboratory, George Mason University, 1996. Also published in Journal of Intelligent Information Systems, 9, 261-275 (1997).

9. Learning in continuous domains

This work was done by Seth Rogers (now of ISLE at Stanford). It investigates using SCA style symbolic learning within the context of continuous environments.

  • Increasing Learning Rate via Active Goal Selection, in the 1995 AAAI Symposium on Active Learning.
  • New Results on Learning from Experience in Continuous Domains, unpublished technical update.
  • Symbolic Performance & Learning in Complex Environments, AAAI 1996 National Conference on Artificial Intelligence, Student Abstract and Poster Program.
  • Symbolic Performance & Learning in Complex Environments, unpublished longer version of the AAAI abstract.

Seth’s thesis is available as a Technical Report from the EECS Department, University of Michigan. Contact me ([email protected]) if you want a copy.

10. Learning from Observation

We have started a research project on learning from observation. The basic idea is to learn procedural knowledge from observations of humans performing the same task. We are building on the “behavioral cloning” work of Claude Sammut and his colleagues. One extension we have added is the ability of the human to add annotations in terms of current goals, which simplifies parsing the behavior into relevant segments. This work is being done by Michael van Lent; his web site is more likely to have current descriptions of the research.

  • Learning by Observation in a Complex Domain, Proceedings of the 1998 Banff Knowledge Acquisition Workshop.
  • van Lent, and Laird, Learning by Observation in a Tactical Air Combat Domain, Proceedings of the Seventh Conference on Computer Generated Forces and Behavioral Representation. Orlando, FL. May 1998.
  • van Lent, M., & Laird, J. E., Learning Hierarchical Performance Knowledge by Observation. International Conference on Machine Learning, June, 1999.
  • van Lent, M. & Laird, J. E., Learning Procedural Knowledge by Observation. Proceedings of the First International Conference on Knowledge Capture (K-CAP 2001), October 21-23, 2001, Victoria, BC, Canada, ACM, pp 179-186.
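The goal-annotation idea above, using the expert’s annotations to parse an observed behavior trace into relevant segments, can be sketched as follows. The data format is an invented simplification of what a learning-by-observation system might record:

```python
def segment_by_goal(trace):
    """Split an observed behavior trace into per-goal segments using the
    expert's goal annotations. `trace` is a list of (goal, action) pairs;
    a new segment starts whenever the annotated goal changes."""
    segments = []
    for goal, action in trace:
        if not segments or segments[-1][0] != goal:
            segments.append((goal, []))   # open a new segment for this goal
        segments[-1][1].append(action)    # attach the action to it
    return segments

trace = [('intercept', 'turn-left'), ('intercept', 'accelerate'),
         ('fire-missile', 'lock-target'), ('fire-missile', 'launch')]
segments = segment_by_goal(trace)
```

Each segment then becomes a candidate example for learning the procedural knowledge associated with that goal, which is far easier than inducing segment boundaries from raw behavior.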

5. Agent Development

1. Air-Soar

Air-Soar was a system that flew the SGI flight simulator. It demonstrated that a symbolic rule-based system could act as a controller in a dynamic environment. Air-Soar’s design directly used Soar’s hierarchical operators to represent a hierarchy of goals and actions.

  • D. J. Pearson, S. B. Huffman, M. B. Willis, J. E. Laird, and R. M. Jones, “Intelligent multi-level control in a highly reactive domain,” in Proceedings of the Third International Conference on Intelligent Autonomous Systems, Pittsburgh, PA, February 1993.
  • D. J. Pearson, S. B. Huffman, M. B. Willis, J. E. Laird and R. M. Jones, A symbolic solution to intelligent real-time control. Robotics and Autonomous Systems 11 (1993). This is about designing a rule-based system to fly a simulated plane in real time.

2. TacAir-Soar

Over the last eight years, we have been developing intelligent agents for simulated battlefields. This is the Soar/IFOR component of the WISSARD/IFOR project (funded by DARPA/ISO). The goal of Soar/IFOR is the development of autonomous computer agents whose behavior is tactically indistinguishable from humans. These synthetic agents must not only be lifelike, they must be humanlike, with many of the capabilities we commonly associate with intelligent human behavior: real-time reactivity, goal-directed problem solving and planning, large bodies of knowledge, adaptation to changing situations, and interaction and coordination with other intelligent entities. The Soar/IFOR consortium, involving the University of Michigan, the University of Southern California’s Information Sciences Institute, and Carnegie Mellon University, is developing such agents for air missions: air-to-air combat, air-to-ground attacks, and helicopter missions. A long-term goal of this research is to extend this technology to education, training, and entertainment, where humans can interact with humanlike intelligent agents in a variety of synthetic environments.

A number of papers have been published about our project. Below are some of the UM papers:

  • Jones, R. M., Tambe, M., Laird, J. E., Rosenbloom, P. S. 1993. Intelligent Automated Agents for Flight Training Simulators. Proceedings of the Third Conference on Computer Generated Forces and Behavioral Representation. Orlando, FL. pp. 33-42.
  • Jones, R. M., Wray, R. E., van Lent, M., and Laird, J. (1994). Planning in the Tactical Air Domain. AAAI Fall Symposium. New Orleans, LA. November, 1994.
  • Rosenbloom, Johnson, W. L., Jones, R. M., Koss, F., Laird, J. E., Lehman, J. F., Rubinoff, R., Schwamb, K. B., Tambe, M. 1994. Intelligent Automated Agents for Tactical Air Simulation: A Progress Report. Proceedings of the Fourth Conference on Computer Generated Forces and Behavioral Representation. Orlando, FL.
  • Laird, J. E., Johnson, W. L., Jones, R. M., Koss, F., Lehman, J. F., Nielsen, P. E., Rosenbloom, P. S., Rubinoff, R., Schwamb, K. Tambe, M., Van Dyke, J., van Lent, M. and Wray, R. E. 1995. Simulated Intelligent Forces for Air: The Soar/IFOR Project 1995. Proceedings of the Fifth Conference on Computer Generated Forces and Behavioral Representation. Orlando, FL. pp. 27-36.
  • Tambe, M., Johnson, W. L., Jones, R. M., Koss, F., Laird, J. E., Rosenbloom, P. S., and Schwamb, K. 1995. Intelligent Agents for Interactive Simulation Environments. AI Magazine, 16(1).
  • John E. Laird, Karen J. Coulter, Randolph M. Jones, Patrick G. Kenny, Frank Koss, and Paul E. Nielsen. Integrating Intelligent Computer Generated Forces in Distributed Simulations: TacAir-Soar in STOW-97. Simulation Interoperability Workshop, Orlando, FL, 1998.
  • Laird, J. E., Jones, R. M., and Nielsen, P. E., Knowledge-Based Multiagent Coordination, Presence, Vol. 7, No. 6, December 1998, 547-563.
  • Jones, R. M., Laird, J. E., Nielsen P. E., Coulter, K., Kenny, P., and Koss, F. Automated Intelligent Pilots for Combat Flight Simulation, AI Magazine, Spring 1999, Vol. 20, No. 1, pp. 27-42. This is the best overview paper of the project. The research and development in this area continues at Soar Technology, Inc.

3. Soar-Games

We are now working hard on developing AI systems for computer games.

Below are papers on integrating Soar with a real-time strategy game engine (ORTS)

  • Wintermute, S., Xu, J., Irizarry, J., Laird, J.E. 2007. SORTS Tech Report.
    http://ai.eecs.umich.edu/soar/sitemaker/docs/pubs/sorts_report.pdf
  • Wintermute, S., Xu, J., and Laird, J.E. SORTS: A Human-Level Approach to Real-Time Strategy AI. Proceedings of the Third Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE-07), Stanford, California http://www.eecs.umich.edu/~swinterm/papers/AIIDE07-SORTS.pdf

4. MOUT-Bot

Working with Soar Technology, I’m developing an adversary for training in synthetic environments. This is part of ONR’s VIRTE/DEMO II project.

  • Robert E. Wray, John E. Laird, Andrew Nuxoll, Devvan Stokes, Alex Kerfoot, Synthetic Adversaries for Urban Combat Training, Proceedings of the 2004 Innovative Applications of Artificial Intelligence Conference, San Jose, CA, July 2004. AAAI Press.
  • Robert E. Wray and John E. Laird. Variability in Human Behavior Modeling for Military Simulations. Proceedings of the 2003 Conference on Behavior Representation in Modeling and Simulation. Scottsdale, AZ. May, 2003. This paper describes our attempts at understanding the reasons for variability and we propose some approaches for modeling it.

6. Rapid Knowledge Acquisition

Doug Pearson and I have been working on a new way of doing knowledge acquisition that involves scenarios described as diagrams by an expert.

  • Douglas Pearson, John E. Laird, Redux: Example-Driven Diagrammatic Tools for Rapid Knowledge Acquisition, Proceedings of Behavior Representation in Modeling and Simulation, 2004, Washington, D.C.