Van Lehn, Kurt

TETON is an architecture that blends situated action and planned action, developed by Kurt VanLehn and a research group including William Ball (co-author of TETON) at the Learning Research and Development Center and Computer Science Department of the University of Pittsburgh, Pittsburgh, PA, USA, and the Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA.


General Information

The TETON architecture was developed by a research group including William Ball at the University of Pittsburgh (Pittsburgh, PA, USA) and Carnegie Mellon University (Pittsburgh, PA, USA), under the direction of Kurt VanLehn, the author of many cognitive architectures, including the TETON architecture described here.

Kurt VanLehn currently works at Arizona State University (ASU, Tempe, AZ, USA); see his pages: Projects | intelligent tutoring systems.


An Overview of Teton

The motivation underlying Teton's design is the incorporation of goal reconstruction within a cognitive architecture. VanLehn and Ball view goal reconstruction as both a practical tool and a close approximation of capabilities possessed by humans.

Teton Architecture

Teton is a problem solver that consists of two memory areas and an execution cycle. One memory area is called the short-term or working memory, while the other is the long-term memory or knowledge base. Knowledge is represented in an open and declarative format that allows the active operators to examine, interpret and alter any of the agent's knowledge.


[VB1] Excerpt

5. Appendix: Teton

Teton is a von Neumann machine, so it has two kinds of memory.
The knowledge base is a large, slowly changing memory that holds general knowledge, such as procedures for solving problems, inference rules and general facts.
The working memory is a rapidly changing memory that holds information produced in the course of a computation.
Like all von Neumann machines, Teton has a built-in execution cycle that interprets procedural knowledge stored in its knowledge base.
The execution cycle consists of:
(1) deciding what to do, based on the current states of the working memory and the knowledge base, and
(2) doing what it decided to do.

The execution cycle is an algorithm that treats the information in the working memory and the knowledge base as formatted data. The format of the data is called the representation language.

This description of Teton has, so far, said nothing that would distinguish it from any other von Neumann machine.
To define Teton per se, the following three sections will describe, respectively, its representation language, its execution cycle and its memories.

5.1. Knowledge representation

Teton's representation language is appropriate for procedural knowledge, but clumsy at best for representing declarative knowledge. For instance, it is simple to represent addition and subtraction algorithms, but it is difficult to represent that addition and subtraction are inverses. This is not intended to be a claim that the mind has only clumsy ways to represent declarative knowledge. It means only that we have not investigated tasks where declarative knowledge has a major influence, so we have not yet included a language appropriate for representing declarative knowledge.

In working memory, the main unit of information is the goal. A goal serves many purposes. It can represent an action that has already been completed, or an action that is planned but not yet begun, or an action that is in progress. A goal has slots for indicating a state to be achieved, an operation, the state resulting from the operation, subgoals created by the operation, the supergoal of this goal, the time that the goal was created, and so on.
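To make the slot structure concrete, the goal record can be sketched as follows. This is only an illustrative reconstruction in Python; the field names are assumptions drawn from the slots listed above, not Teton's actual representation language.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Goal:
    """Illustrative sketch of a Teton working-memory goal (field names are assumed)."""
    desired_state: object                    # the state to be achieved
    operation: Optional[object] = None       # the (partially instantiated) operator chosen for it
    result_state: Optional[object] = None    # the state resulting from the operation
    subgoals: List["Goal"] = field(default_factory=list)  # subgoals created by the operation
    supergoal: Optional["Goal"] = None       # the goal this one serves
    created_at: int = 0                      # the time the goal was created
    status: str = "pending"                  # e.g. "pending" or "completed"
```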

In the knowledge base, there are two kinds of knowledge: operators and selection rules.
Operators have the following parts:
1. A goal type, which indicates what kinds of goals this operator is appropriate for. This description usually has variables that must be instantiated before the operator can be executed.
2. A set of preconditions. If all these predicates hold of the current state of working memory, then the operator can be executed. If not, then the architecture will automatically create subgoals for achieving them.
3. A body, whose execution carries out the operation and may itself create subgoals.
Operators thus allow both deliberate subgoaling and operator subgoaling: the execution of the body of an operator can create subgoals (deliberate subgoaling), and the architecture will create subgoals if an operator's preconditions are unsatisfied (operator subgoaling). A sketch of this structure appears below.
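A hypothetical rendering of an operator, continuing the Goal sketch above, might look like the following. The field names and types are assumptions for illustration only; operator subgoaling itself is performed by the architecture (see the interpreter sketch in section 5.2), not by the operator.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Operator:
    """Hypothetical sketch of a Teton operator (field names are assumed)."""
    goal_type: str                        # the kinds of goals this operator is appropriate for
    preconditions: List[Callable]         # predicates tested against the current state
    body: Callable                        # executing it may create subgoals (deliberate subgoaling)
    shortcut: Optional[Callable] = None   # shortcut condition, used in step 5 of table 5-1
    primitive: bool = False               # primitive operators change the state directly
```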

Selection rules are the other type of knowledge in Teton's knowledge base. They are used for selecting a goal to work on and for selecting an operator to use for achieving the selected goal. There are three types of selection rules. Consideration rules indicate that a goal or operator should be considered.

These rules are consulted first. They usually produce a large set of items. Rejection rules are consulted next, and cause some of the items to be removed from the set of items under consideration. Preference rules are consulted last. They partially order the set of items under consideration. Normally, one item will be preferred over all the others; it is the one selected. Teton's selection rule mechanism is similar to the ones used by Soar (Rosenbloom, Newell & Laird, 1990) and Prodigy (Carbonell, Knoblock & Minton, 19??). All three systems use this type of mechanism because it makes it easy to implement the acquisition of strategic knowledge: just add new selection rules.
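The three-stage mechanism can be sketched as a filter-then-order pipeline. This is an assumed Python rendering of the description above, not code from Teton, Soar or Prodigy; in particular, the partial order produced by the preference rules is approximated here by counting pairwise preference wins.

```python
def select(candidates, consideration_rules, rejection_rules, preference_rules):
    """Sketch of Teton-style selection: consider, then reject, then prefer.
    Returns the single best item, or None if no unique choice exists
    (which in Teton would raise an impasse)."""
    # 1. Consideration rules propose items (usually a large set).
    considered = [c for rule in consideration_rules for c in rule(candidates)]
    # 2. Rejection rules remove items from the set under consideration.
    surviving = [c for c in considered
                 if not any(reject(c) for reject in rejection_rules)]
    # 3. Preference rules partially order the survivors; here each item is
    #    scored by how many pairwise preferences it wins.
    def score(item):
        return sum(1 for prefer in preference_rules
                   for other in surviving
                   if other is not item and prefer(item, other))
    ranked = sorted(surviving, key=score, reverse=True)
    # A unique selection exists only if one item beats all the others.
    if ranked and (len(ranked) == 1 or score(ranked[0]) > score(ranked[1])):
        return ranked[0]
    return None  # no unique choice: an impasse occurs
```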

5.2. The execution cycle

The main loop of Teton's interpreter is shown in table 5-1. Most of it is quite standard: Goals are selected by goal selection rules. Operators are selected by operator selection rules. Unsatisfied preconditions cause subgoaling. Execution of macro-operators causes subgoaling. Execution of primitive operators causes state changes. However, there are two facilities, impasses and shortcut conditions, that are not standard and deserve some explanation.

Table 5-1: The main loop of Teton's interpreter.

1. Select a goal from working memory using the goal selection rules. If no unique selection exists, then create an impasse goal describing that and select it.
2. If the selected goal has an operation selected for it already, then skip the next step.
3. Select an operation (a partially instantiated operator) for the current goal using the operator selection rules. If there is no unique operation, then create an impasse goal describing that, make it a subgoal of the selected goal, select it, and repeat this step.
4. If the selected operation has unsatisfied preconditions, then create a new goal for each such precondition and link it to the selected goal as a subgoal. Leave the selected goal marked "pending," and return to step 1.
5. If the selected operation has a shortcut condition and it is true, or the selected goal has subgoals and they are all completed, then mark the selected goal "completed" and return to step 1.
6. If the operation is primitive, then execute the operation, mark the selected goal "completed", and return to step 1.
7. Otherwise, the operation is non-primitive, so execute the operation and return to step 1.
Execution will cause new subgoals to be created and linked to the selected goal as subgoals.
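The seven steps above translate almost line for line into code. The sketch below reuses the Goal and Operator sketches from section 5.1; the selection helpers are passed in as parameters and stand for the selection-rule machinery (for example, the select() sketch above). It is a paraphrase of table 5-1 under those assumptions, not Teton's actual interpreter.

```python
def interpreter_step(goals, state, select_goal, select_operation):
    """One pass through the main loop of table 5-1 (illustrative sketch only)."""
    # 1. Select a goal; if no unique choice exists, add and select an impasse goal.
    goal = select_goal(goals)
    if goal is None:
        goal = Goal(desired_state="resolve goal-selection impasse")
        goals.append(goal)

    # 2-3. Select an operation unless the goal already has one; if no unique
    #      operation exists, create an impasse subgoal (step 3 repeats the
    #      selection; only one round is sketched here).
    if goal.operation is None:
        goal.operation = select_operation(goal)
        if goal.operation is None:
            impasse = Goal(desired_state="resolve operation-selection impasse",
                           supergoal=goal)
            goal.subgoals.append(impasse)
            goals.append(impasse)
            return

    op = goal.operation

    # 4. Unsatisfied preconditions become subgoals; the goal stays "pending".
    unsatisfied = [p for p in op.preconditions if not p(state)]
    if unsatisfied:
        for pred in unsatisfied:
            sub = Goal(desired_state=pred, supergoal=goal)
            goal.subgoals.append(sub)
            goals.append(sub)
        return

    # 5. A true shortcut condition, or all subgoals completed, completes the goal.
    if (op.shortcut and op.shortcut(state)) or \
       (goal.subgoals and all(g.status == "completed" for g in goal.subgoals)):
        goal.status = "completed"
        return

    # 6-7. Execute the operation: a primitive operation changes the state and
    #      completes the goal; a non-primitive one creates new subgoals.
    op.body(state, goal)
    if op.primitive:
        goal.status = "completed"
```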

Whenever the architecture needs to select a goal or operation, it enumerates all possible candidates, filters this set with the rejection-type selection rules, then rank orders the set with the remaining selection rules. If one choice is better than all the others, then Teton takes it. However, if the selection rules fail to uniquely specify a choice (e.g., they reject all possibilities, or they cannot decide between two possibilities), then an impasse occurs.
As in Soar (Rosenbloom, Newell & Laird, 1990) and Sierra (VanLehn, 1987; VanLehn, 1989a), an impasse causes the architecture to automatically create a new goal, which is to resolve the impasse. Typically, such resolve-impasse goals are tackled by task-general knowledge.
For instance, one of Sierra's methods is: If the selection rules cannot decide among several possible candidates, then choose one randomly. Another popular impasse-resolving method is: If the selection rules rejected all operations for the current goal, then mark the goal as accomplished even though it is not. This causes the architecture to "skip" planned actions that it does not know how to accomplish. Brown and VanLehn (1980) exhibited a collection of such impasse-resolving methods (called "repairs") and showed how they could explain the acquisition of many students' bugs (procedural misconceptions).
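Two of these impasse-resolving methods are simple enough to sketch directly; the function names below are hypothetical, and the goal object is assumed to carry a status field, as in the sketches above.

```python
import random

def choose_randomly(candidates):
    """Repair for a tie impasse: if the selection rules cannot decide among
    several candidates, choose one at random (one of Sierra's methods)."""
    return random.choice(list(candidates))

def skip_goal(goal):
    """Repair for a no-operation impasse: mark the goal "completed" even though
    it is not, so the architecture "skips" an action it cannot accomplish."""
    goal.status = "completed"
    return goal
```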

Shortcut conditions play an important role when Teton reconstructs goals that have been forgotten (i.e., deleted from working memory). In order to recover from such working memory failures, Teton has to reconstruct some of the goals it once had. It is assumed that there is some top-level goal that is not forgotten. The remaining goals are reconstructed by simply executing the procedural knowledge with the interpreter of table 5-1.
However, when the situation corresponds to a half-completed problem, some of the goals created are superfluous because they have already been achieved. In such cases, the appropriate shortcut conditions are true, and goals are marked "completed" before any attempt is made to execute them.
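In code, reconstruction amounts to re-running the interpreter from the surviving top-level goal; step 5 of the loop then short-circuits any reconstructed goal whose shortcut condition already holds in the half-completed situation. A minimal sketch, reusing interpreter_step from above and assuming the procedural knowledge is sufficient for the loop to terminate:

```python
def reconstruct(top_goal, state, select_goal, select_operation):
    """Sketch of goal reconstruction after a working-memory failure."""
    goals = [top_goal]   # only the top-level goal survived forgetting
    while top_goal.status != "completed":
        # Superfluous goals are marked "completed" by their shortcut
        # conditions before any attempt is made to execute them.
        interpreter_step(goals, state, select_goal, select_operation)
    return goals
```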

One mechanism that is common in other architectures is missing in Teton. Teton goals need not be selected in last-in-first-out (LIFO) order. For instance, if there are two pending goals, A and B, and A is selected and leads to a subgoal C, then a LIFO restriction would rule out selecting goal B since C is more recently created. Most architectures, including Soar and Grapes, place a LIFO restriction on goal selection, but Teton does not. In the case just mentioned, it allows either B or C to be selected.

5.3. Memories

As mentioned earlier, Teton has two memory stores, the knowledge base and the working memory.
Working memory is composed of four distinct memories:
1. The main working memory is the one that holds the goals and other data structures generated by the execution cycle.
2. The situation holds a representation of the external environment. Its contents model the subjects' interpretation of what they see, which is task-specific, like a problem space's current state. For instance, an arithmetic problem is represented as a grid of rows and columns in the situation, whereas an algebra equation is represented as a tree.
3. The scratchpad is just like the situation, except that the contents represent something that the subject is imagining, rather than actually seeing. For instance, some subjects imagine the result of a move during problem solving before actually making the move in the real world. In order to model such events, Teton distinguishes the situation from the imagination.
4. The buffer is a limited capacity store for items that have simple verbal encodings, such as numbers.
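Taken together, the four components can be summarized in a small container sketch. The attribute names follow the descriptions above; their rendering as Python attributes is an assumption.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class WorkingMemory:
    """Illustrative sketch of Teton's four working-memory components."""
    main: List[object] = field(default_factory=list)    # goals and other execution-cycle structures
    situation: Dict = field(default_factory=dict)       # task-specific model of the external environment
    scratchpad: Dict = field(default_factory=dict)      # imagined counterpart of the situation
    buffer: List[object] = field(default_factory=list)  # limited-capacity store for verbally encoded items
```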

The latter two memories are a novelty in computational models of the architecture, so they are worth a little explanation. They are designed as simple versions of the two slave memories described by Baddeley (1986) and called the articulatory loop and the visuo-spatial scratchpad. According to Baddeley, the articulatory loop consists of a passive storage medium, called the phonological store, and a mechanism for "rehearsing" its contents (analogously to a dynamic RAM). The phonological store can hold a phonological code for about 2 or 3 seconds (Zhang & Simon, 1985). If it is not rehearsed in that time, it becomes inaccessible. The time required to rehearse a code is linearly related to the time required to read the equivalent lexical item.

Thus a person can store a given list of stimulus items if the time required to rehearse them once is less than 2 or 3 seconds. This accounts for the often-cited finding that untrained subjects can store and immediately recall about 7 plus or minus 2 chunks (Miller, 1956). Because rehearsal can go on relatively independently of most cognitive tasks (Baddeley, 1986), the articulatory loop acts like a short term store with a capacity of a few phonologically encoded chunks. Teton uses this much simpler model, and allows N chunks to be stored in the articulatory loop, where N is a parameter of the architecture. Typically, the articulatory loop is used for temporary storage of numbers.
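Teton's simplified articulatory loop therefore amounts to a store holding at most N chunks, where N is an architectural parameter. A minimal sketch, assuming that the oldest chunk is lost when the capacity is exceeded (the discard policy is not specified above):

```python
from collections import deque

class ArticulatoryLoop:
    """Minimal sketch of Teton's buffer: at most n_chunks verbally encoded items."""
    def __init__(self, n_chunks: int):
        self.store = deque(maxlen=n_chunks)  # older chunks are dropped past capacity

    def rehearse(self, chunk):
        self.store.append(chunk)

    def recall(self):
        return list(self.store)

# Example: with N = 7 the loop behaves like a "7 plus or minus 2" short-term store.
loop = ArticulatoryLoop(n_chunks=7)
for digit in "357912864":          # nine digits rehearsed in sequence
    loop.rehearse(digit)
print(loop.recall())               # only the seven most recent digits remain
```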

The visuo-spatial scratchpad contains the same kind of items as the situation does, but it is meant to model a scene that the subject is imagining, rather than the real world. Teton's version of the scratchpad is only used for one purpose, which is looking ahead during problem solving in order to project the consequences of contemplated moves. Consequently, Teton supports only a simple model of the scratchpad.
There is a switch in the architecture, which can be set by a primitive operation to either "normal" or "imaginary." When the switch is thrown from "normal" to "imaginary," the scratchpad is initialized with a copy of the items in the current situation. Thereafter, all reading and writing operations that would normally access the situation access the scratchpad instead. The volatility of the scratchpad is modeled, again quite crudely, by counting the number of operations applied to it. After a threshold is crossed (the threshold is a parameter of the model), the contents of the scratchpad become inaccessible.
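The switch and the operation-count model of volatility can be sketched as follows. The class and method names are hypothetical; only the copy-on-switch and counting behavior are taken from the description above.

```python
class ImageryModel:
    """Sketch of the "normal"/"imaginary" switch and scratchpad volatility."""
    def __init__(self, situation: dict, threshold: int):
        self.situation = situation   # model of the external environment
        self.threshold = threshold   # stability parameter of the scratchpad
        self.mode = "normal"
        self.scratchpad = None
        self.op_count = 0

    def set_mode(self, mode: str):
        """A primitive operation throws the switch; entering "imaginary" mode
        initializes the scratchpad with a copy of the current situation."""
        if mode == "imaginary" and self.mode == "normal":
            self.scratchpad = dict(self.situation)
            self.op_count = 0
        self.mode = mode

    def access(self, key, value=None):
        """Read or write an item; in "imaginary" mode the scratchpad is used
        instead of the situation and decays after `threshold` operations."""
        if self.mode == "normal":
            store = self.situation
        else:
            self.op_count += 1
            if self.scratchpad is None or self.op_count > self.threshold:
                self.scratchpad = None   # contents become inaccessible
                raise RuntimeError("scratchpad contents have decayed")
            store = self.scratchpad
        if value is not None:
            store[key] = value
        return store.get(key)
```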

This facility was used to simulate look-ahead search in the Tower of Hanoi, which plays a crucial role in Anzai and Simon's (1979) account of strategy acquisition. In the course of developing a similar account of strategy acquisition, we discovered that learning the more advanced versions of the disk subgoaling strategy would require looking ahead 12 moves in the scratchpad. Not only is this implausible, but setting the stability parameter of the scratchpad to 13 caused learning of earlier versions of the strategy to go awry. This led us to look for methods of strategy acquisition that did not use the scratchpad. We found not one but several, along with good support for them in the protocol data (VanLehn, 19??; VanLehn, 1989b).


Publications
Kurt A. VanLehn’s Selected Publications
[VB1] VanLehn, K., & Ball, W. (1991). Goal Reconstruction: How Teton blends situated action and planned action. In K. VanLehn (Ed.), Architectures for Intelligence (pp. 147-188). Hillsdale, NJ: Erlbaum. [2 MB PDF]