Goal Domain

Even though one can interpret another’s behavior as goal-directed, doing so need not mean that one represents the other’s intention. It is sufficient to represent the action’s goal. Because the human cognitive system takes self-induced motion as a cue for goal-directedness, intentions to act are inferred from observed behavior. Gergely and Csibra (2003) argued that infants do not primarily interpret instrumental actions as intentional actions. Instead, they judge them by their efficiency in reaching a goal, perceiving them as a function of the physical constraints of the agent’s situation, such as obstacles, visual conditions, and so forth. Only later do children adopt a mentalistic stance, learning to attribute intentions to the actor.

Therefore, any representation of intentions requires that goals already be represented. The goal domain is primary and must be described first. When the agent is located at a certain physical distance from a desired object, the goal domain can be read from the physical domain: reaching the goal is reaching the location. The difference is that, in the physical domain, the locations of the agent and objects are in focus, whereas in the goal domain, the focus is on the distances between them. In this example the goal domain is the space of force vectors that extend from the initial to the desired location. When the goal is represented in this way, two principal ways of attaining the goal arise. One is that the agent moves to the goal location and grasps the object. The other is that the agent uses imperative pointing, so that another individual brings the object to the agent.
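The reading of the goal domain off the physical domain can be sketched computationally. The following is a minimal illustration, not part of the source: the coordinates and the two-dimensional setting are assumptions chosen only to show how the focus shifts from locations (physical domain) to the vector, and hence the distance, between them (goal domain).

```python
import math

# Hypothetical 2-D coordinates for the agent and the desired object.
# In the physical domain, these locations themselves are in focus.
agent = (0.0, 0.0)
desired_object = (3.0, 4.0)

# In the goal domain, the focus is instead on the vector extending
# from the initial location to the desired location.
goal_vector = (desired_object[0] - agent[0], desired_object[1] - agent[1])

# Reaching the goal is reaching the location: the relevant quantity
# is the distance that remains to be covered.
distance_to_goal = math.hypot(*goal_vector)

print(goal_vector)       # (3.0, 4.0)
print(distance_to_goal)  # 5.0
```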

Goal domains can be more abstract than force vectors in the physical domain. In principle, goal vectors can be defined in all kinds of semantic domains. If I want the wall to be painted purple, my goal is to change its color from the current location in the green part of the color domain to the desired location in the purple region. Goal spaces are represented as abstract spaces in economics, cognitive science, and artificial intelligence. The classic example from artificial intelligence is Newell and Simon’s (1972) General Problem Solver. I suggest that these spaces are generated by metaphorical extensions from the original physical space and thus always maintain the key notion of distance. This hypothesis is supported by the pervasiveness of spatial metaphors in relation to goals, as in “he reached his goal,” “the goal was unattainable,” “the target was set too high” (see also Lakoff & Johnson, 1980).
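The claim that abstract goal spaces preserve the key notion of distance can be made concrete with the wall-painting example. The sketch below is purely illustrative: representing the color domain by RGB triples, and the particular green and purple points chosen, are assumptions of the example, not part of the source.

```python
import math

# Hypothetical points in the color domain, coded as RGB triples.
current = (60, 180, 75)    # the wall's present green
desired = (128, 0, 128)    # the purple I want it to be

# The goal is a vector from the current location in the color domain
# to the desired location, just as in the physical case.
goal_vector = tuple(d - c for c, d in zip(current, desired))

# The metaphorical extension preserves the key notion of distance:
# how far the current state lies from the goal state.
distance = math.sqrt(sum(v * v for v in goal_vector))

print(goal_vector)           # (68, -180, 53)
print(round(distance, 1))    # 199.6
```

The same construction works for any semantic domain with a metric, which is what licenses spatial talk such as a goal being “reached” or “unattainable.”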

Consider next the problem of representing intentions. The basic premise is that the intention domain can be seen as a product of the goal domain and the action domain.[1] An intention is thus a combination of a goal and a planned action conceived of as leading toward that goal. Take the difference between blink and wink. A blink is often an unintentional action, a pattern of forces exerted by the muscles around the eye. By contrast, a wink is an intentional action that combines the action of blinking with a goal, namely “to awaken the attention of or convey private intimation to [a] person” (Concise Dictionary, 1911).[2]

  • [1] Product is meant in the mathematical sense. The intention domain is the product space generated from the goal domain (a vector space) and the action space (derived from the space of forces).
  • [2] As I show in the following section, this model of intentions is the same as the model of events, except that the action involved in an intention is only planned. This analysis fits well with Gergely and Csibra’s (2003) proposal that one infers the intentions of a person from the beliefs and desires one attributes to that person.
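The product-space construction in note [1] can be sketched as follows. This is a hypothetical illustration: the class names, the reduction of an action to a labeled force pattern, and the particular vectors are all assumptions made for the sake of the example.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Action:
    """An action, reduced here to a named force pattern."""
    name: str
    forces: Tuple[float, ...]

@dataclass(frozen=True)
class Intention:
    """A point in the product space Goal x Action: a goal vector
    paired with the action planned as leading toward that goal."""
    goal: Tuple[float, ...]
    action: Action

# A blink is just an action: a force pattern with no goal attached.
blink = Action("blink", (1.0, 0.0))

# A wink pairs that same motor pattern with a communicative goal,
# i.e., it is a point in the product of the two domains.
wink = Intention(goal=(0.0, 1.0), action=blink)

print(wink.action.name)   # blink
```

On this modeling, blink and wink share a component in the action space and differ only in whether a goal component is present, which is exactly the contrast the text draws.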
 