One of the most important things I learned from "The Emotions" (N. Frijda, 1986) is the concept of Concern.
Concerns are the things that are important to you: staying alive, having enough to eat, reproducing, being free of pain, understanding the world. These are not goals. A goal is something that is actively pursued, a drive. However, when a concern is endangered or violated, a goal is created automatically to protect the concern. Concerns are the cause of goal-directed behaviour. This goal is an emotion, and the behaviour that fulfils the goal is emotional behaviour.
Concerns can be physical (survive, avoid pain), moral (values), social (to have esteem), and perhaps others. For a machine, a concern is the reason why it exists and why it was built. Computer programs can be made to be aware of their own concerns.
Why this distinction between concerns and goals? Why not just say that the goal is to avoid pain, reproduce, understand the world, be loved, and so on? Because a goal is only a goal while the person is actively working on realizing it, and that is simply not true of all these things at once. Concerns linger in the dark. They are potential goals, awoken only when needed. For a computer program this means that goal processing can be restricted to the goals that are currently active; it is not necessary to react to all concerns all the time.
When does a concern sprout a goal? A person has sensors that monitor the state of the concern. When the concern moves too far off balance, a goal is created to restore the balance. The goal self-destructs when the concern has returned to balance, though that may require a lot of effort on the part of the person. So at any one time the person pursues only those goals that actually matter to him or her.
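The concern-to-goal mechanism described above can be sketched in a few lines. This is a minimal illustration, not Frijda's formulation: the class names, the sensor callable, and the set-point/tolerance representation of "balance" are all assumptions made for the example.

```python
class Goal:
    """A goal that exists only to restore its concern to balance."""
    def __init__(self, concern):
        self.concern = concern  # the concern this goal protects

    def fulfilled(self):
        # the goal can self-destruct once the concern is back in balance
        return self.concern.in_balance()


class Concern:
    """A latent concern: dormant until its sensor drifts off balance."""
    def __init__(self, name, sensor, set_point, tolerance):
        self.name = name
        self.sensor = sensor        # callable reporting the current state
        self.set_point = set_point  # the balanced value
        self.tolerance = tolerance  # how far off balance is acceptable
        self.goal = None            # no goal while the concern is dormant

    def in_balance(self):
        return abs(self.sensor() - self.set_point) <= self.tolerance

    def monitor(self):
        """Sprout a goal when off balance; let it self-destruct when restored."""
        if self.goal is None and not self.in_balance():
            self.goal = Goal(self)   # the concern sprouts a goal
        elif self.goal is not None and self.goal.fulfilled():
            self.goal = None         # the goal self-destructs
        return self.goal
```

Calling `monitor()` periodically gives exactly the behaviour in the text: the concern lies dormant, a goal appears only when the sensor reports an imbalance, and the goal disappears again once balance is restored.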
Concerns are also quite handy as a debugging tool. At any moment it is possible to ask the program why it is doing what it is doing and get a reasonable answer: all actions are rooted in concerns.
In "Artificial Intelligence, A Modern Approach", Russell and Norvig describe four levels of agents: simple reflex agents, reflex agents with internal state, goal-based agents, and utility-based agents. This series could be extended by a new type: the concern-based agent. It would be, in the words of Frijda, a concern realization system.
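A concern-based agent's main loop, combining the goal-sprouting mechanism with the "why" debugging question, might look roughly like this. Everything here is a hedged sketch: the agent class, the choice to pursue the concern furthest off balance, and the `why()` method are assumptions layered on Russell and Norvig's taxonomy, not their code.

```python
class Concern:
    """Minimal concern: a named sensor with a set point and tolerance."""
    def __init__(self, name, sensor, set_point, tolerance):
        self.name = name
        self.sensor = sensor
        self.set_point = set_point
        self.tolerance = tolerance

    def in_balance(self):
        return abs(self.sensor() - self.set_point) <= self.tolerance


class ConcernBasedAgent:
    """Pursues a goal only while some concern is off balance."""
    def __init__(self, concerns):
        self.concerns = concerns
        self.current = None  # the concern being acted on right now

    def step(self):
        # 1. Monitor all concerns; collect those that have sprouted goals.
        active = [c for c in self.concerns if not c.in_balance()]
        if not active:
            self.current = None
            return None  # all concerns in balance: nothing to do
        # 2. Pursue only what matters most right now
        #    (here: the concern furthest off balance).
        self.current = max(active,
                           key=lambda c: abs(c.sensor() - c.set_point))
        return self.act(self.current)

    def act(self, concern):
        # Placeholder: a real agent would plan actions to restore balance.
        return f"restore {concern.name}"

    def why(self):
        # The debugging question: every action is rooted in a concern.
        if self.current is None:
            return "idle: all concerns are in balance"
        return f"acting because the concern '{self.current.name}' is off balance"
```

Because the agent keeps a reference to the concern it is serving, `why()` can always trace its current action back to the concern that caused it, which is the debugging property described above.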
And finally, concerns may lift a machine from being a mindless do-as-you're-told device to an autonomous, self-aware agent.
- cognitive architecture