"An agent is a software thing that knows how to do things that you could probably do yourself if you had the time."
Ted Selker of the IBM Almaden Research Center (quote taken from [JANC95])


In this section we will not arrive at a rock-solid formal definition of the concept "agent". Given the multiplicity of roles agents can play, this would be all but impossible, and not very practical either. On the Software Agents Mailing List, however, a possible informal definition of an intelligent software agent was given:

"A piece of software which performs a given task using information gleaned from its environment to act in a suitable manner so as to complete the task successfully. The software should be able to adapt itself based on changes occurring in its environment, so that a change in circumstances will still yield the intended result."
(with thanks to G.W. Lecky-Thompson for this definition)

Instead of a formal definition, a list of general characteristics of agents will be given. Together these characteristics give a global impression of what an agent "is". [1]

The first group of characteristics, which will be presented in section 2.2.1, is connected to the weak notion of the concept "agent". Most researchers currently agree that an agent should possess most, if not all, of these characteristics.
This is not the case, however, with the second group of characteristics, which is connected to the strong notion of the concept "agent". The characteristics that are presented in section 2.2.2 are not regarded as self-evident by everybody.
What "intelligence" is, and what the related term "agency" means, are explained in section 2.2.3.



Perhaps the most general way in which the term agent is used is to denote a hardware or (more usually) software-based computer system that enjoys the following properties:
autonomy: agents operate without the direct intervention of humans or others, and have some kind of control over their actions and internal state;  [2]
social ability: agents interact with other agents and (possibly) humans via some kind of agent communication language;  [3]
reactivity: agents perceive their environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or perhaps all of these combined), and respond in a timely fashion to changes that occur in it. [4] This may entail that an agent spends most of its time in a kind of sleep state [5] from which it will awake if certain changes in its environment (like the arrival of new e-mail) give rise to it;
proactivity: agents do not simply act in response to their environment, they are able to exhibit goal-directed behaviour by taking the initiative;
temporal continuity: agents are continuously running processes (either running active in the foreground or sleeping/passive in the background), not once-only computations or scripts that map a single input to a single output and then terminate;
goal orientedness: an agent is capable of handling complex, high-level tasks. The decision of how such a task is best split up into smaller sub-tasks, and in which order and in which way these sub-tasks should be performed, should be made by the agent itself.

Thus, a simple way of conceptualising an agent is as a kind of UNIX-like software process that exhibits the properties listed above. A clear example of an agent that meets the weak notion of an agent is the so-called softbot ('software robot'). This is an agent that is active in a software environment (for instance the previously mentioned UNIX operating system).
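To make this weak notion a little more concrete, the sketch below shows how such a softbot-like agent could be structured in Python. It is only an illustration under assumed names (the mailbox object and its fetch_new method are hypothetical), not an implementation taken from the literature: a continuously running process that sleeps until its environment changes, reacts to those changes, and also takes initiative towards the goals delegated to it.

import time

class Softbot:
    """Sketch of an agent in the weak sense: a continuously running
    process that perceives its environment and acts on it."""

    def __init__(self, mailbox, goals):
        self.mailbox = mailbox   # part of the agent's (software) environment
        self.goals = goals       # high-level tasks delegated by the user
        self.running = True      # internal state under the agent's own control

    def perceive(self):
        # Reactivity: observe changes in the environment (e.g. new e-mail).
        return self.mailbox.fetch_new()   # hypothetical environment call

    def act(self, new_messages):
        # Proactivity: not only respond, but also work towards the goals,
        # deciding itself how to split them into sub-tasks.
        for message in new_messages:
            ...  # filter, prioritise or answer the message
        for goal in self.goals:
            ...  # decompose the goal and perform the resulting sub-tasks

    def run(self):
        # Temporal continuity: an ongoing process, not a one-shot script.
        while self.running:
            changes = self.perceive()
            if changes or self.goals:
                self.act(changes)
            else:
                time.sleep(60)   # 'sleep state' until the environment changes

Social ability would be added by letting perceive and act also exchange messages with other agents in some agent communication language.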



For some researchers - particularly those working in the field of AI - the term agent has a stronger and more specific meaning than that sketched out in the previous section. These researchers generally take an agent to be a computer system that, in addition to having the properties identified above, is either conceptualised or implemented using concepts that are more usually applied to humans. For example, it is quite common in AI to characterise an agent using mentalistic notions, such as knowledge, belief, intention, and obligation [6]. Some AI researchers have gone further and considered emotional agents [7].
Another way of giving agents human-like attributes is to represent them visually by using techniques such as a cartoon-like graphical icon or an animated face  [8]. Research into this matter  [9] has shown that, although agents are pieces of software code, people like to deal with them as if they were dealing with other people (regardless of the type of agent interface that is being used).

Agents that fit the stronger notion of agent usually have one or more of the following characteristics: [10]
mobility: the ability of an agent to move around an electronic network;  [11]
benevolence: the assumption that agents do not have conflicting goals, and that every agent will therefore always try to do what is asked of it;  [12]
rationality: (crudely) the assumption that an agent will act in order to achieve its goals and will not act in such a way as to prevent its goals being achieved - at least insofar as its beliefs permit;  [13]
adaptivity: an agent should be able to adjust itself to the habits, working methods and preferences of its user;
collaboration: an agent should not unthinkingly accept (and execute) instructions, but should take into account that the human user makes mistakes (e.g. gives an order that contains conflicting goals), omits important information and/or provides ambiguous information. For instance, an agent should check things by asking questions of the user, or use a built-up user model to solve such problems. An agent should even be allowed to refuse to execute certain tasks, because (for instance) they would put an unacceptably high load on the network resources, or because they would cause damage to other users. [14]

Although no single agent possesses all of these abilities, there are several prototype agents that possess quite a lot of them (see section 3.2.2 for some examples). At this moment no consensus has been reached about the relative importance (weight) of each of these characteristics in the agent as a whole. What most researchers do agree on, however, is that it is these kinds of characteristics that distinguish agents from ordinary programs.
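To illustrate the collaboration characteristic from the list above, the following sketch shows one way an agent might vet an instruction instead of executing it blindly. All names (conflicts_with, estimated_network_load, the user's ask and notify methods, the load threshold) are hypothetical; the point is merely the pattern of checking, asking back and, if necessary, refusing.

MAX_NETWORK_LOAD = 0.8   # assumed threshold for acceptable resource usage

class CollaborativeAgent:
    """Sketch of 'collaboration': instructions are checked, questioned
    or refused rather than accepted unthinkingly."""

    def __init__(self, user):
        self.user = user
        self.accepted_goals = []

    def receive_instruction(self, instruction):
        # Refuse tasks that would put an unacceptably high load on the network.
        if instruction.estimated_network_load() > MAX_NETWORK_LOAD:
            self.user.notify(f"Refusing '{instruction}': network load too high.")
            return

        # Detect conflicts with goals the user has given earlier.
        conflicting = [g for g in self.accepted_goals
                       if instruction.conflicts_with(g)]
        if conflicting:
            # Ask the user rather than silently guessing which goal should win.
            if not self.user.ask(f"'{instruction}' conflicts with earlier goals; proceed?"):
                return

        self.accepted_goals.append(instruction)
        self.execute(instruction)

    def execute(self, instruction):
        ...  # carry out the (possibly decomposed) task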



The degree of autonomy and authority vested in the agent is called its agency. It can be measured, at least qualitatively, by the nature of the interaction between the agent and the other entities in the system in which it operates.
At a minimum, an agent must run asynchronously. The degree of agency is enhanced if an agent represents a user in some way; this is one of the key values of agents. A more advanced agent can interact with other entities such as data, applications, or services. Still more advanced agents collaborate and negotiate with other agents.

What exactly makes an agent "intelligent" is something that is hard to define. It has been the subject of many discussions in the field of Artificial Intelligence, and a clear answer has yet to be found.
Yet, a workable definition of what makes an agent intelligent is given in [IBM95]:

"Intelligence is the degree of reasoning and learned behaviour: the agent's ability to accept the user's statement of goals and carry out the task delegated to it.
At a minimum, there can be some statement of preferences, perhaps in the form of rules, with an inference engine or some other reasoning mechanism to act on these preferences.
Higher levels of intelligence include a user model or some other form of understanding and reasoning about what a user wants done, and planning the means to achieve this goal.
Further out on the intelligence scale are systems that learn and adapt to their environment, both in terms of the user's objectives, and in terms of the resources available to the agent. Such a system might, like a human assistant, discover new relationships, connections, or concepts independently from the human user, and exploit these in anticipating and satisfying user needs."

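At the lower end of this scale, a "statement of preferences, perhaps in the form of rules", acted on by a simple reasoning mechanism, can already be expressed very compactly. The sketch below is a minimal, invented example of such a mechanism (it is not taken from [IBM95]): user preferences are stored as condition-action rules and applied to incoming e-mail.

# Minimal sketch: user preferences as condition-action rules, with a
# trivial 'inference' step that applies the first matching rule.
rules = [
    (lambda msg: msg["sender"] == "boss@example.com",  "flag as urgent"),
    (lambda msg: "unsubscribe" in msg["body"].lower(), "move to junk"),
    (lambda msg: msg["size"] > 1_000_000,              "ask user before downloading"),
]

def decide(message):
    """Return the action prescribed by the first matching preference rule."""
    for condition, action in rules:
        if condition(message):
            return action
    return "leave in inbox"

print(decide({"sender": "boss@example.com", "body": "Status?", "size": 2000}))
# -> flag as urgent

Higher levels of intelligence, as described in the quotation, would replace this fixed rule list with a user model that the agent learns and updates by observing its user.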
[1] See [WOOL95] for a more elaborate overview of the theoretical and practical aspects of agents.
[2] See: Castelfranchi, C. (1995). Guarantees for autonomy in cognitive agent architecture. In Wooldridge, M. and Jennings, N. R., eds., Intelligent Agents: Theories, Architectures, and Languages (LNAI Volume 890), pages 56-70. Springer-Verlag: Heidelberg, Germany.
[3] See: Genesereth, M. R. and Ketchpel, S. P. (1994). Software Agents. Communications of the ACM, 37(7): pages 48-53.
[4] Note that the kind of reactivity that is displayed by agents goes beyond that of so-called (UNIX) daemons. Daemons are system processes that continuously monitor system resources and activities, and become active once certain conditions (e.g. thresholds) are met. As opposed to agents, daemons react in a very straightforward way, and they do not get better at reacting to certain conditions.
[5] Analogous to the "sleep" state in a UNIX system, where a process that has no further tasks to be done, or has to wait for another process to finish, goes into a sleep state until another process wakes it up again.
[6] See: Shoham, Y. Agent-oriented programming. Artificial Intelligence, 60(1): pages 51-92, 1993.
[7] See, for instance, Bates, J. The role of emotion in believable agents. Communications of the ACM, 37(7): pages 122-125, 1994.
[8] See: Maes, P. Agents that reduce work and information overload. Communications of the ACM, 37(7): pages 31-40, 1994.
[9] See, for instance, Norman, D. How Might People Interact with Agents. Communications of the ACM, July 1994.
[10] This list is far from complete. There are many other characteristics of agents that could have been added to this list. The characteristics that are mentioned here are there for illustrative purposes and should not be interpreted as an ultimate enumeration.
[11] See: White, J. E. Telescript technology: The foundation for the electronic marketplace. White paper, General Magic Inc., 1994.
[12] See: Rosenschein, J. S. and Genesereth, M. R. Deals among rational agents. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence (IJCAI-85), pages 91-99, Los Angeles, United States, 1985.
[13] See: Galliers, J. R. A Theoretical Framework for Computer Models of Cooperative Dialogue, Acknowledging Multi-Agent Conflict. PhD thesis, pages 49-54, Open University, Great Britain, 1994.
[14] See: Eichmann, D. Ethical Web Agents. Proceedings of the Second International World-Wide Web Conference. Chicago, United States, October 1994.
