In chapter one, two statements were formulated. Let us now see how these statements - a claim and a prediction - have turned out [1].
Judging from the information presented in chapters two and three, and from published research reports, new product announcements and articles in the media, it seems safe to conclude that agents are starting to lift off, and are judged by many to be valuable, promising and useful. Numerous agent-like as well as genuinely agent-enabled applications are available on the Internet (albeit often as test or beta versions). These already offer a broad range of functions, which make it possible to perform all sorts of tasks on the Internet (some of which were not feasible in the past), and/or support users in doing them. Only a few objections can be raised against the claim that agents "make up a promising solution" to the information overload on the Internet. The objections that can be made concern the lack of standards for vital agent aspects (such as the communication language and the architecture that will be used) and the vagueness of some of these aspects (as seen in section 2.2). While these are valid objections, none of them is an insurmountable obstacle to the further development of agent technology as a whole, or of agent-enabled applications in particular.
The current structure of the Internet seems to be missing something. Users complain that they are increasingly unable to find the information or services they are looking for. Suppliers complain that it is getting increasingly difficult to reach users, let alone the right ones. Both seem to find that "it's a jungle out there". This is a worrying development, also for governments and the many others who want the Internet (and all the information and services that are available through it) to be easily accessible and usable for everyone. What many seem to want, either implicitly (e.g. by stating that some sort of intermediary service is needed) or explicitly, is that a third party [2] or layer be added to the Internet. This layer or party would try to bring supply (i.e. suppliers) and demand (i.e. users) together in the best possible way. The three layer model, as described in chapter four, is a way in which this can be accomplished. Adding a third layer or party to the Internet therefore seems very promising, and a way of offering new and powerful services to everyone on the Internet.

But does it lead to agents being "maximally utilised"? First and foremost: it does not mean that agents have little to offer when they are not employed in a three layer structure. Individual agents (or agent systems) are capable of doing many things, even outside such a structure. But some of the offered functionality can be provided more efficiently, and probably faster or at lower cost, when the three layer structure is used (as was shown in chapter four). Moreover, the structure enables tasks that a single agent cannot do well, or cannot do at all, such as finding information within a foreseeable period of time on (ideally) the whole Internet.
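To give an impression of what such a middle layer could do, the following minimal sketch shows how an intermediary agent might bring supply and demand together by matching users' queries against offers that suppliers have registered. It is only an illustration of the idea behind the three layer model; the names used (MiddleLayerBroker, Offer, and so on) are hypothetical and do not refer to any actual system.

```python
# Simplified sketch of the three layer idea: a middle layer that brings
# supply (suppliers' offers) and demand (users' queries) together.
# All names (Offer, MiddleLayerBroker, ...) are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Offer:
    """A supplier's description of some information or service."""
    supplier: str
    keywords: set[str]
    description: str


@dataclass
class MiddleLayerBroker:
    """Intermediary (third) layer between users and suppliers."""
    offers: list[Offer] = field(default_factory=list)

    def register_offer(self, offer: Offer) -> None:
        # Suppliers announce to the middle layer what they have to offer.
        self.offers.append(offer)

    def find(self, query_keywords: set[str]) -> list[Offer]:
        # A user's agent hands its query to the middle layer, which returns
        # the matching offers, best matches (most keyword overlap) first.
        ranked = sorted(
            self.offers,
            key=lambda o: len(o.keywords & query_keywords),
            reverse=True,
        )
        return [o for o in ranked if o.keywords & query_keywords]


if __name__ == "__main__":
    broker = MiddleLayerBroker()
    broker.register_offer(Offer("NewsSupplier", {"news", "finance"}, "Financial news feed"))
    broker.register_offer(Offer("TravelSupplier", {"travel", "flights"}, "Flight booking service"))

    # A user agent asks the middle layer where financial news can be found.
    for offer in broker.find({"finance", "news"}):
        print(offer.supplier, "-", offer.description)
```

The point of the sketch is merely that neither the user nor the supplier has to know about the other in advance: each only deals with the middle layer, which does the matching on their behalf.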
In the previous section it has been shown that agents can contribute in many ways to improving "the process of information supply and demand" (e.g. as intermediary agents). The question now is: are they better at this than, say, a human information broker?
[1] About six months after they were formulated.
[2] Users and suppliers being the first and second ones.