February 28, 1997

The Foundation for Intelligent Physical Agents (FIPA) is an international non-profit association developing specifications of generic agent technologies to provide a high level of interoperability across applications. The target of FIPA-specified agent technologies is "Intelligent Physical Agents" -- devices intended for the mass market, capable of executing actions to accomplish goals imparted by or in collaboration with human beings or other IPAs, with a high degree of intelligence.
A first draft of the FIPA'97 specification covers three technology parts (Agent Management, Agent Communication and Agent/Software Interaction) and one application part (Personal Travel Assistance). Three more application parts (Personal Assistant, Audio-Visual Entertainment and Broadcasting, and Network Provisioning and Management) will be generated at the Fifth FIPA meeting to be held in Reston, Virginia on 14-18 April.
This note is a collection of comments, observations and questions regarding the FIPA proposal for a standard language for agent-to-agent communication [FIPA 97]. Our main points are developed in the questions that follow.
What follows are questions regarding the FIPA proposal. Page references refer to the Microsoft-Word printed version of the document [FIPA 97].
"i NOT Believe that j Believes that P or NOT(P) (or is uncertain about P or about NOT(P)"versus
"i Believes that j is uncertain about P".
The phrasing seems to suggest that the CONFIRM CA is a special case of INFORM. This probably answers the question of what the guide for a developer should be, i.e., the formal description of Annex A. But then, which is the final definition of INFORM? Is it the one at the top of page 18, the one at the bottom of page 18, or the one at the top of page 19?
Let us consider the definition at the top of page 18. The RE is BjP. Does the theory somewhere provide for BiBjP after the CA? If yes, does this mean that an agent cannot plan the same INFORM CA twice, because the second time the NOT(BiBjP) of the FP will not hold? It seems reasonable not to allow that *if* all the conversing agents adhere to the same theory of agency, but what happens if this is not the case?
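The planning constraint at issue can be made concrete with a toy sketch. Everything here (the Agent class, the string encoding of beliefs such as "Bj(P)") is our own illustrative construction, not part of the FIPA proposal; the FP is reduced to just the NOT(BiBjP) clause under discussion:

```python
class Agent:
    """Toy sketch of the planning constraint discussed above.
    Beliefs are plain strings; 'Bj(P)' stands for BjP.  Hypothetical."""

    def __init__(self, name):
        self.name = name
        self.beliefs = set()

    def can_plan_inform(self, p):
        # FP, simplified to the clause at issue: NOT(Bi Bj P)
        return f"Bj({p})" not in self.beliefs

    def inform(self, p):
        if not self.can_plan_inform(p):
            raise RuntimeError("FP fails: i already believes Bj P")
        # After a successful INFORM, assume the RE holds and i comes
        # to believe it: Bi Bj P.
        self.beliefs.add(f"Bj({p})")

i = Agent("i")
i.inform("P")                 # first INFORM succeeds
print(i.can_plan_inform("P")) # False: the same INFORM cannot be planned again
```

If j does not in fact come to believe P -- because it follows a different theory of agency -- then i's belief Bj(P) is simply wrong, yet i can no longer plan the INFORM; this is exactly the interoperability worry raised above.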
Also, considering the other definitions of CONFIRM: what are the semantics of the U modal operator (or of C or I, for that matter)? [FIPA 97] suggests that these are provided in [Sadek91a], but shouldn't all of that be part of the FIPA proposal?
<j,QUERY-IF(i,p)> ; <i, INFORM-IF(j,p)>.
Why doesn't the same apply to the definition of the QUERY-WH, i.e., why isn't QUERY-WH a composite act like a YN-QUESTION?
But is this a composite message or two different CAs since the actors of the two acts are different?
As stated at the beginning of section 7.1.3,
"... this does not constrain agent i to actually inform agent j of P or NOT(P)."
Presumably, what constrains the agent to eventually do so is the MCP of section 7.3.1. But then the definition of QUERY-WH is predicated upon a principle that is part of the agent model and will not necessarily be a universal requirement for all agents.
Bi NOT(PGj DONE(<j, INFORM-IF(i,p)>))
Isn't this a bit counterintuitive? In order to plan the act, the agent needs to believe that its interlocutor does not have a PG (persistent goal) to do an INFORM-IF. How will the agent establish such a belief?
Let us imagine an agent that *complies* with the agent model assumed in the FIPA proposal: due to limited resources, the agent keeps its PGs in a queue of bounded length and always drops the most "recent" PGs when the queue exceeds the bound. Does this render the MCP inoperable? Does it have further repercussions regarding compliance with the semantic description (of QUERY-WH, for example)? Does the developer have to check for that?
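A minimal sketch of such a resource-bounded agent, with all names ours rather than the proposal's:

```python
class BoundedGoalStore:
    """Hypothetical PG store for the resource-bounded agent described
    above: at most `bound` persistent goals are kept, and the most
    recently adopted ones are dropped when the bound is exceeded."""

    def __init__(self, bound):
        self.bound = bound
        self.goals = []          # oldest first

    def adopt(self, goal):
        self.goals.append(goal)
        # Drop "recent" PGs once the bound is exceeded -- the newest
        # goals silently disappear, as in the scenario above.
        while len(self.goals) > self.bound:
            self.goals.pop()     # removes the most recent PG

    def has_pg(self, goal):
        return goal in self.goals

store = BoundedGoalStore(2)
for g in ["g1", "g2", "g3"]:
    store.adopt(g)
print(store.has_pg("g3"))  # False -- the newest PG was silently dropped
```

If the dropped PG happened to be the MCP-mandated goal to answer a QUERY-WH, the question is never answered, yet no individual message ever violated the ACL.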
Even worse, what happens if an agent implements a different theory of agency, with a different definition of Intention? For example, PGs are defined as in [Cohen & Levesque 90] but without the "escape clause" of dropping a PG if the agent comes to believe that the PG is unachievable. There could very well be an agent that does not mind religiously pursuing its PGs because they are limited, well-defined and bounded (an agent that controls a set of devices with a few pre-defined settings and that always "does what it is requested"). Does this change anything? Does the programmer have to figure out that such an agent model is subsumed by the "official" FIPA one, which is implicit in the semantic description of the ACL?
The larger point, again, is that the underlying theory of agency creeps into the semantic description. Even worse, the underlying theory is not immune to problems. Things might be fine as long as all agents are of the same design, but even within the same theory there can be agents of slightly different design. Agent systems are essentially "open systems", and in general very little can be assumed about what goes on inside an agent.
By the way, in the prescribed theory of agency, assuming that the definition of PG follows that of [Cohen & Levesque 90], what prevents an agent from routinely dropping its PGs because it is a very "pessimistic" agent and does not believe that it can achieve them?
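The loophole can be stated as a one-line predicate. The persistence condition below paraphrases the [Cohen & Levesque 90] definition (a PG may be dropped only when believed achieved or believed unachievable); the predicate names are ours:

```python
def should_drop_pg(goal, believes_achieved, believes_impossible):
    """Cohen & Levesque-style persistence: a PG may be dropped only
    when the agent believes it achieved or believes it unachievable.
    Sketch; predicate names are ours, not from [Cohen & Levesque 90]."""
    return believes_achieved(goal) or believes_impossible(goal)

# A "pessimistic" agent that believes nothing is achievable may drop
# every PG immediately, while still satisfying the letter of the theory.
pessimist_impossible = lambda goal: True
print(should_drop_pg("INFORM-IF(i,p)", lambda g: False, pessimist_impossible))  # True
```

Nothing in the definition constrains *how* the agent arrives at its belief of unachievability, so the escape clause is only as strong as the agent's epistemology.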
Again, if the rational agent of the proposal follows [Cohen & Levesque 90], do the authors of the FIPA proposal have any comments regarding Singh's criticism of it [Singh 92]?
As a final comment, the 2nd Cooperativity Principle has nothing to do with the ACL. It (or perhaps both of the Co-op principles) has to do with agents that implement the *rational* agent and its principles.
What is the meaning of the replyRef and replyTo parameters?
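Lacking a definition in the draft, one plausible reading -- offered here only as a guess, not as the specification's intent -- is that the two parameters thread a conversation: replyTo tags an outgoing message with an identifier, and replyRef in the answer refers back to that tag (much like a reply-with / in-reply-to pair). A sketch of that reading, with all message fields and function names hypothetical:

```python
import itertools

_ids = itertools.count(1)

def make_query(sender, receiver, content):
    # replyTo (our guess): an identifier the answer should cite back.
    return {"performative": "query-if", "sender": sender,
            "receiver": receiver, "content": content,
            "replyTo": f"q{next(_ids)}"}

def make_answer(query, content):
    # replyRef (our guess): refers back to the query's replyTo tag,
    # threading the two messages into one conversation.
    return {"performative": "inform", "sender": query["receiver"],
            "receiver": query["sender"], "content": content,
            "replyRef": query["replyTo"]}

q = make_query("i", "j", "p")
a = make_answer(q, "p")
print(a["replyRef"] == q["replyTo"])   # True
```

If this reading is wrong, the point stands all the more: the parameters need a normative definition in the specification itself.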
"Use of the technologies described in this specification may infringe patents, copyrights or other intellectuall property rights of FIPA Members and non-members. Nothing in this specification should be construed as granting permission to use any of the technologies described. Anyone planning to make use of technology covered by the intellectual property rights of others should first obtain permission from the holder(s) of the rights. FIPA strongly encourages anyone implementing any part of this specification to determine first whether part(s) sought to be implemented are covered by the intellectual property of others, and, if so, to obtain appropriate licences or other permission from the holder(s) of such intellectual property prior to implementation. This FIPA '97 Specification is subject to change without notice. Neither FIPA nor any of its Members accept any responsibility whatsoever for damages or liability, direct or consequential, which may result from the use of this specification."
This may be another case where "worse is better", in the sense of Dick Gabriel's classic article [Gabriel 91] on why Lisp lost out to C as the practical programming language of choice. Rather than attempting to "do the right thing" and define into our standard ACL a strong model of agency, we should adopt a standard ACL that puts fewer constraints on our agents and see what happens. The software agent paradigm is new. We do not have much experience in actually using it in large-scale applications that matter. Let's try to get it mostly right and privilege flexibility, because we are going to have to live with and evolve this for a while. We should develop an agent ACL and associated infrastructure that has the quality of "habitability", as described in recent work on software patterns [Gabriel 96, 97].