Postdoctoral Research Fellow, Griffith University
To formalise and enable the revision of an intelligent agent's beliefs in an uncertain environment
Consider the following scenario: in a university, professors generally teach, unless they hold an administrative appointment. Assume we know that John is a professor. Since most faculty members do not hold an administrative appointment, and there is no evidence that John does, we conclude with confidence that he teaches. Suppose that one day we receive the new information that John does not teach. This clearly contradicts our beliefs about John. For us, it is not too hard to figure out what is going on and resolve the contradiction. It could be that some of our beliefs are wrong: perhaps professors with administrative duties still need to teach, or perhaps John is not a professor. Alternatively, it could be that assuming John holds no administrative appointment merely from the absence of evidence was too adventurous; that is, he may indeed be an administrative staff member, but we do not know it.
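To make the contradiction concrete, here is a minimal sketch (my own illustration in Python, not drawn from any of the papers below) of the default at work: "professors teach unless they hold an administrative appointment" is applied in the absence of evidence, and one way to restore consistency when the new information arrives is to retract that assumption.

# A minimal sketch (my own illustration, not from any of the papers
# below) of the default at work in the John scenario.

def conclusions(facts):
    """Derive beliefs from the facts plus one default rule."""
    beliefs = set(facts)
    # Default: a professor not known to hold an administrative
    # appointment is assumed to teach.
    if "professor" in beliefs and "admin" not in beliefs:
        beliefs.add("teaches")
    return beliefs

facts = {"professor"}
print(conclusions(facts))        # {'professor', 'teaches'}

# New information arrives: John does not teach.
facts.add("not_teaches")

# The default conclusion now contradicts the new information; one way
# to restore consistency is to retract the assumption and accept that
# John may hold an administrative appointment after all.
if "teaches" in conclusions(facts):
    facts.add("admin")
print(conclusions(facts))        # {'admin', 'not_teaches', 'professor'}

The point is not the code itself but that the retraction step is a genuine choice: any of the resolutions listed above would restore consistency.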
This is a typical instance of a belief change event, in which an agent receives new information and tries to make sense of it by revising her beliefs. The process behind such scenarios has been the subject of considerable investigation in the field of belief revision. The importance of understanding exactly how we adapt our beliefs to newly received information cannot be overstated, as it is through this process that we learn and evolve; most of what we know and believe is gained this way. This ability to adapt is clearly a defining feature of intelligent behaviour, and understanding and formalising the process would represent a significant advance towards realising artificial intelligence.
Belief change is traditionally studied from the viewpoint of an ideal agent against a background of classical logic, which means the agent has unlimited reasoning power and computational resources, but her ways of reasoning and inference are limited to those of classical logic. In contrast, agents in a real-world setting have limited reasoning and computational capacity, and they employ various ways of reasoning and inference beyond those of classical logic. Because of these discrepancies, the techniques and theoretical results of the traditional approaches are not immediately transferable to a real-world setting, and in many cases are insufficient for it. My paper to be presented at IJCAI 2017 represents the latest advancement in enabling belief revision in a real-world setting.
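For readers unfamiliar with the traditional setting: the ideal-agent account is the classical AGM framework, which characterises a revision operator * on a belief set K through postulates such as the following (standard notation; this is background, not the contribution of my paper).

% Core AGM postulates for revising a belief set K by a sentence \varphi;
% Cn is classical consequence and K + \varphi abbreviates Cn(K \cup \{\varphi\}).
\begin{align*}
\text{(Closure)}     &\quad K * \varphi = \mathrm{Cn}(K * \varphi) \\
\text{(Success)}     &\quad \varphi \in K * \varphi \\
\text{(Inclusion)}   &\quad K * \varphi \subseteq K + \varphi \\
\text{(Vacuity)}     &\quad \text{if } \neg\varphi \notin K \text{ then } K + \varphi \subseteq K * \varphi \\
\text{(Consistency)} &\quad K * \varphi \text{ is consistent whenever } \varphi \text{ is}
\end{align*}

It is exactly the reliance on classical consequence Cn in these postulates that becomes problematic for resource-bounded agents who reason non-classically.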
Abstract: Abduction was first introduced in the epistemological context of scientific discovery. It was more recently analyzed in artificial intelligence, especially with respect to diagnosis analysis or ordinary reasoning. These two fields share a common view of abduction as a general process of hypothesis formation. More precisely, abduction is conceived as a kind of reverse explanation where a hypothesis H can be abduced from events E if H is a “good explanation” of E. The paper surveys four known schemes for abduction that can be used in both fields. Its first contribution is a taxonomy of these schemes according to a common semantic framework based on belief revision. Its second contribution is to produce, for each non-trivial scheme, a representation theorem linking its semantic framework to a set of postulates. Its third contribution is to present semantic and axiomatic arguments in favor of one of these schemes, “ordered abduction,” which has never been vindicated in the literature.
Pub.: 01 Dec '05
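As a gloss on the "reverse explanation" idea (my own illustrative notation, common in revision-based accounts of abduction, and not necessarily the paper's exact definitions): relative to background beliefs K and a revision operator *, a hypothesis H counts as an explanation of an event E when accepting H makes E believed.

% One revision-based abduction scheme (illustrative notation):
% H may be abduced from E, given background beliefs K, when
H \rhd E \quad\text{iff}\quad E \in K * H

The schemes the paper compares can roughly be read as different ways of strengthening or ordering this basic condition.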
Abstract: There are several contexts of non-monotonic reasoning where a priority between rules is established whose purpose is to prevent conflicts. One formalism that has been widely employed for non-monotonic reasoning is the sceptical one known as Defeasible Logic. In Defeasible Logic the tool used for conflict resolution is a preference relation between rules, that establishes the priority among them. In this paper we investigate how to modify such a preference relation in a defeasible logic theory in order to change the conclusions of the theory itself. We argue that the approach we adopt is applicable to legal reasoning where users, in general, cannot change facts or rules, but can propose their preferences about the relative strength of the rules. We provide a comprehensive study of the possible combinatorial cases and we identify and analyse the cases where the revision process is successful. After this analysis, we identify three revision/update operators and study them against the AGM postulates for belief revision operators, to discover that only some of these postulates are satisfied by the three operators.
Pub.: 23 Nov '12
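To see in miniature what revising a preference relation does (a toy Python sketch under my own simplified semantics, not the paper's full Defeasible Logic): two conflicting defeasible rules are both applicable, and the superiority relation decides which one wins, so changing that relation changes the conclusion while facts and rules stay fixed.

# A toy sketch (illustrative, not the paper's formalism) of conflict
# resolution by a superiority relation between defeasible rules.

rules = {
    "r1": ("professor", "teaches"),      # professors normally teach
    "r2": ("admin", "not_teaches"),      # admins normally do not teach
}

def conclude(facts, superiority):
    """Fire applicable rules; on a conflict, the superior rule wins."""
    fired = {name for name, (body, _) in rules.items() if body in facts}
    if {"r1", "r2"} <= fired:            # both rules applicable: a conflict
        winner = "r1" if ("r1", "r2") in superiority else "r2"
        return rules[winner][1]
    return next((rules[n][1] for n in fired), None)

facts = {"professor", "admin"}
print(conclude(facts, {("r2", "r1")}))   # not_teaches: r2 is preferred
print(conclude(facts, {("r1", "r2")}))   # teaches: revised preference flips it

This is the sense in which, in the legal-reasoning setting the paper describes, a user who cannot touch facts or rules can still change outcomes by proposing preferences.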
Abstract: Traditional approaches to non-monotonic reasoning fail to satisfy a number of plausible axioms for belief revision and suffer from conceptual difficulties as well. Recent work on ranked preferential models (RPMs) promises to overcome some of these difficulties. Here we show that RPMs are not adequate to handle iterated belief change. Specifically, we show that RPMs do not always allow for the reversibility of belief change. This result indicates the need for numerical strengths of belief.
Pub.: 20 Mar '13
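For intuition about the semantics at issue (standard ranked-model notation, my gloss rather than the paper's): a ranked preferential model orders possible worlds by plausibility, what is believed is what holds in all minimal worlds, and revision by \varphi moves to the minimal \varphi-worlds.

% Ranked-model semantics for one-shot revision (illustrative, standard notation):
% \preceq is a total preorder on the worlds W; \min(S, \preceq) are S's most plausible worlds.
\mathrm{Bel}(\preceq) = \{\alpha \mid \min(W, \preceq) \subseteq [\![\alpha]\!]\}
\qquad
\mathrm{Bel}(\preceq * \varphi) = \{\alpha \mid \min([\![\varphi]\!], \preceq) \subseteq [\![\alpha]\!]\}

Iterated change additionally has to say how \preceq itself is transformed, and the abstract's point is that purely ordinal rankings cannot always be transformed reversibly.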
Abstract: This paper extends the applications of belief-networks to include the revision of belief commitments, i.e., the categorical acceptance of a subset of hypotheses which, together, constitute the most satisfactory explanation of the evidence at hand. A coherent model of non-monotonic reasoning is established and distributed algorithms for belief revision are presented. We show that, in singly connected networks, the most satisfactory explanation can be found in linear time by a message-passing algorithm similar to the one used in belief updating. In multiply-connected networks, the problem may be exponentially hard but, if the network is sparse, topological considerations can be used to render the interpretation task tractable. In general, finding the most probable combination of hypotheses is no more complex than computing the degree of belief for any individual hypothesis. Applications to medical diagnosis are illustrated.
Pub.: 27 Mar '13
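To give a flavour of the linear-time computation mentioned for singly connected networks (a generic max-product sketch on a two-node chain in Python, not Pearl's original algorithm verbatim): the most probable explanation is found by passing a max-message against the arrow and then picking the maximising assignment.

# A minimal sketch (illustrative) of finding the most probable
# explanation (MPE) on a two-node chain A -> B with B observed.

p_a = {0: 0.6, 1: 0.4}                       # prior P(A)
p_b_given_a = {0: {0: 0.9, 1: 0.1},          # P(B | A)
               1: {0: 0.2, 1: 0.8}}
evidence_b = 1                                # observed: B = 1

# Message from B to A: for each value of A, the best (here, forced)
# score of the observed child.
msg = {a: p_b_given_a[a][evidence_b] for a in p_a}

# Combine with the prior and pick the maximising value of A.
best_a = max(p_a, key=lambda a: p_a[a] * msg[a])
print("MPE:", {"A": best_a, "B": evidence_b})                         # A=1, B=1
print("probability:", p_a[best_a] * p_b_given_a[best_a][evidence_b])  # 0.32

On longer chains the same idea repeats node by node, which is why the singly connected case stays linear.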