A pinboard by
Zhiqiang Zhuang

Postdoctoral Research Fellow, Griffith University


To formalise and enable the revision of an intelligent agent's beliefs in an uncertain environment

Consider the following scenario: in a university, professors generally teach, unless they hold an administrative appointment. Assume we know that John is a professor. Since most professors do not hold an administrative appointment, and there is no evidence that John does, we conclude with confidence that he teaches. Suppose one day we receive the new information that John does not teach. This clearly contradicts our beliefs about John. For us, it is not too hard to figure out what is going on and resolve the contradiction. It could be that some of our beliefs are wrong: perhaps professors with administrative duties still have to teach, or perhaps John is not a professor. Alternatively, assuming that John holds no administrative appointment merely from the absence of evidence may have been too adventurous; he may indeed be an administrative staff member without our knowing it.
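The resolutions listed above correspond to a classical idea in belief revision: finding the maximal subsets of the belief base that remain consistent with the new information (the so-called remainder sets). A minimal, illustrative Python sketch of this idea, using a brute-force consistency check over three propositional atoms (all names and encodings here are my own, not taken from the paper):

```python
from itertools import combinations, product

# Atoms: P = "John is a professor", A = "John has an administrative
# appointment", T = "John teaches". Each belief is a function from a
# truth assignment to a boolean.
ATOMS = ["P", "A", "T"]

beliefs = {
    "professor":       lambda v: v["P"],                           # John is a professor
    "teaching_rule":   lambda v: (not v["P"]) or v["A"] or v["T"],  # P and not A implies T
    "assume_no_admin": lambda v: not v["A"],                       # default: no admin role
}
new_info = lambda v: not v["T"]                                    # learned: John does not teach

def consistent(formulas):
    """Brute-force satisfiability check over all assignments to ATOMS."""
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(f(v) for f in formulas):
            return True
    return False

def remainders(beliefs, new_info):
    """Maximal subsets of the belief base consistent with the new information."""
    names = list(beliefs)
    result = []
    for size in range(len(names), -1, -1):
        for subset in combinations(names, size):
            # skip subsets already covered by a larger consistent subset
            if any(set(subset) <= set(r) for r in result):
                continue
            if consistent([beliefs[n] for n in subset] + [new_info]):
                result.append(subset)
    return result

print(remainders(beliefs, new_info))
```

Running this yields three remainders, each obtained by dropping exactly one belief, mirroring the three ways of resolving the contradiction in the story: give up the default assumption (John may be administrative staff after all), give up the rule (professors with administrative duties may still teach), or give up the belief that John is a professor.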

This is a typical instance of belief change, in which an agent receives new information and tries to make sense of it by revising her beliefs. The process behind such scenarios has been the subject of considerable investigation in the field of belief revision. The importance of understanding exactly how we adapt our beliefs to newly received information can hardly be overstated: it is through this process that we learn and evolve, and most of what we know and believe is acquired through it. This ability to adapt is clearly a defining feature of intelligent behaviour, and understanding and formalising it would be a significant step towards realising artificial intelligence.

Belief change is traditionally studied from the viewpoint of an ideal agent and against a background of classical logic: the agent has unlimited reasoning power and computational resources, but her ways of reasoning and inference are limited to those of classical logic. In contrast, agents in real-world settings have limited reasoning and computational capacity and employ forms of reasoning and inference beyond those of classical logic. Because of these discrepancies, the techniques and theoretical results of the traditional approaches are not immediately transferable to, and in many cases insufficient for, real-world settings. My paper to be presented at IJCAI 2017 represents the latest advancement in enabling belief revision in such settings.