I am a computer scientist at the University of Cambridge. I work on making artificial reasoning more human-like.
How can we make artificial reasoning more intuitive?
Scientific advances today increasingly depend on understanding, manipulating and querying data, and businesses that can capitalise on the value of their information gain a competitive edge. Data typically has structure that can be described by formal mathematical statements. These statements are usually written in symbolic mathematical notation, which is inaccessible to most people. This is a serious shortcoming: ensuring that these statements are correct is hard, and a notation that few people can read leaves few people able to check them.
What if, instead of mathematical symbols, we could use an alternative that is more intuitive and hence accessible to most people? Recent research has demonstrated that visual notations (e.g. diagrams) bring cognitive benefits over symbolic and textual notations. We therefore believe that diagrams can be used in place of mathematical symbols to make artificial reasoning more intuitive. We are currently implementing a prototype reasoner that uses such diagrams as the basis of its internal logic. The prototype will undergo regular user studies to verify that its reasoning is indeed intuitive and human-like. If you want to know the results of these studies, follow this pinboard and find out more.
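To give a flavour of what inference over diagrams can look like, here is a minimal sketch using Euler diagrams, where set containment and disjointness are read directly off the picture. This is our own illustrative toy, not the prototype's actual code, and the class and method names are hypothetical:

```python
# A minimal sketch of diagrammatic inference with Euler diagrams.
# Curves are labelled sets; the diagram records which curve is drawn
# inside which, and which pairs of curves are drawn apart (disjoint).

class EulerDiagram:
    def __init__(self):
        self.inside = set()    # (a, b): curve a drawn inside curve b
        self.disjoint = set()  # frozenset({a, b}): curves drawn apart

    def draw_inside(self, a, b):
        self.inside.add((a, b))

    def draw_apart(self, a, b):
        self.disjoint.add(frozenset({a, b}))

    def entails_inside(self, a, b):
        # Containment is read off the picture transitively:
        # if a is inside x and x is inside b, then a is inside b.
        frontier, seen = [a], set()
        while frontier:
            x = frontier.pop()
            if x == b:
                return True
            seen.add(x)
            frontier += [y for (p, y) in self.inside
                         if p == x and y not in seen]
        return False

    def entails_disjoint(self, a, b):
        # If a sits inside a curve drawn apart from a curve enclosing b,
        # the picture itself shows that a and b share no elements.
        for pair in self.disjoint:
            x, y = tuple(pair)
            for (p, q) in ((x, y), (y, x)):
                if self.entails_inside(a, p) and self.entails_inside(b, q):
                    return True
        return False

# "All ravens are birds; all birds are animals" drawn as one picture:
d = EulerDiagram()
d.draw_inside("ravens", "birds")
d.draw_inside("birds", "animals")
d.draw_apart("animals", "plants")

print(d.entails_inside("ravens", "animals"))   # True: syllogism by containment
print(d.entails_disjoint("ravens", "plants"))  # True: ravens are not plants
```

The point of the sketch is that the inference rules mirror how a person reads the picture: conclusions follow from spatial relations such as enclosure and separation, rather than from manipulating symbolic formulae.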
Abstract: There is a growing focus on how to design safe artificially intelligent (AI) agents. As systems become more complex, poorly specified goals or control mechanisms may cause AI agents to produce unwanted and harmful outcomes. It is thus necessary to design AI agents that continue to follow their initial programming intentions as the program grows in complexity. How to specify these initial intentions has also been an obstacle to designing safe AI agents. Finally, AI agents need redundant safety mechanisms to ensure that programming errors do not cascade into major problems. Humans are autonomous intelligent agents that have avoided these problems, and the present manuscript argues that by understanding human self-regulation and goal setting, we may be better able to design safe AI agents. Some general principles of human self-regulation are outlined and specific guidance for AI design is given.
Pub.: 05 Jan '17, Pinned: 01 Jul '17
Abstract: Research on human self-regulation has shown that people hold many goals simultaneously and have complex self-regulation mechanisms to deal with the resulting goal conflict. Artificial autonomous systems may also need to find ways to cope with conflicting goals. Indeed, the intricate interplay among different goals may be critical to the design as well as the long-term safety and stability of artificial autonomous systems. I discuss some of the critical features of the human self-regulation system and how it might be applied to an artificial system. Furthermore, the implications of goal conflict for the reliability and stability of artificial autonomous systems, and for ensuring their alignment with human goals and ethics, are examined.
Pub.: 18 Mar '17, Pinned: 01 Jul '17
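To make the goal-conflict idea in these pins concrete, here is a minimal sketch of one way an artificial agent might arbitrate among simultaneously active goals, with a hard safety goal that can veto actions outright. The scoring rule and all names are our own illustrative assumptions, not the papers' proposals:

```python
# A minimal sketch of priority-based goal arbitration, loosely inspired
# by human self-regulation: many goals are active at once, hard safety
# goals can veto actions, and otherwise the most urgent (high-priority,
# least-satisfied) goals dominate the choice.

from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    priority: float      # fixed importance of the goal
    satisfaction: float  # current progress in [0, 1]
    hard: bool = False   # hard goals veto actions that set them back

def urgency(goal):
    # Less-satisfied goals press harder, scaled by their importance.
    return goal.priority * (1.0 - goal.satisfaction)

def choose_action(goals, actions):
    """actions: {name: {goal_name: effect}}, with effect in [-1, 1]."""
    best, best_score = None, float("-inf")
    for action, effects in actions.items():
        # Redundant safety check: veto anything that harms a hard goal.
        if any(g.hard and effects.get(g.name, 0.0) < 0.0 for g in goals):
            continue
        score = sum(urgency(g) * effects.get(g.name, 0.0) for g in goals)
        if score > best_score:
            best, best_score = action, score
    return best

goals = [
    Goal("stay_safe", priority=10.0, satisfaction=0.9, hard=True),
    Goal("finish_task", priority=3.0, satisfaction=0.2),
    Goal("conserve_energy", priority=1.0, satisfaction=0.5),
]
actions = {
    "rush_task": {"finish_task": 0.8, "stay_safe": -0.2},   # vetoed
    "work_task": {"finish_task": 0.5, "conserve_energy": -0.1},
    "idle":      {"conserve_energy": 0.3},
}
print(choose_action(goals, actions))  # 'work_task': progress without risking safety
```

Even in this toy, the two abstracts' themes are visible: conflicting goals are traded off through a single arbitration mechanism, while the hard safety goal acts as the redundant safeguard that keeps errors elsewhere from cascading.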