A pinboard by
this curator

PhD Candidate, RMIT University


This pinboard looks at spoken conversational search, spanning information retrieval (IR), human-computer interaction (HCI) and spoken dialogue systems (SDS)

My area of research is how to present search engine results over a speech-only communication channel without overwhelming the user with information. I am also interested in how conversations between the user and a spoken conversational search system can be structured. My focus is on experimental design with multi-method analysis as evaluation.

There is a lot of development around audio-only search systems such as Siri, Cortana and Google Now. These systems are great at answering factoid (very direct) questions. However, when the user has a more open question, the system either reverts to presenting search results on a screen or cannot handle the question at all. We are investigating which interactions people expect from these kinds of systems and how they could be integrated into search sessions. These systems will benefit everyday users, but especially users with a visual impairment.

I will use the funding to travel to the SIGIR conference (www.sigir.org/sigir2017/) in Japan, where I will be presenting several pieces of work. I am also volunteering at the conference with a program that welcomes new PhD students to the information retrieval community. This program, "PhD buddies", was set up by me and some other volunteers.


Steering the conversation: A linguistic exploration of natural language interactions with a digital assistant during simulated driving.

Abstract: Given the proliferation of 'intelligent' and 'socially-aware' digital assistants embodying everyday mobile technology - and the undeniable logic that utilising voice-activated controls and interfaces in cars reduces the visual and manual distraction of interacting with in-vehicle devices - it appears inevitable that next generation vehicles will be embodied by digital assistants and utilise spoken language as a method of interaction. From a design perspective, defining the language and interaction style that a digital driving assistant should adopt is contingent on the role that they play within the social fabric and context in which they are situated. We therefore conducted a qualitative, Wizard-of-Oz study to explore how drivers might interact linguistically with a natural language digital driving assistant. Twenty-five participants drove for 10 min in a medium-fidelity driving simulator while interacting with a state-of-the-art, high-functioning, conversational digital driving assistant. All exchanges were transcribed and analysed using recognised linguistic techniques, such as discourse and conversation analysis, normally reserved for interpersonal investigation. Language usage patterns demonstrate that interactions with the digital assistant were fundamentally social in nature, with participants affording the assistant equal social status and high-level cognitive processing capability. For example, participants were polite, actively controlled turn-taking during the conversation, and used back-channelling, fillers and hesitation, as they might in human communication. Furthermore, participants expected the digital assistant to understand and process complex requests mitigated with hedging words and expressions, and peppered with vague language and deictic references requiring shared contextual information and mutual understanding. 
Findings are presented in six themes which emerged during the analysis - formulating responses; turn-taking; back-channelling, fillers and hesitation; vague language; mitigating requests and politeness and praise. The results can be used to inform the design of future in-vehicle natural language systems, in particular to help manage the tension between designing for an engaging dialogue (important for technology acceptance) and designing for an effective dialogue (important to minimise distraction in a driving context).

Pub.: 16 May '17, Pinned: 28 Jul '17