PhD Candidate, Research Assistant, UNIVERSITY OF CALGARY
We use cognitive measures and analytical methods to explore the factors that make interruptions disruptive.
Think back to your last workday and consider, for a minute, the many interruptions that occurred. Interruptions are inevitable; it's just the way life works. However, the human brain has a limited capacity for handling all the information required to manage task switching and interruptions. In my research, I explore the factors that make interruptions more disruptive, in terms of their negative impact on software developers' productivity. To this end, in our recent study we analyzed 5,094 recorded tasks of 19 employees at the largest software and service development company in Calgary, Canada, and found that self-interruptions (voluntary interruptions) are more disruptive than external interruptions.
Moreover, we found that interrupting sub-tasks (i.e., tasks that depend on other, higher-level tasks) results in longer resumption periods. For example, if you get interrupted while writing a memo on the back of a printed photo as part of sorting your photos chronologically, you might forget about the memo and never return to it (since it is a sub-task of the sorting task).
We are also using physiological measures (e.g., Heart Rate Variability (HRV)) to measure mental workload before and after interruptions. This helps identify the best temporal points for task interruptions and, consequently, reduce their disruptiveness.
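To give a concrete sense of what an HRV metric looks like, here is a minimal sketch of RMSSD (root mean square of successive differences), a standard time-domain HRV measure often used as a workload proxy. The RR-interval values below are invented for illustration and are not data from our study:

```python
import math

def rmssd(rr_intervals_ms):
    """RMSSD: root mean square of successive differences between
    consecutive RR (inter-beat) intervals, in milliseconds.
    Lower RMSSD is commonly associated with higher mental workload."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR intervals (ms) recorded before an interruption:
print(round(rmssd([800, 810, 790, 820]), 1))  # 21.6
```

Comparing RMSSD over windows before and after an interruption is one simple way to quantify the workload shift the text describes.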
The results of our experiments have been accepted to the 25th IEEE International Conference on Requirements Engineering (an A-ranked software engineering conference) in the form of two full research papers and one short paper. I will be presenting these papers at RE in two days :-)
Abstract: Classifying requirements into functional requirements (FR) and non-functional ones (NFR) is an important task in requirements engineering. However, automated classification of requirements written in natural language is not straightforward, due to the variability of natural language and the absence of a controlled vocabulary. This paper investigates how automated classification of requirements into FR and NFR can be improved and how well several machine learning approaches work in this context. We contribute an approach for preprocessing requirements that standardizes and normalizes requirements before applying classification algorithms. Further, we report on how well several existing machine learning methods perform for automated classification of NFRs into sub-categories such as usability, availability, or performance. Our study is performed on 625 requirements provided by the OpenScience tera-PROMISE repository. We found that our preprocessing improved the performance of an existing classification method. We further found significant differences in the performance of approaches such as Latent Dirichlet Allocation, Biterm Topic Modeling, or Naive Bayes for the sub-classification of NFRs.
Pub.: 07 Jul '17, Pinned: 31 Aug '17
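To make the classification setup in the abstract above concrete, the following is a self-contained toy sketch of a multinomial Naive Bayes classifier (one of the methods the paper compares) with minimal text normalization as preprocessing. The training sentences, labels, and test requirement are invented for illustration; they are not from the tera-PROMISE dataset, and this is not the paper's actual pipeline:

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    # Minimal preprocessing: lowercase and keep alphanumeric tokens.
    return re.findall(r"[a-z0-9]+", text.lower())

class NaiveBayes:
    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(tokenize(doc))
        self.vocab = {w for c in self.word_counts.values() for w in c}

    def predict(self, doc):
        scores = {}
        total_docs = sum(self.class_counts.values())
        for label in self.class_counts:
            counts = self.word_counts[label]
            total = sum(counts.values())
            # Log prior plus Laplace-smoothed log likelihoods.
            score = math.log(self.class_counts[label] / total_docs)
            for w in tokenize(doc):
                score += math.log((counts[w] + 1) / (total + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

# Invented toy requirements, labeled functional (FR) / non-functional (NFR):
train = [
    ("The system shall generate a monthly report", "FR"),
    ("The user shall be able to search products", "FR"),
    ("The system shall send a confirmation email", "FR"),
    ("The system shall respond within two seconds", "NFR"),
    ("The application shall be available 99 percent of the time", "NFR"),
    ("The interface shall be easy to use", "NFR"),
]
clf = NaiveBayes()
clf.fit([d for d, _ in train], [l for _, l in train])
print(clf.predict("The system shall load pages within three seconds"))  # NFR
```

Even at this toy scale, the example shows why preprocessing matters: without normalization, surface variants of the same word would be counted as distinct features and dilute the class statistics.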
Abstract: Mobile apps have exploded in popularity, encouraging developers to provide content to the massive user base of the main app stores. Although there exist automated techniques that can classify user comments into various topics with high levels of precision, recent studies have shown that the top apps in the app stores do not have customer ratings that directly correlate with the app's success. This implies that no single requirements elicitation technique can cover the full depth required to produce a successful product, and that combining alternative requirements gathering techniques can lead to success. Since user involvement has been found to be the most impactful contribution to project success, in this paper we explore how the Wizard of Oz (WOz) technique and user reviews available on Google Play can be integrated to produce a product that meets the demands of more stakeholders than either method alone. To compare the role of early interactive requirements specification and app reviews, we conducted two studies: (i) a case study analysis of 13 mobile app development teams who applied very early-stage Requirements Engineering (RE) through WOz, and (ii) a study analyzing 40 similar mobile apps (70,592 reviews) on Google Play. The results of both studies show that while each of the WOz and app review analysis techniques can be applied to capture specific types of requirements, an integrated process including both methods would eliminate the communication gap between users and developers at early stages of the development process and mitigate the risk of requirements change in later stages.
Pub.: 17 Jul '17, Pinned: 31 Aug '17
Abstract: Task switching and interruptions are a daily reality in software development projects: developers switch between Requirements Engineering (RE), coding, testing, daily meetings, and other tasks. Task switching may increase productivity through increased information flow and effective time management. However, it may also impose a cognitive load when reorienting to the primary task, which decreases developers' productivity and increases errors. This cognitive load is even greater for cognitively demanding tasks such as those typical of RE activities. In this paper, to compare the reality of task switching in RE with the perception of developers, we conducted two studies: (i) a case study analysis of 5,076 recorded tasks of 19 developers and (ii) a survey of 25 developers. The results of our retrospective analysis show that in all of the cases where the disruptiveness of RE interruptions is statistically different from other software development tasks, RE-related tasks are more vulnerable to interruptions than other task types. Moreover, we found that context switching, the priority of the interrupting task, and the interruption source and timing are key factors that impact RE interruptions. We also provide a set of RE task switching patterns along with recommendations for both practitioners and researchers. While the results of our retrospective analysis show that self-interruptions are more disruptive than external interruptions, developers have different perceptions about the disruptiveness of various sources of interruptions.
Pub.: 03 Jul '17, Pinned: 31 Aug '17
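The abstract above reports that the disruptiveness of RE interruptions is "statistically different" from other task types; such claims typically rest on a non-parametric comparison of two samples, such as resumption lags per task type. Here is a minimal sketch of the Mann-Whitney U statistic (a standard test, not necessarily the one used in the paper); the sample data are invented for illustration:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples,
    assigning average ranks to tied values. A small U suggests
    the two samples' distributions are well separated."""
    combined = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # average rank of the tie group
        i = j
    r_a = sum(ranks[v] for v in a)
    u_a = r_a - len(a) * (len(a) + 1) / 2
    return min(u_a, len(a) * len(b) - u_a)

# Hypothetical resumption lags (minutes) after an interruption:
re_tasks = [12, 15, 9, 20, 18]  # RE-related tasks
coding = [5, 7, 6, 10, 8]       # coding tasks
print(mann_whitney_u(re_tasks, coding))  # 1.0
```

In practice the U statistic is converted to a p-value (exact tables for small samples, a normal approximation for larger ones) before any claim of statistical difference is made.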
Abstract: Requirements Engineering (RE) is closely tied to other development activities and is at the heart and foundation of every software development process. This makes RE the most data- and communication-intensive activity compared to other development tasks. The highly demanding communication makes task switching and interruptions inevitable in RE activities. While task switching often allows us to perform tasks effectively, it imposes a cognitive load and can be detrimental to the primary task, particularly in complex tasks such as those typical of RE activities. Visualization mechanisms enhanced with analytical methods and interaction techniques help software developers obtain a better cognitive understanding of the complexity of RE decisions, leading to timelier and higher-quality decisions. In this paper, we propose to apply interactive visual analytics techniques for managing requirements decisions from various perspectives, including stakeholder communication, RE task switching, and interruptions. We propose a new layered visualization framework that supports the analytical reasoning process of task switching. This framework consists of both data analysis and visualization layers. The visual layers offer interactive knowledge visualization components for managing task interruption decisions at different stages of an interruption (i.e., before, during, and after). The analytical layers provide narrative knowledge about the consequences of task switching decisions and help requirements engineers recall their reasoning process and decisions upon resuming a task. Moreover, we surveyed 53 software developers to test our visual prototype and to explore further required features for the visual and analytical layers of our framework.
Pub.: 06 Jul '17, Pinned: 31 Aug '17