Research Data Analyst Senior, Northwestern University
I build tools to diagnose neurologic conditions using the auditory system
The goal of my work is to design objective, easy-to-use diagnostic tests for neurologic conditions (dyslexia, autism, concussion, etc.). My colleagues and I use EEG tests of how well the brain processes speech sounds as a "back door" to look at central nervous system function in general. We have designed a number of EEG tests that take the guesswork out of identifying a nervous system disorder.
Abstract: Learning to read is a fundamental developmental milestone, and achieving reading competency has lifelong consequences. Although literacy development proceeds smoothly for many children, a subset struggle with this learning process, creating a need to identify reliable biomarkers of a child's future literacy that could facilitate early diagnosis and access to crucial early interventions. Neural markers of reading skills have been identified in school-aged children and adults; many pertain to the precision of information processing in noise, but it is unknown whether these markers are present in pre-reading children. Here, in a series of experiments in 112 children (ages 3-14 y), we show brain-behavior relationships between the integrity of the neural coding of speech in noise and phonology. We harness these findings into a predictive model of preliteracy, revealing that a 30-min neurophysiological assessment predicts performance on multiple pre-reading tests and, one year later, predicts preschoolers' performance across multiple domains of emergent literacy. This same neural coding model predicts literacy and diagnosis of a learning disability in school-aged children. These findings offer new insight into the biological constraints on preliteracy during early childhood, suggesting that neural processing of consonants in noise is fundamental for language and reading development. Pragmatically, these findings open doors to early identification of children at risk for language learning problems; this early identification may in turn facilitate access to early interventions that could prevent a life spent struggling to read.
Pub.: 15 Jul '15, Pinned: 20 Jun '17
Abstract: Aging results in pervasive declines in nervous system function. In the auditory system, these declines include neural timing delays in response to fast-changing speech elements; this causes older adults to experience difficulty understanding speech, especially in challenging listening environments. These age-related declines are not inevitable, however: older adults with a lifetime of music training do not exhibit neural timing delays. Yet many people play an instrument for a few years without making a lifelong commitment. Here, we examined neural timing in a group of human older adults who had nominal amounts of music training early in life, but who had not played an instrument for decades. We found that a moderate amount (4-14 years) of music training early in life is associated with faster neural timing in response to speech later in life, long after training stopped (>40 years). We suggest that early music training sets the stage for subsequent interactions with sound. These experiences may interact over time to sustain sharpened neural processing in central auditory nuclei well into older age.
Pub.: 08 Nov '13, Pinned: 20 Jun '17
Abstract: Concussions carry devastating potential for cognitive, neurologic, and socio-emotional disease, but no objective test reliably identifies a concussion and its severity. A variety of neurological insults compromise sound processing, particularly in complex listening environments that place high demands on brain processing. The frequency-following response captures the high computational demands of sound processing with extreme granularity and reliably reveals individual differences. We hypothesize that concussions disrupt these auditory processes, and that the frequency-following response indicates concussion occurrence and severity. Specifically, we hypothesize that concussions disrupt the processing of the fundamental frequency, a key acoustic cue for identifying and tracking sounds and talkers, and, consequently, understanding speech in noise. Here we show that children who sustained a concussion exhibit a signature neural profile. They have worse representation of the fundamental frequency, and smaller and more sluggish neural responses. Neurophysiological responses to the fundamental frequency partially recover to control levels as concussion symptoms abate, suggesting a gain in biological processing following partial recovery. Neural processing of sound correctly identifies 90% of concussion cases and clears 95% of control cases, suggesting this approach has practical potential as a scalable biological marker for sports-related concussion and other types of mild traumatic brain injuries.
Pub.: 23 Dec '16, Pinned: 20 Jun '17
Abstract: Neural slowing is commonly noted in older adults, with consequences for sensory, motor, and cognitive domains. One of the deleterious effects of neural slowing is impairment of temporal resolution; older adults, therefore, have reduced ability to process the rapid events that characterize speech, especially in noisy environments. Although hearing aids provide increased audibility, they cannot compensate for deficits in auditory temporal processing. Auditory training may provide a strategy to address these deficits. To that end, we evaluated the effects of auditory-based cognitive training on the temporal precision of subcortical processing of speech in noise. After training, older adults exhibited faster neural timing and experienced gains in memory, speed of processing, and speech-in-noise perception, whereas a matched control group showed no changes. Training was also associated with decreased variability of brainstem response peaks, suggesting a decrease in temporal jitter in response to a speech signal. These results demonstrate that auditory-based cognitive training can partially restore age-related deficits in temporal processing in the brain; this plasticity in turn promotes better cognitive and perceptual skills.
Pub.: 13 Feb '13, Pinned: 20 Jun '17
Abstract: Auditory-evoked potentials are classically defined as the summations of synchronous firing along the auditory neuraxis. Converging evidence supports a model whereby timing jitter in neural coding compromises listening and causes variable scalp-recorded potentials. Yet the intrinsic noise of human scalp recordings precludes a full understanding of the biological origins of individual differences in listening skills. To delineate the mechanisms contributing to these phenomena, in vivo extracellular activity was recorded from inferior colliculus in guinea pigs to speech in quiet and noise. Here we show that trial-by-trial timing jitter is a mechanism contributing to auditory response variability. Identical variability patterns were observed in scalp recordings in human children, implicating jittered timing as a factor underlying reduced coding of dynamic speech features and speech in noise. Moreover, intertrial variability in human listeners is tied to language development. Together, these findings suggest that variable timing in inferior colliculus blurs the neural coding of speech in noise, and propose a consequence of this timing jitter for human behavior. These results hint both at the mechanisms underlying speech processing in general, and at what may go awry in individuals with listening difficulties.
Pub.: 01 Jan '16, Pinned: 20 Jun '17