A pinboard by this curator

Graduate Research Assistant, Kansas State University


General and prior knowledge of an event influences how that event is perceived and remembered.

One way individuals make sense of events in the world around them is by chunking incoming information into meaningful, discrete units. This process is called event segmentation, and the information used to segment can come from the environment (e.g., motion, lines, color) and from semantic knowledge (e.g., prior knowledge of an activity). For example, if you watch someone make breakfast, you can use information from the environment (e.g., a kitchen, a frying pan) and your knowledge of making breakfast (e.g., frying pans are used for bacon and eggs) to understand what is happening throughout the course of the breakfast event.

My research focuses on how our previous knowledge of and experiences with different activities, or events, influence how we perceive and later remember them. Additionally, I am interested in how segmentation and memory change through normal aging and how they relate to other important cognitive processes, such as working memory and inhibition. My recent projects investigate the influences of context and perspective-taking on event segmentation and recall. Both projects have found differences in segmentation and memory, suggesting that semantic knowledge is important for event understanding.

This research is important because older adults segment events less effectively than younger adults; semantic knowledge, however, is maintained, and even improves, with age. Older adults may be able to use semantic knowledge to compensate, which could lead to better segmentation and memory for events. In the long run, segmentation training could be implemented to improve older adults' segmentation abilities, which may lead to improved daily functioning and longer independence.


Effects of penetrating traumatic brain injury on event segmentation and memory.

Abstract: Penetrating traumatic brain injury (pTBI) is associated with deficits in cognitive tasks including comprehension and memory, and also with impairments in tasks of daily living. In naturalistic settings, one important component of cognitive task performance is event segmentation, the ability to parse the ongoing stream of behavior into meaningful units. Event segmentation ability is associated with memory performance and with action control, but is not well assessed by standard neuropsychological assessments or laboratory tasks. Here, we measured event segmentation and memory in a sample of 123 male military veterans aged 59-81 who had suffered a traumatic brain injury as young men, and 34 demographically similar controls. Participants watched movies of everyday activities and segmented them to identify fine-grained or coarse-grained events, and then completed tests of recognition memory for pictures from the movies and of memory for the temporal order of actions in the movies. Lesion location and volume were assessed with computed tomography (CT) imaging. Patients with traumatic brain injury were impaired on event segmentation. Those with larger lesions had larger impairments for fine segmentation and also impairments for both memory measures. Further, the degree of memory impairment was statistically mediated by the degree of event segmentation impairment. There was some evidence that lesions to the ventromedial prefrontal cortex (vmPFC) selectively impaired coarse segmentation; however, lesions outside of a priori regions of interest also were associated with impaired segmentation. One possibility is that the effect of vmPFC damage reflects the role of prefrontal event knowledge representations in ongoing comprehension. These results suggest that assessment of naturalistic event comprehension can be a valuable component of cognitive assessment in cases of traumatic brain injury, and that interventions aimed at event segmentation could be clinically helpful.
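The abstract reports that the lesion-to-memory effect was statistically mediated by segmentation impairment. A standard way to express such a mediation is to decompose the total effect of X (lesion) on Y (memory) into a direct path and an indirect path through M (segmentation). The sketch below illustrates that decomposition with ordinary least squares on synthetic data; the variable names and effect sizes are invented for illustration and are not the study's actual analysis or data.

```python
import random
import statistics as stats

def slope(x, y):
    """Ordinary least-squares slope of y on a single predictor x."""
    mx, my = stats.fmean(x), stats.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def partial_slope(y, m, x):
    """Slope of y on m, controlling for x (Frisch-Waugh residualization)."""
    bx_m, bx_y = slope(x, m), slope(x, y)
    mx, mm, my = stats.fmean(x), stats.fmean(m), stats.fmean(y)
    m_res = [mi - mm - bx_m * (xi - mx) for xi, mi in zip(x, m)]
    y_res = [yi - my - bx_y * (xi - mx) for xi, yi in zip(x, y)]
    return slope(m_res, y_res)

random.seed(1)
n = 200
lesion = [random.gauss(0, 1) for _ in range(n)]           # X: lesion volume
seg = [0.7 * xi + random.gauss(0, 1) for xi in lesion]    # M: segmentation impairment
mem = [0.6 * mi + random.gauss(0, 1) for mi in seg]       # Y: memory impairment

a = slope(lesion, seg)                      # path a: X -> M
b = partial_slope(mem, seg, lesion)         # path b: M -> Y, controlling X
c = slope(lesion, mem)                      # total effect of X on Y
c_prime = partial_slope(mem, lesion, seg)   # direct effect, controlling M

print(f"indirect (a*b) = {a*b:.2f}, total = {c:.2f}, direct = {c_prime:.2f}")
```

For OLS fits on the same data, the total effect decomposes exactly as c = c' + a*b, so a large indirect component (a*b) with a small direct component (c') is the pattern consistent with mediation through segmentation.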

Pub.: 26 Dec '15, Pinned: 19 Sep '17

Event Segmentation Improves Event Memory up to One Month Later.

Abstract: When people observe everyday activity, they spontaneously parse it into discrete meaningful events. Individuals who segment activity in a more normative fashion show better subsequent memory for the events. If segmenting events effectively leads to better memory, does asking people to attend to segmentation improve subsequent memory? To answer this question, participants viewed movies of naturalistic activity with instructions to remember the activity for a later test, and in some conditions additionally pressed a button to segment the movies into meaningful events or performed a control condition that required button-pressing but not attending to segmentation. In 5 experiments, memory for the movies was assessed at intervals ranging from immediately following viewing to 1 month later. Performing the event segmentation task led to superior memory at delays ranging from 10 min to 1 month. Further, individual differences in segmentation ability predicted individual differences in memory performance for up to a month following encoding. This study provides the first evidence that manipulating event segmentation affects memory over long delays and that individual differences in event segmentation are related to differences in memory over long delays. These effects suggest that attending to how an activity breaks down into meaningful events contributes to memory formation. Instructing people to more effectively segment events may serve as a potential intervention to alleviate everyday memory complaints in aging and clinical populations.

Pub.: 07 Apr '17, Pinned: 19 Sep '17

(Pea)nuts and bolts of visual narrative: structure and meaning in sequential image comprehension.

Abstract: Just as syntax differentiates coherent sentences from scrambled word strings, the comprehension of sequential images must also use a cognitive system to distinguish coherent narrative sequences from random strings of images. We conducted experiments analogous to two classic studies of language processing to examine the contributions of narrative structure and semantic relatedness to processing sequential images. We compared four types of comic strips: (1) Normal sequences with both structure and meaning, (2) Semantic Only sequences (in which the panels were related to a common semantic theme, but had no narrative structure), (3) Structural Only sequences (narrative structure but no semantic relatedness), and (4) Scrambled sequences of randomly-ordered panels. In Experiment 1, participants monitored for target panels in sequences presented panel-by-panel. Reaction times were slowest to panels in Scrambled sequences, intermediate in both Structural Only and Semantic Only sequences, and fastest in Normal sequences. This suggests that both semantic relatedness and narrative structure offer advantages to processing. Experiment 2 measured ERPs to all panels across the whole sequence. The N300/N400 was largest to panels in both the Scrambled and Structural Only sequences, intermediate in Semantic Only sequences and smallest in the Normal sequences. This implies that a combination of narrative structure and semantic relatedness can facilitate semantic processing of upcoming panels (as reflected by the N300/N400). Also, panels in the Scrambled sequences evoked a larger left-lateralized anterior negativity than panels in the Structural Only sequences. This localized effect was distinct from the N300/N400, and appeared despite the fact that these two sequence types were matched on local semantic relatedness between individual panels. These findings suggest that sequential image comprehension uses a narrative structure that may be independent of semantic relatedness. Altogether, we argue that the comprehension of visual narrative is guided by an interaction between structure and meaning.

Pub.: 06 Mar '12, Pinned: 19 Sep '17

Priming in Sentence Processing: Intralexical Spreading Activation, Schemas, and Situation Models

Abstract: A series of eye-tracking experiments investigated priming in natural language understanding. Intralexical spreading activation accounts of priming predict that the response to a target word will be speeded (i.e., primed) when strong associates appear prior to the target. Schema-based priming accounts predict that priming will occur when the target word is a component of an activated schema or script. Situation model accounts predict that priming will occur when a target word can be integrated easily into an evolving discourse representation. In separate experiments, we measured the effect of associated words, synonyms, and identity primes on processing times for subsequently encountered target words. Our designs crossed prime type (e.g., synonyms vs. unassociated words) with semantic plausibility (i.e., the target word was a plausible vs. an implausible continuation of the sentence). The results showed that identity primes, but not associates or synonyms, primed target words in early measures of processing like first fixation and gaze duration. Plausibility effects tended to emerge in later measures of processing (e.g., on total reading time), although some evidence was obtained for early effects of semantic plausibility. We propose that priming in naturalistic conditions is not caused by intralexical spreading activation or access to precompiled schemas.

Pub.: 01 Nov '00, Pinned: 28 Jun '17

The construction of visual-spatial situation models in children's reading and their relation to reading comprehension.

Abstract: Readers construct mental models of situations described by text to comprehend what they read, updating these situation models based on explicitly described and inferred information about causal, temporal, and spatial relations. Fluent adult readers update their situation models while reading narrative text based in part on spatial location information that is consistent with the perspective of the protagonist. The current study investigated whether children update spatial situation models in a similar way, whether there are age-related changes in children's formation of spatial situation models during reading, and whether measures of the ability to construct and update spatial situation models are predictive of reading comprehension. Typically developing children from 9 to 16 years of age (N=81) were familiarized with a physical model of a marketplace. Then the model was covered, and children read stories that described the movement of a protagonist through the marketplace and were administered items requiring memory for both explicitly stated and inferred information about the character's movements. Accuracy of responses and response times were evaluated. Results indicated that (a) location and object information during reading appeared to be activated and updated not simply from explicit text-based information but from a mental model of the real-world situation described by the text; (b) this pattern showed no age-related differences; and (c) the ability to update the situation model of the text based on inferred information, but not explicitly stated information, was uniquely predictive of reading comprehension after accounting for word decoding.

Pub.: 10 Dec '13, Pinned: 28 Jun '17

Changes in situation models modulate processes of event perception in audiovisual narratives.

Abstract: Humans understand text and film by mentally representing their contents in situation models. These describe situations using dimensions like time, location, protagonist, and action. Changes in 1 or more dimensions (e.g., a new character enters the scene) cause discontinuities in the story line and are often perceived as boundaries between 2 meaningful units. Recent theoretical advances in event perception led to the assumption that situation models are represented in the form of event models in working memory. These event models are updated at event boundaries. Points in time at which event models are updated are important: Compared with situations during an ongoing event, situations at event boundaries are remembered more precisely and predictions about what happens next become less reliable. We hypothesized that these effects depend on the number of changes in the situation model. In 2 experiments, we had participants watch sitcom episodes and measured recognition memory and prediction performance for event boundaries that contained a change in 1, 2, 3, or 4 dimensions. Results showed a linear relationship: the more dimensions changed, the higher recognition performance was. At the same time, participants' predictions became less reliable with an increasing number of dimension changes. These results suggest that updating of event models at event boundaries occurs incrementally.
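The design above manipulates how many situation-model dimensions (time, location, protagonist, action) change at an event boundary. The toy sketch below shows one way to represent that idea: each scene is a set of dimension values, and a boundary's "size" is the count of changed dimensions. The scene contents and dimension names here are illustrative, not the study's stimuli.

```python
# Situation-model dimensions tracked across scenes (illustrative set).
DIMENSIONS = ("time", "location", "protagonist", "action")

def changed_dimensions(prev, curr):
    """Count how many situation-model dimensions differ between two scenes."""
    return sum(prev[d] != curr[d] for d in DIMENSIONS)

scenes = [
    {"time": "morning", "location": "kitchen", "protagonist": "Ann", "action": "cooking"},
    {"time": "morning", "location": "kitchen", "protagonist": "Ann", "action": "eating"},  # 1 change
    {"time": "noon", "location": "office", "protagonist": "Bob", "action": "typing"},      # 4 changes
]

# Boundary "size" between consecutive scenes: the study found recognition
# memory rose, and predictions grew less reliable, as this count increased.
boundaries = [changed_dimensions(a, b) for a, b in zip(scenes, scenes[1:])]
print(boundaries)  # [1, 4]
```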

Pub.: 14 May '14, Pinned: 28 Jun '17

Knowledge Structure Measures of Reader’s Situation Models Across Languages: Translation Engenders Richer Structure

Abstract: In order to further validate and extend the application of recent knowledge structure (KS) measures to second language settings, this investigation explores how second language (L2, English) situation models are influenced by first language (L1, Korean) translation tasks. Fifty Korean low-proficiency English language learners were asked to read an L2 story and then complete L2 concept map and summary writing tasks, with or without an intervening L1 production task (Translated versus Directed conditions). Posttest comprehension was measured using the TOEFL multiple-choice items associated with the story (both in L2). KS elicited as concept maps and as text summaries were used to represent the situation models before, during, and after writing. For analysis, all of the participants' maps and writing artifacts were converted into Pathfinder Networks (PFNets), which were analyzed using two distinct approaches, correlation of the raw proximity data and degree centrality of the PFNets, in order to characterize the PFNets statistically and to describe changes in KS cognitive states over time. The correlation results showed that, relative to the Directed Writing condition, the Translated Writing participants' L2 KS were more similar to that of an expert and were significantly correlated with comprehension posttest scores. Including L1 tasks substantially improved the quality of the L2 KS artifacts and the underlying mental structures related to reading comprehension. In addition, the average centrality results showed that the PFNets of participants who translated had relational network structures, whereas the Directed Writing (English-only) group's PFNets had a more linear structure that matched the surface structure of the text, suggesting a fundamental way in which L1 and L2 cognitive processing differs.
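The centrality analysis above distinguishes hub-like "relational" networks from chain-like "linear" ones. Normalized degree centrality (a node's edge count divided by the maximum possible degree) captures this: a hub node scores 1.0, while every node in a chain scores at most 0.5. The sketch below computes it with the standard library on two toy graphs; the node names and edges are invented for illustration and are not the study's data.

```python
def degree_centrality(edges):
    """Normalized degree centrality for an undirected graph given as edge pairs."""
    nodes = {n for edge in edges for n in edge}
    deg = {n: 0 for n in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    k = len(nodes) - 1  # maximum possible degree
    return {n: d / k for n, d in deg.items()}

# A "relational" hub network vs. a "linear" chain over the same 5 toy concepts.
relational = [("market", "stall"), ("market", "fruit"),
              ("market", "vendor"), ("market", "price")]
linear = [("market", "stall"), ("stall", "fruit"),
          ("fruit", "vendor"), ("vendor", "price")]

print(max(degree_centrality(relational).values()))  # hub dominates: 1.0
print(max(degree_centrality(linear).values()))      # chain: 0.5
```

Averaging these centralities over a PFNet is one simple way to quantify how relational (versus surface-linear) a learner's elicited knowledge structure is.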

Pub.: 11 Feb '15, Pinned: 28 Jun '17