Graduate Research Assistant, Kansas State University
General and prior knowledge about an event influences how that event is perceived and remembered.
One way individuals make sense of events in the world around them is by chunking incoming information into meaningful, discrete units. This process is called event segmentation, and the information used to do it comes both from the environment (e.g., motion, lines, color) and from semantic knowledge (e.g., prior knowledge of an activity). For example, if you watch someone make breakfast, you can use information from the environment (e.g., a kitchen, a frying pan) and your knowledge of making breakfast (e.g., frying pans are used for bacon and eggs) to understand what is happening throughout the course of the breakfast event.
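To make the measure concrete: in segmentation studies like those pinned below, viewers typically press a button whenever they judge that one meaningful unit has ended and another has begun, and an individual's boundaries are then compared against group norms. Here is a minimal sketch of that comparison, assuming 1-second bins and invented timestamps; it illustrates the general approach, not code from any of these studies.

```python
# Minimal sketch of segmentation agreement: bin each viewer's boundary
# button presses into 1-second intervals, then correlate one viewer's
# binary boundary vector with the norm from the remaining viewers.
# All timestamps below are invented for illustration.
import numpy as np

def boundary_vector(presses_sec, movie_len_sec, bin_sec=1.0):
    """Turn boundary timestamps into a binary per-bin vector."""
    n_bins = int(np.ceil(movie_len_sec / bin_sec))
    vec = np.zeros(n_bins)
    for t in presses_sec:
        vec[min(int(t // bin_sec), n_bins - 1)] = 1.0
    return vec

def segmentation_agreement(viewer, others):
    """Correlate a viewer's vector with the proportion of the
    other viewers who marked a boundary in each bin."""
    norm = np.mean(others, axis=0)
    return np.corrcoef(viewer, norm)[0, 1]

# Hypothetical 60-second breakfast clip, five viewers.
presses = [[5, 21, 40], [6, 20, 41], [5, 22, 39], [7, 20, 40], [30]]
vectors = np.array([boundary_vector(p, 60) for p in presses])
print(segmentation_agreement(vectors[0], vectors[1:]))
```

A viewer whose boundaries line up with the group's produces a correlation near 1; idiosyncratic segmenters (like the fifth viewer here) score much lower.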
My research focuses on how our previous knowledge of and experiences with different activities, or events, influence how we perceive and later remember them. Additionally, I am interested in how segmentation and memory change with normal aging and how they relate to other important cognitive processes, such as working memory and inhibition. My recent projects investigate the influences of context and perspective-taking on event segmentation and recall. Both projects have found differences in segmentation and memory, suggesting that semantic knowledge is important for event understanding.
This research is important because older adults segment activity less well than younger adults; semantic knowledge, however, is maintained, and even improves, as we age. Older adults may be able to use semantic knowledge to compensate, which could lead to better segmentation and memory for events. In the long run, segmentation training could be implemented to improve older adults' segmentation abilities, which may lead to better daily functioning and prolonged independence.
Abstract: Penetrating traumatic brain injury (pTBI) is associated with deficits in cognitive tasks including comprehension and memory, and also with impairments in tasks of daily living. In naturalistic settings, one important component of cognitive task performance is event segmentation, the ability to parse the ongoing stream of behavior into meaningful units. Event segmentation ability is associated with memory performance and with action control, but is not well assessed by standard neuropsychological assessments or laboratory tasks. Here, we measured event segmentation and memory in a sample of 123 male military veterans aged 59-81 who had suffered a traumatic brain injury as young men, and 34 demographically similar controls. Participants watched movies of everyday activities and segmented them to identify fine-grained or coarse-grained events, and then completed tests of recognition memory for pictures from the movies and of memory for the temporal order of actions in the movies. Lesion location and volume were assessed with computed tomography (CT) imaging. Patients with traumatic brain injury were impaired on event segmentation. Those with larger lesions had larger impairments for fine segmentation and also impairments for both memory measures. Further, the degree of memory impairment was statistically mediated by the degree of event segmentation impairment. There was some evidence that lesions to the ventromedial prefrontal cortex (vmPFC) selectively impaired coarse segmentation; however, lesions outside of a priori regions of interest also were associated with impaired segmentation. One possibility is that the effect of vmPFC damage reflects the role of prefrontal event knowledge representations in ongoing comprehension. These results suggest that assessment of naturalistic event comprehension can be a valuable component of cognitive assessment in cases of traumatic brain injury, and that interventions aimed at event segmentation could be clinically helpful.
Pub.: 26 Dec '15, Pinned: 19 Sep '17
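The mediation result reported above (memory impairment statistically mediated by segmentation impairment) can be illustrated with a bare-bones indirect-effect test. The sketch below uses simulated data and a percentile bootstrap of the a*b product; it is not the paper's actual model, variables, or data.

```python
# Bare-bones mediation sketch: indirect effect a*b with a percentile
# bootstrap, on simulated data (not the study's data or model).
import numpy as np

rng = np.random.default_rng(0)
n = 150
lesion = rng.normal(size=n)                       # X: e.g., lesion volume
segment = -0.5 * lesion + rng.normal(size=n)      # M: segmentation ability
memory = 0.6 * segment + 0.1 * lesion + rng.normal(size=n)  # Y: memory

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                    # X -> M slope
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # M -> Y, controlling X
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                   # resample with replacement
    boot.append(indirect_effect(lesion[idx], segment[idx], memory[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

If the bootstrap confidence interval excludes zero, the indirect path (lesion to segmentation to memory) carries a reliable share of the effect.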
Abstract: Segmenting ongoing activity into events is important for later memory of those activities. In the experiments reported in this article, older adults' segmentation of activity into events was less consistent with group norms than younger adults' segmentation, particularly for older adults diagnosed with mild dementia of the Alzheimer type. Among older adults, poor agreement with others' event segmentation was associated with deficits in recognition memory for pictures taken from the activity and memory for the temporal order of events. Impaired semantic knowledge about events also was associated with memory deficits. The data suggest that semantic knowledge about events guides encoding, facilitating later memory. To the extent that such knowledge or the ability to use it is impaired in aging and dementia, memory suffers.
Pub.: 07 Sep '06, Pinned: 19 Sep '17
Abstract: Everyday activities break down into parts and subparts, and appreciating this hierarchical structure is an important component of understanding. In two experiments we found age differences in the ability to perceive hierarchical structure in continuous activity. In both experiments, younger and older adults segmented movies of everyday activities into large and small meaningful events. Older adults' segmentation deviated more from group norms than did younger adults' segmentation, and older adults' segmentation was less hierarchically organized than that of younger adults. Older adults performed less well than younger adults on event memory tasks. In some cases, measures of event segmentation discriminated between those older adults with better and worse memory. These results suggest that the hierarchical encoding of ongoing activity declines with age, and that such encoding may be important for memory.
Pub.: 26 Jan '11, Pinned: 19 Sep '17
Abstract: One way to understand something is to break it up into parts. New research indicates that segmenting ongoing activity into meaningful events is a core component of ongoing perception, with consequences for memory and learning. Behavioral and neuroimaging data suggest that event segmentation is automatic and that people spontaneously segment activity into hierarchically organized parts and sub-parts. This segmentation depends on the bottom-up processing of sensory features such as movement, and on the top-down processing of conceptual features such as actors' goals. How people segment activity affects what they remember later; as a result, those who identify appropriate event boundaries during perception tend to remember more and learn more proficiently.
Pub.: 01 Apr '07, Pinned: 19 Sep '17
Abstract: Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan.
Pub.: 15 Aug '13, Pinned: 19 Sep '17
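The "unique variance" claim in the lifespan study above corresponds to a standard hierarchical regression: fit a model with the psychometric covariates, add segmentation, and test the R-squared change. A sketch with simulated data and placeholder variable names (not the study's measures) follows.

```python
# Hierarchical regression sketch: does a predictor explain variance
# beyond covariates? Simulated data; variable names are placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 208
speed = rng.normal(size=n)             # psychometric covariate
wm = rng.normal(size=n)                # psychometric covariate
seg = 0.3 * wm + rng.normal(size=n)    # segmentation ability
memory = 0.5 * seg + 0.4 * speed + rng.normal(size=n)

base = sm.OLS(memory, sm.add_constant(np.column_stack([speed, wm]))).fit()
full = sm.OLS(memory, sm.add_constant(np.column_stack([speed, wm, seg]))).fit()
print(f"R^2 change from adding segmentation: {full.rsquared - base.rsquared:.3f}")
print(full.compare_f_test(base))       # (F, p, df_diff) for the nested models
```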
Abstract: When people observe everyday activity, they spontaneously parse it into discrete meaningful events. Individuals who segment activity in a more normative fashion show better subsequent memory for the events. If segmenting events effectively leads to better memory, does asking people to attend to segmentation improve subsequent memory? To answer this question, participants viewed movies of naturalistic activity with instructions to remember the activity for a later test, and in some conditions additionally pressed a button to segment the movies into meaningful events or performed a control condition that required button-pressing but not attending to segmentation. In 5 experiments, memory for the movies was assessed at intervals ranging from immediately following viewing to 1 month later. Performing the event segmentation task led to superior memory at delays ranging from 10 min to 1 month. Further, individual differences in segmentation ability predicted individual differences in memory performance for up to a month following encoding. This study provides the first evidence that manipulating event segmentation affects memory over long delays and that individual differences in event segmentation are related to differences in memory over long delays. These effects suggest that attending to how an activity breaks down into meaningful events contributes to memory formation. Instructing people to more effectively segment events may serve as a potential intervention to alleviate everyday memory complaints in aging and clinical populations.
Pub.: 07 Apr '17, Pinned: 19 Sep '17
Abstract: How do people make sense of the sequential images in visual narratives like comics? A growing literature of recent research has suggested that this comprehension involves the interaction of multiple systems: The creation of meaning across sequential images relies on a "narrative grammar" that packages conceptual information into categorical roles organized in hierarchic constituents. These images are encapsulated into panels arranged in the layout of a physical page. Finally, how panels frame information can impact both the narrative structure and page layout. Altogether, these systems operate in parallel to construct the Gestalt whole of comprehension of this visual language found in comics.
Pub.: 30 Jul '14, Pinned: 19 Sep '17
Abstract: Just as syntax differentiates coherent sentences from scrambled word strings, the comprehension of sequential images must also use a cognitive system to distinguish coherent narrative sequences from random strings of images. We conducted experiments analogous to two classic studies of language processing to examine the contributions of narrative structure and semantic relatedness to processing sequential images. We compared four types of comic strips: (1) Normal sequences with both structure and meaning, (2) Semantic Only sequences (in which the panels were related to a common semantic theme, but had no narrative structure), (3) Structural Only sequences (narrative structure but no semantic relatedness), and (4) Scrambled sequences of randomly-ordered panels. In Experiment 1, participants monitored for target panels in sequences presented panel-by-panel. Reaction times were slowest to panels in Scrambled sequences, intermediate in both Structural Only and Semantic Only sequences, and fastest in Normal sequences. This suggests that both semantic relatedness and narrative structure offer advantages to processing. Experiment 2 measured ERPs to all panels across the whole sequence. The N300/N400 was largest to panels in both the Scrambled and Structural Only sequences, intermediate in Semantic Only sequences and smallest in the Normal sequences. This implies that a combination of narrative structure and semantic relatedness can facilitate semantic processing of upcoming panels (as reflected by the N300/N400). Also, panels in the Scrambled sequences evoked a larger left-lateralized anterior negativity than panels in the Structural Only sequences. This localized effect was distinct from the N300/N400, and appeared despite the fact that these two sequence types were matched on local semantic relatedness between individual panels. These findings suggest that sequential image comprehension uses a narrative structure that may be independent of semantic relatedness. Altogether, we argue that the comprehension of visual narrative is guided by an interaction between structure and meaning.
Pub.: 06 Mar '12, Pinned: 19 Sep '17
Abstract: Theories of visual narrative understanding have often focused on the changes in meaning across a sequence, like shifts in characters, spatial location, and causation, as cues for breaks in the structure of a discourse. In contrast, the theory of visual narrative grammar posits that hierarchic "grammatical" structures operate at the discourse level using categorical roles for images, which may or may not co-occur with shifts in coherence. We therefore examined the relationship between narrative structure and coherence shifts in the segmentation of visual narrative sequences using a "segmentation task" where participants drew lines between images in order to divide them into subepisodes. We used regressions to analyze the influence of the expected constituent structure boundary, narrative categories, and semantic coherence relationships on the segmentation of visual narrative sequences. Narrative categories were a stronger predictor of segmentation than linear coherence relationships between panels, though both influenced participants' divisions. Altogether, these results support the theory that meaningful sequential images use a narrative grammar that extends above and beyond linear semantic shifts between discourse units.
Pub.: 07 Oct '16, Pinned: 19 Sep '17
Abstract: Three experiments examined how readers inferred spatial information that was relevant to a story character's movements through a previously memorized layout of a fictional building relative to various tasks. This study also examined how inference measures were related to spatial imagery and reading comprehension ability. Replicating the spatial separation effect reported by Morrow, Greenspan, and Bower (1987), probed objects were responded to faster when they were located in the same room of a building as the main character of a narrative than when the objects were located in different rooms. Experiment 2 ruled out a simple name-based priming explanation of the spatial separation effect, and Experiment 3 demonstrated a facilitation for objects from the character's target room even when readers were provided with a spatially indeterminate list description of the building. The construction-integration model of text comprehension accounted for the spatial separation effect in terms of variations in the knowledge-integration process. It was concluded that the integration of an enriched knowledge network can facilitate the process of mapping text information onto a developing mental representation of a discourse situation, a process that gains further support from spatial imagery and reading comprehension ability.
Pub.: 01 Mar '95, Pinned: 28 Jun '17
Abstract: The organization of information when it is retrieved from situation models was explored using recognition and recall tests. A fan effect paradigm was used to assess organization in recognition and a clustering measure was used in recall. People memorized either object-location facts (e.g., “The pay phone is in the city hall”) or person/small location facts (e.g., “The banker is on the witness stand”). For object-location facts, a location-based organization was observed in both recall and recognition. However, for person/small location facts, a location-based organization was observed for recall, and a person-based organization was observed for recognition. Thus, the observed organization was flexible depending on the retrieval task.
Pub.: 01 Jun '98, Pinned: 28 Jun '17
Abstract: A series of eye-tracking experiments investigated priming in natural language understanding. Intralexical spreading activation accounts of priming predict that the response to a target word will be speeded (i.e., primed) when strong associates appear prior to the target. Schema-based priming accounts predict that priming will occur when the target word is a component of an activated schema or script. Situation model accounts predict that priming will occur when a target word can be integrated easily into an evolving discourse representation. In separate experiments, we measured the effect of associated words, synonyms, and identity primes on processing times for subsequently encountered target words. Our designs crossed prime type (e.g., synonyms vs. unassociated words) with semantic plausibility (i.e., the target word was a plausible vs. an implausible continuation of the sentence). The results showed that identity primes, but not associates or synonyms, primed target words in early measures of processing like first fixation and gaze duration. Plausibility effects tended to emerge in later measures of processing (e.g., on total reading time), although some evidence was obtained for early effects of semantic plausibility. We propose that priming in naturalistic conditions is not caused by intralexical spreading activation or access to precompiled schemas.
Pub.: 01 Nov '00, Pinned: 28 Jun '17
Abstract: In two experiments, we investigated how readers use information about temporal and spatial distance to focus attention on the more important parts of the situation model that they create during narrative comprehension. Effects of spatial distance were measured by testing the accessibility in memory of objects and rooms located at differing distances from the protagonist’s current location. Before the test probe, an intervening episode was inserted in the narrative. Story time distance was manipulated by stating that the intervening episode lasted for either minutes or hours. Discourse time—that is, time spent reading from prime to test—was manipulated by describing the intervening episode either briefly or at length. Clear effects of story time distance and spatial distance on accessibility were found, whereas discourse time distance did not affect accessibility. The results are interpreted as supporting constructionist theories of text comprehension.
Pub.: 01 Dec '00, Pinned: 28 Jun '17
Abstract: These experiments examined the hypothesis that situation model construction involves perceptual processing—specifically, processing that involves visuospatial information. In this research, a dualtask paradigm was used to demonstrate that tasks that engage visuospatial processes interfere more with the generation of a situation model than tasks that are less likely to involve these processes or tasks that are verbal in nature. Using Albrecht and O'Brien's (1993) contradiction effect as evidence of situation model construction, Experiment 1 demonstrated that participants reading short texts while simultaneously holding high-imagery sentences in memory failed to show a significant contradiction effect in comparison with readers holding low-imagery sentences in memory. In Experiment 2, participants reading texts while retaining a difficult visuospatial memory load showed disrupted comprehension in comparison with readers retaining a verbal memory load.
Pub.: 01 Mar '01, Pinned: 28 Jun '17
Abstract: Psycholinguistic research faces a major challenge in describing the mental representations readers construct from a text. It is now widely accepted that readers end with a representation of the situation described in the text. However, it is unclear whether this representation allows the activation of elements in accordance with their situation proximity. To answer this question, two experiments were conducted. Participants read texts, sentence by sentence, which gave them instructions about how to arrange items in a layout; they then performed a recognition task. By manipulating the spatial proximity between prime and probe items, this task allowed the measurement of a spatial priming effect. In the first experiment, a larger priming effect was observed for closer items on the spatial layout. The second experiment replicated these findings and showed that the priming effects are better explained by categorical distance than by Euclidean distance.
Pub.: 05 Sep '03, Pinned: 28 Jun '17
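The categorical-versus-Euclidean contrast in the study above is easy to make concrete: Euclidean distance is metric distance between item positions, while categorical distance counts intervening regions. A toy sketch with a made-up layout and item names:

```python
# Toy contrast between the two distance measures: metric (Euclidean)
# distance between item coordinates vs. categorical distance counting
# region steps. The layout and item names are invented.
import math

positions = {"lamp": (0.0, 0.0), "vase": (1.0, 0.5), "clock": (5.0, 0.5)}
region = {"lamp": 0, "vase": 1, "clock": 2}   # region index along the layout

def euclidean(a, b):
    (x1, y1), (x2, y2) = positions[a], positions[b]
    return math.hypot(x2 - x1, y2 - y1)

def categorical(a, b):
    return abs(region[a] - region[b])

print(euclidean("lamp", "vase"), categorical("lamp", "vase"))    # near, 1 step
print(euclidean("vase", "clock"), categorical("vase", "clock"))  # far, 1 step
```

The two item pairs here share the same categorical distance but differ in Euclidean distance, which is the kind of dissociation the experiments exploited.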
Abstract: The authors examined how situation models are updated during text comprehension. If comprehenders keep track of the evolving situation, they should update their models such that the most current information, the here and now, is more available than outdated information. Contrary to this updating hypothesis, E. J. O'Brien, M. L. Rizzella, J. E. Albrecht, and J. G. Halleran (1998) obtained results suggesting that outdated or incorrect information may still influence the comprehension process. The authors of the current study demonstrate that the nature of E. J. O'Brien et al.'s materials were the likely cause of this pattern of results. Hence, the current authors constructed materials that circumvent identified confounds and in a reading-time experiment obtained evidence supporting the here-and-now hypothesis.
Pub.: 23 Jan '04, Pinned: 28 Jun '17
Abstract: According to theories of language comprehension, people can construct multiple levels of representation: the surface form, the propositional text base, and the situation model. In this study, I looked at how the referential nature of memory probes affects the experience of retrieval interference. All the subjects memorized sentences about objects in locations (e.g., “The potted palm is in the hotel”). When memory probes were sentences and, therefore, referential and most closely associated with the situation model level, no interference was observed during retrieval for information that could be integrated into a common situation model. In contrast, interference was observed in such cases when the memory probes were concept pairs (POTTED PALM-HOTEL) and hence not directly referential. This is interpreted to mean that nonreferential memory probes involve surface form and text base representations more than do referential sentence probes.
Pub.: 01 Jun '05, Pinned: 28 Jun '17
Abstract: Previous studies have found that interference in long-term memory retrieval occurs when information cannot be integrated into a single situation model, but this interference is greatly reduced or absent when the information can be so integrated. The current study looked at the influence of presentation format (sentences or pictures) on this observed pattern. When sentences were used at memorisation and recognition, a spatial organisation was observed. In contrast, when pictures were used, a different pattern of results was observed. Specifically, there was an overall speed-up in response times, and consistent evidence of interference. Possible explanations for this difference were examined in a third experiment using pictures during learning, but sentences during recognition. The results from Experiment 3 were consistent with the organisation of information into situation models in long-term memory, even from pictures. This suggests that people do create situation models when learning pictures, but their recognition memory may be oriented around more "verbatim", surface-form memories of the pictures.
Pub.: 07 Jun '06, Pinned: 28 Jun '17
Abstract: We investigated the ability of people to retrieve information about objects as they moved through rooms in a virtual space. People were probed with object names that were either associated with the person (i.e., carried) or dissociated from the person (i.e., just set down). Also, people either did or did not shift spatial regions (i.e., go to a new room). Information about objects was less accessible when the objects were dissociated from the person. Furthermore, information about an object was also less available when there was a spatial shift. However, the spatial shift had a larger effect on memory for the currently associated object. These data are interpreted as being more supportive of a situation model explanation, following on work using narratives and film. Simpler memory-based accounts that do not take into account the context in which a person is embedded cannot adequately account for the results.
Pub.: 01 Jul '06, Pinned: 28 Jun '17
Abstract: The present study demonstrates that children experience difficulties reaching the correct situation model of multiple events described in temporal sentences if the sentences encode language-external events in reverse chronological order. Importantly, the timing of the cue about how to organize these events is crucial: When temporal subordinate conjunctions (before/after) or converb constructions that carry information about how to organize the events were given sentence-medially, children experienced severe difficulties in arriving at the correct interpretation of event order. When this information was provided sentence-initially, children were better able to arrive at the correct situation model, even if it required them to decode the linguistic information in reverse with respect to the actual language-external events. This indicates that even children aged 8-12 still experience difficulties in arriving at the correct interpretation of the event structure if the cue about how to order the events is not given immediately when they start building the representation of the situation. This suggests that children's difficulties in comprehending sequential temporal events are caused by their inability to revise the representation of the current event structure at the level of the situation model.
Pub.: 05 Oct '11, Pinned: 28 Jun '17
Abstract: Readers construct mental models of situations described by text to comprehend what they read, updating these situation models based on explicitly described and inferred information about causal, temporal, and spatial relations. Fluent adult readers update their situation models while reading narrative text based in part on spatial location information that is consistent with the perspective of the protagonist. The current study investigated whether children update spatial situation models in a similar way, whether there are age-related changes in children's formation of spatial situation models during reading, and whether measures of the ability to construct and update spatial situation models are predictive of reading comprehension. Typically developing children from 9 to 16 years of age (N=81) were familiarized with a physical model of a marketplace. Then the model was covered, and children read stories that described the movement of a protagonist through the marketplace and were administered items requiring memory for both explicitly stated and inferred information about the character's movements. Accuracy of responses and response times were evaluated. Results indicated that (a) location and object information during reading appeared to be activated and updated not simply from explicit text-based information but from a mental model of the real-world situation described by the text; (b) this pattern showed no age-related differences; and (c) the ability to update the situation model of the text based on inferred information, but not explicitly stated information, was uniquely predictive of reading comprehension after accounting for word decoding.
Pub.: 10 Dec '13, Pinned: 28 Jun '17
Abstract: Humans understand text and film by mentally representing their contents in situation models. These describe situations using dimensions like time, location, protagonist, and action. Changes in 1 or more dimensions (e.g., a new character enters the scene) cause discontinuities in the story line and are often perceived as boundaries between 2 meaningful units. Recent theoretical advances in event perception led to the assumption that situation models are represented in the form of event models in working memory. These event models are updated at event boundaries. Points in time at which event models are updated are important: Compared with situations during an ongoing event, situations at event boundaries are remembered more precisely and predictions about what happens next become less reliable. We hypothesized that these effects depend on the number of changes in the situation model. In 2 experiments, we had participants watch sitcom episodes and measured recognition memory and prediction performance for event boundaries that contained a change in 1, 2, 3, or 4 dimensions. Results showed a linear relationship: the more dimensions changed, the higher recognition performance was. At the same time, participants' predictions became less reliable with an increasing number of dimension changes. These results suggest that updating of event models at event boundaries occurs incrementally.
Pub.: 14 May '14, Pinned: 28 Jun '17
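The incremental-updating idea above is simple to state in code: represent the situation before and after a potential boundary along the dimensions the authors name, and count how many changed. A toy example (the dimension names follow the abstract; the scenes are invented):

```python
# Toy count of situation-dimension changes at a potential event
# boundary; dimension names follow the abstract, scenes are invented.
before = {"time": "evening", "location": "kitchen",
          "protagonist": "Sam", "action": "cooking"}
after = {"time": "evening", "location": "living room",
         "protagonist": "Sam", "action": "watching TV"}
changed = [dim for dim in before if before[dim] != after[dim]]
print(len(changed), changed)   # 2 ['location', 'action']
```

On the paper's account, recognition memory for the pre-boundary situation rises roughly linearly with this count, while predictions about what happens next become less reliable.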
Abstract: In order to further validate and extend the application of recent knowledge structure (KS) measures to second language settings, this investigation explores how second language (L2, English) situation models are influenced by first language (L1, Korean) translation tasks. Fifty low-proficiency Korean learners of English were asked to read an L2 story and then complete L2 concept map and summary writing tasks, with or without an intervening L1 production task (Translated versus Directed conditions). Posttest comprehension was measured using the TOEFL multiple-choice items associated with the story (both in L2). KS elicited as concept maps and as text summaries were used to represent the situation models before, during, and after writing. For analysis, all of the participants' maps and writing artifacts were converted into Pathfinder Networks (PFNets) that were analyzed using two distinctly different approaches, correlation of the raw proximity data and degree centrality of the PFNets, in order to analyze the PFNets statistically and to describe KS cognitive state changes over time visually. The correlation results showed that, relative to the Directed Writing condition, the Translated Writing participants' L2 KS were more similar to an expert's and were significantly correlated with comprehension posttest scores. Including L1 tasks substantially improved the quality of the L2 KS artifacts and the underlying mental structures related to reading comprehension. In addition, the average centrality results showed that the PFNets of participants who translated had a relational network form, whereas the Directed Writing (English-only) group's PFNets had a more linear structure matching the text's surface structure, suggesting a fundamental way in which L1 and L2 cognitive processing differs.
Pub.: 11 Feb '15, Pinned: 28 Jun '17
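The "linear versus relational" network contrast above can be pictured with a degree-centrality comparison. The sketch below builds two small graphs by hand (node names and edges are invented, and the Pathfinder derivation step is omitted entirely) and shows that a chain-like network has lower average degree centrality than a more relational one.

```python
# Hedged sketch of the centrality contrast: a chain-like ("linear")
# network vs. a more relational one. Nodes and edges are invented;
# the Pathfinder Network derivation itself is not shown.
import networkx as nx

linear = nx.path_graph(["intro", "conflict", "plan", "action", "ending"])
relational = nx.Graph([("intro", "conflict"), ("intro", "plan"),
                       ("conflict", "action"), ("plan", "action"),
                       ("plan", "ending"), ("action", "ending")])

for name, g in [("linear", linear), ("relational", relational)]:
    dc = nx.degree_centrality(g)
    print(name, round(sum(dc.values()) / len(dc), 3))  # average centrality
```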
Abstract: This article sets out to examine the role of symbolic and sensorimotor representations in discourse comprehension. It starts out with a review of the literature on situation models, showing how mental representations are constrained by linguistic and situational factors. These ideas are then extended to more explicitly include sensorimotor representations. Following Zwaan and Madden (2005), the author argues that sensorimotor and symbolic representations mutually constrain each other in discourse comprehension. These ideas are then developed further to propose two roles for abstract concepts in discourse comprehension. It is argued that they serve as pointers in memory, used (1) cataphorically to integrate upcoming information into a sensorimotor simulation, or (2) anaphorically to integrate previously presented information into a sensorimotor simulation. In either case, the sensorimotor representation is a specific instantiation of the abstract concept.
Pub.: 20 Jun '15, Pinned: 28 Jun '17