Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks

Research paper by Radoslaw Martin Cichy, Aditya Khosla, Dimitrios Pantazis, Aude Oliva

Indexed on: 04 Apr '16 · Published on: 01 Apr '16 · Published in: NeuroImage



Abstract

Human scene recognition is a rapid multistep process, evolving over time from single-scene image analysis to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e., spatial layout processing, at ~250 ms, indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared the MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model and tracked the emerging neural representations of scene size. Together, our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework for investigating how spatial layout representations emerge in the human brain.
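Comparing time-resolved MEG data with a deep neural network, as described above, is commonly done with representational similarity analysis: a dissimilarity matrix over scene conditions is computed from the MEG sensor patterns at each time point and correlated with the corresponding matrix from DNN activations. The sketch below illustrates this general approach on synthetic data; it is not the authors' code, and all array sizes and variable names are illustrative assumptions.

```python
# Hedged sketch of time-resolved representational similarity analysis (RSA).
# All data below are synthetic; sizes are arbitrary illustrative choices.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_scenes, n_sensors, n_times = 12, 30, 50

# Synthetic MEG data: scenes x sensors x time points
meg = rng.normal(size=(n_scenes, n_sensors, n_times))

# Synthetic DNN-layer activations for the same scenes (scenes x units)
dnn = rng.normal(size=(n_scenes, 100))


def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between condition patterns, returned as the upper-triangle vector."""
    c = np.corrcoef(patterns)
    iu = np.triu_indices(len(patterns), k=1)
    return (1 - c)[iu]


dnn_rdm = rdm(dnn)

# Correlate the DNN RDM with the MEG RDM at every time point,
# yielding a model-brain similarity time course.
timecourse = np.array([
    spearmanr(rdm(meg[:, :, t]), dnn_rdm).correlation
    for t in range(n_times)
])

print(timecourse.shape)  # one similarity value per time point
```

Peaks in such a time course indicate when the neural representations are best explained by the model; with real data, significance would be assessed with permutation tests across conditions or subjects.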

Figures: five figures accompany the published article (doi:10.1016/j.neuroimage.2016.03.063).