I cover science and tech news for Sparrho and work with Sparrho Heroes to curate, translate and disseminate scientific research to the wider public.
Scientists are finding new ways to perfect algorithms. One surprising source is the animal kingdom.
Researchers mimicked the specialized navigation cells of mammalian brains to build an AI system capable of finding shortcuts through a labyrinth.
Abstract: Animals execute goal-directed behaviours despite the limited range and scope of their sensors. To cope, they explore environments and store memories maintaining estimates of important information that is not presently available. Recently, progress has been made with artificial intelligence (AI) agents that learn to perform tasks from sensory input, even at a human level, by merging reinforcement learning (RL) algorithms with deep neural networks, and the excitement surrounding these results has led to the pursuit of related ideas as explanations of non-human animal learning. However, we demonstrate that contemporary RL algorithms struggle to solve simple tasks when enough information is concealed from the sensors of the agent, a property called "partial observability". An obvious requirement for handling partially observed tasks is access to extensive memory, but we show memory is not enough; it is critical that the right information be stored in the right format. We develop a model, the Memory, RL, and Inference Network (MERLIN), in which memory formation is guided by a process of predictive modeling. MERLIN facilitates the solution of tasks in 3D virtual reality environments for which partial observability is severe and memories must be maintained over long durations. Our model demonstrates a single learning agent architecture that can solve canonical behavioural tasks in psychology and neurobiology without strong simplifying assumptions about the dimensionality of sensory input or the duration of experiences.
Pub.: 28 Mar '18, Pinned: 17 May '18
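The core difficulty this paper names, partial observability, is easy to reproduce in a toy setting: if the rewarded choice depends on a cue the agent saw several steps earlier, a memoryless policy cannot beat chance, while even a crude episodic memory solves the task. The sketch below is a minimal Python illustration of that point only — the T-maze task, the policy classes, and their names are invented for illustration and are far simpler than MERLIN's learned predictive memory.

```python
import random

def t_maze_episode(policy, delay=5):
    """Cue ('L' or 'R') is visible only at step 0; reward requires
    choosing the matching arm after a delay (partial observability)."""
    cue = random.choice(["L", "R"])
    memory = policy.reset(cue)          # first observation reveals the cue
    obs = cue
    for _ in range(delay):
        obs = "corridor"                # cue is no longer observable
        memory = policy.step(obs, memory)
    choice = policy.choose(obs, memory)
    return 1.0 if choice == cue else 0.0

class MemorylessPolicy:
    """Reacts only to the current observation -- chance performance."""
    def reset(self, obs): return None
    def step(self, obs, memory): return None
    def choose(self, obs, memory): return random.choice(["L", "R"])

class EpisodicMemoryPolicy:
    """Stores the one informative observation, a crude stand-in for
    an external memory holding predictive state."""
    def reset(self, obs): return obs
    def step(self, obs, memory): return memory   # keep the stored cue
    def choose(self, obs, memory): return memory

def mean_reward(policy, n=2000):
    return sum(t_maze_episode(policy) for _ in range(n)) / n
```

Running `mean_reward` on each policy shows the memoryless agent hovering near 0.5 while the memory-equipped agent scores 1.0 — a miniature version of the authors' point that storing the right information is a prerequisite for partially observed tasks.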
Abstract: A goal-directed navigation model is proposed based on forward linear look-ahead probe of trajectories in a network of head direction cells, grid cells, place cells and prefrontal cortex (PFC) cells. The model allows selection of new goal-directed trajectories. In a novel environment, the virtual rat incrementally creates a map composed of place cells and PFC cells by random exploration. After exploration, the rat retrieves memory of the goal location, picks its next movement direction by forward linear look-ahead probe of trajectories in several candidate directions while stationary in one location, and finds the one activating PFC cells with the highest reward signal. Each probe direction involves activation of a static pattern of head direction cells to drive an interference model of grid cells to update their phases in a specific direction. The updating of grid cell spiking drives place cells along the probed look-ahead trajectory similar to the forward replay during waking seen in place cell recordings. Directions are probed until the look-ahead trajectory activates the reward signal and the corresponding direction is used to guide goal-finding behavior. We report simulation results in several mazes with and without barriers. Navigation with barriers requires a PFC map topology based on the temporal vicinity of visited place cells and a reward signal diffusion process. The interaction of the forward linear look-ahead trajectory probes with the reward diffusion allows discovery of never-before experienced shortcuts towards a goal location.
Pub.: 08 Mar '12, Pinned: 17 May '18
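Two ingredients of this model — a reward signal diffusing outward from the goal, and linear look-ahead probes of candidate directions — can be sketched in a few lines of Python. This is a hypothetical grid-maze reduction, not the paper's spiking network: `diffuse_reward`, `probe_direction`, and `choose_direction` are invented names, and breadth-first decay stands in for the biological diffusion process.

```python
from collections import deque

def diffuse_reward(free, goal, decay=0.9):
    """Spread a reward signal from the goal through the maze graph,
    decaying with distance (a stand-in for reward diffusion)."""
    value = {goal: 1.0}
    frontier = deque([goal])
    while frontier:
        x, y = cell = frontier.popleft()
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if nxt in free and nxt not in value:
                value[nxt] = value[cell] * decay
                frontier.append(nxt)
    return value

def probe_direction(pos, d, free, value):
    """Linear look-ahead: sweep cells along direction d until a wall,
    returning the strongest reward signal on the probed trajectory."""
    best, (x, y) = 0.0, pos
    while (x + d[0], y + d[1]) in free:
        x, y = x + d[0], y + d[1]
        best = max(best, value.get((x, y), 0.0))
    return best

def choose_direction(pos, free, value):
    """Probe each candidate direction while 'stationary', then move
    toward the one whose trajectory activates the highest signal."""
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return max(dirs, key=lambda d: probe_direction(pos, d, free, value))
```

On an open grid the probe aimed at the goal sweeps over the strongest diffused signal, so that direction wins; walls block both the diffusion and the probes, which is why the two mechanisms must interact to reveal shortcuts.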
Abstract: More than three decades of research have demonstrated a role for hippocampal place cells in representation of the spatial environment in the brain. New studies have shown that place cells are part of a broader circuit for dynamic representation of self-location. A key component of this network is the entorhinal grid cells, which, by virtue of their tessellating firing fields, may provide the elements of a path integration-based neural map. Here we review how place cells and grid cells may form the basis for quantitative spatiotemporal representation of places, routes, and associated experiences during behavior and in memory. Because these cell types have some of the most conspicuous behavioral correlates among neurons in nonsensory cortical systems, and because their spatial firing structure reflects computations internally in the system, studies of entorhinal-hippocampal representations may offer considerable insight into general principles of cortical network dynamics.
Pub.: 21 Feb '08, Pinned: 17 May '18
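Path integration, the computation grid cells are proposed to support, is dead reckoning: accumulate self-motion (speed and heading) to maintain a position estimate without external landmarks. A minimal sketch, with invented function names:

```python
import math

def path_integrate(start, motion):
    """Dead reckoning: update an internal position estimate from
    self-motion (speed, heading in radians) alone -- the computation
    grid cells are hypothesized to perform."""
    x, y = start
    for speed, heading in motion:
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return x, y
```

Walking a closed square returns the estimate to the origin. In any real system each step also adds noise, which is why — as later papers in this list show — the integrator must be periodically reset by sensory cues such as vision.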
Abstract: Entorhinal grid cells have periodic, hexagonally patterned firing locations that scale up progressively along the dorsal-ventral axis of medial entorhinal cortex. This topographic expansion corresponds with parallel changes in cellular properties dependent on the hyperpolarization-activated cation current (Ih), which is conducted by hyperpolarization-activated cyclic nucleotide-gated (HCN) channels. To test the hypothesis that grid scale is determined by Ih, we recorded grid cells in mice with forebrain-specific knockout of HCN1. We find that, although the dorsal-ventral gradient of the grid pattern was preserved in HCN1 knockout mice, the size and spacing of the grid fields, as well as the period of the accompanying theta modulation, was expanded at all dorsal-ventral levels. There was no change in theta modulation of simultaneously recorded entorhinal interneurons. These observations raise the possibility that, during self-motion-based navigation, Ih contributes to the gain of the transformation from movement signals to spatial firing fields.
Pub.: 22 Nov '11, Pinned: 17 May '18
Abstract: Medial entorhinal cortex (MEC) grid cells fire at regular spatial intervals and project to the hippocampus, where place cells are active in spatially restricted locations. One feature of the grid population is the increase in grid spatial scale along the dorsal-ventral MEC axis. However, the difficulty in perturbing grid scale without impacting the properties of other functionally defined MEC cell types has obscured how grid scale influences hippocampal coding and spatial memory. Here we use a targeted viral approach to knock out HCN1 channels selectively in MEC, causing the grid scale to expand while leaving other MEC spatial and velocity signals intact. Grid scale expansion resulted in place scale expansion in fields located far from environmental boundaries, reduced long-term place field stability and impaired spatial learning. These observations, combined with simulations of a grid-to-place cell model and position decoding of place cells, illuminate how grid scale impacts place coding and spatial memory.
Pub.: 18 Jan '18, Pinned: 17 May '18
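The grid-to-place logic the authors simulate can be caricatured in one dimension: a place cell thresholds a sum of grid inputs at different scales, the inputs align only near one location, and scaling every grid input up widens the resulting place field. The sketch below is a deliberately simplified 1D stand-in (cosines instead of 2D hexagonal fields; the function names and the threshold are illustrative assumptions, not the paper's model).

```python
import math

def grid_response(pos, scale, phase=0.0):
    """1D stand-in for a grid cell: periodic firing at a given
    spatial scale (real grid fields are 2D and hexagonal)."""
    return math.cos(2 * math.pi * (pos - phase) / scale)

def place_response(pos, scales, threshold):
    """Grid-to-place sketch: a place cell sums grid inputs across
    scales; the inputs align only near one location, producing a
    single above-threshold firing field."""
    total = sum(grid_response(pos, s) for s in scales)
    return max(0.0, total - threshold)

def field_width(scales, threshold, xs):
    """Count sample positions where the place cell fires."""
    return sum(1 for x in xs if place_response(x, scales, threshold) > 0)
```

Doubling the grid scales roughly doubles the width of the above-threshold region, qualitatively matching the place-field expansion reported after grid-scale expansion.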
Abstract: Grid cells are spatially modulated neurons within the medial entorhinal cortex whose firing fields are arranged at the vertices of tessellating equilateral triangles. The exquisite periodicity of their firing has led to the suggestion that they represent a path integration signal, tracking the organism's position by integrating speed and direction of movement [2-10]. External sensory inputs are required to reset any errors that the path integrator would inevitably accumulate. Here we probe the nature of the external sensory inputs required to sustain grid firing, by recording grid cells as mice explore familiar environments in complete darkness. The absence of visual cues results in a significant disruption of grid cell firing patterns, even when the quality of the directional information provided by head direction cells is largely preserved. Darkness alters the expression of velocity signaling within the entorhinal cortex, with changes evident in grid cell firing rate and the local field potential theta frequency. Short-term (<1.5 s) spike timing relationships between grid cell pairs are preserved in the dark, indicating that network patterns of excitatory and inhibitory coupling between grid cells exist independently of visual input and of spatially periodic firing. However, we find no evidence of preserved hexagonal symmetry in the spatial firing of single grid cells at comparable short timescales. Taken together, these results demonstrate that visual input is required to sustain grid cell periodicity and stability in mice and suggest that grid cells in mice cannot perform accurate path integration in the absence of reliable visual cues.
Pub.: 09 Aug '16, Pinned: 17 May '18
Abstract: Neurons of the medial entorhinal cortex (MEC) provide spatial representations critical for navigation. In this network, the periodic firing fields of grid cells act as a metric element for position. The location of the grid firing fields depends on interactions between self-motion information, geometrical properties of the environment and nonmetric contextual cues. Here, we test whether visual information, including nonmetric contextual cues, also regulates the firing rate of MEC neurons. Removal of visual landmarks caused a profound impairment in grid cell periodicity. Moreover, the speed code of MEC neurons changed in darkness and the activity of border cells became less confined to environmental boundaries. Half of the MEC neurons changed their firing rate in darkness. Manipulations of nonmetric visual cues that left the boundaries of a 1D environment in place caused rate changes in grid cells. These findings reveal context specificity in the rate code of MEC neurons.
Pub.: 28 Jul '16, Pinned: 17 May '18
Abstract: The medial entorhinal cortex (mEC) has been identified as a hub for spatial information processing by the discovery of grid, border, and head-direction cells. Here we find that in addition to these well-characterized classes, nearly all of the remaining two-thirds of mEC cells can be categorized as spatially selective. We refer to these cells as nongrid spatial cells and confirmed that their spatial firing patterns were unrelated to running speed and highly reproducible within the same environment. However, in response to manipulations of environmental features, such as box shape or box color, nongrid spatial cells completely reorganized their spatial firing patterns. At the same time, grid cells retained their spatial alignment and predominantly responded with redistributed firing rates across their grid fields. Thus, mEC contains a joint representation of both spatial and environmental feature content, with specialized cell types showing different types of integrated coding of multimodal information.
Pub.: 28 Mar '17, Pinned: 17 May '18
Abstract: Grid cells in the entorhinal cortex appear to represent spatial location via a triangular coordinate system. Such cells, which have been identified in rats, bats and monkeys, are believed to support a wide range of spatial behaviors. Recording neuronal activity from neurosurgical patients performing a virtual-navigation task, we identified cells exhibiting grid-like spiking patterns in the human brain, suggesting that humans and simpler animals rely on homologous spatial-coding schemes.
Pub.: 06 Aug '13, Pinned: 17 May '18
Abstract: Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space and is critical for integrating self-motion (path integration) and planning direct trajectories to goals (vector-based navigation). Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments, optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation, demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.
Pub.: 11 May '18, Pinned: 17 May '18
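"Vector-based navigation" reduces, at its simplest, to reading out a straight-line displacement to a remembered goal and following it — the metric computation the emergent grid-like units are said to supply. The sketch below is a hypothetical illustration of that readout only; the decoding of position from the grid code itself, which the paper's network learns, is omitted.

```python
import math

def goal_vector(current, goal):
    """The metric readout: straight-line distance and heading from
    the current position to a remembered goal."""
    dx, dy = goal[0] - current[0], goal[1] - current[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def step_toward(current, goal, speed=1.0):
    """Follow the goal vector one step, never overshooting the goal."""
    distance, heading = goal_vector(current, goal)
    step = min(speed, distance)
    return (current[0] + step * math.cos(heading),
            current[1] + step * math.sin(heading))
```

An agent that repeatedly applies `step_toward` walks the direct route; if a wall that previously forced a detour is removed, the same readout immediately yields the shortcut — the behaviour the paper highlights in its agents.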
Abstract: We introduce the task of directly modeling a visually intelligent agent. Computer vision typically focuses on solving various subtasks related to visual intelligence. We depart from this standard approach to computer vision; instead we directly model a visually intelligent agent. Our model takes visual information as input and directly predicts the actions of the agent. Toward this end we introduce DECADE, a large-scale dataset of ego-centric videos from a dog's perspective as well as her corresponding movements. Using this data we model how the dog acts and how the dog plans her movements. We show under a variety of metrics that given just visual input we can successfully model this intelligent agent in many situations. Moreover, the representation learned by our model encodes distinct information compared to representations trained on image classification, and our learned representation can generalize to other domains. In particular, we show strong results on the task of walkable surface estimation by using this dog modeling task as representation learning.
Pub.: 28 Mar '18, Pinned: 17 May '18