Spatiotemporal patterns in neural networks


It turns out that my work in nonlinear dynamics directly maps to a close variant of the Integrate and Fire model. My model produces spatiotemporal patterns that resemble foam. I have to stress that this is the result of an exceedingly simple model of neuronal activity.

The plot below visualizes the voltage levels of a network of neurons. Yellow is high, purple is low.
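For concreteness, here is a minimal sketch of the kind of lattice model I mean: a 2D grid of leaky integrate-and-fire neurons with nearest-neighbour coupling. All parameters here are illustrative, not my exact model.

```python
import numpy as np

def simulate_lif_lattice(n=50, steps=200, tau=10.0, v_th=1.0, v_reset=0.0,
                         coupling=0.3, drive=0.12, seed=0):
    """Leaky integrate-and-fire neurons on an n x n grid with
    nearest-neighbour coupling. Returns the final voltage map."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.0, v_th, size=(n, n))  # random initial voltages
    for _ in range(steps):
        spikes = v >= v_th                   # neurons crossing threshold
        v[spikes] = v_reset                  # reset spiking neurons
        # input from the four nearest neighbours' spikes (periodic boundaries)
        neighbour_input = sum(np.roll(spikes.astype(float), shift, axis)
                              for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
        # leaky integration with a constant drive plus synaptic input
        v += (-v / tau) + drive + coupling * neighbour_input / 4.0
    return v

voltage = simulate_lif_lattice()
```

Plotting `voltage` with `imshow` (yellow high, purple low under the default viridis colormap) gives maps qualitatively like the one described above.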

I am wondering if these patterns are relevant to the computational neuroscience community. To this end, I am looking for literature describing spatiotemporal patterns in (preferably biological) neural systems.

I am especially interested in any work that proposes metrics that characterise such patterns since this would allow for a quantitative comparison with my results.

Forgive me if this seems trivial to you, but coming from complex systems theory I did my best and found nothing.

Unfortunately, I haven't looked at this sort of literature for a long time, but here are some thoughts with which to start.

Your question is about "spatiotemporal patterns" of neural systems. One of the first things that comes to mind is neural oscillations (e.g. alpha waves). This is essentially looking for the presence of "frequency patterns" in the neural system. This has a huge amount of literature.

Another methodology that comes to mind is the graph-theoretic approach. In essence, functional or anatomical activity is used to build a graph, which can be analyzed for patterns (i.e. special properties of the graphs of neural systems that are, say, shared across species). One paper: Complex brain networks: graph theoretical analysis of structural and functional systems by Bullmore and Sporns.
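As a toy illustration of the graph-theoretic approach (hypothetical data, not from that paper): threshold a functional connectivity matrix into an adjacency matrix, then compute a global graph metric such as the clustering coefficient.

```python
import numpy as np

def global_clustering(adj):
    """Global clustering coefficient of an undirected graph:
    3 * (number of triangles) / (number of connected triples)."""
    a = np.asarray(adj, dtype=float)
    triangles = np.trace(a @ a @ a) / 6.0            # each triangle counted 6x on the diagonal
    degrees = a.sum(axis=1)
    triples = (degrees * (degrees - 1) / 2.0).sum()  # open + closed triples
    return 3.0 * triangles / triples if triples else 0.0

# A triangle graph: every connected triple is closed, so clustering = 1.
triangle = np.array([[0, 1, 1],
                     [1, 0, 1],
                     [1, 1, 0]])
print(global_clustering(triangle))  # 1.0
```

Metrics like this (alongside path length, modularity, etc.) are what the Bullmore and Sporns review compares across neural systems.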

Separately, I might think about the neural correlations literature. Essentially, this is analyzing a neural population by examining the correlation structure of its members, and looking for patterns within it (i.e. that characterize the population code of the system). Two review papers: Neural correlations, population coding, and computation by Averbeck et al and Measuring and interpreting neural correlations by Cohen and Kohn. For a review on population codes, see Information processing with population codes by Pouget et al. This notion is sort of orthogonal to the notion of sparse coding in neural networks, more prevalent in sensory perception.
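A minimal sketch of the correlation-structure idea (synthetic spike counts, purely illustrative): a shared gain signal across trials induces positive pairwise "noise" correlations in a small population.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 5, 500

# Synthetic trial-by-trial spike counts: a shared fluctuation induces
# positive pairwise ("noise") correlations across the population.
shared = rng.normal(0.0, 1.0, size=n_trials)
counts = 10 + 2.0 * shared[None, :] + rng.normal(0.0, 1.0, (n_neurons, n_trials))

corr = np.corrcoef(counts)   # n_neurons x n_neurons correlation matrix
off_diag = corr[~np.eye(n_neurons, dtype=bool)]
print(off_diag.mean())       # positive, since trial-to-trial variability is shared
```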

Sorry if this is broad, but then again I feel the question is quite broad as well. If you have more specific questions, let me know.

Spatiotemporal neural networks for action recognition based on joint loss

Action recognition is a challenging and important problem in a myriad of significant fields, such as intelligent robots and video surveillance. In recent years, deep learning and neural network techniques have been widely applied to action recognition and attained remarkable results. However, it is still a difficult task to recognize actions in complicated scenes, such as various illumination conditions, similar motions, and background noise. In this paper, we present a spatiotemporal neural network model with a joint loss to recognize human actions from videos. This spatiotemporal neural network is comprised of two key connected substructures. The first one is a two-stream-based network extracting optical flow and appearance features from each frame of videos, which characterizes the human actions of videos in spatial dimension. The second substructure is a group of Long Short-Term Memory structures following the spatial network, which describes the temporal and transition information in videos. This research effort presents a joint loss function for training the spatiotemporal neural network model. By introducing the loss function, the action recognition performance is improved. The proposed method was tested with video samples from two challenging datasets. The experiments demonstrate that our approach outperforms the baseline comparison methods.


Sevoflurane Alters Spatiotemporal Functional Connectivity Motifs That Link Resting-State Networks during Wakefulness

Background: The spatiotemporal patterns of correlated neural activity during the transition from wakefulness to general anesthesia have not been fully characterized. Correlation analysis of blood-oxygen-level dependent (BOLD) functional magnetic resonance imaging (fMRI) allows segmentation of the brain into resting-state networks (RSNs), with functional connectivity referring to the covarying activity that suggests shared functional specialization. We quantified the persistence of these correlations following the induction of general anesthesia in healthy volunteers and assessed for a dynamic nature over time. Methods: We analyzed human fMRI data acquired at 0 and 1.2% vol sevoflurane. The covariance in the correlated activity among different brain regions was calculated over time using bounded Kalman filtering. These time series were then clustered into eight orthogonal motifs using a K-means algorithm, where the structure of correlated activity throughout the brain at any time is the weighted sum of all motifs. Results: Across time scales and under anesthesia, the reorganization of interactions between RSNs is related to the strength of dynamic connections between member pairs. The covariance of correlated activity between RSNs persists compared to that linking individual member pairs of different RSNs. Conclusions: Accounting for the spatiotemporal structure of correlated BOLD signals, anesthetic-induced loss of consciousness is mainly associated with the disruption of motifs with intermediate strength within and between members of different RSNs. In contrast, motifs with higher strength of connections, predominantly with regions-pairs from within-RSN interactions, are conserved among states of wakefulness and sevoflurane general anesthesia.

Keywords: Kalman filtering; dynamic functional connectivity; resting-state functional MRI; sevoflurane; spatiotemporal analysis.
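The pipeline in this abstract (time-resolved connectivity estimates clustered into motifs) can be sketched as follows. For simplicity this sketch uses sliding-window correlations and a plain k-means in place of the paper's bounded Kalman filter, and the data are synthetic.

```python
import numpy as np

def sliding_corr(ts, win):
    """Time series of pairwise correlations; ts is (regions, time)."""
    r, t = ts.shape
    iu = np.triu_indices(r, k=1)
    out = []
    for start in range(0, t - win + 1, win // 2):   # half-overlapping windows
        c = np.corrcoef(ts[:, start:start + win])
        out.append(c[iu])                           # vectorized upper triangle
    return np.array(out)                            # (windows, region pairs)

def kmeans(x, k, iters=100, seed=0):
    """Plain k-means with random initial centroids."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

rng = np.random.default_rng(0)
ts = rng.normal(size=(8, 600))          # 8 regions, 600 time points (synthetic)
dyn_fc = sliding_corr(ts, win=60)
labels, motifs = kmeans(dyn_fc, k=4)    # cluster connectivity windows into 4 motifs
```

Each row of `motifs` is then a recurring connectivity configuration, and `labels` gives the motif occupied at each window in time.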


Recent advances in brain recording techniques have led to a rapid influx of high spatial- and temporal-resolution datasets of large neural populations [1–4]. One of the major challenges in modern neuroscience is to identify and extract important population-level structures and dynamics from these datasets [5,6]. Traditionally, neural population activity has been mainly studied from the perspective of temporal synchrony or correlation, and relating correlated neural activity to brain functions has been the major focus of many studies in neuroscience during the past two decades [7,8].

However, growing evidence indicates that population-level brain activity is often organized into patterns that are structured in both space and time. Such spatiotemporal patterns, including planar traveling waves [9–11], spiral waves which rotate around a central point [12–14], source and sink patterns which expand or contract from a point [13,15], and saddle patterns which are formed by the interaction of multiple waves [13], have been observed at different neural levels with multiple recording techniques, including multi-electrode arrays [13,16–18], voltage sensitive dye (VSD) imaging [9,12,19], and electroencephalography (EEG), electrocorticography (ECoG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) [20–24].

The functional role of these spatiotemporal patterns is a subject of active research: In spontaneous activity, propagating patterns have been shown to follow repeated temporal motifs instead of occurring randomly [13,15], and are postulated to facilitate information transfer across brain regions [10,17] and carry out distributed dynamical computation [25]. In sensory cortices, stimuli can elicit repeatable propagating patterns [9,10,19,26,27], and the properties of these waves can be linked to stimulus features. For instance, the phase and amplitude of traveling waves in the motor cortex and visual cortex correlate with reach target location [17] and with saccade size [18], respectively, and the propagation direction of moving patterns in the visual cortex is sensitive to visual movement orientation [28]. These studies thus indicate that the ability to detect and analyze these patterns is essential for uncovering the principled dynamics of neural population activity and for understanding the working mechanisms of neural circuits [15,26,29,30].

In this study, to detect changes of neural signals happening across both space and time, we introduce velocity vector fields which represent the speed and direction of local spatiotemporal propagations. These vector fields allow us to make a novel conceptual link between spatiotemporal patterns in neural activity and complex patterns such as vortices or eddies found in the field of fluid turbulence [31–33], in which these patterns are similarly characterized by using velocity fields of the underlying moving molecules. Velocity vector fields in our methods are computed by adapting optical flow estimation methods originally developed in the field of computer vision [34]. Optical flow techniques have previously been implemented to analyze brain activity [13,26–28], but here we extend these methods to consider the amplitude and phase of oscillatory neural signals, allowing for a comprehensive analysis of neural spatiotemporal patterns. When constructed from oscillation phase, velocity vector fields are conceptually similar to phase gradient vector fields as often used in previous studies [15,18]. However, velocity vector fields provide a conceptual basis for us to adapt methods from turbulence to develop a unified methodological framework for analyzing neural spatiotemporal patterns.
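The velocity-field idea can be illustrated with the simplest gradient-based ("normal flow") estimate, a crude stand-in for the optical-flow methods the study adapts; the moving activity bump below is synthetic.

```python
import numpy as np

def normal_flow(frame0, frame1, eps=1e-6):
    """Gradient-based normal-flow estimate between two frames.
    Solves I_x*u + I_y*v + I_t = 0 along the gradient direction."""
    it = frame1 - frame0
    iy, ix = np.gradient((frame0 + frame1) / 2.0)   # np.gradient returns d/dy, d/dx
    mag2 = ix ** 2 + iy ** 2 + eps
    u = -it * ix / mag2          # x-component of the velocity field
    v = -it * iy / mag2          # y-component of the velocity field
    return u, v

# Synthetic data: a Gaussian activity bump translating in +x by 1 pixel/frame.
y, x = np.mgrid[0:64, 0:64]
bump = lambda cx: np.exp(-((x - cx) ** 2 + (y - 32) ** 2) / 50.0)
u, v = normal_flow(bump(20), bump(21))

# Average flow where the spatial gradient is informative:
mask = (u ** 2 + v ** 2) > 0.01
print(u[mask].mean())   # positive: net rightward motion recovered
```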

We show that by examining the critical points in a velocity vector field (also called “stationary points” or “singularity points”), where the local velocity is zero [35], different types of spatiotemporal patterns including spiral waves (“foci”), source/sink patterns (“nodes”) and saddles can be detected. In addition to these complex wave patterns, neural systems can exhibit widespread synchrony and planar travelling waves. These types of activity are common to many physical and biological systems, and can be detected by introducing global order parameters calculated from velocity vector fields [36]. These methods thus enable the automatic detection of a diverse range of spatiotemporal patterns once user-defined parameters have been chosen; these parameters are discussed in detail in Methods and Materials.
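The critical-point classification is standard linear-stability bookkeeping, sketched below (not NeuroPatt's actual code): at a zero of the velocity field, the determinant and trace of the local Jacobian distinguish saddles, nodes, and foci.

```python
import numpy as np

def classify_critical_point(jac):
    """Classify a critical point from the 2x2 velocity-field Jacobian."""
    det = np.linalg.det(jac)
    tr = np.trace(jac)
    if det < 0:
        return "saddle"          # one expanding and one contracting direction
    disc = tr ** 2 - 4 * det
    if disc < 0:
        return "focus"           # complex eigenvalues: spiral/rotating pattern
    return "node"                # real eigenvalues: source or sink pattern

# Rotation (spiral wave core): u = -y, v = x  ->  J = [[0, -1], [1, 0]]
print(classify_critical_point(np.array([[0., -1.], [1., 0.]])))   # focus
# Saddle formed by colliding waves: u = x, v = -y
print(classify_critical_point(np.array([[1., 0.], [0., -1.]])))   # saddle
# Source pattern expanding from a point: u = x, v = y
print(classify_critical_point(np.array([[1., 0.], [0., 1.]])))    # node
```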

Aside from detecting these patterns, our methods can provide systematic analysis of pattern dynamics including their evolution pathways and their underlying spatiotemporal modes that exhibit intrinsic and inseparable spatial and temporal features, thus providing a novel alternative to existing dimensionality reduction techniques which instead separate space and time components [6]. We validate the effectiveness of all methods and their implementation in the toolbox through multiple approaches. Using synthetic data with known pattern activity, we show that spatiotemporal pattern detection is accurate and reliable even in noisy conditions. We then analyze local field potentials from multi-electrode arrays in marmoset visual cortex and whole-brain optical imaging data from mouse cortex to test our methodological framework across different recording modalities, species, and neural scales. We find that pattern properties including location and propagation direction are modulated by visual stimulus, and that patterns evolve along structured pathways following preferred transitions.

Precisely timed spatiotemporal patterns of neural activity in dissociated cortical cultures

Recurring patterns of neural activity, a potential substrate of both information transfer and transformation in cortical networks, have been observed in the intact brain and in brain slices. Do these patterns require the inherent cortical microcircuitry of such preparations or are they a general property of self-organizing neuronal networks? In networks of dissociated cortical neurons from rats--which lack evidence of the intact brain's intrinsic cortical architecture--we have observed a robust set of spontaneously repeating spatiotemporal patterns of neural activity, using a template-matching algorithm that has been successful both in vivo and in brain slices. The observed patterns in cultured monolayer networks are stable over minutes of extracellular recording, occur throughout the culture's development, and are temporally precise within milliseconds. The identification of these patterns in dissociated cultures opens a powerful methodological avenue for the study of such patterns, and their persistence despite the topological and morphological rearrangements of cellular dissociation is further evidence that precisely timed patterns are a universal emergent feature of self-organizing neuronal networks.
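The template-matching idea can be sketched as follows (synthetic raster and a deliberately simplistic matcher; the study's actual algorithm additionally tolerates per-spike timing jitter):

```python
import numpy as np

def match_times(raster, template, min_overlap=0.9):
    """Return start bins where at least min_overlap of the template's
    spikes occur at matching (electrode, time-lag) positions."""
    width = template.shape[1]
    n_spikes = template.sum()
    hits = []
    for s in range(raster.shape[1] - width + 1):
        matched = np.logical_and(raster[:, s:s + width], template).sum()
        if matched >= min_overlap * n_spikes:
            hits.append(s)
    return hits

rng = np.random.default_rng(0)
raster = rng.random((20, 1000)) < 0.02      # 20 electrodes, sparse background spikes
motif = rng.random((20, 30)) < 0.15         # a fixed spatiotemporal spike pattern
for start in (100, 400, 800):               # plant three repetitions of the motif
    raster[:, start:start + 30] |= motif

print(match_times(raster, motif))   # [100, 400, 800]
```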


Figure captions (thumbnails truncated in the source):

Shuffling methods. (Top) Spike swapping preserves the dataset’s spike-timing distribution and electrode distribution.…

Precisely timed sequences of neural activity repeat spontaneously in networks of dissociated cortical…

Properties of detected sequences. (A) Histogram of times between sequence repetitions from those…

Sequences repeat more frequently in actual data from cultures aged 21 DIV than…

Sequences repeat more frequently in actual data than in shuffled data at 35…

Persistence of detected sequences. The three most frequently recurring sequences were sought in…

The distribution of sequence sizes obeys a power law probability distribution, P(n) ∝ n^α, where n is the event size, P(n) is its normalized frequency of occurrence in our datasets, and α is the power law’s exponent. On log-log plots such as these, power laws appear linear, with slope α. For the observed sequences in the actual data, α = −3.1 ± 0.2 (±95% CI, R² = 0.97; A). However, we find similar scale invariance in our spike-swapped (α = −3.2 ± 0.2, R² = 0.97; B) and spike-jittered (α = −3.3 ± 0.2, R² = 0.97; C) data, minimizing the importance of scale invariance in explaining our significantly repeating patterns.
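An exponent like the α reported here can be estimated, to first approximation, by a least-squares fit on log-log axes (the graphical method described above; maximum-likelihood estimators are generally preferable for real data):

```python
import numpy as np

def fit_power_law_exponent(sizes, freqs):
    """Least-squares slope of log P(n) versus log n."""
    slope, _ = np.polyfit(np.log(sizes), np.log(freqs), 1)
    return slope

# Synthetic check: frequencies generated exactly as P(n) proportional to n^(-3.1)
n = np.arange(2, 50)
p = n ** -3.1
p /= p.sum()
print(fit_power_law_exponent(n, p))   # ~ -3.1
```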

A spatiotemporal complexity architecture of human brain activity

The human brain operates in large-scale functional networks, collectively subsumed as the functional connectome 1–13. Recent work has begun to unravel the organization of the connectome, including the temporal dynamics of brain states 14–20, the trade-off between segregation and integration 9,15,21–23, and a functional hierarchy from lower-order unimodal to higher-order transmodal processing systems 24–27. However, it remains unknown how these network properties are embedded in the brain and if they emerge from a common neural foundation.

Here we apply time-resolved estimation of brain signal complexity to uncover a unifying principle of brain organization, linking the connectome to neural variability 6,28–31. Using functional magnetic resonance imaging (fMRI), we show that neural activity is marked by spontaneous “complexity drops” that reflect episodes of increased pattern regularity in the brain, and that functional connections among brain regions are an expression of their simultaneous engagement in such episodes. Moreover, these complexity drops ubiquitously propagate along cortical hierarchies, suggesting that the brain intrinsically reiterates its own functional architecture. Globally, neural activity clusters into temporal complexity states that dynamically shape the coupling strength and configuration of the connectome, implementing a continuous re-negotiation between cost-efficient segregation and communication-enhancing integration 9,15,21,23. Furthermore, complexity states comprehensively resolve the recently discovered association between anatomical and functional network hierarchies 25–27,32. Finally, brain signal complexity is highly sensitive to age and reflects inter-individual differences in cognition and motor function. In sum, we identify a spatiotemporal complexity architecture of neural activity – a functional “complexome” that gives rise to the network organization of the human brain.
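As a generic illustration of time-resolved signal-complexity estimation (a sketch only; the study's specific estimator may differ), one can binarize each window of a signal around its median and count Lempel-Ziv phrases. A "complexity drop" then appears as a lower phrase count during a more regular epoch:

```python
import numpy as np

def lz_phrase_count(bits):
    """LZ78-style phrase count of a binary sequence: parse into the
    shortest phrases not seen before. Regular signals need fewer phrases."""
    phrases, current = set(), ""
    for b in bits:
        current += str(int(b))
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases)

def windowed_complexity(signal, win):
    """Binarize each window around its median, then count LZ phrases."""
    return [lz_phrase_count(signal[s:s + win] > np.median(signal[s:s + win]))
            for s in range(0, len(signal) - win + 1, win)]

rng = np.random.default_rng(0)
irregular = rng.normal(size=1000)                    # noisy epoch
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))   # highly patterned epoch
c_irr = windowed_complexity(irregular, win=250)
c_reg = windowed_complexity(regular, win=250)
print(np.mean(c_irr), np.mean(c_reg))   # complexity "drops" for the regular epoch
```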

4 Spatio-temporal properties of SNNs

This section introduces the spatio-temporal properties of spiking neural networks and shows that ANN feature extractors lack these characteristics. It also presents the test cases designed to examine the spatio-temporal properties of NNs in detail.

4.1 Claim

Figure 2: Test cases designed to challenge spatio-temporal extraction properties. The base images are derived from the MNIST dataset. The top two rows belong to test1 (zoom-in from 0% to 100% and zoom-out from 100% to 0%), the 3rd and 4th rows to test2 (clockwise and counter-clockwise rotations from 0 to 360 degrees and vice versa), the 5th and 6th rows to test3 (zoom-in from 50% to 100% and zoom-out from 100% to 50%), the 7th row to test4 (occlusion), and the bottom two rows to test5 (random clockwise and counter-clockwise rotations).

The structure of spiking neural networks is very similar to that of the human brain, and an advantage of these networks is the memory that exists in each neuron. This memory is the source of the temporal-coding feature, and it leads to astonishing performance in extracting particular spatio-temporal features, including learning models with random patterns. Common feature extractors such as C3D and ConvLSTM are not able to extract these features. A mathematical formulation of this claim is as follows: assume that f(.) is a function of x, y and t that models a spatio-temporal motion. Frames in time can be modeled as:

Training data are binomial samples:

A single layer of C3D or conv2D cannot learn the stochastic I(x, y, t) as we defined it; those layers are designed to learn deterministic patterns in I(x, y, t).

ConvLSTM is comparable to an SNN in that it has memory in each layer, which makes it similarly capable. However, if σ is large enough, the LSTM inside the convolution layer cannot forget the large variance, causing ConvLSTM accuracy to drop, whereas SNN thresholding makes the SNN highly robust to large noise variances. This problem can be mitigated if the ConvLSTM has significantly more convolution kernels than the SNN. A network of sequential shallow convolution layers and LSTMs also has issues: it has difficulty learning the time domain of its kernels, and, for large time windows, typical LSTM layers suffer from information loss.

Designed test cases are as follows:

Test1: Zoom-in (0 to 100%) and zoom-out (100 to 0%) as 20 classes of MNIST

Test2: Rotate clockwise (0 to 360 degrees) and rotate counter-clockwise (360 to 0 degrees) as 20 classes of MNIST

Test3: Zoom-in (50 to 100%) and zoom-out (100 to 50%) as 20 classes of MNIST

Test4: Occlusion with a random box of zero values

Test5: Random incremental rotations CW/CCW (no rotation on the first and last frames of CCW, blank picture on the first and last frames of CW)
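A minimal sketch of how such clockwise/counter-clockwise sequence classes can be generated, using 90-degree steps with np.rot90 as a simplified stand-in for the finer-grained rotations above:

```python
import numpy as np

def rotation_sequence(image, clockwise=True, steps=4):
    """Frame sequence rotating an image in 90-degree increments."""
    sign = -1 if clockwise else 1          # np.rot90 rotates counter-clockwise for k > 0
    return np.stack([np.rot90(image, sign * k) for k in range(steps)])

img = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for an MNIST digit
cw = rotation_sequence(img, clockwise=True)
ccw = rotation_sequence(img, clockwise=False)
# The two classes share frame 0 but differ in temporal order thereafter,
# so a classifier must use temporal information to separate them.
```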

4.2 Experimental evidence

To demonstrate the inherent memory of spiking neural networks, several special test cases were designed. In tests 1 and 3, zoom-in and zoom-out image sequences are given as inputs and the network classifies them. SNNs can also identify clockwise versus counter-clockwise rotation; test 2 is designed to challenge that property. In addition, because each neuron has memory, SNNs are capable of learning random patterns, and they classify occluded images with high accuracy. Tests 4 and 5 were designed to probe these last two properties.

Spatiotemporal patterns in neural networks - Biology

a Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza L. da Vinci, 32 – 20133 Milano, Italy
E-mail: [email protected]

b Micron Technology, Inc., Boise, ID, USA


Resistive switching random-access memory (ReRAM) is a two-terminal device based on ion migration to induce resistance switching between a high resistance state (HRS) and a low resistance state (LRS). ReRAM is considered one of the most promising technologies for artificial synapses in brain-inspired neuromorphic computing systems. However, there is still a lack of general understanding about how to develop such a gestalt system to imitate and compete with the brain’s functionality and efficiency. Spiking neural networks (SNNs) are well suited to describe the complex spatiotemporal processing inside the brain, where the energy efficiency of computation mostly relies on the spike carrying information about both space (which neuron fires) and time (when a neuron fires). This work addresses the methodology and implementation of a neuromorphic SNN system to compute the temporal information among neural spikes using ReRAM synapses capable of spike-timing dependent plasticity (STDP). The learning and recognition of spatiotemporal spike sequences are experimentally demonstrated. Our simulation study shows that it is possible to construct a multi-layer spatiotemporal computing network. Spatiotemporal computing also enables learning and detection of the trace of moving objects and mimicking of the hierarchy structure of the biological visual cortex adopting temporal-coding for fast recognition.
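The STDP rule exploited by such ReRAM synapses can be sketched in its standard pair-based form (illustrative constants; conductance updates in the actual devices follow measured switching kinetics):

```python
import numpy as np

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change; dt = t_post - t_pre in ms.
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)     # long-term potentiation
    return -a_minus * np.exp(dt / tau_minus)       # long-term depression

print(stdp_dw(10.0))    # positive: potentiation
print(stdp_dw(-10.0))   # negative: depression
```

Because the update depends only on the relative spike timing, a crossbar of such synapses can learn which neuron fired and when, the two quantities the abstract identifies as carrying the spike's information.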

Spatiotemporal patterns in neural networks - Biology

NeuroPatt is a MATLAB toolbox to automatically detect, analyze and visualize spatiotemporal patterns in neural population activity. Data recorded by multi-electrode arrays, EEG, MEG, fMRI and other imaging methods such as VSD can optionally be filtered to extract the phase or amplitude of oscillations within a specified frequency range. Propagating activity is then extracted by adapting methods from the fields of fluid turbulence and computer vision. Spatiotemporal patterns can be tracked across space and time using order-parameter and critical-point analysis, and dominant spatiotemporal dynamics are extracted using vector field decompositions.

NeuroPatt comes with a collection of user-friendly MATLAB functions that allow for (1) automatic detection of all spatiotemporal patterns in an input recording based on user parameters, (2) output and visualization (plots and videos) of pattern statistics, and (3) analysis and visualization of pattern dynamics.

If you use our code in your research, please cite us as follows:

Townsend RG, Gong P. Detection and analysis of spatiotemporal patterns in brain activity, PLoS Computational Biology 14(12): e1006643, 2018.

All primary functions can be accessed through the NeuroPattGUI M-file. The toolbox includes a sample dataset of LFPs recorded from marmoset visual area MT (see references 1 and 2, 10x10 recording array, 5 repetitions of moving-dot stimulus presentations with stimulus turned on at 1s and sampling frequency 1017 Hz, MA026-14 Dir 1 Rep 40-44). This test data can be processed with the following:

Using default parameters, NeuroPattGUI will find all patterns at 4 Hz in the recording and output vector decompositions, patterns detected and pattern transitions. At the results screen, pattern statistics and locations can be visualised in more detail, and analysis can be repeated with surrogate, noise-driven data to verify results.

The saveVelocityFieldVideo function can then be used to show all computed amplitude/phase maps with their corresponding velocity fields:

As an alternative to using the GUI, all main functionality can instead be run through the command line. Desired parameters should first be specified within the setNeuroPattParams M-file, then processing can be run through:

This project is licensed under the GNU General Public License Version 3 - see the file for details.

Townsend RG, Gong P. Detection and analysis of spatiotemporal patterns in brain activity, PLoS Computational Biology 14(12): e1006643, 2018.

Additional data files

The following additional data are available with the online version of this paper: a figure showing an outline of the AVEXIS procedure (Additional data file 1); a figure showing validation of interactions using a fluorescent bead-based assay (Additional data file 2); a figure showing that interacting neuroreceptors display both complementary and overlapping expression patterns in the developing brain (Additional data file 3); a table listing the zebrafish LRR genes cloned and used to produce recombinant ectodomains (Additional data file 4); a table listing the 97 zebrafish IgSF ectodomain baits (Additional data file 5); a table classifying the neuroreceptor interactions using AVEXIS (Additional data file 6); and a table listing the spatiotemporal expression of each gene within the interaction network (Additional data file 7).

