Canard solutions in neural mass models: consequences on critical regimes
Mathematical models at multiple temporal and spatial scales can unveil the fundamental mechanisms of critical transitions in brain activities. Neural mass models (NMMs) consider the average temporal dynamics of interconnected neuronal subpopulations without explicitly representing the underlying cellular activity. The mesoscopic level offered by the neural mass formulation has been used to model electroencephalographic (EEG) recordings and to investigate various cerebral mechanisms, such as the generation of physiological and pathological brain activities. In this work, we consider a NMM widely accepted in the context of epilepsy, which includes four interacting neuronal subpopulations with different synaptic kinetics. Due to the resulting three-time-scale structure, the model yields complex oscillations of relaxation and bursting types. By applying the principles of geometric singular perturbation theory, we unveil the existence of canard solutions and detail how they organize the complex oscillations and excitability properties of the model. In particular, we show that boundaries between pathological epileptic discharges and physiological background activity are determined by the canard solutions. Finally, we report the existence of canard-mediated small-amplitude frequency-specific oscillations in simulated local field potentials for decreased inhibition conditions. Interestingly, such oscillations are actually observed in intracerebral EEG signals recorded in epileptic patients during pre-ictal periods, close to seizure onsets.
Rendering neuronal state equations compatible with the principle of stationary action
The principle of stationary action is a cornerstone of modern physics, providing a powerful framework for investigating dynamical systems found in classical mechanics through to quantum field theory. However, computational neuroscience, despite its heavy reliance on concepts in physics, is anomalous in this regard as its main equations of motion are not compatible with a Lagrangian formulation and hence with the principle of stationary action. Taking the Dynamic Causal Modelling (DCM) neuronal state equation as an instructive archetype of the first-order linear differential equations commonly found in computational neuroscience, we show that it is possible to make certain modifications to this equation to render it compatible with the principle of stationary action. Specifically, we show that a Lagrangian formulation of the DCM neuronal state equation is facilitated using a complex dependent variable, an oscillatory solution, and a Hermitian intrinsic connectivity matrix. We first demonstrate proof of principle by using Bayesian model inversion to show that both the original and modified models can be correctly identified via in silico data generated directly from their respective equations of motion. We then provide motivation for adopting the modified models in neuroscience by using three different types of publicly available in vivo neuroimaging datasets, together with open source MATLAB code, to show that the modified (oscillatory) model provides a more parsimonious explanation for some of these empirical time series. It is our hope that this work will, in combination with existing techniques, allow researchers to explore the symmetries and associated conservation laws within neural systems, and to exploit the computational expediency facilitated by direct variational techniques.
Pattern formation in a 2-population homogenized neuronal network model
We study pattern formation in a 2-population homogenized neural field model of the Hopfield type in one spatial dimension with periodic microstructure. The connectivity functions are periodically modulated in both the synaptic footprint and in the spatial scale. It is shown that the nonlocal synaptic interactions promote a finite-bandwidth instability. The stability method relies on a sequence of wave-number dependent invariants of [Formula: see text]-stability matrices representing the sequence of Fourier-transformed linearized evolution equations for the perturbation imposed on the homogeneous background. The generic picture of the instability structure consists of a finite set of well-separated gain bands. In the shallow firing rate regime the nonlinear development of the instability is determined by means of the translational invariant model with connectivity kernels replaced with the corresponding period averaged connectivity functions. In the steep firing rate regime the pattern formation process depends sensitively on the spatial localization of the connectivity kernels: for strongly localized kernels this process is determined by the translational invariant model with period averaged connectivity kernels, whereas the complementary regime of weak and moderate localization requires the homogenized model as a starting point for the analysis. We follow the development of the instability numerically into the nonlinear regime for both steep and shallow firing rate functions when the connectivity kernels are modeled by means of an exponentially decaying function. We also study the pattern forming process numerically as a function of the heterogeneity parameters in four different regimes ranging from the weakly modulated case to the strongly heterogeneous case.
For the weakly modulated regime, we observe that stable spatial oscillations are formed in the steep firing rate regime, whereas we get spatiotemporal oscillations in the shallow regime of the firing rate functions.
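The finite-bandwidth instability described above can be illustrated with a short dispersion-relation computation. The sketch below uses a generic translation-invariant rate model with a balanced difference-of-exponentials kernel, for which the growth rate of a perturbation with wave number k is λ(k) = -1 + β Ŵ(k); the kernel parameters and the firing-rate slope β are illustrative assumptions, not values from the paper.

```python
import numpy as np

def w_hat(k, A_e=1.0, s_e=1.0, A_i=1.0, s_i=2.0):
    """Fourier transform of the balanced difference-of-exponentials kernel
    w(x) = A_e/(2*s_e)*exp(-|x|/s_e) - A_i/(2*s_i)*exp(-|x|/s_i)."""
    return A_e / (1 + (s_e * k) ** 2) - A_i / (1 + (s_i * k) ** 2)

beta = 4.0                                # slope of the firing-rate function
k = np.linspace(0, 5, 2001)
lam = -1 + beta * w_hat(k)                # growth rate of the mode exp(i*k*x)
unstable = k[lam > 0]
print(f"gain band: {unstable.min():.2f} < k < {unstable.max():.2f}")
```

Because the kernel is balanced (Ŵ(0) = 0), the homogeneous state is stable to uniform perturbations and the instability appears as a single well-separated gain band at finite wave numbers, the basic scenario behind the gain-band picture described in the abstract.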
Auditory streaming emerges from fast excitation and slow delayed inhibition
In the auditory streaming paradigm, alternating sequences of pure tones can be perceived as a single galloping rhythm (integration) or as two sequences with separated low and high tones (segregation). Although studied for decades, the neural mechanisms underlying this perceptual grouping of sound remain a mystery. With the aim of identifying a plausible minimal neural circuit that captures this phenomenon, we propose a firing rate model with two periodically forced neural populations coupled by fast direct excitation and slow delayed inhibition. By analyzing the model in a non-smooth, slow-fast regime we analytically prove the existence of a rich repertoire of dynamical states and of their parameter dependent transitions. We impose plausible parameter restrictions and link all states with perceptual interpretations. Regions of stimulus parameters occupied by states linked with each percept match those found in behavioural experiments. Our model suggests that slow inhibition masks the perception of subsequent tones during segregation (forward masking), whereas fast excitation enables integration for large pitch differences between the two tones.
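A minimal numerical sketch of such a circuit is given below: two rate units driven by alternating tones and coupled by fast direct excitation and slow delayed inhibition, integrated with Euler's method and a delay buffer. All parameter values (time constants, coupling strengths, tone duration) are illustrative assumptions rather than the paper's values, and the non-smooth slow-fast analysis is not reproduced here.

```python
import numpy as np

def simulate(T=4.0, dt=0.001, tau=0.01, a_exc=0.3, b_inh=0.8, delay=0.06):
    """Two rate units hearing alternating low/high tones, coupled by fast
    direct excitation and slow delayed inhibition (illustrative parameters)."""
    n = int(T / dt)
    d = int(delay / dt)
    t = np.arange(n) * dt
    tone = (t // 0.125).astype(int) % 2          # 125 ms alternating tones
    inp = np.stack([(tone == 0), (tone == 1)], axis=1).astype(float)
    f = lambda x: np.maximum(x, 0.0)             # rectified-linear rate function
    u = np.zeros((n, 2))
    for i in range(1, n):
        cross = u[i - 1, ::-1]                   # fast excitation from the other unit
        lagged = u[i - 1 - d, ::-1] if i > d else np.zeros(2)
        drive = inp[i - 1] + a_exc * cross - b_inh * lagged
        u[i] = u[i - 1] + dt * (-u[i - 1] + f(drive)) / tau
    return t, u

t, u = simulate()
print(f"peak rates: unit 0 = {u[:, 0].max():.2f}, unit 1 = {u[:, 1].max():.2f}")
```

With these (assumed) parameters the delayed cross-inhibition suppresses the non-driven unit during each tone, a caricature of the forward-masking mechanism proposed in the abstract.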
A model of on/off transitions in neurons of the deep cerebellar nuclei: deciphering the underlying ionic mechanisms
The neurons of the deep cerebellar nuclei (DCNn) represent the main functional link between the cerebellar cortex and the rest of the central nervous system. Therefore, understanding the electrophysiological properties of DCNn is of fundamental importance to understand the overall functioning of the cerebellum. Experimental data suggest that DCNn can reversibly switch between two states: the firing of spikes (F state) and a stable depolarized state (SD state). We introduce a new biophysical model of the DCNn membrane electro-responsiveness to investigate how the interplay between the documented conductances identified in DCNn gives rise to these states. In the model, the F state emerges as an isola of limit cycles, i.e. a closed loop of periodic solutions disconnected from the branch of SD fixed points. This bifurcation structure endows the model with the ability to reproduce the [Formula: see text] transition triggered by hyperpolarizing current pulses. The model also reproduces the [Formula: see text] transition induced by blocking Ca currents and ascribes this transition to the blocking of the high-threshold Ca current. The model suggests that intracellular current injections can trigger fully reversible [Formula: see text] transitions. Investigation of low-dimensional reduced models suggests that the voltage-dependent Na current is prominent for these dynamical features. Finally, simulations of the model suggest that physiological synaptic inputs may trigger [Formula: see text] transitions. These transitions could explain the puzzling observation of positively correlated activities of connected Purkinje cells and DCNn even though the former inhibit the latter.
M-current induced Bogdanov-Takens bifurcation and switching of neuron excitability class
In this work, we consider a general conductance-based neuron model with the inclusion of the acetylcholine-sensitive M-current. We study bifurcations in the parameter space consisting of the applied current [Formula: see text], the maximal conductance of the M-current [Formula: see text] and the conductance of the leak current [Formula: see text]. We give precise conditions for the model that ensure the existence of a Bogdanov-Takens (BT) point and show that such a point can occur by varying [Formula: see text] and [Formula: see text]. We discuss the case when the BT point becomes a Bogdanov-Takens-cusp (BTC) point and show that such a point can occur in the three-dimensional parameter space. The results of the bifurcation analysis are applied to different neuronal models and are verified and supplemented by numerical bifurcation diagrams generated using the package MATCONT. We conclude that there is a transition in the neuronal excitability type organised by the BT point and the neuron switches from Class-I to Class-II as the conductance of the M-current increases.
Estimating Fisher discriminant error in a linear integrator model of neural population activity
Decoding approaches provide a useful means of estimating the information contained in neuronal circuits. In this work, we analyze the expected classification error of a decoder based on Fisher linear discriminant analysis. We provide expressions that relate decoding error to the specific parameters of a population model that performs linear integration of sensory input. Results show conditions that lead to beneficial and detrimental effects of noise correlation on decoding. Further, the proposed framework sheds light on the contribution of neuronal noise, highlighting cases where, counter-intuitively, increased noise may lead to improved decoding performance. Finally, we examine the impact of dynamical parameters, including neuronal leak and integration time constant, on decoding. Overall, this work presents a fruitful approach to the study of decoding using a comprehensive theoretical framework that merges dynamical parameters with estimates of readout error.
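The basic quantity behind such analyses can be sketched numerically. For two stimulus classes with equal noise covariance C and mean difference Δμ, the expected error of the Fisher linear discriminant is Φ(-d'/2) with d'² = Δμᵀ C⁻¹ Δμ; the snippet below checks this against Monte Carlo classification. The uniform-correlation covariance and population size are illustrative stand-ins, not the paper's linear-integrator model.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
N = 20                                       # number of neurons read out
dmu = np.full(N, 0.15)                       # mean response difference (signal)
rho, var = 0.2, 0.01
C = var * ((1 - rho) * np.eye(N) + rho * np.ones((N, N)))   # correlated noise

# analytic expected error: P(err) = Phi(-d'/2), d'^2 = dmu^T C^{-1} dmu
d2 = dmu @ np.linalg.solve(C, dmu)
p_err = 0.5 * erfc(sqrt(d2) / (2 * sqrt(2)))

# Monte Carlo check: project onto w = C^{-1} dmu, threshold at the midpoint
w = np.linalg.solve(C, dmu)
L = np.linalg.cholesky(C)
n_trials = 20000
x0 = L @ rng.standard_normal((N, n_trials))                 # stimulus 0
x1 = dmu[:, None] + L @ rng.standard_normal((N, n_trials))  # stimulus 1
thr = w @ dmu / 2
err_mc = 0.5 * ((w @ x0 > thr).mean() + (w @ x1 < thr).mean())
print(f"analytic error {p_err:.3f}, monte-carlo {err_mc:.3f}")
```

In this particular geometry the signal lies along the uniformly correlated direction, so increasing rho inflates the noise along the signal and raises the error; with the signal orthogonal to that direction the effect reverses, consistent with the beneficial/detrimental distinction made in the abstract.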
A bio-inspired geometric model for sound reconstruction
The reconstruction mechanisms built by the human auditory system during sound reconstruction are still a matter of debate. The purpose of this study is to propose a mathematical model of sound reconstruction based on the functional architecture of the auditory cortex (A1). The model is inspired by the geometrical modelling of vision, which has developed considerably over the last ten years. There are, however, fundamental dissimilarities, due to the different role played by time and the different group of symmetries. The algorithm transforms the degraded sound into an 'image' in the time-frequency domain via a short-time Fourier transform. Such an image is then lifted to the Heisenberg group and is reconstructed via a Wilson-Cowan integro-differential equation. Preliminary numerical experiments are provided, showing the good reconstruction properties of the algorithm on synthetic sounds concentrated around two frequencies.
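The front end of this pipeline, the short-time Fourier transform that produces the time-frequency 'image', can be sketched in a few lines; the Heisenberg-group lift and Wilson-Cowan reconstruction are not shown. The sample rate, window, and tone frequencies below are illustrative choices.

```python
import numpy as np

fs = 8000                                   # sample rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
# a synthetic sound concentrated around two frequencies, as in the experiments
sound = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 880 * t)

def stft(x, win=256, hop=128):
    """Short-time Fourier transform: the time-frequency 'image' that the
    model subsequently lifts to the Heisenberg group (lift not shown)."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T     # (freq bins, time frames)

S = stft(sound)
freqs = np.fft.rfftfreq(256, d=1 / fs)
peaks = np.sort(freqs[np.argsort(S.mean(axis=1))[-2:]])
print(f"dominant frequencies: {peaks[0]:.0f} Hz and {peaks[1]:.0f} Hz")
```

The resulting magnitude image S is the input on which a cortical-inspired reconstruction operates; here we only verify that the two spectral concentrations are recovered from the image.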
On the potential role of lateral connectivity in retinal anticipation
We analyse the potential effects of lateral connectivity (amacrine cells and gap junctions) on motion anticipation in the retina. Our main result is that lateral connectivity can, under conditions analysed in the paper, trigger a wave of activity enhancing the anticipation mechanism provided by local gain control (Berry et al. in Nature 398(6725):334-338, 1999; Chen et al. in J. Neurosci. 33(1):120-132, 2013). We illustrate these predictions by two examples studied in the experimental literature: differential motion sensitive cells (Baccus and Meister in Neuron 36(5):909-919, 2002) and direction sensitive cells where direction sensitivity is inherited from asymmetry in gap junctions connectivity (Trenholm et al. in Nat. Neurosci. 16:154-156, 2013). We finally present reconstructions of retinal responses to 2D visual inputs to assess the ability of our model to anticipate motion in the case of three different 2D stimuli.
Retroactive interference model of forgetting
Memory and forgetting constitute two sides of the same coin, and although the first has been extensively investigated, the latter is often overlooked. A possible approach to better understand forgetting is to develop phenomenological models that implement its putative mechanisms in the most elementary way possible, and then experimentally test the theoretical predictions of these models. One such mechanism proposed in previous studies is retrograde interference, whereby a memory can be erased by subsequently acquired memories. In the current contribution, we hypothesize that retrograde erasure is controlled by the relevant "importance" measures such that more important memories eliminate less important ones acquired earlier. We show that some versions of the resulting mathematical model are broadly compatible with the previously reported power-law forgetting time course and match well the results of our recognition experiments with long, randomly assembled streams of words.
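One elementary variant of importance-based retrograde erasure can be simulated directly: a memory survives to the end of the stream iff no later item carries higher importance. This is an illustrative instance of the mechanism, not necessarily the exact model of the paper; it yields the power-law survival probability P ≈ 1/(lag+1).

```python
import numpy as np

rng = np.random.default_rng(1)

def survival_by_lag(stream_len=200, n_streams=4000):
    """Minimal importance-based retrograde erasure (illustrative variant):
    an item survives to the end of the stream iff no later item has higher
    importance.  Importances are i.i.d. uniform."""
    imp = rng.random((n_streams, stream_len))
    # running maximum of importance over the items after each position
    later_max = np.maximum.accumulate(imp[:, ::-1], axis=1)[:, ::-1]
    survives = np.empty((n_streams, stream_len), dtype=bool)
    survives[:, -1] = True                     # nothing follows the last item
    survives[:, :-1] = imp[:, :-1] > later_max[:, 1:]
    return survives

surv = survival_by_lag()
p = surv.mean(axis=0)                          # survival probability by position
# an item followed by t later items survives with probability 1/(t+1)
print(f"P(survive) at lag 0, 1, 9: {p[-1]:.3f}, {p[-2]:.3f}, {p[-10]:.3f}")
```

The 1/(t+1) decay follows because, under this rule, an item survives exactly when it is the maximum of itself and the t items that follow it, which happens with probability 1/(t+1) for exchangeable importances.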
Noisy network attractor models for transitions between EEG microstates
The brain is intrinsically organized into large-scale networks that constantly re-organize on multiple timescales, even when the brain is at rest. The timing of these dynamics is crucial for sensation, perception, cognition, and ultimately consciousness, but the underlying dynamics governing the constant reorganization and switching between networks are not yet well understood. Electroencephalogram (EEG) microstates are brief periods of stable scalp topography that have been identified as the electrophysiological correlate of functional magnetic resonance imaging defined resting-state networks. Spatiotemporal microstate sequences maintain high temporal resolution and have been shown to be scale-free with long-range temporal correlations. Previous attempts to model EEG microstate sequences have failed to reproduce this crucial property and so cannot fully capture the dynamics; this paper answers the call for more sophisticated modeling approaches. We present a dynamical model that exhibits a noisy network attractor between nodes that represent the microstates. Using an excitable network between four nodes, we can reproduce the transition probabilities between microstates but not the heavy tailed residence time distributions. We present two extensions to this model: first, an additional hidden node at each state; second, an additional layer that controls the switching frequency in the original network. Introducing either extension to the network gives the flexibility to capture these heavy tails. We compare the model generated sequences to microstate sequences from EEG data collected from healthy subjects at rest. For the first extension, we show that the hidden nodes 'trap' the trajectories allowing the control of residence times at each node. For the second extension, we show that two nodes in the controlling layer are sufficient to model the long residence times.
Finally, we show that in addition to capturing the residence time distributions and transition probabilities of the sequences, these two models capture additional properties of the sequences including having interspersed long and short residence times and long range temporal correlations in line with the data as measured by the Hurst exponent.
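The residence-time aspect of this mechanism can be caricatured with a two-escape-rate mixture: a plain Markov switch gives geometric (light-tailed) residence times, whereas visits that fall into a hidden 'trap' escape more slowly and fatten the tail. The sketch below only models residence times, not which of the four nodes is visited, and all probabilities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def residences(n=20000, q_fast=0.2, q_slow=0.02, p_slow=0.0):
    """Residence times for a switching process between microstate nodes.
    p_slow = 0: plain Markov switching (geometric residences, light tail);
    p_slow > 0: on some visits a hidden 'trap' node holds the trajectory
    with a smaller escape probability, fattening the tail."""
    out = np.empty(n, dtype=int)
    for i in range(n):
        q = q_slow if rng.random() < p_slow else q_fast
        out[i] = rng.geometric(q)              # steps spent before switching
    return out

plain = residences()
trapped = residences(p_slow=0.3)
print(f"mean residence: plain {plain.mean():.1f}, trapped {trapped.mean():.1f}")
print(f"P(residence > 50): plain {(plain > 50).mean():.4f}, "
      f"trapped {(trapped > 50).mean():.4f}")
```

Even this two-rate mixture produces interspersed long and short residence times, the qualitative signature the hidden-node extension is designed to capture; the full model additionally reproduces the transition probabilities and long-range correlations.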
Stability analysis of a neural field self-organizing map
We provide theoretical conditions guaranteeing that a self-organizing map efficiently develops representations of the input space. The study relies on a neural field model of spatiotemporal activity in area 3b of the primary somatosensory cortex. We rely on Lyapunov's theory for neural fields to derive theoretical conditions for stability. We verify the theoretical conditions by numerical experiments. The analysis highlights the key role played by the balance between excitation and inhibition of lateral synaptic coupling and the strength of synaptic gains in the formation and maintenance of self-organizing maps.
Neural field models with transmission delays and diffusion
A neural field models the large scale behaviour of large groups of neurons. We extend previous results for these models by including a diffusion term into the neural field, which models direct, electrical connections. We extend known and prove new sun-star calculus results for delay equations to be able to include diffusion and explicitly characterise the essential spectrum. For a certain class of connectivity functions in the neural field model, we are able to compute its spectral properties and the first Lyapunov coefficient of a Hopf bifurcation. By examining a numerical example, we find that the addition of diffusion suppresses non-synchronised steady-states while favouring synchronised oscillatory modes.
Spatio-chromatic information available from different neural layers via Gaussianization
How much visual information about the retinal images can be extracted from the different layers of the visual pathway? This question depends on the complexity of the visual input, the set of transforms applied to this multivariate input, and the noise of the sensors in the considered layer. Separate subsystems (e.g. opponent channels, spatial filters, nonlinearities of the texture sensors) have been suggested to be organized for optimal information transmission. However, the efficiency of these different layers has not been measured when they operate together on colorimetrically calibrated natural images and using multivariate information-theoretic units over the joint spatio-chromatic array of responses. In this work, we present a statistical tool to address this question in an appropriate (multivariate) way. Specifically, we propose an empirical estimate of the information transmitted by the system based on a recent Gaussianization technique. The total correlation measured using the proposed estimator is consistent with predictions based on the analytical Jacobian of a standard spatio-chromatic model of the retina-cortex pathway. If the noise at a certain representation is proportional to the dynamic range of the response, and one assumes sensors of equivalent noise level, then transmitted information shows the following trends: (1) progressively deeper representations are better in terms of the amount of captured information, (2) the transmitted information up to the cortical representation follows the probability of natural scenes over the chromatic and achromatic dimensions of the stimulus space, (3) the contribution of spatial transforms to capture visual information is substantially greater than the contribution of chromatic transforms, and (4) nonlinearities of the responses contribute substantially to the transmitted information but less than the linear transforms.
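The core idea of Gaussianization-based estimation can be sketched in one step: Gaussianize each marginal and apply the Gaussian total-correlation formula -½ log det R to the transformed data. This single iteration captures only the dependence visible after marginal Gaussianization (a full rotation-based scheme iterates), but it is already exact for data with a Gaussian copula, as in the illustrative lognormal example below.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
inv_cdf = np.vectorize(NormalDist().inv_cdf)

def gaussianize(x):
    """Marginal Gaussianization: empirical CDF followed by the inverse
    standard-normal CDF, applied to each coordinate independently."""
    n = x.shape[0]
    ranks = x.argsort(axis=0).argsort(axis=0) + 1
    return inv_cdf(ranks / (n + 1))

# non-Gaussian test data with known total correlation: a lognormal pair
# (monotone marginal transforms leave total correlation unchanged)
rho = 0.6
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=20000)
x = np.exp(z)
g = gaussianize(x)
T_hat = -0.5 * np.log(np.linalg.det(np.corrcoef(g.T)))   # Gaussian formula
T_true = -0.5 * np.log(1 - rho ** 2)
print(f"estimated total correlation {T_hat:.3f} (analytic {T_true:.3f})")
```

Because total correlation is invariant under invertible marginal maps, the estimate on the lognormal pair should match the analytic value for the underlying Gaussian, -½ log(1-ρ²) ≈ 0.223 nats here.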
Interactions of multiple rhythms in a biophysical network of neurons
Neural oscillations, including rhythms in the beta1 band (12-20 Hz), are important in various cognitive functions. Often neural networks receive rhythmic input at frequencies different from their natural frequency, but very little is known about how such input affects the network's behavior. We use a simplified, yet biophysical, model of a beta1 rhythm that occurs in the parietal cortex, in order to study its response to oscillatory inputs. We demonstrate that a cell can respond simultaneously to two periodic stimuli of unrelated frequencies, firing in phase with one, but with a mean firing rate equal to that of the other. We show that this is a very general phenomenon, independent of the model used. We next show numerically that the behavior of a different cell, which is modeled as a high-dimensional dynamical system, can be described in a surprisingly simple way, owing to a reset that occurs in the state space when the cell fires. The interaction of the two cells leads to novel combinations of properties for neural dynamics, such as mode-locking to an input without phase-locking to it.
A new blind color watermarking based on a psychovisual model
In this paper, we address the problem of the use of a human visual system (HVS) model to improve watermark invisibility. We propose a new color watermarking algorithm based on the minimization of the perception of color differences. This algorithm is based on a psychovisual model of the dynamics of cone photoreceptors. We used this model to determine human discrimination power for a particular color and thus the best strategy to modify color pixels. Results were obtained on a color version of the lattice quantization index modulation (LQIM) method and showed improvements in psychovisual invisibility and robustness against several image distortions.
The geometry of rest-spike bistability
The Morris-Lecar model is arguably the simplest dynamical model that retains both the slow-fast geometry of excitable phase portraits and the physiological interpretation of a conductance-based model. We augment this model with one slow inward current to capture the additional property of bistability between a resting state and a spiking limit cycle for a range of input current. The resulting dynamical system is a core structure for many dynamical phenomena such as slow spiking and bursting. We show how the proposed model combines physiological interpretation and mathematical tractability and we discuss the benefits of the proposed approach with respect to alternative models in the literature.
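For reference, the base Morris-Lecar model can be simulated in a few lines. The sketch below uses standard Hopf-regime parameter values common in the literature and does not include the slow inward current that the paper adds; it only illustrates the two-variable slow-fast structure being augmented.

```python
import numpy as np

# Standard Morris-Lecar equations (Hopf-regime parameters common in the
# literature); the paper's additional slow inward current is NOT included.
C, gL, EL = 20.0, 2.0, -60.0
gCa, ECa, gK, EK = 4.4, 120.0, 8.0, -84.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

def rhs(V, w, I):
    m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))     # instantaneous Ca activation
    w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))     # steady-state K activation
    tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
    dV = (I - gL * (V - EL) - gCa * m_inf * (V - ECa) - gK * w * (V - EK)) / C
    dw = phi * (w_inf - w) / tau_w
    return dV, dw

def simulate(I=120.0, T=1000.0, dt=0.02):
    n = int(T / dt)
    V, w = np.empty(n), np.empty(n)
    V[0], w[0] = -30.0, 0.1
    for i in range(1, n):                          # forward Euler
        dV, dw = rhs(V[i - 1], w[i - 1], I)
        V[i] = V[i - 1] + dt * dV
        w[i] = w[i - 1] + dt * dw
    return V, w

V, w = simulate()
spikes = int(np.sum((V[1:] > 0) & (V[:-1] <= 0)))  # upward 0 mV crossings
print(f"{spikes} spikes in 1000 ms at I = 120")
```

At this applied current the fixed point is unstable and the trajectory settles onto the spiking limit cycle; the paper's slow inward current reshapes this picture into an isola-mediated rest-spike bistability.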
Geometry of color perception. Part 2: perceived colors from real quantum states and Hering's rebit
Inspired by the pioneering work of H.L. Resnikoff, which is described in full detail in the first part of this two-part paper, we give a quantum description of the space [Formula: see text] of perceived colors. We show that [Formula: see text] is the effect space of a rebit, a real quantum qubit, whose state space is isometric to Klein's hyperbolic disk. This chromatic state space of perceived colors can be represented as a Bloch disk of real dimension 2 that coincides with Hering's disk given by the color opponency mechanism. Attributes of perceived colors, hue and saturation, are defined in terms of Von Neumann entropy.
Attractor-state itinerancy in neural circuits with synaptic depression
Neural populations with strong excitatory recurrent connections can support bistable states in their mean firing rates. Multiple fixed points in a network of such bistable units can be used to model memory retrieval and pattern separation. The stability of fixed points may change on a slower timescale than that of the dynamics due to short-term synaptic depression, leading to transitions between quasi-stable point attractor states in a sequence that depends on the history of stimuli. To better understand these behaviors, we study a minimal model, which characterizes multiple fixed points and transitions between them in response to stimuli with diverse time- and amplitude-dependencies. The interplay between the fast dynamics of firing rate and synaptic responses and the slower timescale of synaptic depression makes the neural activity sensitive to the amplitude and duration of square-pulse stimuli in a nontrivial, history-dependent manner. Weak cross-couplings further deform the basins of attraction for different fixed points into intricate shapes. We find that while short-term synaptic depression can reduce the total number of stable fixed points in a network, it tends to strongly increase the number of fixed points visited upon repetitions of fixed stimuli. Our analysis provides a natural explanation for the system's rich responses to stimuli of different durations and amplitudes while demonstrating the encoding capability of bistable neural populations for dynamical features of incoming stimuli.
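A minimal single-unit instance of this itinerancy can be written down directly: a self-exciting rate unit with threshold-linear gain and Tsodyks-style depression, in which a square pulse ignites the quasi-stable active state and depression then extinguishes it. All parameters below are illustrative choices tuned so that no persistent active fixed point survives depression, not values from the paper.

```python
import numpy as np

def simulate(T=1.5, dt=0.0005, tau_u=0.01, tau_d=0.5, U=6.0, w=2.0, theta=0.3):
    """One self-exciting rate unit with short-term synaptic depression
    (threshold-linear gain; all parameters illustrative).  A brief square
    pulse ignites the active state, which depression then extinguishes."""
    n = int(T / dt)
    t = np.arange(n) * dt
    u, s = np.zeros(n), np.ones(n)
    I = np.where((t >= 0.1) & (t < 0.15), 0.6, 0.0)       # 50 ms stimulus
    f = lambda x: np.clip(x - theta, 0.0, 1.0)
    for i in range(1, n):
        drive = w * s[i - 1] * u[i - 1] + I[i - 1]
        u[i] = u[i - 1] + dt * (-u[i - 1] + f(drive)) / tau_u
        s[i] = s[i - 1] + dt * ((1 - s[i - 1]) / tau_d - U * s[i - 1] * u[i - 1])
    return t, u, s

t, u, s = simulate()
print(f"u after pulse: {u[np.searchsorted(t, 0.16)]:.2f}; "
      f"u at t = 1 s: {u[np.searchsorted(t, 1.0)]:.2f}; final s: {s[-1]:.2f}")
```

Because depression (fast relative to the inter-stimulus interval here) pulls the recurrent gain w*s below the value needed to sustain the active branch, the active state is only quasi-stable, the single-unit analogue of the history-dependent transitions between attractors described above.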
Synchronization and resilience in the Kuramoto white matter network model with adaptive state-dependent delays
White matter pathways form a complex network of myelinated axons that regulate signal transmission in the nervous system and play a key role in behaviour and cognition. Recent evidence reveals that white matter networks are adaptive and that myelin remodels itself in an activity-dependent way, during both developmental stages and later on through behaviour and learning. As a result, axonal conduction delays continuously adjust in order to regulate the timing of neural signals propagating between different brain areas. This delay plasticity mechanism has yet to be integrated into computational neural models, where conduction delays are oftentimes constant or simply ignored. As a first approach to adaptive white matter remodeling, we modified the canonical Kuramoto model by endowing all connections with adaptive, phase-dependent delays. We analyzed the equilibria and stability of this system, and applied our results to two-oscillator and large-dimensional networks. Our joint mathematical and numerical analysis demonstrates that plastic delays act as a stabilizing mechanism promoting the network's ability to maintain synchronous activity. Our work also shows that global synchronization is more resilient to perturbations of, and injury to, the network architecture. Our results provide key insights about the analysis and potential significance of activity-dependent myelination in large-scale brain synchrony.
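The fixed-delay baseline for such a study can be sketched with two delay-coupled Kuramoto oscillators integrated by Euler's method with a history buffer; the adaptive, phase-dependent delay dynamics that the paper introduces are not modelled here, and the frequency, coupling, and delay values are illustrative.

```python
import numpy as np

def simulate(omega=1.0, K=0.5, tau=0.1, T=30.0, dt=0.001):
    """Two delay-coupled Kuramoto oscillators with a constant transmission
    delay: theta_i' = omega + K*sin(theta_j(t - tau) - theta_i(t)).  This is
    only a fixed-delay baseline; the paper's adaptive, phase-dependent
    delays are not included."""
    n = int(T / dt)
    d = int(tau / dt)
    theta = np.zeros((n, 2))
    theta[0] = [0.0, 1.0]                    # start 1 rad out of phase
    for i in range(1, n):
        lagged = theta[max(i - 1 - d, 0)]    # delayed phases (constant history)
        coupling = K * np.sin(lagged[::-1] - theta[i - 1])
        theta[i] = theta[i - 1] + dt * (omega + coupling)
    return theta

theta = simulate()
diff = np.angle(np.exp(1j * (theta[:, 0] - theta[:, 1])))  # wrapped difference
print(f"phase difference: {diff[0]:.2f} rad initially, "
      f"{abs(diff[-1]):.1e} rad at the end")
```

For this small delay the in-phase state is stable and the oscillators lock at a delay-shifted frequency Ω solving Ω = ω - K sin(Ωτ); the paper's question is how making τ itself a dynamical, phase-dependent variable enlarges the basin of such synchronous states.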
Neurally plausible mechanisms for learning selective and invariant representations
Coding for visual stimuli in the ventral stream is known to be invariant to object identity preserving nuisance transformations. Indeed, much recent theoretical and experimental work suggests that the main challenge for the visual cortex is to build up such nuisance invariant representations. Recently, artificial convolutional networks have succeeded in both learning such invariant properties and, surprisingly, predicting cortical responses in macaque and mouse visual cortex with unprecedented accuracy. However, some of the key ingredients that enable such success, namely supervised learning and the backpropagation algorithm, are neurally implausible. This makes it difficult to relate advances in understanding convolutional networks to the brain. In contrast, many of the existing neurally plausible theories of invariant representations in the brain involve unsupervised learning, and have been strongly tied to specific plasticity rules. To close this gap, we study an instantiation of a simple-complex cell model and show, for a broad class of unsupervised learning rules (including Hebbian learning), that we can learn object representations that are invariant to nuisance transformations belonging to a finite orthogonal group. These findings may have implications for developing neurally plausible theories and models of how the visual cortex or artificial neural networks build selectivity for discriminating objects and invariance to real-world nuisance transformations.