A systematic review of cross-patient approaches for EEG epileptic seizure prediction
Objective: Seizure prediction could greatly improve the quality of life of people suffering from epilepsy. Modern prediction systems leverage artificial intelligence (AI) techniques to automatically analyze neurophysiological data, most commonly the electroencephalogram (EEG), in order to anticipate upcoming epileptic events. However, the performance of these systems is normally assessed using randomized splitting methods, which can suffer from data leakage and thus result in overly optimistic evaluations. In this review, we systematically surveyed the scientific literature for research approaches that adopted more stringent assessment methods based on patient-independent testing. Approach: We queried three scientific databases (PubMed, Scopus, and Web of Science), focusing on AI techniques based on non-invasive EEG recorded from human subjects. We first summarize a standardized signal processing pipeline that could be deployed for the development and testing of cross-patient seizure prediction systems. We then analyze the research work that meets our selection criteria. Main results: 21 articles adopted patient-independent validation methods, constituting only 4% of the published work in the entire field of epileptic seizure prediction. Among eligible articles, the most common approach to deal with cross-patient scenarios was based on source domain adaptation techniques, which allow the predictive model to be fine-tuned on a limited set of data recorded from independent target patients. Significance: Overall, our review indicates that epileptic seizure prediction remains an extremely challenging problem, and significant research efforts are still needed to develop automated systems that can be deployed in realistic clinical settings.
Our review protocol is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines for conducting systematic reviews; it considers the NHLBI and ROBIS tools to mitigate the risk of bias, and it was pre-registered in PROSPERO (registration number: CRD4202452317).
Stimulation artefact removal: review and evaluation of applications in evoked responses
This study investigated software methods for removing stimulation artefacts in recordings undertaken during deep brain stimulation (DBS). We aimed to evaluate artefact attenuation using sample recordings of evoked resonant neural activity (ERNA), as well as a synthetic ground-truth waveform that emulated observed ERNA characteristics.
Approach.
The synthetic waveform and eight raw DBS recordings were processed by fourteen algorithms spanning the following categories: signal modification, signal decomposition, and template subtraction. For the synthetic waveform, performance was quantified by comparing each reconstructed signal against the ground-truth waveform. For DBS recordings, the methods' outputs were contrasted with one another. The stimulation artefact was quantified by its amplitude, and its subsequent decay to baseline by the time to first zero-crossing. Each reconstructed ERNA signal was characterised by peak-to-peak amplitude, root-mean-square amplitude, latency, and number of zero-crossings.
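Of the three categories, template subtraction is the most direct to sketch: average the epochs around each stimulation pulse into a template, then subtract that template at every pulse. The function below is a minimal illustration under simplifying assumptions (known onsets, fixed artefact length, no jitter), not any of the fourteen evaluated implementations.

```python
import numpy as np

def template_subtract(sig, stim_onsets, artefact_len):
    """Remove stimulation artefacts by subtracting the mean artefact template.

    sig: 1-D recording; stim_onsets: sample indices of each pulse;
    artefact_len: number of samples the artefact spans after each onset.
    """
    # Build the template as the average epoch across all stimulation pulses
    epochs = np.stack([sig[i:i + artefact_len] for i in stim_onsets])
    template = epochs.mean(axis=0)
    cleaned = sig.copy()
    for i in stim_onsets:
        cleaned[i:i + artefact_len] -= template
    return cleaned
```

With perfectly repeatable artefacts the residual is zero; in practice, pulse-to-pulse artefact variability limits how much a single template can remove, which is one reason multiple method categories were compared.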
Main results.
None of the other methods performed as well overall as the Backward Filter. Signal decomposition techniques were able to attenuate the stimulation artefact, albeit with unacceptable ERNA distortion.
Significance.
Upon evaluation of common software methods for DBS artefact attenuation, we advocate the use of the Backward Filter for reducing such artefacts while reconstructing ERNA.
When neuromodulation met control theory
The brain is a highly complex physical system made of assemblies of neurons that work together to accomplish elaborate tasks such as motor control, memory, and perception. How these parts work together has been studied for decades by neuroscientists using neuroimaging, psychological manipulations, and neurostimulation. Neurostimulation has gained particular interest, given the possibility of perturbing the brain and eliciting a specific response. This response depends on different parameters such as the intensity, the location, and the timing of the stimulation. However, most of the studies performed so far used previously established protocols without considering the ongoing brain activity and, thus, without adaptively targeting the stimulation. In control theory, this approach is called open-loop control, and it is contrasted with a different form of control, called closed-loop control, in which the current activity of the brain is used to determine the next stimulation. Recently, neuroscientists have begun to shift from classical fixed neuromodulation studies to closed-loop experiments. This new approach allows brain activity to be controlled based on responses to stimulation and thus treatment to be personalized to the individual in clinical conditions. Here, we review this new approach by introducing control theory and focusing on how these aspects are applied in brain studies. We also present the different stimulation techniques and the control approaches used to steer the brain. Finally, we explore how the closed-loop framework will revolutionize the way the human brain can be studied, including a discussion of open questions and an outlook on future advances.
Electroencephalographic power ratio and peak frequency difference associate with central sensitization in chronic pain
Central sensitization, or increased responsiveness of the central nervous system to sensory input, is present in many chronic pain patients. Clinically, it is detected through subjective, patient-reported measures. There is a need for reliable, direct measurements of neural response to controlled stimuli to quantify neuronal dysfunction in pain. The goal of this work is to investigate cortical activity, recorded via electroencephalogram (EEG), during objective and calibrated painful stimulation in chronic pain patients.
Approach. Chronic pain patients (N=8) and healthy controls (N=8) participated in this study. We recorded EEG at rest (baseline) and during evoked pain tasks, including thermal and mechanical stimuli. The evoked pain was applied following the quantitative sensory testing (QST) protocol, a research technique that applies objective, calibrated painful stimuli.
Main results. Peak alpha frequency at rest was significantly lower in chronic pain patients compared to healthy controls (p<0.0002), while EEG alpha/theta and alpha/beta power ratios at rest were higher in patients (p<0.0002). During thermal QST, these power ratios decreased in patients and increased in controls (p<0.0002 for both). During mechanical QST, power ratios decreased or did not change. Furthermore, the peak theta-beta frequency difference at baseline was significantly lower in patients compared to controls (p<0.0002). During thermal QST, this difference increased in patients and decreased in controls; during mechanical QST, this difference increased in both patients and controls (p<0.0002). Functional connectivity analysis showed that controls had greater baseline theta connectivity strength that increased during mechanical QST (p<0.0002).
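For readers unfamiliar with these spectral measures, peak alpha frequency and the band-power ratios can be sketched from a single-channel spectrum. This is an illustrative computation (simple periodogram, conventional band edges of 4-8, 8-13, and 13-30 Hz), not the authors' analysis pipeline.

```python
import numpy as np

def band_power(freqs, psd, lo, hi):
    """Total power in the band [lo, hi) Hz (bin sum times bin width)."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

def eeg_features(x, fs):
    """Peak alpha frequency and alpha/theta, alpha/beta power ratios
    from a single-channel EEG segment x sampled at fs Hz."""
    # Simple periodogram; a real pipeline would use Welch averaging
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    alpha = (freqs >= 8) & (freqs < 13)
    paf = freqs[alpha][np.argmax(psd[alpha])]   # peak alpha frequency
    p_theta = band_power(freqs, psd, 4, 8)
    p_alpha = band_power(freqs, psd, 8, 13)
    p_beta = band_power(freqs, psd, 13, 30)
    return paf, p_alpha / p_theta, p_alpha / p_beta
```

A lower peak alpha frequency or a higher alpha/theta ratio, as reported for the patient group, would show up directly in these three returned values.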
Significance. This work demonstrates differential patterns of EEG activity at rest and during acute painful stimulation in chronic pain patients compared to healthy controls. These measures may quantify an individual's tendency to experience chronic pain and central sensitization and serve as diagnostic biomarkers.
Model-agnostic meta-learning for EEG-based inter-subject emotion recognition
Developing an efficient and generalizable method for inter-subject emotion recognition from neural signals is an emerging and challenging problem in affective computing. In particular, human subjects usually have heterogeneous neural signal characteristics and variable emotional activities that prevent existing recognition algorithms from achieving high inter-subject emotion recognition accuracy.
Approach.
In this work, we propose a model-agnostic meta-learning algorithm to learn an adaptable and generalizable electroencephalogram (EEG)-based emotion decoder at the subject-population level. Different from many prior end-to-end emotion recognition algorithms, our learning algorithm includes a pre-training step and an adaptation step. Specifically, our meta-decoder first learns on diverse known subjects and then adapts to unknown subjects with one-shot adaptation. More importantly, our algorithm is compatible with a variety of mainstream machine learning decoders for emotion recognition.
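The pre-train-then-adapt recipe can be sketched with a first-order meta-learning variant on toy linear decoders. The authors' method is model-agnostic and applies to richer decoders; the function names, task construction, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def maml_pretrain(tasks, steps=200, inner_lr=0.1, meta_lr=0.05, seed=None):
    """First-order meta-learning on toy linear regression tasks.

    Each task is an (X, y) pair standing in for one known subject. Per
    iteration: take one inner gradient step on a sampled task, then move
    the meta-initialisation along the adapted weights' gradient
    (first-order approximation of MAML's outer update).
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(tasks[0][0].shape[1])
    for _ in range(steps):
        X, y = tasks[rng.integers(len(tasks))]
        g = 2 * X.T @ (X @ w - y) / len(y)          # inner-loop gradient
        w_task = w - inner_lr * g                    # adapt to the "subject"
        g_adapt = 2 * X.T @ (X @ w_task - y) / len(y)
        w -= meta_lr * g_adapt                       # outer (meta) update
    return w

def adapt(w, X, y, lr=0.1, shots=1):
    """One-shot (or few-shot) adaptation on a new subject's data."""
    for _ in range(shots):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w
```

The key property is that the meta-initialisation sits close to every subject's optimum, so a single gradient step on a new subject's data already reduces error.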
Main results.
We evaluate the adapted decoders obtained by our proposed algorithm on three Emotion-EEG datasets: SEED, DEAP, and DREAMER. Our comprehensive experimental results show that the adapted meta-emotion decoder achieves state-of-the-art inter-subject emotion recognition accuracy and outperforms the classical supervised learning baseline across different decoder architectures.
Significance.
Our results hold promise to incorporate the proposed meta-learning emotion recognition algorithm to effectively improve the inter-subject generalizability in designing future affective brain-computer interfaces (BCIs).
The 'Sandwich' meta-framework for architecture agnostic deep privacy-preserving transfer learning for non-invasive brainwave decoding
Machine learning has enhanced the performance of decoding signals indicating human behaviour. EEG decoding, which indicates neural activity and human thoughts non-invasively, has aided patients through brain-computer interfaces and neural activity analysis. However, training machine learning algorithms on EEG encounters two primary challenges: variability across data sets and privacy concerns when using data from individuals and data centres. Our objective is to address these challenges by integrating transfer learning for data variability and federated learning for data privacy into a unified approach.
We introduce the Sandwich as a novel deep privacy-preserving meta-framework combining transfer learning and federated learning. The Sandwich framework comprises three components: federated networks (first layers) that handle data set differences at the input level, a shared network (middle layer) learning common rules and applying transfer learning, and individual classifiers (final layers) for specific tasks of each data set. It enables the central network (central server) to benefit from multiple data sets, while local branches (local servers) maintain data and label privacy.
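Structurally, the three components can be sketched as follows. This is a shape-level illustration only (random weights, no training loop), and all class and function names are assumed for the example rather than taken from the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

class SandwichBranch:
    """One local branch: a private input layer and a private classifier
    head wrapped around a shared middle layer (the 'filling')."""

    def __init__(self, in_dim, mid_dim, n_classes, rng):
        self.W_in = 0.1 * rng.standard_normal((in_dim, mid_dim))      # private
        self.W_out = 0.1 * rng.standard_normal((mid_dim, n_classes))  # private

    def forward(self, X, W_shared):
        h = relu(X @ self.W_in)   # dataset-specific alignment of inputs
        h = relu(h @ W_shared)    # shared representation (transfer learning)
        return h @ self.W_out     # dataset-specific classification head

def fedavg(shared_updates):
    """Central server aggregates only the shared middle-layer weights;
    private layers and raw data never leave the local servers."""
    return np.mean(shared_updates, axis=0)
```

Only `W_shared` would be exchanged with the central server during federated training; each branch's `W_in`, `W_out`, data, and labels stay local, which is how branches with different channel counts or label sets can still share one middle layer.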
We evaluated the 'Sandwich' meta-architecture in various configurations using the BEETL motor imagery challenge, a benchmark for heterogeneous EEG data sets. Compared with baseline models, our 'Sandwich' implementations showed superior performance. The best-performing model, the Inception Sandwich with deep set alignment (Inception-SD-Deepset), exceeded baseline methods by 9%. The 'Sandwich' framework demonstrates significant advancements in federated deep transfer learning for diverse tasks and data sets. It outperforms conventional deep learning methods, showcasing the potential for effective use of larger, heterogeneous data sets with enhanced privacy as a model-agnostic meta-framework.
EKFNet: Edge-based Kalman filter network for real-time EEG signal denoising
Signal denoising methods based on deep learning have been extensively adopted for electroencephalogram (EEG) devices. However, they cannot be deployed on edge-based portable or wearable (P/W) electronics due to the high computational complexity of existing models. To overcome this issue, we propose an edge-based lightweight Kalman filter network (EKFNet) that does not require manual estimation of prior knowledge.
Approach: Specifically, we construct a multi-scale feature fusion (MSFF) module to capture multi-scale feature information and implicitly compute the prior knowledge. Meanwhile, we design an adaptive gain estimation (AGE) module that incorporates long short-term memory (LSTM) and sequential channel attention module (CAM) to dynamically predict the Kalman gain. Furthermore, we present an optimization strategy utilizing operator fusion and constant folding to reduce the model's computational overhead and memory footprint.
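The core idea, replacing the analytically derived Kalman gain with a learned, data-driven one, can be sketched with a toy recurrent gain estimator. This is a stand-in for the LSTM/CAM module: the recurrence is a single tanh unit, and the parameters are hand-set rather than trained.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def learned_gain_filter(y, w_h=0.5, w_x=1.0, b=-1.0):
    """Denoise a 1-D sequence with a Kalman-style update whose gain is
    produced by a tiny recurrent unit instead of being derived from
    process/measurement noise covariances.
    """
    x, h = 0.0, 0.0
    out = []
    for yt in y:
        innov = yt - x                      # innovation (residual)
        h = np.tanh(w_h * h + w_x * innov)  # recurrent hidden state
        k = sigmoid(h + b)                  # data-driven gain in (0, 1)
        x = x + k * innov                   # standard Kalman-form update
        out.append(x)
    return np.array(out)
```

The update `x + k * innov` is exactly the Kalman correction step; what changes is that `k` is predicted from the signal itself, which is what removes the manual prior-knowledge estimation the abstract refers to.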
Main results: Experimental results show that EKFNet reduces the sum of squared distances by at least 12% and improves the cosine similarity by at least 2.2% over state-of-the-art methods. In addition, the model optimization shortens inference time by approximately 3.3×. The code of our EKFNet is available at https://github.com/cathnat/EKFNet.
Significance: By integrating the Kalman filter with deep learning, the approach addresses the parameter-setting challenges of traditional algorithms while reducing computational overhead and memory consumption, exhibiting a good tradeoff between algorithm performance and computing power.
Synthetic conduits efficacy in neural repair: a comparative study of dip-coated polycaprolactone and electrospun polycaprolactone/polyurethane conduits
Peripheral nerve injuries represent the most common type of nervous system injury, resulting in 5 million injuries per year. Autografts, the current gold standard, still carry several limitations, including inappropriate type, size, and function matches in grafted nerves, lack of autologous donor sites, neuroma formation, and secondary surgical incisions. Polymeric nerve conduits, also known as nerve guides, can help overcome the aforementioned issues that limit nerve recovery and regeneration by reducing tissue fibrosis, misdirection of regenerating axons, and the inability to maintain long-distance axonal growth. Polymer-based double-walled microspheres (DWMS) are designed to deliver bioactive agents locally and in a sustained fashion. Lysozyme is a natural antimicrobial protein that shares similar physical and chemical properties with glial cell line-derived neurotrophic factor (GDNF), making it an ideal surrogate molecule to evaluate the release kinetics of an encapsulated bioagent from polymeric biodegradable microspheres embedded in polycaprolactone and polycaprolactone/polyurethane blend nerve conduits.
Approach: Lysozyme was encapsulated in poly(lactic-co-glycolic acid)/poly(L-lactide) (PLGA/PLLA) double-walled microspheres fabricated through a modified water-oil-water emulsion solvent evaporation method. Lysozyme-loaded DWMS were further embedded in PCL and PCL-PU based nerve guides constructed via polymer dip-coating and electrospinning method respectively. Lysozyme DWMS and nerve guides were imaged using scanning electron microscopy (SEM). Released lysozyme concentration was determined by using a colorimetric micro-BCA protein assay and spectrophotometric quantitation. Tensile and suture pull-out tests were utilized to evaluate the mechanical properties of both dip-coated and electrospun nerve guides, embedded and free of lysozyme DWMS.
Main Results: The study revealed significant distinctions in the lysozyme release profiles, and mechanical properties of the manufactured polymer nerve guides. Both PCL dip-coated and PCL/PU electrospun DWMS-embedded nerve guides revealed biphasic protein release profiles. PCL/PU electrospun and PCL dip-coated nerve guides released 16% and 29% of the total protein concentration within 72 hours, plateauing at week 16 and week 8, respectively. SEM analysis of the nerve guides confirmed the homogeneity and integrity of the polymer nerve guides' structures. The electrospun guides were found to be more flexible with a higher extension under stress bending, while the dip-coated PCL nerve guides displayed more rigid behavior.
Significance: This study provides useful insights on how to optimize nerve guide design and fabrication to enhance recovery progress of peripheral nerve injuries.
Nanoparticle targeting strategies for traumatic brain injury
Nanoparticle (NP)-based drug delivery systems hold immense potential for targeted therapy and diagnosis of neurological disorders, overcoming the limitations of conventional treatment modalities. This review explores the design considerations and functionalization strategies of NPs for precise targeting of the brain and central nervous system. It discusses the challenges associated with drug delivery to the brain, including the blood-brain barrier and the complex heterogeneity of traumatic brain injury. We also examine the physicochemical properties of NPs, emphasizing the role of size, shape, and surface characteristics in their interactions with biological barriers and cellular uptake mechanisms. The review concludes by exploring options for targeting ligands designed to augment NP affinity and retention in specific brain regions or cell types. Various targeting ligands are discussed for their ability to mimic receptor-ligand interactions and brain-specific extracellular matrix components. Strategies that mimic viral mechanisms to increase uptake are discussed. Finally, antibodies, antibody fragments, and antibody-mimicking peptides are discussed as promising targeting strategies. By integrating insights from these scientific fields, this review provides an understanding of NP-based targeting strategies for personalized medicine approaches to neurological disorders. The design considerations discussed here pave the way for the development of NP platforms with enhanced therapeutic efficacy and minimized off-target effects, ultimately advancing the field of neural engineering.
Attention demands modulate brain electrical microstates and mental fatigue induced by simulated flight tasks
Prolonged engagement in tasks with varying attention demands is thought to elicit distinct forms of mental fatigue, potentially indicating variations in neural activity. This study aimed to investigate the association between mental fatigue and changes in electroencephalogram (EEG) microstate dynamics during tasks with varying attention demands.
A 10-year journey towards clinical translation of an implantable endovascular BCI: a keynote lecture given at the BCI Society meeting in Brussels
In the rapidly evolving field of brain-computer interfaces (BCIs), a novel modality for recording electrical brain signals has quietly emerged over the past decade. The technology is endovascular electrocorticography, an innovation that stands alongside well-established methods such as electroencephalography (EEG), traditional electrocorticography (ECoG), and single/multi-unit activity recording. This system was inspired by advancements in interventional cardiology, particularly the integration of electronics into various medical interventions. This breakthrough led to the development of the Stentrode system, which employs stent-mounted electrodes to record electrical brain activity for applications in a motor neuroprosthesis. This Perspective explores four key areas in our quest to bring the Stentrode BCI to market: the critical patient need for autonomy driving our efforts, the hurdles and achievements in assessing BCI performance, the compelling advantages of our unique endovascular approach, and the essential steps for clinical translation and product commercialization.
Identification of autism spectrum disorder using electroencephalography and machine learning: a review
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by communication barriers, societal disengagement, and monotonous actions. Traditional diagnostic methods for ASD rely on clinical observations and behavioural assessments, which are time-consuming. In recent years, researchers have focused mainly on the early diagnosis of ASD due to the unavailability of recognised causes and the lack of permanent curative solutions. Electroencephalography (EEG) research in ASD offers insight into the neural dynamics of affected individuals. This comprehensive review examines the unique integration of EEG, machine learning, and statistical analysis for ASD identification, highlighting the promise of an interdisciplinary approach for enhancing diagnostic precision. The comparative analysis of publicly available EEG datasets for ASD, along with local data acquisition methods and their technicalities, is presented in this paper. This study also compares preprocessing techniques and feature extraction methods, followed by classification models and statistical analysis, which are discussed in detail. In addition, it briefly touches upon comparisons with other modalities to contextualize the extensiveness of ASD research. Moreover, by outlining research gaps and future directions, this work aims to catalyse further exploration in the field, with the main goal of facilitating more efficient and effective early identification methods that may be helpful to the lives of individuals with ASD.
Enhancing neuroprosthesis calibration: the advantage of integrating prior training over exclusive use of new data
Neuroprostheses typically operate under supervised learning, in which a machine-learning algorithm is trained to correlate neural or myoelectric activity with an individual's motor intent. Due to the stochastic nature of neuromyoelectric signals, algorithm performance decays over time. This decay is accelerated when attempting to regress proportional control of multiple joints in parallel, compared with the more typical classification-based pattern recognition control. To overcome this degradation, neuroprostheses and commercial myoelectric prostheses are often recalibrated and retrained frequently so that only the most recent, up-to-date data influences the algorithm performance. Here, we introduce and validate an alternative training paradigm in which training data from past calibrations is aggregated and reused in future calibrations for regression control. Using a cohort of four transradial amputees implanted with intramuscular electromyographic recording leads, we demonstrate that aggregating prior datasets improves prosthetic regression-based control in offline analyses and an online human-in-the-loop task. In offline analyses, we compared the performance of a convolutional neural network (CNN) and a modified Kalman filter (MKF) to simultaneously regress the kinematics of an eight-degree-of-freedom prosthesis. Both algorithms were trained under the traditional paradigm using a single dataset, as well as under the new paradigm using aggregated datasets from the past five or ten trainings. Dataset aggregation reduced the root-mean-squared error (RMSE) of algorithm estimates for both the CNN and MKF, although the CNN saw a greater reduction in error. Further offline analyses revealed that dataset aggregation improved CNN robustness when reusing the same algorithm on subsequent test days, as indicated by a smaller increase in RMSE per day.
Finally, data from an online virtual-target-touching task with one amputee showed significantly better real-time prosthetic control when using aggregated training data from just two prior datasets. Altogether, these results demonstrate that training data from past calibrations should not be discarded but, rather, should be reused in an aggregated training dataset such that the increased amount and diversity of data improve algorithm performance. More broadly, this work supports a paradigm shift for the field of neuroprostheses away from daily data recalibration for linear classification models and towards daily data aggregation for non-linear regression models.
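The contrast between the two training paradigms can be sketched with a ridge-regression decoder standing in for the CNN/MKF; dataset shapes and function names here are illustrative, not the study's implementation.

```python
import numpy as np

def ridge_fit(X, y, lam=1e-3):
    """Closed-form ridge regression decoder (stand-in for the CNN/MKF)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def fit_latest(calibrations):
    """Traditional paradigm: train only on the most recent calibration."""
    X, y = calibrations[-1]
    return ridge_fit(X, y)

def fit_aggregated(calibrations, k=5):
    """Aggregation paradigm: pool the last k calibration datasets."""
    X = np.vstack([c[0] for c in calibrations[-k:]])
    y = np.concatenate([c[1] for c in calibrations[-k:]])
    return ridge_fit(X, y)
```

On synthetic calibrations drawn from a stable underlying mapping, pooling past datasets gives a better-conditioned fit than the latest calibration alone, which is the intuition behind the offline RMSE reduction.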
Brain-computer interfaces patient preferences: a systematic review
Background
Brain-computer interfaces (BCIs) have the potential to restore motor capabilities and functional independence in individuals with motor impairments. Despite accelerating advances in the performance of various implanted devices, few studies have identified patient preferences underlying device design, and moreover, each study has typically captured a single aetiology of motor impairment. We aimed to characterise BCI patient preferences in a large patient cohort across multiple aetiologies.
Methods
We performed a systematic review of all published studies reporting patient preferences for BCI devices. We searched MEDLINE, Embase, and CINAHL from inception to April 18th, 2023. We included any study reporting either qualitative or quantitative preferences concerning BCI devices. Article screening and data extraction were performed by two reviewers in duplicate. Extracted information included demographic information, current digital device use, device invasiveness preference, device design preferences, and device functional preferences.
Findings
Our search identified 1316 articles, of which 28 studies were eligible for inclusion. Preference information was captured from 1701 patients (mean ages ranging from 42.1 to 64.3 years across studies). Amyotrophic lateral sclerosis was the most represented clinical condition (n = 15 studies, 53.6%), followed by spinal cord injury (n = 13 studies, 46.4%). We found that individuals with motor impairment prioritise device accuracy over other device design characteristics. We also found that the speed and accuracy of BCI systems in recent publications exceed reported patient preferences; however, this performance has been achieved with a level of training and setup burden that would not be tolerated by most patients. When comparing populations across studies, we found that patient preferences vary according to both disease aetiology and the severity of motor impairment.
Interpretation
Our findings support a greater research emphasis on minimising BCI setup and training burden, and they suggest future BCI devices may require bespoke configuration and training for specific patient groups.
Frequency-dependent phase entrainment of cortical cell types during tACS: computational modeling evidence
Transcranial alternating current stimulation (tACS) enables non-invasive modulation of brain activity, holding promise for clinical and research applications. Yet, it remains unclear how the stimulation frequency differentially impacts various neuron types.
Here, we aimed to quantify the frequency-dependent behavior of key neocortical cell types.
An audiovisual cognitive optimization strategy guided by salient object ranking for intelligent visual prosthesis systems
Visual prostheses are effective tools for restoring vision, yet real-world complexities pose ongoing challenges. The progress in AI has led to the emergence of the concept of intelligent visual prosthetics with auditory support, leveraging deep learning to create practical artificial vision perception beyond merely restoring natural sight for the blind. This study introduces an object-based attention mechanism that simulates the human gaze points formed when observing the external world and links them to descriptions of physical regions. By transforming this mechanism into a ranking problem over salient entity regions, we introduce prior visual attention cues to build a new salient object ranking (SaOR) dataset, and propose a SaOR network aimed at providing depth perception for prosthetic vision. Furthermore, we propose a SaOR-guided image description method to align with human observation patterns, providing additional visual information through auditory feedback. Finally, the integration of the two aforementioned algorithms constitutes an audiovisual cognitive optimization strategy for prosthetic vision. Through psychophysical experiments based on scene description tasks under simulated prosthetic vision, we verify that the SaOR method improves the subjects' performance in terms of object identification and understanding of the correlations among objects. Additionally, the cognitive optimization strategy incorporating image description further enhances their prosthetic visual cognition. This offers valuable technical insights for designing next-generation intelligent visual prostheses and establishes a theoretical groundwork for developing their visual information processing strategies. Code will be made publicly available.
Multi-layer ear-scalp distillation framework for ear-EEG classification enhancement
Ear-electroencephalography (ear-EEG) holds significant promise as a practical tool in brain-computer interfaces (BCIs) due to its enhanced unobtrusiveness, comfort, and mobility in comparison to traditional steady-state visual evoked potential (SSVEP)-based BCI systems. However, achieving accurate SSVEP classification with ear-EEG faces a major challenge due to the significant attenuation and distorted amplitude of the signal. Our aim is to enhance the classification performance of SSVEP using ear-EEG and augment its practical application value. To address this challenge, we focus on enhancing ear-EEG feature representations by training the model to learn feature representations similar to those of scalp-EEG. We introduce a novel framework, termed multi-layer ear-scalp distillation (MESD), designed to optimize SSVEP target classification in ear-EEG data. This framework combines signals from the scalp area to obtain multi-layer distilled knowledge through the cooperation of mid-layer feature distillation and output-layer response distillation. We improve the classification of the shorter first 1 s of data, achieving a maximum classification accuracy of 75.7%. We evaluate the proposed MESD framework through single-session, cross-session, and cross-subject transfer decoding, comparing it with baseline methods. The results demonstrate that the proposed framework achieves the best classification results in all experiments. Our study enhances the classification accuracy of SSVEP based on ear-EEG within a short time window. These results offer insights for the application of ear-EEG brain-computer interfaces in tasks including auxiliary control and rehabilitation training in forthcoming endeavors.
SSVEP modulation via non-volitional neurofeedback: An in silico proof of concept
Objective. Neuronal oscillatory patterns are believed to underpin multiple cognitive mechanisms. Accordingly, compromised oscillatory dynamics were shown to be associated with neuropsychiatric conditions. Therefore, the possibility of modulating, or controlling, oscillatory components of brain activity as a therapeutic approach has emerged.
Typical non-invasive brain-computer interfaces (BCI) based on EEG have been used to decode volitional motor brain signals for interaction with external devices. Here, we aimed at feedback through visual stimulation, which returns directly to the visual cortex.
Approach. Our architecture permits the implementation of feedback control loops capable of controlling, or at least modulating, visual cortical activity. As this type of neurofeedback depends on early visual cortical activity, mainly driven by external stimulation, it is called non-volitional or implicit neurofeedback. Because retino-cortical delays of 40-100 ms in the feedback loop severely degrade controller performance, we implemented a predictive control system, called a Smith predictor (SP) controller, which compensates for fixed delays in the control loop by building an internal model of the system to be controlled, in this case the EEG response to stimuli in the visual cortex.
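The delay-compensation principle can be sketched on a toy first-order plant. The structure (a delay-free internal model plus a correction for model-measurement mismatch) is the standard Smith predictor; the plant dynamics, gains, and delay below are illustrative choices, not this study's SSVEP models.

```python
import numpy as np

def smith_predictor_sim(a=0.8, b=0.2, d=5, kp=2.0, ref=1.0, T=200):
    """Closed-loop simulation of a Smith predictor on a first-order plant
    with a d-step input delay: x[t+1] = a*x[t] + b*u[t-d].

    The controller's internal delay-free model predicts the undelayed
    output, so the proportional loop acts as if there were no delay.
    """
    u_hist = np.zeros(d)    # control inputs waiting in the plant's delay line
    xm_hist = np.zeros(d)   # past internal-model outputs (delayed copy)
    x, xm = 0.0, 0.0        # true plant state, delay-free model state
    out = []
    for _ in range(T):
        # SP feedback: delay-free model output plus the mismatch between
        # the measurement and the d-step-delayed model output
        feedback = xm + (x - xm_hist[0])
        u = kp * (ref - feedback)
        xm_hist = np.append(xm_hist[1:], xm)  # store current model output
        xm = a * xm + b * u                   # delay-free internal model
        x = a * x + b * u_hist[0]             # true plant sees u delayed by d
        u_hist = np.append(u_hist[1:], u)
        out.append(x)
    return np.array(out)
```

With a perfect internal model the proportional loop behaves as if the d-step delay were absent, and the output settles at the delay-free fixed point b\*kp\*ref/(1 - a + b\*kp); integral action would remove the remaining steady-state offset.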
Main results. Response models were obtained by analyzing EEG data (n=8) from experiments using periodically inverting stimuli causing prominent parieto-occipital oscillations, the steady-state visual evoked potentials (SSVEPs). Averaged subject-specific SSVEPs, and associated retino-cortical delays, were subsequently used to obtain the SP controller's linear time-invariant (LTI) models of individual responses.
The SSVEP models were first successfully validated against the experimental data. When placed in closed loop with the designed SP controller configuration, the SSVEP amplitude level oscillated around several reference values, accounting for inter-individual variability.
Significance. In silico and in vivo data matched, suggesting the model's robustness and paving the way for the experimental validation of this non-volitional neurofeedback system to control the amplitude of abnormal brain oscillations in autism and attention-deficit and hyperactivity disorders.
Improving subject transfer in EEG classification with divergence estimation
\textit{Objective}. Classification models for electroencephalogram (EEG) data show a large decrease in performance when evaluated on unseen test subjects. We improve performance using new regularization techniques during model training.
\textit{Approach}.
We propose several graphical models to describe an EEG classification task.
From each model, we identify statistical relationships that should hold true in an idealized training scenario (with infinite data and a globally-optimal model) but that may not hold in practice.
We design regularization penalties to enforce these relationships in two stages.
First, we identify suitable proxy quantities (divergences such as Mutual Information and Wasserstein-1) that can be used to measure statistical independence and dependence relationships.
Second, we provide algorithms to efficiently estimate these quantities during training using secondary neural network models.
\textit{Main Results}.
We conduct extensive computational experiments using a large benchmark EEG dataset, comparing our proposed techniques with a baseline method that uses an adversarial classifier.
We first show the performance of each method across a wide range of hyperparameters, demonstrating that each method can be easily tuned to yield significant benefits over an unregularized model.
We show that, using ideal hyperparameters for all methods, our first technique gives significantly better performance than the baseline regularization technique.
We also show that, across hyperparameters, our second technique gives significantly more stable performance than the baseline.
The proposed methods require only a small computational cost at training time that is equivalent to the cost of the baseline.
\textit{Significance}.
The high variability in signal distribution between subjects means that typical approaches to EEG signal modeling often require time-intensive calibration for each user, and even re-calibration before every use.
By improving the performance of population models in the most stringent case of zero-shot subject transfer, we may help reduce or eliminate the need for model calibration.
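The divergence-estimation step described above can be sketched minimally. In this illustrative example, a linear critic on synthetic 1-D features stands in for the secondary neural-network models, and the Kantorovich-Rubinstein dual of the Wasserstein-1 distance is maximized by gradient ascent with weight clipping; all names and data are assumptions, not the paper's setup.

```python
# Minimal Wasserstein-1 estimator via the Kantorovich-Rubinstein dual:
# W1(P, Q) = sup_{|f|_L <= 1} E_P[f] - E_Q[f].
# A linear critic f(t) = w*t on 1-D features stands in for the secondary
# neural-network critics; |w| <= 1 enforces the 1-Lipschitz constraint.
import random

def estimate_w1(xs, ys, steps=200, lr=0.05):
    w = 0.0
    for _ in range(steps):
        # Gradient of E_P[f] - E_Q[f] = w * (mean(xs) - mean(ys)) w.r.t. w.
        # For a linear critic this gradient is constant; the loop mimics the
        # per-batch critic updates used alongside the main model in training.
        grad = sum(xs) / len(xs) - sum(ys) / len(ys)
        w = max(-1.0, min(1.0, w + lr * grad))  # weight clipping
    return w * (sum(xs) / len(xs) - sum(ys) / len(ys))

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(5000)]  # "subject A" features
ys = [random.gauss(0.7, 1.0) for _ in range(5000)]  # "subject B" features
print(estimate_w1(xs, ys))  # close to the true mean shift of 0.7
```

In the regularized training loop, an estimate like this would be added to the classification loss as a penalty, pushing the feature extractor to make the subject-conditional distributions indistinguishable.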
Stability of sputtered iridium oxide neural microelectrodes under kilohertz frequency pulsed stimulation
Objective. Kilohertz (kHz) frequency stimulation has gained attention as a neuromodulation therapy in spinal cord and peripheral nerve block applications, mainly for treating chronic pain. Yet, few studies have investigated the effects of high-frequency stimulation on the performance of the electrode materials. In this work, we assess the electrochemical characteristics and stability of sputtered iridium oxide film (SIROF) microelectrodes under kHz frequency pulsed electrical stimulation. Approach. SIROF microelectrodes were subjected to 1.5-10 kHz pulsing at charge densities of 250-1000 μC cm⁻² (25-100 nC phase⁻¹), under monopolar and bipolar configurations, in buffered saline solution. The electrochemical behavior and long-term stability of the pulsed electrodes were evaluated by voltage transient, cyclic voltammetry, and electrochemical impedance spectroscopy measurements. Main results. Electrode polarization was more pronounced at higher stimulation frequencies in both monopolar and bipolar configurations. Bipolar stimulation resulted in an overall higher level of polarization than monopolar stimulation with the same parameters. In all tested pulsing conditions except one, the maximum cathodal and anodal potential excursions stayed within the water window of iridium oxide (-0.6 to 0.8 V vs Ag|AgCl). Additionally, these SIROF microelectrodes showed little or no change in electrochemical performance under prolonged continuous current pulsing at frequencies up to 10 kHz. Significance. Our results suggest that 10 000 μm² SIROF microelectrodes can deliver high-frequency neural stimulation up to 10 kHz in buffered saline at charge densities between 250 and 1000 μC cm⁻² (25-100 nC phase⁻¹).
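The reported charge-density range follows directly from the per-phase charge and the electrode's geometric area; a quick sanity check, assuming a 10 000 μm² geometric surface area as stated for these SIROF sites:

```python
# Charge density = charge per phase / geometric surface area.
# Assumed values from the abstract: 25-100 nC per phase, 10 000 um^2 site.
area_cm2 = 10_000 * 1e-8          # 10 000 um^2 -> cm^2 (1 um^2 = 1e-8 cm^2)
for q_nc in (25, 100):
    q_uc = q_nc * 1e-3            # nC -> uC
    print(f"{q_nc} nC/phase -> {q_uc / area_cm2:.0f} uC cm^-2")
```

This reproduces the 250-1000 μC cm⁻² range quoted in the abstract.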
Performance of optically pumped magnetometer magnetoencephalography: validation in large samples and multiple tasks
Objective
Current commercial magnetoencephalography (MEG) systems detect neuro-magnetic signals using superconducting quantum interference devices (SQUIDs), which require liquid helium as a cryogen and have many operational limitations. In contrast, optically pumped magnetometer (OPM) technology provides a promising alternative to conventional SQUID-MEG. OPMs can operate at room temperature, offering benefits such as flexible deployment and lower costs. However, the validation of OPM-MEG has primarily been conducted on small sample sizes and specific regions of interest in the brain, and comprehensive validation with larger samples and whole-brain assessment is lacking.
Approach
We recruited 100 participants, including healthy individuals and individuals with neurological disorders. Whole-brain OPM-MEG and SQUID-MEG data were recorded sequentially during auditory (n = 50) and visual (n = 50) stimulation experiments. By comparing the task-evoked responses of the two systems, we aimed to validate the performance of the next-generation OPM-MEG.
Main results
The results showed that OPM-MEG enhanced the amplitude of task-related responses and exhibited magnetic field patterns and neural oscillatory activity similar to those of SQUID-MEG. There was no difference in the task-related latencies measured by the two systems. The signal-to-noise ratio of OPM-MEG was lower in the auditory experiment but did not differ in the visual experiment, suggesting that the results may be task-dependent.
Significance
These results demonstrate that OPM-MEG, as an alternative to traditional SQUID-MEG, shows superior response amplitude and comparable performance in capturing brain dynamics. This study provides evidence for the effectiveness of OPM-MEG as a next-generation neuroimaging technique.