Efficient federated learning for distributed neuroimaging data
Recent advancements in neuroimaging have led to greater data sharing among the scientific community. However, institutions frequently maintain control over their data, citing concerns related to research culture, privacy, and accountability. This creates a demand for innovative tools capable of analyzing amalgamated datasets without the need to transfer actual data between entities. To address this challenge, we propose a decentralized sparse federated learning (FL) strategy. This approach emphasizes local training of sparse models to facilitate efficient communication within such frameworks. By capitalizing on model sparsity and selectively sharing parameters between client sites during the training phase, our method significantly lowers communication overheads. This advantage becomes increasingly pronounced when dealing with larger models and accommodating the diverse resource capabilities of various sites. We demonstrate the effectiveness of our approach by applying it to the Adolescent Brain Cognitive Development (ABCD) dataset.
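To make the communication-saving idea concrete, here is a minimal sketch (not the authors' implementation) of top-k sparsification and sparse aggregation; the `density` parameter, helper names, and averaging rule are illustrative assumptions.

```python
import numpy as np

def topk_sparsify(params, density=0.1):
    """Keep only the largest-magnitude fraction of parameters (a common
    sparsification heuristic); everything else is zeroed before sharing."""
    flat = np.abs(params).ravel()
    k = max(1, int(density * flat.size))
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(params) >= threshold
    return params * mask, mask

def aggregate(client_updates):
    """Average the shared sparse updates, normalizing each coordinate by
    the number of clients that actually transmitted it."""
    total = np.zeros_like(client_updates[0][0])
    counts = np.zeros_like(total)
    for update, mask in client_updates:
        total += update
        counts += mask
    return np.divide(total, counts, out=np.zeros_like(total), where=counts > 0)

# Example: three simulated clients each share ~10% of a 1,000-parameter model.
rng = np.random.default_rng(0)
updates = [topk_sparsify(rng.normal(size=1000), density=0.1) for _ in range(3)]
global_update = aggregate(updates)
```

With 10% density, each round transmits roughly one-tenth of the parameters per client, which is where the communication savings described above come from.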
Optimizing neuroscience data management by combining REDCap, BIDS and SQLite: a case study in Deep Brain Stimulation
Neuroscience studies entail the generation of massive collections of heterogeneous data (e.g., demographics, clinical records, medical images). Integration and analysis of such data in research centers is pivotal for elucidating disease mechanisms and improving clinical outcomes. However, data collection in clinics often relies on non-standardized methods, such as paper-based documentation. Moreover, diverse data types are collected in different departments, hindering efficient data organization, secure sharing, and compliance with the FAIR (Findable, Accessible, Interoperable, Reusable) principles. Hence, in this manuscript we present a specialized data management system designed to enhance research workflows in Deep Brain Stimulation (DBS), a state-of-the-art neurosurgical procedure employed to treat symptoms of movement and psychiatric disorders. The system leverages REDCap to promote accurate data capture in hospital settings and secure sharing with research institutes, Brain Imaging Data Structure (BIDS) as the image storage standard, and a DBS-specific SQLite database as a comprehensive data store and unified interface to all data types. A self-developed Python tool automates the data flow between these three components, ensuring their full interoperability. The proposed framework has already been successfully employed for capturing and analyzing data from 107 patients at 2 medical institutions. It effectively addresses the challenges of managing, sharing, and retrieving diverse data types, fostering advancements in data quality, organization, analysis, and collaboration among medical and research institutions.
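As a rough illustration of how an SQLite database can act as a unified interface over REDCap records and BIDS-organized images, consider the toy schema below; the table and column names are hypothetical, not those of the authors' database.

```python
import sqlite3

# Hypothetical minimal schema: clinical records captured in REDCap and image
# files organized in BIDS are both referenced from one SQLite store.
conn = sqlite3.connect("dbs_study.sqlite")
conn.executescript("""
CREATE TABLE IF NOT EXISTS patients (
    patient_id    TEXT PRIMARY KEY,   -- pseudonymized identifier
    redcap_record INTEGER             -- link to the REDCap record
);
CREATE TABLE IF NOT EXISTS imaging (
    patient_id TEXT REFERENCES patients(patient_id),
    session    TEXT,
    bids_path  TEXT                   -- path inside the BIDS dataset
);
""")
conn.execute("INSERT OR IGNORE INTO patients VALUES (?, ?)", ("sub-01", 1001))
conn.execute("INSERT INTO imaging VALUES (?, ?, ?)",
             ("sub-01", "ses-pre", "sub-01/ses-pre/anat/sub-01_T1w.nii.gz"))
conn.commit()
```

A synchronization script in this spirit could then poll REDCap's export API and the BIDS directory tree, keeping the database rows current, which is the role the authors' Python tool plays.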
Reproducible supervised learning-assisted classification of spontaneous synaptic waveforms with Eventer
Detection and analysis of spontaneous synaptic events is an extremely common task in many neuroscience research labs. Various algorithms and tools have been developed over the years to improve the sensitivity of detecting synaptic events. However, the final stages of most procedures for detecting synaptic events still involve the manual selection of candidate events. This step in the analysis is laborious and requires care and attention to maintain consistency of event selection across the whole dataset. Manual selection can introduce bias and subjective selection criteria that cannot be shared with other labs when reporting methods. To address this, we have created Eventer, a standalone application for the detection of spontaneous synaptic events acquired by electrophysiology or imaging. This open-source application uses the freely available MATLAB Runtime and is deployed on Mac, Windows, and Linux systems. The principle of the Eventer application is to learn the user's "expert" strategy for classifying a set of detected event candidates from a small subset of the data and then automatically apply the same criteria to the remaining dataset. Eventer first uses a suitable model template to extract event candidates using fast Fourier transform (FFT)-based deconvolution with a low threshold. Random forests are then created and trained to associate various features of the events with the manual labeling. The stored model file can be reloaded and used to analyze large datasets with greater consistency. The availability of the source code and its user interface provides a framework with scope to further tune the existing random forest implementation or to add additional artificial intelligence classification methods. The Eventer website (https://eventerneuro.netlify.app/) includes a repository where researchers can upload and share their machine learning model files, thereby providing greater opportunities for enhancing reproducibility when analyzing datasets of spontaneous synaptic activity. In summary, Eventer and the associated repository could allow researchers studying synaptic transmission to increase the throughput of their data analysis and address the increasing concerns about reproducibility in neuroscience research.
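For readers unfamiliar with template-based deconvolution, the sketch below shows the general technique in Python; the biexponential template, regularization, and threshold rule are generic assumptions, not Eventer's actual (MATLAB-based) implementation.

```python
import numpy as np

def detect_candidates(trace, fs, tau_rise=0.5e-3, tau_decay=5e-3, thresh_sd=3.0):
    """Template-based event detection via FFT deconvolution (illustrative
    only): deconvolving the recording against a biexponential synaptic
    template turns events into sharp peaks, which a low amplitude threshold
    then picks up as candidates."""
    t = np.arange(trace.size) / fs
    template = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
    template /= template.max()
    eps = 1e-6 * np.abs(np.fft.fft(template)).max()  # regularize the division
    decon = np.real(np.fft.ifft(np.fft.fft(trace) / (np.fft.fft(template) + eps)))
    thresh = thresh_sd * decon.std()
    onsets = np.flatnonzero((decon[1:] > thresh) & (decon[:-1] <= thresh))
    return onsets / fs  # candidate event times in seconds
```

In Eventer's workflow, candidates produced by a stage like this are then manually labeled on a small subset, and the trained random forest applies those labels to the rest.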
Light-weight neural network for intra-voxel structure analysis
We present a novel neural network-based method for analyzing intra-voxel structures, addressing critical challenges in diffusion-weighted MRI analysis for brain connectivity and development studies. The network architecture, called the Local Neighborhood Neural Network, is designed to use the spatial correlations of neighboring voxels for enhanced inference while reducing parameter overhead. Our model exploits these relationships to improve the analysis of complex structures and noisy data environments. We adopt a self-supervised approach to address the lack of ground-truth data, generating synthetic voxel-neighborhood signals to build the training set. This eliminates the need for manual annotations and facilitates training under realistic conditions. Comparative analyses show that our method outperforms the constrained spherical deconvolution (CSD) method in quantitative and qualitative validations. On phantom images that mimic real data, our approach improves angular error, volume fraction estimation accuracy, and success rate. Furthermore, a qualitative comparison on real brain images shows that the proposed method yields better spatial consistency across regions. This approach demonstrates enhanced intra-voxel structure analysis capabilities and holds promise for broader application in various imaging scenarios.
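The abstract does not specify the architecture, but the general idea of mapping a voxel's 3x3x3 neighborhood of diffusion signals to local fiber parameters might look like the following PyTorch sketch; the layer sizes, input dimensions, and output parameterization are all assumptions.

```python
import torch
import torch.nn as nn

class LocalNeighborhoodNet(nn.Module):
    """Illustrative sketch (details assumed, not the authors' architecture):
    the diffusion signals of a voxel and its 26 neighbors are flattened and
    mapped to per-voxel fiber parameters by a small MLP."""
    def __init__(self, n_gradients=64, n_out=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(27 * n_gradients, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, n_out),  # e.g., fiber directions + volume fractions
        )

    def forward(self, neighborhood):  # shape: (batch, 27, n_gradients)
        return self.net(neighborhood.flatten(1))
```

Self-supervision in this setting would mean simulating neighborhood signals with known parameters and training the network to recover them, which avoids manual annotation as described above.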
A canonical polyadic tensor basis for fast Bayesian estimation of multi-subject brain activation patterns
Task-evoked functional magnetic resonance imaging studies, such as the Human Connectome Project (HCP), are a powerful tool for exploring how brain activity is influenced by cognitive tasks like memory retention, decision-making, and language processing. A fast Bayesian function-on-scalar model is proposed for estimating population-level activation maps linked to the working memory task. The model is based on the canonical polyadic (CP) tensor decomposition of coefficient maps obtained for each subject. This decomposition effectively yields a tensor basis capable of extracting both common features and subject-specific features from the coefficient maps. These subject-specific features, in turn, are modeled as a function of covariates of interest using a Bayesian model that accounts for the correlation of the CP-extracted features. The dimensionality reduction achieved with the tensor basis allows for a fast MCMC estimation of population-level activation maps. This model is applied to one hundred unrelated subjects from the HCP dataset, yielding significant insights into brain signatures associated with working memory.
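For reference, with the per-subject coefficient maps stacked into a four-way tensor $\mathcal{B}$ (three spatial modes plus a subject mode; the exact stacking is our assumption), a rank-$R$ CP decomposition takes the standard form

```latex
\mathcal{B} \;\approx\; \sum_{r=1}^{R} \lambda_r \,
\mathbf{u}_r^{(1)} \circ \mathbf{u}_r^{(2)} \circ \mathbf{u}_r^{(3)} \circ \mathbf{v}_r ,
```

where $\circ$ denotes the vector outer product, the spatial factors $\mathbf{u}_r^{(1)}, \mathbf{u}_r^{(2)}, \mathbf{u}_r^{(3)}$ form the shared tensor basis, and the subject-mode loadings $\mathbf{v}_r$ supply the subject-specific features that enter the Bayesian regression on covariates. Because $R$ is far smaller than the number of voxels, the MCMC sampler operates in a drastically reduced space, which is the source of the speedup.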
Spectral graph convolutional neural network for Alzheimer's disease diagnosis and multi-disease categorization from functional brain changes in magnetic resonance images
Alzheimer's disease (AD) is a progressive neurological disorder characterized by the gradual deterioration of cognitive functions, leading to dementia and significantly impacting the quality of life for millions of people worldwide. Early and accurate diagnosis is crucial for the effective management and treatment of this debilitating condition. This study introduces a novel framework based on Spectral Graph Convolutional Neural Networks (SGCNN) for diagnosing AD and categorizing multiple diseases through the analysis of functional changes in brain structures captured via magnetic resonance imaging (MRI). To assess the effectiveness of our approach, we systematically analyze structural modifications to the SGCNN model through comprehensive ablation studies. The performance of various Convolutional Neural Networks (CNNs) is also evaluated, including SGCNN variants, Base CNN, Lean CNN, and Deep CNN. We begin with the original SGCNN model, which serves as our baseline and achieves a commendable classification accuracy of 93%. In our investigation, we perform two distinct ablation studies on the SGCNN model to examine how specific structural changes impact its performance. The results reveal that Ablation Model 1 significantly enhances accuracy, achieving an impressive 95%, while Ablation Model 2 maintains the baseline accuracy of 93%. Additionally, the Base CNN model demonstrates strong performance with a classification accuracy of 93%, whereas both the Lean CNN and Deep CNN models achieve 94% accuracy, indicating their competitive capabilities. To validate the models' effectiveness, we utilize multiple evaluation metrics, including accuracy, precision, recall, and F1-score, ensuring a thorough assessment of their performance. Our findings underscore that Ablation Model 1 (SGCNN Model 1) delivers the highest predictive accuracy among the tested models, highlighting its potential as a robust approach for Alzheimer's image classification. Ultimately, this research aims to facilitate early diagnosis and treatment of AD, contributing to improved patient outcomes and advancing the field of neurodegenerative disease diagnosis.
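For context, the spectral graph convolution underlying SGCNNs is standardly defined via the eigendecomposition of the normalized graph Laplacian:

```latex
L = I_N - D^{-1/2} A D^{-1/2} = U \Lambda U^{\top},
\qquad
g_\theta \star x = U \, g_\theta(\Lambda) \, U^{\top} x ,
```

where $A$ is the adjacency matrix, $D$ the degree matrix, and $g_\theta(\Lambda)$ a learnable filter on the Laplacian eigenvalues. Practical layers approximate $g_\theta$ (for example, with Chebyshev polynomials) to avoid the full eigendecomposition; the abstract does not state which approximation or ablated components this study uses.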
Reproducible brain PET data analysis: easier said than done
While a great deal of recent effort has focused on addressing a perceived reproducibility crisis within brain structural magnetic resonance imaging (MRI) and functional MRI research communities, this article argues that brain positron emission tomography (PET) research stands on even more fragile ground, lagging behind efforts to address MRI reproducibility. We begin by examining the current landscape of factors that contribute to reproducible neuroimaging data analysis, including scientific standards, analytic plan pre-registration, data and code sharing, containerized workflows, and standardized processing pipelines. We then focus on disparities in the current status of these factors between brain MRI and brain PET. To demonstrate the positive impact that further developing such reproducibility factors would have on brain PET research, we present a case study that illustrates the many challenges faced by one laboratory that attempted to reproduce a community-standard brain PET processing pipeline. We identified key areas in which the brain PET community could enhance reproducibility, including stricter reporting policies among PET-dedicated journals, data repositories, containerized analysis tools, and standardized processing pipelines. Other solutions, such as mandatory pre-registration, data sharing, code availability as a condition of grant funding, online forums, and standardized reporting templates, are also discussed. Bolstering these reproducibility factors within the brain PET research community has the potential to unlock the full potential of brain PET research, propelling it toward a higher-impact future.
Predicting the clinical prognosis of acute ischemic stroke using machine learning: an application of radiomic biomarkers on non-contrast CT after intravascular interventional treatment
This study aimed to develop a radiomic model based on non-contrast computed tomography (NCCT) after interventional treatment to predict the clinical prognosis of acute ischemic stroke (AIS) with large vessel occlusion.
Research on ECG signal reconstruction based on improved weighted nuclear norm minimization and approximate message passing algorithm
To improve the energy efficiency of wearable devices, the collected electrocardiogram (ECG) data must be compressed and reconstructed. The compressed data may be corrupted by noise during transmission. Denoising-based approximate message passing (AMP) algorithms perform well in reconstructing noisy signals, so we introduce a denoising-based AMP algorithm into ECG signal reconstruction. The weighted nuclear norm minimization (WNNM) algorithm exploits the low-rank structure of groups of similar signal blocks for denoising and averages the blocks after low-rank decomposition to obtain the final denoised signal. Under the influence of noise, however, the search for similar blocks can err, grouping dissimilar blocks together and degrading the denoising effect. This paper therefore improves the WNNM algorithm, replacing direct averaging with weighted averaging of the low-rank-decomposed signal blocks in the denoising step, and validates its effectiveness on ECG signals. Experimental results demonstrate that the resulting improved WNNM-AMP (IWNNM-AMP) algorithm achieves the best reconstruction performance under different compression ratios and noise conditions, obtaining the lowest PRD and RMSE values. Compared with the WNNM-AMP algorithm, the PRD value is reduced by 0.17 to 4.56 and the P-SNR value is improved by 0.12 to 2.70.
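For reference, WNNM denoises a matrix $Y_j$ of vectorized similar blocks by weighted nuclear norm minimization, and the paper's modification replaces the plain average of the recovered block estimates with a weighted one. The form of the weights $\alpha_k$ below (e.g., similarity to the reference block) is our assumption, as the abstract does not specify it:

```latex
\hat{X}_j = \arg\min_{X} \;\lVert Y_j - X \rVert_F^2 + \lVert X \rVert_{w,*},
\qquad
\lVert X \rVert_{w,*} = \sum_i w_i \, \sigma_i(X),
\qquad
\hat{x} = \frac{\sum_k \alpha_k \, \hat{x}^{(k)}}{\sum_k \alpha_k},
```

where $\sigma_i(X)$ are the singular values of $X$, $w_i \ge 0$ are their weights, and $\hat{x}^{(k)}$ are the estimates of the same signal block recovered from different groups. Down-weighting estimates from poorly matched groups is what mitigates the mismatched-block problem described above.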
Can micro-expressions be used as a biomarker for autism spectrum disorder?
Early and accurate diagnosis of autism spectrum disorder (ASD) is crucial for effective intervention, yet it remains a significant challenge due to its complexity and variability. Micro-expressions are rapid, involuntary facial movements indicative of underlying emotional states. It is unknown whether micro-expressions can serve as a valid biomarker for ASD diagnosis.
Early detection of mild cognitive impairment through neuropsychological tests in population screenings: a decision support system integrating ontologies and machine learning
Machine learning (ML) methodologies for detecting Mild Cognitive Impairment (MCI) are progressively gaining prevalence to manage the vast volume of processed information. Nevertheless, the black-box nature of ML algorithms and the heterogeneity within the data may result in varied interpretations across distinct studies. To address this, we present the design of a decision support system that integrates a machine learning model, represented using the Semantic Web Rule Language (SWRL), into an ontology with specialized knowledge of neuropsychological tests, the NIO ontology. The system's ability to detect MCI subjects was evaluated on a database of 520 neuropsychological assessments conducted in Spanish and compared with other well-established ML methods. Using a coefficient weighted to minimize false negatives, results indicate that the system performs similarly to other well-established ML methods (0.830, only below bagging at 0.832) while exhibiting other significant attributes, such as explanation capability and standardization of data to a common framework thanks to the ontological component. In addition, the system's versatility and ease of use were demonstrated with three additional use cases: evaluation of new cases even if the acquisition stage is incomplete (case records with missing values), incorporation of a new database into the integrated system, and use of the ontology's capabilities to relate different domains. This makes it a useful tool to support physicians and neuropsychologists in population-based screenings for early detection of MCI.
Fuzzy C-means clustering algorithm applied in computed tomography images of patients with intracranial hemorrhage
In recent years, intracerebral hemorrhage (ICH) has garnered significant attention as a severe cerebrovascular disorder. To enhance the accuracy of ICH detection and segmentation, this study proposed an improved fuzzy C-means (FCM) algorithm and performed a comparative analysis with both traditional FCM and advanced convolutional neural network (CNN) algorithms. Experiments conducted on the publicly available CT-ICH dataset evaluated the performance of these three algorithms in predicting ICH volume. The results demonstrated that the improved FCM algorithm offered notable improvements in computational time and resource consumption compared to the traditional FCM algorithm, while also showing enhanced accuracy. However, it still lagged behind the CNN algorithm in areas such as feature extraction, model generalization, and the ability to handle complex image structures. The study concluded with a discussion of potential directions for further optimizing the FCM algorithm, aiming to bridge the performance gap with CNN algorithms and provide a reference for future research in medical image processing.
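For reference, the baseline FCM algorithm that the study improves minimizes the standard fuzzy clustering objective

```latex
J_m = \sum_{i=1}^{N}\sum_{j=1}^{C} u_{ij}^{m}\,\lVert x_i - c_j \rVert^2,
\qquad \text{s.t.}\;\; \sum_{j=1}^{C} u_{ij} = 1 \;\; \forall i,
```

by alternating the updates $u_{ij} = \bigl[\sum_{k=1}^{C} (\lVert x_i - c_j \rVert / \lVert x_i - c_k \rVert)^{2/(m-1)}\bigr]^{-1}$ and $c_j = \sum_i u_{ij}^m x_i / \sum_i u_{ij}^m$, where $x_i$ are voxel intensities, $c_j$ cluster centers, $u_{ij}$ fuzzy memberships, and $m > 1$ the fuzzifier. The abstract does not detail which of these components the improved variant modifies, only that it reduces computational time and resource consumption while improving accuracy.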
Commentary: Accelerating spiking neural network simulations with PymoNNto and PymoNNtorch
SEEG4D: a tool for 4D visualization of stereoelectroencephalography data
Epilepsy is a prevalent and serious neurological condition that affects millions of people worldwide. Stereoelectroencephalography (sEEG) is used in cases of drug-resistant epilepsy to aid in surgical resection planning due to its high spatial resolution and ability to visualize seizure onset zones. For accurate localization of the seizure focus, sEEG studies combine pre-implantation magnetic resonance imaging (MRI), post-implantation computed tomography (CT) to visualize electrodes, and temporally recorded sEEG electrophysiological data. Many tools exist to assist in merging multimodal spatial information; however, few allow for an integrated spatiotemporal view of the electrical activity. In the current work, we present SEEG4D, an automated tool to merge spatial and temporal data into a complete, four-dimensional virtual reality (VR) object with temporal electrophysiology that enables the simultaneous viewing of anatomy and seizure activity for seizure localization and presurgical planning. We developed an automated, containerized pipeline to segment tissues and electrode contacts. Contacts are aligned with electrical activity and then animated based on relative power. SEEG4D generates models which can be loaded into VR platforms for viewing and planning with the surgical team. Automated contact segmentation locations are within 1 mm of those identified by trained raters, and the generated models show signal propagation along electrodes. Critically, the spatiotemporal information communicated through our models in a VR space has the potential to enhance sEEG presurgical planning.
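The abstract does not give SEEG4D's exact power computation; a generic per-contact band-power normalization that could drive such an animation might look like the sketch below, where the frequency band, window length, and normalization are illustrative choices rather than SEEG4D's settings.

```python
import numpy as np
from scipy.signal import welch

def relative_power(seeg, fs, band=(30.0, 80.0), window_s=1.0):
    """Per-contact power in a frequency band, normalized to [0, 1] so it
    can drive the color or intensity of each contact in an animation.
    `seeg` is an array of shape (n_contacts, n_samples)."""
    f, pxx = welch(seeg, fs=fs, nperseg=int(window_s * fs), axis=-1)
    in_band = (f >= band[0]) & (f <= band[1])
    power = pxx[..., in_band].sum(axis=-1)  # one value per contact
    return (power - power.min()) / (np.ptp(power) + 1e-12)
```

Computing this in sliding windows over the recording yields one normalized value per contact per frame, which is the kind of time series a 4D VR model can animate.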
Cooperation objective evaluation in aviation: validation and comparison of two novel approaches in simulated environment
In operational environments, human interaction and cooperation between individuals are critical to efficiency and safety. These processes are influenced by individuals' cognitive and emotional states. Human factors research aims to objectively quantify these states to prevent human error and maintain consistent performance, particularly in high-risk settings such as aviation, where human error and degraded performance account for a significant portion of accidents.
Interpretable machine learning comprehensive human gait deterioration analysis
Gait analysis, an expanding research area, employs non-invasive sensors and machine learning techniques for a range of applications. In this study, we investigate the impact of cognitive decline conditions on gait performance, drawing connections between gait deterioration in Parkinson's disease (PD) and in healthy individuals performing dual tasks.
Artificial intelligence role in advancement of human brain connectome studies
Neurons are interactive cells that communicate via ionic currents, generating electromagnetic fields in the brain. The connectome is the data obtained from these neuronal connections. Since neural circuits change in various diseases, studying the connectome sheds light on the clinical changes in specific diseases. The ability to explore these data and their relation to disorders can lead us to new therapeutic methods. Artificial intelligence (AI) is a collection of powerful algorithms for finding relationships between input data and outcomes. AI is used to extract valuable features from connectome data and, in turn, to develop prognostic and diagnostic models for neurological diseases. Studying changes in brain circuits in neurodegenerative diseases and behavioral disorders makes early diagnosis and the development of efficient treatment strategies possible. Considering the difficulties in studying brain diseases, the use of connectome data is one of the beneficial methods for improving knowledge of this organ. In the present study, we provide a systematic review of studies that used connectome data and AI to investigate various diseases, focusing on their strengths and weaknesses to provide a viewpoint for future studies. Overall, AI is very useful for developing diagnostic and prognostic tools from neuroimaging data, while bias in data collection, data decay, and the use of small datasets restrict the applications of AI-based tools built on connectome data; these limitations should be addressed in future studies.
Investigating cortical complexity and connectivity in rats with schizophrenia
Previous studies indicate that schizophrenia (SCZ) animal models exhibit abnormal gamma oscillations and abnormal functional coupling between brain regions at the cortical level. However, few researchers have focused on the correlation between brain complexity and connectivity at the cortical level. To provide a more accurate representation of brain activity, we studied the complexity of electrocorticogram (ECoG) signals and the information interaction between brain regions in a rat model of schizophrenia, and explored the correlation between brain complexity and connectivity.
The ROSMAP project: aging and neurodegenerative diseases through omic sciences
The Religious Orders Study and Memory and Aging Project (ROSMAP) is an initiative that integrates two longitudinal cohort studies, which have been collecting clinicopathological and molecular data since the early 1990s. This extensive dataset includes a wide array of omic data, revealing the complex interactions between molecular levels in neurodegenerative diseases (ND) and aging. ND are frequently associated with morbidity and cognitive decline in older adults. Omics research, in conjunction with clinical variables, is crucial for advancing our understanding of the diagnosis and treatment of ND. This summary reviews the extensive omics research (encompassing genomics, transcriptomics, proteomics, metabolomics, epigenomics, and multiomics) conducted through the ROSMAP study, highlighting the significant advancements in understanding the mechanisms underlying neurodegenerative diseases, with a particular focus on Alzheimer's disease.
Customizable automated cleaning of multichannel sleep EEG in SleepTrip
While standard polysomnography has revealed the importance of the sleeping brain in health and disease, more specific insight into the relevant brain circuits requires high-density electroencephalography (EEG). However, identifying and handling sleep EEG artifacts becomes increasingly challenging with higher channel counts and/or volume of recordings. Whereas manual cleaning is time-consuming, subjective, and often yields data loss (e.g., complete removal of channels or epochs), automated approaches suitable and practical for overnight sleep EEG remain limited, especially when control over detection and repair behavior is desired. Here, we introduce a flexible approach for automated cleaning of multichannel sleep recordings, as part of the free MATLAB-based toolbox SleepTrip. Key functionality includes 1) channel-wise detection of various artifact types encountered in sleep EEG, 2) channel- and time-resolved marking of data segments for repair through interpolation, and 3) visualization options to review and monitor performance. Functionality for Independent Component Analysis is also included. Extensive customization options allow tailoring cleaning behavior to data properties and analysis goals. By enabling computationally efficient and flexible automated data cleaning, this tool facilitates fundamental and clinical sleep EEG research.
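As a language-agnostic illustration of channel- and time-resolved artifact marking (SleepTrip itself is MATLAB-based and far more elaborate), a toy RMS-outlier detector might look like this; the epoch length and z-score threshold are arbitrary choices.

```python
import numpy as np

def mark_artifacts(eeg, fs, epoch_s=4.0, z_thresh=4.0):
    """Flag epochs whose RMS amplitude is an outlier for that channel,
    producing a boolean (channel, epoch) mask of segments to repair by
    interpolation. `eeg` has shape (n_channels, n_samples)."""
    n_chan, n_samp = eeg.shape
    epoch_len = int(epoch_s * fs)
    n_epochs = n_samp // epoch_len
    rms = np.sqrt(
        (eeg[:, :n_epochs * epoch_len]
         .reshape(n_chan, n_epochs, epoch_len) ** 2).mean(axis=-1))
    z = (rms - rms.mean(axis=1, keepdims=True)) / rms.std(axis=1, keepdims=True)
    return np.abs(z) > z_thresh
```

Marking at the (channel, epoch) level rather than discarding whole channels or epochs is what allows repair by interpolation instead of data loss, mirroring the design goal described above.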
Quantitative assessment of neurodevelopmental maturation: a comprehensive systematic literature review of artificial intelligence-based brain age prediction in pediatric populations
Over the past few decades, numerous researchers have explored the application of machine learning for assessing children's neurological development. Developmental changes in the brain can be used to gauge how well its maturation status aligns with the child's chronological age. AI models are trained to analyze changes across different modalities and estimate a subject's brain age. Disparities between predicted and chronological age can be viewed as a biomarker for a pathological condition. This literature review aims to illuminate research studies that have employed AI to predict children's brain age.
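A minimal sketch of the brain-age-gap paradigm the review surveys, with simulated data and an arbitrary regressor standing in for the studies' actual models and imaging features:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

# Simulated stand-ins: rows are subjects, columns are imaging-derived features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # e.g., regional morphometric features
age = rng.uniform(6, 18, size=200)      # chronological ages in years

# Cross-validated predictions avoid evaluating the model on its training data.
predicted = cross_val_predict(
    RandomForestRegressor(n_estimators=200, random_state=0), X, age, cv=5)
brain_age_gap = predicted - age         # positive gap: "older-looking" brain
```

The gap, rather than the prediction itself, is the quantity treated as a candidate biomarker, since systematic deviation from chronological age is what may indicate atypical maturation.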