A systematic review of deep learning-based denoising for low-dose computed tomography from a perceptual quality perspective
Low-dose computed tomography (LDCT) scans are essential in reducing radiation exposure but often suffer from significant image noise that can impair diagnostic accuracy. While deep learning approaches have enhanced LDCT denoising capabilities, the predominant reliance on objective metrics such as the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) has resulted in over-smoothed images that lack critical detail. This paper explores advanced deep learning methods tailored specifically to improve perceptual quality in LDCT images, focusing on generating diagnostic-quality images preferred in clinical practice. We review and compare current methodologies, including perceptual loss functions and generative adversarial networks, addressing the significant limitations of current benchmarks and the subjective nature of perceptual quality evaluation. Through a systematic analysis, this study underscores the urgent need for methods that balance both perceptual and diagnostic quality, and proposes new directions for future research in the field.
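As a concrete illustration of the distortion metrics in question, the minimal sketch below computes PSNR and SSIM with scikit-image on synthetic stand-in arrays (the data are placeholders, not CT slices). These metrics reward pixelwise and structural fidelity, so an over-smoothed denoising result can score highly while losing diagnostic texture; perceptual measures such as LPIPS instead compare deep-feature distances.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Stand-in "full-dose" reference slice and a denoised estimate (synthetic data).
rng = np.random.default_rng(0)
ref = rng.random((128, 128)).astype(np.float32)
den = np.clip(ref + 0.01 * rng.standard_normal((128, 128)).astype(np.float32), 0, 1)

# Distortion metrics: high scores do not guarantee preserved fine detail.
psnr = peak_signal_noise_ratio(ref, den, data_range=1.0)
ssim = structural_similarity(ref, den, data_range=1.0)
print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```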
Quantitative biomechanical analysis in validating a video-based model to remotely assess physical frailty: a potential solution for telehealth and globalized remote patient monitoring
Assessing physical frailty (PF) is vital for early risk detection, tailored interventions, preventive care, and efficient healthcare planning. However, traditional PF assessments are often impractical, requiring clinic visits and significant resources. We introduce a video-based frailty meter (vFM) that uses machine learning (ML) to assess PF indicators from a 20 s exercise, enabling remote and efficient PF assessment. This study validates the vFM against a sensor-based frailty meter (sFM) using elbow flexion and extension exercises recorded via a webcam and a video-conferencing app. We developed the vFM using Google's MediaPipe ML model to track elbow motion during a 20 s elbow flexion and extension exercise recorded with a standard webcam. To validate the vFM, 65 participants aged 20-85 performed the exercise under single-task and dual-task conditions, the latter including counting backward from a random two-digit number. We analyzed elbow angular velocity to extract frailty indicators (slowness, weakness, rigidity, exhaustion, and unsteadiness) and compared these with sFM results using intraclass correlation coefficient (ICC) analysis and Bland-Altman plots. The vFM results demonstrated high precision (0.00-7.14%) and low bias (0.00-0.09%), showing excellent agreement with sFM outcomes (ICC(2,1): 0.973-0.999), unaffected by clothing color or environmental factors. The vFM offers a quick, accurate method for remote PF assessment, surpassing previous video-based frailty assessments in accuracy and environmental robustness, particularly in estimating elbow motion as a surrogate for the 'rigidity' phenotype. This innovation simplifies PF assessments for telehealth applications, promising advances in preventive care and healthcare planning without the need for sensors or specialized infrastructure.
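For readers unfamiliar with the tracking step, the sketch below shows how MediaPipe pose landmarks can yield an elbow-angle time series and its angular velocity from a recorded video. The file name, side choice, and the lack of smoothing are illustrative assumptions, not the authors' pipeline.

```python
import cv2
import mediapipe as mp
import numpy as np

mp_pose = mp.solutions.pose

def elbow_angle(lm, side="RIGHT"):
    """Elbow angle (deg) from shoulder-elbow-wrist landmark positions."""
    ids = [getattr(mp_pose.PoseLandmark, f"{side}_{p}")
           for p in ("SHOULDER", "ELBOW", "WRIST")]
    s, e, w = [np.array([lm[i].x, lm[i].y]) for i in ids]
    v1, v2 = s - e, w - e
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

cap = cv2.VideoCapture("elbow_exercise.mp4")   # hypothetical 20 s recording
fps, angles = cap.get(cv2.CAP_PROP_FPS), []
with mp_pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if res.pose_landmarks:
            angles.append(elbow_angle(res.pose_landmarks.landmark))
cap.release()

# Angular velocity (deg/s): frailty indicators are derived from this series.
omega = np.gradient(np.array(angles)) * fps
```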
A rate-responsive duty-cycling protocol for leadless pacemaker synchronization
Dual-chamber leadless pacemakers (LLPMs) consist of two implants, one in the right atrium and one in the right ventricle. However, the inter-device communication required for atrioventricular (AV) synchrony reduces the projected longevity of commercial dual-chamber LLPMs by 35-45%. This work analyzes the power-saving potential, and the resulting impact on AV synchrony, of a novel LLPM synchronization protocol. Relevant parameters of the proposed window scheduling algorithm were optimized with system-level simulations investigating the resulting trade-off between transceiver current consumption and AV synchrony. The parameter set included the algorithm's setpoint for the target number of windows per cardiac cycle and the number of averaging cycles used in the window update calculation. The sensing inputs for the LLPM model were derived from human electrocardiogram recordings in the MIT-BIH Arrhythmia Database. Transceiver current consumption was estimated by combining the simulation results on the required communication resources with electrical measurements of a receiver microchip developed for LLPM synchronization in previous work. The performance ratio, defined as AV synchrony divided by current consumption, was maximized for a target of one window per cardiac cycle and three averaging cycles. The median transceiver current of both LLPMs combined was 166 nA (interquartile range: 152-183 nA), and the median AV synchrony was 92.5%. This corresponded to median reductions of 18.3% in current consumption and 3.2% in AV synchrony compared with a non-rate-responsive implementation of the same protocol, which prioritized maximum AV synchrony. In conclusion, adopting a rate-responsive communication protocol may significantly increase the device longevity of dual-chamber LLPMs without substantially compromising AV synchrony, potentially reducing the frequency of device replacements.
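The abstract does not give the update rule itself, so the following is only a toy duty-cycling sketch under stated assumptions: the cycle length is estimated as a moving average over the last n_avg sensed RR intervals, and the target number of receive windows is spread evenly across the estimated cycle. The class and parameter names are hypothetical.

```python
from collections import deque

class WindowScheduler:
    """Toy rate-responsive duty-cycling sketch (not the paper's algorithm):
    schedule `target` receive windows per expected cardiac cycle, with the
    cycle length estimated as a moving average over the last n_avg cycles."""

    def __init__(self, target_windows=1, n_avg=3, init_cycle_ms=800.0):
        self.target = target_windows
        self.history = deque([init_cycle_ms] * n_avg, maxlen=n_avg)

    def on_cycle(self, measured_cycle_ms):
        # Called once per sensed cardiac cycle with the measured RR interval.
        self.history.append(measured_cycle_ms)

    def next_window_offsets(self):
        est = sum(self.history) / len(self.history)   # averaged cycle length
        # Spread the target number of windows evenly across the estimate.
        return [est * (k + 1) / (self.target + 1) for k in range(self.target)]

sched = WindowScheduler(target_windows=1, n_avg=3)
for rr in (812.0, 795.0, 780.0):       # RR intervals from sensed events (ms)
    sched.on_cycle(rr)
print(sched.next_window_offsets())     # offset of the next receive window (ms)
```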
Integrated deep learning approach for generating cross-polarized images and analyzing skin melanin and hemoglobin distributions
Cross-polarized images are beneficial for skin pigment analysis due to the enhanced visualization of melanin and hemoglobin regions. However, the required imaging equipment can be bulky and optically complex, and preparing ground truths for training pigment analysis models is labor-intensive. This study introduces an integrated approach for generating cross-polarized images and creating skin melanin and hemoglobin maps without the need to prepare ground truths for the pigment distributions. We propose a two-component approach: a cross-polarized image generation module and a skin analysis module. Three generative adversarial networks (CycleGAN, pix2pix, and pix2pixHD) are compared for creating cross-polarized images. The regression analysis network for skin analysis is trained with theoretically reconstructed ground truths based on the optical properties of the pigments. The methodology is evaluated using the VISIA VAESTRO clinical system. The cross-polarized image generation module achieved a peak signal-to-noise ratio of 35.514 dB, and the skin analysis module demonstrated correlation coefficients of 0.942 for hemoglobin and 0.922 for melanin. The integrated approach yielded correlation coefficients of 0.923 for hemoglobin and 0.897 for melanin. The proposed approach thus achieves a reasonable correlation with the professional system operating on actually captured images, offering a promising alternative to existing professional equipment without the need for additional optical instruments or extensive ground truth preparation.
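As a rough illustration of pigment maps derived from optical properties, the sketch below unmixes melanin and hemoglobin from an RGB image via a linear Beer-Lambert model. The absorbance matrix is a made-up placeholder, not measured extinction coefficients, and the study's regression network learns such a mapping rather than solving it in closed form.

```python
import numpy as np

# Illustrative per-channel absorbance weights for [melanin, hemoglobin];
# these numbers are placeholders, not measured extinction coefficients.
A = np.array([[0.90, 0.30],    # R channel
              [0.60, 0.75],    # G channel
              [0.45, 0.55]])   # B channel

def pigment_maps(rgb):
    """rgb: (H, W, 3) reflectance in (0, 1]. Convert to optical density and
    solve OD = A @ [melanin, hemoglobin] per pixel by least squares."""
    od = -np.log(np.clip(rgb, 1e-4, 1.0))                    # optical density
    x, *_ = np.linalg.lstsq(A, od.reshape(-1, 3).T, rcond=None)
    mel, hem = x.reshape(2, *rgb.shape[:2])
    return mel, hem

rgb = np.clip(np.random.default_rng(0).random((64, 64, 3)), 1e-4, 1.0)
mel_map, hem_map = pigment_maps(rgb)
```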
Ventral tegmental area deep brain stimulation reverses ethanol-induced dopamine increase in the rat nucleus accumbens
The neurophysiology of alcohol use disorder (AUD) is complex, but a major contributor to addictive phenotypes is the tendency for drugs of abuse to increase tonic extracellular dopamine (DA) levels in the nucleus accumbens (NAc). Repeated exposure to substances of abuse such as ethanol results in overstimulation of the mesolimbic pathway, causing excessive release of DA from the ventral tegmental area (VTA) to target regions such as the NAc. This heightened DA signaling is associated with the reinforcing effects of substances, leading to a strong desire for continued use. Recent work has postulated that high-frequency deep brain stimulation (DBS) of the VTA may reduce DA transmission to the NAc following acute exposure to a drug of abuse, thereby mitigating the drug's addictive potential. We first demonstrate ethanol's ability to decrease phasic DA release over time and to increase tonic extracellular DA concentrations in the NAc. Next, we demonstrate the capability of high-frequency VTA DBS to reverse this ethanol-associated surge in tonic DA concentrations in the NAc to levels not significantly different from baseline. This study suggests a promising new avenue for investigating the mechanisms of AUD.
Behavior of jittering potential before and after impulse blockings: a preliminary study in myasthenia gravis
Neuromuscular junction disorders lead to cessation of bioelectrical activity transmission between motor nerve endings and muscle fibers. In sufficiently severe disease, impulse blockings are observed. This study aims to reveal the behavior of the neuromuscular junction before and after impulse blockings. Fourteen recordings harboring impulse blockings from nine myasthenia gravis (MG) patients were included. Recordings were made from the frontalis muscle using a concentric needle electrode during voluntary contraction. One hundred traces were acquired in each session. In addition to the well-known jitter parameters, new parameters were calculated, such as the number of consecutive impulse-blocking groups, the number of impulse blockings in each group, and the ratio of the maximum number of consecutive impulse blockings to the total number of blockings. Plots were composed to show the location-change behavior of the jittering potential across all traces. Before or after a single impulse blocking, the jittering potential moved further away from the trigger peak more than it moved closer. After consecutive impulse blockings, however, it moved closer to the trigger peak more than it moved further away. The behavior of the neuromuscular junction before and after impulse blockings was thus demonstrated in MG patients, and new features were extracted for jitter studies. Building models for different diseases according to their impulse blockings may be possible with the developed algorithm.
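To make the new blocking parameters concrete, here is a small sketch that computes the classical jitter measure (mean consecutive difference) from a latency series and groups consecutive blocked traces. The function names and the example pattern are illustrative, not the study's implementation.

```python
import numpy as np

def jitter_mcd(latencies_us):
    """Mean consecutive difference (MCD), the standard jitter parameter (µs)."""
    return np.mean(np.abs(np.diff(latencies_us)))

def blocking_groups(blocked):
    """Group consecutive blocked traces. `blocked` is a boolean per-trace
    array; returns group sizes, e.g. [1, 3] for pattern 0,1,0,0,1,1,1,0."""
    groups, run = [], 0
    for b in blocked:
        if b:
            run += 1
        elif run:
            groups.append(run)
            run = 0
    if run:
        groups.append(run)
    return groups

blocked = np.array([0, 1, 0, 0, 1, 1, 1, 0], dtype=bool)
g = blocking_groups(blocked)
n_groups = len(g)               # number of consecutive impulse-blocking groups
max_ratio = max(g) / sum(g)     # max consecutive blockings / total blockings
print(n_groups, max_ratio)
```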
Characterization of the phagocytic ability of white blood cells separated using a single curvature spiral microfluidic device
The present work describes a microfluidic device developed for separating white blood cells (WBCs) for the nitroblue tetrazolium (NBT) bioassay, which quantifies the phagocytic ability of cells. The NBT test requires only a small number of phagocytic cells but is highly susceptible to the presence of red blood cells (RBCs). Our inertial microfluidic device can deliver a WBC sample by removing 99.99% of RBCs, reducing the RBC-to-WBC ratio from 848:1 to 2:3. The microdevice operates at a relatively high hematocrit (1% Hct). Compared with conventional WBC separation methods, the microdevice's passive, label-free nature preserves the cell properties of the original sample. The single-turn spiral microfluidic device with a rectangular cross-section is simple to fabricate, cost-effective, and easy to operate. It requires only a single drop of whole blood (~20 µl) obtained via finger prick for efficient phagocytic analysis. Moreover, the microdevice achieves WBC separation in under 10 min, omitting the need for RBC lysis, density gradient centrifugation, or expensive antibodies.
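Spiral inertial separation is governed by Dean flow in the curved channel. The back-of-the-envelope calculation below evaluates the channel Reynolds and Dean numbers; all dimensions and the flow rate are hypothetical placeholders, not the device's actual geometry.

```python
import math

# Illustrative values only, not the reported device geometry.
rho, mu = 1000.0, 1.0e-3           # fluid density (kg/m^3), viscosity (Pa*s)
w, h = 500e-6, 100e-6              # channel width and height (m)
R = 5e-3                           # radius of curvature of the spiral (m)
Q = 1.0e-9                         # flow rate (m^3/s), i.e. 60 µl/min

U = Q / (w * h)                    # mean flow velocity
Dh = 2 * w * h / (w + h)           # hydraulic diameter, rectangular channel
Re = rho * U * Dh / mu             # channel Reynolds number
De = Re * math.sqrt(Dh / (2 * R))  # Dean number: strength of secondary flow
print(f"Re = {Re:.2f}, De = {De:.2f}")
```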
PolySegNet: improving polyp segmentation through swin transformer and vision transformer fusion
Colorectal cancer ranks as the second most prevalent cancer worldwide, with a high mortality rate. Colonoscopy stands as the preferred procedure for diagnosing colorectal cancer. Detecting polyps at an early stage is critical for effective prevention and diagnosis. However, challenges in colonoscopic procedures often lead medical practitioners to seek support from alternative techniques for timely polyp identification. Polyp segmentation emerges as a promising approach to identify polyps in colonoscopy images. In this paper, we propose an advanced method, PolySegNet, that leverages both the Vision Transformer and the Swin Transformer, coupled with a convolutional neural network (CNN) decoder. The fusion of these models enables a comprehensive analysis of the various modules in our proposed architecture. To assess the performance of PolySegNet, we evaluate it on three colonoscopy datasets, a combined dataset, and their augmented versions. The experimental results demonstrate that PolySegNet achieves competitive polyp segmentation accuracy and efficacy, with a mean Dice score of 0.92 and a mean Intersection over Union (IoU) of 0.86. These metrics highlight the strong performance of PolySegNet in accurately delineating polyp boundaries compared with existing methods. PolySegNet has shown great promise in accurately and efficiently segmenting polyps in medical images, and the proposed method could be the foundation for a new class of transformer-based segmentation models in medical image analysis.
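The two reported metrics are straightforward to compute for binary masks; a minimal implementation is sketched below on toy arrays.

```python
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    """Dice score and IoU for binary segmentation masks (numpy arrays)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

pred = np.zeros((64, 64)); pred[10:40, 10:40] = 1   # toy predicted mask
gt = np.zeros((64, 64)); gt[15:45, 12:42] = 1       # toy ground-truth mask
print(dice_iou(pred, gt))
```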
Self-supervised learning for CT image denoising and reconstruction: a review
This article reviews self-supervised learning methods for CT image denoising and reconstruction. Deep learning has become a dominant tool in medical imaging as well as computer vision, and self-supervised learning approaches in particular have attracted great attention as a technique for learning from CT images without clean/noisy references. After briefly reviewing the fundamentals of CT image denoising and reconstruction, we examine the progress of deep learning in these areas. Finally, we focus on the theoretical and methodological evolution of self-supervised learning for image denoising and reconstruction.
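As a taste of the family of methods such a review covers, the sketch below implements one Noise2Noise-style training step in PyTorch, where a denoiser is fit between two independent noisy realizations instead of a clean target. The tiny network and synthetic patches are placeholders, not a method from the review.

```python
import torch
import torch.nn as nn

# Minimal Noise2Noise-style step: training a denoiser to map one noisy view
# of an image to another, which in expectation matches clean supervision.
denoiser = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

clean = torch.rand(8, 1, 64, 64)                   # stand-in for CT patches
noisy_a = clean + 0.1 * torch.randn_like(clean)    # two independent noisy views
noisy_b = clean + 0.1 * torch.randn_like(clean)

loss = nn.functional.mse_loss(denoiser(noisy_a), noisy_b)
opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```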
Strategies for mitigating inter-crystal scattering effects in positron emission tomography: a comprehensive review
Inter-crystal scattering (ICS) events in positron emission tomography (PET) present challenges affecting system sensitivity and image quality. Understanding the physics and factors influencing ICS occurrence is crucial for developing strategies to mitigate its impact. This review paper explores the physics behind ICS events and their occurrence within PET detectors. Various methodologies proposed for accurately identifying and recovering ICS events are introduced, including energy-based comparisons, Compton kinematics-based approaches, statistical methods, and artificial intelligence (AI) techniques. Energy-based methods offer simplicity by comparing energy depositions in crystals. Compton kinematics-based approaches utilize trajectory information to estimate the first interaction position, yielding reasonably good results. Additionally, statistical approaches and AI algorithms contribute by optimizing likelihood analysis and neural network models for improved positioning accuracy. Experimental validations and simulation studies highlight the potential of recovering ICS events to enhance PET sensitivity and image quality. In particular, AI techniques offer a promising avenue for addressing ICS challenges and improving the accuracy and resolution of PET imaging, ultimately improving diagnostic capabilities and patient outcomes. Further studies applying these approaches to real PET systems are needed to validate the theoretical results and assess the feasibility of practical implementation.
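The Compton kinematics-based approaches hinge on the relation between deposited energy and scattering angle; a minimal helper for a 511 keV annihilation photon is shown below. Kinematics-based ICS recovery compares the angle implied by each candidate ordering against the geometric angle between the interaction crystals.

```python
import math

ME_C2 = 511.0  # electron rest energy (keV)

def compton_angle_deg(e_dep, e0=511.0):
    """Scattering angle implied by an energy deposit e_dep (keV) from a photon
    of initial energy e0, via cos(theta) = 1 - m_e c^2 (1/E' - 1/E),
    with E' = e0 - e_dep the scattered photon energy."""
    e_scat = e0 - e_dep
    cos_t = 1.0 - ME_C2 * (1.0 / e_scat - 1.0 / e0)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# e.g. a 170 keV deposit in the first crystal of a 511 keV photon:
print(f"{compton_angle_deg(170.0):.1f} deg")
```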
CT synthesis with deep learning for MR-only radiotherapy planning: a review
MR-only radiotherapy planning is beneficial from the perspective of both time and safety, since it uses synthetic CT for radiotherapy dose calculation instead of real CT scans. To elevate the accuracy of treatment planning and apply the results in practice, various methods have been adopted, among which deep learning models for image-to-image translation have shown good performance by retaining domain-invariant structures while changing domain-specific details. In this paper, we present an overview of diverse deep learning approaches to MR-to-CT synthesis, divided into four classes: convolutional neural networks, generative adversarial networks, transformer models, and diffusion models. By comparing each model and analyzing the general approaches applied to this task, the potential of these models and ways to improve the current methods can be evaluated.
A review of deep learning-based reconstruction methods for accelerated MRI using spatiotemporal and multi-contrast redundancies
Accelerated magnetic resonance imaging (MRI) has played an essential role in reducing data acquisition time. Acceleration is achieved by acquiring fewer data points in k-space, which results in various artifacts in the image domain. Conventional reconstruction methods have resolved these artifacts by utilizing multi-coil information, but with limited robustness. Recently, numerous deep learning-based reconstruction methods have been developed, enabling outstanding reconstruction performance at higher acceleration. Advances in hardware and the development of specialized network architectures have produced these achievements. In addition, MRI signals contain various forms of redundant information, including multi-coil redundancy, multi-contrast redundancy, and spatiotemporal redundancy. Exploiting this redundant information in combination with deep learning approaches allows not only higher acceleration, but also well-preserved details in the reconstructed images. Consequently, this review paper introduces the basic concepts of deep learning and conventional accelerated MRI reconstruction methods, followed by a review of recent deep learning-based reconstruction methods that exploit various redundancies. Lastly, the paper concludes by discussing the challenges, limitations, and potential directions of future developments.
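To visualize why undersampling creates artifacts, the toy example below regularly discards phase-encode lines (keeping a fully sampled low-frequency center) and reconstructs by zero filling. The phantom and sampling pattern are illustrative, not from any reviewed method.

```python
import numpy as np

# Toy k-space undersampling with zero-filled reconstruction.
img = np.zeros((128, 128))
img[32:96, 48:80] = 1.0                        # stand-in anatomy

k = np.fft.fftshift(np.fft.fft2(img))          # fully sampled k-space
mask = np.zeros(k.shape, dtype=bool)
mask[::2, :] = True                            # 2x regular undersampling
mask[60:68, :] = True                          # fully sampled center lines

zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))
# zero_filled now exhibits the aliasing artifacts that deep learning-based
# reconstructions are trained to remove.
```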
Synthetic CT generation based on multi-sequence MR using CycleGAN for head and neck MRI-only planning
The purpose of this study is to investigate the influence of different magnetic resonance (MR) sequences on the accuracy of generating synthetic computed tomography (sCT) images for nasopharyngeal carcinoma based on CycleGAN. In this study, head and neck MR sequences (T1, T2, T1C, and T1DIXONC) and CT imaging data were acquired from 143 patients. The generator and discriminator of CycleGAN were improved to achieve balanced adversarial training, and a cycle-consistent structure-control term was introduced into the loss function. Four single-sequence MR images and one multi-sequence MR image were used to evaluate the accuracy of sCT. During the model testing phase, five testing scenarios were employed to further assess the mean absolute error, peak signal-to-noise ratio, structural similarity index, and root mean square error between the actual CT images and the sCT images generated by the different models. Among single-sequence MR-based sCT, T1 sequence-based sCT achieved the best results, and multi-sequence MR-based sCT achieved better results than T1 sequence-based sCT on these evaluation metrics. For dosimetric evaluation, the global gamma pass rate of MR sequence-based sCT exceeded 95% at 3%/3 mm, except for sCT based on the T2 sequence. We developed a CycleGAN method to synthesize CT from different MR sequences; this method shows encouraging potential for dosimetric evaluation.
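For orientation, the core CycleGAN ingredient is the cycle-consistency term sketched below in PyTorch. The placeholder generators stand in for the paper's improved networks, and the paper's modified loss adds a structure-control term beyond this baseline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder generators standing in for the paper's improved CycleGAN nets.
G_mr2ct = nn.Conv2d(1, 1, 3, padding=1)
G_ct2mr = nn.Conv2d(1, 1, 3, padding=1)

def cycle_consistency(mr, ct, lam=10.0):
    """Baseline CycleGAN cycle-consistency term: translate each domain across
    and back, penalizing L1 reconstruction error in both directions."""
    rec_mr = G_ct2mr(G_mr2ct(mr))
    rec_ct = G_mr2ct(G_ct2mr(ct))
    return lam * (F.l1_loss(rec_mr, mr) + F.l1_loss(rec_ct, ct))

mr = torch.rand(2, 1, 64, 64)   # stand-in MR and CT patches
ct = torch.rand(2, 1, 64, 64)
print(cycle_consistency(mr, ct).item())
```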
Evaluation of consumer-grade wireless EEG systems for brain-computer interface applications
With the growing popularity of consumer-grade electroencephalogram (EEG) devices for health, entertainment, and cognitive research, assessing their signal quality is essential. In this study, we evaluated four consumer-grade wireless, dry-electrode EEG systems widely used for brain-computer interface (BCI) research and applications, comparing them with a research-grade system. We designed an EEG phantom method that reproduced µV-level EEG signals and evaluated the five devices based on their spectral responses, the temporal patterns of event-related potentials (ERPs), and the spectral patterns of resting-state EEG. We found that the consumer-grade devices had limited bandwidth compared with the research-grade device. A late component (e.g., P300) was detectable in the consumer-grade devices, but the overall ERP temporal pattern was distorted; only one device showed an ERP temporal pattern comparable to that of the research-grade device. On the other hand, we confirmed that activation of the alpha rhythm was observable in all devices. The results provide valuable insights for researchers and developers selecting suitable EEG devices for BCI research and applications.
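One of the checks described, resting-state alpha activation, reduces to band power estimation. The sketch below does this with Welch's method on a synthetic alpha-band signal; the sampling rate and amplitudes are arbitrary assumptions, not the study's recording parameters.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
eeg = 10e-6 * np.sin(2 * np.pi * 10 * t)      # synthetic 10 Hz alpha rhythm (V)
eeg += 2e-6 * rng.standard_normal(t.size)     # broadband noise floor

f, pxx = welch(eeg, fs=fs, nperseg=int(4 * fs))
band = (f >= 8) & (f <= 13)
alpha_power = pxx[band].sum() * (f[1] - f[0])  # integrated alpha band power
print(f"alpha band power: {alpha_power:.3e} V^2")
```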
A comprehensive review on Compton camera image reconstruction: from principles to AI innovations
Compton cameras have emerged as promising tools in biomedical imaging, offering sensitive gamma-ray imaging capabilities for diverse applications. This review paper provides a comprehensive overview of the latest advancements in Compton camera image reconstruction technologies. Beginning with a discussion of the fundamental principles of Compton scattering and its relevance to gamma-ray imaging, the paper explores the key components and design considerations of Compton camera systems. We then review various image reconstruction algorithms employed in Compton camera systems, including analytical, iterative, and statistical approaches. Recent developments in machine learning-based reconstruction methods are also discussed, highlighting their potential to enhance image quality and reduce reconstruction time in biomedical applications. In particular, we focus on the challenges posed by conical back-projection in Compton camera image reconstruction and how innovative signal processing techniques have addressed these challenges to improve image accuracy and spatial resolution. Furthermore, experimental validations of Compton camera imaging in preclinical and clinical settings, including multi-tracer and whole-gamma imaging studies, are introduced. In summary, this review provides potentially useful information about the current state of the art in Compton camera image reconstruction technologies, offering a helpful guide for investigators new to this field.
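A toy version of conical back-projection onto a single plane is sketched below: each event's cone (apex at the scatter position, axis toward the absorber, half-angle from the Compton relation) increments every plane pixel it intersects. The geometry and tolerance are illustrative; real reconstructions work in 3D with detailed system modeling.

```python
import numpy as np

def backproject_cone(img, xs, ys, apex, axis, theta, z0=100.0, tol=0.02):
    """Increment pixels of the plane z = z0 that lie (within tol, radians)
    on the cone defined by apex, unit axis, and half-angle theta."""
    axis = axis / np.linalg.norm(axis)
    X, Y = np.meshgrid(xs, ys)
    D = np.stack([X - apex[0], Y - apex[1], np.full_like(X, z0 - apex[2])], -1)
    D /= np.linalg.norm(D, axis=-1, keepdims=True)
    ang = np.arccos(np.clip(D @ axis, -1.0, 1.0))
    img += (np.abs(ang - theta) < tol).astype(float)

img = np.zeros((128, 128))
xs = ys = np.linspace(-64, 64, 128)
backproject_cone(img, xs, ys, apex=np.array([0.0, 0.0, 0.0]),
                 axis=np.array([0.0, 0.0, 1.0]), theta=np.radians(30))
# Summing such rings over many events concentrates intensity at the source.
```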
Monte Carlo methods for medical imaging research
In radiation-based medical imaging research, computational modeling methods are used to design and validate imaging systems and post-processing algorithms. Monte Carlo (MC) methods are widely used for such computational modeling, as they can model systems accurately and intuitively by sampling interactions between particles and the imaging subject from known probability distributions. This article reviews the physics behind MC methods, their applications in medical imaging, and the MC codes available for medical imaging research. Additionally, potential research areas related to MC methods for medical imaging are discussed.
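The essence of MC particle transport is sampling interaction distances from known distributions. The sketch below draws photon free path lengths from the exponential attenuation law s = -ln(U)/µ; the attenuation coefficient is an illustrative value, not taken from any specific code.

```python
import numpy as np

# Core MC transport step: sample photon free path lengths from the
# exponential attenuation law. mu is illustrative (~0.2 /cm, roughly
# soft tissue at diagnostic photon energies).
rng = np.random.default_rng(0)
mu = 0.2                                      # linear attenuation (1/cm)
u = 1.0 - rng.random(100_000)                 # uniform samples in (0, 1]
paths = -np.log(u) / mu
print(paths.mean())                           # ~1/mu = 5 cm mean free path
```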
Recent advances in shape memory scaffolds and regenerative outcomes
The advent of tissue engineering (TE) technologies has revolutionized human medicine over the last few decades. Despite remarkable advances in the fabrication and development of different substrates for regenerative purposes, non-responsive static composites have been used to heal injured tissues. After being transplanted into the target sites, such grafts lose their original features, leading to a reduction in regenerative potential. Against this background, the use of shape memory polymers (SMPs), smart substrates with unique physicochemical properties, has been extended to different disciplines of regenerative medicine in recent years. These substrates are intelligent, in that they can readily change physicogeometric features such as stiffness, strain, size, and shape in response to external stimuli. It has been proposed that SMPs can readily recover their original properties after deformation, whether in the presence or absence of certain stimuli. It has also been indicated that distinct synthesis protocols are required to fabricate dynamically switchable surfaces with prominent cell-to-substrate interaction, resulting in better regulation of cell function, dynamic growth, and reparative mechanisms. Here, we scrutinize the prominent regenerative properties of SMPs in the TE and regenerative medicine fields, and discuss in detail whether and how SMPs, with their reconfigurable features and adaptability, can orchestrate certain cell behaviors.
Synapse device based neuromorphic system for biomedical applications
Despite holding valuable information, unstructured data pose challenges for efficient recognition due to the difficulty of feature extraction in traditional von Neumann architecture systems, which are limited by power and time bottlenecks. Although biological neural signals offer crucial insights, they require more effective recognition solutions owing to inherent noise and vast data volumes. Inspired by the human brain, neuromorphic systems have emerged as promising alternatives because of their parallelism, low power consumption, and error tolerance. By leveraging deep neural networks (DNNs), these systems can recognize imprecise data through two key processes: learning (feature extraction) and testing (feature matching and recognition). During the learning phase, DNNs extract unique features and store them, for example as weight changes, in synapse units. In the testing phase, new data are compared with the stored features for recognition. The parallelism of the neuromorphic system enables efficient processing of large, imprecise datasets with minimal energy consumption. Nevertheless, hardware implementation is essential for realizing the full potential of DNNs. This paper focuses on synapse devices, the core units of hardware DNN implementations, and presents a biomedical application example: a rat neural signal recognition system implemented with a synapse device-based neuromorphic system.
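The role of a synapse array can be pictured as an analogue vector-matrix multiply: input voltages applied to the rows of a conductance (weight) matrix produce summed column currents. The sketch below is a numerical caricature with arbitrary values, not a device model.

```python
import numpy as np

# Crossbar inference caricature: Ohm's law per synapse, Kirchhoff summation
# per column, i.e. i_out = v_in @ G. Values are arbitrary placeholders.
rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))    # synapse conductances (siemens)
v_in = np.array([0.10, 0.00, 0.20, 0.05])   # input (row) voltages (V)
i_out = v_in @ G                            # summed column currents (A)
print(i_out)
```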
Hybrid deep learning technique for COX-2 inhibition bioactivity detection against breast cancer disease
This study addresses the detection of COX-2 inhibition in breast cancer, targeting its role in tumor growth. The primary goal is to develop an efficient technique for precise detection of COX-2 inhibition bioactivity, with implications for identifying anti-cancer compounds and advancing breast cancer therapies. The proposed methodology uses the UNet architecture for feature extraction, enhancing accuracy. A modified chicken swarm optimization (MCSO) algorithm addresses data dimensionality by optimizing the features, and an improved Laguerre neural network (ILNN) classifies COX-2 inhibition bioactivity. Validation is performed using the ChEMBL database. The research evaluates the accuracy, precision, recall, F-measure, Matthews correlation coefficient (MCC), and Dice coefficient of the proposed method, and compares these metrics against those of contemporary methods to assess the efficiency and effectiveness of the developed technique. The results underscore the hybrid deep learning method's significance in accurately detecting COX-2 inhibition bioactivity against breast cancer and highlight its potential as a valuable tool in breast cancer drug discovery.
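Among the reported metrics, the MCC is worth spelling out, since it balances all four confusion-matrix counts; a minimal implementation with illustrative counts follows.

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

print(mcc(tp=80, tn=90, fp=10, fn=20))  # illustrative counts, not study data
```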
Adaptive augmented cubature Kalman filter/smoother for ECG denoising
Model-based Bayesian approaches have been widely applied in electrocardiogram (ECG) signal processing, where their performance relies heavily on the accurate selection of model parameters, particularly the state and measurement noise covariance matrices. In this study, we introduce an adaptive augmented cubature Kalman filter/smoother (CKF/CKS) for ECG processing, which updates the noise covariance matrices at each time step to accommodate diverse noise types and input signal-to-noise ratios (SNRs). Additionally, we incorporate the dynamic time warping technique to enhance the filter's efficiency in the presence of heart rate variability, and we propose a method to significantly reduce the computational complexity of the CKF/CKS implementation for ECG processing. The denoising performance of the proposed filter was compared with that of several nonlinear Kalman-based frameworks proposed in recent years: the extended Kalman filter/smoother (EKF/EKS), the unscented Kalman filter/smoother (UKF/UKS), and the ensemble Kalman filter/smoother (EnKF/EnKS) recently proposed for ECG enhancement. The assessment was carried out on multiple normal ECG segments extracted from different entries in the MIT-BIH Normal Sinus Rhythm Database (NSRDB), whose diverse recordings allowed us to examine the filters' denoising capabilities across various scenarios. Two kinds of noise were added to these segments: (1) stationary white Gaussian noise and (2) non-stationary real muscle artifact noise. For evaluation, four measures were employed: SNR improvement, percentage root-mean-square difference (PRD), correlation coefficient, and MSEWPRD. The findings demonstrate that the proposed algorithm outperforms the EKF/EKS, UKF/UKS, and EnKF/EnKS methods in both stationary and non-stationary environments on all four metrics.
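For context, the cubature Kalman filter propagates 2n deterministically chosen cubature points through the dynamics. The sketch below shows the standard CKF time update on a toy linear model; it omits the paper's adaptive covariance updates, augmentation, and ECG dynamics, and the model here is purely illustrative.

```python
import numpy as np

def cubature_points(x, P):
    """The 2n cubature points: x +/- sqrt(n) * columns of a square root of P."""
    n = x.size
    S = np.linalg.cholesky(P)
    offsets = np.sqrt(n) * np.hstack([S, -S])     # shape (n, 2n)
    return x[:, None] + offsets

def ckf_predict(x, P, f, Q):
    """Standard CKF time update: propagate cubature points through f,
    then recover the predicted mean and covariance with equal weights."""
    pts = cubature_points(x, P)
    prop = np.apply_along_axis(f, 0, pts)         # f applied to each column
    x_pred = prop.mean(axis=1)
    diff = prop - x_pred[:, None]
    P_pred = diff @ diff.T / pts.shape[1] + Q
    return x_pred, P_pred

# Toy linear dynamics for demonstration only.
f = lambda x: np.array([x[0] + 0.01 * x[1], x[1]])
x1, P1 = ckf_predict(np.zeros(2), np.eye(2), f, 0.01 * np.eye(2))
print(x1, P1)
```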
A novel approach to characterize the correction path features for tibia deformity correction
Preoperative correction path planning is an important preparation for obtaining the desired correction. However, a convenient and effective model for characterizing correction path features has not been proposed; in particular, how to visualize the growth process of the bone cross-section has not been investigated. In this paper, a new approach to characterizing correction path features, together with corresponding evaluation indexes, is proposed for tibia deformity correction. We represent the growth process of the new bone cross-section by a series of continuous and discrete circles. Based on the definition and assumptions of the bone cross-section, three evaluation indexes are proposed to assist the clinician in critically comparing and analyzing the feasibility of preoperative correction approaches. A motor-driven parallel external fixator (MD-PEF) was developed to verify the proposed characterization approach. Finally, the features of the correction paths generated by three correction methods are compared and analyzed. The results show that the proposed method represents the growth process of the bone cross-section well and can detect overlap between bone cross-sections. Moreover, the joint-adjustment approach for equal bone distraction generates a smooth correction path with a uniform distraction rate and effectively avoids overlap between bone cross-sections. This study is an important addition to facilitating the development of deformity correction techniques.
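A minimal version of the overlap check implied by the circle-based representation is sketched below, assuming cross-sections are modeled as circles in a common plane with one circle per discrete correction step; the centers, radii, and the check itself are illustrative stand-ins for the authors' definitions and indexes.

```python
import numpy as np

def circles_overlap(c1, r1, c2, r2):
    """True if two circular cross-sections (center, radius) intersect."""
    return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) < (r1 + r2)

# Toy discrete correction path: one circle per correction step.
centers = [(0.0, 0.0), (3.0, 0.5), (8.0, 1.5)]
radii = [2.0, 2.0, 2.0]
flags = [circles_overlap(centers[i], radii[i], centers[i + 1], radii[i + 1])
         for i in range(len(centers) - 1)]
print(flags)  # flags overlap between consecutive cross-sections on the path
```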