The Use of TheraBracelet Upper Extremity Vibrotactile Stimulation in a Child with Cerebral Palsy: A Case Report
TheraBracelet is peripheral vibrotactile stimulation applied to the affected upper extremity via a wristwatch-like wearable device during daily activities and therapy to improve upper limb function. The objective of this study was to examine the feasibility of using TheraBracelet for a child with hemiplegic cerebral palsy.
Validation of a Biomechanical Injury and Disease Assessment Platform Applying an Inertial-Based Biosensor and Axis Vector Computation
Inertial kinetics and kinematics have substantial influences on human biomechanical function. A new algorithm for Inertial Measurement Unit (IMU)-based motion tracking is presented in this work. The primary aim of this paper is to combine recent developments in improved biosensor technology with mainstream motion-tracking hardware to measure the overall performance of human movement based on joint axis-angle representations of limb rotation. This work describes an alternative approach to representing three-dimensional rotations using a normalized vector around which an identified joint angle defines the overall rotation, rather than a traditional Euler angle approach. Furthermore, IMUs allow for the direct measurement of joint angular velocities, offering the opportunity to increase the accuracy of instantaneous axis of rotation estimations. Although the axis-angle representation requires vector quotient algebra (quaternions) to define rotation, this approach may be preferred for many graphics, vision, and virtual reality software applications. The analytical method was validated with laboratory data gathered from flexion and extension knee movements of an infant dummy leg and applied to a living subject's upper limb movement. The results showed that the novel approach could reasonably handle a simple case and provide a detailed analysis of axis-angle migration. The described algorithm could play a notable role in the biomechanical analysis of human joints and points toward IMU-based biosensors that may detect pathological patterns of joint disease and injury.
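As a rough illustration of the representation described above (a minimal sketch, not the authors' algorithm), the following Python snippet converts a unit quaternion, such as one produced by a typical IMU orientation filter, into axis-angle form, and takes the instantaneous axis of rotation directly from a gyroscope angular-velocity sample; the function names are our own.

# Minimal sketch (not the authors' implementation) of the axis-angle idea:
# convert a unit quaternion from an IMU orientation filter to (axis, angle),
# and take the instantaneous rotation axis from a gyroscope reading.
import numpy as np

def quaternion_to_axis_angle(q):
    """q = (w, x, y, z), assumed normalized; returns (unit axis, angle in rad)."""
    w, v = q[0], np.asarray(q[1:], dtype=float)
    angle = 2.0 * np.arccos(np.clip(w, -1.0, 1.0))
    norm = np.linalg.norm(v)
    axis = v / norm if norm > 1e-9 else np.array([1.0, 0.0, 0.0])  # angle ~ 0: axis arbitrary
    return axis, angle

def instantaneous_axis(omega):
    """omega: gyroscope angular velocity (rad/s); its direction gives the
    instantaneous axis of rotation, its magnitude the rotation speed."""
    speed = np.linalg.norm(omega)
    return (omega / speed, speed) if speed > 1e-9 else (np.zeros(3), 0.0)

# Example: a 90-degree rotation about the z-axis.
axis, angle = quaternion_to_axis_angle([np.sqrt(0.5), 0.0, 0.0, np.sqrt(0.5)])
print(axis, np.degrees(angle))   # approximately [0, 0, 1] and 90 degrees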
Efficient Training on Alzheimer's Disease Diagnosis with Learnable Weighted Pooling for 3D PET Brain Image Classification
Three-dimensional convolutional neural networks (3D CNNs) have been widely applied to analyze Alzheimer's disease (AD) brain images for a better understanding of disease progression or for predicting the conversion from cognitively unimpaired (CU) or mild cognitive impairment status. It is well known that training a 3D CNN is computationally expensive and prone to overfitting due to the small sample sizes available in the medical imaging field. Here we propose a novel 3D-2D approach that converts a 3D brain image into a 2D fused image using a Learnable Weighted Pooling (LWP) method, improving training efficiency while maintaining comparable model performance. With the 3D-to-2D conversion, the proposed model can easily forward the fused 2D image through a pre-trained 2D model while achieving better performance than various 3D and 2D baselines. In the implementation, we chose ResNet34 for feature extraction, as it outperformed other 2D CNN backbones. We further show that the weights of the slices are location-dependent and that the model performance relies on the 3D-to-2D fusion view, with the best outcomes obtained from the coronal view. With the new approach, we reduced the training time by 75% and increased the accuracy to 0.88, compared with conventional 3D CNNs, for classifying amyloid-beta PET images of AD patients versus CU participants using the publicly available Alzheimer's Disease Neuroimaging Initiative dataset. The novel 3D-2D model may have profound implications for timely AD diagnosis in clinical settings in the future.
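As a rough illustration of the idea (not the authors' implementation), the PyTorch sketch below assigns one learnable weight per slice along the chosen view axis, fuses the weighted slices into a single 2D image, and forwards it through a ResNet34 backbone; the slice count, softmax normalization, and image size are assumptions made for the example.

# Illustrative sketch of learnable weighted pooling (LWP) for 3D-to-2D fusion.
import torch
import torch.nn as nn
from torchvision.models import resnet34

class LearnableWeightedPooling(nn.Module):
    def __init__(self, n_slices):
        super().__init__()
        self.slice_weights = nn.Parameter(torch.ones(n_slices) / n_slices)

    def forward(self, volume):                          # volume: (B, D, H, W), D = slices
        w = torch.softmax(self.slice_weights, dim=0)    # location-dependent slice weights
        fused = torch.einsum('d,bdhw->bhw', w, volume)  # weighted sum -> one 2D image
        return fused.unsqueeze(1).repeat(1, 3, 1, 1)    # 3 channels for a 2D backbone

n_slices = 96                                           # e.g., coronal slices (assumed)
model = nn.Sequential(LearnableWeightedPooling(n_slices), resnet34(num_classes=2))
logits = model(torch.randn(4, n_slices, 128, 128))      # AD vs. CU logits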
ROENet: A ResNet-Based Output Ensemble for Malaria Parasite Classification
People may be infected with malaria, an insect-borne disease, through transfusion of blood from infected individuals or through the bite of Anopheles mosquitoes. Diagnosing malaria requires a great deal of a doctor's time and effort, and the results are not always reliable. Many researchers have used convolutional neural networks (CNNs) to classify malaria images. However, we believe that the classification performance for malaria parasites can be improved.
A Hybrid Framework for Lung Cancer Classification
Cancer is the second leading cause of death worldwide, and the death rate of lung cancer is much higher than that of other cancer types. In recent years, numerous novel computer-aided diagnostic techniques based on deep learning have been designed to detect lung cancer in its early stages. However, deep learning models are prone to overfitting, which degrades performance. To address this problem in lung cancer classification tasks, we propose a hybrid framework called LCGANT. Specifically, our framework contains two main parts. The first part is a lung cancer deep convolutional GAN (LCGAN) that generates synthetic lung cancer images. The second part is a regularization-enhanced transfer learning model called VGG-DF that classifies lung cancer images into three classes. Our framework achieves 99.84% ± 0.156% accuracy, 99.84% ± 0.153% precision, 99.84% ± 0.156% sensitivity, and a 99.84% ± 0.156% F1-score. These results represent the highest performance reported on this dataset for the lung cancer classification task. The proposed framework resolves the overfitting problem for lung cancer classification tasks, and it achieves better performance than other state-of-the-art methods.
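The sketch below illustrates only the transfer-learning half of such a framework: a VGG16 backbone with a dropout-regularized classification head for three lung-tissue classes, trained on real plus GAN-generated images. The abstract does not specify the internals of VGG-DF, so the head shown here is an assumption for illustration, not the paper's architecture.

# Hedged sketch of a regularization-enhanced VGG transfer-learning classifier.
import torch
import torch.nn as nn
from torchvision.models import vgg16

backbone = vgg16(weights='IMAGENET1K_V1')
for p in backbone.features.parameters():      # freeze the convolutional features
    p.requires_grad = False
backbone.classifier = nn.Sequential(          # dropout-regularized head (assumed)
    nn.Linear(25088, 512), nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(512, 3),                        # three lung-tissue classes
)
logits = backbone(torch.randn(2, 3, 224, 224))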
Energy-Efficient Respiratory Anomaly Detection in Premature Newborn Infants
Precise monitoring of respiratory rate in premature newborn infants is essential to initiating medical interventions as required. Wired technologies can be invasive and obtrusive to the patients. We propose a deep-learning-enabled wearable monitoring system for premature newborn infants, in which respiratory cessation is predicted using signals collected wirelessly from a non-invasive wearable Bellypatch placed on the infant's body. We propose a five-stage design pipeline involving data collection and labeling, feature scaling, deep learning model selection with hyperparameter tuning, model training and validation, and model testing and deployment. The model used is a 1-D convolutional neural network (1DCNN) architecture with one convolution layer, one pooling layer, and three fully connected layers, achieving 97.15% classification accuracy. To address the energy limitations of wearable processing, several quantization techniques are explored, and their performance and energy consumption are analyzed for the respiratory classification task. Results demonstrate a reduction in energy footprint and model storage overhead, but with considerable degradation of classification accuracy, indicating that quantization and other model compression techniques are not the best solution for the respiratory classification problem on wearable devices. To improve accuracy while reducing energy consumption, we propose a novel spiking neural network (SNN)-based respiratory classification solution, which can be implemented on event-driven neuromorphic hardware platforms. To this end, we propose an approach to convert the analog operations of our baseline trained 1DCNN to their spiking equivalent. We perform a design-space exploration using the parameters of the converted SNN to generate inference solutions having different accuracy and energy footprints. We select a solution that achieves an accuracy of 93.33% with 18× lower energy compared to the baseline 1DCNN model. Additionally, the proposed SNN solution achieves accuracy similar to that of the quantized model with 4× lower energy.
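A minimal sketch of the described 1DCNN baseline (one convolution layer, one pooling layer, three fully connected layers) is shown below; the channel counts, kernel size, and input length are illustrative assumptions rather than the paper's values.

# Illustrative 1D CNN with one conv layer, one pooling layer, three FC layers.
import torch
import torch.nn as nn

class RespCNN1D(nn.Module):
    def __init__(self, in_len=128):
        super().__init__()
        self.conv = nn.Conv1d(1, 16, kernel_size=5, padding=2)
        self.pool = nn.MaxPool1d(2)
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * (in_len // 2), 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2),                  # normal vs. respiratory cessation
        )

    def forward(self, x):                      # x: (batch, 1, in_len)
        return self.fc(self.pool(torch.relu(self.conv(x))))

model = RespCNN1D()
probs = torch.softmax(model(torch.randn(8, 1, 128)), dim=1)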
Human-Mimetic Estimation of Food Volume from a Single-View RGB Image Using an AI System
It is well known that many chronic diseases are associated with an unhealthy diet. Although improving diet is critical, adopting a healthy diet is difficult despite its benefits being well understood. Technology is needed to assess dietary intake accurately and easily in real-world settings so that effective interventions to manage overweight, obesity, and related chronic diseases can be developed. In recent years, new wearable imaging and computational technologies have emerged. These technologies are capable of performing objective and passive dietary assessments with a much simpler procedure than traditional questionnaires. However, a critical task remains: estimating the portion size (in this case, the food volume) from a digital image. Currently, this task is very challenging because the volumetric information in two-dimensional images is incomplete, and the estimation involves a great deal of imagination, beyond the capacity of traditional image processing algorithms. In this work, we present a novel Artificial Intelligence (AI) system to mimic the thinking of dietitians, who use a set of common objects as gauges (e.g., a teaspoon, a golf ball, a cup, and so on) to estimate portion size. Specifically, our human-mimetic system "mentally" gauges the volume of food using a set of internal reference volumes that have been learned previously. At the output, our system produces a vector of probabilities of the food with respect to the internal reference volumes. The estimation is then completed by an "intelligent guess", implemented as an inner product between the probability vector and the reference volume vector. Our experiments using both virtual and real food datasets have shown accurate volume estimation results.
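The final "intelligent guess" step can be sketched in a few lines: the probability vector produced by the system is combined with the learned reference volumes through an inner product. The specific reference items, volumes, and probabilities below are made-up numbers used only to illustrate the computation.

# Illustrative inner-product volume estimate (values are assumptions).
import numpy as np

reference_volumes_ml = np.array([5.0, 40.0, 240.0, 500.0])   # e.g., teaspoon, golf ball, cup, bottle
probabilities = np.array([0.05, 0.15, 0.70, 0.10])           # network output for one food image

# "Intelligent guess": inner product of probability and reference-volume vectors.
estimated_volume_ml = float(probabilities @ reference_volumes_ml)
print(estimated_volume_ml)   # about 224 ml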
Scattering from Spheres: A New Look into an Old Problem
In this work, we introduce a theoretical framework to describe the scattering from spheres. In our proposed framework, the total field in the outer medium is decomposed in terms of inward and outward electromagnetic fields, rather than in terms of incident and scattered fields as in the classical Lorenz-Mie formulation. The fields are expressed as series of spherical harmonics, whose combination weights can be interpreted as reflection and transmission coefficients, which provides an intuitive understanding of the propagation and scattering phenomena. Our formulation extends the previously proposed theory of non-uniform transmission lines by introducing an expression for impedance transfer, which yields a closed-form solution for the fields inside and outside the sphere. The power transmitted into and scattered by the sphere can also be evaluated with a simple closed-form expression and related to the modulus of the reflection coefficient. We show that our method is fully consistent with the classical Mie scattering theory. We also show that our method can provide an intuitive physical interpretation of electromagnetic scattering in terms of impedance matching and resonances, and that it is especially useful for the case of inward-traveling spherical waves generated by sources surrounding the scatterer.
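As a schematic illustration only, in generic notation that may differ from the paper's, the decomposition can be written per spherical harmonic order n as a sum of inward- and outward-traveling spherical waves, whose ratio at the sphere surface r = a defines a reflection coefficient, with the scattered power tied to its squared modulus:

% Schematic sketch (generic notation, assumed e^{-i\omega t} convention).
\begin{align}
  E_n(r) &= A_n^{\mathrm{in}}\,\hat{h}_n^{(2)}(kr) \;+\; A_n^{\mathrm{out}}\,\hat{h}_n^{(1)}(kr),
           \qquad r \ge a, \\
  \Gamma_n &= \left.\frac{A_n^{\mathrm{out}}}{A_n^{\mathrm{in}}}\right|_{r=a}, \\
  P_n^{\mathrm{scat}} &\propto \lvert \Gamma_n \rvert^{2}\, P_n^{\mathrm{in}},
\end{align}

with the total exterior field recovered by summing over the spherical harmonic orders n.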
Weighted Random Forests to Improve Arrhythmia Classification
Construction of an ensemble model is a process of combining many diverse base predictive learners. This raises the questions of how to weight each model and how to tune the parameters of the weighting process. The most straightforward approach is simply to average the base models. However, numerous studies have shown that a weighted ensemble can provide superior prediction results to a simple average of models. The main goals of this article are to propose a new weighting algorithm applicable to each tree in the Random Forest model and to comprehensively examine optimal parameter tuning. Importantly, the approach is motivated by its flexibility, good performance, stability, and resistance to overfitting. The proposed scheme is examined and evaluated on the PhysioNet/Computing in Cardiology Challenge 2015 data set. It consists of signals (electrocardiograms and pulsatile waveforms) from intensive care patients that triggered an alarm for five cardiac arrhythmia types (Asystole, Bradycardia, Tachycardia, Ventricular Tachycardia, and Ventricular Flutter/Fibrillation). The classification problem concerns whether the alarm should or should not have been generated. It was shown that the proposed weighting approach improved classification accuracy for the three most challenging of the five investigated arrhythmias compared to the standard Random Forest model.
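As a hedged sketch of the general idea (the paper's actual weighting algorithm and parameter tuning are more elaborate), the Python snippet below weights each tree of a scikit-learn Random Forest by its accuracy on a held-out split and combines the trees by a weighted soft vote.

# Illustrative per-tree weighting of a Random Forest (not the paper's algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Weight each tree by its accuracy on the validation split (an illustrative proxy).
weights = np.array([tree.score(X_val, y_val) for tree in forest.estimators_])
weights /= weights.sum()

# Weighted soft vote across the trees.
proba = sum(w * tree.predict_proba(X_val) for w, tree in zip(weights, forest.estimators_))
y_pred = proba.argmax(axis=1)
print("Weighted-forest accuracy:", (y_pred == y_val).mean())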
Development of a Multisensory Wearable System for Monitoring Cigarette Smoking Behavior in Free-Living Conditions
This paper presents the development and validation of a novel multi-sensory wearable system (Personal Automatic Cigarette Tracker v2 or PACT2.0) for monitoring of cigarette smoking in free-living conditions. The contributions of the PACT2.0 system are: (1) the implementation of a complete sensor suite for monitoring of all major behavioral manifestations of cigarette smoking (lighting events, hand-to-mouth gestures, and smoke inhalations); (2) a miniaturization of the sensor hardware to enable its applicability in naturalistic settings; and (3) an introduction of new sensor modalities that may provide additional insight into smoking behavior (e.g., Global Positioning System (GPS), pedometer, and electrocardiogram (ECG)) or provide an easy-to-use alternative (e.g., bio-impedance respiration sensor) to traditional sensors. PACT2.0 consists of three custom-built devices: an instrumented lighter, a hand module, and a chest module. The instrumented lighter is capable of recording the time and duration of all lighting events. The hand module integrates an Inertial Measurement Unit (IMU) and a Radio Frequency (RF) transmitter to track hand-to-mouth gestures. The module also operates as a pedometer. The chest module monitors breathing (smoke inhalation) patterns (inductive and bio-impedance respiratory sensors), cardiac activity (ECG sensor), chest movement (three-axis accelerometer), and hand-to-mouth proximity (RF receiver), and captures the geo-position of the subject (GPS receiver). The accuracy of the PACT2.0 sensors was evaluated in bench tests and laboratory experiments. Use of PACT2.0 for data collection in the community was validated in a 24 h study on 40 smokers. Of 943 h of recorded data, 98.6% were found usable for computer analysis. The recorded information included 549 lighting events, 522/504 consumed cigarettes (from lighter data/self-registered data, respectively), 20,158/22,207 hand-to-mouth gestures (from hand IMU/proximity sensor, respectively), and 114,217/112,175 breaths (from the respiratory inductive plethysmograph (RIP)/bio-impedance sensor, respectively). The proposed system scored 8.3 ± 0.31 out of 10 on a post-study acceptability survey. The results suggest that PACT2.0 presents a reliable platform for studying smoking behavior at the community level.
Design of a 1 DOF MEMS Motion Stage for a Parallel Plane Geometry Rheometer
Rotational rheometers are used to measure paste properties, but the test takes too long to be useful for quality control (QC) on the job site. In this paper, a new type of rheometer is proposed based on a one degree of freedom (DOF) micro-electro-mechanical systems (MEMS)-based motion stage. Preliminary data are presented to show the capability of the system to measure the viscoelastic properties of a paste. The parallel plate geometry rheometer consists of two plates, which move relative to each other to apply a strain to the material under test. From the stress measured and the strain applied, the rheological characteristics of the material can be calculated. The new device consists of an electrothermal actuator and a motion plate. For the rheological measurements, the device is designed to generate shear stresses of up to 60 Pa while keeping its stiffness below 44 N/m. With these features, the device uses a 1.5 mm × 1.5 mm square plate to provide enough area for sample volumes of a few microliters. The motion of the square plate is monitored by a capacitive sensor at the end of the oscillating plate, with a resolution of 1.06 μm. When a reference cementitious paste, Standard Reference Material (SRM)-2492, is placed between the oscillating plate of the presented motion stage and a fixed plate, the reduction in the displacement of the oscillating plate is monitored, showing that the presented motion stage is suitably designed to detect the response of the reference cementitious paste.
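For illustration only (the gap, displacement, and phase lag below are assumptions, not measurements from the paper), the small calculation shows how a complex shear modulus would follow from the applied oscillatory stress and the strain inferred from the capacitive displacement reading.

# Illustrative oscillatory-rheology calculation with assumed values.
import numpy as np

gap_m = 50e-6                   # assumed plate gap
plate_disp_m = 1.06e-6          # smallest resolvable displacement (sensor resolution)
tau0_pa = 60.0                  # peak shear stress the actuator can generate
delta_rad = np.deg2rad(30.0)    # assumed phase lag between stress and strain

gamma0 = plate_disp_m / gap_m                 # shear strain amplitude
g_star = tau0_pa / gamma0                     # complex modulus magnitude |G*|
g_prime = g_star * np.cos(delta_rad)          # storage modulus G'
g_double_prime = g_star * np.sin(delta_rad)   # loss modulus G''
print(f"|G*| = {g_star:.0f} Pa, G' = {g_prime:.0f} Pa, G'' = {g_double_prime:.0f} Pa")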
Laboratory Validation of Inertial Body Sensors to Detect Cigarette Smoking Arm Movements
Cigarette smoking remains the leading cause of preventable death in the United States. Traditional in-clinic cessation interventions may fail to intervene and interrupt the rapid progression to relapse that typically occurs following a quit attempt. The ability to detect actual smoking behavior in real time is a measurement challenge for health behavior research and intervention. The successful detection of real-time smoking through mobile health (mHealth) methodology has substantial implications for developing highly efficacious treatment interventions. The current study aimed to further develop and test the ability of inertial sensors to detect cigarette smoking arm movements among smokers. The study involved four smokers who smoked six cigarettes each in a laboratory-based assessment. Participants were outfitted with four inertial body movement sensors on the arms, which were used to detect smoking events at two levels: the puff level and the cigarette level. Two different algorithms (Support Vector Machines (SVM) and Edge-Detection-based learning) were trained to detect the features of arm movement sequences transmitted by the sensors that corresponded to each level. The results showed that performance of the SVM algorithm at the cigarette level exceeded detection at the individual puff level, with low rates of false positive puff detection. The current study is the second in a line of programmatic research demonstrating the proof of concept for sensor-based tracking of smoking based on movements of the arm and wrist. This study demonstrates efficacy in a real-world clinical inpatient setting and is the first to provide a detection rate against direct observation, enabling calculation of true and false positive rates. The study results indicate that the approach performs very well with some participants, whereas challenges remain with participants who generate more frequent non-smoking movements near the face. Future work may allow for tracking smoking in real-world environments, which would facilitate the development of more effective, just-in-time smoking cessation interventions.
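A hedged sketch of the SVM branch is given below: simple statistical features are extracted from windowed arm-sensor data and fed to an RBF-kernel SVM that labels windows as "puff" versus "non-puff" movements. The feature set, window length, and synthetic data are illustrative assumptions, not the study's pipeline.

# Illustrative SVM on windowed IMU features (synthetic stand-in data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(accel_window):
    """accel_window: (n_samples, 3) accelerometer segment -> simple statistics."""
    return np.concatenate([accel_window.mean(axis=0),
                           accel_window.std(axis=0),
                           [np.linalg.norm(accel_window, axis=1).max()]])

rng = np.random.default_rng(0)                       # stand-in for real sensor data
X = np.array([window_features(rng.normal(size=(100, 3))) for _ in range(200)])
y = rng.integers(0, 2, size=200)                     # 1 = puff gesture, 0 = other

clf = make_pipeline(StandardScaler(), SVC(kernel='rbf')).fit(X, y)
print("Predicted labels for first 5 windows:", clf.predict(X[:5]))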
Use of a Wireless Network of Accelerometers for Improved Measurement of Human Energy Expenditure
Single, hip-mounted accelerometers can provide accurate measurements of energy expenditure (EE) in some settings, but they are unable to accurately estimate the energy cost of many non-ambulatory activities. A multi-sensor network may be able to overcome the limitations of a single accelerometer. Thus, the purpose of our study was to compare the abilities of a wireless network of accelerometers and a hip-mounted accelerometer for the prediction of EE. Thirty adult participants engaged in 14 different sedentary, ambulatory, lifestyle, and exercise activities for five minutes each while wearing a portable metabolic analyzer, a hip-mounted accelerometer (AG), and a wireless network of three accelerometers (WN) worn on the right wrist, thigh, and ankle. Artificial neural networks (ANNs) were created separately for the AG and WN for EE prediction. Pearson correlations (r) and the root mean square error (RMSE) were calculated to compare criterion-measured EE to predicted EE from the ANNs. Overall, correlations were higher (r = 0.95 vs. 0.88, p < 0.0001) and RMSE was lower (1.34 vs. 1.97 metabolic equivalents (METs), p < 0.0001) for the WN than for the AG. In conclusion, the WN outperformed the AG for measuring EE, providing evidence that the WN can provide highly accurate estimates of EE in adults participating in a wide range of activities.
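The evaluation pipeline can be sketched as follows: a small feed-forward ANN maps accelerometer-derived features to EE in METs and is scored with the Pearson correlation and RMSE against criterion EE. The synthetic data and network size below are assumptions used only to illustrate the computation, not the study's models.

# Illustrative ANN-based EE prediction with Pearson r and RMSE scoring.
import numpy as np
from scipy.stats import pearsonr
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 9))                    # e.g., wrist/thigh/ankle count features (assumed)
y = 1.5 + X @ rng.normal(size=9) * 0.3 + rng.normal(scale=0.5, size=300)   # METs

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
y_hat = ann.predict(X)

r, _ = pearsonr(y, y_hat)
rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
print(f"r = {r:.2f}, RMSE = {rmse:.2f} METs")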
Automatic Measurement of Chew Count and Chewing Rate during Food Intake
Research suggests that there may be a relationship between chew count, chewing rate, and energy intake. Chewing has been used in wearable sensor systems for the automatic detection of food intake, but little work has been reported on the automatic measurement of chew count or chewing rate. This work presents a method for the automatic quantification of chewing episodes captured by a piezoelectric sensor system. The proposed method was tested on 120 meals from 30 participants using two approaches. In a semi-automatic approach, histogram-based peak detection was used to count the number of chews in manually annotated chewing segments, resulting in a mean absolute error of 10.40% ± 7.03%. In a fully automatic approach, automatic food intake recognition preceded the application of the chew counting algorithm. The sensor signal was divided into 5-s non-overlapping epochs. Leave-one-out cross-validation was used to train an artificial neural network (ANN) to classify epochs as "food intake" or "no intake", with an average F1 score of 91.09%. Chews were counted in epochs classified as food intake with a mean absolute error of 15.01% ± 11.06%. The proposed methods were compared with manual chew counts using an analysis of variance (ANOVA), which showed no statistically significant difference between the two methods. Results suggest that the proposed method can provide objective and automatic quantification of eating behavior in terms of chew counts and chewing rates.
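A hedged sketch of histogram-based peak counting on a chewing-segment signal is shown below; the sampling rate, threshold percentile, and synthetic signal are illustrative assumptions rather than the paper's parameters.

# Illustrative histogram-based chew counting on a synthetic piezoelectric signal.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                           # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Stand-in for a chewing segment (~1.5 chews per second) with noise.
signal = np.sin(2 * np.pi * 1.5 * t) + 0.2 * np.random.default_rng(0).normal(size=t.size)

# Histogram-derived amplitude threshold (85th-percentile bin, an assumed choice).
counts, edges = np.histogram(signal, bins=50)
cumulative = np.cumsum(counts) / counts.sum()
threshold = edges[np.searchsorted(cumulative, 0.85)]

# Count peaks above the threshold, enforcing a minimum spacing between chews.
peaks, _ = find_peaks(signal, height=threshold, distance=int(0.3 * fs))
chew_count = len(peaks)
chewing_rate_hz = chew_count / (t[-1] - t[0])
print(chew_count, round(chewing_rate_hz, 2))       # roughly 15 chews at ~1.5 chews/s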