Source-free collaborative domain adaptation via multi-perspective feature enrichment for functional MRI analysis
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to analyze neurological disorders, but cross-site/domain data heterogeneity arises from site effects such as differences in scanners and protocols. Existing domain adaptation methods that reduce fMRI heterogeneity generally require access to source-domain data, which is challenging due to privacy concerns and/or data storage burdens. To this end, we propose a source-free collaborative domain adaptation (SCDA) framework that uses only a pretrained source model and unlabeled target data. Specifically, a multi-perspective feature enrichment method (MFE) is developed to dynamically exploit target fMRIs from multiple views. To facilitate efficient source-to-target knowledge transfer without accessing source data, we initialize MFE using the parameters of the pretrained source model. We also introduce an unsupervised pretraining strategy using 3,806 unlabeled fMRIs from three large-scale auxiliary databases. Experimental results on three public datasets and one private dataset demonstrate the efficacy of our method in cross-scanner and cross-study prediction.
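For concreteness, below is a minimal sketch of the general source-free recipe the abstract describes: start from the pretrained source weights and adapt on unlabeled target batches only. The entropy-minimization objective, the loader yielding unlabeled feature batches, and all hyperparameters are illustrative assumptions; SCDA's actual multi-view MFE objective is richer than this.

```python
# Minimal source-free adaptation sketch (assumed PyTorch model whose
# weights were already loaded from the source checkpoint).
import torch
import torch.nn.functional as F

def adapt_source_free(model, target_loader, steps=100, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for step, x in zip(range(steps), target_loader):  # x: unlabeled target batch
        probs = F.softmax(model(x), dim=1)
        # Entropy minimization: encourage confident target predictions
        # without ever touching source data.
        loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```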
MS-TCRNet: Multi-Stage Temporal Convolutional Recurrent Networks for Action Segmentation Using Sensor-Augmented Kinematics
Action segmentation is a challenging task in high-level process analysis, typically performed on video or kinematic data obtained from various sensors. This work presents two contributions related to action segmentation on kinematic data. First, we introduce two versions of Multi-Stage Temporal Convolutional Recurrent Networks (MS-TCRNet), specifically designed for kinematic data. The architectures consist of a prediction generator with intra-stage regularization and bidirectional LSTM- or GRU-based refinement stages. Second, we propose two new data augmentation techniques, World Frame Rotation and Hand Inversion, which exploit the strong geometric structure of kinematic data to improve algorithm performance and robustness. We evaluate our models on three datasets of surgical suturing tasks: the Variable Tissue Simulation (VTS) Dataset and the newly introduced Bowel Repair Simulation (BRS) Dataset, both open surgery simulation datasets collected by us, as well as the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), a well-known benchmark in robotic surgery. Our methods achieve state-of-the-art performance. Code: https://github.com/AdamGoldbraikh/MS-TCRNet.
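A world-frame rotation augmentation of the kind named above can be sketched as applying one random rigid rotation to every 3D sample in a sequence. The array layout (time, joints, xyz), angle range, and Euler parameterization are assumptions for illustration; the paper's exact formulation may differ (see the repository for the authors' implementation).

```python
# Sketch of a world-frame rotation augmentation for kinematic data,
# assuming positions of shape (T, J, 3).
import numpy as np
from scipy.spatial.transform import Rotation

def world_frame_rotation(kinematics, max_deg=15.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Sample one small random rotation of the world frame.
    angles = rng.uniform(-max_deg, max_deg, size=3)
    R = Rotation.from_euler("xyz", angles, degrees=True).as_matrix()
    # The same rotation is applied to every joint at every time step,
    # preserving the geometric structure of the motion.
    return kinematics @ R.T
```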
Assessment of Volumetric Dense Tissue Segmentation in Tomosynthesis Using Deep Virtual Clinical Trials
The adoption of artificial intelligence (AI) in medical imaging requires careful evaluation of machine-learning algorithms. We propose the use of a "deep virtual clinical trial" (DeepVCT) method to effectively evaluate the performance of AI algorithms. In this paper, DeepVCTs are proposed to elucidate the limitations of AI applications and predict clinical outcomes while avoiding biases in study design. The DeepVCT method was used to evaluate the performance of nnU-Net models in assessing volumetric breast density (VBD) from digital breast tomosynthesis (DBT) images. In total, 2,010 anatomical breast models were simulated. Projections were simulated using the acquisition geometry of a clinical DBT system and reconstructed using 0.1, 0.2, and 0.5 mm plane spacing. nnU-Net models were developed using the center-most planes of the reconstructions with the respective ground truth. The results show that the accuracy of the nnU-Net improves significantly with DBT images reconstructed at 0.1 mm plane spacing (78.4×205.3×40.1 mm). The segmentations resulted in Dice values up to 0.84 and an area under the receiver operating characteristic curve of 0.92. The optimization of plane spacing for VBD assessment was used as an exemplar of a DeepVCT application, allowing us to better interpret the input parameters and outcomes of the nnU-Net. Thus, DeepVCTs can provide a plethora of evidence to predict the efficacy of these algorithms using large-scale simulation-based data.
Improving Image Segmentation with Contextual and Structural Similarity
Deep learning models for medical image segmentation are usually trained with voxel-wise losses, e.g., cross-entropy loss, focusing on unary supervision without considering inter-voxel relationships. This oversight potentially leads to semantically inconsistent predictions. Here, we propose a contextual similarity loss (CSL) and a structural similarity loss (SSL) to explicitly and efficiently incorporate inter-voxel relationships for improved performance. The CSL promotes consistency in predicted object categories for each image sub-region compared to ground truth. The SSL enforces compatibility between the predictions of voxel pairs by computing pair-wise distances between them, ensuring that voxels of the same class are close together whereas those from different classes are separated by a wide margin in the distribution space. The effectiveness of the CSL and SSL is evaluated using a clinical cone-beam computed tomography (CBCT) dataset of patients with various craniomaxillofacial (CMF) deformities and a public pancreas dataset. Experimental results show that the CSL and SSL outperform state-of-the-art regional loss functions in preserving segmentation semantics.
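The pair-wise idea behind the SSL can be illustrated with a contrastive-style loss over sampled voxel pairs: same-class pairs are pulled together and different-class pairs pushed beyond a margin. This is a generic sketch under assumed tensor layouts, not the paper's exact formulation.

```python
# Sketch of a pair-wise structural loss: embeddings (N, D), labels (N,).
import torch
import torch.nn.functional as F

def structural_similarity_loss(embeddings, labels, margin=1.0, n_pairs=256):
    idx = torch.randint(0, embeddings.size(0), (n_pairs, 2))
    a, b = embeddings[idx[:, 0]], embeddings[idx[:, 1]]
    same = (labels[idx[:, 0]] == labels[idx[:, 1]]).float()
    dist = F.pairwise_distance(a, b)
    # Same-class pairs: minimize distance; different-class pairs:
    # penalize only when closer than the margin.
    return (same * dist.pow(2)
            + (1 - same) * F.relu(margin - dist).pow(2)).mean()
```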
Federated learning for medical image analysis: A survey
Machine learning in medical imaging often faces a fundamental dilemma, namely, the small sample size problem. Many recent studies suggest using multi-domain data pooled from different acquisition sites/centers to improve statistical power. However, medical images from different sites cannot be easily shared to build large datasets for model training due to privacy protection reasons. As a promising solution, federated learning, which enables collaborative training of machine learning models based on data from different sites without cross-site data sharing, has attracted considerable attention recently. In this paper, we conduct a comprehensive survey of the recent development of federated learning methods in medical image analysis. We have systematically gathered research papers on federated learning and its applications in medical image analysis published between 2017 and 2023. Our search and compilation were conducted using databases from IEEE Xplore, ACM Digital Library, ScienceDirect, SpringerLink, Web of Science, Google Scholar, and PubMed. In this survey, we first introduce the background of federated learning for dealing with privacy protection and collaborative learning issues. We then present a comprehensive review of recent advances in federated learning methods for medical image analysis. Specifically, existing methods are categorized based on three critical aspects of a federated learning system, including the client end, the server end, and communication techniques. In each category, we summarize the existing federated learning methods according to specific research problems in medical image analysis and also provide insights into the motivations of different approaches. In addition, we provide a review of existing benchmark medical imaging datasets and software platforms for current federated learning research. We also conduct an experimental study to empirically evaluate typical federated learning methods for medical image analysis. This survey can help to better understand the current research status, challenges, and potential research opportunities in this promising research field.
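The client-end/server-end split the survey uses as its taxonomy can be made concrete with a minimal FedAvg round: clients train locally on private data and the server only averages weights. This is a bare-bones sketch; real medical-imaging deployments add secure aggregation, heterogeneity handling, and communication compression.

```python
# Minimal FedAvg round (sketch): clients never share raw images,
# only model weights, which the server averages.
import copy
import torch
import torch.nn.functional as F

def federated_round(global_model, client_loaders, local_steps=1, lr=1e-3):
    client_states = []
    for loader in client_loaders:                 # client end
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_steps):
            for x, y in loader:
                loss = F.cross_entropy(local(x), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        client_states.append(local.state_dict())
    # Server end: element-wise average of client weights. (Integer
    # buffers such as batch-norm counters would need special handling.)
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```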
Longitudinal Prediction of Postnatal Brain Magnetic Resonance Images via a Metamorphic Generative Adversarial Network
Missing scans are inevitable in longitudinal studies due to either subject dropouts or failed scans. In this paper, we propose a deep learning framework to predict missing scans from acquired scans, catering to longitudinal infant studies. Prediction of infant brain MRI is challenging owing to the rapid contrast and structural changes, particularly during the first year of life. We introduce a trustworthy metamorphic generative adversarial network (MGAN) for translating infant brain MRI from one time-point to another. MGAN has three key features: (i) image translation leveraging spatial and frequency information for detail-preserving mapping; (ii) a quality-guided learning strategy that focuses attention on challenging regions; and (iii) a multi-scale hybrid loss function that improves translation of image contents. Experimental results indicate that MGAN outperforms existing GANs by accurately predicting both tissue contrasts and anatomical details.
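Feature (i), combining spatial and frequency information, can be sketched as a loss with one image-space term and one FFT-magnitude term. The volume layout, weighting, and use of plain L1 terms are assumptions for illustration; MGAN's multi-scale hybrid loss is more elaborate.

```python
# Sketch of a spatial + frequency translation loss, assuming 3D volumes
# of shape (N, C, D, H, W).
import torch

def spatial_frequency_loss(pred, target, alpha=0.5):
    # Spatial term: voxel-wise L1 preserves local detail.
    spatial = (pred - target).abs().mean()
    # Frequency term: L1 between FFT magnitudes over the spatial dims
    # preserves global contrast and texture statistics.
    dims = (-3, -2, -1)
    freq = (torch.fft.fftn(pred, dim=dims).abs()
            - torch.fft.fftn(target, dim=dims).abs()).abs().mean()
    return alpha * spatial + (1 - alpha) * freq
```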
AGMN: Association Graph-based Graph Matching Network for Coronary Artery Semantic Labeling on Invasive Coronary Angiograms
Semantic labeling of coronary arterial segments in invasive coronary angiography (ICA) is important for automated assessment and report generation of coronary artery stenosis in computer-aided coronary artery disease (CAD) diagnosis. However, separating and identifying individual coronary arterial segments is challenging due to morphological similarities among different branches of the coronary arterial tree and human-to-human variability. Inspired by the training procedure of interventional cardiologists for interpreting the structure of coronary arteries, we propose an association graph-based graph matching network (AGMN) for coronary arterial semantic labeling. We first extract the vascular tree from the ICA and convert it into multiple individual graphs. Then, an association graph is constructed from two individual graphs, where each vertex represents the relationship between two arterial segments. Thus, we convert the arterial segment labeling task into a vertex classification task; ultimately, semantic artery labeling becomes equivalent to identifying the artery-to-artery correspondence between graphs. More specifically, the AGMN extracts vertex features with an embedding module using the association graph, aggregates the features from adjacent vertices and edges with a graph convolutional network, and decodes the features to generate the semantic mappings between arteries. By learning the mapping of arterial branches between two individual graphs, the unlabeled arterial segments are classified by the labeled segments to achieve semantic labeling. A dataset containing 263 ICAs was employed to train and validate the proposed model using a five-fold cross-validation scheme. Our AGMN model achieved an average accuracy of 0.8264, an average precision of 0.8276, an average recall of 0.8264, and an average F1-score of 0.8262, significantly outperforming existing coronary artery semantic labeling methods. In conclusion, we have developed and validated a new algorithm with high accuracy, interpretability, and robustness for coronary artery semantic labeling on ICAs.
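The association-graph construction described above has a compact combinatorial core: each vertex is a pair of segments, one from each individual graph, and two vertices are adjacent when both underlying segment pairs are adjacent in their own graphs. The sketch below shows only this structure; the embedding module and graph convolutions are omitted, and the use of networkx is an assumption.

```python
# Sketch of building an association graph from two arterial-tree graphs.
import itertools
import networkx as nx

def association_graph(g1: nx.Graph, g2: nx.Graph) -> nx.Graph:
    ag = nx.Graph()
    # One vertex per (segment in g1, segment in g2) candidate match.
    ag.add_nodes_from(itertools.product(g1.nodes, g2.nodes))
    for (u1, u2), (v1, v2) in itertools.combinations(ag.nodes, 2):
        # Two candidate matches are compatible neighbors when both
        # segment pairs are adjacent in their respective trees.
        if g1.has_edge(u1, v1) and g2.has_edge(u2, v2):
            ag.add_edge((u1, u2), (v1, v2))
    return ag
```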
Momentum contrast transformer for COVID-19 diagnosis with knowledge distillation
Intelligent diagnosis has been widely studied for the novel coronavirus disease (COVID-19). Existing deep models typically do not make full use of global features, such as large areas of ground-glass opacities, and local features, such as local bronchiolectasis, in COVID-19 chest CT images, leading to unsatisfying recognition accuracy. To address this challenge, this paper proposes a novel method to diagnose COVID-19 using momentum contrast and knowledge distillation, termed MCT-KD. Our method takes advantage of the Vision Transformer and designs a momentum contrastive learning task to effectively extract global features from COVID-19 chest CT images. Moreover, in the transfer and fine-tuning process, we integrate the locality of convolution into the Vision Transformer via special knowledge distillation. These strategies enable the final Vision Transformer to focus simultaneously on global and local features of COVID-19 chest CT images. In addition, momentum contrastive learning is self-supervised, alleviating the difficulty of training Vision Transformers on small datasets. Extensive experiments confirm the effectiveness of the proposed MCT-KD. In particular, MCT-KD achieves 87.43% and 96.94% accuracy on two publicly available datasets, respectively.
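The momentum-contrast mechanism the method builds on (MoCo-style) has two well-known ingredients: a key encoder updated as an exponential moving average of the query encoder, and an InfoNCE loss against a queue of negatives. The sketch below shows this generic mechanism, not the paper's exact design.

```python
# Sketch of MoCo-style momentum contrast: EMA key encoder + InfoNCE.
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    # Key encoder trails the query encoder; only the latter gets gradients.
    for q, k in zip(query_encoder.parameters(), key_encoder.parameters()):
        k.data.mul_(m).add_(q.data, alpha=1 - m)

def info_nce(q, k_pos, queue, temperature=0.07):
    # q, k_pos: (N, D) L2-normalized embeddings; queue: (K, D) negatives.
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)        # (N, 1)
    l_neg = q @ queue.t()                               # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)              # positive at index 0
```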
Semi-automatic muscle segmentation in MR images using deep registration-based label propagation
Fully automated approaches based on convolutional neural networks have shown promising performance on muscle segmentation from magnetic resonance (MR) images, but still rely on an extensive amount of training data to achieve valuable results. Muscle segmentation for pediatric and rare-disease cohorts is therefore still often done manually. Producing dense delineations over 3D volumes remains a time-consuming and tedious task, with significant redundancy between successive slices. In this work, we propose a segmentation method relying on registration-based label propagation, which provides 3D muscle delineations from a limited number of annotated 2D slices. Based on an unsupervised deep registration scheme, our approach ensures the preservation of anatomical structures by penalizing deformation compositions that do not produce consistent segmentation from one annotated slice to another. Evaluation is performed on MR data from lower leg and shoulder joints. Results demonstrate that the proposed few-shot multi-label segmentation model outperforms state-of-the-art techniques.
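The propagation step itself can be sketched as warping an annotated slice's label map through composed slice-to-slice displacement fields. Flow estimation (the deep registration network) and the consistency penalty are omitted; array layouts and nearest-neighbor interpolation are assumptions.

```python
# Sketch of registration-based label propagation across slices.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_label(label, flow):
    # label: (H, W) integer mask; flow: (2, H, W) displacement in pixels.
    h, w = label.shape
    coords = np.mgrid[0:h, 0:w].astype(float) + flow
    # Nearest-neighbor interpolation keeps labels discrete.
    return map_coordinates(label, coords, order=0)

def propagate(label, flows):
    # Compose successive slice-to-slice flows to reach distant slices.
    for flow in flows:
        label = warp_label(label, flow)
    return label
```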
Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent
Adversarial training, especially projected gradient descent (PGD), has proven to be a successful approach for improving robustness against adversarial attacks. After adversarial training, gradients of models with respect to their inputs have a preferential direction. However, the direction of alignment is not mathematically well established, making it difficult to evaluate quantitatively. We propose a novel definition of this direction as the direction of the vector pointing toward the closest point of the support of the closest inaccurate class in decision space. To evaluate the alignment with this direction after adversarial training, we apply a metric that uses generative adversarial networks to produce the smallest residual needed to change the class present in the image. We show that PGD-trained models have a higher alignment than the baseline according to our definition, that our metric presents higher alignment values than a competing metric formulation, and that enforcing this alignment increases the robustness of models.
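For reference, the PGD attack underlying this adversarial training is the standard iterated sign-gradient ascent with projection onto an L-infinity ball. The step sizes and pixel range below are conventional defaults, not values from the paper.

```python
# Sketch of the standard L-infinity PGD attack.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Random start inside the eps-ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step, then projection back into the eps-ball and image range.
        x_adv = torch.clamp(x_adv + alpha * grad.sign(), x - eps, x + eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```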
Learning from multiple annotators for medical image segmentation
Supervised machine learning methods have been widely developed for segmentation tasks in recent years. However, the quality of labels has a high impact on the predictive performance of these algorithms. This issue is particularly acute in the medical image domain, where both the cost of annotation and the inter-observer variability are high. In a typical label acquisition process, different human experts contribute estimates of the "actual" segmentation labels, influenced by their personal biases and competency levels. The performance of automatic segmentation algorithms is limited when these noisy labels are used as the expert consensus label. In this work, we use two coupled CNNs to jointly learn, from purely noisy observations alone, the reliability of individual annotators and the expert consensus label distributions. The separation of the two is achieved by maximally describing the annotators' "unreliable behavior" (we call it "maximally unreliable") while achieving high fidelity with the noisy training data. We first create a toy segmentation dataset using MNIST and investigate the properties of the proposed algorithm. We then use three public medical imaging segmentation datasets to demonstrate our method's efficacy, including both simulated (where necessary) and real-world annotations: 1) ISBI2015 (multiple-sclerosis lesions); 2) BraTS (brain tumors); 3) LIDC-IDRI (lung abnormalities). Finally, we create a real-world multiple sclerosis lesion dataset (QSMSC at UCL: Queen Square Multiple Sclerosis Center at UCL, UK) with manual segmentations from 4 different annotators (3 radiologists with different skill levels and 1 expert to generate the expert consensus label). On all datasets, our method consistently outperforms competing methods and relevant baselines, especially when the number of annotations is small and the amount of disagreement is large. The studies also reveal that the system is capable of capturing the complicated spatial characteristics of annotators' mistakes.
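A common way to formalize this coupling, which the sketch below illustrates, is to model each annotator's noisy label distribution as a per-annotator confusion matrix applied to the estimated consensus distribution, with a trace penalty favoring the "maximally unreliable" explanation consistent with the data. Tensor layouts, the per-pixel-free simplification, and the penalty weight are assumptions; the paper's spatially varying formulation is richer.

```python
# Sketch of confusion-matrix-based learning from multiple annotators.
import torch
import torch.nn.functional as F

def annotator_loss(consensus_probs, confusion, noisy_labels, lam=0.01):
    # consensus_probs: (N, C) estimated consensus distribution;
    # confusion: (A, C, C) row-stochastic, confusion[a][i][j] =
    #   P(annotator a says j | true class i);
    # noisy_labels: (A, N) long tensor of each annotator's labels.
    loss = 0.0
    for a in range(confusion.size(0)):
        noisy_probs = consensus_probs @ confusion[a]    # (N, C)
        loss = loss + F.nll_loss(noisy_probs.clamp_min(1e-8).log(),
                                 noisy_labels[a])
    # Minimizing the trace pushes confusion matrices away from identity,
    # i.e., toward maximally unreliable annotator explanations.
    trace = confusion.diagonal(dim1=1, dim2=2).sum()
    return loss + lam * trace
```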
Invariance encoding in sliced-Wasserstein space for image classification with limited training data
Deep convolutional neural networks (CNNs) are broadly considered to be state-of-the-art generic end-to-end image classification systems. However, they are known to underperform when training data are limited and thus require data augmentation strategies that render the method computationally expensive and not always effective. Rather than using a data augmentation strategy to encode invariances as typically done in machine learning, here we propose to mathematically augment a nearest subspace classification model in sliced-Wasserstein space by exploiting certain mathematical properties of the Radon Cumulative Distribution Transform (R-CDT), a recently introduced image transform. We demonstrate that for a particular type of learning problem, our mathematical solution has advantages over data augmentation with deep CNNs in terms of classification accuracy and computational complexity, and is particularly effective under a limited training data setting. The method is simple, effective, computationally efficient, non-iterative, and requires no parameters to be tuned. Python code implementing our method is available at https://github.com/rohdelab/mathematical_augmentation. Our method is integrated as a part of the software package PyTransKit, which is available at https://github.com/rohdelab/PyTransKit.
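The nearest-subspace classifier at the heart of this approach is simple to state: represent each class by the subspace spanned by its (transformed) training samples and assign a test sample to the class with the smallest reconstruction residual. The sketch below shows this generic classifier only; the R-CDT transform step and the mathematical augmentation of the subspaces are omitted, and the rank cap is an illustrative assumption.

```python
# Sketch of a nearest-subspace classifier in a transform space.
import numpy as np

def fit_subspaces(features_by_class, rank=8):
    bases = {}
    for c, X in features_by_class.items():          # X: (n_c, D)
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        bases[c] = vt[:rank]                        # orthonormal rows
    return bases

def classify(x, bases):
    # Residual of projecting x onto each class subspace; smallest wins.
    residuals = {c: np.linalg.norm(x - (x @ v.T) @ v)
                 for c, v in bases.items()}
    return min(residuals, key=residuals.get)
```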
PLFace: Progressive learning for face recognition with mask bias
The outbreak of the COVID-19 epidemic has promoted the development of masked face recognition (MFR). Nevertheless, the performance of regular face recognition is severely compromised when MFR accuracy is blindly pursued. Increasing evidence indicates that MFR should be regarded as addressing a mask bias in face recognition rather than as an independent task. To mitigate mask bias, we propose a novel progressive learning loss (PLFace) that realizes a progressive training strategy for deep face recognition, learning balanced performance for masked and mask-free face recognition based on margin losses. In particular, our PLFace adaptively adjusts the relative importance of masked and mask-free samples during different training stages. In the early stage of training, PLFace mainly learns the feature representations of mask-free samples, whose embeddings shrink toward their class prototypes. In the later stage of training, PLFace converges on mask-free samples and further focuses on masked samples until the masked sample embeddings also gather at the class centers. The entire training process emphasizes the paradigm that normal samples shrink first and masked samples gather afterward. Extensive experimental results on popular regular and masked face benchmarks demonstrate the superiority of our PLFace over state-of-the-art competitors.
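The progressive emphasis can be sketched as a training-stage-dependent weight that shifts the loss from mask-free to masked samples. The cosine ramp below is an illustrative schedule, not PLFace's adaptive formulation.

```python
# Sketch of progressive re-weighting between mask-free and masked losses.
import math

def masked_weight(epoch, total_epochs):
    # Smoothly ramps from 0 (early: learn mask-free faces) to 1
    # (late: emphasize masked faces).
    return 0.5 * (1 - math.cos(math.pi * epoch / total_epochs))

def total_loss(loss_mask_free, loss_masked, epoch, total_epochs):
    w = masked_weight(epoch, total_epochs)
    return (1 - w) * loss_mask_free + w * loss_masked
```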
COVID-19 and Rumors: A Dynamic Nested Optimal Control Model
Unfortunately, the COVID-19 outbreak has been accompanied by the spread of rumors and depressing news. Herein, we develop a dynamic nested optimal control model of COVID-19 and its rumor outbreaks. The model aims to curb both epidemics by reducing the number of individuals infected with COVID-19 and the number of rumor-spreaders, while minimizing the cost associated with the control interventions. We use the modified approximation Karush-Kuhn-Tucker conditions with the Hamiltonian function to simplify the model before solving it using a genetic algorithm. The present model highlights three prevention measures that affect COVID-19 and its rumor outbreaks. One represents the interventions to curb the COVID-19 pandemic. The other two represent interventions to increase awareness, disseminate correct information, and impose penalties on the spreaders of false rumors. The results emphasize the importance of interventions in curbing the spread of both the COVID-19 pandemic and its associated rumors.
COVID-19 contact tracking by group activity trajectory recovery over camera networks
Contact tracking plays an important role in the epidemiological investigation of COVID-19 and can effectively reduce the spread of the epidemic. As an excellent alternative for contact tracking, mobile phone location-based methods are widely used for locating and tracking contacts. However, the imprecise positioning algorithms widely used in contact tracking lead to inaccurate follow-up of contacts. Aiming to achieve accurate contact tracking for the COVID-19 contact group, we extend GPS-based analysis by combining GPS data with video surveillance data and address a novel task named group activity trajectory recovery. Meanwhile, a new dataset called GATR-GPS is constructed to simulate a realistic scenario of COVID-19 contact tracking, and a coordinated optimization algorithm with a spatio-temporal constraint table is further proposed to realize efficient recovery of pedestrian trajectories. Extensive experiments on the newly collected dataset and two commonly used person re-identification datasets are performed, and the results clearly demonstrate that our method achieves competitive results compared to state-of-the-art methods.
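The kind of spatio-temporal constraint such a table encodes can be sketched as a pairwise feasibility test: two observations can belong to the same person's trajectory only if the implied travel speed is plausible. The observation format, calibrated positions, and speed threshold below are all illustrative assumptions.

```python
# Sketch of a spatio-temporal compatibility check between observations.
from math import dist

def compatible(obs_a, obs_b, max_speed_mps=2.5):
    # obs = (x, y, t): position in meters (from camera calibration or
    # GPS) and timestamp in seconds.
    (xa, ya, ta), (xb, yb, tb) = obs_a, obs_b
    dt = abs(tb - ta)
    # Same-trajectory candidates must be reachable at walking speed.
    return dt > 0 and dist((xa, ya), (xb, yb)) / dt <= max_speed_mps
```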
GFNet: Automatic segmentation of COVID-19 lung infection regions using CT images based on boundary features
In early 2020, the global spread of COVID-19 presented the world with a serious health crisis. Due to the large number of infected patients, automatic segmentation of lung infections from computed tomography (CT) images has great potential to enhance traditional medical strategies. However, the segmentation of infected regions in CT slices still faces many challenges. In particular, the core problem is the high variability of infection characteristics and the low contrast between infected and normal regions, which leads to fuzzy regions in lung CT segmentation. To address this problem, we design a novel global feature network (GFNet) for COVID-19 lung infections: with VGG16 as the backbone, we design an edge-guidance module (Eg) that fuses the features of each layer. First, features are extracted by a reverse attention module and combined with Eg. This series of steps enables each layer to fully extract the boundary details that previous models tend to miss, thus resolving the fuzziness of infected regions. The multi-layer output features are fused into the final output to achieve automatic and accurate segmentation of infected areas. We compare GFNet with traditional medical segmentation networks (UNet, UNet++), the more recent Inf-Net, and few-shot learning methods. Experiments show that our model is superior to these models in Dice, sensitivity, specificity, and other evaluation metrics, and our segmentation results are visually clear and accurate, which demonstrates the effectiveness of GFNet. In addition, we verify the generalization ability of GFNet on another "never seen" dataset, and the results show that our model retains better generalization ability than the above models. Our code has been shared at https://github.com/zengzhenhuan/GFNet.
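Reverse attention, as used in boundary-aware segmentation models of this family, inverts a coarse prediction so the network attends to the uncertain regions it has not yet explained. The sketch below shows the generic operation under assumed tensor layouts; GFNet's edge-guidance fusion is more elaborate (see the repository for the authors' code).

```python
# Sketch of a reverse attention operation for boundary refinement.
import torch
import torch.nn.functional as F

def reverse_attention(features, coarse_logits):
    # features: (N, C, H, W) from the current layer;
    # coarse_logits: (N, 1, h, w) prediction from a deeper layer.
    pred = torch.sigmoid(F.interpolate(
        coarse_logits, size=features.shape[2:],
        mode="bilinear", align_corners=False))
    # Invert the prediction so already-confident foreground is
    # suppressed and un-segmented (boundary) regions are emphasized.
    return features * (1.0 - pred)
```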
Deep learning of longitudinal mammogram examinations for breast cancer risk prediction
Information in digital mammogram images has been shown to be associated with the risk of developing breast cancer. Longitudinal breast cancer screening mammogram examinations may carry spatiotemporal information that can enhance breast cancer risk prediction. No deep learning models have been designed to capture such spatiotemporal information over multiple examinations to predict the risk. In this study, we propose a novel deep learning structure, LRP-NET, to capture the spatiotemporal changes of breast tissue over multiple negative/benign screening mammogram examinations to predict near-term breast cancer risk in a case-control setting. Specifically, LRP-NET is designed based on clinical knowledge to capture the imaging changes of bilateral breast tissue over four sequential mammogram examinations. We evaluate our proposed model with two ablation studies and compare it to three models/settings, including 1) a "loose" model without explicitly capturing the spatiotemporal changes over longitudinal examinations, 2) LRP-NET but using a varying number (i.e., 1 and 3) of sequential examinations, and 3) a previous model that uses only a single mammogram examination. On a case-control cohort of 200 patients, each with four examinations, our experiments on a total of 3200 images show that the LRP-NET model outperforms the compared models/settings.
Covid-MANet: Multi-task attention network for explainable diagnosis and severity assessment of COVID-19 from CXR images
The devastating outbreak of coronavirus disease (COVID-19) cases in early 2020 led the world to face a health crisis. The exponential reproduction rate of COVID-19 can only be reduced by correct early diagnosis of infection cases. Initial research findings reported that radiological examinations using the CT and CXR modalities successfully reduced the false negatives of the RT-PCR test. This research study aims to develop an explainable diagnosis system for the detection and infection-region quantification of COVID-19. Existing research studies have successfully explored deep learning approaches with high performance measures but lacked generalization and interpretability for COVID-19 diagnosis. In this study, we address these issues with Covid-MANet, an automated end-to-end multi-task attention network that works on 5 classes in three stages for COVID-19 infection screening. The first stage of Covid-MANet localizes the attention of the model to the relevant lung region for disease recognition. The second stage differentiates COVID-19 cases from bacterial pneumonia, viral pneumonia, normal, and tuberculosis cases. To improve interpretation and explainability, three experiments were conducted to identify the most coherent and appropriate classification approach. Moreover, a multi-scale attention model, MA-DenseNet201, is proposed for the classification of COVID-19 cases. The final stage of Covid-MANet quantifies the proportion of infection and the severity of COVID-19 in the lungs, grading cases into more specific severity levels (mild, moderate, severe, and critical) according to the score assigned by the RALE scoring system. The MA-DenseNet201 classification model outperforms eight state-of-the-art CNN models in terms of sensitivity and interpretation with the lung localization network. COVID-19 infection segmentation by UNet with a DenseNet121 encoder achieves a Dice score of 86.15%, outperforming UNet, UNet++, AttentionUNet, and R2UNet with VGG16, ResNet50, and DenseNet201 encoders. The proposed network not only classifies images based on the predicted label but also highlights the infection by segmenting/localizing model-focused regions to support explainable decisions. The MA-DenseNet201 model with a segmentation-based cropping approach achieves a maximum interpretation of 96% with a COVID-19 sensitivity of 97.75%. Finally, based on class-varied sensitivity analysis, the Covid-MANet ensemble of MA-DenseNet201, ResNet50, and MobileNet achieves 95.05% accuracy and 98.75% COVID-19 sensitivity. The proposed model is externally validated on an unseen dataset, yielding 98.17% COVID-19 sensitivity.
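The final severity-quantification stage can be sketched as computing the infected fraction of the lung field from the two segmentation masks and mapping it to a grade. The cut-offs below are illustrative placeholders only, not the RALE thresholds the paper uses.

```python
# Sketch of severity grading from lung and infection masks (assumed
# binary arrays of the same shape; thresholds are placeholders).
def severity_grade(lung_mask, infection_mask):
    # Fraction of lung voxels/pixels marked as infected.
    ratio = infection_mask[lung_mask > 0].mean()
    for grade, cutoff in [("critical", 0.75),
                          ("severe", 0.50),
                          ("moderate", 0.25)]:
        if ratio >= cutoff:
            return grade, ratio
    return "mild", ratio
```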
The CP-ABM approach for modelling COVID-19 infection dynamics and quantifying the effects of non-pharmaceutical interventions
The motivation for this research is to develop an approach that reliably captures the disease dynamics of COVID-19 for an entire population in order to identify the key events driving change in the epidemic through accurate estimation of daily COVID-19 cases. This has been achieved through the new CP-ABM approach, which uniquely incorporates Change Point detection into an Agent-Based Model, taking advantage of genetic algorithms for calibration and an efficient infection-centric procedure for computational efficiency. The CP-ABM is applied to the Northern Ireland population, where it successfully captures patterns in COVID-19 infection dynamics over both waves of the pandemic and quantifies the significant effects of non-pharmaceutical interventions (NPIs), namely lockdowns and mask wearing, on a national level. To our knowledge, no other approach to date has captured NPI effectiveness and infection-spreading dynamics for both waves of the COVID-19 pandemic for an entire country population.
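Change-point detection on a daily case series can be sketched with a simple CUSUM statistic: accumulate standardized deviations from the series mean and flag days where the cumulative drift exceeds a threshold. This is a generic illustration, not the CP-ABM's detection procedure.

```python
# Sketch of CUSUM change-point detection on daily case counts.
import numpy as np

def cusum_changepoints(cases, threshold=5.0):
    x = (cases - cases.mean()) / (cases.std() + 1e-8)
    s_pos = s_neg = 0.0
    points = []
    for t, v in enumerate(x):
        s_pos = max(0.0, s_pos + v)   # upward drift accumulator
        s_neg = min(0.0, s_neg + v)   # downward drift accumulator
        if s_pos > threshold or -s_neg > threshold:
            points.append(t)          # flag a change point, then reset
            s_pos = s_neg = 0.0
    return points
```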
Self-restrained triplet loss for accurate masked face recognition
Using the face as a biometric identity trait is motivated by the contactless nature of the capture process and the high accuracy of the recognition algorithms. During the current COVID-19 pandemic, wearing a face mask has been imposed in public places to keep the pandemic under control. However, face occlusion due to wearing a mask presents an emerging challenge for face recognition systems. In this paper, we present a solution to improve masked face recognition performance. Specifically, we propose the Embedding Unmasking Model (EUM), operated on top of existing face recognition models. We also propose a novel loss function, the Self-restrained Triplet (SRT), which enables the EUM to produce embeddings similar to those of unmasked faces of the same identities. The evaluation results on three face recognition models, two real masked datasets, and two synthetically generated masked face datasets prove that our proposed approach significantly improves performance in most experimental settings.
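For context, the baseline triplet loss that SRT builds on is shown below: an anchor embedding is pulled toward a positive of the same identity and pushed from a negative beyond a margin. SRT modifies this behavior (self-restraining the optimization), so this sketch is the standard form, not the paper's exact objective.

```python
# Sketch of the standard triplet loss on embeddings.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.5):
    # anchor/positive/negative: (N, D) embedding batches.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    # Penalize triplets where the positive is not closer than the
    # negative by at least the margin.
    return F.relu(d_pos - d_neg + margin).mean()
```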
A multi-task fully deep convolutional neural network for contactless fingerprint minutiae extraction
With the outbreak and wide spread of the novel coronavirus (COVID-19), contactless fingerprint recognition has attracted more attention for personal recognition because it offers significantly higher user convenience and hygiene than traditional contact-based fingerprint recognition. However, achieving highly accurate recognition is still challenging due to the low ridge-valley contrast and pose variance of contactless fingerprints. Minutiae points are a kind of ridge-flow discontinuity, and their robust and accurate extraction is an important step for most automatic fingerprint recognition algorithms. Most existing methods operate in two stages, locating the minutiae points first and then computing their directions; such two-stage methods cannot make full use of the location and direction information jointly. In this paper, we propose a multi-task fully deep convolutional neural network for jointly learning minutiae location detection and the corresponding direction computation, operating directly on whole gray-scale contactless fingerprints. The proposed method consists of offline training and online testing stages. In the training stage, a fully deep convolutional neural network is built for the tasks of minutiae detection and direction regression, with an attention mechanism that makes the direction regression branch concentrate on the minutiae points. A new loss function is proposed to jointly learn the tasks of minutiae detection and direction regression from whole fingerprints. In the testing stage, the trained network is applied to the whole contactless fingerprint to generate the minutiae location and direction maps. The proposed multi-task learning method performs better than the individual single tasks and operates directly on raw gray-scale contactless fingerprints without preprocessing. Results on three contactless fingerprint datasets show that the proposed algorithm performs better than other minutiae extraction algorithms and commercial software.
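The joint-loss idea can be sketched as a detection term over the whole map plus a direction-regression term weighted by the minutiae heatmap, so direction learning concentrates on minutiae locations. Tensor layouts, the (cos, sin) direction encoding, and the weighting are assumptions; the paper's loss may differ in detail.

```python
# Sketch of a joint minutiae detection + direction regression loss.
import torch
import torch.nn.functional as F

def minutiae_loss(det_logits, det_gt, dir_pred, dir_gt, beta=1.0):
    # det_logits/det_gt: (N, 1, H, W) minutiae heatmaps;
    # dir_pred/dir_gt: (N, 2, H, W) directions encoded as (cos, sin)
    # to avoid angle wrap-around at 0/360 degrees.
    det_loss = F.binary_cross_entropy_with_logits(det_logits, det_gt)
    # Attention-style weighting: regress directions only where ground-
    # truth minutiae exist, mirroring the attention mechanism above.
    weight = det_gt
    dir_loss = ((weight * (dir_pred - dir_gt).pow(2)).sum()
                / weight.sum().clamp_min(1.0))
    return det_loss + beta * dir_loss
```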