INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY

Radiomic feature reliability of amide proton transfer-weighted MR images acquired with compressed sensing at 3T
Wu J, Huang Q, Shen Y, Guo P, Zhou J and Jiang S
Compressed sensing (CS) is a novel technique for MRI acceleration. The purpose of this paper was to assess the effects of CS on the radiomic features extracted from amide proton transfer-weighted (APTw) images. Brain tumor MRI data of 40 scans were studied. Standard images using sensitivity encoding (SENSE) with an acceleration factor (AF) of 2 were used as the gold standard, and APTw images using SENSE with CS (CS-SENSE) with an AF of 4 were assessed. Regions of interest (ROIs), including normal tissue, edema, liquefactive necrosis, and tumor, were manually drawn, and the effects of CS-SENSE on radiomics were assessed for each ROI category. An intraclass correlation coefficient (ICC) was first calculated for each feature extracted from APTw images with SENSE and CS-SENSE for all ROIs. Different filters were applied to the original images, and the effects of these filters on the ICCs were further compared between APTw images with SENSE and CS-SENSE. Feature deviations were also provided for a more comprehensive evaluation of the effects of CS-SENSE on radiomic features. The ROI-based comparison showed that most radiomic features extracted from CS-SENSE-APTw images and SENSE-APTw images had moderate or greater reliabilities (ICC ≥ 0.5) for all four ROIs and all eight image sets with different filters. Tumor showed significantly higher ICCs than normal tissue, edema, and liquefactive necrosis. Compared to the original images, filters (such as Exponential or Square) may improve the reliability of radiomic features extracted from CS-SENSE-APTw and SENSE-APTw images.
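As an illustration of the reliability analysis described above, the following minimal Python sketch computes a two-way random-effects, single-measure ICC(2,1) for one radiomic feature measured on the same ROIs under SENSE and CS-SENSE. The feature values are synthetic placeholders; the study itself would use established radiomics and statistics toolkits.

```python
import numpy as np

def icc_2_1(Y):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).

    Y: (n_subjects, k_raters) array, e.g. one radiomic feature per ROI
       measured with SENSE (column 0) and CS-SENSE (column 1).
    """
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)     # per-ROI means
    col_means = Y.mean(axis=0)     # per-acquisition means

    # Mean squares for rows (ROIs), columns (acquisitions), and residual error
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)
    sse = np.sum((Y - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Synthetic example: 40 ROIs, one feature under SENSE vs. CS-SENSE
rng = np.random.default_rng(0)
sense = rng.normal(10.0, 2.0, size=40)
cs_sense = sense + rng.normal(0.0, 0.5, size=40)   # small acquisition-related deviation
print(f"ICC(2,1) = {icc_2_1(np.column_stack([sense, cs_sense])):.3f}")
```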
Non-invasive prediction of overall survival time for glioblastoma multiforme patients based on multimodal MRI radiomics
Zhu J, Ye J, Dong L, Ma X, Tang N, Xu P, Jin W, Li R, Yang G and Lai X
Glioblastoma multiforme (GBM) is the most common and deadly primary malignant brain tumor. Because GBM is aggressive and shows high biological heterogeneity, the overall survival (OS) time is extremely short even with the most aggressive treatment. If the OS time can be predicted before surgery, developing personalized treatment plans for GBM patients will be beneficial. Magnetic resonance imaging (MRI) is a commonly used diagnostic tool for brain tumors with high resolution and good image quality. However, in clinical practice, doctors mainly rely on manually segmenting the tumor regions in MRI and predicting the OS time of GBM patients, which is time-consuming, subjective, and repetitive, limiting the effectiveness of clinical diagnosis and treatment. Therefore, it is crucial to segment the brain tumor regions in MRI, and an accurate pre-operative prediction of OS time for personalized treatment is highly desired. In this study, we present a multimodal MRI radiomics-based automatic framework for non-invasive prediction of the OS time of GBM patients. A modified 3D-UNet model is built to segment tumor subregions in MRI of GBM patients; the radiomic features of the tumor subregions are then extracted, combined with clinical features, and fed into a Support Vector Regression (SVR) model to predict the OS time. In the experiments, the BraTS2020, BraTS2019, and BraTS2018 datasets are used to evaluate our framework. Our model achieves competitive OS time prediction accuracy compared to most typical approaches.
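A minimal sketch of the final prediction stage described above, assuming the tumor-subregion radiomic features and clinical variables have already been extracted into arrays. The feature dimensions and survival values below are illustrative placeholders, not the authors' data or pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_patients = 200

# Placeholder inputs: radiomic features from segmented subregions + clinical features (e.g., age)
radiomic_features = rng.normal(size=(n_patients, 50))
clinical_features = rng.normal(size=(n_patients, 3))
X = np.hstack([radiomic_features, clinical_features])
y = rng.uniform(50, 600, size=n_patients)            # overall survival time in days (synthetic)

# Standardize the combined features and regress OS time with an RBF-kernel SVR
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"Mean absolute error (5-fold CV): {-scores.mean():.1f} days")
```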
COVID-19 lung infection segmentation from chest CT images based on CAPA-ResUNet
Ma L, Song S, Guo L, Tan W and Xu L
The coronavirus disease 2019 (COVID-19) epidemic has had devastating effects on personal health around the world. Accurate segmentation of pulmonary infection regions, an early indicator of disease, is therefore important. To solve this problem, a deep learning model, the content-aware pre-activated residual UNet (CAPA-ResUNet), was proposed for segmenting COVID-19 lesions from CT slices. In this network, the pre-activated residual block was used for down-sampling to cope with the complex foreground and large distribution fluctuations in the datasets during training and to avoid gradient vanishing. An area loss function based on the falsely segmented regions was proposed to address the fuzzy boundary of the lesion area. The model was evaluated on the public dataset of the COVID-19 Lung CT Lesion Segmentation Challenge 2020, and its performance was compared with those of classical models. Our method gains an advantage over other models on multiple metrics: CAPA-ResUNet obtained a Dice coefficient of 0.775, a specificity (Spe) of 0.972, and an intersection over union (IoU) of 0.646. The Dice coefficient of our model was 2.51% higher than that of the content-aware residual UNet (CARes-UNet). The code is available at https://github.com/malu108/LungInfectionSeg.
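The pre-activated residual block mentioned above follows the batch-norm → ReLU → convolution ordering of the full pre-activation design; the sketch below is one plausible PyTorch rendering under that assumption, not the authors' exact block.

```python
import torch
import torch.nn as nn

class PreActResidualBlock(nn.Module):
    """Pre-activated residual block: BN -> ReLU -> Conv, twice, plus identity shortcut."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1, bias=False)
        # 1x1 projection when the spatial size or channel count changes
        self.shortcut = (
            nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=stride, bias=False)
            if (stride != 1 or in_ch != out_ch) else nn.Identity()
        )

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return out + self.shortcut(x)

# Example: a down-sampling stage applied to a single-channel CT slice
block = PreActResidualBlock(in_ch=1, out_ch=32, stride=2)
print(block(torch.randn(1, 1, 256, 256)).shape)   # torch.Size([1, 32, 128, 128])
```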
Application of a novel T1 retrospective quantification using internal references (T1-REQUIRE) algorithm to derive quantitative T1 relaxation maps of the brain
Hasse A, Bertini J, Foxley S, Jeong Y, Javed A and Carroll TJ
Most MRI sequences used clinically are qualitative or weighted. While such images provide useful information for clinicians to diagnose and monitor disease progression, they lack the ability to quantify tissue damage for more objective assessment. In this study, an algorithm referred to as T1-REQUIRE is presented as a proof of concept; it uses nonlinear transformations to retrospectively estimate T1 relaxation times in the brain from T1-weighted MRIs, the appropriate signal equation, and internal, healthy tissues as references. T1-REQUIRE was applied to two T1-weighted MR sequences, a spin-echo and an MPRAGE, and validated against a reference standard T1 mapping algorithm in vivo. In addition, a multiscanner study was run using MPRAGE images to determine how effectively T1-REQUIRE conforms data from different scanners into a more uniform analysis of T1 relaxation maps. The T1-REQUIRE algorithm shows good agreement with the reference standard (Lin's concordance correlation coefficients of 0.884 for the spin-echo and 0.838 for the MPRAGE) and with each other (Lin's concordance correlation coefficient of 0.887). The interscanner studies showed improved alignment of cumulative distribution functions after T1-REQUIRE was performed. T1-REQUIRE was validated against a reference standard and shown to provide effective T1 estimates over a clinically relevant range of T1 values. In addition, T1-REQUIRE showed excellent data conformity across different scanners, providing evidence that it could be a useful addition to big-data pipelines.
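As a simplified illustration of the idea behind T1-REQUIRE, the sketch below inverts the saturation-recovery part of a spin-echo signal equation, S = S0(1 - exp(-TR/T1)), after calibrating the scaling factor S0 from one reference tissue with an assumed literature T1. The TR and reference values are placeholders, and the published algorithm uses more complete signal models and multiple internal references.

```python
import numpy as np

TR = 0.55           # repetition time in seconds (placeholder)
T1_REF = 0.8        # assumed literature T1 of the reference tissue in seconds (placeholder)

def t1_from_spin_echo(signal, ref_signal, tr=TR, t1_ref=T1_REF):
    """Estimate T1 from a T1-weighted spin-echo signal using one internal reference.

    Simplified signal model (TE << T2 assumed): S = S0 * (1 - exp(-TR / T1)).
    S0 is calibrated so that the reference tissue maps to its assumed T1.
    """
    s0 = ref_signal / (1.0 - np.exp(-tr / t1_ref))        # scaling factor from the reference ROI
    ratio = np.clip(signal / s0, 1e-6, 1.0 - 1e-6)          # keep the log argument valid
    return -tr / np.log(1.0 - ratio)

# Synthetic check: generate signals from known T1s and recover them
true_t1 = np.array([0.6, 0.8, 1.2, 1.6])                    # seconds
s0_true = 1000.0
signals = s0_true * (1.0 - np.exp(-TR / true_t1))
ref = s0_true * (1.0 - np.exp(-TR / T1_REF))
print(t1_from_spin_echo(signals, ref))                       # approximately [0.6, 0.8, 1.2, 1.6]
```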
CoviDetNet: A new COVID-19 diagnostic system based on deep features of chest x-ray
Aslan M
COVID-19 has emerged as a global pandemic affecting the world, and its adverse effects on society still continue. So far, about 243.57 million people have been diagnosed with COVID-19, of which about 4.94 million have died. In this study, a new model, called COVIDetNet, is proposed for automated COVID-19 detection. Instead of popular pretrained convolutional neural network (CNN) models such as VGG16, VGG19, AlexNet, ResNet50, ResNet100, and MobileNetV2, a lightweight CNN architecture was designed and trained from scratch on chest x-ray (CXR) images. A new feature set was created by concatenating the features of all layers of the designed CNN architecture. The most efficient features, selected from this concatenated set with the Relief feature selection algorithm, were then classified using the support vector machine (SVM) method. The experimental work was carried out on a public COVID-19 CXR database. Experimental results demonstrated 99.24% accuracy, 99.60% specificity, 99.39% sensitivity, 99.04% precision, and an F1 score of 99.21%. In comparison to the AlexNet and VGG16 models, the deep feature extraction durations were also reduced by approximately 6-fold and 38-fold, respectively. The COVIDetNet model provided a higher accuracy score than state-of-the-art models when compared to multi-class research studies. Overall, the proposed model will be beneficial for specialist medical staff to detect COVID-19 cases, as it provides faster and higher accuracy than existing CXR-based approaches.
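The sketch below illustrates the feature-concatenation and classification stage under stated assumptions: a small stand-in CNN, features pooled from every convolutional stage and concatenated, and a generic univariate selector used in place of the Relief algorithm named above (a Relief-family implementation such as skrebate's ReliefF could be substituted).

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

class SmallCXRNet(nn.Module):
    """Stand-in lightweight CNN; returns pooled features from every stage."""
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        ])

    def forward(self, x):
        pooled = []
        for stage in self.stages:
            x = stage(x)
            pooled.append(x.mean(dim=(2, 3)))      # global-average-pool each stage's feature map
        return torch.cat(pooled, dim=1)            # concatenated multi-layer feature vector

# Synthetic CXR batch: 60 images, 2 classes (placeholder for a real dataset)
images = torch.randn(60, 1, 128, 128)
labels = np.random.default_rng(0).integers(0, 2, size=60)

with torch.no_grad():
    features = SmallCXRNet()(images).numpy()

# Select the most informative features (Relief would replace this stand-in), then classify with an SVM
selected = SelectKBest(mutual_info_classif, k=32).fit_transform(features, labels)
print(cross_val_score(SVC(kernel="linear"), selected, labels, cv=5).mean())
```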
A comparative analysis of deep neural network architectures for the dynamic diagnosis of COVID-19 based on acoustic cough features
Sunitha G, Arunachalam R, Abd-Elnaby M, Eid MMA and Rashed ANZ
The study aims to assess the detection performance of a rapid primary screening technique for COVID-19 that is purely based on the cough sound, using 2200 samples clinically validated by laboratory molecular testing (1100 COVID-19 negative and 1100 COVID-19 positive). Samples were clinically labeled for result and severity based on quantitative RT-PCR (qRT-PCR), cycle threshold, and patient lymphocyte counts. Our suggested method consists of an audio-feature tensor and deep artificial neural network classification with deep cough convolutional layers, based on a dilated temporal convolutional neural network (DTCN). The DTCN achieved approximately 76% accuracy, compared with 73.12% for a TCN and 72.11% for a CNN-LSTM, all trained at a learning rate of 0.2%. In our scenario, the CNN-LSTM cannot be recommended for COVID-19 prediction, as it generally offered questionable forecasts. We also examined the fraction of total cases that the TCN, dilated TCN, and CNN-LSTM models truly predicted. Our proposed technique for identifying COVID-19 can be considered a robust and in-demand technique for rapidly detecting the infection. We believe it can considerably hinder the COVID-19 pandemic worldwide.
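A minimal sketch of a dilated temporal convolutional classifier of the kind described above, assuming cough audio has already been converted to feature sequences (e.g., mel-style coefficients over time); the layer sizes and feature dimensions are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DilatedTCN(nn.Module):
    """Stack of 1D convolutions with exponentially growing dilation for cough-audio features."""

    def __init__(self, n_features=40, n_classes=2, channels=64, n_layers=4):
        super().__init__()
        layers, in_ch = [], n_features
        for i in range(n_layers):
            dilation = 2 ** i                      # 1, 2, 4, 8 -> growing temporal receptive field
            layers += [
                nn.Conv1d(in_ch, channels, kernel_size=3, dilation=dilation, padding=dilation),
                nn.ReLU(),
            ]
            in_ch = channels
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Linear(channels, n_classes)

    def forward(self, x):                          # x: (batch, n_features, time)
        h = self.tcn(x).mean(dim=2)                # average over the time axis
        return self.head(h)

# Example: batch of 8 feature sequences (40 coefficients x 200 frames)
model = DilatedTCN()
logits = model(torch.randn(8, 40, 200))
print(logits.shape)                                # torch.Size([8, 2])
```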
A modified DeepLabV3+ based semantic segmentation of chest computed tomography images for COVID-19 lung infections
Polat H
Coronavirus disease (COVID-19) affects the lives of billions of people worldwide and has destructive impacts on daily life routines, the global economy, and public health. Early diagnosis and quantification of COVID-19 infection have a vital role in improving treatment outcomes and interrupting transmission. For this purpose, advances in medical imaging techniques like computed tomography (CT) scans offer great potential as an alternative to the RT-PCR assay. CT scans enable a better understanding of infection morphology and tracking of lesion boundaries. Since manual analysis of CT can be extremely tedious and time-consuming, robust automated image segmentation is necessary for clinical diagnosis and decision support. This paper proposes an efficient segmentation framework based on a modified DeepLabV3+ using lower atrous rates in the Atrous Spatial Pyramid Pooling (ASPP) module. The lower atrous rates shrink the receptive fields to capture intricate morphological details. The encoder part of the framework utilizes a pre-trained residual network based on dilated convolutions for optimum resolution of feature maps. In order to evaluate the robustness of the modified model, a comprehensive comparison with other state-of-the-art segmentation methods was also performed. The experiments were carried out using a fivefold cross-validation technique on a publicly available database containing 100 single-slice CT scans from >40 patients with COVID-19. The modified DeepLabV3+ achieved good segmentation performance using around 43.9 M parameters, and the lower atrous rates in the ASPP module improved segmentation performance. After fivefold cross-validation, the framework achieved an overall Dice similarity coefficient score of 0.881. The results demonstrate that several minor modifications to the DeepLabV3+ pipeline can provide robust solutions for improving segmentation performance and hardware implementation.
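The modification described above lowers the dilation rates in the ASPP module. The sketch below shows a generic ASPP block in PyTorch where the rates are a constructor argument, so the commonly used (6, 12, 18) rates can be swapped for smaller ones; it is an illustrative reconstruction, not the paper's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling with configurable dilation (atrous) rates."""

    def __init__(self, in_ch, out_ch, rates=(2, 4, 6)):   # lower rates than the usual (6, 12, 18)
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size=1)]       # 1x1 branch
            + [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r) for r in rates]
        )
        self.image_pool = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(in_ch, out_ch, 1))
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=x.shape[2:], mode="bilinear",
                               align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))

# Encoder output of a DeepLabV3+-style network, e.g. a 2048-channel feature map
aspp = ASPP(in_ch=2048, out_ch=256, rates=(2, 4, 6))
print(aspp(torch.randn(1, 2048, 32, 32)).shape)            # torch.Size([1, 256, 32, 32])
```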
LiteCovidNet: A lightweight deep neural network model for detection of COVID-19 using X-ray images
Kumar S, Shastri S, Mahajan S, Singh K, Gupta S, Rani R, Mohan N and Mansotra V
COVID-19, which first spread in Wuhan, China, has been declared a global pandemic. To stymie the further spread of the virus, detection needs to be done at an early stage. Artificial-intelligence-based deep learning models have gained much popularity in the detection of many diseases within the confines of biomedical sciences. In this paper, a deep neural network-based "LiteCovidNet" model is proposed that detects COVID-19 cases from chest X-ray images of infected persons in a binary setting (COVID-19, Normal) and a multi-class setting (COVID-19, Normal, Pneumonia). An accuracy of 100% and 98.82% is achieved for binary and multi-class classification, respectively, which is competitive performance compared with other recent related studies. Hence, our methodology can be used by health professionals to validate the detection of COVID-19 infected patients at an early stage with convenient cost and better accuracy.
A deep learning approach for classification of COVID and pneumonia using DenseNet-201
Sanghvi HA, Patel RH, Agarwal A, Gupta S, Sawhney V and Pandya AS
In the present paper, our model uses a deep learning approach, DenseNet201, for the detection of COVID and pneumonia from chest X-ray images. The model is part of a framework whose modeling software assists in Health Insurance Portability and Accountability Act (HIPAA) compliance, protecting and securing Protected Health Information. In medical facilities, the proposed framework provides feedback to the radiologist for detecting COVID and pneumonia through transfer learning methods. A graphical user interface tool allows the technician to upload a chest X-ray image. The software then passes the chest X-ray radiograph (CXR) to the developed detection model. Once the radiographs are processed, the radiologist receives the classification of the disease, which further helps them verify similar CXR images and draw a conclusion. Our model uses a dataset from Kaggle and achieves an accuracy of 99.1%, a sensitivity of 98.5%, and a specificity of 98.95%. The proposed biomedical innovation is a user-ready framework that assists medical providers in giving patients the best-suited medication regimen by looking into previous CXR images and confirming the results. There is a motivation to design more such applications for medical image analysis in the future to serve the community and improve patient care.
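A minimal transfer-learning sketch consistent with the approach above, using torchvision's ImageNet-pretrained DenseNet201 with its classifier head replaced for the CXR classes; dataset paths, preprocessing, and the training loop are omitted and would follow a standard fine-tuning recipe.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained DenseNet201 and replace the classifier head
model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
num_classes = 2                                   # e.g. COVID vs. pneumonia (or 3 incl. normal)
model.classifier = nn.Linear(model.classifier.in_features, num_classes)

# Optionally freeze the convolutional backbone and fine-tune only the new head
for param in model.features.parameters():
    param.requires_grad = False

# Forward pass on a dummy 3-channel, 224x224 batch (ImageNet-style preprocessing assumed)
logits = model(torch.randn(4, 3, 224, 224))
print(logits.shape)                               # torch.Size([4, 2])
```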
An effective detection of COVID-19 using adaptive dual-stage horse herd bidirectional long short-term memory framework
Mannepalli DP and Namdeo V
COVID-19 is a rapidly spreading severe viral disease that affects human beings as well as animals. The increasing numbers of infections and deaths due to COVID-19 call for timely detection. This work presents an innovative deep learning methodology for the prediction of COVID-19 in patients from chest x-ray images. Chest x-ray is the most effective imaging technique for predicting lung-associated diseases. An effective approach with an adaptive dual-stage horse herd bidirectional LSTM model is presented for the classification of images into normal, lung opacity, viral pneumonia, and COVID-19. Initially, the input images are preprocessed using a modified histogram equalization approach, which improves the contrast of the images by converting low-resolution images into high-resolution images. Subsequently, an extended dual-tree complex wavelet with trigonometric transform is introduced to extract high-density features and decrease feature complexity. Moreover, adaptive beetle antennae search optimization is utilized to reduce the dimensionality of the features; this enhances the performance of disease classification by reducing the computational complexity. Finally, the adaptive dual-stage horse herd bidirectional LSTM model is used to classify the images into normal, viral pneumonia, lung opacity, and COVID-19. The implementation platform used in the work is Python. The performance of the presented approach is demonstrated by comparison with existing approaches in terms of accuracy (99.07%), sensitivity (97.6%), F-measure (97.1%), specificity (99.36%), kappa coefficient (97.7%), precision (98.56%), and area under the receiver operating characteristic curve (99%) on a COVID-19 chest x-ray database.
Multimodal covid network: Multimodal bespoke convolutional neural network architectures for COVID-19 detection from chest X-ray's and computerized tomography scans
Padmapriya T, Kalaiselvi T and Priyadharshini V
AI-based tools developed in existing works focused on one type of image data, either CXRs or computerized tomography (CT) scans, for COVID-19 prediction. There is a need for an AI-based tool that detects COVID-19 from chest images of both kinds, chest X-rays (CXRs) and CT scans, given as inputs. This research gap is the core objective of the proposed work. In the proposed work, multimodal CNN architectures were developed based on the parameters and hyperparameters of neural networks. Nine experiments evaluate optimizers, learning rates, and the number of epochs. Based on the experimental results, suitable parameters are fixed for multimodal architecture development for COVID-19 detection. We have constructed a bespoke convolutional neural network (CNN) architecture named multimodal covid network (MMCOVID-NET) by varying the number of layers from two to seven, which can classify COVID or normal images from both CXRs and CT scans. In the proposed work, we have experimented by constructing 24 models for COVID-19 prediction. Among them, four models named MMCOVID-NET-I, MMCOVID-NET-II, MMCOVID-NET-III, and MMCOVID-NET-IV performed well by producing an accuracy of 100%. We obtained these results on a small dataset, so we repeated the experiments on a larger dataset and inferred that MMCOVID-NET-III outperformed all the state-of-the-art methods by producing an accuracy of 99.75%. The experiments carried out in this work conclude that the parameters and hyperparameters play a vital role in increasing or decreasing the model's performance.
Can laboratory parameters be an alternative to CT and RT-PCR in the diagnosis of COVID-19? A machine learning approach
Kalaycı M, Ayyıldız H, Tuncer SA, Bozdag PG and Karlidag GE
In this study, a machine learning-based decision support system that uses routine laboratory parameters has been proposed in order to increase the diagnostic success in COVID-19. The main goal of the proposed method was to reduce the number of misdiagnoses from RT-PCR and CT scans and to reduce the cost of testing. We retrospectively reviewed the files of patients who presented to the coronavirus outpatient clinic. The demographic, thoracic CT, and laboratory data of individuals without any symptoms of the disease, of those with a negative RT-PCR test, and of those with a positive RT-PCR test were analyzed. CT images were classified using hybrid CNN methods to show the superiority of the decision support system using laboratory parameters. Detection of COVID-19 from CT images achieved an accuracy of 97.56% with the AlexNet-SVM hybrid method, while COVID-19 was classified with an accuracy of 97.86% with the proposed method using laboratory parameters.
A lightweight capsule network architecture for detection of COVID-19 from lung CT scans
Tiwari S and Jain A
COVID-19, a novel coronavirus, has spread quickly and produced a worldwide respiratory ailment outbreak. There is a need for large-scale screening to prevent the spread of the disease. Compared with the reverse transcription polymerase chain reaction (RT-PCR) test, computed tomography (CT) is far more consistent, concrete, and precise in detecting COVID-19 patients through clinical diagnosis. An architecture based on deep learning has been proposed by integrating a capsule network with different variants of convolutional neural networks. DenseNet, ResNet, VGGNet, and MobileNet are combined with CapsNet to detect COVID-19 cases using lung computed tomography scans. It was found that all four models provide adequate accuracy, among which the VGGCapsNet, DenseCapsNet, and MobileCapsNet models achieved the highest accuracy of 99%. An Android app can be deployed using the MobileCapsNet model to detect COVID-19, as it is a lightweight model best suited for handheld devices such as mobile phones.
Detection and diagnosis of COVID-19 infection in lungs images using deep learning techniques
Kumar A and Mahapatra RP
The world's science and technology have been challenged by the COVID-19 pandemic. Every community across the globe is trying to find a real-time, novel method for accurate treatment and cure of COVID-19 infected patients. The most important lesson to take from this pandemic is to detect infected patients as soon as possible and provide them with accurate treatment. At present, the worldwide methodology to detect COVID-19 is reverse transcription-polymerase chain reaction (RT-PCR). This technique is costly and time-consuming, so the implementation of a novel method is required. This paper uses deep learning analysis to develop a system for identifying COVID-19 patients. The proposed technique is based on a convolutional neural network (CNN) and a deep neural network (DNN). This paper proposes two models: the first designs a DNN on the basis of fractal features of the images, and the second designs a CNN using lung x-ray images. To find the infected area (tissue) of the lung image with the CNN architecture, a segmentation process has been used. The developed CNN architecture gave classification results with an accuracy of 94.6% and a sensitivity of 90.5%, which is much better than the proposed DNN method, which gave an accuracy of 84.11% and a sensitivity of 84.7%. The outcome of the presented model shows 94.6% accuracy in detecting infected regions. Using this method, the growth of the infected regions can be monitored and controlled. The designed model can also be used in post-COVID-19 analysis.
COVID-opt-aiNet: A clinical decision support system for COVID-19 detection
Kanwal S, Khan F, Alamri S, Dashtipur K and Gogate M
Coronavirus disease (COVID-19) has had a major and sometimes lethal effect on global public health. COVID-19 detection is a difficult task that necessitates the use of intelligent diagnosis algorithms. Numerous studies have suggested the use of artificial intelligence (AI) and machine learning (ML) techniques to detect COVID-19 infection in patients through chest X-ray image analysis. The use of medical imaging with different modalities for COVID-19 detection has become an important means of containing the spread of this disease. However, medical images are not sufficiently adequate for routine clinical use; there is, therefore, an increasing need for AI to be applied to improve the diagnostic performance of medical image analysis. Regrettably, due to the evolving nature of the COVID-19 global epidemic, the systematic collection of a large data set for deep neural network (DNN)/ML training is problematic. Inspired by these studies, and to aid in the medical diagnosis and control of this contagious disease, we suggest a novel approach that ensembles the feature selection capability of the optimized artificial immune networks (opt-aiNet) algorithm with deep learning (DL) and ML techniques for better prediction of the disease. In this article, we experimented with a DNN, a convolutional neural network (CNN), bidirectional long-short-term memory, a support vector machine (SVM), and logistic regression for the effective detection of COVID-19 in patients. We illustrate the effectiveness of this proposed technique by using COVID-19 image datasets with a variety of modalities. An empirical study using the COVID-19 image dataset demonstrates that the proposed hybrid approaches, named COVID-opt-aiNet, improve classification accuracy by up to 98%-99% for SVM, 96%-97% for DNN, and 70.85%-71% for CNN, to name a few examples. Furthermore, statistical analysis ensures the validity of our proposed algorithms. The source code can be downloaded from Github: https://github.com/faizakhan1925/COVID-opt-aiNet.
Genetic-based adaptive momentum estimation for predicting mortality risk factors for COVID-19 patients using deep learning
Elghamrawy SM, Hassanien AE and Vasilakos AV
The mortality risk factors for coronavirus disease (COVID-19) must be predicted early, especially for severe cases, so that intensive care can be provided before patients become critically ill. This paper aims to develop an optimized convolutional neural network (CNN) for predicting mortality risk factors for COVID-19 patients. The proposed model supports two types of input data: clinical variables and computed tomography (CT) scans. The features are extracted in the optimized CNN phase and then applied to the classification phase. The CNN model's hyperparameters were optimized using a proposed genetic-based adaptive momentum estimation (GB-ADAM) algorithm. The GB-ADAM algorithm employs the genetic algorithm (GA) to optimize the Adam optimizer's configuration parameters, consequently improving the classification accuracy. The model is validated using three recent cohorts from New York, Mexico, and Wuhan, consisting of 3055, 7497, and 504 patients, respectively. The results indicated that the most significant mortality risk factors are: CD T lymphocyte count, D-dimer greater than 1 µg/ml, high values of lactate dehydrogenase (LDH), C-reactive protein (CRP), hypertension, and diabetes. Early identification of these factors would help clinicians provide immediate care. The results also show that the most frequent COVID-19 signs in CT scans included ground-glass opacity (GGO), followed by crazy-paving pattern, consolidations, and the number of lobes. Moreover, the experimental results show encouraging performance for the proposed model compared with different predicting models.
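The GB-ADAM idea of tuning the Adam optimizer's configuration with a genetic algorithm can be sketched as below. The fitness function here is a synthetic stand-in (in the paper it would be CNN validation performance), and the GA operators are deliberately minimal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Search space for Adam's configuration parameters: (log10 learning rate, beta1, beta2)
LOW = np.array([-5.0, 0.80, 0.90])
HIGH = np.array([-1.0, 0.99, 0.9999])

def fitness(genes):
    """Stand-in for the validation score of a CNN trained with these Adam settings."""
    target = np.array([-3.0, 0.9, 0.999])          # pretend these settings are optimal
    return -np.sum((genes - target) ** 2)

def evolve(pop_size=20, n_gen=30, mut_sigma=0.05):
    pop = rng.uniform(LOW, HIGH, size=(pop_size, 3))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]             # selection: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(3) < 0.5, a, b)                 # uniform crossover
            child += rng.normal(0.0, mut_sigma, size=3) * (HIGH - LOW)  # Gaussian mutation
            children.append(np.clip(child, LOW, HIGH))
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return {"lr": 10 ** best[0], "beta1": best[1], "beta2": best[2]}

print(evolve())   # e.g. lr near 1e-3, beta1 near 0.9, beta2 near 0.999
```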
COLI-Net: Deep learning-assisted fully automated COVID-19 lung and infection pneumonia lesion detection and segmentation from chest computed tomography images
Shiri I, Arabi H, Salimi Y, Sanaat A, Akhavanallaf A, Hajianfar G, Askari D, Moradi S, Mansouri Z, Pakbin M, Sandoughdaran S, Abdollahi H, Radmard AR, Rezaei-Kalantari K, Ghelich Oghli M and Zaidi H
We present a deep learning (DL)-based automated whole-lung and COVID-19 pneumonia infectious lesion (COLI-Net) detection and segmentation method for chest computed tomography (CT) images. This multicenter/multiscanner study involved 2368 (347,259 2D slices) and 190 (17,341 2D slices) volumetric CT exams along with their corresponding manual segmentations of lungs and lesions, respectively. All images were cropped and resized, and the intensity values were clipped and normalized. A residual network with a non-square Dice loss function built upon TensorFlow was employed. The accuracy of lung and COVID-19 lesion segmentation was evaluated on an external reverse transcription-polymerase chain reaction-positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98-0.99) and 0.91 ± 0.038 (95% CI, 0.90-0.91) for lung and lesion segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, -0.12 to 0.18) and -0.18 ± 3.4% (95% CI, -0.8 to 0.44) for the lungs and lesions, respectively. The relative volume differences for lungs and lesions were 0.38 ± 1.2% (95% CI, 0.16-0.59) and 0.81 ± 6.6% (95% CI, -0.39 to 2), respectively. Most radiomic features had a mean relative error of less than 5%, with the highest mean relative errors being -6.95% for a first-order feature of the lung and 8.68% for a shape feature of the lesions. We developed an automated DL-guided three-dimensional whole-lung and infected-region segmentation in COVID-19 patients to provide a fast, consistent, robust, and human-error-immune framework for lung and pneumonia lesion detection and quantification.
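A small sketch of a soft Dice loss with a "non-square" (plain-sum) denominator, as mentioned above; it is written in PyTorch for consistency with the other sketches rather than the TensorFlow implementation named in the abstract, and it is not the authors' exact code.

```python
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss with a non-squared denominator: 1 - 2|P*G| / (|P| + |G|).

    pred:   predicted probabilities in [0, 1], shape (batch, 1, D, H, W) or (batch, 1, H, W)
    target: binary ground-truth mask of the same shape
    """
    dims = tuple(range(1, pred.ndim))                        # sum over all non-batch dimensions
    intersection = (pred * target).sum(dim=dims)
    denominator = pred.sum(dim=dims) + target.sum(dim=dims)  # plain sums, not squared terms
    dice = (2 * intersection + eps) / (denominator + eps)
    return 1.0 - dice.mean()

# Example on a dummy 3D lesion mask
pred = torch.rand(2, 1, 16, 64, 64)
target = (torch.rand(2, 1, 16, 64, 64) > 0.7).float()
print(soft_dice_loss(pred, target).item())
```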
The effect of deep feature concatenation in the classification problem: An approach on COVID-19 disease detection
Cengil E and Çınar A
In image classification applications, the most important thing is to obtain useful features. Convolutional neural networks automatically learn extracted features during training, and the classification process is carried out with the obtained features. Therefore, obtaining successful features is critical to achieving high classification success. This article focuses on providing effective features to enhance classification performance. For this purpose, the success of concatenating features for classification is taken as the basis. First, features acquired by the feature transfer method are extracted from the AlexNet, Xception, NASNetLarge, and EfficientNet-B0 architectures, which are known to be successful in classification problems. Concatenating the features results in the creation of a new feature set. The method is completed by subjecting the features to various classification algorithms. The proposed pipeline is applied to three datasets for COVID-19 disease detection: the "COVID-19 Image Dataset," the "COVID-19 Pneumonia Normal Chest X-ray (PA) Dataset," and the "COVID-19 Radiography Database." All datasets contain three classes (normal, COVID, and pneumonia). The best classification accuracies for the three datasets are 98.8%, 95.9%, and 99.6%, respectively. Sensitivity, precision, specificity, and F1-score values are given as well. The contribution of the paper is as follows: COVID-19 disease is similar to other lung infections, which makes diagnosis difficult, and the virus's rapid spread necessitates detecting cases as soon as possible. There has been increased interest in computer-aided deep learning models to meet these requirements. The use of the proposed method will be beneficial as it provides high accuracy.
Randomly initialized convolutional neural network for the recognition of COVID-19 using X-ray images
Ben Atitallah S, Driss M, Boulila W and Ben Ghézala H
By the start of 2020, the novel coronavirus (COVID-19) had been declared a worldwide pandemic, and because of its infectiousness and severity, several strands of research have focused on combatting its ongoing spread. One potential solution for detecting COVID-19 rapidly and effectively is analyzing chest X-ray images using Deep Learning (DL) models. Convolutional Neural Networks (CNNs) have been presented as particularly efficient techniques for early diagnosis, but most still have limitations. In this study, we propose a novel randomly initialized CNN (RND-CNN) architecture for the recognition of COVID-19. This network consists of a set of differently-sized hidden layers all created from scratch. The performance of the RND-CNN is evaluated using two public datasets: the COVIDx and the enhanced COVID-19 datasets. Each of these datasets consists of medical images (X-rays) in one of three different classes: chests with COVID-19, with pneumonia, or in a normal state. The proposed RND-CNN model yields encouraging accuracy in detecting COVID-19, achieving 94% accuracy on the COVIDx dataset and 99% accuracy on the enhanced COVID-19 dataset.
Automatic classification of severity of COVID-19 patients using texture feature and random forest based on computed tomography images
Amini N and Shalbaf A
Severity assessment of the novel coronavirus disease (COVID-19) using chest computed tomography (CT) scans is crucial for the effective administration of the right therapeutic drugs and for monitoring the progression of the disease. However, determining the severity of COVID-19 by visual assessment requires a highly expert radiologist and is time-consuming, tedious, and subjective. This article introduces an advanced machine learning tool to determine the severity of COVID-19 as mild, moderate, or severe from lung CT images. We used a set of quantitative first- and second-order statistical texture features from each image. The first-order texture features, extracted from the image histogram, are variance, skewness, and kurtosis. The second-order texture feature extraction methods are the gray-level co-occurrence matrix, the gray-level run length matrix, and the gray-level size zone matrix. Finally, using the extracted features, the CT images of each person are classified into four classes using random forest (RF), an ensemble method based on majority voting of the decision trees' outputs. We used a dataset of CT scans labeled by expert radiologists as normal (231), mild (563), moderate (120), and severe (42). The experimental results indicate that the combination of all feature extraction methods with RF achieves the highest result among the compared strategies in detecting the four severity classes of COVID-19 from CT images, with an accuracy of 90.95%. The proposed system works well and can be used as an assistant diagnostic tool for quantifying lung involvement in COVID-19 and monitoring the progression of the disease.
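A condensed sketch of the feature-plus-classifier pipeline described above, computing histogram statistics and a few GLCM properties per image and classifying with a random forest. The images and labels are synthetic placeholders, and the full study also used run-length and size-zone matrices.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def texture_features(img):
    """First-order histogram statistics plus second-order GLCM properties for one CT slice."""
    first_order = [img.var(), skew(img.ravel()), kurtosis(img.ravel())]
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    second_order = [graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "homogeneity", "energy", "correlation")]
    return first_order + second_order

# Synthetic stand-in for lung CT slices with 4 severity labels (normal/mild/moderate/severe)
rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(80, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 4, size=80)

X = np.array([texture_features(img) for img in images])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())
```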
A novel and efficient deep learning approach for COVID-19 detection using X-ray imaging modality
Bhardwaj P and Kaur A
With the exponential growth of COVID-19 cases, medical practitioners are searching for accurate and quick automated detection methods to prevent COVID-19 from spreading while trying to reduce the computational requirements of devices. In this research article, an accurate and efficient deep learning Convolutional Neural Network (CNN)-based ensemble model is proposed, using 2161 COVID-19, 2022 pneumonia, and 5863 normal chest X-ray images collected from previous publications and other online resources. To improve detection accuracy, contrast enhancement and image normalization were performed at the pre-processing level to produce better-quality images. Further data augmentation methods, creating modified versions of the images in the dataset, are used to train four efficient CNN models (InceptionV3, DenseNet121, Xception, InceptionResNetV2). Experimental results provide 98.33% accuracy for the binary class and 92.36% for the multiclass. The performance evaluation metrics reveal that this tool can be very helpful for early disease diagnosis.