Detection of various lung diseases including COVID-19 using extreme learning machine algorithm based on the features extracted from a lightweight CNN architecture
Around the world, several lung diseases such as pneumonia, cardiomegaly, and tuberculosis (TB) contribute to severe illness, hospitalization, or even death, particularly for elderly and medically vulnerable patients. In the last few decades, several new types of lung-related diseases have taken the lives of millions of people, and COVID-19 alone has taken almost 6.27 million lives. To fight lung diseases, timely and correct diagnosis followed by appropriate treatment is crucial, especially during the current COVID-19 pandemic. In this study, an intelligent recognition system for seven lung diseases has been proposed based on machine learning (ML) techniques to aid medical experts. Chest X-ray (CXR) images of lung diseases were collected from several publicly available databases. A lightweight convolutional neural network (CNN) has been used to extract characteristic features from the raw pixel values of the CXR images. The best feature subset has been identified using the Pearson Correlation Coefficient (PCC). Finally, the extreme learning machine (ELM) has been used to perform the classification task, enabling faster learning and reduced computational complexity. The proposed CNN-PCC-ELM model achieved an accuracy of 96.22% with an Area Under the Curve (AUC) of 99.48% for eight-class classification. The outcomes of the proposed model demonstrated better performance than the existing state-of-the-art (SOTA) models for COVID-19, pneumonia, and tuberculosis detection in both binary and multiclass classifications. For eight-class classification, the proposed model achieved a precision, recall, F1-score, and ROC-AUC of 100%, 99%, 100%, and 99.99%, respectively, for COVID-19 detection, demonstrating its robustness. Therefore, the proposed model outperforms existing pioneering models in accurately differentiating COVID-19 from other lung diseases, which can assist physicians in treating patients effectively.
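The classification stage lends itself to a compact sketch. Below is a minimal illustration of the PCC feature filter and an ELM classifier, assuming the CNN feature matrices (`train_feats`, `test_feats`) and integer labels already exist; the hidden-layer size and `top_k` are illustrative choices, not the paper's settings.

```python
# Minimal sketch of a PCC feature filter and an ELM classifier, assuming the
# feature matrices (n_samples x n_features) were already extracted by a CNN.
import numpy as np

def pcc_select(features, labels, top_k=256):
    """Keep the top_k features with the largest |Pearson correlation| to the labels."""
    y = labels.astype(float)
    scores = np.array([abs(np.corrcoef(features[:, j], y)[0, 1])
                       for j in range(features.shape[1])])
    return np.argsort(scores)[::-1][:top_k]

class ELM:
    """Single-hidden-layer extreme learning machine: random input weights, output
    weights solved in closed form with the Moore-Penrose pseudo-inverse."""
    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                       # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(X)) @ T
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Illustrative usage (arrays assumed to exist):
# idx = pcc_select(train_feats, train_labels)
# clf = ELM().fit(train_feats[:, idx], train_labels)
# preds = clf.predict(test_feats[:, idx])
```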
A deep learning-based COVID-19 automatic diagnostic framework using chest X-ray images
The lethal novel coronavirus disease 2019 (COVID-19) pandemic is affecting the health of the global population severely, and a huge number of people may have to be screened in the future. There is a need for effective and reliable systems that perform automatic detection and mass screening of COVID-19 as a quick alternative diagnostic option to control its spread. A robust deep learning-based system is proposed to detect COVID-19 using chest X-ray images. Infected patients' chest X-ray images reveal numerous opacities (denser, confluent, and more profuse) in comparison to healthy lung images; these are used by a deep learning algorithm to generate a model that facilitates accurate diagnosis for multi-class classification (COVID vs. normal vs. bacterial pneumonia vs. viral pneumonia) and binary classification (COVID-19 vs. non-COVID). COVID-19 positive images collected from several hospitals in India, as well as from countries including Australia, Belgium, Canada, China, Egypt, Germany, Iran, Israel, Italy, Korea, Spain, Taiwan, USA, and Vietnam, were used for training and model performance assessment. The data were divided into training, validation, and test sets. An average test accuracy of 97.11 ± 2.71% was achieved for multi-class classification (COVID vs. normal vs. pneumonia) and 99.81% for binary classification (COVID-19 vs. non-COVID). The proposed model performs rapid disease detection in 0.137 s per image on a system equipped with a GPU and can reduce the workload of radiologists by classifying thousands of images with a single click to generate a probabilistic report in real time.
Automated detection of COVID-19 from CT scan using convolutional neural network
Under the prevailing circumstances of the global COVID-19 pandemic, early diagnosis and accurate detection of COVID-19 through tests/screening and, subsequently, isolation of the infected people would be a proactive measure. Artificial intelligence (AI) based solutions, using Convolutional Neural Networks (CNN) and exploiting the diagnostic capabilities of deep learning models, have been studied in this paper. A transfer learning approach, based on the VGG16 and ResNet50 architectures, has been used to develop an algorithm to detect COVID-19 from CT scan images consisting of Healthy (Normal), COVID-19, and Pneumonia categories. This paper adopts data augmentation and fine-tuning techniques to improve and optimize the VGG16 and ResNet50 models. Further, stratified 5-fold cross-validation has been conducted to test the robustness and effectiveness of the models. The proposed model performs exceptionally well in the case of binary classification (COVID-19 vs. Normal), with an average classification accuracy of more than 99% for both the VGG16 and ResNet50 based models. In multiclass classification (COVID-19 vs. Normal vs. Pneumonia), the proposed model achieves an average classification accuracy of 86.74% and 88.52% using the VGG16 and ResNet50 architectures as baselines, respectively. Experimental results show that the proposed model achieves superior performance and can be used for automated detection of COVID-19 from CT scans.
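As a rough illustration of the described setup, the sketch below builds a VGG16-based transfer-learning classifier and evaluates it with stratified 5-fold cross-validation; the classification head, epoch count, and the assumed preprocessed arrays `X`/`y` are placeholders rather than the authors' exact configuration.

```python
# Hedged sketch: VGG16 base pretrained on ImageNet, a small classification head,
# and stratified 5-fold cross-validation over preprocessed CT images.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

def build_model(n_classes=3, img_size=224):
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(img_size, img_size, 3))
    base.trainable = False                     # fine-tuning could unfreeze top blocks later
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def cross_validate(X, y, n_splits=5):
    """X: (n, 224, 224, 3) preprocessed images, y: integer labels (assumed to exist)."""
    accs = []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=42).split(X, y):
        model = build_model()
        model.fit(X[tr], y[tr], epochs=10, batch_size=16, verbose=0)
        _, acc = model.evaluate(X[te], y[te], verbose=0)
        accs.append(acc)
    return float(np.mean(accs))
```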
Automatic Detection of Coronavirus Disease (COVID-19) in X-ray and CT Images: A Machine Learning Based Approach
The newly identified coronavirus pneumonia, subsequently termed COVID-19, is highly transmittable and pathogenic, with no clinically approved antiviral drug or vaccine available for treatment. The most common symptoms of COVID-19 are dry cough, sore throat, and fever. Symptoms can progress to a severe form of pneumonia with critical complications, including septic shock, pulmonary edema, acute respiratory distress syndrome, and multi-organ failure. While medical imaging is not currently recommended in Canada for primary diagnosis of COVID-19, computer-aided diagnosis systems could assist in the early detection of COVID-19 abnormalities and help to monitor the progression of the disease, potentially reducing mortality rates. In this study, we compare popular deep learning-based feature extraction frameworks for automatic COVID-19 classification. To obtain the most accurate features, an essential component of learning, MobileNet, DenseNet, Xception, ResNet, InceptionV3, InceptionResNetV2, VGGNet, and NASNet were chosen from a pool of deep convolutional neural networks. The extracted features were then fed into several machine learning classifiers to classify subjects as either a case of COVID-19 or a control. This approach avoided task-specific data pre-processing methods to support better generalization to unseen data. The performance of the proposed method was validated on a publicly available COVID-19 dataset of chest X-ray and CT images. The DenseNet121 feature extractor with a Bagging tree classifier achieved the best performance, with 99% classification accuracy. The second-best learner was a hybrid of a ResNet50 feature extractor and a LightGBM classifier, with an accuracy of 98%.
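The feature-extractor-plus-classical-classifier idea can be sketched as follows, using DenseNet121 embeddings and a bagged-tree ensemble from scikit-learn; the preprocessing, image arrays, and hyperparameters are assumptions for illustration, not the study's exact configuration.

```python
# Sketch: frozen DenseNet121 embeddings feeding a bagged decision-tree ensemble.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import BaggingClassifier

extractor = tf.keras.applications.DenseNet121(weights="imagenet", include_top=False,
                                              pooling="avg", input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array (n, 224, 224, 3), already resized; returns (n, 1024) embeddings."""
    x = tf.keras.applications.densenet.preprocess_input(images)
    return extractor.predict(x, verbose=0)

# Illustrative usage (train_imgs / test_imgs and labels assumed to be prepared elsewhere):
# clf = BaggingClassifier(n_estimators=50, random_state=0)   # decision trees by default
# clf.fit(extract_features(train_imgs), train_labels)
# acc = clf.score(extract_features(test_imgs), test_labels)
```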
FractalCovNet architecture for COVID-19 Chest X-ray image Classification and CT-scan image Segmentation
Precise and fast diagnosis of COVID-19 cases plays a vital role in the early stage of medical treatment and prevention. Automatic detection of COVID-19 cases using chest X-ray images and chest CT-scan images will help reduce the impact of this pandemic on human society. We have developed a novel FractalCovNet architecture using Fractal blocks and U-Net for segmentation of chest CT-scan images to localize the lesion region. The same FractalCovNet architecture is also used for classification of chest X-ray images using transfer learning. We have compared the segmentation results using various models such as U-Net, DenseUNet, Segnet, ResnetUNet, and FCN. We have also compared the classification results with various models such as ResNet50, Xception, InceptionResNetV2, VGG-16, and DenseNet architectures. The proposed FractalCovNet model is able to predict the COVID-19 lesion with high F-measure and precision values compared to the other state-of-the-art methods. Thus the proposed model can accurately predict COVID-19 cases and discover lesion regions in chest CT without manual annotation of lesions for every suspected individual. An easily trained and high-performance deep learning model provides a fast way to identify COVID-19 patients, which is beneficial for controlling the outbreak of SARS-CoV-2.
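A FractalNet-style block, the kind of building block this architecture combines with a U-Net, can be sketched in Keras as below; the depth, filter counts, and join rule are illustrative choices, not the published FractalCovNet design.

```python
# Minimal sketch of a FractalNet-style block that could replace plain conv blocks
# in a U-Net encoder; depths and filter counts are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

def conv_unit(x, filters):
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def fractal_block(x, filters, depth=3):
    """Recursive fractal expansion: a depth-k block joins one conv path with two
    stacked depth-(k-1) blocks by element-wise averaging."""
    if depth == 1:
        return conv_unit(x, filters)
    short_path = conv_unit(x, filters)
    long_path = fractal_block(fractal_block(x, filters, depth - 1), filters, depth - 1)
    return layers.Average()([short_path, long_path])

inputs = tf.keras.Input(shape=(256, 256, 1))
features = fractal_block(inputs, 32, depth=3)   # feeds the rest of the encoder/decoder
```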
SCovNet: A skip connection-based feature union deep learning technique with statistical approach analysis for the detection of COVID-19
The global population has been heavily impacted by the COVID-19 pandemic caused by the coronavirus. Infections are spreading quickly around the world, and new variants (Delta, Delta Plus, and Omicron) continue to emerge. The real-time reverse transcription-polymerase chain reaction (RT-PCR) is the method most often used to find viral RNA in a nasopharyngeal swab. However, these diagnostic approaches require human involvement and consume more time per prediction. Moreover, the existing conventional tests mainly suffer from false negatives, giving the virus a chance to spread quickly. Therefore, a rapid and early diagnosis of COVID-19 patients is needed to overcome these problems.
Evaluation of electrohysterogram measured from different gestational weeks for recognizing preterm delivery: a preliminary study using random Forest
Developing a computational method for recognizing preterm delivery is important for timely diagnosis and treatment of preterm delivery. The main aim of this study was to evaluate electrohysterogram (EHG) signals recorded at different gestational weeks for recognizing preterm delivery using random forest (RF). EHG signals from 300 pregnant women were divided into two groups depending on when the signals were recorded: i) preterm and term delivery with EHG recorded before the 26th week of gestation (denoted by the PE and TE groups), and ii) preterm and term delivery with EHG recorded during or after the 26th week of gestation (denoted by the PL and TL groups). Thirty-one linear and nonlinear features were derived from each EHG signal and then compared comprehensively between the PE and TE groups, and between the PL and TL groups. After employing the adaptive synthetic sampling approach and six-fold cross-validation, the accuracy (ACC), sensitivity, specificity, and area under the curve (AUC) were applied to evaluate the RF classification. For the PL and TL groups, RF achieved an ACC of 0.93, sensitivity of 0.89, specificity of 0.97, and AUC of 0.80. Similarly, the corresponding values were 0.92, 0.88, 0.96, and 0.88 for the PE and TE groups, indicating that RF could be used to recognize preterm delivery effectively with EHG signals recorded before the 26th week of gestation.
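The classification stage maps naturally onto standard tooling. The sketch below assumes a feature matrix `X` of the 31 linear/nonlinear EHG features and binary labels `y`, applies ADASYN oversampling inside each training fold, and evaluates a random forest with stratified 6-fold cross-validation; the forest size is an illustrative choice.

```python
# Sketch of the RF evaluation with adaptive synthetic oversampling (ADASYN) applied
# only to the training folds, which avoids leaking resampled data into the test fold.
import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def evaluate_rf(X, y, n_splits=6):
    aucs = []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=1).split(X, y):
        X_res, y_res = ADASYN(random_state=1).fit_resample(X[tr], y[tr])  # balance the preterm class
        rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_res, y_res)
        aucs.append(roc_auc_score(y[te], rf.predict_proba(X[te])[:, 1]))
    return float(np.mean(aucs))
```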
A machine learning approach to epileptic seizure prediction using Electroencephalogram (EEG) Signal
This study investigates the properties of brain electrical activity from different recording regions and physiological states for seizure detection. Neurophysiologists will find the work useful for the timely and accurate detection of epileptic seizures in their patients. We explored the best way to detect meaningful patterns from an epileptic Electroencephalogram (EEG). The signals used in this work are 23.6 s segments of 100 single-channel surface EEG recordings collected at a sampling rate of 173.61 Hz. The recorded signals are from five healthy volunteers with eyes closed and eyes open, and intracranial EEG recordings from five epilepsy patients during the seizure-free interval as well as during epileptic seizures. Feature engineering was done using: i) feature extraction of each EEG wave in the time, frequency, and time-frequency domains via the Butterworth filter, Fourier Transform, and Wavelet Transform, respectively, and ii) feature selection with the T-test and Sequential Forward Floating Selection (SFFS). SVM and KNN learning algorithms were applied to classify the preprocessed EEG signals. Performance comparison was based on Accuracy, Sensitivity, and Specificity. Our experiments showed that SVM has a slight edge over KNN.
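A minimal sketch of the feature-engineering and classification steps is given below: Butterworth band-pass filtering, a few time/frequency descriptors, and an SVM/KNN comparison. The feature set shown is a small illustrative subset, not the paper's full time-frequency and SFFS pipeline, and the `segments` and label arrays are assumed to exist.

```python
# Hedged sketch: band-pass filtering plus a handful of descriptors per EEG segment.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

FS = 173.61  # sampling rate of the EEG segments (Hz)

def bandpass(sig, low=0.5, high=40.0, order=4):
    b, a = butter(order, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, sig)

def features(sig):
    sig = bandpass(sig)
    f, pxx = welch(sig, fs=FS, nperseg=256)
    return np.array([sig.std(), np.abs(np.diff(sig)).mean(),   # time-domain descriptors
                     f[np.argmax(pxx)], pxx.max()])            # dominant frequency and its power

# Illustrative usage (segments and labels assumed available):
# X = np.array([features(s) for s in segments])
# svm = SVC(kernel="rbf").fit(X_train, y_train)
# knn = KNeighborsClassifier(5).fit(X_train, y_train)
```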
TL-med: A Two-stage transfer learning recognition model for medical images of COVID-19
The recognition of medical images with deep learning techniques can assist physicians in clinical diagnosis, but the effectiveness of recognition models relies on massive amounts of labeled data. With the rampant spread of the novel coronavirus (COVID-19) worldwide, rapid COVID-19 diagnosis has become an effective measure to combat the outbreak. However, labeled COVID-19 data are scarce. Therefore, we propose a two-stage transfer learning recognition model for medical images of COVID-19 (TL-Med) based on the concept of "generic domain → target-related domain → target domain". First, we use the Vision Transformer (ViT) pretraining model to obtain generic features from massive heterogeneous data and then learn medical features from large-scale homogeneous data. Two-stage transfer learning uses the learned primary features and the underlying information for COVID-19 image recognition, addressing the problem that data insufficiency prevents the model from learning the underlying information of the target dataset. The experimental results obtained on a COVID-19 dataset using the TL-Med model produce a recognition accuracy of 93.24%, which shows that the proposed method is more effective in detecting COVID-19 images than other approaches and may greatly alleviate the problem of data scarcity in this field.
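The two-stage idea can be sketched conceptually as below: fine-tune a generically pretrained backbone on a larger related medical dataset, then fine-tune again on the small COVID-19 target set. A standard Keras ResNet50 stands in here for the ViT used in the paper, and the datasets (`related_medical_ds`, `covid_ds`) are placeholders.

```python
# Conceptual sketch of two-stage transfer: generic weights -> related medical data -> target data.
import tensorflow as tf

def build(n_classes):
    base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(base.output)
    return tf.keras.Model(base.input, out)

# Stage 1: adapt generic features on a large, related medical-imaging dataset
stage1 = build(n_classes=14)
stage1.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
# stage1.fit(related_medical_ds, epochs=5)

# Stage 2: reuse the adapted backbone, keep a fresh output head, fine-tune on the target set
stage2 = build(n_classes=3)
stage2.set_weights(stage1.get_weights()[:-2] + stage2.get_weights()[-2:])
stage2.compile(tf.keras.optimizers.Adam(1e-5), "sparse_categorical_crossentropy",
               metrics=["accuracy"])
# stage2.fit(covid_ds, epochs=10)
```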
Automated diagnosis of COVID stages from lung CT images using statistical features in 2-dimensional flexible analytic wavelet transform
The COVID-19 epidemic has been causing a global problem since December 2019. COVID-19 is highly contagious and spreads rapidly throughout the world; thus, early detection is essential. Chest imaging has been demonstrated to aid in assessing the progression of COVID-19 lung illness. The respiratory system is the component of the human body most vulnerable to the COVID virus. COVID can be diagnosed promptly and accurately using images from a chest X-ray or a computed tomography scan. CT scans are preferred over X-rays to rule out other pulmonary illnesses, assist venous entry, and pinpoint any new heart problems. Traditional diagnostic tools are manual, time-inefficient, and insufficiently accurate. Many techniques for detecting COVID from CT scan images have recently been developed, yet none of them can efficiently detect COVID at an early stage. In this work, we propose a novel technique based on the two-dimensional flexible analytic wavelet transform (FAWT). This method decomposes pre-processed images into sub-bands. Then relevant statistical features are extracted, and principal component analysis (PCA) is used to identify robust features. After that, the robust features are ranked with the help of the Student's t-value algorithm. Finally, the ranked features are fed to a Least Squares SVM with an RBF kernel for classification. According to the experimental outcomes, our model beat state-of-the-art approaches for COVID classification. The model attained a classification accuracy of 93.47%, specificity of 93.34%, sensitivity of 93.6%, and F1-score of 0.93 using tenfold cross-validation.
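The overall pipeline shape can be sketched as below, with a standard 2-D discrete wavelet transform (pywt) standing in for the flexible analytic wavelet transform, simple statistical sub-band features, PCA, t-value ranking, and an RBF-kernel SVC standing in for LS-SVM; all parameters and arrays are illustrative assumptions.

```python
# Hedged pipeline sketch: wavelet sub-bands -> statistical features -> PCA -> t-value ranking -> SVM.
import numpy as np
import pywt
from scipy.stats import ttest_ind, skew, kurtosis
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def subband_features(img, wavelet="db4", level=2):
    """Mean, std, skewness, and kurtosis of every wavelet sub-band of one CT slice."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    return np.array([f(b) for b in bands for f in
                     (np.mean, np.std,
                      lambda x: skew(x, axis=None), lambda x: kurtosis(x, axis=None))])

# Illustrative usage (ct_slices and labels y assumed available):
# X = np.array([subband_features(ct) for ct in ct_slices])
# Xp = PCA(n_components=20).fit_transform(X)
# t, _ = ttest_ind(Xp[y == 1], Xp[y == 0]); ranked = np.argsort(-np.abs(t))
# clf = SVC(kernel="rbf").fit(Xp[:, ranked[:10]], y)
```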
COVID-RDNet: A novel coronavirus pneumonia classification model using the mixed dataset by CT and X-rays images
Coronavirus disease 2019 (COVID-19) testing relies on traditional screening methods, which require substantial manpower and material resources. Recently, to effectively reduce the damage caused by radiation and enhance effectiveness, deep learning approaches for classifying COVID-19-negative and -positive cases using mixed datasets of CT and X-ray images have achieved remarkable research results. However, the details presented in CT and X-ray images exhibit both pathological diversity and feature similarity, increasing the difficulty for physicians in judging specific cases. On this basis, this paper proposes a novel coronavirus pneumonia classification model using a mixed dataset of CT and X-ray images. To address the problem of feature similarity between other lung diseases and COVID-19, the extracted features are enhanced by an adaptive region enhancement algorithm. In addition, a deep network based on residual blocks and dense blocks is trained and tested. On the one hand, the residual blocks effectively improve the accuracy of the model, and non-linear COVID-19 features are obtained through cross-layer links. On the other hand, the dense blocks effectively improve the robustness of the model by connecting local and abstract information. On the mixed X-ray and CT dataset, the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), area under the curve (AUC), and accuracy can all reach 0.99. While respecting patient privacy and ethics, the proposed algorithm, using a mixed dataset from real cases, can effectively assist doctors in performing accurate COVID-19 negative/positive classification to determine the infection status of patients.
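The two building blocks the model combines, a residual block with a cross-layer shortcut and a dense block in which each layer sees all previous feature maps, can be sketched in Keras as follows; filter counts are illustrative, and the adaptive region-enhancement step is not reproduced here.

```python
# Minimal sketch of a residual block (cross-layer shortcut) and a dense block
# (concatenation of earlier feature maps), as generic building blocks.
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    shortcut = x
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.Add()([x, shortcut])          # cross-layer link
    return layers.ReLU()(x)

def dense_block(x, n_layers=3, growth=16):
    for _ in range(n_layers):
        new = layers.Conv2D(growth, 3, padding="same", activation="relu")(x)
        x = layers.Concatenate()([x, new])   # connect local and earlier feature maps
    return x

inputs = tf.keras.Input(shape=(224, 224, 1))
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = residual_block(x)
x = dense_block(x)
```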
Explainable COVID-19 detection using fractal dimension and vision transformer with Grad-CAM on cough sounds
The polymerase chain reaction (PCR) test is not only time-intensive but also a contact method that puts healthcare personnel at risk. Thus, contactless and fast detection tests are more valuable. Cough sound is an important indicator of COVID-19, and in this paper, a novel explainable scheme is developed for cough sound-based COVID-19 detection. In the presented work, the cough sound is initially segmented into overlapping parts, and each segment is labeled, since the input audio may contain other sounds. The deep Yet Another Mobile Network (YAMNet) model is used for this labeling. After labeling, the segments labeled as cough are cropped and concatenated to reconstruct the pure cough sounds. Then, four fractal dimension (FD) calculation methods are employed to acquire the FD coefficients of the cough sound with an overlapped sliding window, forming a matrix. The constructed matrices are then used to form fractal dimension images. Finally, a pretrained vision transformer (ViT) model is used to classify the constructed images into COVID-19, healthy, and symptomatic classes. In this work, we demonstrate the performance of the ViT on cough sound-based COVID-19 detection, and a visual explanation of the inner workings of the ViT model is shown. Three publicly available cough sound datasets, namely COUGHVID, VIRUFY, and COSWARA, are used in this study. We obtained 98.45%, 98.15%, and 97.59% accuracy for the COUGHVID, VIRUFY, and COSWARA datasets, respectively. Our developed model obtained the highest performance compared to state-of-the-art methods and is ready to be tested in real-world applications.
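The fractal-dimension step can be illustrated with one FD estimator. The sketch below computes the Katz fractal dimension over an overlapped sliding window on an (already segmented) cough waveform, producing one row of the coefficient matrix; the paper uses four FD methods, and the window sizes here are assumptions.

```python
# Sketch of the FD step: Katz fractal dimension over overlapping windows of a cough waveform.
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D window."""
    dists = np.abs(np.diff(x))
    L = dists.sum()                              # total curve length
    d = np.max(np.abs(x - x[0]))                 # max distance from the first sample
    n = L / dists.mean()                         # number of steps in normalized units
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def fd_matrix(signal, win=1024, hop=256):
    """One row of Katz FD values from overlapping windows; other FD variants would add rows."""
    return np.array([[katz_fd(signal[i:i + win])
                      for i in range(0, len(signal) - win, hop)]])

# img = fd_matrix(cough_waveform)   # normalized/resized into an image for the ViT classifier
```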
COVID-19 detection on chest X-ray images using Homomorphic Transformation and VGG inspired deep convolutional neural network
COVID-19 caused the whole world to come to a standstill. The current detection methods are time-consuming as well as costly. Using chest X-rays (CXRs) is a solution to this problem; however, manual examination of CXRs is a cumbersome and difficult process requiring specialization in the domain. Most existing methods for this application involve the use of pretrained models such as VGG19, ResNet, DenseNet, Xception, and EfficientNet, which were trained on RGB image datasets. X-rays are fundamentally single-channel images, so using an RGB-trained model is not appropriate, since it increases the number of operations by involving three channels instead of one. One way of using a pretrained model for grayscale images is to replicate the one-channel image data across three channels, which introduces redundancy; another is to alter the input layer of the pretrained model to take in one-channel image data, which compromises the weights of the subsequent layers that were trained on three-channel images and weakens the transfer learning approach. This paper suggests a novel approach for the identification of COVID-19 using CXRs, in which Contrast Limited Adaptive Histogram Equalization (CLAHE) along with a Homomorphic Transformation Filter is used to process the pixel data and extract features from the CXRs. These processed images are then provided as input to a VGG-inspired deep Convolutional Neural Network (CNN) model that takes one-channel image data (grayscale images) as input to categorize CXRs into three class labels, namely, No-Findings, COVID-19, and Pneumonia. Evaluation of the suggested model is done with the help of two publicly available datasets: one to obtain COVID-19 and No-Finding images and the other to obtain Pneumonia CXRs. The dataset comprises 6750 images in total, with 2250 images for each class. Results show that the model achieved 96.56% accuracy for multi-class classification and 98.06% accuracy for binary classification using the 5-fold stratified cross-validation (CV) method. This result is competitive and up to the mark when compared with the performance of existing approaches for COVID-19 classification.
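The preprocessing stage can be sketched as below: CLAHE followed by a simple homomorphic filter (log transform, Gaussian high-frequency emphasis in the Fourier domain, exponential back-transform) applied to a single-channel CXR; the filter parameters are illustrative, not the paper's exact settings.

```python
# Hedged sketch of CLAHE + homomorphic filtering on a grayscale (uint8) chest X-ray.
import cv2
import numpy as np

def clahe_equalize(gray, clip=2.0, tiles=8):
    return cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles)).apply(gray)

def homomorphic(gray, gamma_l=0.5, gamma_h=1.5, c=1.0, d0=30.0):
    img = np.log1p(gray.astype(np.float32) / 255.0)          # illumination/reflectance split in log space
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * D2 / d0 ** 2)) + gamma_l   # high-frequency emphasis
    F = np.fft.fftshift(np.fft.fft2(img))
    out = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    out = np.expm1(out)
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# processed = homomorphic(clahe_equalize(cxr_gray))   # fed to the one-channel VGG-style CNN
```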
Automated malarial retinopathy detection using transfer learning and multi-camera retinal images
Cerebral malaria (CM) is a fatal syndrome found commonly in children less than 5 years old in Sub-Saharan Africa and Asia. The retinal signs associated with CM are known as malarial retinopathy (MR), and they include highly specific retinal lesions such as whitening and hemorrhages. Detecting these lesions allows the detection of CM with high specificity. Up to 23% of CM patients are over-diagnosed due to the presence of clinical symptoms also related to pneumonia, meningitis, or other conditions. Therefore, patients go untreated for these pathologies, resulting in death or neurological disability. It is essential to have a low-cost and high-specificity diagnostic technique for CM detection, for which we developed a method based on transfer learning (TL). Pre-trained TL models first select the good-quality retinal images, which are then fed into another TL model to detect CM. This approach achieves 96% specificity with low-cost retinal cameras.
A deep learning approach to detect Covid-19 coronavirus with X-Ray images
Rapid and accurate detection of the COVID-19 coronavirus is a necessity of the time to prevent and control this pandemic through timely quarantine and medical treatment in the absence of any vaccine. The daily increase in COVID-19 cases worldwide and the limited number of available detection kits make it difficult to identify the presence of the disease. Therefore, at this point in time, the necessity arises to look for other alternatives. Among already existing, widely available, and low-cost resources, X-ray is a frequently used imaging modality; on the other hand, deep learning techniques have achieved state-of-the-art performance in computer-aided medical diagnosis. Therefore, an alternative diagnostic tool to detect COVID-19 cases utilizing available resources and advanced deep learning techniques is proposed in this work. The proposed method is implemented in four phases, viz., data augmentation, preprocessing, and stage-I and stage-II deep network model design. This study is performed with online available resources of 1215 images and is further strengthened by utilizing data augmentation techniques to provide better generalization of the model and to prevent overfitting, increasing the overall size of the dataset to 1832 images. The deep network implementation in two stages is designed to differentiate COVID-19-induced pneumonia from healthy cases, bacterial pneumonia, and other virus-induced pneumonia on chest X-ray images. Comprehensive evaluations have been performed to demonstrate the effectiveness of the proposed method with both (i) training-validation-testing and (ii) 5-fold cross-validation procedures. A high classification accuracy of 97.77%, recall of 97.14%, and precision of 97.14% for COVID-19 detection shows the efficacy of the proposed method in the present time of need. Further, the deep network architecture, showing an averaged accuracy/sensitivity/specificity/precision/F1-score of 98.93/98.93/98.66/96.39/98.15 with 5-fold cross-validation, makes a promising outcome in COVID-19 detection using X-ray images.
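The data-augmentation phase can be illustrated with a standard Keras generator; the specific transforms and ranges below are illustrative choices rather than the paper's exact ones, and the directory layout is assumed.

```python
# Sketch of the augmentation phase used to enlarge the X-ray set and reduce overfitting.
import tensorflow as tf

augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10,          # small rotations
    width_shift_range=0.1,      # horizontal shifts
    height_shift_range=0.1,     # vertical shifts
    zoom_range=0.1,
    horizontal_flip=True,
    rescale=1.0 / 255.0,
)

# Augmented batches feed the stage-I model (e.g. pneumonia vs. healthy); a second,
# stage-II model would then separate COVID-19 from bacterial/other viral pneumonia.
# train_gen = augmenter.flow_from_directory("xray_train/", target_size=(224, 224),
#                                           color_mode="grayscale", class_mode="categorical")
```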
Computer-aided detection of COVID-19 from X-ray images using multi-CNN and Bayesnet classifier
Coronavirus disease 2019 (COVID-19) is a pandemic caused by a novel coronavirus. COVID-19 is spreading rapidly throughout the world. The gold standard for diagnosing COVID-19 is the reverse transcription-polymerase chain reaction (RT-PCR) test. However, the facilities for RT-PCR testing are limited, which makes early diagnosis of the disease difficult. Easily available modalities like X-ray can be used to detect specific symptoms associated with COVID-19. Pre-trained convolutional neural networks are widely used for computer-aided detection of diseases from smaller datasets. This paper investigates the effectiveness of a multi-CNN, a combination of several pre-trained CNNs, for the automated detection of COVID-19 from X-ray images. The method uses a combination of features extracted from the multi-CNN with a correlation-based feature selection (CFS) technique and a Bayesnet classifier for the prediction of COVID-19. The method was tested using two public datasets and achieved promising results on both. On the first dataset, consisting of 453 COVID-19 images and 497 non-COVID images, the method achieved an AUC of 0.963 and an accuracy of 91.16%. On the second dataset, consisting of 71 COVID-19 images and 7 non-COVID images, the method achieved an AUC of 0.911 and an accuracy of 97.44%. The experiments performed in this study proved the effectiveness of the pre-trained multi-CNN over a single CNN in the detection of COVID-19.
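The multi-CNN idea can be sketched as below: concatenate pooled embeddings from several pretrained CNNs, apply a simple correlation-based filter, and train a Bayesian classifier. GaussianNB stands in here for the CFS/Bayesnet tooling used in the paper, and the image arrays and labels are assumed to exist.

```python
# Sketch: fused embeddings from two pretrained backbones + correlation filter + naive Bayes.
import numpy as np
import tensorflow as tf
from sklearn.naive_bayes import GaussianNB

backbones = [
    tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False, pooling="avg"),
    tf.keras.applications.DenseNet121(weights="imagenet", include_top=False, pooling="avg"),
]

def multi_cnn_features(images):
    """Concatenate pooled embeddings from each pretrained backbone."""
    return np.concatenate([m.predict(images, verbose=0) for m in backbones], axis=1)

def correlation_filter(X, y, top_k=300):
    scores = np.nan_to_num(np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                                     for j in range(X.shape[1])]))
    return np.argsort(scores)[::-1][:top_k]

# Illustrative usage (train_imgs and train_labels assumed available):
# X = multi_cnn_features(train_imgs); idx = correlation_filter(X, train_labels)
# clf = GaussianNB().fit(X[:, idx], train_labels)
```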
Automated COVID-19 detection from X-ray and CT images with stacked ensemble convolutional neural network
Automatic and rapid screening of COVID-19 from radiological (X-ray or CT scan) images has become an urgent need in the current worldwide pandemic situation of SARS-CoV-2. However, accurate and reliable screening of patients is challenging due to the discrepancy between the radiological images of COVID-19 and other viral pneumonia. So, in this paper, we design a new stacked convolutional neural network model for the automatic diagnosis of COVID-19 disease from chest X-ray and CT images. In the proposed approach, different sub-models are obtained from the VGG19 and Xception models during training. Thereafter, the obtained sub-models are stacked together using a softmax classifier. The proposed stacked CNN model combines the discriminating power of the different CNN sub-models and detects COVID-19 from the radiological images. In addition, we collect CT images to build a CT image dataset and also generate an X-ray image dataset by combining X-ray images from three publicly available data repositories. The proposed stacked CNN model achieves a sensitivity of 97.62% for the multi-class classification of X-ray images into COVID-19, Normal, and Pneumonia classes, and a sensitivity of 98.31% for the binary classification of CT images into COVID-19 and No-Finding classes. Our proposed approach shows superiority over the existing methods for the detection of COVID-19 cases from X-ray radiological images.
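The stacking step can be sketched as follows: the class-probability outputs of already fine-tuned VGG19 and Xception sub-models are concatenated and fed to a softmax-style meta-classifier (logistic regression here); the model objects and data splits are assumed to exist.

```python
# Sketch of stacking sub-model probabilities into a meta-classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_predictions(sub_models, images):
    """Concatenate the softmax outputs of each sub-model into one meta-feature vector per image."""
    return np.concatenate([m.predict(images, verbose=0) for m in sub_models], axis=1)

# Illustrative usage (fine-tuned Keras models and image/label arrays assumed available):
# meta_X = stack_predictions([vgg19_model, xception_model], val_images)
# meta_clf = LogisticRegression(max_iter=1000).fit(meta_X, val_labels)
# test_pred = meta_clf.predict(stack_predictions([vgg19_model, xception_model], test_images))
```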
Technology-based health promotion: Current state and perspectives in emerging gig economy
It has been a decade since smartphone application stores started allowing developers to post their own applications. This paper presents a narrative review of the state of the art and the future of technology used by researchers in the field of mobile health promotion. Researchers build high-cost, complex systems with the purpose of promoting health and collecting data. These systems promote health by using a feedback component that "educates" the subject. Other researchers instead use platforms that provide them with data collected by others, which allows for no communication with subjects but may be cheaper than building a system to collect the data. This second type of system cannot be used directly for health promotion. However, both types of systems are relevant to the field of health promotion, because they are precursors to a third type of system that is emerging: gig economy systems for mobile health data collection, which are low cost, globally available, and provide limited communication with subjects. If such systems evolve to include more channels for communication with the data-generating subjects, and also bring developers into the economy, they may eventually revolutionize the field of mobile health promotion and data collection by giving researchers new capabilities, such as the ability to replicate existing health promotion campaigns with the click of a button and the appropriate licenses. In this paper we present a review of state-of-the-art systems for mobile health promotion and data collection and a model for what these systems may look like in the future.
WOANet: Whale optimized deep neural network for the classification of COVID-19 from radiography images
Coronavirus disease (COVID-19) is a new disease that was declared a global pandemic in 2020. It is characterized by a constellation of symptoms such as fever, dry cough, dyspnea, fatigue, and chest pain. Clinical findings have shown that chest Computed Tomography (CT) images can reveal lung infection in most COVID-19 patients. Visual changes in CT scans due to COVID-19 are subjective and are evaluated by radiologists for diagnostic purposes. Deep Learning (DL) can provide an automatic diagnosis tool to relieve radiologists' burden in the quantitative analysis of CT scan images of patients. However, DL techniques face various training problems, such as mode collapse and instability. Choosing the training hyper-parameters that adjust the weights and biases of the DL network for a given CT image dataset is crucial for achieving the best accuracy. This paper combines the backpropagation algorithm and the Whale Optimization Algorithm (WOA) to optimize such DL networks. Experimental results for the diagnosis of COVID-19 patients from a comprehensive COVID-CT scan dataset show the best performance compared to other recent methods. The results of the proposed network architecture were validated against existing pre-trained networks to demonstrate the efficiency of the network.
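A minimal Whale Optimization Algorithm over a continuous hyperparameter vector (for example, learning rate and dropout) is sketched below; `validation_error` is a placeholder that would train and evaluate the CNN on the CT data, and the bounds, population size, and iteration count are illustrative assumptions.

```python
# Minimal WOA sketch for hyperparameter search; fitness is a placeholder to be minimized.
import numpy as np

def woa(fitness, bounds, n_whales=10, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(n_whales, len(lo)))
    scores = np.array([fitness(p) for p in pop])
    best, best_score = pop[scores.argmin()].copy(), scores.min()
    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                                  # control parameter, 2 -> 0
        for i in range(n_whales):
            r1, r2, p, l = rng.random(), rng.random(), rng.random(), rng.uniform(-1, 1)
            A, C = 2 * a * r1 - a, 2 * r2
            if p < 0.5:                                         # encircling prey or random search
                ref = best if abs(A) < 1 else pop[rng.integers(n_whales)]
                pop[i] = ref - A * np.abs(C * ref - pop[i])
            else:                                               # spiral bubble-net update
                pop[i] = np.abs(best - pop[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            pop[i] = np.clip(pop[i], lo, hi)
            score = fitness(pop[i])
            if score < best_score:
                best, best_score = pop[i].copy(), score
    return best, best_score

# Illustrative usage with a hypothetical validation_error(lr, dropout) routine:
# best, err = woa(lambda p: validation_error(lr=p[0], dropout=p[1]),
#                 bounds=[(1e-5, 1e-2), (0.1, 0.5)])
```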
Fuzzy logic approach for infectious disease diagnosis: A methodical evaluation, literature and classification
This paper presents a systematic review of the literature and a classification of fuzzy logic applications in infectious diseases. Although the emergence of infectious diseases and their subsequent spread have a significant impact on global health and economics, a comprehensive literature evaluation of this topic has yet to be carried out. Thus, the current study encompasses the first systematic, identifiable, and comprehensive academic literature evaluation and classification of fuzzy logic methods in infectious diseases. Forty papers on this topic, published from 2005 to 2019 and related to human infectious diseases, were evaluated and analyzed. The findings of this evaluation clearly show that fuzzy logic methods are widely used for the diagnosis of diseases such as dengue fever, hepatitis, and tuberculosis. The key fuzzy logic methods used for infectious diseases are the fuzzy inference system, rule-based fuzzy logic, the Adaptive Neuro-Fuzzy Inference System (ANFIS), and the fuzzy cognitive map. Furthermore, accuracy, sensitivity, specificity, and the Receiver Operating Characteristic (ROC) curve were universally applied for performance evaluation of the fuzzy logic techniques. This review also addresses the various needs of different industries, practitioners, and researchers to encourage more research in the overlooked areas, and it concludes with several suggestions for future infectious disease research.
AutoCovNet: Unsupervised feature learning using autoencoder and feature merging for detection of COVID-19 from chest X-ray images
With the onset of the COVID-19 pandemic, automated diagnosis has become one of the most trending topics of research for faster mass screening. Deep learning-based approaches have been established as the most promising methods in this regard. However, the limited availability of labeled data is the main bottleneck of data-hungry deep learning methods. In this paper, a two-stage deep CNN-based scheme is proposed to detect COVID-19 from chest X-ray images and achieve optimum performance with limited training images. In the first stage, an encoder-decoder based autoencoder network is proposed and trained on chest X-ray images in an unsupervised manner, so that the network learns to reconstruct the X-ray images. An encoder-merging network is proposed for the second stage, consisting of different layers of the encoder model followed by a merging network. Here the encoder model is initialized with the weights learned in the first stage, and the outputs from different layers of the encoder model are used effectively by being connected to the proposed merging network. An intelligent feature merging scheme is introduced in the proposed merging network. Finally, the encoder-merging network is trained for feature extraction from the X-ray images in a supervised manner, and the resulting features are used in the classification layers of the proposed architecture. Considering the final classification task, an EfficientNet-B4 network is utilized in both stages. End-to-end training is performed on datasets containing the classes COVID-19, Normal, Bacterial Pneumonia, and Viral Pneumonia. The proposed method offers very satisfactory performance compared to state-of-the-art methods and achieves an accuracy of 90.13% on 4-class, 96.45% on 3-class, and 99.39% on 2-class classification.
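The two-stage idea can be illustrated with a much smaller model than the paper's: (1) train a convolutional autoencoder on unlabeled chest X-rays, then (2) reuse the trained encoder as the feature extractor of a 4-class classifier. The small encoder below stands in for the EfficientNet-B4 backbone and the feature-merging network, and the data arrays are placeholders.

```python
# Simplified sketch of unsupervised autoencoder pretraining followed by supervised reuse of the encoder.
import tensorflow as tf
from tensorflow.keras import layers

def build_encoder():
    inp = tf.keras.Input(shape=(128, 128, 1))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    return tf.keras.Model(inp, x, name="encoder")

encoder = build_encoder()

# Stage 1: unsupervised reconstruction pretraining
dec = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(encoder.output)
dec = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(dec)
autoencoder = tf.keras.Model(encoder.input, dec)
autoencoder.compile("adam", "mse")
# autoencoder.fit(unlabeled_xrays, unlabeled_xrays, epochs=20)

# Stage 2: supervised classifier on top of the pretrained (shared) encoder
x = layers.GlobalAveragePooling2D()(encoder.output)
out = layers.Dense(4, activation="softmax")(x)      # COVID-19 / Normal / Bacterial / Viral
classifier = tf.keras.Model(encoder.input, out)
classifier.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
# classifier.fit(labeled_xrays, labels, epochs=20)
```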