A computerized doughty predictor framework for corona virus disease: Combined deep learning based approach
Due to a technical error, the following article was published in error on Wiley Online Library as an Accepted Article on 5 December 2022. The article has been temporarily removed. Wiley would like to apologize to the authors and the academic community for this error. Ramya, P. and Babu, S.V. (2022), A computerized doughty predictor framework for corona virus disease: Combined deep learning based approach. IET Image Process. Accepted Author Manuscript. https://doi.org/10.1049/ipr2.12554.
LW-CovidNet: Automatic covid-19 lung infection detection from chest X-ray images
Coronavirus Disease 2019 (Covid-19) swept the world in early 2020, placing global health under threat. Automated lung infection detection using chest X-ray images has great potential for enhancing the traditional covid-19 treatment strategy. However, detecting infected regions from chest X-ray images poses several challenges, including significant variance in infected features, similar spatial characteristics, and multi-scale variations in the texture, shapes, and sizes of infected regions. Moreover, the high parameter counts of transfer-learning models are a constraint on deploying deep convolutional neural network (CNN) models in real-time environments. A novel covid-19 lightweight CNN (LW-CovidNet) method is proposed to automatically detect covid-19 infected regions from chest X-ray images to address these challenges. In the proposed hybrid method, standard and depth-wise separable convolutions are integrated to aggregate high-level features and to compensate for information loss by increasing the receptive field of the model. The representations of disease-region boundaries are then enhanced via an edge-attention method that applies heatmaps for accurate detection of disease regions. Extensive experiments indicate that the proposed LW-CovidNet surpasses most cutting-edge detection methods and also contributes to the advancement of state-of-the-art performance. It is envisaged that, with reliable accuracy, this method can be introduced into clinical practice in the future.
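The abstract does not give architectural details, but the motivation for depth-wise separable convolutions in a lightweight model can be illustrated by a parameter-count comparison. The channel and kernel sizes below are illustrative, not taken from LW-CovidNet:

```python
# Parameter-count comparison between a standard convolution and a
# depth-wise separable convolution (depth-wise stage + point-wise stage).

def standard_conv_params(c_in, c_out, k):
    # Each output channel has a k x k kernel over every input channel.
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k):
    # Depth-wise stage: one k x k kernel per input channel;
    # point-wise stage: a 1 x 1 convolution mixing channels.
    return k * k * c_in + c_in * c_out

c_in, c_out, k = 64, 128, 3
print(standard_conv_params(c_in, c_out, k))   # 73728
print(separable_conv_params(c_in, c_out, k))  # 8768
```

For this hypothetical layer the separable variant needs roughly 8x fewer parameters, which is the usual argument for such blocks in models intended for real-time deployment.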
A COVID-19 CXR image recognition method based on MSA-DDCovidNet
Currently, coronavirus disease 2019 (COVID-19) has not been contained. Detecting infected persons in chest X-ray (CXR) images with deep learning methods is a safe and effective approach. To this end, the dual-path multi-scale fusion (DMFF) module and the dense dilated depth-wise separable (D3S) module are used to extract shallow and deep features, respectively. Based on these two modules and a multi-scale spatial attention (MSA) mechanism, a lightweight convolutional neural network model, MSA-DDCovidNet, is designed. Experimental results show that the accuracy of the MSA-DDCovidNet model on COVID-19 CXR images is as high as 97.962%. In addition, the proposed MSA-DDCovidNet has lower computational complexity and fewer parameters. Compared with other methods, MSA-DDCovidNet can help diagnose COVID-19 more quickly and accurately.
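The D3S module relies on dilated (atrous) depth-wise convolutions; a minimal sketch of why stacking dilated layers enlarges the receptive field without adding parameters is shown below. The layer settings are hypothetical, not the paper's configuration:

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers.

    Each layer is (kernel_size, stride, dilation). Uses the standard
    recurrence: rf += (k - 1) * dilation * jump; jump *= stride.
    """
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += (k - 1) * d * jump
        jump *= s
    return rf

# Three 3x3 stride-1 layers: plain vs. dilation rates 1, 2, 4.
print(receptive_field([(3, 1, 1)] * 3))                    # 7
print(receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)]))  # 15
```

Both stacks have the same number of weights, but the dilated stack covers more than twice the spatial context, which is why dilation is a common ingredient in lightweight multi-scale designs.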
A coarse-refine segmentation network for COVID-19 CT images
The rapid spread of the novel coronavirus disease 2019 (COVID-19) has a significant impact on public health. It is critical to diagnose COVID-19 patients quickly so that they can receive reasonable treatment. By segmenting the CT images of COVID-19 patients, doctors can obtain a precise estimate of the infection's progression and decide on more effective treatment options. However, it is challenging to segment infected regions in CT slices because the infected regions are multi-scale and their boundaries are not clear, owing to the low contrast between infected and normal areas. In this paper, a coarse-refine segmentation network is proposed to address these challenges. The coarse-refine architecture and a hybrid loss are used to guide the model to predict delicate structures with clear boundaries, addressing the problem of unclear boundaries. An atrous spatial pyramid pooling module is added to the network to improve performance in detecting infected regions of different scales. Experimental results show that the model outperforms other familiar medical segmentation models on COVID-19 CT images, enabling doctors to obtain a more accurate estimate of the infection's progression and thus provide more reasonable treatment options.
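The hybrid loss is not specified beyond its name; a common choice in medical segmentation, shown here purely as an assumed example, combines a soft Dice loss with binary cross-entropy. The weighting `alpha` is illustrative:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: 1 - 2|P.T| / (|P| + |T|), on probabilities.
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_loss(pred, target, eps=1e-7):
    # Binary cross-entropy, clipped for numerical stability.
    pred = np.clip(pred, eps, 1.0 - eps)
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean()

def hybrid_loss(pred, target, alpha=0.5):
    # Weighted sum; alpha is an assumed balancing weight.
    return alpha * dice_loss(pred, target) + (1 - alpha) * bce_loss(pred, target)

pred = np.array([0.9, 0.8, 0.1, 0.2])  # toy predicted foreground probabilities
mask = np.array([1.0, 1.0, 0.0, 0.0])  # toy ground-truth mask
print(hybrid_loss(pred, mask))
```

The Dice term rewards region overlap (helping with class imbalance), while the cross-entropy term supplies dense per-pixel gradients; summing them is a standard way to get boundary-sensitive yet stable training.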
A multi-class COVID-19 segmentation network with pyramid attention and edge loss in CT images
At the end of 2019, a novel coronavirus, COVID-19, broke out. Due to its high contagiousness, more than 74 million people have been infected worldwide. Automatic segmentation of the COVID-19 lesion area in CT images is an effective auxiliary medical technology which can quantitatively diagnose and judge the severity of the disease. In this paper, a multi-class COVID-19 CT image segmentation network is proposed, which includes a pyramid attention module to extract multi-scale contextual attention information, and a residual convolution module to improve the discriminative ability of the network. A wavelet edge loss function is also proposed to extract edge features of the lesion area to improve segmentation accuracy. For the experiment, a dataset of 4369 CT slices is constructed, including three symptoms: ground glass opacities, interstitial infiltrates, and lung consolidation. The dice similarity coefficients achieved by the model for the three symptoms are 0.7704, 0.7900, and 0.8241, respectively. The performance of the proposed network on the public dataset COVID-SemiSeg is also evaluated. The results demonstrate that this model outperforms other state-of-the-art methods, can be a powerful tool to assist in the diagnosis of positive infection cases, and promotes the development of intelligent technology in the medical field.
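The wavelet edge loss is described only at a high level; the sketch below shows how a single-level Haar transform exposes edge-like detail that such a loss could penalise. The actual loss formulation in the paper may differ:

```python
import numpy as np

def haar_detail(img):
    """One-level 2-D Haar transform of an even-sized image; returns the
    three detail subbands, which capture edge information."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    lh = (a - b + c - d) / 4.0   # horizontal differences (vertical edges)
    hl = (a + b - c - d) / 4.0   # vertical differences (horizontal edges)
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return lh, hl, hh

# A mask with a vertical boundary: only the vertical-edge band responds.
mask = np.zeros((4, 4))
mask[:, 1:] = 1.0
lh, hl, hh = haar_detail(mask)
print(lh)
```

Comparing such subbands between the predicted mask and the ground truth penalises boundary errors directly, instead of only region overlap.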
COVID-19 disease severity assessment using CNN model
Due to the highly infectious nature of the novel coronavirus (COVID-19) disease, an excessive number of patients wait in line for chest X-ray examination, which overloads clinicians and radiologists and negatively affects patients' treatment, prognosis, and control of the pandemic. Now that clinical facilities such as intensive care units and mechanical ventilators are very limited in the face of this highly contagious disease, it becomes quite important to classify patients according to their severity levels. This paper presents a novel implementation of a convolutional neural network (CNN) approach for COVID-19 disease severity classification (assessment). An automated CNN model is designed and proposed to divide COVID-19 patients into four severity classes, mild, moderate, severe, and critical, with an average accuracy of 95.52% using chest X-ray images as input. Experimental results on a sufficiently large number of chest X-ray images demonstrate the effectiveness of the CNN model produced with the proposed framework. To the best of the authors' knowledge, this is the first COVID-19 disease severity assessment study with four stages (mild vs. moderate vs. severe vs. critical) using a sufficiently large X-ray image dataset and a CNN whose hyper-parameters are almost all automatically tuned by the grid search optimiser.
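The abstract states that nearly all hyper-parameters are tuned by grid search; the generic procedure can be sketched as below. The parameter grid and the scoring function are illustrative stand-ins, not the paper's actual search space:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustively evaluate every hyper-parameter combination and
    return the best-scoring one."""
    names = list(param_grid)
    best_score, best_params = float("-inf"), None
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)  # e.g. validation accuracy of a trained CNN
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Hypothetical CNN hyper-parameter grid with a stand-in scoring function.
grid = {"lr": [1e-3, 1e-4], "batch_size": [16, 32], "dropout": [0.3, 0.5]}
fake_score = lambda p: -abs(p["lr"] - 1e-4) - abs(p["dropout"] - 0.5)
best, _ = grid_search(grid, fake_score)
print(best)  # {'lr': 0.0001, 'batch_size': 16, 'dropout': 0.5}
```

In practice `score_fn` would train and validate a model per combination, so the cost grows multiplicatively with the number of values per hyper-parameter.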
Towards automatic image analysis and assessment of the multicellular apoptosis process
Apoptotic programmed cell death (PCD) is a fundamental aspect of developmental maturation. However, the authors' understanding of apoptosis, especially in the multi-cell regime, is incomplete because of the difficulty of identifying dying cells by conventional strategies. Real-time microscopy of Drosophila, an excellent model system for studying PCD during development, has been used to uncover plausible collective apoptosis at the tissue level, although the dynamic regulation of the process remains to be deciphered. In this work, the authors have developed an image-analysis program that can quantitatively analyse time-lapse microscopy of live tissues undergoing apoptosis with a fluorescent nuclear marker, and subsequently extract the spatiotemporal patterns of the multicellular response. The program can process a large number of cells (>10) automatically tracked across sets of image frames. It is applied to characterise the apoptosis of the wing epithelium at eclosion. Using the natural anatomic structures as reference, the authors identify dynamic patterns in the progression of PCD within the tissues. The results not only confirm the previously observed collective multi-cell behaviour from a quantitative perspective, but also reveal a plausible role played by anatomic structures, such as the wing veins, in the propagation of PCD across the wing.
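The cell-tracking step is described only briefly; a common baseline is greedy nearest-neighbour linking of nuclei between consecutive frames, sketched below. The distance threshold and coordinates are illustrative, not the program's actual parameters:

```python
import math

def link_frames(prev, curr, max_dist=5.0):
    """Greedily link each nucleus in `prev` to its nearest unclaimed
    neighbour in `curr`; returns (prev_idx, curr_idx) pairs. Nuclei with
    no neighbour within max_dist are left unlinked (lost or apoptotic)."""
    links, taken = [], set()
    for i, (x0, y0) in enumerate(prev):
        best_j, best_d = None, max_dist
        for j, (x1, y1) in enumerate(curr):
            if j in taken:
                continue
            d = math.hypot(x1 - x0, y1 - y0)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            links.append((i, best_j))
            taken.add(best_j)
    return links

frame_a = [(10.0, 10.0), (30.0, 12.0)]  # nuclear centroids, frame t
frame_b = [(31.0, 13.0), (11.0, 9.5)]   # nuclear centroids, frame t+1
print(link_frames(frame_a, frame_b))    # [(0, 1), (1, 0)]
```

Chaining such frame-to-frame links yields per-cell trajectories, from which spatiotemporal patterns (e.g. when and where each nucleus disappears) can be extracted.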
Image denoising algorithm based on contourlet transform for optical coherence tomography heart tube image
Optical coherence tomography (OCT) is becoming an increasingly important imaging technology in the biomedical field. However, the application of OCT is limited by ubiquitous noise. In this study, the noise of the OCT heart tube image is first verified to be multiplicative based on local statistics (i.e. the linear relationship between the mean and the standard deviation of certain flat areas). The variance of the noise is evaluated in the log-domain. On this basis, a joint probability density function is constructed to account for the inter-direction dependency in the contourlet domain of the logarithmically transformed image. Then, a bivariate shrinkage function is derived to denoise the image by maximum a posteriori estimation. Systematic comparative experiments are conducted on synthetic images, OCT heart tube images, and other OCT tissue images using subjective assessment and objective metrics. The experimental results are analysed based on the denoising results and the degree to which the proposed algorithm outperforms the wavelet-based algorithm. The results show that the proposed algorithm improves the signal-to-noise ratio while preserving the edges, and has greater advantages on images containing multi-direction information, such as OCT heart tube images.
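The derived shrinkage rule is not reproduced in the abstract; the sketch below shows the classical bivariate shrinkage form (Sendur-Selesnick) applied to a coefficient and its coarser-scale companion, which the paper's MAP estimator generalises to inter-direction dependency. The noise and signal deviations are illustrative:

```python
import math

def bivariate_shrink(y1, y2, sigma_n, sigma):
    """MAP bivariate shrinkage of coefficient y1 given its companion y2:
    w = y1 * max(sqrt(y1^2 + y2^2) - sqrt(3) * sigma_n^2 / sigma, 0)
            / sqrt(y1^2 + y2^2).
    Coefficients whose joint magnitude is small relative to the noise
    level are suppressed to zero; strong ones are kept almost intact."""
    mag = math.sqrt(y1 * y1 + y2 * y2)
    if mag == 0.0:
        return 0.0
    gain = max(mag - math.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0) / mag
    return y1 * gain

# Strong coefficient survives; weak (noise-like) one is zeroed.
print(bivariate_shrink(10.0, 8.0, sigma_n=1.0, sigma=2.0))  # ~9.32
print(bivariate_shrink(0.3, 0.2, sigma_n=1.0, sigma=2.0))   # 0.0
```

Because the multiplicative OCT noise becomes additive after the logarithmic transform, such a shrinkage rule can be applied directly to the log-domain contourlet coefficients and the result exponentiated back.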