MULTIDIMENSIONAL SYSTEMS AND SIGNAL PROCESSING

A stochastic analysis of distance estimation approaches in single molecule microscopy - quantifying the resolution limits of photon-limited imaging systems
Ram S, Ward ES and Ober RJ
Optical microscopy is an invaluable tool for visualizing biological processes at the cellular scale. In recent years, there has been significant interest in studying these processes at the single molecule level. An important question that arises in single molecule experiments concerns the estimation of the distance of separation between two closely spaced molecules. At present, different experimental approaches exist for estimating the distance between two single molecules. However, it is not clear which of these approaches provides the best accuracy for estimating the distance. Here, we address this problem rigorously using tools of statistical estimation theory. We derive formulations of the Fisher information matrix for the underlying estimation problem of determining the distance of separation from the acquired data for the different approaches. Through the Cramér-Rao inequality, we derive a lower bound to the accuracy with which the distance of separation can be estimated. We show through Monte Carlo simulations that this bound can be attained by the maximum likelihood estimator. Our analysis shows that the distance estimation problem is in fact related to the localization accuracy problem, the latter being a distinct problem that deals with how accurately the location of an object can be determined. We have carried out a detailed investigation of the relationship between the Fisher information matrices of the two problems for the different experimental approaches considered here. The paper also addresses the issue of a singular Fisher information matrix, which presents a significant complication when calculating the Cramér-Rao lower bound. Here, we show how experimental design can overcome the singularity. Throughout the paper, we illustrate our results by considering a specific image profile that describes the image of a single molecule.
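The following is a minimal numerical sketch (not the authors' code) of the kind of Cramér-Rao bound computation the abstract describes, assuming a 1D Gaussian approximation of the image profile, ideal Poisson photon statistics, and illustrative parameter values. Note how the bound blows up as the separation d approaches zero, which reflects the singular Fisher information discussed in the paper.

```python
import numpy as np

def crlb_separation(d, n_photons=1000.0, sigma=0.1, pixel=0.02, span=2.0):
    """CRLB (standard deviation, microns) for the separation d of two sources."""
    x = np.arange(-span, span, pixel)                 # pixel centers (microns)
    g = lambda c: np.exp(-(x - c) ** 2 / (2 * sigma ** 2))
    norm = np.sqrt(2 * np.pi) * sigma / pixel
    # Expected photon count per pixel: two sources at +/- d/2.
    mu = lambda dd: n_photons / 2.0 * (g(-dd / 2) + g(dd / 2)) / norm
    # Numerical derivative of the expected image with respect to d.
    eps = 1e-6
    dmu = (mu(d + eps) - mu(d - eps)) / (2 * eps)
    # Fisher information for independent Poisson-distributed pixel counts.
    info = np.sum(dmu ** 2 / mu(d))
    return 1.0 / np.sqrt(info)

for d in (0.05, 0.1, 0.2, 0.5):
    print(f"d = {d:.2f} um -> CRLB {crlb_separation(d) * 1e3:.1f} nm")
```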
Fisher information matrix for branching processes with application to electron-multiplying charge-coupled devices
Chao J, Ward ES and Ober RJ
The high quantum efficiency of the charge-coupled device (CCD) has rendered it the imaging technology of choice in diverse applications. However, under extremely low light conditions where few photons are detected from the imaged object, the CCD becomes unsuitable as its readout noise can easily overwhelm the weak signal. An intended solution to this problem is the electron-multiplying charge-coupled device (EMCCD), which stochastically amplifies the acquired signal to drown out the readout noise. Here, we develop the theory for calculating the Fisher information content of the amplified signal, which is modeled as the output of a branching process. Specifically, Fisher information expressions are obtained for a general and a geometric model of amplification, as well as for two approximations of the amplified signal. All expressions pertain to the important scenario of a Poisson-distributed initial signal, which is characteristic of physical processes such as photon detection. To facilitate the investigation of different data models, a "noise coefficient" is introduced which allows the analysis and comparison of Fisher information via a scalar quantity. We apply our results to the problem of estimating the location of a point source from its image, as observed through an optical microscope and detected by an EMCCD.
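Below is a minimal Monte Carlo sketch (not the paper's derivation) of EMCCD-style amplification modeled as a branching process: a Poisson-distributed initial electron count is passed through a cascade of stochastic multiplication stages. The stage count, branching probability, and signal level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
STAGES, P = 200, 0.03       # cascade length and per-stage branching probability

def amplify(n_init):
    """One realization: every electron spawns an extra electron w.p. P per stage."""
    n = n_init
    for _ in range(STAGES):
        n += rng.binomial(n, P)
    return n

nu = 5.0                    # mean of the Poisson-distributed initial signal
out = np.array([amplify(rng.poisson(nu)) for _ in range(5000)])

gain = (1 + P) ** STAGES    # mean multiplicative gain of the cascade
print(f"mean gain      : {gain:.0f}")
print(f"empirical mean : {out.mean():.0f}  (expect ~ {nu * gain:.0f})")
# For a Poisson input the output variance exceeds gain^2 * nu by the excess
# noise factor, which tends to ~2 at high gain; this mirrors the roughly
# halved Fisher information (noise coefficient ~ 1/2) of high-gain EMCCD data.
print(f"excess noise   : {out.var() / (gain ** 2 * nu):.2f}  (expect ~ 2)")
```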
Diagnosis of breast cancer based on modern mammography using hybrid transfer learning
Khamparia A, Bharati S, Podder P, Gupta D, Khanna A, Phung TK and Thanh DNH
Breast cancer is a common cancer in women. Early detection of breast cancer in particular, and of cancer in general, can considerably increase the survival rate of women and makes treatment much more effective. This paper focuses on transfer learning for breast cancer detection. A modified VGG (MVGG) network is proposed and implemented on datasets of 2D and 3D mammogram images. Experimental results show that the proposed hybrid transfer learning model (a fusion of MVGG and ImageNet pre-trained weights) achieves an accuracy of 94.3%, whereas the proposed MVGG architecture alone achieves an accuracy of 89.8%. The proposed hybrid pre-trained network thus outperforms the other convolutional neural networks compared. The proposed architecture can serve as an effective tool for radiologists to decrease false negative and false positive rates, thereby improving the efficiency of mammography analysis.
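A minimal sketch of the hybrid transfer learning idea follows (not the authors' MVGG code): a VGG16 backbone with frozen ImageNet weights plus a small trainable head for two-class mammogram classification. The input size, head layout, and optimizer settings are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# VGG16 backbone with ImageNet weights; the convolutional features are frozen.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# Training would use a labeled mammogram dataset, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```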
Identifying the presence of bacteria on digital images by using asymmetric distribution with k-means clustering algorithm
Satyanarayana KV, Rao NT, Bhattacharyya D and Hu YC
This paper addresses image decomposition for image quality assessment using a Three Parameter Logistic Mixture Model with k-means clustering (TPLMM-k). The method targets the analysis of images arising in several real-time applications, and medical disease detection and diagnosis using digital images generated by a digital microscope camera. Several algorithms and distribution models have been developed and proposed for image segmentation. Among them, the Gaussian Mixture Model (GMM) is one of the most widely used, and it has played a key role in most of the image segmentation research reported in the literature. The main drawback of this distribution model is that the GMM fits only certain kinds of data well. To overcome this problem, the TPLMM-k algorithm is proposed. The image decomposition process used in the proposed algorithm is analyzed, and its performance is evaluated with several metrics: Variation of Information (VOI), Global Consistency Error (GCE) and Probabilistic Rand Index (PRI). The results show that the proposed algorithm achieves better performance than previous techniques and improves the decomposition of the images.
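A minimal sketch of the general pipeline the paper builds on is shown below: k-means provides initial intensity clusters, and a mixture model refines the segmentation. Here sklearn's GaussianMixture stands in for the paper's three-parameter logistic mixture (which has no standard library implementation); the component count and synthetic test image are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic grayscale "micrograph": dark background plus a brighter region.
img = np.clip(rng.normal(0.2, 0.05, (64, 64)), 0, 1)
img[20:40, 20:40] = np.clip(rng.normal(0.7, 0.08, (20, 20)), 0, 1)
pixels = img.reshape(-1, 1)

k = 2
# Step 1: k-means gives rough intensity clusters and initial component means.
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
# Step 2: the mixture model is initialized from the k-means centers and fit by EM.
gm = GaussianMixture(n_components=k, means_init=km.cluster_centers_,
                     random_state=0).fit(pixels)
labels = gm.predict(pixels).reshape(img.shape)
print("component means  :", gm.means_.ravel())
print("foreground pixels:", int((labels == labels[30, 30]).sum()))
```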
Noisy iris smoothing and segmentation scheme based on improved Wildes method
Kumawat A and Panda S
In an automated iris recognition system, higher accuracy requires an efficient iris segmentation process; the reliability of an iris recognition system largely depends on the accuracy of its segmentation step. Traditional iris segmentation methods are unable to detect the exact boundaries of the iris and pupil, are time consuming, and are highly sensitive to noise. To overcome these problems, we propose an improved Wildes method (IWM) for segmentation in an iris recognition system. The proposed algorithm consists of two major steps before applying the Wildes method for segmentation: edge detection of the iris and pupil from a noisy eye image with improved Canny with fuzzy logic (ICWFL), and removal of the unwanted noise from the previous step with a hybrid restoration fusion filter (HRFF). A comparative study of various edge detection techniques is performed to demonstrate the efficiency of the ICWFL method. The proposed method is tested at various noise densities from 10 to 95 dB, and the proposed HRFF is compared with several existing smoothing filters. Experiments are performed on the IIT_Delhi iris database. Both visual and numerical results demonstrate the efficiency of the proposed algorithm.
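The following OpenCV sketch shows the Wildes-style baseline steps the paper improves upon (it is not the proposed ICWFL/HRFF pipeline): smooth, detect edges with Canny, then locate the circular pupil and iris boundaries with a Hough transform. The synthetic test image, thresholds, and radius ranges are illustrative assumptions.

```python
import cv2
import numpy as np

# Synthetic eye-like test image: bright sclera, gray iris ring, dark pupil.
img = np.full((240, 240), 200, np.uint8)
cv2.circle(img, (120, 120), 80, 120, -1)   # iris
cv2.circle(img, (120, 120), 30, 20, -1)    # pupil
noise = np.random.default_rng(0).normal(0, 8, img.shape)
img = np.clip(img + noise, 0, 255).astype(np.uint8)

smoothed = cv2.medianBlur(img, 5)          # the paper replaces this with HRFF
edges = cv2.Canny(smoothed, 50, 150)       # the paper adds fuzzy logic (ICWFL)
print("edge pixels:", int(edges.sum() // 255))

# Circular Hough transform, as in the Wildes approach, for each boundary
# (HoughCircles runs its own internal Canny stage controlled by param1).
for name, rmin, rmax in (("pupil", 15, 50), ("iris", 55, 110)):
    circles = cv2.HoughCircles(smoothed, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=120, param1=150, param2=20,
                               minRadius=rmin, maxRadius=rmax)
    if circles is not None:
        x, y, r = np.round(circles[0, 0]).astype(int)
        print(f"{name}: center=({x},{y}) radius={r}")
```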
Image sub-division and quadruple clipped adaptive histogram equalization (ISQCAHE) for low exposure image enhancement
Acharya UK and Kumar S
In this paper, a novel image sub-division and quadruple clipped adaptive histogram equalization (ISQCAHE) technique is proposed for the enhancement of low exposure images. The proposed method involves computation of the histogram via a new image sub-division approach, an enhancement controlling mechanism, modification of the probability density function (PDF) and histogram equalization (HE). The original histogram is segmented into sub-histograms based on an exposure threshold and the mean, to preserve brightness and entropy. Each sub-histogram is then clipped separately to control the enhancement rate. To enhance the visual quality, HE is applied to each sub-histogram using the modified PDF. The experimental results show that the proposed ISQCAHE method effectively avoids unpleasant artifacts and gives the enhanced image a natural appearance. It is simple and adaptive, and it outperforms other techniques in terms of visual quality, absolute mean brightness error, entropy, natural image quality evaluation, brightness preservation, structural similarity index measure and feature similarity index measure.
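A minimal numpy sketch of the general clipped, sub-divided histogram equalization idea follows (not the exact ISQCAHE algorithm): split the histogram at the image mean, clip each sub-histogram to limit the enhancement rate, and equalize each part over its own intensity range. The clip factor and the two-way split (the paper uses a quadruple split driven by an exposure threshold) are assumptions.

```python
import numpy as np

def clipped_bi_he(img, clip_factor=2.0):
    """Clipped bi-histogram equalization of an 8-bit grayscale image."""
    img = img.astype(np.uint8)
    mean = int(img.mean())
    out = np.empty_like(img)
    for lo, hi in ((0, mean), (mean + 1, 255)):      # two sub-histograms
        mask = (img >= lo) & (img <= hi)
        if not mask.any():
            continue
        hist = np.bincount(img[mask].ravel(), minlength=256)[lo:hi + 1]
        hist = np.minimum(hist, clip_factor * hist.mean())  # clip spikes
        cdf = np.cumsum(hist) / hist.sum()
        lut = (lo + cdf * (hi - lo)).astype(np.uint8)       # map into [lo, hi]
        out[mask] = lut[img[mask] - lo]
    return out

# Toy low-exposure image: intensities confined to the dark end of the range.
demo = (np.linspace(0, 80, 256 * 256) % 80).reshape(256, 256)
print("mean before:", demo.mean(), " mean after:", clipped_bi_he(demo).mean())
```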