Region-based image registration for remote sensing imagery
We propose an automatic region-based registration method for remote sensing imagery. In this method, we aim to register two images by matching region properties, so as to address possible errors caused by local feature estimators. We apply automated image segmentation to identify the regions and calculate regional Fourier descriptors and standardized regional intensity descriptors for each region. We define a joint matching cost as a linear combination of Euclidean distances and use it to establish correspondences between regions. The segmentation technique uses kernel density estimators for edge localization, followed by morphological reconstruction and the watershed transform. We evaluated the registration performance of our method on synthetic and real datasets, measuring accuracy as the root-mean-squared error (RMSE) between the estimated transformation and the ground-truth transformation. The results obtained using the joint intensity-Fourier descriptor were compared to those obtained using Harris, minimum-eigenvalue, features from accelerated segment test (FAST), speeded-up robust features (SURF), binary robust invariant scalable keypoints (BRISK), and KAZE keypoint descriptors. The joint intensity-Fourier descriptor yielded average RMSEs of 0.446 ± 0.359 pixels and 1.152 ± 0.488 pixels on two satellite imagery datasets consisting of 35 image pairs in total. These results indicate that the proposed technique is capable of high registration accuracy, and our method also produces a lower registration error than the compared feature-based methods.
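The joint matching cost described above can be sketched as a weighted sum of two Euclidean distances; the mixing weight `alpha`, the descriptor lengths, and the function name are illustrative assumptions, not values from the paper:

```python
import numpy as np

def joint_matching_cost(fd_a, fd_b, int_a, int_b, alpha=0.5):
    """Joint matching cost between two regions: a linear combination of the
    Euclidean distance between their Fourier descriptors and the Euclidean
    distance between their standardized intensity descriptors."""
    d_fourier = np.linalg.norm(np.asarray(fd_a) - np.asarray(fd_b))
    d_intensity = np.linalg.norm(np.asarray(int_a) - np.asarray(int_b))
    return alpha * d_fourier + (1.0 - alpha) * d_intensity
```

Correspondences can then be extracted by matching each region in one image to the region in the other image that minimizes this cost.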
Simultaneous Tumor Segmentation, Image Restoration, and Blur Kernel Estimation in PET Using Multiple Regularizations
Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading image quality and segmentation accuracy in PET. Given that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization alone, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as an anisotropic Gaussian to address the resolution difference between the transverse and axial directions commonly seen in clinical PET scanners. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin's lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method performed well at simultaneous image restoration, tumor segmentation, and scanner blur kernel estimation.
In particular, the recovery coefficients (RCs) of the restored images produced by the proposed method in the phantom study were close to 1, indicating efficient recovery of the original images from their blurred observations; for segmentation, the proposed method achieved average Dice similarity indices (DSIs) of 0.79 and 0.80 on the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transverse direction and 7% in the axial direction.
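For reference, the Dice similarity index reported above compares a computed segmentation with a reference mask; a minimal sketch:

```python
import numpy as np

def dice_similarity(seg_a, seg_b):
    """Dice similarity index between two binary segmentation masks:
    twice the overlap divided by the sum of the two mask sizes."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

A value of 1 indicates perfect overlap, and 0 indicates no overlap.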
Entropy-based Correspondence Improvement of Interpolated Skeletal Models
Statistical analysis of shape representations relies on having good correspondence across a population. Improving correspondence yields improved statistics. Point distribution models (PDMs) are often used to represent object boundaries. Skeletal representations (s-reps) model object widths and boundary directions as well as boundary positions, so they should yield better correspondence. We present two methods: one for continuously interpolating a discretely sampled skeletal model and one for improving correspondence by using this interpolation to shift skeletal samples to new positions. The interpolation operates by an extension of the mathematics of medial structures. As with Cates' boundary-based method, we evaluate correspondence in terms of regularity and shape-feature population entropies. Evaluation on both synthetic and real data shows both that our method improves correspondence of s-rep models fit to segmented lateral ventricles and that the combined boundary-and-skeletal PDMs implied by these optimized s-reps have better correspondence than optimized boundary PDMs.
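The shape-feature population entropy used as a correspondence criterion can be illustrated, under a Gaussian model of the population as in Cates-style formulations, by the differential entropy of the fitted Gaussian; the small ridge term is an assumed regularizer to keep the covariance nonsingular:

```python
import numpy as np

def gaussian_population_entropy(X):
    """Differential entropy of a Gaussian fitted to shape-feature vectors
    (one row of X per population member); lower entropy means a tighter,
    better-corresponded population."""
    d = X.shape[1]
    cov = np.cov(X, rowvar=False) + 1e-8 * np.eye(d)
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2.0 * np.pi * np.e) + logdet)
```

Correspondence optimization shifts samples to reduce this quantity while keeping each shape representation valid.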
Modeling eye movement patterns to characterize perceptual skill in image-based diagnostic reasoning processes
Experts have a remarkable capability of locating, perceptually organizing, identifying, and categorizing objects in images specific to their domains of expertise. In this article, we present a hierarchical probabilistic framework to discover the stereotypical and idiosyncratic viewing behaviors exhibited within expertise-specific groups. Through these patterned eye movement behaviors we are able to elicit domain-specific knowledge and perceptual skills from subjects whose eye movements are recorded during diagnostic reasoning on medical images. Analyzing experts' eye movement patterns gives us insight into the cognitive strategies exploited to solve complex perceptual reasoning tasks. An experiment was conducted to collect both eye movement and verbal narrative data from three groups of subjects with differing levels of medical training or none (eleven board-certified dermatologists, four dermatologists in training, and thirteen undergraduates) while they examined and described 50 photographic dermatological images. We use a hidden Markov model to describe each subject's eye movement sequence, combined with hierarchical stochastic processes to capture and differentiate the discovered eye movement patterns shared by multiple subjects within and among the three groups. Independent experts' annotations of diagnostic conceptual units of thought in the transcribed verbal narratives are time-aligned with the discovered eye movement patterns to help interpret the patterns' meanings. By mapping eye movement patterns to thought units, we uncover the relationships between the visual and linguistic elements of the subjects' reasoning and perceptual processes, and show the manner in which these subjects varied their behaviors while parsing the images. We also show that the inferred eye movement patterns characterize groups with similar temporal and spatial properties, and we identify a subset of distinctive eye movement patterns that are commonly exhibited across multiple images.
Based on the combinations of the occurrences of these eye movement patterns, we are able to categorize the images from the perspective of experts' viewing strategies in a novel way. In each category, images share similar lesion distributions and configurations. Our results show that modeling with multi-modal data, representative of physicians' diagnostic viewing behaviors and thought processes, is feasible and informative to gain insights into physicians' cognitive strategies, as well as medical image understanding.
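As a minimal illustration of the per-subject sequence model, the likelihood of a discretized fixation sequence under a hidden Markov model can be computed with the standard forward algorithm; the toy model sizes and parameters below are illustrative assumptions, not values from the study:

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Forward algorithm: likelihood of a discrete observation sequence
    under an HMM with initial distribution pi, transition matrix A, and
    emission matrix B (rows: hidden states, columns: observation symbols)."""
    alpha = pi * B[:, obs[0]]          # joint prob. of state and first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return alpha.sum()
```

Comparing such likelihoods across candidate models is one way eye movement sequences can be assigned to shared patterns.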
Modeling 4D Pathological Changes by Leveraging Normative Models
With the increasing use of efficient multimodal 3D imaging, clinicians are able to access longitudinal imaging to stage pathological diseases, to monitor the efficacy of therapeutic interventions, or to assess and quantify rehabilitation efforts. Analysis of such four-dimensional (4D) image data presenting pathologies, including disappearing and newly appearing lesions, represents a significant challenge due to the presence of complex spatio-temporal changes. Image analysis methods for such 4D image data must include not only a concept for joint segmentation of 3D datasets, to account for the inherent correlations of subject-specific repeated scans, but also a mechanism to account for large deformations and the destruction and formation of lesions (e.g., edema, bleeding) due to the underlying physiological processes associated with damage, intervention, and recovery. In this paper, we propose a novel joint segmentation-registration framework to tackle the inherent problem of image registration in the presence of objects not present in all images of the time series. Our methodology models 4D changes in pathological anatomy across time and also provides an explicit mapping of a healthy normative template to a subject's image data with pathologies. Since atlas-moderated segmentation methods cannot explain the appearance and locality of pathological structures that are not represented in the template atlas, the new framework provides different options for initialization via a supervised learning approach, iterative semi-supervised active learning, and transfer learning, resulting in a fully automatic 4D segmentation method. We demonstrate the effectiveness of our novel approach with synthetic experiments and a 4D multimodal MRI dataset of severe traumatic brain injury (TBI), including validation via comparison to expert segmentations.
However, the proposed methodology is generic in regard to different clinical applications requiring quantitative analysis of 4D imaging representing spatio-temporal changes of pathologies.
Expressive visual text-to-speech as an assistive technology for individuals with autism spectrum conditions
Adults with Autism Spectrum Conditions (ASC) experience marked difficulties in recognising the emotions of others and responding appropriately. The clinical characteristics of ASC mean that face-to-face or group interventions may not be appropriate for this clinical group. This article explores the potential of a new interactive technology, converting text to emotionally expressive speech, to improve emotion processing ability and attention to faces in adults with ASC. We demonstrate a method for generating a near-videorealistic avatar (XpressiveTalk), which can produce a video of a face uttering input text in a large variety of emotional tones. We then demonstrate that general-population adults can correctly recognize the emotions portrayed by XpressiveTalk. Adults with ASC are significantly less accurate than controls, but still above chance levels, when inferring emotions from XpressiveTalk. Both groups are significantly more accurate when inferring sad emotions from XpressiveTalk compared with the original actress, and rate these expressions as significantly more preferred and realistic. The potential applications of XpressiveTalk as an assistive technology for adults with ASC are discussed.
Robust measurement of individual localized changes to the aging hippocampus
Alzheimer's Disease (AD) is characterized by a stereotypical spatial pattern of hippocampus (HP) atrophy over time, but reliable and precise measurement of localized longitudinal change to an individual HP in AD has been elusive. We present a method for quantifying subject-specific spatial patterns of longitudinal HP change that aligns serial HP surface pairs together, cuts off slices at the ends of the HP that are not shared by the two delineations being aligned, estimates weighted correspondences between baseline and follow-up HP, and finds a concise set of localized spatial change patterns that explains HP changes while down-weighting HP surface points whose estimated changes are biologically implausible. We tested our method on a synthetic HP change dataset as well as a real dataset of 320 HP from elderly subjects measured at 1-year intervals. Our results suggest that the proposed steps reduce the amount of implausible HP change indicated for individual HP, increase the strength of association between HP change and cognitive function related to AD, and enhance the estimation of reliable spatially localized HP change patterns.
2D/3D Image Registration using Regression Learning
In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method, called CLARET (Correction via Limited-Angle Residues in External Beam Therapy), consists of two stages: shape space and regression learning, followed by registration. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used to estimate the model parameters from the 2D projection intensity residues during registration. Applied to Image-guided Radiation Therapy (IGRT), the method requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time 2D projection or a small set thereof.
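The regression-learning idea, relating parameter perturbations to co-varying projection intensity residues through a linear operator, can be sketched with synthetic data; the dimensions and the single-scale least-squares fit are simplifying assumptions (the paper uses multi-scale regressions on DRR residues):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear relation: parameter updates p are approximately
# linear in the projection intensity residue r, i.e. p = r @ M.
M_true = rng.normal(size=(6, 3))                    # residue -> parameter map
R = rng.normal(size=(500, 6))                       # training residues
P = R @ M_true + 0.01 * rng.normal(size=(500, 3))   # co-varying parameters

# Regression learning: least-squares estimate of the linear operator.
M_learned, *_ = np.linalg.lstsq(R, P, rcond=None)

# At registration time, a new residue yields a parameter estimate directly.
r_new = rng.normal(size=(1, 6))
p_est = r_new @ M_learned
```

Iterating this estimate-and-correct step with coarse-to-fine operators gives the registration loop described above.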
Ricci Flow-based Spherical Parameterization and Surface Registration
This paper presents an improved Euclidean Ricci flow method for spherical parameterization. We then develop a scale-space processing scheme built upon Ricci energy to extract robust surface features for accurate surface registration. Because our method is based on the proposed Euclidean Ricci flow, it inherits the properties of Ricci flow, such as conformality, robustness, and intrinsic nature, facilitating efficient and effective surface mapping. Compared with other surface registration methods using curvature or sulcal patterns, our method demonstrates significantly improved registration accuracy. In addition, Ricci energy can capture local differences for surface analysis, as shown in the experiments and applications.
Statistical Shape Model for Manifold Regularization: Gleason grading of prostate histology
Gleason patterns of prostate cancer histopathology, characterized primarily by morphological and architectural attributes of histological structures (glands and nuclei), have been found to be highly correlated with disease aggressiveness and patient outcome. Gleason patterns 4 and 5 are highly correlated with more aggressive disease and poorer patient outcome, while Gleason patterns 1-3 tend to reflect more favorable patient outcome. Because Gleason grading is done manually by a pathologist visually examining glass (or digital) slides, subtle morphologic and architectural differences of histological attributes, in addition to other factors, may result in grading errors and hence cause high inter-observer variability. Recently, some researchers have proposed computerized decision support systems to automatically grade Gleason patterns using features pertaining to nuclear architecture, gland morphology, and tissue texture. Automated characterization of gland morphology has been shown to distinguish between intermediate Gleason patterns 3 and 4 with high accuracy. Manifold learning (ML) schemes attempt to generate a low-dimensional manifold representation of a higher-dimensional feature space while simultaneously preserving nonlinear relationships between object instances. Classification can then be performed in the low-dimensional space with high accuracy. However, ML is sensitive to the samples contained in the dataset; changes in the dataset may alter the manifold structure. In this paper we present a manifold regularization technique to constrain the low-dimensional manifold to a specific range of possible manifold shapes, the range being determined via a statistical shape model of manifolds (SSMM).
In this work we demonstrate applications of the SSMM in (1) identifying samples on the manifold which contain noise, defined as those samples which deviate from the SSMM, and (2) accurate out-of-sample extrapolation (OSE) of newly acquired samples onto a manifold constrained by the SSMM. We demonstrate these applications of the SSMM in the context of distinguishing between Gleason patterns 3 and 4 using glandular morphologic features in a prostate histopathology dataset of 58 patient studies. Identifying and eliminating noisy samples from the manifold via the SSMM results in a statistically significant improvement in area under the receiver operating characteristic curve (AUC): 0.832 ± 0.048 with removal of noisy samples compared to an AUC of 0.779 ± 0.075 without removal of samples. The use of the SSMM for OSE of newly acquired glands also shows a statistically significant improvement in AUC: 0.834 ± 0.051 with the SSMM compared to 0.779 ± 0.054 without the SSMM. Similar results were observed for the synthetic Swiss Roll and Helix datasets.
Simultaneous Segmentation of Prostatic Zones Using Active Appearance Models With Multiple Coupled Levelsets
In this work we present an improvement to the popular Active Appearance Model (AAM) algorithm, which we call the Multiple-Levelset AAM (MLA). The MLA can simultaneously segment multiple objects, and makes use of multiple levelsets, rather than anatomical landmarks, to define the shapes. AAMs traditionally define the shape of each object using a set of anatomical landmarks. However, landmarks can be difficult to identify, and AAMs traditionally only allow for segmentation of a single object of interest. The MLA, which is a landmark-independent AAM, allows the levelsets of multiple objects to be determined and coupled with image intensities. This gives the MLA the flexibility to simultaneously segment multiple objects of interest in a new image. In this work we apply the MLA to segment the prostate capsule, the prostate peripheral zone (PZ), and the prostate central gland (CG) from a set of 40 endorectal, T2-weighted MRI images. The MLA system we employ in this work leverages a hierarchical segmentation framework, constructed to exploit domain-specific attributes, by utilizing a given prostate segmentation to help drive the segmentations of the CG and PZ, which are embedded within the prostate. Our coupled MLA scheme yielded mean Dice accuracy values of 0.81, 0.79, and 0.68 for the prostate, CG, and PZ, respectively, using a leave-one-out cross-validation scheme over 40 patient studies. When considering only the midgland of the prostate, the mean values were 0.89, 0.84, and 0.76 for the prostate, CG, and PZ, respectively.
Particle Filters and Occlusion Handling for Rigid 2D-3D Pose Tracking
In this paper, we address the problem of 2D-3D pose estimation. Specifically, we propose an approach to jointly track a rigid object in a 2D image sequence and to estimate its pose (position and orientation) in 3D space. We revisit a joint 2D segmentation/3D pose estimation technique, and then extend the framework by incorporating a particle filter to robustly track the object in a challenging environment, and by developing an occlusion detection and handling scheme to continuously track the object in the presence of occlusions. In particular, we focus on partial occlusions that prevent the tracker from extracting exact region properties of the object, which play a pivotal role in maintaining the track for region-based tracking methods. To this end, the choice of how to invoke the objective functional is made dynamically online, based on the degree of dependency between the predictions and measurements of the system, in accordance with the degree of occlusion and the variation of the object's pose. This scheme provides the robustness to deal with occlusions by an obstacle whose statistical properties differ from those of the object of interest. Experimental results demonstrate the practical applicability and robustness of the proposed method in several challenging scenarios.
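A bootstrap particle filter of the kind incorporated above follows a predict-weight-resample cycle; the 1-D toy motion and observation models below are illustrative assumptions, not the paper's pose dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, motion_std, obs, obs_std):
    """One predict-weight-resample cycle of a bootstrap particle filter (1-D toy)."""
    # Predict: propagate particles through a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Weight: Gaussian likelihood of the observation given each particle.
    weights = weights * np.exp(-0.5 * ((obs - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size drops below half the particles.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

In a tracker, the state would be the object's pose parameters and the likelihood would come from the segmentation/pose objective rather than a scalar Gaussian.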
Interactive object modelling based on piecewise planar surface patches
Detecting elements such as planes in 3D is essential to describe objects for applications such as robotics and augmented reality. While plane estimation is well studied, table-top scenes exhibit a large number of planes, and existing methods often lock onto a dominant plane or do not estimate 3D object structure but only homographies of individual planes. In this paper we introduce the Minimum Description Length (MDL) principle to the problem of incrementally detecting multiple planar patches in a scene using tracked interest points in image sequences. Planar patches are reconstructed and stored in a keyframe-based graph structure. When different motions occur, separate object hypotheses are modelled from currently visible patches and patches seen in previous frames. We evaluate our approach on a standard data set published by the Visual Geometry Group at the University of Oxford [24] and on our own data set containing table-top scenes. Results indicate that our approach significantly improves over state-of-the-art algorithms.
GC-ASM: Synergistic Integration of Graph-Cut and Active Shape Model Strategies for Medical Image Segmentation
Image segmentation methods may be classified into two categories: purely image-based and model-based. Each of these two classes has its own advantages and disadvantages. In this paper, we propose a novel synergistic combination of the image-based graph-cut (GC) method with the model-based active shape model (ASM) method, arriving at the GC-ASM method for medical image segmentation. A multi-object GC cost function is proposed which effectively integrates the ASM shape information into the GC framework. The proposed method consists of two phases: model building and segmentation. In the model building phase, the ASM model is built and the parameters of the GC are estimated. The segmentation phase consists of two main steps: initialization (recognition) and delineation. For initialization, an automatic method is proposed which estimates the pose (translation, orientation, and scale) of the model, and obtains a rough segmentation result which also provides the shape information for the GC method. For delineation, an iterative GC-ASM algorithm is proposed which performs finer delineation based on the initialization results. The proposed methods are implemented to operate on 2D images and evaluated on clinical chest CT, abdominal CT, and foot MRI data sets. The results show the following: (a) An overall delineation accuracy of TPVF > 96%, FPVF < 0.6% can be achieved via GC-ASM for different objects, modalities, and body regions. (b) GC-ASM improves over ASM in accuracy and precision, and is less sensitive to the search region. (c) GC-ASM requires far fewer landmarks (about 1/3 of ASM) than ASM. (d) GC-ASM achieves full automation in the segmentation step, compared to GC which requires seed specification, and improves on the accuracy of GC. (e) One disadvantage of GC-ASM is its increased computational expense owing to the iterative nature of the algorithm.
A Multiple Object Geometric Deformable Model for Image Segmentation
Deformable models are widely used for image segmentation, most commonly to find single objects within an image. Although several methods have been proposed to segment multiple objects using deformable models, substantial limitations in their utility remain. This paper presents a multiple object segmentation method using a novel and efficient object representation in both two and three dimensions. The new framework guarantees object relationships and topology, prevents overlaps and gaps, enables boundary-specific speeds, and has a computationally efficient evolution scheme that is largely independent of the number of objects. Maintaining object relationships, together with straightforward use of object-specific and boundary-specific smoothing and advection forces, enables the segmentation of objects with multiple compartments, a critical capability in the parcellation of organs in medical imaging. Comparing the new framework with previous approaches shows its superior performance and scalability.
Text Extraction from Scene Images by Character Appearance and Structure Modeling
In this paper, we propose a novel algorithm to detect text information in natural scene images. Scene text classification and detection are still open research topics. Our proposed algorithm is able to model both character appearance and structure to generate representative and discriminative text descriptors. The contributions of this paper include three aspects: 1) a new character appearance model based on a structure correlation algorithm, which extracts discriminative appearance features from detected interest points of character samples; 2) a new text descriptor based on structons and correlatons, which models character structure through structure differences among character samples and structure component co-occurrence; and 3) a new text region localization method that combines color decomposition, character contour refinement, and string line alignment to localize character candidates and refine detected text regions. We perform three groups of experiments to evaluate the effectiveness of our proposed algorithm, covering text classification, text detection, and character identification. The evaluation results on benchmark datasets demonstrate that our algorithm achieves state-of-the-art performance on scene text classification and detection, and significantly outperforms existing algorithms for character identification.
Tensor scale: An analytic approach with efficient computation and applications
Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as "tensor scale" using an ellipsoidal model that yields a unified representation of structure size, orientation, and anisotropy. In the previous work, tensor scale was described using a 2-D algorithmic approach, and a precise analytic definition was missing. Moreover, applying tensor scale in 3-D using the previous framework is impractical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation, and anisotropy. An efficient computational solution in 2-D and 3-D using several novel differential geometric approaches is presented, and the accuracy of the results is experimentally examined. A matrix representation of tensor scale is also derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, applications of tensor scale to image filtering and n-linear interpolation are presented, and their performance is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert's structure tensor based diffusive filtering algorithms, and the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods.
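For concreteness, standard n-linear interpolation in the n = 2 case (bilinear) samples an image at fractional coordinates as follows; tensor scale based interpolation adapts such kernels to local structure:

```python
import numpy as np

def bilinear(img, y, x):
    """Sample a 2-D array at fractional coordinates (y, x) by bilinear
    (n-linear with n = 2) interpolation of the four surrounding pixels."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0]
            + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0]
            + dy * dx * img[y0 + 1, x0 + 1])
```

The n-D generalization weights the 2^n surrounding samples by the products of their per-axis fractional distances.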
A framework for comparing different image segmentation methods and its use in studying equivalences between level set and fuzzy connectedness frameworks
In the current vast image segmentation literature, there seems to be considerable redundancy among algorithms, while there is a serious lack of methods that would allow their theoretical comparison to establish their similarity, equivalence, or distinctness. In this paper, we make an attempt to fill this gap. To accomplish this goal, we argue that: (1) every digital segmentation algorithm should have a well-defined continuous counterpart, referred to as its model, which constitutes an asymptotic of the algorithm as image resolution goes to infinity; (2) the equality of two such models establishes a theoretical (asymptotic) equivalence of their digital counterparts. Such a comparison is of full theoretical value only when, for each involved algorithm, its model is proved to be an asymptotic of that algorithm. So far, such proofs do not appear anywhere in the literature, even in the case of algorithms introduced as digitizations of continuous models, like level set segmentation algorithms. The main goal of this article is to explore a line of investigation for formally pairing digital segmentation algorithms with their asymptotic models, justifying such relations with mathematical proofs, and using the results to compare segmentation algorithms in this general theoretical framework. As a first step towards this general goal, we prove here that a gradient-based thresholding model is the asymptotic of the fuzzy connectedness segmentation algorithm of Udupa and Samarasekera used with gradient-based affinity. We also argue that, in a sense, the same model is the asymptotic of the original front propagation level set algorithm of Malladi, Sethian, and Vemuri, thus establishing a theoretical equivalence between these two specific algorithms.
Experimental evidence of this last equivalence is also provided.
Optimal-Flow Minimum-Cost Correspondence Assignment in Particle Flow Tracking
A diversity of tracking problems exists in which cohorts of densely packed particles move in an organized fashion; however, the stability of individual particles within a cohort is low. Moreover, the flows of cohorts can regionally overlap. Together, these conditions yield a complex tracking scenario that cannot be addressed by optical flow techniques, which assume piecewise coherent flows, or by multiparticle tracking techniques, which suffer from local ambiguity in particle assignment. Here, we propose a graph-based assignment of particles in three consecutive frames to recover from image sequences the instantaneous organized motion of groups of particles, i.e., flows. The algorithm makes no a priori assumptions about the fraction of particles participating in organized movement, as this number continuously changes with the evolution of the flow fields over time. Graph-based assignment methods generally maximize the number of acceptable particle assignments between consecutive frames and only then minimize the association cost. In dense and unstable particle flow fields this approach produces many false positives. The approach proposed here avoids this by solving a multi-objective optimization problem in which the number of assignments is maximized while their total association cost is simultaneously minimized. The method is validated on standard benchmark data for particle tracking. In addition, we demonstrate its application to live cell microscopy, where several large molecular populations with different behaviors are tracked.
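The cost-minimization half of the assignment problem can be illustrated with a brute-force minimum-cost one-to-one assignment; this sketch is illustrative only (practical trackers use polynomial-time graph algorithms, and the method above additionally balances cost against the number of assignments):

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Brute-force minimum-cost one-to-one assignment of rows (particles in
    frame t) to columns (particles in frame t+1) of a square cost matrix.
    Returns (total_cost, column_index_per_row)."""
    n = len(cost)
    best = None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if best is None or c < best[0]:
            best = (c, perm)
    return best
```

A gating threshold on individual costs would then decide which of these candidate links count as acceptable assignments.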
Contour based object detection using part bundles
In this paper we propose a novel framework for contour based object detection in cluttered environments. Given a contour model for a class of objects, it is first decomposed into fragments hierarchically. Then, we group these fragments into part bundles, where a part bundle can contain overlapping fragments. Given a new image with a set of edge fragments, we develop an efficient voting method, using local shape similarity between part bundles and edge fragments, that generates high-quality candidate part configurations. We then use global shape similarity between the part configurations and the model contour to find the optimal configuration. Furthermore, we show that appearance information can be used to improve detection for objects with distinctive texture when the model contour does not sufficiently capture deformation of the objects.
Intensity Standardization Simplifies Brain MR Image Segmentation
Typically, brain MR images present significant intensity variation across patients and scanners. Consequently, training a classifier on a set of images and using it subsequently for brain segmentation may yield poor results. Adaptive iterative methods usually need to be employed to account for the variations of the particular scan. These methods are complicated, difficult to implement, and often involve significant computational costs. In this paper, a simple, non-iterative method is proposed for brain MR image segmentation. Two preprocessing techniques, namely intensity inhomogeneity correction and, more importantly, MR image intensity standardization, applied prior to segmentation, play a vital role in giving MR image intensities a tissue-specific numeric meaning, which leads to a very simple brain tissue segmentation strategy. Vectorial scale-based fuzzy connectedness and certain morphological operations are utilized first to generate the brain intracranial mask. The fuzzy membership value of each voxel within the intracranial mask for each brain tissue is then estimated. Finally, a maximum likelihood criterion with spatial constraints taken into account is utilized in classifying all voxels in the intracranial mask into different brain tissue groups. A set of inhomogeneity-corrected and intensity-standardized images is utilized as a training data set. We introduce two methods to estimate fuzzy membership values. In the first method, called SMG (for simple membership based on a Gaussian model), the fuzzy membership value is estimated by fitting a multivariate Gaussian model to the intensity distribution of each brain tissue, whose mean intensity vector and covariance matrix are estimated and fixed from the training data sets. The second method, called SMH (for simple membership based on a histogram), estimates fuzzy membership values directly via the intensity distribution of each brain tissue obtained from the training data sets.
We present several studies to evaluate the performance of these two methods based on 10 clinical MR images of normal subjects and 10 clinical MR images of Multiple Sclerosis (MS) patients. A quantitative comparison indicates that both methods have overall better accuracy than the k-nearest neighbors (kNN) method, and have much better efficiency than the Finite Mixture (FM) model based Expectation-Maximization (EM) method. Accuracy is similar for our methods and the EM method on the normal subject data sets, but our methods are considerably more accurate on the patient data sets.
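A simplified, two-landmark variant of MR intensity standardization maps fixed intensity percentiles of each image onto a common standard scale with a linear transform; the percentile choices and standard-scale endpoints here are illustrative assumptions (the full landmark-based method uses additional histogram landmarks and piecewise linear maps):

```python
import numpy as np

def standardize_intensities(img, standard_low=0.0, standard_high=4000.0,
                            pc_low=2, pc_high=98):
    """Linearly map an image's low/high intensity percentiles onto a fixed
    standard scale, so the same tissue maps to similar numeric intensities
    across scans; values beyond the landmarks are clipped."""
    lo, hi = np.percentile(img, [pc_low, pc_high])
    out = (img - lo) / (hi - lo) * (standard_high - standard_low) + standard_low
    return np.clip(out, standard_low, standard_high)
```

After such standardization, fixed tissue-specific intensity models (as in SMG and SMH) become meaningful across patients and scanners.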