ShaderNN: A Lightweight and Efficient Inference Engine for Real-time Applications on Mobile GPUs
Inference using deep neural networks on mobile devices has been an active area of research in recent years. The design of a deep learning inference framework targeted for mobile devices needs to consider various factors, such as the limited computational capacity of the devices, low power budget, varied memory access methods, and the I/O bus bandwidth governed by the underlying processor's architecture. Furthermore, integrating an inference framework with time-sensitive applications, such as games and video-based software performing tasks like ray tracing denoising and video processing, introduces the need to minimize data movement between processors and increase data locality in the target processor. In this paper, we propose Shader Neural Network (ShaderNN), an OpenGL-based, fast, and power-efficient inference framework designed for mobile devices to address these challenges. Our contributions include the following: (1) texture-based input/output provides efficient, zero-copy integration with real-time graphics pipelines or image processing applications, saving the expensive data transfers between CPU and GPU that are unavoidable in most existing inference engines; (2) we are the first to implement neural network inference operators with fragment shaders on the OpenGL backend, which is advantageous for deploying parametrically small neural network models; (3) a hybrid implementation of compute shaders and fragment shaders is proposed that enables layer-level shader selection to boost performance; and (4) we utilize OpenGL features, such as normalization, interpolation, and texture padding, to improve performance. Experiments illustrate the favorable performance of ShaderNN over other popular on-device deep learning frameworks, such as TensorFlow Lite, on the latest mobile devices powered by Qualcomm and MediaTek chips. A case study further demonstrates the seamless integration of the ShaderNN framework with a media-processing Android application. ShaderNN is available open source on GitHub (https://github.com/inferenceengine/shadernn).
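To make the texture-based I/O idea concrete, here is a minimal numpy sketch of how a feature map can be packed into RGBA texture planes so a fragment shader reads four channels per texel; `pack_nchw_to_rgba` is a hypothetical helper for illustration, not ShaderNN's actual API.

```python
import numpy as np

def pack_nchw_to_rgba(tensor):
    """Pack a (C, H, W) float tensor into a stack of RGBA texture planes.

    Each texel holds 4 consecutive channels in its R, G, B, A components,
    so a fragment shader can fetch 4 channels with one texture read.
    """
    c, h, w = tensor.shape
    groups = (c + 3) // 4                      # number of RGBA planes needed
    padded = np.zeros((groups * 4, h, w), dtype=tensor.dtype)
    padded[:c] = tensor                        # zero-pad channels to a multiple of 4
    # (groups, 4, H, W) -> (groups, H, W, 4): one H x W RGBA image per group
    return padded.reshape(groups, 4, h, w).transpose(0, 2, 3, 1)

# Example: a 6-channel feature map becomes two RGBA textures.
features = np.random.rand(6, 32, 32).astype(np.float32)
planes = pack_nchw_to_rgba(features)
print(planes.shape)  # (2, 32, 32, 4)
```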
Comparing multi-class classifier performance by multi-class ROC analysis: A nonparametric approach
The area under the Receiver Operating Characteristic (ROC) curve (AUC) is a standard metric for quantifying and comparing binary classifiers. Real-world applications often require classification into multiple (more than two) classes. For multi-class classifiers that produce class membership scores, a popular multi-class AUC (MAUC) variant is to average the pairwise AUC values [1]. Due to the complicated correlation patterns, the variance of MAUC is often estimated numerically using resampling techniques. This work is a generalization of DeLong's nonparametric approach for binary AUC analysis [2] to MAUC. We first derive the closed-form expression of the covariance matrix of the pairwise AUCs within a single MAUC. Then, by dropping higher-order terms, we obtain an approximate covariance matrix with a compact, matrix-factorization form, which then serves as the basis for variance estimation of a single MAUC. We further extend this approach to estimate the covariance of correlated MAUCs that arise from multiple competing classifiers. For the special case of binary correlated AUCs, our results coincide with those of DeLong. Our numerical studies confirm the accuracy of the variance and covariance estimates. We provide the source code of the proposed covariance estimation of correlated MAUCs on GitHub (https://tinyurl.com/euj6wvsz) for easy adoption by machine learning and statistical analysis packages to quantify and compare multi-class classifiers.
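For reference, the MAUC of [1] is the average of pairwise AUCs. A minimal Python sketch of the point estimate (assuming integer class labels 0..K-1 that index the score columns; the paper's contribution, the covariance estimate, is not shown here):

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import roc_auc_score

def mauc(y_true, scores):
    """Average pairwise AUC (the MAUC of ref. [1]).

    scores[i, k] is the membership score of sample i for class k.
    For each unordered class pair (j, k), A(j|k) is the AUC of the
    class-j score restricted to samples from classes j and k; the pair
    contributes (A(j|k) + A(k|j)) / 2, and MAUC averages over all pairs.
    """
    classes = np.unique(y_true)
    pair_aucs = []
    for j, k in combinations(classes, 2):
        mask = np.isin(y_true, [j, k])          # keep only classes j and k
        a_jk = roc_auc_score(y_true[mask] == j, scores[mask, j])
        a_kj = roc_auc_score(y_true[mask] == k, scores[mask, k])
        pair_aucs.append((a_jk + a_kj) / 2)
    return float(np.mean(pair_aucs))
```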
Improving Adversarial Robustness of Deep Neural Networks via Adaptive Margin Evolution
Adversarial training is the most popular and general strategy for improving Deep Neural Network (DNN) robustness against adversarial noises. Many adversarial training methods have been proposed in the past few years. However, most of these methods are highly sensitive to hyperparameters, especially the upper bound on training noise. Tuning these hyperparameters is expensive and difficult for people outside the adversarial robustness research domain, which prevents adversarial training techniques from being used in many application fields. In this study, we propose a new adversarial training method, named Adaptive Margin Evolution (AME). Besides being hyperparameter-free for the user, our AME method places adversarial training samples at optimal locations in the input space by gradually expanding the exploration range with self-adaptive and gradient-aware step sizes. We evaluate AME and seven other well-known adversarial training methods on three common benchmark datasets (CIFAR10, SVHN, and Tiny ImageNet) under the most challenging adversarial attack: AutoAttack. The results show that: (1) on the three datasets, AME has the best overall performance; (2) on the much more challenging Tiny ImageNet dataset, AME has the best performance at every noise level. Our work may pave the way for adopting adversarial training techniques in application domains where hyperparameter-free methods are preferred.
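The abstract leaves the exact update rules to the paper. As a rough illustration only, the following PyTorch sketch generates a perturbation within a per-sample margin that an outer training loop would grow or shrink; all names, step rules, and defaults here are assumptions, not the authors' method:

```python
import torch

def margin_bounded_perturb(model, x, y, margin, step_frac=0.25, iters=5):
    """Illustrative sketch of margin-based adversarial example generation.

    `margin` holds a per-sample noise bound that an outer training loop
    would evolve (grow while the sample is still classified correctly),
    in the spirit of adaptive margin evolution.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    bound = margin.view(-1, 1, 1, 1)
    for _ in range(iters):
        loss = torch.nn.functional.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # gradient-aware step: sign direction, per-sample step size
        delta = (delta + step_frac * bound * grad.sign()).detach()
        delta = torch.clamp(delta, -bound, bound)   # stay inside the margin
        delta.requires_grad_(True)
    return (x + delta).detach()
```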
TISS-net: Brain tumor image synthesis and segmentation using cascaded dual-task networks and error-prediction consistency
Accurate segmentation of brain tumors from medical images is important for diagnosis and treatment planning, and it often requires multi-modal or contrast-enhanced images. However, in practice some modalities of a patient may be absent. Synthesizing the missing modality has the potential to fill this gap and achieve high segmentation performance. Existing methods often treat the synthesis and segmentation tasks separately or consider them jointly but without effective regularization of the complex joint model, leading to limited performance. We propose a novel brain Tumor Image Synthesis and Segmentation network (TISS-Net) that obtains the synthesized target modality and the segmentation of brain tumors end-to-end with high performance. First, we propose a dual-task-regularized generator that simultaneously obtains a synthesized target modality and a coarse segmentation, which leverages a tumor-aware synthesis loss with perceptibility regularization to minimize the high-level semantic domain gap between synthesized and real target modalities. Based on the synthesized image and the coarse segmentation, we further propose a dual-task segmentor that simultaneously predicts a refined segmentation and the error in the coarse segmentation, where consistency between these two predictions is introduced for regularization. Our TISS-Net was validated with two applications: synthesizing FLAIR images for whole glioma segmentation, and synthesizing contrast-enhanced T1 images for Vestibular Schwannoma segmentation. Experimental results showed that our TISS-Net largely improved the segmentation accuracy compared with direct segmentation from the available modalities, and it outperformed state-of-the-art image synthesis-based segmentation methods.
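As an illustration of the error-prediction consistency idea, a minimal PyTorch sketch of one plausible regularizer, assuming the segmentor outputs refined-segmentation logits and an error-map logit (the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def consistency_loss(coarse_logits, refined_logits, error_logits):
    """Sketch: the predicted error map of the coarse segmentation should
    agree with the observed disagreement between the coarse and refined
    predictions (one way to realize error-prediction consistency)."""
    with torch.no_grad():
        # 1 where coarse and refined segmentations disagree, else 0
        derived_error = (coarse_logits.argmax(1)
                         != refined_logits.argmax(1)).float()
    return F.binary_cross_entropy_with_logits(error_logits.squeeze(1),
                                              derived_error)
```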
Multi-weight susceptible-infected model for predicting COVID-19 in China
The mutant strains of COVID-19 caused a global explosion of infections, including in many cities of China. In 2020, a hybrid AI model was proposed by Zheng et al., which accurately predicted the epidemic in Wuhan. As the main part of the hybrid AI model, the ISI method makes two important assumptions to avoid over-fitting. However, the assumptions cannot be effectively applied to new mutant strains. In this paper, a more general method, named the multi-weight susceptible-infected (MSI) model, is proposed to predict COVID-19 in Chinese Mainland. First, a Gaussian pre-processing method is proposed to solve the problem of data fluctuation, based on the quantity consistency of the cumulative infection number and the trend consistency of the daily infection number. Then, we improve the model in two ways: changing the grouped multi-parameter strategy to a multi-weight strategy, and removing the restriction on the weight distribution of viral infectivity. Experiments on outbreaks in many places in China from the end of 2021 to May 2022 show that, in China, an individual infected by the Delta or Omicron strains of SARS-CoV-2 can infect others within 3-4 days of becoming infected. In particular, the proposed method effectively predicts the trend of the epidemics in Xi'an, Tianjin, Henan, and Shanghai from December 2021 to May 2022.
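A minimal sketch of one susceptible-infected step with learned per-day infectivity weights, in the spirit of the MSI model; the helper name, window length, and toy numbers are assumptions:

```python
import numpy as np

def msi_step(daily_new, weights, susceptible, population):
    """One day of a multi-weight susceptible-infected update (sketch).

    weights[tau] is the learned infectivity of someone infected tau+1
    days ago; the MSI model learns these weights without imposing a
    fixed distribution on viral infectivity.
    """
    horizon = len(weights)
    recent = daily_new[-horizon:][::-1]          # most recent day first
    pressure = float(np.dot(weights[:len(recent)], recent))
    return pressure * susceptible / population   # expected new infections

# Toy usage with a 4-day infectivity window (the paper finds most
# Delta/Omicron transmission occurs within 3-4 days of infection).
history = [10, 14, 20, 27, 35]
w = np.array([0.6, 0.5, 0.2, 0.1])
print(msi_step(history, w, susceptible=9.9e5, population=1e6))
```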
Towards an ML-based semantic IoT for pandemic management: A survey of enabling technologies for COVID-19
The connection between humans and digital technologies has been documented extensively in the past decades but needs to be re-evaluated in light of the current global pandemic. Artificial Intelligence (AI), with its two strands, Machine Learning (ML) and Semantic Reasoning, has proven effective at providing efficient ways to prevent, diagnose, and limit the spread of COVID-19. IoT solutions have been widely proposed for COVID-19 disease monitoring, infection geolocation, and social applications. In this paper, we investigate the use of these three technologies for handling the COVID-19 pandemic. For this purpose, we surveyed the existing ML applications and algorithms proposed during the pandemic to detect COVID-19 disease using symptom factors and image processing, along with existing semantic technologies and IoT systems for COVID-19. Based on the survey results, we classify the main challenges and the solutions that could address them. The study proposes a conceptual framework for pandemic management and discusses challenges and trends for future research.
DDCNet: Deep Dilated Convolutional Neural Network for Dense Prediction
Dense pixel matching problems such as optical flow and disparity estimation are among the most challenging tasks in computer vision. Recently, several deep learning methods designed for these problems have been successful. A sufficiently large effective receptive field (ERF) and a high resolution of spatial features within a network are essential for providing higher-resolution dense estimates. In this work, we present a systematic approach to designing network architectures that provide a larger receptive field while maintaining a higher spatial feature resolution. To achieve a larger ERF, we utilize dilated convolutional layers. By aggressively increasing dilation rates in the deeper layers, we were able to achieve a sufficiently large ERF with significantly fewer trainable parameters. We used the optical flow estimation problem as the primary benchmark to illustrate our network design strategy. The benchmark results (Sintel, KITTI, and Middlebury) indicate that our compact networks achieve performance comparable to others in their class.
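As a sketch of the design strategy, the following PyTorch block stacks 3x3 convolutions with aggressively growing dilation rates; with rates (1, 2, 4, 8, 16) the theoretical receptive field reaches 1 + 2*(1+2+4+8+16) = 63 pixels at full spatial resolution, using the same parameter count as five ordinary 3x3 layers (the rates and channel widths here are illustrative, not the published DDCNet configuration):

```python
import torch.nn as nn

def dilated_stack(channels, dilations=(1, 2, 4, 8, 16)):
    """Stack of 3x3 convolutions whose dilation rate grows with depth.

    Each layer with dilation d adds 2*d to the receptive field while
    padding=d keeps the spatial resolution unchanged.
    """
    layers = []
    for d in dilations:
        layers += [nn.Conv2d(channels, channels, kernel_size=3,
                             padding=d, dilation=d),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)
```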
Masked face recognition with convolutional visual self-attention network
With the global outbreak of COVID-19, wearing face masks has been actively promoted as an effective public measure to reduce the risk of virus infection. This measure causes face recognition to fail in many cases. Therefore, it is necessary to improve the performance of masked face recognition (MFR). Inspired by the successful application of self-attention in computer vision, we propose a Convolutional Visual Self-Attention Network (CVSAN), which uses self-attention to augment the convolution operator. Specifically, this is achieved by connecting a convolutional feature map, which enforces local features, to a self-attention feature map that is capable of modeling long-range dependencies. Since there is currently no publicly available large-scale masked face dataset, we generate a Masked VGGFace2 dataset using a face detection algorithm to train the CVSAN model. Experiments show that CVSAN significantly improves MFR performance compared to other algorithms.
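A minimal PyTorch sketch of the augmentation idea, concatenating a local convolutional feature map with a global self-attention feature map; the channel sizes, head count, and block structure are assumptions, not the published CVSAN architecture:

```python
import torch
import torch.nn as nn

class ConvSelfAttentionBlock(nn.Module):
    """Convolution augmented with self-attention: local convolutional
    features are concatenated with long-range self-attention features.
    attn_ch must be divisible by heads."""
    def __init__(self, in_ch, conv_ch, attn_ch, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, conv_ch, 3, padding=1)
        self.to_tokens = nn.Conv2d(in_ch, attn_ch, 1)
        self.attn = nn.MultiheadAttention(attn_ch, heads, batch_first=True)

    def forward(self, x):
        local = self.conv(x)                     # local features
        b, _, h, w = x.shape
        tokens = self.to_tokens(x).flatten(2).transpose(1, 2)  # (B, HW, C)
        global_feat, _ = self.attn(tokens, tokens, tokens)
        global_feat = global_feat.transpose(1, 2).reshape(b, -1, h, w)
        return torch.cat([local, global_feat], dim=1)
```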
Deep learning for Covid-19 forecasting: State-of-the-art review
The Covid-19 pandemic has galvanized scientists to apply machine learning methods to help combat the crisis. Despite the significant amount of research, there exists no comprehensive survey devoted specifically to examining deep learning methods for Covid-19 forecasting. In this paper, we fill that gap by reviewing and analyzing the current studies that use deep learning for Covid-19 forecasting. We considered all published papers and preprints discoverable through Google Scholar for the period from April 1, 2020 to February 20, 2022 that describe deep learning approaches to forecasting Covid-19. Our search identified 152 studies, of which 53 passed the initial quality screening and were included in our survey. We propose a model-based taxonomy to categorize the literature. We describe each model and highlight its performance. Finally, the deficiencies of the existing approaches are identified and the necessary improvements for future research are elucidated. The study provides a gateway for researchers who are interested in forecasting Covid-19 using deep learning.
A semi-supervised learning approach for COVID-19 detection from chest CT scans
COVID-19 has spread rapidly to more than 200 countries and regions around the world. Early screening of suspected infected patients is essential for preventing and combating COVID-19. Computed Tomography (CT) is a fast and efficient tool that can quickly provide chest scan results. To reduce the burden of CT reading on doctors, in this article, a high-precision diagnosis algorithm for COVID-19 from chest CTs is designed for intelligent diagnosis. A semi-supervised learning approach is developed for the setting where only a small amount of labelled data is available. While following the MixMatch rules to conduct sophisticated data augmentation, we introduce a model training technique to reduce the risk of over-fitting. At the same time, a new data enhancement method is proposed to modify the regularization term in MixMatch. To further enhance the generalization of the model, a convolutional neural network based on an attention mechanism is then developed that enables the extraction of multi-scale features from CT scans. The proposed algorithm is evaluated on an independent chest CT dataset for COVID-19 and achieves an area under the receiver operating characteristic curve (AUC) of 0.932, accuracy of 90.1%, sensitivity of 91.4%, specificity of 88.9%, and F1-score of 89.9%. The results show that the proposed algorithm can accurately diagnose whether a chest CT indicates a positive or negative case of COVID-19, and can help doctors diagnose rapidly in the early stages of a COVID-19 outbreak.
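For context, two of the standard MixMatch ingredients the paper builds on, sharpening of pseudo-labels and mixup, can be sketched as follows (the paper's modified regularization term and training technique are not shown):

```python
import torch

def sharpen(p, T=0.5):
    """Sharpen averaged predictions on augmented unlabeled data, so the
    pseudo-label distribution has lower entropy (standard MixMatch)."""
    p = p ** (1.0 / T)
    return p / p.sum(dim=1, keepdim=True)

def mixup(x1, y1, x2, y2, alpha=0.75):
    """MixMatch-style mixup; lam is forced >= 0.5 so the mixed sample
    stays closer to its first argument."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1 - lam)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```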
Effective multiscale deep learning model for COVID19 segmentation tasks: A further step towards helping radiologist
Infection by SARS-CoV-2 leading to COVID-19 disease is still rising, and techniques to diagnose or evaluate the disease are still being thoroughly investigated. The use of CT as a complementary tool to other biological tests remains under scrutiny, since CT scans are prone to many false positives: other lung diseases display similar characteristics on CT. However, fully investigating CT images is of tremendous interest for better understanding the disease progression, and therefore thousands of scans need to be segmented by radiologists to study infected areas. Over the last year, many deep learning models for segmenting lungs in CT were developed. Unfortunately, the lack of large, shared, annotated multicentric datasets led to models that were either under-tested (small datasets) or not properly compared (custom metrics, no shared dataset), often leading to poor generalization performance. To address these issues, we developed a model that uses a multiscale and multilevel feature extraction strategy for COVID-19 segmentation and extensively validated it on several datasets to assess its generalization capability for other segmentation tasks on similar organs. The proposed model uses a novel encoder and decoder with a kernel-based atrous spatial pyramid pooling module at the bottom of the model to extract small features, together with a multistage skip connection concatenation approach. The results show that our proposed model can be trained on a small-scale dataset and still produce generalizable performance on other segmentation tasks. The proposed model achieved Dice scores of 90% on a 100-case dataset, 95% on the NSCLC dataset, 88.49% on the COVID-19 dataset, and 97.33% on the StructSeg 2019 dataset, compared with existing state-of-the-art models. The proposed solution could be used for COVID-19 segmentation in clinical applications. The source code is publicly available at https://github.com/RespectKnowledge/Mutiscale-based-Covid-_segmentation-usingDeep-Learning-models.
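A minimal PyTorch sketch of a standard atrous spatial pyramid pooling (ASPP) block of the kind used at the bottom of such a model; the paper's kernel-based variant differs in its kernel choices, and the dilation rates here are assumptions:

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions at
    several rates capture multiscale context, then a 1x1 projection
    fuses the concatenated branches."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r)
            for r in rates)
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```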
Comparison and ensemble of 2D and 3D approaches for COVID-19 detection in CT images
Detecting COVID-19 in computed tomography (CT) or radiography images has been proposed as a supplement to the RT-PCR test. We compare slice-based (2D) and volume-based (3D) approaches to this problem and propose a deep learning ensemble, called IST-CovNet, combining the best 2D and 3D systems with novel preprocessing and attention modules, and using a bidirectional Long Short-Term Memory model to combine slice-level decisions. The proposed ensemble obtains 90.80% accuracy and 0.95 AUC overall on the newly collected IST-C dataset in detecting COVID-19 among normal controls and other types of lung pathologies, and 93.69% accuracy and 0.99 AUC on the publicly available MosMedData dataset, which consists of COVID-19 scans and normal controls only. The system also obtains state-of-the-art results (90.16% accuracy and 0.94 AUC) on the COVID-CT-MD dataset, which is used only for testing. The system is deployed at Istanbul University Cerrahpaşa School of Medicine, where it is used to automatically screen CT scans of patients while they await RT-PCR tests or radiologist evaluation.
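A minimal PyTorch sketch of the slice-aggregation idea, feeding per-slice CNN features through a bidirectional LSTM to obtain a volume-level decision; feature and hidden sizes are assumptions:

```python
import torch.nn as nn

class SliceAggregator(nn.Module):
    """Combine per-slice CNN features with a bidirectional LSTM so the
    volume-level decision can use inter-slice context."""
    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, slice_features):       # (B, n_slices, feat_dim)
        out, _ = self.lstm(slice_features)   # (B, n_slices, 2*hidden)
        return self.head(out.mean(dim=1))    # volume-level logits
```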
Multi-modal trained artificial intelligence solution to triage chest X-ray for COVID-19 using pristine ground-truth, versus radiologists
The front-line imaging modalities computed tomography (CT) and X-ray play important roles in triaging COVID patients. Thoracic CT is accepted to have higher sensitivity than chest X-ray for COVID diagnosis, but considering the limited access to resources (both hardware and trained personnel) and issues related to decontamination, CT may not be ideal for triaging suspected subjects. An artificial intelligence (AI)-assisted, X-ray-based application for triaging and monitoring, one that identifies COVID patients in a timely manner without requiring experienced radiologists and can additionally delineate and quantify the disease region, is seen as a promising solution for widespread clinical use. Our proposed solution differs from existing solutions presented by industry and academic communities. We demonstrate a functional AI model that triages by classifying and segmenting a single chest X-ray image, while the AI model is trained using both X-ray and CT data. We report on how such a multi-modal training process improves the solution compared to single-modality (X-ray only) training. The multi-modal solution increases the AUC (area under the receiver operating characteristic curve) from 0.89 to 0.93 for binary classification between COVID-19 and non-COVID-19 cases. It also improves the Dice coefficient (0.59 to 0.62) for localizing the COVID-19 pathology. To compare the performance of experienced readers to the AI model, a reader study was also conducted. The AI model showed good consistency with respect to radiologists: the Dice score between two radiologists on the COVID group was 0.53, while the AI had Dice values of 0.52 and 0.55 when compared to the segmentations by the two radiologists separately. From a classification perspective, the AUCs of the two readers were 0.87 and 0.81, while the AUC of the AI was 0.93 on the reader study dataset. We also conducted a generalization study by comparing our method to state-of-the-art methods on independent datasets; the results show better performance from the proposed method. Leveraging multi-modal information during development thus benefits single-modality inference.
A fuzzy-enhanced deep learning approach for early detection of Covid-19 pneumonia from portable chest X-ray images
The Covid-19 pandemic is the defining global health crisis of our time. Chest X-rays (CXR) have been an important imaging modality for assisting in the diagnosis and management of hospitalised Covid-19 patients. However, their interpretation is time-intensive for radiologists. Accurate computer-aided systems can facilitate early diagnosis of Covid-19 and effective triaging. In this paper, we propose a fuzzy-logic-based deep learning (DL) approach to differentiate between CXR images of patients with Covid-19 pneumonia and with interstitial pneumonias not related to Covid-19. The developed deep learning model is used to extract relevant features from CXR images, combined with fuzzy images generated by a fuzzy edge detection algorithm. Experimental results show that using a combination of CXR and fuzzy features within a deep learning approach, by feeding a deep network into a Multilayer Perceptron (MLP), yields higher classification performance (accuracy rate up to 81%) than benchmark deep learning approaches. The approach has been validated on additional datasets, which are continuously generated due to the spread of the virus, and would help triage patients in acute settings. A permutation analysis is carried out, and a simple occlusion methodology for explaining decisions is also proposed. The proposed pipeline can be easily embedded into present clinical decision support systems.
Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM
Adaptive gradient methods (AGMs) have become popular for optimizing the nonconvex problems in the deep learning area. We revisit AGMs and identify that the adaptive learning rate (A-LR) used by AGMs varies significantly across the dimensions of the problem over epochs (i.e., has anisotropic scale), which may lead to issues in convergence and generalization. All existing modified AGMs in effect represent efforts to revise the A-LR. Theoretically, we provide a new way to analyze the convergence of AGMs and prove that the convergence rate of Adam also depends on its hyper-parameter ε, which has been overlooked previously. Based on these two facts, we propose a new AGM that calibrates the A-LR with a softplus activation function, resulting in the Sadam and SAMSGrad methods. We further prove that these algorithms enjoy better convergence speed under nonconvex, non-strongly convex, and Polyak-Łojasiewicz conditions compared with Adam. Empirical studies support our observation of the anisotropic A-LR and show that the proposed methods outperform existing AGMs and generalize even better than SGD with momentum in multiple deep learning tasks.
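A minimal sketch of one calibrated update in the spirit of Sadam, replacing Adam's raw denominator sqrt(v) + ε with a softplus of sqrt(v); the β value and the omission of bias correction are simplifications for illustration:

```python
import torch
import torch.nn.functional as F

def sadam_style_step(param, grad, m, v,
                     lr=1e-3, b1=0.9, b2=0.999, beta=50.0):
    """One calibrated adaptive-gradient update (sketch).

    softplus(sqrt(v)) smooths the anisotropic adaptive learning rate:
    it behaves like sqrt(v) when v is large but is bounded away from
    zero when v is small, playing the role of the epsilon term.
    Bias correction is omitted for brevity.
    """
    m.mul_(b1).add_(grad, alpha=1 - b1)          # first moment
    v.mul_(b2).addcmul_(grad, grad, value=1 - b2)  # second moment
    denom = F.softplus(v.sqrt(), beta=beta)      # calibrated A-LR denominator
    param.data.add_(-lr * m / denom)
    return m, v
```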
Time series predicting of COVID-19 based on deep learning
COVID-19 was declared a global pandemic by the World Health Organisation (WHO) on 11th March 2020. Many researchers have, in the past, attempted to predict COVID outbreaks and their effects. Some have regarded time-series variables as primary factors which can affect the onset of infectious diseases like influenza and severe acute respiratory syndrome (SARS). In this study, we used public datasets provided by the European Centre for Disease Prevention and Control to develop a prediction model for the spread of the COVID-19 outbreak to and throughout Malaysia, Morocco, and Saudi Arabia, making use of effective deep learning (DL) models for this purpose. We assessed specific major features for predicting the trend of the existing COVID-19 outbreak in these three countries. We also propose a DL approach that includes recurrent neural network (RNN) and long short-term memory (LSTM) networks for predicting the probable numbers of COVID-19 cases. The LSTM models achieved 98.58% precision accuracy while the RNN models achieved 93.45% precision accuracy. This study also compares the number of coronavirus cases and the number of resulting deaths in Malaysia, Morocco, and Saudi Arabia, and predicts the number of confirmed COVID-19 cases and deaths for the subsequent seven days, using data available up to December 3rd, 2020.
Digital twins based on bidirectional LSTM and GAN for modelling the COVID-19 pandemic
The outbreak of the coronavirus disease 2019 (COVID-19) has now spread throughout the globe, infecting over 150 million people and causing over 3.2 million deaths. Thus, there is an urgent need to study the dynamics of epidemiological models to gain a better understanding of how such diseases spread. While epidemiological models can be computationally expensive, recent advances in machine learning techniques have given rise to neural networks with the ability to learn and predict complex dynamics at reduced computational cost. Here we introduce two digital twins of a SEIRS model applied to an idealised town. The SEIRS model has been modified to account for spatial variation and, where possible, the model parameters are based on official virus spreading data from the UK. We compare predictions from one digital twin based on a data-corrected Bidirectional Long Short-Term Memory network with predictions from another digital twin based on a predictive Generative Adversarial Network. The predictions given by these two frameworks are accurate when compared to the original SEIRS model data. Additionally, these frameworks are data-agnostic and could be applied to towns, idealised or real, in the UK or in other countries. Furthermore, more compartments could be included in the SEIRS model in order to study more realistic epidemiological behaviour.
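For reference, the classical (non-spatial) SEIRS dynamics that the digital twins' spatially varying model extends can be integrated in a few lines; the parameter values below are illustrative, not the paper's UK-calibrated ones:

```python
import numpy as np
from scipy.integrate import odeint

def seirs(y, t, beta, sigma, gamma, xi, N):
    """Right-hand side of the classical SEIRS compartment model."""
    S, E, I, R = y
    return [-beta * S * I / N + xi * R,     # susceptible (waning immunity in)
            beta * S * I / N - sigma * E,   # exposed
            sigma * E - gamma * I,          # infectious
            gamma * I - xi * R]             # recovered (immunity wanes out)

N = 1e5
y0 = [N - 10, 0, 10, 0]                     # start with 10 infectious people
t = np.linspace(0, 180, 181)                # 180 days, daily resolution
sol = odeint(seirs, y0, t, args=(0.3, 1/5, 1/7, 1/90, N))
```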
Deep supervised learning using self-adaptive auxiliary loss for COVID-19 diagnosis from imbalanced CT images
The outbreak and rapid spread of coronavirus disease 2019 (COVID-19) has had a huge impact on the lives and safety of people around the world. Chest CT is considered an effective tool for the diagnosis and follow-up of COVID-19. For faster examination, automatic COVID-19 diagnostic techniques using deep learning on CT images have received increasing attention. However, the number and category of existing datasets for COVID-19 diagnosis that can be used for training are limited, and the number of initial COVID-19 samples is much smaller than that of normal samples, which leads to the problem of class imbalance. This makes it difficult for classification algorithms to learn discriminative boundaries, since the data of some classes are rich while others are scarce. Therefore, training robust deep neural networks with imbalanced data is a fundamentally challenging but important task in the diagnosis of COVID-19. In this paper, we create a challenging clinical dataset (named COVID19-Diag) with category diversity and propose a novel imbalanced data classification method using deep supervised learning with a self-adaptive auxiliary loss (DSN-SAAL) for COVID-19 diagnosis. The loss function considers both the effects of data overlap between CT slices and possible noisy labels in clinical datasets on a multi-scale, deep supervised network framework, by integrating the effective number of samples and a weighting regularization term. The learning process jointly and automatically optimizes all parameters over the deep supervised network, making our model generally applicable to a wide range of datasets. Extensive experiments are conducted on COVID19-Diag and three public COVID-19 diagnosis datasets. The results show that our DSN-SAAL outperforms the state-of-the-art methods and is effective for the diagnosis of COVID-19 under varying degrees of data imbalance.
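One published ingredient such a loss builds on, weighting classes by their effective number of samples, can be sketched as follows; the self-adaptive auxiliary term itself is not shown, and the beta value is an assumption:

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Class weights from the effective number of samples,
    E_n = (1 - beta^n) / (1 - beta): as n grows, newly added samples
    overlap with existing ones and contribute less, so rare classes
    receive proportionally larger weights."""
    n = np.asarray(samples_per_class, dtype=float)
    effective = (1.0 - np.power(beta, n)) / (1.0 - beta)
    w = 1.0 / effective
    return w / w.sum() * len(n)              # normalize to mean weight 1

print(class_balanced_weights([5000, 300, 120]))  # rare classes up-weighted
```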
Fusion of intelligent learning for COVID-19: A state-of-the-art review and analysis on real medical data
The unprecedented surge of a novel coronavirus in December 2019, named COVID-19 by the World Health Organization, has had a serious impact on the health and socioeconomic activities of the public all over the world. Since its origin, the number of infected and deceased cases has been growing exponentially in almost all the affected countries. The rapid spread of the novel coronavirus across the world has resulted in the scarcity of medical resources and overburdened hospitals. As a result, researchers and technocrats across the world are continuously working on efficient strategies which may assist governments and healthcare systems in controlling and managing the spread of the COVID-19 pandemic. Therefore, this study provides an extensive review of the ongoing strategies, such as diagnosis, prediction, drug and vaccine development, and preventive measures used in combating COVID-19, along with the technologies used and their limitations. Moreover, this review provides a comparative analysis of the distinct types of data, emerging technologies, approaches used in the diagnosis and prediction of COVID-19, statistics of contact tracing apps, and vaccine production platforms used in the COVID-19 pandemic. Finally, the study highlights some challenges and pitfalls observed in the systematic review which may assist researchers in developing more efficient strategies for controlling and managing the spread of COVID-19.
Automatic Whole Slide Pathology Image Diagnosis Framework via Unit Stochastic Selection and Attention Fusion
Pathology tissue slides are taken as the gold standard for the diagnosis of most cancer diseases. Automatic pathology slide diagnosis remains a challenging task for researchers because of the high resolution, significant morphological variation, and ambiguity between malignant and benign regions in whole slide images (WSIs). In this study, we introduce a general framework to automatically diagnose different types of WSIs via unit stochastic selection and attention fusion. For example, a unit can denote a patch in a histopathology slide or a cell in a cytopathology slide. To be specific, we first train a unit-level convolutional neural network (CNN) to perform two tasks: constructing feature extractors for the units and estimating each unit's non-benign probability. Then we use our novel stochastic selection algorithm to choose a small subset of units that are most likely to be non-benign, referred to as the Units Of Interest (UOI), as determined by the CNN. Next, we use an attention mechanism to fuse the representations of the UOI into a fixed-length descriptor for the WSI's diagnosis. We evaluate the proposed framework on three datasets: histological thyroid frozen sections, histological colonoscopy tissue slides, and cytological cervical pap smear slides. The framework achieves diagnostic accuracies higher than 0.8 and AUC values higher than 0.85 in all three applications. The experiments demonstrate the generality and effectiveness of the proposed framework and its potential for clinical applications.
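A minimal PyTorch sketch of attention-based fusion of the UOI features into a fixed-length slide descriptor; the dimensions and the scoring network are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Pool a variable number of unit features into one fixed-length
    WSI descriptor using learned attention weights."""
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))

    def forward(self, unit_feats):             # (n_units, feat_dim)
        alpha = torch.softmax(self.score(unit_feats), dim=0)  # (n_units, 1)
        return (alpha * unit_feats).sum(dim=0)  # (feat_dim,) descriptor
```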
Information Capacity of a Stochastically Responding Neuron Assembly
In this work, certain aspects of the structure of the overlapping groups of neurons encoding specific signals are examined. Individual neurons are assumed to respond stochastically to an input signal. Identification of a particular signal is assumed to result from the aggregate activity of a group of neurons, which we call an information pathway. Conditions for definite response and for non-interference of pathways are derived. These conditions constrain the response properties of individual neurons and the allowed overlap among pathways. Under these constraints, and under the simplifying assumption that all pathways have similar structure, the information capacity of the system is derived. Furthermore, we show that there is a definite advantage in information capacity if pathway neurons are interspersed among the neuron assembly.