Neuroscience › Neurology

Brain Tumor Detection and Classification

Description

This cluster of papers focuses on the classification of brain tumor type and grade using various techniques such as MRI, deep learning, convolutional neural networks, feature extraction, and machine learning. The research aims to improve the accuracy and efficiency of brain tumor classification for better diagnosis and treatment.

Keywords

MRI; Brain Tumor; Classification; Deep Learning; Convolutional Neural Network; Feature Extraction; Machine Learning; Image Segmentation; Transfer Learning; Medical Imaging

 An accurate, reproducible method for determining the infarct volumes of gray matter structures is presented for use with presently available image analysis systems. Areas of stained sections with optical densities above that of a threshold value are automatically recognized and measured. This eliminates the potential error and bias inherent in manually delineating infarcted regions. Moreover, the volume of surviving normal gray matter is determined rather than that of the infarct. This approach minimizes the error that is introduced by edema, which distorts and enlarges the infarcted tissue and surrounding white matter.
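The core of the method is simple to state computationally: count above-threshold pixels per section, then integrate the areas across sections. A minimal sketch of that idea, assuming sections are available as arrays of optical densities (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def surviving_gray_matter_volume(sections, od_threshold,
                                 section_spacing_mm, pixel_area_mm2):
    """Estimate the volume of surviving (stained, above-threshold) gray matter.

    sections: iterable of 2D arrays of optical densities, one per stained section.
    Pixels whose optical density exceeds od_threshold are counted as normal
    tissue; the volume is the summed area times the inter-section spacing.
    """
    volume_mm3 = 0.0
    for od in sections:
        area_mm2 = np.count_nonzero(od > od_threshold) * pixel_area_mm2
        volume_mm3 += area_mm2 * section_spacing_mm
    return volume_mm3
```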
This paper has reviewed, with somewhat variable coverage, the nine MR image segmentation techniques itemized in Table II. A wide array of approaches has been discussed; each has its merits and drawbacks. We have also given pointers to other approaches not discussed in depth in this review. The methods reviewed fall roughly into four model groups: c-means, maximum likelihood, neural networks, and k-nearest neighbor rules. Both supervised and unsupervised schemes require human intervention to obtain clinically useful results in MR segmentation. Unsupervised techniques require somewhat less interaction on a per patient/image basis. Maximum likelihood techniques have had some success, but are very susceptible to the choice of training region, which may need to be chosen slice by slice for even one patient. Generally, techniques that must assume an underlying statistical distribution of the data (such as LML and UML) do not appear promising, since tissue regions of interest do not usually obey the distributional tendencies of probability density functions. The most promising supervised techniques reviewed seem to be FF/NN methods that allow hidden layers to be configured as examples are presented to the system. An example of a self-configuring network, FF/CC, was also discussed. The relatively simple k-nearest neighbor rule algorithms (hard and fuzzy) have also shown promise in the supervised category. Unsupervised techniques based upon fuzzy c-means clustering algorithms have also shown great promise in MR image segmentation. Several unsupervised connectionist techniques have recently been experimented with on MR images of the brain and have provided promising initial results. A pixel-intensity-based edge detection algorithm has recently been used to provide promising segmentations of the brain. This is also an unsupervised technique, older versions of which have been susceptible to oversegmenting the image because of the lack of clear boundaries between tissue types or finding uninteresting boundaries between slightly different types of the same tissue. To conclude, we offer some remarks about improving MR segmentation techniques. The better unsupervised techniques are too slow. Improving speed via parallelization and optimization will improve their competitiveness with, e.g., the k-nn rule, which is the fastest technique covered in this review. Another area for development is dynamic cluster validity. Unsupervised methods need better ways to specify and adjust c, the number of tissue classes found by the algorithm. Initialization is a third important area of research. Many of the schemes listed in Table II are sensitive to good initialization, both in terms of the parameters of the design and operator selection of training data.
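For concreteness, the fuzzy c-means family the review singles out minimizes the standard FCM objective (textbook formulation given here as background, with m > 1 the fuzzifier, u_ik the membership of voxel x_k in class i, and v_i the class centroids):

```latex
J_m(U,V) \;=\; \sum_{i=1}^{c}\sum_{k=1}^{n} u_{ik}^{\,m}\,\lVert x_k - v_i \rVert^2,
\qquad \text{s.t.}\ \sum_{i=1}^{c} u_{ik} = 1 \ \text{for all } k,
```

which is minimized by alternating the standard updates

```latex
u_{ik} = \Biggl(\sum_{j=1}^{c}\Bigl(\frac{\lVert x_k - v_i\rVert}{\lVert x_k - v_j\rVert}\Bigr)^{2/(m-1)}\Biggr)^{-1},
\qquad
v_i = \frac{\sum_{k} u_{ik}^{\,m}\, x_k}{\sum_{k} u_{ik}^{\,m}}.
```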
 MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines.
 Reliable quantification of white matter hyperintensities of presumed vascular origin (WMHs) is increasingly needed, given the presence of these MRI findings in patients with several neurological and vascular disorders, as well as in elderly healthy subjects. We present BIANCA (Brain Intensity AbNormality Classification Algorithm), a fully automated, supervised method for WMH detection, based on the k-nearest neighbour (k-NN) algorithm. Relative to previous k-NN based segmentation methods, BIANCA offers different options for weighting the spatial information, local spatial intensity averaging, and different options for the choice of the number and location of the training points. BIANCA is multimodal and highly flexible so that the user can adapt the tool to their protocol and specific needs. We optimised and validated BIANCA on two datasets with different MRI protocols and patient populations (a "predominantly neurodegenerative" and a "predominantly vascular" cohort). BIANCA was first optimised on a subset of images for each dataset in terms of overlap and volumetric agreement with a manually segmented WMH mask. The correlation between the volumes extracted with BIANCA (using the optimised set of options), the volumes extracted from the manual masks and visual ratings showed that BIANCA is a valid alternative to manual segmentation. The optimised set of options was then applied to the whole cohorts and the resulting WMH volume estimates showed good correlations with visual ratings and with age. Finally, we performed a reproducibility test, to evaluate the robustness of BIANCA, and compared BIANCA performance against existing methods. Our findings suggest that BIANCA, which will be freely available as part of the FSL package, is a reliable method for automated WMH segmentation in large cross-sectional cohort studies.
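BIANCA itself ships with FSL, but its underlying classifier is standard k-NN over intensity-plus-location features. A toy sketch of that idea, with a spatial_weight factor standing in for BIANCA's spatial-weighting option (all names, defaults, and the label convention 0 = normal / 1 = lesion are illustrative, not the FSL code):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_lesion_probability(train_feats, train_labels, test_feats,
                           spatial_weight=1.0, k=5):
    """Toy k-NN lesion classifier in the spirit of BIANCA.

    Each row is [intensity features..., x, y, z]; the last three columns are
    scaled by spatial_weight, mimicking the option to control how much voxel
    location influences the neighbour search.
    """
    def scale(f):
        f = np.asarray(f, dtype=float).copy()
        f[:, -3:] *= spatial_weight
        return f

    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(scale(train_feats), train_labels)
    return knn.predict_proba(scale(test_feats))[:, 1]  # P(lesion) per voxel
```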
Brain tumor segmentation is an important task in medical image processing. Early diagnosis of brain tumors plays an important role in improving treatment possibilities and increases the survival rate of patients. Manual segmentation of brain tumors for cancer diagnosis, from the large amount of MRI images generated in clinical routine, is a difficult and time-consuming task. There is a need for automatic brain tumor image segmentation. The purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. Recently, automatic segmentation using deep learning methods has proved popular, since these methods achieve state-of-the-art results and can address this problem better than other methods. Deep learning methods can also enable efficient processing and objective evaluation of the large amounts of MRI-based image data. There are a number of existing review papers focusing on traditional methods for MRI-based brain tumor image segmentation. Unlike those, in this paper we focus on the recent trend of deep learning methods in this field. First, an introduction to brain tumors and methods for brain tumor segmentation is given. Then, the state-of-the-art algorithms, with a focus on the recent trend of deep learning methods, are discussed. Finally, an assessment of the current state is presented and future developments to standardize MRI-based brain tumor segmentation methods into daily clinical routine are addressed.
The accurate diagnosis of Alzheimer's disease (AD) and its early stage, i.e., mild cognitive impairment, is essential for timely treatment and possible delay of AD. Fusion of multimodal neuroimaging data, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), has shown its effectiveness for AD diagnosis. The deep polynomial network (DPN) is a recently proposed deep learning algorithm that performs well on both large-scale and small-size datasets. In this study, a multimodal stacked DPN (MM-SDPN) algorithm, which consists of two-stage SDPNs, is proposed to fuse and learn feature representations from multimodal neuroimaging data for AD diagnosis. Specifically, two SDPNs are first used to learn high-level features of MRI and PET, respectively, which are then fed to another SDPN to fuse the multimodal neuroimaging information. The proposed MM-SDPN algorithm is applied to the ADNI dataset to conduct both binary and multiclass classification tasks. Experimental results indicate that MM-SDPN is superior to state-of-the-art multimodal feature-learning-based algorithms for AD diagnosis.
The segmentation, detection, and extraction of the infected tumor area from magnetic resonance (MR) images are a primary concern but a tedious and time-consuming task performed by radiologists or clinical experts, and their accuracy depends on their experience alone. The use of computer-aided technology therefore becomes very necessary to overcome these limitations. In this study, to improve performance and reduce the complexity involved in the medical image segmentation process, we investigated Berkeley wavelet transformation (BWT) based brain tumor segmentation. Furthermore, to improve the accuracy and quality rate of the support vector machine (SVM) based classifier, relevant features are extracted from each segmented tissue. The experimental results of the proposed technique were evaluated and validated for performance and quality analysis on magnetic resonance brain images, based on accuracy, sensitivity, specificity, and the Dice similarity index coefficient. The technique achieved 96.51% accuracy, 94.2% specificity, and 97.72% sensitivity, demonstrating its effectiveness in identifying normal and abnormal tissues from brain MR images. It also obtained an average Dice similarity index coefficient of 0.82, indicating good overlap between the automatically extracted tumor regions and those extracted manually by radiologists. The simulation results prove the significance in terms of quality parameters and accuracy in comparison to state-of-the-art techniques.
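The reported 0.82 overlap is the standard Dice similarity index, which for two binary masks can be computed as:

```python
import numpy as np

def dice_index(auto_mask, manual_mask):
    """Dice similarity index between automated and manual binary tumor masks."""
    a = np.asarray(auto_mask, dtype=bool)
    m = np.asarray(manual_mask, dtype=bool)
    overlap = np.logical_and(a, m).sum()
    denom = a.sum() + m.sum()
    return 2.0 * overlap / denom if denom else 1.0  # both empty: perfect match
```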
Deep learning is a new machine learning field that has gained a lot of interest over the past few years. It has been widely applied to several applications and has proven to be a powerful machine learning tool for many complex problems. In this paper we used a deep neural network classifier, one of the DL architectures, for classifying a dataset of 66 brain MRIs into four classes: normal, glioblastoma, sarcoma, and metastatic bronchogenic carcinoma tumors. The classifier was combined with the discrete wavelet transform (DWT), a powerful feature extraction tool, and principal component analysis (PCA), and the evaluation of the performance was quite good over all the performance measures.
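A minimal sketch of such a DWT -> PCA -> neural-network pipeline, using PyWavelets and scikit-learn as stand-ins; the paper's exact wavelet, component count, and network are not given here, so all parameters are illustrative:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def dwt_features(image, wavelet="haar", level=2):
    """Use the level-2 approximation sub-band of a 2D DWT as the feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    return coeffs[0].ravel()  # low-frequency content; detail bands are dropped

# X_img: list of 2D MRI slices; y: class labels (placeholder names).
# X = np.array([dwt_features(im) for im in X_img])
# model = make_pipeline(PCA(n_components=20),
#                       MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500))
# model.fit(X, y)
```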
The identification, segmentation, and detection of the infected area in brain tumor MRI images are tedious and time-consuming tasks. The different anatomical structures of the human body can be visualized using image processing concepts, but it is very difficult to visualize the abnormal structures of the human brain using simple imaging techniques. The magnetic resonance imaging technique distinguishes and clarifies the neural architecture of the human brain, and comprises many imaging modalities that scan and capture its internal structure. In this study, we concentrated on noise removal, extraction of gray-level co-occurrence matrix (GLCM) features, and DWT-based brain tumor region growing segmentation to reduce complexity and improve performance. This was followed by morphological filtering, which removes noise that can form after segmentation. A probabilistic neural network classifier was used to train and test the accuracy of tumor localization in brain MRI images. The experimental results achieved nearly 100% accuracy in identifying normal and abnormal tissues from brain MR images, demonstrating the effectiveness of the proposed technique.
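GLCM texture features of this kind can be computed with scikit-image (recent versions; the distances, angles, and property set below are typical choices, not necessarily the paper's):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image):
    """A few standard GLCM texture features from an 8-bit grayscale image."""
    glcm = graycomatrix(gray_image, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Example on a synthetic 8-bit image:
demo = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(glcm_features(demo))
```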
Alzheimer's disease is an incurable, progressive neurological brain disorder. Earlier detection of Alzheimer's disease can help with proper treatment and prevent brain tissue damage. Several statistical and machine learning models have been exploited by researchers for Alzheimer's disease diagnosis. Analyzing magnetic resonance imaging (MRI) is a common practice for Alzheimer's disease diagnosis in clinical research. Detection of Alzheimer's disease is challenging because of the similarity between Alzheimer's disease MRI data and the standard healthy MRI data of older people. Recently, advanced deep learning techniques have successfully demonstrated human-level performance in numerous fields including medical image analysis. We propose a deep convolutional neural network for Alzheimer's disease diagnosis using brain MRI data analysis. While most of the existing approaches perform binary classification, our model can identify different stages of Alzheimer's disease and obtains superior performance for early-stage diagnosis. We conducted ample experiments to demonstrate that our proposed model outperformed comparative baselines on the Open Access Series of Imaging Studies dataset.
Purpose: To develop and validate a deep learning algorithm that predicts the final diagnosis of Alzheimer disease (AD), mild cognitive impairment, or neither at fluorine 18 (18F) fluorodeoxyglucose (FDG) PET of the brain.
Deep learning algorithms are designed in such a way that they mimic the function of the human cerebral cortex. These algorithms are representations of deep neural networks, i.e., neural networks with many hidden layers. Convolutional neural networks are deep learning algorithms that can be trained on large datasets with millions of parameters; they take 2D images as input and convolve them with filters to produce the desired outputs. In this article, CNN models are built to evaluate their performance on image recognition and detection datasets. The algorithm is implemented on the MNIST and CIFAR-10 datasets and its performance is evaluated. The models reach 99.6% accuracy on MNIST; on CIFAR-10, real-time data augmentation and dropout are used, with training performed on a CPU.
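A small Keras CNN of the kind such experiments evaluate on MNIST, including the dropout the abstract mentions (layer sizes are illustrative, not the paper's exact configuration):

```python
from tensorflow.keras import layers, models

def build_mnist_cnn():
    """A compact CNN for 28x28 grayscale digit classification."""
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),  # dropout, as mentioned in the abstract
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```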
Brain tumor classification is a crucial task to evaluate tumors and make a treatment decision according to their classes. There are many imaging techniques used to detect brain tumors, but MRI is commonly used due to its superior image quality and the fact that it relies on no ionizing radiation. Deep learning (DL) is a subfield of machine learning that has recently shown remarkable performance, especially in classification and segmentation problems. In this paper, a DL model based on a convolutional neural network is proposed to classify different brain tumor types using two publicly available datasets. The first classifies tumors into three types (meningioma, glioma, and pituitary tumor); the second differentiates between the three glioma grades (Grade II, Grade III, and Grade IV). The datasets include 233 and 73 patients with a total of 3064 and 516 T1-weighted contrast-enhanced images for the first and second datasets, respectively. The proposed network structure achieves significant performance, with best overall accuracies of 96.13% and 98.7%, respectively, for the two studies. The results indicate the ability of the model for brain tumor multi-classification purposes.
 In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
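The architecture's defining idea is stacking many 3x3 convolutions between 2x2 poolings. A Keras sketch of the 16-weight-layer configuration (VGG-16: 13 convolutions plus 3 fully connected layers), following the published configuration but not the authors' code:

```python
from tensorflow.keras import layers, models

def vgg_style_block(x, filters, convs):
    """Stack of 3x3 convolutions followed by 2x2 max pooling, as in VGG."""
    for _ in range(convs):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling2D(2)(x)

inputs = layers.Input(shape=(224, 224, 3))
x = vgg_style_block(inputs, 64, 2)
x = vgg_style_block(x, 128, 2)
x = vgg_style_block(x, 256, 3)   # deeper stages use three 3x3 convs in VGG-16
x = vgg_style_block(x, 512, 3)
x = vgg_style_block(x, 512, 3)
x = layers.Flatten()(x)
x = layers.Dense(4096, activation="relu")(x)
x = layers.Dense(4096, activation="relu")(x)
outputs = layers.Dense(1000, activation="softmax")(x)  # ImageNet classes
model = models.Model(inputs, outputs)
```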
Brain tumors are considered one of the deadliest and most common forms of cancer in both children and adults. Consequently, determining the correct type of brain tumor at an early stage is of significant importance to devise a precise treatment plan and predict the patient's response to the adopted treatment. In this regard, there has been a recent surge of interest in designing Convolutional Neural Networks (CNNs) for the problem of brain tumor type classification. However, CNNs typically require large amounts of training data and cannot properly handle input transformations. Capsule networks (referred to as CapsNets) are brand new machine learning architectures proposed very recently to overcome these shortcomings of CNNs, and are poised to revolutionize deep learning solutions. Of particular interest to this work is that capsule networks are robust to rotation and affine transformation and require far less training data, which matters when processing medical image datasets, including brain Magnetic Resonance Imaging (MRI) images. In this paper, we aim to achieve the following four objectives: (i) adopt and incorporate CapsNets for the problem of brain tumor classification and design an improved architecture that maximizes the accuracy of the classification problem at hand; (ii) investigate the over-fitting problem of CapsNets based on a real set of MRI images; (iii) explore whether CapsNets provide a better fit for whole brain images or just the segmented tumor; and (iv) develop a visualization paradigm for the output of the CapsNet to better explain the learned features. Our results show that the proposed approach can successfully outperform CNNs for the brain tumor classification problem.
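For background, the capsule output in the original CapsNet formulation (Sabour et al.) is a "squashed" weighted sum of prediction vectors, so that the length of v_j encodes the probability that the entity represented by capsule j is present:

```latex
\mathbf{s}_j = \sum_i c_{ij}\,\hat{\mathbf{u}}_{j|i}, \qquad
\hat{\mathbf{u}}_{j|i} = \mathbf{W}_{ij}\,\mathbf{u}_i, \qquad
\mathbf{v}_j = \frac{\lVert \mathbf{s}_j \rVert^2}{1 + \lVert \mathbf{s}_j \rVert^2}\,
\frac{\mathbf{s}_j}{\lVert \mathbf{s}_j \rVert},
```

where the coupling coefficients c_ij are set by routing-by-agreement rather than backpropagation alone.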
 PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.
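Modification 1) replaces the 256-way softmax with a mixture of K discretized logistics: for an integer pixel value x, each logistic component is integrated over the width of one intensity bin (sigma is the logistic sigmoid; the edge bins at 0 and 255 absorb the full tail mass):

```latex
P(x \mid \pi, \mu, s) \;=\; \sum_{i=1}^{K} \pi_i \left[
\sigma\!\left(\frac{x + 0.5 - \mu_i}{s_i}\right) -
\sigma\!\left(\frac{x - 0.5 - \mu_i}{s_i}\right) \right].
```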
 Alzheimer's disease (AD) is a progressive and irreversible brain degenerative disorder. Mild cognitive impairment (MCI) is a clinical precursor of AD. Although some treatments can delay its progression, no effective cures are available for AD. Accurate early-stage diagnosis of AD is vital for the prevention and intervention of the disease progression. Hippocampus is one of the first affected brain regions in AD. To help AD diagnosis, the shape and volume of the hippocampus are often measured using structural magnetic resonance imaging (MRI). However, these features encode limited information and may suffer from segmentation errors. Additionally, the extraction of these features is independent of the classification model, which could result in sub-optimal performance. In this study, we propose a multi-model deep learning framework based on convolutional neural network (CNN) for joint automatic hippocampal segmentation and AD classification using structural MRI data. Firstly, a multi-task deep CNN model is constructed for jointly learning hippocampal segmentation and disease classification. Then, we construct a 3D Densely Connected Convolutional Networks (3D DenseNet) to learn features of the 3D patches extracted based on the hippocampal segmentation results for the classification task. Finally, the learned features from the multi-task CNN and DenseNet models are combined to classify disease status. Our method is evaluated on the baseline T1-weighted structural MRI data collected from 97 AD, 233 MCI, 119 Normal Control (NC) subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The proposed method achieves a dice similarity coefficient of 87.0% for hippocampal segmentation. In addition, the proposed method achieves an accuracy of 88.9% and an AUC (area under the ROC curve) of 92.5% for classifying AD vs. NC subjects, and an accuracy of 76.2% and an AUC of 77.5% for classifying MCI vs. NC subjects. Our empirical study also demonstrates that the proposed multi-model method outperforms the single-model methods and several other competing methods.
The review covers automatic segmentation of images by means of deep learning approaches in the area of medical imaging. Current developments in machine learning, particularly related to deep learning, are proving instrumental in the identification and quantification of patterns in medical images. The pivotal point of these advancements is the capability of deep learning approaches to obtain hierarchical feature representations directly from images, which eliminates the need for handcrafted features. Deep learning is rapidly becoming the state of the art for medical image processing and has resulted in performance improvements in diverse clinical applications. In this review, the basics of deep learning methods are discussed along with an overview of successful implementations involving image segmentation for different medical applications. Finally, some research issues are highlighted and the future need for further improvements is pointed out.
 The classification of brain tumors is performed by biopsy, which is not usually conducted before definitive brain surgery. The improvement of technology and machine learning can help radiologists in tumor diagnostics without invasive measures. A machine-learning algorithm that has achieved substantial results in image segmentation and classification is the convolutional neural network (CNN). We present a new CNN architecture for brain tumor classification of three tumor types. The developed network is simpler than already-existing pre-trained networks, and it was tested on T1-weighted contrast-enhanced magnetic resonance images. The performance of the network was evaluated using four approaches: combinations of two 10-fold cross-validation methods and two databases. The generalization capability of the network was tested with one of the 10-fold methods, subject-wise cross-validation, and the improvement was tested by using an augmented image database. The best result for the 10-fold cross-validation method was obtained for the record-wise cross-validation for the augmented data set, and, in that case, the accuracy was 96.56%. With good generalization capability and good execution speed, the new developed CNN architecture could be used as an effective decision-support tool for radiologists in medical diagnostics.
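The distinction between the two 10-fold schemes is easy to make concrete with scikit-learn: record-wise folds split images freely, while subject-wise folds keep all images of one patient together. A sketch with synthetic stand-in data (array contents and sizes are illustrative):

```python
import numpy as np
from sklearn.model_selection import KFold, GroupKFold

# Synthetic stand-ins: 60 images from 15 patients (4 images each).
X = np.random.rand(60, 16)              # image features
y = np.random.randint(0, 3, size=60)    # tumor class labels
subjects = np.repeat(np.arange(15), 4)  # patient ID per image

# Record-wise 10-fold: images of one patient may land in both train and test.
record_wise = list(KFold(n_splits=10, shuffle=True, random_state=0).split(X, y))

# Subject-wise 10-fold: all images of a patient stay in one fold -- the
# stricter test of generalization described in the abstract.
subject_wise = list(GroupKFold(n_splits=10).split(X, y, groups=subjects))
```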
Brain tumor classification plays an important role in clinical diagnosis and effective treatment. In this work, we propose a method for brain tumor classification using an ensemble of deep features and machine learning classifiers. In our proposed framework, we adopt the concept of transfer learning and use several pre-trained deep convolutional neural networks to extract deep features from brain magnetic resonance (MR) images. The extracted deep features are then evaluated by several machine learning classifiers. The top three deep features that perform well across several machine learning classifiers are selected and concatenated as an ensemble of deep features, which is then fed into several machine learning classifiers to predict the final output. To evaluate the different kinds of pre-trained models as deep feature extractors, the machine learning classifiers, and the effectiveness of an ensemble of deep features for brain tumor classification, we use three different brain magnetic resonance imaging (MRI) datasets that are openly accessible on the web. Experimental results demonstrate that an ensemble of deep features can help improve performance significantly, and that in most cases a support vector machine (SVM) with radial basis function (RBF) kernel outperforms the other machine learning classifiers, especially for large datasets.
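A minimal sketch of the deep-feature-plus-classifier idea, using VGG16 as one illustrative backbone from among those such studies evaluate (preprocessing, pooling, and SVM settings here are assumptions, not the paper's exact choices):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

# Pre-trained backbone as a fixed deep-feature extractor.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def deep_features(images):
    """images: float32 array of shape (n, 224, 224, 3), RGB, values in 0-255."""
    return extractor.predict(preprocess_input(images.copy()), verbose=0)

# Ensemble: concatenate features from the top-performing backbones, then
# classify (placeholder arrays shown commented).
# feats = np.concatenate([feats_vgg16, feats_densenet, feats_resnet], axis=1)
# clf = SVC(kernel="rbf").fit(feats_train, labels_train)
```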
Brain tumor localization and segmentation from magnetic resonance imaging (MRI) are hard and important tasks for several applications in the field of medical analysis. As each brain imaging modality gives unique and key details related to each part of the tumor, many recent approaches use four modalities: T1, T1c, T2, and FLAIR. Although many of them obtain promising segmentation results on the BRATS 2018 dataset, they suffer from complex structures that need more time to train and test. So, in this paper, to obtain a flexible and effective brain tumor segmentation system, we first propose a preprocessing approach that works only on a small part of the image rather than the whole image. This method decreases computing time and overcomes the overfitting problems in a cascade deep learning model. In the second step, as we deal with a smaller part of the brain image in each slice, a simple and efficient Cascade Convolutional Neural Network (C-ConvNet/C-CNN) is proposed. This C-CNN model mines both local and global features in two different routes. Also, to improve brain tumor segmentation accuracy compared with state-of-the-art models, a novel Distance-Wise Attention (DWA) mechanism is introduced. The DWA mechanism considers the effect of the center location of the tumor and the brain within the model. Comprehensive experiments on the BRATS 2018 dataset show that the proposed model obtains competitive results: mean Dice scores of 0.9203, 0.9113, and 0.8726 for whole tumor, enhancing tumor, and tumor core, respectively. Other quantitative and qualitative assessments are presented and discussed.
Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on deep-learning-based algorithms. While the models these algorithms produce can significantly outperform more traditional machine learning methods, they do rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets aren't typically available, which is often the case when working with medical images. Data augmentation aims to generate additional data on which the model is trained, and has been shown to improve performance when validated on a separate unseen dataset. This approach has become commonplace, so to help understand the types of data augmentation techniques used in state-of-the-art deep learning models, we conducted a systematic review of the literature where data augmentation was utilised on medical images (limited to CT and MRI) to train a deep learning model. Articles were categorised into basic, deformable, deep learning, or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give insight into these techniques and confidence in the validity of the models produced.
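A sketch of what the review calls "basic" augmentation, using Keras's ImageDataGenerator; the ranges below are typical illustrative values, and which transforms are anatomically plausible depends on the task:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Geometric and intensity transforms applied on the fly during training.
augmenter = ImageDataGenerator(
    rotation_range=15,       # small random rotations (degrees)
    width_shift_range=0.1,   # random horizontal translations
    height_shift_range=0.1,  # random vertical translations
    zoom_range=0.1,          # random zoom in/out
    horizontal_flip=True,    # only if flipping is plausible for the anatomy
)
# train_flow = augmenter.flow(train_images, train_labels, batch_size=32)
```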
Deep learning (DL) based semantic segmentation methods have been providing state-of-the-art performance in the last few years. More specifically, these techniques have been successfully applied to medical image classification, segmentation, and detection tasks. One deep learning technique, U-Net, has become one of the most popular for these applications. In this paper, we propose a Recurrent Convolutional Neural Network (RCNN) based on U-Net as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net, named RU-Net and R2U-Net respectively. The proposed models utilize the power of U-Net, residual networks, and RCNNs. These architectures offer several advantages for segmentation tasks. First, a residual unit helps when training deep architectures. Second, feature accumulation with recurrent residual convolutional layers ensures better feature representation for segmentation tasks. Third, it allows us to design a better U-Net architecture with the same number of network parameters but better performance for medical image segmentation. The proposed models are tested on three benchmark datasets: blood vessel segmentation in retina images, skin cancer segmentation, and lung lesion segmentation. The experimental results show superior performance on segmentation tasks compared to equivalent models, including U-Net and residual U-Net (ResU-Net).
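A compact Keras sketch of a recurrent residual convolutional unit in the spirit of R2U-Net: a shared-weight 3x3 convolution is re-applied t times with the block input re-added at each step, and the whole stack is wrapped in a residual shortcut. This is an interpretation of the idea, not the authors' implementation:

```python
from tensorflow.keras import layers, models

def recurrent_conv(x, filters, t=2):
    """Shared-weight 3x3 conv applied 1 + t times, re-adding the input each step."""
    x = layers.Conv2D(filters, 1, padding="same")(x)  # match channel count
    conv = layers.Conv2D(filters, 3, padding="same", activation="relu")
    h = conv(x)
    for _ in range(t):
        h = conv(layers.add([x, h]))  # same layer object => shared weights
    return h

def rr_unit(x, filters, t=2):
    """Recurrent residual unit: two recurrent convolutions plus a shortcut."""
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    h = recurrent_conv(x, filters, t)
    h = recurrent_conv(h, filters, t)
    return layers.add([shortcut, h])

inputs = layers.Input(shape=(128, 128, 1))
outputs = rr_unit(inputs, 32)
block = models.Model(inputs, outputs)  # one encoder block of an R2U-Net-style net
```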
Brain extraction is essential in neuroimaging studies for patient privacy and optimizing computational analyses. Manual creation of 3D brain masks is labor-intensive, prompting the development of automatic computational methods. Robust quality control (QC) is hence necessary for the effective use of these methods in large-scale studies. However, previous automated QC methods have been limited in flexibility regarding algorithmic architecture and data adaptability. We introduce a novel approach inspired by a statistical outlier detection paradigm to efficiently identify potentially erroneous data. Our QC method is unsupervised, resource-efficient, and requires minimal parameter tuning. We quantitatively evaluated its performance using morphological features of brain masks generated from three automated brain extraction tools across multi-institutional pre- and post-operative brain glioblastoma MRI scans. We achieved an accuracy of 0.9 for pre- and 0.87 for post-operative scans, thus demonstrating the effectiveness of our proposed QC tool for brain extraction. Additionally, the method shows potential for other tasks where a user-defined feature space can be defined. Our novel QC approach offers significant improvements in flexibility and efficiency over previous methods. It is a valuable tool, targeting reassurance of brain masks in neuroimaging, and can be adapted for other applications requiring robust QC mechanisms.
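A minimal sketch of the statistical-outlier paradigm on mask-derived features, using the robust modified z-score based on the median absolute deviation; the feature names and threshold are illustrative, and the paper's actual detector may differ:

```python
import numpy as np

def mad_outliers(features, z_thresh=3.5):
    """Flag masks whose morphological features are statistical outliers.

    features: (n_masks, n_features) array, e.g. volume, surface area, extent.
    A mask is flagged if any feature's modified z-score exceeds the threshold.
    """
    med = np.median(features, axis=0)
    mad = np.median(np.abs(features - med), axis=0)
    z = 0.6745 * (features - med) / np.where(mad == 0, 1e-9, mad)
    return np.any(np.abs(z) > z_thresh, axis=1)  # True = send for manual review
```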
A Rajasekaran | International Journal of Scientific Research in Engineering and Management
Stroke is still one of the leading causes of death and chronic disability worldwide, so immediate and proper diagnosis is essential to prevent delays in clinical management. Manual interpretation of imaging by conventional diagnostic methods is, by and large, time-consuming and susceptible to human bias. The medical imaging process has been transformed in recent years by the confluence of AI, ML, and DL approaches. Through computer-aided processing of CT and MRI scans, these have been shown to offer enormous potential to improve the detection, classification, and prediction of strokes. This survey paper provides a state-of-the-art overview of the AI-based models reported from 2021 to 2024, covering their methodologies, datasets, performance metrics, and clinical usability. By combining these contributions, the paper identifies research gaps in the current literature, including limited model generalizability, a lack of standard datasets, and explainability requirements in clinical applications. This study aims to provide a foundation for further research into creating clinically integrated, interpretable, and reliable AI systems for stroke detection. Index Terms: Stroke Detection, Deep Learning, Machine Learning, Neuroimaging, CT Scans, MRI, Artificial Intelligence, Medical Diagnosis, Hemorrhagic Stroke, Ischemic Stroke, Healthcare AI, Feature Extraction, Image Classification.
 Deep Learning and advanced image processing can enhance the detection and prognosis of liver cancer using medical imaging, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans. Liver cancer detection is a challenging task due to factors such as poor contrast, noise in imaging techniques, limited annotated datasets, and the complex characteristics of tumors. This study proposes a hybrid technique that combines Contrast-Limited Adaptive Histogram Equalization (CLAHE), Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), and Transfer Learning (TL) to improve the precision and accuracy of liver tumor detection. A conventional technique for image enhancement, CLAHE increases the contrast of medical images, making malignant tumors more apparent. CLAHE, however, does not provide a thorough tumor characterization; instead, it focuses on enhancing image quality. CNN is used to extract features, find and learn important patterns, such as edges, textures, and shapes that are pertinent to the diagnosis of tumors. Finally, TL utilizes pre-trained models (Inception V3) for classification, enabling the effective learning of tumor features and achieving high diagnostic precision with fewer computational resources. A hybrid approach combining CNN, GAN, and TL may give an integrated and effective solution for identifying and diagnosing liver tumors. The hybrid technique performed significantly better than independent DL approaches, achieving an accuracy of 93.3%, a sensitivity of 92.2%, a specificity of 94.5%, and an F1-score of 92.8%.
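The CLAHE step maps directly onto OpenCV's implementation; the clip limit and tile size below are common defaults, not necessarily the study's values:

```python
import cv2

def enhance_contrast(gray_slice, clip_limit=2.0, tile=(8, 8)):
    """CLAHE preprocessing: boosts local contrast so lesions stand out.

    Expects an 8-bit single-channel image.
    """
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    return clahe.apply(gray_slice)
```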
The anomalous enlargement of brain cells is known as a brain tumour (BT), which can cause serious damage to blood vessels and nerves in the human body. Precise and early detection of BT is of foremost importance to eliminate severe illness. Thus, a SpinalNet Visual Geometry Group-16 (Spinal VGG-16-Net) is introduced for early BT detection. First, the Magnetic Resonance Imaging (MRI) image obtained from the data sample is denoised with a bilateral filter. Then, the BT area is segmented from the image using the entropy-based Kapur thresholding technique, where threshold values are optimally selected by Gradient Energy Valley Optimization (GEVO), designed by incorporating Energy Valley Optimization (EVO) with the Stochastic Gradient Descent (SGD) algorithm. Image augmentation is then applied, followed by feature extraction to mine the most significant features. Finally, BT is detected using the proposed Spinal VGG-16-Net, devised by combining SpinalNet and VGG-16 Net. The Spinal VGG-16-Net is compared with several existing schemes, and it attained a maximum accuracy of 92.14%, True Positive Rate (TPR) of 93.16%, True Negative Rate (TNR) of 91.35%, Negative Predictive Value (NPV) of 89.73%, and Positive Predictive Value (PPV) of 92.13%.
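For reference, the textbook single-threshold Kapur criterion picks the gray level that maximizes the summed entropies of the background and foreground histogram partitions. The paper replaces this exhaustive search with its GEVO optimizer (and possibly multiple thresholds); the sketch below shows only the standard criterion:

```python
import numpy as np

def kapur_threshold(gray_image):
    """Single-threshold Kapur entropy method over a 256-bin histogram."""
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1          # class-conditional histograms
        h = (-np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
             - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0])))
        if h > best_h:                            # keep the max-entropy split
            best_t, best_h = t, h
    return best_t
```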
Brijesh Kannaujiya, Asif Rajput | International Journal of Science and Research (IJSR)
 Accurate and efficient classification of brain tumors, including gliomas, meningiomas, and pituitary adenomas, is critical for early diagnosis and treatment planning. Magnetic resonance imaging (MRI) is a key diagnostic tool, and deep learning models have shown promise in automating tumor classification. However, challenges remain in achieving high accuracy while maintaining interpretability for clinical use. This study explores the use of transfer learning with pre-trained architectures, including VGG16, DenseNet121, and Inception-ResNet-v2, to classify brain tumors from MRI images. An ensemble-based classifier was developed using a majority voting strategy to improve robustness. To enhance clinical applicability, explainability techniques such as Grad-CAM++ and Integrated Gradients were employed, allowing visualization of model decision-making. The ensemble model outperformed individual Convolutional Neural Network (CNN) architectures, achieving an accuracy of 86.17% in distinguishing gliomas, meningiomas, pituitary adenomas, and benign cases. Interpretability techniques provided heatmaps that identified key regions influencing model predictions, aligning with radiological features and enhancing trust in the results. The proposed ensemble-based deep learning framework improves the accuracy and interpretability of brain tumor classification from MRI images. By combining multiple CNN architectures and integrating explainability methods, this approach offers a more reliable and transparent diagnostic tool to support medical professionals in clinical decision-making.
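The majority-voting step itself is a few lines of NumPy over the per-model predicted class indices (a generic sketch, not the study's code; ties fall to the lowest class index):

```python
import numpy as np

def majority_vote(*predictions):
    """Hard majority voting over per-model predicted class indices.

    Each argument is an int array of class predictions for the same images.
    """
    stacked = np.stack(predictions)  # shape: (n_models, n_images)
    counts = np.apply_along_axis(np.bincount, 0, stacked,
                                 minlength=stacked.max() + 1)
    return counts.argmax(axis=0)     # most-voted class per image

# e.g. three backbones voting over five images:
print(majority_vote(np.array([0, 1, 2, 3, 0]),
                    np.array([0, 1, 1, 3, 2]),
                    np.array([1, 1, 2, 0, 2])))  # -> [0 1 2 3 2]
```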
 The early detection of brain tumors is critical for improving clinical outcomes and patient survival. However, medical imaging datasets frequently exhibit class imbalance, posing significant challenges for traditional classification algorithms that rely on balanced data distributions. To address this issue, this study employs a One-Class Support Vector Machine (OCSVM) trained exclusively on features extracted from healthy brain MRI images, using both deep learning architectures—such as DenseNet121, VGG16, MobileNetV2, InceptionV3, and ResNet50—and classical feature extraction techniques. Experimental results demonstrate that combining Convolutional Neural Network (CNN)-based feature extraction with OCSVM significantly improves anomaly detection performance compared with simpler handcrafted approaches. DenseNet121 achieved an accuracy of 94.83%, a precision of 99.23%, and a sensitivity of 89.97%, while VGG16 reached an accuracy of 95.33%, a precision of 98.87%, and a sensitivity of 91.32%. MobileNetV2 showed a competitive trade-off between accuracy (92.83%) and computational efficiency, making it suitable for resource-constrained environments. Additionally, the pure CNN model—trained directly for classification without OCSVM—outperformed hybrid methods with an accuracy of 97.83%, highlighting the effectiveness of deep convolutional networks in directly learning discriminative features from MRI data. This approach enables reliable detection of brain tumor anomalies without requiring labeled pathological data, offering a promising solution for clinical contexts where abnormal samples are scarce. Future research will focus on reducing inference time, expanding and diversifying training datasets, and incorporating explainability tools to support clinical integration and trust in AI-based diagnostics.
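The anomaly-detection core of this setup is compact in scikit-learn: fit a One-Class SVM on healthy-only CNN embeddings and flag outliers at test time. The feature arrays below are synthetic stand-ins and the nu value is illustrative:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Synthetic stand-ins for CNN embeddings (e.g., pooled DenseNet121 features).
rng = np.random.default_rng(0)
healthy_feats = rng.normal(0.0, 1.0, size=(200, 128))  # healthy training scans
test_feats = rng.normal(0.5, 1.5, size=(20, 128))      # mixed test scans

# Train only on healthy features; tumors are then detected as anomalies.
ocsvm = make_pipeline(StandardScaler(),
                      OneClassSVM(kernel="rbf", nu=0.05, gamma="scale"))
ocsvm.fit(healthy_feats)
is_anomaly = ocsvm.predict(test_feats) == -1  # -1 marks outliers (possible tumor)
```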
M. Pavan Kalyan, Aman Sharma | International Journal for Research in Applied Science and Engineering Technology
Brain tumour segmentation plays a vital role in the early diagnosis and treatment planning of neurological disorders. While modern deep learning approaches have shown remarkable accuracy, classical image processing techniques remain significant due to their simplicity, lower computational requirements, and interpretability. This study presents a comparative analysis of four classical segmentation methods, namely thresholding, edge detection, region growing, and watershed, for segmenting brain tumours from MRI images. Each technique is evaluated against ground truth masks using metrics such as Dice coefficient, Jaccard index, accuracy, sensitivity, and precision. Experimental results show that although no single method outperforms the others on all metrics, the region growing and watershed methods offer better segmentation quality for complex tumour boundaries. This study emphasises the continued relevance of classical methods as lightweight and effective solutions in constrained environments.
Introduction: The pressing need for accurate diagnostic tools in the medical field, particularly for diseases such as brain tumors and Alzheimer's, poses significant challenges to timely and effective treatment. Methods: This study presents a novel approach to MRI image classification by integrating transfer learning with Explainable AI (XAI) techniques. The proposed method utilizes a hybrid CNN-VGG16 model, which leverages pre-trained features from the VGG16 architecture to enhance classification performance across three distinct MRI datasets: brain tumor classification, Alzheimer's disease detection, and a third dataset of brain tumors. A comprehensive preprocessing pipeline, including image normalization, resizing, and data augmentation, ensures optimal input quality and variability. Results: The model achieves accuracy rates of 94% on the brain tumor dataset, 81% on the augmented Alzheimer dataset, and 93% on the third dataset, underscoring its capability to differentiate various neurological conditions. Furthermore, the integration of SHapley Additive exPlanations (SHAP) provides a transparent view of the model's decision-making process, allowing clinicians to understand which regions of the MRI scans contribute to the classification outcomes. Discussion: This research demonstrates the potential of combining advanced deep learning techniques with explainability to improve diagnostic accuracy and trust in AI applications within healthcare.
PolicySegNet is a novel hybrid deep learning architecture developed for joint brain tumor segmentation and classification using MRI scans. It combines a pretrained SegFormer-B4 encoder (with a MiT backbone, originally trained on the ADE20K dataset) as a fixed feature extractor with a UNet-inspired decoder for segmentation and a lightweight classification head for tumor type identification. Unlike typical fine-tuning approaches, the SegFormer encoder remains frozen, enabling efficient training on limited domain-specific data. PolicySegNet uniquely integrates a policy-based reinforcement learning algorithm, specifically Proximal Policy Optimization (PPO), to jointly optimize the decoder and classifier based on a reward signal that balances segmentation accuracy with classification performance. The segmentation task involves four distinct binary masks, each representing a tumor class. Experimental results on a multi-class brain tumor MRI dataset demonstrate strong performance: on the training set, the model achieves a segmentation accuracy of 0.9961 and classification accuracy of 0.9133; on the validation set, a segmentation accuracy of 0.9936 and classification accuracy of 0.9175; and on the test set, a segmentation accuracy of 0.9924 and classification accuracy of 0.8803. During training, the model attains a reward of 0.7295. These results showcase the potential of combining transformer-based vision features with reinforcement learning strategies for improved medical image analysis, while requiring fewer computational resources due to the fixed encoder and lightweight architectural design.
Brain tumor (BT) arises due to uncontrollable and fast development of cells. Unless addressed at an early stage, it may cause death. Magnetic resonance imaging (MRI) gathers the soft-tissue information used for BT classification. BT classification is a complicated task because different parts of the same tumor can exhibit diverse characteristics (e.g. texture, density), making accurate classification difficult. To address these issues, this survey provides an analysis of existing BT classification approaches, examining 50 research papers focused on the approaches utilized for BT classification. Initially, the survey demonstrates the MRI image-based BT classification process. A detailed discussion of the five key technique families involved in classification is then provided: Transfer Learning (TL)-based methods, Deep Learning (DL)-based models, Machine Learning (ML)-based techniques, algorithmic approaches, and hybrid methods. Additionally, it describes the BT classification method using MRI images and the research gaps. Finally, an evaluation is provided on the basis of performance, research methodologies, publication year, and the achievements of the research methods. The analysis shows that most of the papers employed DL methods for BT classification. Likewise, the most frequently utilized dataset in the reviewed papers is the BT dataset, and accuracy is the most widely used evaluation measure.
In recent years, advancements in artificial intelligence and deep learning techniques have transformed various domains, including healthcare, by offering new ways to support clinical practices. These technologies are increasingly used in the diagnosis and treatment of diseases such as tumors, cancer, and neurological disorders. Among these, brain tumors are one of the most aggressive and damaging types of cancer. Because of the significant impact on a patient's prognosis, early detection of brain tumors is crucial to improve treatment outcomes and to monitor treatment efficacy. Magnetic resonance imaging (MRI), a non-invasive imaging technique, plays a central role in diagnosing and evaluating brain tumors, providing detailed images critical for assessing tumor location, size, and progression. Deep learning models have achieved impressive results in medical image analysis, particularly in the classification of brain tumors using MRI. These models rely on large datasets of labeled medical images to identify complex patterns, textures, and features that distinguish different tumor types, such as gliomas or meningiomas. By training on these large datasets, deep learning models can learn subtle and high-level features that may be overlooked by human experts, thus improving diagnostic accuracy. This study aims to provide a comprehensive review of the current state of brain tumor classification using deep learning on MRI images. It explores the challenges researchers face, including data scarcity, model interpretability, and generalization issues across diverse datasets. Furthermore, it highlights gaps in the existing literature and offers insights into future trends, such as hybrid models and multimodal imaging, that could enhance the accuracy and reliability of brain tumor diagnosis. The study aims to contribute valuable knowledge to medical image analysis by analyzing these aspects.
Glioblastoma multiforme (GBM) is the most aggressive and common primary brain tumor. Magnetic resonance imaging (MRI) provides detailed visualization of tumor morphology, edema, and necrosis. However, manually segmenting GBM from MRI scans is time-consuming, subjective, and prone to inter-observer variability. Therefore, automated and reliable segmentation methods are crucial for improving diagnostic accuracy. This study employs an image semantic segmentation model to segment brain tumors in MRI scans of GBM patients. The MRI sequences include T1-weighted imaging (T1WI) and fluid-attenuated inversion recovery (FLAIR). To enhance the performance of the semantic segmentation model, image preprocessing techniques were applied before analyzing and comparing commonly used segmentation models. Additionally, a survival model was constructed using discrete genotype attributes of GBM patients. The results indicate that the DeepLabV3+ model achieved the highest accuracy for semantic segmentation of T1WI sequences, at 77.9%, while the U-Net model achieved 80.1% accuracy on FLAIR sequences. Furthermore, in constructing the survival model using the discrete attribute dataset, the dataset was divided into three subsets based on different missing value handling strategies. This study found that replacing missing values with 1 resulted in the highest accuracy, with the Bernoulli Bayesian model and the multinomial Bayesian model achieving an accuracy of 94.74%. This study integrates image preprocessing techniques and semantic segmentation models to improve the accuracy and efficiency of brain tumor segmentation while also developing a highly accurate survival model. The findings aim to assist physicians in saving time and facilitating preliminary diagnosis and analysis.
The early detection of Alzheimer’s disease (AD) is essential for improving patient outcomes, enabling timely intervention, and slowing disease progression. However, the complexity of neuroimaging data presents significant obstacles to accurate classification. This study introduces a computationally efficient AI framework designed to enhance AD staging using structural MRI. The proposed method integrates discrete wavelet transform (DWT) for multi-scale feature extraction, a novel reduced kernel partial least squares (Red-KPLS) algorithm for feature reduction, and ResNet-50 for classification. The proposed technique, referred to as Red-KPLS-CNN, refines MRI features into discriminative biomarkers while minimizing redundancy. As a result, the framework achieves 96.9% accuracy and an F1-score of 97.8% in the multiclass classification of AD cases using the Kaggle dataset. The dataset was strategically partitioned into 60% training, 20% validation, and 20% testing sets, preserving class balance throughout all splits. The integration of Red-KPLS enhances feature selection, reducing dimensionality without compromising diagnostic sensitivity. Compared to conventional models, our approach improves classification robustness and generalization, reinforcing its potential for scalable and interpretable AD diagnostics. These findings emphasize the importance of hybrid wavelet-kernel-deep learning architectures, offering a promising direction for advancing computer-aided diagnosis (CAD) in clinical applications.
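A minimal sketch of the wavelet-plus-reduction front end follows. Red-KPLS is the paper's novel algorithm and its exact form is not given here, so a standard partial least squares reduction stands in for it; the wavelet, level, and component count are assumptions.

```python
# Sketch of the DWT feature-extraction stage with a standard PLS reduction
# standing in for the paper's Red-KPLS (whose exact formulation is not given here).
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

def dwt_features(img, wavelet="db2", level=2):
    """Multi-scale feature vector: flattened approximation + detail-band energies."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    approx = coeffs[0].ravel()
    detail_energy = [np.sum(np.square(d)) for lvl in coeffs[1:] for d in lvl]
    return np.concatenate([approx, detail_energy])

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))          # synthetic stand-in for MRI slices
labels = rng.integers(0, 4, size=40)       # four AD stages (assumed)

X = np.stack([dwt_features(im) for im in images])
Y = np.eye(4)[labels]                      # one-hot targets for PLS

pls = PLSRegression(n_components=8).fit(X, Y)
X_reduced = pls.transform(X)               # reduced features fed to the classifier
print(X_reduced.shape)
```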
Introduction: Alzheimer's disease (AD) is a leading cause of death, making early detection critical to improve survival rates. Conventional manual techniques struggle with early diagnosis due to the brain's complex structure, necessitating dependable deep learning (DL) methods. This research proposes a novel RESIGN model, a combination of ResNet, Inception V3, and SegNet (Res-InceptionSeg), for detecting AD from MRI images. Methods: The input MRI images were pre-processed using a Non-Local Means (NLM) filter to reduce noise artifacts. A ResNet-LSTM model was used for feature extraction, targeting White Matter (WM), Grey Matter (GM), and Cerebrospinal Fluid (CSF). The extracted features were concatenated and classified into Normal, MCI, and AD categories using an Inception V3-based classifier. Additionally, SegNet was employed for abnormal brain region segmentation. Results: The RESIGN model achieved an accuracy of 99.46%, specificity of 98.68%, precision of 95.63%, recall of 97.10%, and an F1 score of 95.42%. It outperformed ResNet, AlexNet, DenseNet, and LSTM by 7.87%, 5.65%, 3.92%, and 1.53%, respectively, and further improved accuracy by 25.69%, 5.29%, 2.03%, and 1.71% over ResNet18, CLSTM, VGG19, and CNN, respectively. Discussion: The integration of spatial-temporal feature extraction, hybrid classification, and deep segmentation makes RESIGN highly reliable in detecting AD. A 5-fold cross-validation confirmed its robustness, and its performance exceeded that of existing models on the ADNI dataset. However, there are potential limitations related to dataset bias and limited generalizability due to uniform imaging conditions. Conclusion: The proposed RESIGN model demonstrates significant improvement in early AD detection through robust feature extraction and classification, offering a reliable tool for clinical diagnosis.
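The NLM denoising step in the pre-processing stage is available off the shelf in scikit-image. A short sketch under assumed noise parameters (the paper's exact settings are not stated):

```python
# Sketch of the Non-Local Means preprocessing step using scikit-image;
# the noise level and patch parameters are illustrative, not the paper's settings.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(0)
mri = rng.random((128, 128))                       # synthetic stand-in for an MRI slice
noisy = mri + 0.08 * rng.standard_normal(mri.shape)

sigma = float(np.mean(estimate_sigma(noisy)))      # estimate the noise standard deviation
denoised = denoise_nl_means(noisy, h=1.15 * sigma, patch_size=5,
                            patch_distance=6, fast_mode=True)
```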
This research explores adapting vision transformers (ViTs) to classify neurodegenerative diseases while ensuring their decision-making process is interpretable. We developed a model to classify 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET) brain scans into three categories: cognitively normal, mild cognitive impairment, and Alzheimer's disease (AD). The dataset utilized in this research contains 580 samples of 18F-FDG PET scans obtained from the AD neuroimaging initiative. The proposed model obtained an F1 score of 81% (macro-average of all classes) on the test dataset, a significant performance improvement compared to the literature. Furthermore, we combined the model's attention maps with the Automated Anatomical Atlas 3, which represents a digital brain map, to identify the areas most influential on the model's predictions and to conduct a region-importance study as a step toward explainability. We demonstrated that ViTs can achieve competitive performance compared to convolutional neural networks while enabling the development of explainable models without extra computations due to the attention mechanism.
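The region-importance step amounts to aggregating patch-level attention by anatomical label. The sketch below uses synthetic attention weights and a synthetic patch-to-region mapping; in the paper the labels come from the Automated Anatomical Atlas 3.

```python
# Sketch: aggregate a ViT's patch-level attention into anatomical regions.
# Attention values and atlas labels here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_patches = 196                                    # e.g. a 14 x 14 patch grid
cls_attention = rng.random(n_patches)              # CLS-token attention per patch
cls_attention /= cls_attention.sum()

atlas_label = rng.integers(1, 11, size=n_patches)  # region ID per patch (assumed mapping)

# Mean attention per region ranks regions by influence on the prediction.
region_importance = {r: cls_attention[atlas_label == r].mean()
                     for r in np.unique(atlas_label)}
for region, score in sorted(region_importance.items(), key=lambda kv: -kv[1]):
    print(f"region {region}: {score:.4f}")
```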
Background: Magnetic Resonance Imaging (MRI) provides rich tumor information through different imaging modalities (T1, T1ce, T2, and FLAIR). Each modality offers distinct contrast and tissue characteristics, which support more comprehensive identification and analysis of tumor lesions. In clinical practice, however, only a single modality is often available due to factors such as imaging equipment constraints, and the performance of existing methods is significantly hindered when handling such incomplete modality data. Methods: A Teacher-Assistant-Student Collaborative and Competitive Net (TASCCNet) is proposed, building on traditional knowledge distillation techniques. First, a Multihead Mixture of Experts (MHMoE) module is developed with multiple experts and multiple gated networks to enhance information from fused modalities. Second, a competitive function is formulated to promote collaboration and competition between the student network and the teacher network. Additionally, an assistant module inspired by human visual mechanisms is introduced to provide supplementary structural knowledge, which enriches the information available to the student and facilitates a dynamic teacher-assistant collaboration. Results: The proposed model (TASCCNet) is evaluated on the BraTS 2018 and BraTS 2021 datasets and demonstrates robust performance even when only a single modality is available. Conclusions: TASCCNet successfully addresses the challenge of incomplete modality data in brain tumor segmentation by leveraging collaborative knowledge distillation and competitive learning mechanisms.
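For readers unfamiliar with the distillation backbone this work builds on, the sketch below shows a standard softened-logit knowledge-distillation loss. It does not reproduce TASCCNet's MHMoE module, assistant branch, or competitive term; temperature and weighting are illustrative.

```python
# Sketch of the knowledge-distillation objective that TASCCNet builds on:
# a conventional soft-target KD loss combined with the hard-label task loss.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of distillation loss (soft targets) and cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)   # rescale gradient by T^2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

student_logits = torch.randn(8, 3, requires_grad=True)  # e.g. 3 tumor sub-regions
teacher_logits = torch.randn(8, 3)                      # teacher sees full modalities
labels = torch.randint(0, 3, (8,))
print(kd_loss(student_logits, teacher_logits, labels))
```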
Accurate detection of cortical infarcts is critical for timely treatment and improved patient outcomes. Current brain imaging methods often require invasive procedures that primarily assess blood vessel and structural white matter damage. There is a need for non-invasive approaches, such as functional MRI (fMRI), that better reflect neuronal viability. This study utilized automated machine learning (auto-ML) techniques to identify novel fMRI biomarkers specifically related to chronic cortical infarcts. We analyzed resting-state fMRI data from the multi-center ADNI dataset, which included 20 chronic infarct patients and 30 cognitively normal (CN) controls. Surface-based registration methods were applied to minimize the partial-volume effects typically associated with lower-resolution fMRI data. We evaluated the performance of 7 previously known fMRI biomarkers alongside 107 new auto-generated fMRI biomarkers across 33 different classification models. Our analysis identified 6 new fMRI biomarkers that substantially improved infarct detection performance compared to previously established metrics. The best-performing combination of biomarkers and classifiers achieved a cross-validated ROC score of 0.791, closely matching the accuracy of diffusion-weighted imaging methods used in acute stroke detection. Our proposed auto-ML fMRI infarct-detection technique demonstrated robustness across diverse imaging sites and scanner types, highlighting the potential of automated feature extraction to significantly enhance non-invasive infarct detection.
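The core evaluation loop (score candidate biomarker sets with cross-validated ROC-AUC across many classifiers) is straightforward to sketch with scikit-learn. The features below are synthetic and only two classifiers are shown, not the study's 33 models.

```python
# Sketch of the biomarker screening loop: cross-validated ROC-AUC over
# candidate fMRI features and several classifiers. Features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))   # 50 subjects x 10 candidate fMRI biomarkers
y = rng.integers(0, 2, size=50)     # infarct patient vs. cognitively normal

for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(n_estimators=100)):
    auc = cross_val_score(clf, X, y, scoring="roc_auc", cv=5).mean()
    print(type(clf).__name__, round(auc, 3))
```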
Brain tumor classification is a critical aspect of medical diagnostics, demanding accurate and efficient methodologies for improved patient care. With advancements in medical imaging, the need for sophisticated techniques to analyze and classify brain tumors has become increasingly apparent. Traditional methods often fall short in capturing the complexity of tumor characteristics, necessitating the integration of state-of-the-art technologies to enhance diagnostic precision. The proposed methodology integrates four key steps. First, image enhancement is accomplished using an Improved Wiener Filter to refine the input data, enhancing the quality of features for subsequent analysis. Next, the Shifted Window (SWIN) Transformer is employed for segmentation, effectively delineating affected regions and providing a foundation for accurate classification. The third step leverages a Hierarchical VGGNet19 for the classification of brain tumors; this deep learning architecture is trained to discern intricate patterns within medical images, ensuring precise categorization. Finally, hyperparameter tuning is executed using a novel Hybrid Whale Remora Optimization approach. This optimization technique fine-tunes the parameters of the model, enhancing its robustness and accuracy. The experimental results showcase the effectiveness of the proposed model in achieving state-of-the-art performance in brain tumor classification tasks.
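The first of the four steps can be illustrated with SciPy's standard Wiener filter; the paper's "Improved Wiener Filter" variant is not specified here, so this is a baseline sketch with an assumed window size.

```python
# Sketch of the image-enhancement stage using SciPy's standard Wiener filter
# (a stand-in for the paper's Improved Wiener Filter, which is not detailed here).
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
mri = rng.random((128, 128))                        # synthetic stand-in for an MRI slice
noisy = mri + 0.05 * rng.standard_normal(mri.shape)

# Local-statistics denoising applied before segmentation and classification.
enhanced = wiener(noisy, mysize=5)
```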
 Early and accurate diagnosis of neurodegenerative disorders such as Alzheimer’s disease (AD) and Parkinson’s disease (PD) is critical for effective clinical intervention. However, conventional diagnostic approaches, relying on single-modality imaging or clinical assessments, often lack sensitivity and specificity in early-stage detection. In this study, we propose a novel hybrid deep learning framework that integrates multimodal neuroimaging data—including MRI, PET, and fMRI—using a combination of convolutional neural networks (CNNs), transformer-based encoders, and attention-driven fusion strategies. The model is designed to capture both local anatomical patterns and global inter-modality dependencies for robust classification. We evaluate our model on the publicly available ADNI and PPMI datasets, focusing on classifying cognitive states (e.g., cognitively normal, mild cognitive impairment, and AD). The proposed framework achieves superior performance with an accuracy of 88.5%, AUC of 0.915, and F1-score of 0.88, outperforming several state-of-the-art baselines. Ablation studies confirm the effectiveness of the transformer and attention components, while modality contribution analysis reveals significant diagnostic gains from multimodal integration. Additionally, interpretability is enhanced via Grad-CAM and attention heatmaps, which highlight clinically relevant brain regions such as the hippocampus and temporal lobes. These results demonstrate the promise of multimodal, interpretable AI in advancing early neurological diagnostics. Future work will focus on prospective clinical validation, longitudinal modeling, and deployment in real-time settings. Keywords: Multimodal neuroimaging, Deep learning, Alzheimer’s disease, Parkinson’s disease, Transformer networks
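The attention-driven fusion strategy described above can be sketched as a small module that scores each modality embedding and takes a weighted sum. Dimensions, the two-layer scorer, and the three-class head are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of attention-driven fusion over per-modality embeddings (MRI, PET, fMRI).
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim=256, n_classes=3):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))
        self.head = nn.Linear(dim, n_classes)

    def forward(self, modality_embeddings):  # shape: (batch, n_modalities, dim)
        w = torch.softmax(self.score(modality_embeddings), dim=1)  # modality weights
        fused = (w * modality_embeddings).sum(dim=1)               # weighted sum
        return self.head(fused), w.squeeze(-1)

embeddings = torch.randn(4, 3, 256)      # batch of 4 subjects, three modalities
logits, weights = AttentionFusion()(embeddings)
print(logits.shape, weights.shape)       # torch.Size([4, 3]) torch.Size([4, 3])
```

The returned weights double as the modality-contribution signal that such studies inspect when analyzing diagnostic gains from multimodal integration.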
A T Aslan | İnönü Üniversitesi Sağlık Hizmetleri Meslek Yüksek Okulu Dergisi
Brain tumors can cause serious neurological damage and death by putting pressure on critical brain regions that manage vital functions. Given the complex structures in the brain, human error in the evaluation of radiological images can create difficulties in the detection of these tumors. Convolutional neural networks (CNNs) are a type of deep learning (DL) model widely used for analyzing visual data. Their advantage in detecting brain tumors is that they can automatically learn features from images and minimize human error by increasing classification accuracy. In this study, a unique CNN-based model is proposed for brain tumor diagnosis using magnetic resonance imaging (MRI) images. A high classification score was obtained using a dataset consisting of 3096 MRI images divided into four categories: glioma, meningioma, normal brain, and pituitary tumor. The model achieved an overall 93% accuracy rate in tumor detection. It performed particularly well in detecting pituitary tumors, with 96% precision and a 95% F1 score. This study demonstrates that DL has significant potential in medical image analysis. The novelty of our approach lies in designing a lightweight CNN architecture from scratch that achieves high accuracy without relying on transfer learning, while requiring significantly fewer computational resources than traditional deep architectures.
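A minimal sketch of a lightweight from-scratch CNN in this spirit follows; the layer sizes are our assumption, not the paper's published architecture.

```python
# Sketch of a small from-scratch CNN for four-class brain-MRI classification
# (glioma, meningioma, normal, pituitary); layer widths are illustrative.
import torch
import torch.nn as nn

class LightweightCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # global pooling keeps the head tiny
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = LightweightCNN()
print(model(torch.randn(2, 1, 224, 224)).shape)  # torch.Size([2, 4])
```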
Melanoma, the most lethal type of skin cancer, remains a formidable obstacle in the quest for early detection. Despite significant strides in the field, numerous enduring challenges limit the effectiveness of current models and techniques. A critical weakness of current melanoma detection models is their limited sensitivity, particularly in identifying early-stage melanomas. The visual features of skin lesion images are also complex, with non-homogeneous regions and fuzzy boundaries that limit model performance. Additionally, the reliability of melanoma detection is degraded by the noise, shadows, and artifacts present in medical images. This research addresses these challenges and introduces a novel melanoma detection model that enhances diagnostic accuracy and enables timely intervention. Specifically, it develops an Egret Search Golden (ESG) optimization-tuned, distributed pooling-based BiLSTM model (ESG-distributed pooling-based BiLSTM). The approach begins by gathering data and then employs a series of advanced techniques. An adaptive optimized Otsu thresholding process segments melanoma lesions precisely from the background, and feature extraction encompassing texture patterns, deep features, and hybrid structural features supports effective detection. The features extracted using the Gray Level Co-occurrence Matrix are then fed into a distributed pooling-based fused deep BiLSTM model, fine-tuned using the ESG approach; this optimization method provides a high convergence rate, effective handling of high-dimensional problems, robustness, and reduced time complexity. For feature extraction, encoder and decoder subnetworks are connected in a sequential pathway to capture semantic features and enable efficient melanoma detection at an early stage. On the Melanoma Skin Cancer Dataset, the model yielded impressive results at a training percentage of 90 (TP 90), with accuracy, sensitivity, and specificity of 96.13%, 96.06%, and 96.10%, respectively. Moreover, with 6-fold cross-validation, the model delivered 96.40%, 96.27%, and 96.05% for accuracy, sensitivity, and specificity, respectively.
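The segmentation and texture-feature stages can be sketched with scikit-image's standard building blocks; the paper's adaptive optimized Otsu variant is not reproduced, and the lesion patch below is synthetic.

```python
# Sketch of the lesion-segmentation and GLCM texture-feature stages using
# standard Otsu thresholding (not the paper's adaptive optimized variant).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
lesion = (rng.random((64, 64)) * 255).astype(np.uint8)  # synthetic skin-lesion patch

mask = lesion > threshold_otsu(lesion)                  # lesion/background split

glcm = graycomatrix(lesion, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
features = {prop: float(graycoprops(glcm, prop)[0, 0])
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```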