ISSN No: 2456-2165
Abstract:- Alzheimer's disease is a neurological condition that causes the death of brain cells. It is the most frequent cause of dementia, which is characterised by a loss of analytical skills and of the ability to carry out daily tasks independently. People of all ages are susceptible to Alzheimer's disease (AD). Recently, these indicators have been incorporated into the diagnosis of the signs and symptoms of AD using classification frameworks that provide diagnostic tools. This study conducts a thorough review of published studies on Alzheimer's disease, with a focus on computer-aided diagnosis techniques such as magnetic resonance imaging (MRI), computerised tomography (CT) scans, diffusion tensor imaging, and positron emission tomography (PET) scans. The article reviews some of the most recent research on Alzheimer's disease and discusses how machine learning (ML), deep learning (DL), and other brain imaging techniques can help with earlier identification of the disease. At the conclusion of this research, a CNN model that incorporates DenseNet169, EfficientNet, and VGG-16 was created to identify Alzheimer's disease from Magnetic Resonance Imaging (MRI) data. Experiments on the Kaggle Alzheimer's dataset show that the proposed models achieve excellent accuracy.

…months, and the latter is in a reasonably stable state without lesions. Although there is no cure for the disease, there are guidelines that can be followed to help halt its progression. Therefore, a correct diagnosis is crucial to improving the patient's quality of life.

The symptoms of Alzheimer's disease include: inability to recall recent events or conversations, lack of interest, depression, poor judgement, unresponsiveness, confusion, and behavioural changes in the advanced stages of the disease.
B. To detect Alzheimer's disease, various machine learning and deep learning techniques are employed. The research indicates that feature-based classification offers promising results for diagnosing the illness and improving therapeutic care. Deep learning, Bayesian classifiers, K-Nearest Neighbour, and Support Vector Machines are the classifiers most frequently used to diagnose AD.

The author of [1] used support vector machines to extract the most significant high-level features from MRI scans and to identify the different stages of Alzheimer's disease.

The authors of [3-5] combined the prediction algorithms Random Forest, k-Means, and Region Growing. The k-Means algorithm was used to cluster the MRI images; the Region Growing approach then separated the white and grey matter in the clustered images. Using the features collected, the Random Forest approach categorised each case as having neuro-anatomical constraints or not.

To overcome the constraints of the machine learning approach, the author of [6] developed a deep learning method for recognising AD that uses a stacked auto-encoder with a softmax output layer. The suggested
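The k-Means-plus-Random-Forest pipeline of [3-5] can be illustrated with a minimal sketch. This is not the authors' code: the data is synthetic, and the per-cluster statistics used as features (mean intensity and relative cluster size) are illustrative stand-ins for the grey/white-matter segmentation described above.

```python
# Illustrative sketch (not the authors' code) of the pipeline in [3-5]:
# cluster MRI intensities with k-Means as a stand-in for tissue
# segmentation, derive simple per-cluster statistics as features, and
# classify them with a Random Forest. The data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def cluster_features(image, k=3):
    """Cluster pixel intensities and return the mean intensity and
    relative size of each cluster (sorted by mean for determinism)."""
    pixels = image.reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    stats = []
    for c in range(k):
        mask = km.labels_ == c
        stats.append((float(pixels[mask].mean()), float(mask.mean())))
    # Flatten into a fixed-length feature vector: (mean, size) per cluster.
    return [v for pair in sorted(stats) for v in pair]

# Synthetic stand-ins for MRI slices: 64x64 arrays with made-up labels.
rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))
labels = rng.integers(0, 2, size=40)

X = np.array([cluster_features(img) for img in images])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

In a real application the features would come from the Region Growing segmentation rather than raw intensity statistics, but the overall structure (cluster, extract features, classify) is the same.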
Table 2: Comparison of several Machine Learning and Deep Learning methods for Alzheimer's disease detection
III. DATASET DESCRIPTION

Online datasets are freely available. For research on Alzheimer's disease, the ADNI and OASIS datasets are incredibly helpful; they provide freely usable magnetic resonance images of the brain. This study looked into using deep learning to detect Alzheimer's disease with the Kaggle Alzheimer's dataset, which consists of four classes of images. Two folders, Training and Testing, make up the dataset, with a combined total of about 5000 images, each categorised according to the degree of Alzheimer's. The dataset includes MRI images in the following four categories: mildly demented, very mildly demented, moderately demented, and non-demented.

IV. METHODOLOGY

The dataset used in this test was obtained from Kaggle. It contains about 5000 images in four classes (Moderate Demented, Mild Demented, Very Mild Demented, and Nondemented, i.e. normal). However, the data is very unbalanced; therefore, augmentation is needed for the classes where there is still little data.

In general, the pipeline used in this test can be seen in Figure 2. Initially, the dataset is loaded and re-divided into "train", "val", and "test" sets with a ratio of 70:15:15. After that, the data is augmented to balance the dataset. Then, the model is imported from TensorFlow, and layers and optimizers are added. We tested several additional layer combinations so that the layers used in the final result are already quite good. After that, the processed data goes to training. The output of this training is the training-history data and plot, as well as the final model. Finally, an evaluation is performed whose output is the evaluation matrix.

The discussion here covers what was done during testing. In testing, the first step is of course to load the data. Next, several functions from various modules are imported, mainly from TensorFlow.
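The 70:15:15 re-division described above can be sketched as follows. This is a minimal illustration, not the paper's code: it splits a list of (path, label) pairs rather than real image files, and the file names are made up.

```python
# Minimal sketch of a 70:15:15 train/val/test split over a list of
# (path, label) items, mirroring the re-division step described above.
import random

def split_dataset(items, ratios=(0.70, 0.15, 0.15), seed=42):
    """Shuffle and split items into train/val/test by the given ratios."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Stand-in for ~5000 image paths across the four classes.
classes = ["NonDemented", "VeryMildDemented", "MildDemented", "ModerateDemented"]
fake_items = [(f"img_{i}.jpg", classes[i % 4]) for i in range(5000)]
train, val, test = split_dataset(fake_items)
print(len(train), len(val), len(test))  # 3500 750 750
```

Splitting before any transformation, as the text stresses, guarantees that no augmented variant of a training image can leak into the validation or test sets.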
After these various initializations, there is a preliminary definition step. This includes defining the values of the important parameters used in the test, such as the data split ratios, the parameters for image transformation, the number of epochs and batch sizes, and the parameters for the optimizer.

The defined functions include functions for plotting the training history and storing its data, functions for model evaluation, and functions for creating and storing the evaluation matrices. Then, a random re-division of the dataset is carried out using a dedicated function. In addition, there is a process of duplicating image files until each class reaches a certain number of files; if that is not enough, the process is repeated from scratch. This duplication is only done for the "training" data, which requires class balance. These splits and duplications are done before image transformation: if the data were transformed first and then split, very similar images would appear in both the train and test data, making the testing unfair.

The next step is image loading and transformation. Initially, loading and transformation of the images is defined via ImageDataGenerator. Each image goes into the generator to be randomly transformed, so that no exactly duplicated images remain. That way, the classes can be balanced while the data remains diverse. The next part is testing the DenseNet169, EfficientNet, and VGG16 models. First, the initial model is imported from TensorFlow; the imported models already have pretrained weights for image processing. Next, several layers are added; the activation function used is relu. Which layers need to be added was determined through several tests and literature reviews.

V. RESULT AND ANALYSIS

The transfer learning method produces the most accurate results, but it requires a substantial amount of labelled data and demanding compute capabilities. Deep network models that have already been trained and validated for transfer learning include DenseNet, VGG, and EfficientNet. The main characteristics of the different models are summarized below.
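A transfer-learning head of the kind described in the methodology can be sketched as below. The details are assumed, not taken from the paper: the 128-unit relu Dense layer and the 0.3 dropout rate are illustrative choices, and `weights=None` is used here only so the sketch runs offline (in practice `weights="imagenet"` would load the pretrained ImageNet weights that the text refers to).

```python
# Hedged sketch (assumed details, not the paper's exact code) of a
# transfer-learning classifier on a DenseNet169 base: frozen convolutional
# base, an extra relu Dense layer, and a 4-way softmax head for the four
# dementia classes. weights=None keeps the sketch runnable offline;
# weights="imagenet" would be used in practice.
import tensorflow as tf

def build_model(num_classes=4, input_shape=(224, 224, 3)):
    base = tf.keras.applications.DenseNet169(
        include_top=False, weights=None, input_shape=input_shape)
    base.trainable = False  # keep the convolutional base frozen
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping `tf.keras.applications.VGG16` or `tf.keras.applications.EfficientNetB0` in for `DenseNet169` yields the other two variants compared in this study.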