Detection of Retinopathy of Prematurity Using Convolution Neural Network Deepa Dhanaskodi and Poongodi Chenniappan

Introduction

Retinopathy of Prematurity

Retinopathy of prematurity (ROP) is an eye disease observed in babies delivered prematurely; it is considered a serious condition that must be treated because it causes childhood blindness. ROP is also known as retrolental fibroplasia (RLF). Initially, oxygen therapy was believed to cause the condition, since it occurs in newborn, especially premature, babies who received oxygen therapy; the role of supplemental oxygen in ROP has since been disproven. Vascularization, the process responsible for the normal development of blood vessels in the retina, is not completed until a baby reaches full term of 40 weeks in utero. In premature babies, the retina has not fully developed. If retinal vascularization completes outside the uterus after birth, the retinal vessels may stop growing or grow abnormally, and this abnormal growth gives rise to ROP (Figures 7.1 and 7.2).

Causes of ROP

  • ROP happens when abnormal vessels develop and spread throughout the retina, the tissue that lines the back of the eye.
  • These abnormal vessels are fragile and can leak, scarring the retina and pulling it out of position. This causes retinal detachment, the fundamental cause of visual impairment and blindness in ROP.
  • The eye begins to develop at about four months of pregnancy, when the vessels of the retina start to form at the optic nerve in the back of the eye.

If this process is affected, it will cause ROP.


FIGURE 7.1 Normal eye.


FIGURE 7.2 ROP-affected eye.

  • The vessels develop steadily toward the edges of the developing retina, providing oxygen and nutrients. Insufficient blood flow will cause ROP.
  • When a child is born full term, retinal vessel development is mostly finished (the retina ordinarily finishes developing a few weeks to a month after birth). However, if a child is born prematurely, before these vessels have reached the edges of the retina, normal vessel growth may stop, and the periphery of the retina may not get enough oxygen and nutrients.

Stages of ROP

Stage I represents mildly abnormal vessel growth. The condition is often associated with transient visual impairment; if identified, it can be treated at an early stage [1].

Stage II indicates moderately abnormal vessel growth, again often transient, which eventually resolves spontaneously.

Stage III is severely abnormal vessel growth: the abnormal vessels grow toward the center of the eye instead of following the normal growth pattern. There is no treatment for children at this stage. The threatened outcome of the disease at stage III is retinal detachment, although some infants at this stage retain regular vision.

Stage IV occurs when the retina partially detaches from its original position. Scars are produced by bleeding and abnormal vessels, and they pull the retina away from the wall of the eye.

Stage V results in a fully detached retina. The infant is considered to have serious visual disability and even blindness.

Literature Survey

In 2005, Gole et al. [2] introduced the concept of a more virulent form of retinopathy observed in babies, aggressive posterior ROP, compared to conventional cases [3-5]. The new classification describes an intermediate level of plus disease (pre-plus) between normal posterior pole vessels and plus disease, and a practical tool for estimating the extent of the identified zone.

In 2009, Wilkinson et al. [6] suggested that, with small modifications, GoogLeNet can be retrained as an ROP detector.

In 2013, Fleck and Stenson investigated the role of oxygen in ROP [7]. The authors found a higher risk of severe ROP resulting from high levels of oxygen saturation. From their results, they recommended that infants who have a gestational age of less than 28 weeks should maintain more than 90% oxygen saturation.

Proposed Algorithm

Convolution Neural Network

Artificial neural networks (ANN) are used in various classification tasks involving images, audio, speech, etc. Different kinds of neural networks are used for different functions; ANN performs well when machine learning is applied to image classification. Computer vision with deep learning [8] is becoming more conventional using the convolutional neural network (CNN). Image and video recognition, classification, and image analysis tasks can be carried out efficiently using CNN. The details of the layers in the proposed work are discussed below.

Input Layer

  • The input layer takes the data into the system for classification.
  • The size of this layer depends on the features that have to be compared for classification.

Hidden Layer

  • The hidden layer takes data from the input layer. Depending on the application, the system model, and the input data size, the number of hidden layers can be increased.
  • Every hidden layer can have a different number of neurons; the number of neurons is usually larger than the number of features (Figure 7.3).
  • The output of every layer is computed by matrix operations on the output of the previous layer, the weight values of the current layer, and the bias values, followed by an activation function that makes the network nonlinear. The following layers are present in a CNN for efficient classification:
  • CONVOLUTION LAYER:
    - Primarily used to extract features from the input image. The inputs of the convolution layer are the image matrix and a filter.

FIGURE 7.3 CNN sequence for classification [9].

  • RECTIFIED LINEAR UNIT:
    - The rectified linear unit (ReLU) permits quicker and easier training. It is an activation function that changes the negative values in the image matrix to zero and keeps the positive values.
  • POOLING LAYER:
    - When the image size is very large, a pooling layer is used to reduce the number of parameters in the image matrix. The types of pooling are max pooling, min pooling, and average pooling.
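The convolution, ReLU, and pooling operations described above can be sketched in a few lines of plain Python. This is a toy illustration, not the chapter's MATLAB implementation; the 5x5 image and 2x2 filter values are hypothetical (the chapter's experiments use larger 5x5 and 6x6 convolution layers).

```python
def convolve(image, kernel):
    """Valid (no padding) 2-D convolution of two lists-of-lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def relu(matrix):
    """Change negative values to zero; keep positive values."""
    return [[max(0, v) for v in row] for row in matrix]

def max_pool(matrix, size=2):
    """Non-overlapping max pooling with a size x size window."""
    return [[max(matrix[i + a][j + b]
                 for a in range(size) for b in range(size))
             for j in range(0, len(matrix[0]) - size + 1, size)]
            for i in range(0, len(matrix) - size + 1, size)]

image = [[1, 2, 0, 1, 3],
         [0, 1, 3, 2, 1],
         [2, 0, 1, 0, 2],
         [1, 3, 0, 2, 0],
         [0, 2, 1, 1, 4]]
kernel = [[1, 0], [0, -1]]            # a toy 2x2 filter

feature_map = relu(convolve(image, kernel))
pooled = max_pool(feature_map)
print(pooled)                         # -> [[0, 3], [2, 0]]
```

Note how pooling halves each spatial dimension of the feature map while keeping the strongest responses, which is exactly the parameter reduction the pooling layer provides.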

Output Layer

The output of the final hidden layer is given to a logistic function, such as sigmoid, that converts the output for each class into a probability.

Proposed Work Flow for ROP Detection

Survey and Load Image Data

  • Eye images with and without defects are loaded as the dataset for training and testing of ROP detection. The stored images are then verified during the classification process using CNN.
  • An image datastore can hold large amounts of data, including data that does not fit in memory, and can efficiently batch the images during the training of the convolution network.
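The on-demand behavior of such a datastore can be sketched in Python with a generator that enumerates only file paths and labels, loading nothing into memory until an image is actually requested. The folder layout (one subfolder per class) and the function name are hypothetical illustrations, not the chapter's code.

```python
import os

def image_datastore(folder, extensions=(".png", ".jpg")):
    """Lazily yield (filepath, label) pairs; the label is the parent
    folder name (e.g. 'normal' or 'rop'). Only paths are enumerated,
    so the images themselves never need to fit in memory at once."""
    for root, _dirs, files in os.walk(folder):
        for name in sorted(files):
            if name.lower().endswith(extensions):
                yield os.path.join(root, name), os.path.basename(root)

# Usage sketch: iterate one image path at a time during training.
# for path, label in image_datastore("rop_dataset"):
#     img = load_and_preprocess(path)   # hypothetical loader
```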

Specify Training and Validation Sets

  • This step partitions the image data into training and validation datasets, so that every class in the training dataset includes numerous images and the validation dataset includes the remaining images.
  • Each label is divided and stored in the datastore for training and validating on the input images.
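A per-label split of the kind described above can be sketched in Python; the sample names, labels, and counts below are hypothetical.

```python
import random

def split_per_label(samples, train_per_label, seed=0):
    """Split (item, label) pairs so each label contributes
    train_per_label items to training; the rest go to validation."""
    by_label = {}
    for item, label in samples:
        by_label.setdefault(label, []).append(item)
    rng = random.Random(seed)
    train, val = [], []
    for label, items in by_label.items():
        rng.shuffle(items)
        train += [(i, label) for i in items[:train_per_label]]
        val += [(i, label) for i in items[train_per_label:]]
    return train, val

data = [(f"img{i}", "normal") for i in range(5)] + \
       [(f"img{i}", "rop") for i in range(5, 10)]
train_set, val_set = split_per_label(data, train_per_label=4)
print(len(train_set), len(val_set))   # -> 8 2
```

Splitting per label (rather than over the whole pool) keeps both classes represented in the training set even when the dataset is imbalanced.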

FIGURE 7.4 Layers in CNN.

Image Input Layer

  • Input images are of equal dimensions for segregation of normal and ROP-affected images during the training and testing process.
  • The height, width, and channel dimensions of the image size are compared with numerical values and verified during the process.
  • The channel dimension is 1 for a grayscale picture and 3 for color pictures.
  • The network shuffles the data at the start of training and can also automatically reshuffle the data at the beginning of every epoch during training (Figure 7.4).
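Per-epoch reshuffling can be sketched as follows; this is a Python illustration with hypothetical names, not the chapter's training code.

```python
import random

def shuffled_epochs(dataset, num_epochs, seed=0):
    """Yield a freshly shuffled copy of the dataset at the start of
    each epoch, mimicking automatic per-epoch reshuffling."""
    rng = random.Random(seed)
    for _ in range(num_epochs):
        order = list(dataset)
        rng.shuffle(order)
        yield order

images = ["img1", "img2", "img3", "img4"]
orders = list(shuffled_epochs(images, num_epochs=3))
print(orders)   # three orderings of the same four items
```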

Convolution Layer

  • In the convolution layer, the first argument is the filter size, which is the height and width of the filters the training function uses while scanning along the images. In this model, the value 3 indicates a 3x3 filter size.
  • The second argument is the number of filters, numFilters, which is the number of neurons that connect to the same region of the input.
  • This parameter determines the number of feature maps. The stride and learning rates for this layer can be specified using the name-value pair arguments of convolution2dLayer.
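The filter size, stride, and padding together determine the spatial size of each feature map, and numFilters determines how many such maps are produced. A small sketch of the standard output-size formula (the 28x28 input is a hypothetical example, not the chapter's image size):

```python
def conv_output_size(input_size, filter_size, stride=1, padding=0):
    """Spatial size of a convolution layer's output feature map:
    (input - filter + 2*padding) / stride + 1."""
    return (input_size - filter_size + 2 * padding) // stride + 1

# A 28x28 input with a 3x3 filter, stride 1, no padding:
print(conv_output_size(28, 3))   # -> 26
# numFilters controls the depth of the output volume; e.g.
# numFilters = 8 would give a 26 x 26 x 8 output.
```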

ReLU Layer

  • ReLU is the most common activation function.
  • ReLU discards the negative values by setting them to zero and passes the positive values through unchanged.

Max Pooling Layer

  • Convolution layers (with activation functions) are sometimes followed by a down-sampling operation that reduces the spatial size of the feature map and removes redundant spatial information.
  • Down-sampling makes it possible to increase the number of filters in deeper convolution layers without increasing the required amount of computation per layer.
  • Max pooling is one method of down-sampling, which makes use of maxPooling2dLayer.
  • The max pooling layer returns the maximum values of rectangular regions of the input, specified by its first argument, the pool size.

Fully Connected Layer

  • The convolution and down-sampling layers are followed by one or more fully connected layers.
  • A fully connected layer is one in which the neurons connect to all of the neurons in the previous layer.
  • This layer combines all of the features learned by the previous layers across the image to recognize larger patterns.
  • The last fully connected layer combines the features to classify the images. Thus, the output size parameter of the last fully connected layer is identical to the number of classes in the target data.
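A fully connected layer is a matrix-vector product plus a bias. The sketch below uses hypothetical toy values; its output size is two, matching the two classes in this task (normal vs. ROP-affected).

```python
def fully_connected(features, weights, biases):
    """Each output neuron connects to every input feature:
    out[k] = sum_j features[j] * weights[k][j] + biases[k]."""
    return [sum(f * w for f, w in zip(features, row)) + b
            for row, b in zip(weights, biases)]

features = [0.5, -1.0, 2.0]       # learned features (toy values)
weights = [[0.1, 0.2, 0.3],       # one row of weights per class
           [0.4, -0.5, 0.6]]
biases = [0.0, 0.1]
scores = fully_connected(features, weights, biases)
print(scores)   # raw class scores, roughly [0.45, 2.0]
```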

Softmax Layer

  • The output of the fully connected layer is normalized by the softmax activation function.
  • The output of the softmax layer consists of positive numbers that sum to one, which can then be used as classification probabilities by the classification layer.
  • After the last fully connected layer, a softmax layer applies the softmax function.
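The softmax normalization, and the decision the classification layer makes on top of it, can be sketched as follows (the input scores are hypothetical):

```python
import math

def softmax(scores):
    """Normalize raw class scores into probabilities that sum to 1."""
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([0.45, 2.0])
print(probs)                  # roughly [0.175, 0.825]
print(sum(probs))             # -> 1.0
predicted = probs.index(max(probs))   # the classification layer then
print(predicted)                      # picks the most probable class
```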

Classification Layer

  • The classification layer is the last layer in the neural network.
  • This layer uses the probabilities returned by the softmax activation function to assign the input to one of the mutually exclusive classes.

Specify Training Options

  • The network defined by the layers is trained using the specified training options and data.
  • Table 7.1 shows the epochs, time elapsed, mini-batch loss, mini-batch accuracy, and learning rate for the different convolution layer sizes.

TABLE 7.1

Simulation Results for the Proposed ROP Detection Using CNN (each cell gives the range of values from the first to the final iteration)

| Convolution Layer | Epochs | Time Elapsed | Mini Batch Loss | Mini Batch Accuracy (%) | Learning Rate |
|-------------------|--------|----------------|-----------------|-------------------------|---------------|
| 5x5 | 1-5  | 66.47-245.67 | 0.8289-0.1661  | 61.54-100.00 | 0.0001 |
| 5x5 | 1-10 | 31.72-326.17 | 0.8474-0.0227  | 53.85-100.00 | 0.0001 |
| 5x5 | 1-15 | 33.89-494.55 | 0.9664-0.0037  | 61.54-100.00 | 0.0001 |
| 5x5 | 1-20 | 43.14-715.00 | 0.8289-0.0036  | 61.54-100.00 | 0.0001 |
| 5x5 | 1-5  | 32.52-164.69 | 0.8365-0.3571  | 38.46-76.92  | 0.0004 |
| 5x5 | 1-10 | 36.67-360.71 | 0.5999-0.0152  | 84.62-100.00 | 0.0004 |
| 5x5 | 1-15 | 36.39-496.82 | 1.5596-0.4981  | 46.15-92.31  | 0.0004 |
| 5x5 | 1-20 | 31.95-617.92 | 0.6141-0.0001  | 76.92-100.00 | 0.0004 |
| 6x6 | 1-5  | 51.15-218.28 | 2.3335-0.1172  | 46.15-100.00 | 0.0001 |
| 6x6 | 1-10 | 40.52-471.59 | 1.5820-0.0263  | 46.15-100.00 | 0.0001 |
| 6x6 | 1-15 | 46.14-699.42 | 1.6154-0.0010  | 46.15-100.00 | 0.0001 |
| 6x6 | 1-20 | 38.55-783.85 | 1.1147-0.0000  | 46.15-100.00 | 0.0001 |
| 6x6 | 1-5  | 39.02-192.31 | 0.9035-0.3521  | 69.23-92.31  | 0.0004 |
| 6x6 | 1-10 | 38.88-381.69 | 0.9466-0.0007  | 76.92-100.00 | 0.0004 |
| 6x6 | 1-15 | 32.07-455.57 | 1.7594-0.0000  | 46.15-100.00 | 0.0004 |
| 6x6 | 1-20 | 38.22-698.64 | 3.2383-0.4366  | 53.85-76.92  | 0.0004 |

  • Training on a large dataset with only a central processing unit (CPU) is impractically slow; a GPU is preferred.
  • Cross-entropy is used as the loss function. The accuracy is the fraction of images that the network classifies correctly.
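The two quantities tracked during training can be illustrated with a short sketch; the probabilities and labels below are hypothetical.

```python
import math

def cross_entropy(predicted_probs, true_class):
    """Loss for one sample: -log of the probability assigned
    to the correct class."""
    return -math.log(predicted_probs[true_class])

def accuracy(predictions, labels):
    """Fraction of samples whose predicted class matches the label."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

loss = cross_entropy([0.1, 0.9], true_class=1)   # confident & correct
print(round(loss, 4))                            # -> 0.1054
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))      # -> 0.75
```

A confident correct prediction gives a loss near zero, while a confident wrong one is penalized heavily, which is why mini-batch loss falls toward zero in Table 7.1 as accuracy climbs toward 100%.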

Classify Evaluation Images and Calculate Accuracy

  • The labels of the evaluation data are predicted and the final accuracy of the trained network is calculated. Accuracy is the fraction of labels that the network predicts correctly.
  • The true labels in the evaluation set are matched against the predicted output.

Results and Discussion

Figure 7.5 shows the number of iterations versus the time taken for simulation for 5x5 convolutional layer with the learning rates 0.0001 and 0.0004. The results show that learning rate 0.0001 takes more time compared to 0.0004.

Figure 7.6 shows the number of iterations versus time taken for simulation for 6x6 convolutional layer with the learning rates 0.0001 and 0.0004. The results show that learning rate 0.0001 takes more time compared to 0.0004.


FIGURE 7.5 Epoch versus time elapsed for 5 x 5 convolution layer.

Figures 7.7 and 7.8 show the accuracy of the proposed method for various numbers of iterations with the 5 x 5 and 6 x 6 convolutional layers, respectively, with the learning rates 0.0001 and 0.0004. The results show that for learning rate 0.0001 the accuracy reaches 100% for any number of epochs, whereas it varies when the learning rate is 0.0004. From the results it is concluded that the learning rate 0.0001 performs with greater accuracy than 0.0004.


FIGURE 7.6 Epoch vs time elapsed for 6 x 6 convolution layer.


FIGURE 7.7 Epoch vs mini batch accuracy for 5 x 5 convolution layer.


FIGURE 7.8 Epoch vs mini batch accuracy for 6 x 6 convolution layer.

Conclusion

ROP is an optic vessel irregularity that may cause loss of sight if it is not treated properly at the appropriate stage. Detection and diagnosis of ROP can be done in many ways, but the proposed CNN method performs well compared to the conventional ANN method. The results obtained from the proposed method show that the detection accuracy is higher when the number of iterations is normalized with different convolution layer sizes. In the proposed method, the time taken for detection, the batch loss during the process, and the accuracy were measured and, based on the results, CNN performs well.

References

  • 1. Saunders, R.A., Bluestein, E.C., Sinatra, R.B., Wilson, M.E., Rust, P.F.: The predictive value of posterior pole vessels in retinopathy of prematurity. J. Pediatr. Ophthalmol. Strabismus 32(2), 82-85 (1995)
  • 2. Gole, G.A., Ells, A.L., Katz, X., Holmstrom, G., Fielder, A.R., Capone Jr., A., Flynn, J.T., Good, W.G., Holmes, J.M., McNamara, J., et al.: The international classification of retinopathy of prematurity revisited. JAMA Ophthalmol. 123(7), 991-999 (2005)
  • 3. Fleck, B.W., Stenson, B.J.: Retinopathy of prematurity and the oxygen conundrum: lessons learned from recent randomized trials. Clin. Perinatol. 40(2), 229-240 (2013)
  • 4. Wilkinson, A., Haines, L., Head, K., Fielder, A., et al.: UK retinopathy of prematurity guideline. Eye (London, England) 23(11), 2137 (2009)
  • 5. Gole, G.A., Ells, A.L., Katz, X., Holmstrom, G., Fielder, A.R., Capone Jr., A., Flynn, J.T., Good, W.G., Holmes, J.M., McNamara, J., et al.: The international classification of retinopathy of prematurity revisited. JAMA Ophthalmol. 123(7), 991-999 (2005)
  • 6. Wilkinson, A., Haines, L., Head, K., Fielder, A., et al.: UK retinopathy of prematurity guideline. Eye (London, England) 23(11), 2137 (2009)
  • 7. Fleck, B.W., Stenson, B.J.: Retinopathy of prematurity and the oxygen conundrum: lessons learned from recent randomized trials. Clin. Perinatol. 40(2), 229-240 (2013)
  • 8. https://www.mathworks.com/help/deeplearning/examples/create-simple-deep-learning-network-for-classification.html
  • 9. Saha, S.: A comprehensive guide to convolutional neural networks—the ELI5 way. Towards Data Science, December 15 (2018). https://towardsdatascience.com/
 