Performance Validation

Data Set Used

To validate the performance of the proposed DC-DNN model, experiments are carried out on the benchmark NEMA CT image data set [21], which allows the model's ability to discriminate between images to be assessed accurately. The details are listed in Table 9.1: the data set holds a total of 600 images with pixel dimensions of 512 x 512, distributed across 10 classes. A few sample images are shown in Fig. 9.4.
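As an illustration of how such a collection might be organized for experimentation, the following Python sketch loads the images and checks the counts stated above. The directory layout, file format, and helper names are assumptions made for illustration only; the chapter does not specify how the NEMA CT images are stored on disk.

from pathlib import Path
from PIL import Image

# Hypothetical layout: one sub-folder per class, e.g. nema_ct/class_00/img_000.png
DATA_ROOT = Path("nema_ct")     # assumed location of the exported NEMA CT images
EXPECTED_SIZE = (512, 512)      # pixel dimensions stated in Table 9.1
EXPECTED_CLASSES = 10           # number of classes in the data set
EXPECTED_TOTAL = 600            # total number of images

def load_dataset(root: Path):
    """Collect (image, class_label) pairs and verify the stated dimensions."""
    samples = []
    class_dirs = sorted(d for d in root.iterdir() if d.is_dir())
    for label, class_dir in enumerate(class_dirs):
        for img_path in sorted(class_dir.glob("*.png")):
            img = Image.open(img_path).convert("L")   # CT slices are single-channel
            assert img.size == EXPECTED_SIZE, f"unexpected size for {img_path}"
            samples.append((img, label))
    return samples, len(class_dirs)

if __name__ == "__main__":
    samples, n_classes = load_dataset(DATA_ROOT)
    # Table 9.1 states 600 images across 10 classes
    print(f"{len(samples)} images in {n_classes} classes")
    assert len(samples) == EXPECTED_TOTAL and n_classes == EXPECTED_CLASSES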

Results Analysis

Fig. 9.5 depicts the visual output of the presented technique during image retrieval. The results indicate that the proposed DC-DNN technique effectively retrieves the relevant images from the database.

A comprehensive comparison of the simulation results of the DC-DNN technique against existing approaches in terms of precision, recall, and accuracy is illustrated in Figs. 9.6-9.8. The methods used for comparison are local ternary directional patterns (LTDP), local quantized extrema patterns (LQEP), local mesh patterns (LMeP), local diagonal extrema pattern (LDEP), DLTerQEP, local directional gradient pattern (LDGP), local weighting pattern (LWP), and local block-difference pattern (LBDP). Through the precision

FIGURE 9.4 Sample test images.

FIGURE 9.5 Sample of retrieved results.

analysis of the results, it can be observed that the LTDP, LQEP, and LMeP techniques achieve the lowest precision values of 8.97%, 8.93%, and 8.91%, respectively. Somewhat better precision is provided by the LDEP, DLTerQEP, and LDGP techniques, with values of 10.27%, 9.10%, and 10.27%. In addition, the LWP and LBDP techniques exhibit acceptable precision values of

FIGURE 9.6 Precision analysis of diverse techniques.

FIGURE 9.7 Recall analysis of diverse techniques.

31.03% and 32.57%. Moreover, the OFMM technique reaches a closer precision value of 48.51%. However, the DC-DNN technique delivers outstanding results, achieving the highest precision value of 80.84%.

When the results are assessed with respect to recall, the DLTerQEP, LTDP, LQEP, and LMeP techniques lead to the poorest results, with recall values of 16.68%, 16.59%, 16.55%, and 16.66%, respectively. Somewhat better results are achieved by the LDEP and LDGP techniques, which offer near-identical recall values of 18.39% and 17.28%. At the same time, the LWP and LBDP techniques reach acceptable recall values

FIGURE 9.8 Accuracy analysis of diverse models.

of 46.51% and 47.11%. Likewise, the OFMM technique offers near-optimal results with a recall value of 74.12%. However, the DC-DNN technique delivers outstanding results, achieving the highest recall value of 81.94%.

While assessing the outcome of the presented DC-DNN technique in terms of accuracy, the LTDP, LQEP, and LMeP techniques offer the lowest accuracy values of 12.94%, 12.85%, and 12.64%, respectively. Slightly higher accuracy values are exhibited by the LDEP, DLTerQEP, and LDGP techniques, with accuracies of 15.96%, 13.62%, and 13.51%. The LWP and LBDP techniques demonstrate moderate and closer accuracy values of 38.54% and 40.61%. Concurrently, the OFMM technique offers a near-maximum accuracy value of 60.52%. However, the DC-DNN technique delivers outstanding results, achieving the highest accuracy value of 76.40%. From the results stated above, it is evident that the DC-DNN technique offers superior retrieval and classification performance across all the measured dimensions.
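The chapter reports precision, recall, and accuracy without spelling out the formulas used. The sketch below shows one common way these retrieval and classification metrics can be computed, assuming the class label of each database image serves as the relevance criterion; the function names and example values are illustrative only.

def retrieval_metrics(retrieved_labels, query_label, n_relevant_in_db):
    """Precision and recall for one query, given the labels of the retrieved images."""
    relevant_retrieved = sum(1 for lbl in retrieved_labels if lbl == query_label)
    precision = relevant_retrieved / len(retrieved_labels) if retrieved_labels else 0.0
    recall = relevant_retrieved / n_relevant_in_db if n_relevant_in_db else 0.0
    return precision, recall

def classification_accuracy(predicted, actual):
    """Fraction of queries whose predicted class matches the true class."""
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return correct / len(actual) if actual else 0.0

# Example: a top-10 retrieval returning 8 images of the query's class, out of
# 60 relevant images in the database (600 images / 10 classes), gives
# precision 0.8 and recall 8/60.
p, r = retrieval_metrics(["liver"] * 8 + ["chest"] * 2, "liver", 60)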

Conclusion

This study has introduced an effective cloud-based DC-DNN model for proficient medical image retrieval. The proposed DC-DNN model involves four main processes, namely feature extraction, similarity measurement, image retrieval, and image classification. The feature extraction process makes use of two techniques to extract the shape and texture features. Similarity measurement is then performed using the Euclidean distance, followed by DNN-based classification. For testing, a set of test images from the NEMA CT image data set is applied. The attained simulation results confirm that the DC-DNN model delivers effective retrieval and classification performance, attaining a maximum precision of 67.98%, a recall of 81.94%, and an accuracy of 76.40%. As part of future work, secure retrieval of images can be carried out through the use of cryptographic algorithms.
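For readers who want a concrete picture of the pipeline summarized above, the following minimal Python sketch mirrors its stages: a feature extractor, Euclidean-distance ranking, and a DNN-based classifier. The histogram descriptor and the dnn_model interface are placeholders introduced here for illustration; the actual shape and texture descriptors and the network architecture of the DC-DNN model are those described earlier in the chapter and are not reproduced in this sketch.

import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Stand-in for the shape and texture descriptors; a normalized intensity
    histogram is used here purely as a placeholder feature."""
    hist, _ = np.histogram(image, bins=64, range=(0, 255), density=True)
    return hist.astype(np.float32)

def retrieve(query_feat: np.ndarray, db_feats: np.ndarray, top_k: int = 10) -> np.ndarray:
    """Rank database images by Euclidean distance to the query feature vector."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(dists)[:top_k]   # indices of the top-k most similar images

def classify(query_feat: np.ndarray, dnn_model) -> int:
    """Assign a class label with a trained DNN (any model whose predict() returns
    per-class scores for a batch of feature vectors)."""
    return int(dnn_model.predict(query_feat[None, :]).argmax(axis=1)[0])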

References

[1] Ansari, S., Aslam, T., Poncela, J., Otero, P., and Ansari, A., Internet of Things-based healthcare applications, in IoT Architectures, Models, and Platforms for Smart City Applications, IGI Global, 2020, 1-2.

[2] Qadri, Y. A., Nauman, A., Zikria, Y. B., Vasilakos, A. V., and Kim, S. W., The future of healthcare Internet of Things: A survey of emerging technologies, IEEE Communications Surveys & Tutorials, 22 (2), 1121-1167, 2020.

[3] Wanjun, L., and Hongwei, Z., Image retrieval of improved color and texture features based on wavelet transform, Computer Engineering and Applications, 52 (17), 181-186, 2016.

[4] Yanping, G., Hongbing, G., and Zhiying, R., Image retrieval algorithm combining color feature and texture feature, Wireless Internet Technology, 2017.

[5] Bing, L., On technology of computer image retrieval based on multi-feature fusion, Journal of Southwest China Normal University (Natural Science), 42 (1), 54-59, 2017.

[6] AoBo, Z., Xianbin, W., and Zhang, X., Retrieval algorithm for texture image based on improved dual tree complex wavelet transform and gray gradient co-occurrence matrix, Computer Science, 44 (6), 274-277, 2017.

[7] Zhou, J., Feng, C., Liu, X., and Tang, J., A texture features based medical image retrieval system for breast cancer, in Proceedings of the International Conference on Computing Convergence Technology, 8562, 1010-1015, 2013.

[8] Anran, M., Ning-ning, R., Li-bo, H., Yong, S., Shao, Y., and Jian-feng, Q., Image retrieval based on medical digital X-ray spectrum characteristics, Chinese Journal of Medical Physics, 33 (9), 933-938, 2016.

[9] Jinge, G., The research and application of medical image retrieval technology based on texture feature, Inner Mongolia University of Science and Technology, 2012.

[10] Luo, R., Rui, L., Liu, R. G., Ye, C., and Guan, Z. Z., Total knee arthroplasty for the treatment of knee osteoarthritis caused by endemic skeletal fluorosis, Chinese Journal of Tissue Engineering Research, 16, 1555-1557, 2012.

[11] Gao, Y., Song, H., and Zhang, Z. J., Calculation method of iris structure density based on Tamura features, Computer Technology & Development, 26 (3), 36-39, 2016.

[12] Jun, M., Senlin, Z., and Zhen, F., Recognition algorithm of fabric based on Tamura texture features, Light Industry Machinery, 35 (4), 2017.

[13] Kumar, M., Content based medical image retrieval using texture and intensity for eye images, International Journal of Scientific & Engineering Research, 7 (9), 2016.

[14] Ou, X., Pan, W., Zhang, X., and Xiao, P., Skin image retrieval using Gabor wavelet texture feature, International Journal of Cosmetic Science, 38 (6), 607-614, 2016.

[15] Anwar, S. M., Arshad, F., and Majid, M., Fast wavelet based image characterization for content based medical image retrieval, in Proceedings of the 2017 International Conference on Communication, Computing and Digital Systems (C-CODE 2017), Pakistan, March 2017, 351-356.

[16] Chunyan, Z., Jingbing, L., and Shuangshuang, W., Encrypted image retrieval algorithm based on discrete wavelet transform and perceptual hash, Journal of Computer Applications, 38 (2), 539-544, 2018.

[17] Elhoseny, M., Bian, G. B., Lakshmanaprabu, S. K., Shankar, K., Singh, A. K., and Wu, W., Effective features to classify ovarian cancer data in internet of medical things, Computer Networks, 159, 147-156, 2019.

[18] Kathiresan, S., Sait, A. R. W., Gupta, D., Lakshmanaprabu, S. K., Khanna, A., and Pandey, H. M., Automated detection and classification of fundus diabetic retinopathy images using synergic deep learning model, Pattern Recognition Letters, 159, 2020.

[19] Shankar, K., Perumal, E., and Vidhyavathi, R. M., Deep neural network with moth search optimization algorithm based detection and classification of diabetic retinopathy images, SN Applied Sciences, 2 (4), 1-10, 2020.

[20] Raj, R. J. S., Shobana, S. J., Pustokhina, I. V., Pustokhin, D. A., Gupta, D., and Shankar, K., Optimal feature selection-based medical image classification using deep learning model in Internet of Medical Things, IEEE Access, 8, 58006-58017, 2020.

[21] Murala, S., and Wu, Q. M. J., MRI and CT image indexing and retrieval using local mesh peak valley edge patterns, Signal Processing: Image Communication, 29, 400-440, 2014.
