Summary

Using the LCTSC datasets, the performance of 2D and 3D U-net segmentation was compared for five organs in CT volumes, with the networks trained at several spatial resolutions. For the 2D U-net, only axial-plane images were used and resampled in-plane; for the 3D U-net, all volumes were isotropically resampled to the desired resolutions. The advantages of the 2D U-net are that training is faster, more channels can be used in the network hierarchy, and more training samples can be drawn from a dataset of the same size. Its disadvantage is reduced continuity along the z-direction, whereas the 3D U-net yields anatomical structures that are smoother in 3D space. The drawbacks of the 3D U-net are that it may need more epochs to train, that the training samples are less variable because the number of training volumes is limited, and that it requires more GPU memory. A closer look at the 3D U-net results shows that, although the Dice coefficients of the low-resolution models are lower than those of the high-resolution models, the low-resolution models capture global image information better and localize the ROIs with less ambiguity. A multi-stage segmentation approach is therefore recommended, in which low-resolution models first capture the shape of the structure of interest and a high-resolution model then refines the segmentation, as sketched below.
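To make the resampling step and the recommended coarse-to-fine strategy concrete, the following is a minimal Python sketch using NumPy and SciPy. The target spacings, the margin, and the helper names (resample_isotropic, dice, roi_bounding_box) are illustrative assumptions, not the code used in the study.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume, spacing_mm, target_mm=1.0, order=1):
    """Resample a (z, y, x) CT volume to isotropic target_mm voxels.

    spacing_mm is the (z, y, x) voxel spacing from the scan header.
    order=1 gives trilinear interpolation for images; pass order=0
    (nearest neighbour) when resampling label masks.
    """
    factors = [s / target_mm for s in spacing_mm]
    return zoom(volume, factors, order=order)

def dice(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

def roi_bounding_box(coarse_mask, margin=8):
    """Padded bounding box around a coarse (low-resolution) prediction.

    In a two-stage pipeline, the high-resolution model would be run
    only on the crop defined by this box.
    """
    if not coarse_mask.any():                       # no ROI found: return whole grid
        return tuple(slice(0, n) for n in coarse_mask.shape)
    idx = np.nonzero(coarse_mask)
    lo = [max(int(i.min()) - margin, 0) for i in idx]
    hi = [min(int(i.max()) + margin + 1, n) for i, n in zip(idx, coarse_mask.shape)]
    return tuple(slice(l, h) for l, h in zip(lo, hi))

# Illustrative usage: resample one scan to coarse (4 mm) and fine (1 mm) grids.
ct = np.random.randn(120, 512, 512)                 # stand-in for a loaded CT volume
low = resample_isotropic(ct, spacing_mm=(3.0, 0.98, 0.98), target_mm=4.0)
high = resample_isotropic(ct, spacing_mm=(3.0, 0.98, 0.98), target_mm=1.0)
```

In a two-stage pipeline along these lines, the low-resolution model's prediction would define the crop via roi_bounding_box, and the high-resolution model would then refine the segmentation only within that crop.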
