CHALLENGES AND FUTURE RESEARCH DIRECTIONS

This section discusses a few open issues and promising research directions for DeepFakes detection.

i. Generalised DeepFakes Detectors: Despite recent advances, most prior mechanisms are limited in their ability to detect manipulated faces. In particular, the performance of existing methods drops significantly when they encounter DeepFakes produced by manipulations, tools, or dataset sources that were not part of the training data; in other words, their generalisation capability is low. There is a strong demand for DeepFakes detection frameworks that generalise better and attain lower error rates on manipulations, tools, and datasets absent from the training phase (a toy illustration of the cross-manipulation evaluation protocol that exposes this gap is sketched after this list). More research effort should be devoted to developing such generalised DeepFakes detection schemes.

ii. Adversary-Aware Face Recognition Systems: It has been shown that the performance of face recognition systems degrades when they are presented with manipulated face samples. Moreover, very few works in the literature have attempted to address this issue. Studies should be directed towards de-manipulation-based systems (i.e., where faces are first de-manipulated and then used for recognition/identification) and security-by-design systems (i.e., algorithms explicitly developed to take face manipulations into account).

iii. Wearable/Mobile Manipulation Detection: The majority of DeepFakes detection frameworks are designed for personal computers and are usually unusable on mobile/wearable platforms owing to their high computational cost. To make DeepFakes detection more practical, researchers must address the issue of DeepFakes on mobile/wearable devices by designing novel compact and efficient detectors.

iv. Large-Scale Databases: Very few sizeable DeepFakes datasets are publicly available. There is a need for large-scale benchmark datasets covering several types of manipulation. Moreover, developing high-quality synthetic face generation techniques that can be used to produce such datasets remains a pressing problem. These challenges have stymied advancement in the DeepFakes detection field.
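
The following is a minimal, hypothetical sketch (scikit-learn on synthetic features) of the intra- versus cross-manipulation evaluation protocol referred to in item i above. The feature generator, artefact directions, and logistic-regression detector are illustrative placeholders rather than any published method; the sketch only shows how the generalisation gap can be measured.

```python
# Hypothetical sketch: measuring the generalisation gap with an
# intra- vs. cross-manipulation evaluation.  Features, artefact
# directions, and the logistic-regression detector are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
d = 128  # dimensionality of the (synthetic) detector features


def make_dataset(artefact_direction, n=2000):
    """Toy stand-in for detector features: fake samples are offset along a
    manipulation-specific direction, mimicking the artefacts of one tool."""
    y = rng.integers(0, 2, size=n)          # 0 = real, 1 = fake
    X = rng.normal(size=(n, d))
    X[y == 1] += artefact_direction
    return X, y


def unit(v):
    return v / np.linalg.norm(v)


dir_seen = 2.0 * unit(rng.normal(size=d))    # artefact type seen during training
dir_unseen = 2.0 * unit(rng.normal(size=d))  # artefact of a new, unseen tool

X_tr, y_tr = make_dataset(dir_seen)
X_in, y_in = make_dataset(dir_seen)          # intra-manipulation test set
X_out, y_out = make_dataset(dir_unseen)      # cross-manipulation test set

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("intra-manipulation AUC:", roc_auc_score(y_in, clf.decision_function(X_in)))
print("cross-manipulation AUC:", roc_auc_score(y_out, clf.decision_function(X_out)))
```

Because the unseen artefact lies along a different direction, the detector trained on the seen manipulation scores near chance on the cross-manipulation test set, which is exactly the behaviour generalised detectors should avoid.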

CONCLUSIONS

Every day, many manipulated videos are shared on social media. Manipulated face videos, known as DeepFakes, have raised concerns because they can fool humans as well as face recognition systems. There is a need for efficient methods that can detect manipulated videos before they cause harm. Thus, in this chapter, an efficient framework was developed for discriminating fake face videos from genuine ones. The proposed approach is based on a hybrid paradigm that exploits the discriminative power of deep CNN features by combining CNN and LSTM architectures. In particular, the pre-trained ResNet-50 model and an LSTM classifier were adopted. Experimental analysis on two public DeepFake video datasets showed that deep features combined with an LSTM classifier have great potential for discriminating fake face videos from real ones, and the proposed DeepFake detection framework outperformed existing techniques. Since the deep features exploit both colour and texture information, they proved more effective than a dozen local descriptors and prior methods. In the future, we plan to extend this work to other types and techniques of face video manipulation, apply the proposed method to more challenging datasets, and employ other pre-trained deep models to further improve performance.
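
To make the architecture described above concrete, the following is a minimal PyTorch-style sketch of a ResNet-50 + LSTM hybrid of the kind adopted in this chapter. The hidden size, clip length, frozen backbone, and two-class head are illustrative assumptions and not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn
from torchvision import models


class CNNLSTMDetector(nn.Module):
    """Frame-level ResNet-50 features fed to an LSTM for real/fake video
    classification.  Hidden size, freezing, and sequence handling are
    illustrative choices, not the chapter's exact settings."""

    def __init__(self, hidden_size=256, num_classes=2):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Drop the final FC layer: the backbone then outputs 2048-d descriptors.
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.cnn.parameters():      # keep the pre-trained backbone frozen
            p.requires_grad = False
        self.lstm = nn.LSTM(input_size=2048, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, frames):               # frames: (batch, seq_len, 3, 224, 224)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1))   # (b*t, 2048, 1, 1)
        feats = feats.flatten(1).view(b, t, -1)  # (b, t, 2048)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])               # classify from the last time step


# Example forward pass on a dummy clip of 20 frames
model = CNNLSTMDetector().eval()
with torch.no_grad():
    logits = model(torch.randn(1, 20, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```

In this sketch the frozen backbone yields a 2048-dimensional descriptor per frame, and the LSTM aggregates the temporal sequence of descriptors into a single real/fake decision; training (e.g., with a cross-entropy loss over clip-level labels) is omitted for brevity.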
