Date of Award

5-1-2022

Language

English

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

College/School/Department

Department of Computer Science

Content Description

1 online resource (xi, 67 pages) : illustrations (chiefly color)

Dissertation/Thesis Chair

Ming-Ching Chang

Committee Members

Siwei Lyu, Pradeep K. Atrey, Shaghayegh Sahebi

Keywords

Deepfakes, Disinformation, Machine learning, Neural networks (Computer science), Image analysis, Face, Eye

Subject Categories

Computer Sciences

Abstract

Human faces generated by generative adversarial networks (GANs) are highly realistic and visually challenging to discern from real ones. They have been used as profile images for fake social media accounts, with significant negative social impact. In this work, we explore a universal physiological cue of the eye, namely pupil shape consistency, to identify GAN-generated faces reliably. We show that GAN-generated faces can be exposed via their irregular pupil shapes, a phenomenon caused by the lack of physiological constraints in GAN models, and we demonstrate that such artifacts exist widely in high-quality GAN-generated faces. We design an automatic method that segments the pupils from the eyes and analyzes their shapes to distinguish GAN-generated faces from real ones. Furthermore, we propose a robust, attentive, end-to-end framework that spots GAN-generated faces by analyzing the iris regions. The framework automatically localizes and compares artifacts between the two irises to identify GAN-generated faces. Once Mask R-CNN extracts the iris regions, a Residual Attention Network (RAN) examines the components of the two irises. In addition, we use a joint loss function that combines the traditional cross-entropy loss with a relaxation of the ROC-AUC loss via the Wilcoxon-Mann-Whitney (WMW) statistic, improving deep neural network learning from imbalanced data. Comprehensive evaluations demonstrate the superiority of the proposed methods.
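The pupil-shape cue can be illustrated with a minimal sketch (not the dissertation's exact method): fit a moment-equivalent ellipse to a segmented binary pupil mask and score regularity as the IoU between the mask and the fitted ellipse. A healthy pupil is close to elliptical (IoU near 1), while the ragged pupil boundaries of GAN faces lower the score. The function name and the moment-based fitting choice are illustrative assumptions.

```python
import numpy as np

def ellipse_iou(mask):
    """Score pupil-shape regularity: fit the moment-equivalent ellipse
    to a binary mask and return IoU(mask, fitted ellipse).
    Illustrative sketch; real/elliptical pupils score near 1,
    irregular GAN-style pupil boundaries score lower."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return 0.0
    cx, cy = xs.mean(), ys.mean()
    # Second central moments define the equivalent ellipse:
    # for a uniform ellipse, semi-axis = 2 * sqrt(variance along that axis).
    cov = np.cov(np.stack([xs - cx, ys - cy]))
    evals, evecs = np.linalg.eigh(cov)
    a, b = 2.0 * np.sqrt(np.maximum(evals, 1e-9))
    # Rasterize the fitted ellipse on the same grid as the mask.
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    pts = np.stack([xx - cx, yy - cy], axis=-1) @ evecs
    ellipse = (pts[..., 0] / a) ** 2 + (pts[..., 1] / b) ** 2 <= 1.0
    inter = np.logical_and(mask, ellipse).sum()
    union = np.logical_or(mask, ellipse).sum()
    return inter / union
```

A disc-shaped mask scores high, while a square mask (poorly described by any ellipse) scores visibly lower, which is the separation the detector relies on.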
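The joint loss can be sketched as follows, assuming the standard WMW-style relaxation of the AUC: every positive/negative score pair whose margin falls below a threshold gamma contributes a polynomial hinge penalty, and this term is added to cross-entropy with a weight lambda. The hyperparameter values (gamma, p, lam) and function names here are illustrative assumptions, not the dissertation's reported settings.

```python
import numpy as np

def wmw_auc_loss(pos_scores, neg_scores, gamma=0.2, p=2):
    """Differentiable WMW relaxation of 1 - AUC: penalize each
    positive/negative score pair whose margin is below gamma."""
    diff = pos_scores[:, None] - neg_scores[None, :]  # all pos/neg pairs
    hinge = np.maximum(gamma - diff, 0.0)             # 0 once margin >= gamma
    return (hinge ** p).mean()

def joint_loss(scores, labels, lam=0.5, eps=1e-7):
    """Cross-entropy plus lam-weighted WMW AUC relaxation,
    for scores in (0, 1) and binary labels. The AUC term is
    what helps under class imbalance: it ranks pairs rather
    than averaging per-sample errors."""
    scores = np.clip(scores, eps, 1 - eps)
    ce = -(labels * np.log(scores)
           + (1 - labels) * np.log(1 - scores)).mean()
    pos, neg = scores[labels == 1], scores[labels == 0]
    auc = wmw_auc_loss(pos, neg) if len(pos) and len(neg) else 0.0
    return ce + lam * auc
```

Because the AUC term sums over positive/negative pairs, a rare positive class still contributes to every pair it appears in, rather than being swamped by the majority class as in plain cross-entropy.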
