Date of Award
Spring 2025
Language
English
Embargo Period
3-28-2025
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
College/School/Department
Department of Computer Science
Program
Computer Science
First Advisor
Ming Ching Chang
Committee Members
Xin Li, Siwei Lyu, Pradeep K Atrey
Subject Categories
Artificial Intelligence and Robotics | Computer Sciences
Abstract
Detecting and characterizing manipulations in digital media continues to pose a significant challenge within the field of digital forensics. Despite notable advancements, the discipline often remains in a reactive stance against emerging threats. Current state-of-the-art methods, typically evaluated within academic settings, fail to mirror the complexities of real-world disinformation scenarios. These methods generally prioritize high performance on quantitative metrics, yet they demonstrate a considerable dependency on training data and lack adaptability to novel attack signatures. With the rapid evolution of attack methodologies, the reliance on highly accurate models that do not generalize or adapt well to unseen threats proves inadequate for safeguarding today's media landscape.
We propose that the forensic community reevaluate methodological design from a threat assessment perspective. Approaches that prioritize explainability, adaptability, and independence from specific datasets are crucial as a primary line of defense. If new disinformation signatures remain undetected, defending against them becomes a formidable challenge. We advocate for a shift within the community toward developing models capable of producing actionable evidence that semantically explains the rationale behind identified manipulations. This shift is imperative as machine learning models become increasingly accessible to the public and virtually all future media will inevitably incorporate some synthetic elements. Identifying whether such alterations pose a semantic threat, through a comprehensive threat assessment framework, is essential to maintaining the integrity of the media we consume.
License
This work is licensed under the University at Albany Standard Author Agreement.
Recommended Citation
Chen, Yuwei, "Towards Human Explainable Digital Forensics: Generating Human Interpretable Evidence for Semantic Understanding in Manipulated Images and Text" (2025). Electronic Theses & Dissertations (2024 - present). 123.
https://scholarsarchive.library.albany.edu/etd/123