Date of Award
8-1-2023
Language
English
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
College/School/Department
Department of Philosophy
Dissertation/Thesis Chair
Jason D’Cruz
Committee Members
Bradley Armour-Garb, Ron McClamrock, Kush Raj Varshney
Keywords
Artificial intelligence, Embedded ethics, Human-centered design, Human-robot interaction, Responsible AI, Trustworthy AI
Subject Categories
Ethics and Political Philosophy
Abstract
Trustworthiness and responsibility, particularly with respect to decisions and actions, are as crucial in artificial agents as they are in human agents. However, the dominant theories of pertinent concepts such as trust, trustworthiness, moral responsibility, and legal rights have primarily been proposed, developed, and applied to human agents, not artificial ones. The absence of a well-developed conceptualization and characterization of these concepts as they apply to artificial agents continues to exacerbate the challenges they pose in the realm of artificial intelligence. This void calls for novel perspectives on, and approaches to, the metrics needed to foster genuinely trustworthy and responsible artificial intelligence systems. In this dissertation, I tackle various challenges associated with developing trustworthy and responsible artificial intelligence systems and propose a model for this purpose.

In Chapter 1, I focus on the current conceptualization of trust in AI, the significance of trust and distrust in AI (as opposed to other technologies), and a typology of trust in human-machine interaction. In Chapter 2, I explore two major classes of metrics of trustworthy AI, technical and non-technical, as well as trust creators and destroyers in AI technology. In Chapter 3, I argue that current theories of trust cannot accommodate trust related to AI, and I propose an alternative counterfactual theory of trust, which accounts for the four major types of trust concerning AI. Chapter 4 delves into the different types of value-based challenges currently facing AI. In Chapter 5, I discuss two approaches, a priori and a posteriori, to integrating ethics into AI technology, argue against the ethics-in-design approach, and explain the necessity of embedding ethics to address the value-based challenges confronting AI. Finally, in Chapter 6, I present a version of embedded ethics for responsible AI systems (EE-RAIS). This model draws upon four platforms of embedded ethics: educational, cross-functional, developmental, and algorithmic. Moreover, it utilizes four imperative metrics: ethical intelligence, legal intelligence, social-emotional competency, and artificial wisdom.
Recommended Citation
Afroogh, Saleh, "Toward A Trustworthy And Responsible Artificial Intelligence" (2023). Legacy Theses & Dissertations (2009 - 2024). 3069.
https://scholarsarchive.library.albany.edu/legacy-etd/3069