Explainable Agency in Artificial Intelligence: Research and Practice
Tulli, Silvia; Aha, David W.
Description
This book focuses on a subtopic of Explainable AI (XAI) called Explainable Agency (EA), which involves producing records of decisions made during an agent's reasoning, summarizing its behavior in human-accessible terms, and answering questions about specific choices and the reasons for them. We distinguish explainable agency from Interpretable Machine Learning (IML), another branch of XAI that focuses on providing insight (typically, for an ML expert) concerning a learned model and its decisions. In contrast, explainable agency typically involves a broader set of AI-enabled techniques, systems, and stakeholders (e.g., end users), where the explanations provided by EA agents are best evaluated in the context of human subject studies.
The chapters of this book explore the concept of endowing intelligent agents with explainable agency, which is crucial for agents to be trusted by humans in critical domains such as finance, self-driving vehicles, and military operations. This book presents the work of researchers from a variety of perspectives and describes challenges, recent research results, lessons learned from applications, and recommendations for future research directions in EA. The historical perspectives of explainable agency and the importance of interactivity in explainable systems are also discussed. Ultimately, this book aims to contribute to the successful partnership between humans and AI systems.
● Contributes to the topic of Explainable Artificial Intelligence (XAI)
● Focuses on the XAI subtopic of Explainable Agency
● Includes an introductory chapter, a survey, and five other original contributions
About the Authors
Dr. Silvia Tulli is an Assistant Professor at Sorbonne University. She held a Marie Curie ITN research fellowship and completed her Ph.D. at Instituto Superior Técnico. Her research interests lie at the intersection of explainable AI, interactive machine learning, and reinforcement learning.
Dr. David W. Aha (UC Irvine, 1990) serves as the Director of the AI Center at the Naval Research Laboratory in Washington, DC. His research interests include goal reasoning agents, deliberative autonomy, case-based reasoning, explainable AI, machine learning (ML), reproducible studies, and related topics.