Explainable AI with Python
Provisional Chinese title: 使用 Python 的可解釋人工智慧

Di Cecco, Antonio; Gianfagna, Leonida

  • Publisher: Springer
  • Publication date: 2025-08-06
  • List price: $2,360
  • VIP price: $2,242 (5% off)
  • Language: English
  • Pages: 324
  • Binding: Quality Paper (also called trade paper)
  • ISBN: 303192228X
  • ISBN-13: 9783031922282
  • Related categories: Artificial Intelligence
  • Overseas purchase title (must be checked out separately)

Product Description

This comprehensive book on Explainable Artificial Intelligence has been updated and expanded to reflect the latest advancements in the field of XAI, enriching the existing literature with new research, case studies, and practical techniques.

The Second Edition expands on its predecessor by addressing advancements in AI, including large language models and multimodal systems that integrate text, visual, auditory, and sensor data. It emphasizes making complex systems interpretable without sacrificing performance and provides an enhanced focus on additive models for improved interpretability. Balancing technical rigor with accessibility, the book combines theory and practical application to equip readers with the skills needed to apply explainable AI (XAI) methods effectively in real-world contexts.

Features:

Expansion of the "Intrinsic Explainable Models" chapter to delve deeper into generalized additive models and other intrinsic techniques, enriching the chapter with new examples and use cases for a better understanding of intrinsic XAI models.
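
As a flavor of what that chapter covers, the sketch below (illustrative only, not taken from the book) builds a hand-rolled additive model with scikit-learn: each feature gets its own spline basis, so the fitted coefficients give a separate, directly inspectable shape function per feature. The synthetic data and all parameter choices are assumptions made for this example.

```python
# Minimal sketch of an intrinsically interpretable additive model (GAM-style).
# Assumes scikit-learn >= 1.0; data and hyperparameters are illustrative.
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

# One spline block per feature keeps the model additive: y ~ f1(x1) + f2(x2)
expand = ColumnTransformer(
    [(f"spline_{j}", SplineTransformer(n_knots=8, degree=3), [j])
     for j in range(X.shape[1])]
)
gam = make_pipeline(expand, Ridge(alpha=1.0)).fit(X, y)

# Per-feature contribution = that feature's basis columns times its coefficients
basis = expand.transform(X)             # transformer was fitted inside the pipeline
coefs = gam[-1].coef_
n_basis = basis.shape[1] // X.shape[1]  # spline columns produced per feature
for j in range(X.shape[1]):
    cols = slice(j * n_basis, (j + 1) * n_basis)
    contribution = basis[:, cols] @ coefs[cols]
    print(f"feature {j}: contribution range "
          f"[{contribution.min():.2f}, {contribution.max():.2f}]")
```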

Further details in "Model-Agnostic Methods for XAI" focused on how explanations differ between the training set and the test set, including a new model to illustrate these differences more clearly and effectively.
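
To illustrate that train-versus-test gap (a sketch under assumed data and model choices, not the book's own example), a model-agnostic measure such as scikit-learn's permutation importance can simply be computed on both splits and compared:

```python
# Compare a model-agnostic explanation on training vs. test data.
# An overfitted model can rank features quite differently on the two splits.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

imp_tr = permutation_importance(model, X_tr, y_tr, n_repeats=10, random_state=0)
imp_te = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for j in range(X.shape[1]):
    print(f"feature {j}: train={imp_tr.importances_mean[j]:.3f}  "
          f"test={imp_te.importances_mean[j]:.3f}")
```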

New section in "Making Science with Machine Learning and XAI" presenting a visual approach to learning the basic functions in XAI, making the concept more accessible to readers through an interactive and engaging interface.

Revision in "Adversarial Machine Learning and Explainability" that includes a code review to enhance understanding and effectiveness of the concepts discussed, ensuring that code examples are up-to-date and optimized for current best practices.

New chapter on "Generative Models and Large Language Models (LLM)" dedicated to generative models and large language models, exploring their role in XAI and how they can be used to create richer, more interactive explanations. This chapter also covers the explainability of transformer models and privacy through generative models.

New "Artificial General Intelligence and XAI" mini-chapter dedicated to exploring the implications of Artificial General Intelligence (AGI) for XAI, discussing how advancements towards AGI systems influence strategies and methodologies for XAI.

Enhancements in "Explaining Deep Learning Models" featuring new methodologies for explaining deep learning models, further enriching the chapter with cutting-edge techniques and insights for deeper understanding.
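
As a taste of the kind of technique covered there, the following minimal sketch (an illustrative assumption using PyTorch and an untrained toy CNN, not code from the book) computes a plain gradient saliency map: the gradient of the top class score with respect to the input indicates which input pixels most influence the prediction.

```python
# Gradient saliency on a toy, untrained CNN -- purely for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in for an input image
score = model(x)[0].max()          # score of the top predicted class
score.backward()                   # gradient flows back to the input
saliency = x.grad.abs().squeeze()  # (28, 28) per-pixel importance map
print(saliency.shape, saliency.max().item())
```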

About the Authors

Leonida Gianfagna (PhD, MBA) is a theoretical physicist currently working in cybersecurity and machine learning as the R&D Director at Cyber Guru. Before joining Cyber Guru, he spent 15 years at IBM, holding leadership roles in software development for IT Service Management (ITSM).

He is the author of several publications in theoretical physics and computer science and has been recognized as an IBM Master Inventor, with over 15 patent filings.

Antonio Di Cecco (PhD, MBA) is a theoretical physicist with a strong mathematical background, dedicated to delivering AI/ML education at all proficiency levels, from beginners to experts. Passionate about all areas of machine learning, he leverages his mathematical expertise to make complex concepts accessible through both in-person and remote classes. As the founder of a School of AI community inspired by the AI for Good movement, he actively promotes AI education and its positive impact. He also holds a Master's degree in Economics with a focus on innovation. His professional background includes research positions at Sony CSL / Sapienza University, and he currently works at Università D'Annunzio Chieti-Pescara.
