Generative AI Security: Defense, Threats, and Vulnerabilities
暫譯: 生成式 AI 安全:防禦、威脅與漏洞
Rana, Shaila, Chicone, Rhonda
相關主題
商品描述
Up-to-date reference enabling readers to address the full spectrum of AI security challenges while maintaining model utility
Generative AI Security: Defense, Threats, and Vulnerabilities delivers a technical framework for securing generative AI systems, building on established standards while focusing specifically on emerging threats to large language models and other generative AI systems. Moving beyond treating AI security as a dual-use technology, this book provides detailed technical analysis of three critical dimensions: implementing AI-powered security tools, defending against AI-enhanced attacks, and protecting AI systems from compromise through attacks like prompt injection, model poisoning, and data extraction.
The book provides concrete technical implementations supported by real-world case studies of actual AI system compromises, examining documented cases like the DeepSeek breaches, Llama vulnerabilities, and Google's CaMeL security defenses to demonstrate attack methodologies and defense strategies while emphasizing foundational security principles that remain relevant despite technological shifts. Each chapter progresses from theoretical foundations to practical applications.
The book also includes an implementation guide and hands-on exercises focusing on specific vulnerabilities in generative AI architectures, security control implementation, and compliance frameworks.
Generative AI Security: Defense, Threats, and Vulnerabilities discusses topics including:
- Machine learning fundamentals, including supervised, unsupervised, and reinforcement learning and feature engineering and selection
- Intelligent Security Information and Event Management (SIEM), covering AI-enhanced log analysis, predictive vulnerability assessment, and automated patch generation
- Deepfakes and synthetic media, covering image and video manipulation, voice cloning, audio deepfakes, and AI's greater impact on information integrity
- Security attacks on generative AI, including jailbreaking, adversarial, backdoor, and data poisoning attacks
- Privacy-preserving AI techniques including federated learning and homomorphic encryption
Generative AI Security: Defense, Threats, and Vulnerabilities is an essential resource for cybersecurity professionals and architects, engineers, IT professionals, and organization leaders seeking integrated strategies that address the full spectrum of Generative AI security challenges while maintaining model utility.
商品描述(中文翻譯)
最新參考資料,幫助讀者應對全方位的人工智慧安全挑戰,同時保持模型的效用
生成式人工智慧安全:防禦、威脅與脆弱性 提供了一個技術框架,用於保護生成式人工智慧系統,基於既有標準,特別關注大型語言模型及其他生成式人工智慧系統的新興威脅。這本書不僅將人工智慧安全視為雙重用途技術,還提供了三個關鍵維度的詳細技術分析:實施人工智慧驅動的安全工具、抵禦人工智慧增強的攻擊,以及通過如提示注入、模型中毒和數據提取等攻擊來保護人工智慧系統不被妥協。
本書提供了具體的技術實現,並以實際的人工智慧系統妥協案例為支持,檢視了如 DeepSeek 漏洞、Llama 脆弱性和 Google 的 CaMeL 安全防禦等已記錄的案例,以展示攻擊方法和防禦策略,同時強調即使在技術變遷中仍然相關的基礎安全原則。每一章都從理論基礎進展到實際應用。
本書還包括實施指南和針對生成式人工智慧架構中特定脆弱性的實作練習、安全控制實施和合規框架。
生成式人工智慧安全:防禦、威脅與脆弱性 討論的主題包括:
- 機器學習基礎,包括監督式、非監督式和強化學習,以及特徵工程和選擇
- 智能安全信息和事件管理 (SIEM),涵蓋人工智慧增強的日誌分析、預測性脆弱性評估和自動化補丁生成
- 深偽技術和合成媒體,涵蓋圖像和視頻操控、聲音克隆、音頻深偽技術,以及人工智慧對信息完整性的更大影響
- 針對生成式人工智慧的安全攻擊,包括越獄、對抗性攻擊、後門攻擊和數據中毒攻擊
- 隱私保護的人工智慧技術,包括聯邦學習和同態加密
生成式人工智慧安全:防禦、威脅與脆弱性 是網絡安全專業人士、架構師、工程師、IT 專業人員和尋求綜合策略以應對生成式人工智慧安全挑戰的組織領導者的重要資源,同時保持模型的效用。
作者簡介
Shaila Rana, PhD, is a professor of Cybersecurity, co-founder of the ACT Research Institute, a cybersecurity, AI, and technology think tank, and serves as the Chair of the IEEE Standards Association initiative on Zero Trust Cybersecurity for Health Technology, Tools, Services, and Devices.
Rhonda Chicone, PhD, is a retired professor and the co-founder of the ACT Research Institute. A former CSO, CTO, and Director of Software Development, she brings decades of experience in software product development and cybersecurity.
作者簡介(中文翻譯)
Shaila Rana, PhD, 是網路安全的教授,也是 ACT 研究所的共同創辦人,該研究所是一個專注於網路安全、人工智慧和技術的智庫,並擔任 IEEE 標準協會在健康技術、工具、服務和設備的零信任網路安全倡議的主席。
Rhonda Chicone, PhD, 是一位退休教授,也是 ACT 研究所的共同創辦人。作為前首席安全官 (CSO)、首席技術官 (CTO) 和軟體開發總監,她在軟體產品開發和網路安全方面擁有數十年的經驗。