AI-Native LLM Security: Threats, defenses, and best practices for building safe and trustworthy AI (Paperback)
Provisional Chinese title: AI 原生 LLM 安全:威脅、防禦及建立安全可信 AI 的最佳實踐 (Paperback)

Vaibhav Malik, Ken Huang, Ads Dawson

  • Publisher: Packt Publishing
  • Publication date: 2025-12-12
  • List price: $1,840
  • VIP price: $1,748 (5% off list)
  • Language: English
  • Pages: 416
  • Binding: Quality Paper - also called trade paper
  • ISBN: 1836203756
  • ISBN-13: 9781836203759
  • Related categories: Large language model
  • Imported title, ordered from overseas (checked out separately)

Product Description

Unlock the secrets to safeguarding AI by exploring the top risks, essential frameworks, and cutting-edge strategies, featuring the OWASP Top 10 for LLM Applications and Generative AI

DRM-free PDF version + access to Packt's next-gen Reader*

Key Features:

- Understand adversarial AI attacks to strengthen your AI security posture effectively

- Leverage insights from LLM security experts to navigate emerging threats and challenges

- Implement secure-by-design strategies and MLSecOps practices for robust AI system protection

- Purchase of the print or Kindle book includes a free PDF eBook

Book Description:

Adversarial AI attacks present a unique set of security challenges, exploiting the very foundation of how AI learns. This book explores these threats in depth, equipping cybersecurity professionals with the tools needed to secure generative AI and LLM applications. Rather than skimming the surface of emerging risks, it focuses on practical strategies, industry standards, and recent research to build a robust defense framework.

Structured around actionable insights, the chapters introduce a secure-by-design methodology, integrating threat modeling and MLSecOps practices to fortify AI systems. You'll discover how to leverage established taxonomies from OWASP, NIST, and MITRE to identify and mitigate vulnerabilities. Through real-world examples, the book highlights best practices for incorporating security controls into AI development life cycles, covering key areas such as CI/CD, MLOps, and open-access LLMs.

Built on the expertise of its co-authors, pioneers in the OWASP Top 10 for LLM Applications, this guide also addresses the ethical implications of AI security, contributing to the broader conversation on trustworthy AI. By the end of this book, you'll be able to develop, deploy, and secure AI technologies with confidence and clarity.

*Email sign-up and proof of purchase required

What You Will Learn:

- Understand unique security risks posed by LLMs

- Identify vulnerabilities and attack vectors using threat modeling

- Detect and respond to security incidents in operational LLM deployments

- Navigate the complex legal and ethical landscape of LLM security

- Develop strategies for ongoing governance and continuous improvement

- Mitigate risks across the LLM life cycle, from data curation to operations

- Design secure LLM architectures with isolation and access controls

Who this book is for:

This book is essential for cybersecurity professionals, AI practitioners, and leaders responsible for developing and securing AI systems powered by large language models. Ideal for CISOs, security architects, ML engineers, data scientists, and DevOps professionals, it provides insights on securing AI applications. Managers and executives overseeing AI initiatives will also benefit from understanding the risks and best practices outlined in this guide to ensure the integrity of their AI projects. A basic understanding of security concepts and AI fundamentals is assumed.

Table of Contents

- Fundamentals and Introduction to Large Language Models

- Securing Large Language Models

- The Dual Nature of LLM Risks: Inherent Vulnerabilities and Malicious Actors

- Mapping Trust Boundaries in LLM Architectures

- Aligning LLM Security with Organizational Objectives and Regulatory Landscapes

- Identifying and Prioritizing LLM Security Risks with OWASP

- Diving Deep: Profiles of the Top 10 LLM Security Risks

- Mitigating LLM Risks: Strategies and Techniques for Each OWASP Category

(N.B. Please use the Read Sample option to see further chapters)
