LLM Design Patterns: A Practical Guide to Building Robust and Efficient AI Systems (Paperback)
Tentative Chinese title: LLM 設計模式:構建穩健高效 AI 系統的實用指南 (Paperback)

Huang, Ken

  • Publisher: Packt Publishing
  • Publication date: 2025-05-30
  • List price: $1,910
  • VIP price: $1,815 (5% off)
  • Language: English
  • Pages: 534
  • Binding: Quality Paper (trade paperback)
  • ISBN: 1836207034
  • ISBN-13: 9781836207030
  • Related categories: Large language model
  • Overseas purchasing title (requires separate checkout)

Product Description

Explore reusable design patterns for LLM application development, including data-centric approaches, model development, model fine-tuning, RAG, and advanced prompting techniques

Key Features:

- Learn comprehensive LLM development, including data prep, training pipelines, and optimization

- Explore advanced prompting techniques, such as chain-of-thought, tree-of-thought, RAG, and AI agents

- Implement evaluation metrics, interpretability, and bias detection for fair, reliable models

- Print or Kindle purchase includes a free PDF eBook

Book Description:

This practical guide for AI professionals enables you to build on the power of design patterns to develop robust, scalable, and efficient large language models (LLMs). Written by a global AI expert and popular author driving standards and innovation in Generative AI, security, and strategy, this book covers the end-to-end lifecycle of LLM development and introduces reusable architectural and engineering solutions to common challenges in data handling, model training, evaluation, and deployment.

You'll learn to clean, augment, and annotate large-scale datasets, architect modular training pipelines, and optimize models using hyperparameter tuning, pruning, and quantization. The chapters help you explore regularization, checkpointing, fine-tuning, and advanced prompting methods, such as reason-and-act, as well as implement reflection, multi-step reasoning, and tool use for intelligent task completion. The book also highlights Retrieval-Augmented Generation (RAG), graph-based retrieval, interpretability, fairness, and RLHF, culminating in the creation of agentic LLM systems.

By the end of this book, you'll be equipped with the knowledge and tools to build next-generation LLMs that are adaptable, efficient, safe, and aligned with human values.

What You Will Learn:

- Implement efficient data prep techniques, including cleaning and augmentation

- Design scalable training pipelines with tuning, regularization, and checkpointing

- Optimize LLMs via pruning, quantization, and fine-tuning

- Evaluate models with metrics, cross-validation, and interpretability

- Understand fairness and detect bias in outputs

- Develop RLHF strategies to build secure, agentic AI systems

Who this book is for:

This book is essential for AI engineers, architects, data scientists, and software engineers responsible for developing and deploying AI systems powered by large language models. A basic understanding of machine learning concepts and experience with Python programming are required.

Table of Contents

- Introduction to LLM Design Patterns

- Data Cleaning for LLM Training

- Data Augmentation

- Handling Large Datasets for LLM Training

- Data Versioning

- Dataset Annotation and Labeling

- Training Pipeline

- Hyperparameter Tuning

- Regularization

- Checkpointing and Recovery

- Fine-Tuning

- Model Pruning

- Quantization

- Evaluation Metrics

- Cross-Validation

- Interpretability

- Fairness and Bias Detection

- Adversarial Robustness

- Reinforcement Learning from Human Feedback

- Chain-of-Thought Prompting

- Tree-of-Thoughts Prompting

- Reasoning and Acting

- Reasoning WithOut Observation

- Reflection Techniques

- Automatic Multi-Step Reasoning and Tool Use

- Retrieval-Augmented Generation

- Graph-Based RAG

- Advanced RAG

- Evaluating RAG Systems

- Agentic Patterns
