Machine Learning: A Bayesian and Optimization Perspective (English edition)
Tentative Chinese title: 機器學習:貝葉斯與優化觀點
Sergios Theodoridis
- Publisher: China Machine Press (機械工業)
- Publication date: 2017-04-01
- List price: $1,614
- Sale price: $1,359 (16% off)
- Language: English
- Pages: 1050
- Binding: Hardcover
- ISBN: 7111565266
- ISBN-13: 9787111565260

Related categories: Machine Learning
- Original edition: Machine Learning: A Bayesian and Optimization Perspective (Hardcover)
In stock: ships immediately (stock < 3)
Customers who bought this item also bought (list price / sale price):
- Docker 錦囊妙計 (Docker Cookbook): $680 / $537
- 貝葉斯方法:概率編程與貝葉斯推斷 (Bayesian Methods for Hackers: Probabilistic Programming and Bayesian Inference): $534 / $507
- 面向機器學習的自然語言標註 (Natural Language Annotation for Machine Learning): $474
- Deep Learning|用 Python 進行深度學習的基礎理論實作: $580 / $458
- 構建實時機器學習系統: $354
- Python 網絡爬蟲從入門到實踐: $294
- AWS Lambda 實戰:開發事件驅動的無服務器應用程序 (AWS Lambda in Action: Event-Driven Serverless Applications): $403
- 深度學習入門之 PyTorch: $474
- 演算法圖鑑:26種演算法 + 7種資料結構,人工智慧、數據分析、邏輯思考的原理和應用 step by step 全圖解: $450 / $356
- Python 入門邁向高手之路王者歸來: $699 / $594
- Python 深度學習 (Python Deep Learning): $620 / $484
- 深入淺出強化學習:原理入門: $403
- 特洛伊木馬病毒程式設計:使用 Python: $520 / $406
- 統計學習方法, 2/e: $465
- 當電腦體系結構遇到深度學習:面向電腦體系結構設計師的深度學習概論: $352
- 微服務體系建設和實踐: $505
- 移動通信技術與網絡優化, 2/e: $256
- Python 科學計算, 2/e (Python for Scientists, 2/e): $454
- Unreal Engine 4 遊戲開發指南: $301
- 科班出身的 AI人必修課:OpenCV 影像處理 使用 Python: $780 / $616
- Python 網路爬蟲:大數據擷取、清洗、儲存與分析 -- 王者歸來: $650 / $514
- Unreal Engine 4 學習總動員:材質渲染: $602
- Unreal Engine 4 學習總動員:動畫設計: $594
- Unreal Engine 4 學習總動員:遊戲開發: $594
- Foundations of Data Science (Hardcover): $1,200 / $1,176
Description

This book provides an in-depth exploration of all the major machine learning approaches and new research trends, covering probabilistic and deterministic methods as well as Bayesian inference. The classical topics include mean/least-squares filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression, and boosting; the newer trends include sparsity, convex analysis and optimization, online distributed algorithms, learning in reproducing kernel Hilbert spaces (RKHS), Bayesian inference, graphical models and hidden Markov models, particle filtering, deep learning, dictionary learning, and latent-variable modeling.
The book builds a clear, well-organized body of machine learning knowledge. Its chapters are largely self-contained, with precise and detailed physical reasoning, mathematical modeling, and algorithmic implementations, supplemented by application examples and exercises. It is suitable for researchers and engineers in the field, and as a reference for students taking courses in pattern recognition, statistical/adaptive signal processing, and deep learning.
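As a small taste of the classical material the description mentions (the LMS family is the subject of Chapter 5), here is a minimal sketch of the least-mean-squares adaptive filter in NumPy. The signal model, step size, and filter order below are illustrative assumptions, not drawn from the book itself.

```python
import numpy as np

def lms(x, d, mu=0.05, order=4):
    """Least-Mean-Squares (LMS) adaptive filter: a stochastic-gradient
    update that minimizes the instantaneous squared error (d[n] - w @ u)^2."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]  # regression vector [x[n], ..., x[n-order+1]]
        e = d[n] - w @ u                  # a priori estimation error
        w = w + mu * e * u                # LMS weight update
    return w

# Illustrative system identification: recover a hypothetical 4-tap FIR system h
# from its noisy output, the standard LMS demonstration setup.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])       # "unknown" system (assumed for the demo)
x = rng.standard_normal(5000)              # white input signal
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = lms(x, d)
print(np.round(w, 2))                      # converges close to h
```

With a step size well below the stability bound, the weight vector converges to the unknown taps up to a small steady-state misadjustment, which is the convergence behavior Chapter 5 analyzes in detail.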
About the Authors

Sergios Theodoridis is a professor in the Department of Informatics at the University of Athens, Greece. His main research interests are adaptive signal processing, communications, and pattern recognition. He served as chairman of the European conference PARLE-95, general chairman of EUSIPCO-98, and a member of the editorial board of the journal Signal Processing.
Konstantinos Koutroumbas received his PhD from the University of Athens in 1995. Since 2001 he has been with the Institute for Space Applications of the National Observatory of Athens, and is an internationally recognized expert in his field.
Table of Contents
Preface
Acknowledgments
Notation
CHAPTER 1 Introduction
1.1 What Machine Learning is About
1.1.1 Classification
1.1.2 Regression
1.2 Structure and a Road Map of the Book
References
CHAPTER 2 Probability and Stochastic Processes
2.1 Introduction
2.2 Probability and Random Variables
2.2.1 Probability
2.2.2 Discrete Random Variables
2.2.3 Continuous Random Variables
2.2.4 Mean and Variance
2.2.5 Transformation of Random Variables
2.3 Examples of Distributions
2.3.1 Discrete Variables
2.3.2 Continuous Variables
2.4 Stochastic Processes
2.4.1 First and Second Order Statistics
2.4.2 Stationarity and Ergodicity
2.4.3 Power Spectral Density
2.4.4 Autoregressive Models
2.5 Information Theory
2.5.1 Discrete Random Variables
2.5.2 Continuous Random Variables
2.6 Stochastic Convergence
Problems
References
CHAPTER 3 Learning in Parametric Modeling: Basic Concepts and Directions
3.1 Introduction
3.2 Parameter Estimation: The Deterministic Point of View
3.3 Linear Regression
3.4 Classification
3.5 Biased Versus Unbiased Estimation
3.5.1 Biased or Unbiased Estimation?
3.6 The Cramér-Rao Lower Bound
3.7 Sufficient Statistic
3.8 Regularization
3.9 The Bias-Variance Dilemma
3.9.1 Mean-Square Error Estimation
3.9.2 Bias-Variance Tradeoff
3.10 Maximum Likelihood Method
3.10.1 Linear Regression: The Nonwhite Gaussian Noise Case
3.11 Bayesian Inference
3.11.1 The Maximum a Posteriori Probability Estimation Method
3.12 Curse of Dimensionality
3.13 Validation
3.14 Expected and Empirical Loss Functions
3.15 Nonparametric Modeling and Estimation
Problems
References
CHAPTER 4 Mean-Square Error Linear Estimation
4.1 Introduction
4.2 Mean-Square Error Linear Estimation: The Normal Equations
4.2.1 The Cost Function Surface
4.3 A Geometric Viewpoint: Orthogonality Condition
4.4 Extension to Complex-Valued Variables
4.4.1 Widely Linear Complex-Valued Estimation
4.4.2 Optimizing with Respect to Complex-Valued Variables: Wirtinger Calculus
4.5 Linear Filtering
4.6 MSE Linear Filtering: A Frequency Domain Point of View
4.7 Some Typical Applications
4.7.1 Interference Cancellation
4.7.2 System Identification
4.7.3 Deconvolution: Channel Equalization
4.8 Algorithmic Aspects: The Levinson and the Lattice-Ladder Algorithms
4.8.1 The Lattice-Ladder Scheme
4.9 Mean-Square Error Estimation of Linear Models
4.9.1 The Gauss-Markov Theorem
4.9.2 Constrained Linear Estimation: The Beamforming Case
4.10 Time-Varying Statistics: Kalman Filtering
Problems
References
CHAPTER 5 Stochastic Gradient Descent: The LMS Algorithm and its Family
5.1 Introduction
5.2 The Steepest Descent Method
5.3 Application to the Mean-Square Error Cost Function
5.3.1 The Complex-Valued Case
5.4 Stochastic Approximation
5.5 The Least-Mean-Squares Adaptive Algorithm
5.5.1 Convergence and Steady-State Performance of the LMS in Stationary Environments
5.5.2 Cumulative Loss Bounds
5.6 The Affine Projection Algorithm
5.6.1 The Normalized LMS
5.7 The Complex-Valued Case
5.8 Relatives of the LMS
5.9 Simulation Examples
5.10 Adaptive Decision Feedback Equalization
5.11 The Linearly Constrained LMS
5.12 Tracking Performance of the LMS in Nonstationary Environments
5.13 Distributed Learning: The Distributed LMS
5.13.1 Cooperation Strategies
5.13.2 The Diffusion LMS
5.13.3 Convergence and Steady-State Performance: Some Highlights
5.13.4 Consensus-Based Distributed Schemes
5.14 A Case Study: Target Localization
5.15 Some Concluding Remarks: Consensus Matrix
Problems
References
CHAPTER 6 The Least-Squares Family
6.1 Introduction
6.2 Least-Squares Linear Regression: A Geometric Perspective
6.3 Statistical Properties of the LS Estimator
6.4 Orthogonalizing the Column Space of X: The SVD Method
6.5 Ridge Regression
6.6 The Recursive Least-Squares Algorithm
6.7 Newton's Iterative Minimization Method
6.7.1 RLS and Newton's Method
6.8 Steady-State Performance of the RLS
6.9 Complex-Valued Data: The Widely Linear RLS
6.10 Computational Aspects of the LS Solution
6.11 The Coordinate and Cyclic Coordinate Descent Methods
6.12 Simulation Examples
6.13 Total Least-Squares
Problems
References
……
CHAPTER 7 Classification: A Tour of the Classics
CHAPTER 8 Parameter Learning: A Convex Analytic Path
CHAPTER 9 Sparsity-Aware Learning: Concepts and Theoretical Foundations
CHAPTER 10 Sparsity-Aware Learning: Algorithms and Applications
CHAPTER 11 Learning in Reproducing Kernel Hilbert Spaces
CHAPTER 12 Bayesian Learning: Inference and the EM Algorithm
CHAPTER 13 Bayesian Learning: Approximate Inference and Nonparametric Models
CHAPTER 14 Monte Carlo Methods
CHAPTER 15 Probabilistic Graphical Models: Part I
CHAPTER 16 Probabilistic Graphical Models: Part II
CHAPTER 17 Particle Filtering
CHAPTER 18 Neural Networks and Deep Learning
CHAPTER 19 Dimensionality Reduction
APPENDIX A Linear Algebra
APPENDIX B Probability Theory and Statistics
APPENDIX C Hints on Constrained Optimization
Index
