Gaussian Processes for Machine Learning (Hardcover)
Carl Edward Rasmussen, Christopher K. I. Williams
Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.
The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed from both Bayesian and classical perspectives. Many connections to other well-known techniques from machine learning and statistics are discussed, including support vector machines, neural networks, splines, regularization networks, relevance vector machines, and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes.
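The core of the book's regression treatment is the Gaussian process predictive distribution, computed stably via a Cholesky factorization of the kernel matrix. The sketch below is an illustrative NumPy implementation of those standard predictive equations under a squared-exponential kernel; it is not the book's own code (the authors' accompanying toolbox is separate), and the kernel hyperparameters and noise level are arbitrary choices for the example.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_regression(X, y, X_star, noise=1e-2):
    """GP regression predictive mean and variance via Cholesky factorization."""
    K = rbf_kernel(X, X)               # covariance among training inputs
    K_s = rbf_kernel(X, X_star)        # training/test cross-covariance
    K_ss = rbf_kernel(X_star, X_star)  # covariance among test inputs
    # Factor K + sigma_n^2 I once; reuse the factor for mean and variance.
    L = np.linalg.cholesky(K + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha               # predictive mean
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)  # predictive variance
    return mean, var

# Fit noisy observations of sin(x) and predict on a grid.
rng = np.random.default_rng(0)
X = np.linspace(0, 5, 20)
y = np.sin(X) + 0.05 * rng.standard_normal(20)
X_star = np.linspace(0, 5, 50)
mean, var = gp_regression(X, y, X_star)
```

Working with the Cholesky factor rather than an explicit matrix inverse is both cheaper and numerically better behaved, which is why it is the standard formulation for GP prediction.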
Carl Edward Rasmussen is a Research Scientist at the Department of Empirical Inference for Machine Learning and Perception at the Max Planck Institute for Biological Cybernetics, Tübingen.
Christopher K. I. Williams is Professor of Machine Learning and Director of the Institute for Adaptive and Neural Computation in the School of Informatics, University of Edinburgh.
Table of Contents
Symbols and Notation
1 Introduction
2 Regression
3 Classification
4 Covariance Functions
5 Model Selection and Adaptation of Hyperparameters
6 Relationships between GPs and Other Models
7 Theoretical Perspectives
8 Approximation Methods for Large Datasets
9 Further Issues and Conclusions
Appendix A Mathematical Background
Appendix B Gaussian Markov Processes
Appendix C Datasets and Code