Mathematics for Machine Learning (Hardcover)

Deisenroth, Marc Peter; Faisal, A. Aldo; Ong, Cheng Soon

Product Description

The fundamental mathematical tools needed to understand machine learning include linear algebra, analytic geometry, matrix decompositions, vector calculus, optimization, probability and statistics. These topics are traditionally taught in disparate courses, making it hard for data science or computer science students, or professionals, to efficiently learn the mathematics. This self-contained textbook bridges the gap between mathematical and machine learning texts, introducing the mathematical concepts with a minimum of prerequisites. It uses these concepts to derive four central machine learning methods: linear regression, principal component analysis, Gaussian mixture models and support vector machines. For students and others with a mathematical background, these derivations provide a starting point for machine learning texts. For those learning the mathematics for the first time, the methods help build intuition and practical experience with applying mathematical concepts. Every chapter includes worked examples and exercises to test understanding. Programming tutorials are offered on the book's website.

  • A one-stop presentation of all the mathematical background needed for machine learning
  • Worked examples make it easier to understand the theory and build both practical experience and intuition
  • Explains central machine learning methods: linear regression, principal component analysis, Gaussian mixture models and support vector machines
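As a brief illustration of the first of the methods listed above, the following minimal Python sketch (not taken from the book or its companion tutorials; the synthetic data and variable names are purely illustrative) fits an ordinary least-squares linear regression with NumPy.

    # Minimal illustrative sketch (not from the book): ordinary least-squares
    # linear regression, fitting y ≈ X w on synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: 100 points from y = 2x - 1 plus Gaussian noise.
    x = rng.uniform(-3, 3, size=100)
    y = 2.0 * x - 1.0 + rng.normal(scale=0.5, size=100)

    # Design matrix with a bias column; solve the least-squares problem
    # min_w ||X w - y||^2 with a numerically stable solver.
    X = np.column_stack([x, np.ones_like(x)])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    print("estimated slope and intercept:", w)  # close to [2.0, -1.0]

Chapter 9 of the book derives this estimator from first principles; the sketch only shows the computational end point under the illustrative assumptions noted in the comments.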

About the Authors

Marc Peter Deisenroth, University College London
Marc Peter Deisenroth is DeepMind Chair in Artificial Intelligence at the Department of Computer Science, University College London. Prior to this, he was a faculty member in the Department of Computing, Imperial College London. His research areas include data-efficient learning, probabilistic modeling, and autonomous decision making. Deisenroth was Program Chair of the European Workshop on Reinforcement Learning (EWRL) 2012 and Workshops Chair of Robotics: Science and Systems (RSS) 2013. His research received Best Paper Awards at the International Conference on Robotics and Automation (ICRA) 2014 and the International Conference on Control, Automation and Systems (ICCAS) 2016. In 2018, he was awarded the President's Award for Outstanding Early Career Researcher at Imperial College London. He is a recipient of a Google Faculty Research Award and a Microsoft Ph.D. grant.

A. Aldo Faisal, Imperial College London
A. Aldo Faisal leads the Brain and Behaviour Lab at Imperial College London, where he is a faculty member in the Departments of Bioengineering and Computing and a Fellow of the Data Science Institute. He is the director of the £20 million UKRI Centre for Doctoral Training in AI for Healthcare. Faisal studied Computer Science and Physics at the Universität Bielefeld (Germany). He obtained a Ph.D. in Computational Neuroscience at the University of Cambridge and became a Junior Research Fellow in the Computational and Biological Learning Lab. His research lies at the interface of neuroscience and machine learning, aiming to understand and reverse engineer brains and behavior.

Cheng Soon Ong, Data61, CSIRO
Cheng Soon Ong is Principal Research Scientist in the Machine Learning Research Group at Data61, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Canberra. He is also an Adjunct Associate Professor at the Australian National University. His research focuses on enabling scientific discovery by extending statistical machine learning methods. Ong received his Ph.D. in Computer Science from the Australian National University in 2005. He was a postdoctoral researcher at the Max Planck Institute for Biological Cybernetics and the Friedrich Miescher Laboratory. From 2008 to 2011, he was a lecturer in the Department of Computer Science at Eidgenössische Technische Hochschule (ETH) Zürich, and in 2012 and 2013 he worked in the Diagnostic Genomics Team at NICTA in Melbourne.

Table of Contents

1. Introduction and motivation
2. Linear algebra
3. Analytic geometry
4. Matrix decompositions
5. Vector calculus
6. Probability and distributions
7. Optimization
8. When models meet data
9. Linear regression
10. Dimensionality reduction with principal component analysis
11. Density estimation with Gaussian mixture models
12. Classification with support vector machines