Large-Scale Kernel Machines

Léon Bottou, Olivier Chapelle, Dennis DeCoste, Jason Weston

Description

Pervasive and networked computers have dramatically reduced the cost of collecting and distributing large datasets. In this context, machine learning algorithms that scale poorly could simply become irrelevant. We need learning algorithms that scale linearly with the volume of the data while maintaining enough statistical efficiency to outperform algorithms that simply process a random subset of the data. This volume offers researchers and engineers practical solutions for learning from large-scale datasets, with detailed descriptions of algorithms and experiments carried out on realistically large datasets. At the same time, it offers researchers information that can address the relative lack of theoretical grounding for many useful algorithms.

After a detailed description of state-of-the-art support vector machine technology, an introduction to the essential concepts discussed in the volume, and a comparison of primal and dual optimization techniques, the book progresses from well-understood techniques to more novel and controversial approaches. Many contributors have made their code and data available online for further experimentation. Topics covered include fast implementations of known algorithms, approximations that are amenable to theoretical guarantees, and algorithms that perform well in practice but are difficult to analyze theoretically.