Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers
Stephen Boyd, Neal Parikh, Eric Chu
- Publisher: Now Publishers
- Publication date: 2011-06-30
- List price: $2,810
- VIP price: 5% off, $2,670
- Language: English
- Pages: 140
- Binding: Quality Paper (also called trade paper)
- ISBN-10: 160198460X
- ISBN-13: 9781601984609
Related categories:
Data Science, Machine Learning
Overseas special-order title (must be checked out separately)
Description
Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets and the accompanying distributed solution methods are either necessary or at least highly desirable. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers argues that the alternating direction method of multipliers (ADMM) is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas-Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, the book discusses applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. It also discusses general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
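The lasso is one of the applications the description lists. As a concrete illustration, here is a minimal NumPy sketch of ADMM applied to the lasso, minimize 0.5*||Ax - b||^2 + lam*||x||_1, using the standard splitting f(x) = 0.5*||Ax - b||^2, g(z) = lam*||z||_1 with the constraint x - z = 0. This is a sketch under those assumptions, not the book's reference implementation; the function names, penalty parameter rho, and fixed iteration count (with no stopping criterion) are illustrative choices.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise soft thresholding: the proximal operator of kappa * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for the lasso in scaled dual form (u is the scaled dual variable)."""
    m, n = A.shape
    Atb = A.T @ b
    # Factor (A^T A + rho*I) once; every x-update reuses this Cholesky factor.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(n_iter):
        # x-update: solve (A^T A + rho*I) x = A^T b + rho*(z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft thresholding with threshold lam/rho
        z = soft_threshold(x + u, lam / rho)
        # dual update
        u = u + x - z
    return z

# Illustrative usage on synthetic data with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(lasso_admm(A, b, lam=0.1)[:5])
```

Because the x-update is a ridge-like least-squares solve with a fixed matrix, caching the factorization makes each iteration cheap; the z-update and dual update are elementwise, which is what makes the method amenable to the distributed settings the book focuses on.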