Neural Network Learning: Theoretical Foundations (Paperback)

Martin Anthony, Peter L. Bartlett


Product Description

This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large-margin classification and in real-valued prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.
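As a rough illustration of the "large margin" notion mentioned above (a toy sketch, not material from the book): for a linear classifier f(x) = w·x + b on labelled data, the margin is the smallest signed distance from any sample point to the decision boundary, and large-margin analyses relate this quantity to generalization behaviour. The function name `margin` and the toy data below are illustrative assumptions.

```python
import numpy as np

def margin(w, b, X, y):
    """Smallest signed distance y_i * (w.x_i + b) / ||w|| over the sample.

    A positive value means every point is classified correctly, with the
    margin measuring the clearance from the decision boundary.
    """
    return np.min(y * (X @ w + b)) / np.linalg.norm(w)

# Linearly separable toy sample in the plane, labels in {-1, +1}.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])

# A separating hyperplane x1 + x2 = 0.
w = np.array([1.0, 1.0])
b = 0.0

print(margin(w, b, X, y))  # clearance of the closest point
```

Scale-sensitive dimensions, as treated in the book, quantify classifier capacity at a given margin scale rather than in absolute terms.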
