How Deeply Human Is Language?: Chomsky, the Brain, and the AI Fantasy
Tentative Chinese title: 語言有多深刻的人性?:喬姆斯基、大腦與人工智慧的幻想

Grodzinsky, Yosef

  • Publisher: Summit Valley Press
  • Publication date: 2026-04-21
  • List price: $1,330
  • VIP price: $1,264 (95% of list)
  • Language: English
  • Pages: 192
  • Binding: Quality Paper (trade paperback)
  • ISBN-10: 0262052008
  • ISBN-13: 9780262052009
  • Related categories: Large language model
  • Not yet released; cannot be ordered

Description

An explanation of linguistic theory and large language models--the top contenders for understanding human language--in the context of the brain, from a leading neurolinguist.

Contemporary linguistics, founded and inspired by Noam Chomsky, seeks to understand the hallmark of our humanity--language. Linguists develop powerful tools to discover how knowledge of language is acquired and how the brain puts it to use. AI experts, using vastly different methods, create remarkable neural networks--large language models (LLMs) such as ChatGPT--said to learn and use language like us.

Chomsky called LLMs "a false promise." AI leader Geoffrey Hinton has declared that "neural nets are much better at processing language than anything ever produced by the Chomsky School of Linguistics." Who is right, and how can we tell? Do we learn everything from scratch, or could some knowledge be innate? Is our brain one big network, or is it built out of modules, language being one of them?

In How Deeply Human Is Language?, Yosef Grodzinsky explains both approaches and confronts them with the reality as it emerges from the engineering, the linguistic, and the neurological record. He walks readers through vastly different methods, tools, and findings from all these fields. Aiming to find a common path forward, he describes the conflict, but also locates points of potential contact, and sketches a joint research program that may unite these communities in a common effort to understand knowledge and learning in the brain.

About the Author

Yosef Grodzinsky is currently Director of the Neurolinguistics Lab at the Edmond and Lily Safra Center for Brain Sciences, and a professor emeritus at the Hebrew University of Jerusalem. He is also a scientific associate at the Institute for Neuroscience and Medicine, Forschungszentrum Jülich, and the Cécile and Oskar Vogt Institute for Brain Research, University Hospital Düsseldorf. He is a recipient of an Alexander von Humboldt Award and held a Senior Canada Research Chair in Neurolinguistics at the Departments of Linguistics and Neurology/Neurosurgery at McGill University.
