Linguistic Nativism and the Poverty of the Stimulus
January 2011, Wiley-Blackwell
- Critically examines the Argument from the Poverty of the Stimulus: the claim that the linguistic input children receive is too impoverished to explain, through general learning mechanisms alone, the rich and rapid development of their knowledge of their first language(s)
- Focuses on the formal learnability properties of the class of natural languages, considered from the perspective of several learning-theoretic models
- The only current book-length study of arguments for the poverty of the stimulus that focuses on the computational learning-theoretic aspects of the problem
1 Introduction: Nativism in Linguistic Theory.
1.1 Historical Development.
1.2 The Rationalist–Empiricist Debate.
1.3 Nativism and Cognitive Modularity.
1.4 Connectionism, Nonmodularity, and Antinativism.
1.5 Adaptation and the Evolution of Natural Language.
1.6 Summary and Conclusions.
2 Clarifying the Argument from the Poverty of the Stimulus.
2.1 Formulating the APS.
2.2 Empiricist Learning versus Nativist Learning.
2.3 Our Version of the APS.
2.4 A Theory-Internal APS.
2.5 Evidence for the APS: Auxiliary Inversion as a Paradigm Case.
2.6 Debate on the PLD.
2.7 Learning Theory and Indispensable Data.
2.8 A Second Empirical Case: Anaphoric One.
2.9 Summary and Conclusions.
3 The Stimulus: Determining the Nature of Primary Linguistic Data.
3.1 Primary Linguistic Data.
3.2 Negative Evidence.
3.3 Semantic, Contextual, and Extralinguistic Evidence.
3.4 Prosodic Information.
3.5 Summary and Conclusions.
4 Learning in the Limit: The Gold Paradigm.
4.1 Formal Models of Language Acquisition.
4.2 Mathematical Models of Learnability.
4.3 The Gold Paradigm of Learnability.
4.4 Critique of the Positive-Evidence-Only APS in IIL.
4.5 Proper Positive Results.
4.6 Variants of the Gold Model.
4.7 Implications of Gold's Results for Linguistic Nativism.
4.8 Summary and Conclusions.
5 Probabilistic Learning Theory for Language Acquisition.
5.1 Chomsky's View of Statistical Learning.
5.2 Basic Assumptions of Statistical Learning Theory.
5.3 Learning Distributions.
5.4 Probabilistic Versions of the IIL Framework.
5.5 PAC Learning.
5.6 Consequences of PAC Learnability.
5.7 Problems with the Standard Model.
5.8 Summary and Conclusions.
6 A Formal Model of Indirect Negative Evidence.
6.2 From Low Probability to Ungrammaticality.
6.3 Modeling the DDA.
6.4 Applying the Functional Lower Bound.
6.5 Summary and Conclusions.
7 Computational Complexity and Efficient Learning.
7.1 Basic Concepts of Complexity.
7.2 Efficient Learning.
7.3 Negative Results.
7.4 Interpreting Hardness Results.
7.5 Summary and Conclusions.
8 Positive Results in Efficient Learning.
8.1 Regular Languages.
8.2 Distributional Methods.
8.3 Distributional Learning of Context-Free Languages.
8.4 Lattice-Based Formalisms.
8.5 Arguments against Distributional Learning.
8.6 Summary and Conclusions.
9 Grammar Induction through Implemented Machine Learning.
9.1 Supervised Learning.
9.3 Summary and Conclusions.
10 Parameters in Linguistic Theory and Probabilistic Language Models.
10.1 Learnability of Parametric Models of Syntax.
10.2 UG Parameters and Language Variation.
10.3 Parameters in Probabilistic Language Models.
10.4 Inferring Constraints on Hypothesis Spaces with Hierarchical Bayesian Models.
10.5 Summary and Conclusions.
11 A Brief Look at Some Biological and Psychological Evidence.
11.1 Developmental Arguments.
11.2 Genetic Factors: Inherited Language Disorders.
11.3 Experimental Learning of Artificial Languages.
11.4 Summary and Conclusions.
Shalom Lappin is Professor of Computational Linguistics at King's College London.
“Most of all, it challenges basic concepts in mainstream linguistics. It rejects key tenets of UG in the light of advances in machine learning theory, and research in the computational modelling of the language acquisition process. It exposes so-called proofs supporting the poverty of stimulus, and reveals alternatives that are formally more comprehensive than the explanations previously provided by UG theories, and empirically more likely to match natural language acquisition processes.” (Linguist List, 2011)
“This book is not only very pertinent, but also succeeds in eschewing most of the polemical excess that tends to engulf us all in this field. It’s not an easy book … but I think it gives some sense of what the enterprise is about. Alex Clark describes it, at one point, as an exercise in clearing the ground – and it succeeds in sweeping away certain comfortable assumptions that are often made in this area, concerning (for instance) the irrelevance of negative evidence, what languages are provably unlearnable, and the role of the Chomsky hierarchy.” (New Books in Language, 2012 – review and interview available at http://newbooksinlanguage.com/2012/06/08/alexander-clark-and-shalom-lappin-linguistic-nativism-and-the-poverty-of-the-stimulus-wiley-blackwell-2011/)
This highly readable but game-changing book shows to what extent the 'poverty of the stimulus' argument stems from nothing more than poverty of the imagination. A must-read for generative linguists.
Ivan Sag, Stanford University
For fifty years, the "poverty of the stimulus" has driven "nativist" linguistics. Clark and Lappin challenge the PoS and develop a formal foundation for language learning. This brilliant book should be mandatory reading for anyone who wants to understand the most fundamental question in linguistics.
Richard Sproat, Oregon Health and Science University
Clark and Lappin provide a brilliant and wide-ranging re-examination of one of the most important questions in cognitive science: how much innate structure is required to support language acquisition. A remarkable achievement.
Nick Chater, Professor of Behavioural Science, University of Warwick
This comprehensive, cutting-edge treatise on linguistic nativism skillfully untangles the human capacity to effortlessly learn languages from claims that this capacity is specific to language.
Juliette Blevins, CUNY Graduate Center