I would like to say a few words about what has come to be called “the biolinguistic perspective,” which began to take shape half a century ago in discussions among a few graduate students who were much influenced by developments in biology and mathematics in the early postwar years, including work in ethology that was just coming to be known in the United States. One of them was Eric Lenneberg, whose seminal 1967 study Biological Foundations of Language remains a basic document of the field. By then considerable interchange was proceeding, including interdisciplinary seminars and international conferences. The most far-reaching one, in 1974, was called, for the first time, “Biolinguistics.” Many of the leading questions discussed there remain very much alive today.
One of these questions, repeatedly brought up as “one of the basic questions to be asked from the biological point of view,” is the extent to which apparent principles of language, including some that had only recently come to light, are unique to this cognitive system. An even more basic question from the biological point of view is how much of language can be given a principled explanation, whether or not homologous elements can be found in other domains or organisms. The effort to sharpen these questions and to investigate them for language has come to be called “the minimalist program” in recent years, but the questions arise for any biological system, and are independent of theoretical persuasion, in linguistics and other domains. Answers to these questions are fundamental not only to understanding the nature and functioning of organisms and their subsystems, but also to investigating their growth and evolution.
The biolinguistic perspective views a person’s language in all of its aspects – sound, meaning, structure – as a state of some component of the mind, understanding “mind” in the sense of 18th-century scientists who recognized that after Newton’s demolition of the “mechanical philosophy,” based on the intuitive concept of a material world, no coherent mind-body problem remains, and we can only regard aspects of the world “termed mental” as the result of “such an organical structure as that of the brain,” as chemist-philosopher Joseph Priestley observed. Thought is a “little agitation of the brain,” David Hume remarked; and as Darwin commented a century later, there is no reason why “thought, being a secretion of the brain,” should be considered “more wonderful than gravity, a property of matter.” By then, the more tempered view of the goals of science that Newton introduced had become scientific common sense: Newton’s reluctant conclusion that we must be satisfied with the fact that universal gravity exists, even if we cannot explain it in terms of the self-evident “mechanical philosophy.” As many commentators have observed, this intellectual move “set forth a new view of science” in which the goal is “not to seek ultimate explanations” but to find the best theoretical account we can of the phenomena of experience and experiment (I. Bernard Cohen).
The central issues in the domain of the study of mind still arise, in much the same form. They were raised prominently at the end of the “Decade of the Brain,” which brought the last millennium to a close. The American Academy of Arts and Sciences published a volume to mark the occasion, summarizing the current state of the art. The guiding theme was formulated by neuroscientist Vernon Mountcastle in his introduction to the volume: It is the thesis that “Things mental, indeed minds, are emergent properties of brains, [though] these emergences are not regarded as irreducible but are produced by principles … we do not yet understand.” The same thesis, which closely paraphrases Priestley, has been put forth in recent years as an “astonishing hypothesis” of the new biology, a “radically new idea” in the philosophy of mind, “the bold assertion that mental phenomena are entirely natural and caused by the neurophysiological activities of the brain,” and so on. But this is a misunderstanding. The thesis follows from the collapse of any coherent concept of “body” or “material” in the 17th century, as was soon recognized. Terminology aside, the fundamental thesis remains what has been called “Locke’s suggestion”: that God might have chosen to “superadd to matter a faculty of thinking” just as he “annexed effects to motion, which we can in no way conceive motion able to produce.”
Mountcastle’s reference to reductive principles that we “do not yet understand” also raises some interesting questions, as a look at the history of science illustrates, even quite recent science. It is reminiscent of Bertrand Russell’s observation in 1929, also reflecting standard beliefs, that “chemical laws cannot at present be reduced to physical laws.” The phrase “at present,” like Mountcastle’s word “yet,” expresses the expectation that the reduction should take place in the normal course of scientific progress, perhaps soon. In the case of physics and chemistry, it never did: what happened was unification of a virtually unchanged chemistry with a radically revised physics. It’s hardly necessary to add that the state of understanding and achievement in those areas 80 years ago was far beyond anything that can be claimed for the brain and cognitive sciences today. Hence confidence in “reduction” to the little that is understood is not necessarily appropriate.
From the array of phenomena that one might loosely consider language-related, the biolinguistic approach focuses attention on a component of human biology that enters into the use and acquisition of language, however one interprets the term “language.” Call it the “faculty of language,” adapting a traditional term to a new usage. This component is more or less on a par with the systems of mammalian vision, insect navigation, and others. In many such cases, the best available explanatory theories attribute to the organism computational systems and what is called “rule-following” in informal usage – for example, when a recent text on vision presents the so-called “rigidity principle” as it was formulated 50 years ago: “if possible, and other rules permit, interpret image motions as projections of rigid motions in three dimensions.” In this case, later work provided substantial insight into the mental computations that seem to be involved when the visual system follows these rules, but even for very simple organisms, that is typically no slight task, and relating mental computations to analysis at the cellular level is commonly a distant goal. Some philosophers have objected to the notion of “rule-following” – for language, though rarely for vision. But I think that is another misunderstanding, one of many. It is of some interest to compare the qualms expressed today about theories of language, and aspects of the world “termed mental” more generally, with debates among leading scientists well into the 1920s as to whether chemistry was a mere calculating device predicting the results of experiments, or whether it merited the honorific status of an account of “physical reality” – debates later understood to be completely pointless. The similarities, which I have discussed elsewhere, are striking and, I think, instructive.
Putting these interesting topics aside, if we adopt the biolinguistic perspective, a language is a state of the faculty of language – an I-language in technical usage, where “I” underscores the fact that the conception is internalist, individual, and intensional (with an “s,” not a “t”) – that is, the actual formulation of the generative principles, not the set it enumerates; the latter we can think of as a more abstract property of the I-language, rather as we can think of the set of possible trajectories of a comet through the solar system as an abstract property of that system.
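The intensional/extensional distinction can be illustrated outside of language (a hypothetical analogy, not part of the original argument): two generative procedures may enumerate exactly the same set while being entirely different procedures, and it is the procedure, not the set, that is the object of study. A minimal sketch in Python, with purely illustrative names:

```python
# Two procedures that enumerate the same set (the even numbers) by
# different generative principles. Extensionally identical, they are
# intensionally distinct objects -- the analogue of studying an
# I-language rather than the set of expressions it enumerates.

def evens_by_doubling():
    n = 0
    while True:
        yield 2 * n        # generate 0, 2, 4, ... directly
        n += 1

def evens_by_filtering():
    n = 0
    while True:
        if n % 2 == 0:     # generate the same set by sieving
            yield n
        n += 1
```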
The decision to study language as part of the world in this sense was regarded as highly controversial at the time, and still is, by many linguists among others. It seems to me that the arguments advanced against the legitimacy of the approach have little force – a weak thesis; and that its basic assumptions are tacitly adopted even by those who strenuously reject them – a much stronger thesis. I will not enter into this chapter of contemporary intellectual history here, but will simply assume that crucial aspects of language can be studied as part of the natural world in the sense of the biolinguistic approach that took shape half a century ago, and has been intensively pursued since, along various different paths.
The language faculty is one component of what the co-founder of modern evolutionary theory, Alfred Russel Wallace, called “man’s intellectual and moral nature”: the human capacities for creative imagination, language and other modes of symbolism, mathematics, interpretation and recording of natural phenomena, intricate social practices and the like, a complex of capacities that seem to have crystallized fairly recently, perhaps a little over 50,000 years ago, among a small breeding group of which we are all descendants – a complex that sets humans apart rather sharply from other animals, including other hominids, judging by the archaeological record. The nature of the “human capacity,” as some researchers now call it, remains a considerable mystery. It was one element of a famous disagreement between the two founders of the theory of evolution, with Wallace holding, contrary to Darwin, that evolution of these faculties cannot be accounted for in terms of variation and natural selection alone, but requires “some other influence, law, or agency,” some principle of nature alongside gravitation, cohesion, and other forces without which the material universe could not exist. Although the issues are differently framed today, they have not disappeared.
It is commonly assumed that whatever the human intellectual capacity is, the faculty of language is essential to it. Many scientists agree with paleoanthropologist Ian Tattersall, who writes that he is “almost sure that it was the invention of language” that was the “sudden and emergent” event that was the “releasing stimulus” for the appearance of the human capacity in the evolutionary record – the “great leap forward,” as Jared Diamond called it. On this view, some genetic event rewired the brain, allowing for the origin of human language with the rich syntax that provides a multitude of modes of expression of thought, a prerequisite for social development and for the sharp changes of behavior revealed in the archaeological record; the same event is also generally assumed to be the trigger for the rapid trek from Africa, where otherwise modern humans had apparently been present for hundreds of thousands of years. The view is similar to that of the Cartesians, but stronger: they regarded normal use of language as the clearest empirical evidence that another creature has a mind like ours, but not as criterial evidence for mind and the origin of the human capacity.
If this general picture has some validity, then the evolution of language may have been a very brief affair, even though language itself is a very recent product of evolution. Of course, there are innumerable precursors, and they doubtless had a long evolutionary history. For example, the bones of the middle ear are a marvellous sound-amplifying system, wonderfully designed for interpreting speech, but they appear to have migrated from the reptilian jaw as a mechanical effect of growth of the neocortex in mammals that began 160 million years ago, so it is reported. We know far too little about conceptual systems to say much, but it’s reasonable to suppose that they too had a long history after the separation of hominids, yielding results with no close similarity elsewhere. But the question of the evolution of language itself has to do with how these various precursors were organized into the faculty of language, perhaps through some slight genetic event that brought a crucial innovation. If that is so, then the evolution of language itself was brief – a speculation that has some bearing on the kind of inquiry into language that is likely to be productive.
Tattersall takes language to be “virtually synonymous with symbolic thought.” Elaborating, one of the initiators of the 1974 symposium, Nobel laureate François Jacob, observed that “the role of language as a communication system between individuals would have come about only secondarily,” perhaps referring to discussions at the 1974 conference, where his fellow Nobel laureate Salvador Luria was one of the more forceful advocates of the view that communicative needs would not have provided “any great selective pressure to produce a system such as language,” with its crucial relation to “development of abstract or productive thinking.” “The quality of language that makes it unique does not seem to be so much its role in communicating directives for action” or other common features of animal communication, Jacob continues, but rather “its role in symbolizing, in evoking cognitive images,” in “molding” our notion of reality and yielding our capacity for thought and planning, through its unique property of allowing “infinite combinations of symbols” and therefore “mental creation of possible worlds” – ideas that trace back to the 17th-century cognitive revolution.
Jacob also stressed the common understanding that answers to questions about evolution “in most instances … can hardly be more than more or less reasonable guesses.” And in most cases, hardly even that. An example that is perhaps of interest here is the study of the evolution of the bee communication system, unusual in that in principle it permits transmission of information over an infinite (continuous) range. There are hundreds of species of honeybees and stingless bees, some having variants of communication systems, some not, though they all seem to survive well enough. So there is plenty of opportunity for comparative work. Bees are incomparably easier to study than humans, along every dimension. But little is understood. Even the literature is sparse. The most recent extensive review I have seen, by entomologist Fred Dyer, notes that even the basic computational problems of coding spatial information to motor commands, and the reverse for follower bees, remain “puzzling,” and “What sorts of neural events might underlie these various mapping processes is unknown,” while evolutionary origins scarcely go beyond speculation. There is nothing like the huge literature and confident pronouncements about the evolution of human language – something that one might also find a bit “puzzling.”
We can add another insight of 17th- and 18th-century philosophy, with roots as far back as Aristotle’s analysis of what were later interpreted as mental entities: that even the most elementary concepts of human language do not relate to mind-independent objects by means of some reference-like relation between symbols and identifiable physical features of the external world, as seems to be universal in animal communication systems. Rather, they are creations of the “cognoscitive powers” that provide us with rich means to refer to the outside world from certain perspectives, but are individuated by mental operations that cannot be reduced to a “peculiar nature belonging” to the thing we are talking about, as Hume summarized a century of inquiry. Julius Moravcsik’s “aitiational theory of semantics” is a recent development of some of these ideas, from their Aristotelian origins and with rich implications for natural language semantics.
These are critical observations about the elementary semantics of natural language, suggesting that its most primitive elements are related to the mind-independent world much as the internal elements of phonology are, not by a reference-like relation but as part of a considerably more intricate species of conception and action. I cannot try to elaborate here, but I think such considerations, if seriously pursued, reveal that it is idle to try to base the semantics of natural language on any kind of “word-object” relation, however intricate the constructed notion of “object,” just as it would be idle to base the phonetics of natural language on a “symbol-sound” relation, where sounds are taken to be constructed physical events – perhaps indescribable four-dimensional constructs based on motions of molecules, with further questions dispatched to the physics department, or, if one wants to make the problem still more hopeless, to the sociology department as well. It is universally agreed that these moves are the wrong ones for the study of the sound side of language, and I think the conclusions are just as reasonable on the meaning side. For each utterance, there is a physical event, but that does not imply that we have to seek some mythical relation between such an internal object as the syllable /ta/ and an identifiable mind-independent event; and for each act of referring there is some complex aspect of the experienced or imagined world on which attention is focused by that act, but that is not to say that a relation of reference exists for natural language. I think it does not, even at the most primitive level.
If this much is generally on the right track, then at least two basic problems arise when we consider the origins of the faculty of language and its role in the sudden emergence of the human intellectual capacity: first, the core semantics of minimal meaning-bearing elements, including the simplest of them; and second, the principles that allow unbounded combinations of symbols, hierarchically organized, which provide the means for use of language in its many aspects. By the same token, the core theory of language – Universal Grammar, UG – must provide, first, a structured inventory of possible lexical items that are related to or perhaps identical with the concepts that are the elements of the “cognoscitive powers”; and second, means to construct from these lexical items the infinite variety of internal structures that enter into thought, interpretation, planning, and other human mental acts, and that are sometimes externalized, a secondary process if the speculations just reviewed turn out to be correct. On the first problem, the apparently human-specific conceptual-lexical apparatus, there is insightful work on relational notions linked to syntactic structures and on the partially mind-internal objects that appear to play a critical role (events, propositions, etc.). But there is little beyond descriptive remarks on the core referential apparatus that is used to talk about the world. The second problem has been central to linguistic research for half a century, with a long prior history in different terms.
The biolinguistic approach adopted from the outset the point of view that cognitive neuroscientist C.R. Gallistel calls “the norm in neuroscience” today, the “modular view of learning”: the conclusion that in all animals, learning is based on specialized mechanisms, “instincts to learn” in specific ways. He suggests that we think of these mechanisms as “organs within the brain,” achieving states in which they perform specific kinds of computation. Apart from “extremely hostile environments,” they change states under the triggering and shaping effect of external factors, more or less reflexively, and in accordance with internal design. That is the “process of learning,” though “growth” might be a more appropriate term, avoiding misleading connotations of the term “learning.” One might relate these ideas to Gallistel’s encyclopedic work on the organization of motion, based on “structural constraints” that set “limits on the kinds of solutions an animal will come up with in a learning situation.”
The modular view of learning of course does not entail that the components of the module are unique to it: at some level, everyone assumes that they are not – the cell, for example. The question of the level of organization at which unique properties emerge remains a basic question from a biological point of view, as it was at the 1974 conference. Gallistel’s observations recall the concept of “canalization” introduced into evolutionary and developmental biology by C.H. Waddington 60 years ago, referring to processes “adjusted so as to bring about one definite end result regardless of minor variations in conditions during the course of the reaction,” thus ensuring “the production of the normal, that is optimal type in the face of the unavoidable hazards of existence.” That seems to be a fair description of the growth of language in the individual. A core problem of the study of the faculty of language is to discover the mechanisms that limit outcomes to “optimal types.”
It has been recognized since the origins of modern biology that organism-external developmental constraints and architectural-structural principles enter not only into the growth of organisms but also their evolution. In a classic contemporary paper, Maynard Smith and associates trace the post-Darwinian version back to Thomas Huxley, who was struck by the fact that there appear to be “predetermined lines of modification” that lead natural selection to “produce varieties of a limited number and kind” for every species. They review a variety of such constraints in the organic world and describe how “limitations on phenotypic variability” are “caused by the structure, character, composition, or dynamics of the developmental system.” They also point out that such “developmental constraints undoubtedly play a significant role in evolution” though there is yet “little agreement on their importance as compared with selection, drift, and other such factors in shaping evolutionary history.” At about the same time, Jacob wrote that “the rules controlling embryonic development,” almost entirely unknown, interact with other physical factors to “restrict possible changes of structures and functions” in evolutionary development, providing “architectural constraints” that “limit adaptive scope and channel evolutionary patterns,” to quote a recent review. The best known of the figures who devoted much of their work to these topics are D’Arcy Thompson and Alan Turing, who took a very strong view on the central role of such factors in biology. In recent years, such considerations have been adduced for a wide range of problems of development and evolution, from cell division in bacteria to optimization of structure and function of cortical networks, even to proposals that organisms have “the best of all possible brains,” as argued by computational neuroscientist Chris Cherniak. The problems are at the border of inquiry, but their significance is not controversial.
Assuming that the faculty of language has the general properties of other biological systems, we should, therefore, be seeking three factors that enter into the growth of language in the individual:
(1) Genetic factors, apparently nearly uniform for the species, the topic of UG. The genetic endowment interprets part of the environment as linguistic experience, a non-trivial task that the infant carries out reflexively, and determines the general course of the development of the language faculty to the languages attained.
(2) Experience, which leads to variation, within a fairly narrow range, as in the case of other subsystems of the human capacity and the organism generally.
(3) Principles not specific to the faculty of language.
The third factor includes principles of structural architecture that restrict outcomes, among them principles of efficient computation, which would be expected to be of particular significance for computational systems such as language, determining the general character of attainable languages.
One can trace interest in this third factor back to the Galilean intuition that “nature is perfect,” from the tides to the flight of birds, and that it is the task of the scientist to discover in just what sense this is true. Newton’s confidence that Nature must be “very simple” reflects the same intuition. However obscure it may be, that intuition about what Ernst Haeckel called nature’s “drive for the beautiful” (“Sinn für das Schöne”) has been a guiding theme of science ever since its modern origins.
Biologists have tended to think differently about the objects of their inquiry, adopting Jacob’s image of nature as a tinkerer, which does the best it can with the materials at hand – often a pretty poor job, as human intelligence seems to be intent on demonstrating about itself. British geneticist Gabriel Dover captures the prevailing view when he concludes that “biology is a strange and messy business and ‘perfection’ is the last word one would use to describe how organisms work, particularly for anything produced by natural selection” – though produced only in part by natural selection, as he emphasizes, and as every biologist knows, and to an extent that cannot be quantified by available tools. These expectations make good sense for systems with a long and complex evolutionary history, full of accidents and lingering effects that lead to non-optimal solutions to problems, and so on. But the logic does not apply to relatively sudden emergence, which might very well lead to systems that are unlike the complex outcomes of millions of years of Jacobian “bricolage” – perhaps more like snowflakes, or phyllotaxis, or cell division into spheres rather than cubes, or polyhedra as construction materials, or much else that is found in the natural world. The minimalist program is motivated by the suspicion that something like that may indeed be true for human language, and I think recent work has given some reason to believe that language is in many respects an optimal solution to conditions it must satisfy, far more so than could have been anticipated a few years ago.
Returning to the early days, within the structuralist/behaviorist frameworks of the 1950s, the closest analogues to UG were the procedural approaches developed by Trubetzkoy, Harris, and others, devised to determine linguistic units and their patterns from a corpus of linguistic data. At best, these cannot reach very far, no matter how vast the corpus and futuristic the computational devices used. Even the elementary formal and meaning-bearing elements, morphemes, do not have the “beads on a string” character that is required for procedural approaches, but relate much more indirectly to phonetic form. Their nature and properties are fixed within the more abstract computational system that determines the unbounded range of expressions. The earliest approaches to generative grammar therefore assumed that the genetic endowment provides a format for rule systems and a method for selecting the optimal instantiation of it, given data of experience. Specific proposals were made then and in the years that followed. In principle, they provided a possible solution to the problem of language acquisition, but involved astronomical calculation, and therefore did not seriously address the issues.
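The logic of that last point, format plus evaluation, can be made concrete with a deliberately toy sketch (the representation of “grammars” as rule sets over a four-element inventory and the names below are hypothetical, not a reconstruction of any actual proposal): enumerate every candidate the format permits, keep those consistent with the data, and select the simplest. The candidate space doubles with each possible rule, which is the “astronomical calculation” at issue.

```python
# Toy sketch of "format + evaluation measure" acquisition. The search
# space of candidate rule systems is the power set of POSSIBLE_RULES,
# i.e. 2^n candidates for n rules -- exponential growth that makes the
# procedure unworkable for realistic formats.

from itertools import combinations

POSSIBLE_RULES = ["r1", "r2", "r3", "r4"]   # toy stand-ins for rules

def generates(grammar: frozenset, datum: str) -> bool:
    # Placeholder: a real instantiation would derive the datum from the
    # rules. Here a datum counts as "generated" if its rule is present.
    return datum in grammar

def select_grammar(data: list[str]) -> frozenset:
    candidates = [
        frozenset(g)
        for size in range(len(POSSIBLE_RULES) + 1)
        for g in combinations(POSSIBLE_RULES, size)
        if all(generates(frozenset(g), d) for d in data)
    ]
    return min(candidates, key=len)   # evaluation measure: simplicity

print(select_grammar(["r2", "r3"]))   # frozenset({'r2', 'r3'})
```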
The main concerns in those years were quite different, as they still are. It may be hard to believe today, but it was commonly assumed 50 years ago that the basic technology of linguistic description was available, and that language variation was so free that nothing of much generality was likely to be discovered. As soon as efforts were made to provide fairly explicit accounts of the properties of languages, it immediately became obvious how little was known, in any domain. Every specific proposal yielded a treasure trove of counter-evidence, requiring complex and varied rule-systems even to achieve a very limited approximation to descriptive adequacy. That was highly stimulating for inquiry into language, but it also left a serious quandary, since the most elementary considerations led to the conclusion that UG must impose narrow constraints on possible outcomes in order to account for the acquisition of language – the task of achieving “explanatory adequacy,” so called. Sometimes these are called “poverty of stimulus” problems in the study of language, though the term is misleading because this is just a special case of basic issues that arise universally for organic growth, including cognitive growth – a variant of problems recognized as far back as Plato.
A number of paths were pursued to try to resolve the tension. The most successful turned out to be efforts to formulate general principles, attributed to UG – that is, the genetic endowment – leaving a somewhat reduced residue of phenomena that would result, somehow, from experience. These approaches had some success, but the basic tensions remained unresolved at the time of the 1974 conference.
Within a few years, the landscape changed considerably. In part this was the result of a vast array of new materials from studies of much greater depth than previously, in part of the opening of new topics to investigation. About 25 years ago, much of this work crystallized in a radically different approach to UG, the “Principles and Parameters” (P&P) framework, which for the first time offered the hope of overcoming the tension between descriptive and explanatory adequacy. This approach sought to eliminate the format framework entirely, and with it, the traditional conception of rules and constructions that had been pretty much taken over into generative grammar. In these respects, it was a much more radical departure from the rich tradition of 2500 years than early generative grammar had been. The new P&P framework led to an explosion of inquiry into languages of the most varied typology, leading to new problems previously not envisioned, sometimes to answers, and to the reinvigoration of neighboring disciplines concerned with acquisition and processing, their guiding questions now reframed in terms of parameter-setting within a fixed system of principles of UG. No one familiar with the field has any illusion today that the horizons of inquiry are even visible, let alone at hand.
Abandonment of the format framework also had a significant impact on the biolinguistic program. If, as had been assumed, acquisition is a matter of selection among options made available by the format provided by UG, then the format must be rich and highly articulated, allowing relatively few options; otherwise, explanatory adequacy is out of reach. The best theory of language must be a very unsatisfactory one from other points of view, with a complex array of conditions specific to human language, restricting possible instantiations. The fundamental biological issue of principled explanation could barely be contemplated, and correspondingly, the prospects for some serious inquiry into evolution of language were dim; evidently, the more varied and intricate the conditions specific to language, the less hope there is for a reasonable account of the evolutionary origins of UG. These are among the questions that were raised at the 1974 symposium and others of the period, but they were left as apparently irresoluble problems.
The P&P framework offered prospects for resolution of these tensions as well. Insofar as this framework proves valid, acquisition is a matter of parameter setting, and is therefore divorced entirely from the remaining format for grammar: the principles of UG. There is no longer a conceptual barrier to the hope that UG might be reduced to a much simpler form, and that basic properties of the computational systems of language might have a principled explanation instead of being stipulated in terms of a highly restrictive language-specific format for grammars. Returning to the three factors of language design, adoption of a P&P framework overcomes a difficult conceptual barrier to shifting the burden of explanation from factor (1), the genetic endowment, to factor (3), language-independent principles of structural architecture and computational efficiency, thereby providing some answers to the fundamental questions of the biology of language – its nature and use, and perhaps its evolution.
With the conceptual barriers imposed by the format framework overcome, we can try more realistically to sharpen the question of what constitutes a principled explanation for properties of language, and turn to one of the most fundamental questions of the biology of language: to what extent does language approximate an optimal solution to conditions that it must satisfy to be usable at all, given extra-linguistic structural architecture? These conditions take us back to the traditional characterization of language since Aristotle as a system that links sound and meaning. In our terms, the expressions generated by a language must satisfy two interface conditions: those imposed by the sensorimotor system and by the conceptual-intentional system that enters into the human intellectual capacity and the variety of speech acts.
We can regard an explanation of properties of language as principled insofar as it can be reduced to properties of the interface systems and general considerations of computational efficiency and the like. Independently, the interface systems can be studied on their own, including comparative study that has been productively underway. And the same is true of principles of efficient computation, applied to language in recent work by many investigators with important results, and perhaps also amenable to comparative inquiry. In a variety of ways, then, it is possible both to clarify and address some of the basic problems of the biology of language.
At this point we have to move on to more technical discussion than is possible here, but a few informal remarks may help sketch the general landscape, at least.
An elementary fact about the language faculty is that it is a system of discrete infinity, rare in the organic world. Any such system is based on a primitive operation that takes objects already constructed, and constructs from them a new object: in the simplest case, the set containing them. Call that operation Merge. Either Merge or some equivalent is a minimal requirement. With Merge available, we instantly have an unbounded system of hierarchically structured expressions. The simplest account of the “Great Leap Forward” in the evolution of humans would be that the brain was rewired, perhaps by some slight mutation, to provide the operation Merge, at once laying a core part of the basis for what is found at that dramatic moment of human evolution – at least in principle; to connect the dots is far from a trivial problem. There are speculations about the evolution of language that postulate a far more complex process: first some mutation that permits two-unit expressions, perhaps yielding selectional advantage by reducing memory load for lexical items; then further mutations to permit larger ones; and finally the Great Leap that yields Merge. Perhaps the earlier steps really took place, though there is no empirical or serious conceptual argument for the belief. A more parsimonious speculation is that they did not, and that the Great Leap was effectively instantaneous, in a single individual, who was instantly endowed with intellectual capacities far superior to those of others, transmitted to offspring and coming to predominate. At best a reasonable guess, as are all speculations about such matters, but the simplest one imaginable, and not inconsistent with anything known or plausibly surmised. It is hard to see what account of human evolution would not assume at least this much, in one or another form.
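Purely as an informal illustration (the representation below, with lexical items as strings and syntactic objects as sets, is an expository assumption, not a claim about the mental implementation), the core formal point fits in a few lines of Python: a single binary operation of set formation, applied recursively to a finite lexicon, already yields an unbounded range of hierarchically structured expressions.

```python
# A minimal sketch of Merge as set formation. Illustrative only.

from typing import Union

SyntacticObject = Union[str, frozenset]

def merge(x: SyntacticObject, y: SyntacticObject) -> frozenset:
    """Take two objects already constructed and construct from them a
    new object: in the simplest case, the set containing them."""
    return frozenset({x, y})

# Repeated application yields hierarchical structure without bound:
vp = merge("read", "books")   # {read, books}
tp = merge("will", vp)        # {will, {read, books}}
cp = merge("John", tp)        # {John, {will, {read, books}}}
```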
Similar questions arise about the growth of language in the individual. It is commonly assumed that there is a two-word stage, a three-word stage, and so on, with an ultimate Great Leap Forward to unbounded generation. That is observed in performance, but it is also observed that at the early stage the child understands much more complex expressions, and that random modification of longer ones – even such simple changes as placement of function words in a manner inconsistent with UG or the adult language – leads to confusion and misinterpretation. It could be that unbounded Merge, and whatever else is involved in UG, is present at once, but is manifested only in limited ways for extraneous reasons – memory and attention limitations and the like: matters discussed at the 1974 symposium, and now possible to investigate much more systematically and productively.
The most restrictive case of Merge applies to a single object, forming a singleton set. Restriction to this case yields the successor function, from which the rest of the theory of natural numbers can be developed in familiar ways. That suggests a possible answer to a problem that troubled Wallace in the late 19th century: in his words, that the “gigantic development of the mathematical capacity is wholly unexplained by the theory of natural selection, and must be due to some altogether distinct cause,” if only because it remained unused. One possibility is that the natural numbers result from a simple constraint on the language faculty, hence are not given by God, in accord with Kronecker’s famous aphorism, though the rest is created by man, as he continued. Speculations about the origin of the mathematical capacity as an abstraction from linguistic operations are not unfamiliar. There are apparent problems, including dissociation with lesions and diversity of localization, but the significance of such phenomena is unclear for many reasons (including the issue of possession vs. use of the capacity). There may be something to these speculations, perhaps along the lines just indicated.
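Spelled out in one standard way (a sketch under assumptions: the identification of some fixed initial object x with zero is illustrative, and nothing depends on the particular choice), singleton Merge generates a copy of the natural numbers:

```latex
% Successor from singleton Merge (Zermelo-style numerals), assuming
% a designated object x identified with 0:
\begin{align*}
0 &\;:=\; x \\
\mathrm{succ}(n) &\;:=\; \mathrm{Merge}(n) \;=\; \{\,n\,\} \\
1 &= \{x\}, \quad 2 = \{\{x\}\}, \quad 3 = \{\{\{x\}\}\}, \;\dots
\end{align*}
```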
Elementary considerations of computational efficiency impose other conditions on the optimal solution to the task of linking sound and meaning. There is by now an extensive literature exploring problems of this kind, and I think it is fair to say that there has been considerable progress in moving towards principled explanation. It is even more clear that these efforts have met one primary requirement for a sensible research program: stimulating inquiry that has been able to overcome some old problems while even more rapidly bringing to light new ones, previously unrecognized and scarcely even formulable, greatly enriching the empirical challenges of descriptive and explanatory adequacy that have to be faced; and, for the first time, opening a realistic prospect of moving significantly beyond explanatory adequacy to principled explanation along the lines indicated.
The quest for principled explanation faces daunting tasks. We can formulate the goals with reasonable clarity. We cannot, of course, know in advance how well they can be attained – that is, to what extent the states of the language faculty are attributable to general principles, possibly even holding for organisms generally. With each step towards this goal, we gain a clearer grasp of the core properties that are specific to the language faculty, still leaving quite unresolved problems that have been raised for hundreds of years. Among these is the question of how properties “termed mental” relate to “the organical structure of the brain” – problems far from resolution even for insects, and with unique and deeply mysterious aspects when we consider the human capacity and its evolutionary origins.
© Noam Chomsky
With the Author's permission.
A more extensive elaboration of the ideas presented here will appear in Linguistic Inquiry.