The Uniqueness of Human Language

In the paper cited below, the psychologist Gallistel writes:

“Chomsky (1988) has suggested that both language and the capacity for abstract thought rest on the evolution in humans of a computational capacity that is absent in nonhuman animals.  Language and abstract thought may, for example, be manifestations of a uniquely human capacity to construct symbolic structures by recursion (Hauser, Chomsky, & Fitch, 2002).” (253)

“I argue that findings in the animal cognition literature suggest that species with which humans have not shared a common ancestor since the Cambrian era represent the experienced world at a high level of abstraction. … These nonlinguistic representations appear to involve symbolic structures, that is, multiple symbols stored in memory in such a way as to encode experienced relations among the entities to which the symbols refer.” (253-4)

“It would appear that animals have represented the experienced world at a highly abstract level in a richly structured symbolic system for eons and that the human infant is heir to this powerful and versatile representational system.  What is unique in humans is the machinery for mapping what they represent in the privacy of their own brain into a communicable system of symbols of similar power and versatility to the private system.”  (260)

In this short paper, Gallistel argues that nonhuman animals, too, possess highly abstract representations and perform complex inferences on their basis. Along the way he suggests that Chomsky's claim is incompatible with this, but I think that suggestion is misleading. The crucial feature of human language is the capacity to construct symbolic structures recursively; abstractness or symbolicity per se is not what is at issue. When linguists say that the language faculty is unique to humans, they are not saying that abstractness or symbolicity is unique to humans. The phrase “a communicable system of symbols” in the concluding passage may, however, presuppose an externalist conception of language.

C. R. Gallistel, “Prelinguistic Thought,” Language Learning and Development 7: 253–262, 2011.

Internalism and Externalism in Semantics 1

Let us look at the defense of internalism in John Collins' book. What sort of positions are internalism and externalism about the study of language?

Linguistic externalism: The explanations offered by successful linguistic theory (broadly conceived) entail or presuppose externalia (objects or properties individuated independent of speaker/hearer’s cognitive states).  The externalia include the quotidian objects we take ourselves to talk about each day.

First, this is externalism about the theory of language; we might equally say it is a position about the science of language. It has little to do with the everyday concept of "language," and speakers' pretheoretical opinions about "language" bear no direct relation to it. Second, according to this externalism, a correct linguistic theory presupposes, or logically entails, the existence of externalia, such as everyday objects, individuated independently of speakers' cognitive states. The externalia here may include a wide range of objects, from quarks to computers, chairs, and sonatas.

Internalism, on the other hand, can be summarized as follows.

Linguistic internalism: The explanations offered by successful linguistic theory neither presuppose nor entail externalia.  There are externalia, but they do not enter into the explanations of linguistics qua externalia.  Linguistics is methodologically solipsistic; its kinds are internalist.

The first sentence of this characterization is the negation of the externalism above. It is a claim about what follows from linguistic explanations, not a claim about ontology in general. Internalism therefore does not involve absurd claims such as that countries, cities, or persons do not exist. Collins criticizes Kennedy and Stanley (2009) in particular.

According to Chomsky, native speakers will tell us that this sentence [`London is a city in England.’] is actually true.  But Chomsky thinks it is quite clear to all that the city of London, the standard semantic value of the noun phrase `London’, does not exist (Kennedy and Stanley, 2009, 586)

Here Kennedy and Stanley write as if Chomsky were claiming that the city of London does not exist. But Chomsky is criticizing the relational assumption that a linguistic expression "has" a single semantic value (referent) in every context; he is not casting doubt on the objects that speakers refer to.

Collins, J. The Unity of Linguistic Meaning, 137–8.
Kennedy, C. and Stanley, J. 2009. “On Average,” Mind.

Collins on Merge 3

According to Collins, Merge is one of the two parts of an adequate solution to the unity problem as he conceives it.  The other part is a theory of the features of lexical items.  What are the features of lexical items?  He says we may assume four classes of lexical feature: phonological, syntactic, thematic, and semantic.  Phonological features are set aside because they are presumably irrelevant to the interpretability of a structure itself.

Here are some examples of the other three types of feature:

  • Syntactic: +/- finite, e.g. likely vs. probable (which differ in the finiteness of the complements they select)
  • Thematic: AGENT, PATIENT etc., which `relate constituents to verbs, determining roles within the event marked by the verb.'
  • Semantic: +/- animate etc.

Since he says, `I take semantic theory broadly to be in the business of determining the identity and interplay of these features' (119 n33), the specification of syntactic features is part of, or at least goes hand-in-hand with, semantic theory.

Here is his solution to the unity problem.  Lexical items inherently have such features (or rather, they are mere bundles of such features).  Merge combines them to form structures.  Some of the resulting structures are interpretable because `the inherent features of the constituent items fit one another. If they do not fit, then the structure is uninterpretable or unstable, if you will.  Such was Frege's insight' (119).
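The two-part account can be rendered as a toy sketch. This is my own illustration, not Collins' formalism: the particular features, the `fits` test, and all the names below are assumptions made for the example, with interpretability standing in for the "fit" of inherent features.

```python
# Toy sketch (illustrative only, not Collins' formalism): lexical items
# are mere bundles of features; Merge combines two items; the result is
# interpretable just in case the constituents' features fit one another.

def item(name, **features):
    """A lexical item: a label plus a bundle of features."""
    return {"name": name, "features": features}

def fits(x, y):
    """Toy 'fit' test: every feature shared by both items must agree."""
    fx, fy = x["features"], y["features"]
    return all(fx[k] == fy[k] for k in fx if k in fy)

def merge(x, y):
    """Combine two objects, marking the result uninterpretable on a clash."""
    return {"name": (x["name"], y["name"]),
            "features": {**y["features"], **x["features"]},
            "interpretable": fits(x, y)}

sleeps = item("sleeps", animate=True)   # hypothetical: selects an animate subject
john = item("John", animate=True)
rock = item("rock", animate=False)

assert merge(john, sleeps)["interpretable"] is True
assert merge(rock, sleeps)["interpretable"] is False   # feature clash
```

The point of the sketch is only structural: Merge itself is the same blind operation in both cases; whether the output is interpretable is settled by the items' inherent features.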

This is Collins' two-part account of unity, the unity of linguistic meaning.  Importantly, his unity problem asks why and how some structures are interpretable and others are not.  It is not about the unity of propositions per se, or of facts, or of any other metaphysical entity. His question concerns `combinatorial unity':

(Combinatorial Unity) Given lexical items with their semantic properties, what principle or mechanism combines the items into structures that are interpretable as a function of their constituent parts? (28)

Coming back to this earlier part of his book, I now notice that he already mentions `lexical items with their semantic properties', which are presupposed, `given', and not part of the combinatorial unity problem as Collins defines it.  But I wonder whether the specification of the semantic and other features of lexical items is itself an important part of the combinatorial unity problem.  As Collins notes, some salient properties are not reflected in lexical features (not all possible features are available or realized), and there are cross-linguistic uniformities.  Don't we need to know why this is so in order to explain why some structures are interpretable and others are not?  And we are not quite sure why and how such constraints arise.  Do they derive from considerations of computation, learning, or evolutionary facts?  I'm inclined to agree with him that his account is not merely `kicking the problem of unity upstairs'. But maybe it is just a first step toward a comprehensive solution to the interpretive unity problem.

Collins on Merge 2

We have to distinguish internal and external forms of Merge.  Merge applies to two objects.  If the objects are distinct, the operation is a case of `external' Merge; if one object is contained in the other, it is a case of `internal' Merge.  Merge in itself is binary set formation.  Any application of Merge creates symmetry between the objects conjoined, but repeated applications of Merge can create a superset that contains the initial result of Merge.

(9) [a, b] -> [a [a, b]]

The idea is that the order in itself is not important, but which element is dominant is.  In (9), a is dominant; `internal' Merge establishes hierarchical asymmetry over directional symmetry.

We may think of internal Merge as a device of symmetry breaking, where one object of a directionally symmetrical pair is positioned so as to be asymmetrically related to a copy of itself and the copy’s pair mate.

Internal Merge creates a head of a structure, which can be seen as the creation of a new object.  A new relation, being a head, allows us to consider one object both as an element of a structure and as a proxy for the collection of elements it heads.

Merge itself is indifferent to interpretation, so Merge in itself cannot produce the constraints that single out the interpretable structures; the constraints `must arise from the lexical items or interface properties' (117).
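The internal/external distinction can be made concrete with a toy encoding. This is my own sketch under one common reading of Merge as binary set formation; the helper names (`contains`, `is_internal`) are mine, not Collins'.

```python
# Toy encoding (my sketch, not Collins' definitions): Merge as binary set
# formation, with sets modelled as frozensets so that the output of one
# Merge can itself be an input to another.

def merge(x, y):
    """Merge is binary set formation: Merge(x, y) = {x, y}."""
    return frozenset([x, y])

def contains(whole, part):
    """Does `whole` contain `part` at any depth?"""
    if whole == part:
        return True
    return isinstance(whole, frozenset) and any(contains(m, part) for m in whole)

def is_internal(x, y):
    """Internal Merge: one of the merged objects is contained in the other."""
    return contains(x, y) or contains(y, x)

a, b = "a", "b"
ab = merge(a, b)       # external Merge: a and b are distinct
aab = merge(a, ab)     # internal Merge: a is contained in {a, b}

assert not is_internal(a, b)
assert is_internal(a, ab)
assert aab == frozenset([a, ab])   # the structure in (9): [a [a, b]]
```

The re-merged `a` is what the text calls the dominant element: the same operation, applied again, yields the hierarchical asymmetry of (9) without any appeal to linear order.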

Collins on Merge 1

Following John Collins’ The Unity of Linguistic Meaning, let’s first see the main differences/similarities between mere concatenation and ‘Merge’ as discussed in the minimalist literature.

Concatenation is an operation that produces linear strings of elements.  For example, the result of applying concatenation to two elements a and b, in this order, is a^b.  The fundamental property of concatenation, he notes, is associativity: parentheses are redundant, just as in addition or multiplication.  Every concatenation is flat; it produces no hierarchical asymmetry.

Merge, on the other hand, is an operation that targets two elements and creates a new object, where the merged objects are atomic or themselves products of Merge.  Each merged object displays its ‘Merge history’ as an individuative condition, i.e., each object is structured as a sequence of binary pairings.
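The contrast can be stated in a few lines of code. This is my own minimal illustration, not from the book: concatenation is modelled as string concatenation, and Merge history as nested pairs.

```python
# Minimal illustration (my own toy encoding): concatenation is associative
# and flat, while Merge records its own derivational history.

def concat(x, y):
    """Concatenation yields a flat string; grouping leaves no trace."""
    return x + y

def merge(x, y):
    """Merge yields a new binary object; grouping is part of its identity."""
    return (x, y)

# Associativity of concatenation: (a^b)^c == a^(b^c) == "abc".
assert concat(concat("a", "b"), "c") == concat("a", concat("b", "c"))

# Merge is not associative: the two derivations are distinct objects,
# each displaying a different 'Merge history'.
assert merge(merge("a", "b"), "c") != merge("a", merge("b", "c"))
```

Both operations are recursive and defined independently of what they apply to; the difference is that only Merge's outputs individuate the derivation that produced them.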

Both concatenation and Merge are (i) recursive and able to generate unboundedly many things, and they are (ii) specifiable independently of that upon which they operate.  The second condition is related to one of the desiderata for the solution of the unity problem envisaged by Collins.

We can think of our problem as presupposing the availability of a compositional theory.  A semantic theory might well tell us what structures (linguistic material) count as unities for a speaker/hearer, but it does not eo ipso tell us how those unities are available.  The job of a semantic theory is typically construed as if the possible unities were waiting to be compositionally described, already formed.  What we want to know, though, is how the unities are available to the speaker/hearer at all.  A desideratum on an explanation, here, is the provision of a combinatorial principle independent of the unities that happen to be interpretable, where independence means that the principle can be fully specified without any mention of the elements to which it applies.  That way, an explicatory principle will not presuppose what we want explained.  (my emphasis, p.30)

So thinking about concatenation helps us understand what Collins means by this explanatory desideratum, even though concatenation itself is inadequate as the sought-after principle.