Collins on Merge 3

According to Collins, Merge is one of the two parts of an adequate solution to the unity problem as he conceives it.  The other part is a theory of the features of lexical items.  What are the features of lexical items?  He says we may assume four classes of lexical feature: phonological, syntactic, thematic, and semantic.  Phonological features are set aside because they are presumably irrelevant to the interpretability of a structure itself.

Here are some examples of the other three types of feature:

  • Syntactic: +/- finite (e.g., the contrast between `likely' and `probable')
  • Thematic: AGENT, PATIENT etc., which `relate constituents to verbs, determining roles within the event marked by the verb.’
  • Semantic: +/- animate etc.

Since he says, `I take semantic theory broadly to be in the business of determining the identity and interplay of these feature' (119 n33), the specification of syntactic features is part of, or at least goes hand-in-hand with, semantic theory.

Here is his solution to the unity problem.  Lexical items inherently have such features (or they are mere bundles of such features).  Merge combines them to form structures.  Some of the resulting structures are interpretable because `the inherent features of the constituent items fit one another. If they do not fit, then the structure is uninterpretable or unstable, if you will.  Such was Frege’s insight’ (119).
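The two-part account can be pictured with a toy sketch.  Everything here is my own illustration, not Collins' formalism: the mini-lexicon, the feature sets, and the `fits' check are invented, using only the +/- finite contrast between `likely' and `probable' mentioned above.

```python
# Toy sketch (my own, not Collins'): lexical items as bundles of features;
# a combination is interpretable only if the inherent features fit.

LEXICON = {
    "likely":      {"syntactic": {"-finite"}},  # selects non-finite complements
    "probable":    {"syntactic": {"+finite"}},  # selects finite complements
    "to-win":      {"syntactic": {"-finite"}},
    "that-he-won": {"syntactic": {"+finite"}},
}

def fits(head, complement):
    """A deliberately crude notion of 'fit': the head's selectional
    feature must match the complement's finiteness feature."""
    return LEXICON[head]["syntactic"] == LEXICON[complement]["syntactic"]

assert fits("likely", "to-win")         # "likely to win" is interpretable
assert not fits("probable", "to-win")   # "*probable to win" does not fit
```

The point is only structural: interpretability is not conferred by the combining operation but by whether the features of the combined items mesh.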

This is Collins’ two-part account of unity, the unity of linguistic meaning.  Importantly, his unity problem asks why and how some structures are interpretable and others are not.  It’s not about the unity of propositions per se, or of facts, or of any other metaphysical entity.  His question concerns `combinatorial unity’:

(Combinatorial Unity) Given lexical items with their semantic properties, what principle or mechanism combines the items into structures that are interpretable as a function of their constituent parts? (28)

Coming back to this earlier part of his book, I now notice that he already mentions `lexical items with their semantic properties’, which are presupposed, `given’, and not part of the combinatorial unity problem in Collins’ definition.  But I wonder whether the specification of semantic and other features of lexical items is an important part of the combinatorial unity problem.  As Collins notes, some salient properties are not reflected in lexical features (not all possible features are available/realized), and there are cross-linguistic homogeneities.  Don’t we need to know why this is so in order to explain why some structures are interpretable and others are not?  And we are not quite sure why and how such constraints arise.  Do they derive from facts about computation, learning, or evolution?  I’m inclined to agree with him that his account is not merely `kicking the problem of unity upstairs’.  But maybe it is just a first step toward a comprehensive solution to the interpretive unity problem.

Collins on Merge 2

We have to distinguish internal and external forms of Merge.  Merge applies to two objects.  If the objects are distinct, the operation is a case of `external’ Merge.  But if one object is contained in the other, it is a case of `internal’ Merge.  Merge in itself is binary set formation.  Any single application of Merge creates symmetry between the objects conjoined.  But repeated applications of Merge can create a superset that contains the initial result of Merge.

(9) [a, b] -> [a [a, b]]

The idea is that order in itself is not important, but which element is dominant is.  In (9), a is dominant; `internal’ Merge establishes hierarchical asymmetry over directional symmetry.
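The external/internal distinction can be made concrete with a small sketch.  The representation is my own choice (not Collins'): Merge as formation of an unordered two-element set, modeled with Python's `frozenset`, plus a hypothetical `contains` helper for the containment test that distinguishes the two cases.

```python
# Sketch (my own illustration): Merge as binary set formation.

def merge(x, y):
    """Merge takes two objects and returns the unordered set of both."""
    return frozenset([x, y])

def contains(whole, part):
    """True if `part` occurs somewhere inside `whole`'s Merge history."""
    if not isinstance(whole, frozenset):
        return False
    return part in whole or any(contains(m, part) for m in whole)

# External Merge: the two objects are distinct.
ab = merge("a", "b")          # {a, b} -- symmetric

# Internal Merge, as in (9): one input ("a") is contained in the other
# ({a, b}), yielding {a, {a, b}} -- a now sits asymmetrically above a
# copy of itself and that copy's pair mate.
internal = merge("a", ab)

assert contains(ab, "a") and contains(internal, ab)
```

Whether Merge applies externally or internally is then read off a single containment relation, not built into the operation itself.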

We may think of internal Merge as a device of symmetry breaking, where one object of a directionally symmetrical pair is positioned so as to be asymmetrically related to a copy of itself and the copy’s pair mate.

Internal Merge creates a head of a structure, which can be seen as the creation of a new object.  A new relation, being a head, allows us to consider that object both as an element of a structure and as a proxy for the collection of elements it heads.

Merge itself is indifferent to interpretation, so Merge in itself cannot produce the constraints that determine which structures are interpretable; the constraints `must arise from the lexical items or interface properties’ (117).

Collins on Merge 1

Following John Collins’ The Unity of Linguistic Meaning, let’s first see the main differences/similarities between mere concatenation and ‘Merge’ as discussed in the minimalist literature.

Concatenation is an operation that produces linear strings of elements.  For example, the result of applying concatenation to two elements, a and b in this order, is a^b.  The fundamental property of concatenation, he notes, is associativity.  Parentheses are redundant, just as in addition or multiplication.  Every concatenation is flat; it produces no hierarchical asymmetry.
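Associativity and flatness are easy to see if we model concatenation with ordinary strings (my own illustration, not Collins'):

```python
# Concatenation is associative: grouping leaves no trace in the result,
# so parentheses are redundant and no hierarchy is created.
a, b, c = "a", "b", "c"
assert (a + b) + c == a + (b + c) == "abc"
```

Whatever the order of grouping, the output is the same flat string; the derivational history is invisible in the result.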

Merge, on the other hand, is an operation that targets two elements and creates a new object, where the merged objects are atomic or themselves products of Merge.  Each merged object displays its ‘Merge history’ as an individuative condition, i.e., each object is structured as a sequence of binary pairings.

Both concatenation and Merge are (i) recursive and able to generate unboundedly many things, and (ii) specifiable independently of that upon which they operate.  The second condition is related to one of the desiderata for the solution of the unity problem envisaged by Collins.
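The contrast between the two operations can be sketched as follows (again my own illustration, modeling Merge as binary set formation):

```python
# Sketch (my own, not Collins'): each Merge output is individuated by
# its derivational history as a sequence of binary pairings.

def merge(x, y):
    return frozenset([x, y])

# Unlike concatenation, Merge is not associative: both derivations below
# would concatenate to the same flat string "abc", but they produce
# distinct structured objects with different Merge histories.
left  = merge(merge("a", "b"), "c")   # {{a, b}, c}
right = merge("a", merge("b", "c"))   # {a, {b, c}}
assert left != right
```

Both operations are recursive and specifiable without mentioning what they operate on; they differ in whether the output records how it was built.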

We can think of our problem as presupposing the availability of a compositional theory.  A semantic theory might well tell us what structures (linguistic material) count as unities for a speaker/hearer, but it does not eo ipso tell us how those unities are available.  The job of a semantic theory is typically construed as if the possible unities were waiting to be compositionally described, already formed.  What we want to know, though, is how the unities are available to the speaker/hearer at all.  A desideratum on an explanation, here, is the provision of a combinatorial principle independent of the unities that happen to be interpretable, where independence means that the principle can be fully specified without any mention of the elements to which it applies.  That way, an explicatory principle will not presuppose what we want explained.  (my emphasis, p.30)

So thinking about concatenation helps us understand what Collins means by this explanatory desideratum, even though concatenation itself is inadequate as the sought-after principle.