According to Collins, Merge is one of the two parts of an adequate solution to the unity problem as he conceives it. The other part is a theory of the features of lexical items. What are the features of lexical items? He says we may assume four classes of lexical feature: phonological, syntactic, thematic, and semantic. Phonological features are set aside because they are presumably irrelevant to the interpretability of a structure itself.
Here are some examples of the other three types of feature:
- Syntactic: +/- finite, as in likely vs. probable
- Thematic: AGENT, PATIENT etc., which `relate constituents to verbs, determining roles within the event marked by the verb.’
- Semantic: +/- animate etc.
Since he says, `I take semantic theory broadly to be in the business of determining the identity and interplay of these feature’ (119 n33), the specification of syntactic features is part of, or at least goes hand-in-hand with, semantic theory.
Here is his solution to the unity problem. Lexical items inherently have such features (or they are mere bundles of such features). Merge combines them to form structures. Some of these structures are interpretable because `the inherent features of the constituent items fit one another. If they do not fit, then the structure is uninterpretable or unstable, if you will. Such was Frege’s insight’ (119).
This is Collins’ two-part account of unity, the unity of linguistic meaning. Importantly, his unity problem asks why and how some structures are interpretable and others are not. It’s not about the unity of propositions per se, or of facts, or of any other metaphysical entity. His question is concerned with `combinatorial unity’:
(Combinatorial Unity) Given lexical items with their semantic properties, what principle or mechanism combines the items into structures that are interpretable as a function of their constituent parts? (28)
Coming back to this earlier part of his book, I now notice that he already mentions `lexical items with their semantic properties’; these are presupposed, `given’, and not part of the combinatorial unity problem on Collins’ definition. But I wonder whether the specification of semantic and other features of lexical items is an important part of the combinatorial unity problem. As Collins notes, some salient properties are not reflected in lexical features (not all possible features are available/realized), and there are cross-linguistic homogeneities. Don’t we need to know why this is so in order to explicate why some structures are interpretable and others are not? And we are not quite sure why such constraints obtain. Do they derive from facts about computation, learning, or evolution? I’m inclined to agree with him that his account is not merely `kicking the problem of unity upstairs’. But maybe it is just a first step toward a comprehensive solution to the interpretive unity problem.