
Categorial features and categorizers

E. Phoevos Panagiotidis

The Linguistic Review 28 (2011), 365–386

Abstract

Categorizing heads, aka categorizers, like n and v are the elements that make nouns and verbs in approaches embracing syntactic decomposition. In this paper they are claimed to bear distinctive LF-interpretable categorial features [N] and [V], features that set the interpretive perspective of nouns and verbs as sortal and extending-into-time respectively. Consequently, I argue that categorizers are necessary for the interpretation of roots because they provide the interpretive perspective in which concepts can be related with semantically deficient roots through syntax. This explains the observation that free roots cannot be merged directly in a syntactic derivation. On the basis of their categorial status, the paper also offers a new classification of the basic elements syntax manipulates, which includes roots, inner morphemes, categorizers, and functional heads.

1. Introduction

This article is an investigation into the nature of the categorizing heads n and v (Marantz 2000). It looks into the feature make-up of these heads, to be called categorizers henceforth, and into how LF-interpretable categorial features such as [N] and [V] guarantee the distinct interpretation of the categorizers n and v. On this conceptual basis, I will formulate and discuss two hypotheses regarding fundamental properties of categorizers:

a. Categorizers are necessary for the interpretation of roots (the Categorization Assumption of Embick and Marantz 2008: 6) because they provide the interpretive perspective in which concepts can be related with semantically deficient root material;

b. Categorizers are not functional heads; instead, categorizers are the only lexical heads.

In relation to the above, the menagerie of elements combining to make syntactic objects will be revisited and revised; it will be claimed to include category-less roots, inner morphemes (category-less bundles of UG features), categorizers (bundles of UG features bearing category), and functional heads (bundles of UG features bearing uninterpretable categorial features).

2. Categories in syntax

Hale & Keyser (1993; 2002) introduced a syntactic approach to the construction of lexical categories, like nouns and verbs. The distinct version thereof developed in Marantz (1997; 2000) has gained considerable currency in the last five years or so. The general outline of the Marantzian approach is that lexical categories such as 'noun', 'verb' and 'adjective' are not products of the combination of categorial features with roots in a lexicon: categories are not specified on lexical items in a pre-syntactic lexicon. On the contrary, roots are inserted bare in syntax, where the assignment of roots to categories takes place: thus, categorization is a syntactic process. More specifically, the syntactic environment turns roots into 'nouns', 'verbs' or 'adjectives'. This is achieved not by associating roots with categorial features, but by inserting them inside the complement of categorizers – a nominalizer (n), a verbalizer (v) and an adjectivizer (a).
On top of this, a categorizer may change the category of an already categorized element, e.g., in the cases of denominal verbs and deverbal nouns (e.g., colony → colonize, colonize → colonization). The empirical consequences of syntactic categorization have been explored in detail in a significant body of work, including – but not restricted to – Harley & Noyer (1998), Embick (2000), Alexiadou (2001), Folli, Harley & Karimi (2003), Arad (2003; 2005), Folli & Harley (2005), Harley (2005a, 2005b, 2007, 2009), Marantz (2005, 2006), Embick & Marantz (2008), Lowenstamm (2008), Acquaviva (2008), Basilico (2008), Volpe (2009) and, in a slightly different framework but in considerable detail, Borer (2005). I will not attempt to summarize the diverse and insightful findings of this line of work here.

Syntactic categorization analyses are customarily embedded within Distributed Morphology (Halle & Marantz 1993). However, I believe that any consistently realizational morphological framework can be used equally well, as long as it incorporates

a. a separationist distinction (and/or a dissociation) between syntactic feature structures and their morphological exponence, cf. also Beard (1995), and

b. Syntax-all-the-way-down (cf. Marantz 1997; Harley & Noyer 1999), i.e., the same combinatorial mechanism behind word building and sentence building.

However, pending further empirical investigation of the matter, I will broadly frame my discussion in Distributed Morphology terms where necessary.

Let us now briefly outline how syntactic categorization works. A first point is that the projections of categorizers may contain more than just themselves and a root: for instance, they could contain small clause structures, low applicatives or causatives and other subcategorial feature structures called "inner morphemes" in Marantz (2000). The phrases that categorizers project are phases (Marantz 2000; 2006), i.e., complete structural units of semantic and phonological interpretation. The phases headed by categorizers will be notated as First Phases here, after Ramchand (2008):1

(1) a. [categorizerP categorizer [ root ]]
    b. [categorizerP categorizer [xP x [ ... root ... ]]]

Footnote 1: As a reviewer asks, does phasehood entail that the complement of the categorizer is dispatched to the interfaces to the exclusion of the categorizer, the phase head? This is a serious problem that, as far as I know, has not been addressed in the literature on categorizer phrases as First Phases. A possible solution is to assume generalized movement of complement material to the specifier of the categorizer, to my mind a highly undesirable solution. I will have to remain agnostic on the matter, serious though it is.

Keeping the above in mind, the resulting basic story of how to make categories in syntax is quite straightforward: first of all, nouns and verbs are fully-fledged syntactic structures made of a categorizer, "inner morphemes" like x in (1), and roots: n makes these structures 'nouns', v makes them 'verbs' – and so on. This, of course, is not the full story: as Marantz (2000, 2005, 2006) illustrates, those nPs and vPs inevitably also contain (some) internal arguments of nouns and verbs.2

Footnote 2: See also Basilico (2008). Let me point out here that there is considerable confusion about what the label v stands for. For the work cited and followed here, it is the categorizer; for those mainly concentrating on Phase Theory (beginning with Chomsky 2000, et seq.), v essentially stands for Kratzer's (1996) Voice: a causative-transitive or passive head that hosts the external argument and may assign accusative case if transitive (as per Burzio's Generalization). Both camps take v (categorizing or Voice) to be a phase head. I will here be concerned with the phase status of the categorizing v (and n), remaining agnostic about that of Voice – but see Anagnostopoulou & Samioti (2009) for discussion.

This takes us to the next point, which is that the content and configuration of those syntactic structures, nPs and vPs, determine the interpretation of the noun or verb. In other words, interpretation largely depends on (and is constrained by)

a. the position of the root within these structures (e.g., predicate, small clause subject, location, locatum, instrument, etc.);
b. which inner morphemes and internal argument(s), if any, the structures contain; inner morphemes include elements like low applicatives, low causativizers, particles, etc.

Purely for illustration of a case involving more than just a categorizer and a root, take a possible syntactic representation of the verb paint, adapted from Harley (2005a):

(2) [vP v [SC [the wall] [PP P PAINT]]]

Paint above is treated as a locatum verb, like 'butter'. The verbalizer takes a small clause in its complement, which in turn relates a subject of predication, 'the wall', with a PP(-like) constituent identifying a locatum. The root PAINT abstractly incorporates into the null P(-like) inner morpheme and then into v itself, à la Hale & Keyser (1993).3

Footnote 3: All roots will be notated using small caps throughout this article, just like PAINT here.

Concluding this section, if nPs and vPs are indeed phases, they must receive a morphophonological and a semantic interpretation at the interfaces once completed. Again using (2) for illustration, once the vP phase is built, the syntactic object resulting from the incorporation of the root PAINT into P and v will be given a morphophonological shape ("associated with a vocabulary item") and, crucially for our discussion here, a semantic interpretation.

3. Distinctive (categorial) features

Assuming a category-less syntax as above, where nouns are (a part of) nPs and verbs are (a part of) vPs, we will inevitably run into the following question:

(3) What distinguishes n from v? What is their difference at LF?

In other words, the long-standing problem of how grammatical categories are defined and in what ways they differ from each other (Baker 2003: Ch. 1) is now recast as the question of what makes categorizers different from each other.4 Plainly put, even if we conceive of nouns and verbs as syntactically decomposable entities within nPs and vPs, the syntactic difference between 'nouns' and 'verbs' remains, and it needs to be expressed as the difference between the categorizers n and v.

Footnote 4: I will here set adjectives and adjectivizers aside. There are two reasons for this: first, it is not clear that 'adjective' is either a unitary or a universal category, despite Baker (2003: Ch. 4) arguing for both these theses. Second, even if an 'adjectivizing' categorial feature exists, it is not clear at this point what its semantic import is (see Partee 1995; Larson & Segal 1995: 130–132). Corver (1997), Neeleman, van de Koot & Doetjes (2004) and Doetjes (2008) offer interesting discussions of the customary correlation between adjectives and gradability.

Now, adopting categorizers and a process of syntactic categorization, the simplest hypothesis would be that each categorizer (n and v) bears a distinctive feature (or feature value). Therefore, we need to identify (a) the feature (values) and (b) its interpretation.
Keeping with the established notation, let us call these features [N] and [V]: let us take categorial features to be the distinctive features on categorizers.5

Footnote 5: A Linguistic Review referee insightfully observes that "any approach will have to say something about which categorizers there are. [...] [W]e on empirical grounds would like to exclude a categorizer M that would categorize a root as being both an adjective and a verb". If the account here is on the right track, then categorizers and categorial features lie at the heart of the feature system of UG: which categorial features exist must reflect the fundamental workings of the language faculty.

If categorial features [N] and [V] truly exist and act as the features distinguishing between n and v, they must somehow affect the syntactic derivation and/or be interpretable at one of the two interfaces, according to a weakened version of Chomsky's (1995) Full Interpretation. What we cannot argue for is [N] and [V] as purely taxonomic and otherwise inert 'features'. Hence, the following options present themselves:

a. Categorial features are PF-interpretable. This does not seem to be a viable option, as cross-linguistically there is no clear, across-the-board phonological marking of nouns as opposed to verbs and so on.

b. [N] and [V] are purely morphological / post-syntactic features. In other words, they would be like grammatical gender and morphological class (noun declension and verb conjugation) features, as analyzed in Marantz (1991), Aronoff (1994), Embick & Noyer (2001) and Arad (2005: Ch. 2).6 However, the universal relevance and significance of the noun-verb distinction, which extend well beyond morphological exponence and word-class membership (Baker 2003), make such a claim highly implausible.

c. They are all uninterpretable, just like case features in Chomsky (1995). If this is so, then grammatical category would accordingly be a grammar-internal mechanism with no direct interpretive effect. As I will sketch below, this would be undesirable given the research on the semantic differences between (bare) nouns and (bare) verbs.

d. [N] and [V] are LF-interpretable (Déchaine 1993; Baker 2003). This looks like an option worth exploring, perhaps the only one.

Footnote 6: See Alexiadou & Müller (2007) for an account of such features as uninterpretable syntactic features.

Concluding, even more than in a model where categorial information is decided in the lexicon, it is necessary for categorial features to be syntactic, LF-interpretable features, acting (inter alia) as distinctive features on categorizers like n and v. Thus, there is no escape from categorial features (or from syntax, for that matter), a result already anticipated in the influential Chomsky (1970), of which this account is an updated version.7
Footnote 7: An anonymous reviewer wonders whether different subvarieties of n heads and v heads contain different types of [N] and [V]. I would think that this is not the case; rather, different n heads and different v heads would bear different additional LF-interpretable features. To wit, if Folli & Harley's (2005) three flavors of v exist (i.e., vCAUS, vDO, vBECOME), they would all share a [V] feature but each have different additional features; similarly for Lowenstamm's (2008) different flavours of n: they would all share an [N] feature but differ in their other features.

4. The content and LF interpretation of categorial features

Arguing that categorial features, now located on categorizing heads themselves, are LF-interpretable opens up anew the old question of the semantics of grammatical category: what it means to be a noun, what it means to be a verb. More generally, as suggested above, if we are serious about constructing nouns and verbs in syntax, we will have to address the question of what they are and how they and their projections are interpreted at LF. This question has been addressed since the beginnings of grammatical theorizing (see Baker 2003: Ch. 1 for a survey). At the same time, Baker (2003) has revived the discussion on grammatical category by foregrounding the role of categorial features as LF-interpretable. Before going on to propose how [N] and [V] are interpreted at LF, let us review some key points in Baker's book relevant to the discussion here.

A first point is that concepts of particular types are canonically mapped onto nouns or verbs cross-linguistically (Baker 2003: 293–294; see also Acquaviva 2009 on nominal concepts). To wit, object concepts are mapped onto 'prototypical' nouns (e.g., rock or tree), whereas dynamic event concepts are mapped onto 'prototypical' verbs (e.g., buy, hit, walk, fall). Contrary to claims that have been made in relation to the so-called 'Nootka debate', no natural language expresses rock, for instance, using a simplex verb, i.e., one consisting of a v and a root.8 To my mind, the question is settled in Baker (2003: 169–189), who draws from a wealth of typological data.

Footnote 8: Or a V and an adjective, in Baker's system.

A second point, following Baker (2003: 11–22), is that 'prototypical' members of each lexical category share the same grammatical properties with 'non-prototypical' ones. Comparing rock or tree with theory, liberty and game, we find them all to be equally 'nouny' in their morphological and syntactic properties in language after language. This has been extensively and convincingly argued for in Newmeyer's (1998: Ch. 4) refutation of Ross's (1973) 'nouniness' continuum. So, whichever way prototypicality matters, it does not really affect a word's grammatical behavior with regard to that word's membership of a lexical category.

Thirdly, category distinctions must correspond to perspectives on (concepts about) the world – not to notional categories (Baker 2003: 296–297). Thus:

(4) conceptual categorization ≠ linguistic categorization

Accordingly, although all physical objects are nouns cross-linguistically, not all nouns denote concepts of physical objects (David Pesetsky, p.c.): thus, rock and theory cannot belong together in any conceptual category. They can, however, be viewed by the language faculty in the same way.
This would entail that grammatical categories, such as 'noun' and 'verb', are particular interpretive perspectives on concepts: that there is a way in which rock and theory are treated the same by grammar, even if they share no common properties notionally. This stance is essentially taken in Uriagereka (1999) and Acquaviva (2008, 2009); it also informs Pesetsky & Torrego (2004, 2005).

Building on the above points, we turn to the next question: what is the interpretation of [N] and [V]? What are these different interpretive perspectives that categorial features impose on the material in their complement? A proposal, one related to but significantly diverging from that in Baker (2003), is the following:

(5) LF-interpretation of categorial features: A [V] feature imposes an extending-into-time perspective at LF; an [N] feature imposes a sortal perspective at LF.

Regarding [N] as encoding sortality: this is the stance taken by Baker (2003: Ch. 3) and extensively argued for there. Here, however, I will diverge from Baker's interpretation and follow the notion of sortality in Prasada (2008) and Acquaviva (2009). As Prasada (2008) notes, sortality incorporates three criteria: a criterion of application, a criterion of identity and a criterion of individuation. The criterion of application "means that the representation is understood to apply to things of a certain kind, but not others. Thus, the sortal DOG allows us to think about dogs, but not tables, trees, wood or any other kind of thing." (Prasada 2008: 6 – my emphasis; cf. also Chierchia 1998). In this respect the criterion of application differentiates sortal predicates from indexicals like this. The criterion of identity "provides the basis for thoughts like dogs, [which] by virtue of being dogs, remain dogs throughout their existence" (Prasada 2008: 7), for as long as external conditions permit them to maintain their existence (for a short time, like puppy, or for a long one, like water and universe). Acquaviva (2009: 1–5) contains more detailed and in-depth discussion of nominal concepts as such, which goes beyond this brief sketching of the two criteria of application and identity. In this article, however, we will be satisfied with Prasada's (2008) two criteria of application and identity as being enough to define sortality for our purposes, namely exploring what 'nominality' means in grammar and – crucially – what the interpretation of a nominal feature [N] is. Put differently, the criteria of application and identity adequately characterize the sortal interpretive perspective that [N] features impose on the concept they associate with, an association mediated by syntax.9

Footnote 9: The criterion of individuation, namely that "two instances of a kind are distinct because they are the kinds of things they are" (Prasada 2008: 8), does not apply to mass nouns. However, it may play a role in the object bias in the acquisition of nouns (Bloom 2000: Ch. 4) and in the perceived prototypical character of objects over substances in terms of nominality.
Let us turn to the interpretation of [V] now. In work by Givón (1984: Ch. 3), Langacker (1987) and Croft (1991), who conceive of categories as prototypes along a continuum of temporal stability after Ross (1973), verbs are placed at the least time-stable end of the spectrum. The intuition that the temporal perspective seriously matters for the interpretation of verbs is echoed in Ramchand (2008, my emphasis): "VP is the heart of the dynamic predicate, since it represents change through time, and it is present in every dynamic verb". In a similar vein, Acquaviva (2009: 2) notes that because "verbal meaning is based on event structure (cf. especially Ramchand 2008), it has a temporal dimension built in. Nominal meaning, by contrast, does not have a temporal dimension built in." If we replace 'meaning' by 'perspective' here, we can claim that [V] encodes a perspective over the concept with which it is associated as extending into time: verbs and their projections are the basic ingredients of events. We can actually call Vs and VPs subevents, contributing the temporal perspective to event structures.

Some consequences of the feature content of [N] and [V] as described in (5) include the following. First of all, we can now understand why objects – and also substances – are typically conceived as sortal concepts in the way sketched above: they smoothly satisfy both criteria of application and identity. This is compatible with the canonical mapping of such concepts onto nouns. At the same time, we expect dynamic events (activities, achievements, accomplishments) to typically be conceived as extending into time, hence the canonical mapping of such concepts onto verbs. If, on top of everything, dynamic events are compositionally derived from states and states are roughly equivalent to VPs, then a theory of event structure, such as in Borer (2005) and Ramchand (2008), receives added justification along the lines of verbal constituents being inherently (sub-)eventive by virtue of their temporal dimension.10

Footnote 10: Essentially, here we assume the framework and insights in Rappaport Hovav & Levin (1998) as fleshed out in Borer (2005) and Ramchand (2008), although the latter is not based on roots. I wish to thank a reviewer for raising the issue.

Furthermore, Tense and Voice exclusively combine with verbs exactly because the perspective a [V] feature imposes on the root within the projection of v is one of extending into time (Tonhauser 2007). Similarly, Number combines with nouns: the perspective an [N] feature imposes on the root within an nP is that of a sortal concept which already fulfils the third sortal criterion (that of individuation) or can be coerced to do so by classifiers and Number (Borer 2005). In any case, the important part of (5) is that [N] and [V] on n and v encode different perspectives, rather than different inherent properties of the concept itself as expressed by the root. This is actually a most welcome consequence of syntactically decomposing grammatical category, in the light of pairs like N work – V work, and it brings us to the answer to the question of what distinguishes n from v:

(6) Categorial features [N] and [V] are interpretable on the n and v categorizers respectively.

As seen, categorial features provide the fundamental interpretive perspective in which the meaning of the categorizer's complement will be negotiated. This will become especially relevant in the context of discussing the behaviour of roots below.

5. Where roots grow

It is now time to turn to how categorizers – their categorial features, actually – enable the insertion of roots in syntax. Generally speaking, roots need to be assigned a category and they cannot be freely inserted in syntax, as observed in Baker (2003: 268) and – from a slightly different perspective – Acquaviva (2008).
This ban on free roots and the necessity for them to be assigned a category is the Categorization Assumption of Embick and Marantz (2008: 6). Illustrating this below, we see that a root cannot be directly merged with a functional head, say, a determiner or a complementizer head: such DPs or CPs are illicit. This generalization is supported by a cursory look at languages where roots are morphologically bound, like Italian and Spanish:

(7) a. *[DP D root]
    b. *[CP C root]

Hence, the descriptive observation that roots must be categorized is made in the literature, but neither of the above works attempts to explain it. In the account presented here, the categorization assumption is equivalent to the understanding that roots cannot be interpreted unless inside the complement of a categorizer. Having decomposed lexical heads such as N and V into parts of nP and vP projections, we are left with the following elements for syntax to build structures with:11

a. Roots
b. Categorizers (e.g., n and v)
c. Functional heads (e.g., Num, D, T, C ...)

Footnote 11: The list is incomplete and will be revised and expanded; see (15).

Therefore the statement that 'roots need category' can be expanded into a descriptive generalization such as the one below:

(8) Roots can only be merged inside the complement of a categorizer, never of a functional head.

A more traditional way to express (8) is of course that functional heads, unlike categorizers, cannot support real descriptive content, as in Abney's (1987: 64–65) oft-quoted criteria. However, we must still address the real question of why configurations like those in (7) are impossible and why (8) holds.

In order to reach a principled answer, we need to weave together three yarns. First, we take seriously the hypothesis that roots are syntactically unexceptional, in that they get merged inside the complement of material below the categorizers or, perhaps, project their own phrases, too (Marantz 2006; Harley 2007, 2009). Consider that, as it stands, this hypothesis goes against what we have in (7) and (8). Second, we need to carefully consider the LF interpretation of roots themselves, their semantic content; here I will assume that the semantic content of the root is seriously underspecified/impoverished. Third, we will employ our newly developed understanding of categorial features on categorizers as providing the necessary perspective (e.g., sortal or temporal) for the root to be interpreted in; we will also explain why the absence of such an interpretive perspective prevents roots from participating in legitimate syntactic objects. In other words, I argue that categorizers exclusively provide the 'context', in the sense of Marantz (2000), for the root to be assigned an interpretation and/or a matching entry in the encyclopedia. Let us now look at the second and third yarns in more detail.
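Before doing so, and purely as an illustration rather than as part of the proposal itself, the descriptive generalization in (8) and the illicit configurations in (7) can be given a minimal computational sketch. The short Python rendering below uses hypothetical names (Root, Head, Phrase, merge) and simply encodes the idea that only a head bearing an interpretable categorial feature can close off uncategorized root material, while a head bearing only an uninterpretable categorial feature cannot:

from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class Root:
    name: str                        # e.g. "GROW"; no UG features, semantically underspecified

@dataclass
class Head:
    label: str                       # "v", "n", "P", "D", "C", ...
    cat: Optional[str] = None        # interpretable [N]/[V]: categorizers only
    ucat: Optional[str] = None       # uninterpretable [uN]/[uV]: 'major' functional heads

@dataclass
class Phrase:
    head: Head
    comp: Union[Root, "Phrase", None]
    free_root: bool                  # does the object still contain an uncategorized root?

def merge(head: Head, comp: Union[Root, "Phrase", None]) -> Phrase:
    """Merge a head with its complement, enforcing the generalization in (8)."""
    has_free_root = isinstance(comp, Root) or (isinstance(comp, Phrase) and comp.free_root)
    if head.ucat is not None and has_free_root:
        # cf. (7): *[DP D root], *[CP C root] -- a functional head cannot categorize
        raise ValueError(f"*[{head.label}P {head.label} ... root]: uncategorized root under a functional head")
    # a categorizer (interpretable [N]/[V]) closes off the root; category-less material passes it up
    return Phrase(head, comp, free_root=has_free_root and head.cat is None)

# Licit: a root merged below a categorizer, with functional structure on top
vP = merge(Head("v", cat="V"), merge(Head("P"), Root("GROW")))
dP = merge(Head("D", ucat="uN"), merge(Head("n", cat="N"), Root("GROW")))
# Illicit, cf. (7): merge(Head("D", ucat="uN"), Root("GROW")) raises ValueError

Nothing hinges on this particular encoding; it merely restates (7) and (8) in procedural terms before the explanation for them is developed below.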
5.1. The interpretation of free roots

Bare roots are sometimes thought of as standing for stative predicates (Alexiadou 2001) and, moreover, as containing argument structure information (Doron 2003; Levin & Rappaport Hovav 2005). This would make them the semantic core of verbs; e.g., a root like OPEN would be the 'core element' of the verbs in The window opened and Tim opened the window. However, Acquaviva (2008, 2009) argues against any semantic content for roots along the following lines, already familiar from Aronoff (2007):12 as the same root can underlie a number of nouns, verbs and adjectives with significantly different meanings, lexical information seems to be root-external (Acquaviva 2008: 5); this is something to be expected if lexical information is assigned to grammatical structures by the encyclopedia. It is not the case that the 'meaning' of the root can be a (proper) subset of that of the nouns, verbs and adjectives in the derivation of which it participates. To wit, Aronoff (2007: 819) observes that "words have morphological structure even when they are not compositionally derived, and roots are morphologically important entities, [even] though not particularly characterized by lexical meaning." He supplies the example of the Hebrew root KBŠ (typically rendered as 'press'), which derives nouns such as keveš ('gangway', 'step', 'degree', 'pickled fruit'), kviš ('paved road', 'highway'), kviša ('compression'), kivšon ('furnace', 'kiln'), and verbs like kavaš ('conquer', 'subdue', 'press', 'pave', 'pickle', 'preserve', 'store', 'hide') and kibeš ('conquer', 'subdue', 'press', 'pave', 'pickle', 'preserve').13 We realize that there is no way in which 'step', 'degree', 'furnace', 'pave', 'pickle' and 'conquer' can be compositionally derived from 'press', or the like. If these are 'lexical idioms', then we are better off arguing, as Marantz (2000), Aronoff (2007) and Acquaviva (2008, 2009) essentially claim, that all lexemes are idioms.

Footnote 12: Acquaviva also discusses the issue of morphological class membership information and the problems arising if we argue it (not) to be encoded on the root.

Footnote 13: In Levinson's (2007) analysis of roots as polysemous, all these meanings should be listed as possible interpretations of the root KBŠ.

What is important for the discussion here is that roots on their own have minimal semantic content or, as Arad (2003, 2005: Ch. 3) proposes, that they are severely underspecified. This could be understood to result in free roots not being adequately specified to stand on their own as legitimate LF-objects.14

Footnote 14: The semantic underspecification of (uncategorized) roots can be understood as the reason they are not legitimate objects at LF, but more needs to be said on the matter. For instance, Horst Lohnstein (p.c.) points out that a root like SPIT could still be interpretable in some sense even if it is underspecified, unlike KBŠ; the 'basic meaning' of SPIT as a root is more clearly circumscribed. However, even a relatively straightforward root like SPIT means much less when it is uncategorized than, e.g., the verb it names (see Rappaport Hovav & Levin 1998 for discussion).

Consider that syntactic objects at LF, say a vP phase, consist purely of interpretable and/or valued UG features and roots. Now, by hypothesis, syntax uses the operation Agree in order to eliminate uninterpretable and/or unvalued UG features before the phase is completed (i.e., before the derivation is sent off to the interface). What about roots?
Roots are essentially imported into the language faculty – they can, for instance, be borrowed or even coined – and they are extraneous to it. More precisely, it seems that roots are elements coming from outside FLN (the Faculty of Language in the Narrow sense); however, FLN must manipulate roots. If no roots are manipulated by FLN in a particular derivation, we get expressions made up exclusively of UG features, like "This is her", "I got that", "It is here", etc. (cf. Emonds 1985: Ch. 4). In other words, it is the ability of FLN to manipulate roots that enables it to denote concepts and, ultimately, to be used to "refer". However, since roots are extraneous to FLN, and given that they probably do not contain UG features (whatever their content), they need to be somehow dealt with, and categorization is exactly the way this is achieved. In a nutshell: uncategorized roots are FLN-extraneous; either because of this alone or also because they are semantically underspecified themselves, uncategorized roots would not be recognized by LF/SEM. The general claim here is that syntax does not use a special operation to 'acclimatize' roots but embeds them within a categorizer projection, whose categorial features provide an interpretive perspective in which the Conceptual-Intentional systems will associate the root with conceptual content. The short of it is that

(9) (Free/uncategorized) roots are not readable by the Conceptual-Intentional/SEM systems.

5.2. The role of categorization

Let us now take up the third yarn, going from (9) to (8): clearly, embedding roots in the complement of a categorizer cancels out the roots' LF-deficient character and allows them to participate in LF-convergent derivations. By (5) and (6), this is achieved because the categorial features [N] or [V] on n or v provide the interpretive perspective for the root to be interpreted in. At the same time, (8) is ensured because no functional heads bear interpretable categorial features (more on this below). This picture is completed by what is pointed out in the very definition of the categorization assumption (Embick & Marantz 2008: 6): "If all category-defining heads are phase heads [...] the categorization assumption would follow from the general architecture of the grammar [...]".15

Footnote 15: Already in Marantz (2000): "To use a root in the syntax, one must 'merge' it (combine it syntactically) with a node containing category information." I of course take [N] and [V] to be the said "category information" here.

Given that roots cannot be directly embedded in the context of a functional phase head like C, cf. the discussion of (7), I would be tempted to think that the phasehood of categorizers ('category-defining heads') is a necessary but not sufficient condition for the categorization of roots.16

Footnote 16: I hedge my claim because, as a reviewer points out, non-phasal heads could also provide a context for the interpretation of roots: categorizers could in principle be non-cyclic heads. Note that if the objections in Anagnostopoulou & Samioti (2009) are correct, this is exactly the way things are – see also Footnote 2.

The argument goes as follows. Roots are semantically impoverished when they are free. However, we already noted that they are syntactically unexceptional objects: they may merge with elements such as low applicatives, low causatives, abstract prepositions, particles, etc.; they can also participate in small clause structures and, perhaps, project their own phrases (Marantz 2006; Harley 2007, 2009). Nevertheless, the resulting structures would still contain an offending item, the SEM-deficient root itself. In this sense, a 'free' root is simply an uncategorized root.
In order to supply a concrete example, consider the configuration in (10), adapted from Marantz (2000: 27):

(10) [GROW GROW [tomatoes]]

Merge creates a syntactic object from the root and its object 'tomatoes', with the root projecting a label: a syntactically unexceptional object. However, if this object is embedded under functional structure without the 'intervention' of a categorizer, then the resulting syntactic object will lack an interpretive perspective, because of the SEM-deficient root GROW. This is where the categorial feature on the categorizer, [V] on v or [N] on n, is necessary. It assigns an interpretive perspective to the object, as extending into time or as sortal, thereby enabling the resulting vP or nP – the First Phase – to be interpreted. At the same time, the root-categorizer object, associated with an interpretive perspective, can be matched with a vocabulary item (grow or growth) and an appropriate 'lexical' concept in the encyclopedia, a 'meaning' (cf. Aronoff 2007).

(11) a. [vP v [GROW GROW [tomatoes]]]  "grow tomatoes"
     b. [nP n [GROW GROW [tomatoes]]]  "growth (of) tomatoes"

Summarizing, categorial features on the categorizer close off material associated with the root exactly by providing this material with a fundamental perspective for the conceptual systems to view it in. Thus, no such perspective can be provided by a (phase) head without categorial features: it is [N] or [V] on n or v that provide the necessary perspective, the "context", for the root to be interpreted. Therefore, the association of root material with categorial features [N] and [V] enables the former to be interpreted at the interfaces and beyond. The categorization of roots is not a narrow-syntactic requirement, but one of the interface between syntax and the conceptual-intentional systems. It indeed follows "from the general architecture of the grammar" (Embick & Marantz 2008: 6).
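To make the matching of (11) with grow and growth concrete in a different medium: the pairing of a categorized root with a vocabulary item and a listed 'meaning' can be thought of as a lookup keyed by root-categorizer pairs rather than by roots alone. The toy Python fragment below is only a sketch under that assumption; the dictionary entries, the glosses and the function name interpret are illustrative, not an actual fragment of the encyclopedia:

from typing import Optional, Tuple

# listed 'meanings' are keyed by (root, categorizer) pairs, not by the root itself
ENCYCLOPEDIA = {
    ("GROW", "v"): ("grow",   "cause or come to develop"),
    ("GROW", "n"): ("growth", "process or result of growing"),
    ("KBŠ",  "v"): ("kavaš",  "conquer, press, pickle, ..."),      # cf. Aronoff (2007)
    ("KBŠ",  "n"): ("keveš",  "gangway, step, pickled fruit, ..."),
}

def interpret(root: str, categorizer: Optional[str]) -> Tuple[str, str]:
    """Return a vocabulary item and a listed meaning; fail on uncategorized roots, cf. (9)."""
    if categorizer is None:
        raise ValueError(f"free root {root} is not readable by the C-I/SEM systems")
    return ENCYCLOPEDIA[(root, categorizer)]

print(interpret("GROW", "v"))   # ('grow', 'cause or come to develop')
print(interpret("GROW", "n"))   # ('growth', 'process or result of growing')

The point carried over from (11) is simply that neither grow nor growth is computed from the root; both are listed against the root-categorizer configuration, which anticipates the discussion of idiomaticity that follows.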
5.3. nPs and vPs as idioms

Before turning to the status of categorizers, a note on the systematic idiomaticity of nPs and vPs is in order. To begin with, the interpretation of the First Phase is canonically non-compositional: consider well-known pairs such as N water – V water – A watery, N dog – V dog, N castle – V castle, N deed – V do, etc., already highlighted in Chomsky (1970); meanings associated with material such as root-v and root-n in (11) are invariably listed and almost always idiosyncratic. This is a well-known and widely examined fact, one that prompted the analysis of word-formation in ways different from phrase-building (see Marantz 1997, 2000, 2006 for overviews). This canonical idiosyncrasy/non-compositionality is what tempts one to think of the first phase as a somehow privileged domain for idiomaticity and to correlate idiomaticity of material with its appearing below the categorizer, within the categorizer's complement. However, we need to consider two factors:

a. Idiomaticity, non-predictability and non-compositionality are in part explained away once the role of subcategorial material (Marantz's "inner morphemes") and argument structure are spelled out in more detail – see Marantz (2005) and Harley (2005a) for illustration.

b. Given the discussion above on the impoverished or nonexistent semantic import of roots themselves, non-compositional and idiosyncratic interpretation of material in nP and vP is the only option: how could compositional interpretation deal with the un- or under-specified meaning of roots?

Moreover, as has been argued in the literature, idioms larger than the first phase do exist (for overview and discussion: Nunberg, Sag & Wasow 1994; McGinnis 2002 – pace Svenonius 2005). So, although an idiomatic interpretation may be associated with syntactic constituents (phases?) of various sizes, idiomaticity is the only option for first phases, exactly because of the semantic impoverishment/deficiency of roots. This point is already alluded to in Arad (2005).

In a nutshell, the systematic idiomaticity of first phases is not due to the categorizer acting as a limit below which interpretation is/can be/must be non-compositional, as suggested in Marantz (2000). It is because the First Phase (an nP or a vP) contains a root, an LF-deficient element by (9), which would resist any compositional treatment anyway. Therefore, inner versus outer morphology phenomena (Marantz 2006) are due to the semantic impoverishment of roots: once roots have been dispatched to the interfaces with the rest of the complement of the categorizer, compositional interpretation may canonically apply in the next phase up.

5.4. Categorizers are not functional

Based on the above discussion, we can summarize our analysis of categorial features on categorizers as follows:

(12) Categorial features
    a. contribute the interpretive perspective phase-internally, and
    b. identify the whole phase externally (as 'nominal', 'verbal' etc.).

So, categorial features on categorizers enable roots to participate in the interpretation, essentially enabling syntax to map concepts not encoded by UG features. They also form the basis on which functional structure is built. Although they are phase heads, categorizers are clearly not functional heads like complementizer or determiner. Empirically, this is already evident from the fact that no functional head can categorize roots and root material. This is true both for 'major' functional categories such as voice, aspect, tense, complementizer, number or determiner (recall (7) above) and for those sub-categorial elements Marantz (2000, 2006) terms "inner morphemes". I think that this is exactly the difference between 'lexical' and 'functional': only the former can categorize a free root in its complement. So, effectively, there is

(13) only one class of 'lexical elements' that qualify as atomic 'verbs': v; and only one class of 'lexical elements' that qualify as atomic 'nouns': n.17

Footnote 17: The interested reader can find a discussion of different flavors of v in Folli & Harley (2005) and of different flavors of n in Lowenstamm (2008).

Hence, unlike the received way of dealing with them, categorizers are not functional heads; on the contrary, they are the only possible lexical heads!
A way to capture the above observations systematically is to adopt categorial deficiency for 'major' functional categories (Panagiotidis 2002: 170–183, submitted). According to this hypothesis, heads such as voice, aspect, tense, complementizer, etc. bear an uninterpretable categorial feature [uV], while number and determiner bear an uninterpretable categorial feature [uN].18 I am not going to expand on the merits (and possible shortcomings) of such an approach here, except to stress that it can derive several well-known phrase structure facts (see Panagiotidis, submitted, for details). What is important for our purposes is that the categorial deficiency hypothesis would offer us a way to distinguish between categorially deficient bundles of features ('major' functional elements), category-less bundles of formal features (subcategorial elements, "inner morphemes"), categorizers and roots.

Footnote 18: Essentially, categorial deficiency of functional elements is in the spirit of Chomsky (1995), but takes further afield his discussion of categorial feature strength and "affixal features" (1995: 269) on functional heads.

(14) Categorizers are not functional, because they bear interpretable categorial features, not uninterpretable ones. Categorizers are (the only) lexical elements.

The distinction between lexical elements (categorizers) and functional elements is anything but terminological: only the former bear interpretable categorial features, thus only lexical elements can categorize roots and root material. A further distinction between functional elements, bearing uninterpretable categorial features, and category-less bundles of formal features (such as "inner morphemes") also emerges: the former need a categorized constituent in their complement (Panagiotidis, submitted), whereas the latter can be directly associated with roots.19

Footnote 19: I am grateful to a Linguistic Review referee for raising this matter.

The overview below – which must be taken in conjunction with the table in Marantz (2000), numbered item (42) there – summarizes this new typology of grammatical elements:

(15) Elements of grammar
    a. Categorizers: made of UG features; categorial status: categorial feature; also known as lexical heads; includes n, v; examples: -ment, -th.
    b. 'Major' functional elements: made of UG features; categorial status: uninterpretable categorial feature; also known as functional categories; includes Voice, Asp, T, C, D, Num; examples: -ing, to, will, if, the, -s.
    c. Subcategorial category-less elements: made of UG features; categorial status: category-less; also known as "inner morphemes"; includes particles, low applicatives, low statives, low causatives; examples: -ee, de-, up, in.
    d. Roots: made of ?; categorial status: category-less; "descriptive material"; includes roots; examples: CAT, WORK, KTB.

A sketch of a tree illustrates the typical position of the above elements at the level of the purported Voice phase; the complement of the first vP phase is the constituent embedded under v:

(16) [VoiceP Voice[uV] [vP v[V] [ inner morpheme [ ROOT YP ]]]]
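For concreteness only, the division of labour in (14)–(16) can also be rendered as a small feature-checking sketch. The Python fragment below assumes hypothetical names (Node, agree) and an intentionally simplified Agree that deletes an uninterpretable [uN]/[uV] on a functional head against the interpretable [N]/[V] of the categorizer at the base of its projection; it is an illustration of the categorial deficiency idea, not Panagiotidis' own formalism:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    label: str
    cat: Optional[str] = None        # interpretable categorial feature: "N" or "V"
    ucat: Optional[str] = None       # uninterpretable categorial feature: "uN" or "uV"
    comp: Optional["Node"] = None    # the head's complement

def agree(probe: Node) -> None:
    """Check the probe's [uN]/[uV] against the closest interpretable [N]/[V] below it."""
    goal = probe.comp
    while goal is not None and goal.cat is None:
        goal = goal.comp             # skip category-less material (inner morphemes, roots)
    if goal is None or probe.ucat != "u" + goal.cat:
        raise ValueError(f"{probe.label}: [{probe.ucat}] cannot be checked")
    probe.ucat = None                # uninterpretable feature eliminated before Spell-Out

# cf. (16): [VoiceP Voice[uV] [vP v[V] [ ROOT ... ]]]
vP = Node("v", cat="V", comp=Node("GROW"))
voiceP = Node("Voice", ucat="uV", comp=vP)
agree(voiceP)                        # [uV] on Voice checked against [V] on v
# agree(Node("Num", ucat="uN", comp=vP)) would fail: no [N] at the base of this projection

On this picture, a functional projection can only be erected on top of a categorizer that supplies the matching interpretable feature, which is the phrase-structural effect the categorial deficiency hypothesis is meant to derive.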
5.5. No roots: solo categorizers as 'semi-lexical heads'

The fact that what we used to classify as 'lexical heads' are in reality the categorizers themselves can be glimpsed by looking at semi-lexical categories (Corver & van Riemsdijk 2001). Semi-lexical elements are lexical nouns and verbs that do not carry any descriptive content, including English one (as in the right one), 'empty nouns' (Panagiotidis 2003) and some types of light verbs. Emonds (1985: Ch. 4) already analyses semi-lexical elements (which he calls grammatical nouns and grammatical verbs) as instances of N and V heads without any descriptive, concept-denoting features. This line of analysis is taken up and developed in van Riemsdijk (1998), Haider (2001), Schütze (2001) and Panagiotidis (2003). Curiously, Emonds' definition of such elements, as lexical elements bearing only formal features, is essentially what we would take categorizers themselves to be in a lexical decomposition framework; consider, for instance, Folli & Harley's (2005) postulation of the three v heads vCAUS, vDO and vBECOME. Indeed, Emonds (1985: 159–168) claims that grammatical nouns and verbs, as they are devoid of descriptive content, can only be distinguished from each other by virtue of their formal features.

It seems, then, that when we deal with semi-lexical elements, the root supplying the descriptive content – whichever way it does so – is absent. So, a first straightforward conclusion would be that in order to have a 'noun' (nP) or a 'verb' (vP) a root is not necessary, whereas a categorizer always is. This is also evident in Harley's (2005b) discussion of one as the vocabulary item inserted when an n is not associated with a root, under certain morphosyntactic conditions.20 The above strongly suggests that (category-less) roots and subcategorial material are – syntactically speaking – optional and that a well-formed syntactic representation can be constructed using just categorizers and a functional structure superimposed on them.

Footnote 20: If categorial deficiency is correct – cf. the state of affairs in (16) – there can be no projection of (categorially uninterpretable) functional structure without a categorizer, which will bear an interpretable categorial feature, at its base. An Agree operation eliminates uninterpretable versions of categorial features on functional heads (Panagiotidis, submitted).

6. Conclusion

Categorizers impose interpretive perspectives on the root material in their complement via their categorial features [N] and [V]. At the same time, they form the foundation stone of functional structure, which – informally speaking – they 'type' as 'nominal', growing out of kinds and sortal predicates, and 'verbal', growing out of sub-events with a temporal perspective. Categorizers are the lexical heads, minus the descriptive material; the latter is negotiated by the encyclopedia looking inside the categorizers' complements, at the roots and at the subcategorial functional elements (cf. Marantz 2000: 7). Roots are both category-less and seriously impoverished semantically; therefore they need to be given an interpretive perspective (sortal or temporal) before they can be negotiated by the conceptual-intentional systems. This suffices to explain the systematically non-compositional character of interpreting material inside the first phase and the consequent necessity for listed 'meanings'. Functional heads such as determiner or tense should in principle be able to directly categorize root material as well.
However, they fail to do so, at least because they lack the relevant [N] and [V] features. Therefore, taking [N] and [V] as LF-interpretable features on categorizers enables us to understand lexical decomposition, and syntactic categorization more particularly, in a more coherent and certainly more principled fashion.

University of Cyprus
[email protected]

References

Abney, Steven P. 1987. The English noun phrase in its sentential aspect. MIT PhD dissertation.
Ackema, Peter & Ad Neeleman. 2004. Beyond morphology. Oxford: Oxford University Press.
Acquaviva, Paolo. 2008. Roots and lexicality in distributed morphology. Paper given at the Fifth York-Essex Morphology Meeting (http://ling.auf.net/lingBuzz/000654).
Acquaviva, Paolo. 2009. The roots of nominality, the nominality of roots. Unpublished ms., University College Dublin.
Alexiadou, Artemis. 2001. Functional structure in nominals. Amsterdam: Benjamins.
Alexiadou, Artemis & Gereon Müller. 2007. Class features as probes. In Asaf Bachrach & Andrew Nevins (eds.), Inflectional identity, 101–155. Oxford: Oxford University Press.
Anagnostopoulou, Elena & Yota Samioti. 2009. Domains for idioms. Paper presented at the Roots workshop (June 10–12, 2009), University of Stuttgart.
Arad, Maya. 2003. Locality constraints on the interpretation of roots: The case of Hebrew denominal verbs. Natural Language and Linguistic Theory 21. 737–778.
Arad, Maya. 2005. Roots and patterns: Hebrew morpho-syntax. Dordrecht: Springer.
Aronoff, Mark. 1994. Morphology by itself. Cambridge, MA: MIT Press.
Aronoff, Mark. 2007. In the beginning was the word. Language 83. 803–830.
Baker, Mark. 2003. Lexical categories: Verbs, nouns and adjectives. Cambridge: Cambridge University Press.
Basilico, David. 2008. Particle verbs and benefactive double objects in English: High and low attachments. Natural Language and Linguistic Theory 26. 731–773.
Beard, Robert. 1995. Lexeme-morpheme base morphology. Albany: SUNY Albany Press.
Bloom, Paul. 2000. How children learn the meaning of words. Cambridge, MA: MIT Press.
Bobaljik, Jonathan D. & Höskuldur Thráinsson. 1998. Two heads aren't always better than one. Syntax 1. 37–71.
Borer, Hagit. 2005. Structuring sense. Oxford: Oxford University Press.
Chierchia, Gennaro. 1998. Reference to kinds across languages. Natural Language Semantics 6. 339–405.
Chomsky, Noam. 1970. Remarks on nominalization. In Roderick Jacobs & Peter Rosenbaum (eds.), Readings in English transformational grammar, 184–221. Waltham, MA: Ginn & Company.
Chomsky, Noam. 1995. The minimalist program. Cambridge, MA: MIT Press.
Chomsky, Noam. 2000. Minimalist inquiries: The framework. In Roger Martin, David Michaels & Juan Uriagereka (eds.), Step by step: Essays on minimalist syntax in honor of Howard Lasnik, 89–155. Cambridge, MA: MIT Press.
Chomsky, Noam. 2001. Derivation by phase. In Michael Kenstowicz (ed.), Ken Hale: A life in language, 1–52. Cambridge, MA: MIT Press.
Chomsky, Noam. 2004. Beyond explanatory adequacy. In Adriana Belletti (ed.), Structures and beyond, 104–131. Oxford: Oxford University Press.
Chomsky, Noam. 2005. Three factors in language design. Linguistic Inquiry 36. 1–22.
Corver, Norbert & Henk van Riemsdijk (eds.). 2001. Semi-lexical categories. Berlin: Mouton de Gruyter.
Corver, Norbert. 1997. The internal syntax of the Dutch extended adjectival projection. Natural Language and Linguistic Theory 15. 289–368.
Croft, William. 1991. Syntactic categories and grammatical relations. Chicago: University of Chicago Press.
Déchaine, Rose-Marie. 1993. Predicates across categories. Unpublished PhD dissertation, University of Massachusetts, Amherst.
Doetjes, Jenny. 2008. Adjectives and degree modification. In Louise McNally & Christopher Kennedy (eds.), Adjectives and adverbs: Syntax, semantics and discourse, 123–155. Oxford: Oxford University Press.
Doron, Edit. 2003. Agency and voice: The semantics of the Semitic templates. Natural Language Semantics 11. 1–67.
Embick, David & Alec Marantz. 2008. Architecture and blocking. Linguistic Inquiry 39. 1–53.
Embick, David & Rolf Noyer. 2001. Movement operations after syntax. Linguistic Inquiry 32. 555–595.
Embick, David. 2000. Features, syntax, and categories in the Latin perfect. Linguistic Inquiry 31. 185–230.
Emonds, Joseph. 1985. A unified theory of syntactic categories. Dordrecht: Foris.
Folli, Raffaella, Heidi Harley & Simin Karimi. 2003. Determinants of event type in Persian complex predicates. In Luisa Astruc & Marc Richards (eds.), Cambridge occasional papers in linguistics 1, 100–120. Cambridge: University of Cambridge.
Folli, Raffaella & Heidi Harley. 2005. Consuming results in Italian and English: Flavours of v. In Paula Kempchinsky & Roumyana Slabakova (eds.), Aspectual inquiries, 1–25. Dordrecht: Springer.
Givón, Talmy. 1984. Syntax: A functional-typological introduction. Amsterdam: Benjamins.
Grimshaw, Jane. 1991. Extended projection. Unpublished ms., Brandeis University.
Haeberli, Eric. 2002. Features, categories and the syntax of A-positions: Cross-linguistic variation in the Germanic languages. Dordrecht: Kluwer.
Haider, Hubert. 2001. Heads and selection. In Norbert Corver & Henk van Riemsdijk (eds.), Semi-lexical categories, 67–96. Berlin: Mouton de Gruyter.
Hale, Kenneth & Samuel Jay Keyser. 1993. On argument structure and the lexical expression of syntactic relations. In Kenneth Hale & Samuel Jay Keyser (eds.), The view from building 20, 53–109. Cambridge, MA: MIT Press.
Hale, Kenneth & Samuel Jay Keyser. 2002. Prolegomenon to a theory of argument structure. Cambridge, MA: MIT Press.
Halle, Morris & Alec Marantz. 1993. Distributed Morphology and the pieces of inflection. In Kenneth Hale & Samuel Jay Keyser (eds.), The view from building 20, 111–176. Cambridge, MA: MIT Press.
Harley, Heidi & Rolf Noyer. 1998. Licensing in the non-lexicalist lexicon: Nominalizations, vocabulary items and the Encyclopaedia. MIT Working Papers in Linguistics 32. 119–137. Cambridge, MA: MIT.
Harley, Heidi & Rolf Noyer. 1999. State-of-the-article: Distributed Morphology. GLOT International 4(4). 3–9.
Harley, Heidi. 2005a. How do verbs get their names? Denominal verbs, manner incorporation and the ontology of verb roots in English. In Nomi Erteschik-Shir & Tova Rapoport (eds.), The syntax of aspect: Deriving thematic and aspectual interpretation, 42–64. Oxford: Oxford University Press.
Harley, Heidi. 2005b. Bare phrase structure, a-categorial roots, one-replacement and unaccusativity. In Slava Gorbachov & Andrew Nevins (eds.), Harvard Working Papers on Linguistics 9, 1–19. Cambridge, MA: Harvard.
Harley, Heidi. 2007. The bipartite structure of verbs cross-linguistically, or why Mary can't exhibit John her paintings. Talk presented at the 2007 ABRALIN Congress, Belo Horizonte, Brazil, March 2007.
Harley, Heidi. 2009. The morphology of nominalizations and the syntax of vP. In Anastasia Giannakidou & Monika Rathert (eds.), Quantification, definiteness, and nominalization, 320–342. Oxford: Oxford University Press.
Hegarty, Michael. 2005. A feature-based syntax of functional categories. Berlin: De Gruyter.
Higginbotham, James. 1985. On semantics. Linguistic Inquiry 16. 547–593.
Hudson, Richard. 2003. Gerunds without phrase structure. Natural Language and Linguistic Theory 21. 579–615.
Jackendoff, Ray. 1977. X′ syntax: A study of phrase structure. Cambridge, MA: MIT Press.
Kiparsky, Paul. 1982. Word formation and the lexicon. In Fred Ingeman (ed.), Proceedings of the Mid-America Linguistics Conference, 3–29. Lawrence: University of Kansas.
Kratzer, Angelika. 1996. Severing the external argument from its verb. In Johan Rooryck & Laurie Zaring (eds.), Phrase structure and the lexicon, 109–137. Dordrecht: Kluwer.
Langacker, Ronald. 1987. Foundations of cognitive grammar. Stanford: Stanford University Press.
Larson, Richard. 1991. The projection of DP (and DegP). Unpublished ms., Stony Brook University.
Larson, Richard & Gabriel Segal. 1995. Knowledge of meaning. Cambridge, MA: MIT Press.
Lecarme, Jacqueline. 2004. Tense in nominals. In Jacqueline Guéron & Jacqueline Lecarme (eds.), The syntax of time, 441–476. Cambridge, MA: MIT Press.
Levin, Beth & Malka Rappaport Hovav. 2005. Argument realization. Cambridge: Cambridge University Press.
Levinson, Lisa. 2007. The roots of verbs. NYU PhD dissertation.
Longobardi, Giuseppe. 1994. Reference and proper names: A theory of N-movement in syntax and logical form. Linguistic Inquiry 25. 609–665.
Lowenstamm, Jean. 2008. On n, √, and types of nouns. In Jutta M. Hartmann, Veronika Hegedüs & Henk van Riemsdijk (eds.), Sounds of silence: Empty elements in syntax and phonology (North Holland Linguistic Series, Linguistic Variations 63), 107–144. Amsterdam: Elsevier.
Marantz, Alec. 1991. Case and licensing. In Proceedings of ESCOL 1991, 234–253. Columbus: Ohio State University.
Marantz, Alec. 1997. No escape from syntax: Don't try morphological analysis in the privacy of your own lexicon. University of Pennsylvania Working Papers in Linguistics 4. 201–225.
Marantz, Alec. 2000. Words. Unpublished ms., MIT.
Marantz, Alec. 2005. Rederived generalizations. Unpublished ms., MIT.
Marantz, Alec. 2006. Phases and words. Unpublished ms., NYU.
McGinnis, Martha. 2002. On the systematic aspect of idioms. Linguistic Inquiry 33. 665–672.
McNally, Louise & Christopher Kennedy. 2008. Adjectives and adverbs: Syntax, semantics, and discourse. Oxford: Oxford University Press.
Neeleman, Ad, Hans van de Koot & Jenny Doetjes. 2004. Degree expressions. The Linguistic Review 21. 1–66.
Newmeyer, Frederick. 1998. Language form and language function. Cambridge, MA: MIT Press.
Nunberg, Geoffrey, Ivan Sag & Thomas Wasow. 1994. Idioms. Language 70. 491–538.
Panagiotidis, Phoevos. 2002. Pronouns, clitics and empty nouns. Amsterdam: Benjamins.
Panagiotidis, Phoevos. 2003. Empty nouns. Natural Language and Linguistic Theory 21. 381–432.
Panagiotidis, Phoevos. Submitted. Functional heads, Agree and labels. Syntax.
Partee, Barbara. 1995. Lexical semantics and compositionality. In Lila Gleitman & Mark Liberman (eds.), An invitation to cognitive science, Volume 1: Language, 311–360. Cambridge, MA: MIT Press.
Pesetsky, David & Esther Torrego. 2004. Tense, case and the nature of syntactic categories. In Jacqueline Guéron & Jacqueline Lecarme (eds.), The syntax of time, 495–537. Cambridge, MA: MIT Press.
Pesetsky, David & Esther Torrego. 2005. Subcategorization phenomena and case-theory effects: Some possible explanations. Talk delivered at LAGB 2005, Fitzwilliam College, Cambridge.
Prasada, Sandeep. 2008. Aspects of a fully psychological theory of sortal representation. Unpublished ms., CUNY.
Ramchand, Gillian. 2008. Verb meaning and the lexicon: A first phase syntax. Oxford: Oxford University Press.
Rappaport Hovav, Malka & Beth Levin. 1998. Building verb meanings. In Miriam Butt & Wilhelm Geuder (eds.), The projection of arguments: Lexical and compositional factors, 97–134. Stanford, CA: CSLI Publications.
Ross, John. 1973. Nouniness. In Osamu Fujimura (ed.), Three dimensions of linguistic theory, 137–258. Tokyo: TEC Co. Ltd.
Rothstein, Susan. 1983. The syntactic forms of predication. MIT PhD dissertation.
Rothstein, Susan. 1999. Fine-grained structure in the eventuality domain: The semantics of predicative adjective phrases and be. Natural Language Semantics 7. 347–420.
Schütze, Carson. 2001. Semantically empty lexical heads as last resorts. In Norbert Corver & Henk van Riemsdijk (eds.), Semi-lexical categories, 127–187. Berlin: Mouton de Gruyter.
Svenonius, Peter. 2005. Idioms and domain boundaries. Unpublished ms., CASTL, University of Tromsø.
Thráinsson, Höskuldur. 1996. On the (non-)universality of functional categories. In Werner Abraham, Samuel D. Epstein, Höskuldur Thráinsson & Jan-Wouter Zwart (eds.), Minimal ideas: Syntactic studies in the minimalist framework, 253–282. Amsterdam: John Benjamins.
Tonhauser, Judith. 2007. Nominal tense? The meaning of Guaraní nominal temporal markers. Language 83. 831–869.
Uriagereka, Juan. 1999. Warps: Some thoughts on categorization. Theoretical Linguistics 25. 31–73.
van Riemsdijk, Henk. 1998. Categorial feature magnetism: The endocentricity and distribution of projections. Journal of Comparative Germanic Linguistics 2. 1–48.
Volpe, Mark. 2009. Root and deverbal nominalizations: Lexical flexibility in Japanese. Unpublished ms. http://ling.auf.net/lingBuzz/000789.