Syntactic Constructions in English-CUP (2020)
Jong-Bok Kim
Kyung Hee University, Seoul
www.cambridge.org
Information on this title: www.cambridge.org/9781108470339
DOI: 10.1017/9781108632706
© Jong-Bok Kim and Laura A. Michaelis 2020
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2020
Printed in the United Kingdom by TJ International Ltd, Padstow, Cornwall
A catalogue record for this publication is available from the British Library.
Library of Congress Cataloging-in-Publication Data
Names: Kim, Jong-Bok, 1966– author. | Michaelis, Laura A., 1964– author.
Title: Syntactic constructions in English / Jong-Bok Kim, Laura A.
Michaelis-Cummings.
Description: New York : Cambridge University Press, 2020. | Includes
bibliographical references and index.
Identifiers: LCCN 2019057511 (print) | LCCN 2019057512 (ebook) | ISBN
9781108470339 (hardback) | ISBN 9781108632706 (ebook)
Subjects: LCSH: English language – Syntax. | English language – Grammar.
Classification: LCC PE1361 .K565 2020 (print) | LCC PE1361 (ebook) | DDC
425–dc23
LC record available at https://lccn.loc.gov/2019057511
LC ebook record available at https://lccn.loc.gov/2019057512
ISBN 978-1-108-47033-9 Hardback
ISBN 978-1-108-45586-2 Paperback
Cambridge University Press has no responsibility for the persistence or accuracy of
URLs for external or third-party internet websites referred to in this publication
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
Contents
Preface page xi
Afterword 317
Appendix 320
Bibliography 337
Index 352
Preface
meaning, and use of English sentences, both simple and complex, including their
correct syntactic structures.
The book focuses primarily on the descriptive facts of English syntax,
presented through a ‘lexical lens’ that encourages students to recognize the
important contribution that words and word classes make to syntactic structure. It
then proceeds with the basic theoretical concepts of declarative grammar (in the
framework of SBCG), providing sample sentences. We have tried to make each
chapter maximally accessible to those with no background knowledge of English
syntax. We provide clear, simple tree diagrams that will help students understand
recursive structures in syntax. The theoretical notions are simply described but
framed as precisely as possible so that students can apply them in analyzing
English sentences. Each chapter also contains exercises ranging from straightforward to challenging, aiming to promote a deeper understanding of the factual
and theoretical contents of each chapter.
We have relied heavily on prior work on English syntax. In particular, much of the content, as well as our exercises, was inspired by or adapted from renowned textbooks including Aarts (1997, 2001), C. L. Baker (1995), Borsley (1991, 1996), Radford (1988, 1997, 2004), Miller (2000), Sag et al. (2003), Carnie (2002, 2011), and Hilpert (2014). These works have set the standard for
syntactic description and argumentation for decades.
Many people have supported and/or improved this textbook. This work owes a great intellectual debt to the late Ivan A. Sag, who demonstrated that an elegant and intuitive grammar formalism can also have extraordinary sweep and scope. Our thanks also go to Peter Sells for contributing foundations for this book in Kim and Sells (2008). We thank the anonymous reviewers of prior drafts of this book for detailed comments and suggestions that helped us reshape it. We are grateful for the advice and insights of linguistic colleagues including Anne Abeillé, Doug Arnold, Jóhanna Barðdal, Emily Bender, Bob Borsley, Rui Chaves, Suk-Jin Chang, Hee-Rahk Chae, Sae-Youn Cho, Incheol Choi, Jae-Woong Choi, Chan Chung, Mark Davies, Elaine Francis, Jonathan Ginzburg, Adele Goldberg, Martin Hilpert, Paul Kay, Jungsoo Kim, Valia Kordoni, Chungmin Lee, Juwon Lee, Kiyong Lee, Bob Levine, Philip Miller, Stefan Müller, Joanna Nykiel, Byung-Soo Park, Chongwon Park, Javier Pérez-Guerra, Jeffrey Runner, Manfred Sailer, Rok Sim, Sanghoun Song, Eun-jung Yoo, James Yoon, Frank Van Eynde, Gert Webelhuth, and Stephen Wechsler. We also thank students and colleagues at Kyung Hee University, Seoul, and the University of Colorado Boulder for their encouragement over the years. In particular, we thank the students who used drafts of this textbook and raised questions that helped us solidify its structure and content. We are also grateful to Helen Barton at Cambridge University Press for her outstanding advice and support, and to Catherine Dunn and Stanly Emelson for expert editorial and production assistance. The first author also acknowledges support from the Alexander von Humboldt Foundation, from which he received a Humboldt Research Award in 2019. Lastly, we thank our close friends and family members, whose love and understanding sustained us through the writing process.
1 What Is a Theory of English Syntax About?
In the same way, speakers who know English may accept (2a) and (2c), but not
(2b):3
(2) a. She swam.
b. *She swam the passengers.
c. She swam the passengers to three nearby boats.
This implies that knowing a language means that (English) speakers have linguistic knowledge sufficient to distinguish between ‘acceptable’ and ‘unacceptable’ sentences. However, when speakers are asked to articulate what kind of knowledge allows them to make these distinctions, it is not easy for them to describe it.
This knowledge of language, often called linguistic competence, is the ability
to speak a language. Knowing one’s native language requires neither skill nor
talent, but it is nonetheless an accomplishment worthy of investigation.
Linguistic competence involves several different levels of language structure.
It includes phonetic and phonological competence: knowledge of the sounds
1 The example in (1a) is from COCA (the Corpus of Contemporary American English), a collection of 560 million words of text from five different genres: spoken, fiction, magazine, newspaper, and academic texts. Throughout this book, we will use many corpus examples (extracted mainly from COCA) to portray English as it is actually spoken. We will, however, suppress their exact sources in the interest of readability.
2 The notation * indicates that the particular example is ungrammatical or unacceptable. The notion
of grammaticality (grammatical or ungrammatical) is closely related to that of acceptability
(acceptable or unacceptable). Grammaticality has to do with whether a given sentence conforms
to the rules and constraints of the relevant grammar, while acceptability has to do with whether
a native English speaker would judge the sentence to be an instance of native English. Unless a
distinction is required, we use these notions interchangeably.
3 These examples are based on those used by Goldberg (1995). See Chapter 4.5 for discussion of
such sentences.
The speaker’s intent in uttering such sentences is not just to inquire about the
hearer’s ability but also to request an aisle seat and the syrup, respectively.
The person to whom such a question is directed can infer that it is actually a
directive.
The pivotal competence that we are concerned with in this book is syntactic competence: the ability to combine words into phrases that conform to the
phrasal patterns of the language. Children learn these patterns without explicit
training. How exactly they do so is a matter of controversy. Some linguists claim
that certain aspects of grammar must be innate, because children do not receive
enough data during early development to determine what the patterns are. Others
argue that syntactic competence is in fact something that a child acquires through
learning, and that the proponents of innate grammar have overlooked children’s
outstanding capacity to imitate adult routines and to infer patterns from rich but
noisy input. We do not attempt to resolve this controversy here, because our focus
is on what constitutes the adult’s knowledge of language, and not the means by
which it is achieved.4
Although children do not receive explicit instruction in their first language,
they somehow gain the ability to produce all and only the grammatical sentences
of their language and to distinguish grammatical sentences from ungrammatical
4 We refer the interested reader to the rich literature on grammar learnability, which includes works
by Goldberg (2006), Tomasello (2009), Newport (2016), and Chater and Christiansen (2018).
Most of the combinations, a few of which are given in (7), are unacceptable to
native speakers of English:
(7) a. *Kicked the player the ball.
b. *Player the ball kicked the.
c. *The player a kicked ball.
It is clear that there are certain rules in English for combining words. These rules
constrain which words can be combined and how they can be ordered, sometimes
in groups, with respect to each other.
Such combinatory rules also enable speakers to construct (or construe) complex sentences like (8a).6 Whatever the combinatory rules are, they should give
a different status to (8b), an example which is judged ungrammatical by native
speakers even though the intended meaning is relatively clear.
(8) a. My parents decided to stay in the house they built.
b. *My parents decided to stay in the house they built it.
The fact that we require such combinatory knowledge also provides an argument
for the assumption that we use a finite set of resources (expressions and rules) to
produce and interpret grammatical sentences, and that we do not just rely on the
meanings of the words involved. Consider the examples in (9):7
(9) a. I *(am) fond of that garden.
b. He *(is) angry at the not guilty verdict.
5 Examples like (6e) are called ‘topicalization’ sentences: The topic expression (the ball), already
mentioned or understood in a given context, is placed in a sentence initial position. See
Lambrecht (1994), Gregory and Michaelis (2001), and references therein.
6 In Chapter 2, we will begin to see these combinatory rules.
7 The star * in front of the parenthesis symbols means that the expression within the parentheses
cannot be omitted.
The omission of the copula verbs am and is would not prevent us from understanding the intended meaning, but the presence of these words is a structural
requirement here.
In addition to being rule-based, syntactic competence powers the creativity
(expressivity) that defines language ability. Speakers can produce and understand an infinite number of new grammatical sentences that they have never spoken or heard before. For example, native speakers of English may have never
heard, seen, or talked about the subject matter of sentences like (10) before, but
they would have no difficulties producing or understanding such sentences:
One might argue that since the number of English adjectives is limited, there
should be a limit to this process. However, there are numerous examples in which
we could keep such a process going, as shown by the following (Sag et al., 2003:
22):
To (12a), we add the string and on, producing a longer one, (12b). To the resulting sentence, we once again add and on and make (12c). We could in principle
go on adding without stopping: This is enough to prove that language has infinite
creative potential (see Chomsky, 1957, 1965).
The job of the syntactician is thus to discover and formulate these rules or principles, which is also our goal here.
However, the very existence of a prescriptive rule is good evidence that the tar-
geted usage practice is commonplace, as suggested by the following attested
‘violations’:
Descriptive rules characterize whatever forms speakers actually use. One might
have occasion to posit both prescriptive and descriptive rules, but the rule-
governed grammar we are exploring in this book consists exclusively of
descriptive rules.
The ensuing question is then: how can we discover the descriptive rules of
English syntax – those that can generate all of the grammatical sentences but
none of the ungrammatical ones? As noted earlier, these rules are part of our
knowledge about language but are not consciously accessible; speakers cannot articulate their content if asked to do so. Hence we can discover the rules
indirectly: We infer these latent rules from the observed data of a language.
These data can come from speakers’ judgments – known as intuitions – or from collected samples of written or spoken language – often called corpora.
Linguists use patterns in data to make inferences about an underlying phenomenon, and this is why we take linguistics to be an empirical discipline.
The basic steps involved in doing such data-based linguistic research can be
summarized as follows:
• Step I: Collect and observe data.
• Step II: Make a hypothesis to cover the first set of data.
• Step III: Check the hypothesis using more data.
• Step IV: Revise the hypothesis if necessary.
Let us now use these basic strategies to discover one of the grammar rules of
English: the rule that distinguishes count and mass (noncount) nouns.9
Step I: Observing Data. To discover a grammar rule, the first thing we need
to do is examine grammatical and ungrammatical variants of the expression in
question. For example, let us look at the usage of the word evidence:
(17) Data Set 1: evidence
a. *The professor found some strong evidences of water on Mars.
b. *The professor was hoping for a strong evidence.
c. *The evidence that Jones found was more helpful than the one that Smith
found.
What can you tell from these examples? We can make the following observations:
(18) Observation 1:
a. evidence cannot be used in the plural.
b. evidence cannot be used with the indefinite article a(n).
c. evidence cannot be referred to by the pronoun one.
9 The discussion and data in this section are adapted from Baker (1995).
Unlike equipment and evidence, the nouns clue and tool can be used in the linguistic test contexts we set up. We thus can add Observation 3, which differs from Observation 2:
(23) Observation 3:
a. clue/tool can be used in the plural.
b. clue/tool can be used with the indefinite article a(n).
c. clue/tool can be referred to by the pronoun one.
Step II: Forming a Hypothesis. From the data and observations we have
made so far, can we make any hypothesis about the English grammar rule in
question? One hypothesis that we can make is the following:
(24) First Hypothesis:
English has at least two groups of nouns, Group I (count nouns) and Group
II (mass nouns), diagnosed by tests of plurality, the indefinite article, and the
pronoun one.
Step III: Checking the Hypothesis. Once we have formed such a hypothesis,
we need to determine whether it is true of other data and to see if it has other
analytical consequences. A little further thought allows us to find support for the
two-way distinction among nouns. For example, consider the usage of much and
many:
(25) a. much evidence, much equipment, much information, much advice
b. *much clue, *much tool, *much armchair, *much bags
(26) a. *many evidence, *many equipment, *many information, *many advice
b. many clues, many tools, many suggestions, many armchairs
As observed here, plural count nouns can occur only with many, whereas mass
nouns can combine with much. Similar support can be found in the usage of little
and few:
(27) a. little evidence, little equipment, little advice, little information
b. *little clue, *little tool, *little suggestion, *little armchair
(28) a. *few evidence, *few equipment, *few furniture, *few advice, *few
information
b. few clues, few tools, few suggestions, few armchairs
The word little can occur with mass nouns like evidence, yet few cannot.
Meanwhile, few occurs only with count nouns.
Given these data, it appears that the two-way distinction is plausible and persuasive. We can now ask if this distinction into just two groups is really sufficient for the classification of nouns. Consider the following examples with cake:
(29) a. She makes very good cakes.
b. The president was hoping for a good cake.
c. The cake that Jones got was more delicious than the one that Smith got.
Similar behavior can be observed with a noun like beer:
(30) a. I like good, dark, full-flavored beers.
b. No one knows how to tell a good beer from a bad one.
These data show us that cake and beer can be classified as count nouns. However,
observe the following:
(31) a. My pastor says I ate too much cake.
b. The students drank too much beer last night.
(32) a. We recommend that you eat less cake and pastry.
b. People now drink less beer.
The data indicate that cake and beer can also be used as mass nouns, since they
can be used with less or much.
Step IV: Revising the Hypothesis. The examples in (31) and (32) imply that
there is another group of nouns: those that can be used as both count nouns and
mass nouns. This leads us to revise the hypothesis in (24) as follows:
We can expect that context will determine whether a Group 3 noun is used as
count or as mass.
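The revised, three-group hypothesis can be read as a small feature-based lexicon. The sketch below is our own illustration, not part of the book’s formalism: the class labels, the noun entries, and the `acceptable` function are all assumptions drawn from Data Sets and diagnostics (25)–(32) above.

```python
# A minimal sketch of the three-group noun classification (illustrative only).
# Each noun is assigned the set of uses it permits: count, mass, or both (Group 3).

LEXICON = {
    "clue": {"count"}, "tool": {"count"}, "armchair": {"count"},
    "evidence": {"mass"}, "equipment": {"mass"}, "advice": {"mass"},
    "cake": {"count", "mass"}, "beer": {"count", "mass"},  # Group 3 (dual-class)
}

# Each diagnostic word licenses one class: many/few need count, much/less need mass.
DIAGNOSTICS = {"many": "count", "few": "count", "much": "mass", "less": "mass"}

def acceptable(diagnostic: str, noun: str) -> bool:
    """Return True if the diagnostic-noun combination is predicted grammatical."""
    return DIAGNOSTICS[diagnostic] in LEXICON[noun]

print(acceptable("many", "clue"))      # many clues    -> True
print(acceptable("much", "clue"))      # *much clue    -> False
print(acceptable("much", "evidence"))  # much evidence -> True
print(acceptable("much", "cake"))      # much cake     -> True (dual-class)
print(acceptable("many", "cake"))      # many cakes    -> True (dual-class)
```

A dual-class entry like *cake* correctly passes both the count and the mass diagnostics; as noted above, it is then context (plural morphology, choice of determiner) that selects which use is in play.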
As we have observed thus far, the process of discovering grammar rules crucially hinges on finding data, drawing generalizations, making a hypothesis, and revising this hypothesis with more data. In addition, we have noticed that grammatical generalizations may actually be generalizations about classes of words, like the class of count nouns.
10 The historical development of the Chomskyan view, also called Transformational Grammar, can
be summarized as follows:
The Standard Theory, laid out by Chomsky (1957, 1965), is the original form of generative grammar, and introduces two representations for sentential structure: deep structure and surface structure. These two levels are linked by transformational rules. The next stage is the so-called Extended Standard Theory, where X-bar theory is introduced as a generalized model of phrase structure. The Revised Extended Standard Theory generalizes transformational rules as Move-α. These previous theories are radically revised in GB (Government and Binding)/P&P (Principles and Parameters) theory (1981–1990). GB theory, armed with subtheories like government and binding, is the first theory to be based on the principles-and-parameters model of language. The P&P framework also underlies the later development of the MP (Minimalist Program), which tries to provide a conceptual framework for the development of linguistic theory (Chomsky, 1995).
exposure, which components of the UG tool kit are present in the particular
language, for example, what the word order of the language is. The theory is
deductive in that linguistic data are assumed to reflect properties of UG: The theorist must attempt to square the facts of language with the presumed properties of UG.
Another key component of the Chomskyan nativist view is that the language
faculty consists of several modules. According to Chomsky (1965), (mental)
grammar can be divided into three basic components: syntax, semantics, and
phonology. Each module has its own categories and rules that are in principle
independent of each other. On this account, syntax is ‘autonomous’ in the sense
that syntax can be analyzed without reference to meaning, as illustrated by the
following example, made famous by Chomsky (1957):
(34) Colorless green ideas sleep furiously.
Even if we do not know what this sentence means, we can still immediately apprehend that the sentence is grammatical, whereas *Green furiously ideas colorless sleep is not. The syntactic system manipulates symbols (expressions) not
according to meaning but rather according to the position occupied by those
symbols in hierarchical syntactic structure. One consequence of the autonomy
view is that properties like being the subject of a sentence cannot be described
according to presumed functional properties of subject (like being the topic of
the sentence) but must instead be represented in syntactic terms, for example,
being in a specific location in a hierarchical syntactic structure.11
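The autonomy claim can be made concrete with a toy grammar whose rules mention only categories, never meanings. The sketch below is our own illustration, not the book’s analysis: the category assignments and the rules S → NP VP, NP → Adj* N, VP → V (Adv) are simplifying assumptions (in particular, Adj* N flattens the recursive NP structure shown in tree diagrams). A recognizer built on these rules accepts Chomsky’s famous sentence while rejecting its scrambled counterpart:

```python
# Toy demonstration of the 'autonomy of syntax': grammaticality is decided
# purely from category sequences, with no reference to meaning.

LEXICON = {
    "colorless": "Adj", "green": "Adj",
    "ideas": "N",
    "sleep": "V",
    "furiously": "Adv",
}

def parse_np(cats):
    """NP -> Adj* N: consume any number of adjectives, then one noun.
    Returns the number of categories consumed, or None on failure."""
    i = 0
    while i < len(cats) and cats[i] == "Adj":
        i += 1
    if i < len(cats) and cats[i] == "N":
        return i + 1
    return None

def grammatical(sentence: str) -> bool:
    """S -> NP VP, where VP -> V (Adv)."""
    cats = [LEXICON[w] for w in sentence.lower().split()]
    consumed = parse_np(cats)
    if consumed is None:
        return False
    return cats[consumed:] in (["V"], ["V", "Adv"])

print(grammatical("Colorless green ideas sleep furiously"))  # True
print(grammatical("Green furiously ideas colorless sleep"))  # False
```

Nothing in the recognizer knows that the accepted sentence is semantically anomalous; it checks only that the categories occupy licensed positions, which is exactly the point of example (34).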
It is important to recognize that the central goal of Chomskyan theory is not
to describe all of the grammatical patterns of particular languages but rather to
explain how children acquire language, starting from the assumption that children are not exposed to sufficiently rich data within their linguistic environments to learn all of the grammatical patterns of their first language. The explanatory mechanism involves a form of UG consisting of general principles (e.g., a sentence always has a grammatical subject, even if it is not overtly expressed) combined with binary parameter settings intended to capture variability across languages (e.g., some languages require overt subjects and others do not). Proponents of this view seek to predict the structures of a given language and minimize
what must be stipulated. In the Chomskyan view:
the notion of grammatical construction is eliminated, and with it, construction-particular rules. Constructions such as verb phrase, relative clause,
and passive remain only as taxonomic artifacts, collections of phenomena
explained through the interaction of the principles of UG, with the values of
the parameters fixed. (Chomsky, 1993: 4)
It is self-evident that the syntactic phenomena that one could predict based on
the principles and the particular parameter settings of a language are only the
most basic (core) patterns, and that idioms and other specialized (peripheral)
patterns in a language would fall outside the scope of such a framework. This
limitation in grammar coverage is something that proponents of the Chomskyan
framework accept, inasmuch as they view the theory as a narrow theory of basic
or core grammar, which does not, and need not, describe idiosyncratic (or peripheral) phenomena that often arise from historical accident, including expressions
that were borrowed from another language or developed from language formulas
or other conventions (e.g., abbreviations).
Examples like (35a) are linked to the directive force, while (35b) induces a con-
ditional meaning. These two meanings or functions do not simply come from the
words involved here. Such examples suggest that we cannot separate form (syntax) from function (meaning and usage), as Chomsky did based on examples like (34).
Constraint-based grammar also rejects the distinction between core and
peripheral grammar, on the grounds that capturing the patterns of word combination that constitute knowledge of a language like English requires us to describe everything from general patterns that might exist in every language (like coordination) to specialized patterns that are particular to, say, English. Consider the following attested examples:
Sentence (36a) could illustrate a core phenomenon in the sense that its meaning is ‘compositional’ and quite straightforward.12 One interpretive constraint here is that the subject he and the object himself refer to the same individual.13 Sentence (36b), by contrast, illustrates a manifestly idiomatic pattern.
The idiomatic verb phrase have x to do with y means something like ‘x has
some degree of relationship to y,’ and the whole sentence means that my
age and knowledge of politics are not related to any degree. The pattern is
idiomatic in that one could not predict this meaning based on the meanings
that the verb have and the verb do have elsewhere (Kay and Michaelis, 2019).
This pattern could appropriately be relegated to the periphery of the
grammar.
However, consider (35b), which includes core as well as peripheral properties. The sentence, having the pattern ‘the X-er . . . , the Y-er . . . ,’ illustrates the so-called COMPARATIVE CONDITIONAL CONSTRUCTION (Fillmore et al., 1988;
12 The principle of compositionality states that the meaning of a given sentence is determined by
the meanings of its constituent expressions and the rules used to combine them.
13 This interpretation appears to be structurally conditioned, as it depends on the pronoun he and
the reflexive pronoun himself being in a particular syntactic relationship (the first being subject
and the second object). See Pollard and Sag (1992, 1994), and Sag et al. (2003) for detailed
discussion of the constraints on the use of reflexive pronouns.
Culicover and Jackendoff, 1999). The construction, which combines two parallel clauses, requires the presence of the definite article at the beginning of each clause and conveys a conditional meaning. In these respects, the construction includes certain idiosyncratic properties. But, other than these properties, the ‘linked variables’ meaning of the construction (whereby one quantity or property is understood to increase as the other does) is clearly related to the construction’s parts, and the pattern itself is highly productive (we can easily create new
instances of it). This suggests that it is a major grammatical pattern rather than
a minor one (Culicover and Jackendoff, 1999; Borsley, 2004; den Dikken, 2005;
Kim, 2011).
In addition to sentence patterns having idiosyncratic formal properties, there
are also constructions that have regular syntax but unpredicted meanings. The
following sentence is one that a diner can utter:
(37) What is that fly doing in my soup?
The diner in (37) is not inquiring about the activities of the fly in the soup
but rather is indicating that there is something incongruous about there being
a fly in the soup. Although this construction, called the WXDY CONSTRUCTION (Kay and Fillmore, 1999), has several peculiarities of form and meaning (e.g., the obligatory use of doing and a specialized pragmatic function, ‘querying the reason for an incongruous situation’), it is highly productive, as seen in the following attested examples:
(38) a. What are you doing with my money, then?
b. But what are you doing with those mashed potatoes on the table?
c. What are you doing calling on a Friday night?
The varying degrees and types of idiosyncrasy observed here tell us that there is no clear boundary between core and periphery. In addition, even seemingly noncore phenomena include some general properties that a complete grammatical description must acknowledge if we are to understand what a language user knows about his or her native language. Under an enriched view of grammatical competence, which aims to capture all of the linguistic routines that an adult native speaker knows, the grammar represents an array of form-meaning-function groupings of varying degrees of productivity and internal complexity.
This is the idea that has motivated non-Chomskyan frameworks like HPSG
(Head-driven Phrase Structure Grammar) and CxG (Construction Grammar) –
frameworks that we adopt in this book.
requires the direct object to follow the verb – and the forms of the words, as
the comparable Latin construction requires its direct object to have an accusative
case-ending. Grammatical constructions have long played a central role in lin-
guistic description, and for most of that history they have been treated in a similar
manner to words – pairings of form and meaning with particular patterns of
usage. It is only since the advent of Chomsky’s generative grammar that words came to be seen as the sole vessels of meaning and constructions as the products of general rules that build up hierarchical structures in a ‘meaning blind’ fashion, much like mathematical operations. Chomsky’s embrace of computing metaphors that predate the era of cheap data storage convinced many syntacticians that sentence patterns cannot be stored in memory. But in fact it is quite
plausible to assume that we learn and recall grammatical constructions in much
the same way that we learn and recall words. In a review of findings from language development, language impairment, and language processing, Bates and Goodman (1997) conclude that there is little evidence for a modular dissociation between a language’s grammar and its lexicon (the inventory of words). For example, they observe that in child language acquisition, “the emergence and elaboration of grammar are highly dependent upon vocabulary size [. . . ] as children make the passage from first words to sentences and go on to gain productive control over the basic morphosyntactic structures of their native language” (Bates
and Goodman, 1997: 509). They go on to say:
This does not mean that grammatical structures do not exist (they do), or
that the representations that underlie grammatical phenomena are identical
to those that underlie single-content words (they are not). Rather, we suggest
that the heterogeneous set of linguistic forms that occur in any natural language (i.e. words, morphemes, phrase structure types) may be acquired and
processed by a unified processing system, one that obeys a common set of
activation and learning principles.
In other words, both words and constructions are patterns in the mind.
Whether we are describing a word that has highly restricted privileges of occurrence (e.g., the adjective blithering, which to our knowledge combines only with
the nouns idiot and fool), a class of words (e.g., the class of nouns or the class
of transitive verbs), an inflected word (e.g., the plural noun copies) or a way to
create a basic phrase of a particular type (e.g., a noun phrase), we are describing
patterns, because in each case we are describing the combinatoric properties of
words (Michaelis, 2019).
Exercises
5. We have seen that examples like the following belong to semantically specialized constructions. For each, discuss whatever special properties (syntactic, semantic, and pragmatic functions) you can think of:
a. What are elephants doing in the middle of town?
b. The sooner you do it, the better off you’ll be.
c. Just because you’re paranoid doesn’t mean they aren’t all out to
get you.
d. Charlie shouldered his way through the crowd of cops toward
the door.
e. Not in my house, you don’t!
2 Lexical and Phrasal Signs
[Figure 2.1 An example of a sign: panel (a) pairs a sound-image with a concept; panel (b) pairs a signifier (signifiant) with a signified (signifié); panel (c) pairs the Latin form arbor with a tree image]
Figure 2.1, adapted from Saussure (1916 [2011]), shows the linguistic sign as
a link between a sound sequence (form) and a concept (meaning), as in (a), or
between a signifier (signifiant, in the original French) and a signified (signifié),
as in (b). Thus, as shown in (c), the form (sound sequence) arbor (Latin ‘tree’)
is a signifier, and its associated meaning or denotation (concept) is a signified,
depicted as a tree image.
This notion of the Saussurean sign has been generalized in SBCG to include linguistic expressions of any degree of internal complexity, including morphemes, words, multi-word expressions (or idioms) like drop the ball and hit the nail on the head, and, crucially, phrases, including sentences. A grammar is accordingly conceived as a set of descriptions of signs and sign combinations. These descriptions represent (a) the properties shared by each class of signs (called lexical classes) and (b) the templates or ‘rules’ used to construct phrasal signs from simpler signs.
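One way to picture ‘grammar as a set of sign descriptions’ is as a data structure that pairs form with meaning at every level of complexity. The sketch below is our own illustration of the idea, not SBCG’s actual feature-structure notation: the field names, the informal meaning strings, and the particular analyses are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Sign:
    """A Saussurean sign generalized: a pairing of form and meaning.
    Phrasal signs also record the simpler signs they are built from."""
    form: str
    meaning: str
    daughters: List["Sign"] = field(default_factory=list)

    @property
    def is_lexical(self) -> bool:
        # A sign with no daughters is a listed, atomic pairing.
        return not self.daughters

# A lexical sign: an atomic form-meaning pairing (cf. Latin 'arbor').
arbor = Sign(form="arbor", meaning="TREE")

# An idiom is also a listed sign: its meaning is stipulated, not computed.
at_large = Sign(form="at large", meaning="not captured yet")

# A phrasal sign constructed from simpler signs by a combinatory template.
sentence = Sign(
    form="the suspect is at large",
    meaning="the suspect has not been captured yet",
    daughters=[
        Sign("the suspect", "the suspect"),
        Sign("is at large", "be not captured yet",
             daughters=[Sign("is", "BE-PRES"), at_large]),
    ],
)

print(arbor.is_lexical)      # True
print(sentence.is_lexical)   # False
```

The point of the uniform type is the one made in the text: words, idioms, and phrases differ in internal complexity, but all are form-meaning pairings described by the same kind of grammatical statement.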
All of these words must be represented as distinct signs of the grammar because their meanings cannot be predicted. For instance, the form hot dog does not mean a dog that is hot; from the form one cannot predict the meaning (a hot sausage inside a long split roll). Idioms are also distinct signs; they are multi-word expressions whose meanings one would not generally predict from the meanings of their parts. Consider the following:
(2) a. The suspect is still at large.
b. I’m really feeling under the weather today; I have a terrible cold.
c. Don’t beat around the bush. Just tell me the truth.
The meaning of the italicized word string in each case is not predictable. For
instance, in (2a), the meaning ‘not captured yet’ does not come from the parts at
and large. There is thus a special form-meaning relation here, just as there is in
(3a)–(3b):
(3) a. I tried jogging mom’s memory, but she couldn’t remember Joe’s phone
number either.
b. Don’t worry about what he said. He’s just pulling your leg.
The idioms in (3a)–(3b) mean ‘to cause someone to remember something’ and
‘to deceive someone playfully,’ respectively. The only difference between idioms
like (2) and idioms like (3) is that the latter include a variable (in the case of (3),
a possessor) that can be replaced by another expression like his, her, their, and
so forth.
There are also more complex (phrasal) constructions that specify idiomatic
interpretations, as in the case of the COMPARATIVE CORRELATIVE CONSTRUCTION
discussed in Chapter 1 and exemplified again below:
As noted in Chapter 1, this bi-clausal pattern, whose basic form is ‘the X-er, the
Y-er,’ has a conditional meaning in which an increase (or decrease) in the value of
the first variable yields a concomitant change in the value of the second. Sentence
(4b) means something like ‘To whatever degree a trip is long, the recovery period
is long to that same degree.’ In an earlier stage of English, this construction had a
syntactically transparent interpretation, which sound change and changes in the
case system of English have now obscured.1
Now consider the following sentences introduced by verbs like give, pass,
read, teach, and so forth:
(5) a. Pedro [gave [her] [his email address]].
b. The player [passed [Paulo] [the ball]].
c. Dad [read [me] [the letter]].
d. My mom [taught [me] [the importance of being clean]].
e. My Auntie Julia, a seamstress, [sewed [me] [a leopard bikini]].
The verbs here combine with the two bracketed expressions, evoking a meaning
of ‘transfer,’ whether metaphorical or literal, in each case. For instance, in (5a),
the email address is figuratively transferred from one person to the other (com-
municative acts are typically framed as events in which information goes from
one person to another). One important aspect of the transfer meaning is that the
‘goal’ or endpoint of the transfer event must be understood to be a (volitional)
recipient. Thus while (6a) sounds natural, (6b) does not, unless we imagine the
summit of Mt. Kilimanjaro to stand for people located there:
(6) a. He took the Brooksville Elementary flag to the summit of Mt. Kilimanjaro.
b. *He took the summit of Mt. Kilimanjaro the Brooksville Elementary flag.
The claim that the relevant pattern expresses an act of transfer (to a human recip-
ient) is bolstered by a phenomenon sometimes called ‘semantic enrichment,’ as
in (5e). The verb sew is a verb of creation, and as such selects for just two par-
ticipant roles (the creator and the item created). In the context of (5e), however,
we understand the sewing event to have an additional participant: a recipient of
1 The paired ‘definite articles’ are modern reflexes of Old English instrumental-case demonstrative
pronouns that meant ‘by that much.’ The construction in this period was thus structurally similar
to the analogous French construction plus . . . plus (as in Plus ça change, plus c’est la même
chose, ‘The more it changes, the more it stays the same’). Because speakers of Present-Day
English (PDE) do not generally possess this etymological information, the construction today
presents as a phrasal idiom, albeit a highly productive one.
the item created. The addition of this third participant, we submit, is triggered
by the syntactic pattern that the sentence instantiates. The pattern, commonly
referred to as the DITRANSITIVE CONSTRUCTION, is a skeletal construction, in
the sense that it has no lexically fixed portion (no particular verb or noun phrase
is required). And yet, much like a lexical sign, this syntactic pattern has an asso-
ciated meaning: the transfer schema. We know of this meaning because of the
contrast in (6a)–(6b), and because of contexts of semantic enrichment like (5e).
The constructions we have seen so far have specialized meanings that can-
not be traced to words within them, but there are also highly schematic
(lexically open) constructions whose meanings are largely predictable from
their constituent words and whose frequency is high. Consider the following
sentences:
(7) a. [Elvis] [sang softly].
b. [The furious dog] [chased me].
c. [They] [made the problem more difficult].
All of these examples have two subparts, subject and predicate, as indicated
by the square brackets. These phrasal signs are licensed by the SUBJECT-PREDICATE
CONSTRUCTION, which is in general used to attribute a property to
an entity. Because it is used to perform a basic communicative routine, this
construction is very frequent, but it does not add any meaning beyond what the words
within it mean. The primary reason that we need the SUBJECT-PREDICATE
CONSTRUCTION is to represent the division of a clause into phrases. These phrases,
for example, the furious dog and chased me, act like indivisible units for certain
syntactic purposes. The lesson here is that a sentence is not merely a sequence of
words. Instead, there are constructions that describe the way in which words are
combined to form phrases, constructions that describe the way in which phrases
are combined to form still larger phrases, and constructions that describe the
way in which phrases are combined to create sentences, as we will see below in
Section 2.4.
In sum, words, multi-word expressions, and phrases (including clauses) are
all analyzed as signs – pairings of form and meaning. We use lexical descrip-
tions (also called lexical entries) to describe words, word classes, and multi-word
expressions. We use constructions to describe phrasal signs. Constructions can
thus be understood as recipes for combining lexical signs and phrasal signs into
larger units. For the construction grammarian, the grammar of a language is thus
a repertoire of form-meaning pairings that range from those with fixed lexi-
cal make-up (including words) to those that constrain their subparts only very
broadly. A construction grammar models this range with an array of descriptions
of correspondingly graded generality.
In (8) we see the range of sign types presented as a continuum of idiomaticity
or degree of lexical fixity. This continuum distinguishes types of signs accord-
ing to the range of lexical, inflectional, or syntactic variants attested for each
type.
At the low variability end of the continuum, we have words (like dolphin) and
fixed idioms, like pass the buck, which have inflectional variants (e.g., dolphins,
passed the buck); these can be combined with other signs to form larger phrases
(e.g., baby dolphin, don’t pass the buck!), but the expressions themselves con-
tain no open slots. Fixed idioms contrast with idioms like throw the book at x,
in which the variable ‘x’ can take on any value (e.g., Throw the book at them!).
At the ‘open’ end of the continuum we have idiomatic phrase types (like the
comparative conditional) that have some lexically fixed portions (the two degree
words) and some open ones (e.g., the comparative expressions) that allow for cre-
ative elaborations. At the extreme end of the continuum are the patterns of sign
combination that do not have many semantic or use restrictions and are open –
or at least open in the sense that they evoke only basic syntactic categories (like
verb). These phrasal patterns can be identified with the phrase-structure rules of
traditional generative grammars.
This construction-based view of linguistic knowledge thus has two major
descriptive goals, which can be summarized as follows:
• to identify the constructions needed to describe the syntactic combi-
nations of a language
• to investigate the constructions (or rules) that license the combination
of words and phrases
To meet these goals, we will examine the way in which meanings are assem-
bled through the grammatically allowable patterns of sign combination. Words
are combined to form larger ‘phrasal’ constructs, and phrases can be combined to
form a clausal construct. A clause either is or is part of a well-formed sentence:
(9)
Typically we use the term ‘clause’ to refer to a complete sentence-like unit, but
one which may be part of another clause, as a subordinate or an adverbial clause.
Each of the sentences in (10b)–(10d) contains more than one clause, with one
clause embedded inside another:
(10) a. The weather is lovely today.
b. I am hoping that [the weather is lovely today].
This chapter first explores the types of lexical signs that we can observe in
English (Section 2.3). Equipped with generalizations about lexical expressions,
we then discuss phrasal and clausal constructions formed from the combination
of lexical and phrasal signs.
Though such semantic bases can be used for many words, these notional def-
initions leave a great many words unaccounted for. For example, words like
sincerity, happiness, and pain do not simply denote any individual or entity.
Absence and loss are even harder cases. There are many words whose seman-
tic properties do not match the syntactic category they belong to. For example,
words like assassination and construction may refer to an action rather than an
individual, but they are always nouns. Words like remain, bother, appear, and
exist are verbs, but they do not involve any action.
According to these frames, in which the word in question goes in the place
indicated by the underscore, nouns allow the plural suffix -(e)s or the
possessive ’s to be attached, whereas verbs can take the past tense -ed or the 3rd singular form
-(e)s. Adjectives can take the comparative and superlative endings -er or -est, or
combine with the suffix -ly. The examples in the following are derived from
these frames:
(13) a. N: trains, actors, rooms, man’s, sister’s, etc.
b. V: devoured, laughed, devours, laughs, etc.
c. A: fuller, fullest, more careful, most careful, etc.
d. Adv: fully, carefully, diligently, clearly, etc.
The categories that can be used to fill in the blanks are N, V, A, Adv, and P
(preposition), respectively. The following data show that these lexical categories
are not typically interchangeable in a given context:
2 The underscore indicates that an expression is missing, while suggesting that the general class of
expressions that can fill the gap is predictable from the context.
As shown here, only a restricted set of lexical categories can occur in each
position; we can then assign a specific lexical category to these elements:
(20) a. N: TV, car, information, friend . . .
b. V: sing, run, smile, stay, cry . . .
c. A: big, new, interesting, scientific . . .
d. Adv: nicely, badly, kindly . . .
e. P: in, into, on, under, over . . .
In addition to these basic lexical categories, does English have other lexical
categories? Consider the following distributional environments:
(21) a. ___ vaccine could soon hit the market.
b. We found out that ___ job is in jeopardy.
The words that can occur in the open slot in these sentences are words like the, a,
that, this, and so forth, which are determiners (Det). One clear piece of evidence
for grouping these elements in the same category, ‘Det,’ comes from the fact that
they cannot occupy the same position at the same time:
(22) a. *[My this job] is in jeopardy.
b. *[Some my jobs] are in jeopardy.
c. *[The his jobs] are in jeopardy.
Words like my and these or some and my cannot occur together, indicating that
they compete with each other for just one structural position.
Now, consider the following examples:
(23) a. He is a very good pitcher, ___ he just has to have confidence in his pitches.
b. ___ he is a very good pitcher, he just has to have confidence in his pitches.
(23a) provides a frame for conjunctions (Conj) such as and, but, so, for, or, and
yet. These conjunctions are ‘coordinating conjunctions,’ different from the words
that can occur in (23b), which are ‘subordinating
conjunctions’ like since, when, if, because, though, and so forth. The former
type conjoins two identical phrasal elements, as in the following:
(24) a. [He immediately turned over to the right], for [he had been asleep on his left
side].
b. [She knew he shouldn’t drive], yet [she gave him the car keys].
The expressions that can occur in the following contexts form a different group:
(26) a. She didn’t think ___ she could stand on her own.
b. I doubt ___ he would listen to any moderate voice.
c. I’m so anxious ___ him to give us the names of the people.
Once again, the words that can occur in the particular slots in (26) are strictly
limited:
(27) a. She didn’t think that [she could stand on her own].
b. I doubt if [he would listen to any moderate voice].
c. I’m so anxious for [him to give us the names of the people].
The italicized words here are different from the other lexical categories that we
have seen so far. They introduce a complement clause (marked above by the
square brackets) and are sensitive to the tense of that clause. A tensed clause is
known as a ‘finite’ clause, as opposed to an infinitive clause (see Chapter 5). For
example, that and if introduce or combine with a tensed sentence (present or
past tense), whereas for requires an infinitival clause marked with to. We cannot
disturb these relationships:
(28) a. *She didn’t think that [her to stand on her own].
b. *I doubt if [him listening to any moderate voice].
c. *I’m so anxious for [he gave us the names of the people].
The words that can appear in the blanks are neither main verbs nor adjectives,
but rather words like did, would, should, and could. In English, there is clear
evidence that these verbs are different from main verbs, and we refer to them
as auxiliary verbs (Aux). Auxiliary verbs, also known as helping verbs, perform
several grammatical functions: expressing tense (present, past, future), aspect
(progressive and perfect), or modality (possibility, futurity, obligation). It is
problematic in some respects to posit the category Aux as an independent category,
but here we differentiate auxiliary verbs from main verbs by means of the feature
AUX, as given in the following:
The auxiliary verb appears in front of the main verb, which is typically in its
citation (lexemic) form (see Chapter 5 for the verb forms in English).
There is one remaining category we must consider: the ‘particles’ (Part), in
(31):
(31) a. Stacey had called off the engagement.
b. I had to go home and look up the word.
Words like off and up here behave differently from prepositions in that they can
occur after the object:
(32) a. Stacey had called the engagement off.
b. I looked the word up.
The pronoun it can naturally follow the preposition, as in (36b), but not the parti-
cle, as in (35b). Such contrasts between prepositions and particles give us ample
reason to introduce another lexical category, Part (particle), which is differenti-
ated from P (preposition). In Section 2.6, we will see more tests to differentiate
these two types of word.
In sum, we have seen that the grammar of English has at least the following
syntactic categories:
In deciding the types of lexical category, we can use semantic, morphological,
and distributional criteria, but we have seen that the distributional ones are the most
reliable. Most of the lexical categories we have discussed in this section have
associated phrasal categories, which we discuss in what follows.3
Given only the lexical categories that we have identified so far, we can set up a
grammar rule for sentences (S) like the following:
(39) S → Det (A) N V Det (A) N
According to this rule, S consists of the items mentioned in the order given,
except that those in parentheses are optional. So this rule characterizes any sen-
tence that consists of a Det, N, V, Det, and N, in that order, possibly with an
A in front of either N. We can represent the core items in a tree structure,
as in (40):
3 The lexical categories we have seen so far can be classified into two major types: content and
function words. Content words (N, V, Adj, Adv) are those with substantive semantic content,
whereas function words (Det, Aux, Conj, P) are those primarily serving to carry grammatical
information. The ‘content’ words are also known as ‘open’ class words, because the number of
such words is unlimited and new words can be added to these categories, including nouns like
email, fax, internet, and verbs like emailed, googled, etc. By contrast, function words are mainly
used to indicate the grammatical functions of other words and are ‘closed’ class items: Only about
300 function words exist in English, and new function words are rarely added.
(40)
By inserting lexical items into the appropriate preterminal nodes, these being the
nodes immediately dominating the ‘...’ notations, we can produce grammatical
examples like those in (38) as well as those like the following, not all of which
describe a possible real-world situation:
(42) a. That ball hit a student.
b. The piano played a song.
c. The piano kicked a student.
d. That ball sang a student.
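Rule (39) is simple enough to simulate directly. The following sketch (with a small illustrative lexicon of our own, not the book’s) enumerates every string the rule licenses, confirming that grammatical but semantically odd sentences like those in (42) are among its outputs:

```python
import itertools

# Illustrative mini-lexicon (ours, not from the text); rule (39): S -> Det (A) N V Det (A) N
LEXICON = {
    "Det": ["the", "that", "a"],
    "A": ["tall", "small"],
    "N": ["man", "ball", "piano", "student"],
    "V": ["kicked", "sang", "hit"],
}

def rule_39_sentences():
    """Enumerate all strings licensed by rule (39); the optional A slots
    are modeled by allowing an empty string as a filler."""
    slots = [
        LEXICON["Det"], LEXICON["A"] + [""], LEXICON["N"],  # subject: Det (A) N
        LEXICON["V"],                                       # verb
        LEXICON["Det"], LEXICON["A"] + [""], LEXICON["N"],  # object: Det (A) N
    ]
    return {" ".join(w for w in words if w) for words in itertools.product(*slots)}

sentences = rule_39_sentences()
print("the piano kicked a student" in sentences)  # prints True, cf. (42c)
```

The rule happily licenses strings like ‘the piano kicked a student’: as noted above, satisfying the rule does not guarantee that a sentence describes a possible real-world situation.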
The iteration operator ∗ after A, called the ‘Kleene Star Operator,’ is a notation
meaning ‘zero to infinitely many’ occurrences. It thus allows us to repeat any
number of As, thereby generating sentences like those in (44). Note that the
parentheses around ‘A’ in (39) are no longer necessary in this instance:
(44) a. The tall man kicked the ball.
b. The tall, handsome man kicked the ball.
c. The tall, kind, handsome man kicked the ball.
4 The ‘Kleene Star Operator’ should not be confused with the * prefixed to a linguistic example,
indicating ungrammaticality.
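The Kleene star is the same ‘zero or more’ operator used in regular expressions, so the behavior of A* can be illustrated with Python’s re module (the adjective list below is illustrative, standing in for the open adjective class):

```python
import re

# A regex mirroring the subject-NP portion of the revised rule: Det A* N.
# 'tall|kind|handsome' is a hypothetical stand-in for the adjective class.
NP = re.compile(r"the(?: (?:tall|kind|handsome))* man")

for s in ["the man",                       # zero adjectives
          "the tall man",                  # one, cf. (44a)
          "the tall kind handsome man"]:   # three, cf. (44c)
    print(bool(NP.fullmatch(s)))           # prints True each time
print(bool(NP.fullmatch("the tall")))      # prints False: N is obligatory
```

Because the star allows zero repetitions, the parenthesized ‘(A)’ of rule (39) is subsumed by A*.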
Why do the verbs in these two sentences have different agreement patterns? Our
intuitions tell us that the answer lies in two different possibilities for grouping
the words:
(47) a. [The mother of [the boy and the girl]] is arriving soon.
b. [The mother of the boy] and [the girl] are arriving soon.
The different groupings shown by the brackets indicate who is arriving: in (47a),
the mother, while in (47b) it is both the mother and the girl. The grouping of
words into larger phrasal units that we call constituents provides the first step in
understanding the agreement facts in (47).
Now, consider the following examples:
(48) a. Pat saw the man with a telescope.
b. I like chocolate cakes and pies.
c. We need more intelligent leaders.
These sentences have different meanings depending on how we group the words.
For example, (48a) will have the following two different constituent structures:
(49) a. Pat saw [the man with a telescope]. (the man had the telescope)
b. Pat [[saw the man] with a telescope]. (Pat used the telescope)
Even these very cursory observations indicate that a grammar with only lexical
categories is not adequate for describing syntax. In addition, we need a notion
of ‘constituent,’ and we need to consider how phrases can be formed, defining
groups of words as single units for syntactic purposes.
Perhaps most of us would intuitively assign the structure given in (51a) but not
those in (51b) or (51c):
(51) a. [The businessmen] [enjoyed [their breakfasts] [at the hotel] [last week]].
b. [The] [businessmen enjoyed] [their breakfasts at the hotel] [last week].
c. [The businessmen] [[enjoyed their breakfasts] [at the hotel last week]].
What kind of knowledge, in addition to semantic coherence, forms the basis
for our intuitions of constituent structure? Are there clear syntactic or distri-
butional tests that demonstrate the appropriate grouping of words or specific
constituencies? There are certain syntactic constructions that carry condi-
tions related to constituents (whether these are groups of words or single
words) and on this basis are used to diagnose what strings of words count as
constituents.
Cleft: The cleft construction, which places an emphasized or focused element
in the X position in the pattern ‘It is/was X that . . . ,’ can provide us with straight-
forward evidence for the existence of phrasal units. For instance, think about how
many different cleft sentences we can form from (52).
(52) The policeman met several young students in the park last night.
With no difficulty, we can cleft almost all the constituents we can get from the
above sentence:
(53) a. It was [the policeman] that met several young students in the park last night.
b. It was [several young students] that the policeman met in the park last night.
c. It was [in the park] that the policeman met several young students last night.
d. It was [last night] that the policeman met several young students in the park.
However, we cannot cleft sequences that do not form constituents:
(54) a. *It was [the policeman met] that several young students in the park last night.
b. *It was [several young students in the park] that the policeman met last night.
Constituent Questions and the Stand-Alone Test: Further support for the
existence of phrasal categories can be found in answers to ‘constituent ques-
tions,’ which involve a wh-word such as who, where, when, and how. For any
given wh-question, the answer can either be a full sentence or a fragment. This
stand-alone fragment is a constituent:
(55) Q: Where did the policeman meet several young students?
A: In the park.
(56) Q: Who(m) did the policeman meet in the park?
A: Several young students.
This kind of test can be of use in determining constituents; we will illustrate with
the following:
(57) Lee put old books in the box.
Does either old books in the box or put old books in the box form a con-
stituent? Are there smaller constituents? The wh-question tests can provide some
answers:
Overall, the tests here show that old books and in the box are constituents and
that put old books in the box is also a (larger) constituent.
The constituenthood test is also sensitive to the difference between particles
and prepositions. Consider the similar-looking examples in (60), both of which
contain looked and up:
(60) a. We looked up the street.
b. He looked up the answer.
What the contrasts here show is that up forms a constituent with the street in
(60a), whereas it does not with the answer in (60b).
Replacement by a Proform: English, like many languages, uses pronouns to
refer to individuals and entities mentioned earlier. For instance, the woman who
is standing by the door in (64a) can be ‘replaced’ by the pronoun she in (64b):
(64) a. What do you think the woman who is standing by the door is doing now?
b. What do you think she is doing now?
There are other ‘proforms,’ such as there, so, as, and which, that also stand in for
(express the same content as) a previously mentioned expression.
(65) a. Have you been [to Seoul]? I have never been there.
b. Pat might [go home]; so might Lee.
c. Pat might [pass the exam], as might Lee.
d. If Pat can [speak French fluently] – which we all know they can – we will
have no problems.
Both the pronouns there and them refer to a constituent. However, so in (66c),
referring to a VP, refers to only part of the constituent put the clothes, making it
unacceptable.
The expressions that can occur in the blank position here are once again limited.
The kinds of expression that do appear here include:
(68) Mary, I, you, students, the students, the tall students, the students from
Seoul, the students who came from Seoul, etc.
If we look into the subconstituents of these expressions, we can see that each
includes at least an N and forms an NP (noun phrase):
(69) a. students: N
b. the students: Det N
c. the tall students: Det Adj N
d. the students [from Seoul]: Det N PP
e. the students [who came from Seoul]: Det N S
5 The relative clause who came from Seoul is analyzed here as a kind of sentence (S). See Chapter 11.
This rule characterizes a phrase and is one instance of a phrase structure (PS)
rule. The rule indicates that a ‘mother’ NP can consist of one or more ‘daughters,’
including an optional Det, any number of optional As, an obligatory N, and then
an optional PP or a modifying S.6 The slash indicates different options for the
same place in the linear order. These options in the NP rule can be represented
in a tree structure:
(71)
Once we insert appropriate expressions into the preterminal nodes, we will have
well-formed NPs, and the rule will not generate the following NPs:
(72) *the whistle tune, *the easily student, *the my dog . . .
One important point is that, since only the N is obligatory in an NP, a single noun such
as Mary, you, or students can constitute an NP by itself. Hence the subject of
the sentence She sings is an NP, even though that NP consists only of a
pronoun.
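Since the NP rule is a pattern over category labels, it too can be stated as a regular expression. The checker below is our own sketch of the rule NP → (Det) A* N (PP/S), operating on sequences of category labels rather than words:

```python
import re

# (Det)? A* N (PP|S)? over space-separated category labels (a sketch, not the book's notation)
NP_RULE = re.compile(r"(?:Det )?(?:A )*N(?: PP| S)?")

def is_np(categories):
    """True if the sequence of category labels satisfies the NP rule."""
    return NP_RULE.fullmatch(" ".join(categories)) is not None

print(is_np(["N"]))                  # prints True: 'students'
print(is_np(["Det", "A", "N"]))      # prints True: 'the tall students'
print(is_np(["Det", "N", "S"]))      # prints True: 'the students who came from Seoul'
print(is_np(["Det", "Det", "N"]))    # prints False, cf. *'the my dog' in (72)
```

The last line shows why two determiners competing for one structural position, as in (72), are excluded: the rule provides only a single optional Det slot.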
(74) lists just a few of the possible phrases that can occur in the underlined
position.
(74) snored, ran, sang, loved music, walked the dog through the park, lifted 50
pounds, is honest, warned us that storms were coming, etc.
6 To license an example like the very tall man, we need to make A* into AP*. For simplicity, we
just use the former in the rule.
7 The phrase CP is the combination of that and a finite sentence. See Section 2.5.6.
We can thus characterize the VP rule, to a first level of analysis, as the one given
in (76):
(76) VP → V (NP) (PP∗/S/CP)
We then have the rule that English sentences are composed of an NP and a VP,
the precise structural counterpart of the traditional idea of a sentence as ‘a
subject and a predicate’ or ‘a noun and a verb.’
One more aspect of the structure of a VP involves the presence of auxiliary
verbs. Think of continuations for the fragments in (82):
(82) a. The students .
b. The students want .
For example, the phrases in (83a) and (83b) can occur in (82a), whereas those in
(83c) can appear in (82b):
(83) a. run, feel happy, study English syntax . . .
b. can run, will feel happy, must study English syntax . . .
c. to run, to feel happy, to study English syntax . . .
We have seen that the expressions in (83a) all form VPs, but how about those in
(83b) and (83c)? These are also VPs, which happen to contain more than one V.
In fact, the parts after the auxiliary verbs in (83b) and (83c) are themselves reg-
ular VPs. In the full grammar we will consider to and can and so on as auxiliary
verbs, with the feature specification [AUX +] to distinguish them from regular
main verbs. Then all modal auxiliary verbs are simply introduced by a second
VP rule (see Section 2.5):
(84) VP → V[AUX +] VP
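Rule (84) is recursive: the VP on its right-hand side can itself be expanded by (84) again, or bottom out in an ordinary VP. A minimal recognizer for auxiliary-plus-verb strings, with a toy lexicon of our own, might look like this:

```python
# Toy lexicon (ours, not the book's): [AUX +] words vs. plain main verbs
AUX_VERBS = {"can", "will", "must"}
MAIN_VERBS = {"run", "study"}

def is_vp(words):
    """Recognize VPs built by rule (84), VP -> V[AUX +] VP, bottoming out
    in a bare main verb. (The full grammar constrains verb forms further;
    see Chapter 5.)"""
    if len(words) == 1:
        return words[0] in MAIN_VERBS
    return words[0] in AUX_VERBS and is_vp(words[1:])

print(is_vp(["run"]))          # prints True
print(is_vp(["can", "run"]))   # prints True: the auxiliary precedes the main verb
print(is_vp(["run", "can"]))   # prints False: wrong order
```

Note that rule (84) as stated would also license stacked modals; the verb-form requirements just mentioned (the citation form after the auxiliary) are what rule such strings out in the full grammar.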
In such examples, the adverb illegally and the PP in the last decade are
modifying the preceding VP. To form such VPs, we need the PS rule in (86):
(86) VP → VP Adv/PP
This rule, together with (81), will allow the following structure for (85b):8
(87)
Expressions like those in (89) can occur in the blank space in (88):
(89) happy, uncomfortable, terrified, sad, proud of her, proud to be his student,
proud that he passed the exam, etc.
Since these all include an adjective (A), we can safely conclude that they all form
an AP:
(90) a. happy: A
b. proud [of her]: A PP
c. proud [to be his student]: A VP
d. proud [that he passed the exam]: A CP
Looking into the constituents of these, we can formulate the following simple PS
rule for the AP:9
8 We use a triangle when we do not need to represent the internal structure of a phrase.
9 The phrase CP results from the combination of a complementizer like that and an S.
(91) AP → A (PP/VP/CP)
Verbs like sound and feel require an AP to follow them; (92a)–(92c) satisfy the rule in
(91). This can be represented in the following structures:
(93)
The rule in (91), however, would not license the expressions in the brackets as
proper APs:
(94) John sounded [*happily/*very/*the student/*in the park].
Adverb phrases (AdvPs) are often used to modify verbs, adjectives, and even adverbs
themselves, and they can all occur in principle in the following environments:
(96) a. They had behaved very .
b. They worded the offer .
c. They treated the sources .
Phrases other than an AdvP cannot appear here. For example, an NP the student
or an AP really happy cannot occur in these syntactic positions:
(97) a. They had behaved very differently.
b. They worded the offer really carefully.
c. They treated the sources separately.
The intensifiers straight and right can occur neither with an AP nor with an
AdvP:
(102) a. The squirrel ran straight/right up the tree.
b. *The squirrel is straight/right angry.
c. *The squirrel ran straight/right quickly.
From the examples in (99), we can deduce the following general rule for forming
a PP:10
(103) PP → P NP
10 Depending on how we treat the qualifiers straight and right, we may need to extend this PP
rule as PP → (Qual) P NP so that the P may be preceded by an optional qualifier like right
or straight. However, this means that we need to introduce another lexical category, ‘Qual.’
Another direction is to take the qualifier categorically as an adverb carrying the feature QUAL
while allowing only such adverbs to modify a PP.
There are questions about whether these expressions project phrases, but we take it
that at least complementizers and subordinating conjunctions project phrases, namely CP
and ConjP:12
(106) a. He hopes [C that [S you go ahead with the speech]].
b. [CONJ After [S I had an interview]], I met her.
The if -clause in (107a) is a complement clause required by the matrix verb asked,
while the if -clause in (107b) is a subordinating clause, which is optional. This
implies that we need to distinguish these two by the following PS rules:
(108) a. CP → C S
b. ConjP → Conj S
The following sentences, each consisting of two coordinated clauses, further
illustrate the coordinating conjunctions (with (h) showing clauses joined by a
semicolon alone):
a. She must have been very hungry, for she ate everything immediately.
b. They went to the park, and they went down the slide.
c. Mike doesn’t like doing his homework, nor does he like going to school.
d. The park is empty now, but it will be filled with children after school.
e. We could go get ice cream, or we could go get pizza.
f. Projects can be really exciting, yet they can be really hard work.
g. The lady was feeling ill, so she went home to bed.
h. I go to the library; I love to read.
12 There are two other views on the treatment of subordinating conjunctions. One is to treat
them as prepositions combining with an S (Emonds, 1976), and the other is to take them as
complementizers (van Gelderen, 2017).
We have seen earlier that a grammar with just lexical categories is not
adequate for capturing the basic properties of the language. How much further
do we get with a grammar that includes phrases? A set of PS rules that license
the combination of lexical and phrasal constructions, some of which we have
already seen, is given in (109):13
(109) a. S → NP VP
b. NP → (Det) A∗ N (PP/S)
c. VP → V (NP) (A/PP/S/VP)
d. AP → A (PP/CP)
e. AdvP → (AdvP) Adv
f. PP → P NP
g. VP → VP AdvP
The rules say, among other things, that a sentence is the combination of an NP and a
VP, and that an NP can be made up of an optional Det, any number of As, an obligatory N,
and an optional PP or S. Of the possible tree structures that these rules can generate,
the following is one example:
(110)
With the structural possibilities shown here, let us assume that we have the
following lexical entries:
(111) a. Det: a, an, the, this, that, his, her, no, etc.
b. A: handsome, tall, little, small, large, stylish, big, yellow, etc.
c. N: book, boy, garden, friend, present, dog, cat, man, woman, etc.
d. V: kicked, chased, sang, met, gave, taught, etc.
e. P: in, at, of, to, for, on, etc.
Inserting these elements in the appropriate preterminal nodes (the places with
dots) in (110), we are able to produce various sentences like those in (112):14
(112) a. That tall man met a dog.
b. A man kicked that small ball.
13 A grammar consisting of rules of this form is often called a ‘Context-Free Grammar,’ since each rule may apply whenever its left-hand category is present, regardless of the surrounding context.
14 The grammar still generates semantically anomalous examples like #The desk believed a man or
#A man sang her hat. For such semantically distorted examples, we need to refer to the notion
of ‘selectional restrictions’ (see Chapter 7).
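The way PS rules like those in (109) and lexical entries like those in (111) cooperate to generate sentences such as (112) can be sketched in Python. The grammar fragment below is deliberately simplified (for instance, at most one adjective per NP); it is an illustration of the mechanism, not the book's full rule system:

```python
import random

# A toy context-free grammar in the spirit of (109), with a lexicon drawn
# from (111). Both are simplified for illustration.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["Det", "A", "N"]],
    "VP": [["V", "NP"]],
}
LEXICON = {
    "Det": ["a", "the", "that"],
    "A":   ["tall", "small"],
    "N":   ["man", "dog", "ball"],
    "V":   ["kicked", "chased", "met"],
}

def generate(category, rng):
    """Rewrite a category until only words remain (a top-down derivation)."""
    if category in LEXICON:                  # preterminal node: insert a word
        return [rng.choice(LEXICON[category])]
    expansion = rng.choice(RULES[category])  # pick one PS rule for the category
    words = []
    for child in expansion:
        words.extend(generate(child, rng))
    return words

print(" ".join(generate("S", random.Random(0))))
```

Each run produces a sentence like those in (112), e.g. *that tall man met a dog*; semantically anomalous outputs of the kind noted in footnote 14 are of course not excluded.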
42 LEXICAL AND PHRASAL SIGNS
There are several ways to generate an infinite number of sentences with this kind of grammar. As we have seen before, one simple way is to repeat a category (e.g., adjective) an unbounded number of times, as (109b) allows. There are also other ways of generating an infinite number of grammatical sentences. Look at the following two PS rules from (109) again:
(113) a. S → NP VP
b. VP → V S/CP
As we show in the following tree structure, we can ‘recursively’ apply the two
rules, in the sense that one can feed the other and then vice versa:
(114)
Verbs like think can combine with either an S or a CP. It is not difficult to expand
this sentence by applying the two rules again and again:
(115) a. Bill claims (that) John believes (that) Mary thinks (that) Tom is honest.
b. Jane imagines (that) Bill claims (that) John believes (that) Mary thinks (that)
Tom is honest.
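The recursion at work in (115) can be made concrete with a short sketch: a function that embeds a clause under successive subject-verb pairs, just as S → NP VP feeds VP → V S. The helper below simply restrings the subjects and verbs of (115); it is our own illustrative device:

```python
# A sketch of the recursion in (113): a clause can be embedded under a
# clause-taking verb any number of times.
def embed(clause, pairs):
    """Wrap `clause` under successive (subject, verb) pairs, innermost first."""
    for subject, verb in pairs:
        clause = f"{subject} {verb} that {clause}"
    return clause

s = embed("Tom is honest",
          [("Mary", "thinks"), ("John", "believes"), ("Bill", "claims")])
print(s)
# Bill claims that John believes that Mary thinks that Tom is honest
```

Adding a fourth pair such as `("Jane", "imagines")` yields (115b); nothing in the two rules imposes an upper bound on the depth of embedding.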
This means that we will also have a recursive structure like the following:15
(117)
The structures clearly indicate what with the toy modifies: In (119a), it modifies the whole VP, whereas in (119b) it modifies just the NP the child. The structural differences induced by the PS rules directly represent these meaning differences.
15 Due to the limited number of auxiliary verbs, and restrictions on their cooccurrence, the maxi-
mum number of auxiliaries in a single English clause is four (e.g., The building will have been
being built for three years), and their relative order is fixed. See Chapter 8.
44 LEXICAL AND PHRASAL SIGNS
The PS rules we have introduced would give us different structures for the
following two:
(120)
We have noted that English allows two like categories to be coordinated. This can be written as a PS rule for phrasal conjunction, where XP is any phrase in the grammar:16
(122) X(P) → X(P)+ Conj X(P)
The ‘coordination’ rule says that two or more identical phrasal (XP) or lexical (X) categories can be coordinated, forming a phrase of the same category X(P), as illustrated by the following:17
16 Unlike the Kleene star operator ∗, which allows zero or more occurrences, the plus operator + here means that the X(P) occurs at least once.
17 This coordination rule needs to be relaxed to license the coordination of unlike categories, as in
Kim is [a CEO] and [proud of her job]. Such examples can be taken to be the coordination of
two predicative expressions.
Applying the PS rule in (122), we will then allow (125a) but not (125b):
(125)
Unlike categories such as PP and AP may not be coordinated: This is what the
coordination PS rule ensures.
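The like-category condition imposed by (122) can be sketched as a small checker: coordination succeeds only when all conjuncts share one category, and the result bears that same category. The encoding of categories as strings is our own illustrative device:

```python
# A sketch of the coordination rule in (122): only like categories may be
# conjoined, and the coordinate structure bears the same category.
def coordinate(conjuncts, conj="and"):
    """Return the category of the coordinate structure, or raise an error
    if the conjuncts are unlike categories."""
    cats = set(conjuncts)
    if len(cats) != 1:
        raise ValueError(f"cannot coordinate unlike categories: {sorted(cats)}")
    return conjuncts[0]

print(coordinate(["NP", "NP"]))   # licensed, like (125a)
# NP
```

Calling `coordinate(["PP", "AP"])` raises an error, mirroring the ill-formedness of (125b); as footnote 17 notes, a fuller treatment would relax this for coordinated predicative expressions.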
(127)
What the structure implies is that the inflected verb kicked combines with the
NP the bucket, but its semantic composition is peculiar in that the combination
kicked the bucket means not an action of kicking but ‘die.’
(130) a. Everyone was panting as if they’d all run up a steep hill. (up as a preposition)
b. The disease would run up a bill as high as $50 billion. (up as a particle)
One obvious difference between particles and prepositions is that a particle can occur after the object:
The constituent test with cleft constructions tells us that, unlike the particle, the
preposition forms a unit with the following NP:
(132) Preposition up
a. It was [up a big hill] that John ran. (cleft)
b. It was [a big hill] that John ran up. (cleft)
(133) Particle up
a. It was [a big bill] that John ran up. (cleft)
b. *It was [up a big bill] that John ran. (cleft)
This data set indicates that the particle does not form a constituent with the object. Another interesting data set concerns so-called ‘gapping,’ which allows the ellipsis of a redundant (repeated) verb or verb complex:
(134) a. John ran up a big hill and Jack up a small hill. (gapping ran)
b. *John ran up a big hill and Jack a small hill. (no gapping ran up)
(135) a. John ran up a big bill and Jack up a small bill. (gapping ran)
b. John ran up a big bill and Jack a small bill. (gapping ran up)
In both (134a) and (135a), we can gap the main verb ran. The difference emerges between (134b) and (135b): The main verb can be gapped together with the particle, as in (135b), but not together with the preposition, as in (134b). This difference implies that the postverbal particle forms a strong unit with the preceding main verb. These can be represented in the following tree
structures:
(136)
The structure in (136b) would mean that the particle forms a verb complex with the preceding main verb, as represented by the following constructional rule:
(137) V → V, Part
The verb-particle complex will then combine with the object by the typical VP rule VP → V, NP, as seen in (136b). Supporting evidence for this comes from coordination:
(138) a. Did Jill run [up a big hill] or [up a small hill]?
b. *Did Jill run [up a big bill] or [up a small bill]?
c. Did Jill [run up] [a big bill] or [a small bill]?
The contrast here indicates that the [verb-particle] sequence forms a complex unit. The verb-particle complex can also be observed in the following data (Jackendoff, 2002):
(139) a. the rapid [looking up] of the information is important.
b. the prompt [sending out] of reports is commendable.
In these examples, the particle forms a unit with the gerundive verb. The particle
here cannot be separated from the gerundive verb, as in the following:
(140) a. *the rapid looking of the information up
b. *the prompt sending of the reports out
One fact we have not yet discussed is that the particle can also occur right after the object:
(141) a. Jill brought the cat in.
b. He shut the gas off.
To license this ordering, the grammar introduces a combinatorial rule like the
following:
(142) VP → V NP Part
(143)
The particle in English thus either forms a fixed complex unit with the preceding
main verb, or is a syntactic sister to the verb, occurring right after the object (see
Section 5.4 for further discussion). This complex unit is larger than a word but
smaller than a full phrase. We can think of this complex unit as a compound word
somewhat like English compound verbs stir fry and blow dry.
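The two particle placements licensed by (137) and (142) can be summarized in a short sketch that returns both licensed word orders for a verb-particle combination. The small list of verb-particle pairs is our own illustrative device:

```python
# A sketch of the two particle placements: (137) lets the particle form a
# complex with the verb (V -> V Part), and (142) lets it follow the object
# (VP -> V NP Part). The membership list below is illustrative only.
PARTICLE_VERBS = {("run", "up"), ("bring", "in"), ("shut", "off")}

def vp_orders(verb, particle, obj):
    """Return both word orders licensed for a verb-particle VP."""
    if (verb, particle) not in PARTICLE_VERBS:
        raise ValueError(f"{verb} {particle} is not listed as a verb-particle pair")
    return [f"{verb} {particle} {obj}",   # [V Part] NP, as in (136b)/(137)
            f"{verb} {obj} {particle}"]   # V NP Part, as in (142)/(143)

print(vp_orders("bring", "in", "the cat"))
# ['bring in the cat', 'bring the cat in']
```

A true preposition such as up in ran up a big hill would license only the first order, which is why it does not belong in the verb-particle list.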
2.8 Conclusion
open patterns like the rule for forming noun phrases (NPs) or the rule for creating
conditional sentences.18
Before examining the constructions that combine words and phrasal signs, we
will explore the grammatical functions and semantic roles that each constituent
plays in a given sentence. These functions and roles are the main topic of the
next chapter.
Exercises
(ii) a. I was anxious for Freeman to return and cage the beast.
b. *I was anxious for Freeman should return and cage the beast.
18 CxG uses the term construct-icon (a blend of the words construction and lexicon) to refer to
this continuum. The construct-icon is not just a list of language conventions; it has a taxonomic
organization. See Hilpert (2014).
(vi) a. If students are coming to school less prepared to learn, this may
cause declining productivity.
b. Whether students are in elementary schools or in prestigious
universities, homework is a necessary part of the learning
process.
What do the following data imply for the lexical category of to?
(iii) a. I know I should [go to the dentist’s], but I just don’t want to .
b. I don’t really want to [go to the dentist’s], but I know I should
.
c. *I know I should keep studying, but I just don’t keep .
3.1 Introduction
Terms like SUBJ, OBJ (direct object (DO), indirect object (IO)), MOD, and PRED
represent grammatical functions that each phrasal constituent can play in a given
sentence. As an example, consider (2):
(2) The driver crashed his car into the back of another car.
As shown here, the driver is an NP with respect to its syntactic form, but it is the
SUBJ (subject) of the sentence with respect to its grammatical function. The NP
his car is the OBJ (object), while the verb crashed functions as a predicator. More
importantly, we consider the entire VP to be a PRED (predicate) that describes
a property of the subject. Into the back of another car is a PP in terms of its
syntactic category while serving as a MOD (modifier) here.
We also can represent sentence structures using semantic roles. Constituents
can be considered in terms of semantic relations such as agent, patient, location,
instrument, and the like. A semantic role label tells us in essence ‘who is doing
what to whom’ – that is, what sort of participant each constituent expresses in a
clause, regardless of whether that clause describes an event or a state. Each main
53
54 F O R M S , F U N C T I O N S , A N D RO L E S
verb assigns one or more semantic roles. Consider the semantic roles of the NPs
in the following two sentences:1
(4) a. [The hurricane] destroyed [their house].
b. [Their house] was destroyed by [the hurricane].
As noted here, in addition to agent and patient, we have the semantic predicate
(pred), which selects for the agent and patient roles. So we now can describe the
semantic role that each constituent expresses.
Throughout this book we will see that in syntactic description, we must refer to
these three different levels of information (syntactic category, grammatical func-
tion, and semantic role), and that these levels interact with one another. There are
certain associations across levels that are typical in event encoding; for example,
an agent is a subject and an NP, and a patient is an object and an NP. However, as
we see in (5), the passive-active voice alternation is a case in which these typical
associations are broken.
3.2.1 Subjects
Consider the following pair of examples:
(6) a. [The dark] [devoured [the light]].
b. [The light] [devoured [the dark]].
These two sentences have exactly the same words and have the same predicator,
devoured. Yet they differ significantly in meaning, and the main difference comes
1 Semantic roles are also often called ‘thematic roles’ or ‘θ-roles’ (‘theta roles’) in generative
grammar (Chomsky, 1982, 1986).
from what serves as subject or object with respect to the predicator. In (6a), the
subject is the dark, whereas in (6b) it is the light, and the object is the light in
(6a) but the dark in (6b).
The most common sentence structure seems to be that in which the NP subject performs the action denoted by the verb (thus having the semantic role of agent).
However, this is not always so:
(7) a. She wears a stylish set of furs.
b. This place physically stinks.
c. It is raining heavily.
d. Wolfgang himself disliked his hometown.
Wearing a set of furs, stinking, raining, or disliking one’s hometown are not
agentive activities; these are states or, in the case of raining, physical processes.
Such facts show that we cannot equate the grammatical role of subject with the
semantic role of agent.
More reliable tests for subjecthood come from syntactic tests such as agree-
ment, tag-question formation, and subject-auxiliary inversion.
Agreement: The main verb of a sentence agrees with the subject in English:
(8) a. He never writes/*write his books from an outline.
b. The events of the last days *saddens/sadden me.
c. Ashley takes/*take her mother out to lunch.
The singular subject he or Ashley requires a singular verb, while the plural subject events requires a plural verb. Simply being closer to the main verb does not entail subjecthood, as further shown by the following examples:
(9) a. Every one of those children is/*are important.
b. The legitimacy of their decisions depends/*depend on public support for the
institution.
c. The results of this analysis *is/are reported in Table 6.
The subject in each example is every one, the legitimacy, and the results respec-
tively, even though there are other nouns closer to the main verb. It is thus not
simply the linear position of the NP that determines agreement; rather, agreement
tells us what the subject of the sentence is.
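The logic of the agreement test can be sketched as a small procedure: the subject is the NP whose number matches the finite verb, not the NP nearest the verb. The pair encoding of NPs is our own device:

```python
# A sketch of the agreement test for subjecthood. NPs are represented as
# (head, number) pairs, in linear order; the representation is ours.
def find_subject(nps, verb_number):
    """Return the head of the NP whose number matches the finite verb."""
    matches = [head for head, number in nps if number == verb_number]
    if len(matches) != 1:
        raise ValueError("agreement alone does not single out a subject here")
    return matches[0]

# (9c) "The results of this analysis are reported ...": plural 'are' picks
# out 'results', even though 'analysis' is linearly closer to the verb.
print(find_subject([("results", "pl"), ("analysis", "sg")], "pl"))
# results
```

When both candidate NPs happen to share the verb's number, agreement is uninformative and we must fall back on the other tests below, such as tag questions or subject-auxiliary inversion.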
Tag questions: A tag question is an abbreviated question at the end of a clause
consisting of an auxiliary verb followed by a pronoun referring back to the sub-
ject of the main clause. The tag-question formation is also a reliable subjecthood
test:
(10) a. The lady singing with that boy is a genius, isn’t she/*isn’t he?
b. With their teacher, the kids have arrived safely, haven’t they/*hasn’t he?
The pronoun in the tag question agrees with the subject in person, number, and
gender – it refers back to the subject but not necessarily to the closest NP, nor
to the most topical one. The pronoun she in (10a) shows us that lady is the head
(the essential element) of the subject NP in that example, and the use of they in
the tag in (10b) leads us to assign the same property to kids. The generalization
is that a tag question must contain a pronoun which identifies the subject of the
clause to which the tag is attached.
Subject-auxiliary inversion: In forming questions and other sentence types,
English uses subject-auxiliary inversion, a pattern in which the subject imme-
diately follows an auxiliary verb:
(11) a. This guy is a genius.
b. The rules have changed.
c. It could be more harmful on super hot days.
(12) a. Is [this guy] a genius?
b. Have [the rules] changed?
c. Could [it] be more harmful on super hot days?
As seen here, the formation of yes-no questions such as these involves placing the
first tensed auxiliary verb in front of the subject NP. More formally, the auxiliary
verb is inverted with respect to the subject, hence the term ‘subject-auxiliary
inversion’ (SAI) (see Chapter 8 for detailed discussion). This is not possible
with a nonsubject:
(13) a. Most of the people in this country have already made the decision.
b. *Have [in this country] most of the people already made the decision?
However, this is not a solid generalization. The objects in (15a) and (15b)
are not obviously changed by the action. In (15a) the dog is experiencing
something, and in (15b) the thunder is somehow causing some feeling in the
dog:
(15) a. Thunder frightens [the dog].
b. The dog fears [thunder].
Once again, the data show us that we cannot identify the object based on semantic
roles. A much more reliable criterion is the syntactic construction passive, in
which a nonagent appears as subject. The sentences in (14) can be turned into
passive sentences in (16):
What we can learn here is that the object-denoting entities in (14) can be
‘promoted’ to subject in the passive sentences. The test relies on the fact that
nonobject NPs cannot be promoted to the subject:
(17) a. Jones remained a faithful servant to Rice.
b. *A faithful servant was remained to Rice by Jones.
The generalization is that only those NPs that serve as direct objects of their
verbs can be promoted to subject by means of passive.
An indirect object (IO) is an NP that occurs with a DO in a ditransitive
sentence, and in this construction it precedes the DO. The pattern is:
(18) Subject – Verb – IO (Indirect Object) – DO (Direct Object)
The IO expresses the one to whom or for whom the action of the verb is performed, or the actual or potential recipient of the item being transferred (the item itself being denoted by the DO). The IO thus canonically has the semantic role of goal, recipient, or benefactive:
(19) a. The catcher threw [me] [the ball]. (IO = goal)
b. She gave [the police] [the licence plate number]. (IO = recipient)
c. She’d baked [him] [a birthday cake]. (IO = benefactive)
In each case, the DO, following the IO, has the semantic role of theme.
While both IO and DO can have a variety of semantic roles, the passive
construction (to be introduced in Chapter 9) has structural rather than seman-
tic conditions of application, promoting to subject whatever NP would have
immediately followed the verb. This reflects the traditional intuition that pas-
sive applies to the grammatically dependent first NP, and thus allows those IO
arguments that immediately follow the verb to become subjects as well. This is
shown by the passive versions of the sentences in (19):
(20) a. I was thrown the ball (by the catcher).
b. The police were given the licence plate number (by her).
c. He had been baked a birthday cake (by her).
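The structural character of passive described here can be sketched as a procedure that promotes the NP immediately following the verb and demotes the subject to a by-phrase. The dictionary encoding of grammatical functions is our own simplification, and verbal morphology and agreement are ignored:

```python
# A sketch of passive as structural promotion: whatever NP immediately
# follows the verb becomes the subject, whether it is a DO or an IO.
def passivize(clause):
    comps = clause["COMPS"]
    if not comps:
        raise ValueError("no postverbal NP to promote")
    return {"SUBJ": comps[0],                  # promoted first postverbal NP
            "V": f"be {clause['V']}",          # passive verb complex (schematic)
            "COMPS": comps[1:],                # remaining complements
            "OBL": f"by {clause['SUBJ']}"}     # demoted subject in a by-phrase

# (19a)/(20a): 'The catcher threw me the ball' -> 'I was thrown the ball'
active = {"SUBJ": "the catcher", "V": "thrown", "COMPS": ["me", "the ball"]}
print(passivize(active)["SUBJ"])
# me
```

Because the rule looks only at linear position, it correctly promotes the IO of a ditransitive as in (20), while a clause like (17a), whose postverbal NP is not an object, would need an extra condition that this sketch omits.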
Note that examples with IO-DO order are different from those in which the
semantic role of the IO is expressed as an oblique PP, following the DO:2
(21) a. The catcher threw the ball to me.
b. She gave the licence plate number to the police.
c. She’d baked a birthday cake for him.
The ‘NP PP’ (or oblique goal) pattern combines with a wider array of verbs than
does the ‘NP NP’ ditransitive pattern; the latter is restricted to the specific seman-
tic roles mentioned above. So, for example, (23a) has no alternate expression
where ‘a zombie’ is an IO:
(23) a. They have turned him into a zombie.
b. *They have turned a zombie him.
The italicized elements here are traditionally called ‘predicative (PRD) comple-
ments’ in the sense that they function as a predicate describing the subject or
object. However, although they are NPs, they cannot be promoted to subject by
passive:
(26) a. *President was elected Bill Clinton (by the Democrats).
b. *A boyfriend was considered Jimmy (by her).
The difference between objects and predicative complements can also be seen in
the following contrast:
(27) a. He made Jack a sandwich.
b. I made Jack a football star.
Even though the italicized expressions here are both NPs, they function differ-
ently. The NP a sandwich in (27a) is a direct object, as in He made a sandwich
for Jack, whereas the NP a football star in (27b) cannot be an object: It serves
as the predicate of the object Jack. If we think of part of the meaning informally, only in the second example would we say that the final NP describes the object NP:
(28) a. (27a): Jack ≠ a sandwich
b. (27b): Jack = a football star
The PPs here, which cannot be objects since they are not NPs, also do not serve
as predicates of the subject or object – they relate directly to the verb as oblique
complements.
The functions of DO, IO, predicative complement, and oblique complement
all have one common property: they are all selected by the verb, and we view
them as being present to ‘complement’ the verb to form a legitimate VP. Hence,
these are called complements (COMPS), and typically they cannot be omitted.
3.2.5 Modifiers
Unlike these complements required by a lexical head, there are
expressions which do not complement the predicate in the same way and which
are truly optional:
(33) a. She stopped and looked up suddenly.
b. I made my choice a long time ago.
c. The videographers were indicted in Texas.
d. He wasn’t popular because he was a genius at math.
The italicized expressions here are all optional and function as modifiers (also
called ‘adjuncts’ or ‘adverbial’ expressions). These modifiers specify the man-
ner, location, time, or reason, among many other properties, of the situations
expressed by the given sentences – informally, they are the how, when, where,
and why phrases.
One additional characteristic of modifiers is that they can be stacked, whereas
complements cannot:
(34) a. *John gave Tom [a book] [a record].
b. Oswald was seen with him [several times] [last summer].
As shown here, temporal adjuncts like several times and last summer can be stacked, whereas the two complements a book and a record in (34a) cannot.
Of course, temporal adjuncts do not become the subject of a passive sentence,
suggesting that they cannot serve as objects:
(35) a. Gary visited yesterday.
b. *Yesterday was visited by Gary.
As shown here, the expressions the little cat and a mouse are both NPs, but they
have different grammatical functions, SUBJ and OBJ. The VP as a whole func-
tions as the predicate of the sentence, describing the property of the subject.3
Additionally, though not shown here, we would want to say that little is an
attributive modifier of cat, and the determiners the and a have a ‘specifying’
function with respect to their head nouns (see Chapter 5).
Assigning grammatical functions within complex sentences is no different:
3 It is important not to confuse the functional term ‘adverbial’ and the syntactic category label
‘adverb.’ The term ‘adverbial’ is used interchangeably with ‘adjunct’ or ‘modifier,’ whereas
‘adverb’ only designates a part of speech. In English almost any kind of phrasal category can
function as an adverbial, but only a limited set of words are adverbs.
(37)
Each clause has its own SUBJ and PRED: John is the subject of the higher clause,
whereas the cat is the subject of the lower clause. We can also notice that there
are two OBJs: The CP is the object of the higher clause, whereas the NP is that
of the lower clause.
Within the PS-rule system, as represented here, the subject is defined as the
immediate daughter of S, while the object is the immediate sister of V. These two
are also categorically specified as NPs. However, linguistic evidence indicates
that not only NPs but also other categories (e.g., CP, VP, and PP) can function as
subject and object (Newmeyer, 2000, 2003):
(39) a. [NP The inferno] destroyed the downtown area.
b. [VP Loving you] is not in my control.
c. [CP That he doesn’t achieve perfection] is reasonable.
d. [VP To finish this work] is beyond his ability.
e. [PP Under the bed] is a safe place to hide.
Subject tests like subject-verb agreement and tag-question formation support the assumption that these non-NP phrases are the subject:
(41) a. [That he doesn’t achieve perfection] is reasonable, isn’t it?
b. [[That the march should go ahead] and [that it should be cancelled]]
have/*has been argued by different people at different times.
(42) a. [To finish this work] is beyond his ability, isn’t it?
b. [[To delay the march] and [to go ahead with it]] have/*has been argued by
different people at different times.
Examples like this would require a new set of PS rules. For example, the partial
tree structure of (42a) may look like the following:
(43)
The tree structure means that we need a new S rule like ‘S → VP VP’ or a rule
like ‘NP → VP’ to project the subject VP to an NP to keep the rule ‘S → NP
VP’ (see Chapter 4 for the resolution of this issue).
The same fact is observed for the object. Non-NP phrases like CP, VP, or even
PP can function as the object:
(44) a. They believe [that group work is an essential tool for students’ future lives].
b. They prefer [to study in a formal setting].
c. I’ll choose [after the holidays] to hold my party.
Object tests like the passive tell us that these non-NPs function as the object:
(45) a. [That group work is an essential tool for students’ future lives] is believed.
b. [To study in a formal setting] is preferred.
c. [After the holidays] will be chosen to hold my party.
The same goes for modifier (MOD), as noted before. Not only AdvP but also
phrases such as NP, S, VP, or PP can function as a modifier:
(46) a. The little cat devoured a mouse [NP last night].
b. This race has started [AdvP very early].
c. I stayed on as CEO [PP for four years].
d. They will absorb enough correct information [VP to pass the test].
e. Joseph had spoken to me in English [S when the party started].
Here the expression last night is an adverbial NP in the sense that it is categor-
ically an NP but functions as a modifier (adjunct) to the VP. As we go through
this book, we will see that the distinction between grammatical functions and
categorical types is crucial in the understanding of English syntax.
4 The definitions of semantic roles given here are adapted from Dowty (1989).
5 Patient and theme are often unified into ‘undergoer’ on the grounds that both a patient and a
theme can be said to be affected by the action in question.
• Benefactive: The entity that benefits from the action or event denoted by the
predicator. Examples: oblique complement of make, buy, etc.
(52) a. He made a cake for me.
b. John bought a guitar for me.
• Instrument: The means by which the action or event denoted by the pred-
icator is carried out. Examples: oblique complement of hit, wipe, hammer,
etc.
(56) a. He wiped his mouth with the back of his hand.
b. Tiger can hit a ball with a stick.
Although the above two sentences have different syntactic structures, they have
essentially identical interpretations. The reason is that the same semantic roles
are assigned to the same NPs: In both examples, the cat is the agent and the
mouse is the patient. Different grammatical uses of verbs may express the same
semantic roles in different arrays.
Semantic roles also allow us to classify verbs into finer-grained groups.
Consider the following examples:
(58) a. There comes a time when you have to say to yourself enough is enough.
b. There remains a gap between ‘what is’ and ‘what should be.’
c. There lived a lion whose skin could not be pierced by any weapon.
d. There arrived a tall, red-haired, and incredibly well-dressed man.
All the verbs in (58) and (59) are intransitive, but not all are acceptable in the
there-construction. The difference comes from the semantic role of the postver-
bal NP, as assigned by the main verb. Verbs like arrive, remain, and live are
taken to assign the semantic role of ‘theme’ (see the list of roles above), whereas
verbs like sing and dance assign an ‘agent’ role. We thus can conjecture that
there-constructions are not compatible with a verb whose subject carries an agent
semantic role.
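The conjecture about there-constructions can be sketched as a simple classifier: a verb is compatible with the pattern only if it does not assign an agent role to its subject. The role table below restates the text's classification of (58)-(59); the encoding itself is our own illustrative device:

```python
# A sketch of the there-construction conjecture: verbs assigning an agent
# role to their subject resist the pattern. Role assignments follow the
# text's discussion of (58) and (59).
SUBJECT_ROLE = {"come": "theme", "remain": "theme", "live": "theme",
                "arrive": "theme", "sing": "agent", "dance": "agent"}

def allows_there(verb):
    """True if the verb's subject role is compatible with a there-clause."""
    return SUBJECT_ROLE[verb] != "agent"

print(allows_there("arrive"), allows_there("sing"))
# True False
```

So There arrived a tall man is licensed while a there-clause with sing or dance is predicted to be ill-formed, matching the contrast between (58) and (59).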
While semantic roles provide very useful ways of describing properties across
different constructions, we should point out that the theoretical status of seman-
tic roles is still unresolved.6 For example, there is no agreement about exactly
which and how many semantic roles are needed. The problem is illustrated by
the following simple examples:
(60) a. The exhibit resembles a video game.
b. The composition of the planet Venus is similar to that of Earth.
What kind of semantic roles do the arguments here have? Both participants seem to be playing the same role in these examples: neither is clearly an agent, a patient, or a theme. There are also cases where we might not be able to pin down
the exact semantic role:
(61) a. Henry ran into the house to find a bag of water.
b. The baby tilted her head up to look at the sky.
The subject Henry in (61a) is both agent and theme: It is agent since it initiates
and sustains the movement but also theme since it is the object that moves. Also,
6 See Levin and Rappaport Hovav (2005) for further discussion of this issue.
the subject the baby in (61b) can either be an experiencer or an agent depending
on her intention – one can just look at the sky with no purpose at all.7
Although there are theoretical issues involved in adopting semantic roles in
grammar, there are many advantages to using them, some of which we have
noted here. We can make generalizations about the grammar of the language;
for example, typically the ‘agent’ takes the subject position, while an NP fol-
lowing the word from serves as the ‘source.’ As we will see in Chapter 4, the
array of semantic roles that a verb or class of verbs takes is a standard way of
characterizing that verb or verb class in a lexicon based on lexical classes. In
subsequent chapters, we will have cause to refer to semantic roles in various
places.
3.6 Conclusion
7 To overcome the problem of assigning the correct semantic role to an argument, one can assume
that each predicator has its own (individual) semantic roles. For example, the verb kick, instead of
having an agent and a patient, has two individualized semantic roles, ‘kicker’ and ‘kicked.’ See
Pollard and Sag (1987).
Exercises
3. Draw tree structures for the following sentences (with categories) and
then assign an appropriate grammatical function to each phrase:
a. They parted the best of friends.
b. Benny worked in a shoe factory when he was a student.
c. The gang robbed her of her necklace.
d. The film is about marine life.
e. I think of John as a good friend.
f. The trio visited a pub in the small town.
g. Oscar described Doberman as a really smart guy.
h. We often expect our students to diligently read their textbooks.
i. Honestly, I do not think that I understand people very well.
6. Consider the following examples with the copula verb be, and discuss
what kind of form-function (category-meaning) mismatches we can
observe here:
(i) a. Kim is a good student.
b. Kim is in.
Scientists found that the birds sang well in the evenings but per-
formed badly in the mornings. After being awake several hours,
however, the young males regained their mastery of the mate-
rial and then improved on the previous day’s accomplishments.
To see whether this dip in learning was caused by the same
kind of precoffee fog that many people feel in the morning,
the researchers prevented the birds from practicing first thing
in the morning. They also tried keeping the birds from singing
during the day, and they used a chemical called melatonin to
make the birds nap at odd times. The researchers concluded that
their study supports the idea that sleep helps birds learn. Stud-
ies of other animals have also suggested that sleep improves
learning.9
Why is only (1e) acceptable? Only this sentence satisfies the condition that the verb put selects an NP and a PP as its complements: it combines with these complements to form a well-formed VP. In the other examples, this condition is not fulfilled. This combinatory requirement can be traced back to lexical properties of the verb put, and it is not related to any properties external to the VP.
By contrast, external syntax is concerned with the syntactic environment
in which a phrase occurs. Some of the unacceptable examples in (1) can be
legitimate expressions if they occur in the proper (syntactic) context:
(2) a. This is the comforter under which he [put his hand]. (cf. (1a))
b. This is his hand that he [put under the comforter]. (cf. (1b))
1 The terms ‘internal’ and ‘external’ syntax are from Baker (1995).
The VP put his hand under the comforter is a well-formed phrase, but it cannot
occur in (3a) since this is not the environment in which such a finite VP occurs.
That is, the verb kept requires as its complement not a finite VP but a gerundive
VP like putting his hand under the comforter.
The circled element here is the essential, obligatory element within the particular phrase. We call this essential element the head of the phrase.2 The head of each phrase determines the syntactic category of the phrase built from it, a phenomenon called ‘lexical projection.’ The head of an NP is thus N, the head of a VP is V, and the head of an AP is A.
The property of headedness plays an important role in grammar. For example,
the verb put, functioning as the head of a VP, dictates what it must combine with:
two complements, an NP and a PP. Consider the other examples below:
(5) a. Clark denied the plagiarism charges.
b. *Clark denied.
(6) a. Hill handed the students an ambitious assignment.
b. *Hill handed the students.
The verb denied here requires an NP object, while handed requires two NP com-
plements in this use. The properties of the head verb determine what kind(s)
of elements it combines with. As noted in the previous chapter, the elements
with which a head verb must combine are called complements. The comple-
ments include direct object, indirect object, predicative complement, and oblique
complement. These are all potentially required by some verb or another.
The properties of the head become properties of the whole phrase. Why are
the examples in (7b) and (8b) ungrammatical?
(7) a. Lopez [wants to leave the United States].
b. *Lopez [eager to leave the United States].
The examples in (7b) and (8b) are unacceptable because of the absence of
the required head. The unacceptable examples lack a finite (tensed) VP as the
bracketed part, but we know that English sentences require a finite VP as one
immediate (or daughter) constituent, as informally represented in (9):
(9) English Declarative Sentence Construction:
Each declarative sentence must contain a finite VP as its head.
The PPs in his office or with love here provide further information about the
action described by the verb, but they are not required by the verb. These phrases
are optional and function as modifiers, augmenting the minimal
phrase projected from the head verb offered. The VP which includes this kind of
modifier forms a maximal phrase. We might say that the inner VP here forms a
‘minimal’ VP, which includes all the ‘minimally’ required complements, and the
outer VP is the ‘maximal’ VP, which includes optional modifiers.
What we have seen can be summarized as follows:
(12) a. Head: A lexical or phrasal element that is essential in determining the
category and internal structure of a larger phrase.
b. Complement: A phrasal element that a head must combine with – that is,
one that is selected by the head. Complements include direct object, indirect
object, predicative complement, and oblique complement.
c. Modifier: A phrasal element that is not selected by the head but which
functions as a modifier of the head phrase, for example, indicating
the time, place, manner, or purpose of the action expressed by a verb and its
complements.
3 See Section 5.5 for the values of the English verb form attribute (VFORM), including finite and
nonfinite.
4.2 Differences between Complements and Modifiers 73
d. Minimal Phrase: the phrase including a head and all of its complements.
e. Maximal Phrase: the phrase that includes all complements as well as any
modifiers.
The possibility of omitting the book and to me in each case implies that they are
optional complements.
Iterability: The possibility of iterating identical types of phrase can also dis-
tinguish between complements and modifiers. In general, two or more instances
of the same modifier type can occur with the same head, but this is impossible
for complements:
4 Most of the criteria and tests we discuss here are adopted from Pollard and Sag (1987) and
Baker (1995).
74 HEAD, COMPLEMENTS, MODIFIERS
(18) a. *The UN blamed global warming [on humans] [on natural causes].
b. The two had met [in Los Angeles] one night [at a bar] in June of that year.
In (18a), on humans is a complement, and thus another PP of the same type, on natural
causes, cannot cooccur with it. By contrast, in Los Angeles in (18b) is a modifier, so
another PP of the same type, such as at a bar, can be added.
The Do-So Test: Another reliable test used to distinguish complements from
modifiers is the do-so or do the same thing test. As shown in (19), we can use do
the same thing to avoid repetition of an identical VP expression:
(19) a. Leslie deposited some money in the checking account and Kim did the same
thing.
b. Leslie deposited some money in the checking account on Friday and Kim
did the same thing.
We can observe in (19b) that the VP did the same thing can replace either the
minimal phrase deposited some money in the checking account or the maximal
phrase including the modifier on Friday. Notice that this VP can also replace
only the minimal phrase, excluding the modifier, as in (20):
(20) John deposited some money into the checking account on Friday and Mary
did the same thing on Monday.
From these observations, we can draw the conclusion that if something can be
replaced by do the same thing, then it is either a minimal or a maximal phrase.
This in turn means that this ‘replacement’ VP cannot be understood to exclude
any complement(s). This can be verified with more data:
(21) a. *John [deposited some money into the checking account] and Mary did the
same thing into the savings account.
b. *John [gave a present to the student] and Mary did the same thing to the
teacher.
Here the PPs into the checking account and to the student are both complements,
and thus must be included in the do the same thing phrase. This gives us the
following informal generalization:
(22) Do-So Replacement Condition:
The phrase do so or do the same thing can replace a verb phrase that includes
at least all of the complements of the verb.
This condition explains why the oblique expressions into the savings account
and to the teacher cannot appear next to did the same thing in (21). The
unacceptability of the examples in (23) also supports this generalization about English
grammar:
(23) a. *John locked Fido in the garage and Mary did so in the room.
b. *John ate a carrot and Mary did so a radish.
The ill-formedness of these examples indicates that both in the room and a radish
function as complements.
5 These observed ordering restrictions can provide more evidence for the distinction between com-
plements and modifiers. Again, this test is not always sufficient by itself. In the following, the
modifiers precede the complements:
a. We discussed [all night long] [how to finish the project].
b. I said [publicly] [that we would have plenty of problems along the way].
One way to account for such examples is to assume that the clausal complement in each
case is 'extraposed' to sentence-final position. See Chapter 12 for discussion of extraposition
constructions in English.
6 The discussion in this section is based on Sag et al. (2003).
4.3 PS Rules, X′-Rules, and Features 77
h. PP → P NP
i. VP → Adv VP
One property common to all of these rules is, as we have discussed, that every
phrase has its own head. In this sense, each phrase is the projection of a head and
is thereby endocentric. However, this raises the question of whether we can have
rules like the following, in which the phrase has no head at all:
(31) a. VP → P NP
b. NP → PP S
Nothing in the grammar makes such PS rules unusual or different in any way
from the set in (30). Yet if we allow such 'nonendocentric' PS rules, in which
a phrase does not have a lexical head, the grammar would be too powerful to
generate only the grammatical sentences of the language. For instance, with
this kind of PS rule, examples like to the room would be a VP, making John to
the room a sentence consisting of an NP and a VP. More seriously, such PS
rules, with no head expression on the right-hand side, do not exist in English or other
languages. We have seen that each phrase must have a head; these PS rules thus
violate the headedness (or endocentricity) requirement on phrases.
Another limitation of the simple PS rules concerns the issue of redundancy.
Observe the following:
(32) a. *The problem disappeared the accusation.
b. The problem disappeared.
These examples show that each verb has its own restrictions on its comple-
ment(s). For example, deny requires an NP, whereas disappear does not, and give
requires two NPs as complements. The different patterns of complementation are
said to define different subcategories of verbs. Each specific pattern is known as
the ‘subcategorization’ requirement of each verb, which can be represented as
follows (IV: intransitive, TV: transitive, DTV: ditransitive):
(35) a. disappear: IV, ⟨ ⟩
     b. deny: TV, ⟨NP⟩
     c. give: DTV, ⟨NP, NP⟩
We can see here that in each VP rule, only the appropriate verb can occur. That
is, a DTV cannot form a VP with the rules in (36a) or (36b): It forms a VP only
according to the last PS rule. Each VP rule thus also needs to specify the kind of
verb that can serve as its head.
Taking all of these observations together, we see that a grammar of the type
just suggested must redundantly encode subcategorization information both in
the lexical type of each verb (e.g., DTV) and in the PS rule for that type
of verb. A similar issue of redundancy arises in accounting for subject-verb
agreement:
(37) a. The insect devours the soft flesh.
b. The insects devour the soft flesh.
To capture the fact that the subject NP agrees with the predicate VP, we need to
break the S rule into the following two rules:
(38) a. S → NPsing VPsing (for (37a))
b. S → NPpl VPpl (for (37b))
The two PS rules ensure that the singular (sing) subject combines with a singular
VP, while the plural (pl) subject NP combines with a plural VP.
The grammar described above may be a perfectly adequate descriptive tool.
From a theoretical perspective, however, we must address the endocentricity
and redundancy issues. A more specific, related question is: how many PS
rules does English have? For example, how many PS rules do we need to
characterize English VPs? Presumably there are as many rules as there are
subcategories of verb. We need to investigate the properties shared by all PS
rules in order to develop a theory of PS rules. For example, it seems to be
the case that each PS rule must have a ‘head.’ This will prevent many PS
rules that we could write using the rule format from being actual rules of any
language.
What are the structures of these two sentences? Do the phrases every photo of
Max and sketch by his students form NPs? It is not difficult to see that sketch by
his students is not a full NP by itself, for if it were, it would be able to appear as
subject on its own:
In terms of semantic units, we can assign the following structures to the above
sentences, in which every and no operate over the meaning of the rest of the
phrase:
(41) a. [Every [[photo of Max] and [sketch by his students]]] appeared in the
magazine.
b. [No [[photo of Max] or [sketch by his students]]] appeared in the magazine.
The expressions photo of Max and sketch by his students are phrasal elements but
not full NPs. So what are they? We call these ‘intermediate phrases,’ notationally
represented as N-bar or N′. The phrase N′ is thus intuitively bigger than a noun
but smaller than a full NP, in the sense that it still requires a determiner from the
class the, every, no, some, and the like.
The complementary notion that we introduce at this point is ‘specifier’ (SPR),
which can include the words just mentioned as well as phrases:
The phrase the enemy’s in (42a) and the subject the enemy in (42b) are semanti-
cally similar in the sense that they complete the specification of the event denoted
by the (nominal and verbal) predicate. These phrases are treated as the specifiers
of N′ and of VP, respectively.
As for the possible specifiers of N′, observe the following:
The italicized expressions here all function as the specifier of N′. Notice, how-
ever, that although most of these specifiers are determiners, some consist of
several words, as in (43e) (my friend’s, the Queen of England’s). This moti-
vates us to introduce the new phrase type DP (determiner phrase) that includes
the possessive phrase (NP + ’s) as well as determiners. This leads us to allow
two things: a determiner alone can be projected as a DP and the posses-
sive marker (’s) functions as a determiner and projects into a DP with its NP
specifier:
80 H E A D , C O M P L E M E N T S , M O D I FI E R S
(44)
The structure in (44a) is an instance where a lexical head projects into a phrase
without combining with any complement or a modifier.7 The structure in (44b)
indicates that the possessive marker ’s functions as a head and projects into a DP
after combining with the obligatory NP specifier. The new phrase DP thus gives
us the generalization that the specifier of N′ is a DP.8
Now let us compare the syntactic structures of (43a) and (43b):
(45)
(46)
7 In a traditional X′-theory, N first projects into N′ and then into NP, but our feature-based system
only distinguishes between word and phrase: An N′ just means a nominal phrase that requires a
specifier. See Chapter 5 for details.
8 Some analyses take each expression in (43) to form a DP (e.g., a little dog, my little dogs) where
the determiner functions as the head expression.
Even though the NP and S are different phrases, we can notice several similari-
ties. In the NP structure, the head N destruction combines with its complement
and forms an intermediate phrase N′, which in turn combines with the speci-
fier DP the enemy’s. In the S structure, the head V destroyed combines with
its complement the city and forms a VP. This resulting VP then combines
with the subject the enemy, which is also a specifier. In a sense, the VP
is an intermediate phrase that requires a subject in order to be a full and
complete S.
Given these similarities between NP and S structures, we can generalize over
them as in (47), where X is a variable over categories such as N, V, P, and other
grammatical categories:
(47)
This structure in turn means that the grammar now includes the following two
rules:9
9 Unlike the PS rules we have seen so far, the rules here are further abstracted, as indicated by
the comma notation between daughters on the right-hand side. We assume that the relative
linear order of a head and complements, etc. is determined by a combination of general and
language-specific ordering principles, while the hierarchical X′-structures themselves apply to
all languages that have demonstrable hierarchical structure.
10 The comma indicates that the modifier can appear either before the head or after the head, as in
always read books or read books always.
(50)
The ill-formedness of (51b) is due to the fact that the modifier with a hat was
combined with the head king first:
(52)
We can observe in (52b) that the combination of king with with a hat forms an
N′, but the combination of the complement of Rock and Roll with this N′ will not
satisfy the HEAD-COMPLEMENT CONSTRUCTION.
The existence and role of the intermediate phrase N′, which is larger than a
lexical category but still not a fully fledged phrase, is further supported by the
pronoun substitution examples in (53):
(53) a. The present king of country music is more popular than the last one.
b. *The king of Rock and Roll is more popular than the one of country music.
Why do we have the contrast here? One simple answer is that the pronoun one
here replaces an N′ but not an N or an NP. This will also account for the following
contrast:
(54) A: Which student were you talking about?
B: The one with long hair.
B : *The one of linguistics with long hair.
The phrase of linguistics is the complement of student. This means the N-bar
pronoun one should include this complement, as in B.
There are several more welcome consequences of these three X′ rules. These
grammar rules can account for the same structures described by all of the
PS rules that we have seen so far: With these rules we can identify phrases
whose daughters are a head and its complement(s), or a head and its speci-
fier, or a head and its modifier. The three X′ rules thereby greatly minimize
the number of PS rules needed to characterize well-formed English sentences.
In addition, these X′ rules directly address the endocentricity issue, because
they refer to 'Head.' Assuming that X is N, then we will have N, N′, and NP
structures. We can formalize this more precisely by introducing the feature POS
(part of speech), which has values such as noun, verb, adjective. The structure
(55) shows how the values of the features in different parts of a structure are
related:
(55)
The notation 1 shows that whatever value the feature has in one place in the
structure, it has the same value somewhere else. This is a representational tag
The rule states that the subject’s NUMBER value is identical to that of the
predicate VP’s NUMBER value. The two rules in (38) are both represented
in (56).
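The effect of the shared tag can be sketched in code. The fragment below is our illustration, not the book's formalism: it replaces the two number-specific rules in (38) with a single rule that simply requires the subject NP and the VP to share one NUMBER value.

```python
# A sketch of how one feature-based rule replaces the two PS rules in (38):
# instead of separate S -> NPsing VPsing and S -> NPpl VPpl rules, a single
# rule requires the NP and VP to share one NUMBER value (the tag [1]).

def licenses_s(np_number: str, vp_number: str) -> bool:
    """One rule: S -> NP[NUMBER 1] VP[NUMBER 1]; the shared tag just
    means the two NUMBER values must be identical."""
    return np_number == vp_number

assert licenses_s("sing", "sing")     # The insect devours ...   (37a)
assert licenses_s("pl", "pl")         # The insects devour ...   (37b)
assert not licenses_s("pl", "sing")   # *The insects devours ...
```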
With the assumption that the specifier is a nonhead phrase directly dominated
by a maximal phrase like AP or PP, much and right in (57a) and (57b) would
be specifiers. However, note that, unlike specifiers of N′, specifiers of A and P
are all optional and lack a tight syntactic relationship with the head. Such differ-
ences among putative 'specifiers' have caused proponents of X′ syntax to restrict
the use of X′ to phrases like NPs. In due course, we will see that the present
feature-based grammar requires no X′ notion in order to capture the properties
of intermediate phrases.
The value of each attribute can be an atomic element, a list, a set, or a feature
structure:
(59)  [ type
        ATTRIBUTE1  atomic
        ATTRIBUTE2  ⟨ list ⟩
        ATTRIBUTE3  { set }
        ATTRIBUTE4  [ feature structure ] ]
One important property of every feature structure is that it is typed.12 That is,
each feature structure is relevant only for a given type. A simple illustration
should suffice to show why each feature structure must be ‘typed.’ The upper left
declaration in italics is the type of the feature structure:
(60) a.  [ university
           NAME      Kyunghee University
           LOCATION  Seoul ]
     b. *[ university
           NAME   Kyunghee University
           MAYOR  Kim ]
The type university may have many properties, including its name and location,
but having a MAYOR (though it can have a president) is inappropriate. In the
linguistic realm, we might declare that TENSE is appropriate only for verb, for
example.
11 In particular, grammars such as Head-driven Phrase Structure Grammar (HPSG) and Lexical
Functional Grammar (LFG) are couched upon mathematically well-defined feature-structure
systems. The theory developed in this textbook relies heavily upon the feature-structure system
of HPSG. See Sag et al. (2003).
12 Even though every feature structure is typed in the present grammar, we will not specify the type
of each feature structure unless it is necessary for the discussion.
This illustrates the different types of values that attributes (feature names) may
have. Here, the value of the attribute NAME is atomic, whereas the value of
CHILDREN is a list, which encodes the relative ordering of the three values, in this
case that one is older than the other two. So, for example, 'youngest child' would
be the right-most element in the list value of CHILDREN. Meanwhile, the value
of HOBBIES is a set, showing that there is no significance in the relative ordering.
Finally, the value of the feature ADVANCED-DEGREE is a feature structure which
in turn has three attributes.
One useful aspect of feature structures is structure-sharing, which we have
already seen above in connection with the 1 notation (see (55)). Structure-
sharing is used to represent cases where two features (or attributes) have an
identical value:
(62)  [ individual
        NAME      Kim
        ADDRESS   [1]
        CHILDREN  ⟨ [ individual, NAME Edward, ADDRESS [1] ],
                    [ individual, NAME Richard, ADDRESS [1] ],
                    [ individual, NAME Albert, ADDRESS [1] ] ⟩ ]
For the type individual, attributes such as NAME and ADDRESS and CHILDREN
are appropriate. The feature structure (62) represents a situation in which the
particular individual Kim has three sons, and their ADDRESS attribute has a value
( 1 ) that is the same as the value of his ADDRESS attribute, whatever the value
actually is.
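Structure-sharing corresponds naturally to object identity in a programming language: the tag [1] names a single object reached by several paths, not several copies with the same content. A minimal sketch of ours (the city values are invented for illustration):

```python
# A sketch of structure-sharing: the tag [1] in (62) is one shared object,
# so updating the shared ADDRESS once is visible from every path to it.
# The city names here are illustrative, not from the text.

address = {"CITY": "Seoul"}                  # the value tagged [1]

kim = {
    "type": "individual",
    "NAME": "Kim",
    "ADDRESS": address,                                                  # [1]
    "CHILDREN": [
        {"type": "individual", "NAME": "Edward",  "ADDRESS": address},   # [1]
        {"type": "individual", "NAME": "Richard", "ADDRESS": address},   # [1]
        {"type": "individual", "NAME": "Albert",  "ADDRESS": address},   # [1]
    ],
}

# All four ADDRESS paths lead to the very same object, not to copies:
assert all(child["ADDRESS"] is kim["ADDRESS"] for child in kim["CHILDREN"])

# Because the value is shared, one update propagates everywhere:
address["CITY"] = "Busan"
assert kim["CHILDREN"][2]["ADDRESS"]["CITY"] == "Busan"
```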
In addition to this, the notion of subsumption is also important in the theoretical
use of feature structures; the symbol ⊑ represents subsumption. The
subsumption relation concerns the relationship between a feature structure with
general information and one with more specific information. In such a case,
the general one subsumes the specific one. Put differently, feature structure A
subsumes another feature structure, B, if A is not more informative than B.
4.4 Lexicon and Feature Structures 87
(63)  A: [ individual
           NAME  Kim ]
      B: [ individual
           NAME  Kim
           TEL   961-0892 ]
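For flat feature structures like those in (63), subsumption can be sketched as a simple containment check. This is our illustration; it ignores type hierarchies and nested values.

```python
# A sketch of subsumption for flat feature structures: A subsumes B when
# everything A says also holds in B, i.e. A is no more informative than B.

def subsumes(a: dict, b: dict) -> bool:
    """True if every attribute-value pair in A also holds in B."""
    return all(attr in b and b[attr] == val for attr, val in a.items())

A = {"type": "individual", "NAME": "Kim"}
B = {"type": "individual", "NAME": "Kim", "TEL": "961-0892"}

assert subsumes(A, B)      # the general structure subsumes the specific one
assert not subsumes(B, A)  # B is more informative, so it does not subsume A
assert subsumes(A, A)      # subsumption is reflexive
```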
The two feature structures are unified, resulting in a feature structure with
both NAME and TEL information. However, if two feature structures have
incompatible feature values, they cannot be unified:
(65)  [ individual, NAME Edward ]  ⊔  [ individual, NAME Richard ]
      →  *[ individual, NAME Edward, NAME Richard ]
Since the two smaller feature structures here have different NAME values, they
cannot be unified. Unification will make sure that information is consistent as it
is built up in the analysis of a phrase or sentence.
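Unification for such flat feature structures can likewise be sketched in a few lines. This is our illustration (nested structures and structure-sharing are ignored); failure is signaled by returning None, corresponding to the clash in (65).

```python
# A sketch of unification for flat feature structures: the result pools the
# information of both inputs, and fails (None) when the two inputs assign
# incompatible values to the same attribute.

def unify(a: dict, b: dict):
    """Merge two feature structures; None signals a feature clash."""
    result = dict(a)
    for attr, val in b.items():
        if attr in result and result[attr] != val:
            return None                  # incompatible values, as in (65)
        result[attr] = val
    return result

# Pooling NAME and TEL information succeeds, as with (63):
merged = unify({"type": "individual", "NAME": "Kim"},
               {"type": "individual", "TEL": "961-0892"})
assert merged == {"type": "individual", "NAME": "Kim", "TEL": "961-0892"}

# Distinct NAME values cannot be unified, as in (65):
assert unify({"type": "individual", "NAME": "Edward"},
             {"type": "individual", "NAME": "Richard"}) is None
```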
This feature structure, the details of which we will see as we move on, has
roughly the same information as the informal representation in (66). The feature
structure is describing a type of verb first. The verb puts has its own morphological
form (FORM) value, syntactic (SYN) argument structure (ARG-ST), and
13 The expression also has a phonological (PHON) value, but we suppress this value throughout this
book. Later on, we will not represent SEM values unless relevant to the discussion at hand.
4.5 Arguments and Argument-Structure Constructions 89
semantic (SEM) information. The SYN attribute indicates that the POS (parts of
speech) value is verb and that it has a present finite verbal inflectional form value
(VFORM). Both of these features are head (HEAD) features (see Chapter 5). The
SYN attribute also includes the attribute valence (VAL), which has both an SPR
and a COMPS value. The attribute VAL thus refers to the number of syntactic
arguments SPR (specifier or subject) and COMPS (complements) that a lexical item
can combine with to make a syntactically well-formed sentence. The ARG-ST
attribute indicates that the verb selects three arguments (with respective thematic
roles agent (agt), theme (th), and location (loc)), which will be realized as the
subject (SPR) and two complements (COMPS) in the full analysis (see Chapter 5).
The semantic (SEM) feature represents the fact that this verb denotes the
predicate relation, whose three participants are linked to the elements in the ARG-ST
via indexing values like i, j, and k. As we progress, we will see the roles that each
feature attribute plays in the grammar.
These sentences describe the situation of smiling, chasing, and giving, respec-
tively. Note that the participants in each event are different. In (68a), there is only
one participant, the child, and in (68b), there are two individuals involved in the
event of chasing. Meanwhile, in (68c), the giving situation has three individuals
involved. Thus, from the meaning or situation that a verb describes we can infer
how many arguments a verb selects. The number of arguments each verb or
predicate requires is represented in the ARG-ST list. So, for example, verbs like smile,
chase, and give will have the following ARG-ST representations, respectively:
(69) a. [ FORM smile
          ARG-ST ⟨NP⟩ ]
     b. [ FORM chase
          ARG-ST ⟨NP, NP⟩ ]
     c. [ FORM give
          ARG-ST ⟨NP, NP, PP⟩ ]
One-place predicates (predicates selecting one argument) like smile select just
one argument, two-place predicates like chase take two arguments, and three-
place predicates like give take three arguments.
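The ARG-ST lists in (69) can be modeled directly as data, together with a check that the realized arguments of a clause match the verb's list. This is a sketch of ours; the put entry follows the earlier discussion of (1).

```python
# A sketch of ARG-ST as a lexical property: each verb lists the categories
# of its arguments, ordered subject < object(s) < oblique, and a clause is
# licensed only if its realized arguments match that list exactly.

ARG_ST = {
    "smile": ["NP"],              # one-place predicate
    "chase": ["NP", "NP"],        # two-place predicate
    "give":  ["NP", "NP", "PP"],  # three-place predicate
    "put":   ["NP", "NP", "PP"],  # per the discussion of (1)
}

def licensed(verb: str, args: list[str]) -> bool:
    """Check the realized arguments against the verb's ARG-ST."""
    return ARG_ST[verb] == args

assert licensed("smile", ["NP"])               # The child smiled.
assert licensed("give", ["NP", "NP", "PP"])    # Pat gave a book to Lee.
assert not licensed("smile", ["NP", "NP"])     # *The child smiled the dog.
assert not licensed("put", ["NP", "NP"])       # *He put his hand.
```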
We can make a few important observations about the properties of ARG-ST.
The first is that even though arguments are linked to semantic roles (e.g., agent,
patient, theme, location, etc.), the value of ARG-ST is a list of syntactic
categories like NP or PP. This is partially because there are sometimes difficulties in
assigning a specific semantic role (as in That item is similar to his).
The second is that not only verbs but also other lexical expressions including
adjectives, nouns, and prepositions can take an argument or arguments. Consider
the following examples:
(70) a. [His mother] is quite fond [of the novel].
b. [Internet firms’] reliance [on information technology] might differ across
industries.
c. [The moon] was out. [Mars] was in.
The adjective fond and the noun reliance each denote an event involving two
individuals, while the prepositions out and in require one subject argument. This
information can be represented in terms of ARG-ST:
(71) a. [ FORM fond
          ARG-ST ⟨NP, PP[of]⟩ ]
     b. [ FORM reliance
          ARG-ST ⟨DP, PP[on]⟩ ]
     c. [ FORM in
          ARG-ST ⟨NP⟩ ]
The third point to note here is that the arguments selected by each predi-
cate are ordered as follows: subject, direct object/indirect object, and oblique
complement.14
can differentiate verb types by looking only at the number of arguments they
require. There are five main types of argument structures, described in terms of
the number and properties of the argument(s).
THE INTRANSITIVE CONSTRUCTION: This is the argument-structure
construction accommodating verbs that require only one argument:
(72) a. John disappeared.
b. *John disappeared Bill.
This unique argument is realized as subject (SUBJ) at syntax (see Chapter 5 for
discussion of the manner in which the elements from the ARG-ST list are realized
as grammatical functions like SUBJ (or SPR) and COMPS).
THE LINKING CONSTRUCTION: Verbs such as look, seem, remain, and feel
require a complement whose typical category is an AP:
(75) a. Tang looked [thoughtful].
b. Students became [familiar with this information].
c. The drink never tasted [so good].
d. The difference remained [statistically significant].
e. James seemed [ready to start a new life].
Though each verb may select a different type of phrase, all at least select a
predicative (PRD) complement, where a property is ascribed to the subject (com-
pare John remained a student with John revived a student).15 This pattern of
argument structure can be represented as follows:
(77) ARG-ST ⟨NP, XP[PRD +]⟩
The verbs that can occur in the linking construction have two arguments: one is
canonically an NP to be realized as the subject and the other is any phrase (XP)
that can function as a predicate (PRD +). The XP can be either an NP or an AP
for the verb become.
15 The verb remain can be used in a different sense, as in John remained in the park, in which the
PP functions as a nonpredicative locative, as in John stayed in the park. These uses involve a
construction like the locative construction.
The ‘destroying’ event involves at least two participants or arguments: one who
does the action and the other (a patient) who is affected by the action. The
verbs occurring in this type of argument structure thus typically take an agent
NP subject with a patient NP object.17
THE DITRANSITIVE CONSTRUCTION: English has a number of generally
ditransitive verbs, including send, pass, buy, teach, and tell:
(81) a. Sam sent [him] [a coded message].
b. The player passed [Paul] [the ball].
c. The parents bought [the children] [nonfiction books].
d. She taught [her students] [job skills].
As these examples show us, the verbs here take a subject and two apparent
objects, which refer to a recipient and a theme, respectively. Each
sentence describes a change-of-possession event in which an agent participant
transfers a ‘theme’ (th) object to a recipient or goal.
(82) [ FORM teach
       ARG-ST ⟨NP[agt], NP[goal], NP[th]⟩ ]
The two complement NPs are taken to function as IO and DO, respectively, but
because the IO is an NP (object) rather than a PP (the typical grammatical real-
ization of recipient arguments), the resulting structure is typically referred to as
the ‘double object’ construction.
17 The first element of the ARG-ST in the TRANSITIVE CONSTRUCTION can also bear nonagent
roles, such as experiencer, as in Most of the students liked the teacher.
As we noted earlier, these verbs typically have related verbs in which the
recipient or goal argument is realized instead as an oblique PP complement:
(83) a. Sam sent a coded message to him.
b. The player passed the ball to Paul.
c. The parents bought nonfiction books for the children.
d. She taught job skills to her students.
In these uses, unlike the ones in (81), the second argument has the theme role
while the third argument has some other role; we illustrate here with goal:
(84) [ FORM teach
       ARG-ST ⟨NP[agt], NP[th], PP[goal]⟩ ]
In (85a), the predicative PP as a good friend follows the object Bill; in (85b), the
AP furious serves as a predicate phrase of the preceding object some people. In
(85c), the NP a strategist is another predicative phrase. In (85d), the predicative
phrase is an infinitive VP. Just like linking verbs, these verbs require a predicative
([PRD +]) XP as complement, as exemplified by the following:
(86) [ FORM call
       ARG-ST ⟨NP, NP, XP[PRD +]⟩ ]
This means that the verbs in (85) all select an object NP and an XP phrase that
functions as a predicate. Although these five types of argument-structure con-
structions cover most of the general types, there are other verbs that do not
fit into these constructions, or at least require further specifications on their
complement(s). Take the use of the verb cart in (87):
(87) a. *They carted away.
b. *They carted the debris.
c. They carted the furniture out of the home.
18 There are also differences between the two, for example, with respect to information structure
(see Goldberg, 2006 and the references therein). These divergences have raised the question
of whether one can be derived from the other (Larson, 1988; Baker, 1997) or whether the two
should be treated independently (Jackendoff, 1990; Goldberg, 2006).
The intriguing fact, however, is that the verb can be used with an object when the
object is followed by a directional phrase. Note the following attested examples:
(90) a. Chess coughed smoke out of his lungs.
b. I coughed vodka back into my glass.
While in (89) the verb cough simply describes an action of expelling air from
one’s lungs, the verb in (90a) and (90b) expresses causation of motion: The
entities denoted by the direct object (smoke and vodka) come to be in a new
location by means of the coughing. Such novel uses suggest that a verb can
occur in different argument-structure configurations with systematic variations in
meaning.
In addition, consider the following data set, which shows that verbs like kick
can appear in a variety of complement (argument-structure) configurations:
(91) a. Pat kicked. (intransitive)
b. Pat kicked the ball. (transitive)
c. Pat kicked at the ball. (conative)
d. Pat kicked Bob the ball. (ditransitive)
e. Pat kicked the ball into the stadium. (caused-motion)
f. Pat kicked Bob black and blue. (resultative)
Traditional generative grammar assumes that each use of the verb kick here has
a distinct lexical entry with distinct combinatory properties (e.g., kick1,
kick2, kick3, etc.). However, note that in all of these cases, the verb kick retains its
basic meaning of performing a forward-moving action with the foot. The mean-
ing differences come from the argument-structure patterns with which the verb
kick combines. In (91a), the INTRANSITIVE construction is used to convey that
the subject acted alone; in (91b), the TRANSITIVE construction is used to indicate
that the subject acted on another entity (propelling it forward); in (91c), the use
of a CONATIVE construction (which uses a PP complement in place of a direct
object) conveys that the subject made little or ineffectual contact with the ball;
in (91d), the DITRANSITIVE construction is used to describe an event in which
propulsion of the ball causes someone else to possess it; in (91e), a CAUSED -
MOTION predication, we understand the subject to have moved the ball to a new
location by means of kicking; finally, in (91f), the RESULTATIVE construction is
used to convey that the subject changed the direct object’s properties by means
of kicking. In light of these facts, we observe that each argument-structure con-
struction, schematized in Table 4.1, expresses a certain type of event or action.
In this constructional view, the meaning of a sentence is determined by the
combination of the matrix verb’s core meaning with the basic event type con-
veyed by the construction with which the verb combines. When a verb occurs in
one of these constructions, its semantic roles are identified or ‘fused’ with those
of the argument-structure construction with which it combines.19 Critically, the
argument-structure construction may provide semantic roles that are not supplied
by the verb, thus augmenting the verb’s array of semantic roles. The novel uses
of cough in (90) are then expected. The verb, as noted, is typically an intransitive
verb, but in (90) it occurs in the CAUSED - MOTION construction, which supplies
two additional participant roles (the theme argument and the directional argu-
ment). We find a similar pattern of flexible usage among other intransitive verbs,
including sneeze:
(92) a. Colin sneezed.
b. *Colin sneezed his napkin.
c. Colin sneezed his napkin off the table.
The examples (92a) and (92b) suggest that verbs like sneeze are used only in
intransitive environments. How can we square this with examples like (92c) in
which the verb combines with the object his napkin and the directional phrase
off the table? A proponent of traditional generative grammar might assume that
there is another type of sneeze, but the syntactic flexibility illustrated by (92c)
is prevalent in English, and creating a new lexical entry for each novel use
of a verb would not be practical, nor would it capture the insight that many
novel verb uses are ‘nonce uses’ – they serve an expressive purpose in a particular context but may never become conventionalized. The CxG view we have
sketched out here can account for this important aspect of linguistic creativity
in an intuitive way: Argument-structure constructions have their own mean-
ings and semantic-role arrays, and the kind of event or relation expressed by
a verb is ultimately determined by the argument-structure pattern with which it
combines.
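The division of labor just described, in which verb-supplied roles are fused with construction-supplied roles, can be sketched computationally. The Python fragment below is an informal illustration only; the data layout and the names fuse, SNEEZE, and CAUSED_MOTION are invented here and are not part of the CxG formalism itself:

```python
# Toy sketch of role fusion between a verb and an
# argument-structure construction (illustrative encoding only).

SNEEZE = {"form": "sneeze", "roles": ["agent"]}

# The caused-motion construction supplies two roles the verb lacks:
# a theme (the moved entity) and a directional (the path or goal).
CAUSED_MOTION = {"roles": ["agent", "theme", "directional"]}

def fuse(verb, construction):
    """Identify the verb's roles with the construction's roles,
    letting the construction augment the verb's role array."""
    fused = list(construction["roles"])
    for role in verb["roles"]:
        if role not in fused:
            fused.append(role)
    return {"form": verb["form"], "roles": fused}

print(fuse(SNEEZE, CAUSED_MOTION)["roles"])
# → ['agent', 'theme', 'directional']
# Compare 'Colin sneezed his napkin off the table':
# agent = Colin, theme = his napkin, directional = off the table.
```

On this picture the verb keeps a single lexical entry; the extra arguments in (92c) come from the construction, not from a second lexeme sneeze.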
4.6 Conclusion
Exercises
4. For each sentence below, draw its tree structure and then provide the
ARG - ST value of the underlined verb:
5. The verbs in the following examples are used incorrectly. Correct the
errors or replace the verb with another one, and write out each new
example. In addition, provide the ARG - ST value for each verb (in its
use in your grammatical examples).
7. Verbs like cut, get, and make can occur in many different syntac-
tic environments. Try to find authentic examples with these verbs in
different argument-structure constructions. In doing so, use corpora
like COCA, NOW, or iWeb, all of which are available online free of
charge.
5 Combinatorial Construction Rules
and Principles
We have seen that verbs like put specify information about arguments
(the number of participants in the expressed situation), as represented by the fea-
ture ARG - ST. This information can be traced to a lexeme: the basic lexical unit,
or, alternatively, the headword (citation form) in the dictionary. Each verb lexeme
is realized in different inflected forms, as seen in the realizations of the lexeme
chase:
(1) a. The dog chased the cat.
b. The dog chases a shadow.
c. The dog is chasing the cat.
All three forms here – chased, chases, chasing – are related to the verb lexeme
chase, which carries the following ARG - ST information:
(2)  [v-lxm
      FORM    chase
      ARG-ST  ⟨NP[agt], NP[th]⟩]
This lexeme (v-lxm) information shows that the event of chasing has two NP
arguments: an agent (agt) NP and a theme (th) NP. These two arguments are
realized as the subject and complement (object) respectively when the lexeme
is used as a word at the sentence level. For instance, the verb chased would
have the following syntactic information (suppressing semantic information at
the moment):
(3)  [v-wd
      FORM    chased
      SYN     [HEAD  [POS    verb
                      VFORM  ed]
               VAL   [SPR    ⟨1 NP⟩
                      COMPS  ⟨2 NP⟩]]
      ARG-ST  ⟨1 NP, 2 NP⟩]
The feature structure tells us that the word-level verb chased (v-wd) is a verb and
in the ed verb inflection form (VFORM). The first NP ( 1 ) element of the ARG - ST
is linked to the SPR (specifier or subject), while the second NP is linked ( 2 ) to the
COMPS . In what follows, we will discuss the properties of these syntax-relevant
feature attributes, focusing on internal syntax.
All of these verbal forms are generated from the citation form (the lex-
eme) by English inflectional construction rules. For example, the past verb
word (v-wd) will be derived from a verb lexeme (v-lxm) by a rule like the
following:
The inflectional construction rule states that a verb lexeme (like chase, as in
(5)) can be used to create a v-word (v-wd) and derives its ed form by applying
the Fpast function, whose value can be either ‘-ed,’ as in The dog chased a cat,
or none, as in The thieves cut a hole in the fence, or even a suppletive form
(e.g., was). The following is an illustration deriving chased from the lexeme
chase:
The output v-wd adds the value for the feature VFORM (which we discuss in what
follows), as well as a past meaning (which we suppress here).1 Note that since in
this book we focus on word (lexical) and phrasal constructions, we will discuss
such morphological processes and constructions only when necessary.
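The behavior ascribed to the Fpast function, default ‘-ed’ suffixation alongside zero-marked and suppletive exceptions, can be rendered as a toy function. This is only a sketch: the exception table is illustrative and far from complete, and the book itself treats Fpast as a function within the inflectional construction, not as procedural code:

```python
# Toy rendering of the Fpast inflectional function:
# default '-ed' suffixation, with zero-marked and suppletive
# exceptions listed explicitly (table is illustrative only).

PAST_EXCEPTIONS = {
    "cut": "cut",   # zero marking: The thieves cut a hole in the fence
    "be": "was",    # suppletion
}

def f_past(lexeme_form):
    """Map a verb lexeme's FORM value to its ed (past) form."""
    if lexeme_form in PAST_EXCEPTIONS:
        return PAST_EXCEPTIONS[lexeme_form]
    if lexeme_form.endswith("e"):
        return lexeme_form + "d"
    return lexeme_form + "ed"

print(f_past("chase"))  # → 'chased'
print(f_past("cut"))    # → 'cut'
print(f_past("be"))     # → 'was'
```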
This sharing between head and mother is ensured by the Head Feature
Principle:
(7) The Head Feature Principle (HFP):
A phrase’s head feature value (e.g., POS, VFORM, etc.) is identical to that of
its head.
The HFP thus ensures that every phrase has its own lexical head with the iden-
tical POS value. The HFP will apply to any features that we declare to be ‘head
features,’ VFORM being another (see Section 5.5 for detailed discussion). The
grammar thus does not allow hypothetical phrases like the following, ensuring
the endocentric property of each phrase:
(8)
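The HFP in (7) amounts to a simple identity check between a phrase's HEAD value and that of its head daughter. The following minimal sketch uses an invented dictionary encoding of signs; it is not the HPSG formalism itself, only an illustration of the check:

```python
# Minimal well-formedness check in the spirit of the HFP:
# a phrase's HEAD value must be identical to the HEAD value
# of its head daughter (encoding invented for illustration).

def satisfies_hfp(phrase):
    return phrase["head"] == phrase["head_daughter"]["head"]

# A VP that projects its verbal head's HEAD value: well-formed.
vp = {
    "head": {"pos": "verb", "vform": "fin"},
    "head_daughter": {"head": {"pos": "verb", "vform": "fin"}},
}

# A hypothetical 'NP' whose head daughter is a verb: ruled out,
# preserving the endocentric property of phrases.
bad_np = {
    "head": {"pos": "noun"},
    "head_daughter": {"head": {"pos": "verb", "vform": "fin"}},
}

print(satisfies_hfp(vp))      # → True
print(satisfies_hfp(bad_np))  # → False
```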
The fin forms have three subtypes: es, ed, and pln (plain). Notice that there
may be a mismatch between form and function: The ed verb canonically
describes a past event, as in (10a), while the es and pln verbs represent a present
event, as in (10b); but this is not always true, as seen in (10c):2
(10) a. My daughter called me yesterday.
b. She usually smiles a lot and she is usually pretty articulate.
c. Your plane leaves Seoul early tomorrow morning.
The verb leaves in (10c) is in the present form, es, but it describes a future event.
The mapping between a VFORM value and event time is thus not one-to-one.
The nonfin values include bse (base), ing (present participle), en (past partici-
ple), and inf (infinitive). As for the infinitival marker to, we follow the standard
generative grammatical analysis of English ‘infinitives,’ in which the infinitive
marker is the head (to). Note that the plain and base forms are identical to the
lexical base (or citation form) of the lexeme. Even though the two forms are
identical in most cases, substitution of the past form shows a clear difference:
(11) a. They write/wrote to her.
b. They want to write/*wrote to her.
(12) a. They are/*be kind to her.
b. They want to be/*are kind to her.
In (11a) and (11b), we have two occurrences of the verb write, but note that
only the one in (11a) can be replaced by the past form wrote. This means that only
this one is a plain finite verb with no inflectional marking, while the verb write
in (11b) is a nonfinite base verb. The contrast in (12) also shows us a difference
between the two different verb forms: are is used only as a finite verb, while be
occurs only as a base verb.
2 More specifically, the plain form, though identical to the citation form, is used for present
tense when the subject is anything other than 3rd person singular. The plain verb thus lacks
an inflectional ending.
The verb form values (as value for the attribute VFORM ) given in (9) can be
represented as in the following hierarchy:
(13)
The classification of VFORM values here means that the values of VFORM are
‘typed,’ and those types have different subtypes – for example, what is shared
between es and ed will be stated on the type fin, yet they will individually differ
(they express different tenses). Sometimes we want to be able to refer to the type
of a value, as in (14a), and sometimes to a particular form, as in (14b):
(14) a. [VFORM fin]
b. [VFORM ing]
It is easy to see why we need to distinguish between fin and nonfin: Every
declarative sentence in English must have a finite verb with tense information:
(15) a. The student [knows the answers].
b. The student [knew the answers].
c. The students [know the answers].
(16) a. *The student [knowing the answers].
b. *The student [known the answers].
The examples in (16) are unacceptable because knowing and known have no
expression of tense – they are not finite. This in turn shows us that only finite
verb forms can be used as the head of the highest VP in a declarative sentence,
satisfying a basic requirement placed on English declarative sentences:
(17) English Declarative Sentence Construction:
For an English declarative sentence to be well-formed, its verb form value
(VFORM) must be finite.
The finiteness of a sentence or VP is identical to that of its head verb,
showing that the VFORM value is a head feature:
(18)
One thing we need to remember is that the two participle forms (ing and en)
have many different uses, in different constructions, as partially exemplified in
(9). Some of these usages (gerundive, progressive, passive) were introduced as
VFORM values (Gazdar et al., 1985, Ginzburg and Sag, 2000), each of which
has several functions or constructional usages. In Section 5.5, we will further
examine how this HEAD feature functions in internal syntax.
Note that each of the three arguments selected by the verb needs to be realized
as a syntactic expression bearing its own grammatical function:
(21) a. *The doctor put his hand.
b. *The doctor put on my elbow.
c. *The doctor put.
All these examples are ill-formed, since at least one of the arguments is not
realized as a grammatical function. Note also that the first element of the ARG - ST
list must be the subject, with the other expression(s) linked to the complements
in order:3
(22) a. *In my elbow put his arm the doctor.
b. #His arm put the doctor in my elbow.
3 The notation # indicates that the structure is technically well-formed from a syntactic perspective
but semantically anomalous.
The constraint means that the elements in the ARG - ST list of word-level expres-
sions will be realized as SPR and COMPS in syntax. Lexemic expressions will
only have ARG - ST information, but when they occur in syntax, they will also
carry syntactic valence features such as SPR and COMPS.
We can apply this constraint to the word puts, as given in the following feature
structure:
(25)  [FORM     puts
       SYN|VAL  [SPR    ⟨1 NP⟩
                 COMPS  ⟨2 NP, 3 PP⟩]
       ARG-ST   ⟨1 NP, 2 NP, 3 PP⟩]
The boxed tags mark identities within the overall structure. For example, the
first element of ARG-ST and the sole SPR element both bear the tag 1, ensuring
that the two are identical.
The ARC blocks examples like (21) as well as (22a), in which the location
argument is realized as the subject, as shown in (26):
(26)  *[SYN|VAL  [SPR    ⟨3 PP⟩
                  COMPS  ⟨1 NP, 2 NP⟩]
        ARG-ST   ⟨1 NP, 2 NP, 3 PP⟩]
This violates the ARC, which requires that the first element of ARG - ST be realized
as the SPR (the subject of a verb or the specifier of a noun).
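The mapping the ARC enforces, first ARG-ST element to SPR and the remaining elements in order to COMPS, can be stated as a small checking procedure. The dictionary encoding below is invented for illustration and is not the formal constraint itself:

```python
# Informal rendering of the Argument Realization Constraint (ARC):
# the first ARG-ST element is realized as SPR, the remainder,
# in order, as COMPS (encoding invented for illustration).

def realize(arg_st):
    return {"spr": arg_st[:1], "comps": arg_st[1:]}

def obeys_arc(word):
    expected = realize(word["arg_st"])
    return word["spr"] == expected["spr"] and word["comps"] == expected["comps"]

# 'puts' as in (25): subject NP, then NP and PP complements.
puts = {
    "arg_st": ["NP1", "NP2", "PP3"],
    "spr": ["NP1"],
    "comps": ["NP2", "PP3"],
}

# The ill-formed mapping in (26), with the PP realized as subject.
bad = {
    "arg_st": ["NP1", "NP2", "PP3"],
    "spr": ["PP3"],
    "comps": ["NP1", "NP2"],
}

print(obeys_arc(puts))  # → True
print(obeys_arc(bad))   # → False
```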
4 The symbol ⊕ represents an operation of combining two list expressions. In addition, the symbol
⇒ represents constraints on the type.
(29) a. Head-Specifier Rule:
        XP → YP, X (Specifier, Head)
b. Head-Complement Rule:
X → X, YP* (Head, Complement(s))
c. Head-Modifier Rule:
XP → ModP, XP (Modifier, Head)
This X rule (29a) represents the case in which a head combines with its specifier
(e.g., a VP with its subject and an N with its determiner), whereas (29b) says
that a head combines with its complement(s) to form a phrase. Rule (29c) allows
the combination of a head with its modifier.
Within the present feature-based system, these X-rules can be reinterpreted
as follows:
(30) Combinatory Construction Rules (to be revised):
a. HEAD - SPECIFIER CONSTRUCTION (XP → Specifier, Head):
XP[POS 1 ] → Specifier, XP[POS 1 ]
b. HEAD - COMPLEMENT CONSTRUCTION (XP → Head, Complement(s)):
XP[POS 1 ] → X[POS 1 ], Complement(s)
c. HEAD - MODIFIER CONSTRUCTION (XP → Modifier, Head):
XP[POS 1 ] → Modifier, XP[POS 1 ]
(31)
This simplified presentation says that the head daughter VP requires a subject NP
(functioning as a specifier (SPR)) while carrying its own POS and VFORM value.
Combining with the subject, the VP then is projected into an S. This resulting
combination S discharges the requirement that the head VP combines with a SPR,
so the SPR set is empty at the level of S (once the requirement is satisfied, it is
‘cancelled’ from the list). Meanwhile, note that the S’s HEAD value is the same
as the head VP’s HEAD, in accordance with the HFP.
The HEAD - COMPLEMENT CONSTRUCTION, again analogous to the X rule X
→ X, YP, allows the combination of a lexical head daughter with its complement
daughter(s) (zero or more), as represented in (32).
(32)
The declarative verb denied selects two arguments (ARG - ST), which are mapped
onto subject (SPR) and complement (COMPS), respectively. The head verb com-
bines with the NP complement, forming a well-formed VP. The resulting VP
then has its COMPS value empty (discharged) but still requires a subject speci-
fier. Note that in these two construction rules, once the required COMPS and SPR
values in (32) and (31) are combined, their value is discharged at the mother
level. This cancellation of elements of the valence (VAL) set is controlled by a
general principle called the Valence Principle:6
(33) Valence Principle (VALP):
For each valence feature F (e.g., SPR and COMPS), the F value of a
headed phrase is the head-daughter’s F value minus the realized non-head-
daughters.
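The ‘cancellation’ the VALP describes can be pictured as list subtraction: the mother keeps whatever valence requirements its head daughter has not yet discharged. A toy sketch, with the list encoding invented for illustration:

```python
# Toy sketch of the Valence Principle: for each valence feature,
# the mother's value is the head daughter's value minus the
# realized non-head daughters (encoding invented for illustration).

def mother_valence(head_value, realized):
    remaining = list(head_value)
    for item in realized:
        if item in remaining:
            remaining.remove(item)
    return remaining

# 'denied' heading a VP: the NP object discharges COMPS,
# while the SPR requirement survives up to the VP level.
vp_comps = mother_valence(["NP-obj"], ["NP-obj"])   # []
vp_spr = mother_valence(["NP-subj"], [])            # ["NP-subj"]

# Combining the VP with its subject then empties SPR at S.
s_spr = mother_valence(vp_spr, ["NP-subj"])         # []

print(vp_comps, vp_spr, s_spr)
```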
The Adv strongly or AdvP quite strongly can modify its head VP, resulting in a
well-formed head-modifier construct.7 Note that the combination of a modifier
and its head does not alter the valence features (SPR and COMPS).8
6 Another way to state this is that unless the construction rule says otherwise, the mother’s SPR and
COMPS values are identical to those of the head daughter.
7 In contrast to the discussion in Chapters 3 and 4, the present grammar allows both a lexical
expression and a phrasal expression to modify a head phrasal expression, as long as the former
bears the feature MOD.
8 This means that the feature MOD does not belong to valence (VAL), since there is no process of
discharging its value.
To explicate the principles (the HFP and the VALP) and these three com-
binatory construction rules, let us consider a complete sentence using a tree
representation.11
(37)
The HFP ensures that the head-daughter’s HEAD information is projected in its
mother phrase. The HEAD value of the lexical head denied (such as the part-of-
speech value, verb, and VFORM value, fin) is thus that of both VPs and the S
here. In accordance with the VALP, the head’s valence information determines
the elements that the maximal projection contains. The valence specifications of
the head denied show that it requires one NP complement and a subject. When it
combines with the complement, its COMPS specification is satisfied, leaving the
VP’s COMPS value empty. The resulting VP combines with the modifier via the
HEAD - MODIFIER CONSTRUCTION to form the top VP. When this top VP com-
bines with the subject NP via the HEAD - SPECIFIER CONSTRUCTION, we obtain
11 All linguistic objects are represented as feature structures in HPSG. But for expository purposes,
they are presented in the familiar trappings of generative grammar – tree representations.
We have seen thus far that complements are phrases or clauses; they
are represented as phrases rather than merely lexemes. We know, for
instance, that the object of the verb destroy cannot be simply a bare N, since the
object of destroy is in general a full NP, as shown in (38):
(38) a. *The hail destroyed garden.
b. You can’t legally destroy evidence.
c. Liberal programs have destroyed those cities.
d. They destroy all the vegetation.
e. It destroyed the work we had done.
We cannot assume that the main verb here (figured, gave, and turned) selects a
prepositional phrase, because an expression cannot be placed in front of the particle in
the manner that it could be if the preposition and the following NP jointly made
up a prepositional phrase (e.g., out the right answer, up the job):
(40) a. *I figured finally out the right answer.
b. *Hunter gave completely up the job.
c. *He turned easily off the light.
The particle can in fact occur without a following NP, indicating again that the
particle does not take an NP object:
(41) a. All of these other lies [added up].
b. I think that I will [sign off] now.
c. One by one, her days were [slipping by].
The particle here is not optional, but rather contributes to the meaning. This in
turn implies that we need to allow certain verbs to select a particle, whether
or not the verb also takes an object, to induce a special meaning. The parti-
cle verbs figure and add, for instance, would thus have the following lexical
entries:
(42)  a.  [FORM    figure
           ARG-ST  ⟨NPx, Part[out], NPy⟩
           SEM     compute-rel(x,y)]
      b.  [FORM    add
           ARG-ST  ⟨NPx, Part[up]⟩
           SEM     accumulate-rel(x)]
What these lexical entries tell us is that, for example, in the particle verb figure
out, the verb figure has three syntactic arguments, to be realized as a subject, the
particle out, and an NP object, while the verb semantically has two arguments (x
and y), which are linked to the subject (x) and the NP complement (y), respec-
tively. Meanwhile, add up is projected from the verb add, which has the subject
NP (x) and a particle complement, evoking the meaning of x’s accumulating.
In Chapter 2, we saw that phenomena like gapping support the verb-particle
constituent structure, in which the verb forms a syntactic unit with the following
particle. We repeat the relevant examples here:
(43) a. *John ran up a big hill and Jack a small hill.
b. John ran up a big bill and Jack a small bill.
Such semantic and syntactic unity, first discussed in Chapter 2, once again
motivates us to adopt the complex verb analysis in which the matrix verb and
the particle form a unit. Together with the assignment of the feature [LEX +] to
expressions like particle, the grammar introduces the following construction rule
to license the verb-particle combination:12
(45) HEAD - LEX CONSTRUCTION :
V[POS 1] → V[POS 1 ], X[LEX +]
This construction rule allows a lexical head to combine with an expression bear-
ing the feature LEX (like a particle) to form another lexical expression. This rule
would, for instance, license the following structure:13
12 Adopting this construction rule implies that we need to modify the HEAD - COMPLEMENT CON -
STRUCTION in (35). Instead of discharging all the elements of COMPS, it needs to discharge the
LEX element first and the remaining phrasal complements at once.
13 Meanwhile, the combination of figured the answer out is licensed by the HEAD - COMPLEMENT
CONSTRUCTION .
(46)
The structure reflects the strong syntactic and semantic unity of the verb figure
and the particle out. The verb figure selects three arguments including sub-
ject, particle (out), and object, which are realized as the subject (SPR) and
complements (COMPS). It first combines with the particle in accordance with
the HEAD - LEX CONSTRUCTION, yielding the mid-level verb-particle unit. This
mid-level expression then combines with the object, licensed by the HEAD -
COMPLEMENT CONSTRUCTION . The combination of the verb and the particle
(V → V Part) is thus bigger than a pure lexical construction but smaller than a
phrasal construction, leading grammarians to call the verb-particle combination
a ‘phrasal verb’ or multiword expression. The combination of a lexical head with
another lexical element yields a nonphrasal, lexical-level construction.14
(47)  [FORM    knows
       SYN     [HEAD  [POS    verb
                       VFORM  es]
                VAL   [SPR    ⟨1 NP⟩
                       COMPS  ⟨2 NP⟩]]
       ARG-ST  ⟨1 NP, 2 NP⟩]
This [VFORM es] value will be the same for S, in accordance with the HFP, as
shown here:
(48)
It is easy to verify that if we had knowing instead of knows here, the S would have
[VFORM ing], and the result could not be a well-formed declarative sentence,
simply because the value ing is a subtype of nonfin.
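The subtype reasoning used here (ing is a subtype of nonfin, so it can never satisfy a fin requirement) can be made explicit by encoding the VFORM hierarchy of (13) as a parent map. The helper subsumes is invented for illustration:

```python
# The VFORM type hierarchy as a parent map, with a subsumption
# check: a requirement like [VFORM fin] is satisfied by the type
# itself or by any of its subtypes (es, ed, pln).

PARENT = {
    "es": "fin", "ed": "fin", "pln": "fin",
    "bse": "nonfin", "ing": "nonfin", "en": "nonfin", "inf": "nonfin",
    "fin": "vform", "nonfin": "vform",
}

def subsumes(required, actual):
    """True if `actual` is `required` or a subtype of it."""
    current = actual
    while current is not None:
        if current == required:
            return True
        current = PARENT.get(current)
    return False

print(subsumes("fin", "es"))   # → True: knows can head a declarative S
print(subsumes("fin", "ing"))  # → False: knowing cannot
```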
There are various constructions in which we need to refer to VFORM values,
such as:
(49) a. During rehearsal, John kept [forgetting/*forgot/*forgotten his lines].
b. Last summer a cop caught them [drinking/*drank/*drink/*drunk beer
behind a local burger joint].
c. They made him [cook/*to cook/*cooking their gypsy food].
Even though each main verb here requires a VP as its complement (the part
in brackets), the required VFORM value could be different, as illustrated by the
following lexical specifications of the word kept:
(50)  a.  [FORM    kept
           SYN     [HEAD|POS  verb
                    VAL  [SPR    ⟨1 NP⟩
                          COMPS  ⟨2 VP[ing]⟩]]
           ARG-ST  ⟨1 NP, 2 VP⟩]
      b.  [FORM    made
           SYN     [HEAD|POS  verb
                    VAL  [SPR    ⟨1 NP⟩
                          COMPS  ⟨2 NP, 3 VP[bse]⟩]]
           ARG-ST  ⟨1 NP, 2 NP, 3 VP⟩]
Such lexical specifications on the VFORM value ensure that these verbs only
combine with a VP with the appropriate VFORM value, as shown here:
(51)
The finite verb kept selects as its complement a VP whose VFORM value is ing.
The verb forgetting has this VFORM value, which it shares with its mother VP
in accordance with the HFP. The HEAD - COMPLEMENT CONSTRUCTION allows
the combination of the head verb kept with this VP. In the upper part of the
structure, the VFORM value of the verb kept is also passed up to its mother node
VP, ensuring that the VFORM value of the S is a subtype of fin, satisfying the
basic English rule for declarative sentences.
(52) a. She was apparently despondent (that she could not leave the city).
b. He seems intelligent (*to study medicine).
One thing we can note again is that complements may also need to bear a
specific VFORM or PFORM value, where PFORM indicates the form of a
specific preposition, as illustrated in examples (53b)–(53f). Just like verbs,
adjectives place restrictions on the VFORM or PFORM value of their
complement. Such restrictions are specified in the arguments that they
select:
(54)  a.  [FORM          eager
           SYN|HEAD|POS  adj
           ARG-ST        ⟨NP, VP[VFORM inf]⟩]
      b.  [FORM          fond
           SYN|HEAD|POS  adj
           ARG-ST        ⟨NP, PP[PFORM of]⟩]
Such lexical entries will project sentences like the following, in which the
first element is realized as SPR while the second is realized as the COMPS
value:15
15 The copula verb are selects two arguments: a subject and an AP. Its subject is the same as the
subject of eager. For discussion of copula verbs, see Chapter 8.
(55)
The category DP (similar to NP) includes not only simple determiners like a,
the, and that but also possessive phrases like John’s (see Chapter 6, where we
discuss NP structures in detail). In these particular entries, the SPR is shown to
be required.
However, as noted in the previous chapter, certain English verbs select only it or
there as subject:16
(59) a. It/*John/*There rains.
b. There/*The spy lies a man in the park.
The pronouns it and there are often called ‘expletives,’ indicating that they do
not contribute any meaning. The use of these expletives is restricted to partic-
ular contexts or verbs, although both forms have regular pronoun uses as well.
One way to encode such lexical restrictions on subjects is to make use of a
form value for nouns: All regular nouns have [NFORM norm(al)]
as a default specification; overall, we classify nouns as having three different
NFORM values: normal, it, and there. Given the NFORM feature, we can have
the following lexical entries for the verbs above:
(60)  a.  [FORM     rained
           SYN|VAL  [SPR    ⟨1 NP[NFORM it]⟩
                     COMPS  ⟨ ⟩]
           ARG-ST   ⟨1 NP⟩]
We can also observe that only a limited set of verbs require their subject to be
[NFORM there]:17
(61) a. There comes a time when you can’t save it.
b. There remains a marked contrast between potentiality and actuality.
c. There exist few solutions which are cost-effective.
d. There arose a cloud of dust that obscured the view.
For sentences with there subjects, we first consider verb forms which have reg-
ular subjects. A verb like exist in (61c) takes one argument in such an example,
and the argument will be realized as the SPR, as dictated by the entry in (63a).
In addition, such verbs can introduce there as the subject through the Argument
Realization option given in (63b), which is the form that occurs in the structure
of (60a):
(63)  a.  [FORM     exists
           SYN|VAL  [SPR    ⟨1 NP⟩
                     COMPS  ⟨ ⟩]
           ARG-ST   ⟨1 NP⟩]
      b.  [FORM     exists
           SYN|VAL  [SPR    ⟨1 NP[NFORM there]⟩
                     COMPS  ⟨2 NP⟩]
           ARG-ST   ⟨1 NP, 2 NP⟩]
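The two Argument Realization options in (63) can be viewed as two different mappings over the verb's arguments: either the sole ‘real’ argument becomes the subject, or an expletive there subject is introduced and the original argument is realized as a complement. A toy sketch, with invented function names:

```python
# Two realization options for a verb like 'exists' (names invented):
# (a) the sole argument is realized as the subject (SPR);
# (b) an expletive 'there' subject is introduced, and the original
#     argument is realized as a complement (COMPS).

def realize_plain(arg):
    return {"spr": [arg], "comps": []}

def realize_there(arg):
    return {"spr": ["NP[NFORM there]"], "comps": [arg]}

print(realize_plain("NP[few solutions]"))
# 'Few solutions exist ...'
print(realize_there("NP[few solutions]"))
# 'There exist few solutions ...'
```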
17 Some verbs such as arise or remain sound a little archaic in these constructions.
(64) a. I think (that) reporters are doing their jobs, by and large.
b. They believe (that) some improvements to the referral process should be
investigated.
The C (complementizer) that is optional here, implying that this kind of verb
selects a finite complement clause of some type, which we will notate as a
[VFORM fin] clause. That is, these verbs will have one of the following two
COMPS values:
(65) a. COMPS S[VFORM fin]
b. COMPS CP[VFORM fin]
If the COMPS value only specifies a VFORM value, the complement can be either
S or CP. This means that we can subsume these two uses under the follow-
ing single lexical entry, suppressing the category information of the sentential
complement:18
(66)  [FORM          believe
       SYN|HEAD|POS  verb
       ARG-ST        ⟨NP, [VFORM fin]⟩]
This constraint will then allow both of the following structures, in which believe
combines either with a finite S or a finite CP:
(67)
We also find somewhat similar verbs, like demand and require, which diverge
only in the VFORM value on their sentential complements:
(68) a. They demanded that that city’s police not be allowed to march in the parade.
b. The dance required that she turn around as she circled.
Unlike think or believe, these verbs, which introduce a subjunctive clause, typically
take only a CP[VFORM bse] as complement: The verb of the embedded clause is
actually in the bse form. Observe the structure of (68b):
18 Although the categories V and VP are also potentially specified as [VFORM fin], such words or
phrases cannot be complements of verbs like think or believe. This is because complements are
typically saturated phrases at least with respect to their own complements (since the VP still
requires a subject). While S and CP are saturated categories projected from V, VP and V are not
saturated.
(69)
The verb require selects a bse CP or S complement, and this COMPS require-
ment is discharged at its mother VP: This satisfies the HEAD - COMPLEMENT
CONSTRUCTION . There is one issue here with respect to the percolation of the
VFORM value: The CP must be bse, and this information must come from the
head C, not from its complement S. One way to make sure this is so is to assume
that the VFORM value of C is identical to that of its complement S, as in this
lexical realization:
(70)  [FORM  that
       SYN   [HEAD  [POS    comp
                     VFORM  1]
              VAL   [SPR    ⟨ ⟩
                     COMPS  ⟨S[VFORM 1]⟩]]]
This lexical information will then allow us to pass on the VFORM value of S to the
head C and then percolate up to the CP according to the HFP. This encodes the
intuition that a complementizer ‘agrees’ in VFORM value with its complement
sentence.
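The structure sharing marked by the tag 1 in (70), whereby the complementizer's VFORM comes from its complement S and then percolates up to CP by the HFP, can be sketched as follows (the dictionary encoding is invented for illustration):

```python
# Sketch of the VFORM structure sharing in (70): the complementizer
# 'that' copies its VFORM from its complement S (the tag 1), and
# the CP inherits that value from its head C via the HFP.

def build_cp(comp_form, s_vform):
    # The complementizer 'agrees' in VFORM with its complement S.
    c = {"form": comp_form, "pos": "comp", "vform": s_vform}
    # The HFP projects the head C's VFORM to the CP.
    return {"cat": "CP", "head": c, "vform": c["vform"]}

finite_cp = build_cp("that", "fin")  # e.g., believe's complement
bse_cp = build_cp("that", "bse")     # subjunctive, as with require

print(finite_cp["vform"], bse_cp["vform"])  # → fin bse
```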
One more thing to note here is that the unique argument of the complementizer
is mapped not onto the SPR but rather onto the COMPS value. Lexical expressions
like complementizers, nonpredicative prepositions, markers like than, and deter-
miners are functional expressions in the sense that they do not select subjects.
This means that the argument of the function-word is mapped to its complement.
Prepositions like of have no specifier and allow at most one complement, since
of cannot be used predicatively.19 We will see that even an inverted auxiliary
verb also can be viewed as a function-word of this type, in the sense that it has no
subject but contains just a complement realized from the unique argument (see
Chapter 8).
There are also verbs which select a sequence of an NP followed by
a CP as a complement. NP and CP are abbreviations for feature struc-
ture descriptions that include the information [POS noun] and [POS comp],
respectively:
(72) a. The trial court warned the defendant that his behavior was unacceptable.
b. His parents told him that he had fainted.
c. Liza finally convinced me that I was ready for more training.
The COMPS value of such verbs realized from the ARG - ST will be as in (73):
(73)  COMPS  ⟨NP, CP[VFORM fin]⟩
The data show that verbs like intend and prefer select an infinitival CP clause.
The structure of (75a) is familiar, but it now has a nonfinite VFORM value
within it:
19 This means that predicative prepositions like in or under in sentences like Pat is in the room or
Pat is under the table select a subject as well as a complement.
(76)
The structure given here means that the verb intends will have the following
lexical information, suppressing the SYN information:
(77)  [FORM    intend
       ARG-ST  ⟨NP, CP[VFORM inf]⟩]
To fill out the analysis, we need explicit lexical entries for the complementizer
for and for the infinitival marker to, which we treat as an (infinitive) auxiliary
verb. In fact, to has a distribution very similar to finite modal auxiliaries such as
will or must, differing only in the VFORM value (see Chapter 8, Section 8.3.5).20
⎡ ⎤
(78) a. FORM for
⎢ ⎡ ⎤⎥
⎢ ⎥
⎢ ⎢HEAD
POS comp
⎥⎥
⎢ ⎥⎥
⎢SYN⎢ VFORM inf ⎥
⎣ ⎣ ⎦⎦
VAL | COMPS S[VFORM inf ]
20 An issue arises regarding the accusative case of the subject him, as in Tom intends for him to
review the book. In line with what is traditionally assumed, we could posit a constructional
constraint specifying that the subject of an infinitival VP can have accusative case. Alternatively,
some linguists (e.g., Ginzburg and Sag, 2000) have proposed a ternary analysis for infinitivals
where the complementizer for selects both the accusative subject and the infinitival VP as its
complements.
COMBINATORIAL CONSTRUCTION PRINCIPLES
     b. [ FORM to
          SYN [ HEAD [ POS verb
                       VFORM inf ]
                VAL | COMPS ⟨ VP[VFORM bse] ⟩ ] ]
Just like the complementizer that, the complementizer for selects an infinitival
S as its complement, inheriting its VFORM value too. The evidence that the
complementizer for requires an infinitival S can be found in coordination data:
(79) a. For John to either [make up such a story] or [repeat it] is outrageous.
(coordination of bse VPs)
b. For John either [to make up such a story] or [to repeat it] is outrageous.
(coordination of inf VPs)
c. For [John to tell Bill such a lie] and [Bill to believe it] is outrageous.
(coordination of inf Ss)
Given that only like categories (constituents with the same label) can be coor-
dinated, we can see that base VPs, infinitival VPs, and infinitival Ss are all
constituents.21
An important point here is that the verbs that select a CP[VFORM inf ]
complement can also take a VP[VFORM inf ] complement:
(80) a. He intends to continue to see patients and conduct research.
b. Wayne prefers to sit at the bar and mingle.
Since the specification [VFORM inf ] is quite general, it can be realized either as
CP[VFORM inf ] or VP[VFORM inf ].
However, this does not mean that all verbs behave alike: Not all verbs can
take variable complement types like an infinitival VP or S. For example, try,
tend, hope, and others select only a VP[inf ], as attested by the data:
(82) a. Tom tried to ask a question.
b. *Tom tried for Bill to ask a question.
(83) a. Greenberg tends to avoid theoretical terminology in favor of descriptive
language.
b. *Greenberg tends for Mary to avoid theoretical terminology in favor of
descriptive language.
21 Tensed VPs can be coordinated even with different tense values, as in Kim [alienated cats] and
[loves his dog].
Such subcategorization differences are hard to predict simply from the meanings
of verbs: They are apparently arbitrary lexical specifications that language users
need to learn.
There is another generalization that we need to consider with respect to the
property of verbs that select a CP: Most verbs that select a CP can at first glance
select an NP too:
Should we have two lexical entries for such verbs or can we have a simple way
of representing such a pattern? To reflect such lexical patterns, we can assume
that English parts of speech come in families and can profitably be analyzed in
terms of a type hierarchy as follows:22
(86)
According to the hierarchy, the type nominal is a supertype of both noun and
comp. In accordance with the basic properties of systems of typed feature struc-
tures, an element specified as [POS nominal] can be realized either as [POS
noun] or [POS comp]. These will correspond to the phrasal types NP and CP,
respectively.
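The way a [POS nominal] specification is satisfied by either subtype can be sketched in a few lines of Python. This is an illustrative toy, not part of the book's formalism; the function and dictionary names are ours:

```python
# Illustrative sketch of the POS type hierarchy in (86): 'nominal' is a
# supertype of 'noun' and 'comp', so an argument specified as
# [POS nominal] is satisfied by either an NP or a CP.
SUPERTYPE = {"noun": "nominal", "comp": "nominal"}

def subsumes(general: str, specific: str) -> bool:
    """True if `general` is the same type as, or a supertype of, `specific`."""
    while specific is not None:
        if specific == general:
            return True
        specific = SUPERTYPE.get(specific)
    return False

# [POS nominal] can be realized as NP or as CP:
assert subsumes("nominal", "noun") and subsumes("nominal", "comp")
# but a plain [POS noun] requirement cannot be met by a CP:
assert not subsumes("noun", "comp")
```

The check simply walks up the supertype chain, which is all the subsumption relation of a (single-inheritance) type hierarchy requires.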
The hierarchy implies that the subcategorization pattern of English verbs will
refer to (at least) each of these types. Consider the following patterns:
In each class, the ARG-ST list specifies the argument elements that the verbs
select (in the order Subject, Direct Object . . . ). The POS value of a given
element is the part-of-speech type that a word passes on to the phrases it projects.
These three patterns illustrate that English transitive verbs come in at least three
varieties.
In addition to the intermediate category, the postulation of supercategories
like verbal can capture generalizations about so-called it-object extraposition.
English allows a pattern where a finite or infinitival clause appears in sentence-
final or ‘extraposed’ position, leaving the expletive it behind:
(91) a. I have made it my duty [to clean this place from top to bottom].
b. I owe it to you [that the jury acquitted me].
The contrast here means that verbs like bother can have two realizations of
the ARG-ST, whereas those like love allow only one. This difference can be
represented by the following:

(95) a. [ FORM bother
          ARG-ST ⟨ XP[nominal], NP ⟩ ]
     b. [ FORM love
          ARG-ST ⟨ NP, NP ⟩ ]
The difference is that the first argument of bother is nominal while that of love is
just an NP. By definition, the nominal argument can be realized either as an NP
or as a CP, licensing sentences like (93):
(96) a. [ FORM bother
          SYN | VAL [ SPR ⟨ 1 NP ⟩
                      COMPS ⟨ 2 NP ⟩ ]
          ARG-ST ⟨ 1 [nominal], 2 NP ⟩ ]
     b. [ FORM bother
          SYN | VAL [ SPR ⟨ 1 CP ⟩
                      COMPS ⟨ 2 NP ⟩ ]
          ARG-ST ⟨ 1 [nominal], 2 NP ⟩ ]
The different realizations thus all hinge on the lexical properties of the given
verb, and only some verbs allow the dual realization.
A clausal subject is not limited to a finite that-headed CP, but there are other
clausal types:
(97) a. [That John sold the ostrich] surprised Bill.
(that-clause CP subject)
b. [(For John) to train his horse] would be desirable.
(infinitival CP or VP subject)
c. [That the king or queen be present] is a requirement of all royal weddings.
(subjunctive that-clause CP subject)
d. [Which otter you should adopt first] is unclear.
(wh-question subject)
For example, the difference between the two verbs nominate and surprise can be
seen in these partial lexical entries:
(99) a. [ FORM nominate
          ARG-ST ⟨ NP, NP ⟩ ]
     b. [ FORM surprise
          ARG-ST ⟨ [nominal], NP ⟩ ]
Unlike nominate, the first argument of surprise can be a nominal. This means
that its subject can be either an NP or a CP.
(102) a. I’m ashamed [that I took my life for granted while you take nothing for
granted].
b. They are content [that you are not a threat].
c. I am thankful [that she lived one year after diagnosis].
The lexical entries for the adjectives in (101) and (102) are given in (103):
(103) a. [ FORM ashamed
           ARG-ST ⟨ NP, CP[VFORM fin] ⟩ ]
      b. [ FORM content
           ARG-ST ⟨ NP, CP[VFORM fin] ⟩ ]
      c. [ FORM eager
           ARG-ST ⟨ NP, CP[VFORM inf] ⟩ ]
Note that many of these adjectives can select an infinitival VP as the second
argument:
The second argument in each case will be realized as the COMPS element
in accordance with the ARP. This realization, interacting with the
HEAD-COMPLEMENT CONSTRUCTION, the HEAD-SPECIFIER CONSTRUCTION, and
the HFP, can license structures like (105):
(105)
When the adjective eager combines with its complement, VP[inf], it satisfies the
HEAD-COMPLEMENT CONSTRUCTION. The same rule allows the verb is
to combine with its AP complement.
These examples imply that eagerness will have the following lexical informa-
tion:
(107) [ FORM eagerness
        ARG-ST ⟨ DP, XP[VFORM inf] ⟩ ]
This means that the noun eagerness selects two arguments, of which the DP is
realized as its specifier and the VP as its complement. This will allow a structure
like the following:
(108)
Note that the noun first combines with its VP complement, forming a Head-
Complement construct. The resulting N′ then combines with its specifier DP,
yielding a Head-Specifier construct.
One pattern that we can observe is that when a verb selects a CP complement
and has a corresponding noun, the noun also selects a CP:
This shows us that the derivational process that derives a noun from a verb pre-
serves the COMPS value of that verb.23 Not surprisingly, not all nouns select a
CP complement:
23 Derivational processes or rules (e.g., establishment from establish) typically create a new lexeme
from a base, while inflectional ones (e.g., students from student) do not.
These nouns cannot combine with a CP, indicating that they do not have CPs as
arguments or complements.
These facts show us that indirect questions have some feature (e.g.,
QUE), which distinguishes them from canonical that- or for-CPs and
makes them similar to true nouns (NP is the typical complement of a
preposition).24
5.8 Conclusion
Exercises
As represented here, nouns fall into three major categories: common nouns,
proper nouns, and pronouns. An important division within the class of com-
mon nouns is the one between count and noncount nouns. In Chapter 1, we saw
that whether a noun is countable or not does not fully depend on its reference.
A single group of things can be referred to by a count or a noncount (‘mass’)
term (Rothstein, 2010). For example, the greenery on a tree may be referred to
as either leaves or foliage. We can make a similar observation about ‘flexible’
nouns, like brick and difficulty, which can be either mass or count depending on
context:
6.2 Syntactic Structures
Proper nouns denote specific people or places and are typically uncount-
able. Common nouns and proper nouns display clear contrasts in terms of
the combinatorial possibilities with determiners, as shown in the following
chart:
             Proper N         Common N
                              countable    uncountable   flexible
N            Einstein         *book        music         cake
the + N      *the Einstein    the book     the music     the cake
a + N        *an Einstein     a book       *a music      a cake
some + N     *some Einstein   *some book   some music    some cake
N + s        *Einsteins       books        *musics       cakes
Proper nouns (Einstein) do not combine with any determiner, as can be seen
from the chart. Meanwhile, count nouns have singular and plural forms (e.g., a
book and books), whereas uncountable nouns (music) combine only with some
or the. The discussion in Chapter 1 has shown us that some common nouns may
be either count or noncount, depending on the kind of reference they have. For
example, cake is countable when it refers to a specific entity as in I made a cake,
but noncountable when it refers to ‘cake in general,’ as in I like cake.
Together with verbs, nouns are critical to the meaning and structure of
the English clause, because they (or their phrasal projections) are used to
encode both the core semantic roles (agents and undergoers of actions) and
the core syntactic functions (subject and object). This chapter deals with the
structural, semantic, and functional dimensions of NPs, with a focus on the
agreement relationships between nouns and determiners and between subjects
and verbs.
However, mass or plural count nouns are fully grammatical as bare NPs with
no determiners:1
Examples like (6) imply that, as we have seen earlier, a single noun (rice) can
be projected into an NP without combining with a complement or specifier, as
given in the following:2
(7)   NP[phrase, SPR ⟨ ⟩, COMPS ⟨ ⟩]
        |
      N[word, SPR ⟨ ⟩, COMPS ⟨ ⟩]
        |
      rice
This structure shows us that a lexical head is projected into a phrasal construction
without combining with any specifier or complement. There is no need to have
an N′ projection since no specifier is required.
Different from such cases, countable nouns like book and student will select a
DP as their specifier:
(8) a. [ FORM book
         SYN [ HEAD | POS noun
               VAL [ SPR ⟨ DP ⟩
                     COMPS ⟨ ⟩ ] ] ]
    b. [ FORM student
         SYN [ HEAD | POS noun
               VAL [ SPR ⟨ DP ⟩
                     COMPS ⟨ ⟩ ] ] ]
1 The style of English used in headlines does not have this restriction, e.g., Student discovers planet,
Army receives high-tech helicopter.
2 Note that the projection from N to NP makes no changes to the VAL feature values. The key
change is from a word to a phrase. This projection is a unary structure with no branching. To
allow this kind of unary projection, the grammar needs the HEAD-ONLY CONSTRUCTION:
This construction rule will also license a lexical element to project into a phrase, as in VP → V
and NP → N.
(9)
As seen from the structure, the lexical construction N directly combines with its
specifier DP, forming a head-specifier construct.
In the previous chapter we have seen that not only a simple lexical element
(e.g., a, an, this, that, any, some, his, how, which) but also a phrasal expression
like a possessive phrase can serve as a specifier:
The grammar thus allows not only a simple determiner but also a pos-
sessive NP to be projected into a DP, as represented in the following
structures:
NOUN PHRASES AND AGREEMENT
(12)
As shown here, the noun friend does not select a complement, and thus projects
to an NP with its specifier DP my brother’s. The head of this DP is the possessive
determiner selecting an NP as given here. The expression my brother is also a
full NP just like the whole phrase my brother’s friend. The common noun brother
requires a DP as its specifier.3
As we have seen in the previous chapters, common nouns can select a com-
plement, as in the planet’s proximity to the Sun, an increase in price, or a feeling
of loneliness. This kind of NP would have the following structure:
(13)
3 Once again note that this combinatorial system, with cancellation of the values of the valence
features SPR and COMPS, requires no vacuous projection from N to N′ when the N does not
combine with a complement. The head N, requiring a specifier, can directly combine with that
specifier with no intervening N′ projection; the COMPS value of such an N is simply empty.
The head noun proximity combines with its complement to the Sun, and the
resulting N′ phrase combines with the specifier the planet's, which consists of the
NP the planet and the possessive 's.
6.2.2 Pronouns
The core class of pronouns in English includes at least three main
subgroups:
Personal pronouns refer to specific persons or things and take different forms
to indicate person, number, gender, and case. Syntactically, each pronoun is
projected into a saturated NP without complements or specifiers:
(15)  NP[SPR ⟨ ⟩, COMPS ⟨ ⟩]
        |
      N[SPR ⟨ ⟩, COMPS ⟨ ⟩]
        |
      you
Reflexive pronouns are special forms which are typically used to indicate a
reflexive activity or action, which can include mental activities:
4 These restricted constructions can involve some indefinite pronouns (e.g., a little something, a
certain someone).
In this sense, proper nouns are just like pronouns in being projected into an
NP with no complement or specifier. However, proper nouns can be converted
into countable nouns when they refer to a particular individual or type of
individual:
In such cases, proper nouns are converted into common nouns, may select a
specifier, and take other nominal modifiers. This means that a proper noun will
have a lexical entry like (20a) but can be related to one like (20b):5
(20) a. [ prpn
          FORM John Smith
          SYN [ HEAD | POS noun
                VAL [ SPR ⟨ ⟩
                      COMPS ⟨ ⟩ ] ] ]
     b. [ cn-prpn
          FORM John Smith
          SYN [ HEAD | POS noun
                VAL [ SPR ⟨ DP ⟩
                      COMPS ⟨ ⟩ ] ] ]
(20a) specifies that the proper noun John Smith does not require any
specifier or complement. But (20b) says that the proper noun, converted
into a common noun, combines with a specifier, as represented in the
following:
5 Once again, the italic part at the top of the feature structure denotes the type of the expression
described. For example, prpn here means proper-noun and cn-prpn means common-noun-prpn
derived from a proper noun.
6.3 Agreement Types and Morphosyntactic Features
(21)
These data in turn mean that the head noun's number value should be identical to
that of its specifier, leading us to revise the HEAD-SPECIFIER CONSTRUCTION:
(23) HEAD-SPECIFIER CONSTRUCTION:
     XP → Spr[AGR 1], H[AGR 1]
This revised rule, specified with the agreement (AGR) feature, guarantees
that English head-specifier phrases require their head and specifier to share
agreement features including the attribute NUM (number).
(24) a. [ FORM a
          SYN [ HEAD [ POS det
                       AGR | NUM sing ]
                VAL [ SPR ⟨ ⟩
                      COMPS ⟨ ⟩ ] ] ]
     b. [ FORM book
          SYN [ HEAD [ POS noun
                       AGR | NUM sing ]
                VAL [ SPR ⟨ DP[NUM sing] ⟩
                      COMPS ⟨ ⟩ ] ] ]
(25)
The singular noun book selects a singular determiner like a as its specifier,
forming a head-specifier construct. The head and its specifier share their AGR
value via structure sharing, satisfying the constructional constraint.
Notice that the AGR value on the head noun book is passed up to the whole
NP, marking the whole NP as singular so that, if it is the subject, it can combine
with a singular VP.
In addition, there is nothing preventing a singular noun from combining with
a determiner that is not specified at all for a NUM value:
Determiners like the, no, and my are not specified for a NUM value. Formally,
their NUM value is underspecified as num(ber). That is, the grammar of English
has the underspecified value num for the feature NUM, with two subtypes,
sing(ular) and pl(ural):
(27)
Given this hierarchy, nouns like book requiring a singular Det can combine with
determiners like the whose AGR value is num. This is in accord with the grammar,
since the value num is a supertype of sing. The same explanation can be applied
to the phrases whose books and whose book, in which whose is underspecified
for the AGR’s number value.
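The role of the underspecified value num can be sketched as a small unification check. This is a toy model under our own naming conventions, not the book's formal machinery:

```python
# A toy unification check for the NUM hierarchy in (27): 'num' is the
# underspecified supertype of 'sing' and 'pl'. A head noun and its
# specifier can combine only if their NUM values unify.
def unify_num(spr_num: str, head_num: str):
    """Return the unified NUM value, or None if the values clash."""
    if spr_num == head_num:
        return spr_num
    if spr_num == "num":          # underspecified determiner (the, no, my)
        return head_num
    if head_num == "num":
        return spr_num
    return None                   # sing vs. pl: a genuine clash

assert unify_num("sing", "sing") == "sing"   # 'a book'
assert unify_num("num", "sing") == "sing"    # 'the book'
assert unify_num("num", "pl") == "pl"        # 'the books'
assert unify_num("pl", "sing") is None       # '*these book'
```

Unifying an underspecified value with a specific one simply yields the specific one, which is why the combines freely with both singular and plural nouns.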
6 Keen readers may have noticed that we allow the combination of N with the specifier DP.
Nothing blocks the head noun from combining with the specifier directly, as licensed by the
HEAD-SPECIFIER CONSTRUCTION.
The pronoun he or it here needs to agree with its antecedent not only in
number but also in person (1st, 2nd, or 3rd) and gender
(masculine, feminine, or neuter). This shows us that nouns also carry
information about person, number, and gender in their AGR values:
(29) a. [ FORM book
          SYN [ HEAD [ POS noun
                       AGR [ PER 3rd
                             NUM sing
                             GEND neut ] ]
                VAL [ SPR ⟨ DP[NUM sing] ⟩
                      COMPS ⟨ ⟩ ] ] ]
     b. [ FORM he
          SYN [ HEAD [ POS noun
                       AGR [ PER 3rd
                             NUM sing
                             GEND masc ] ]
                VAL [ SPR ⟨ ⟩
                      COMPS ⟨ ⟩ ] ] ]
As we have briefly shown, nouns have NUM (number), PER (person), and GEND
(gender) in their AGR values. The PER value can be 1st, 2nd, or 3rd; the GEND
value can be masc(uline), fem(inine), or neut(er). The NUM values are shown in
(27).
The present-tense verb swims selects one argument, which is realized as the
subject bearing the 3rd singular AGR information. This lexical information will
license a structure like the following:
(34)
6.4 Semantic Agreement Features
The verb itself carries the third singular agreement features, passing these
features up to the VP level. These agreement features are identical to those of the
subject NP the boy, satisfying the HEAD-SPECIFIER CONSTRUCTION. In other
words, if this verb were to combine with a subject that has an incompatible
agreement value, we would create an ungrammatical sentence like *The boys
swims in (32b). In this system, subject-verb agreement is structure sharing
between the AGR value the verb specifies for its subject (its SPR value) and
the AGR value of the NP with which the VP combines.
The acute reader may have noticed that there are similarities between noun-
determiner agreement and subject-verb agreement – that is, in the way that
agreement works inside NP and inside S. Both NP and S require agreement
between the head and the specifier, as reflected in the revised HEAD-SPECIFIER
CONSTRUCTION in (23).
When (35b) is spoken by a waiter to another waiter, the subject refers to a person
who ordered hash browns.7 A somewhat similar case is found in (36):
(36) King prawns cooked in chili salt and pepper was very much better, a simple
dish succulently executed.
Here the verb form was is singular in agreement with the dish being referred
to, rather than with a plurality of prawns. If we were simply to assume that the
subject phrase inherits the morphosyntactic agreement features of the head noun
(hash) browns in (35b) and (King) prawns in (36) and requires that these fea-
tures match those of the verb, we would not expect the singular verb form to be
possible at all in these examples. In the interpretation of a nominal expression,
that expression must be anchored to an individual in the situation described. We
call this anchoring value the noun phrase’s ‘index’ value. The index of hash
browns in (35a) must be anchored to the plural entities on the plate, whereas
that of hash browns in (35b) must be anchored to a customer who ordered the
food.
The lesson here is that English agreement is not purely morphosyntactic but
context-dependent in various ways – a context-dependency we represent via the
7 Such an example illustrates a reference transfer or a metonymic use of language (see Nun-
berg, 1995 and Pollard and Sag, 1994).
notion of ‘index’ that we have just introduced. Often what a given nominal refers
to in the real world is important for agreement – index agreement. Index agree-
ment involves sharing of referential indexes, closely related to the semantics of a
nominal and somewhat separate from the syntactic agreement feature AGR. This
then requires us to distinguish the morphological AGR value from the semantic
(SEM) IND (index) value. So, in addition to the morphological AGR value intro-
duced above, each noun will also have a semantic IND value representing what
the noun refers to in the actual world:8
(37) a. [ FORM boy
          SYN | HEAD [ POS noun
                       AGR | NUM sing ]
          SEM | IND | NUM sing ]
     b. [ FORM boys
          SYN | HEAD [ POS noun
                       AGR | NUM pl ]
          SEM | IND | NUM pl ]
The lexical entry for boy indicates that it is syntactically a singular noun (through
the feature AGR) and semantically also denotes a singular entity (through the
feature IND). And the verb will place a restriction on its subject’s IND value
rather than its morphological AGR value:9
(38) [ FORM swims
       SYN [ HEAD [ POS verb
                    AGR | NUM sing ]
             VAL | SPR ⟨ NP[IND | NUM sing] ⟩ ]
       SEM | IND s0 ]
The lexical entry for swims here indicates that it is morphologically marked as
singular (the AGR feature) and selects a subject to be linked to a singular entity
in the context (by the feature IND). Distinct from the IND value of nouns, the
verb’s IND value is a situation index (s0). The situation referred to here is that
the individual indexed by the SPR value is performing the action of swimming.
If the referent of this subject (its IND value) did not match, the result would be
an ungrammatical sentence like *The boys swims:
8 See Wechsler (2013) for a similar analysis in which the morphosyntactic AGR feature is named
CONCORD .
9 The IND value of a noun will be an individual index (i, j, k, etc.), whereas that of a verb or
predicative adjective will be a situation index such as s0 , s1 , s2 , etc.
(39)
As we can observe, the required subject has the IND value i, but the subject in
(39) has a different IND value j.
In the prototypical cases, the AGR and IND values are identical, but they can be
different, as in examples like (35b). This means that, depending on the context,
hash browns can have different IND values:10
(40) a. [ FORM hash browns
          SYN | HEAD [ POS noun
                       AGR | NUM pl ]
          SEM | IND | NUM pl ]      (when referring to the food itself)
     b. [ FORM hash browns
          SYN | HEAD [ POS noun
                       AGR | NUM pl ]
          SEM | IND | NUM sing ]    (when referring to a customer or to a dish)
In the lexical entry (40b), the AGR's NUM value is plural but its IND's NUM value
is singular. As shown by (35), the reference of hash browns can be transferred from
cooked potatoes to the customer who ordered them. This means that, given an
appropriate context, there can be a mismatch between the morphological form
of a noun and its index value.
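The AGR/IND split in (40) can be sketched as two separate feature slots, with subject-verb agreement inspecting only the second. This is a toy model; the dictionary layout is ours:

```python
# Toy entries for (40): the two uses of 'hash browns' share the same
# morphosyntactic AGR value (plural) but differ in their semantic index
# (IND). Subject-verb agreement targets IND, not AGR.
hash_browns_food = {"AGR_NUM": "pl", "IND_NUM": "pl"}        # the food itself
hash_browns_customer = {"AGR_NUM": "pl", "IND_NUM": "sing"}  # the customer

def sv_agree(subject: dict, verb_subj_ind_num: str) -> bool:
    """Subject-verb agreement checks the subject's index value."""
    return subject["IND_NUM"] == verb_subj_ind_num

assert sv_agree(hash_browns_food, "pl")        # 'The hash browns are cold.'
assert sv_agree(hash_browns_customer, "sing")  # the waiter's reading in (35b)
assert not sv_agree(hash_browns_customer, "pl")
```

The same two-slot layout extends to measure nouns like five pounds (AGR plural, IND singular) and collective nouns like government (AGR singular, IND singular or plural).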
What this indicates is that subject-verb agreement and noun-specifier agree-
ment are different. In fact, English determiner-noun agreement is merely a
reflection of morphosyntactic agreement features between determiner and noun,
whereas subject-verb (like pronoun-antecedent) agreement is index-based agree-
ment. This is represented in (41):
10 As indicated here, the lexical expression now has two features: SYN (syntax) and SEM (seman-
tics). The feature SYN includes HEAD as well as SPR and COMPS. The feature SEM is for semantic
information, and will be further described in what follows.
Such agreement patterns can be found in examples like the following, where
the underlined parts have singular agreement with four pounds, which is formally
plural:
(42) [Four pounds] was quite a bit of money in 1950 and it was not easy to come
by.
Given the separation of the morphological AGR value and the semantic
IND value, nothing blocks mismatches between the two (AGR and IND ) as
long as all other constraints are satisfied. Observe further examples in the
following:
(43) a. [Five pounds] is/*are a lot of money.
b. [Two drops] deodorizes/*deodorize anything in your house.
c. [Fifteen dollars] in a week is/*are not much.
d. [Fifteen years] represents/*represent a long period of his life.
e. [Two miles] is/*are as far as they can walk.
In all of these examples with measure nouns, the plural subject combines with
a singular verb. An apparent conflict arises from the agreement features of the
head noun. For proper agreement inside the noun phrase, the head noun has to
be plural, but for subject-verb agreement the noun has to be singular. Consider
the example in (43a). The noun pounds is morphologically plural and thus must
select a plural determiner, as argued so far. But when these nouns are anchored to
the group as a whole – that is, conceptualized as referring to a single measure –
the index value has to be singular, as represented in (44).
(44) [ FORM pounds
       SYN [ HEAD [ POS noun
                    AGR 1 [ NUM pl ] ]
             VAL | SPR ⟨ DP[AGR 1] ⟩ ]
       SEM | IND | NUM sing ]
(45)
There is nothing wrong in forming these dollars or these pounds, since dollars
and pounds can combine with a plural DP (or determiner). The issue is the agree-
ment between the subject these dollars and the verb is. Unlike five dollars or five
pounds, these dollars and these pounds are semantically not taken to refer to a
single unit: They always refer to plural entities. Thus no mismatch is allowed in
these examples.
However, a similar mismatch between subject and verb is also found in cases
with terms for social organizations or collections, as in the following attested
examples:
(47) a. [This/*these government] has/*have broken its promises.
b. [This/*these government] have/*has broken their promises.
The head noun government or team is singular, so it can combine with the sin-
gular determiner this. But the surprising fact is that the singular noun phrase
can combine with a plural verb have as well as with a singular verb has. This
is possible because the index value of the subject can be anchored either to a
singular entity or a plural one. More precisely, we can represent the relevant
information in the expressions participating in these agreement relationships, as
in (49).
(49) a. [ FORM this
          SYN [ HEAD [ POS det
                       AGR | NUM sing ] ] ]
     b. [ FORM team/government
          SYN [ HEAD [ POS noun
                       AGR | NUM sing ] ]
          SEM | IND | NUM pl ]
As represented in (49a) and (49b), this and government agree with each other in
terms of the morphosyntactic agreement number value, whereas the index value
of government is what matters for subject-verb agreement. This in turn means
that when government refers to the individuals in a government, the whole NP
this government carries a plural index value.
First, the lower NP in partitive phrases must be definite: no quantificational NP
is allowed in the of-phrase, as shown in (52):
(52) a. each student vs. each of the students vs. *each of students
b. some problems vs. some of the problems vs. *some of many problems
Second, not all determiners with quantificational force can appear in partitive
constructions. As shown in (53), determiners such as the, every, and no cannot
occupy the first position:
(53) a. *the of the students vs. the students
b. *every of his ideas vs. every idea
c. *no of your books vs. no book(s)
Third, simple NPs and partitive NPs have different restrictions relative to the
semantic head. Observe the contrast between (54) and (55):
(54) a. She doesn’t believe much of that story.
b. We listened to as little of his speech as possible.
c. How much of the fresco did the flood damage?
d. I read some of the book.
(55) a. *She doesn’t believe much story.
b. *We listened to as little speech as possible.
c. *How much fresco did the flood damage?
d. *I read some book.
The partitives can be headed by quantifiers like one and many, as shown in (56)
and (57), but, unlike many, one cannot serve as a determiner when the head noun
is collective, as in (57a).
(58) Type I:
a. Each of the suggestions is acceptable.
b. Neither of the cars has air conditioning.
c. None of these men wants to be president.
d. Many of the students can speak French or German.
We can observe here that the verb’s number value is determined by the preceding
expression each, neither, and none. Now let us contrast Type II:
(59) Type II:
a. Most of the fruit is rotten.
b. Most of the children are here.
c. Some of the soup needs more salt.
d. Some of the diners need menus.
e. All of the land belongs to the government.
f. All of these cars belong to me.
An effective way of capturing the relations between the Type I and Type II
constructions involves the lexical properties of the quantifiers. First, both Type I
and Type II involve pronominal forms serving as the head of the construction;
these select an of-phrase whose inner NP is definite:
(61) a. *neither of students, *some of water
b. neither of the two linguists/some of the water
However, we know that the two types are different in terms of agreement: Pro-
nouns like neither in the Type I construction are lexically specified to be singular,
whereas the number value for Type II comes from inside the selected PP.11
A slight digression is in order. It is easy to see that there are prepositions
whose functions are just grammatical markers:
(62) a. John is in the room.
b. I am fond of him.
The predicative preposition in here selects two arguments, John and the room.
By contrast, the preposition of has no predicative meaning but simply functions
as a marker for the argument of fond. PPs headed by these markers, as in the par-
titive construction, have semantic features identical to those of the prepositional
object NP. This means that the PP of him receives its semantic features from the
NP him.
Given this analysis, in which the PP in the partitive construction shares AGR
and semantic features (e.g., DEF: definite) with its inner NP, we can lexically
encode the similarities and differences between Type I and Type II in a simple
manner:
(63) a. [ FORM neither
          SYN [ HEAD [ POS noun
                       AGR | NUM sing ]
                VAL | COMPS ⟨ PP[ PFORM of
                                  DEF + ] ⟩ ] ]
     b. [ FORM some
          SYN [ HEAD [ POS noun
                       AGR | NUM 1 ]
                VAL | COMPS ⟨ PP[ PFORM of
                                  DEF +
                                  AGR | NUM 1 ] ⟩ ] ]
The entries in (63) show that both Type I neither and Type II some are lex-
ically specified to require a PP complement whose semantic value includes a
positive definiteness feature (DEF +). This will account for the contrast in
(61). However, the two types differ with respect to the NUM value. The NUM
value of Type I neither is singular, whereas that of Type II is identified with
the PP’s NUM value, which is actually coming from its prepositional object
NP. Showing these differences in syntactic structures, we have the alternatives
in (64):12
12 The arrows here are for expositional purposes and are not intended to indicate a direction of
feature copying or movement: The relevant features linked by the arrow are simply required to
have the same values.
154 NOUN PHRASES AND AGREEMENT
(64) [tree diagrams of the Type I NP neither of the students and the Type II NP some of the students, with arrows linking the shared NUM values]
As shown in (64a), for Type I, it is neither which determines the NUM value
of the whole NP. However, for Type II, it is the NP the students which
determines the NUM value of the whole NP.
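The way the NUM value is resolved in the two types can be sketched informally in Python. The mini-lexicon here is invented for illustration and is not part of the grammar formalism itself:

```python
# A rough sketch of NUM resolution in partitive NPs.  Type I heads like
# "neither" carry a lexically fixed NUM value; Type II heads like "some"
# inherit NUM from the of-PP, which shares it with its object NP.

TYPE_I = {"neither": "sing", "one": "sing", "each": "sing"}   # fixed NUM
TYPE_II = {"some", "most", "all", "half"}                     # NUM from the PP

def partitive_num(head: str, inner_np_num: str) -> str:
    """Return the NUM value of [head of the NP], given the inner NP's NUM."""
    if head in TYPE_I:
        return TYPE_I[head]      # e.g. "neither of the students" -> sing
    if head in TYPE_II:
        return inner_np_num      # e.g. "some of the students" -> pl
    raise ValueError(f"unknown partitive head: {head}")

print(partitive_num("neither", "pl"))  # sing: neither of the students is ...
print(partitive_num("some", "pl"))     # pl:   some of the students are ...
print(partitive_num("some", "sing"))   # sing: some of the water is ...
```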
We can check a few of the consequences of these different specifications in
the two types. Consider the contrast in (65):
(65) a. many of the/those/her apples
b. *many of some/all/no apples
(65b) is ungrammatical, since many requires an of-PP phrase whose DEF value
is positive.
This system also offers a simple way of dealing with the fact that quantifiers
like each affect the NUM value as well as the countability of the of-NP phrase.
One difference between Type I and Type II is that Type I selects a plural
of-NP phrase when the head noun is one, each, or neither. Meanwhile, Type II in
general has no such restriction. This is illustrated in (66) and (67):
(66) Type I:
a. one of the suggestions/*the suggestion/*his advice
b. each of the suggestions/*the suggestion/*his advice
c. neither of the students/*the student/*his advice
The only additional specification we need for Type I pronouns relates to the NUM
value on the PP’s complement, as given in (68):
(68) [ FORM each
       SYN [ HEAD [ POS noun
                    AGR|NUM sing ]
             VAL|COMPS < PP[PFORM of, DEF +, NUM pl] > ] ]
We see that quantifiers like each select a PP complement whose NUM value is
plural.
Type II pronouns do not place such a requirement on the PP complement: Note
that all the examples in (69) are acceptable, in contrast to those in (70):13
(69) a. Most of John’s boat has been repainted.
b. Some of the record contains evidence of wrongdoing.
c. Much of that theory is unfounded. (Data from Baker, 1995.)
The contrast here indicates that Type II pronouns can combine with a PP whose
daughter NP is singular. This is simply predicted because our analysis allows the
inner NP to be either plural or singular (or uncountable).
We are also in a position now to understand some differences between simple
NPs and partitive NPs. Consider the following examples:
(71) a. many dogs/*much dog/the dogs
b. much furniture/*many furniture/the furniture
The data here indicate that, in addition to the agreement features we have seen
so far, common nouns also place a restriction on the countability value of the
selected specifier. Specifically, a countable noun selects a countable determiner
as its specifier (Sag et al., 2003).14 To capture this agreement restriction, we can
introduce a new feature, COUNT (countable):
13 Examples like Much of the savings came from employee concessions indicate that much belongs
to Type II.
14 We cannot use the NUM feature here, since mass nouns like furniture are neither singular nor
plural. Nor can we take much to be unspecified for a NUM value, since, unlike determiners
such as the or his, it can combine only with a mass noun.
(73) a. [ FORM dogs
          SYN [ HEAD|POS noun
                VAL|SPR < DP[COUNT +] > ] ]

     b. [ FORM furniture
          SYN [ HEAD|POS noun
                VAL|SPR < DP[COUNT -] > ] ]
The lexical specification of a countable noun like dogs requires its specifier to
be [COUNT +] to prevent formations like *much dogs. This in turn means that
determiners must also carry the feature COUNT:
(74) a. [ FORM many
          SYN|HEAD [ POS det
                     COUNT + ] ]

     b. [ FORM the
          SYN|HEAD [ POS det
                     COUNT boolean ] ]

     c. [ FORM little
          SYN|HEAD [ POS det
                     COUNT - ] ]
The determiner many bears the positive COUNT value, while little carries the neg-
ative COUNT value. However, the value of the feature COUNT for the expression
the can be either positive or negative. Note here that the feature COUNT is not
an agreement feature but a semantic feature assigned only to determiners. Thus,
the cooccurrence restriction of count and mass nouns with certain determiners is
not captured as agreement but ensured by a VAL requirement of the count/mass
nouns.
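The COUNT-based cooccurrence restriction just described can be sketched as a small Python check. The mini-lexicon is invented for illustration:

```python
# Sketch of the SPR cooccurrence restriction via the COUNT feature.
# Determiners carry COUNT ("+", "-", or unspecified); a noun's SPR value
# constrains which determiners it accepts.

DET_COUNT = {"many": "+", "little": "-", "much": "-", "the": None, "his": None}
NOUN_SPR_COUNT = {"dogs": "+", "stories": "+", "furniture": "-", "advice": "-"}

def spr_ok(det: str, noun: str) -> bool:
    req = NOUN_SPR_COUNT[noun]   # COUNT value the noun demands of its SPR
    val = DET_COUNT[det]         # None = underspecified, compatible with either
    return val is None or val == req

print(spr_ok("many", "dogs"))      # True:  many dogs
print(spr_ok("much", "dogs"))      # False: *much dogs
print(spr_ok("much", "advice"))    # True:  much advice
print(spr_ok("the", "furniture"))  # True:  the furniture
```

Here the is underspecified for COUNT, matching its compatibility with both count and mass nouns.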
Now consider some further contrasts. Given the feature COUNT, we now understand
the contrast between much advice and *many advice, as well as that between
*much story and many stories. The facts in partitive structures are slightly
different, as (76) shows, but the patterns in the data follow directly from
these lexical entries:
(77) a. [ FORM many
          SYN [ HEAD|POS noun
                VAL|COMPS < PP[PFORM of, NUM pl, DEF +] > ] ]
6.5 Partitive NPs and Agreement 157
     b. [ FORM much
          SYN [ HEAD|POS noun
                VAL|COMPS < PP[PFORM of, NUM sing, DEF +] > ] ]
Notice here that (78) is a kind of partitive construction, whereas (79) measures
the amount of the NP after of. As the examples show, measure noun phrases do
not require a definite article, unlike the true partitive constructions repeated here:
(80) *many of beans, *some of wire, *much of cider, *none of yogurt, *one of
strawberries
There are several more differences between partitive and measure noun
phrases. For example, measure nouns cannot occur in simple noun phrases. They
obligatorily require an of-NP phrase:
(81) a. *one pound beans vs. one pound of beans
b. *three feet wire vs. three feet of wire
c. *a quart cider vs. a quart of cider
Note also that many and much in the partitive construction cannot combine with
numerals like one or several; by contrast, measure nouns like pound and feet
must combine with a numeral such as one or three.
Further complications arise owing to the existence of defective measure noun
phrases. Consider the following examples:
Expressions like few and lot actually behave quite differently. For instance, it
appears that a few acts like a complex word. The expression lot acts more like a
noun, but, unlike can, it does not allow its specifier to be a numeral.
Regarding agreement, measure noun phrases behave like Type I partitive
constructions:
(84) a. A can of tomatoes is/*are added.
b. Two cans of tomatoes are/*is added.
We can see here that it is the head noun can or cans which determines the NUM
value of the whole NP. The inner NP in the PP does not affect the NUM value at
all. These observations lead us to posit the following lexical entry for a measure
noun:15
(85) [ FORM pound
       SYN [ HEAD [ POS noun
                    AGR|NUM sing ]
             VAL [ SPR < DP >
                   COMPS < PP[PFORM of] > ] ] ]
That is, a measure noun like pound requires an obligatory SPR and a PP
complement. Unlike in partitive constructions, there is no definiteness
restriction on the PP complement.
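The agreement behavior in (84), where the head measure noun alone fixes the NUM value, can be sketched as follows. The morphological test and lexicon are invented simplifications:

```python
# Sketch: in a measure NP like "two cans of tomatoes", it is the head
# measure noun (can/cans), not the of-NP, that fixes the NUM value used
# for subject-verb agreement.

def measure_np_num(measure_noun: str) -> str:
    # Crude plural test for illustration only; real morphology is richer.
    return "pl" if measure_noun.endswith("s") else "sing"

def verb_form(num: str) -> str:
    return "is" if num == "sing" else "are"

print(verb_form(measure_np_num("can")))   # is:  A can of tomatoes is added.
print(verb_form(measure_np_num("cans")))  # are: Two cans of tomatoes are added.
```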
6.6 Modifying an NP
Predicative adjectives carry the feature PRD and have a MOD value that is empty
as a default:16
(90) [ FORM alive
       SYN|HEAD [ POS adj
                  PRD +
                  MOD < > ] ]
This says that alive is used predicatively and does not have a specification for a
MOD value (the value is empty). This lexical information will prevent predicative
adjectives from also functioning as noun modifiers.17
In contrast to a predicative adjective, a modifying adjective will have the
following lexical entry:
(91) [ FORM wooden
       SYN|HEAD [ POS adj
                  MOD < N′ > ] ]
This entry specifies an adjective that modifies any expression whose POS is
noun. This will license a structure like the following:
(92) [tree structure: the prenominal adjective wooden combining with the N′ desk]
As illustrated here, the prenominal adjective wooden modifies the head nominal
phrase (N′) desk.18
All these postnominal elements bear the feature MOD. Leaving aside detailed
discussion of the relative clause(-like) modifiers in (93b)–(93e) until Chapter 11,
we can say that example (93a) will have the following structure:20
(94)
These modifiers must modify an N′ but not a complete NP. This claim is
consistent with the examples above and with the ungrammatical examples in
(95):
(95) a. *John in the doorway waved to his father.
b. *He in the doorway waved to his father.
18 In the present system, a modifier expression can be either a lexical (X) or a phrasal expression
(XP), while the element modified is a phrasal expression.
19 Relative clauses like the boy who was in the doorway are also postnominal modifiers. See
Chapter 11 for details.
20 As noted in Chapter 4, the approach here assumes that the relative linear order of a head, comple-
ments, and modifiers is determined by a combination of general and language-specific ordering
principles. For example, a simple AP modifier will precede its head, whereas a PP or complex
AP modifier will follow the head.
(96)
6.7 Conclusion
Exercises
1. Draw a tree structure for each of the following sentences and mark
which expression determines the agreement (AGR) and index values
of the subject NP and the main verb:
a. Neither of these men is worthy to lead Italy.
b. None of his customary excuses suffices Edgar now.
c. One of the problems was the robins.
d. Some of the water from melted snow also goes into the ground
for plants.
e. Most of the milk your baby ingests during breastfeeding is
produced during nursing.
f. One of the major factors affecting the value of diamonds was
their weight.
g. Each of these stones has to be cut and polished.
h. Most of her free time was spent attending concerts.
the underlined (uninflected) verb lexeme and identify the noun that
determines this VFORM value:
a. An example of these substances be tobacco.
b. The effectiveness of teaching and learning depend on several
factors.
c. One of the most serious problems that some students have be
lack of motivation.
d. Ten years be a long time to spend in prison.
e. Everyone of us be given a prize.
f. Some of the fruit be going bad.
g. All of his wealth come from real estate investments.
h. Do some of your relatives live nearby?
i. Two ounces of this caviar cost nearly three hundred dollars.
j. Fifty pounds seem like a lot of weight to lose in one year.
k. Half of the year be dark and wintry.
l. Some of the promoters of ostrich meat compare its taste to beef
tenderloin.
6. Read the following passage and provide detailed lexical entries for
the underlined expressions. For nouns, specify their AGR and IND
values:
When two or more nouns combine, as in computer screen, inter-
net facility, and garden fence, the first noun is said to modify the
second. In a sense, the first noun is playing the role of an adjec-
tive, which is what most people have in mind when we think
about modification, but nouns can do the job equally well. It
is worth mentioning that not every language offers this possi-
bility, but native speakers of English are quite happy to invent
their own combinations of nouns in order to describe things,
events, or ideas they have not come across before; this is partic-
ularly true in the workplace, where we need constantly to refer
to innovations and new concepts.
7 Raising and Control Constructions
Verbs like try are called ‘control’ or ‘equi’ verbs. The subject of such a verb
is understood to be ‘equivalent’ in some sense to the unexpressed subject of the
infinitival VP. In linguistic terminology, the subject of the verb is said to ‘control’
the referent of the subject of the infinitival complement. Let us consider the ‘deep
structure’ of (1a), representing the unexpressed subject of the VP complement of
tried:1
(4) John tried [(for) John to fix the computer].
As shown here, in this sentence it is John who performs the action of fixing the
computer. In the original transformational grammar approach, this proposed deep
structure would undergo a rule of ‘Equivalent NP Deletion’ in which the second
NP John is deleted to produce the output sentence. This is why such verbs are
referred to as ‘equi-verbs.’
1 Deep structure, linked to surface structure, is a theoretical construct and abstract level of repre-
sentation that is designed to unify several related observed forms and that played an important
role in the theory of Transformational Grammar in the late twentieth century. For example, the
surface structures of both The cat chased the mouse and The mouse was chased by the cat are
derived from an identical deep structure similar to The cat chased the mouse.
7.2 Differences between Raising and Control Verbs 165
By contrast, verbs like seem are called ‘raising’ verbs. Consider the deep
structure of (1b):
(5) __ appeared [John to fix the computer].
In order to derive the 'surface structure' (1b), the subject, John, needs to be raised
to the matrix subject position marked by '__'. This transformational analysis is
designed to capture the fact that the subject of appear owes its semantic role to
the downstairs verb (it is the agent of fix) rather than to the main verb, appear.
The verb appear assigns only one semantic role (the situation or state of affairs
that ‘appears’) and, since John is not a state of affairs, the nominal expression
John is not assigned a semantic role by appear. This is why verbs like appear
are called ‘raising’ verbs.
This chapter discusses the similarities and differences between these two types
of verb and shows how we can explain their respective properties in a systematic
way.
There are many differences between the two classes of verb, which
we present here.
As suggested by the paraphrase, the one who does the action of trying is John
in (6a). How about (6b)? Is it John who is involved in the situation of ‘seem-
ing’? As represented in the paraphrase (7b), seeming is a property of a situation
(John’s being honest) rather than a property of an individual (John). Owing to
this difference, we say that a control verb like try assigns a semantic role to its
subject (the ‘agent’ role), whereas a raising verb like seem does not assign any
semantic role to its subject (this is what (5) is intended to represent). With
raising verbs, there is thus a mismatch between the number of syntactic arguments
(two: a subject NP and an infinitival VP complement) and the number of semantic
roles (one: a situation).
166 RAISING AND CONTROL CONSTRUCTIONS
Expletive subjects: Since a raising verb does not assign a semantic role to its
subject, certain expressions which do not have a semantic role – or any meaning,
for that matter – may appear in the subject position, provided that the infinitival
VP is of the right kind. Such potential subjects include the expletives it and
there:
(8) a. It tends to be warm in September.
b. It seems to bother Kim that they resigned.
Since control verbs like try and hope require their subject to have an agent role,
an expletive it or there, which takes no semantic role, cannot function as their
subject.
We can observe the same contrast with respect to raising and control adjec-
tives:
(10) a. There is likely to be a candidate. (raising)
b. *There/John is eager to be a candidate. (control)
Since the raising adjective likely does not assign any semantic role to its subject,
a nonreferential expression, the ‘dummy’ there subject of the existential verb
be, can be the subject of the sentence. By contrast, the control adjective eager
assigns a semantic role and thus does not allow this ‘dummy’ element as its
subject.
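The expletive diagnostic just illustrated can be rendered as a toy Python check. It is a sketch that assumes a two-way verb classification, with an invented mini-lexicon:

```python
# Sketch of the expletive test: a control verb assigns a semantic role to
# its subject, so expletive "there"/"it" is out; a raising verb imposes no
# role of its own and simply passes along what the lower VP requires.

CONTROL = {"try", "hope"}    # subject gets a semantic role
RAISING = {"seem", "tend"}   # subject gets no role from this verb

def subject_ok(verb: str, subject: str, lower_vp_allows: bool) -> bool:
    expletive = subject in {"there", "it(expl)"}
    if verb in CONTROL:
        return not expletive       # a role-bearing subject is required
    if verb in RAISING:
        return lower_vp_allows     # the requirement comes from the lower VP
    raise ValueError(verb)

print(subject_ok("seem", "there", lower_vp_allows=True))  # True:  There seems to be ...
print(subject_ok("try", "there", lower_vp_allows=True))   # False: *There tried to be ...
```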
Subcategorization: Investigating what determines properties of the subject, we
can note that in raising constructions, it is not the raising verb or adjective but
rather the infinitival complement’s predicate that influences the semantic char-
acteristics of the subject. That is, in raising constructions, it is not the raising
predicate itself but its VP complement that restricts the properties of the raising
predicate’s subject. Observe the following:
(11) a. Pat seemed [to be intelligent].
b. It seems [to be obvious that she is not showing up].
c. The chicken is likely [to come home to roost].
(In the sense of ‘Consequences will be felt.’)
(12) a. *There seemed [to be intelligent].
b. *Pat seems [to be obvious that she is not showing up].
c. *Pat is likely [to come home to roost].
The idiom in (11c) requires the subject NP the chicken, without which it would lack the idiomatic
meaning. Sentence (12c) would be acceptable only with a literal meaning, with
Pat referring to a chicken. In sum, in raising constructions, whatever category
is required as the subject of the infinitival VP is also required as the subject by
the higher VP – hence the intuition of ‘raising’: Any requirement placed on the
subject of the infinitival VP complement passes up to the higher predicate.
However, among control verbs, there is no direct relation between the subject
of the main verb and that of the infinitival VP. It is the control verb or adjective
itself which fully determines the properties of the subject:
(13) a. Sandy tried [to eat oysters].
b. *There tried [to be riots in Seoul].
c. *It tried [to bother me that Chris lied].
d. *The chickens try [to come home to roost]. (on the idiomatic meaning)
This selectional restriction then also accounts for the following contrast:
(16) a. The color red seems [to be his favorite color].
b. #The color red tried [to be his favorite color].
The presence of the raising verb seems does not change selectional restrictions
on the subject. However, the control verb tried is different: The control verb
tried requires its subject to be sentient, at least. The subject of a raising verb
just carries the selectional restrictions of the infinitival VP’s subject. This in turn
means that the subject of the infinitival VP is the subject of the raising verb.
In the raising example (17a), the meaning of the idiom The cat is out of the bag
is retained. However, because the control verb tries assigns a semantic role to its
subject the cat, ‘the cat’ must be the one doing the action of trying, and there is
no idiomatic meaning.
This preservation of meaning also holds for examples like the following:
(18) a. The dentist is likely to examine Pat.
b. Pat is likely to be examined by the dentist.
As the raising predicate likely does not assign a semantic role to its subject, (18a)
and (18b) have more or less identical meanings – the proposition is about the den-
tist examining Pat, in both active and passive forms: The active subject is raised
in (18a) and the passive subject in (18b). However, the control predicate eager
assigns a semantic role to its subject, and this forces (19a) and (19b) to differ
semantically: In (19a), it is the dentist who is eager to examine Pat, whereas in
(19b), it is Pat who is eager to be examined by the dentist. Intuitively, if one of the
sentences in (18) is true, so is the other, but this inference cannot be made in (19).
Once again, these two verbs (believe and persuade) look alike in terms of syntax:
They both combine with an NP and an infinitival VP complement. However, the
two are different with respect to the properties of the object NP in relation to the
rest of the structure. Observe the differences between believe and persuade with
respect to their possible object:
(21) a. Stephen believed it to be easy to please Maja.
b. *Stephen persuaded it to be easy to please Maja.
7.3 A Simple Transformational Approach 169
We can observe that, unlike believe, persuade does not license an expletive
object (just like try does not license an expletive subject). And in this respect,
the verb believe is similar to seem in that it does not assign a semantic role
(to its object). The differences show up again in the preservation of idiomatic
meaning:
While the idiomatic reading is retained with the raising verb believed, it is lost
with the control verb persuaded.
Active-passive pairs show another contrast:
With the raising verb believe, there is no strong semantic difference in the exam-
ples in (24). However, in (25), there is a clear difference in who is persuaded. In
(25a), it is the dentist, but in (25b), it is Pat who is persuaded. This is one more
piece of evidence that believe is a raising verb whereas persuade is a control verb
with respect to the object.
How can we account for these differences between raising and con-
trol verbs or adjectives? A traditional strategy, hinted at earlier, is to treat raising
as a relationship between two distinct syntactic structures, mediated by a pro-
cedure that was known in the literature as NP Movement. This transformation
takes a deep structure like (26a) as its input and produces a surface structure like
(26b):
To derive (26b), the subject of the infinitival VP in (26a) moves to the matrix
subject position, as represented in the following tree structure:
(27) [tree structure: the embedded subject Donald raised to the empty matrix subject position]
The movement of the subject Donald to the higher subject position will correctly
generate (26b). This kind of movement to the subject position can be triggered
by the requirement that each English declarative sentence have a surface subject
(Chomsky, 1981b). A similar movement process can be applied to the object
raising cases:
(28) a. Deep structure: Tom believes [Donald to be irritating].
b. Surface structure: Tom believes Donald to be irritating.
Here the embedded subject Donald moves not to the matrix subject but to the
matrix object position:
(29) [tree structure: the embedded subject Donald raised to the matrix object position]
Since try and persuade assign semantic roles to their subjects and objects, an
unfilled position of the kind designated above by '__' cannot be allowed. Instead,
it is posited that there is an unexpressed subject, PRO, in each infinitival VP:
(32) [tree structures with PRO as the unexpressed subject of each infinitival VP, coindexed with its antecedent]
An independent part of the theory of control links PRO in each case to its
antecedent, marked by coindexing. In (32a), PRO is coindexed with John; in
(32b), it is coindexed with Stephen.
These analyses, which involve derivational rules operating on tree structures,
are driven by the assumption that the mapping between semantics and syntax is
very direct. For example, in (29), the verb believe semantically selects an expe-
riencer and a proposition, and this is reflected in the initial structure. In some
2 In traditional generative grammar, this ‘big PRO’ is taken to be different from ‘small pro’ in the
sense that the former is the subject of a nonfinite clause while the latter is the subject of a finite
clause. Small pro is licensed only in null-subject languages like Korean and Italian.
syntactic respects, though, believe acts like it has an NP object (separate from
the infinitival complement), and the raising operation creates this object. In con-
trast, persuade semantically selects an agent, a patient, and a proposition, and
the structure in (32b) reflects this: The object position is there all along, so to
speak.
The classical transformational approach is a useful way to represent the differ-
ence between raising and control. However, it assumes a very different model of
grammar from that assumed here. In the transformational approach, the raising
and control patterns are the products of rules that map one sentential structure to
another. The transformational approach is highly abstract, in that it assumes syn-
tactic structure that is not ‘visible.’ For example, it is assumed that raising and
control verbs, rather than taking a VP as complement, in fact take a full sentence
as complement – one that happens to have a ‘phonetically null’ or ‘inaudible’
subject. In the remainder of this chapter, we will present a nontransformational
account of control and raising.
(33) a. [ FORM seemed
          SYN|VAL [ SPR < [1] NP >
                    COMPS < [2] VP[VFORM inf] > ]
          ARG-ST < [1] NP, [2] VP > ]

     b. [ FORM tried
          SYN|VAL [ SPR < [1] NP >
                    COMPS < [2] VP[VFORM inf] > ]
          ARG-ST < [1] NP, [2] VP > ]
7.4 A Nontransformational Approach 173
These two lexical entries would project the following similar structures, respec-
tively:
(34) [tree structures projected by seemed and tried, identical in shape]
As shown here, the syntactic structures projected by seemed and tried are
identical.
The object raising verb expect and the control verb persuade also have
identical valence (SPR and COMPS) information:
(35) a. [ FORM expects
          SYN|VAL [ SPR < [1] NP >
                    COMPS < [2] NP, [3] VP[VFORM inf] > ]
          ARG-ST < [1] NP, [2] NP, [3] VP > ]

     b. [ FORM persuaded
          SYN|VAL [ SPR < [1] NP >
                    COMPS < [2] NP, [3] VP[VFORM inf] > ]
          ARG-ST < [1] NP, [2] NP, [3] VP > ]
(36)
As can be seen here, raising and control verbs are no different in terms of their
subcategorization or valence requirements, so they project similar structures. The
question is then how we can capture the different properties of raising and con-
trol verbs. The answer is that their differences follow from the other pieces of
the lexical information, in particular, the mapping relations between syntax and
semantics.
These two lexical entries represent the difference between seem and try: In the
entry for seemed, the subject of the VP complement is fully identical with the
verb's own subject (notated by the tag [1]), whereas in the entry for tried, only
the index value of the specifier of its VP complement is identical to that of its
subject, meaning that the VP complement's understood subject refers to the same
individual as the subject of tried. This index identity in control constructions
is clear when we consider examples like the following:
(40) Someone_i tried [NP_i to leave town].
The example here means that whoever someone might refer to, that same person
left town. In some cases, English allows a paraphrase with an overt pronoun:
(41) a. Tom hoped [to win].
b. Tom_i hoped [that he_i would win].
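The contrast between full structure sharing (raising) and mere coindexing (control) can be mimicked with Python object identity. The NP class here is an illustrative stand-in, not part of the formalism:

```python
# Raising shares the WHOLE sign between the verb's SPR and the VP
# complement's SPR (token identity, Python `is`), while control shares
# only the INDEX, leaving the two signs otherwise distinct.

class NP:
    def __init__(self, form, index):
        self.form, self.index = form, index

subj = NP("there", index=None)   # an expletive has no referential index

# Raising (seem): the complement's unexpressed subject IS the matrix subject.
raising_vp_spr = subj
assert raising_vp_spr is subj

# Control (try): a distinct unexpressed subject, merely coindexed.
tom = NP("Tom", index="i")
control_vp_spr = NP(form=None, index=tom.index)
assert control_vp_spr is not tom and control_vp_spr.index == tom.index
print("raising: token identity; control: index identity only")
```

Token identity lets an expletive or an idiom chunk "pass through" a raising verb, whereas index identity presupposes a referential subject, which is why expletives fail with control verbs.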
The lexical entries in (39) generate the following structures for the intransitive
raising and control sentences:
(42) [tree structures for the intransitive raising and control sentences]
It is easy to verify that these structures conform to all the grammar rules (the
HEAD - SPECIFIER CONSTRUCTION and HEAD - COMPLEMENT CONSTRUCTION )
and principles, including the HFP and VALP.
Object raising and control predicates are analogous. Object raising verbs
select a VP complement whose subject is fully identical with the object.
Object control verbs select a VP complement whose subject’s index value
is identical with that of its object. The following lexical entries show these
properties:
(43) a. [ FORM expect
          SYN|VAL [ SPR < [1] NP_i >
                    COMPS < [2] NP, [3] VP[VFORM inf, SPR < [2] NP >] > ]
          ARG-ST < [1] NP, [2] NP, [3] VP > ]

     b. [ FORM persuade
          SYN|VAL [ SPR < [1] NP >
                    COMPS < [2] NP_i, [3] VP[VFORM inf, SPR < NP_i >] > ]
          ARG-ST < [1] NP, [2] NP, [3] VP > ]
(44)
(45)
This shows that the verb hit takes two arguments in the predicate relation hit.
The relevant semantic properties can be represented in a feature-structure
system as follows:
(47) [ FORM hit
       SYN|VAL [ SPR < [1] NP_i >
                 COMPS < [2] NP_j > ]
       ARG-ST < NP_i, NP_j >
       SEM [ IND s0
             RELS < [ PRED hit
                      AGT i
                      PAT j ] > ] ]
With respect to syntax, hit is a verb selecting two arguments, realized as a subject
and a complement, respectively, as shown in the values of the features VAL and
ARG - ST . The semantic information associated with the verb is represented by
means of the feature SEM (semantics). Its first attribute is IND (index), represent-
ing what this expression refers to; as a verb, hit refers to a situation s0 in which
an individual i hits an individual j. The semantic relation of hitting is represented
using the feature for semantic relations (RELS). The feature RELS has as its value
a list of one feature structure, here with three further features, PRED (predicate),
AGT (agent), and PAT (patient). The predicate ( PRED ) relation is whatever the
verb denotes: In this case, hit takes two arguments. The AGT argument in the
SEM value is coindexed with the SPR in the SYN value, while the PAT is coin-
dexed with COMPS. This coindexing links the subcategorization information of
hit with the arguments in its semantic relation. Simply put, the lexical entry in
(47) is the formal representation of the fact that in X hits Y, X is the hitter and Y
is the one hit.
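The linking pattern just described can be sketched as follows. The dictionary encoding is an invented simplification of the SEM and ARG-ST representations, not the book's notation:

```python
# Each entry lists its syntactic arguments and the roles in RELS.
# "hit" links both arguments to roles; "seem" links only its VP argument
# (SIT), leaving the subject role-less.

LEXICON = {
    "hit":  {"arg_st": ["NP_i", "NP_j"],
             "rels": {"PRED": "hit", "AGT": "i", "PAT": "j"}},
    "seem": {"arg_st": ["NP", "VP_s1"],
             "rels": {"PRED": "seem", "SIT": "s1"}},
}

def unlinked_args(verb: str) -> int:
    """Syntactic arguments minus semantic roles assigned (PRED is not a role)."""
    entry = LEXICON[verb]
    roles = len(entry["rels"]) - 1
    return len(entry["arg_st"]) - roles

print(unlinked_args("hit"))   # 0: one-to-one linking
print(unlinked_args("seem"))  # 1: the subject bears no role from "seem"
```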
Now we can use these additional parts of the verb’s representation to describe
the semantic differences between raising and control verbs. The subject of a rais-
ing verb like seem is not assigned any semantic role, while that of a control verb
like try is linked to a semantic role, whether agent (as in the case of try) or expe-
riencer (as in the case of want or eager). Assuming that ‘s0 ’ or ‘s1 ’ stands for
a situation denoted by an infinitival VP, we can give seem and try the following
simplified meaning representations:
(48) a. seem (s1 ) (‘s1 seems (to be the case) = s0 ’)
b. try (i, s1 ) (‘i tries to (make) s1 (be the case) = s0 ’)
We can see here that even though the verb seem selects two syntactic arguments
([1] NP and [2] VP), its meaning relation (PRED) has only one argument (SIT, referring
to s1): Note that the subject (SPR) is not coindexed with any argument in the
semantic relation.3 This means that the subject does not receive a semantic role
(from seem). Meanwhile, the verb try also selects two syntactic arguments (an
3 The feature attribute SIT denotes a situation, roughly corresponding to an event or state of affairs.
NP and a VP) as well as two semantic arguments (AGT and SIT). Unlike seem,
try has a one-to-one mapping between syntactic arguments and semantic
arguments: The verb's SPR is coindexed with the AGT role in the semantics
(RELS value), whereas its VP complement is identified with the SIT role.
Thus, both the subject and complement of try are linked to semantic arguments,
whereas the subject of seem is not linked to any semantic argument.
Now we turn to object-related verbs like expect and persuade. Just as in the
contrast between seem and try, the key difference here concerns whether the
object (y) receives a semantic role or not:
(50) a. expect (x, s1 )
b. persuade (x, y, s1 )
With respect to the manner in which members of the ARG - ST list are linked
to the syntactic grammatical functions SPR and COMPS, the two verbs are the
same: Both select three syntactic arguments. But observe the key difference in
the linking relations with the semantic arguments. As seen in the lexical entries,
expect has two semantic arguments, experiencer (EXP) and situation (SIT): The
object is not linked to a semantic argument of expect. In contrast, persuade has
three semantic arguments: AGT, EXP, and SIT. We can thus conclude that raising
predicates assign one fewer semantic role than they have syntactic dependents,
while in the case of control predicates there is a one-to-one correlation
between semantic arguments and grammatical functions.
Control verbs are different, directly assigning the semantic role of agent or
experiencer to the subject or object. For this reason, a control verb does not
accept an expletive argument, even if the verb of the infinitival complement is
one that can take such an argument. This is illustrated in (53a)–(53b) for the
subject of try and the object of persuade, respectively:
(53) a. *There/*It/John tried to leave the country.
b. We persuaded *there/*it/John to be part of the solution.
This is once again because the subject of seems does not have any semantic role:
Its subject is identical with the subject of its VP complement to be out of the bag,
whereas the subject of plans has its own agent role.
The same explanation applies to the following contrast:
(55) a. The dentist is likely to examine Pat.
b. Pat is likely to be examined by the dentist.
The control adjective eager assigns a semantic role to its subject independent of
the VP complement, as given in the following lexical entry:
(57) [ FORM eager
       SYN|VAL [ SPR < NP_i >
                 COMPS < VP[VFORM inf, IND s1] > ]
       SEM [ IND s0
             RELS < [ PRED eager
                      EXP i
                      SIT s1 ] > ] ]
This then means that (56a) and (56b) must differ in that in the former, it is
the dentist who is eager to perform the action denoted by the VP complement,
whereas in the latter, it is Pat who is eager.
Both persuaded and promised are control verbs: Their object is assigned a
semantic role (and so is their subject). This in turn means that their object cannot
be an expletive:
(59) a. *They persuaded it to rain.
b. *They promised it to rain.
However, the two are different with respect to the controller of the infinitival VP.
Consider who is understood as the unexpressed subject of the infinitival verb
here. In (58a), it is the object me which semantically functions as the subject of
the infinitival VP. Yet in (58b) it is the subject they who will do the action of
leaving. Owing to this fact, verbs like promise are known as ‘subject control’
verbs, whereas those like persuade are ‘object control’ verbs. This difference is
straightforwardly represented in their lexical entries:
(60)  [ FORM  persuade
        SYN | VAL  [ SPR    ⟨NPi⟩
                     COMPS  ⟨NPj, VP[VFORM inf, SPR ⟨NPj⟩]⟩ ] ]

      [ FORM  promise
        SYN | VAL  [ SPR    ⟨NPi⟩
                     COMPS  ⟨NPj, VP[VFORM inf, SPR ⟨NPi⟩]⟩ ]
        IND  s1 ]
The divergent control profiles of these two verbs follow from the communicative
acts they describe. Promising is a commitment by the speaker to perform some
act; it is therefore the speaker (the subject) and not the addressee (the object) who
is understood to be the (potential) doer of the action denoted by the infinitival
VP. In an act of persuasion, by contrast, what is at issue is a future act by the
addressee; it is therefore the addressee (the object), and not the speaker (the
subject), who is understood to be the potential doer of the action expressed by
the infinitival VP.
7.6 Conclusion
This chapter has shown that the properties of raising and control
verbs follow naturally from their lexical specifications. In particular, the
present analysis offers a systematic, construction-based account of the mis-
match between the number of syntactic complements that a verb has and the
number of semantic arguments that it has. In Chapter 8, we will observe that
the properties of raising verbs are key to understanding the English auxiliary
system.
Exercises
1. Draw trees for the following sentences and provide a lexical entry for
each of the italicized verbs:
Decide which group each of the following lexical items belongs to.
In doing so, consider the it, there, and idiom tests that this chapter
has introduced:
(i) certain, anxious, lucky, sure, apt, liable, bound, careful, reluc-
tant
(ii) tend, decide, manage, fail, happen, begin, hope, intend, refuse
6. Consider the following data and discuss briefly what can be the
antecedent of her and herself :
(i) a. Kevin urged Anne to be loyal to her.
b. Kevin urged Anne to be loyal to herself.
Now consider the following data and discuss the binding conditions
on ourselves and us. In particular, determine the relevance of the
ARG - ST list for the possible and impossible binding relations:
• modal auxiliary verbs such as will, shall, may, etc.: These have only
finite forms
• aspectual auxiliaries have/be: These have both finite and nonfinite
forms
• do: This ‘support’ verb has a finite form only, with vacuous semantics
• to: The infinitival marker has a nonfinite form only, with apparently
vacuous semantics
Such auxiliary verbs behave differently from main verbs in various respects.
There have been arguments for treating these auxiliary verbs as simply having the
lexical category V, while remaining distinct from main verbs with respect to both
syntactic distribution and semantic contribution. Similarities include the fact that
both auxiliary and main verbs carry tense information and participate in some of
the same syntactic constructions. These include so-called Right Node
Raising, as shown in (1):
Such phenomena suggest that it might be a mistake to assign auxiliary verbs and
lexical (main) verbs to two distinct categories.
Distinguishing auxiliary from main verbs: How do we know which verbs
are auxiliary verbs? Put differently, what distributional or behavioral properties
are unique to auxiliary verbs in Present Day English? The most reliable criteria
for auxiliary status arise from syntactic phenomena such as negation, inversion,
contraction, and ellipsis (known collectively by the acronym as the 'NICE' properties;
see Warner, 2000; Kim, 2002b; Sag et al., 2003):
1. Negation: Only auxiliary verbs can be followed by the negative adverb not.
(2) a. Tom will not leave in the morning.
b. *Tom left not in the morning.
3. Contraction: Only auxiliary verbs have contracted forms with the suffix n’t.
(4) a. John couldn’t leave the party.
b. *John leftn’t the party early.
4. Ellipsis: The complement of an auxiliary verb, but not of a main verb, can
be omitted.
(5) a. If anybody is spoiling the children, John is .
b. *If anybody keeps spoiling the children, John keeps .
The position of adverbs and so-called floated quantifiers can also be used to
differentiate auxiliary verbs from main verbs. These differences can be seen in
the following contrasts:
(7) a. She would never believe that story.
b. *She believed never his story.
Adverbs like never and floated quantifiers like all can follow an auxiliary verb
but not a main verb.
Ordering restrictions: The third major issue for the syntactic analy-
sis of auxiliaries is the question of how to capture ordering restrictions
on auxiliary sequences. Auxiliaries are subject to restrictions that limit
the sequences in which they can occur and the forms in which they
188    AUXILIARY AND RELATED CONSTRUCTIONS
can combine with other auxiliary and main verbs. Observe the following
examples:
(9) a. The children will have been being entertained.
b. He must have been being interrogated by the police at that very
moment.
As shown here, when there are two or more auxiliary verbs, they must come in a
certain order. In addition, note that each auxiliary verb requires the immediately
following verb to be in a particular morphological form (e.g., has eaten vs. *has
eating).
In the study of the English auxiliary system, we thus need to address at least
these issues:
• Should we posit an auxiliary category?
• How can we distinguish main verbs from auxiliary verbs?
• How can we account for phenomena (such as the NICE group)
that are sensitive to the presence of an auxiliary verb?
• How can we capture the ordering and cooccurrence restrictions
among auxiliary verbs?
This chapter provides answers to these fundamental questions related to the
English auxiliary system.
The PS rule in (11) will license sentences with or without auxiliary verbs, as in
(12):
(12) a. Mary solved the problem.
b. Mary would solve the problem.
c. Mary was solving the problem.
d. Mary would easily solve the problem.
(13)
To derive the surface structure, the famous ‘Affix Hopping’ rule of Chom-
sky (1957) ensures that the affixal tense morpheme (Past) in Tense is moved
to M (modal) (will), or onto the main verb (solve) if a modal does not appear. If
the modal is present, Past moves onto will, producing Mary would (easily) solve
the problem. If the modal is not present, the affix Past will move onto the main
verb solve, yielding Mary solved the problem.
In addition to the Affix Hopping rule, typical transformational analyses intro-
duce the English-particular rule ‘do-support,’ used to describe how the NICE
properties are manifested in clauses that otherwise have no auxiliary verb:
(14) a. *Mary not avoided Bill.
b. Mary did not avoid Bill.
The presence of not in a position like Adv in the tree (13) has been claimed to
prevent the Tense affix from hopping over to the verb (as not intervenes). As
a last-resort option, the grammar introduces the auxiliary verb do onto which
the affix Tense is hopped. This would then generate (14b). In other words, the
position of do is used to diagnose the position of Tense in the structure.
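The interaction of Affix Hopping and do-support just described can be sketched procedurally. The toy Python function below is an illustration only (the flat token lists and the small past-tense table are our assumptions, not part of the transformational analysis itself): the abstract Past affix hops onto the adjacent verb unless *not* intervenes, in which case *do* is inserted to host the affix.

```python
# Toy model of Affix Hopping (Chomsky 1957) with do-support.
# The abstract affix 'Past' attaches to the immediately following verb;
# an intervening 'not' blocks hopping and triggers last-resort do-support.

PAST = {"will": "would", "solve": "solved", "avoid": "avoided"}

def affix_hop(tokens):
    """Realize the abstract 'Past' affix in a flat token list."""
    i = tokens.index("Past")
    nxt = tokens[i + 1]
    if nxt == "not":                                   # hopping blocked
        return tokens[:i] + ["did"] + tokens[i + 1:]   # insert supporting 'do'
    return tokens[:i] + [PAST[nxt]] + tokens[i + 2:]   # Past hops onto verb
```

On this sketch, `affix_hop(["Mary", "Past", "not", "avoid", "Bill"])` yields the do-supported *Mary did not avoid Bill*, paralleling (14b).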
The analysis captures syntactic affordances and behaviors of auxiliary verbs,
but it nevertheless misses several important points. For example, because the
constituent structure in (13) does not reflect the constituency that we find
in coordinate structures, it cannot capture the fact that the tensed (first) auxiliary
and the following VP (which may or may not itself have an auxiliary verb as
head verb) form a unit with respect to coordination:
(15) a. Fred [must have been singing songs] and [probably was drinking beer].
b. (?)Fred must both [have been singing songs] and [have been drinking beer].
c. (?)Fred must have both [been singing songs] and [been drinking beer].
d. Fred must have been both [singing songs] and [drinking beer].
the coordination data just given.1 Nevertheless, there are many problems that
transformational analyses cannot easily overcome (for a thorough review, see
Kim, 2000 and Kim and Sag, 2002).
All the auxiliary verbs also bear this kind of raising property: The subject of an
auxiliary verb is determined not by the verb itself but by the VP following it:
(19) a. Tom/*It/*There will [leave the town tomorrow].
b. *Tom/It/*There will [rain tomorrow].
c. *Tom/*It/There will [be a riot tomorrow].
As seen from the contrasts, the type of subject in both (19) and (20) depends
on the type of subject that the bracketed VP (selected by the preceding auxiliary
verbs will and has) requires. This is typical of raising verbs. This implies that all
auxiliary verbs will have the following type specifications:
1 An IP (Inflectional Phrase), similar to a sentence headed by a finite verb, is a functional category
that contains inflectional information such as tense and agreement. See Radford (1997).
8.3 A Construction-Based Analysis 191
(21)  aux-verb  ⇒  [ SYN | HEAD  [ POS verb, AUX + ]
                     ARG-ST  ⟨ 1 XP, YP[SPR ⟨ 1 XP⟩] ⟩ ]
Each type of auxiliary verb, belonging to the type aux-verb, will bear these speci-
fications: Each auxiliary verb carries the feature [AUX +], and its subject specifier
(first argument) is the same as the subject of its second argument.
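The structure sharing expressed by the tag 1 can be emulated with object identity: the auxiliary's first argument and the SPR requirement of its second argument are literally one and the same object, so resolving one resolves both. A minimal sketch (our own encoding, not the book's formalism):

```python
# Reentrancy via object identity: the tag [1] is a single shared cell,
# so the auxiliary's own subject and its second argument's SPR value
# are fixed together.

class Tag:
    """A reentrancy tag: a single, possibly unresolved value."""
    def __init__(self):
        self.value = None

def aux_verb_entry():
    subj = Tag()  # the tag [1] in (21)
    return {"AUX": True, "ARG_ST": [subj, {"SPR": [subj]}]}

entry = aux_verb_entry()
entry["ARG_ST"][0].value = "there"  # resolving the subject once...
# ...automatically resolves the SPR of the second argument, since both
# positions hold the very same Tag object.
```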
8.3.2 Modals
One major property of modal auxiliaries, such as will, shall, and
must, is that they can only occur in finite (plain or past) forms. They cannot
occur either as infinitives or as participles:2
(22) a. I hope *to would/*to can/to study in France.
b. *John stopped can/canning to sing in tune.
Modals do not show 3rd person inflection in the present tense, nor do they have
a transparent past-tense form:
(23) a. *John musts/musted leave the party early.
b. *John wills leave the party early.
Reflecting these basic lexical properties, all modal auxiliary verbs will share
the following lexical specifications:
(25)  aux-modal  ⇒  [ SYN | HEAD | VFORM  fin
                      ARG-ST  ⟨NP, VP[VFORM bse]⟩ ]
In the lexical entry given here, we can notice at least two things. First, modals
bear the head feature AUX, which differentiates them from main verbs, while
being specified as finite ( fin). This constraint on the finiteness of modals ensures
that they cannot occur in any environment where finite verbs are prohibited:
(26) a. *We expect there to [VP[fin] will rain].
b. *It is vital that we [VP[fin] will study everyday].
2 As we have seen in 5.2.2, the VFORM value fin includes es, ed, and pln, whereas nonfin includes
ing, en, inf , and bse.
These simple lexical specifications, which are required in almost any analysis,
explain the distributional potential of modal verbs.
Second, (25) specifies that modals take two arguments, which will be realized
as SUBJ and COMPS, respectively, in accordance with the Argument Realization
Constraint. This means that a modal like must will ultimately have the following
lexical information:
(27)  [ FORM  must
        SYN  [ HEAD  [ VFORM fin, AUX + ]
               VAL   [ SPR    ⟨ 1 NP⟩
                       COMPS  ⟨ 2 VP[SPR ⟨ 1 NP⟩]⟩ ] ]
        ARG-ST  ⟨ 1 NP, 2 VP⟩ ]
The modal auxiliary verb must, as a subtype of aux-verb and aux-modal, inherits
feature specifications from both (21) and (25). Consider the feature specification
of its complement: The VP complement must be a VP[bse]. The possible and
impossible structures projected from this lexical specification can be most clearly
represented in tree format:
(28)
The structure shows that the modal auxiliary must requires a VP[bse] as its
complement. The VP[fin] in (28b) cannot function as the complement of must.
As shown in (27), modals are raising verbs, requiring the subject of their VP
complement to be identical to that of the modal auxiliary itself (indicated by
the box 1 ). This feature specification is inherited from (21), since modals also
belong to the type aux-verb. This then rules out ungrammatical examples like the
following:
(29) a. It/*Tom will [VP[bse] snow tomorrow].
b. There/*It may [VP[bse] exist a man in the park].
The VP snow tomorrow in (29a) requires the expletive subject it, disallowing other
NPs including Tom, and the VP exist a man in the park in (29b) allows only there
as its subject.
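The two conditions at work in (28) and (29), a bse complement plus a subject matching the one that complement itself demands, can be checked mechanically. A toy sketch, with assumed mini-entries for the VPs (the dictionaries are our own encoding, not the book's AVMs):

```python
# Toy licensing check for a modal: the complement must be VP[bse], and
# (being a raising verb) the modal's subject must be whatever subject
# that VP itself demands.

MODAL = {"VFORM": "fin", "AUX": True, "COMPS_VFORM": "bse"}

VPS = {
    "snow tomorrow":      {"VFORM": "bse", "SPR": "it"},     # weather 'it'
    "be a riot tomorrow": {"VFORM": "bse", "SPR": "there"},  # existential
    "left town":          {"VFORM": "fin", "SPR": "Tom"},    # finite VP
}

def modal_licenses(subject, vp_name):
    vp = VPS[vp_name]
    if vp["VFORM"] != MODAL["COMPS_VFORM"]:
        return False             # e.g. *must left town: wrong VFORM
    return subject == vp["SPR"]  # raising: the two subjects are identical
```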
On the assumption that every sentence has a tensed main verb, is and has here are
main verbs. However, a striking property of be is that it still shows the properties
of an auxiliary: It exhibits all of the NICE (negation, inversion, contraction, and
ellipsis) properties, as we will see below. The usage of be actually provides a
strong reason why the grammar should allow a verb categorized as ‘V’ to also
have the feature specification [AUX +]; be in (30a) is clearly a verb, yet it also
behaves exactly like an auxiliary.
The verb be has three main uses: as a copula selecting a predicate XP, as an
aspectual auxiliary with a progressive VP following, and as an auxiliary as part
of the passive construction:3
(31) a. John is in the school.
b. John is running to the car.
c. John was found in the office.
All three uses in (31) exhibit the NICE properties: They show identical behavior
under subject-auxiliary inversion, in their position relative to adverbs and
floated quantifiers, and so forth.
(33) Subject-aux inversion:
a. Was the child in the school? (*Did the child be in the school?)
b. Was the child running to the car?
c. Was the child found?
Thus, all three uses share the lexical specifications given in (35) (XP here is a
variable over phrasal categories such as NP, VP, AP, and PP):
(35)  [ aux-be
        FORM    be
        ARG-ST  ⟨NP, XP[PRD +]⟩ ]
All three be lexemes bear the feature AUX with the + value and select a pred-
icative phrase whose subject is identical with the subject of be. Every use of be
thus has the properties of a raising verb. The main syntactic difference among
the three uses arises when the be lexeme is realized as three different types of
words:4
(36) a. copula be: COMPS XP
b. progressive be: COMPS VP[VFORM ing]
c. passive be: COMPS VP[VFORM pass]
As given here, there are at least three uses of be: copula, progressive, and
passive, each of which has a different specification on the COMPS value.
The copula be needs no further COMPS specification: Any phrase that can
function as a predicate can be its COMPS value. The progressive be requires
its complement to be VP[ing], and the passive be requires its complement
to be VP[pass]. Hence, examples like those in (37) are straightforwardly
licensed:
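The three COMPS restrictions in (36) amount to a small lookup table. The following toy Python sketch checks each use (the feature dictionaries are our own encoding, not the book's AVMs):

```python
# The three realizations of 'be', per (36): each imposes its own
# condition on the complement, over and above the shared requirement
# that the complement be predicative ([PRD +]).

BE_USES = {
    "copula":      lambda c: True,                     # any XP[PRD +]
    "progressive": lambda c: c.get("VFORM") == "ing",  # VP[ing]
    "passive":     lambda c: c.get("VFORM") == "pass", # VP[pass]
}

def be_licensed(use, comp):
    """Require a predicative complement, then the use-specific check."""
    return comp.get("PRD") is True and BE_USES[use](comp)
```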
4 In Chapter 5, we have seen that in terms of morphological form, the VFORM value pass is a
subtype of the value en. See Chapter 9 for further discussion of passive constructions.
Given facts like these, we can posit the following specifications in the lexi-
cal entry for auxiliary have, the head of the perfect aspect construction (see
Michaelis, 2011 for semantic details):
(39)  [ aux-have
        FORM    have
        ARG-ST  ⟨NP, VP[VFORM en]⟩ ]
Here, the main verbs be and have display the NICE properties; although they are
main verbs, they have the syntax of auxiliaries. This fact supports the idea that
every sentence has a (main) verb, while the surface syntax of a verb is determined
by whether it has the specification [AUX +] or [AUX −].
8.3.4 Periphrastic Do
Next we discuss the so-called ‘dummy’ do, which is used as an aux-
iliary in the absence of another (finite) auxiliary head. This do also exhibits the
NICE properties:
(45) a. John does not like this town. (negation)
b. In no other circumstances does that distinction matter. (inversion)
c. They didn’t leave any food. (contraction)
d. Jane likes these apples even more than Mary does . (ellipsis)
There are also some properties that distinguish do from other auxiliaries. First,
unlike other auxiliaries, do appears neither before nor after any other auxiliary:
(47) a. *He does be leaving.
b. *He does have been eating.
c. *They will do come.
Second, the verb do has no intrinsic meaning. Except for carrying grammatical
information about tense (and number in present-tense clauses), it makes no
semantic contribution.
Third, if do is used in a positive statement, it needs to be emphatic (stressed).
But in negative statements and questions, no such requirement exists:
(48) a. *Pat did leave. (Ungrammatical if did is unstressed.)
b. Pat DID leave.
(49) a. Pat did not show up.
b. Pat DID not show up. (more likely in this case: Pat did NOT show up.)
(50) a. Did Pat find the solution?
b. How long did it last?
These examples are also ruled out by the specification that the complement of do
be a VP[AUX −]. This requirement will further predict the ungrammaticality of
the examples in (56) and (57):
In (56) and (57), the VPs following the auxiliary do, stressed or not, bear the
feature [AUX +] inherited from the auxiliaries have and be. This explains the
ungrammaticality of these sentences.
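This [AUX −] restriction on the complement of do can be stated as a one-line check (a toy sketch with assumed mini-entries, not the book's AVMs):

```python
# Periphrastic 'do' requires a base-form VP complement headed by a
# non-auxiliary verb, i.e. VP[bse, AUX -].

def do_licenses(vp):
    return vp["VFORM"] == "bse" and vp["AUX"] is False

like_this_town = {"VFORM": "bse", "AUX": False}  # 'like this town'
be_leaving     = {"VFORM": "bse", "AUX": True}   # '*does be leaving'
```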
These verbs share the property that they obligatorily take bare verbal com-
plements (hence, nonbase forms or modals cannot head the complement
VP):
(59) a. *John believed Kim to leaving here.
b. *John did not leaving here.
c. *John expects to must leave.
d. *John did not may leave.
These properties indicate that to should have a lexical entry like the following:
(61)  [ aux-to
        FORM  to
        SYN | HEAD | VFORM  inf
        ARG-ST  ⟨NP, VP[VFORM bse]⟩ ]
6 In British English, auxiliary do has nonfinite forms, as in John will read the book and Bill will do
too or John has read the book and Bill has done too.
8.4 Capturing NICE Properties 199
Taking not to modify a nonfinite VP, we can predict its various positional
possibilities in nonfinite clauses via the following lexical entry:
(63)  [ FORM  never/not
        SYN | HEAD  [ POS  adv
                      MOD  ⟨VP[VFORM nonfin]⟩ ] ]
The contrast in these two sentences shows one clear difference between never
and not. The negator not cannot precede a finite VP, though it can freely
occur as a nonfinite VP modifier, a property further illustrated by the following
examples:
(68) a. John could [not [leave town]].
b. John wants [not [to leave town]].
The polarity value of the tag is generally opposite to that of the matrix clause.
The contrast here indicates that not in (70a) makes the clause negative, while not
in (70b) does not.
The distinction between these two types of negation also influences scope
possibilities in an example like (71) (Warner, 2000):
(71) The president could not approve the bill.
Negation here could have the two different scope readings, paraphrased in (72):
(72) a. It would be possible for the president not to approve the bill.
b. It would not be possible for the president to approve the bill.
The data here indicate that not behaves differently from adverbs like never
in finite contexts, even though the two behave alike in nonfinite contexts.
The adverb never is a true diagnostic of a VP-modifier, and we use contrasts
between never and not to reason about what the properties of the negator not
must be.
We saw the lexical representation for constituent negation not in (63) above.
Sentential not typically appears linearly in the same position – following a finite
auxiliary verb – but shows different syntactic properties (while constituent nega-
tion need not follow an auxiliary, as in Not eating gluten is dumb). We can
observe that expressions like the negator not, too, so, and indeed combine with a
preceding auxiliary verb:
Expressions like too and so are used to reaffirm the truth of the sentence in
question and follow a finite auxiliary verb. We assume that the negator and
these reaffirming expressions (called AdvI) form a unit with the finite
auxiliary, resulting in a lexical-level construction. The syntactic cohesion of
the finite auxiliary and the negator not can be observed from the fact that the
two may fuse into a single lexical unit, which is what makes contraction
possible, as in won't, can't, and so forth.7
As we have seen for the verb-particle combination (e.g., figure out, give up, etc.),
the combination of a finite auxiliary verb and sentential negation is licensed by
the HEAD - LEX CONSTRUCTION (see Chapter 5.5):
This construction, along with the assumption that the sentential negator not bears
the LEX feature, projects a structure like the following:
7 Zwicky and Pullum (1983) note that the contracted negative n’t more closely resembles word
inflection than it does a ‘clitic’ or ‘weak’ word of the kind that often occurs in highly entrenched
word sequences (e.g., Gimme!). For example, as Zwicky and Pullum observe, won’t is not the
fused form one would predict based on the pronunciation of the word will, and such idiosyncrasies
are far more characteristic of inflectional endings than clitic words.
(76)
Since the sentential negator is not a modifier of the following VP-type expres-
sion, we take it to be selected by a finite auxiliary verb, as a main verb
selects a particle. This means that a finite auxiliary verb (fin-aux) can be pro-
jected into a corresponding NEG-introducing auxiliary verb (neg-fin-aux), as
in (77):
We can also take this relation as a kind of derivation whose input is a finite
auxiliary verb and whose output is a negated-finite auxiliary (fin-aux → neg-
fin-aux). That is, a finite auxiliary verb selecting just a complement XP can be
projected into a NEG finite auxiliary (AuxI) that selects the negator as an
additional lexical complement bearing the feature NEG as well as the feature LEX.
For instance, the finite auxiliary will can undergo this derivational process and
becomes a negative-finite auxiliary will:
(78)  [ fin-aux
        FORM  will
        SYN | HEAD  [ AUX +, VFORM fin ]
        ARG-ST  ⟨ 1 NP, 2 XP⟩ ]
      →
      [ neg-fin-aux
        FORM  will
        SYN | HEAD  [ AUX +, VFORM fin, NEG + ]
        ARG-ST  ⟨ 1 NP, AdvI[LEX +, NEG +], 2 XP⟩ ]
The output lexical construction will then licenses the following structure for
sentential negation:
(79)
As shown here, the negative finite auxiliary verb will selects two comple-
ments, the negator not and the VP leave town. The finite auxiliary then
first combines with the negator, forming a head-lex construct. This con-
struct then can combine with a VP complement, forming a head-complement
construct.
By treating not as both a modifier (constituent negation) and a lexical
complement (sentential negation), we can account for the scope differences
in (71) and various other phenomena, including VP Ellipsis (see below).
For example, the present analysis will assign two different structures to the
string (71):
(80)
In the structure (80a), not modifies only the nonfinite VP, with scope nar-
rower than could. Meanwhile, in (80b), not is at the same level in the syntax
as could, and semantically not scopes over could. In this case, the feature
[NEG +] percolates up to the VP and then to the whole sentence. The semantic
consequence of this structural difference can be seen in the different tag questions
appropriate for each interpretation, as we have noted earlier:
(81) a. The president [could [not [approve the bill]]], couldn’t/*could he?
b. The president [[[could][not]] [approve the bill]], could/*couldn’t he?
The tag question forms show that (81a) is actually a positive statement, even
though some part of it is negative. By contrast, (81b) is a negative statement.
However, there are certain exceptions that present problems for the analysis of
inverted auxiliaries involving a movement transformation. Observe the following
contrast:
(84) a. I shall go downtown.
b. Shall I go downtown?
Here there is a semantic difference between the auxiliary verb shall in (84a) and
the one in (84b): The former conveys a sense of simple futurity – in the near
future, I will go downtown – whereas the latter example concerns permission,
asking whether it is appropriate for me to go downtown. If the inverted verb in
(84b) is simply moved from its medial position, it is not clear how the
grammar can represent this meaning difference.
English also assigns various interpretations to the subject-auxiliary inversion
pattern:9
(85) a. Wish: May she live forever!
b. Matrix Polar Interrogative: Was I that stupid?
c. Negative Imperative: Don’t you even touch that!
d. Subjunctive: Had they been here now, we wouldn’t have this problem.
e. Exclamative: Boy, am I tired!
Each of these constructions has its own constraints, which cannot fully be pre-
dicted from other constructions. For example, in ‘wish’ constructions, only the
modal auxiliary may is possible. In negative imperatives, only don’t (but not, e.g.,
do) is allowed. These idiosyncratic properties support a nonmovement approach,
in which auxiliaries can be specified as having particular uses or meanings when
inserted into particular positions in the syntax.
Note that there are many environments where nonfinite Ss form a constituent:
(86) a. I prefer for [Tom [to do the washing]] and [Bill [to do the drying]].
b. Mary meant for, but nobody else meant for, [Sandy [to do the washing]].
(87) a. They didn’t approve of [him/my [leaving without a word]].
b. Tom believes that [him [taking a leave of absence]] bothers Mary.
c. Why does [John’s [taking a leave of absence]] bother Mary?
(88) a. [With [the children [so sick]]], we weren’t able to get much work done.
b. [With [Tom [out of town]]], Beth hastily exited New Albany and fled to
Ohio.
c. [With [Bush [a born-again Christian]]], the public already had a sense of
where he would stand on those issues.
(89) a. [His wife [missing]], John cried on Brown’s shoulder.
b. [No money [left in the account]], John didn’t know what to do.
Each of these examples shows us that S[inf ], S[ing], or S[PRD +] forms a syn-
tactic unit, which is traditionally called a small clause (SC) (see Chapter 9 for
further discussion). What these data imply is that the construction S[nonfin] lives
its own life as an independent construction in English. In the yes-no question and
wh-interrogative SAI construction, we further observe this constituenthood:
(90) a. Can [[Robin sing] and [Mary dance]]?
b. When the going got tough, why did [[the men quit] and [the women stay
behind]]?
(91) a. Who did [[Tom hug t ] and [Mary kiss t ]]?
b. Which man and which woman did [[Tom hug t ] and [Mary kiss t ]]
respectively?
Such coordination examples support the idea that a finite auxiliary verb
combines with a nonfinite S whose subject is nominative, as illustrated in the
following tree:
(92)
As shown in (92), the inverted finite auxiliary verb combines with a nonfinite
S. Licensing such a structure also means that a noninverted auxiliary verb
construction is systematically mapped into an inverted auxiliary verb by the
following derivational process:
The key effect of this post-inflection derivation is, as seen here, to change the
values of the attribute INV and ARG - ST of a finite auxiliary verb that belongs to
a raising verb. That is, a noninverted auxiliary verb selecting two arguments is
mapped onto an inverted auxiliary verb, selecting a nonfinite S whose external
argument (XARG) is the same as the input verb’s subject.
Traditionally, arguments are classified into external and internal ones, where
the former usually refer to the subject. The introduction of such a semantic fea-
ture is necessary if we want to make the subject value visible on the S node (see
Bender and Flickinger, 1999 and Sag, 2012). That is to say, although a VP has
an SPR value for its subject, once the VP and the subject combine, the resulting
S no longer has any information about any features of the subject – including its
semantic index. The feature XARG is a mechanism used to make this informa-
tion visible at the S level, which is where the tag question adjoins. The clausal
complement of the inverted auxiliary inherits the VFORM value and requires its
external argument (XARG) to be nominative (nom). For instance, consider the
derivation of the noninverted auxiliary will into the inverted will:
Put informally, the input is the noninverted auxiliary will that selects a subject
and a base VP[bse], whose subject is structure-shared with the verb’s subject as
a raising verb. By contrast, the output is the inverted auxiliary will that selects
just a nonfinite S. Note that this S is mapped onto the COMPS value because the
output belongs to a function-word (aux-inv-fwd) (see the discussion in Chapter 5
around (71)). Let us consider the structure of an SAI sentence licensed by this
inverted auxiliary will:
(95)
The combination of the nominative subject and the base VP forms a nonfinite
head-subject construct, and this nonfinite S combines with the head inverted
auxiliary, forming a head-complement construct. Note also that in the present
system, the VFORM value requirement on the VP complement of the noninverted
auxiliary is maintained in the nonfinite S complement. Thus, if the noninverted
auxiliary selects a bse VP, then its SAI counterpart will select a bse S instead,
thus blocking cases like *Will he coming to Seoul?, *Will he came to Seoul?, and
so on. More illustrations are given in (96):
(96) a. John can come to Seoul. vs. Can John come to Seoul?
b. John has driven to Seoul. vs. Has John driven to Seoul?
c. John is [visiting Seoul]. vs. Is John [visiting Seoul]?
d. John is [visited by his friends]. vs. Is John [visited by his friends]?
e. John is [to visit his friends]. vs. Is John [to visit his friends]?
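The inversion derivation just described can also be sketched as a function over toy lexical entries (our own encoding): the inverted output drops the separate subject and instead selects a single nonfinite S complement that carries the VFORM of the original VP and a nominative external argument (XARG).

```python
# Subject-auxiliary inversion as a lexical derivation: the noninverted
# finite auxiliary takes <NP, VP[v]>; its inverted counterpart takes a
# single S[v] complement with a nominative XARG, preserving v.

def invert(aux):
    assert aux["VFORM"] == "fin" and aux["AUX"]
    vform = aux["ARG_ST"][1]["VFORM"]          # e.g. bse for 'will'
    return {"FORM": aux["FORM"], "AUX": True, "INV": True,
            "ARG_ST": [{"cat": "S", "VFORM": vform, "XARG": "nom"}]}

will = {"FORM": "will", "AUX": True, "VFORM": "fin",
        "ARG_ST": [{"cat": "NP"}, {"cat": "VP", "VFORM": "bse"}]}

inverted_will = invert(will)
```

Because the VFORM value is carried over, this sketch reproduces the blocking of cases like *Will he coming to Seoul?: the inverted will only accepts an S[bse] complement.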
This means that a word like can will be mapped to can’t, gaining the NEG feature:
(100)  [ FORM  can
         HEAD | VFORM  fin ]
       →
       [ FORM  can't
         HEAD  [ VFORM fin, NEG + ] ]
As we saw earlier, the head feature NEG will play an important role in forming
tag questions:
(101) a. They can do it, can’t they?
b. They can’t do it, can they?
c. *They can’t do it, can’t they?
d. *They can’t do it, can he?
The tag part of such a question has a NEG value that is the opposite of that in the
main part of the clause.
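This polarity-reversal condition can be stated as a check on the NEG values of the two clause parts. A toy formulation (matching of the tag pronoun with the subject is simplified here to string identity, an assumption of ours):

```python
# Tag questions, as in (101): the tag must reverse the NEG value of the
# main clause, and its pronoun must pick out the main-clause subject.

def good_tag(main_neg, tag_neg, subject, tag_pronoun):
    return (main_neg != tag_neg) and (subject == tag_pronoun)
```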
This rule means that the second argument (YP) of an auxiliary verb need not be
realized as a complement (COMPS) when that argument is interpreted as a type of
pro (pro-verb) referring to the antecedent provided in the context, as
illustrated in the following examples:
(105) a. They all couldn’t solve the puzzle. However, Albert could .
b. Jane rebooted the server. She had to .
The complement of could and to is not realized here, but it can be understood by
referring to the preceding sentence.
Since the rule in (104) is stated to apply to any YP (predicate) after a verb
with the [AUX +] specification, it applies not only to VP complements but to
other predicative phrases, and not only to the canonical auxiliary verbs but also
to be and have in their main-verb uses. With be, non-VP complements can be elided:
(106) a. Kim is happy and Sandy is too.
b. When Kim was in China, I was too.
The main verb have is somewhat restricted, but the contrast in (107) is clear.
Even though have is a main verb in (107a), it can allow an elided complement,
unlike the main verb bring in (107b):
(107) a. A: Have you anything to share with the group?
B: No. Have you ?
b. A: Have you brought anything to share with the group?
B: No. *Have you brought ?
Given the derivation rule (104), which specifies no change in the ARG - ST,
a canonical auxiliary verb like can will have a counterpart that lacks a phrasal
complement on the COMPS list:
(108) [FORM can, SYN | VAL [SPR ⟨1 NP⟩, COMPS ⟨2 VP[bse]⟩], ARG-ST ⟨1, 2⟩]
      →  [FORM can, SYN | VAL [SPR ⟨1⟩, COMPS ⟨ ⟩], ARG-ST ⟨1, 2[pro]⟩]
Notice here that even though the VP complement is elided in the output, the
ARG - ST is intact. This allows us to assign a proper interpretation to the elided
VP (see Kim, 2003).
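The effect of the VP-ellipsis rule can be sketched in code. The dictionary representation and the function name are our own simplifications, not the book's full AVMs:

```python
# Sketch of the VP-ellipsis lexical rule in (108): the phrasal complement
# is removed from COMPS, but ARG-ST is left intact, its second member now
# interpreted as a contextually resolved pro.

def vpe_rule(entry: dict) -> dict:
    """Map an [AUX +] entry to its elliptical counterpart."""
    assert entry["aux"], "the rule applies only to [AUX +] verbs"
    out = dict(entry)
    out["comps"] = []                       # complement not realized
    arg_st = list(entry["arg_st"])
    arg_st[1] = arg_st[1] + "[pro]"         # second argument -> pro
    out["arg_st"] = arg_st                  # ARG-ST keeps both members
    return out

can = {"form": "can", "aux": True,
       "spr": ["NP"], "comps": ["VP[bse]"], "arg_st": ["NP", "VP[bse]"]}

can_ell = vpe_rule(can)
assert can_ell["comps"] == []               # ... Albert could __.
assert len(can_ell["arg_st"]) == 2          # interpretation still available
```

Because ARG-ST survives unchanged, the elided complement remains visible to interpretation, as the text notes.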
In the first part of the example in (109), there are three auxiliary verbs:
(109) Kim must have been dancing and
      a. Sandy must have been ___, too.
      b. Sandy must have ___, too.
      c. Sandy must ___, too.
There are therefore various options for an elided VP: the complement of been,
or have, or must.
The analysis also immediately predicts that ellipsis is possible with the
infinitival marker to, as this lexeme is an auxiliary verb, too:
(111) a. Because John persuaded Sally to , he didn’t have to talk to the reporters.
b. Mary likes to tour art galleries, but Bill hates to .
Finally, the analysis given here will also account for the contrast shown above
in (73); a similar contrast is found in the following examples:
The negator not in (112b) is a marker of sentential negation and can be the com-
plement of the finite auxiliary verb could. This means that we can apply the VPE
lexical rule to the auxiliary verb could after the projection of the NEGATION
AUXILIARY CONSTRUCTION , as shown in (113):
(113) [FORM could, COMPS ⟨2 Adv[NEG +], 3 VP[bse]⟩, ARG-ST ⟨1, 2, 3⟩]
      →  [FORM could, COMPS ⟨2⟩, ARG-ST ⟨1, 2, 3⟩]
As shown here in the right-hand form, the VP complement of the auxiliary verb
could is not realized as a COMPS element, though the negative adverb is. This
form would then project a syntactic structure like (114):
(114)
Thus, if the VP were elided, we would have a hypothetical structure like the
following:
(115)
Here, the adverb never modifies a VP through the feature MOD, which guarantees
that the adverb requires the head VP that it modifies. In an ellipsis structure,
the absence of such a VP means that there is no VP for the adverb to modify.
In other words, there is no rule licensing such a combination – predicting the
ungrammaticality of *has never, as opposed to has not.10
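The asymmetry between not (a COMPS element of the auxiliary) and never (a MOD-bearing modifier) under ellipsis can be sketched as follows; the encoding is illustrative and not from the text:

```python
# Toy contrast behind *has never vs. has not: 'not' is licensed as a
# complement of the finite auxiliary, while 'never' combines only as a
# modifier of a head VP via MOD. Under ellipsis there is no VP left for
# 'never' to modify, so no rule licenses the combination.

def licensed_remnant(aux_comps_adverbs: set, adverb: str,
                     vp_present: bool) -> bool:
    """An adverb survives VP ellipsis only if it is a complement of the
    auxiliary; a MOD-type adverb needs the (now absent) head VP."""
    if adverb in aux_comps_adverbs:   # e.g. sentential 'not'
        return True
    return vp_present                 # modifiers need the VP they modify

assert licensed_remnant({"not"}, "not", vp_present=False)        # has not __
assert not licensed_remnant({"not"}, "never", vp_present=False)  # *has never __
assert licensed_remnant({"not"}, "never", vp_present=True)       # has never left
```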
8.5 Conclusion
This chapter aimed to address four key issues in the study of the
English auxiliary system. The issues involve the properties that distinguish
auxiliary verbs from main verbs, ordering restrictions among auxiliary verbs,
combinatorial restrictions on the syntactic complements of auxiliary verbs, and
auxiliary-sensitive phenomena like NICE properties.
The chapter first focused on the morphosyntactic properties of English aux-
iliary verbs. We showed that their distributional, ordering, and combinatorial
properties all follow from their lexical groupings: modals, have/be, do, and to.
The second part of this chapter concerned the so-called NICE phenomena, each
of which is sensitive to the presence of an auxiliary verb and has been extensively
analyzed in generative grammar. The chapter showed that a construction-based
analysis offers a straightforward account of these phenomena without
reliance on movement operations or functional projections.
In Chapter 9, we move on to a particular auxiliary-headed construction or
family of constructions: the passive (which canonically consists of the passive
auxiliary be followed by a past participial VP complement). We will see that the
construction-based analysis developed in this chapter can be extended to account
for passive constructions in English.
10 As we saw in Section 6.6.1, Chapter 6, all modifiers carry the head feature MOD, whose value is
the expression that is modified.
Exercises
(iii)
a. He had hardly collected the papers on his desk, had he/*hadn’t
he?
b. He never achieved anything, did he/*didn’t he?
Draw tree structures for the sentences in (ii) and provide the
lexical entries for hardly, little, and never. The examples in
(iii) indicate that these adverbs all involve some kind of nega-
tion in the sentence in which they appear. In addition, think of
how your analysis can account for the unacceptable examples
in (iv):
(iv) a. As a statesman, he scarcely could do anything worth
mentioning.
b. As a statesman, scarcely could he do anything worth
mentioning.
c. *As a statesman, scarcely he could do anything worth
mentioning.
7. Identify errors in the following passage and provide the reasons for
the errors:
The expanded role of auxiliaries in English has resulting in some curious
rules. One is that when a sentence are to be negated, the word not must
follow not the main verb (as used to be the case), but the auxiliary. This rule
creates an awkward dilemma in the occasional instance when the sentence to
being negated actually doesn’t have an auxiliary verb. Thus, if I wish to deny
the sentence, I walked home, I must add an entirely meaningless auxiliary
from the verb do just to standing as the prop for the word not. The result is
the sentence, I didn’t walk home. Now, do and did are often adding to show
emphasis, but in those cases they are speak with emphasis. Thus, there is
a difference between saying I didn’t walk home and saying I DIDN’T walk
home. The latter sentence expresses emphasis, but in the former sentence
the verb did expresses nothing at all; it be merely there to hang the not on.
If we tried to say, I walked not home, this would had an unacceptably odd
sound to it. It would, indeed, sound archaic. English literature is full of such
archaisms, since putting not after the main verb was still good usage in the
time of Shakespeare and a century or more later.
9 Passive Constructions
9.1 Introduction
We recognize (1b) as the passive counterpart of the active sentence (1a). These
two sentences are true or false under the same real-world conditions: They both
describe the event of writing the lines by one Korean poet. The only difference
involves grammatical functions: In the active voice (1a), one of Korea’s most
famous poets is the subject, whereas in the passive voice (1b), these lines is the
subject.
Why are there two ways of saying essentially the same thing? It is generally
accepted that the passive construction is used for certain discourse-motivated
reasons. For example, when the person or thing acted upon is what the sentence
is about, we tend to use passive.1 Compare the following:
(2) a. Somebody apparently struck the unidentified victim during the early morn-
ing hours.
b. The unidentified victim was apparently struck during the early morning
hours.
We can observe that the passive in (2b) assigns greater salience to the victim
than the active in (2a). In addition, language users prefer passive voice when the
identity of the actor is unknown or unimportant:
(3) a. Targets can be observed at any angle.
b. During the early evening, Saturn is found in the north, while Jupiter rises in
the east.
Similarly, we use the passive voice in formal, scientific, or technical writing and
reports to convey an objective presentation of the events or state of affairs being
described. Compare, for example, the following sentences:
1 In other words, the passive construction is used to ensure that a nonagentive entity is realized as
the subject, because subject is the canonical position for a sentence topic (see Lambrecht, 1994).
9.2 The Relationship between Active and Passive
Yet, when such verbs are passive, the object NP is necessarily absent from the
postverbal position:
(8) a. *The guide has been taken John to the library.
b. *The department has been chosen John for the position.
(9) a. John has been taken to the library.
b. John has been chosen for the position.
The absence of the object in the passive is due to the fact that the argument that
would have been the object of the active verb has been promoted to subject of
the passive.
Apart from the realizations of the two core arguments of a transitive verb,
other subcategorization requirements are unchanged in a passive form. For
example, the active form handed in (10) requires an NP and a PP[to] as its
complements, and the passive handed in (11) still requires the PP complement:
(10) a. Pat handed a book to Chris.
b. *Pat handed to Chris.
c. *Pat handed a book.
If the active complement is itself a clause, the subject of the passive verb must
also be a clause:
(14) a. No one believes/suspects [that he is a fool].
b. [That he is a fool] is believed/suspected by no one.
We thus can conclude that the subject of the passive form is the argument which
corresponds to the object of the active. This also means that one cannot describe
the passive in terms of the respective mappings of agent and patient (e.g., the sub-
ject of a passive sentence is the verb’s patient argument), because the argument
realized as subject in sentences like (13b) and (15b) is not assigned a semantic
role, patient or otherwise, by the verb.
Morphosyntactic changes: In addition to changes in argument realization,
the passive construction requires the auxiliary verb be, which requires the passive
form of the verb (a subtype of the en form, see 5.2.1). In addition to ‘passive be,’
italicized in the examples below, there can be other auxiliary verbs, with the
passive auxiliary last in the sequence:
(16) a. Jean drove the car. → The car was driven.
b. Jean was driving the car. → The car was being driven.
c. Jean will drive the car. → The car will be driven.
d. Jean has driven the car. → The car has been driven.
e. Jean has been driving the car. → The car has been being driven.
f. Jean will have been driving the car. → The car will have been being driven.
The foregoing observations mean that any grammar must capture the following
basic properties of passive:
9.3 Approaches to Passive

There are several potential ways to capture the syntactic and semantic
relationships between active and passive forms. Given our discussion so far, one
might think of relying on grammatical categories in phrase structure (NP, VP, S,
etc.), or on surface valence properties (SPR and COMPS), often informally char-
acterized as grammatical functions, or on semantic roles (agent, patient, etc.).
In what follows, we will see that we need to refer to all of these aspects of the
representation in a proper treatment of English passive constructions.
This rule means that if there is anything that fits the SD in (19), it will be changed
into the given SC: that is, if we have any string in the order of X – NP – Y – V –
NP – Z (in which X, Y, and Z are variables), the order can be changed into X –
NP – Y – be – V+en – Z – by NP. For example:
(20)
The object Bill moves to the subject position and the verb be moves to I (Infl)
position, giving the output sentence Bill was deceived. The analysis is based on
these three major assumptions:
Such transitive verbs presumably fit the tree structure in (21), but they cannot be
passivized.
Second, there are verbs like bear, rumor, say, and repute that are used only in
the passive, as seen in the following contrasts:
(24) a. I was born in 1970.
b. It is rumored that he is on his way out.
c. John is said to be rich.
d. He is reputed to be a good scholar.
(25) a. *My mother bore me in 1970.
b. *Everyone rumored that he was on his way out.
Unlike, say, resemble, these verbs are not typically used as active forms. Intrin-
sically passive verb lexemes are difficult to explain if we rely on the assumption
that passives are derived from actives via configurational transformation rules.
Third, the subject in a passive sentence need not be a patient:
(26) a. Not much is known about the effects of these medications on children.
b. It was alleged by the victim that he was kidnapped.
c. That laughter is the sign of joy is doubted by no one.
This derivational rule says that if there is a transitive verb lexeme (v-tran-lxm)
selecting two arguments, it has a corresponding passive verb lexeme (passive-v).
This derivationally related verb selects the second argument of the input tran-
sitive verb as the first argument that will be realized as the subject. The first
argument in the input is mapped to an optional PP argument in the derived verb,
with the remaining arguments (the ‘. . . ’ in the rule) unchanged. The derivation
also changes the VFORM value to pass, reflecting the morphological process.4
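The derivational rule can be sketched in code; the list encoding of ARG-ST and the indices are our own simplifications:

```python
# Sketch of the passive derivation in (27): the second argument of a
# transitive verb lexeme becomes the first (subject) argument of the
# passive lexeme, the input subject is demoted to an optional PP[by],
# the remaining arguments are carried over, and VFORM becomes 'pass'.

def passive_rule(entry: dict) -> dict:
    """Map a v-tran-lxm entry to its passive-v counterpart."""
    subj, obj, *rest = entry["arg_st"]
    return {
        "form": entry["pass_form"],            # e.g. send -> sent
        "vform": "pass",
        # promoted object, unchanged remainder, optional agentive PP[by]
        "arg_st": [obj] + rest + ["(PP[by]:" + subj + ")"],
    }

send = {"form": "send", "pass_form": "sent",
        "arg_st": ["NP:i", "NP:2", "PP[to]:3"]}

sent = passive_rule(send)
assert sent["vform"] == "pass"
assert sent["arg_st"][0] == "NP:2"             # object promoted to subject
assert sent["arg_st"][1] == "PP[to]:3"         # PP complement intact
assert sent["arg_st"][-1] == "(PP[by]:NP:i)"   # demoted subject, optional
```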
Let us consider what kinds of passive sentences this derivation can give rise
to. Consider the following pair:
According to the derivational rule in (27), the active verb send has a counterpart
passive verb, sent:
3 The present analysis, in which verbs are classified into different lexical types in accordance with
their morphosyntactic behavior (Sag et al., 2003; Kim and Sells, 2008; Sag, 2012; Kim, 2016),
implies that there are verb lexemes that select two arguments but that are excluded from the
type v-tran-lxm, and further that there are verb lexemes which belong to the passive-v from the
beginning (not derived from v-tran-lxm). Verbs like resemble belong to the former group, while
those like rumor belong to the latter.
4 As we noted in Chapter 5, in terms of the morphological form, the VFORM pass is a subtype of
en.
(29) [FORM send, ARG-ST ⟨NPi, 2 NP, 3 PP[to]⟩]
     →  [FORM sent, SYN | HEAD | VFORM pass, ARG-ST ⟨2 NP, 3 PP[to], (PPi[by])⟩]
As seen here in the output form, the passive sent takes three arguments: a subject
identical to the second argument of the transitive verb, an intact PP inherited
from the transitive verb, and an optional PP whose index value is identical with
the subject of the transitive verb.5 The first of these three arguments will be
realized as the SPR element and the other two as COMPS elements, in accordance
with the ARC (Argument Realization Constraint). This output lexical entry can then be embedded in the
following structure for (28b):
(30)
As shown in (30), the passive sent combines with its PP[to] complement, form-
ing a VP that still requires a SPR. This VP functions as the complement of the
auxiliary be (was). As we saw in Chapter 8, the passive copula be is a raising
verb, with the lexical entry repeated in (31). Its subject (SPR value) is identical
to its VP complement’s subject she:
(31) [aux-be-pass, FORM be,
     SYN | VAL [SPR ⟨1 NP⟩, COMPS ⟨2 VP[VFORM pass, SPR ⟨1 NP⟩]⟩],
     ARG-ST ⟨1 NP, 2 VP⟩]
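The raising property of passive be — structure sharing of the SPR value with its VP complement's SPR — can be illustrated with a toy function (the encoding is ours, not the book's):

```python
# Sketch of the lexical entry for aux-be-pass in (31): its SPR value is
# structure-shared (tag [1]) with the SPR of its passive VP complement,
# so the surface subject is the VP's understood subject.

def combine_be_with_vp(vp: dict) -> dict:
    """aux-be-pass: select a VP[pass] complement and inherit its SPR."""
    assert vp["vform"] == "pass", "be selects a passive VP"
    return {"form": "be",
            "spr": vp["spr"],     # same object: the analogue of tag [1]
            "comps": [vp]}

sent_vp = {"vform": "pass", "spr": ["NP:she"]}
was = combine_be_with_vp(sent_vp)
assert was["spr"] is sent_vp["spr"]   # literally shared, not copied
```

Sharing the list object itself (rather than copying it) mimics the tag-based identity of feature-structure grammars.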
The passive verb believed in (32b) is derived from its active counterpart in
(32a). The derivational rule will generate the passive form believed, as given in
the following:
(33) [FORM believe, ARG-ST ⟨NPi, 2 CP⟩]
     →  [FORM believed, SYN | HEAD [POS verb, VFORM pass], ARG-ST ⟨2 CP, (PPi)⟩]
The output passive verb believed can then project a structure like the following:
(34)
The passive verb believed first combines with its optional complement by them
and then with the modifier widely. The resulting VP then combines with the
raising verb be in accordance with the HEAD - COMPLEMENT CONSTRUCTION.
This system, licensing each local structure by the defined grammar rules and
principles, thus links the CP subject of be to that of believed.
The same account also holds when the complement is an indirect question:
(35) a. They have decided [which attorney will give the closing argument].
b. [Which attorney will give the closing argument] has been decided (by them).
The active decided selects an interrogative sentence as its complement, and the
PASSIVE CONSTRUCTION can apply to this verb:6
(36) [FORM decide, SYN | HEAD | POS verb, ARG-ST ⟨NPi, Sj[QUE +]⟩]
     →  [FORM decided, SYN | HEAD [POS verb, VFORM pass], ARG-ST ⟨Sj[QUE +], (PPi[by])⟩]
The output passive decided then will license the following structure (for
simplicity, we do not show COMPS with empty < > values):
(37)
6 We assume that indirect or direct questions are marked by the feature QUE (question). See
Chapter 10.
9.4 Prepositional Passives
As seen here, the object of the preposition in the active can function as the subject
of the passive sentence. Notice that such prepositional passives are possible only
with verbs selecting a PP bearing a specified preposition:
(40) a. The plan was approved of by my mother. (My mother approved of the plan.)
b. The issue was dealt with promptly. (They dealt with the issue promptly.)
c. That’s not what was asked for. (That’s not what they asked for.)
d. This should be attended to immediately. (We should attend to this immedi-
ately.)
(41) a. *Boston was flown to. (They flew to/near/by Boston.)
b. *The capital was gathered near by a crowd of people. (A crowd of people
gathered near/at the capital.)
c. *The hot sun was played under by the children. (The children played
under/near the hot sun.)
The prepositions in (40) are all selected by the main verbs (no other preposi-
tions can replace them). By contrast, the prepositions in (41) are not selected
by the main verb, since they can be replaced by others, as seen in their active
counterparts.7
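The licensing condition can be stated as a simple lexical check. The mini-lexicon and function name below are illustrative, not from the text:

```python
# Toy check for the prepositional passive: it is licensed only when the
# verb lexically selects a PP with a fixed PFORM value, as in (40);
# freely chosen locative prepositions, as in (41), do not qualify.

SELECTED_PFORM = {            # verbs selecting a specific preposition
    "approve": "of", "deal": "with", "ask": "for", "attend": "to",
}

def prepositional_passive_ok(verb: str, prep: str) -> bool:
    """True iff the verb selects exactly this preposition (PFORM)."""
    return SELECTED_PFORM.get(verb) == prep

assert prepositional_passive_ok("approve", "of")   # The plan was approved of.
assert prepositional_passive_ok("deal", "with")    # The issue was dealt with.
assert not prepositional_passive_ok("fly", "to")   # *Boston was flown to.
assert not prepositional_passive_ok("play", "under")
```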
One thing to observe is that there is a contrast between active and pas-
sive prepositional verbs with respect to the appearance of an adverb (see
Chomsky, 1972; Bresnan, 1982b). Observe the following:
(42) a. That’s something I would have paid twice for.
7 See Exercise 5 of this chapter for examples (e.g., The bed was slept in) where the prepositional
passive is possible with an adjunct PP.
b. These are the books that we have gone most thoroughly over.
c. They look generally on John as selfish.
The contrast here shows us that, unlike the active, the passive does not allow any
adverb to intervene between the verb and the preposition.
There are two possible structures that can capture these properties: ternary
and reanalysis structures. The ternary structure generates a flat structure like the
following:
(44)
Contrasting with this flat or ternary structure, there is another possible structure
assumed in the literature:
(45)
This structure differs from (44) in that the passive verb and the preposition form
a constituent (the ‘reanalysis’). Both (44) and (45) can capture the coherence
between the prepositional verb and the preposition. Even though both have their
merits, we choose the structure (45), in which the passive verb and the preposi-
tion form a unit. Evidence for this kind of unitization comes from environments
in which the passive verb (but not the active verb) forms a lexical unit with the
following preposition:
What we can observe here is that, unlike the active verb, the passive relied on
acts like a lexical unit in the gapping process: The passive relied alone cannot be
gapped.
This contrast supports the reanalysis structure for the passive. The HEAD -
LEX CONSTRUCTION we have employed to license verb-particle and finite-
auxiliary-negator combinations also licenses the combination of the prepositional
passive V with the following P (which is ‘LEX’ in the sense that it is not a
prosodically heavy element). We repeat the construction here:
(47) HEAD - LEX CONSTRUCTION :
V → V, X[LEX +]
The output passive verb now has three arguments: The first argument will be real-
ized as the subject; the remaining two elements are a preposition whose PFORM
is identical with that of the input PP and an optional PP[by] linked to the input
subject. This output will then project a structure like the following:
8 In languages like Korean, German, and even French, such a syntactic combination is prevalent in
the formation of complex predicates. See Kim (2004b).
(51)
The HEAD - LEX CONSTRUCTION in (47) allows the passive verb to combine with
the preposition into first, still forming a lexical element. This resulting lexical
element then combines with its PP complement by the lawyer in accordance with
the HEAD - COMPLEMENT CONSTRUCTION, which requires that the complement
with which the head combines is phrasal.
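The two-step combination can be sketched in code. The example forms (a hypothetical passive verb talked and the preposition into) and the dictionary encoding are illustrative:

```python
# Sketch of the HEAD-LEX CONSTRUCTION in (47) applied to prepositional
# passives: the passive V combines with a [LEX +] preposition to form a
# complex lexical head, which then takes its phrasal complement (e.g.
# 'by the lawyer') via the HEAD-COMPLEMENT CONSTRUCTION.

def head_lex(verb: dict, prep: dict) -> dict:
    """V -> V, X[LEX +]: form a complex lexical head."""
    assert prep["lex"], "the second daughter must be [LEX +]"
    return {"form": verb["form"] + " " + prep["form"],
            "lex": True,                    # still a lexical element
            "comps": verb["comps"]}         # remaining phrasal complements

talked = {"form": "talked", "comps": ["PP[by]"]}
into = {"form": "into", "lex": True}

unit = head_lex(talked, into)
assert unit["form"] == "talked into" and unit["lex"]
assert unit["comps"] == ["PP[by]"]   # next combines with 'by the lawyer'
```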
9.5 The Get-Passive

(52) a. You must come back in spring to see them. The man did; he was fired.
b. He got fired by the liberals and rehired by Fox.
The be passive in (52a) and the get-passive in (52b) both describe a situation in
which an employer fired someone. Note that be and get passives are not always
interchangeable, as illustrated in the following (Huddleston and Pullum, 2002):
(53) a. Kim was/*got seen to leave the lab with Dr. Smith.
b. He saw Kim get/*be mauled by my brother’s dog.
In (53a), the head verb must be be, while in (53b) the head verb can only be
get.9 This contrast indicates that there must be some differences between the two
passives.10
The first main difference comes from the status of be and get. While the verb
be is a typical auxiliary, get is not (cf. Haegeman, 1985). This can be observed
from the NICE properties discussed in Chapter 8:
(54) a. He was not fired by the company.
b. Was he fired by the company?
c. He wasn’t fired by the company.
d. John was fired by the company, and Bill was too.
As seen from the contrast here, the passive verb got fails every test for auxiliary
status: The verb cannot have sentential negation following (55a), cannot undergo
auxiliary inversion (55b), has no contracted form (55c), and cannot elide the
following VP (55d). The possible alternatives are those in which the verb get is
used as a lexical verb:
(56) a. He didn’t get fired by the company.
b. Did he get fired by the company?
c. He didn’t get fired by the company.
d. John got fired by the company, and Bill did too.
These data indicate that the passive get is not an auxiliary verb.
Also note that the passive get verb is different from typical raising verbs in that
its subject referent cannot be an expletive (it or there) but must be understood
to be affected by the action in question (Taranto, 2005). That is, the status of
the subject is understood to be changed by the action performed by the agent.
Consider the following:
(57) a. The letter was written by you and no one else.
b. *The letter got written by you and no one else.
10 The get-passive comes in several types:
a. Central: A woman got phoned by her daughter who was already on the plane.
b. Psychological: I got frustrated by the high level of unemployment.
c. Reciprocal/Reflexive: She never got herself dressed up for work.
d. Adjectival: His clothes got entangled in sewer equipment.
e. Formulaic: I got fed up with sitting in front of my computer.
The central get-passive has an active counterpart with the identical propositional meaning,
although its agent can in general be inferred from context. Our discussion here centers on this
central type.
The letter came into existence after the action of writing was carried out,
so it was in a sense not affected. For an individual to be affected by
an action, it needs to exist at the time that the action happens. This
means that the preexistence of the subject is a necessary condition (Taranto,
2005):
The ‘affected’ condition can also account for the awkwardness of the following
examples:
All these examples, possible with the be-passive, contain lexical verbs that are
either stative or do not entail a change of state. For example, fearing someone or
seeing someone does not affect the individual.
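The constraints gathered so far can be collected into one toy predicate; the parameter names and boolean encoding are our own:

```python
# Toy statement of the constraints on the get-passive discussed above:
# the subject must be referential (not an expletive), must preexist the
# event (so it can be affected), and the participial predicate must be
# dynamic rather than stative.

def get_passive_ok(subject_expletive: bool, subject_preexists: bool,
                   verb_dynamic: bool) -> bool:
    return (not subject_expletive) and subject_preexists and verb_dynamic

assert get_passive_ok(False, True, True)        # He got fired.
assert not get_passive_ok(True, True, True)     # expletive subject
assert not get_passive_ok(False, False, True)   # *The letter got written ...
assert not get_passive_ok(False, True, False)   # *He got feared by the staff.
```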
In sum, the get-passive verb does not bear the feature AUX but requires
a passive VP as its complement. The get-passive typically focuses on what
happened as the result of the action described by the participial complement
predicate, and the subject referent of the get-passive is necessarily under-
stood to have been affected by the action. The following lexeme represents
passive get:
(60) [FORM get,
     SYN | HEAD | AUX −,
     ARG-ST ⟨NPj, VP[SPR ⟨NPj⟩, VFORM pass, IND s1]⟩,
     SEM [IND s0, RELS ⟨[PRED get-affected-rel, PAT j, SIT s1]⟩]]
In (60), we see that the verb get selects two arguments: a subject NP and a
passive VP whose VFORM value is passive. The subject of the verb get is an
affected patient in the situation s1. This lexeme will then project a sentence like
the following:
(61)
As seen in (61), the passive verb fired requires a patient subject NP and an
optional PP agent. This subject NP has the same index value as the subject of the
VP that the verb get requires. The subject bears the semantic role of patient. This
captures the fact that a get-passive sentence describes an event which has some
impact on the subject referent. This accords with the observation that the get-passive is
found only with dynamic verbs describing the action in question (Collins, 1996;
Downing, 1996; Taranto, 2005). The predicates typically used in the get-passive
are nonstative verbs like caught, paid, done, dressed, fired, tested, picked, thrown,
killed, asked. It is not natural for the complement of get to be a stative participle:
(62) a. It was/*got believed that the letter was a forgery.
b. He is/*got feared by most of the staff.
c. The teacher was/*got liked by everybody.
Stative verbs like believe, fear, and like are difficult to construe as change-
of-state verbs.
The effect conveyed by a get-passive sentence need not be negative:
(63) a. He got promoted multiple times.
b. The story got published and won some recognition.
As shown by such examples, the get-passive is characteristically used in
clauses involving adversity, but it can also describe a beneficial situation
(Collins, 1996).11
11 The get-passive has other pragmatic constraints: It usually conveys the speaker’s personal
involvement or attribution of responsibility to the subject referent, or it reflects the speaker’s
opinion about the desirability of the event’s outcome. See Collins (1996) for further discussion.
9.6 Conclusion
Exercises
1. Draw tree structures for each of the following sentences and then
provide a lexical entry for the italicized passive verb:
a. Peter has been asked to resign.
b. I assume the matter to have been filed in the appropriate records.
c. Smith wants the picture to be removed from the office.
d. The events have been described well.
e. Over 120 different contaminants have been dumped into the
river.
f. Heart disease is considered the leading cause of death in the
United States.
g. The balloon is positioned in an area of blockage and is
inflated.
h. There was believed to have been a riot in the kitchen.
i. Cancer is now thought to be unlikely to be caused by hot dogs.
After drawing tree structures for the above examples, discuss the lex-
ical properties of have and get as exemplified here. For example,
what are their ARG - ST lists?
4. Consider the following prepositional passive examples and then
analyze them as deeply as you can with tree structures:
(i) a. Ricky can be relied on.
b. The news was dealt with carefully.
c. The plaza was come into by many people.
d. The tree was looked after by Kim.
Can the analysis given in this chapter account for such examples?
Now observe the following examples, which illustrate two different
kinds of passive:
(iv) a. They paid a lot of attention to the matter.
b. The son took care of his parents.
(v) a. The matter was paid a lot of attention to.
b. A lot of attention was paid to the matter.
6. The (a) sentences in the following are active, whereas the (b)
sentences are all passive:
(i) a. John washed the trousers easily.
b. The trousers were washed easily.
c. The trousers wash easily.
(ii) a. They peel ripe oranges quickly.
b. Ripe oranges are peeled quickly.
c. Ripe oranges peel quickly.
Note that the (c) examples are often called the ‘middle’ construc-
tion. Check whether the verbs in the following also allow these
triplets: active, passive, and middle. In answering this, construct rel-
evant examples and also discuss all of the grammatical properties you
can find in such middle examples:
(iii) close, break, melt, bribe, translate, roll, crush
7. Provide a tree structure for each example and explain the rules or
principles that are violated in the ungrammatical versions:
(i) a. There is/*are believed to be a sheep in the park.
b. There *is/are believed to be sheep in the park.
10 Interrogative and Wh-Question Constructions
Each clause type has a dedicated function. For example, a declarative makes
a statement, an interrogative asks a question, an exclamative expresses surprise
about the degree of some property, and an imperative issues a directive. However,
these correspondences are not always one-to-one. For example, the declarative
in (2a) represents not a statement but a question, while the interrogative in (2b)
actually indicates a directive:
(4)
The wh-phrases formed from these wh-words have a variety of functions in the
clause. As seen in the examples in (5), a wh-expression can be an object, subject,
or oblique complement, or even an adjunct. Note that the wh-questions have a
bipartite structure: a wh-phrase and an S that is incomplete in the sense that the
complement of some predicator within it is missing:
(6) a. [NP Which man] [did you talk to ]?
b. [PP To which man] [did you talk ]?
c. [AP How ill] [has Hobbs been ]?
d. [AdvP How frequently] [did Hobbs see Rhodes ]?
The wh-phrase (filler) and the missing phrase (gap) must have identical syntactic
categories as a way of ensuring their linkage:
(8) a. *[NP Which man] [did you talk [PP ]]?
b. *[PP To which man] [did you talk to [NP ]]?
10.2 Movement vs. Feature Percolation
The wh-phrase who originates in the object position of recommend and is then
moved to the specifier position of the intermediate phrase C′. The auxiliary verb
will is also moved from the V position to C.
This kind of movement operation is an appealingly straightforward way
to capture the linkage between the filler and gap. However, the move-
ment analysis becomes less plausible when we consider examples like the
following:
(12) a. Who did Kim work for and Sandy rely on ?
b. *Who did Kim work for and Sandy rely ?
c. *Who did Kim work for and Sandy rely on Mary?
(14) a. We endlessly talked about [the fact that she had quit the race].
b. [The fact that she had quit the race], we endlessly talked about .
(16) a. *We endlessly argued about [that she had quit the race].
b. [That she had quit the race], we endlessly argued about .
shared within the tree so that the gap and its filler bear the same specifications
for the relevant features, for example, syntactic category.
(18)
Notations like NP/NP (read as ‘NP slash NP’) or S/NP (‘S slash NP’) here mean
that the category to the left of the slash is incomplete: It is missing one NP.
This missing information is percolated up to the point where the slash category
is combined with the filler who. Instead of movement operations, this strategy
has successive applications of a phrase-structure rule that creates a local tree in
which a constituent bearing a gap feature is combined with another constituent,
and the mother phrase bears the same value for the gap feature that the gapped
daughter does.
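The percolation mechanism can be sketched as follows; the list encoding of the gap feature and the function names are illustrative:

```python
# Minimal sketch of GAP (slash) percolation: a phrase's gap list collects
# its daughters' unreleased gaps, and the head-filler step discharges a
# gap element that matches the filler's category.

def percolate(*daughter_gaps):
    """Mother's gap value is the concatenation of the daughters' gaps."""
    gaps = []
    for g in daughter_gaps:
        gaps.extend(g)
    return gaps

def head_filler(filler_cat: str, s_gap: list) -> list:
    """Combine a filler with S/XP: discharge one matching gap."""
    assert filler_cat in s_gap, "filler must match a gap in the clause"
    out = list(s_gap)
    out.remove(filler_cat)
    return out

# who did you [VP recommend __ ]: the NP gap passes up V -> VP -> S
vp_gap = percolate(["NP"], [])        # gapped verb plus (no) complements
s_gap = percolate([], vp_gap)         # subject plus gapped VP
assert s_gap == ["NP"]                # the clause is 'missing an NP'
assert head_filler("NP", s_gap) == [] # 'who' discharges the NP gap
```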
This kind of analysis can be used to describe the contrast shown in (12a) and
(12b). Let us look at partial structures of these two examples:
(19)
In (19a), the gaps are both NPs, while in (19b), an NP and a PP are
missing. Since the mechanism of feature unification allows two nonconflicting
phrases to be unified into one, the two S/NP phrases in (19a) are merged into
one S/NP. Simply put, the whole coordinate structure is ‘missing an NP,’ and this
description also applies to each internal conjunct. However, in (19b) we cannot
combine the two phrases S/NP and S/PP into one because they have conflicting
slash values.
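The unification step just described can be sketched as follows (a toy encoding of our own, for illustration):

```python
def coordinate(gap1, gap2):
    """Coordination (sketch): the conjuncts' GAP values must unify.
    Identical, nonconflicting values merge into one; conflicting values
    cannot be unified, so no coordinate phrase is licensed."""
    return gap1 if gap1 == gap2 else None

# (19a) 'Who did Kim work for __ and Sandy rely on __?': S/NP + S/NP = S/NP
assert coordinate(("NP",), ("NP",)) == ("NP",)
# (19b): S/NP and S/PP carry conflicting slash values, so unification fails.
assert coordinate(("NP",), ("PP",)) is None
```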
In (21), the object of the verb is present as its sister, whereas in (22) the object
is in a nonlocal position. These two possibilities for argument instantiation are
captured by the following revised ARC:
This revised ARC thus allows the following lexical entries for recommend:
(24)
In (24a), the two arguments of the verb recommend are realized as the SPR and
COMPS values, respectively, whereas in (24b) the second argument is realized not
as a COMPS value but as a GAP value. Each of these two different realizations will
project the following structures for examples like (21b) and (22b), respectively:
(25)
The main difference between the two is that in (25a), the object of recommend is
the verb’s sister, while in (25b) it is not. That is, in the former the object is local
to the verb whereas in the latter it is nonlocal. In (25b), the verb contains a GAP
value which is identified with the object. This GAP value is passed up to the VP
and then to the middle S. This GAP value is discharged by the filler who, or more
specifically by the HEAD - FILLER CONSTRUCTION in (26):
(26) HEAD - FILLER CONSTRUCTION:
This grammar rule says that when a head expression S containing a nonempty
GAP value combines with the constituent bearing its filler value, the resulting
phrase will form a grammatical head-filler phrase with the GAP value discharged.
This completes the ‘top’ of the long-distance or unbounded dependency.
The ARC will ensure that of these three arguments, the first must be realized as
the SPR element and the rest either as COMPS or as GAP elements. We will thus
have at least the following three realizations for the verb lexeme put:2
(28) a. [v-wd FORM put, SYN|VAL [SPR ⟨1 NP⟩, COMPS ⟨2 NP, 3 PP⟩, GAP ⟨ ⟩],
        ARG-ST ⟨1 NP, 2 NP, 3 PP⟩]
     b. [v-gap-wd FORM put, SYN|VAL [SPR ⟨1 NP⟩, COMPS ⟨3 PP⟩, GAP ⟨2 NP⟩],
        ARG-ST ⟨1 NP, 2 NP, 3 PP⟩]
     c. [v-gap-wd FORM put, SYN|VAL [SPR ⟨1 NP⟩, COMPS ⟨2 NP⟩, GAP ⟨3 PP⟩],
        ARG-ST ⟨1 NP, 2 NP, 3 PP⟩]
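The effect of the ARC on put can be illustrated with a short sketch that enumerates the possible realizations of an ARG-ST list. The function name and encoding are ours, for illustration only:

```python
from itertools import combinations

def arc_realizations(arg_st):
    """Argument Realization Constraint (sketch): the first argument is
    realized as SPR; each remaining argument is realized either on COMPS
    or on GAP."""
    spr, rest = arg_st[:1], arg_st[1:]
    out = []
    for k in range(len(rest) + 1):
        for gapped in combinations(range(len(rest)), k):
            comps = [a for i, a in enumerate(rest) if i not in gapped]
            gap = [a for i, a in enumerate(rest) if i in gapped]
            out.append({"SPR": spr, "COMPS": comps, "GAP": gap})
    return out

# put: ARG-ST <NP, NP, PP> yields the realizations in (28a-c), among others.
reals = arc_realizations(["NP", "NP", "PP"])
assert {"SPR": ["NP"], "COMPS": ["NP", "PP"], "GAP": []} in reals   # (28a)
assert {"SPR": ["NP"], "COMPS": ["PP"], "GAP": ["NP"]} in reals     # (28b)
assert {"SPR": ["NP"], "COMPS": ["NP"], "GAP": ["PP"]} in reals     # (28c)
```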
Each of these three lexical entries can then be used to generate sentences
like the following:
(29) a. John put the books in a box.
b. Which books did John put in the box?
c. Where did John put the books?
2 The SPR value of a verb can be gapped too. See Section 10.3.3.
As we see here, the complements of the verb put may be realized in three
different ways. The verb put in (28a) shows the canonical realization of the verb’s
arguments, licensing an example like (29a). Meanwhile, in (28b), the object NP
argument is realized as a GAP, as reflected in (29b), whereas in (28c), the PP
is realized as a GAP, as shown in (29c). The following tree structure shows the
derivation of (29b) and the manner in which the lexical entry of (28b) contributes
to the propagation of the GAP feature throughout the tree:
(30)
Let us look at the structure, working from bottom to top. At the bottom, the
verb put has one PP complement, with its NP complement being realized as a
GAP value. This GAP information is copied to the mother node of each phrasal
construct in the tree, successively, the VP, then the S that immediately domi-
nates this VP, and finally the S whose head is the phrase-initial auxiliary did,
at which point the GAP value is satisfied by the presence of the [QUE +] filler.
Each phrase is licensed by a rule of the grammar: The verb put with the rele-
vant GAP specification first combines with the necessary PP complement in the
box, in accordance with the HEAD - COMPLEMENT CONSTRUCTION. The result-
ing VP combines with the subject, forming a nonfinite S with which the inverted
auxiliary verb did combines. The resulting S remains incomplete because of the
nonempty GAP value (every complete sentence must have an empty GAP value).
This GAP value is discharged when the HEAD - FILLER CONSTRUCTION in (26)
combines the filler NP which book with the incomplete S.3
This kind of feature percolation system, involving no empty elements, works
well even for long-distance dependency examples. Consider the following
structure:
(31)
The GAP value starts from the lexical head met, whose second argument is real-
ized as a GAP value. Since the complement of the verb met is realized as a GAP
value, the verb met will not look for its complement in the local domain (as its
sister node). The GAP information will be passed up to the embedded S, which
is a nonhead daughter. It is the principle given in (32) that ensures that the GAP
value in the head daughter or nonhead daughter is passed up through the structure
until it is discharged by the filler who in the HEAD - FILLER CONSTRUCTION:4
The role of this principle is clear from the embedded S in (31): The principle
allows the GAP in this nonhead S to pass up to the VP. Assuming (32), we can
observe that the treatment of long-distance dependency involves three parts: top,
middle, and bottom. The bottom part introduces the GAP value according to the
ARC. The middle part ensures the GAP value is inherited ‘up’ to the mother
in accordance with the NIP. Finally, the top level terminates the GAP value by
providing the filler as nonhead daughter, in accordance with the HEAD - FILLER
CONSTRUCTION .
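Under a simplified tree encoding of our own, the three parts of the dependency (bottom: GAP introduction; middle: NIP percolation; top: head-filler discharge) can be sketched as:

```python
def gap_of(node):
    """Bottom-up GAP computation: a lexical node introduces its own GAP
    (bottom); a phrase inherits its daughters' GAP values (middle, per the
    NIP); a head-filler phrase discharges the gap its filler matches (top)."""
    if "daughters" not in node:
        return list(node.get("GAP", []))
    gaps = [g for d in node["daughters"] for g in gap_of(d)]
    if node.get("filler") in gaps:
        gaps.remove(node["filler"])  # discharged by the filler daughter
    return gaps

# 'Who do you think Kim met __?': the GAP starts on 'met' and is inherited
# upward until the filler NP discharges it at the top.
met = {"word": "met", "GAP": ["NP"]}
embedded_s = {"daughters": [{"word": "Kim"}, {"daughters": [met]}]}
main_vp = {"daughters": [{"word": "think"}, embedded_s]}
top = {"filler": "NP", "daughters": [main_vp]}
assert gap_of(embedded_s) == ["NP"]  # still incomplete: S/NP
assert gap_of(top) == []             # complete sentence: GAP discharged
```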
It is also easy to verify that this feature percolation system accounts for
examples like (33), in which the gap is a non-NP:
(33) a. [In which box] did John place the book ?
b. [How happy] has John been ?
The HEAD - FILLER CONSTRUCTION in (26) ensures that the categorial status of
the filler is identical to that of the gap. The structure of (33a) can be represented
as follows:
(34)
In this structure, the missing phrase is a PP encoded in the GAP value. This value
is percolated up to the lower S and discharged by the filler in which box.
In addition, this approach provides a clearer account of the examples we saw
in (12), which we repeat here:
This grammar rule explains the contrast in (35), represented using simplified
feature structures:
(37)
In (37a), the GAP value in the first conjunct is identical to that in the second
conjunct, satisfying the COORDINATION CONSTRUCTION. The feature unifica-
tion will allow these two identical GAP values to be unified into one. However,
in (37b), the GAP values in the two conjuncts are different, violating the
COORDINATION CONSTRUCTION .5
Notice that when the subject is questioned with who, the presence of an aux-
iliary verb is optional. That is, the question in (38a) is well-formed, even though
no auxiliary is present. The related example (38b) is also well-formed, but it is
used only when there is emphasis on the auxiliary.
As a first step toward accounting for such examples, we can allow a structure
similar to that of nonsubject wh-questions and license a structure like (39), in
which the subject is gapped:
(39) a. Who placed the book in the box?
b. Who can place the book in the box?
This revised ARC guarantees that the ARG-ST list is the sum of the SPR,
COMPS, and GAP lists. The system then allows for the following lexical
realization of put, in addition to those in (28):
5 This feature-based analysis can also offer a way of dealing with the movement paradox examples
we observed in (15), repeated here:
The introduction of a GAP value is a lexical realization process in the present system, implying
that we can assume that the complement of the preposition on in such a usage can be realized
either as an NP in (38b) or as a nominal GAP element. Since, as shown in Chapter 5, the filler CP
in (c) also belongs to the category nominal, there is then no category mismatching between the
filler and the gap here. See Kim and Sells (2008) too.
(41) [FORM placed, SYN|VAL [SPR ⟨ ⟩, COMPS ⟨2 NP, 3 PP⟩, GAP ⟨1 NP⟩],
     ARG-ST ⟨1 NP, 2 NP, 3 PP⟩]
This realization in which the subject is gapped then projects the following
structure for (39a):6
(42)
As shown in (42), the subject of placed is realized as the GAP value, metaphor-
ically passing up to the mother node. This mother VP is marked as projecting
up to the incomplete sentence ‘S’ in terms of the traditional notion of phrases.
This is a notational variant to indicate that the VP is identical to the ‘S’ in
6 Note that our feature system means the following for the complete S, an incomplete ‘S’ with its
subject being gapped, and a VP:
a. S = [SPR ⟨ ⟩, COMPS ⟨ ⟩, GAP ⟨ ⟩]  b. ‘S’ = [SPR ⟨ ⟩, COMPS ⟨ ⟩, GAP ⟨XP⟩]  c. VP = [SPR ⟨NP⟩, COMPS ⟨ ⟩, GAP ⟨ ⟩]
The verb visited allows its subject to be gapped and then licenses the head-
complement combination of visited Seoul. This VP with its subject being gapped,
vacuously projected to ‘S,’ serves as the complement of the verb think. The GAP
value, passing up all the way to the second lower S, is then discharged by the
filler who.
(52) a. [FORM wonder, SYN [HEAD|POS verb, VAL [SPR ⟨1⟩, COMPS ⟨2 S/CP[QUE +]⟩]],
        ARG-ST ⟨1 NP, 2 S/CP⟩]
     b. [FORM deny, SYN [HEAD|POS verb, VAL [SPR ⟨1⟩, COMPS ⟨2 [QUE −]⟩]],
        ARG-ST ⟨1 NP, 2⟩]
     c. [FORM tell, SYN [HEAD|POS verb, VAL [SPR ⟨1 NP⟩, COMPS ⟨2 NP, 3 S/CP[QUE ±]⟩]],
        ARG-ST ⟨1 NP, 2 NP, 3 S/CP⟩]
The feature QUE flags the presence of a clause-initial wh-word like who or which;
it is used to distinguish between indirect questions and declarative clauses. The
QUE value of the verb’s complement will ensure that each verb combines with
an appropriate clausal complement. For example, the verb wonder, requiring a
[QUE +] clausal complement, will be licensed in a structure like the following:
(53)
The GAP value of likes is passed up to the lower S and discharged by the filler
whose book. The wh-word whose carries the feature [QUE +], which will pass up
to the point where it is ‘visible’ to the verb selecting its complement or to the
highest position needed to indicate that the particular sentence is a question. For
example, in (54), the feature QUE indicates that the whole sentence is a ques-
tion, whereas in (55) it allows the verb ask to select an indirect question as its
complement:
The percolation of the feature QUE upward from a wh-word can be ensured by the
NIP, which guarantees that nonlocal features like QUE are passed up until they
are bound off or selected by a sister (whether it be a filler phrase or a selecting
V). This principled constraint allows the QUE value to pass up to the mother from
a deeply embedded nonhead, as illustrated in the following:
(57)
Although which is embedded in the PP and functions as the Det of the inner NP,
its QUE value will pass up to the S, granting it the status of an indirect question.
The verb wonder then combines with this S, thus satisfying its valence require-
ment. If the verb combined with a [QUE −] clausal complement, the result would
be an ungrammatical structure:
(58) a. *Kim has wondered [[QUE −] that Gary stayed in the room].
b. *Kim asked me [[QUE −] that the monkeys are very fond of chocolates].
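The complement selection encoded in (52), and the failures in (58), come down to a check on the complement's QUE value. A minimal sketch (the verb inventory and encoding are ours, for illustration):

```python
# Each verb's requirement on its clausal complement's QUE value;
# the '±' of (52c) means tell accepts either value.
REQUIRES = {"wonder": {"+"}, "deny": {"-"}, "ask": {"+"}, "tell": {"+", "-"}}

def licenses(verb, que):
    """A verb combines with a clausal complement only if the complement's
    QUE value is among those the verb accepts."""
    return que in REQUIRES[verb]

assert licenses("wonder", "+")      # wonder + indirect question
assert not licenses("wonder", "-")  # (58a) *wondered that Gary stayed ...
assert not licenses("ask", "-")     # (58b) *asked me that the monkeys ...
assert licenses("tell", "+") and licenses("tell", "-")  # QUE ± in (52c)
```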
As we saw above, the category of the missing phrase within the S must
correspond to that of the wh-phrase in the initial position. For example, the
following structure is not licensed simply because there is no HEAD - FILLER
CONSTRUCTION that allows a filler NP to combine with an S missing a PP:
(59)
In a similar fashion, the present system also predicts the following contrast:
(60) a. John knows [whose book [Mary bought ] and [Tom borrowed from
her]].
b. *John knows [whose book [Mary bought ] and [Tom talked ]].
The partial structure of these sentences can be represented as follows:
(61)
As long as the two GAP values are identical, we can unify the two, as in (61a).
However, if the GAP values are different, as in (61b), there is no way to unify
them in the coordination structure.
These indirect questions are all internally complete in the sense that there is no
missing element. This means that the complementizers whether and if will have
at least the following lexical information:
(63) [FORM whether, SYN [HEAD|POS comp, VAL|COMPS ⟨S[fin]⟩, QUE +], ARG-ST ⟨S⟩]
While if and whether both carry a positive value for the QUE feature, whether
more closely resembles question words like when in the following respect: the
type of indirect question that it introduces can serve as the object of a preposition,
as in (65):
(65) a. I am not certain about [when he will come].
b. I am not certain about [whether he will go or not].
This means that whether and if both bear the attribute [QUE +] (projecting an
indirect question), but only whether behaves like a true wh-element.7
Like finite indirect questions, these constructions have the familiar bipartite
structure: a wh-phrase and an infinitival clause missing one element.
Notice at this point that in English there exist at least four different ways for
the subject to be realized: as an overt NP or a covert NP (gap, PRO, or pro):
(70) a. The student protected him. (canonical NP)
b. Who protected him? (subject gap NP)
c. To protect him is not an easy task. (big PRO)
d. Protect him! (small pro)
In (70a), the subject is a ‘canonical’ NP, while those in the subsequent exam-
ples are ‘noncanonical.’ In the wh-question (70b), the subject is a GAP value; in
(70c), the infinitival VP has an understood, unexpressed subject PRO; the imper-
ative in (70d) has an unexpressed subject, understood as the 2nd person subject
you. As previously noted, the unexpressed pronoun subject of a finite clause
is called ‘pro’ (pronounced ‘small pro’), whereas that of a nonfinite clause is
7 One way to distinguish the wh-elements, including whether, from if is to use an additional feature
WH with binary values.
called ‘PRO’ (pronounced ‘big pro’) to capture the distinctive referential prop-
erties of these sign types (see Chomsky, 1982). In terms of a theory of linguistic
types, this means that we have ‘canonical’ pronouns like he and him as well as
‘noncanonical (covert)’ realizations of pronouns, such as pro for imperatives and
PRO for infinitival clauses. This in turn means that in English, when a VP’s sub-
ject is a noncanonical one, either a 2nd person pronoun pro or a PRO, the VP can
be projected directly into S in accordance with the following construction rule:
(71) NONCANONICAL SUBJECT CONSTRUCTION:
S[SPR ⟨ ⟩] → VP[SPR ⟨NP[noncanonical]⟩]
(72)
The subject of the VP in (72a) is the second person pro, while that in (72b)
is a PRO coindexed with yourself. Both are licensed by the HEAD - ONLY
CONSTRUCTION .
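The licensing condition in (71) amounts to a simple check on how the VP's subject is realized; sketched in Python (the encoding is ours):

```python
NONCANONICAL = {"pro", "PRO"}  # covert realizations of a subject

def projects_to_s(vp_subject):
    """(71) as a check (sketch): a VP projects directly to S only when its
    unexpressed subject is noncanonical, i.e. pro or PRO."""
    return vp_subject in NONCANONICAL

assert projects_to_s("pro")     # (70d) imperative: 'Protect him!'
assert projects_to_s("PRO")     # (70c) infinitival: 'To protect him ...'
assert not projects_to_s("NP")  # an overt subject must combine with the VP
```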
Now, consider the following structure licensed by the current grammar rules:
(73)
Consider the structure from the bottom up. The verb support selects two
arguments, the second of which can be realized as a GAP:
(74) [FORM support, SYN|VAL [SPR ⟨1 NP[PRO]⟩, COMPS ⟨ ⟩, GAP ⟨2 NP⟩],
     ARG-ST ⟨1 NP, 2 NP⟩]
The verb will then form a VP with the infinitival marker to. Since this VP’s
subject is PRO, the VP can be projected into an S with the accusative NP GAP
value in accordance with the HEAD - ONLY CONSTRUCTION. The S then forms a
well-formed head-filler construct when combined with the filler which politician.
The QUE value of the phrase allows the whole infinitival clause to function as an
indirect question, which can then be combined with the verb knows.
A constraint we can observe in infinitival wh-questions is that the subject of
the infinitival head cannot be overtly realized:
(75) a. *Fred knows [which politician for Karen/her to vote for].
b. *Karen asked [where for Jerry/him to put the chairs].
The data indicate that in infinitival indirect questions, the subject of the infinitival
VP cannot appear. The tree diagram in (76) shows why it is not legitimate:8
(76)
8 The grammar needs to block examples like the ones below in which the infinitival VP combines
with its subject:
As in (73), the HEAD - FILLER CONSTRUCTION allows an S (directly projected from an infinitival
VP) to combine with its filler. As a way of blocking such examples, we may assume an indepen-
dent constraint that the infinitival subject can appear only together with the complementizer for
because the subject needs to get the accusative case from it (cf. Chomsky, 1982).
The structure shows that the HEAD - FILLER CONSTRUCTION licenses the combi-
nation of an S with its filler but not a CP with its filler.
One way to deal with such examples is to take the adverbial wh-phrase to modify
an inverted question:
(78)
These sentences are ambiguous with respect to the function of the wh-adjunct
(when, where, how), and in particular which of the two verbs (main or embedded)
it modifies. The question in (79a) could be an inquiry into either the time of his
statement or the time of his firing. Question (79b) could be a question about
the time of the telling or the time of his meeting Mary. Question (79c) can be
construed as questioning either the means by which he guessed or the means by
which he performed the computer repair.
These data indicate that in addition to a structure like (78), in which the adver-
bial wh-word modifies the whole sentence, we need a structure in which the
fronted adverbial wh-phrase is linked to the embedded clause. One way to do
this is to extend the embedded verb's ARG-ST list to include the adverbial as
an argument.
This extended ARG-ST then allows the verb's adverbial argument to be realized
as a GAP value according to the ARC:
(81) [FORM fix, SYN|VAL [SPR ⟨1 NP⟩, COMPS ⟨2 NP⟩, GAP ⟨3 AdvP⟩],
     ARG-ST ⟨1 NP, 2 NP, 3 AdvP⟩]
This lexical realization will then project a structure like the following for (79c):
(82)
This structure shows that the wh-word how originates from the subordinate
clause VP. More specifically, the GAP value starts from the verb fixed, whose
arguments in this case include an adverbial element. Note that this does not mean
that we can extend the ARG-ST list randomly. For example, the argument extension
mechanism cannot be applied to examples like the following:
10.5 Conclusion
Exercises
1. Draw tree structures for the following sentences and indicate which
grammar rules are used to construct each phrase:
(i) a. What causes students to select particular majors?
b. Who will John ask for information about summer courses?
c. Which textbook did the teacher use in the class last summer?
d. Whose car is blocking the entrance to the store?
11.1 Introduction
There are several different properties that we can use to classify English rel-
ative clauses. First, we can classify them by the type of missing element in the
relative clause:
(2) a. the student who won the prize
b. the student who everyone likes
c. the baker from whom I bought these bagels
d. the person whom John gave the book to
e. the day when I met her
f. the place where we can relax
Wh-relatives like (3a) have a wh-type relative pronoun, and (3b) has the rela-
tive pronoun that, while (3c) has no relative pronoun at all. We consider that in
relative clauses to be a form of relative pronoun (see Section 11.4 below).
Third, relative clauses can also be classified according to the finiteness of
the clause. Unlike the finite relative clauses in (1)–(3), the following examples
include infinitival relatives:
This chapter first reviews the basic properties of the various types of English
relative clauses and then provides analyses of their syntactic structures.
One thing we can observe here is that, like wh-questions, relative clauses have
bipartite structures: a relative pronoun (including a wh-element) and a sentence
with a missing element (S/XP):
(7) a. wh-element S/XP
b. that S/XP
c. [ ] S/XP
Assuming that relative wh-words carry a REL feature whose index value is iden-
tical with the nominal that the relative clause modifies, we can represent the
structure of (6a) in the following way:
(8)
As shown in the structure, the object of the verb met is realized as a GAP
value, which, in accordance with the NIP (Nonlocal (Feature) Inheritance Prin-
ciple), is metaphorically passed up until it is discharged by the filler, who. The
HEAD - FILLER CONSTRUCTION licenses the combination of the filler who and
the gapped sentence Fred met. This filler who also has a nonlocal REL feature
whose value is an index referring to senators. The REL value originating from
the relative pronoun also percolates up to the mother S in accordance with the
NIP. Note that the relative pronoun’s REL value is identical to the index value of
the antecedent nominal. The need to identify these two index values is shown by
the agreement facts in (9):
(9) a. the man [who you think knows/*know the answer]
b. the men [who you think know/*knows the answer]
Here the lowest verb knows/know agrees with the number features of the head
noun man or men, respectively. The element that ensures this agreement is the
relative pronoun who, whose index value would be singular in (9a) while plural
in (9b).
The next question, then, is what mechanism allows the relative clause
to function as a modifier of a noun or noun phrase. In Chapter 6, we saw that
phrases like AP, nonfinite VP, and PP can modify an NP (these examples can be
taken as ‘reduced’ relatives):
(10) a. the people [happy with the proposal]
b. the person [standing on my foot]
c. the bills [passed by the House yesterday]
d. the paper [to finish by tomorrow]
e. the student [in the classroom]
All of these postnominal bracketed elements bear the feature MOD. The feature
originates from the head happy, standing, passed, to and in, respectively. This is
illustrated by the following:
(11)
The feature MOD is a head feature, which enables the mother VP to carry the
same MOD value. The combination of this VP modifier with the head N is
licensed by the HEAD - MODIFIER CONSTRUCTION, repeated here:
(12) HEAD - MODIFIER CONSTRUCTION:
XP → [MOD 1 ], 1 H
English allows the modifier phrase bearing the feature MOD to either precede or
follow the head, and relative clauses are positioned after the head they modify.
Note that not all phrases can function as postmodifiers. In particular, a base
VP or finite VP cannot be found in this environment:
(13) a. *the person [stand on my foot]
b. *the person [stood on my foot]
c. *the person [stands on my foot]
This means that a finite VP or a finite clause with no missing element can-
not function as a modifier. Only relative clauses with one missing element
may serve as postnominal modifiers, indicating that they also have the MOD
feature.
Unlike reduced relative clauses, where the MOD feature comes from the head
verb, adjective, or preposition, typical relative clauses (e.g., the student [who
everyone likes]) have no expression other than the relative pronoun that can
trigger the emergence of the MOD feature. It is thus reasonable to assume that
the presence of a relative pronoun bearing the [REL i] feature also introduces a
relative MOD value, according to the following constructional rule:1
(15) HEAD-REL MOD CONSTRUCTION:
N → 1 N_i, S[REL i, MOD 1]
(16)
As shown here in (16), the verb met realizes its object as a GAP value,
which metaphorically percolates up to the S, where it is discharged once this
S combines with the relative pronoun whom. There is no lexical expression
(e.g., a nonfinite verb) that evokes the MOD feature; the constructional con-
straint in (15) evokes a MOD value linked to the relative pronoun whom.
Since the relative clause is a type of HEAD - FILLER CONSTRUCTION, there
must be a total syntactic identity between the gap and a filler with a REL
value:
(17) a. Jack is the person [[NP whom] [Jenny fell in love with [NP ]]].
b. Jack is the person [[PP with whom] [Jenny fell in love [PP ]]].
(18) a. *Jack is the person [[NP whom] [Jenny fell in love [PP ]]] .
b. *Jack is the person [[PP with whom] [Jenny fell in love with [NP ]]].
In (17a) and (17b), the gap and the filler are the same category, whereas those in
(18) are not. The putative gap in (18a) is a PP and that in (18b) an NP, but the
fillers are the nonmatching categories NP and PP, respectively.
(19)
In (19), the GAP value starts from the verb of the embedded clause and passes
up to the top S in accordance with the NIP. The value is discharged by the filler
wh-phrase including the relative pronoun which. This nonlocal REL feature, in
accordance with the NIP, is passed up to the top S to ensure that the clause
functions as a modifier.
Just like the QUE feature, the nonlocal REL feature can also come from a
deeper position within the nonhead daughter of the relative clause:
2 Once again, the arrows here do not signify any feature copying; they simply represent identity of
the two feature structures.
(21)
The REL feature is embedded on the specifier of the inner NP,
but the NIP guarantees that this value is passed up to the top S so that the
clause can function as a modifier of the head noun friend.
(24)
As shown in the structure, the subject of met is realized as the GAP value, which
metaphorically passes up to the mother node. As noted in the previous chapter,
this mother node is an ‘S’ with an empty COMPS and SPR value. Although it
appears to be a VP, the constituent is an S with a gap in it, and this S combines
with the filler who, in accordance with the HEAD - FILLER CONSTRUCTION. The
resulting S is a complete clause (who met Fred) carrying the REL and MOD spec-
ifications, which allows the resulting clause to modify senators in accordance
with the HEAD - REL MOD CONSTRUCTION.
Notice that this analysis does not license bare subject relatives like those in
(23). The VP with the missing subject met John cannot carry the MOD fea-
ture at all even if it can function as an ‘S’ that can combine either with a
wh-question phrase or a wh-relative phrase. However, the analysis also predicts
that the subject of an embedded clause can be gapped in sentences like the
following:
As we saw in Chapter 10, verbs like think and believe combine with a CP, an S,
or even an ‘S’ with the subject gapped:
(26)
The VP was interesting here forms an ‘S’ with the subject gapped. This ‘S’ com-
bines with the verb thought, forming a VP with a nonempty GAP specification.
This GAP value percolates up to the lower S and is then discharged by the filler
relative pronoun which. The relative pronoun, in accordance with the HEAD - REL
MOD CONSTRUCTION , introduces the MOD value into the relative clause, which
allows it to modify the antecedent statement.
The key difference here is that the clauses in (28) following the relative pronoun
that contain a syntactic gap, while those in (27) following the complemen-
tizer that are complete clauses with no missing element involved. These two
environments can be represented as follows:
(29)
The relative pronoun that differs from the wh-relative pronoun in sev-
eral respects. For example, the relative pronoun that disallows genitive and
pied-piping (see Sag, 1997):
(30) a. the student whose turn it was
b. *the student that’s turn it was
One way to account for these differences is to assume that the relative pronoun
that has no accusative case and therefore cannot be the complement of a preposi-
tion that assigns accusative. The relative pronoun who, unlike relative pronouns
like whose, whom, and which, shares this property:
(33) a. *The people [in who we placed our trust] . . .
b. *The person [with who we were talking] . . .
(37)
As shown here in the structure, the VP to sit has a PP GAP value which functions
as the complement of sit. The infinitival VP, missing its PP complement, realizes
its subject as a PRO and thus can be projected into an S in accordance with
the HEAD - ONLY CONSTRUCTION (see Chapter 10). This S forms a head-filler
phrase with the PP on which. The resulting S also inherits the REL value from the
relative pronoun which and thus bears the MOD feature. Once again, we see that
every projection observes the grammar rules as well as other general principles,
including the HFP, the VALP, and the NIP.
The examples indicate that wh-infinitival relatives cannot have an overt subject
(such as for Jerry) realized. We saw before that the same is true for infinitival
wh-questions; the data are repeated here:
This tells us that both infinitival wh-relatives and infinitival wh-questions are
subject to the same constraint. The ungrammaticality of (38a) can be understood
if we look at its structure:
(40)
The HEAD - FILLER CONSTRUCTION (see Chapter 10) does not allow the combi-
nation of a CP with a PP filler, and hence the S here is ill-formed.3
How, then, can we deal with infinitival bare relative clauses like those in (41)?
Notice here that, unlike infinitival wh-relative clauses, these lack a relative pro-
noun. Given that the infinitival VP can be projected into an S, we can assign the
following structure to (41b) when the subject is not overt:
3 One peculiar constraint on infinitival wh-relatives (unlike infinitival wh-indirect questions) is that
they do not allow an NP gap, as in *the bench which to sit on. To disallow such an example, we
need to develop a more elaborate analysis; see Sag (1997) for a direction.
(42)
The VP to finish has a GAP value for its object, and its subject is PRO. Accord-
ing to the HEAD - ONLY CONSTRUCTION, this VP then will be projected into an
incomplete ‘S.’ There are two analytic issues now: how to introduce the MOD
feature and how to discharge the GAP value when there is no filler. As we noted
above, English also allows finite bare relatives with the gapped element being
accusative:
The construction differs from the HEAD - REL MOD CONSTRUCTION only with
respect to its GAP value: The GAP value is discharged constructionally. That
is, the construction allows a finite or infinitival clause (S, but not an ‘S’ or
a VP) bearing an accusative NP GAP value to function as a modifier of the
preceding noun. One specification in the construction is that the GAP value is
discharged even if there is no filler: The index of the head noun is identified
with that of the discharged GAP value. The construction thus licenses constructs
like (43a):
11.6 Restrictive vs. Nonrestrictive Relative Clauses 279
(45)
Note that the GAP value is a specification of the verb met but is discharged even
without combining with a filler. This is possible because of the constructional
constraint in (44).4
4 The subject-gap bare relative is possible when the relative clause is embedded as the complement
of verbs like thought and believed, but not when it directly modifies the nominal head, as in (23).
To license such examples, we must modify the head-rel bare mod construction.
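The constructional discharge just described can be sketched procedurally. The dictionaries below are a deliberately simplified, hypothetical encoding of the head-rel bare mod construction, not the book's AVM notation:

```python
# Sketch of constructional GAP discharge in bare relatives (simplified):
# a finite S carrying a single accusative NP GAP may modify a noun directly;
# the GAP is discharged without a filler, its index unified with the noun's.
def head_rel_bare_mod(noun, clause):
    gaps = clause["gap"]
    if clause["cat"] != "S" or len(gaps) != 1:
        return None
    gap = gaps[0]
    if gap["cat"] != "NP" or gap["case"] != "acc":
        return None                      # only accusative NP gaps qualify
    gap["index"] = noun["index"]         # identify indices constructionally
    return {"cat": "Nom", "index": noun["index"], "gap": []}

noun = {"form": "man", "index": "i"}
clause = {"cat": "S",                    # 'the man [Kim met _]'
          "gap": [{"cat": "NP", "case": "acc", "index": None}]}
assert head_rel_bare_mod(noun, clause)["gap"] == []
```

The key point the sketch makes is that the mother's GAP list is emptied even though no filler daughter is present.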
The second example suggests that John has only two sisters, while the first means
that two of his sisters are lawyers but leaves open the possibility that he has
additional sisters. The denotation of the restrictive relative clause (RRC) two
sisters who became lawyers is thus the intersection between the set of two sisters
and the set of lawyers. There can be more than two sisters, but there are only two
who became lawyers. By contrast, the nonrestrictive clause (NRC) two sisters,
who became lawyers, must be understood to mean that there are two sisters and
they both became lawyers: There is no intersection of meaning here.
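The intersective semantics just described can be made concrete with a small set-theoretic sketch; the names and membership facts below are hypothetical, chosen only to make the contrast visible:

```python
# Restrictive vs. nonrestrictive semantics, sketched with sets.
# Hypothetical model: John's sisters and the set of lawyers.
sisters = {"Ann", "Beth", "Cara", "Dana"}
lawyers = {"Ann", "Beth", "Eve"}

# RRC 'two sisters who became lawyers': intersect, then count.
rrc_denotation = sisters & lawyers
assert len(rrc_denotation) == 2      # two of the sisters are lawyers
assert len(sisters - lawyers) > 0    # further non-lawyer sisters are allowed

# NRC 'two sisters, who became lawyers': first fix the set of sisters
# at two, then predicate 'lawyer' of every member; no intersection.
nrc_sisters = {"Ann", "Beth"}
assert len(nrc_sisters) == 2
assert nrc_sisters <= lawyers        # all of them became lawyers
```

Under the RRC reading the model may contain additional sisters; under the NRC reading the two sisters exhaust the set, and the relative clause holds of both.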
This meaning difference has given rise to the idea that the RRC modifies the
meaning of N′ – a noun phrase without a determiner – whereas the NRC modifies
a fully determined NP (McCawley, 1988):
(48) Restrictive Relative Clause (RRC):
These representational differences are intended to reflect the fact that the RRC
is interpreted as restricting the set of women under consideration to a particular
subset (those whom we respect), while the NRC simply adds information about
the antecedent ‘Frieda.’
Note that in terms of the syntactic combination, (48) is licensed by the HEAD -
MODIFIER CONSTRUCTION but (49) is not, since the NP and the appositive
relative clause are not in a head-modifier relation. The NRC in (49) is quite similar
to the nominal apposition constructions given in the following (van Eynde and
Kim, 2016):
(50) a. He was one of the few that told [the president], [Johnson], to get out of
Vietnam.
b. [Dr. William], [a consultant from Seoul], is to head the new unit.
c. That was his first trip to [the capital of Korea], Seoul.
In these so-called appositional constructions, there are two NPs, an anchor (the
president) and an appositive (Johnson), linked to the same individual. The
appositive is optional, but it adds identifying information about the
referent of the anchor NP. The added information consists of a proposition
about the anchor, as illustrated by the following:
(52) a. [Isabelle], [who the police looked for], went into exile in 1975.
b. [Politicians], [who make extravagant promises], cannot be trusted.
c. For camp, the children need [sturdy shoes], [which are expensive].
This implies that English grammar contains the following construction for
nominal apposition as well as NRC constructions:5
5 The NRC allows the anchor to be a non-NP. To cover such a case, we need to distinguish NRCs
from nominal appositions.
(54)
The anchor Isabelle refers to an individual, while the appositive clause who the
police looked for refers to a situational proposition. The syntactic combination of
the two licenses an appositive construct, while each contributes to the meaning
of the phrasal mother sign (a complex NP).
Accordingly, it seems that there are two different types of relative clauses with
different syntactic structures. The RRC is licensed by a head-modifier construc-
tion, while the NRC is licensed by an appositive construction. This structural
and semantic difference can provide us with a way of explaining why the RRC
cannot modify a pronoun or proper noun:6
(55) a. I met the man who grows peaches.
b. I met the lady from France who grows peaches.
Given that the meanings of ‘John’ and ‘her’ refer to unique individuals, we
expect that no further modification or restriction is possible. Nonrestrictive rel-
ative clauses like (57) can modify proper nouns or pronouns, simply because
they provide additional or background information about a mutually identifiable
individual:
(57) a. In the classroom, the teacher praised Lee, whom I also respect.
b. Reagan, whom the Republicans nominated in 1980, lived most of his life in
California.
6 In certain expressions of English, a who relative clause can modify a nominative animate pronoun
like she, he, or we:
The relative clause whom I also respect modifies the proper noun Lee without
restricting its designation, and it has the same interpretation as a conjoined clause
like The teacher praised Lee, and I also respect her.
There is another semantic implication of the restrictive vs. nonrestrictive dis-
tinction: Only a restrictive clause can modify a quantified NP like every N or no
N:
(58) a. Every student who attended the party had a good time.
b. *Every student, who attended the party, had a good time.
(59) a. No student who scored 80 or more in the exam was ever failed.
b. *No student, who scored 80 or more in the exam, was ever failed.
(60) a. The contestant who won the first prize, who is the judge’s brother-in-law,
sang dreadfully.
b. *The contestant, who is the judge’s brother-in-law, who won the first prize
sang dreadfully.
Compare the following partial structures for the two NPs at issue:
(61)
(63) a. *[Who] did he believe [the claim that he had never met ]?
b. *[Which celebrity] did he mention [the fact that he had run into ]?
What is the source of these contrasts? Let us compare the partial structures of
(62a) with (63a):
8 One additional difference between restrictive and nonrestrictive clauses is that that is used mainly
in restrictive clauses:
a. The knife [which/that] he threw into the sea had a gold handle.
b. The knife, [which/??that] he threw into the sea, had a gold handle.
9 The structural account, in which nonrestrictive clauses attach to NP and restrictive clauses to N′,
fails to account for certain facts. For example, a restrictive clause appears to attach to NP when
the relative clause modifies an indefinite pronoun, as in everyone who smiled must have been
happy, or when the clauses modify two conjoined full NPs, as in the man and the woman who
are neighbors are getting to know each other. To account for such examples, we must develop
a more elaborated syntactic and semantic analysis. See Fabb (1990), Sag (1997), Arnold (2004),
Chaves (2007), and references therein for further discussion.
11.7 Island Constraints on the Filler-Gap Dependencies 285
(64)
Various attempts have been made to account for such island constraints.
Among these, we sketch an analysis within the present system that relies on
licensing constraints on subtree structures. As we have seen in previous chapters,
the present analysis provides a straightforward account of the CSC:
(72)
Although two VPs are coordinated, they are not identical with respect to
the GAP values. This violates constraints imposed by the COORDINATION
CONSTRUCTION, which allows only identical categories to be coordinated.10
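The identity requirement that the COORDINATION CONSTRUCTION imposes on GAP values can be sketched as a licensing check. The dictionary encoding below is a hypothetical simplification, not the book's formalism:

```python
# The COORDINATION CONSTRUCTION licenses a mother only when the
# conjuncts match both in category and in GAP value (the CSC effect).
def coordinate(*conjuncts):
    cats = {c["cat"] for c in conjuncts}
    gaps = {tuple(c["gap"]) for c in conjuncts}
    if len(cats) != 1 or len(gaps) != 1:
        return None  # ill-formed: conjuncts differ in category or GAP
    return {"cat": next(iter(cats)), "gap": list(next(iter(gaps)))}

# across-the-board extraction: both VP conjuncts carry an NP gap -- OK
vp1 = {"cat": "VP", "gap": ["NP"]}
vp2 = {"cat": "VP", "gap": ["NP"]}
assert coordinate(vp1, vp2) == {"cat": "VP", "gap": ["NP"]}

# extraction out of only one conjunct, as in (72): GAP values differ
vp3 = {"cat": "VP", "gap": []}
assert coordinate(vp1, vp3) is None
```

The check correctly lets across-the-board dependencies through while blocking extraction from a single conjunct.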
10 There are cases that seem to violate the CSC when coordinate conjuncts express specific types
of event relations, as noted by Ross (1967), Goldsmith (1985), and others:
The existence of some island constraints has been questioned, since violations
of island constraints can sometimes produce acceptable sentences. For example,
the following examples are acceptable, although both violate a claimed island
constraint:
(73) a. What did he get the impression that the problem really was ? (CNPC)
b. This is the paper that we really need to find the linguist who
understands . (CNPC)
These examples have identical syntactic structures but differ in acceptability. The
data indicate that it may not be the syntactic structure but the properties of the
head of the complex NP that influence the acceptability of such sentences. This
implies that processing factors closely interact with the grammar of filler-gap
constructions (see Hofmeister et al., 2006).
11.8 Conclusion
In (a), the conjunction can be paraphrased as and nonetheless and in (b) the operative relation of the
conjuncts is narration.
interactions among these can license each subpattern of the English relative
clause constructions.
In addition, the chapter discussed two important phenomena: differences
between restrictive and nonrestrictive relative clauses, and island constraints on
filler-gap dependencies. We have seen that restrictive and nonrestrictive rela-
tive clauses behave differently with respect to both syntax and semantics. Island
constraints refer to a configuration that blocks a syntactic dependency (e.g.,
movement or linkage) between constituents in the particular structure. Island
constraints have been a cornerstone of syntactic research since Ross (1967).
We discussed how these constraints can be interpreted within the present sys-
tem, although many, if not all, island constraints are potentially reducible to
nonsyntactic (interpretive, processing, or discourse) principles.
In Chapter 12, we will explore constructions (e.g., tough, it-extraposition,
and cleft) that illustrate slightly different dependencies between the gap and
its putative filler. Once again, we will see that the licensing of these construc-
tions requires mechanisms not appreciably different from those we developed
for wh-interrogatives and relative clauses.
Exercises
2. Draw tree structures for the following examples and discuss which
grammar rules license each phrase involving a wh-expression or that:
(i) a. This is the book which I need to read.
b. This is the very book that we need to talk about.
c. The person whom they intended to speak with agreed to
reimburse us.
d. The motor that Martha thinks that Joe replaced costs thirty
dollars.
(ii) a. The official to whom Smith loaned the money has been
indicted.
b. The man on whose lap the puppet is sitting is a ventriloquist.
12.1 Introduction
One thing we can observe here is that the fillers whom and on whom are not in a
core clause position (subject or object) but are in an adjoined filler position.
Consider examples of the tough-movement type:
The gap in (3a) would correspond to an ‘accusative’ object NP (him), whereas the
apparent filler is a ‘nominative’ subject (he). The filler and the gap here are thus
not identical syntactically, though they are understood as referring to the same
individual. Owing to the lack of syntactic identity, the dependency between the
filler and the gap is considered ‘weaker’ than that in wh-questions or wh-relatives
(Pollard and Sag, 1994).
The extraposition and cleft constructions in (1b)–(1d ) are also different from
wh-questions as well as tough-construction examples. In clefts, we have a gap
1 The construction is named after adjectives that appear in it, such as tough, easy, and difficult.
2 The more accurate term for Extraposition is it-Subject and -Object Extraposition.
12.2 ‘Tough’ Constructions and Topichood 291
Superficially quite similar predicates, such as eager and ready, do not allow all
three options:
(5) a. *To please John is eager/ready.
b. *It is eager/ready to please John.
c. John is eager/ready to please.
Even though both (4c) and (5c) are grammatical and look structurally identical,
they reveal themselves to be quite different once we look at their properties
in detail. Consider the following contrast:
(6) a. Kim is easy to please.
b. Kim is eager to please.
One obvious difference between (6a) and (6b) lies in the grammatical roles of
Kim: In (6a), Kim is the object of please, whereas Kim in (6b) is the subject
of please. More specifically, the verb please in (6a) is used as a transitive verb
whose object is identified with the subject Kim. Meanwhile, the verb please in
(6b) is used intransitively, not requiring any object. This difference is shown
clearly by the following examples:
(7) a. *Kim is easy [to please Tom].
b. Kim is eager [to please Tom].
The VP complement of the adjective easy thus cannot have a surface object,
whereas eager imposes no such restriction. This means that the VP complement of
easy has to be incomplete in the sense that it has a missing object, and this is so
with other easy-type adjectives as well:
(8) a. The signature is hard [to see ].
b. The child is impossible [to teach ].
c. The problem is easy [to solve ].
292 TOUGH, EXTRAPOSITION, AND CLEFT CONSTRUCTIONS
In all of these examples, there must be a missing element (GAP) in the VP com-
plement. Meanwhile, eager places no such restriction on its VP complement,
which should be internally complete:
The problematic aspect is the status of the subject He: How can a direct move-
ment approach move him into the subject position and then change the form into
he?3 As a solution, Chomsky (1986) proposes an empty operator (Op) movement
operation, represented here:
3 In technical terms, this will violate the ‘Case Filter’ of Government-Binding Theory, as he
receives two cases: accusative from the original object position and nominative from the subject
position.
(16)
The subject he is base-generated in the matrix subject position, while the null
operator Opi moves to the intermediate position from its original object position,
leaving the trace (ti ). At an interpretive level, this operator is coindexed with
the subject, indirectly linking the gap with the filler even though the two have
different case markings.
(19) tough-lxm ⇒
       [SYN | HEAD | POS adj
        ARG-ST ⟨NPi, VP[VFORM inf, GAP ⟨1 NPi[acc]⟩]⟩]
This lexical construction specifies that the infinitival complement (VP or CP)
of adjectives like easy contains a GAP value (NPi ) that is coindexed with the
subject. This coindexation will ensure the semantic linkage between the matrix
subject and the gapped NP. Notice that, unlike canonical filler-gap constructions,
in which the GAP value is discharged when it meets the filler (by the HEAD -
FILLER CONSTRUCTION ), the feature GAP licensed by the tough-adjective needs
to be discharged constructionally:
(20) TOUGH CONSTRUCTION:
       AP[GAP A] → A[tough-adj, SPR ⟨NPi⟩], XP[GAP ⟨NPi[acc]⟩ ⊕ A]
As shown in the tree, the transitive verb please introduces its accusative object
as the GAP value, hence the mother infinitival VP is incomplete. The adjective
easy combines with this VP, constructionally discharging the GAP value in accor-
dance with (20). Note that the subject of the adjective easy is coindexed with
the GAP value in accordance with its lexical specifications. The copula verb, as
we have seen in Chapter 8, is a raising verb whose AP complement’s subject
is identical to its subject NP. This is why the subject NP Kim is in fact coin-
dexed with the AP’s subject and with the GAP value. As such, the interplay of
the lexical properties of easy and is with other principles like the NIP ensures the
semantic dependency between the subject and the GAP value across different local
domains.
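The constructional discharge stated in (20) can be sketched procedurally. The feature dictionaries below are a hypothetical simplification of the AVMs, chosen only to show the mechanism:

```python
# Sketch of constructional GAP discharge for tough-adjectives: 'easy'
# binds the first (accusative NP) member of its complement's GAP list
# and coindexes it with the matrix subject; remaining gaps pass up.
def combine_tough_adj(adj, vp, subj_index):
    if not vp["gap"]:
        return None                    # easy needs an incomplete VP:
                                       # *'Kim is easy to please Tom'
    gap0 = vp["gap"][0]
    if gap0["cat"] != "NP" or gap0["case"] != "acc":
        return None
    gap0["index"] = subj_index         # coindex gap with matrix subject
    return {"cat": "AP", "gap": vp["gap"][1:]}   # first GAP discharged

# 'Kim is easy [to please _]': the VP carries one accusative NP gap
vp = {"cat": "VP", "gap": [{"cat": "NP", "case": "acc", "index": None}]}
ap = combine_tough_adj("easy", vp, subj_index="i")
assert ap == {"cat": "AP", "gap": []}

# a saturated VP complement is rejected by a tough-adjective
assert combine_tough_adj("easy", {"cat": "VP", "gap": []}, "i") is None
```

An eager-type adjective, by contrast, would simply require an empty GAP list on its complement and impose no coindexation with a gap.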
Meanwhile, the lexical information for eager-type adjectives is very simple:
(22) eager-lxm ⇒
       [SYN | HEAD | POS adj
        ARG-ST ⟨NPi, VP[VFORM inf, SPR ⟨NPi⟩]⟩]
(23)
The lexical specification of eager in (22) ensures that the AP’s subject is coin-
dexed with its VP complement’s subject. This implies that the infinitival VP
complement is controlled by the subject Kim. However, it places no restriction
on the GAP value of its VP complement, and so it can legitimately com-
bine with the fully saturated VP complement. When its VP complement has
Notice that the present analysis can straightforwardly account for examples in
which the VP complement includes more than one GAP element. Compare the
following pair of examples:
(26)
(27)
In the structure above, the VP complement of easy has two GAP values: One rep-
resents the missing object of play ( 4 NP) and the other the missing object ( 2 ) of
on. The first GAP value coindexed with the subject this sonata is construction-
ally bound by easy in accordance with (20). The remaining GAP value ( 2 NP) is
passed up to the second higher S, where it is discharged by its filler, which piano,
through the HEAD - FILLER CONSTRUCTION.
12.3 Extraposition
This kind of alternation is quite systematic: Given sentences like (31a), English
speakers have an intuition that (31b) is possible:
(31) a. That the Dalai Lama claims Tibet’s independence discomfits the Chinese
government.
b. It discomfits the Chinese government that the Dalai Lama claims Tibet’s
independence.
The extraposition rule moves the finite clause you came early to a sentence-final
position. This movement is accompanied by a rule inserting the complementizer
that, thus generating (34b). To generate nonextraposed sentences like (34a),
the analysis posits deletion of it, followed by addition of the complementizer
that.
A slightly different analysis assumes the opposite direction of movement
(Emonds, 1970; Chomsky, 1981a; Groat, 1995). That is, instead of extraposing
the clause from the subject, the clause is assumed to already be in the extraposed
position as in (36a):
(36) a. [[ ] [VP surprised [me] [CP that you came early]]].
b. [[It] [VP surprised me that you came early]].
The insertion of the expletive it in the subject position in (36a) would then
account for (36b). When the CP clause is moved to the subject position, the
result is the nonextraposed sentence (34a).
Most current movement approaches follow this second line of thought.
Although such derivational analyses can capture certain aspects of English
subject extraposition, they are not specified sufficiently to account for lexi-
cal idiosyncrasies and instantiation of the extraposed clause in a position not
immediately following the main predicator (see Kim and Sag (2005) for further
discussion).
include both comp and verb (see Chapter 5.4.2). In particular, we can
adopt the following lexical rule to capture the systematic relationship in
extraposition:
As shown here, the verb annoys can take either a CP or an NP as its sub-
ject. When the verb annoys selects a verbal argument (CP), it can undergo the
derivation of the EXTRAPOSITION CONSTRUCTION:
Because the verb annoys selects a nominal (CP or NP) argument, the verb can
undergo the EXTRAPOSITION CONSTRUCTION. This is possible because when
the nominal argument is realized as a CP, it is a subtype of verbal whose sub-
types include verb and comp. As shown here, the output extraposed verb annoys
now selects the expletive it as its subject, while its original CP serves as the
value of the EXTRA. The ARC ensures that the two arguments in the output
ARG - ST will be realized as the SPR and COMPS values, respectively, with the
addition of the EXTRA value. This derived word licenses a structure like the
following:
(43)
As given in the tree, the two arguments of the verb annoys are realized as SPR
and COMPS respectively. When the verb combines with the NP me, it forms a
VP with a nonempty EXTRA value. This VP then combines with the extraposed
clause CP in accordance with the HEAD - EXTRA CONSTRUCTION:
(44) HEAD - EXTRA CONSTRUCTION:
       [EXTRA ⟨ ⟩] → H[EXTRA ⟨1⟩], 1 XP
As shown here, the rule also discharges the feature EXTRA by combination of
the head VP with the extraposed CP. This grammar rule reflects the fact that the
grammar of English contains a phrase pattern in which a head element combines
with an extraposed element:
(45)
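The derivation just described, with the EXTRAPOSITION CONSTRUCTION feeding the HEAD - EXTRA CONSTRUCTION, can be sketched procedurally. The dictionaries below are a hypothetical simplification of the AVMs:

```python
# Sketch of the subject-extraposition lexical rule: a verb whose first
# argument is a CP is mapped to one that takes expletive 'it' as its
# subject, with the CP listed under EXTRA (simplified encoding).
def extrapose(verb):
    first, *rest = verb["arg_st"]
    if first["cat"] != "CP":
        return None                    # applies only to clausal subjects
    return {"form": verb["form"],
            "arg_st": [{"cat": "NP", "form": "it"}, *rest],
            "extra": [first]}

annoys = {"form": "annoys",
          "arg_st": [{"cat": "CP"}, {"cat": "NP"}]}
out = extrapose(annoys)

# 'It annoys me that ...': 'it' is the SPR, 'me' the complement,
# and the CP is discharged later by the HEAD-EXTRA CONSTRUCTION.
assert out["arg_st"][0]["form"] == "it"
assert out["extra"] == [{"cat": "CP"}]
```

The HEAD - EXTRA CONSTRUCTION then empties the EXTRA list when the VP combines with the extraposed CP, exactly parallel to how GAP is discharged by a filler.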
The lexical entry for find selects three arguments. The EXTRAPOSITION CON -
STRUCTION effectively augments the array of complements licensed by the input
verb by adding to the EXTRA list a CP that expresses the ‘content’ argument of
the verb (the state of affairs being assessed):
(48) [FORM find
      ARG-ST ⟨1 NP, 2 [nominal], 3 AP⟩]
     →
     [FORM find
      ARG-ST ⟨1 NP, NP[it], 3 AP⟩
      EXTRA ⟨2 [comp]⟩]
Since the type comp is a subtype of both nominal and verbal, the verb can
undergo the EXTRAPOSITION CONSTRUCTION. The output introduces a new
element it together with the EXTRA value. The three arguments in the derived
word will then be realized as its SPR and COMPS values, projecting a structure
like the following:
(49)
One major difference between subject and object extraposition is that the latter
is obligatory:
(50) a. *I made [to settle the matter] my objective.
b. I made it [my objective] to settle the matter.
c. I made [the settlement of the matter] my objective.
This contrast is due to a general constraint that prevents any element within the
VP from occurring after a CP:
(52) a. I believe strongly [that the Earth is round].
b. *I believe [that the Earth is round] strongly.
In the present context, this means that there is no predicative expression (verb or
adjective) whose COMPS list contains an element that follows a CP complement
(see Kim and Sag, 2005).
These three types of clefts all denote the same proposition, captured by the
following declarative sentence:
(54) We are using their teaching material.
This raises the question: why would a speaker use a cleft structure instead
of a simple sentence like (54)? It is commonly accepted that clefts have
shared information-structure properties, given in (55) for the example in
question:
(55) a. Presupposition (Background): We are using X.
b. Highlighted (Foreground or focus): their teaching material
c. Assertion: X is their teaching material.
Structurally, all three kinds of clefts consist of a matrix clause headed by a copula and
a relative-like cleft clause whose head is coindexed with the predicative argument
of the copula. The three structures differ only in the location of the highlighted
(focused) expression.
Also notice that in addition to that, wh-words like who and which can also
introduce a cleft clause:
(58) a. It’s the second Monday [that] we get back from Easter holiday.
b. It was the girl [who] kicked the ball.
c. It’s mainly his attitude [which] convinced the teacher.
In contrast to the it-cleft, the wh-cleft allows an AP, a base VP, or a clause (CP,
simple S, or wh-clause) to serve as the highlighted XP:
(60) a. What you do is [VP wear it like that].
b. What happened is [S they caught her without a licence].
c. What the gentleman seemed to be asking is [S how policy would have
differed].
12.4 Cleft Constructions 305
For example, in Gundel (1977), the wh-cleft clause in (64a) is first right-dislocated,
as in (64b), which can then generate the it-cleft (64c) once what is replaced
by that. Analyses of this nature take the cleft clause to be extraposed to the end
of the sentence.
By contrast, the expletive analysis (Chomsky, 1977; Kiss, 1998; Lam-
brecht, 2001) takes the pronoun it to be an expletive expression generated in
place, while the cleft clause is semantically linked to the clefted constituent by a
‘predication’ relation.
(66)
(68) a. It was not until I was perhaps twenty-five or thirty that I read them and
enjoyed them.
b. *When I read them and enjoyed them was not until I was perhaps twenty-five.
c. *Not until I was perhaps twenty-five was when I read them and enjoyed them.
As seen here, the not until adverbial clause appears only in it-clefts.
Unlike it-clefts, neither wh-clefts nor inverted wh-clefts allow the cleft clause
portion to be headed by the complementizer that:
In addition, the relative pronoun of the cleft clause in an it-cleft may be a PP,
whereas a PP cannot occur in the comparable position in a wh-cleft or inverted
wh-cleft:
(70) a. And it was this matter [[on which] I consulted with the chairman of the
Select Committee].
b. *[[On which] I consulted with the chairman of the Select Committee] was
this matter.
c. *This matter was [[on which] I consulted with the chairman of the Select
Committee].
These facts suggest that the different types of cleft are not derivationally related
and should be treated as distinct constructions. Without providing detailed
analyses, we sketch out possible directions here.
There are two observations to make here concerning the respective roles of the
copula be and the cleft clause. The copula in the cleft construction has a ‘speci-
ficational’ use, rather than a ‘predicational’ one. The examples in (72) illustrate
these two copular functions. In (72a), the copula is predicational, whereas in
examples like (72b), the copula is specificational:
(72) a. The one who got an A in the class was very happy.
b. The one who broke the window was Mr. Kim.
In (72a), the postcopular element (very happy) denotes a property
of the subject, while in (72b) the postcopular NP (Mr. Kim) provides the value of a
variable. The subject in (72b) refers not to an individual but to a variable (the x such
that x is a student and x broke the window). This is shown by agreement in tag
questions:
(73) a. The one who got A in the class was very happy, wasn’t she?
b. The one who broke the window was Mr. Kim, wasn’t it/*wasn’t she?
Unlike in (73a), the appropriate tag for (73b) includes the pronoun it, not
he or she. A rough paraphrase for (73b) is ‘The x such that x broke the window
is Mr. Kim.’
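The specificational reading can be made concrete with a small model-theoretic sketch; the individuals and the extension of the predicate below are hypothetical:

```python
# Specificational copula, sketched as supplying the value of a variable:
# 'the x such that x broke the window' is computed from a toy model,
# and the postcopular NP names that value.
domain = {"Mr. Kim", "Lee", "Pat"}
broke_window = {"Mr. Kim"}            # extension of 'broke the window'

# the definite description picks out the unique satisfier
(x,) = [p for p in domain if p in broke_window]
assert x == "Mr. Kim"                 # '... was Mr. Kim': value assignment

# This also rationalizes the tag "wasn't it?": the subject denotes the
# variable (an 'it'), not a person, so the tags 'she'/'he' are out.
```

On the predicational reading of (72a), by contrast, the postcopular phrase simply ascribes a property to an already-referring subject, so no variable is involved.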
Regarding the cleft clause itself, we can observe that it behaves like a kind of
free relative clause. Not all wh-words can occur in free relatives:
(74) a. He got what he wanted.
b. He put the money where Lee told him to put it.
c. The concert started when the bell rang.
One can regard what, where, and when as introducing a free relative clause, in the
sense that they are interpreted, respectively, as ‘the thing that,’ ‘the place where,’
and ‘the time when.’ However, this kind of interpretation is not feasible with who,
which, or how. As predicted by their failure to form free relatives, neither who
nor which can appear in wh-clefts:
Also note that the syntactic distribution of a free relative clause is that of an NP,
not that of a clause. The object of eat is a diagnostic environment:
Since the verb ate requires only an NP as its complement, the only possible
structure is as follows:
(78)
Although the filler what and the head phrase John ate form a constituent, the
result cannot be an S, because ate can combine only with an NP. This kind of
free relative structure, which is unusual in the sense that the nonhead filler what
is the syntactic head, is licensed by the following grammar rule (Pullum, 1991):4
This construction ensures that when a free relative pronoun combines with a
sentence missing one phrase, the resulting expression is not an S but a complete
NP.
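The effect of this grammar rule, in which the wh-filler rather than the clausal head determines the mother's category, can be sketched as a licensing check. The encoding is hypothetical, and the FREL set follows footnote 4:

```python
# Sketch of the free relative construction: a [FREL +] wh-word combines
# with an S missing one phrase, and the mother is a complete NP (the
# filler, exceptionally, supplies the category). Simplified encoding.
FREL = {"what", "where", "when"}       # can head a free relative
NON_FREL = {"who", "which", "how", "why"}

def free_relative(wh_word, s_with_gap):
    if wh_word not in FREL:
        return None                    # e.g. *'He got who(m) she met'
    if not s_with_gap["gap"]:
        return None                    # the clause must contain a gap
    return {"cat": "NP", "gap": []}    # gap discharged; result is an NP

clause = {"cat": "S", "gap": ["NP"]}   # 'John ate _'
assert free_relative("what", clause) == {"cat": "NP", "gap": []}
assert free_relative("who", clause) is None
```

Because the output is an NP, the result may occur wherever an NP is selected, such as the object position of ate in (78).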
On the assumption that the cleft clause in the wh-cleft is a free relative, we
can assign the following structure to (71b):
(80)
As shown here, the cleft clause is formed by the combination of what with an
S missing an NP. The index of the free relative is identified with that of the
postcopular NP their teaching material.
Taking wh-clefts as a type of free-relative clause construction headed by an
NP, we can explain the ungrammaticality of examples like the following:
The subjects in these sentences are not headed by NPs and therefore cannot
be free relatives.
4 The feature FREL is assigned to wh-words like what, where, and when, but not to how and why, to
distinguish between those wh-words that can head a free relative and those that cannot. See Kim
(2001b).
(82)
In these structures, the cleft clause has no FREL value, and so allows all wh-words
to head the relative clause:
This contrast suggests that the focused element (Pat in (84a)) and the following
relative clause do not form a syntactic unit, as a restrictive relative clause does
with its nominal head.
As discussed earlier, two major transformational approaches have been pro-
posed for the generation of it-clefts: expletive insertion and extraposition.
The present analysis takes the latter direction, whereby the pronoun it and
the cleft clause are linked by a type of extraposition process (Gundel, 1977;
Geluykens, 1988; Hedberg, 1988). As noted in the previous section, this anal-
ysis generates it-clefts from wh-clefts by extraposing the what-clause to the
sentence-final position. The present analysis, without postulating any move-
ment operations, assumes that the it-clefts have base-generated structures like
the following:
(85)
The structure implies that the cleft clause is extraposed while the NP functions as
a focused (FOC) phrase. This kind of projection is possible when the copula verb
be selects a clausal subject and then becomes an extraposed word (extraposed-wd):
(86) [FORM be
      ARG-ST ⟨1 [verbal], 2 XP⟩]
     →
     [FORM be
      ARG-ST ⟨NP[it], 2 XP⟩
      EXTRA ⟨1 [verbal]⟩]
The present analysis also leads us to expect examples like (88), in which a
parenthetical expression intervenes between the focused phrase and the extraposed
cleft clause:
(88) a. It was the boy, I believe, that bought the book.
b. It was in the attic, the police believed, where Ann had been hiding.
The present analysis can also license examples like the following, where the
focused XP is gapped:
(89) a. I wonder who it was who saw you.
b. I wonder who it was you saw .
c. I wonder in which pocket it was that Kim had hidden the jewels.
As shown here, the first COMPS value of the cleft copula be is realized as a
GAP element. This GAP value is passed up to the point where it is discharged by
the wh-element who. This induces an interrogative meaning on the complement
clause of the verb wonder.
As seen here, the present system allows the focus phrase (the complement
of the copula) to be gapped, but note that the cleft clause cannot have a gap
expression:
(91) a. Who do you think it is that Mary met ?
b. *To whom do you think it is the book that Mary gave ?
In these examples, where the cleft clause has a subject gap, the verb in the
cleft clause agrees with the coordinated NP. This kind of agreement is what we
observe in relative clauses:
(93) a. the students that like Peter
b. *the student that like Peter
Such a semantic relation between the focused phrase and the cleft clause is
quite similar to the one we find between the antecedent phrase and the relative
clause. Our conclusion is therefore that, in terms of syntax, it-clefts are different
from relative clauses, but in terms of semantics, they are quite similar to relative
clauses.
In order to capture such an agreement connectivity effect, we could add
additional constraints to the extraposed be:
(94) [FORM be
      ARG-ST ⟨1 [verbal], 2 XP⟩]
     →
     [be-cleft
      FORM be
      ARG-ST ⟨NP[it], 2 XPi[FOC +]⟩
      EXTRA ⟨1 verbal[REL i]⟩]
The only thing added here is the coindexation relation between the focused XP
and the cleft clause. That is, the type be-cleft is a subtype of the type extraposed-
wd but requires that the focused phrase be coindexed with the relative pronoun
of the cleft clause. This analysis would enable us to predict examples like the
following:
(95) a. It is [me] [that is to blame].
b. It is [he] [that is to blame].
c. It is [you] [who is to blame].
d. It is [you] [who are to blame].
What we see in (95) is variability in how person and number features appear in
the inflection of the extraposed cleft clause. While (95a) suggests that the verb
of the extraposed clause is invariantly 3rd person, (95d) shows that a 2nd-person
focal argument can trigger 2nd-person agreement (if the extraposed clause con-
tains a relative pronoun). Such variability is anticipated by an index-based theory
of agreement, as discussed in Section 6.4 of Chapter 6, in which agreement relies
on the manner in which the anchor element is construed.
12.5 Conclusion
Exercises
2. Draw structures for the following sentences and show which gram-
mar rules are involved in generating them:
(i) a. This problem will be difficult for the students to solve.
b. Being lovely to look at has its advantages.
Further, consider the following examples in (ii) and (iii), draw struc-
tures for them and show which grammar rules and principles are
involved in their generation:
(ii) a. I wonder who it was who saw you.
b. I wonder who it was you saw.
c. I wonder in which pocket it was that Kim had hidden the jewels.
(iii) a. Was it for this that we suffered and toiled?
b. Who was it who interviewed you?
7. Provide the structures of the following two sentences and then dis-
cuss whether the present analysis can account for each of these
two:
a. It is on Kim that Lee relies.
b. It is Kim on whom Lee relies.
Afterword

In this book we have explored the theory and practice of Sign-Based Construction Grammar (SBCG) by applying it to a range of grammatical phenomena in
English. The two basic theoretical notions in SBCG are sign and construction.
Signs are complexes of linguistic information that fully specify both the form
and the meaning of a linguistic expression. Classes of signs can be expressed
as sign descriptions. Constructions are the means the grammar provides for
deriving more complex sign descriptions from simpler ones. Thus,
there can be constructions that pair an inflected form of a verb with an abstract
representation of that verb (e.g., assigning the form persuaded as the past tense of
persuade), constructions that associate a lexeme with information about valence
and meaning (e.g., the use of have in an ‘ordering’ sense, as in They have me
shine their shoes every morning), and constructions that associate constraints on
a phrasal pattern with a construction-specific meaning, as in The more we know,
the less we’ll need.
The notion of construction, in this view, is a formalization, in a constraint-
based architecture, of the notion of construction in traditional grammar. The
central notion is that constructions license linguistic signs that need special
explanations for at least some of their properties – lexical, syntactic, semantic, or
pragmatic – beyond what we know about their component parts. The construc-
tions of a grammar model the native or fluent speaker’s ability to produce and
understand the signs of their language.
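The three kinds of constructions just listed can be thought of as mappings from simpler sign descriptions to more complex ones. A minimal Python sketch, with illustrative feature names rather than the book's formalism, models the first case, an inflectional construction pairing a verb lexeme with its past-tense form:

```python
# Toy rendering of a construction as a function from a simpler sign
# description to a more complex one. The dict keys (FORM, POS, VFORM,
# TENSE, ARG_ST) are illustrative stand-ins, not SBCG notation.

def past_tense_cxn(lexeme):
    """Derive a past-tense word description from a verb lexeme
    (regular morphology only, for illustration)."""
    form = lexeme["FORM"]
    past = form + "d" if form.endswith("e") else form + "ed"
    # The output copies the input description and adds inflectional
    # information; the input description itself is left unchanged.
    return {**lexeme, "FORM": past, "VFORM": "fin", "TENSE": "past"}

persuade = {"FORM": "persuade", "POS": "verb", "ARG_ST": ["NP", "NP", "VP"]}
persuaded = past_tense_cxn(persuade)

assert persuaded["FORM"] == "persuaded"
assert persuade["FORM"] == "persuade"  # lexeme description unchanged
```

Valence-changing and phrasal constructions can be modeled the same way: each takes one or more sign descriptions as input and licenses a more specific description as output.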
A construction may assign semantic properties that are not determined by its
constituent elements and their manner of combination. This is true of phrases
like the poor, the rich, the young, the old, the blind, the lame, etc. These have the
properties ‘human,’ ‘generic,’ and ‘plural.’ This means that a sentence like I have
two electric vehicles; the old is a Nissan Leaf fails on three grounds: the phrase
the old would have to be specific, inanimate, and singular to be acceptable in the
subject position here.
As we have discussed in this book, there is a tradition in which the grammar-
ian’s main goal is to characterize those properties of the grammar that belong to
the ‘core,’ which is understood to contain the basic underlying building-blocks of
the language; these are the features that are most relevant in comparing languages
with each other or in studying the nature of language in the human species. In
contrast to the core is the collection of patterns that make up the less important ‘periphery.’
Some constructions are very general, like the coordination pattern in (1a). Some
are particular, like the NP in (1b), which consists of the coordination of two
singular bare count nouns. This pattern can be used only when both of the items
and their close association are already saliently established in the discourse. And
some constructions are even more particular, like the sequence in (1c), where
the lexical makeup is fixed, the usual coordinate conjunction of similar syntactic
elements is not in evidence, and the meaning of the whole is unrelated to the
meaning of the parts (the expression hammer and tongs means ‘energetically’).
It is easy to see that examples like (1c) illustrate idioms and that expressions
like (1a) are the product of a general rule of grammar. But what about expres-
sions like (1b), which seem to lie between opaque idioms and fully productive
grammatical rules? Example (1b) is special because it is not a simple conjunc-
tion of two possible objects of grab: She grabbed her hat and She grabbed her
coat are ordinary expressions, but English does not allow *She grabbed hat.
One of the advantages of a constructional approach to grammar is that it gives
us a single format in which to describe all the grammatical formulas that the
speaker of a language must know, from the most particular, like that illustrated in
(1c), to the most general, like that illustrated in (1a). The con-
struction grammarian sees a language as presenting a continuum of idiomaticity,
or generality, of expressions; a construction grammar models this continuum
with an array of constructions of correspondingly graded generality. It is pos-
sible that no language except English has a construction that builds an Adverb
Phrase, like by and large, by conjoining a preposition and an adjective, and it is
also possible that every language has some form of coordinate conjunction. But
where along the gradient of intermediate cases should one draw the line between
‘core’ and ‘periphery’? To our understanding, no objective criterion has been
established to distinguish core from periphery, even by those who assert that only
core phenomena are worthy of scientific investigation. A common practice is to
include in the core obvious cases plus as much of the rest of the language as fits
the theoretical apparatus at hand (Culicover and Jackendoff, 1999). But this prac-
tice simply leads to circular argumentation. A constructional approach, which
offers us a single representational format for any grammatical pattern, at what-
ever point on the gradient from frozen idiom to productive rule it falls, avoids
this failing. Constructional approaches to grammar assume that accounting for
all the facts of a language as precisely as possible is the major goal of syntactic
theory.
The appendix that follows is designed as a basic map of the grammatical landscape that we have explored in this book; it includes both descriptions of words and descriptions of constructions.
A Lexical Entries
A.1.1 Verbs
(1)  k. [FORM ⟨put⟩, ARG-ST ⟨NP[agt], NP[th], PP[loc]⟩]
     l. [FORM ⟨smile⟩, ARG-ST ⟨NP⟩]
     m. [FORM ⟨surprise⟩, ARG-ST ⟨[nominal], NP⟩]
     n. [FORM ⟨teach⟩, ARG-ST ⟨NP, NP[goal], NP[th]⟩]
A.1.2 Adjectives
(2)  a. [FORM ⟨alive⟩, SYN|HEAD [POS adj, PRD +, MOD ⟨ ⟩]]
     b. [FORM ⟨ashamed⟩, ARG-ST ⟨NP, CP[VFORM fin]⟩]
     c. [FORM ⟨content⟩, ARG-ST ⟨NP, CP[VFORM fin]⟩]
     d. [FORM ⟨eager⟩, SYN|HEAD|POS adj, ARG-ST ⟨NP, VP[VFORM inf]⟩]
     e. [FORM ⟨fond⟩, SYN|HEAD|POS adj, ARG-ST ⟨NP, PP[PFORM of]⟩]
     f. [FORM ⟨wooden⟩, SYN|HEAD [POS adj, MOD ⟨N⟩]]
A.1.3 Nouns
(3)  a. [FORM ⟨boy⟩, SYN|HEAD [POS noun, AGR|NUM sing], SEM|IND|NUM sing]
     b. [FORM ⟨boys⟩, SYN|HEAD [POS noun, AGR|NUM pl], SEM|IND|NUM pl]
     c. [FORM ⟨eagerness⟩, ARG-ST ⟨DP, XP[VFORM inf]⟩]
     d. [FORM ⟨reliance⟩, ARG-ST ⟨DP, PP[on]⟩]
     e. [FORM ⟨proximity⟩, ARG-ST ⟨DP, (PP[PFORM to])⟩]
     f. [FORM ⟨faith⟩, ARG-ST ⟨DP, (PP[PFORM in])⟩]
     g. [FORM ⟨hash browns⟩, SYN|HEAD [POS noun, AGR|NUM pl], SEM|IND|NUM pl]
        (when referring to the food itself)
     h. [FORM ⟨hash browns⟩, SYN|HEAD [POS noun, AGR|NUM pl], SEM|IND|NUM sing]
        (when referring to a customer, or to a dish)
     i. [FORM ⟨team/government⟩, SYN|HEAD [POS noun, AGR|NUM sing], SEM|IND|NUM pl]
A.1.4 Prepositions
(4)  [FORM ⟨in⟩, ARG-ST ⟨NP⟩]
A.1.5 Auxiliary
(5)  a. [aux-be, FORM ⟨be⟩, ARG-ST ⟨NP, XP[PRD +]⟩]
     b. [aux-do, FORM ⟨do⟩, SYN|HEAD [VFORM fin],
         ARG-ST ⟨NP, VP[AUX −, VFORM bse]⟩]
     c. [aux-have, FORM ⟨have⟩, ARG-ST ⟨NP, VP[VFORM en]⟩]
     d. [aux-to, FORM ⟨to⟩, SYN|HEAD [VFORM inf],
         ARG-ST ⟨NP, VP[VFORM bse]⟩]
A.1.6 Determiners
(6)  a. [FORM ⟨little⟩, SYN|HEAD [POS det, COUNT −]]
     b. [FORM ⟨many⟩, SYN|HEAD [POS det, COUNT +]]
     c. [FORM ⟨this⟩, SYN|HEAD [POS det, AGR|NUM sing]]
     d. [FORM ⟨the⟩, SYN|HEAD [POS det, COUNT boolean]]
A.2.3 Nouns
(9)  a. [FORM ⟨each⟩
         SYN [HEAD [POS noun, AGR|NUM sing]
              VAL|COMPS ⟨PP[PFORM of, DEF +, NUM pl]⟩]]
     b. [FORM ⟨book⟩
         SYN [HEAD [POS noun, AGR [PER 3rd, NUM sing, GEND neut]]
              VAL [SPR ⟨DP[NUM sing]⟩, COMPS ⟨ ⟩]]]
     c. [FORM ⟨dogs⟩, SYN [HEAD|POS noun, VAL|SPR ⟨DP[COUNT +]⟩]]
     d. [FORM ⟨furniture⟩, SYN [HEAD|POS noun, VAL|SPR ⟨DP[COUNT −]⟩]]
     e. [FORM ⟨he⟩
         SYN [HEAD [POS noun, AGR [PER 3rd, NUM sing, GEND masc]]
              VAL [SPR ⟨ ⟩, COMPS ⟨ ⟩]]]
     f. [cn-prpn
         FORM ⟨John Smith⟩
         SYN [HEAD|POS noun, VAL [SPR ⟨DP⟩, COMPS ⟨ ⟩]]]
     g. [prpn
         FORM ⟨John Smith⟩
         SYN [HEAD|POS noun, VAL [SPR ⟨ ⟩, COMPS ⟨ ⟩]]]
     h. [FORM ⟨many⟩
         SYN [HEAD|POS noun
              VAL|COMPS ⟨PP[PFORM of, NUM pl, DEF +]⟩]]
     i. [FORM ⟨much⟩
         SYN [HEAD|POS noun
              VAL|COMPS ⟨PP[PFORM of, NUM sing, DEF +]⟩]]
     j. [FORM ⟨neither⟩
         SYN [HEAD [POS noun, AGR|NUM sing]
              VAL|COMPS ⟨PP[PFORM of, DEF +]⟩]]
     k. [FORM ⟨pound⟩
         SYN [HEAD [POS noun, NUM sing]
              VAL [SPR ⟨DP⟩, COMPS ⟨PP[PFORM of]⟩]]]
     l. [FORM ⟨pounds⟩
         SYN [HEAD [POS noun, AGR 1 [NUM pl]]
              VAL|SPR ⟨DP[AGR 1 ]⟩]
         SEM|IND|NUM sing]
     m. [FORM ⟨some⟩
         SYN [HEAD [POS noun, AGR|NUM 1 ]
              VAL|COMPS ⟨PP[PFORM of, DEF +, AGR|NUM 1 ]⟩]]
     n. [FORM ⟨student⟩
         SYN [HEAD|POS noun, VAL [SPR ⟨DP⟩, COMPS ⟨ ⟩]]]
A.2.4 Adjectives
(10) a. [FORM ⟨eager⟩
         SYN|VAL [SPR ⟨NP_i⟩, COMPS ⟨VP[VFORM inf, IND s1]⟩]
         SEM [IND s0, RELS ⟨[PRED eager, EXP i, SIT s1]⟩]]
A.2.5 Complementizers
(11) a. [FORM ⟨that⟩
         SYN [HEAD [POS comp, VFORM 1 ]
              VAL [SPR ⟨ ⟩, COMPS ⟨S[VFORM 1 ]⟩]]]
     b. [FORM ⟨for⟩
         SYN [HEAD [POS comp, VFORM inf]
              VAL|COMPS ⟨S[VFORM inf]⟩]]
     c. [FORM ⟨whether⟩
         SYN [HEAD|POS comp, VAL|COMPS ⟨S[fin]⟩, QUE +]
         ARG-ST ⟨S⟩]
A.2.6 Auxiliaries
(12) a. [aux-be-pass
         FORM ⟨be⟩
         SYN|VAL [SPR ⟨ 1 NP⟩, COMPS ⟨ 2 VP[VFORM pass, SPR ⟨ 1 NP⟩]⟩]
         ARG-ST ⟨ 1 NP, 2 VP⟩]
     b. [FORM ⟨must⟩
         SYN [HEAD [VFORM fin, AUX +]
              VAL [SPR ⟨ 1 NP⟩, COMPS ⟨ 2 VP[SPR ⟨ 1 NP⟩]⟩]]
         ARG-ST ⟨ 1 NP, 2 VP⟩]
     c. [FORM ⟨to⟩
         SYN [HEAD [POS verb, VFORM inf]
              VAL|COMPS ⟨VP[VFORM bse]⟩]]
A.2.7 Determiners
(13) a. [FORM ⟨a⟩
         SYN [HEAD [POS det, AGR|NUM sing]
              VAL [SPR ⟨ ⟩, COMPS ⟨ ⟩]]]
     b. [FORM ⟨’s⟩
         SYN [HEAD|POS det, VAL [SPR ⟨NP⟩, COMPS ⟨ ⟩]]]
A.2.8 Adverbs
(14) [FORM ⟨never/not⟩, SYN|HEAD [POS adv, MOD ⟨VP[VFORM nonfin]⟩]]
d. N'T CONTRACTION CONSTRUCTION:
   [aux-w, FORM 1 , HEAD [VFORM fin]]
   → [aux-nt-w, FORM 1 + n't, HEAD [VFORM fin, NEG +]]
f. PASSIVE CONSTRUCTION:
   [v-tran-lxm, ARG-ST ⟨XP_i, 2 YP, ...⟩]
   → [passive-v, SYN|HEAD|VFORM pass, ARG-ST ⟨ 2 YP, ..., PP_i[by]⟩]
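The passive construction's remapping of the argument-structure list can be illustrated with a small toy function. The list encoding below is invented for illustration (strings standing in for typed signs), not SBCG notation: the first argument is demoted to a by-PP at the end of the list, and the remaining arguments shift up.

```python
# Toy sketch of the passive construction's ARG-ST remapping:
# <XP_i, YP, ...>  ->  <YP, ..., PP[by]_i>
# Strings stand in for sign descriptions; this is illustrative only.

def passive_cxn(arg_st):
    """Promote the second argument and demote the subject to a
    by-PP at the end of the argument-structure list."""
    subj, obj, *rest = arg_st
    return [obj, *rest, f"PP[by]:{subj}"]

# 'Kim put the book on the shelf' -> 'The book was put on the shelf (by Kim)'
assert passive_cxn(["NP:agt", "NP:th", "PP:loc"]) == \
    ["NP:th", "PP:loc", "PP[by]:NP:agt"]
```

The prepositional passive in (g) works the same way, except that the promoted argument comes from inside a PP complement, with the stranded preposition retained.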
g. PREPOSITIONAL PASSIVE CONSTRUCTION:
   [prep-v, ARG-ST ⟨NP_i, PP_j[PFORM 4 ]⟩]
   → [pass-prep-v, SYN|HEAD|VFORM pass,
      ARG-ST ⟨NP_j, P[LEX +, PFORM 4 ], (PP_i[by])⟩]
h. VP ELLIPSIS CONSTRUCTION:
   [aux-w, HEAD|AUX +, ARG-ST ⟨ 1 XP, YP⟩]
   → [aux-elide-w, HEAD|AUX +,
      VAL [SPR ⟨ 1 XP⟩, COMPS ⟨ ⟩], ARG-ST ⟨ 1 XP, YP[pro]⟩]
C Constructional Constraints
c. Auxiliary Verbs:
   aux-verb ⇒ [SYN|HEAD [POS verb, AUX +],
               ARG-ST ⟨ 1 XP, YP[SPR ⟨ 1 XP⟩]⟩]
d. Modal Auxiliary:
   aux-modal ⇒ [SYN|HEAD|VFORM fin, ARG-ST ⟨NP, VP[VFORM bse]⟩]
e. Tough Lexeme:
   tough-lxm ⇒ [SYN|HEAD|POS adj,
                ARG-ST ⟨NP_i, VP[VFORM inf, GAP ⟨ 1 NP_i[acc]⟩]⟩]
f. Eager Lexeme:
   eager-lxm ⇒ [SYN|HEAD|POS adj,
                ARG-ST ⟨NP_i, VP[VFORM inf, SPR ⟨NP_i⟩]⟩]
d. HEAD-ONLY CONSTRUCTION:
   XP[phrase, VAL 1 ] → X[word, VAL 1 ]
b. COORDINATION CONSTRUCTION:
   XP → XP[GAP A ] conj XP[GAP A ]
Bibliography
Aarts, Bas. 1997/2001. English Syntax and Argumentation. Basingstoke, Hampshire and
New York: Palgrave.
Aarts, Bas. 2007. Syntactic Gradience: The Nature of Grammatical Indeterminacy.
Oxford: Oxford University Press.
Abeillé, Anne and Godard, Daniele. 2000. French Word Order and Lexical Weight. In
Borsley, R. (ed.), The Nature and Function of Syntactic Categories, 325–360. New
York: Academic Press.
Abeillé, Anne and Godard, Daniele. 2002. The Syntactic Structure of French Auxiliaries.
Language 78(3): 404–452.
Abney, Steven. 1987. The English Noun Phrase in Its Sentential Aspect. PhD dissertation,
MIT.
Adger, David. 2013. Constructions and Grammatical Explanation: Comments on Gold-
berg. Mind and Language 28(4): 466–478.
Akmajian, Adrian. 1970. On Deriving Cleft Sentences from Pseudo-cleft Sentences.
Linguistic Inquiry 1(2): 149–168.
Akmajian, Adrian and Heny, Frank. 1975. Introduction to the Principles of Transforma-
tional Syntax. Cambridge, MA: MIT Press.
Akmajian, Adrian, Steele, Susan, and Wasow, Thomas. 1979. The Category AUX in
Universal Grammar. Linguistic Inquiry 10(1): 1–64.
Akmajian, Adrian and Wasow, Thomas. 1974. The Constituent Structure of VP and AUX
and the Position of Verb BE. Linguistic Analysis 1(3): 205–245.
Arnold, Douglas. 2004. Non-restrictive Relative Clauses in Construction-Based HPSG.
In Müller, S. (ed.), Proceedings of the 11th International Conference on Head-Driven
Phrase Structure Grammar, 27–47. Stanford, CA: CSLI Publications.
Arnold, Douglas and Spencer, Andrew. 2015. A Constructional Analysis for the Skepti-
cal. In Müller, S. (ed.), Proceedings of the 22nd International Conference on Head-
Driven Phrase Structure Grammar, 41–61. Stanford, CA: CSLI Publications.
Asudeh, Ash, Dalrymple, Mary, and Toivonen, Ida. 2013. Constructions with Lexical
Integrity. Journal of Language Modelling 1(1): 1–54.
Bach, Emmon. 1974. Syntactic Theory. New York: Holt, Rinehart and Winston.
Bach, Emmon. 1979. Control in Montague Grammar. Linguistic Inquiry 10(4): 515–531.
Baker, Carl. 1991. The Syntax of English not: The Limits of Core Grammar. Linguistic
Inquiry 22(3): 387–429.
Baker, Carl. 1995. English Syntax. Cambridge, MA: MIT Press.
Baker, Mark. 1997. Thematic Roles and Syntactic Structure. In Haegeman, L. (ed.),
Elements of Grammar, 73–137. Dordrecht: Kluwer.
Baker, Mark. 2001. The Atoms of Language: The Mind’s Hidden Rules of Grammar. New
York: Basic Books.
Baltin, Mark. 2006. Extraposition. In Everaert, M. and Van Riemsdijk, H. (eds.), The
Blackwell Companion to Syntax (Blackwell Handbooks in Linguistics), 237–271.
Oxford: Blackwell.
Bates, Elizabeth and Goodman, Judith C. 1997. On the Inseparability of Grammar and the
Lexicon: Evidence from Acquisition, Aphasia and Real-Time Processing. Language
and Cognitive Processes 12: 507–584.
Bender, Emily and Flickinger, Dan. 1999. Peripheral Constructions and Core Phe-
nomena: Agreement in Tag Questions. In Webelhuth, G., Koenig, J.-P., and Kathol,
A. (eds.), Lexical and Constructional Aspects of Linguistic Explanation, 199–214.
Stanford, CA: CSLI Publications.
Biber, Douglas, Johansson, Stig, Leech, Geoffrey, Conrad, Susan, and Finegan, Edward.
1999. Longman Grammar of Spoken and Written English. New York: Longman.
Blake, Barry. 1990. Relational Grammar. London: Routledge.
Bloomfield, Leonard. 1933. Language. New York: H. Holt and Company.
Booij, Geert. 2010. Construction Morphology. Language and Linguistics Compass 4(7):
543–555.
Borsley, Robert. 1989a. Phrase Structure Grammar and the Barriers Conception of Clause
Structure. Linguistics 27(5): 843–863.
Borsley, Robert. 1989b. An HPSG Approach to Welsh. Journal of Linguistics 25(2): 333–
354.
Borsley, Robert. 1991. Syntactic Theory: A Unified Approach. London: Routledge.
Borsley, Robert. 1996. Modern Phrase Structure Grammar. Oxford: Blackwell.
Borsley, Robert. 2004. An Approach to English Comparative Correlatives. In Müller,
S. (ed.), Proceedings of the 11th International Conference on Head-Driven Phrase
Structure Grammar, 70–92. Stanford, CA: CSLI Publications.
Borsley, Robert. 2005. Against ConjP. Lingua 115(4): 461–482.
Borsley, Robert. 2006. Syntactic and Lexical Approaches to Unbounded Dependencies.
Essex Research Reports in Linguistics 49. Colchester, UK: University of Essex.
Borsley, Robert. 2012. Don’t Move! Iberia: An International Journal of Theoretical
Linguistics 4(1): 110–139.
Bouma, Gosse, Malouf, Rob, and Sag, Ivan. 2001. Satisfying Constraints on Extraction
and Adjunction. Natural Language and Linguistic Theory 19(1): 1–65.
Brame, Michael. 1979. Essays toward Realistic Syntax. Seattle: Noit Amrofer.
Bresnan, Joan. 1978. A Realistic Transformational Grammar. In Halle, M., Bresnan, J.,
and Miller, G. A. (eds.), Linguistic Theory and Psychological Reality. Cambridge,
MA: MIT Press.
Bresnan, Joan. 1982a. Control and Complementation. In The Mental Representation of
Grammatical Relations (Bresnan, 1982c).
Bresnan, Joan. 1982b. The Passive in Lexical Theory. In The Mental Representation of
Grammatical Relations (Bresnan, 1982c).
Bresnan, Joan. 1982c. The Mental Representation of Grammatical Relations. Cambridge,
MA: MIT Press.
Bresnan, Joan. 1994. Locative Inversion and the Architecture of Universal Grammar.
Language 70(2): 1–52.
Bresnan, Joan. 2001. Lexical-Functional Syntax. Oxford and Cambridge, MA: Blackwell.
Briscoe, Edward, Copestake, Ann, and Paiva, Valeria. 1993. Inheritance, Defaults, and
the Lexicon. Cambridge, UK: Cambridge University Press.
Chomsky, Noam. 1993. A Minimalist Program for Linguistic Theory. In Hale, K. and
Kayser, S. (eds.), The View from Building 20: Essays in Honor of Sylvain Bromberger,
1–52. Cambridge, MA: MIT Press.
Chomsky, Noam. 1995. The Minimalist Program. Cambridge, MA: MIT Press.
Chomsky, Noam. 2005. Three Factors in Language Design. Linguistic Inquiry 36(1):
1–22.
Chomsky, Noam. 2013. Problems of Projection. Lingua 130: 33–49.
Chomsky, Noam and Lasnik, Howard. 1977. Filters and Control. Linguistic Inquiry 8(3):
425–504.
Christiansen, Morten and Chater, Nick. 2016. Creating Language: Integrating Evolution,
Acquisition, and Processing. Cambridge, MA: MIT Press.
Collins, Peter. 1996. Get-passives in English. World Englishes 15(1): 43–56.
Copestake, Ann. 2002. Implementing Typed Feature Structures Grammars. Stanford, CA:
CSLI Publications.
Copestake, Ann, Flickinger, Dan, Pollard, Carl, and Sag, Ivan. 2006. Minimal Recursion
Semantics: An Introduction. Research on Language and Computation 3(4): 281–332.
Cowper, Elizabeth. 1992. A Concise Introduction to Syntactic Theory: The Government-
Binding Approach. Chicago: University of Chicago Press.
Croft, William. 2001. Radical Construction Grammar: Syntactic Theory in Typological
Perspective. Oxford: Oxford University Press.
Croft, William. 2009. Syntax Is More Diverse, and Evolutionary Linguistics Is Already
Here. The Behavioral and Brain Sciences 32(5): 457–458.
Culicover, Peter. 1993. Evidence against ECP Accounts of the That-t Effect. Linguistic
Inquiry 24(3): 557–561.
Culicover, Peter and Jackendoff, Ray. 1999. The View from the Periphery: The English
Comparative Correlative. Linguistic Inquiry 30(4): 543–571.
Culicover, Peter, and Jackendoff, Ray. 2005. Simpler Syntax. Oxford: Oxford University
Press.
Dalrymple, Mary. 2001. Lexical Functional Grammar. (Syntax and Semantics, Volume
34). New York: Academic Press.
Dalrymple, Mary, Zaenen, Annie, Maxwell III, John, and Kaplan, Ronald. 1995. Formal
Issues in Lexical-Functional Grammar. Stanford, CA: CSLI Publications.
Davidson, Donald. 1980. Essays on Actions and Events. Oxford: Clarendon Press; New
York: Oxford University Press.
Davis, Anthony. 2001. Linking by Types in the Hierarchical Lexicon. Stanford, CA: CSLI
Publications.
den Dikken, Marcel. 2005. Comparative Correlatives Comparatively. Linguistic Inquiry
36(4): 497–532.
Downing, Angela. 1996. The Semantics of Get-Passives. In Hasan, R., Cloran, C.,
and Butt, G. (eds.), Functional Descriptions: Theory in Practice. Amsterdam: John
Benjamins Publishing.
Dowty, David. 1982. Grammatical Relations and Montague Grammar. In Jacobson, P.
and Pullum, G. (eds.), The Nature of Syntactic Representation, 79–130. Dordrecht:
Reidel.
Dowty, David. 1989. On the Semantic Content of the Notion of Thematic Role. In Chier-
chia, G., Partee B., and Turner, R. (eds.), Properties, Types, and Meanings, Volume 2,
69–129. Dordrecht: Kluwer.
Dowty, David, Wall, Robert, and Peters, Stanley. 1981. Introduction to Montague
Semantics. Dordrecht: Reidel.
Dubinksy, Stanley and Davies, William. 2004. The Grammar of Raising and Control: A
Course in Syntactic Argumentation. Oxford: Blackwell.
Emonds, Joseph. 1970. Root and Structure-Preserving Transformations. PhD disserta-
tion, MIT.
Emonds, Joseph. 1976. A Transformational Approach to English Syntax: Root, Structure-
Preserving, and Local Transformations. New York: Academic Press.
Ernst, Thomas. 1992. The Phrase Structure of English Negation. The Linguistic Review
9(2): 109–144.
van Eynde, Frank. 2015. Sign-Based Construction Grammar: A Guided Tour. Journal of
Linguistics 52(1): 194–217.
van Eynde, Frank and Kim, Jong-Bok. 2016. Loose Apposition: A Construction-Based
Analysis. Functions of Language 23(1): 17–39.
Fabb, Nigel. 1990. The Difference between English Restrictive and Non-restrictive
Relative Clauses. Journal of Linguistics 26(1): 57–78.
Fillmore, Charles. 1963. The Position of Embedding Transformations in a Grammar.
Word 19(2): 208–231.
Fillmore, Charles. 1999. Inversion and Constructional Inheritance. In Webelhuth, G.,
Koenig, J. P., and Kathol, A. (eds.), Lexical and Constructional Aspects of Linguistics
Explanation, 113–128. Stanford, CA: CSLI Publications.
Fillmore, Charles, Kay, Paul, and O’Connor, Mary. 1988. Regularity and Idiomaticity in
Grammatical Constructions: The Case of Let Alone. Language 64(3): 501–538.
Flickinger, Daniel. 1983. Lexical Heads and Phrasal Gaps. In Barlow, M., Flickinger,
D., and Wescoat, M. (eds.), Proceedings of the 2nd West Coast Conference on Formal
Linguistics, 89–101. Stanford, CA: Stanford Linguistics Association.
Flickinger, Daniel. 1987. Lexical Rules in the Hierarchical Lexicon. PhD dissertation,
Stanford University.
Flickinger, Daniel. 2008. Transparent Heads. In Müller, S. (ed.), Proceedings of the
15th International Conference on Head-Driven Phrase Structure Grammar, 87–94.
Stanford, CA: CSLI Publications.
Flickinger, Daniel, Pollard, Carl, and Wasow, Thomas. 1985. Structure-Sharing in
Lexical Representation. In Proceedings of the 23rd Annual Meeting of the Association
for Computational Linguistics. Morristown, NJ: Association for Computational
Linguistics.
Fodor, Jerry. 1983. The Modularity of Mind. Cambridge, MA: MIT Press.
Fodor, Jerry and Katz, Jerrold. 1964. The Structure of Language. Englewood Cliffs, NJ:
Prentice-Hall.
Fraser, Bruce. 1970. Idioms within a Transformational Grammar. Foundations of Lan-
guage 6: 22–42.
Gazdar, Gerald. 1981. Unbounded Dependencies and Coordinate Structure. Linguistic
Inquiry 12(2): 155–184.
Gazdar, Gerald. 1982. Phrase Structure Grammar. In Jacobson, P. and Pullum, G. (eds.),
The Nature of Syntactic Representation. Dordrecht: Reidel.
Gazdar, Gerald, Klein, Ewan, Pullum, Geoffrey, and Sag, Ivan. 1985. Generalized
Phrase Structure Grammar. Cambridge, MA: Harvard University Press; Oxford: Basil
Blackwell.
Gazdar, Gerald and Pullum, Geoffrey. 1981. Subcategorization, Constituent Order, and
the Notion ‘Head’. In Moortgat, M., van der Hulst, H., and Hoekstra, T. (eds.), The
Scope of Lexical Rules. Dordrecht: Foris.
Gazdar, Gerald, Pullum, Geoffrey, and Sag, Ivan. 1982. Auxiliaries and Related Phenom-
ena in a Restrictive Theory of Grammar. Language 58(3): 591–638.
van Gelderen, Elly. 2017. Syntax: An Introduction to Minimalism. Amsterdam: John
Benjamins.
Geluykens, Ronald. 1988. Five Types of Clefting in English Discourse. Linguistics 26:
823–842.
Ginzburg, Jonathan and Sag, Ivan. 2000. Interrogative Investigations: The Form, Mean-
ing and Use of English Interrogatives. Stanford, CA: CSLI Publications.
Goldberg, Adele. 1995. A Construction Grammar Approach to Argument Structure.
Chicago: University of Chicago Press.
Goldberg, Adele. 2003. Constructions: A New Theoretical Approach to Language.
Trends in Cognitive Science 7(5): 219–224.
Goldberg, Adele. 2006. Constructions at Work. Oxford: Oxford University Press.
Goldberg, Adele. 2009. The Nature of Generalization in Language. Cognitive Linguistics
20(1): 93–127.
Goldberg, Adele. 2013. Constructionist Approaches to Language. In Hoffmann, T. and
Trousdale, G. (eds.), Handbook of Construction Grammar. Oxford: Oxford University
Press.
Goldberg, Adele. 2014. Fitting a Slim Dime between the Verb Template and Argument
Structure Construction Approaches. Theoretical Linguistics 40(1–2): 113–135.
Goldberg, Adele. 2016. Tuning in to the Verb-Particle Construction in English. In Nash,
Lea and Samvelian, Pollet (eds.), Approaches to Complex Predicates. Leiden: Brill.
Goldberg, Adele and Casenhiser, Devin. 2006. English Constructions. In Aarts,
B. and McMahon, A. (eds.), Handbook of English Linguistics. Malden, MA:
Blackwell.
Goldsmith, John. 1985. A Principled Exception to the Coordinate Structure Constraint.
In Eilfort, W., Kroeber, P., and Peters, K. (eds.), Papers from the 21st Regional Meeting
of the Chicago Linguistic Society. Chicago: Chicago Linguistic Society.
Green, Georgia. 1976. Main Clause Phenomena in Subordinate Clauses. Language 52(2):
382–397.
Green, Georgia. 1981. Pragmatics and Syntactic Description. Studies in the Linguistic
Sciences 11(1): 27–37.
Green, Georgia. 2011. Modelling Grammar Growth: Universal Grammar without Innate
Principles or Parameters. In Borsley, R. and Borjars, K. (eds.), Nontransformational
Syntax: Formal and Explicit Models of Grammar: A Guide to Current Models, 378–
403. Cambridge, MA: Blackwell.
Greenbaum, Sidney. 1996. The Oxford English Grammar. Oxford: Oxford University
Press.
Gregory, Michelle and Michaelis, Laura. 2001. Topicalization and Left-Dislocation: A
Functional Opposition Revisited. Journal of Pragmatics 33(11): 1665–1706.
Grice, Paul. 1989. Studies in the Way of Words. Cambridge, MA: Harvard University
Press.
Grimshaw, Jane. 1997. Projection, Heads, and Optimality. Linguistic Inquiry 28(3): 373–
422.
Groat, Erich. 1995. English Expletives: A Minimalist Approach. Linguistic Inquiry 26(2):
354–365.
Grosu, Alexander. 1974. On the Nature of the Left Branch Constraint. Linguistic Inquiry
5(2): 308–319.
Gundel, Jeanette. 1977. Where Do Cleft-Sentences Come from? Language 53(3): 543–
559.
Haegeman, Liliane. 1985. The Get-Passive and Burzio’s Generalization. Lingua 66(1):
53–77.
Haegeman, Liliane. 1994. Introduction to Government and Binding Theory. Cambridge,
MA: Basil Blackwell.
Harman, Gilbert. 1963. Generative Grammar without Transformation Rules: A Defense
of Phrase Structure. Language 39(4): 597–616.
Harris, Randy. 1993. The Linguistic Wars. Oxford: Oxford University Press.
Harris, Zellig. 1970. Papers in Structural and Transformational Linguistics. Dordrecht:
Reidel.
Hedberg, Nancy. 1988. The Discourse Function of Cleft Sentences in Spoken English.
Paper presented at the Linguistics Society of America Conference, New York.
Hedberg, Nancy. 2000. The Referential Status of Clefts. Language 76(4): 891–920.
Hilpert, Martin. 2014. Construction Grammar and Its Application to English. Edinburgh:
Edinburgh University Press.
Hofmeister, Philip, Jaeger, Florian, Sag, Ivan, Arnon, Inbal, and Snider, Neal. 2006.
Locality and Accessibility in Wh-questions. In Featherston, S. and Sternefeld, W.
(eds.), Roots: Linguistics in Search of Its Evidential Base, 185–206. Berlin: Mouton
de Gruyter.
Hooper, Joan and Thompson, Sandra. 1973. On the Applicability of Root Transforma-
tions. Linguistic Inquiry 4(4): 465–497.
Hornstein, Norbert and Lightfoot, David. 1981. Explanation in Linguistics: The Logical
Problem of Language Acquisition. London: Longman.
Huddleston, Rodney and Pullum, Geoffrey. 2002. The Cambridge Grammar of the
English Language. Cambridge, UK: Cambridge University Press.
Hudson, Richard. 1984. Word Grammar. Oxford: Blackwell.
Hudson, Richard. 1990. English Word Grammar. Oxford: Blackwell.
Hudson, Richard. 1998. Word Grammar. In Agel, V., Eichinger, L., Eroms, H. W.
et al. (eds.), Dependency and Valency: An International Handbook of Contemporary
Research. Berlin: Walter de Gruyter.
Hudson, Richard. 2003. Mismatches in Default Inheritance. In Francis, F. and Michaelis,
L. (eds.), Mismatch: Form-Function Incongruity and the Architecture of Grammar,
355–402. Stanford, CA: CSLI Publications.
Hudson, Richard. 2004. Are Determiners Heads? Functions of Language 11(1): 7–42.
Hudson, Richard. 2010. An Introduction to Word Grammar (Cambridge Textbooks in
Linguistics). Cambridge, UK: Cambridge University Press.
Huang, James. 1982. Logical Relations in Chinese and the Theory of Grammar. PhD
dissertation, MIT.
Jackendoff, Ray. 1972. Semantic Interpretation in Generative Grammar. Cambridge,
MA: MIT Press.
Jackendoff, Ray. 1975. Morphological and Semantic Regularities in the Lexicon. Lan-
guage 51(3): 639–671.
Perlmutter, David and Rosen, Carol. 1984. Studies in Relational Grammar 2. Chicago:
University of Chicago Press.
Perlmutter, David and Soames, Scott. 1979. Syntactic Argumentation and the Structure
of English. Berkeley: University of California Press.
Pinker, Steven. 1994. The Language Instinct. New York: Morrow.
Pollard, Carl. 1996. The Nature of Constraint-Based Grammar. Paper presented at the
Pacific Asia Conference on Language, Information, and Computation. Seoul, Korea:
Kyung Hee University.
Pollard, Carl and Sag, Ivan. 1987. Information-Based Syntax and Semantics, Volume 1:
Fundamentals. Stanford, CA: CSLI Publications.
Pollard, Carl and Sag, Ivan. 1992. Anaphors in English and the Scope of Binding Theory.
Linguistic Inquiry 23(2): 261–303.
Pollard, Carl and Sag, Ivan. 1994. Head-Driven Phrase Structure Grammar. Chicago:
University of Chicago Press.
Pollock, Jean-Yves. 1989. Verb Movement, Universal Grammar, and the Structure of IP.
Linguistic Inquiry 20(3): 365–422.
Postal, Paul. 1971. Crossover Phenomena. New York: Holt, Rinehart and Winston.
Postal, Paul. 1974. On Raising. Cambridge, MA: MIT Press.
Postal, Paul. 1986. Studies of Passive Clauses. Albany: SUNY Press.
Postal, Paul and Joseph, Brian. 1990. Studies in Relational Grammar 3. Chicago:
University of Chicago Press.
Postal, Paul and Pullum, Geoffrey. 1988. Expletive Noun Phrases in Subcategorized
Positions. Linguistic Inquiry 19(4): 635–670.
Przepiórkowski, Adam and Kupść, Anna. 2006. HPSG for Slavicists. Glossos 8: 1–68.
Pullum, Geoffrey. 1979. Rule Interaction and the Organization of a Grammar. New York: Garland.
Pullum, Geoffrey. 1991. English Nominal Gerund Phrases as Noun Phrases with Verb-Phrase Heads. Linguistics 29(5): 763–799.
Pullum, Geoffrey. 2013. The Central Question in Comparative Syntactic Metatheory. Mind and Language 28(4): 492–521.
Pullum, Geoffrey and Gazdar, Gerald. 1982. Natural Languages and Context-Free Languages. Linguistics and Philosophy 4(4): 471–504.
Pullum, Geoffrey and Scholz, Barbara. 2002. Empirical Assessment of Stimulus Poverty Arguments. The Linguistic Review 19(1–2): 9–50.
Quirk, Randolph, Greenbaum, Sidney, Leech, Geoffrey, and Svartvik, Jan. 1972. A Grammar of Contemporary English. London and New York: Longman.
Quirk, Randolph, Greenbaum, Sidney, Leech, Geoffrey, and Svartvik, Jan. 1985. A Comprehensive Grammar of the English Language. London and New York: Longman.
Radford, Andrew. 1981. Transformational Syntax: A Student’s Guide to Chomsky’s
Extended Standard Theory. Cambridge, UK: Cambridge University Press.
Radford, Andrew. 1988. Transformational Grammar. Cambridge, UK: Cambridge University Press.
Radford, Andrew. 1997. Syntactic Theory and the Structure of English. New York and
Cambridge, UK: Cambridge University Press.
Radford, Andrew. 2004. English Syntax: An Introduction. Cambridge, UK: Cambridge
University Press.
Sag, Ivan, Wasow, Thomas, and Bender, Emily. 2003. Syntactic Theory: A Formal
Introduction. Stanford, CA: CSLI Publications.
Sag, Ivan and Wasow, Thomas. 2011. Performance-Compatible Competence Grammar. In Borsley, R. and Börjars, K. (eds.), Non-Transformational Syntax: Formal and Explicit Models of Grammar. Oxford: Wiley-Blackwell.
Sag, Ivan and Wasow, Thomas. 2015. Flexible Processing and the Design of Grammar.
Journal of Psycholinguistic Research 44(1): 47–63.
Saussure, Ferdinand de. 2011. Course in General Linguistics [1916]. London: Duckworth.
Sells, Peter. 1985. Lectures on Contemporary Syntactic Theories. Stanford, CA: CSLI
Publications.
Sells, Peter. 2001. Formal and Empirical Issues in Optimality Theoretic Syntax. Stanford,
CA: CSLI Publications.
Shieber, Stuart. 1986. An Introduction to Unification-Based Approaches to Grammar.
Stanford, CA: CSLI Publications.
Steedman, Mark. 1996. Surface Structure and Interpretation. Cambridge, MA: MIT
Press.
Steedman, Mark. 2000. The Syntactic Process. Cambridge, MA: MIT Press/Bradford
Books.
Stockwell, Robert, Schachter, Paul, and Partee, Barbara. 1973. The Major Syntactic
Structures of English. New York: Holt, Rinehart and Winston.
Stowell, Timothy. 1981. Origins of Phrase Structure. PhD dissertation, MIT.
Sussex, Roland. 1982. A Note on the Get-Passive Construction. Australian Journal of
Linguistics 2(1): 83–95.
Taranto, Gina. 2005. An Event Structure Analysis of Causative and Passive Get. Unpublished manuscript, University of California, San Diego.
Thornton, Rosalind. 2016. Children's Acquisition of Syntactic Knowledge. In Oxford
Research Encyclopedia of Linguistics. Available at https://oxfordre.com/linguistics/
view/10.1093/acrefore/9780199384655.001.0001/acrefore-9780199384655-e-72.
Tomasello, Michael. 2009. Constructing a Language. Cambridge, MA: Harvard University Press.
Tseng, Jesse. 2007. English Prepositional Passive Constructions. In Müller, S. (ed.),
Proceedings of the 14th International Conference on Head-Driven Phrase Structure
Grammar, 271–286. Stanford, CA: CSLI Publications.
Ward, Gregory. 1985. The Semantics and Pragmatics of Preposing. PhD dissertation,
University of Pennsylvania.
Warner, Anthony. 2000. English Auxiliaries without Lexical Rules. In Borsley, R. (ed.),
The Nature and Function of Syntactic Categories, 167–218. New York: Academic
Press.
Wasow, Thomas. 1977. Transformations and the Lexicon. In Akmajian, A., Culicover, P.,
and Wasow, T. (eds.), Formal Syntax, 327–360. New York: Academic Press.
Wasow, Thomas. 1989. Grammatical Theory. In Posner, M. (ed.), Foundations of Cognitive Science, 161–205. Cambridge, MA: MIT Press.
Webelhuth, Gert. 1995. Government and Binding Theory and the Minimalist Program.
Oxford: Basil Blackwell.
Wechsler, Stephen. 1995. The Semantic Basis of Argument Structure. PhD dissertation,
Stanford University.
Index

AGR (agreement), 88, 89, 141–147, 149, 153
ARG-ST (argument-structure), 88–94, 97, 99, 104, 105, 114, 115, 118, 120, 122, 123, 126, 127, 129, 131, 181, 185, 207, 210, 243, 250, 262, 300
COMPS (complements), 59, 104, 105
COUNT (countable), 155
DEF (definite), 153
DP (determiner phrase), 79–81, 138, 139, 280
EXTRA, 300–302
FORM (morphological form), 89
FREL, 309
GAP, 244, 274, 285
GEND (gender), 143
IND (index), 146–150, 162
IP (inflectional phrase), 190
MOD (modifier), 53, 268
NFORM, 118, 119, 300
NUM (number), 84, 141–144, 146
OBJ (object), 53
PER (person), 143, 144, 147
PFORM, 116, 153, 158, 228
PHON (phonology), 88
POS (part-of-speech), 83, 88, 101, 144, 146, 148, 156
PRD (predicate), 91, 93, 159
PRED (predicate), 53, 60, 61, 63, 91
PRO, 258, 260, 276
QUE (question), 225, 246–248, 255
REL, 267, 270
SEM (semantics), 88, 89, 146
SPR (specifier), 78, 79, 94, 104, 105, 115, 119, 139, 141, 147, 158, 174
SUBJ (subject), 53
SYN (syntax), 88, 89, 109, 147, 178, 180
VAL (valence), 89, 100, 105, 106, 108, 167
VFORM, 72, 88, 89, 101, 103, 104, 114, 115, 228, 238, 239, 241

acceptability, 1, 3, 5, 9, 10, 287
accusative, 221, 260, 275, 278, 290, 293
adjective, 4, 25, 37, 38, 63, 116, 128, 129, 139, 158, 293
  attributive, 158
  control, 164, 166, 182
  predicative, 158, 159, 300
  raising, 164, 166
adjunct, 59, 60, 63, 108, 238, 261, 286, 301
Adjunct Clause Constraint, 286, 313
adverb, 37–39, 59, 60, 187, 199, 201, 212
adverbial, 59, 261, 263, 306
Affix Hopping Construction, 189
agreement, 31, 55, 139, 196
  index, 145, 146
  mismatch, 147
  morphosyntactic, 145, 150
  noun-determiner, 141
  pronoun-antecedent, 143
  subject-verb, 62, 78, 143, 148
ambiguity, 31, 43
  structural, 31
anomalous, 104
  semantically, 41
antecedent, 139, 143, 147, 163, 274
Argument Realization Constraint (ARC), 104, 105, 192, 243, 245, 250, 262
argument-structure construction, 89, 91, 94–96, 104
arguments, 88, 89
article, 6, 79, 157
atomic, 85
attribute, 85, 86, 103
attribute-value matrix (AVM), 85
autonomous, 30
autonomy, 11, 13, 30
auxiliary verb, 36, 42, 56, 218, 226, 250

bare NP, 136
biological endowment, 16
British English, 195

Case Filter, 292
Case Theory, 221
Categorial Grammar, 108
clausal
  complement, 27, 120, 129, 132, 253, 254, 298
  subject, 127, 286
clause
  embedded, 247, 271, 273
  finite, 27

nonfinite, 40, 72, 101, 119, 122, 171, 193, 199–201, 207, 237, 270
nonhead daughter, 271
nonlocal, 244
  dependency, 290
  feature, 247, 248, 255
  position, 243
Nonlocal Inheritance Principle (NIP), 247, 248, 272, 276
nontransformational, 172, 181, 221
noun
  collective, 149
  common, 134, 135, 140
  count, 6, 8, 9, 135
  countable, 134
  mass, 6, 8, 155
  measure, 157
  noncount, 134
  partitive, 150
  pronoun, 134, 139
  proper, 134, 135, 140, 161

obligatory, 71
ontological issue, 186

particle, 28, 33, 228
partitive, 150, 156
passive, 56
  get-passive, 229
  prepositional, 226
Passive Construction, 225
passivization, 46, 226
past, 25, 27, 102, 189, 195
periphery, 12–14
personal pronoun, 144
phrasal
  category, 31
plural, 6, 16, 25, 55, 84, 135, 140, 145, 148, 150
position
  of adverb, 194
possessive, 80, 139
postcopular, 236
pragmatics, 221
predicate, 36, 54, 58, 59, 78, 93, 158, 166, 168, 291
predication, 305, 307
preposition, 25, 33, 116, 131, 152, 153, 226, 228
prepositional
  object, 266
  object construction, 93
  verb, 226, 228
Prepositional Passive Construction, 228
prescriptive, 5
Present Inflectional Construction, 100, 333
preterminal, 30
principle of compositionality, 13
Principles and Parameters, 10
proform, 33
projection, 71, 77
promoted, 217
pronoun, 7, 33, 34, 56, 118, 152, 221
proposition, 180, 216
PS rules, 34, 37, 42, 44, 53, 76, 77, 81, 83, 188

quantificational, 151, 152
quantified NP, 283
question, 237

raising properties, 218
reanalysis, 227
reason, 60
recursive application, 42
redundancy, 77, 78
reflexive, 13, 139, 230
relative
  pronoun, 266, 274
relative clause
  bare, 272, 273, 277, 278
  infinitival, 266, 276
  nonrestrictive, 279, 280
  reduced, 267
  restrictive, 279, 280
rule-governed, 3, 6

SAI Construction, 247
SBCG (Sign-Based Construction Grammar), 16
selectional restriction, 108, 167
semantic
  constancy, 75
  constraint, 235
  enrichment, 22
  function, 253
  restriction, 75
  role, 54, 165, 179, 219, 291
semantic role
  agent, 54, 63, 65, 66, 89, 178, 181
  benefactive, 57, 64
  experiencer, 64, 66, 181, 182
  goal, 57, 64, 93
  instrument, 53, 64
  location, 64
  patient, 53, 54, 63, 65, 178
  recipient, 57
  source, 64
  theme, 63–65, 89, 93
semantics, 2, 3, 11, 13, 22, 24, 30, 41, 88, 95, 99, 112, 145, 153, 168, 221
semi-fixed expressions, 46
Sentential Subject Constraint (SSC), 286
signified, 19
signifier, 19
specificational, 307