
Syntactic Constructions in English

Construction Grammar (CxG) is a framework for syntactic analysis that
takes constructions – pairings of form and meaning that range from
the highly idiomatic to the very general – to be the building blocks of
sentence meaning. Offering the first comprehensive introduction to CxG
to focus on both English words and the constructions that combine them,
this textbook shows students not only what the analyses of particular
structures are, but also how and why those analyses are constructed,
with each chapter taking the student step by step through the reasoning
processes that yield the best description of a data set. It offers a wealth
of illustrative examples and exercises, largely based on real language
data, making it ideal for both self-study and classroom use. Written in an
accessible and engaging way, this textbook will open up this increasingly
popular linguistic framework to anyone interested in the grammatical
patterns of English.

JONG-BOK KIM is Professor of English Linguistics at Kyung Hee
University, Seoul.
LAURA A. MICHAELIS is Professor of Linguistics at the University of
Colorado Boulder.
Syntactic Constructions
in English

JONG-BOK KIM
Kyung Hee University, Seoul

LAURA A. MICHAELIS
University of Colorado Boulder
University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
79 Anson Road, #06–04/06, Singapore 079906

Cambridge University Press is part of the University of Cambridge.
It furthers the University’s mission by disseminating knowledge in the pursuit of
education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781108470339
DOI: 10.1017/9781108632706

© Jong-Bok Kim and Laura A. Michaelis 2020
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2020
Printed in the United Kingdom by TJ International Ltd, Padstow, Cornwall
A catalogue record for this publication is available from the British Library.
Library of Congress Cataloging-in-Publication Data
Names: Kim, Jong-Bok, 1966– author. | Michaelis, Laura A., 1964– author.
Title: Syntactic constructions in English / Jong-Bok Kim, Laura A.
Michaelis-Cummings.
Description: First edition. | New York : Cambridge University Press, 2020. | Includes
bibliographical references and index.
Identifiers: LCCN 2019057511 (print) | LCCN 2019057512 (ebook) | ISBN
9781108470339 (hardback) | ISBN 9781108632706 (ebook)
Subjects: LCSH: English language – Syntax. | English language – Grammar.
Classification: LCC PE1361 .K565 2020 (print) | LCC PE1361 (ebook) | DDC
425–dc23
LC record available at https://lccn.loc.gov/2019057511
LC ebook record available at https://lccn.loc.gov/2019057512
ISBN 978-1-108-47033-9 Hardback
ISBN 978-1-108-45586-2 Paperback
Cambridge University Press has no responsibility for the persistence or accuracy of
URLs for external or third-party internet websites referred to in this publication
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
Contents

Preface page xi

1 What Is a Theory of English Syntax About? 1


1.1 Linguistic and Syntactic Competence 1
1.2 Generative Grammars 5
1.3 How We Discover Descriptive Rules 5
1.4 Two Different Views of Generative Grammar 9
1.4.1 Deductive Reasoning and the Nativist View 10
1.4.2 Inductive Reasoning and the Constraint-Based View 12
1.5 Evidence That Grammar Is Construction-Based 14
1.6 Goals of This Book 15

2 Lexical and Phrasal Signs 19


2.1 Linguistic Signs and Constructions as Form-Meaning Pairs 19
2.2 From Lexical Signs to Phrasal Signs as a Continuum 20
2.3 Lexical Signs 24
2.3.1 Classifying Lexical Signs 24
2.3.2 Grammar with Lexical Categories Alone 29
2.4 Phrasal Constructions and Constituency Tests 31
2.5 Forming Phrasal Constructions: Phrase Structure Rules 34
2.5.1 NP: Noun Phrase 34
2.5.2 VP: Verb Phrase 35
2.5.3 AP: Adjective Phrase 37
2.5.4 AdvP: Adverb Phrase 38
2.5.5 PP: Preposition Phrase 39
2.5.6 CP and ConjP: Complementizer and Conjunction
Phrases 40
2.6 Grammar with Phrasal Constructions 40
2.7 Multi-word Expressions: Between Lexical and Phrasal
Constructions 45
2.7.1 Fixed Expressions 45
2.7.2 Semi-fixed Expressions 46
2.7.3 Syntactically Flexible Multi-word Expressions 47
2.8 Conclusion 49


3 Syntactic Forms, Grammatical Functions, and Semantic


Roles 53
3.1 Introduction 53
3.2 Grammatical Functions 54
3.2.1 Subjects 54
3.2.2 Direct Objects and Indirect Objects 56
3.2.3 Predicative Complements 58
3.2.4 Oblique Complements 59
3.2.5 Modifiers 59
3.3 Bringing Form and Function Together 60
3.4 Form-Function Mismatches 61
3.5 Semantic Roles 63
3.6 Conclusion 66

4 Head, Complements, Modifiers, and Argument Structures 70


4.1 Building a Phrase from a Head 70
4.1.1 Internal vs. External Syntax 70
4.1.2 The Notion of Head, Complements, and Modifiers 71
4.2 Differences between Complements and Modifiers 73
4.3 PS Rules, X′-Rules, and Features 76
4.3.1 Problems of PS Rules 76
4.3.2 Intermediate Phrases and Specifiers 78
4.3.3 Intermediate Phrases for Non-NPs 84
4.4 Lexicon and Feature Structures 84
4.4.1 Feature Structures and Basic Operations 85
4.4.2 Feature Structures for Linguistic Entities 87
4.5 Arguments and Argument-Structure Constructions 89
4.5.1 Basic Properties of Argument Structure 89
4.5.2 Types of Argument-Structure Constructions 90
4.5.3 Argument Structures as Constructions: Form and
Meaning Relations 94
4.6 Conclusion 96

5 Combinatorial Construction Rules and Principles 99


5.1 From Lexemes to Words 99
5.2 Head Features and Head Feature Principle 101
5.2.1 Parts of Speech Value as a Head Feature 101
5.2.2 Verb Form as a Head Feature 101
5.2.3 Mapping between Argument-Structure and
Valence Features 104
5.3 Combinatory Construction Rules 105
5.4 Nonphrasal, Lexical Constructions 111
5.5 Feature Specifications on the Syntactic Complement 113
5.5.1 Complements of Verbs 113
5.5.2 Complements of Adjectives 116
5.5.3 Complements of Common Nouns 117
5.6 Feature Specifications on the Subject 118
5.7 Clausal Complement and Subject 119
5.7.1 Verbs Selecting a Clausal Complement 119
5.7.2 Verbs Selecting a Clausal Subject 126
5.7.3 Adjectives Selecting a Clausal Complement 128
5.7.4 Nouns Selecting a Clausal Complement 129
5.7.5 Prepositions Selecting a Clausal Complement 131
5.8 Conclusion 131

6 Noun Phrases and Agreement 134


6.1 Classification of Nouns 134
6.2 Syntactic Structures 135
6.2.1 Common Nouns 135
6.2.2 Pronouns 139
6.2.3 Proper Nouns 140
6.3 Agreement Types and Morphosyntactic Features 141
6.3.1 Noun-Determiner Agreement 141
6.3.2 Pronoun-Antecedent Agreement 143
6.3.3 Subject-Verb Agreement 143
6.4 Semantic Agreement Features 145
6.5 Partitive NPs and Agreement 150
6.5.1 Basic Properties 150
6.5.2 Two Types of Partitive NPs 151
6.5.3 Measure Noun Phrases 157
6.6 Modifying an NP 158
6.6.1 Adjectives as Prenominal Modifiers 158
6.6.2 Postnominal Modifiers 160
6.7 Conclusion 161

7 Raising and Control Constructions 164


7.1 Raising and Control Predicates 164
7.2 Differences between Raising and Control Verbs 165
7.2.1 Subject Raising and Control 165
7.2.2 Object Raising and Control 168
7.3 A Simple Transformational Approach 169
7.4 A Nontransformational, Construction-Based Approach 172
7.4.1 Identical Syntactic Structures 172
7.4.2 Differences among the Feature Specifications in
the Valence Information 174
7.4.3 A Mismatch between Meaning and Structure 178
7.5 Explaining the Differences 181
7.5.1 Expletive Subject and Object 181
7.5.2 Meaning Preservation 181
7.5.3 Subject vs. Object Control Verbs 182
7.6 Conclusion 183

8 Auxiliary and Related Constructions 186


8.1 Basic Issues 186
8.2 Transformational Analyses 188
8.3 A Construction-Based Analysis 190
8.3.1 Shared Properties of Raising Verbs 190
8.3.2 Modals 191
8.3.3 Be and Have 193
8.3.4 Periphrastic Do 196
8.3.5 Infinitival Clause Marker To 198
8.4 Capturing NICE Properties 199
8.4.1 Auxiliaries with Negation 199
8.4.2 Auxiliaries with Inversion 204
8.4.3 Contracted Auxiliaries 208
8.4.4 Auxiliaries with Ellipsis 209
8.5 Conclusion 212

9 Passive Constructions 216


9.1 Introduction 216
9.2 The Relationship between Active and Passive 217
9.3 Approaches to Passive 219
9.3.1 From Structural Description to Structural Change 219
9.3.2 A Transformational Approach 220
9.3.3 A Construction-Based Approach 221
9.4 Prepositional Passives 226
9.5 The Get-Passive 229
9.6 Conclusion 233

10 Interrogative and Wh-question Constructions 237


10.1 Clausal Types and Interrogatives 237
10.2 Movement vs. Feature Percolation 239
10.3 Feature Percolation with No Abstract Elements 242
10.3.1 Basic Systems 242
10.3.2 Nonsubject Wh-questions 245
10.3.3 Subject Wh-questions 250
10.4 Indirect Questions 253
10.4.1 Basic Structures 253
10.4.2 Non-wh Indirect Questions 257
10.4.3 Infinitival Indirect Questions 258
10.4.4 Adjunct Wh-questions 261
10.5 Conclusion 263

11 Relative Clause Constructions 266


11.1 Introduction 266
11.2 Nonsubject Wh-Relative Clauses 267
11.3 Subject Relative Clauses 272
11.4 That-Relative Clauses 274
11.5 Infinitival and Bare Relative Clauses 276
11.6 Restrictive vs. Nonrestrictive Relative Clauses 279
11.7 Island Constraints on the Filler-Gap Dependencies 284
11.8 Conclusion 287

12 Tough, Extraposition, and Cleft Constructions 290


12.1 Introduction 290
12.2 ‘Tough’ Constructions and Topichood 291
12.2.1 Basic Properties 291
12.2.2 Transformational Analyses 292
12.2.3 A Construction-Based Analysis 293
12.3 Extraposition 297
12.3.1 Basic Properties 297
12.3.2 Transformational Analysis 298
12.3.3 A Construction-Based Analysis 299
12.4 Cleft Constructions 303
12.4.1 Basic Properties 303
12.4.2 Distributional Properties of the Three Clefts 304
12.4.3 Syntactic Structures of the Three Types of Cleft:
Movement Analyses 305
12.4.4 A Construction-Based Analysis 307
12.5 Conclusion 314

Afterword 317
Appendix 320
Bibliography 337
Index 352
Preface

Charles J. Fillmore, an exalted scholar of syntax at the University of California,
Berkeley, used to say that studying the syntax of any language is like trying to
examine a web made of chain that has sunk to the bottom of a swamp. There
is no way to see the full structure at once. Instead, he said, you have to pick up
one small piece at a time, clean off that piece, and then examine it. But each
time you lift up a new piece for study, the piece you are already holding will
slide back into the swamp. The point, we think, is that it is hard to develop a
complete picture of the grammar of a language, and each new fact we uncover
might make us doubt an analysis we have previously given. But there is only one
way to proceed in grammar analysis, and that is from linguistic fact to linguistic
fact, as we slowly develop a picture of how the facts fit together. We offer this
book as a small contribution to that enterprise; it is intended to inspire careful
syntactic scholarship.
This book grew out of Kim and Sells’s (2008) English Syntax: An Introduc-
tion. The key property that distinguishes this book from its predecessor is that
it uses a synthesis of Construction Grammar and HPSG (Head-driven Phrase
Structure Grammar) to analyze English syntactic structures. Construction Gram-
mar returns to the traditional idea that a grammar is composed of conventional
associations of form and meaning. It aims to provide full coverage of the facts
of the language under study. An allied theory, HPSG is a lexicalized, constraint-
based grammar that relies on de Saussure’s concept of the sign (an association of
signifier (form) and signified (meaning)), and in particular the idea that language
is an infinite set of signs, including complex phrasal signs. A synthesis of these
two grammars, Sign-Based Construction Grammar (SBCG), was brought forth
in the first decade of the new millennium. SBCG aims to expand the range of
phenomena covered by HPSG grammars while also improving the formal rigor
of construction-based grammar description, for example, by reducing the reper-
toire of grammatical features used. The descriptive tools used in this book are
directly inspired by SBCG.
Successful teaching of English syntax (whether one’s students are native or
nonnative speakers) requires the ability to strike a balance in the exposition
between facts and theory. Students who study English syntax want to learn basic
facts of English grammar in use, and transparent ways to represent those facts,
so that they can extend what they know to newly encountered structures. In this
book, as in Kim and Sells (2008), we try to offer an explicit account of the form,
meaning, and use of English sentences, both simple and complex, including their
correct syntactic structures.
The book focuses primarily on the descriptive facts of English syntax,
presented through a ‘lexical lens’ that encourages students to recognize the
important contribution that words and word classes make to syntactic structure. It
then proceeds with the basic theoretical concepts of declarative grammar (in the
framework of SBCG), providing sample sentences. We have tried to make each
chapter maximally accessible to those with no background knowledge of English
syntax. We provide clear, simple tree diagrams that will help students understand
recursive structures in syntax. The theoretical notions are simply described but
framed as precisely as possible so that students can apply them in analyzing
English sentences. Each chapter also contains exercises ranging from straight-
forward to challenging, aiming to promote a deeper understanding of the factual
and theoretical contents of each chapter.
We relied heavily on prior works on English syntax. In particular, much
of the content, as well as our exercises, were inspired by or adapted from
renowned textbooks including Aarts (1997, 2001), C. L. Baker (1995), Bors-
ley (1991, 1996), Radford (1988, 1997, 2004), Miller (2000), Sag et al. (2003),
Carnie (2002, 2011), and Hilpert (2014). These works have set the standard for
syntactic description and argumentation for decades.
Many people have supported and/or improved this textbook. This work owes
a great intellectual debt to the late Ivan A. Sag, who demonstrated that an ele-
gant and intuitive grammar formalism can also have extraordinary sweep and
scope. Our thanks also go to Peter Sells for contributing foundations for this
book in Kim and Sells (2008). We thank anonymous reviewers of prior drafts
of this book for detailed comments and suggestions which helped us reshape
it. We are grateful for the advice and insights of linguistic colleagues includ-
ing Anne Abeillé, Doug Arnold, Jóhanna BarDdal, Emily Bender, Bob Borsley,
Rui Chaves, Suk-Jin Chang, Hee-Rahk Chae, Sae-Youn Cho, Incheol Choi, Jae-
Woong Choi, Chan Chung, Mark Davies, Elaine Francis, Jonathan Ginzburg,
Adele Goldberg, Martin Hilpert, Paul Kay, Jungsoo Kim, Valia Kor-
doni, Chungmin Lee, Juwon Lee, Kiyong Lee, Bob Levine, Philip Miller, Stefan
Müller, Joanna Nykiel, Byung-Soo Park, Chongwon Park, Javier Pérez-Guerra,
Jeffrey Runner, Manfred Sailer, Rok Sim, Sanghoun Song, Eun-jung Yoo, James
Yoon, Frank Van Eynde, Gert Webelhuth, and Stephen Wechsler. We also thank
students and colleagues at Kyung Hee University, Seoul and the University of
Colorado Boulder for their encouragement over the years. In particular, we thank
students who used drafts of this textbook and raised questions that helped us
solidify its structure and content. We are also grateful to Helen Barton at Cam-
bridge University Press for her outstanding advice and support, and to Catherine
Dunn and Stanly Emelson for expert editorial and production assistance. The
first author also acknowledges support from the Alexander Von Humboldt Foun-
dation, from which he received a Humboldt Research Award in 2019. Lastly,
we thank our close friends and family members, whose love and understanding
sustained us through the writing process.
1 What Is a Theory of English Syntax
About?

1.1 Linguistic and Syntactic Competence

We language users believe that we ‘know’ a language, but the question
is what we know when we know a language, like English or Korean. It may
mean that we know how to create natural English sentences like (1a) but not
unnatural sentences like (1b):1
(1) a. We can’t pay for health care benefits like this, but you can.
b. *We can’t keep paying for health care benefits like this, but you can keep.2

In the same way, speakers who know English may accept (2a) and (2c), but not
(2b):3
(2) a. She swam.
b. *She swam the passengers.
c. She swam the passengers to three nearby boats.

This implies that knowing a language means that (English) speakers have linguis-
tic knowledge sufficient to distinguish between ‘acceptable’ and ‘unacceptable’
sentences. However, when speakers are asked to articulate what kind of knowl-
edge allows them to make these distinctions, it is not easy for them to describe it.
This knowledge of language, often called linguistic competence, is the ability
to speak a language. Knowing one’s native language requires neither skill nor
talent, but it is nonetheless an accomplishment worthy of investigation.
Linguistic competence involves several different levels of language structure.
It includes phonetic and phonological competence: knowledge of the sounds
1 The example in (1a) is from the corpus COCA (Corpus of Contemporary American English),
a collection of 560 million words of text from five different genres including spoken, fiction,
magazine, newspaper, and academic texts. Throughout this book, we will use many corpus exam-
ples (extracted mainly from COCA) to portray English as it is actually spoken. We will, however,
suppress their exact sources in the interest of readability.
2 The notation * indicates that the particular example is ungrammatical or unacceptable. The notion
of grammaticality (grammatical or ungrammatical) is closely related to that of acceptability
(acceptable or unacceptable). Grammaticality has to do with whether a given sentence conforms
to the rules and constraints of the relevant grammar, while acceptability has to do with whether
a native English speaker would judge the sentence to be an instance of native English. Unless a
distinction is required, we use these notions interchangeably.
3 These examples are based on those used by Goldberg (1995). See Chapter 4.5 for discussion of
such sentences.

of the language and their pronunciation variants. Linguistic competence also
includes morphological competence. English native speakers, for example, can
decide which words are (or could be) English and which are not. They (implic-
itly) know the rules for forming words, enabling them to make the past tense of
an unfamiliar verb like winter or a new verb like google, as illustrated by the
following corpus examples:
(3) a. Swallows wintered beneath the lakes.
b. She googled his name and discovered ninety-four hits.

Semantic competence includes the ability to determine the meaning of a particular
sentence from the words of the sentence and their manner of combination.
Native speakers distinguish the meanings of the following two sentences, which
contain the same words but in different word orders:
(4) a. The dog chased the cat up a tree.
b. The cat chased the dog up a tree.

English speakers also interpret sentences flexibly, according to interactional
context, enabling them to give appropriate responses to each. Consider the following
utterances:
(5) a. Can you give me an aisle seat? (said at an airport check-in counter)
b. Can you pass the maple syrup, please? (said at a dining table)

The speaker’s intent in uttering such sentences is not just to inquire about the
hearer’s ability but also to request an aisle seat and the syrup, respectively.
The person to whom such a question is directed can infer that it is actually a
directive.
The pivotal competence that we are concerned with in this book is syntac-
tic competence: the ability to combine words into phrases that conform to the
phrasal patterns of the language. Children learn these patterns without explicit
training. How exactly they do so is a matter of controversy. Some linguists claim
that certain aspects of grammar must be innate, because children do not receive
enough data during early development to determine what the patterns are. Others
argue that syntactic competence is in fact something that a child acquires through
learning, and that the proponents of innate grammar have overlooked children’s
outstanding capacity to imitate adult routines and to infer patterns from rich but
noisy input. We do not attempt to resolve this controversy here, because our focus
is on what constitutes the adult’s knowledge of language, and not the means by
which it is achieved.4
Although children do not receive explicit instruction in their first language,
they somehow gain the ability to produce all and only the grammatical sentences
of their language and to distinguish grammatical sentences from ungrammatical

4 We refer the interested reader to the rich literature on grammar learnability, which includes works
by Goldberg (2006), Tomasello (2009), Newport (2016), and Chater and Christiansen (2018).

ones, as in (1). This kind of competence arises because language is rule-governed.
One piece of evidence that it is a rule-governed system can be
observed in word-order restrictions. If a sentence is an arrangement of words and
we have five words, such as player, ball, a, the, and kicked, how many possible
combinations can we have from these five words? Mathematically, the number
of possible combinations of five words is 5! (factorial), equalling 120 instances.
But among these 120 possible combinations, there is only a limited number of
grammatical English sentences (including those that are semantically odd, as in
(6c) and (6d)):5
(6) a. The player kicked a ball.
b. A player kicked the ball.
c. The ball kicked a player.
d. A ball kicked the player.
e. The ball, a player kicked.
f. ...

Most of the combinations, a few of which are given in (7), are unacceptable to
native speakers of English:
(7) a. *Kicked the player the ball.
b. *Player the ball kicked the.
c. *The player a kicked ball.

It is clear that there are certain rules in English for combining words. These rules
constrain which words can be combined and how they can be ordered, sometimes
in groups, with respect to each other.
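As a quick check on the arithmetic above, the 120 orderings of the five words can be enumerated directly. This is only an illustrative sketch: the small set of grammatical orders hard-coded below is keyed to example (6) and is not meant to be exhaustive.

```python
from itertools import permutations

words = ["the", "player", "kicked", "a", "ball"]
orders = list(permutations(words))
print(len(orders))  # 5! = 120 possible orderings

# A few of the orderings that are grammatical English sentences, cf. (6);
# nearly all of the remaining orderings pattern with the unacceptable (7).
grammatical = {
    ("the", "player", "kicked", "a", "ball"),
    ("a", "player", "kicked", "the", "ball"),
    ("the", "ball", "kicked", "a", "player"),
    ("a", "ball", "kicked", "the", "player"),
}
print(sum(1 for o in orders if o in grammatical))  # 4 of the 120
```

The point of the sketch is the ratio: a brute-force enumeration of word strings vastly overgenerates, so some combinatory rule must be filtering the possibilities.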
Such combinatory rules also enable speakers to construct (or construe) complex
sentences like (8a).6 Whatever the combinatory rules are, they should give
(8) a. My parents decided to stay in the house they built.
b. *My parents decided to stay in the house they built it.

The fact that we require such combinatory knowledge also provides an argument
for the assumption that we use a finite set of resources (expressions and rules) to
produce and interpret grammatical sentences, and that we do not just rely on the
meanings of the words involved. Consider the examples in (9):7
(9) a. I *(am) fond of that garden.
b. He *(is) angry at the not guilty verdict.

5 Examples like (6e) are called ‘topicalization’ sentences: The topic expression (the ball), already
mentioned or understood in a given context, is placed in a sentence initial position. See
Lambrecht (1994), Gregory and Michaelis (2001), and references therein.
6 In Chapter 2, we will begin to see these combinatory rules.
7 The star * in front of the parenthesis symbols means that the expression within the parentheses
cannot be omitted.

The omission of the copula verbs am and is would not prevent us from under-
standing the intended meaning, but the presence of these words is a structural
requirement here.
In addition to being rule-based, syntactic competence powers the creativity
(expressivity) that defines language ability. Speakers can produce and understand
an infinite number of new grammatical sentences that they have never spoken
or heard before. For example, native speakers of English may have never
heard, seen, or talked about the subject matter of sentences like (10) before, but
they would have no difficulties producing or understanding such sentences:

(10) Forget intelligence or wisdom. A muscular physique might just be a more
important attribute when it comes to judging a person’s leadership potential,
according to a new study.8

The expressivity intrinsic to grammatical competence is unbounded: A language
user can produce and understand an infinite number of grammatical sentences.
For example, given the simple sentence (11a), we can make a more complex
one like (11b) by adding an adjective like isolated, which modifies the noun
nation. To this sentence, we can add another adjective, corrupt, as in (11c). We
could continue adding adjectives, theoretically enabling us to produce an infinite
number of sentences:

(11) a. The nation faced sanctions.
b. The isolated nation faced sanctions.
c. The isolated, corrupt nation faced sanctions.
d. The isolated, corrupt, belligerent nation faced sanctions.
e. ...

One might argue that since the number of English adjectives is limited, there
should be a limit to this process. However, there are numerous examples in which
we could keep such a process going, as shown by the following (Sag et al., 2003:
22):

(12) a. Some sentences can go on.
b. Some sentences can go on and on.
c. Some sentences can go on and on and on.
d. Some sentences can go on and on and on and on.
e. ...

To (12a), we add the string and on, producing a longer one, (12b). To the result-
ing sentence, we once again add and on and make (12c). We could in principle
go on adding without stopping: This is enough to prove that language has infinite
creative potential (see Chomsky, 1957, 1965).
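The recursive step behind (12) can be mimicked with a single finite rule. The function below is our own illustrative sketch, not part of any grammar formalism: one rule, applied any number of times, yields unboundedly many distinct sentences.

```python
def go_on_sentence(n):
    """Apply the 'add "and on"' step n times to the base sentence (12a).

    A toy illustration of discrete infinity: a finite rule that
    generates an unbounded set of distinct grammatical sentences."""
    return "Some sentences can go on" + " and on" * n + "."

print(go_on_sentence(0))  # (12a): Some sentences can go on.
print(go_on_sentence(3))  # (12d): Some sentences can go on and on and on and on.
```

Each value of n yields a new, longer sentence, so the set of outputs has no largest member.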

8 Excerpt from the newspaper Science Newsline.



1.2 Generative Grammars

As discussed in the previous section, language is an infinite resource:
There is no upper limit to the number of distinct grammatical arrangements of
words that a native speaker can produce. Because any language user, at any
time, can produce a grammatically licit string of words that she or he has never
encountered before, it cannot be the case that language users have somehow
managed to memorize every string of words that we might see or hear (Pullum
and Scholz, 2002). Thus, a grammar cannot be an exhaustive list of possible word
strings. It must instead be a model of our capacity to create (and understand) sen-
tences of the language – a capacity referred to in the literature as competence.
Nearly all syntactic theorists working today assume the following as a working
hypothesis:
(13) All native speakers have grammatical competence that enables them to
produce and understand an infinite number of grammatical sentences.

As reflected in (13), grammatical competence is a kind of discrete infinity: a
limited repertoire of rules that allows us to make an infinite number of acceptable
sentences. This grammatical competence is modeled by a generative grammar,
which we then can define as follows (for English, in this instance):
(14) An English generative grammar is one that can generate an infinite set of
well-formed English sentences from a finite set of rules or principles that do
not generate any of the non-well-formed sentences.

The job of the syntactician is thus to discover and formulate these rules or
principles, which is also our goal here.

1.3 How We Discover Descriptive Rules

When talking about language rules or principles, we must recognize
that there are two types of rules: prescriptive and descriptive rules. The key dif-
ference between the two is that descriptive rules capture naturally occurring
language patterns, while prescriptive rules recommend certain usage practices.
Prescriptive rules, as illustrated in (15), tell us how language ought to be used,
rather than describing the language as it is:
(15) a. Do not end a sentence with a preposition.
b. Avoid split infinitives.
c. Use who rather than that to introduce a relative clause that describes a
human.

However, the very existence of a prescriptive rule is good evidence that the tar-
geted usage practice is commonplace, as suggested by the following attested
‘violations’:

(16) a. Who does she work with?
b. Young people need to try to boldly go where no one has gone before.
c. And she’s the person that puts together the master list of songs.

Descriptive rules characterize whatever forms speakers actually use. One might
have occasion to posit both prescriptive and descriptive rules, but the rule-
governed grammar we are exploring in this book consists exclusively of
descriptive rules.
The ensuing question is then: how can we discover the descriptive rules of
English syntax – those that can generate all of the grammatical sentences but
none of the ungrammatical ones? As noted earlier, these rules are part of our
knowledge about language but are not consciously accessible; speakers can-
not articulate their content if asked to do so. Hence we can discover the rules
indirectly: We infer these latent rules from the observed data of a language.
These data can come from speakers’ judgments – known as intuitions – or from
collected data of produced written or spoken language – often called corpora.
Linguists use patterns in data to make inferences about an underlying phe-
nomenon, and this is why we take linguistics to be an empirical discipline.
The basic steps involved in doing such data-based linguistic research can be
summarized as follows:
• Step I: Collect and observe data.
• Step II: Make a hypothesis to cover the first set of data.
• Step III: Check the hypothesis using more data.
• Step IV: Revise the hypothesis if necessary.
Let us now use these basic strategies to discover one of the grammar rules of
English: the rule that distinguishes count and mass (noncount) nouns.9
Step I: Observing Data. To discover a grammar rule, the first thing we need
to do is examine grammatical and ungrammatical variants of the expression in
question. For example, let us look at the usage of the word evidence:
(17) Data Set 1: evidence
a. *The professor found some strong evidences of water on Mars.
b. *The professor was hoping for a strong evidence.
c. *The evidence that Jones found was more helpful than the one that Smith
found.

What can you tell from these examples? We can make the following observa-
tions:
(18) Observation 1:
a. evidence cannot be used in the plural.
b. evidence cannot be used with the indefinite article a(n).
c. evidence cannot be referred to by the pronoun one.

9 The discussion and data in this section are adapted from Baker (1995).

In any form of scientific research, one example is insufficient to enable us to
draw a conclusion. However, we can easily find more words that behave like
evidence:
(19) Data Set 2: equipment
a. *We had hoped to get three new equipments every month, but we only had
enough money to get an equipment every two weeks.
b. *This is a large truck which has an equipment to automatically bottle the
wine.
c. *The equipment we bought last year was more expensive than the one we
bought this year.

We thus extend Observation 1 a little bit further:


(20) Observation 2:
a. evidence/equipment cannot be used in the plural.
b. evidence/equipment cannot be used with the indefinite article a(n).
c. evidence/equipment cannot be referred to by the pronoun one.

It is usually necessary to find contrastive examples to understand the range of a given observation. For instance, words like clue and tool act differently:
(21) Data Set 3: clue
a. They hold vital clues to deciphering the history of the solar system.
b. That would give us a good clue that something funny is going on.
c. The clue that John got was more helpful than the one that Smith got.

(22) Data Set 4: tool


a. The word clouds are good tools for engaging in critical thinking.
b. Trade can be a powerful tool for global growth.
c. The tool that Jones got was more helpful than the one that Smith got.

Unlike equipment and evidence, the nouns clue and tool are acceptable in the linguistic test contexts we have set up. We can thus add Observation 3, which contrasts with Observation 2:
(23) Observation 3:
a. clue/tool can be used in the plural.
b. clue/tool can be used with the indefinite article a(n).
c. clue/tool can be referred to by the pronoun one.

Step II: Forming a Hypothesis. From the data and observations we have
made so far, can we make any hypothesis about the English grammar rule in
question? One hypothesis that we can make is the following:
(24) First Hypothesis:
English has at least two groups of nouns, Group I (count nouns) and Group
II (mass nouns), diagnosed by tests of plurality, the indefinite article, and the
pronoun one.
WHAT IS A THEORY OF ENGLISH SYNTAX ABOUT?

Step III: Checking the Hypothesis. Once we have formed such a hypothesis,
we need to determine whether it is true of other data and to see if it has other
analytical consequences. A little further thought allows us to find support for the
two-way distinction among nouns. For example, consider the usage of much and
many:
(25) a. much evidence, much equipment, much information, much advice
b. *much clue, *much tool, *much armchair, *much bags
(26) a. *many evidence, *many equipment, *many information, *many advice
b. many clues, many tools, many suggestions, many armchairs
As observed here, plural count nouns can occur only with many, whereas mass
nouns can combine with much. Similar support can be found in the usage of little
and few:
(27) a. little evidence, little equipment, little advice, little information
b. *little clue, *little tool, *little suggestion, *little armchair
(28) a. *few evidence, *few equipment, *few furniture, *few advice, *few
information
b. few clues, few tools, few suggestions, few armchairs
The word little can occur with mass nouns like evidence, whereas few cannot; conversely, few occurs only with count nouns.
Given these data, the two-way distinction appears plausible and persuasive. We can now ask whether just two groups really suffice for the classification of nouns. Consider the following examples with cake:
(29) a. She makes very good cakes.
b. The president was hoping for a good cake.
c. The cake that Jones got was more delicious than the one that Smith got.
Similar behavior can be observed with a noun like beer:
(30) a. I like good, dark, full-flavored beers.
b. No one knows how to tell a good beer from a bad one.

These data show us that cake and beer can be classified as count nouns. However,
observe the following:
(31) a. My pastor says I ate too much cake.
b. The students drank too much beer last night.
(32) a. We recommend that you eat less cake and pastry.
b. People now drink less beer.

The data indicate that cake and beer can also be used as mass nouns, since they
can be used with less or much.
Step IV: Revising the Hypothesis. The examples in (31) and (32) imply that
there is another group of nouns: those that can be used as both count nouns and
mass nouns. This leads us to revise the hypothesis in (24) as follows:

(33) Revised Hypothesis:


There are at least three groups of nouns: Group 1 (count nouns), Group 2
(mass nouns), and Group 3 (count and mass nouns).

We can expect that context will determine whether a Group 3 noun is used as
count or as mass.
As we have observed thus far, the process of discovering grammar rules cru-
cially hinges on finding data, drawing generalizations, making a hypothesis, and
revising this hypothesis with more data. In addition, we have noticed that gram-
matical generalizations may actually be generalizations about classes of words,
like the class of count nouns.
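The classification procedure we have just walked through can be sketched as a small program. This is only an illustration: the diagnostic judgments below are hand-coded from the data sets in this section, and the names JUDGMENTS and classify are our own inventions (a real study would gather judgments from corpora or speakers).

```python
# Toy classifier for the noun groups in the Revised Hypothesis.
# Each entry hand-codes two diagnostics from this section:
# can the noun pluralize, and can it follow "much"?
JUDGMENTS = {
    # noun:      (plural OK?, "much N" OK?)
    "evidence":  (False, True),
    "equipment": (False, True),
    "clue":      (True,  False),
    "tool":      (True,  False),
    "cake":      (True,  True),   # attested in both uses
    "beer":      (True,  True),
}

def classify(noun):
    """Assign a noun to Group 1 (count), Group 2 (mass), or Group 3 (both)."""
    plural_ok, much_ok = JUDGMENTS[noun]
    if plural_ok and much_ok:
        return "count and mass"   # Group 3
    if plural_ok:
        return "count"            # Group 1
    if much_ok:
        return "mass"             # Group 2
    return "unclassified"         # would trigger another Step IV revision

print(classify("evidence"))  # mass
print(classify("clue"))      # count
print(classify("beer"))      # count and mass
```

A noun whose judgments fit none of the three patterns would force a further revision of the hypothesis, exactly as in Step IV.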

1.4 Two Different Views of Generative Grammar

We have seen that a theory of English syntax seeks answers to questions like how we can produce an infinite array of grammatical sentences and
why some sentences are grammatical (or acceptable) while others are not. To
answer such questions, we can derive generalizations from the observations of
examples or data under investigation, as we did when discovering the distinction
between count nouns and mass nouns in English. Such reasoning is traditionally
called inductive reasoning, which derives broad generalizations from specific
observations. In inductive reasoning, then, the investigator draws a conclusion
about what is probably the case given the evidence encountered thus far. By con-
trast, we can start out with a general statement, or hypothesis, and then test it
against data to ascertain its validity. Such reasoning, called deductive reason-
ing, is often adopted in natural sciences like physics. In deductive inference, we
thus investigate the consequences of a theory, asking ourselves what would have
to be the case if the theory is valid.
Each of these two forms of reasoning is associated with a different view of
generative grammar. Deductive reasoning is closely associated with the trans-
formational or movement-based view, in which ‘underlying structures,’ which
feature a transparent relationship between syntax and semantics, are altered
by various operations that produce an array of ‘surface realizations’ of those
patterns. One such relationship is that between a question and its declarative
counterpart (e.g., For whom will you vote? vs. You will vote for whom). The
array of acceptable sentences is determined by the conditions on the applica-
tion of transformations: Some properties of a given underlying structure may
block the application of a given movement rule. Transformational grammari-
ans are engaged in a deductive enterprise because they follow the premise that
human languages have all and only those (structural) properties that are express-
ible in the transformational formalism: a structure-building operation (in recent
versions of this framework called merge) and a structural displacement opera-
tion (in recent implementations of the model called move). A transformational

grammar thus seeks maximally general analyses conforming to the categories, relations, and operations assumed to characterize the language faculty, as when,
for example, a sentence’s structural complexity is analyzed in terms of the
number of operations required to derive that sentence from its underlying
structure.
Inductive reasoning plays a key role in the development of the so-called
constraint-based view. A constraint-based grammar simply enumerates all of
the patterns that exist in a grammar from general to specific, without attempting
to derive one from another. A sentence or phrase of the language is predicted to
be permissible insofar as it conforms to one or more of the existing patterns. In
what follows, we will consider some key ideas of these two frameworks.

1.4.1 Deductive Reasoning and the Nativist View


Deductive reasoning is the key mode of explanation in Chomsky’s
linguistic framework, which represents human grammatical competence as a
‘generative engine’ that can produce only the grammatical sentences of a lan-
guage like English.10 One key hypothesis accepted by proponents of Chomsky’s
theory of grammar is the innateness hypothesis, which is often called the
nativist view.
The nativist view “takes as a basic assumption that children are ‘hardwired’
with linguistic knowledge that gives them access to structural representations in
the absence of experience” (Thornton, 2016). These structural representations,
as well as the computational mechanism that operates over them, are referred
to collectively as ‘universal grammar’ (UG) or the ‘language faculty.’ On this
account, learning one’s first language is simply a matter of determining, through

10 The historical development of the Chomskyan view, also called Transformational Grammar, can
be summarized as follows:

a. Standard Theory (1957–1965)


b. Extended Standard Theory (1965–1973)
c. Revised Extended Standard Theory (1973–1976)
d. GB (Government and Binding)/P&P (Principles and Parameters) Theory (1981–
1990)
e. Minimalist Program (1990–present)

The Standard Theory, laid out by Chomsky (1957, 1965), is the original form of generative
grammar, and introduces two representations for sentential structure: deep structure and sur-
face structure. These two levels are linked by transformational rules. The next stage is the
so-called Extended Standard Theory, where X-bar theory is introduced as a generalized model
of phrase structures. The Revised Extended Standard Theory generalizes transformational rules
as Move-α. These previous theories are radically revised in GB (Government and Binding)/P&P
(Principles and Parameters) theory (1981–1990). GB theory, armed with subtheories like gov-
ernment and binding, is the first theory to be based on the principles and parameters model of
language. The P&P framework also underlies the later developments of the MP (Minimalist Pro-
gram), which tries to provide a conceptual framework for the development of linguistic theory
(Chomsky, 1995).

exposure, which components of the UG tool kit are present in the particular
language, for example, what the word order of the language is. The theory is
deductive in that linguistic data are assumed to reflect properties of UG: The the-
orist must attempt to square the facts of language with the presumed properties
of UG.
Another key component of the Chomskyan nativist view is that the language
faculty consists of several modules. According to Chomsky (1965), (mental)
grammar can be divided into three basic components: syntax, semantics, and
phonology. Each module has its own categories and rules that are in principle
independent of each other. On this account, syntax is ‘autonomous’ in the sense
that syntax can be analyzed without reference to meaning, as illustrated by the
following example, made famous by Chomsky (1957):
(34) Colorless green ideas sleep furiously.

Even if we do not know what this sentence means, we can still immediately
apprehend that the sentence is grammatical, whereas *Green furiously ideas col-
orless sleep is not. The syntactic system manipulates symbols (expressions) not
according to meaning but rather according to the position occupied by those
symbols in hierarchical syntactic structure. One consequence of the autonomy
view is that properties like being the subject of a sentence cannot be described
according to presumed functional properties of subject (like being the topic of
the sentence) but must instead be represented in syntactic terms, for example,
being in a specific location in a hierarchical syntactic structure.11
It is important to recognize that the central goal of Chomskyan theory is not
to describe all of the grammatical patterns of particular languages but rather to
explain how children acquire language, starting from the assumption that chil-
dren are not exposed to sufficiently rich data within their linguistic environments
to learn all of the grammatical patterns of their first language. The explana-
tory mechanism involves a form of UG consisting of general principles (e.g.,
a sentence always has a grammatical subject, even if it is not overtly expressed)
combined with binary parameter settings intended to capture variability across
languages (e.g., some languages require overt subjects and others do not). Propo-
nents of this view seek to predict the structures of a given language and minimize
what must be stipulated. In the Chomskyan view:
the notion of grammatical construction is eliminated, and with it, the con-
struction particular rules. Constructions such as verb phrase, relative clause,
and passive remain only as taxonomic artifacts, collections of phenomena
explained through the interaction of the principles of UG, with the values of
the parameters fixed. (Chomsky, 1993: 4)

It is self-evident that the syntactic phenomena that one could predict based on
the principles and the particular parameter settings of a language are only the

11 See Chapter 3 for further discussion of subject properties.



most basic (core) patterns, and that idioms and other specialized (peripheral)
patterns in a language would fall outside the scope of such a framework. This
limitation in grammar coverage is something that proponents of the Chomskyan
framework accept, inasmuch as they view the theory as a narrow theory of basic
or core grammar, which does not, and need not, describe idiosyncratic (or periph-
eral) phenomena that often arise from historical accident, including expressions
that were borrowed from another language or developed from language formulas
or other conventions (e.g., abbreviations).

1.4.2 Inductive Reasoning and the Constraint-Based View


In contrast to the Chomskyan view, the constraint-based view takes
the central goal of syntactic theory to be precise and broad grammatical descrip-
tion of individual languages. The grammar of a language is viewed as an
inventory of patterns ranging from tightly bound idioms to fully general patterns.
Constraint-based grammars have become associated with data-driven models of
language learning that reject the idea of a genetically specified UG, instead view-
ing grammar as an evolving system whose features have been shaped by human
cognitive capacities and transmission from one generation of language users to
the next. The boundaries of the grammar are determined not by principles and
structures of UG but rather by linguistic convention. Rather than being ‘ruled
out’ on principle, nonoccurring sentences are simply those that are not ‘ruled
in’: there is no combination of words and phrases (or constructions) that could
be used to build such sentences. In constraint-based grammars:

[a]n expression is syntactically well-formed if its phonological form is paired with its semantics as an instance of some syntactic construction.
It follows that an expression is ungrammatical only because there is no
combination of constructions that license it, not because there is some
cross-constructional filter that rules it out. (Zwicky, 1994: 614)

Unlike practitioners of the Chomskyan framework, proponents of the constraint-based view seek to catalogue the observations of a language under
investigation and from these form generalizations about the grammatical con-
ventions of the language; this is inductive reasoning. One key difference
between constraint-based grammar and Chomskyan generative grammar is
that grammars of the latter type use mechanisms to preempt certain kinds of
symbol-manipulation operations, while grammars of the former type simply
enumerate “constraints that structures are required to satisfy in order to be con-
sidered well formed” (Pollard, 1996). For this reason, constraint-based grammars
are considered declarative, rather than procedural, grammars. Constraint-based
grammars are essentially idealized traditional grammars, with the goal of accu-
rately and parsimoniously describing all of the linguistic conventions of the
language under study.
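The idea that sentences are 'ruled in' rather than 'ruled out' can be caricatured in a few lines of code: grammaticality is not the output of a derivational procedure but a check that some combination of stored patterns licenses the string. The mini-lexicon, the category labels, and the function names below are invented for illustration only and stand in for the much richer constraints of an actual constraint-based grammar.

```python
# A declarative toy grammar: a string is well formed iff some
# combination of constructions licenses it; nothing "rules out"
# the bad strings, they simply fail to be ruled in.
LEXICON = {"the": "Det", "a": "Det", "student": "N",
           "refund": "N", "denied": "V"}

# Each construction pairs a mother category with a daughter sequence.
CONSTRUCTIONS = [
    ("NP", ["Det", "N"]),
    ("VP", ["V"]),
    ("VP", ["V", "NP"]),
    ("S",  ["NP", "VP"]),
]

def licenses(cat, words):
    """True iff the word string can be built as an instance of cat."""
    if len(words) == 1 and LEXICON.get(words[0]) == cat:
        return True
    for mother, daughters in CONSTRUCTIONS:
        if mother == cat and matches(daughters, words):
            return True
    return False

def matches(daughters, words):
    """True iff the words split into substrings licensing each daughter."""
    if not daughters:
        return not words
    head, rest = daughters[0], daughters[1:]
    for i in range(1, len(words) + 1):   # try every split point
        if licenses(head, words[:i]) and matches(rest, words[i:]):
            return True
    return False

print(licenses("S", "the student denied a refund".split()))  # True
print(licenses("S", "student the refund denied a".split()))  # False
```

Note that the scrambled string is not blocked by any filter; there is simply no combination of constructions that builds it, just as Zwicky's formulation predicts.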

Constraint-based grammar challenges two major assumptions of the Chomskyan view. It first questions the autonomy of syntax, based on observed
interactions among different modules of the grammar. It is not hard to observe
the interplay of form (syntax) and meaning (semantics/pragmatics). For exam-
ple, consider the following examples, in which syntactic forms are associated
with special semantic, pragmatic functions:

(35) a. Please, be prompt because it’s over at four o’clock.


b. The more she learns about this case, the less sense it makes.

Examples like (35a) are linked to the directive force, while (35b) induces a con-
ditional meaning. These two meanings or functions do not simply come from the
words involved here. Such examples suggest that we cannot separate form (syn-
tax) from functions (meaning and usage), as Chomsky did based on examples
like (34).
Constraint-based grammar also rejects the distinction between core and
peripheral grammar, on the grounds that capturing the patterns of word combina-
tion that constitute knowledge of a language like English requires us to describe
everything from general patterns that might exist in every language (like coor-
dination) to specialized patterns that are particular to, say, English. Consider the
following attested examples:

(36) a. He can defend himself.


b. My age has nothing to do with my knowledge of politics.

Sentence (36a) could illustrate a core phenomenon in the sense that its mean-
ing is ‘compositional’ and quite straightforward.12 One interpretive constraint
here is that the subject he and the object himself refer to the same individ-
ual.13 Sentence (36b), by contrast, illustrates a manifestly idiomatic pattern.
The idiomatic verb phrase have x to do with y means something like ‘x has
some degree of relationship to y,’ and the whole sentence means that my
age and knowledge of politics are not related to any degree. The pattern is
idiomatic in that one could not predict this meaning based on the meanings
that the verb have and the verb do have elsewhere (Kay and Michaelis, 2019).
This pattern could appropriately be relegated to the periphery of the
grammar.
However, consider (35b), which includes core as well as peripheral proper-
ties. The sentence, having the pattern ‘the X-er . . . , the Y-er . . . ’, illustrates the
so-called COMPARATIVE CONDITIONAL CONSTRUCTION (Fillmore et al., 1988;

12 The principle of compositionality states that the meaning of a given sentence is determined by
the meanings of its constituent expressions and the rules used to combine them.
13 This interpretation appears to be structurally conditioned, as it depends on the pronoun he and
the reflexive pronoun himself being in a particular syntactic relationship (the first being subject
and the second object). See Pollard and Sag (1992, 1994), and Sag et al. (2003) for detailed
discussion of the constraints on the use of reflexive pronouns.

Culicover and Jackendoff, 1999). The construction, which combines two paral-
lel clauses, requires the presence of the definite article at the beginning of each
clause and conveys a conditional meaning. In these respects, the construction
includes certain idiosyncratic properties. But, other than these properties, the
‘linked variables’ meaning of the construction (whereby one quantity or property
is understood to increase as the other does) is clearly related to the construc-
tion’s parts, and the pattern itself is highly productive (we can easily create new
instances of it). This suggests that it is a major grammatical pattern rather than
a minor one (Culicover and Jackendoff, 1999; Borsley, 2004; den Dikken, 2005;
Kim, 2011).
In addition to sentence patterns having idiosyncratic formal properties, there
are also constructions that have regular syntax but unpredicted meanings. The
following sentence is one that a diner can utter:
(37) What is that fly doing in my soup?

The diner in (37) is not inquiring about the activities of the fly in the soup
but rather is indicating that there is something incongruous about there being
a fly in the soup. Although this construction, called the WXDY CONSTRUCTION (Kay and Fillmore, 1999), has several peculiarities of form and meaning
(e.g., the obligatory use of doing and a specialized pragmatic function, ‘querying
the reason for an incongruous situation’), it is highly productive, as seen in the
following attested examples:
(38) a. What are you doing with my money, then?
b. But what are you doing with those mashed potatoes on the table?
c. What are you doing calling on a Friday night?

The varying degrees and types of idiosyncrasy observed here tell us that there
is no clear boundary between core and periphery. In addition, even seemingly
noncore phenomena include some general properties that a complete grammati-
cal description must acknowledge if we are to understand what a language user
knows about his or her native language. Under an enriched view of grammat-
ical competence, which aims to capture all of the linguistic routines that an
adult native speaker knows, the grammar represents an array of form-meaning-
function groupings of varying degrees of productivity and internal complexity.
This is the idea that has motivated non-Chomskyan frameworks like HPSG
(Head-driven Phrase Structure Grammar) and CxG (Construction Grammar) –
frameworks that we adopt in this book.

1.5 Evidence That Grammar Is Construction-Based

Grammatical constructions are recipes for word combination that speakers use to achieve specific communicative goals – issuing orders, request-
ing information, attributing a property to an entity. Constructions determine the
linear order of the words – as the English transitive verb-phrase construction

requires the direct object to follow the verb – and the forms of the words, as
the comparable Latin construction requires its direct object to have an accusative
case-ending. Grammatical constructions have long played a central role in lin-
guistic description, and for most of that history they have been treated in a similar
manner to words – pairings of form and meaning with particular patterns of
usage. It is only since the advent of Chomsky’s generative grammar that words
came to be seen as the sole vessels of meaning and constructions as the prod-
ucts of general rules that build up hierarchical structures in a ‘meaning blind’
fashion, much like mathematical operations. Chomsky’s embrace of computing
metaphors that predate the era of cheap data storage convinced many syntacti-
cians that sentence patterns cannot be stored in memory. But in fact it is quite
plausible to assume that we learn and recall grammatical constructions in much
the same way that we learn and recall words. In a review of findings from lan-
guage development, language impairment, and language processing, Bates and
Goodman (1997) conclude that there is little evidence for a modular dissocia-
tion between a language’s grammar and its lexicon (the inventory of words). For
example, they observe that in child language acquisition, “the emergence and
elaboration of grammar are highly dependent upon vocabulary size [. . . ] as chil-
dren make the passage from first words to sentences and go on to gain productive
control over the basic morphosyntactic structures of their native language” (Bates
and Goodman, 1997: 509). They go on to say:

This does not mean that grammatical structures do not exist (they do), or
that the representations that underlie grammatical phenomena are identical
to those that underlie single-content words (they are not). Rather, we suggest
that the heterogeneous set of linguistic forms that occur in any natural lan-
guage (i.e. words, morphemes, phrase structure types) may be acquired and
processed by a unified processing system, one that obeys a common set of
activation and learning principles.

In other words, both words and constructions are patterns in the mind.
Whether we are describing a word that has highly restricted privileges of occur-
rence (e.g., the adjective blithering, which to our knowledge combines only with
the nouns idiot and fool), a class of words (e.g., the class of nouns or the class
of transitive verbs), an inflected word (e.g., the plural noun copies) or a way to
create a basic phrase of a particular type (e.g., a noun phrase), we are describing
patterns, because in each case we are describing the combinatoric properties of
words (Michaelis, 2019).

1.6 Goals of This Book

In seeking the answers to the question of what speakers know when they know a language (and in particular what they know about syntactic combi-
nation), this book offers a constraint-based view based on static (‘declarative’)

constraints, rather than on procedures like movement operations. The leading constraint-based grammars include HPSG and CxG.
In the last decade, practitioners of these two frameworks have sought to
combine them in order to expand their empirical coverage and solidify their
theoretical foundations. The resulting synthesis is the framework of Sign-Based
Construction Grammar (SBCG), whose key ideas are elaborated in Sag (2012).
The pivotal assumption of SBCG is that ‘CONSTRUCTIONS,’ mappings of form
to meaning, are what speakers know when they know a language (see, among
others, Goldberg, 1995, 2006; Michaelis, 2006, 2013; Hilpert, 2014; Kim, 2016).
This book follows this direction in describing English syntax. However, to
enhance readability, we retain the more widely known representational conven-
tions of HPSG, which models linguistic objects as feature structures while taking
constructions as the basic units of linguistic knowledge, as does CxG. In this
sense, the analyses given here could be also taken as a ‘constructionist HPSG’
approach to English syntax.
With this in mind, we will explore the description of patterns at all points
along the continuum of lexical fixedness from classic multi-word expressions,
like rock the boat, in the know, and leave (someone) high and dry, to lexically
unfilled phrasal patterns, like that which licenses yes-no questions like Are you
there? While illustrating the descriptive precision needed for the grammatical
analysis of rich everyday language, the book will also show that we must bring
to bear numerous dimensions of linguistic knowledge (e.g., phonology, mor-
phology, syntax, semantics, pragmatics) to adequately describe what we know
when we know a linguistic expression. Accordingly, our discussions of linguis-
tic phenomena will rely in large measure on instances of language in use rather
than on constructed sentences, reflecting our belief in the proposition, advanced
by Chater and Christiansen (2018) among others, that language is a cultural
product, shaped by our communicative needs, rather than a biological endow-
ment. We thus regard the linguistic patterns that we will examine in this book –
from words to multi-word expressions to phrasal templates – as conventionalized
communicative routines, entrenched through use.

Exercises

1. Discuss the following terms with one concrete example:

competence, performance, descriptive, prescriptive, language faculty, core, peripheral, inductive, deductive, nativist, usage-based

2. For each of the following nouns, decide if it can be used as a count or as a mass (noncount) noun. In doing so, provide acceptable and unacceptable examples using the tests (plurality, indefinite article, few/little, many/much) we have discussed in this chapter:

activity, knowledge, discussion, luggage, suitcase, difficulty, cheese, experience, progress, research
When constructing acceptable examples, try to use the online corpus COCA (Corpus of Contemporary American English). Regarding the
pronoun one test, you may need to use your intuition in constructing
examples.
3. Check or find out whether each of the following examples is gram-
matical or ungrammatical. For each ungrammatical one, provide at
least one (informal) reason for its ungrammaticality, according to
your intuitions:
a. Spring and fall is the best viewing seasons.
b. Then, I put the guitar and let loose.
c. A chunk of ice floated down the river melted quickly.
d. There was eager to get back to normal.
e. He was easy to love her.
f. So, what is North Korea likely to do in coming years?
g. What should I mix the flour and at this point?
h. There seem to be a fire burning inside you that we don’t see.
4. Consider the following set of data, focusing on the usage of ‘self’
reflexive pronouns and personal pronouns:
(i) a. He washed himself.
b. *He washed herself.
c. *He washed myself.
d. *He washed ourselves.
(ii) a. *He washed him. (‘he’ and ‘him’ referring to the same person)
b. He washed me.
c. He washed her.
d. He washed us.
Can you make a generalization about the usage of ‘self’ pronouns
and personal pronouns like him here? In answering this question,
pay attention to what the pronouns can refer to. Also consider the
following imperative examples:
(iii) a. Wash yourself.
b. Wash yourselves.
c. *Wash myself.
d. *Wash himself.
(iv) a. *Wash you!
b. Wash me!
c. Wash him!
Can you explain why we can use yourself and yourselves but not you
as the object of the imperatives here? In answering this, try to put
pronouns in the unrealized subject position.

5. We have seen that examples like the following belong to semantically specialized constructions. For each, discuss whatever special
properties (syntactic, semantic, and pragmatic functions) you can
think of:
a. What are elephants doing in the middle of town?
b. The sooner you do it, the better off you’ll be.
c. Just because you’re paranoid doesn’t mean they aren’t all out to
get you.
d. Charlie shouldered his way through the crowd of cops toward
the door.
e. Not in my house, you don’t!
2 Lexical and Phrasal Signs

2.1 Linguistic Signs and Constructions as Form-Meaning Pairs
As we noted in Chapter 1, the framework we adopt here to describe
English is a simplified version of SBCG (Sign-Based Construction Grammar).
One key theoretical assumption of this approach is the traditional notion that
language is an infinite set of signs, arbitrary and conventional pairings of form
and meaning. Consider the following illustration:
[Figure 2.1 An example of a sign: (a) a concept paired with a sound-image; (b) a signified (signifié) paired with a signifier (signifiant); (c) the form arbor paired with the image of a tree.]

Figure 2.1, adapted from Saussure (1916 [2011]), shows the linguistic sign as
a link between a sound sequence (form) and a concept (meaning), as in (a), or
between a signifier (signifiant, in the original French) and a signified (signifié),
as in (b). Thus, as shown in (c), the form (sound sequence) arbor (Latin ‘tree’)
is a signifier, and its associated meaning or denotation (concept) is a signified,
depicted as a tree image.
This notion of the Saussurean sign has been generalized in SBCG to include
linguistic expressions of any degree of internal complexity, including mor-
phemes, words, multi-word expressions (or idioms) like drop the ball and hit
the nail on the head, and, crucially, phrases, including sentences. A grammar
is accordingly conceived as a set of descriptions of signs and sign combina-
tions. These descriptions represent (a) the properties shared by each class of
signs (called lexical classes) and (b) the templates or ‘rules’ used to construct
phrasal signs from simpler signs.


An example of a lexical class is the class of transitive verbs including drop and hit. An example of a phrase-building template is the rule that specifies the
formation of NPs (noun phrases) where a noun combines with a determiner, as
in this book, or VPs (verb phrases) where a verb, deny, combines with its object,
as in deny the accusation. While people can create new lexical signs (as when
someone coins a verb like phish), the phrase-building templates, which allow
us to recursively embed signs under other signs, are the real engines of linguistic
expressivity. The term used to refer to a rule of sign combination is construction.
These constructions are instantiated as constructs: actual tokens of language in
use. Constructs like milk, the news, most days, and the big house all instantiate
the general syntactic pattern that we refer to as the NP CONSTRUCTION, while
constructs like ran, deny a refund, and gave him a present are realizations of the VP CONSTRUCTION.
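One way to make the sign/construction/construct trio concrete is a toy sketch in a programming language. The code below is our own caricature, not SBCG notation: a sign pairs a form with a meaning (here, just a gloss string), a construction is a recipe that builds a phrasal sign from simpler signs, and each value the recipe returns is a construct. All names in it are invented for illustration.

```python
from dataclasses import dataclass

# Toy model of a Saussurean sign as a form-meaning pair. The 'meaning'
# glosses are crude stand-ins; SBCG's feature structures are far richer.
@dataclass(frozen=True)
class Sign:
    form: str     # the signifier (sound/letter sequence)
    meaning: str  # a gloss standing in for the signified

arbor = Sign(form="arbor", meaning="TREE")  # Latin 'tree': an arbitrary pairing
hot_dog = Sign(form="hot dog", meaning="SAUSAGE-IN-ROLL")  # not predictable from parts

# A construction, on this caricature, licenses a phrasal sign built
# from simpler signs.
def np_construction(det: Sign, noun: Sign) -> Sign:
    """Toy NP construction: Det + N -> NP."""
    return Sign(form=f"{det.form} {noun.form}",
                meaning=f"{det.meaning}({noun.meaning})")

# A construct is an actual token licensed by the construction:
this_book = np_construction(Sign("this", "THIS"), Sign("book", "BOOK"))
print(this_book.form)     # this book
print(this_book.meaning)  # THIS(BOOK)
```

The point of the sketch is only that phrasal signs are themselves form-meaning pairs, assembled by general recipes from smaller form-meaning pairs.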

2.2 From Lexical Signs to Phrasal Signs as a Continuum

Construction-based grammars recognize a distinction between signs and configurations of signs, the latter of which are called constructs in SBCG.
First, consider word expressions that are simple signs:
(1) school, house, hot dog, chase, run, glitter, tall, big, extremely, very, in,
on . . .

All of these words must be represented as distinct signs of the grammar because
their meanings cannot be predicted. For instance, the form hot dog does not
mean a dog that is hot; from the form one cannot predict the meaning (a hot
sausage inside a long split roll). Idioms are also distinct signs; they are multi-
word expressions whose meanings one would not generally predict from the
meanings of their parts. Consider the following:
(2) a. The suspect is still at large.
b. I’m really feeling under the weather today; I have a terrible cold.
c. Don’t beat around the bush. Just tell me the truth.

The meaning of the italicized word string in each case is not predictable. For
instance, in (2a), the meaning ‘not captured yet’ does not come from the parts at
and large. There is thus a special form-meaning relation here, just as there is in
(3a)–(3b):
(3) a. I tried jogging mom’s memory, but she couldn’t remember Joe’s phone
number either.
b. Don’t worry about what he said. He’s just pulling your leg.

The idioms in (3a)–(3b) mean ‘to cause someone to remember something’ and
‘to deceive someone playfully,’ respectively. The only difference between idioms
like (2) and idioms like (3) is that the latter include a variable (in the case of (3),

a possessor) that can be replaced by another expression like his, her, their, and
so forth.
There are also more complex (phrasal) constructions that specify idiomatic interpretations, as in the case of the COMPARATIVE CORRELATIVE CONSTRUCTION discussed in Chapter 1 and exemplified again below:

(4) a. The younger, the better.
b. The longer the trip, the longer the recovery period.
c. The stronger you are, the less likelihood you’ll ever have to use it.

As noted in Chapter 1, this bi-clausal pattern, whose basic form is ‘the X-er, the
Y-er,’ has a conditional meaning in which an increase (or decrease) in the value of
the first variable yields a concomitant change in the value of the second. Sentence
(4b) means something like ‘To whatever degree a trip is long, the recovery period
is long to that same degree.’ In an earlier stage of English, this construction had a
syntactically transparent interpretation, which sound change and changes in the
case system of English have now obscured.1
Now consider the following sentences introduced by verbs like give, pass,
read, teach, and so forth:
(5) a. Pedro [gave [her] [his email address]].
b. The player [passed [Paulo] [the ball]].
c. Dad [read [me] [the letter]].
d. My mom [taught [me] [the importance of being clean]].
e. My Auntie Julia, a seamstress, [sewed [me] [a leopard bikini]].

The verbs here combine with the two bracketed expressions, evoking a meaning
of ‘transfer,’ whether metaphorical or literal, in each case. For instance, in (5a),
the email address is figuratively transferred from one person to the other (com-
municative acts are typically framed as events in which information goes from
one person to another). One important aspect of the transfer meaning is that the
‘goal’ or endpoint of the transfer event must be understood to be a (volitional)
recipient. Thus while (6a) sounds natural, (6b) does not, unless we imagine the
summit of Mt. Kilimanjaro to stand for people located there:
(6) a. He took the Brooksville Elementary flag to the summit of Mt. Kilimanjaro.
b. *He took the summit of Mt. Kilimanjaro the Brooksville Elementary flag.

The claim that the relevant pattern expresses an act of transfer (to a human recip-
ient) is bolstered by a phenomenon sometimes called ‘semantic enrichment,’ as
in (5e). The verb sew is a verb of creation, and as such selects for just two par-
ticipant roles (the creator and the item created). In the context of (5e), however,
we understand the sewing event to have an additional participant: a recipient of

1 The paired ‘definite articles’ are modern reflexes of Old English instrumental-case demonstrative
pronouns that meant ‘by that much.’ The construction in this period was thus structurally similar
to the analogous French construction plus . . . plus (as in Plus ça change, plus c’est la même
chose, ‘The more it changes, the more it stays the same’). Because speakers of Present-Day
English (PDE) do not generally possess this etymological information, the construction today
presents as a phrasal idiom, albeit a highly productive one.

the item created. The addition of this third participant, we submit, is triggered
by the syntactic pattern that the sentence instantiates. The pattern, commonly
referred to as the DITRANSITIVE CONSTRUCTION, is a skeletal construction, in
the sense that it has no lexically fixed portion (no particular verb or noun phrase
is required). And yet, much like a lexical sign, this syntactic pattern has an asso-
ciated meaning: the transfer schema. We know of this meaning because of the
contrast in (6a)–(6b), and because of contexts of semantic enrichment like (5e).
The constructions we have seen so far have specialized meanings that can-
not be traced to words within them, but there are also highly schematic
(lexically open) constructions whose meanings are largely predictable from
their constituent words and whose frequency is high. Consider the following
sentences:
(7) a. [Elvis] [sang softly].
b. [The furious dog] [chased me].
c. [They] [made the problem more difficult].

All of these examples have two subparts, subject and predicate, as indicated
by the square brackets. These phrasal signs are licensed by the SUBJECT-PREDICATE CONSTRUCTION, which is in general used to attribute a property to
an entity. Because it is used to perform a basic communicative routine, this con-
struction is very frequent but it does not add any meaning beyond what the words
within it mean. The primary reason that we need the SUBJECT-PREDICATE CONSTRUCTION is to represent the division of a clause into phrases. These phrases,
for example, the furious dog and chased me, act like indivisible units for certain
syntactic purposes. The lesson here is that a sentence is not merely a sequence of
words. Instead, there are constructions that describe the way in which words are
combined to form phrases, constructions that describe the way in which phrases
are combined to form still larger phrases, and constructions that describe the
way in which phrases are combined to create sentences, as we will see below in
Section 2.4.
In sum, words, multi-word expressions, and phrases (including clauses) are
all analyzed as signs – pairings of form and meaning. We use lexical descrip-
tions (also called lexical entries) to describe words, word classes, and multi-word
expressions. We use constructions to describe phrasal signs. Constructions can
thus be understood as recipes for combining lexical signs and phrasal signs into
larger units. For the construction grammarian, the grammar of a language is thus
a repertoire of form-meaning pairings that range from those with fixed lexi-
cal make-up (including words) to those that constrain their subparts only very
broadly. A construction grammar models this range with an array of descriptions
of correspondingly graded generality.
In (8) we see the range of sign types presented as a continuum of idiomaticity
or degree of lexical fixity. This continuum distinguishes types of signs accord-
ing to the range of lexical, inflectional, or syntactic variants attested for each
type.

(8) Idiomaticity as a continuum:
[diagram: sign types arranged from lexically fixed (words, fixed idioms) through partially open idioms to fully schematic phrasal patterns]
At the low variability end of the continuum, we have words (like dolphin) and
fixed idioms, like pass the buck, which have inflectional variants (e.g., dolphins,
passed the buck); these can be combined with other signs to form larger phrases
(e.g., baby dolphin, don’t pass the buck!), but the expressions themselves con-
tain no open slots. Fixed idioms contrast with idioms like throw the book at x,
in which the variable ‘x’ can take on any value (e.g., Throw the book at them!).
At the ‘open’ end of the continuum we have idiomatic phrase types (like the
comparative conditional) that have some lexically fixed portions (the two degree
words) and some open ones (e.g., the comparative expressions) that allow for cre-
ative elaborations. At the extreme end of the continuum are the patterns of sign
combination that do not have many semantic or use restrictions and are open –
or at least open in the sense that they evoke only basic syntactic categories (like
verb). These phrasal patterns can be identified with the phrase-structure rules of
traditional generative grammars.
This construction-based view of linguistic knowledge thus has two major
descriptive goals, which can be summarized as follows:
• to identify the constructions needed to describe the syntactic combi-
nations of a language
• to investigate the constructions (or rules) that license the combination
of words and phrases
To meet these goals, we will examine the way in which meanings are assem-
bled through the grammatically allowable patterns of sign combination. Words
are combined to form larger ‘phrasal’ constructs, and phrases can be combined to
form a clausal construct. A clause either is or is part of a well-formed sentence:
(9) [diagram: words combine into phrases, phrases into clauses, and a clause either is or is part of a sentence]
Typically we use the term ‘clause’ to refer to a complete sentence-like unit, but
one which may be part of another clause, as a subordinate or an adverbial clause.
Each of the sentences in (10b)–(10d) contains more than one clause, with one
clause embedded inside another:
(10) a. The weather is lovely today.
b. I am hoping that [the weather is lovely today].

c. If [the weather is lovely today], then we will go out.
d. The birds are singing because [the weather is lovely today].

This chapter first explores the types of lexical signs that we can observe in
English (Section 2.3). Equipped with generalizations about lexical expressions,
we then discuss phrasal and clausal constructions formed from the combination
of lexical and phrasal signs.

2.3 Lexical Signs

2.3.1 Classifying Lexical Signs


The basic units of syntax are lexemes. A lexeme is an abstract sign
that captures the form-meaning correspondence common to all instantiations of
that sign, for example, the inflected versions of the verb walk, as in walked,
walking, walks. Inflected instantiations of a lexeme are called words. Lexemes
can be grouped into classes based on their parts of speech (a given lexeme
may belong to a couple of different parts of speech). What parts of speech (or
syntactic categories) does English have? Are they simply noun (N), verb (V),
adjective (A), adverb (Adv), preposition (P), and perhaps a few others? Most
of us would not be able to devise simple definitions that account for the cat-
egorization of words. For instance, why do we categorize book as a noun but
kick as a verb? To make it more difficult, how do we know that virtue is a
noun, that without is a preposition, and that well is an adverb (in one of its
meanings)?
Lexemes can be placed into different syntactic categories according to three
criteria: meaning, morphological form, and syntactic function. At first glance,
it seems that words can be classified according to their meaning. For example,
we could have the following rough semantic criteria for N (noun), V (verb),
A (adjective), and Adv (adverb):

(11) a. N: referring to an individual or entity


b. V: referring to an action
c. A: referring to a property
d. Adv: referring to the manner, location, time, or frequency of an action

Though such semantic bases can be used for many words, these notional def-
initions leave a great many words unaccounted for. For example, words like
sincerity, happiness, and pain do not simply denote any individual or entity.
Absence and loss are even harder cases. There are many words whose seman-
tic properties do not match the syntactic category they belong to. For example,
words like assassination and construction may refer to an action rather than an
individual, but they are always nouns. Words like remain, bother, appear, and
exist are verbs, but they do not involve any action.

A more reliable approach is to characterize words in terms of their forms and


functions. The ‘form-based’ criteria look at the morphological form of the word
in question:
(12) a. N: ___ + plural morpheme -(e)s
b. V: ___ + past tense -ed
c. V: ___ + 3rd singular -(e)s
d. A: ___ + -er/-est (or more/most)
e. Adv: ___ + -ly (to create an adverb)

According to these frames, in which the word in question goes in the place indicated by ___, nouns allow the plural marking suffix -(e)s or the possessive ’s to be attached, whereas verbs can have the past tense -ed or the 3rd singular form
-(e)s. Adjectives can take comparative and superlative endings -er or -est, or
combine with the suffix -ly. The examples in the following are derived from
these frames:
(13) a. N: trains, actors, rooms, man’s, sister’s, etc.
b. V: devoured, laughed, devours, laughs, etc.
c. A: fuller, fullest, more careful, most careful, etc.
d. Adv: fully, carefully, diligently, clearly, etc.
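The form-based criteria in (12) can be rendered as a purely mechanical check over a word’s attested inflected forms. The sketch below is our own illustration (the function name and word sets are invented), and it inherits exactly the limitations the text notes for morphological criteria:

```python
# Rough, mechanical rendering of the morphological criteria in (12).
# As the chapter stresses, these tests are neither necessary nor
# sufficient conditions for category membership.
def plausible_categories(word, attested_forms):
    """Guess lexical categories from a word's attested inflected forms."""
    cats = set()
    if word + "s" in attested_forms or word + "es" in attested_forms:
        cats.update({"N", "V"})   # plural -(e)s or 3rd-singular -(e)s
    if word + "ed" in attested_forms or word + "d" in attested_forms:
        cats.add("V")             # past tense -ed
    if word + "er" in attested_forms or word + "est" in attested_forms:
        cats.add("A")             # comparative/superlative -er/-est
    if word + "ly" in attested_forms:
        cats.add("A")             # A + -ly yields an adverb
    return cats

print(sorted(plausible_categories("devour", {"devours", "devoured"})))   # ['N', 'V']
print(sorted(plausible_categories("full", {"fuller", "fullest", "fully"})))  # ['A']
print(sorted(plausible_categories("love", set())))  # []: the test simply fails
```

The third call shows the problem the text turns to next: a noun like love that lacks a plural simply falls through the morphological net.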

The morphological properties of each lexical category cannot be overridden;


verbs cannot have plural marking, nor can adjectives have tense marking. It
turns out, however, that these morphological criteria are also only of limited
value. In addition to nouns like information and furniture that we presented
in Chapter 1, there are also many nouns such as love and pain that do not
have a plural form. There are adjectives (such as absent and circular) that do
not have comparative -er or superlative -est forms due to their meanings. The
morphological (form-based) criterion, though reliable in many cases, is nei-
ther a necessary nor a sufficient condition of membership of any of the lexical
categories.
The most reliable criterion in judging the lexical category of a word is its
syntactic function or distributional potential. Let us try to determine what
kinds of lexical categories can occur in the following environments:2
(14) a. They have no ___.
b. They can ___.
c. They read the ___ book.
d. He treats John very ___.
e. He walked right ___ the wall.

The categories that can be used to fill in the blanks are N, V, A, Adv, and P
(preposition), respectively. The following data show that these lexical categories
are not typically interchangeable in a given context:

2 The underscore indicates that an expression is missing and that the general class of expressions that can fill the gap is predictable from the context.

(15) a. They have no TV/car/information/friend.
b. They have no *went/*in/*old/*very/*and.

(16) a. They can sing/run/smile/stay/cry.
b. They can *happy/*down/*door/*very.

(17) a. They read the big/new/interesting/scientific book.
b. They read the *sing/*under/*very book.

(18) a. He treats John very nicely/badly/kindly.
b. He treats John very *kind/*shame/*under.

(19) a. He walked right into/on the wall.
b. He walked right *very/*happy/*the wall.

As shown here, only a restricted set of lexical categories can occur in each
position; we can then assign a specific lexical category to these elements:
(20) a. N: TV, car, information, friend . . .
b. V: sing, run, smile, stay, cry . . .
c. A: big, new, interesting, scientific . . .
d. Adv: nicely, badly, kindly . . .
e. P: in, into, on, under, over . . .
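The distributional test can itself be mimicked mechanically, if we let the word lists in (20) stand in for a native speaker’s acceptability judgments (an obvious idealization; the frames use ‘___’ for the open slot, and the function names are our own):

```python
# The distributional frames in (14), with '___' marking the open slot.
FRAMES = {
    "N":   "They have no ___.",
    "V":   "They can ___.",
    "A":   "They read the ___ book.",
    "Adv": "He treats John very ___.",
    "P":   "He walked right ___ the wall.",
}

# A stand-in 'acceptability oracle' built from the word lists in (20);
# in real linguistic practice the oracle is a native speaker.
ACCEPTABLE = {
    "N":   {"TV", "car", "information", "friend"},
    "V":   {"sing", "run", "smile", "stay", "cry"},
    "A":   {"big", "new", "interesting", "scientific"},
    "Adv": {"nicely", "badly", "kindly"},
    "P":   {"in", "into", "on", "under", "over"},
}

def substitute(cat, word):
    """Fill a frame's slot with a candidate word."""
    return FRAMES[cat].replace("___", word)

def categories_of(word):
    """Return every category whose frame the word can acceptably fill."""
    return {cat for cat, words in ACCEPTABLE.items() if word in words}

print(substitute("N", "information"))  # They have no information.
print(sorted(categories_of("in")))     # ['P']
```

The design mirrors the logic of the test: a word’s category is whatever set of frames it can acceptably occupy, not anything intrinsic to its shape.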

In addition to these basic lexical categories, does English have other lexical
categories? Consider the following distributional environments:
(21) a. ___ vaccine could soon hit the market.
b. We found out that ___ job is in jeopardy.

The words that can occur in the open slot in these sentences are words like the, a,
that, this, and so forth, which are determiners (Det). One clear piece of evidence
for grouping these elements in the same category, ‘Det,’ comes from the fact that
they cannot occupy the same position at the same time:
(22) a. *[My this job] is in jeopardy.
b. *[Some my jobs] are in jeopardy.
c. *[The his jobs] are in jeopardy.

Words like my and these or some and my cannot occur together, indicating that
they compete with each other for just one structural position.
Now, consider the following examples:
(23) a. He is a very good pitcher, ___ he just has to have confidence in his pitches.
b. ___ he is a very good pitcher, he just has to have confidence in his pitches.

(23a) provides a frame for conjunctions (Conj) such as and, but, so, for, or, and
yet. These conjunctions are ‘coordinating conjunctions’ different from the words
that can occur in (23b). The words that can occur in (23b) are ‘subordinating
conjunctions’ like since, when, if, because, though, and so forth. The former
type conjoins two identical phrasal elements, as in the following:

(24) a. [He immediately turned over to the right], for [he had been asleep on his left
side].
b. [She knew he shouldn’t drive], yet [she gave him the car keys].

Meanwhile, words of the latter type introduce a subordinate clause, as in the following:
(25) a. When we spoke, she had been doing chores for her landlord in exchange for
free rent.
b. Those who drop out may do so because they were not adequately prepared
for college.

The expressions that can occur in the following contexts form a different group:
(26) a. She didn’t think ___ she could stand on her own.
b. I doubt ___ he would listen to any moderate voice.
c. I’m so anxious ___ him to give us the names of the people.

Once again, the words that can occur in the particular slots in (26) are strictly
limited:
(27) a. She didn’t think that [she could stand on her own].
b. I doubt if [he would listen to any moderate voice].
c. I’m so anxious for [him to give us the names of the people].

The italicized words here are different from the other lexical categories that we
have seen so far. They introduce a complement clause (marked above by the
square brackets) and are sensitive to the tense of that clause. A tensed clause is
known as a ‘finite’ clause, as opposed to an infinitive clause (see Chapter 5). For
example, that and if introduce or combine with a tensed sentence (present or
past tense), whereas for requires an infinitival clause marked with to. We cannot
disturb these relationships:
(28) a. *She didn’t think that [her to stand on her own].
b. *I doubt if [him listening to any moderate voice].
c. *I’m so anxious for [he gave us the names of the people].

The term ‘complement’ refers to an obligatory dependent clause or phrase relative to a lexical head like the verb (see Chapters 3 and 4 for a fuller discussion
of ‘head’ and ‘complement’). The italicized elements in (27) introduce a clausal
complement and are consequently known as ‘complementizers’ (abbreviated as
‘C’). There are only a few complementizers in English (that, for, if, and whether),
but they nevertheless occupy their own lexical category.
Now consider the following environments:
(29) a. I ___ not know what I was going to do.
b. CNN ___ bring you the president’s remarks.
c. ___ NASA scrap the shuttle program?
d. She had accepted that, even embraced it, but he ___ not.

The words that can appear in the blanks are neither main verbs nor adjectives,
but rather words like did, would, should, and could. In English, there is clear
evidence that these verbs are different from main verbs, and we refer to them
as auxiliary verbs (Aux). Auxiliary verbs, also known as helping verbs, perform
several grammatical functions: expressing tense (present, past, future), aspect
(progressive and perfect), or modality (possibility, futurity, obligation). It is prob-
lematic in some respects to posit the category Aux as an independent category,
but here we differentiate auxiliary verbs from main verbs by means of the feature
AUX , as given in the following:

(30) a. Main verbs: [AUX –]
b. Auxiliary verbs: [AUX +]

The auxiliary verb appears in front of the main verb, which is typically in its
citation (lexemic) form (see Chapter 5 for the verb forms in English).
There is one remaining category we must consider: the ‘particles’ (Part), in
(31):
(31) a. Stacey had called off the engagement.
b. I had to go home and look up the word.

Words like off and up here behave differently from prepositions in that they can
occur after the object:
(32) a. Stacey had called the engagement off.
b. I looked the word up.

Such distributional possibilities cannot be observed with true prepositions:


(33) a. She fell off the deck.
b. The two boys looked up the high stairs (from the floor).
(34) a. *She fell the deck off.
b. *The students looked the high stairs up (from the floor).

We can also find differences between particles and prepositions in combination with an object pronoun:
(35) a. She called it off. (particle)
b. *She called off it.
(36) a. *She fell it off.
b. She fell off it. (preposition)

The pronoun it can naturally follow the preposition, as in (36b), but not the parti-
cle, as in (35b). Such contrasts between prepositions and particles give us ample
reason to introduce another lexical category, Part (particle), which is differenti-
ated from P (preposition). In Section 2.6, we will see more tests to differentiate
these two types of word.
In sum, we have seen that the grammar of English has at least the following
syntactic categories:

(37) a. N (noun): trains, book, desk, Kim, Mimi, he, she . . .
b. V[AUX –] (main verb): devour, send, call, look, fall . . .
c. V[AUX +] (auxiliary verb): will, can, must, shall, should, to . . .
d. A (adjective): full, careful, diligent, clear, honest . . .
e. Adv (adverb): carefully, diligently, clearly, well . . .
f. P (preposition): of, to, at, in, on, up, off . . .
g. Part (particle): in, on, up, off . . .
h. Det (determiner): the, a, this, that, which . . .
i. C (complementizer): that, for, whether, if . . .
j. Conj (conjunction): and, so, but, when, while, whether, if . . .

In deciding the lexical category of a word, we can use semantic, morphological, and distributional criteria, but we have seen that the distributional ones are the most reliable. Most of the lexical categories we have discussed in this section have
associated phrasal categories, which we discuss in what follows.3

2.3.2 Grammar with Lexical Categories Alone


As noted in Chapter 1, the main goal of syntax is to build a grammar
in which the combination of lexical classes and phrasal constructions can gen-
erate an infinite set of well-formed, grammatical English sentences. Let us see
what kind of grammar we can develop with the family of syntactic categories.
We will begin by examining the examples in (38):
(38) a. A man kicked the ball.
b. A tall boy threw the ball.
c. The cat chased the long string.
d. The happy student played the piano.

Given only the lexical categories that we have identified so far, we can set up a
grammar rule for sentences (S) like the following:
(39) S → Det (A) N V Det (A) N

According to this rule, S consists of the items mentioned in the order given,
except that those in parentheses are optional. So this rule characterizes any sen-
tence that consists of a Det, N, V, Det, and N, in that order, possibly with an
A in front of either N. We can represent the core items in a tree structure,
as in (40):

3 The lexical categories we have seen so far can be classified into two major types: content and
function words. Content words (N, V, Adj, Adv) are those with substantive semantic content,
whereas function words (Det, Aux, Conj, P) are those primarily serving to carry grammatical
information. The ‘content’ words are also known as ‘open’ class words, because the number of
such words is unlimited and new words can be added to these categories, including nouns like
email, fax, internet, and verbs like emailed, googled, etc. By contrast, function words are mainly
used to indicate the grammatical functions of other words and are ‘closed’ class items: Only about
300 function words exist in English, and new function words are rarely added.

(40) [tree diagram: S immediately dominating Det, N, V, Det, and N, each of which dominates a ‘. . .’ slot to be filled by a lexical item]
We assume a lexicon, a list of categorized words, to be part of the grammar along with the rule in (39):
(41) a. Det: a, that, the, this . . .
b. N: ball, man, piano, string, student . . .
c. V: kicked, hit, played, sang, threw, chased . . .
d. A: handsome, happy, kind, long, tall . . .

By inserting lexical items into the appropriate preterminal nodes, these being the
nodes immediately dominating the ‘. . . ’ notations, we can produce grammatical
examples like those in (38) as well as those like the following, not all of which
describe a possible real-world situation:
(42) a. That ball hit a student.
b. The piano played a song.
c. The piano kicked a student.
d. That ball sang a student.

Such examples are all syntactically well-formed, even if semantically odd in some cases, suggesting that syntax is ‘autonomous’ from semantics, as briefly
discussed in Chapter 1. Note that any anomalous example can be preceded by
the statement “Now, here’s something hard to imagine . . . ”
We can easily extend this simple grammar rule by allowing iteration of the A,
thereby enabling the rule to generate an infinite number of sentences:4
(43) S → Det A∗ N V Det A∗ N

The iteration operator ∗ after A, called the ‘Kleene Star Operator,’ is a notation
meaning ‘zero to infinitely many’ occurrences. It thus allows us to repeat any
number of As, thereby generating sentences like those in (44). Note that the
parentheses around ‘A’ in (39) are no longer necessary in this instance:
(44) a. The tall man kicked the ball.
b. The tall, handsome man kicked the ball.
c. The tall, kind, handsome man kicked the ball.

One could even say a sentence like (45):


(45) The happy, happy, happy, happy, happy, happy man sang a song.

4 The ‘Kleene Star Operator’ should not be confused with the * prefixed to a linguistic example,
indicating ungrammaticality.
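Rule (43) and the lexicon in (41) are concrete enough to run. The sketch below is our own illustration in Python (the function names are invented): it generates strings licensed by S → Det A∗ N V Det A∗ N and checks whether a given word string matches the rule, modeling the Kleene star as ‘consume zero or more As’:

```python
import random

# Toy grammar implementing rule (43), S -> Det A* N V Det A* N,
# with the lexicon in (41). Semantic oddity is, by design, ignored.
LEXICON = {
    "Det": ["a", "that", "the", "this"],
    "N":   ["ball", "man", "piano", "string", "student"],
    "V":   ["kicked", "hit", "played", "sang", "threw", "chased"],
    "A":   ["handsome", "happy", "kind", "long", "tall"],
}

def generate(adjs_subj=0, adjs_obj=0, rng=random):
    """Produce one (possibly semantically odd) sentence licensed by (43)."""
    pick = lambda cat: rng.choice(LEXICON[cat])
    words = ([pick("Det")] + [pick("A") for _ in range(adjs_subj)] +
             [pick("N"), pick("V"), pick("Det")] +
             [pick("A") for _ in range(adjs_obj)] + [pick("N")])
    return " ".join(words)

def recognize(sentence):
    """Check whether a word string matches Det A* N V Det A* N."""
    def cat(w):
        return next((c for c, ws in LEXICON.items() if w in ws), None)
    cats = [cat(w) for w in sentence.split()]
    i = 0
    for expected in ("Det", "A*", "N", "V", "Det", "A*", "N"):
        if expected == "A*":                  # Kleene star: zero or more As
            while i < len(cats) and cats[i] == "A":
                i += 1
        elif i < len(cats) and cats[i] == expected:
            i += 1
        else:
            return False
    return i == len(cats)                     # no words left over

print(recognize("the tall man kicked the ball"))            # True
print(recognize("the happy happy man played the piano"))    # True
print(recognize("the man kicked"))                          # False
print(generate(adjs_subj=2))  # e.g. 'this kind tall piano chased a student'
```

Because the star admits any number of adjectives, the rule licenses infinitely many strings, including the semantically odd ones in (42); nothing in this grammar knows that pianos cannot kick.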

A grammar using only lexical categories can be specified to generate an infinite number of well-formed English sentences, but it nevertheless misses a
great number of basic properties that we can observe. For example, this sim-
ple grammar cannot capture the agreement facts seen in examples like the
following:
(46) a. The mother of the boy and the girl is arriving soon.
b. The mother of the boy and the girl are arriving soon.

Why do the verbs in these two sentences have different agreement patterns? Our
intuitions tell us that the answer lies in two different possibilities for grouping
the words:
(47) a. [The mother of [the boy and the girl]] is arriving soon.
b. [The mother of the boy] and [the girl] are arriving soon.

The different groupings shown by the brackets indicate who is arriving: in (47a),
the mother, while in (47b) it is both the mother and the girl. The grouping of
words into larger phrasal units that we call constituents provides the first step in
understanding the agreement facts in (47).
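That agreement tracks grouping rather than linear word order can be mimicked with nested lists standing in for constituent structure. This is a hand-coded toy, not a parser; the bracketings and the agreement rule below are our own illustration:

```python
# The two bracketings in (47), rendered as nested lists. In reading (a)
# the coordination is buried inside the of-phrase; in reading (b) 'and'
# sits at the top level of the subject, coordinating two NPs.
reading_a = ["the mother of", ["the boy", "and", "the girl"]]  # one singular NP
reading_b = [["the mother of the boy"], "and", ["the girl"]]   # coordinated NPs

def verb_form(subject):
    """'are' if 'and' coordinates at the top level of the subject, else 'is'."""
    return "are" if "and" in subject else "is"

print(verb_form(reading_a))  # is
print(verb_form(reading_b))  # are
```

The word strings are identical; only the constituent structure differs, and that difference alone determines the choice between is and are.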
Now, consider the following examples:
(48) a. Pat saw the man with a telescope.
b. I like chocolate cakes and pies.
c. We need more intelligent leaders.

These sentences have different meanings depending on how we group the words.
For example, (48a) will have the following two different constituent structures:
(49) a. Pat saw [the man with a telescope]. (the man had the telescope)
b. Pat [[saw the man] with a telescope]. (Pat used the telescope)

Even these very cursory observations indicate that a grammar with only lexical
categories is not adequate for describing syntax. In addition, we need a notion
of ‘constituent,’ and we need to consider how phrases can be formed, defining
groups of words as single units for syntactic purposes.

2.4 Phrasal Constructions and Constituency Tests

In addition to the agreement and ambiguity facts, our intuitions may lead us to hypothesize constituent structure. If we were asked to group the words
in (50) into phrases or phrasal constructions, what constituents would we come
up with?
(50) The businessmen enjoyed their breakfasts at the hotel last week.

Perhaps most of us would intuitively assign the structure given in (51a) but not
those in (51b) or (51c):

(51) a. [The businessmen] [enjoyed [their breakfasts] [at the hotel] [last week]].
b. [The] [businessmen enjoyed] [their breakfasts at the hotel] [last week].
c. [The businessmen] [[enjoyed their breakfasts] [at the hotel last week]].
What kind of knowledge, in addition to semantic coherence, forms the basis
for our intuitions of constituent structure? Are there clear syntactic or distri-
butional tests that demonstrate the appropriate grouping of words or specific
constituencies? There are certain syntactic constructions that carry condi-
tions related to constituents (whether these are groups of words or single
words) and on this basis are used to diagnose what strings of words count as
constituents.
Cleft: The cleft construction, which places an emphasized or focused element
in the X position in the pattern ‘It is/was X that . . . ,’ can provide us with straight-
forward evidence for the existence of phrasal units. For instance, think about how
many different cleft sentences we can form from (52).
(52) The policeman met several young students in the park last night.
With no difficulty, we can cleft almost all the constituents we can get from the
above sentence:
(53) a. It was [the policeman] that met several young students in the park last night.
b. It was [several young students] that the policeman met in the park last night.
c. It was [in the park] that the policeman met several young students last night.
d. It was [last night] that the policeman met several young students in the park.
However, we cannot cleft sequences that do not form constituents:
(54) a. *It was [the policeman met] that several young students in the park last night.
b. *It was [several young students in the park] that the policeman met last night.

Constituent Questions and the Stand-Alone Test: Further support for the
existence of phrasal categories can be found in answers to ‘constituent ques-
tions,’ which involve a wh-word such as who, where, when, and how. For any
given wh-question, the answer can either be a full sentence or a fragment. This
stand-alone fragment is a constituent:
(55) Q: Where did the policeman meet several young students?
A: In the park.
(56) Q: Who(m) did the policeman meet in the park?
A: Several young students.
This kind of test can be of use in determining constituents; we will illustrate with
the following:
(57) Lee put old books in the box.
Does either old books in the box or put old books in the box form a con-
stituent? Are there smaller constituents? The wh-question tests can provide some
answers:

(58) Q: What did you put in your box?
     A: Old books.
     A′: *Old books in your box.

(59) Q: What did you do?
     A: *Put old books.
     A′: *Put in the box.
     A″: Put old books in the box.

Overall, the tests here show that old books and in the box are constituents and
that put old books in the box is also a (larger) constituent.
The constituenthood test is also sensitive to the difference between particles
and prepositions. Consider the similar-looking examples in (60), both of which
contain looked and up:
(60) a. We looked up the street.
b. He looked up the answer.

The examples differ, however, as to whether up forms a constituent with the
following material or not. We can again apply the wh-question test:

(61) Q: What did he look up?
     A: The street.
     A′: The answer.

(62) Q: Where did he look?
     A: Up the street.
     A′: *Up the answer.

(63) Q: Up what did he look?
     A: The street.
     A′: *The answer.

What the contrasts here show is that up forms a constituent with the street in
(60a), whereas it does not with the answer in (60b).
Replacement by a Proform: English, like many languages, uses pronouns to
refer to individuals and entities mentioned earlier. For instance, the woman who
is standing by the door in (64a) can be ‘replaced’ by the pronoun she in (64b):
(64) a. What do you think the woman who is standing by the door is doing now?
b. What do you think she is doing now?

There are other ‘proforms,’ such as there, so, as, and which, that also stand in for
(express the same content as) a previously mentioned expression.
(65) a. Have you been [to Seoul]? I have never been there.
b. Pat might [go home]; so might Lee.
c. Pat might [pass the exam], as might Lee.
d. If Pat can [speak French fluently] – which we all know they can – we will
have no problems.
34 LEXICAL AND PHRASAL SIGNS

A proform cannot normally be used to refer back to something that is not a
constituent:
(66) a. John asked me to put the clothes in the cupboard, and to annoy him I really
stuffed them there [there = in the cupboard].
b. John asked me to put the clothes in the cupboard, and to annoy him I stuffed
them there [them = the clothes].
c. *John asked me to put the clothes in the cupboard, but I did so [=put the
clothes] in the suitcase.

Both the proforms there and them refer to constituents. In (66c), however, so
would have to stand in for put the clothes, which is only part of the VP
constituent put the clothes in the cupboard, making the example unacceptable.

2.5 Forming Phrasal Constructions: Phrase Structure


Rules
We have seen evidence for the existence of phrasal constructions.
Phrases are built up from lexical categories, and hence we have phrases such as
NP, VP, PP, and so on. As before, we use distributional evidence to classify each
type and then specify rules to account for the distributions we have observed, as
we did above for S.

2.5.1 NP: Noun Phrase


Consider (67):
(67) ____ [liked ice cream].

The expressions that can occur in the blank position here are once again limited.
The kinds of expression that do appear here include:
(68) Mary, I, you, students, the students, the tall students, the students from
Seoul, the students who came from Seoul, etc.

If we look into the subconstituents of these expressions, we can see that each
includes at least an N and forms an NP (noun phrase):
(69) a. students: N
b. the students: Det N
c. the tall students: Det Adj N
d. the students [from Seoul]: Det N PP
e. the students [who came from Seoul]: Det N S

These observations lead us to posit the following rule:5


(70) NP → (Det) A∗ N (PP/S)

5 The relative clause who came from Seoul is a modifying clause, instantiating the S in this rule. See Chapter 11.

This rule characterizes a phrase and is one instance of a phrase structure (PS)
rule. The rule indicates that a ‘mother’ NP can consist of one or more ‘daughters,’
including an optional Det, any number of optional As, an obligatory N, and then
an optional PP or a modifying S.6 The slash indicates different options for the
same place in the linear order. These options in the NP rule can be represented
in a tree structure:
(71) [tree diagram omitted: an NP node dominating an optional Det, any number of As, the obligatory N, and an optional PP or S, with the preterminal nodes left open]

Once we insert appropriate expressions into the preterminal nodes, we will have
well-formed NPs, and the rule will not generate the following NPs:
(72) *the whistle tune, *the easily student, *the my dog . . .

One important point is that as only N is obligatory in NP, a single noun such
as Mary, you, or students can constitute an NP by itself. Hence the subject of
the sentence She sings will be an NP, even though that NP consists only of a
pronoun.
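The optionality encoded in rule (70) can be made concrete with a small sketch. The following Python snippet is purely illustrative: the category labels and the function name is_np are our own simplifications, not CxG notation. It encodes the rule as a pattern over strings of category labels:

```python
import re

# A sketch of rule (70), NP -> (Det) A* N (PP/S): an optional Det,
# any number of As, an obligatory N, then an optional PP or S.
NP_RULE = re.compile(r"^(Det )?(A )*N( PP| S)?$")

def is_np(categories):
    """Check whether a sequence of category labels matches rule (70)."""
    return bool(NP_RULE.match(" ".join(categories)))

print(is_np(["Det", "A", "N", "PP"]))  # the tall students from Seoul -> True
print(is_np(["Det", "Det", "N"]))      # *the my dog -> False
```

As the rule predicts, a bare N such as students passes (only the N is obligatory), while *the my dog, with two determiners, fails.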

2.5.2 VP: Verb Phrase


Just as N projects an NP, V projects a VP. A simple test environment
for VP is given in (73):
(73) The student ____.

(74) lists just a few of the possible phrases that can occur in the blank
position.
(74) snored, ran, sang, loved music, walked the dog through the park, lifted 50
pounds, is honest, warned us that storms were coming, etc.

These phrases all have a V as their head (essential expression) – as projections
of V, they form a VP:7
(75) a. snored: V
b. loved [music]: V NP
c. walked [the dog] [through the park]: V NP PP
d. thinks [Kim is honest]: V S
e. warned [us] [that storms were coming]: V NP CP

6 To license an example like the very tall man, we need to make A* into AP*. For simplicity, we
just use the former in the rule.
7 The phrase CP is the combination of that and a finite sentence. See Section 2.5.6.

We thus can characterize the VP rule as the one given in (76), to a first level of
analysis:
(76) VP → V (NP) (PP∗ /S/CP)

This simple VP rule says that a VP can consist of an obligatory V followed by an
optional NP and then any number of PPs, an S, or a CP. The rule thus does not
generate ill-formed VPs such as these:
(77) *leave the meeting sing, *the leave meeting, *leave on time the meeting . . .
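Rule (76) can likewise be stated as a pattern over strings of category labels. The sketch below is illustrative only (the labels and the function name is_vp are ours); it accepts exactly the strings the rule licenses:

```python
import re

# A sketch of rule (76), VP -> V (NP) (PP*/S/CP): an obligatory V,
# an optional NP, then any number of PPs, or a single S or CP.
VP_RULE = re.compile(r"^V( NP)?(( PP)*| S| CP)$")

def is_vp(categories):
    """Check whether a sequence of category labels matches rule (76)."""
    return bool(VP_RULE.match(" ".join(categories)))

print(is_vp(["V", "NP", "PP"]))  # walked the dog through the park -> True
print(is_vp(["NP", "V"]))        # *the leave meeting -> False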

We can also observe that the presence of a VP is essential to forming a
grammatical S, and the VP must be finite (present or past tense). Consider the
following examples:
(78) a. The monkey wants to leave the meeting.
b. *The monkey eager to leave the meeting.

(79) a. The monkeys approved of their leader.


b. *The monkeys proud of their leader.

(80) a. The women practice medicine.


b. *The women doctors of medicine.

These examples show us that a well-formed English sentence can consist of an NP
and a (finite) VP, which can be represented as a PS rule providing information
about constituency and linear order:
(81) S → NP VP

We thus have the rule that English sentences are composed of an NP and a VP,
the precise structural counterpart of the traditional ideas of a sentence being ‘a
subject and a predicate’ or ‘a noun and a verb.’
One more aspect of the structure of a VP involves the presence of auxiliary
verbs. Think of continuations for the fragments in (82):
(82) a. The students ____.
b. The students want ____.

For example, the phrases in (83a) and (83b) can occur in (82a), whereas those in
(83c) can appear in (82b):
(83) a. run, feel happy, study English syntax . . .
b. can run, will feel happy, must study English syntax . . .
c. to run, to feel happy, to study English syntax . . .

We have seen that the expressions in (83a) all form VPs, but how about those in
(83b) and (83c)? These are also VPs, which happen to contain more than one V.
In fact, the parts after the auxiliary verbs in (83b) and (83c) are themselves reg-
ular VPs. In the full grammar we will consider to and can and so on as auxiliary
verbs, with the feature specification [AUX +] to distinguish them from regular

main verbs. Then all modal auxiliary verbs are simply introduced by a second
VP rule (see Section 2.5):
(84) VP → V[AUX +] VP

One more important VP structure involves the VP modified by an adverb or a PP:
(85) a. They [[entered the country] illegally].
b. They [[entered the country] in the last decade].

In such examples, the adverb illegally and the PP in the last decade are
modifying the preceding VP. To form such VPs, we need the PS rule in (86):
(86) VP → VP Adv/PP

This rule, together with (81), will allow the following structure for (85b):8
(87) [tree diagram omitted: S over NP and VP, with the inner VP entered the country modified by the PP in the last decade]

2.5.3 AP: Adjective Phrase


The most common environment in which an adjective phrase (AP)
occurs is in ‘linking verb’ constructions, as in (88):
(88) John feels ____.

Expressions like those in (89) can occur in the blank space in (88):
(89) happy, uncomfortable, terrified, sad, proud of her, proud to be his student,
proud that he passed the exam, etc.

Since these all include an adjective (A), we can safely conclude that they all form
an AP:
(90) a. happy: A
b. proud [of her]: A PP
c. proud [to be his student]: A VP
d. proud [that he passed the exam]: A CP

Looking into the constituents of these, we can formulate the following simple PS
rule for the AP:9
8 We use a triangle when we do not need to represent the internal structure of a phrase.
9 The phrase CP results from the combination of a complementizer like that and an S.

(91) AP → A (PP/VP/CP)

The AP rule can easily explain the following:


(92) a. John sounded [AP happy].
b. John sounded [AP proud [PP of her]].
c. John felt [AP proud [CP that his son won the game]].

The verbs sounded and felt each require an AP to follow them: (92a)–(92c) satisfy the rule in
(91). This can be represented in the following structures:
(93) [tree diagrams omitted: the APs in (92a)–(92c), each headed by A with an optional PP or CP complement]

The rule in (91), however, would not license the expressions in the brackets as
proper APs:
(94) John sounded [*happily/*very/*the student/*in the park].

2.5.4 AdvP: Adverb Phrase


Another phrasal syntactic category is adverb phrase (AdvP), as
exemplified in (95):
(95) soundly, well, clearly, extremely, carefully, very soundly, almost certainly,
very slowly, etc.

These phrases are often used to modify verbs, adjectives, and adverbs them-
selves, and they can all occur in principle in the following environments:
(96) a. They had behaved very ____.
b. They worded the offer ____.
c. They treated the sources ____.

Phrases other than an AdvP cannot appear here. An NP like the student
or an AP like really happy cannot occur in these syntactic positions, which the
following AdvPs fill naturally:
(97) a. They had behaved very differently.
b. They worded the offer really carefully.
c. They treated the sources separately.

Based on these observations, the AdvP rule can be given as follows:


(98) AdvP → (AdvP) Adv

2.5.5 PP: Preposition Phrase


Another major phrasal category is preposition phrase (PP). PPs like
those in (99) generally consist of a preposition plus an NP:
(99) from Seoul, in the box, in the hotel, into the soup, with John and his dog,
under the table, etc.

These PPs can appear in a wide range of environments:


(100) a. John came from Seoul.
b. They put the book in the box.
c. They stayed in the hotel.
d. The fly fell into the soup.

One clear case in which only a PP can appear is the following:


(101) The squirrel ran straight/right ____.

The intensifiers straight and right can occur neither with an AP nor with an
AdvP:
(102) a. The squirrel ran straight/right up the tree.
b. *The squirrel is straight/right angry.
c. *The squirrel ran straight/right quickly.

From the examples in (99), we can deduce the following general rule for forming
a PP:10
(103) PP → P NP

The rule states that a PP consists of a P followed by an NP. We cannot construct
unacceptable PPs like the following:
(104) *in angry, *into sing a song, *with happily . . .

10 Depending on how we treat the qualifiers straight and right, we may need to extend this PP
rule as PP → (Qual) P NP so that the P may be preceded by an optional qualifier like right
or straight. However, this means that we need to introduce another lexical category, ‘Qual.’
Another direction is to take the qualifier categorically as an adverb carrying the feature QUAL
while allowing only such adverbs to modify a PP.

2.5.6 CP and ConjP: Complementizer and Conjunction Phrases


We have seen that the following expressions form independent lexical
categories:11
(105) a. Complementizers: that, if, whether (finite), for (nonfinite)
b. Coordinating conjunctions: for, and, nor, but, or, yet, so (this set is
occasionally referred to by means of the acronym FANBOYS)
c. Subordinating conjunctions: if, since, until, unless, whereas, while, as
though . . .

There are issues about whether all of these expressions project phrases, but we
take it that at least complementizers and subordinating conjunctions project
phrases, namely CP and ConjP:12
(106) a. He hopes [C that [S you go ahead with the speech]].
b. [CONJ After [S I had an interview]], I met her.

One key difference between CP and ConjP is that a CP is an obligatory phrase,
while a ConjP is an optional subordinating clause. For instance, consider the
following:
(107) a. The police officer asked *(if the death was not an accident).
b. The police officer missed the evidence (if the death was not an accident).

The if -clause in (107a) is a complement clause required by the matrix verb asked,
while the if -clause in (107b) is a subordinating clause, which is optional. This
implies that we need to distinguish these two by the following PS rules:
(108) a. CP → C S
b. ConjP → Conj S

2.6 Grammar with Phrasal Constructions

We have seen earlier that a grammar with just lexical categories is not
adequate for capturing the basic properties of the language. How much further
do we get with a grammar that includes phrases? A set of PS rules that licenses
the combination of lexical and phrasal constructions, some of which we have
already seen, is given in (109):13

11 The following are examples that use ‘FANBOYS’ as coordinating markers:

a. She must have been very hungry, for she ate everything immediately.
b. They went to the park, and they went down the slide.
c. Mike doesn’t like doing his homework, nor does he like going to school.
d. The park is empty now, but it will be filled with children after school.
e. We could go get ice cream, or we could go get pizza.
f. Projects can be really exciting, yet they can be really hard work.
g. The lady was feeling ill, so she went home to bed.
h. I go to the library; I love to read.

12 There are two other views on the treatment of subordinating conjunctions. One is to treat
them as prepositions combining with an S (Emonds, 1976), and the other is to take them as
complementizers (van Gelderen, 2017).
(109) a. S → NP VP
b. NP → (Det) A∗ N (PP/S)
c. VP → V (NP) (A/PP/S/VP)
d. AP → A (PP/CP)
e. AdvP → (AdvP) Adv
f. PP → P NP
g. VP → VP AdvP

The rules say, among other things, that a sentence is the combination of an NP and a
VP, and that an NP can be made up of an optional Det, any number of As, an obligatory N,
and an optional PP or S. Of the possible tree structures that these rules can generate,
the following is one example:
(110) [tree diagram omitted: one structure generated by the rules in (109), with the preterminal nodes (shown as dots) awaiting lexical insertion]

With the structural possibilities shown here, let us assume that we have the
following lexical entries:
(111) a. Det: a, an, the, this, that, his, her, no, etc.
b. A: handsome, tall, little, small, large, stylish, big, yellow, etc.
c. N: book, boy, garden, friend, present, dog, cat, man, woman, etc.
d. V: kicked, chased, sang, met, gave, taught, etc.
e. P: in, at, of, to, for, on, etc.

Inserting these elements in the appropriate preterminal nodes (the places with
dots) in (110), we are able to produce various sentences like those in (112):14
(112) a. That tall man met a dog.
b. A man kicked that small ball.

13 The grammar consisting of such a form of rules is often called a ‘Context Free Grammar,’ as
each rule may apply any time its environment is satisfied, regardless of any other contextual
restrictions.
14 The grammar still generates semantically anomalous examples like #The desk believed a man or
#A man sang her hat. For such semantically distorted examples, we need to refer to the notion
of ‘selectional restrictions’ (see Chapter 7).

c. The woman chased a cat in the garden.


d. The little boy gave a present to his friend.
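The way PS rules plus a lexicon yield sentences like (112) can be sketched in a few lines of Python. The rule set below is a deliberately pruned, hypothetical subset of (109) and (111), meant only to show the licensing mechanism, not to cover English:

```python
import random

# An illustrative, non-recursive subset of the PS rules in (109)
# and the lexicon in (111).
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["Det", "A", "N"]],
    "VP": [["V", "NP"], ["V", "NP", "PP"]],
    "PP": [["P", "NP"]],
}
LEXICON = {
    "Det": ["the", "a", "that"],
    "A":   ["tall", "small", "little"],
    "N":   ["man", "dog", "woman", "garden"],
    "V":   ["met", "chased", "kicked"],
    "P":   ["in"],
}

def generate(symbol, rng):
    """Rewrite a symbol top-down until only words remain."""
    if symbol in LEXICON:
        return [rng.choice(LEXICON[symbol])]
    expansion = rng.choice(RULES[symbol])
    return [word for daughter in expansion for word in generate(daughter, rng)]

print(" ".join(generate("S", random.Random(0))))
```

Each run licenses a tree top-down and then inserts words at the preterminal nodes, exactly as described for (110); sentences on the pattern of That tall man met a dog in the garden fall out when the VP → V NP PP option is chosen.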

There are several ways to generate an infinite number of sentences with this
kind of grammar. As we have seen before, one simple way is to repeat a category
(e.g., adjective) infinitely as given in (109b). There are also other ways of gen-
erating an infinite number of grammatical sentences. Look at the following two
PS rules from (109) again:
(113) a. S → NP VP
b. VP → V S/CP

As we show in the following tree structure, we can ‘recursively’ apply the two
rules, in the sense that one can feed the other and then vice versa:
(114) [tree diagram omitted: a recursive structure in which S dominates a VP that itself dominates another S]

Verbs like think can combine with either an S or a CP. It is not difficult to expand
this sentence by applying the two rules again and again:
(115) a. Bill claims (that) John believes (that) Mary thinks (that) Tom is honest.
b. Jane imagines (that) Bill claims (that) John believes (that) Mary thinks (that)
Tom is honest.

There is no limit to this kind of recursive application of PS rules: it proves that
this kind of grammar can generate an infinite number of grammatical sentences.
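This recursion is easy to emulate: reapplying the rule pair S → NP VP and VP → V S/CP adds one layer of embedding per pass. The following minimal sketch (the subject names and the helper embed are illustrative) builds examples on the pattern of (115):

```python
# Each loop iteration reapplies the rule pair S -> NP VP, VP -> V CP,
# wrapping the previous sentence in one more layer of embedding.
def embed(depth):
    """Build a sentence with `depth` levels of clausal embedding."""
    subjects = ["Mary", "John", "Bill", "Jane"]
    sentence = "Tom is honest"
    for i in range(depth):
        sentence = f"{subjects[i % len(subjects)]} thinks that {sentence}"
    return sentence

print(embed(3))
# Bill thinks that John thinks that Mary thinks that Tom is honest
```

Since depth is unbounded, the function witnesses the claim that recursive PS rules license an infinite set of sentences.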
One structure which can be also recursive involves sentences with auxil-
iary verbs. As noted before in (84), an auxiliary verb forms a larger VP after
combining with a VP:
(116) [tree diagram omitted: a VP dominating V[AUX +] and a VP]

This means that we will also have a recursive structure like the following:15
(117) [tree diagram omitted: stacked auxiliary VPs, each V[AUX +] combining with a lower VP]

Another important property that PS rules bring us is the ability to make
reference to hierarchical structures within given sentences, where parts are
assembled into substructures of the whole. One merit of such hierarchical struc-
tural properties is that they enable us to represent the structural ambiguities
of sentences like those we have seen earlier in (48). Let us look at more
examples:
(118) a. The little boy hit the child with a toy.
b. Chocolate cakes and pies are my favorite desserts.
Depending on which PS rules we apply, the sentences here will have different
hierarchical structures. Consider the possible partial structures of (118a),
which the grammar allows:
(119) [tree diagrams omitted: in (119a) the PP with a toy is attached to the VP; in (119b) it is attached to the NP the child]

The structures clearly indicate what with a toy modifies: in (119a), it modifies
the whole VP, whereas in (119b) it modifies just the NP the child. The structural
differences induced by the PS rules directly represent these meaning differences.
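The two attachments of the PP in (118a) can even be counted mechanically. The sketch below is a CKY-style chart that tallies derivations under binary, simplified versions of our PS rules; the rule inventory and mini-lexicon are illustrative, not the full grammar:

```python
from collections import defaultdict

# A CKY-style parse counter over simplified binary PS rules.
LEX = {"hit": "V", "the": "Det", "child": "N", "with": "P", "a": "Det", "toy": "N"}
BIN = [("NP", "Det", "N"), ("PP", "P", "NP"),
       ("VP", "V", "NP"), ("VP", "VP", "PP"), ("NP", "NP", "PP")]

def count_parses(words, goal="VP"):
    """Count distinct derivations of `goal` spanning all of `words`."""
    n = len(words)
    chart = defaultdict(int)  # (start, end, category) -> number of derivations
    for i, w in enumerate(words):
        chart[(i, i + 1, LEX[w])] = 1
    for span in range(2, n + 1):
        for start in range(n - span + 1):
            end = start + span
            for mid in range(start + 1, end):
                for parent, left, right in BIN:
                    chart[(start, end, parent)] += (
                        chart[(start, mid, left)] * chart[(mid, end, right)])
    return chart[(0, n, goal)]

print(count_parses("hit the child with a toy".split()))  # 2: VP- vs. NP-attachment
```

The two derivations correspond exactly to the two attachment sites for with a toy, one per tree in (119).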
15 Due to the limited number of auxiliary verbs, and restrictions on their cooccurrence, the maxi-
mum number of auxiliaries in a single English clause is four (e.g., The building will have been
being built for three years), and their relative order is fixed. See Chapter 8.

The PS rules we have introduced would give us different structures for the
following two:
(120) [tree diagrams omitted: an if-clause as a CP complement of the verb versus an if-clause as a ConjP subordinate clause]

The if -clause in (120a) is a complement clause introduced by the
complementizer if, while the one in (120b) is a subordinate clause introduced by
if.
Note that coordination structures, which allow the coordination not only of Ss
but also of any other lexical or phrasal category, require a different PS rule.
Consider the following:
(121) a. *The children were in their rooms and happily.
b. *Lee went to the store and crazy.

We have noted that English allows two like categories to be coordinated. This
can be written as a PS rule for phrasal conjunction, where X(P) is any phrase or
lexical category in the grammar:16
(122) X(P) → X(P)+ Conj X(P)

The ‘coordination’ rule says that two or more identical phrasal (XP) or lexical (X)
categories can be coordinated and form the same category X(P), as illustrated by the
following:17

16 Unlike the Kleene star operator ∗, the plus operator + here means that the X(P) occurs at
least once.
17 This coordination rule needs to be relaxed to license the coordination of unlike categories, as in
Kim is [a CEO] and [proud of her job]. Such examples can be taken to be the coordination of
two predicative expressions.

(123) a. Paul [lives] and [works] in the same city.


b. The gentleman bought my [house] and [car] last year.
c. He has a [big] and [beautiful] swimming pool.

(124) a. [The chicken] and [the rice] go well together.


b. The president will [understand the criticism] and [take action].
c. Susan works [too slowly] and [overly carefully].

Applying the PS rule in (122), we will then allow (125a) but not (125b):
(125) [tree diagrams omitted: (125a) a licensed coordination of like categories; (125b) an ill-formed coordination of unlike categories]

Unlike categories such as PP and AP may not be coordinated: This is what the
coordination PS rule ensures.
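The like-category condition that rule (122) enforces can be stated as a one-line check. This is only a sketch, and the function name is ours:

```python
# A sketch of the like-category condition in rule (122): a coordination
# is licensed only if all conjuncts share one category, which the
# mother node then inherits.
def coordinate(*conjunct_categories):
    """Return the mother's category, or None if the conjuncts differ."""
    categories = set(conjunct_categories)
    return categories.pop() if len(categories) == 1 else None

print(coordinate("NP", "NP"))  # the chicken and the rice -> NP
print(coordinate("PP", "AP"))  # *to the store and crazy -> None
```

As footnote 17 notes, a fuller grammar would relax this condition to license the coordination of unlike predicative categories.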

2.7 Multi-word Expressions: Between Lexical and Phrasal


Constructions
We have classified expressions into word and phrasal constructions,
but note that there are also so-called multi-word expressions, which contain two
or more words but behave like a word or a fixed phrase in the sense that their
grammar or meaning is often idiosyncratic or unpredictable. There are at least
three different types of multi-word expressions in English.

2.7.1 Fixed Expressions


Expressions like by and large, in short, in spite of, and so forth consist
of more than one word, but they do not behave like phrases. These are fully
lexicalized in the sense that they have no allowable permutations:
(126) a. *in shorter/*by and larger/*in spiter of
b. *in very short/*by and really large/*in truly spite of

Such expressions are thus taken to be words-with-spaces, as illustrated by
the following structure:

(127) [tree diagram omitted: a multi-word expression such as in spite of dominated by a single lexical node]

2.7.2 Semi-fixed Expressions


Idiomatic expressions like kick the bucket (die), throw in the towel
(give up), shoot the breeze (talk idly), and so forth are also fixed expressions
in the sense that they do not allow syntactic variations and their meaning is not
predictable.
(128) a. *John kicked the bucket we all must kick. (no internal modification)
b. *The bucket was kicked. (no passivization)
As the data indicate, such idiomatic, fixed expressions do not allow internal mod-
ification or passivization: No syntactic processes can be applied to an internal
element of the expression. Note, however, that, unlike fixed expressions, these
expressions can undergo inflection, as in kicked the bucket or shot the breeze.
The meanings of these expressions are not composed from the meaning of each
part. For instance, kick the bucket does not mean that one does the action of kick-
ing toward the bucket; it rather means ‘die.’ The same can be said of the other two
semi-fixed expressions mentioned in this section: shoot the breeze means ‘chat’
and throw in the towel means ‘give up’ – meanings not owed to their component
parts. To capture the fact that the verbs in these expressions behave syntactically
like other verbs with respect to inflection but do not contribute to the meaning of
the VP, we could have a structure like the following:
(129) [tree diagram omitted: a VP dominating the inflected V kicked and the NP the bucket, with the idiomatic meaning ‘die’ assigned to the VP as a whole]

What the structure implies is that the inflected verb kicked combines with the
NP the bucket, but its semantic composition is peculiar in that the combination
kicked the bucket means not an action of kicking but ‘die.’
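The split behavior of semi-fixed idioms (regular inflection on the verb, but a non-compositional meaning for the whole V + NP combination) can be sketched as a lemma-based lookup. The tables and names below are illustrative only:

```python
# Inflected forms map to a lemma; idiomatic meanings are stored for
# the lemma + object pair as a whole, mirroring structure (129).
LEMMAS = {"kick": "kick", "kicks": "kick", "kicked": "kick",
          "shoot": "shoot", "shot": "shoot"}
IDIOMS = {("kick", "the bucket"): "die", ("shoot", "the breeze"): "chat"}

def idiom_meaning(verb_form, object_np):
    """Return the idiomatic meaning of V + NP, if the pair is listed."""
    return IDIOMS.get((LEMMAS.get(verb_form), object_np))

print(idiom_meaning("kicked", "the bucket"))  # die
print(idiom_meaning("kicked", "the ball"))    # None (compositional reading only)
```

Because the lookup keys on the lemma, kicked the bucket receives the meaning ‘die’ even though the verb inflects like any other verb.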

2.7.3 Syntactically Flexible Multi-word Expressions


In Section 2.2.1, we have seen that verb-particle constructions with
an object behave differently from verb-preposition examples. For example,
consider the following pair of examples as another illustration:

(130) a. Everyone was panting as if they’d all run up a steep hill. (up as a preposition)
b. The disease would run up a bill as high as $50 billion. (up as a particle)

One obvious difference between the particle and the preposition is that only the
particle can occur after the object:

(131) a. *John ran a big hill up.


b. John ran a big bill up.

The constituent test with cleft constructions tells us that, unlike the particle, the
preposition forms a unit with the following NP:

(132) Preposition up
a. It was [up a big hill] that John ran. (cleft)
b. It was [a big hill] that John ran up. (cleft)

(133) Particle up
a. It was [a big bill] that John ran up. (cleft)
b. *It was [up a big bill] that John ran. (cleft)

This data set indicates that the particle does not form a constituent with the
object. Another interesting data set concerns so-called ‘gapping,’ which allows
the ellipsis of a redundant (repeated) verb or verb complex:

(134) a. John ran up a big hill and Jack up a small hill. (gapping ran)
b. *John ran up a big hill and Jack a small hill. (no gapping ran up)

(135) a. John ran up a big bill and Jack up a small bill. (gapping ran)
b. John ran up a big bill and Jack a small bill. (gapping ran up)

In both (134a) and (135a), we can gap the main verb ran on its own. The difference
emerges between (134b) and (135b): the sequence ran up cannot be gapped when
up is a preposition, as in (134b), but it can be gapped when up is a particle,
as in (135b). This difference implies that the postverbal particle forms a
strong unit with the preceding main verb. These patterns can be represented in
the following tree structures:

(136) [tree diagrams omitted: in (136a) the preposition up forms a PP with the following NP; in (136b) run and up form a verb-particle complex taking the NP as its sister]

The structure in (136b) would mean that the particle with the preceding main
verb forms a verb complex, as represented by the following constructional rule:
(137) V → V, Part

The verb-particle complex will then combine with the object by the typical VP
rule VP → V, NP, as seen in (136b). The supporting evidence for this may come
from coordination:
(138) a. Did Jill run [up a big hill] or [up a small hill]?
b. *Did Jill run [up a big bill] or [up a small bill]?
c. Did Jill [run up] [a big bill] or [a small bill]?

The contrast here indicates that the [verb-particle] sequence forms a complex
unit. The verb-particle complex can also be observed in the following
data (Jackendoff, 2002):
(139) a. The rapid [looking up] of the information is important.
b. The prompt [sending out] of reports is commendable.

In these examples, the particle forms a unit with the gerundive verb. The particle
here cannot be separated from the gerundive verb, as in the following:
(140) a. *the rapid looking of the information up
b. *the prompt sending of the reports out

The facts we have not discussed include examples where the particle occurs
right after the object:
(141) a. Jill brought the cat in.
b. He shut the gas off.

To license this ordering, the grammar introduces a combinatorial rule like the
following:

(142) VP → V NP Part

This would then license the following structure for (141b):

(143) [tree diagram omitted: a VP dominating V, the NP the gas, and the particle off]

The particle in English thus either forms a fixed complex unit with the preceding
main verb, or is a syntactic sister to the verb, occurring right after the object (see
Section 5.4 for further discussion). This complex unit is larger than a word but
smaller than a full phrase. We can think of this complex unit as a compound word
somewhat like English compound verbs stir fry and blow dry.

2.8 Conclusion

The theoretical framework that we adopt here is based on the fundamental
assumption that language is an infinite set of signs, including lexical
and phrasal signs. Lexical entries license lexemes, and constructions license
constructs – phrases that consist of a mother plus one or more (phrasal or
lexical) daughter nodes. Lexemes belong to syntactic categories like noun,
verb, and preposition. We diagnose these categories based on combinatory
requirements of the words in question. But we could not properly analyze
English clauses and sentences if we viewed them as simply strings of syntactic
categories. Instead, sentences in English have a kind of hierarchical struc-
ture called constituent structure, as indicated by phenomena like subject-verb
agreement.
A constituent is a series of words that behaves like an indivisible unit for cer-
tain syntactic purposes, for example, serving as the ‘clefted’ constituent in the
it-cleft construction. Among the constituents we have discussed are NP (typi-
cally a determiner followed by a nominal expression) and VP (a verb optionally
followed by a NP, AP, or PP). We noted that, for example, a single verb, like the
intransitive verb disappear, will be a VP all by itself. We saw that some phrases,
like the semi-fixed expressions read between the lines and throw in the towel,
are phrasal units whose meanings cannot be predicted from the meanings of the
head word or its complement(s). Within CxG, we recognize as idioms not only
multi-word expressions but also templates for building phrases like the CORREL -
ATIVE CONDITIONAL CONSTRUCTION , which, despite having some fixed parts,
is highly productive. The grammar of a language is thus a continuum of flexibil-
ity that ranges from lexical entries to descriptions of multi-word expressions to

open patterns like the rule for forming noun phrases (NPs) or the rule for creating
conditional sentences.18
Before examining the constructions that combine words and phrasal signs, we
will explore the grammatical functions and semantic roles that each constituent
plays in a given sentence. These functions and roles are the main topic of the
next chapter.

Exercises

1. Determine the lexical category of the italicized words in the
following. In doing so, use the three criteria (morphological, semantic,
and syntactic) to provide evidence for your answer and state which
criterion is the most reliable one:
a. The president is busy being well, himself, in Texas, waving a
flag around.
b. We went out there and didn’t play very well.
c. They were reportedly drawing water from a contaminated well.
d. You might not get that impression if you google our business or
check out my personal Facebook profile.
e. Google plans to put its autonomous driving technology into
minivans.
f. It’s hard for me to take time off from work.
g. Emmanuel, what do you like to do for fun?
h. This is a review I’m almost reluctant to write, for I have a
serious conflict of interest.

2. Consider the following data carefully and describe the similarities
and differences among that, for, if, and whether. In so doing, first
compare that and for and then see how these two are different from
if and whether:
(i) a. I am astounded that you should expect to be paid for this totally
unnecessary trip.
b. *I am astounded that you to expect to be paid for this totally
unnecessary trip.

(ii) a. I was anxious for Freeman to return and cage the beast.
b. *I was anxious for Freeman should return and cage the beast.

(iii) a. We need to know if/whether those details support originalism.


b. I am not here to decide if/whether you should be punished.

18 CxG uses the term construct-icon (a blend of the words construction and lexicon) to refer to
this continuum. The construct-icon is not just a list of language conventions; it has a taxonomic
organization. See Hilpert (2014).

(iv) a. If students are coming to school less prepared to learn, this may
cause declining productivity.
b. Whether students are in elementary schools or in prestigious
universities, homework is a necessary part of the learning
process.

3. Consider the following examples and identify the lexical category
of to in each case. In doing so, give at least one argument for the
identified lexical category:
(i) a. Students wanted to write a letter.
b. Students intended to surprise the teacher.

(ii) a. Students objected to the teacher.


b. Students sent letters to the teacher.

What do the following data imply for the lexical category of to?
(iii) a. I know I should [go to the dentist’s], but I just don’t want to ____.
b. I don’t really want to [go to the dentist’s], but I know I should ____.
c. *I know I should keep studying, but I just don’t keep ____.

4. Check whether the italic parts form a constituent or not, using at
least two constituenthood tests (e.g., cleft, pronoun substitution, stan-
dalone, etc.). In addition, using the PS rules found throughout the
chapter, provide tree structures for three of the sentences:
(i) a. Ed bought a book on English syntax.
b. Ed put a book on the shelf.

(ii) a. Dana turned down my dinner invitation.


b. Dana turned down a side road.

(iii) a. He pointed at a book about Construction Grammar.


b. He talked to a stranger about Construction Grammar.
c. He talked with a stranger about Construction Grammar.

(iv) a. The boss considered the person a genius.


b. The boss fired the person responsible.

5. Explain why the examples in (5a)–(5e) are ungrammatical. As part
of the exercise, first draw a structure for each example and then try to
determine the applicability of the PS rules such as the coordination
rule in (122), presented earlier in this chapter:
a. *I know that student and that she is really smart.
b. *He turned off the TV and off the lights.
c. *Jean put on a fire suit and out the fire.
d. *She went bankrupt and to the bank.
e. *Ed called off this afternoon’s meeting and on his sick girlfriend.

6. In addition to the so-called it-cleft construction, we can use the
pseudo-cleft construction as a constituent test:
(i) a. She hated her failure at it.
b. What she hated was her failure at it.

Using the pseudo-cleft construction, test whether the bracketed parts in (ii)
form a constituent or not. Also discuss whether you can observe any
differences between cleft and pseudo-cleft in testing the constituent-
hood of a phrasal construction:
(ii) a. It looks [very beautiful].
b. He bought [a new van last week].
c. Jake hid the writings [under his robe].
d. He tried to [destroy the evidence of what he had done].
e. Robert turned [down the job].

7. Provide a tree structure for each of the following sentences and
list all the PS rules required in each sentence:
a. I consider Telma the best candidate.
b. He took Masako to the school by the park.
c. Jane wants to study linguistics in the near future.
d. He made the faces look cold in a cool way.
e. I was happy that the doctor chose us for the forum.
f. That they did it is readily apparent.
g. Tori encouraged her students to try the harder classes.

8. Each of the following sentences is structurally ambiguous – each has
at least two structural analyses. Represent the structural ambiguities
by providing different tree structures for each string of words:
a. I saw the film with Beyoncé.
b. I saw that gas can explode.
c. I need to have that report on our webpage by tomorrow.

9. We have discussed three types of multi-word expressions: fixed,
idiomatic, and verb-particle complexes. Consider the following sentences
and identify these expressions therein. In doing so, explain
why and provide tree structures for each sentence:
a. Two guys from Texas were shooting the breeze.
b. I switched off the light.
c. The suspect is still at large.
d. Let me try the boots on.
e. Security across the region was tight because of the unrest.
f. I guess I’ll hit the sack now.
3 Syntactic Forms, Grammatical
Functions, and Semantic Roles

3.1 Introduction

In the previous chapter, we analyzed English sentences using PS rules
(constructions) that license the combination of lexical and phrasal signs to create
bigger signs. For example, the PS rule ‘S → NP VP’ allows us to combine an
NP with a VP, creating a subject-predicate (a clause). As we have seen, such
PS rules allow us to represent the constituent structure of a given sentence in
terms of lexical and phrasal categories, using forms. There are other sets of
distinctions that we can use to analyze phrasal units like sentences: One is the
set of grammatical functions like subject and object:
(1) a. Syntactic categories or forms: N, A, V, P, NP, VP, AP . . .
b. Grammatical functions: SUBJ (subject), OBJ (object), MOD (modifier), PRED
(predicate) . . .

Terms like SUBJ, OBJ (direct object (DO), indirect object (IO)), MOD, and PRED
represent grammatical functions that each phrasal constituent can play in a given
sentence. As an example, consider (2):
(2) The driver crashed his car into the back of another car.

This sentence can be structurally represented in terms of either syntactic
categories or grammatical functions, as illustrated in (3):
(3) a. [S [NP The driver] [VP crashed [NP his car] [PP into the back of another car]]].
b. [S [SUBJ The driver] [PRED crashed [OBJ his car] [MOD into the back of
another car]]].

As shown here, the driver is an NP with respect to its syntactic form, but it is the
SUBJ (subject) of the sentence with respect to its grammatical function. The NP
his car is the OBJ (object), while the verb crashed functions as a predicator. More
importantly, we consider the entire VP to be a PRED (predicate) that describes
a property of the subject. Into the back of another car is a PP in terms of its
syntactic category while serving as a MOD (modifier) here.
We also can represent sentence structures using semantic roles. Constituents
can be considered in terms of semantic relations such as agent, patient, location,
instrument, and the like. A semantic role label tells us in essence ‘who is doing
what to whom’ – that is, what sort of participant each constituent expresses in a
clause, regardless of whether that clause describes an event or a state. Each main
verb assigns one or more semantic roles. Consider the semantic roles of the NPs
in the following two sentences:1
(4) a. [The hurricane] destroyed [their house].
b. [Their house] was destroyed by [the hurricane].

Both of these sentences describe a situation in which the hurricane destroyed
their home. In this situation, the hurricane is the agent and their house is the
patient of the event. This in turn means that in both the active and passive ver-
sions of this report, the hurricane has the semantic role of agent (agt), whereas
their house has the semantic role of patient (pat), even though the grammatical
functions of these roles vary across the two sentences (e.g., the hurricane is sub-
ject in the active sentence and object of a preposition in the passive sentence). We
thus can assign the following semantic role to each constituent of the examples:
(5) a. [[agt The hurricane] [pred destroyed [pat their house]]].
b. [[pat Their house] [pred was destroyed [agt by the hurricane]]].

As noted here, in addition to agent and patient, we have the semantic predicate
(pred), which selects for the agent and patient roles. So we now can describe the
semantic role that each constituent expresses.
Throughout this book we will see that in syntactic description, we must refer to
these three different levels of information (syntactic category, grammatical function,
and semantic role), and that these levels interact with one another. There are
certain associations across levels that are typical in event encoding; for example,
an agent is a subject and an NP, and a patient is an object and an NP. However, as
we see in (5), the passive-active voice alternation is a case in which these typical
associations are broken.
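As a rough illustration of these three parallel levels, the active/passive pair in (4) can be encoded as plain data. The labels SUBJ, OBJ, agt, and pat follow the chapter; the dictionary layout and the OBL label for the by-phrase are our own illustrative conventions, not part of any formal theory.

```python
# Three levels of description for the active/passive pair in (4):
# syntactic category, grammatical function, and semantic role.
active = {
    "the hurricane": {"cat": "NP", "fn": "SUBJ", "role": "agt"},
    "destroyed":     {"cat": "V",  "fn": "PRED", "role": "pred"},
    "their house":   {"cat": "NP", "fn": "OBJ",  "role": "pat"},
}
passive = {
    "their house":      {"cat": "NP", "fn": "SUBJ", "role": "pat"},
    "was destroyed":    {"cat": "V",  "fn": "PRED", "role": "pred"},
    "by the hurricane": {"cat": "PP", "fn": "OBL",  "role": "agt"},  # 'OBL' is our label
}

def role_of(analysis, role):
    """Return the phrase bearing a given semantic role."""
    return next(p for p, info in analysis.items() if info["role"] == role)

# The voice alternation changes grammatical functions but not semantic roles:
print(role_of(active, "agt"), "/", role_of(passive, "agt"))
# → the hurricane / by the hurricane
print(active["the hurricane"]["fn"], "vs", passive["by the hurricane"]["fn"])
# → SUBJ vs OBL
```

The point of the sketch is simply that the role column stays constant under the alternation while the function column does not.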

3.2 Grammatical Functions

How can we identify the grammatical function of a given constituent?
Several tests can be used to determine grammatical function, as we show here.

3.2.1 Subjects
Consider the following pair of examples:
(6) a. [The dark] [devoured [the light]].
b. [The light] [devoured [the dark]].

These two sentences have exactly the same words and have the same predicator,
devoured. Yet they differ significantly in meaning, and the main difference comes

1 Semantic roles are also often called ‘thematic roles’ or ‘θ-roles’ (‘theta roles’) in generative
grammar (Chomsky, 1982, 1986).

from what serves as subject or object with respect to the predicator. In (6a), the
subject is the dark, whereas in (6b) it is the light, and the object is the light in
(6a) but the dark in (6b).
The most common sentence structure seems to be that in which the NP subject
performs the action denoted by the verb (thus having the semantic role of agent).
However, this is not always so:
(7) a. She wears a stylish set of furs.
b. This place physically stinks.
c. It is raining heavily.
d. Wolfgang himself disliked his hometown.

Wearing a set of furs, stinking, raining, or disliking one’s hometown are not
agentive activities; these are states or, in the case of raining, physical processes.
Such facts show that we cannot equate the grammatical role of subject with the
semantic role of agent.
More reliable tests for subjecthood come from syntactic tests such as agree-
ment, tag-question formation, and subject-auxiliary inversion.
Agreement: The main verb of a sentence agrees with the subject in English:
(8) a. He never writes/*write his books from an outline.
b. The events of the last days *saddens/sadden me.
c. Ashley takes/*take her mother out to lunch.

The singular subject he or Ashley requires a singular verb, while the plural sub-
ject events requires a nonsingular verb. Simply being closer to the main verb
does not entail subjecthood, as further shown by the following examples:
(9) a. Every one of those children is/*are important.
b. The legitimacy of their decisions depends/*depend on public support for the
institution.
c. The results of this analysis *is/are reported in Table 6.

The subject in each example is every one, the legitimacy, and the results respec-
tively, even though there are other nouns closer to the main verb. It is thus not
simply the linear position of the NP that determines agreement; rather, agreement
tells us what the subject of the sentence is.
Tag questions: A tag question is an abbreviated question at the end of a clause
consisting of an auxiliary verb followed by a pronoun referring back to the sub-
ject of the main clause. The tag-question formation is also a reliable subjecthood
test:
(10) a. The lady singing with that boy is a genius, isn’t she/*isn’t he?
b. With their teacher, the kids have arrived safely, haven’t they/*hasn’t he?

The pronoun in the tag question agrees with the subject in person, number, and
gender – it refers back to the subject but not necessarily to the closest NP, nor
to the most topical one. The pronoun she in (10a) shows us that lady is the head
(the essential element) of the subject NP in that example, and the use of they in
the tag in (10b) leads us to assign the same property to kids. The generalization
is that a tag question must contain a pronoun which identifies the subject of the
clause to which the tag is attached.
Subject-auxiliary inversion: In forming questions and other sentence types,
English uses subject-auxiliary inversion, a pattern in which the subject imme-
diately follows an auxiliary verb:
(11) a. This guy is a genius.
b. The rules have changed.
c. It could be more harmful on super hot days.
(12) a. Is [this guy] a genius?
b. Have [the rules] changed?
c. Could [it] be more harmful on super hot days?

As seen here, the formation of yes-no questions such as these involves placing the
first tensed auxiliary verb in front of the subject NP. More formally, the auxiliary
verb is inverted with respect to the subject, hence the term ‘subject-auxiliary
inversion’ (SAI) (see Chapter 8 for detailed discussion). This is not possible
with a nonsubject:
(13) a. Most of the people in this country have already made the decision.
b. *Have [in this country] most of the people already made the decision?

Subject-auxiliary inversion provides another reliable subjecthood test.

3.2.2 Direct Objects and Indirect Objects


The grammatical function of object (OBJ) has two subtypes: direct
object (DO) and indirect object (IO). A direct object (DO) is canonically an NP
denoting the entity that undergoes a change of state or a change of location as a
result of the action denoted by the verb:
(14) a. The burglar broke the window.
b. She bought this blue hat for her boyfriend.

However, this is not a solid generalization. The objects in (15a) and (15b)
are not obviously changed by the action. In (15a) the dog is experiencing
something, and in (15b) the thunder is somehow causing some feeling in the
dog:
(15) a. Thunder frightens [the dog].
b. The dog fears [thunder].
Once again, the data show us that we cannot identify the object based on semantic
roles. A much more reliable criterion is the syntactic construction passive, in
which a nonagent appears as subject. The sentences in (14) can be turned into
passive sentences in (16):

(16) a. The window was broken by the burglar.


b. This blue hat was bought for her boyfriend by her.

What we can learn here is that the object-denoting entities in (14) can be
‘promoted’ to subject in the passive sentences. The test relies on the fact that
nonobject NPs cannot be promoted to the subject:
(17) a. Jones remained a faithful servant to Rice.
b. *A faithful servant was remained to Rice by Jones.

The generalization is that only those NPs that serve as direct objects of their
verbs can be promoted to subject by means of passive.
An indirect object (IO) is an NP that occurs with a DO in a ditransitive
sentence, and in this construction it precedes the DO. The pattern is:
(18) Subject – Verb – IO (Indirect Object) – DO (Direct Object)

The IO expresses the one to whom or for whom the action of the verb is performed,
or the (actual or potential) recipient of the item being transferred (the
latter of which is denoted by the DO). The IO thus canonically has the semantic
role of goal, recipient, or benefactive:
(19) a. The catcher threw [me] [the ball]. (IO = goal)
b. She gave [the police] [the licence plate number]. (IO = recipient)
c. She’d baked [him] [a birthday cake]. (IO = benefactive)

In each case, the DO, following the IO, has the semantic role of theme.
While both IO and DO can have a variety of semantic roles, the passive
construction (to be introduced in Chapter 9) has structural rather than seman-
tic conditions of application, promoting to subject whatever NP would have
immediately followed the verb. This reflects the traditional intuition that pas-
sive applies to the grammatically dependent first NP, and thus allows those IO
arguments that immediately follow the verb to become subjects as well. This is
shown by the passive versions of the sentences in (19):
(20) a. I was thrown the ball (by the catcher).
b. The police were given the licence plate number (by her).
c. He had been baked a birthday cake (by her).

Note that examples with IO-DO order are different from those in which the
semantic role of the IO is expressed as an oblique PP, following the DO:2
(21) a. The catcher threw the ball to me.
b. She gave the licence plate number to the police.
c. She’d baked a birthday cake for him.

In this kind of example, it is the DO that is promoted to subject in the passive
voice, as it immediately follows the V in the active form, yielding examples like
the following:
2 Strictly speaking, inside the PP, the NP is the (direct) object of the preposition.

(22) a. The ball was thrown to me by the catcher.


b. The licence plate number was given to the police by her.
c. A birthday cake had been baked for him by her.

The ‘NP PP’ (or oblique goal) pattern combines with a wider array of verbs than
does the ‘NP NP’ ditransitive pattern; the latter is restricted to the specific seman-
tic roles mentioned above. So, for example, (23a) has no alternate expression
where ‘a zombie’ is an IO:
(23) a. They have turned him into a zombie.
b. *They have turned a zombie him.
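The structural condition on passive described above – promote whatever NP immediately follows the verb – can be sketched as a toy operation on flat clause representations. The list encoding, and the choice to pass in the participle form directly, are simplifications of ours; case (me vs. I) and subject-verb agreement are deliberately ignored.

```python
def passivize(subject, participle, complements):
    """Toy sketch of the passive generalization in Section 3.2.2:
    the first post-verbal complement is promoted to subject, and the
    old subject is demoted to an optional by-phrase. Case marking and
    subject-verb agreement are ignored in this sketch.
    """
    promoted, *rest = complements
    return [promoted, "was " + participle, *rest, "by " + subject]

# Double-object frame (19a): the IO immediately follows the verb, so it is promoted.
print(passivize("the catcher", "thrown", ["me", "the ball"]))
# → ['me', 'was thrown', 'the ball', 'by the catcher']

# Oblique-goal frame (21a): here the DO immediately follows the verb.
print(passivize("the catcher", "thrown", ["the ball", "to me"]))
# → ['the ball', 'was thrown', 'to me', 'by the catcher']
```

The same rule yields both (20a) and (22a) because it is stated over linear position, not over semantic roles – which is exactly the traditional intuition.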

3.2.3 Predicative Complements


Some NPs immediately follow the verb but do not behave like
DOs. Consider the following sentences:
(24) a. She is a beautiful young lady.
b. John became a huge supporter of the group.
(25) a. The Democrats elected Bill Clinton president.
b. She didn’t consider Jimmy a boyfriend.

The italicized elements here are traditionally called ‘predicative (PRD) comple-
ments’ in the sense that they function as a predicate describing the subject or
object. However, although they are NPs, they cannot be promoted to subject by
passive:
(26) a. *President was elected Bill Clinton (by the Democrats).
b. *A boyfriend was considered Jimmy (by her).

The difference between objects and predicative complements can also be seen in
the following contrast:
(27) a. He made Jack a sandwich.
b. I made Jack a football star.

Even though the italicized expressions here are both NPs, they function differ-
ently. The NP a sandwich in (27a) is a direct object, as in He made a sandwich
for Jack, whereas the NP a football star in (27b) cannot be an object: It serves
as the predicate of the object Jack. If we think of part of the meaning informally,
only in the second example would we say that the final NP describes the object NP:
(28) a. (27a): Jack ≠ a sandwich
b. (27b): Jack = a football star

In addition, phrases other than NPs can serve as predicative complements:


(29) a. The revolution then became [AP necessary].
b. Passion is [S what makes you roll up your sleeves and get it done].
c. The irony was [CP that there was nothing repairable about any of this].

(30) a. My two sons really made her [AP happy].


b. Male students regard English [PP as the language for better employment,
technology and tourism].
c. His mother-in-law spoiled her grandchildren [AP rotten].

The bracketed complements function to predicate a property of the subject in
(29) and of the object in (30).

3.2.4 Oblique Complements


Consider now the italicized expressions in (31):
(31) a. He talked to them about the health care bill.
b. He just reminded me of someone I used to know.
c. They informed clients of problems.

These italicized expressions are neither objects nor predicative complements.


Since their presence is obligatory for syntactic well-formedness, they are called
oblique complements. Roughly speaking, ‘oblique’ contrasts with the ‘direct’
functions of subject and object, and oblique phrases are typically expressed as
PPs in English.
As we have seen before, most ditransitive verbs can also take oblique
complements:
(32) a. I gave the phone to my husband.
b. Her uncle taught English to her.

The PPs here, which cannot be objects since they are not NPs, also do not serve
as predicates of the subject or object – they relate directly to the verb as oblique
complements.
The functions of DO, IO, predicative complement, and oblique complement
all have one common property: they are all selected by the verb, and we view
them as being present to ‘complement’ the verb to form a legitimate VP. Hence,
these are called complements (COMPS), and typically they cannot be omitted.

3.2.5 Modifiers
Unlike these complements required by a lexical head, there are
expressions which do not complement the predicate in the same way and which
are truly optional:
(33) a. She stopped and looked up suddenly.
b. I made my choice a long time ago.
c. The videographers were indicted in Texas.
d. He wasn’t popular because he was a genius at math.

The italicized expressions here are all optional and function as modifiers (also
called ‘adjuncts’ or ‘adverbial’ expressions). These modifiers specify the man-
ner, location, time, or reason, among many other properties, of the situations
expressed by the given sentences – informally, they are the how, when, where,
and why phrases.
One additional characteristic of modifiers is that they can be stacked, whereas
complements cannot:
(34) a. *John gave Tom [a book] [a record].
b. Oswald was seen with him [several times] [last summer].

As shown here, temporal adjuncts like several times and last summer can be
repeated, whereas the two complements a book and a record in (34a) cannot.
Of course, temporal adjuncts do not become the subject of a passive sentence,
suggesting that they cannot serve as objects:
(35) a. Gary visited yesterday.
b. *Yesterday was visited by Gary.

3.3 Bringing Form and Function Together

We now can analyze each sentence in terms of grammatical functions
as well as structural constituents. Let us see how we can analyze a simple
sentence along these two dimensions:
(36) [tree diagram not reproduced here]

As shown here, the expressions the little cat and a mouse are both NPs, but they
have different grammatical functions, SUBJ and OBJ. The VP as a whole func-
tions as the predicate of the sentence, describing the property of the subject.3
Additionally, though not shown here, we would want to say that little is an
attributive modifier of cat, and the determiners the and a have a ‘specifying’
function with respect to their head nouns (see Chapter 5).
Assigning grammatical functions within complex sentences is no different:

3 It is important not to confuse the functional term ‘adverbial’ and the syntactic category label
‘adverb.’ The term ‘adverbial’ is used interchangeably with ‘adjunct’ or ‘modifier,’ whereas
‘adverb’ only designates a part of speech. In English almost any kind of phrasal category can
function as an adverbial, but only a limited set of words are adverbs.

(37) [tree diagram not reproduced here]

Each clause has its own SUBJ and PRED: John is the subject of the higher clause,
whereas the cat is the subject of the lower clause. We can also notice that there
are two OBJs: The CP is the object of the higher clause, whereas the NP is that
of the lower clause.

3.4 Form-Function Mismatches

In traditional generative syntax, grammatical functions like subject
and direct object are indirectly defined by PS rules (Chomsky, 1981a, 1981b):
(38) a. Subject-of: [NP, S] (S → NP, VP)
b. Direct-Object-of: [NP, VP] (VP → V, NP)

Within the PS-rule system, as represented here, the subject is defined as the
immediate daughter of S, while the object is the immediate sister of V. These two
are also categorically specified as NPs. However, linguistic evidence indicates
that not only NPs but also other categories (e.g., CP, VP, and PP) can function as
subject and object (Newmeyer, 2000, 2003):
(39) a. [NP The inferno] destroyed the downtown area.
b. [VP Loving you] is not in my control.
c. [CP That he doesn’t achieve perfection] is reasonable.
d. [VP To finish this work] is beyond his ability.
e. [PP Under the bed] is a safe place to hide.

(40) a. Mr. Mulvaney sent [NP a memo] to employees.


b. Gina wondered [S what other bills her mother might have neglected to pay].
c. They believed [CP that a tattoo or piercing had hurt their chances of getting
a job].
d. Are you going on holiday before or after Easter? I prefer [PP after Easter].

Subject tests like subject-verb agreement and tag question support the assump-
tion that these non-NP phrases are the subject:
(41) a. [That he doesn’t achieve perfection] is reasonable, isn’t it?
b. [[That the march should go ahead] and [that it should be cancelled]]
have/*has been argued by different people at different times.

(42) a. [To finish this work] is beyond his ability, isn’t it?
b. [[To delay the march] and [to go ahead with it]] have/*has been argued by
different people at different times.

Examples like this would require a new set of PS rules. For example, the partial
tree structure of (42a) may look like the following:
(43) [tree diagram not reproduced here]

The tree structure means that we need a new S rule like ‘S → VP VP’ or a rule
like ‘NP → VP’ to project the subject VP to an NP to keep the rule ‘S → NP
VP’ (see Chapter 4 for the resolution of this issue).
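The problem can be made concrete with a toy rule table in the spirit of (38), where 'subject' is simply the first daughter licensed by an S rule. The rule set below is purely illustrative; admitting the subjects in (39) forces either extra S rules or a category-changing rule, exactly as noted above.

```python
# Toy PS-rule table: each category maps to its possible daughter sequences.
ps_rules = {"S": [("NP", "VP")]}  # the classical rule 'S -> NP VP'

def subject_categories(rules):
    """Under the configurational definition in (38), the subject is the
    first daughter of S, so these are the categories able to serve as
    subjects."""
    return {daughters[0] for daughters in rules["S"]}

print(subject_categories(ps_rules))  # → {'NP'}

# Licensing the non-NP subjects in (39) means multiplying S rules:
ps_rules["S"] += [("VP", "VP"), ("CP", "VP"), ("PP", "VP")]
print(sorted(subject_categories(ps_rules)))  # → ['CP', 'NP', 'PP', 'VP']
```

With four S rules instead of one, 'subjecthood' is no longer identifiable with any single category – the form-function mismatch in miniature.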
The same fact is observed for the object. Non-NP phrases like CP, VP, or even
PP can function as the object:
(44) a. They believe [that group work is an essential tool for students’ future lives].
b. They prefer [to study in a formal setting].
c. I’ll choose [after the holidays] to hold my party.

Object tests like the passive tell us that these non-NPs function as the object:
(45) a. [Group work is an essential tool for students’ future lives] is believed.
b. [To study in a formal setting] is preferred.
c. [After the holidays] will be chosen to hold my party.

The same goes for modifier (MOD), as noted before. Not only AdvP but also
phrases such as NP, S, VP, or PP can function as a modifier:
(46) a. The little cat devoured a mouse [NP last night].
b. This race has started [AdvP very early].
c. I stayed on as CEO [PP for four years].
d. They will absorb enough correct information [VP to pass the test].
e. Joseph had spoken to me in English [S when the party started].

The sentence (46a) will have the following structure:


(47) [tree diagram not reproduced here]

Here the expression last night is an adverbial NP in the sense that it is categor-
ically an NP but functions as a modifier (adjunct) to the VP. As we go through
this book, we will see that the distinction between grammatical functions and
categorical types is crucial in the understanding of English syntax.

3.5 Semantic Roles

As noted above, semantic roles were devised to classify the arguments
of predicators (primarily verbs, locative prepositions and adjectives) into
a closed set of participant types. Even metaphorical sentences are assigned
semantic roles based on their literal meanings; for example, the NP bad habits
is analyzed as a location in the sentence Daenerys is falling into bad habits.
Although, as discussed in Section 3.2.1, we cannot identify any particular gram-
matical function with any particular semantic role, there are important correla-
tions between the two levels: for example, agents tend to be subjects and patients
tend to be objects. In addition, the properties of semantic roles interact in regular
ways with certain grammatical constructions; for example, the missing second-
person subject in an imperative sentence like Sit down! is an agent. A list of the
most relevant semantic roles and their associated properties is given below.4
• Agent: A participant who engages in some intentional act as specified by the
verb. Examples: subject of eat, kick, hit, hammer, etc.
(48) a. The boy ate a sandwich.
b. He hit the ball.
c. Ruby hammered the spike.

• Patient: A participant that is affected by the action denoted by the verb.
Examples: object of kick, hit, hammer, etc.5

4 The definitions of semantic roles given here are adapted from Dowty (1989).
5 Patient and theme are often unified into ‘undergoer’ on the grounds that both a patient and a
theme can be said to be affected by the action in question.

(49) a. He hit the ball.


b. Ruby hammered the spike.

• Experiencer: A participant characterized as aware of something. Examples:
subject of perception verbs like feel, smell, hear, see, etc.
(50) a. He felt comfortable in Washington.
b. She heard a distant bell.
• Theme: A participant characterized as changing its position or condition, or
as being in a state or position. Examples: direct object of give, hand, subject of
come, happen, die, etc.
(51) a. They gave a flashlight to my younger brother.
b. He died last month.

• Benefactive: The entity that benefits from the action or event denoted by the
predicator. Examples: oblique complement of make, buy, etc.
(52) a. He made a cake for me.
b. John bought a guitar for me.

• Source: The participant from which motion proceeds. Examples: object of
deprive, fall off, free, cure, etc.
(53) a. Grant fell off the wagon.
b. We bought the house from her parents.
c. The government deprived the public of essential information.

• Goal: The participant to which motion proceeds. Examples: subject of
receive, buy, indirect object of tell, give, etc.
(54) a. Moon receives the award this week.
b. He moved his family to his boyhood home.

• Location: The position of a static entity, including a possessor or container
of that entity. Examples: subject of keep, own, retain, locative PPs, etc.
(55) a. They had been allowed to keep their personal effects.
b. Extracted cores were placed in a CT scanner.

• Instrument: The means by which the action or event denoted by the pred-
icator is carried out. Examples: oblique complement of hit, wipe, hammer,
etc.
(56) a. He wiped his mouth with the back of his hand.
b. Tiger can hit a ball with a stick.

An important advantage of having such semantic roles available to us is that
they allow us to capture the relationship between sentences that express distinct
perspectives on the same situation, as we saw at the beginning of this chapter. As
another example, consider the following pair:

(57) a. [agt The cat] chased [pat the mouse].


b. [pat The mouse] was chased by [agt the cat].

Although the above two sentences have different syntactic structures, they have
essentially identical interpretations. The reason is that the same semantic roles
are assigned to the same NPs: In both examples, the cat is the agent and the
mouse is the patient. Different grammatical uses of verbs may express the same
semantic roles in different arrays.
Semantic roles also allow us to classify verbs into finer-grained groups.
Consider the following examples:
(58) a. There comes a time when you have to say to yourself enough is enough.
b. There remains a gap between ‘what is’ and ‘what should be.’
c. There lived a lion whose skin could not be pierced by any weapon.
d. There arrived a tall, red-haired, and incredibly well-dressed man.

(59) a. *There sang a man with a pipe.


b. *There dances a man with an umbrella.
c. *There cried a child asking for more candies.

All the verbs in (58) and (59) are intransitive, but not all are acceptable in the
there-construction. The difference comes from the semantic role of the postver-
bal NP, as assigned by the main verb. Verbs like arrive, remain, and live are
taken to assign the semantic role of ‘theme’ (see the list of roles above), whereas
verbs like sing and dance assign an ‘agent’ role. We thus can conjecture that
there-constructions are not compatible with a verb whose subject carries an agent
semantic role.
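The conjecture can be restated as a one-line predicate over a small verb lexicon keyed by the semantic role each verb assigns to its subject. The lexicon below simply restates the judgments in (58)–(59); it is an illustration of the generalization, not a claim about how such knowledge is stored.

```python
# Semantic role each intransitive assigns to its subject, per (58)-(59).
subject_role = {
    "come": "theme", "remain": "theme", "live": "theme", "arrive": "theme",
    "sing": "agent", "dance": "agent", "cry": "agent",
}

def allows_there(verb):
    """Conjecture from Section 3.5: the there-construction is
    incompatible with verbs whose subject bears the agent role."""
    return subject_role[verb] != "agent"

print([v for v in subject_role if allows_there(v)])
# → ['come', 'remain', 'live', 'arrive']
```

The predicate correctly separates the grammatical examples in (58) from the ungrammatical ones in (59) using nothing but the subject's semantic role.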
While semantic roles provide very useful ways of describing properties across
different constructions, we should point out that the theoretical status of seman-
tic roles is still unresolved.6 For example, there is no agreement about exactly
which and how many semantic roles are needed. The problem is illustrated by
the following simple examples:
(60) a. The exhibit resembles a video game.
b. The composition of the planet Venus is similar to that of Earth.

What kind of semantic roles do the arguments here have? Both participants seem
to be playing the same role in these examples – they both cannot be either agent
or patient or theme. There are also cases where we might not be able to pin down
the exact semantic role:
(61) a. Henry ran into the house to find a bag of water.
b. The baby tilted her head up to look at the sky.

The subject Henry in (61a) is both agent and theme: It is agent since it initiates
and sustains the movement but also theme since it is the object that moves. Also,

6 See Levin and Rappaport Hovav (2005) for further discussion of this issue.

the subject the baby in (61b) can either be an experiencer or an agent depending
on her intention – one can just look at the sky with no purpose at all.7
Although there are theoretical issues involved in adopting semantic roles in
grammar, there are many advantages to using them, some of which we have
noted here. We can make generalizations about the grammar of the language;
for example, typically the ‘agent’ takes the subject position, while an NP fol-
lowing the word from serves as the ‘source.’ As we will see in Chapter 4, the
array of semantic roles that a verb or class of verbs takes is a standard way of
characterizing that verb or verb class in a lexicon based on lexical classes. In
subsequent chapters, we will have cause to refer to semantic roles in various
places.

3.6 Conclusion

Chapter 2 discussed syntactic categories and their phrasal expansions
(e.g., NP, VP, AP, and S), used by traditional grammar to represent the constituent
structure of sentences. This chapter discussed two other notions – grammatical
functions and semantic roles – each of which allows us to describe the dependen-
cies that exist between a predicator and the units that it combines with to make
phrases of various categories. The grammatical functions that we have discussed
include subject, object (direct and indirect), predicative complement, oblique
complement, and modifier. The chapter explored diagnostics used to identify
each of these grammatical functions in a sentence. For instance, tag questions,
agreement, and subject-auxiliary inversion can tell us if a given constituent is a
subject or not. We note here that key to understanding the syntax of English is
the recognition that the mapping between form (categorial type) and function is
not one-to-one; mismatches, as when a clause or even a PP serves as a subject,
are possible. This chapter described cases in which a given grammatical function
can have various categorial realizations.
We saw that the semantic roles of each constituent in a sentence, taken
together, tell us ‘who is doing what to whom.’ The chapter gave examples of
semantic roles like agent, theme, patient, location, source, and goal. We saw that
although there are instances in which it is difficult to diagnose an argument’s
semantic role, semantic roles can be of use in classifying verbs into distinct sub-
classes. We will refer to these semantic roles as needed. Throughout this book,
we will refer to mappings between categorial form and grammatical function,
as well as to the combinatorial properties of words and phrases, in describing
simple and complex clauses.

7 To overcome the problem of assigning the correct semantic role to an argument, one can assume
that each predicator has its own (individual) semantic roles. For example, the verb kick, instead of
having an agent and a patient, has two individualized semantic roles, ‘kicker’ and ‘kicked.’ See
Pollard and Sag (1987).

Exercises

1. Construct sentences containing the following grammatical functions:


a. subject, predicator, direct object
b. subject, predicator, indirect object, direct object
c. subject, predicator, adjunct
d. adjunct, subject, predicator
e. subject, predicator, direct object, oblique complement
f. subject, predicator, predicative complement
g. subject, predicator, direct object, predicative complement

2. Give the grammatical function of the italicized phrases in the following examples:
a. I’m sure this year will be totally awesome if you keep hanging
in there!
b. We ended this year with the highest consumer confidence rating
in 28 years.
c. We could still make the playoffs this year.
d. She clipped samples and placed them in little jars.
e. We should teach the dolts a lesson.
f. That he does it with such a deft sense of equilibrium makes this
one of the more intriguing entries this year.
g. You just look wonderful, honey.
h. Excuse me, I don’t see people rushing out in little spring outfits
in February.

3. Draw tree structures for the following sentences (with categories) and
then assign an appropriate grammatical function to each phrase:
a. They parted the best of friends.
b. Benny worked in a shoe factory when he was a student.
c. The gang robbed her of her necklace.
d. The film is about marine life.
e. I think of John as a good friend.
f. The trio visited a pub in the small town.
g. Oscar described Doberman as a really smart guy.
h. We often expect our students to diligently read their textbooks.
i. Honestly, I do not think that I understand people very well.

4. Consider the following examples:


(i) a. There is/*are only one museum diploma program in South
Africa.
b. There *is/are more museum diploma programs in South Africa.

With respect to the grammatical function of there, what can we infer from these data? Try out more subjecthood tests, such as the tag-question test, to determine the grammatical function of there in these examples. In addition, decide the subject in the following so-called ‘locative inversion’ examples, and provide at least three different locative inversion examples that you can find in naturally occurring material:
(ii) a. Down the street *comes/come two men, Jonathan and Adam.
b. Just ahead of her in the queue, nearly hidden among the others,
stands the tiny figure of Shakespeare, a hand in his breeches
pocket and a somewhat bemused expression on his ruff-framed
face.

5. Determine the grammatical function of the italicized phrase, providing at least one syntactic test that we have discussed in the chapter:8
a. He proved his innocence.
b. He proved an adequate student.
c. We will all remember today.
d. We will all relax today.
e. Tori considered him a decent man.
f. Tori cooked him a decent meal.

6. Consider the following examples with the copula verb be, and discuss
what kind of form-function (category-meaning) mismatches we can
observe here:
(i) a. Kim is a good student.
b. Kim is in.

Do you also observe any mismatches in the following examples? (See Chapter 7 for detailed discussion of examples like (ii).)
(ii) a. Lee seems to be happy.
b. Lee believed him to be heartbroken.

7. Assign a semantic role to each NP in the following sentences:


a. John smelled the freshly baked bread.
b. On one level, the thought horrified the woman.
c. Thomas Harty was stabbed with a knife.
d. She gave us a variety of assignments.
e. Wright has heard the full rumor.
f. Tony hid the writings under his robe.
g. James baked me a fruitcake.
h. I am really jealous of the smaller girls on my team.
i. The teakettle was boiling on the stove.

8. Determine the grammatical functions for the underlined expressions in the following text:
8 This exercise is adapted from Huddleston and Pullum (2002).

Scientists found that the birds sang well in the evenings but per-
formed badly in the mornings. After being awake several hours,
however, the young males regained their mastery of the mate-
rial and then improved on the previous day’s accomplishments.
To see whether this dip in learning was caused by the same
kind of precoffee fog that many people feel in the morning,
the researchers prevented the birds from practicing first thing
in the morning. They also tried keeping the birds from singing
during the day, and they used a chemical called melatonin to
make the birds nap at odd times. The researchers concluded that
their study supports the idea that sleep helps birds learn. Stud-
ies of other animals have also suggested that sleep improves
learning.9

9 From Science News Online, Feb 2, 2007.


4 Head, Complements, Modifiers,
and Argument Structures

4.1 Building a Phrase from a Head

4.1.1 Internal vs. External Syntax


As we saw in the preceding chapters, both syntactic categories (NP,
AP, VP, PP, etc.) and grammatical functions (e.g., subject, complement, and mod-
ifier) play important roles in the analysis of English sentences. We have also
observed that the grammatical function and form of each constituent depend on
where the constituent occurs and what it combines with.
The combinatory properties of word and phrasal constructions involve two
aspects of syntax: internal and external syntax.1 Internal syntax involves what
a well-formed phrase consists of, whereas external syntax is concerned with
how the phrase can be used in a larger construction. Observe the following
examples:
(1) a. *He [put his hand].
b. *He [put under the comforter].
c. *He [put his hand warm].
d. *He [put his hand to be under the comforter].
e. He [put his hand under the comforter].

Why is only (1e) acceptable? Only this sentence satisfies the condition that the
verb put selects an NP and a PP as its complements, because it combines with
these complements to form a well-formed VP. In the other examples, this condi-
tion is not fulfilled. This combinatory requirement can be traced back to lexical
properties of the verb put, and it is not related to any properties external to the VP.
By contrast, external syntax is concerned with the syntactic environment
in which a phrase occurs. Some of the unacceptable examples in (1) can be
legitimate expressions if they occur in the proper (syntactic) context:
(2) a. This is the comforter under which he [put his hand]. (cf. (1a))
b. This is his hand that he [put under the comforter]. (cf. (1b))

At the same time, the well-formed VP in (1e) may be unacceptable, depending
on external contexts. For example, consider the frame induced by the governing
verb kept in (3):

1 The terms ‘internal’ and ‘external’ syntax are from Baker (1995).


(3) a. *He kept [put his hand under the comforter].


b. He kept [putting his hand under the comforter].

The VP put his hand under the comforter is a well-formed phrase, but it cannot
occur in (3a) since this is not the environment in which such a finite VP occurs.
That is, the verb kept requires as its complement not a finite VP but a gerundive
VP like putting his hand under the comforter.

4.1.2 The Notion of Head, Complements, and Modifiers


One important property we observe in English phrase-internal syntax is that
in building up any phrasal construction, we find only one obligatory element
in each phrase. That is, each phrase has one essential element, as represented
in the diagrams in (4):
(4) [tree diagrams omitted]

The circled element here is the essential, obligatory element within the particular
phrase. We call this essential element the head of the phrase.2 The head of each
phrase determines the syntactic category of the phrase that is built from it, a
phenomenon called ‘lexical projection.’ The head of an NP is thus N, the head
of a VP is V, and the head of an AP is A.
The property of headedness plays an important role in grammar. For example,
the verb put, functioning as the head of a VP, dictates what it must combine with:
two complements, an NP and a PP. Consider the other examples below:
(5) a. Clark denied the plagiarism charges.
b. *Clark denied.
(6) a. Hill handed the students an ambitious assignment.
b. *Hill handed the students.

The verb denied here requires an NP object, while handed requires two NP com-
plements in this use. The properties of the head verb determine what kind(s)
of elements it combines with. As noted in the previous chapter, the elements
with which a head verb must combine are called complements. The comple-
ments include direct object, indirect object, predicative complement, and oblique
complement. These are all potentially required by some verb or another.
The properties of the head become properties of the whole phrase. Why are
the examples in (7b) and (8b) ungrammatical?
(7) a. Lopez [wants to leave the United States].
b. *Lopez [eager to leave the United States].

2 See also Section 2.4 in Chapter 2.



(8) a. They [know that the president is running for reelection].


b. *They [certain that the president is running for reelection].

The examples in (7b) and (8b) are unacceptable because of the absence of
the required head. The unacceptable examples lack a finite (tensed) VP as the
bracketed part, but we know that English sentences require a finite VP as one
immediate (or daughter) constituent, as informally represented in (9):
(9) English Declarative Sentence Construction:
Each declarative sentence must contain a finite VP as its head.

Each finite VP is headed by a finite verb. If we amend the ungrammatical examples above to include a verb but not a finite one, they are still ungrammatical:
(10) a. *Lopez [(to) be eager to leave the United States].
b. *They [(to) be certain that the president is running for reelection].

The VP is considered to be the (immediate) head of the sentence, with the
verb itself as the head of the VP. In this way, we can talk about a finite or
nonfinite sentence, one which is ultimately headed by a finite or nonfinite verb,
respectively.3
In addition to the complements of a head, a phrase may also contain modifiers
(also called adjuncts):
(11) a. Tom [VP [VP offered advice to his students] in his office].
b. Tom [VP [VP offered advice to his students] with love].

The PPs in his office and with love here provide further information about the
action described by the verb, but they are not required by it. These optional
phrases function as modifiers, augmenting the minimal phrase projected from
the head verb offered. The VP which includes this kind of
modifier forms a maximal phrase. We might say that the inner VP here forms a
‘minimal’ VP, which includes all the ‘minimally’ required complements, and the
outer VP is the ‘maximal’ VP, which includes optional modifiers.
What we have seen can be summarized as follows:
(12) a. Head: A lexical or phrasal element that is essential in determining the
category and internal structure of a larger phrase.
b. Complement: A phrasal element that a head must combine with – that is,
one that is selected by the head. Complements include direct object, indirect
object, predicative complement, and oblique complement.
c. Modifier: A phrasal element that is not selected by the head but which
functions as a modifier of the head phrase, for example, indicating
the time, place, manner, or purpose of the action expressed by a verb and its
complements.

3 See Section 5.5 for the values of the attribute English verb form (VFORM) including finite and
nonfinite.

d. Minimal Phrase: the phrase including a head and all of its complements.
e. Maximal Phrase: the phrase that includes all complements as well as any
modifiers.
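The definitions in (12) can be pictured with a small data structure. The following sketch is our own illustration, not part of the book’s formalism: the class `Phrase` and the method names `minimal` and `maximal` are invented for exposition.

```python
from dataclasses import dataclass

# A toy model of (12): a phrase records its head, the complements
# the head selects, and any optional modifiers.
@dataclass
class Phrase:
    head: str    # the lexical head, e.g., 'offered'
    comps: list  # complements selected by the head
    mods: list   # optional modifiers

    def minimal(self):
        """Minimal phrase (12d): the head plus all of its complements."""
        return [self.head, *self.comps]

    def maximal(self):
        """Maximal phrase (12e): complements plus any modifiers."""
        return [self.head, *self.comps, *self.mods]

# (11a): 'offered advice to his students in his office'
vp = Phrase("offered", ["advice", "to his students"], ["in his office"])
print(vp.minimal())  # ['offered', 'advice', 'to his students']
print(vp.maximal())  # ['offered', 'advice', 'to his students', 'in his office']
```

The point of the two methods is exactly the minimal/maximal distinction drawn in (11): dropping the modifier list leaves a well-formed phrase, while dropping a complement does not.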

4.2 Differences between Complements and Modifiers

Several tests are traditionally used to determine whether a phrase is a complement or a modifier.4
Obligatoriness: As already suggested, complements are required phrases
while modifiers are not. The examples in (13)–(15) show that the verb placed
requires an NP and a PP as its complements, kept an NP and a PP or an AP, and
stayed a PP:
(13) a. Eli placed the cushion behind him.
b. Eli kept the cushion behind him.
c. *Eli stayed the cushion behind him.

(14) a. *These ladies and gentlemen placed him busy.


b. These ladies and gentlemen kept him busy.
c. *These ladies and gentlemen stayed him busy.

(15) a. *He placed behind the bodyguard.


b. *He kept behind the bodyguard.
c. He stayed behind the bodyguard.

In contrast, modifiers are optional. Their presence is not required by the grammar:
(16) a. Pat deposited some money in the bank.
b. Pat deposited some money in the bank on Friday.

In (16b), the PP on Friday is optional, serving as a modifier. However, this ‘obligatoriness’ test is not always sufficient, for some verbs allow optional complements:
(17) a. She read (the book) for at least one hour every day.
b. It seems inappropriate (to me) to turn a simple wedding into a grand social
occasion.

The possibility of omitting the book and to me in each case implies that they are
optional complements.
Iterability: The possibility of iterating identical types of phrase can also dis-
tinguish between complements and modifiers. In general, two or more instances
of the same modifier type can occur with the same head, but this is impossible
for complements:

4 Most of the criteria and tests we discuss here are adopted from Pollard and Sag (1987) and
Baker (1995).

(18) a. *The UN blamed global warming [on humans] [on natural causes].
b. The two had met [in Los Angeles] one night [at a bar] in June of that year.

In (18a), on humans is a complement, and thus a PP of the same type, on natural
causes, cannot cooccur with it. In contrast, in Los Angeles in (18b) is a modifier,
so another PP of the same type, such as at a bar, can occur in the same clause.
The Do-So Test: Another reliable test used to distinguish complements from
modifiers is the do-so or do the same thing test. As shown in (19), we can use do
the same thing to avoid repetition of an identical VP expression:
(19) a. Leslie deposited some money in the checking account and Kim did the same
thing.
b. Leslie deposited some money in the checking account on Friday and Kim
did the same thing.

We can observe in (19b) that the VP did the same thing can replace either the
minimal phrase deposited some money in the checking account or the maximal
phrase including the modifier on Friday. Notice that this VP can also replace
only the minimal phrase, excluding the modifier, as in (20):
(20) John deposited some money into the checking account on Friday and Mary
did the same thing on Monday.

From these observations, we can draw the conclusion that if something can be
replaced by do the same thing, then it is either a minimal or a maximal phrase.
This in turn means that this ‘replacement’ VP cannot be understood to exclude
any complement(s). This can be verified with more data:
(21) a. *John [deposited some money into the checking account] and Mary did the
same thing into the savings account.
b. *John [gave a present to the student] and Mary did the same thing to the
teacher.

Here the PPs into the checking account and to the student are both complements,
and thus must be included in the do the same thing phrase. This gives us the
following informal generalization:
(22) Do-So Replacement Condition:
The phrase do so or do the same thing can replace a verb phrase that includes
at least all of the complements of the verb.

This condition explains why the oblique expressions into the savings account
and to the teacher cannot appear next to did the same thing in (21). The unac-
ceptability of the examples in (23) also supports this generalization about English
grammar:
(23) a. *John locked Fido in the garage and Mary did so in the room.
b. *John ate a carrot and Mary did so a radish.

The ill-formedness of these examples indicates that both in the room and a radish
function as complements.

Combinatory Freedom: An adjunct can cooccur with a relatively broad range of heads, whereas a complement is typically limited in its distribution. Note the
following contrast:
(24) a. They sat/danced/walked/meditated on the hill.
b. They walked on/over/under the hill.
(25) a. The world relies on/*at/*for Occam’s Razor.
b. When you fail, how do you cope with/*for/*to failure?
The semantic contribution of the adjunct on the hill in (24a) is independent
of the head, whereas that of the complement on Occam’s Razor or with failure
is idiosyncratically dependent upon the head. That is, the verbs rely and
cope can combine only with a PP headed by the preposition on and with,
respectively.
Structural Differences: We can distinguish between complements and mod-
ifiers using tree structures: Complements combine with a lexical head (not a
phrase) to form a minimal phrase, whereas modifiers combine with a phrase to
form a maximal phrase. This means that we have the following structure:
(26) [tree diagram omitted]

As represented in (26), complements are sisters of the lexical head X, whereas modifiers are sisters of a phrasal head. This structural difference between com-
plements and modifiers provides a clean explanation for the patterns revealed by
the do-so (or do the same thing) test. Given that the verb ate takes only an NP
complement, whereas put takes an NP and a PP complement, we will posit the
following two structures:
(27) [tree diagrams omitted]

In this way, we represent the complements and modifiers in terms of structural differences.
Ordering Differences: Another difference that follows from the structural
distinction between complements and modifiers concerns ordering. As a comple-
ment needs to combine with a lexical head first, complements typically precede
modifiers:
(28) a. He met [a woman] [in the lobby of the Four Seasons].
b. *He met [in the lobby of the Four Seasons] [a woman].

A similar contrast can be observed in the following:


(29) a. the student [of linguistics] [with long hair]
b. *the student [with long hair] [of linguistics]

The PP with long hair is a modifier, whereas of linguistics is the complement of student. This is why with long hair cannot occur between the head student and
its complement of linguistics.5

4.3 PS Rules, X′-Rules, and Features

4.3.1 Problems of PS Rules


We saw in Chapter 2 that PS rules can describe how English sen-
tences are formed. However, two main issues arise with respect to the content of
PS rules.6 The first is related to the headedness of each phrase, often called the
‘endocentricity’ of the phrase.
We have seen above that PS rules like those in (30) can characterize well-
formed phrases in English, together with an appropriate lexicon:
(30) a. S → NP VP
b. NP → Det AP∗ N
c. VP → V (NP) (VP)
d. VP → V NP AP
e. VP → V NP NP
f. VP → V S
g. AP → A VP
h. PP → P NP
i. VP → Adv VP

5 These observed ordering restrictions can provide more evidence for the distinction between complements and modifiers. Again, this test is not always sufficient by itself. In the following, the modifiers precede the complements:
a. We discussed [all night long] [how to finish the project].
b. I said [publicly] [that we would have plenty of problems along the way].
One way to account for such examples is to assume that the clausal complement in each case is ‘extraposed’ to sentence-final position. See Chapter 12 for the discussion of extraposition constructions in English.
6 The discussion in this section is based on Sag et al. (2003).
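To see how a rule set like (30) works as a rewriting system, here is a small sketch of our own (not part of the text): it encodes a subset of these rules, with the optionality in (30c) expanded into separate alternatives, and tests whether a sequence of lexical categories can be derived from S. The function names `derives` and `splits` are invented for this illustration.

```python
# A toy encoding of some of the PS rules in (30).
# Each phrasal category maps to its possible daughter sequences.
PS_RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V"], ["V", "NP"], ["V", "NP", "NP"]],
    "PP": [["P", "NP"]],
}

def derives(cat, seq):
    """True if category `cat` can rewrite into the category sequence `seq`."""
    if cat not in PS_RULES:                     # lexical category: matches itself
        return len(seq) == 1 and seq[0] == cat
    return any(splits(rhs, seq) for rhs in PS_RULES[cat])

def splits(daughters, seq):
    """True if `seq` can be cut so that each piece derives one daughter."""
    if not daughters:
        return not seq
    first, rest = daughters[0], daughters[1:]
    return any(derives(first, seq[:i]) and splits(rest, seq[i:])
               for i in range(1, len(seq) + 1))

print(derives("S", ["Det", "N", "V", "Det", "N"]))  # True: NP VP pattern
print(derives("S", ["V", "Det", "N"]))              # False: no subject NP
```

Nothing in this encoding, however, stops us from adding a ‘rule’ such as `"VP": [["P", "NP"]]` – the point made immediately below about endocentricity.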

One property common to all of these rules is, as we have discussed, that every
phrase has its own head. In this sense, each phrase is the projection of a head and
is thereby endocentric. However, this raises the question of whether we can have
rules like the following, in which the phrase has no head at all:
(31) a. VP → P NP
b. NP → PP S

Nothing in the grammar makes such PS rules unusual or different in any way
from the set in (30). Yet if we allowed such ‘nonendocentric’ PS rules, in which
a phrase does not have a lexical head, the grammar would be too powerful,
generating more than just the grammatical sentences of the language. For instance,
with the PS rule in (31a), a string like to the room would be a VP, making John to
the room a sentence consisting of an NP and a VP. More seriously, such PS
rules, with no head expression on the right-hand side, do not exist in English or other
languages. We have seen that each phrase must have a head; these PS rules thus
violate the headedness (or endocentricity) requirement on phrases.
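The endocentricity requirement is mechanical enough to state as a check over candidate rules. The sketch below is our own toy illustration (the function name and the string encoding of categories are invented; S is set aside, since it is headed by VP rather than by a word of category ‘S’):

```python
# Hypothetical check: a rule 'XP -> daughters' is endocentric only if
# the daughters include a head of category X (lexical X or phrasal XP).
def is_endocentric(lhs, daughters):
    if len(lhs) < 2 or not lhs.endswith("P"):
        return True                  # non-phrasal left-hand sides (incl. S) not checked here
    head = lhs[:-1]                  # strip the 'P': VP -> V, NP -> N, ...
    return any(d in (head, head + "P") for d in daughters)

print(is_endocentric("VP", ["V", "NP", "PP"]))  # True: headed by V
print(is_endocentric("VP", ["P", "NP"]))        # (31a): False, no verbal head
print(is_endocentric("NP", ["PP", "S"]))        # (31b): False, no nominal head
```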
Another limitation of the simple PS rules concerns the issue of redundancy.
Observe the following:
(32) a. *The problem disappeared the accusation.
b. The problem disappeared.

(33) a. *Clarke denied.


b. Clarke denied the plagiarism charges.

(34) a. *Hill handed the students.


b. Hill handed the students an ambitious assignment.

These examples show that each verb has its own restrictions on its comple-
ment(s). For example, deny requires an NP, whereas disappear does not, and hand
requires two NPs as complements. The different patterns of complementation are
said to define different subcategories of verbs. Each specific pattern is known as
the ‘subcategorization’ requirement of each verb, which can be represented as
follows (IV: intransitive, TV: transitive, DTV: ditransitive):
(35) a. disappear: IV,
b. deny: TV, NP
c. hand: DTV, NP NP

In addition, in order to license the grammatical sentences in (32)–(34), we need to have the following three VP rules:
(36) a. VP → IV
b. VP → TV NP
c. VP → DTV NP NP

We can see here that in each VP rule, only the appropriate verb can occur. That
is, a DTV cannot form a VP with the rules in (36a) or (36b): It forms a VP only
according to the last PS rule. Each VP rule thus also needs to specify the kind of
verb that can serve as its head.
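The subcategorization requirements in (35)–(36) amount to a lexical lookup. The following is our own minimal sketch, not the book’s formalism; the lexicon entries and the helper `licenses_vp` are invented for illustration:

```python
# Subcategorization as in (35): each verb lists the categories of the
# complements it must combine with.
LEXICON = {
    "disappear": [],            # IV:  no complements
    "deny":      ["NP"],        # TV:  one NP complement
    "hand":      ["NP", "NP"],  # DTV: two NP complements
}

def licenses_vp(verb, complements):
    """A VP is well formed only if the verb's requirement is met exactly."""
    return LEXICON.get(verb) == list(complements)

print(licenses_vp("deny", ["NP"]))   # 'denied the plagiarism charges' -> True
print(licenses_vp("deny", []))       # (33a) '*Clarke denied'          -> False
print(licenses_vp("hand", ["NP"]))   # (34a) '*Hill handed the students' -> False
```

Note that the VP rules in (36) simply restate, rule by rule, what this lexicon already says verb by verb – which is the redundancy problem discussed next.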
Taking all of these observations together, we see that a grammar of the type
just suggested must redundantly encode subcategorization information both in
the lexical type of each verb (e.g., DTV) and in the PS rule for that type
of verb. A similar issue of redundancy arises in accounting for subject-verb
agreement:
(37) a. The insect devours the soft flesh.
b. The insects devour the soft flesh.

To capture the fact that the subject NP agrees with the predicate VP, we need to
break the S rule into the following two rules:
(38) a. S → NPsing VPsing (for (37a))
b. S → NPpl VPpl (for (37b))

The two PS rules ensure that the singular (sing) subject combines with a singular
VP, while the plural (pl) subject NP combines with a plural VP.
The grammar described above may be a perfectly adequate descriptive tool.
From a theoretical perspective, however, we must address the endocentricity
and redundancy issues. A more specific, related question is: how many PS
rules does English have? For example, how many PS rules do we need to
characterize English VPs? Presumably there are as many rules as there are
subcategories of verb. We need to investigate the properties shared by all PS
rules in order to develop a theory of PS rules. For example, it seems to be
the case that each PS rule must have a ‘head.’ This will prevent many PS
rules that we could write using the rule format from being actual rules of any
language.

4.3.2 Intermediate Phrases and Specifiers


In order to understand the structures that rules describe, we need
two additional notions: ‘intermediate category/phrase’ and ‘specifier (SPR).’ We
motivate the concept of an intermediate category and then describe a specifier as
a counterpart concept. Consider the examples in (39):
(39) a. Every photo of Max and sketch by his students appeared in the magazine.
b. No photo of Max or sketch by his students appeared in the magazine.

What are the structures of these two sentences? Do the phrases every photo of
Max and sketch by his students form NPs? It is not difficult to see that sketch by
his students is not a full NP by itself, for if it were, it would be able to appear as
subject by itself:

(40) *Sketch by his students appeared in the magazine.

In terms of semantic units, we can assign the following structures to the above
sentences, in which every and no operate over the meaning of the rest of the
phrase:

(41) a. [Every [[photo of Max] and [sketch by his students]]] appeared in the
magazine.
b. [No [[photo of Max] or [sketch by his students]]] appeared in the magazine.

The expressions photo of Max and sketch by his students are phrasal elements but
not full NPs. So what are they? We call these ‘intermediate phrases,’ notationally
represented as N-bar or N′. The phrase N′ is thus intuitively bigger than a noun
but smaller than a full NP, in the sense that it still requires a determiner from the
class the, every, no, some, and the like.
The complementary notion that we introduce at this point is ‘specifier’ (SPR),
which can include the words just mentioned as well as phrases:

(42) a. [the enemy’s] [N′ destruction of the city]


b. [The enemy] [VP destroyed the city].

The phrase the enemy’s in (42a) and the subject the enemy in (42b) are semanti-
cally similar in the sense that they complete the specification of the event denoted
by the (nominal and verbal) predicate. These phrases are treated as the specifiers
of N′ and of VP, respectively.
As for the possible specifiers of N′, observe the following:

(43) a. a little dog, the little dogs (indefinite or definite article)


b. this little dog, those little dogs (demonstrative)
c. my little dogs, their little dog (possessive adjective)
d. every little dog, each little dog, some little dog, either dog, no dog
(quantifying)
e. my friend’s little dog, the Queen of England’s little dog (possessive phrase)

The italicized expressions here all function as the specifier of N′. Notice, how-
ever, that although most of these specifiers are determiners, some consist of
several words, as in (43e) (my friend’s, the Queen of England’s). This moti-
vates us to introduce the new phrase type DP (determiner phrase) that includes
the possessive phrase (NP + ’s) as well as determiners. This leads us to allow
two things: a determiner alone can be projected as a DP and the posses-
sive marker (’s) functions as a determiner and projects into a DP with its NP
specifier:

(44) [tree diagrams omitted]

The structure in (44a) is an instance where a lexical head projects into a phrase
without combining with any complement or a modifier.7 The structure in (44b)
indicates that the possessive marker ’s functions as a head and projects into a DP
after combining with the obligatory NP specifier. The new phrase DP thus gives
us the generalization that the specifier of N′ is a DP.8
Now let us compare the syntactic structures of (42a) and (42b):
(45) [tree diagram omitted]

(46) [tree diagram omitted]

7 In a traditional X′-theory, N first projects into N′ and then into NP, but our feature-based system only distinguishes between word and phrase: An N′ just means a nominal phrase that requires a specifier. See Chapter 5 for details.
8 Some analyses take each expression in (43) to form a DP (e.g., a little dog, my little dogs) where
the determiner functions as the head expression.

Even though the NP and S are different phrases, we can notice several similarities. In the NP structure, the head N destruction combines with its complement and forms an intermediate phrase N′, which in turn combines with the specifier DP the enemy’s. In the S structure, the head V destroyed combines with its complement the city and forms a VP. This resulting VP then combines with the subject the enemy, which is also a specifier. In a sense, the VP is an intermediate phrase that requires a subject in order to be a full and complete S.
Given these similarities between NP and S structures, we can generalize over
them as in (47), where X is a variable over categories such as N, V, P, and other
grammatical categories:

(47) [tree diagram omitted]

This structure in turn means that the grammar now includes the following two
rules:9

(48) a. XP → Specifier, X′ (HEAD-SPECIFIER CONSTRUCTION)
b. X′ → X, YP∗ (HEAD-COMPLEMENT CONSTRUCTION)

The HEAD-SPECIFIER CONSTRUCTION and HEAD-COMPLEMENT CONSTRUCTION, which form the central part of ‘X′-theory,’ account for the core structure of both NP and S. In fact, these two general rules also encompass most of the PS rules we have seen so far. In addition to these two, we just need one more rule:10

(49) X′ → Modifier, X′ (HEAD-MODIFIER CONSTRUCTION)

This HEAD-MODIFIER CONSTRUCTION allows a modifier to combine with its head, as in the PS rule VP → VP Adv/PP:

9 Unlike the PS rules we have seen so far, the rules here are further abstracted, as indicated by the comma notation between daughters on the right-hand side. We assume that the relative linear order of a head and complements, etc. is determined by a combination of general and language-specific ordering principles, while the hierarchical X′-structures themselves apply to all languages that have demonstrable hierarchical structure.
10 The comma indicates that the modifier can appear either before the head or after the head, as in
always read books or read books always.

(50) [tree diagram omitted]

One important constraint on the HEAD-COMPLEMENT CONSTRUCTION is that the head must be a lexical element. This in turn means that we cannot apply the HEAD-MODIFIER CONSTRUCTION first and then the HEAD-COMPLEMENT CONSTRUCTION. This accounts for the following contrast:

(51) a. the king [of Rock and Roll] [with a hat]


b. *the king [with a hat] [of Rock and Roll]

The ill-formedness of (51b) is due to the fact that the modifier with a hat was
combined with the head king first:

(52) [tree diagrams omitted]

We can observe in (52b) that the combination of king with with a hat forms an N′, but the combination of the complement of Rock and Roll with this N′ will not satisfy the HEAD-COMPLEMENT CONSTRUCTION.
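The ordering restriction behind (51)–(52) can be simulated directly: head-complement demands a word-level head, so applying head-modifier first bleeds it. The sketch below is our own toy encoding, not the book’s formalism; the class `Node`, the bar-level integers, and the function names are all invented for exposition.

```python
from dataclasses import dataclass

@dataclass
class Node:
    cat: str   # category: 'N', 'P', ...
    bar: int   # 0 = lexical head (word), 1 = intermediate phrase (X')

def head_complement(head, *comps):
    """HEAD-COMPLEMENT CONSTRUCTION: applies only to a lexical head."""
    if head.bar != 0:
        return None              # not licensed: the head is already phrasal
    return Node(head.cat, 1)

def head_modifier(head, modifier):
    """HEAD-MODIFIER CONSTRUCTION: yields an intermediate phrase.
    The modifier's own category does not project."""
    return Node(head.cat, 1)

king = Node("N", 0)
of_rnr = Node("P", 1)     # the complement PP 'of Rock and Roll'
with_hat = Node("P", 1)   # the modifier PP 'with a hat'

# (51a): complement first, then modifier -- licensed
print(head_modifier(head_complement(king, of_rnr), with_hat))

# (51b): modifier first; head-complement now sees a non-lexical head
print(head_complement(head_modifier(king, with_hat), of_rnr))  # None
```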
The existence and role of the intermediate phrase N′, which is larger than a lexical category but still not a fully-fledged phrase, is further supported by the pronoun substitution examples in (53):
(53) a. The present king of country music is more popular than the last one.
b. *The king of Rock and Roll is more popular than the one of country music.

Why do we have the contrast here? One simple answer is that the pronoun one here replaces an N′ but not an N or an NP. This will also account for the following contrast:

(54) A: Which student were you talking about?
     B: The one with long hair.
     B′: *The one of linguistics with long hair.

The phrase of linguistics is the complement of student. This means that the N′ pronoun one must already include this complement, accounting for the ill-formedness of B′.
There are several more welcome consequences of these three X′ rules. These grammar rules can account for the same structures described by all of the PS rules that we have seen so far: With these rules we can identify phrases whose daughters are a head and its complement(s), or a head and its specifier, or a head and its modifier. The three X′ rules thereby greatly minimize the number of PS rules needed to characterize well-formed English sentences. In addition, these X′ rules directly address the endocentricity issue, because they refer to ‘Head.’ Assuming that X is N, we will have N, N′, and NP structures. We can formalize this more precisely by introducing the feature POS (part of speech), which has values such as noun, verb, and adjective. The structure in (55) shows how the values of the features in different parts of a structure are related:
(55)

The notation 1 shows that whatever value the feature has in one place in the
structure, it has the same value somewhere else. This is a representational tag
in which the number 1 has no significance: It could as easily be 7 or 223 (we provide more details of the formal feature system in the following section). Thus (55) indicates that the phrase’s POS value is identical to that of its head daughter, capturing the headedness of each phrase: The grammar simply does not allow any phrase without a head. Solving the redundancy issue mentioned above for agreement is now simply a matter of introducing another feature, NUMBER. That is, using the new feature NUMBER, whose values are singular and plural, we can add a crucial detail to the HEAD-SPECIFIER CONSTRUCTION:

(56) XP → Specifier[NUMBER 1], X′[NUMBER 1]

The rule states that the subject’s NUMBER value is identical to that of the predicate VP. The two rules in (38) are both represented in (56).
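For readers who find a computational analogy useful, the effect of the shared tag 1 in (56) can be sketched in a few lines of Python. The function and feature names below are our own invention, purely for illustration: the point is that a NUMBER clash between specifier and head leaves no phrase to be licensed.

```python
def head_specifier(specifier, head):
    """Sketch of (56): XP -> Specifier[NUMBER 1], X'[NUMBER 1].

    The tag 1 demands a single shared NUMBER value; the phrase
    is licensed only if the two values can be identified."""
    if specifier["NUMBER"] != head["NUMBER"]:
        return None  # agreement clash: no phrase is licensed
    # The phrase inherits its head daughter's features (headedness).
    return {"POS": head["POS"], "NUMBER": head["NUMBER"]}

ok = head_specifier({"FORM": "the boy", "NUMBER": "sing"},
                    {"FORM": "runs", "POS": "verb", "NUMBER": "sing"})
bad = head_specifier({"FORM": "the boys", "NUMBER": "plur"},
                     {"FORM": "runs", "POS": "verb", "NUMBER": "sing"})
# ok describes a verbal phrase; bad is None (agreement failure).
```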

4.3.3 Intermediate Phrases for Non-NPs


The traditional notion of X′-rules, in particular the idea that a specifier combines with an X′ intermediate phrase, may be extended to phrases other than NPs:

(57) a. [NP that [N′ boy [of hers]]]
     b. [AP much [A′ smaller [than Tom]]]
     c. [PP right [P′ down [the slope]]]

With the assumption that the specifier is a nonhead phrase directly dominated by a maximal phrase like AP or PP, much and right in (57b) and (57c) would be specifiers. However, note that, unlike specifiers of N′, specifiers of A′ and P′ are all optional and lack a tight syntactic relationship with the head. Such differences among putative ‘specifiers’ have caused proponents of X′ syntax to restrict the use of X′ to phrases like NPs. In due course, we will see that the present feature-based grammar requires no X′ notion in order to capture the properties of intermediate phrases.

4.4 Lexicon and Feature Structures

In the previous section, we have seen that the properties of a lexical


head determine the components of the minimal phrase, in terms of complements,
and that other properties of the head directly reflect properties of the phrase.
This information is encoded in a lexical entry for each word in the lexicon.
This section discusses how the present grammar does this in terms of feature
structures.
4.4.1 Feature Structures and Basic Operations


Most modern grammars rely on a representation of lexical information in terms of features and their values.11 We present here a formal and explicit way of representing this information with feature structures. Each feature structure is an attribute-value matrix (AVM):
(58)  [Attribute1  value1
       Attribute2  value2
       Attribute3  value3
       ...         ...   ]

The value of each attribute can be an atomic element, a list, a set, or a feature
structure:
(59)  [type
       Attribute1  atomic
       Attribute2  <list of values>
       Attribute3  {set of values}
       Attribute4  [feature structure]]

One important property of every feature structure is that it is typed.12 That is,
each feature structure is relevant only for a given type. A simple illustration
should suffice to show why each feature structure must be ‘typed.’ The type of the feature structure is declared at the top of the matrix:
(60)  a. [university
          NAME      Kyunghee University
          LOCATION  Seoul]

      b. *[university
          NAME   Kyunghee University
          MAYOR  Kim]

The type university may have many properties, including its name and location,
but having a MAYOR (though it can have a president) is inappropriate. In the
linguistic realm, we might declare that TENSE is appropriate only for verb, for
example.

11 In particular, grammars such as Head-driven Phrase Structure Grammar (HPSG) and Lexical Functional Grammar (LFG) are built on mathematically well-defined feature-structure systems. The theory developed in this textbook relies heavily upon the feature-structure system of HPSG. See Sag et al. (2003).
12 Even though every feature structure is typed in the present grammar, we will not specify the type of each feature structure unless it is necessary for the discussion.
Now consider the following example of a typed feature structure, containing information about one of the authors of this book:

(61)  [author
       NAME      Kim
       CHILDREN  <Edward, Richard, Albert>
       HOBBIES   {swimming, cycling, jogging, ...}
       ADVANCED-DEGREE  [FIELD  linguistics
                         AREA   syntax
                         YEAR   1996]]

This illustrates the different types of values that attributes (feature names) may have. Here, the value of the attribute NAME is atomic, whereas the value of CHILDREN is a list, which encodes an ordering among its elements, in this case that one child is older than the other two. So, for example, ‘youngest child’ would be the right-most element in the list value of CHILDREN. Meanwhile, the value
of HOBBIES is a set, showing that there is no significance in the relative ordering.
Finally, the value of the feature ADVANCED - DEGREE is a feature structure which
in turn has three attributes.
One useful aspect of feature structures is structure-sharing, which we have
already seen above in connection with the 1 notation (see (55)). Structure-
sharing is used to represent cases where two features (or attributes) have an
identical value:
(62)  [individual
       NAME      Kim
       ADDRESS   [1]
       CHILDREN  < [individual, NAME Edward, ADDRESS [1]],
                   [individual, NAME Richard, ADDRESS [1]],
                   [individual, NAME Albert, ADDRESS [1]] > ]

For the type individual, attributes such as NAME and ADDRESS and CHILDREN
are appropriate. The feature structure (62) represents a situation in which the
particular individual Kim has three sons, and their ADDRESS attribute has a value
( 1 ) that is the same as the value of his ADDRESS attribute, whatever the value
actually is.
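In computational terms, structure sharing means that the tag 1 in (62) names one and the same object, not several copies that merely look alike. A minimal Python sketch follows (the city value is hypothetical, standing in for ‘whatever the value actually is’):

```python
# One shared object plays the role of the tag [1].
address = {"CITY": "Seoul"}  # hypothetical value; its content is immaterial

kim = {
    "NAME": "Kim",
    "ADDRESS": address,
    "CHILDREN": [
        {"NAME": "Edward", "ADDRESS": address},
        {"NAME": "Richard", "ADDRESS": address},
        {"NAME": "Albert", "ADDRESS": address},
    ],
}

# Because the value is one shared object, updating it once
# updates it everywhere at the same time:
address["CITY"] = "Busan"
cities = [child["ADDRESS"]["CITY"] for child in kim["CHILDREN"]]
# cities is now ["Busan", "Busan", "Busan"].
```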
In addition to this, the notion of subsumption is also important in the theoretical use of feature structures; the symbol ⊑ represents subsumption. The subsumption relation concerns the relationship between a feature structure with general information and one with more specific information. In such a case, the general one subsumes the specific one. Put differently, feature structure A subsumes another feature structure, B, if A is no more informative than B:
(63)  A: [individual       B: [individual
          NAME  Kim]           NAME  Kim
                               TEL   961-0892]

A in (63) represents more general information than B, so A subsumes B. This kind of subsumption relation is used to represent ‘partial’ information, for in fact we cannot represent the total information describing all possible worlds or states of affairs. In describing a given phenomenon, it will be more than enough just to represent the particular or general aspects of the facts concerned. Each small component of feature structure provides partial information, and as the structure is built up, the different pieces of information are put together.
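Subsumption for flat feature structures can be sketched directly: A subsumes B when every attribute-value pair in A also appears in B. The following is a simplified Python rendering (atomic values only, ignoring typing and nesting):

```python
def subsumes(a, b):
    """True if a is no more informative than b: every attribute-value
    pair of a is also present in b."""
    return all(attr in b and b[attr] == val for attr, val in a.items())

a = {"TYPE": "individual", "NAME": "Kim"}
b = {"TYPE": "individual", "NAME": "Kim", "TEL": "961-0892"}
# a subsumes b (a is the more general structure), but not vice versa.
```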
The most crucial operation on feature structures is unification, represented by the symbol ⊔. Feature unification means that two compatible feature structures are combined into a single structure,
conveying more coherent and rich information. Consider the feature structures in (64); the first two unify to give the third:

(64)  [individual      ⊔  [individual          =  [individual
       NAME  Kim]          TEL  961-0892]          NAME  Kim
                                                   TEL   961-0892]

The two feature structures are unified, resulting in a feature structure with both NAME and TEL information. However, if two feature structures have incompatible feature values, they cannot be unified:

(65)  [individual       ⊔  [individual         →  *[individual
       NAME  Edward]        NAME  Richard]          NAME  Edward
                                                    NAME  Richard]

Since the two smaller feature structures here have different NAME values, they
cannot be unified. Unification will make sure that information is consistent as it
is built up in the analysis of a phrase or sentence.
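Unification of two flat feature structures can likewise be sketched in a few lines: merge the attribute-value pairs, and fail if any attribute would receive two different values. Again this is a simplification of our own (no typing, no nested structures or lists):

```python
def unify(a, b):
    """Return the merged feature structure, or None on a value clash."""
    result = dict(a)
    for attr, val in b.items():
        if attr in result and result[attr] != val:
            return None  # incompatible values: unification fails
        result[attr] = val
    return result

merged = unify({"TYPE": "individual", "NAME": "Kim"},
               {"TYPE": "individual", "TEL": "961-0892"})   # as in (64)
clash = unify({"NAME": "Edward"}, {"NAME": "Richard"})      # as in (65)
# merged combines NAME and TEL; clash is None.
```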

4.4.2 Feature Structures for Linguistic Entities


Any individual or entity, including a linguistic expression, can be
represented by a feature structure. Consider that every lexical entry includes
at least phonological (in practice, orthographic), morphological, syntactic, and
semantic information. For example, the word puts will have at least the following
specifications:
(66) Lexical information for the verb puts


a. phonological information: /pυts/
b. morphological information: put + s
c. syntactic information: verb, present, 3rd singular
d. argument information: <agenti, themej, locationk>
e. semantic information: put_relation(i, j, k)

The phonological information is information about how the word is pronounced, while the morphological information concerns the internal structure of the word
(the number of meaning units within the word). The morphosyntactic informa-
tion indicates that this particular word is a verb and is in the third singular present
(finite) form. The argument structure represents the number of arguments that the
verb selects, indicating the participants that are minimally involved in the event
expressed by the verb. The indexes i, j, and k refer to the participants denoted by
the arguments. Finally, the semantic structure represents the fact that the verb’s
meaning involves three participants – someone, i, who is doing the action of
putting; something, j, that is being put in a place; and some place, k, that it is
put in. All of these lexical entries can be represented in a more systematic and
precise way using the system of feature structures, which we now introduce.
The lexical information associated with the verb puts can be represented in
terms of a feature structure, as illustrated in the following:13
(67)  [verb
       FORM  puts
       SYN   [syntax
              HEAD  [POS    verb
                     VFORM  -es]
              VAL   [SPR    <NPi>
                     COMPS  <NPj, PPk>]]
       ARG-ST  <NP[agt]i, NP[th]j, NP[loc]k>
       SEM   [semantics
              PRED   put-rel
              AGT    i
              THEME  j
              LOC    k]]

This feature structure, the details of which we will see as we move on, has roughly the same information as the informal representation in (66). The feature structure describes a type of verb. The verb puts has its own morphological form (FORM) value, along with syntactic (SYN), argument-structure (ARG-ST), and semantic (SEM) information.

13 The expression also has a phonological (PHON) value, but we suppress this value throughout this book. Later on, we will not represent SEM values unless relevant to the discussion at hand.

The SYN attribute indicates that the POS (part of speech) value is verb and that it has a present finite verbal inflectional form value
(VFORM). Both of these features are head (HEAD) features (see Chapter 5). The
SYN attribute also includes the valence attribute (VAL), which has both an SPR and a COMPS value. The attribute VAL thus refers to the number of syntactic arguments, SPR (specifier or subject) and COMPS (complements), that a lexical item
can combine with to make a syntactically well-formed sentence. The ARG - ST
attribute indicates that the verb selects three arguments (with respective thematic
roles agent (agt), theme (th), and location (loc)), which will be realized as the
subject (SPR) and two complements (COMPS) in the full analysis (see Chapter 5).
The semantic (SEM) feature represents the fact that this verb denotes the predicate relation put-rel, whose three participants are linked to the elements in the ARG-ST via index values like i, j, and k. As we progress, we will see the roles that each
feature attribute plays in the grammar.
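The nesting of attributes in (67) can be mirrored directly as nested data. The rendering below is our own sketch (plain Python, not SBCG notation); one thing it makes easy to check is that the SPR and COMPS lists together realize the three members of ARG-ST:

```python
# A data rendering of the lexical entry (67) for puts.
puts = {
    "FORM": "puts",
    "SYN": {
        "HEAD": {"POS": "verb", "VFORM": "-es"},
        "VAL": {"SPR": ["NP_i"], "COMPS": ["NP_j", "PP_k"]},
    },
    "ARG-ST": ["NP[agt]_i", "NP[th]_j", "NP[loc]_k"],
    "SEM": {"PRED": "put-rel", "AGT": "i", "THEME": "j", "LOC": "k"},
}

val = puts["SYN"]["VAL"]
realized = len(val["SPR"]) + len(val["COMPS"])  # subject + complements
# realized equals len(puts["ARG-ST"]), i.e., 3.
```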

4.5 Arguments and Argument-Structure Constructions

4.5.1 Basic Properties of Argument Structure


Among the feature attributes (FORM, SYN, SEM, ARG - ST) that each
lexical expression carries, let us here consider the feature ARG - ST (argument
structure). The feature ARG - ST has as its value a list whose elements are the
arguments that a lexical expression takes. The verb put, as noted, takes three
arguments: an agent who is performing the action of putting (being in the put-
rel(ation)), a theme that undergoes a change of status, and a location where the
theme is placed. Accordingly, the number of arguments is equal to the number
of the core participants in the relation or eventuality a verb denotes. Consider the
following additional examples:
(68) a. The child smiled.
b. The dog chased the squirrel.
c. The bishop gave the medal to his successor.

These sentences describe the situation of smiling, chasing, and giving, respec-
tively. Note that the participants in each event are different. In (68a), there is only
one participant, the child, and in (68b), there are two individuals involved in the
event of chasing. Meanwhile, in (68c), the giving situation has three individuals
involved. Thus, from the meaning or situation that a verb describes we can infer
how many arguments a verb selects. The number of arguments each verb or pred-
icate requires is represented in the ARG - ST list. So, for example, verbs like smile,
chase, and give will have the following ARG-ST representations, respectively:

(69)  a. [FORM    smile
          ARG-ST  <NP>]

      b. [FORM    chase
          ARG-ST  <NP, NP>]

      c. [FORM    give
          ARG-ST  <NP, NP, PP>]

One-place predicates (predicates selecting one argument) like smile select just
one argument, two-place predicates like chase take two arguments, and three-
place predicates like give take three arguments.
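The representations in (69) amount to saying that predicates differ in the length of their ARG-ST lists. A toy Python lexicon makes the point (the encoding is ours, for illustration only):

```python
# ARG-ST values for the verbs in (69).
ARG_ST = {
    "smile": ["NP"],
    "chase": ["NP", "NP"],
    "give":  ["NP", "NP", "PP"],
}

def arity(word):
    """Number of arguments the predicate selects."""
    return len(ARG_ST[word])

# arity("smile") is 1, arity("chase") is 2, arity("give") is 3.
```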
We can make a few important observations about the properties of ARG - ST.
The first is that even though arguments are linked to semantic roles (e.g., agent,
patient, theme, location, etc.), the value of ARG - ST is a list of syntactic cate-
gories like NP or PP. This is partially because there are sometimes difficulties in
assigning a specific semantic role (as in That item is similar to his).
The second is that not only verbs but also other lexical expressions including
adjectives, nouns, and prepositions can take an argument or arguments. Consider
the following examples:
(70) a. [His mother] is quite fond [of the novel].
b. [Internet firms’] reliance [on information technology] might differ across
industries.
c. [The moon] was out. [Mars] was in.

The adjective fond and the noun reliance each denote an event involving two
individuals, while the prepositions out and in require one subject argument. This
information can be represented in terms of ARG-ST:

(71)  a. [FORM    fond
          ARG-ST  <NP, PP[of]>]

      b. [FORM    reliance
          ARG-ST  <DP, PP[on]>]

      c. [FORM    in
          ARG-ST  <NP>]

The third point to note here is that the arguments selected by each predicate are ordered as follows: subject, direct object/indirect object, and oblique complement.14

14 This ordering, which can be traced back to the accessibility hierarchy of Keenan and Comrie (1977), reflects a cross-linguistic property regarding the grammatical functions of nominal expressions functioning as heads of relative clauses. The same ordering relations also play an important role in explaining binding facts (see Pollard and Sag, 1994 and Sag et al., 2003), which we discussed in the exercises in Chapter 1.

4.5.2 Types of Argument-Structure Constructions

The information of the ARG-ST list implies that verbs can be classified based on the type of argument structure they can occur with. That is, we
can differentiate verb types by looking only at the number of arguments they
require. There are five main types of argument structures, described in terms of
the number and properties of the argument(s).
THE INTRANSITIVE CONSTRUCTION: This is the argument-structure construction accommodating verbs that require only one argument:
(72) a. John disappeared.
b. *John disappeared Bill.

(73) a. John sneezed.
     b. *John sneezed the money.

These verbs will thus typically select one argument:

(74)  ARG-ST  <NP>

This unique argument is realized as subject (SUBJ) at syntax (see Chapter 5 for
discussion of the manner in which the elements from the ARG - ST list are realized
as grammatical functions like SUBJ (or SPR) and COMPS).
THE LINKING CONSTRUCTION: Verbs such as look, seem, remain, and feel require a complement whose typical category is an AP:
(75) a. Tang looked [thoughtful].
b. Students became [familiar with this information].
c. The drink never tasted [so good].
d. The difference remained [statistically significant].
e. James seemed [ready to start a new life].

These verbs can also select other phrases (here, NP):
(76) a. Her house became [a landmark].
b. They seemed [a happy couple].
c. She remained [a firm supporter of the arts].

Though each verb may select a different type of phrase, all at least select a predicative (PRD) complement, where a property is ascribed to the subject (compare John remained a student with John revived a student).15 This pattern of argument structure can be represented as follows:

(77)  ARG-ST  <NP, XP[PRD +]>

The verbs that can occur in the linking construction have two arguments: one is
canonically an NP to be realized as the subject and the other is any phrase (XP)
that can function as a predicate (PRD +). The XP can be either an NP or an AP
for the verb become.
15 The verb remain can be used in a different sense, as in John remained in the park, in which the
PP functions as a nonpredicative locative, as in John stayed in the park. These uses involve a
construction like the locative construction.
THE TRANSITIVE CONSTRUCTION: Unlike linking verbs, a pure transitive verb combines with a referential, nonpredicative NP as its complement:
(78) a. He typed [the first pages of his doctoral dissertation].
b. Clinton supported [the health care bill].
c. The Roman armies destroyed [the temple].

The complement NP here is not a predicative complement, as seen from the passive examples:16
(79) a. The first pages of his doctoral dissertation were typed.
b. The health care bill was supported by Clinton.

Such verbs will thus have two arguments:

(80)  [FORM    destroy
       ARG-ST  <NP[agt], NP[pat]>]

The ‘destroying’ event involves at least two participants or arguments: one who
does the action and the other (a patient) who is affected by the action. The
verbs occurring in this type of argument structure thus typically take an agent
NP subject with a patient NP object.17
THE DITRANSITIVE CONSTRUCTION: English has a number of generally ditransitive verbs, including send, pass, buy, teach, and tell:
(81) a. Sam sent [him] [a coded message].
b. The player passed [Paul] [the ball].
c. The parents bought [the children] [nonfiction books].
d. She taught [her students] [job skills].

As these examples show us, the verbs here take a subject and two apparent
objects, the latter of which refer to a theme and a recipient, respectively. Each
sentence describes a change-of-possession event in which an agent participant
transfers a ‘theme’ (th) object to a recipient or goal.
 
(82) FORM teach
ARG - ST NP[agt], NP[goal], NP[th]

The two complement NPs are taken to function as IO and DO, respectively, but
because the IO is an NP (object) rather than a PP (the typical grammatical real-
ization of recipient arguments), the resulting structure is typically referred to as
the ‘double object’ construction.

16 A predicative NP complement cannot be passivized (see Chapter 3):
a. Its cause remains a mystery.
b. *A mystery is remained by its cause.

17 The first element of the ARG-ST in the TRANSITIVE CONSTRUCTION can also bear nonagent roles like experiencer, as in Most of the students liked the teacher.
As we noted earlier, these verbs typically have related verbs in which the
recipient or goal argument is realized instead as an oblique PP complement:
(83) a. Sam sent a coded message to him.
b. The player passed the ball to Paul.
c. The parents bought nonfiction books for the children.
d. She taught job skills to her students.

In these uses, unlike the ones in (81), the second argument has the theme role
while the third argument has some other role; we illustrate here with goal:

(84)  [FORM    teach
       ARG-ST  <NP[agt], NP[th], PP[goal]>]

Structures containing such verbs, often called ‘prepositional object’ constructions, share some properties with the double object constructions in (81).18
THE COMPLEX TRANSITIVE CONSTRUCTION: There is another type of transitive verb which selects two complements, one functioning as a direct object and the other as a predicative phrase (NP, AP, or VP) describing the object:
(85) a. Mary regards Bill as a good friend.
b. Hamilton’s policies made some people furious.
c. They call her a strategist.
d. They believe him to be a disinterested observer.

In (85a), the predicative PP as a good friend follows the object Bill; in (85b), the
AP furious serves as a predicate phrase of the preceding object some people. In
(85c), the NP a strategist is another predicative phrase. In (85d), the predicative
phrase is an infinitive VP. Just like linking verbs, these verbs require a predicative
([PRD +]) XP as complement, as exemplified by the following:

(86)  [FORM    call
       ARG-ST  <NP, NP, XP[PRD +]>]

This means that the verbs in (85) all select an object NP and an XP phrase that
functions as a predicate. Although these five types of argument-structure con-
structions cover most of the general types, there are other verbs that do not
fit into these constructions, or at least require further specifications on their
complement(s). Take the use of the verb cart in (87):
(87) a. *They carted away.
b. *They carted the debris.
c. They carted the furniture out of the home.

18 There are also differences between the two, for example, with respect to information structure
(see Goldberg, 2006 and the references therein). These divergences have raised the question
of whether one can be derived from the other (Larson, 1988; Baker, 1997) or whether the two
should be treated independently (Jackendoff, 1990; Goldberg, 2006).
The examples in (87) suggest that carted requires an NP and a PP as its complements, as represented in the feature structure in (88):

(88)  [FORM    cart
       ARG-ST  < 1 NP[agt], 2 NP[th], 3 PP[loc] >]

The PP here cannot be said to be predicated of the object furniture; rather, it denotes the location to which they carried the furniture.

4.5.3 Argument Structures as Constructions: Form and Meaning Relations
We have seen that argument-structure patterns have been identified
with verb classes, but, like other proponents of construction-based syntax, we
view these classes as constructions because their properties are independent of
any given verb (see, e.g., Goldberg, 1995). Key support for this view comes from
the observation that language users creatively extend the meanings of verbs by
changing the combinatory requirements of verbs. For instance, the verb cough is
typically used as an intransitive verb (i.e. with no direct object):
(89) a. Pat coughed and then shook his head.
b. Pat began to cough violently.
c. *Pat coughed his head.

The intriguing fact, however, is that the verb can be used with an object when the
object is followed by a directional phrase. Note the following attested examples:
(90) a. Chess coughed smoke out of his lungs.
b. I coughed vodka back into my glass.

While in (89) the verb cough simply describes an action of expelling air from one’s lungs, the verb in (90a) and (90b) expresses causation of motion: The entities denoted by the direct object (smoke and vodka) come to be in a new location by means of the coughing. Such novel uses suggest that a verb can occur in different argument-structure configurations with systematic variations in meaning.
In addition, consider the following data set, which shows that verbs like kick
can appear in a variety of complement (argument-structure) configurations:
(91) a. Pat kicked. (intransitive)
b. Pat kicked the ball. (transitive)
c. Pat kicked at the ball. (conative)
d. Pat kicked Bob the ball. (ditransitive)
e. Pat kicked the ball into the stadium. (caused-motion)
f. Pat kicked Bob black and blue. (resultative)

Table 4.1 Argument-structure constructions and semantic properties

Construction type    Argument structure          Semantic properties
INTRANSITIVE         <NPx>                       X acts alone
CONATIVE             <NPx, PPy>                  X acts at Y
TRANSITIVE           <NPx, NPy>                  X acts on Y or X experiences Y
DITRANSITIVE         <NPx, NPy, NPz>             X causes Y to receive Z
CAUSED-MOTION        <NPx, NPy, PPz>             X causes Y to move Z
RESULTATIVE          <NPx, NPy, XPz[PRD +]>      X causes Y to become Z

Traditional generative grammar assumes that each use of the verb kick here has a distinct lexical entry with distinct combinatory properties (e.g., kick1, kick2, kick3, etc.). However, note that in all of these cases, the verb kick retains its
basic meaning of performing a forward-moving action with the foot. The mean-
ing differences come from the argument-structure patterns with which the verb
kick combines. In (91a), the INTRANSITIVE construction is used to convey that
the subject acted alone; in (91b), the TRANSITIVE construction is used to indicate
that the subject acted on another entity (propelling it forward); in (91c), the use
of a CONATIVE construction (which uses a PP complement in place of a direct
object) conveys that the subject made little or ineffectual contact with the ball;
in (91d), the DITRANSITIVE construction is used to describe an event in which
propulsion of the ball causes someone else to possess it; in (91e), a CAUSED -
MOTION predication, we understand the subject to have moved the ball to a new
location by means of kicking; finally, in (91f), the RESULTATIVE construction is
used to convey that the subject changed the direct object’s properties by means
of kicking. In light of these facts, we observe that each argument-structure con-
struction, schematized in Table 4.1, expresses a certain type of event or action.
In this constructional view, the meaning of a sentence is determined by the
combination of the matrix verb’s core meaning with the basic event type con-
veyed by the construction with which the verb combines. When a verb occurs in
one of these constructions, its semantic roles are identified or ‘fused’ with those
of the argument-structure construction with which it combines.19 Critically, the
argument-structure construction may provide semantic roles that are not supplied
by the verb, thus augmenting the verb’s array of semantic roles. The novel uses
of cough in (90) are then expected. The verb, as noted, is typically an intransitive
verb, but in (90) it occurs in the CAUSED - MOTION construction, which supplies
two additional participant roles (the theme argument and the directional argu-
ment). We find a similar pattern of flexible usage among other intransitive verbs,
including sneeze:
(92) a. Colin sneezed.
b. *Colin sneezed his napkin.
c. Colin sneezed his napkin off the table.

19 For more background on argument-structure constructions, refer to seminal work by Goldberg (1995, 2006).
The examples in (92a) and (92b) suggest that verbs like sneeze are used only in intransitive environments. How can we square this with examples like (92c), in which the verb combines with the object his napkin and the directional phrase off the table? A proponent of traditional generative grammar might assume that there is another type of sneeze, but the syntactic flexibility illustrated by (92c) is prevalent in English, and creating a new lexical entry for each novel use of a verb would not be practical, nor would it capture the insight that many novel verb uses are ‘nonce uses’: they serve an expressive purpose in a particular context but may never become conventionalized. The CxG view we have sketched out here can account for this important aspect of linguistic creativity in an intuitive way: Argument-structure constructions have their own meanings and semantic-role arrays, and the kind of event or relation expressed by a verb is ultimately determined by the argument-structure pattern with which it combines.
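The fusion of verb roles with construction roles described in this section can be given a rough computational rendering. The role inventories and the fuse procedure below are our own simplification of Table 4.1, not a formal CxG analysis:

```python
# Roles contributed by the CAUSED-MOTION construction (cf. Table 4.1).
CAUSED_MOTION = ["agent", "theme", "directional"]

# Roles supplied by the verbs themselves (simplified).
VERB_ROLES = {
    "cough": ["agent"],                         # typically intransitive
    "put":   ["agent", "theme", "directional"],
}

def fuse(verb, construction_roles):
    """Fuse a verb's roles with a construction's roles; the construction
    supplies any roles the verb itself lacks."""
    added = [r for r in construction_roles if r not in VERB_ROLES[verb]]
    return {"verb": verb, "roles": construction_roles, "added_by_cxn": added}

cough_cm = fuse("cough", CAUSED_MOTION)
# The construction augments cough with theme and directional roles,
# licensing 'Chess coughed smoke out of his lungs'.
```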

4.6 Conclusion

In this chapter, we showed that the well-formedness of each phrase depends on both its internal and external syntax. The pivotal expression in
internal syntax is the head, which determines what expressions can or may
accompany it as its syntactic sisters. We have seen that a grammar with sim-
ple PS rules inevitably confronts two critical issues: endocentricity (headedness)
of a phrase and redundancies in the lexicon.
To resolve these two problems, generative grammar has introduced X′ rules, including three key combinatorial rules: head-complement(s), head-specifier, and head-modifier. These rules ensure that each phrase is a projection of a head expression, while allowing for the existence of intermediate phrases (X′-phrases). X′ syntax captures the similarities between NPs and Ss by treating these phrase types in a uniform way. A grammar with X′ rules also recognizes the necessity of introducing features like POS. The grammar we adopt in this book (SBCG) follows this direction of X′ theory, using a fine-grained feature system to describe the syntactic and semantic properties of both simple and complex signs; it enables us to track how those features change during the course of a derivation, as complex signs are built up from simple ones. This chapter introduced the basic feature system that we will use in describing the English language.
In the final section of this chapter, we examined the patterns of semantic-role
expression called argument-structure constructions and the novel conception of
such patterns within CxG, according to which argument-structure patterns are
constructions. We have briefly shown that this view allows us to account for
innovative uses of verbs in various contexts. Chapter 5 introduces a more fine-
grained feature-structure system, as well as a robust generative grammar based
on that system.
Exercises

1. The following exercise will entrench your understanding of the notion of feature structures.
a. Describe yourself as a feature structure as far as you can.
Try to introduce feature attributes which have different value
types, such as atomic, list, set, or another feature structure (e.g.,
NAME, SIBLINGS, EMAIL, HOBBIES, FRIENDS, etc.).
b. Provide two examples which illustrate these feature-structure
operations: structure sharing, subsumption, and unification.
Use the attributes you used for describing yourself.

2. Construct a sentence including each of the following verbs and
identify what kind of argument structure each verb is linked to:
a. see, watch, glance, look, witness
b. tell, say, speak, talk, inform, mention

3. Provide tree structures for the following pairs of sentences while
identifying the grammatical function of the italicized phrase. In
doing so, give at least one valid distributional test supporting the
identification.
(i) a. They rinsed the dishes in the sink.
b. They put the dishes in the sink.
(ii) a. He placed the gun under the bed.
b. He saw the gun under the bed.
(iii) a. I wonder if he came back.
b. I’d be thrilled if he came back.

4. For each sentence below, draw its tree structure and then provide the
ARG-ST value of the underlined verb:
a. Frank hopes to make the cook wash the dishes.
b. We have not confirmed whether the flight had been booked.
c. They confined his remarks to the matter under discussion.
d. He had napped on their couch for hours.
e. He wanted to persuade Bess to tell Emil not to come to the
party.
f. He shoved the key into his pocket.
g. Johnson watched the starlings attack the snake.

5. The verbs in the following examples are used incorrectly. Correct the
errors or replace the verb with another one, and write out each new
example. In addition, provide the ARG-ST value for each verb (in its
use in your grammatical examples).
a. *Mary slept the baby.
b. *She explained me the whole story.
c. *No subjects attributed their performance task difficulty.
d. *Harry Winston donated the museum the diamond.
e. *Brian behaved his sister in Phoenix. (cf. Brian behaved himself
in Phoenix.)

6. Draw a tree structure for each of the following sentences. In particular,
provide detailed NP structures for the italicized part using the
notion of intermediate phrase N′.
a. The love of my life and father of my children would never do
such a thing.
b. The museum displayed no painting by Miro or drawing by Klee.
c. His transformation into a wolf surprised her.
d. Jane met a student of economics from Kenya in the Russian
language class.
e. His reliance on her memory had also hobbled him.

7. Verbs like cut, get, and make can occur in many different syntac-
tic environments. Try to find authentic examples with these verbs in
different argument-structure constructions. In doing so, use corpora
like COCA, NOW, or iWeb, all of which are available online free of
charge.
5 Combinatorial Construction Rules
and Principles

5.1 From Lexemes to Words

We have seen that verbs like put specify information about arguments
(the number of participants in the expressed situation), as represented by the
feature ARG-ST. This information can be traced to a lexeme: the basic lexical unit,
or, alternatively, the headword (citation form) in the dictionary. Each verb lexeme
is realized in different inflected forms, as seen in the realizations of the lexeme
chase:
(1) a. The dog chased the cat.
b. The dog chases a shadow.
c. The dog is chasing the cat.

All three forms here – chased, chases, chasing – are related to the verb lexeme
chase, which carries the following ARG-ST information:
(2)  [v-lxm
      FORM chase
      ARG-ST <NP[agt], NP[th]>]

This lexeme (v-lxm) information shows that the event of chasing has two NP
arguments: an agent (agt) NP and a theme (th) NP. These two arguments are
realized as the subject and complement (object) respectively when the lexeme
is used as a word at the sentence level. For instance, the verb chased would
have the following syntactic information (suppressing semantic information at
the moment):
(3)  [v-wd
      FORM chased
      SYN [HEAD [POS verb
                 VFORM ed]
           VAL [SPR <#1 NP>
                COMPS <#2 NP>]]
      ARG-ST <#1 NP, #2 NP>]

The feature structure tells us that the word-level verb chased (v-wd) is a verb
in the ed inflectional form (VFORM). The first NP element (#1) of the ARG-ST
is linked to the SPR (specifier or subject), while the second NP (#2) is linked to the
COMPS . In what follows, we will discuss the properties of these syntax-relevant
feature attributes, focusing on internal syntax.
All of these verbal forms are generated from the citation form (the lex-
eme) by English inflectional construction rules. For example, the past verb
word (v-wd) will be derived from a verb lexeme (v-lxm) by a rule like the
following:

(4)  PAST INFLECTIONAL CONSTRUCTION:

     [v-lxm
      FORM <#1>
      SYN | HEAD | POS verb]
     →
     [v-wd
      FORM <F_past(#1)>
      SYN | HEAD [POS verb
                  VFORM ed]]

The inflectional construction rule states that a verb lexeme (like chase, as in
(5)) can be used to create a v-word (v-wd) and derives its ed form by applying
the Fpast function, whose value can be either ‘-ed,’ as in The dog chased a cat,
or none, as in The thieves cut a hole in the fence, or even a suppletive form
(e.g., was). The following is an illustration deriving chased from the lexeme
chase:

(5)  Deriving the word chased from the lexeme chase:

     [v-lxm
      FORM chase
      SYN | HEAD | POS verb]
     →
     [v-wd
      FORM chase+ed
      SYN | HEAD [POS verb
                  VFORM ed]]

The output v-wd adds the value for the feature VFORM (which we discuss in what
follows), as well as a past meaning (which we suppress here).1 Note that since in

1 We could translate this rule-style format of constructing a word into a bottom-to-top
process of projecting a word type from a lexeme type. [tree diagram not reproduced]
this book we focus on word (lexical) and phrasal constructions, we will discuss
such morphological processes and constructions only when necessary.
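For readers who find it helpful to think of such rules computationally, the PAST INFLECTIONAL CONSTRUCTION in (4) can be sketched in Python. This is an illustrative encoding only (plain dicts standing in for feature structures, not SBCG's formalism); the suppletive and zero-marked entries are illustrative examples:

```python
# A minimal sketch of the PAST INFLECTIONAL CONSTRUCTION in (4): a function
# from a verb lexeme (v-lxm) to a past-tense word (v-wd). Dicts stand in for
# feature structures; this is not the book's formalism.

SUPPLETIVE_PAST = {"be": "was", "go": "went"}   # illustrative entries
ZERO_PAST = {"cut", "put", "hit"}               # past form identical to base

def f_past(base):
    """Morphological function F_past: base form -> past form."""
    if base in SUPPLETIVE_PAST:
        return SUPPLETIVE_PAST[base]
    if base in ZERO_PAST:
        return base
    return base + ("d" if base.endswith("e") else "ed")

def past_inflectional_construction(lexeme):
    """Map a v-lxm feature structure to the corresponding past v-wd."""
    assert lexeme["type"] == "v-lxm"
    assert lexeme["SYN"]["HEAD"]["POS"] == "verb"
    return {
        "type": "v-wd",
        "FORM": f_past(lexeme["FORM"]),
        "SYN": {"HEAD": {"POS": "verb", "VFORM": "ed"}},
        "ARG-ST": lexeme["ARG-ST"],   # argument structure is inherited
    }

chase = {"type": "v-lxm", "FORM": "chase",
         "SYN": {"HEAD": {"POS": "verb"}},
         "ARG-ST": ["NP[agt]", "NP[th]"]}
chased = past_inflectional_construction(chase)
```

The lexeme carries only FORM, POS, and ARG-ST information; the derived word adds the [VFORM ed] specification, just as in (5).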

5.2 Head Features and Head Feature Principle

5.2.1 Parts of Speech Value as a Head Feature


As noted earlier, in order to guarantee that the head’s POS (part of
speech) value is identical to that of its mother, we must introduce the cate-
gory variable X and the feature POS. The POS feature is thus a head feature
that is shared between the ‘mother’ phrase and its head ‘daughter,’ as shown
in (6):
(6)  [tree diagram: the mother phrase XP and its head daughter X share the same POS value]

This sharing between head and mother is ensured by the Head Feature
Principle:
(7) The Head Feature Principle (HFP):
A phrase’s head feature value (e.g., POS, VFORM, etc.) is identical to that of
its head.

The HFP thus ensures that every phrase has its own lexical head with the iden-
tical POS value. The HFP will apply to any features that we declare to be ‘head
features,’ VFORM being another (see Section 5.5 for detailed discussion). The
grammar thus does not allow hypothetical phrases like the following, ensuring
the endocentric property of each phrase:
(8)  [tree diagrams: disallowed phrases in which the mother's POS value differs from that of its head daughter]
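The effect of the HFP in (7) can be sketched computationally. In this illustrative encoding (plain dicts, not the book's formalism), building a phrase simply shares the head daughter's HEAD value with the mother, so structures like those in (8) are never licensed:

```python
# A minimal sketch of the Head Feature Principle in (7): a mother node's
# HEAD value is identical to (shared with) its head daughter's HEAD value.
# Dict encoding is illustrative only.

HEAD_FEATURES = ("POS", "VFORM")  # features declared to be head features

def project_phrase(head_daughter, *other_daughters):
    """Build a mother node whose HEAD is shared with the head daughter."""
    return {
        "HEAD": head_daughter["HEAD"],  # structure sharing, per the HFP
        "DTRS": [head_daughter, *other_daughters],
    }

def obeys_hfp(mother, daughter):
    """Check whether `daughter` could be the head of `mother`."""
    return all(
        mother["HEAD"].get(f) == daughter["HEAD"].get(f)
        for f in HEAD_FEATURES
    )

verb = {"HEAD": {"POS": "verb", "VFORM": "fin"}}
obj = {"HEAD": {"POS": "noun"}}
vp = project_phrase(verb, obj)   # a VP headed by the verb
```

Because the mother's HEAD is literally the head daughter's HEAD, a noun daughter can never project a verbal mother, which is the endocentricity requirement.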

5.2.2 Verb Form as a Head Feature


As noted at the beginning, a verb lexeme can have one of several
inflectional markings, chosen according to the verb’s tense and agreement prop-
erties. Intuitively, English verbs have seven grammatical forms. For example, the
verb drive can take these forms: drives, drove, drive, driving, driven, to drive, in
addition to the citation form. The present and past tense forms are usually classi-
fied together as fin (finite), with all the rest being nonfin (nonfinite) in some way.
Using this division, we can lay out the forms as in (9):
(9)  Types of English verb forms:

     Finiteness   Verb forms   Examples
     fin          es           He drives a car.
                  ed           He drove a car.
                  pln          They drive a car.
     nonfin       bse          He wants to drive a car.
                  ing          Driving a car, he sang a song. (present participle)
                               He was driving. (progressive)
                               He is proud of driving a car. (gerundive)
                  en           Driven by the mentor, he worked. (past participle)
                               The car was driven by him. (passive)
                               He has driven the car. (perfect)
                  inf          He has to drive.

The fin forms have three subtypes: es, ed, and pln (plain). Notice that there
might be a mismatch between form and function: The ed verb canonically
describes a past event, as in (10a), while the es and pln verbs represent a present
event, as in (10b); but this is not always true, as seen in (10c):2
(10) a. My daughter called me yesterday.
b. She usually smiles a lot and she is usually pretty articulate.
c. Your plane leaves Seoul early tomorrow morning.
The verb leaves in (10c) is in the present form, es, but it describes a future event.
The mapping between a VFORM value and event time is thus not one-to-one.
The nonfin values include bse (base), ing (present participle), en (past partici-
ple), and inf (infinitive). As for the infinitival marker to, we follow the standard
generative grammatical analysis of English ‘infinitives,’ in which the infinitive
marker is the head (to). Note that the plain and base forms are identical to the
lexical base (or citation form) of the lexeme. Even though the two forms are
identical in most cases, substitution of the past form shows a clear difference:
(11) a. They write/wrote to her.
b. They want to write/*wrote to her.
(12) a. They are/*be kind to her.
b. They want to be/*are kind to her.
In (11a) and (11b), we have two occurrences of the verb write, but note that
only the one in (11a) can be replaced by the past verb wrote. This means that only
this one is a plain finite verb with no inflectional marking, while the verb write
in (11b) is a nonfinite base verb. The contrast in (12) also shows us a difference
between the two different verb forms: are is used only as a finite verb, while be
occurs only as a base verb.
2 More specifically, the plain form, though identical to the citation form, is used for present
tense when the subject is anything other than 3rd person singular. The plain verb thus lacks
an inflectional ending.
The verb form values (as values of the attribute VFORM) given in (9) can be
represented as in the following hierarchy:
(13)           vform
              /     \
           fin       nonfin
          / | \     /  |  \  \
        es ed pln bse ing  en inf

The classification of VFORM values here means that the values of VFORM are
‘typed,’ and those types have different subtypes – for example, what is shared
between es and ed will be stated on the type fin, yet they will individually differ
(they express different tenses). Sometimes we want to be able to refer to the type
of a value, as in (14a), and sometimes to a particular form, as in (14b):
(14) a. [VFORM fin]
b. [VFORM ing]
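The typing of VFORM values in (13) can be sketched as a small parent map, with a subsumption check that lets a requirement like (14a) be satisfied by any subtype of fin while (14b) names one specific form. The encoding is illustrative only; the hierarchy itself is taken directly from (9) and (13):

```python
# A minimal sketch of the VFORM type hierarchy in (13): each value points to
# its parent type, and a requirement is satisfied by the required type itself
# or any of its subtypes.

VFORM_PARENT = {
    "es": "fin", "ed": "fin", "pln": "fin",
    "bse": "nonfin", "ing": "nonfin", "en": "nonfin", "inf": "nonfin",
    "fin": "vform", "nonfin": "vform",
}

def is_subtype(value, required):
    """True if `value` is `required` itself or one of its subtypes."""
    while value is not None:
        if value == required:
            return True
        value = VFORM_PARENT.get(value)  # climb the hierarchy
    return False
```

For example, a [VFORM fin] requirement accepts es, ed, or pln, but rejects ing, which is why *The student knowing the answers fails as a declarative sentence.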

It is easy to show that we need to distinguish between fin and nonfin:
Every declarative sentence in English must have a finite verb with tense
information:
(15) a. The student [knows the answers].
b. The student [knew the answers].
c. The students [know the answers].
(16) a. *The student [knowing the answers].
b. *The student [known the answers].

The examples in (16) are unacceptable because knowing and known have no
expression of tense – they are not finite. This in turn shows us that only finite
verb forms can be used as the head of the highest VP in a declarative sentence,
satisfying a basic requirement placed on English declarative sentences:
(17) English Declarative Sentence Construction:
For an English declarative sentence to be well-formed, its verb form value
(VFORM) must be finite.

The finiteness value of a sentence or a VP is the same as that of its head verb,
showing that VFORM is a head feature:
(18)  [tree diagram: the finite VFORM value of the head verb is shared with the VP and S nodes]
One thing we need to remember is that the two participle forms (ing and en)
have many different uses, in different constructions, as partially exemplified in
(9). Some of these usages (gerundive, progressive, passive) were introduced as
VFORM values (Gazdar et al., 1985; Ginzburg and Sag, 2000), each of which
has several functions or constructional usages. In Section 5.5, we will further
examine how this HEAD feature functions in internal syntax.

5.2.3 Mapping between Argument-Structure and Valence Features


We have seen that the ARG-ST of the verb puts includes three argu-
ments (agent, theme, and location) that are linked to the participants in the
‘putting’ event, as represented in the following:
 
(19)  [FORM put
       ARG-ST <NP[agt], NP[th], PP[loc]>]

These three elements in the ARG-ST (argument-structure) list are realized as
the grammatical functions SPR (specifier/subject) and COMPS (complements),
respectively:
(20) a. [The doctor] put [his hand] [on my elbow].
b. [Clinton] has also put [more emphasis] [on women’s issues].
c. [Democrats] put [their hopes] [in key swing areas].

Note that each of the three arguments selected by the verb needs to be realized
as a syntactic expression bearing its own grammatical function:
(21) a. *The doctor put his hand.
b. *The doctor put on my elbow.
c. *The doctor put.

All these examples are ill-formed, since at least one of the arguments is not
realized as a grammatical function. Note also that the first element of the ARG-ST
list must be the subject, with the other expression(s) linked to the complements
in order:3
(22) a. *In my elbow put his arm the doctor.
b. #His arm put the doctor in my elbow.

The generalization governing the realization of arguments in the ARG-ST list
as grammatical functions SPR (including SUBJ) and COMPS is that the first element on the
list is realized as subject and the rest as complements. This can be stated as the
following principle:
(23) Argument Realization Constraint (ARC, first approximation):
The first element on the ARG-ST list is realized as SPR (or subject), the rest
as COMPS in syntax.

3 The notation # indicates that the structure is technically well-formed from a syntactic perspective
but semantically anomalous.

This realization is obligatory in English. More formally, we can represent this
constraint as follows:4
(24)  Argument Realization Constraint (ARC):

      v-wd ⇒ [SYN | VAL [SPR A
                         COMPS B]
              ARG-ST A ⊕ B]

The constraint means that the elements in the ARG-ST list of word-level expres-
sions will be realized as SPR and COMPS in syntax. Lexemic expressions will
only have ARG - ST information, but when they occur in syntax, they will also
carry syntactic valence features such as SPR and COMPS.
We can apply this constraint to the word puts, as given in the following feature
structure:
(25)  [FORM puts
       SYN | VAL [SPR <#1 NP>
                  COMPS <#2 NP, #3 PP>]
       ARG-ST <#1 NP, #2 NP, #3 PP>]

The tags show the different identities in the overall structure. For example,
the first element of ARG-ST and the single element of SPR share the tag #1,
ensuring that the two are identical.
The ARC blocks examples like (21) as well as (22a), in which the location
argument is realized as the subject, as shown in (26):
(26)  *[SYN | VAL [SPR <#3 PP>
                   COMPS <#1 NP, #2 NP>]
        ARG-ST <#1 NP, #2 NP, #3 PP>]

This violates the ARC, which requires that the first element of ARG-ST be realized
as the SPR (the subject of a verb or the specifier of a noun).
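The ARC in (24) can be sketched as a simple list split, applied here to puts to yield the valence values in (25). The dict encoding is illustrative only, not SBCG's formalism:

```python
# A minimal sketch of the Argument Realization Constraint in (24): the first
# element of a word's ARG-ST list is realized as SPR and the rest as COMPS
# (the reverse of the list append operation ⊕).

def argument_realization(word):
    """Add SPR and COMPS valence features computed from ARG-ST."""
    arg_st = word["ARG-ST"]
    word["SYN"] = {"VAL": {"SPR": arg_st[:1],    # first element -> subject
                           "COMPS": arg_st[1:]}}  # the rest -> complements
    return word

puts = argument_realization(
    {"FORM": "puts", "ARG-ST": ["NP[agt]", "NP[th]", "PP[loc]"]}
)
```

Because the split is fixed, a realization like (26), with the locative PP as SPR, simply cannot be produced.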

5.3 Combinatory Construction Rules

As noted, each lexical head (verb, adjective, noun, preposition) can
have its own argument structure (ARG-ST), and the arguments in ARG-ST are
realized as the syntactic elements SPR (subject of a verb and determiner of a
noun) and COMPS in accordance with the ARC. This will license examples like
(27) while blocking those like (28):

4 The symbol ⊕ represents an operation of combining two list expressions. In addition, the symbol
⇒ represents constraints on the type.

(27) a. Sam remained quiet.


b. Everyone supported the idea immediately.
c. His teammates passed him the ball more.
(28) a. *Sam remained.
b. *Everyone supported.
c. *His teammates passed him.
As we saw in Chapter 4, the rules that license the combination of a lexical
head (or predicate) with its syntactic sisters are not the PS rules but the X′ rules,
given in the following:
(29)  a. Head-Specifier Rule:
         XP → ZP, X′ (Specifier, Head)
      b. Head-Complement Rule:
         X′ → X, YP* (Head, Complement(s))
      c. Head-Modifier Rule:
         XP → ModP, XP (Modifier, Head)

The X′ rule in (29a) represents the case in which a head combines with its specifier
(e.g., a VP with its subject and an N′ with its determiner), whereas (29b) says
that a head combines with its complement(s) to form a phrase. Rule (29c) allows
the combination of a head with its modifier.
Within the present feature-based system, these X′ rules can be reinterpreted
as follows:
(30)  Combinatory Construction Rules (to be revised):
      a. HEAD-SPECIFIER CONSTRUCTION (XP → Specifier, Head):
         XP[POS #1] → Specifier, XP[POS #1]
      b. HEAD-COMPLEMENT CONSTRUCTION (XP → Head, Complement(s)):
         XP[POS #1] → X[POS #1], Complement(s)
      c. HEAD-MODIFIER CONSTRUCTION (XP → Modifier, Head):
         XP[POS #1] → Modifier, XP[POS #1]

These combinatory construction rules license well-formed constructs. One thing
to note here is that these combinatory construction rules require no notion of X′
phrase, the reason for which will become clear in due course.
Let us consider how these three constructional rules allow lexical as well
as phrasal constructions to be combined. First, the HEAD-SPECIFIER CON-
STRUCTION in (30a) (analogous to the X′ rule XP → YP, X′) licenses phrases
consisting of a phrasal head daughter and a subject daughter, as illustrated in
(31).5
5 In tree structure formats, we adopt a shorthand system of representing feature structures,
suppressing unrelated features or paths. For example, the fully specified feature structure in (31)
will include VAL as well as FORM, SYN, SEM, etc.

(31)  [tree diagram: a subject NP combines with a VP head daughter bearing SPR <NP>, yielding an S whose SPR list is empty]

This simplified presentation says that the head daughter VP requires a subject NP
(functioning as a specifier (SPR)) while carrying its own POS and VFORM value.
Combining with the subject, the VP is then projected into an S. This resulting
combination S discharges the requirement that the head VP combines with a SPR,
so the SPR set is empty at the level of S (once the requirement is satisfied, it is
‘cancelled’ from the list). Meanwhile, note that the S’s HEAD value is the same
as the head VP’s HEAD, in accordance with the HFP.
The HEAD-COMPLEMENT CONSTRUCTION, again analogous to the X′ rule X′
→ X, YP, allows the combination of a lexical head daughter with its complement
daughter(s) (zero or more), as represented in (32).
(32)  [tree diagram: the head verb denied combines with its NP complement, forming a VP whose COMPS list is empty]
The declarative verb denied selects two arguments (ARG-ST), which are mapped
onto subject (SPR) and complement (COMPS), respectively. The head verb com-
bines with the NP complement, forming a well-formed VP. The resulting VP
then has its COMPS value empty (discharged) but still requires a subject speci-
fier. Note that in these two construction rules, once the required COMPS and SPR
values in (32) and (31) are combined, their value is discharged at the mother
level. This cancellation of elements of the valence (VAL) set is controlled by a
general principle called the Valence Principle:6
(33) Valence Principle (VALP):
For each valence feature F (e.g., SPR and COMPS), the F value of a
headed phrase is the head-daughter’s F value minus the realized non-head-
daughters.

The effect of this principle, reminiscent of the category cancellation associated
with functional application in Categorial Grammar, is to ‘check off’ the valence
requirements of a lexical head, traditionally called the subcategorization frame
requirements of a lexical head, traditionally called the subcategorization frame
of a verb.
The HEAD-MODIFIER CONSTRUCTION generates a phrasal head to com-
bine with a modifier phrase (this is a type of adjunction, analogous to
X′ → X′, YP). The modifier in the construction selects for the kind of head
it combines with. This selectional restriction is mediated by the head feature
MOD (modified) (see the lexical entry for always in the next section). This
head feature specification enables an adjunct to select its head, as illustrated
in (34).
(34)  [tree diagram: an Adv(P) bearing the feature [MOD VP] combines with the VP it modifies, forming a larger VP]

The Adv strongly or AdvP quite strongly can modify its head VP, resulting in a
well-formed head-modifier construct.7 Note that the combination of a modifier
and its head does not alter the valence features (SPR and COMPS).8
6 Another way to state this is that unless the construction rule says otherwise, the mother’s SPR and
COMPS values are identical to those of the head daughter.
7 In contrast to the discussion in Chapters 3 and 4, the present grammar allows both a lexical
expression and a phrasal expression to modify a head phrasal expression, as long as the former
bears the feature MOD.
8 This means that the feature MOD does not belong to valence (VAL), since there is no process of
discharging its value.

Incorporating the requirements for discharging the valence features, we can
formally represent the construction grammar rules as follows:
(35)  Combinatory Construction Rules (final):
      a. HEAD-SPECIFIER CONSTRUCTION:
         XP[SPR < >] → #1, H[SPR <#1>]
      b. HEAD-COMPLEMENT CONSTRUCTION:
         XP[COMPS < >] → H[COMPS <#1, ..., #n>], #1, ..., #n
      c. HEAD-MODIFIER CONSTRUCTION:
         XP → [MOD <#1>], #1 H

The combinatory grammar rules here are conditions on possible phrases in
English, indicating what each head combines with and what happens as the result
of that combination. For example, in (35a) when a head, requiring an SPR, com-
bines with it, we have a well-formed head-specifier phrasal construction with
the SPR value discharged; and in (35b), a head combines with all of its COMPS
value, forming a head-complement construct; in (35c), when a modifier (carry-
ing the MOD (modifying) feature) combines with the head it modifies, the
resulting phrase forms a well-formed head-modifier phrase.9 Interacting with
general principles such as the HFP, the three construction grammar rules in (35)
license grammatical sentences in English.10
Note that, as hinted earlier, these feature-based construction rules require no
notion of intermediate phrase X′, since the intermediate level is reflected in the
cancellation of valence features (SPR and COMPS). That is, in this feature-based
grammar, there are no phrasal categories S, NP, VP, or even N′. These are sim-
ply notational conventions referring to certain feature structures. For instance, in
terms of valence features, an S is a phrase whose SPR and COMPS values are all
discharged, as in (36a). Similarly, a VP, as in (36b), is a phrase that still requires a
specifier (SPR), while N′ is a nominal expression that is also missing its specifier
(SPR), as in (36c):
 
(36)  a. S  = [SPR < >, COMPS < >]
      b. VP = [SPR <XP>, COMPS < >]
      c. N′ = [SPR <DP>, COMPS < >]
A modifier expression can also be either a lexical expression (e.g., Adv) or a
phrasal element (AdvP) as long as it bears the feature MOD (modifying), as
noted in (34).
9 Note that the modifier can either precede or follow the head it modifies.
10 In addition to these three grammar rules, English employs the HEAD-FILLER CONSTRUCTION,
which licenses the combination of a head missing one phrasal element with a filler that matches
this missing element, as in What did John eat? See Chapter 10 for discussion of this grammar
rule.
To explicate the principles (the HFP and the VALP) and these three com-
binatory construction rules, let us consider a complete sentence using a tree
representation.11

(37)  [tree diagram: a complete sentence headed by denied, in which the verb combines with its NP complement, then with a modifier, and finally with the subject NP]

The HFP ensures that the head-daughter’s HEAD information is projected in its
mother phrase. The HEAD value of the lexical head denied (such as the part-of-
speech value, verb, and VFORM value, fin) is thus that of both VPs and the S
here. In accordance with the VALP, the head’s valence information determines
the elements that the maximal projection contains. The valence specifications of
the head denied show that it requires one NP complement and a subject. When it
combines with the complement, its COMPS specification is satisfied, leaving the
VP’s COMPS value empty. The resulting VP combines with the modifier via the
HEAD - MODIFIER CONSTRUCTION to form the top VP. When this top VP com-
bines with the subject NP via the HEAD - SPECIFIER CONSTRUCTION, we obtain

11 All linguistic objects are represented as feature structures in HPSG. But for expository purposes,
they are presented in the familiar trappings of generative grammar – tree representations.

a completely saturated phrase, all of whose valence specifications are satisfied
or discharged. Hence, each subtree, as well as the whole sentence, conforms to
the general principles of the HFP and the VALP, as well as to the combinatorial
grammar rules (constructions).

5.4 Nonphrasal, Lexical Constructions

We have seen thus far that complements are phrases or clauses: they
are represented as phrases rather than as bare lexemes. We know, for
instance, that the object of the verb destroy cannot be simply a bare N; it
must be a full NP, as shown in (38):
(38) a. *The hail destroyed garden.
b. You can’t legally destroy evidence.
c. Liberal programs have destroyed those cities.
d. They destroy all the vegetation.
e. It destroyed the work we had done.

Note, however, that in the case of English verb-particle combinations,
discussed in Chapter 2, the verb does appear to combine with a single-word
particle expression:
(39) a. I finally figured out the right answer.
b. Hunter gave up the job.
c. He turned off the light.

We cannot assume that the main verb here (figured, gave, and turned) selects a
particle phrase, because an expression cannot be placed in front of the particle in
the manner that it could be if the preposition and the following NP jointly made
up a prepositional phrase (e.g., out the right answer, up the job):
(40) a. *I figured finally out the right answer.
b. *Hunter gave completely up the job.
c. *He turned easily off the light.

The particle can in fact occur without an NP following, indicating again that the
particle does not take an NP object:
(41) a. All of these other lies [added up].
b. I think that I will [sign off] now.
c. One by one, her days were [slipping by].

The particle here is not optional, but rather contributes to the meaning. This in
turn implies that we need to allow certain verbs to select a particle, whether
or not the verb also takes an object, to induce a special meaning. The parti-
cle verbs figure and add, for instance, would thus have the following lexical
entries:
(42)  a. [FORM figure
          ARG-ST <NPx, Part[out], NPy>
          SEM compute-rel(x,y)]
      b. [FORM add
          ARG-ST <NPx, Part[up]>
          SEM accumulate-rel(x)]

What these lexical entries tell us is that, for example, in the particle verb figure
out, the verb figure has three syntactic arguments, to be realized as a subject, the
particle out, and an NP object, while the verb semantically has two arguments (x
and y), which are linked to the subject (x) and the NP complement (y), respec-
tively. Meanwhile, add up is projected from the verb add, which has the subject
NP (x) and a particle complement, evoking the meaning of x’s accumulating.
In Chapter 2, we saw that phenomena like gapping support the verb-particle
constituent structure, in which the verb forms a syntactic unit with the following
particle. We repeat the relevant examples here:
(43) a. *John ran up a big hill and Jack a small hill.
b. John ran up a big bill and Jack a small bill.

The contrast, as we have noted, indicates that a verb-preposition clause cannot
be gapped, while a verb-particle clause can be gapped. One additional, intriguing
property we must consider is that the verb and its particle form a single semantic
unit, manifested in substitution by a single word, as given in the parentheses:
(44) a. I finally [figured out] the right answer. (=understand)
b. Hunter [gave up] the job. (=stop)
c. Jenny [looked over] the information packet for the newspaper. (=inspect)
d. He’d decided to [put off] the trip to Cairo. (=postpone)

Such semantic and syntactic unity, first discussed in Chapter 2, once again
motivates us to adopt the complex verb analysis in which the matrix verb and
the particle form a unit. Together with the assignment of the feature [LEX +] to
expressions like particle, the grammar introduces the following construction rule
to license the verb-particle combination:12
(45)  HEAD-LEX CONSTRUCTION:
      V[POS #1] → V[POS #1], X[LEX +]

This construction rule allows a lexical head to combine with an expression bear-
ing the feature LEX (like a particle) to form another lexical expression. This rule
would, for instance, license the following structure:13
12 Adopting this construction rule implies that we need to modify the HEAD-COMPLEMENT CON-
STRUCTION in (35). Instead of discharging all the elements of COMPS, it needs to discharge the
LEX element first and the remaining phrasal complements at once.
13 Meanwhile, the combination of figured the answer out is licensed by the HEAD-COMPLEMENT
CONSTRUCTION.

(46)  [tree diagram: figure combines first with the particle out via the HEAD-LEX CONSTRUCTION, and the resulting lexical unit then combines with its NP object]

The structure reflects the strong syntactic and semantic unity of the verb figure
and the particle out. The verb figure selects three arguments including sub-
ject, particle (out), and object, which are realized as the subject (SPR) and
complements (COMPS). It first combines with the particle in accordance with
the HEAD-LEX CONSTRUCTION, yielding the mid-level verb-particle unit. This
mid-level expression then combines with the object, licensed by the HEAD-
COMPLEMENT CONSTRUCTION. The combination of the verb and the particle
(V → V Part) is thus bigger than a pure lexical construction but smaller than a
phrasal construction, leading grammarians to call the verb-particle construction
a phrasal-verb or multi-word expression. The combination of a lexical head with
another lexical element yields a nonphrasal, lexical-level construction.14
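The HEAD-LEX CONSTRUCTION in (45) can be sketched computationally: a verb first combines with a selected [LEX +] particle, and the result is still a lexical (LEX +) unit that then takes its remaining NP complement, as in the structure for figured out the right answer. The dict encoding is illustrative only:

```python
# A minimal sketch of the HEAD-LEX CONSTRUCTION in (45): a lexical verb
# combines with a [LEX +] particle it selects, discharging that COMPS
# element while the result remains a lexical-level expression.

def head_lex(verb, particle):
    """Combine a verb with a selected [LEX +] particle."""
    assert particle["LEX"], "particle must be [LEX +]"
    assert verb["COMPS"] and verb["COMPS"][0] == "Part[%s]" % particle["FORM"]
    return {"FORM": verb["FORM"] + " " + particle["FORM"],
            "LEX": True,                  # the result is still lexical
            "SPR": verb["SPR"],
            "COMPS": verb["COMPS"][1:]}   # particle requirement discharged

figured = {"FORM": "figured", "LEX": True,
           "SPR": ["NP"], "COMPS": ["Part[out]", "NP"]}
out = {"FORM": "out", "LEX": True}

figured_out = head_lex(figured, out)   # still needs its NP object
```

Because the verb-particle unit is [LEX +] but still has an unsaturated COMPS list, it is bigger than a word yet smaller than a phrase, matching its traditional description as a phrasal verb.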

5.5 Feature Specifications on the Syntactic Complement

5.5.1 Complements of Verbs


Every verb will be specified for a value of the head feature VFORM.
For example, let us consider a simple example like The student knows the answer.
Here the verb knows will have the following lexical information:
14 In addition to the verb-particle construction, the combination of a finite auxiliary and the neg-
ative adverb not, when functioning as a sentential negation, is also licensed by the HEAD-LEX
CONSTRUCTION. See Section 8.4.1 in Chapter 8.
(47)  [FORM knows
       SYN [HEAD [POS verb
                  VFORM es]
            VAL [SPR <#1 NP>
                 COMPS <#2 NP>]]
       ARG-ST <#1 NP, #2 NP>]

This [VFORM es] value will be the same for S, in accordance with the HFP, as
shown here:
(48)  [tree diagram: the [VFORM es] value of knows is shared by its VP and S mothers, in accordance with the HFP]

It is easy to verify that if we had knowing instead of knows here, the S would have
the [VFORM ing] value and the result could not be a well-formed declarative sentence.
This is simply because the value ing is a subtype of nonfin.
There are various constructions in which we need to refer to VFORM values,
such as:
(49) a. During rehearsal, John kept [forgetting/*forgot/*forgotten his lines].
b. Last summer a cop caught them [drinking/*drank/*drink/*drunk beer
behind a local burger joint].
c. They made him [cook/*to cook/*cooking their gypsy food].
Even though each main verb here requires a VP as its complement (the part
in brackets), the required VFORM value could be different, as illustrated by the
following lexical specifications of the words kept and made:
(50) a. [ FORM    kept
          SYN     [ HEAD | POS  verb
                    VAL  [ SPR    ⟨[1] NP⟩
                           COMPS  ⟨[2] VP[ing]⟩ ] ]
          ARG-ST  ⟨[1] NP, [2] VP⟩ ]
     b. [ FORM    made
          SYN     [ HEAD | POS  verb
                    VAL  [ SPR    ⟨[1] NP⟩
                           COMPS  ⟨[2] NP, [3] VP[bse]⟩ ] ]
          ARG-ST  ⟨[1] NP, [2] NP, [3] VP⟩ ]

Such lexical specifications on the VFORM value ensure that these verbs only
combine with a VP with the appropriate VFORM value, as shown here:

(51)

The finite verb kept selects as its complement a VP whose VFORM value is ing.
The verb forgetting has this VFORM value, which it shares with its mother VP
in accordance with the HFP. The HEAD - COMPLEMENT CONSTRUCTION allows
the combination of the head verb kept with this VP. In the upper part of the
structure, the VFORM value of the verb kept is also passed up to its mother node
VP, ensuring that the VFORM value of the S is a subtype of fin, satisfying the
basic English rule for declarative sentences.
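Although the book states these constraints as feature structures, the licensing check itself is mechanical. The sketch below is our own illustration (the dictionaries and names are invented for this example, not part of the formalism): each head verb is paired with the VFORM it demands of its VP complement, and, since the HFP guarantees that a VP bears the VFORM of its head verb, checking the combination reduces to comparing two lexical values.

```python
# Illustrative only: dicts stand in for the book's feature structures.
# The VFORM each head verb demands of its VP complement, per (49)-(50):
COMPS_VFORM = {"kept": "ing", "caught": "ing", "made": "bse"}

# Lexical VFORM values for some candidate complement heads:
VFORM = {"forgetting": "ing", "forgot": "fin", "forgotten": "en",
         "drinking": "ing", "cook": "bse", "cooking": "ing"}

def licensed(head: str, comp_head: str) -> bool:
    """Check the Head-Complement combination of head with a VP headed
    by comp_head. By the HFP, the VP's VFORM is its head verb's VFORM,
    so the check compares two lexical values."""
    return VFORM[comp_head] == COMPS_VFORM[head]
```

For instance, licensed("kept", "forgetting") holds while licensed("kept", "forgot") fails, mirroring the pattern in (49a).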

5.5.2 Complements of Adjectives


There are at least two types of adjectives in English as these pertain to
complement selection: those selecting no complements at all and those
taking complements. As shown in the following examples, an adjective like
despondent optionally takes a complement, while intelligent does not take any
complements:

(52) a. She was apparently despondent (that she could not leave the city).
b. He seems intelligent (*to study medicine).

Adjectives such as eager, fond, and compatible each select a complement,
possibly of different categories (for example, VP or PP):

(53) a. Colleges are eager [to embrace/*embracing the trend].


b. I was not fond [of/*with the saltiness of Shandong cooking].
c. Some proposals seem compatible [with/*for the human interests].
d. From a distance, the building looks similar [to/*with the other newer, two-
floor homes around town].
e. He is proud [of/*with his profession].
f. The plan is subject [to/*for approval by a federal bankruptcy judge].

One thing we can note again is that the complements also need to bear
a specific VFORM or PFORM value, where PFORM indicates the form of a
specific preposition, as illustrated in examples (53b)–(53f). Just like verbs,
adjectives also place restrictions on the VFORM or PFORM value of their
complement. Such restrictions are also specified in the arguments that they
select:
(54) a. [ FORM    eager
          SYN | HEAD | POS  adj
          ARG-ST  ⟨NP, VP[VFORM inf ]⟩ ]

     b. [ FORM    fond
          SYN | HEAD | POS  adj
          ARG-ST  ⟨NP, PP[PFORM of ]⟩ ]

Such lexical entries will project sentences like the following, in which the
first element is realized as SPR while the second is realized as the COMPS
value:15

15 The copula verb are selects two arguments: a subject and an AP. Its subject is the same as the
subject of eager. For discussion of copula verbs, see Chapter 8.

(55)

As represented in this simplified tree structure, the adjective eager combines
with its VP[inf ] complement in accordance with the HEAD-COMPLEMENT
CONSTRUCTION. In addition, this rule also licenses the combination of the infinitival
marker to with its VP[bse] complement and the combination of the copula are
with its AP complement. The HFP ensures that the HEAD features, POS and
VFORM, are passed up to the final S. Each structure will satisfy all of the relevant
constraints and principles.

5.5.3 Complements of Common Nouns


Many nouns do not select complements, though they often have
specifiers. For example, common nouns such as desk, book, and beer require
only a specifier but no complement. Yet there are also nouns which do require
a specific type of complement, such as proximity, search, king, desire, and
bottom:
(56) a. their proximity to/*for the ocean
b. my father’s faith in/*on me
c. the king of/*in England
d. the search for/*of the answer
e. the bottom of/*in the lake

Although these complements are optional (as indicated by parentheses in (57)),
they are grammatically classified as complements of the nouns, and they are
represented in the following simplified lexical entries:
(57) a. [ FORM    proximity
          ARG-ST  ⟨DP, (PP[PFORM to])⟩ ]

     b. [ FORM    faith
          ARG-ST  ⟨DP, (PP[PFORM in])⟩ ]

The category DP (similar to NP) includes not only simple determiners like a,
the, and that but also possessive phrases like John's (see Chapter 6, where we
discuss NP structures in detail). In these particular entries, the SPR is shown to
be required.

5.6 Feature Specifications on the Subject

In general, verbs select a regular NP as subject:


(58) a. John/Some books/The spy disappeared.
b. The teacher/The monkey/He fooled the students.

However, as noted in the previous chapter, certain English verbs select only it or
there as subject:16
(59) a. It/*John/*There rains.
b. There/*The spy lies a man in the park.

The pronouns it and there are often called ‘expletives,’ indicating that they do
not contribute any meaning. The use of these expletives is restricted to partic-
ular contexts or verbs, although both forms have regular pronoun uses as well.
One way to specify such lexical specifications for subjects is to make use of a
form value specification for nouns: All regular nouns have [NFORM norm(al)]
as a default specification; overall we classify nouns as having three different
NFORM values: normal, it, and there. Given the NFORM feature, we can have
the following lexical entries for the verbs above:
(60) a. [ FORM    rained
          SYN | VAL [ SPR    ⟨[1] NP[NFORM it]⟩
                      COMPS  ⟨ ⟩ ]
          ARG-ST  ⟨[1] NP⟩ ]

16 Refer to Exercise 4 of Chapter 3 for the subjecthood of there.


     b. [ FORM    fooled
          SYN | VAL [ SPR    ⟨[1] NP[NFORM norm]⟩
                      COMPS  ⟨[2] NP⟩ ]
          ARG-ST  ⟨[1] NP, [2] NP⟩ ]

We can also observe that only a limited set of verbs require their subject to be
[NFORM there]:17
(61) a. There comes a time when you can’t save it.
b. There remains a marked contrast between potentiality and actuality.
c. There exist few solutions which are cost-effective.
d. There arose a cloud of dust that obscured the view.

The majority of verbs do not allow there as subject:


(62) a. *There runs a man in the park.
b. *There sings a man loudly.

For sentences with there subjects, we first consider verb forms which have
regular subjects. A verb like exist in (61c) takes one argument in such an example,
In addition, such verbs can introduce there as the subject through the Argument
Realization option given in (63b), which is the form that occurs in the structure
of (60a):
(63) a. [ FORM    exists
          SYN | VAL [ SPR    ⟨[1] NP⟩
                      COMPS  ⟨ ⟩ ]
          ARG-ST  ⟨[1] NP⟩ ]

     b. [ FORM    exists
          SYN | VAL [ SPR    ⟨[1] NP[NFORM there]⟩
                      COMPS  ⟨[2] NP⟩ ]
          ARG-ST  ⟨[1] NP, [2] NP⟩ ]
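As a rough illustration (ours, not the book's notation), the NFORM restrictions can be checked by pairing each lexical entry with the NFORM value its SPR demands; the key "exists/there" is a made-up label for the realization in (63b):

```python
# NFORM demanded of the subject by each entry (illustrative names):
SPR_NFORM = {"rains": "it", "fooled": "norm",
             "exists": "norm", "exists/there": "there"}

# Lexical NFORM values: expletives vs. [NFORM norm] nouns.
NFORM = {"it": "it", "there": "there", "John": "norm", "a man": "norm"}

def subject_ok(entry: str, subject: str) -> bool:
    """Does the subject bear the NFORM value the entry's SPR requires?"""
    return NFORM[subject] == SPR_NFORM[entry]
```

On this model, subject_ok("rains", "it") succeeds while subject_ok("rains", "John") fails, reproducing the contrast in (59a).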

5.7 Clausal Complement and Subject

5.7.1 Verbs Selecting a Clausal Complement


We have seen that the COMPS list includes predominantly phrasal
elements. However, there are verbs that require not just a phrase but a whole
clause as the complement, either finite or nonfinite. Consider, for example, the
complements of think or believe:

17 Some verbs such as arise or remain sound a little archaic in these constructions.

(64) a. I think (that) reporters are doing their jobs, by and large.
b. They believe (that) some improvements to the referral process should be
investigated.

The C (complementizer) that is optional here, implying that this kind of verb
selects a finite complement clause of some type, which we will notate as a
[VFORM fin] clause. That is, these verbs will have one of the following two
COMPS values:
(65) a. [ COMPS ⟨S[VFORM fin]⟩ ]
     b. [ COMPS ⟨CP[VFORM fin]⟩ ]

If the COMPS value only specifies a VFORM value, the complement can be either
S or CP. This means that we can subsume these two uses under the following
single lexical entry, suppressing the category information of the sentential
complement:18
(66)  [ FORM    believe
        SYN | HEAD | POS  verb
        ARG-ST  ⟨NP, [VFORM fin]⟩ ]

This constraint will then allow both of the following structures, in which believe
combines either with a finite S or a finite CP:
(67)

We also find somewhat similar verbs, like demand and require, which diverge
only in the VFORM value on their sentential complements:
(68) a. They demanded that that city’s police not be allowed to march in the parade.
b. The dance required that she turn around as she circled.

Unlike think or believe, these verbs that introduce a subjunctive clause typically
only take a CP[VFORM bse] as complement: The verb of the embedded clause
is actually in the bse form. Observe the structure of (68b):

18 Although the categories V and VP are also potentially specified as [VFORM fin], such words or
phrases cannot be complements of verbs like think or believe. This is because complements are
typically saturated phrases at least with respect to their own complements (since the VP still
requires a subject). While S and CP are saturated categories projected from V, VP and V are not
saturated.

(69)

The verb require selects a bse CP or S complement, and this COMPS
requirement is discharged at its mother VP: This satisfies the HEAD-COMPLEMENT
CONSTRUCTION. There is one issue here with respect to the percolation of the
VFORM value: The CP must be bse, and this information must come from the
head C, not from its complement S. One way to make sure this is so is to assume
that the VFORM value of C is identical to that of its complement S, as in this
lexical realization:
(70)  [ FORM  that
        SYN   [ HEAD [ POS    comp
                       VFORM  [1] ]
                VAL  [ SPR    ⟨ ⟩
                       COMPS  ⟨S[VFORM [1]]⟩ ] ] ]

This lexical information will then allow us to pass on the VFORM value of S to the
head C and then percolate up to the CP according to the HFP. This encodes the
intuition that a complementizer ‘agrees’ in VFORM value with its complement
sentence.
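This 'agreement' can be stated as a one-line identity. The following sketch is our own illustration (the function names are invented): the complementizer copies the VFORM of its S complement, as in (70), and the HFP then carries that value up to CP, so a selecting verb like require indirectly constrains the verb form inside the embedded clause.

```python
def cp_vform(s_vform: str) -> str:
    """VFORM of a CP built over an S with the given VFORM: the
    complementizer shares the value of its S complement (70), and
    the HFP passes it up from C to CP."""
    return s_vform

def complement_ok(required_vform: str, s_vform: str) -> bool:
    """Does an embedded S with this VFORM yield a CP that satisfies
    the selecting verb's COMPS specification?"""
    return cp_vform(s_vform) == required_vform
```

Thus a verb demanding CP[bse], like require in (68), accepts a bse embedded clause but rejects a fin one.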
One more thing to note here is that the unique argument of the complementizer
is mapped not onto the SPR but rather onto the COMPS value. Lexical expressions
like complementizers, nonpredicative prepositions, markers like than, and
determiners, are functional expressions in the sense that they do not select subjects.

We take such expressions to be function-words that select only a complement,
and observe the following ARC:

(71) Argument Realization Constraint for the Function-Word:

     function-wd ⇒ [ SYN | VAL [ SPR    elist
                                 COMPS  [A] ]
                     ARG-ST  [A] ]

This means that the argument of the function-word is mapped to its complement.
Prepositions like of have no specifier and allow at most one complement, since
of cannot be used as predicative.19 We will see that even an inverted auxiliary
verb also can be viewed as a function-word of this type, in the sense that it has no
subject but contains just a complement realized from the unique argument (see
Chapter 8).
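The contrast between the ordinary realization pattern and the function-word ARC in (71) amounts to two mapping rules from ARG-ST to the valence lists. This sketch is our own illustration; list elements are just strings standing in for categories:

```python
def realize(arg_st, function_word=False):
    """Map an ARG-ST list onto the valence pair (SPR, COMPS).

    Ordinary predicates: first argument -> SPR, the rest -> COMPS.
    Function-words (complementizers, markers, determiners): SPR is
    empty and the whole ARG-ST is mapped onto COMPS, as in (71).
    """
    if function_word:
        return [], list(arg_st)
    return list(arg_st[:1]), list(arg_st[1:])
```

For a transitive verb like fooled, realize(["NP", "NP"]) yields SPR ["NP"] and COMPS ["NP"]; for the complementizer that, realize(["S[fin]"], function_word=True) yields an empty SPR and COMPS ["S[fin]"].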
There are also verbs which select a sequence of an NP followed by
a CP as a complement. NP and CP are abbreviations for feature structure
descriptions that include the information [POS noun] and [POS comp],
respectively:

(72) a. The trial court warned the defendant that his behavior was unacceptable.
b. His parents told him that he had fainted.
c. Liza finally convinced me that I was ready for more training.

The COMPS value of such verbs realized from the ARG - ST will be as in (73):

(73)
COMPS  NP, CP[VFORM fin]

In addition to the that-type of CP, there is an infinitive type of CP,
headed by the complementizer for. Some verbs select this nonfinite CP as the
complement:

(74) a. Tom intends for Sam to review that book.


b. I honestly never intended for this to happen.

(75) a. I would have preferred for her to stay on as governor.


b. Jenna prefers for me to play with her hair in a specific way.

The data show that verbs like intend and prefer select an infinitival CP clause.
The structure of (75a) is familiar, but it now has a nonfinite VFORM value
within it:

19 This means that predicative prepositions like in or under in sentences like Pat is in the room or
Pat is under the table select a subject as well as a complement.

(76)

The structure given here means that the verb intends will have the following
lexical information, suppressing the SYN information:
(77) [ FORM    intend
       ARG-ST  ⟨NP, CP[VFORM inf ]⟩ ]

To fill out the analysis, we need explicit lexical entries for the complementizer
for and for the infinitival marker to, which we treat as an (infinitive) auxiliary
verb. In fact, to has a distribution very similar to finite modal auxiliaries such as
will or must, differing only in the VFORM value (see Chapter 8, Section 8.3.5).20
(78) a. [ FORM  for
          SYN   [ HEAD [ POS    comp
                         VFORM  inf ]
                  VAL | COMPS ⟨S[VFORM inf ]⟩ ] ]

20 An issue arises regarding the accusative case of the subject him, as in Tom intends for him to
review the book. In line with what is traditionally assumed, we could posit a constructional
constraint specifying that the subject of an infinitival VP can have accusative case. Alternatively,
some linguists (e.g., Ginzburg and Sag, 2000) have proposed a ternary analysis for infinitivals
where the complementizer for selects both the accusative subject and the infinitival VP as its
complements.

     b. [ FORM  to
          SYN   [ HEAD [ POS    verb
                         VFORM  inf ]
                  VAL | COMPS ⟨VP[VFORM bse]⟩ ] ]

Just like the complementizer that, the complementizer for selects an infinitival
S as its complement, inheriting its VFORM value too. The evidence that the
complementizer for requires an infinitival S can be found from coordination data:
(79) a. For John to either [make up such a story] or [repeat it] is outrageous.
(coordination of bse VPs)
b. For John either [to make up such a story] or [to repeat it] is outrageous.
(coordination of inf VPs)
c. For [John to tell Bill such a lie] and [Bill to believe it] is outrageous.
(coordination of inf Ss)

Given that only like categories (constituents with the same label) can be
coordinated, we can see that base VPs, infinitival VPs, and infinitival Ss are all
constituents.21
An important point here is that the verbs that select a CP[VFORM inf ]
complement can also take a VP[VFORM inf ] complement:
(80) a. He intends to continue to see patients and conduct research.
b. Wayne prefers to sit at the bar and mingle.

By underspecifying the category information of complements, we can generalize
this subcategorization information:
(81)  [ FORM  intend
        SYN   [ HEAD | POS  verb
                VAL | COMPS ⟨[VFORM inf ]⟩ ] ]

Since the specification [VFORM inf ] is quite general, it can be realized either as
CP[VFORM inf ] or VP[VFORM inf ].
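Underspecification of this sort amounts to requiring a match only on the features actually mentioned. The sketch below is our own illustration (dicts model categories; the names are invented): intend's single COMPS specification from (81) accepts both CP[inf ] and VP[inf ] but rejects a finite CP.

```python
def subsumes(spec: dict, cat: dict) -> bool:
    """True if cat is at least as specific as spec: every feature the
    specification mentions must carry the same value in the category."""
    return all(cat.get(feat) == val for feat, val in spec.items())

INTEND_COMPS = {"VFORM": "inf"}           # from (81)
CP_INF = {"POS": "comp", "VFORM": "inf"}  # for Sam to review that book
VP_INF = {"POS": "verb", "VFORM": "inf"}  # to continue to see patients
CP_FIN = {"POS": "comp", "VFORM": "fin"}  # that reporters are doing ...
```

Because INTEND_COMPS says nothing about POS, both CP_INF and VP_INF satisfy it, while CP_FIN fails on the VFORM value.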
However, this does not mean that all verbs behave alike: Not all verbs can
take variable complement types like an infinitival VP or S. For example, try,
tend, hope, and others select only a VP[inf ], as attested by the data:
(82) a. Tom tried to ask a question.
b. *Tom tried for Bill to ask a question.
(83) a. Greenberg tends to avoid theoretical terminology in favor of descriptive
language.
b. *Greenberg tends for Mary to avoid theoretical terminology in favor of
descriptive language.

21 Tensed VPs can be coordinated even with different tense values, as in Kim [alienated cats] and
[loves his dog].

(84) a. They hoped to find jobs for the summer.


b. *They hoped for their students to find jobs for the summer.

Such subcategorization differences are hard to predict simply from the meanings
of verbs: They are apparently arbitrary lexical specifications that language users
need to learn.
There is another generalization that we need to consider with respect to the
property of verbs that select a CP: Most verbs that select a CP can at first glance
select an NP too:

(85) a. He really believes it/that he is an average American.


b. She mentioned the issue to me/mentioned to me that her husband had
solicited a reconciliation.

Should we have two lexical entries for such verbs or can we have a simple way
of representing such a pattern? To reflect such lexical patterns, we can assume
that English parts of speech come in families and can profitably be analyzed in
terms of a type hierarchy as follows:22

(86)

According to the hierarchy, the type nominal is a supertype of both noun and
comp. In accordance with the basic properties of systems of typed feature
structures, an element specified as [POS nominal] can be realized either as [POS
noun] or [POS comp]. These will correspond to the phrasal types NP and CP,
respectively.
The hierarchy implies that the subcategorization pattern of English verbs will
refer to (at least) each of these types. Consider the following patterns:

(87) a. They pinched [his cheeks].


b. *They pinched his cheeks [that he felt pain].

(88) a. We hope [that such a vaccine could be available in ten years].


b. *We hope [the availability of such a vaccine in ten years].

(89) a. Cohen proved [the independence of the continuum hypothesis].


b. Cohen proved [that the continuum hypothesis was independent].

22 This type hierarchy is adopted from Kim and Sag (2005).



The part-of-speech type hierarchy in (86) allows us to formulate simple
lexical constraints that reflect these subcategorization patterns, making reference to
noun, verbal, and nominal:

(90) a. ARG-ST ⟨NP, NP[POS noun], ...⟩
     b. ARG-ST ⟨NP, CP[POS comp], ...⟩
     c. ARG-ST ⟨NP, XP[POS nominal], ...⟩

In each class, the ARG - ST list specifies the argument elements that the verbs
select (in the order Subject, Direct Object . . . ). The POS value of a given
element is the part-of-speech type that a word passes on to the phrases it projects.
These three patterns illustrate that English transitive verbs come in at least three
varieties.
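The subsumption behavior of the hierarchy in (86) can be modeled directly. In the toy encoding below (our own; only the types relevant to (90) are included), each maximal POS type lists the types it satisfies, so a slot requiring nominal is met by noun or comp, while a slot requiring noun is met only by noun:

```python
# Each maximal POS type mapped to the set of types it satisfies,
# following (86): nominal subsumes both noun and comp.
SATISFIES = {
    "noun": {"noun", "nominal"},
    "comp": {"comp", "nominal"},
}

def selects(required_pos: str, phrase_pos: str) -> bool:
    """Does a phrase of type phrase_pos satisfy an ARG-ST slot
    requiring required_pos, as in the three patterns of (90)?"""
    return required_pos in SATISFIES[phrase_pos]
```

On this model a pinch-type verb ([POS noun], (90a)) accepts only an NP object, a hope-type verb ([POS comp], (90b)) accepts only a CP, and a prove-type verb ([POS nominal], (90c)) accepts both, matching the judgments in (87)-(89).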
In addition to the intermediate category, the postulation of supercategories
like verbal can capture generalizations about so-called it-object extraposition.
English allows a pattern where a finite or infinitival clause appears in sentence-
final or ‘extraposed’ position, leaving the expletive it behind:

(91) a. I have made it my duty [to clean this place from top to bottom].
b. I owe it to you [that the jury acquitted me].

Note that this extraposition is applied only to a clausal or verbal expression, as
seen from the following data:

(92) a. I find it difficult [to frequently visit your house].


b. They found it a stress [being in the same office].
c. He made it clear [he was perfectly fine with my staff director having access].
d. As a scientist, I find it frustrating [that I can’t empirically test all these
theories].
e. *They found it frustrating [the entrance exam].

The examples illustrate that an infinitival, gerundive VP, S, or CP can undergo
the it-object extraposition process, but not an NP. One simple generalization
we can get from the data is that only the verbal category can participate in the
extraposition (see Chapter 12 for further discussion).

5.7.2 Verbs Selecting a Clausal Subject


In addition to CP as a complement, we also find some cases where a
CP is the subject of a verb:

(93) a. [John] bothers me.


b. [That John snores] bothers me.

(94) a. [John] loves Bill.


b. *[That John snores] loves Bill.

The contrast here means that verbs like bother can have two realizations of
the ARG - ST, whereas those like love allow only one. This difference can be
represented by the following:
(95) a. [ FORM    bother
          ARG-ST  ⟨XP[nominal], NP⟩ ]

     b. [ FORM    love
          ARG-ST  ⟨NP, NP⟩ ]

The difference is that the first argument of bother is nominal while that of love is
just an NP. By definition, the nominal argument can be realized either as an NP
or as a CP, licensing sentences like (93):
(96) a. [ FORM    bother
          SYN | VAL [ SPR    ⟨[1] NP⟩
                      COMPS  ⟨[2] NP⟩ ]
          ARG-ST  ⟨[1] [nominal], [2] NP⟩ ]

     b. [ FORM    bother
          SYN | VAL [ SPR    ⟨[1] CP⟩
                      COMPS  ⟨[2] NP⟩ ]
          ARG-ST  ⟨[1] [nominal], [2] NP⟩ ]

The different realizations thus all hinge on the lexical properties of the given
verb, and only some verbs allow the dual realization.
A clausal subject is not limited to a finite that-headed CP, but there are other
clausal types:
(97) a. [That John sold the ostrich] surprised Bill.
(that-clause CP subject)
b. [(For John) to train his horse] would be desirable.
(infinitival CP or VP subject)
c. [That the king or queen be present] is a requirement of all royal weddings.
(subjunctive that-clause CP subject)
d. [Which otter you should adopt first] is unclear.
(wh-question subject)

Naturally, each particular predicate dictates which kinds of subjects are


possible, as in (97), and which are not, as in (98):
(98) a. *That Fred was unpopular nominated Bill.
b. *That Tom missed the lecture was enjoyable.
c. *For John to remove the mother is undeniable.
d. *How much money Gordon spent is true.

For example, the difference between the two verbs nominate and surprise can be
seen in these partial lexical entries:

(99) a. [ FORM    nominate
          ARG-ST  ⟨NP, NP⟩ ]

     b. [ FORM    surprise
          ARG-ST  ⟨[nominal], NP⟩ ]

Unlike nominate, the first argument of surprise can be a nominal. This means
that its subject can be either an NP or a CP.

5.7.3 Adjectives Selecting a Clausal Complement


Like verbs, certain adjectives can also select CPs as their complements.
For example, confident and insistent select a finite CP, whereas eager
selects an infinitival CP:

(100) a. Williams is confident [that there will be no issues this year].


b. Their grandmother is insistent [that they are innocent].

(101) a. He seems eager [for her brother to catch a cold].


b. He is eager [for investigators to find out the devices].

We can easily find more adjectives which select a CP complement:

(102) a. I’m ashamed [that I took my life for granted while you take nothing for
granted].
b. They are content [that you are not a threat].
c. I am thankful [that she lived one year after diagnosis].

The lexical entries for the adjectives in (101) and (102) are given in (103):
(103) a. [ FORM    ashamed
           ARG-ST  ⟨NP, CP[VFORM fin]⟩ ]

      b. [ FORM    content
           ARG-ST  ⟨NP, CP[VFORM fin]⟩ ]

      c. [ FORM    eager
           ARG-ST  ⟨NP, CP[VFORM inf ]⟩ ]

Note that many of these adjectives can select an infinitival VP as the second
argument:

(104) a. The country is eager to accept foreign help.


b. The student was willing to take the first step.

The second argument in each case will be realized as the COMPS element
in accordance with the ARC. This realization, interacting with the
HEAD-COMPLEMENT CONSTRUCTION, the HEAD-SPECIFIER CONSTRUCTION, and
the HFP, can license structures like (105):

(105)

When the adjective eager combines with its complement, VP[inf ], it satisfies the
HEAD-COMPLEMENT CONSTRUCTION. The same rule allows the copula
to combine with its AP complement.

5.7.4 Nouns Selecting a Clausal Complement


Nouns can also select an infinitival VP or CP complement, for
example, eagerness:

(106) a. their eagerness [for the child to become independent]


b. their eagerness [to become independent]

These examples imply that eagerness will have the following lexical
information:
(107) [ FORM    eagerness
        ARG-ST  ⟨DP, XP[VFORM inf ]⟩ ]

This means that the noun eagerness selects two arguments in which the DP is
realized as its specifier and the VP as the complement. This will allow a structure
like the following:

(108)

Note that the noun first combines with its VP complement, forming a Head-
Complement construct. The resulting N then combines with its specifier DP,
yielding a Head-Specifier construct.
One pattern that we can observe is that when a verb selects a CP complement
and has a corresponding noun, the noun also selects a CP:

(109) a. Amber alleged that he had committed domestic violence.


b. The majority of candidates believed that technology increased engagement.
c. I convinced him that this would be good for his daughter.

(110) a. the allegation that he had committed domestic violence


b. the belief that technology increased engagement
c. my conviction that this would be good for his daughter

This shows us that the derivational process that derives a noun from a verb
preserves the COMPS value of that verb.23 Not surprisingly, not all nouns select a
CP complement:

(111) a. *his attention that the earth is round


b. *his article that the earth is flat
c. *the ignorance that James can play the flute
d. *the expertise that she knows how to bake croissants

23 Derivational processes or rules (e.g., establishment from establish) typically create a new lexeme
from a base, while inflectional ones (e.g., students from student) do not.

These nouns cannot combine with a CP, indicating that they do not have CPs as
arguments or complements.

5.7.5 Prepositions Selecting a Clausal Complement


In general, prepositions in English cannot select a CP complement:
(112) a. *Alan is thinking about [that his students are eager to learn English].
b. *Fred is counting on [for Tom to make an announcement].

However, wh-clauses, sometimes known as indirect questions, may serve as
prepositional complements:
(113) a. The future of Poland will depend on [how many people are mature enough
to be nonconformists].
b. They are thinking about [whether they are going to approve the free trade
agreement].

These facts show us that indirect questions have some feature (e.g.,
QUE ), which distinguishes them from canonical that- or for-CPs and
makes them similar to true nouns (NP is the typical complement of a
preposition).24

5.8 Conclusion

As a first step toward building a robust generative grammar based on
a fine-grained feature-structure system, we explored head features (e.g., VFORM
and POS) and the HFP (Head Feature Principle). We then showed how elements
in the ARG - ST list are mapped onto the syntactic valence features SPR (specifier
and subject) and COMPS, in accordance with the ARC (Argument Realization
Constraint).
Equipped with these principles, construction rules, and feature structures,
we then demonstrated how each of the X construction rules
(HEAD - SPECIFIER CONSTRUCTION, HEAD - COMPLEMENT CONSTRUCTION,
and HEAD - MODIFIER CONSTRUCTION) interacts with lexical entries, as well
as general principles like the HFP and the VALP (Valence Principle), to form
lexical and phrasal constructs in English. One key point we learned here is that
each combination (e.g., subtree) must conform to all the principles as well as
a combinatorial phrase-construction rule. We extended this system to license
24 Considering examples like the following, involving since, before, after, and so on, we may
assume that these conjunctions can be taken to be prepositions with either an NP or an S as
the syntactic complement:
a. Students have been studying syntax since the beginning of this month.
b. So much has changed in the sport since I was a teenager.

nonphrasal lexical (verb-particle) constructions by means of the HEAD-LEX
CONSTRUCTION.
In the final section of this chapter, we asked why the members of the ARG - ST
list require detailed feature specifications. We observed that there are a variety of
syntactic environments in which the complement of a lexical expression (verb,
noun, adjective, or preposition) must have a specific VFORM or PFORM value.
We also noted environments in which the subject requires a specified NFORM
value. Such feature-specification systems allow us to describe the combinatorial
possibilities for phrasal and lexical expressions in a precise manner. In the next
chapter, we will see how the present generative grammar, combined with a fine-
grained feature system, can be expanded to account for the complexity of NP
structures in English.

Exercises

1. For each of the following expressions, check whether it selects a
clausal complement. Write out the examples which justify your
classifications:
(i) ignore, select, doubt, deny, glad, unsure, confident, allegation,
ignorance
2. For each of the following expressions, check whether it selects a
clausal complement. Write out the examples which justify your
classifications:
(i) annoy, vanish, remain, admire, mandatory, enjoyable, apparent
3. For each sentence, draw a tree structure and then give the COMPS
value (including VFORM and PFORM value) for the italicized word:
a. That experience made her want to study linguistics at Stanford.
b. The driver shot a quick glance at the passenger.
c. Let me ask her whether I should slaughter a cow or a goat for
my son’s initiation.
d. It was necessary for them to continue interacting with the
perpetrator(s) frequently on the job.
e. The perspective highlights the importance of active participa-
tion in literacy events in the classroom.
f. Both sides admitted that they punched and slapped the other.
g. The school forced the authors to make the guide’s language
gender-neutral.
h. She expected him to continue staring at nothing.
i. This reflected Egypt’s economic and financial dependence upon
the US.
j. We teach students to be responsible for their personal health
and fitness.

4. Identify errors in the following sentences, focusing on the form
values of verbs, adjectives, and nouns, and/or their COMPS values:
a. *He lent me with an inflatable mattress.
b. *Jane generously contributed this article her time and energy.
c. *I try not to let them troubled me.
d. *She is afraid against certain neighbors.
e. *I particularly admire his willingness being a distributor.
f. *Would you mind to share the joke with me, Mr. President?
g. *We decided visiting some of our favorite pasta-obsessed restaurants.
h. *She poured an enamel mug with coffee.
i. *UN Security Council has condemned the massacre on Syria.
j. *The husband covered a heavy sheet over the piano.
k. *She was able continuing working thanks to an accommodating
employer.
l. *His belief on me made an immeasurable difference.

5. Draw trees for the following examples with detailed NP structures:


a. Both optometrists examined her thought that she could safely
operate a motor vehicle.
b. He reiterated his belief in the genius’s unrestricted nature as
indispensable to the creative process.
c. The child’s exposure to the language and the child’s actual use
of that language have been documented.
d. Corporate speech rights are justified by reference to listeners’
rights.
e. My attempt to read a process of evolution into the production
of von Bonin’s textile paintings obviously claims her practice
for the apparatus of art history.

6. We have seen that a verb and a particle form a multi-word expression,
as represented by the HEAD-LEX CONSTRUCTION. Draw trees for the
following examples and mark the construction rules that license each
combination:
(i) a. None of this added up to proof or even strong possibility of
mischief.
b. The train broke down and stranded me.
c. She threw up after eating a boiled snail.
(ii) a. I filled out a questionnaire and signed a bunch of stuff.
b. He boldly checked out the rest of the trailer.
c. I picked up my papers with shaking hands and went home.
6 Noun Phrases and Agreement

6.1 Classification of Nouns

As noted in Chapter 1, nouns represent not only entities like people,
places, or things, but also abstract and intangible concepts like happiness,
information, hope, and so on. This variegated reference renders it difficult to
classify nouns solely according to their meanings. The following chart shows
the canonical classification of nouns. It takes into account semantic differences,
while considering formal and grammatical properties:

(1) Types of nouns in English:

    common noun    countable      desk, book, difficulty, remark, etc.
                   noncount       butter, gold, music, furniture, laziness, etc.
    proper noun                   Seoul, Kyung Hee, Stanford, Palo Alto, January, etc.
    pronoun        personal       he, she, they, his, him, etc.
                   relative       that, which, what, who, whom, etc.
                   interrogative  who, where, how, why, when, etc.
                   indefinite     anybody, everybody, somebody, nobody, anywhere, etc.

As represented here, nouns fall into three major categories: common nouns,
proper nouns, and pronouns. An important division within the class of common nouns is the one between count and noncount nouns. In Chapter 1, we saw
that whether a noun is countable or not does not fully depend on its reference.
A single group of things can be referred to by a count or a noncount (‘mass’)
term (Rothstein, 2010). For example, the greenery on a tree may be referred to
as either leaves or foliage. We can make a similar observation about ‘flexible’
nouns, like brick and difficulty, which can be either mass or count depending on
context:

(2) a. The path was made of brick.


b. She piled bricks on the deck.

(3) a. We have had many difficulties.


b. Do you have difficulty getting up?


Proper nouns denote specific people or places and are typically uncountable. Common nouns and proper nouns display clear contrasts in terms of
the combinatorial possibilities with determiners, as shown in the following
chart:

(4) Combinatory possibilities with determiners:

                Proper N         Common N
                                 countable    uncountable   flexible
    N           Einstein         *book        music         cake
    the + N     *the Einstein    the book     the music     the cake
    a + N       *an Einstein     a book       *a music      a cake
    some + N    *some Einstein   *some book   some music    some cake
    N + s       *Einsteins       books        *musics       cakes

Proper nouns (Einstein) do not combine with any determiner, as can be seen
from the chart. Meanwhile, count nouns have singular and plural forms (e.g., a
book and books), whereas uncountable nouns (music) combine only with some
or the. The discussion in Chapter 1 has shown us that some common nouns may
be either count or noncount, depending on the kind of reference they have. For
example, cake is countable when it refers to a specific entity as in I made a cake,
but noncountable when it refers to ‘cake in general,’ as in I like cake.
Together with verbs, nouns are critical to the meaning and structure of
the English clause, because they (or their phrasal projections) are used to
encode both the core semantic roles (agents and undergoers of actions) and
the core syntactic functions (subject and object). This chapter deals with the
structural, semantic, and functional dimensions of NPs, with a focus on the
agreement relationships between nouns and determiners and between subjects
and verbs.

6.2 Syntactic Structures

6.2.1 Common Nouns


As noted before, common nouns can have a determiner as a specifier,
unlike proper nouns and pronouns. In particular, count nouns cannot be used without a
determiner when they are singular:

(5) a. *(The) student completes a self-assessment form.


b. *(The) book includes a suggestive chapter on how gestures and body
language vary culturally.

However, mass or plural count nouns are fully grammatical as bare NPs with
no determiners:1

(6) a. Rice is available in most countries.


b. Students learn curriculum content, and teachers teach curriculum content.

Examples like (6) imply that, as we have seen earlier, a single noun (rice) can
be projected into an NP without combining with a complement or specifier, as
given in the following:2

(7)    NP
       [phrase, SPR ⟨ ⟩, COMPS ⟨ ⟩]
        |
       N
       [word, SPR ⟨ ⟩, COMPS ⟨ ⟩]
        |
       rice

This structure shows us that a lexical head is projected into a phrasal construction
without combining with any specifier or complement. There is no need to have
an intermediate N′ projection, since no specifier is required.
Unlike such nouns, countable nouns like book and student will select a
DP as their specifier:
(8) a. [FORM book,
        SYN [HEAD | POS noun,
             VAL [SPR ⟨DP⟩, COMPS ⟨ ⟩]]]

    b. [FORM student,
        SYN [HEAD | POS noun,
             VAL [SPR ⟨DP⟩, COMPS ⟨ ⟩]]]

1 The style of English used in headlines does not have this restriction, e.g., Student discovers planet,
Army receives high-tech helicopter.
2 Note that the projection from N to NP makes no changes to the VAL feature values. The key change is from a word to a phrase. This projection is a unary structure with no branching. To allow this kind of unary projection, the grammar needs the HEAD-ONLY CONSTRUCTION:

(i) HEAD-ONLY CONSTRUCTION:
    XP[phrase, VAL 1] → X[word, VAL 1]

This construction rule will also license a lexical element to project into a phrase, as in VP → V and NP → N.

These nouns then would project a structure like the following:

(9)        NP
          /    \
        DP      N
         a     book

As seen from the structure, the lexical construction N directly combines with its
specifier DP, forming a head-specifier construct.
In the previous chapter we have seen that not only a simple lexical element
(e.g., a, an, this, that, any, some, his, how, which) but also a phrasal expression
like a possessive phrase can serve as a specifier:

(10) a. [[My brother]’s] friend learned dancing.


b. [[The president]’s] bodyguard learned surveillance.
c. [[The King of Rock and Roll]’s] records led to dancing.

The possessive NPs my brother's or the president's are not determiners but phrases. We have taken such phrases as DPs headed by the possessive marker 's, whose lexical entry is given in the following (see Abney, 1987).
(11) [FORM 's,
      SYN [HEAD | POS det,
           VAL [SPR ⟨NP⟩, COMPS ⟨ ⟩]]]

The grammar thus allows not only a simple determiner but also a possessive NP to be projected into a DP, as represented in the following structures:

(12)                NP
                  /     \
                DP        N
              /    \    friend
            NP      D
        my brother  's

As shown here, the noun friend does not select a complement, and thus projects
to an NP with its specifier DP my brother’s. The head of this DP is the possessive
determiner selecting an NP as given here. The expression my brother is also a
full NP just like the whole phrase my brother’s friend. The common noun brother
requires a DP as its specifier.3
As we have seen in the previous chapters, common nouns can select a complement, as in the planet's proximity to the Sun, an increase in price, or a feeling of loneliness. This kind of NP would have the following structure:

(13)                 NP
                   /     \
                 DP        N′
            the planet's  /   \
                         N     PP
                    proximity  to the Sun

3 Once again note that this combinatorial system, with cancellation of the values of the valence features SPR and COMPS, requires no vacuous projection from N to N′ when the N does not combine with a complement. The head N, requiring a specifier, can directly combine with that specifier with no intervening N′ projection; the COMPS set of such an N is simply empty.

The head noun proximity combines with its complement to the Sun, and the resulting N′ phrase combines with the specifier the planet's, which consists of the NP the planet and the possessive marker 's.

6.2.2 Pronouns
The core class of pronouns in English includes at least three main
subgroups:

(14) a. Personal pronouns: I, you, he, she, it, they, we


b. Reflexive pronouns: myself, yourself, himself, herself, itself
c. Reciprocal pronouns: each other, one another

Personal pronouns refer to specific persons or things and take different forms
to indicate person, number, gender, and case. Syntactically, each pronoun is
projected into a saturated NP without complements or specifiers:
(15)   NP
       [SPR ⟨ ⟩, COMPS ⟨ ⟩]
        |
       N
       [SPR ⟨ ⟩, COMPS ⟨ ⟩]
        |
       you

Pronouns participate in agreement relations with their antecedents, the phrases to which they are understood to be referring (indicated by the underlined parts of the examples in (16)):

(16) a. President Lincoln delivered his/*her Gettysburg Address in 1863.


b. After reading the pamphlet, Judy threw it/*them into the garbage can.
c. I got worried when the neighbors let their/*his dogs out.

Reflexive pronouns are special forms which are typically used to indicate a
reflexive activity or action, which can include mental activities:

(17) a. I asked myself: why isn’t he here?


b. Edward usually remembered to send a copy of his email to himself.

As noted earlier, these personal or reflexive pronouns neither take a determiner nor combine with an adjective except in very restricted constructions.4

4 These restricted constructions can involve some indefinite pronouns (e.g., a little something, a
certain someone).

6.2.3 Proper Nouns


Because proper nouns usually refer to something or someone
unique, they do not normally take a plural form and cannot occur with a
determiner:

(18) a. Kim, Laura, Seoul, January . . .


b. *a Kim, *a Laura, *a Seoul, *a January . . .

In this sense, proper nouns are just like pronouns in being projected into an
NP with no complement or specifier. However, proper nouns can be converted
into countable nouns when they refer to a particular individual or type of
individual:

(19) a. No John Smiths attended the meeting.


b. This John Smith lives in Seoul.
c. There are three Hannahs in my class.
d. It’s nothing like the America I remember.
e. She doesn’t come across in the same manner as a Hillary Clinton.

In such cases, proper nouns are converted into common nouns, may select a
specifier, and take other nominal modifiers. This means that a proper noun will
have a lexical entry like (20a) but can be related to one like (20b):5

(20) a. [prpn,
         FORM John Smith,
         SYN [HEAD | POS noun,
              VAL [SPR ⟨ ⟩, COMPS ⟨ ⟩]]]

     b. [cn-prpn,
         FORM John Smith,
         SYN [HEAD | POS noun,
              VAL [SPR ⟨DP⟩, COMPS ⟨ ⟩]]]

(20a) specifies that the proper noun John Smith does not require any
specifier or complement. But (20b) says that the proper noun, converted
into a common noun, combines with a specifier, as represented in the
following:

5 Once again, the italic part at the top of the feature structure denotes the type of the expression described. For example, prpn here means proper noun, and cn-prpn means a common noun derived from a proper noun.

(21)          NP
            /     \
          DP       N
         this   John Smith

6.3 Agreement Types and Morphosyntactic Features

6.3.1 Noun-Determiner Agreement


Common nouns in English participate in three types of agreement.
First, they are involved in determiner-noun agreement. All countable nouns are
used in either the singular or plural form. When they combine with a determiner,
there must be an agreement relationship between the two:
(22) a. this book/that book
b. *this books/*that books/these books/those books

These data in turn mean that the head noun's number value must be identical to that of its specifier, leading us to revise the HEAD-SPECIFIER CONSTRUCTION:

(23) HEAD-SPECIFIER CONSTRUCTION:
     XP → Spr[AGR 1], H[AGR 1]

This revised rule, specified with the agreement (AGR) feature, guarantees
that English head-specifier phrases require their head and specifier to share
agreement features including the attribute NUM (number).
(24) a. [FORM a,
         SYN [HEAD [POS det, AGR | NUM sing],
              VAL [SPR ⟨ ⟩, COMPS ⟨ ⟩]]]

     b. [FORM book,
         SYN [HEAD [POS noun, AGR | NUM sing],
              VAL [SPR ⟨DP[NUM sing]⟩, COMPS ⟨ ⟩]]]

Common nouns thus impose a specific NUM value on the specifier:6

(25)         NP[AGR 1]
            /         \
     DP[AGR 1]       N[AGR 1 [NUM sing]]
         a              book

The singular noun book selects a singular determiner like a as its specifier, forming a head-specifier construct. The head and its specifier share the AGR value, satisfying the constructional constraint. Notice that the AGR value on the head noun book is passed up to the whole NP, marking the whole NP as singular, so that it can combine with a singular VP if it is the subject.
In addition, there is nothing preventing a singular noun from combining with
a determiner that is not specified at all for a NUM value:

(26) a. *those book, *these book . . .


b. no book, the book, my book . . .

Determiners like the, no, and my are not specified for a NUM value. Formally,
their NUM value is underspecified as num(ber). That is, the grammar of English
has the underspecified value num for the feature NUM, with two subtypes,
sing(ular) and pl(ural):

(27)        num
           /    \
        sing     pl

Given this hierarchy, nouns like book requiring a singular Det can combine with
determiners like the whose AGR value is num. This is in accord with the grammar,
since the value num is a supertype of sing. The same explanation can be applied
to the phrases whose books and whose book, in which whose is underspecified
for the AGR’s number value.
6 Keen readers may have noticed that we allow the combination of N with the specifier DP. Nothing blocks the head noun from combining with its specifier directly via the HEAD-SPECIFIER CONSTRUCTION.
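The effect of this underspecification can also be sketched computationally. The following Python fragment is only an informal illustration, not part of the grammar: the subtype table stands in for the hierarchy in (27), the overlap test stands in for type unification, and the determiner entries are a simplified mini-lexicon.

```python
# A minimal sketch of determiner-noun agreement with the underspecified
# number value num: sing and pl are the subtypes of num, so a determiner
# whose NUM value is num is compatible with both singular and plural nouns.

SUBTYPES = {"num": {"sing", "pl"}, "sing": {"sing"}, "pl": {"pl"}}

def num_compatible(a, b):
    """Two NUM values unify if their sets of maximal subtypes overlap."""
    return bool(SUBTYPES[a] & SUBTYPES[b])

# Hypothetical mini-lexicon of determiners.
DETS = {"this": "sing", "these": "pl", "the": "num", "no": "num", "my": "num"}

def head_specifier_ok(det, noun_num):
    return num_compatible(DETS[det], noun_num)

print(head_specifier_ok("this", "sing"))    # this book
print(head_specifier_ok("these", "sing"))   # *these book
print(head_specifier_ok("the", "sing"))     # the book
print(head_specifier_ok("the", "pl"))       # the books
```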

6.3.2 Pronoun-Antecedent Agreement


As noted earlier, a second type of agreement is pronoun-antecedent
agreement, as indicated in (28):
(28) a. If John wants to succeed in corporate life, he/*she has to know the rules of
the game.
b. The critique of Plato’s Republic was written from a contemporary point
of view. It was an in-depth analysis of Plato’s opinions about possible
governmental forms.

The pronoun he or it here needs to agree with its antecedent not only in number but also in person (1st, 2nd, or 3rd) and gender (masculine, feminine, or neuter). This shows us that nouns also have information about person, number, and gender in their AGR values:
(29) a. [FORM book,
         SYN [HEAD [POS noun,
                    AGR [PER 3rd, NUM sing, GEND neut]],
              VAL [SPR ⟨DP[NUM sing]⟩, COMPS ⟨ ⟩]]]

     b. [FORM he,
         SYN [HEAD [POS noun,
                    AGR [PER 3rd, NUM sing, GEND masc]],
              VAL [SPR ⟨ ⟩, COMPS ⟨ ⟩]]]

As we have briefly shown, nouns have NUM (number), PER (person), and GEND (gender) for their AGR values. The PER value can be 1st, 2nd, or 3rd; the GEND value can be masc(uline), fem(inine), or neut(er). The NUM values are shown in (27).
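This three-way matching can be sketched in Python. The fragment below is an informal illustration only: the dictionaries are simplified stand-ins for the AGR feature structures just given, and None marks an unspecified value.

```python
# A sketch of pronoun-antecedent agreement over the PER, NUM, and GEND
# values: a pronoun is an acceptable anaphor only if every feature it
# specifies matches the corresponding feature of its antecedent.

PRONOUNS = {
    "he":   {"PER": "3rd", "NUM": "sing", "GEND": "masc"},
    "she":  {"PER": "3rd", "NUM": "sing", "GEND": "fem"},
    "it":   {"PER": "3rd", "NUM": "sing", "GEND": "neut"},
    "they": {"PER": "3rd", "NUM": "pl",   "GEND": None},  # gender unspecified
}

def agrees(pronoun, antecedent):
    feats = PRONOUNS[pronoun]
    return all(feats[f] is None or feats[f] == antecedent[f]
               for f in ("PER", "NUM", "GEND"))

john     = {"PER": "3rd", "NUM": "sing", "GEND": "masc"}
pamphlet = {"PER": "3rd", "NUM": "sing", "GEND": "neut"}
print(agrees("he", john))       # John ... he
print(agrees("she", john))      # *John ... she
print(agrees("it", pamphlet))   # the pamphlet ... it
```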

6.3.3 Subject-Verb Agreement


The third type of agreement is subject-verb agreement, which is one
of the most important phenomena in English syntax. Let us look at some slightly
complex examples:
(30) a. The characters in Shakespeare’s Twelfth Night *lives/live in a world that has
been turned upside-down.

b. Students studying English read/*reads Conrad's Heart of Darkness while at university.
As we can see here, the subject and the verb need to have an identical number
value, and the person value is also involved in agreement relations, in particular
when the subject is a personal pronoun:
(31) a. You are/*is the only person that I can rely on.
b. He is/*are the only person that I can rely on.
These facts show us that a verb lexically specifies information about the number and person values of the subject that it requires.
To show how the agreement system works, we will use some simpler
examples:
(32) a. The boy swims/*swim.
b. The boys swim/*swims.
English verbs will have at least the following selectional information:
(33) [FORM swims,
      SYN [HEAD [POS verb, VFORM es],
           VAL | SPR ⟨1 NP⟩],
      ARG-ST ⟨1 NP[PER 3rd, NUM sing]⟩]

The present-tense verb swims selects one argument, which is realized as the
subject bearing the 3rd singular AGR information. This lexical information will
license a structure like the following:
(34)              S
                /    \
       NP[AGR 1]      VP[AGR 1 [PER 3rd, NUM sing]]
        the boy         |
                       V[SPR ⟨NP[AGR 1]⟩]
                        swims

The verb itself carries the third singular agreement features, passing these features up to the VP level. These agreement features are identical with those of the subject NP the boy, satisfying the HEAD-SPECIFIER CONSTRUCTION. In other words, if this verb were to combine with a subject that has an incompatible agreement value, we would create an ungrammatical sentence like *The boys swims in (32b). In this system, subject-verb agreement is structure sharing between the AGR value of the subject (the SPR value of the verb) and that of the NP with which the VP combines.
The acute reader may have noticed that there are similarities between noun-determiner agreement and subject-verb agreement – that is, in the way that agreement works inside NP and inside S. Both NP and S require agreement between the head and the specifier, as reflected in the revised HEAD-SPECIFIER CONSTRUCTION in (23).
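The selectional mechanism just described can be sketched in Python as well. This is an informal illustration, not the formalism itself: each verb entry below records the AGR features its SPR (subject) must bear, and the entries are our simplification (for instance, "swim" is reduced here to a plural-subject verb, setting aside 1st and 2nd person singular subjects).

```python
# A sketch of subject-verb agreement as structure sharing: a sentence is
# licensed only if the subject's AGR features satisfy every requirement
# the verb places on its SPR value.

VERBS = {
    "swims": {"PER": "3rd", "NUM": "sing"},
    "swim":  {"NUM": "pl"},   # simplified: plural subjects only
}

def subject_verb_ok(subject_agr, verb):
    required = VERBS[verb]
    return all(subject_agr.get(f) == v for f, v in required.items())

the_boy  = {"PER": "3rd", "NUM": "sing"}
the_boys = {"PER": "3rd", "NUM": "pl"}
print(subject_verb_ok(the_boy, "swims"))    # The boy swims.
print(subject_verb_ok(the_boys, "swims"))   # *The boys swims.
print(subject_verb_ok(the_boys, "swim"))    # The boys swim.
```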

6.4 Semantic Agreement Features

What we have seen so far is that the morphosyntactic AGR values of a noun or verb can be specified and may be inherited by phrases built out of them. However, consider now the following examples, adapted from Nunberg (1995):
(35) a. [The hash browns at table nine] are/*is getting cold.
b. [The hash browns at table nine] is/*are getting angry.

When (35b) is spoken by a waiter to another waiter, the subject refers to a person
who ordered hash browns.7 A somewhat similar case is found in (36):
(36) King prawns cooked in chili salt and pepper was very much better, a simple
dish succulently executed.

Here the verb form was is singular in agreement with the dish being referred
to, rather than with a plurality of prawns. If we were simply to assume that the
subject phrase inherits the morphosyntactic agreement features of the head noun
(hash) browns in (35b) and (King) prawns in (36) and requires that these features match those of the verb, we would not expect the singular verb form to be
possible at all in these examples. In the interpretation of a nominal expression,
that expression must be anchored to an individual in the situation described. We
call this anchoring value the noun phrase’s ‘index’ value. The index of hash
browns in (35a) must be anchored to the plural entities on the plate, whereas
that of hash browns in (35b) must be anchored to a customer who ordered the
food.
The lesson here is that English agreement is not purely morphosyntactic but
context-dependent in various ways – a context-dependency we represent via the
7 Such an example illustrates a reference transfer or a metonymic use of language (see Nun-
berg, 1995 and Pollard and Sag, 1994).

notion of ‘index’ that we have just introduced. Often what a given nominal refers to in the real world is important for agreement – index agreement. Index agreement involves sharing of referential indexes, closely related to the semantics of a nominal and somewhat separate from the syntactic agreement feature AGR. This then requires us to distinguish the morphological AGR value from the semantic (SEM) IND (index) value. So, in addition to the morphological AGR value introduced above, each noun will also have a semantic IND value representing what the noun refers to in the actual world:8
(37) a. [FORM boy,
         SYN | HEAD [POS noun, AGR | NUM sing],
         SEM | IND | NUM sing]

     b. [FORM boys,
         SYN | HEAD [POS noun, AGR | NUM pl],
         SEM | IND | NUM pl]

The lexical entry for boy indicates that it is syntactically a singular noun (through
the feature AGR) and semantically also denotes a singular entity (through the
feature IND). And the verb will place a restriction on its subject’s IND value
rather than its morphological AGR value:9
(38) [FORM swims,
      SYN [HEAD [POS verb, AGR | NUM sing],
           VAL | SPR ⟨NP[IND | NUM sing]⟩],
      SEM | IND s0]

The lexical entry for swims here indicates that it is morphologically marked as
singular (the AGR feature) and selects a subject to be linked to a singular entity
in the context (by the feature IND). Distinct from the IND value of nouns, the
verb’s IND value is a situation index (s0). The situation referred to here is that
the individual indexed by the SPR value is performing the action of swimming.
If the referent of this subject (its IND value) did not match, the result would be
an ungrammatical sentence like *The boys swims:

8 See Wechsler (2013) for a similar analysis in which the morphosyntactic AGR feature is named
CONCORD .
9 The IND value of a noun will be an individual index (i, j, k, etc.), whereas that of a verb or
predicative adjective will be a situation index such as s0 , s1 , s2 , etc.

(39)  *S
      /                  \
  NP[IND j [NUM pl]]      VP
    the boys               |
                          V[SPR ⟨NP[IND i [NUM sing]]⟩]
                           swims

As we can observe, the required subject has the IND value i, but the subject in
(39) has a different IND value j.
In the prototypical cases, the AGR and IND values are identical, but they can be
different, as in examples like (35b). This means that, depending on the context,
hash browns can have different IND values:10
(40) a. [FORM hash browns,
         SYN | HEAD [POS noun, AGR | NUM pl],
         SEM | IND | NUM pl]         (when referring to the food itself)

     b. [FORM hash browns,
         SYN | HEAD [POS noun, AGR | NUM pl],
         SEM | IND | NUM sing]       (when referring to a customer or to a dish)

In the lexical entry (40b), the AGR’s NUM value is plural but its IND’s NUM value
is singular. As shown by (35), the reference of hash browns can be transferred from cooked potatoes to the customer who ordered them. This means that, given an
appropriate context, there could be a mismatch between the morphological form
of a noun and the index value of the noun.
What this indicates is that subject-verb agreement and noun-specifier agreement are different. In fact, English determiner-noun agreement is merely a reflection of morphosyntactic agreement features between determiner and noun, whereas subject-verb (like pronoun-antecedent) agreement is index-based agreement. This is represented in (41):
10 As indicated here, the lexical expression now has two features: SYN (syntax) and SEM (semantics). The feature SYN includes HEAD as well as SPR and COMPS. The feature SEM is for semantic information, and will be further described in what follows.

(41) Morphosyntactic agreement (AGR): determiner-noun agreement
     Index agreement (IND): subject-verb and pronoun-antecedent agreement

Such agreement patterns can be found in examples like the following, where
the underlined parts have singular agreement with four pounds, which is formally
plural:
(42) [Four pounds] was quite a bit of money in 1950 and it was not easy to come
by.

Given the separation of the morphological AGR value and the semantic
IND value, nothing blocks mismatches between the two (AGR and IND ) as
long as all other constraints are satisfied. Observe further examples in the
following:
(43) a. [Five pounds] is/*are a lot of money.
b. [Two drops] deodorizes/*deodorize anything in your house.
c. [Fifteen dollars] in a week is/*are not much.
d. [Fifteen years] represents/*represent a long period of his life.
e. [Two miles] is/*are as far as they can walk.

In all of these examples with measure nouns, the plural subject combines with
a singular verb. An apparent conflict arises from the agreement features of the
head noun. For proper agreement inside the noun phrase, the head noun has to
be plural, but for subject-verb agreement the noun has to be singular. Consider
the example in (43a). The noun pounds is morphologically plural and thus must
select a plural determiner, as argued so far. But when these nouns are anchored to
the group as a whole – that is, conceptualized as referring to a single measure –
the index value has to be singular, as represented in (44).
(44) [FORM pounds,
      SYN [HEAD [POS noun, AGR 1 [NUM pl]],
           VAL | SPR ⟨DP[AGR 1]⟩],
      SEM | IND | NUM sing]

As indicated in the lexical entry (44), the morphosyntactic number value of pounds is plural, whereas the index value is singular. In the present analysis, this would mean that pounds will combine with a plural determiner but with a singular verb. This is possible, as noted earlier in Section 2, since the index value is anchored to a singular individual in the context of utterance. The present analysis thus generates the following structure for (43a):

(45)                 S
                   /    \
       NP[IND | NUM sing]     VP
         /          \         is a lot of money
   DP[AGR 1]      N[AGR 1 [NUM pl]]
      five           pounds

The analysis takes determiner-head agreement to be morphosyntactic agreement in the HEAD-SPECIFIER CONSTRUCTION, and so the DP five need only refer to the AGR value of its sister N pounds. This way of looking at English agreement will enable us to account for the following:
(46) a. *These dollars is what I want to donate to the institute.
b. *These pounds is a lot of money.

There is nothing wrong in forming these dollars or these pounds, since dollars
and pounds can combine with a plural DP (or determiner). The issue is the agree-
ment between the subject these dollars and the verb is. Unlike five dollars or five
pounds, these dollars and these pounds are semantically not taken to refer to a
single unit: They always refer to plural entities. Thus no mismatch is allowed in
these examples.
However, a similar mismatch between subject and verb is also found in cases
with terms for social organizations or collections, as in the following attested
examples:
(47) a. [This/*these government] has/*have broken its promises.
b. [This/*these government] have/*has broken their promises.

(48) a. [This/*these England team] have/*has put themselves in a good position to win the championship.
     b. [This/*these England team] *have/has put itself in a good position to win the championship.

The head noun government or team is singular, so it can combine with the singular determiner this. But the surprising fact is that the singular noun phrase can combine with a plural verb have as well as with a singular verb has. This is possible because the index value of the subject can be anchored either to a
singular entity or a plural one. More precisely, we can represent the relevant
information in the expressions participating in these agreement relationships, as
in (49).
(49) a. [FORM this,
         SYN | HEAD [POS det, AGR | NUM sing]]

     b. [FORM team/government,
         SYN | HEAD [POS noun, AGR | NUM sing],
         SEM | IND | NUM pl]

As represented in (49a) and (49b), this and government agree with each other in
terms of the morphosyntactic agreement number value, whereas the index value
of government is what matters for subject-verb agreement. This in turn means
that when government refers to the individuals in a government, the whole NP
this government carries a plural index value.
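The division of labor between AGR and IND can be summed up in a short Python sketch. The fragment is an informal illustration only: the lexical entries are hypothetical stand-ins for the feature structures given above, and the two checking functions stand in for the two agreement mechanisms.

```python
# A sketch of the AGR/IND split: determiner-noun agreement consults the
# morphosyntactic AGR value, while subject-verb agreement consults the
# semantic IND value, so mismatches between the two are possible.

hash_browns_food     = {"AGR": "pl",   "IND": "pl"}    # the food itself
hash_browns_customer = {"AGR": "pl",   "IND": "sing"}  # reference transfer
five_pounds          = {"AGR": "pl",   "IND": "sing"}  # a single sum of money
this_government      = {"AGR": "sing", "IND": "pl"}    # the members collectively

def det_noun_ok(det_num, noun):      # morphosyntactic agreement
    return det_num == noun["AGR"]

def subj_verb_ok(noun, verb_num):    # index-based agreement
    return verb_num == noun["IND"]

print(subj_verb_ok(hash_browns_food, "pl"))        # ... are getting cold
print(subj_verb_ok(hash_browns_customer, "sing"))  # ... is getting angry
# five pounds: plural inside the NP, but singular with the verb
print(det_noun_ok("pl", five_pounds), subj_verb_ok(five_pounds, "sing"))
# this government ... have broken their promises
print(det_noun_ok("sing", this_government), subj_verb_ok(this_government, "pl"))
```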

6.5 Partitive NPs and Agreement

6.5.1 Basic Properties


With regard to the NP-internal elements that display agreement, there
are two main types of NP in English: simple NPs and partitive NPs, shown in (50)
and (51) respectively:
(50) a. some objections
b. most students
c. all students
d. much worry
e. many students
f. neither cars

(51) a. some of the objections


b. most of the students
c. all of the students
d. much of her worry
e. many of the students
f. neither of the cars

As shown in (51), partitive phrases have a quantifier followed by an of-phrase and designate a set from which certain individuals are extracted. In terms of semantics, these partitive NPs are different from simple NPs in several respects.

First, the lower NP in partitive phrases must be definite, and no quantificational NP is allowed in the of-phrase, as shown in (52):
(52) a. each student vs. each of the students vs. *each of students
b. some problems vs. some of the problems vs. *some of many problems

Second, not all determiners with quantificational force can appear in partitive
constructions. As shown in (53), determiners such as the, every, and no cannot
occupy the first position:
(53) a. *the of the students vs. the students
b. *every of his ideas vs. every idea
c. *no of your books vs. no book(s)

Third, simple NPs and partitive NPs have different restrictions relative to the
semantic head. Observe the contrast between (54) and (55):
(54) a. She doesn’t believe much of that story.
b. We listened to as little of his speech as possible.
c. How much of the fresco did the flood damage?
d. I read some of the book.
(55) a. *She doesn’t believe much story.
b. *We listened to as little speech as possible.
c. *How much fresco did the flood damage?
d. *I read some book.

The partitive constructions in (54) allow a mass (noncount) quantifier such as much, little, and some to cooccur with a lower of-NP containing a singular count noun. But as we can see in (55), the same elements serving as determiners cannot directly precede such nouns.
Another difference concerns lexical idiosyncrasies:
(56) a. One of the people was dying of thirst.
b. Many of the people were dying of thirst.
(57) a. *One people was dying of thirst.
b. Many people were dying of thirst.

The partitives can be headed by quantifiers like one and many, as shown in (56)
and (57), but, unlike many, one cannot serve as a determiner when the head noun
is collective, as in (57a).

6.5.2 Two Types of Partitive NPs


We classify partitive NPs into two types based on the agreement facts and call them Type I and Type II. In Type I, the number value of the partitive phrase depends on the preceding head noun, whereas in Type II, the number value depends on the head noun inside the of-NP phrase. Observe Type I examples:

(58) Type I:
a. Each of the suggestions is acceptable.
b. Neither of the cars has air conditioning.
c. None of these men wants to be president.
d. Many of the students can speak French or German.

We can observe here that the verb’s number value is determined by the preceding
expression each, neither, and none. Now let us contrast Type II:
(59) Type II:
a. Most of the fruit is rotten.
b. Most of the children are here.
c. Some of the soup needs more salt.
d. Some of the diners need menus.
e. All of the land belongs to the government.
f. All of these cars belong to me.

As shown in (59), when the NP following the preposition of is singular or uncountable, the main verb is singular. When the NP is plural, the verb is also plural. From a semantic perspective, we see that the class of quantificational indefinite pronouns including some, half, most, and all may combine with either singular or plural verbs, depending on the reference of the of-NP phrase. If the meaning of these phrases is about how much of something is meant, the verb is singular, but if the meaning is about how many of something is meant, the verb is plural. The expressions in (60) also exhibit similar behavior with respect to agreement:
(60) half of, part of, the majority of, the rest of, two-thirds of, a number of (but
not the number of )

An effective way of capturing the relations between Type I and Type II constructions involves the lexical properties of the quantifiers. First, Type I and Type II involve pronominal forms serving as the head of the construction, which select an of-NP inside which the NP is definite:
(61) a. *neither of students, *some of water
b. neither of the two linguists/some of the water

However, we know that the two types are different in terms of agreement: Pronouns like neither in the Type I construction are lexically specified to be singular, whereas the number value for Type II comes from inside the selected PP.11
A slight digression is in order. It is easy to see that there are prepositions
whose functions are just grammatical markers:
(62) a. John is in the room.
b. I am fond of him.

11 Pronouns like many in Type I are specified to be plural.



The predicative preposition in here selects two arguments, John and the room.
By contrast, the preposition of has no predicative meaning but simply functions as a marker introducing the argument of fond. PPs headed by such markers, as in the partitive construction, have semantic features identical to those of the prepositional
object NP. This means that the PP of him receives its semantic features from the
NP him.
Given this analysis, in which the PP in the partitive construction shares AGR
and semantic features (e.g., DEF: definite) with its inner NP, we can lexically
encode the similarities and differences between Type I and Type II in a simple
manner:

(63) a. [FORM neither
         SYN [HEAD [POS noun, AGR|NUM sing]
              VAL|COMPS ⟨ PP[PFORM of, DEF +] ⟩]]

     b. [FORM some
         SYN [HEAD [POS noun, AGR|NUM [1]]
              VAL|COMPS ⟨ PP[PFORM of, DEF +, AGR|NUM [1]] ⟩]]

The lexical entries in (63) show that both Type I neither and Type II some are specified to require a PP complement whose semantic value includes a definiteness (DEF) feature with the value ‘+’. This accounts for the contrast in (61). However, the two types differ with respect to the NUM value: The NUM value of Type I neither is singular, whereas that of Type II is identified with the PP’s NUM value, which in turn comes from the prepositional object NP. These differences are reflected in the alternative syntactic structures in (64):12

12 The arrows here are for expositional purposes and are not intended to indicate a direction of
feature copying or movement: The relevant features linked by the arrow are simply required to
have the same values.
154 N O U N P H R A S E S A N D AG R E E M E N T

(64) [tree diagrams with arrows linking the shared feature values: in the Type I structure, neither fixes the NP’s NUM value; in the Type II structure, the inner NP’s NUM value is shared up through the PP]

As shown in (64a), for Type I, it is neither which determines the NUM value
of the whole NP phrase. However, for Type II, it is the NP the students which
determines the NUM value of the whole NP.
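The two resolution patterns can be sketched as a toy computation. The following Python snippet is purely illustrative: the dictionary encoding and the function partitive_num are our own invention, not part of the grammar formalism.

```python
# Toy model of partitive NP agreement (hypothetical encoding, not the book's formalism).
# Type I heads fix their own NUM value; Type II heads copy NUM from the inner NP.

LEXICON = {
    "neither": {"type": "I", "num": "sing"},   # lexically singular
    "each":    {"type": "I", "num": "sing"},
    "many":    {"type": "I", "num": "pl"},     # lexically plural (cf. fn. 11)
    "some":    {"type": "II"},                 # NUM shared with the of-NP
    "most":    {"type": "II"},
    "all":     {"type": "II"},
}

def partitive_num(head, inner_np_num):
    """Return the NUM value of [head of the NP] given the inner NP's NUM value."""
    entry = LEXICON[head]
    if entry["type"] == "I":
        return entry["num"]          # Type I: the head determines NUM
    return inner_np_num              # Type II: NUM comes from inside the PP

print(partitive_num("neither", "pl"))   # 'sing' -> neither of the students IS ...
print(partitive_num("most", "pl"))      # 'pl'   -> most of the children ARE ...
print(partitive_num("most", "sing"))    # 'sing' -> most of the fruit IS ...
```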
We can check a few of the consequences of these different specifications in
the two types. Consider the contrast in (65):
(65) a. many of the/those/her apples
b. *many of some/all/no apples

(65b) is ungrammatical, since many requires an of -PP whose DEF value is positive.
This system also offers a simple way of dealing with the fact that quantifiers
like each affect the NUM value as well as the countability of the of -NP phrase.
One difference between Type I and Type II is that Type I selects a plural of -
NP phrase when the head noun is one, each, or neither. Meanwhile, Type II in
general has no such restriction. This is illustrated in (66) and (67):
(66) Type I:
a. one of the suggestions/*the suggestion/*his advice
b. each of the suggestions/*the suggestion/*his advice
c. neither of the students/*the student/*his advice

(67) Type II:


a. some of his advice/students
b. most of his advice/students
c. all of his advice/students

The only additional specification we need for Type I pronouns relates to the NUM
value on the PP’s complement, as given in (68):
(68) [FORM each
      SYN [HEAD [POS noun, AGR|NUM sing]
           VAL|COMPS ⟨ PP[PFORM of, DEF +, NUM pl] ⟩]]

We see that quantifiers like each select a PP complement whose NUM value is
plural.
Type II pronouns do not place such a requirement on the PP complement: Note
that all the examples in (69) are acceptable, in contrast to those in (70):13
(69) a. Most of John’s boat has been repainted.
b. Some of the record contains evidence of wrongdoing.
c. Much of that theory is unfounded. (Data from Baker, 1995.)

(70) a. *Each of John’s boat has been repainted.


b. *Many of the record contained evidence of wrongdoing.
c. *One of the story has appeared in your newspaper.

The contrast here indicates that Type II pronouns can combine with a PP whose
daughter NP is singular. This is simply predicted because our analysis allows the
inner NP to be either plural or singular (or uncountable).
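The definiteness and number requirements just discussed can be stated as a small well-formedness check. Again, this is a hypothetical sketch; the feature names follow the text, but the encoding and the function licenses are ours.

```python
# Toy licensing check for partitive NPs (hypothetical sketch, not the book's formalism).
# Every partitive head wants a definite of-NP; Type I heads like 'one', 'each',
# and 'neither' additionally require that inner NP to be plural (cf. (66)-(67)).

REQUIRES_PLURAL = {"one", "each", "neither"}   # the Type I restriction from (66)

def licenses(head, inner_np):
    """inner_np is a dict with DEF (bool) and NUM ('sing', 'pl', or None for mass)."""
    if not inner_np["DEF"]:                    # *each of students, *some of water
        return False
    if head in REQUIRES_PLURAL and inner_np["NUM"] != "pl":
        return False                           # *each of the suggestion
    return True                                # Type II heads accept either NUM

print(licenses("each", {"DEF": True, "NUM": "pl"}))     # True:  each of the stones
print(licenses("each", {"DEF": True, "NUM": "sing"}))   # False: *each of the stone
print(licenses("most", {"DEF": True, "NUM": "sing"}))   # True:  most of the fruit
print(licenses("some", {"DEF": False, "NUM": None}))    # False: *some of water
```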
We are also in a position now to understand some differences between simple
NPs and partitive NPs. Consider the following examples:
(71) a. many dogs/*much dog/the dogs
b. much furniture/*many furniture/the furniture

(72) a. few dogs/*few dog/*little dogs/*little dog


b. little furniture/*little furnitures/*few furniture/*few furnitures

The data here indicate that, in addition to the agreement features we have seen
so far, common nouns also place a restriction on the countability value of the
selected specifier. Specifically, a countable noun selects a countable determiner
as its specifier (Sag et al., 2003).14 To capture this agreement restriction, we can
introduce a new feature, COUNT (countable):

13 Examples like Much of the savings came from employee concession indicate that much belongs
to Type II.
14 We cannot use the NUM feature here, since mass nouns like furniture are neither singular nor
plural. We also cannot take much to be unspecified with the NUM value, since it can combine
only with a mass noun, different from determiners like the or his.

(73) a. [FORM dogs
         SYN [HEAD|POS noun, VAL|SPR ⟨ DP[COUNT +] ⟩]]
     b. [FORM furniture
         SYN [HEAD|POS noun, VAL|SPR ⟨ DP[COUNT –] ⟩]]

The lexical specification of a countable noun like dogs requires its specifier to
be [COUNT +] to prevent formations like *much dogs. This in turn means that
determiners must also carry the feature COUNT:
(74) a. [FORM many, SYN|HEAD [POS det, COUNT +]]
     b. [FORM the, SYN|HEAD [POS det, COUNT boolean]]
     c. [FORM little, SYN|HEAD [POS det, COUNT –]]

The determiner many bears the positive COUNT value, while little carries the neg-
ative COUNT value. However, the value of the feature COUNT for the expression
the can be either positive or negative. Note here that the feature COUNT is not
an agreement feature but a semantic feature assigned only to determiners. Thus,
the cooccurrence restriction of count and mass nouns with certain determiners is
not captured as agreement but ensured by a VAL requirement of the count/mass
nouns.
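The COUNT-based specifier restriction can be sketched as follows. This is a toy model of the entries in (73)-(74); the dictionaries and the helper spr_ok are our own illustrative inventions.

```python
# Toy specifier check using the COUNT feature (hypothetical encoding of (73)-(74)).
# A count noun demands a [COUNT +] determiner, a mass noun demands [COUNT -],
# and 'the' is unspecified (boolean) and so compatible with both.

DET_COUNT = {"many": True, "few": True, "much": False, "little": False, "the": None}
NOUN_COUNT = {"dogs": True, "furniture": False}

def spr_ok(det, noun):
    """A determiner satisfies the noun's SPR requirement if its COUNT value
    is unspecified or matches the noun's countability."""
    d = DET_COUNT[det]
    return d is None or d == NOUN_COUNT[noun]

print(spr_ok("many", "dogs"))        # True:  many dogs
print(spr_ok("much", "dogs"))        # False: *much dogs
print(spr_ok("little", "furniture")) # True:  little furniture
print(spr_ok("the", "furniture"))    # True:  the furniture
```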
Now consider the following contrasts:

(75) a. much advice vs. *many advice


b. *much story vs. many stories

(76) a. much of the advice vs. *many of the advice


b. much of the story vs. many of the stories

Due to the feature COUNT, we understand now the contrast between much advice
and *many advice or the contrast between *much story and many stories. The
facts in partitive structures are slightly different, as (76) shows, but the patterns
in the data directly follow from these lexical entries:
(77) a. [FORM many
         SYN [HEAD|POS noun
              VAL|COMPS ⟨ PP[PFORM of, NUM pl, DEF +] ⟩]]
     b. [FORM much
         SYN [HEAD|POS noun
              VAL|COMPS ⟨ PP[PFORM of, NUM sing, DEF +] ⟩]]

The pronoun many requires a PP complement whose inner NP is plural, whereas much does not.

6.5.3 Measure Noun Phrases


There are also so-called ‘measure noun phrase’ constructions, which
are similar to partitive constructions. Consider the following contrasts:
(78) a. one pound of those beans
b. three feet of that wire
c. a quart of Bob’s cider

(79) a. one pound of beans


b. three feet of wire
c. a quart of cider

Notice here that the examples in (78) are a kind of partitive construction, whereas those in (79) simply measure a quantity of what the NP after of denotes. As the examples show, measure noun phrases do not require a definite article, unlike the true partitive constructions repeated here:
(80) *many of beans, *some of wire, *much of cider, *none of yogurt, *one of
strawberries

There are several more differences between partitive and measure noun
phrases. For example, measure nouns cannot occur in simple noun phrases. They
obligatorily require an of -NP phrase:
(81) a. *one pound beans vs. one pound of beans
b. *three feet wire vs. three feet of wire
c. *a quart cider vs. a quart of cider

Further, unlike partitive constructions, measure noun phrases require a numeral (or a certain determiner) as their specifier:
(82) a. *one many of the books, *several much of the beer
b. one pound of beans, three feet of wire

As noted here, many or much in the partitive constructions cannot combine with
numerals like one or several; by contrast, measure nouns pound and feet need to
combine with a numeral like one or three.
Further complications arise owing to the existence of defective measure noun
phrases. Consider the following examples:

(83) a. *a can tomatoes/a can of tomatoes/one can of tomatoes


b. a few suggestions/*a few of suggestions/*one few of suggestions
c. *a lot suggestions/a lot of suggestions/*one lot of suggestions

Expressions like few and lot actually behave quite differently. For instance, it
appears that a few acts like a complex word. The expression lot acts more like a
noun, but, unlike can, it does not allow its specifier to be a numeral.
Regarding agreement, measure noun phrases behave like Type I partitive
constructions:
(84) a. A can of tomatoes is/*are added.
b. Two cans of tomatoes are/*is added.

We can see here that it is the head noun can or cans which determines the NUM
value of the whole NP. The inner NP in the PP does not affect the NUM value at
all. These observations lead us to posit the following lexical entry for a measure
noun:15
(85) [FORM pound
      SYN [HEAD [POS noun, NUM sing]
           VAL [SPR ⟨ DP ⟩, COMPS ⟨ PP[PFORM of] ⟩]]]

That is, a measure noun like pound requires one obligatory SPR and a PP com-
plement. Unlike partitive constructions, there is no definiteness restriction on the
PP complement.
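As a rough illustration, the behavior of measure nouns can be modeled as a small check: they need a numeral-like specifier and an obligatory of-PP, and the measure noun itself (not the inner NP) fixes the NP's NUM value. The function below is a hypothetical sketch of (81)-(85), not part of the formalism.

```python
# Toy sketch of measure-noun requirements (hypothetical encoding of (81)-(85)).

def measure_np(spec, measure_num, of_pp_present):
    """Return the NP's NUM value if well-formed, else None."""
    if spec not in {"one", "two", "three", "a"}:   # *his pound of beans (fn. 15)
        return None
    if not of_pp_present:                          # *one pound beans
        return None
    return measure_num                             # the head measure noun fixes NUM

print(measure_np("one", "sing", True))   # 'sing': one pound of beans is ...
print(measure_np("two", "pl", True))     # 'pl':   two cans of tomatoes are ...
print(measure_np("one", "sing", False))  # None:   *one pound beans
```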

6.6 Modifying an NP

6.6.1 Adjectives as Prenominal Modifiers


Adjectives are expressions commonly used to modify a noun. However, not all adjectives can modify nouns. Even though most adjectives can be used either in a modifying (attributive) function or as a predicate (as in She is tall), certain adjectives are restricted in their usage. Adjectives like alive, asleep,
awake, afraid, ashamed, and aware can be used only predicatively, whereas
others like drunken, golden, main, and mere are only used attributively:
(86) a. He is alive.
b. He is afraid of foxes.
15 To be more specific, not all determiners are allowed here. For example, a possessive determiner
will not function as a specifier of the measure noun, as in *his pound of those beans.

(87) a. It is a golden hair.


b. It is the main street.

(88) a. *It is an alive fish. (cf. living fish)


b. *They are afraid people. (cf. nervous people)

(89) a. *This objection is main. (cf. the main objection)


b. *This fact is key. (cf. a key fact)

Predicative adjectives carry the feature PRD and have a MOD value that is empty
as a default:16
(90) [FORM alive
      SYN|HEAD [POS adj, PRD +, MOD ⟨ ⟩]]

This says that alive is used predicatively and does not have a specification for a
MOD value (the value is empty). This lexical information will prevent predicative
adjectives from also functioning as noun modifiers.17
In contrast to a predicative adjective, a modifying adjective will have the
following lexical entry:
(91) [FORM wooden
      SYN|HEAD [POS adj, MOD ⟨ N′ ⟩]]

This specifies an adjective that modifies any nominal expression (any expression whose POS value is noun). This will license a structure like the following:

(92) [tree diagram: the adjective wooden adjoined as a prenominal modifier to the head nominal desk]

16 All modifiers carry the head feature MOD.


17 In addition, all predicative expressions select one argument, their subject (SPR). This information
is not shown here.

As illustrated here, the prenominal adjective wooden modifies the head nominal phrase (N′) desk.18
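The division of labor between the PRD and MOD features can be illustrated with a toy lookup. This is a hypothetical encoding; the dictionary format and function names are ours, not the book's formalism.

```python
# Toy check of attributive vs. predicative adjective uses (hypothetical
# encoding of the PRD and MOD features in (90)-(91)).

ADJ = {
    "alive": {"PRD": True,  "MOD": []},        # predicative only: empty MOD list
    "main":  {"PRD": False, "MOD": ["noun"]},  # attributive only
    "tall":  {"PRD": True,  "MOD": ["noun"]},  # usable both ways
}

def can_modify_noun(adj):
    """True if the adjective's MOD list targets nominal expressions."""
    return "noun" in ADJ[adj]["MOD"]

def can_be_predicate(adj):
    """True if the adjective carries [PRD +]."""
    return ADJ[adj]["PRD"]

print(can_modify_noun("alive"))    # False: *an alive fish
print(can_be_predicate("alive"))   # True:  He is alive.
print(can_modify_noun("main"))     # True:  the main street
print(can_be_predicate("main"))    # False: *This objection is main.
```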

6.6.2 Postnominal Modifiers


Postnominal modifiers are the same as prenominal modifiers with
respect to what they modify. The only difference is that they follow the
expression they modify. Various phrases can function as such postnominal
modifiers:19
(93) a. [The girl [in the doorway]] waved to her father.
b. [The woman [eager to start the meeting]] is my sister.
c. [The man [holding the bottle]] disappeared.
d. [The papers [removed from the safe]] have not been found.
e. [The money [that you gave me]] disappeared last night.

All these postnominal elements bear the feature MOD. Leaving aside detailed
discussion of the relative clause(-like) modifiers in (93b)–(93e) until Chapter 11,
we can say that example (93a) will have the following structure:20

(94) [tree diagram: the PP in the doorway adjoined as a postnominal modifier to the nominal girl]

These modifiers must modify an N′ but not a complete NP. This claim is con-
sistent with the examples above and with the (ungrammatical) examples in
(95):
(95) a. *John in the doorway waved to his father.
b. *He in the doorway waved to his father.

18 In the present system, a modifier expression can be either a lexical (X) or a phrasal expression
(XP), while the element modified is a phrasal expression.
19 Relative clauses like the boy who was in the doorway are also postnominal modifiers. See
Chapter 11 for details.
20 As noted in Chapter 4, the approach here assumes that the relative linear order of a head, comple-
ments, and modifiers is determined by a combination of general and language-specific ordering
principles. For example, a simple AP modifier will precede its head, whereas a PP or complex
AP modifier will follow the head.

A proper noun or a pronoun projects directly to NP, with no complement or specifier. If it were the case that postnominal PPs could modify any NP, these examples ought to be acceptable.
Note that the postnominal VP also functions to modify the preceding nominal
expression:

(96) [tree diagram: a postnominal VP adjoined as a modifier to the preceding nominal]

The VP here functions as a modifier, adjoined to the nominal structure. A more detailed structure will be discussed in Chapter 11.

6.7 Conclusion

We began this chapter with a discussion of key grammatical properties of three major classes of nouns in English: common nouns, pronouns, and proper
nouns. We saw that the lexical properties of these nouns determine their exter-
nal syntactic structures. The chapter then examined three types of agreement
relationships in English: noun-determiner, pronoun-antecedent, and subject-verb
agreement. We have seen that the agreement relationship between a noun and
its determiner concerns number (NUM) features of the two, while that between a
pronoun and its antecedent involves all three morphosyntactic agreement (AGR)
features: person (PER), number (NUM), and gender (GEND). For its part, the
subject-verb agreement relationship depends not only on morphosyntactic agree-
ment (AGR) features but also on the semantic index (IND) feature. This hybrid
agreement framework offers us a streamlined analysis of mismatches that involve
the respective NUM values of subject and verb.
The analysis developed here was extended to partitive NPs in English. We saw
that partitive NPs can be classified into two different types according to their
agreement properties, and that these differences follow from lexical specifica-
tions of the two types of partitive nouns. The chapter also offered a brief analysis
of measure NPs in English and the structure of prenominal and postnominal
modifiers of a nominal expression.

In the next chapter, we will explore VP structures projected from so-called raising and control verbs. We will once again observe that the feature-structure
system offers an elegant analysis of mismatches between syntactic and semantic
specifications of lexical items, in this case between a verb's repertoire of semantic roles and its syntactic valence.

Exercises

1. Draw a tree structure for each of the following sentences and mark
which expression determines the agreement (AGR) and index values
of the subject NP and the main verb:
a. Neither of these men is worthy to lead Italy.
b. None of his customary excuses suffices Edgar now.
c. One of the problems was the robins.
d. Some of the water from melted snow also goes into the ground
for plants.
e. Most of the milk your baby ingests during breastfeeding is
produced during nursing.
f. One of the major factors affecting the value of diamonds was
their weight.
g. Each of these stones has to be cut and polished.
h. Most of her free time was spent attending concerts.

2. Provide a detailed analysis of the following examples, focusing on subject-verb agreement. In doing so, provide the correct AGR and
IND value of the subject head noun and the main verb:

a. The committee were/*was unanimous in their decision.


b. The committee have/*has all now resigned.
c. The crew have/*has both agreed to change sponsor.
d. Her family are/*is all avid skiers.

3. Compare the following examples and assign an appropriate structure to each. What kind of lexical category can you assign to both and
few? Can you provide arguments for your decisions?
(i) a. Both of the workers will wear carnations.
b. Both the workers will wear carnations.
c. Both workers will wear carnations.
d. Both will wear carnations.
(ii) a. Few of the doctors approve of our remedy.
b. Few doctors approve of our remedy.
c. Few approve of our remedy.

4. While considering the analysis of subject-verb agreement that we have discussed in this chapter, provide the correct VFORM value of

the underlined (uninflected) verb lexeme and identify the noun that
determines this VFORM value:
a. An example of these substances be tobacco.
b. The effectiveness of teaching and learning depend on several
factors.
c. One of the most serious problems that some students have be
lack of motivation.
d. Ten years be a long time to spend in prison.
e. Everyone of us be given a prize.
f. Some of the fruit be going bad.
g. All of his wealth come from real estate investments.
h. Do some of your relatives live nearby?
i. Two ounces of this caviar cost nearly three hundred dollars.
j. Fifty pounds seem like a lot of weight to lose in one year.
k. Half of the year be dark and wintry.
l. Some of the promoters of ostrich meat compare its taste to beef
tenderloin.

5. Consider the following pairs of examples and explain the subject-verb and pronoun-antecedent agreement relationships and how they
affect grammaticality:
(i) a. The committeei hasn’t yet made up itsi /*theiri mind.
b. The committeei haven’t yet made up theiri /*itsi mind.
(ii) a. That dog is so ferocious, and it even tried to bite itself.
b. *That dog is so ferocious, and it even tried to bite himself.

6. Read the following passage and provide detailed lexical entries for
the underlined expressions. For nouns, specify their AGR and IND
values:
When two or more nouns combine, as in computer screen, inter-
net facility, and garden fence, the first noun is said to modify the
second. In a sense, the first noun is playing the role of an adjec-
tive, which is what most people have in mind when we think
about modification, but nouns can do the job equally well. It
is worth mentioning that not every language offers this possi-
bility, but native speakers of English are quite happy to invent
their own combinations of nouns in order to describe things,
events, or ideas they have not come across before; this is partic-
ularly true in the workplace, where we need constantly to refer
to innovations and new concepts.
7 Raising and Control Constructions

7.1 Raising and Control Predicates

As noted in Chapter 5, certain verbs select an infinitival VP as their complement. Compare the following pairs of examples:
(1) a. Lee tried to fix the computer.
b. Lee appeared to fix the computer.
(2) a. Mary persuaded Lee to fix the computer.
b. Mary expected Lee to fix the computer.

At first glance, these pairs are structurally isomorphic in terms of complements: Both try and appear select an infinitival VP, and expect and persuade select an
NP and an infinitival VP. However, there are several significant differences that
motivate two classes, known as control and raising predicates, respectively:
(3) a. Control verbs and adjectives: try, hope, eager, persuade, promise, etc.
b. Raising verbs and adjectives: seem, appear, tend, happen, likely, certain,
believe, expect, etc.

Verbs like try are called ‘control’ or ‘equi’ verbs. The subject of such a verb
is understood to be ‘equivalent’ in some sense to the unexpressed subject of the
infinitival VP. In linguistic terminology, the subject of the verb is said to ‘control’
the referent of the subject of the infinitival complement. Let us consider the ‘deep
structure’ of (1a), representing the unexpressed subject of the VP complement of
tried:1
(4) Lee tried [(for) Lee to fix the computer].

As shown here, in this sentence it is Lee who performs the action of fixing the computer. In the original transformational grammar approach, this proposed deep structure would undergo a rule of 'Equivalent NP Deletion' in which the second NP Lee is deleted to produce the output sentence. This is why such verbs are
referred to as ‘equi-verbs.’
1 Deep structure, linked to surface structure, is a theoretical construct and abstract level of repre-
sentation that is designed to unify several related observed forms and that played an important
role in the theory of Transformational Grammar in the late twentieth century. For example, the
surface structures of both The cat chased the mouse and The mouse was chased by the cat are
derived from an identical deep structure similar to The cat chased the mouse.


By contrast, verbs like seem are called ‘raising’ verbs. Consider the deep
structure of (1b):
(5) ___ appeared [Lee to fix the computer].

In order to derive the 'surface structure' (1b), the subject, Lee, needs to be raised to the matrix subject position, marked by '___'. This transformational analysis is
designed to capture the fact that the subject of appear owes its semantic role to
the downstairs verb (it is the agent of fix) rather than to the main verb, appear.
The verb appear assigns only one semantic role (the situation or state of affairs that 'appears') and, since Lee is not a state of affairs, the nominal expression Lee is not assigned a semantic role by appear. This is why verbs like appear
are called ‘raising’ verbs.
This chapter discusses the similarities and differences between these two types
of verb and shows how we can explain their respective properties in a systematic
way.

7.2 Differences between Raising and Control Verbs

There are many differences between the two classes of verb, which
we present here.

7.2.1 Subject Raising and Control


The semantic role of the subject: One clear difference between rais-
ing and control verbs is the semantic role assigned to the subject. Let us compare
the following examples:
(6) a. John tries to be honest.
b. John seems to be honest.

These might have paraphrases – perhaps awkward – as follows:


(7) a. John makes efforts for himself to be honest.
b. It seems that John is honest.

As suggested by the paraphrase, the one who does the action of trying is John
in (6a). How about (6b)? Is it John who is involved in the situation of ‘seem-
ing’? As represented in the paraphrase (7b), seeming is a property of a situation
(John’s being honest) rather than a property of an individual (John). Owing to
this difference, we say that a control verb like try assigns a semantic role to its
subject (the ‘agent’ role), whereas a raising verb like seem does not assign any
semantic role to its subject (this is what (5) is intended to represent). With raising verbs, there is a mismatch between the number of syntactic arguments (two: the subject NP and the infinitival VP complement) and the number of semantic roles (one: a situation).

Expletive subjects: Since a raising verb does not assign a semantic role to its
subject, certain expressions which do not have a semantic role – or any meaning,
for that matter – may appear in the subject position, provided that the infinitival
VP is of the right kind. Such potential subjects include the expletives it and
there:
(8) a. It tends to be warm in September.
b. It seems to bother Kim that they resigned.

The situation is markedly different with control verbs:


(9) a. *It/*There tries to be warm in September.
b. *It/*There hopes to bother Kim that they resigned.

Since control verbs like try and hope require their subject to have an agent role,
an expletive it or there, which takes no semantic role, cannot function as their
subject.
We can observe the same contrast with respect to raising and control adjec-
tives:
(10) a. There is likely to be a candidate. (raising)
b. *There/John is eager to be a candidate. (control)

Since the raising adjective likely does not assign any semantic role to its subject,
a nonreferential expression, the ‘dummy’ there subject of the existential verb
be, can be the subject of the sentence. By contrast, the control adjective eager
assigns a semantic role and thus does not allow this ‘dummy’ element as its
subject.
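The contrast between role-assigning control predicates and role-less raising predicates can be sketched as a toy licensing function. This is a hypothetical illustration; the encoding and function are our own, not part of the formalism.

```python
# Toy contrast between raising and control predicates (hypothetical sketch).
# A control predicate assigns a semantic role to its subject, so expletives
# like 'it'/'there' are out; a raising predicate assigns none and simply
# passes the embedded VP's subject requirement upward.

PREDICATES = {
    "try":    {"kind": "control", "subj_role": "agent"},
    "hope":   {"kind": "control", "subj_role": "agent"},
    "seem":   {"kind": "raising", "subj_role": None},
    "likely": {"kind": "raising", "subj_role": None},
}

def allows_subject(pred, subject, embedded_subj_requirement):
    """subject is 'it', 'there', or 'referential';
    embedded_subj_requirement is what the infinitival VP wants."""
    entry = PREDICATES[pred]
    if entry["kind"] == "control":
        # a role-assigning predicate needs a referential (role-bearing) subject
        return subject == "referential"
    # raising: whatever the embedded VP requires, the higher VP requires too
    return subject == embedded_subj_requirement

print(allows_subject("seem", "it", "it"))          # True:  It seems to bother Kim ...
print(allows_subject("try", "it", "it"))           # False: *It tries to be warm ...
print(allows_subject("likely", "there", "there"))  # True:  There is likely to be ...
```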
Subcategorization: Turning to what determines the properties of the subject, we can note that in raising constructions, it is not the raising verb or adjective itself but rather its infinitival VP complement that restricts the properties of the raising predicate's subject. Observe the following:
(11) a. Pat seemed [to be intelligent].
b. It seems [to be obvious that she is not showing up].
c. The chicken is likely [to come home to roost].
(In the sense of ‘Consequences will be felt.’)
(12) a. *There seemed [to be intelligent].
b. *Pat seems [to be obvious that she is not showing up].
c. *Pat is likely [to come home to roost].

For example, the VP to be intelligent in (11a) requires an animate subject, and this is why (11a), with the subject Pat, is acceptable but (12a), with the expletive
subject there, is not. Correspondingly, the VP to be obvious that she is not show-
ing up in (11b) requires the expletive it as its subject. This is why Pat cannot be
the subject in (12b). The contrast in (c) is similar. The VP to come home to roost
requires the subject NP the chicken, without which it would lack the idiomatic
meaning. Sentence (12c) would be acceptable only with a literal meaning, with
Pat referring to a chicken. In sum, in raising constructions, whatever category
is required as the subject of the infinitival VP is also required as the subject by
the higher VP – hence the intuition of ‘raising’: Any requirement placed on the
subject of the infinitival VP complement passes up to the higher predicate.
However, among control verbs, there is no direct relation between the subject
of the main verb and that of the infinitival VP. It is the control verb or adjective
itself which fully determines the properties of the subject:
(13) a. Sandy tried [to eat oysters].
b. *There tried [to be riots in Seoul].
c. *It tried [to bother me that Chris lied].
d. *The chickens try [to come home to roost]. (on the idiomatic meaning)

(14) a. Sandy is eager [to eat oysters].


b. *That he is clever is eager [to be obvious].

Regardless of what the infinitival VP would require as its subject, a control predicate requires its subject to be able to bear the semantic role of agent or
experiencer. For example, in (13b) and (13c), the subject of the infinitival VP
can be there and it, respectively, but these cannot function as the matrix subject –
because the matrix verb tried requires its own subject, a ‘trier,’ as in (13a). In a
similar manner, the VP complement in (14b) can be a CP clause, but the control
adjective eager requires its subject to be an animate experiencer.
Selectional restrictions: Closely related to differences in subject selection are
differences regarding what are known as ‘selectional restrictions.’ Subcatego-
rization frames, which we have represented by means of VAL (valence) features,
are syntactic, but verbs also impose semantic selectional restrictions on their
subjects or objects. For example, the verb thank requires a human subject and an
object that is at least animate:
(15) a. The king thanked the man.
b. #The king thanked the throne.
c. ?The king thanked the deer.
d. #The castle thanked the deer.

This selectional restriction then also accounts for the following contrast:
(16) a. The color red seems [to be his favorite color].
b. #The color red tried [to be his favorite color].

The presence of the raising verb seems does not change the selectional restrictions on the subject. However, the control verb tried is different: It requires its subject to be at least sentient. The subject of a raising verb simply carries the selectional restrictions of the infinitival VP's subject, which in turn means that the subject of the infinitival VP is, in effect, the subject of the raising verb.

Meaning preservation: We have seen that the subject of a raising predicate is that of the infinitival VP complement, and that there is no semantic role at all
coming from the raising predicate. This implies that an idiom whose meaning is
specially composed from its parts will retain its meaning even if part of it appears
as the subject of a raising verb:
(17) a. The cat seems to be out of the bag.
(in the sense of ‘The secret is out’)
b. #The cat tries to be out of the bag.

In the raising example (17a), the meaning of the idiom The cat is out of the bag
is retained. However, because the control verb tries assigns a semantic role to its
subject the cat, ‘the cat’ must be the one doing the action of trying, and there is
no idiomatic meaning.
This preservation of meaning also holds for examples like the following:
(18) a. The dentist is likely to examine Pat.
b. Pat is likely to be examined by the dentist.

(19) a. The dentist is eager to examine Pat.


b. Pat is eager to be examined by the dentist.

As the raising predicate likely does not assign a semantic role to its subject, (18a)
and (18b) have more or less identical meanings – the proposition is about the den-
tist examining Pat, in both active and passive forms: The active subject is raised
in (18a) and the passive subject in (18b). However, the control predicate eager
assigns a semantic role to its subject, and this forces (19a) and (19b) to differ
semantically: In (19a), it is the dentist who is eager to examine Pat, whereas in
(19b), it is Pat who is eager to be examined by the dentist. Intuitively, if one of the
sentences in (18) is true, so is the other, but this inference cannot be made in (19).

7.2.2 Object Raising and Control


Similar contrasts are found between what are known as ‘object rais-
ing’ and ‘object control’ predicates. The contrasts we saw above with respect to
the subjects of different verbs now reappear with respect to objects:
(20) a. Stephen believed Ben to be careful.
b. Stephen persuaded Ben to be careful.

Once again, these two verbs (believe and persuade) look alike in terms of syntax:
They both combine with an NP and an infinitival VP complement. However, the
two are different with respect to the properties of the object NP in relation to the
rest of the structure. Observe the differences between believe and persuade with
respect to their possible object:
(21) a. Stephen believed it to be easy to please Maja.
b. *Stephen persuaded it to be easy to please Maja.

(22) a. Stephen believed there to be a fountain in the park.


b. *Stephen persuaded there to be a fountain in the park.

We can observe that, unlike believe, persuade does not license an expletive
object (just like try does not license an expletive subject). And in this respect,
the verb believe is similar to seem in that it does not assign a semantic role
(to its object). The differences show up again in the preservation of idiomatic
meaning:

(23) a. Stephen believed the cat to be out of the bag.


(in the sense ‘Stephen believed that the secret was out’)
b. *Stephen persuaded the cat to be out of the bag. (with the idiomatic reading)

While the idiomatic reading is retained with the raising verb believed, it is lost
with the control verb persuaded.
Active-passive pairs show another contrast:

(24) a. The dentist was believed to have examined Pat.


b. Pat was believed to have been examined by the dentist.

(25) a. The dentist was persuaded to examine Pat.


b. Pat was persuaded to be examined by the dentist.

With the raising verb believe, there is no strong semantic difference in the exam-
ples in (24). However, in (25), there is a clear difference in who is persuaded. In
(25a), it is the dentist, but in (25b), it is Pat who is persuaded. This is one more
piece of evidence that believe is a raising verb whereas persuade is a control verb
with respect to the object.

7.3 A Simple Transformational Approach

How can we account for these differences between raising and con-
trol verbs or adjectives? A traditional strategy, hinted at earlier, is to treat raising
as a relationship between two distinct syntactic structures, mediated by a pro-
cedure that was known in the literature as NP Movement. This transformation
takes a deep structure like (26a) as its input and produces a surface structure like
(26b):

(26) a. Deep structure: seems [Donald to be irritating].


b. Surface structure: Donald seems t to be irritating.

To derive (26b), the subject of the infinitival VP in (26a) moves to the matrix
subject position, as represented in the following tree structure:
170 RAISING AND CONTROL CONSTRUCTIONS

(27) [tree diagram not reproduced here]

The movement of the subject Donald to the higher subject position will correctly
generate (26b). This kind of movement to the subject position can be triggered
by the requirement that each English declarative sentence have a surface subject
(Chomsky, 1981b). A similar movement process can be applied to the object
raising cases:
(28) a. Deep structure: Tom believes [Donald to be irritating].
b. Surface structure: Tom believes Donald to be irritating.

Here the embedded subject Donald moves not to the matrix subject but to the
matrix object position:
(29) [tree diagram not reproduced here]
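The effect of NP Movement can be sketched procedurally. The following toy Python function is our own illustration, not part of any transformational formalism; the clause representation and attribute names are invented for the sketch:

```python
# Toy sketch of NP Movement (our illustration, not the book's formalism).
# A clause is a dict with a possibly empty 'subj' slot and an infinitival
# complement that may contain its own subject, as in deep structures like
# "seems [Donald to be irritating]".

def np_movement(clause):
    """Raise the embedded subject into an empty matrix subject slot."""
    embedded = clause["comp"]           # the infinitival clause
    if clause["subj"] is None:          # subject raising: seem-type
        clause["subj"] = embedded["subj"]
        embedded["subj"] = "t"          # leave a trace behind
    return clause

deep = {"pred": "seems",
        "subj": None,
        "comp": {"subj": "Donald", "pred": "to be irritating"}}

surface = np_movement(deep)
print(surface["subj"], surface["pred"],
      surface["comp"]["subj"], surface["comp"]["pred"])
# → Donald seems t to be irritating
```

The trace `t` here plays the same bookkeeping role as in (26b): it marks the position the moved NP vacated.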

Control constructions are different: There is no movement operation involved.


Instead, it is the lower subject position which has special properties. Consider
the examples in (30):
(30) a. John tried to please Stephen.
b. John persuaded Stephen to be more careful.

Since try and persuade assign semantic roles to their subjects and objects, an
unfilled position of the kind seen in the raising structures above cannot be
allowed. Instead, it is posited that there is an unexpressed subject of the
infinitival VPs to please

Stephen and to be more careful. This is traditionally represented as the ele-


ment called ‘PRO’ (big pro), and the examples will have the following deep
structures:2
(31) a. John tried [PRO to please Stephen].
b. John persuaded Stephen [PRO to be more careful].

The tree representations of these sentences are as follows:

(32) [tree diagrams not reproduced here]

An independent part of the theory of control links PRO in each case to its
antecedent, marked by coindexing. In (32a), PRO is coindexed with John; in
(32b), it is coindexed with Stephen.
These analyses, which involve derivational rules operating on tree structures,
are driven by the assumption that the mapping between semantics and syntax is
very direct. For example, in (29), the verb believe semantically selects an expe-
riencer and a proposition, and this is reflected in the initial structure. In some

2 In traditional generative grammar, this ‘big PRO’ is taken to be different from ‘small pro’ in the
sense that the former is the subject of a nonfinite clause while the latter is the subject of a finite
clause. Small pro is licensed only in null-subject languages like Korean and Italian.

syntactic respects, though, believe acts like it has an NP object (separate from
the infinitival complement), and the raising operation creates this object. In con-
trast, persuade semantically selects an agent, a patient, and a proposition, and
the structure in (32b) reflects this: The object position is there all along, so to
speak.
The classical transformational approach is a useful way to represent the differ-
ence between raising and control. However, it assumes a very different model of
grammar from that assumed here. In the transformational approach, the raising
and control patterns are the products of rules that map one sentential structure to
another. The transformational approach is highly abstract, in that it assumes syn-
tactic structure that is not ‘visible.’ For example, it is assumed that raising and
control verbs, rather than taking a VP as complement, in fact take a full sentence
as complement – one that happens to have a ‘phonetically null’ or ‘inaudible’
subject. In the remainder of this chapter, we will present a nontransformational
account of control and raising.

7.4 A Nontransformational, Construction-Based


Approach
7.4.1 Identical Syntactic Structures
Rather than characterizing raising and control patterns according to
the positions of elements, both overt and ‘phonetically null,’ in configurational
syntactic structure, we simply focus on argument structure patterns that are char-
acteristic of raising verbs, on the one hand, and control verbs, on the other. Both
kinds of verbs feature a characteristic pattern of argument sharing: A single NP
fulfills a valence requirement not only of the main verb but also of that verb’s
infinitival complement. Returning to the raising verb seemed and the control verb
tried, we can observe that both select an infinitival VP, as in (33), yielding the
structures in (34):

(33) a. [FORM     seemed
         SYN|VAL  [SPR ⟨[1]NP⟩, COMPS ⟨[2]VP[VFORM inf]⟩]
         ARG-ST   ⟨[1]NP, [2]VP⟩]

     b. [FORM     tried
         SYN|VAL  [SPR ⟨[1]NP⟩, COMPS ⟨[2]VP[VFORM inf]⟩]
         ARG-ST   ⟨[1]NP, [2]VP⟩]

These two lexical entries would project the following similar structures, respec-
tively:

(34) [tree diagrams not reproduced here]

As shown here, the syntactic structures projected by seemed and tried are
identical.
The object raising verb expect and the control verb persuade also have
identical valence (SPR and COMPS) information:

(35) a. [FORM     expects
         SYN|VAL  [SPR ⟨[1]NP⟩, COMPS ⟨[2]NP, [3]VP[VFORM inf]⟩]
         ARG-ST   ⟨[1]NP, [2]NP, [3]VP⟩]

     b. [FORM     persuaded
         SYN|VAL  [SPR ⟨[1]NP⟩, COMPS ⟨[2]NP, [3]VP[VFORM inf]⟩]
         ARG-ST   ⟨[1]NP, [2]NP, [3]VP⟩]

These two lexical entries will license the following structures:



(36) [tree diagrams not reproduced here]

As can be seen here, raising and control verbs are no different in terms of their
subcategorization or valence requirements, so they project similar structures. The
question is then how we can capture the different properties of raising and
control verbs. The answer is that their differences follow from other pieces of
lexical information, in particular from the mapping relations between syntax and
semantics.

7.4.2 Differences among the Feature Specifications in the Valence


Information
We have observed that for raising predicates, whatever kind of cate-
gory is required as subject by the infinitival VP is also required as the subject of
the predicate. Some of the key examples are repeated here:

(37) a. Stephen/*It/*There seemed to ignore me.


b. It seemed to rain.
c. There seemed to be a fountain in the park.

(38) a. Stephen/*It/*There tried to ignore me.


b. ?It tried to rain.
c. *There tried to be a fountain in the park.

While the subject of a raising predicate is identical to that of its infinitival VP


complement, the subject of a control predicate has a different requirement. The
subject of a control predicate is coindexed with that of the infinitival VP comple-
ment. This difference is represented in the lexical information shown in (39). The
raising verb involves shared subjects, while in the control verb the two subjects
share a semantic index:
(39) a. [FORM     seemed
         SYN|VAL  [SPR ⟨[1]NP⟩,
                   COMPS ⟨[2]VP[VFORM inf, SPR ⟨[1]⟩]⟩]
         ARG-ST   ⟨[1]NP, [2]VP⟩]

     b. [FORM     tried
         SYN|VAL  [SPR ⟨[1]NPi⟩,
                   COMPS ⟨[2]VP[VFORM inf, SPR ⟨NPi⟩]⟩]
         ARG-ST   ⟨[1]NP, [2]VP⟩]

These two lexical entries represent the difference between seem and try: In the
entry for seemed, the subject of the VP complement is fully identical with its
own subject (notated by the tag [1]), whereas in the entry for tried, only the index value
of the specifier of its VP complement is identical to that of its subject, meaning
that the VP complement’s understood subject refers to the same individual as the
subject of tried. This index identity in control constructions is clear when we
consider examples like the following:
(40) Someonei tried NPi to leave town.
The example here means that whoever someone might refer to, that same person
left town. In some cases, English allows a paraphrase with an overt pronoun:
(41) a. Tom hoped [to win].
b. Tomi hoped [that hei would win].
The lexical entries in (39) generate the following structures for the intransitive
raising and control sentences:
(42) [tree diagrams not reproduced here]

It is easy to verify that these structures conform to all the grammar rules (the
HEAD - SPECIFIER CONSTRUCTION and HEAD - COMPLEMENT CONSTRUCTION )
and principles, including the HFP and VALP.
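The distinction between full structure sharing (seemed) and mere coindexation (tried) can be mimicked informally with Python object identity versus attribute equality. This is only an analogy, with invented class and variable names, not an implementation of the feature-structure formalism:

```python
# Informal analogy (invented names): structure sharing as object identity,
# coindexation as sharing only an index attribute.

class NP:
    def __init__(self, form, index):
        self.form, self.index = form, index

there = NP("there", None)          # expletive: no referential index

# seem: the verb's SPR and its VP complement's SPR are the SAME object,
# so whatever subject the VP accepts (even expletive there), seem accepts.
seem_spr = there
seem_vp_spr = seem_spr
print(seem_spr is seem_vp_spr)     # → True: token identity

# try: the two subjects are DISTINCT objects that merely share an index,
# so each must independently bear a referential index (a semantic role).
try_spr = NP("Stephen", "i")
try_vp_spr = NP(None, "i")         # unexpressed subject, same index i
print(try_spr is try_vp_spr)       # → False
print(try_spr.index == try_vp_spr.index)  # → True: coindexation only
```

The `is` test corresponds to the tag [1] shared between the two SPR values in (39a); the `.index` comparison corresponds to the shared subscript i in (39b).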
Object raising and control predicates are analogous. Object raising verbs
select a VP complement whose subject is fully identical with the object.
Object control verbs select a VP complement whose subject’s index value
is identical with that of its object. The following lexical entries show these
properties:

(43) a. [FORM     expect
         SYN|VAL  [SPR ⟨[1]NPi⟩,
                   COMPS ⟨[2]NP, [3]VP[VFORM inf, SPR ⟨[2]NP⟩]⟩]
         ARG-ST   ⟨[1]NP, [2]NP, [3]VP⟩]

     b. [FORM     persuade
         SYN|VAL  [SPR ⟨[1]NP⟩,
                   COMPS ⟨[2]NPi, [3]VP[VFORM inf, SPR ⟨NPi⟩]⟩]
         ARG-ST   ⟨[1]NP, [2]NP, [3]VP⟩]

Let us look at the structures these lexical entries eventually project:



(44) [tree diagram not reproduced here]

(45) [tree diagram not reproduced here]

As represented here, the subject of to rain tomorrow in (44) is the NP object of


expects, while the subject of to be more careful in (45) is coindexed with the
independent object of persuade.

7.4.3 A Mismatch between Meaning and Structure


We have not yet addressed differences in the assignment of semantic
roles. We first need to introduce further semantic features, distinguished from
syntactic features, as this issue is closely related to the relationship between syn-
tax and semantics. As we saw in Chapter 6, nouns and verbs have IND values.
That is, a noun refers to an individual (e.g., i, j, k) whereas a verb denotes a sit-
uation (e.g., s0 , s1 , s2 ). In addition, a predicate represents a semantic property or
relation. For example, the meaning of the verb hits in (46a) can be represented
in canonical first-order predicate logic, as in (46b):

(46) a. John hits a ball.


b. hit′(j, b)

This shows that the verb hit takes two arguments in the predicate relation hit′, with
the prime (′) notation indicating the semantic value. The relevant semantic properties can
be represented in a feature-structure system as follows:

(47) [FORM     hit
      SYN|VAL  [SPR ⟨[1]NPi⟩, COMPS ⟨[2]NPj⟩]
      ARG-ST   ⟨NPi, NPj⟩
      SEM      [IND  s0
                RELS ⟨[PRED hit, AGT i, PAT j]⟩]]

With respect to syntax, hit is a verb selecting two arguments, realized as a subject
and a complement, respectively, as shown in the values of the features VAL and
ARG - ST . The semantic information associated with the verb is represented by
means of the feature SEM (semantics). Its first attribute is IND (index), represent-
ing what this expression refers to; as a verb, hit refers to a situation s0 in which
an individual i hits an individual j. The semantic relation of hitting is represented
using the feature for semantic relations (RELS). The feature RELS has as its value
a list of one feature structure, here with three further features, PRED (predicate),
AGT (agent), and PAT (patient). The predicate ( PRED ) relation is whatever the
verb denotes: In this case, hit takes two arguments. The AGT argument in the
SEM value is coindexed with the SPR in the SYN value, while the PAT is coin-
dexed with COMPS. This coindexing links the subcategorization information of
hit with the arguments in its semantic relation. Simply put, the lexical entry in
(47) is the formal representation of the fact that in X hits Y, X is the hitter and Y
is the one hit.
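The index-based linking just described can be rendered as a small data structure. The dictionary below is our own rough encoding of the entry in (47), with the attribute names taken from the text; the lookup shows how each semantic role finds the argument bearing its index:

```python
# A rough dictionary rendering of the lexical entry for "hit" in (47)
# (our own encoding; attribute names follow the text).
hit = {
    "FORM": "hit",
    "ARG_ST": [{"cat": "NP", "index": "i"}, {"cat": "NP", "index": "j"}],
    "SEM": {
        "IND": "s0",
        "RELS": [{"PRED": "hit", "AGT": "i", "PAT": "j"}],
    },
}

# Linking: each semantic role is borne by the argument sharing its index.
rel = hit["SEM"]["RELS"][0]
agent = next(a for a in hit["ARG_ST"] if a["index"] == rel["AGT"])
patient = next(a for a in hit["ARG_ST"] if a["index"] == rel["PAT"])
print(agent["index"], patient["index"])   # → i j
```

In X hits Y, the first ARG-ST member (index i) is thus the hitter and the second (index j) the one hit, exactly as the coindexation in (47) dictates.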

Now we can use these additional parts of the verb’s representation to describe
the semantic differences between raising and control verbs. The subject of a rais-
ing verb like seem is not assigned any semantic role, while that of a control verb
like try is linked to a semantic role, whether agent (as in the case of try) or expe-
riencer (as in the case of want or eager). Assuming that ‘s0 ’ or ‘s1 ’ stands for
a situation denoted by an infinitival VP, we can give seem and try the following
simplified meaning representations:
(48) a. seem′(s1) (‘s1 seems (to be the case) = s0’)
     b. try′(i, s1) (‘i tries to (make) s1 (be the case) = s0’)

These meaning differences are represented by the following feature structures:


(49) a. [FORM     seem
         SYN|VAL  [SPR ⟨[1]NP⟩,
                   COMPS ⟨[2]VP[VFORM inf, SPR ⟨[1]⟩, IND s1]⟩]
         ARG-ST   ⟨[1]NP, [2]VP⟩
         SEM      [IND  s0
                   RELS ⟨[PRED seem, SIT s1]⟩]]

     b. [FORM     try
         SYN|VAL  [SPR ⟨[1]NPi⟩,
                   COMPS ⟨[2]VP[VFORM inf, SPR ⟨NPi⟩, IND s1]⟩]
         ARG-ST   ⟨[1]NP, [2]VP⟩
         SEM      [IND  s0
                   RELS ⟨[PRED try, AGT i, SIT s1]⟩]]

We can see here that even though the verb seem selects two syntactic arguments
([1]NP and [2]VP), its meaning relation (PRED) has only one argument (SIT, referring
to s1): Note that the subject (SPR) is not coindexed with any argument in the
semantic relation.3 This means that the subject does not receive a semantic role
(from seem). Meanwhile, the verb try also selects two syntactic arguments (an

3 The feature attribute SIT denotes a situation, roughly corresponding to an event or state of affairs.

NP and a VP) as well as two semantic arguments (AGT and SIT). Unlike seem,
try has a one-to-one mapping relation between syntactic arguments and semantic
arguments. That is, the verb’s SPR is coindexed with the AGT role in the seman-
tics (RELS value), whereas its VP complement is identified with the SIT role.
Thus, both the subject and complement of try are linked to semantic arguments,
whereas the subject of seem is not linked to any semantic argument.
Now we turn to object-related verbs like expect and persuade. Just as in the
contrast between seem and try, the key difference here concerns whether the
object (y) receives a semantic role or not:
(50) a. expect′(x, s1)
     b. persuade′(x, y, s1)

In (50a), x represents an experiencer who anticipates some outcome or state of


affairs yet to be confirmed. In (50b), by contrast, x represents a causal force or
agent, while y represents an experiencer who is impelled by force of reason to
perform an act (s1 ). Once again, these differences can be clearly represented in
feature structures:
(51) a. [FORM     expect
         SYN|VAL  [SPR ⟨[1]NPi⟩,
                   COMPS ⟨[2]NP, [3]VP[VFORM inf, SPR ⟨[2]NP⟩, IND s1]⟩]
         ARG-ST   ⟨[1]NP, [2]NP, [3]VP⟩
         SEM      [IND  s0
                   RELS ⟨[PRED expect, EXP i, SIT s1]⟩]]

     b. [FORM     persuade
         SYN|VAL  [SPR ⟨[1]NPi⟩,
                   COMPS ⟨[2]NPj, [3]VP[VFORM inf, SPR ⟨NPj⟩, IND s1]⟩]
         ARG-ST   ⟨[1]NP, [2]NP, [3]VP⟩
         SEM      [IND  s0
                   RELS ⟨[PRED persuade, AGT i, EXP j, SIT s1]⟩]]

With respect to the manner in which members of the ARG - ST list are linked
to the syntactic grammatical functions SPR and COMPS, the two verbs are the
same: Both select three syntactic arguments. But observe the key difference in
the linking relations with the semantic arguments. As seen in the lexical entries,
expect has two semantic arguments, experiencer (EXP) and situation (SIT): The
object is not linked to a semantic argument of expect. In contrast, persuade has
three semantic arguments: AGT, EXP, and SIT. We can thus conclude that raising
predicates assign one fewer semantic role than they have syntactic dependents,
while control predicates show a one-to-one correlation between semantic
arguments and grammatical functions.
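This closing generalization can be stated as a one-line check: a predicate is raising just in case it has fewer semantic roles than syntactic arguments. The following hypothetical sketch (our own, with role labels from the entries above) applies it to the four verbs discussed:

```python
# Hypothetical sketch of the generalization at the end of this section:
# raising predicates assign one fewer semantic role than they have
# syntactic arguments; control predicates link them one-to-one.

def is_raising(n_syntactic_args, semantic_roles):
    """True if some syntactic argument bears no semantic role."""
    return len(semantic_roles) < n_syntactic_args

# seem: ARG-ST ⟨NP, VP⟩ but only one role (SIT); try: AGT and SIT.
print(is_raising(2, ["SIT"]))                 # → True  (seem)
print(is_raising(2, ["AGT", "SIT"]))          # → False (try)
# expect: ⟨NP, NP, VP⟩, roles EXP and SIT; persuade: AGT, EXP, SIT.
print(is_raising(3, ["EXP", "SIT"]))          # → True  (expect)
print(is_raising(3, ["AGT", "EXP", "SIT"]))   # → False (persuade)
```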

7.5 Explaining the Differences

7.5.1 Expletive Subject and Object


Recall that for raising verbs, one argument is dependent for its
semantic properties solely upon those of the specifier of the VP complement:
the subject in the case of seem and the object in the case of believe. This analysis
is supported by the examples in (52):
(52) a. There/*It/*John seems [to be a fountain in the park].
b. We believed there/*it/*John [to be a fountain in the park].

Control verbs are different, directly assigning the semantic role of agent or
experiencer to the subject or object. For this reason, a control verb does not
accept an expletive argument, even if the verb of the infinitival complement is
one that can take such an argument. This is illustrated in (53a)–(53b) for the
subject of try and the object of persuade, respectively:
(53) a. *There/*It/John tried to leave the country.
b. We persuaded *there/*it/John to be part of the solution.

7.5.2 Meaning Preservation


We noted above that in a raising example like (54a), the idiomatic
reading can be preserved, but not in a control example like (54b):
(54) a. The cat seems to be out of the bag.
b. The cat plans to be out of the bag.

This is once again because seems does not assign any semantic role to its subject:
its subject is identical with the subject of its VP complement to be out of the bag,
whereas the subject of plans has its own agent role.
The same explanation applies to the following contrast:
(55) a. The dentist is likely to examine Pat.
b. Pat is likely to be examined by the dentist.

Since likely is a raising predicate, inasmuch as the expressions The dentist
examines Pat and Pat is examined by the dentist have the same meaning, the two
raising examples will also be synonymous. What matters is only that there be
identity between the raised argument (whether subject or object) and the
subject of the predicate's VP complement.
However, control examples are different:
(56) a. The dentist is eager to examine Pat.
b. Pat is eager to be examined by the dentist.

The control adjective eager assigns a semantic role to its subject independent of
the VP complement, as given in the following lexical entry:
(57) [FORM     eager
      SYN|VAL  [SPR ⟨NPi⟩,
                COMPS ⟨VP[VFORM inf, IND s1]⟩]
      SEM      [IND  s0
                RELS ⟨[PRED eager, EXP i, SIT s1]⟩]]

This then means that (56a) and (56b) must differ in that in the former, it is
the dentist who is eager to perform the action denoted by the VP complement,
whereas in the latter, it is Pat who is eager.

7.5.3 Subject vs. Object Control Verbs


Consider finally the following two examples:
(58) a. They persuaded me to leave.
b. They promised me to leave.

Both persuaded and promised are control verbs: Their object is assigned a
semantic role (and so is their subject). This in turn means that their object cannot
be an expletive:
(59) a. *They persuaded it to rain.
b. *They promised it to rain.

However, the two are different with respect to the controller of the infinitival VP.
Consider who is understood as the unexpressed subject of the infinitival verb
here. In (58a), it is the object me which semantically functions as the subject of
the infinitival VP. Yet in (58b) it is the subject they who will do the action of
leaving. Owing to this fact, verbs like promise are known as ‘subject control’

verbs, whereas those like persuade are ‘object control’ verbs. This difference is
straightforwardly represented in their lexical entries:
(60) [FORM     persuade
      SYN|VAL  [SPR ⟨NPi⟩,
                COMPS ⟨NPj, VP[VFORM inf, SPR ⟨NPj⟩]⟩]]

     [FORM     promise
      SYN|VAL  [SPR ⟨NPi⟩,
                COMPS ⟨NPj, VP[VFORM inf, SPR ⟨NPi⟩, IND s1]⟩]]

The divergent control profiles of these two verbs follow from the communicative
acts they describe. Promising is a commitment by the speaker to perform some
act; it is therefore the speaker (the subject) and not the addressee (the object) who
is understood to be the (potential) doer of the action denoted by the infinitival
VP. In an act of persuasion, by contrast, what is at issue is a future act by the
addressee; it is therefore the addressee (the object), and not the speaker (the
subject), who is understood to be the potential doer of the action expressed by
the infinitival VP.

7.6 Conclusion

The properties of raising and control verbs that we have discussed in


this chapter can be summarized as follows:

• Unlike a control predicate, a raising predicate does not assign a


semantic role to its subject (or object). The absence of a semantic
role can be used to account for the possibility of expletive it or there,
or a part of an idiom, as subject or object of a raising predicate, and
the impossibility of such expressions as subjects or objects of control
predicates.
• In control predicates, the VP complement’s unexpressed subject is
coindexed with one of the syntactic dependents. Among raising
predicates, the entire syntactic-semantic value of the subject of the
infinitival VP is shared with that of one of the dependents of the
predicate. This ensures that whatever category is required by the rais-
ing predicate’s VP complement is the raising predicate’s subject (or
object).

This chapter has shown us that these properties of raising and control
verbs follow naturally from their lexical specifications. In particular, the
present analysis offers a systematic, construction-based account of the mis-
match between the number of syntactic complements that a verb has and the
number of semantic arguments that it has. In Chapter 8, we will observe that
the properties of raising verbs are key to understanding the English auxiliary
system.

Exercises

1. Draw trees for the following sentences and provide a lexical entry for
each of the italicized verbs:

a. Kim may have admitted letting Mary mow the lawn.


b. Gregory appears to have wanted to be loyal to the company.
c. Jones would prefer for it to be clear to Barry that the city plans
to sue him.
d. John continues to avoid the conflict.
e. The captain ordered the troops to proceed.
f. He coaxed his brother to give him the candy.
g. Frank hopes to persuade Harry to make the cook wash the
dishes.
h. John wants it to be clear to Ben that the city plans to honor him.

2. Explain why the following examples are ungrammatical, based on


the lexical entries of the predicates:

a. *John seems to rain.


b. *John is likely to appear that he will win the game.
c. *Beth tried for Bill to ask a question.
d. *He believed there to be likely that he won the game.
e. *It is likely to seem to be arrogant.
f. *Sandy appears that Kim is happy.
g. *Dana would be unlikely for Pat to be called upon.
h. *Robin is nothing in the box.
i. *It said that Kim was happy.
j. *There preferred for Sandy to get the job.

3. In this chapter, we have learned that predicates (verbs and adjec-


tives) can be classified into two main groups, raising and control,
as represented in the following simple table:

                 Raising predicates    Control predicates

   Intransitive  seem . . .            try . . .
   Transitive    believe . . .         persuade . . .

Decide which group each of the following lexical items belongs to.
In doing so, consider the it, there, and idiom tests that this chapter
has introduced:
(i) certain, anxious, lucky, sure, apt, liable, bound, careful, reluc-
tant
(ii) tend, decide, manage, fail, happen, begin, hope, intend, refuse

4. Decide whether the following lexical elements are raising or control


verbs. Do any of these predicates have both properties? If they do,
how can we account for these verbs?
want, prefer, intend, prevent, continue

5. Discuss the similarities and differences among the following three


examples; use the it, there, and idiom tests. In doing so, also consider
the controller of the infinitival VP in each case:
a. Pat expected Leslie to be aggressive.
b. Pat persuaded Leslie to be aggressive.
c. Pat promised Leslie to be aggressive.

6. Consider the following data and discuss briefly what can be the
antecedent of her and herself :
(i) a. Kevin urged Anne to be loyal to her.
b. Kevin urged Anne to be loyal to herself.

Now consider the following data and discuss the binding conditions
on ourselves and us. In particular, determine the relevance of the
ARG - ST list for the possible and impossible binding relations:

(ii) a. Wei expect the dentist to examine usi .


b. *Wei expect the dentist to examine ourselvesi .
c. We expect them to examine themselves.
d. *We expect themi to examine themi .
(iii) a. Wei persuaded the dentist to examine usi .
b. *Wei persuaded the dentist to examine ourselvesi .
c. We persuaded themi to examine themselvesi .
d. *We persuaded themi to examine themi .
8 Auxiliary and Related Constructions

8.1 Basic Issues

The English auxiliary system involves a relatively small number of


elements interacting in complex and intriguing ways. This is one of the reasons
that the behaviors of English auxiliary verbs have been so extensively analyzed
in the literature on generative syntax.
Ontological issues: One of the major issues in the study of the English aux-
iliary system is an ontological one: Is it necessary to posit ‘auxiliary’ as an
independent part of speech or not? Auxiliary verbs are generally classified as
follows:

• modal auxiliary verbs such as will, shall, may, etc.: These have only
finite forms
• aspectual auxiliaries have/be: These have both finite and nonfinite
forms
• do: This ‘support’ verb has a finite form only, with vacuous semantics
• to: The infinitival marker has a nonfinite form only, with apparently
vacuous semantics

Such auxiliary verbs behave differently from main verbs in various respects.
There have been arguments for treating these auxiliary verbs as simply having the
lexical category V, although being distinct from main verbs with respect to both
syntactic distribution and semantic contribution. Similarities include the fact that
both auxiliary and main verbs carry tense information and participate in some of
the same syntactic constructions. These include so-called Right Node
Raising, as shown in (1):

(1) a. Pat washed and Leslie sliced the apples.


b. Pat could and Leslie might resign.

Such phenomena suggest that it might be a mistake to assign auxiliary verbs and
lexical (main) verbs to two distinct categories.
Distinguishing auxiliary from main verbs: How do we know which verbs
are auxiliary verbs? Put differently, what distributional or behavioral properties
are unique to auxiliary verbs in Present Day English? The most reliable criteria


for auxiliary status arise from syntactic phenomena such as negation, inversion,
contraction, and ellipsis (sometimes known as the ‘NICE’ properties; see
Warner, 2000; Kim, 2002b; Sag et al., 2003):
1. Negation: Only auxiliary verbs can be followed by the negative adverb not.
(2) a. Tom will not leave in the morning.
b. *Tom left not in the morning.

2. Inversion: Only auxiliary verbs can undergo subject-auxiliary inversion,


where the auxiliary appears in front of the subject in interrogatives and certain
other sentence types.
(3) a. Will Tom leave the party now?
b. *Left Tom the party already?

3. Contraction: Only auxiliary verbs have contracted forms with the suffix n’t.
(4) a. John couldn’t leave the party.
b. *John leftn’t the party early.

4. Ellipsis: The complement of an auxiliary verb, but not of a main verb, can
be omitted.
(5) a. If anybody is spoiling the children, John is .
b. *If anybody keeps spoiling the children, John keeps .

In addition to the NICE properties, tag questions provide another criterion: An


auxiliary verb can appear in the tag part of a tag question, but a main verb
cannot:
(6) a. You should leave, shouldn’t you?
b. *You didn’t leave, left you?

The position of adverbs and so-called floated quantifiers can also be used to
differentiate auxiliary verbs from main verbs. These differences can be seen in
the following contrasts:
(7) a. She would never believe that story.
b. *She believed never his story.

(8) a. The boys will all be there.


b. *Our team played all well.

Adverbs like never and floated quantifiers like all can follow an auxiliary verb
but not a main verb.
Ordering restrictions: The third major issue for the syntactic analy-
sis of auxiliaries is the question of how to capture ordering restrictions
on auxiliary sequences. Auxiliaries are subject to restrictions that limit
the sequences in which they can occur and the forms in which they

can combine with other auxiliary and main verbs. Observe the following
examples:
(9) a. The children will have been being entertained.
b. He must have been being interrogated by the police at that very
moment.

(10) a. *He has eating the pizza.


b. *The house is been remodeling.
c. *Margaret has had already left.
d. *He has been must being interrogated by the police at that very moment.

As shown here, when there are two or more auxiliary verbs, they must come in a
certain order. In addition, note that each auxiliary verb requires the immediately
following verb to be in a particular morphological form (e.g., has eaten vs. *has
eating).
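The ordering restriction illustrated in (9)–(10) can be stated as a ranking: modal < perfect have < progressive be < passive be < main verb, each class appearing at most once. The following toy validator is our own sketch (class labels invented), ignoring the morphological form each auxiliary imposes on the next verb:

```python
# A toy validator (our own sketch) for the auxiliary ordering restriction:
# modal < perfect have < progressive be < passive be < main verb.
ORDER = {"modal": 0, "have": 1, "be_prog": 2, "be_pass": 3, "main": 4}

def well_ordered(verbs):
    """Check that verb classes appear in strictly increasing order."""
    ranks = [ORDER[v] for v in verbs]
    return all(a < b for a, b in zip(ranks, ranks[1:]))

# (9a) "will have been being entertained"
print(well_ordered(["modal", "have", "be_prog", "be_pass", "main"]))  # → True
# (10d) *"has been must being interrogated": modal after have/be
print(well_ordered(["have", "be_prog", "modal", "be_pass", "main"]))  # → False
# (10c) *"has had already left": perfect have twice
print(well_ordered(["have", "have", "main"]))                         # → False
```

Requiring strictly increasing ranks captures both the ordering facts in (9) and the impossibility of repeating an auxiliary class, as in (10c).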
In the study of the English auxiliary system, we thus need to address at least
these issues:
• Should we posit an auxiliary category?
• How can we distinguish main verbs from auxiliary verbs?
• How can we account for phenomena (such as the NICE group)
that are sensitive to the presence of an auxiliary verb?
• How can we capture the ordering and cooccurrence restrictions
among auxiliary verbs?
This chapter provides answers to these fundamental questions related to the
English auxiliary system.

8.2 Transformational Analyses

A highly influential work addressing the issues above is Chom-


sky (1957). Chomsky’s analysis, which introduces the rule in (11), directly
stipulates the ordering relations among auxiliary verbs:
(11) Aux → Tense (Modal) (have + en) (be + ing)

The PS rule in (11) will license sentences with or without auxiliary verbs, as in
(12):
(12) a. Mary solved the problem.
b. Mary would solve the problem.
c. Mary was solving the problem.
d. Mary would easily solve the problem.

For example, the following structure represents (12d):



(13)

To derive the surface structure, the famous 'Affix Hopping' rule of Chomsky
(1957) ensures that the affixal tense morpheme (Past) in Tense moves onto
the modal (will), or onto the main verb (solve) if no modal appears. If the
modal is present, Past attaches to will, producing Mary would (easily) solve
the problem; if not, it attaches to the main verb solve, yielding Mary solved
the problem.
In addition to the Affix Hopping rule, typical transformational analyses
introduce the English-particular rule of 'do-support,' used to describe how the
NICE properties are manifested in clauses that otherwise have no auxiliary verb:
(14) a. *Mary not avoided Bill.
b. Mary did not avoid Bill.

The presence of not in a position like Adv in the tree (13) has been claimed to
prevent the Tense affix from hopping over to the verb (as not intervenes). As
a last-resort option, the grammar introduces the auxiliary verb do onto which
the affix Tense is hopped. This would then generate (14b). In other words, the
position of do is used to diagnose the position of Tense in the structure.
The analysis captures the syntactic affordances and behaviors of auxiliary
verbs, but it nevertheless misses several important points. For example, because
the constituent structure in (13) does not provide the constituent properties that
we find in coordinate structures, it cannot capture the fact that the tensed (first)
auxiliary and the following VP (which may or may not itself have an auxiliary
verb as head verb) form a unit with respect to coordination:
(15) a. Fred [must have been singing songs] and [probably was drinking beer].
b. (?)Fred must both [have been singing songs] and [have been drinking beer].
c. (?)Fred must have both [been singing songs] and [been drinking beer].
d. Fred must have been both [singing songs] and [drinking beer].

As we saw in Chapter 3, identical phrasal constituents can be conjoined. The
coordination examples here indicate that a VP with one or more auxiliary verbs
behaves just like one without any.
More recent analyses in this tradition (e.g., Chomsky, 1986) use X′-theory
to provide IP and CP as categories for clausal syntax, which can deal with
the coordination data just given.1 Nevertheless, there are many problems that
transformational analyses cannot easily overcome (for a thorough review, see
Kim, 2000 and Kim and Sag, 2002).

8.3 A Construction-Based Analysis

In the approach we take in this book, ordering restrictions on auxiliary
verbs follow from the correct specification of their lexical properties,
interacting with the regular rules of syntactic combination. The analysis requires
no movement, either of whole words or of affixes. In this section, we discuss
several different subtypes of auxiliary.

8.3.1 Shared Properties of Raising Verbs


One important property of all the auxiliary verbs is that they have
raising verb properties. That is, all four types of auxiliary verbs (modals, have/be,
do, to) place no semantic restrictions on their subject. In Chapter 7, we have seen
that raising verbs like seem and expect, unlike control verbs like hope or try,
allow expletive subjects:
(16) a. *There/It hopes to finish the project.
b. *There/It tried to be here at five.

(17) a. There seemed to be no obstacle to our happiness.


b. There is likely to be a general election next year.

(18) a. It seemed to snow every year in the country.


b. It was likely to rain.

All the auxiliary verbs also bear this kind of raising property: The subject of an
auxiliary verb is determined not by the verb itself but by the VP following it:
(19) a. Tom/*It/*There will [leave the town tomorrow].
b. *Tom/It/*There will [rain tomorrow].
c. *Tom/*It/There will [be a riot tomorrow].

(20) a. Tom/*It/*There has [left the town].


b. *Tom/It/*There has [rained].
c. *Tom/*It/There has [been a riot today].

As seen from the contrasts, the type of subject in both (19) and (20) depends
on the type of subject that the bracketed VP (selected by the preceding auxiliary
verbs will and has) requires. This is typical of raising verbs. This implies that all
auxiliary verbs will have the following type specifications:

1 An IP (Inflectional Phrase), similar to a sentence whose verb is finite form, is a functional category
that contains inflections such as tense and agreement. See Radford (1997).

(21)  aux-verb ⇒
      [ SYN | HEAD [ POS verb, AUX + ]
        ARG-ST ⟨ 1 XP, YP[SPR ⟨ 1 XP ⟩] ⟩ ]

Each type of auxiliary verb, belonging to the type aux-verb, will bear these
specifications: each auxiliary verb carries the feature [AUX +], and its subject
specifier (first argument) is the same as the subject of its second argument.
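The effect of (21) can be illustrated with a small sketch (ours, not part of the book's formalism): the auxiliary contributes no subject requirement of its own, so whatever subject type the complement VP demands is the subject type the auxiliary's clause must have. The mini-lexicon of VPs below is purely illustrative:

```python
# A sketch (not the book's feature logic) of the raising property in (21):
# an auxiliary imposes no semantic restriction on its subject; whatever
# subject the complement VP demands is shared upward as the subject of
# the auxiliary itself.

# Subject type each VP demands (illustrative lexicon, cf. (19)-(20)).
VP_SUBJECT = {
    "leave the town tomorrow": "referential",   # needs a Tom-type subject
    "rain tomorrow": "expletive-it",
    "be a riot tomorrow": "expletive-there",
}

def aux_subject_requirement(aux, vp):
    """Raising: the auxiliary simply passes along the subject requirement
    of its VP complement (the structure sharing tagged 1 in (21))."""
    return VP_SUBJECT[vp]

def ok(subject_type, aux, vp):
    return subject_type == aux_subject_requirement(aux, vp)

print(ok("referential", "will", "leave the town tomorrow"))  # True: Tom will leave...
print(ok("expletive-it", "will", "rain tomorrow"))           # True: It will rain.
print(ok("referential", "will", "rain tomorrow"))            # False: *Tom will rain.
```

Note that the first argument of `aux_subject_requirement` is ignored: that is the point of a raising analysis, since the choice of auxiliary never affects which subject is possible.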

8.3.2 Modals
One major property of modal auxiliaries, such as will, shall, and
must, is that they can only occur in finite (plain or past) forms. They cannot
occur either as infinitives or as participles:2
(22) a. I hope *to would/*to can/to study in France.
b. *John stopped can/canning to sing in tune.

Modals do not show 3rd person inflection in the present tense, nor do they have
a transparent past-tense form:
(23) a. *John musts/musted leave the party early.
b. *John wills leave the party early.

A modal verb selects a base VP as its complement:


(24) a. John can [kick/*kicked/*kicking/*to kick the ball].
b. John will [kick/*kicked/*kicking/*to kick the ball].

Reflecting these basic lexical properties, all modal auxiliary verbs will share
the following lexical specifications:
(25)  aux-modal ⇒
      [ SYN | HEAD | VFORM fin
        ARG-ST ⟨ NP, VP[VFORM bse] ⟩ ]

In the lexical entry given here, we can note at least two things. First, modals
bear the head feature AUX, which differentiates them from main verbs, while
being specified as finite (fin). This constraint on the finiteness of modals ensures
that they cannot occur in any environment where finite verbs are prohibited:
(26) a. *We expect there to [VP[fin] will rain].
b. *It is vital that we [VP[fin] will study everyday].

2 As we have seen in 5.2.2, the VFORM value fin includes es, ed, and pln, whereas nonfin includes
ing, en, inf , and bse.

These simple lexical specifications, which are required in almost any analysis,
explain the distributional potential of modal verbs.
Second, (25) specifies that modals take two arguments, which will be realized
as SPR and COMPS, respectively, in accordance with the Argument Realization
Constraint. This means that a modal like must will ultimately have the following
lexical information:
(27)  [ FORM must
        SYN [ HEAD [ VFORM fin, AUX + ]
              VAL [ SPR ⟨ 1 NP ⟩
                    COMPS ⟨ 2 VP[VFORM bse, SPR ⟨ 1 NP ⟩] ⟩ ] ]
        ARG-ST ⟨ 1 NP, 2 VP ⟩ ]

The modal auxiliary verb must, as a subtype of aux-verb and aux-modal, inherits
feature specifications from both (21) and (25). Consider the feature specification
of its complement: The VP complement must be a VP[bse]. The possible and
impossible structures projected from this lexical specification can be most clearly
represented in tree format:

(28)

The structure shows that the modal auxiliary must requires a VP[bse] as its
complement. The VP[fin] in (28b) cannot function as the complement of must.
As shown in (27), modals are raising verbs, requiring the subject of their VP
complement to be identical to that of the modal auxiliary itself (indicated by
the box 1 ). This feature specification is inherited from (21), since modals also
belong to the type aux-verb. This then rules out ungrammatical examples like the
following:
(29) a. It/*Tom will [VP[bse] snow tomorrow].
b. There/*It may [VP[bse] exist a man in the park].

The VP snow tomorrow in (29a) requires the expletive subject it, disallowing other
NPs including Tom, and the VP exist a man in the park in (29b) allows only there
as its subject.

8.3.3 Be and Have


The auxiliary verbs have and be are different from modal verbs. For
example, unlike modals, they have nonfinite forms (would have, would be, to
have/to be); they have a 3rd person inflection form (has, is); they select not a
base VP as their complement but an inflected nonfinite form. In addition, they
differ from modals in that they can be used as main verbs with different syntax.
Consider the examples in (30):
(30) a. He is a fool.
b. He has a car.

On the assumption that every sentence has a tensed main verb, is and has here are
main verbs. However, a striking property of be is that it still shows the properties
of an auxiliary: It exhibits all of the NICE (negation, inversion, contraction, and
ellipsis) properties, as we will see below. The usage of be actually provides a
strong reason why the grammar should allow a verb categorized as ‘V’ to also
have the feature specification [AUX +]; be in (30a) is clearly a verb, yet it also
behaves exactly like an auxiliary.
The verb be has three main uses: as a copula selecting a predicate XP, as an
aspectual auxiliary with a progressive VP following, and as an auxiliary as part
of the passive construction:3
(31) a. John is in the school.
b. John is running to the car.
c. John was found in the office.

There is no categorical or syntactic reason to distinguish these uses, and in fact
different complements as in (31) can be coordinated:
3 These three uses can co-occur, as in is being eaten, where we have a progressive-form
passive auxiliary complex.

(32) a. He is in bed and sleeping.


b. He is a linguist and proud of it.

All three uses in (31) exhibit the NICE properties: They show identical behavior
with respect to subject-auxiliary inversion, their position relative to adverbs and
floated quantifiers, and so forth.
(33) Subject-aux inversion:
a. Was the child in the school? (*Did the child be in the school?)
b. Was the child running to the car?
c. Was the child found?

(34) Position of an adverb:


a. The child (?never) was (never) crazy. (The child (never) became (*never)
crazy.)
b. The child (?never) was (never) running to the car.
c. The child (?never) was (never) deceived.

Thus, all three uses share the lexical specifications given in (35) (XP here is a
variable over phrasal categories such as NP, VP, AP, and PP):
(35)  [ aux-be
        FORM be
        ARG-ST ⟨ NP, XP[PRD +] ⟩ ]

All three be lexemes bear the feature AUX with the value + and select a pred-
icative phrase whose subject is identical with the subject of be. Every use of be
thus has the properties of a raising verb. The main syntactic difference among the
three uses arises when this be lexeme is realized as three different types of
words:4
(36) a. copula be: COMPS XP
b. progressive be: COMPS VP[VFORM ing]
c. passive be: COMPS VP[VFORM pass]

As given here, there are at least three uses of be: copula, progressive, and
passive, each of which has a different specification on the COMPS value.
The copula be needs no further COMPS specification: Any phrase that can
function as a predicate can be its COMPS value. The progressive be requires
its complement to be VP[ing], and the passive be requires its complement
to be VP[pass]. Hence, examples like those in (37) are straightforwardly
licensed:
4 In Chapter 5, we have seen that in terms of morphological form, the VFORM value pass is a
subtype of the value en. See Chapter 9 for further discussion of passive constructions.

(37) a. John is [AP happy about the outcome].


b. John was [VP[ing] seeing his children].
c. The children are [VP[pass] seen in the yard].
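The division of labor in (35) and (36) can be sketched as follows. This is our illustration, not the book's formalism: one shared requirement (a predicative complement, from (35)) plus three variant-specific COMPS constraints (from (36)):

```python
# A sketch (not from the book) of the three realizations of the be lexeme
# in (36): one shared lexeme, three words differing only in what they add
# to the COMPS specification inherited from (35).

BE_VARIANTS = {
    # each variant: a predicate testing the complement's category/VFORM
    "copula":      lambda comp: comp["PRD"],               # any predicative XP
    "progressive": lambda comp: comp.get("VFORM") == "ing",
    "passive":     lambda comp: comp.get("VFORM") == "pass",
}

def licenses(variant, comp):
    # All variants also require a predicative complement, per (35).
    return comp["PRD"] and BE_VARIANTS[variant](comp)

ap_happy  = {"CAT": "AP", "PRD": True}                    # happy about the outcome
vp_seeing = {"CAT": "VP", "PRD": True, "VFORM": "ing"}    # seeing his children
vp_seen   = {"CAT": "VP", "PRD": True, "VFORM": "pass"}   # seen in the yard

print(licenses("copula", ap_happy))        # True  (37a)
print(licenses("progressive", vp_seeing))  # True  (37b)
print(licenses("passive", vp_seen))        # True  (37c)
print(licenses("passive", vp_seeing))      # False: a passive be rejects VP[ing]
```

The design choice mirrors the text: the three uses are not three unrelated verbs but one lexeme whose realizations differ only in the constraint each places on the COMPS value.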

Auxiliary have is syntactically similar to auxiliary be in that it has all the
NICE properties. One key difference is that it requires a past participle VP
complement:
(38) a. John has not sung a song.
b. Has John sung a song?
c. John hasn’t been singing a song.
d. John has sung a song and Mary has , too.

Given facts like these, we can posit the following specifications in the lexical
entry for auxiliary have, the head of the perfect aspect construction (see
Michaelis, 2011 for semantic details):
(39)  [ aux-have
        FORM have
        ARG-ST ⟨ NP, VP[VFORM en] ⟩ ]

The interaction of subcategorization and morphosyntactic information is enough
to predict the ordering restrictions among auxiliary verbs. For example, the
auxiliaries have and be can follow a modal, since both have bse as their VFORM
value:
(40) a. John can [VP[bse] have danced].
b. John can [VP[bse] be dancing].

In addition, we can predict the following ordering too:


(41) a. He has [seen his children].
b. He will [have [been [seeing his children]]].
c. He must [have [been [being interrogated by the police at that very
moment]]].
(42) a. *Americans have [paying income tax ever since 1913].
b. *George has [went to America].

Sentence (42a) is ungrammatical because have requires a perfect participle VP.
Sentence (42b) is ruled out because the VP following has is finite.
In some varieties of English, for example, British English, the main verb have
also has the specification [AUX +], as evidenced by the (b) examples below:5
5 Both British and American English speakers can express possession by means of a construction
sometimes referred to as got-extension, e.g., I have not got a lot of money, Have you got any
money? The verb got as used in got-extension appears to be a past participle, but American
English usage suggests that it is not, since the form would be gotten in American English if it
were the past participle of get.

(43) a. You aren’t a student.


b. You haven’t enough money.
(44) a. Are you a student?
b. Have you enough money?

Here, the main verbs be and have display the NICE properties; although they are
main verbs, they have the syntax of auxiliaries. This fact supports the idea that
every sentence has a (main) verb, while the surface syntax of a verb is determined
by whether it has the specification [AUX +] or [AUX −].

8.3.4 Periphrastic Do
Next we discuss the so-called 'dummy' do, which is used as an auxiliary
in the absence of another (finite) auxiliary head. This do also exhibits the
NICE properties:
(45) a. John does not like this town. (negation)
b. In no other circumstances does that distinction matter. (inversion)
c. They didn’t leave any food. (contraction)
d. Jane likes these apples even more than Mary does . (ellipsis)

Like the modals, do does not appear in nonfinite clauses:


(46) a. They expected us to *do/*should leave him.
b. I found myself needing/*doing need/*should needing sleep.

There are also some properties that distinguish do from other auxiliaries. First,
unlike other auxiliaries, do appears neither before nor after any other auxiliary:
(47) a. *He does be leaving.
b. *He does have been eating.
c. *They will do come.

Second, the verb do has no intrinsic meaning. Except for carrying grammatical
information about tense (and number in present-tense clauses), it makes no
semantic contribution.
Third, if do is used in a positive statement, it needs to be emphatic (stressed).
But in negative statements and questions, no such requirement exists:
(48) a. *Pat did leave. (Ungrammatical if did is unstressed.)
b. Pat DID leave.
(49) a. Pat did not show up.
b. Pat DID not show up. (more likely in this case: Pat did NOT show up.)
(50) a. Did Pat find the solution?
b. How long did it last?

The most economical way of representing these lexical properties is to give
do the lexical entry shown in (51):
(51)  [ aux-do
        FORM do
        SYN | HEAD | VFORM fin
        ARG-ST ⟨ NP, VP[AUX −, VFORM bse] ⟩ ]

Like other auxiliaries, including the modals, do appears only in contexts specified
as [AUX +], which ensures that do appears only in contexts of negation, inversion,
contraction, and ellipsis (the NICE contexts), just like the other auxiliaries.
Further, do selects a subject NP and a VP complement whose unrealized subject is
structure-shared with its own subject (a specification inherited from the aux-verb
type in (21)). Treating do as a raising verb like other English auxiliaries is based
on typical properties of raising verbs, one of which is that raising verbs allow
expletives as their subject, as we have seen above (we will soon see how we
capture a major difference between do and the other auxiliaries):
(52) a. John may leave.
b. It may rain.
c. *John may rain.

(53) a. John did not leave.


b. It did not rain.
c. *John did not rain.

The [AUX +] specification and raising-verb treatment of do capture its
similarities with other auxiliaries and modals.
The differences stem from the lexical specifications of both do and its VP
complement. Unlike have and be, do is specified as fin. This property then
accounts for why no auxiliary element can precede do, since only the first verb
in a sequence may be finite:
(54) a. He might [have left].
b. *He might [do leave].

The VP complement of the auxiliary do must be [VFORM bse]. This feature
specification blocks modals from heading the VP following do, since modals
are specified as [fin], predicting the ungrammaticality of the examples
in (55):
(55) a. *He does [can leave here].
b. *He does [may leave here].

These examples are also ruled out by the specification that the complement of do
be a VP[AUX −]. This requirement will further predict the ungrammaticality of
the examples in (56) and (57):

(56) a. *Jim [DOES [have supported the theory]].


b. *The proposal [DID [be endorsed by Clinton]].

(57) a. *I [do [not [have sung]]].


b. *I [do [not [be happy]]].

In (56) and (57), the VPs following the auxiliary do, stressed or not, bear the
feature [AUX +] inherited from the auxiliaries have and be. This explains the
ungrammaticality of these sentences.

8.3.5 Infinitival Clause Marker To


The auxiliary verbs to and do, in addition to differing by just one
phonological feature (voicing), differ with respect to an important syntactic
property: do appears only in finite contexts and to only in nonfinite contexts.6
The verb to is, of course, the marker of the infinitive in English. Even though it
has the form of a preposition, its syntactic behavior puts it in the class of
auxiliary verbs requiring a base VP (Gazdar et al., 1985):
(58) a. *John believed Kim to leaving here.
b. John believes Kim not to leave here.

These verbs share the property that they obligatorily take bare verbal
complements (hence, nonbase forms or modals cannot head the complement
VP):
(59) a. *John believed Kim to leaving here.
b. *John did not leaving here.
c. *John expects to must leave.
d. *John did not may leave.

In terms of NICE properties, to also falls under the VP ellipsis criterion:


(60) a. Tom wanted to go home, but Peter didn’t want to .
b. Lee voted for Bill because his father told him to .

These properties indicate that to should have a lexical entry like the following:
(61)  [ aux-to
        FORM to
        SYN | HEAD | VFORM inf
        ARG-ST ⟨ NP, VP[VFORM bse] ⟩ ]

6 In British English, auxiliary do has nonfinite forms, as in John will read the book and Bill will do
too or John has read the book and Bill has done too.

The lexeme to is an infinitive auxiliary verb, whose complement must be headed
by a V in the bse form.

8.4 Capturing NICE Properties

In this section, we discuss how we can account for the NICE
properties, which are key diagnostics for the presence of auxiliary verbs.

8.4.1 Auxiliaries with Negation


The English negative adverb not leads a double life: one as a nonfinite
VP modifier, marking constituent negation, and the other as a complement of
a finite auxiliary verb, marking sentential negation. Constituent negation is the
name for a construction in which negation combines with some constituent to its
right and negates exactly that constituent.
Constituent Negation: The use of not as a nonfinite VP modifier is shown by
its similarities to adverbs like never in nonfinite clauses:
(62) a. Kim regrets [never/not [having seen the movie]].
b. We asked him [never/not [to try to call us again]].
c. Duty made them [never/not [miss the weekly meetings]].

Taking not to modify a nonfinite VP, we can predict its various positional
possibilities in nonfinite clauses via the following lexical entry:
(63)  [ FORM never/not
        SYN | HEAD [ POS adv
                     MOD ⟨ VP[VFORM nonfin] ⟩ ] ]

The adverb never or not modifies a nonfinite VP:


(64) Constituent Negation:

In the grammatical examples in (65) and (66), not modifies a nonfinite
VP; in the ungrammatical examples, the VP[nonfin] lexical constraint is
violated:
(65) a. [Not [speaking English]] is a disadvantage.
b. *[Speaking not English] is a disadvantage.
c. *Lee likes not Kim.

(66) a. Lee is believed [not VP[inf ][to like Kim]].


b. Lee is believed to [not VP[bse][like Kim]].
c. *Lee is believed [to VP[bse][like not Kim]].

Sentential Negation: Contrasting with constituent negation is sentential
negation, which is the canonical way of negating a clause. Unlike constituent
negation, sentential not may not modify a finite VP:
(67) a. Lee never/*not left. (cf. Lee did not leave.)
b. Lee will never/not leave.

The contrast in these two sentences shows one clear difference between never
and not. The negator not cannot precede a finite VP, though it can freely
occur as a nonfinite VP modifier, a property further illustrated by the following
examples:
(68) a. John could [not [leave town]].
b. John wants [not [to leave town]].

(69) a. *John [not [left town]].


b. *John [not [could leave town]].

Tag questions also demonstrate the need to distinguish between constituent
and sentential negation:
(70) a. The president could not approve the bill, could/*couldn’t he?
b. The president could, unfortunately, not approve the bill, couldn’t/*could
he?

The polarity value of the tag is generally opposite to that of the matrix clause.
The contrast here indicates that not in (70a) makes the clause negative, while not
in (70b) does not.
The distinction between these two types of negation also influences scope
possibilities in an example like (71) (Warner, 2000):
(71) The president could not approve the bill.

Negation here could have the two different scope readings, paraphrased in (72):
(72) a. It would be possible for the president not to approve the bill.
b. It would not be possible for the president to approve the bill.

The first interpretation is constituent negation; the second is sentential negation.


Another distributional difference between never and not is found in the VP
ellipsis construction. Observe the following contrast:

(73) a. Mary sang a song, but Lee never did .


b. *Mary sang a song, but Lee did never .
c. Mary sang a song, but Lee did not .

The data here indicate that not behaves differently from adverbs like never
in finite contexts, even though the two behave alike in nonfinite contexts.
The adverb never is a true diagnostic of a VP-modifier, and we use contrasts
between never and not to reason about what the properties of the negator not
must be.
We saw the lexical representation for constituent negation not in (63) above.
Sentential not typically appears linearly in the same position – following a finite
auxiliary verb – but shows different syntactic properties (while constituent
negation need not follow an auxiliary, as in Not eating gluten is dumb). We can
observe that expressions like the negator not, too, so, and indeed combine with a
preceding auxiliary verb:

(74) a. Kim will not read it.


b. Kim will too/so/indeed read it.

Expressions like too and so are used to reaffirm the truth of the sentence in
question and follow a finite auxiliary verb. We assume that the negator and
these reaffirming expressions (called AdvI) form a unit with the finite auxiliary,
resulting in a lexical-level construction. The syntactic cohesion of the auxiliary
and the negator not can be observed from the fact that the two can be fused into
a single lexical unit, yielding the contracted forms won't, can't, and so forth.7
As we have seen for the verb-particle combination (e.g., figure out, give up, etc.),
the combination of a finite auxiliary verb and sentential negation is licensed by
the HEAD - LEX CONSTRUCTION (see Chapter 5.5):

(75) HEAD - LEX CONSTRUCTION :


V[POS 1 ] → V[POS 1 ], X[LEX +]

This construction, along with the assumption that the sentential negator not bears
the LEX feature, projects a structure like the following:

7 Zwicky and Pullum (1983) note that the contracted negative n’t more closely resembles word
inflection than it does a ‘clitic’ or ‘weak’ word of the kind that often occurs in highly entrenched
word sequences (e.g., Gimme!). For example, as Zwicky and Pullum observe, won’t is not the
fused form one would predict based on the pronunciation of the word will, and such idiosyncrasies
are far more characteristic of inflectional endings than clitic words.

(76)

Since the sentential negator is not a modifier of the following VP-type
expression, we take it to be selected by a finite auxiliary verb, as a main verb
selects a particle. This means that a finite auxiliary verb (fin-aux) can be
projected into a corresponding NEG-introducing auxiliary verb (neg-fin-aux), as
in (77):

(77) NEGATIVE AUXILIARY CONSTRUCTION :


      [ fin-aux
        ARG-ST ⟨ 1 NP, 2 XP ⟩ ]
      →
      [ neg-fin-aux
        ARG-ST ⟨ 1 NP, AdvI[LEX +, NEG +], 2 XP ⟩ ]

We can also take this relation as a kind of derivation whose input is a finite
auxiliary verb and whose output is a negated finite auxiliary (fin-aux → neg-
fin-aux). That is, a finite auxiliary verb selecting just a complement XP can be
projected into a NEG finite auxiliary (AuxI) that selects the negator as an
additional lexical complement bearing the feature NEG as well as the feature LEX.
For instance, the finite auxiliary will can undergo this derivational process and
become a negative finite auxiliary will:

(78)  [ fin-aux
        FORM will
        SYN | HEAD [ AUX +
                     VFORM fin ]
        ARG-ST ⟨ 1 NP, 2 XP ⟩ ]
      →
      [ neg-fin-aux
        FORM will
        SYN | HEAD [ AUX +
                     VFORM fin
                     NEG + ]
        ARG-ST ⟨ 1 NP, AdvI[LEX +, NEG +], 2 XP ⟩ ]
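The derivation in (77) can be sketched as a function from lexical entries to lexical entries. The dictionary encoding below is our own simplification, not the book's feature logic:

```python
# A sketch (our illustration) of the derivation in (77): a lexical rule
# takes a finite auxiliary selecting <NP, XP> and returns one that
# additionally selects the negator as a lexical complement.

def negate_fin_aux(entry):
    """fin-aux -> neg-fin-aux: insert AdvI[LEX +, NEG +] between the
    subject NP and the original complement; mark the output NEG +."""
    assert entry["TYPE"] == "fin-aux" and entry["HEAD"]["VFORM"] == "fin"
    negator = {"CAT": "Adv", "LEX": True, "NEG": True}   # e.g., 'not'
    subj, comp = entry["ARG-ST"]
    return {
        "TYPE": "neg-fin-aux",
        "FORM": entry["FORM"],
        "HEAD": dict(entry["HEAD"], NEG=True),           # NEG + on the head
        "ARG-ST": [subj, negator, comp],
    }

will = {
    "TYPE": "fin-aux",
    "FORM": "will",
    "HEAD": {"AUX": True, "VFORM": "fin"},
    "ARG-ST": ["NP", "VP[bse]"],
}

neg_will = negate_fin_aux(will)
print(neg_will["ARG-ST"])       # ['NP', {'CAT': 'Adv', 'LEX': True, 'NEG': True}, 'VP[bse]']
print(neg_will["HEAD"]["NEG"])  # True
```

Because the rule only adds a complement and a NEG marking, everything else about the auxiliary (its form, its finiteness, its raising property) carries over unchanged, just as in (78).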

The output lexical construction will then licenses the following structure for
sentential negation:

(79)

As shown here, the negative finite auxiliary verb will selects two complements,
the negator not and the VP leave town. The finite auxiliary first combines with
the negator, forming a head-lex construct. This construct can then combine with
a VP complement, forming a head-complement construct.
By treating not as both a modifier (constituent negation) and a lexical
complement (sentential negation), we can account for the scope differences
in (71) and various other phenomena, including VP Ellipsis (see below).
For example, the present analysis will assign two different structures to the
string (71):

(80)

In the structure (80a), not modifies only the nonfinite VP, with scope narrower
than could. Meanwhile, in (80b), not is at the same level in the syntax as could,
and semantically not scopes over could. In this case, the feature [NEG +]
percolates up to the VP and then to the whole sentence. The semantic
consequence of this structural difference can be seen in the different tag questions
appropriate for each interpretation, as we have noted earlier:

(81) a. The president [could [not [approve the bill]]], couldn’t/*could he?
b. The president [[[could][not]] [approve the bill]], could/*couldn’t he?

The tag question forms show that (81a) is actually a positive statement, even
though some part of it is negative. By contrast, (81b) is a negative statement.
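The link between NEG percolation and tag polarity can be put in a short sketch (our illustration; the function names are invented):

```python
# A sketch (not the book's) of how [NEG +] determines tag polarity:
# sentential not (a complement of the finite auxiliary) percolates NEG +
# up to the S node, so the tag is positive; constituent not (a nonfinite
# VP modifier) does not percolate, so the clause counts as positive and
# the tag is negative.

def clause_neg(negation_type):
    """Only sentential negation makes the whole clause [NEG +]."""
    return negation_type == "sentential"

def tag(negation_type):
    # The tag's polarity is the opposite of the matrix clause's polarity.
    return "could he?" if clause_neg(negation_type) else "couldn't he?"

print(tag("sentential"))   # could he?    cf. (81b)
print(tag("constituent"))  # couldn't he? cf. (81a)
```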

8.4.2 Auxiliaries with Inversion


Questions in English are formed by structures which invert the
subject and the auxiliary:8

(82) a. Are you studying English syntax?


b. What are you studying nowadays?

8 For the analysis of wh-questions like (82b), see Chapter 10.



The long-standing transformational approach assumes that the auxiliary verb
is moved from a medial position to the clause-initial position (the node labels
would typically differ in current analyses, but the structure of the movement is
what is relevant here):
(83)

However, there are certain exceptions that present problems for the analysis of
inverted auxiliaries involving a movement transformation. Observe the following
contrast:
(84) a. I shall go downtown.
b. Shall I go downtown?

Here there is a semantic difference between the auxiliary verb shall in (84a) and
the one in (84b): The former conveys a sense of simple futurity – in the near
future, I will go downtown – whereas the latter example concerns permission,
asking whether it is appropriate for me to go downtown. If the inverted verb in
(84b) is simply moved from a medial position, it is not clear how the grammar
can represent this meaning difference.
English also assigns various interpretations to the subject-auxiliary inversion
pattern:9
(85) a. Wish: May she live forever!
b. Matrix Polar Interrogative: Was I that stupid?
c. Negative Imperative: Don’t you even touch that!
d. Subjunctive: Had they been here now, we wouldn’t have this problem.
e. Exclamative: Boy, am I tired!

Each of these constructions has its own constraints, which cannot fully be
predicted from other constructions. For example, in 'wish' constructions, only the
modal auxiliary may is possible. In negative imperatives, only don't (but not, e.g.,
do) is allowed. These idiosyncratic properties support a nonmovement approach,
in which auxiliaries can be specified as having particular uses or meanings when
inserted into particular positions in the syntax.
Note that there are many environments where nonfinite Ss form a constituent:

9 See Fillmore (1999) for detailed discussion.



(86) a. I prefer for [Tom [to do the washing]] and [Bill [to do the drying]].
b. Mary meant for, but nobody else meant for, [Sandy [to do the washing]].
(87) a. They didn’t approve of [him/my [leaving without a word]].
b. Tom believes that [him [taking a leave of absence]] bothers Mary.
c. Why does [John’s [taking a leave of absence]] bother Mary?
(88) a. [With [the children [so sick]]], we weren’t able to get much work done.
b. [With [Tom [out of town]]], Beth hastily exited New Albany and fled to
Ohio.
c. [With [Bush [a born-again Christian]]], the public already had a sense of
where he would stand on those issues.
(89) a. [His wife [missing]], John cried on Brown’s shoulder.
b. [No money [left in the account]], John didn’t know what to do.
Each of these examples shows us that S[inf ], S[ing], or S[PRD +] forms a
syntactic unit, which is traditionally called a small clause (SC) (see Chapter 9 for
further discussion). What these data imply is that the construction S[nonfin] lives
its own life as an independent construction in English. In the yes-no question and
wh-interrogative SAI construction, we further observe this constituenthood:
(90) a. Can [[Robin sing] and [Mary dance]]?
b. When the going got tough, why did [[the men quit] and [the women stay
behind]]?
(91) a. Who did [[Tom hug t ] and [Mary kiss t ]]?
b. Which man and which woman did [[Tom hug t ] and [Mary kiss t ]]
respectively?
Such coordination examples support the idea that a finite auxiliary verb
combines with a nonfinite S whose subject is nominative, as illustrated in the
following tree:
(92)
8.4 Capturing NICE Properties 207

As shown in (92), the inverted finite auxiliary verb combines with a nonfinite
S. Licensing such a structure also means that a noninverted auxiliary verb
construction is systematically mapped into an inverted auxiliary verb by the
following derivational process:

(93) INVERTED AUX CONSTRUCTION:

     [aux-wd
      ARG-ST <[1]XP, YP[SPR <[1]>, VFORM [2]]>]
     →
     [aux-inv-fwd
      ARG-ST <S[VFORM [2] nonfin, XARG [1][nom]]>]

The key effect of this post-inflection derivation is, as seen here, to change the
values of the attributes INV and ARG-ST of a finite auxiliary verb, which is
a raising verb. That is, a noninverted auxiliary verb selecting two arguments is
mapped onto an inverted auxiliary verb selecting a single nonfinite S whose external
argument (XARG) is the same as the input verb's subject.
Traditionally, arguments are classified into external and internal ones, where
the former usually refer to the subject. The introduction of such a semantic fea-
ture is necessary if we want to make the subject value visible on the S node (see
Bender and Flickinger, 1999 and Sag, 2012). That is to say, although a VP has
an SPR value for its subject, once the VP and the subject combine, the resulting
S no longer has any information about any features of the subject – including its
semantic index. The feature XARG is a mechanism used to make this informa-
tion visible at the S level, which is where the tag question adjoins. The clausal
complement of the inverted auxiliary inherits the VFORM value and requires its
external argument (XARG) to be nominative (nom). For instance, consider the
derivation of the noninverted auxiliary will into the inverted will:

(94) Deriving the inverted auxiliary will:

     [aux-wd
      FORM will
      SYN | HEAD [AUX +, INV −]
      ARG-ST <[1]XP, VP[VFORM bse, SPR <[1]>]>]
     →
     [aux-inv-fwd
      FORM will
      SYN [HEAD [AUX +, INV +]
           VAL [SPR < >, COMPS <S[nonfin]>]]
      ARG-ST <S[VFORM bse, XARG [1][CASE nom]]>]

Put informally, the input is the noninverted auxiliary will, a raising verb that
selects a subject and a base VP[bse] whose subject is structure-shared with the
auxiliary's own subject. By contrast, the output is the inverted auxiliary will that selects
just a nonfinite S. Note that this S is mapped onto the COMPS value because the
output belongs to the function-word type aux-inv-fwd (see the discussion in Chapter 5
around (71)). Let us consider the structure of an SAI sentence licensed by this
inverted auxiliary will:
(95)

The combination of the nominative subject and the base VP forms a nonfinite
head-subject construct, and this nonfinite S combines with the head inverted
auxiliary, forming a head-complement construct. Note also that in the present
system, the VFORM value requirement on the VP complement of the noninverted
auxiliary is maintained in the nonfinite S complement. Thus, if the noninverted
auxiliary selects a bse VP, then its SAI counterpart will select a bse S instead,
thus blocking cases like *Will he coming to Seoul?, *Will he came to Seoul?, and
so on. More illustrations are given in (96):
(96) a. John can come to Seoul. vs. Can John come to Seoul?
b. John has driven to Seoul. vs. Has John driven to Seoul?
c. John is [visiting Seoul]. vs. Is John [visiting Seoul]?
d. John is [visited by his friends]. vs. Is John [visited by his friends]?
e. John is [to visit his friends]. vs. Is John [to visit his friends]?

8.4.3 Contracted Auxiliaries


Auxiliary verbs show two kinds of contraction: either with a preced-
ing subject or with the negator not:
(97) a. They’ll be leaving.
b. They’d leave soon.
(98) a. They wouldn’t leave soon.
b. They shouldn’t leave soon.
Contracted-negation forms show several lexical idiosyncrasies, as in *willn’t,
*amn’t, and *mayn’t. It is common to analyze n’t as a kind of inflectional affix
(Zwicky and Pullum, 1983). In the approach we adopt here, we would posit an
inflectional rule applying to a specific set of verbs, as in (99):

(99) N’t CONTRACTION CONSTRUCTION:

     [aux-w
      FORM [1]
      HEAD | VFORM fin]
     →
     [aux-nt-w
      FORM [1]+n’t
      HEAD [VFORM fin, NEG +]]

This means that a word like can will be mapped to can’t, gaining the NEG feature:
(100)
     [FORM can
      HEAD | VFORM fin]
     →
     [FORM can’t
      HEAD [VFORM fin, NEG +]]
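The contraction rule can likewise be pictured as a partial function over lexical entries. The following Python sketch is our own illustration, not part of the book's formalism: the attribute names are informal, English spelling is only roughly approximated, and suppletive forms like won't fall outside the rule.

```python
# Informal sketch of the N'T CONTRACTION rule in (99): a finite
# auxiliary gains n't and the feature [NEG +]. Lexical exceptions
# (*willn't, *amn't, *mayn't) are listed, since the rule applies
# only to a specific set of verbs; won't and shan't are suppletive
# and are not derived here.

BLOCKED = {"will", "am", "may"}

def nt_contract(entry):
    if entry["VFORM"] != "fin" or entry["FORM"] in BLOCKED:
        return None                       # no regular n't counterpart
    base = entry["FORM"]
    # crude orthography: 'can' + n't is spelled "can't", not "cann't"
    spelled = base + "'t" if base.endswith("n") else base + "n't"
    return {"TYPE": "aux-nt-w", "FORM": spelled,
            "VFORM": "fin", "NEG": "+"}

can = {"TYPE": "aux-w", "FORM": "can", "VFORM": "fin"}
could = {"TYPE": "aux-w", "FORM": "could", "VFORM": "fin"}
```

Here nt_contract(can) yields the contracted entry sketched in (100), while an entry for will is correctly mapped to nothing.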

As we saw earlier, the head feature NEG will play an important role in forming
tag questions:
(101) a. They can do it, can’t they?
b. They can’t do it, can they?
c. *They can’t do it, can’t they?
d. *They can’t do it, can he?
The tag part of such a question has a NEG value that is the opposite of that in the
main part of the clause.
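The polarity condition itself amounts to a single inequality between NEG values. A minimal sketch, with NEG encoded as a boolean (our own illustration):

```python
# The tag-question polarity condition illustrated in (101): the tag's
# NEG value must be the opposite of the main clause's NEG value.

def well_formed_tag(main_neg, tag_neg):
    return main_neg != tag_neg
```

So They can do it, can't they? (False, True) and They can't do it, can they? (True, False) pass, while *They can't do it, can't they? (True, True) fails.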

8.4.4 Auxiliaries with Ellipsis


The standard generalization about Verb Phrase Ellipsis (VPE) is that it
is possible only after an auxiliary verb, as shown in the contrast between (102)
and (103):
(102) a. Kim can dance, and Sandy can ___, too.
      b. Kim has danced, and Sandy has ___, too.
      c. Kim was dancing, and Sandy was ___, too.

(103) a. *Kim considered joining the navy, but I never considered ___.
      b. *Kim got arrested by the CIA, and Sandy got ___, also.
      c. *Kim wanted to go and Sandy wanted ___, too.
The VP complement of an auxiliary verb, but not a main verb, can undergo VP
ellipsis, provided that the context offers enough information for its interpretation.
The syntactic part of this generalization can be succinctly stated in the form
of lexical derivation:
(104) VP ELLIPSIS CONSTRUCTION:

     [aux-w
      HEAD | AUX +
      ARG-ST <[1]XP, YP>]
     →
     [aux-elide-w
      HEAD | AUX +
      VAL [SPR <[1]XP>, COMPS < >]
      ARG-ST <[1]XP, YP[pro]>]

This rule means that the second argument (YP) of an auxiliary verb need not be
realized as a complement (COMPS) when the second argument is interpreted as
a type of pro (a pro-form) referring to the antecedent provided in the context, as
illustrated in the following examples:
(105) a. They all couldn’t solve the puzzle. However, Albert could ___.
      b. Jane rebooted the server. She had to ___.

The complement of could and to is not realized here, but it can be understood by
referring to the preceding sentence.
Since the rule in (104) is stated to apply to any YP (predicate) after a verb
with the [AUX +] specification, it applies not only to the VP complements of the
canonical auxiliary verbs but also to the complements of be and have in their
main-verb uses. With be, non-VP complements can be elided:
(106) a. Kim is happy and Sandy is too.
b. When Kim was in China, I was too.

The main verb have is somewhat restricted, but the contrast in (107) is clear.
Even though have is a main verb in (107a), it can allow an elided complement,
unlike the main verb bring in (107b):
(107) a. A: Have you anything to share with the group?
         B: No. Have you ___?
      b. A: Have you brought anything to share with the group?
         B: No. *Have you brought ___?

Given the derivational rule in (104), which specifies no change in the ARG-ST,
a canonical auxiliary verb like can will have a counterpart that lacks a phrasal
complement on the COMPS list:
(108)
     [FORM can
      SYN | VAL [SPR <[1]NP>, COMPS <[2]VP[bse]>]
      ARG-ST <[1], [2]>]
     →
     [FORM can
      SYN | VAL [SPR <[1]>, COMPS < >]
      ARG-ST <[1], [2][pro]>]

Notice here that even though the VP complement is elided in the output, the
ARG-ST is intact. This allows us to assign a proper interpretation to the elided
VP (see Kim, 2003).
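The two-sided character of the rule in (104), where the complement disappears from COMPS but survives on ARG-ST, can be sketched in Python as follows. This is our own informal illustration: the PRO flag stands in for the [pro] specification, and the attribute names are not those of any implemented grammar.

```python
# Informal sketch of the VP ELLIPSIS rule in (104): the second
# argument of an [AUX +] verb is left off COMPS (so it is not
# realized in the syntax) but remains on ARG-ST, marked pro,
# which is what allows it to be interpreted from context.

def elide(entry):
    if entry.get("AUX") != "+":
        return None                       # only [AUX +] verbs qualify
    subj, comp = entry["ARG-ST"]
    return {"TYPE": "aux-elide-w",
            "AUX": "+",
            "FORM": entry["FORM"],
            "SPR": [subj],
            "COMPS": [],                  # complement unexpressed
            "ARG-ST": [subj, dict(comp, PRO=True)]}

can = {"TYPE": "aux-w", "AUX": "+", "FORM": "can",
       "ARG-ST": [{"CAT": "NP"}, {"CAT": "VP", "VFORM": "bse"}]}
consider = {"TYPE": "verb-w", "AUX": "-", "FORM": "consider",
            "ARG-ST": [{"CAT": "NP"}, {"CAT": "VP", "VFORM": "ing"}]}
```

Here elide(can) yields an entry with an empty COMPS list but a pro-marked second argument, as in (108), while elide(consider) returns nothing, matching the contrast between (102) and (103).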
In the first part of the example in (109), there are three auxiliary verbs:
(109) Kim must have been dancing and
      a. Sandy must have been ___, too.
      b. Sandy must have ___, too.
      c. Sandy must ___, too.
There are therefore various options for an elided VP: the complement of been,
or have, or must.
The analysis also immediately predicts that ellipsis is possible with the
infinitival marker to, as this lexeme is an auxiliary verb, too:

(110) a. Tom wanted to go home, but Peter didn’t want to ___.
      b. Lee voted for Bill because his father told him to ___.

(111) a. Because John persuaded Sally to ___, he didn’t have to talk to the reporters.
      b. Mary likes to tour art galleries, but Bill hates to ___.

Finally, the analysis given here will also account for the contrast shown above
in (73); a similar contrast is found in the following examples:

(112) a. *Mary sang a song, but Lee could never ___.
      b. Mary sang a song, but Lee could not ___.

The negator not in (112b) is a marker of sentential negation and can be the com-
plement of the finite auxiliary verb could. This means that we can apply the VPE
lexical rule to the auxiliary verb could after the projection of the NEGATION
AUXILIARY CONSTRUCTION, as shown in (113):

(113)
     [FORM could
      COMPS <[2]Adv[NEG +], [3]VP[bse]>
      ARG-ST <[1], [2], [3]>]
     →
     [FORM could
      COMPS <[2]>
      ARG-ST <[1], [2], [3]>]

As shown here in the right-hand form, the VP complement of the auxiliary verb
could is not realized as a COMPS element, though the negative adverb is. This
form would then project a syntactic structure like (114):

(114)

As represented here, the auxiliary verb could forms a well-formed head-complement
construct with not.
Why is there a contrast between (112a) and (112b)? The reason is simply that
not can ‘survive’ VPE because it can be licensed in the syntax as the complement
of an auxiliary, independent of the following VP. However, an adverb like never
is only licensed as a modifier of VP (it is adjoined to VP to yield another VP).

Thus, if the VP were elided, we would have a hypothetical structure like the
following:

(115)

Here, the adverb never modifies a VP through the feature MOD, which guarantees
that the adverb requires the head VP that it modifies. In an ellipsis structure,
the absence of such a VP means that there is no VP for the adverb to modify.
In other words, there is no rule licensing such a combination – predicting the
ungrammaticality of *has never, as opposed to has not.10

8.5 Conclusion

This chapter aimed to address four key issues in the study of the
English auxiliary system. The issues involve the properties that distinguish
auxiliary verbs from main verbs, ordering restrictions among auxiliary verbs,
combinatorial restrictions on the syntactic complements of auxiliary verbs, and
auxiliary-sensitive phenomena like NICE properties.
The chapter first focused on the morphosyntactic properties of English aux-
iliary verbs. We showed that their distributional, ordering, and combinatorial
properties all follow from their lexical groupings: modals, have/be, do, and to.
The second part of this chapter concerned the so-called NICE phenomena, each
of which is sensitive to the presence of an auxiliary verb and has been extensively
analyzed in generative grammar. The chapter showed us that a construction-
based analysis can offer a straightforward account of these phenomena without
reliance on movement operations or functional projections.
In Chapter 9, we move on to a particular auxiliary-headed construction or
family of constructions: the passive (which canonically consists of the passive
auxiliary be followed by a past participial VP complement). We will see that the
construction-based analysis developed in this chapter can be extended to account
for passive constructions in English.

10 As we saw in Section 6.6.1, Chapter 6, all modifiers carry the head feature MOD, whose value is
the expression that is modified.

Exercises

1. Each of the following sentences contains an item (shown in parentheses)
which we might want to call an auxiliary. In each case, construct
relevant examples that will clarify whether it actually is an auxiliary.
Explain your reasoning from the examples you provide:
a. John got sent to prison. (got)
b. He ought to leave his luggage here. (ought)
c. They needn’t take this exam. (need)
d. You’d better not leave it here. (better)
e. He dared not argue against his parents. (dared)
f. He used to go there very often. (used)

2. Draw trees for the following sentences:


a. The gardener must trim the rose bushes today.
b. This should be the beginning of a beautiful friendship.
c. I am removing the shovel from the shed.
d. The travelers have returned from their vacation.
e. Springfield would have built a police station with the federal
grant.
f. Stingrays could have been cruising near the beach.
g. She seems to have given financial assistance to an important
French art dealer.

3. Provide an analysis of the grammaticality or ungrammaticality of the
following examples, together with a tree structure for each, and lexical
entries for the words playing the crucial roles in the determination
of grammaticality.
a. It has rained/*raining/*rains/*rain every day for the last
week.
b. The roof is leaking/*leaked/*leaks/*leak.
c. *George is having lived in Toledo for thirty years.
d. *The house is been remodeling.
e. *Margaret has had already left.
f. *Does John have gone to the library?
g. *Sam may have been interrogating by the FBI.

4. Analyze the following sentences by providing a lexical entry for each
head verb and a tree structure for each sentence.
a. The senator should not have forgotten the concerns of her
constituents.
b. Did the doctor prescribe aspirin?
c. George has spent a lot of money, hasn’t he?
d. Sandy will read your reports, but Harold will not.

5. English allows what is called ‘negative inversion,’ as illustrated in the
following:
(i) a. He can hardly believe that it’s already over.
b. I could have little known that more trouble was just around the
corner.
c. I have never been spoken to so rudely!

(ii) a. [Hardly] was there any rain falling.


b. [Little] did I know that more trouble was just around the corner.
c. [Never] have I been spoken to so rudely!

(iii) a. He had hardly collected the papers on his desk, had he/*hadn’t he?
      b. He never achieved anything, did he/*didn’t he?

Draw tree structures for the sentences in (ii) and provide the
lexical entries for hardly, little, and never. The examples in
(iii) indicate that these adverbs all involve some kind of
negation in the sentence in which they appear. In addition,
think of how your analysis can account for the contrast in
acceptability among the examples in (iv):
(iv) a. As a statesman, he scarcely could do anything worth
mentioning.
b. As a statesman, scarcely could he do anything worth
mentioning.
c. *As a statesman, scarcely he could do anything worth
mentioning.

6. Draw a tree structure for each of the following inverted constructions
while giving detailed feature structures for each auxiliary verb:
(i) a. Has he read the paper?
b. May the queen live long!
c. Only then can you belong to me.
d. Were I you, I would visit my grandfather.

7. Identify errors in the following passage and provide the reasons for
the errors:
The expanded role of auxiliaries in English has resulting in some curious
rules. One is that when a sentence are to be negated, the word not must
follow not the main verb (as used to be the case), but the auxiliary. This rule
creates an awkward dilemma in the occasional instance when the sentence to
being negated actually doesn’t have an auxiliary verb. Thus, if I wish to deny
the sentence, I walked home, I must add an entirely meaningless auxiliary
from the verb do just to standing as the prop for the word not. The result is

the sentence, I didn’t walk home. Now, do and did are often adding to show
emphasis, but in those cases they are speak with emphasis. Thus, there is
a difference between saying I didn’t walk home and saying I DIDN’T walk
home. The latter sentence expresses emphasis, but in the former sentence
the verb did expresses nothing at all; it be merely there to hang the not on.
If we tried to say, I walked not home, this would had an unacceptably odd
sound to it. It would, indeed, sound archaic. English literature is full of such
archaisms, since putting not after the main verb was still good usage in the
time of Shakespeare and a century or more later.
9 Passive Constructions

9.1 Introduction

One important goal of syntactic analysis is to capture formal and
semantic properties common to two or more constructions. For example, the
following two sentences have similar meanings:
(1) a. One of Korea’s most famous poets wrote these lines.
b. These lines were written by one of Korea’s most famous poets.

We recognize (1b) as the passive counterpart of the active sentence (1a). These
two sentences are true or false under the same real-world conditions: They both
describe the event of writing the lines by one Korean poet. The only difference
involves grammatical functions: In the active voice (1a), one of Korea’s most
famous poets is the subject, whereas in the passive voice (1b), these lines is the
subject.
Why are there two ways of saying essentially the same thing? It is generally
accepted that the passive construction is used for certain discourse-motivated
reasons. For example, when the person or thing acted upon is what the sentence
is about, we tend to use the passive.1 Compare the following:
(2) a. Somebody apparently struck the unidentified victim during the early morn-
ing hours.
b. The unidentified victim was apparently struck during the early morning
hours.

We can observe that the passive in (2b) assigns greater salience to the victim
than the active in (2a). In addition, language users prefer passive voice when the
identity of the actor is unknown or unimportant:
(3) a. Targets can be observed at any angle.
b. During the early evening, Saturn is found in the north, while Jupiter rises in
the east.

Similarly, we use the passive voice in formal, scientific, or technical writing and
reports to convey an objective presentation of the events or state of affairs being
described. Compare, for example, the following sentences:
1 In other words, the passive construction is used to ensure that a nonagentive entity is realized as
the subject, because subject is the canonical position for a sentence topic (see Lambrecht, 1994).


(4) a. I poured 20cc of acid into the beaker.


b. About 20cc of acid was poured into the beaker.

While (4a) is a report of something that the writer did, as it appears to be
about the writer, (4b) is a report of an event that happened to involve human
agency.
Leaving aside these discourse- and genre-motivated factors underlying the
use of passive constructions, in this chapter we will explore the syntactic and
semantic relationships between active and passive constructions as well as the
properties of different passive constructions.

9.2 The Relationship between Active and Passive

Consider these active and passive counterpart sentences:


(5) a. The executive committee approved the new policy.
b. The new policy was approved by the executive committee.

With respect to formal and argument-realization properties, how do the two
constructions in question differ?
Grammatical functions and subcategorization: By definition, a transitive
verb form such as taken or chosen must have an object:
(6) a. John has taken Bill to the library.
b. John has chosen Bill for the position.
(7) a. *John has taken to the library.
b. *John has chosen for the position.

Yet, when such verbs are passive, the object NP is necessarily absent from the
postverbal position:
(8) a. *The guide has been taken John to the library.
b. *The department has been chosen John for the position.
(9) a. John has been taken to the library.
b. John has been chosen for the position.

The absence of the object in the passive is due to the fact that the argument that
would have been the object of the active verb has been promoted to subject of
the passive.
Apart from the realizations of the two core arguments of a transitive verb,
other subcategorization requirements are unchanged in a passive form. For
example, the active form handed in (10) requires an NP and a PP[to] as its
complements, and the passive handed in (11) still requires the PP complement:
(10) a. Pat handed a book to Chris.
b. *Pat handed to Chris.
c. *Pat handed a book.

(11) a. A book was handed to Chris (by Pat).


b. *A book was handed (by Pat).

Other selectional properties: We now know that the selectional properties
of the active verb are preserved in a passive sentence. It follows
that if the object of an active verb can be an expletive form like it,
that requirement attaches to the subject of the passive verb. Compare the
following:
(12) a. They believe it/*Stephen to be easy to annoy Ben.
b. They believe there to be a dragon in the wood.

(13) a. It/*Stephen is believed to be easy to annoy Ben.


b. There is believed to be a dragon in the wood.

If the active complement is itself a clause, the subject of the passive verb must
also be a clause:
(14) a. No one believes/suspects [that he is a fool].
b. [That he is a fool] is believed/suspected by no one.

Finally, if the postverbal constituent is construed as part of an idiom, so is the
subject in the passive:
(15) a. They believe the cat to be out of the bag.
b. The cat is believed to be out of the bag.

We thus can conclude that the subject of the passive form is the argument which
corresponds to the object of the active. This also means that one cannot describe
the passive in terms of the respective mappings of agent and patient (e.g., the sub-
ject of a passive sentence is the verb’s patient argument), because the argument
realized as subject in sentences like (13b) and (15b) is not assigned a semantic
role, patient or otherwise, by the verb.
Morphosyntactic changes: In addition to changes in argument realization,
the passive construction requires the auxiliary verb be, which requires the passive
form of the verb (a subtype of the en form, see 5.2.1). In addition to ‘passive be,’
italicized in the examples below, there can be other auxiliary verbs, with the
passive auxiliary last in the sequence:
(16) a. Jean drove the car. → The car was driven.
b. Jean was driving the car. → The car was being driven.
c. Jean will drive the car. → The car will be driven.
d. Jean has driven the car. → The car has been driven.
e. Jean has been driving the car. → The car has been being driven.
f. Jean will have been driving the car. → The car will have been being driven.

Semantics: A passive verb preserves the semantic-role assignments of its
active counterpart: whatever semantic role the active-voice postverbal argument
bears is the semantic role of the passive-voice subject. The highest-ranking
argument of an active verb is expressed in the passive either as an optional
oblique argument, the object of a PP headed by the preposition by, or not at all:
(17) a. Pat handed Chris a note.
b. Chris was handed a note (by Pat).

(18) a. TV puts ideas into children’s heads.


b. Ideas are put into children’s heads (by TV).

The foregoing observations mean that any grammar must capture the following
basic properties of passive:

• Passive turns the active object into the passive subject.


• Passive leaves other aspects of the COMPS value of the active verb
unchanged.
• Passive optionally allows the active subject to be the object in a PP
headed by by.
• Passive makes the appropriate morphological change in the form
of the main verb and requires that this verb be the complement of
auxiliary be.
• Passive preserves the semantics of the verb lexeme.

9.3 Approaches to Passive

There are several potential ways to capture the syntactic and semantic
relationships between active and passive forms. Given our discussion so far, one
might think of relying on grammatical categories in phrase structure (NP, VP, S,
etc.), or on surface valence properties (SPR and COMPS), often informally char-
acterized as grammatical functions, or on semantic roles (agent, patient, etc.).
In what follows, we will see that we need to refer to all of these aspects of the
representation in a proper treatment of English passive constructions.

9.3.1 From Structural Description to Structural Change


Before we look into syntactic analyses for the formation of passive
sentences, it is worth reviewing Chomsky’s (1957) Passive Formation Construc-
tion formulated in terms of a structural description (SD) and a structural change
(SC):
(19)

This rule means that if there is anything that fits the SD in (19), it will be changed
into the given SC: that is, if we have a string of the form X – NP1 – Y – V –
NP2 – Z (in which X, Y, and Z are variables), it can be changed into X –
NP2 – Y – be – V+en – Z – (by NP1). For example:
(20)

As indicated in the structural-change (SC) component, the first NP becomes an
optional by-PP, while the second NP moves into the first NP's position. The rule
also adds be and changes the VFORM of the main verb to the passive form.
Although this type of rule does not reflect the constituenthood of the expressions in
the particular sentence and is not sufficient to account for all the different pat-
terns of passivization that we will see below, this early analysis has influenced
the development of subsequent transformational analyses of English passive
constructions.
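Leaving aside the variables X, Y, and Z, the effect of the SD/SC rule can be mimicked as a string rewrite. The Python sketch below is our own drastic simplification: the input is assumed to be pre-segmented, the participle is looked up in a toy lexicon rather than computed, and auxiliaries and subject–verb agreement are ignored.

```python
# Toy string-rewrite version of the passive SD/SC rule in (19):
# NP1 - V - NP2  ==>  NP2 - be - V+en - by NP1.
# The participle ("V+en") comes from a tiny hand-made lexicon.

EN_FORM = {"deceived": "deceived", "wrote": "written"}

def passivize(np1, verb, np2):
    return f"{np2} was {EN_FORM[verb]} by {np1}"
```

For instance, passivize('Kim', 'deceived', 'Bill') returns 'Bill was deceived by Kim'. Nothing in this rewrite reflects constituenthood, which is exactly the shortcoming of SD/SC rules noted in the text.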

9.3.2 A Transformational Approach


A typical transformational approach assuming movement for passive
involves the operation shown in (21) (Chomsky, 1982):
(21)

The object Bill moves to the subject position and the verb be moves to I (Infl)
position, giving the output sentence Bill was deceived. The analysis is based on
these three major assumptions:

• Move α: Move a category.


• Case Theory: The NP needs Case. The subject receives NOM (nom-
inative) case from tense, and the object receives ACC (accusative)
case from the active transitive verb governing it.2
• A passive participle does not license ACC case.
In the lower position inside the VP, the NP Bill in (21) cannot receive ACC
case, since by assumption the passive participle form deceived cannot assign
any case. In other words, the passive participle form is a kind of intransitive
verb, even though its semantic argument Bill starts out in the structural object
position. Without any movement, the structure of the string was deceived Bill
would violate Case Theory, because every NP must be assigned case. If the NP
Bill moves to the subject position, where case is assigned by the tensed verb was,
Case Theory is satisfied and the structure is well-formed.
Although this kind of derivational analysis appealingly captures the rela-
tionships between canonical active and passive patterns, it leaves many facts
unexplained. In what follows, we will see more complex passive constructions
in English which require us to refer not only to grammatical categories and
grammatical functions but also to semantic and pragmatic constraints on passive.

9.3.3 A Construction-Based Approach


Once we look at a wider variety of passive patterns, we can see the
need to refer to lexical and semantic properties of transitive verbs. We establish
this need by reference to several phenomena. First, there are apparently transitive
verbs that lack a passive counterpart. For example, transitive verbs like resemble
or fit do not have passive counterparts (see Section 9.5 also):
(22) a. The model resembles Kim in nearly every detail.
b. *Kim is resembled by the model in nearly every detail.
(23) a. The coat does not fit you.
b. *You are not fitted by the coat.

Such transitive verbs presumably fit the tree structure in (21), but they cannot be
passivized.
Second, there are verbs like bear, rumor, say, and repute that are used only in
the passive, as seen in the following contrasts:
(24) a. I was born in 1970.
b. It is rumored that he is on his way out.
c. John is said to be rich.
d. He is reputed to be a good scholar.
(25) a. *My mother bore me in 1970.
b. *Everyone rumored that he was on his way out.

2 In English, CASE is morphologically visible only on pronouns: He is nominative, whereas him is
accusative, for example.

c. *They said him to be rich.


d. *They reputed him to be a good scholar.

Unlike, say, resemble, these verbs are not typically used as active forms. Intrin-
sically passive verb lexemes are difficult to explain if we rely on the assumption
that passives are derived from actives via configurational transformation rules.
Third, the subject in a passive sentence need not be a patient:

(26) a. Not much is known about the effects of these medications on children.
b. It was alleged by the victim that he was kidnapped.
c. That laughter is the sign of joy is doubted by no one.

We can better capture such lexical idiosyncrasies if we assume passive to be a


relationship between two classes of lexemes.
The PASSIVE CONSTRUCTION, shown in (27), represents the ‘passive rule’ as
a relationship between an input class of transitive lexemes and a class of passive
lexemes:3

(27) PASSIVE CONSTRUCTION:

     [v-tran-lxm
      ARG-ST <XPi, [2]YP, ...>]
     →
     [passive-v
      SYN | HEAD | VFORM pass
      ARG-ST <[2]YP, ..., (PPi[by])>]

This derivational rule says that if there is a transitive verb lexeme (v-tran-lxm)
selecting two arguments, it has a corresponding passive verb lexeme (passive-v).
This derivationally related verb selects the second argument of the input tran-
sitive verb as its first argument, which will be realized as the subject. The first
argument of the input is mapped to an optional PP argument of the derived verb,
with the remaining arguments (marked '...') unchanged. The derivation also effects
a change of the VFORM value to pass, reflecting the morphological process.4
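The rule in (27) can also be pictured as a partial function over lexeme entries. The Python sketch below is our own illustration, not an implementation of the book's formalism: attribute names like P-FORM and OPT are informal stand-ins. Note that the function is partial: lexemes outside v-tran-lxm, such as resemble or fit, simply have no output.

```python
# Informal sketch of the PASSIVE CONSTRUCTION in (27): the second
# argument of a transitive lexeme becomes the first argument of the
# passive lexeme, the old subject is demoted to an optional by-PP
# bearing the same index, and any further arguments carry over.

def passivize_lexeme(entry):
    if entry.get("TYPE") != "v-tran-lxm":
        return None                        # e.g. resemble, fit
    subj, obj, *rest = entry["ARG-ST"]
    by_pp = {"CAT": "PP", "P-FORM": "by",
             "INDEX": subj.get("INDEX"), "OPT": True}
    return {"TYPE": "passive-v",
            "VFORM": "pass",
            "ARG-ST": [obj, *rest, by_pp]}

# The input entry for 'send', simplified from (29).
send = {"TYPE": "v-tran-lxm", "FORM": "send",
        "ARG-ST": [{"CAT": "NP", "INDEX": "i"},
                   {"CAT": "NP", "INDEX": "j"},
                   {"CAT": "PP", "P-FORM": "to"}]}

sent = passivize_lexeme(send)
```

The output's ARG-ST is the list NP_j, PP[to], (PP_i[by]), matching (29), and an entry typed outside v-tran-lxm is correctly left without a passive counterpart.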
Let us consider what kinds of passive sentences this derivation can give rise
to. Consider the following pair:

(28) a. They send her to Seoul.


b. She is sent to Seoul (by them).

According to the derivational rule in (27), the active verb send has a counterpart
passive verb, sent:

3 The present analysis, in which verbs are classified into different lexical types in accordance with
their morphosyntactic behavior (Sag et al., 2003; Kim and Sells, 2008; Sag, 2012; Kim, 2016),
implies that there are verb lexemes that select two arguments but that are excluded from the
type v-tran-lxm, and further that there are verb lexemes which belong to the passive-v from the
beginning (not derived from v-tran-lxm). Verbs like resemble belong to the former group, while
those like rumor belong to the latter.
4 As we noted in Chapter 5, in terms of the morphological form, the VFORM pass is a subtype of
en.
(29)
     [FORM send
      ARG-ST <NPi, [2]NP, [3]PP[to]>]
     →
     [FORM sent
      SYN | HEAD | VFORM pass
      ARG-ST <[2]NP, [3]PP[to], (PPi[by])>]
As seen here in the output form, the passive sent takes three arguments: a subject
identical to the second argument of the transitive verb, an intact PP inherited
from the transitive verb, and an optional PP whose index value is identical to
that of the transitive verb's subject.5 These three arguments will be realized as
SPR and COMPS elements in accordance with the ARC (Argument
Realization Constraint). This output lexical entry can then be embedded in the
following structure for (28b):
(30)

As shown in (30), the passive sent combines with its PP[to] complement, forming
a VP that still requires a SPR. This VP functions as the complement of the
auxiliary be (is). As we saw in Chapter 8, the passive copula be is a raising
verb, with the lexical entry repeated in (31). Its subject (SPR value) is identical
to its VP complement's subject, she:
(31)  [aux-be-pass
       FORM be
       SYN|VAL [SPR ⟨[1]NP⟩, COMPS ⟨[2]VP[VFORM pass, SPR ⟨[1]NP⟩]⟩]
       ARG-ST ⟨[1]NP, [2]VP⟩]

5 As noted in Section 6.5.2, Chapter 6, a preposition functioning as a marker rather than as a predicator with semantic content does not contribute to the meaning of the head PP. This means that this preposition's index value is identical to that of its object NP.

The SPR requirement of be is passed up to the highest VP in accordance with the
VALP, which regulates the values of SPR and COMPS. When this VP combines
with the subject she in accordance with the HEAD-SPECIFIER CONSTRUCTION,
the passive structure is well-formed.
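The derivational step in (29) can be emulated in a few lines of code. The sketch below is illustrative only; the dictionary encoding of lexical entries and the passivize helper are our simplifying assumptions, not part of the HPSG formalism:

```python
# A toy encoding of lexical entries: each argument is a (category, index) pair.
# This sketches the passive derivation in (29), not the full formalism.

def passivize(active, passive_form):
    """Map an active transitive lexeme to its passive counterpart:
    the second argument becomes the subject, the remaining arguments
    stay put, and the input subject resurfaces as an optional PP[by]."""
    subj, *rest = active["ARG-ST"]
    subj_cat, i = subj                       # e.g., ("NP", "i")
    return {
        "FORM": passive_form,
        "VFORM": "pass",                     # morphosyntactic change
        "ARG-ST": rest + [("(PP[by])", i)],  # demoted agent, optional
    }

send = {"FORM": "send", "VFORM": "fin",
        "ARG-ST": [("NP", "i"), ("NP", "j"), ("PP[to]", "k")]}

sent = passivize(send, "sent")
# sent["ARG-ST"] == [("NP", "j"), ("PP[to]", "k"), ("(PP[by])", "i")]
```

The remapping mirrors (29): the first NP of send is demoted to an optional by-phrase, and the remaining arguments shift up, so the original object is realized as the passive subject.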
The PASSIVE CONSTRUCTION can also give rise to sentences like (32b),
whose subject is not an NP but a CP:
(32) a. They widely believed [that John was ill].
b. [That John was ill] was widely believed.

The passive verb believed in (32b) is derived from its active counterpart in
(32a). This derivation generates the passive form of believed, as given in
the following:
(33)  [FORM believe, ARG-ST ⟨NPi, [2]CP⟩]
      →
      [FORM believed, SYN|HEAD [POS verb, VFORM pass], ARG-ST ⟨[2]CP, (PPi)⟩]

The output passive verb believed can then project a structure like the following:
(34)

The passive verb believed first combines with its optional complement by them
and then with the modifier widely. The resulting VP then combines with the
raising verb be in accordance with the HEAD-COMPLEMENT CONSTRUCTION.
In this system, each local structure is licensed by the grammar rules and
principles we have defined, and the CP subject of be is thereby linked to that
of believed.
The same account also holds when the complement is an indirect question:

(35) a. They have decided [which attorney will give the closing argument].
b. [Which attorney will give the closing argument] has been decided (by them).

The active decided selects an interrogative sentence as its complement, and the
PASSIVE CONSTRUCTION can apply to this verb:6

(36)  [FORM decide, SYN|HEAD|POS verb, ARG-ST ⟨NPi, Sj[QUE +]⟩]
      →
      [FORM decided, SYN|HEAD [POS verb, VFORM pass], ARG-ST ⟨Sj[QUE +], (PPi[by])⟩]

The output passive decided will then license the following structure (for
simplicity, we do not show COMPS specifications whose values are empty, ⟨ ⟩):

(37)

6 We assume that indirect or direct questions are marked by the feature QUE (question). See
Chapter 10.

The passive verb decided selects an optional PP[by] as its complement and an
indirect question as its subject, as in the lexical entry in (36). To keep the
structure in (37) simple, we have assumed the option without this PP. The raising
verb been first combines with the VP headed by decided, and the resulting VP in
turn combines with the auxiliary raising verb has. Notice that since be and have
are raising verbs, each shares its specifier value with that of its VP complement.
By these identifications, the subject of has is identical to that of the passive
verb decided.

9.4 Prepositional Passives

In addition to the passivization of an active transitive verb, English
allows a 'prepositional verb' to undergo passivization, as illustrated in the
following:
(38) a. You can rely on Ben.
b. Ben can be relied on.
(39) a. They talked about the scandal for days.
b. The scandal was talked about for days.

As seen here, the object of the preposition in the active sentence can function as
the subject of the passive sentence. Notice that such prepositional passives are
possible only with verbs selecting a PP headed by a specified preposition:
(40) a. The plan was approved of by my mother. (My mother approved of the plan.)
b. The issue was dealt with promptly. (They dealt with the issue promptly.)
c. That’s not what was asked for. (That’s not what they asked for.)
d. This should be attended to immediately. (We should attend to this immedi-
ately.)
(41) a. *Boston was flown to. (They flew to/near/by Boston.)
b. *The capital was gathered near by a crowd of people. (A crowd of people
gathered near/at the capital.)
c. *The hot sun was played under by the children. (The children played
under/near the hot sun.)

The prepositions in (40) are all selected by the main verbs (no other prepositions
can replace them). By contrast, the prepositions in (41) are not selected
by the main verb, since they can be replaced by others, as shown in their active
counterparts.7
One thing to observe is that there is a contrast between active and pas-
sive prepositional verbs with respect to the appearance of an adverb (see
Chomsky, 1972; Bresnan, 1982b). Observe the following:
(42) a. That's something I would have paid twice for.
     b. These are the books that we have gone most thoroughly over.
     c. They look generally on John as selfish.

7 See Exercise 5 of this chapter for examples (e.g., The bed was slept in) where the prepositional passive is possible with an adjunct PP.

(43) a. *Everything was paid twice for.


b. *Your books were gone most thoroughly over.
c. *He is looked generally on as selfish.

The contrast here shows us that, unlike the active, the passive does not allow any
adverb to intervene between the verb and the preposition.
There are two possible structures that can capture these properties: ternary
and reanalysis structures. The ternary structure generates a flat structure like the
following:

(44)

Contrasting with this flat or ternary structure, there is another possible structure
assumed in the literature:

(45)

This structure differs from (44) in that the passive verb and the preposition form
a constituent (the 'reanalysis'). Both (44) and (45) can capture the cohesion
between the prepositional verb and the preposition. Even though both have their
merits, we choose the structure in (45), in which the passive verb and the
preposition form a unit. Evidence for this kind of unitization comes from
environments in which the passive verb (but not the active verb) forms a lexical
unit with the following preposition:

(46) a. Pavarotti relied on Loren and Bond on Hepburn.


b. *Pavarotti relied on Loren and Bond Hepburn.
c. Loren was relied on by Pavarotti and Hepburn by Bond.
d. *Loren was relied on by Pavarotti and Hepburn on by Bond.

What we can observe here is that, unlike its active counterpart, the passive
relied on acts like a lexical unit in the gapping process: The passive relied
cannot be gapped on its own, stranding on.

This contrast supports the reanalysis structure for the passive. The HEAD-LEX
CONSTRUCTION that we employed to license verb-particle and finite-auxiliary-negator
combinations also licenses the combination of the prepositional passive V with the
following P (which counts as 'LEX' in the sense that it is not a prosodically
heavy element). We repeat the construction here:
(47) HEAD - LEX CONSTRUCTION :
V → V, X[LEX +]

The construction allows a head V to combine with a LEX element such as a
preposition in the prepositional passive verb construction.8
The next question, then, is how we license prepositional passives.
We first need to ensure that the object of the prepositional verb is promoted to
the subject in the passive, as shown in the following:
(48) PREPOSITIONAL PASSIVE CONSTRUCTION:
     [prep-v, ARG-ST ⟨NPi, PPj[PFORM [4]]⟩]
     →
     [pass-prep-v, SYN|HEAD|VFORM pass, ARG-ST ⟨NPj, P[LEX +, PFORM [4]], (PPi[by])⟩]

This rule ensures that a prepositional verb (prep-v) has a counterpart
passive verb. The passive verb selects three arguments: a subject NP whose
index value (j) is identical to that of the PP of the prepositional verb (in
other words, to that of the object of the preposition), a preposition whose
form (PFORM) value is inherited from the prepositional verb, and an optional
PP[by] expressing the agent argument.
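As an illustration, the remapping in (48) can be sketched in a few lines of Python; the dictionary encoding of lexical entries is our simplifying assumption, not part of the formalism:

```python
# Illustrative sketch of the PREPOSITIONAL PASSIVE CONSTRUCTION in (48).
# Lexical entries are toy dictionaries; arguments are (category, index) pairs.

def prep_passivize(active, passive_form):
    """NP_i + PP_j[PFORM p]  ->  NP_j + P[PFORM p] + optional PP_i[by]."""
    (subj_cat, i), (pp_cat, j) = active["ARG-ST"]
    pform = active["PFORM"]                  # the selected preposition
    return {
        "FORM": passive_form,
        "VFORM": "pass",
        "ARG-ST": [("NP", j),                # object of P promoted to subject
                   (f"P[{pform}]", None),    # bare preposition, LEX +
                   ("(PP[by])", i)],         # demoted agent, optional
    }

look = {"FORM": "look", "PFORM": "into",
        "ARG-ST": [("NP", "i"), ("PP[into]", "j")]}

looked = prep_passivize(look, "looked")
# looked["ARG-ST"] == [("NP", "j"), ("P[into]", None), ("(PP[by])", "i")]
```

Note that the preposition survives as a separate, bare argument rather than disappearing into the PP: this is what feeds the HEAD-LEX combination of verb and preposition in the syntax.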
Let's see how the PREPOSITIONAL PASSIVE CONSTRUCTION and the HEAD-LEX
CONSTRUCTION interact to license so-called prepositional passive
sentences in English.
(49) a. The lawyer looked into the document.
b. The document was looked into by the lawyer.

The active prepositional verb look serves as the input to the
PREPOSITIONAL PASSIVE CONSTRUCTION:
   
(50)  [FORM look, ARG-ST ⟨NPi, PPj[into]⟩]
      →
      [FORM looked, ARG-ST ⟨NPj, P[into], (PPi[by])⟩]

The output passive verb now has three arguments: The first argument will be
realized as the subject; the remaining two elements are a preposition whose PFORM
is identical with that of the input PP and an optional PP[by] linked to the input
subject. This output will then project a structure like the following:

(51)

8 In languages like Korean, German, and even French, such a syntactic combination is prevalent in the formation of complex predicates. See Kim (2004b).

The HEAD-LEX CONSTRUCTION in (47) allows the passive verb to combine first with
the preposition into, still forming a lexical element. This resulting lexical
element then combines with its PP complement by the lawyer in accordance with
the HEAD-COMPLEMENT CONSTRUCTION, which requires that the complement
with which the head combines be phrasal.
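The two-step combination just described can be mimicked in miniature; the node encoding below is our illustrative assumption. The HEAD-LEX CONSTRUCTION demands a [LEX +] sister, while the HEAD-COMPLEMENT CONSTRUCTION demands a phrasal one:

```python
# Toy illustration of the two combination steps licensing (49b):
# V[pass] + P (LEX +) forms a lexical unit, which then takes a phrasal
# complement. Each rule checks the LEX status of the non-head daughter.

def head_lex(head, dtr):
    assert dtr["LEX"], "HEAD-LEX requires a LEX + sister"
    return {"cat": "V", "LEX": True, "form": head["form"] + " " + dtr["form"]}

def head_complement(head, comp):
    assert not comp["LEX"], "HEAD-COMPLEMENT requires a phrasal complement"
    return {"cat": "VP", "LEX": False, "form": head["form"] + " " + comp["form"]}

looked = {"cat": "V", "LEX": True, "form": "looked"}
into = {"cat": "P", "LEX": True, "form": "into"}
by_the_lawyer = {"cat": "PP", "LEX": False, "form": "by the lawyer"}

vp = head_complement(head_lex(looked, into), by_the_lawyer)
# vp["form"] == "looked into by the lawyer"
```

The ordering matters: because head_lex produces a [LEX +] result, the verb-preposition unit is built before any phrasal complement is attached, which is exactly why no adverb can intervene between the passive verb and its preposition.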

9.5 The Get-Passive

The passive constructions that we have discussed so far are headed
by the copular verb be, but there is another type of passive, headed by the verb
get. Consider the following pair:

(52) a. You must come back in spring to see them. The man did; he was fired.
b. He got fired by the liberals and rehired by Fox.

The be passive in (52a) and the get-passive in (52b) both describe a situation in
which an employer fired someone. Note that be and get passives are not always
interchangeable, as illustrated in the following (Huddleston and Pullum, 2002):

(53) a. Kim was/*got seen to leave the lab with Dr. Smith.
b. He saw Kim get/*be mauled by my brother’s dog.

In (53a), the head verb must be be, while in (53b) the head verb can only be
get.9 This contrast indicates that there must be some differences between the two
passives.10
The first main difference comes from the status of be and get. While the verb
be is a typical auxiliary, get is not (cf. Haegeman, 1985). This can be observed
from the NICE properties discussed in Chapter 8:
(54) a. He was not fired by the company.
b. Was he fired by the company?
c. He wasn’t fired by the company.
d. John was fired by the company, and Bill was too.

(55) a. *He got not fired by the company.


b. *Got he fired by the company?
c. *He gotn’t fired by the company.
d. *John was fired by the company, and Bill got too.

As seen from the contrast here, the passive got fails every test for auxiliary
status: The verb cannot be followed by sentential negation (55a), cannot undergo
subject-auxiliary inversion (55b), has no contracted negative form (55c), and
cannot strand before an elided VP (55d). The grammatical alternatives are those
in which the verb get is used as a lexical verb:
(56) a. He didn’t get fired by the company.
b. Did he get fired by the company?
c. He didn’t get fired by the company.
d. John got fired by the company, and Bill did too.

These data indicate that the passive get is not an auxiliary verb.
Also note that the passive verb get is different from typical raising verbs in that
its subject referent cannot be an expletive (it or there) but must be understood
to be affected by the action in question (Taranto, 2005). That is, the status of
the subject referent is understood to be changed by the action performed by the
agent. Consider the following:
Consider the following:
(57) a. The letter was written by you and no one else.
     b. *The letter got written by you and no one else.

The letter came into existence only after the action of writing was carried out,
so it was in a sense not affected. For an individual to be affected by
an action, it needs to exist at the time that the action happens. This
means that the preexistence of the subject referent is a necessary condition
(Taranto, 2005):

(58) The band/?TV program/?Volcanic eruption got watched by thousands.

9 The alternative for (53b) is He saw Kim mauled by my brother's dog.
10 According to Collins (1996), the get-passive can be classified into five types:

   a. Central: A woman got phoned by her daughter who was already on the plane.
   b. Psychological: I got frustrated by the high level of unemployment.
   c. Reciprocal/Reflexive: She never got herself dressed up for work.
   d. Adjectival: His clothes got entangled in sewer equipment.
   e. Formulaic: I got fed up with sitting in front of my computer.

The central get-passive has an active counterpart with identical propositional meaning, although its agent can in general be inferred from context. Our discussion here centers on this central type.

The ‘affected’ condition can also account for the awkwardness of the following
examples:

(59) a. *Bull-headed man got feared by some.


b. *Eisenhower got followed by Kennedy.
c. *He got seen by the teacher.
d. *His campaign got invented by a hostile press.

All these examples, possible with the be-passive, contain lexical verbs that are
either stative or do not entail a change of state. For example, fearing someone or
seeing someone does not affect the individual.
In sum, the get-passive verb is specified as [AUX −] and requires
a passive VP as its complement. The get-passive typically focuses on what
happened as the result of the action described by the participial complement
predicate, and the subject referent of the get-passive is necessarily understood
to have been affected by the action. The following lexeme represents
passive get:

(60)  [FORM get
       SYN|HEAD|AUX −
       ARG-ST ⟨NPj, VP[SPR ⟨NPj⟩, VFORM pass, IND s1]⟩
       SEM [IND s0, RELS ⟨[PRED get-affected-rel, PAT j, SIT s1]⟩]]

In (60), we see that the verb get selects two arguments: a subject NP and a
VP complement whose VFORM value is pass. The subject of the verb get is an
affected patient in the situation s1. This lexeme will then project a sentence like
the following:

(61)

As seen in (61), the passive verb fired requires a patient subject NP and an
optional agent PP. This subject NP has the same index value as the subject of the
VP that the verb get requires, and it bears the semantic role of patient. This
captures the fact that a get-passive sentence describes an event which has some
impact on the subject referent. It also accords with the fact that the get-passive
is found only with dynamic verbs, which describe the action in question (Collins,
1996; Downing, 1996; Taranto, 2005). The predicates typically used in the
get-passive are nonstative verbs like caught, paid, done, dressed, fired, tested,
picked, thrown, killed, and asked. It is not natural for the complement of get to
be a stative participle:
(62) a. It was/*got believed that the letter was a forgery.
b. He is/*got feared by most of the staff.
c. The teacher was/*got liked by everybody.
Verbs like believe, fear, and like denote states and are thus difficult to construe
as change-of-state verbs.
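The two constraints just reviewed (get takes a passive VP complement, and that VP's head must be a dynamic, change-of-state participle) can be illustrated with a small sketch; the lexical entries and the dynamic flag are our simplifying assumptions, not part of the grammar:

```python
# Toy check of the get-passive's two constraints: the complement must be a
# passive VP, and its head verb must be dynamic (entail a change of state),
# since the subject referent has to be affected. Entries are illustrative.

LEXICON = {
    "fired":    {"VFORM": "pass", "dynamic": True},
    "promoted": {"VFORM": "pass", "dynamic": True},
    "believed": {"VFORM": "pass", "dynamic": False},  # stative: no get-passive
    "feared":   {"VFORM": "pass", "dynamic": False},
}

def licenses_get_passive(participle):
    entry = LEXICON[participle]
    return entry["VFORM"] == "pass" and entry["dynamic"]

assert licenses_get_passive("fired")         # He got fired.
assert not licenses_get_passive("believed")  # *It got believed that ...
```

A fuller treatment would derive the dynamicity requirement from the get-affected-rel semantics in (60) rather than stipulating a flag, but the flag suffices to show how the stative cases in (62) are excluded.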
The effect conveyed by a get-passive sentence need not be negative:
(63) a. He got promoted multiple times.
b. The story got published and won some recognition.
As shown by such examples, the get-passive is characteristically used in
clauses involving adversity, but it can also describe a beneficial situation
(Collins, 1996).11
11 The get-passive has other pragmatic constraints: It usually conveys the speaker’s personal
involvement or attribution of responsibility to the subject referent, or it reflects the speaker’s
opinion about the desirability of the event’s outcome. See Collins (1996) for further discussion.

9.6 Conclusion

This chapter has offered a detailed description of the formal properties
of English passive constructions. Passive sentences are systematically related
to active sentences. After reviewing core properties of passive constructions in
English, we discussed the major features of prior transformational analyses and
the empirical problems that prevent them from capturing regularities, as well as
peculiarities, of English passive constructions.
To avoid analytical problems arising from transformational analyses, this
chapter suggested a construction-based analysis of be-passives that leverages
multiple grammatical properties, including those related to grammatical cate-
gories, grammatical functions, and semantic/pragmatic constraints. This analysis
was extended to the prepositional passive as well as get-passive constructions,
both of which behave quite differently from the be-passive constructions. We
have seen that the construction-based framework offers a way to account for
relationships of 'family resemblance' that unite seemingly divergent
constructions.
In Chapter 10, we will again see that a construction-based approach can shed
light on a structural relationship that has commonly been modeled by means of
syntactic movement – that between interrogative sentences and their declarative
counterparts.

Exercises

1. Draw tree structures for each of the following sentences and then
provide a lexical entry for the italicized passive verb:
a. Peter has been asked to resign.
b. I assume the matter to have been filed in the appropriate records.
c. Smith wants the picture to be removed from the office.
d. The events have been described well.
e. Over 120 different contaminants have been dumped into the
river.
f. Heart disease is considered the leading cause of death in the
United States.
g. The balloon is positioned in an area of blockage and is
inflated.
h. There was believed to have been a riot in the kitchen.
i. Cancer is now thought to be unlikely to be caused by hot dogs.

2. Provide the active counterpart of each of the following examples
and explain how we can produce each of them, together with tree
structures, lexical entries, and grammar rules:

a. That we should call the police was suggested by her son.


b. Whether this is feasible hasn’t yet been determined.
c. Paying taxes can’t be avoided.

3. Verbs like get and have can be used in so-called ‘pseudo-passives’:


(i) a. Frances has had the drapes cleaned.
b. Shirley seems to have had Fred promoted.
(ii) a. Nina got Bill elected to the committee.
b. We got our car radio stolen twice on holiday.

In addition to these, have and get allow constructions like the
following:
(iii) a. Frances has had her clean the drapes.
b. Nina got them to elect Bill.

After drawing tree structures for the above examples, discuss the lexical
properties of have and get as exemplified here. For example,
what are their ARG-ST lists?
4. Consider the following prepositional passive examples and then
analyze them as deeply as you can with tree structures:
(i) a. Ricky can be relied on.
b. The news was dealt with carefully.
c. The plaza was come into by many people.
d. The tree was looked after by Kim.

In addition, consider the examples in (ii) and their passive counterparts
in (iii):
(ii) a. We cannot put up with the noise anymore.
b. He will keep up with their expectations.
(iii) a. This noise cannot be put up with.
b. Their expectations will be kept up with.

Can the analysis given in this chapter account for such examples?
Now observe the following examples, which illustrate two different
kinds of passive:
(iv) a. They paid a lot of attention to the matter.
b. The son took care of his parents.
(v) a. The matter was paid a lot of attention to.
b. A lot of attention was paid to the matter.

Can you think of any way to account for such examples?


5. We have seen that when the verb does not select a specified preposi-
tion, it usually does not undergo passivization. However, observe the
following contrast:

(i) a. *New York was slept in.


b. The bed was slept in.

(ii) a. *The lake was camped beside by my sister.


b. The lake is not to be camped beside by anybody.

Why do we have such a contrast with the same type of prepositional
verb? In answering this, think about the following contrast too, with
respect to semantic or pragmatic factors:

(iii) a. *Six inches were grown by the boy.


b. *A pound was weighed by the book.
c. *A mile to work was run by him.

(iv) a. The beans were grown by the gardener.


b. The plums were weighed by the greengrocer.

Can your semantic or pragmatic constraints explain the following
contrast too? If not, what kind of generalization can you think of
to account for the contrast here?
(v) a. *San Francisco has been lived in by my brother.
b. The house has been lived in by several famous personages.
(vi) a. *Seoul was slept in by the businessman last night.
b. This bed was surely slept in by a huge guy last night.

6. The (a) sentences in the following are active, whereas the (b)
sentences are all passive:
(i) a. John washed the trousers easily.
b. The trousers were washed easily.
c. The trousers wash easily.
(ii) a. They peel ripe oranges quickly.
b. Ripe oranges are peeled quickly.
c. Ripe oranges peel quickly.

Note that the (c) examples are instances of what is often called the 'middle'
construction. Check whether the verbs in the following also allow these
triplets: active, passive, and middle. In answering this, construct relevant
examples and also discuss all of the grammatical properties you
can find in these kinds of middle examples:
(iii) close, break, melt, bribe, translate, roll, crush

7. Provide a tree structure for each example and explain the rules or
principles that are violated in the ungrammatical versions:
(i) a. There is/*are believed to be a sheep in the park.
b. There *is/are believed to be sheep in the park.

c. There seems/*seem to be no student absent.


d. There is/*are likely to be no student absent.

In accounting for such sentences, note that multiple constructions


are involved in licensing them. In particular, constructions like rais-
ing and passive play key roles in forming such sentences. Agreement
is another key factor here. As we saw earlier, there is agreement
between the copula be and the postcopular NP in so-called ‘there’
constructions, as shown again here:
(ii) a. There is/*are only one chemical substance involved in nerve
transmission.
b. There *is/are more chemical substances involved in nerve
transmission.
10 Interrogative and Wh-question Constructions

10.1 Clausal Types and Interrogatives

Like other languages, English offers distinct sentence patterns for
distinct types of speech acts:

(1) a. Declarative: Shira is clever.


b. Interrogative: Is Shira clever? Who is clever?
c. Exclamative: How clever you are!
d. Imperative: Be very clever.

Each clause type has a dedicated function. For example, a declarative makes
a statement, an interrogative asks a question, an exclamative expresses surprise
about the degree of some property, and an imperative issues a directive. However,
these correspondences are not always one-to-one. For example, the declarative
in (2a) represents not a statement but a question, while the interrogative in (2b)
actually indicates a directive:

(2) a. I ask you if this is what you want.


b. Would you mind taking out the garbage?

In this chapter, we will focus on the syntactic structure of interrogatives, putting
aside the mapping relationships between form and function.
There are two basic types of interrogative: yes-no (or polar) questions and
wh-questions:

(3) a. Yes-no questions: Can the child read the book?


b. Wh-questions: What can the child read?

Yes-no questions are different from their declarative counterparts in having a
subject and an auxiliary verb in an inverted order. As we saw in Chapter 8,
such yes-no questions are generated through the combination of an inverted finite
auxiliary verb with a nonfinite S:


(4)

In addition to featuring this so-called subject-auxiliary inversion,
wh-questions are introduced by one of the interrogative words, for example,
who, what, and how:
(5) a. [Who] did John call last night?
b. [Who] made that mistake?
c. [With what] did the baby hit the toy?
d. [How] did he eat the food?

The wh-phrases formed from these wh-words have a variety of functions in the
clause. As seen in the examples in (5), a wh-expression can be an object, subject,
or oblique complement, or even an adjunct. Note that the wh-questions have a
bipartite structure: a wh-phrase and an S that is incomplete in the sense that the
complement of some predicator within it is missing:
(6) a. [NP Which man] [did you talk to ]?
b. [PP To which man] [did you talk ]?
c. [AP How ill] [has Hobbs been ]?
d. [AdvP How frequently] [did Hobbs see Rhodes ]?

As in these examples, each wh-question consists of a wh-phrase followed by
an inverted sentence with a missing phrase (indicated by the underscore). The
sentence must have a missing element:
(7) a. *[Which man] did you talk to Bill?
b. *[How ill] has Hobbs been sick?

The wh-phrase (filler) and the missing phrase (gap) must have identical syntactic
categories as a way of ensuring their linkage:
(8) a. *[NP Which man] [did you talk [PP ]]?
b. *[PP To which man] [did you talk to [NP ]]?

Another important property is that an unlimited number of clause embeddings
may occur between the filler and the gap – a situation that the literature refers to
as a long-distance (or unbounded) dependency:
(9) a. [[Who] do you think [Tom saw ]]?
b. [[Who] do you think [Mary said [Tom saw ]]]?
c. [[Who] do you think [Hobbs imagined [Mary said [Tom saw ]]]]?

Provided that the wh-expression can be construed as filling an argument or
adjunct role of a predicator in the sentence that is missing an element of just that
type, there can be an unbounded distance between filler and gap. We can observe
a similar phenomenon in the so-called TOPICALIZATION CONSTRUCTION:
(10) a. Most dogs, Tom didn’t see .
b. Most dogs, Mary thought Tom didn’t see .
c. Most dogs, Hobbs said Mary thought Tom didn’t see .

This long-distance relationship characterizes wh-questions and other similar
constructions, including topicalization. We regard these constructions as
constituting a family of 'long-distance dependencies.'
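The filler-gap identity requirement in (8) and its unbounded reach in (9) can be pictured with a small sketch; the dictionary encoding of clauses is our illustrative assumption:

```python
# Toy illustration of a long-distance dependency: a clause is either a
# structure marked as containing a gap of some category, or an embedding of
# another clause. The filler licenses the sentence only if its category
# matches the gap's, no matter how deeply the gap is embedded.

def gap_category(clause):
    """Recursively find the category of the gap inside a clause."""
    if "embedded" in clause:
        return gap_category(clause["embedded"])
    return clause["gap"]

def licenses(filler_cat, clause):
    return filler_cat == gap_category(clause)

# [Who] do you think [Mary said [Tom saw __NP]]?
deep = {"embedded": {"embedded": {"gap": "NP"}}}
assert licenses("NP", deep)      # filler 'who' (NP) matches the NP gap
assert not licenses("PP", deep)  # category mismatch, as in (8)
```

The recursion has no depth bound, which is the computational analogue of the unbounded filler-gap distance in (9).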

10.2 Movement vs. Feature Percolation

There have traditionally been two means of representing the link
between the filler wh-phrase and its corresponding gap. One strategy is to assume
that the filler wh-phrase is moved to the sentence-initial position by movement
operations, as represented in (11) (Chomsky, 1981a):
(11)

The wh-phrase who originates in the object position of recommend and is then
moved to the specifier position of the intermediate phrase C′. The auxiliary verb
will is also moved from the V position to the C.
This kind of movement operation is an appealingly straightforward way
to capture the linkage between the filler and gap. However, the movement
analysis becomes less plausible when we consider examples like the
following:
(12) a. Who did Kim work for and Sandy rely on ?
b. *Who did Kim work for and Sandy rely ?
c. *Who did Kim work for and Sandy rely on Mary?

If we adopt a movement analysis for (12a), there must be an operation in which
the two NP gaps (marked by the underscores above) are collapsed into one NP
and become who. We cannot simply move one NP, because that would generate an
ill-formed example like (12c).
There is also a class of 'movement paradox' examples, provided by Bresnan
(2001). Consider the following topicalization examples, which illustrate the
same type of 'long-distance' relationship that wh-questions create:
(13) a. You can always rely on [this kind of assistance].
b. [This kind of assistance], you can always rely on .

(14) a. We endlessly talked about [the fact that she had quit the race].
b. [The fact that she had quit the race], we endlessly talked about .

In a movement approach, we derive both of the (b) examples from their
corresponding (a) examples by moving the NPs to the sentence-initial
position. However, not every putatively 'derived' example has a well-formed
source:
(15) a. *You can rely on that we will always help you.
b. [That we will always help you], you can rely on .

(16) a. *We endlessly argued about [that she had quit the race].
b. [That she had quit the race], we endlessly argued about .

(17) a. *This theory captures that arrows don’t stop in midair.


b. [That arrows don’t stop in midair] this theory captures .

As examples like (15)–(17) show, it is difficult to explain why the putative
source example is ungrammatical while the derived form is grammatical – a fact
that casts doubt on the claim that the paired examples are mediated by a
movement operation.
An alternative is to assume that there is no movement process at all and to posit
a mechanism of communication through the tree, known as feature percolation,
to license such wh-questions. For example, the information that an NP is missing
from its expected syntactic position following a verb or other predicator can be

shared within the tree so that the gap and its filler bear the same specifications
for the relevant features, for example, syntactic category.

(18)

Notations like NP/NP (read as 'NP slash NP') and S/NP ('S slash NP') here mean
that the category to the left of the slash is incomplete: It is missing one NP.
This missing information is percolated up to the point where the slash category
is combined with the filler who. Instead of movement operations, this strategy
relies on successive applications of a phrase-structure rule that creates a local
tree in which a constituent bearing a gap feature is combined with another
constituent, and the mother phrase bears the same value for the gap feature as
the gapped daughter.
This kind of analysis can be used to describe the contrast shown in (12a) and
(12b). Let us look at partial structures of these two examples:

(19)

In (19a), the missing gaps are both NPs, while in (19b), an NP and a PP are
missing. Since the mechanism of feature unification allows two nonconflicting
phrases to be unified into one, the two S/NP phrases in (19a) are merged into
one S/NP. Simply put, the whole coordinate structure is ‘missing an NP,’ and this
description also applies to each internal conjunct. However, in (19b) we cannot
combine the two phrases S/NP and S/PP into one because they have conflicting
slash values.
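This unification logic can be made concrete with a short sketch; representing SLASH (gap) values as plain Python strings is our illustrative simplification:

```python
# Toy unification of SLASH (gap) values in coordination: two conjuncts can
# form a coordinate phrase only if their missing-category specifications
# are compatible, in which case the mother inherits the shared value.

def unify_slash(conjunct1, conjunct2):
    """Return the coordinate phrase's SLASH value, or None if unification fails."""
    if conjunct1 == conjunct2:
        return conjunct1   # e.g., S/NP + S/NP -> S/NP
    return None            # e.g., S/NP + S/PP -> conflict

# (19a) Who did Kim work for __ and Sandy rely on __?  (both gaps are NPs)
assert unify_slash("NP", "NP") == "NP"

# (19b) the two conjuncts are missing an NP and a PP, respectively
assert unify_slash("NP", "PP") is None
```

Real unification operates over feature structures rather than atomic labels, so "compatible" is weaker than string identity, but the string version is enough to capture the contrast between (19a) and (19b).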

10.3 Feature Percolation with No Abstract Elements

10.3.1 Basic Systems


To describe the formal mechanism for feature percolation, we introduce
the feature GAP (whose value is a category missing a phrase of a particular
kind) and 'pass up' this feature through the repeated application of
phrase-structure rules.1 However, even within such an approach, there remains
the question of whether we must posit an empty element. An empty element is an
abstract entity introduced for analytic convenience. For example, the GAP feature
may 'start off' at the bottom of the tree as the attribute of a phrasal node that
dominates an invisible element t (trace) of category NP/NP (see (11), as in Gazdar
et al., 1985). Though the introduction of an empty element with no phonological
value seems to be a good way of propagating the GAP feature, examples
like the following raise issues that are not easily solved (Sag and Fodor, 1994;
Sag, 2000):
(20) a. *Who did you see [NP[NP ] and [NP a picture of [NP ]]]?
b. *Who did you compare [NP [NP ] and [NP ]]?
On the assumption that empty elements are identical to canonical phrases except
for the fact that they have no phonological values at all, nothing in the grammar
1 In much of the literature, this feature is known as ‘SLASH,’ as suggested by our discussion above.

would disallow the coordination of two empty phrases, as in (20b). If instead we


could avoid positing empty elements that we cannot see or hear, we would have
a more realistic and predictive theory of syntax (Pullum, 1991).
One way to avoid positing an abstract element is to encode the missing infor-
mation in the lexical head of the phrase containing the missing argument or
adjunct (Sag et al., 2003, 2012). For example, the verb recommend can be
realized with different overt complements:

(21) a. These qualities recommended him to Oliver.


b. The UN recommended an enlarged peacekeeping force.

(22) a. This is the book which the teacher recommended .


b. Who will they recommend ?

In (21), the object of the verb is present as its sister, whereas in (22) the object
is in a nonlocal position. These two possibilities for argument instantiation are
captured by the following revised ARC:

(23) Argument Realization Constraint (ARC, second approximation):


The first element on the ARG - ST list is realized as SPR, the rest as COMPS or
GAP in syntax.

This revised ARC thus allows the following lexical entries for recommend:

(24)

In (24a), the two arguments of the verb recommend are realized as the SPR and
COMPS values, respectively, whereas in (24b) the second argument is realized not

as a COMPS value but as a GAP value. Each of these two different realizations will
project the following structures for examples like (21b) and (22b), respectively:
(25)

The main difference between the two is that in (25a), the object of recommend is
the verb’s sister, while in (25b) it is not. That is, in the former the object is local
to the verb whereas in the latter it is nonlocal. In (25b), the verb contains a GAP
value which is identified with the object. This GAP value is passed up to the VP
and then to the middle S. This GAP value is discharged by the filler who, or more
specifically by the HEAD - FILLER CONSTRUCTION in (26):
(26) HEAD - FILLER CONSTRUCTION:

S[GAP ⟨ ⟩] → 1 XP, S[GAP ⟨1 XP⟩]

This grammar rule says that when a head expression S containing a nonempty
GAP value combines with the constituent bearing its filler value, the resulting

phrase will form a grammatical head-filler phrase with the GAP value discharged.
This completes the ‘top’ of the long-distance or unbounded dependency.
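The division of labor just described, between the revised ARC in (23) and the HEAD-FILLER CONSTRUCTION in (26), can be sketched as follows (a hypothetical Python rendering; the function name is ours):

```python
# A sketch of the revised ARC (23): the first ARG-ST element is realized
# as SPR; each remaining element is realized either as COMPS or as GAP.
from itertools import product

def arc_realizations(arg_st):
    spr, rest = arg_st[:1], arg_st[1:]
    realizations = []
    for choice in product(("COMPS", "GAP"), repeat=len(rest)):
        comps = [a for a, c in zip(rest, choice) if c == "COMPS"]
        gap = [a for a, c in zip(rest, choice) if c == "GAP"]
        realizations.append({"SPR": spr, "COMPS": comps, "GAP": gap})
    return realizations

# recommend has ARG-ST <NP, NP>, yielding the two entries in (24):
for r in arc_realizations(["NP", "NP"]):
    print(r)
# {'SPR': ['NP'], 'COMPS': ['NP'], 'GAP': []}   <- local object, as in (21)
# {'SPR': ['NP'], 'COMPS': [], 'GAP': ['NP']}   <- gapped object, as in (22)
```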

10.3.2 Nonsubject Wh-questions


Let us see how the present system generates a nonsubject
wh-question, using the verb put for illustration. This verb lexeme will select three
arguments as given here:
(27)  [ v-lxm
        FORM    put
        ARG-ST  ⟨NP, NP, PP⟩ ]

The ARC will ensure that of these three arguments, the first must be realized as
the SPR element and the rest either as COMPS or as GAP elements. We will thus
have at least the following three realizations for the verb lexeme put:2
(28)  a. [ v-wd
           FORM    put
           SYN | VAL [ SPR    ⟨1 NP⟩
                       COMPS  ⟨2 NP, 3 PP⟩
                       GAP    ⟨ ⟩ ]
           ARG-ST  ⟨1 NP, 2 NP, 3 PP⟩ ]

      b. [ v-gap-wd
           FORM    put
           SYN | VAL [ SPR    ⟨1 NP⟩
                       COMPS  ⟨3 PP⟩
                       GAP    ⟨2 NP⟩ ]
           ARG-ST  ⟨1 NP, 2 NP, 3 PP⟩ ]

      c. [ v-gap-wd
           FORM    put
           SYN | VAL [ SPR    ⟨1 NP⟩
                       COMPS  ⟨2 NP⟩
                       GAP    ⟨3 PP⟩ ]
           ARG-ST  ⟨1 NP, 2 NP, 3 PP⟩ ]

Each of these three lexical entries can then be used to generate sentences
like the following:
(29) a. John put the books in a box.
b. Which books did John put in the box?
c. Where did John put the books?

2 The SPR value of a verb can be gapped too. See Section 10.3.3.

As we see here, the complements of the verb put may be realized in three
different ways. The verb put in (28a) shows the canonical realization of the verb’s
arguments, licensing an example like (29a). Meanwhile, in (28b), the object NP
argument is realized as a GAP, as reflected in (29b), whereas in (28c), the PP
is realized as a GAP, as shown in (29c). The following tree structure shows the
derivation of (29b) and the manner in which the lexical entry of (28b) contributes
to the propagation of the GAP feature throughout the tree:
(30)

Let us look at the structure, working from bottom to top. At the bottom, the
verb put has one PP complement, with its NP complement being realized as a
GAP value. This GAP information is copied to the mother node of each phrasal
construct in the tree, successively, the VP, then the S that immediately domi-
nates this VP, and finally the S whose head is the phrase-initial auxiliary did,
at which point the GAP value is satisfied by the presence of the [QUE +] filler.
Each phrase is licensed by a rule of the grammar: The verb put with the rele-
vant GAP specification first combines with the necessary PP complement in the
box, in accordance with the HEAD - COMPLEMENT CONSTRUCTION. The result-
ing VP combines with the subject, forming a nonfinite S with which the inverted
auxiliary verb did combines. The resulting S remains incomplete because of the

nonempty GAP value (every complete sentence must have an empty GAP value).
This GAP value is discharged when the HEAD - FILLER CONSTRUCTION in (26)
combines the filler NP which books with the incomplete S.3
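The bottom-to-top walkthrough just given can be summarized in a minimal sketch (our own simplified data structures, not the book's notation):

```python
# GAP percolation for 'Which books did John put __ in the box?' (29b):
# the gap is introduced lexically, copied to each mother node, and
# finally discharged by the filler via the HEAD-FILLER CONSTRUCTION (26).

def head_filler(filler, s_gap):
    """Combine a filler with S[GAP <x>]; the categories must match."""
    if s_gap == [filler]:
        return []                  # the resulting S has an empty GAP list
    raise ValueError("filler category does not match the gap")

gap = ["NP"]            # put's object realized as GAP, per (28b)
vp_gap = list(gap)      # VP[GAP <NP>]: 'put __ in the box'
s_gap = list(vp_gap)    # S[GAP <NP>]: 'did John put __ in the box'
print(head_filler("NP", s_gap))   # 'which books' discharges the gap: []
```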
This kind of feature percolation system, involving no empty elements, works
well even for long-distance dependency examples. Consider the following
structure:

(31)

The GAP value starts from the lexical head met, whose second argument is real-
ized as a GAP value. Since the complement of the verb met is realized as a GAP
value, the verb met will not look for its complement in the local domain (as its
sister node). The GAP information will be passed up to the embedded S, which
is a nonhead daughter. It is the principle given in (32) that ensures that the GAP
value in the head daughter or nonhead daughter is passed up through the structure
until it is discharged by the filler who in the HEAD - FILLER CONSTRUCTION:4

3 Every wh-element in questions carries the feature [QUE +].


4 The nonlocal features will be ‘bound’ either by a grammar rule like the HEAD - FILLER CON -
STRUCTION (see (26)) or a lexical constraint (see the discussion of ‘easy’ constructions in
Chapter 12, Section 12.2).

(32) Nonlocal Inheritance Principle (NIP):


A phrase’s nonlocal features, including GAP and QUE, are the union of its
daughters’ nonlocal feature values minus any bound nonlocal features.

The role of this principle is clear from the embedded S in (31): The principle
allows the GAP in this nonhead S to pass up to the VP. Assuming (32), we can
observe that the treatment of long-distance dependency involves three parts: top,
middle, and bottom. The bottom part introduces the GAP value according to the
ARC. The middle part ensures the GAP value is inherited ‘up’ to the mother
in accordance with the NIP. Finally, the top level terminates the GAP value by
providing the filler as nonhead daughter, in accordance with the HEAD - FILLER
CONSTRUCTION .
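The NIP in (32) amounts to simple bookkeeping over the daughters' nonlocal feature lists; a hypothetical sketch (the function name is ours):

```python
# Sketch of the Nonlocal Inheritance Principle (32): a mother's nonlocal
# features (e.g., GAP, QUE) are the union of its daughters' values minus
# any features bound at that node (e.g., by the HEAD-FILLER CONSTRUCTION).

def nip(daughter_values, bound=()):
    inherited = [v for vals in daughter_values for v in vals]
    return [v for v in inherited if v not in bound]

# middle of the dependency: the gapped embedded S passes GAP <NP> to the VP
print(nip([[], ["NP"]]))                   # ['NP']
# top of the dependency: the filler binds the gap; the mother S is complete
print(nip([[], ["NP"]], bound=("NP",)))    # []
```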
It is also easy to verify that this feature percolation system accounts for
examples like (33), in which the gap is a non-NP:
(33) a. [In which box] did John place the book ?
b. [How happy] has John been ?

The HEAD - FILLER CONSTRUCTION in (26) ensures that the categorial status of
the filler is identical to that of the gap. The structure of (33a) can be represented
as follows:
(34)

In this structure, the missing phrase is a PP encoded in the GAP value. This value
is percolated up to the lower S and discharged by the filler in which box.
In addition, this approach provides a clearer account of the examples we saw
in (12), which we repeat here:

(35) a. Who did Kim work for and Sandy rely on ?


b. *Who did Kim work for and Sandy rely on Mary?

In Chapter 2, we saw that English allows two identical phrases to be conjoined.


This means that the GAP value in each conjunct must also be identical:

(36) COORDINATION CONSTRUCTION :

XP → XP[GAP A] conj XP[GAP A]

This grammar rule explains the contrast in (35), represented using simplified
feature structures:

(37)

In (37a), the GAP value in the first conjunct is identical to that in the second
conjunct, satisfying the COORDINATION CONSTRUCTION. The feature unifica-
tion will allow these two identical GAP values to be unified into one. However,
in (37b), the GAP values in the two conjuncts are different, violating the
COORDINATION CONSTRUCTION .5

10.3.3 Subject Wh-questions


Consider examples in which the subject is the focus of the wh-
question:
(38) a. Who put the book in the box?
b. Who DID put the book in the box?
c. Who can put the book in the box?

Notice that when the subject who is questioned, an auxiliary verb is
optional. That is, the question in (38a) is well-formed, even though
no auxiliary is present. The related example (38b) is also well-formed, but it is
used only when there is emphasis on the auxiliary.
As a first step toward accounting for such examples, we can allow a structure
similar to that of nonsubject wh-questions and license a structure like (39), in
which the subject is gapped:
(39) a. Who placed the book in the box?
b. Who can place the book in the box?

In the current context, our grammar requires no additional mechanism other


than a slight revision to the ARC:
(40) Argument Realization Constraint (ARC, final):
The first element on the ARG - ST list is realized as SPR or GAP and the rest
as COMPS or GAP.

This revised ARC guarantees that the members of the ARG - ST list are the sum of
that of SPR, COMPS, and GAP. The system then allows for the following lexical
realization of put, in addition to those in (28):

5 This feature-based analysis can also offer a way of dealing with the movement paradox examples
we observed in (15), repeated here:

a. You can rely on [his help]/*[that he will help you].


b. His help, you can rely on .
c. That he will help you, you can rely on .

The introduction of a GAP value is a lexical realization process in the present system, implying
that we can assume that the complement of the preposition on in such a usage can be realized
either as an overt NP, as in (a), or as a nominal GAP element. Since, as shown in Chapter 5, the filler CP
in (c) also belongs to the category nominal, there is then no category mismatch between the
filler and the gap here. See also Kim and Sells (2008).
(41)  [ FORM    placed
        SYN | VAL [ SPR    ⟨ ⟩
                    COMPS  ⟨2 NP, 3 PP⟩
                    GAP    ⟨1 NP⟩ ]
        ARG-ST  ⟨1 NP, 2 NP, 3 PP⟩ ]

This realization in which the subject is gapped then projects the following
structure for (39a):6
(42)

As shown in (42), the subject of placed is realized as the GAP value, metaphor-
ically passing up to the mother node. This mother VP is marked as projecting
up to the incomplete sentence ‘S’ in terms of the traditional notion of phrases.
This is a notational variant to indicate that the VP is identical to the ‘S’ in
6 Note that our feature system means the following for the complete S, an incomplete ‘S’ with its
subject being gapped, and a VP:
a. S = [ SPR ⟨ ⟩, COMPS ⟨ ⟩, GAP ⟨ ⟩ ]
b. ‘S’ = [ SPR ⟨ ⟩, COMPS ⟨ ⟩, GAP ⟨XP⟩ ]
c. VP = [ SPR ⟨NP⟩, COMPS ⟨ ⟩, GAP ⟨ ⟩ ]

terms of valence features. We have seen that, by definition, S is a projection


of V that has an empty SPR and COMPS list. The VP here also has no values for
these two valence features. This means that we can view the VP here as having
been vacuously pumped up to ‘S.’ This incomplete sentence ‘S’ with the subject
missing can then combine with the filler who, according to the HEAD - FILLER
CONSTRUCTION .
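The effect of the final ARC in (40) can be sketched as follows (our own illustration; it simply enumerates the possible splits of the ARG-ST list):

```python
# Sketch of the final ARC (40): the first ARG-ST element is realized as
# SPR or GAP, and the rest as COMPS or GAP, so the ARG-ST list is always
# the sum of the SPR, COMPS, and GAP lists.
from itertools import product

def arc_final(arg_st):
    first, rest = arg_st[0], arg_st[1:]
    realizations = []
    for first_choice in ("SPR", "GAP"):
        for choice in product(("COMPS", "GAP"), repeat=len(rest)):
            r = {"SPR": [], "COMPS": [], "GAP": []}
            r[first_choice].append(first)
            for arg, c in zip(rest, choice):
                r[c].append(arg)
            realizations.append(r)
    return realizations

rs = arc_final(["NP", "NP", "PP"])    # put: ARG-ST <NP, NP, PP>
print(len(rs))                        # 8 possible realizations
# the subject-gap realization in (41): SPR < >, COMPS <NP, PP>, GAP <NP>
print({"SPR": [], "COMPS": ["NP", "PP"], "GAP": ["NP"]} in rs)   # True
```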
Even though the ‘S’ with a gapped subject cannot function as an independent
sentence (as in, e.g., *Visited him), it can function as the complement of a verb
like think:
(43) a. Who do you think [visited Seoul last year]?
b. That’s the UN delegate that the government thinks [visited Seoul last year].
The verb think can select either a finite S or a CP, as in I think (that) she knows
that. This means that the verb can also combine with an ‘S,’ with its subject being
realized as a gapped expression:
(44)

The verb visited allows its subject to be gapped and then licenses the head-
complement combination of visited Seoul. This VP with its subject being gapped,
vacuously projected to ‘S,’ serves as the complement of the verb think. The GAP
value, passing up all the way to the second lower S, is then discharged by the
filler who.

10.4 Indirect Questions

10.4.1 Basic Structures


In Chapter 5, we saw that among verbs selecting a sentential or
clausal complement (S or CP), there are also verbs that select an indirect-
question complement:
(45) a. John wonders [whose book his son likes ].
b. John has forgotten [which player his son shouted at ].
c. He told me [how many employees Karen introduced to the visitors].
Not all verbs allow an indirect question as complement:
(46) a. Tom denied [(that) he had been reading the article].
b. *Tom denied [which book he had been reading].
(47) a. Tom claimed [(that) he had spent five thousand dollars].
b. *Tom claimed [how much money he had spent].
Verbs like deny or claim cannot combine with an indirect question: Only a
finite declarative clause can complement such verbs. Verbs selecting an indirect-
question complement can be distinguished semantically:
(48) a. interrogative verbs: ask, wonder, inquire . . .
b. verbs of knowledge: know, learn, forget . . .
c. verbs of increased knowledge: teach, tell, inform . . .
d. decision verbs/verbs of concern: decide, care . . .
The clausal complement of these verbs cannot be a canonical CP and must be an
indirect question:
(49) a. *John inquired [that he should read it].
b. *Peter will decide [that we should review the book].
(50) a. John inquired [which book he should read].
b. Peter will decide [which book we should review].
At the same time, there are some verbs, for example, forget, tell, and know, that
can select either a [QUE +] or a [QUE −] complement:
(51) a. John told us that we should review the book.
b. John told us which book we should review.
There are thus at least three different types of verb that take clausal complements.
Lexical entries for three representative verbs are given below:

(52)  a. [ FORM    wonder
           SYN [ HEAD | POS verb
                 VAL [ SPR    ⟨1⟩
                       COMPS  ⟨2 S/CP[QUE +]⟩ ] ]
           ARG-ST  ⟨1 NP, 2 S/CP⟩ ]

      b. [ FORM    deny
           SYN [ HEAD | POS verb
                 VAL [ SPR    ⟨1⟩
                       COMPS  ⟨2 [QUE −]⟩ ] ]
           ARG-ST  ⟨1 NP, 2⟩ ]

      c. [ FORM    tell
           SYN [ HEAD | POS verb
                 VAL [ SPR    ⟨1 NP⟩
                       COMPS  ⟨2 NP, 3 S/CP[QUE ±]⟩ ] ]
           ARG-ST  ⟨1 NP, 2 NP, 3 S/CP⟩ ]

The feature QUE flags the presence of a clause-initial wh-word like who or which;
it is used to distinguish between indirect questions and declarative clauses. The
QUE value of the verb’s complement will ensure that each verb combines with
an appropriate clausal complement. For example, the verb wonder, requiring a
[QUE +] clausal complement, will be licensed in a structure like the following:
(53)

The GAP value of likes is passed up to the lower S and discharged by the filler
whose book. The wh-word whose carries the feature [QUE +], which will pass up
to the point where it is ‘visible’ to the verb selecting its complement or to the
highest position needed to indicate that the particular sentence is a question. For
example, in (54), the feature QUE indicates that the whole sentence is a ques-
tion, whereas in (55) it allows the verb ask to select an indirect question as its
complement:

(54) a. [S[QUE +] In which box did he put the book ]?


b. [S[QUE +] Which book by his father did he read ]?

(55) a. John asks [S[QUE +] in which box he put the book].


b. John asks [S[QUE +] which book by his father he read].

The percolation of the feature QUE upward from a wh-word can be ensured by the
NIP, which guarantees that nonlocal features like QUE are passed up until they
are bound off or selected by a sister (whether it be a filler phrase or a selecting
V). This principled constraint allows the QUE value to pass up to the mother from
a deeply embedded nonhead, as illustrated in the following:

(56) a. Kim has wondered [[in which room] Gary stayed ].


b. Lee asked me [[how fond of chocolates] the monkeys are ].

Let us consider the structure of (56a):

(57)

Although which is embedded in the PP and functions as the Det of the inner NP,
its QUE value will pass up to the S, granting it the status of an indirect question.
The verb wonder then combines with this S, thus satisfying its valence require-
ment. If the verb combined with a [QUE −] clausal complement, the result would
be an ungrammatical structure:
(58) a. *Kim has wondered [[QUE −] that Gary stayed in the room].
b. *Kim asked me [[QUE −] that the monkeys are very fond of chocolates].
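The selectional pattern encoded by the three entries in (52) can be sketched in a few lines (the dictionary below is our own illustration, not the book's notation):

```python
# Sketch of clausal-complement selection via QUE, per (52): 'wonder'
# requires a [QUE +] complement, 'deny' a [QUE -] complement, and
# 'tell' accepts either value.

SELECTS = {"wonder": {"+"}, "deny": {"-"}, "tell": {"+", "-"}}

def licenses(verb, que):
    """Does the verb license a clausal complement with this QUE value?"""
    return que in SELECTS[verb]

print(licenses("wonder", "+"))   # True:  wonder + indirect question (45a)
print(licenses("deny", "+"))     # False: *deny + indirect question (46b)
print(licenses("tell", "-"))     # True:  tell + that-clause (51a)
print(licenses("tell", "+"))     # True:  tell + indirect question (51b)
```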
As we saw above, the category of the missing phrase within the S must
correspond to that of the wh-phrase in the initial position. For example, the
following structure is not licensed simply because there is no HEAD - FILLER
CONSTRUCTION that allows a filler NP to combine with an S missing a PP:

(59)

In a similar fashion, the present system also predicts the following contrast:
(60) a. John knows [whose book [Mary bought ] and [Tom borrowed from
her]].
b. *John knows [whose book [Mary bought ] and [Tom talked ]].
The partial structure of these sentences can be represented as follows:
(61)

As long as the two GAP values are identical, we can unify the two, as in (61a).
However, if the GAP values are different, as in (61b), there is no way to unify
them in the coordination structure.

10.4.2 Non-wh Indirect Questions


English also has indirect questions headed by the complementizer
whether or if :
(62) a. I don’t know [whether/if I should agree].
b. I wonder [whether/if you’d be kind enough to give us information].

These indirect questions are all internally complete in the sense that there is no
missing element. This means that the complementizers whether and if will have
at least the following lexical information:
(63)  [ FORM    whether
        SYN [ HEAD | POS comp
              VAL | COMPS ⟨S[fin]⟩
              QUE  + ]
        ARG-ST  ⟨S⟩ ]

According to this lexical specification, whether, bearing the [QUE +] value,


selects a finite S as its complement. This lexical entry then licenses a structure
like the following:
(64)

While if and whether both carry a positive value for the QUE feature, whether
more closely resembles question words like when in the following respect: the

type of indirect question that it introduces can serve as the object of a preposition,
as in (65):
(65) a. I am not certain about [when he will come].
b. I am not certain about [whether he will go or not].

However, an if -clause cannot function as prepositional object:


(66) a. *I am not certain about [if he will come].
b. *I am not certain about [if he will go or not].

The difference between if and whether also surfaces in infinitival constructions:


(67) a. I don’t know [where to go].
b. I don’t know [what to do].
c. I don’t know [how to do it].
d. I don’t know [whether to agree with him or not].

(68) a. *I don’t know [if to agree with him].


b. *I don’t know [that to agree with him or not].

This means that whether and if both bear the attribute [QUE +] (projecting an
indirect question), but only whether behaves like a true wh-element.7
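Following the suggestion in footnote 7, the split between whether and if can be sketched with a binary WH feature (the lexicon and function names below are our own hypothetical illustration):

```python
# Sketch: 'whether' and 'when' are [QUE +, WH +], while 'if' is
# [QUE +, WH -]; only [WH +] expressions may introduce an indirect
# question serving as a prepositional object, cf. (65)-(66), or an
# infinitival indirect question, cf. (67)-(68).

LEXICON = {"whether": {"QUE": "+", "WH": "+"},
           "when":    {"QUE": "+", "WH": "+"},
           "if":      {"QUE": "+", "WH": "-"}}

def ok_as_prep_object(word):
    return LEXICON[word]["WH"] == "+"

def ok_in_infinitival(word):
    return LEXICON[word]["WH"] == "+"

print(ok_as_prep_object("whether"))   # True:  about whether he will go
print(ok_as_prep_object("if"))        # False: *about if he will come
print(ok_in_infinitival("whether"))   # True:  whether to agree with him
print(ok_in_infinitival("if"))        # False: *if to agree with him
```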

10.4.3 Infinitival Indirect Questions


In addition to finite indirect questions, English has infinitival indirect
questions:
(69) a. Fred knows [which politician to support].
b. Karen asked [where to put the chairs].

Like finite indirect questions, these constructions have the familiar bipartite
structure: a wh-phrase and an infinitival clause missing one element.
Notice at this point that in English there exist at least four different ways for
the subject to be realized: as an overt NP or a covert NP (gap, PRO, or pro):
(70) a. The student protected him. (canonical NP)
b. Who protected him? (subject gap NP)
c. To protect him is not an easy task. (big PRO)
d. Protect him! (small pro)

In (70a), the subject is a ‘canonical’ NP, while those in the subsequent exam-
ples are ‘noncanonical.’ In the wh-question (70b), the subject is a GAP value; in
(70c), the infinitival VP has an understood, unexpressed subject PRO; the imper-
ative in (70d) has an unexpressed subject, understood as the 2nd person subject
you. As previously noted, the unexpressed pronoun subject of a finite clause
is called ‘pro’ (pronounced ‘small pro’), whereas that of a nonfinite clause is

7 One way to distinguish the wh-elements including whether from if , is to use an additional feature
WH with binary values.

called ‘PRO’ (pronounced ‘big pro’) to capture the distinctive referential prop-
erties of these sign types (see Chomsky, 1982). In terms of a theory of linguistic
types, this means that we have ‘canonical’ pronouns like he and him as well as
‘noncanonical (covert)’ realizations of pronouns, such as pro for imperatives and
PRO for infinitival clauses. This in turn means that in English, when a VP’s sub-
ject is a noncanonical one, either a 2nd person pronoun pro or a PRO, the VP can
be projected directly into S in accordance with the following construction rule:
(71) NONCANONICAL SUBJECT CONSTRUCTION:
S[SPR ⟨ ⟩] → VP[SPR ⟨NP[noncanonical]⟩]

This construction rule would then license the following two:

(72)

The subject of the VP in (72a) is the second person pro, while that in (72b)
is a PRO coindexed with yourself. Both are licensed by the HEAD - ONLY
CONSTRUCTION .
Now, consider the following structure licensed by the current grammar rules:
(73)

Consider the structure from the bottom up. The verb support selects two
arguments whose second argument can be realized as a GAP:
(74)  [ FORM    support
        SYN | VAL [ SPR    ⟨1 NP[PRO]⟩
                    COMPS  ⟨ ⟩
                    GAP    ⟨2 NP⟩ ]
        ARG-ST  ⟨1 NP, 2 NP⟩ ]

The verb will then form a VP with the infinitival marker to. Since this VP’s
subject is PRO, the VP can be projected into an S with the accusative NP GAP
value in accordance with the HEAD - ONLY CONSTRUCTION. The S then forms a
well-formed head-filler construct when combined with the filler which politician.
The QUE value of the phrase allows the whole infinitival clause to function as an
indirect question, which can then be combined with the verb knows.
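The derivation just walked through can be summarized in a short sketch (our own simplified rendering; the function name is ours):

```python
# Sketch of (73): a VP whose subject is the noncanonical PRO projects
# directly to 'S' via the NONCANONICAL SUBJECT CONSTRUCTION (71); the
# HEAD-FILLER CONSTRUCTION then discharges the remaining NP gap. An
# overt subject blocks the projection, mirroring the pattern in (75).

def project_to_s(vp):
    if vp["SPR"] in ("PRO", "pro"):          # noncanonical subjects only
        return {"SPR": [], "GAP": vp["GAP"]}
    raise ValueError("overt subject: not licensed by (71)")

vp = {"SPR": "PRO", "GAP": ["NP"]}           # 'to support __'
s = project_to_s(vp)
print(s)   # {'SPR': [], 'GAP': ['NP']}: an 'S' still awaiting its filler
# project_to_s({"SPR": "NP", "GAP": ["NP"]}) would raise, as in (75a)
```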
A constraint we can observe in infinitival wh-questions is that the subject of
the infinitival head cannot be overtly realized:
(75) a. *Fred knows [which politician for Karen/her to vote for].
b. *Karen asked [where for Jerry/him to put the chairs].

The data indicate that in infinitival indirect questions, the subject of the infinitival
VP cannot appear. The tree diagram in (76) shows why it is not legitimate:8
(76)

8 The grammar needs to block examples like the ones below in which the infinitival VP combines
with its subject:

a. *Fred knows [S which politician [S her [to vote for]]].


b. *Karen asked [S whom [S him [to vote for]]].

As in (73), the HEAD - FILLER CONSTRUCTION allows an S (directly projected from an infinitival
VP) to combine with its filler. As a way of blocking such examples, we may assume an indepen-
dent constraint that the infinitival subject can appear only together with the complementizer for
because the subject needs to get the accusative case from it (cf. Chomsky, 1982).

The structure shows that the HEAD - FILLER CONSTRUCTION licenses the combi-
nation of an S with its filler but not a CP with its filler.

10.4.4 Adjunct Wh-questions


The main clause wh-questions and indirect questions that we have
seen so far have a GAP value originating from an argument position of a verb or
preposition. How can the present system account for examples like the following,
in which the wh-phrases are not arguments but adjuncts?
(77) a. How carefully have you considered your future career?
b. When can we register for graduation?
c. Where do we go to register for graduation?
d. Why have you borrowed my pencil?

One way to deal with such examples is to take the adverbial wh-phrase to modify
an inverted question:
(78)

The structure indicates that the AdvP modifies the inverted S.


Matters become more complicated when we consider questions in which
a wh-word adjunct can modify either the main verb or the embedded one
(Huang, 1982):
(79) a. When did he say that he was fired?
b. Where did he tell you that he met Mary?
c. How did you guess that he fixed the computer?

These sentences are ambiguous with respect to the function of the wh-adjunct
(when, where, how), and in particular which of the two verbs (main or embedded)
it modifies. The question in (79a) could be an inquiry into either the time of his
statement or the time of his firing. Question (79b) could be a question about
the time of the telling or the time of his meeting Mary. Question (79c) can be
construed as questioning either the means by which he guessed or the means by
which he performed the computer repair.
These data indicate that in addition to a structure like (78), in which the adver-
bial wh-word modifies the whole sentence, we need a structure in which the
fronted adverbial wh-phrase is linked to the embedded clause. One way to do

this is to promote a certain adverbial expression as an argument of the verb in


the embedded clause so that it can be an input into the GAP value. Following
Sag (2005) and others, we assume that English allows the extension of the ARG -
ST list to include a limited set of adverbial elements as arguments. For example,
we can extend the verb fix to include an adverbial as its argument:
(80)  Extended ARG-ST:
      [ FORM fix, ARG-ST ⟨1 NP, 2 NP⟩ ]  ⇒  [ FORM fix, ARG-ST ⟨1 NP, 2 NP, AdvP⟩ ]

This extended ARG-ST then allows its adverbial argument to be realized as
a GAP value according to the ARC:
(81)  [ FORM    fix
        SYN | VAL [ SPR    ⟨1 NP⟩
                    COMPS  ⟨2 NP⟩
                    GAP    ⟨3 AdvP⟩ ]
        ARG-ST  ⟨1 NP, 2 NP, 3 AdvP⟩ ]

This lexical realization will then project a structure like the following for (79c):
(82)

This structure shows that the wh-word how originates from the subordinate
clause VP. More specifically, the GAP value starts from the verb fixed, whose
arguments in this case include an adverbial element. Note this does not mean that
we can extend the ARG - ST list randomly. For example, the argument extension
mechanism cannot be applied to examples like the following:

(83) a. Why do you wonder [whether she will invite me]?


b. How often did he ask [when we will meet at the party]?

In these examples, which include a sentential complement introduced by a wh-


word, we have only one interpretation – that in which the wh-phrase modifies
the matrix verb wonder or ask. This means that argument extension is limited,
governed by various syntactic and semantic conditions.
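The restricted extension mechanism can be sketched as follows (a hypothetical illustration; the membership of the EXTENDABLE set is our own invention, standing in for the syntactic and semantic conditions just mentioned):

```python
# Sketch of the extended ARG-ST in (80): a limited set of verbs may add
# an adverbial element to their ARG-ST, which the ARC can then realize
# as a GAP value, as in (81). Verbs like 'wonder' and 'ask' in (83) do
# not permit the extension into their interrogative complement.

EXTENDABLE = {"fix", "say", "tell"}     # hypothetical list for illustration

def extend_arg_st(verb, arg_st):
    if verb in EXTENDABLE:
        return arg_st + ["AdvP"]        # (80): <NP, NP> => <NP, NP, AdvP>
    return arg_st                       # no adverbial argument added

print(extend_arg_st("fix", ["NP", "NP"]))     # ['NP', 'NP', 'AdvP']
print(extend_arg_st("wonder", ["NP", "S"]))   # ['NP', 'S'] unchanged
```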

10.5 Conclusion

This chapter focused on the syntax of wh-question patterns that


have been referred to as long-distance or unbounded dependency constructions.
Starting with core dependency properties of wh-constructions, we
reviewed the main problems that movement approaches encounter when
attempting to represent the link between the filler wh-phrase and its
corresponding gap.
We then developed a declarative, feature-based analysis that does not use
any abstract elements to capture the linkage between the filler and the gap,
while it resolves problems originating from movement analyses. The key mech-
anisms of our construction-based analysis are the ARC (Argument Realization
Constraint), which allows any argument to be realized as a GAP element, the
HEAD - FILLER CONSTRUCTION , which licenses the combination of a filler and
an incomplete sentence with a nonempty GAP value, and the NIP (Nonlocal
Inheritance Principle), which regulates nonlocal features like GAP in relevant
mother phrases. We have seen that the interplay of these construction-based
mechanisms allows us to license a wide variety of wh-constructions: main-clause
nonsubject wh-questions (e.g., Which book did Erica put in the box?), subject
wh-questions (e.g., Who put the book in the box?), wh-indirect questions (e.g.,
Jean wonders whose book her daughter likes), non-wh indirect questions (e.g.,
I don’t know whether/if I should agree), infinitival indirect questions (e.g., Kim
knows which candidate to support), and even adjunct wh-questions (e.g., When
can you register for graduation?).
The next chapter will explore a variety of English relative clause construc-
tions, which also display long-distance dependency relations. We will see that
the mechanisms developed here play key roles in licensing simple and complex
English relative clause constructions.

Exercises

1. Draw tree structures for the following sentences and indicate which
grammar rules are used to construct each phrase:
(i) a. What causes students to select particular majors?
b. Who will John ask for information about summer courses?
c. Which textbook did the teacher use in the class last summer?
d. Whose car is blocking the entrance to the store?

(ii) a. When can we register for graduation?


b. Why do you think he left?
c. Where do we go to register for graduation?

(iii) a. Who do you guess will be here?


b. Who do you think borrowed my book?
c. Which city does Fred think that you believe that John lives in?

2. Draw tree structures for the following sentences containing indirect


questions, and provide lexical entries for the italicized words:
a. I wonder on which shelf John will put the book?
b. Joseph has forgotten how many matches he has won.
c. Fred will warn Martha that she should claim that her brother is
patriotic.
d. That Bill tried to discover which drawer Alice put the money in
made us realize that we should have left him in Seoul.
e. Jasper wonders which book he should attempt to persuade his
students to buy.
f. The committee knows whose efforts to achieve peace the world
should honor.

3. Briefly explain why the following examples are ungrammatical:


a. *I wonder if on which shelf John will put the book.
b. *Which house does your friend live?
c. *I wonder what city that Romans destroyed.
d. *John was wondering to whom he was referring to.
e. *Who do you think that has given the tickets to Bill?
f. *What city will Fred say that Mary thinks that John lives?
g. *On whom does Dana believe Chris knows Sandy trusts?
h. *The politician denied how the opponent was poisoned.
i. *Fred knows which book for the children to read during the
summer vacation.

4. We have seen that the present framework can offer a streamlined


analysis of examples like (i):
(i) a. Who do you think John visited in Seoul last year?
b. Who do you think visited Seoul last year?

Now note that when the complementizer that is present, we cannot have a subject gap:
(ii) a. Who do you believe that Sara invited ?
b. Who do you believe invited Sara?
c. *Who do you believe that invited Sara?

Discuss what defines the illicit configuration in (iic) while considering the difference between subjects and objects.
5. Look at the following data set and state the constraints on the usage
of the -ing verbs (mending, investigating, restoring). In addition,
draw trees for the a-examples together with the lexical entries for the
main and participial verbs. Can we take the grammatical examples to
exemplify a special type of passive construction?
(i) a. This needs mending.
b. *This needs mending the shoe.
c. *He mended.
d. He mended the shoe.
(ii) a. This needs investigating.
b. *This needs investigating the problem.
c. *They investigated.
d. They investigated the problem.
11 Relative Clause Constructions

11.1 Introduction

English relative clauses, which modify a preceding nominal expression, are also a type of long-distance dependency construction, as suggested by the fact that the distance between the filler and the gap is unbounded:
(1) a. The video [which [you recommended ]] was really terrific.
b. The video [which [I thought [you recommended ]]] was really terrific.
c. The video [which [I thought [John told us [you recommended ]]]] was
really terrific.

There are several different properties that we can use to classify English rel-
ative clauses. First, we can classify them by the type of missing element in the
relative clause:
(2) a. the student who won the prize
b. the student who everyone likes
c. the baker from whom I bought these bagels
d. the person whom John gave the book to
e. the day when I met her
f. the place where we can relax

As seen here, the missing phrase can be a subject, an object, an oblique expression, a prepositional object, or even a temporal or locative adjunct, respectively.
Second, relative clauses can be classified according to the type of relative
pronoun. In English we find wh-relatives, that-relatives, and bare relatives.
(3) a. The president [who [Fred voted for]] has resigned.
b. The president [that [Fred voted for]] dislikes his opponents.
c. The president [ [Fred voted for]] has resigned.

Wh-relatives like (3a) have a wh-type relative pronoun, and (3b) has the rela-
tive pronoun that, while (3c) has no relative pronoun at all. We consider that in
relative clauses to be a form of relative pronoun (see Section 11.4 below).
Third, relative clauses can also be classified according to the finiteness of
the clause. Unlike the finite relative clauses in (1)–(3), the following examples
include infinitival relatives:


(4) a. He is the kind of person [with whom to consult ].
b. These are the things [for which to be thankful ].
c. We will invite volunteers [on whom to work ].

In addition, English allows so-called ‘reduced’ relative clauses. The examples in (5) are ‘reduced’ in the sense that the string ‘wh-phrase + be’ appears to be omitted, as indicated by the parentheses:
(5) a. the person (who is) standing on my foot
b. the prophet (who is) descended from heaven
c. the bills (which were) passed by the House yesterday
d. the people (who are) in Rome
e. the people (who are) happy with the proposal

This chapter first reviews the basic properties of the various types of English
relative clauses and then provides analyses of their syntactic structures.

11.2 Nonsubject Wh-Relative Clauses

Let us consider some canonical relative clauses, first:


(6) a. the senators [who [Fred met ]]
b. the apple [that [John ate ]]
c. the problem [ [you told us about ]]

One thing we can observe here is that, like wh-questions, relative clauses have
bipartite structures: a relative pronoun (including a wh-element) and a sentence
with a missing element (S/XP):
(7) a. wh-element S/XP
b. that S/XP
c. [ ] S/XP
Assuming that relative wh-words carry a REL feature whose index value is identical with that of the nominal that the relative clause modifies, we can represent the structure of (6a) in the following way:
(8) [tree diagram not reproduced]

As shown in the structure, the object of the verb met is realized as a GAP
value, which, in accordance with the NIP (Nonlocal (Feature) Inheritance Prin-
ciple), is metaphorically passed up until it is discharged by the filler, who. The
HEAD - FILLER CONSTRUCTION licenses the combination of the filler who and
the gapped sentence Fred met. This filler who also has a nonlocal REL feature
whose value is an index referring to senators. The REL value originating from
the relative pronoun also percolates up to the mother S in accordance with the
NIP. Note that the relative pronoun’s REL value is identical to the index value of
the antecedent nominal. The need to identify these two index values is shown by
the agreement facts in (9):
(9) a. the man [who you think knows/*know the answer]
b. the men [who you think know/*knows the answer]

Here the lowest verb knows/know agrees with the number features of the head
noun man or men, respectively. The element that ensures this agreement is the
relative pronoun who, whose index value would be singular in (9a) while plural
in (9b).
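The percolation-and-discharge logic just described can be made concrete with a short sketch. The Python fragment below is our own minimal illustration, not part of the book's formalism: nonlocal features (represented as simple labels like 'GAP:NP' and 'REL:i') are unioned up from daughters to mother in the spirit of the NIP, and a feature disappears only when a construction explicitly discharges it, as the filler does for GAP.

```python
def percolate(daughters, discharged=frozenset()):
    """Mother's nonlocal feature set = union of the daughters' sets,
    minus whatever the licensing construction discharges (e.g., a
    filler cancelling a GAP). A toy model of the NIP."""
    inherited = set()
    for d in daughters:
        inherited |= d["nonlocal"]
    return {"nonlocal": inherited - set(discharged)}

# 'Fred met __': the verb introduces GAP:NP; nothing discharges it yet.
fred = {"nonlocal": set()}
met = {"nonlocal": {"GAP:NP"}}
s_gapped = percolate([fred, met])            # an S with GAP:NP
assert s_gapped["nonlocal"] == {"GAP:NP"}

# 'who Fred met': the filler 'who' discharges GAP but contributes REL:i,
# which survives to the top S so the clause can modify 'senators'.
who = {"nonlocal": {"REL:i"}}
rel_clause = percolate([who, s_gapped], discharged={"GAP:NP"})
assert rel_clause["nonlocal"] == {"REL:i"}
```

Because REL is never discharged inside the clause, it remains on the mother S, which is what lets the HEAD-REL MOD CONSTRUCTION link the clause to the head noun's index.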
The next question, then, is what mechanism allows the relative clause to function as a modifier of a noun or noun phrase. In Chapter 6, we saw that
phrases like AP, nonfinite VP, and PP can modify an NP (these examples can be
taken as ‘reduced’ relatives):
(10) a. the people [happy with the proposal]
b. the person [standing on my foot]
c. the bills [passed by the House yesterday]
d. the paper [to finish by tomorrow]
e. the student [in the classroom]

All of these postnominal bracketed elements bear the feature MOD. The feature
originates from the head happy, standing, passed, to and in, respectively. This is
illustrated by the following:
(11) [tree diagram not reproduced]

The feature MOD is a head feature, which enables the mother VP to carry the
same MOD value. The combination of this VP modifier with the head N is
licensed by the HEAD - MODIFIER CONSTRUCTION, repeated here:
(12) HEAD - MODIFIER CONSTRUCTION:
XP → [MOD  1 ], 1 H

English allows the modifier phrase bearing the feature MOD to either precede or
follow the head, and relative clauses are positioned after the head they modify.
Note that not all phrases can function as postmodifiers. In particular, a base
VP or finite VP cannot be found in this environment:
(13) a. *the person [stand on my foot]
b. *the person [stood on my foot]
c. *the person [stands on my foot]

A complete sentence with no missing expression cannot serve as a postnominal modifier either:
(14) a. *The student met the senator [John met Bill].
b. *The student met the senator [that John met Bill].
c. *The student met the senator [for John to meet Bill].

This means that a finite VP or a finite clause with no missing element can-
not function as a modifier. Only relative clauses with one missing element
may serve as postnominal modifiers, indicating that they also have the MOD
feature.
Unlike reduced relative clauses, where the MOD feature comes from the head
verb, adjective, or preposition, typical relative clauses (e.g., the student [who
everyone likes]) have no expression other than the relative pronoun that can
trigger the emergence of the MOD feature. It is thus reasonable to assume that
the presence of a relative pronoun bearing the [REL i] feature also introduces a
relative MOD value, according to the following constructional rule:1
(15) HEAD - REL MOD CONSTRUCTION:
N → 1 N_i , S[REL i, MOD 1]

The construction, as a subtype of the HEAD - MODIFIER CONSTRUCTION, ensures that a clause marked with the REL feature modifies a preceding nominal expression that has the same index value as the pronoun. This construction rule, by
evoking a MOD value linked to the relative pronoun, can license a structure like
the following:
1 Following Sag (1997), one can develop an analysis in which the MOD value is introduced by a
verb whose argument contains a GAP value. By contrast, our grammar constructionally introduces
the MOD feature.

(16) [tree diagram not reproduced]

As shown here in (16), the verb met realizes its object as a GAP value,
which metaphorically percolates up to the S, where it is discharged once this
S combines with the relative pronoun whom. There is no lexical expression
(e.g., a nonfinite verb) that evokes the MOD feature; the constructional con-
straint in (15) evokes a MOD value linked to the relative pronoun whom.
Since the relative clause is a type of HEAD - FILLER CONSTRUCTION, there
must be a total syntactic identity between the gap and a filler with a REL
value:

(17) a. Jack is the person [[NP whom] [Jenny fell in love with [NP ]]].
b. Jack is the person [[PP with whom] [Jenny fell in love [PP ]]].

(18) a. *Jack is the person [[NP whom] [Jenny fell in love [PP ]]] .
b. *Jack is the person [[PP with whom] [Jenny fell in love with [NP ]]].

In (17a) and (17b), the gap and the filler are the same category, whereas those in
(18) are not. The putative gap in (18a) is a PP and that in (18b) an NP, but the
fillers are the nonmatching categories NP and PP, respectively.
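The identity requirement illustrated in (17)–(18) can be stated as a simple check. The following is a toy illustration of our own, with bare category labels standing in for full feature structures:

```python
def head_filler_ok(filler_cat, gap_cat):
    """The HEAD-FILLER CONSTRUCTION demands full syntactic identity
    between the filler's category and the GAP's category."""
    return filler_cat == gap_cat

# (17a): filler 'whom' (NP) matches an NP gap: 'fell in love with __NP'
assert head_filler_ok("NP", "NP")
# (17b): filler 'with whom' (PP) matches a PP gap: 'fell in love __PP'
assert head_filler_ok("PP", "PP")
# (18a)/(18b): mismatched filler and gap categories are ruled out
assert not head_filler_ok("NP", "PP")
assert not head_filler_ok("PP", "NP")
```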

In addition, the gap can be embedded in a deeper position, provided that it finds an appropriate filler:2

(19) [tree diagram not reproduced]

In (19), the GAP value starts from the verb of the embedded clause and passes
up to the top S in accordance with the NIP. The value is discharged by the filler
wh-phrase including the relative pronoun which. This nonlocal REL feature, in
accordance with the NIP, is passed up to the top S to ensure that the clause
functions as a modifier.
Just like the QUE feature, the nonlocal REL feature can also come from a
deeper position within the nonhead daughter of the relative clause:

(20) a. I met the critic [whose remarks [I wanted to object to ]].
b. This is the friend [for whose mother [Kim gave a party ]].
c. The teacher set us a problem [the answer to which [we can find in the
textbook]].

A simplified structure for (20b) serves to illustrate this point:

2 Once again, the arrows here do not signify any feature copying; they simply represent identity of
the two feature structures.

(21) [tree diagram not reproduced]

The REL feature is embedded in the specifier of the inner NP whose mother,
but the NIP guarantees that this value is passed up to the top S so that it can
function as a modifier of the head noun friend.

11.3 Subject Relative Clauses

With respect to the modification function, subject relative clauses do not differ greatly from nonsubject relative clauses. One major difference is that
the presence of a relative pronoun (including that) is obligatory, and bare relative
clauses are ungrammatical:
(22) a. We called the senators [who] met Fred.
b. The kid picked up the apple [that] fell down on the ground.

(23) a. *[The student [ met John]] came.
b. *[The problem [ intrigued us]] bothered me.

Subject relative clauses involve a missing subject – a [REL i] subject is gapped, represented as in (24):

(24) [tree diagram not reproduced]

As shown in the structure, the subject of met is realized as the GAP value, which
metaphorically passes up to the mother node. As noted in the previous chapter,
this mother node is an ‘S’ with an empty COMPS and SPR value. Although it
appears to be a VP, the constituent is an S with a gap in it, and this S combines
with the filler who, in accordance with the HEAD - FILLER CONSTRUCTION. The
resulting S is a complete clause (who met Fred) carrying the REL and MOD spec-
ifications, which allows the resulting clause to modify senators in accordance
with the HEAD - REL MOD CONSTRUCTION.
Notice that this analysis does not license bare subject relatives like those in
(23). The VP with the missing subject met John cannot carry the MOD fea-
ture at all even if it can function as an ‘S’ that can combine either with a
wh-question phrase or a wh-relative phrase. However, the analysis also predicts
that the subject of an embedded clause can be gapped in sentences like the
following:

(25) a. He made a statement [which [S everyone thought [S was really interesting and important]]].
b. They all agreed to include those matters [S which [everyone believed [S
had been excluded from the Treaty]]].

As we saw in Chapter 10, verbs like think and believe combine with a CP, an S, or even an ‘S’ with the subject gapped:
(26) [tree diagram not reproduced]

The VP was interesting here forms an ‘S’ with the subject gapped. This ‘S’ com-
bines with the verb thought, forming a VP with a nonempty GAP specification.
This GAP value percolates up to the lower S and is then discharged by the filler
relative pronoun which. The relative pronoun, in accordance with the HEAD - REL
MOD CONSTRUCTION , introduces the MOD value into the relative clause, which
allows it to modify the antecedent statement.

11.4 That-Relative Clauses

As noted earlier, that can be used either as a complementizer or as a relative pronoun:

(27) Complementizer that:
a. Mary knows that John was elected.
b. That John was elected surprised Frank.
c. Mary told Bill that John was elected.

(28) Relative pronoun that:
a. This is the book [that we had read].
b. The president abandoned the people [that voted for him].
c. It is an argument [that people think will never end in Egypt].

The key difference here is that the clauses in (28) following the relative pronoun
that contain a syntactic gap, while those in (27) following the complemen-
tizer that are complete clauses with no missing element involved. These two
environments can be represented as follows:
(29) [tree diagram not reproduced]

The relative pronoun that differs from the wh-relative pronoun in sev-
eral respects. For example, the relative pronoun that disallows genitive and
pied-piping (see Sag, 1997):
(30) a. the student whose turn it was
b. *the student that’s turn it was

(31) a. the pencil with which he is writing
b. *the pencil with that he is writing

In addition, that is used only in finite relative clauses:


(32) a. a pencil with which to write
b. *a pencil with that to write

One way to account for these differences is to assume that the relative pronoun
that has no accusative case and therefore cannot be the complement of a preposi-
tion that assigns accusative. The relative pronoun who, unlike relative pronouns
like whose, whom, and which, shares this property:
(33) a. *The people [in who we placed our trust] . . .
b. *The person [with who we were talking] . . .

(34) a. The company [in which they have invested] . . .
b. The people [in whose house we stayed] . . .
c. The person [with whom he felt most comfortable] . . .

11.5 Infinitival and Bare Relative Clauses

An infinitival clause can also function as a modifier of a preceding noun. Infinitival relative clauses may in principle contain a relative pronoun, but need not:

(35) a. He bought a bench [on which to sit ].
b. He bought a refrigerator [in which to put the beer ].

(36) a. There is a book [(for you) to give to Alice].
b. There is a bench [(for you) to sit on].

Let us consider infinitival wh-relatives first. As we saw in the previous chapter, an infinitival VP can be projected into an S when its subject is unrealized. This will then allow the following structure for (35a):

(37) [tree diagram not reproduced]

As shown here in the structure, the VP to sit has a PP GAP value which functions
as the complement of sit. The infinitival VP, missing its PP complement, realizes
its subject as a PRO and thus can be projected into an S in accordance with
the HEAD - ONLY CONSTRUCTION (see Chapter 10). This S forms a head-filler
phrase with the PP on which. The resulting S also inherits the REL value from the
relative pronoun which and thus bears the MOD feature. Once again, we see that
every projection observes the grammar rules as well as other general principles,
including the HFP, the VALP, and the NIP.

Infinitival wh-relatives have an additional constraint on the realization of the subject:

(38) a. a bench on which (*for Jerry) to sit
b. a refrigerator in which (*for you) to put the beer

The examples indicate that wh-infinitival relatives cannot have an overt subject
(such as for Jerry) realized. We saw before that the same is true for infinitival
wh-questions; the data are repeated here:

(39) a. Fred knows [which politician (*for Karen) to vote for].
b. Karen asked [where (*for Washington) to put the chairs].

This tells us that both infinitival wh-relatives and infinitival wh-questions are
subject to the same constraint. The ungrammaticality of (38a) can be understood
if we look at its structure:

(40) [tree diagram not reproduced]

The HEAD - FILLER CONSTRUCTION (see Chapter 10) does not allow the combi-
nation of a CP with a PP filler, and hence the S here is ill-formed.3
How, then, can we deal with infinitival bare relative clauses like those in (41)?

(41) a. the paper [(for us) to read by tomorrow]
b. the paper [(for us) to finish by tomorrow]

Notice here that, unlike infinitival wh-relative clauses, these lack a relative pro-
noun. Given that the infinitival VP can be projected into an S, we can assign the
following structure to (41b) when the subject is not overt:

3 One peculiar constraint on infinitival wh relatives (unlike infinitival wh indirect questions) is that
they do not allow an NP gap, as in *the bench which to sit on. To disallow such an example, we
need to develop a more elaborate analysis; see Sag (1997) for a direction.

(42) [tree diagram not reproduced]

The VP to finish has a GAP value for its object, and its subject is PRO. Accord-
ing to the HEAD - ONLY CONSTRUCTION, this VP then will be projected into an
incomplete ‘S.’ There are two analytic issues now: how to introduce the MOD
feature and how to discharge the GAP value when there is no filler. As we noted
above, English also allows finite bare relatives with the gapped element being
accusative:

(43) a. the person [I met ]
b. the box [we put the books in ]

To allow such a relative clause lacking the accusative relative pronoun, we introduce an extension of the HEAD - REL MOD CONSTRUCTION as follows:

(44) HEAD - REL BARE MOD CONSTRUCTION:
N[GAP ⟨ ⟩] → 1 N_i , S[MOD 1, GAP ⟨NP_i[acc]⟩]

The construction differs from the HEAD - REL MOD CONSTRUCTION only with
respect to its GAP value: The GAP value is discharged constructionally. That
is, the construction allows a finite or infinitival clause (S, but not an ‘S’ or
a VP) bearing an accusative NP GAP value to function as a modifier of the
preceding noun. One specification in the construction is that the GAP value is
discharged even if there is no filler: The index of the head noun is identified
with that of the discharged GAP value. The construction thus licenses constructs
like (43a):

(45) [tree diagram not reproduced]

Note that the GAP value is a specification of the verb met but is discharged even
without combining with a filler. This is possible because of the constructional
constraint in (44).4
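The constructional discharge in (44) can likewise be sketched. The function below is a hypothetical illustration of our own, not the book's implementation: it admits only a clausal daughter whose single GAP member is an accusative NP, and it returns a mother nominal whose GAP is empty and whose index is shared with the discharged gap.

```python
def license_bare_rel(noun, clause):
    """Toy licensing check for the HEAD-REL BARE MOD pattern.
    Returns the mother nominal if the clause qualifies, else None."""
    gap = clause["gap"]
    if clause["cat"] != "S" or len(gap) != 1:
        return None
    g = gap[0]
    # only an accusative NP gap may be discharged constructionally
    if g["cat"] != "NP" or g["case"] != "acc":
        return None
    # mother: GAP emptied; the gap's index identified with the noun's
    return {"cat": "N", "index": noun["index"], "gap": ()}

person = {"cat": "N", "index": "i"}
i_met = {"cat": "S", "gap": ({"cat": "NP", "case": "acc"},)}  # 'I met __'
mother = license_bare_rel(person, i_met)
assert mother == {"cat": "N", "index": "i", "gap": ()}

# a clause with only a nominative (subject) gap is not licensed, cf. (23)
he_left = {"cat": "S", "gap": ({"cat": "NP", "case": "nom"},)}
assert license_bare_rel(person, he_left) is None
```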

11.6 Restrictive vs. Nonrestrictive Relative Clauses

In addition to recognizing the relative clause types that we have reviewed in this chapter, linguists draw an interpretive distinction between ‘restrictive’ and ‘nonrestrictive’ relative clauses. Consider the following examples:
(46) a. The person who John asked for help thinks he is foolish.
b. The person, who John asked for help, thinks he is foolish.

The relative clause in (46a) semantically restricts the denotation of person, whereas that in (46b) simply gives additional information about the person. Let us consider one more pair of examples:

4 Note that we do encounter bare relatives with a gapped nominative subject:
a. He made a statement [everyone thought [ was interesting and important]].
b. They all agreed to include those matters [everyone believed [ had been excluded from the Treaty]].

The subject-gap bare relative is possible when the relative clause is embedded as the complement
of a verb like thought and believed, but not when it directly modifies the nominal head, as in (23).
To license such examples, we must modify the head-rel bare mod construction.

(47) a. John has two sisters who became lawyers. (‘restrictive’)
b. John has two sisters, who became lawyers. (‘nonrestrictive’)

The second example suggests that John has only two sisters, while the first means
that two of his sisters are lawyers but leaves open the possibility that he has
additional sisters. The denotation of the restrictive relative clause (RRC) two
sisters who became lawyers is thus the intersection between the set of two sisters
and the set of lawyers. There can be more than two sisters, but there are only two
who became lawyers. By contrast, the nonrestrictive clause (NRC) two sisters,
who became lawyers must be understood to mean that there are two sisters and
they all became lawyers: There is no intersection of meaning here.
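The intersective semantics just described can be made concrete with sets. The individuals below are invented purely for illustration:

```python
sisters = {"ann", "beth", "carol"}   # hypothetical: John's sisters
lawyers = {"ann", "beth", "dave"}    # hypothetical: the lawyers

# Restrictive: 'two sisters who became lawyers' denotes the intersection
# of the two sets; John may have more sisters than the two lawyers.
restrictive = sisters & lawyers
assert restrictive == {"ann", "beth"}
assert len(restrictive) == 2 and len(sisters) > 2

# Nonrestrictive: 'two sisters, who became lawyers' presupposes that the
# referent set is exactly John's two sisters and asserts lawyerhood of
# every member - no intersection is computed.
referents = {"ann", "beth"}               # John's (only) two sisters
nonrestrictive_claim = referents <= lawyers
assert nonrestrictive_claim
```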
This meaning difference has given rise to the idea that the RRC modifies the
meaning of N – a noun phrase without a determiner – whereas the NRC modifies
a fully determined NP (McCawley, 1988):
(48) Restrictive Relative Clause (RRC): [tree diagram not reproduced]

(49) Nonrestrictive Relative Clause (NRC): [tree diagram not reproduced]

These representational differences are intended to reflect the fact that the RRC
is interpreted as restricting the set of women under consideration to a particular
subset (those whom we respect), while the NRC simply adds information about
the antecedent ‘Frieda.’
Note that in terms of the syntactic combination, (48) is licensed by the HEAD -
MODIFIER CONSTRUCTION but (49) is not, since the NP and the appositive

relative clause are not in a head-modifier relation. The NRC in (49) is quite similar
to the nominal apposition constructions given in the following (van Eynde and
Kim, 2016):

(50) a. He was one of the few that told [the president], [Johnson], to get out of
Vietnam.
b. [Dr. William], [a consultant from Seoul], is to head the new unit.
c. That was his first trip to [the capital of Korea], Seoul.

In these so-called appositional constructions, there are two NPs, an anchor (the
president), and an appositive (Johnson), linked to the same individual. The second NP, the appositive, is optional, but it adds identifying information about the referent of the first NP, the anchor. The added information consists of a proposition
about the anchor, as illustrated by the following:

(51) a. The president is Johnson.
b. Dr. William is a consultant from Seoul.
c. The capital of Korea is Seoul.

In this respect, the NRC is also similar to nominal apposition in adding a proposition that describes a property of the anchor:

(52) a. [Isabelle], [who the police looked for], went into exile in 1975.
b. [Politicians], [who make extravagant promises], cannot be trusted.
c. For camp, the children need [sturdy shoes], [which are expensive].

This implies that English grammar contains the following construction for
nominal apposition as well as NRC constructions:5

(53) APPOSITIVE CONSTRUCTION:
NP[SEM [IND i, RELS ⟨1, 2⟩]] → NP[SEM [IND i, RELS ⟨1⟩]] , NP/S[SEM [IND s0, RELS ⟨2⟩]]

The construction rules indicate that an anchor NP syntactically combines with either an NP or an S, forming an appositive construct. The meaning relation between the two is not a head-modifier one but an addition relation (1 and 2) in which the appositive NP or S evokes a sentential meaning. This then assigns a structure like the following for (52a):

5 The NRC allows the anchor to be a non-NP. To cover such a case, we need to distinguish NRCs
from nominal appositions.

(54) [tree diagram not reproduced]

The anchor Isabelle refers to an individual, while the appositive clause who the
police looked for refers to a situational proposition. The syntactic combination of
the two licenses an appositive construct, while each contributes to the meaning
of the phrasal mother sign (a complex NP).
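The pooling of semantic relations in (53) can be sketched as list concatenation. The relation tuples below are invented notation, used only to illustrate how the mother keeps the anchor's index while collecting both daughters' RELS:

```python
def appositive(anchor, appos):
    """Toy version of the APPOSITIVE CONSTRUCTION: the mother NP keeps
    the anchor's index and pools the RELS lists of both daughters."""
    return {"ind": anchor["ind"], "rels": anchor["rels"] + appos["rels"]}

# (52a): 'Isabelle, who the police looked for, ...'
isabelle = {"ind": "i", "rels": [("named", "i", "Isabelle")]}
looked_for = {"ind": "s0", "rels": [("look_for", "police", "i")]}

mother = appositive(isabelle, looked_for)
assert mother["ind"] == "i"              # the complex NP refers to Isabelle
assert mother["rels"] == [("named", "i", "Isabelle"),
                          ("look_for", "police", "i")]
```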
Accordingly, it seems that there are two different types of relative clauses with
different syntactic structures. The RRC is licensed by a head-modifier construc-
tion, while the NRC is licensed by an appositive construction. This structural
and semantic difference can provide us with a way of explaining why the RRC
cannot modify a pronoun or proper noun:6
(55) a. I met the man who grows peaches.
b. I met the lady from France who grows peaches.

(56) a. *I met John who grows peaches.
b. *I met her who grows peaches.

Given that ‘John’ and ‘her’ refer to unique individuals, we expect that no further restriction is possible. Nonrestrictive relative clauses like (57) can modify proper nouns or pronouns, simply because
they provide additional or background information about a mutually identifiable
individual:
(57) a. In the classroom, the teacher praised Lee, whom I also respect.
b. Reagan, whom the Republicans nominated in 1980, lived most of his life in
California.

6 In certain expressions of English, a who relative clause can modify a nominative animate pronoun like she, he, or we:
a. He who laughs last laughs best.
b. He who knows most, knows best how little he knows.

The relative clause whom I also respect modifies the proper noun Lee without
restricting its designation, and it has the same interpretation as a conjoined clause
like The teacher praised Lee, and I also respect her.
There is another semantic implication of the restrictive vs. nonrestrictive distinction: Only a restrictive clause can modify a quantified NP like every N or no N:

(58) a. Every student who attended the party had a good time.
b. *Every student, who attended the party, had a good time.

(59) a. No student who scored 80 or more in the exam was ever failed.
b. *No student, who scored 80 or more in the exam, was ever failed.

There is a straightforward explanation for this contrast. When restrictive relative clauses are combined with nominal expressions containing quantifiers, as in every student who attended the party, they further restrict the range of the quantifier (e.g., in (58a) from students in general to students who attended the party).
But because NPs like no student or every student do not refer to a specific mutu-
ally identifiable individual, such NPs cannot be combined with nonrestrictive
relative clauses (Huddleston and Pullum, 2002).
The distinction between N and NP has also been used to explain why a
restrictive clause must precede a nonrestrictive clause:7

(60) a. The contestant who won the first prize, who is the judge’s brother-in-law,
sang dreadfully.
b. *The contestant, who is the judge’s brother-in-law, who won the first prize
sang dreadfully.

Compare the following partial structures for the two NPs at issue:

(61) [tree diagrams not reproduced]

7 Sentence (60b) is interpretable as involving a sequence of two nonrestrictive clauses.



Only in (61a) can the first relative clause be interpreted restrictively, as it is attached at the N level. Strictly speaking, as represented above, (61b) is not ill-formed, but it can only have an interpretation in which both relative clauses are nonrestrictive.8 Though several issues remain in the analysis of RRCs and NRCs,9 it seems clear that the two types are syntactically as well as semantically distinct.

11.7 Island Constraints on the Filler-Gap Dependencies

We have observed that in wh-interrogatives and relative clauses, the filler and the gap can be in a long-distance relationship. Yet there are constructions in which this dependency seems to be restricted in certain ways. Consider
the following examples:
(62) a. [Who] did he believe [that he would one day meet ]?
b. [Which celebrity] did he mention [that he had run into ]?

(63) a. *[Who] did he believe [the claim that he had never met ]?
b. *[Which celebrity] did he mention [the fact that he had run into ]?

What is the source of these contrasts? Let us compare the partial structures of
(62a) with (63a):

8 One additional difference between restrictive and nonrestrictive clauses is that that is used mainly
in restrictive clauses:

a. The knife [which/that] he threw into the sea had a gold handle.
b. The knife, [which/??that] he threw into the sea had a gold handle.

9 The structural account, in which nonrestrictive clauses attach to NP and restrictive clauses to N ,
fails to account for certain facts. For example, a restrictive clause appears to attach to NP when
the relative clause modifies an indefinite pronoun, as in everyone who smiled must have been
happy, or when the clauses modify two conjoined full NPs, as in the man and the woman who
are neighbors are getting to know each other. To account for such examples, we must develop
a more elaborated syntactic and semantic analysis. See Fabb (1990), Sag (1997), Arnold (2004),
Chaves (2007), and references therein for further discussion.

(64) [tree diagrams not reproduced]

As we can see in (64a), a CP may have a GAP value. However, as shown in (64b), an NP containing a CP cannot have a GAP value. The latter structure is traditionally known as a ‘Complex NP’ (Ross, 1967) and
is metaphorically described as an ‘island’ because an element within this
island cannot be extracted from it or linked to an expression outside.
Following is a relatively complete list of island constraints assumed for
English:
• The Coordinate Structure Constraint (CSC): In a coordinate structure, no
element in one conjunct alone can be wh-questioned or relativized.
(65) a. Bill cooked supper and washed the dishes.
b. *What did Bill [[cook ] and [wash the dishes]]?
c. *What did Bill [[cook ] and [wash ]]?

• The Complex Noun Phrase Constraint (CNPC): No element within a CP or an S dominated by an NP can be wh-questioned or relativized.
(66) a. He refuted the proof that you cannot square it.
b. *What did he refute [the [proof [that you cannot square ]]]?

(67) a. They met someone [who knows the professor].
b. *[Which professor] did they meet [someone [who knows ]]?

• The Sentential Subject Constraint (SSC): An element within a clausal subject cannot be wh-questioned or relativized.
(68) a. [That he has met the professor] is extremely unlikely.
b. *Who is [that he has met ] extremely unlikely?

• The Left-Branch Constraint (LBC): No expression that is the leftmost constituent of a larger NP can be wh-questioned or relativized.
(69) a. She bought [John’s] book.
b. *[Whose] did she buy [ book]?

• The Adjunct Clause Constraint (ACC): An element within an adjunct cannot be questioned or relativized.
(70) a. Which topic did you choose without getting his approval?
b. *Which topic did you get bored [because Mary talked about ]?

• The Indirect Wh-question Constraint: An NP that is within an indirect question cannot be questioned or relativized.
(71) a. Did John wonder who would win the game?
b. *What did John wonder [who would win ]?

Various attempts have been made to account for such island constraints.
Among these, we sketch an analysis within the present system that relies on
licensing constraints on subtree structures. As we have seen in previous chapters,
the present analysis provides a straightforward account of the CSC:
(72) [tree diagram not reproduced]

Although the two VPs are coordinated, they are not identical with respect to their GAP values. This violates the constraint imposed by the COORDINATION CONSTRUCTION, which allows only identical categories to be coordinated.10
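The identity requirement at work here can be made concrete with a small sketch. The following Python fragment is our own illustration, not part of the formal grammar (the function and field names are invented): each conjunct is modeled as a category plus a list of GAP specifications, and coordination is licensed only when the conjuncts match on both.

```python
# Illustrative sketch only (names and encoding are ours, not the book's
# formalism): a conjunct is modeled as a dict holding its syntactic
# category and its list of GAP specifications.

def coordination_licensed(conjuncts):
    """License a coordinate structure only if every conjunct has the same
    category and an identical GAP list -- the identity requirement from
    which the Coordinate Structure Constraint effect follows."""
    first = conjuncts[0]
    return all(
        c["cat"] == first["cat"] and c["gap"] == first["gap"]
        for c in conjuncts[1:]
    )

# (65b): *What did Bill [cook __] and [wash the dishes]?
cook_gap = {"cat": "VP", "gap": ["NP[acc]"]}   # 'cook __' is missing its object
wash_full = {"cat": "VP", "gap": []}           # 'wash the dishes' is complete

print(coordination_licensed([cook_gap, wash_full]))   # False: GAP values differ
# (65a): Bill [cooked supper] and [washed the dishes] -- both conjuncts gapless
print(coordination_licensed([wash_full, {"cat": "VP", "gap": []}]))  # True
```
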

The existence of some island constraints has been questioned, since violations
of island constraints can sometimes produce acceptable sentences. For example,
the following examples are acceptable, although both violate a claimed island
constraint:

(73) a. What did he get the impression that the problem really was __? (CNPC)
     b. This is the paper that we really need to find the linguist who
        understands __. (CNPC)

In addition, observe the following examples (Ross, 1967; Kluender, 2004):

(74) a. *Which rebel leader did you hear [Cheney's rumor [that the CIA
        assassinated __]]?
     b. ??Which rebel leader did you hear [the rumor [that the CIA assassinated
        __]]?
     c. ?Which rebel leader did you hear [a rumor [that the CIA assassinated __]]?
     d. Which rebel leader did you hear [rumors [that the CIA assassinated __]]?

These examples have identical syntactic structures but differ in acceptability. The
data indicate that it may not be the syntactic structure but the properties of the
head of the complex NP that influence the acceptability of such sentences. This
implies that processing factors closely interact with the grammar of filler-gap
constructions (see Hofmeister et al., 2006).

10 There are cases that seem to violate the CSC when coordinate conjuncts express specific types
of event relations, as noted by Ross (1967), Goldsmith (1985), and others:

     a. How much can you drink __ and still stay sober?
     b. What did Harry buy __, come home, and devour __ in thirty seconds?

In (a), the conjunction can be paraphrased as 'and nonetheless,' and in (b) the operative relation
among the conjuncts is narration.

11.8 Conclusion

This chapter explored the syntax of various types of English relative
clauses. Like the wh-interrogative constructions explored in the previous chapter,
relative clauses have been taken to be unbounded dependency constructions.
Adopting the same mechanisms that we used for the analysis of wh-interrogatives,
this chapter offered a declarative, feature-based analysis of a
range of relative clauses in English, including subject wh-relatives, nonsubject
wh-relatives, that-relatives, infinitival relatives, and bare relatives. To capture the
linkage between the filler wh-relative pronoun (including that) and the gap in
the relative clause, as in the analysis of wh-interrogatives, the chapter employed
key mechanisms including the ARC (Argument Realization Constraint), head
features like MOD, nonlocal features like GAP and REL, the NIP (Nonlocal
Inheritance Principle), constructional constraints in the HEAD-FILLER
CONSTRUCTION, and subtypes of the HEAD-MODIFIER CONSTRUCTION
(HEAD-REL MOD and HEAD-REL BARE MOD). The chapter has demonstrated that
interactions among these can license each subpattern of the English relative
clause constructions.
In addition, the chapter discussed two important phenomena: differences
between restrictive and nonrestrictive relative clauses, and island constraints on
filler-gap dependencies. We have seen that restrictive and nonrestrictive relative clauses behave differently with respect to both syntax and semantics. An island constraint is a configuration that blocks a syntactic dependency (e.g., movement or linkage) between constituents in a particular structure. Island
constraints have been a cornerstone of syntactic research since Ross (1967).
We discussed how these constraints can be interpreted within the present sys-
tem, although many, if not all, island constraints are potentially reducible to
nonsyntactic (interpretive, processing, or discourse) principles.
In Chapter 12, we will explore constructions (e.g., tough, it-extraposition,
and cleft) that illustrate slightly different dependencies between the gap and
its putative filler. Once again, we will see that the licensing of these construc-
tions requires mechanisms not appreciably different from those we developed
for wh-interrogatives and relative clauses.

Exercises

1. Find a grammatical error in each of the following sentences and then


explain the nature of the error:
a. *Students enter high-level educational institutions might face
many problems relating to study habits.
b. *A fellow student saw this felt sorry for Miss Kim and offered
her his own book.
c. *Experts all agree that dreams cause great anxiety and stress
are called nightmares.
d. *The victims of the earthquake their property was destroyed in
the disaster were given temporary housing by the government.

2. Draw tree structures for the following examples and discuss which
grammar rules license each phrase involving a wh-expression or that:
(i) a. This is the book which I need to read.
b. This is the very book that we need to talk about.
c. The person whom they intended to speak with agreed to
reimburse us.
d. The motor that Martha thinks that Joe replaced costs thirty
dollars.

(ii) a. The official to whom Smith loaned the money has been
indicted.
b. The man on whose lap the puppet is sitting is a ventriloquist.

c. The teacher set us a problem the answer to which we can find


in the textbook.
d. We just finished the final exam, the result of which we can find
out next week.
3. Draw structures for the following ungrammatical examples and
identify which island constraint is violated in each case:
a. *What did Herb start to play only after he drank?
b. *Who did Herb believe the claim that cheated?
c. *What did Herb like fruit punch and?
d. *What was that the Vikings ate a real surprise to you?
e. *What did you meet someone who understands?
4. Compare the following pairs of examples by considering the struc-
ture of each. In particular, consider whether the structure involves a
relative clause or a CP complement:
(i) a. The fact that scientists have now established all the genes in the
human body is still not widely known.
b. The fact that the scientists used the latest technology to verify
was reported at the recent conference.
(ii) a. They ignored the suggestion that Lee made.
b. They ignored the suggestion that Lee lied.
(iii) a. They denied the claim that we had advanced.
b. They denied the claim that they should report only to us.
5. English also allows adverbial relative clauses like those below. Can
the analysis in this chapter explain such examples? If it can, how? If
it cannot, can you explain why not?
a. The hotel where Gloria stays is being remodeled.
b. The day when Jim got fired was a sad day for everyone.
c. That is the reason why he resigned.
6. Read the following passage and provide correct expressions that fit
in the underlined positions:
Pied-piping describes the situation (a) ____ a phrase larger than
a single wh-word occurs in the fronted position. When the wh-
word is a determiner such as which or whose, pied-piping refers
to the fact (b) ____ the wh-determiner appears sentence-initially
along with its complement. For instance, in the example Which
car does he like?, the entire phrase (c) ____ is moved. In
the transformational analysis, the wh-word which moves to the
beginning of the sentence, taking car, its complement, with it,
much as the Pied Piper of Hamelin attracts rats and children to
follow him; hence the term pied-piping.11

11 Adapted from Wikipedia, http://en.wikipedia.org/wiki/Wh-movement


12 Tough, Extraposition, and Cleft Constructions

12.1 Introduction

English displays the constructions illustrated in (1), known respectively as 'tough movement,'1 'extraposition,' and 'cleft' constructions:2

(1) a. John is tough to persuade __. ('Tough' movement)


b. It bothers me that John snores. (Subject Extraposition)
c. John made it clear that he would finish it on time. (Object Extraposition)
d. It is John that I met last night in the park. (Cleft)

Though these constructions each involve some kind of nonlocal dependency,


they are different from wh-question or relative clause constructions in several
respects. This chapter looks into the primary properties of these constructions.
The previous two chapters have shown that in wh-questions and relative
clauses, the syntactic category of gap and filler must match:

(2) a. I wonder [NP whom [Sandy loves __NP]]. (Wh-question)
b. This is the politician [PP on whom [Sandy relies __PP]]. (Wh-relative clause)

One thing we can observe here is that the fillers whom and on whom are not in a
core clause position (subject or object) but are in an adjoined filler position.
Consider examples of the tough-movement type:

(3) a. He is hard to love __.
b. This car is easy to drive __.

The gap in (3a) would correspond to an ‘accusative’ object NP (him), whereas the
apparent filler is a ‘nominative’ subject (he). The filler and the gap here are thus
not identical syntactically, though they are understood as referring to the same
individual. Owing to the lack of syntactic identity, the dependency between the
filler and the gap is considered ‘weaker’ than that in wh-questions or wh-relatives
(Pollard and Sag, 1994).
The extraposition and cleft constructions in (1b)–(1d) are also different from wh-questions as well as from tough-construction examples. In clefts, we have a gap and a corresponding filler, but in extraposition we have a long-distance relationship between the extraposed clause and the expletive pronoun it. We will explore the differences among these constructions in detail.

1 The construction is named after adjectives that appear in it, like tough, easy, difficult, etc.
2 The more accurate term for Extraposition is it-Subject and -Object Extraposition.

12.2 ‘Tough’ Constructions and Topichood

12.2.1 Basic Properties


Adjectives like easy, tough, difficult, and so on can appear in three
seemingly related constructions:
(4) a. To please John is easy/tough.
b. It is easy/tough to please John.
c. John is easy/tough to please.

Superficially quite similar predicates, such as eager and ready, do not allow all
three options:
(5) a. *To please John is eager/ready.
b. *It is eager/ready to please John.
c. John is eager/ready to please.

Even though (4c) and (5c) are both grammatical and look structurally identical, they turn out to be quite different once we examine their properties in detail. Consider the following contrast:
(6) a. Kim is easy to please.
b. Kim is eager to please.

One obvious difference between (6a) and (6b) lies in the grammatical roles of
Kim: In (6a), Kim is the object of please, whereas Kim in (6b) is the subject
of please. More specifically, the verb please in (6a) is used as a transitive verb
whose object is identified with the subject Kim. Meanwhile, the verb please in
(6b) is used intransitively, not requiring any object. This difference is shown
clearly by the following examples:
(7) a. *Kim is easy [to please Tom].
b. Kim is eager [to please Tom].

The VP complement of the adjective easy thus cannot have a surface object,
whereas eager has no such restriction. This means that the VP complement of
easy has to be incomplete in the sense that it has a missing object, and this is so
with other easy-type adjectives as well:
(8) a. The signature is hard [to see __].
b. The child is impossible [to teach __].
c. The problem is easy [to solve __].

(9) a. *The signature is hard [to see it].


b. *The child is impossible [to teach him].
c. *The problem is easy [to solve the question].

In all of these examples, there must be a missing element (GAP) in the VP com-
plement. Meanwhile, eager places no such restriction on its VP complement,
which should be internally complete:

(10) a. John is eager [to examine the patient].


b. John is eager [to find a new home].

(11) a. *John is eager [to examine __].
b. *John is eager [to find __].

These observations lead us to the following descriptive generalization:

(12) Unlike eager-type adjectives, easy-type adjectives select an infinitival VP


complement which has one missing element semantically linked to its
subject.
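Generalization (12) can be pictured as a difference in what each adjective class demands of its VP complement. The sketch below is a deliberately simplified illustration, not the book's formalism (the function names and list encoding are our own); a VP is represented just by the list of its missing elements.

```python
# A deliberately simplified picture of generalization (12); the function
# names and the list encoding of a VP's missing elements are invented.

def easy_selects(vp_gaps):
    """easy-type adjectives demand a VP complement containing a missing NP
    that will be linked to the adjective's subject."""
    return "NP" in vp_gaps

def eager_selects(vp_gaps):
    """eager-type adjectives place no GAP requirement on their complement;
    a complete VP is fine (and any gap must be bound by a later filler)."""
    return True

to_please = ["NP"]        # 'to please __'   (incomplete VP)
to_please_tom = []        # 'to please Tom'  (complete VP)

print(easy_selects(to_please))       # True:  'Kim is easy to please __'
print(easy_selects(to_please_tom))   # False: '*Kim is easy to please Tom'
print(eager_selects(to_please_tom))  # True:  'Kim is eager to please Tom'
```
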

12.2.2 Transformational Analyses


Let us consider two related examples first:

(13) a. It is easy to please John.


b. John is easy to please.

Traditional movement analyses assumed the following deep structure for


(13a):

(14) [S ___ is easy [CP [S PRO to please John]]]

The expletive it is introduced in S-structure in the matrix subject position


to generate (13a). One might assume direct movement of John to the sub-
ject position for (13b), but an issue immediately arises with examples like
(15):

(15) He_i is easy to please __i.

The problematic aspect is the status of the subject He: How can a direct move-
ment approach move him into the subject position and then change the form into
he?3 As a solution, Chomsky (1986) proposes an empty operator (Op) movement
operation, represented here:

3 In technical terms, this will violate the ‘Case Filter’ of Government-Binding Theory, as he
receives two cases: accusative from the original object position and nominative from the subject
position.

(16) [tree diagram not reproduced: He_i is easy [Op_i [PRO to please t_i]], with the empty operator moved from object position]

The subject he is base-generated in the matrix subject position, while the null
operator Opi moves to the intermediate position from its original object position,
leaving the trace (ti ). At an interpretive level, this operator is coindexed with
the subject, indirectly linking the gap with the filler even though the two have
different case markings.

12.2.3 A Construction-Based Analysis


As we saw earlier, easy-type adjectives, unlike eager-type adjectives,
require an incomplete VP complement as a lexical property. This subcatego-
rization restriction appears to be a lexical fact about a family of adjectives
and verbs. In addition to adjectives like easy, verbs like take and cost also
select an infinitival VP containing an accusative NP gap coindexed with the
subject:
(17) a. This theorem will take only five minutes to prove __.
b. This theorem will take only five minutes to establish that he proved __ in
1930.
(18) a. This scratch will cost Kim $500 to fix __.
b. This $500 bribe will cost the government $500,000 to prove that Senator
Jones accepted __.

Meanwhile, as we noted in the previous section, eager-type adjectives lack this


subcategorization restriction.
We can represent this lexical difference by means of lexical specifications on
tough-type lexemes. Let us begin with the easy- or tough-type, which selects a VP complement with one NP missing:

(19) tough-lxm ⇒ [ SYN|HEAD|POS  adj
                   ARG-ST  ⟨NP_i, VP[VFORM inf, GAP ⟨[1]NP_i[acc]⟩]⟩ ]

This lexical construction specifies that the infinitival complement (VP or CP)
of adjectives like easy contains a GAP value (NPi ) that is coindexed with the
subject. This coindexation will ensure the semantic linkage between the matrix
subject and the gapped NP. Notice that, unlike canonical filler-gap constructions,
in which the GAP value is discharged when it meets the filler (by the HEAD-FILLER CONSTRUCTION), the GAP feature licensed by the tough-adjective needs to be discharged constructionally:
(20) TOUGH CONSTRUCTION:
     AP[GAP A] → A[tough-adj, SPR ⟨NP_i⟩], XP[GAP ⟨NP_i[acc]⟩ ⊕ A]

The construction allows a tough-adjective to combine with a phrase that contains an accusative (acc) GAP value. This GAP value is coindexed with the adjective's subject, and the GAP value is then constructionally discharged (not passed up). The interaction of the lexical specifications of tough adjectives with this construction then licenses the following structure for (6a):
(21) [tree diagram not reproduced: structure of 'Kim is easy to please,' in which easy discharges its VP complement's GAP value]

As shown in the tree, the transitive verb please introduces its accusative object
as the GAP value, hence the mother infinitival VP is incomplete. The adjective
easy combines with this VP, constructionally discharging the GAP value in accor-
dance with (20). Note that the subject of the adjective easy is coindexed with
the GAP value in accordance with its lexical specifications. The copula verb, as
we have seen in Chapter 8, is a raising verb whose AP complement’s subject
is identical to its subject NP. This is why the subject NP Kim is in fact coin-
dexed with the AP’s subject and with the GAP value. As such, the interplay of
the lexical properties of easy and is with other principles like the NIP ensures the
semantic dependency between the subject and the GAP value in the different local
domain.
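The constructional discharge in (20) can also be sketched procedurally. In the following illustration (our own, with invented names), the construction removes exactly one accusative NP gap coindexed with the subject from the complement's GAP list; whatever remains is passed up, as in the multiple-gap case discussed later in (25b).

```python
# Procedural sketch (ours) of the constructional GAP discharge in (20):
# gaps are (category, index) pairs; the construction binds exactly one
# NP[acc] gap coindexed with the subject and passes the rest up.

def tough_combine(subject_index, vp_gaps):
    """Return the GAP list of the resulting AP: the VP complement's gaps
    minus one accusative NP gap coindexed with the subject."""
    for pos, (cat, idx) in enumerate(vp_gaps):
        if cat == "NP[acc]" and idx == subject_index:
            return vp_gaps[:pos] + vp_gaps[pos + 1:]  # discharged here
    raise ValueError("tough-adjective needs a coindexed NP[acc] gap")

# 'This sonata_i is easy [to play __i on this piano]':
print(tough_combine("i", [("NP[acc]", "i")]))   # [] -- nothing passed up

# 'Which piano_j is this sonata_i easy [to play __i on __j]?':
remaining = tough_combine("i", [("NP[acc]", "i"), ("NP[acc]", "j")])
print(remaining)  # [('NP[acc]', 'j')] -- later bound by the filler 'which piano'
```
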
Meanwhile, the lexical information for eager-type adjectives is very simple:
(22) eager-lxm ⇒ [ SYN|HEAD|POS  adj
                   ARG-ST  ⟨NP_i, VP[VFORM inf, SPR ⟨NP_i⟩]⟩ ]

The adjective belonging to this group selects a complete infinitival VP with no


missing element. In addition, this type of adjective is a control predicate and thus
its subject is coindexed with the (VP) complement’s subject (see Chapter 7). The
lexical specification will project the following structure for (6b):

(23) [tree diagram not reproduced: structure of 'Kim is eager to please,' with a complete VP complement controlled by Kim]

The lexical specification of eager in (22) ensures that the AP’s subject is coin-
dexed with its VP complement’s subject. This implies that the infinitival VP
complement is controlled by the subject Kim. However, it places no restriction
on the GAP value of its VP complement, and so it can legitimately com-
bine with the fully saturated VP complement. When its VP complement has

a GAP value, it must be later discharged by a filler, as seen in the following


contrast:

(24) a. *Kim is eager to recommend __.
b. Who is Kim eager to recommend __?

Notice that the present analysis can straightforwardly account for examples in
which the VP complement includes more than one GAP element. Compare the
following pair of examples:

(25) a. This sonata is easy to play __ on this piano.
b. Which piano is this sonata easy to play __ on __?

The structure of (25a) is similar to that of (21):

(26) [tree diagram not reproduced: structure of (25a), parallel to (21)]

As in the structure in (21), the adjective easy combines with an incomplete VP whose missing GAP value is constructionally discharged and coindexed with the matrix subject. This is a typical example of the tough construction.
Now consider the structure of (25b), in which the missing object of the
verb play is linked to the subject this sonata, while the missing object of the
preposition on is linked to the wh-phrase which piano:

(27) [tree diagram not reproduced: structure of (25b), whose VP complement carries two GAP values]

In the structure above, the VP complement of easy has two GAP values: one represents the missing object of play ([4]NP) and the other the missing object ([2]NP) of on. The first GAP value, coindexed with the subject this sonata, is constructionally bound by easy in accordance with (20). The remaining GAP value ([2]NP) is passed up to the second higher S, where it is discharged by its filler, which piano, through the HEAD-FILLER CONSTRUCTION.

12.3 Extraposition

12.3.1 Basic Properties


English employs an extraposition strategy that places a heavy con-
stituent, such as a that-clause, wh-clause, or infinitival clause, at the end of the
sentence:
(28) a. [That dogs bark] annoys people.
b. It annoys people [that dogs bark].

(29) a. [Why she told him] is unclear.


b. It is unclear [why she told him].

(30) a. [(For you) to leave so soon] would be inconvenient.


b. It would be inconvenient [(for you) to leave so soon].

This kind of alternation is quite systematic: Given sentences like (31a), English
speakers have an intuition that (31b) is possible:
(31) a. That the Dalai Lama claims Tibet’s independence discomfits the Chinese
government.
b. It discomfits the Chinese government that the Dalai Lama claims Tibet’s
independence.

The extraposition strategy can also be applied to a clausal complement:


(32) a. I believe the problem to be obvious.
b. *I believe [that the problem is not easy] to be obvious.
c. I believe it to be obvious [that the problem is not easy].

As seen in (32b)–(32c), when a clausal complement is followed by an infinitival


VP complement, the former is obligatorily extraposed to the sentence-final posi-
tion. In addition to a finite CP, as in (32c), extraposition applies to an infinitival
CP/VP, a simple S, or a gerundive phrase:
(33) a. I do not think it unreasonable [to ask for the return of my subscription].
b. He made it clear [he would continue to cooperate with the United
Nations].
c. They’re not finding it a stress [being in the same office].

12.3.2 Transformational Analysis


Two major kinds of movement analyses have been used to capture
the relationships between sentences like the following:
(34) a. [That you came early] surprised me.
b. It surprised me [that you came early].

One approach assumes that the surface structure of a subject-extraposition


sentence like (34b) is generated from (34a), as represented in the following
(Rosenbaum, 1967):
(35) [tree diagram not reproduced: extraposition of the subject clause to sentence-final position]

The extraposition rule moves the finite clause you came early to a sentence-final
position. This movement process also introduces a rule to insert the comple-
mentizer that, thus generating (34b). To generate nonextraposed sentences like
(34a), the analysis posits deletion of it in (34a), followed by addition of the
complementizer that.
A slightly different analysis assumes the opposite direction of movement
(Emonds, 1970; Chomsky, 1981a; Groat, 1995). That is, instead of extraposing
the clause from the subject, the clause is assumed to already be in the extraposed
position as in (36a):
(36) a. [[ ] [VP surprised [me] [CP that you came early]]].
b. [[It] [VP surprised me that you came early]].

The insertion of the expletive it in the subject position in (36a) would then
account for (36b). When the CP clause is moved to the subject position, the
result is the nonextraposed sentence (34a).
Most current movement approaches follow this second line of thought.
Although such derivational analyses can capture certain aspects of English
subject extraposition, they are not specified sufficiently to account for lexi-
cal idiosyncrasies and instantiation of the extraposed clause in a position not
immediately following the main predicator (see Kim and Sag (2005) for further
discussion).

12.3.3 A Construction-Based Analysis


As we have seen, English exhibits an apparent alternation between
nonextraposed and extraposed sentence patterns like the following:
(37) a. [That Chris knew the answer] occurred to Pat.
b. It occurred to Pat [that Chris knew the answer].

This alternation is quite productive. As English acquires new verbs, for example freak out, weird out, or bite, it acquires both extraposed and nonextraposed sentence patterns for each of these predicators (Jackendoff, 2002):
(38) a. It really freaks/weirds me out that we invaded Iraq.
b. That we invaded Iraq really freaks/weirds me out.

(39) a. It really bites that we invaded Iraq.


b. That we invaded Iraq really bites.

The simple generalization about the process of extraposition is that it


applies to a verbal element (CP, VP, and S). Adopting the analysis
of Sag et al. (2003) and Kim and Sag (2005), we can assume that
the extraposition rule also refers to the verbal category, whose subtypes

include both comp and verb (see Chapter 5.4.2). In particular, we can
adopt the following lexical rule to capture the systematic relationship in
extraposition:

(40) EXTRAPOSITION CONSTRUCTION:
     [ v-wd
       ARG-ST ⟨..., [1]XP[verbal], ...⟩ ]
     →
     [ extraposed-wd
       ARG-ST ⟨..., NP[NFORM it], ...⟩
       EXTRA ⟨[1]XP⟩ ]

This is a type of postinflectional construction that allows words to be derived from other words (see Sag, 2012). The rule says that if a predicative element (an adjective or verb) selects a verbal argument (either CP or S), this verbal element can be realized as the value of the feature EXTRA, together with the introduction of it as an additional argument.
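The effect of rule (40) can be sketched as a function from lexical entries to lexical entries. This is only an approximation of the feature-structure formalism (the dictionary encoding and names are our own, and only CP/S arguments are treated as 'verbal' in this simplification):

```python
# A rough functional rendering (ours) of the EXTRAPOSITION CONSTRUCTION
# in (40): the dict fields and the function name are invented.

def extrapose(entry):
    """If the entry selects a verbal (CP/S) argument, return a derived
    entry in which that argument is swapped for expletive 'it' on ARG-ST
    and placed on the EXTRA list; otherwise the rule is inapplicable."""
    for pos, arg in enumerate(entry["arg_st"]):
        if arg in ("CP", "S"):
            new_args = list(entry["arg_st"])
            new_args[pos] = "NP[it]"          # the expletive argument
            return {"form": entry["form"],
                    "arg_st": new_args,
                    "extra": [arg]}
    return None                               # no verbal argument to extrapose

annoys = {"form": "annoys", "arg_st": ["CP", "NP"], "extra": []}
print(extrapose(annoys))
# {'form': 'annoys', 'arg_st': ['NP[it]', 'NP'], 'extra': ['CP']}
```
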
To illustrate, consider the following data set:

(41) a. Fido’s barking annoys me.


b. That Fido barks annoys me.
c. It annoys me that Fido barks.

As shown here, the verb annoys can take either a CP or an NP as its sub-
ject. When the verb annoys selects a verbal argument (CP), it can undergo the
derivation of the EXTRAPOSITION CONSTRUCTION:

(42) Deriving an extraposed word:
     [ v-wd
       FORM annoys
       ARG-ST ⟨[1][nominal], [2]NP⟩ ]
     →
     [ extraposed-wd
       FORM annoys
       ARG-ST ⟨NP[NFORM it], [2]NP⟩
       EXTRA ⟨[1]CP⟩ ]

Because the verb annoys selects a nominal (CP or NP) argument, the verb can
undergo the EXTRAPOSITION CONSTRUCTION. This is possible because when
the nominal argument is realized as a CP, it is a subtype of verbal whose sub-
types include verb and comp. As shown here, the output extraposed verb annoys
now selects the expletive it as its subject, while its original CP serves as the
value of the EXTRA. The ARC ensures that the two arguments in the output
ARG - ST will be realized as the SPR and COMPS values, respectively, with the
addition of the EXTRA value. This derived word licenses a structure like the
following:

(43) [tree diagram not reproduced: structure of 'It annoys me that Fido barks,' with the VP's EXTRA value discharged by the final CP]

As given in the tree, the two arguments of the verb annoys are realized as SPR
and COMPS respectively. When the verb combines with the NP me, it forms a
VP with a nonempty EXTRA value. This VP then combines with the extraposed
clause CP in accordance with the HEAD-EXTRA CONSTRUCTION:
(44) HEAD-EXTRA CONSTRUCTION:
     [EXTRA ⟨ ⟩] → H[EXTRA ⟨[1]⟩], [1]XP

As shown here, the rule discharges the feature EXTRA by combining the head VP with the extraposed CP. This grammar rule reflects the fact that the
grammar of English contains a phrase pattern in which a head element combines
with an extraposed element:
(45) [tree diagram not reproduced: the head-extra phrase pattern, a head daughter combining with an extraposed daughter]

This phrasal template also serves to license the extraposition of an adjunct


element:
(46) a. [[A man came into the room] [that no one knew]].
b. [[A man came into the room] [with blond hair]].
c. I [read a book during the vacation [which was written by Chomsky]].

All of these examples are licensed by the HEAD-EXTRA CONSTRUCTION, which allows for the combination of a head element with an extraposed element.
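The discharge of EXTRA in (44) can likewise be sketched as a licensing check. Again the encoding is our own simplification, not the book's formalism:

```python
# Sketch (ours) of the licensing check performed by the HEAD-EXTRA
# CONSTRUCTION in (44): a head with a nonempty EXTRA list combines with
# a matching extraposed phrase, and the mother's EXTRA list is emptied.

def head_extra_combine(head, extraposed_cat):
    """Return the mother node if the extraposed phrase matches the single
    member of the head's EXTRA list; otherwise the combination fails."""
    if head["extra"] == [extraposed_cat]:
        return {"cat": head["cat"], "extra": []}
    return None

# 'annoys me' (a VP awaiting 'that Fido barks'):
vp = {"cat": "VP", "extra": ["CP"]}
print(head_extra_combine(vp, "CP"))   # {'cat': 'VP', 'extra': []}
print(head_extra_combine(vp, "PP"))   # None -- mismatch, not licensed
```
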
Object extraposition is similar. Consider the following examples:
(47) a. Ray found the outcome frustrating.
b. Ray found it frustrating [that his policies made little impact on poverty].

The lexical entry for find selects three arguments. The EXTRAPOSITION CONSTRUCTION effectively augments the array of complements licensed by the input
verb by adding to the EXTRA list a CP that expresses the ‘content’ argument of
the verb (the state of affairs being assessed):
(48) [ FORM find
       ARG-ST ⟨[1]NP, [2][nominal], [3]AP⟩ ]
     →
     [ FORM find
       ARG-ST ⟨[1]NP, NP[it], [3]AP⟩
       EXTRA ⟨[2][comp]⟩ ]

Since the type comp is a subtype of both nominal and verbal, the verb can
undergo the EXTRAPOSITION CONSTRUCTION. The output introduces a new
element it together with the EXTRA value. The three arguments in the derived
word will then be realized as its SPR and COMPS values, projecting a structure
like the following:
(49) [tree diagram not reproduced: structure of 'Ray found it frustrating that his policies made little impact on poverty']

The verb find requires an expletive object and an AP as its complement.


It also has a clausal element as its EXTRA element. The first VP thus has
a nonempty EXTRA value projected from the verb, and this VP combines
with the extraposed CP clause as per the HEAD-EXTRA CONSTRUCTION in (44).

One major difference between subject and object extraposition is that the latter
is obligatory:
(50) a. *I made [to settle the matter] my objective.
b. I made it [my objective] to settle the matter.
c. I made [the settlement of the matter] my objective.

(51) a. *I owe [that the jury acquitted me] to you.


b. I owe it [to you] that the jury acquitted me.
c. I owe [my acquittal] to you.

This contrast is due to a general constraint that prevents any element within the
VP from occurring after a CP:
(52) a. I believe strongly [that the Earth is round].
b. *I believe [that the Earth is round] strongly.

In the present context, this means that there is no predicative expression (verb or
adjective) whose COMPS list contains an element that follows a CP complement
(see Kim and Sag, 2005).

12.4 Cleft Constructions

12.4.1 Basic Properties


The examples in (53) illustrate three kinds of cleft construction: it-
cleft, wh-cleft, and inverted wh-cleft, respectively:
(53) a. It’s their teaching material that we’re using. (it-cleft)
b. What we’re using is their teaching material. (wh-cleft)
c. Their teaching material is what we are using. (inverted wh-cleft)

These three types of clefts all denote the same proposition, captured by the
following declarative sentence:
(54) We are using their teaching material.

This raises the question: why would a speaker use a cleft structure instead
of a simple sentence like (54)? It is commonly accepted that clefts have
shared information-structure properties, given in (55) for the example in
question:
(55) a. Presupposition (Background): We are using X.
b. Highlighted (Foreground or focus): their teaching material
c. Assertion: X is their teaching material.

Structurally, all three kinds of clefts consist of a matrix clause headed by a copula and a relative-like cleft clause whose head is coindexed with the predicative argument

of the copula. The three structures differ only in the location of the highlighted
(focused) expression.
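The shared information structure in (55) can be visualized with a toy sketch that merely re-packages one focus/background division into the three surface patterns. This is string templating only, not a grammar implementation, and the function names are our own:

```python
# Toy illustration (string templating only, not a grammar implementation)
# of how the three cleft types in (53) package one focus/background
# division into different surface forms.

def it_cleft(focus, background):
    return f"It's {focus} that {background}."

def wh_cleft(focus, background):
    return f"What {background} is {focus}."

def inverted_wh_cleft(focus, background):
    return f"{focus.capitalize()} is what {background}."

focus = "their teaching material"
background = "we're using"
print(it_cleft(focus, background))           # It's their teaching material that we're using.
print(wh_cleft(focus, background))           # What we're using is their teaching material.
print(inverted_wh_cleft(focus, background))  # Their teaching material is what we're using.
```
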

12.4.2 Distributional Properties of the Three Clefts


It-clefts: As just noted, the it-cleft construction consists of the pro-
noun it as the subject of the matrix verb be, the highlighted (or focused) phrase
XP, and a remaining cleft clause. Literature has noted that only certain types of
phrases can serve as the focal XP:
(56) a. It was [NP the man] that bought the articles from him.
b. It was [AdvP then] that he felt a sharp pain.
c. It was [PP to the student] that the teacher gave the best advice.
d. It was [S because it rained] that we came home.

An AP, a CP or an (infinitival) VP cannot serve as the focal XP:


(57) a. *It was [VP to finish the homework] that John tried.
b. *It is [AP fond of Bill] that John seems to be.
c. *It is [CP that Bill is honest] that John believes.

Also notice that in addition to that, wh-words like who and which can also
introduce a cleft clause:
(58) a. It’s the second Monday [that] we get back from Easter holiday.
b. It was the girl [who] kicked the ball.
c. It’s mainly his attitude [which] convinced the teacher.

Wh-clefts: Unlike the it-cleft, the wh-cleft construction places a cleft


clause in the subject position followed by the highlighted XP in the
postcopular position. This gives a wide range of highlighted phrases. As
shown in (59), almost all the phrasal types can serve as the highlighted
XP:
(59) a. What you want is [NP a little greenhouse].
b. What’s actually happening in London at the moment is [AP immensely
exciting].
c. What is to come is [PP in this document].
d. What I’ve always tended to do is [VP to do my own stretches at home].
e. What I meant was [CP that you have done it really well].

In contrast to the it-cleft, the wh-cleft allows an AP, a base VP, or a clause (CP,
simple S, and wh-clause) to serve as the highlighted XP:
(60) a. What you do is [VP wear it like that].
b. What happened is [S they caught her without a licence].
c. What the gentleman seemed to be asking is [S how policy would have
differed].

Inverted wh-clefts: Although the inverted wh-cleft construction is similar to the wh-cleft, the two constructions differ with respect to the type of focus phrase:

(61) a. [NP That] is what they’re trying to do.


b. [AP Insensitive] is how I would describe him.
c. [PP In the early morning] is when I do my best research.

(62) a. ??[VP Wear it like that] is what you do.


b. ??[S They caught her without a licence] is what happened.
c. ??[CP That you have done it really well] is what I meant.

All wh-words except which are possible in inverted wh-clefts:

(63) a. That’s [when] I read.


b. That was [why] she looked so nice.
c. That’s [how] they do it.
d. That’s [who] I played with over Christmas.
e. *That was [which] I decided to buy.

12.4.3 Syntactic Structures of the Three Types of Cleft: Movement Analyses
Two major kinds of movement analyses have been applied to English
it-cleft constructions: an extraposition analysis and an expletive analysis. The
extraposition analysis assumes a direct syntactic or semantic relation between
the cleft pronoun it and the cleft clause through extraposition (Akmajian, 1970;
Gundel, 1977; Hedberg, 1988):

(64) a. [What you heard] was an explosion. (wh-cleft)


b. It was an explosion, [what you heard]. (right-dislocated)
c. It was an explosion [that you heard]. (it-cleft)

For example, in Gundel (1977), the wh-cleft clause in (64a) is first right-dislocated, as in (64b), which can then generate the it-cleft (64c) once what is replaced
by that. Analyses of this nature take the cleft clause to be extraposed to the end
of the sentence.
By contrast, the expletive analysis (Chomsky, 1977; Kiss, 1998; Lambrecht, 2001) takes the pronoun it to be an expletive expression generated in
place, while the cleft clause is semantically linked to the clefted constituent by a
‘predication’ relation.

(65) It was [pred John + who heard an explosion].

A transformational version of this analysis has been proposed by Kiss (1998):


306 T O U G H , E X T R A P O S I T I O N , A N D C L E F T C O N S T RU C T I O N S

(66) [tree diagram: an FP with the clefted phrase John in its specifier, the copula as its head, and the cleft clause as the complement of F]

As shown here, the clefted phrase John, functioning as focus, is assumed to occupy the specifier of the FP (focus phrase), while the copula is the head of
the FP and the cleft clause is the complement of F. The cleft clause is thus seen
as predicated of the focal NP John.
Although the wh-cleft and the it-cleft share the function of asserting the value
of a variable in a presupposed open proposition, these constructions have divergent syntactic properties, which suggest that it is implausible to derive one from
the other (Pavey, 2004). One salient difference is this: only wh-clefts allow a
base VP as the focal XP:

(67) a. What you should do is [VP order one first].


b. *It is [VP order one first] that you should do.
c. ??[VP Order one first] is what you should do.

The three differ as well with respect to the acceptability of an adverbial subordinate clause:

(68) a. It was not until I was perhaps twenty-five or thirty that I read them and
enjoyed them.
b. *When I read them and enjoyed them was not until I was perhaps twenty-five.
c. *Not until I was perhaps twenty-five was when I read them and enjoyed them.

As seen here, the not until adverbial clause appears only in it-clefts.
Unlike it-clefts, neither wh-clefts nor inverted wh-clefts allow the cleft clause
portion to be headed by the complementizer that:

(69) a. It’s the writer [that gets you so involved].


b. *[That gets you so involved] is the writer.
c. *The writer is [that gets you so involved].

In addition, the relative pronoun of the cleft clause in an it-cleft may be a PP,
whereas a PP cannot occur in the comparable position in a wh-cleft or inverted
wh-cleft:
(70) a. And it was this matter [[on which] I consulted with the chairman of the
Select Committee].
b. *[[On which] I consulted with the chairman of the Select Committee] was
this matter.
c. *This matter was [[on which] I consulted with the chairman of the Select
Committee].

These facts suggest that the different types of cleft are not derivationally related
and should be treated as distinct constructions. Without providing detailed
analyses, we sketch out possible directions here.

12.4.4 A Construction-Based Analysis


Wh-clefts. Let us first consider wh-clefts:
(71) a. [What I ate] is an apple.
b. [What we are using] is their teaching material.

There are two observations to make here concerning the respective roles of the
copula be and the cleft clause. The copula in the cleft construction has a ‘specificational’ use, rather than a ‘predicational’ one. The examples in (72) illustrate
these two copular functions. In (72a), the copula is predicational, whereas in
examples like (72b), the copula is specificational:
(72) a. The one who got an A in the class was very happy.
b. The one who broke the window was Mr. Kim.
In (72a), the postcopular element (very happy) denotes a property
of the subject. In (72b), the postcopular NP (Mr. Kim) provides the value of a
variable. The subject refers not to an individual but to a variable (the x such
that x is a student and x broke the window). This is shown by agreement in tag
questions:
(73) a. The one who got an A in the class was very happy, wasn’t she?
b. The one who broke the window was Mr. Kim, wasn’t it/*wasn’t she?

Unlike in (73a), the appropriate tag for (73b) includes the pronoun it, not he or she. A rough paraphrase for (73b) is ‘the x such that x broke the window is Mr. Kim.’
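For readers who like a procedural gloss, the specificational reading can be modeled as value assignment: the subject contributes a restriction on a variable x, and the postcopular phrase names x’s value. The following Python sketch is purely illustrative; the data and the helper function are our own invention, not part of the formalism:

```python
# A toy model of the specificational copula: 'the x such that R(x) is V'
# asserts that the unique individual satisfying restriction R is V.
# The data and function are illustrative stand-ins, not the book's formalism.

people = [
    {"name": "Mr. Kim", "broke_window": True},
    {"name": "Lee", "broke_window": False},
]

def specificational(restriction, value):
    """True iff exactly one individual meets the restriction and it is `value`."""
    matches = [p for p in people if restriction(p)]
    return len(matches) == 1 and matches[0]["name"] == value

# 'The one who broke the window was Mr. Kim.'
assert specificational(lambda p: p["broke_window"], "Mr. Kim")
# The postcopular value must match the unique satisfier:
assert not specificational(lambda p: p["broke_window"], "Lee")
```

Nothing here hinges on Python; the point is only that the subject of a specificational copula denotes a variable, which is why the tag pronoun in (73b) is it.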
Regarding the cleft clause itself, we can observe that it behaves like a kind of
free relative clause. Not all wh-words can occur in free relatives:
(74) a. He got what he wanted.
b. He put the money where Lee told him to put it.
c. The concert started when the bell rang.

(75) a. *Lee wants to meet who Kim hired.


b. *Lee bought which car Kim wanted to sell to him.
c. *Lee solved the puzzle why Kim solved it.

One can regard what, where, and when as introducing a free relative clause, in the
sense that they are interpreted, respectively, as ‘the thing that, the place where,
and the time when.’ However, this kind of interpretation is not feasible with who,
which, or how. As predicted by their failure to form free relatives, neither who
nor which can appear in wh-clefts:

(76) a. *Who achieved the best result was Angela.


b. *Which book he read the book was that one.

Also note that the syntactic distribution of a free relative clause is that of an NP,
not that of a clause. The object of eat is a diagnostic environment:

(77) a. I ate [what John ate].


b. I ate [an apple].

Since the verb ate requires only an NP as its complement, the only possible
structure is as follows:

(78) [tree diagram: the free relative what John ate, in which the filler what and the gapped clause John ate form an NP, which serves as the complement of ate]

Although the filler what and the head phrase John ate form a constituent, the
result cannot be an S, because ate can combine only with an NP. This kind of

free relative structure, which is unusual in the sense that the nonhead filler what
is the syntactic head, is licensed by the following grammar rule (Pullum, 1991):4

(79) FREE-REL CONSTRUCTION:
NP[GAP ⟨ ⟩] → 1 NP[FREL i-ind], S[GAP ⟨1 NP⟩]

This construction ensures that when a free relative pronoun combines with a
sentence missing one phrase, the resulting expression is not an S but a complete
NP.
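The licensing condition in (79) can be mimicked with a small script. The dictionaries below are simplified stand-ins for feature structures, and the FREL_WORDS set records which wh-words bear the FREL feature; this is a sketch of the rule’s effect, not an implementation of the grammar:

```python
# A minimal sketch of the FREE-REL CONSTRUCTION in (79): a FREL-bearing
# wh-filler combines with an S missing one NP, and the mother is an NP.

FREL_WORDS = {"what", "where", "when"}   # wh-words that can head a free relative

def free_rel(filler, gapped_s):
    """Combine a wh-filler with a gapped S, yielding a gap-free NP."""
    if filler["form"] not in FREL_WORDS:
        raise ValueError(f"{filler['form']!r} cannot head a free relative")
    if gapped_s["gap"] != ["NP"]:
        raise ValueError("the clause must be missing exactly one NP")
    # The nonhead filler supplies the category: the mother is an NP, not an S.
    return {"cat": "NP", "gap": [], "daughters": [filler, gapped_s]}

# 'what John ate' is a well-formed NP, so it can be the object of 'ate':
np = free_rel({"cat": "NP", "form": "what"},
              {"cat": "S", "gap": ["NP"], "form": "John ate __"})
assert np["cat"] == "NP" and np["gap"] == []

# '*who Kim hired' as a free relative is blocked: 'who' lacks FREL (cf. (75a)).
try:
    free_rel({"cat": "NP", "form": "who"}, {"cat": "S", "gap": ["NP"]})
    assert False
except ValueError:
    pass
```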
On the assumption that the cleft clause in the wh-cleft is a free relative, we
can assign the following structure to (71b):

(80) [tree diagram: the structure of (71b), in which what combines with an S missing an NP to form a free-relative NP subject, whose index is identified with that of the postcopular NP their teaching material]

As shown here, the cleft clause is formed by the combination of what with an
S missing an NP. The index of the free relative is identified with that of the
postcopular NP their teaching material.
Taking wh-clefts as a type of free-relative clause construction headed by an
NP, we can explain the ungrammaticality of examples like the following:

(81) a. *[To whom I gave the cake] is John.


b. *[That brought the letter] is Bill.

The subjects in these sentences are not headed by NPs and therefore cannot
be free relatives.

Inverted Wh-clefts. The inverted wh-cleft offers a different information-structure perspective from the wh-cleft. The inverted cleft focuses the phrase
in subject position:

4 The feature FREL is assigned to wh-words like what, where, and when, but not to how and why, to
distinguish between those wh-words that can head a free relative and those that cannot. See Kim
(2001b).

(82) [tree diagrams: inverted wh-cleft structures, with the focused phrase in subject position and the wh-cleft clause as the postcopular complement]

In these structures, the cleft clause has no FREL value, and so allows all wh-words
to head the relative clause:

(83) a. This is how he did it.


b. This is why he came early.

While the subject position of a wh-cleft is restricted to NPs, the postcopular complement position in the inverted cleft is not so restricted.

It-clefts. The focused expression of an it-cleft seems to be the nominal head introduced by the relative pronoun that. If we take a close look,
however, we find that this is not the right analysis for an it-cleft. As
we have seen in the previous chapter, a pronoun or proper noun cannot
serve as the antecedent of a restrictive relative clause. However, unlike relatives, it-clefts allow a pronoun or proper noun to be in the putative head
position:

(84) a. It is Pat that we are looking for.


b. *Pat that we are looking for showed up.

This contrast suggests that the focused element (Pat in (84a)) and the following
relative clause do not form a syntactic unit, as a restrictive relative clause does
with its nominal head.
As discussed earlier, two major transformational approaches have been proposed for the generation of it-clefts: expletive insertion and extraposition.
The present analysis takes the latter direction, whereby the pronoun it and
the cleft clause are linked by a type of extraposition process (Gundel, 1977;
Geluykens, 1988; Hedberg, 1988). As noted in the previous section, this analysis generates it-clefts from wh-clefts by extraposing the what-clause to the
sentence-final position. The present analysis, without postulating any movement operations, assumes that it-clefts have base-generated structures like
the following:

(85) [tree diagram: the base-generated structure of an it-cleft such as It was the boy that bought the book, with subject it, a focused (FOC) postcopular NP, and an extraposed cleft clause]

The structure implies that the cleft clause is extraposed while the NP functions as
a focused (FOC) phrase. This kind of projection is possible when the copula verb
be selects a clausal subject and then becomes an extraposed word (extraposed-wd):
(86) [FORM be, ARG-ST ⟨1[verbal], 2 XP⟩] → [FORM be, ARG-ST ⟨NP[it], 2 XP⟩, EXTRA ⟨1[verbal]⟩]

As we have seen in (40) for the EXTRAPOSITION CONSTRUCTION, a clausal argument can undergo an extraposition process that introduces the EXTRA feature, whose value is linked to the pronoun it. In terms of semantics, we take the copula be here to have a specificational use. That is, the pronoun it introduces a variable ‘x,’ the focused phrase specifies its value, and the extraposed clause restricts the range of the variable ‘x,’ as a relative clause does. For instance, (85) means that the x such that x bought the book is the boy.
One advantage of this analysis is that there is no need to introduce an
additional type of copula be. The copula be here, just as in the other
uses of be, selects two syntactic arguments, not three syntactic arguments,
as in the ternary analysis (see Pollard and Sag, 1994). The first argument
is a clausal subject, while the second argument can be any nominal type
expression (NP, PP, or AdvP, but not VP or AP). This word-level construc-
tion can then undergo the derivational lexical process that allows the clausal
argument to be extraposed (EXTRA) with the introduction of the subject
pronoun it.
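The derivational step in (86) can be glossed procedurally: the copula’s clausal subject is demoted to the EXTRA list and replaced by expletive it. The sketch below uses invented names (extrapose_be, plain dicts) and is only meant to show the shape of the mapping, not the book’s implementation:

```python
# A toy rendering of the lexical process in (86): 'be' with a clausal
# (verbal) subject becomes an extraposed word whose subject is expletive
# 'it' and whose clausal argument is placed on the EXTRA list.

def extrapose_be(lexeme):
    subj, comp = lexeme["arg_st"]
    if not subj.get("verbal"):
        raise ValueError("only a clausal (verbal) subject can be extraposed")
    return {
        "form": lexeme["form"],
        "arg_st": [{"cat": "NP", "nform": "it"}, comp],  # subject is now 'it'
        "extra": [subj],                                 # cleft clause on EXTRA
    }

be = {"form": "be",
      "arg_st": [{"cat": "S", "verbal": True},  # cleft clause as subject
                 {"cat": "NP"}]}                # focused XP
cleft_be = extrapose_be(be)
assert cleft_be["arg_st"][0]["nform"] == "it"
assert cleft_be["extra"] == [{"cat": "S", "verbal": True}]
```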
The present extraposition analysis can also account for an observation mentioned earlier: The focal expression and the following that-clause do not form
a relative-clause constituent. This analysis makes sense in light of attested data
like (87), in which a pronoun, which could not generally serve as the head of a
relative clause, serves as the focal expression:

(87) a. It was you that we cared about.


b. It was he that annoyed Omer most.

The present analysis also leads us to expect examples like (88), in which a parenthetical expression intervenes between the focused phrase and the extraposed
cleft clause.
(88) a. It was the boy, I believe, that bought the book.
b. It was in the attic, the police believed, where Ann had been hiding.

The present analysis can also license examples like the following, where the
focused XP is gapped:
(89) a. I wonder who it was who saw you.
b. I wonder who it was you saw __.
c. I wonder in which pocket it was that Kim had hidden the jewels.

Let us look at the structure of (89a), as our system generates it:


(90) [tree diagram: the structure of (89a), in which the focused complement of the cleft copula be is realized as a GAP element, passed up, and discharged by who]

As shown here, the first COMPS value of the cleft copula be is realized as a
GAP element. This GAP value is passed up to the point where it is discharged by
the wh-element who. This induces an interrogative meaning on the complement
clause of the verb wonder.
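The GAP bookkeeping just described can be sketched in a few lines: an unrealized complement is recorded in GAP, mothers inherit their daughters’ GAP values, and a filler discharges a matching member. All names below are invented for illustration:

```python
# A minimal sketch of GAP propagation in a filler-gap dependency.

def head_with_gap(form, comps, gap_index):
    """Realize a head with one complement left unrealized, recorded in GAP."""
    gap = [comps[gap_index]]
    realized = [c for i, c in enumerate(comps) if i != gap_index]
    return {"form": form, "comps": realized, "gap": gap}

def combine(mother_form, daughters):
    """A mother phrase inherits the GAP values of its daughters."""
    gap = [g for d in daughters for g in d.get("gap", [])]
    return {"form": mother_form, "gap": gap}

def fill_gap(filler, phrase):
    """A wh-filler discharges a matching GAP element (head-filler combination)."""
    assert phrase["gap"] and phrase["gap"][0] == filler["cat"]
    return {"form": f"{filler['form']} {phrase['form']}", "gap": phrase["gap"][1:]}

# (89a): the NP complement of the cleft copula is realized as a GAP element,
# passed up, and discharged by 'who'.
was = head_with_gap("was __ who saw you", ["NP"], 0)
clause = combine("it was __ who saw you", [was])
top = fill_gap({"cat": "NP", "form": "who"}, clause)
assert top["gap"] == []   # GAP discharged: 'who it was __ who saw you'
```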
As seen here, the present system allows the focus phrase (the complement
of the copula) to be gapped, but note that the cleft clause cannot have a gap
expression:
(91) a. Who do you think it is that Mary met __?
b. *To whom do you think it is the book that Mary gave __?

If the cleft clause is taken to be a complement clause, as in the ternary analysis, this would not be expected; but in our analysis, in which the cleft clause behaves like an adjoined adjunct, it is expected, as the independently attested Adjunct Clause Constraint prevents extraction from an adjunct clause
(see Chapter 11).
One remaining property we need to consider is connectivity effects between
the focused phrase and the cleft clause. Consider the following agreement
properties:
(92) a. It is [John and Mary] [that like Peter].
b. *It is [John and Mary] [that likes Peter].

In these examples, where the cleft clause has a subject gap, the verb in the
cleft clause agrees with the coordinated NP. This kind of agreement is what we
observe in relative clauses:
(93) a. the students that like Peter
b. *the student that like Peter

Such a semantic relation between the focused phrase and the cleft clause is
quite similar to the one we find between the antecedent phrase and the relative
clause. Our conclusion is therefore that, in terms of syntax, it-clefts are different
from relative clauses, but in terms of semantics, they are quite similar to relative
clauses.
In order to capture such an agreement connectivity effect, we could add
additional constraints to the extraposed be:
(94) [FORM be, ARG-ST ⟨1[verbal], 2 XP⟩] → [be-cleft, FORM be, ARG-ST ⟨NP[it], 2 XPi[FOC +]⟩, EXTRA ⟨1[verbal, REL i]⟩]

The only thing added here is the coindexation relation between the focused XP and the cleft clause. That is, the type be-cleft is a subtype of the type extraposed-wd but requires that the focused phrase be coindexed with the relative pronoun of the cleft clause. This analysis would enable us to predict examples like the
following:
(95) a. It is [me] [that is to blame].
b. It is [he] [that is to blame].
c. It is [you] [who is to blame].
d. It is [you] [who are to blame].

What we see in (95) is variability in how person and number features appear in
the inflection of the extraposed cleft clause. While (95a) suggests that the verb
of the extraposed clause is invariantly 3rd person, (95d) shows that a 2nd-person
focal argument can trigger 2nd-person agreement (if the extraposed clause contains a relative pronoun). Such variability is anticipated by an index-based theory
of agreement, as discussed in Section 6.4 of Chapter 6, in which agreement relies
on the manner in which the anchor element is construed.
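The agreement options in (95) can be stated schematically. The function below is a caricature of the index-based account: with that, only default 3rd-person agreement is available, while with a relative pronoun such as who, the pronoun’s index is shared with the focused phrase, so the focus’s person and number may also control agreement. Names and feature encodings are our own:

```python
# A toy statement of the agreement options in the cleft clause of (95).

DEFAULT_AGR = {"per": "3rd", "num": "sing"}

def cleft_agr_options(focus_agr, rel_pronoun):
    """AGR values available to the cleft-clause verb."""
    if rel_pronoun == "that":
        return [DEFAULT_AGR]              # 'It is me that is to blame.'
    return [DEFAULT_AGR, focus_agr]       # 'It is you who is/are to blame.'

# With 'who', 2nd-person agreement with the focus 'you' is an option (95d):
opts = cleft_agr_options({"per": "2nd", "num": "sing"}, "who")
assert {"per": "2nd", "num": "sing"} in opts and DEFAULT_AGR in opts
# With 'that', agreement is invariantly 3rd person (95a-b):
assert cleft_agr_options({"per": "1st", "num": "sing"}, "that") == [DEFAULT_AGR]
```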

12.5 Conclusion

In this final chapter we have discussed three constructions that place arguments in special positions: tough, extraposition, and cleft constructions.
These constructions, like wh-interrogatives and wh-relative clauses, involve a
kind of long-distance dependency but behave differently in many respects. The
chapter explored these differences and offered a nonderivational, construction-based analysis.
In the exploration of the tough-construction, we discussed the differences
between tough/easy-type and eager-type lexemes. The key difference is that the
former type requires a VP complement containing an obligatory accusative gap
that is linked to its subject. The analysis sketched here attributes these differences
to the lexical properties of the adjectives involved. These lexical differences,
interacting with general principles, including the NIP and raising properties of
the copula, can be used to explain syntactic differences.
In our discussion of the extraposition construction, the key analytic question
was how to capture the systematic relations between extraposed sentences and
their putative source sentences. After briefly describing influential transformational analyses, the chapter developed a constructional analysis based on the
HEAD-EXTRA CONSTRUCTION and the feature EXTRA. Together with the independently motivated type verbal (a supertype of comp and verb), this analysis
allows us to license not only subject extraposition but also object extraposition
in English.
This chapter also discussed three kinds of cleft constructions in English,
including their respective information-structure properties. After exploring the
grammatical properties of each cleft type, the chapter sketched out construction-
based analyses of each. While more detailed investigations are needed to
illuminate the full range of usage facts, the construction-based analyses capture the key ways in which English cleft constructions differ from one another.

Exercises

1. Explain the relationships among the following sentences:


a. It is difficult for me to concentrate on calculus.
b. For me to concentrate on calculus is difficult.
c. Calculus is difficult for me to concentrate on.

2. Draw structures for the following sentences and show which grammar rules are involved in generating them:
(i) a. This problem will be difficult for the students to solve.
b. Being lovely to look at has its advantages.

c. This toy isn’t easy to try to hand to the baby.


d. That kind of person is hard to find anyone to look after.
e. Letters to Grandma are easy to help the children to write.
(ii) a. It was to Boston that they decided to take the patient.
b. It was with a great deal of regret that I vetoed your proposal.
c. It was Tom who spilled beer on this couch.
d. It is Martha whose work critics will praise.
e. It was John on whom the sheriff placed the blame.
f. I wondered who it was you saw.
g. I was wondering in which pocket it was that Kim had hidden
the jewels.

3. Explain why the following examples are ungrammatical, referring to the analysis presented in this chapter:
(i) a. *It is Kim on whom that Sandy relies.
b. *It is Kim on whom Sandy relies on.
c. *It is Kim whom Sandy relies.
d. *It is on Kim on whom Sandy relies.

Further, consider the following examples in (ii) and (iii), draw struc-
tures for them and show which grammar rules and principles are
involved in their generation:
(ii) a. I wonder who it was who saw you.
b. I wonder who it was you saw.
c. I wonder in which pocket it was that Kim had hidden the jewels.
(iii) a. Was it for this that we suffered and toiled?
b. Who was it who interviewed you?

4. Analyze the following subject-to-object raising examples and show clearly how the cleft and raising constructions interact:
a. I believe it to be her father who was primarily responsible.
b. I believe it to be the switch that is defective.

5. Consider the following set of examples, all of which contain the expression what Mary offered to him. Explain whether the phrase functions as an indirect question or an NP, and support your explanations by drawing the syntactic structures:
a. Tom ate [what Mary offered to him].
b. I wonder [what Mary offered to him].
c. [What Mary offered to him] is unclear.

6. Consider the following contrast and discuss whether the present analysis can predict this contrast:
a. Who do you think it is that Mary met __?
b. *To whom do you think it is the book that Mary gave __?

7. Provide the structures of the following two sentences and then discuss whether the present analysis can account for each of these two:
a. It is on Kim that Lee relies.
b. It is Kim on whom Kim relies.

8. Consider the following attested examples, which appear to lack focused expressions. Investigate the contexts in which such sentences
would be used. Then discuss whether these examples can be taken to
be another type of cleft construction:
a. It may be that they want to make it as stuffy and uncomfortable
as possible.
b. It could be that the pain simply develops or the wound worsens.
c. It’s just that we didn’t know all the things that you were doing.
Afterword

In this book we have explored the theory and practice of Sign-Based Construction Grammar (SBCG) by applying it to a range of grammatical phenomena in
English. The two basic theoretical notions in SBCG are sign and construction.
Signs are complexes of linguistic information that fully specify both the form
and the meaning of a linguistic expression. Classes of signs can be expressed
as sign descriptions. Constructions are the means provided by the grammar of
deriving more complex sign descriptions from simpler sign descriptions. Thus,
there can be constructions that pair an inflected form of a verb with an abstract
representation of that verb (e.g., assigning the form persuaded as the past tense of
persuade), constructions that associate a lexeme with information about valence
and meaning (e.g., the use of have in an ‘ordering’ sense, as in They have me
shine their shoes every morning), and constructions that associate constraints on
a phrasal pattern with a construction-specific meaning, as in The more we know,
the less we’ll need.
The notion of construction, in this view, is a formalization, in a constraint-based architecture, of the notion of construction in traditional grammar. The
central notion is that constructions license linguistic signs that need special
explanations for at least some of their properties – lexical, syntactic, semantic, or
pragmatic – beyond what we know about their component parts. The construc-
tions of a grammar model the native or fluent speaker’s ability to produce and
understand the signs of their language.
A construction may assign semantic properties that are not determined by its
constituent elements and their manner of combination. This is true of phrases
like the poor, the rich, the young, the old, the blind, the lame, etc. These have the
properties ‘human,’ ‘generic,’ and ‘plural.’ This means that a sentence like I have
two electric vehicles; the old is a Nissan Leaf fails on three grounds: the phrase
the old would have to be specific, inanimate, and singular to be acceptable in the
subject position here.
As we have discussed in this book, there is a tradition in which the grammarian’s main goal is to characterize those properties of the grammar that belong to
the ‘core,’ which is understood to contain the basic underlying building-blocks of
the language; these are the features that are most relevant in comparing languages
with each other or in studying the nature of language in the human species. In
contrast to the core is the collection of patterns that make up the less important


‘periphery.’ We can briefly remind ourselves of the difference by considering the sentences in (1):
(1) a. I like [pancakes and toast].
b. She grabbed [hat and coat].
c. They fought [hammer and tongs].

Some constructions are very general, like the coordination pattern in (1a). Some
are particular, like the NP in (1b), which consists of the coordination of two
singular bare count nouns. This pattern can be used only when both of the items
and their close association are already saliently established in the discourse. And
some constructions are even more particular, like the sequence in (1c), where
the lexical makeup is fixed, the usual coordinate conjunction of similar syntactic
elements is not in evidence, and the meaning of the whole is unrelated to the
meaning of the parts (the expression hammer and tongs means ‘energetically’).
It is easy to see that examples like (1c) illustrate idioms and that expressions
like (1a) are the product of a general rule of grammar. But what about expressions like (1b), which seem to lie between opaque idioms and fully productive
grammatical rules? Example (1b) is special because it is not a simple conjunction of two possible objects of grab: She grabbed her hat and She grabbed her coat are ordinary expressions, but English does not allow *She grabbed hat.
One of the advantages of a constructional approach to grammar is that it gives
us a single format in which to describe all the grammatical formulas that the
speaker of a language must know, from the most particular, like that illustrated in
(1c), to the most general, like that illustrated in (1a). The construction grammarian sees a language as presenting a continuum of idiomaticity,
or generality, of expressions; a construction grammar models this continuum
with an array of constructions of correspondingly graded generality. It is possible that no language except English has a construction that builds an Adverb
Phrase, like by and large, by conjoining a preposition and an adjective, and it is
also possible that every language has some form of coordinate conjunction. But
where along the gradient of intermediate cases should one draw the line between
‘core’ and ‘periphery’? To our understanding, no objective criterion has been
established to distinguish core from periphery – even by those who assert that only
core phenomena are worthy of scientific investigation. A common practice is to
include in the core obvious cases plus as much of the rest of the language as fits
the theoretical apparatus at hand (Culicover and Jackendoff, 1999). But this practice simply leads to circular argumentation. A constructional approach, which
offers us a single representational format for any grammatical pattern, at whatever point on the gradient from frozen idiom to productive rule it falls, avoids
this failing. Constructional approaches to grammar assume that accounting for
all the facts of a language as precisely as possible is the major goal of syntactic
theory.
The appendix that follows is designed as a basic map of the grammatical landscape that we have explored in this book; it includes both descriptions of words (lexemes) and descriptions of phrasal patterns (constructions). Like any linguistic type hierarchy, it can accommodate new linguistic facts. It is expansible by design.
Appendix

A Lexical Entries

A.1 Lexeme-Level Specifications


A.1.1 Verbs
(1) a. [FORM add, ARG-ST ⟨NPx, Part[up]⟩, SEM accumulate-rel(x)]
b. [FORM believe, ARG-ST ⟨NP, [VFORM fin]⟩]
c. [FORM bother, ARG-ST ⟨1[nominal], 2 NP⟩]
d. [FORM call, ARG-ST ⟨NP, NP, XP[PRD +]⟩]
e. [v-lxm, FORM chase, ARG-ST ⟨NP[agt], NP[th]⟩]
[v-lxm, FORM destroy, ARG-ST ⟨NP[agt], NP[pat]⟩]
f. [FORM figure, ARG-ST ⟨NPx, Part[out], NPy⟩, SEM compute-rel(x,y)]
g. [FORM give, ARG-ST ⟨NP, NP, PP⟩]
h. [FORM intend, ARG-ST ⟨NP, CP[VFORM inf]⟩]
i. [FORM love, ARG-ST ⟨NP, NP⟩]
j. [FORM nominate, ARG-ST ⟨NP, NP⟩]
k. [FORM put, ARG-ST ⟨NP[agt], NP[th], PP[loc]⟩]
l. [FORM smile, ARG-ST ⟨NP⟩]
m. [FORM surprise, ARG-ST ⟨[nominal], NP⟩]
n. [FORM teach, ARG-ST ⟨NP, NP[goal], NP[th]⟩]

A.1.2 Adjectives
(2) a. [FORM alive, SYN | HEAD [POS adj, PRD +, MOD ⟨ ⟩]]
b. [FORM ashamed, ARG-ST ⟨NP, CP[VFORM fin]⟩]
c. [FORM content, ARG-ST ⟨NP, CP[VFORM fin]⟩]
d. [FORM eager, SYN | HEAD | POS adj, ARG-ST ⟨NP, VP[VFORM inf]⟩]
e. [FORM fond, SYN | HEAD | POS adj, ARG-ST ⟨NP, PP[PFORM of]⟩]
f. [FORM wooden, SYN | HEAD [POS adj, MOD ⟨N⟩]]

A.1.3 Nouns
(3) a. [FORM boy, SYN | HEAD [POS noun, AGR | NUM sing], SEM | IND | NUM sing]
b. [FORM boys, SYN | HEAD [POS noun, AGR | NUM pl], SEM | IND | NUM pl]
c. [FORM eagerness, ARG-ST ⟨DP, XP[VFORM inf]⟩]
d. [FORM reliance, ARG-ST ⟨DP, PP[on]⟩]
e. [FORM proximity, ARG-ST ⟨DP, (PP[PFORM to])⟩]
f. [FORM faith, ARG-ST ⟨DP, (PP[PFORM in])⟩]
g. [FORM hash browns, SYN | HEAD [POS noun, AGR | NUM pl], SEM | IND | NUM pl] (when referring to the food itself)
h. [FORM hash browns, SYN | HEAD [POS noun, AGR | NUM pl], SEM | IND | NUM sing] (when referring to a customer, or to a dish)
i. [FORM team/government, SYN | HEAD [POS noun, AGR | NUM sing], SEM | IND | NUM pl]

A.1.4 Prepositions
 
(4) [FORM in, ARG-ST ⟨NP⟩]

A.1.5 Auxiliary
(5) a. [aux-be, FORM be, ARG-ST ⟨NP, XP[PRD +]⟩]
b. [aux-do, FORM do, SYN | HEAD [VFORM fin], ARG-ST ⟨NP, VP[AUX −, VFORM bse]⟩]
c. [aux-have, FORM have, ARG-ST ⟨NP, VP[VFORM en]⟩]
d. [aux-to, FORM to, SYN | HEAD [VFORM inf], ARG-ST ⟨NP, VP[VFORM bse]⟩]

A.1.6 Determiners
(6) a. [FORM little, SYN | HEAD [POS det, COUNT −]]
b. [FORM many, SYN | HEAD [POS det, COUNT +]]
c. [FORM this, SYN | HEAD [POS det, AGR | NUM sing]]
d. [FORM the, SYN | HEAD [POS det, COUNT boolean]]

A.2 Word-Level Expressions with VAL Information


A.2.1 Verbs: With Valence Information
⎡ ⎤
(7) a. FORM bother
⎢  ⎥
⎢  1 NP  ⎥
⎢SYN | VAL SPR ⎥
⎢ COMPS  2 NP  ⎥
⎣ ⎦
ARG - ST  [nominal], NP
1 2
⎡ ⎤
b. FORM bother
⎢  ⎥
⎢ ⎥
⎢SYN | VAL SPR  CP
1

⎢  2 NP  ⎥
⎣ COMPS ⎦
ARG - ST  1 [nominal], 2 NP
324 Appendix
⎡ ⎤
c. FORM chased
⎢ ⎡  ⎤⎥
⎢ ⎥
⎢ POS verb ⎥
⎢ ⎢HEAD ⎥⎥
⎢ ⎢ VFORM ed ⎥⎥
⎢ ⎢ ⎥⎥
⎢SYN ⎢  ⎥⎥
⎢ ⎢  NP ⎥ ⎥
⎢ ⎣VAL SPR 1
⎦⎥
⎢ ⎥
⎢ COMPS  2 NP ⎥
⎣ ⎦
ARG - ST  1 NP, 2 NP
⎡ ⎤
d. FORM exists
⎢  ⎥
⎢ ⎥
⎢SYN | VAL SPR  NP
1

⎢   ⎥
⎣ COMPS ⎦
ARG - ST  1 NP 
⎡ ⎤
e. FORM exists
⎢  ⎥
⎢ ⎥
⎢SYN | VAL SPR  NP[NFORM there]  ⎥
1
⎢  NP  ⎥
⎣ COMPS 2

ARG - ST  1 NP, 2 NP
⎡ ⎤
f. FORM fooled
⎢  ⎥
⎢  1 NP[NFORM norm] ⎥
⎢SYN | VAL SPR ⎥
⎢ COMPS  2 NP  ⎥
⎣ ⎦
ARG - ST  1 NP, 2 NP
⎡ ⎤
g. FORM kept
⎢ ⎡ ⎤⎥
⎢ | POS verb ⎥
⎢ HEAD ⎥
⎢ ⎢  ⎥⎥
⎢SYN⎢ ⎥⎥
⎢ ⎣VAL SPR  NP 1
⎦⎥
⎢ ⎥
⎢ COMPS  2 VP[ing] ⎥
⎣ ⎦
ARG - ST  1 NP, 2 VP
⎡ ⎤
h. FORM knows
⎢ ⎡  ⎤⎥
⎢ ⎥
⎢ POS verb ⎥
⎢ ⎢HEAD ⎥⎥
⎢ ⎢ VFORM es ⎥⎥
⎢ ⎢ ⎥⎥
⎢SYN⎢  ⎥⎥
⎢ ⎢  1 NP ⎥ ⎥
⎢ ⎣VAL SPR ⎦⎥
⎢ ⎥
⎢ COMPS  2 NP ⎥
⎣ ⎦
ARG - ST  1 NP, 2 NP
⎡ ⎤
i. FORM made
⎢ ⎡ ⎤⎥
⎢ HEAD | POS ⎥
⎢ verb ⎥
⎢ ⎢  ⎥⎥
⎢SYN⎢  1 NP ⎥⎥
⎢ ⎣VAL SPR ⎦⎥
⎢ ⎥
⎢ COMPS  2 NP, 3 VP[bse] ⎥
⎣ ⎦
ARG - ST  1 NP, 2 NP, 3 VP
Appendix 325
⎡ ⎤
j. FORM persuaded
⎢  ⎥
⎢  1 NP ⎥
⎢SYN | VAL SPR ⎥
⎢ COMPS  NP, VP [ VFORM inf ] ⎥
⎣ 2 3

ARG - ST  1 NP, 2 NP, 3 VP
⎡ ⎤
k. FORM tried
⎢ ⎡ ⎤⎥
⎢  1 NPi  ⎥
⎢ SPR ⎥
⎢ ⎢   ⎥⎥
⎢SYN | VAL⎢ inf ⎥⎥
⎢ ⎣COMPS VFORM ⎦⎥
⎢ 2 VP ⎥
⎢ SPR NPi  ⎥
⎣ ⎦
ARG - ST  NP, VP
1 2
⎡ ⎤
l. FORM persuade
⎢ ⎡ ⎤⎥
⎢  1 NP ⎥
⎢ SPR ⎥
⎢ ⎢   ⎥⎥
⎢SYN | VAL⎢ inf ⎥⎥
⎢ ⎣COMPS VFORM ⎦⎥
⎢ 2 NPi , 3 VP ⎥
⎢ SPR NPi  ⎥
⎣ ⎦
ARG - ST  NP, NP, VP
1 2 3
⎡ ⎤
m. FORM placed
⎢ ⎡ ⎤⎥
⎢ SPR   ⎥
⎢ ⎥
⎢ ⎢ ⎥ ⎥
⎢SYN | VAL⎣COMPS  2 NP, 3 PP⎦⎥
⎢ ⎥
⎢ GAP  1 NP  ⎥
⎣ ⎦
ARG - ST  1 NP, 2 NP, 3 PP
⎡ ⎤
n. FORM persuade
⎢ ⎡ ⎤⎥
⎢ SPR NPi  ⎥
⎢ ⎥
⎢ ⎢   ⎥⎥
⎢SYN | VAL⎢ ⎥⎥
⎢ ⎣COMPS NP , VP VFORM inf ⎦⎥
⎣ j ⎦
SPR NPj 
⎡ ⎤
o. FORM promise
⎢ ⎡ ⎤⎥
⎢ SPR NPi  ⎥
⎢ ⎢ ⎥⎥
⎢ ⎢ ⎡ ⎤ ⎥⎥
⎢ ⎥⎥
⎢SYN | VAL⎢⎢
VFORM inf
⎥⎥
⎢ ⎢COMPS NPj , VP⎣SPR ⎢ ⎥ ⎥
⎢ NPi ⎦ ⎥⎥
⎣ ⎣ ⎦⎦
IND s1
⎡ ⎤
p. v-wd
⎢ ⎥
⎢FORM put ⎥
⎢ ⎡ ⎤ ⎥
⎢ SPR  1 NP

⎢ ⎥
⎢ ⎢ ⎥ ⎥
⎢SYN | VAL⎣COMPS  2 NP, 3 PP⎦⎥
⎢ ⎥
⎢ GAP   ⎥
⎣ ⎦
ARG - ST  1 NP, 2 NP, 3 PP
326 Appendix
q. [v-gap-wd, FORM put, SYN|VAL [SPR ⟨[1]NP⟩, COMPS ⟨[3]PP⟩, GAP ⟨[2]NP⟩], ARG-ST ⟨[1]NP, [2]NP, [3]PP⟩]

r. [v-gap-wd, FORM put, SYN|VAL [SPR ⟨[1]NP⟩, COMPS ⟨[2]NP⟩, GAP ⟨[3]PP⟩], ARG-ST ⟨[1]NP, [2]NP, [3]PP⟩]

s. [FORM rained, SYN|VAL [SPR ⟨[1]NP[NFORM it]⟩, COMPS ⟨⟩], ARG-ST ⟨[1]NP⟩]

t. [FORM seemed, SYN|VAL [SPR ⟨[1]NP⟩, COMPS ⟨[2]VP[VFORM inf, SPR ⟨[1]NP⟩]⟩], ARG-ST ⟨[1]NP, [2]VP⟩]

u. [FORM swims, SYN [HEAD [POS verb, VFORM es], VAL|SPR ⟨[1]NP⟩], ARG-ST ⟨[1]NP[PER 3rd, NUM sing]⟩]

v. [FORM tried, SYN|VAL [SPR ⟨[1]NP⟩, COMPS ⟨[2]VP[VFORM inf]⟩], ARG-ST ⟨[1]NP, [2]VP⟩]
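The tagged feature structures above lend themselves to a simple computational encoding. The sketch below is an illustrative Python encoding only, not the book's formalism: the entry for *tried* in (v) is a nested dictionary, and structure sharing (the tags [1], [2]) is modeled as object identity.

```python
# A minimal sketch of a lexical entry as nested Python dicts. Structure
# sharing (the tags [1], [2] in the AVMs) is modeled by object identity:
# the item on a valence list IS the item on ARG-ST.
# This encoding is illustrative, not the notation used in the book.

np1 = {"cat": "NP"}                      # tag [1]
vp2 = {"cat": "VP", "VFORM": "inf"}      # tag [2]

tried = {
    "FORM": "tried",
    "SYN": {"VAL": {"SPR": [np1], "COMPS": [vp2]}},
    "ARG-ST": [np1, vp2],
}

# The tags hold: SPR's element and the first ARG-ST element are one object.
assert tried["SYN"]["VAL"]["SPR"][0] is tried["ARG-ST"][0]
assert tried["SYN"]["VAL"]["COMPS"][0] is tried["ARG-ST"][1]
```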

A.2.2 Verbs: With Semantic Information

(8) a. [FORM expect, SYN|VAL [SPR ⟨[1]NPi⟩, COMPS ⟨[2]NP, [3]VP[VFORM inf, SPR ⟨[2]NP⟩, IND s1]⟩], ARG-ST ⟨[1]NP, [2]NP, [3]VP⟩, SEM [IND s0, RELS ⟨[PRED expect, EXP i, SIT s1]⟩]]

b. [FORM get, SYN|HEAD|AUX −, ARG-ST ⟨NPj, VP[VFORM pass, SPR ⟨NPj⟩, IND s1]⟩, SEM [IND s0, RELS ⟨[PRED get-affected-rel, PAT j, SIT s1]⟩]]

c. [FORM hit, SYN|VAL [SPR ⟨[1]NPi⟩, COMPS ⟨[2]NPj⟩], ARG-ST ⟨NPi, NPj⟩, SEM [IND s0, RELS ⟨[PRED hit, AGT i, PAT j]⟩]]
d. [FORM persuade, SYN|VAL [SPR ⟨[1]NPi⟩, COMPS ⟨[2]NPj, [3]VP[VFORM inf, SPR ⟨NPj⟩, IND s1]⟩], ARG-ST ⟨[1]NP, [2]NP, [3]VP⟩, SEM [IND s0, RELS ⟨[PRED persuade, AGT i, EXP j, SIT s1]⟩]]

e. [FORM seem, SYN|VAL [SPR ⟨[1]NP⟩, COMPS ⟨[2]VP[VFORM inf, SPR ⟨[1]NP⟩, IND s1]⟩], ARG-ST ⟨[1]NP, [2]VP⟩, SEM [IND s0, RELS ⟨[PRED seem, SIT s1]⟩]]

f. [FORM try, SYN|VAL [SPR ⟨[1]NPi⟩, COMPS ⟨[2]VP[VFORM inf, SPR ⟨NPi⟩, IND s1]⟩], ARG-ST ⟨[1]NP, [2]VP⟩, SEM [IND s0, RELS ⟨[PRED try, AGT i, SIT s1]⟩]]

A.2.3 Nouns

(9) a. [FORM each, SYN [HEAD [POS noun, AGR|NUM sing], VAL|COMPS ⟨PP[PFORM of, DEF +, NUM pl]⟩]]

b. [FORM book, SYN [HEAD [POS noun, AGR [PER 3rd, NUM sing, GEND neut]], VAL [SPR ⟨DP[NUM sing]⟩, COMPS ⟨⟩]]]

c. [FORM dogs, SYN [HEAD|POS noun, VAL|SPR ⟨DP[COUNT +]⟩]]

d. [FORM furniture, SYN [HEAD|POS noun, VAL|SPR ⟨DP[COUNT −]⟩]]

e. [FORM he, SYN [HEAD [POS noun, AGR [PER 3rd, NUM sing, GEND masc]], VAL [SPR ⟨⟩, COMPS ⟨⟩]]]

f. [cn-prpn, FORM John Smith, SYN [HEAD|POS noun, VAL [SPR ⟨DP⟩, COMPS ⟨⟩]]]
g. [prpn, FORM John Smith, SYN [HEAD|POS noun, VAL [SPR ⟨⟩, COMPS ⟨⟩]]]

h. [FORM many, SYN [HEAD|POS noun, VAL|COMPS ⟨PP[PFORM of, NUM pl, DEF +]⟩]]

i. [FORM much, SYN [HEAD|POS noun, VAL|COMPS ⟨PP[PFORM of, NUM sing, DEF +]⟩]]

j. [FORM neither, SYN [HEAD [POS noun, AGR|NUM sing], VAL|COMPS ⟨PP[PFORM of, DEF +]⟩]]

k. [FORM pound, SYN [HEAD [POS noun, NUM sing], VAL [SPR ⟨DP⟩, COMPS ⟨PP[PFORM of]⟩]]]

l. [FORM pounds, SYN [HEAD [POS noun, AGR [1][NUM pl]], VAL|SPR ⟨DP[AGR [1]]⟩], SEM|IND|NUM sing]
m. [FORM some, SYN [HEAD [POS noun, AGR|NUM [1]], VAL|COMPS ⟨PP[PFORM of, DEF +, AGR|NUM [1]]⟩]]

n. [FORM student, SYN [HEAD|POS noun, VAL [SPR ⟨DP⟩, COMPS ⟨⟩]]]

A.2.4 Adjectives

(10) a. [FORM eager, SYN|VAL [SPR ⟨NPi⟩, COMPS ⟨VP[VFORM inf, IND s1]⟩], SEM [IND s0, RELS ⟨[PRED eager, EXP i, SIT s1]⟩]]

A.2.5 Complementizers

(11) a. [FORM that, SYN [HEAD [POS comp, VFORM [1]], VAL [SPR ⟨⟩, COMPS ⟨S[VFORM [1]]⟩]]]

b. [FORM for, SYN [HEAD [POS comp, VFORM inf], VAL|COMPS ⟨S[VFORM inf]⟩]]

c. [FORM whether, SYN [HEAD|POS comp, VAL|COMPS ⟨S[fin]⟩], QUE +, ARG-ST ⟨S⟩]

A.2.6 Auxiliaries

(12) a. [aux-be-pass, FORM be, SYN|VAL [SPR ⟨[1]NP⟩, COMPS ⟨[2]VP[VFORM pass, SPR ⟨[1]NP⟩]⟩], ARG-ST ⟨[1]NP, [2]VP⟩]

b. [FORM must, SYN [HEAD [VFORM fin, AUX +], VAL [SPR ⟨[1]NP⟩, COMPS ⟨[2]VP[SPR ⟨[1]NP⟩]⟩]], ARG-ST ⟨[1]NP, [2]VP⟩]

c. [FORM to, SYN [HEAD [POS verb, VFORM inf], VAL|COMPS ⟨VP[VFORM bse]⟩]]

A.2.7 Determiners

(13) a. [FORM a, SYN [HEAD [POS det, AGR|NUM sing], VAL [SPR ⟨⟩, COMPS ⟨⟩]]]

b. [FORM ’s, SYN [HEAD|POS det, VAL [SPR ⟨NP⟩, COMPS ⟨⟩]]]

A.2.8 Adverbs

(14) [FORM never/not, SYN|HEAD [POS adv, MOD VP[VFORM nonfin]]]

B Lexical Inflection and Derivations

B.1 Constructional Constraints

(15) a. EXTRAPOSITION CONSTRUCTION:
[v-wd, ARG-ST ⟨..., [1]XP[verbal], ...⟩] → [extraposed-wd, ARG-ST ⟨..., NP[NFORM it], ...⟩, EXTRA ⟨[1]XP⟩]

b. INVERTED AUX CONSTRUCTION:
[aux-wd, ARG-ST ⟨[1]XP, YP[SPR ⟨[1]⟩, VFORM [2]]⟩] → [aux-inv-fwd, ARG-ST ⟨S[VFORM [2]nonfin, XARG [1][nom]]⟩]

c. NEGATIVE AUXILIARY CONSTRUCTION:
[fin-aux, ARG-ST ⟨[1]NP, [2]XP⟩] → [neg-fin-aux, ARG-ST ⟨[1]NP, Adv[LEX +, NEG +], [2]XP⟩]

d. N’T CONTRACTION CONSTRUCTION:
[aux-w, FORM ⟨[1]⟩, HEAD|VFORM fin] → [aux-nt-w, FORM ⟨[1] + n’t⟩, HEAD [VFORM fin, NEG +]]

e. PAST INFLECTIONAL CONSTRUCTION:
[v-lxm, FORM ⟨[1]⟩, SYN|HEAD|POS verb] → [v-wd, FORM ⟨Fpast([1])⟩, SYN|HEAD [POS verb, VFORM ed]]

f. PASSIVE CONSTRUCTION:
[v-tran-lxm, ARG-ST ⟨XPi, [2]YP, ...⟩] → [passive-v, SYN|HEAD|VFORM pass, ARG-ST ⟨[2]YP, ..., (PPi[by])⟩]

g. PREPOSITIONAL PASSIVE CONSTRUCTION:
[prep-v, ARG-ST ⟨NPi, PPj[PFORM [4]]⟩] → [pass-prep-v, SYN|HEAD|VFORM pass, ARG-ST ⟨NPj, P[LEX +, PFORM [4]], (PPi[by])⟩]
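The Passive Construction in (15f) is essentially a mapping on argument-structure lists. The following sketch is a hypothetical encoding (the function name and dict format are assumptions, not the book's notation): the first argument is demoted to an optional by-PP at the end of the list, and the remaining arguments are promoted.

```python
# Sketch of the Passive Construction in (15f) as a function on argument
# structures: <XP_i, YP, ...> maps to <YP, ..., (PP_i[by])>.
# The function name and dict encoding are illustrative assumptions.

def passivize(arg_st):
    """Demote the first argument to an optional by-PP; promote the rest."""
    subj, *rest = arg_st
    by_pp = {"cat": "PP", "PFORM": "by",
             "index": subj.get("index"), "optional": True}
    return rest + [by_pp]

# ARG-ST of active 'send': <NP_i, NP, PP[to]>
send = [{"cat": "NP", "index": "i"},
        {"cat": "NP"},
        {"cat": "PP", "PFORM": "to"}]
sent = passivize(send)

# The promoted argument is token-identical to the old second argument;
# the demoted agent surfaces as an optional by-PP bearing index i.
assert sent[0] is send[1]
assert sent[-1]["PFORM"] == "by" and sent[-1]["index"] == "i"
```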

h. VP ELLIPSIS CONSTRUCTION:
[aux-w, HEAD|AUX +, ARG-ST ⟨[1]XP, YP⟩] → [aux-elide-w, HEAD|AUX +, VAL [SPR ⟨[1]XP⟩, COMPS ⟨⟩], ARG-ST ⟨[1]XP, YP[pro]⟩]

B.2 Derived Constructs

(16) a. Deriving a VP Ellipsis Construct:
[FORM can, SYN|VAL [SPR ⟨[1]NP⟩, COMPS ⟨[2]VP[bse]⟩], ARG-ST ⟨[1], [2]⟩] → [FORM can, SYN|VAL [SPR ⟨[1]⟩, COMPS ⟨⟩], ARG-ST ⟨[1], [2][pro]⟩]

b. Deriving a Negative Auxiliary Construct:
[fin-aux, FORM will, SYN|HEAD [AUX +, VFORM fin], ARG-ST ⟨[1]NP, [2]XP⟩] → [neg-fin-aux, FORM will, SYN|HEAD [AUX +, VFORM fin, NEG +], ARG-ST ⟨[1]NP, Adv[LEX +, NEG +], [2]XP⟩]

c. Deriving an Inverted Auxiliary Construct:
[aux-wd, FORM will, SYN|HEAD [AUX +, INV −], ARG-ST ⟨[1]XP, VP[VFORM bse, SPR ⟨[1]⟩]⟩] → [aux-inv-fwd, FORM will, SYN [HEAD [AUX +, INV +], VAL [SPR ⟨⟩, COMPS ⟨S[nonfin]⟩]], ARG-ST ⟨S[VFORM bse, XARG [1][CASE nom]]⟩]

d. Deriving a Passive Construct:
[FORM send, ARG-ST ⟨NPi, [2]NP, [3]PP[to]⟩] → [FORM sent, SYN|HEAD|VFORM pass, ARG-ST ⟨[2]NP, [3]PP[to], (PPi[by])⟩]

C Constructional Constraints

C.1 Word-Level Constructions

(17) a. Argument Realization Constraint (ARC):
v-wd ⇒ [SYN|VAL [SPR A, COMPS B], ARG-ST A ⊕ B]

b. Argument Realization Constraint for the Function-Word:
function-wd ⇒ [SYN|VAL [SPR elist, COMPS A], ARG-ST A]

c. Auxiliary Verbs:
aux-verb ⇒ [SYN|HEAD [POS verb, AUX +], ARG-ST ⟨[1]XP, YP[SPR ⟨[1]XP⟩]⟩]

d. Modal Auxiliary:
aux-modal ⇒ [SYN|HEAD|VFORM fin, ARG-ST ⟨NP, VP[VFORM bse]⟩]

e. Tough Lexeme:
tough-lxm ⇒ [SYN|HEAD|POS adj, ARG-ST ⟨NPi, VP[VFORM inf, GAP ⟨[1]NPi[acc]⟩]⟩]

f. Eager Lexeme:
eager-lxm ⇒ [SYN|HEAD|POS adj, ARG-ST ⟨NPi, VP[VFORM inf, SPR ⟨NPi⟩]⟩]

C.2 Phrase-Level Constructions

C.2.1 Major Phrasal Constructions

(18) a. HEAD-SPECIFIER CONSTRUCTION:
XP[SPR ⟨⟩] → [1], H[SPR ⟨[1]⟩]

b. HEAD-COMPLEMENT CONSTRUCTION:
XP[COMPS ⟨⟩] → H[COMPS ⟨[1], ..., [n]⟩], [1], ..., [n]
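The Head-Complement Construction in (18b) saturates the head's COMPS list while passing the SPR requirement up to the mother. A small sketch under an assumed dict encoding (the function name and category labels are illustrative, not the book's notation):

```python
# Sketch of the Head-Complement Construction (18b): the head combines
# with all of its complements at once; the mother's COMPS list is empty
# and the SPR requirement is inherited unchanged. Illustrative encoding.

def head_complement(head, daughters):
    """Combine a lexical head with its complement daughters."""
    if head["COMPS"] != [d["cat"] for d in daughters]:
        raise ValueError("daughters do not match the head's COMPS list")
    return {"cat": head["cat"] + "P", "SPR": head["SPR"], "COMPS": []}

# V with COMPS <NP, PP> plus NP and PP daughters yields a saturated VP.
put = {"cat": "V", "SPR": ["NP"], "COMPS": ["NP", "PP"]}
vp = head_complement(put, [{"cat": "NP"}, {"cat": "PP"}])
assert vp == {"cat": "VP", "SPR": ["NP"], "COMPS": []}
```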

c. HEAD-MODIFIER CONSTRUCTION:
XP → [MOD ⟨[1]⟩], [1]H

d. HEAD-ONLY CONSTRUCTION:
XP[phrase, VAL [1]] → X[word, VAL [1]]

e. HEAD-LEX CONSTRUCTION:
V[POS [1]] → V[POS [1]], X[LEX +]

f. HEAD-FILLER CONSTRUCTION:
S[GAP ⟨⟩] → [1]XP, S[GAP ⟨[1]XP⟩]
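The Head-Filler Construction in (18f) discharges a GAP requirement: the filler must be identical to the single member of the gapped clause's GAP list, and the mother's GAP is empty. A sketch under an assumed dict encoding, with tag identity modeled as object identity (names are illustrative assumptions):

```python
# Sketch of the Head-Filler Construction (18f): a filler [1]XP combines
# with a gapped clause S[GAP <[1]XP>]; the mother's GAP list is empty.
# Illustrative encoding, not the book's formalism.

def head_filler(filler, clause):
    """Combine a filler with a clause whose GAP holds that filler."""
    if len(clause["GAP"]) != 1 or clause["GAP"][0] is not filler:
        raise ValueError("filler does not match the clause's GAP value")
    return {"cat": "S", "GAP": []}

what = {"cat": "NP", "FORM": "what"}
gapped = {"cat": "S", "GAP": [what]}    # e.g. 'Kim bought __'
top = head_filler(what, gapped)
assert top == {"cat": "S", "GAP": []}
```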

C.2.2 Minor Phrasal Constructions

(19) a. APPOSITIVE CONSTRUCTION:
NP[SEM [IND i, RELS ⟨[1], [2]⟩]] → NP[SEM [IND i, RELS ⟨[1]⟩]], NP/S[SEM [IND s0, RELS ⟨[2]⟩]]

b. COORDINATION CONSTRUCTION:
XP → XP[GAP A] conj XP[GAP A]

c. FREE-REL CONSTRUCTION:
NP[GAP ⟨⟩] → [1]NP[FREL i], S[GAP ⟨[1]NP⟩]

d. HEAD-REL MOD CONSTRUCTION:
N′i → [1]N′i, S[REL i, MOD ⟨[1]⟩]

e. HEAD-REL BARE MOD CONSTRUCTION:
N′[GAP ⟨⟩] → [1]N′i, S[MOD ⟨[1]⟩, GAP ⟨NPi[acc]⟩]

f. HEAD-EXTRA CONSTRUCTION:
[EXTRA ⟨⟩] → H[EXTRA ⟨[1]⟩], [1]XP

g. NONCANONICAL SUBJECT CONSTRUCTION:
S[SPR ⟨⟩] → VP[SPR ⟨NP[noncanonical]⟩]

h. TOUGH CONSTRUCTION:
AP[GAP A] → A[tough-adj, SPR ⟨NPi⟩], XP[GAP ⟨NPi[acc]⟩ ⊕ A]
Bibliography

Aarts, Bas. 1997/2001. English Syntax and Argumentation. Basingstoke, Hampshire and
New York: Palgrave.
Aarts, Bas. 2007. Syntactic Gradience: The Nature of Grammatical Indeterminacy.
Oxford: Oxford University Press.
Abeillé, Anne and Godard, Daniele. 2000. French Word Order and Lexical Weight. In
Borsley, R. (ed.), The Nature and Function of Syntactic Categories, 325–360. New
York: Academic Press.
Abeillé, Anne and Godard, Daniele. 2002. The Syntactic Structure of French Auxiliaries.
Language 78(3): 404–452.
Abney, Steven. 1987. The English Noun Phrase in Its Sentential Aspect. PhD dissertation,
MIT.
Adger, David. 2013. Constructions and Grammatical Explanation: Comments on Gold-
berg. Mind and Language 28(4): 466–478.
Akmajian, Adrian. 1970. On Deriving Cleft Sentences from Pseudo-cleft Sentences.
Linguistic Inquiry 1(2): 149–168.
Akmajian, Adrian and Heny, Frank. 1975. Introduction to the Principles of Transforma-
tional Syntax. Cambridge, MA: MIT Press.
Akmajian, Adrian, Steele, Susan, and Wasow, Thomas. 1979. The Category AUX in
Universal Grammar. Linguistic Inquiry 10(1): 1–64.
Akmajian, Adrian and Wasow, Thomas. 1974. The Constituent Structure of VP and AUX
and the Position of Verb BE. Linguistic Analysis 1(3): 205–245.
Arnold, Douglas. 2004. Non-restrictive Relative Clauses in Construction-Based HPSG.
In Müller, S. (ed.), Proceedings of the 11th International Conference on Head-Driven
Phrase Structure Grammar, 27–47. Stanford, CA: CSLI Publications.
Arnold, Douglas and Spencer, Andrew. 2015. A Constructional Analysis for the Skepti-
cal. In Müller, S. (ed.), Proceedings of the 22nd International Conference on Head-
Driven Phrase Structure Grammar, 41–61. Stanford, CA: CSLI Publications.
Asudeh, Ash, Dalrymple, Mary, and Toivonen, Ida. 2013. Constructions with Lexical
Integrity. Journal of Language Modelling 1(1): 1–54.
Bach, Emmon. 1974. Syntactic Theory. New York: Holt, Rinehart and Winston.
Bach, Emmon. 1979. Control in Montague Grammar. Linguistic Inquiry 10(4): 515–531.
Baker, Carl. 1991. The Syntax of English not: The Limits of Core Grammar. Linguistic
Inquiry 22(3): 387–429.
Baker, Carl. 1995. English Syntax. Cambridge, MA: MIT Press.
Baker, Mark. 1997. Thematic Roles and Syntactic Structure. In Haegeman, L. (ed.),
Elements of Grammar, 73–137. Dordrecht: Kluwer.
Baker, Mark. 2001. The Atoms of Language: The Mind’s Hidden Rules of Grammar. New
York: Basic Books.


Baltin, Mark. 2006. Extraposition. In Everaert, M. and Van Riemsdijk, H. (eds.), The
Blackwell Companion to Syntax (Blackwell Handbooks in Linguistics), 237–271.
Oxford: Blackwell.
Bates, Elizabeth and Goodman, Judith C. 1997. On the Inseparability of Grammar and the
Lexicon: Evidence from Acquisition, Aphasia and Real-Time Processing. Language
and Cognitive Processes 12: 507–584.
Bender, Emily and Flickinger, Dan. 1999. Peripheral Constructions and Core Phe-
nomena: Agreement in Tag Questions. In Webelhuth, G., Koenig, J.-P., and Kathol,
A. (eds.), Lexical and Constructional Aspects of Linguistic Explanation, 199–214.
Stanford, CA: CSLI Publications.
Biber, Douglas, Johansson, Stig, Leech, Geoffrey, Conrad, Susan, and Finegan, Edward.
1999. Longman Grammar of Spoken and Written English. New York: Longman.
Blake, Barry. 1990. Relational Grammar. London: Routledge.
Bloomfield, Leonard. 1933. Language. New York: H. Holt and Company.
Booij, Geert. 2010. Construction Morphology. Language and Linguistics Compass 4(7):
543–555.
Borsley, Robert. 1989a. Phrase Structure Grammar and the Barriers Conception of Clause
Structure. Linguistics 27(5): 843–863.
Borsley, Robert. 1989b. An HPSG Approach to Welsh. Journal of Linguistics 25(2): 333–
354.
Borsley, Robert. 1991. Syntactic Theory: A Unified Approach. London: Routledge.
Borsley, Robert. 1996. Modern Phrase Structure Grammar. Oxford: Blackwell.
Borsley, Robert. 2004. An Approach to English Comparative Correlatives. In Müller,
S. (ed.), Proceedings of the 11th International Conference on Head-Driven Phrase
Structure Grammar, 70–92. Stanford, CA: CSLI Publications.
Borsley, Robert. 2005. Against ConjP. Lingua 115(4): 461–482.
Borsley, Robert. 2006. Syntactic and Lexical Approaches to Unbounded Dependencies.
Essex Research Reports in Linguistics 49. Colchester, UK: University of Essex.
Borsley, Robert. 2012. Don’t Move! Iberia: An International Journal of Theoretical
Linguistics 4(1): 110–139.
Bouma, Gosse, Malouf, Rob, and Sag Ivan. 2001. Satisfying Constraints on Extraction
and Adjunction. Natural Language and Linguistic Theory 19(1): 1–65.
Brame, Michael. 1979. Essays toward Realistic Syntax. Seattle: Noit Amrofer.
Bresnan, Joan. 1978. A Realistic Transformational Grammar. In Halle, M., Bresnan, J.,
and Miller, G. A. (eds.), Linguistic Theory and Psychological Reality. Cambridge,
MA: MIT Press.
Bresnan, Joan. 1982a. Control and Complementation. In The Mental Representation of
Grammatical Relations (Bresnan, 1982c).
Bresnan, Joan. 1982b. The Passive in Lexical Theory. In The Mental Representation of
Grammatical Relations (Bresnan, 1982c).
Bresnan, Joan. 1982c. The Mental Representation of Grammatical Relations. Cambridge,
MA: MIT Press.
Bresnan, Joan. 1994. Locative Inversion and the Architecture of Universal Grammar.
Language 70(2): 1–52.
Bresnan, Joan. 2001. Lexical-Functional Syntax. Oxford and Cambridge, MA: Blackwell.
Briscoe, Edward, Copestake, Ann, and Paiva, Valeria. 1993. Inheritance, Defaults, and
the Lexicon. Cambridge, UK: Cambridge University Press.

Brody, Michael. 1995. Lexico-Logical Form: A Radically Minimalist Theory. Cambridge,
MA: MIT Press.
Burton-Roberts, Noel. 2016. Analysing Sentences: An Introduction to English Syntax.
London: Routledge.
Carnie, Andrew. 2002. Syntax: A Generative Introduction. Oxford: Blackwell.
Carnie, Andrew. 2011. Modern Syntax. Cambridge, UK: Cambridge University Press.
Carpenter, Bob. 1992. The Logic of Typed Feature Structures: With Applications to Uni-
fication Grammars, Logic Programs, and Constraint Resolution. Cambridge, UK:
Cambridge University Press.
Chae, Hee-Rahk. 1992. Lexically Triggered Unbounded Discontinuities in English:
An Indexed Phrase Structure Grammar Approach. PhD dissertation, Ohio State
University.
Chappell, Hilary. 1980. Is the Get-passive Adversative? Research on Language & Social
Interaction 13(3): 411–452.
Chater, Nick and Christiansen, Morten H. 2018. Language Acquisition as Skill Learning.
Current Opinion in Behavioural Sciences 21: 205–208.
Chaves, Rui. 2007. Coordinate Structures – Constraint-Based Syntax and Semantics
Processing. PhD dissertation, University of Lisbon.
Chaves, Rui. 2009. Construction-Based Cumulation and Adjunct Extraction. In Müller,
S. (ed.), Proceedings of the 16th International Conference on Head-Driven Phrase
Structure Grammar, 47–67. Stanford, CA: CSLI Publications.
Chierchia, Gennaro and McConnell-Ginet, Sally. 1990. Meaning and Grammar: An
Introduction to Semantics. Cambridge, MA: MIT Press.
Chomsky, Noam. 1957. Syntactic Structures. The Hague: Mouton.
Chomsky, Noam. 1963. Formal Properties of Grammars. In Luce, R., Bush, R., and
Galanter, E. (eds.), Handbook of Mathematical Psychology, Volume II. New York:
Wiley.
Chomsky, Noam. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Chomsky, Noam. 1969. Remarks on Nominalization. In Jacobs, R. and Rosenbaum,
P. (eds.), Readings in English Transformational Grammar, 184–221. Waltham, MA:
Ginn.
Chomsky, Noam. 1972. Language and Mind. New York: Harcourt, Brace, Jovanovich.
Chomsky, Noam. 1973. Conditions on Transformations. In Anderson, S. and
Kiparsky, P. (eds.), A Festschrift for Morris Halle. New York: Holt, Rinehart and
Winston.
Chomsky, Noam. 1975. The Logical Structure of Linguistic Theory. Chicago: University
of Chicago Press.
Chomsky, Noam. 1977. On Wh-movement. In Akmajian, A., Culicover, P. and Wasow,
T. (eds.), Formal Syntax, 71–132. New York: Academic Press.
Chomsky, Noam. 1980. Rules and Representations. New York: Columbia University
Press.
Chomsky, Noam. 1981a. Lectures on Government and Binding. Dordrecht: Foris.
Chomsky, Noam. 1981b. Principles and Parameters in Syntactic Theory. Explanation in
Linguistics: 32–75.
Chomsky, Noam. 1982. Some Concepts and Consequences of the Theory of Government
and Binding. Cambridge, MA: MIT Press.
Chomsky, Noam. 1986. Barriers. Cambridge, MA: MIT Press.

Chomsky, Noam. 1993. A Minimalist Program for Linguistic Theory. In Hale, K. and
Kayser, S. (eds.), The View from Building 20: Essays in Honor of Sylvain Bromberger,
1–52. Cambridge, MA: MIT Press.
Chomsky, Noam. 1995. The Minimalist Program. Cambridge, MA: MIT Press.
Chomsky, Noam. 2005. Three Factors in Language Design. Linguistic Inquiry 36(1):
1–22.
Chomsky, Noam. 2013. Problems of Projection. Lingua 130: 33–49.
Chomsky, Noam and Lasnik, Howard. 1977. Filters and Control. Linguistic Inquiry 8(3):
425–504.
Christiansen, Morten and Chater, Nick. 2016. Creating Language: Integrating Evolution,
Acquisition, and Processing. Cambridge, MA: MIT Press.
Collins, Peter. 1996. Get-passives in English. World Englishes 15(1): 43–56.
Copestake, Ann. 2002. Implementing Typed Feature Structures Grammars. Stanford, CA:
CSLI Publications.
Copestake, Ann, Flickinger, Dan, Pollard, Carl, and Sag, Ivan. 2006. Minimal Recursion
Semantics: An Introduction. Research on Language and Computation 3(4): 281–332.
Cowper, Elizabeth. 1992. A Concise Introduction to Syntactic Theory: The Government-
Binding Approach. Chicago: University of Chicago Press.
Croft, William. 2001. Radical Construction Grammar: Syntactic Theory in Typological
Perspective. Oxford: Oxford University Press.
Croft, William. 2009. Syntax Is More Diverse, and Evolutionary Linguistics Is Already
Here. The Behavioral and Brain Sciences 32(5): 457–458.
Culicover, Peter. 1993. Evidence against ECP Accounts of the That-t Effect. Linguistic
Inquiry 24(3): 557–561.
Culicover, Peter and Jackendoff, Ray. 1999. The View from the Periphery: The English
Comparative Correlative. Linguistic Inquiry 30(4): 543–571.
Culicover, Peter, and Jackendoff, Ray. 2005. Simpler Syntax. Oxford: Oxford University
Press.
Dalrymple, Mary. 2001. Lexical Functional Grammar. (Syntax and Semantics, Volume
34). New York: Academic Press.
Dalrymple, Mary, Zaenen, Annie, Maxwell III, John, and Kaplan, Ronald. 1995. Formal
Issues in Lexical-Functional Grammar. Stanford, CA: CSLI Publications.
Davidson, Donald. 1980. Essays on Actions and Events. Oxford: Clarendon Press; New
York: Oxford University Press.
Davis, Anthony. 2001. Linking by Types in the Hierarchical Lexicon. Stanford, CA: CSLI
Publications.
den Dikken, Marcel. 2005. Comparative Correlatives Comparatively. Linguistic Inquiry
36(4): 497–532.
Downing, Angela. 1996. The Semantics of Get-Passives. In Hasan, R., Cloran, C.,
and Butt, G. (eds.), Functional Descriptions: Theory in Practice. Amsterdam: John
Benjamins Publishing.
Dowty, David. 1982. Grammatical Relations and Montague Grammar. In Jacobson, P.
and Pullum, G. (eds.), The Nature of Syntactic Representation, 79–130. Dordrecht:
Reidel.
Dowty, David. 1989. On the Semantic Content of the Notion of Thematic Role. In Chier-
chia, G., Partee B., and Turner, R. (eds.), Properties, Types, and Meanings, Volume 2,
69–129. Dordrecht: Kluwer.

Dowty, David, Wall, Robert, and Peters, Stanley. 1981. Introduction to Montague
Semantics. Dordrecht: Reidel.
Dubinsky, Stanley and Davies, William. 2004. The Grammar of Raising and Control: A
Course in Syntactic Argumentation. Oxford: Blackwell.
Emonds, Joseph. 1970. Root and Structure-Preserving Transformations. PhD disserta-
tion, MIT.
Emonds, Joseph. 1976. A Transformational Approach to English Syntax: Root, Structure-
Preserving, and Local Transformations. New York: Academic Press.
Ernst, Thomas. 1992. The Phrase Structure of English Negation. The Linguistic Review
9(2): 109–144.
van Eynde, Frank. 2015. Sign-Based Construction Grammar: A Guided Tour. Journal of
Linguistics 52(1): 194–217.
van Eynde, Frank and Kim, Jong-Bok. 2016. Loose Apposition: A Construction-Based
Analysis. Functions of Language 23(1): 17–39.
Fabb, Nigel. 1990. The Difference between English Restrictive and Non-restrictive
Relative Clauses. Journal of Linguistics 26(1): 57–78.
Fillmore, Charles. 1963. The Position of Embedding Transformations in a Grammar.
Word 19(2): 208–231.
Fillmore, Charles. 1999. Inversion and Constructional Inheritance. In Webelhuth, G.,
Koenig, J. P., and Kathol, A. (eds.), Lexical and Constructional Aspects of Linguistic
Explanation, 113–128. Stanford, CA: CSLI Publications.
Fillmore, Charles, Kay, Paul, and O’Connor, Mary. 1988. Regularity and Idiomaticity in
Grammatical Constructions: The Case of Let Alone. Language 64(3): 501–538.
Flickinger, Daniel. 1983. Lexical Heads and Phrasal Gaps. In Barlow, M., Flickinger,
D., and Wescoat, M. (eds.), Proceedings of the 2nd West Coast Conference on Formal
Linguistics, 89–101. Stanford, CA: Stanford Linguistics Association.
Flickinger, Daniel. 1987. Lexical Rules in the Hierarchical Lexicon. PhD dissertation,
Stanford University.
Flickinger, Daniel. 2008. Transparent Heads. In Müller, S. (ed.), Proceedings of the
15th International Conference on Head-Driven Phrase Structure Grammar, 87–94.
Stanford, CA: CSLI Publications.
Flickinger, Daniel, Pollard, Carl, and Wasow, Thomas. 1985. Structure-Sharing in
Lexical Representation. In Proceedings of the 23rd Annual Meeting of the Association
for Computational Linguistics. Morristown, NJ: Association for Computational
Linguistics.
Fodor, Jerry. 1983. The Modularity of Mind. Cambridge, MA: MIT Press.
Fodor, Jerry and Katz, Jerrold. 1964. The Structure of Language. Englewood Cliffs, NJ:
Prentice-Hall.
Fraser, Bruce. 1970. Idioms within a Transformational Grammar. Foundations of Lan-
guage 6: 22–42.
Gazdar, Gerald. 1981. Unbounded Dependencies and Coordinate Structure. Linguistic
Inquiry 12(2): 155–184.
Gazdar, Gerald. 1982. Phrase Structure Grammar. In Jacobson, P. and Pullum, G. (eds.),
The Nature of Syntactic Representation. Dordrecht: Reidel.
Gazdar, Gerald, Klein, Ewan, Pullum, Geoffrey, and Sag, Ivan. 1985. Generalized
Phrase Structure Grammar. Cambridge, MA; Harvard University Press; Oxford: Basil
Blackwell.

Gazdar, Gerald and Pullum, Geoffrey. 1981. Subcategorization, Constituent Order, and
the Notion ‘Head’. In Moortgat, M., van der Hulst, H., and Hoekstra, T. (eds.), The
Scope of Lexical Rules. Dordrecht: Foris.
Gazdar, Gerald, Pullum, Geoffrey, and Sag, Ivan. 1982. Auxiliaries and Related Phenom-
ena in a Restrictive Theory of Grammar. Language 58(3): 591–638.
van Gelderen, Elly. 2017. Syntax: An Introduction to Minimalism. Amsterdam: John
Benjamins.
Geluykens, Ronald. 1988. Five Types of Clefting in English Discourse. Linguistics 26:
823–842.
Ginzburg, Jonathan and Sag, Ivan. 2000. Interrogative Investigations: The Form, Mean-
ing and Use of English Interrogatives. Stanford, CA: CSLI Publications.
Goldberg, Adele. 1995. A Construction Grammar Approach to Argument Structure.
Chicago: University of Chicago Press.
Goldberg, Adele. 2003. Constructions: A New Theoretical Approach to Language.
Trends in Cognitive Science 7(5): 219–224.
Goldberg, Adele. 2006. Constructions at Work. Oxford: Oxford University Press.
Goldberg, Adele. 2009. The Nature of Generalization in Language. Cognitive Linguistics
20(1): 93–127.
Goldberg, Adele. 2013. Constructionist Approaches to Language. In Hoffmann, T. and
Trousdale, G. (eds.), Handbook of Construction Grammar. Oxford: Oxford University
Press.
Goldberg, Adele. 2014. Fitting a Slim Dime between the Verb Template and Argument
Structure Construction Approaches. Theoretical Linguistics 40(1–2): 113–135.
Goldberg, Adele. 2016. Tuning in to the Verb-Particle Construction in English. In Nash,
Lea and Samvelian, Pollet (eds.), Approaches to Complex Predicates. Leiden: Brill.
Goldberg, Adele and Casenhiser, Devin. 2006. English Constructions. In Aarts,
B. and McMahon, A. (eds.), Handbook of English Linguistics. Malden, MA:
Blackwell.
Goldsmith, John. 1985. A Principled Exception to the Coordinate Structure Constraint.
In Eilfort, W., Kroeber, P., and Peters, K. (eds.), Papers from the 21st Regional Meeting
of the Chicago Linguistic Society. Chicago: Chicago Linguistic Society.
Green, Georgia. 1976. Main Clause Phenomena in Subordinate Clauses. Language 52(2):
382–397.
Green, Georgia. 1981. Pragmatics and Syntactic Description. Studies in the Linguistic
Sciences 11(1): 27–37.
Green, Georgia. 2011. Modelling Grammar Growth: Universal Grammar without Innate
Principles or Parameters. In Borsley, R. and Borjars, K. (eds.), Nontransformational
Syntax: Formal and Explicit Models of Grammar: A Guide to Current Models, 378–
403. Cambridge, MA: Blackwell.
Greenbaum, Sidney. 1996. The Oxford English Grammar. Oxford: Oxford University
Press.
Gregory, Michelle and Michaelis, Laura. 2001. Topicalization and Left-Dislocation: A
Functional Opposition Revisited. Journal of Pragmatics 33(11): 1665–1706.
Grice, Paul. 1989. Studies in the Way of Words. Cambridge, MA: Harvard University
Press.
Grimshaw, Jane. 1997. Projection, Heads, and Optimality. Linguistic Inquiry 28(3): 373–
422.

Groat, Erich. 1995. English Expletives: A Minimalist Approach. Linguistic Inquiry 26(2):
354–365.
Grosu, Alexander. 1974. On the Nature of the Left Branch Constraint. Linguistic Inquiry
5(2): 308–319.
Gundel, Jeanette. 1977. Where Do Cleft-Sentences Come from? Language 53(3): 543–
559.
Haegeman, Liliane. 1985. The Get-Passive and Burzio’s Generalization. Lingua 66(1):
53–77.
Haegeman, Liliane. 1994. Introduction to Government and Binding Theory. Cambridge,
MA: Basil Blackwell.
Harman, Gilbert. 1963. Generative Grammar without Transformation Rules: A Defense
of Phrase Structure. Language 39(4): 597–616.
Harris, Randy. 1993. The Linguistic Wars. Oxford: Oxford University Press.
Harris, Zellig. 1970. Papers in Structural and Transformational Linguistics. Dordrecht:
Reidel.
Hedberg, Nancy. 1988. The Discourse Function of Cleft Sentences in Spoken English.
Paper presented at the Linguistics Society of America Conference, New York.
Hedberg, Nancy. 2000. The Referential Status of Clefts. Language 76(4): 891–920.
Hilpert, Martin. 2014. Construction Grammar and Its Application to English. Edinburgh:
Edinburgh University Press.
Hofmeister, Philip, Jaeger, Florian, Sag, Ivan, Arnon, Inbal, and Snider, Neal. 2006.
Locality and Accessibility in Wh-questions. In Featherston, S. and Sternefeld, W.
(eds.), Roots: Linguistics in Search of Its Evidential Base, 185–206. Berlin: Mouton
de Gruyter.
Hooper, Joan and Thompson, Sandra. 1973. On the Applicability of Root Transforma-
tions. Linguistic Inquiry 4(4): 465–497.
Hornstein, Norbert and Lightfoot, David. 1981. Explanation in Linguistics: The Logical
Problem of Language Acquisition. London: Longman.
Huddleston, Rodney and Pullum, Geoffrey. 2002. The Cambridge Grammar of the
English Language. Cambridge, UK: Cambridge University Press.
Hudson, Richard. 1984. Word Grammar. Oxford: Blackwell.
Hudson, Richard. 1990. English Word Grammar. Oxford: Blackwell.
Hudson, Richard. 1998. Word Grammar. In Agel, V., Eichinger, L., Eroms, H. W.
et al. (eds.), Dependency and Valency: An International Handbook of Contemporary
Research. Berlin: Walter de Gruyter.
Hudson, Richard. 2003. Mismatches in Default Inheritance. In Francis, F. and Michaelis,
L. (eds.), Mismatch: Form-Function Incongruity and the Architecture of Grammar,
355–402. Stanford, CA: CSLI Publications.
Hudson, Richard. 2004. Are Determiners Heads? Functions of Language 11(1): 7–42.
Hudson, Richard. 2010. An Introduction to Word Grammar (Cambridge Textbooks in
Linguistics). Cambridge, UK: Cambridge University Press.
Huang, James. 1982. Logical Relations in Chinese and the Theory of Grammar. PhD
dissertation, MIT.
Jackendoff, Ray. 1972. Semantic Interpretation in Generative Grammar. Cambridge,
MA: MIT Press.
Jackendoff, Ray. 1975. Morphological and Semantic Regularities in the Lexicon. Lan-
guage 51(3): 639–671.

Jackendoff, Ray. 1977. X̄ Syntax: A Study of Phrase Structure. Cambridge, MA: MIT Press.
Jackendoff, Ray. 1990. Semantic Structures. Cambridge, MA: MIT Press.
Jackendoff, Ray. 1994. Patterns in the Mind. New York: Basic Books.
Jackendoff, Ray. 2002. Foundation of Language: Brain, Meaning, Grammar, Evolution.
Oxford: Oxford University Press.
Jackendoff, Ray. 2007. A Parallel Architecture Perspective on Language Processing.
Brain Research 1146(1): 2–22.
Jackendoff, Ray. 2008. Construction after Construction and Its Theoretical Challenges.
Language 84(1): 8–28.
Jackendoff, Ray. 2011. What is the Human Language Faculty? Two Views. Language
87(3): 586–624.
Jackendoff, Ray and Pinker, Steven. 2009. The Reality of a Universal Language Faculty.
The Behavioral and Brain Sciences 32(5): 465–466.
Jacobs, Roderick. 1995. English Syntax: A Grammar for English Language Profession-
als. Oxford: Oxford University Press.
Johnson, David and Lappin, Shalom. 1999. Local Constraints vs. Economy. Stanford, CA:
CSLI Publications.
Johnson, David and Postal, Paul. 1980. Arc-Pair Grammar. Princeton: Princeton Univer-
sity Press.
Kaplan, Ronald and Zaenen, Annie. 1989. Long-Distance Dependencies, Constituent
Structure and Functional Uncertainty. In Baltin, M. and Kroch, A. (eds.), Alternative
Conceptions of Phrase Structure, 17–42. Chicago: University of Chicago Press.
Katz, Jerrold and Postal, Paul. 1964. An Integrated Theory of Linguistic Descriptions.
Cambridge, MA: MIT Press.
Katz, Jerrold and Postal, Paul. 1991. Realism versus Conceptualism in Linguistics.
Linguistics and Philosophy 14(5): 515–554.
Kay, Paul. 1995. Construction Grammar. In Verschueren, J., Östman, J.-O., and Blom-
maert, J. (eds.), Handbook of Pragmatics. Amsterdam and Philadelphia: John Ben-
jamins.
Kay, Paul. 2002. An Informal Sketch of a Formal Architecture for Construction Grammar.
Grammars 5(1): 1–19.
Kay, Paul and Fillmore, Charles. 1999. Grammatical Constructions and Linguistic
Generalizations: The What’s x Doing y Construction. Language 75(1): 1–33.
Kay, Paul and Michaelis, Laura A. 2019. A Few Words to Do with Multiword Expres-
sions. In Condoravdi, C. and Holloway King, T. (eds.), Tokens of Meaning: Papers in
Honor of Lauri Karttunen, 87–118. Stanford, CA: CSLI Publications.
Kayne, Richard and Pollock, Jean-Yves. 1978. Stylistic Inversion, Successive Cyclicity,
and Move NP in French. Linguistic Inquiry 9(4): 595–621.
Keenan, Edward. 1975. Some Universals of Passive in Relational Grammar. In Grossman,
R., Sam, L., and Vance, T. (eds.), Papers from the 11th Regional Meeting, Chicago
Linguistic Society, 340–352. Chicago: Chicago Linguistic Society.
Keenan, Edward and Comrie, Bernard. 1977. Noun Phrase Accessibility and Universal
Grammar. Linguistic Inquiry 8(1): 63–99.
Kim, Jong-Bok. 2000. The Grammar of Negation: A Constraint-Based Approach.
Stanford, CA: CSLI Publications.
Kim, Jong-Bok. 2001a. On the Types of Prepositions and Their Projections in Syntax.
Studies in Modern Grammar 26: 1–22.
Kim, Jong-Bok. 2001b. Constructional Constraints in English Free Relative Constructions. Language and Information 5(1): 35–53.
Kim, Jong-Bok. 2002a. On the Structure of English Partitive NPs and Agreement. Studies
in Generative Grammar 12(2): 309–338.
Kim, Jong-Bok. 2002b. English Auxiliary Constructions and Related Phenomena: From
a Constraint-Based Perspective. Language Research 38(4): 1037–1076.
Kim, Jong-Bok. 2003. Similarities and Differences between English VP Ellipsis and VP
Fronting: An HPSG Analysis. Studies in Generative Grammar 13(3): 429–459.
Kim, Jong-Bok. 2004a. Hybrid English Agreement. Linguistics 42(6): 1105–1128.
Kim, Jong-Bok. 2004b. Korean Phrase Structure Grammar (in Korean). Seoul: Hankwuk
Publishing.
Kim, Jong-Bok. 2011. The English Comparative Correlative Construction: Interactions
between Lexicon and Constructions. Korean Journal of Linguistics 36(2): 307–336.
Kim, Jong-Bok. 2014. English Copy Raising Constructions: Argument Realization and
Characterization Condition. Linguistics 52(1): 167–203.
Kim, Jong-Bok. 2015. Syntactic and Semantic Identity in Korean Sluicing: A Direct
Interpretation Approach. Lingua 166: 260–293.
Kim, Jong-Bok. 2016. The Syntactic Structures of Korean: A Construction-Based
Perspective. Cambridge, UK: Cambridge University Press.
Kim, Jong-Bok. 2017. Mixed Properties and Matching Effects in English Free Relatives:
A Construction-Based Perspective. Linguistic Research 34(3): 361–385.
Kim, Jong-Bok and Davies, Mark. 2016. The INTO-CAUSATIVE Construction in
English: A Construction-Based Perspective. English Language and Linguistics 20(1):
55–83.
Kim, Jong-Bok and Sag, Ivan. 1995. The Parametric Variation of French and English
Negation. In Camacho, J., Choueiri, L., and Watanabe, M. (eds.), Proceedings of
the Fourteenth West Coast Conference on Formal Linguistics (WCCFL), 303–317.
Stanford, CA: CSLI Publications.
Kim, Jong-Bok and Sag, Ivan. 2002. Negation without Movement. Natural Language
and Linguistic Theory 20(2): 339–412.
Kim, Jong-Bok and Sag, Ivan. 2005. English Object Extraposition: A Constraint-Based
Approach. In Müller, S. (ed.), Proceedings of the 12th International Conference on
Head-Driven Phrase Structure Grammar, 192–212. Stanford, CA: CSLI Publications.
Kim, Jong-Bok and Sells, Peter. 2008. English Syntax: An Introduction. Stanford, CA:
CSLI Publications.
Kim, Jong-Bok and Sells, Peter. 2011. The Big Mess Construction: Interactions between
the Lexicon and Constructions. English Language and Linguistics 15(2): 335–362.
Kim, Jong-Bok and Sells, Peter. 2015. The English Binominal Construction. Journal of
Linguistics 51(1): 41–73.
King, Paul. 1989. A Logical Formalism for Head-Driven Phrase Structure Grammar.
PhD dissertation, University of Manchester.
Kiss, Katalin. 1998. Identificational Focus versus Information Focus. Language 74(2):
245–273.
Kiss, Tibor. 2005. Semantic Constraints on Relative Clause Extraposition. Natural
Language and Linguistic Theory 23(2): 281–334.
Kluender, Robert. 2004. Are Subject Islands Subject to a Processing Account? In
Schmeiser, B., Chand, V., Kelleher, A., and Rodriguez A. (eds.), Proceedings of
the Twenty-Third West Coast Conference on Formal Linguistics (WCCFL), 101–125.
Somerville, MA: Cascadilla Press.
Koenig, Jean-Pierre. 1999. Lexical Relations. Stanford, CA: CSLI Publications.
Koenig, Jean-Pierre and Michelson, Karin. 2012. The (Non)universality of Syntactic
Selection and Functional Application. In Piñón, C. (ed.), Empirical Issues in Syntax
and Semantics 9, 185–205. Paris: CNRS.
Kornai, Andras and Pullum, Geoffrey. 1990. The X-bar Theory of Phrase Structure.
Language 66: 24–50.
Koster, Jan. 1987. Domains and Dynasties: The Radical Autonomy of Syntax. Dordrecht:
Foris.
Lambrecht, Knud. 1994. Information Structure and Sentence Form. Cambridge, UK:
Cambridge University Press.
Lambrecht, Knud. 2001. A Framework for the Analysis of Cleft Constructions. Linguistics 39(3): 463–516.
Langacker, Ronald. 1987. Foundations of Cognitive Grammar. Stanford, CA: Stanford
University Press.
Langacker, Ronald. 2009. Cognitive (Construction) Grammar. Cognitive Linguistics
20(1): 167–176.
Lappin, Shalom, Levine, Robert, and Johnson, David. 2000. The Structure of Unscientific
Revolutions. Natural Language and Linguistic Theory 18(3): 665–671.
Larson, Richard. 1988. On the Double Object Construction. Linguistic Inquiry 19(3):
335–392.
Lasnik, Howard, Depiante, Marcela, and Stepanov, Arthur. 2000. Syntactic Structures
Revisited: Contemporary Lectures on Classic Transformational Theory. Cambridge,
MA: MIT Press.
Lees, Robert and Klima, Edward. 1963. Rules for English Pronominalization. Language
39(1): 17–28.
Levin, Beth. 1993. English Verb Classes and Alternations: A Preliminary Investigation.
Chicago: University of Chicago Press.
Levin, Beth and Rappaport Hovav, Malka. 2005. Argument Realization: Research Surveys
in Linguistics Series. Cambridge, UK: Cambridge University Press.
Levine, Robert. 2017. Syntactic Analysis: An HPSG-Based Analysis. Cambridge, UK:
Cambridge University Press.
Levine, Robert and Hukari, Thomas. 2006. The Unity of Unbounded Dependency
Constructions (CSLI Lecture Notes 166). Stanford, CA: CSLI Publications.
Li, Charles and Thompson, Sandra. 1976. Subject and Topic: A New Typology of Lan-
guages. In Li, C. (ed.), Subject and Topic, 457–490. New York/San Francisco/London:
Academic Press.
Malouf, Rob. 2000. Mixed Categories in the Hierarchical Lexicon. Stanford, CA: CSLI
Publications.
McCawley, James. 1968. Concerning the Base Component of a Transformational Gram-
mar. Foundations of Language 4(3): 243–269.
McCawley, James. 1988. The Syntactic Phenomena of English. Chicago: University of
Chicago Press.
McCloskey, James. 1988. Syntactic Theory. In Newmeyer, F. (ed.), Linguistics: The
Cambridge Survey, 18–59. Cambridge, UK: Cambridge University Press.
Michaelis, Laura. 2006. Construction Grammar. In Brown, K. (ed.), The Encyclopedia of
Language and Linguistics, second edition, 3: 73–84. Oxford: Elsevier.
Michaelis, Laura. 2011. Stative by Construction. Linguistics 49(6): 1359–1399.
Michaelis, Laura. 2012. Making the Case for Construction Grammar. In Boas, H.
and Sag, I. (eds.), Sign-Based Construction Grammar, 31–69. Stanford, CA: CSLI
Publications.
Michaelis, Laura. 2013. Sign-Based Construction Grammar. In Hoffman, T. and Trous-
dale, G. (eds.), The Oxford Handbook of Construction Grammar, 133–152. Oxford:
Oxford University Press.
Michaelis, Laura A. 2019. Constructions Are Patterns and So Are Fixed Expressions.
In Busse, B. and Moehlig, R. (eds.), Patterns in Language and Linguistics, 193–220.
Berlin: Mouton de Gruyter.
Michaelis, Laura and Lambrecht, Knud. 1996. Toward a Construction-Based Theory of
Language Function: The Case of Nominal Extraposition. Language 72(2): 215–248.
Miller, Jim. 2000. An Introduction to English Syntax. Edinburgh: Edinburgh University Press.
Miller, Philip and Pullum, Geoffrey K. 2014. Exophoric VP Ellipsis. In Hofmeister, P.
and Norcliffe, E. (eds.), The Core and the Periphery: Data-Driven Perspectives on
Syntax Inspired by Ivan A. Sag, 5–32. Stanford, CA: CSLI Publications.
Müller, Stefan. 2013. Unifying Everything: Some Remarks on Simpler Syntax, Construc-
tion Grammar, Minimalism and HPSG. Language 89(4): 920–950.
Müller, Stefan. 2015. HPSG – A Synopsis. In Kiss, T. and Alexiadou, A. (eds.), Syntax
Theory and Analysis: An International Handbook (Handbooks of Linguistics and
Communication Science 42(2)), 937–973. Berlin: Walter de Gruyter.
Müller, Stefan. 2016. Grammatical Theory: From Transformational Grammar to
Constraint-Based Approaches. Berlin: Language Science Press.
Nerbonne, John, Netter, Klaus, and Pollard, Carl. 1994. German in Head-Driven Phrase
Structure Grammar. Stanford, CA: CSLI Publications.
Newmeyer, Frederick. 2000. Language Form and Language Function. Cambridge, MA:
MIT Press.
Newmeyer, Frederick. 2003. Theoretical Implications of Grammatical Category-
Grammatical Relation Mismatches. In Francis, E. and Michaelis, L. (eds.), Mismatch:
Form-Function Incongruity and the Architecture of Grammar, 149–178. Stanford, CA:
CSLI Publications.
Newport, Elissa. 2016. Statistical Language Learning: Computational, Maturational, and
Linguistic Constraints. Language and Cognition 8(3): 447–461.
Nunberg, Geoffrey. 1995. Transfer of Meaning. Journal of Semantics 12(2): 109–132.
Nunberg, Geoffrey, Sag, Ivan, and Wasow, Thomas. 1994. Idioms. Language 70(3): 491–
538.
Pavey, Emma. 2004. The English It-Cleft Construction: A Role and Reference Grammar
Analysis. PhD dissertation, SUNY.
Perlmutter, David. 1983. Studies in Relational Grammar 1. Chicago: University of
Chicago Press.
Perlmutter, David and Postal, Paul. 1977. Toward a Universal Characterization of
Passivization. In Proceedings of the 3rd Annual Meeting of the Berkeley Lin-
guistics Society. Berkeley: University of California Press. Reprinted in Perlmutter
(1983).
Perlmutter, David and Rosen, Carol. 1984. Studies in Relational Grammar 2. Chicago:
University of Chicago Press.
Perlmutter, David and Soames, Scott. 1979. Syntactic Argumentation and the Structure
of English. Berkeley: University of California Press.
Pinker, Steven. 1994. The Language Instinct. New York: Morrow.
Pollard, Carl. 1996. The Nature of Constraint-Based Grammar. Paper presented at the
Pacific Asia Conference on Language, Information, and Computation. Seoul, Korea:
Kyung Hee University.
Pollard, Carl and Sag, Ivan. 1987. Information-Based Syntax and Semantics, Volume 1:
Fundamentals. Stanford, CA: CSLI Publications.
Pollard, Carl and Sag, Ivan. 1992. Anaphors in English and the Scope of Binding Theory.
Linguistic Inquiry 23(2): 261–303.
Pollard, Carl and Sag, Ivan. 1994. Head-Driven Phrase Structure Grammar. Chicago:
University of Chicago Press.
Pollock, Jean-Yves. 1989. Verb Movement, Universal Grammar, and the Structure of IP.
Linguistic Inquiry 20(3): 365–422.
Postal, Paul. 1971. Crossover Phenomena. New York: Holt, Rinehart and Winston.
Postal, Paul. 1974. On Raising. Cambridge, MA: MIT Press.
Postal, Paul. 1986. Studies of Passive Clauses. Albany: SUNY Press.
Postal, Paul and Joseph, Brian. 1990. Studies in Relational Grammar 3. Chicago:
University of Chicago Press.
Postal, Paul and Pullum, Geoffrey. 1988. Expletive Noun Phrases in Subcategorized
Positions. Linguistic Inquiry 19(4): 635–670.
Przepiórkowski, Adam and Kupść, Anna. 2006. HPSG for Slavicists. Glossos 8: 1–68.
Pullum, Geoffrey. 1979. Rule Interaction and the Organization of a Grammar. New York:
Garland.
Pullum, Geoffrey. 1991. English Nominal Gerund Phrases as Noun Phrases with Verb-
Phrase Heads. Linguistics 29(5): 763–799.
Pullum, Geoffrey. 2013. The Central Question in Comparative Syntactic Metatheory.
Mind and Language 28(4): 492–521.
Pullum, Geoffrey and Gazdar, Gerald. 1982. Natural Languages and Context-Free
Languages. Linguistics and Philosophy 4(4): 471–504.
Pullum, Geoffrey and Scholz, Barbara. 2002. Empirical Assessment of Stimulus Poverty
Arguments. The Linguistic Review 19(1–2): 9–50.
Quirk, Randolph, Greenbaum, Sidney, Leech, Geoffrey, and Svartvik, Jan. 1972. A
Grammar of Contemporary English. London and New York: Longman.
Quirk, Randolph, Greenbaum, Sidney, Leech, Geoffrey, and Svartvik, Jan. 1985. A
Comprehensive Grammar of the English Language. London and New York: Longman.
Radford, Andrew. 1981. Transformational Syntax: A Student’s Guide to Chomsky’s
Extended Standard Theory. Cambridge, UK: Cambridge University Press.
Radford, Andrew. 1988. Transformational Grammar: A First Course. Cambridge, UK:
Cambridge University Press.
Radford, Andrew. 1997. Syntactic Theory and the Structure of English. New York and
Cambridge, UK: Cambridge University Press.
Radford, Andrew. 2004. English Syntax: An Introduction. Cambridge, UK: Cambridge
University Press.
Richter, Frank and Sailer, Manfred. 2009. Phraseological Clauses as Constructions in
HPSG. In Müller, S. (ed.), Proceedings of the 16th International Conference on Head-
Driven Phrase Structure Grammar, 297–317. Stanford, CA: CSLI Publications.
van Riemsdijk, Henk and Williams, Edwin. 1986. Introduction to the Theory of Gram-
mar. Cambridge, MA: MIT Press.
Rosenbaum, Peter. 1967. The Grammar of English Predicate Complement Constructions.
Cambridge, MA: MIT Press.
Ross, John. 1967. Constraints on Variables in Syntax. PhD thesis, MIT. Published as
Infinite Syntax. Norwood, NJ: Ablex, 1986.
Ross, John. 1968. Constraints on Variables in Syntax. Bloomington: Indiana University
Linguistics Club.
Ross, John. 1969. Auxiliaries as Main Verbs, In Todd, W. (ed.), Studies in Philosophical
Linguistics 1. Evanston: Great Expectations Press.
Ross, John. 1972. Doubl-ing. Linguistic Inquiry 3: 61–86.
Rothstein, Susan. 2010. The Semantics of Count Nouns. In Aloni, M., Bastiaanse, H., de
Jager, T., and Schulz, K. (eds.), Logic, Language and Meaning, 395–404. Heidelberg,
Berlin: Springer.
Sag, Ivan. 1997. English Relative Clause Constructions. Journal of Linguistics 33(2):
431–483.
Sag, Ivan. 2000. Another Argument Against Wh-trace. In Chung, S., McCloskey, J., and
Sanders N. (eds.), Jorge Hankamer Webfest. Available at http://ling.ucsc.edu/Jorge/
sag.html.
Sag, Ivan. 2005. Adverb Extraction and Coordination: A Reply to Levine. In Müller,
S. (ed.), Proceedings of the 12th International Conference on Head-Driven Phrase
Structure Grammar, 394–414.
Sag, Ivan. 2007. Remarks on Locality. In Müller, S. (ed.), Proceedings of the 14th Inter-
national Conference on Head-Driven Phrase Structure Grammar, 394–414. Stanford,
CA: CSLI Publications.
Sag, Ivan. 2008. Feature Geometry and Predictions of Locality. In Corbett, G. and
Kibort, A. (eds.), Proceedings of the Workshop on Features, 236–271. Oxford: Oxford
University Press.
Sag, Ivan. 2010. English Filler-Gap Constructions. Language 86(3): 486–545.
Sag, Ivan. 2012. Sign-Based Construction Grammar: An Informal Synopsis. In Boas, H.
and Sag, I. A. (eds.), Sign-Based Construction Grammar, 69–202. Stanford, CA: CSLI
Publications.
Sag, Ivan and Fodor, Janet. 1994. Extraction without Traces. In Proceedings of the Thir-
teenth Annual Meeting of the West Coast Conference on Formal Linguistics, 365–384.
Stanford, CA: CSLI Publications.
Sag, Ivan, Hofmeister, Philip, and Snider, Neal. 2007. Processing Complexity in Sub-
jacency Violations: The Complex Noun Phrase Constraint. Proceedings of the 43rd
Annual Meeting of the Chicago Linguistic Society, 215–229. Chicago: CLS.
Sag, Ivan and Nykiel, Joanna. 2011. Remarks on Sluicing. In Müller, S. (ed.), Proceedings of the 18th International Conference on Head-Driven Phrase-Structure Grammar,
188–208. Stanford, CA: CSLI Publications.
Sag, Ivan and Pollard, Carl. 1991. An Integrated Theory of Complement Control.
Language 67(1): 63–113.
Sag, Ivan, Wasow, Thomas, and Bender, Emily. 2003. Syntactic Theory: A Formal
Introduction. Stanford, CA: CSLI Publications.
Sag, Ivan and Wasow, Thomas. 2011. Performance-Compatible Competence Grammar.
In Borsley, R. and Börjars, K. (eds.), Non-Transformational Syntax: Formal and
Explicit Models of Grammar. Oxford: Wiley-Blackwell.
Sag, Ivan and Wasow, Thomas. 2015. Flexible Processing and the Design of Grammar.
Journal of Psycholinguistic Research 44(1): 47–63.
Saussure, Ferdinand. 2011. Course in General Linguistics [1916]. London: Duckworth.
Sells, Peter. 1985. Lectures on Contemporary Syntactic Theories. Stanford, CA: CSLI
Publications.
Sells, Peter. 2001. Formal and Empirical Issues in Optimality Theoretic Syntax. Stanford,
CA: CSLI Publications.
Shieber, Stuart. 1986. An Introduction to Unification-Based Approaches to Grammar.
Stanford, CA: CSLI Publications.
Steedman, Mark. 1996. Surface Structure and Interpretation. Cambridge, MA: MIT
Press.
Steedman, Mark. 2000. The Syntactic Process. Cambridge, MA: MIT Press/Bradford
Books.
Stockwell, Robert, Schachter, Paul, and Partee, Barbara. 1973. The Major Syntactic
Structures of English. New York: Holt, Rinehart and Winston.
Stowell, Timothy. 1981. Origins of Phrase Structure. PhD dissertation, MIT.
Sussex, Roland. 1982. A Note on the Get-Passive Construction. Australian Journal of
Linguistics 2(1): 83–95.
Taranto, Gina. 2005. An Event Structure Analysis of Causative and Passive Get.
Unpublished MS. San Diego: University of California.
Thornton, Rosalind. 2016. Children’s Acquisition of Syntactic Knowledge. In Oxford
Research Encyclopedia of Linguistics. Available at https://oxfordre.com/linguistics/
view/10.1093/acrefore/9780199384655.001.0001/acrefore-9780199384655-e-72.
Tomasello, Michael. 2009. Constructing a Language. Cambridge, MA: Harvard Univer-
sity Press.
Tseng, Jesse. 2007. English Prepositional Passive Constructions. In Müller, S. (ed.),
Proceedings of the 14th International Conference on Head-Driven Phrase Structure
Grammar, 271–286. Stanford, CA: CSLI Publications.
Ward, Gregory. 1985. The Semantics and Pragmatics of Preposing. PhD dissertation,
University of Pennsylvania.
Warner, Anthony. 2000. English Auxiliaries without Lexical Rules. In Borsley, R. (ed.),
The Nature and Function of Syntactic Categories, 167–218. New York: Academic
Press.
Wasow, Thomas. 1977. Transformations and the Lexicon. In Akmajian, A., Culicover, P.,
and Wasow, T. (eds.), Formal Syntax, 327–360. New York: Academic Press.
Wasow, Thomas. 1989. Grammatical Theory. In Posner, T. (ed.), Foundations of Cogni-
tive Science, 161–205. Cambridge, MA: MIT Press.
Webelhuth, Gert. 1995. Government and Binding Theory and the Minimalist Program.
Oxford: Basil Blackwell.
Wechsler, Stephen. 1995. The Semantic Basis of Argument Structure. PhD dissertation,
Stanford University.
Wechsler, Stephen. 2013. The Structure of Swedish Pancakes. In Norcliffe, E. and
Hofmeister, P. (eds.), The Core and the Periphery: Data-Driven Perspectives on Syntax
Inspired by Ivan A. Sag, 71–98. Stanford: CSLI Publications.
Zwicky, Arnold. 1994. Dealing out Meaning: Fundamentals of Syntactic Constructions.
Annual Meeting of the Berkeley Linguistics Society 20: 611–625.
Zwicky, Arnold and Pullum, Geoffrey. 1983. Cliticization vs. Inflection: English n’t.
Language 59(3): 502–513.
Index

AGR (agreement), 88, 89, 141–147, 149, 153
ARG-ST (argument-structure), 88–94, 97, 99, 104, 105, 114, 115, 118, 120, 122, 123, 126, 127, 129, 131, 181, 185, 207, 210, 243, 250, 262, 300
COMPS (complements), 59, 104, 105
COUNT (countable), 155
DEF (definite), 153
DP (determiner phrase), 79–81, 138, 139, 280
EXTRA, 300–302
FORM (morphological form), 89
FREL, 309
GAP, 244, 274, 285
GEND (gender), 143
IND (index), 146–150, 162
IP (inflectional phrase), 190
MOD (modifier), 53, 268
NFORM, 118, 119, 300
NUM (number), 84, 141–144, 146
OBJ (object), 53
PER (person), 143, 144, 147
PFORM, 116, 153, 158, 228
PHON (phonology), 88
POS (part-of-speech), 83, 88, 101, 144, 146, 148, 156
PRD (predicate), 91, 93, 159
PRED (predicate), 53, 60, 61, 63, 91
PRO, 258, 260, 276
QUE (question), 225, 246–248, 255
REL, 267, 270
SEM (semantics), 88, 89, 146
SPR (specifier), 78, 79, 94, 104, 105, 115, 119, 139, 141, 147, 158, 174
SUBJ (subject), 53
SYN (syntax), 88, 89, 109, 147, 178, 180
VAL (valence), 89, 100, 105, 106, 108, 167
VFORM, 72, 88, 89, 101, 103, 104, 114, 115, 228, 238, 239, 241

acceptability, 1, 3, 5, 9, 10, 287
accusative, 221, 260, 275, 278, 290, 293
adjective, 4, 25, 37, 38, 63, 116, 128, 129, 139, 158, 293
  attributive, 158
  control, 164, 166, 182
  predicative, 158, 159, 300
  raising, 164, 166
adjunct, 59, 60, 63, 108, 238, 261, 286, 301
Adjunct Clause Constraint, 286, 313
adverb, 37–39, 59, 60, 187, 199, 201, 212
adverbial, 59, 261, 263, 306
Affix Hopping Construction, 189
agreement, 31, 55, 139, 196
  index, 145, 146
  mismatch, 147
  morphosyntactic, 145, 150
  noun-determiner, 141
  pronoun-antecedent, 143
  subject-verb, 62, 78, 143, 148
ambiguity, 31, 43
  structural, 31
anomalous, 104
  semantically, 41
antecedent, 139, 143, 147, 163, 274
Argument Realization Constraint (ARC), 104, 105, 192, 243, 245, 250, 262
argument-structure construction, 89, 91, 94–96, 104
arguments, 88, 89
article, 6, 79, 157
atomic, 85
attribute, 85, 86, 103
attribute-value matrix (AVM), 85
autonomous, 30
autonomy, 11, 13, 30
auxiliary verb, 36, 42, 56, 218, 226, 250

bare NP, 136
biological endowment, 16
British English, 195

Case Filter, 292
Case Theory, 221
Categorial Grammar, 108
clausal
  complement, 27, 120, 129, 132, 253, 254, 298
  subject, 127, 286
clause
  embedded, 247, 271, 273
  finite, 27
  infinitival, 27, 122, 128, 164
  subordinate, 263, 306
cleft, 32, 290, 291, 303
  inverted wh-cleft, 303, 305
  it-cleft, 303
  wh-cleft, 303, 304
COCA (Corpus of Contemporary American English), 1, 17
coindex, 171, 174, 178
combinatory
  properties, 70
  requirement, 70
  rules, 3
common noun, 117
Comparative Conditional Construction, 14
comparative correlative construction, 21
competence, 5
  grammatical, 4, 5, 10
  linguistic, 1
  morphological, 2
  phonetic, 1
  phonological, 1
  semantic, 2
  syntactic, 2, 4
complement, 27, 59, 71–73, 76, 81
  clausal, 119
  infinitival, 166
  oblique, 59, 64, 71, 72, 238
  predicative, 58, 71, 72
complementation pattern, 77
complementizer, 27, 29, 37, 40, 44, 120–122, 124, 299
Complex Noun Phrase Constraint (CNPC), 285
complex NP, 285, 287
conjunction, 26
  coordinate, 26
  subordinate, 26
constituent, 31–34, 37, 53, 70, 124, 218, 297
  question, 32
constituenthood, 51, 220
construct, 20, 246, 260
construct-icon, 50
construction, 11, 12, 14–16, 19–24, 29, 34, 40, 45, 47
Construction Grammar (CxG), 19
context dependent, 146
context free grammar, 41
contraction, 187, 197, 208
Coordinate Structure Constraint (CSC), 285, 286
coordination, 44, 45, 48, 51, 124, 189, 206, 243, 250, 257, 286
Coordination Construction, 249, 250, 286
Coordination Rule, 44
copula, 117, 193, 194, 304
core, 12, 14, 290
corpora, 6, 17
corpus, 1
covert, 259
creativity, 4
cultural product, 16

declarative, 103, 114, 237
deep structure, 164, 165, 169, 170
DEF (definite), 153
definite, 152, 157, 158
demonstrative, 79
dependency
  long-distance, 239, 245, 266, 284
  strong, 290
  unbounded, 239, 245
  weak, 290
descriptive, 5, 6, 16, 23, 78, 292
determiner, 26, 79, 135, 140, 141, 145, 148, 151
direct object, 71
directive, 2, 237
discharge, 242, 245, 247
discourse, 216, 282
distributional criterion, 25
ditransitive construction, 22, 92
do-so test, 74
Do-support, 189
double object construction, 92

empirical linguistics, 6
empty element, 242, 252
empty operator, 292
endocentricity, 76–78, 83
English Declarative Sentence Construction, 72, 103
entrenched, 16, 201
exclamative, 205, 237
expletive, 118, 166, 169, 182, 197, 292
expressivity, 4
external syntax, 70
extraction, 250
extraposition, 290, 291, 298, 305, 310

feature, 83–85
  name, 86
  percolation, 239, 241, 247
  sharing, 91, 101
  specification, 37
  structure, 85–88
  system, 84
  unification, 87
feminine, 143
filler, 238, 241, 242, 248, 266
filler-gap, 287, 288
fixed expressions, 45, 46
floated quantifier, 187, 194
form, 25, 237
fragment, 32, 36
free relative, 308
Free-Relative Clause Construction, 309, 336
function, 25, 237
function word, 29, 122, 208

gapping, 47, 186
generative
  grammar, 5, 9, 10, 15, 23, 54, 61, 96, 110, 132, 186, 212
  syntax, 186
gerundive, 71, 298
Government and Binding, 10
grammatical
  function, 53, 60, 70, 216, 217, 219
grammaticality, 1

head, 27, 71, 72
Head-Complement Construction, 81, 101, 109, 115, 117, 225, 229
headedness, 71, 76, 84
Head-Extra Construction, 301, 302
head feature, 101, 113
Head Feature Principle (HFP), 101, 109, 110, 114, 117, 276
Head-Filler Construction, 109, 244, 247, 252, 256, 260, 261
Head-Lex Construction, 113, 201, 229
Head-Modifier Construction, 81, 82, 101, 109
Head-Only Construction, 136, 276
Head-Rel Mod Construction, 270, 273
Head-Specifier Construction, 81, 84, 101, 109, 128, 141, 224
hierarchical structure, 43, 81
HPSG (Head-driven Phrase Structure Grammar), 85
hypothesis, 6, 7

idiom, 12, 13, 20–23, 46, 167, 168, 181
idiomaticity, 22
imperative, 205, 237
indirect question, 253, 257, 258, 260, 286
Indirect Wh-question Constraint (IWC), 286
infinitival
  CP, 128
  marker, 117
  S, 124
  VFORM, 103
  VP, 124, 164, 167, 169, 174, 183, 258
  wh-relative, 277
information
  argument, 87
  phonological, 87
  semantic, 87
  syntactic, 87
innate, 2, 10, 16
Innateness Hypothesis, 10
intermediate category, 78, 81, 98, 240
intermediate phrases, 79, 96
internal syntax, 70
interrogative, 134, 225, 237
intransitive construction, 91
intuition, 6, 17, 31, 32, 57, 167
inversion, 55, 56, 68, 187, 194, 197, 214, 238
islands, 286
iterability, 73

Kleene Star Operator, 30

language faculty, 10
language specific, 81
Left-Branch Constraint (LBC), 286
lexeme, 24, 28, 49, 99–102, 111, 199, 219
lexical
  category, 29
  head, 108
  idiosyncrasy, 151
lexicon, 30, 66, 84
LFG, 85
linking construction, 91
location, 11, 24, 56, 60, 64, 85, 89, 90, 94, 104, 304
locative adjunct, 266

manner, 60, 72
masculine, 143
maximal phrase, 72
maximal projection, 110
meaning preservation, 168, 181
minimal phrase, 72
mismatch, 178
modifier, 59, 72, 73, 76, 106, 158, 266
  postnominal, 269
morphological criterion, 25
morphological form, 24
Move-α, 10
movement, 239, 299
movement paradox, 240
multiple gap, 297
multi-word, 16, 20, 22, 45, 113

nativist view, 10
N-bar, 79–81
negation, 187, 197, 199, 200, 208
  constituent, 199
  sentential, 200
Negative Auxiliary Construction, 202, 211, 333
neutral, 143
NICE properties, 187
nominal, 125, 126, 300
nominative, 221
nonfinite, 40, 72, 101, 119, 122, 171, 193, 199–201, 207, 237, 270
nonhead daughter, 271
nonlocal, 244
  dependency, 290
  feature, 247, 248, 255
  position, 243
Nonlocal Inheritance Principle (NIP), 247, 248, 272, 276
nontransformational, 172, 181, 221
noun
  collective, 149
  common, 134, 135, 140
  count, 6, 8, 9, 135
  countable, 134
  mass, 6, 8, 155
  measure, 157
  noncount, 134
  partitive, 150
  pronoun, 134, 139
  proper, 134, 135, 140, 161

obligatory, 71
ontological issue, 186

particle, 28, 33, 228
partitive, 150, 156
passive, 56
  get-passive, 229
  prepositional, 226
Passive Construction, 225
passivization, 46, 226
past, 25, 27, 102, 189, 195
periphery, 12–14
personal pronoun, 144
phrasal
  category, 31
plural, 6, 16, 25, 55, 84, 135, 140, 145, 148, 150
position
  of adverb, 194
possessive, 80, 139
postcopular, 236
pragmatics, 221
predicate, 36, 54, 58, 59, 78, 93, 158, 166, 168, 291
predication, 305, 307
preposition, 25, 33, 116, 131, 152, 153, 226, 228
prepositional
  object, 266
  object construction, 93
  verb, 226, 228
Prepositional Passive Construction, 228
prescriptive, 5
Present Inflectional Construction, 100, 333
preterminal, 30
principle of compositionality, 13
Principles and Parameters, 10
proform, 33
projection, 71, 77
promoted, 217
pronoun, 7, 33, 34, 56, 118, 152, 221
proposition, 180, 216
PS rules, 34, 37, 42, 44, 53, 76, 77, 81, 83, 188

quantificational, 151, 152
quantified NP, 283
question, 237

raising properties, 218
reanalysis, 227
reason, 60
recursive application, 42
redundancy, 77, 78
reflexive, 13, 139, 230
relative
  pronoun, 266, 274
relative clause
  bare, 272, 273, 277, 278
  infinitival, 266, 276
  nonrestrictive, 279, 280
  reduced, 267
  restrictive, 279, 280
rule-governed, 3, 6

SAI Construction, 247
SBCG (Sign-Based Construction Grammar), 16
selectional restriction, 108, 167
semantic
  constancy, 75
  constraint, 235
  enrichment, 22
  function, 253
  restriction, 75
  role, 54, 165, 179, 219, 291
semantic role
  agent, 54, 63, 65, 66, 89, 178, 181
  benefactive, 57, 64
  experiencer, 64, 66, 181, 182
  goal, 57, 64, 93
  instrument, 53, 64
  location, 64
  patient, 53, 54, 63, 65, 178
  recipient, 57
  source, 64
  theme, 63–65, 89, 93
semantics, 2, 3, 11, 13, 22, 24, 30, 41, 88, 95, 99, 112, 145, 153, 168, 221
semi-fixed expressions, 46
Sentential Subject Constraint (SSC), 286
signified, 19
signifier, 19
specificational, 307
speech acts, 237
stand-alone test, 32
statement, 237
structural
  change, 219
  description, 219
  difference, 75, 282
  position, 26
structure sharing, 86, 145
subcategorization, 77, 78, 166, 167, 174, 178, 195, 217, 256, 293
SUBJ (subject), 192
subject-auxiliary inversion, 56, 187
Subject-Predicate Construction, 111
substitution, 33, 83
subsumption, 86, 87
surface structure, 10, 164, 165, 169, 170, 172, 291, 298
syntactic
  category, 53, 70
  function, 24, 25

tag question, 55, 187, 204, 209
temporal adjunct, 266
tense, 25, 27, 36, 101, 103, 188, 189, 191, 221
ternary structure, 227
topicalization, 3, 239
tough, 290
trace, 242, 293, 298
transformation, 169, 205, 219–221, 298
transformational analysis, 165, 169, 172, 188–190, 205, 220, 233, 289, 292, 298, 314
transformational grammar, 9, 10, 164
transitive construction, 92

unbounded, 4, 239, 266
underlying structure, 10, 299
underspecification, 124, 142
universal, 81
Universal Grammar (UG), 10

Valence Principle (VALP), 108, 110, 224, 276
verb
  ditransitive, 59, 77
  equi, 164
  intransitive, 65, 77, 291
  linking, 91, 93
  transitive, 77, 92, 126, 221, 222, 226
verbal, 125, 300
verb-particle constructions, 47, 52
voice
  active, 168, 216, 217, 233
  passive, 168, 216–219, 222, 226
  passivization, 169
VP
  finite, 72
  infinitival, 62, 167
  nonfinite, 199, 200
VP ellipsis, 187, 197, 198, 201, 203, 209
VP Ellipsis Construction, 209

wh-question, 32, 250, 267, 277, 285
wh-relative pronoun, 272
word order, 2, 3, 11, 76, 188
WXDY Construction, 14

X rules, 76, 80, 81, 83, 84, 106, 136
X-bar theory, 10, 81, 96, 105, 131

yes-no question, 56
