VANDEN WYNGAERD, Guido; DE CLERCQ, Karen; CAHA, Pavel. Late Insertion and
Root Suppletion. ReVEL, edição especial, v. 19, n. 18, 2021. [www.revel.inf.br]
LATE INSERTION AND ROOT SUPPLETION1
INSERÇÃO TARDIA E SUPLEÇÃO DE RAIZ
Guido Vanden Wyngaerd2
Karen De Clercq3
Pavel Caha4
[email protected]
karen.declercq@uparis.fr
[email protected]
RESUMO: Este artigo propõe uma abordagem de inserção tardia Nanossintática para a supleção de raízes. Nós mostramos que essa teoria nos permite explicar a supleção de raízes dentro de uma teoria da gramática estritamente modular, que não faz nenhuma diferenciação sintática entre raízes distintas. Como ponto de partida, focamos primeiramente nas dificuldades arquitetônicas que surgem em uma teoria modular das raízes na abordagem da Morfologia Distribuída. Em seguida, demonstramos como a Nanossintaxe contorna esses problemas e tratamos de duas questões empíricas potenciais para o tratamento Nanossintático (exponência múltipla e localidade), mostrando como, na verdade, elas fornecem suporte para a abordagem proposta.
PALAVRAS-CHAVE: Inserção tardia; Nanossintaxe; Morfologia Distribuída.
ABSTRACT: This article proposes a Nanosyntactic Late-Insertion approach to root suppletion. We
show that this theory allows us to account for root suppletion within a strictly modular theory of grammar,
which makes no syntactic distinction between different roots. As a starting point, we first focus on the
architectural difficulties that arise for a modular theory of roots in the Distributed Morphology approach.
We then show how Nanosyntax circumvents these problems, and address two potential empirical issues
for the Nanosyntactic treatment (multiple exponence and locality), showing that they in fact provide
support for the approach proposed.
KEYWORDS: Late insertion; Suppletion; Nanosyntax; Distributed morphology.
1
We thank Jaehoon Choi and Hyunjung Lee for their help with Korean. We have also benefited from the
helpful comments provided by two anonymous ReVel reviewers on a previous draft of this paper. We
also thank the guest editors Thayse Letícia Ferreira, Valdilena Rammé and Teresa Cristina Wachowicz
for all their editorial work and for translating our abstract to Brazilian Portuguese. All errors are our
own.
2
PhD in linguistics. Professor at KU Leuven, Belgium.
3
PhD in linguistics. Professor at LLF/CNRS, Université de Paris, France.
4
PhD in linguistics. Professor at Masaryk University, Brno, Czechia.
INTRODUCTION
There are several good reasons to adopt a syntax-based Late Insertion model as an approach to morphology (see Halle & Marantz, 1993; Marantz, 1994; and Embick & Noyer, 2007 for a discussion). We start by briefly discussing what we take to be the two most important ones, namely universality and modularity.
Universality refers to the idea that the atoms of syntactic trees (its terminals) are not language-specific, but are drawn from a universally available set of features. In Late-Insertion models (which place the lexicon after the syntactic derivation), the syntax no longer has to deal with the arbitrary and language-specific objects that lexical items are. Instead, the atoms of syntax correspond to a universal set of features that refer to syntactically relevant semantic distinctions, like ANIMATE, COUNTABLE, PLURAL, etc. Language-particular elements are introduced late through lexical items, understood as unpredictable and language-specific linkings between the universal features and a language-particular phonology, and, in some cases, also (encyclopaedic/conceptual) meaning that is not syntactically relevant. In Late-Insertion models, these language-particular aspects of individual languages become available only after syntax, in the process of the ‘externalisation’ of syntax through lexical insertion.
The second major advantage of Late Insertion is modularity. By this we mean the following:

(1)  Strong Modularity Thesis (SMT)
     Syntactic representations only contain entities that are relevant for the application of syntactic principles and operations.

This excludes from the syntactic module everything that is not syntactically relevant, specifically, the phonological, conceptual and encyclopaedic information that is associated with lexical items. To take a concrete example, the difference between dog (singular) and dogs (plural) is syntactically relevant, as seen in various kinds of agreement processes triggered by nouns, but the difference between cat and dog is purely conceptual, and is not relevant to any syntactic operation. Late Insertion offers a way to separate the syntactically relevant from the syntactically irrelevant. It provides a type of architecture in which syntax is in principle unable to refer to syntactically irrelevant properties of lexical items like their phonology or conceptual information. On the other hand, if the atoms (terminals) of syntactic derivations were the traditional lexical items, then their phonology (and conceptual meaning) would necessarily be present in syntax as well. This clashes with the insight that neither their phonology nor their conceptual information plays any role in syntax.
Within Distributed Morphology (DM), this view has been uncontroversial and universally adopted for functional lexical items like the comparative -er etc. It has been a common stance to extend Late Insertion also to open-class lexical items, so-called roots. As Marantz (1996, p. 16) puts it, “[n]o phonological properties of roots interact with the principles or computations of syntax, nor do idiosyncratic Encyclopaedic facts about roots show any such interactions” (see also the Principle of Phonology-Free Syntax of Zwicky, 1969; Zwicky & Pullum, 1986; and Miller, Pullum and Zwicky, 1997). If Late Insertion is applied also to lexical items like cat and dog, we derive the fact that syntax cannot distinguish between them either in terms of their phonology or in terms of their conceptual and/or encyclopaedic meaning.
The syntactic irrelevance of the distinction between roots like cat and dog has been taken to its logical conclusion in models such as Halle and Marantz (1993), Marantz (1996) and De Belder and Van Craenenbroeck (2015), where syntax contains just a single root symbol √, which is an object devoid of syntactic, phonological or semantic properties. √ works as a pure placeholder for the insertion of the morphological root. Concrete root morphemes are inserted late into such a non-discriminate √ terminal based on a free choice (Harley & Noyer, 1999).
In this context, the issue of suppletion is relevant. Since suppletive roots have, by
definition, several phonologically unrelated forms depending on the context, it must be
the case that roots enter the derivation without any phonology, and acquire it only once
the appropriate context has been determined. As Haugen and Siddiqi (2013), Harley
(2014) and Arregi and Nevins (2014) have argued, Late Insertion of roots also allows
the theory to deal effectively with root suppletion, while simultaneously maintaining
the two advantages alluded to above.
However, within a classical DM architecture (Halle & Marantz, 1993; Harley & Noyer,
1999), this fully modular approach to roots meets two challenges. One is related to
competition among roots. The other concerns the proper pairing of conceptual and
encyclopaedic information with phonological information (as we discuss in section 1).
Because of these two issues, some recent approaches adopt the view that individual
roots are, after all, differentiated in syntax. For instance, Harley (2014) (following Pfau,
2000, 2009) uses arbitrary numerical indexes to this effect. While this move solves the
two issues noted above, it no longer satisfies the Strong Modularity Thesis as formulated
in (1), i.e. it fails to deliver a fully modular architecture, for reasons that we discuss in
the body of the paper.
In this context, the main goal of this article is to show that it is possible to account for
root suppletion in a universalist, fully modular approach to roots, i.e., without the need
to introduce arbitrary diacritics on roots in syntax. In developing a theory along these
lines, we will be drawing on the Nanosyntax theory of spellout (Starke, 2009, 2018).
The organisation of the paper is as follows. In Section 1, we discuss some of the main findings emerging from the treatment of root suppletion in DM. We also highlight here two challenges that arise for these approaches. We introduce the architecture of Nanosyntax (Starke, 2009, 2018) in sections 2 and 3. We show that this theory allows for an approach to root suppletion where neither of the two problems mentioned above arises. By demonstrating this, we show that Nanosyntax allows for an approach to root suppletion where syntactic terminals are both modular and universal. In sections 4 and 5, we defend the approach against two kinds of possible objections. The first objection is that phrasal lexicalisation, on which the theory relies, is not the right tool for suppletion, because it does not allow for multiple exponence. Section 4 argues that it does. The second objection is that it cannot handle non-local conditioning of suppletive roots. Section 5 argues that this is, in fact, a desirable result, despite apparent challenges.
1 ROOTS AND SUPPLETION IN DM
In the current section, we discuss two kinds of approaches to roots in DM. In Section 1.1,
we discuss an approach that is both universalist and modular. However, this approach
faces two challenges: one related to suppletion and another one related to the overall
architecture of grammar in DM. In Section 1.2, we discuss an approach that addresses
these challenges by differentiating among roots in syntax. Our point is that while root
differentiation is successful in resolving the issues, it does not deliver a fully modular
architecture. These considerations form the background to our own proposal that we
put forth in Section 2.
1.1 TWO CHALLENGES FOR SINGLE-ROOT APPROACHES
This section highlights two challenges facing a fully modular, universalist theory of roots. To make a number of points about root suppletion and beyond, we will often be using the positive and the comparative degree as an example (for a number of reasons, one of them being the fact that this is a well-researched topic, thanks to the work by Bobaljik, 2012). The structures we will be initially assuming are as depicted in (2). More specifically, we will be assuming that the positive degree (in (2a)) is contained in the comparative degree (2b), which adds the CMPR head on top (Bobaljik, 2012). We will decompose the positive degree into a √ node and a little a node, to maintain easy comparison with existing proposals in the DM literature (but see Vanden Wyngaerd et al., 2020 and De Clercq et al., to appear for a different, root-free type of approach to the bottom of the functional hierarchy).
(2)  a. positive:      [aP a √ ]
     b. comparative:   [CMPRP CMPR [aP a √ ]]
We want to make two points about (2), which relate to the issues of universality and modularity that we discussed in the previous section. The first point is modularity. The trees in (2) show a syntactic representation that is in agreement with the Strong Modularity Thesis of the introduction. To see this, consider the fact that it is impossible to tell from the structures (2a,b) alone what phonology or meaning they represent (as opposed to the roots nice or kind). This is a good thing, because syntax cannot distinguish these two roots either (in a similar way in which it cannot differentiate between the roots cat and dog). The inability to capture such irrelevant differences in syntactic representations is a failure by design, because this is what modularity is all about.
Another way to show that the syntax we assume for the positive and the comparative degrees respects modularity is to imagine that we were told to draw the syntactic structures for the positive degree of the adjectives kind and nice. With just a single √ node at our disposal, both structures would look exactly the same, namely as (2a). This shows that the trees in (2) respect the Strong Modularity Thesis, where information that is irrelevant to syntax is not represented in syntax.
Note that things would be different if we allowed ourselves to use multiple root symbols, one for each root. In such a case we would draw different structures for nice and kind, one with the symbol √NICE, another one with the symbol √KIND. Our point is that when we are able to tell from the syntactic trees which root they represent – without that distinction being relevant for syntax – the Strong Modularity Thesis is violated. This is the case no matter what specific information we use to differentiate √NICE from √KIND: it could be an arbitrary index, a concept, a phonological string or a combination thereof; the problem remains the same.
The second point about the structures in (2) is that they are universal. Specifically, if all languages have the same ingredients for the positive and the comparative (as proposed in Bobaljik, 2012), it is impossible to say whether the structures in (2) depict the structure of English or Latin roots. This is a good thing too: this is what universality is all about. In contrast, if the trees had symbols such as √NICE or √PULCHER (Latin for ‘beautiful’), we would be able to tell which language we are dealing with.
Despite its good features, some technical issues arise for this view when we try to
implement it in the DM model as given in Harley and Noyer (1999). We present their
model (with slightly updated labels) in Figure 1.
[Figure 1: A version of Distributed Morphology (based on Harley & Noyer, 1999). Features (List 1) feed Syntax; the derivation then splits into a PF branch (Syntax → MF → Sound, with exponents supplied by the Vocabulary, List 2) and an LF branch (Syntax → LF → Meaning, with conceptual content supplied by the Encyclopaedia, List 3); a dashed arrow additionally links Sound and Meaning directly.]
Figure 1 shows that the syntactic computation begins from features (or feature bundles) drawn from a list that is referred to as List A in Harley and Noyer (1999) and as List 1 in Arregi and Nevins (2014). These features enter the narrow syntactic derivation, where they are assembled into syntactic trees. For reasons that are not directly relevant for the current concerns, DM then splits the derivation of the sentence into two different branches: we have the so-called PF branch (leading from syntax to sound) and the LF branch (leading from syntax to meaning). On the PF branch, various purely morphological operations take place. These may displace, add or remove nodes and features, enriching or impoverishing the structure provided by the narrow syntax. The placement of these operations on the PF branch is motivated by the fact that they do not affect the meaning. The output of these operations is a structure that we refer to as the morphological form (MF), which at this point still consists of abstract features. On the basis of the MF, the sound representation is constructed by consulting the list of exponents (Vocabulary Items, List 2). The two arrows going into the sound representation thus indicate the fact that the sound is determined jointly by the two components, namely by the features that are found at MF, and the list of exponents selected to match the features at MF.
The meaning representation is formed in an analogous fashion. First we construct the so-called Logical Form, LF. Harley and Noyer (1999) are not explicit about how the LF is constructed, but we assume that, as in standard minimalism, LF in their conception arises through the application of covert movements such as quantifier raising and the like. In any event, LF is still an abstract syntax-internal representation. All such syntax-internal representations (composed of abstract features and their structures) are marked by the (violet) shading in Figure 1.
As a part of mapping the LF to meaning, the Encyclopaedia is consulted and contributes encyclopaedic and/or conceptual aspects of meaning in a fashion analogous to the list of exponents in the Vocabulary.
With the basics in place, let us turn to the first challenge. The observation is that the separation of the phonological information (List 2) from the encyclopaedic/conceptual information (List 3) has consequences for the representation of roots in syntax and hence also for modularity. To see that, recall the fact that the trees in (2) contain just a single root symbol √. In Harley and Noyer (1999), all root morphemes such as cat and dog reside in the Vocabulary, and they are inserted into such a non-discriminate √ terminal based on a free choice. Given this setup, the question arises how LF knows what kind of exponent has been inserted at the PF branch of the derivation, so that if kæt is inserted from the Vocabulary at the PF branch, the LF learns about this and the Encyclopaedia contributes the right encyclopaedic/conceptual information at the meaning side. What must be prevented is that the non-discriminate √ symbol arrives at LF and the Encyclopaedia has the power to independently select a particular lexical item, so that, in effect, /kæt/ means DOG.
In order to avoid this, Harley and Noyer (1999) suggest that a direct communication line (depicted by the dashed arrow) is established between the sound and the meaning, which makes sure that the Encyclopaedia knows what information it should provide. The architectural consequence is that sound and meaning are linked not only via the syntactic derivation (as is standardly assumed), but also outside of it, for the sole purpose of root insertion. The direct communication line between sound and meaning is the first complication that arises in DM when a fully modular approach to roots is adopted.
As we shall see in Section 1.2, there are versions of DM that do not need such a direct
communication line between sound and meaning. However, we will also see that the
price one has to pay for eliminating it is that roots must be differentiated within syntax,
which is something that we avoid in our own proposal in Section 2.
Let us now turn to the second challenge that a theory based on non-discriminate roots faces. It comes from root suppletion and the related notion of root competition. In the DM framework, root suppletion, as in the pair good-better, is accounted for by contextual specification of Vocabulary Items (VIs), which insert phonology under the terminals, in this case the √ node. We need two distinct Vocabulary Items for the suppletive pair good–bett; they are as in (3), which are slightly adapted from Bobaljik (2012) to fit the trees in (2) above:

(3)  a. √ ⇔ bett / __ ] a ] CMPR ]
     b. √ ⇔ good
In the positive degree, these VIs are not in competition with each other, as there is no CMPR head there, so that only (3b) meets the structural description, and good will be inserted. In the comparative, given in (2b), however, a competition between (3a) and (3b) will arise, since the structure generated in syntax meets the structural description of both rules. The outcome of that competition is determined by the Elsewhere Principle (Kiparsky, 1973; Halle, 1997, p. 428), which states that a more specific rule takes precedence over a more general one. Since (3a) is more specific than (3b), it wins the competition in the comparative. As a result, bett is inserted in the comparative. The CMPR head is spelled out as -er, yielding the form better, little a being silent.5
Now the VI in (3b) as currently formulated is just a fragment of the English Vocabulary. If left on its own, it will insert good under any terminal √ node. One way of extending our fragment will therefore be to add more roots:

(4)  √ ⇔ good, nice, happy, small, intelligent, bad, …

What this extended rule achieves is that there is a free choice of insertion of a variety of roots in the positive degree under √. But now a problem arises with respect to the ‘suppletive’ rule (3a): since it is more specific than (3b), it is also more specific than the extended rule (4) (which is in relevant respects like (3b)). The result is that bett will be inserted under √ in any comparative structure (outcompeting not only good, but also other roots), obviously a wrong result. This problem in the analysis of root suppletion was pointed out by Marantz (1996), and it is a consequence of the format of the rule (3a): it basically says that any √ has the form bett in the context of a comparative.
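To see the competition problem in a more mechanical form, the following minimal Python sketch models terminal insertion under the Elsewhere Principle. The flat feature-list representation of the context and the particular entries are our own illustrative assumptions (not an implementation of any published DM fragment); the point is only that, given rules like (3a) and (4), the most specific applicable VI wins everywhere.

# Toy Vocabulary Items: an exponent plus a (possibly empty) contextual restriction.
# (3a) is modelled as requiring 'a' and 'CMPR' in the context; (4) as context-free.
VIS = [
    {"exponent": "bett",  "context": ["a", "CMPR"]},
    {"exponent": "good",  "context": []},
    {"exponent": "nice",  "context": []},
    {"exponent": "small", "context": []},
]

def applicable(vi, context):
    """A VI is applicable if all of its contextual features are present."""
    return all(f in context for f in vi["context"])

def candidates(context):
    """All VIs whose structural description is met in this context."""
    return [vi["exponent"] for vi in VIS if applicable(vi, context)]

def insert(context):
    """Elsewhere Principle: the most specific applicable VI takes precedence."""
    return max((vi for vi in VIS if applicable(vi, context)),
               key=lambda vi: len(vi["context"]))["exponent"]

print(candidates(["a"]))         # positive degree: free choice among good, nice, small
print(insert(["a", "CMPR"]))     # comparative: 'bett' wins for every root -- the problem

The last line shows the unwanted outcome: because (3a) is the most specific rule, bett is selected in every comparative, no matter which root was chosen in the positive.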
This issue is still a matter of current research in DM (for an overview of possible options, see Haugen and Siddiqi, 2013, p. 514). The earliest solution, suggested by Marantz (1997), held that root suppletion does not exist, except in the functional vocabulary, where the competition problem can be easily solved. In Marantz’ idea, the Vocabulary Item good would spell out a syntactic terminal which is crucially not a contentless √ node, but a node with at least one functional feature (represented as an evaluative feature with a ‘positive’ value in (5)).

(5)  a. [EVAL:POSITIVE] ⇔ bett / __ ] a ] CMPR ]
     b. [EVAL:POSITIVE] ⇔ good
Once (5) is adopted, non-suppletive adjectives can be inserted by the free-choice rule (4), since (4) would no longer contain good as a choice (given that good now has the entry (5b)). As a consequence, the VIs in (5) only compete with each other, not with (4), and one thus gets rid of the problem where bett would compete with non-suppletive roots like nice.
However, empirical evidence against this idea has been presented by Harley (2014),
5
Arregi and Nevins (2014) discuss interesting cases where the regular comparative form (e.g., badder) is not always blocked by the suppletive form (worse). Specifically, while bad most often has the comparative worse, there are also senses of bad that give rise to the comparative badder. This could suggest that there is no competition between worse and badder, since both are in fact attested. However, Arregi and Nevins (2014) develop an approach where the regular form badder arises because of a syntactic difference between the structures underlying worse and badder. As a result, badder surfaces only where worse is unavailable for locality reasons, while in the general case worse does indeed block badder.
who argues that suppletive verbs in Hiaki have rich lexical meanings, for which an analysis in terms of functional heads is unlikely (cf. Haugen & Siddiqi, 2013). To deal with this issue, Harley (2014) has argued that √s are individuated in the syntax, i.e., prior to vocabulary insertion, by means of a numerical index. We shall describe this approach in the next section, noting that while it successfully addresses the challenges identified, it does so at the cost of proposing different syntactic representations for different roots.6
1.2 ROOT DIFFERENTIATION
According to Harley (2014), the presyntactic lexicon contains an infinity of different, individuated √s (see also Pfau, 2000, 2009). Free choice of a root is then not exercised at the point of insertion (as in Marantz’ approach), but at an earlier point, namely in the selection from List 1, i.e., when the elements that will serve as the input to the syntactic computation are selected. At the point of insertion, the competition is consequently restricted to the VIs that can spell out the particular root symbol selected, e.g., √153.
The Vocabulary Items that we shall need under Harley’s proposal would then be as in (6):
(6)  a. √153 ⇔ bett / __ ] a ] CMPR ]
     b. √153 ⇔ good
The idea here is that both good and bett are two different forms capable of spelling out the symbol √153. What this proposal achieves is that bett is no longer a comparative of just any √ (as it has been in (3)), but a comparative of one particular √ with a unique index. Once this is the case, the problem with root competition disappears. This is because bett will not be a candidate for any other root than √153, and so it will never outcompete nice, which spells out (say) √154.
This proposal has an additional benefit in that direct communication between the PF branch and the LF branch of the grammar (i.e. the dotted line in Figure 1) is no longer needed, since the index will be present from the start of the derivation, and be carried through to both MF and LF.
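The effect of indexation can be made concrete by extending the sketch above: if Vocabulary Items are keyed to an arbitrary index chosen from List 1, competition never crosses root boundaries. The index values and data structures below are, again, purely illustrative assumptions.

# Toy version of the indexed-root entries in (6): each VI realises one specific index.
INDEXED_VIS = [
    {"index": 153, "exponent": "bett", "context": ["a", "CMPR"]},   # cf. (6a)
    {"index": 153, "exponent": "good", "context": []},              # cf. (6b)
    {"index": 154, "exponent": "nice", "context": []},
]

def insert_indexed(index, context):
    """Only VIs for the pre-selected index compete; the Elsewhere Principle then decides."""
    competitors = [vi for vi in INDEXED_VIS
                   if vi["index"] == index
                   and all(f in context for f in vi["context"])]
    return max(competitors, key=lambda vi: len(vi["context"]))["exponent"]

print(insert_indexed(153, ["a", "CMPR"]))   # 'bett': suppletion, but only for root 153
print(insert_indexed(154, ["a", "CMPR"]))   # 'nice': bett never competes with root 154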
However, the proposal achieves these results as a consequence of adopting indexed roots, so cat and dog have a different index. In one sense, this proposal is elegant in that syntax still does not operate over roots that are specified for language-particular phonology and/or encyclopaedic meaning: it still has the potential of being universalist (although it would appear to predict that all languages have the same number of roots,
6
Other solutions are thinkable, but have been shown to be less viable. For example, one could suggest that VIs for √ nodes do not compete with each other at all (something which is suggested by the free-choice rule in (4)), but such a generalised lack of competition at the √ node would lead to the problem that both gooder and better are generated (Harley & Noyer, 1998).
a prediction that is not obviously true). In other words, the symbol √153 is not specific to English, but it is found in all languages (which will, of course, associate a different phonology and/or encyclopaedic content with it). At the same time, the differentiation via numerical indexes is a case that violates the Strong Modularity Thesis, as explained in the discussion below (3). By containing symbols such as √153, syntactic representations contain information that syntax cannot process. The index passes intact through the syntactic computation with a single purpose, namely to be used by Lists 2 and 3. So even though the numerical index is more abstract than a root with a concrete phonology, the problem is still the same: we are representing a distinction in the syntax that the syntactic computation itself cannot make use of, in violation of the Strong Modularity Thesis.
To summarise the content of this whole section, let us repeat what is going to be important as we move to our own proposal. The most important observation is that in a system with non-discriminate roots, root suppletion leads to a problem with competition: once bett outcompetes good, it also outcompetes nice, because good and nice have exactly the same lexical entry (spelling out √).
To solve the competition problem, we must be able to uniquely identify the lexical item that undergoes suppletion, and limit the competition to those VIs which stand in a suppletive relation to this particular item. Marantz (who works with just a single √) makes suppletive items unique by placing them in the class of ‘functional’ heads, which allows one to identify the unique grammatical feature whose realisation shows suppletion. Harley (2014) suggests that this view is empirically problematic and proposes that √s are individuated by an index.
Our main objection against individuating roots in syntax is that by doing so, one in fact abandons strict modularity and allows for a theory where cat and dog have different syntax. This is because cat and dog have a different index, and syntax could be sensitive to this property (since it is present inside it). On the other hand, if we stipulate that syntax is not sensitive to such indexes, then modularity issues arise.
An additional issue is that if √s really lack any constant substantive property, i.e. something more contentful than a mere index, one needs to seriously wonder why they should be differentiated in narrow syntax at all.7 The indexation of roots looks like a technical solution to a technical problem, rather than an advance in the understanding of the nature of roots.
In the following sections, we will argue that there is a way to handle root suppletion
without the need to differentiate roots in syntax by an arbitrary index, thereby adhering
to strict modularity. In order to avoid the competition problem, we too will have to find
a way to identify the unique root which undergoes suppletion. The device we are going
7
In fact, one may wonder why they should be present in the syntax at all to begin with. Since this question
is orthogonal to our concerns, we shall leave it aside here.
to use to this effect is called a pointer, and we introduce it in the next section.
2 CYCLICITY AND PHRASAL LEXICALISATION

In this section, we describe the main features of an account that allows for root suppletion with just a single √ in syntax (or without any √ at all, if √s are to be eliminated, as in Ramchand, 2008 or Vanden Wyngaerd et al., 2020). What makes such a theory possible is cyclic phrasal lexicalisation, where suppletive items stand in a containment relationship.
In order to present this idea in an accessible way, we will begin with the suppletive pair bad—worse, which has been treated by non-terminal lexicalisation also in Bobaljik (2012). We shall then return to good—better in the following section. The relevant lexical entries (with the required containment relation) are given in (7). Regardless of the treatment of bad (to which we return), the important point here is that worse spells out a non-terminal node properly containing the structure that bad spells out.
(7)  a. [aP a √ ] ⇔ bad
     b. [CMPRP CMPR [aP a √ ]] ⇔ worse
Independent support for (7b) comes from the fact that worse lacks the regular CMPR marker -er. This is accounted for if its lexical entry pronounces the terminal where -er usually gets inserted, as is the case in (7b). Similarly, the reason why bad spells out a full phrase is that it shows no overt a, differing from adjectives like risk-y, crapp-y, tin-y etc.
We will get to the technical details of non-terminal insertion shortly, but the main intuition is this: when syntax builds just the aP (corresponding to the positive degree), only bad will be inserted, because its lexical entry provides an exact match for the syntactic tree. The lexical item for worse, in contrast, is not an exact match: it is too big. ‘Too big’ may be understood either in an absolute sense (it is not a candidate for insertion at all), or in a relative sense (it is a candidate, but it is too big relative to bad, with which it is in competition). When syntax builds CMPRP, only worse is an exact match and will be inserted, this time because bad is too small (either in the absolute or in the relative sense).
There are several ways of formalising the phenomenon that an exact match gets inserted, and not a lexical item which is either too big or too small. For instance, Bobaljik (2012) relies on the Subset Principle, augmented by Radkevich’s (2010) Vocabulary Insertion Principle (VIP), which states that the phonological exponent of a vocabulary item is inserted at the minimal node dominating all the features for which the exponent is specified. On this account, the Subset Principle makes sure that worse is too big for the positive, and the VIP makes sure that bad is too small for CMPRP. Another available option, which we develop and explain later, adopts the Superset Principle (Starke, 2009). For now, the main point is that no matter how the ‘too big/too small’ difference gets encoded, we initially run up against the same conundrum as the terminal-based proposal in section 1. In order to see that, let us once again turn to the fact that there are a number of roots in free competition with bad:
(8)  [aP a √ ] ⇔ good, nice, kind, small, intelligent, bad, …
Again, the problem is that once syntax builds the CMPRP, all of these are going to be ‘too small’ compared to worse. The problem resides in the fact that the lexical entry in (7b) says that whenever the syntax combines the √ node with a and CMPR, worse will be an exact match for such a constituent. Other lexical entries (like nice) might be candidates for insertion as well, but since they are not an exact match for the comparative structure, worse will win, independently of how the competition is to be implemented.8
However, in the new setting based on phrasal lexicalisation, a new type of solution to this problem becomes available, if one more ingredient is added into the mix. The addition that is needed is that the lexicalisation process, which associates a particular phonology with a syntactic structure, proceeds bottom-up, as in Bobaljik (2000, 2002) and Embick (2010) or Starke (2009, 2018). We phrase this as (9), noting that (9) need not be seen as an axiom, but rather the consequence of two proposals, which are given in (10).

(9)  Bottom-up Lexicalisation
     If AP dominates BP, spell out BP before AP.

(10) a. Merge proceeds bottom up.
     b. Lexicalisation applies after every Merge step.
The bottom-up nature of lexicalisation, and the fact that it targets non-terminals, is what makes it possible to propose a single-√ syntax that can accommodate root suppletion. In order to see this, consider the fact that lexicalisation (as it proceeds to higher and higher nodes) must keep track of what it has done at lower nodes, so that it can ship this information to PF at some relevant point.9 In this type of architecture, the problem is solved if we require that the phrasal lexical item (7b) can apply at CMPRP only if the lower aP node has been lexicalised by bad. Equivalently, worse is inapplicable if (by free choice of root) we have lexicalised aP by a different lexical entry than bad.
8
Our discussion of this issue is indebted to Michal Starke (p.c.).
9
This could happen either at the very end of the derivation (as we currently assume), or such shipments can be sensitive to phases, see, e.g. Embick (2010), Merchant (2015) and Moskal and Smith (2016).
In order to encode this proposal, let us rewrite the lexical entry for worse as in (11), where instead of the aP node, we write bad. Following Starke (2014), we refer to this device as a pointer. It is an object that is present inside one lexical entry and refers to another lexical entry.

(11)  [CMPRP CMPR →bad ] ⇔ worse

The entry (11) reads as follows: lexicalise CMPRP as worse, if one daughter is the CMPR feature, and the other daughter (i.e., aP) has been lexicalised as bad at the previous cycle.10
The idea behind (11) presupposes that the process of lexicalisation has at least two
parts: matching (lexical search) and pronunciation (shipment to PF). The crucial point
is that the lexicalisation procedure may make multiple searches of the lexicon (leading
to the selection of various matching items) before the ultimate pronunciation. Crucially,
when a matching lexical entry is found for a given node (say aP), this does not mean
that this lexical entry is immediately shipped to PF for actual realisation. The match is
remembered, and it will eventually be sent to PF; but if later on, a lexical item matching
a higher node (say CMPRP) is found, then the first (lower) candidate is not sent to PF at
all: only the higher lexicalisation survives. In Nanosyntax, the replacement of a lower
match (bad) by a higher match (worse) is called ‘overriding.’
(12)
Cyclic override (Starke, 2009)
Each successful spellout overrides previous successful spellouts.
As said, overriding means that a matching item at a node XP (worse) prevents any item matching a node contained inside XP (bad) from being shipped to PF. Overriding is a general property of cyclic bottom-up lexicalisation.
Recall now that from the perspective of a single-√ theory, the problem with suppletive lexical items like (7b) was that they could override just any root. The pointer device introduced in (11) is here to restrict unlimited overriding: worse can only override bad. Caha, De Clercq, and Vanden Wyngaerd (2019) encode this by the so-called Faithfulness Restriction:
10
Pointers are used within Nanosyntax also beyond root suppletion. The reason that led Starke to introduce pointers (in unpublished work) is idioms like shoot the breeze. In this case, intuitively, we have a lexical entry that introduces a particular concept (CHAT) when the VP contains the relevant lexical items shoot, the and breeze. In other words, the idiom is to be stored with pointers to these lexical items as: CHAT ⇔ [ →shoot [ →the →breeze ]]. Pointers have also been used to model syncretism in multidimensional paradigms (Caha & Pantcheva, 2012; Vanden Wyngaerd, 2018; Blix, 2021), and to deal with issues in non-productive morphology (De Clercq and Vanden Wyngaerd, 2019).
(13)
Faithfulness Restriction (FR, preliminary)
A lexicalisation α may override an earlier lexicalisation β iff α contains a pointer
to β
To conclude, let us stress the crucial point, which is that we now have a way to account for root suppletion with just a single √ (or a single A, or, potentially, just functional heads all the way down). To achieve this, we have introduced a bottom-up phrasal lexicalisation procedure. In this kind of system, insertion at the √ node is free. But once the choice has been made, the Faithfulness Restriction limits the overriding of the initial choice only to lexical items whose lexical structure consists of a pointer to this initial choice. This way, we can restrict worse to be the comparative of bad using a pointer, rather than an arbitrary index on √ in the syntax.
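The following Python sketch illustrates the mechanics just described: bottom-up lexicalisation with free choice at the bottom, cyclic override, and the preliminary Faithfulness Restriction in (13). The exact-match requirement and the flat feature lists are simplifications of our own (standing in for whichever matching principle one adopts), and the toy lexicon is purely illustrative.

# Toy lexicon: 'bad' and 'nice' are aP-sized; 'worse' is CMPRP-sized with a pointer to 'bad'.
LEXICON = [
    {"phon": "bad",   "features": ["root", "a"],         "pointer": None},
    {"phon": "nice",  "features": ["root", "a"],         "pointer": None},
    {"phon": "worse", "features": ["root", "a", "CMPR"], "pointer": "bad"},
]

def matches(entry, constituent):
    """Exact match between an entry and the constituent built so far."""
    return set(entry["features"]) == set(constituent)

def may_override(new, old):
    """Preliminary FR (13): overriding requires a pointer to the overridee."""
    return old is None or new["pointer"] == old["phon"]

def derive(features, chosen_root):
    """Merge features bottom-up; after every Merge step, try to (re)lexicalise."""
    constituent, winner = [], None
    for f in features:
        constituent.append(f)
        for entry in LEXICON:
            if not matches(entry, constituent) or not may_override(entry, winner):
                continue
            if winner is None and entry["phon"] != chosen_root:
                continue                      # free choice of root at the first insertion
            winner = entry                    # cyclic override: keep only the newest match
    return winner["phon"] if winner else None

print(derive(["root", "a"], "bad"))           # 'bad'   -- the positive degree
print(derive(["root", "a", "CMPR"], "bad"))   # 'worse' -- overriding licensed by the pointer
print(derive(["root", "a", "CMPR"], "nice"))  # 'nice'  -- worse cannot override nice; CMPR
                                              #            is then realised separately (by -er)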
3 PHRASAL LEXICALISATION AND MULTIPLE EXPONENCE
From the perspective of a modular and universal syntax, the zero theory of √s is that there is only a single √ (or perhaps no √ at all, if the bottom of the functional sequence is simply a feature like all the others). In the previous section, we have argued that the problems posed by suppletion can be reconciled with such a modular syntax if cyclic phrasal lexicalisation is adopted. However, an objection that is sometimes raised against the principle of phrasal lexicalisation is that of multiple exponence or double marking, i.e. the phenomenon where suppletion in the root is accompanied by regular marking.11 This is, for example, the case in a form like better, which multiply expones the comparative: once in the root, and once in the suffix. Here our theory faces a conundrum: how can bett spell out CMPRP (as required by the phrasal lexicalisation theory), while at the same time leaving CMPR available for the insertion of -er? No such issue arises in a theory with terminal lexicalisation: the suppletive root is an allomorph which is inserted in the context of the CMPR head, and the CMPR head may itself be realised by a separate suffix.
In this section, we suggest a solution to this problem. The solution is based on the observation that in cases where suppletion co-occurs with overt marking, the overt marking tends to be ‘reduced’, often a substring of a different, non-reduced marker. To see an example of this, let us turn back to English. Here we have -er and more for the
11
One solution to this issue, suggested for instance in Haugen and Siddiqi (2016), would be to say that decomposing suppletive forms (like better) into two pieces is actually doubtful. Such an approach could be supported by the fact that in degree achievements like to better something, the ‘comparative’ -er (if it is one) is retained (unlike in, say, to cool something), which may suggest that -er could have actually been reanalysed as a part of the root. If that is so, the form better would be a non-decomposable comparative form in trivial conformity with the non-terminal-spellout hypothesis. We do not want to dismiss this approach in its entirety: there are clearly cases where suppletive forms are non-decomposable, and we think that these are suggestive of a solution in terms of non-terminal lexicalisation. However, we do not think that such a solution is universally applicable, for reasons that become clear as we proceed.
comparative, and -est and most for the superlative. Clearly, -er and -est are morphologically reduced compared to more/most, if only because they are affixes while more and most are free-standing items. Further, there are morphological and semantic reasons to think that more/most actually contain -er/-est as a proper part. Such a containment relation between the two comparative markers can be captured if we decompose the single CMPR node into two heads, C1 and C2, as shown in (14) (cf. Caha, De Clercq & Vanden Wyngaerd, 2019). Reduced comparative marking can now be analysed as expressing only C2, as in (14), while full marking spells out both C1 and C2, as in (15).
(14)  [C2P C2 [C1P C1 [aP a √ ]]]   (reduced marking: -er spells out C2)
(15)  [C2P C2 [C1P C1 [aP a √ ]]]   (full marking: more spells out C2 and C1 together)
We leave it open as to how exactly lexicalisation applies in the case of more, as the main focus is on its complement, i.e. that part of the structure that is lexicalised by the root (but see De Clercq & Vanden Wyngaerd, 2018 for discussion of this issue). We only note that phrasal lexicalisation requires C1 and C2 to form a constituent: this could be achieved by head movement (Matushansky, 2013), Local Dislocation (Embick, 2007) or by complex-Spec formation (Caha, De Clercq & Vanden Wyngaerd, 2019), as shown in (16). What is crucial is that this type of marking occurs on top of roots which spell out only the aP constituent, as shown by the constituent on the right-hand side in (16).
(16)  [ [ C2 C1 ] [aP a √ ] ]   (more ⇔ [C2 C1]; root′ ⇔ aP)
(17)  [ [C1P C1 [aP a √ ]] C2 ]   (root′′ ⇔ C1P; -er ⇔ C2)
In (17), we show that the reduced marker appears on top of roots which spell out C1P, leaving it again aside how the surface order is derived, as this would take us too far afield (see Caha, De Clercq & Vanden Wyngaerd, 2019 for a worked-out proposal). The crucial point here is the size of the constituent spelled out by the root and by the functional marker. In particular, given that the number of features is constant in (16) and (17), we observe a trade-off between the size of the root and the size of the comparative marker. Specifically, large roots spell out C1 and combine with reduced markers, while smaller (aP-sized) roots must combine with more. The difference between the two classes of roots can be easily encoded in the lexicon: some roots will be specified for C1, others will not.
With the background in place, let us now show how a form like better can be derived. As a starting point, consider the observation that suppletive adjectives like better only occur with the reduced markers (i.e., -er/-est) and never with the full markers (i.e., there is no case like *more/most bett), as observed by Bobaljik (2012). In a theory with a single √ using pointers, like the one we have sketched above, this observation follows. In particular, the tree in (16) (with full marking) is correctly predicted to be incompatible with suppletion. That is because the root in (16) pronounces a constituent (aP) that exactly corresponds to the positive. Under the theory with a single √ and pointers, suppletive roots must stand in a containment relation, one overriding the other. Therefore, the comparative root must spell out at least one extra feature compared to the positive, but such a feature is not available in (16), making it incompatible with suppletion.
Turning now to (17), this scenario allows for root suppletion on our account, although it does not require it. We first show how root suppletion works, and then we turn to non-suppletive roots that combine with the reduced marker. Suppletive roots like bett will have an entry like (18), with a pointer to a different root.
(18)  [C1P C1 →good ] ⇔ bett
In this case, good first spells out the aP, as shown in (19), which is a stage of the derivation that corresponds to the positive. If C1 is added, bett is inserted at C1P. This C1P is subsequently merged with C2, yielding the full comparative structure in (20). For concreteness, we place C1P to the left of C2P, reflecting a leftward movement operation of C1P, which we do not discuss in detail here.

(19)  [aP a √ ] ⇔ good
(20)  [ [C1P C1 [aP a √ ]] [C2P C2 ] ]   (bett ⇔ C1P; -er ⇔ C2)
We now turn to non-suppletive roots that combine with -er. In order to show how they are accounted for, we shall diverge from our reliance on a broad spectrum of conceivable approaches to phrasal lexicalisation, and focus on one particular version, due to Starke (2009, 2018). The specific component of this theory which we now need is a matching procedure based on the Superset Principle.
(21)
The Superset Principle (Starke, 2009)
A lexically stored tree L matches a syntactic node S iff L contains the syntactic
tree dominated by S as a subtree
The principle says that if there is an entry like (22), then it can spell out a C1P, as well as an aP (because aP is contained in it).

(22)  [C1P C1 [aP a √ ]] ⇔ old, nice, smart, great, …
If a root has such an entry, it can be used both in the positive (i.e., as an aP) and, at the same time, appear with reduced marking in the comparative. In English, the adjectives old or nice would be examples of such roots. The possibility of entries like (22) is what leads us to say that a root which spells out C1P (and thus occurs with reduced marking in the comparative) does not necessarily have to be suppletive.
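A compact way to picture the Superset Principle is as subtree containment over stored trees. In the sketch below, trees are encoded as nested tuples; this encoding, and the entry chosen, are our own illustrative assumptions.

# Superset Principle (21) as subtree containment over nested-tuple trees.
def subtrees(tree):
    """Yield every constituent of a nested-tuple tree (including the tree itself)."""
    yield tree
    if isinstance(tree, tuple):
        for daughter in tree[1:]:
            yield from subtrees(daughter)

def matches(lexical_tree, syntactic_node):
    """L matches S iff L contains the tree dominated by S as a subtree."""
    return syntactic_node in subtrees(lexical_tree)

ROOT = "sqrt"                       # stands in for the root node
AP   = ("aP", "a", ROOT)
C1P  = ("C1P", "C1", AP)
C2P  = ("C2P", "C2", C1P)

OLD = C1P                           # the lexically stored tree of 'old' in (22)

print(matches(OLD, AP))             # True: 'old' can spell out the positive (aP)
print(matches(OLD, C1P))            # True: 'old' can spell out C1P (reduced marking)
print(matches(OLD, C2P))            # False: C2 has to be spelled out separately, by -er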
To sum up, the theory sketched up to now has two parameters of variation. The first parameter is related to the absolute size of the (morphological) root: it either spells out aP or C1P. At the level of data, this parameter distinguishes between roots that combine with more and those that take -er. The second parameter distinguishes two classes of roots of the size C1P, i.e., two classes of roots that combine with -er. The difference is whether the entry for the root has a pointer in it or not: suppletive roots like bett do (overriding good), non-suppletive roots like old do not.
Before we develop this concept further, we need to refine the Faithfulness Restriction slightly. Notice first that the entry for adjectives like old in (22) is very similar to the entry we originally considered for worse, recall (7b). The problem with (7b) was that it could spell out the comparative form of just about any root, which is why we introduced the Faithfulness Restriction in (13). The FR states that overriding at C1P only happens if the overrider has a pointer to the overridee. As a result, the entries of suppletive adjectives will always contain a pointer to another entry. The entry for the adjective old in (22) does not contain a pointer, so it is not allowed to override other roots.
However, such roots do raise an issue related to overriding and faithfulness. In a bottom-up cyclic system, the √ is always spelled out first. Here all lexical items that contain the √ node are candidates thanks to the Superset Principle, and we let free choice decide. Suppose we choose an entry like old. The next step is to merge little a with the √, forming aP, and we again try to spell it out. What we need to achieve is that old is inserted at aP, forming the positive-degree form old.
Strictly speaking, at this point the lexicalisation of aP as old must override the lexicalisation of the √ node (also old), which (due to the Faithfulness Restriction) requires a pointer that old lacks. At the same time, we are not literally overwriting one entry by another, since we want to insert at aP the very same entry that we inserted at the √ node. This must be legal, otherwise an entry such as (22) would never get to use its lexicalisation potential. In order to allow this, we augment the FR in the following way:
(23)  Faithfulness Restriction (FR)
      A lexicalisation α may override an earlier lexicalisation β iff
      a. α contains a pointer to β, or
      b. α = β.
The clause (23b) now allows the entry (22) to keep overriding itself all the way up to C1P. When C2 is merged, however, C2P cannot be spelled out by (22), and C2 is lexicalised by -er.
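The refined restriction can be added to the earlier derivation sketch: overriding is now licensed either by a pointer (23a) or by identity (23b). The size-based matching and the toy entries below are again our own simplifications, meant only to show how old self-overrides up to C1P while bett overrides good there.

# Toy lexicon with entry sizes: 1 = sqrt, 2 = aP, 3 = C1P (no root here reaches C2).
SIZED_LEXICON = [
    {"phon": "good", "size": 2, "pointer": None},    # aP-sized, cf. (19)
    {"phon": "old",  "size": 3, "pointer": None},    # C1P-sized, cf. (22)
    {"phon": "bett", "size": 3, "pointer": "good"},  # C1P-sized with a pointer, cf. (18)
]
CMPR_SUFFIX = "er"                                   # spells out C2

def fr_allows(new, old):
    """FR (23): override needs a pointer to the overridee (a) or identity (b)."""
    return old is None or new["pointer"] == old["phon"] or new is old

def entry_matches(entry, step):
    """Pointerless entries match any node up to their size (Superset-style); a pointer
    entry only matches its full size, since the pointer stands in for the lower part."""
    return entry["size"] == step if entry["pointer"] else entry["size"] >= step

def comparative(chosen):
    """Build sqrt < a < C1 bottom-up, re-lexicalising at every step; add -er for C2."""
    current = None
    for step in (1, 2, 3):
        for entry in SIZED_LEXICON:
            if not entry_matches(entry, step) or not fr_allows(entry, current):
                continue
            if current is None and entry["phon"] != chosen:
                continue                             # free choice only at the bottom
            current = entry
    return current["phon"] + "-" + CMPR_SUFFIX

print(comparative("old"))    # 'old-er' : (23b) lets old keep overriding itself up to C1P
print(comparative("good"))   # 'bett-er': (23a) lets bett override good at C1P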
Finally, in order to capture the full spectrum of adjectival roots in English, we must introduce roots of two more sizes. To see that, consider again aP-sized roots:

(24)  [aP a √ ] ⇔ good, intelligent, …

The reason for claiming that these roots spell out the entire aP (as opposed to spelling out just the √) is the existence of morphologically complex positive-degree adjectives, like slim-y, happ-y, cheek-y, etc., where arguably -y spells out little a. Since the aP-sized roots are not further decomposable, but distribute like positive-degree adjectives, we treated them as spelling out the aP. But for the morphologically complex adjectives, where -y spells out the little a, we must specify the root only for √.
Another possible type of root is a root that spells out the whole C2P. This root spells out both C1 and C2, and hence it appears with no comparative marking whatsoever. Such roots come again (in principle) in two flavours. One type of such roots has a pointer to a different root, as in (25), and then the root works as a suppletive counterpart of a positive root. A case in point is the entry for worse, which contains a pointer to bad.
(25)  [C2P C2 [C1P C1 →bad ]] ⇔ worse
(26)  [C2P C2 [C1P C1 [aP a √ ]]] ⇔ root
The other type is as in (26), without a pointer. English has no such adjectives, but we
find cases like this in certain varieties of Czech, to be discussed in section 4.1 below.
In sum, the approach sketched in this section distinguished the √ (a syntactic node) from the morphological root, which spells out the √ node (or, equivalently, whatever is at the bottom of the functional sequence) and potentially other nodes. This allows for a variety of roots in the morphological sense, while still maintaining a single √ in syntax. The variety of roots that our theory makes available can be visualised as a set of concentric circles, encompassing various sizes of structure, as shown in (27):
(27)  [C2P C2 [C1P C1 [aP a √ ]]]
      root1 ⇔ √, root2 ⇔ aP, root3 ⇔ C1P, root4 ⇔ C2P
The various types of roots correspond to different types of morphological marking. A size 1 root (root1 in (27)) appears with an overt little a in the positive, and the comparative marking comes on top of the little a (happier). A size 2 root (root2) has no overt marker corresponding to little a, and full comparative marking. A size 3 root has no overt little a, and reduced comparative marking, while a size 4 root has no overt little a and no comparative marking.
From the perspective of suppletion, we note that roots that reach up to the comparative zone (namely size 3 and 4) may work as suppletive comparatives of positive roots (those of size 2). The crucial theoretical possibility allowed by the split CMPR system is the existence of suppletive roots of size 3, corresponding to bett, since these show the property of multiple exponence. Size 3 roots can both work as suppletive counterparts to positive-degree roots of size 2, and, at the same time, combine with an overt comparative marker, namely -er. This extends the reach of our theory to examples where suppletive roots combine with overt markers, i.e. cases of multiple exponence. Note, however, that the impression of multiple exponence is only apparent in our proposal, since the root and the ending expone different features, C1 and C2. It also follows from this analysis that in cases of multiple exponence, we will observe a certain type of ‘reduction’ of the relevant marker. In the following section, we present two case studies which further illustrate and refine the reduction effect under suppletion.
4 EMPIRICAL SUPPORT

4.1 CZECH
The first case study concerns the interaction between comparative marking and suppletion in Czech. We start from the fact that the traditional descriptions recognise three different allomorphs of the comparative (see Dokulil et al., 1986; Karlík, Nekula & Rusínová, 1995; Osolsobě, 2016). We give them in the first three rows of (28). Each row starts with the relevant allomorph, followed by the positive, the comparative and the superlative. The final morpheme in each form is the agreement marker, which we ignore in extracting the comparative allomorph. Following this approach, the allomorph in (28c,d) is zero (no overt marker). We address the k/č alternation in (28c) shortly.
(28)      allomorph   POS      CMPR            SPRL               GLOSS
     a.   -ějš-       chabý    chab-ějš-í      nej-chab-ějš-í     ‘weak/poor’
     b.   -š-         slabý    slab-š-í        nej-slab-š-í       ‘weak’
     c.   ø           hezký    hezč-í          nej-hezč-í         ‘pretty’
     d.   ø           ostrý    ostř-í          nej-ostř-í         ‘sharp’
On the first two lines, we illustrate the -ější and -ší allomorphs with two adjectives that are semantically and phonologically similar. We do so to show that the allomorphy is not driven by phonology or semantics. Rather, the distribution is governed by arbitrary root class: -ějš- is the productive allomorph, while -ší is restricted (occurring with 72 out of 5440 adjectives sampled in Křivan, 2012).
On the third and fourth lines, we illustrate the zero allomorph, and two facts should be noted. First of all, the positive and the comparative are not homophonous: their morphological identity is obscured by phonological interactions with the agreement markers. Specifically, the agreement marker -í, found in the comparative, triggers the palatalisation of the base (k goes to č), while the elsewhere agreement marker -ý does not palatalise the base (see Caha, De Clercq & Vanden Wyngaerd, 2019 for a discussion of the palatalisations). As a result, the forms are distinct. The second fact to be noted is that in the standard language, zero marking only occurs after a particular adjectival marker, namely -k-. This morpheme is similar to the English -y in that it sometimes occurs after nominal roots (e.g., slizký = ‘slimy’) and sometimes after cranberry-type morphemes (e.g., hezký = ‘pretty’). Because of its limited distribution, it is not clear whether the ø allomorph needs to be recognised as a separate marker, or perhaps dismissed as a special realisation of -š- after -k-. We do, however, recognise the zero as a relevant allomorph to consider, because in the dialects of North-Eastern Bohemia (Bachmannová, 2007), one finds it also after non-derived adjectives, as shown in the last row (28d). We note, however, that much of our reasoning is valid even if it turns out that the zero allomorph is an effect of phonology, rather than morphology.
Taking the traditional descriptions at face value, an interesting generalisation is that, going from the first to the third line in (28), we see an increasingly ‘reduced’ realisation of the full marker -ějš-. (28b) suggests that -š- is a substring of -ějš-. This makes it tempting to decompose -ějš- into two morphemes, -ěj- and -š-, as suggested by Caha, De Clercq, and Vanden Wyngaerd (2019).
Independent evidence for decomposing -ějš- comes from comparative adverbs, seen in the second column of (29). Here the -š- part of the comparative adjective is systematically missing, while -ěj- is preserved. This confirms that -ěj- and -š- are independent morphemes.
(29)      CMPR ADJ       CMPR ADV      GLOSS
          chabější       chaběji       ‘weak’
          rychlejší      rychleji      ‘fast’
          červenější     červeněji     ‘red’
Given our model with two comparative heads, the facts are easily captured if -ěj- and -š- spell out C1 and C2 respectively. With aP-sized roots, both markers surface, see (30). With roots of the size C1P, only -š- appears, as in (31).
(30)  [C2P C2 [C1P C1 [aP a √ ]]]   (root ⇔ aP; -ěj- ⇔ C1; -š- ⇔ C2)
(31)  [C2P C2 [C1P C1 [aP a √ ]]]   (root ⇔ C1P; -š- ⇔ C2)
Zero marking arises when the root spells out all of the projections, as in (26) above.
Recall that (26) was presented as a logical option allowed by our system, and though
it was not attested in English, we need it to account for ostrý ‘sharp’ in (28). This
concludes our discussion of ‘regular’ comparatives, i.e., those based on the same base
as found in the positive, and we now turn to suppletive comparatives.
Given our theory of suppletion, where suppletive roots override the base, comparative suppletion requires a root that spells out a different node than the positive. Since the positive spells out aP, a suppletive comparative root must be at least of the size C1P. This idea interacts with our account of comparative allomorphy. Specifically, since roots of the size C1P cannot combine with -ějš- (recall (31)), we now predict that suppletive roots should be incompatible with -ějš-. To verify this, the table in (32) presents an exhaustive list of suppletive adjectives based on Dokulil et al. (1986, p. 379) and Osolsobě (2016). The table shows that the prediction is borne out: all suppletive adjectives require the ‘reduced’ -š- allomorph.12
(32)   POS       CMPR     GLOSS         POS       CMPR     GLOSS
       dobrý     lepší    ‘good’        špatný    horší    ‘bad’
       velký     větší    ‘big’         malý      menší    ‘little, small’
       dlouhý    delší    ‘long’
We submit these facts here as an important confirmation of the current model, which
predicts that when there are two or more ways of marking the comparative, suppletion
is incompatible with the full marker. With reduced markers, we find both suppletive
and regular cases, depending on whether the entry of the size C1P has a pointer or not.
It is thanks to phrasal lexicalisation, the mechanism of pointers, and the postsyntactic
lexicon that the single √ approach can be maintained against the surface diversity of
morphological roots. Roots can be stored in the lexicon without functional structure,
with (more or less) functional structure, and with or without a pointer, resulting in the
different types of roots that we observe. Crucially, suppletive forms can be linked to
their base form without the need to code the identity of the base on the syntactic √
node.
4.2
LATIN
Latin provides further evidence for the correlation between reduced marking and sup
pletion predicted by our theory, but in contrast to Czech, it shows the effect in the su
perlative. The regular marking of comparative and superlative is shown in (33a).
(33)        POS       CMPR      SPRL           GLOSS      marking in SPRL
      a.    altus     altior    altissimus     ‘tall’     full marking
      b.    malus     pe-or     pe-ssimus      ‘bad’      SPRL lacks i
      c.    bonus     melior    opt-imus       ‘good’     SPRL lacks iss
      d.    magnus    maior     max-imus       ‘big’      SPRL lacks iss
      e.    parvus    min-or    min-imus       ‘small’    SPRL lacks iss
      f.    multus    plūs      plūr-imus      ‘much’     SPRL lacks iss

12 See Caha, De Clercq, and Vanden Wyngaerd (2019) for the discussion of potential counterexamples.
We segment the regular superlative into five morphemes (following De Clercq & Vanden
Wyngaerd, 2017). The first morpheme is the root (alt), and the last one (-us) an
agreement marker. The reason for treating the three middle markers -i, -ss and -im
as separate morphemes is that they can be missing in the irregular forms shown in
(33b–f). These represent an exhaustive list of the suppletive cases given by Gildersleeve
and Lodge (1903, p. 46).
We analyse -i (the first of the post-root superlative morphemes) as a comparative
marker, i.e., as a morpheme identical to the -i of the comparative altior. We treat -i
in the same way as the English -er, namely as the lexicalisation of C2. Consequently,
we analyse -or, which follows -i in the comparative, as an agreement marker. We do so
because the masculine form altior ‘taller, M.SG’ alternates with the neuter altius. As
a C2 marker, -i is compatible with suppletion. In (33c), for instance, the positive degree
root bon realises aP, the suppletive comparative root mel realises C1P, and -i is the
marker of C2.
The remaining two morphemes mark the superlative, which we split into S1 and S2,
analogously to CMPR. The structure of altissim(us) thus looks as follows:
(34)  [S2P [S2 -im ] [S1P [S1 -ss ] [C2P [C2 -i ] [C1P C1 [aP a [√ alt ]]]]]]
Against this background, consider the fact that the superlative marking with suppletive
roots is always reduced, see (33b–f). There is not a single suppletive root in Latin which
keeps all three pieces in place, as indicated in the final column of (33). Specifically,
we see two classes of suppletive roots. The majority of suppletive roots lacks the C2 marker -i
as well as the S1 marker -ss, and we would thus analyse them as spelling out S1P. However, pe
lacks only the -i, which, on the assumption that -i is C2, leads to the proposal in (35).
(35)  [S2P [S2 -im ] [S1P [S1 -ss ] [C2P [C2 -i ] [C1P C1 [aP a √ ]]]]]
      (alt spells out C1P, pe spells out C2P, opt spells out S1P)
This picture has implications for the analysis of the comparative. Specifically, all suppletive
roots which spell out a projection larger than C1P should make -i disappear not
only in the superlative, but also in the comparative. This is true for the adjectives min-or
‘smaller’ and plūs ‘more’, as well as, arguably, pe-or ‘worse,’ where the glide in the
comparative pe[j]or results, on our analysis, from phonological factors (hiatus filling).
Note that plūs lacks the agreement marker -or, and Gildersleeve and Lodge (1903, p. 46)
analyse it as a neuter form, with the masculine cell left blank. Here we treat plūs as
spelling out minimally S1P, lacking -i in the comparative, and in the superlative also
-ss. We leave the reasons for the lack of agreement in the comparative open to interpretation.13
The (c) and (d) cases of (33) warrant some further comment, since they have -i in
the comparative but lack it in the superlative. This is because they instantiate an ABC
pattern, with two different suppletive roots: one of size C1P (explaining the presence of
-i in the comparative), and another of size S1P (explaining the absence of both -i and
-ss in the superlative). These suppletive roots successively point to one another, e.g.
the lexical entry for opt contains a pointer to mel, which itself contains a pointer to
bon.14
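To make the chained-pointer idea concrete, the following Python sketch (ours; the entries and the matching procedure are deliberately simplified stand-ins for the Superset Principle and the usual competition principles, and the names ENTRIES, points_to and spell are invented for the illustration) reproduces the bon–mel–opt pattern:

# Our own toy rendering of the chained-pointer entries for Latin
# bon- / mel- / opt-. Among the entries whose pointer chain bottoms out
# in the base root, the biggest one that fits the structure is chosen;
# the agreement markers -us / -or are omitted.

FSEQ = ("a", "C1", "C2", "S1", "S2")
MARKERS = {"C2": "i", "S1": "ss", "S2": "im"}          # regular degree markers

# entry: (number of heads covered, pointer to the suppleted entry)
ENTRIES = {"bon": (1, None), "mel": (2, "bon"), "opt": (4, "mel")}

def points_to(name, base):
    """Does the pointer chain of `name` reach `base`?"""
    while name is not None:
        if name == base:
            return True
        name = ENTRIES[name][1]
    return False

def spell(size, base):
    """Spell out a degree structure of `size` heads built on `base`."""
    root_size, root = max((s, n) for n, (s, _) in ENTRIES.items()
                          if s <= size and points_to(n, base))
    rest = FSEQ[root_size:size]
    return "-".join([root] + [MARKERS[h] for h in rest if h in MARKERS])

if __name__ == "__main__":
    print(spell(1, "bon"))   # bon      (positive bon-us)
    print(spell(3, "bon"))   # mel-i    (comparative mel-i-or)
    print(spell(5, "bon"))   # opt-im   (superlative opt-im-us: no -i, no -ss)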
13 Note that plūs and plūr are two shapes of a single root, with s undergoing rhotacism in intervocalic
positions, which happens also in the comparative when inflected, cf. plūris ‘more, GEN.SG.’
14 We take s in the superlative maksimus ‘biggest’ to be a part of the root, given that it is not a geminate
like the superlative S1 marker. The comparative maior ‘bigger’ could arise from the root mag, as
suggested in Bobaljik (2012), with the root-final g first assimilating to j (yielding majjor), which is
then reduced due to degemination. Bobaljik (2012) concludes from this that this adjective has a regular
AAA pattern, and hence, that it is irrelevant for suppletion. However, this move requires the parsing
of the positive as mag-nus, which we see little evidence for. We therefore treat this as an ABC pattern
(magn–ma(g)–maks).
Related to this, we note that the literature sometimes makes a distinction between weak or mild suppletion,
where the related forms are phonologically similar (e.g. sing–sang), and strong suppletion,
where such similarity is absent, as in the pair bad–worse (see e.g. Pomino & Remberger (2019), and
references cited there). The former type is often treated as morphologically regular, and subject to
postmorphology phonological readjustment in DM (see Bobaljik, 2017). We make no difference in the
treatment of cases of weak or strong suppletion, in that we take both weakly and strongly suppletive
allomorphs to be lexically stored in terms of an entry containing a pointer. We thus do not rely on
phonological readjustment rules (see Harley & Siddiqi, 2013 for the same approach).
In sum, this case study also illustrates how √ and roots should be treated differently.
Whilst there is only one √ in syntax, there are many different types of roots in
the lexicon, which store √ with (or without) other pieces of structure. The trade-off
between the size of roots and the superlative degree morphology in Latin shows that
suppletion follows from the size of lexical entries of morphological roots and not from
the nature of the syntactic √.
5
NONLOCAL ALLOMORPHY
In this section, we turn to a case of suppletion in Korean discussed by Chung (2009).
Choi and Harley (2019) have argued that this represents a case where allomorphy may
be conditioned nonlocally, possibly skipping intervening morphemes. The relevant sit
uation of nonlocal allomorphy is abstractly depicted in (36).
(36)  a. √153(A)-AFF1
      b. √153(B)-AFF1-AFF2

What the formulas in (36) depict is that insertion at the root √153 is suppletive and
varies between A (√153(A)) and B (√153(B)). The suppletion is conditioned by the
presence/absence of AFF2. The trigger is nonlocal because it is separated from the root
by an intervening affix, as (36b) shows.
Choi and Harley (2019) operate in a framework where lexical insertion happens un
der terminal nodes only, and suppletion is a case of contextual allomorphy, as illustrated
in (3) above. Therefore, they capture the facts by saying that the √153 is realised
as B if somewhere higher up in the structure the triggering feature [F] is found. [F] is
then independently realised by AFF2.
For our analysis, such cases represent a challenge. The reason is that in order not
to discriminate among roots in syntax, our system relies on pointers and overriding,
thereby eliminating the possibility of standard rules of contextual allomorphy, which
make reference to a wider context. In our system, the ‘trigger’ for allomorphy needs
to be strictly local, in the sense of ‘having to be a part of the set of features exponed
by the suppletive root.’ A situation like the one depicted in (36), where the presence
of a suppletive root is apparently caused by a higher head across an intervening affix,
therefore cannot arise in our system.
What we want to do in this section is illustrate the viability of an approach to appar
ent nonlocal cases that goes along the following lines: what appears to be nonlocal is
only so under certain assumptions about the structure, but if we enrich the structure,
what looked like a case of nonlocal allomorphy starts looking like local allomorphy, in
the sense of the previous paragraph. We will argue that nonlocal conditioning of al
lomorphy is both unnecessary and undesirable. It is unnecessary once we enrich the
structure involved in negation and honorification in Korean. It is also undesirable be
cause it predicts the wrong results once the interaction between negation, causation,
and honorification is taken into account.
In addition, the Korean data that we shall discuss in this section also present a case
of double exponence in the domain of honorific marking. We shall propose to deal with
this case of double exponence by analogy with that of the English comparative form
better, i.e. by means of, on the one hand, an enrichment of the functional hierarchy
with an additional head, and, on the other, the principle of phrasal lexicalisation.
5.1
A KOREAN PARADOX
Korean shows a paradox with respect to the conditioning of allomorphy in the domain
of negation and honorification, which was first discussed by Chung (2009), and then
taken up again by Choi and Harley (2019). The paradox is summarised in the table in
(37).
(37)       regular pattern     √EXIST              √KNOW
      a.   √X                  iss                 al
      b.   NEG √X              eps                 molu
      c.   √X HON              kyeysi              alsi
      d.   NEG √X HON          ani/mos kyeysi      molusi
The first column of (37) abstractly shows the regular way of forming honorific and neg
ative forms by affixes. The regular markers of negation and honorification occur each
on a different side of the root, so that it is not a priori obvious what their hierarchical re
lation is with respect to the root. Their hierarchy could be, however, inferred indirectly
by looking at suppletive forms, which are given in the last two columns of the table.
The general form of the paradox is this. Assuming (as we do) that suppletion is al
ways local, the hierarchy of negation and honorification could be determined by looking
specifically at the cell where both are present. This is the last row of the table (37). But
the facts from suppletion point either way, paradoxically: one verb (√EXIST) suggests
that the honorific head is closer to the root, because the root shows the ‘honorific’ allomorph
where both triggers are available (kyeysi in (37d)). On the other
hand, the verb √KNOW suggests that negation is closer to the root, because it has a suppletive
negative form even in the presence of an overt honorific suffix. Let us now look
at the patterns in more detail.
The first verb, iss ‘exist’, has a negative suppletive form eps ‘not exist’ (37b), which is
a portmanteau expressing negation (Chung, 2009).15 It also has a suppletive honorific
form kyeysi (37c). When negation and honorification cooccur, as in (37d), kyeysi
is used, suggesting that honorification takes precedence over negation in determining
the allomorph of the root (ani and mos ‘not’ are analytic negative markers). Assum
ing that this precedence is a function of greater structural closeness, this suggests the
functional hierarchy in (38a), represented treewise in (39a). The second verb in (37) is
al ‘know’, which also has a suppletive negative form, the portmanteau molu ‘not know’
(37b). There is no allomorph in the presence of honorification, and the regular hon
orific marker (u)si is expressed on the verb (37c). When honorification and negation
cooccur, as in (37d), molu appears again, suggesting that negation takes precedence
over honorification in determining root allomorphy. In terms of structural closeness
to the root, this suggests the opposite conclusion from the one reached earlier, namely
(38b)/(39b).
(38)  a. NEG > HON > √EXIST
      b. HON > NEG > √KNOW

(39)  a. [NEG [HON [√EXIST ]]]        ani/mos ⇔ NEG; -si ⇔ HON; kyey ⇔ √EXIST
      b. [HON [NEG [√KNOW ]]]         -si ⇔ HON; molu ⇔ NEG + √KNOW
The paradox exists in virtue of the following two assumptions:
(i) there is only a single functional hierarchy (for Korean), i.e. (38a) and (38b)
cannot both be correct
(ii) allomorphy is conditioned strictly locally (see Bobaljik, 2012; Moskal, 2013;
Moskal & Merchant, 2015, and many others).
The first of these assumptions is uncontroversial. Opinions diverge, however, as to
which of the two hierarchies shown in (39) is the correct one. Choi and Harley (2019)
15 iss has three meanings (Martin, 1992, p. 319), one of which is ‘exist’, another is ‘stay intentionally’,
and a third meaning is ‘have’. The negation of iss ‘stay’ is an(i) iss (Chung, 2009, p. 539; Chung, 2007,
p. 124–127), while the negation of iss ‘exist’ and ‘have’ is eps. Chung (2007) argues convincingly that
iss ‘exist’ is adjectival, while iss ‘stay’ is verbal, showing additional functional morphology on the root
in the present tense. We hypothesize that the absence of negative suppletion with iss ‘stay’ follows
from intervening structure between ani and the root iss ‘stay’, violating locality. However, we focus
on iss ‘exist’ in this paper, deferring a detailed account of how the different readings for iss arise to
future research.
assume that the functional hierarchy illustrated in (40a) is the correct one, whereas
Chung (2009) argues in favour of (40b):16
(40)  a. [NEG [HON [√ ]]]
      b. [HON [NEG [√ ]]]
Regardless of which of these two options is taken, it seems clear that a strict version
of (ii) cannot be maintained. In the next section, we discuss the proposal by Choi and
Harley (2019) which abandons (ii), and some of the problems that it faces.
5.2
NONLOCAL ALLOMORPHY AND CAUSATIVE INTERVENTION
We start out by taking a closer look at the specifics of the account by Choi and Harley
(2019). Recall first that they adopt the structure in (40a), where the HON head is local
to the root. This takes care of the allomorphy on iss ‘be’ in a straightforward and lo
cal fashion, the HON head taking precedence over negation when both are present. In
order to account for the pattern of suppletion found with al ‘to know’, Choi and Harley
(2019) argue that if no suppletive form has been inserted locally, root allomorphy can
be conditioned from a distance, as long as the conditioning head is within the complex
X° head (Bobaljik, 2012). This condition is satisfied for √KNOW, where no suppletive
VI exists that is conditioned by HON, so that NEG can condition allomorphy of the root
across HON. This is shown in (41), where the suppletive portmanteau molu ‘not know’
is inserted under the root terminal, conditioned by a NEG head separated from it by an
intervening HON head. This account cannot be replicated under constituent lexicalisa
tion.
(41)  [NEG [ [√KNOW molu ] [HON si ] ]]
In what follows we shall discuss two cases (not discussed by Choi & Harley, 2019)
that are structurally analogous to (41), but that show different behaviour, in that a
16 Choi and Harley’s (2019) hierarchy is not (only) the consequence of an assumption they make about
the functional sequence. Instead, it results from their assumption that honorification is a form of agreement
with an honorific subject, which is realised on v as a result of a rule of HON-sprouting that applies
in the morphological component if a specific configuration is realised, i.e. if the verb is c-commanded
by an NP with [+hon] (cf. Marantz, 1991, 1993). The result of this node-sprouting rule
makes HON lower in the structure than NEG. See Kim and Sells (2007) for arguments to the effect that
Korean honorific marking is not to be considered as agreement.
higher head is not able to trigger suppletion of the root across an intervening head.
We shall refer to these two cases as instances of ‘causative intervention’, as they involve
the intervention of a CAUS head between the root and a higher functional head (HON or
NEG). This intervening causative head appears to block the suppletive realisation of the
root, unlike what happens in (41). The facts of causative intervention suggest to us that
suppletion has to be strictly local after all, since the presence of a causative intervener
appears to block it. We shall then proceed to develop an alternative account for the case
of (41) in section 5.3.
The first case of causative intervention involves negation. Again taking the verb al
‘know’, we see the following pattern (Chung, 2007):
(42)  a.  √KNOW                al              ‘know’
      b.  NEG √KNOW            molu            ‘not know’
      c.  √KNOW CAUS           alli            ‘let know, inform’
      d.  NEG √KNOW CAUS       ani/mos alli    ‘not inform’
The case is similar to the one in (37), with negation occurring to the left of the verb
and the other marker (honorific or causative) to its right, thus yielding a potential am
biguity as to the hierarchical relations. However, in this case the relative scope of the
negation and causative marker can be deduced from the meaning. Specifically, in (42d),
the meaning is ‘not inform’ (‘not let know’) rather than ‘cause to not know,’ suggesting
a scopal hierarchy NEG > CAUS > √KNOW. Given that hierarchy, the structure of (42d)
looks as in (43), which is exactly as in (41), modulo the substitution of CAUS for HON. Yet
in this case, the NEG head is apparently unable to trigger the insertion of the suppletive
negative portmanteau molu across the intervening CAUS head, since only the nonsup
pletive root al ‘know’ is possible.
(43)  [NEG [ [√KNOW al / *molu ] [CAUS li ] ]]
This is unexpected under Choi and Harley’s (2019) proposal.
The second case of causative intervention in Korean involves honorific suppletion,
which is found with a small number of Korean verbs, given in (44) (Kim & Sells, 2007,
p. 312).17
17 The regular issusita means ‘have’, while the suppletive kyeysita means ‘be/exist’ or ‘stay’ (Martin,
1992, p. 217). As already mentioned in footnote 15, this paper focuses on iss ‘exist’ and the morphological
patterns it triggers, and leaves the other readings and its associated morphological patterns out of
consideration.
(44)       ROOT-DECL    ROOT-HON-DECL (regular)    ROOT-HON-DECL (suppletive)
      a.   mekta        *mekusita                  capsusita        ‘eat’
      b.   cata         *casita                    cwumusita        ‘sleep’
      c.   issta        issusita                   kyeysita         ‘be, exist, have’
These show the following pattern in the interaction with causation:18
(45)  a.  √EAT                 mek          ‘eat’
      b.  √EAT CAUS            meki         ‘let eat’
      c.  √EAT HON             capsusi      ‘eat’
      d.  √EAT CAUS HON        mekisi       ‘let eat’
Since the causative and the honorific markers are on the same side of the root, it is easy
to see their hierarchy, which we take to be the mirror image of the linear order, i.e. HON
> CAUS > √.19 This translates into the following tree structure:
(46)  [HON (u)si [ [√EAT mek / *caps ] [CAUS i ] ]]
We see the same pattern as in (43) above: HON is not able to trigger the insertion of
the suppletive honorific form caps ‘eat’ under the root node across an intervening CAUS
head. This is a second case, then, that is unexpected under the approach of Choi and
Harley (2019). In these two cases of ‘causative intervention’, a CAUS head intervenes
between the root and a higher functional head (HON or NEG), and blocks the suppletive
realisation of the root. We take the data of causative intervention to cast serious doubt
on Choi and Harley’s (2019) claim that triggers for suppletion can be nonlocal.20
5.3 TOWARDS AN ALTERNATIVE: DECOMPOSING HON
In the light of the preceding discussion, we shall now proceed to develop an alternative
analysis of the Korean paradox in terms of phrasal lexicalisation and the strictly local
allomorphy requirement that this approach entails. The fundamental ingredient of our
18 The verb iss ‘be, exist’ does not permit causation, so that the interaction of honorification with causation
cannot be illustrated for this verb.
19 The alternation between -usi and -si is phonologically conditioned: u is an epenthetic vowel that
appears between two heteromorphemic consonants (Chung, 2009, p. 543).
20 A possible way out for Choi and Harley (2019) to account for this problem would be to argue that
CAUS is a cyclic node, which blocks suppletion. It will be clear that such arbitrary marking of heads as
interveners, while deriving the facts, seriously undermines the conceptual appeal of the proposal.
alternative is the idea that honorification (like comparative/superlative formation) in
volves two HON heads. In this section, we explain how assuming two heads yields the
relevant honorific forms. In Section 5.4 we add the causative head, and show how it
interacts with honorification. In Section 5.5, we pinpoint the NEG head in a particular
syntactic position and show how it interacts with causative formation. Finally, with all
the ingredients in place, Section 5.6 explains how the two HON heads allow us to resolve
the Korean paradox noted in Choi and Harley (2019).
In order to see why assuming two honorific heads is needed, consider the example of
the honorific suppletive form capsusi ‘eat.HON’. On the one hand, the root has a special
honorific shape (the nonhonorific shape of the root is mek ‘eat’); on the other hand, the
root is accompanied by an overt honorific marker usi ‘HON.’ Therefore, just like in the
case of better, we hypothesise that two honorificationrelated heads are involved, as in
(47). For the lack of a better term, we call them HON1 and HON2. The verbal structure
these heads come on top of is abbreviated as [ v √ ] in (47).
(47)  [HON2P HON2 [HON1P HON1 [vP v √ ]]]

(48)  [vP v √ ] ⇔ mek

(49)  [HON1P HON1 [ mek ]] ⇔ caps
In this setting, the cooccurrence of a suppletive root and an overt honorific suffix is
easily captured. Specifically, we associate the string mek ‘eat’ with a constituent of the
size vP, as in (48), while caps ‘eat.HON’ is a realisation of HON1 P plus whatever features
are contained in the verb mek ‘eat’. The lexical entry of caps ‘eat.HON’ therefore looks
as in (49); it basically says that caps is the honorific form of mek ‘eat.’
The structure of the full honorific form capsusi is then as given in (50). We can see
that the honorific root caps ‘eat.HON’ applies at HON1 P, overriding the nonsuppletive
mek ‘eat’ in the process. The structure further presupposes that the constituent spelled
out by caps ‘eat.HON’ moves out of HON2 P, and the honorific marker is inserted as the
spellout of the remnant HON2 P. The lexical entry for the honorific (u)si is as in (51).
(50)  [ [HON1P HON1 [vP v √ ]] [HON2P HON2 ]]
      (the fronted HON1P ⇔ caps, overriding mek; the remnant HON2P ⇔ (u)si)

(51)  [HON2P HON2 ] ⇔ (u)si
The reader will notice that this account follows the same logic as developed for better
in Section 3 (bett spells out C1P, -er spells out C2).
Moving further along the same path, nonsuppletive honorifics will be captured
in the same way as English nonsuppletive forms like smarter. Recall that a root
like smart spells out the whole C1P (like bett), but it lacks a pointer. Quite paral
lel to this treatment, we are assuming that Korean nonsuppletive roots spell out the
whole HON1 P, as shown in (52). This structure shows the honorific form of the (non
suppletive) verb al ‘to know.’
(52)  [ [HON1P HON1 [vP v √ ]] [HON2P HON2 ]]
      (HON1P ⇔ al; HON2 ⇔ si)

(53)  [HON1P HON1 [vP v √ ]] ⇔ al
This leads us to posit for al ‘know’ a lexical entry like the one in (53). Because of
the Superset Principle, this makes the verb also usable in nonhonorific environments,
spelling out just the vP. The ambiguity of nonsuppletive roots (spelling out either vP
or HON1 P) will be important later on.
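The entries in (48), (49) and (53) can be rendered schematically as follows. This is our own toy illustration (the dictionary ENTRIES and the function honorific are invented names), and the u~zero alternation of the honorific marker is abstracted away (cf. footnote 19):

# A schematic sketch (ours) of the split-HON entries. Each entry records
# the topmost head it can spell out and an optional pointer to the item
# it suppletes; (u)si realises HON2.

ENTRIES = {
    "mek":  ("v",    None),     # 'eat', spells out vP, cf. (48)
    "caps": ("HON1", "mek"),    # 'eat.HON', HON1P with a pointer to mek, cf. (49)
    "al":   ("HON1", None),     # 'know', a regular root spelling out HON1P, cf. (53)
}
HEIGHT = {"v": 0, "HON1": 1}

def honorific(base):
    """Return root + honorific marker for the verb whose plain root is `base`:
    the biggest item whose pointer chain reaches `base` takes HON1P,
    and (u)si is inserted for HON2, cf. (50) and (52)."""
    def reaches(name):
        while name is not None:
            if name == base:
                return True
            name = ENTRIES[name][1]
        return False
    root = max((n for n in ENTRIES if reaches(n)),
               key=lambda n: HEIGHT[ENTRIES[n][0]])
    return root + "-(u)si"

if __name__ == "__main__":
    print(honorific("mek"))   # caps-(u)si  (surface capsusi: suppletion plus overt HON)
    print(honorific("al"))    # al-(u)si    (surface alsi: regular root, overt HON)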
5.4
ADDING CAUSATIVES
This section introduces the causative head CAUS in the structure. It starts by providing
a structure for regular verbs, and then turns to suppletive verbs.
To begin with, we shall place the CAUS head in the tree relative to HON1 and HON2 . The
empirical facts to be discussed suggest that CAUS is below HON1 . The relevant hierarchy
is shown in (54).
(54)  HON2 > HON1 > CAUS > v > √
How does the presence of CAUS influence the derivation? The main consequence is that
regular verb roots with an entry like (53) (spelling out √, v and HON1) will only be able
to spell out √ and v, since CAUS makes it impossible for such a root to also spell out
HON1. As is obvious from (54), CAUS intervenes. The root will therefore spell out just
the vP and move to the left; after the movement, the causative suffix li spells out CAUS,
as in (55). We are using the root al ‘to know’, whose entry is in (53).
(55)  [CAUSP [vP v √ ] [CAUS li ]]
      (vP ⇔ al; CAUS ⇔ li)

(56)  [HON2P [HON1P HON1 [CAUSP CAUS [vP v √ ]]] HON2 ]
      (vP ⇔ al; HON1 + CAUS ⇔ li; HON2 ⇔ si)
The tree in (55) represents the structure of the nonhonorific causative. The honorific
causative is derived by adding HON1 and HON2 on top of the nonhonorific causative.
On the surface, this leads to the addition of (u)si to the causative, yielding allisi ‘to let
know, honorific.’
Let us next address the question of what happens to HON1 in the honorific causative.
We know that it is not spelled out by (u)si, which can realise only HON2 . Since the root
cannot spell out HON1 either (since CAUS intervenes), we conclude that HON1 is spelled
out along with the causative li. The full structure we are assuming is therefore as in (56).
Note that the causative li spells out different structures in (55) and (56) and alternates
(in a manner reminiscent of regular roots) between a nonhonorific use in (55) and an
honorific use in (56). In particular, its lexical entry is specified as containing both HON1
and CAUS, which allows it to spell out either both of these features, or just CAUS.
The interest of this derivation is that it allows us to explain why causativisation
blocks honorific suppletion. The relevant facts are as repeated in (57) (originally (45)),
and the relevant observation is that the honorific causative in (57d) uses the nonhonorific
root.
(57)  a.  √EAT                 mek          ‘eat’
      b.  √EAT CAUS            meki         ‘let eat’
      c.  √EAT HON             capsusi      ‘eat’
      d.  √EAT CAUS HON        mekisi       ‘let eat’
This pattern can be captured by using the entries for ‘eat’ as proposed in (48) and (49)
above. We repeat them in (58) and (59).
(58)  [HON1P HON1 [ mek ]] ⇔ caps

(59)  [vP v √ ] ⇔ mek
With these lexical entries, both types of causatives (i.e., both the honorific and the non
honorific causative) are correctly expected to be based on the nonsuppletive root mek
‘eat.’ To show that, we give in (60) and (61) the structures for the nonhonorific and
the honorific causative respectively. These structures are the same as those in (55) and
(56), only the root is different (‘eat’).
(60)  [CAUSP [vP v √ ] [CAUS i ]]
      (vP ⇔ mek; CAUS ⇔ i)

(61)  [HON2P [HON1P HON1 [CAUSP CAUS [vP v √ ]]] HON2 ]
      (vP ⇔ mek; HON1 + CAUS ⇔ i; HON2 ⇔ si)
The most relevant thing to look at is the honorific causative in (61). In this structure, the
HON1 head is separated from the vP by the CAUS head. Therefore, just like with regular
verbs, HON1 cannot be spelled out by caps ‘eat.HON.’ This explains why the root only
spells out vP and surfaces as mek in the honorific causative.
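The blocking effect of CAUS can be illustrated with a small toy derivation, again ours and deliberately simplified: the root is assumed to lexicalise the longest contiguous span from the bottom of the structure that one of its entries covers, and -i and -si are simply listed as default exponents (with -i standing in for CAUS, bundling HON1 where relevant, as in (61)). The names SUFFIX, ITEMS and derive are illustrative.

# A toy derivation (ours) of 'causative intervention': once CAUS sits
# between v and HON1, the HON1P-sized suppletive entry caps can no longer
# be used and the plain root mek surfaces.

SUFFIX = {"CAUS": "i", "HON2": "si"}

# item: (set of heads its entry covers, pointer to the suppleted item)
ITEMS = {
    "mek":  ({"v"}, None),
    "caps": ({"v", "HON1"}, "mek"),
}

def derive(structure, base="mek"):
    """structure: bottom-up list of heads, e.g. ['v', 'CAUS', 'HON1', 'HON2']."""
    best, root = 0, None
    for name, (heads, ptr) in ITEMS.items():
        if name != base and ptr != base:          # unrelated items do not compete
            continue
        span = 0
        for h in structure:                       # longest contiguous bottom span
            if h in heads:
                span += 1
            else:
                break
        if span > best:
            best, root = span, name
    rest = [h for h in structure[best:] if h in SUFFIX]
    return "-".join([root] + [SUFFIX[h] for h in rest])

if __name__ == "__main__":
    print(derive(["v", "HON1", "HON2"]))           # caps-si  (no CAUS: suppletion possible)
    print(derive(["v", "CAUS", "HON1", "HON2"]))   # mek-i-si (CAUS blocks caps, cf. (61))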
5.5
ADDING NEGATION
This section puts forth a proposal for negative suppletion. We also propose a specific
position for the NEG in the tree, paving the way for the solution to the paradoxical in
teraction between negation and honorification.
Let us begin by proposing a particular position for negation in the tree. Recall first
that the existing literature contains diverging opinions on the position of NEG. Some
authors place it higher than HON (Choi & Harley, 2019), others place it lower than HON
(Chung, 2009). Our account with split HON allows us to take the good aspects of each
of these proposals by placing the NEG head in between the two honorific heads. The full
structure we shall be assuming is therefore as in (62).
(62)  HON2 > NEG > HON1 > CAUS > v > √
All the heads above vP are assumed to be optional in the sense that they can be either
present or absent. The only caveat applies to the two HON heads, which are either both
present, or both absent.
As a default, the NEG head is spelled out by one of the two negative markers ani or
mos. The tree (63) shows the negation of a simple verb root ca ‘sleep.’ The tree in (64)
shows the negation of a causative.
(63)  [NEGP [NEG ani ] [vP v √ ]]
      (vP ⇔ ca ‘sleep’)

(64)  [NEGP [NEG ani ] [CAUSP [vP v √ ] [CAUS li ]]]
      (vP ⇔ al ‘know’)
Let us now turn to the verb al ‘know’, which has a negative suppletive form molu. The
pattern of this verb is repeated in (65) (repeated from (42)). The most remarkable dat
apoint is on line (d), showing that the causative blocks the use of molu despite the
presence of NEG in the structure, and leading to an analytic marking of negation as ani
or mos.
(65)  a.  √KNOW                al              ‘know’
      b.  NEG √KNOW            molu            ‘not know’
      c.  √KNOW CAUS           alli            ‘let know, inform’
      d.  NEG √KNOW CAUS       ani/mos alli    ‘not inform’
To see how our system accounts for this, let us first provide the lexical entry for the
suppletive negative form molu ‘not know,’ see (66). The entry contains a pointer to the
nonnegative verb al ‘know.’ That is, it realises the NEG head and whatever features are
realised by al ‘know.’ The entry of al is repeated in (67) (recall (53)).
(66)  [NEGP NEG [ al ]] ⇔ molu

(67)  [HON1P HON1 [vP v √ ]] ⇔ al
Our account correctly predicts that the lexical item in (66) cannot be used in the causative
structure as the spellout of NEG and the root al. The reason is that CAUS intervenes be
tween NEG and the root. Instead, the vP, the CAUS head, and NEG have to be each spelled
out by a distinct morpheme; the relevant structure is shown in (64) above. It correctly
predicts that even though NEG is present in the structure of the negated causative, the
suppletive negative form molu ‘not know’ cannot be used.
In sum, the CAUS head is closest to the root. Like all the other functional heads in
the above sequence, it is an optional element. When present, it triggers ‘causative intervention’
both for suppletive negative verbs (as discussed in this section) and suppletive
honorific verbs (as discussed in the previous section). The reason for this is that the
presence of CAUS blocks the lexicalisation of the bottommost vP along with HON1
or NEG.
5.6
EXPLAINING THE PARADOX
So far, we discussed the interaction between causativisation and negation, and between
causativisation and honorification. What we still have not discussed is the interaction
between negation and honorification, which is precisely the domain where the paradox
discussed in section 5.1 arises. For convenience, we repeat the relevant data here.
(37)       regular pattern     √EXIST              √KNOW
      a.   √X                  iss                 al
      b.   NEG √X              eps                 molu
      c.   √X HON              kyeysi              alsi
      d.   NEG √X HON          ani/mos kyeysi      molusi
The paradox is that with the root √EXIST, the presence of the honorific blocks negative
suppletion, as shown on line (d). This suggests that HON is closer to the root than NEG.
With √KNOW, the honorific does not block negative suppletion, suggesting NEG is closer
to the root.
The point of this section is to show that with the two HON heads in the structure, this
paradox can be resolved. We first focus on deriving the regular pattern as given in the
first column on the left. Recall that for such regular roots, we assume that they spell
out HON1 P.
As a starting point, let us draw the structures for a negated honorific and a negated
nonhonorific. The relevant structures are then as given in (68) (negated honorific) and
(69) (negated nonhonorific).
(68)  [HON2P [NEGP [NEG ani ] [HON1P HON1 [vP v √ ]]] HON2 ]
      (regular root ⇔ HON1P; si ⇔ HON2)

(69)  [NEGP [NEG ani ] [vP v √ ]]
      (regular root ⇔ vP)
What do these structures predict for the negative suppletive form molu ‘not know’?
Recall that the lexical entry for molu, given in (66) above, contains NEG and a pointer to
al. First of all, they predict that molu will be able to spell out the whole structure (69),
provided vP is lexicalised by al.
The structures further predict that honorification will not block negative suppletion
for al ‘know.’ The reason is that the root al ‘know’ is associated to a constituent of the
size HON1P, recall (67) above. Therefore, al can spell out either the HON1 P in (68), or
just the vP in (69). Since molu ‘not know’ has a pointer to al, it will apply always when
the NEGP contains NEG and al, regardless of the size of the structure spelled out by al.
Therefore, whenever a NEG head is merged to a constituent that has been spelled out as
al on the previous cycle, molu is going to be inserted. Therefore, we correctly predict
that we find molu both in (68) and (69). The consequence is that with molu ‘not know,’
honorification does not block negative suppletion.
Next we turn to the verb iss ‘be, exist.’ This verb has three different suppletive allo
morphs. It has a suppletive honorific form kyey, which spells out HON1 P and contains
a pointer to iss, see (70). The nonhonorific form iss then spells out just vP, see (71).
(70)  [HON1P HON1 [ iss ]] ⇔ kyey

(71)  [vP v √ ] ⇔ iss
Then, iss ‘be, exist’ also has a suppletive negative form eps ‘not exist’, whose lexical
entry is as given in (72):
(72)  [NEGP NEG [ iss ]] ⇔ eps
What do these entries predict about the interaction between honorification and nega
tion? In order to see that, consider the fact that the applicability of a negative suppletive
form is always evaluated at NEGP. The suppletive item applies when NEG has a sister that
has been spelled out by a particular lexical item. Therefore, in order to determine the
applicability of (72), we need to look at whether the sister to NEG is spelled out by iss
‘be’ or not. The examples in (73) and (74) show the relevant structures.
(73)  [HON2P [NEGP [NEG ani ] [HON1P HON1 [vP v √ ]]] HON2 ]
      (HON1P ⇔ kyey ‘be.HON’; HON2 ⇔ si)

(74)  [NEGP NEG [vP v √ ]]
      (vP ⇔ iss ‘be’; the whole NEGP ⇔ eps)
What we see here is that in the honorific structure (73), the HON1 P is spelled out by the
honorific suppletive form kyey, not iss. Therefore, the negative suppletive eps does not
match the NEGP in (73), and we correctly get the spellout ani kyey ‘not be.HON.’ The
consequence is that in this case, the presence of HON does block negative suppletion.
This is different from the nonhonorific structure, where iss is the only candidate for
spellout. Therefore, (74) is correctly predicted to be realised as eps ‘not be’.
The solution to the paradox relies on the most basic essence of the pointer hypoth
esis: namely that suppletive lexical items point to one another, and each spells out a
structure of a different size. Since al is ambiguous between spelling out HON1P and vP,
the form molu ‘NEG.know’ inherits this ambiguity and we find it both in the honorific
and nonhonorific environments. However, ‘be, exist’ is not ambiguous in the same
way: we find iss in nonhonorific environments, and kyey in honorific environments.
The negative suppletive eps ‘not be’ has a pointer to iss, and it therefore inherits the
distribution of iss, and both are only found in nonhonorific contexts.
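For concreteness, the following sketch (ours; the data structures mirror the entries in (66)–(72), but the control flow, the names LEX, DEFAULT, covers and spell, and the simple prefix/suffix linearisation are our own simplifications of cyclic, bottom-up spellout) derives all four cells of (37) for both verbs:

# A condensed sketch (ours) of how the hierarchy v < HON1 < NEG < HON2 and
# the pointer entries derive the paradox. At every head, the current root
# either stretches (Superset), is overridden by an item pointing to it, or
# a default marker is inserted; once an overt marker intervenes, no further
# overriding of the root is possible.

DEFAULT = {"NEG": "ani", "HON2": "si"}      # analytic negation, overt honorific

# entry: (set of heads the item spells out, pointer to the suppleted item)
LEX = {
    "iss":  ({"v"}, None),          "kyey": ({"HON1"}, "iss"),
    "eps":  ({"NEG"}, "iss"),       "al":   ({"v", "HON1"}, None),
    "molu": ({"NEG"}, "al"),
}

def covers(name, target):
    """All heads `name` can spell out, following its pointer chain."""
    heads, ptr = LEX[name]
    total = set(heads)
    while ptr is not None:
        more, ptr = LEX[ptr]
        total |= set(more)
    return target <= total

def spell(structure, base):
    """structure: bottom-up list of heads, e.g. ['v','HON1','NEG','HON2']."""
    root, span = base, 1
    prefixes, suffixes = [], []
    for i, h in enumerate(structure[1:], start=1):
        target = set(structure[:i + 1])
        if covers(root, target):                   # the root stretches over h
            span = i + 1
            continue
        cands = [n for n, (_, p) in LEX.items() if p == root and covers(n, target)]
        if cands and span == i:                    # strictly local overriding
            root, span = cands[0], i + 1
        elif h == "NEG":
            prefixes.append(DEFAULT["NEG"])
        elif h in DEFAULT:
            suffixes.append(DEFAULT[h])
        # a head with no default exponent (here HON1) simply stays silent
    return " ".join(prefixes + ["-".join([root] + suffixes)])

if __name__ == "__main__":
    full = ["v", "HON1", "NEG", "HON2"]
    print(spell(["v", "NEG"], "al"),  spell(full, "al"))    # molu   molu-si
    print(spell(["v", "NEG"], "iss"), spell(full, "iss"))   # eps    ani kyey-si

The crucial property carried over from the text is that overriding is only possible when nothing has intervened between the root and the newly merged head, and that eps only matches a NEGP whose sister has been spelled out by iss, which is what makes it unavailable once kyey has taken HON1P.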
Importantly, this solution maintains the idea that suppletive roots spell out struc
tures of different sizes, and therefore, they never compete with each other: there is no
point in the derivation where both eps ‘not be’ and iss ‘be’ can be inserted. Since we
do not need competition among suppletive lexical items, we can maintain the idea that
whenever multiple roots match, the choice between them is free.
CONCLUSION
This paper has proposed an approach that reaches an important theoretical goal, namely
to allow for root suppletion within a theory of narrow syntax that adopts the Strong
Modularity Thesis, which holds that syntax is both phonologyfree and conceptfree.
By dispensing with indexed √s, the theory is also compatible with approaches where
√s are dispensed with altogether (Ramchand, 2008). What makes this type of theory
available is a bottom-up phrasal lexicalisation, where √s are kept distinct from
morphological roots. The latter come in a variety of classes, each class associated with a
particular amount of functional structure.
We have further explained why and how such a theory is compatible with the fact
that suppletion often cooccurs with overt marking. In order to test the predictions, we
looked at the details of comparative/superlative suppletion in English, Czech and Latin.
What we found is that suppletion in these languages is inevitably correlated with the
reduction of overt morphology, which supports the empirical predictions of the model.
We finally discussed a paradox involving the interaction of negation and honorification
in Korean suppletion. Depending on the verb, different
patterns are observed, which led Choi and Harley (2019) to argue that allomorphy can
be conditioned nonlocally, across an intervening head. We discussed two cases which
represent a similar type of configuration, but which do not give rise to allomorphy, and
concluded that allomorphy must be conditioned strictly locally after all. Our solution
to the Korean paradox consisted in a postulation of two different HON heads, one below
and one above negation. The different behaviour of different verbs when negation and
honorification interact could then simply be attributed to the different lexical entries
for the relevant suppletive roots. Our theory of suppletive roots as involving pointers
in lexical entries moreover ensured the correct distribution of suppletive roots, without
having to take recourse to indexed √ nodes. We could thus maintain the Strong Modularity
Thesis, with a phonology- and concept-free syntax, and with free choice of the
root at the first cycle of insertion.
REFERENCES
ARREGI, K.; NEVINS, A. A monoradical approach to some cases of disuppletion. Theoret
ical Linguistics, v. 40, n. 3–4, p. 311–330, 2014.
BACHMANNOVÁ, J. Okrajové úseky severovýchodočeské nářeční oblasti z pohledu celočeského
(Na materiálu korespondenční ankety ÚJČ). Naše řeč, v. 90, p. 7–19, 1 2007.
BLIX, H. Spans in South Caucasian agreement: Revisiting the pieces of inflection. Nat
ural Language & Linguistic Theory, Springer, v. 39, p. 1–55, 2021.
BOBALJIK, J. Distributed Morphology. In: LIEBER, R. et al. (Eds.). The Oxford Encyclo
pedia of Morphology. Oxford: Oxford University Press, 2017.
. Syncretism without paradigms: Remarks on Williams 1981, 1994. In: BOOIJ,
G.; MARLE, J. VAN (Eds.). Yearbook of Morphology 2001. Dordrecht: Kluwer, 2002.
p. 53–85.
. The ins and outs of contextual allomorphy. University of Maryland Working
Papers in Linguistics, v. 10, p. 35–71, 2000.
. Universals In Comparative Morphology. Cambridge, MA: MIT Press, 2012.
CAHA, P.; DE CLERCQ, K.; VANDEN WYNGAERD, G. The Fine Structure of the Comparative.
Studia Linguistica, v. 73, n. 3, p. 470–521, 2019. DOI: https://doi.org/10.1111/
stul.12107.
CAHA, P.; PANTCHEVA, M. Contiguity beyond linearity. Talk at Decennium: The first 10
years of CASTL, Sept. 2012.
CHOI, J.; HARLEY, H. Locality domains and morphological rules. Phases, heads, node
sprouting and suppletion in Korean honorification. Natural Language & Linguistic
Theory, v. 37, p. 1319–1365, 2019.
CHUNG, I. Suppletive negation in Korean and distributed morphology. Lingua, v. 117,
p. 95–148, 2007.
. Suppletive verbal morphology in Korean and the mechanism of vocabulary
insertion. Journal of Linguistics, v. 45, n. 3, p. 533–567, 2009.
DE BELDER, M.; VAN CRAENENBROECK, J. How to Merge a Root. Linguistic Inquiry, v. 46,
p. 625–655, 2015.
DE CLERCQ, K.; CAHA, P., et al. Degree morphology. In: ACKEMA, P. et al. (Eds.). The Wiley
Blackwell Companion to Morphology. Oxford: Blackwell, to appear.
DE CLERCQ, K.; VANDEN WYNGAERD, G. *ABA revisited: evidence from Czech and Latin
degree morphology. Glossa, v. 2, n. 1, 69: 1–32, 2017.
. On the idiomatic nature of unproductive morphology. In: BERNS, J.; TRIBUSHIN
INA, E. (Eds.). Linguistics in the Netherlands. Amsterdam: Benjamins, 2019. p. 99–114.
. Unmerging Analytic Comparatives. Jezikoslovlje, v. 19, n. 3, p. 341–363, 2018.
DOKULIL, M. et al. Mluvnice češtiny 1. Fonetika. Fonologie. Morfologie a morfemika.
Tvoření slov. Praha: Academia, 1986.
EMBICK, D. Linearization and local dislocation: Derivational mechanics and interac
tions. Linguistic Analysis, v. 33, n. 3–4, p. 303–336, 2007.
. Localism versus Globalism in Morphology and Phonology. Cambridge, MA:
MIT Press, 2010.
GILDERSLEEVE, B. L.; LODGE, G. Latin Grammar. London: MacMillan and Co., 1903.
HALLE, M. Distributed morphology: impoverishment and fission. MIT Working Papers
in Linguistics, v. 30, p. 425–449, 1997.
HALLE, M.; MARANTZ, A. Distributed morphology and the pieces of inflection. In: HALE,
K.; KEYSER, J. (Eds.). The View from Building 20. Cambridge, MA: MIT Press, 1993.
p. 111–176.
HARLEY, H. On the identity of roots. Theoretical Linguistics, v. 40, p. 225–276, 2014.
HARLEY, H.; NOYER, R. Mixed Nominalizations, Short Verb Movement and Object Shift
in English. In:
. Proceedings of the North East Linguistic Society. University
of Toronto: Graduate Linguistic Student Association, 1998. p. 143–158.
. Stateofthearticle: distributed morphology. Glot International, v. 4, p. 3–9,
1999.
HAUGEN, J. D.; SIDDIQI, D. Roots and the Derivation. Linguistic Inquiry, v. 44, p. 493–
517, 2013.
. Towards a Restricted Realization Theory: Multimorphemic Monolistemic
ity, Portmanteaux, and Postlinearization Spanning. In: SIDDIQI, D.; HARLEY, H. (Eds.).
Morphological metatheory. Amsterdam: John Benjamins, 2016. p. 343–385.
KARLÍK, P.; NEKULA, M.; RUSÍNOVÁ, Z. Příruční mluvnice češtiny. Praha: Nakladatelství
Lidové noviny, 1995.
KIM, J.B.; SELLS, P. Korean honorification: a kind of expressive meaning. Journal of
East Asian Linguistics, v. 16, n. 4, p. 303–336, 2007.
KIPARSKY, P. ‘Elsewhere’ in phonology. In: ANDERSON, S.; KIPARSKY, P. (Eds.). A Festschrift
for Morris Halle. New York: Holt, Rinehart & Winston, 1973. p. 93–106.
KŘIVAN, J. Komparativ v korpusu: explanace morfematické struktury českého stupňování
na základě frekvence tvarů. Slovo a slovesnost, v. 73, p. 13–45, 2012.
MARANTZ, A. A late note on late insertion. In: KIM, Y. S. et al. (Eds.). Explorations in
generative grammar: A festschrift for DongWhee Yang. Seoul: Hankuk, 1994. p. 396–
413.
. Case and Licensing. In:
. Eastern States Conference on Linguistics.
Cornell University, Ithaca, NY: Cornell Linguistics Club, 1991. p. 234–253.
. Cat as a phrasal idiom: consequences of late insertion in Distributed Mor
phology. ms., MIT, 1996.
. Implications of Asymmetries in Double Object Constructions. In: MCHOMBO,
S. A. (Ed.). Theoretical Aspects of Bantu Grammar 1. Stanford University: CSLI Publi
cations, 1993. p. 113–151.
MARANTZ, A. No Escape from Syntax: Don’t Try Morphological Analysis in the Privacy of
your own Lexicon. In: DIMITRIADIS, A. et al. (Eds.). University of Pennsylvania Working
Papers in Linguistics. University of Pennsylvania, 1997. v. 4. p. 201–225.
MARTIN, S. E. A reference grammar of Korean. A complete guide to the grammar and
history of the Korean language. Rutland, VT: Tuttle Publishing, 1992.
MATUSHANSKY, O. More or better: On the derivation of synthetic comparatives and su
perlatives in English. In: MATUSHANSKY, O.; MARANTZ, A. (Eds.). Distributed Morphol
ogy Today: Morphemes for Morris Halle. Cambridge, MA: MIT Press, 2013. p. 59–
78.
MERCHANT, J. How Much Context Is Enough? Two Cases of SpanConditioned Stem
Allomorphy. Linguistic Inquiry, v. 46, p. 273–303, 2015.
MILLER, P.; PULLUM, G.; ZWICKY, A. The Principle of PhonologyFree Syntax: four appar
ent counterexamples in French. Journal of Linguistics, v. 33, p. 67–90, 1997.
MOSKAL, B. A Case Study in Nominal Suppletion. 2013. dissertation – University of Con
necticut, Storrs, CT.
MOSKAL, B.; SMITH, P. W. Towards a theory without adjacency: hypercontextual VIrules.
Morphology, v. 26, n. 3, p. 295–312, 2016. ISSN 1871-5656. DOI: 10.1007/s11525-015-9275-y.
OSOLSOBĚ, K. Komparativ. In: KARLÍK, P.; NEKULA, M.; PLESKALOVÁ, J. (Eds.). Nový en
cyklopedický slovník češtiny. Praha: Nakladatelství Lidové noviny, 2016. p. 839. ISBN
9788074224805.
PFAU, R. Features and categories in language production. 2000. PhD thesis – Johann
Wolfgang GoetheUniversität, Frankfurt am Main.
. Grammar as processor: a distributed morphology account of spontaneous
speech errors. Amsterdam: Benjamins, 2009.
POMINO, N.; REMBERGER, E.M. Verbal Suppletion in Romance Synchrony and Diachrony:
The Perspective of Distributed Morphology. Transactions of the Philological Society,
v. 117, n. 3, p. 471–497, 2019.
RADKEVICH, N. On Location: The structure of case and adpositions. 2010. dissertation
– University of Connecticut, Storrs, CT.
RAMCHAND, G. Verb Meaning and the Lexicon. Cambridge: Cambridge University Press,
2008.
STARKE, M. Complex Left Branches, Spellout, and Prefixes. In: BAUNAZ, L. et al. (Eds.).
Exploring Nanosyntax. Oxford: Oxford University Press, 2018. p. 239–249.
. Nanosyntax: A Short Primer to a New Approach to Language. Nordlyd, v. 36,
p. 1–6, 2009.
STARKE, M. Why you can’t stay away from clitics. Talk presented at GIST 7, Ghent, June
2014.
VANDEN WYNGAERD, G. The feature structure of pronouns: a probe into multidimen
sional paradigms. In: BAUNAZ, L. et al. (Eds.). Exploring Nanosyntax. Oxford: Oxford
University Press, 2018. p. 277–304.
VANDEN WYNGAERD, G. et al. How to be positive. Glossa, v. 5, n. 1, p. 23. 1–34, 2020.
ZWICKY, A. Phonological constraints in syntactic descriptions. Papers in Linguistics, v. 1,
p. 411–453, 1969.
ZWICKY, A.; PULLUM, G. The Principle of PhonologyFree Syntax: introductory remarks.
Working Papers in Linguistics, v. 32, p. 63–91, 1986.