Structural Representation of Sounds in Phonology: 1. Motivation
Friedrich Neubarth
ÖFAI / Univ. of Vienna
1. Motivation
What we aim at is a theory of phonology that meets two criteria:
– as a theory about a cognitive capacity that deals with the representation of
linguistic objects it should be as abstract and minimal as possible.
– as an interface between lexical representations, morphology and the physical
realisation of sounds (perception/production), it should bear a certain
isomorphism to what we can describe as phonetic interpretation.
Such a perspective strongly shapes the conditions on how a theory of phonology
could be designed. In principle, the relevant formulations can be found in the
outlines of Government Phonology (KLV85, KLV90, and especially Kaye 1995),
the framework that this work builds upon, extending also on recent work by
Markus Pöchtrager (2006, 2010; MP henceforth). What we attempt here is to
reach a higher degree of both abstractness and minimality. The second goal is
tied to an empirically anchored question: how many sound systems is such a
framework able to capture? Here, I want to emphasize the importance of a fine-
tuned generalizability of the model: it should neither over-generate nor
leave the sound systems of particular languages unrepresentable. I will not
comment on the notion of markedness, although it, too, is an issue not to be
neglected.
2. Objectives
There are four objectives that comprise the core features of the proposed theory:
1. Sounds are not atomic segments, their representations are phonological
structures and these structures receive a specific interpretation.
2. The set of melodic objects consists of only two elements, H(igh) and L(ow).
3. Headedness plays a central role in the interpretation of these elements.
4. The constraints on structure, as well as their interpretation, crucially rely
on directionality.
3. Historical background
For a moment, let us review the take-home messages from earlier theoretical
work on phonology as a theory of sounds and sound systems. Paradigmatic shifts
in science usually suggest that the whole perspective on a field has changed. But
usually, there are certain traits taken over without much discussion, mostly
because their content seems so obvious that they need not be highlighted within
the new frame. But what if these assumptions are misleading?
Needless to say, the method of lexical opposition remains an empirical
device. However, two assumptions are very problematic and seem to persist in
phonology to this day:
– Sounds of a phonological system within a language constitute the minimal
objects of linguistic representation. Peeking towards phonetics, these objects
are often labeled and regarded as segments. (See also Pöchtrager 2012.)
– In order to determine classes of sounds, recourse is made to phonetics,
mainly to articulation (e.g., velar, palatal, labial etc.). In a structuralist view,
these features do not define the content of a phoneme per se. Phonemes are
still seen principally in contrast to other phonemes (which prohibits a
universal conception of phonological representations of sounds).
A more straightforward view on phonetics, particularly acoustic phonetics (we
are talking about sound, aren’t we?) already undermines these two points:
– Certain sounds consist of more than one acoustic event: plosives, for instance,
have at least three phases: closure/pause, burst, and a release phase that may
or may not pass over into a vocalic part of the signal. See Fant (1964, 1974) for a
precise analysis of (acoustic) phonetic segments, which definitely do not
coincide with phonemes (using that historical term for phonological objects).
– Certain articulatory features do have a systematic correspondent in the
acoustic signal (i.e., the position of the tongue and the shape of the lips viz. the
position and distance of formants in vocalic sounds), others refer more to the
systematic nature of sound systems (e.g., sonorant, continuant) while a third
class just refers to phonological properties not of the sounds themselves (e.g.,
syllabic, long vs. short). This casts some doubt on the general applicability of
such descriptive notions to phonological objects, given that we want to
conceive them as cognitive entities.
3.2 SPE
The great disappointment of the century: instead of looking behind the above-
mentioned notions, SPE merged them into one system: the segment was taken for
granted, and a vast set of (mainly articulatory) features determines its content.
Thinking in matrices may have invited such a move. GP has always argued
against features and offered an alternative in terms of autosegmentally inspired
elements as melodic primitives that make up sounds.
Gunnar Fant, who as a phonetician worked together with Jakobson and Halle
on the concept of distinctive features, was rather explicit about the difficulties
arising from different perspectives, meaning phonetics and phonology:
“The speech wave is not a very good image of our abstract notion of speech as a
sequence of discrete invariable units selected from a finite inventory. What we can
see in the spectrogram is a mixture of continuous and discrete events.” (Fant 1974:
223)
“In general, one phoneme or one speech sound is encoded in the structure of
several successive sound segments. Conversely, any sound segment generally
contains information on the identity of several successive phonemes or speech
sounds.” (Fant 1964: 223)
Regarding distinctive features, he remarks that “[t]he concept of distinctive features
is a powerful tool in speech analysis. It is more economical to study how minimally
contrasting pairs of utterances differ phonetically and to search for rules expressing
such differences in all possible contrasting pairs where the feature operates than to
attempt to describe each contextual variant of a phoneme by a set of absolute
descriptors. However, the definition of features may vary according to the
particular interest and background of the investigator.” (Fant 1974: 235)
elementary constraints on how the various levels of sequences can be interrelated –
or, as we shall say, “associated”. (Goldsmith 1976:28)
Features by themselves do not spread; they merely identify a segment for what it is.
The domain of association of an “autosegment”, on the other hand, does spread,
quite automatically. (Goldsmith 1976:22)
Notice that the mere existence of autosegments undermines the concept of
phonemes as minimal units of phonology.
In a series of papers, Harry van der Hulst (see 1994, 2000, as representative
examples) has explored the idea of reducing the number of features encoding
melody to two: C and V. Structural configurations determine the interpretation of
these objects. The fundamental difference from various conceptions of GP, including
this one, is that these structures are not phonological structures per se, but
configurations expressing the geometry of features. Also, C and V are not taken as
elements with independent interpretations, but rather represent binary values
in specific locations of the feature geometry.
The two foundational papers of GP (KLV 1985, KLV 1989/90) took the
autosegmental idea one step further and suggested that:
– all phonological objects are made up of elements that have autosegmental
properties, i.e., they are privative (as opposed to features), primitive (meaning
that they are not decomposable) and interpretable.
– melodic material (elements) is associated to phonological structure via the
skeleton.
¹ Conventions for SGP representations: heads are underlined and separated from
operators to the right by a dot. Empty heads are indicated by an underscore: ‘_’.
– all expressions are headed; I and U may not combine; I and U must be heads:
A → [a], I.A → [e], I → [i], U.A → [o], U → [u] (5-vowel system)
– I and U may not combine; no element can license the other:
A → [a], _.IA → [ɛ], I → [i], _.UA → [ɔ], U → [u] (a different 5-vowel system)
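As an illustration only (the encoding of melodic expressions as head/operator pairs is my own, not the paper's notation), the two five-vowel systems above can be sketched as lookup tables:

```python
# Toy sketch (hypothetical encoding): an SGP melodic expression is a
# (head, operators) pair; the table entries come directly from the two
# five-vowel systems listed above. '_' stands for an empty head.

SYSTEM_A = {  # I and U must be heads
    ("A", frozenset()): "a",
    ("I", frozenset({"A"})): "e",
    ("I", frozenset()): "i",
    ("U", frozenset({"A"})): "o",
    ("U", frozenset()): "u",
}

SYSTEM_B = {  # no element can license the other: empty-headed expressions
    ("A", frozenset()): "a",
    ("_", frozenset({"I", "A"})): "ɛ",
    ("I", frozenset()): "i",
    ("_", frozenset({"U", "A"})): "ɔ",
    ("U", frozenset()): "u",
}

def interpret(head, operators, system):
    """Look up the phonetic value of a melodic expression, or None if
    the expression is not part of the given system."""
    return system.get((head, frozenset(operators)))

print(interpret("I", {"A"}, SYSTEM_A))   # e
print(interpret("_", {"I", "A"}, SYSTEM_B))  # ɛ
```

The point of the sketch is merely that the same primitives yield different inventories depending on which combinations the system licenses.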
A nice side-effect: it does not make any difference whether an element is
associated to one position once and for all, whether it spreads onto other
positions (i.e., vowel harmony), or whether it ‘appears’ only in certain
constellations (‘floating’ elements) – it is always the same element that
contributes to the ME (melodic expression) of a certain position.
These elements are quite many, and Harris (1994) had to import a number of
constraints from feature geometry in order to prohibit over-generation: elements
are assigned to different nodes in a (sub-phonological) structure: ROOT (Ɂ, h, N),
PLACE (A, I, U, R), LARYNGEAL (H, L). These nodes resemble tiers and can be shared
between different positions of phonological structure. Although this solves the
immediate problem, such a setup renders the theory a hybrid concept, and the
large number of elements suggests that there must be a flaw in the overall design.
(2) [diagrams: a. geometry of nodes, b. node sharing: homorganic /nt/]
Note that heads and immediate projections always take their complements to the
left, therefore both complements are on the same side with respect to the head.
The second function of the abandoned H element, to encode source contrast,
cannot be reflected by structure itself, so MP assumes that a structural relation
takes over this function, in particular m-command (melodic command), a
binary relationship between a head and a non-head terminal which implies that
the m-commandee receives the same interpretation as the m-commander. This
relation is encoded by the arrows that follow the projection lines. To motivate
this further, MP proposes that every un-annotated terminal (heads are always
annotated) has to be licensed by m-command, or – for terminals that are not
daughters of maximal projections (i.e., x2 in MP’s (20c,d)) – by control (horizontal
arrows).
Of course, this brief summary does not reflect the full scope of MP (2006). What is
really striking is that a categorial distinction (lenis/fortis) is encoded by a
structural relation, m-command, that may be ‘on’ or ‘off’, yielding fortis [f, p] or
lenis [v, d], and that two different relations are needed for the same function
(licensing of un-annotated terminals).
Regarding the set of elements, MP went one step further and proposed (2010)
that the element A should also be replaced by a structural configuration, a head
adjunction involving control. Regarding vowels, an empty (un-annotated) nucleus
head would be interpreted as a central schwa, an adjunction structure without
control as an e-schwa and with control as a full vowel (or perhaps a-schwa as
well).
(6) [diagram from Pöchtrager (2010, handout: ex. 30d)]
Again, we find a structural relation where the presence or absence of this relation
determines the phonetic interpretation.
Each structure consists of a head that may project and that can take a
complement to its left (cf. the Right-Hand Head Rule, but with the exception of
A-structures, see below). Further extensions are always to the right (cf. Anti-
Symmetry, which actually guarantees some sort of symmetry within one
constituent).
Phonological structure has three levels:
– constituent structure: onsets, nuclei, post-nuclear rime complement
– melodic structure: the structure of an onset or nucleus
– A-structure: structural representation of the A-element of SGP
The labels ‘onset’, ‘nucleus’, as well as the traditional terms ‘consonant’ and
‘vowel’ can now be derived as structural configurations within constituent
structure. Vowels/nuclei are in head-position, all other positions will be
interpreted as consonants/onsets (or as a rime complement). As graphical
conventions, heads are always underlined; maximal projections are not indicated.
The terminal nodes of constituent structure are indicated by an ‘x’, in order to
graphically discern constituent structure from melodic structure. That does not
mean that they would constitute the skeleton of SGP, which is obsolete now:
association of melodic elements H and L is mediated through melodic structure.
Melodic structure has exactly the same form. The only difference is that while we
may conjecture that constituent structure always has at least a head and a
complement (reflecting the existence of onset and nucleus), melodic structure can
consist of only one head without complement or further extensions. A fully
extended, simple onset would have the following structure (its interpretation
would be a velar stop).
(9) [tree diagram: head annotated ⎽, interpreted as the velar stop [g]]
Notice that ‘onset’ and ‘nucleus’ are derived categories here. They have no
theoretical status as special labels, whereas the terms ‘consonant’ and ‘vowel’ are
mere descriptive notions, on a par with ‘nasal’, ‘aspirated’ or ‘labial’.
What is still missing is a characterisation of the A-structure. The notion is taken
from MP (2010), but the definition is embedded within a different conception of
structure. In particular, this is the first time that linearity comes into play:
(10) A-structure:
A structure with a complement (immediate sister) to the right, where
‒ the complement bears no elements, or
‒ the head of that structure is not the head of the constituent,
forms a sub-structure of its own and receives an interpretation akin to the
A-element of SGP.
By convention, the node above the A-structure is marked with a short line if it is
the melodic head position of the constituent.
Elements may reside in any position of melodic structure. Notice that there are
only two cases of melodic structure where an A-structure has to be identified by
marking the head at a particular node, the example in (10) being one of them (see
ex. 14 below; the other case involves the difference between /s/ and /l/). In all
other structures, it follows from the possibilities of building structures whether a
particular part of the structure has to be tagged as an A-structure or whether it is
part of the melodic structure of the constituent. That structures are unambiguous
is an important requirement, since structure explicitly determines interpretation.
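The A-structure test of definition (10) can be illustrated with a toy sketch (my own encoding, not the paper's notation): melodic structures as nested nodes, with the definition as a predicate over a head and its right complement.

```python
# Toy sketch of definition (10). A head with a complement to its right
# forms an A-structure if that complement bears no elements, or if the
# head is not the head of the whole constituent.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    elements: frozenset = frozenset()        # H/L elements residing here
    is_constituent_head: bool = False        # head of the whole melodic structure?
    right_complement: Optional["Node"] = None

def is_a_structure(head: Node) -> bool:
    comp = head.right_complement
    if comp is None:          # no right complement: cannot be an A-structure
        return False
    return not comp.elements or not head.is_constituent_head

# An empty head with an empty right complement: an A-structure.
print(is_a_structure(Node(right_complement=Node())))  # True
# A constituent head whose right complement is filled with H: not an
# A-structure (cf. the remark on configurations like (18b) below).
print(is_a_structure(Node(is_constituent_head=True,
                          right_complement=Node(elements=frozenset({"H"})))))  # False
```

The predicate returns a unique verdict for every configuration, which is one way of reading the unambiguity requirement stated above.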
We are now in the position to talk about the interpretation of structures and
elements residing in particular positions within these structures. Let us start with
non-head constituents and the simplest configurations:
(12) [tree diagrams: a. empty head → [Ɂ], b. H → [j], c. L → [w]]
iii. Regarding the interpretation of elements, the immediate complement
to the right of the head of an A-structure inherits ‘headedness status’
from the constituent (onset –> non-head, nucleus –> head).
(14) [tree diagrams: a. [ɣ], b. [r]]
(15) [tree diagrams: a. [ɣ], b. [v], c. [ʝ], d. [ð]]
Adding one more layer on top (head, complement to the left plus one complement
to the right) yields plosives. Acoustically as well as articulatorily, plosives have a
complex make-up: in order to form a plosive, one has to form a complete closure
and then release it with a burst. In terms of acoustics, there is a phase of silence
and a burst. If we want to translate these physical facts into structure, it is
immediately clear that the structure must contain a head (of course) and two
positions that reflect closure and release, and also that these two positions must
be on different sides of the head. Remember that the head bears information
about the place of articulation, meaning that it represents the location of the
occlusion. Consider the representation of a labial plosive:
(16) [tree diagram: labial plosive, L in head position → [b]]
(17) Interpretation of elements in non-head position:
i. to the left of the head (complement):
– H is interpreted as aspiration (source contrast),
– L is interpreted as (pre-)nasality;
ii. to the right of the head:
– H is interpreted as frication (or affrication),
– L is interpreted as voicing (source contrast).
Consider the following examples:
(18) [tree diagrams: a. [ŋ], b. [h], c. [n], d. [s]]
An L element in the complement to the left of the head will always yield an
interpretation as a nasal. Conversely, an H element to the right of the head leads
to an interpretation as a fricative ([h], [f], [s], [ʃ]). (Notice that this configuration,
even though the head is to the right of the complement, cannot be an A-structure,
since the head is the head of the entire melodic structure and the complement is
filled with an element.) Within a two-layered structure, L to the left and H to the
right result in complex sounds: pre-nasalised stops and affricates. Clearly, these
must be homorganic, since there is only one head determining place.
(19) [tree diagrams: a. [mb], b. [p͜f], c. [nd], d. [d͜s]]
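The positional interpretation rules in (17) can be written as a simple lookup table; this is a hedged sketch, and the descriptor strings are my paraphrases of the text, not the paper's terminology.

```python
# Sketch of (17): what an element contributes depends on which side of
# the head of a non-head (onset) structure it occupies.

INTERPRETATION = {
    ("left", "H"): "aspiration (source contrast)",
    ("left", "L"): "(pre-)nasality",
    ("right", "H"): "frication (or affrication)",
    ("right", "L"): "voicing (source contrast)",
}

def describe(left=frozenset(), right=frozenset()):
    """Collect manner descriptors for elements around the head."""
    out = []
    for el in sorted(left):
        out.append(INTERPRETATION[("left", el)])
    for el in sorted(right):
        out.append(INTERPRETATION[("right", el)])
    return out

# L to the left plus H to the right: a homorganic pre-nasalised
# affricate, as in the configurations of (19).
print(describe(left={"L"}, right={"H"}))
```

Since place is read off the single head, homorganicity of such complex sounds falls out for free, exactly as the text argues.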
In the reverse (linear) order – H to the left and L to the right of the head – these
elements express source contrast; nothing new has to be said about that. Let me
just comment on the directionality: it may strike one as unintuitive that H, often
phonetically manifest as aspiration (after the release or affrication), should be
encoded in a position to the left of the head. However, one can conceive of the
phonetic correlation as follows: in order to produce aspiration, one has to have a
stronger phase of closure (before the burst) in order to build up more pulmonic
pressure. This is expressed by the H element to the left of the head. Additionally,
one might expect to find a longer phase of silence in the acoustic signal. And it
was also this particular position (the highest to the left of the head) that MP took
to regulate the fortis/lenis contrast, in his terms by virtue of m-command [+ON],
here with an H element residing in the relevant position.² At this point, let us
include the A-structure in our inventory, so that we can show the effect of source
contrast with spirants, plosives and affricates:
(20) [tree diagrams: a. [d], b. [th], c. [ð], d. [θ]]
² MP uses the representations in (5) to explain the contrast between long and short vowels
followed by a lenis or fortis consonant in English (e.g., bead, beat, bid, bit; see also
Odden 2011 for a discussion – unfortunately not mentioning MP’s analysis or data). This
effect disappears when the last onset is followed by a phonetically realised vowel (e.g.,
lady, Libby). In his terms, every position has either to be annotated (filled by an element)
or licensed by control or command. Having the higher complement of the last lenis onset
not m-commanded by the head of the onset structure calls for command from outside. In
case there is no following nucleus, the preceding nucleus can capture this position by m-
command, thus becoming a bit longer itself. If a vowel follows, it will be structurally
closer and exert m-command on the relevant position within the onset. In the proposed
system, such a reasoning cannot apply, since there is no notion of command whatsoever.
Also, I find it problematic to assume that phonological structures should be embedded
into each other. Most problematic for me is the idea that a following nucleus (possibly
arising from a morphological process) should be inserted between the hosting projection
of the first nucleus and the following onset, tampering with hierarchical relations.
However, one can do without: an onset without following nucleus will form a prosodic
unit with the preceding nucleus. If the (left) complement of that onset is filled by H, this
position contributes to the length of the onset. Assuming that structurally identical
prosodic units will be assigned a similar amount of time in speech, the onset without H
will proportionally leave more space (or time) to the preceding nucleus than an onset
with a filled complement. The circumstance that this effect disappears when the onset
forms a new prosodic domain (with a following nucleus) is rather straightforward. Not so
easy to capture is MP’s analysis of three-way length contrasts in Estonian onsets and
nuclei, which is far more complex and goes beyond the scope of this talk.
(20) [tree diagrams, cont.: e. [d͜s], f. [t͜sh]]
Likewise, an L element to the right of the head encodes voicing. Again, one is
inclined to think of voicing as a phonetic property that is most manifest on the
left periphery of a speech segment, but this does not necessarily bear on the
phonological representation. Crucially, there are no phonologically ‘voiced’
spirants or nasals, since they lack a position to the right by definition. On the
other hand, it is straightforward to have truly ‘voiced’ fricatives/affricates.
(21) [tree diagrams: a. [d̬], b. [d͜z]]
And finally, if we allow for adjunction of maximal structures (with their own
domain and head, under certain preconditions), we may arrive at very simple
representations of rather complex sounds. Take as an example [nd͜zw] – a
labialised, prenasalised, coronal affricate:
(22) [tree diagram: adjunction structure → [nd͜zw]]
b. /r/ has L:
In Mandarin Chinese, /r/ is retroflex and patterns with the palato-
alveolar series of sibilants/affricates with respect to the ‘neutral’ vowel,
which articulatorily has only phonation but involves no change in place
(both with shi and ri, the vowel is a retroflex, central schwa).
Sibilants/affricates in Mandarin have three series: palatal, alveolar and
palato-alveolar. Palatals are clearly associated with an H element in
head position. Therefore, it is quite plausible to associate palato-
alveolars with the presence of L in the head of an A-structure.
Phonetically, palato-alveolar sibilants tend to be retroflex; the
liquid /r/ is definitely retroflex, and this property has been attributed
to the combination of a U and an A element in SGP, which translates to
an L element in the head of an A-structure. (Note that palato-alveolar
fricatives can also be the result of an H element within an A-structure,
which obviously is the case in the palatalisation of velar stops,
resulting very often in /ʃ/ or /ʒ/.)
c. /l/ has L:
/l/-vocalisation in Slavic languages and Brazilian Portuguese results in
a rounded vowel ([o] or [u]). /l/-vocalisation in many Bavarian
dialects spreads roundedness onto the preceding vowel (e.g., [fyː] ‘viel’
much).
The proposal then is that /r/ is a simple A-structure. The lateral liquid is more
complex; I adopt MP’s analysis here and assume that /l/ involves an A-structure
in non-head position, but contrary to MP I assume that /l/ forms only a one-layered
structure, keeping two-layered structures strictly reserved for plosives. This also
has the advantage of allowing for fortis laterals (parallel to spirants), while
elements occurring in head position do not contribute to the manner specification
of the resulting sounds:
(25) [tree diagrams: a. [r/ɹ/ɻ], b. [l/ɭ/ʎ], with the head position empty, H, or L respectively]
4.3 Head constituents (vowels)
(26) [tree diagrams: a. [ɛ], b. [e], c. [æ]]
An H element to the right of the A-structure, which has head status anyway,
indicates ATR; the correlation with length (at least in German) falls out quite
nicely under such an account. Example (26c) is more intricate. Recall that the
complement of an A-structure inherits the head status from the constituent,
hence we get ‘head status’, because the constituent is a nucleus. Therefore the H
element can be interpreted as place, but not as the head of the A-structure. This
way we receive an interpretation as a low front vowel (i.e. [æ]).
In Viennese dialect, low vowels (i.e., [æ], [ɒ], [ɶ]) arise as monophthongized
counterparts to diphthongs in Standard German. The diphthong [aɪ] in (27a) has
an A-structure in non-head position where the head of the whole melodic
structure is H, representing the second part of the diphthong. The monophthong
retains the structural configuration, but has the H element in a different position.
(27) [tree diagrams: a. [aɪ], b. [æː]]
One question remains to be answered: why can low vowels never be [+ATR]? I
have no good answer here; just notice that this would amount to an A-structure,
an empty head, and an H element as the right complement to this empty head.
Nasal vowels are represented by an L element, also to the right of the head. Thus,
the interpretation of L in non-head position mirrors the situation of L in non-head
constituents (onsets). The advantage of this assumption is that nasal spread can
be conceived as moving the L element from one position (leftmost of the onset) to
an adjacent position of the left-neighbouring nucleus. A nice consequence of
merging the two SGP elements U and L into one is that with nasal spread we often
receive a falling-diphthong interpretation of the nucleus (e.g., Viennese
“aunfongan” [ɔ̃ʊ̃nfɔŋɐn] ‘anfangen’ to begin, in H. C. Artmann’s spelling).
(28) [tree diagrams: a. [ɔ̃], b. [ɔ̃ʊ̃]]
Let us conclude with the seemingly least complex vowels, schwas. In order to
discern mid/e-schwa and central schwa ([ɨ]), we assume with MP that the latter
are indeed the least complex structures, i.e., they contain neither internal melodic
structure nor elements. All other alleged schwas contain an A-structure and are in
fact identical to full vowels. What makes them be interpreted phonetically as
schwas is that they reside in positions that receive no stress at all. Schwa elision,
which in GP’s terms is a parametric option to have certain licensed nuclei remain
phonetically un-interpreted, now applies also to positions containing structure
and melody. But such a behaviour is predicted to obtain anyway, and has already
been attested (cf. Kaye 1973 on Odawa). An interesting consequence is that there is
good reason to believe that in Bavarian dialects the default realisation of empty
vocalic positions is [a/ɐ] and not [ɛ] or [ɘ]. This is corroborated by the fact that
more structure is needed for representing vowels anyway (e.g., regular simplex
vowels are tense, length distinctions seem to have disappeared in certain dialects,
such as for example Viennese).
(29) [tree diagrams: a. [ɨ], b. [ɐ/a], c. [ə/ɛ], d. [æ]]
Thus far, we have said nothing about tone. Remember that we have identified
the complement to the right of the head as the host for L expressing nasality and
H expressing ATR. The immediate complement position to the left has not been
used yet. This position could in principle host H and L representing tone. The
reason I am reluctant to elaborate further on this is that with non-head constituents
(onsets/consonants), we observed an asymmetry between the elements: H
representing aspiration is to the left, L representing voicing is to the right.
Shouldn’t that also hold in head constituents representing vowels? What speaks
against this is that tone, lexically associated with structurally higher objects than
nuclei, seems to be exclusive with respect to a single slot: either there is a high or
a low tone. Merging two different (linearly ordered) tones onto one position may
well bring out contour tones. In addition, tone systems can attain a high degree
of complexity (indicating that more than one level of representation is involved),
and where and how that complexity should be represented is still not very well
understood.
5. Conclusion
The aim of laying out a conception of phonology that is both as minimal and as
abstract as possible seems to be fulfilled to a high degree.
Most, if not all, insights of SGP can be retained as proposed in KLV85 and
KLV89/90.
Pöchtrager’s ideas about phonological structure are incorporated without
recourse to arbitrarily active structural relations.
The prospects for assigning the sound systems of many languages a sensible,
intuitively clear and phonologically plausible representation are quite good.
For future research: the interaction of various empty or filled positions may
reveal even more sophisticated insights about the notion of phonological length.
References:
Charette, M & A. Göksel (1994) “Vowel Harmony and Switching in Turkic
languages.” SOAS Working Papers in Linguistics and Phonetics 4, 31-52.
Fant, G. (1964) “Phonetics and speech research.” In: D.W. Brewer (ed.) Research
Potentials in Voice Physiology, State Univ. of New York, 199–256.
Fant, G. (1974) “Analysis and synthesis of speech processes.” In: B. Malmberg
(ed.) Manual of Phonetics. Amsterdam, London: North-Holland Publishing
Company, 173–277.
Goldsmith, J. (1976) Autosegmental phonology. Ph.D. diss., MIT.
Harris, J. (1994) English Sound Structure. London: Blackwell.
Hulst, H.G. van der (1994) “An introduction to Radical CV Phonology.” In: S. Shore
& M. Vilkuna (eds.). SKY 1994: Yearbook of the linguistic association of
Finland. Helsinki, 23-56.
Hulst, H.G. van der (2000) “Features, segments and syllables in Radical CV
Phonology.” In: J. Rennison (ed.). Phonologica 1996: Syllables!? The Hague:
Holland Academic Graphics, 89-111.
Hildenbrandt, T. (2013) Ach, ich und die /r/-Vokalisierung. – On the difference in
the distribution of [x] and [ç] in Standard German and Standard Austrian
German. MA thesis, Univ. of Vienna.
Jensen, S. (1994) “Is Ɂ an Element? Towards a Non-segmental Phonology.” SOAS
Working Papers in Linguistics & Phonetics 4, 71–78.
Kaye, J. (1973) “Odawa stress and related phenomena.” In: Odawa Language
Project: Second Report. University of Toronto.
Kaye, J. (1995) “Derivations and Interfaces.” In: J. Durand & F. Katamba (eds.)
Frontiers of Phonology. London & New York: Longman, 289–332. [Also in
SOAS Working Papers in Linguistics and Phonetics 3, 1993, 90–126.]
Kaye, J. (2000). A Users' Guide to Government Phonology. Ms., University of
Ulster. [available: www.unice.fr/dsl/tobweb/scan/Kaye00guideGP.pdf]
Kaye, J., J. Lowenstamm & J.-R. Vergnaud (1985) “The internal structure of
phonological representations: a theory of Charm and Government,”
Phonology Yearbook 2, 305–328. [KLV85]
Kaye, J., J. Lowenstamm & J.-R. Vergnaud (1989) “Konstituentenstruktur und
Rektion in der Phonologie”. Prinzhorn, M. (ed.) Phonologie. Linguistische
Berichte Sonderheft 2. Opladen: Westdeutscher Verlag, 31–75.
Kaye, J., J. Lowenstamm & J.-R. Vergnaud (1990) “Constituent structure and
government in phonology.” Phonology Yearbook 7/2, 193–231. [KLV90]
Neubarth, F. & J. R. Rennison (2005) “Structure in Melody, and vice versa.” In: N.
Kula & J. van de Weijer (eds.) Papers in Government Phonology. Special issue
of Leiden Papers in Linguistics 2.4, 95–124.
Odden, D. (2011) “The Representation of Vowel Length.” In: M. van Oostendorp, C.
J. Ewen, E. Hume & K. Rice (eds.) The Blackwell Companion to Phonology. Vol. I:
General Issues and Segmental Phonology. Malden, MA & Oxford: Wiley-
Blackwell.
Pöchtrager, M. A. (2006) The structure of length. PhD dissertation, University of
Vienna.
Pöchtrager, M. A. (2010) The Structure of A. Paper presented at the 33rd GLOW
Colloquium, April 13–16, 2010, Wrocław, Poland.
Pöchtrager, M. A. (2012) Beyond the Segment. Talk given at the CUNY Conference
on the Segment, Jan. 11-13, 2012, New York City.
Rennison, J. R. & F. Neubarth (2003) “An x-bar theory of Government Phonology.”
In: S. Ploch (ed.) Living on the edge. 28 papers in honour of Jonathan Kaye,
Berlin: Mouton, 95–130.
Williams, E. (1976) Underlying Tone in Margi and Igbo. Linguistic Inquiry 7(3),
463–484.