Semantics. Summary
Topic 1. SEMANTICS AS A LINGUISTIC DISCIPLINE. MEANING, CONCEPTS AND
REALITY. BASIC SEMANTIC NOTIONS.
UNIT 1. Semantics in Linguistics
Introduction
Semantics is the study of meaning communicated through language. Its subject of study is the
linguistic knowledge that speakers have, which, according to modern linguistics, comprises
different types of knowledge. Linguistic description is therefore divided into different levels of
analysis: phonology, syntax, and semantics. Semanticists aim to describe semantic knowledge.
The definitions theory states that to give the meaning of linguistic expressions we should
establish definitions of the meanings of words. There are three problems to face when
applying this approach:
- circularity: how can we define a word except in other words? One solution to this problem
is to design a semantic metalanguage with which to describe the semantic units and rules of all
languages; a metalanguage is a tool of description. An ideal metalanguage would be neutral
with respect to any natural language and should fulfil the scientific criteria of clarity, economy,
consistency, etc.
- the question of whether linguistic knowledge (the meaning of words) is different from
encyclopedic knowledge (knowledge of the way the world is), which would also be addressed by
setting up a metalanguage.
Linguists identify different levels of analysis; that is to say, linguistic knowledge forms
distinct modules, or is modularised. However, some writers state that meaning is a product of
all linguistic levels and therefore cannot be separated from syntax or morphology, as the
theory known as Cognitive Grammar (Langacker) maintains.
The mental lexicon is a large but finite body of knowledge, a mental store of words, part of
which must be semantic. With this limited mental store, speakers can create an unlimited
number of expressions, phrases and utterances, but no new words. That is, there is an
inventory of word meanings, but sentence meaning is not listed anywhere: it arises in the act
of speaking. This is the main difference between word meaning and sentence meaning:
productivity. To allow this, the rules for sentence formation must be recursive,
allowing repeated embedding or coordination. Thus, sentence meaning is compositional: the
meaning of a sentence is determined by the meaning of its component parts and the way in
which they are combined.
The meaning of a word derives both from what it can be used to refer to and from the way its
semantic scope is defined by related words. Words stand in a relationship to the world; they
allow us to identify parts of the world and make statements about them. This relationship is
usually called reference. On the other hand, meaning can also be defined by relations to other
words, such as taxonomies, which link elements within the vocabulary system according to
their sense.
The terms utterance, sentence and proposition describe different levels of language. Utterance
is the most concrete: an utterance is created by speaking or writing a piece of language.
Sentences are abstract grammatical elements obtained from utterances. Propositions are
descriptions of states of affairs, and for some writers they are a basic element of sentence
meaning. The same proposition can be represented by different sentences (e.g. active and
passive voice).
Literal meaning refers to instances where the speaker speaks in a neutral, factually accurate
way, whereas non-literal or figurative language refers to utterances where the speaker
describes something in untrue or impossible terms in order to achieve special effects. These
figurative uses of language include metaphor, irony, metonymy, synecdoche, hyperbole, and
litotes. Nevertheless, speakers use figurative language in everyday speech so frequently
that it is difficult to find instances of purely literal uses of language. Scholars such as Lakoff
and Johnson see metaphor as an integral part of human categorization. Moreover, the
normalisation of some metaphors (fossilized metaphors) makes it difficult to distinguish
between literal and non-literal uses of language, that is, metaphors fade over time and
become part of normal literal language.
Semantics and pragmatics are both concerned with the transmission of meaning. In
pragmatics, meaning is described in relation to speakers and hearers, whereas in semantics,
meaning is abstracted away from users. Semantics is concerned with sentence meaning and
pragmatics with speaker meaning. However, the line is not clear between both disciplines and
it is difficult sometimes to decide which phenomena are semantic and which pragmatic.
Introduction
In semantics the action of identifying with words is often called referring or denoting. The
entity referred to is usually called the referent. However, some writers separate the terms
refer and denote. For those writers, refer is used when a speaker picks out entities in the world
while denote is a property of words that link a linguistic expression with the world. Denotation
is a stable relationship irrespective of context whereas reference is a moment-by-moment
relationship.
Two of the most important approaches in semantic theory are the referential (or denotational)
approach, which aims to show how the expressions of a language relate to situations; and
the representational approach, which emphasises that our ability to talk about the world depends on
our conceptualisation of it, that is, different conceptualisations influence the description of
real-world situations. Formal, logic-based theories of semantics exemplify the denotational
approach, while Langacker's Cognitive Grammar and Rosch's Prototype Theory are representational.
Reference
Types of reference
Words can be used to refer in several ways. The units which most clearly reveal this function of
language are nominals (names and noun phrases). The following are some basic distinctions in
reference:
There are linguistic expressions which can never be used to refer (i.e. so, very, maybe, if, not,
all). These are intrinsically non-referring items for they do not identify entities in the world.
Other kinds of words, such as nouns, can be either referring or non-referring depending
on the speaker’s intention: A cat is a feline (non-referring) or I feed my cat (referring).
Some expressions always have the same referent, such as the Atlantic Ocean, while others
have reference that is totally dependent on context and are said to have variable reference
(she, you, he). These are deictic words.
The referent of an expression would be the thing picked out in a particular context, whereas
its extension would be the set of things, generally speaking, which could possibly be the
referent of that expression, i.e. the referent of cat in I feed my cat would be my cat and its
extension is the set of all cats. This relationship between an expression and its extension is
called denotation.
Names
Names are the simplest case of nominals which have a reference. They are definite in that they
identify a referent. There are two important approaches to how names work: the
description theory, in which a name is taken as a label for knowledge about the referent, or for
one or more descriptions that refer to the same item; and the causal theory, which stresses the
role of social knowledge in the use of names and recognises that speakers may use names
with very little knowledge of the referent.
Indefinite and definite noun phrases can function like names to pick out an individual. Definite
noun phrases can also form definite descriptions as in I ate with the King of France. Noun
phrases are used to refer to groups of individuals, either collectively, where the focus is on the
aggregate or distributively, where the focus is on the individual members of the group.
Reference as a theory of meaning presents a number of problems. The first is that
there are words that are intrinsically non-referring, such as so, all, very, but, etc. Another
problem is that many nominal expressions do not have a referent that exists or has ever
existed, e.g. a unicorn. Thus, if meaning is taken to be a relation between words and entities in
the real world, such expressions would be meaningless. Also, the relation between a word
and its referent is not always one-to-one: we can refer to a person by her proper
name or by definite descriptions that mean different things.
Thus, we can infer that meaning and reference are not exactly the same thing, that is, there is
more to meaning than reference. Here we should distinguish between two aspects of semantic
knowledge: sense and reference. In this division, sense allows reference since we need to
understand an expression in order to use it to refer to an individual. Thus, the various ways of
describing a person would have different senses but the same reference; and the meaning of
an expression will arise both from its sense and its reference.
Mental Representations
Concepts
One traditional approach to describing concepts is to define them by sets of necessary
and sufficient conditions: a set of characteristics or attributes that contains the
information needed to decide whether an entity fits within a category. For a
concept like WOMAN, the attributes that any woman must have are the necessary conditions,
whereas a set of attributes that is enough to guarantee that an entity is a woman constitutes
the sufficient conditions. This approach is sometimes called the definitional theory of concepts.
However, speakers sometimes use words without knowing anything about their necessary and
sufficient conditions.
Prototypes
Because the definitional theory of concepts has problems deciding which conditions are
necessary or sufficient to define an entity, other theories have been proposed. Eleanor Rosch
and her co-workers proposed the notion of prototypes. This is a model of concepts which views
them as structured, with both central or typical members of a category and less prototypical,
peripheral ones. The central members come to the speaker’s mind more quickly than less
prototypical ones. However, this theory presents problems as well: an item may resemble
two different prototypes, for example whale. There are other approaches to typicality effects:
frame theory (Fillmore) and the theory of idealized cognitive models (Lakoff).
Proponents of prototype theory have investigated conceptual hierarchies and have proposed
three levels of generality: a superordinate level, a basic level and a subordinate level. The basic
level corresponds to the prototype, it is the most used in everyday life, it is acquired first by
children, and it is therefore identified as cognitively important. The superordinate level has
relatively few characteristics, it is more general, whereas the subordinate is more specific and
includes still more features.
However, the strict identification of thought and language is rejected by many linguists in
cognitive science. These researchers maintain, on the one hand, that there
is evidence of thinking without language, so language cannot wholly shape our conceptualisation
of the world; and, on the other, that language underspecifies meaning: language
itself does not carry the whole meaning intended by the speaker, and the hearer must infer
it by filling the gaps with context and with the implications made by the speaker.
If so, speakers translate their thoughts into a language rather than simply voicing their
thoughts. Supporters of a language of thought state that memory and reasoning processes seem to
make use of some sort of propositional representation that lacks the surface syntax of a
spoken language. This language of thought is sometimes called Mentalese, and researchers
propose that it is roughly the same for everybody, and therefore universal.
Grammatical categories of words are also partly defined by their semantic features. At the level of
writing, we can identify words because there is a space between them; these are usually known
as orthographic words. For syntactic purposes, a semantic word can take different forms, e.g.
walks/walked, resulting in two grammatical words that share the same lexeme, the verb walk.
However, we can also have several lexemes represented by one phonological word, or a word
with several senses. A group of senses that share properties, such as pronunciation or syntactic
category, is often called a lexical entry. In a dictionary, each entry corresponds to a lemma.
However, to define what a word is we need to apply semantic and grammatical criteria
together: the word as a symbolic, linguistic counterpart to a concept, and the word as
the minimum free unit of speech. Neither of these definitions works on its own; they
do not give a solution for borderline cases such as words that do not relate to a concept, e.g.
and, of, etc., or words that are not really independent, such as the.
Context, either syntactic or situational, may affect word meaning. Regarding distribution,
words tend to occur together repeatedly and are influenced by their context. Words with
similar meaning may undergo slight changes in meaning depending on the word they are next
to, e.g. a strong tea but a powerful car. Some words have a preference for one term over
another, e.g. blond hair but addled eggs. Fixed expressions such as idioms are the result of the
fossilization of collocations.
Situational context, on the other hand, may affect the way a word is interpreted. Here we must
deal with cases of ambiguity and vagueness. In cases of vagueness the context can add
information that is not specified in the sense; that is, a word can be used with the same sense
in different contexts, and in this respect the word is considered vague. In cases of
ambiguity, by contrast, the context causes one of the senses to be selected.
There are some tests to identify lexical ambiguity. One test uses abbreviatory forms like do so,
do so too and so do, because there is a convention of identity between them and the preceding
phrase, so they select the same meaning as the preceding verb phrase. If the do so
clause makes sense, then we are dealing with lexical ambiguity. A second type of test, the sense
relations test, relies on a relation of near synonymy between words, with one sense in a
network of relations with certain lexemes and another sense in a different
network. If one of the senses of the word is odd in one of the situations described, we are
dealing with lexical ambiguity. A third test employs zeugma, a feeling of oddness that arises
when two distinct senses of a word are activated in the same sentence, e.g. I play the guitar
and football.
Lexical Relations
The lexicon should be understood as a network rather than a listing of words. One of the basic
principles of organization is the lexical field: a group of lexemes that belong to a
particular activity or area of knowledge. Lexical relations are thus more common between
lexemes in the same field.
Homonyms are unrelated senses of the same phonological word. There is a distinction
between homographs, senses of the same written word, and homophones, senses of the
same spoken word.
Polysemy, on the other hand, is a word with multiple, but related, senses. Polysemous senses
are listed under the same lexical entry, while homonymous senses are given separate entries.
Synonyms are different phonological words that have the same or similar meaning. However,
exact synonyms are very rare, they usually have different distributions or collocations, or
belong to different registers or styles.
- Complementary antonymy is a relation between words in which the negative of one implies
the positive of the other. Such pairs are also called contradictory, binary or simple antonyms.
- Gradable antonymy is a relation in which the positive of one term does not imply the
negative of the other, e.g. hot/cold. Gradable antonyms are usually adjectives and have
intermediate terms such as warm/tepid/cool. They are relative in the sense that they rely on
subjective, context-dependent standards, and usually one term is more basic and common.
- Reverse is a relation between terms describing movement, where one term describes
movement in one direction, and the other in the opposite direction. This relation can be
identified as well in processes such as inflate/deflate.
- Converses are terms which describe a relation between two entities from alternative
viewpoints as in own/belong to.
- Taxonomies are hierarchical classification systems of terms at the same level, for example
the colour adjectives or the days of the week, or at different levels (i.e. hyponymy). The
members of a taxonomy at the same level are called taxonomic sisters and they are
incompatible with each other. Some taxonomies are closed and others open.
Hyponymy is a relation of inclusion. A hyponym includes the meaning of a more general word,
i.e. dog-animal, which is called superordinate or hypernym. Hyponymy is a transitive relation;
each level includes the next vertically.
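Because hyponymy is transitive, chains of inclusion can be followed mechanically up a taxonomy. A minimal sketch in Python; the toy taxonomy and the helper name `hypernyms` are our own illustration, not from the text:

```python
# Direct hyponym -> hypernym links (toy data, illustrative only)
hypernym_of = {
    "poodle": "dog",
    "dog": "animal",
    "animal": "organism",
}

def hypernyms(word):
    """Collect all hypernyms of `word` by following links transitively."""
    result = []
    while word in hypernym_of:
        word = hypernym_of[word]
        result.append(word)
    return result

print(hypernyms("poodle"))  # ['dog', 'animal', 'organism']
```

Transitivity is what licenses the inference: a poodle is a dog, a dog is an animal, so a poodle is an animal.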
Meronymy is a term used to describe a part-whole relationship between lexical items such as
bicycle/wheel; the term for the whole is called the holonym. Meronymy reflects hierarchical
classifications in the lexicon similar to taxonomies but, unlike hyponymy, it is not always a
transitive relation.
Member-collection and portion-mass are lexical relations similar to meronymy.
Derivational Relations
There are two major derivational relations. The first concerns causative
verbs. In this type of lexical relation, we can identify a relationship between an adjective
describing a state; a verb describing the beginning of or change into that state (the inchoative);
and a verb describing the cause of this change of state (the causative), e.g. wide (state) /
widen (inchoative) / widen (causative). The inchoative and causative verbs may have the same
shape, as in widen. The adjective describing the state that results from the process is called
resultative and usually has the form of the past participle, e.g. heated.
The second derivational relation has to do with agentive nouns. One type of agentive nouns is
derived from verbs and ends in the written forms -er or -or. These agentive nouns have the
meaning ‘the entity who/which performs the action of the verb’ and some examples are
walker, murderer, sailor, director, etc.
Lexical Typology
Lexical typology is one important branch of semantic typology, which is the cross-linguistic
study of meaning, and aims to identify regularities across languages. Many scholars have
focused their investigations on polysemy, looking for regularities in the patterns of word
meaning extensions. Polysemy is an essential feature of language that seems to work similarly
among languages.
Another area of research about cross-language word meaning centres on colour terms. While
there are differences in the way colours are described and in the number of the items in a
basic set across languages, there are a number of underlying similarities which point to
universals in colour term systems, regarding shared structural features. This study reveals that
the perception of the colour spectrum is the same for all human beings but that languages
lexicalize different ranges of the spectrum for naming.
Nevertheless, other researchers centre their study on the idea that each language has a core
vocabulary of more frequent and basic words. Sapir proposed that this core vocabulary could
be used to trace lexical links between languages and so establish family relationships between
them. However, semantic shift makes this difficult.
Another important line of investigation is that of Wierzbicka and her colleagues, who analysed a
range of languages in order to establish a core set of universal lexemes. They argued for the
existence, in all languages, of a finite set of indefinable expressions, realised as words, bound
morphemes or phrasemes. The meanings of these indefinable expressions are known as
‘semantic primes’. According to them, every language has pronouns, determiners,
evaluators, mental predicates and so on.
The notion of truth, which derives from the study of logic since Aristotle, is important to
semanticists because truth establishes a correspondence with facts, or correct descriptions
of states of affairs. In these studies, the focus is on whether truth is preserved or lost when
the pattern of a sentence is changed, and the notion of truth is used to make inferences and to
predict meaning, especially in compound statements. Semanticists call a sentence’s being true
or false its truth-value, and the facts that give a sentence its truth-value are called its truth
conditions.
Logical operations help us to reveal the truth-value of a statement. For example, adding not
to a sentence reverses its truth-value. This can be represented by a schema called a logical
form, where a lower-case letter represents the statement and a special symbol represents
negation (¬). Other logical forms describe the behaviour of truth in compound sentences:
- conjunction (∧), with and, where the truth-values of the constituent statements predict the
truth-value of the compound;
- the inclusive or, also called disjunction (∨), where the compound (p ∨ q) is true if one or
both of the constituents are true;
- the exclusive or (⊕), where the compound is true if one of the constituent statements is
true and the other false;
- material implication (p → q), which is only false when p (the antecedent) is true and q (the
consequent) is false, and corresponds to the English construction if… then in sentences like If
it rains, then I’ll go to the movies;
- the bi-conditional connective (↔), which corresponds to the English expression if and only if;
the compound is true when both constituents have the same truth-value, and it is equivalent
to the compound (p → q) ∧ (q → p).
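The behaviour of these connectives can be checked mechanically by enumerating all truth-value assignments. A minimal sketch in Python; the function names are our own labels for the standard connectives:

```python
from itertools import product

def neg(p): return not p                 # ¬p
def conj(p, q): return p and q           # p ∧ q
def disj(p, q): return p or q            # p ∨ q (inclusive or)
def xor(p, q): return p != q             # p ⊕ q (exclusive or)
def implies(p, q): return (not p) or q   # p → q (material implication)
def iff(p, q): return p == q             # p ↔ q (bi-conditional)

for p, q in product([True, False], repeat=2):
    # p → q is false only when p is true and q is false:
    assert implies(p, q) == (not (p and not q))
    # the bi-conditional is equivalent to (p → q) ∧ (q → p):
    assert iff(p, q) == conj(implies(p, q), implies(q, p))
```

Running the loop over all four assignments verifies the equivalences stated above.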
A priori truth is known before, or without having to check, the facts of the world, e.g.
Either he is still alive, or he is dead. A priori truth is contrasted with a posteriori truth,
which can only be known after checking the facts of the world. On another dimension, necessary
truths cannot be denied without forcing a contradiction, e.g. Two and two make
four, whereas contingent truths can be contradicted depending on the facts, e.g. The dodo is
extinct. An a priori truth such as the example above is therefore also a necessary truth, and a
contradiction will be necessarily false, e.g. Two and two make five.
Entailment
There are fixed truth relations between sentences which hold regardless of the empirical truth
(i.e. related to experience) of the sentences, which are called entailment: A sentence p entails
a sentence q when the truth of the first (p) guarantees the truth of the second (q), and the
falsity of the second (q) guarantees the falsity of the first (p). This relation may come either
from syntactic (active/passive i.e. mutually entailment) or lexical source. A test to identify
entailment is:
Step 1: If p (The anarchist assassinated the emperor) is true, is q (The emperor died)
automatically true? Yes.
Step 2: If q (The emperor died) is false, is p (The anarchist assassinated the emperor) also
false? Yes.
Step 3: Then p entails q. Note if p is false then we can’t say anything about q; it can be either
true or false.
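Steps 1 to 3 of this test mirror two facts of propositional logic: material implication p → q is equivalent to its contrapositive ¬q → ¬p, and an implication with a false antecedent is true whatever its consequent. A small sketch modelling this in Python (the function name is ours):

```python
from itertools import product

def implies(p, q):
    # material implication: false only when p is true and q is false
    return (not p) or q

# Steps 1 and 2 must agree: for every assignment of truth-values,
# p → q holds exactly when ¬q → ¬p does (contraposition).
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)

# Step 3: when p is false, p → q is true whatever q is,
# so the falsity of p tells us nothing about q.
assert implies(False, True) and implies(False, False)
```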
Presupposition
Presupposition resembles entailment in seeming an automatic relationship, free of contextual
effects; unlike entailment, however, it is sensitive to facts about the context of utterance.
Presupposition as a truth relation:
Step 1: If p (the presupposing sentence) is true, then q (the presupposed sentence) is true.
Step 2: If p is false, then q is still true.
Step 3: If q is true, p could be either true or false.
Thus, if we negate an entailing sentence, the entailment fails and q can be either true or
false; but this is not the case with presupposition.
However, when a sentence contains a referring expression whose referent does not exist, a
phenomenon known as presupposition failure emerges, where the status of p is dubious,
possibly neither true nor false. A statement that can be neither true nor false is said to fall
into a truth-value gap.
There are three important dimensions to the task of classifying a situation: situation type,
tense and aspect.
Verbs inherently describe different situation types, which can be, for example, stative,
dynamic, punctual or durative.
Stative verbs describe a situation as a steady state, with no internal phases or changes, where
the speaker does not focus on the beginning or the end of the state (e.g. be, have, know, love,
hear). Grammatically, stative verbs do not allow progressive (-ing) forms or the imperative,
because these forms have dynamic connotations.
Dynamic verbs can be classified into different types, based on the semantic distinctions
durative/punctual and telic/atelic. Durative verbs describe a situation or process that lasts for
a period of time, while punctual verbs describe an almost instantaneous event (semelfactive
verbs, e.g. cough). On the other hand, telic refers to processes that have a natural completion
point, such as build, while atelic describes processes that could go on indefinitely unless
stopped, such as gaze.
Combining these distinctions gives the classification of situation types:
- states: static, durative (know)
- activities: dynamic, durative, atelic (walk)
- accomplishments: dynamic, durative, telic (build a house)
- semelfactives: dynamic, punctual, atelic (cough)
- achievements: dynamic, punctual, telic (reach the top)
There are different tests to identify situation types. Stative verbs, for instance, allow
neither progressive forms nor the imperative, and their simple present forms refer to the
current time of speaking. Other tests use temporal adverbial expressions to distinguish
activities, accomplishments and achievements: the adverbial ‘in a period’ occurs with telic
situation types, whereas the durational adverbial ‘for a period’ occurs with atelic situation
types. A further test with finish identifies situations that are both durative and telic
(accomplishments).
Tense
Tense, like aspect, allows speakers to relate situations to time. Tense allows a speaker to
locate a situation relative to some reference point in time. In English, tense is marked on the
verb by endings and the use of auxiliary verbs. A speaker can describe situations as prior to,
concurrent with or following the act of speaking (simple tenses); or describe an event in the
past or future and use that event as the reference point for its own past, present or future
(complex tenses). Tense can be represented with temporal schemata, where S is the time of
utterance; R the reference point adopted by the speaker; and E the location in time of the
described event: I saw Helen (R=E<S).
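These S/R/E schemata follow the Reichenbach tradition; as an illustration, a few English tenses can be mapped to their schemata. The mapping below is a sketch, and the example sentences are our own:

```python
# Reichenbach-style temporal schemata: S = speech time, R = reference
# point, E = event time; '=' marks simultaneity, '<' marks precedence.
schemata = {
    "simple past":    "R=E<S",   # I saw Helen
    "simple present": "S=R=E",   # I see Helen
    "simple future":  "S<R=E",   # I will see Helen
    "past perfect":   "E<R<S",   # I had seen Helen
}

for tense, schema in schemata.items():
    print(f"{tense}: {schema}")
```

The complex tenses are the ones where R and E come apart, as in the past perfect, where the event E precedes the reference point R.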
Aspect
Modality and Evidentiality
Our conversational practice seems to rely on the assumption that speakers generally try to tell
the truth, thus a declarative sentence such as ‘Joe bought potatoes’ seems to carry an
unspoken guarantee of “to the best of my knowledge”. Modality allows speakers to modulate
this guarantee, that is, to express stronger or weaker commitment to the factuality of their
statements. Epistemic modality, so called because the speaker signals degrees of
knowledge, can be expressed by adjectives and adverbs such as certain-probable-likely-
possible, which move from strong to weak commitment; by verbs of propositional attitude,
e.g. know-believe-think-don’t know-doubt-know not, which express a gradient from certainty of
the truth of the proposition to certainty of its falsity; and by auxiliary verbs, e.g. must-
might-could-needn’t-couldn’t, here called modal verbs, which convey variations in commitment.
Deontic modality, on the other hand, expresses the speaker’s attitude to social factors of
obligation, responsibility and permission. Deontic modals can communicate two types of social
information: obligation, on a scale from strong to weak, e.g. must/should/need/ought to; and
permission, e.g. can/could/might. There are other types of non-epistemic modality: abilitive
modality, which reflects possibility based on the speaker’s view of a subject’s abilities;
teleological modality, which expresses degrees of possibility and necessity relative to the
speaker’s view of a subject’s goals; and bouletic modality, which expresses degrees of
possibility and necessity relative to the speaker’s view of a subject’s desires.
Modality allows speakers to compare the real world with hypothetical versions of it. This
suggestion derives from work on possible world semantics; thus, modals allow us to set up
hypothetical situations and express different strengths of prediction of their match with the
real world. Thus, a speaker can express a possible match with reality, e.g. might, or a strong
coincidence with reality, e.g. must, and can express conditional meanings conveyed by
conditional sentences.
Modality distinctions are encoded in grammar by mood. Thus, the indicative mood is used for
descriptions of factual situations, whereas the subjunctive, or irrealis mood, is used for
potential situations (wishes, beliefs, exhortations, commands, etc.).
On the other hand, evidentiality allows speakers to communicate their attitude to the source
of their information. Thus, a speaker can indicate whether a statement relies on personal
first-hand knowledge or was acquired from another source: I saw-I read-so they say-I’m told-
apparently-it seems-allegedly. This information is grammatically encoded in some languages,
but not in English, where such markers are optional for speakers.
Thematic roles encode the roles of participants within a clause. The most common are: AGENT,
which initiates an action and acts with volition; PATIENT, the entity undergoing the effect of
some action, often involving a change of state; THEME, the entity moved by an action or whose
location is described; EXPERIENCER, which is aware of the action or state but does not control it;
BENEFICIARY, the entity for whose benefit the action is performed; INSTRUMENT, the means by which
an action is performed; LOCATION, the place in which something is situated or takes place; GOAL,
the entity toward which something moves, either literally or metaphorically; SOURCE, the entity
from which something moves, either literally or metaphorically; STIMULUS, the entity causing an
effect in the EXPERIENCER; ACTOR, a more general term than AGENT that does not entail volition;
FORCE, an inanimate entity that causes something without volition; and RECIPIENT, involved in
actions describing changes of possession.
However, one entity may fulfil more than one role. This observation comes from the theory of
tiers (Jackendoff), which divides thematic roles into two main types: action tier roles (ACTOR,
AGENT, EXPERIENCER, PATIENT, BENEFICIARY, INSTRUMENT) and thematic tier roles (THEME, GOAL,
SOURCE, LOCATION).
There are typical matchings between thematic roles and grammatical relations. The subject of
a sentence often corresponds to the AGENT, the direct object to the THEME, and the INSTRUMENT
often occurs as a prepositional phrase. However, there are two basic situations where this is
not the case: when the roles are omitted and when the speaker chooses to alter the verbal
voice (active, middle or passive voice). These result in a change in grammatical relations, and
different roles may occupy the subject position. Some writers consider this process of different
roles occupying the subject position an implicational hierarchy: AGENT>RECIPIENT/BENEFICIARY>
THEME/PATIENT>INSTRUMENT>LOCATION. This means that the leftmost elements are more
prototypical subjects, or that if a language allows LOCATION to be the subject, we expect that it
will allow the rest.
Verbs have particular requirements for their thematic roles, and we need to know not only
how many arguments a verb requires, but also what thematic roles its arguments may hold. In
the generative grammar approach, the listing of thematic roles is often called a thematic role
grid or theta-grid, i.e. put V: <AGENT, THEME, LOCATION>. However, not all nominals in a sentence
are arguments of the verb, that is, elements required by the verb to complete its meaning;
some of them are adjuncts, less structurally attached to the verb, which give extra information
about the context. By listing theta-grids, we can see that some verbs share their grids, and we
can group them into classes, i.e. verbs of TRANSFER encode a view of the transfer from the
perspective of the AGENT (V: <AGENT, THEME, RECIPIENT>) or from the perspective of the
RECIPIENT (V: <RECIPIENT, THEME, SOURCE>).
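In a computational sketch, theta-grids can be treated as simple tuples keyed by verb, so that verbs sharing a grid fall into the same class. The verb entries below are illustrative assumptions, not a fixed inventory:

```python
# Theta-grids as tuples of thematic roles, keyed by verb.
# The inventory below is a toy illustration, not a full lexicon.
THETA_GRIDS = {
    "put":     ("AGENT", "THEME", "LOCATION"),
    "give":    ("AGENT", "THEME", "RECIPIENT"),    # TRANSFER, AGENT perspective
    "send":    ("AGENT", "THEME", "RECIPIENT"),    # same grid, same class
    "receive": ("RECIPIENT", "THEME", "SOURCE"),   # TRANSFER, RECIPIENT perspective
}

def verb_class(grid):
    """Return all verbs whose theta-grid matches the given grid."""
    return [verb for verb, g in THETA_GRIDS.items() if g == grid]

print(verb_class(("AGENT", "THEME", "RECIPIENT")))  # ['give', 'send']
```

Grouping by shared grids in this way is exactly what lets us speak of verb classes such as verbs of TRANSFER.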
Although thematic roles theory faces some problems, such as their definition or delimitation,
linguists employ them to describe aspects of the interface between semantics and syntax.
Predicting linkages between participant roles and grammatical relations is one of the primary
functions of thematic roles. They also help to characterize semantic verbal classes and to
describe argument-changing processes (such as the passive voice) or argument structure
alternations (theme-rheme).
Causation
The English causative-inchoative verb alternation allows a speaker either to select or to omit a
causing entity, i.e. The door opened / John opened the door. However, in English not all
change-of-state intransitives allow a corresponding causative transitive, nor does every
causative transitive allow an intransitive inchoative.
Passive voice
The alternation between active and passive voice allows speakers some flexibility in viewing
thematic roles, and a different perspective, or viewpoint, on the situation described, usually in
order to obscure the identity of the AGENT. In English there are two passive constructions,
be-passives and get-passives, which differ in the amount of control over the event described.
Middle voice
Middle voice allows speakers to emphasize that the subject of the verb is affected by the
action described by the verb. In many languages, middle voice is marked either by inflectional
or pronominal particles, though not in English, where the distinction is only shown by
alternations between transitive active verbs and intransitive middle verbs. These alternations
are used to describe the success of a non- AGENT in some activity.
Speakers and hearers rely on the assumed or background knowledge in constructing and
interpreting meaning. This background knowledge is also called non-linguistic knowledge.
Deictic devices require this non-linguistic or background knowledge in order to be correctly
linked with their referents. Thus, the deictic devices in a language commit a speaker to setting
up a frame of reference around himself.
There are different categories of deictic devices, for in every language there is an implicit
division of the space around the speaker, a division of time relative to the act of speaking, and
a naming system for the participants involved in the talk.
The English adverbs here and there pick out places according to the proximity to the location of
the speaker. Similarly, demonstratives in English have a two-term opposition between
this/these and that/those. Spatial deictic elements may also include information about motion
toward and away from the speaker, as in the English verbs come and go, which inform about
the location of the speaker.
However, these systems are also used in other domains, i.e. they are used as a form of
orientation within a discourse (discourse or textual deixis), or to refer to time. In some
languages, spatial deixis is used to express notions such as possession and states.
The roles of participants in the conversation are grammaticalized by pronouns: first person
singular pronoun is used for the speaker, second person pronouns for the addressee(s), and a
third person category for “neither-speaker-nor-addressee(s).” Other languages also
grammaticalize information about the social identities or relationships of the participants in the
conversation, which is called social deixis. An example of social deixis is the distinction
between “familiar” and “polite” pronouns, i.e. tú/usted.
Knowledge as Context
Speakers calculate the amount of information their hearers need to make references, and they
tend to economize where they can. Speakers use shorthands, metonymy or synecdoche to
refer to things in the world, relying on shared knowledge with the hearer, independently of the
source of this knowledge.
We can distinguish three types of this knowledge: that computable from the physical context,
which is gained from filling in the deictic expressions; that available from what has already
been said, that is, the discourse or discourse topic, i.e. what the discourse is about, which
influences the way speakers and hearers interpret the meaning of what they subsequently
hear; and that available from background or common knowledge, that is, knowledge about how
the world is that is shared by the interlocutors, sometimes called encyclopedic knowledge or
common ground, of which there are two types: communal common ground, the knowledge
shared by co-members of communities; and personal common ground, the knowledge two
people share from their past experience of each other.
Information Structure
Information or thematic structure reflects how given and new information are “packaged” in
an utterance. One way for a speaker to convey that something is given is to use a definite
nominal, such as the English definite article the, which signals that the speaker assumes the
hearer can identify the referent; on the other hand, we can signal new information with an
indefinite nominal, i.e. a: the party / a party. These are the two extremes of the Givenness
Hierarchy, which identifies different information states of a referent, from most given to most
new: in focus > activated > familiar > uniquely identifiable > referential > type identifiable.
Another way of marking information structure in English is intonation, where the assignment
of primary stress can be used to bring parts of the sentence into prominence. The prominent
part is called the focus, and marks new information. Syntactic constructions, such as clefts,
allow the speaker to place parts of a sentence as focus, as in It was yesterday that Bob came /
It was Bob who came yesterday.
Topic is another important information structure role, for it sets a spatial, temporal or
individual framework within which the main predication holds. A major characteristic of topics
is that they must already be established in the conversation; accordingly, some languages have
overt markers for sentence topics, which signal the current topic of discourse, though English
does not.
Inference
Hearers make inferences to fill out the text toward an interpretation of speaker meaning. One
example of inference is the use of anaphora, a referential relation between expressions that
refer to the same entity. Anaphoric pronouns have no independent reference and must rely on
an antecedent. Another resource for constructing meaning is the bridging inference, in which a
nominal occurs with a definite article, showing the speaker’s assumption that the referent is
accessible to the listener. The basis for these assumptions is background knowledge, and the
assumption that listeners will try to preserve coherence gives speakers the freedom to imply
something rather than state it.
Conversational Implicature
This term was coined by Grice when he proposed an approach to the speaker’s and hearer’s
cooperative use of inference. He suggested that the success of communication could be
explained by postulating a cooperative principle, which is a kind of tacit agreement between
participants in a conversation. The principle allows participants to make assumptions that
license the use of inference as part of linguistic communication.
According to Grice, the assumptions that hearers make about a speaker’s conduct can be of
different types. Grice proposed a series of conversational principles that participants follow to
a greater or lesser extent, for these principles can be broken. Grice called these principles
maxims:
The Maxim of Quality: Try to make your contribution one that is true, i.e.
1. Do not say what you believe to be false.
2. Do not say that for which you lack adequate evidence.
The Maxim of Quantity:
1. Make your contribution as informative as is required (for the current purposes of the
exchange).
2. Do not make your contribution more informative than is required.
The Maxim of Relevance: Make your contributions relevant.
The Maxim of Manner: Be perspicuous, i.e.
1. Avoid ambiguity
2. Avoid obscurity
3. Be brief
4. Be orderly
These maxims help the hearer arrive at implicatures. Implicatures have three major
characteristics: firstly, the message is implied rather than stated; secondly, its existence is a
result of the context; and thirdly, they are cancellable, or defeasible, without causing a
contradiction.
In this respect, implicatures contrast with entailments, where cancelling the entailed
sentence will cause anomaly. Another property of conversational implicatures is that they are
reinforceable without causing redundancy, as in She had a baby and got married, in that order,
where in that order reinforces the implicated sequence.
This also contrasts with entailment where reinforcing will cause redundancy, i.e. The president
was assassinated yesterday, and he is dead.
However, these principles are not rules and they can be broken. Grice distinguished between
the speaker secretly breaking them, for example by lying, which he called violating the
maxims, or overtly breaking them for some linguistic effect, such as irony, which he called
flouting. The maxims help listeners to interpret non-literal meaning rather than reject it as
impossible.
Relevance Theory
This approach is an extension of Grice’s maxims, which seeks to unify the cooperative principle
and conversational maxims into a single principle of relevance that will motivate hearer’s
inference. In a communicative exchange, it is assumed by the hearer that the speaker has a
communicative intent. It is this intent that leads the speaker to calculate the relevance of his
utterance with the hearer’s role in mind. Thus, the speaker may calculate a balance between
profit and loss from the hearer’s point of view. In this theory, the target for reference will be
the one that makes the resulting proposition maximally relevant to the accessible context.
In relevance theory there is a distinction between implicated premises, not directly stated and
provided as an inferential support for the final implicature, and implicated conclusion, which is
this final implicature.
Lexical Pragmatics
Lexical pragmatics seeks to investigate how the meanings of words reflect or are adjusted to
specific contexts. This means that context may affect the sense in which hearers interpret a
word. In this literature, broadening is a process where the concept expressed by a word is
more general than the one it usually expresses. Broadening includes uses such as hyperbole or
category extension. On the other hand, in narrowing processes the meaning of a word is
understood as more restrictive than the one it usually expresses, as in All politicians drink,
where drink is understood as ‘drink alcohol’ and not ‘drink any liquid’, as might be expected.
Sentence types (i.e. declarative, interrogative, imperative and optative) are conventionally
used to perform speech acts such as making assertions, asking questions, giving orders and
making wishes, respectively. However, this conventional or literal use of sentence types was
questioned by J. L. Austin, who challenged the assumption that language is mainly used to
describe states of affairs by means of statements, and that the meaning of utterances can be
described only in terms of truth or falsity. Austin held that language is used for more than
making statements: he identified a subset of declaratives that are not used to describe
situations but are in themselves a kind of action, which he called performative utterances.
This subset of declaratives performs the action named by the main verb, as in I promise to
leave soon, or I now pronounce you man and wife. We can add the adverb hereby (‘by these
words’) to performative verbs with no change in meaning.
Performative statements can be evaluated as felicitous, if they satisfy the social conventions
that rule them, or infelicitous. The enabling conditions for a performative are called felicity
conditions. These conditions may vary depending on the degree of formality of the
performative act, ranging from ceremonial acts, such as a priest pronouncing a marriage, to
less formal acts like warning or promising.
Performative utterances characterized by the following features are considered explicit
performatives:
- they tend to begin with a first-person verb, usually in the simple present tense; and
- they allow the insertion of the adverb hereby.
Speech acts consist of three elements: the first is the locutionary act, that is, what the speaker
says or the conventional meaning of the sentence (i.e. the proposition); the second is the
illocutionary act, or the action intended by the speaker, who associates the proposition with a
speech act (i.e. the particular construction); and the third is the perlocutionary act, which is
the effect that an utterance causes on the participants in a conversation.
Apart from Austin’s, there were other attempts to explore speech acts. Searle, for example,
proposed that all acts fall into five main types:
- Representatives, which commit the speaker to the truth of the expressed proposition
(asserting, concluding).
- Directives, which are attempts by the speaker to get the addressee to do something
(requesting, questioning).
- Commissives, which commit the speaker to some future course of action (promising,
threatening, warning).
- Expressives, which express a psychological state of the speaker (thanking, apologizing,
congratulating).
- Declarations, which effect immediate changes in the institutional state of affairs and which
tend to rely on elaborate extralinguistic institutions (excommunicating, declaring war,
marrying, christening, etc.).
However, sometimes there is no match between sentence types and speech acts, and the
illocutionary act differs from the sentence type to which it is associated, as in Can you pass me
the salt?, where the direct act would be a question and the indirect act, a request.
Indirectness is commonly used to soften the force of directives, but also to heighten politeness
in offerings and so on.
In componential analysis (CA) it is assumed that words are composed of smaller units of meaning which combine
differently to form other words. These units are called semantic components or semantic
primitives. This kind of analysis allows a simpler characterization of lexical relations such as
synonymy, antonymy, hyponymy, and so on.
It is also important because it can be extended beyond lexical analysis, for the components of
verbs cause them to participate in different grammatical rules, and identifying their
components helps predict the grammatical processes they undergo. Componential analysis
of verbs allows us to classify them into classes based on the meaning components they share.
According to Rappaport Hovav and Levin, a verb is composed of an event schema,
i.e. its grammatically relevant features, and a root meaning, referring to its idiosyncratic
component of meaning, that is, what distinguishes it from other verbs.
In this approach, hyponymy is defined by comparing the semantic primitives of words: a lexical
item P is a hyponym of Q if all the features of Q are contained in the feature specification of P.
Antonymy, on the other hand, is thus defined: lexical items P, Q, R, are incompatible, or
antonyms, if they share a set of features but differ from each other by one or more contrasting
features.
In order to establish these lexical relations, componential analysis relies on binary features (i.e.
+HUMAN, -HUMAN) and redundancy rules, which predict the automatic relationships between
components (i.e. HUMAN → ANIMATE). Redundancy rules allow us to reduce the number of
components stated for each item, because the component on the left automatically implies
the component on the right.
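These definitions translate directly into set operations. The sketch below applies redundancy rules and tests hyponymy by feature inclusion; the rule set and feature inventories are illustrative assumptions:

```python
# Redundancy rules: the component on the left implies the one on the right.
REDUNDANCY_RULES = {"HUMAN": "ANIMATE", "ANIMATE": "CONCRETE"}

def expand(features):
    """Add every component implied by a redundancy rule."""
    features = set(features)
    changed = True
    while changed:
        changed = False
        for left, right in REDUNDANCY_RULES.items():
            if left in features and right not in features:
                features.add(right)
                changed = True
    return features

# Toy lexicon: thanks to the rules, ANIMATE need not be stated for 'woman'.
LEXICON = {
    "woman": {"HUMAN", "ADULT", "FEMALE"},
    "adult": {"HUMAN", "ADULT"},
}

def is_hyponym(p, q):
    """P is a hyponym of Q if all features of Q are contained in P."""
    return expand(LEXICON[q]) <= expand(LEXICON[p])

print(is_hyponym("woman", "adult"))  # True
print(is_hyponym("adult", "woman"))  # False
```

Antonymy could be checked the same way, as shared features plus at least one contrasting pair.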
Componential analysis and semantic primitives form part of our psychological architecture;
thus, they provide a unique view of conceptual structure (Jackendoff and Pustejovsky).
Developed within generative grammar, this theory has two central ideas: semantic rules have to be
recursive; and the relationship between a sentence and its meaning is not arbitrary and
unitary but compositional, that is, syntactic structure and lexical content interact. The way
words are combined into phrases and phrases into sentences determines the meaning of
sentences. One of the components of this approach are projection rules, which show how the
meaning of sentences is built up from the meaning of lexical items. In this analysis, lexical
items are composed of semantic markers, the part of a word’s meaning which is shared by
other words, and distinguishers, the idiosyncratic semantic information unique to that word.
As sentences can be ambiguous, this theory limits ambiguity by means of selection restrictions,
which are designed to reflect some of the contextual effects on word meaning and to block
some of the possible combinations.
Leonard Talmy used semantic components to characterize the interaction between syntax and
semantics. His study explored how the elements of meaning are combined not only in single
words but across phrases.
Talmy identified the following semantic components associated with verbs of motion: Figure,
an object moving or located with respect to another object (the Ground); the Motion, the
inherent presence of motion or location in the event; the Path, the course followed or the site
occupied by the Figure; and, the Manner, which is the type of motion. Talmy revealed
differences between languages in how these semantic components are typically conflated in
verbs and phrases. In English, Path and Manner are encoded in verb phrases (i.e. run up) in
which the Manner is incorporated in the verb along with the Motion, and the Path, or direction
of movement, is encoded in an external prepositional phrase. In other languages, such as
Spanish, the Path is encoded in the verb that expresses motion (i.e. subió corriendo) and the
Manner is encoded in external phrases. A third possible pattern of conflation combines the
Figure with the Motion. Thus, it is claimed that in all languages the Motion component is
expressed by the verb, but languages can be divided according to these conflation patterns.
Jackendoff identifies different subcategories for the function BE which represent four
subcategories of STATE, namely, spatial location, temporal location, property ascription and
possession, which he called semantic fields. The same semantic fields apply to EVENT functions
like GO. He also studied INCHOATIVE and CAUSATIVE processes.
The category Thing refers to the semantics of nouns. The semantic feature [±BOUNDED]
distinguishes between count nouns [+ BOUNDED] and mass nouns [-BOUNDED]. However, plurals
of count nouns behave like mass nouns and are therefore [- BOUNDED]. The difference here is
that plurals of count nouns are [+INTERNAL STRUCTURE], because they can be divided into units,
while mass nouns are [-INTERNAL STRUCTURE]. On the other hand, collective nouns contain
individual units [+i], but are [+b] because if we divide the group into smaller units, the smaller
units do not represent the group. Thus, nouns are cross-classified as follows: individuals (a dog)
are [+b, -i]; groups (a committee) are [+b, +i]; substances (water) are [-b, -i]; and aggregates
(dogs, cattle) are [-b, +i].
Jackendoff used the feature [BOUNDED] also to describe situations. Thus, events which are
described as ongoing processes not overtly limited in time, or atelic, are analysed as [-b]; and
situations with events with clearly defined beginnings and ends (telic) are classified as [+b].
In his attempt to capture the relationship between semantics and grammar, Jackendoff
demonstrated that the addition of an adverbial can change the internal structure of an event.
Thus, there exist combinatory processes that change the semantics of a situation type, i.e.
adverbial + semelfactive verb = iteration. He also studied the combinatory processes of nouns
that account for different types of part-whole relations (i.e. including functions: plural,
composed of, containing; extracting functions: element of, partitive, universal grinder).
The term event structure stands for the situation types already known. In this theory, events
also include states. Pustejovsky claimed that events are composed of smaller events (sub-
events), and that this relationship needs to be represented in an articulated way, by a form of
syntax. Thus, states are single events that are evaluated relative to no other event,
represented as a tree in which a single node S dominates one event e. Processes (P) are
sequences of events identifying the same semantic expression, represented as a node P
dominating a sequence of events e1 … en. Transitions represent either causative or inchoative
events, because both represent a transition from one state to its opposite. This type of
sub-structural description allows us to disambiguate adverbial interpretation.
On the other hand, qualia structure refers to the classification of the properties of an item.
Qualia structure has four dimensions, viewed as roles, whose aim is to give a more adequate
account for polysemy:
a. CONSTITUTIVE: the relation between an object and its constituents, or proper parts. For
example: i. Material ii.Weight iii. Parts and component elements.
b. FORMAL: that which distinguishes the object within a larger domain. For example: i.
Orientation ii. Magnitude iii. Shape iv. Dimensionality v. Color vi. Position.
c. TELIC: the purpose and function of the object. For example: i. Purpose that an agent has in
performing an act. ii. Built-in function or aim which specifies certain activities.
d. AGENTIVE: factors involved in the origin or “bringing about” of an object. For example: i.
Creator ii. Artifact iii. Natural kind iv. Causal chain. For the lexical item knife, for instance, the
CONSTITUTIVE role includes its blade and handle, the FORMAL role identifies it as an artifact (a
tool), the TELIC role is cutting, and the AGENTIVE role is the act of making it.
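The four roles can be sketched as a simple record; the values for knife below are illustrative assumptions in the spirit of Pustejovsky, not his exact notation:

```python
from dataclasses import dataclass

@dataclass
class Qualia:
    constitutive: list  # relation between the object and its proper parts
    formal: str         # what distinguishes it within a larger domain
    telic: str          # its purpose and function
    agentive: str       # how it is brought about

knife = Qualia(
    constitutive=["blade", "handle"],
    formal="artifact_tool",
    telic="cut",
    agentive="make",
)

# The TELIC role licenses readings such as 'a good knife'
# = 'a knife that cuts well'.
print(knife.telic)  # cut
```

Representing the roles explicitly is what allows rules of combination (e.g. adjective plus noun) to select the relevant dimension of the noun's meaning.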
Pustejovsky claimed that the variation in the interpretation of adjectives is triggered by
specific types of knowledge represented in the nouns with which they combine, and these
rules of combination were shown to work with other lexical categories as well.
Predicate Logic
Quantification: Every student passed the exam: ∀x(S(x) → P(x, e)) // One student kissed Mary:
∃x(S(x) ∧ K(x, m))
1 A set {..}, which can be identified by listing the members, e.g. {Mercury, Mars, Earth,…} or by
describing an attribute of the members, e.g. {x: x is a planet in the solar system}.
4 Intersection of sets, A ∩ B, which is the set consisting of the elements which are members of
both A and B, e.g. {Venus, Mars, Jupiter, Saturn} ∩ {Mars, Jupiter, Uranus, Pluto} = {Mars,
Jupiter}.
5 Ordered pair, <a, b>, where the ordering is significant, e.g. <Mercury, Venus> ≠ <Venus,
Mercury>.
6 Ordered n-tuple, <a1, a2, a3…an>, e.g. the 4-tuple <Mercury, Venus, Earth, Mars>.
9 |A| > |B|, the cardinality of A is greater than B; i.e. A has more members than B.
10 |A| ≥ |B|, the cardinality of A is greater than or equal to B; i.e. A has the same or more
members than B.
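These set-theoretic notions map one-to-one onto Python's built-in sets, so the examples from the list above can be run directly (the planet names are just toy members, as in the text):

```python
# The planet sets from the intersection example above.
A = {"Venus", "Mars", "Jupiter", "Saturn"}
B = {"Mars", "Jupiter", "Uranus", "Pluto"}

intersection = A & B                        # A ∩ B: the shared members
print(intersection == {"Mars", "Jupiter"})  # True

print(len(A) > len(B))    # |A| > |B| is False: both have cardinality 4
print(len(A) >= len(B))   # |A| >= |B| is True

pair = ("Mercury", "Venus")          # ordered pair <a, b>
print(pair != ("Venus", "Mercury"))  # True: ordering is significant
```

Ordered n-tuples are likewise just longer Python tuples, e.g. ("Mercury", "Venus", "Earth", "Mars").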
Bill is singing S (b): [S(b)]M1 = 1 iff [b]M1 ∈ [S]M1. To check if this is true in M1 we must check the
extensions described in our model.
Evaluating a compound sentence with ∧ “and”, ∨ “inclusive or”, ∨e “exclusive or” and →
“material implication”: we check in the truth tables the truth behaviour of each type of
sentence, and then check the model situation to see whether they match, i.e. Patrick, who is a
millionaire, is a socialist: M(p) ∧ S(p).
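This model-checking procedure can be sketched in code: a model lists extensions, atomic sentences are tested by membership, and the connectives are ordinary truth functions. The model M1 below and its extensions are invented for illustration:

```python
# A toy model M1: predicate letters mapped to their extensions.
M1 = {
    "S": {"bill"},       # 'is singing'
    "M": {"patrick"},    # 'is a millionaire'
    "SOC": {"patrick"},  # 'is a socialist'
}

def atomic(pred, individual, model=M1):
    """[P(a)] = 1 iff the denotation of a is in the extension of P."""
    return individual in model[pred]

# Truth-functional connectives.
AND = lambda p, q: p and q           # conjunction
OR  = lambda p, q: p or q            # inclusive disjunction
XOR = lambda p, q: p != q            # exclusive disjunction
IMP = lambda p, q: (not p) or q      # material implication

# 'Patrick, who is a millionaire, is a socialist': M(p) AND SOC(p)
print(AND(atomic("M", "patrick"), atomic("SOC", "patrick")))  # True
```

The truth tables fall out of the four connective definitions by enumerating the two truth values for p and q.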
Meaning postulates:
Binary antonyms: dead and alive, represented using the negation symbol ¬:
∀x(DEAD(x) → ¬ALIVE(x))
Synonyms: sweater and jumper, represented using the biconditional symbol ≡:
∀x(SWEATER(x) ≡ JUMPER(x))
There are some common types of quantifiers whose real meaning cannot be captured by this
predicate logic. Moreover, predicate logic translations fail to be isomorphic to the natural
language sentences: the meaning of a noun is split, with part of the meaning to the left of the
head noun in the choice of the quantifier and part to the right in the choice of the connective,
i.e. all students are hardworking. Thus, restricted quantification is a different notation which
allows us to express the restriction on the quantifiers.
All students can be translated into a unitary logical expression (∀x: S(x)); most students would
be (Most x: S(x)); few students (Few x: S(x)); etc. Thus, everything (∀x: T(x)); everybody (∀x:
P(x)); everywhere (∀x: L(x)); something (∃x: T(x)); someone (∃x: P(x)); somewhere (∃x: L(x)).
However, a more accurate approach to provide a semantic interpretation for these quantifiers
is generalized quantifier theory. Here a noun like John is viewed as a set of properties instead
of an individual.
So, Bill sings has the predicate-argument structure Bill (sings), and the semantic rule is
expressed: [Bill (sings)]M1 = 1 iff [sings]M1 ∈ [Bill]M1.
In this approach, the semantic rule for most is: Most (A, B) = 1 iff |A ∩ B| > |A - B|. Most A are
B is true if the cardinality of the set of things which are both A and B is greater than the
cardinality of the set of things which are A but not B.
All: All (A, B) = 1 iff A ⊆ B. All A are B is true if and only if set A is a subset of set B.
Some: Some (A, B) = 1 iff A ∩ B ≠ Ø. Some A are B is true if and only if the set of things which
are members of A and B is not empty.
No: No (A, B) = 1 iff A ∩ B = Ø. No A are B is true if and only if the set of things which are both
members of A and B is empty.
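These truth definitions run unchanged on Python sets, which makes the generalized-quantifier view easy to check; the student and hardworking sets below are toy data:

```python
# Generalized quantifiers as relations between two sets A and B.
def most(A, B):
    return len(A & B) > len(A - B)   # |A ∩ B| > |A - B|

def all_q(A, B):
    return A <= B                    # A is a subset of B

def some(A, B):
    return bool(A & B)               # A ∩ B is non-empty

def no(A, B):
    return not (A & B)               # A ∩ B is empty

students    = {"ann", "bob", "carol"}
hardworking = {"ann", "bob", "dave"}

print(most(students, hardworking))   # True: 2 in both vs 1 only a student
print(all_q(students, hardworking))  # False: carol is not hardworking
print(some(students, hardworking))   # True
print(no(students, hardworking))     # False
```

Note that each quantifier takes the restriction (A) and the predicate (B) as separate arguments, which is exactly the isomorphism that plain predicate logic lacked.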