3rd International Workshop on
Multi-Relational Data Mining
Workshop
Chairs:
Sašo Džeroski
Hendrik Blockeel
August 22, 2004
Seattle, USA
Sašo Džeroski and Hendrik Blockeel, editors
Proceedings of the 3rd International
Workshop on Multi-Relational
Data Mining (MRDM-2004)
Seattle, Washington, USA, August 22, 2004
Tenth ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining (KDD-2004)
Foreword
The 3rd International Workshop on Multi-Relational Data Mining (MRDM-2004) was held in Seattle, Washington, USA, on August 22, 2004, as a part of
the Tenth ACM SIGKDD International Conference on Knowledge Discovery and
Data Mining (KDD-2004).
Multi-Relational Data Mining (MRDM) is the multi-disciplinary field dealing with knowledge discovery from relational databases consisting of multiple
tables. Mining data that consists of complex/structured objects also falls within
the scope of this field, since the normalized representation of such objects in a
relational database requires multiple tables. The field aims at integrating results from existing fields such as inductive logic programming, knowledge discovery, machine learning and relational databases; at producing new techniques for mining multi-relational data; and at applying such techniques in practice.
Typical data mining approaches look for patterns in a single relation of a
database. For many applications, squeezing data from multiple relations into a
single table requires much thought and effort and can lead to loss of information. An alternative for these applications is to use multi-relational data mining.
Multi-relational data mining can analyze data from a multi-relation database
directly, without the need to transfer the data into a single table first. Thus the
relations mined can reside in a relational or deductive database. Using multi-relational data mining it is often also possible to take into account background knowledge, which corresponds to views in the database.
Present MRDM approaches consider all of the main data mining tasks, including association analysis, classification, clustering, learning probabilistic models and regression. The pattern languages used by single-table data mining approaches for these data mining tasks have been extended to the multiple-table
case. Relational pattern languages now include relational association rules, relational classification rules, relational decision trees, and probabilistic relational
models, among others. MRDM algorithms have been developed to mine for patterns expressed in relational pattern languages. Typically, data mining algorithms have been upgraded from the single-table case: for example, distance-based algorithms for prediction and clustering have been upgraded by defining
distance measures between examples/instances represented in relational logic.
MRDM methods have been successfully applied across many application
areas, ranging from the analysis of business data, through bioinformatics (including the analysis of complete genomes) and pharmacology (drug design) to
Web mining (information extraction from text and Web sources).
The rationale behind organizing this workshop was as follows. The aim of
the workshop was to bring together researchers and practitioners of data mining
interested in methods for finding patterns in expressive languages from complex/multi-relational/structured data and their applications.
An increasing number of data mining applications involve the analysis of complex and structured types of data (such as sequences in genome analysis, HTML
and XML documents) and require the use of expressive pattern languages. There
is thus a clear need for multi-relational data mining (MRDM) techniques.
On the other hand, there is a wealth of recent work concerned with upgrading successful data mining approaches to relational logic. A case in point is kernel methods (support-vector machines): the development of kernels
for structured and richer data types is a hot research topic. Another example
is the development of probabilistic relational representations and methods for
learning in them (e.g., probabilistic relational models, first-order Bayesian networks, and stochastic logic programs).
A non-exclusive list of topics from the call for papers, listed in alphabetical order,
is as follows:
– Applications of (multi-)relational data mining
– Data mining problems that require (multi-)relational methods
– Distance-based methods for structured/relational data
– Inductive databases
– Kernel methods for structured/relational data
– Learning in probabilistic relational representations
– Link analysis and discovery
– Methods for (multi-)relational data mining
– Mining structured data, such as amino-acid sequences, chemical compounds,
HTML and XML documents, ...
– Propositionalization methods for transforming
(multi-)relational data mining problems to single-table data mining problems
– Relational neural networks
– Relational pattern languages
The scientific program of the workshop included invited talks by Lise Getoor
and Jiawei Han, and 5 paper presentations. We wish to thank the invited speakers, all the authors who submitted their papers to MRDM-2004, the PC members
for their help in the reviewing process, and the organizers of KDD-2004 for help
with local organization.
Ljubljana/Leuven
July 2004
Sašo Džeroski
Hendrik Blockeel
Program Chairs
Sašo Džeroski
Department of Knowledge Technologies, Jožef Stefan Institute
Jamova 39, SI-1000 Ljubljana, Slovenia
Email: [email protected]
URL: http://www-ai.ijs.si/SasoDzeroski/
Hendrik Blockeel
Department of Computer Science, Katholieke Universiteit Leuven
Celestijnenlaan 200A, B-3001 Leuven, Belgium
Email: [email protected]
URL: http://www.cs.kuleuven.ac.be/~hendrik/
Program Committee
Jean-François Boulicaut (University of Lyon)
Diane Cook (University of Texas at Arlington)
Luc Dehaspe (PharmaDM)
Pedro Domingos (University of Washington)
Peter Flach (University of Bristol)
David Jensen (University of Massachusetts at Amherst)
Kristian Kersting (Albert-Ludwigs-Universität Freiburg)
Jörg-Uwe Kietz (Kdlabs AG, Zurich)
Ross King (University of Aberystwyth)
Stefan Kramer (Technical University Munich)
Nada Lavrač (Jožef Stefan Institute)
Donato Malerba (University of Bari)
Stan Matwin (University of Ottawa)
Hiroshi Motoda (University of Osaka)
David Page (University of Wisconsin at Madison)
Alexandrin Popescul (University of Pennsylvania)
Foster Provost (Stern School of Business, New York University)
Céline Rouveirol (University Paris Sud XI)
Michèle Sebag (University Paris Sud XI)
Arno Siebes (Universiteit Utrecht)
Ashwin Srinivasan (IBM India)
Takashi Washio (University of Osaka)
Stefan Wrobel (Fraunhofer Institute for Autonomous Intelligent Systems, Sankt
Augustin / University of Bonn)
Table of Contents
Link Mining
L. Getoor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
CrossMine: Efficient Classification Across Multiple Database Relations
J. Han . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Using Bayesian Classifiers to Combine Rules
J. Davis, V. Santos Costa, I. Ong, D. Page and I. Dutra . . . . . . . . . . . . . . . . . . . 5
Logical Bayesian Networks
D. Fierens, H. Blockeel, J. Ramon and M. Bruynooghe . . . . . . . . . . . . . . . . . . . . 19
Multi-Relational Record Linkage
Parag and P. Domingos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Dynamic Feature Generation for Relational Learning
A. Popescul and L. Ungar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Kernel-based distances for relational learning
A. Woznica, A. Kalousis and M. Hilario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Link Mining
Lise Getoor
Computer Science Department and UMIACS
University of Maryland
AV Williams Bldg, College Park, MD 20742, USA
Abstract. A key challenge for data mining is tackling the problem of
mining richly structured datasets, where the objects are linked in some
way. Links among the objects may demonstrate certain patterns, which
can be helpful for many data mining tasks and are usually hard to capture
with traditional statistical models. Recently there has been a surge of
interest in this area, fueled largely by interest in web and hypertext
mining, but also by interest in mining social networks, security and law
enforcement data, bibliographic citations and epidemiological records.
Link mining includes both descriptive and predictive modeling
of link data. Classification and clustering in linked relational domains
require new data mining models and algorithms. Furthermore, with the
introduction of links, new predictive tasks come to light. Examples include predicting the numbers of links, predicting the type of link between
two objects, inferring the existence of a link, inferring the identity of an
object, finding co-references, and discovering subgraph patterns.
In this talk, I will give an overview of this newly emerging research area. I will describe novel aspects of the modeling, learning and
inference tasks and I will give an introduction to a few of the many
proposed frameworks. I will spend the majority of the time discussing
commonalities in the issues that must be addressed in any statistical link
mining framework.
CrossMine: Efficient Classification
Across Multiple Database Relations
Jiawei Han
Department of Computer Science
University of Illinois at Urbana-Champaign
201 N. Goodwin Avenue, Urbana, IL 61801, USA
Abstract. A database usually consists of multiple relations which are
linked together conceptually via entity-relationship links in the design
of relational database schemas. However, most classification approaches
only work on single “flat” data relations. It is usually difficult to convert multiple relations into a single flat relation without either introducing a huge, undesirable “universal relation” or losing essential information. Previous work using Inductive Logic Programming approaches (recently also known as Relational Mining) has proven effective, achieving high accuracy in multi-relational classification. Unfortunately, such approaches suffer from poor scalability w.r.t. the number of relations and the number
of attributes in databases.
In this talk we introduce CrossMine, an efficient and scalable
approach for multi-relational classification. Several novel methods are
developed in CrossMine, including (1) tuple ID propagation, which performs semantics-preserving virtual join to achieve high efficiency on databases with complex schemas, and (2) a selective sampling method, which
makes it highly scalable w.r.t. the number of tuples in the databases.
Both theoretical backgrounds and implementation techniques of CrossMine are introduced. Our comprehensive experiments on both real and
synthetic databases demonstrate the high scalability and accuracy of
CrossMine.
Using Bayesian Classifiers to Combine Rules
Jesse Davis, Vítor Santos Costa, Irene M. Ong,
David Page and Inês Dutra
Department of Biostatistics and Medical Informatics
University of Wisconsin-Madison
{jdavis, vitor, ong, page, dutra}@biostat.wisc.edu
Abstract. One of the most popular techniques for multi-relational data
mining is Inductive Logic Programming (ILP). Given a set of positive and
negative examples, an ILP system ideally finds a logical description of the
underlying data model that discriminates the positive examples from the
negative examples. However, in multi-relational data mining, one often
has to deal with erroneous and missing information. ILP systems can
still be useful by generating rules that capture the main relationships in
the system. An important question is how to combine these rules to form
an accurate classifier. An interesting approach to this problem is to use
Bayes Net based classifiers. We compare Naı̈ve Bayes, Tree Augmented
Naı̈ve Bayes (TAN) and the Sparse Candidate algorithm to a voting
classifier. We also show that a full classifier can be implemented as a
CLP(BN ) program [14], giving some insight on how to pursue further
improvements.
1 Introduction
The last few years have seen a surge of interest in multi-relational data mining,
with applications in areas as diverse as bioinformatics and link discovery. One
of the most popular techniques for multi-relational data mining is Inductive
Logic Programming (ILP). Given a set of positive and negative examples, an
ILP system ideally finds a logical description of the underlying data model that
differentiates between the positive and negative examples. ILP systems confer
the advantages of a solid mathematical foundation and the ability to generate
understandable explanations.
As ILP systems are being applied to tasks of increasing difficulty, issues such
as large search spaces and erroneous or missing data have become more relevant.
Ultimately, ILP systems can only expect to search a relatively modest number of
clauses, usually on the order of millions. Evaluating increasingly complex clauses
may not be the solution. As clauses grow larger, they become more vulnerable
to the following errors: a query will fail because of missing data, a query will
encounter an erroneous database item, and a clause will give correct answers
simply by chance.
Our work relates to a sizeable application in the field of link discovery. More
precisely, our concern involves finding aliases in a relational domain [15] where
the data is subject to high levels of corruption. As a result, we cannot hope that
the learned rules will generally model the entire dataset. In these cases, ILP
can at best generate rules that describe fragments of the underlying model. Our
hope is that such rules will allow us to observe the central relationships within
the data.
An important question is how to combine the partial rules to obtain a useful
classifier. We have two major constraints in our domain. First, we expect the
number of positives to grow linearly with the number of individuals in the domain. In contrast, the number of negatives increases with the number of pairs,
and therefore grows quadratically. Consequently, any approach should be robust
to false positives. Furthermore, flexibility is also an important consideration as
we ultimately want to be able to weigh precision versus recall through some measure of confidence, ideally in the form of a probability. Secondly, we expect to use
the system for different datasets: our method should not be prone to overfitting
and it should be easy to parameterize for datasets with different observabilities
and error rates.
The previous discussion suggests probabilistic classifiers as a good approach to our problem. We explore three different Bayes net based approaches
to this problem. Each ILP learned rule is represented as a random variable in
the network. The simplicity and robustness of the Naı̈ve Bayes classifier make
it a good candidate for combining the learned rules [12]. Unfortunately, Naı̈ve
Bayes assumes independence between features and our rules may be quite interdependent and perhaps even share literals. A natural extension is to use TAN [6]
classifiers as they offer an efficient way to capture dependencies between rules.
Additionally, we explore using the Sparse Candidate algorithm [7] for learning
the structure of a full Bayes net. An alternative approach we consider is to group
our rules as an ensemble [3] and use voting, which has had excellent results in
practice. We will evaluate the relative merits of these approaches.
The paper is organized as follows. We first discuss the problem in more detail.
Then, we explain the voting and Bayesian based approaches to rule combination.
Next, we present the main applications and discuss our results. We follow this
by demonstrating how we can represent Bayesian classifiers as a logic program
with probabilities, using CLP(BN ). Finally, we end with related work and our
conclusions.
2 Using ILP
From a logic perspective, the ILP problem can be defined as follows. Let E+ be the set of positive examples, E− be the set of negative examples, E = E+ ∧ E−,
and B be the background knowledge. In general, B and E can be arbitrary logic
programs. The aim of an ILP system is to find a set of hypotheses (also referred
to as a theory) H, in the form of a logic program, such that all positive examples
and none of the negative examples are covered by the program.
In practice, learning processes generate relatively simple clauses which only
cover a limited subset of E+. Moreover, such clauses often cover some examples in E−. One possible reason for the presence of these errors is that these examples
may have been misclassified. A second reason is that approximated theories can
never be as strict as the ground truth: if our clause is only a subclause of the
actual explanation, it is possible that the clause will cover a few other incorrect
examples. We also have to address implementational difficulties: for most cases
we can only search effectively for relatively simple explanations (clauses). Therefore, we assume that clauses represent fragments of the ground-truth and that
the learning process can capture different “features” of the ground truth. Clauses
have some distribution, which is likely to be non-uniform, over the interesting
aspects of the ground-truth theory. Even if we do not capture all features of the
ground truth, we can still learn interesting and relevant clauses.
Given a set H of clauses learned in an incomplete world we can combine them
to obtain a better classifier. One possible approach to combine clauses would be
to assume that each clause is an explanation, and form a disjunction over the
clauses. Although this approach has the merit of simplicity, and should work
well for cases where we are close to the ground truth, it does have two serious
issues we need to consider:
– We are interested in applications where the number of false instances dominates. Unfortunately, the disjunction of clauses maximizes the number of
false positives.
– We expect the classifier to make mistakes, so ideally we would like to know
the degree of confidence we have in a classification.
Our problem is not novel, and several approaches come to mind. We shall
focus on two such approaches here. The idea of exploiting different aspects of
an underlying classifier suggests ensemble-based techniques. Previous work on
applying ensemble methods to ILP [4] suggests that exploring the variability in
the seed is sufficient for generating diverse classifiers. We thus decided to use a
simple approach where we use the ILP engine to generate clauses and then use
voting to group them together. A second alternative is to consider each clause as
a feature of an underlying classifier. We want to know which features are most
important. Several possibilities exist and we focus on Bayesian networks, as they
provide us with an estimated probability for each different outcome.
3 Combining Rules
3.1 Voting
It is well known that ILP systems that learn clauses using seeds are exploiting
different areas of the search space, in a manner analogous to ensemble methods. In this vein, recent ILP work has exploited several techniques for ensemble
generation, such as bagging or bootstrapping [4] and different forms of boosting [5,13,9,10]. Bagging is a popular ensemble method that consists of generating
different training sets where each set contains a sample, with replacement, of
the original dataset. Hypotheses are learned from each dataset, and combined
through a voting method. Alternatively, in boosting each classifier is built depending on the errors made by previous classifiers. Each new rule thus depends
on the performance of the previous one.
Previous work on applying bagging to ILP [4] suggests that exploring variability from using different seed examples can be sufficient for generating diverse
classifiers. We shall follow a similar approach: we use hypotheses generated from
different runs of the ILP system, and combine them through unweighted voting.
With this method, we consider an example to be positive depending on the number of clauses that are satisfied for that example. The number of clauses we need
to satisfy to classify the example as positive is a variable threshold parameter.
One major advantage of using a voting method is that we can obtain different
values of precision and recall by varying the voting threshold. Thus, although a
voting method does not give an estimate of the probability for each classification,
it does provide an excellent baseline to compare with Bayesian-based methods.
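For concreteness, the following sketch (illustrative Python only, not the authors' implementation; the experiments use Aleph-generated clauses and the authors' own tooling) shows unweighted voting with a variable threshold; rule_hits is a hypothetical 0/1 matrix recording which clauses cover which examples.

# Illustrative sketch: unweighted voting over ILP-learned clauses.
# rule_hits[i][j] is 1 if clause j covers example i (hypothetical input).
def vote_predict(rule_hits, threshold):
    """Label an example positive when at least `threshold` clauses cover it."""
    return [1 if sum(hits) >= threshold else 0 for hits in rule_hits]

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Sweeping the threshold from 1 to the number of clauses traces out the
# precision/recall trade-off that serves as the voting baseline.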
3.2 Bayesian Networks
Fig. 1. A Naïve Bayes Net (rule nodes Rule 1 through Rule n).
We expect every learned clause to be related to a clause in the “true” theory.
Hence, we would also expect that the way each learned clause classifies an example is somehow dependent on the example’s true classification. This suggests
a simple approach where we represent the outcome for each clause as a random
variable, whose value depends on the example’s classification. The Naı̈ve Bayes
approach is shown in Figure 1 [12]. Advantages of this approach are that it is
straightforward to understand as well as easy and fast to train.
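A minimal sketch of such a Naïve Bayes combiner over boolean rule outcomes is given below (illustrative Python, not the authors' implementation); it assumes both classes appear in the training data and uses Laplace smoothing for the per-rule conditional probabilities.

import math

# Illustrative sketch: Naive Bayes over boolean rule outcomes, one feature per
# ILP-learned clause (1 = the clause covers the example), as in Figure 1.
def train_naive_bayes(X, y, alpha=1.0):
    """X: list of boolean feature vectors, y: list of 0/1 class labels."""
    n_features = len(X[0])
    counts = {c: [0] * n_features for c in (0, 1)}
    class_counts = {0: 0, 1: 0}
    for features, label in zip(X, y):
        class_counts[label] += 1
        for j, v in enumerate(features):
            counts[label][j] += v
    prior = {c: class_counts[c] / len(y) for c in (0, 1)}
    # P(rule_j = 1 | class = c), Laplace-smoothed
    cond = {c: [(counts[c][j] + alpha) / (class_counts[c] + 2 * alpha)
                for j in range(n_features)] for c in (0, 1)}
    return prior, cond

def predict_proba(prior, cond, features):
    """P(class = 1 | features), assuming rule outcomes independent given the class."""
    log_post = {}
    for c in (0, 1):
        lp = math.log(prior[c])
        for j, v in enumerate(features):
            p = cond[c][j]
            lp += math.log(p if v else 1.0 - p)
        log_post[c] = lp
    m = max(log_post.values())
    z = sum(math.exp(lp - m) for lp in log_post.values())
    return math.exp(log_post[1] - m) / z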
The major drawback with Naı̈ve Bayes is that it makes the assumption that
the clauses are independent given the class value. Often, we expect clauses to
be strongly related. Learning a full Bayes Net is an NP-complete problem, so
in this work, we experimented with Tree Augmented Naı̈ve Bayes (TAN) [6]
networks. Figure 2 shows an example of a TAN network. TAN models allow for
more complex network structures than Naı̈ve Bayes. The model was proposed
by Geiger in 1992 [8] and it extends work done by Chow and Liu [2].

Fig. 2. A TAN Bayes Net (rule nodes Rule 1 through Rule n).

Friedman, Geiger and Goldszmidt [6] evaluated the algorithm on its viability for classification tasks. The TAN model, while retaining the basic structure of Naïve Bayes,
also permits each attribute to have at most one other parent, allowing the model
to capture dependencies between attributes. To decide which arcs to include in
the ’augmented’ network, the algorithm makes a complete graph between all the
non-class attributes, where the weight of each edge is given as the conditional
mutual information between those two attributes. A maximum weight spanning
tree is constructed over this graph, and the edges that appear in the spanning
tree are added to the network. Geiger proved that the TAN model can be constructed in polynomial time with a guarantee that the model maximizes the Log
Likelihood of the network structure given the dataset.
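The structure-learning step just described can be sketched as follows (illustrative Python, not the authors' implementation): edge weights are the conditional mutual information between rule attributes given the class, and a maximum-weight spanning tree supplies the extra parent of each attribute.

import math
from itertools import combinations

# Illustrative sketch of the TAN structure step for boolean rule attributes.
def cond_mutual_info(X, y, a, b, eps=1e-12):
    """I(X_a; X_b | class), estimated from counts."""
    n = len(y)
    cmi = 0.0
    for c in (0, 1):
        idx = [i for i in range(n) if y[i] == c]
        if not idx:
            continue
        pc = len(idx) / n
        for va in (0, 1):
            for vb in (0, 1):
                p_ab = sum(1 for i in idx if X[i][a] == va and X[i][b] == vb) / len(idx)
                p_a = sum(1 for i in idx if X[i][a] == va) / len(idx)
                p_b = sum(1 for i in idx if X[i][b] == vb) / len(idx)
                if p_ab > 0:
                    cmi += pc * p_ab * math.log(p_ab / (p_a * p_b + eps))
    return cmi

def tan_edges(X, y):
    """Extra parent edges (parent, child) chosen by a maximum-weight spanning tree."""
    n_features = len(X[0])
    weights = {(a, b): cond_mutual_info(X, y, a, b)
               for a, b in combinations(range(n_features), 2)}
    in_tree, edges = {0}, []      # attribute 0 as the (arbitrary) root
    while len(in_tree) < n_features:
        best = max(((a, b) for (a, b) in weights
                    if (a in in_tree) != (b in in_tree)),
                   key=lambda e: weights[e])
        a, b = best
        parent, child = (a, b) if a in in_tree else (b, a)
        edges.append((parent, child))   # child gets one parent besides the class
        in_tree.add(child)
    return edges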
The problem arises of whether different Bayes networks could do better. We
report on some preliminary work using the Sparse Candidate Algorithm [7]. The
Sparse Candidate algorithm tries to speed up learning a full Bayesian Network
by limiting the search space of possible networks. The central premise is that
time is wasted in the search process by evaluating edges between attributes that
are not highly related. The algorithm retains standard search techniques, such
as greedy hill climbing, but uses mutual information to limit the number of
possible parents for each attribute to a small 'candidate' set. The algorithm works
in two phases. In the first phase, the candidate set of parents is picked for each
attribute. The candidate set must include all current parents of a node. The
second step involves performing the actual search. These two steps are repeated
either for a set number of times or until the score of the network converges.
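A sketch of the candidate-selection phase (the first of the two phases) is shown below for binary attributes (illustrative Python; the restricted greedy search of the second phase is omitted, and this is not the LearnBayes implementation used in the experiments reported here).

import math

# Illustrative sketch of phase 1 of the Sparse Candidate algorithm:
# pick, for each attribute, a small candidate set of potential parents.
def mutual_info(X, a, b):
    """I(X_a; X_b) for boolean attributes, estimated from counts."""
    n = len(X)
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = sum(1 for row in X if row[a] == va and row[b] == vb) / n
            p_a = sum(1 for row in X if row[a] == va) / n
            p_b = sum(1 for row in X if row[b] == vb) / n
            if p_ab > 0:
                mi += p_ab * math.log(p_ab / (p_a * p_b))
    return mi

def candidate_parents(X, k, current_parents):
    """Keep each attribute's current parents and fill up to k candidates with
    the attributes sharing the most mutual information with it."""
    n_features = len(X[0])
    candidates = {}
    for a in range(n_features):
        scored = sorted((b for b in range(n_features) if b != a),
                        key=lambda b: mutual_info(X, a, b), reverse=True)
        cand = list(current_parents.get(a, []))
        for b in scored:
            if len(cand) >= k:
                break
            if b not in cand:
                cand.append(b)
        candidates[a] = cand
    return candidates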
4 Results
This section presents our results and analysis of the performance of several applications. For each application we show precision versus recall curves for the four
methods: Naı̈ve Bayes, TAN, Sparse Candidate and voting. All our experiments
were performed using Srinivasan’s Aleph ILP system [16] running on the Yap
Prolog system. We used our own software for Naı̈ve Bayes and TAN. For the
Sparse Candidate Algorithm we used the LearnBayes program provided by Nir
Friedman and Gal Elidan. For this algorithm we set the number of candidate
parents to be five and we used the Bayesian Information Criterion as the scoring
function. All results are obtained using five-fold cross-validation.
Our main experiment was performed on synthetic datasets developed by
Information Extraction & Transport, Inc. within the EAGLE Project [15,11].
The datasets are generated by simulating an artificial world with large numbers
of relationships between agents. The data focuses on individuals which may have
capabilities, belong to groups, and participate in a wide range of events. In our
case, given that some individuals may be known through different identifiers
(e.g., through two different phone numbers), we were interested in recognizing
whether two identifiers refer to the same individual.
All datasets were generated by the same simulator, but with different parameters for observability (how much information is available as evidence), corruption, and clutter (irrelevant information that is similar to the information
being sought). Five datasets were provided for training, and six for evaluation.
All the datasets include detailed data on a few individuals, including aliases for
some individuals. Depending on the dataset, the data may or may not have been
corrupted.
Our methodology was as follows. First, we used the five training datasets to
generate rules, using the ILP system Aleph. Using the rules learned from the
training set, we selected the ones with best accuracy and combined them with
domain expert knowledge to provide new feedback to the training phase. Using
the final set of learned rules, we converted each of the evaluation datasets into a
set of propositional feature vectors, such that each rule appeared as an attribute
in the feature vector. Each rule served as a boolean attribute, which received a
value of one if the rule matched the example and zero otherwise. For each of the
six test datasets, we performed five-fold cross-validation. The network structure
and parameters were learned on four of the folds, while the accuracy was tested
on the remaining fold. For each dataset, we fixed the ratio of negative examples
to positive examples at seventy to one. This is an arbitrary ratio since the full
datasets are exceedingly large, and the ground truth files were only recently
released.
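The propositionalization step described above can be sketched as follows (illustrative Python; the rule predicates stand in for calls to the ILP-learned clauses, and the 70:1 ratio matches the subsampling described in the text).

import random

# Illustrative sketch: each learned rule becomes a boolean attribute that is 1
# when the rule covers the example; negatives are subsampled to a fixed ratio.
def to_feature_vectors(examples, rules):
    """`rules` is a list of predicates example -> bool (hypothetical stand-ins
    for the ILP-learned clauses)."""
    return [[1 if rule(ex) else 0 for rule in rules] for ex in examples]

def subsample_negatives(pos, neg, ratio=70, seed=0):
    """Keep all positives and at most `ratio` negatives per positive."""
    rng = random.Random(seed)
    keep = min(len(neg), ratio * len(pos))
    return pos, rng.sample(neg, keep)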
The precision/recall (P/R) curves for the different datasets are seen in Figures 3 through 8. On each curve, we included 95% confidence intervals on the precision score for select levels of recall. The curves were obtained by averaging the precision and recall values for fixed thresholds. Each of Figures 3 through 8 plots precision against recall for Voting, TAN, Naive Bayes and Sparse Candidate.

Fig. 3. P/R for Dataset 1
Fig. 4. P/R for Dataset 2
Fig. 5. P/R for Dataset 3
Fig. 6. P/R for Dataset 4
Fig. 7. P/R for Dataset 5
Fig. 8. P/R for Dataset 6

The precision recall curve for the TAN algorithm dominates the curves for Naïve Bayes and voting on all six of the datasets. For each dataset, there are several places where TAN yields at least a 20 percentage point increase in precision, for the same level of recall, over both Naïve Bayes and voting. On two of the six datasets, Naïve Bayes beats voting, while on the remaining four they have comparable performance. One reason for TAN's dominance compared to Naïve Bayes is the presence of rules which are simply refinements of other rules. The TAN model is able to capture some of these interdependencies, whereas Naïve Bayes explicitly assumes that these
dependencies do not exist. Naı̈ve Bayes’ independence assumption accounts for
the similar performance compared to voting on several of the datasets. TAN and
the Sparse Candidate algorithm had similar precision recall curves. The package
we used for the Sparse Candidate algorithm only allows for building generative
models. TAN is a discriminative model, so it emphasizes differentiating between
positive and negative examples. An important follow-up experiment would be to
adapt the Sparse Candidate algorithm to use discriminative scoring functions.
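The points on the curves above were obtained by averaging precision and recall over the cross-validation folds at fixed thresholds; a minimal sketch of that averaging step is given below (illustrative Python, not the plotting code actually used).

# Illustrative sketch: average precision and recall across folds at each
# fixed classification threshold to obtain one point per threshold.
def averaged_pr_curve(fold_scores, fold_labels, thresholds):
    """fold_scores[f][i]: predicted P(positive) for example i in fold f."""
    curve = []
    for t in thresholds:
        precisions, recalls = [], []
        for scores, labels in zip(fold_scores, fold_labels):
            preds = [1 if s >= t else 0 for s in scores]
            tp = sum(p and l for p, l in zip(preds, labels))
            fp = sum(p and not l for p, l in zip(preds, labels))
            fn = sum((not p) and l for p, l in zip(preds, labels))
            precisions.append(tp / (tp + fp) if tp + fp else 1.0)
            recalls.append(tp / (tp + fn) if tp + fn else 0.0)
        curve.append((sum(recalls) / len(recalls),
                      sum(precisions) / len(precisions)))
    return curve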
In situations with imprecise rules and a preponderance of negative examples, such as these link discovery domains, Bayesian models and especially TAN
provide an advantage. One area where both TAN and Naı̈ve Bayes excel is in
handling imprecise rules. The Bayes nets effectively weight the precision of each
rule either individually or based on the outcome of another rule in the case of
TAN. The Bayesian nets further combine these probabilities to make a prediction of the final classification, allowing them to discount the influence of spurious
rules in the classification process. Ensemble voting does not have this flexibility
and consequently lacks robustness to imprecise rules. Another area where TAN
provides an advantage is when multiple imprecise rules provide significant overlapping coverage on positive examples and a low level of overlapping coverage
on negative examples. The TAN network can model this scenario and weed out
the false positives. One potential disadvantage to the Bayesian approach is that
it could be overly cautious about classifying something as a positive. The high
number of negative examples relative to the number of positive examples, and
the corresponding concern of a high false positive rate, helps mitigate this potential problem. In fact, at similar levels of recall, TAN has a lower false positive
rate than voting.
5 The CLP(BN) Representation
Using Bayesian classifiers to join the rules means that we will have two distinct
classifiers using very different technology: a logic program (a set of rules), and
a Bayes net. Some further insight may be obtained by using formalisms that
combine logic and probabilities, such as CLP(BN ).
CLP(BN ) is based on the observation that in Datalog, missing values are
represented by Skolem constants; more generally, in logic programming missing
values, or existentially-quantified variables, are represented by terms built from
Skolem functors. CLP(BN ) represents such terms with unknown values as constraints. Constraints are kept in a separate store and can be updated as execution
proceeds (i.e., if we receive new evidence on a variable). Unifying a term with a
constrained variable invokes a specialized solver. The solver is also activated before presenting the answer to a query. Syntactically, constraints are represented
as terms of the form {C = Skolem with CPT}, where C is the logical variable, Skolem identifies the Skolem function, and CPT gives the parameters for the
probability distribution.
First, we show how the Naı̈ve Bayes net classifier can be built using CLP(BN ).
The value taken by the classifier is a random variable that may take the value t
or f with some prior probability:
classifier(C) :-
    { C = classifier with p([f,t],[0.25,0.75]) }.
Each rule I’s score V is known to depend on the classifier only:
rule(I,V) :-
    classifier(C),
    rule_cpt(I,P1,P2,P3,P4),
    { V = rule(I) with p([f,t],[P1,P2,P3,P4],[C]) }.
Rule I's score is V, which is either f or t. The value of V depends on
the value of the classifier, C, according to the conditional probability table
[P1,P2,P3,P4]. Our implementation stores the tables for each rule in a database:
rule_cpt(1,0.91,0.66,0.09,0.34).
rule_cpt(2,0.98,0.87,0.02,0.13).
rule_cpt(3,0.99,0.79,0.01,0.21).
rule_cpt(4,0.99,0.87,0.01,0.13).
.....
This fully describes the Bayes net. To actually evaluate a rule we just need
to introduce the evidence given by the different rules:
nbayes(A,B,C) :-
    all_evidence(0,39,A,B),
    classifier(C).
all_evidence(N,N,_,_).
all_evidence(I0,N,A,B) :-
    I0 < N, I is I0+1,
    rule_evidence(I,A,B),
    all_evidence(I,N,A,B).
rule_evidence(I,A,B) :- equals(I,A,B), !, rule(I,t).
rule_evidence(I,A,B) :- rule(I,f).
The predicate nbayes/3 receives a pair of individuals A and B, adds evidence
from all rules, and then asks for the new probability distribution on the classifier, C. The predicate all_evidence recursively considers evidence from every rule. The predicate rule_evidence/3 calls rule I on the pair A and B. If the rule
succeeds, evidence from rule I is t, otherwise it adds evidence f.
A TAN network only differs in that a rule node may have two parents, the
classifier C and some other node J. This is described in the following clause:
rule(I,V) :-
    rule_cpt(I,J,P1,P2,P3,P4,P5,P6,P7,P8),
    classifier(C),
    rule(J,V1),
    { V = rule(I) with p([f,t],[P1,P2,P3,P4,P5,P6,P7,P8],[C,V1]) }.
More complex networks can be described in a similar fashion.
CLP(BN ) offers two main advantages. First, we can offer interactive access
to the full classifier. Second, we gain some insight since our task now involves
learning a single CLP(BN ) program, where each newly induced rule will result
in recomputing the probability parameters currently in the database.
6 Relationship to Other Work
Our present work fits into the popular category of using ILP for feature construction. Such work treats ILP-constructed rules as Boolean features, re-represents
each example as a feature vector, and then uses a feature-vector learner to produce a final classifier. To our knowledge, the work closest to ours is by Kononenko
and Pompe [12], who were the first to apply Naïve Bayes to combine clauses. Other
work in this category was by Srinivasan and King [17], for the task of predicting biological activities of molecules from their atom-and-bond structures. Some
other research, especially on propositionalization of First Order Logic (FOL) [1], converts the training sets to propositions and then applies feature-vector techniques in the learning phase. This is similar to what we
do; however, we first learn from FOL and then learn the network structure and
parameters using the feature vectors obtained with the FOL training, resulting
in much smaller feature vectors than in propositionalization.
Our paper contributes three novel points to this category of work. First,
it highlights the relationship between this category of work and ensembles in
ILP, because when the feature-vector learner is Naı̈ve Bayes the learned model
can be considered a weighted vote of the rules. Second, it shows that when the
features are ILP-learned rules, the independence assumption in Naı̈ve Bayes may
be violated badly enough to yield a high false positive rate. This false positive
rate can be brought down by permitting strong dependencies to be explicitly
noted, through learning a tree-augmented Naı̈ve Bayes net (TAN). Third, the
present paper provides some early experimental evidence suggesting that a more
computationally expensive full Bayes net learning algorithm may not provide
added benefit in performance.
7 Conclusions
One often has to deal with erroneous and missing information in multi-relational
data mining. We compare how four different approaches for combining rules
learned by an ILP system perform for an application where data is subject
to corruption and unobservability. We were particularly interested in Bayesian
methods because they associate a probability with each prediction, which can
be thought of as the classifier’s confidence in the final classification.
In our application, we obtained the best precision/recall results using a TAN
network to combine rules. Precision was a major concern to us due to the high
ratio of negative examples to positive examples. TAN had better precision than
Naı̈ve Bayes because it is more robust at handling high redundancy between
clauses. TAN also outperformed voting in this application. Initial results for the
sparse candidate algorithm show a significant increase in computation time, but
no significant improvements in precision/recall.
In future work we plan to experiment with different applications and with
full Bayesian networks trained using a discriminative scoring function. We also
plan to continue work based on the observation that we learn a single
CLP(BN ) network: this suggests that the two learning phases could be better
integrated.
8 Acknowledgments
Support for this research was partially provided by U.S. Air Force grant F30602-01-2-0571. We would also like to thank the referees for their insightful comments.
References
1. E. Alphonse and C. Rouveirol. Lazy propositionalisation for relational learning.
In W. Horn, editor, 14th European Conference on Artificial Intelligence (ECAI'00),
Berlin, Germany, pages 256–260. IOS Press, 2000.
2. C. K. Chow and C. N. Liu. Approximating discrete probability distributions with
dependence trees. IEEE Transactions on Information Theory, 14:462–467, 1968.
3. T. Dietterich. Ensemble methods in machine learning. In J. Kittler and F. Roli,
editors, First International Workshop on Multiple Classifier Systems, Lecture Notes
in Computer Science, pages 1–15. Springer-Verlag, 2000.
4. I. Dutra, D. Page, V. Santos Costa, and J. Shavlik. An empirical evaluation of bagging in inductive logic programming. In S. Matwin and C. Sammut, editors, Proceedings of the 12th International Conference on Inductive Logic Programming, volume 2583 of Lecture Notes in Artificial Intelligence, pages 48–65. Springer-Verlag,
2003.
5. Y. Freund and R. Schapire. Experiments with a new boosting algorithm. In
Proceedings of the 14th National Conference on Artificial Intelligence, pages 148–
156. Morgan Kaufmann, 1996.
6. N. Friedman, D. Geiger, and M. Goldszmidt. Bayesian network classifiers. Machine Learning, 29:131–163, 1997.
7. N. Friedman, I. Nachman, and D. Pe'er. Learning Bayesian network structure
from massive datasets: The “sparse candidate” algorithm. In Proceedings of the
15th Annual Conference on Uncertainty in Artificial Intelligence (UAI-99), pages
206–215, San Francisco, CA, 1999. Morgan Kaufmann Publishers.
8. D. Geiger. An entropy-based learning algorithm of Bayesian conditional trees. In
Uncertainty in Artificial Intelligence: Proceedings of the Eighth Conference (UAI1992), pages 92–97, San Mateo, CA, 1992. Morgan Kaufmann Publishers.
9. S. Hoche and S. Wrobel. Relational learning using constrained confidence-rated
boosting. In C. Rouveirol and M. Sebag, editors, Proceedings of the 11th International Conference on Inductive Logic Programming, volume 2157 of Lecture Notes
in Artificial Intelligence, pages 51–64. Springer-Verlag, September 2001.
10. S. Hoche and S. Wrobel. A comparative evaluation of feature set evolution strategies for multirelational boosting. In T. Horváth and A. Yamamoto, editors, Proceedings of the 13th International Conference on Inductive Logic Programming,
volume 2835 of Lecture Notes in Artificial Intelligence, pages 180–196. SpringerVerlag, 2003.
11. J. M. Kubica, A. Moore, and J. Schneider. Tractable group detection on large link
data sets. In The Third IEEE International Conference on Data Mining, pages
573–576. IEEE Computer Society, November 2003.
12. U. Pompe and I. Kononenko. Naive Bayesian classifier within ILP-R. In
L. De Raedt, editor, Proceedings of the 5th International Workshop on Inductive
Logic Programming, pages 417–436. Department of Computer Science, Katholieke
Universiteit Leuven, 1995.
13. J. R. Quinlan. Boosting first-order learning. Algorithmic Learning Theory, 7th
International Workshop, Lecture Notes in Computer Science, 1160:143–155, 1996.
14. V. Santos Costa, D. Page, M. Qazi, and J. Cussens. CLP(BN ): Constraint Logic
Programming for Probabilistic Knowledge. In Proceedings of the 19th Conference
on Uncertainty in Artificial Intelligence (UAI03), pages 517–524, Acapulco, Mexico, August 2003.
15. R. C. Schrag. EAGLE Y2.5 Performance Evaluation Laboratory (PE Lab) Documentation Version 1.5. Internal report, Information Extraction & Transport Inc.,
April 2004.
16. A. Srinivasan. The Aleph Manual, 2001.
17. A. Srinivasan and R. King. Feature construction with inductive logic programming: A study of quantitative predictions of biological activity aided by structural
attributes. In S. Muggleton, editor, Proceedings of the Sixth Inductive Logic Programming Workshop, LNAI 1314, pages 89–104, Berlin, 1997. Springer-Verlag.
14
18
✂✁☎✄☎✆✞✝✠✟☛✡✌☞✍✟✏✎☎✑✏✒✓✆✔✟☛✕✗✖✘✑✠✙✛✚✜✁✣✢✥✤✦✒
✩✧ ★✪★✬✫✮✭✰✯✲✱✴✳✵✱✴✫✷✶✴✸✺✹✏✱✞✫✼✻✽✳✵✯✿✾❁❀❃❂✿❄❆❅❇✾❈✱✞✱✴❂❉✸✺❊❋★✬✫✮●✏★✬❍■❄❈✫❏✸✺★✬✫✷✻▲❑▼★✪◆✺✳✵✯❖❅✔✱P❀❃✳◗◆✺❘❆✫✺❄❆❄✪❙✪❚✷✱
❯❲❱❇❳❋❨✞❩❭❬❫❪P❱❇❴❵❬✥❛✞❜❞❝✰❛❡❪P❳❈❢❈❬❫❱◗❩❤❣✬✐❇❥❦❱❇❴❋✐❇❱✴❧❋♠✏❨✔❬❫♥❋❛❡♦❦❥❦❱❇♣q❱✠r❤❴❋❥❦sq❱◗❩❫t❭❥✉❬❫❱❇❥✉❬❤✈✼❱◗❢❋sq❱❇❴✷❧❋❝✰❱❇♦❦❱❇t✇❬❫❥ ①✇❴❋❱◗❴❋♦②❨❡❨✞❴
③✞④❡④✴⑤ ❧✽⑥ ④❡④✬⑦ ✈✼❱❇❢❈sq❱❇❴✼❧❈⑧✰❱◗♦②⑨✴❥❦❢❋❪
⑩❷❶✬❸q❸✞❹q❺❼❻❉❽✬❾✴❹q❶q❿✪➀❷➁❞❻✇➂q❸✞❹❵❿❼❻❖➃❆❸✞➄❵❿✪➀❡➅✴❾✬➆✞➇✪➅❡➈✛➉➊➁❵➄❈➋❡❾✞➄❵➌✬❾✴❹❞➉✇❸q➅➍➉➏➎✬❾
➐☛➑❼➒❇➓❇➔✔→❆➣✴➓❡↔ ❣✬❱❇sq❱◗❩↕❨✞♦❡❪P❛✬➙❈❱◗♦②t✷✐❇❛✴❪✩➛❋❥❦❴❈❥②❴❈⑨✥⑧✓❨❇➜❵❱❇t❭❥②❨✞❴➝❴❈❱◗❬➟➞✰❛✴❩❫♣✬t✺➞➠❥❦❬❫♥✏♦❦❛❡⑨✴❥❦✐✰❱◗➡✪❥❦t✇❬❷➢
➤✓♥❈❱✮❬➟➞✓❛➥❪P❛❡t✇❬❁➙❈❱◗sq❱❇♦❦❛❡❳✽❱❷➙✌❪P❛✬➙❈❱❇♦❦t✦❨✔❩❫❱▼➦❏❩❫❛✴➛❆❨✴➛❈❥❦♦②❥❦t✇❬❫❥❦✐➨➧❤❱❇♦②❨✔❬❫❥②❛✴❴❆❨✞♦➫➩➭❛✬➙✪❱❇♦❦t
➯ ➦❏➧❲➩➳➲ t↕➵❏❨✴❴❋➙➫⑧✓❨❇➜❵❱❇t❭❥②❨✴❴☛✈✷❛❡⑨✴❥②✐➸➦❼❩❫❛❡⑨✞❩↕❨✴❪Pt ➯ ⑧✓✈✛➦✥➲ t↕➵✵➢❡➺➻♥❈❥②♦❦❱➸➦❏➧❲➩➳➲ t➼❨✞❩❫❱✥❱❷❨✞t❭❥②❱✵❩
❬❫❛➸❢❋❴❋➙❈❱◗❩❫t✇❬↕❨✞❴❆➙✷❧❷⑧✓✈✛➦✥➲ t❏❨✞❩❫❱✓❪P❛✞❩❫❱➼❱◗➡✪❳✪❩❫❱❇t❭t❭❥❦sq❱❡➢✔➽❃❛❷➞✰❱❇sq❱◗❩❷❧❷➞✰❱✓❨✞❩❫⑨✴❢❋❱✰❬❫♥❋❨✞❬➍⑧✓✈✛➦✥➲ t
➙✪❛▲❴❋❛✞❬✣❨✴♦✉➞✥❨❇➜✪t✣❨✴♦❦♦❦❛❷➞➾❪P❛✬➙✪❱❇♦❦❥②❴❈⑨▲❳✪❩❫❛❡➛❋♦❦❱❇❪Pt✩❥②❴❵❬❫❢❈❥❦❬❫❥❦sq❱◗♦❦➜❵➢➼➤✓♥❈❥②tP❪P❛✞❬❫❥❦s❡❨✔❬❫❱❇tP❢❋t
❬❫❛P❥❦❴❵❬❭❩❫❛✬➙❈❢❈✐❇❱➝✈✼❛❡⑨✴❥❦✐❷❨✴♦❏⑧✓❨❇➜❵❱◗t❭❥✲❨✞❴✦➚❃❱◗❬❉➞✓❛✴❩❫♣✬t ➯ ✈✛⑧✰➚➝➲ t↕➵✵➢❋➺✮❱➝❨✔❩❫⑨❡❢❋❱✠❬❫♥❋❨✞❬❃✈✛⑧✰➚➝➲ t
❳✪❩❫❛✞s✬❥②➙❈❱➝❨✞❴❁❱✵➡❈❳✪❩❫❱❇t❭t❭❥❦sq❱➝❨✴❴❋➙✦❥❦❴❵❬❫❢❋❥✉❬❫❥❦sq❱✠❪P❛✬➙✪❱❇♦❦❥②❴❈⑨☛♦✲❨✞❴❋⑨✴❢❆❨✴⑨✴❱➫➙✪❢❋❱✠❬❫❛P❱◗➡✪❳❋♦❦❥❦✐❇❥✉❬❫♦✉➜
➙✪❥②t✇❬❫❥❦❴❈⑨❡❢❋❥❦t❭♥❈❥②❴❈⑨➪➙✪❱◗❬❫❱◗❩❫❪P❥❦❴❋❥❦t✇❬❫❥❦✐✮❨✞❴❆➙➻❳❈❩❫❛✴➛❆❨✴➛❈❥❦♦②❥❦t✇❬❫❥❦✐➶❥❦❴❈❜✿❛✴❩❫❪☛❨✞❬❫❥❦❛✴❴➹❨✞❴❆➙✂♥❆❨❷s✬❥②❴❈⑨
❪➘❢❋♦✉❬❫❥❦❳❋♦❦❱❤✐◗❛❡❪P❳✽❛❡❴❈❱❇❴❵❬❫t❇➢q➺✮❱➸➛❈❩❫❥❦❱◗➴❈➜☛➙❈❥❦t❭✐❇❢❈t❭t✓❳✽❱◗❩❫t❭❳✽❱❇✐✵❬❫❥②s❡❱❇t➼❜✿❛✴❩➠♦❦❱❷❨✞❩❫❴❈❥❦❴❋⑨➘✈✛⑧✰➚➝➲ t
❜✲❩❫❛❡❪➷➙❋❨✔❬↕❨✪➢
➬➭➮✬➱✷✃➘❐ ➔✴❒✛➒❡❮ ❳✪❩❫❛❡➛❋❨✴➛❋❥❦♦❦❥❦t✇❬❫❥❦✐◗❰➏♦②❛✴⑨❡❥❦✐❷❨✞♦❈❪P❛✬➙✪❱❇♦❦t❇❧❷⑧✓❨❇➜❵❱❇t❭❥②❨✞❴➘✈✷❛❡⑨✴❥②✐➼➦❼❩❫❛❡⑨✴❩↕❨✞❪Pt❇❧✞⑧✓❨❇➜❵❱❇t❭❥②❨✴❴
❴❈❱◗❬➟➞✰❛✴❩❫♣✬t
Ï Ð❋Ñ➸Ò✽Ó✺Ô✠Õ➘Ö➝×✛Ò✺Ø❫Ô➸Ñ
❭Ù ✫➫Ú✵❚✺✯❖✶❏Û✷★✪Û✼✱✴✳❏Ü❃✱➠✯✲✫❋Ú◗✳✵❄✽✻✽◆✷❅✞✱❤Ý❞Þ◗ß✪à➏á◗â✬ã✪ä➝â✬å✪æ❷ç❷à➏â❵è➭é✣æ✞ê❉ë➸Þ✬ì✵íqç✏î➏Ý❞ä❤é▼ï ç✇ð❵✸q★✬✫➫✯✲✫✼✶❫Ú❇★✬✫✷❅✞✱➠❄✬ñ❆Ú✵❚✷✱
★✬Û✺Û✺✳◗❄❈★❈❅❇❚P❅✞★✬❂✿❂✿✱✴✻✣ò☎è✛Þ✬ë❲ã✲æ❇ó✞ß❋æ❃ä➘âqç✞æ❇ó✩ô➨Þ✴ó❈æ✔ã❞õ✥Þ✬è✺ç❷ê➟ì❷ö✷á✔ê➟à➏Þ❵è✣÷❭ø↕ù❇úqû❲❚✷✱❤✯❖✻✽✱✴★➝✯✿✶❞Ú✵❚✼★❵Ú✴✸q❙❈✯✲ü❈✱✞✫
★▼✻✽✱✴✶◗❅✔✳◗✯✿Û✽Ú✵✯✿❄✪✫➻❄✪ñ✠★➨✶↕Û✛✱✴❅✞✯②ý✼❅❁Û✺✳◗❄✪þ✺❂✿✱✞❍✮✸➼★▼❙✪✱✞✫✷✱✞✳❇★✬❂➝ÿ➏Û✺✳✵❄❈þ✷★✬þ✷✯✲❂✿✯✿✶↕Ú✵✯❖❅✁✇❂✿❄✪❙✪✯❖❅✞★✪✄❂ ✂➘✾❆✫✺❄❵Ü✠❂✿✱✴✻✽❙❈✱
þ✷★✪✶✵✱P❅✞★✬✫▲þ✛✱P◆✷✶↕✱❡✻✦Ú◗❄ ❙✪✱✴✫✺✱✞✳❇★❵Ú◗✱✩★■✶✵Û✼✱❡❅✔✯✲ý✼❅➫❍■❄❆✻✺✱✞❂➟ú✷✭✺❄❈✆✳ ☎❞✞❀ ✝✠✟ ✶✞✸❆Ú◗❚✺✯❖✶✏✶↕Û✛✱✴❅✞✯②ý✛❅➘❍■❄✽✻✽✱✴❂❏✯✿✶
★ ä➘â❵å✪æ✔ç❷à➏â❵è➪è✛æ✞ê➟ë➸Þ❵ì✵☛í ✡✌☞✎✍✑✏✇ú
✒➠★✬✳◗✯✲❄❈◆✷✶❼❍■❄✽✻✽✱✞❂❖✶❞❅✞❄✪❍✣þ✷✯✲✫✺✯✿✫✺❙➝❀❲★q❘❈✱✴✶✵✯✿★✪✫➘✫✺✱✞Ú❫Ü❤❄❈✳✵✾✽✶❏★✪✫✷✻➫❂✿❄✪❙✪✯❖❅✔✥
✓ ★✬❂✿✳✵✱❡★✪✻✽❘➘✱✖✕✽✯✿✶↕Ú❤ÿ❉✶↕✱✴✱➠Ú✵❚✷✱
❄❵ü✪✱✞✳◗ü❆✯✲✱✴Ü þ❆✘❘ ✗➫✱✴✳◗✶↕Ú✵✯✿✫✺❙ ★✬✫✷✻✮✧➘✱☛●✏★✬✱❡✻❆✙Ú ✡✌☞✚☞✁✏✛✂❷ú✼û❲❚✺✱☛❍■❄❈✶↕Ú✏✻✽✱✞ü❈✱✞❂✿❄✪Û✛✱✴✻✮★✬✫✷✻▲þ✛✱✴✶↕Ú➝✾❆✫✺❄❵Ü✠✫
❍➭❄✽✻✽✱✴❂✿✶➫❄✪ñ➠Ú✵❚✷✯✿✶➫✾❆✯✿✫✷✻ ★✪✳✵✢✱ ✜➠✳◗❄✪þ✷★✪þ✺✯✿❂✲✯❖✶❫Ú◗✯✿❅➭●✠✱✴❂✿★✬Ú✵✯✿❄✪✫✷★✪❂✰❑➶❄✽✻✽✱✴❂✿✶ ✣ÿ ✜➸●✏✤❑ ✟ ✥✶ ✂➘þ❋✧❘ ✦✩✱✞Ú✵❄❆❄✪✳➫✱✞Ú
★✬❂➟★ú ✡ ✩✑✏❞★✬✫✷✻▲❀❲★q❘✪✱✴✶✵✯❖★✬✘✫ ☎❞❄✪❙❈✯✿✪❅ ✜➠✳◗❄✪❙❈✳◗★✪❍■✶➘ÿ➏✞❀ ☎✫✜✬✟ ✥✶ ❤✂ þ❆✭❘ ✗➫✱✞✳❇✶❫Ú◗✯✲✫✷❙➭★✪✫✷✻✮✧➝✱P●✏★✪✱✴✻❆✮Ú ✡ ✯✽★✸ ☞✔✰✑✏➟ú
✜➸●✏✱
❑ ✟ ✶❲★✬✳◗✱➝✱❡★✪✶✵❘☎Ú◗❄☎◆✷✫✷✻✽✱✞✳❇✶↕Ú◗★✬✫✼✻❁★✬✫✼✻✦✳❄ ✲➍✱✞✳✠★✪✫✦✯✲✫❋Ú◗◆✺✯②Ú◗✯✲ü❈✱➫Ü❃★q❘■❄✬ñ➼❍➭❄✽✻✽✱✴❂✲✯✿✫✺❙➭Û✺✳◗❄✪þ✺❂✿✱✞❍ ✶✴ú
✹✏❄❵Ü❃✱✞ü✪✱✴✳✴✸q✯✲Ú➠❚✷★✪✶✓þ✛✱✞✱✴✫ ✶↕❚✺❄❵Ü✠✫➭✯✿✫✣Ú◗❚✺✱✠❂✲✯✲Ú✵✱✴✳◗★✬Ú✵◆✺✳◗✱❃Ú✵❚✷★✬Ú✴➸✜ ●✏✤❑ ✟ ✶✥★✪✳✵✱❲❂✿✱✴✶◗✶✓✖✱ ✕✽Û✺✳◗✱✴✶◗✶↕✯✿ü✪✱❤Ú✵❚✷★✪✫
❄✬Ú✵❚✷✱✞✳✩❍➭❄✽✻✽✱✴❂✿✶➫❅✞❄✪❍✣þ✷✯✲✫✺✯✿✫✺❙✮❀❲★q❘✪✱✴✶✵✯❖★✬✫➶✫✺✱✞Ú❫Ü❤❄❈✳✵✾✽✶➝★✪✫✷✻▼❂✿❄✪❙❈✯✿❅✘✡✵☞✔✶✑✏✇✸✛✯✿✫✷❅✞❂✲◆✷✻✺✯✲✫✺❙✮✞❀ ☎✫✜✬✟ ✢✶ ✡ ✯✳✏➟ú
✷ ✫✦Ú✵❚✷✱✩❄✪Ú✵❚✺✱✴✳✠❚✷★✬✫✼✻❼✸❆Ü❤✱✩Ü✠✯✲❂✿❂❏★✬✳◗❙✪◆✷✱➘❄❈✫✦Ú✵❚✷✱✩þ✼★✪✶✵✯✿✶❃❄✬ñ✰★✬✫▲✖
✱ ✕✺★✬❍■Û✺❂✿✱➝Ú◗❚✷★❵Ú✏✞❀ ☎✫✜✬✟ ✶❲✻✽❄➭✫✺❄✪Ú
★✬❂✿Ü❃★q❘✽✶❲★✬❂✿❂✲❄❵Ü ❍➭❄✽✻✽✱✴❂✲✯✿✫✺❙ Û✺✳◗❄✪þ✷❂✲✱✴❍■✶❲✯✿✫❋Ú✵◆✺✯✲Ú✵✯✿ü✪✱✴❂✲❘❈ú✷û❲❚✺✱❡✶↕✱➫❄✪þ✼✶↕✱✴✳✵ü❵★❵Ú◗✯✲❄❈✫✷✶❲★✬✳◗✱➫❄✪◆✺✳❲❍■❄✬Ú◗✯✲ü❵✸★
Ú✵✯✿❄✪✫■ñ➊❄✪✳➸✯✲✫❋Ú✵✳◗❄✽✻✽◆✷❅✞✯✲✫✺✹❙ ☎➼✺❀ ✝✮✟ ✶✴✚ú ✻ ✯✲Ú✵❚✼☎➼✞❀ ✝✠✟ ✶➠Ü❃✱✠Ü❃★✪✫❈Ú✥Ú◗❄☛❅✔❄❈❍✣þ✺✯✿✫✺✱❲Ú✵❚✷✱✏✯✲✫❋Ú◗◆✺✯②Ú◗✯✲ü❈✱✞✫✺✱❡✶✵✶➠❄✬ñ
✜➸●✏✱
❑ ✟ ✶➠Ü✠✯✲Ú✵❚✦Ú✵❚✺✱➘✱✖✽✕ Û✺✳◗✱✴✶◗✶↕✯✿ü✪✱✴✫✺✱✴✶◗✶✓❄✬ñ❞✞❀ ☎✫✜✬✟ ✶✴✳ú ✻➥✱➘✻✽❄✣Ú✵❚✺✯❖✶➠þ❆❘■✱✖✽✕ Û✺❂✿✯❖❅✔✯✲Ú✵❂✿❘➭✻✽✯❖✶↕Ú✵✯✿✫✺❙✪◆✺✯❖✶✵❚✺✯✲✫✷❙
✻✽✱✔Ú◗✱✞✳◗❍➭✯✿✫✺✯❖✶❫Ú◗✯✿❅➫★✪✫✷✻❁Û✷✳✵❄❈þ✷★✬þ✺✯✿❂✿✯✿✶↕Ú✵✯❖❅➝✯✿✫✽ñ➊❄❈✳✵❍ ★❵Ú◗✯✲❄❈✫➳★✪✫✷✻❁þ❆❘ ❚✷★qü❆✯✿✫✺❙■✶↕✱✴Û✷★✬✳❇★❵Ú◗✱✩❅✞❄✪❍■Û✼❄❈✫✺✱✞✫❋Ú❇✶
Ú✵❄✦✻✽✱✞Ú✵✱✞✳◗❍■✯✲✫✷✱☛✻✽✵✯ ✲➍✱✞✳◗✱✞✫❋Ú✠✱✞❂✿✱✞❍■✱✴✫❈Ú❇✶☛ÿ➊✫✷❄❆✻✺✱✴✶✴✸✷✻✽✯✿✳✵✱❡❅❷Ú✵✱❡✻▲✱✴✻✽❙❈✱✴✶✠★✬✫✼✻➳❅✞❄✪✫✷✻✽✯✲Ú✵✯✿❄✪✫✼★✬❂❼Û✺✳◗❄✪þ✷★✪þ✺✌✯
❂✲✯✲Ú❫❘➳✻✽✯❖✶↕Ú✵✳◗✯✲þ✺◆✺Ú✵✯✿❄✪✫✷✶Pÿ✞★õ ✽✿✾➨ï ✔ç ✂❀✂❲❄✬ñ✰Ú✵❚✷✱☛❀❃★q❘❈✱✴✶✵✯✿★✪✫✦✫✺✱✞Ú❫Ü❤❄❈✳✵✾✽✶❃❙✪✱✴✫✺✱✞✳❇★❵Ú◗✱✴✻❼✸✽❂✿✯✲✾❈✱✹✜➸●✏✱❑ ✟ ✶✴ú
❁✿❂ ❴❁❬❫♥❋❥❦t✏❳❆❨✴❳✽❱✵❩➝➞✰❱☛❢❋t❭❱P❨■➛✪❩❫❛q❨❡➙➳❥②❴❵❬❫❱◗❩❫❳✪❩❫❱◗❬↕❨✞❬❫❥❦❛✴❴✮❛✴✺
❜ ❃ ♦❦❛❡⑨✴❥②✐✴➲❼❥❦❴❋✐❇♦❦❢❆➙✪❥❦❴❋⑨■❱❡➢ ⑨✪➢❼❨✴♦❦t❭❛■❩❫❱❇♦②❨✔❬❫❥②❛✴❴❆❨✞♦
❜❖❛✞❩❫❪☛❨✴♦❦❥❦t❭❪Pt❇➢
19
③
✻ ✱✠Û✺✳◗❄✽❅✔✱✴✱✴✻■★✪✶✓ñ➊❄❈❂✲❂✿❄❵Ü✏✶✴ú✬Ù❭✁
✫ ❆✱✴❅✔Ú✵✯✿❄✪✫✄✂✩Ü❃✱✏✻✽✯✿✶◗❅✔◆✼✶✵✶➠❀✞☎ ✜✆✟ ✶✞✸✬✱ ✕✽Û✺❂✿★✪✯✲✫➭Ú◗❚✺✱✏❍■❄✬Ú◗✯✲ü❵★❵Ú◗✯✲❄❈✫
ñ➊❄✪✳✏✯✿✫❋Ú✵✳◗❄❆✻✺◆✷❅✔✯✿✫✺❙ ☎❞❀✞✝✠✟ ✶✏★✪✫✷✻▲✯✲❂✿❂✲◆✼✶❫Ú◗✳◗★✬Ú✵✱➫Ú✵❚✷✱☛þ✷★✪✶✵✯❖❅➫Û✺✳✵✯✿✫✷❅✞✯✲Û✺❂✿✱✴✶✏❄✬ñ ☎❞❀✞✝✠✟ ✶✞ú✷Ù❭✫☎✽✱✴❅❷Ú◗✯✲❄❈✫ ✍
Ü❤✱✏❍■❄❈✳✵✱❲ñ➊❄❈✳✵❍ ★✬❂✿❂✿❘☎✻✽✱✞ý✷✫✺✬✱ ☎➼❀✺✝✮✟ ✶✴ú✪Ù❭✫✁✽✱✴❅❷Ú◗✯✲❄❈✫✝✆PÜ❤✱➘✻✽✯✿✶◗❅✔◆✼✶✵✶✥Û✛✱✞✳❇✶✵Û✼✱❡❅❷Ú✵✯✿ü✪✱❡✶✥ñ➊❄❈✳➠❂✲✱❡★✬✳◗✫✺✯✿✫✺❙
☎❞✞❀ ✝✠✟ ✶✦ñ➊✳◗❄✪❍ ✻✺★✬Ú◗★✺ú❤Ù❭✞
✫ ❆✱❡❅❷Ú✵✯✿❄✪✫✠✟➪Ü❤✱ þ✺✳◗✯✲✱☛✡✷❘➹✻✺✯✿✶◗❅✔◆✷✶◗✶✦✳◗✱✞❂❖★❵Ú◗✱✴✻ Ü❃❄✪✳◗✾➍ú➸Ù❭✫✞❆✱❡❅❷Ú◗✯✲❄❈✫ ✶
Ü❤✱✮❅✔❄❈✫✷❅✔❂✿◆✷✻✽✱❈ú ✻ ✱➶★✪✶◗✶✵◆✺❍■✱❁ñ➏★✪❍➭✯✿❂✿✯✿★✪✳✵✯✲Ú❫❘➪Ü✠✯②Ú◗❚ Ú◗❚✺✱✮þ✷★✪✶✵✯❖❅➳❅✞❄✪✫✷❅✞✱✞Û✽Ú❇✶☎❄✪ñ➘❀❲★q❘✪✱❡✶↕✯❖★✬✫✂✫✷✱✔Ú
Ü❤❄❈✳✵✾✽✶ ✡✵☞✔✍✑✏➼★✪✫✷✘✻ ☎❞❄✪❙❈✯✿❅ ✜➠✳◗❄✪❙❈✳◗★✪❍■❍➭✯✿✫✺❙ ✡✵☞☞ ✂✔➟✏ ú
✍✘Ô➠Ò✽Ø✏✎✒✑✰Ò✺Ø❫Ô❤Ñ✔✓❇Ô➠Ó➹Ð❈Ñ❤Ò✽Ó✺Ô✠Õ➝Ö➘×❼Ø↕Ñ✖✕✘✗☛Ô✙✕➸Ø↕✚× ✑✜✛✣✢✤✑✦✥✙✧✩★❆Ø✏✑✥Ñ✫✪✞✧❼Ò✭✬➪Ô➠Ó✯✮✰★
✱ ❄✪✫✷✶✵✯✿✻✺✱✞✳✠Ú◗❚✺✱Pñ➊❄✪❂✿❂✿❄❵Ü✠✯✲✫✺❙■✳◗◆✺✫✺✫✺✯✿✫✺❙ ✱ ✕✽★✪❍■Û✺❂✲✱✦ÿ➏✯②Ú✩✯✿✶✏þ✷★✪✶✵✱✴✻▲❄✪✫✮Ú◗❚✺✱✳✉✲ ✶✵❅❇❚✺❄❆❄❈❂✣✟ ➟✱ ✕✺★✬❍■Û✺❂✿✱✩þ❆❘
✌
✦✩✱✔Ú✵❄❆❄❈✳✠✱✔Ú➝★✬❂➟★ú ✡ ✩✔✏✛✂❷ú
✴✚✵✺æ✞ì✵æ➻â❵ì✵æ ç❷ê➟ö✷ó✪æ✞è✷ê❉ç➨â❵è✛ó á❇Þ❵ö✽ì❇ç✞æ✔☛ç ✶☎✷➶æ✮í✬è✛Þ✬ë ✸ë ✵✽à➏á✹✵ ç❷ê➟ö✷ó✪æ✞è✷ê❉ç✮ê❭â✴í❈æ➥✸ë ✵❆à➏✺á ✵
❇á Þ❵ö✽ì❇ç✞æ❷ç☛✶✚✻✠â✪á✺✵➳ç❷ê➟ö✷ó✪æ✞è✷ê✩✵✺â❵çPâ✬è✝✼✭✽ â❵è➍ó æ❇â✪á✺✵➶á❇Þ❵ö✽ì❇ç✞æ✩à❖ç➫æ✞à ê✾✵✷æ✔ì✩à è✷ê❭æ✞ì✵æ❷ç❷ê➟à è❋ß Þ✬ì
è✛Þ✬✿ê ✶✚❀ ç❇ê➟ö✷ó✪æ✞è✷ê✥ê❭â✴í✬à è❋ß❁â á◗æ✞ì❷ê❭â❵à è➨á❇Þ❵ö✽ì❇ç✞æ❂❁❼ß❋æ✔ê❉ç➫âPß✪ì✵â❈ó✪✜æ ❃✔Þ✬ì✩❄ê ✵✺â✬ê➸á❇Þ❵ö✽ì❇ç✞æ❅✶❇❆❲ö✽ì
❈ æ✔ã❦à➏æ❉❃➳â ❈ Þ❵ö✽ê➫✾ê ✵✷æ➳à è✷ê❭æ✔ì✵æ✔ç❷ê❉à è❆ß✬è➍æ❷ç❇ç❁Þ✏❃➳â á❇Þ❵ö✽ì❇ç✞æ➳à❖ç✦à è❅❊❃ö✷æ✞è✛á❇æ❇ó ❈ å▼ê❄✵✺æ❁ç❷ö●❋ Þ❍❃
ê✾✵✷✳æ ✼■✽ ï ç❁Þ✏❃➳â✬ã ã➠ç❷ê➟ö✷ó✪æ✞è✷ê❉ç➭ê❭â✴í✬à è❋ß ❄ê ✵✺æ✮á❇Þ❵ö✽ì❇ç✞❅æ ✶❏❆❃ö✽ì ❈ æ✞ã❦à➏æ❑▲❃ â ❈ Þ❵ö✽ê☛â➶ç❇ê➟ö✷ó✪æ✞è✷ê❉ç✺ï
ß✬ì◗â✪ó✪▲æ ❃✔Þ✬ì✣â➳á❇Þ❵ö✽ì❇ç✞æ☛à❖ç☛à ❅è ❊❃ö✷æ✔è➍á❇æ◗ó ❈ å ❄ê ✵✺æ☛ç❷ê❉ö✷ó❈æ✔è✼ê➏ç✺■ï ✼■✽✖✶
❆◆✺Û✺Û✛❄❈✶✵✱❁Ú◗❚✷★❵Ú Ü❃✱➳❚✼★qü✪✱➳★➥✶✵Û✼✱❡❅✔✯✲ý✼❅✮✶↕✯✲Ú✵◆✷★✬Ú✵✯✿❄✪✫ Ü✠✯②Ú◗❚➹❅✞❄✪◆✺✳❇✶↕✱❡✶➶â❵à ✸✠ã ▼ ★✪✫✷✻ ó ❈ ✸➠★✪✫✷✻
✶❫Ú◗◆✷✻✽✱✞✫❋Ú❇✶✩◆✞❉æ ❖✘ÿ Ú❇★✬✾❆✯✲✫✷❙▲â❵à ✂❷P✸ ▼✷æ✞ê✇æ■ÿ➊Ú◗★✬✾❆✯✿✫✺❙❁ã ▼ ✂❃★✬✫✷✻▼ì❷à➏á◗í➨ÿ➏★✪❂✿✶✵❄☛Ú❇★✬✾❆✯✲✫✷❙✦ã ▼ ✂✔ú ✻ ✱➫❅✞★✪✫✦✫✺❄❵Ü
◆✷✶↕✱✏Ú✵❚✺✱✏❙❈✱✞✫✺✱✴✳◗★✪❂✽✾❆✫✺❄❵Ü✠❂✲✱❡✻✽❙✪✱✏★✪þ✼❄❵ü❈✱❲Ú✵❄☛✻✽✱✞Ú✵✱✴✳✵❍■✯✿✫✺✱➝★☛❀❲★q❘✪✱❡✶↕✯❖★✬✫➭✫✺✱✞Ú❫Ü❤❄❈✳✵✾☛❍■❄✽✻✽✱✴❂✲✯✿✫✺❙PÚ◗❚✺✯✿✶
✶↕Û✛✱✴❅✞✯②ý✼❅✣✶↕✯✲Ú✵◆✷★✬Ú✵✯✿❄✪✫❏ú✛û❲❚✺✱✣✶↕Ú✵✳◗◆✷❅❷Ú◗◆✺✳◗✱☛❄✬ñ✓Ú✵❚✷✯✿✶➘❀❃★q❘❈✱✴✶✵✯✿★✪✫➳✫✷✱✔Ú❫Ü❃❄✪✳◗✾❁✯❖✶➝✶✵❚✺❄❵Ü✠✫➶✯✲✫▼✭✰✯✿❙✪◆✺✳◗✱ ☞✪ú
Ù✇Ú❤❅✔❄❈✫❈Ú❇★✬✯✿✫✷✶✥✱ ✕✽★❈❅❷Ú◗❂✲❘✣Ú✵❚✺✱✏✳❇★✬✫✼✻✽❄✪❍ üq★✪✳✵✯❖★✬þ✷❂✲✱❡✶✥Ü❤✱✏★✪✳✵✱✠✯✿✫❋Ú✵✱✴✳✵✱❡✶❫Ú◗✱✴✻➭✯✲✫✦★✪✫✷✻■✱✞✫✼❅✔❄✽✻✽✱✴✶➸★✬❂✿❂✺Ú✵❚✷✱
❅✔❄✪✫✼✻✽✯②Ú◗✯✲❄❈✫✷★✬❂❏✻✺✱✞Û✛✱✞✫✷✻✽✱✴✫✷❅✔✯✿✱✴✶✏Ü❤✱➫✾❆✫✺❄❵Ü ★✪þ✼❄❈◆✽Ú✴ú
❥❘◗ ➯ ❳✽❱◗❬❫❱✔➵
❥❘◗ ➯ ❩❫❥❦✐✵♣❈➵
⑨✞❩↕❨❡➙❈❱ ➯ ❳✽❱◗❬❫❱❡❧ ♦❦❳✽➵
⑨✴❩↕❨✴➙❈❱ ➯ ❩❫❥❦✐↕♣✽❧ ♦❦❳❆➵
❥❘◗ ➯ ①➟❱✹❙✰➵
⑨✞❩↕❨❡➙✪❱ ➯ ①✇❱✹❙❼❧ ❨✞❥✿➵
❥❦❴✬❬❫❱✵❩❫❱❇t✇❬❫❥❦❴❋⑨ ➯ ➙✪➛✽➵
❥②❴❵❬❫❱◗❩❫❱◗t✇❬❫❥②❴❈⑨ ➯ ♦❦❳✽➵
❥❦❴❵❬❫❱◗❩❫❱❇t✇❬❫❥❦❴❋⑨ ➯ ❨✴❥✲➵
❚❱❯❳❲ ■↔ ❆❨ ↔ ➤✓♥❈❱✠t✇❬❭❩❫❢❋✐◗❬❫❢✪❩❫❱❲❛✴❜❼❬❫♥❈❱✠⑧✓❨❇➜❵❱❇t❭❥②❨✴❴ ❴❈❱◗❬➟➞✰❛✴❩❫♣P❜❖❛✞❩❃❛✴❢❈❩✥❩❫❢❈❴❋❴❋❥❦❴❈⑨✩❱◗➡❈❨✴❪P❳❈♦❦❱❡➢
Ù❭❩
✫ ❆✱✴❅✔Ú✵✯✿❄✪❩
✫ ✺✂ ✌ú ☞❁Ü❤✱▲★✬✳◗❙✪◆✺✱❈✸❞❄✪✫✂Ú✵❚✺✱➳þ✷★✪✶✵✯✿✶✣❄✬ñ✏Ú◗❚✺✱❁✳◗◆✺✫✺✫✷✯✲✫✺❙ ✱✖✕✺★✬❍■Û✺❂✿✱✪✸➼Ú✵❚✼★❵Ú➭❀✞☎ ✜✆✟ ✶
✽✻ ❄▼✫✺❄✪Ú➭★✬❂✿Ü❲★q❘❆✶✣★✬❂✿❂✲❄❵Ü Ú✵❄ ❍➭❄✽✻✽✱✴❂➸Û✺✳◗❄✪þ✷❂✲✱✴❍■✶✣✯✲✫❋Ú◗◆✺✯②Ú◗✯✲ü❈✱✞❂✿❘✪ú✓û❲❚✺✯❖✶✣✯❖✶☛❄❈◆✺✳✣❍■❄✬Ú◗✯✲ü❵★✬Ú✵✯✿❄✪✫➻ñ➊❄❈✳
✯✲✫❋Ú✵✳◗❄✽✻✽◆✷❅✞✯✲✫✺❙ ☎❞❄✪❙✪✯❖❅✞★✪❂❼❀❃★q❘❈✱✴✶✵✯✿★✪✫✘✝➝✱✔Ú❫Ü❃❄✪✳◗✾❆✶❃✯✿✫❬❆✱✴❅✔Ú✵✯✿❄✪✫☎✂✽ú❭✂✽ú
❪✦❫❉❴ ❵✣❛❝❜❡❞❣❢❅❤❉❛✯✐❦❥✙❧❡♠♥❤❉♦q♣sr☞❧❡♠♥r☞❛✉t✈❢
✭➼✯✿✳❇✶❫Ú➸Ü❃✱✏þ✺✳◗✯✿❂✱ ✷✡ ❘➭✱✖✕✽Û✺❂❖★✬✯✿✫■Ú✵❚✺✱➘þ✷★✪✶✵✯❖❅✞✶➠❄✪ñ❏❀✺☎ ✜✆✟ ✶✆✡ ✯✺✸ ☞✔✰✑✏✇úP✇ ❀✺☎ ✜➪✱✴✶◗✶↕✱✴✫❋Ú✵✯❖★✬❂✿❂✲❘■❅✔❄❈❍➭Û✼★✪❅❷Ú◗❂✲❘
✶↕Û✛✱✴❅✞✯②ý✷✱❡✶➸★☛❀❲★q❘❈✱✴✶✵✯✿★✪✫➭✫✷✱✔Ú❫Ü❃❄✪✳◗✾✛ú❈û❲❚✺✱➘❅✞❄✪✳◗✱✠❄✬ñ❏★✣❀✞☎✫✜➪✯❖✶❃★☛✶↕✱✞Ú➸❄✬ñ✰ä➘â❵å✪æ✔ç❷à➏â❵è➨á✞ã✲â❵ö❆ç✞æ✔ç❷ú❣➝✇ ✫
20
⑥
✱✖✕✺★✬❍■Û✺❂✿✱➝❄✪ñ❞★✣❀❲★q❘✪✱❡✶↕✯❖★✬✫ ❅✔❂❖★✬◆✼✶↕✱✏✯❖✶✁✄✂✆☎✞✝✆✟✡✠☞☛✡✌✎✍✑✏✓✒✕✔✗✖✘✠✙☛✚✏✛✌✢✜✣☎✥✤✆✟✧✦✛✠☞☛✘✌★✍✑✏❆✸ú ✜➠✳◗✱✴✻✽✯❖❅✞★✬Ú✵✱❡✶
◆✷✶↕✱❡✻❁✯✿✫✮❀✺☎ ✜✆✟ ✶❲★✬✳◗✱✩❅✴★✬❂✿❂✲✱❡✻✮ä➘â❵å✪æ✔ç❷à➏â❵è✁✛▼ ì✵æ❇ó❵à➏á❇â✬ê✇æ✔ç☛★✬✫✷✻❼✸✽◆✷✫✺❂✲✯✿✾✪✱➫❄❈✳◗✻✽✯✿✫✷★✪✳✵❘■❂✿❄✪❙✪✯❖❅✞★✪❂➍Û✺✳✵✱❡✻✽✯✌
❅✞★❵Ú◗✱✴✶✴✸✼❚✼★qü✪✱☎★✪✫➨★✪✶◗✶✵❄❆❅✞✯✿★✬Ú✵✱❡✻➶✳◗★✪✫✺❙✪✱➳ÿ➊✱✪ú ❙✷ú✛Ú◗❚✺✱☎Û✷✳✵✱❡✻✽✯✿❅✴★❵Ú◗✩✱ ✔✗✖✣✪✚☎✫ ❚✷★❈✶➝✳❇★✬✫✷❙✪✭✱ ✬ ✂❷ú✴✦✩✳◗❄✪◆✷✫✷✻
★❵Ú✵❄❈❍ ✶➸✳◗✱✞Û✺✳◗✱✴✶✵✱✞✫❋Ú❃✳◗★✪✫✷✻✽❄✪❍➾ü❵★✬✳◗✯✿★✪þ✺❂✿✱✴✶✩ÿ➊✱✪ú ❙✷✮ú ✔✗✖✘✠☞✯✞✟✞✰✄✰✑✏✳✂❷ú✬û❲❚✺✱✩✳◗★✪✫✷✻✽❄❈❍➾ü❵★✪✳✵✯❖★✬þ✺❂✿✱✴✶❤✯✿✫✦Ú✵❚✷✱
❀❃★q❘❈✱✴✶✵✯✿★✪✫✌✫✺✱✞Ú❫Ü❤❄❈✳✵✾➹★✬✳◗✱▲Ú✵❚✺✱▼❙❈✳✵❄❈◆✺✫✷✻➹★❵Ú◗❄✪❍ ✶ ✯✿✫ Ú✵❚✺✱▼❂✿✱✴★❈✶❫Ú❁✹➝✱✞✳◗þ✺✳❇★✬✫✷✻➹❍■❄✽✻✽✱✞❂✲✱✴✳ ❄✬ñ
Ú✵❚✺✱➭✶↕✱✞Ú➘❄✬ñ➸❀❲★q❘❈✱✴✶✵✯✿★✪✫✮❅✞❂✿★✪◆✷✶✵✱✴✶✣ÿ➊Ú✵✳◗✱✴★✬Ú✵✯✿✫✺❙ Ú✵❚✷✱✴✶✵✱☎❅✔❂❖★✬◆✼✶↕✱❡✶➝★✪✶➝Û✺◆✺✳◗✱✣❂✿❄✪❙❈✯✿❅✴★✬❂➼❅✞❂✿★✪◆✷✶✵✱✴✥✶ ✂❷ú✼û❲❚✷✱
❙✪✳◗❄✪◆✺✫✷✻▲✯✿✫✷✶↕Ú◗★✬✫✼❅✔✱✴✶✏❄✬ñ✰Ú✵❚✺✱✣❀❲★q❘❈✱✴✶✵✯✿★✪✫▲❅✞❂✿★✪◆✷✶↕✱❡✶☛ÿ➊Ü✠✯✲Ú✵❚➶✳◗✱✴✶✵Û✛✱✴❅❷Ú✠Ú◗❄✩✱✵✳ ✂❃✱✞✫✷❅✞❄❆✻✺✱PÚ✵❚✺✱☎❅✔❄✪✫
✻✽✯②Ú◗✯✲❄❈✫✷★✬❂➠✻✺✱✞Û✛✱✞✫✷✻✽✱✴✫✷❅✔✯✿✱✴✶➫❄✪ñ➸Ú◗❚✺✱ ❀❃★q❘❈✱✴✶✵✯✿★✪✫▼✫✺✱✞Ú❫Ü❤❄❈✳✵✾✮✶✸✷✺✹✻✱✴✳ ✻✽✱✴Û✼✱✴✫✷✻✺✶P❄❈✼✫ ✷✆✽✕✹✾✱✵✳
✯✵✲✿✷ ✽ ✯✿✶☛✯✲✫➪Ú◗❚✺✱✦þ✛❄❆✻✺❘ ❄✪ñ❲★✮❙❈✳✵❄❈◆✺✫✷✻➥✯✲✫✼✶❫Ú❇★✬✫✷❅✞✱ Ü✠✯②Ú◗❚✻✷ ✯✲✫➥Ú◗❚✺✱✦❚✺✱❡★✪✻❼❁ú ❀➸★✪❅❇❚➪❀❲★q❘✪✱❡✶↕✯❖★✬✫
❅✔❂❖★✬◆✷✶✵✱ ❚✷★✪✶✣★✬✫➻★✪✶◗✶↕❄✽❅✞✯✿★✬Ú✵✱✴✻ ✱ ✜➸✧❃❋❂ ◆✷★✬✫❋Ú✵✯✲ñ➊❘❆✯✲✫✷❙✮Ú✵❚✺✱❁✻✺✱✞Û✛✱✞✫✷✻✽✱✴✫✷❅✔❘➥❄✬ñ❃Ú✵❚✷✱✦❚✺✱✴★❈✻➥❄✪✫➪Ú✵❚✷✱
þ✼❄✽✻✽❘❈ú✳✻ ❚✺✱✞✫☎Ú✵❚✷✱✞✳◗✱✏★✬✳◗✱❤❍☎◆✺❂②Ú◗✯✲Û✷❂✲✱✠✯✿●✫ ✡✷◆✺✱✴✫✷❅✔✱❡✶✰ñ➊❄✪✳✓Ú◗❚✺✱✠✶✵★✪❍➭❄✱ ✷❅✹❆✱✵✳ ÿ➊❍☎◆✺❂②Ú◗✯✲Û✷❂✲✱✠❙❈✳✵❄❈◆✺✫✷✻
✯✲✫✷✶↕Ú◗★✪✫✷❅✔✱❡✶➫❄✬ñ❲❅✔❂❖★✬◆✷✶✵✱✴✶✩Ü✠✯✲Ú✵❚❇✷ ✯✲✫➥Ú✵❚✷✱ ❚✺✱✴★❈✻ ✂❷✸❏★➻á❇■Þ ❋ ❈ à è✼à è❋ß➨ì❷ö✽ã✲æ ✯❖✶➫◆✷✶✵✱✴✻➨Ú✵❄✕❂❋◆✷★✬✫❋Ú◗✯②ñ➊❘
Ú✵❚✺✱☛❅✞❄✪❍✣þ✷✯✲✫✺✱❡✻❁✯✿●✫ ✷✡ ◆✷✱✞✫✷❅✞✱✪ú
✻ ✱ ✫✺❄❵Ü ❍■❄❆✻✺✱✞❂➸❄❈◆✺✳P✳◗◆✺✫✺✫✺✯✿✫✺❙➶✖✱ ✕✺★✪❍➭Û✷❂✲✱■Ü✠✯✲Ú✵❚✂★✮❀✞☎✫✜✓ú❞û❲❚✺✱✦❀❲★q❘❈✱✴✶✵✯✿★✪✫➨Û✺✳◗✱✴✻✽✯❖❅✞★✬Ú✵✱❡✶
ÿ➊Ü✠✯✲Ú✵❚➥Ú✵❚✺✱✴✯✲✳P✳❇★✬✫✷❙✪✱✔✂➘★✬✳◗✢✱ ✦❈✜✞❉✣✝✆✟❋❊✄✜✧✪✑✫❍✴● Ú✵✳◗◆✺✱✪✸ ñ➏★✬❂❖✶✵✱❏■❈▲✸ ❑❋▼❋❉✄✂✑✦❋✟✆✪✑✫◆●✴Ú✵✳◗◆✺✱✪✸ ñ➏★✬❂❖✶↕✱❋■❈✮✸ ✜✣☎✥✤✆✟✧✦✥✪✞❖
●✞Ú✵✳◗◆✺✱❈✸ ñ➏★✪❂✿✶✵❏✱ ❋■ ▲✸ ✔✗✖✣✪✑✫P❃✬ ❄
✸ ✔◗❊✆✜✣✟✥✂✆✟✧✦✗✜✮✔◗❊✄✧✪✑✫P✴● ✯✿✫❋Ú✵✱✴✳✵✱❡✶❫Ú◗✯✲✫✺❙✼✸➍◆✺✫✺✯✿✫❈Ú◗✱✞✳◗✱✴✶↕Ú✵✯✿✫✺❙✣■■★✬✫✷❘✻ ✄✂✣☎✥✝✆✟✆✪✞❖
●✔✰✺✸ ☞✪✸✼ú✞ú✴ú✞●✸ ✳✂ ✆✰ ❋■ ú❈û❲❚✺✱➘✯✲✫✺ñ➊❄✪✳◗❍■★✬Ú✵✯✿❄✪✫❁★✪þ✼❄❈◆✽Ú➸Ú◗❚✺✱➫✶↕Û✛✱✴❅✞✯②ý✼❅➘✶↕✯✲Ú✵◆✼★❵Ú✵✯✿❄✪✫➳❅✔❄✪✫✼✶↕✯❖✻✽✱✞✳◗✱✴✻❏✸❋❅✴★✬✫ ❄✪✫✷❂✲❘
þ✼✱✠✳◗✱✞Û✺✳◗✱✴✶✵✱✞✫❋Ú◗✱✴✻☛þ❆❘✣Ú✵❚✺✱❲ñ➊❄❈❂✲❂✿❄❵Ü✠✯✲✫✷❙✩❀❲★q❘✪✱❡✶↕✯❖★✬✫☎❙✪✳◗❄✪◆✺✫✼✻Pñ➏★✪❅✔Ú◗✶✏ÿ➊Ú✵❚✺✱❡✶↕✱✏❀❲★q❘❈✱✴✶✵✯✿★✪✫Pñ➏★✪❅✔Ú◗✶✥✱❡★✪❅❇❚
❚✷★qü✪✱➠★✪✫➫★❈✶✵✶✵❄✽❅✔✯❖★❵Ú◗✱✴✻✩Û✺✳◗❄✪þ✷★✪þ✺✯✲❂✿✯✲Ú❫❘➘✻✽✯❖✶❫Ú◗✳✵✯✿þ✺◆✽Ú◗✯✲❄❈✫ ÿ ✱ ✜➸✧ ✂❷✸❡þ✺◆✽Ú➼Ü❤✱➸✻✺❄✠✫✺❄✬Ú➼✶✵❚✺❄❵Ü Ú✵❚✺✯❖✶❏❚✷✱✞✳◗✔✱ ❷✂ ú
✦❈✜✞❉✣✝✆✟❋❊✄✜❙✠★❚✧✟✥✜✆✟✚✏❁❯
❑❏▼❋❉✆✂✑✦❋✟❁✠☞☎✚✔✄✏❁❯
✜✆☎✥✤✣✟✧✦✛✠✙✯✄✟✞✰✄✰❙✌✎☎✚✔✄✏❁❯
✦✗✜✞❉✆✝✆✟❋❊✆✜❙✠★✂✮✔✞❑✗✤✮✏▲❯
✦✗✜✞❉✣✝✄✟❋❊✆✜❙✠☞✯✞✟✞✰✄✰✑✏❁❯
❑❋▼❋❉✄✂✑✦❋✟✡✠❲❱✗❚✸✏❁❯
❑❋▼❋❉✆✂✚✦❋✟✡✠✙✝✥❳✮✏❁❯
✜✣☎✥✤✆✟✧✦✛✠★❚✧✟❋✜✣✟✘✌☞❱❏❚✮✏❁❯
✜✣☎✥✤✣✟✣✦✛✠✎✂✮✔✞❑❈✤✵✌☞❱❏❚✸✏▲❯
✇➝✻✺✻✽✯✿✫✺❙✠Ú◗❚✺✱✴✶✵✱➠❀❲★q❘✪✱❡✶↕✯❖★✬✫➝ñ➏★✪❅❷Ú❇✶❼Ú◗❄❲Ú✵❚✺✱➸✞❀ ☎ ➳
✜ ✯✿❍■Û✺❂✿✯✲✱❡✶❼Ú✵❚✼★❵Ú❙✦✗✜✞❉✣✝✄✟❋❊✆✜❙✠★❚✣✟✥✜✣✟✚✏❆✸❲❑❋▼❏❉✆✂✑✦❋✟✡✠✙☎✚✔✄✏❆✸
ú✞ú✞ú✵Ü✠✯✲❂✿❂➍★✬❂✿❂✛þ✛✱➝✳❇★✬✫✼✻✽❄✪❍✜ü❵★✬✳◗✯✿★✪þ✺❂✿✱✴✶➠✯✿✫✦Ú◗❚✺✱➫❀❃★q❘❈✱✴✶✵✯✿★✪✫■✫✺✱✔Ú❫Ü❃❄✪✳◗✾➭✻✺✱✔Ú✵✱✴✳✵❍■✯✿✫✺✱✴✻❁þ❆❘☎Ú◗❚✺✱➫✺❀ ☎ ✜
ÿ➏✶✵✯✲✫✷❅✞✱❁Ú◗❚✺✱✞❘➻★✪✳✵✱▲★✬❂✿❂❤✯✿✻
✫ ✱✵✳ ❷✂ ú✥û❞❄ ✳✵✱✴Û✺✳◗✱✴✶✵✱✞✫❋Ú✣❄✪◆✷✳✣❙✪✱✴✫✺✱✞✳❇★✬❂❤✯✲✫✺ñ➊❄✪✳◗❍■★✬Ú✵✯✿❄✪✫✌★✪þ✼❄❈◆✽Ú❆✔❈✖✣✪✑✫❈✸
✔❨❊✆✜✣✟✥✂✣✟✣✦✗✜✮✔◗❊✆✣✪✑✠
✫ ★✪✫✷✻❩✄✂✆☎✞✝✆✟✆✪✞✷❖ ✸❋Ü❃✱✩✫✷✱✞✱✴✻➳Ú◗❚✺✱➫ñ➊❄✪❂✿❂✲❄❵Ü✠✯✿✫✺❙■❀❲★q❘✪✱✴✶✵✯❖★✬✫➳❅✞❂✿★✪◆✷✶✵✱✴✶✴ú
✔❈✖✘✠☞☛✚✏❃✒❆✦✗✜✞❉✣✝✆✟❏❊✆✜❙✠☞☛✚✏▲❯
✔❨❊✆✜✣✟✥✂✣✟✣✦✗✜✮✔◗❊✆✘✠✙✍✑✏❅✒❬✜✣☎❋✤✣✟✧✦✛✠☞☛✡✌✎✍✑✏✛✌✕✔✗✖✘✠☞☛✚✏▲❯
✔❨❊✆✜✣✟✥✂✣✟✣✦✗✜✮✔◗❊✆✘✠✙✍✑✏❅✒✕❑❋▼❏❉✆✂✑✦❋✟✡✠✎✍✑✏❁❯
✞✂✣☎✞✝✆✟✡✠✙☛✘✌✎✍✑✏❭✒❪✔✗✖✘✠✙☛✚✏✛✌✢✜✣☎✥✤✆✟✧✦✛✠☞☛✘✌★✍✑✏❁❯
û❲❚✺✱➶ý✷✳❇✶❫Ú▲❅✔❂❖★✬◆✷✶✵✱➶❙✪◆✷★✪✳◗★✪✫❋Ú✵✱✞✱❡✶➭Ú✵❚✷★✬Ú✦Ú✵❚✷✱✞✳◗✱▼✯✿✶➳★➻✳❇★✬✫✷✻✽❄❈❍ ü❵★✬✳◗✯❖★✬þ✺❂✿✱❫✔✗✖✡✠☞☛✚▼✏ ✯✿✫ Ú✵❚✷✱
❀❃★q❘❈✱✴✶✵✯✿★✪✫➫✫✺✱✔Ú❫Ü❃❄✪✳◗✾✏ñ➊❄❈✳❞✱✞ü❈✱✞✳◗❘➘✶↕Ú✵◆✷✻✺✱✞✫❋Ú✡❲❴ úqû❲❚✺✱❃✶↕✱❡❅✔❄✪✫✼✻P❅✔❂❖★✬◆✷✶✵✱➠✱✖✽✕ Û✺✳◗✱✴✶◗✶✵✱✴✶➍Ú◗❚✷★❵Ú❛↕❵ Ü✠❚✺✱✞Ú✵❚✺✱✴✳
★➶❅✔❄✪◆✷✳◗✶✵✱ ✯✿✶☛✯✲✫❋Ú✵✱✴✳✵✱❡✶❫Ú◗✯✲✫✷❙▼✻✽✱✞Û✛✱✞✫✷✻✷✶☛❄✪✫➥Ú◗❚✺✱✦Ù✎❜✙✟ ✶P❄✬ñ✏★✬❂✿❂➸✶↕Ú✵◆✷✻✺✱✞✫❋Ú◗✶☛Ú◗★✬✾❆✯✿✫✺❙✮Ú✵❚✷✱❁❅✔❄❈◆✺✳◗✶✵✱❨✷❝ ú
û❲❚✺✱✦Ú◗❚✺✯✲✳❇✻✂❅✞❂✿★✪◆✷✶↕✱❁❙✪◆✷★✪✳◗★✪✫❈Ú◗✱✞✱❡✶➫Ú✵❚✷★✬Ú☎Ú◗❚✺✱✞✳◗✱✦✯❖✶☎★➨✳◗★✪✫✷✻✽❄❈❍ ü❵★✬✳◗✯❖★✬þ✺❂✿✱❞✔◗❊✆✜✆✟✥✂✣✟✧✦✗✜✑✔◗❊✆❙✠✙✍✚✏
ñ➊❄✪✳✠✱✞ü❈✱✞✳◗❘❁❅✔❄❈◆✺✳❇✶↕❢✱ ❡ ÿ➊✱✞ü❈✱✞✫➳ñ➊❄✪✳➝★■❅✞❄✪◆✺✳❇✶↕✱➘Ú✵❚✼★❵Ú➝✯❖✶❲✫✺❄✬Ú✠Ú❇★✬✾❈✱✞✫❏✸✺✶✵◆✷❅❇❚➶★✪✶☛ó ❈ ✔✂ ú✼û❲❚✺✱➫ñ➊❄❈◆✺✳↕Ú◗❚
❅✔❂❖★✬◆✷✶✵✱☎✱ ✽✕ Û✺✳✵✱❡✶✵✶✵✱✴✶➘Ú◗❚✷★❵✕Ú ✵❵ ★▲✶↕Ú✵◆✷✻✽✱✴✫❋Ú◗✶✎❼✟ ❙✪✳❇★✪✻✽✱✣ñ➊❄✪✳☛★▲❅✔❄✪◆✷✳◗✶✵✱➭✻✺✱✞Û✛✱✞✫✷✻✺✶➫❄❈✫➨Ú◗❚✺✱ ✶❫Ú◗◆✷✻✽✱✴✫❈Ú❇✶ ✟
Ù✎❜❛▼❝ ÿ Ú◗❚✺✱☎★✬Ú✵❄✪❣
❍ ✜✣☎✥✤✣✟✣✦✛✠☞☛✘✌✎✍✚✏➫✯✿✫➶Ú✵❚✷✱☎þ✛❄✽✻✽❘✮✯✿✶➝✫✺✱✞✱❡✻✽✱✴✻✮Ú◗❄❁❙❈◆✷★✬✳❇★✬✫❋Ú✵✱✴✱➫Ú✵❚✷★✬Ú➝Ú◗❚✺✱✞✳◗✱☛❄✪✫✷❂✲❘
✯✿✶✏★➭✳❇★✬✫✷✻✺❄✪❍ ü❵★✬✳◗✯✿★✪þ✺❂✿❛✱ ✄✂✣☎✞✝✆✟❁✠☞☛✘✌✎✍✑➝✏ ✯✲ñ✥✶↕Ú✵◆✼✻✽✱✞✫❋Ú❤✌
❴ ✯✿✶❃Ú❇★✬✾❆✯✲✫✷❙■❅✞❄✪◆✺✳❇✶✵❢
✱ ❡✹✔✂ ú
û❲❚✺✱✞✳◗✱✩★✪✳✵✱➫✶✵❄✪❍■✱P✻✽✯✿✶◗★✪✻✺üq★✪✫❋Ú◗★✬❙❈✱✴✶❃★✬þ✛❄✪◆✽Ú❲Ú◗❚✺✱PÜ❃★q❘■Ú◗❚✺✯❖✶✠❀✺☎ ➹
✜ ❍➭❄✽✻✽✱✴❂✿✶❃Ú◗❚✺✱➫✱✖✺✕ ★✬❍■Û✺❂✿✱✪ú
Fig. 1. Together a LBN (containing general knowledge) and a normal logic program (describing a specific problem) determine a Bayesian network.
Multi-Relational Record Linkage
Parag and Pedro Domingos
Department of Computer Science and Engineering
University of Washington
Seattle, WA 98195, U.S.A.
{parag,pedrod}@cs.washington.edu
http://www.cs.washington.edu/homes/{parag,pedrod}
Abstract. Data cleaning and integration is typically the most expensive step in the KDD process. A key part, known as record linkage or
de-duplication, is identifying which records in a database refer to the
same entities. This problem is traditionally solved separately for each
candidate record pair (followed by transitive closure). We propose to use
instead a multi-relational approach, performing simultaneous inference
for all candidate pairs, and allowing information to propagate from one
candidate match to another via the attributes they have in common. Our
formulation is based on conditional random fields, and allows an optimal
solution to be found in polynomial time using a graph cut algorithm. Parameters are learned using a voted perceptron algorithm. Experiments
on real and synthetic databases show that multi-relational record linkage
outperforms the standard approach.
1 Introduction
Data cleaning and preparation is the first stage in the KDD process, and in
most cases it is by far the most expensive. Data from relevant sources must
be collected, integrated, scrubbed and pre-processed in a variety of ways before
accurate models can be mined from it. When data from multiple databases is
merged into a single relation, many duplicate records often result. These are
records that, while not syntactically identical, represent the same real-world entity. Correctly merging these records and the information they represent is an
essential step in producing data of sufficient quality for mining. This problem is
known by the name of record linkage, de-duplication, merge/purge, object identification, identity uncertainty, hardening soft information sources, and others.
In recent years it has received growing attention in the KDD community, with
a related workshop at KDD-2003 and a related task as part of the 2003 KDD
Cup.
Traditionally, the de-duplication problem has been solved by making an independent match decision for each candidate pair of records. A similarity score
is calculated for each pair, and the pairs whose similarity score is above some
pre-determined threshold are merged. This is followed by taking a transitive
closure over matching pairs. In this paper, we argue that there are several advantages to making the co-reference decisions together rather than considering
each pair independently. In particular, we propose to introduce an explicit relation between each pair of records and each pair of attributes appearing in
them, and use this to propagate information among co-reference decisions. To
take an example, consider a bibliography database where each bibliography entry is represented by a title, a set of authors and a conference in which paper
appears. Now, determining that two bib-entries in which the conference strings
are “KDD” and “Knowledge Discovery in Databases” refer to the same paper
would lead to the inference that the two conference strings refer to the same
underlying conference. This in turn might provide sufficient additional evidence
to match two other bib-entries containing those strings. This new match would
entail that the respective authors are the same, which in turn might trigger some
other matches, and so on. Note that none of this would have been possible if we
had considered the pair-wise decisions independently.
Our formulation of the problem is based on conditional random fields, which
are undirected graphical models [9]. Conditional random fields are discriminative
models, freeing us from the need to model dependencies in the evidence data. Our
formulation of the problem allows us to perform optimal inference in polynomial
time. This is done by converting the original graph into a network flow graph,
such that the min-cut of the network flow graph corresponds to the optimal
configuration of node labels in the original graph. The parameters of the model
are learned using a voted perceptron algorithm [5]. Experiments on real and semiartificial data sets show that our approach performs better than the standard
approach of making pairwise decisions independently.
The organization of this paper is as follows. In Section 2, we describe the
standard approach to record linkage. In Section 3, we describe in detail our
proposed solution to the problem based on conditional random fields, which we
call the collective model. Section 4 describes our experiments on real and semiartificial data sets. Section 5 discusses related work. Finally, we conclude and
give directions for future research in Section 6.
2 Standard Model
In this section, we describe the standard approach to record linkage [6]. Consider a database of records which we want to de-duplicate. Let each record be
represented by a set of attributes. Consider a candidate pair decision, denoted
by y, where y can take values from the set {1,-1}. A value of 1 means that the
records in the pair refer to the same entity and a value of −1 means that the
records in the pair refer to different entities. Let x = (x1 , x2 · · · xn ) denote a vector of similarity scores between the attributes corresponding to the records in
the candidate pair. Then, in the standard approach, the probability distribution
of y given x is defined using a naive Bayes or logistic regression model:
f(x) = log [ P(y = 1 | x) / P(y = −1 | x) ] = λ_0 + Σ_{i=1}^{n} λ_i x_i        (1)
f (x) is known as the discriminant function. λi , for 0 ≤ i ≤ n, are the parameters
of the model. Given these parameters and the attribute similarity vector x, a
candidate pair decision y is predicted to be positive (a match) if f (x) > 0 and
predicted to be negative (non-match) otherwise. The parameters are usually set
by maximum likelihood. Gradient descent is used to find the parameters which
maximize the conditional likelihood of y given x, i.e., Pλ (y|x) [1].
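To make this decision rule concrete, the following small Python sketch scores one candidate pair (this is only our illustration: the similarity function and field names are placeholder assumptions, and the weights would in practice be set by the maximum-likelihood procedure just described).

from difflib import SequenceMatcher

def attribute_similarities(rec_a, rec_b, attributes):
    # One similarity score per attribute; TF/IDF scores could be used instead for text.
    return [SequenceMatcher(None, rec_a[k], rec_b[k]).ratio() for k in attributes]

def is_match(rec_a, rec_b, attributes, lambdas, lambda0):
    x = attribute_similarities(rec_a, rec_b, attributes)
    f = lambda0 + sum(l * xi for l, xi in zip(lambdas, x))   # discriminant of Equation (1)
    return f > 0                                             # predicted match iff f(x) > 0

# Hypothetical usage with two bibliography-style records:
b1 = {"title": "Record Linkage using CRFs", "author": "Linda Stewart", "venue": "KDD-2003"}
b2 = {"title": "Record Linkage using CRFs", "author": "Linda Stewart", "venue": "9th SIGKDD"}
print(is_match(b1, b2, ["title", "author", "venue"], lambdas=[2.0, 2.0, 1.0], lambda0=-3.5))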
3 Collective Model
The basic difference between the standard model and the collective model is that
the collective model does not make pairwise decisions independently. Rather, it
makes a collective decision for all the candidate pairs, propagating information
through shared attribute values, thereby making a more informed decision about
the potential matches. Our model is based on conditional random fields as described in [9]. Before we describe the model, we will give a brief overview of
conditional random fields.
3.1 Conditional Random Fields
Conditional random fields, introduced by Lafferty et al. [9], are undirected graphical models which define the conditional probability of a set of output variables
Y given a set of input or evidence variables X. Formally,
P(y|x) = (1 / Z_x) Π_{c ∈ C} φ_c(y_c, x_c),        (2)
where C is the set of cliques in the graph, and yc and xc denote the subset
of variables participating in the clique c. φc , known as a clique potential, is
a function of the variables involved in the clique c. Zx is the normalization
constant. Typically, φ_c is defined as a log-linear combination of features over c,
i.e., φ_c(y_c, x_c) = exp ( Σ_l λ_lc f_lc(y_c, x_c) ), where f_lc, known as a feature function,
is a function of variables involved in the clique c, and λ_lc are the feature weights.
In many domains, rather than having different parameters (feature weights)
for each clique in the graph, the parameters of a conditional random field are
tied across repeating clique patterns in the graph. Following the terminology of
Taskar et al. [17], we call each such pattern a relational clique template. Each
clique c matching a clique template t is called an instance of the template. The
probability distribution can then be specified as
P(y|x) = (1 / Z_x) exp ( Σ_{t ∈ T} Σ_{c ∈ C_t} Σ_l λ_lt f_lt(y_c, x_c) )        (3)
where T is the set of all the templates, Ct is the set of cliques which satisfy
the template t, and flt , λlt are respectively the feature functions and feature
weights pertaining to template t. Because of the parameter tying, the feature
functions and the parameters vary over the clique templates and not the individual cliques. A conditional random field with parameter tying as defined above
closely matches a relational Markov network as defined by [17].
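As an informal illustration of this parameter tying (a sketch under assumed data structures, not an implementation from the literature), the unnormalized log-probability of Equation 3 can be accumulated template by template, every clique instance of a template reusing that template's weights:

def unnormalized_log_prob(cliques_by_template, weights, features, y, x):
    # cliques_by_template[t] lists the cliques matching template t; each clique names
    # the output variables c["y"] and evidence variables c["x"] it involves.
    # weights[t][l] and features[t][l] are shared by all instances of template t.
    score = 0.0
    for t, cliques in cliques_by_template.items():
        for c in cliques:
            y_c = tuple(y[v] for v in c["y"])
            x_c = tuple(x[v] for v in c["x"])
            score += sum(w * f(y_c, x_c) for w, f in zip(weights[t], features[t]))
    return score   # equals log P(y|x) + log Z_x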
3.2 Notation
Before we delve into the model, let us introduce some notation. Consider a
database relation R = {r1 , r2 , . . . , rn }, where ri is the ith record in the relation.
Let A = {A1 , A2 , . . . , Am } denote the set of attributes. For each attribute Ak ,
we have a set AS k of corresponding attribute values appearing in the relation,
AS k = {ak1 , ak2 , . . . aklk }. Now, the task at hand is to, given a pair of records
(ri , rj ) (and corresponding attribute values), find out if they refer to the same
underlying entity. We will denote the kth attribute value of record ri by ri .Ak .
Our formulation of the problem is in terms of undirected graphical models.
For the rest of the paper, we will use the following notation to denote node types,
a specific instance of a node and the node values. A capital letter subscripted
by a “∗” will denote a node type, e.g. R∗ . A capital letter with two subscripted
letters will denote a specific instance of a node type, e.g., Rij . A lower-case letter
with two subscripts will denote a binary or continuous node value, e.g., rij .
3.3 Constructing the Graph
Given a database relation which we want to de-duplicate, we construct an undirected graph as follows. For each pairwise question of the form, “Is ri same as
rj ?”, we have a binary node Rij in the graph. Because of the symmetric nature of
the question, Rij and Rji represent the same node. We call these nodes record
nodes. The record node type is denoted by R∗ . For each record node, we have
a corresponding set of continuous-valued nodes, called attribute nodes. The kth
attribute node for record node Rij is denoted by Rij .Ak . The type of these nodes
is denoted by Ak∗ , for each attribute Ak . The value of the node Rij .Ak is the
similarity score between the corresponding attribute values ri .Ak and rj .Ak . For
example, for textual attributes this could be the TF/IDF similarity score [15].
For numeric attributes, this could be the normalized difference between the two
numerical values. Since the value of these nodes is known beforehand, we also
call them evidence nodes. We interchangeably use the term evidence node and
attribute node to refer to these nodes. We now introduce an edge between each
R∗ node and each of the the corresponding Ak∗ nodes, i.e., an edge between each
record node and the corresponding evidence nodes for each attribute. An edge
in the graph essentially means that values of the two nodes are dependent on
each other. To take an example, consider a relation which contains bibliography
entries for various papers. Let the attributes of the relation be author, title and
venue. Figure 1(a) represents the graph corresponding to candidate pairs b12 and
b23 for this relation, where b12 corresponds to asking the question “Is bib-entry
b1 same as bib-entry b2 ?”. b23 is similarly defined. Sim(bi .A, bj .A) denotes the
similarity score for the authors of the bibliography entries bi and bj for the various values of i and j. Similarly, Sim(bi .T, bj .T ) and Sim(bi .V, bj .V ) denote the
similarity scores for title and venue attributes, respectively.
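This construction can be pictured with a few lines of Python (a rough sketch with hypothetical data structures; the similarity function below is only a stand-in for the TF/IDF or numeric scores mentioned above):

from difflib import SequenceMatcher
from itertools import combinations

def sim(a, b):
    # Placeholder similarity; TF/IDF for text or a normalized difference for numbers.
    return SequenceMatcher(None, a, b).ratio()

def build_decoupled_graph(records, attributes):
    record_nodes, evidence_values, edges = [], {}, []
    for i, j in combinations(range(len(records)), 2):
        r = ("R", i, j)                          # "Is record i the same as record j?"
        record_nodes.append(r)
        for k in attributes:
            e = ("A", k, i, j)                   # evidence node for attribute k of pair (i, j)
            evidence_values[e] = sim(records[i][k], records[j][k])
            edges.append((r, e))                 # record node linked to its evidence nodes
    return record_nodes, evidence_values, edges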
The graph corresponding to the full relation would have many such disconnected components, each component representing a candidate pair decision. The
above construction essentially corresponds to the way candidate pair decisions
are made in the standard approach, with no information sharing among the various decisions. Next, we describe how we change the representation to allow for
the exchange of information between candidate pair decisions.
3.4 Merging the Evidence Nodes
We notice the fact that the graph construction as described in the previous
section, would in general have many duplicates among the evidence nodes. In
other words, a lot of record pairs would have the same attribute value pair. Using
our notation, we say that nodes Rxy .Ak and Rwz .Ak are duplicates of each other
if (rx .ak = rw .ak ∧ ry .ak = rz .ak ) ∨ (rx .ak = rz .ak ∧ ry .ak = rw .ak ). Our idea is
to merge each such set of duplicates into a single node. Consider the bibliography
example introduced in Section 3.3. Let us suppose that (b12 .V, b34 .V ) are the
duplicate evidence pairs. Then, after merging the duplicate pairs, the graph
would be as shown in Figure 1(b). Since we merge the duplicate pairs, instead
of having a separate attribute node for each rij we now have an attribute node
for each pair of values aki′ , akj′ ∈ AS k , for each attribute Ak .
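One possible way to realize this merging (again only a sketch, with names of our own choosing) is to key every evidence node by its attribute and the unordered pair of attribute values, so that candidate pairs sharing a value pair automatically share the node:

def evidence_key(rec_a, rec_b, k):
    # frozenset makes (v, w) and (w, v) identical keys, which is exactly the
    # duplicate condition stated above.
    return (k, frozenset([rec_a[k], rec_b[k]]))

def merged_evidence_nodes(records, attributes, candidate_pairs, sim):
    nodes = {}
    for i, j in candidate_pairs:
        for k in attributes:
            key = evidence_key(records[i], records[j], k)
            nodes.setdefault(key, sim(records[i][k], records[j][k]))
    return nodes   # one shared evidence node per distinct (attribute, value-pair)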
Although the formulation above helps to identify the places where information is shared between various candidate pairs, it does not facilitate any propagation of information. This is because the shared nodes are the evidence nodes
and hence their values are fixed. The model as described above is thus no
better than the decoupled model (where there is no sharing of evidence nodes)
for the purpose of learning and inference. This sets the stage for the introduction
of auxiliary nodes, which we also call information nodes. As the name suggests,
these are the nodes which facilitate the exchange of information between the
candidate pairs.
3.5 Propagation of Information through Auxiliary Nodes
For each attribute pair node Aki′ j ′ , we introduce a binary node Iik′ j ′ . The node
type is denoted by I∗k and we call these information nodes. Semantically, an
information node Iik′ j ′ corresponds to asking the question “Is aki′ the same as
akj′ ?”. The binary value of the information node Iik′ j ′ is denoted by iki′ j ′ , and is
Fig. 1. Merging the evidence nodes: (a) each pairwise decision considered independently; (b) evidence nodes merged.
1 iff the answer to the above question is “Yes.” Whereas the attribute node Aki′ j ′
corresponds to the similarity score between the two attribute values as present
in the database, the information node Iik′ j ′ corresponds to the Boolean-valued
answer to the question of whether the two attribute values refer to the same underlying attribute. Each information node Iik′ j ′ is connected to the corresponding
attribute node Aki′ j ′ and the corresponding record nodes Rij . For instance, information node Iik′ j ′ would be connected to the record node Rij iff ri .Ak = aki′
and rj .Ak = akj′ . Note that the same information node Iik′ j ′ would in general be
shared by several R∗ nodes. This sharing lies at the heart of our model. Figure
2(a) shows how our hypothetical bibliography example is represented using the
collective model.
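Continuing the sketch from the previous sections (still with hypothetical data structures of our own choosing), the information nodes can be created from the same keys as the merged evidence nodes, so that every candidate pair touching a given attribute-value pair is wired to the same binary node:

def add_information_nodes(records, attributes, candidate_pairs, evidence_key):
    record_info_edges = []       # edges between record nodes and information nodes
    info_evidence_edges = set()  # edges between information nodes and evidence nodes
    for i, j in candidate_pairs:
        r = ("R", i, j)
        for k in attributes:
            key = evidence_key(records[i], records[j], k)
            info = ("I",) + key                # "Do these two attribute values co-refer?"
            record_info_edges.append((r, info))
            info_evidence_edges.add((info, ("A",) + key))
    return record_info_edges, info_evidence_edges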
Table 1. An example bibliography relation

Record  Title                               Author              Venue
b1      "Record Linkage using CRFs"         "Linda Stewart"     "KDD-2003"
b2      "Record Linkage using CRFs"         "Linda Stewart"     "9th SIGKDD"
b3      "Learning Boolean Formulas"         "Bill Johnson"      "KDD-2003"
b4      "Learning of Boolean Expressions"   "William Johnson"   "9th SIGKDD"
3.6 An Example
Consider the subset of a bibliography relation shown in Table 1. Each bibliography entry is represented by three string attributes: title (T), author (A)
and venue (V). Consider the corresponding undirected graph constructed as described in Section 3.5. We would have R∗ nodes for pair-wise binary decisions
of the form “Does bib-entry bi refer to the same paper as bib-entry bj ?”, for
each pair (i, j). Correspondingly, we would have evidence nodes for each pair of
attribute values for each of the three attributes. We would also have I∗k nodes
for each attribute. For example, I∗k nodes for the author attribute would correspond to the pairwise decisions of the form “Does the string ai refer to same
author as the string aj ?”, where ai and aj are some author strings appearing in
the database. Similarly, we would have I∗k nodes for venue and title attributes.
Each record node Rij would have edges linking it to the corresponding author,
title and venue information nodes, denoted by Iik′ j ′ , where k varies over author,
title and venue. In addition, each information node Iik′ j ′ would be connected to
corresponding evidence node Aki′ j ′ .
The corresponding graphical representation as described by the collective
model is given by Figure 2(b). The figure shows only a part of the complete graph
which is relevant to the following discussion. Note how dependencies flow through
information nodes. To take an example, consider the bib-entry pair consisting
of b1 and b2 . The titles and authors for the two bib-entries are essentially the
same string, giving sufficient evidence to infer that the two bib-entries refer to the
same underlying paper. This in turn leads to the inference that the corresponding
venue strings, “KDD-2003” and “9th SIGKDD”, refer to the same venue. Now,
since this venue pair is shared by the bib-entry pair (b3 , b4 ), the additional piece
of information that “KDD-2003” and “9th SIGKDD” refer to the same venue
might give sufficient evidence to merge b3 and b4 , when added to the fact that
the corresponding title and author pairs have high similarity scores. This in
turn would lead to the inference that the strings “William Johnson” and “Bill
Johnson” refer to the same underlying author, which might start another chain
of inferences somewhere else in the database.
Although the example above focused on a case when positive influence is
propagated through attribute values, i.e., a match somewhere in the graph results
in more matches, we can easily think of an example where negative influences
are propagated through the attribute values, i.e., a non-match somewhere in the
graph results in a chain of non-matches. In fact, our model is able to capture
complex interactions of positive and negative influences, resulting in an overall
most likely configuration.
3.7 The Model and its Parameters
We have a singleton clique template for R∗ nodes and another for I∗ nodes.
Also, we have a two-way clique template for an edge linking an R∗ node to an I∗k
node. Additionally, we have a clique template for edges linking I∗k and Ak∗ nodes.
Hence, the probability of a particular assignment r to the R∗ and I∗ nodes, given
that the attribute(evidence) node values are a, can be specified as
P(r|a) = (1 / Z_a) exp { Σ_{i,j} [ Σ_l λ_l f_l(r_ij) + Σ_k Σ_l φ_kl f_l(r_ij.I^k) + Σ_k Σ_l γ_kl g_l(r_ij, r_ij.I^k) + Σ_k Σ_l δ_kl h_l(r_ij.I^k, r_ij.A^k) ] }        (4)
where (i, j) varies over all the candidate pairs. rij .I k denotes the binary value
of the pairwise information node for the kth attribute pair corresponding to the
node Rij , and rij .Ak denotes the corresponding evidence value. λl and φkl denote
the feature weights for singleton cliques. γkl denotes the feature weights for two
way cliques involving binary variables. δkl denotes the feature weights for two
way cliques involving evidence variables. For the singleton cliques and two-way
cliques involving binary variables, we have a feature function for each possible
configuration of the arguments, i.e., fl (x) is non-zero for x = l, 0 ≤ l ≤ 1.
Similarly, gl (x, y) = gab (x, y) is non-zero for x = a, y = b, 0 ≤ a, b ≤ 1. For
two-way cliques involving a binary variable r and a continuous variable e, we
use two features: h0 is non-zero for r = 0 and is defined as h0 (r, e) = 1 − e;
similarly, h1 is non-zero for r = 1 and is defined as h1 (r, e) = e.
The way the collective model is constructed, a single information node in
the graph would in general correspond to many record pairs. But semantically
this single information node represents an aggregate of a number of nodes which
have been merged together because they would always have the same value in our
model. Therefore, for Equation 4 to be a correct model of the underlying graph,
each information node (and the corresponding cliques with the evidence nodes)
should be treated not as a single clique, but as an aggregate of cliques whose
nodes always have the same values. Equation 4 takes this fact into account by
summing the weighted features of the cliques for each candidate pair separately.
3.8 The Standard Model Revisited
Fig. 2. Collective model. (a) Complete representation: record nodes (b1=b2?, b3=b4?), information nodes (e.g. b1.A = b2.A?, b1.V = b2.V?, b1.T = b2.T?) and evidence nodes (e.g. Sim(b1.A, b2.A)); the venue information node is shared between the two record pairs. (b) A bibliography database example, with evidence nodes such as Sim(KDD-2003, 9th SIGKDD), Sim(Bill Johnson, William Johnson), Sim(Linda Stewart, Linda Stewart) and Sim(Record Linkage and CRF, Record Linkage using CRF).

In the absence of the information nodes, with the corresponding edges merged into direct edges between the R∗ and A∗k nodes, the probability distribution can be specified as

$$P(\mathbf{r} \mid \mathbf{a}) = \frac{1}{Z_a} \exp\left\{ \sum_{i,j} \left[ \sum_l \lambda_l\, f_l(r_{ij}) + \sum_k \sum_l \omega_{kl}\, h_l(r_{ij},\, r_{ij}.A^k) \right] \right\} \quad (5)$$
where ωkl denotes the feature weights for the two-way cliques involving evidence variables. The remaining symbols are as described before. This formulation in terms of a conditional random field is very closely related to the standard model. Since in the absence of information nodes each pairwise decision is made independently of all others, we have P(r|a) = Π_{i,j} P(rij|a). When ∀k, ωk0 = ωk1 = ωk for some ωk, we have
$$\log \frac{P(r_{ij}=1 \mid \mathbf{a})}{P(r_{ij}=0 \mid \mathbf{a})} = \lambda' + \sum_k 2\,\omega_k\, r_{ij}.A^k \quad (6)$$
where λ′ = λ1 − λ0 − Σk ωk. This equation is in fact the standard model for making candidate pair decisions.
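For completeness, the intermediate step behind Equation 6 (not spelled out in the text) follows from Equation 5 with h_1(r, e) = e, h_0(r, e) = 1 − e and the assumption ωk0 = ωk1 = ωk:

$$\log\frac{P(r_{ij}=1\mid\mathbf a)}{P(r_{ij}=0\mid\mathbf a)} = \Big(\lambda_1 + \sum_k \omega_k\, r_{ij}.A^k\Big) - \Big(\lambda_0 + \sum_k \omega_k\,(1 - r_{ij}.A^k)\Big) = \underbrace{\lambda_1 - \lambda_0 - \sum_k \omega_k}_{\lambda'} + \sum_k 2\,\omega_k\, r_{ij}.A^k .$$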
3.9 Inference
Inference corresponds to finding the configuration r∗ such that P (r∗ |a) given
the learned parameters is maximized. For the case of conditional random fields
where all non-evidence nodes and features are binary-valued and all cliques are
singleton or two-way (as is our case), this problem can be reduced to a graph
min-cut problem, provided certain constraints on the parameters are satisfied [7].
The idea is to map each node in the conditional random field to a corresponding
node in a network-flow graph.
Consider a conditional random field with binary-valued nodes and having
only one-way and two-way cliques. For the moment, we assume that there are
no evidence variables. Further, we assume binary-valued feature functions f (x)
and g(x, y) for singleton and two-way cliques respectively, as specified in the
collective model. Then the expression for the log-likelihood of the probability
distribution for assignment y to the nodes is given by
$$L(y) = \sum_{i=1}^{n} \left[ \lambda_{i0}(1-y_i) + \lambda_{i1}\, y_i \right] + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \left[ \gamma_{ij00}(1-y_i)(1-y_j) + \gamma_{ij01}(1-y_i)\,y_j + \gamma_{ij10}\, y_i (1-y_j) + \gamma_{ij11}\, y_i y_j \right] + C \quad (7)$$
where the first term varies over all the nodes in the graph taking the singleton
cliques into account, and the second term varies over all the pairs of the nodes in
the graph taking the two-way cliques into account. We assume the parameters for
non-existent cliques to be zero. Now, ignoring the constant term and rearranging
the terms, we obtain
$$-L(y) = \sum_{i=1}^{n} -(\lambda_i\, y_i) + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \left( \alpha_{ij}\, y_i + \beta_{ij}\, y_j - 2\,\gamma_{ij}\, y_i y_j \right) \quad (8)$$
where λi = λi1 − λi0, γij = (1/2)(γij00 + γij11 − γij01 − γij10), αij = γij00 − γij10 and βij = γij00 − γij01. Now, if γij ≥ 0 then the above equation can be rewritten as
$$-L(y) = \sum_{i=1}^{n} -(\lambda'_i\, y_i) + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \gamma_{ij}\, (y_i - y_j)^2 \quad (9)$$

for some λ′i, 1 ≤ i ≤ n, given the fact that yi^2 = yi, since the yi are binary-valued.
Now, consider a capacitated network with n + 2 nodes. For each node i in the original graph, we have a corresponding node in the network graph. Additionally, we have a source node (denoted by s) and a sink node (denoted by t). For each node i, there is a directed edge (s, i) of capacity csi = λ′i if λ′i ≥ 0; otherwise there is a directed edge (i, t) of capacity cit = −λ′i. Also, for each ordered pair (i, j), there is a directed edge of capacity cij = (1/2)γij. For any partition of the network into sets B and W, with B = {s} ∪ {i : yi = 1} and W = {t} ∪ {i : yi = 0}, the capacity of the cut C(y) = Σ_{k∈B} Σ_{l∈W} ckl is precisely the negative of the log-likelihood of the induced configuration on the original graph, offset by a constant. Hence, the
partition induced by the min-cut corresponds to the most likely configuration
in the original graph. The details can be found in Greig et al. [7]. We know
that an exact solution to min-cut can be found in polynomial time. Hence, the
exact inference in our model takes time polynomial in the size of the conditional
random field.
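A minimal sketch of this reduction, assuming the networkx library for the max-flow/min-cut computation (this is an illustration of the construction above, not the authors' implementation):

# MAP inference for the binary pairwise model via min-cut (Greig et al. [7]).
import networkx as nx

def map_inference(lam, gamma):
    """lam[i]: effective singleton parameter lambda'_i (evidence terms already folded in);
    gamma[(i, j)]: pairwise parameter gamma_ij >= 0 for each ordered pair (i, j).
    Returns the most likely binary assignment y over nodes 0..n-1."""
    G = nx.DiGraph()
    G.add_node("s"); G.add_node("t")
    for i, l in enumerate(lam):
        if l >= 0:
            G.add_edge("s", i, capacity=l)       # edge (s, i) with capacity lambda'_i
        else:
            G.add_edge(i, "t", capacity=-l)      # edge (i, t) with capacity -lambda'_i
    for (i, j), g in gamma.items():
        G.add_edge(i, j, capacity=0.5 * g)       # edge (i, j) with capacity gamma_ij / 2
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    # nodes on the source side of the min-cut take the value 1
    return [1 if i in source_side else 0 for i in range(len(lam))]

Exact min-cut/max-flow runs in time polynomial in the size of the graph, which is what gives the polynomial-time exact inference claimed above.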
It remains to see how to handle evidence nodes. This is straightforward. Notice that a clique involving an evidence node would account for an additional term of the form ωe in the log-likelihood, where e is the value of the evidence node. Let yi be the binary node in the clique. Since e is known beforehand, the above term can simply be taken into account by adding ωe to the singleton parameter λ′ corresponding to yi in Equation 9.
3.10 Learning
Learning involves finding the maximum likelihood parameters (i.e., the parameters that maximize the probability of observing the training data). Instead of
maximizing P (r|a), we maximize the log of the probability distribution (log likelihood), using the standard approach of gradient descent. The partial derivative
of the log-likelihood L given by Equation 4 with respect to the parameter λl is
$$\frac{\partial L}{\partial \lambda_l} = \sum_{i,j} f_l(r_{ij}) - \sum_{r'} P_\Lambda(r' \mid \mathbf{a}) \sum_{i,j} f_l(r'_{ij}) \quad (10)$$
where r′ varies over all possible configurations of the nodes in the graph and
PΛ(r′|a) denotes the probability distribution with respect to the current set of parameters. This expression has an intuitive meaning: it is the difference between
the observed feature counts and the expected ones. The derivative with respect
to other parameters can be defined analogously. Notice that, for our inference to
work, a constraint on the parameters of the two-way binary-valued cliques must be satisfied: γ00 + γ11 − γ01 − γ10 ≥ 0. To ensure this, instead of learning the original parameters, we perform the following substitution and learn the new parameters: γ00 = g(δ1) + δ2, γ11 = g(δ1) − δ2, γ01 = −g(δ3) + δ4, γ10 = −g(δ3) − δ4, where g(x) = log(1 + e^x). It can easily be seen that, for any values of the parameters δi, the required constraint on the original parameters is satisfied. The derivative expression is modified appropriately for the substituted parameters.
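A small sketch verifying that the substitution above enforces the constraint for any real δ values (variable names are ours):

import math

def gammas(d1, d2, d3, d4):
    """Reparameterization of the two-way binary clique weights."""
    g = lambda x: math.log(1.0 + math.exp(x))   # softplus, always >= 0
    g00, g11 = g(d1) + d2, g(d1) - d2
    g01, g10 = -g(d3) + d4, -g(d3) - d4
    # g00 + g11 - g01 - g10 = 2*(g(d1) + g(d3)) >= 0 for any real d1..d4
    assert g00 + g11 - g01 - g10 >= 0
    return g00, g11, g01, g10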
The second term in the derivative expression involves the expected value
over an exponential number of configurations. Hence finding this term would
be intractable for any practical problem. As in McCallum and Wellner [11],
we use a voted perceptron algorithm as proposed by Collins [5]. The expected
value in the second term is approximated by the feature counts of the most
likely configuration. The most likely configuration based on the current set of
parameters can be found using our polynomial-time inference algorithm. At each
iteration, the algorithm updates the parameters by the current gradient and
then finds the gradient for the updated parameters. The final parameters are
the average of the parameters learned during each iteration.
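A minimal sketch of this training loop (not the authors' code; the helper names and the simple dictionary representation of parameters and feature counts are ours):

def train(observed_counts, map_counts_fn, init_params, iters=100, rate=1.0):
    """Voted-perceptron-style training in the spirit of Collins [5].
    observed_counts: feature counts in the labeled training data;
    map_counts_fn(params): feature counts of the most likely configuration under
    params, obtained with the polynomial-time min-cut inference sketched earlier;
    iters: number of gradient steps (e.g., chosen on a validation subset)."""
    params = dict(init_params)
    history = []
    for _ in range(iters):
        predicted = map_counts_fn(params)        # counts under the current MAP configuration
        for k in params:                         # approximate gradient: observed - predicted
            params[k] += rate * (observed_counts.get(k, 0.0) - predicted.get(k, 0.0))
        history.append(dict(params))
    # final parameters are the average of the parameters from each iteration
    return {k: sum(h[k] for h in history) / len(history) for k in params}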
We initialize each λ parameter to the log odds of the corresponding feature
being true in the data, which is the parameter value that would be obtained if all
features were independent of each other. Notice that the value of the information
nodes is not available in the training data. We initialize them as follows. An
information node is initialized to 1 if there is at least one record node linked
to the information node whose value is 1, otherwise we initialize it to 0. This
reflects the notion that, if two records are the same, all of their corresponding
fields should also be the same.
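A small sketch of this initialization, assuming feature frequencies have already been counted in the training data (names are ours):

import math

def init_lambda(feature_true_fraction):
    """Log odds of the feature being true in the training data."""
    p = min(max(feature_true_fraction, 1e-6), 1 - 1e-6)   # guard against log(0)
    return math.log(p / (1 - p))

def init_information_node(linked_record_values):
    """1 if at least one record node linked to the information node is 1, else 0."""
    return 1 if any(v == 1 for v in linked_record_values) else 0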
3.11 Canopies
If we consider each possible pair of records for a match, the potential number
of matches becomes O(n^2), which is a very large number even for databases of
moderate size. Therefore, we use the technique of first clustering the database
into possibly-overlapping canopies as described by [10], and then applying our
learning/inference algorithms only to record pairs which fall in the same canopy.
This reduces the potential number of matches by a large factor. For example, for
a 650-record database we obtained on the order of 15000 potential matches after
forming the canopies. In our experiments we used this technique with both our
model and the standard one. The basic intuition behind the use of canopies and
related techniques in de-duplication is that most record pairs are very clearly
non-matches, and the plausible candidate matches can be found very efficiently
using a simple distance measure based on an inverted index.
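A minimal sketch, in the spirit of McCallum et al. [10], of how canopies and the resulting candidate pairs could be formed; the thresholds and the cheap similarity function are placeholders, not values from the paper:

def make_canopies(records, cheap_sim, loose=0.3, tight=0.7):
    """records: list of record ids; cheap_sim(a, b): inexpensive similarity,
    e.g. inverted-index TF-IDF overlap. Canopies may overlap."""
    remaining = set(records)
    canopies = []
    while remaining:
        center = next(iter(remaining))
        canopy = {r for r in records if cheap_sim(center, r) >= loose}   # loose threshold
        canopies.append(canopy)
        remaining -= {r for r in canopy if cheap_sim(center, r) >= tight}  # tight threshold
        remaining.discard(center)
    return canopies

def candidate_pairs(canopies):
    """Only pairs that share at least one canopy are considered for matching."""
    pairs = set()
    for canopy in canopies:
        members = sorted(canopy)
        pairs.update((a, b) for i, a in enumerate(members) for b in members[i + 1:])
    return pairs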
4 Experiments
To evaluate our model, we performed experiments on real and semi-artificial
databases. This section describes the databases, methodology and results. The results that we report are inclusive of the canopy process, i.e., they are over all the possible O(n^2) candidate match pairs. The evidence node values were computed using cosine similarity with TF-IDF [15].

Table 2. Performance of the two models on the Cora database

Model       F-measure (%)  Recall (%)  Precision (%)
Standard    84.4           81.5        88.5
Collective  87.0           89.0        85.8

Table 3. Performance comparison after taking the transitive closure

Model       F-measure (%)  Recall (%)  Precision (%)
Standard    80.7           92.0        73.7
Collective  87.0           90.9        84.2
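For illustration, the evidence (similarity) values just mentioned could be computed as follows, assuming scikit-learn (a sketch, not the authors' code); in practice the vectorizer would be fit once on the whole field corpus.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def field_similarity(strings_a, strings_b):
    """Cosine TF-IDF similarity of paired field strings; strings_a[i] and
    strings_b[i] are the two field values of candidate pair i."""
    vectorizer = TfidfVectorizer().fit(strings_a + strings_b)
    va, vb = vectorizer.transform(strings_a), vectorizer.transform(strings_b)
    return [cosine_similarity(va[i], vb[i])[0, 0] for i in range(len(strings_a))]

# Example: evidence value for a title pair from Figure 2(b)
print(field_similarity(["Record Linkage and CRF"], ["Record Linkage using CRF"]))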
4.1 Real-World Data
Our primary source of data was the hand-labeled subset of the Cora database provided by Andrew McCallum (http://www.cs.umass.edu/~mccallum/data/cora-refs.tar.gz) and previously used by Bilenko and Mooney [2] and others. This dataset is a collection of 1295 different citations to 112 computer science research papers from the Cora Computer Science Research Paper
Engine. The original data set contains only unsegmented citation strings. Bilenko
and Mooney [2] used a segmented version of the data for their experiments, with
each bibliographic reference split into its constituent fields (author, venue, title,
publisher, year, etc.) using an information extraction system. We used this processed version of the Cora dataset for our experiments. We used only the three
most informative attributes: author, title and venue (with venue encompassing
different types of publication venue, such as conferences, journals, workshops,
etc.).
We divided the data into equal-sized training and test sets, ensuring that no
true set of matching records was split among the two, to avoid contamination
of the test data by the training set. We performed two-fold cross-validation, and
report the average f-measure, recall and precision over twenty different random
splits. We trained the models using a number of iterations that was first determined using a validation subset of the data. The “optimal” number of iterations
was 125 for the collective model and 17 for the standard one. The results are
shown in Table 2. The collective model gives an f-measure gain of about 2.5%
over the standard model, which is the result of a gain in recall that outweighs a
smaller loss in precision. Next, we took the transitive closure over the matches
produced by each model as a post-processing step to remove any inconsistent
decisions. Table 3 compares the performance of the standard and the collective
model after this step. The recall of the standard model is greatly improved, but
the precision is reduced even more drastically, resulting in a substantial deterioration in f-measure. This points to the fact that the standard model makes a
lot of decisions which are inconsistent with each other. On the other hand, the
collective model is relatively stable with respect to the transitive closure step,
with its f-measure remaining the same as a result of a small increase in recall
and a small loss in precision. The net f-measure gain of the collective model over
the standard model after transitive closure step is about 6.2%. This relative stability of the collective model leads us to infer that the flow of information it
facilitates not only improves predictive performance but also helps to produce
overall consistent decisions.
We hypothesize that as we move to larger databases (in number of records and
number of attributes) the advantage of our model will become more pronounced,
because there will be many more interactions between sets of candidate pairs
which our model can potentially benefit from.
4.2 Semi-Artificial Data
To further observe the behavior of the algorithms, we generated variants of the
Cora database by taking distinct field values from the original database and randomly combining them to generate distinct papers. The semi-artificial data has
the advantage that we can control various factors like the number of clusters,
level of distortion, etc., and observe how these factors affect the performance of
our algorithm. To generate the semi-artificial database, we first made a list of
author, title and venue field values. In particular, we had 80 distinct titles, 40
different venues and 20 different authors. Then, for each field value, we created
a fixed number of distorted duplicates of the string value (in our current experiments, we created 8 different distorted duplicates for each field value). The
number of distortions within each duplicate was chosen according to a binomial
distribution whose Bernoulli parameter (success probability) we varied in our
experiments. A single trial corresponds to the distortion of a single word in the
original string. For each word that we decided to perturb, we randomly chose one of the following: introduce a spelling mistake, replace it by a word from
another field value, or delete the word. To generate the records in the database,
we first decided the total number of clusters our database would have. We varied this number in our experiments. The total number of documents was kept
constant at 1000 across all the experiments we carried out with semi-artificial
data. For each cluster to be generated, we randomly chose a combination of
original field values. This uniquely determines a cluster. To create the duplicate
records within each cluster, we randomly chose, for each field value assigned to
the cluster, one of the corresponding distorted field duplicates.
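A minimal sketch of the field-distortion step described above; the particular misspelling operation (an adjacent-letter swap) and the helper names are our assumptions, not the authors' generator:

import random

def distort_field(value, p, other_field_words):
    """Each word of the field value is perturbed with probability p (one Bernoulli
    trial per word); a perturbed word is misspelled, replaced by a word from another
    field value, or deleted, chosen uniformly at random."""
    out = []
    for word in value.split():
        if random.random() >= p:
            out.append(word)
            continue
        action = random.choice(["misspell", "replace", "delete"])
        if action == "misspell":
            if len(word) > 1:                        # swap two adjacent letters
                i = random.randrange(len(word) - 1)
                word = word[:i] + word[i + 1] + word[i] + word[i + 2:]
            out.append(word)
        elif action == "replace":
            out.append(random.choice(other_field_words))
        # "delete": append nothing
    return " ".join(out)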
In the first set of experiments on the semi-artificial databases, our aim was
to analyze the relative performances of the standard model and the collective
model as we vary the cluster size/number of clusters. We varied the number
of clusters in the data from 50 to 400: the first two values were 50 and 100, and thereafter the number of clusters was increased in steps of 100. The average number of
records per cluster was varied inversely, to keep the total number of records
in the database constant (at 1000). The distortion parameter was kept at 0.4.
Figures 3(a), 3(c) and 3(e) show the results. Each data point was obtained by
performing two-fold cross validation over five random splits of the data. All the
results reported are before taking the transitive closure over the matching pairs.
The F-measure (Figure 3(a)) drops as the number of clusters is increased, but
the collective model always outperforms the standard model. The recall curve
(Figure 3(c)) shows similar behavior. Precision (Figure 3(e)) seems to drop with
increasing number of clusters, with neither model emerging as a clear winner.
In the second set of experiments on the semi-artificial databases, our aim was
to analyze the relative performances of the standard model and the collective
model as we vary the level of distortion in the data. We varied the distortion
parameter from 0 to 1, at intervals of 0.2, where 0 means no distortion and 1 means
that every word in the string is distorted. The number of clusters in the database
was kept constant at 100, the total number of documents in the database being
1000. Figures 3(b), 3(d) and 3(f) show the results. Each data point was obtained
by performing two-fold cross validation over five random splits of the data. All
the results reported are before taking the transitive closure over the matching
pairs. As expected, the F-measure (Figure 3(b)) drops as the level of distortion in the data is increased. The collective model outperforms the standard model at all levels of distortion. The recall curve (Figure 3(d)) shows similar behavior. Precision (Figure 3(f)) initially drops with increasing distortion, but then starts to increase when the distortion level is about half of the maximum possible. The collective model performs as well as or better than the standard model until the distortion level reaches 0.4, after which the standard model takes over.

Fig. 3. Performance of the two models on semi-artificial datasets: (a) F-measure, (c) recall and (e) precision as a function of the number of clusters; (b) F-measure, (d) recall and (f) precision as a function of the level of distortion.
In summary, these experiments support the hypothesis that the collective
model yields improved predictive performance relative to the standard model.
It appears to improve f-measure as a result of a substantial gain in recall while
reducing precision by a smaller amount. Investigating these effects and trading
off precision and recall in our framework are significant items for future work.
5 Related Work
Most work on the record linkage problem to date has been based on calculating
pairwise distances and collapsing two records if their distance falls below a certain threshold. This is typically followed by taking a transitive closure over the
matching pairs. The problem of record linkage was originally proposed by Newcombe [13]. Fellegi and Sunter [6] then put the ideas proposed by Newcombe into
a rigorous statistical framework. Winkler [19] provides an overview of systems
for record linkage. There is a substantial literature on record linkage within the
KDD community ([8], [3], [12], [4], [16], [18], [2], etc.).
Recently, Pasula et al. proposed a multi-relational approach to the related
problem of reference matching [14]. This approach is based on directed graphical models and a different representation of the matching problem; it also includes parsing of the references into fields, and is quite complex. In particular, it is
a generative rather than discriminative approach, requiring modeling of all dependencies among all variables, and the learning task is correspondingly more
difficult. A multi-relational discriminative approach has been proposed by McCallum and Wellner [11]. The only inference performed across candidate pairs,
however, is the transitive closure that is traditionally done as a post-processing
step. While our approach borrows much of the conditional machinery developed
by McCallum et al., its representation of the problem and propagation of information through shared attribute values are new.
Taskar et al. [17] introduced relational Markov networks, which are conditional random fields with templates for cliques as described in Section 3.1, and
applied them to a Web mining task. Each template constructs a set of similar cliques via a conjunctive query over the database of interest. Our model is
very similar to a relational Markov network, except that it cannot be directly
constructed by such queries; rather, the cliques are over nodes for the relevant
record and attribute pairs that must first be created.
6 Conclusion and Future Work
Record linkage or de-duplication is a key problem in KDD. With few exceptions,
current approaches solve the problem for each candidate pair independently. In
this paper, we argued that a potentially more accurate approach to the problem is
to set up a network with a node for each record pair and each attribute pair, and
use it to infer matches for all the pairs simultaneously. We designed a framework
for collective inference where information is propagated through shared attribute
values of record pairs. Our experiments confirm that our approach outperforms
the standard approach.
We plan to apply our approach to a variety of domains other than the bibliography domain. So far, we have experimented with relations involving only a
few attributes. We envisage that as the number of attributes increases, there will
be potentially more sharing among attribute values, and our approach should
be able to take advantage of it.
In the current model, we use only cliques of size two. Although this has
the advantage of allowing for polynomial-time exact inference, it is a strong
restriction on the types of dependencies that can be modeled. In the future we
would like to experiment with introducing larger cliques in our model, which will
entail moving to approximate inference.
References
1. A. Agresti. Categorical Data Analysis. Wiley, New York, NY, 1990.
2. M. Bilenko and R. Mooney. Adaptive duplicate detection using learnable string
similarity measures. In Proc. 9th SIGKDD, pages 7–12, 2003.
3. W. Cohen, H. Kautz, and D. McAllester. Hardening soft information sources. In
Proc. 6th SIGKDD, pages 255–259, 2000.
4. W. Cohen and J. Richman. Learning to match and cluster large high-dimensional
data sets for data integration. In Proc. 8th SIGKDD, pages 475–480, 2002.
5. M. Collins. Discriminative training methods for hidden Markov models: Theory
and experiments with perceptron algorithms. In Proc. 2002 EMNLP, 2002.
6. I. Fellegi and A. Sunter. A theory for record linkage. Journal of the American
Statistical Association, 64:1183–1210, 1969.
7. D. M. Greig, B. T. Porteous, and A. H. Seheult. Exact maximum a posteriori
estimation for binary images. Journal of the Royal Statistical Society, Series B,
51:271–279, 1989.
8. M. Hernandez and S. Stolfo. The merge/purge problem for large databases. In
Proc. 1995 SIGMOD, pages 127–138, 1995.
9. J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic
models for segmenting and labeling sequence data. In Proc. 18th ICML, pages
282–289, 2001.
10. A. McCallum, K. Nigam, and L. Ungar. Efficient clustering of high-dimensional
data sets with application to reference matching. In Proc. 6th SIGKDD, pages
169–178, 2000.
11. A. McCallum and B. Wellner. Object consolidation by graph partitioning with a
conditionally trained distance metric. In Proc. SIGKDD-2003 Workshop on Data
Cleaning, Record Linkage, and Object Consolidation, pages 19–24, 2003.
12. A. Monge and C. Elkan. An efficient domain-independent algorithm for detecting
approximately duplicate database records. In Proc. SIGMOD-1997 Workshop on
Research Issues in Data Mining and Knowledge Discovery, 1997.
13. H. Newcombe, J. Kennedy, S. Axford, and A. James. Automatic linkage of vital
records. Science, 130:954–959, 1959.
14. H. Pasula, B. Marthi, B. Milch, S. Russell, and I. Shpitser. Identity uncertainty
and citation matching. In Adv. NIPS 15, 2003.
15. G. Salton and M. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, New York, NY, 1983.
16. S. Sarawagi and A. Bhamidipaty. Interactive deduplication using active learning.
In Proc. 8th SIGKDD, pages 269–278, 2002.
17. B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In Proc. 18th UAI, pages 485–492, 2002.
18. S. Tejada, C. Knoblock, and S. Minton. Learning domain-independent string transformation weights for high accuracy object identification. In Proc. 8th SIGKDD,
pages 350–359, 2002.
19. W. Winkler. The state of record linkage and current research problems. Technical
report, Statistical Research Division, U.S. Census Bureau, 1999.
Dynamic Feature Generation for Relational Learning
Alexandrin Popescul and Lyle H. Ungar
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104
{popescul, ungar}@cis.upenn.edu
(Contact author now at Ask Jeeves, Inc., [email protected].)
Abstract. We provide a methodology which integrates dynamic feature generation from relational databases with statistical feature selection and modeling.
Unlike the standard breadth- or depth-first search of a refinement graph, the order in which we generate features and test them for inclusion in the model is
dynamically determined based on which features are found to be predictive. This
best-first search often reduces the number of computationally expensive feature
evaluations. Multiple feature streams are created based on the syntactic structure
of the feature expressions; for example, based on the type of aggregate operator
used to generate the feature. At each iteration, the next feature to be evaluated
is taken from the stream which has been producing better features. Experiments
show that dynamic feature generation and selection produces more accurate models per feature generated than its static alternative.
1 Introduction
We provide a statistical relational learning method which integrates dynamic feature
generation with statistical modeling and feature selection. Dynamic feature generation
can lead to the discovery of predictive features with less computation than generating
all features in advance. Dynamic feature generation decides the order in which features
are evaluated based on run-time feature selection feedback.
We implement dynamic feature generation within the Structural Generalized Linear Regression (SGLR) framework [5, 6]. SGLR integrates generation of feature candidates
from relational data with their selection using statistical model selection criteria. Relational feature generation is cast as a search in the space of queries to a relational
database. This search covers the space of queries involving one or more relation instances; query evaluations produce numeric values which are candidate features. At
each search node, candidate features are considered for inclusion in a discriminative
statistical model such as linear or logistic regression. The result is a statistical model
where each selected feature is the evaluation of a database query encoding a predictive
data pattern.
The SGLR framework allows us to control the order in which features are generated
and tested for inclusion in the model. Feature generation in SGLR consists of two steps:
query expression generation, which is cheap as it involves only syntactic operations on
query strings, and query evaluation, which is computationally demanding. Prior to being evaluated, query expressions are assigned into multiple streams. At each iteration,
one of the streams is chosen to generate the next potential feature, based on the value of
the features that stream has provided relative to the other streams (Figure 1). Multi-stream
feature selection is motivated by the fact that not all feature candidates are equally useful. It is often possible to heuristically classify features into different streams reflecting
the expectation of a modeler of how likely a stream is to produce better features, or how
expensive the features are to evaluate.
Fig. 1. Multi-stream dynamic feature generation: generate a stream of database query expressions, assign queries into multiple streams, and evaluate the next feature from the winning stream.
In general, the split of features into multiple streams need not be a disjoint partition.
For example, some streams can be formed based on the types of aggregate operators
in query expressions, as done below, and other streams can be formed based on the
type of relations joined in a query, for example, based on whether a query contains a cluster-derived relation or a relation of the same type as the target concept. A query would be enqueued onto one stream based on its aggregate operator, and onto
a different stream based on the type of its relation instances. The method, if used with
a check to avoid evaluating a query which has been evaluated previously in a different
stream, will not incur significant increase in computational cost.
The need for aggregates in relational learning comes from the fact that the central
type of relational representation is a table (set); aggregation summarizes information in
a table into a scalar value which can be included in a statistical model. Aggregate-based
feature generation often produces more accurate predictive models from relational data
than pure logic-based features [4]. While rich types of aggregates can be included into
feature generation, not all of them are expected to be equally useful, suggesting using
the aggregate type as a heuristic for stream assignment in dynamic feature generation.
First-order expressions are treated as database queries resulting in a table of all
satisfying solutions, rather than a single boolean value. The following is an example
of an aggregate feature useful in link prediction; here the target concept is binary and
the feature (the right-hand side of the expression) is a database query about both target documents d1 and d2:

Citation(d1, d2) ← count_D ( Citation(d1, D), Citation(d2, D) ),

which is the number of documents D that both d1 and d2 cite.
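As an illustration, such a feature could be evaluated with a single SQL aggregate query; the sketch below assumes a SQLite copy of the Citation relation with columns renamed from_doc and to_doc (since from and to are SQL keywords), and is not the SGLR system itself.

import sqlite3

def cocitation_count(conn, d1, d2):
    """Number of documents cited by both d1 and d2 (the aggregate feature above)."""
    query = """
        SELECT COUNT(DISTINCT a.to_doc)
        FROM Citation a JOIN Citation b ON a.to_doc = b.to_doc
        WHERE a.from_doc = ? AND b.from_doc = ?
    """
    return conn.execute(query, (d1, d2)).fetchone()[0]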
We report below an experiment where features are queued for evaluation into two
streams according to the type of aggregate operator in the query expression. We compare two-stream dynamic with one-stream static feature generation. In both cases, query
expressions are generated by breadth-first search. The base-line, static, strategy evaluates queries in the same order that the expressions appear in the search queue, while the
alternative, dynamic strategy, enqueues queries into two separate streams based on a
syntactic criterion, here the type of aggregation, at the time each expression is generated,
but chooses the next feature to be evaluated from the stream which has been producing
“better” features.
We use data from CiteSeer (a.k.a. ResearchIndex), an online digital library of computer science papers [2] (http://citeseer.org/). CiteSeer contains a rich set of
data, including text of papers, citation information, author names and affiliations, and
conference or journal names. We represent CiteSeer as a relational database. For example, citation information is represented as a binary relation between citing and cited
documents. Document authorship, publication venues and word occurrence are also relations:
Citation(from:Document, to:Document),
Author(doc:Document, auth:Person),
PublishedIn(doc:Document, vn:Venue),
HasWord(doc:Document, word:Word).
The next section describes the experiment testing a two-stream dynamic strategy
against the static one-stream alternative.
2 Experimental Set-up
Feature generation in the SGLR framework consists of two steps: query expression
generation, and query evaluation. The former is cheap as it involves only syntactic operations on query strings; the latter is computationally demanding. The experiment is
set up to test two strategies which differ in the order in which queries are evaluated. In
both strategies, query expressions are generated by breadth-first search. The base-line,
static, strategy evaluates queries in the same order the expressions appear in the search
queue, while the alternative, dynamic strategy, enqueues queries into separate streams
at the time each expression is generated, but chooses the next feature to be evaluated from
the stream with the highest ratio:

$$\frac{n_{\mathrm{selected}}(s)}{n_{\mathrm{considered}}(s)},$$

where n_selected(s) is the number of features from stream s selected for addition to the model, and n_considered(s) is the total number of features tried by feature selection in this stream. Many other ranking methods could be used; this one has the virtue of being simple and, for the realistic situation in which the density of predictive features tends to decrease as one goes far into a stream, complete.
A query expression is assigned into one of two streams based on the type of aggregate operator it uses:
– Stream 1: queries with aggregates exists and count over entire table.
– Stream 2: other aggregates. Here, these are the counts of unique elements in individual columns.
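A minimal sketch (not the authors' implementation) of the resulting two-stream loop, combining the ratio above with the streams just defined; the add-one smoothing in the ratio is our addition, to handle streams with no evaluated features yet.

def dynamic_feature_generation(streams, evaluate, try_select, max_features):
    """streams: dict name -> list of pending query expressions;
    evaluate(q): run the query and return the feature column (expensive);
    try_select(feature): True if feature selection adds the feature to the model."""
    selected = {s: 0 for s in streams}
    tried = {s: 0 for s in streams}
    for _ in range(max_features):
        live = [s for s in streams if streams[s]]
        if not live:
            break          # a stream (or all streams) exhausted
        # pick the stream with the best ratio of selected to tried features so far
        best = max(live, key=lambda s: (selected[s] + 1) / (tried[s] + 1))
        feature = evaluate(streams[best].pop(0))
        tried[best] += 1
        if try_select(feature):
            selected[best] += 1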
We compare test set accuracies with dynamic and static feature generation in two
scenarios: i) difference in accuracies against the number of features generated and ii)
difference in accuracies against time. (Time is recorded on machines with equivalent characteristics and not used for other processing; the MySQL database engine is used.)
2.1 Data Sets
The experiments are performed for two tasks using CiteSeer data: classifying documents into their publication venues, conferences or journals, and predicting the existence of a citation between two documents. The target concept pairs in the two tasks are
<Document, Venue> and <Document, Document> respectively. In the case of
venue prediction, the value of the response variable is one if the pair’s venue is a true
publication venue of the corresponding document and zero otherwise. Similarly, in link
prediction, the value of the response variable is one if there exists a citation between two
documents and zero otherwise. In both tasks, the search space contains queries based on
several relations about documents and publication venues, such as citation information,
authorship and word content of the documents.
Each of the tasks consists of two datasets: one using the original relational representation and the other using an augmented cluster-based representation. Alternative
“cluster-relations” are derived from the attributes in the original database schema and
included in the feature generation process. We use clustering to derive new first-class
relational entities reflecting hidden topics of papers, author communities and word
groups. New cluster relations included into the feature generation process in addition to
the original relations result in the creation of richer cluster-based features, where clusters enter into more complex relationships with existing background relations rather
than only provide dimensionality reduction. This approach can result in more accurate
models than those built only from the original relational concepts [5].
The following are descriptions of basic relations we use, followed by the description
of derived cluster relations we use to augment the search space:
– PublishedIn(doc:Document, vn:Venue). Publication venues are extracted by
matching information with the DBLP database (http://dblp.uni-trier.de/). Publication venues are known for
60,646 CiteSeer documents. This is the total number of documents participating
in the experiments. All other relations are populated with information about these
documents. There are 1,560 unique conferences and journals. Training and test
examples are sampled from this background relation in the venue prediction task.
– Author(doc:Document, auth:Person). 53,660 out of the total of 60,646 documents have authorship information available; there are 26,740 unique last names
of authors. The number of tuples in this relation is 131,582.
52
– Citation(from:Document, to:Document). There are a total of 173,410 citations among our “universe” of 60,646 documents. The relation contains 42,749
unique citing documents, 31,603 unique cited documents, and the total of 49,398
documents. Training and test examples are sampled from this background relation
in the link prediction task.
– HasWord(doc:Document, word:Word). This is by far the largest relation even
for relatively small vocabularies. It is populated by binary word occurrence vectors,
i.e. there is a tuple for each word in the vocabulary if it is contained in a corresponding document. With word data available for 56,104 documents and vocabulary of
size 1,000, the total number of tuples in HasWord is 6,894,712 (the vocabulary contains top count words in the entire collection after Porter stemming and stop word
removal).
We use k-means to derive cluster relations; any other hard clustering algorithm can be used for this purpose. The results of clustering are represented by binary relations <ClusteredEntity, Cluster ID>. Cluster relations are precomputed and added to the relational schema before the feature generation phase. The original database schema contains several entities which can be clustered based on a number of alternative criteria. Each many-to-many relation in the original schema presented above can produce two distinct cluster relations. Three out of four relations are many-to-many (the exception being PublishedIn); this results in six new cluster relations. Since the
PublishedIn relation is not used to produce new clusters, nothing has to be done to exclude the attributes of entities in training and test sets from participating in clustering.
In the case of link prediction, on the other hand, the relation corresponding to the target
concept, Citation, does produce clusters. Clustering is run without the links sampled for
training and test sets. The following is the list of these six cluster relations which we
add to the relational database schema:
– ClusterDocumentsByAuthors(doc:Document, clust:Clust0).
53,660 documents are clustered based on the identity of their 26,740 authors.
– ClusterAuthorsByDocuments(auth:Person, clust:Clust1).
26,740 authors are clustered based on 53,660 documents they wrote.
– ClusterDocumentsByCitingDocuments(doc:Document,clust:Clust2).
31,603 documents are clustered based on 42,749 documents citing them (the numbers are slightly lower in link prediction where target concept links do not participate in clustering).
– ClusterDocumentsByCitedDocuments(doc:Document,clust:Clust3).
42,749 documents are clustered based on 31,603 documents cited from them (the
numbers are slightly lower in link prediction where target concept links do not
participate in clustering).
– ClusterDocumentsByWords(doc:Document, clust:Clust4).
56,104 documents are clustered based on the vocabulary of top 1,000 words they
contain.
– ClusterWordsByDocuments(word:Word, clust:Clust5).
The vocabulary of 1,000 words is clustered based on their occurrence in this collection of 56,104 documents.
An important aspect of optimizing cluster utility, in general, and of the use of cluster relations in our setting, in particular, is the choice of k, the number of groups into which the entities are clustered. In our case, for each potential value of k, we would need to repeat expensive feature generation for all cluster-derived features. In the experiments presented here we fix k to be equal to 100 in all cluster relations except for the last one, ClusterWordsByDocuments, where the number of clusters is 10. The latter is clustered into fewer groups than the rest of the data to reflect roughly an order of magnitude smaller number of objects, words, to be clustered: we selected the vocabulary of size 1,000 to make the size of the HasWord relation manageable. The accuracy of the resulting cluster-based models reported below can potentially be improved even further if one is willing to incur the additional cost of optimizing the choice of k.
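As an illustration, one such cluster relation could be derived as follows, assuming scikit-learn and SciPy (a sketch, not the authors' code):

from scipy.sparse import csr_matrix
from sklearn.cluster import KMeans

def cluster_documents_by_authors(author_pairs, k=100):
    """author_pairs: (doc_id, author_id) tuples from the Author relation.
    Returns (doc_id, cluster_id) tuples, i.e. the new binary cluster relation
    ClusterDocumentsByAuthors."""
    docs = sorted({d for d, _ in author_pairs})
    auths = sorted({a for _, a in author_pairs})
    d_index = {d: i for i, d in enumerate(docs)}
    a_index = {a: j for j, a in enumerate(auths)}
    rows = [d_index[d] for d, _ in author_pairs]
    cols = [a_index[a] for _, a in author_pairs]
    # binary document-by-author occurrence matrix
    X = csr_matrix(([1] * len(author_pairs), (rows, cols)), shape=(len(docs), len(auths)))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    return [(d, int(labels[d_index[d]])) for d in docs]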
Table 1 summarizes the sizes of four original relations and the sizes of six derived
cluster relations.
Table 1. Sizes of the original and cluster-based relations

Relation                            Size
PublishedIn                         60,646
Author                              131,582
Citation                            173,410
HasWord                             6,894,712
ClusterDocumentsByAuthors           53,660
ClusterAuthorsByDocuments           26,740
ClusterDocumentsByCitingDocuments   31,603
ClusterDocumentsByCitedDocuments    42,749
ClusterDocumentsByWords             56,104
ClusterWordsByDocuments             1,000
Each of the sets,
– Set1: venue prediction, with cluster relations,
– Set2: venue prediction, without cluster relations,
– Set3: link prediction, with cluster relations,
– Set4: link prediction, without cluster relations,
consists of ten partitions which are used to produce 10-fold cross validation pointwise confidence estimates of the accuracy curves at chosen intervals. At each interval,
confidence is derived from ten points obtained from testing ten models learned from
each cross validation partition against the remaining nine partitions. The number of
observations in each partition of the venue prediction dataset and of the link prediction
dataset is 1,000 and 500 respectively.
Each of the datasets has 3,500 available features. Table 2 gives the number of features in each stream. We stop the experiment when one of the streams is exhausted.
Table 2. Sizes of feature streams

Data set  Stream 1  Stream 2
set1      1,552     1,948
set2      1,509     1,991
set3      1,778     1,722
set4      1,633     1,867
3 Results
This section reports the difference in test set accuracy for dynamic and static feature
generation. In each of four datasets, the difference in accuracy is plotted against the
number of evaluated features considered by feature selection and against the time taken
to evaluate the queries. Each plot reports 95% confidence intervals of the accuracy
difference. The accuracies are compared until the winning stream becomes empty. After
the initial brief alternation between the two streams the process stabilized and continued
to request most of the features from the winning stream, which was Stream 1 in all
datasets.
Fig. 2. Set 1 (venue prediction, with cluster relations). Test accuracy difference against the number of features considered. Errors: 95% confidence intervals (10-fold cross-validation, 10 runs).
Figures 2, 3, 4 and 5 present the dynamic search test accuracy minus the static search
test accuracy against the number of features generated for sets 1 through 4 respectively.
Fig. 3. Set 2 (venue prediction, without cluster relations). Test accuracy difference against the number of features considered. Errors: 95% confidence intervals (10-fold cross-validation, 10 runs).

Fig. 4. Set 3 (link prediction, with cluster relations). Test accuracy difference against the number of features considered. Errors: 95% confidence intervals (10-fold cross-validation, 10 runs).
Fig. 5. Set 4 (link prediction, without cluster relations). Test accuracy difference against the number of features considered. Errors: 95% confidence intervals (10-fold cross-validation, 10 runs).

Fig. 6. Set 1 (venue prediction, with cluster relations). Test accuracy difference against time taken to generate features. Errors: 95% confidence intervals (10-fold cross-validation, 10 runs).
Fig. 7. Set 2 (venue prediction, without cluster relations). Test accuracy difference against time taken to generate features. Errors: 95% confidence intervals (10-fold cross-validation, 10 runs).

Fig. 8. Set 3 (link prediction, with cluster relations). Test accuracy difference against time taken to generate features. Errors: 95% confidence intervals (10-fold cross-validation, 10 runs).
Fig. 9. Set 4 (link prediction, without cluster relations). Test accuracy difference against time taken to generate features. Errors: 95% confidence intervals (10-fold cross-validation, 10 runs).
Figures 6, 7, 8 and 9 present accuracy difference against the time in seconds taken
to evaluate features considered for selection for sets 1 through 4 respectively. These
plots show early gains from dynamic feature generation. Gains tend to vanish as all
predictive features are exploited. The dynamic feature generation performs significantly
better than its static alternative along considerable intervals of the incremental learning
process. The dynamic feature generation never performs significantly worse than the
static search, based on the pairwise Gaussian 95% confidence intervals of the accuracy
difference.
4 Discussion
We presented a method for dynamic feature generation with statistical modeling and
feature selection and showed that dynamic feature generation can lead to the discovery
of predictive features with less computation than generating all features in advance.
Dynamic feature generation contrasts with “propositionalization” [1, 7], a static relational feature generation approach in which features are first constructed from relational representation and then presented to a propositional algorithm. The generation
of features with propositionalization is thus fully decoupled from the model used to
make predictions, and prematurely incurs the full computational cost of feature generation. Better models can be built if one allows native statistical feature selection criteria to provide run-time feedback determining the order in which features are generated. Coupling
feature generation to model construction can significantly reduce computational costs.
Some inductive logic programming systems also perform dynamic feature generation,
in this case when modeling with logic. For example, Progol [3] uses an A*-like algorithm
to direct its search.
In the experiments reported here, dynamic feature generation performs significantly
better than its static alternative along considerable intervals of the incremental learning
process. The dynamic feature generation never performs significantly worse than the
static search, based on the pairwise Gaussian 95% confidence intervals of the accuracy
difference.
One of the two feature streams was a clear winner, i.e. the heuristic used to split
features was successful in reflecting the expectation that one stream is more likely to
produce good features. In situations when the choice of a good heuristic is difficult, dynamic feature generation can still be used; in the worst case, when features in different
streams are “equally good”, the method will asymptotically lead to the same performance as the static feature generation by taking features from different streams with
equal likelihood.
The two stream approach can be generalized to a multi-stream approach. Also, the
split of features into multiple streams does not need to be a disjoint partition. For example, some streams can be formed based on the types of aggregate operators in query
expressions, as we did here, and other streams can be formed based on the type of relations joined in a query, for example, split based on whether a query contains a cluster-derived relation, or a relation of the same type as the target concept. A given query is
enqueued into one stream based on its aggregate operator, and into a different stream
based on the type of its relation instances. The method, if used with a check to avoid
evaluating a query which has been evaluated previously in a different stream, will not
incur significant increase in computational cost.
Another approach is to split features into multiple streams according to the sizes of
their relation instances, which would serve as an estimate of evaluation time. This can
lead to improvements for the following reasons: i) out of two nearly collinear features
a cheaper one will likely be evaluated first. This will lead to approximately the same
accuracy improvement as the second more expensive feature, and ii) there is no obvious correlation between the cost to evaluate a query and its expected predictive power,
therefore it can be expected that cheap queries are as likely as more expensive ones to result in good features.
References
1. Stefan Kramer, Nada Lavrac, and Peter Flach. Propositionalization approaches to relational
data mining. In Saso Dzeroski and Nada Lavrac, editors, Relational Data Mining, pages
262–291. Springer-Verlag, 2001.
2. Steve Lawrence, C. Lee Giles, and Kurt Bollacker. Digital libraries and autonomous citation
indexing. IEEE Computer, 32(6):67–71, 1999.
3. S. Muggleton. Inverse entailment and Progol. New Generation Computing, 13:245–286, 1995.
4. Claudia Perlich and Foster Provost. Aggregation-based feature invention and relational concept classes. In KDD-2003, 2003.
5. Alexandrin Popescul and Lyle Ungar. Cluster-based concept invention for statistical relational
learning. In KDD-2004, 2004.
6. Alexandrin Popescul, Lyle H. Ungar, Steve Lawrence, and David M. Pennock. Statistical
relational learning for document mining. In ICDM-2003, 2003.
7. A. Srinivasan and R. King. Feature construction with inductive logic programming: A study
of quantitative predictions of biological activity aided by structural attributes. Data Mining
and Knowledge Discovery, 3(1):37–57, 1999.
Kernel-based distances for relational learning
Adam Woznica, Alexandros Kalousis, and Melanie Hilario
University of Geneva,
Computer Science Department,
Rue General Dufour, 1211, Geneve, Switzerland
{woznica,kalousis,hilario}@cui.unige.ch
Abstract. In this paper we present a novel and general framework for
kernel-based learning over relational schemata. We exploit the notion of
foreign keys to perform the leap from a flat attribute-value representation to a structured representation that underlies relational learning. We define a new attribute type, which we call instance-set, that builds on the notion of foreign keys. It is shown that this more database-oriented approach enables intuitive modeling of relational problems. We also define some kernel functions over relational schemata and adapt them so that they can be used as the basis for a relational instance-based learning algorithm. We check the performance of our algorithm on a number of well-known relational benchmark datasets.
1 Introduction
Learning from structured data has recently attracted a great deal of attention
within the machine learning community. We propose a novel database oriented
approach and define our algorithms and operations over relational schemata.
Learning examples come in the form of relational tables. We assume that there
exists a single main table where every entry corresponds to a given instance
and where the class of the instance is also stated. Other than that there are no
constraints on the possible relations within the database.
In logic programming terminology the equivalent of a relation in a database is
a typed clause, i.e. a clause where variables appearing as arguments in its literals
take values in a specific domain. Using the relational algebra representation automatically results in a typed representation. Furthermore, foreign keys provide a simple syntactic way of specifying semantic information: entities are explicitly connected via foreign keys. In order to retrieve the complete information related to a given instance one simply has to move through the relations of
the database following the associations provided by the foreign keys. To be able
to model the same type of semantic information in first order learning one has
to use typed representations and define objects of specific types, e.g. objects of
type person, number, etc. In RIBL [10] for example attributes of type object are
defined and used as object identifiers, thus implicitly implementing the notion
of a foreign key. Building directly on the notion of foreign keys renders such constructs unnecessary and provides the basis for more efficient implementations.
At each moment by simply looking at the relational schema we know exactly
at which table and with which attribute of that table we should be working in
order to retrieve the related information, e.g. via a simple SQL query.
Recently it has been realized that one strength of the kernel-based learning
paradigm is its ability to support input spaces whose representation is more
general than attribute-value. This is mainly due to the fact that the proper
definition of a kernel function enables the structured data to be embedded in
some linear feature space without the explicit computation of the feature map.
This can be achieved as long as we are able to define a function which is both
positive definite and appropriate for the problem at hand. The main advantage
of this approach is that any propositional algorithm which is based on inner
products can be applied on the structured data.
In this paper we define a family of kernel functions over relational schemata
which are generated in a “syntax-driven” manner in the sense that the input
description specifies the kernel’s operation. We also exploit these kernels to define
a relational distance over relational schemata and we experiment with and compare
a number of kernel-based distance measures.
2 Description of the Relational Instance Based Learner
Consider a general relational schema that consists of a set of tables {T }. Each row
–instance– Ti of a table T represents a relationship between a set of values {T ij }
of the set of attributes {T j } related via T . The domain, D(T j ), of attribute
T j is the set of values that the attribute assumes in table T . An attribute T j is
called a potential key of table T if it assumes a unique value for each instance of
the table. An attribute X i of table X is a foreign key if it references a potential
key T j of table T and takes values in the domain D(T j ) in which case we will
also call the T j a referenced key. The association between T j and X i models
one-to-many relations, i.e. one element of T can be associated with a set of
elements of X. A link is a quadruple of the form (T, T k , X, X l ) where either
X l is a foreign key of X referencing a potential key T k of T or vice versa. We
will call the set of attributes of a table T that are not keys (i.e. referenced keys,
foreign keys or attributes defined as keys but not referenced) standard attributes
and denote it with {S j }. The notion of links is critical for our relational learner
since it will provide the basis for the new type of attributes, i.e. the instance-set
type.
2.1 Accessing a relational instance
For a given referenced key T k of table T we denote by R(T, T k ) the set of links (T, T k , X, X fk ) in which T k is referenced as a foreign key by X fk of X. (The term multiset is more appropriate here than the term set, since a table X can appear more than once in the dependent tables, e.g. when it has two foreign keys pointing to attributes of T.) We will call the multiset of X tables, denoted as R(T, T k ){1}, the directly dependent
tables of T for T k . By R(T, ) = ∪k R(T, T k ) we denote the list of all links in which
one of the potential keys of T is referenced as a foreign key by an attribute of
another table.
Similarly for a given foreign key T fk of T , R−1 (T, T fk ) will return the link
(T, T fk , X, X k ) where X k is the potential key of X referenced by the foreign
key T fk . We will call table X the directly referenced table of T for T fk and
denote it as R−1 (T, T fk ){1}. If T has more than one foreign key then by
R−1 (T, ) = ∪fk R−1 (T, T fk ) we denote the set of all links of T defined by the
foreign keys of T , and by R−1 (T, ){1} the corresponding list of tables to which
these foreign keys refer.
To define a classification problem, one of the tables in {T} should be designated as the main table, M, i.e. the table on which the classification problem will be defined. One of the attributes of this table should then be defined as the class attribute, M^c, i.e. the attribute that defines the classification problem. Each instance, M_i, of the M table gives rise to one relational instance, M_i^+, i.e. an instance that spans the different tables {T} of our relational schema. To get the complete description of M_i^+ one may have to traverse the whole relational schema according to the table associations defined in the schema. This is done via the recursive application of G(M, M_i), a function that gives the contents of instance M_i, on the associated instances of M_i in the tables given by R(M, ·){1} and R⁻¹(M, ·){1}. More generally, for a given instance T_i of a table T and a link (T, T^l, X, X^k), we can access the associated instances of T_i in X via the function set(T_i^l, X, X^k), which is nothing more than a simple SQL query that retrieves the set of instances from table X for which the value of X^k equals T_i^l.
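As a rough illustration, the retrieval step set(T_i^l, X, X^k) amounts to a single parameterized query; the sqlite3 usage, the function name and the way the table and column names are passed in are assumptions of this sketch, not part of the method itself.

import sqlite3

def instance_set(conn, key_value, table_x, column_xk):
    # Sketch of set(T_i^l, X, X^k): fetch every instance of table X whose
    # attribute X^k carries the key value T_i^l of the current instance.
    # Identifiers cannot be bound as SQL parameters, so table_x and column_xk
    # are assumed to come from the (trusted) schema description.
    query = f"SELECT * FROM {table_x} WHERE {column_xk} = ?"
    return conn.execute(query, (key_value,)).fetchall()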
We take the view that each one of the R(T, ·) ∪ R⁻¹(T, ·) links associated with a table T adds one more feature to the ones already defined by the set of standard attributes {S^j}. It is these links that give rise to the new attribute type which we call the attribute of type instance-set. This new type is an extension of the classical attribute-value representation; the value of an instance for an attribute of this type is the set of instances with which the given instance is associated in the table given in the link.
Traversing the relational schema in order to retrieve the complete description of a given relational instance can easily produce self-replicating loops that bring no additional information. In order to avoid that kind of situation we keep track of all the instances of the different tables that appear in a given path of the recursion. The moment an instance appears a second time on the given recursion path, the recursion terminates.
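Building on the same idea, the recursive retrieval with loop detection could be sketched roughly as follows; the dictionary encoding of the links, the reliance on SQLite rowids and all identifier names are illustrative assumptions rather than the implementation used here.

import sqlite3

def relational_instance(conn, links, table, row_id, path=frozenset()):
    # Sketch of G(T, T_i): expand one instance by following every link of the
    # table and recursing into the associated instances, cutting a branch as
    # soon as a (table, row) pair reappears on the current recursion path.
    # links is an assumed encoding: table -> list of
    # (local_column, linked_table, linked_column) triples for R(T,.) and R^-1(T,.).
    node = (table, row_id)
    if node in path:                  # self-replicating loop: no new information
        return None
    path = path | {node}
    cur = conn.execute(f"SELECT * FROM {table} WHERE rowid = ?", (row_id,))
    columns = [c[0] for c in cur.description]
    row = dict(zip(columns, cur.fetchone()))
    instance = {"table": table, "standard": row, "instance_sets": {}}
    for local_col, linked_table, linked_col in links.get(table, []):
        # each link contributes one attribute of type instance-set
        rows = conn.execute(
            f"SELECT rowid FROM {linked_table} WHERE {linked_col} = ?",
            (row[local_col],)).fetchall()
        children = []
        for (rid,) in rows:
            child = relational_instance(conn, links, linked_table, rid, path)
            if child is not None:     # None marks a branch cut by loop detection
                children.append(child)
        instance["instance_sets"][(local_col, linked_table)] = children
    return instance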
Having an adequate way to handle attributes of type instance-set is the heart of the problem that must be tackled in order to come up with a relational learning algorithm that can exploit the relational structure we have sketched thus far. In the next section we will see how we can define kernel-based distances that operate on the relational structure that we have defined.
3 Kernels
A kernel is a symmetric function k : X × X → ℜ, where X is any set, such that for all x, y ∈ X, k(x, y) = ⟨φ(x), φ(y)⟩, where φ is a mapping from X to a feature space Φ endowed with an inner product (a pre-Hilbert space). We should note here that the definition of kernels does not require the input space X to be a vector space; it can be any kind of set which we can embed in the feature space Φ via the kernel. This property allows us to define kernels on any kind of structure, embedding these structures in a linear space. The attractiveness of kernels lies in the fact that one does not need to explicitly compute the mappings φ(x) in order to compute the inner products in the feature space.
Examples of kernels defined on vector spaces are the polynomial kernel, with parameters a ∈ ℜ (indicating whether to include the lower-order terms) and p ∈ N+ (the exponent of the kernel), and the Gaussian RBF kernel, with width parameter γ ∈ ℜ.
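For concreteness, one standard way of writing these two elementary kernels, consistent with the parameter roles above (the exact parameterization is an assumption of this note, not a quotation of the paper), is:

\[ k_P^{p,a}(x, y) = (\langle x, y \rangle + a)^p, \qquad k_G^{\gamma}(x, y) = \exp\left(-\gamma \lVert x - y \rVert^2\right) \]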
3.1 Kernels on relational instances
In order to define a kernel on the relational instances we distinguish two parts, T_i^s and T_i^set, in each relational instance T_i found in a table T. T_i^s denotes the vector of standard attributes {S^j} of T; let D_s = |{S^j}|. T_i^set denotes the vector of attributes of type instance-set, which for a table T are given by R(T, ·) ∪ R⁻¹(T, ·); let D_set = |R(T, ·) ∪ R⁻¹(T, ·)|.
Let T_i = (T_i^s, T_i^set) ∈ X = X_{S^j} × X_set, where X_set = X_set_1 × X_set_2 × ... × X_set_Dset, T_i^s ∈ X_{S^j} and T_i^set ∈ X_set. Given this formalism we define two relational kernels: the direct sum kernel, k_Σ(·,·), and the kernel derived by direct application of the R-Convolution kernel on the set X, k_ℜ(·,·). Since these kernels are defined over multi-relational instances they are computed following the same recursion path as the retrieval of a multi-relational instance, the only difference being that the recursion involves two kernels, the relational kernel and the set kernel, instead of only one.
The direct sum kernel is obtained by exploiting the fact that the direct sum of kernels is a kernel itself [13], which gives the following kernel on the set X (if |{S^j}| ≠ 0):

\[ k_\Sigma(T_i, T_j) = k_s(T_i^s, T_j^s) + \sum_{l=1}^{D_{set}} k_{set}(T_i^{set_l}, T_j^{set_l}) \]
where k_s(·,·) can be any type of elementary kernel defined on the set {S^j} of the standard attributes of T and k_set(·,·) is a kernel between sets which will be defined in Section 3.2. If |{S^j}| = 0 then the kernel defined over standard attributes vanishes and we obtain:

\[ k_\Sigma(T_i, T_j) = \sum_{l=1}^{D_{set}} k_{set}(T_i^{set_l}, T_j^{set_l}) \]
It is obvious that the value of k_Σ(·,·) is affected by the number of attributes of type instance-set, since it contains a sum of kernels defined on these attributes. If we were working with a single-table propositional problem this would not pose a problem. However, in the multi-relational context it is a problem, since the kernel on the main table is based on recursive computations of kernels in tables at the next levels, which can have varying D_s and D_set. In order to factor out that effect among different tables we use a normalized version of k_Σ defined as:

\[ k_\Sigma(T_i, T_j) = \frac{k_\Sigma(T_i, T_j)}{1 + D_{set}} \qquad (1) \]

which is also a kernel since 1/(1 + D_set) > 0 (we normalize the kernel by 1 + D_set because there are D_set kernels defined on sets and there is one kernel defined on the standard attributes).
An alternative kernel is derived by direct application of the R-Convolution kernel as described in [9]. The main idea underlying the R-Convolution kernel is that composite objects consist of simpler parts that are connected via a relation ℜ. Kernels on the composite objects can then be computed by combining kernels defined on their constituent parts. More formally, let x ∈ X be a composite object and x̄ = x_1, ..., x_D ∈ X_1 × ... × X_D its constituent parts. We can represent the relation "x̄ are the parts of x" by the relation ℜ on the set X_1 × X_2 × ... × X_D × X, where ℜ(x̄, x) is true iff x̄ are the parts of x. Let ℜ⁻¹(x) = {x̄ : ℜ(x̄, x)}; a composite object can have more than one possible decomposition. The R-Convolution kernel is then defined as:

\[ k_\Re(x, y) = \sum_{\bar{x} \in \Re^{-1}(x),\ \bar{y} \in \Re^{-1}(y)} \prod_{d=1}^{D} K_d(\bar{x}_d, \bar{y}_d) \qquad (2) \]
Since we defined only one way to decompose a relational instance T_i, the sum in equation (2) vanishes and we obtain only the product of kernels defined over attributes of type instance-set and kernels defined on standard attributes (only if standard attributes are present). In case |{S^j}| ≠ 0 the resulting R-Convolution kernel becomes:

\[ k_\Re(T_i, T_j) = k_s(T_i^s, T_j^s) \prod_{l=1}^{D_{set}} k_{set}(T_i^{set_l}, T_j^{set_l}) \]

otherwise, i.e. the table does not have standard attributes, we obtain:

\[ k_\Re(T_i, T_j) = \prod_{l=1}^{D_{set}} k_{set}(T_i^{set_l}, T_j^{set_l}) \]
Again, it is obvious that the value of k_ℜ(·,·) is affected by the number of attributes of type instance-set, since it contains a product of kernels defined on these attributes. In order to factor out the effect of the varying number of attributes across different tables we opted for:

\[ k_\Re(T_i, T_j) = \frac{k_\Re(T_i, T_j)}{\sqrt{k_\Re(T_i, T_i)\, k_\Re(T_j, T_j)}} \qquad (3) \]
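The product form and its feature-space normalization of equation (3) admit an equally small sketch, again under the same assumed pair representation as before and with a simple linear elementary kernel standing in for the recursive computation.

import math

def k_lin(x, y):
    # simple elementary kernel on the standard attributes (linear plus a constant)
    return sum(xi * yi for xi, yi in zip(x, y)) + 1.0

def k_conv(Ti, Tj, k_elem=k_lin):
    # R-Convolution style relational kernel: product over the instance-set
    # attributes, times the elementary kernel when standard attributes exist
    (s_i, sets_i), (s_j, sets_j) = Ti, Tj
    k = k_elem(s_i, s_j) if s_i and s_j else 1.0
    for A, B in zip(sets_i, sets_j):
        k *= sum(k_elem(a, b) for a in A for b in B)
    return k

def k_conv_normalized(Ti, Tj, k_elem=k_lin):
    # equation (3): normalization in the feature space
    return k_conv(Ti, Tj, k_elem) / math.sqrt(
        k_conv(Ti, Ti, k_elem) * k_conv(Tj, Tj, k_elem))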
It should be stressed that the above-mentioned kernels, together with the kernels defined over sets (Section 3.2), allow nesting of data types.
These two kernels, k_ℜ(·,·) and k_Σ(·,·), are the ones with which we are going to experiment and on which we base our distance computations. Given a kernel, it is straightforward to compute the distance in the feature space Φ in which the kernel computes the inner product as [7]:

\[ d(\phi(x), \phi(y)) = \sqrt{k(x, x) - 2k(x, y) + k(y, y)} \]

This is the final distance that we will use to perform classification.
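Given any of the kernels above, the induced distance and the resulting K-nearest-neighbor rule take only a few lines; the kernel argument k and the (instance, label) encoding of the training set are assumptions of this illustration. In practice the self-similarities k(x, x) can be cached, since they are reused in every distance involving x.

import math

def kernel_distance(x, y, k):
    # distance in the feature space induced by the kernel k
    return math.sqrt(k(x, x) - 2.0 * k(x, y) + k(y, y))

def knn_classify(query, training, k, K=1):
    # training is assumed to be a list of (instance, label) pairs
    nearest = sorted(training, key=lambda t: kernel_distance(query, t[0], k))[:K]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)     # majority vote over the K neighbors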
3.2 Kernels on Sets
To complete the definition of the kernel on the relational structure we have to provide a way to deal with attributes of type instance-set, which in fact means defining a kernel over sets of instances. This kernel can easily be derived from the definition of the R-Convolution kernels by letting ℜ in equation (2) be such that x ∈ ℜ⁻¹(X) ⇔ x ∈ X. Consequently we obtain:

\[ k_{set}(X, Y) = \sum_{x \in X,\ y \in Y} k_{\Sigma|\Re}(x, y) \qquad (4) \]

where k_{Σ|ℜ}(·,·) is either k_Σ(·,·) or k_ℜ(·,·). The computation of the final kernel is based on recursive alternating applications of k_Σ(·,·) or k_ℜ(·,·) and k_set(·,·).
Normalization. The procedure of computing the kernel on the attributes of type instance-set indicates that if the cardinalities of the sets vary considerably, sets with larger cardinality will dominate the solution. This leads us to the issue of normalizing the sum on the right-hand side of equation (4), so that we obtain:

\[ k_{norm}(X, Y) = \frac{k_{set}(X, Y)}{f_{norm}(X)\, f_{norm}(Y)} \qquad (5) \]

where f_norm(·) is a normalization function which is nonnegative and takes nonzero values. Various choices of f_norm(·) enable us to define different normalization methods [8]. By setting f_norm(X) = CARD(X) we obtain the Averaging normalization method (k_ΣA(·,·)), while by setting f_norm(X) = sqrt(k_set(X, X)) we get Normalization in the feature space (k_ΣFS(·,·)).
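A compact sketch of the set kernel and of the two normalizations follows; the element-level kernel k_elem stands in for the recursive k_Σ or k_ℜ of the full method, and the rest mirrors equations (4) and (5).

import math

def k_set(A, B, k_elem):
    # equation (4): sum of the element-level kernel over all pairs of set members
    return float(sum(k_elem(a, b) for a in A for b in B))

def k_set_averaging(A, B, k_elem):
    # equation (5) with f_norm(X) = CARD(X): the Averaging normalization
    return k_set(A, B, k_elem) / (len(A) * len(B))

def k_set_feature_space(A, B, k_elem):
    # equation (5) with f_norm(X) = sqrt(k_set(X, X)): normalization in the feature space
    return k_set(A, B, k_elem) / math.sqrt(k_set(A, A, k_elem) * k_set(B, B, k_elem))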
4 Experiments
We will compare the selected kernel-based distance measures on a number of relational problems: musk (version 1), diterpenes and mutagenesis. In the diterpenes dataset [5] the goal is to identify the type of diterpenoid compound skeletons given their 13C NMR spectrum. The musk dataset was described in [3]; here the goal is to predict the strength of synthetic musk molecules. We worked with version 1 of the dataset, which contains 47 musk molecules and 45 similar non-musk molecules. The mutagenesis dataset was introduced in [14]. The application task is the prediction of mutagenicity of a set of 230 aromatic and heteroaromatic nitro-compounds. We worked with the "regression friendly" version of the dataset. We defined two different versions of the learning problem. In version 1 the examined compounds (in the main table) consist of atoms (in the atom table) which constitute bonds (in the bond table); the recursion depth was limited to four. In version 2 the compounds consist of bonds while bonds consist of atoms; the recursion depth was limited to three. In both versions we limited the level of recursion because the algorithms are computationally expensive. Bonds are described by two links to specific entries in the atom table and by the type of the bond, while atoms are described by their charge (numeric value), type and name (e.g. N, F, S, O, etc.). All the results are given in Table 1.
The computation of the kernel-based distance measures for two of the datasets (diterpenes and musk) reduces to computing kernels on sets of points, thus requiring no recursion. In these cases the k_Σ(·,·) and k_ℜ(·,·) relational kernels are equivalent (up to a normalization term), so we report results only for the former. In the mutagenesis problem it is possible to move beyond a single-level comparison of the instances and have many levels of recursion. We report results for different set normalization schemes; the subscript A will denote averaging and the subscript FS feature-space normalization. In all experiments we limited ourselves to normalized polynomial, KP^{p,a}(·,·), and Gaussian RBF, KG^γ(·,·), elementary kernels. In the experiments we want to explore the effect of different elementary kernels, the effect of different set-kernel normalizations, as well as the relative performance of the k_Σ(·,·) and k_ℜ(·,·) kernels. We experiment with two different numbers of nearest neighbors, K = 1 and K = 3.
We estimate accuracy using ten-fold cross-validation and control for the statistical significance of observed differences using McNemar's test (significance level 0.05). We also establish a ranking scheme for the different kernel-based distance measures, based on their relative performance as determined by the results of the significance tests, as follows: on a given dataset, if kernel-based distance measure a is significantly better than b then a is credited with one point and b with zero points; if there is no significant difference then both are credited with half a point.
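The credit scheme can be stated concisely in code; the significantly_better predicate is assumed to wrap the McNemar test at the 0.05 level, and the names are illustrative.

from itertools import combinations

def rank_points(measures, significantly_better):
    # one point to the significantly better measure of a pair, half a point to
    # each measure when the test finds no significant difference
    points = {m: 0.0 for m in measures}
    for a, b in combinations(measures, 2):
        if significantly_better(a, b):
            points[a] += 1.0
        elif significantly_better(b, a):
            points[b] += 1.0
        else:
            points[a] += 0.5
            points[b] += 0.5
    return points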
5 Results
To compare the different elementary kernels we fix a dataset and average the ranks of KP and KG, ignoring their parameter settings, over the different numbers of neighbors.
Table 1. Accuracy and rank results on the benchmark datasets

                         diterpenes                   musk (version 1)
Elementary kernel        K = 1        K = 3           K = 1        K = 3
kΣA
  KP p=2,a=1             91.22 (5.5)  87.56 (5.5)     85.87 (3.5)  81.52 (3.5)
  KP p=3,a=1             91.75 (6)    88.09 (5.5)     88.04 (4.5)  81.52 (3.5)
  KG γ=0.01              86.69 (2.5)  84.90 (2.5)     83.70 (3.5)  84.78 (3.5)
  KG γ=0.001             83.30 (0.5)  82.90 (0.5)     81.52 (2.5)  84.78 (3.5)
kΣFS
  KP p=2,a=1             90.82 (4.5)  87.49 (5.5)     85.87 (3.5)  81.52 (3.5)
  KP p=3,a=1             91.68 (6)    87.89 (5.5)     88.04 (4.5)  82.61 (3.5)
  KG γ=0.01              86.76 (2.5)  84.76 (2.5)     83.70 (3.5)  84.78 (3.5)
  KG γ=0.001             83.03 (0.5)  82.83 (0.5)     81.52 (2.5)  84.78 (3.5)
Default Accuracy         29.81                        51.09

                         Mutagenesis (version 1)      Mutagenesis (version 2)
Elementary kernel        K = 1        K = 3           K = 1        K = 3
kΣA
  KP p=2,a=1             79.79 (3.5)  80.85 (3.5)     82.45 (3.5)  78.72 (3.5)
  KP p=3,a=1             78.19 (3.5)  79.25 (3.5)     82.98 (3.5)  77.66 (3.5)
  KG γ=0.01              79.79 (3.5)  79.79 (3.5)     83.51 (3.5)  80.32 (3.5)
  KG γ=0.001             80.85 (3.5)  78.72 (3.5)     84.04 (3.5)  80.32 (3.5)
kℜA
  KP p=2,a=1             79.25 (3.5)  81.38 (3.5)     85.64 (3.5)  78.19 (3.5)
  KP p=3,a=1             78.72 (3.5)  79.25 (3.5)     86.70 (3.5)  77.66 (3.5)
  KG γ=0.01              79.25 (3.5)  81.38 (3.5)     84.57 (3.5)  80.85 (3.5)
  KG γ=0.001             78.72 (3.5)  78.19 (3.5)     83.51 (3.5)  80.85 (3.5)
Default Accuracy         66.49
There is an advantage of the polynomial over the Gaussian RBF elementary kernel on the musk 1 and diterpenes datasets. For musk 1 the average rank of the polynomial kernels is 3.75 (3.25 for the Gaussian RBF), while for diterpenes it is 5.5 (1.5). For both formulations of mutagenesis the average rank of the polynomial kernels is 3.5 (3.5).
We performed more experiments than those listed in Table 1, systematically varying the parameters of the elementary kernels (p, a for the polynomial and γ for the Gaussian RBF). This did not have a significant effect on predictive performance, which was quite stable across different parameter values, indicating that the relational kernel is not very sensitive to the parameter settings of the elementary kernels and thus no extensive search over the parameter space is required.
The different normalization methods for kernels over sets also do not appear
to have an influence on the final results. For diterpenes Averaging had an average
rank of 3.56 over the different elementary kernels and Feature space normalization an average rank of 3.44. For musk 1 the corresponding figures were 3.5 and
3.5.
The final dimension of comparison is the relative performance of k_Σ(·,·) and k_ℜ(·,·). Here again the choice had little influence on the final results: for both formulations of the mutagenesis problem k_Σ(·,·) and k_ℜ(·,·) had an average rank of 3.5.
To situate the performance of our relational learner relative to other relational learning systems, we give some results on the same datasets in Table 2. All the results have been computed with ten-fold cross-validation. The results on diterpenes are taken from [12]. For musk 1 the TILDE result was the best result reported in [1], while for matchings the result is from [12]. For mutagenesis the results are taken from [1] on the B2 formulation of the problem, which corresponds to our version 2 of mutagenesis. SVM-MM and SVM-MI denote SVMs with the Minimax and Multi-Instance kernels, respectively; the results are taken from [8]. The KeS and DeS algorithms are described in [7]. From the results reported above we can see that our kernel-based learner compares favorably with the results achieved by most other state-of-the-art relational learners.
Table 2. Results of other relational learners.
PROGOL
RIBL
TILDE
MATCH.
SVM-MM
SVM-MI
KeS
DeS
Best Kernel
Diterpenes musk 1 mutagenesis 2
81.00
91.20
90.40
87.00
79.00
93.50
88.00
91.60
86.40
94.70
81.00
97.10
91.75
88.04
86.70
6 Related work
Apart from the work on R-Convolution kernels mentioned in Section 3, many novel kernels over structured data have recently been proposed.
[7] proposed a framework that allows the application of kernel methods to different kinds of structured data, e.g. trees, graphs and lists. The representation formalism used is that of individuals as terms of a typed λ-calculus. The composition of individuals from their parts is expressed in terms of a fixed type structure that is made up of function types, product types and type constructors. Function types are used to represent types corresponding to sets and multisets, product types represent types corresponding to fixed-size tuples, and type constructors are used for arbitrary structured objects such as lists, trees and graphs. Each type defines a set of terms that represent instances of that type. The main difference from first-order terms is that one may model sets and multisets. The adaptation of a given relational classification problem to that framework is not a trivial task, since kernels need to be defined on each of the function types, product types and type constructors, with the latter being a complex task, along with a kernel defined on the data constructors associated with each type constructor.
A family of kernels on relational problems represented using feature description logic concept graphs, a representation formalism based on description logics
and concept graphs, was introduced by [2]. Here instances can be any type of
structured data represented by a graph-based structure on which the kernel is
computed. One of the goals of this work was to limit the dimensionality of the
feature space to which the relational structures are implicitly mapped through
the use of the description language. This approach was shown to be beneficial, contrary to common belief, when compared with kernels that map to feature spaces of much higher dimensionality.
Apart from the work on rich complex structures, related work also includes kernels defined over sets; one of the most interesting was proposed in [11]. This kernel is defined as Bhattacharyya's affinity between Gaussian models fitted to the sets.
Another example of a kernel defined on sets is proposed in [15]. More precisely, this kernel is defined over pairs of matrices, based on the concept of principal angles between two linear subspaces. The principal angles can be obtained by computing only inner products between pairs of column vectors of the input matrices.
Kernels for Multi-Instance (MI) problems were proposed in [8]. The authors show that the introduced kernels separate positive and negative sets under natural assumptions, and they provide the necessary and sufficient conditions for this separation.
7 Discussion and Future Work
We proposed a kernel-based relational instance-based learner which, contrary to most previous relational approaches that rely on different forms of typed logic, builds on notions from relational algebra. Thus we cover what we see as an important gap in the current work on multi-relational learning, bringing it closer to the database community. Quoting [4]: '... by looking directly at "unpacked" databases, Multi-Relational Data Mining is closer to the "real world" of programmers formulating SQL queries than traditional KDD. This means that it has the potential for wider use than the latter, but only if we address the problem of expressing MRDM algorithms and operations in terms that are intuitive to SQL programmers and OLAP users'. It is our feeling that the current work makes one step in that direction.
The concept of kernels provided us with a natural and theoretically sound way to define distances over our relational structures. Central to the whole approach was the definition of appropriate kernels on the new type of attribute, i.e. the instance-set type. We believe that there is still a lot to be gained in classification performance if more refined kernels are used for this type of attribute. We have followed a rather simple approach where the kernel between two sets is simply the sum of the pairwise kernels defined over all pairs of elements of the two sets. A more elaborate approach would take into account only the kernels computed over specific pairs of elements, based on some mapping relation of one set to the other defined in the feature space. That mapping relation could be based on the notions of distance computation between sets given in [6, 12].
We experimented with two different elementary kernels which seem to affect
the performance of the final relational kernel. However, this performance seems
not to be sensitive to the parameter settings of these elementary kernels.
References
1. E. Bloedorn and R. Michalski. Data driven constructive induction. IEEE Intelligent
Systems, 13(2):30–37, 1998.
2. Chad Cumby and Dan Roth. On kernel methods for relational learning. In Proceedings of 20th International Conference on Machine Learning (ICML-2003), Washington, DC, 2003.
3. Thomas G. Dietterich, Richard H. Lathrop, and Tomas Lozano-Perez. Solving
the multiple instance problem with axis-parallel rectangles. Artificial Intelligence,
89(1-2):31–71, 1997.
4. Pedro Domingos. Prospects and challenges of multi-relational data mining.
SIGKDD, Explorations, 5(1):80–83, 2003.
5. Saso Dzeroski, Steffen Schulze-Kremer, Karsten R. Heidtke, Karsten Siems, and Dietrich Wettschereck. Applying ILP to diterpene structure elucidation from 13C NMR spectra. In Inductive Logic Programming Workshop, pages 41–54, 1996.
6. T. Eiter and H. Mannila. Distance measures for point sets and their computation. Acta Informatica, 34(2):109–133, 1997.
7. T. Gaertner, J. Lloyd, and P. Flach. Kernels and distances for structured data.
Machine Learning, 2004.
8. Thomas Gaertner, Peter Flach, Adam Kowalczyk, and Alex Smola. Multi-instance
kernels. In Claude Sammut, editor, ICML02. Morgan Kaufmann, July 2002.
9. David Haussler. Convolution kernels on discrete structures. Technical report, UC
Santa Cruz, 1999.
10. Tamas Horvath, Stefan Wrobel, and Uta Bohnebeck. Relational instance-based
learning with lists and terms. Machine Learning, 43(1/2):53–80, 2001.
11. R. Kondor and T. Jebara. A kernel between sets of vectors. In Proceedings of
20th International Conference on Machine Learning (ICML-2003), Washington,
DC, 2003.
12. Jan Ramon and Maurice Bruynooghe. A polynomial time computable metric between point sets. Acta Informatica, 37(10):765–780, 2001.
13. Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge,
MA, 2002.
14. A. Srinivasan, S. Muggleton, R.D. King, and M.J.E. Sternberg. Mutagenesis: ILP
experiments in a non-determinate biological domain. In S. Wrobel, editor, Proceedings of the 4th International Workshop on Inductive Logic Programming, volume
237, pages 217–232, 1994.
15. Lior Wolf and Amnon Shashua. Learning over sets using kernel principal angles.
Journal of Machine Learning Research, 4:913–931, October 2003.