A Calculus for Orchestration of Web Services⋆
April 4, 2007
Alessandro Lapadula, Rosario Pugliese and Francesco Tiezzi
Dipartimento di Sistemi e Informatica Università degli Studi di Firenze
{lapadula,pugliese,tiezzi}@dsi.unifi.it
Abstract. We introduce COWS (Calculus for Orchestration of Web Services),
a new foundational language for SOC whose design has been influenced by
WS-BPEL, the de facto standard language for orchestration of web services.
COWS combines in an original way a number of ingredients borrowed from well-known process calculi, e.g. asynchronous communication, polyadic synchronization, pattern matching, protection, delimited receiving and killing activities, while
resulting different from any of them. Several examples illustrate COWS peculiarities and show its expressiveness both for modelling imperative and orchestration constructs, e.g. web services, flow graphs, fault and compensation handlers,
and for encoding other process and orchestration languages. We also present an
extension of the basic language with timed constructs.
Table of Contents
1 Introduction
2 COWS: Calculus for Orchestration of Web Services
2.1 Syntax
2.2 Operational semantics
2.3 Examples
3 Modelling imperative and orchestration constructs
3.1 Imperative constructs
3.2 Fault and compensation handlers
3.3 Flow graphs
4 Examples
4.1 Rock/Paper/Scissors Service
4.2 Shipping Service
5 Encoding other formal languages for orchestration
5.1 Encoding Orc
5.2 Encoding webπ∞
6 Timed extension of COWS
7 Concluding remarks
References
⋆ This work has been supported by the EU project SENSORIA, IST-2005-016004.
1 Introduction
Service-oriented computing (SOC) is an emerging paradigm for developing loosely
coupled, interoperable, evolvable systems which exploits the pervasiveness of the Internet and its related technologies. SOC systems deliver application functionality as
services to either end-user applications or other services. These very features foster
a programming style based on service composition and reusability: new customized
service-based applications can be developed on demand by appropriately assembling
other existing, possibly heterogeneous, services.
Service definitions are used as templates for creating service instances that deliver
application functionality to either end-user applications or other instances. The loosely
coupled nature of SOC implies that the connection between communicating instances
cannot be assumed to persist for the duration of a whole business activity. Therefore,
there is no intrinsic mechanism for associating messages exchanged under a common
context or as part of a common activity. Even the execution of a simple request-response
message exchange pattern provides no built-in means of automatically associating the
response message with the original request. It is up to each single message to provide
a form of context, thus enabling services to associate the message with others. This
is achieved by embedding values in the message which, once located, can be used to
correlate the message with others logically forming the same stateful interaction ‘session’.
Early examples of technologies that are at least partly service-oriented are CORBA,
DCOM, J2EE and IBM WebSphere. A more recent successful instantiation of the SOC
paradigm are web services. These are autonomous, stateless, platform-independent and
composable computational entities that can be published, located and invoked through
the Web via XML messages. To support the web service approach, many new languages, most of which are based on XML, have been designed, like e.g. business coordination languages (such as WS-BPEL, WSFL, WSCI, WS-CDL and XLANG), contract languages (such as WSDL and SWS), and query languages (such as XPath and
XQuery). However, current software engineering technologies for development and
composition of web services remain at the descriptive level and do not integrate such
techniques as, e.g., those developed for component-based software development. Formal reasoning mechanisms and analytical tools are still lacking for checking that the
web services resulting from a composition meet desirable correctness properties and
do not manifest unexpected behaviors. The task of developing such verification methods is hindered also by the very nature of the languages used to program the services,
which usually provide many redundant constructs and support quite liberal programming styles.
In the last few years, many researchers have exploited the studies on process calculi
as a starting point to define a clean semantic model and lay rigorous methodological foundations for service-based applications and their composition. Process calculi,
being defined algebraically, are inherently compositional and, therefore, convey in a
distilled form the paradigm at the heart of SOC. This trend is witnessed by the many
process calculi-like formalisms for orchestration and choreography, the two most common forms of web services composition. Most of these formalisms, however, are not
suitable for the analysis of currently available SOC technologies in their entirety, because they only consider a few specific features separately, possibly by embedding ad hoc
constructs within some well-studied process calculus (see, e.g., the variants of π-calculus with transactions [2, 20, 21] and of CSP with compensation [9]).
Here, we follow a different approach and exploit WS-BPEL [1], the de facto standard language for orchestration of web services, to drive the design of a new process
calculus that we call COWS (Calculus for Orchestration of Web Services). Similarly to
WS-BPEL, COWS supports shared states among service instances, allows a same process to play more than one partner role and permits programming stateful sessions by
correlating different service interactions. However, COWS is intended to be a foundational
model, not specifically tied to current web services technology. Thus, some WS-BPEL
constructs, such as e.g. fault and compensation handlers and flow graphs, do not have
a precise counterpart in COWS, rather they are expressed in terms of more primitive
operators (see Section 3). Of course, COWS has taken advantage of previous work on
process calculi. Its design combines in an original way a number of constructs and features borrowed from well-known process calculi, e.g. asynchronous communication,
polyadic synchronization, pattern matching, protection, delimited receiving and killing
activities, while however resulting different from any of them.
The rest of the paper is organized as follows. Syntax and operational semantics
of COWS are defined in Section 2 where we also show many illustrative examples.
Section 3 presents the encodings of several imperative and orchestration constructs,
while Section 4 illustrates two example applications of our framework to web services.
Section 5 presents the encodings of two other orchestration languages, i.e. Orc [29] and
webπ∞ [22]. Section 6 introduces an extension of COWS with timed orchestration
constructs. This turns out to be a very powerful language that also permits expressing,
e.g., choices among alternative activities constrained by the expiration of some given
timeout. Section 7 concludes the paper by touching upon comparisons with related work
and directions for future work.
2 COWS: Calculus for Orchestration of Web Services
Before formally defining our language, we provide some insights on its main features.
The basic elements of COWS are partners and operations. Like channels in [11], a
communication endpoint is not atomic but results from the composition of a partner
name p and of an operation name o, which can also be interpreted as a specific implementation of o provided by p. This results in a very flexible naming mechanism that
allows the same service to be identified by means of different logical names (i.e. to play
more than one partner role as in WS-BPEL). For example, the following service
pslow • o?w̄.sslow + pfast • o?w̄.sfast
accepts requests for the same operation o through different partners with distinct access
modalities: process sslow implements a slower service provided when the request is processed through the partner pslow , while sfast implements a faster service provided when
the request arrives through pfast . Additionally, it allows the names composing an endpoint to be dealt with separately, as in a request-response interaction, where usually the
service provider knows the name of the response operation, but not the partner name of
the service it has to reply to. For example, the ping service p • oreq ?hxi.x • ores !h“I live”i
will know at run-time the partner name for the reply activity. This mechanism is also
sufficiently expressive to support implementation of explicit locations: a located service
can be represented by using the same partner for all its receiving endpoints. Partner and
operation names can be exchanged in communication, thus enabling many different interaction patterns among service instances. However, as in [26], dynamically received
names cannot form the communication endpoints used to receive further invocations.
COWS computational entities are called services. Typically, a service creates one
specific instance to serve each received request. An instance is composed of concurrent
threads that may offer a choice among alternative receive activities. Services should be
able to receive multiple messages in a statically unpredictable order and in such a way
that the first incoming message triggers creation of a service instance to which subsequent
messages are routed. Pattern-matching is the mechanism for correlating messages
that logically form the same interaction ‘session’ by means of their identical contents. It permits locating the data that identify service instances for the routing of
messages and is flexible enough to allow a single message to participate in multiple
interaction sessions, each identified by separate correlation values.
To model and update the shared state of concurrent threads within each service
instance, receive activities in COWS bind neither names nor variables. This is different
from most process calculi and somewhat similar to [30, 31]. In COWS, however, inter-service communication gives rise to substitutions of variables with values (as in [30]),
rather than to fusions of names (as in [31]). The range of application of the substitution
generated by a communication is regulated by the delimitation operator, which is the only
binder of the calculus. Additionally, this operator permits generating fresh names (as
the restriction operator of the π-calculus [28] does) and delimiting the field of action of the
kill activity, which can be used to force termination of whole service instances. Sensitive
code can however be protected from the effect of a forced termination by using the
protection operator (inspired by [8]).
2.1 Syntax
The syntax of COWS, given in Table 1, is parameterized by three countable and pairwise disjoint sets: the set of (killer) labels (ranged over by k, k0 , . . .), the set of values
(ranged over by v, v0 , . . . ) and the set of ‘write once’ variables (ranged over by x, y,
. . . ). The set of values is left unspecified; however, we assume that it includes the set of
names, ranged over by n, m, . . . , mainly used to represent partners and operations. The
language is also parameterized by a set of expressions, ranged over by e, whose exact
syntax is deliberately omitted; we just assume that expressions contain, at least, values
and variables. Notably, killer labels are not (communicable) values. Notationally, we
prefer letters p, p0 , . . . when we want to stress the use of a name as a partner, o, o0 , . . .
when we want to stress the use of a name as an operation. We will use w to range over
values and variables, u to range over names and variables, and d to range over killer
labels, names and variables.
Services are structured activities built from basic activities, i.e. the empty activity 0,
the kill activity kill(_), the invoke activity _ • _!_ and the receive activity _ • _?_ , by means
of prefixing _._ , choice _ + _ , parallel composition _ | _ , protection {|_|} , delimitation
[_] _ and replication ∗ _ . Notably, as in Lπ [26], communication endpoints of receive
s ::=                                      (services)
      kill(k)                              (kill)
    | u • u0 !ē                            (invoke)
    | g                                    (input-guarded choice)
    | s | s                                (parallel composition)
    | {|s|}                                (protection)
    | [d] s                                (delimitation)
    | ∗ s                                  (replication)

g ::=                                      (input-guarded choice)
      0                                    (nil)
    | p • o?w̄.s                           (request processing)
    | g + g                                (choice)

Table 1. COWS syntax
activities are identified statically because their syntax only allows using names and
not variables. The decreasing order of precedence among the operators is as follows:
monadic operators, choice and parallel composition.
Notation ¯· stands for tuples of objects, e.g. x̄ is a compact notation for denoting the
tuple of variables hx1 , . . . , xn i (with n ≥ 0). We assume that variables in the same tuple
are pairwise distinct. All notations shall extend to tuples component-wise. In the sequel,
we shall omit trailing occurrences of 0, writing e.g. p • o?w̄ instead of p • o?w̄.0, and use
[d1 , . . . , dn ] s in place of [d1 ] . . . [dn ] s.
The only binding construct is delimitation: [d] s binds d in the scope s. The occurrence of a name/variable/label is free if it is not under the scope of a binder. We denote
by fd(t) the set of names, variables and killer labels that occur free in a term t, and by
fk(t) the set of free killer labels in t. Two terms are alpha-equivalent if one can be obtained from the other by consistently renaming bound names/variables/labels. As usual,
we identify terms up to alpha-equivalence.
2.2 Operational semantics
The operational semantics of COWS is defined only for closed services, i.e. services
without free variables/labels (similarly to many real compilers, we consider terms with
free variables/labels as programming errors), but of course the rules also involve non-closed services (see e.g. the premises of rules (del∗ )). Formally, the semantics is given
in terms of a structural congruence and of a labelled transition relation.
The structural congruence ≡ identifies syntactically different services that intuitively
represent the same service. It is defined as the least congruence relation induced by a
given set of equational laws. We explicitly show in Table 2 the laws for replication,
protection and delimitation, while omitting the (standard) laws for the other operators, which state that parallel composition is commutative, associative and has 0 as identity element,
and that guarded choice enjoys the same properties and, additionally, is idempotent.
All the presented laws are straightforward. In particular, commutativity of consecutive
delimitations implies that the order among the di in [d1 ] . . . [dn ] s is irrelevant, thus in
∗ 0            ≡  0                                                     (repl1)
∗ s            ≡  s | ∗ s                                               (repl2)
{|0|}          ≡  0                                                     (prot1)
{| {|s|} |}    ≡  {|s|}                                                 (prot2)
{|[d] s|}      ≡  [d] {|s|}                                             (prot3)
[d] 0          ≡  0                                                     (delim1)
[d1 ] [d2 ] s  ≡  [d2 ] [d1 ] s                                         (delim2)
s1 | [d] s2    ≡  [d] (s1 | s2 )   if d ∉ fd(s1 ) ∪ fk(s2 )             (delim3)

Table 2. COWS structural congruence (excerpt of laws)
M(x, v) = {x 7→ v}          M(v, v) = ∅

M(w1 , v1 ) = σ1     M(w̄2 , v̄2 ) = σ2
──────────────────────────────────────
M((w1 , w̄2 ), (v1 , v̄2 )) = σ1 ⊎ σ2

Table 3. Matching rules
the sequel we may use the simpler notation [d1 , . . . , dn ] s. Notably, law (delim3 ) can be
used to extend the scope of names (like a similar law in the π-calculus), thus enabling
communication of restricted names, except when the argument d of the delimitation is
a free killer label of s2 (this avoids involving s1 in the effect of a kill activity inside s2 ).
To define the labelled transition relation, we need a few auxiliary functions. First,
we exploit a function [[ ]] for evaluating closed expressions (i.e. expressions without
variables): it takes a closed expression and returns a value. However, [[ ]] cannot be
explicitly defined because the exact syntax of expressions is deliberately not specified.
Then, through the rules in Table 3, we define the partial function M( , ) that permits performing pattern-matching on semi-structured data thus determining if a receive
and an invoke over the same endpoint can synchronize. The rules state that two tuples
match if they have the same number of fields and corresponding fields have matching
values/variables. Variables match any value, and two values match only if they are identical. When tuples w̄ and v̄ do match, M(w̄, v̄) returns a substitution for the variables in
w̄; otherwise, it is undefined. Substitutions (ranged over by σ) are functions mapping
variables to values and are written as collections of pairs of the form x 7→ v. Application
of substitution σ to s, written s · σ, has the effect of replacing every free occurrence of
x in s with v, for each x 7→ v ∈ σ, possibly using alpha conversion to avoid v being
captured by name delimitations within s. We use | σ | to denote the number of pairs
in σ and σ1 ⊎ σ2 to denote the union of σ1 and σ2 when they have disjoint domains.
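To make the matching mechanism concrete, the following Python fragment is a minimal sketch of the function M (our own illustration, not part of the calculus definition): variables are wrapped in a small class, values are plain Python data, and the result is either a substitution, rendered as a dictionary, or None when matching fails. It relies on the convention stated above that variables in the same tuple are pairwise distinct.

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    # a 'write once' COWS variable occurring among the receive parameters
    name: str

def match(params, values):
    """M(w-bar, v-bar): return a substitution (a dict) if the tuples match, None otherwise."""
    # tuples match only if they have the same number of fields
    if len(params) != len(values):
        return None
    subst = {}
    for w, v in zip(params, values):
        if isinstance(w, Var):
            subst[w.name] = v          # a variable matches any value
        elif w != v:
            return None                # two values match only if they are identical
    return subst

# M(<x, "id7">, <5, "id7">) = {x -> 5};  M(<x, "id7">, <5, "id8">) is undefined
print(match((Var("x"), "id7"), (5, "id7")))   # {'x': 5}
print(match((Var("x"), "id7"), (5, "id8")))   # None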
We also define a function, named halt( ), that takes a service s as an argument and
returns the service obtained by only retaining the protected activities inside s. halt( ) is
defined inductively on the syntax of services. The most significant case is halt({|s|}) =
{|s|}. In the other cases, halt( ) returns 0, except for parallel composition, delimitation
and replication operators, for which it acts as an homomorphism.
halt(kill(k)) = halt(u1 • u2 !ē) = halt(g) = 0
halt({|s|}) = {|s|}
halt(s1 | s2 ) = halt(s1 ) | halt(s2 )
halt([d] s) = [d] halt(s)
halt(∗ s) = ∗ halt(s)
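As a complement to the equations above, here is a small Python rendering of halt( ) over an ad hoc abstract syntax for services; the AST classes below are our own illustration (only the shape of terms matters for the definition) and are not part of the calculus.

from dataclasses import dataclass

@dataclass
class Kill:                 # kill(k)
    label: str

@dataclass
class Invoke:               # u • u'!e-bar
    endpoint: str
    args: tuple

@dataclass
class Choice:               # input-guarded choice g; Choice(()) plays the role of 0
    branches: tuple

@dataclass
class Protect:              # {| s |}
    body: object

@dataclass
class Parallel:             # s1 | s2
    left: object
    right: object

@dataclass
class Delim:                # [d] s
    name: str
    body: object

@dataclass
class Repl:                 # * s
    body: object

NIL = Choice(())            # the empty activity 0

def halt(s):
    """Return the service obtained by retaining only the protected activities inside s."""
    if isinstance(s, (Kill, Invoke, Choice)):
        return NIL                                    # unprotected activities are dropped
    if isinstance(s, Protect):
        return s                                      # protected code is kept unchanged
    if isinstance(s, Parallel):
        return Parallel(halt(s.left), halt(s.right))  # homomorphic on parallel composition
    if isinstance(s, Delim):
        return Delim(s.name, halt(s.body))            # ... on delimitation
    if isinstance(s, Repl):
        return Repl(halt(s.body))                     # ... and on replication
    raise ValueError(f"unexpected service term: {s!r}")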
Finally, we define a predicate, noc( , , , ), that takes a service s, an endpoint p • o,
a tuple of receive parameters w̄ and a matching tuple of values v̄ as arguments and holds
true if either there are no conflicting receives within s (namely, s cannot immediately
perform a receive activity matching v̄ over the endpoint p • o), or p • o?w̄ is the most
defined conflicting receive. The predicate exploits the notion of active context, namely
a service A with a ‘hole’ [[·]] such that, once the hole is filled with a service s, if the
resulting term A[[s]] is a COWS service then it is capable of immediately performing an
activity of s. Formally, active contexts are generated by the grammar:
A ::= [[·]] | A + g | g + A | A | s | s | A | {|A|} | [d] A | ∗ A
Now, predicate noc(s, p • o, w̄, v̄) can be defined as follows:
( s = A[[p • o?w̄0 .s0 ]] ∧ M(w̄0 , v̄) = σ ) ⇒ | M(w̄, v̄) | ≤ | σ |
where s = A[[p • o?w̄0 .s0 ]] means that s can be written as p • o?w̄0 .s0 filling the hole of
some active context A.
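Operationally, the predicate amounts to a ‘most defined receive wins’ check. The rough Python sketch below is our own simplification, not the formal definition: it presupposes that the parameter tuples of the receives immediately enabled over p • o in s have been collected beforehand, and it represents variables simply as strings beginning with '?'.

def subst_size(params, values):
    """|M(params, values)|: number of bound variables, or None if matching fails."""
    if len(params) != len(values):
        return None
    size = 0
    for w, v in zip(params, values):
        if isinstance(w, str) and w.startswith("?"):
            size += 1                  # a variable matches any value
        elif w != v:
            return None                # values must be identical
    return size

def noc(enabled_receives, w_bar, v_bar):
    """True if no receive enabled over the same endpoint is more defined than w_bar on v_bar."""
    own = subst_size(w_bar, v_bar)
    for params in enabled_receives:
        other = subst_size(params, v_bar)
        if own is not None and other is not None and other < own:
            return False               # a conflicting, more defined receive exists
    return True

# the receive <?x, "id7"> must yield priority to the more defined <"ack", "id7">
print(noc([("ack", "id7")], ("?x", "id7"), ("ack", "id7")))   # False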
The labelled transition relation −−α−→ is the least relation over services induced by the
rules in Table 4, where label α is generated by the following grammar:

α ::= †k  |  (p • o) C v̄  |  (p • o) B w̄  |  p • o bσc w̄ v̄  |  †
In the sequel, we use d(α) to denote the set of names, variables and killer labels occurring in α, except for α = p • o bσc w̄ v̄ for which we let d(p • o bσc w̄ v̄) = d(σ), where
d({x 7→ v}) = {x, v} and d(σ1 ⊎ σ2 ) = d(σ1 ) ∪ d(σ2 ). The meaning of labels is as follows:
†k denotes execution of a request for terminating a term from within the delimitation
[k] ; (p • o) C v̄ and (p • o) B w̄ denote execution of invoke and receive activities over the
endpoint p • o, respectively; p • o bσc w̄ v̄ (with σ ≠ ∅) denotes execution of a communication over p • o with receive parameters w̄ and matching values v̄ and with substitution
σ still to be applied; † and p • o b∅c w̄ v̄ denote computational steps corresponding to
the taking place of forced termination and communication (without pending substitutions),
respectively. Hence, a computation from a closed service s0 is a sequence of connected
transitions of the form
s0 −−α1−→ s1 −−α2−→ s2 −−α3−→ s3 . . .
where, for each i, αi is either † or p • o b∅c w̄ v̄ (for some p, o, w̄ and v̄); services si , for
each i, will be called reducts of s0 .
We comment on salient points. Activity kill(k) forces termination of all unprotected
parallel activities (rules (kill) and (parkill )) inside an enclosing [k] , that stops the killing
effect by turning the transition label †k into † (rule (delkill )). Existence of such delimitation is ensured by the assumption that the semantics is only defined for closed services.
(kill)      kill(k) −−†k−→ 0

(rec)       p • o?w̄.s −−(p • o)Bw̄−→ s

(inv)       [[ē]] = v̄
            ──────────────────────────
            p • o!ē −−(p • o)Cv̄−→ 0

(choice)    g1 −−α−→ s
            ──────────────────────────
            g1 + g2 −−α−→ s

(delsub)    s −−p • o bσ ⊎ {x 7→ v0 }c w̄ v̄−→ s0
            ─────────────────────────────────────────
            [x] s −−p • o bσc w̄ v̄−→ s0 · {x 7→ v0 }

(delkill)   s −−†k−→ s0
            ──────────────────────────
            [k] s −−†−→ [k] s0

(delpass)   s −−α−→ s0     d ∉ d(α)     (s = A[[kill(d)]] implies α = † or α = †k)
            ──────────────────────────────────────────────────────────────────────
            [d] s −−α−→ [d] s0

(prot)      s −−α−→ s0
            ──────────────────────────
            {|s|} −−α−→ {|s0 |}

(com)       s1 −−(p • o)Bw̄−→ s01     s2 −−(p • o)Cv̄−→ s02     M(w̄, v̄) = σ     noc(s1 | s2 , p • o, w̄, v̄)
            ─────────────────────────────────────────────────────────────────────────────────────────────
            s1 | s2 −−p • o bσc w̄ v̄−→ s01 | s02

(parconf)   s1 −−p • o bσc w̄ v̄−→ s01     noc(s2 , p • o, w̄, v̄)
            ──────────────────────────────────────────────────────
            s1 | s2 −−p • o bσc w̄ v̄−→ s01 | s2

(parkill)   s1 −−†k−→ s01
            ───────────────────────────────────
            s1 | s2 −−†k−→ s01 | halt(s2 )

(parpass)   s1 −−α−→ s01     α ≠ p • o bσc w̄ v̄  and  α ≠ †k
            ─────────────────────────────────────────────────
            s1 | s2 −−α−→ s01 | s2

(cong)      s ≡ s1     s1 −−α−→ s2     s2 ≡ s0
            ──────────────────────────────────────
            s −−α−→ s0

Table 4. COWS operational semantics
Sensitive code can be protected from killing by putting it into a protection {| |}; this way,
{|s|} behaves like s (rule (prot)). Similarly, [d] s behaves like s, except when the transition
label α contains d or when a kill activity for d is active in s and α does not correspond
to a kill activity (rule (del pass )): in such cases the transition should be derived by using
rules (delkill ) or (del sub ). In other words, kill activities are executed eagerly. A service
invocation can proceed only if the expressions in the argument can be evaluated (rule
(inv)). Receive activities can always proceed (rule (rec)) and can resolve choices (rule
(choice)). Communication can take place when two parallel services perform matching
receive and invoke activities (rule (com)). Communication generates a substitution that
is recorded in the transition label (for subsequent application), rather than a silent transition as in most process calculi. If more than one matching receive activity is ready to
process a given invoke, then only the most defined one (i.e. the receive that generates
the ‘smaller’ substitution) progresses (rules (com) and (parcon f )). This mechanism permits correlating different service communications, thus implicitly creating interaction
sessions and can be exploited to model the precedence of a service instance over the
corresponding service specification when both can process the same request. When the
delimitation of a variable x argument of a receive is encountered, i.e. the whole scope
of the variable is determined, the delimitation is removed and the substitution for x is
applied to the term (rule (del sub )). Variable x disappears from the term and cannot be reassigned a value. Execution of parallel services is interleaved (rule (par pass )), except when a
kill activity or a communication is performed. Indeed, the former must trigger termination of all parallel services (according to rule (parkill )), while the latter must ensure that
the receive activity with greater priority progresses (rules (com) and (parcon f )). The last
rule states that structurally congruent services have the same transitions.
2.3 Examples
We end this section with a few observations and examples aimed at clarifying the peculiarities of our formalism.
Communication of private names. Communication of private names is standard and
exploits scope extension as in π-calculus. Notably, receive and invoke activities can interact only if both are in the scopes of the delimitations that bind the variables argument
of the receive. Thus, to enable communication of private names, besides their scopes,
we must possibly extend the scopes of some variables, as in the following example:
[x] (p • o?hxi.s | s0 ) | [n] p • o!hni                      (n fresh)
   ≡  [n] ([x] (p • o?hxi.s | s0 ) | p • o!hni)
   ≡  [n] [x] (p • o?hxi.s | s0 | p • o!hni)
   −−p • o b∅c hxi hni−→  [n] (s | s0 ) · {x 7→ n}
Notice that the substitution {x 7→ n} is applied to all terms delimited by [x] , not only
to the continuation s of the service performing the receive. This is different from most
process calculi and accounts for the global scope of variables. This very feature permits
to easily model the delayed input of fusion calculus [31], which is instead difficult to
express in π-calculus.
Delimited killer labels. We require killer labels to be delimited to prevent a single service from being capable of stopping all the other parallel services, which would be unreasonable in
a service-oriented setting. Indeed, suppose a service s can perform a kill(k) with k undelimited in s. The killing effect could not be stopped, thus, due to a transition labelled
by †k, the whole service s would be terminated (except for protected activities). Moreover,
the effect of kill(k) could not be confined to s, thus, if there are other parallel services,
the whole service composition might be terminated by kill(k).
Protected kill activity. The following simple example illustrates the effect of executing
a kill activity within a protection block:
[k] ({|s1 | {|s2 |} | kill(k)|} | s3 ) | s4  −−†−→  [k] {| {|s2 |} |} | s4
where, for simplicity, we assume that halt(s1 ) = halt(s3 ) = 0. In essence, kill(k) terminates all parallel services inside delimitation [k] (i.e. s1 and s3 ), except those that are
protected at the same nesting level of the kill activity (i.e. s2 ).
Interplay between communication and kill activity. Kill activities can break communication, as the following example shows:
p • o!hni | [k] ([x] p • o?hxi.s | kill(k))  −−†−→  p • o!hni | [k] [x] 0
Communication can however be guaranteed by protecting the receive activity, as in
p • o!hni | [k] ([x] {|p • o?hxi.s|} | kill(k))
   −−†−→  p • o!hni | [k] [x] {|p • o?hxi.s|}
   ≡  [x] (p • o!hni | [k] {|p • o?hxi.s|})
   −−p • o b∅c hxi hni−→  [k] {|s · {x 7→ n}|}
Conflicting receive activities. This example shows a persistent service (implemented
by means of replication) that, once instantiated, enables two conflicting receives:

∗ [x] ( p1 • o?hxi.s1 | p2 • o?hxi.s2 ) | p1 • o!hvi | p2 • o!hvi
   −−p1 • o b∅c hxi hvi−→
∗ [x] ( p1 • o?hxi.s1 | p2 • o?hxi.s2 ) | s1 · {x 7→ v} | p2 • o?hvi.s2 · {x 7→ v} | p2 • o!hvi
Now, the persistent service and the created instance, being both able to receive the
same tuple hvi along the endpoint p2 • o, compete for the request p2 • o!hvi. However, our
(prioritized) semantics, in particular rule (com) in combination with rule (parcon f ), allows
only the existing instance to evolve (and, thus, prevents creation of a new instance):
∗ [x] ( p1 • o?hxi.s1 | p2 • o?hxi.s2 ) | s1 · {x 7→ v} | s2 · {x 7→ v}
Message correlation. Consider now uncorrelated receive activities executed by a same
instance, like in the following service:
∗ [x] p1 • o1 ?hxi.[y] p2 • o2 ?hyi.s
The fact that the messages for operations o1 and o2 are uncorrelated implies that, e.g., if
there are concurrent instances then successive invocations for a same instance can mix
up and be delivered to different instances. If one thinks it right, this behaviour can be
avoided simply by correlating successive messages by means of some correlation data,
e.g. the first received value as in the following service:
∗ [x] p1 • o1 ?hxi.[y] p2 • o2 ?hy, xi.s
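Outside the calculus, the routing discipline induced by correlation can be pictured as a table keyed by correlation values: the first message carrying a fresh value spawns an instance, and later messages with the same value join it. The Python sketch below is our own informal illustration of this idea, not part of COWS.

class Correlator:
    """Route incoming messages to service instances keyed by a correlation value."""
    def __init__(self, spawn):
        self.mailboxes = {}     # correlation value -> list of messages for that instance
        self.spawn = spawn      # callback invoked when a fresh correlation value shows up

    def deliver(self, corr_value, message):
        if corr_value not in self.mailboxes:
            self.mailboxes[corr_value] = []
            self.spawn(corr_value)                       # first message: create the instance
        self.mailboxes[corr_value].append(message)       # later messages join the same instance

router = Correlator(spawn=lambda c: print(f"new instance for session {c}"))
router.deliver(0, ("o1", "first"))
router.deliver(1, ("o1", "first"))
router.deliver(0, ("o2", "second"))                      # routed to the instance of session 0
print({k: len(v) for k, v in router.mailboxes.items()})  # {0: 2, 1: 1}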
Similarities with Lπ (localised π-calculus [26]). Lπ is the variant of π-calculus closest
to COWS. In fact, all Lπ constructs have a direct counterpart in COWS and it is indeed
possible to define an encoding that enjoys operational correspondence. More precisely,
the syntax of Lπ processes is
P ::= 0  |  a(b).P  |  āb  |  P | P  |  (νa)P  |  !a(b).P
with the constraint that in processes a(b).P and !a(b).P name b may not occur free in
P in input position. For simplicity’s sake, we define the encoding only for Lπ processes
such that their bound names are all distinct and different from the free ones (but it can
hhaiiS ∪{a} = xa • ya          hhaiiS = pa • oa   if a ∉ S

hh0iiS = 0
hhābiiS = hhaiiS !hhbiiS
hha(b).PiiS = [hhbiiS 0 ] hhaiiS 0 ?hhbiiS 0 .hhPiiS 0      where S 0 = S ∪ {b}
hh(νa)PiiS = [hhaiiS ] hhPiiS
hhP1 | P2 iiS = hhP1 iiS | hhP2 iiS
hh!a(b).PiiS = ∗ hha(b).PiiS

Table 5. Lπ encoding
be easily extended to deal with all processes). The crux of the encoding is mapping each
Lπ channel name into a COWS communication endpoint, which is composed of variables
if the channel name is bound by an input prefix (because in Lπ the name is used as
a placeholder and COWS distinguishes between names and variables), and of names
otherwise. The actual encoding function hh·iiS , that is parameterized by a set of names
S , is defined by induction on the syntax of Lπ processes by the clauses in Table 5 (where
the endpoint pa • oa is sometimes used in place of the tuple hpa , oa i). The encoding of
process P is given by service hhPiiS with S = ∅; as the encoding proceeds, S is used to
record the names that have been freed and were initially bound by an input prefix.
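For concreteness, the clauses of Table 5 can be transcribed as a small translator. The Python sketch below is our own illustration: Lπ terms are represented as tagged nested tuples, COWS terms are produced as plain strings, and the function mirrors the parameter S that records the names bound by an input prefix.

def enc_name(a, S):
    # encode channel name a: a pair of variables if a was input-bound, a pair of names otherwise
    return f"x_{a} • y_{a}" if a in S else f"p_{a} • o_{a}"

def enc(P, S=frozenset()):
    """Translate an Lpi process P into a COWS term (a string), following Table 5."""
    tag = P[0]
    if tag == "nil":                      # 0
        return "0"
    if tag == "out":                      # a<b>
        _, a, b = P
        return f"{enc_name(a, S)}!<{enc_name(b, S)}>"
    if tag == "in":                       # a(b).P
        _, a, b, Q = P
        S1 = S | {b}
        return f"[{enc_name(b, S1)}] {enc_name(a, S1)}?<{enc_name(b, S1)}>.{enc(Q, S1)}"
    if tag == "par":                      # P1 | P2
        _, P1, P2 = P
        return f"{enc(P1, S)} | {enc(P2, S)}"
    if tag == "new":                      # (nu a) P
        _, a, Q = P
        return f"[{enc_name(a, S)}] {enc(Q, S)}"
    if tag == "bangin":                   # !a(b).P
        _, a, b, Q = P
        return f"* {enc(('in', a, b, Q), S)}"
    raise ValueError(f"unknown Lpi constructor: {tag}")

# !a(b).b<c>   becomes   * [x_b • y_b] p_a • o_a?<x_b • y_b>.x_b • y_b!<p_c • o_c>
print(enc(("bangin", "a", "b", ("out", "b", "c"))))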
3 Modelling imperative and orchestration constructs
In this section, we present the encoding of some higher-level imperative and orchestration constructs (mainly inspired by WS-BPEL). The encodings illustrate the flexibility of
COWS and demonstrate the expressiveness of the chosen set of primitives.
In the sequel, we will write Zv̄ ≜ W to assign a symbolic name Zv̄ to the term W and
to indicate the values v̄ occurring within W. Thus, Zv̄ is a family of names, one for each
tuple of values v̄. We use n̂ to stand for the endpoint n p • no . Sometimes, we write n̂ for
the tuple hn p , no i and rely on the context to resolve any ambiguity.
3.1 Imperative constructs
Suppose we add a matching with assignment construct [w = e] to the COWS basic activities.
Hence, we can also write services of the form [w = e].s whose intended semantics is
that, if w and e do match, a substitution is returned that will eventually assign to the
variable in w the corresponding value of e, and service s can proceed. In COWS, this
meaning can be rendered through the following encoding
hh[w = e].sii = [m̂] (m̂!hei | m̂?hwi.hhsii)      (1)
for m̂ fresh. The new construct generalizes standard assignment because it allows values
to occur on the left of =, in which case it behaves as a matching mechanism. Similarly,
we can encode conditional choice as follows:
hhif (e) then {s1 } else {s2 }ii = [m̂] (m̂!hei | (m̂?htruei.hhs1 ii + m̂?hfalsei.hhs2 ii) ) (2)
where true and false are the values that can result from evaluation of e.
Like the receive activity, matching with assignment does not bind the variables on
the left of =, thus it cannot reassign a value to them if a value has already been assigned.
Therefore, the behaviour of matching with assignment may differ from standard assignment, even when the former uses only variables on the left of = as the latter does. For
example, activity [x = 1] will not necessarily generate substitution {x 7→ 1}. In fact,
when it will be executed, x could have been previously replaced by a value v in which
case execution of the activity corresponds to checking if v and 1 do match. For similar
reasons, activity [x = x + 1] does not have the effect of increasing the value of x by 1,
but that of checking if the value of x and that of x + 1 do match, which always fails.
Standard variables (that can be repeatedly assigned) can be rendered as services
providing ‘read’ and ‘write’ operations. When the service variable is initialized (i.e. the
first time the ‘write’ operation is used), an instance is created that is able to provide
the value currently stored. When this value must be updated, the current instance is
terminated and a new instance is created which stores the new value (like the memory
cell service of [3]). Here is the specification:
Varx ≜ [xv , xa ] x • owrite ?hxv , xa i.
[m̂] (m̂!hxv , xa i |
∗ [x, y] m̂?hx, yi.
(y!hi | [k] (∗ [y0 ] x • oread ?hy0 i.{|y0 !hxi|}
| [x0 , y0 ] x • owrite ?hx0 , y0 i .
(kill(k) | {|m̂!hx0 , y0 i|} ) ) ) )
where x is a public partner name. Service Varx provides two operations: oread , for getting the current value; owrite , for replacing the current value with a new one. To access
the service, a user must invoke these operations by providing a communication endpoint
for the reply and, in case of owrite , the value to be stored. The owrite operation can be
invoked along the public partner x, which corresponds, the first time, to initialization
of the variable. Thus, Varx uses the delimited endpoint m̂ in which to store the current
value of the variable. This last feature is exploited to implement further owrite operations
in terms of forced termination and re-instantiation. Delimitation [k] is used to confine
the effect of the kill activity to the current instance, while protection {| |} avoids forcing
termination of pending replies and of the invocation that will trigger the new instance.
Now, suppose temporarily that standard variables, ranged over by X, Y, . . ., may occur in the syntax of COWS anywhere a variable can. We can remove them by using the
following encodings. If e contains standard variables X1 , . . . Xn , we can let
hheiim̂,n̂ = [r̂1 , . . . , r̂n ] ( x1 • oread !r̂1 | · · · | xn • oread !r̂n |
[x1 , . . . , xn ] (r̂1 ?hx1 i. · · · .r̂n ?hxn i.
m̂!he·{Xi 7→ xi }i∈{1,..,n} , n̂i) )
where {Xi 7→ xi } denotes substitution of standard variable Xi with variable xi , endpoint
m̂ returns the result of evaluating e, and endpoint n̂ permits to receive an acknowledgment when the resulting value is assigned to a service variable (of course, we are
assuming that m̂, n̂, r̂i and xi are fresh). With this encoding of expression evaluation, the
encoding of matching with assignment becomes
hh[w = e].sii = [r̂, n̂] (hheiir̂,n̂ | r̂?hw, n̂i.hhsii)
hh[X = e].sii = [n̂] (hheiix•owrite ,n̂ | n̂?hi.hhsii)
where w is a value v or a variable x, while X is a standard variable. In the sequel, we
will write [w̄ = ē], where w̄ = hw1 , . . . , wn i and ē = he1 , . . . , en i, with w̄ and ē that may
contain standard variables, for the sequence of assignments [w1 = e1 ]. · · · .[wn = en ].
The encodings of the remaining constructs where standard variables may directly occur
are
hh[X] sii = [x] (Varx | hhsii)
hhX • u!ēii = [x, r̂] (x • oread !r̂ | r̂?hxi.hhx • u!ēii )
hhu • X!ēii = [x, r̂] (x • oread !r̂ | r̂?hxi.hhu • x!ēii )
hhu • u0 !he1 , . . . , en iii = [x1 , r̂1 , m̂1 , . . . , xn , r̂n , m̂n ] ( hhe1 iir̂1 ,m̂1 | . . . | hhen iir̂n ,m̂n |
r̂1 ?hx1 , m̂1 i. · · · .r̂n ?hxn , m̂n i.(u • u0 !hx1 , . . . , xn i) )
hhp • o?w̄.sii = [x1 , . . . , xn ] p • o?w̄·{Xi 7→ xi }i∈{1,..,n} .
[r̂1 , . . . , r̂n ] ( x1 • owrite !hx1 , r̂1 i | . . . | xn • owrite !hxn , r̂n i |
r̂1 ?hi. · · · .r̂n ?hi.hhsii )
if w̄ contains standard variables X1 , . . . Xn
This way, occurrences of standard variables can be completely removed. Sequential
composition can be encoded as in CCS [27, Chapter 8]; however, due to the asynchrony of invoke and kill activities, the notion of well-termination must be relaxed wrt
CCS. Firstly, we settle that services may indicate their termination by exploiting the invoke activity xdone • odone !hi, where xdone is a distinguished variable and odone is a distinguished name. Secondly, we say that a service s is well-terminating if, for every reduct
s0 of s and fresh partner p, s0 ·{xdone 7→ p} −−(p • odone )Chi−→ implies that if s0 ·{xdone 7→ p} −−α−→
then α = (p0 • o) C v̄, for some p0 , o and v̄. Notably, well-termination does not demand a
service to terminate, but only that whenever the service can perform activity p • odone !hi
and cannot perform any kill activities, then it terminates except for, possibly, some parallel pending invoke activities. As usual, the encoding relies on the assumption that all
calculus operators themselves (in particular, parallel composition) can be rendered as to
preserve well-termination. Finally, if we only consider well-terminating services, then,
for a fresh p, we can let:
hhs1 ; s2 ii = [p] (hhs1 · {xdone 7→ p}ii | p • odone ?hi.hhs2 ii)
Of course, iterative constructs can be encoded by exploiting the previous encodings.
In the sequel, we shall use the derived constructs with no more ado.
3.2 Fault and compensation handlers
In the SOC approach, fault handling is strictly related to the notion of compensation,
namely the execution of specific activities attempting to reverse the effects of previously executed activities. We consider here a minor variant of the WS-BPEL compensation protocol. To begin with, we extend the COWS syntax as shown in Table 6. The
s ::= . . .                                                       (services)
    | throw(φ)                                                    (fault generator)
    | undo(ı)                                                     (compensate)
    | [s : catch(φ1 ){s1 } : . . . : catch(φn ){sn } : sc ]ı      (scope)

Table 6. COWS plus fault and compensation handlers
scope activity [s : catch(φ1 ){s1 } : . . . : catch(φn ){sn } : sc ]ı permits explicitly grouping
activities together. The declaration of a scope activity contains a unique scope identifier
ı, a service s representing the normal behaviour, an optional list of fault handlers, and
a compensation handler sc . The fault generator activity throw(φ) can be used by a service to raise a fault signal φ. This signal will trigger execution of activity s0 , if a construct
of the form catch(φ){s0 } exists within the same scope. The compensate activity undo(ı)
can be used to invoke a compensation handler of an inner scope named ı that has already completed normally (i.e. without faulting). Compensation can only be invoked
from within a fault or a compensation handler. As in WS-BPEL, we fix two syntactic
constraints: handlers do not contain scope activities and for each undo(ı) occurring in
a service there exists at least an inner scope ı.
In fact, it is not necessary to extend COWS syntax because fault and compensation
handling can be easily encoded. The most interesting cases of the encoding are shown
in the lower part of Table 7 (in the remaining cases, the encoding acts as an homomorphism), where the killer labels used to identify scopes and the introduced partner names
are taken fresh for s, s1 , . . . , sn and sc . The two distinguished names o f ault and ocomp
denote the operations for receiving fault and compensation signals, respectively. We
are assuming that for each scope identifier named ı, the partner used to activate scope
compensation is pı .
The encoding hh·iik is parameterized by the identifier k of the closest enclosing scope,
if any. The parameter is used when encoding a fault generator, to launch a kill activity
that forces termination of all the remaining activities of the enclosing scope, and when
encoding a scope, to delimit the field of action of inner kill activities. The compensation
handler sc of scope ı is installed when the normal behaviour s successfully completes,
but it is activated only when signal pı • ocomp !hi occurs. Similarly, if during normal
execution a fault φ occurs, a signal p • o f ault !hφi triggers execution of the corresponding
fault handler (if any). Installed compensation handlers are protected from killing by
means of {| |}. Notably, both the compensate activity and the fault generator activity can
immediately terminate (thus enabling possible sequential compositions); this, of course,
does not mean that the corresponding handler is terminated.
Other kinds of faults could be handled similarly. For example, an invoke activity
inv(u1 • u2 , ē) that generates a fault φundef when its argument ē is undefined, could be
encoded as follows:
hhinv(u1 • u2 , ē)ii = [k] (u1 • u2 !ē | [true = (ē == undef )].hhthrow(φundef )iik ) |
hhcatch(φundef ){sundef }ii
where k is fresh and, for simplicity, we assume that the fault handler for φundef is defined
locally to the invoke activity.
hh[s : catch(φ1 ){s1 } : . . . : catch(φn ){sn } : sc ]ı iik =
[p] ( hhcatch(φ1 ){s1 }iik | . . . | hhcatch(φn ){sn }iik |
[kı ] ( hhsiikı ; ( xdone • odone !hi | [k0 ] {|pı • ocomp ?hi.hhsc iik0 |} ) ) )
hhcatch(φ){s}iik = p • o f ault ?hφi.[k0 ] hhsiik0
hhundo(ı)iik = pı • ocomp !hi | xdone • odone !hi
hhthrow(φ)iik = {|p • o f ault !hφi|} | kill(k)
Table 7. Encoding of fault and compensation handling
s ::= . . .  |  [ f l] ls  |  Σ i∈I pi • oi ?w̄i .si                            (services)

ls ::= ( jc) ⇒^sjf s ⇒ ( f l, ē)  |  s ⇒ ( f l, ē)  |  ls | ls                 (linked services)

jc ::= true  |  false  |  f l  |  ¬ jc  |  jc ∨ jc  |  jc ∧ jc                 (join conditions)

sjf ::= yes  |  no                                                             (supp. join failure)

Table 8. COWS plus flow graphs
3.3 Flow graphs
In business process management, flow graphs¹ provide a direct and intuitive way to
structure workflow processes, where activities executed in parallel can be synchronized
by settling dependencies, called (flow) links, among them. At the beginning of a parallel
execution, all involved links are inactive and only those activities with no synchronization dependencies can execute. Once all incoming links of an activity are active (i.e.,
they have been assigned either a positive or negative state), a guard, called join condition, is evaluated. When an activity terminates, the status of the outgoing links, which
can be positive, negative or undefined, is determined through evaluation of a transition
condition. When an activity in the flow graph cannot execute (i.e., the join condition
fails), a join failure fault is emitted to signal that some activities have not completed.
An attribute called ‘suppress join failure’ can be set to yes to ensure that join condition
failures do not throw the join failure fault (this way obtaining the so-called Dead-Path
Elimination effect [1]).
To express the constructs above, we extend the syntax of COWS as illustrated in the
upper part of Table 8. A flow graph activity [ f l] ls is a delimited linked service, where
the activities within ls can synchronize by means of the flow links in f l, rendered as
(boolean) variables. A linked service is a service equipped with a set of incoming flow
links that form the join condition, and a set of outgoing flow links that represent the
transition condition. Incoming flow links and join condition are denoted by ( jc) ⇒^sjf.
Outgoing links are represented by ⇒ ( f li∈I , ēi∈I ), where each pair ( f li , ei ) is composed
of a flow link f li and the corresponding transition (boolean) condition ei . Attribute sjf
permits suppressing possible join failures. Input-guarded summation replaces binary
choice, because we want all the branches of a multiple choice to be considered at once.

¹ Here, we refer to the corresponding notion of WS-BPEL rather than to similar synchronization
constructs of some process calculi (see e.g. [19]) or to the homonymous graphical notation
used for representing processes and their interconnection structure (see, e.g., [27, 28]).

hh[ f l] lsii = [ f l] hhlsii
hhls1 | ls2 ii = hhls1 ii | hhls2 ii
hhs ⇒ ( f l, ē)ii = hhsii; [ f l = ē]
hh( jc) ⇒^yes s ⇒ ( f l, ē)ii = if ( jc) then {hhsii; [ f l = ē]} else {[outLinkOf (s) = false]}
hh( jc) ⇒^no s ⇒ ( f l, ē)ii = if ( jc) then {hhsii; [ f l = ē]} else {throw(φ join f )}
hhΣ i∈{1..n} pi • oi ?w̄i .si ii = p1 • o1 ?w̄1 .[∪ j∈{2..n} outLinkOf (s j ) = false].hhs1 ii
                                 + . . . + pn • on ?w̄n .[∪ j∈{1..n−1} outLinkOf (s j ) = false].hhsn ii

Table 9. Encoding of flow graphs
Again, we show that in fact it is not necessary to extend the syntax because flow
graphs can be easily encoded by relying on the capability of COWS of modelling a
state shared among a group of activities. The most interesting cases of the encoding
are shown in Table 9. The encoding exploits the auxiliary function outLinkOf (s), that
returns the tuple of outgoing links in s and is inductively defined as follows:
outLinkOf ([ f l] ls) = outLinkOf (ls)
outLinkOf (Σ i∈{1..n} pi • oi ?w̄i .si ) = outLinkOf (s1 ), . . . , outLinkOf (sn )
outLinkOf (( jc) ⇒^sjf s ⇒ ( f l, ē)) = outLinkOf (s) , f l
outLinkOf (s ⇒ ( f l, ē)) = outLinkOf (s) , f l
outLinkOf (ls1 | ls2 ) = outLinkOf (ls1 ) , outLinkOf (ls2 )
outLinkOf (0) = outLinkOf (kill(k)) = outLinkOf (u1 • u2 !ē) = hi
outLinkOf (s1 | s2 ) = outLinkOf (s1 ) , outLinkOf (s2 )
outLinkOf ({|s|}) = outLinkOf ([d] s) = outLinkOf (∗ s) = outLinkOf (s)
Flow graphs are rendered as delimited services, while flow links are rendered as variables. A join condition is encoded as a boolean condition within a conditional construct,
where the transition conditions are rendered as the assignment [ f l = ē]. In case attribute
‘suppress join failure’ is set to no, a join condition failure produces a fault signal that
can be caught by a proper fault handler. Choice among (linked) services is implemented
in such a way that, when a branch is selected, the links outgoing from the activities of
the discarded branches are set to false. The same rationale underlies the new encoding
of conditional choice
hhif (e) then {s1 } else {s2 }ii = if (e) then {[outLinkOf (s2 ) = false].hhs1 ii}
else {[outLinkOf (s1 ) = false].hhs2 ii}
4 Examples
In this section we show two applications of our framework. The former is an example
of a web service inspired by the well-known game Rock/Paper/Scissors, the latter is an
example of a shipping service from the WS-BPEL specification [1].
4.1 Rock/Paper/Scissors Service
Consider the following web service:
rps ≜ ∗ [xchamp res , xchall res , xid , xthr 1 , xthr 2 , xwin ]
(pchamp • othrow ?hxchamp res , xid , xthr 1 i.xchamp res • owin !hxid , xwin i |
pchall • othrow ?hxchall res , xid , xthr 2 i.xchall res • owin !hxid , xwin i |
Assign)
The task of service rps is to collect two throws, stored in xthr 1 and xthr 2 , from two
different participants, the current champion and the challenger, assign the winner to
xwin and then send the result back to the two players. The service receives throws from
the players via two distinct endpoints, characterized by operation othrow and partners
pchamp and pchall . The service is of kind “request-response” and is able to serve challenges coming from any pair of participants. The players are required to provide the
partner names, stored in xchamp res and xchall res , which they will use to receive the result.
A challenge is uniquely identified by a challenge-id, here stored in xid , that the partners
need to provide when sending their throws. Partner throws arrive randomly. Thus, when
a throw is processed, for instance the challenging one, it must be checked whether a service
instance with the same challenge-id already exists. We assume that Assign implements the rules of the game and thus, by comparing xthr 1 and xthr 2 , assigns the winner
of the match by producing the assignment [xwin = xchamp res ] or [xwin = xchall res ]. Thus,
we have
Assign ≜ if (xthr 1 == “rock”&xthr 2 == “scissors”)
then { [xwin = xchamp res ] }
else { if (xthr 1 == “rock”&xthr 2 == “paper”)
then { [xwin = xchall res ] }
else { . . .
}
}
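Assign is left abstract above: it only has to compare the two throws according to the usual rules of the game. The following throwaway Python rendering of that decision logic is our own illustration; the paper elides the remaining cases, so the handling of draws is an assumption of ours.

BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def winner(champ_throw, chall_throw):
    """Return 'champ', 'chall' or 'draw' according to the Rock/Paper/Scissors rules."""
    if champ_throw == chall_throw:
        return "draw"
    return "champ" if BEATS[champ_throw] == chall_throw else "chall"

print(winner("rock", "scissors"))   # champ
print(winner("rock", "paper"))      # chall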
A partner may simultaneously play multiple challenges by using different challenge
identifiers as a means to correlate messages received from the server. E.g., the partner
(pchall • othrow !hp0chall , 0, “rock”i | [x] p0chall • owin ?h0, xi.s0 ) |
(pchall • othrow !hp0chall , 1, “paper”i | [y] p0chall • owin ?h1, yi.s1 )
is guaranteed that the returned results will be correctly delivered to the corresponding
continuations.
Fig. 1. Graphical representation of a Rock/Paper/Scissors service scenario
Let us now consider the following match of rock/paper/scissors identified by the
correlation value 0:
s ≜ rps | pchamp • othrow !hp0champ , 0, “rock”i | [x] p0champ • owin ?h0, xi.schamp
| pchall • othrow !hp0chall , 0, “scissors”i | [y] p0chall • owin ?h0, yi.schall
where p0champ and p0chall denote the players’ partner names. Figure 1 shows a customized
UML sequence diagram depicting the above scenario. The champion and a challenger
participate in the match, play their throws (i.e. “rock” and “scissors”), wait for the
resulting winner, and (possibly) use this result in their continuation processes (i.e. schamp
and schall ). Here is a computation produced by selecting the champion’s throw:
s −−pchamp • othrow b∅c hxchamp res , xid , xthr 1 i hp0champ , 0, “rock”i−→
rps | [xchall res , xthr 2 , xwin ] ( p0champ • owin !h0, xwin i |
pchall • othrow ?hxchall res , 0, xthr 2 i.xchall res • owin !h0, xwin i |
Assign · {xchamp res 7→ p0champ , xid 7→ 0, xthr 1 7→ “rock”} )
| [x] p0champ • owin ?h0, xi.schamp
| pchall • othrow !hp0chall , 0, “scissors”i | [y] p0chall • owin ?h0, yi.schall ≜ s0
Below, the challenger’s throw is consumed by the existing instance:
s0 −−pchall • othrow b∅c hxchall res , 0, xthr 2 i hp0chall , 0, “scissors”i−→
rps | [xwin ] ( p0champ • owin !h0, xwin i | p0chall • owin !h0, xwin i |
Assign · {xchamp res 7→ p0champ , xid 7→ 0, xthr 1 7→ “rock”,
xchall res 7→ p0chall , xthr 2 7→ “scissors”} )
| [x] p0champ • owin ?h0, xi.schamp
| [y] p0chall • owin ?h0, yi.schall
In the computation above, rules (com) and (parcon f ) allow only the existing instance to
evolve (thus, creation of a new conflicting instance is avoided). Once Assign determines
that pchamp won, the substitutive effects of communication transform the system as
follows:
s00 ≜ rps | p0champ • owin !h0, pchamp i | p0chall • owin !h0, pchamp i
      | [x] p0champ • owin ?h0, xi.schamp
      | [y] p0chall • owin ?h0, yi.schall
At the end, the name of the resulting winner is sent to both participants as shown by the
following computation:
s00 −−p0champ • owin b∅c h0, xi h0, pchamp i−→ −−p0chall • owin b∅c h0, yi h0, pchamp i−→
rps | schamp · {x 7→ pchamp } | schall · {y 7→ pchamp }
4.2 Shipping Service
We consider an extended version of the shipping service described in the official specification of WS-BPEL [1] (Section 15.1). This example covers most of the language
features we are interested in, including correlation sets, variables, flow control structures, fault and compensation handling. We assume that the reader is already familiar
with the main features of WS-BPEL. This service handles the shipment of orders. From
the service point of view, orders are composed of a number of items. The shipping service offers two types of shipment: shipments where the items are held and shipped
together and shipments where the items are shipped piecemeal until all of the order
is fulfilled. Figure 2 illustrates a scenario where the shipping service interacts with a
customer service. The shipping service is specified in COWS as follows:
∗ [xid , xcomplete , xitemsT ot ]
p shipS erv • oreq ?hxid , xcomplete , xitemsT ot i.
if (xcomplete ) then { pcust • onotice !hxid , xitemsT ot i }
else { [ s : catch(φnoItems ){ undo(ı price ) | pcust • oerr !hxid , “sorry”i } : 0 ]ı }
where the normal behaviour s is
[xratio ] ( [ s shipPriceCalc : s shipPriceComp ]ı price ;
[owhile ] ( p shipS erv • owhile !h0i |
∗ [xc ] p shipS erv • owhile ?hxc i.
if (xc < xitemsT ot ) then { [xitemsCount ]
[xitemsCount = rand()].
if (xitemsCount ≤ 0) then {
[xratio = xc / xitemsT ot ].throw(φnoItems ) }
else { pcust • onotice !hxid , xitemsCount i |
p shipS erv • owhile !hxc + xitemsCount i } }
else {0}
)
)
Fig. 2. Graphical representation of a shipping service scenario
p shipS erv is the partner associated with the shipping service, oreq is the operation used
to receive the shipping request, and hxid , xcomplete , xitemsT ot i is the tuple of variables used
for the request shipping message: xid stores the order identifier, that is used to correlate
the ship notice(s) with the ship order, xcomplete stores a boolean indicating whether the
order is to be shipped complete or not, and xitemsT ot stores the total number of items in
the order. Shipping notices and error messages to customers are sent using partner pcust
and operations onotice and oerr , respectively. A notice message is a tuple composed of the
order identifier and the number of items in the shipping notice. When partial shipment
is acceptable, xc is used to record the number of items already shipped. Replication and
the internal operation owhile are used to model iteration.
Our example extends that in [1] by allowing the service to generate a fault in case
the shipping company has run out of stock (this is modelled by function rand()
returning an integer less than or equal to 0). The fault is handled by sending an error message
to the customer and by compensating the inner scope ı price , which has already completed
successfully. Function rand() returns a random integer and represents an internal interaction with a back-end system. For the sake of simplicity, we do not describe
this interaction. Moreover, we do not show services s shipPriceCalc and s shipPriceComp . Basically, the former calculates the shipping price according to the value assigned to xitemsT ot
and sends the result to the accounts department. The latter is the corresponding compensation activity, that sends information about the non-shipped items to the accounts
department and sends a refund to the customer according to the ratio (stored in xratio )
between the shipped items (stored in xc ) and the required ones (stored in xitemsT ot ).
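As a summary of the iteration that replication and owhile encode, here is a plain Python rendering of the piecemeal-shipping control flow. It is an approximation of ours rather than the paper's semantics: the callbacks, the clamping of the batch size and the range chosen for the stand-in rand() are all assumptions made only for the sake of a sensible run.

import random

def ship_order(order_id, complete, items_total, notify, report_error):
    """One notice for complete shipments; otherwise partial notices until the order is
    fulfilled, with an empty stock (rand() <= 0) triggering the 'no items' fault."""
    if complete:
        notify(order_id, items_total)
        return
    shipped = 0
    while shipped < items_total:
        batch = random.randint(-1, 5)               # stand-in for the back-end rand()
        if batch <= 0:
            report_error(order_id, "sorry")         # corresponds to throw(phi_noItems)
            return
        batch = min(batch, items_total - shipped)   # clamp so we never overshoot the order
        notify(order_id, batch)
        shipped += batch

ship_order("order42", False, 10,
           notify=lambda oid, n: print(f"notice {oid}: {n} items shipped"),
           report_error=lambda oid, msg: print(f"error {oid}: {msg}"))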
(SiteCall)   S (v) −−!v0−↪ 0

(Def)        E(x) ≜ f
             ──────────────────────────────
             E(w) −−τ−↪ f · {x 7→ w}

(Sym1)       f −−l−↪ f 0
             ──────────────────────────────
             f | g −−l−↪ f 0 | g

(Sym2)       g −−l−↪ g0
             ──────────────────────────────
             f | g −−l−↪ f | g0

(Seq1)       f −−τ−↪ f 0
             ──────────────────────────────
             f > x > g −−τ−↪ f 0 > x > g

(Seq2)       f −−!v−↪ f 0
             ──────────────────────────────────────────────
             f > x > g −−τ−↪ ( f 0 > x > g) | g · {x 7→ v}

(Asym1)      g −−l−↪ g0
             ──────────────────────────────────────────────
             g where x :∈ f −−l−↪ g0 where x :∈ f

(Asym2)      f −−τ−↪ f 0
             ──────────────────────────────────────────────
             g where x :∈ f −−τ−↪ g where x :∈ f 0

(Asym3)      f −−!v−↪ f 0
             ──────────────────────────────────────────────
             g where x :∈ f −−τ−↪ g · {x 7→ v}

Table 10. Orc asynchronous operational semantics
5 Encoding other formal languages for orchestration
We present here the encodings in COWS of two orchestration languages: Orc and webπ∞ . The former language has already proved to be capable of expressing the most
common workflow patterns; the latter one turned out to be suitable to model in a quite
direct way the semantics of the whole of WS-BPEL [22, 23].
5.1 Encoding Orc
We present here the encoding of Orc [29], a recently proposed task orchestration language with applications in workflow, business process management, and web service
orchestration. We will show that the encoding enjoys a property of operational correspondence. This is another sign of COWS expressiveness because it is known that Orc
can express the most common workflow patterns identified in [32]. Orc syntax is:
(Expressions)   f , g ::= 0  |  S (w)  |  E(w)  |  f > x > g  |  f | g  |  g where x :∈ f
(Parameters)    w ::= x  |  v
where S ranges over site names, E over expression names, x over variables, and v over
values. Each expression name E has a unique declaration of the form E(x) ≜ f . Expressions can be composed by means of sequential composition · > x > ·, symmetric
parallel composition · | ·, and asymmetric parallel composition · where x :∈ ·, starting from the elementary expressions 0, S (w) (site call) and E(w) (expression call). The
variable x is bound in g for the expressions f > x > g and g where x :∈ f . Variable x is
free in f if it is not bound in f . We use f v( f ) to denote the set of variables which occur
free in f .
21
Evaluation of expressions may call a number of sites and returns a (possibly empty)
stream of values. The asynchronous operational semantics of Orc is given by the labelled transition relation ↪ defined in Table 10, where label τ indicates an internal
event while label !v indicates the value v resulting from evaluating an expression. A site
call can progress only when the actual parameter is a value (rule (SiteCall)); it elicits one
response. While site calls use a call-by-value mechanism, expression calls use a call-by-name mechanism (rule (Def)), namely the actual parameter replaces the formal one
and then the corresponding expression is evaluated. Symmetric parallel composition
f | g consists of concurrent evaluations of f and g (rules (Sym1) and (Sym2)). Sequential composition f > x > g activates a concurrent copy of g with x replaced by v, for
each value v returned by f (rules (Seq1) and (Seq2)). Asymmetric parallel composition
g where x :∈ f prunes threads selectively. It starts in parallel both f and the part of
g that does not need x (rules (Asym1) and (Asym2)). The first value returned by f is assigned to x and the continuation of f and all its descendants are then terminated (rule
(Asym3)).
Notably, the presented operational semantics slightly simplifies that described in
[29], in a way that does not misrepresent the properties of the encoding. Indeed, in the
original semantics, a site call involves three steps: invocation of the site, response from
the site, and publication of the result. Here, instead, a site call is performed in one step,
that corresponds to the immediate publication of the result.
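As a small illustration of these rules (the derivation below is ours; S and T are generic site names and v′, v′′ the responses elicited by the two calls), consider the expression S(v) > x > T(x):

    S(v) ↪[!v′] 0                                              by (SiteCall)
    S(v) > x > T(x) ↪[τ] (0 > x > T(x)) | T(v′)                by (Seq2)
    (0 > x > T(x)) | T(v′) ↪[!v′′] (0 > x > T(x)) | 0          by (SiteCall) and (Sym2)

The value published by the left component is consumed to instantiate the right one, and only the value produced by T is published by the whole expression.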
The encoding of Orc expressions in COWS exploits the function ⟨⟨·⟩⟩_r̂ shown in Table 11. The function is defined by induction on the syntax of expressions and is parameterized by the communication endpoint r̂ used to return the result of expression evaluation. Thus, a site call is rendered as an invoke activity that sends a pair, made of the parameter of the invocation and the endpoint for the reply, along the endpoint Ŝ corresponding to the site name S. Expression call is rendered similarly, but we need two invoke activities: Ê!⟨r̂, r̂′⟩ activates a new instance of the body of the declaration, while z!⟨w⟩ sends the value of the actual parameter (when this value becomes available) to the created instance, by means of a private endpoint stored in z and received from the encoding of the corresponding expression declaration along the private endpoint r̂′ previously sent. Sequential composition is encoded as the parallel composition of the two components sharing a delimited endpoint, where a new instance of the component on the right is created every time the component on the left returns a value along the shared endpoint. Symmetric parallel composition is encoded as parallel composition, where the values produced by the two components are sent along the same return endpoint. Finally, asymmetric parallel composition is encoded in terms of parallel composition in such a way that, whenever the encoding of f returns its first value, this value is passed to the encoding of g and a kill activity is enabled. Due to its eager semantics, the kill will terminate what remains of the term corresponding to the encoding of f.
Moreover, for each site S, we define the service:

    ∗ [x, y] Ŝ?⟨x, y⟩.y!⟨e_S^x⟩                                  (1)

that receives along the endpoint Ŝ a value (stored in x) and an endpoint (stored in y) to be used to send back the result, and returns the evaluation of e_S^x, an unspecified expression corresponding to S and depending on x.
⟨⟨0⟩⟩_r̂ = 0
⟨⟨E(w)⟩⟩_r̂ = [r̂′] (Ê!⟨r̂, r̂′⟩ | [z] r̂′?⟨z⟩.z!⟨w⟩)
⟨⟨S(w)⟩⟩_r̂ = Ŝ!⟨w, r̂⟩
⟨⟨f > x > g⟩⟩_r̂ = [r̂f] (⟨⟨f⟩⟩_r̂f | ∗ [x] r̂f?⟨x⟩.⟨⟨g⟩⟩_r̂)
⟨⟨f | g⟩⟩_r̂ = ⟨⟨f⟩⟩_r̂ | ⟨⟨g⟩⟩_r̂
⟨⟨g where x :∈ f⟩⟩_r̂ = [r̂f, x] ( ⟨⟨g⟩⟩_r̂ | [k] ( ⟨⟨f⟩⟩_r̂f | r̂f?⟨x⟩.kill(k) ) )

Table 11. Orc encoding
Similarly, for each expression declaration E(x) ≜ f we define the service:

    ∗ [y, z] Ê?⟨y, z⟩.[r̂] (z!⟨r̂⟩ | [x] (r̂?⟨x⟩ | ⟨⟨f⟩⟩_y))        (2)

Here, the received value (stored in x) is processed by the encoding of the body of the declaration, which is activated as soon as the expression is called.
Finally, the encoding of an Orc expression f, written [[ f ]]_r̂, is the parallel composition of ⟨⟨f⟩⟩_r̂ and of a service of the form (1) or (2) for each site or expression called in f, in any of the expressions called in f, and so on recursively.
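As a purely illustrative unfolding of these definitions (S, v and r̂ are generic), for a single site call we obtain

    [[ S(v) ]]_r̂ = ⟨⟨S(v)⟩⟩_r̂ | ∗ [x, y] Ŝ?⟨x, y⟩.y!⟨e_S^x⟩ = Ŝ!⟨v, r̂⟩ | ∗ [x, y] Ŝ?⟨x, y⟩.y!⟨e_S^x⟩

so that, after the synchronization along Ŝ binds x to v and y to r̂, the created instance replies along r̂ with the evaluation of e_S^v.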
We can prove that there is a formal correspondence, based on the operational semantics, between Orc expressions and the COWS services resulting from their encoding. To simplify the proof, we found it convenient to extend the syntax of Orc expressions with f·{x ↦ y}, which behaves as the expression obtained from f by replacing all free occurrences of x with y. Correspondingly, we add the following rule to those defining the operational semantics:

(Sub)   f ↪[l] f′   implies   f·{x ↦ y} ↪[l] f′·{x ↦ y}

The next proposition, which can be easily proved by induction on the syntax of expressions, states that there is an operational correspondence between the extended semantics and the original one.

Proposition 1. If y ∉ fv(f), then f ↪[l] f′ iff f·{x ↦ y} ↪[l] f′·{x ↦ y}.

Expression f·{x ↦ y} is encoded as the parallel composition of the encoding of f with a receive activity r̂′?⟨x⟩, which initializes the shared variable x, and with an invoke activity r̂′!⟨y⟩, which forwards the value of variable y (when it becomes available) along the private endpoint r̂′. Formally, it is defined as

    ⟨⟨f·{x ↦ y}⟩⟩_r̂ = [r̂′] (r̂′!⟨y⟩ | [x] (r̂′?⟨x⟩ | ⟨⟨f⟩⟩_r̂))
The operational correspondence between Orc expressions and the COWS services resulting from their encoding can be characterized by two propositions, which we call completeness and soundness. The former states that all possible executions of an Orc expression can be simulated by its encoding, while the latter states that the initial step of a COWS term resulting from an encoding can be simulated by the corresponding Orc expression, so that the continuation of the encoding can evolve into the encoding of the expression continuation. By letting s =[α]⇒ s′ mean that there exist two services s₁ and s₂ such that s₁ is a reduct of s, s₁ –[α]→ s₂ (i.e. s₁ performs a transition labelled α), and s′ is a reduct of s₂, the two properties can be stated as follows.
Theorem 1 (Completeness). Given an Orc expression f and a communication endpoint r̂, f ↪[l] f′ implies [[ f ]]_r̂ ≡ ⟨⟨f⟩⟩_r̂ | s =[α]⇒ ⟨⟨f′⟩⟩_r̂ | s, where α = r̂ ◁ ⟨v⟩ if l = !v, and α = (p • o ⌊∅⌋ w̄ v̄) if l = τ.

Proof: The proof proceeds by induction on the length of the inference of f ↪[l] f′.

Base Step: We reason by case analysis on the axioms of the operational semantics.

(SiteCall) In this case f = S(v), l = !v′ and f′ = 0. By the encoding definition, ⟨⟨S(v)⟩⟩_r̂ | s = Ŝ!⟨v, r̂⟩ | s, where s = ∗ [x, y] Ŝ?⟨x, y⟩.y!⟨v′⟩. Thus, Ŝ!⟨v, r̂⟩ | s –[Ŝ ⌊∅⌋ ⟨x,y⟩ ⟨v,r̂⟩]→ r̂!⟨v′⟩ | s –[r̂ ◁ ⟨v′⟩]→ 0 | s. Since ⟨⟨0⟩⟩_r̂ = 0, we can conclude.

(Def) In this case f = E(w) with E(x) ≜ g, l = τ and f′ = g·{x ↦ w}. By the encoding definition, ⟨⟨E(w)⟩⟩_r̂ | s = [r̂′] (Ê!⟨r̂, r̂′⟩ | [z′] r̂′?⟨z′⟩.z′!⟨w⟩) | s, where s = ∗ [y, z] Ê?⟨y, z⟩.[r̂′′] (z!⟨r̂′′⟩ | [x] (r̂′′?⟨x⟩ | ⟨⟨g⟩⟩_y)). Thus, ⟨⟨E(w)⟩⟩_r̂ | s –[Ê ⌊∅⌋ ⟨y,z⟩ ⟨r̂,r̂′⟩]→ [r̂′] ([z′] r̂′?⟨z′⟩.z′!⟨w⟩ | [r̂′′] (r̂′!⟨r̂′′⟩ | [x] (r̂′′?⟨x⟩ | ⟨⟨g⟩⟩_r̂))) | s ≜ s′. Then, s′ –[r̂′ ⌊∅⌋ ⟨z′⟩ ⟨r̂′′⟩]→ [r̂′′] (r̂′′!⟨w⟩ | [x] (r̂′′?⟨x⟩ | ⟨⟨g⟩⟩_r̂)) | s ≜ s′′. In case w = z′′, since ⟨⟨g·{x ↦ z′′}⟩⟩_r̂ ≡ [r̂′′] (r̂′′!⟨z′′⟩ | [x] (r̂′′?⟨x⟩ | ⟨⟨g⟩⟩_r̂)), we can directly conclude. Instead, in case w = v′, we have s′′ –[r̂′′ ⌊∅⌋ ⟨x⟩ ⟨v′⟩]→ ⟨⟨g⟩⟩_r̂·{x ↦ v′} | s. Thus, since ⟨⟨g⟩⟩_r̂·{x ↦ v′} ≡ ⟨⟨g·{x ↦ v′}⟩⟩_r̂, we can conclude.

Inductive Step: We reason by case analysis on the last applied inference rule of the operational semantics.

(Sym1) In this case, f = f₁ | f₂ and f′ = f₁′ | f₂. By the premise of rule (Sym1), f₁ ↪[l] f₁′. By the encoding definition, ⟨⟨f⟩⟩_r̂ | s = ⟨⟨f₁⟩⟩_r̂ | ⟨⟨f₂⟩⟩_r̂ | s. By induction, ⟨⟨f₁⟩⟩_r̂ | s′ =[α]⇒ ⟨⟨f₁′⟩⟩_r̂ | s′, where s ≡ s′ | s′′. By rules (par_pass) or (par_conf), we can conclude.

(Sym2) Similar to the previous case.

(Seq1) In this case, f = f₁ > x > f₂, l = τ and f′ = f₁′ > x > f₂. By the premise of rule (Seq1), f₁ ↪[τ] f₁′. By the encoding definition, ⟨⟨f⟩⟩_r̂ | s = [r̂′] (⟨⟨f₁⟩⟩_r̂′ | ∗ [x] r̂′?⟨x⟩.⟨⟨f₂⟩⟩_r̂) | s. By induction, ⟨⟨f₁⟩⟩_r̂′ | s′ =[p • o ⌊∅⌋ w̄ v̄]⇒ ⟨⟨f₁′⟩⟩_r̂′ | s′, where s ≡ s′ | s′′. By rules (par_conf) and (del_pass), we can conclude that ⟨⟨f⟩⟩_r̂ | s =[p • o ⌊∅⌋ w̄ v̄]⇒ ⟨⟨f₁′ > x > f₂⟩⟩_r̂ | s.

(Seq2) In this case, f = f₁ > x > f₂, l = τ and f′ = (f₁′ > x > f₂) | f₂·{x ↦ v}. By the premise of rule (Seq2), f₁ ↪[!v] f₁′. By the encoding definition, ⟨⟨f⟩⟩_r̂ | s = [r̂′] (⟨⟨f₁⟩⟩_r̂′ | ∗ [x] r̂′?⟨x⟩.⟨⟨f₂⟩⟩_r̂) | s. By induction, ⟨⟨f₁⟩⟩_r̂′ | s′ =[r̂′ ◁ ⟨v⟩]⇒ ⟨⟨f₁′⟩⟩_r̂′ | s′, where s ≡ s′ | s′′. Thus, ⟨⟨f⟩⟩_r̂ | s =[r̂′ ⌊∅⌋ ⟨x⟩ ⟨v⟩]⇒ [r̂′] (⟨⟨f₁′⟩⟩_r̂′ | ∗ [x] r̂′?⟨x⟩.⟨⟨f₂⟩⟩_r̂ | ⟨⟨f₂·{x ↦ v}⟩⟩_r̂) | s ≡ ⟨⟨f₁′ > x > f₂⟩⟩_r̂ | ⟨⟨f₂·{x ↦ v}⟩⟩_r̂ | s.

(Asym2) In this case, f = f₁ where x :∈ f₂, l = τ and f′ = f₁ where x :∈ f₂′. By the premise of rule (Asym2), f₂ ↪[τ] f₂′. By the encoding definition, ⟨⟨f⟩⟩_r̂ | s = [r̂′, x] ( ⟨⟨f₁⟩⟩_r̂ | [k] ( ⟨⟨f₂⟩⟩_r̂′ | r̂′?⟨x⟩.kill(k) ) ) | s. By induction, ⟨⟨f₂⟩⟩_r̂′ | s′ =[p • o ⌊∅⌋ w̄ v̄]⇒ ⟨⟨f₂′⟩⟩_r̂′ | s′, where s ≡ s′ | s′′. Thus, by rule (del_pass), we can conclude that ⟨⟨f⟩⟩_r̂ | s =[p • o ⌊∅⌋ w̄ v̄]⇒ [r̂′, x] ( ⟨⟨f₁⟩⟩_r̂ | [k] ( ⟨⟨f₂′⟩⟩_r̂′ | r̂′?⟨x⟩.kill(k) ) ) | s ≡ ⟨⟨f₁ where x :∈ f₂′⟩⟩_r̂ | s.

(Asym1) Similar to the previous case.

(Asym3) In this case, f = f₁ where x :∈ f₂, l = τ and f′ = f₁·{x ↦ v}. By the premise of rule (Asym3), f₂ ↪[!v] f₂′. By the encoding definition, ⟨⟨f⟩⟩_r̂ | s = [r̂′, x] ( ⟨⟨f₁⟩⟩_r̂ | [k] ( ⟨⟨f₂⟩⟩_r̂′ | r̂′?⟨x⟩.kill(k) ) ) | s. By induction, ⟨⟨f₂⟩⟩_r̂′ | s′ =[r̂′ ◁ ⟨v⟩]⇒ ⟨⟨f₂′⟩⟩_r̂′ | s′, where s ≡ s′ | s′′. Then, by the encoding definition and rule (del_pass), s′′′ ≜ [r̂′, x] ( ⟨⟨f₁⟩⟩_r̂ | [k] ( ⟨⟨f₂′⟩⟩_r̂′ | r̂′?⟨x⟩.kill(k) ) | r̂′!⟨v⟩ ) | s is a reduct of ⟨⟨f⟩⟩_r̂ | s. We can conclude that s′′′ –[r̂′ ⌊∅⌋ ⟨x⟩ ⟨v⟩]→ [r̂′] ( ⟨⟨f₁·{x ↦ v}⟩⟩_r̂ | [k] ( ⟨⟨f₂′⟩⟩_r̂′ | kill(k) ) ) | s –[†]→ [r̂′] ( ⟨⟨f₁·{x ↦ v}⟩⟩_r̂ | [k] 0 ) | s ≡ ⟨⟨f₁·{x ↦ v}⟩⟩_r̂ | s.
Lemma 1. Given an Orc expression f and a communication endpoint r̂, [[ f ]]_r̂ cannot perform any transition labelled r̂ ◁ ⟨v⟩.

Proof: By a straightforward induction on the definition of the encoding.
Theorem 2 (Soundness). Given an Orc expression f and a communication endpoint r̂, [[ f ]]_r̂ ≡ ⟨⟨f⟩⟩_r̂ | s –[p • o ⌊∅⌋ w̄ v̄]→ s′ implies that there exists an Orc expression f′ such that f ↪[l] f′ and s′ =[α]⇒ ⟨⟨f′⟩⟩_r̂ | s, where α = r̂ ◁ ⟨v⟩ if l = !v, and α = (p′ • o′ ⌊∅⌋ w̄′ v̄′) if l = τ.

Proof: The proof proceeds by induction on the definition of the encoding ⟨⟨f⟩⟩_r̂.

Base Step: We reason by case analysis on the non-inductive cases of the definition.

(f = 0) In this case ⟨⟨f⟩⟩_r̂ = 0 and s = 0. Since [[ f ]]_r̂ ≡ 0 has no transitions, we can trivially conclude.

(f = S(w)) In this case ⟨⟨f⟩⟩_r̂ = Ŝ!⟨w, r̂⟩ and s = ∗ [x, y] Ŝ?⟨x, y⟩.y!⟨v′⟩. If w = z then [[ f ]]_r̂ cannot perform any α-labelled transition, thus we can trivially conclude. If w = v then [[ f ]]_r̂ ≡ Ŝ!⟨v, r̂⟩ | s –[Ŝ ⌊∅⌋ ⟨x,y⟩ ⟨v,r̂⟩]→ r̂!⟨v′⟩ | s = s′. By rule (SiteCall) we have l = !v′ and f′ = 0. Thus, s′ –[r̂ ◁ ⟨v′⟩]→ 0 | s. Since ⟨⟨0⟩⟩_r̂ = 0, we can conclude.

(f = E(w)) Assuming E(x) ≜ g, we have ⟨⟨E(w)⟩⟩_r̂ = [r̂′] (Ê!⟨r̂, r̂′⟩ | [z] r̂′?⟨z⟩.z!⟨w⟩) and s = ∗ [y′, z′] Ê?⟨y′, z′⟩.[r̂′′] (z′!⟨r̂′′⟩ | [x] (r̂′′?⟨x⟩ | ⟨⟨g⟩⟩_y′)). Then, [[ f ]]_r̂ –[Ê ⌊∅⌋ ⟨y′,z′⟩ ⟨r̂,r̂′⟩]→ [r̂′] ([z] r̂′?⟨z⟩.z!⟨w⟩ | [r̂′′] (r̂′!⟨r̂′′⟩ | [x] (r̂′′?⟨x⟩ | ⟨⟨g⟩⟩_r̂))) | s = s′. By rule (Def) we have l = τ and f′ = g·{x ↦ w}. Moreover, s′ –[r̂′ ⌊∅⌋ ⟨z⟩ ⟨r̂′′⟩]→ [r̂′′] (r̂′′!⟨w⟩ | [x] (r̂′′?⟨x⟩ | ⟨⟨g⟩⟩_r̂)) | s ≜ s′′. If w = y, we directly conclude, since s′′ = ⟨⟨g·{x ↦ y}⟩⟩_r̂ | s. If w = v, we can conclude by s′′ –[r̂′′ ⌊∅⌋ ⟨x⟩ ⟨v⟩]→ ⟨⟨g·{x ↦ v}⟩⟩_r̂ | s.
Inductive Step: We reason by case analysis on the inductive cases of the definition.
(f = f₁ | f₂) In this case ⟨⟨f⟩⟩_r̂ = ⟨⟨f₁⟩⟩_r̂ | ⟨⟨f₂⟩⟩_r̂. By the encoding definition, the transition labelled p • o ⌊∅⌋ w̄ v̄ cannot be produced by a synchronization between ⟨⟨f₁⟩⟩_r̂ and ⟨⟨f₂⟩⟩_r̂. Then, we have only the following cases:
– ⟨⟨f₁⟩⟩_r̂ | s –[p • o ⌊∅⌋ w̄ v̄]→ s₁. By induction, f₁ ↪[l] f₁′ and s₁ =[α]⇒ ⟨⟨f₁′⟩⟩_r̂ | s. By rule (Sym1), f ↪[l] f₁′ | f₂ = f′. By rule (par_pass) or (par_conf), we can conclude that s′ ≡ s₁ | ⟨⟨f₂⟩⟩_r̂ =[α]⇒ ⟨⟨f₁′⟩⟩_r̂ | s | ⟨⟨f₂⟩⟩_r̂ ≡ ⟨⟨f′⟩⟩_r̂ | s.
– ⟨⟨f₂⟩⟩_r̂ | s –[p • o ⌊∅⌋ w̄ v̄]→ s₂. Similar to the previous case.

(f = f₁ > x > f₂) In this case ⟨⟨f⟩⟩_r̂ = [r̂′] (⟨⟨f₁⟩⟩_r̂′ | ∗ [x] r̂′?⟨x⟩.⟨⟨f₂⟩⟩_r̂). Lemma 1 implies that the inference of the transition labelled p • o ⌊∅⌋ w̄ v̄ derives from ⟨⟨f₁⟩⟩_r̂′ | s –[p • o ⌊∅⌋ w̄ v̄]→ s₁. By induction, f₁ ↪[l] f₁′ and s₁ =[α]⇒ ⟨⟨f₁′⟩⟩_r̂′ | s. We have two cases:
– l = τ. By rule (Seq1), f ↪[τ] f₁′ > x > f₂. By rules (par_conf) and (del_pass), s′ =[p′ • o′ ⌊∅⌋ w̄′ v̄′]⇒ [r̂′] (⟨⟨f₁′⟩⟩_r̂′ | ∗ [x] r̂′?⟨x⟩.⟨⟨f₂⟩⟩_r̂) | s ≡ ⟨⟨f₁′ > x > f₂⟩⟩_r̂ | s.
– l = !v. By rule (Seq2), f ↪[τ] (f₁′ > x > f₂) | f₂·{x ↦ v}. By rules (com) and (del_pass), s′ =[r̂′ ⌊∅⌋ ⟨x⟩ ⟨v⟩]⇒ [r̂′] (⟨⟨f₁′⟩⟩_r̂′ | ∗ [x] r̂′?⟨x⟩.⟨⟨f₂⟩⟩_r̂) | ⟨⟨f₂·{x ↦ v}⟩⟩_r̂ | s ≡ ⟨⟨(f₁′ > x > f₂) | f₂·{x ↦ v}⟩⟩_r̂ | s.

(f = f₁ where x :∈ f₂) In this case ⟨⟨f⟩⟩_r̂ = [r̂′, x] ( ⟨⟨f₁⟩⟩_r̂ | [k] ( ⟨⟨f₂⟩⟩_r̂′ | r̂′?⟨x⟩.kill(k) ) ). By Lemma 1, we have two cases:
– ⟨⟨f₁⟩⟩_r̂ | s –[p • o ⌊∅⌋ w̄ v̄]→ s₁. By induction, f₁ ↪[l] f₁′ and s₁ =[α]⇒ ⟨⟨f₁′⟩⟩_r̂ | s. By rule (Asym1), f ↪[l] f₁′ where x :∈ f₂. By rules (del_pass) and (par_conf), s′ =[α]⇒ [r̂′, x] ( ⟨⟨f₁′⟩⟩_r̂ | [k] ( ⟨⟨f₂⟩⟩_r̂′ | r̂′?⟨x⟩.kill(k) ) ) | s ≡ ⟨⟨f₁′ where x :∈ f₂⟩⟩_r̂ | s.
– ⟨⟨f₂⟩⟩_r̂′ | s –[p • o ⌊∅⌋ w̄ v̄]→ s₂. By induction, f₂ ↪[l] f₂′ and s₂ =[α]⇒ ⟨⟨f₂′⟩⟩_r̂′ | s. We have two cases:
  • α = p′ • o′ ⌊∅⌋ w̄′ v̄′. Similar to the previous case.
  • α = r̂′ ◁ ⟨v⟩. By rule (Asym3), f ↪[τ] f₁·{x ↦ v}. By rules (com), (par_conf) and (del_pass), s′ =[r̂′ ⌊∅⌋ ⟨x⟩ ⟨v⟩]⇒ ⟨⟨f₁·{x ↦ v}⟩⟩_r̂ | [k] ( ⟨⟨f₂′·{x ↦ v}⟩⟩_r̂′ | kill(k) ) | s = s′′. Then, by rules (kill), (par_kill), (del_kill) and (par_pass), s′′ –[†]→ ⟨⟨f₁·{x ↦ v}⟩⟩_r̂ | s.

(f = g·{x ↦ y}) In this case ⟨⟨f⟩⟩_r̂ = [r̂′] (r̂′!⟨y⟩ | [x] (r̂′?⟨x⟩ | ⟨⟨g⟩⟩_r̂)). Since r̂′!⟨y⟩ cannot perform any transition labelled α, we have ⟨⟨g⟩⟩_r̂ | s –[p • o ⌊∅⌋ w̄ v̄]→ s′′. By induction, g ↪[l] g′ and s′′ =[α]⇒ ⟨⟨g′⟩⟩_r̂ | s. By rules (par_conf) and (del_pass), we can conclude that s′ =[α]⇒ [r̂′] (r̂′!⟨y⟩ | [x] (r̂′?⟨x⟩ | ⟨⟨g′⟩⟩_r̂)) | s ≡ ⟨⟨g′·{x ↦ y}⟩⟩_r̂ | s.
n ::= a :: C                                         (nodes)
C ::= ∗s | m s | ⟨a, o, ū⟩ | C | C                    (components)
m ::= ∅ | {p = u} | m ∪ m                             (correl. const.)
s ::= 0                                               (null)
    | exit                                            (exit)
    | ass(w̄, ē)                                       (assign)
    | inv(r, o, w̄)                                    (invoke)
    | rec(r, o, w̄)                                    (receive)
    | if (e) then {s} else {s}                        (switch)
    | s ; s                                           (sequence)
    | s | s                                           (flow)
    | Σ_{i∈I} rec(r_i, o_i, w̄_i) ; s_i                 (pick)
    | A(w̄)                                            (call)

Table 12. ws-calculus syntax
5.2 Encoding ws-calculus
The syntax of (a simplified version of) ws-calculus, given in Table 12, is parameterized with respect to the following syntactic sets: properties (sorts of late-bound constants storing some relevant values within service instances, ranged over by p), values (basic values and addresses, ranged over by u), partner links (variables storing addresses used to identify service partners within an interaction), operation parameters (basic variables, partner links and properties, ranged over by w), and service identifiers (ranged over by A). Notationally, we will use a to range over addresses and r to range over addresses and partner links.
ws-calculus permits modelling the interactions among web service instances in a network context. A network of services is a finite set of nodes. Nodes, written as a :: C, are uniquely identified by an address a and host components C. Components C may be service specifications, instances or requests. The behavioural specification of a service s is written ∗s, while m s′ represents a service instance that behaves according to s′ and whose properties evaluate according to the (possibly empty) set m of correlation constraints. A correlation constraint is a pair, written p = u, recording the value u assigned to the property p. Properties are used to store values that are important to identify service instances. A service request ⟨a, o, ū⟩ represents an operation invocation that must still be processed and contains the invoker address a, the operation name o and the data ū for operation execution.
Services are structured activities built from the basic activities, i.e. forced instance termination exit, assignment ass(·, ·), service invocation inv(·, ·, ·) and service request processing rec(·, ·, ·), by exploiting the operators for conditional choice if (·) then {·} else {·} (switch), sequential composition · ; · (sequence), parallel composition · | · (flow), external choice Σ_{i∈I} rec(·, ·, ·) ; · (pick) and service call A(w̄) (of course, we assume that every service identifier A has a unique definition of the form A(w̄) ≜ s).
⟨⟨a :: C⟩⟩ = ⟨⟨C⟩⟩_a

⟨⟨∗s⟩⟩_a = ∗ [k_a, V(s)] ⟨⟨s⟩⟩_a
⟨⟨{p̄ = ū} s⟩⟩_a = [k_a, V(s)] ⟨⟨s·{p̄ ↦ ū}⟩⟩_a
⟨⟨⟨a′, o, ū⟩⟩⟩_a = {| a • o!⟨a′, ū⟩ |}

⟨⟨0⟩⟩_a = 0
⟨⟨exit⟩⟩_a = kill(k_a)
⟨⟨ass(w̄, ē)⟩⟩_a = [w̄ = ē]
⟨⟨inv(r, o, w̄)⟩⟩_a = r • o!⟨a, w̄⟩
⟨⟨rec(r, o, w̄)⟩⟩_a = a • o?⟨r, w̄⟩

Table 13. ws-calculus encoding
The encoding from ws-calculus to COWS, denoted by ⟨⟨·⟩⟩, is shown in Table 13. We only show the relevant cases; for example, the encodings of structured activities are quite standard and, hence, omitted. Both variables and properties of ws-calculus are encoded as COWS variables, while the encoding of a net, i.e. a finite set of nodes, is the parallel composition of the encodings of its nodes. The encoding of a node is given in terms of the encoding of the hosted component, parameterized by the address of the node.
This auxiliary parameterized encoding ⟨⟨·⟩⟩_a is defined inductively over the syntax of services. The parameter a is used, e.g., both as the partner name to which a given request must be sent and to refer to the killer label k_a used to identify each service instance. Service specifications are encoded as COWS persistent services, by exploiting the replication operator. The fact that variables are global to service instances is rendered through the outermost delimitation [k_a, V(s)], where V(s) is the set of free variables and properties of s. Partner links used to identify service partners within an interaction are translated by exploiting the address of the hosting node. Finally, the effect of executing exit inside a service instance hosted at a is achieved by forcing termination of the COWS term resulting from the encoding of the instance and identified by the label k_a.
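As a small illustration of the definitions in Table 13 (the addresses a and a′, the operation o and the tuples of data are generic), a pending request hosted at node a is encoded as

    ⟨⟨a :: ⟨a′, o, ū⟩⟩⟩ = {| a • o!⟨a′, ū⟩ |}

while a matching receive within a service hosted at the same node is encoded as ⟨⟨rec(r, o, w̄)⟩⟩_a = a • o?⟨r, w̄⟩. The two COWS activities can then synchronize along a • o, matching r against the invoker address a′ and w̄ against the data ū.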
Of course, as in Section 5.1, we could prove that there is a formal correspondence, based on the operational semantics, between ws-calculus nets and the COWS services resulting from their encoding.
6 Timed extension of COWS
We present here an extension of COWS that permits modelling timed activities. Specifically, we consider the wait activity of WS-BPEL, which allows a service to wait for a given time slot.
M ::= 0 | {s}_t | [n] M | M | M        (machines)
g ::= ... | δ.s                        (guarded choice)

Table 14. Timed COWS
For the sake of simplicity, we do not deal with the attribute until, which permits delaying the executing service until time reaches a given value.
The extended syntax of COWS, given in Table 14, is now parameterized also by a set of time values (ranged over by t, t′, ...) and a set of time slots (ranged over by δ, δ′, ...). The syntax includes a new category, that of machines (as in [20]). A machine {s}_t has its own clock, which is not synchronized with the clocks of other machines (namely, time progresses asynchronously between different machines). Guards are extended with the timed activity δ, which specifies the time slot δ that the executing service has to wait for. Consequently, the choice construct can now be guarded both by the reception of messages and by the expiration of timeouts, like the WS-BPEL pick activity. We assume that the evaluation of expressions and the execution of basic activities, except for δ, are instantaneous, i.e. they do not consume time units.
To define the operational semantics of the extended language, we first extend the structural congruence of Section 2 with the abelian monoid laws for parallel composition over machines and with the following laws:

    {[n] s}_t ≡ [n] {s}_t          {s}_t ≡ {s′}_t  if s ≡ s′
    [n] 0 ≡ 0          [n] [m] M ≡ [m] [n] M          M | [n] N ≡ [n] (M | N)  if n ∉ fd(M)

The first law is used to extrude a name outside a machine, while the second one lifts to machines the structural congruence defined on services. The remaining laws are standard.
Secondly, the labelled transition relation –[α̂]→ over services now also models time elapsing, i.e. α̂ can be either α or δ, and is obtained by adding the rules shown in the upper part of Table 15 to those in Table 4. Indeed, due to our assumptions on the model of time used, the rules for kill, invoke, receive, communication, parallel composition, protection and delimitation are those in Table 4; instead, in rule (cong), α must be replaced by α̂ so as to also consider time elapsing. Let us briefly comment on the new rules. Rule (wait_elaps) permits updating the argument δ of the wait activity until the timeout expires; the expiration is denoted by the label † of rule (wait_tout). Time can elapse while waiting on a receive activity (rule (rec_elaps)) and cannot, by itself, make a choice within a pick activity (rule (pick_elaps)). Instead, a choice can be made when a timeout occurs; in such a case rule (wait_tout) generates a † and rule (sum) in Table 4 permits making the choice (thus 'killing' the discarded alternative activities). Rule (par_sync) says that time elapses equally for all services running in parallel within the same machine. The remaining rules permit time elapsing within invoke, delimitation, protection and replication constructs.
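As a small illustration of these rules (the derivation below is ours; the endpoint p • o, the 3-unit delay and the continuations s₁ and s₂ are generic), consider a receive-or-timeout choice:

    p • o?w̄.s₁ + 3.s₂  –[3]→  p • o?w̄.s₁ + 0.s₂        by (rec_elaps), (wait_elaps) and (pick_elaps)
    p • o?w̄.s₁ + 0.s₂  –[†]→  s₂                       by (wait_tout) and (sum)

Thus, if no matching invoke becomes available within 3 time units, the timeout resolves the choice and the receive branch is discarded.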
Finally, we define a reduction relation → over machines through the rules shown in the lower part of Table 15. Rules (loc_elaps) and (loc_act) model the taking place of timed and computational activities, respectively, within a machine. Rule (par_async) says that time elapses asynchronously between different machines (indeed, N, and hence its clocks, remain unchanged after the transition), while rule (cong_M) says that structurally congruent machines have the same behaviour.
(wait_tout)    0.s –[†]→ s
(wait_elaps)   0 < δ′ ≤ δ   implies   δ.s –[δ′]→ (δ−δ′).s
(rec_elaps)    p • o?w̄.s –[δ]→ p • o?w̄.s
(inv_elaps)    p • o!ē –[δ]→ p • o!ē
(pick_elaps)   g₁ –[δ]→ g₁′  and  g₂ –[δ]→ g₂′   implies   g₁ + g₂ –[δ]→ g₁′ + g₂′
(par_sync)     s₁ –[δ]→ s₁′  and  s₂ –[δ]→ s₂′   implies   s₁ | s₂ –[δ]→ s₁′ | s₂′
(scope_elaps)  s –[δ]→ s′   implies   [d] s –[δ]→ [d] s′
(prot_elaps)   s –[δ]→ s′   implies   {|s|} –[δ]→ {|s′|}
(repl_elaps)   s –[δ]→ s′   implies   ∗s –[δ]→ ∗s′

(loc_elaps)    s –[δ]→ s′   implies   {s}_t → {s′}_{t+δ}
(loc_act)      s –[α]→ s′  with  α ∈ {†, p • o ⌊∅⌋ w̄ v̄}   implies   {s}_t → {s′}_t
(par_async)    M → M′   implies   M | N → M′ | N
(res)          M → M′   implies   [n] M → [n] M′
(cong_M)       M ≡ M′,  M′ → N′,  N′ ≡ N   implies   M → N
(com_M)        s₁ –[(p • o) ◁ v̄]→ s₁′,  s₂ –[(p • o) ▷ w̄]→ s₂′,  M(w̄, v̄) = σ,  fv(w̄) = ū,  noc(A[[ [ū] s₂ ]], p • o, w̄, v̄)
               implies   {s₁}_t₁ | {A[[ [ū] s₂ ]]}_t₂ → {s₁′}_t₁ | {A[[ s₂′·σ ]]}_t₂

Table 15. Timed COWS operational semantics (additional rules)
Rule (res) deals with restriction of names. Rule (com_M), where fv(s) denotes the free variables of s, enables interaction between services executing on different machines; it combines the effects of rules (del_sub) and (com) in Table 4. For the interaction to take place, it is necessary to single out, on the receiving machine, the component performing the reception and its executing context, by also highlighting the delimitations for the input variables. This way, the communication effect can be immediately applied to the continuation of the receiving service. The last premise ensures that, in case of multiple start activities, the message is routed to the correlated service instance rather than triggering a new instantiation. To communicate a private name between machines, it is necessary to extrude the name outside the sending machine and to extend its scope to the receiving machine, by applying rule (cong_M); then the communication can take place, by applying rules (com) and (res).
Of course, in this new setting, computations are defined as sequences of (connected) reductions from a given parallel composition of machines.
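As a small illustration of asynchronous time elapsing between machines (the derivation below is ours; s, s′, the clock values t and t′, and the 5-unit delay are generic), consider two machines running in parallel:

    {5.s}_t | {s′}_t′  →  {0.s}_{t+5} | {s′}_t′        by (wait_elaps), (loc_elaps) and (par_async)

Time advances by 5 units on the first machine, while the clock of the second machine is unaffected.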
We end this section with two simple examples. The first one provides a sort of clock service that is set to send a message "tick" along m̂ every 10 time units:

    [n̂] ( n̂!⟨⟩ | ∗ n̂?⟨⟩.10.( m̂!⟨"tick"⟩ | n̂!⟨⟩ ) )

Notice, however, that because invoke activities are executed lazily, it is only guaranteed that at least 10 time units pass between two consecutive emissions of message "tick".
The second example is a variant of the rps service of Section 4.1 that exploits the fact that the choice construct can now be guarded both by receive activities and by timed activities (like the pick activity of WS-BPEL). Essentially, an instance of service t_rps, created upon reception of the first throw of a challenge, waits for the reception of the corresponding second throw for at most 30 time units. If this throw arrives within the deadline, the instance behaves as usual. Otherwise, when the timeout expires, the instance declares the sender of the first throw the winner of the challenge and terminates.
t_rps ≜ ∗ [x_champ_res, x_chall_res, x_id, x_thr1, x_thr2, x_win, k]
          ( ( p_champ • o_throw?⟨x_champ_res, x_id, x_thr1⟩.
                ( p_chall • o_throw?⟨x_chall_res, x_id, x_thr2⟩.
                    ( x_champ_res • o_win!⟨x_id, x_win⟩ | x_chall_res • o_win!⟨x_id, x_win⟩ )
                  + 30.( {| x_champ_res • o_win!⟨x_id, x_champ_res⟩ |} | kill(k) ) )
              + p_chall • o_throw?⟨x_chall_res, x_id, x_thr2⟩.
                ( p_champ • o_throw?⟨x_champ_res, x_id, x_thr1⟩.
                    ( x_champ_res • o_win!⟨x_id, x_win⟩ | x_chall_res • o_win!⟨x_id, x_win⟩ )
                  + 30.( {| x_chall_res • o_win!⟨x_id, x_chall_res⟩ |} | kill(k) ) ) )
            | Assign )
7 Concluding remarks
We have introduced COWS, a formalism for specifying and combining services while modelling their dynamic behaviour (i.e. it deals with service orchestration rather than choreography). COWS borrows many constructs from well-known process calculi, e.g. the π-calculus, the update calculus, StACi and Lπ, but combines them in an original way, thus being different from all existing calculi. COWS permits modelling different and typical aspects of (web) services technologies, such as multiple start activities, receive conflicts, routing of correlated messages, service instances and interactions among them. We have also presented an extension of the basic language with timed constructs.
The correlation mechanism was first exploited in [33], which, however, only considers interaction among different instances of a single business process. Instead, to connect the interaction protocols of clients and of the respective service instances, the calculus
introduced in [3], and called SCC, relies on explicit modelling of sessions and their dynamic creation (which exploits the mechanism of private names of the π-calculus). Interaction sessions are not explicitly modelled in COWS; instead, they can be identified by tracing all those exchanged messages that are correlated to each other through their contents (as in [14]). We believe that the mechanism based on correlation sets (also used by WS-BPEL), which exploits business data and communication protocol headers to correlate different interactions, is more robust and fits the loosely coupled world of Web Services better than that based on explicit session references. Another notable difference with SCC is that in COWS services are not necessarily persistent.
Many works put forward enrichments of some well-known process calculus with constructs inspired by those of WS-BPEL. Most of them deal with issues of web transactions such as interruptible processes, failure handlers and time. This is, for example, the case of [20, 21, 24, 25], which present timed and untimed extensions of the π-calculus, called webπ and webπ∞, tailored to study a simplified version of the scope construct of WS-BPEL. Other proposals on the formalization of flow compensation are [5, 4], which give a more compact and closer description of the Sagas mechanism [13] for dealing with long-running transactions.
We have focused on service orchestration rather than on service choreography. In [6, 7] both aspects are studied. Other approaches are based on the use of schema languages [12] and Petri nets [15]. In [19] a sort of distributed input-guarded choice of join patterns, called smooth orchestrators, gives a simple and effective representation of synchronization constructs. The work closest to ours is [22], where ws-calculus is introduced to formalize the semantics of WS-BPEL. COWS represents a more foundational formalism than ws-calculus in that it does not rely on explicit notions of location and state, it is more manageable (e.g. it has a simpler operational semantics) and it is at least equally expressive (as the encoding of ws-calculus in COWS presented in Section 5.2 shows).
As future work, we plan to continue our programme of laying rigorous methodological foundations for the specification and validation of service-oriented computing middlewares and applications by developing more powerful type systems. For example, session types and behavioural types are emerging as powerful tools for taking into account behavioural and non-functional properties of computing systems and, in the case of services, could permit expressing and enforcing policies for, e.g., disciplining resource usage, constraining the sequences of messages accepted by services, ensuring service interoperability and compositionality, guaranteeing absence of deadlock in service composition, and checking that interaction obeys a given protocol. Some of the studies developed for the π-calculus and other standard process calculi (e.g. [10, 34, 17, 16, 18]) are promising starting points, but they need non-trivial adaptations to deal with all of COWS' peculiar features. For example, one of the major problems we envisage concerns the treatment of killing and protection activities, which are not commonly used in process calculi.
Another promising approach that we plan to investigate is the definition of behavioural equivalences, which would provide a precise account of what 'observable behaviour' means for a service. Pragmatically, they could provide a means to establish formal correspondences between different views (abstraction levels) of a service, e.g. the contract it has to honour and its true implementation.
References
1. A. Alves, A. Arkin, S. Askary, B. Bloch, F. Curbera, M. Ford, Y. Goland, A. Guı́zar,
N. Kartha, C.K. Liu, R. Khalaf, D. König, M. Marin, V. Mehta, S. Thatte, D. van der Rijn,
P. Yendluri, and A. Yiu. Web Services Business Process Execution Language Version 2.0.
Technical report, WS-BPEL TC OASIS, August 2006. http://www.oasis-open.org/.
2. L. Bocchi, C. Laneve, and G. Zavattaro. A calculus for long-running transactions. In
FMOODS, volume 2884 of LNCS, pages 124–138. Springer, 2003.
3. M. Boreale, R. Bruni, L. Caires, R. De Nicola, I. Lanese, M. Loreti, F. Martins, U. Montanari,
A. Ravara, D. Sangiorgi, V. T. Vasconcelos, and G. Zavattaro. SCC: a Service Centered
Calculus. In WS-FM, volume 4184 of LNCS, pages 38–57. Springer, 2006.
4. R. Bruni, M. Butler, C. Ferreira, T. Hoare, H. Melgratti, and U. Montanari. Comparing two
approaches to compensable flow composition. In CONCUR, volume 3653 of LNCS, pages
383–397. Springer, 2005.
5. R. Bruni, H.C. Melgratti, and U. Montanari. Theoretical foundations for compensations in
flow composition languages. In POPL, pages 209–220. ACM, 2005.
6. N. Busi, R. Gorrieri, C. Guidi, R. Lucchi, and G. Zavattaro. Choreography and orchestration:
A synergic approach for system design. In ICSOC, volume 3826 of LNCS, pages 228–240.
Springer, 2005.
7. N. Busi, R. Gorrieri, C. Guidi, R. Lucchi, and G. Zavattaro. Choreography and orchestration
conformance for system design. In COORDINATION, volume 4038 of LNCS, pages 63–81.
Springer, 2006.
8. M.J. Butler and C. Ferreira. An operational semantics for StAC, a language for modelling
long-running business transactions. In COORDINATION, volume 2949 of LNCS, pages 87–
104. Springer, 2004.
9. M.J. Butler, C.A.R. Hoare, and C. Ferreira. A trace semantics for long-running transactions.
In 25 Years Communicating Sequential Processes, volume 3525 of LNCS, pages 133–150.
Springer, 2005.
10. M. Carbone, K. Honda, and N. Yoshida. A calculus of global interaction based on session
types. In DCM’06, 2006. To appear as ENTCS.
11. M. Carbone and S. Maffeis. On the expressive power of polyadic synchronisation in π-calculus. Nordic J. of Computing, 10(2):70–98, 2003.
12. S. Carpineti and C. Laneve. A basic contract language for web services. In ESOP, volume
3924 of LNCS, pages 197–213. Springer, 2006.
13. H. Garcia-Molina and K. Salem. Sagas. In SIGMOD, pages 249–259. ACM Press, 1987.
14. C. Guidi, R. Lucchi, R. Gorrieri, N. Busi, and G. Zavattaro. SOCK: a calculus for service
oriented computing. In ICSOC, volume 4294 of LNCS, pages 327–338. Springer, 2006.
15. S. Hinz, K. Schmidt, and C. Stahl. Transforming BPEL to Petri nets. In Business Process Management, volume 3649 of LNCS, pages 220–235. Springer, 2005.
16. A. Igarashi and N. Kobayashi. A generic type system for the pi-calculus. Theor. Comput.
Sci., 311(1-3):121–163, 2004.
17. N. Kobayashi. Type systems for concurrent programs. In 10th Anniversary Colloquium of
UNU/IIST, volume 2757 of LNCS, pages 439–453. Springer, 2003.
18. N. Kobayashi, K. Suenaga, and L. Wischik. Resource usage analysis for the π-calculus. In
VMCAI, volume 3855 of LNCS, pages 298–312. Springer, 2006.
19. C. Laneve and L. Padovani. Smooth orchestrators. In FoSSaCS, volume 3921 of LNCS,
pages 32–46. Springer, 2006.
20. C. Laneve and G. Zavattaro. Foundations of web transactions. In FoSSaCS, volume 3441 of
LNCS, pages 282–298. Springer, 2005.
21. C. Laneve and G. Zavattaro. web-pi at work. In TGC, volume 3705 of LNCS, pages 182–194.
Springer, 2005.
22. A. Lapadula, R. Pugliese, and F. Tiezzi. A WSDL-based type system for WS-BPEL. In
COORDINATION, volume 4038 of LNCS, pages 145–163. Springer, 2006.
23. A. Lapadula, R. Pugliese, and F. Tiezzi. A WSDL-based type system for WS-BPEL (full version). Technical report, Dipartimento di Sistemi e Informatica, Univ. Firenze, 2006. Available at: http://www.dsi.unifi.it/~pugliese/DOWNLOAD/wsc-full.ps.
24. M. Mazzara and I. Lanese. Towards a unifying theory for web services composition. In
WS-FM, volume 4184 of LNCS, pages 257–272. Springer, 2006.
25. M. Mazzara and R. Lucchi. A pi-calculus based semantics for WS-BPEL. Journal of Logic
and Algebraic Programming, 70(1):96–118, 2006.
26. M. Merro and D. Sangiorgi. On asynchrony in name-passing calculi. Mathematical Structures in Computer Science, 14(5):715–767, 2004.
27. R. Milner. Communication and concurrency. Prentice-Hall, 1989.
28. R. Milner, J. Parrow, and D. Walker. A calculus of mobile processes, I and II. Inf. Comput.,
100(1):1–40, 41–77, 1992.
29. J. Misra and W. R. Cook. Computation orchestration: A basis for wide-area computing.
Journal of Software and Systems Modeling, May 2006. Published online.
30. J. Parrow and B. Victor. The update calculus. In AMAST, volume 1349 of LNCS, pages
409–423. Springer, 1997.
31. J. Parrow and B. Victor. The fusion calculus: Expressiveness and symmetry in mobile processes. In Logic in Computer Science, pages 176–185, 1998.
32. W.M.P. van der Aalst, A.H.M. ter Hofstede, B. Kiepuszewski, and A.P. Barros. Workflow
patterns. Distributed and Parallel Databases, 14(1):5–51, 2003.
33. M. Viroli. Towards a formal foundation to orchestration languages. ENTCS, 105:51–71,
2004.
34. N. Yoshida and V. T. Vasconcelos. Language primitives and type discipline for structured
communication-based programming revisited: Two systems for higher-order session communication. In 1st International Workshop on Security and Rewriting Techniques, ENTCS,
2006.