MPLS Automatic Bandwidth Allocation Via Adaptive Hysteresis
Abstract
MPLS automatic bandwidth allocation (or provisioning) refers to the process
of dynamically updating the bandwidth allocation of a label switched path
on the basis of actual aggregate traffic demand on this path. Since band-
width updates require signaling, it is common to limit the rate of updates to
reduce signaling costs. In this article, we propose a model-free asynchronous
adaptive hysteresis algorithm for MPLS automatic bandwidth allocation un-
der bandwidth update rate constraints. We validate the effectiveness of the
proposed approach by comparing it against existing schemes in (i) voice and
(ii) data traffic scenarios. The proposed method can also be used in more
general GMPLS networks.
1. Introduction
MPLS (Multi-Protocol Label Switching) is a forwarding paradigm for IP
networks in which IP traffic is carried over an LSP (Label Switched Path)
that is established between two MPLS network edge devices using a sig-
naling protocol such as RSVP (Resource Reservation Protocol) or CR-LDP
(Constraint-based Routed Label Distribution Protocol) [1]. A label in the IP
bandwidth update rate. Our goal then is to find an automatic bandwidth al-
location scheme for MPLS networks under update rate constraints. We seek
model-free and simple-to-implement ABA schemes for practicality purposes.
We next describe in more detail the ABA problem studied in this article.
We study two versions of ABA for two different types of traffic: (i) circuit-oriented
traffic (such as TDM voice) and (ii) packet-oriented traffic (such as IP).
Circuit-oriented traffic requires a circuit to be formed for a call between two
nodes before traffic can flow. If a circuit cannot be formed, the call is
dropped. When a call is admitted into the network, a certain bandwidth
needs to be guaranteed for QoS. When circuit-oriented traffic is carried
through an MPLS network, the concept of a circuit is replaced by an
LSP which employs per-LSP call admission control at the ingress and per-
LSP queuing at the core nodes. The main QoS parameter for such traffic
is the call blocking probability. For packet-oriented traffic, the establishment
of a circuit is not necessary and admission control is typically not used.
Instead, users may reduce their rates for congestion control purposes, e.g.,
via TCP congestion control.
For the circuit-oriented traffic case, we focus our attention on a single-class
traffic scenario in which individual calls arrive at an MPLS ingress node ac-
cording to a non-homogeneous Poisson process with rate λ(t) and call holding
times are exponentially distributed with mean 1/µ. We set the maximum
arrival rate λm = maxt λ(t) over the time interval of interest. These individ-
ual calls are then aggregated into an MPLS LSP in the core network whose
bandwidth needs to be dynamically adjusted on the basis of instantaneous
aggregate traffic demand. If the bandwidth allocation for the LSP is not
sufficient for an incoming call, then either the ingress router signals the
network for a bandwidth update, which must be accepted by all nodes
on the path before the call is admitted, or the call is dropped. In our
model, each individual call requires one unit of bandwidth. We also impose
a maximum bandwidth allocation denoted by Cm. We suggest setting Cm to
the bandwidth required for the LSP to achieve a desired call blocking prob-
ability Pb in the worst case scenario, i.e., λ(t) = λm . The parameter Cm can
be derived using the Erlang-B formula which gives the blocking probability
B(ρm , Cm ) in terms of the maximum traffic load ρm = λm /µ and Cm [5]:
P_b = B(ρ_m, C_m) = \frac{ρ_m^{C_m} / C_m!}{\sum_{k=0}^{C_m} ρ_m^k / k!}.   (1)
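To make this dimensioning rule concrete, the following Python sketch (our own illustration; the function names are not from the paper) evaluates the Erlang-B formula with the standard numerically stable recursion and finds the smallest capacity meeting a target blocking probability.

```python
def erlang_b(rho, c):
    """Erlang-B blocking probability B(rho, c) via the stable recursion
    B(rho, 0) = 1,  B(rho, k) = rho*B(rho, k-1) / (k + rho*B(rho, k-1))."""
    b = 1.0
    for k in range(1, c + 1):
        b = rho * b / (k + rho * b)
    return b

def min_capacity(rho_m, pb_target):
    """Smallest Cm such that B(rho_m, Cm) <= pb_target."""
    c = 0
    while erlang_b(rho_m, c) > pb_target:
        c += 1
    return c

# Section 4.1 example: 1/mu = 180 s and lambda = 0.0493055 calls/s, so rho_m ~= 8.875 Erlangs.
rho_m = 0.0493055 * 180.0
print(erlang_b(rho_m, 16))   # ~0.01: Cm = 16 meets Pb = 0.01 for this worst-case load
```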
tween bandwidth efficiency and signaling costs. Our goal in ABA is then to
select the update decision epochs to dynamically vary the allocated band-
width R(t) at time t for the LSP as a function of the number of ongoing calls
in the system denoted by N(t) so as to minimize the average bandwidth use
over time subject to the following three constraints:
Rk ≤ Cm , (3)
be overcome by using buffers and/or end-to-end congestion control especially
when the measurement interval T is short.
There are two different approaches to ABA for connection-oriented networks
(such as MPLS) under update frequency constraints, namely the synchronous
and asynchronous approaches. In the synchronous approach, the bandwidth
allocation of a connection is adjusted at regularly spaced time epochs with a
frequency dictated by signaling constraints. For circuit-oriented traffic, the
work in [6] proposes that at a decision epoch, a new capacity is reserved
for the aggregate depending on the current system occupancy so that the
expected time average of the blocking probability in the forthcoming interval
will be less than a predefined limit. A numerical algorithm is proposed by
[7] for the efficient numerical calculation of such time-dependent blocking
probabilities. It is clear that the approach in [6] in conjunction with [7] is
model-based since one should have a stochastic model at hand that describes
the actual traffic accurately. A practical example for synchronous bandwidth
adjustment for MPLS LSPs but for packet-oriented traffic is the MPLS-TE
(Traffic Engineering) Automatic Bandwidth Adjustment for TE tunnels de-
scribed in [8] and [9], the so-called auto-bandwidth allocator. This automatic
bandwidth adjustment mechanism adjusts the bandwidth for each such LSP
according to the adjustment frequency configured for the LSP and the sam-
pled output rate for the LSP since the last adjustment without regard for
any adjustments previously made or pending for other LSPs. In particular,
there are two types of intervals, a Y-type interval (default: 24 hours) and an
X-type interval (default: 5 minutes). The average bandwidth requirement is
sampled for each X-type interval within a Y-type interval and the highest of
these X-type samples is then allocated for the aggregate for the next Y-type
interval. The work in [10] also studies other sizing mechanisms that take
the average, or the weighted averages of the X-type samples rather than the
highest of them as in [8]. These algorithms are model-free, i.e., they do not
require a traffic model to be available. Model-based synchronous bandwidth
provisioning schemes that take into account packet level QoS requirements
such as packet delay and packet loss are proposed in [11] and [12], the first
of which also takes the signaling costs into consideration. Reference [13]
proposes a scheme for inter-domain resource management by estimating the
inter-domain traffic using a Kalman filter and then forecasting the capacity
requirement at a future instant by the use of transient probabilities of the
system states. An ARIMA-based traffic model in conjunction with a traffic
forecasting and synchronous bandwidth provisioning scheme is proposed in
[14].
Restricting the bandwidth update decisions to regularly spaced time epochs
as in the synchronous approach may lead to poor bandwidth usage. In asyn-
chronous ABA, bandwidth adjustments take place asynchronously and corre-
sponding bandwidth update decision instants depend on the current system
state. An early work on this approach is by [17] for circuit-oriented traffic
which proposes the increase of bandwidth by a constant predetermined step
each time the current bandwidth can not accommodate a new call and the
bandwidth is decreased by the same constant step when the bandwidth re-
quirement drops back to the original value. Two drawbacks of this proposal
are the potential oscillations around a threshold which might substantially
increase the signaling load and the wastage of bandwidth as the number of
active calls grows due to the use of the constant step. In [18], a bandwidth
allocation policy is proposed that eliminates the above problems by applying
adaptive upper and lower thresholds and hysteresis. Since the computation of
the thresholds requires the construction of an auxiliary Markov chain with known
parameters, the work presented in [18] provides a model-based policy. In [19],
simple operational rules are derived to determine the amount of bandwidth
resources allocated to different connections while balancing bandwidth waste
against connection processing overhead. A heuristic is proposed for a similar
problem for a channel sharing application by [20] which however falls short
of ensuring a desired update rate. Reference [21] proposes a scheme for MPLS
networks that uses continuous-time Markov decision processes. The proposed
scheme decides on when an LSP should be created and how often it should
be re-dimensioned while taking into consideration the trade-off between uti-
lization of network resources and signaling/processing load incurred on the
network. In [16], the authors present an ARCH-based traffic forecasting and
dynamic bandwidth provisioning mechanism. The reference [15] proposes a
novel dynamic bandwidth provisioning scheme for traffic engineered tunnels.
The mechanism in [15] uses information from the traffic trend to make resiz-
ing decisions and is designed to lower signaling and computational overhead
while meeting QoS constraints. Although a vast amount of literature exists
on dynamic bandwidth provisioning while taking into consideration signaling
costs, none of the existing asynchronous algorithms ensure a desired update
rate, which is the main goal of this article.
In this paper, we propose a model-free asynchronous adaptive hysteresis
algorithm for ABA for connection-oriented networks like MPLS. The pro-
posed algorithm does not assume a traffic model to be available and there-
fore it is applicable to a wide range of scenarios with unpredictable and
non-stationary traffic patterns. The approach uses hysteresis to control the
number of updates but the hysteresis operation regime and the band of the
hysteresis vary adaptively over time based on system state and the occupancy
of a leaky bucket that we incorporate for the aim of update frequency control.
To the best of our knowledge, such model-free adaptive hysteresis methods
have not been proposed for dynamic bandwidth allocation in the existing
literature. Our preliminary results have been reported in [22] but a more
extensive study with guidelines on algorithm parameter selection and results
concerning packet-oriented data is presented in the current manuscript.
The rest of this article is organized as follows. In Section 2, we describe the
proposed model-free algorithm in the circuit-oriented traffic case as well as
two model-based conventional approaches. Section 3 describes the proposed
algorithm for packet-oriented traffic. In Section 4, we provide numerical
examples to validate the effectiveness of the proposed approach for both
circuit-oriented and packet-oriented traffic scenarios. Finally, we conclude.
policy. We continue iterations until the actual update rate is equal to
the desired update rate.
• The proposed asynchronous model-free adaptive hysteresis-based ABA.
2.1. Synchronous Model-based ABA
In synchronous model-based ABA denoted by syn-aba, the system is sam-
pled at regularly spaced epochs at t = kT, k = 0, 1, 2, . . ., where T is the
update period and is set to T = 1/β so as to ensure the desired update
rate β. The minimum bandwidth reservation Rk = R(kT ) that guarantees
a desired blocking probability Pb throughout the time interval [kT, (k + 1)T ]
is then chosen on the basis of Nk = N(kT ) and λk which is the average call
arrival rate in the interval [kT, (k +1)T ]. This approach assumes that a-priori
information on λk is available to the ABA agent.
For calculating Rk , we need to study the following problem. Given the
number of calls Nk in progress at time zero with a bandwidth allocation of
R ≥ Nk , the task is to calculate the probability of finding the system in state
R at time t denoted by P (t, R, Nk ). The average blocking probability in an
interval of length T is then given by
P_b^{(T)} = \frac{1}{T} \int_0^T P(t, R, N_k)\, dt,   (4)
which approaches B(ρk , R) as T → ∞ where ρk = λk /µ. The bandwidth
allocation Rk is then chosen as the minimum R for which the probability
given in (4) stays below the desired blocking probability Pb . This idea is
based on [6] that uses the numerical algorithm given in [23] for finding a
solution to (4). The particular procedure in [23] requires a spectral expansion
of the underlying Markov chain. An alternative algorithm is given in [7] for
the same problem by numerically solving a single integral equation. In what
follows, we propose a novel numerical procedure for finding the quantity
P_b^{(T)} that is not only simple to implement but also avoids the spectral
expansion required in [23].
Recall that the number of calls in progress at time 0 is Nk with band-
width allocation R in the interval [0, T ]. Let πj (t), j = 0, 1, . . . , R denote
the probability that there are j calls in progress at time t, 0 ≤ t ≤ T and
let π(t) = [π_0(t), π_1(t), . . . , π_R(t)]. Also define z(t), 0 ≤ t ≤ T, such that
dz(t)/dt = π_R(t). It is not difficult to show that

\frac{d}{dt}\,[π(t), z(t)] = [π(t), z(t)]\, Q,   (5)

where

Q = \begin{bmatrix}
-λ_k & λ_k & & & & & 0 \\
µ & -(λ_k + µ) & λ_k & & & & 0 \\
& 2µ & -(λ_k + 2µ) & λ_k & & & 0 \\
& & \ddots & \ddots & \ddots & & \vdots \\
& & & (R-1)µ & -(λ_k + (R-1)µ) & λ_k & 0 \\
& & & & Rµ & -Rµ & 1 \\
0 & 0 & \cdots & & 0 & 0 & 0
\end{bmatrix},

and

P_b^{(T)} = \frac{1}{T}\, z(T) = \frac{1}{T}\, v\, e^{QT} s.   (6)
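A direct way to evaluate (6) is through a matrix exponential of the augmented generator above. The sketch below is ours and assumes that v is the row vector placing all initial probability mass on state Nk (with a zero z-component) and that s selects the accumulated z(T); these two vectors are not defined in the excerpt shown here, so treat them as assumptions.

```python
import numpy as np
from scipy.linalg import expm

def avg_blocking(lam_k, mu, R, Nk, T):
    """Average blocking probability Pb^(T) of (4), evaluated via (5)-(6):
    an M/M/R/R generator augmented with an accumulator z(t), dz/dt = pi_R(t)."""
    n = R + 2                                   # states 0..R plus the accumulator z
    Q = np.zeros((n, n))
    for j in range(R + 1):
        if j < R:
            Q[j, j + 1] = lam_k                 # call arrival: j -> j + 1
        if j > 0:
            Q[j, j - 1] = j * mu                # call departure: j -> j - 1
        Q[j, j] = -(lam_k * (j < R) + j * mu)   # birth-death diagonal
    Q[R, R + 1] = 1.0                           # feeds the accumulator: dz/dt = pi_R(t)
    v = np.zeros(n); v[Nk] = 1.0                # assumed: all initial mass on state Nk, z(0) = 0
    s = np.zeros(n); s[R + 1] = 1.0             # assumed: picks out z(T)
    return float(v @ expm(Q * T) @ s) / T

def syn_aba_allocation(lam_k, mu, Nk, T, Pb, Cm):
    """syn-aba choice of Rk: smallest R >= Nk (capped at Cm) with Pb^(T) <= Pb."""
    for R in range(max(Nk, 1), Cm + 1):
        if avg_blocking(lam_k, mu, R, Nk, T) <= Pb:
            return R
    return Cm
```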
2.2. Asynchronous Model-based ABA
In this section, we introduce an asynchronous model-based ABA algo-
rithm (denoted by asyn-aba) based on Markov decision processes in which
LSP bandwidth updates occur asynchronously as opposed to syn-aba. Since
the theory of Markov decision processes typically assumes a stationary model,
we assume that the arrival rate is fixed to λ.
We first define the following auxiliary problem denoted by aux-aba. For
this problem, we assign a fixed cost of K for each LSP bandwidth update and
a cost for allocated unit bandwidth per unit time (denoted by b). Our goal is
to minimize the average cost per unit time while maintaining a desired call
blocking probability of Pb . We denote the set of possible states in our model
by S:
where sa refers to the number of active calls using the LSP just after an event
which is defined either as a call arrival or a call departure and sr denotes the
bandwidth allocation before this particular event. For each s = (sa , sr ) ∈ S,
one has a possible action of reserving s′r , sa ≤ s′r ≤ Cm units of bandwidth
until the next event. The time until the next decision epoch (state transition
time) is a random variable denoted by τs that depends only on sa and its
average value is given by E[τs] = 1/(λ + sa µ).
Two types of incremental costs are incurred when at state s = (sa , sr )
and action s′r is chosen; the first one is the cost of allocated bandwidth
which is expressed as bτs s′r where b is the cost parameter of allocated unit
bandwidth per unit time. Secondly, since each reservation update requires
message processing in the network elements, we also assume that a change
in bandwidth allocation yields a fixed cost K. As described, at a decision
epoch, the action s′r (whether to update or not and if an update decision
is made, how much allocation/deallocation will be performed) is chosen at
state (sa , sr ), then the time until, and the state at, the next decision epoch
depend only on the present state (sa , sr ) and the subsequently chosen action
s′r , and are thus independent of the past history of the system. Upon the
chosen action s′r , the state will evolve to the next state s′ = (s′a , s′r ) and s′a
will equal either (sa + 1) or (sa − 1) according to whether the next event
is a call arrival or departure. The probability of the next event being a call
arrival or call departure is given as
p(s'_a | s_a) = \begin{cases} \frac{λ}{λ + s_a µ}, & \text{for } s'_a = s_a + 1, \\ \frac{s_a µ}{λ + s_a µ}, & \text{for } s'_a = s_a - 1. \end{cases}
The problem formulation above reduces to the semi-Markov decision model
described in depth in [24] where the long-run average cost is taken as the
optimality criterion. Relative Value Iteration (RVI)-based algorithms can
efficiently be used for solving aux-aba as in [24]. Now, we propose a semi-
analytic binary search-based procedure to produce a policy for the original
ABA problem for circuit-oriented traffic case under update rate constraints.
For this purpose, we set the maximum value for the update cost parameter
K to Km . This proposed procedure comprises the following steps.
1) First fix K = Km /2, Ku = Km , Kl = 0 and fix the bandwidth cost per
unit time to b.
2) Solve the aux-aba problem to obtain the optimal bandwidth update
rate policy.
3) Simulate the overall system using the policy obtained above and mon-
itor the actual bandwidth update rate denoted by βa .
4) If βa > β + ε for some tolerance parameter ε, then set Kl = K, K =
(Ku + Kl)/2 and go to step 2. If βa < β − ε, then set Ku = K, K =
(Ku + Kl)/2 and go to step 2. Otherwise, stop.
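A minimal sketch of this calibration loop is given below; solve_aux_aba and measure_update_rate stand in for the RVI solver of step 2 and the simulation of step 3, and are assumed helpers rather than routines specified in the text.

```python
def calibrate_update_cost(beta, eps, K_max, b, solve_aux_aba, measure_update_rate):
    """Bisection on the update cost K of aux-aba until the simulated update rate
    matches the target beta (the asyn-aba procedure). Raising K discourages
    updates, so a too-high measured rate moves the lower bound up."""
    K_low, K_high = 0.0, K_max
    K = 0.5 * K_max
    while True:
        policy = solve_aux_aba(K, b)            # step 2: optimal policy for aux-aba
        beta_a = measure_update_rate(policy)    # step 3: simulate and measure beta_a
        if beta_a > beta + eps:                 # too many updates: raise the update cost
            K_low = K
        elif beta_a < beta - eps:               # too few updates: lower the update cost
            K_high = K
        else:
            return policy, K                    # within tolerance: done
        K = 0.5 * (K_low + K_high)              # step 4: bisect and repeat
```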
The above four-step semi-analytic procedure is denoted by asyn-aba due to
its asynchronous nature and it can be shown to produce the optimal policy
for the original ABA problem as ε → 0 and when simulations are run long
enough to estimate the bandwidth update rate accurately. To the best of
our knowledge, the semi-analytic procedure we describe above is novel. The
reason to present this algorithm in this article is that it provides the op-
timum policy for ABA amongst asynchronous model-based algorithms and
therefore it can be used as a benchmark for model-free algorithms that will
be introduced later.
[Figure: hysteresis control. The control action switches between 0 and 1 as the controlled variable x crosses the thresholds Tx − d and Tx + d.]
bandwidth update needs to take place. On the other hand, when an existing
call departs, we write N(t+) = N(t) − 1. We now define an event as either
a call arrival or a call departure. After an event takes place at time t, we
need to decide on making a bandwidth update if one of the two conditions
below is met:
(i)  N(t^+) > R(t),   (8)
(ii) N(t^+) ∉ (N(t_i^+) − d(t), N(t_i^+) + d(t)).   (9)
Note that when the second condition is met, the system occupancy does not
lie in the hysteresis band making it possible for us to make a bandwidth
reservation update. Upon an update decision, say at time ti+1 , the new
bandwidth reservation and the new bucket values are expressed as:
R(t_{i+1}^+) = min(Cm, N(t_{i+1}^+) + ⌈d(t_{i+1})⌉),   (10)
B(t_{i+1}^+) = min(Bm, B(t_{i+1}) + 1),  if R(t_{i+1}^+) ≠ R(t_{i+1}),   (11)
d(t_{i+1}^+) = (Cm/Bm) B(t_{i+1}^+),   (12)
where ⌈x⌉ denotes the smallest integer ≥ x. Note that for t > t_{i+1}, we
rewrite the lower and upper thresholds of the hysteresis as N(t_{i+1}^+) − d(t)
and N(t_{i+1}^+) + d(t), respectively, and the hysteresis band immediately starts
to shrink in size after t = t_{i+1}. This procedure is repeated afterwards.
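The event-driven logic of (8)-(12) can be sketched as follows. The exact rule by which the bucket (and hence the band half-width d(t) = (Cm/Bm) B(t)) decays between updates is not reproduced above; the sketch assumes a linear drain at the target rate β, and all class and method names are ours.

```python
import math

class HysAba:
    """Sketch of the hys-aba rule (8)-(12) for circuit-oriented traffic."""

    def __init__(self, Cm, Bm, beta, N0, R0, B0):
        self.Cm, self.Bm, self.beta = Cm, Bm, beta
        self.N, self.R, self.B = N0, R0, float(B0)
        self.N_ref = N0              # N(t_i^+): occupancy at the last update epoch
        self.t_last = 0.0            # time of the last update decision

    def _bucket(self, t):
        # assumed: leaky bucket drains at rate beta since the last update
        return max(0.0, self.B - self.beta * (t - self.t_last))

    def on_event(self, t, arrival):
        """Handle a call arrival (arrival=True) or departure at time t."""
        self.N += 1 if arrival else -1
        B_now = self._bucket(t)
        d = (self.Cm / self.Bm) * B_now                        # current band half-width
        if self.N > self.R or abs(self.N - self.N_ref) > d:    # conditions (8) and (9)
            new_R = min(self.Cm, self.N + math.ceil(d))        # (10)
            if new_R != self.R:
                B_now = min(self.Bm, B_now + 1.0)              # (11): one token per real update
            self.R = new_R
            self.B, self.N_ref, self.t_last = B_now, self.N, t # band re-centres; d resets as in (12)
        return self.R
```

With Cm = Bm = 10, β = 1/4 updates/min, N(0+) = 5, R(0+) = 6 and B(0+) = 2, such an object can be driven event by event as in the example that follows.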
In order to describe how the proposed algorithm works, we construct
an example system that starts at t = 0 and for which Cm = Bm = 10,
N(0+ ) = 5, R(0+ ) = 6, B(0+ ) = 2 and β = 1/4 updates/min. We assume
at t = 0+ , a bandwidth update has just occurred. Note that with this choice
of β, we have 15 update opportunities per hour. Instead of a stochastic
model, we introduce arrivals and departures at pre-specified instances for
this system. The evolution of N(t), R(t), and the lower and upper hysteresis
thresholds are illustrated in Fig. 2. Let us first focus our attention on the
update epochs. At t = 3 and t = 7, we have departures from the system
and condition (ii) in (9) is satisfied; in other words, N(t+), t = 3, 7, lies
outside the hysteresis band. Therefore, these two time instances are used
for bandwidth updates as described in (10). At the time epochs t = 14 and
t = 15, we have arrivals and we have corresponding bandwidth updates since
the conditions (ii) and (i) in (9) and (8) are met for the first and second
of these time epochs, respectively. We have two more updates at the time
epochs t = 16.5 and t = 19.5 stemming from condition (ii).
Figure 2: The evolution of the number of ongoing calls N(t) and the bandwidth allocation
R(t) as a function of t for a sample scenario for which Cm = Bm = 10, N(0) = 5, R(0) = 6,
B(0) = 2 and β = 1/4 updates/min.
We study the same scenario but with the desired update rate increased
to β = 1 updates/min in Fig. 3. It is clear that when the desired update rate
increases, R(t) starts to track N(t) closely, as indicated in Fig. 3, owing to
the loosened signaling constraint. The reason is that the desired update rate
is now large relative to the arrival rate, so the width of the hysteresis band
drops toward zero more rapidly and the occurrence of an arrival or departure
triggers a bandwidth update in most cases. If we increase β further and
practically remove the signaling constraint, the policy would change the
bandwidth allocation upon each event, as expected, in which case R(t) would
track N(t) exactly. These two examples are not meant to quantify
the effectiveness of the approach but rather to help the reader visualise the
basic features of the proposed algorithm. This algorithm is referred to as
hys-aba due to its reliance on adaptive hysteresis.
Figure 3: The same example as in Fig. 2 with the desired update rate set to β = 1
updates/min.
which is along the same lines as the algorithm already described in the previ-
ous section. For this purpose, let T denote the measurement window length
in units of hours. Recall that Nk denotes the average rate of packet-oriented
traffic measured in the interval [(k − 1)T, kT ] , k = 1, 2, . . . and Rk denotes
the bandwidth allocation in the interval [kT, (k + 1)T ] , k = 1, 2, . . .. Also let
us assume that Cm = maxk Nk is already known or we have a fairly accurate
estimate of Cm . Let Bk denote the occupancy of the leaky bucket at t = kT .
Let β denote the update rate in units of updates/hr and let κ = Cm /η de-
note the learning parameter where η represents a resolution parameter. At
the end of each measurement epoch t = kT , the bucket occupancy is al-
ways decremented by κβT units until the bucket occupancy hits zero, i.e.,
Bk = max(0, Bk−1 − κβT). Moreover, let k_i denote the ith bandwidth update
epoch and let the bucket occupancy just after this update be B_{k_i}^+. We also
assume that the current allocation has just been changed at time k_i to R_{k_i}^+.
Similar to the previous section, we define a lower threshold N_{k_i}^+ − B_j and an
upper threshold N_{k_i}^+ + B_j for k_i ≤ j ≤ k_{i+1} until the next bandwidth update,
but recall that the corresponding hysteresis band (N_{k_i}^+ − B_j, N_{k_i}^+ + B_j)
shrinks unless a bandwidth update takes place. At time t = kT > k_i T, we
need to decide on making a
bandwidth update if the following condition is met:
N_k ∉ (N_{k_i}^+ − B_k, N_{k_i}^+ + B_k).   (13)
Note that the situation of Nk lying outside the hysteresis band as in (13)
gives us an opportunity to make a bandwidth update. Upon a potential
update decision, say at time ki+1 , the new bandwidth allocation, and the
new bucket values are written as:
The above expressions completely describe the proposed algorithm for the
packet-oriented traffic scenario which is also referred to as the hys-aba as
before. As the algorithm suggests, the bucket occupancy Bk ranges between
0 and Cm, and a bucket occupancy value away from these two boundaries
is indicative of compliance with the desired update rate β. The learning
parameter κ determines how quickly the algorithm adapts to changes in the
traffic pattern. When κ is large (η is small), changes in traffic are quickly
detected, at the expense of relatively poor steady-state behavior and possible
update rate violations, since the bucket is more likely to hit the two boundaries
in this scenario. On
the other hand, when κ is small (η is large) then the algorithm learns about
the environment very slowly but with more robust steady-state behavior. In
Section 4, we will provide guidelines for selecting η (or equivalently the κ
parameter) of the algorithm.
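The measurement-driven version can be sketched analogously. Since the update expressions (14)-(16) are not reproduced in the excerpt above, the sketch below fills them in by analogy with (10)-(12), adding the current band half-width to the measured rate and charging the bucket by κ per actual update; treat those two lines as our assumptions rather than the paper's exact rules. β and T must be in consistent units (e.g., updates/hr and hours).

```python
class HysAbaPacket:
    """Sketch of hys-aba for packet-oriented traffic (measurement window T)."""

    def __init__(self, Cm, beta, T, eta):
        self.Cm = Cm                   # maximum allocation (e.g., Mbps)
        self.kappa = Cm / eta          # learning parameter kappa = Cm / eta
        self.drain = self.kappa * beta * T
        self.B = 0.0                   # leaky-bucket occupancy, kept in [0, Cm]
        self.R = Cm                    # current allocation
        self.N_ref = None              # measured rate at the last update epoch

    def on_measurement(self, Nk):
        """Called once per window with the measured average rate Nk; returns Rk."""
        self.B = max(0.0, self.B - self.drain)                     # bucket leaks every window
        if self.N_ref is None or abs(Nk - self.N_ref) > self.B:    # condition (13)
            new_R = min(self.Cm, Nk + self.B)                      # assumed analogue of (10)
            if new_R != self.R:
                self.B = min(self.Cm, self.B + self.kappa)         # assumed analogue of (11)
            self.R, self.N_ref = new_R, Nk
        return self.R
```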
4. Numerical Examples
4.1. Circuit-Oriented Traffic - Single LSP Case
In this example, we study the automatic bandwidth allocation problem
for a single LSP carrying circuit-oriented traffic such as voice. We assume sta-
tionary Poisson call traffic arriving at the LSP and exponentially distributed
call holding times. We are interested in the LSP’s average bandwidth allo-
cation as a function of the desired update rate β. In this setting, we com-
pare the performances of the proposed method hys-aba with the alternative
syn-aba and asyn-aba approaches that are described in Section 2. For bench-
marking purposes, we also present results for the PVP (Permanent Virtual
Path) approach in which the bandwidth to the aggregate is fixed to Cm ac-
cording to the Erlang-B formula given in (1) as well as the SVC (Switched
Virtual Circuit) approach for which the bandwidth allocation is written as
R(t) = min(Cm , N(t)) and the allocation is updated every time a new call
arrives or leaves. Clearly, both approaches guarantee a desired blocking prob-
ability of Pb due to the way Cm is set according to (1). However, the PVP
approach suffers from poor bandwidth usage whereas the SVC approach suf-
fers from high bandwidth update rates. In this example, we set Cm = 16 and
the desired call blocking probability to Pb = 0.01. The mean service time for
calls (1/µ) is set to 180 seconds. The Erlang-B formula in (1) leads us to set
the call arrival rate to λ = 0.0493055 calls/sec, i.e., B(λ/µ, Cm) = Pb = 0.01.
We also set the bucket size Bm to 16 in this example unless otherwise stated.
The average bandwidth use of each algorithm is given in Fig. 4. Note that
in the PVP approach, the allocated bandwidth always equals Cm = 16. For
the SVC case, the average reserved bandwidth equals λ(1 − Pb )/µ = 8.785
since each accepted call will occupy one unit of bandwidth for 1/µ sec. for
the current example. We observe that the hys-aba approach outperforms the
syn-aba approach for all values of the desired update rate β by taking ad-
vantage of asynchronous updates. We note that hys-aba is model-free and
does not assume a traffic model to be available, except for the value of Cm
that ensures a blocking probability Pb. Moreover, as ex-
pected, for low values of β, hys-aba approaches the PVP policy whereas for
very large β it approaches the SVC policy. In the syn-aba algorithm, even for
very large values of β, the bandwidth use would be slightly larger than that
of the SVC policy. On the other hand, the model-based optimal algorithm
asyn-aba produced the best results for all values of β as expected. However,
the relatively small gap between asyn-aba and hys-aba is the price we pay
for not using an a-priori traffic model. We note that the model-free feature
of the proposed algorithm allows us to use this approach in a wider variety
of scenarios. It is also worthwhile to note that the blocking probabilities we
obtained as a result of the hys-aba algorithm were within the neighborhood
0.01 ± 0.0001 for all values of β.
Figure 4: Average bandwidth use of the three methods syn-aba, asyn-aba, and hys-aba
as a function of the desired update rate β for the case λ = 0.0493055 calls/s., Cm = 16,
µ = 1/180 calls/s., and Pb = 0.01.
of updates in each window over a span of 64 hours and U(1) then refers to
the maximum of the 64 counts. U(L) is plotted in Fig. 5 as a function of
the measurement window size L for the two algorithms asyn-aba and hys-aba
when β = 11 updates/hr. Note that U(L) approaches β as the window length
L increases for both algorithms. However, U(L) converges to β more quickly
for the hys-aba algorithm than for the asyn-aba algorithm, which indicates
that hys-aba complies with the update rate constraint even over shorter time
scales. Although the bandwidth utilization performance of hys-aba is slightly
worse than that of asyn-aba, it exhibits stricter update rate compliance.
Note that both controls ensure the desired boundary behavior at B(t) = 0
and B(t) = Bm . In Fig. 6, we plot the percent gain (with respect to the
PVP approach) in average bandwidth use of the linear and non-linear control
Figure 5: The quantity U (L) as a function of the monitoring window length L for the two
algorithms asyn-aba and hys-aba when β is set to 11 updates/hr.
Figure 6: Percent gain in average bandwidth use with respect to PVP as a function of β
for the linear-control, square-control, and sqrt-control hysteresis band policies.
4.1.3. Effect of Bm
For the same example, we also study the effect of the bucket size Bm
on system performance. For this purpose, we vary the bucket size Bm for
several values of β and for three values of Cm . We then plot the percent gain
in average bandwidth use with respect to the PVP approach in Fig. 7. When
varying Cm , we also change λ as a function of Cm according to (1) so that
Pb = B(λ/µ, Cm) = 0.01. For each simulation run except for the scenario
β = 2, the maximum gains have been obtained when Bm = Cm . However, we
note that the performance of the proposed system is quite robust with respect
to this particular choice of Bm . In the remainder of this study concerning
circuit-oriented traffic, Bm will be set to Cm unless otherwise stated.
Figure 7: Percent gain in average bandwidth use with respect to the PVP policy as a
function of the bucket size Bm for various values of β and for Cm = 8, 16, and 32.
4.1.4. Effect of Cm
In this part of the experiment, we vary Cm as in the previous example.
Our goal is to study if the gain in using bandwidth updates in hys-aba changes
with respect to system capacity Cm . We plot the average bandwidth gains
of hys-aba with respect to the PVP approach for three different values of Cm
as a function of β in Fig. 8. It is clear that the systems with lower capacities
benefit more from bandwidth updates using hys-aba. This example shows
that bandwidth updates are effective even with stationary traffic input with
the effectiveness increasing for lower capacity systems. The added benefits
of dynamic bandwidth updates stemming from traffic non-stationarity will
be explored next.
Figure 8: Percent bandwidth gains with respect to the PVP approach as a function of β
for three different values of Cm .
Figure 9: a) Call intensity function for three different values of the parameter α when
K = 0.01, λm = 0.055, and Tp = 1; b) Percent bandwidth gains with respect to the PVP
approach as a function of β for three different values of α.
Figure 10: Overall blocking probability as a function of β for various values of M when
dynamic bandwidth allocation is used.
improvements become more evident when the number of LSPs increases due
to statistical multiplexing effects. However, when β is very low, it is also
possible for a dynamic allocation policy to produce a blocking probability
slightly larger than that of the static allocation policy.
frequency of bandwidth adjustments is then simply the reciprocal of the con-
figured Y-type interval length. Therefore, the Y-type interval length is set to
the reciprocal of the desired update rate β of our proposed algorithm when
these two algorithms are compared against each other. Consequently, these
two algorithms will then have the same average bandwidth adjustment rate.
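For reference, the periodic rule of cisco-aba described above (allocate, for the next Y-type interval, the maximum of the X-type samples collected in the previous one) amounts to the following sketch; the function name is ours.

```python
def cisco_aba(x_samples, x_per_y):
    """Synchronous auto-bandwidth sketch: x_samples are per-X-interval average
    rates; each group of x_per_y consecutive samples (one Y-type interval)
    yields the allocation applied during the next Y-type interval."""
    allocations = []
    for start in range(0, len(x_samples) - x_per_y + 1, x_per_y):
        allocations.append(max(x_samples[start:start + x_per_y]))
    return allocations   # allocations[i] applies during Y-type interval i + 1
```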
• All packets belonging to the same flow are assumed to have the same
packet size. The packet size of a new flow is drawn from the distribution
obtained from the traffic traces of [25], as given in Table 1. For example,
the packet size of a new flow will be uniformly distributed in the interval
[33, 64] bytes with probability 0.2955, uniformly distributed between 65
and 128 bytes with probability 0.2621, and so on.
Figure 11: The intensity λ(t) of the flow arrival process used in the M/G/∞ traffic model.
• When the file size S (in bytes) is determined together with the packet
size P (in bytes), the inter-arrival time I between two successively
arriving packets of the same flow is deterministically chosen so that the
rate of traffic generated by this flow has a mean of 300 Kbps. In
particular, I is set to I = 8P/300 in milliseconds and an overall number
of S/P packets are generated for this particular flow.
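As a small illustration of this flow construction (the function name and the rounding of the packet count are our assumptions), the per-flow schedule can be computed as:

```python
def flow_schedule(S, P):
    """Given a flow of S bytes and a sampled packet size of P bytes (drawn from
    the Table 1 distribution), return the packet count and the fixed
    inter-arrival time that keeps the flow at an average rate of 300 Kbps."""
    I_ms = 8.0 * P / 300.0        # I = 8P/300 ms: P bytes every I ms is 300 Kbps
    n_packets = -(-S // P)        # about S/P packets (rounded up) carry the flow
    return n_packets, I_ms

# Example: a 1 MB flow with 1000-byte packets -> 1049 packets spaced ~26.7 ms apart.
print(flow_schedule(1_048_576, 1000))
```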
Figure 12: The bandwidth allocation dictated by cisco-aba and hys-aba algorithms over a
one-day period for the M/G/∞ traffic model for two different values of β: a) β = 0.5 b)
β = 1.
cially for low β. On the other hand, when the average intensity is increasing,
the cisco-aba bandwidth allocation algorithm appears to lag the actual traffic
whereas an overbooking is evident when the average intensity is decreasing
within a day for β = 0.5. Similar observations are obtained when β = 1:
the tracking performance of both algorithms gets much better, as would
be expected, but still favors the hys-aba algorithm. The reason behind
the outperformance of hys-aba is that cisco-aba makes periodic decisions and
after a decision is made, one needs to wait until the next decision epoch
irrespective of potential significant changes in between two successive deci-
sion epochs. On the other hand, such significant changes are captured in the
hys-aba algorithm while still maintaining a desired update rate β.
We now study the impact of the choice of the parameter η on algorithm
performance. Recall that Nk denotes the average rate of packet-oriented
traffic measured in the interval [(k − 1)T, kT ] , k = 1, 2, . . . and Rk denotes
the bandwidth allocation in the interval [kT, (k + 1)T ] , k = 1, 2, . . .. We
define bandwidth gain as
gain = \frac{\sum_{k=1}^{K-1} (C_m − R_k)}{(K−1)\, C_m},   (17)
effect, we introduce the concept of under-provisioning which arises when the
bandwidth allocation lags the actual traffic requirement. We also introduce
a parameter called RU to denote the under-provisioning rate which is defined
as the ratio of the area between the bandwidth allocation and the actual
traffic when the former lags the latter to the overall number of transmitted
bits over a measurement window. Mathematically,
R_U = \frac{\sum_{k=1}^{K-1} \max(0, N_{k+1} − R_k)}{\sum_{k=1}^{K-1} N_{k+1}}.   (18)
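The two metrics are straightforward to compute from a measured rate series and the corresponding allocations; the helper below is our own and simply mirrors (17) and (18) with zero-based lists.

```python
def gain_and_ru(N, R, Cm):
    """Bandwidth gain (17) and under-provisioning rate RU (18).
    N = [N_1, ..., N_K] are the measured average rates per window and
    R = [R_1, ..., R_{K-1}] the allocations, R_k covering the window of N_{k+1}."""
    num = len(N) - 1                                   # K - 1 allocation windows
    gain = sum(Cm - R[k] for k in range(num)) / (num * Cm)
    lag = sum(max(0.0, N[k + 1] - R[k]) for k in range(num))
    return gain, lag / sum(N[k + 1] for k in range(num))
```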
Figure 13: Bandwidth gain and the under-provisioning rate RU for varying η when Cm is
fixed at 28 Mbps.
Table 2: Bandwidth gain and the under-provisioning rate RU when Cm = 28 Mbps and
η = 16, 32

        RU (%)                              Gain (%)
β       cisco-aba   hys-aba    hys-aba      cisco-aba   hys-aba    hys-aba
                    (η = 16)   (η = 32)                 (η = 16)   (η = 32)
0.25    18.36       0.16       0.10         45.83       38.25      39.94
0.50     8.36       0.40       0.28         52.56       48.69      48.00
1.00     5.01       0.70       0.46         56.35       54.55      54.27
2.00     2.87       0.99       0.74         58.27       57.29      56.97
4.00     2.27       1.65       1.28         59.88       59.51      59.09
Figure 14: Instantaneous and average packet loss rates for the cisco-aba or hys-aba algo-
rithms for synthetic traffic when β = 0.5 and η = 32.
networks. Therefore, we also carry out packet-level simulations for the par-
ticular case of β = 0.5 and η = 32 to find the actual packet loss probability
using these two algorithms as a function of time. In this case, we assume
that all traffic is UDP-type. We feed the incoming traffic to a queue with a
capacity of 100 packets whose service rate is dictated by the bandwidth
allocation algorithm (cisco-aba or hys-aba). The instantaneous loss rate as
a function of time, as well as the long-term average packet loss rate, for the
two algorithms are given in Fig. 14. The results demonstrate that the maximum
loss rate over a day as well as the average packet loss rate are much higher
for the cisco-aba algorithm than for hys-aba.
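For concreteness, the packet-level experiment can be mimicked with a minimal finite-buffer FIFO model such as the one below. This is our simplification (each packet is served at the rate in effect when its service starts), not the authors' simulator, and rate_of(t) stands for the cisco-aba or hys-aba allocation in bit/s at time t.

```python
from collections import deque

def simulate_losses(packets, buffer_packets, rate_of):
    """FIFO queue with a finite buffer (100 packets in the text) served at the
    time-varying allocated rate. packets: time-sorted (arrival_time_s, size_bits).
    Returns the fraction of packets dropped."""
    q = deque()              # sizes of packets waiting or in service
    next_done = 0.0          # time at which the head-of-line packet finishes
    lost = 0
    for t, size in packets:
        while q and next_done <= t:              # complete services before this arrival
            q.popleft()
            if q:
                next_done += q[0] / rate_of(next_done)
        if len(q) >= buffer_packets:
            lost += 1                            # buffer full: drop the arriving packet
            continue
        if not q:
            next_done = t + size / rate_of(t)    # idle server: start service immediately
        q.append(size)
    return lost / len(packets)
```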
Table 3: Bandwidth gain and under-provisioning ratio RU for the traffic trace when Cm =
8.7 Mbps and η = 16, 32

        RU (%)                              Gain (%)
β       cisco-aba   hys-aba    hys-aba      cisco-aba   hys-aba    hys-aba
                    (η = 16)   (η = 32)                 (η = 16)   (η = 32)
0.25    1.59        0.09       0.09          8.71        8.96       7.75
0.50    1.90        0.20       0.19         14.06       10.87      10.65
1.00    0.77        0.39       0.30         16.26       14.68      14.98
2.00    1.86        0.58       0.46         19.88       17.12      17.27
4.00    1.65        1.00       0.82         22.01       20.44      20.34
For the particular scenario of β = 0.5 and η = 32 for the real traffic trace,
we also carry out packet-level simulations for UDP-type traffic. Again, the
service rate of the queue is governed by the bandwidth allocation algorithm
(cisco-aba or hys-aba) and the queue storage capacity is fixed to 100 packets.
The instantaneous loss rate as a function of time, as well as the long-term
average packet loss rate, for the two algorithms are given in Fig. 17. As in
the synthetic traffic case, the results of Fig. 17 demonstrate that the maximum
loss rate as well as the average packet loss rate are much higher for the
cisco-aba algorithm than for hys-aba.
5. Conclusions
In this article, we introduce a simple model-free adaptive hysteresis-
based bandwidth allocation algorithm for automatic provisioning of LSPs
in an MPLS network. For circuit-oriented traffic, we show that the proposed
model-free mechanism yields results very close to those of the optimal
model-based policy, which would require an a-priori stochastic model that is
typically not available due to non-stationary and unpredictable traffic patterns.
We also show the benefits of adaptive hysteresis-based dynamic allocation in
non-stationary traffic scenarios and in link sharing problems related to the
effective use of bandwidth in MPLS networks. We further extend the idea to
packet-oriented data traffic and compare the proposed algorithm against
existing methods in various scenarios. Although more work is needed, our preliminary results show
promise in demonstrating the effectiveness of model-free dynamic bandwidth
allocation algorithms for MPLS networks in which LSPs carry aggregates of
flows.
References
[1] E. Rosen, A. Viswanathan, R. Callon, Multiprotocol Label Switching
Architecture, RFC 3031 (Proposed Standard) (Jan. 2001).
URL http://www.ietf.org/rfc/rfc3031.txt
[4] J. Ash, Y. Lee, P. Ashwood-Smith, B. Jamoussi, D. Fedyk, D. Skalecki,
L. Li, LSP Modification Using CR-LDP, RFC 3214 (Proposed Standard)
(Jan. 2002).
URL http://www.ietf.org/rfc/rfc3214.txt
[5] L. Kleinrock, Queuing Systems, Vol. 1, Theory, John Wiley, New York,
1975.
[8] Cisco, Cisco MPLS autobandwidth allocator for MPLS traffic en-
gineering: A unique new feature of Cisco IOS software, avail-
able at http://www.cisco.com/warp/public/cc/pd/iosw/prodlit/
mpatb_wp.htm (2001).
[13] T. Anjali, C. Scoglio, G. Uhl, A new scheme for traffic estimation and
resource allocation for bandwidth brokers, Computer Networks 41 (6)
(2003) 761–777.
[14] B. Krithikaivasan, K. Deka, D. Medhi, Adaptive bandwidth provisioning
envelope based on discrete temporal network measurements, in: IEEE
INFOCOM, Hong Kong, 2004, pp. 1786–1796.
[24] H. Tijms, A first course in stochastic models, John Wiley and Sons,
2003.
[Figure: allocated bandwidth versus time (hour) comparing the actual traffic with the cisco-aba and hys-aba allocations; panel b) corresponds to β = 1.0.]
Figure 16: Bandwidth gain and under-provisioning ratio RU for the traffic trace with
varying η when Cm is fixed at 8.7 Mbps.
Figure 17: Instantaneous and average packet loss rates for the cisco-aba or hys-aba algo-
rithms for the real traffic trace when β = 0.5 and η = 32.