F5 BIG-IP LTM and GTM Operations Guide v1.0


CONTENTS

ACKNOWLEDGEMENTS

INTRODUCTION
Legal Notices
About This Guide

BIG-IP LTM BASICS
Introduction
Core Concepts
Topologies
CMP/TMM Basics

BIG-IP LTM LOAD BALANCING
Introduction
BIG-IP LTM Load Balancing Methods
Monitors
Troubleshooting

BIG-IP LTM NETWORK ADDRESS OBJECTS
Introduction
Virtual Address
Address Translation
Self IP Address
IPv4/IPv6 Gateway Behavior
Auto Last Hop

BIG-IP LTM VIRTUAL SERVERS
Virtual Server Basics
Virtual Server Types
Practical Considerations
Virtual Server Troubleshooting

BIG-IP LTM PROFILES
Introduction to BIG-IP LTM profiles
Protocol Profiles
OneConnect
HTTP Profiles
SSL Profiles
BIG-IP LTM policies
Persistence Profiles
Other protocols and profiles
Troubleshooting

BIG-IP GTM/DNS SERVICES
Introduction
BIG-IP GTM/DNS Services Basics
BIG-IP GTM/DNS Services Core Concepts
BIG-IP GTM Load Balancing
Architectures
BIG-IP GTM iQuery
BIG-IP GTM/DNS Troubleshooting

IRULES
Introduction
Anatomy of an iRule
Practical Considerations
Troubleshooting

LOGGING
Introduction
Logging and Monitoring
Practical Considerations

ACKNOWLEDGEMENTS

F5 Networks (F5) is constantly striving to improve our services and create closer customer relationships. The F5 BIG-IP TMOS Operations Guide (v1.0) and the F5 BIG-IP Local Traffic Manager (LTM) and BIG-IP Global Traffic Manager (GTM) Operations Guide (v1.0) are tools to assist our customers in obtaining the full value of our solutions.
F5 is proud to be ranked among the world's top technology companies for its product innovations and industry leadership. With the changing nature of publishing technologies and knowledge transmission, we have applied our commitment to innovation in the operations guide development process.
Applying agile concepts, subject matter experts and a group of F5 engineers (including several former F5 customers) gathered for a one-week sprint of collaborative technical writing. Considering the perspective of our customers, these talented engineers put in extreme hours and effort brainstorming, debating, and sifting through a vast quantity of technical content to write an operations guide aimed at streamlining and optimizing F5 product maintenance. The BIG-IP Local Traffic Manager (LTM) and BIG-IP Global Traffic Manager (GTM) Operations Guide is our second exploration of this process.
This is the first version of this guide and should be viewed in that context. F5 is committed to continuous improvement, so check for updates and expansion of content on the AskF5 Knowledge Base. We welcome feedback and invite readers to visit our survey at https://www.surveymonkey.com/s/F5OpsGuide (this link takes you to an outside resource) or email your ideas and suggestions to opsguide@f5.com.
Thanks to Adam Hyde and the staff at BookSprints (this link takes you to an outside resource) for their assistance in exploring the collaborative writing process used in developing the BIG-IP Local Traffic Manager and BIG-IP Global Traffic Manager Operations Guide. BookSprints' combination of onsite facilitation, collaborative platform, and follow-the-sun back-end support, design, and administration is unique in the industry.
F5 Networks Team
Julian Eames, Executive Vice President, Business Operations (executive sponsor)
Don Martin, Vice President, GS New Product & Business Development (executive sponsor)
Travis Marshal and Nicole Ferraro (finance); Petra Duterrow (purchasing); Diana Young (legal); Ignacio Avellaneda and Claire Delaney (marketing)

Jeanne Lewis (publisher, co-author & project manager)
Andy Koopmans (technical writer)
Angus Glanville, Amy Wilhelm, Eric Flores, Geoffrey Wang, John Tapparo, Michael Willhight, Nathan McKay (engineering co-authors)

BookSprints Team
Adam Hyde (founder)
Henrik van Leeuwen (illustrator)
Raewyn Whyte (proofreader)
Juan Carlos Gutiérrez Barquero and Julien Taquet (technical support)
Laia Ros and Simone Poutnik (facilitators)

INTRODUCTION

Introduction
Legal Notices
About This Guide

LEGAL NOTICES INTRODUCTION

Legal Notices
Publication Date
This document was published on December 12, 2014.

Publication Number
BIG-IP LTMGTMOps - 01_0_0

Copyright
Copyright 2014-2015, F5 Networks, Inc. All rights reserved.
F5 Networks, Inc. (F5) believes the information it furnishes to be accurate and reliable. However, F5 assumes no responsibility for the use of this information, nor any infringement of patents or other rights of third parties, which may result from its use. No license is granted by implication or otherwise under any patent, copyright, or other intellectual property right of F5 except as specifically described by applicable user licenses. F5 reserves the right to change specifications at any time without notice.

Trademarks
AAM, Access Policy Manager, Advanced Client Authentication, Advanced Firewall Manager, Advanced Routing, AFM, APM, Application Acceleration Manager, Application Security Manager, ARX, AskF5, ASM, BIG-IP, BIG-IQ, Cloud Extender, CloudFucious, Cloud Manager, Clustered Multiprocessing, CMP, COHESION, Data Manager, DevCentral, DevCentral [DESIGN], DNS Express, DSC, DSI, Edge Client, Edge Gateway, EdgePortal, ELEVATE, EM, Enterprise Manager, ENGAGE, F5, F5 [DESIGN], F5 Certified [DESIGN], F5 Networks, F5SalesXchange [DESIGN], F5Synthesis, f5Synthesis, F5Synthesis [DESIGN], F5TechXchange [DESIGN], Fast Application Proxy, Fast Cache, FirePass, Global Traffic Manager, GTM, GUARDIAN, iApps, IBR, Intelligent Browser Referencing, Intelligent Compression, IPv6 Gateway, iControl, iHealth, iQuery, iRules, iRules OnDemand, iSession, L7 Rate Shaping, LC, Link Controller, Local Traffic Manager, BIG-IP LTM, LineRate, LineRate Systems [DESIGN], LROS, Message Security Manager, MSM, OneConnect, PacketVelocity, PEM, Policy Enforcement Manager, Protocol Security Manager, PSM, Real Traffic Policy Builder, SalesXchange, ScaleN, Signalling Delivery Controller, SDC, SSL Acceleration, software designed applications services, SDAC (except in Japan), StrongBox, SuperVIP, SYN Check, TCP Express, TDR, TechXchange, TMOS, TotALL, Traffic Management Operating System, Traffix Systems, Traffix Systems (DESIGN), Transparent Data Reduction, UNITY, VAULT, vCMP, VE F5 [DESIGN], Versafe, Versafe [DESIGN], VIPRION, Virtual Clustered Multiprocessing, WebSafe, and ZoneRunner are trademarks or service marks of F5 Networks, Inc., in the U.S. and other countries, and may not be used without F5's express written consent. All other product and company names herein may be trademarks of their respective owners.

Patents
This product may be protected by one or more patents indicated at: http://www.f5.com/about/guidelines-policies/patents.

Notice
THE SOFTWARE, SCRIPTING & COMMAND EXAMPLES ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE, SCRIPTING & COMMAND EXAMPLES OR THE USE OR OTHER DEALINGS WITH THE SOFTWARE, SCRIPTING & COMMAND EXAMPLES.

ABOUT THIS GUIDE INTRODUCTION

About This Guide


Important
THIS IS THE FIRST EDITION OF THIS GUIDE AND AS SUCH, IT MAY CONTAIN OMISSIONS OR ERRORS AND IS PROVIDED WITHOUT WARRANTY.
F5 recommends using the information provided in this guide to supplement existing corporate policies, operations requirements, and relevant industry standards. Read and analyze suggestions to suit individual F5 implementations, change management processes, and business operations requirements. Review all guide suggestions and possible optimizations with a subject matter expert, such as an F5 consultant or a corporate network specialist.

Prerequisites
It is assumed that:
The F5 platform is installed as detailed in the F5 Platform Guide for your F5 product,
or a cloud-based virtual server solution is correctly installed and configured.
TMOS has been installed and configured as described in the F5 product
documentation manuals appropriate for your version and module.
F5 BIG-IP TMOS: Operations Guide has been reviewed and applicable
suggestions implemented.
Acceptable industry operating thresholds have been determined and configured with
considerations for seasonal variance in load.
Administrators and Operators are well versed in F5 technology concepts. F5 provides
many training opportunities, services and resources. Details can be found in the F5
BIG-IP TMOS: Operations Guide.

Usage and Scope


The goal of this publication is to assist the administrator or operator in keeping their BIG-IP Local Traffic Manager (LTM) and BIG-IP Global Traffic Manager (GTM) systems healthy, optimized, and performing as designed.

Our intended reader


This guide was written by support engineers who assist customers in solving complex problems every day. Many of these engineers were customers themselves before joining F5 Networks. With this document, we are attempting to take advantage of their unique perspective and hands-on experience to provide the operational and maintenance guidance our customers have requested.
See Escalation and Feedback to help us tailor future versions of the document to better serve you, our customer.

What is covered
The content in this guide addresses the BIG-IP LTM and BIG-IP GTM modules included in versions 11.2.1 through 11.6.0. In most cases, it is agnostic to platform.

What is not covered


Deployment and initial configuration of the BIG-IP LTM and BIG-IP GTM modules will not be covered in this document. There are a number of excellent references on AskF5 that cover these areas. The F5 self-help community, DevCentral, is also a good place to ask questions about initial deployment and configuration. The following illustration shows where this guide can best be applied in the product life cycle.


BIG-IP LTM/BIG-IP GTM Module Lifecycle

What is the focus of the Operations Guide?

Document Conventions
To identify and understand important information, F5 documentation uses certain stylistic conventions, which are detailed in the following sub-sections.

Examples
Only non-routable IP addresses and example domain names are used in this document; valid IP addresses and domain names are required in individual implementations.


New terms
New terms are shown in bold italic text.

Object, name, and command references


Bold text is applied to highlight items such as specific web addresses, IP addresses, utility names, and portions of commands, such as variables and keywords.

Document references
Italic text denotes a reference to another document or section of a document. Bold, italic text indicates a book title reference. AskF5 article links appear in bold text.

Command text examples


Courier font denotes command text and is always shown on a separate line. The initiating command prompt is never shown.

Reference Materials
We have included links for all solution (SOL) articles, but we suggest you see AskF5 for a comprehensive list of applicable and up-to-date reference materials such as deployment guides, hardware manuals, and concept guides.

AskF5
To find a document, go to http://www.askf5.com and select the appropriate product, version, and document type, as shown in the following illustration. A comprehensive list of relevant documents will be displayed. You can add keywords and other identifiers to narrow the search.


Glossary of Terms
Due to the changing nature of technology, we have chosen not to include a glossary in this document. An up-to-date and comprehensive glossary of common industry and F5-specific terms is archived at http://www.f5.com/glossary

Escalation and Feedback


Issue escalation
F5 BIG-IP TMOS: Operations Guide provides details on optimizing your support experience. Refer to Optimizing the Support Experience in that guide on AskF5. Customers with web support contracts can also open a support case by clicking "Open a support case" at https://support.f5.com.

Guide feedback and suggestions


Quality support is important to F5. If you have ideas or suggestions, email opsguide@f5.com and specify the guide with version number. Or, provide feedback using our online survey at https://www.surveymonkey.com/s/F5OpsGuide (This link takes you to an outside resource).


BIG-IP LTM Basics


Introduction
Core Concepts
Topologies
CMP/TMM Basics


INTRODUCTION BIG-IP LTM BASICS

Introduction

The BIG-IP Local Traffic Manager (LTM) module manages and optimizes traffic for network applications and clients. Some typical tasks that BIG-IP LTM can perform include:
Automatically balancing application traffic amongst multiple servers.
Caching and compressing HTTP content.
Inspecting and transforming application content.
Offloading SSL decryption to BIG-IP LTM.
Isolating applications from harmful network traffic.
BIG-IP LTM runs on TMOS, which is the base platform software on all F5 hardware platforms and BIG-IP Virtual Edition (VE).
F5 hardware platforms include acceleration hardware, which may include SSL acceleration, compression acceleration, and an embedded Packet Velocity Accelerator chip (ePVA), which can significantly increase the available throughput of BIG-IP LTM.
BIG-IP VE is available for several popular hypervisor products, which allows BIG-IP LTM to be integrated into a virtual datacenter infrastructure. Additionally, it is available on a number of leading cloud providers.


CORE CONCEPTS BIG-IP LTM BASICS

Core Concepts

BIG-IP LTM treats all network traffic as stateful connection flows. Even connectionless protocols such as UDP and ICMP are tracked as flows. BIG-IP LTM is a default-deny device: unless traffic matches a configured policy, it will be rejected. BIG-IP systems act as a full proxy, meaning that connections through BIG-IP LTM are managed as two distinct connection flows: a client-side flow and a server-side flow.

More detail about BIG-IP LTM connection handling will be covered in subsequent chapters of this guide.

BIG-IP LTM Nomenclature


Nodes - A configuration object represented by the IP address of a device on the
network.
Pool Members - A pool member is a node and service port to which BIG-IP LTM can
load balance traffic. Nodes can be members of multiple pools.
Pools - A configuration object that groups pool members together to receive and
process network traffic in a fashion determined by a specified load balancing
algorithm.
Monitors - A configuration object that checks the availability or performance of
network resources such as pool members and nodes.
Virtual servers - A virtual server allows BIG-IP systems to send, receive, process, and
relay network traffic.
Profiles - Configuration objects that can be applied to virtual servers to affect the
behavior of network traffic.
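These objects fit together in the configuration: a monitor is attached to a pool, pool members belong to the pool, and a virtual server references the pool. As a minimal, hedged sketch in tmsh configuration syntax (the object names and non-routable addresses are illustrative, not from this guide):

```
ltm pool example_http_pool {
    monitor http
    members {
        192.0.2.10:80 { }
        192.0.2.11:80 { }
    }
}
ltm virtual example_http_vs {
    destination 192.0.2.100:80
    ip-protocol tcp
    pool example_http_pool
    profiles {
        tcp { }
        http { }
    }
}
```

Here the http monitor marks each member up or down, and the virtual server load balances client connections across the members that the monitor reports as available.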


TOPOLOGIES BIG-IP LTM BASICS

Topologies

BIG-IP systems can be deployed to a network with little to no modification to existing architecture. However, to optimize the performance of your network and applications, some environment changes may be required to take full advantage of the multipurpose functionality of BIG-IP systems.
There are three basic topologies for load-balanced applications with BIG-IP LTM: one-armed, two-armed, and nPath. nPath is also known as Direct Server Return, or DSR.

One-Armed Deployment
In this deployment, the virtual server is on the same subnet and VLAN as the pool members. Source address translation must be used in this configuration to ensure that server response traffic returns to the client via BIG-IP LTM.


Advantages of this topology:
Allows for rapid deployment.
Requires minimal change to network architecture to implement.
Allows for full use of BIG-IP LTM feature set.
Considerations for this topology:
Client IP address is not visible to pool members as the source address will be
translated by the BIG-IP LTM. This prevents asymmetric routing of server return
traffic. (This can be changed for HTTP traffic by using the X-Forwarded-For header).
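For HTTP traffic in a one-armed deployment, the client IP can be preserved for the servers by enabling X-Forwarded-For insertion in the HTTP profile while using automap source address translation on the virtual server. A hedged sketch (the profile and virtual server names and addresses are illustrative):

```
ltm profile http http_xff {
    defaults-from http
    insert-xforwarded-for enabled
}
ltm virtual one_armed_http_vs {
    destination 192.0.2.100:80
    ip-protocol tcp
    pool example_http_pool
    source-address-translation {
        type automap
    }
    profiles {
        tcp { }
        http_xff { }
    }
}
```

With this configuration, the servers see the BIG-IP self IP as the source address, but the original client IP is carried in the X-Forwarded-For request header.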

Two-Armed Deployment
In this topology, the virtual server is on a different VLAN from the pool members, which requires that BIG-IP systems route traffic between them. Source address translation may or may not be required, depending on overall network architecture. If the network is designed so that pool member traffic is routed back to BIG-IP LTM, it is not necessary to use source address translation.


If pool member traffic is not routed back to BIG-IP LTM, it is necessary to use source address translation to ensure it is translated back to the virtual server IP. The following figure shows a deployment scenario without source address translation:


The following illustration shows the same deployment scenario using source address translation:

Advantages of this topology:
Allows preservation of client source IP.
Allows for full use of BIG-IP LTM feature set.
Allows BIG-IP LTM to protect pool members from external exploitation.
Considerations for this topology:
May necessitate network topology changes in order to ensure return traffic traverses
BIG-IP LTM.


nPath Deployment
nPath, also known by its generic name Direct Server Return (DSR), is a deployment topology in which return traffic from pool members is sent directly to clients without first traversing the BIG-IP LTM. This allows for higher theoretical throughput because BIG-IP LTM only manages the incoming traffic and does not process the outgoing traffic. However, this topology significantly reduces the available feature set that BIG-IP LTM can use for application traffic. If the specific use case for nPath is justified, see BIG-IP Local Traffic Manager: Implementations, covering nPath deployments.


Advantages of this topology:
Allows maximum theoretical throughput.
Preserves client IP to pool members.
Considerations for this topology:
Limits availability of usable features of BIG-IP LTM and other modules.
Requires modification of pool members and network.
Requires more complex troubleshooting.


CMP/TMM BASICS BIG-IP LTM BASICS

CMP/TMM Basics

Data plane traffic on BIG-IP LTM is handled by the Traffic Management Microkernel (TMM). TMM operates on the concept of Clustered Multiprocessing (CMP), which creates multiple TMM processes. To achieve high-performance network traffic processing, connection flows are distributed to the various TMMs based on an F5 proprietary layer-4 algorithm.
There are a few circumstances where certain configurations may function differently in relation to CMP. These are discussed in various chapters of this guide.
For more detail on CMP, see the following AskF5 articles:
SOL14358: Overview of Clustered Multiprocessing - (11.3.0 and later)
SOL14248: Overview of Clustered Multiprocessing - (11.0.0 - 11.2x)


BIG-IP LTM Load Balancing
Introduction
BIG-IP LTM Load Balancing Methods
Monitors
Load Balancing Troubleshooting


INTRODUCTION BIG-IP LTM LOAD BALANCING

Introduction
Overview of load balancing
BIG-IP systems are designed to distribute client connections to load balancing pools that are typically comprised of multiple pool members. The load balancing method (or algorithm) determines how connections are distributed across pool members. Load balancing methods fall into one of two distinct categories: static or dynamic. Factors such as pool member availability and session affinity (sometimes referred to as stickiness) influence the choice of load balancing method.
Certain load balancing methods are designed to distribute connections evenly across pool members, regardless of the pool members' or nodes' current workload. These methods tend to work better with homogeneous server pools and/or homogeneous workloads. An example of a homogeneous server pool is one that is comprised of servers that all have similar processing performance and capacity. An example of a homogeneous workload is one in which all connections are short-lived or all requests/responses are similar in size. Other load balancing methods are designed to favor higher performing servers, possibly resulting in deliberately uneven traffic distribution across pool members. Some load balancing methods take into account one or more factors that change at run time, such as current connection count or capacity; others do not.


BIG-IP LTM LOAD BALANCING METHODS BIG-IP LTM LOAD BALANCING

BIG-IP LTM Load Balancing Methods
Static vs. dynamic
Static load balancing methods distribute incoming connections in a uniform and predictable manner regardless of load factor or current conditions. For example, the Round Robin method causes the BIG-IP LTM system to send each incoming connection to the next available pool member, thereby distributing connections evenly across the pool members over time.
Dynamic load balancing methods distribute connections by factoring in current conditions when making a load balancing decision. Current metrics and statistics are used to derive and adjust the traffic distribution pattern without administrator intervention, based on criteria such as the number of connections a server is currently processing. For example, the Observed load balancing method causes higher performing servers to process more connections over time than lower performing servers.

Static load balancing methods

Round Robin is the default load balancing method on a BIG-IP system. It is a static load balancing method since no dynamic run-time information, except pool member status, is used in the Round Robin algorithm to select the next pool member. The Round Robin load balancing method works best when the pool members are roughly equal in processing and memory capacity, and the application's requests use the servers' resources equally.
The Ratio load balancing method distributes new connections across available members in proportion to a user-defined ratio. This mode is useful when members have disparate available capacities. For example, if a pool contains one fast server and three slower servers, the ratio can be set so the fast server receives more connections. The ratio can be set for each member or each node. Ratio mode distributes new connections in a weighted round-robin pattern. The following figure shows how connections would be distributed if the ratio values were set to 3:2:1:1.
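A 3:2:1:1 distribution like the one in the figure corresponds to per-member ratio values combined with the ratio (member) load balancing mode on the pool. A hedged sketch in tmsh configuration syntax (the pool name and addresses are illustrative):

```
ltm pool ratio_example_pool {
    load-balancing-mode ratio-member
    monitor http
    members {
        192.0.2.10:80 { ratio 3 }
        192.0.2.11:80 { ratio 2 }
        192.0.2.12:80 { ratio 1 }
        192.0.2.13:80 { ratio 1 }
    }
}
```

Out of every seven new connections, three go to the first member, two to the second, and one to each of the remaining members.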

Dynamic load balancing methods

The Least Connections method distributes new connections across available members based on the current connection count between the BIG-IP system and the server. It does not account for connections the servers may have with other systems. This mode is a good general-purpose distribution method for most workloads, but it may be especially useful when supporting long-lived connections like FTP and TELNET. Over time, connections should be distributed relatively evenly across the pool. If multiple devices currently have a similarly low number of connections, traffic is distributed across them in a round robin pattern. The following figure shows how six connections would be distributed, given the connection counts shown below each pool member and assuming that no connections close during the process.

The Fastest load balancing method distributes new connections to the member or node that currently has the fewest outstanding layer 7 requests. If the virtual server does not have both a TCP and a layer 7 profile assigned, BIG-IP LTM cannot track requests and responses, and load balancing will fall back to Least Connections.
This mode is useful for distributing traffic to devices that may differ in their response times depending on the load that previous requests have placed on the system. Over time, connections are distributed relatively evenly if all servers have similar capabilities.
There are many more static and dynamic load balancing methods besides the ones listed here. For more information about additional methods, search on http://www.askf5.com.


Priority group activation

Priority group activation allows pool members to be used only if preferred pool members are unavailable. Each pool member is assigned a priority, and connections are sent to the highest priority pool members first. A minimum number of available members is assigned, and if fewer than this number are available, the next highest priority pool members are activated.
For example, in the following figure, three physical hardware pool members have been assigned a priority of 10. Three additional virtual machines have been deployed as backup and assigned a priority of 5.
If the Priority group activation setting is 2, when all of the pool members are available, only the physical nodes in the priority-10 group will be used.


However, if the number of available priority-10 physical nodes falls below the Priority group activation setting of 2, the virtual machines in the priority-5 group will be activated automatically.
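In pool configuration terms, the scenario above combines per-member priority-group values with the pool's minimum-active-members threshold. A hedged sketch in tmsh configuration syntax (the pool name and addresses are illustrative):

```
ltm pool priority_example_pool {
    min-active-members 2
    monitor http
    members {
        192.0.2.10:80 { priority-group 10 }
        192.0.2.11:80 { priority-group 10 }
        192.0.2.12:80 { priority-group 10 }
        192.0.2.20:80 { priority-group 5 }
        192.0.2.21:80 { priority-group 5 }
        192.0.2.22:80 { priority-group 5 }
    }
}
```

With min-active-members set to 2, the priority-5 virtual machines receive traffic only while fewer than two priority-10 members are available.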

Priority group activation does not modify persistence behavior: any connections sent to lower-priority members will continue to persist even when higher-priority members become available again.

FallBack host (HTTP only)

If all members of a pool fail, virtual servers configured with a FallBack host in an attached HTTP profile can send an HTTP redirect message to the client. This allows the administrator to send connections to an alternate site or an apology server. This option is configured through the HTTP profile.
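The redirect target is defined on the HTTP profile itself. A hedged sketch in tmsh configuration syntax (the profile name and URL are illustrative):

```
ltm profile http http_with_fallback {
    defaults-from http
    fallback-host http://apology.example.com/
}
```

When no pool member is available, clients receive an HTTP redirect to the fallback host rather than a failed connection.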


MONITORS BIG-IP LTM LOAD BALANCING

Monitors
What is a monitor?
Monitors are used by BIG-IP systems to determine whether a pool member (server) is eligible to service application traffic. Monitors can be very simple or very complex depending on the nature of the application. Monitors enable BIG-IP systems to gauge the health of pool members by periodically making specific requests of them in order to evaluate their statuses based on the responses received or not received.

How monitors add value

When implemented properly, a monitor can alert you to stability and availability issues that may exist with an application or arise as the result of a deliberate or unexpected change to application infrastructure. Monitors can make explicit requests to an application, causing it to perform an action which will in turn test vital back-end resources of that application, such as a SQL database.
Monitors can also be used to remove pool members from load balancing during scheduled maintenance windows. Note that this can also be done by manually disabling pool members within the BIG-IP system configuration; however, using a maintenance monitor to facilitate this may preclude the need to:
1. Grant application owners operator-level access to the BIG-IP system.
2. Sync the configuration after the pool member is removed from load balancing, and again after it is returned to service.
A maintenance monitor would be most useful in conjunction with an application health monitor.
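One way to sketch this pattern (the monitor name, URI, and hostname below are illustrative assumptions, not from this guide) is a second HTTP monitor that passes only while a maintenance flag file is absent on the server. Application owners take a member out of rotation by creating the file, which changes the response and fails the monitor:

```
ltm monitor http maintenance_flag_monitor {
    defaults-from http
    interval 5
    timeout 16
    send "GET /maintenance/flag.txt HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n"
    recv "HTTP/1.1 404"
}
```

Assigned to the pool alongside the application health monitor with an "and" requirement (for example, monitor app_health_monitor and maintenance_flag_monitor), both must succeed for a member to stay in service.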


Monitor components
BIG-IP systems include native support for a wide number of protocols and proprietary applications and services (for example, TCP, UDP, HTTP, HTTPS, SIP, LDAP, SOAP, MSSQL, MySQL, etc.). Each monitor type is made up of options relevant to that type, but in general, each will have a request to send and a response to expect. There is also the option of configuring a host and port so the request is sent to a host other than one of the pool members (alias hosts and ports), if necessary.
With respect to HTTP and TCP monitors, for example, the send string and receive string options determine what requests are sent to the pool members or alias hosts, and what responses must be returned in order for the monitoring conditions to be satisfied and the pool member(s) subsequently marked as available.
If the available options don't suit the application, custom monitors can be created and executed using external scripts, or by constructing custom strings to be sent to the application over TCP or UDP.

Monitor implementation
Typically, a monitor will be created that uses the same protocol and simulates a normal request to the back-end pool members that would otherwise be received as part of legitimate client traffic. For instance, in the case of an HTTP-based application, the monitor may make an HTTP request to a webpage on the pool member. Assuming a response is received within the timeout window, the response data will be evaluated against the configured receive string to ensure proper or expected operation of the service.

Choosing an effective receive string

Wherever possible, the request string and the receive string should be as explicit as possible. HTTP receive strings are matched not only against the HTTP content itself (for example, HTML, JSON, or plain text), but against the HTTP headers as well. See AskF5 article SOL3618: BIG-IP HTTP health monitors match receive string against payload and HTTP headers (including cookies) for details.


MONITORS BIG-IP LTM LOAD BALANCING

While it may seem intuitive to simply match strings against the HTTP response code (such as "200" for a good response), some web applications will generate 404 or "page not found" errors using the "200" response code. Additionally, other headers may include digits which would also match "200," such as a "Content-Length: 120055" or a "Set-Cookie:" header that includes those digits. To avoid these unintended matches, it is best to explicitly set "HTTP/1.1 200 OK" (assuming an HTTP version 1.1 response) as the receive string.
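The false-positive risk described above can be sketched in a few lines (the sample response is fabricated for illustration):

```python
# A fabricated 404 response whose Content-Length header happens to
# contain the digits "200".
response = (
    "HTTP/1.1 404 Not Found\r\n"
    "Content-Length: 120055\r\n"
    "\r\n"
    "<html><body>page not found</body></html>"
)

loose_receive = "200"               # matches the header digits: false positive
strict_receive = "HTTP/1.1 200 OK"  # matches only a genuine 200 status line

print(loose_receive in response)    # True  - member wrongly marked up
print(strict_receive in response)   # False - member correctly marked down
```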
Note that some applications have built-in uniform resource identifiers (URIs) that can be used to determine application health, so you may need to contact the application vendor to see if such URIs are implemented. When dealing with a custom or in-house-developed application, the development team can build a page that executes a test suite and responds to BIG-IP system health checks as appropriate. For example:
ltm monitor http finance.example.com_health_http_monitor {
    adaptive disabled
    defaults-from http
    destination *:*
    interval 5
    ip-dscp 0
    recv "HTTP/1.1 200 OK"
    send "GET /app-health-check/status HTTP/1.1\r\nHost: finance.example.com\r\nConnection: close\r\n\r\n"
    time-until-up 0
    timeout 16
}

Using monitors for pool member maintenance


One of the most common operational tasks that BIG-IP system administrators face is to enable and disable pool members during maintenance windows. While this can be achieved by logging on to the BIG-IP system and manually setting a given pool member's state, it may not be desirable to grant operator-level access to application owner teams in order to delegate this function to them. Instead, you can create an additional pool-level monitor to check for a given URI on the server's file system via HTTP. Checking for this file in a known good location will tell the BIG-IP system whether or not the application owners want the pool member to serve traffic, and gives them an easy way to remove the pool member from load balancing by either renaming it or moving it to an alternate location.
For example:
ltm monitor http finance.example.com_enable_http_monitor {
    adaptive disabled
    defaults-from http
    destination *:*
    interval 5
    ip-dscp 0
    recv "HTTP/1.1 200 OK"
    send "GET /app-health-check/enable HTTP/1.1\r\nHost: finance.example.com\r\nConnection: close\r\n\r\n"
    time-until-up 0
    timeout 16
}
ltm monitor http finance.example.com_health_http_monitor {
    adaptive disabled
    defaults-from http
    destination *:*
    interval 5
    ip-dscp 0
    recv "HTTP/1.1 200 OK"
    send "GET /app-health-check/status HTTP/1.1\r\nHost: finance.example.com\r\nConnection: close\r\n\r\n"
    time-until-up 0
    timeout 16
}
ltm pool finance.example.com_pool {
    members {
        10.1.1.10:http {
            address 10.1.1.10
            session monitor-enabled
            state up
        }
        10.1.1.11:http {
            address 10.1.1.11
            session monitor-enabled
            state down
        }
        10.1.1.12:http {
            address 10.1.1.12
            session monitor-enabled
            state down
        }
    }
    monitor finance.example.com_enable_http_monitor and finance.example.com_health_http_monitor
}
Pool members can be pulled from load balancing if the site is determined to be unhealthy or if an application owner renames the /app-health-check/enable file. It can also be accomplished using a disable string, which functions similarly to the receive string, except that it disables the pool member. This method can be used if the health check page is dynamic and the application returns different content when unavailable or in maintenance mode.
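A rough sketch of how a receive string and a disable string combine (the evaluation ordering shown is an assumption for illustration; on the BIG-IP system a disable-string match leaves the member up but blocks new sessions):

```python
# Sketch of receive-string vs. disable-string evaluation. A disable match
# wins here: the member stays up, existing connections persist, but no new
# sessions are sent to it. The function and state names are illustrative.
def evaluate_monitor(response, recv, recv_disable=None):
    if recv_disable and recv_disable in response:
        return "session-disabled"   # maintenance mode: no new connections
    if recv in response:
        return "up"
    return "down"

print(evaluate_monitor("HTTP/1.1 200 OK\r\n\r\nREADY", "READY", "MAINT"))  # up
print(evaluate_monitor("HTTP/1.1 200 OK\r\n\r\nMAINT", "READY", "MAINT"))  # session-disabled
print(evaluate_monitor("HTTP/1.1 503 Service Unavailable", "READY", "MAINT"))  # down
```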
Additional use cases
Some servers may host multiple applications or websites. For example, if a given set of web servers host finance.example.com, hr.example.com, and intranet.example.com as virtual hosts, it probably wouldn't make sense to implement the pool structure so that all of those sites use a single pool with multiple monitors for each site:


ltm pool example.com_pool {
    members {
        10.1.1.10:http {
            address 10.1.1.10
            session monitor-enabled
            state checking
        }
        10.1.1.11:http {
            address 10.1.1.11
            session monitor-enabled
            state checking
        }
        10.1.1.12:http {
            address 10.1.1.12
            session monitor-enabled
            state checking
        }
    }
    monitor finance.example.com_http_monitor and hr.example.com_http_monitor and intranet.example.com_http_monitor
}
Instead, you can implement an individual pool for each site, each with its own set of health monitors:
ltm pool finance.example.com_pool {
    members {
        10.1.1.10:http {
            address 10.1.1.10
            session monitor-enabled
            state down
        }
        10.1.1.11:http {
            address 10.1.1.11
            session monitor-enabled
            state down
        }
        10.1.1.12:http {
            address 10.1.1.12
            session monitor-enabled
            state down
        }
    }
    monitor finance.example.com_http_monitor
}
ltm pool hr.example.com_pool {
    members {
        10.1.1.10:http {
            address 10.1.1.10
            session monitor-enabled
            state down
        }
        10.1.1.11:http {
            address 10.1.1.11
            session monitor-enabled
            state down
        }
        10.1.1.12:http {
            address 10.1.1.12
            session monitor-enabled
            state down
        }
    }
    monitor hr.example.com_http_monitor
}
ltm pool intranet.example.com_pool {
    members {
        10.1.1.10:http {
            address 10.1.1.10
            session monitor-enabled
            state down
        }
        10.1.1.11:http {
            address 10.1.1.11
            session monitor-enabled
            state down
        }
        10.1.1.12:http {
            address 10.1.1.12
            session monitor-enabled
            state down
        }
    }
    monitor intranet.example.com_http_monitor
}
The previous example illustrates how each individual virtual host application can be monitored and marked up or down independently of the others. Note that employing this method requires either an individual virtual server for each virtual host, or an iRule to match virtual hosts to pools as appropriate.

Performance considerations
Considerations for determining an application's health and suitability to serve client requests may include the pool members' loads and response times, not just the validation of the responses themselves. Simple Network Management Protocol (SNMP) and Windows Management Instrumentation (WMI) monitors can be used to evaluate the load on the servers, but those alone may not tell the whole story, and may present their own challenges. In addition to those options, the BIG-IP system also includes an adaptive monitoring setting that allows for evaluation of the response time of the server to the monitor requests.
Adaptive monitoring allows an administrator to require that a pool member be removed from load balancing eligibility in the event that it fails to live up to a defined service level agreement. For example:


ltm monitor http finance.example.com_http_monitor {
    adaptive enabled
    adaptive-limit 150
    adaptive-sampling-timespan 180
    defaults-from http
    destination *:*
    interval 5
    ip-dscp 0
    recv "HTTP/1.1 200 OK"
    send "GET /finance/app-health-check/ HTTP/1.1\r\nHost: finance.example.com\r\nConnection: close\r\n\r\n"
    time-until-up 0
    timeout 16
}
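A rough sketch of what the adaptive settings above enforce, assuming (as an approximation of the documented behavior) that the adaptive limit acts as an absolute response-time ceiling in milliseconds; the function and constant names are illustrative, not BIG-IP API:

```python
# Assumed semantics for illustration: a probe fails when the response time
# exceeds the adaptive limit (milliseconds), even if the content is correct.
# Samples are evaluated over the adaptive-sampling-timespan window (180 s
# in the example above), which this sketch does not model.
ADAPTIVE_LIMIT_MS = 150  # adaptive-limit from the example monitor

def probe_passes(content_ok, response_time_ms, limit_ms=ADAPTIVE_LIMIT_MS):
    """Succeed only if the receive string matched AND the member answered
    within the latency ceiling; a slow-but-correct answer still fails."""
    return content_ok and response_time_ms <= limit_ms

print(probe_passes(True, 42))    # True  - healthy and fast
print(probe_passes(True, 480))   # False - correct content, but too slow
```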

Building send and receive strings


When building HTTP request strings, it is often useful to use tools that can automatically generate and send HTTP headers and also make that information visible in a useful fashion. The curl utility, for example, is a great way to build a request from the command line and dump the client and server headers, as well as the response content itself. For example:
$ curl -v -H "Host: finance.example.com" http://10.1.1.10/app-health-check/status
* STATE: INIT => CONNECT handle 0x8001f418 line 1028 (connection #5000)
* Hostname was NOT found in DNS cache
* Trying 10.1.1.10...
* STATE: CONNECT => WAITCONNECT handle 0x8001f418 line 1076 (connection #0)
* Connected to 10.1.1.10 (10.1.1.10) port 80 (#0)
* STATE: WAITCONNECT => DO handle 0x8001f418 line 1195 (connection #0)
> GET /app-health-check/status HTTP/1.1
> User-Agent: curl/7.37.0
> Accept: */*
> Host: finance.example.com
>
* STATE: DO => DO_DONE handle 0x8001f418 line 1281 (connection #0)
* STATE: DO_DONE => WAITPERFORM handle 0x8001f418 line 1407 (connection #0)
* STATE: WAITPERFORM => PERFORM handle 0x8001f418 line 1420 (connection #0)
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 200 OK
* Server nginx/1.0.15 is not blacklisted
< Server: nginx/1.0.15
< Date: Wed, 10 Dec 2014 04:05:40 GMT
< Content-Type: application/octet-stream
< Content-Length: 90
< Last-Modified: Wed, 10 Dec 2014 03:59:01 GMT
< Connection: keep-alive
< Accept-Ranges: bytes
<
<html>
<body>
Everything is super groovy at finance.example.com baby!
</body>
</html>

* STATE: PERFORM => DONE handle 0x8001f418 line 1590 (connection #0)
* Connection #0 to host 10.1.1.10 left intact
* Expire cleared


At this point you can take the client headers and reconstitute them into the string format that the BIG-IP system expects. In this example:
GET /app-health-check/status HTTP/1.1\r\nHost: finance.example.com\r\nConnection: close\r\n\r\n
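Mechanically, reconstituting the captured headers is just joining them with CRLF pairs and terminating the request with a blank line; a quick sketch:

```python
# The "> " client-header lines captured by curl -v, reconstituted into the
# single CRLF-delimited send string the BIG-IP system expects
# ("Connection: close" substituted so the monitor connection is not held open).
captured = [
    "GET /app-health-check/status HTTP/1.1",
    "Host: finance.example.com",
    "Connection: close",
]

send_string = "\r\n".join(captured) + "\r\n\r\n"  # the blank line ends the request
print(repr(send_string))
```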

Testing receive strings


After a receive string is built, one way to validate the string format is to send it directly over TCP to the pool member using printf and nc (netcat). For example:
$ printf "GET /app-health-check/status HTTP/1.1\r\nHost: finance.example.com\r\nConnection: close\r\n\r\n" | nc 10.1.1.10 80
HTTP/1.1 200 OK
Server: nginx/1.0.15
Date: Wed, 10 Dec 2014 04:32:53 GMT
Content-Type: application/octet-stream
Content-Length: 90
Last-Modified: Wed, 10 Dec 2014 03:59:01 GMT
Connection: close
Accept-Ranges: bytes

<html>
<body>
Everything is super groovy at finance.example.com baby!
</body>
</html>

If working with an SSL-enabled website, the same method can be used with OpenSSL to send a BIG-IP system-formatted monitor string:


$ printf "GET /app-health-check/status HTTP/1.1\r\nHost: finance.example.com\r\nConnection: close\r\n\r\n" | openssl s_client -connect finance.example.com:443 -quiet
depth=0 C = US, ST = WA, L = Seattle, O = MyCompany, OU = IT, CN = localhost.localdomain, emailAddress = [email protected]
verify error:num=18:self signed certificate
verify return:1
depth=0 C = US, ST = WA, L = Seattle, O = MyCompany, OU = IT, CN = localhost.localdomain, emailAddress = [email protected]
verify return:1
HTTP/1.1 200 OK
Server: nginx/1.0.15
Date: Wed, 10 Dec 2014 17:53:23 GMT
Content-Type: application/octet-stream
Content-Length: 90
Last-Modified: Wed, 10 Dec 2014 03:59:01 GMT
Connection: close
Accept-Ranges: bytes

<html>
<body>
Everything is super groovy at finance.example.com baby!
</body>
</html>

read:errno=0
It may also prove handy to capture working requests with a packet capture analysis tool such as Wireshark, and view them from there to pull out the headers and responses.

Upstream monitoring and alerting




If BIG-IP systems have been configured to send SNMP traps to an upstream monitoring system, it should be possible to configure the monitoring system to send email alerts to application owners when pool member state changes occur. Doing so will allow BIG-IP system administrators to delegate monitoring functions to the application owners themselves.

Monitor reference
See BIG-IP Local Traffic Manager: Monitors Reference published on AskF5 for further details. The DevCentral site is also a good resource for specific questions on how to build or troubleshoot a monitor.


TROUBLESHOOTING BIG-IP LTM LOAD BALANCING

Troubleshooting

Load balancing issues can typically fall into one of the following categories:
Imbalanced load balancing, in which connections are not distributed as expected
across all pool members.
No load balancing, in which all connections go to one pool member.
Traffic failure, in which no connections go to any pool member.

Imbalanced load balancing


Imbalanced load balancing occurs when a larger than expected proportion of connections go to a subset of pool members. Depending on the load balancing method chosen, some imbalance can be expected (for example, when using the ratio load balancing methodology).
When diagnosing imbalanced load balancing, consider the following conditions:
Persistence.
Server conditions when using dynamic load balancing methods.
Server conditions when using node-based rather than member-based load balancing
methods.
iRule affecting/overriding load balancing decisions.
Monitor flapping (pool members marked down and up repeatedly in a short period of time).

No load balancing
No load balancing occurs when all connections are directed to one pool member.
When diagnosing no load balancing, consider the following conditions:
Monitors marking pool members down and staying down.
Fallback host defined in HTTP profile.
Persistence.
Server conditions when using dynamic load balancing methods.


Server conditions when using node-based rather than member-based load balancing
methods.
iRule affecting/overriding load balancing decisions.

Traffic failure
Traffic failure occurs when the BIG-IP system receives initial traffic from the client but is unable to direct the traffic to a pool member, or when connections to a pool member fail.
Consider the following conditions:
Improper status (for example, monitor marking down when it should be up or vice
versa).
Inappropriate use of profiles (for example, not using a server-ssl profile with an SSL
server).
Server conditions.
iRule affecting/overriding load balancing decisions.
iRule error.
Insufficient source-address-translation addresses available.

References
The following list includes resources that may be useful in troubleshooting load balancing issues:
SOL12531: Troubleshooting health monitors
SOL3224: HTTP health checks may fail even though the node is responding
correctly
SOL13898: Determining which monitor triggered a change in the availability of a
node or pool member
SOL14030: Causes of uneven traffic distribution across BIG-IP pool members
SOL7820: Overview of SNAT features contains information about insufficient
source-address-translation issues in the SNAT port exhaustion section.
The iRules section of this guide and DevCentral have information about
troubleshooting iRules.



BIG-IP LTM Network Address Objects
Introduction
Virtual Address
Address Translation
Self IP Address
IPv4/IPv6 Gateway Behavior
Auto Last Hop


INTRODUCTION BIG-IP LTM NETWORK ADDRESS OBJECTS

Introduction

IP addresses and netblocks are fundamental to IP configuration and operation. This section covers the various network-address object types and the implications of each.
When an environment uses both IPv4 and IPv6 addressing, the BIG-IP system can also act as a protocol gateway, allowing clients that have only one IP stack to communicate with hosts that use the alternate addressing scheme.
Additional features not covered by this manual that help support a disparate IPv4/IPv6 environment include NAT IPv6 to IPv4 (sometimes referred to as 6to4) and DNS6to4.


VIRTUAL ADDRESS BIG-IP LTM NETWORK ADDRESS OBJECTS

Virtual Address

A virtual address is a BIG-IP object limited in scope to an IP address or IP netblock level. A virtual address can be used to control how the BIG-IP system responds to IP-level traffic destined for a TMM-managed IP address that is not otherwise matched by a specific virtual server object. For example, it is possible to configure the BIG-IP system not to send Internet Control Message Protocol (ICMP) echo replies when all of the virtual servers associated with a virtual address are unavailable (as determined by monitors).
Virtual addresses are created implicitly when a virtual server is created, and they can be created explicitly via tmsh, but they are generally only useful in conjunction with an associated virtual server. Virtual addresses are a mechanism by which BIG-IP LTM virtual server objects are assigned to traffic groups (see Traffic group interaction in the BIG-IP LTM Virtual Servers chapter of this guide for more information).
ARP, ICMP echo, and route advertisement behaviors are controlled in the virtual address configuration.

ARP
The ARP property of the virtual address controls whether or not the BIG-IP system will respond to ARP and IPv6 neighbor-discovery requests for the virtual address. ARP is disabled by default for network virtual addresses.

Note: Disabling a virtual server will not cause the BIG-IP system to stop responding to ARP requests for the virtual address.

ICMP echo
In BIG-IP version 11.3 and higher, the ICMP echo property of the virtual address controls how the BIG-IP system responds to ICMP echo (ping) requests sent to the virtual address.


It is possible to enable or disable ICMP echo responses. In version 11.5.1 and higher, it is also possible to selectively enable ICMP echo responses based on the state of the virtual servers associated with the virtual address.
For more information on this topic, see Controlling Responses to ICMP Echo Requests in BIG-IP Local Traffic Manager: Implementations.

Route advertisement
When using route health injection with dynamic routing protocols, the addition of routing entries into the routing information base is controlled in the virtual address object. The route can be added when any virtual server associated with the virtual address is available, when all virtual servers associated with the virtual address are available, or always.
For more information on route health injection, see Configuring route advertisement on virtual addresses in BIG-IP TMOS: IP Routing Administration.

More Information
For more information about virtual addresses, see About Virtual Addresses in BIG-IP Local Traffic Management: Basics.


ADDRESS TRANSLATION BIG-IP LTM NETWORK ADDRESS OBJECTS

Address Translation
NAT
Network address translation (NAT) is used to map one IP address to another, typically to map between public and private IP addresses. NAT allows bi-directional traffic through the BIG-IP system.
A NAT contains an origin address and a translation address. Connections initiated from the origin address will pass through the BIG-IP system, which will use the translation address as the source on the server side. Connections initiated to the translation address will pass through the BIG-IP system to the origin address.

In BIG-IP version 11.3 and higher, similar behavior can be achieved more flexibly by taking advantage of both the source and destination address configuration in virtual server configurations.

SNAT
Secure network address translation or source network address translation (SNAT) is used to map a set of source addresses on the client side of the BIG-IP system to an alternate set of source addresses on the server side. It is not possible to initiate a connection through a SNAT on the BIG-IP system.


Prior to BIG-IP version 11.3, SNAT objects were typically used to provide uni-directional access through BIG-IP systems. In BIG-IP version 11.3 and higher, similar behavior can be achieved more flexibly by taking advantage of both the source and destination address configuration of a virtual server.
For more information on SNATs, see NATs and SNATs in BIG-IP TMOS: Routing Administration.

SNAT pool
A SNAT pool is a group of IP addresses that the BIG-IP system can choose from to use for a translation address. Each address in a SNAT pool will also implicitly create a SNAT translation if one does not exist.

SNAT translation
A SNAT translation sets the properties of an address used as a translation or in a SNAT pool.
Common properties that might need to be changed for a SNAT translation include which traffic group the address belongs to and whether or not the BIG-IP system should respond to ARP requests for the translation address.

Limitations of SNATs and source address translation
For protocols such as UDP and TCP, network stacks keep track of connections by the source and destination address, protocol, and source and destination protocol ports. When using a SNAT, the source address becomes limited to the addresses used for the SNAT. When the actual traffic uses a variety of ports and destination addresses, this does not typically cause a problem. However, if there is a small set of destination addresses, then the BIG-IP system may not be able to establish a connection due to ephemeral port exhaustion. Typically this problem can be alleviated by leveraging a SNAT pool or adding more addresses to a SNAT pool. For more information, see AskF5 article: SOL8246: How the BIG-IP system handles SNAT port exhaustion.
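As a back-of-the-envelope sketch of why port exhaustion happens (the 64,000-port figure is an approximation; the real usable ephemeral range is smaller):

```python
# Each (translation address, destination IP, destination port) tuple can
# carry at most one connection per ephemeral source port, so concurrency
# toward a small set of destinations is bounded by SNAT addresses x ports.
EPHEMERAL_PORTS = 64000  # approximation of the usable source-port range

def max_concurrent(snat_addresses, destination_tuples):
    """Rough upper bound on concurrent connections before port exhaustion."""
    return snat_addresses * destination_tuples * EPHEMERAL_PORTS

print(max_concurrent(1, 1))  # 64000  - one SNAT address to one server: the bottleneck case
print(max_concurrent(4, 1))  # 256000 - adding SNAT pool addresses scales linearly
```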

More Information
For more information on NATs and SNATs, see NATs and SNATs in BIG-IP TMOS: Routing Administration.


SELF IP ADDRESS BIG-IP LTM NETWORK ADDRESS OBJECTS

Self IP Address

Self IP addresses are configured on BIG-IP systems and associated with a VLAN. These addresses define what networks are locally attached to the BIG-IP system.
Depending on the port lockdown settings for the self IP, these addresses can be used to communicate with the BIG-IP system. However, it is recommended that all management of BIG-IP systems be performed via the management interface on a protected management network using SSH and HTTPS, not via the self IPs.
Self IP addresses are also used by the BIG-IP system when it needs to initiate communications with other hosts via routes associated with the self IP networks. For example, monitor probes are initiated from a non-floating self IP address.
In high-availability environments, configuring floating self IP addresses for each traffic group on each VLAN will ease configuration going forward:
Any virtual servers that utilize an auto-map source address translation setting will work properly in a failover event if mirroring is configured.
If routes need to be configured on adjacent hosts, they can be configured to utilize the floating address to ensure that the routed traffic passes through the active BIG-IP system.


IPV4/IPV6 GATEWAY BEHAVIOR BIG-IP LTM NETWORK ADDRESS OBJECTS

IPv4/IPv6 Gateway Behavior

Since the BIG-IP system is a full proxy and can communicate with both IPv4 and IPv6, it can be used as a gateway between the two protocols. For example, it is possible to create an IPv6 virtual server to service IPv6 clients that uses an IPv4 pool.
When the BIG-IP system converts from one address type to another, it needs to map the client address to the same type as the pool members. In order to do this, the BIG-IP system implicitly uses an auto-map source address translation for the connection. As a result, it will choose a self IP address of the same type as the pool member. If these connections will be mirrored, it is important to ensure that there is a floating self IP address of the appropriate type configured. It is possible to override this implicit behavior by explicitly specifying a source address translation method on the virtual server.


AUTO LAST HOP BIG-IP LTM NETWORK ADDRESS OBJECTS

Auto Last Hop

With the Auto Last Hop feature, which is enabled by default, the return traffic for the client-side connection is sent back to the source MAC address associated with the ingress traffic. In other words, it is sent back via the router it transited on ingress. Auto Last Hop will do this even if the routing table points to a different gateway IP or interface. Without Auto Last Hop enabled, the response traffic can traverse a different return path when load-balancing transparent devices, resulting in asymmetric routing.
In certain circumstances, Auto Last Hop may be disabled globally or at the virtual server, SNAT, NAT, and VLAN object levels. However, most BIG-IP LTM implementations function best with Auto Last Hop left enabled. For more information about Auto Last Hop, see AskF5 article: SOL13876: Overview of the Auto Last Hop setting.
Care should be taken when using Auto Last Hop in conjunction with neighboring routers that implement first-hop redundancy protocols (FHRP) such as HSRP or VRRP, as some issues may be experienced during failover events. For more information, see AskF5 article: SOL9487: BIG-IP support for neighboring VRRP/HSRP routers.
Networks with Cisco Nexus switches introduce their own challenges; however, corrective adjustments can be made on the Nexus switches themselves. See AskF5 article: SOL12440: Neighboring devices using both Virtual PortChannels and HSRP may discard return frames from the BIG-IP system and the relevant Cisco documentation.


BIG-IP LTM Virtual Servers
Virtual Server Basics
Virtual Server Types
Practical Considerations
Virtual Server Troubleshooting


VIRTUAL SERVER BASICS BIG-IP LTM VIRTUAL SERVERS

Virtual Server Basics


Listener
A listener is a network address space object that matches network traffic for management by TMM. Listeners are most frequently virtual servers and can be filtered anywhere from a Classless Inter-Domain Routing (CIDR) netblock, such as 10.1.0.0/16, down to an individual IP address, protocol, and port combination, such as 10.1.1.20:80 (TCP).
There are a number of different virtual server types, and these inherit certain capabilities, depending on their type and scope.
For more comprehensive documentation about virtual servers, see BIG-IP Local Traffic Manager: Concepts.

Address matching precedence


BIG-IP LTM may receive traffic flows that match the configuration of multiple virtual servers. To avoid conflict between these configurations, BIG-IP LTM has an order of precedence for selecting which virtual server the traffic matches. While the matching order can be complex depending on the installed software version and applied configuration, the overall operation can be summed up as matching from most specific to least specific.
The tables which follow demonstrate the order of precedence for matching for five example virtual servers, with the corresponding connection table, in version 11.3 or higher:
Example Configuration Table
Virtual server name | Destination      | Service port | Source
MyVS1               | 192.168.1.101/32 | 80           | 0.0.0.0/0
MyVS2               | 192.168.1.101/32 |              | 192.168.2.0/24
MyVS3               | 192.168.1.101/32 |              | 192.168.2.0/25
MyVS4               | 0.0.0.0/0        | 80           | 192.168.2.0/24
MyVS5               | 192.168.1.0/24   |              | 0.0.0.0/0


Example Connection Table
Inbound source address | Inbound destination address | Virtual server selected by the BIG-IP system
192.168.2.1   | 192.168.1.101:80 | MyVS3 is selected because the destination address matches MyVS1, MyVS2, and MyVS3. The source address matches both MyVS2 and MyVS3, but MyVS3 has a subnet mask narrower than MyVS2.
192.168.2.151 | 192.168.1.101:80 | MyVS2 is selected because the destination address matches MyVS1, MyVS2, and MyVS3. The source address matches only MyVS2.
192.168.10.1  | 192.168.1.101:80 | MyVS1 is selected because the destination address matches MyVS1, MyVS2, and MyVS3. The source address matches only MyVS1.
192.168.2.1   | 192.168.1.200:80 | MyVS5 is selected because the destination address matches MyVS5.

For more information on virtual server precedence, see the following AskF5 articles:
SOL6459: Order of precedence for virtual server matching (9.x - 11.2.1)
SOL14800: Order of precedence for virtual server matching (11.3.0 and later)
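The most-specific-wins selection illustrated by the tables above can be sketched with the standard library's ipaddress module (service ports are omitted; this models only address specificity, not the full precedence rules):

```python
import ipaddress

# (name, destination network, source network) from the example table.
VIRTUAL_SERVERS = [
    ("MyVS1", "192.168.1.101/32", "0.0.0.0/0"),
    ("MyVS2", "192.168.1.101/32", "192.168.2.0/24"),
    ("MyVS3", "192.168.1.101/32", "192.168.2.0/25"),
    ("MyVS4", "0.0.0.0/0",        "192.168.2.0/24"),
    ("MyVS5", "192.168.1.0/24",   "0.0.0.0/0"),
]

def select(src_ip, dst_ip):
    """Most specific destination wins; source specificity breaks ties
    (a longer prefix length means a narrower, more specific network)."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for name, dst_net, src_net in VIRTUAL_SERVERS:
        dst_net = ipaddress.ip_network(dst_net)
        src_net = ipaddress.ip_network(src_net)
        if dst in dst_net and src in src_net:
            key = (dst_net.prefixlen, src_net.prefixlen)
            if best is None or key > best[0]:
                best = (key, name)
    return best[1] if best else None

print(select("192.168.2.1", "192.168.1.101"))    # MyVS3
print(select("192.168.2.151", "192.168.1.101"))  # MyVS2
print(select("192.168.10.1", "192.168.1.101"))   # MyVS1
print(select("192.168.2.1", "192.168.1.200"))    # MyVS5
```

These four calls reproduce the four rows of the example connection table.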

Translation options
With respect to layer 4 virtual servers (standard and FastL4, for example), most network address translation happens automatically, assuming a default BIG-IP LTM configuration.
In the event that the BIG-IP system is not configured as the default gateway for the pool members, or you cannot otherwise guarantee that return traffic will traverse the BIG-IP system, it will probably become necessary to leverage SNAT automap or a SNAT pool assigned to the virtual server to ensure traffic will transit the BIG-IP system on egress and the response will be properly returned to the client.

Traffic group interaction


Traffic groups allow for the distribution of virtual servers and their associated objects across BIG-IP system devices in an active/active fashion. When implementing traffic groups, note the following:


When using an automap source address translation, each traffic group must have
appropriate floating self-IPs defined.
When using a SNATpool source address translation, the SNATpool must contain SNAT
translations that belong to the same traffic group as the virtual server's virtual
address. This will facilitate more seamless failover since the addresses used to NAT
the traffic can follow the traffic group to the next available BIG-IP system device.
To specify the traffic group that one or more virtual servers resides in, assign the associated virtual address object to the appropriate traffic group.

Clone pools
The BIG-IP system includes a feature called clone pools that allows it to mirror TMM-managed traffic off-box to third-party intrusion detection systems (IDS) or intrusion prevention systems (IPS) for further scrutiny. This is potentially a much more flexible and efficient solution than using span ports or network taps.
Configuring a clone pool for a virtual server allows BIG-IP system administrators to select either client-side or server-side traffic for cloning off-box. In the case of SSL-accelerated traffic, cloning the server-side traffic would allow it to be seen by the IDS or IPS in the clear without the need to export SSL key pairs for external decryption.
Furthermore, the ability to leverage a pool of addresses to clone traffic to allows for load balancing of mirrored traffic to more than one IDS/IPS system. For more information about clone pools, see BIG-IP Local Traffic Management Guide.

Interaction with profiles


Profiles are used by virtual servers to enhance or augment the behavior of the BIG-IP system with respect to traffic flowing through it, facilitating protocol-level intelligence. For instance, a virtual server with an HTTP profile applied can subsequently make use of cookie-based persistence profiles and iRules that perform HTTP inspection and modification. For more information on profiles, see BIG-IP LTM Profiles in this guide.


VIRTUAL SERVER TYPES BIG-IP LTM VIRTUAL SERVERS

Virtual Server Types


Common virtual server types
Standard
Standard virtual servers implement full-proxy functionality, meaning that the virtual server maintains two separate connection flows, one client-side and one server-side, and passes data between them. Standard virtual servers are recommended whenever Layer 7 intelligence is required or anticipated to be necessary in the future.
Use a standard virtual server when:
It becomes necessary to ensure that requests from the same client are routed to the
same server using information in the protocol stream.
It will be necessary to inspect, modify, or log data in the protocol stream.
It is desired for BIG-IP LTM to perform protocol enforcement.
Supplemental DDoS protection would be beneficial.
AskF5 article SOL14163: Overview of BIG-IP virtual server types (11.x) provides a detailed comparison of the different virtual server types in the BIG-IP system.
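A minimal standard virtual server might be created in tmsh as follows (object names and addresses are hypothetical):

```shell
# TCP and HTTP profiles give the standard virtual server layer 7
# intelligence (persistence, iRules, policies) on the traffic flow.
tmsh create ltm pool pool_www members add { 192.168.2.2:80 }
tmsh create ltm virtual vs_www destination 192.168.2.1:80 ip-protocol tcp \
    profiles add { tcp http } pool pool_www \
    source-address-translation { type automap }
```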

Performance (layer 4)
The performance (layer 4) virtual server type may be useful when little-to-no layer 4 or layer 7 processing is required. While the BIG-IP system can still perform source and destination IP address translation as well as port translation, the load balancing decisions will be limited in scope due to less layer 7 information being available for such purposes. On F5 hardware platforms which support it, performance (layer 4) virtual servers can offload connection flows to the ePVA, which can result in higher throughput and minimal latency.


Forwarding (IP)
A forwarding IP virtual server allows for network traffic to be forwarded to a host or network address space. A forwarding IP virtual server uses the routing table to make forwarding decisions based on the destination address for the server-side connection flow.

A forwarding IP virtual server is most frequently used to forward IP traffic in the same fashion as any other router. Particular FastL4 profile options are required if stateless forwarding is desired. For more information, see IP Forwarding in Common Deployments in this guide.

For more information about forwarding IP virtual servers, see AskF5 article: SOL7595: Overview of IP forwarding virtual servers.
You can also define specific network destinations and source masks for virtual servers and/or enable them only on certain VLANs, which allows fine-grained control of how network traffic is handled when forwarded. For example, you can use a wildcard virtual server with Source Address Translation enabled for outbound traffic, and add an additional network virtual server with Source Address Translation disabled for traffic destined for other internal networks. Or you can use a performance (layer 4) virtual server to select certain traffic for inspection by a firewall or IDS.
If you use a performance (layer 4) virtual server, you must ensure that translate address and translate port are disabled. These settings are disabled automatically if a virtual server is configured with a network destination address.
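A wildcard forwarding (IP) virtual server for outbound traffic might be sketched in tmsh like this (VLAN and object names are assumptions for illustration):

```shell
# The ip-forward keyword selects the forwarding (IP) type; the FastL4
# profile governs flow handling. Restricting to the internal VLAN
# limits which traffic is routed, and SNAT automap handles outbound
# address translation.
tmsh create ltm virtual vs_forward_all destination 0.0.0.0:0 mask any \
    ip-forward profiles add { fastL4 } \
    vlans-enabled vlans add { VLAN1 } \
    source-address-translation { type automap }
```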

Other Virtual Server Types


Forwarding (layer 2)
Forwarding (layer 2) virtual servers typically share an IP with a node in a configured VLAN, and are usually used in concert with VLAN groups. They are otherwise similar to forwarding (IP) virtual servers.

For more information, see AskF5 article: SOL4362: Overview of Layer 2 (L2) forwarding virtual servers.


Stateless
The stateless virtual server type does not create connection flows and performs minimal packet processing. It only supports UDP and is only recommended in limited situations.

For more information about stateless virtual servers, including recommended uses and sample configurations, see AskF5 article: SOL13675: Overview of the stateless virtual server.

Reject
The reject virtual server type rejects any packets which would create a new connection flow. Use cases include blocking a port or IP within a range covered by a forwarding or other wildcard virtual server. For more information, see AskF5 article SOL14163: Overview of BIG-IP virtual server types (11.x).

Performance (HTTP)
The performance (HTTP) virtual server type is the only virtual server type that implements the FastHTTP profile. While it may be the fastest way to pass HTTP traffic under certain circumstances, this type of virtual server has specific requirements and limitations. Read AskF5 article SOL8024: Overview of the FastHTTP profile before deploying performance (HTTP) virtual servers.

Miscellaneous
BIG-IP LTM supports other virtual server types, but these are used less frequently and have specific use cases. As of TMOS v11.6.0, these include DHCP, internal, and message routing. For more information on these virtual server types, see AskF5 article SOL14163: Overview of BIG-IP virtual server types.


Implicitly created listeners


Some listeners are created and managed by the BIG-IP system implicitly for certain types of traffic when used in conjunction with the relevant service profiles. For instance, when using active FTP with a virtual server that has an FTP profile assigned, the BIG-IP system will dynamically allocate FTP data port listeners that map outbound data connections from the pool members to the inbound virtual server listener address, so that the client sees the connection originating from the IP address it expects. Listeners can also be created manually with the listen iRule command.

SNAT and NAT objects also implicitly create listeners to allow the BIG-IP system to pass traffic involving one or more hosts that require address translation.


PRACTICAL CONSIDERATIONS BIG-IP LTM VIRTUAL SERVERS

Practical Considerations
Virtual server connection behavior for TCP
Different types of virtual servers will each have a unique way of handling TCP connections. For more information about TCP connection behavior for virtual server types, see AskF5 article SOL8082: Overview of TCP connection setup for BIG-IP LTM virtual server types.

Standard vs performance (layer 4)


While standard and performance (layer 4) virtual servers might seem on the surface to perform similar functions, there are some differences in capabilities to be aware of. The following describes these different use cases:
Use a standard virtual server when:
Using some other modules (BIG-IP ASM, BIG-IP APM, BIG-IP AAM).
Using SSL profiles.
Using any layer 7 profile.
Using layer 7 iRule events.
Using layer 7 persistence methods.

Use a performance (layer 4) virtual server when:
No layer 7 interaction is needed.
Using no SSL or passing SSL through BIG-IP LTM.
Making a policy-based forwarding decision with layer 4 information only.
Using only layer 4 and below iRule events.

For selecting the best virtual server type for HTTP traffic, see AskF5 article: SOL4707: Choosing appropriate profiles for HTTP traffic.

Forwarding (IP) vs performance (layer 4)




Both forwarding (IP) and performance (layer 4) virtual servers use FastL4 functionality, and both can use a custom FastL4 profile. Performance (layer 4) virtual servers allow for a pool to be configured and to act as a gateway pool. This is commonly used to load balance firewalls. Forwarding IP virtual servers do not have pools and use the routing table to direct server-side connections.

Source port preservation


Under most circumstances, it will not be necessary for the BIG-IP system to translate the ephemeral port used by the client when mapping the connection to a pool member; however, there are exceptions. The default setting for the Source Port option, preserve, will cause the BIG-IP system to retain the source port of the client whenever possible, but it can change on configurations where the virtual server port and pool member port are different, or in certain situations where the source port is already in use. This ensures that return traffic is sent to the correct TMM CMP instance. This does not affect application operation; however, it may cause confusion when troubleshooting. The behavior is common in instances with SSL offload because the virtual server will have a service port of 443 but the pool member will typically have a service port of 80.
For more information on source port preservation, see the following AskF5 articles:
SOL8227: The BIG-IP system may modify the client source port for virtual server connections when CMP is enabled
SOL11004: Port Exhaustion on CMP systems


VIRTUAL SERVER TROUBLESHOOTING BIG-IP LTM VIRTUAL SERVERS

Virtual Server Troubleshooting
Capturing network traffic with tcpdump
For information about capturing network traffic with tcpdump, see the following AskF5 articles:
SOL411: Overview of packet tracing with the tcpdump utility
SOL13637: Capturing internal TMM information with tcpdump

Methodology
When troubleshooting virtual servers, it is helpful to have a solid understanding of the Open Systems Interconnection (OSI) model. Standard virtual servers operate at layer 4. When troubleshooting virtual servers or other listeners, it is important to consider the lower layers and how they contribute to a healthy connection. These are the physical layer, data-link layer, and network layer.
Physical layer
The physical layer is the physical connection between BIG-IP LTM and other devices, primarily Ethernet cables and fiber optic cables. It includes low-level link negotiation and can be verified by observing interface state.
Data-link layer
The data-link layer is primarily Ethernet and Link Aggregation Control Protocol (LACP). You can validate that the data-link layer is functioning by observing LACP status of trunks or by capturing network traffic to ensure Ethernet frames are arriving at BIG-IP LTM.
Network layer
The network layer is IP, ARP, and ICMP. ARP (or IPv6 neighbor discovery) is a prerequisite for IP to function.


Layer 3 troubleshooting
Your troubleshooting process should include layer 3 investigation. First, ARP resolution should be confirmed. If ARP is failing, it may indicate a lower-layer interruption or misconfiguration. You can validate ARP resolution by checking the ARP table on the BIG-IP system and other IP-aware devices on the local network.

If ARP is succeeding, confirm whether packets from the client are reaching BIG-IP LTM; if not, there may be a firewall blocking packets or routing may not be sufficiently configured.

As an initial step, you should verify IP connectivity between the client and the BIG-IP system, and between the BIG-IP system and any dependent resources, such as pool members. This can usually be accomplished using tools such as ping and traceroute.

Example commands
tcpdump can be used to capture network traffic for troubleshooting purposes. It can be used to view live packet data on the command line or to write captured traffic to standard pcap files which can be examined with Wireshark or other tools at a later time.
Commonly used options for tcpdump:
-i <interface>
This must always be specified and indicates the interface to capture data from. This can be a VLAN or 0.0, which indicates all VLANs. You can also capture management interface traffic using the interface eth0, or capture packets from front-panel interfaces by indicating the interface number, such as 1.1 or 1.2. For VLANs and 0.0, you can append the :p flag to the interface to also capture traffic for the peer flow of any flows that match the filter. See the examples below.
-s0
Configures tcpdump to capture the complete packet. This is recommended when capturing traffic for troubleshooting.

65

VIRTUAL SERVER TROUBLESHOOTING BIG-IP LTM VIRTUAL SERVERS

-w /var/tmp/filename.pcap
Configures tcpdump to write captured packet data to a standard pcap file which can be inspected with Wireshark or tcpdump. When the -w flag is specified, tcpdump does not print packet information to the terminal.

Note: F5 recommends writing packet captures to /var/tmp to avoid service interruptions when working with large captures.
-nn
Disables DNS address lookup and shows ports numerically instead of using a name from /etc/services. Only useful when packets are not being written to a file. Recommended.
It is also important to isolate the desired traffic by specifying packet filters whenever possible, particularly on systems processing large volumes of traffic.
The following examples assume a client IP of 192.168.1.1 and a virtual server of 192.168.2.1:80. The client-side VLAN is VLAN1, and, if two-armed, the server-side VLAN is VLAN2. The pool member is 192.168.2.2:80 for one-armed examples, and 192.168.3.2:80 for two-armed examples. BIG-IP LTM is configured with non-floating self IPs of 192.168.2.3 and 192.168.3.3, and floating self IPs of 192.168.2.4 and 192.168.3.4.
To capture ARP traffic only on the client-side VLAN and log to the terminal:
tcpdump -i VLAN1 -nn -s0 arp
To capture all traffic from a particular client to /var/tmp/example.pcap:
tcpdump -i 0.0 -s0 -w /var/tmp/example.pcap host 192.168.1.1
If Source Address Translation is enabled, the above command will not capture server-side traffic. To do so, use the :p flag (a client-side flow's peer is its respective server-side flow):
tcpdump -i 0.0:p -s0 -w /var/tmp/example.pcap host 192.168.1.1


To capture client-side traffic only from a sample client to the desired virtual server:
tcpdump -i VLAN1 -s0 -w /var/tmp/example.pcap host 192.168.1.1 and host 192.168.2.1 and tcp port 80
To capture traffic only to a pool member in a one-armed configuration (excluding the non-floating self IP to exclude monitor traffic):
tcpdump -i VLAN1 -s0 -w /var/tmp/example.pcap host 192.168.2.2 and tcp port 80 and not host 192.168.2.3
To capture client-side traffic as well, in a two-armed configuration, use this modified version of the last command:
tcpdump -i 0.0:p -s0 -w /var/tmp/example.pcap host 192.168.3.2 and tcp port 80 and not host 192.168.3.3
This is only a small introduction to the filters available for tcpdump. More information is available on AskF5.


BIG-IP LTM Profiles
Introduction
Protocol Profiles
OneConnect
HTTP Profiles
SSL Profiles
BIG-IP LTM Policies
Persistence Profiles
Other Protocols and Profiles
Profile Troubleshooting


INTRODUCTION TO BIG-IP LTM PROFILES BIG-IP LTM PROFILES

Introduction to BIG-IP LTM profiles

Profiles enable BIG-IP LTM to understand or interpret supported network traffic types and allow it to affect the behavior of managed traffic flowing through it. BIG-IP systems include a set of standard (default) profiles for a wide array of applications and use cases, such as performing protocol enforcement of traffic, enabling connection and session persistence, and implementing client authentication. Common profile types include:
Service: layer 7 protocols, such as HTTP, FTP, DNS, and SMTP.
Persistence: settings which govern routing of existing or related connections or requests.
Protocol: layer 4 protocols, such as TCP, UDP, and Fast L4.
SSL: profiles which enable the interception, offloading, and authentication of transport-layer encryption.
Other: profiles which provide various types of advanced functionality.

Profile management
Default profiles are often sufficient to achieve many business requirements. Note the following guidance for working with the default profiles:
Changes to the default profile settings are lost when the BIG-IP system software is upgraded.
Changes to default profiles may have unexpected consequences for custom profiles that otherwise inherit the default profile settings.
To customize a profile, do not modify a default profile. Create a child profile (which will inherit the default profile settings) and customize the child profile as needed.
A custom profile can inherit settings from another custom profile. This allows for efficient management of profiles for related applications requiring similar, but unique, configuration options.
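The inheritance pattern described above can be sketched in tmsh (profile names and the timeout value are illustrative assumptions):

```shell
# Create a custom child profile rather than editing the default:
# any setting not specified here is inherited from the parent "tcp"
# profile and will track future parent changes.
tmsh create ltm profile tcp tcp_lan_optimized defaults-from tcp idle-timeout 600

# A second custom profile can inherit from the first custom profile.
tmsh create ltm profile tcp tcp_lan_app1 defaults-from tcp_lan_optimized
```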


PROTOCOL PROFILES BIG-IP LTM PROFILES

Protocol Profiles
Introduction
There are a number of profiles available that are jointly described as protocol profiles. For more information on the following profiles, see Protocol Profiles in BIG-IP Local Traffic Manager: Profiles Reference, along with the referenced solutions.

Fast L4
FastL4 profiles are used for performance (layer 4), forwarding (layer 2), and forwarding (IP) virtual servers. The FastL4 profile can also be used to enable stateless IP-level forwarding of network traffic in the same fashion as an IP router. See AskF5 article: SOL7595: Overview of IP forwarding virtual servers for more information.

Fast HTTP
The FastHTTP profile is a scaled-down version of the HTTP profile that is optimized for speed under controlled traffic conditions. It can only be used with the performance (HTTP) virtual server and is designed to speed up certain types of HTTP connections and to reduce the number of connections to back-end servers.

Given that the FastHTTP profile is optimized for performance under ideal traffic conditions, the HTTP profile is recommended when load balancing most general-purpose web applications.

See AskF5 article: SOL8024: Overview of the FastHTTP profile before deploying performance (HTTP) virtual servers.


Transmission Control Protocol


Use of a Transmission Control Protocol (TCP) profile is mandatory on standard TCP virtual servers. FastHTTP and HTTP profiles also require the use of the TCP profile.

For more details on the TCP profile, see AskF5 article: SOL7759: Overview of the TCP profile.

User Datagram Protocol

Use of a User Datagram Protocol (UDP) profile is mandatory on standard UDP virtual servers.

For more detail on the UDP profile, see AskF5 article: SOL7535: Overview of the UDP profile.

Stream Control Transmission Protocol


Stream Control Transmission Protocol (SCTP) is a layer 4 transport protocol, designed for message-oriented applications that transport signaling data, such as Diameter.


ONECONNECT BIG-IP LTM PROFILES

OneConnect
Introduction
The OneConnect profile may improve HTTP performance by reducing connection setup latency between the BIG-IP system and pool members, as well as minimizing the number of open connections to them. The OneConnect profile maintains a pool of connections to the configured pool members. When there are idle connections available in the connection pool, new client connections will use these existing pool member connections if the configuration permits. When used in conjunction with an HTTP profile, each client request from an HTTP connection is load balanced independently.

For information about settings within the OneConnect profile, see About OneConnect Profiles in BIG-IP Local Traffic Management: Profiles Reference.

OneConnect use cases


By enabling OneConnect on a virtual server, HTTP requests can be load balanced to different pool members on a request-by-request basis. When using an HTTP profile without a OneConnect profile, the BIG-IP system makes a load balancing decision based on the first request received on the connection, and all subsequent requests on that connection will continue to go to the same pool member.

iRules can be used to select pools and pool members on the fly, based on information contained in each request, such as the URI path or user-agent string.

The OneConnect source mask setting manages connection re-use, and is applied to the client address on the server side of a connection to determine its eligibility for connection re-use. In this way, it is possible to limit which clients can share server-side connections with each other.


See AskF5 article: SOL5911: Managing connection reuse using OneConnect source mask.
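A custom OneConnect profile using the source mask setting might be sketched as follows (the profile and virtual server names are hypothetical):

```shell
# A source mask of 255.255.255.255 restricts server-side connection
# re-use to requests from the same client address; the default of
# 0.0.0.0 lets all clients share idle server-side connections.
tmsh create ltm profile one-connect oneconnect_per_client \
    defaults-from oneconnect source-mask 255.255.255.255
tmsh modify ltm virtual vs_www profiles add { oneconnect_per_client }
```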

OneConnect limitations
Use of the OneConnect profile requires careful consideration of the following limitations and implications:
OneConnect is intended to work with traffic that can be inspected by the BIG-IP system at layer 7. For example, using OneConnect with SSL passthrough would not be a valid configuration.
For HTTP traffic, OneConnect requires the use of an HTTP profile.
Before leveraging OneConnect with non-HTTP applications, see AskF5 article: SOL7208: Overview of the OneConnect profile.
Applications that utilize connection-based authentication, such as NTLM, may need additional profiles or may not be compatible with OneConnect. An example is an application that relays an authentication token once on a connection and expects the server to authorize requests for the life of the TCP session. For more information, see AskF5 article: SOL10477: Optimizing NTLM traffic in BIG-IP 10.x or later or Other Profiles in BIG-IP Local Traffic Management: Profiles Reference.


When using the default OneConnect profile, the pool member cannot rely on the client IP address information in the network headers to accurately represent the source of the request. Consider using the X-Forwarded-For option in an HTTP profile to pass the client IP address to the pool members via an HTTP header.


HTTP PROFILES BIG-IP LTM PROFILES

HTTP Profiles

BIG-IP LTM includes several profile types used for optimizing or augmenting HTTP traffic, including the following:
HTTP
HTTP compression
Web acceleration

HTTP
The HTTP profile enables the use of HTTP features in BIG-IP LTM policies and iRules, and is required for using other HTTP profile types, such as HTTP compression and web acceleration.

In BIG-IP version 11.5 and higher, HTTP profiles can operate in one of the following proxy modes: reverse, explicit, and transparent. This guide will focus on reverse proxy mode because it is the mode intended for a typical application deployment. For more information on explicit and transparent modes, see the BIG-IP LTM manuals and AskF5 articles.

Here are some of the most common options:
Response chunking changes the behavior of BIG-IP LTM handling of a chunked response from the server. By default it is set to Selective, which will reassemble and then re-chunk chunked responses while preserving unchunked responses. This is the recommended value. However, setting this to Unchunk may be required if clients are unable to process chunked responses.
OneConnect Transformations controls whether BIG-IP LTM modifies Connection: Close headers sent in response to HTTP/1.0 requests to Keep-Alive. This is enabled by default. It allows HTTP/1.0 clients to take advantage of OneConnect.
Redirect Rewrite allows BIG-IP LTM to modify or strip redirects sent by pool members in order to prevent clients from being redirected to invalid or unmanaged URLs. This defaults to None, which disables the feature.
Insert X-Forwarded-For causes BIG-IP LTM to insert an X-Forwarded-For HTTP header in requests. This reveals the client IP address to servers even when source address translation is used. By default, this is Disabled.
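A custom HTTP profile enabling the Insert X-Forwarded-For option could be sketched like this (profile and virtual server names are hypothetical):

```shell
# Child profile inherits all other settings from the default "http"
# profile; pool members then see the original client IP in the
# X-Forwarded-For header even when SNAT is in use.
tmsh create ltm profile http http_xff defaults-from http insert-xforwarded-for enabled
tmsh modify ltm virtual vs_www profiles add { http_xff }
```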


HTTP Compression Profile


HTTP Compression profiles allow BIG-IP LTM to compress HTTP response content, which can substantially reduce network traffic. Although data compression can also be performed by most web servers, using BIG-IP LTM to compress traffic for load-balanced applications can help minimize server load and increase total capacity.

If the web server has already compressed the data, the BIG-IP system cannot perform traffic inspection.

F5 recommends reviewing AskF5 and/or the manuals regarding the options available for HTTP compression profiles before implementing this feature. However, the default profile will compress most text, including HTML, JavaScript, and XML data. Most image data cannot be further compressed.
Some common options with notable implications include:
Keep Accept Encoding should only be enabled if the server should compress data instead of BIG-IP LTM. Otherwise, the default behavior when HTTP Compression is enabled is to strip the Accept-Encoding header from requests, which prevents HTTP servers from compressing responses.
HTTP/1.0 Requests allows BIG-IP LTM to compress responses to HTTP/1.0 requests. This is disabled by default.
CPU Saver settings allow BIG-IP LTM to reduce compression levels and disable compression altogether when CPU use reaches a predefined threshold. It is highly recommended to leave this enabled.
gzip options allow you to adjust the compression level. Higher values will increase compression efficiency, but result in a higher resource cost (in both CPU and memory) and higher latency. The default settings were chosen to balance resource use, minimal latency, and effective compression.
For more information about the HTTP compression profile, see AskF5 article: SOL15434: Overview of the HTTP Compression profile.

Web Acceleration Profile


Web acceleration profiles enable caching behavior in BIG-IP LTM, according to guidelines set in RFC 2616. Web acceleration is highly recommended if you are using HTTP compression and have a large amount of static content, as BIG-IP LTM can cache the compressed response, resulting in reduced BIG-IP LTM and server load.

F5 recommends customizing cache settings to maximize retention of static data. For more information, see AskF5 article: SOL14903: Overview of the Web Acceleration profile.


SSL PROFILES BIG-IP LTM PROFILES

SSL Profiles

BIG-IP LTM supports encrypting both the client-side and server-side flows on full-proxy virtual servers. Client SSL profiles encrypt and decrypt the client-side flow (BIG-IP LTM acts as the SSL server), and server SSL profiles encrypt and decrypt the server-side flow (BIG-IP LTM acts as the SSL client).

When no SSL profile is enabled on a virtual server but the pool members expect SSL, it is called SSL passthrough, and no layer 7 profiles may be used. A FastL4 virtual server may be preferred for maximum performance in this instance.

SSL Offload

When operating in SSL offload mode, only a client SSL profile is used, and while the connection between the client and the BIG-IP system is encrypted, the connection from BIG-IP LTM to the pool members is unencrypted. This completely removes the requirement for the pool members to perform SSL encryption and decryption, which can reduce resource usage on pool members and improve overall performance.

SSL offload should only be used on trusted, controlled networks.
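An SSL offload configuration might be sketched in tmsh as follows (the certificate, key, pool, and virtual server names are hypothetical and assume the certificate and key are already installed on the BIG-IP system):

```shell
# The client SSL profile decrypts on the client side; the virtual
# server then speaks plain HTTP to pool members on port 80.
tmsh create ltm profile client-ssl clientssl_www defaults-from clientssl \
    cert www.example.com.crt key www.example.com.key
tmsh create ltm virtual vs_www_ssl destination 192.168.2.1:443 ip-protocol tcp \
    profiles add { tcp http clientssl_www } pool pool_www
```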


SSL re-encryption

When operating in SSL re-encryption mode, both client SSL and server SSL profiles are configured, and the connection to pool members is also encrypted. This requires that the pool members perform SSL encryption and decryption as well, but offers security on the server-side flow.

Server SSL profiles can also be configured with a client certificate to authenticate to the pool member using SSL client certificate authentication. This method can be leveraged to ensure that the service can only be accessed via the BIG-IP LTM virtual server, even in the event that a client can initiate connections directly to the pool members.
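Re-encryption adds a server SSL profile alongside the client SSL profile (names below are hypothetical; a virtual server with a client SSL profile is assumed to exist):

```shell
# The server SSL profile re-encrypts the server-side flow toward
# pool members listening on SSL.
tmsh create ltm profile server-ssl serverssl_www defaults-from serverssl
tmsh modify ltm virtual vs_www_ssl \
    profiles add { serverssl_www { context serverside } }
```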

Other SSL features


Other features that may be of interest include:
Proxy SSL allows clients to perform client certificate authentication directly to the pool member while still allowing BIG-IP LTM to inspect the traffic.
SSL Forward Proxy (introduced in v11.5.0) allows BIG-IP LTM to intercept SSL traffic destined for external, third-party servers. For more information, see the relevant sections of BIG-IP Local Traffic Manager: Concepts and BIG-IP Local Traffic Manager: Monitors Reference for your version.

It is also possible to attach only a server SSL profile without a client SSL profile, but as this scenario doesn't encrypt the client side of the connection, it may be of limited use in your environment.


SSL profile options


Full documentation of SSL profiles is available in the AskF5 articles SOL14783: Overview of the Client SSL profile (11.x) and SOL14806: Overview of the Server SSL profile (11.x).

Here are some of the more common options:
Certificate specifies the certificate BIG-IP LTM will offer. On client SSL profiles, this must be a server certificate; on server SSL profiles, it must be a client certificate.
Key specifies the key which will be used for authentication purposes. It must match the configured certificate. Instructions to verify this are available on AskF5. When attempting to configure a certificate and key, BIG-IP LTM will generate an error if they do not match.
Chain (client SSL only) allows you to configure an intermediate or chain certificate that BIG-IP LTM will present in addition to its own certificate. Contact your certificate authority to determine whether you need this.
Passphrase allows you to specify the passphrase if you are using an encrypted key. This increases security because if the key is lost it is still not compromised. The passphrase is also encrypted in the configuration using the SecureVault TMOS feature.

Note: On client SSL profiles in BIG-IP v11.6 and higher, the Certificate Key Chain section must be saved using the Add button. You can now configure multiple key and certificate pairs (including a chain certificate, if required) per SSL profile: one for RSA and one for DSA.
Ciphers is an OpenSSL-format cipher list that specifies which ciphers will be enabled for use by this SSL profile. Defaults vary between client and server profiles and between versions.
Options can be used to selectively disable different SSL and TLS versions, if required by your security policy.
Client Authentication (client SSL only) contains parameters which allow BIG-IP LTM to verify certificates presented by an SSL client.


Server Authentication (server SSL only) contains parameters which allow BIG-IP LTM to validate the server certificate; normally, it is ignored.

Cipher strings and SSL/TLS version


For information on specifying SSL/TLS cipher strings for use with SSL profiles, see the following AskF5 articles:
SOL13163: SSL ciphers supported on BIG-IP platforms (11.x)
SOL15194: Overview of the BIG-IP SSL/TLS cipher suite
SOL8802: Using SSL ciphers with BIG-IP Client SSL and Server SSL profiles


BIG-IP LTM POLICIES BIG-IP LTM PROFILES

BIG-IP LTM policies

Introduced in TMOS v11.4.0, BIG-IP LTM policies supersede HTTP class profiles and introduce functionality to the configuration which was previously only available through iRules. Unlike other types of profiles, BIG-IP LTM policies are assigned to virtual servers on the Resources page.

BIG-IP LTM policies are organized into two major sections: General Properties and Rules.

General Properties
The most important settings in General Properties are Requires and Controls. Requires allows you to specify what information the policy will be able to use to make decisions, and Controls allows you to specify the types of actions it is able to take. Some selections from Requires and Controls need particular profiles to be assigned to the virtual servers which use the policy. For example, a Requires setting of http depends on an HTTP profile, and a Controls setting of serverssl depends on a server SSL profile.

Rules
Each Rule is composed of Conditions and Actions. Conditions define the requirements for the rule to run; Actions define the action taken if the Conditions match.
By default, the first Rule with matching Conditions will be applied; this behavior can be changed using the Strategy setting under General Properties. For in-depth information on this behavior, see AskF5 article: SOL15085: Overview of the Web Acceleration profile.
A common use case for BIG-IP LTM policies is selecting a pool based on HTTP URI. For this you need a BIG-IP LTM policy that Requires http and Controls forwarding. You can then configure Rules with an operand of http-uri and an Action with a target of forward to a pool.
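As a rough illustration, a first-match policy of this kind might be built in tmsh along the following lines. This is a hedged sketch only: exact syntax varies between TMOS versions, and the policy, rule, and pool names are invented for the example.

```
# Sketch only -- pre-12.1 tmsh syntax; object names are illustrative.
create ltm policy uri_routing_policy \
    requires { http } controls { forwarding } strategy first-match

modify ltm policy uri_routing_policy rules add {
    images_rule {
        conditions add {
            0 { http-uri path starts-with values { /images/ } }
        }
        actions add {
            0 { forward select pool images_pool }
        }
        ordinal 1
    }
}
```

The policy is then assigned to the virtual server on its Resources page, alongside an HTTP profile (required by the http operand).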


BIG-IP LTM policies offer room for expansion: Rules can have multiple operands (all must match for the rule to match) or actions. Using the above example, if some pools for a virtual server require server SSL while others do not, you could add a Controls and Action of serverssl to disable server SSL when it is not needed. You can also introduce advanced HTTP caching or compression behavior using BIG-IP LTM policies. For example, if some pools serve only static content, you may want to enable caching only for those pools.
BIG-IP LTM policies offer a broad range of capabilities, and new versions frequently introduce more. Refer to BIG-IP LTM Concepts for a complete listing.


Persistence Profiles
Overview of persistence
Many applications serviced by BIG-IP LTM are session-based and require that the client be load balanced to the same pool member for the duration of that session. BIG-IP LTM can accomplish this through persistence. When a client connects to a virtual server for the first time, a load balancing decision is made and then the configured persistence record is created for that client. All subsequent connections that the client makes to that virtual server are sent to the same pool member for the life of that persistence record.

Common persistence types


Cookie persistence uses an HTTP cookie stored on a client's computer to allow the client to reconnect to the same server previously visited at a web site. Because a cookie is an object of the HTTP protocol, use of a cookie persistence profile requires that the virtual server also have an HTTP profile assigned. The cookie persistence profile supports the following methods:
Cookie insert. BIG-IP LTM inserts an additional cookie into the response of a pool
member. By default, the cookie has no explicit expiration time, making it valid for the
life of the browser session. This is the most common and seamless method of cookie
persistence to deploy. For more information on the cookie insert method see AskF5
article: SOL6917: Overview of BIG-IP persistence cookie encoding.
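The default (unencrypted) cookie value encodes the pool member's IPv4 address and port with their bytes reversed. As an illustration of the encoding described in SOL6917, the following sketch decodes such a value; the function name is ours for the example, not an F5 API.

```python
import ipaddress

def decode_bigip_cookie(value: str) -> tuple[str, int]:
    """Decode a classic unencrypted BIG-IP persistence cookie value.

    The value has the form "<ip>.<port>.0000", where the IPv4 octets
    and the port bytes are stored in reversed (little-endian) order.
    """
    enc_ip, enc_port, _ = value.split(".")
    # Reassemble the IPv4 address from the little-endian 32-bit integer.
    ip = ipaddress.IPv4Address(int(enc_ip).to_bytes(4, "little"))
    # Swap the two port bytes back into network byte order.
    port = int.from_bytes(int(enc_port).to_bytes(2, "little"), "big")
    return str(ip), port

# A cookie value of "1677787402.20480.0000" decodes to 10.1.1.100:80.
```

Because the pool member is recoverable this way, unencrypted cookies disclose internal addressing, which is the motivation for the cookie encryption option noted below.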

Note: Unencrypted cookies may be subject to unintended information disclosure (this is the default behavior). If the security policies of your organization require this cookie value to be encrypted, see AskF5 article: SOL14784: Configuring BIG-IP cookie encryption (10.x - 11.x).


Cookie passive and cookie rewrite. With these methods, the pool member must
provide all or part of the cookie contents. These methods require close collaboration
between the BIG-IP LTM administrator and the application owner to implement properly.
Source address affinity directs session requests to the same pool member based on the source IP address of a packet. This profile can also be configured with a network mask so that multiple clients can be grouped into a single persistence record. The network mask can also be used to reduce the amount of memory used for storing persistence records. The defaults for this method are a timeout of 180 seconds and a mask of 255.255.255.255 (one persistence record per source IP). See AskF5 article: SOL5779: BIG-IP LTM memory allocation for Source Address Affinity persistence records for more information.
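For example, a custom source address affinity profile that groups all clients in the same /24 network under one persistence record might look like this in tmsh (a sketch; the profile name is illustrative):

```
# Group clients by /24 network; one record per subnet, 180 s timeout.
create ltm persistence source-addr src_persist_24 \
    defaults-from source_addr mask 255.255.255.0 timeout 180
```

A coarser mask reduces the number of persistence records held in memory, at the cost of granularity.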
Universal persistence directs session requests to the same pool member based on customizable logic written in an iRule. This persistence method is commonly used in many BIG-IP iApps.

Other persistence types


The following persistence methods are also available.
SSL persistence tracks non-terminated SSL sessions, using the SSL session ID. Even when the client's IP address changes, BIG-IP LTM still recognizes the connection as being persistent based on the session ID. This method can be particularly problematic when working with clients that frequently re-negotiate the SSL session.
For more information, see AskF5 article: SOL3062: Using SSL session ID persistence.
Destination address affinity directs session requests to the same pool member based on the destination IP address of a packet. This method is commonly used for load balancing outbound traffic over dual WAN circuits or to multiple network devices. This method should not be used when load balancing application traffic to load balancing pools.
Hash persistence is similar to universal persistence, except that the persistence key is a hash of the data, rather than the data itself. This may be useful when it is necessary to obfuscate the data that is being persisted upon.


Microsoft Remote Desktop Protocol (MSRDP) persistence tracks sessions between clients and pool members based on either a token provided by a Microsoft Terminal Services Session Directory/TS Session Broker server or the username from the client.

Note: Since the routing tokens may end up identical for some clients, the BIG-IP system may persist the RDP sessions to the same RDP servers.
SIP persistence is used for servers that receive Session Initiation Protocol (SIP) messages sent through UDP, SCTP, or TCP.

Performance considerations of persistence

Different persistence methods will have different performance implications. Some methods require the BIG-IP system to maintain an internal state table, which consumes memory and CPU. Other persistence methods may not be granular enough to ensure uniform distribution amongst pool members. For example, the source address affinity persistence method may make it difficult to uniquely identify clients on corporate networks whose source addresses will be masked by firewalls and proxy servers.

CARP hash persistence


The source address, destination address, and hash persistence profiles support a CARP hash algorithm. When using CARP hash, the BIG-IP system performs a computation with the persistence key and each pool member to deterministically choose a pool member. Given a set of available pool members, a client connection will always be directed to the same pool member. If a pool member becomes unavailable, the sessions to that pool member will be distributed among the remaining available pool members, but sessions to the available pool members will remain unaffected.


Advantages of using CARP:
The BIG-IP system maintains no state about the persistence entries, so using this type
of persistence does not increase memory utilization.
There is no need to mirror persistence: Given a persistence key and the same set of
available pool members, two or more BIG-IP systems will reach the same conclusion.
Persistence does not expire: Given the same set of available pool members, a client
will always be directed to the same pool member.
Disadvantages of using CARP:
When a pool member becomes available (either due to addition of a new pool
member or change in monitor state), new connections from some clients will be
directed to the newly available pool member at a disproportionate rate.
For more information about CARP hash, see AskF5 article: SOL11362: Overview of the CARP hash algorithm.

More Information
For more information on the configuration of persistence profiles, see BIG-IP Local Traffic Manager: Concepts for your TMOS version.


Other protocols and profiles

To manage application layer traffic, you can use any of the following profile types. For more information see BIG-IP Local Traffic Manager: Profiles Reference.
File Transfer Protocol (FTP) profile allows you to modify a few FTP properties and
settings to fit your specific needs.
Domain Name System (DNS) profile allows users to configure various DNS
attributes and enables many BIG-IP system DNS features, such as DNS caching,
DNS IPv6-to-IPv4 translation, and DNSSEC.
Real Time Streaming Protocol (RTSP) is a network control protocol designed for use
in entertainment and communications systems to control streaming media servers.
Internet Content Adaptation Protocol (ICAP) is used to extend transparent proxy
servers, thereby freeing up resources and standardizing the way in which new
features are implemented.
Request Adapt or Response Adapt profile instructs an HTTP virtual server to send a
request or response to a named virtual server of type Internal for possible
modification by an Internet Content Adaptation Protocol (ICAP) server.
Diameter is an enhanced version of the Remote Authentication Dial-In User Service
(RADIUS) protocol. When you configure a Diameter profile, the BIG-IP system can
load balance client-initiated Diameter messages across servers.
Remote Authentication Dial-In User Service (RADIUS) profiles are used to load
balance RADIUS traffic.
Session Initiation Protocol (SIP) is a signaling communications protocol, widely used
for controlling multimedia communication sessions, such as voice and video calls over
Internet Protocol (IP) networks.
Simple Mail Transfer Protocol (SMTP) profile secures SMTP traffic coming into the
BIG-IP system. When you create an SMTP profile, BIG-IP Protocol Security Manager
provides several security checks for requests sent to a protected SMTP server.
SMTPS profile provides a way to add SSL encryption to SMTP traffic quickly and
easily.


iSession profile tells the system how to optimize traffic. Symmetric optimization
requires an iSession profile at both ends of the iSession connection.
Rewrite profile instructs BIG-IP LTM to display websites differently on the external
network than on an internal network and can also be used to instruct the BIG-IP
system to act as a reverse proxy server.
Extensible Markup Language (XML) profile causes the BIG-IP system to perform
XML content-based routing, directing requests to an appropriate pool, pool member, or
virtual server based on specific content in an XML document.
Speedy (SPDY) is an open-source web application layer protocol developed by Google
in 2009 and is primarily geared toward reducing Web page latency. By using the SPDY
profile, the BIG-IP system can enhance functionality for SPDY requests.
Financial Information Exchange (FIX) is an electronic communications protocol for
international real-time exchange of information related to securities transactions
and markets.
Video Quality of Experience (QOE) profile allows assessment of an audience's video
session or overall video experience, providing an indication of application
performance.


Troubleshooting

To resolve most traffic-related issues, review the virtual server and networking-level configurations. These may include pools and address translation settings.
Also review the following:
Architectural design for the virtual server in question to verify the correct profile is
used.
Custom profiles and their associated options to ensure they modify traffic behavior as
anticipated.
As a next step, reverting to an unmodified default profile and observing potential changes in behavior can often help pinpoint problems.


BIG-IP GTM/DNS Services
Introduction
BIG-IP GTM/DNS Services Basics
BIG-IP GTM/DNS Services Core Concepts
BIG-IP GTM Load Balancing
Architectures
BIG-IP GTM iQuery
BIG-IP GTM/DNS Services Troubleshooting


Introduction

The following chapters will review the BIG-IP Global Traffic Manager (GTM) and DNS service offerings available from F5.

BIG-IP GTM
BIG-IP Global Traffic Manager is a DNS-based module which monitors the availability and performance of global resources, such as distributed applications, in order to control network traffic patterns.

DNS Caching
DNS caching is a DNS services feature that can provide responses for frequently requested DNS records from a cache maintained in memory on BIG-IP. This feature can be used to replace or reduce load on other DNS servers.

DNS Express
DNS Express is a DNS Services feature which allows the BIG-IP system to act as a DNS slave server. DNS Express supports NOTIFY, AXFR, and IXFR to fetch zone data and store it in memory for high performance. DNS Express does not offer DNS record management capability.

RPZ
Response Policy Zones (RPZ) allow the BIG-IP system, when configured with a DNS cache, to filter DNS requests based on the resource name being queried. This can be used to prevent clients from accessing known-malicious sites.


iRules
When using iRules with BIG-IP GTM, there are two possible places to attach iRules: either to the wide IP or to the DNS listener. The iRule commands and events available will depend on where the iRules are attached in the configuration. Some iRule functions will require BIG-IP LTM to be provisioned alongside BIG-IP GTM.


BIG-IP GTM/DNS Services Basics
BIG-IP GTM
As mentioned in the introduction, BIG-IP Global Traffic Manager is the module built to monitor the availability and performance of global resources and use that information to manage network traffic patterns.
With the BIG-IP GTM module you can:
Direct clients to local servers for globally-distributed sites using a GeoIP database.
Change the load balancing configuration according to current traffic patterns or time
of day.
Set up global load balancing among disparate Local Traffic Manager systems and
other hosts.
Monitor real-time network conditions.
Integrate a content delivery network from a CDN provider.
To implement BIG-IP GTM you will need to understand the following terminology and basic functionality:
Configuration synchronization ensures the rapid distribution of BIG-IP GTM settings
among BIG-IP GTM systems in a synchronization group.
Load Balancing divides work among resources so that more work gets done in the
same amount of time and, in general, all users get served faster. BIG-IP GTM selects
the best available resource using either a static or a dynamic load balancing method.
When using a static load balancing method, BIG-IP GTM selects a resource based on a
pre-defined pattern. When using a dynamic load balancing method, BIG-IP GTM
selects a resource based on current performance metrics.
Prober Pool is an ordered collection of one or more BIG-IP systems that can be used
to monitor specific resources.


wide IP is a mapping of a fully-qualified domain name (FQDN) to a set of virtual
servers that host the domain's content, such as a web site, an e-commerce site, or a
CDN. BIG-IP GTM intercepts requests for domain names that are wide IPs and
answers them based on the wide IP configuration.
iQuery is an XML protocol used by BIG-IP GTM to communicate with other BIG-IP
systems.
BIG-IP GTM Listener is a specialized virtual server that provides DNS services.
Probe is an action the BIG-IP system takes to acquire data from other network
resources. BIG-IP GTM uses probes to track the health and availability of network
resources.
Data Center is where BIG-IP GTM consolidates all the paths and metrics data collected
from the servers, virtual servers, and links.
Virtual Server is a combination of IP address and port number that points to a
resource that provides access to an application or data source on the network.
Link is a logical representation of a physical device (router) that connects the network
to the Internet.
Domain Name System Security Extensions (DNSSEC) is an industry-standard protocol
that functions to provide integrity for DNS data.
For more information, see BIG-IP GTM Load Balancing and BIG-IP iQuery in this chapter.


BIG-IP GTM/DNS Services Core Concepts

In addition to core BIG-IP GTM concepts, this chapter covers DNS Express, DNS cache, auto-discovery, address translation, and ZoneRunner.

Configuration synchronization
Configuration synchronization ensures the rapid distribution of BIG-IP GTM settings to other BIG-IP GTM systems that belong to the same synchronization group. A BIG-IP GTM synchronization group might contain both BIG-IP GTM and BIG-IP Link Controller systems.
Configuration synchronization occurs in the following manner:
When a change is made to a BIG-IP GTM configuration, the system broadcasts the
change to the other systems in the BIG-IP GTM synchronization group.
When a configuration synchronization is in progress, the process must either
complete or time out before another configuration synchronization can occur.
It is important to have a working Network Time Protocol (NTP) configuration because BIG-IP GTM relies on timestamps for proper synchronization.

BIG-IP GTM listeners


A listener is a specialized virtual server that provides DNS services on port 53 and at the IP address assigned to the listener. When a DNS query is sent to the listener, BIG-IP GTM either handles the request locally or forwards the request to the appropriate resource.
How do listeners process network traffic?


BIG-IP GTM responds to DNS queries on a per-listener basis. The number of listeners created depends on the network configuration and the destinations to which specific queries are to be sent. For example, a single BIG-IP GTM can be the primary authoritative server for one domain, while forwarding other DNS queries to a different DNS server. BIG-IP GTM always manages and responds to DNS queries for the wide IPs that are configured on the system.

Data centers and virtual servers


All of the resources on a network are associated with a data center. BIG-IP GTM consolidates the paths and metrics data collected from the servers, virtual servers, and links in the data center. BIG-IP GTM uses that data to conduct load balancing and route client requests to the best-performing resource based on different factors.
BIG-IP GTM might send all requests to one data center when another data center is down; disaster recovery sites are a prime use case for this kind of configuration. Alternatively, BIG-IP GTM might send a request to the data center that has the fastest response time. A third option might be for BIG-IP GTM to send a request to the data center that is located closest to the client's source address: for instance, sending a client located in France to a host in France instead of the United States, greatly reducing round-trip times.

Tip: The resources associated with a data center are available only when the data center is also available.

Virtual Servers
A virtual server is a specific IP address and port number that points to a resource on the network. In the case of host servers, this IP address and port number likely points to the resource itself. With load-balancing systems, virtual servers are often proxies that allow the load-balancing server to manage a resource request across a multitude of resources.


Tip: Virtual server status may be configured to be dependent only on the timeout value of the monitor associated with the virtual server. This ensures that when working in a multi-bladed environment, when the primary blade in a cluster becomes unavailable, the gtmd agent on the new primary blade has time to establish new iQuery connections with, and receive updated status from, other BIG-IP systems.

Tip: The big3d agent on the new primary blade must be up and functioning within 90 seconds (the timeout value of the BIG-IP monitor).

Links
A link is an optional BIG-IP GTM or BIG-IP Link Controller configuration object that represents a physical device (router) that connects a network to the Internet. BIG-IP GTM tracks the performance of links, which influences the availability of pools, data centers, wide IPs, and distributed applications.
When one or more links are created, the system uses the following logic to automatically associate virtual servers with the link objects:
BIG-IP GTM and BIG-IP Link Controller associate the virtual server with the link by
matching the subnet addresses of the virtual server, link, and self IP address. Most of
the time, the virtual server is associated with the link that is on the same subnet as
the self IP address.
In some cases, BIG-IP GTM and BIG-IP Link Controller cannot associate the virtual
server and link because the subnet addresses do not match. When this issue occurs,
the system associates the virtual server with the default link which is assigned to the
data center. This association may cause issues if the link that is associated with the
virtual server does not provide network connectivity to the virtual server.
If the virtual server is associated with a link that does not provide network
connectivity to that virtual server, BIG-IP GTM and BIG-IP Link Controller may
incorrectly return the virtual server IP address in the DNS response to a wide IP query
even if the link is disabled or marked as down.

DNS Express
DNS Express enables the BIG-IP system to function as a replica authoritative nameserver. Zones are stored in memory for optimal speed. DNS Express supports the standard DNS NOTIFY protocol from primary authoritative nameservers and uses AXFR to transfer zone data. DNS Express does not itself support modifying records, but if the BIG-IP system itself is configured as a primary authoritative nameserver using ZoneRunner, it can be used to increase capacity and minimize latency.
You can use DNS Express to both protect your DNS management infrastructure and maximize capacity.

DNS cache
The DNS cache feature is available as part of either the DNS add-on module for BIG-IP LTM or as part of the BIG-IP GTM/DNS combo. Three different forms of DNS cache are configurable: Transparent, Resolver, and Validating Resolver.

Transparent DNS Cache


The transparent cache object is configurable on the BIG-IP system to use external DNS resolvers to resolve queries, and then cache the responses from the multiple external resolvers. When a consolidated cache is in front of external resolvers (each with their own cache), it can produce a much higher cache hit percentage.
(Figure: transparent DNS cache miss and cache hit flows.)

Note: The transparent cache will contain messages and resource records.
F5 Networks recommends that you configure the BIG-IP system to forward queries which cannot be answered from the cache to a pool of local DNS servers, rather than to the local BIND instance, because BIND performance is slower than using multiple external resolvers.
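A minimal transparent cache setup might look like this in tmsh (a sketch with invented object names; the DNS profile is then attached to the listening virtual server):

```
# Create the transparent cache and a DNS profile that uses it.
create ltm dns cache transparent lan_cache
create ltm profile dns dns_cache_profile \
    { defaults-from dns enable-dns-cache yes cache lan_cache }
```

Unanswered queries then flow to the pool of local DNS servers behind the virtual server, per the recommendation above.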

Note: For systems using the DNS Express feature, the BIG-IP system first processes the requests through DNS Express, and then caches the responses.

Resolver DNS Cache


You may configure a resolver cache on the BIG-IP system to resolve DNS queries and cache the responses. The next time the system receives a query for a response that exists in the cache, the system returns the response from the cache. The resolver cache contains messages, resource records, and the nameservers the system queries to resolve DNS queries.

Note: It is possible to configure the local BIND instance on the BIG-IP system to act as an external DNS resolver. However, the performance of BIND is slower than using a resolver cache.


Validating Resolver DNS cache


The Validating Resolver DNS cache may be configured to recursively query public DNS servers, validate the identity of the DNS server sending the responses, and then cache the responses. The next time the system receives a query for a response that exists in the cache, the system returns the DNSSEC-compliant response from the cache. The validating resolver cache contains messages, resource records, the nameservers the system queries to resolve DNS queries, and DNSSEC keys.
For more information about setting up each of the DNS caching methodologies, refer to DNS Cache: Implementations.

DNSSEC
DNSSEC is an extension to the Domain Name Service (DNS) that ensures the integrity of data returned by domain name lookups by incorporating a chain of trust in the DNS hierarchy. DNSSEC provides origin authenticity, data integrity, and secure denial of existence. Specifically, origin authenticity ensures that resolvers can verify that data has originated from the correct authoritative source. Data integrity verifies that responses are not modified in flight, and secure denial of existence ensures that when there is no data for a query, the authoritative server can provide a response that proves no data exists.
The basis of DNSSEC is public key cryptography (PKI). A chain of trust is built with public-private keys at each layer of the DNS architecture.
DNSSEC uses two kinds of keys: Key Signing Keys and Zone Signing Keys.
The Key Signing Key is used to sign other keys in order to build the chain of trust. This key is sometimes cryptographically stronger and has a longer lifespan than a Zone Signing Key. The Zone Signing Key is used to sign the data that is published in a zone. DNSSEC uses the Key Signing Keys and Zone Signing Keys to sign and verify records within DNS.
Why implement DNSSEC?
When a user requests a site, DNS starts by translating the domain name into an IP address. It does this through a series of recursive lookups that form a "chain" of requests. The issue that creates the need for DNSSEC is that each stop in this chain inherently trusts the other parts of the chain. So, what if an attacker could somehow manipulate one of the servers, or the traffic on the wire, and send the wrong IP address back to the client? The attacker could redirect the client to a website where malware is waiting to scan the unsuspecting client.
DNSSEC addresses this problem by validating the response of each part of the chain using digital signatures. These signatures help build a "chain of trust" that DNS can rely on when answering requests. To form the chain of trust, DNSSEC starts with a "trust anchor", and everything below that trust anchor is trusted. Ideally, the trust anchor is the root zone.
ICANN published the root zone trust anchor, and root operators began serving the signed root zone in July 2010. With the root zone signed, all other zones below it can also be signed, thus forming a solid and complete chain of trust. Additionally, ICANN also lists the Top Level Domains that are currently signed and have trust anchors published as DS records in the root zone.


The following illustration shows the building blocks for the chain of trust from the root zone:

For more information, see Configuring DNSSEC in BIG-IP DNS Services.


Auto-discovery
Auto-discovery is a process through which BIG-IP GTM automatically identifies resources that it manages. BIG-IP GTM can discover two types of resources: virtual servers and links.
Each resource is discovered on a per-server basis, so you can employ auto-discovery only on the servers you specify.
The auto-discovery feature of BIG-IP GTM has three modes that control how the system identifies resources. These modes are:
Disabled: BIG-IP GTM does not attempt to discover any resources. Auto-discovery is
disabled on BIG-IP GTM by default.
Enabled: BIG-IP GTM regularly checks the server to discover any new resources. If a
previously discovered resource cannot be found, BIG-IP GTM deletes it from the
system.
Enabled (No Delete): BIG-IP GTM regularly checks the server to discover any new
resources. Unlike the Enabled mode, the Enabled (No Delete) mode does not delete
resources, even if the system cannot currently verify their presence.
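On a per-server basis, the discovery mode is set on the BIG-IP GTM server object. For example (a hedged sketch; the server name is illustrative):

```
# Discover virtual servers on this server, but never delete them.
modify gtm server dc1_bigip virtual-server-discovery enabled-no-delete

# Link discovery is controlled separately on the same object.
modify gtm server dc1_bigip link-discovery enabled
```

Remember that discovery must also be enabled globally, as noted below.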

Note: Enabled and Enabled (No Delete) modes query the servers for new resources every 30 seconds by default.

Important: Auto-discovery must be globally enabled at the server and link levels, and the frequency at which the system queries for new resources must be configured.
For information about enabling auto-discovery on virtual servers and links, see Discovering resources automatically in the Configuration Guide for BIG-IP Global Traffic Manager.


Address translation
Several objects in BIG-IP GTM allow the specification of address translation. Address translation is used in cases where the object is behind a Network Address Translation (NAT) device. For example, a virtual server may be known by one address on the Internet but another address behind the firewall. When configuring these objects, the address is the external address and will be returned in any DNS responses generated by BIG-IP GTM.
When probing, the BIG-IP system may use either the address or the translation, depending on the situation. As a general rule, if both the BIG-IP system performing the probe and the target of the probe are in the same data center and both have a translation, the probe will use the translations. Otherwise, the probe will use the address.
Specifying a translation on a BIG-IP server will cause virtual server auto-discovery to silently stop working. This is because BIG-IP GTM has no way of knowing what the externally visible address should be for the discovered virtual server address, which is a translation. For more information, see AskF5 article: SOL9138: BIG-IP GTM system disables virtual server auto-discovery for BIG-IP systems that use translated virtual server addresses.

ZoneRunner
ZoneRunner is an F5 product used for zone file management on BIG-IP GTM. You may use the ZoneRunner utility to create and manage DNS zone files and configure the BIND instance on BIG-IP GTM. With the ZoneRunner utility, you can:
Import and transfer DNS zone files.
Manage zone resource records.
Manage views.
Manage a local nameserver and the associated configuration file, named.conf.
Transfer zone files to a nameserver.
Import only primary zone files from a nameserver.
The BIG-IP GTM ZoneRunner utility uses dynamic update to make zone changes. All changes made to a zone using dynamic update are written to the zone's journal file.


Important: It is recommended that the ZoneRunner utility manage the DNS/BIND file, rather than manually editing the file. If manual editing is required, the zone files must be frozen to avoid issues with name resolution and dynamic updates. To prevent the journal files from being synchronized when BIG-IP GTM is configured to synchronize DNS zone files, the zone must be frozen on all BIG-IP GTM systems.

iRules
iRules can be attached to or associated with a wide IP, and in BIG-IP GTM version 11.5.0 and higher they can also be attached to or associated with the DNS listener.


BIG-IP GTM Load Balancing

Monitors
BIG-IP GTM uses health monitors to determine the availability of the virtual servers used in its responses to DNS requests. Detailed information about monitors can be found in BIG-IP Global Traffic Manager: Monitors Reference.

Probers
When running a monitor, BIG-IP GTM may request that another BIG-IP device actually probe the target of the monitor. BIG-IP GTM may choose a BIG-IP system in the same data center as the target of the monitor to actually send the probe and report back the status. This can minimize the amount of traffic that traverses the WAN.
The external and scripted monitors listed in the documentation use an external file to check the health of a remote system. BIG-IP GTM may request that another BIG-IP system in the configuration run the monitor. In order for the monitor to succeed, the remote BIG-IP system must have a copy of the external file. AskF5 article SOL8154: BIG-IP GTM EAV monitor considerations contains information about defining a prober pool to control which BIG-IP system will be used to run the monitor.

bigip monitor
The bigip monitor can be used to monitor BIG-IP systems. It uses the status of virtual servers determined by the remote BIG-IP system rather than sending individual monitor probes for each virtual server. It is recommended that BIG-IP LTM be configured to monitor its configuration elements so that it can determine the status of its virtual servers. This virtual server status will be reported via the bigip monitor back to BIG-IP GTM. This is an efficient and effective way to monitor resources on other BIG-IP systems.


Application of monitors
In BIG-IP GTM configuration, monitors can be applied to the server, virtual server, pool, and pool member objects. The monitor defined for the server will be used to monitor all of its virtual servers unless the virtual server overrides the monitor selection. Likewise, the monitor defined for the pool will be used to monitor all of its pool members, unless the pool member overrides the monitor selection.

It is important not to over-monitor a pool member. If a monitor is assigned to the server and/or virtual server and also to the pool and/or pool member, then both of the monitors will fire, effectively monitoring the virtual server twice. In most cases, monitors should be configured at the server/virtual server or at the pool/pool member, but not both.

Prober pools
Prober pools allow the specification of a particular set of BIG-IP devices that BIG-IP GTM may use to monitor a resource. This might be necessary in situations where a firewall is between certain BIG-IP systems and a monitored resource, but not between other BIG-IP systems and those same resources. In this case, a prober pool can be configured and assigned to the server to limit probe requests to those BIG-IP systems that can reach the monitored resource. For more information about prober pools, see About Prober pools in BIG-IP Global Traffic Manager: Concepts.

Wide IP
A wide IP maps a fully qualified domain name (FQDN) to one or more pools. The pools contain virtual servers. When a Local Domain Name Server (LDNS) makes a request for a domain that matches a wide IP, the configuration of the wide IP determines which virtual server address should be returned.
Wide IP names can contain the wildcard characters * (to match one or more characters) and ? (to match one character).
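The wildcard semantics described here (* for one or more characters, ? for exactly one) can be modeled with a short regular-expression translation. This is a conceptual sketch, not the BIG-IP implementation:

```python
import re

def wideip_match(pattern, name):
    """Match a queried name against a wide IP wildcard pattern:
    '*' matches one or more characters, '?' matches exactly one."""
    regex = "".join(
        ".+" if ch == "*" else "." if ch == "?" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, name) is not None
```

For example, `*.example.com` matches `www.example.com` but not `.example.com`, because * requires at least one character.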
For more information about wide IPs, see About Wide IPs in BIG-IP Global Traffic Manager: Concepts.


Load balancing logic


BIG-IP GTM provides a tiered load balancing mechanism for wide IP resolution. At the first tier, BIG-IP GTM chooses an appropriate pool of servers, and then, at the second tier, it chooses an appropriate virtual server.

For complete information about BIG-IP GTM load balancing, see About Global Server Load Balancing in BIG-IP Global Traffic Manager: Concepts.
The wide IP has four static load balancing methods available for the choice of an available pool. The wide IP pool has several dynamic and static load balancing methods available to choose an available pool member.

The dynamic load balancing methods rely on metrics gathered to make a load balancing decision. The static load balancing methods make a load balancing decision based on a set pattern. The pool allows the specification of preferred, alternate, and fallback load balancing options. Not every load balancing method is available for each of the options.
When choosing an available pool member, the preferred method is tried first. When using a dynamic load balancing method as the preferred load balancing method, it is possible for load balancing to fail. For example, if the round-trip time method is chosen, when the first request arrives from an LDNS, there will be no metrics available for it. Since there are no metrics available, the preferred method will fail and BIG-IP GTM will schedule the gathering of round-trip time metrics for that LDNS.

When the preferred method fails, the system falls back to the alternate method; therefore, when using a dynamic preferred method, it is important to specify an alternate method.

The fallback method is used to ensure that a resource is returned from the pool. The fallback method ignores the availability status of the resource being returned.
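The preferred/alternate/fallback cascade can be sketched as follows. This is an illustrative model only, not F5 code; the method functions and field names are invented:

```python
def rtt_method(members, metrics):
    """Dynamic method: fails (returns None) until metrics exist for the LDNS."""
    scored = [(metrics[m], m) for m in members if m in metrics]
    return min(scored)[1] if scored else None

def first_available(members, metrics):
    """Stand-in static method: pick the first available member."""
    return members[0] if members else None

def choose_member(pool, metrics):
    """Try preferred, then alternate; fallback ignores availability."""
    for method in (pool["preferred"], pool["alternate"]):
        member = method(pool["available"], metrics)
        if member is not None:
            return member
    # Fallback returns a resource regardless of its availability status.
    return pool["all"][0]
```

On the first request from an LDNS, the metrics table is empty, so a dynamic preferred method fails and the alternate answers — which is why an alternate should always be configured alongside a dynamic preferred method.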


There are load balancing methods available that do not actually load balance. Methods such as none, drop packet, return to DNS, and fallback IP control the behavior of load balancing, but do not actually use the configured pool members.

Topology
BIG-IP GTM can make load balancing decisions based upon the geographical location of the LDNS making the DNS request. The location of the LDNS is determined from a GeoIP database. In order to use topology, the administrator must configure topology records describing how BIG-IP GTM should make its load balancing decisions. For more information on topology load balancing, see Using Topology Load Balancing to Distribute DNS Requests to Specific Resources in BIG-IP Global Traffic Manager: Load Balancing or AskF5 article SOL13412: Overview of BIG-IP GTM Topology records (11.x).
Topology load balancing can be used to direct users to the servers that are geographically close, or perhaps to direct users to servers that have localized content.
BIG-IP system software provides a pre-populated database that provides a mapping of IP addresses to geographic locations. The administrator can also create custom groups called regions. For example, it is possible to create a custom region that groups certain IP addresses together or that groups certain countries together.
Updates for the GeoIP database are provided on a regular basis. AskF5 article SOL11176: Downloading and installing updates to the IP geolocation database contains information about updating the database.
Topology records are used to map an LDNS address to a resource. The topology record contains three elements:
A request source statement that specifies the origin LDNS of a DNS request.
A destination statement that specifies the pool or pool member to which the weight
of the topology record will be assigned.
A weight that the BIG-IP system assigns to a pool or a pool member during the load
balancing process.
When determining how to load balance a request, BIG-IP GTM uses the object that has the highest weight according to the matching topology records.
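Highest-weight selection can be sketched like this. The sketch is conceptual only; real topology records use the request source, destination, and weight elements described above, but the data layout here is invented:

```python
def topology_select(candidates, records, ldns_ip):
    """Return the candidate whose matching topology record
    carries the highest weight for the requesting LDNS."""
    def weight(candidate):
        return max(
            (r["weight"] for r in records
             if r["source"](ldns_ip) and r["destination"] == candidate),
            default=0,
        )
    return max(candidates, key=weight)
```

An LDNS that matches a high-weight record for one pool is steered there; an LDNS matching no specific record falls through to whatever lower-weight record applies.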


Important: When configuring topology load balancing at the wide IP level, topology records with a pool destination statement must exist. Other destination statement types (such as data center or country) may be used when using topology as a pool-level load balancing method.


Architectures

This section will briefly describe three common deployment methodologies for using a BIG-IP system in a DNS environment. For more information, see BIG-IP Global Traffic Manager: Implementations.

Delegated mode
When operating in delegated mode, requests for wide-IP resource records are redirected or delegated to BIG-IP GTM. The BIG-IP system does not see all DNS requests, and operates on requests for records that are sent to it.
For more information, see Delegating DNS Traffic to BIG-IP GTM in BIG-IP Global Traffic Manager: Implementations.

Screening mode
When operating in screening mode, BIG-IP GTM sits in front of one or more DNS servers. This configuration allows for easy implementation of additional BIG-IP system features for DNS traffic because DNS requests for records other than wide IPs pass through BIG-IP GTM. If the request matches a wide IP, BIG-IP GTM will respond to the request. Otherwise, the request is forwarded to the DNS servers. This configuration can provide the following benefits:
DNS query validation: When a request arrives at BIG-IP GTM, BIG-IP GTM validates
that the query is well formed. BIG-IP GTM can drop malformed queries, protecting the
back end DNS servers from seeing the malformed queries.
DNSSEC dynamic signing: When the responses from the DNS server pass back
through BIG-IP GTM, it is possible for BIG-IP GTM to sign the response. This allows the
use of DNSSEC with an existing zone and DNS servers, but takes advantage of any
cryptographic accelerators in a BIG-IP device.


Transparent caching: When the responses from the DNS server pass back through the BIG-IP system, it can cache the response. Future requests for the same records can be served directly from the BIG-IP system, reducing the load on the back-end DNS servers.
For more information, see Placing BIG-IP GTM in Front of a DNS Server and Placing BIG-IP GTM in Front of a Pool of DNS Servers in BIG-IP Global Traffic Manager: Implementations.

Replacing a DNS server


It is also possible for the BIG-IP system to operate as a stand-alone, authoritative DNS server for one or more zones. In this configuration, all DNS requests for a zone are sent to BIG-IP GTM. Any requests for a wide IP are handled by BIG-IP GTM and other requests are sent to the local BIND instance on the BIG-IP system. ZoneRunner is used to manage the records in the local BIND instance. For more information, see Replacing a DNS Server with BIG-IP GTM in BIG-IP Global Traffic Manager: Implementations.


BIG-IP GTM iQuery


Introduction
iQuery is an XML protocol that BIG-IP systems use to communicate with each other. BIG-IP GTM uses iQuery for various tasks:
Determining the health of objects in BIG-IP GTM configuration.
Exchanging information about BIG-IP GTM synchronization group state.
Providing a transport for synchronizing BIG-IP GTM configuration throughout the
synchronization group.
Communicating LDNS path probing metrics.
Exchanging wide-IP persistence information.
Gathering BIG-IP system configuration when using auto-discovery.
All of these tasks combined provide a BIG-IP GTM synchronization group with a unified view of BIG-IP GTM configuration and state.

iQuery Agents
All BIG-IP GTM devices are iQuery clients. The gtmd process on each BIG-IP GTM device connects to the big3d process on every BIG-IP server defined in BIG-IP GTM configuration, which includes both BIG-IP GTM and BIG-IP LTM. These are long-lived connections made using TCP port 4353. This set of connections among BIG-IP GTM devices and between BIG-IP GTM and BIG-IP LTM devices is called an iQuery mesh.

iQuery communication is encrypted using SSL. The devices involved in the communication authenticate each other using SSL certificate-based authentication. For more information, see Communications Between BIG-IP GTM and Other Systems in the manual BIG-IP Global Traffic Manager: Concepts.
In order to monitor the health of objects in the configuration, BIG-IP GTM devices in the synchronization group will send monitor requests via iQuery to another iQuery server that is closer to the target of the monitor. All BIG-IP GTM devices in the synchronization group will agree on which BIG-IP GTM is responsible for initiating the monitoring request. The result of the monitoring request will be sent by the iQuery server to all BIG-IP GTM devices connected to it that are participating in the synchronization group.

The importance of iQuery communication


A lack of a uni
fied view of the iQuery mesh will cause un
pre
dictable be
hav
ior. For ex
am
ple, if each BIG-IP GTM de
vice is not con
nected to the same set of other BIG-IP GTM de
vices, there can be dis
agree
ment of mon
i
tor re
spon
si
bil
ity re
sult
ing in ob
ject avail
abil
ity
flap
ping ("flap
ping" is when a de
vice is marked down and up re
peat
edly).

big3d Software Versioning


The ver
sion of the big3d soft
ware in
stalled on each de
vice must be the same or higher
than the ver
sion of soft
ware used on BIG-IP GTM de
vices. For in
for
ma
tion on up
dat
ing
the big3d soft
ware, see Run
ning the big3d_in
stall script in BIG-IP Global Traf
fic Man
ager:
Im
ple
men
ta
tions.

Determining the health of the iQuery mesh


Reviewing log files or SNMP Traps
The /var/log/gtm log file will contain information about connection status changes to big3d agents. When a new connection is established to a big3d agent or when a connection is lost, a log message will be generated.
Example connection lost messages:

alert gtmd[8663]: 011a500c:1: SNMP_TRAP: Box 10.14.20.209 state change green --> red (Box 10.14.20.209 on Unavailable)
alert gtmd[8663]: 011a5004:1: SNMP_TRAP: Server /Common/gtm3 (ip=10.14.20.209) state change green --> red (No communication)
Example connection established messages:

alert gtmd[8663]: 011a500b:1: SNMP_TRAP: Box 10.14.20.209 state change red --> green
alert gtmd[8663]: 011a5003:1: SNMP_TRAP: Server /Common/gtm3 (ip=10.14.20.209) state change red --> green
Furthermore, if a connection to a configured BIG-IP server is down, repeated "Connection in progress" messages will be generated:

notice gtmd[8663]: 011ae020:5: Connection in progress to 10.14.20.209
notice gtmd[8663]: 011ae020:5: Connection in progress to 10.14.20.209
notice gtmd[8663]: 011ae020:5: Connection in progress to 10.14.20.209


tmsh
You can use the tmsh show gtm iquery command to display the status of all of the iQuery connections on a BIG-IP GTM device. The command will display each IP address:
# tmsh show gtm iquery

Gtm::IQuery: 10.12.20.207
-------------------------
Server                gtm1
Data Center           DC1
iQuery State          connected
Query Reconnects      1
Bits In               8.2M
Bits Out              47.7K
Backlogs              0
Bytes Dropped         96
Cert Expiration Date  10/29/24 04:38:53
Configuration Time    12/08/14 16:37:49

Gtm::IQuery: 10.14.20.209
-------------------------
Server                gtm3
Data Center           DC2
iQuery State          connected
Query Reconnects      0
Bits In               8.2M
Bits Out              45.7K
Backlogs              0
Bytes Dropped         0
Cert Expiration Date  10/29/24 04:38:53
Configuration Time    12/08/14 16:37:49
For more information, see AskF5 article SOL13690: Troubleshooting BIG-IP GTM synchronization and iQuery connections (11.x).


iqdump
You can use the iqdump command to check the communication path and SSL certificate-based authentication from a BIG-IP GTM to another device in the iQuery mesh. The syntax of the iqdump command is iqdump <ip address> <synchronization group name>. When using the iqdump command, the BIG-IP GTM synchronization group name is optional and is not important for this connection verification.

For example:
# iqdump 10.14.20.209
<!-- Local hostname: gtm1.example.com -->
<!-- Connected to big3d at: ::ffff:10.14.20.209:4353 -->
<!-- Subscribing to syncgroup: default -->
<!-- Mon Dec 8 16:45:00 2014 -->
<xml_connection>
<version>11.4.0</version>
<big3d>big3d Version 11.5.1.6.0.312</big3d>
<kernel>linux</kernel>
...

Note: The iqdump command will continue to run until it is interrupted by pressing Ctrl-C.
If there is a problem with the communication path or the SSL authentication, the iqdump command will fail and report an error.
The version of BIG-IP software being run on the remote system is reported in the <version> XML stanza. The version of the big3d software running on the remote system is reported in the <big3d> XML stanza.
For more information, see AskF5 article SOL13690: Troubleshooting BIG-IP GTM synchronization and iQuery connections (11.x).


BIG-IP GTM/DNS Troubleshooting
BIG-IP GTM query logging
BIG-IP GTM can log debugging information about the decision-making process when resolving a wide IP. These logs can report which pool and virtual server were chosen for a wide-IP resolution or why BIG-IP GTM was unable to respond.
AskF5 article SOL14615: Configuring the BIG-IP GTM system to log wide IP request information contains information about enabling query logging. This logging should be enabled only for troubleshooting and not for long periods of time.

DNS query logging


For DNS configurations that provide services beyond wide IP resolution, for example DNS recursive resolvers or DNS Express, it is possible to enable DNS query and response logging. For more information, see Configuring Implementations in External Monitoring of BIG-IP Systems: Implementations.

Statistics
The BIG-IP system maintains statistics for objects throughout the system. Due to the variety of statistics gathered and the breadth of configuration elements covered, it is impractical to cover them within this guide. Statistics are documented throughout the user manuals where they are featured.
Statistics are visible in the Configuration utility under Statistics >> Module Statistics. In tmsh, statistics are visible using the show command with particular objects. Typically, statistics are either gauges or counters. A gauge keeps track of a current state, for example current connections. A counter keeps an incrementing count of how many times a particular action is taken, for example total requests.


Reviewing statistics that use counters may provide insight into the proportion of traffic which is valid, or perhaps may indicate that there is a configuration error.
For example, comparing the Dropped Queries counter to the Total Queries counter shows that there are some drops, but this may not be of concern because the drops are fairly low as a percentage of the total.

# show gtm wideip www.example.com

Gtm::WideIp: www.example.com
----------------------------
Status
  Availability : available
  State        : enabled
  Reason       : Available

Requests
  Total      1765
  A          1552
  AAAA        213
  Persisted     0
  Resolved   1762
  Dropped       3

Load Balancing
  Preferred          1760
  Alternate             2
  Fallback              0
  CNAME Resolutions     0
  Returned from DNS     0
  Returned to DNS       0
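A quick sanity check on the counters above confirms the drop rate is tiny:

```python
total_queries = 1765
dropped = 3
drop_pct = 100 * dropped / total_queries  # about 0.17 percent
print(f"{drop_pct:.2f}% of {total_queries} queries were dropped")
```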
Statistics are also available for polling via SNMP and can be polled, catalogued over time, and graphed by a Network Management Station (NMS).


iRules
Introduction
Anatomy of an iRule
Practical Considerations
iRules Troubleshooting


Introduction

iR
ules plays a crit
i
cal role in ad
vanc
ing the flex
i
bil
ity of the BIG-IP sys
tem. Amongst their
many ca
pa
bil
i
ties, some no
table ac
tions in
clude mak
ing load bal
anc
ing de
ci
sions, per
sist
ing, redi
rect
ing, rewrit
ing, dis
card
ing, and log
ging client ses
sions.
iRules can be used to augment or override default BIG-IP LTM behavior, enhance security, optimize sites for better performance, provide robust capabilities necessary to guarantee application availability, and ensure a successful user experience on your sites.
iRules technology is implemented using the Tool Command Language (Tcl). Tcl is known for speed, embeddability, and usability. iRules may be composed using most native Tcl commands, as well as a robust set of BIG-IP LTM/BIG-IP GTM extensions provided by F5. Documentation that covers the root Tcl language can be found at http://tcl.tk/ and http://tmml.sourceforge.net/doc/tcl/index.html (these links take you to an external resource). iRules is based on Tcl version 8.4.6.


Anatomy of an iRule

An iRule con
sists of one or more event de
c
la
ra
tions, with each de
c
la
ra
tion con
tain
ing Tcl
code that is ex
e
cuted when that event oc
curs.

Events
Events are an F5 extension to Tcl which provide an event-driven framework for the execution of iRules code in the context of a connection flow, being triggered by internal traffic-processing state changes. Some events are triggered for all connection flows, whilst others are profile-specific, meaning the event can only be triggered if an associated profile has been applied to the virtual server processing the flow in question.
For example, the CLIENT_ACCEPTED event is triggered for all flows when the flow is established. In the case of TCP connections, it is triggered when the three-way handshake is completed and the connection enters the ESTABLISHED state. In contrast, the HTTP_REQUEST event will only be triggered on a given connection flow if an HTTP profile is applied to the virtual server. For example, the following iRule evaluates every HTTP request from a client against a list of known web robot user agents:
when HTTP_REQUEST {
    switch -glob [string tolower [HTTP::header User-Agent]] {
        "*scooter*" -
        "*slurp*" -
        "*msnbot*" -
        "*fast*" -
        "*teoma*" -
        "*googlebot*" {
            # Send bots to the bot pool
            pool slow_webbot_pool
        }
    }
}
iRule events are covered in detail in the Events section of the iRules Wiki on DevCentral.


Commands and functions


Commands and functions are responsible for the bulk of the heavy lifting within iRules. They allow the BIG-IP user to do things like get the URI of an HTTP request (HTTP::uri), or encrypt data with an Advanced Encryption Standard (AES) key (AES::encrypt). In addition to the standard Tcl command set, F5 has added additional commands that are either global in scope (TCP::client_port, IP::addr, ...) or specific to a certain profile. For example, switch, HTTP::header, and pool:

when HTTP_REQUEST {
    switch -glob [string tolower [HTTP::header User-Agent]] {
        "*scooter*" -
        "*slurp*" -
        "*msnbot*" -
        "*fast*" -
        "*teoma*" -
        "*googlebot*" {
            # Send bots to the bot pool
            pool slow_webbot_pool
        }
    }
}
A full list of commands and functions can be found in the Commands section of the iRules Wiki on DevCentral.

Note: Several native commands built into the Tcl language have been disabled in the iRules implementation due to relevance and other complications that could cause an unexpected halt in the traffic flow (file I/O, socket calls, etc.). The list of disabled commands can be found in the iRules Wiki.


Operators
An operator is a token that forces evaluation and comparison of two conditions. In addition to the built-in Tcl operators (==, <=, >=, ...), operators such as "starts_with," "contains," and "ends_with" have been added to act as helpers for common comparisons. For example, in the iRule below, the "<" operator compares the number of available pool members against 1, while starts_with evaluates the requested URI against a known value:

when HTTP_REQUEST {
    if { [active_members [LB::server pool]] < 1 } {
        HTTP::respond 503 content {<html><body>Site is temporarily unavailable. Sorry!</body></html>}
        return
    }
    if { [HTTP::uri] starts_with "/nothingtoseehere/" } {
        HTTP::respond 403 content {<html><body>Too bad, so sad!</body></html>}
        return
    }
}
A list of the F5-specific operators added to Tcl for iRules is available on the iRules Wiki on DevCentral.

Variables
iRules use two variable scopes: static and local. In addition, procedures may accept arguments which populate temporary variables scoped to the procedure.
Static variables are similar to Tcl's global variables and are available to all iRules on all flows. A static variable's value is inherited by all flows using that iRule, and they are typically set only in RULE_INIT and read in other events. They are commonly used to toggle debugging or perform minor configuration such as the names of data groups that are used for more complete configuration. Static variables are denoted by a static:: prefix. For example, a static variable may be named static::debug.
Local, or flow, variables are local to connection flows. Once set, they are available to all iRules and iRules events on that flow. These are most of what iRules use, and they can contain any data. Their value is discarded when the connection is closed. They are frequently initialized in the CLIENT_ACCEPTED event.
It is extremely important to understand the variable scope in order to preclude unexpected conditions, as some variables may be shared across pools and events and, in some cases, globally to all connections. For more information, see iRules 101 - #03 - Variables on DevCentral.

iRules context
For every event that you specify within an iRule, you can also specify a context, denoted by the keywords clientside or serverside. Because each event has a default context associated with it, you need only declare a context if you want to change the context from the default.
The following example includes the event declaration CLIENT_ACCEPTED, as well as the iRules command IP::remote_addr. In this case, the IP address that the iRules command returns is that of the client, because the default context of the event declaration CLIENT_ACCEPTED is client-side.

when CLIENT_ACCEPTED {
    if { [IP::addr [IP::remote_addr] equals 10.1.1.80] } {
        pool my_pool1
    }
}
Similarly, if you include the event declaration SERVER_CONNECTED as well as the command IP::remote_addr, the IP address that the iRule command returns is that of the server, because the default context of the event declaration SERVER_CONNECTED is server-side.
The preceding example shows what happens when you write iRules that use the default context when processing iRules commands. You can, however, explicitly specify the clientside and serverside keywords to alter the behavior of iRules commands.

Continuing with the previous example, the following example shows the event declaration SERVER_CONNECTED and explicitly specifies the clientside keyword for the iRules command IP::remote_addr. In this case, the IP address that the iRules command returns is that of the client, despite the server-side default context of the event declaration.

when SERVER_CONNECTED {
    if { [IP::addr [clientside {IP::remote_addr}] equals 10.1.1.80] } {
        discard
    }
}

Putting it all together


The following HTML Comment Scrubber example iRule is composed of several events that depend upon each other, as well as numerous operators and commands used to evaluate client-supplied requests as well as the data contained within the server response.

when RULE_INIT {
    set static::debug 0
}
when HTTP_REQUEST {
    # Don't allow data to be chunked
    if { [HTTP::version] eq "1.1" } {
        if { [HTTP::header is_keepalive] } {
            HTTP::header replace "Connection" "Keep-Alive"
        }
        HTTP::version "1.0"
    }
}
when HTTP_RESPONSE {
    if { [HTTP::header exists "Content-Length"] && [HTTP::header "Content-Length"] < 1000000 } {
        set content_length [HTTP::header "Content-Length"]
    } else {
        set content_length 1000000
    }
    if { $content_length > 0 } {
        HTTP::collect $content_length
    }
}
when HTTP_RESPONSE_DATA {
    # Find the HTML comments
    set indices [regexp -all -inline -indices {<!--(?:[^-]|-[^-])*?\s*?-->} [HTTP::payload]]

    # Replace the comments with spaces in the response
    if { $static::debug } {
        log local0. "Indices: $indices"
    }
    foreach idx $indices {
        set start [lindex $idx 0]
        set len [expr {[lindex $idx 1] - $start + 1}]
        if { $static::debug } {
            log local0. "Start: $start, Len: $len"
        }
        HTTP::payload replace $start $len [string repeat " " $len]
    }
}


Practical Considerations
Performance implications
While iRules provide tremendous flexibility and capability, deploying them adds troubleshooting complexity and some amount of processing overhead on the BIG-IP system itself. As with any advanced functionality, it makes sense to weigh all the pros and cons.
Before using iRules, determine whether the same functionality can be performed by an existing BIG-IP feature. Many of the more popular iRules functionalities have been incorporated into the native BIG-IP feature set over time. Examples include cookie encryption and header insertion via the HTTP profile, URI load balancing via BIG-IP LTM policies and, in TMOS versions before 11.4.0, the HTTP class profile.
Even if re
quire
ments can
not be met by the ex
ist
ing ca
pa
bil
i
ties of the BIG-IP sys
tem, it is
still worth con
sid
er
ing whether or not the goal can be ac
com
plished through other
means, such as up
dat
ing the ap
pli
ca
tion it
self. How
ever, iR
ules are often the most timeef
fi
cient and best so
lu
tion in sit
u
a
tions when it may take months to get an up
date to an
ap
pli
ca
tion. The neg
a
tive im
pacts of using iR
ules, such as in
creased CPU load, can be
mea
sured and man
aged through using iR
ules tools and by writ
ing iR
ules so they be
have
as ef
fi
ciently as pos
si
ble.
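One way to measure that overhead is the iRule timing facility. The sketch below assumes a hypothetical iRule; placing the timing directive at the top enables per-event execution statistics, which can then be reviewed (for example, with tmsh show ltm rule <rule_name>).

```tcl
# Sketch: "timing on" at the top of an iRule enables execution-time
# statistics for all of its events.
timing on

when HTTP_REQUEST {
    # ... the iRule logic being measured ...
}
```

Leave timing disabled in production once measurements are complete, since collecting the statistics itself adds a small amount of overhead.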

Practical programming considerations


The following practices may be useful when it comes to developing your iRules.
Write readable code: While Tcl supports multiple commands in a single line, writing
iRules in this way can make them difficult to read.
Choose meaningful variable names.
Write reusable code. If you perform certain operations frequently, consider creating a
single iRule that is associated with multiple virtual servers. In TMOS v11.4 and higher,
you can also define and call procedures in iRules.


Write efficient code. For example, avoid data type conversions (such as from strings to numbers) unless they are necessary.
Use the return command to abort further execution of a rule if the conditions it
was intended to evaluate have been matched.
Use timing commands to measure iRules before you put them into production.
Leverage data classes (data groups) to store variable and configuration information
outside of the iRules code itself. This will allow you to make updates to data that
needs to change frequently without the risk of introducing coding errors. It also helps
make the iRule portable as the same code can be used for QA and production
while the data classes themselves can be specific to their respective environments.
Use a stats profile instead of writing to a log to determine how many times an iRule
has fired or a particular action has been taken.
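As an illustrative sketch of several of these practices (the data group name maintenance_uris, the URI prefix, and the response text are all hypothetical):

```tcl
when HTTP_REQUEST {
    # Use "return" to stop processing as soon as the request is ruled out.
    if { !([HTTP::uri] starts_with "/app") } { return }

    # Look the URI up in a data group rather than hard-coding values, so the
    # same iRule code can be promoted from QA to production unchanged.
    if { [class match [HTTP::uri] starts_with maintenance_uris] } {
        HTTP::respond 503 content "Service temporarily unavailable"
    }
}
```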

iRules assignment to a virtual server


When you assign multiple iRules as resources for a virtual server, it is important to consider the order in which you list them on the virtual server. This is because BIG-IP LTM processes duplicate iRules events in the order in which the applicable iRules are listed. An iRule event can therefore terminate the triggering of events, thus preventing BIG-IP LTM from triggering subsequent events.

For more information, see AskF5 article SOL13033: Constructing CMP-compatible iRules.


Troubleshooting

This chapter provides a brief troubleshooting guide for iRules in relation to BIG-IP LTM and BIG-IP GTM.

Note: This chapter assumes that the initial configuration of your BIG-IP system has been completed and you are using iRules for implementing new functionality.

Tools
The tools identified within this chapter are not an exhaustive list of all possibilities, and while there may be other tools, these are intended to cover the basics.

Syntax
Tools that you may use to write and check that the syntax of your iRules is correct include:
iRules Editor, F5's integrated code editor for iRules, built to develop iRules with full
syntax highlighting, colorization, textual auto-complete, integrated help, etc.
Notepad++, a text editor and source code editor for use with Microsoft Windows.

Client/server debug tools


After the creation of iRules, the intended functionality needs to be verified. Depending on the expected functionality, you may need to check the client, the BIG-IP system, and the server responses to the traffic and the iRules. Many browsers include extensions that assist with web debugging, such as FireBug for Mozilla Firefox and the Google Chrome developer tools. Additionally, some third-party tools that may facilitate debugging include:
HttpWatch, an HTTP sniffer for Internet Explorer, Firefox, iPhone, and iPad that provides insights into the performance of your iRule.
Fiddler, an HTTP debugging proxy server application.


Wireshark, a free and open source packet analyzer. It is used for network
troubleshooting, analysis, software and communications protocol development, and
education.

DevCentral
DevCentral is a community-oriented resource to bring engineers together to learn, solve problems, and figure things out. iRules-specific resources on DevCentral include:
iRules Reference Wiki.
Official documentation.
Tutorials.
Articles.
Q&A forum, examples (CodeShare).
Podcasts (videos).

iRules Troubleshooting
When writing iRules, keep in mind the following considerations:
iRules development should take place on a non-production system.
Verify your iRule often while you are building it.
While error messages may be cryptic, review them carefully as they often indicate
where a problem exists.
Log within the iRules liberally, both to watch variables and to determine iRules
progression. Consider using or evaluating a variable to turn off unnecessary debug
logging in production.
Many iRules commands will have software version requirements. Check DevCentral
to verify the version requirements of the intended commands.
Use the catch command to encapsulate commands which are likely to throw
errors based on input and other factors. Instead of having an iRule fail and the
connection subsequently dropped, use the catch command to perform an alternate
action.
Thoroughly test your iRule to ensure intended functionality. Even once an iRule
compiles, it may have logical errors. BIG-IP LTM cannot detect logical errors, so while
an iRule may compile and execute, it may behave in such a way that precludes the
BIG-IP system from returning a valid response to the client.
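As a sketch of the catch technique (the X-Retry-Count header is hypothetical; expr throws a Tcl error when the header value is not numeric):

```tcl
when HTTP_REQUEST {
    # catch returns nonzero if the enclosed command throws; $result then
    # holds the Tcl error message instead of the computed value.
    if { [catch { expr { [HTTP::header "X-Retry-Count"] + 0 } } result] } {
        # Take an alternate action rather than letting the iRule abort
        # and the connection be dropped.
        log local0.debug "Could not parse X-Retry-Count: $result"
        set retries 0
    } else {
        set retries $result
    }
}
```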


Review your Syntax


The F5 iRules editor will help you with identifying common issues, but not all mistakes will be readily detected. A common cause of errors is improper use of braces. If an iRule contains an opening "{", "[", or "(", it must have a corresponding closing "}", "]", or ")". The GUI will often give intimidating or cryptic error messages when the problem is a simple bracing issue.
It is important to leave sufficient whitespace to make the code readable and manageable. iRules will generate errors if extra data appears between the closing brace of an if block and the elseif or else keyword that follows it. Syntax validation can also fail due to poorly placed comments.

Test your iRules


When debugging iRules it is important to capture the connection data from end to end. Initialize the tools before you make your initial connection to the BIG-IP system, and do not stop gathering data until the connection has fully completed successfully, or failed.

Deploy your iRules


Consider versioning your iRules for simplicity of deployment and maintainability. It is easy to introduce new errors or bugs, and having the old iRules to fall back on will greatly decrease your time to resolution if something goes wrong. Promoting an iRule from QA to production will go a lot smoother if you do not need to make any code updates to the iRule itself when saving it (see the previous guidance on using data classes wherever possible).


Logging
Introduction
Logging and Monitoring
Practical Considerations


Introduction

Being familiar with your BIG-IP LTM and BIG-IP GTM logs can be very helpful in maintaining the stability and health of your systems. Events can be logged either locally or remotely depending on your configuration. Logging is covered extensively in the BIG-IP TMOS: Operations Guide. This section will cover some important concepts as well as topics related to BIG-IP LTM and BIG-IP GTM.

Logging levels
For each type of system event you can set the minimum log level. When you specify a minimum log level, all alerts at the minimum level and higher will be logged. The logging levels follow the typical Linux-standard levels.

Lowering the minimum log level increases the volume of logs that are produced. Setting the minimum log level to the lowest value (Debug) should be done with caution. Large volumes of log messages can have a negative impact on the performance of the BIG-IP device.

Default local log levels are set on the BIG-IP system in order to convey important information. F5 recommends changing default log levels only when needed to assist with troubleshooting. When troubleshooting is complete, log levels should be reset to their default values.

For more detailed information on logging, see the following resources:
BIG-IP TMOS : Operations Guide.
Logging in BIG-IP TMOS: Concepts.
AskF5 article: SOL13455: Overview of BIG-IP logging BigDB database keys (11.x).
AskF5 article: SOL5532: Configuring the level of information logged for TMM-specific events.
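As an example of adjusting and then restoring a level from the command line (log.tmm.level is the database key covered by SOL5532; consult the articles above for the key names and default values that apply to your component and software version):

```shell
# Raise TMM logging to Debug while troubleshooting...
tmsh modify /sys db log.tmm.level value Debug

# ...and restore the default level when finished (Notice is the typical
# default; verify against the referenced articles for your version).
tmsh modify /sys db log.tmm.level value Notice
```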


Logging and Monitoring
Local logging
By default, the BIG-IP system logs via syslog to the local filesystem. Most local log files can be managed and viewed using the BIG-IP Configuration utility.
Important log files for BIG-IP LTM include:
/var/log/ltm
/var/log/tmm
/var/log/pktfilter
Important log files for BIG-IP GTM and DNS include:
/var/log/gtm
/var/log/user.log
/var/log/daemon.log
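These files can also be viewed directly from the BIG-IP command line, for example:

```shell
# Follow BIG-IP LTM events as they are written
tail -f /var/log/ltm

# Page through recent BIG-IP GTM events
less /var/log/gtm
```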

Remote Logging
Although logging can be done locally, it is recommended to log to a pool of remote logging servers. Remote logging can be done using the legacy Linux syslog-ng functionality, or it can be managed using the TMOS-managed high-speed logging functions. F5 recommends logging to a pool of remote high-speed logging servers.

Remote Logging using syslog-ng


Remote syslog servers can be added through the BIG-IP Configuration utility or through tmsh. For more information, see AskF5 article: SOL13080: Configuring the BIG-IP system to log to a remote syslog server (11.x).


The default log levels only apply to local logging, meaning that as soon as a remote syslog server is defined, all syslog messages are sent to the remote server. In other words, filters only impact local logging. These filters can be customized through the Configuration utility and tmsh.

Customizing remote logging using syslog-ng requires the use of tmsh. Customizing remote logging with syslog-ng allows you to do the following:
Log to a remote server using TCP.
Set logging levels.
Direct specific messages to specific destinations.
Define multiple remote logging servers.
For more information, see AskF5 article: SOL13333: Filtering log messages sent to remote syslog servers (11.x).

Remote logging using high-speed logging


High-speed logging was introduced in BIG-IP version 11.3. It allows users to more easily filter and direct log messages produced by BIG-IP system processes.

BIG-IP system objects related to high-speed logging configuration are shown in the figure below. The configuration process requires the definition of a pool of remote logging servers and an unformatted remote high-speed log destination that references the pool. If ArcSight, Splunk, or remote syslog servers are used, formatted log destinations need to be defined. The log filter allows you to define your filtering criteria, such as the severity, source, or system process that created the log message, and a message ID for limiting the logging to one specific instance of a specific message type. See External Monitoring of BIG-IP Systems: Implementations Guide to Configure High Speed Logging for information about configuring remote high-speed logging for several scenarios, including DNS Logging, Protocol Security Events, Network Firewall Events, DoS Protection Events, and others.


Remote monitoring
The BIG-IP LTM and BIG-IP GTM can be managed and monitored using Simple Network Management Protocol (SNMP). SNMP is an Internet protocol used for managing nodes on an IP network. SNMP trap configuration on the BIG-IP system involves defining the trap destinations and the events that will result in a trap being sent.

Specific Management Information Bases (MIBs) for BIG-IP LTM and BIG-IP GTM can be downloaded from the welcome page of the BIG-IP Configuration utility, or from the /usr/share/snmp/mibs directory on the BIG-IP filesystem. The MIB for BIG-IP LTM objects is in the F5-BIGIP-LOCAL-MIB.txt file and the MIB for BIG-IP GTM objects is in the F5-BIGIP-GLOBAL-MIB.txt file. Other F5 Networks MIBs may be necessary. For more information, see AskF5 article: SOL13322: Overview of BIG-IP MIB files (10.x - 11.x) and SNMP Trap Configuration in BIG-IP TMOS: Implementations.


Practical Considerations

This section provides some practical advice and things to be aware of when working with logging:
When changing the minimum logging level, do so with great care to avoid:
Generating large log files.
Impacting the BIG-IP system's performance.
Minimize the amount of time that the log level is set to Debug. When debug logging is no longer required, be sure to set log levels back to their default values. The following AskF5 resources list several log events and their default values:
SOL5532: Configuring the level of information logged for TMM-specific events
SOL13455: Overview of BIG-IP logging BigDB database keys (11.x)
If more than one monitor is assigned to a pool member, it may be important to determine which monitor triggered an event. See the Monitors chapter in the LTM Load Balancing section of this guide for more details.
When creating, modifying, and testing iRules, use Debug logging only in a development environment.
Before an iRule is promoted to production, remove statements typically used during
the development and testing phases that write information to the logs for sanity
checking.
To determine how many times an iRule has fired or a particular action has been
taken, use a stats profile instead of writing to a log.
Ad
min
is
tra
tors of the BIG-IP LTM and GTM mod
ules should mon
i
tor the fol
low
ing log
events:
LTM/GTM: Pool members/nodes down or flapping.
LTM/GTM: Pool has no members.
LTM: Port exhaustion.
GTM: "Connection in progress" messages.
