F5 BIG-IP LTM and GTM Operations Guide 1.0
ACKNOWLEDGEMENTS

INTRODUCTION
Legal Notices
Introduction

Core Concepts
Topologies
CMP/TMM Basics
Introduction
Monitors
Troubleshooting
Introduction
Virtual Address
Address Translation
Self IP Address
Practical Considerations
Protocol Profiles
OneConnect
HTTP Profiles
SSL Profiles
Persistence Profiles
Troubleshooting
Introduction
Architectures

IRULES
Introduction
Anatomy of an iRule
Practical Considerations
Troubleshooting

LOGGING
Introduction
Practical Considerations
ACKNOWLEDGEMENTS
F5 Networks (F5) is constantly striving to improve our services and create closer customer relationships. The F5 BIG-IP TMOS Operations Guide (v1.0) and the F5 BIG-IP Local Traffic Manager (LTM) and BIG-IP Global Traffic Manager (GTM) Operations Guide (v1.0) are tools to assist our customers in obtaining the full value of our solutions.
F5 is proud to be ranked among the world's top technology companies for its product innovations and industry leadership. With the changing nature of publishing technologies and knowledge transmission, we have applied our innovation commitment in the operations guide development process.
Applying agile concepts, subject matter experts and a group of F5 engineers (including several former F5 customers) gathered for a one-week sprint of collaborative technical writing. Considering the perspective of our customers, these talented engineers put in extreme hours and effort brainstorming, debating, and sifting through a vast quantity of technical content to write an operations guide aimed at streamlining and optimizing F5 product maintenance. The BIG-IP Local Traffic Manager (LTM) and BIG-IP Global Traffic Manager (GTM) Operations Guide is our second exploration of this process.
This is the first version of this guide and should be viewed in that context. F5 is committed to continuous improvement, so check for updates and expansion of content on the AskF5 Knowledge Base. We welcome feedback and invite readers to visit our survey at https://www.surveymonkey.com/s/F5OpsGuide (this link takes you to an outside resource) or email your ideas and suggestions to opsguide@f5.com.
Thanks to Adam Hyde and the staff at BookSprints (this link takes you to an outside resource) for their assistance in exploring the collaborative writing process used in developing the BIG-IP Local Traffic Manager and BIG-IP Global Traffic Manager Operations Guide. BookSprints' combination of onsite facilitation, collaborative platform, and follow-the-sun back-end support, design, and administration is unique in the industry.
F5 Networks Team

Julian Eames, Executive Vice President, Business Operations (executive sponsor)
Don Martin, Vice President, GS New Product & Business Development (executive sponsor)
Travis Marshal and Nicole Ferraro (finance); Petra Duterrow (purchasing); Diana Young (legal); Ignacio Avellaneda and Claire Delaney (marketing)
BookSprints Team

Adam Hyde (founder)
Henrik van Leeuwen (illustrator)
Raewyn Whyte (proofreader)
Juan Carlos Gutiérrez Barquero and Julien Taquet (technical support)
Laia Ros and Simone Poutnik (facilitators)
INTRODUCTION
Introduction
Legal Notices
About This Guide
Legal Notices
Publication Date
This document was published on December 12, 2014.
Publication Number

BIG-IP LTMGTMOps - 01_0_0
Copyright
Copyright 2014-2015, F5 Networks, Inc. All rights reserved.
F5 Networks, Inc. (F5) believes the information it furnishes to be accurate and reliable. However, F5 assumes no responsibility for the use of this information, nor any infringement of patents or other rights of third parties, which may result from its use. No license is granted by implication or otherwise under any patent, copyright, or other intellectual property right of F5 except as specifically described by applicable user licenses. F5 reserves the right to change specifications at any time without notice.
Trademarks
AAM, Access Policy Manager, Advanced Client Authentication, Advanced Firewall Manager, Advanced Routing, AFM, APM, Application Acceleration Manager, Application Security Manager, ARX, AskF5, ASM, BIG-IP, BIG-IQ, Cloud Extender, CloudFucious, Cloud Manager, Clustered Multiprocessing, CMP, COHESION, Data Manager, DevCentral, DevCentral [DESIGN], DNS Express, DSC, DSI, Edge Client, Edge Gateway, EdgePortal, ELEVATE, EM, Enterprise Manager, ENGAGE, F5, F5 [DESIGN], F5 Certified [DESIGN], F5 Networks, F5SalesXchange [DESIGN], F5Synthesis, f5Synthesis, F5Synthesis [DESIGN], F5TechXchange [DESIGN], Fast Application Proxy, Fast Cache, FirePass, Global Traffic Manager, GTM, GUARDIAN, iApps, IBR, Intelligent Browser Referencing, Intelligent Compression, IPv6 Gateway, iControl, iHealth, iQuery, iRules, iRules OnDemand, iSession, L7 Rate-Shaping, LC, Link Controller, Local Traffic Manager, BIG-IP LTM, LineRate, LineRate Systems [DESIGN], LROS, BIG-IP LTM, Message Security Manager, MSM, OneConnect, PacketVelocity, PEM, Policy Enforcement Manager, Protocol Security Manager, PSM, Real Traffic Policy Builder, SalesXchange, ScaleN, Signalling Delivery Controller, SDC, SSL Acceleration, software designed applications services, SDAC (except in Japan), StrongBox, SuperVIP, SYN Check, TCP Express, TDR, TechXchange, TMOS, TotALL, Traffic Management Operating System, Traffix Systems, Traffix Systems (DESIGN), Transparent Data Reduction, UNITY, VAULT, vCMP, VE F5 [DESIGN], Versafe, Versafe [DESIGN], VIPRION, Virtual Clustered Multiprocessing, WebSafe, and ZoneRunner are trademarks or service marks of F5 Networks, Inc., in the U.S. and other countries, and may not be used without F5's express written consent. All other product and company names herein may be trademarks of their respective owners.
Patents
This product may be protected by one or more patents indicated at: http://www.f5.com/about/guidelines-policies/patents
Notice
THE SOFTWARE, SCRIPTING & COMMAND EXAMPLES ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE, SCRIPTING & COMMAND EXAMPLES OR THE USE OR OTHER DEALINGS WITH THE SOFTWARE, SCRIPTING & COMMAND EXAMPLES.
Prerequisites
It is assumed that:
The F5 platform is installed as detailed in the F5 Platform Guide for your F5 product,
or a cloud-based virtual server solution is correctly installed and configured.
TMOS has been installed and configured as described in the F5 product
documentation manuals appropriate for your version and module.
F5 BIG-IP TMOS: Operations Guide has been reviewed and applicable
suggestions implemented.
Acceptable industry operating thresholds have been determined and configured with
considerations for seasonal variance in load.
Administrators and Operators are well versed in F5 technology concepts. F5 provides
many training opportunities, services and resources. Details can be found in the F5
BIG-IP TMOS: Operations Guide.
What is covered
The content in this guide addresses the BIG-IP LTM and BIG-IP GTM modules included in version 11.2.1 up to 11.6.0. In most cases it is agnostic to platform.
Document Conventions
To identify and understand important information, F5 documentation uses certain stylistic conventions, which are detailed in the following sub-sections.
Examples
Only non-routable IP addresses and example domain names are used in this document; valid IP addresses and domain names are required in individual implementations.
New terms
New terms are shown in bold italic text.
Document references
Italic text denotes a reference to another document or section of a document. Bold, italic text indicates a book title reference. AskF5 article links appear in bold text.
Reference Materials
We have included links for all solutions (SOL) articles, but we suggest you see AskF5 for a comprehensive list of applicable and up-to-date reference materials such as deployment guides, hardware manuals, and concept guides.
AskF5
To find a document, go to http://www.askf5.com and select the appropriate product, version, and document type, as shown in the following illustration. A comprehensive list of relevant documents will be displayed. You can add keywords and other identifiers to narrow the search.
Glossary of Terms
Due to the changing nature of technology, we have chosen not to include a Glossary in this document. An up-to-date and comprehensive glossary of common industry and F5-specific terms is archived at http://www.f5.com/glossary
Introduction
Core Concepts
More detail about BIG-IP LTM connection handling will be covered in subsequent chapters of this guide.
Topologies
BIG-IP systems can be deployed to a network with little to no modification to existing architecture. However, to optimize the performance of your network and applications, some environment changes may be required to take full advantage of the multipurpose functionality of BIG-IP systems.
There are three basic topologies for load-balanced applications with BIG-IP LTM: one-armed, two-armed, and nPath. nPath is also known as Direct Server Return, or DSR.
One-Armed Deployment
In this deployment, the virtual server is on the same subnet and VLAN as the pool members. Source address translation must be used in this configuration to ensure that server response traffic returns to the client via BIG-IP LTM.
Advantages of this topology:
Allows for rapid deployment.
Requires minimal change to network architecture to implement.
Allows for full use of BIG-IP LTM feature set.
Considerations for this topology:
Client IP address is not visible to pool members as the source address will be
translated by the BIG-IP LTM. This prevents asymmetric routing of server return
traffic. (This can be changed for HTTP traffic by using the X-Forwarded-For header).
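
Where pool members need the original client IP in a one-armed deployment, an HTTP profile can insert it as an X-Forwarded-For header. A minimal tmsh sketch (the profile and virtual server names are examples, not from this guide):

```
# Create an HTTP profile that inserts the client IP into X-Forwarded-For
create ltm profile http xff_http { insert-xforwarded-for enabled }

# Attach it to the (hypothetical) one-armed virtual server
modify ltm virtual onearm_vs profiles add { xff_http }
```

Pool-member applications or logging must then be configured to read the X-Forwarded-For header rather than the TCP source address.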
Two-Armed Deployment
In this topology, the virtual server is on a different VLAN from the pool members, which requires that BIG-IP systems route traffic between them. Source address translation may or may not be required, depending on overall network architecture. If the network is designed so that pool member traffic is routed back to BIG-IP LTM, it is not necessary to use source address translation.
If pool member traffic is not routed back to BIG-IP LTM, it is necessary to use source address translation to ensure it is translated back to the virtual server IP. The following figure shows a deployment scenario without source address translation:
The following illustration shows the same deployment scenario using source address translation:
Advantages of this topology:
Allows preservation of client source IP.
Allows for full use of BIG-IP LTM feature set.
Allows BIG-IP LTM to protect pool members from external exploitation.
Considerations for this topology:
May necessitate network topology changes in order to ensure return traffic traverses
BIG-IP LTM.
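
When return traffic cannot be guaranteed to route back through BIG-IP LTM, SNAT automap is the usual alternative to changing the network topology. A hedged tmsh sketch (object names and addresses are examples):

```
# Two-armed virtual server; automap rewrites the client source address to a
# self IP on the server-side VLAN so responses return through BIG-IP LTM
create ltm virtual app_vs {
    destination 203.0.113.10:80
    ip-protocol tcp
    pool app_pool
    source-address-translation { type automap }
}
```

The trade-off, as noted above for one-armed deployments, is that pool members no longer see the client source IP directly.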
nPath Deployment
nPath, also known by its generic name Direct Server Return (DSR), is a deployment topology in which return traffic from pool members is sent directly to clients without first traversing the BIG-IP LTM. This allows for higher theoretical throughput because BIG-IP LTM only manages the incoming traffic and does not process the outgoing traffic. However, this topology significantly reduces the available feature set that BIG-IP LTM can use for application traffic. If the specific use case for nPath is justified, see BIG-IP Local Traffic Manager: Implementations covering nPath deployments.
Advantages of this topology:
Allows maximum theoretical throughput.
Preserves client IP to pool members.
Considerations for this topology:
Limits availability of usable features of BIG-IP LTM and other modules.
Requires modification of pool members and network.
Requires more complex troubleshooting.
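
An nPath virtual server is typically built on a FastL4 profile with address and port translation disabled; pool members must also have the virtual address configured on a loopback interface with ARP suppressed for it. A sketch only, with example names (see the Implementations guide referenced above for the authoritative steps):

```
# FastL4 profile tolerant of asymmetric (DSR) flows
create ltm profile fastl4 fastl4_npath { loose-close enabled reset-on-timeout disabled }

# Virtual server that forwards without translating the destination address or port
create ltm virtual npath_vs {
    destination 192.168.1.101:80
    profiles replace-all-with { fastl4_npath }
    pool npath_pool
    translate-address disabled
    translate-port disabled
}
```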
CMP/TMM Basics
Introduction
Overview of load balancing
BIG-IP systems are designed to distribute client connections to load balancing pools that are typically comprised of multiple pool members. The load balancing method (or algorithm) determines how connections are distributed across pool members. Load balancing methods fall into one of two distinct categories: static or dynamic. Factors such as pool member availability and session affinity, sometimes referred to as stickiness, influence the choice of load balancing method.
Certain load-balancing methods are designed to distribute connections evenly across pool members, regardless of the pool members' or nodes' current workload. These methods tend to work better with homogenous server pools and/or homogenous workloads. An example of a homogenous server pool is one that is comprised of servers that all have similar processing performance and capacity. An example of a homogenous workload is one in which all connections are short-lived or all requests/responses are similar in size. Other load-balancing methods are designed to favor higher-performing servers, possibly resulting in deliberately uneven traffic distribution across pool members. Some load-balancing methods take into account one or more factors that change at run time, such as current connection count or capacity; others do not.
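
The load balancing method is a per-pool setting. As an illustrative tmsh sketch (the pool name is an example), a pool can be moved from the default static round-robin to a dynamic method:

```
# Least Connections (member) is dynamic: it considers each member's
# current connection count at selection time
modify ltm pool app_pool load-balancing-mode least-connections-member
```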
However, if the number of priority-10 physical nodes available falls below the Priority group activation setting of 2, the virtual machines in the priority-5 group will be used automatically.
Priority group activation does not modify persistence behavior: any new connections sent to lower-priority members will continue to persist even when higher-priority members become available again.
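
Priority group activation is configured with per-member priority-group values plus a pool-level min-active-members threshold. A tmsh sketch matching the scenario above (names and addresses are examples):

```
create ltm pool app_pool {
    min-active-members 2
    members add {
        10.1.1.10:80 { priority-group 10 }   # physical node
        10.1.1.11:80 { priority-group 10 }   # physical node
        10.1.1.20:80 { priority-group 5 }    # virtual machine; used only when
        10.1.1.21:80 { priority-group 5 }    # fewer than 2 priority-10 members remain
    }
}
```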
Monitors
What is a monitor?
Monitors are used by BIG-IP systems to determine whether a pool member (server) is eligible to service application traffic. Monitors can be very simple or very complex, depending on the nature of the application. Monitors enable BIG-IP systems to gauge the health of pool members by periodically making specific requests of them in order to evaluate their statuses based on the responses received or not received.
Monitor components
BIG-IP systems include native support for a wide number of protocols and proprietary applications and services (for example, TCP, UDP, HTTP, HTTPS, SIP, LDAP, SOAP, MSSQL, MySQL, etc.). Each monitor type will be made up of options relevant to the monitor type, but in general, each will have a request to send and a response to expect. There is also the option of configuring a host and port to send the request to a host other than one of the pool members (alias hosts and ports), if necessary.
With respect to HTTP and TCP monitors, for example, the send string and receive string options determine what requests are sent to the pool members or alias hosts, and what responses are required to be returned in order for the monitoring conditions to be satisfied and the pool member(s) subsequently marked as available.
If the available options don't suit the application, custom monitors can be created and executed using external scripts, or by constructing custom strings to be sent to the application over TCP or UDP.
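
As a concrete illustration of the send and receive string options, an HTTP monitor can be created from tmsh along these lines (the monitor name, path, and host below are examples, not defaults):

```
create ltm monitor http app_http_monitor {
    defaults-from http
    send "GET /healthcheck HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n"
    recv "HTTP/1.1 200 OK"
    interval 5
    timeout 16
}
```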
Monitor implementation
Typically, a monitor will be created that uses the same protocol and simulates a normal request to the back-end pool members that would otherwise be received as part of legitimate client traffic. For instance, in the case of an HTTP-based application, the monitor may make an HTTP request to a webpage on the pool member. Assuming a response is received within the timeout window, the response data will be evaluated against the configured receive string to ensure proper or expected operation of the service.
… application owners want the pool member to serve traffic, and gives them an easy way to remove the pool member from load balancing by either renaming it or moving it to an alternate location.

For example:
ltm monitor http finance.example.com_enable_http_monitor {
    adaptive disabled
    defaults-from http
    destination *:*
    interval 5
    ip-dscp 0
    recv "HTTP/1.1 200 OK"
    send "GET /app-health-check/enable HTTP/1.1\r\nHost: finance.example.com\r\nConnection: close\r\n\r\n"
    time-until-up 0
    timeout 16
}

ltm monitor http finance.example.com_health_http_monitor {
    adaptive disabled
    defaults-from http
    destination *:*
    interval 5
    ip-dscp 0
    recv "HTTP/1.1 200 OK"
    send "GET /app-health-check/status HTTP/1.1\r\nHost: finance.example.com\r\nConnection: close\r\n\r\n"
    time-until-up 0
    timeout 16
}
ltm pool finance.example.com_pool {
    members {
        10.1.1.10:http {
            address 10.1.1.10
            session monitor-enabled
            state up
        }
        10.1.1.11:http {
            address 10.1.1.11
            session monitor-enabled
            state down
        }
        10.1.1.12:http {
            address 10.1.1.12
            session monitor-enabled
            state down
        }
    }
    monitor finance.example.com_enable_http_monitor and finance.example.com_health_http_monitor
}
Pool members can be pulled from load balancing if the site is determined to be unhealthy or if an application owner renames the /app-health-check/enable file. It can also be accomplished using a disable string, which functions similarly to the receive string, except that it disables the pool member. This method can be used if the health check page is dynamic and the application returns different content when unavailable or in maintenance mode.
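
The disable string is configured with the monitor's recv-disable property. A hedged tmsh sketch (the response strings are examples): when the response matches the recv string the member is marked up; when it matches the recv-disable string the member stops taking new connections.

```
create ltm monitor http finance_maint_http_monitor {
    defaults-from http
    send "GET /app-health-check/status HTTP/1.1\r\nHost: finance.example.com\r\nConnection: close\r\n\r\n"
    recv "SITE OK"
    recv-disable "SITE MAINTENANCE"
}
```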
Additional use cases
Some servers may host multiple applications or websites. For example, if a given set of web servers host finance.example.com, hr.example.com, and intranet.example.com as virtual hosts, it probably wouldn't make sense to implement the pool structure so that all of those sites use a single pool with multiple monitors for each site:
ltm pool example.com_pool {
    members {
        10.1.1.10:http {
            address 10.1.1.10
            session monitor-enabled
            state checking
        }
        10.1.1.11:http {
            address 10.1.1.11
            session monitor-enabled
            state checking
        }
        10.1.1.12:http {
            address 10.1.1.12
            session monitor-enabled
            state checking
        }
    }
    monitor finance.example.com_http_monitor and hr.example.com_http_monitor and intranet.example.com_http_monitor
}
Instead, you can implement an individual pool for each site, each with its own set of health monitors:
ltm pool finance.example.com_pool {
    members {
        10.1.1.10:http {
            address 10.1.1.10
            session monitor-enabled
            state down
        }
        10.1.1.11:http {
            address 10.1.1.11
            session monitor-enabled
            state down
        }
        10.1.1.12:http {
            address 10.1.1.12
            session monitor-enabled
            state down
        }
    }
    monitor finance.example.com_http_monitor
}
ltm pool hr.example.com_pool {
    members {
        10.1.1.10:http {
            address 10.1.1.10
            session monitor-enabled
            state down
        }
        10.1.1.11:http {
            address 10.1.1.11
            session monitor-enabled
            state down
        }
        10.1.1.12:http {
            address 10.1.1.12
            session monitor-enabled
            state down
        }
    }
    monitor hr.example.com_http_monitor
}
ltm pool intranet.example.com_pool {
    members {
        10.1.1.10:http {
            address 10.1.1.10
            session monitor-enabled
            state down
        }
        10.1.1.11:http {
            address 10.1.1.11
            session monitor-enabled
            state down
        }
        10.1.1.12:http {
            address 10.1.1.12
            session monitor-enabled
            state down
        }
    }
    monitor intranet.example.com_http_monitor
}
The previous example illustrates how each individual virtual host application can be monitored and marked up or down independently of the others. Note that employing this method requires either an individual virtual server for each virtual host, or an iRule to match virtual hosts to pools as appropriate.
Performance considerations
Considerations for determining an application's health and suitability to serve client requests may include the pool members' loads and response times, not just the validation of the responses themselves. Simple Network Management Protocol (SNMP) and Windows Management Instrumentation (WMI) monitors can be used to evaluate the load on the servers, but those alone may not tell the whole story, and may present their own challenges. In addition to those options, the BIG-IP system also includes an adaptive monitoring setting that allows for evaluation of the response time of the server to the monitor requests.
Adaptive monitoring allows an administrator to require that a pool member be removed from load balancing eligibility in the event that it fails to live up to a defined service level agreement. For example:
ltm monitor http finance.example.com_http_monitor {
    adaptive enabled
    adaptive-limit 150
    adaptive-sampling-timespan 180
    defaults-from http
    destination *:*
    interval 5
    ip-dscp 0
    recv "HTTP/1.1 200 OK"
    send "GET /finance/app-health-check/ HTTP/1.1\r\nHost: finance.example.com\r\nConnection: close\r\n\r\n"
    time-until-up 0
    timeout 16
}
> Accept: */*
> Host: finance.example.com
>
* STATE: DO => DO_DONE handle 0x8001f418; line 1281 (connection #0)
* STATE: DO_DONE => WAITPERFORM handle 0x8001f418; line 1407 (connection #0)
* STATE: WAITPERFORM => PERFORM handle 0x8001f418; line 1420 (connection #0)
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 200 OK
* Server nginx/1.0.15 is not blacklisted
< Server: nginx/1.0.15
< Date: Wed, 10 Dec 2014 04:05:40 GMT
< Content-Type: application/octet-stream
< Content-Length: 90
< Last-Modified: Wed, 10 Dec 2014 03:59:01 GMT
< Connection: keep-alive
< Accept-Ranges: bytes
<
<html>
<body>
Everything is super groovy at finance.example.com baby!
</body>
</html>
* STATE: PERFORM => DONE handle 0x8001f418; line 1590 (connection #0)
* Connection #0 to host 10.1.1.10 left intact
* Expire cleared
If working with an SSL-enabled website, the same method can be used with OpenSSL to send a BIG-IP system-formatted monitor string:
$ printf "GET /app-health-check/status HTTP/1.1\r\nHost: finance.example.com\r\nConnection: close\r\n\r\n" | openssl s_client -connect finance.example.com:443 -quiet
depth=0 C = US, ST = WA, L = Seattle, O = MyCompany, OU = IT, CN = localhost.localdomain, emailAddress = [email protected]
verify error:num=18:self signed certificate
verify return:1
depth=0 C = US, ST = WA, L = Seattle, O = MyCompany, OU = IT, CN = localhost.localdomain, emailAddress = [email protected]
verify return:1
HTTP/1.1 200 OK
Server: nginx/1.0.15
Date: Wed, 10 Dec 2014 17:53:23 GMT
Content-Type: application/octet-stream
Content-Length: 90
Last-Modified: Wed, 10 Dec 2014 03:59:01 GMT
Connection: close
Accept-Ranges: bytes

<html>
<body>
Everything is super groovy at finance.example.com baby!
</body>
</html>
read:errno=0
It may also prove handy to capture working requests with a packet capture analysis tool such as Wireshark, and view them from there to pull out the headers and responses.
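
When a monitor marks a member down unexpectedly, it can also help to check the configured receive string against a captured response offline. A minimal shell sketch (the response body is simulated; this plain substring check does not model the regular-expression matching LTM's recv string also supports):

```shell
# Simulated response captured from a pool member
response='HTTP/1.1 200 OK
Server: nginx/1.0.15'

# The monitor's configured receive string
recv='HTTP/1.1 200 OK'

# Substring check approximating the monitor's match decision
case "$response" in
  *"$recv"*) status=UP ;;
  *)         status=DOWN ;;
esac
echo "monitor would mark member $status"
```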
Monitor reference
See BIG-IP Local Traffic Manager: Monitors Reference published on AskF5 for further details. The DevCentral site is also a good resource for specific questions on how to build or troubleshoot a monitor.
Troubleshooting
Load balancing issues can typically fall into one of the following categories:
Imbalanced load balancing, in which connections are not distributed as expected
across all pool members.
No load balancing, in which all connections go to one pool member.
Traffic failure, in which no connections go to any pool member.
No load balancing
No load balancing occurs when all connections are directed to one pool member.
When diagnosing imbalanced load balancing, consider the following conditions:
Monitors marking pool members down and staying down.
Fallback host defined in HTTP profile.
Persistence.
Server conditions when using dynamic load balancing methods.
Server conditions when using node-based rather than member-based load balancing
methods.
iRule affecting/overriding load balancing decisions.
Traffic failure
Traffic failure occurs where the BIG-IP system receives initial traffic from the client but is unable to direct the traffic to a pool member, or when connections to a pool member fail. Consider the following conditions:
Improper status (for example, monitor marking down when it should be up or vice
versa).
Inappropriate use of profiles (for example, not using a server-ssl profile with an SSL
server).
Server conditions.
iRule affecting/overriding load balancing decisions.
iRule error.
Insufficient source-address-translation addresses available.
References
The following list includes resources that may be useful in troubleshooting load balancing issues:
SOL12531: Troubleshooting health monitors
SOL3224: HTTP health checks may fail even though the node is responding
correctly
SOL13898: Determining which monitor triggered a change in the availability of a
node or pool member
SOL14030: Causes of uneven traffic distribution across BIG-IP pool members
SOL7820: Overview of SNAT features contains information about insufficient
source-address-translation issues in the SNAT port exhaustion section.
The iRules section of this guide and DevCentral have information about
troubleshooting iRules.
BIG-IP LTM Network Address Objects
Introduction
Virtual Address
Address Translation
Self IP Address
IPv4/IPv6 Gateway Behavior
Auto Last Hop
Introduction
IP addresses and netblocks are fundamental to IP configuration and operation. This section covers the various network-address object types and the implications of each.
When an environment uses both IPv4 and IPv6 addressing, the BIG-IP system can also act as a protocol gateway, allowing clients that have only one IP stack to communicate with hosts that use the alternate addressing scheme.
Additional features not covered by this manual that help support a disparate IPv4/IPv6 environment include NAT IPv6 to IPv4 (sometimes referred to as 6to4) and DNS6to4.
Virtual Address
A virtual address is a BIG-IP object limited in scope to an IP address or IP netblock level. A virtual address can be used to control how the BIG-IP system responds to IP-level traffic destined for a TMM-managed IP address that is not otherwise matched by a specific virtual server object. For example, it is possible to configure BIG-IP to not send internet control message protocol (ICMP) echo-replies when all of the virtual servers associated with a virtual address are unavailable (as determined by monitors).
Virtual addresses are created implicitly when a virtual server is created, and they can be created explicitly via tmsh, but they are generally only useful in conjunction with an associated virtual server. Virtual addresses are a mechanism by which BIG-IP LTM virtual server objects are assigned to traffic groups (see Traffic group interaction in the BIG-IP LTM Virtual Servers chapter of this guide for more information).
ARP, ICMP echo, and route advertisement behaviors are controlled in the virtual address configuration.
ARP
The ARP property of the virtual address controls whether or not the BIG-IP system will respond to ARP and IPv6 neighbor discovery requests for the virtual address. ARP is disabled by default for network virtual addresses.
Note: Disabling a virtual server will not cause the BIG-IP system to stop responding to ARP requests for the virtual address.
ICMP echo
In BIG-IP version 11.3 and higher, the ICMP echo property of the virtual address controls how the BIG-IP system responds to ICMP echo (ping) requests sent to the virtual address.
It is possible to enable or disable ICMP echo responses. In version 11.5.1 and higher, it is also possible to selectively enable ICMP echo responses based on the state of the virtual servers associated with the virtual address.
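
In tmsh, this behavior is set on the virtual-address object. A sketch only (the address is an example, and the option name reflects 11.5.x; verify against tmsh help for your version):

```
# Reply to pings only while at least one associated virtual server is available
modify ltm virtual-address 192.168.1.101 icmp-echo selective
```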
For more information on this topic, see Controlling Responses to ICMP Echo Requests in BIG-IP Local Traffic Manager: Implementations.
Route advertisement
When using route health injection with dynamic routing protocols, the addition of routing entries into the routing information base is controlled in the virtual address object.
The route can be added when any virtual server associated with the virtual address is available, when all virtual servers associated with the virtual address are available, or always.
For more information on route health injection, see Configuring route advertisement on virtual addresses in BIG-IP TMOS: IP Routing Administration.
More Information
For more information about virtual addresses, see About Virtual Addresses in BIG-IP Local Traffic Management: Basics.
Address Translation
NAT
Network address translation (NAT) is used to map one IP address to another, typically to map between public and private IP addresses. NAT allows bi-directional traffic through the BIG-IP system.
A NAT contains an origin address and a translation address. Connections initiated from the origin address will pass through the BIG-IP system, which will use the translation address as the source on the server side. Connections initiated to the translation address will pass through the BIG-IP system to the origin address.
In BIG-IP version 11.3 and higher, similar behavior can be achieved more flexibly by taking advantage of both the source and destination address configuration in virtual server configurations.
SNAT
Secure network address translation or source network address translation (SNAT) is used to map a set of source addresses on the client side of BIG-IP to an alternate set of source addresses on the server side. It is not possible to initiate a connection through a SNAT on BIG-IP.
SNAT pool
A SNAT pool is a group of IP addresses that BIG-IP can choose from to use for a translation address. Each address in a SNAT pool will also implicitly create a SNAT translation if one does not exist.
SNAT translation
A SNAT translation sets the properties of an address used as a translation or in a SNAT pool. Common properties that might need to be changed for a SNAT translation include which traffic-group the address belongs to and whether or not the BIG-IP system should respond to ARP requests for the translation address.
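
In tmsh, a SNAT pool and its translation-address properties can be sketched as follows (addresses and names are examples):

```
# SNAT pool; each member implicitly creates a snat-translation if absent
create ltm snatpool example_snatpool members add { 10.2.2.10 10.2.2.11 }

# Adjust translation-address properties such as traffic group and ARP response
modify ltm snat-translation 10.2.2.10 traffic-group traffic-group-1 arp enabled
```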
More Information
For more information on NATs and SNATs, see NATs and SNATs in BIG-IP TMOS: Routing Administration.
Self IP Address
Self IP addresses are configured on BIG-IP systems and associated with a VLAN. These addresses define what networks are locally attached to BIG-IP.
Depending on the port lockdown settings for the self IP, these addresses can be used to communicate with the BIG-IP system. However, it is recommended that all management of BIG-IP systems be performed via the management interface on a protected management network using SSH and HTTPS, not via the self IPs.
Self IP addresses are also used by the BIG-IP system when it needs to initiate communications with other hosts via routes associated with the self IP networks. For example, monitor probes are initiated from a non-floating self IP address.
In high-availability environments, configuring floating self IP addresses for each traffic group on each VLAN will allow ease of configuration going forward.
Any virtual servers that utilize an auto-map source address translation setting will
work properly in a fail-over event if mirroring is configured.
If routes need to be configured on adjacent hosts, they can be configured to utilize
the floating address to ensure that the routed traffic passes through the active BIG-IP
system.
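For example, a non-floating and a floating self IP pair might be created with tmsh along these lines. The VLAN name, addresses, and traffic-group names are hypothetical, and syntax may differ slightly between TMOS versions:

```shell
# Non-floating self IP (owned by this device; used for monitor probes).
tmsh create net self 192.168.2.3/24 vlan internal allow-service none \
    traffic-group traffic-group-local-only

# Floating self IP that follows traffic-group-1 to the active unit
# on failover.
tmsh create net self 192.168.2.4/24 vlan internal allow-service none \
    traffic-group traffic-group-1
```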
IPv4/IPv6 Gateway Behavior
Name    Destination        Service port   Source
MyVS1   192.168.1.101/32   80             0.0.0.0/0
MyVS2   192.168.1.101/32                  192.168.2.0/24
MyVS3   192.168.1.101/32                  192.168.2.0/25
MyVS4   0.0.0.0/0          80             192.168.2.0/24
MyVS5   192.168.1.0/24                    0.0.0.0/0
Example Connection Table
Inbound source 192.168.2.1, inbound destination 192.168.1.101:80
MyVS3 is selected because the destination address matches MyVS1, MyVS2, and MyVS3. The source address matches both MyVS2 and MyVS3, but MyVS3 has a subnet mask narrower than MyVS2.

Inbound source 192.168.2.151, inbound destination 192.168.1.101:80
MyVS2 is selected because the destination address matches MyVS1, MyVS2, and MyVS3. The source address matches only MyVS2.

Inbound source 192.168.10.1, inbound destination 192.168.1.101:80
MyVS1 is selected because the destination address matches MyVS1, MyVS2, and MyVS3. The source address matches only MyVS1.

192.168.2.1
For more information on virtual server precedence, see the following AskF5 articles:
SOL6459: Order of precedence for virtual server matching (9.x - 11.2.1)
SOL14800: Order of precedence for virtual server matching (11.3.0 and later)
Translation options
With respect to layer 4 virtual servers (standard and FastL4, for example), most network address translation happens automatically, assuming a default BIG-IP LTM configuration.
In the event that the BIG-IP system is not configured as the default gateway for the pool members, or you cannot otherwise guarantee that return traffic will traverse the BIG-IP system, it will probably become necessary to leverage SNAT automap or a SNAT pool assigned to the virtual server to ensure traffic will transit the BIG-IP system on egress and the response will be properly returned to the client.
When using an automap source address translation, each traffic group must have
appropriate floating self-IPs defined.
When using a SNAT pool source address translation, the SNAT pool must contain SNAT translations that belong to the same traffic group as the virtual server's virtual address. This will facilitate more seamless failover, since the addresses used to NAT the traffic can follow the traffic group to the next available BIG-IP device.
To specify the traffic group that one or more virtual servers reside in, assign the associated virtual address object to the appropriate traffic group.
Clone pools
The BIG-IP system includes a feature called clone pools that allows it to mirror TMM-managed traffic off-box to third-party intrusion detection systems (IDS) or intrusion prevention systems (IPS) for further scrutiny. This is potentially a much more flexible and efficient solution than using span ports or network taps.
Configuring a clone pool for a virtual server allows BIG-IP system administrators to select either client-side or server-side traffic for cloning off-box. In the case of SSL-accelerated traffic, cloning the server-side traffic would allow it to be seen by the IDS or IPS in the clear without the need to export SSL key pairs for external decryption.
Furthermore, the ability to leverage a pool of addresses to clone traffic to allows for load balancing of mirrored traffic to more than one IDS/IPS system. For more information about clone pools, see BIG-IP Local Traffic Management Guide.
Performance (layer 4)
The performance (layer 4) virtual server type may be useful when little-to-no layer 4 or layer 7 processing is required. While the BIG-IP system can still perform source and destination IP address translation as well as port translation, the load balancing decisions will be limited in scope due to less layer 7 information being available for such purposes. On F5 hardware platforms which support it, performance (layer 4) virtual servers can offload connection flows to the ePVA, which can result in higher throughput and minimal latency.
Forwarding (IP)
A forwarding IP virtual server allows network traffic to be forwarded to a host or network address space. A forwarding IP virtual server uses the routing table to make forwarding decisions based on the destination address for the server-side connection flow.
A forwarding IP virtual server is most frequently used to forward IP traffic in the same fashion as any other router. Particular FastL4 profile options are required if stateless forwarding is desired. For more information, see IP Forwarding in Common Deployments in this guide.
For more information about forwarding IP virtual servers, see AskF5 article: SOL7595: Overview of IP forwarding virtual servers.
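A minimal wildcard forwarding (IP) virtual server might look like the following tmsh sketch. The virtual server name and VLAN are hypothetical, and available options differ between TMOS versions:

```shell
# Wildcard forwarding (IP) virtual server: forwards any destination using
# the BIG-IP routing table. Address and port translation are disabled
# automatically for a network destination.
tmsh create ltm virtual vs_ip_forward \
    destination 0.0.0.0:any mask any \
    ip-forward \
    profiles add { fastL4 } \
    vlans add { internal } vlans-enabled
```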
You can also define specific network destinations and source masks for virtual servers and/or enable them only on certain VLANs, which allows fine-grained control of how network traffic is handled when forwarded. For example, you can use a wildcard virtual server with Source Address Translation enabled for outbound traffic, and add an additional network virtual server with Source Address Translation disabled for traffic destined for other internal networks. Or you can use a performance (layer 4) virtual server to select certain traffic for inspection by a firewall or IDS.
If you use a performance (layer 4) virtual server, you must ensure that translate address and translate port are disabled. These settings are disabled automatically if a virtual server is configured with a network destination address.
Stateless
The stateless virtual server type does not create connection flows and performs minimal packet processing. It only supports UDP and is only recommended in limited situations.
For more information about stateless virtual servers, including recommended uses and sample configurations, see AskF5 article: SOL13675: Overview of the stateless virtual server.
Reject
The reject virtual server type rejects any packets which would create a new connection flow. Use cases include blocking out a port or IP within a range covered by a forwarding or other wildcard virtual server. For more information, see AskF5 article SOL14163: Overview of BIG-IP virtual server types (11.x).
Performance (HTTP)
The performance (HTTP) virtual server type is the only virtual server type that implements the FastHTTP profile. While it may be the fastest way to pass HTTP traffic under certain circumstances, this type of virtual server has specific requirements and limitations. Read AskF5 article SOL8024: Overview of the FastHTTP profile before deploying performance (HTTP) virtual servers.
Miscellaneous
BIG-IP LTM supports other virtual server types, but these are used less frequently and have specific use cases. As of TMOS v11.6.0, these include DHCP, internal, and message routing. For more information on these virtual server types, see AskF5 article SOL14163: Overview of BIG-IP virtual server types.
Practical Considerations
Virtual server connection behavior for TCP
Different types of virtual servers will each have a unique way of handling TCP connections. For more information about TCP connection behavior for virtual server types, see AskF5 article SOL8082: Overview of TCP connection setup for BIG-IP LTM virtual server types.
Performance L4
For selecting the best virtual server type for HTTP traffic, see AskF5 article: SOL4707: Choosing appropriate profiles for HTTP traffic.
Virtual Server Troubleshooting
Capturing network traffic with tcpdump
For information about capturing network traffic with tcpdump, see the following AskF5 articles:
SOL411: Overview of packet tracing with the tcpdump utility
SOL13637: Capturing internal TMM information with tcpdump
Methodology
When troubleshooting virtual servers, it is helpful to have a solid understanding of the Open Systems Interconnection (OSI) model. Standard virtual servers operate at layer 4.
When troubleshooting virtual servers or other listeners, it is important to consider the lower layers and how they contribute to a healthy connection. These are the physical layer, data-link layer, and network layer.
Physical layer
The physical layer is the physical connection between BIG-IP LTM and other devices, primarily Ethernet cables and fiber optic cables. It includes low-level link negotiation and can be verified by observing interface state.
Data-link layer
The data-link layer is primarily Ethernet and Link Aggregation Control Protocol (LACP). You can validate that the data-link layer is functioning by observing LACP status of trunks or by capturing network traffic to ensure Ethernet frames are arriving at BIG-IP LTM.
Network layer
The network layer is IP, ARP, and ICMP. ARP (or IPv6 neighbor discovery) is a prerequisite for IP to function.
Layer 3 troubleshooting
Your troubleshooting process should include layer 3 investigation. First, ARP resolution should be confirmed. If ARP is failing, it may indicate a lower-layer interruption or misconfiguration. You can validate ARP resolution by checking the ARP table on the BIG-IP system and other IP-aware devices on the local network.
If ARP is succeeding, confirm whether packets from the client are reaching BIG-IP LTM; if not, there may be a firewall blocking packets or routing may not be sufficiently configured.
As an initial step, you should verify IP connectivity between the client and the BIG-IP system, and between the BIG-IP system and any dependent resources, such as pool members. This can usually be accomplished using tools such as ping and traceroute.
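The checks above might look like the following from the BIG-IP command line. The addresses are the hypothetical ones used elsewhere in this chapter; substitute your own:

```shell
# Verify client-side reachability from the BIG-IP system.
ping -c 4 192.168.1.1

# Verify pool member reachability and the path taken.
ping -c 4 192.168.2.2
traceroute 192.168.2.2

# Inspect the BIG-IP ARP table for the locally attached subnets.
tmsh show net arp
```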
Example commands
tcpdump can be used to capture network traffic for troubleshooting purposes. It can be used to view live packet data on the command line or to write captured traffic to standard pcap files which can be examined with Wireshark or other tools at a later time.
Commonly used options for tcpdump:
-i interface
This must always be specified and indicates the interface to capture data from. This can be a VLAN or 0.0, which indicates all VLANs. You can also capture management interface traffic using the interface eth0, or capture packets from front-panel interfaces by indicating the interface number, for example, 1.1, 1.2, etc. For VLANs and 0.0, you can specify the :p flag on the interface to capture traffic for the peer flow of any flow that matches the filter as well. See the following examples.
-s0
Configures tcpdump to capture the complete packet. This is recommended when capturing traffic for troubleshooting.
-w /var/tmp/filename.pcap
Configures tcpdump to write captured packet data to a standard pcap file which can be inspected with Wireshark or tcpdump. When the -w flag is specified, tcpdump does not print packet information to the terminal.
Note: F5 recommends writing packet captures to /var/tmp to avoid service interruptions when working with large captures.
-nn
Disables DNS address lookup and shows ports numerically instead of using a name from /etc/services. Only useful when packets are not being written to a file. Recommended.
It is also important to isolate the desired traffic by specifying packet filters whenever possible, particularly on systems processing large volumes of traffic.
The following examples assume a client IP of 192.168.1.1 and a virtual server of 192.168.2.1:80. The client-side VLAN is VLAN1, and, if two-armed, the server-side VLAN is VLAN2. The pool member is 192.168.2.2:80 for one-armed examples, and 192.168.3.2:80 for two-armed examples. BIG-IP LTM is configured with non-floating self IPs of 192.168.2.3 and 192.168.3.3, and floating self IPs of 192.168.2.4 and 192.168.3.4.
To capture ARP traffic only on the client-side VLAN and log to the terminal:

tcpdump -i VLAN1 -nn -s0 arp
To capture all traffic from a particular client to /var/tmp/example.pcap:

tcpdump -i 0.0 -s0 -w /var/tmp/example.pcap host 192.168.1.1
If Source Address Translation is enabled, the above command will not capture server-side traffic. To do so, use the :p flag (a client-side flow's peer is its respective server-side flow):

tcpdump -i 0.0:p -s0 -w /var/tmp/example.pcap host 192.168.1.1
To capture client-side traffic only from a sample client to the desired virtual server:

tcpdump -i VLAN1 -s0 -w /var/tmp/example.pcap host 192.168.1.1 and host 192.168.2.1 and tcp port 80
To capture traffic only to a pool member in a one-armed configuration (excluding the non-floating self IP to exclude monitor traffic):

tcpdump -i VLAN1 -s0 -w /var/tmp/example.pcap host 192.168.2.2 and tcp port 80 and not host 192.168.2.3
To capture client-side traffic as well, in a two-armed configuration, use this modified version of the last command:

tcpdump -i 0.0:p -s0 -w /var/tmp/example.pcap host 192.168.3.2 and tcp port 80 and not host 192.168.3.3
This is only a small introduction to the filters available for tcpdump. More information is available on AskF5.
BIG-IP LTM Profiles
Introduction
Protocol Profiles
OneConnect
HTTP Profiles
SSL Profiles
BIG-IP LTM Policies
Persistence Profiles
Other Protocols and Profiles
Profile Troubleshooting
Introduction to BIG-IP LTM profiles
Profiles enable BIG-IP LTM to understand or interpret supported network traffic types and allow it to affect the behavior of managed traffic flowing through it. BIG-IP systems include a set of standard (default) profiles for a wide array of applications and use cases, such as performing protocol enforcement of traffic, enabling connection and session persistence, and implementing client authentication. Common profile types include:
Service: layer 7 protocols, such as HTTP, FTP, DNS, and SMTP.
Persistence: settings which govern routing of existing or related connections or
requests.
Protocol: layer 4 protocols, such as TCP, UDP, and Fast L4.
SSL: which enable the interception, offloading, and authentication of transport layer
encryption.
Other: which provide various types of advanced functionality.
Profile management
Default profiles are often sufficient to achieve many business requirements. Note the following guidance for working with the default profiles:
Changes to the default profile settings are lost when the BIG-IP system software is upgraded.
Changes to default profiles may have unexpected consequences for custom profiles
that otherwise inherit the default profile settings.
To customize a profile, do not modify a default profile. Create a child profile (which
will inherit the default profile settings) and customize the child profile as needed.
A custom profile can inherit settings from another custom profile. This allows for efficient management of profiles for related applications requiring similar, but unique, configuration options.
Protocol Profiles
Introduction
There are a number of profiles available that are jointly described as protocol profiles. For more information on the following profiles, see Protocol Profiles in BIG-IP Local Traffic Manager: Profiles Reference, along with the referenced solutions.
Fast L4
FastL4 profiles are used for performance (layer 4), forwarding (layer 2), and forwarding (IP) virtual servers. The FastL4 profile can also be used to enable stateless IP-level forwarding of network traffic in the same fashion as an IP router. See AskF5 article: SOL7595: Overview of IP forwarding virtual servers for more information.
Fast HTTP
The FastHTTP profile is a scaled-down version of the HTTP profile that is optimized for speed under controlled traffic conditions. It can only be used with the performance (HTTP) virtual server and is designed to speed up certain types of HTTP connections and to reduce the number of connections to back-end servers.
Given that the FastHTTP profile is optimized for performance under ideal traffic conditions, the HTTP profile is recommended when load balancing most general-purpose web applications.
See AskF5 article: SOL8024: Overview of the FastHTTP profile before deploying performance (HTTP) virtual servers.
OneConnect
Introduction
The OneConnect profile may improve HTTP performance by reducing connection setup latency between the BIG-IP system and pool members, as well as minimizing the number of open connections to them. The OneConnect profile maintains a pool of connections to the configured pool members. When there are idle connections available in the connection pool, new client connections will use these existing pool member connections if the configuration permits. When used in conjunction with an HTTP profile, each client request from an HTTP connection is load balanced independently.
For information about settings within the OneConnect profile, see About OneConnect Profiles in BIG-IP Local Traffic Management: Profiles Reference.
See AskF5 article: SOL5911: Managing connection reuse using OneConnect source mask.
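As a sketch, a custom OneConnect profile might be created with tmsh as follows. The profile name is hypothetical, and the appropriate source mask depends on your environment:

```shell
# Custom OneConnect profile inheriting from the default. A source mask of
# 255.255.255.255 restricts connection reuse to connections from the same
# client IP, which is the most conservative setting.
tmsh create ltm profile one-connect oneconnect_strict \
    defaults-from oneconnect source-mask 255.255.255.255
```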
OneConnect limitations
Use of the OneConnect profile requires careful consideration of the following limitations and implications:
OneConnect is intended to work with traffic that can be inspected by the BIG-IP
system at layer 7. For example, using OneConnect with SSL passthrough would not be
a valid configuration.
For HTTP traffic, OneConnect requires the use of an HTTP profile.
Before leveraging OneConnect with non-HTTP applications, see AskF5 article:
SOL7208: Overview of the OneConnect profile.
Applications that utilize connection-based authentication, such as NTLM, may need additional profiles or may not be compatible with OneConnect. For example, an application that relays an authentication token once on a connection and expects the server to authorize all requests for the life of the TCP session.
For more information, see AskF5 article: SOL10477: Optimizing NTLM traffic in
BIG-IP 10.x or later or Other Profiles in BIG-IP Local Traffic Management: Profiles
Reference.
When using the default OneConnect profile, the pool member cannot rely on the
client IP address information in the network headers to accurately represent the
source of the request. Consider using the X-Forwarded-For option in an HTTP
profile to pass the client IP address to the pool members via an HTTP header.
HTTP Profiles
BIG-IP LTM includes several profile types used for optimizing or augmenting HTTP traffic, including the following:
HTTP
HTTP compression
Web acceleration
HTTP
The HTTP profile enables the use of HTTP features in BIG-IP LTM policies and iRules, and is required for using other HTTP profile types, such as HTTP compression and web acceleration.
In BIG-IP version 11.5 and higher, HTTP profiles can operate in one of the following proxy modes: reverse, explicit, and transparent. This guide will focus on reverse proxy mode because it is the mode intended for a typical application deployment. For more information on explicit and transparent modes, see the BIG-IP LTM manuals and AskF5 articles.
Here are some of the most common options:
Response chunking changes the behavior of BIG-IP LTM handling of a chunked
response from the server. By default it is set to Selective, which will reassemble and
then re-chunk chunked responses while preserving unchunked responses. This is the
recommended value. However, setting this to Unchunk may be required if clients are
unable to process chunked responses.
OneConnect Transformations controls whether BIG-IP LTM modifies Connection:
Close headers sent in response to HTTP/1.0 requests to Keep-Alive. This is enabled
by default. It allows HTTP/1.0 clients to take advantage of OneConnect.
Redirect Rewrite allows BIG-IP LTM to modify or strip redirects sent by pool members
in order to prevent clients from being redirected to invalid or unmanaged URLs. This
defaults to None, which disables the feature.
Insert X-Forwarded-For causes BIG-IP LTM to insert an X-Forwarded-For HTTP header
in requests. This reveals the client IP address to servers even when source address
translation is used. By default, this is Disabled.
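For instance, a child HTTP profile with X-Forwarded-For insertion enabled might be created with tmsh as follows (the profile name is hypothetical; verify option names for your TMOS version):

```shell
# Child HTTP profile inheriting from the default http profile, with
# X-Forwarded-For insertion enabled so pool members can see the client IP
# when source address translation is in use.
tmsh create ltm profile http http_xff \
    defaults-from http insert-xforwarded-for enabled
```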
SSL Profiles
SSL Offload
When operating in SSL offload mode, only a client SSL profile is used, and while the connection between the client and the BIG-IP system is encrypted, the connection from BIG-IP LTM to the pool members is unencrypted. This completely removes the requirement for the pool members to perform SSL encryption and decryption, which can reduce resource usage on pool members and improve overall performance.
SSL offload should only be used on trusted, controlled networks.
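A minimal client SSL profile for offload might be sketched in tmsh as follows. The profile, certificate, key, and virtual server names are hypothetical, and the cert-key-chain syntax varies between TMOS versions:

```shell
# Child client SSL profile for SSL offload. The certificate and key
# (app.crt / app.key) must already be installed on the system.
tmsh create ltm profile client-ssl app_clientssl \
    defaults-from clientssl \
    cert-key-chain add { app { cert app.crt key app.key } }

# Attach the profile to a virtual server on the client side.
tmsh modify ltm virtual example_vs profiles add { app_clientssl { context clientside } }
```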
SSL re-encryption
When operating in SSL re-encryption mode, both client SSL and server SSL profiles are configured, and the connection to pool members is also encrypted. This requires that the pool members perform SSL encryption and decryption as well, but offers security on the server-side flow.
Server SSL profiles can also be configured with a client certificate to authenticate to the pool member using SSL client certificate authentication. This method can be leveraged to ensure that the service can only be accessed via the BIG-IP LTM virtual server, even in the event that a client can initiate connections directly to the pool members.
SOL14783:
Note: On client SSL profiles in BIG-IP v11.6 and higher, the Certificate Key Chain section must be saved using the Add button. You can now configure multiple key and certificate pairs (including a chain certificate, if required) per SSL profile: one for RSA and one for DSA.
Ciphers is an OpenSSL-format cipher list that specifies which ciphers will be enabled
for use by this SSL profile. Defaults vary between client and server and between
versions.
Options can be used to selectively disable different SSL and TLS versions, if required
by your security policy.
Client Authentication (client SSL only) contains parameters which allow BIG-IP LTM to
verify certificates presented by an SSL client.
Server Authentication (server SSL only) contains parameters which allow BIG-IP LTM
to validate the server certificate; normally, it is ignored.
BIG-IP LTM Policies
Introduced in TMOS v11.4.0, BIG-IP LTM policies supersede HTTP class profiles and introduce functionality to the configuration which was previously only available through iRules. Unlike other types of profiles, BIG-IP LTM policies are assigned to virtual servers on the Resources page.
BIG-IP LTM policies are organized into two major sections: General Properties and Rules.
General Properties
The most important settings in General Properties are Requires and Controls. Requires allows you to specify what information the policy will be able to use to make decisions, and Controls allows you to specify the types of actions it is able to take. Some selections from Requires and Controls need particular profiles to be assigned to virtual servers which use the policy. For example, a Requires setting of http depends on an HTTP profile, and a Controls setting of serverssl depends on a server SSL profile.
Rules
Each Rule is composed of Conditions and Actions. Conditions define the requirements for the rule to run; Actions define the action taken if the Conditions match.
By default, the first Rule with matching Conditions will be applied; this behavior can be changed using the Strategy setting under General Properties. For in-depth information on this behavior, see AskF5 article: SOL15085: Overview of the Web Acceleration profile.
A common use case for BIG-IP LTM policies is selecting a pool based on HTTP URI. For this, you need a BIG-IP LTM policy that Requires http and Controls forwarding. You can then configure Rules with an operand of http-uri and an Action with a target of forward to a pool.
Persistence Profiles
Overview of persistence
Many applications serviced by BIG-IP LTM are session-based and require that the client be load balanced to the same pool member for the duration of that session. BIG-IP LTM can accomplish this through persistence. When a client connects to a virtual server for the first time, a load balancing decision is made and then the configured persistence record is created for that client. All subsequent connections that the client makes to that virtual server are sent to the same pool member for the life of that persistence record.
Note: Unencrypted cookies may be subject to unintended information disclosure (this is the default behavior). If the security policies of your organization require this cookie value to be encrypted, see AskF5 article: SOL14784: Configuring BIG-IP cookie encryption (10.x - 11.x).
Cookie passive and cookie rewrite. With these methods, the pool member must provide all or part of the cookie contents. This method requires close collaboration between the BIG-IP LTM administrator and application owner to properly implement.
Source address affinity directs session requests to the same pool member based on the source IP address of a packet. This profile can also be configured with a network mask so that multiple clients can be grouped into a single persistence record. The network mask can also be used to reduce the amount of memory used for storing persistence records. The defaults for this method are a timeout of 180 seconds and a mask of 255.255.255.255 (one persistence record per source IP). See AskF5 article: SOL5779: BIG-IP LTM memory allocation for Source Address Affinity persistence records for more information.
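For example, a custom source address affinity profile that groups clients by /24 network might be sketched in tmsh as follows. The profile name and values are hypothetical:

```shell
# Custom source address affinity profile: group clients by /24 network
# and keep persistence records for 300 seconds instead of the
# 180-second default.
tmsh create ltm persistence source-addr persist_net24 \
    defaults-from source_addr mask 255.255.255.0 timeout 300
```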
Universal persistence directs session requests to the same pool member based on customizable logic written in an iRule. This persistence method is commonly used in many BIG-IP iApps.
Microsoft Remote Desktop Protocol (MSRDP) persistence tracks sessions between clients and pool members based on either a token provided by a Microsoft Terminal Services Session Directory/TS Session Broker server or the username from the client.
Note: Since the routing tokens may end up identical for some clients, the BIG-IP system may persist the RDP sessions to the same RDP servers.
SIP persistence is used for servers that receive Session Initiation Protocol (SIP) messages sent through UDP, SCTP, or TCP.
Performance considerations of persistence
Different persistence methods will have different performance implications. Some methods require the BIG-IP system to maintain an internal state table, which consumes memory and CPU. Other persistence methods may not be granular enough to ensure uniform distribution amongst pool members. For example, the source address affinity persistence method may make it difficult to uniquely identify clients on corporate networks whose source addresses will be masked by firewalls and proxy servers.
Advantages of using CARP:
The BIG-IP system maintains no state about the persistence entries, so using this type
of persistence does not increase memory utilization.
There is no need to mirror persistence: Given a persistence key and the same set of
available pool members, two or more BIG-IP systems will reach the same conclusion.
Persistence does not expire: Given the same set of available pool members, a client
will always be directed to the same pool member.
Disadvantages of using CARP:
tages of using CARP:
When a pool member becomes available (either due to addition of a new pool
member or change in monitor state), new connections from some clients will be
directed to the newly available pool member at a disproportionate rate.
For more information about CARP hash, see AskF5 article: SOL11362: Overview of the CARP hash algorithm.
More Information
For more information on the configuration for persistence profiles, see BIG-IP Local Traffic Manager: Concepts for your TMOS version.
Other Protocols and Profiles
To manage application layer traffic, you can use any of the following profile types. For more information, see BIG-IP Local Traffic Manager: Profiles Reference.
File Transfer Protocol (FTP) profile allows modifying a few FTP properties and settings to suit your specific needs.
Domain Name System (DNS) profile allows users to configure various DNS attributes and enables many BIG-IP system DNS features, such as DNS caching, DNS IPv6 to IPv4 translation, DNSSEC, etc.
Real Time Streaming Protocol (RTSP) is a network control protocol designed for use
in entertainment and communications systems to control streaming media servers.
Internet Content Adaptation Protocol (ICAP) is used to extend transparent proxy
servers, thereby freeing up resources and standardizing the way in which new
features are implemented.
Request Adapt or Response Adapt profile instructs an HTTP virtual server to send a
request or response to a named virtual server of type Internal for possible
modification by an Internet Content Adaptation Protocol (ICAP) server.
Diameter is an enhanced version of the Remote Authentication Dial-In User Service
(RADIUS) protocol. When you configure a Diameter profile, the BIG-IP system can
send client-initiated Diameter messages to load balancing servers.
Remote Authentication Dial-In User Service (RADIUS) profiles are used to load
balance RADIUS traffic.
Session Initiation Protocol (SIP) is a signaling communications protocol, widely used
for controlling multimedia communication sessions, such as voice and video calls over
Internet Protocol (IP) networks.
Simple Mail Transfer Protocol (SMTP) profile secures SMTP traffic coming into the
BIG-IP system. When you create an SMTP profile, BIG-IP Protocol Security Manager
provides several security checks for requests sent to a protected SMTP server.
SMTPS profile provides a way to add SSL encryption to SMTP traffic quickly and
easily.
iSession profile tells the system how to optimize traffic. Symmetric optimization
requires an iSession profile at both ends of the iSession connection.
Rewrite profile instructs BIG-IP LTM to display websites differently on the external
network than on an internal network and can also be used to instruct the BIG-IP
system to act as a reverse proxy server.
Extensible Markup Language (XML) profile causes the BIG-IP system to perform
XML content-based routing requests to an appropriate pool, pool member, or virtual
server based on specific content in an XML document.
Speedy (SPDY) is an open-source web application layer protocol developed by Google
in 2009 and is primarily geared toward reducing Web page latency. By using the SPDY
profile, the BIG-IP system can enhance functionality for SPDY requests.
Financial Information Exchange (FIX) is an electronic communications protocol for international real-time exchange of information related to securities transactions and markets.
Video Quality of Experience (QOE) profile allows assessment of an audience's video
session or overall video experience, providing an indication of application
performance.
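Profiles of these types are created and assigned like any other profile. As a hedged sketch (the profile and virtual server names here are hypothetical), a SIP profile might be created and attached with tmsh:

```
# Create a SIP profile with default settings (names are examples)
tmsh create ltm profile sip my_sip_profile
# Attach it to an existing virtual server
tmsh modify ltm virtual vs_sip profiles add { my_sip_profile }
```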
Troubleshooting
To resolve most traffic-related issues, review the virtual server and networking-level configurations. These may include pools and address translation settings. Also review the following:
Architectural design for the virtual server in question to verify the correct profile is
used.
Custom profiles and their associated options to ensure they modify traffic behavior as
anticipated.
As a next step, reverting to an unmodified default profile and observing potential changes in behavior can often help pinpoint problems.
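One way to perform that test from the command line is to swap the custom profile for its parent default on the affected virtual server; the object names below are hypothetical:

```
# Remove the custom HTTP profile and fall back to the default profile
tmsh modify ltm virtual vs_app profiles delete { custom_http }
tmsh modify ltm virtual vs_app profiles add { http }
```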
BIG-IP GTM/DNS Services
Introduction
BIG-IP GTM/DNS Services Basics
BIG-IP GTM/DNS Services Core Concepts
BIG-IP GTM Load Balancing
Architectures
BIG-IP GTM iQuery
BIG-IP GTM/DNS Services Troubleshooting
Introduction
The following chapters will review BIG-IP Global Traffic Management (GTM) and DNS service offerings available from F5.
BIG-IP GTM
BIG-IP Global Traffic Manager is a DNS-based module which monitors the availability and performance of global resources, such as distributed applications, in order to control network traffic patterns.
DNS Caching
DNS caching is a DNS services feature that can provide responses for frequently requested DNS records from a cache maintained in memory on BIG-IP. This feature can be used to replace or reduce load on other DNS servers.
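As an illustrative sketch (object names are hypothetical), a transparent cache can be created and referenced from a DNS profile in tmsh:

```
# Create a transparent DNS cache
tmsh create ltm dns cache transparent my_dns_cache
# Reference the cache from a DNS profile assigned to a listener or virtual server
tmsh create ltm profile dns my_dns_profile { enable-cache yes cache my_dns_cache }
```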
DNS Express
DNS Express is a DNS Services feature which allows the BIG-IP system to act as a DNS slave server. DNS Express supports NOTIFY, AXFR, and IXFR to fetch zone data and store it in memory for high performance. DNS Express does not offer DNS record management capability.
RPZ
Response Policy Zones (RPZ) allow the BIG-IP system, when configured with a DNS cache, to filter DNS requests based on the resource name being queried. This can be used to prevent clients from accessing known-malicious sites.
iRules
When using iRules with BIG-IP GTM, there are two possible places to attach iRules: either to the wide IP or to the DNS listener. The iRule commands and events available will depend on where the iRules are attached in the configuration. Some iRule functions will require BIG-IP LTM to be provisioned alongside BIG-IP GTM.
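As a minimal sketch of a listener-attached iRule (the log message is illustrative), the DNS_REQUEST event can inspect the queried name:

```
when DNS_REQUEST {
    # Log the queried name and the requesting client address
    log local0. "DNS query for [DNS::question name] from [IP::client_addr]"
}
```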
BIG-IP GTM/DNS Services Basics
BIG-IP GTM
As mentioned in the introduction, BIG-IP Global Traffic Manager is the module built to monitor the availability and performance of global resources and use that information to manage network traffic patterns.
With the BIG-IP GTM module, you can:
Direct clients to local servers for globally-distributed sites using a GeoIP database.
Change the load balancing configuration according to current traffic patterns or time
of day.
Set up global load balancing among disparate Local Traffic Manager systems and
other hosts.
Monitor real-time network conditions.
Integrate a content delivery network from a CDN provider.
To implement BIG-IP GTM you will need to understand the following terminology and basic functionality:
Configuration synchronization ensures the rapid distribution of BIG-IP GTM settings
among BIG-IP GTM systems in a synchronization group.
Load Balancing divides work among resources so that more work gets done in the
same amount of time and, in general, all users get served faster. BIG-IP GTM selects
the best available resource using either a static or a dynamic load balancing method.
When using a static load balancing method, BIG-IP GTM selects a resource based on a
pre-defined pattern. When using a dynamic load balancing method, BIG-IP GTM
selects a resource based on current performance metrics.
Prober Pool is an ordered collection of one or more BIG-IP systems that can be used
to monitor specific resources.
BIG-IP GTM/DNS Services Core Concepts
In addition to core BIG-IP GTM concepts, this chapter covers DNS Express, DNS cache, auto-discovery, address translation, and ZoneRunner.
Configuration synchronization
Configuration synchronization ensures the rapid distribution of BIG-IP GTM settings to other BIG-IP GTM systems that belong to the same synchronization group. A BIG-IP GTM synchronization group might contain both BIG-IP GTM and BIG-IP Link Controller systems.
Configuration synchronization occurs in the following manner:
When a change is made to a BIG-IP GTM configuration, the system broadcasts the
change to the other systems in BIG-IP GTM synchronization group.
When a configuration synchronization is in progress, the process must either
complete or time out before another configuration synchronization can occur.
It is important to have a working Network Time Protocol (NTP) configuration because BIG-IP GTM relies on timestamps for proper synchronization.
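NTP servers can be defined from the command line; the server names below are examples:

```
# Add NTP servers and save the configuration
tmsh modify sys ntp servers add { 0.pool.ntp.org 1.pool.ntp.org }
tmsh save sys config
```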
BIG-IP GTM responds to DNS queries on a per-listener basis. The number of listeners created depends on the network configuration and the destinations to which specific queries are to be sent. For example, a single BIG-IP GTM can be the primary authoritative server for one domain, while forwarding other DNS queries to a different DNS server. BIG-IP GTM always manages and responds to DNS queries for the wide IPs that are configured on the system.
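As a hedged example (the address is a documentation-range placeholder), a listener is created in tmsh as follows:

```
# Create a DNS listener on port 53
tmsh create gtm listener dns_listener { address 203.0.113.10 port 53 }
```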
Note & Tips
Tip: The resources associated with a data center are available only when the data center is also available.
Virtual Servers
A virtual server is a specific IP address and port number that points to a resource on the network. In the case of host servers, this IP address and port number likely points to the resource itself. With load-balancing systems, virtual servers are often proxies that allow the load-balancing server to manage a resource request across a multitude of resources.
Note & Tips
Tip: Virtual server status may be configured to be dependent only on the timeout value of the monitor associated with the virtual server. This ensures that when working in a multi-bladed environment, when the primary blade in a cluster becomes unavailable, the gtmd agent on the new primary blade has time to establish new iQuery connections with, and receive updated status from, other BIG-IP systems.
Note & Tips
Tip: The big3d agent on the new primary blade must be up and functioning within 90 seconds (the timeout value of the bigip monitor).
Links
A link is an optional BIG-IP GTM or BIG-IP Link Controller configuration object that represents a physical device that connects a network to the Internet. BIG-IP GTM tracks the performance of links, which influences the availability of pools, data centers, wide IPs, and distributed applications.
When one or more links are created, the system uses the following logic to automatically associate virtual servers with the link objects:
BIG-IP GTM and BIG-IP Link Controller associate the virtual server with the link by
matching the subnet addresses of the virtual server, link, and self IP address. Most of
the time, the virtual server is associated with the link that is on the same subnet as
the self IP address.
In some cases, BIG-IP GTM and BIG-IP Link Controller cannot associate the virtual
server and link because the subnet addresses do not match. When this issue occurs,
the system associates the virtual server with the default link which is assigned to the
data center. This association may cause issues if the link that is associated with the
virtual server does not provide network connectivity to the virtual server.
If the virtual server is associated with a link that does not provide network connectivity to that virtual server, BIG-IP GTM and BIG-IP Link Controller may incorrectly return the virtual server IP address in the DNS response to a wide IP query even if the link is disabled or marked as down.
DNS Express
DNS Express enables the BIG-IP system to function as a replica authoritative nameserver. Zones are stored in memory for optimal speed. DNS Express supports the standard DNS NOTIFY protocol from primary authoritative nameservers and uses AXFR to transfer zone data. DNS Express does not itself support modifying records, but if the BIG-IP system itself is configured as a primary authoritative nameserver using ZoneRunner, it can be used to increase capacity and minimize latency.
You can use DNS Express to both protect your DNS management infrastructure and maximize capacity.
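Configuring DNS Express involves defining the primary nameserver and the zone to be transferred; the names and address below are hypothetical:

```
# Define the primary authoritative nameserver to transfer from
tmsh create ltm dns nameserver ns1 { address 192.0.2.53 }
# Create the DNS Express zone backed by that nameserver
tmsh create ltm dns zone example.com { dns-express-server ns1 }
```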
DNS cache
The DNS cache feature is available as part of either the DNS add-on module for BIG-IP LTM or as part of the BIG-IP GTM/DNS combination. There are three different configurable forms of DNS cache: transparent, resolver, and validating resolver.
Note & Tips
Note: The transparent cache will contain messages and resource records.
F5 Networks recommends that you configure the BIG-IP system to forward queries which cannot be answered from the cache to a pool of local DNS servers, rather than to the local BIND instance, because BIND performance is slower than using multiple external resolvers.
Note & Tips
Note: For systems using the DNS Express feature, the BIG-IP system first processes the requests through DNS Express, and then caches the responses.
Note & Tips
Note: It is possible to configure the local BIND instance on the BIG-IP system to act as an external DNS resolver. However, the performance of BIND is slower than using a resolver cache.
DNSSEC
DNSSEC is an extension to the Domain Name Service (DNS) that ensures the integrity of data returned by domain name lookups by incorporating a chain of trust in the DNS hierarchy. DNSSEC provides origin authenticity, data integrity, and secure denial of existence. Specifically, origin authenticity ensures that resolvers can verify that data has originated from the correct authoritative source.
The following illustration shows the building blocks for the chain of trust from the root zone.
For more information, see Configuring DNSSEC in BIG-IP DNS Services.
Auto-discovery
Auto-discovery is a process through which BIG-IP GTM automatically identifies resources that it manages. BIG-IP GTM can discover two types of resources: virtual servers and links.
Each resource is discovered on a per-server basis, so you can employ auto-discovery only on the servers you specify.
The auto-discovery feature of BIG-IP GTM has three modes that control how the system identifies resources. These modes are:
Disabled: BIG-IP GTM does not attempt to discover any resources. Auto-discovery is
disabled on BIG-IP GTM by default.
Enabled: BIG-IP GTM regularly checks the server to discover any new resources. If a
previously discovered resource cannot be found, BIG-IP GTM deletes it from the
system.
Enabled (No Delete): BIG-IP GTM regularly checks the server to discover any new
resources. Unlike the Enabled mode, the Enabled (No Delete) mode does not delete
resources, even if the system cannot currently verify their presence.
Note & Tips
Note: Enabled and Enabled (No Delete) modes query the servers for new resources every 30 seconds by default.
Note & Tips
Important: Auto-discovery must be globally enabled at the server and link levels, and the frequency at which the system queries for new resources must be configured.
For information about enabling auto-discovery on virtual servers and links, see Discovering resources automatically in the Configuration Guide for BIG-IP Global Traffic Manager.
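Auto-discovery is set per server object; as a sketch (the server name is hypothetical):

```
# Enable virtual server discovery on a server object
tmsh modify gtm server bigip1 virtual-server-discovery enabled
# Or keep discovered objects even when they cannot currently be verified
tmsh modify gtm server bigip1 virtual-server-discovery enabled-no-delete
```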
Address translation
Several objects in BIG-IP GTM allow the specification of address translation. Address translation is used in cases where the object is behind a Network Address Translation (NAT) device. For example, a virtual server may be known by one address on the Internet but another address behind the firewall. When configuring these objects, the address is the external address and will be returned in any DNS responses generated by BIG-IP GTM.
When probing, the BIG-IP system may use either the address or the translation, depending on the situation. As a general rule, if both the BIG-IP system performing the probe and the target of the probe are in the same data center and both have a translation, the probe will use the translations. Otherwise, the probe will use the address.
Specifying a translation on a BIG-IP server will cause virtual server auto-discovery to silently stop working. This is because BIG-IP GTM has no way of knowing what the externally visible address should be for the discovered virtual server address, which is a translation. For more information, see AskF5 article SOL9138: BIG-IP GTM system disables virtual server auto-discovery for BIG-IP systems that use translated virtual server addresses.
ZoneRunner
ZoneRunner is an F5 product used for zone file management on BIG-IP GTM. You may use the ZoneRunner utility to create and manage DNS zone files and configure the BIND instance on BIG-IP GTM. With the ZoneRunner utility, you can:
Import and transfer DNS zone files.
Manage zone resource records.
Manage views.
Manage a local nameserver and the associated configuration file, named.conf.
Transfer zone files to a nameserver.
Import only primary zone files from a nameserver.
The BIG-IP GTM ZoneRunner utility uses dynamic update to make zone changes. All changes made to a zone using dynamic update are written to the zone's journal file.
Note & Tips
Important: It is recommended that the ZoneRunner utility manage the DNS/BIND file, rather than manually editing the file. If manual editing is required, the zone files must be frozen to avoid issues with name resolution and dynamic updates.
To prevent the journal files from being synchronized if BIG-IP GTM is configured to synchronize DNS zone files, the zone must be frozen on all BIG-IP GTM systems.
iRules
iRules can be attached to or associated with a wide IP, and in BIG-IP GTM version 11.5.0 and higher they can also be attached to or associated with the DNS listener.
Probers
When running a monitor, BIG-IP GTM may request that another BIG-IP device actually probe the target of the monitor. BIG-IP GTM may choose a BIG-IP system in the same data center as the target of the monitor to actually send the probe and report back the status. This can minimize the amount of traffic that traverses the WAN.
The external and scripted monitors listed in the documentation use an external file to check the health of a remote system. BIG-IP GTM may request that another BIG-IP system in the configuration run the monitor. In order for the monitor to succeed, the remote BIG-IP system must have a copy of the external file. AskF5 article SOL8154: BIG-IP GTM EAV monitor considerations contains information about defining a prober pool to control which BIG-IP system will be used to run the monitor.
bigip monitor
The bigip monitor can be used to monitor BIG-IP systems. It uses the status of virtual servers determined by the remote BIG-IP system rather than sending individual monitor probes for each virtual server. It is recommended that BIG-IP LTM be configured to monitor its configuration elements so that it can determine the status of its virtual servers. This virtual server status will be reported via the bigip monitor back to BIG-IP GTM. This is an efficient and effective way to monitor resources on other BIG-IP systems.
Application of monitors
In a BIG-IP GTM configuration, monitors can be applied to the server, virtual server, pool, and pool member objects. The monitor defined for the server will be used to monitor all of its virtual servers unless the virtual server overrides the monitor selection. Likewise, the monitor defined for the pool will be used to monitor all of its pool members, unless the pool member overrides the monitor selection.
It is important not to over-monitor a pool member. If a monitor is assigned to the server and/or virtual server and also to the pool and/or pool member, then both of the monitors will fire, effectively monitoring the virtual server twice. In most cases, monitors should be configured at the server/virtual server or at the pool/pool member, but not both.
Prober pools
Prober pools allow the specification of a particular set of BIG-IP devices that BIG-IP GTM may use to monitor a resource. This might be necessary in situations where a firewall is between certain BIG-IP systems and a monitored resource, but not between other BIG-IP systems and those same resources. In this case, a prober pool can be configured and assigned to the server to limit probe requests to those BIG-IP systems that can reach the monitored resource. For more information about prober pools, see About Prober pools in BIG-IP Global Traffic Manager: Concepts.
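As a hedged sketch (object names are hypothetical), a prober pool is created and assigned to a server in tmsh:

```
# Create an ordered prober pool of two BIG-IP devices
tmsh create gtm prober-pool dc1_probers { members add { bigip_a { order 0 } bigip_b { order 1 } } }
# Limit probes of this server's resources to that pool
tmsh modify gtm server app_server prober-pool dc1_probers
```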
Wide IP
A wide IP maps a fully qualified domain name (FQDN) to one or more pools. The pools contain virtual servers. When a local domain name server (LDNS) makes a request for a domain that matches a wide IP, the configuration of the wide IP determines which virtual server address should be returned.
Wide IP names can contain the wildcard characters * (to match one or more characters) and ? (to match one character).
For more information about wide IPs, see About Wide IPs in BIG-IP Global Traffic Manager: Concepts.
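As an illustrative sketch (names are hypothetical, and pool-member syntax varies by version), a wide IP mapping an FQDN to a pool might be created as follows:

```
# Create a GTM pool containing a virtual server defined under server bigip1
tmsh create gtm pool app_pool { members add { bigip1:vs_app } }
# Map the FQDN to the pool
tmsh create gtm wideip www.example.com { pools add { app_pool } }
```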
Topology
BIG-IP GTM can make load balancing decisions based upon the geographical location of the LDNS making the DNS request. The location of the LDNS is determined from a GeoIP database. In order to use topology, the administrator must configure topology records describing how BIG-IP GTM should make its load balancing decisions. For more information on topology load balancing, see Using Topology Load Balancing to Distribute DNS Requests to Specific Resources in BIG-IP Global Traffic Manager: Load Balancing, or AskF5 article SOL13412: Overview of BIG-IP GTM Topology records (11.x).
Topology load balancing can be used to direct users to servers that are geographically close, or perhaps to direct users to servers that have localized content.
BIG-IP system software provides a pre-populated database that provides a mapping of IP addresses to geographic locations. The administrator can also create custom groups called regions. For example, it is possible to create a custom region that groups certain IP addresses together or that groups certain countries together.
Updates for the GeoIP database are provided on a regular basis. AskF5 article SOL11176: Downloading and installing updates to the IP geolocation database contains information about updating the database.
Topology records are used to map an LDNS address to a resource. The topology record contains three elements:
A request source statement that specifies the origin LDNS of a DNS request.
A destination statement that specifies the pool or pool member to which the weight
of the topology record will be assigned.
A weight that the BIG-IP system assigns to a pool or a pool member during the load
balancing process.
When determining how to load balance a request, BIG-IP GTM uses the object that has the highest weight according to the matching topology records.
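As a hedged sketch (the subnet and pool name are hypothetical, and the exact selector syntax varies by version), a topology record assigning a weight might look like:

```
# Requests from this LDNS subnet prefer pool dc1_pool with weight 100
tmsh create gtm topology ldns: subnet 203.0.113.0/24 server: pool /Common/dc1_pool score 100
```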
Important: When configuring topology load balancing at the wide IP level, topology records with a pool destination statement must exist. Other destination statement types (such as data center or country) may be used when using topology as a pool-level load balancing method.
Architectures
This section will briefly describe three common deployment methodologies for using a BIG-IP system in a DNS environment. For more information, see BIG-IP Global Traffic Manager: Implementations.
Delegated mode
When operating in delegated mode, requests for wide-IP resource records are redirected or delegated to BIG-IP GTM. The BIG-IP system does not see all DNS requests; it operates only on requests for records that are sent to it.
For more information, see Delegating DNS Traffic to BIG-IP GTM in BIG-IP Global Traffic Manager: Implementations.
Screening mode
When operating in screening mode, BIG-IP GTM sits in front of one or more DNS servers. This configuration allows for easy implementation of additional BIG-IP system features for DNS traffic because DNS requests for records other than wide IPs pass through BIG-IP GTM. If the request matches a wide IP, BIG-IP GTM will respond to the request. Otherwise, the request is forwarded to the DNS servers. This configuration can provide the following benefits:
DNS query validation: When a request arrives at BIG-IP GTM, BIG-IP GTM validates
that the query is well formed. BIG-IP GTM can drop malformed queries, protecting the
back end DNS servers from seeing the malformed queries.
DNSSEC dynamic signing: When the responses from the DNS server pass back
through BIG-IP GTM, it is possible for BIG-IP GTM to sign the response. This allows the
use of DNSSEC with an existing zone and DNS servers, but takes advantage of any
cryptographic accelerators in a BIG-IP device.
Transparent Caching: When the responses from the DNS server pass back through the BIG-IP system, it can cache the response. Future requests for the same records can be served directly from the BIG-IP system, reducing the load on the back-end DNS servers.
For more information, see Placing BIG-IP GTM in Front of a DNS Server and Placing BIG-IP GTM in Front of a Pool of DNS Servers in BIG-IP Global Traffic Manager: Implementations.
iQuery Agents
All BIG-IP GTM devices are iQuery clients. The gtmd process on each BIG-IP GTM device connects to the big3d process on every BIG-IP server defined in the BIG-IP GTM configuration, which includes both BIG-IP GTM and BIG-IP LTM. These are long-lived connections made using TCP port 4353. This set of connections among BIG-IP GTM devices and between BIG-IP GTM and BIG-IP LTM devices is called an iQuery mesh.
iQuery communication is encrypted using SSL. The devices involved in the communication authenticate each other using SSL certificate-based authentication. For information, see the manual BIG-IP Global Traffic Manager: Concepts, in Communications Between BIG-IP GTM and Other Systems.
In order to monitor the health of objects in the configuration, BIG-IP GTM devices in the synchronization group will send monitor requests via iQuery to another iQuery server that is closer to the target of the monitor. All BIG-IP GTM devices in the synchronization group share the resulting status information.
tmsh
You can use the tmsh show gtm iquery command to display the status of all of the iQuery connections on a BIG-IP GTM device. The command will display each IP address:
# tmsh show gtm iquery

Gtm::IQuery: 10.12.20.207
Server                 gtm1
Data Center            DC1
iQuery State           connected
Query Reconnects       1
Bits In                8.2M
Bits Out               47.7K
Backlogs               0
Bytes Dropped          96
Cert Expiration Date   10/29/24 04:38:53
Configuration Time     12/08/14 16:37:49

Gtm::IQuery: 10.14.20.209
Server                 gtm3
Data Center            DC2
iQuery State           connected
Query Reconnects       0
Bits In                8.2M
Bits Out               45.7K
Backlogs               0
Bytes Dropped          0
Cert Expiration Date   10/29/24 04:38:53
Configuration Time     12/08/14 16:37:49
For more information, see AskF5 article SOL13690: Troubleshooting BIG-IP GTM synchronization and iQuery connections (11.x).
iqdump
You can use the iqdump command to check the communication path and SSL certificate-based authentication from a BIG-IP GTM to another device in the iQuery mesh. The syntax of the iqdump command is iqdump <ip address> <synchronization group name>. When using the iqdump command, the BIG-IP GTM synchronization group name is optional and is not important for this connection verification.
For example:

# iqdump 10.14.20.209
<!-- Local hostname: gtm1.example.com -->
<!-- Connected to big3d at: ::ffff:10.14.20.209:4353 -->
<!-- Subscribing to sync group: default -->
<!-- Mon Dec 8 16:45:00 2014 -->
<xml_connection>
<version>11.4.0</version>
<big3d>big3d Version 11.5.1.6.0.312</big3d>
<kernel>linux</kernel>
...
Note & Tips
Note: The iqdump command will continue to run until it is interrupted by pressing Ctrl-C.
If there is a problem with the communication path or the SSL authentication, the iqdump command will fail and report an error.
The version of BIG-IP software being run on the remote system is reported in the <version> XML stanza. The version of the big3d software running on the remote system is reported in the <big3d> XML stanza.
For more information, see AskF5 article SOL13690: Troubleshooting BIG-IP GTM synchronization and iQuery connections (11.x).
BIG-IP GTM/DNS Troubleshooting
BIG-IP GTM query logging
BIG-IP GTM can log debugging information about the decision-making process when resolving a wide IP. These logs can report which pool and virtual server were chosen for a wide-IP resolution, or why BIG-IP GTM was unable to respond.
AskF5 article SOL14615: Configuring the BIG-IP GTM system to log wide IP request information contains information about enabling query logging. This logging should be enabled only for troubleshooting and not for long periods of time.
Statistics
The BIG-IP system maintains statistics for objects throughout the system. Due to the variety of statistics gathered and the breadth of configuration elements covered, it is impractical to cover them within this guide. Statistics are documented throughout the user manuals where they are featured.
Statistics are visible in the Configuration utility under "Statistics >> Module Statistics." In tmsh, statistics are visible using the show command with particular objects. Typically, statistics are either gauges or counters. A gauge keeps track of a current state, for example current connections. A counter keeps an incrementing count of how many times a particular action is taken, for example total requests.
Reviewing statistics that use counters may provide insight into the proportion of traffic which is valid, or perhaps may indicate that there is a configuration error.
For example, comparing the Dropped Queries counter to the Total Queries counter shows that there are some drops, but this may not be of concern because the drops are fairly low as a percentage of the total.
Gtm::WideIp: www.example.com
Status
  Availability : available
  State        : enabled
  Reason       : Available
Requests
  Total               1765
  A                   1552
  AAAA                 213
  Persisted              0
  Resolved            1762
  Dropped                3
Load Balancing
  Preferred           1760
  Alternate              2
  Fallback               0
CNAME Resolutions        0
Returned from DNS        0
Returned to DNS          0
Statistics are also available for polling via SNMP and can be polled, catalogued over time, and graphed by a Network Management Station (NMS).
iRules
Introduction
Anatomy of an iRule
Practical Considerations
iRules Troubleshooting
Introduction
iRules play a critical role in advancing the flexibility of the BIG-IP system. Amongst their many capabilities, some notable actions include making load balancing decisions, persisting, redirecting, rewriting, discarding, and logging client sessions.
iRules can be used to augment or override default BIG-IP LTM behavior, enhance security, optimize sites for better performance, provide robust capabilities necessary to guarantee application availability, and ensure a successful user experience on your sites.
iRules technology is implemented using the Tool Command Language (Tcl). Tcl is known for speed, embeddability, and usability. iRules may be composed using most native Tcl commands, as well as a robust set of BIG-IP LTM/BIG-IP GTM extensions provided by F5. Documentation that covers the root Tcl language can be found at http://tcl.tk/ and http://tmml.sourceforge.net/doc/tcl/index.html (these links take you to an external resource). iRules is based on Tcl version 8.4.6.
Anatomy of an iRule
An iRule consists of one or more event declarations, with each declaration containing Tcl code that is executed when that event occurs.
Events
Events are an F5 extension to Tcl which provide an event-driven framework for the execution of iRules code in the context of a connection flow, being triggered by internal traffic processing state changes. Some events are triggered for all connection flows, whilst others are profile-specific, meaning the event can only be triggered if an associated profile has been applied to the virtual server processing the flow in question.
For example, the CLIENT_ACCEPTED event is triggered for all flows when the flow is established. In the case of TCP connections, it is triggered when the three-way handshake is completed and the connection enters the ESTABLISHED state. In contrast, the HTTP_REQUEST event will only be triggered on a given connection flow if an HTTP profile is applied to the virtual server. For example, the following iRule evaluates every HTTP request from a client against a list of known web robot user agents:
when HTTP_REQUEST {
    switch -glob [string tolower [HTTP::header User-Agent]] {
        "*scooter*" -
        "*slurp*" -
        "*msnbot*" -
        "*fast*" -
        "*teoma*" -
        "*googlebot*" {
            # Send bots to the bot pool
            pool slow_webbot_pool
        }
    }
}
iRule events are covered in detail in the Events section of the iRules wiki on DevCentral.
A full list of commands and functions can be found in the Commands section of the iRules wiki on DevCentral.
Note & Tips
Note: Several native commands built into the Tcl language have been disabled in the iRules implementation due to relevance and other complications that could cause an unexpected halt in the traffic flow (file I/O, socket calls, etc.). The list of disabled commands can be found in the iRules wiki.
Operators
An operator is a token that forces evaluation and comparison of two conditions. In addition to the built-in Tcl operators (==, <=, >=, ...), operators such as "starts_with," "contains," and "ends_with" have been added to act as helpers for common comparisons. For example, in the iRule below, the "<" operator compares the number of available pool members against 1, while starts_with evaluates the requested URI against a known value:
when HTTP_REQUEST {
    if { [active_members [LB::server pool]] < 1 } {
        HTTP::respond 503 content {<html><body>Site is temporarily unavailable. Sorry!</body></html>}
        return
    }
    if { [HTTP::uri] starts_with "/nothingtoseehere/" } {
        HTTP::respond 403 content {<html><body>Too bad, so sad!</body></html>}
        return
    }
}
A list of the F5-specific operators added to Tcl for iRules is available on the iRules wiki on DevCentral.
Variables
iRules use two variable scopes: static and local. In addition, procedures may accept arguments which populate temporary variables scoped to the procedure.
Static variables are similar to Tcl's global variables and are available to all iRules on all flows. A static variable's value is inherited by all flows using that iRule, and they are typically set only in RULE_INIT and read in other events. They are commonly used to toggle debugging or perform minor configuration such as the names of data groups that are used for more complete configuration. Static variables are denoted by a static:: prefix. For example, a static variable may be named static::debug.
Local, or flow, variables are local to connection flows. Once set, they are available to all iRules and iRules events on that flow. These are most of what iRules use, and they can contain any data. Their value is discarded when the connection is closed. They are frequently initialized in the CLIENT_ACCEPTED event.
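As a minimal sketch of the two scopes working together (the variable names and logging here are illustrative, not part of this guide's configuration):

when RULE_INIT {
    # Static: set once when the iRule is loaded, visible to every flow
    set static::debug 1
}
when HTTP_REQUEST {
    # Local (flow) variable: visible only to later events on this connection
    set req_uri [HTTP::uri]
    if { $static::debug } {
        log local0. "Handling request for $req_uri"
    }
}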
It is extremely important to understand variable scope in order to preclude unexpected conditions, as some variables may be shared across pools and events and, in some cases, globally to all connections. For more information, see iRules 101 - #03 Variables on DevCentral.
iRules context

For every event that you specify within an iRule, you can also specify a context, denoted by the keywords clientside or serverside. Because each event has a default context associated with it, you need only declare a context if you want to change the context from the default.
The following example includes the event declaration CLIENT_ACCEPTED, as well as the iRules command IP::remote_addr. In this case, the IP address that the iRules command returns is that of the client, because the default context of the event declaration CLIENT_ACCEPTED is client-side.
when CLIENT_ACCEPTED {
    if { [IP::addr [IP::remote_addr] equals 10.1.1.80] } {
        pool my_pool1
    }
}
Similarly, if you include the event declaration SERVER_CONNECTED as well as the command IP::remote_addr, the IP address that the iRule command returns is that of the server, because the default context of the event declaration SERVER_CONNECTED is server-side.
The preceding example shows what happens when you write iRules that use the default context when processing iRules commands. You can, however, explicitly specify the clientside and serverside keywords to alter the behavior of iRules commands.

Continuing with the previous example, the following example shows the event declaration SERVER_CONNECTED and explicitly specifies the clientside keyword for the iRules command IP::remote_addr. In this case, the IP address that the iRules command returns is that of the client, despite the server-side default context of the event declaration.
when SERVER_CONNECTED {
    if { [IP::addr [clientside {IP::remote_addr}] equals 10.1.1.80] } {
        discard
    }
}
The following example combines several of these concepts: a static debug variable set in RULE_INIT, and HTTP events that collect response data and blank out HTML comments in it.

when RULE_INIT {
    set static::debug 0
}

when HTTP_REQUEST {
    # Don't allow data to be chunked
    if { [HTTP::version] eq "1.1" } {
        if { [HTTP::header is_keepalive] } {
            HTTP::header replace "Connection" "Keep-Alive"
        }
        HTTP::version "1.0"
    }
}

when HTTP_RESPONSE {
    if { [HTTP::header exists "Content-Length"] && [HTTP::header "Content-Length"] < 1000000 } {
        set content_length [HTTP::header "Content-Length"]
    } else {
        set content_length 1000000
    }
    if { $content_length > 0 } {
        HTTP::collect $content_length
    }
}

when HTTP_RESPONSE_DATA {
    # Find the HTML comments
    set indices [regexp -all -inline -indices {<!--(?:[^-]|-[^-])*?--\s*?>} [HTTP::payload]]
    # Replace the comments with spaces in the response
    if { $static::debug } {
        log local0. "Indices: $indices"
    }
    foreach idx $indices {
        set start [lindex $idx 0]
        set len [expr {[lindex $idx 1] - $start + 1}]
        if { $static::debug } {
            log local0. "Start: $start, Len: $len"
        }
        HTTP::payload replace $start $len [string repeat " " $len]
    }
}
Practical Considerations

Performance implications

While iRules provide tremendous flexibility and capability, deploying them adds troubleshooting complexity and some amount of processing overhead on the BIG-IP system itself. As with any advanced functionality, it makes sense to weigh all the pros and cons.
Before using iRules, determine whether the same functionality can be performed by an existing BIG-IP feature. Many of the more popular iRules functionalities have been incorporated into the native BIG-IP feature set over time. Examples include cookie encryption and header insertion via the HTTP profile, URI load balancing via BIG-IP LTM policies and, in TMOS versions before 11.4.0, the HTTP class profile.
Even if requirements cannot be met by the existing capabilities of the BIG-IP system, it is still worth considering whether or not the goal can be accomplished through other means, such as updating the application itself. However, iRules are often the most time-efficient and best solution in situations when it may take months to get an update to an application. The negative impacts of using iRules, such as increased CPU load, can be measured and managed by using iRules tools and by writing iRules so they behave as efficiently as possible.
Write efficient code. For example, avoid data type conversions (from strings to numbers) unless necessary.

Use the return command to abort further execution of a rule once the conditions it was intended to evaluate have been matched.

Use timing commands to measure iRules before you put them into production.

Leverage data classes (data groups) to store variable and configuration information outside of the iRules code itself. This allows you to make updates to data that needs to change frequently without the risk of introducing coding errors. It also helps make the iRule portable, as the same code can be used for QA and production while the data classes themselves can be specific to their respective environments.

Use a stats profile instead of writing to a log to determine how many times an iRule has fired or a particular action has been taken.
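For instance, the data-group recommendation above might be sketched as follows (the data group name maintenance_uris and its contents are hypothetical):

when HTTP_REQUEST {
    # Look the URI up in an external data group rather than hard-coding paths;
    # QA and production can carry different data group contents with identical code
    if { [class match [HTTP::uri] starts_with maintenance_uris] } {
        HTTP::respond 503 content {<html><body>Down for maintenance.</body></html>}
        # return aborts further execution of this iRule for the request
        return
    }
}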
TROUBLESHOOTING IRULES
Troubleshooting

This chapter provides a brief troubleshooting guide for iRules in relation to BIG-IP LTM and BIG-IP GTM.

NOTE & TIPS

Note: This chapter assumes that the initial configuration of your BIG-IP system has been completed and you are using iRules to implement new functionality.
Tools

The tools identified within this chapter are not an exhaustive list of all possibilities, and while there may be other tools, these are intended to cover the basics.
Syntax:

Tools that you may use to write iRules and check that their syntax is correct include:
iRules Editor, F5's integrated code editor for iRules, built to develop iRules with full
syntax highlighting, colorization, textual auto-complete, integrated help, etc.
Notepad++, a text editor and source code editor for use with Microsoft Windows.
Wireshark, a free and open source packet analyzer. It is used for network
troubleshooting, analysis, software and communications protocol development, and
education.
DevCentral

DevCentral is a community-oriented resource to bring engineers together to learn, solve problems, and figure things out. iRules-specific resources on DevCentral include:
iRules Reference Wiki.
Official documentation.
Tutorials.
Articles.
Q&A forum, examples (CodeShare).
Podcasts (videos).
iRules Troubleshooting

When writing iRules, keep in mind the following considerations:
iRules development should take place on a non-production system.

Verify your iRule often while building it.

While error messages may be cryptic, review them carefully as they often indicate where a problem exists.

Log within the iRule liberally, both to watch variables and to determine iRule progression. Consider using a variable to turn off unnecessary debug logging in production.

Many iRules commands have software version requirements. Check DevCentral to verify the version requirements of the intended commands.

Use the catch command to encapsulate commands that are likely to throw errors based on input and other factors. Instead of having an iRule fail and the connection subsequently dropped, use the catch command to perform an alternate action.

Thoroughly test your iRule to ensure intended functionality. Even once an iRule compiles, it may have logical errors. BIG-IP LTM cannot detect logical errors, so while an iRule may compile and execute, it may behave in a way that precludes the BIG-IP system from returning a valid response to the client.
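As an illustrative sketch of the catch recommendation: a missing or malformed Content-Length header would cause expr to throw a Tcl error and the connection to be reset; wrapping it in catch lets the iRule fall back to a safe value instead.

when HTTP_REQUEST {
    # [HTTP::header Content-Length] may be absent or non-numeric
    if { [catch { set len [expr { [HTTP::header Content-Length] + 0 }] } err] } {
        log local0. "Bad Content-Length, defaulting to 0: $err"
        set len 0
    }
}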
Logging
Introduction
Logging and Monitoring
Practical Considerations
Introduction
Being familiar with your BIG-IP LTM and BIG-IP GTM logs can be very helpful in maintaining the stability and health of your systems. Events can be logged either locally or remotely, depending on your configuration. Logging is covered extensively in the BIG-IP TMOS: Operations Guide. This section will cover some important concepts as well as topics related to BIG-IP LTM and BIG-IP GTM.
Logging levels

For each type of system event, you can set the minimum log level. When you specify a minimum log level, all alerts at the minimum level and higher will be logged. The logging levels follow the typical Linux-standard levels.
Lowering the minimum log level increases the volume of logs that are produced. Setting the minimum log level to the lowest value (Debug) should be done with caution. Large volumes of log messages can have a negative impact on the performance of the BIG-IP device.
Default local log levels are set on the BIG-IP system in order to convey important information. F5 recommends changing default log levels only when needed to assist with troubleshooting. When troubleshooting is complete, log levels should be reset to their default values.
For more detailed information on logging, see the following resources:

BIG-IP TMOS: Operations Guide.
Logging in BIG-IP TMOS: Concepts.
AskF5 article: SOL13455: Overview of BIG-IP logging BigDB database keys (11.x).
AskF5 article: SOL5532: Configuring the level of information logged for TMM-specific events.
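For example, per SOL5532, the TMM log level is controlled by a BigDB key that can be changed with tmsh; the exact key name and permitted values should be confirmed against that article for your software version:

# Raise TMM logging verbosity while troubleshooting
tmsh modify /sys db log.tmm.level value Debug

# Restore the default level when finished
tmsh modify /sys db log.tmm.level value Notice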
Logging and Monitoring

Local logging

By default, the BIG-IP system logs via syslog to the local filesystem. Most local log files can be managed and viewed using the BIG-IP Configuration utility.

Important log files for BIG-IP LTM include:
/var/log/ltm
/var/log/tmm
/var/log/pktfilter
Important log files for BIG-IP GTM and DNS include:
/var/log/gtm
/var/log/user.log
/var/log/daemon.log
Remote Logging

Although logging can be done locally, it is recommended to log to a pool of remote logging servers. Remote logging can be done using the legacy Linux syslog-ng functionality, or it can be managed using the TMOS-managed high-speed logging functions. F5 recommends logging to a pool of remote high-speed logging servers.
The default log levels only apply to local logging, meaning that as soon as a remote syslog server is defined, all syslog messages are sent to the remote server. In other words, filters only impact local logging. These filters can be customized through the Configuration utility and tmsh.
Customizing remote logging using syslog-ng requires the use of tmsh. Customizing remote logging with syslog-ng allows you to do the following:
Log to a remote server using TCP.
Set logging levels.
Direct specific messages to specific destinations.
Define multiple remote logging servers.
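As an illustrative sketch (the server name and address are placeholders), a basic remote syslog destination can be added with tmsh; TCP transport and message filters require customizing the syslog include statement as described in SOL13333:

# Add a remote syslog destination (UDP port 514)
tmsh modify /sys syslog remote-servers add { remote-syslog-1 { host 192.0.2.10 remote-port 514 } }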
For more information, see AskF5 article: SOL13333: Filtering log messages sent to remote syslog servers (11.x).
Remote monitoring

The BIG-IP LTM and BIG-IP GTM can be managed and monitored using Simple Network Management Protocol (SNMP). SNMP is an Internet protocol used for managing nodes on an IP network. SNMP trap configuration on the BIG-IP system involves defining the trap destinations and the events that will result in a trap being sent.
Specific Management Information Bases (MIBs) for BIG-IP LTM and BIG-IP GTM can be downloaded from the welcome page of the BIG-IP Configuration utility, or from the /usr/share/snmp/mibs directory on the BIG-IP filesystem. The MIB for BIG-IP LTM objects is in the F5-BIGIP-LOCAL-MIB.txt file, and the MIB for BIG-IP GTM objects is in the F5-BIGIP-GLOBAL-MIB.txt file. Other F5 Networks MIBs may be necessary. For more information, see AskF5 article: SOL13322: Overview of BIG-IP MIB files (10.x - 11.x) and SNMP Trap Configuration in BIG-IP TMOS: Implementations.
Practical Considerations

This section provides some practical advice and things to be aware of when working with logging:

When changing the minimum logging level, do so with great care to avoid:
Generating large log files.
Impacting the BIG-IP system's performance.
Minimize the amount of time that the log level is set to Debug. When debug logging is no longer required, be sure to set log levels back to their default values. The following AskF5 resources list several log events and their default values:

SOL5532: Configuring the level of information logged for TMM-specific events
SOL13455: Overview of BIG-IP logging BigDB database keys (11.x)
If more than one monitor is assigned to a pool member, it may be important to determine which monitor triggered an event. See the Monitors chapter in the LTM Load Balancing section of this guide for more details.
When creating, modifying, and testing iRules, use Debug logging only in a development environment.
Before an iRule is promoted to production, remove statements typically used during
the development and testing phases that write information to the logs for sanity
checking.
To determine how many times an iRule has fired or a particular action has been
taken, use a stats profile instead of writing to a log.
Administrators of the BIG-IP LTM and GTM modules should monitor the following log events:
LTM/GTM: Pool members/nodes down or flapping.
LTM/GTM: Pool has no members.
LTM: Port exhaustion.
GTM: "Connection in progress" messages.