Hawkeye User Guide
Version 2.4 EA
February 2018
Notices
Copyright Notice
© Keysight Technologies 2017-2018
No part of this document may be reproduced in any form or by any means (including electronic storage and retrieval
or translation into a foreign language) without prior agreement and written consent from Keysight Technologies, Inc.
as governed by United States and international copyright laws.
Warranty
The material contained in this document is provided “as is,” and is subject to being changed, without notice, in future
editions. Further, to the maximum extent permitted by applicable law, Keysight disclaims all warranties, either
express or implied, with regard to this manual and any information contained herein, including but not limited to the
implied warranties of merchantability and fitness for a particular purpose. Keysight shall not be liable for errors or
for incidental or consequential damages in connection with the furnishing, use, or performance of this document or
of any information contained herein. Should Keysight and the user have a separate written agreement with warranty
terms covering the material in this document that conflict with these terms, the warranty terms in the separate
agreement shall control.
Technology Licenses
The hardware and/or software described in this document are furnished under a license and may be used or copied
only in accordance with the terms of such license.
U.S. Government Rights
The Software is "commercial computer software," as defined by Federal Acquisition Regulation ("FAR") 2.101. Pur-
suant to FAR 12.212 and 27.405-3 and Department of Defense FAR Supplement ("DFARS") 227.7202, the U.S. gov-
ernment acquires commercial computer software under the same terms by which the software is customarily
provided to the public. Accordingly, Keysight provides the Software to U.S. government customers under its standard
commercial license, which is embodied in its End User License Agreement (EULA), a copy of which can be found at
http://www.keysight.com/find/sweula or https://support.ixiacom.com/support-services/warranty-license-agree-
ments. The license set forth in the EULA represents the exclusive authority by which the U.S. government may use,
modify, distribute, or disclose the Software. The EULA and the license set forth therein, does not require or permit,
among other things, that Keysight: (1) Furnish technical information related to commercial computer software or
commercial computer software documentation that is not customarily provided to the public; or (2) Relinquish to, or
otherwise provide, the government rights in excess of these rights customarily provided to the public to use, modify,
reproduce, release, perform, display, or disclose commercial computer software or commercial computer software
documentation. No additional government requirements beyond those set forth in the EULA shall apply, except to the
extent that those terms, rights, or licenses are explicitly required from all providers of commercial computer soft-
ware pursuant to the FAR and the DFARS and are set forth specifically in writing elsewhere in the EULA. Keysight
shall be under no obligation to update, revise or otherwise modify the Software. With respect to any technical data
as defined by FAR 2.101, pursuant to FAR 12.211 and 27.404.2 and DFARS 227.7102, the U.S. government acquires no
greater than Limited Rights as defined in FAR 27.401 or DFAR 227.7103-5 (c), as applicable in any technical data, and no greater than
Restricted Rights as defined in FAR 52.227-14 (June 1987) or DFAR 252.227-7015 (b)(2) (November 1995), as applicable in any technical data.
Contacting Ixia
Corporate Headquarters: Ixia Worldwide Headquarters, 26601 W. Agoura Rd., Calabasas, CA 91302, USA
Phone: +1 877 FOR IXIA (877 367 4942); +1 818 871 1800 (International)
Fax: +1 818 871 1805
Web site: www.ixiacom.com
General: [email protected]
Investor Relations: [email protected]
Training: [email protected]
Support: [email protected], +1 818 595 2599
CONTENTS
Notices
Introduction
XRPi
XR2000
XR Docker
Software endpoints
System Optimization
Hawkeye Services
Generation of Reports
Security
Change Passwords
INDEX
XRPi
The XRPi is a low-profile, ultra-small form factor unit ideal for use as an endpoint. It contains
an ultra-low voltage CPU, one integrated Ethernet port, and no fan. The optional
WiFi dongle supports both the 2.4 and 5 GHz bands. The WiFi dongle also supports the latest
802.11ac standard with 80 MHz channel width.
XRPi Hardware
Package Contents
The XRPi ships with the following components:
• XRPi unit
• Dual-voltage (110/220) AC power adapter
• Ixia WiFi dongle
• Non-powered USB 2.0 hub
• HDMI-to-DVI cable
• Quick Start Guide
Component Description
Ethernet port: Use this port to connect the XRPi to other Ethernet devices.
LAN LEDs:
  Green (connection speed): On = 100 Mbps, Off = 10 Mbps.
  Yellow (activity): On = port is active.
Power port: Micro-USB power port. Connect the supplied AC power adapter to this port.
HDMI: HDMI 1.4 video output port. Connect the supplied HDMI cable to this port.
Micro SD slot: Micro SD card slot. The Hawkeye operating system and application
software is supplied on a micro SD card. The micro SD card is the only permanent
storage on the Hawkeye.
System Information
Current Draw: ~650 mA (3.0 W)
Weight: 100 g
XRPi configuration
Connect a monitor through the HDMI port and a keyboard through the USB port.
Username - root
Password - Ixia!123
Port Configuration
DHCP IP address configuration
DHCP is defined by default for eth0.
1. Use the nano command to edit the following files to configure static IP and DNS
server information (an example static configuration follows these steps):
nano /etc/network/interfaces
(The iface lo inet loopback entry in this file is the loopback interface configuration and should be left unchanged.)
nano /etc/resolv.conf
2. Reboot the XRPi for changes to take effect (use the command reboot 0).
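A minimal static configuration sketch for /etc/network/interfaces and /etc/resolv.conf is shown below; the addresses are placeholders and must be replaced with values from your network:

auto lo
iface lo inet loopback

# static address for the wired interface (example values)
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1

In /etc/resolv.conf, point to your DNS server:

# example DNS server
nameserver 192.168.1.1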
1. Using the attached keyboard and monitor, access the XRPi's console and use the
following command to check the XRPi's IP address:
ifconfig eth0
2. Use the following command to test connectivity:
ip route
Look for: default via [gateway IP]. Ensure that ping [gateway IP] succeeds.
3. Use the following command to test that the XRPi can reach the Internet and that
DNS resolution is working:
ping www.ixiacom.com
The ping should succeed, not time out.
If either the connectivity or name resolution fails, check the static IP information
you entered and confirm that the DHCP server is available.
For XRPi WiFi, when static IP addresses are set up and both the wlan0 and eth0
interfaces are connected, the XRPi automatic routing scripts that ensure proper
communication with the server are not enabled; the routing configuration is then
entirely the user's responsibility.
It is highly recommended to use DHCP configuration in this context.
4. The default hostname for the XRPi will be xrpi2-<last 5 digits of MAC address>. The
MAC address is on the label at the bottom of the XRPi.
If the XRPi is registered to the Hawkeye server, upgrades are performed automatically.
If the XRPi was registered to a Hawkeye Server 1.0 (or registered as a manual endpoint),
you need to upgrade the endpoint by using probeConfigure, after which it will be
automatically upgraded (see below). For Hawkeye users it is recommended to use this
upgrade strategy.
• A name for the probe. You can use letters, numbers, and underscores (_) in the name.
This option sets the host name and the name as displayed in Hawkeye. The
name can be edited in the Hawkeye GUI.
• The host name or IP address of the Hawkeye server (public IP if in the cloud)
  • Default routes for traffic will be over the WiFi interface (wlan0) if it is enabled.
    However, if the eth0 and wlan0 interfaces are on different subnetworks, for
    Mesh tests the parameter in Preferences (Advanced Options – Set test IP
    that matches default gateway) needs to be enabled, so that mesh test
    traffic is sent on the interface that has the default gateway defined. When
    wlan0 is defined, the default gateway is configured in the routing tables to be
    on wlan0.
  • It is highly recommended not to configure the XRPi with just the WiFi
    interface, as eth0 is used for Hawkeye management and is a more reliable
    interface.
When both the eth0 and wlan0 interfaces are present, the following occurs on an XRPi reboot:
• The eth0 interface is brought up by the system: the DHCP client runs first, then the
XRPi scripts. The DHCP query obtains the default gateway, NTP server, DHCP lease,
NIS servers, DNS servers, and DHCP IP address.
• The XRPi-specific scripts overwrite the default gateway, save some details of the routes
(i.e. netmask, IP, default gateway), set routes to the Hawkeye server, and configure
per-interface routing tables.
• The wlan0 interface is brought up by the system and connects to the AP: DHCP runs, then
the custom XRPi scripts configure the routing tables. Specifically, the global default route
(default gateway from DHCP) is changed from eth0 to wlan0, then the default routes and
network routes are set in the global table and the per-interface table. The above is
not impacted if eth0 and wlan0 are on different subnets.
Traffic:
The interface to be used for traffic is specified as part of test creation on the Hawkeye server.
If eth0 and wlan0 are on the same subnet, the default route applies, so traffic for the destination
will go over wlan0.
If eth0 and wlan0 are on different subnets, the network routes apply, so if the destination IP
for the test is on the subnet for eth0, traffic will go over eth0.
If the XRPi receives a disconnect from the AP or a DHCP release, the XRPi will re-attempt the
connection using wpa_default.cfg, as long as the DHCP lease has not expired.
If the XRPi receives no message or event for lost connectivity (for example, the AP loses power),
the XRPi will attempt reconnection every 10 seconds.
Connect to WiFi
You can connect to WiFi using three methods.
face” is set to “no”, wlan0 will remain with temporary test configuration until XRPi is
restarted.
The command wifiConnect -h lists several options that can be used with the script.
The two most popular options are wifiConnect -D, to save the AP connection details as
the default, and wifiConnect --config /etc/wpa_supplicant/wifi_ixia_guest.cfg, to use a
pre-existing AP configuration file.
1. The configuration, including custom WiFi configuration, is saved into the wpa_
default.cfg file on the XRPi.
2. The wpa_supplicant, an IEEE 802.1X compliant cross-platform application
(third-party Linux application), reads the configuration from wpa_default.cfg and sends
the information to the WiFi driver (specific to the WiFi USB dongle). After the WiFi dongle is
configured, the wpa_supplicant negotiates the connection with the AP and the RADIUS
server for authentication.
Note that Hawkeye can only take custom WiFi parameters and store them in the
wpa_default.cfg file to be read by the Linux wpa_supplicant application. For a user's custom
WiFi parameters to be successful, the version of wpa_supplicant must support the
parameters and the WiFi driver must also support them. Refer to the online wpa_supplicant
documentation for all supported parameters. Contact Ixia Customer Support if you
need confirmation that a specific parameter is implemented or supported. Ixia recommends
using the minimum amount of configuration in the wpa_default file (e.g. key management,
user, password, SSID).
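A minimal wpa_default.cfg sketch for a WPA2-PSK network is shown below; the SSID and passphrase are placeholders, and networks using 802.1X/RADIUS authentication need additional EAP and identity parameters:

# placeholder SSID and pre-shared key; replace with your network's values
network={
    ssid="ExampleSSID"
    key_mgmt=WPA-PSK
    psk="ExamplePassphrase"
}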
a. Confirm that the WiFi dongle is plugged into the XRPi USB port and has
initialized by checking the LED on the WiFi dongle labelled Edimax. A
blue flash from the LED indicates the dongle is working.
b. Run the following Linux command to confirm that the WiFi dongle is detected
on the USB port. After the dongle is detected, the name of the dongle (Edimax)
appears in the list of devices.
c. Confirm that the driver for the Edimax dongle is running by using the following
command:
# ifconfig wlan0
d. To check the status of the WLAN, use the following command:
e. To check the log files for timestamps of WiFi connectivity and
release events, use the following command:
# cat /var/log/Ixia
3. Check whether the WiFi interface is connected to an AP by using the following command:
# iw wlan0 link
4. Confirm that the XRPi WiFi can see all APs in range by using the following command:
# iw wlan0 scan
5. Confirm the IP assigned to wlan0 by using the following command:
# ifconfig wlan0
6. After an IP is assigned to wlan0, confirm that all routes, including the default route, are
using wlan0 by using the following command:
# route -n
7. If an XRPi WiFi endpoint is moved from one location to another, the IP addresses
for wlan0 and eth0 change. After the XRPi WiFi endpoint is located in its new
location, use the following commands in the given sequence to optimize the routes:
# route -n
# ip route flush table main
# rm -rf /etc/wpa_supplicant/wpa_default.cfg
# ifconfig wlan0 down
# ifconfig wlan0 up
# route -n
# ifconfig
8. Schedule the real service test WiFi Inspect on the Hawkeye server at an interval
of 5 or 10 minutes, for one or two hours, to find problems with different APs. View the
results in the WiFi Dashboard. The results show the stability of each AP's signal
strength over the period. A graphical representation of the results, channels, and signal
strength of each selected AP, and of issues across all APs, is available, which helps you to
view the overlapping interference.
Set the debug flag to 7 for the most detailed logs, which include messaging
and state changes. The second command saves the trace to a file:
b. Perform a detailed WiFi scan to find other APs using the same channel as the
one you were previously attempting to connect to. The noise interference for an AP is
affected by the relative signal strength of other APs using the same channel.
The bandwidth used by an AP usually spans the channel numbers above and below the
reported channel number.
The following command performs a detailed WiFi scan and saves the results to a
file:
# /home/ixia/wifiScan.py -d 4 -u 2>&1 |tee /tmp/myScan.log
10. Failure to connect to, or drops from, a specific BSSID. If you specify a BSSID with the
wifiConnect command or in a test, the connection may drop or be lost. A disconnect from an AP
(BSSID) may be due to a locally generated event or a remotely generated event. A locally
generated disconnect event can be due to the XRPi WiFi being unable to see the
beacon or keep-alive from its connected AP (BSSID). Ixia recommends that you
use different signal channels for each AP (BSSID) of an SSID with multiple APs. It is
possible that an AP for the same SSID, but a different BSSID, may have a stronger
signal and drown out the beacon of the weaker BSSID when using the same signal
channel.
The connection can also drop due to remotely generated events. The RADIUS
server may realize that the XRPi WiFi is connected to an AP (BSSID) but that an AP
(BSSID) belonging to the same SSID (i.e. Corporate WiFi) has a stronger signal.
The RADIUS server will often inform the currently connected AP to send a disconnect.
XR2000
The XR2000 is a low-profile small form factor unit ideal for use as an endpoint. It contains
an ultra-low voltage CPU, six integrated Ethernet ports (one PXE-enabled, allowing
remote diagnostics and reboot), and no fan.
XR2000 Hardware
Package Contents
The XR2000 ships with the following components:
l XR2000 unit
l Dual-voltage (110/220) AC power adapter
l Quick Start Guide
Front Panel
Component Description
Power switch / LED: Green indicates that the system is powered on. Red indicates
that the system is in standby mode.
VGA port: Connect this port to a monitor or other display device. The
Hawkeye supports QXGA resolution (2048 x 1536 @ 75 Hz).
24V DC power socket: For use with the supplied 24V DC power adapter.
Rear Panel
Component Description
Console Port (DB-9): Use this port to connect a suitable rollover cable (Yost
cable or Cisco console cable) to configure the Hawkeye, or for diagnostics.
Default terminal configuration parameters are: 115200 baud, 8 data bits, no parity,
1 stop bit, no flow control.
Internally, this port is assigned as COM1.
USB 2.0 ports: You can connect USB devices such as keyboards to the USB ports.
Top: USB0, bottom: USB1.
You should only connect USB devices that have been qualified by Ixia for use with the
XR2000. For the list of qualified products, contact your Ixia representative or
[email protected].
10/100/1000 Mbps Ethernet LAN ports: Use these ports to connect the Hawkeye to other
Ethernet devices.
From left to right: LAN1-LAN6.
LAN1 has Preboot Execution Environment (PXE) capability, so if you access the Hawkeye
through this port, you can boot it independently of the installed operating system.
LAN LEDs:
  Left (connection speed): Green = 100 Mbps, Orange = 1000 Mbps, Off = 10 Mbps.
  Right (activity): On = port is active.
System Information
Temperature (ambient): Operating: 0°C to 40°C; Storage: -20°C to 70°C
Weight: 1.2 kg
Introduction
There are four phases to configure and set up the XR2000:
• Connecting to the XR2000.
• First-time setup, in which you configure an IP address on the XR2000 so that you
can access it over a network.
• Web setup, in which you set the XR2000's basic boot-up and runtime configuration.
For this phase, you use a web browser to access the XR2000 over your network.
• Registering the XR2000 with the Hawkeye Server.
Connect to XR2000
For a new XR2000, you need to connect to the XR2000, and then configure its IP
addresses so that you can access it over your network.
Once you can access it over the network, you can use a web browser to perform more
in-depth setup using the web setup or setip. You need to connect directly to the
XR2000 to set the IP address. You can do it in the following ways:
• Connect a monitor to the VGA port and a keyboard to the USB port. See First-time
Setup Through VGA and USB.
• Use a console cable to connect a laptop to the console port. See First-time Setup
Through the Console Port.
• Use an Ethernet cable to connect a laptop to one of the LAN ports. See First-time
Setup over Ethernet.
The XR2000 supports multiple Ethernet ports. Default routes for traffic will be
over eth0. However, if each interface is on a different subnetwork, go to
Preferences > Advanced Options > Set test IP that matches
default gateway and enable the option for Mesh tests, so
that when mesh tests are made, traffic will be on the interface that has the
default gateway defined. On the command line of the XR2000, use the
route -n command to see which interface has the defined default gateway.
3. On the laptop, set the laptop's IP address so that it is on the same subnet
(192.168.1.xxx) as the XR2000. For example, you can set it to 192.168.1.1.
4. Power on the XR2000.
At this point, you can do either of two things:
• Exit the initial setup, and move on to the web setup.
• Run the setup script to configure the permanent IP addressing on the XR2000.
1. Connect a monitor to the VGA port, and a keyboard to one of the USB ports.
1. Connect a rollover cable or RJ-45 to DB-9 female (Cisco console cable) to the
XR2000's console port.
Parameter Value
Speed 115200
Data bits 8
Stop bits 1
Parity None
Source-Based Routing
When using multiple interfaces on the XR2000, it is impossible to configure multiple
default gateways because all the interfaces share the same routing table. To customize
the interface routing, you can either configure static routes to remote hosts, or use
source based routing. Source-based routing allows each interface (physical and logical)
to have its own routing table, including default routes.
To configure source-based routing, you use a text editor such as vi to edit script files
stored on the XR2000. Each logical interface or VLAN ID must have its own script file.
The scripts must be stored in the following folder:
/etc/sysconfig/network-scripts
and named according to the logical interface or VLAN ID that they control. For example,
to configure eth0, you edit the ifcfg-eth0 script. If a script file for the interface (or
combination of VLAN ID and interface) does not already exist, you must create it in the
editor.
1. Connect to the XR2000 using one of the methods described in First-time Setup,
and open a console window.
2. Start a text editor, such as vi.
3. Navigate to the /etc/sysconfig/network-scripts folder, and open the script for the
interface that you want to configure.
Scripts are named according to the interfaces that they configure, using the following
naming convention:
ifcfg-<logical_interface>.<vlanId>
Examples:
Logical interface: To configure the logical interface eth0, edit ifcfg-eth0.
VLAN: To configure VLAN 25 on the logical interface eth0, edit ifcfg-eth0.25.
If there is no script file for the interface (or combination of VLAN ID and interface),
use the editor to create it.
4. The table below describes the parameters available for the scripts. You can also
refer to the examples in this section for examples of how to configure scripts for
specific configurations.
Parameter Description
DEFROUTE= Always set DEFROUTE to “no”, so that the default route in the
default routing table will not be used.
Example: DEFROUTE=no
GATEWAY= Always set this value to null, to exclude the default gateway
from the default routing table.
Example: GATEWAY=””
MACADDR= Specify a value for this parameter only if you want to spoof
a MAC address.
Example: MACADDR=4C:02:89:00:F3:26
5. After configuring and saving the script file, you need to create separate routing
tables for each logical interface that needs a specific default gateway.
To do this, edit the file /etc/iproute2/rt_tables, and add as many tables as
you need for your configuration. For each new table, you need a table identifier
and a table name.
You can use any name you want for the tables, but when using
VLANs, Ixia recommends that you use the VLAN ID as the table identifier and as
part of the table name. See the example below, where
VLANs 25, 26 and 27 were defined over eth0.
Example:
# cat rt_tables
#
# reserved values
#
255 local
254 main
253 default
0 unspec
#
# local
#
#1 inr.ruhep
25 vlan25table
26 vlan26table
27 vlan27table
6. After you have created the tables, you need to add the routes and rules for each
table/logical interface. To do this, you create two files in the
/etc/sysconfig/network-scripts folder for each table/logical interface: route-ethx
and rule-ethx (an example sketch follows).
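As a sketch (the addresses, VLAN ID, and table name are placeholders and must be adapted to your network), the files for VLAN 25 on eth0 could look like this:

Contents of /etc/sysconfig/network-scripts/route-eth0.25:
# network route and default route placed in the vlan25table routing table
192.168.25.0/24 dev eth0.25 table vlan25table
default via 192.168.25.1 dev eth0.25 table vlan25table

Contents of /etc/sysconfig/network-scripts/rule-eth0.25:
# traffic sourced from this interface's address uses vlan25table
from 192.168.25.10/32 table vlan25table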
• In some cases, you may need to add a default route for the entire system. In this
case, you need to add a default route to the default routing table.
To add a default route for the entire system, configure the GATEWAY parameter in
one of the ifcfg-ethx files (if the interface uses a static IP address), or configure
DEFROUTE=yes if the interface uses DHCP.
You can also add routes under the default routing table using normal commands, in
which case you do not need to specify the name of the table.
A PPPoE connection uses the predefined default routing table and will
automatically add a default route for the system.
• To display the routes corresponding to a specific table, use the following command:
ip route show table <table_name>.
• To display the routing rules created on the system, use the ip rule command.
Make sure that there are no duplicate entries for a table.
• Inside the specific routing tables, add as many supplementary routes as you need.
• Source-based routing is only recommended when multiple gateways are required.
• For routing to specific destinations, use static routes instead.
• If a probe has multiple interfaces, in the Hawkeye GUI you need to add the probe
multiple times, using different names, but with the same serial number. Management
and test addresses must be configured appropriately, in order to match
your testing requirements.
DHCP Example
The following example shows how to configure interface eth0 for DHCP (script file
ifcfg-eth0).
Parameter Description
BOOTPROTO=dhcp
TYPE=Ethernet
IPV6INIT=no
DEVICE=eth0
ONBOOT=yes
Static IP Example
The following example shows how to configure interface eth0 with a static IP address
(script file ifcfg-eth0).
Parameter Description
BOOTPROTO=static
HWADDR=4C:02:89:00:F3:90 Replace the MAC value with the actual MAC address
of the card
TYPE=Ethernet
IPV6INIT=no
DEVICE=eth0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
BROADCAST=192.168.1.255
NETWORK=192.168.1.0
ONBOOT=yes
Parameter Description
BOOTPROTO=dhcp
HWADDR=4C:02:89:00:F3:90 Replace the MAC value with the actual MAC address
of the card
TYPE=Ethernet
IPV6INIT=no
DEVICE=eth0.25
VLAN=yes
ONBOOT=yes
Parameter Description
BOOTPROTO=none
HWADDR=4C:02:89:00:F3:90 Replace the MAC value with the actual MAC address
of the card
TYPE=Ethernet
IPV6INIT=no
DEVICE=eth0.25
VLAN=yes
ONBOOT=yes
Parameter Description
BOOTPROTO=static
TYPE=Ethernet
IPV6INIT=no
DEVICE=eth0.25
IPADDR=192.168.1.10
NETMASK=255.255.255.0
BROADCAST=192.168.1.255
NETWORK=192.168.1.0
XR2000 Upgrades
The procedures below are valid for XR2000 and XR2000_vm endpoints.
Run:
cd /tmp
wget https://ixiapublic.s3.amazonaws.com/hawkeye/xr2000_upgrade.tar.gz
tar zxvf xr2000_upgrade.tar.gz
cd xr2000_upgrade
./ixia_chariotprobe_install.sh
cd ..
rm -rf xr2000_upgrade
rm -rf xr2000_upgrade.tar.gz
If the XR2000 does not have internet connectivity, download the file from
https://ixiapublic.s3.amazonaws.com/hawkeye/xr2000_upgrade.tar.gz, copy it on a USB drive to the
XR2000 and place it in the /tmp folder. Then run the following procedure:
tar zxvf xr2000_upgrade.tar.gz
cd xr2000_upgrade
./ixia_chariotprobe_install.sh
cd ..
rm -rf xr2000_upgrade
rm -rf xr2000_upgrade.tar.gz
A prerequisite for an operating system upgrade is access to the Internet. There is no way
to upgrade the operating system without access to public servers.
To keep compatibility with the bittorrent test type in Hawkeye the following library
needs to be downgraded after yum upgrade:
yum downgrade libtorrent -y
XR2000_vm Installation
Overview
The XR2000 VM is available as OVA and QCOW2 images for Windows and Linux
environments using a virtual manager. For Windows systems the OVA image is used with a
VMware hypervisor, for example VMware ESX, VMware Player, or VMware Workstation. For
Linux systems the QCOW2 image is used with a virtual manager such as KVM or
OpenStack.
VM Requirements
• 8 GB of hard drive space
• Minimum 1 GB dedicated RAM. Ixia recommends 4 GB RAM.
• Minimum 1 CPU. Ixia recommends 2 CPUs.
• Access to a virtual network
Click Next.
Click Next.
Click Next and select the format for the virtual disks (use the default).
Click Next and select the network mapping. This phase is important to ensure the mapping
is done to the correct interfaces.
After the booting phase, if you go to the VMware console for the VM, and the eth0 interface
has been created and can get an IP from a DHCP server, you will see the following displayed
(the IP is displayed only if the VM can get an automatic IP):
https://yourxr2000_vm:10000
login/password
root/Ixia!123
netstat
Displays generic network statistics of the host.
If you include the -an and -rn arguments, netstat displays the routing table and the
application ports open on the XR2000, as well as their respective output.
With no arguments, netstat displays the active connections and the used sockets.
passwd
Changes the password for the current user.
route
Displays routing table information.
ifconfig
Displays the IP address and netmask associated with each Ethernet port. Also displays
details/counts of packets/bytes of traffic on each port
tcpdump
Displays the packets on an interface.
After capturing data, you can use the web interface remote tools Upload
and Download to load the .enc file and open it with Wireshark on a
Windows PC.
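As a sketch (the interface name, file name, and port filter are placeholders), a capture of test traffic could be taken like this:

# capture all traffic on eth0 to a file for later analysis in Wireshark
tcpdump -i eth0 -w /tmp/capture.enc
# limit the capture to a single traffic port, for example TCP 10116
tcpdump -i eth0 -w /tmp/capture.enc tcp port 10116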
XR Docker
The XR Docker image is a docker image which can be installed as a Hawkeye endpoint.
The XR Docker image can be installed on a platform as an application that is capable of
running Node to Node, Mesh, and a limited subset of Real service tests and functionality
of performing scheduled restarts from the Hawkeye Server. The XR Docker will register
as an automatic endpoint to a Hawkeye Server.
Installation
Use the following instructions to create an XR Docker endpoint where internet
connectivity is available for the base platform running the XR Docker (endpoint) and only one IP
is being used for the base platform. Only one XR Docker container can be installed on a
single base platform.
In Hawkeye, go to the software downloads web page, download the xr_docker install
script, and transfer this file to the base platform that hosts the endpoint. Then follow the
installation steps mentioned below.
Contact Ixia Customer Support for more advanced installations of XR Docker, such as
multiple IP (multiple subnetwork) support, or when the CentOS base system has no internet
connection from which to obtain the XR Docker image.
• For some base platform Linux OSes, such as Ubuntu, the Docker package is
referenced as 'docker.io'.
Installation steps
1. Install the Docker library and Docker application on the CentOS base platform.
2. Transfer the XR Docker install script (located on the Hawkeye server software
download web page) to the base platform.
3. Run the install script to simulate a Hawkeye endpoint. You may have to change
permissions on the file.
# chmod +x install_xr_docker.sh
4. The XR Docker endpoint automatically registers with the Hawkeye Server as part
of the start up process and registers using the IP of the host platform.
The user will be prompted to enter the endpoint name and the IP of the
Hawkeye Server. The XR Docker endpoint will register with the Hawkeye
Server and perform any necessary software upgrade from the Hawkeye
Server.
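The overall sequence on a CentOS base platform typically looks like the following sketch; the package manager commands are assumptions for a standard CentOS system and may differ on other distributions:

# install and start Docker on the CentOS base platform
yum install -y docker
systemctl start docker
systemctl enable docker
# make the transferred install script executable and run it
chmod +x install_xr_docker.sh
./install_xr_docker.sh
# confirm that the XR Docker container is running
docker ps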
Sample output:
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
Log files on the XR Docker container can also be accessed for debugging purposes if
directed by Ixia customer support.
XR Docker Removal/Uninstall
The XR Docker can be removed from the host platform using the following commands:
Sample output:
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
sample output:
7. Remove the install script and image tar from the platform (from the directory you
transferred the install script to – recommended: /tmp):
# rm -rf xr_docker_x86_64.tar install_xr_docker.sh
After you access the container, the following common XR commands are available.
# probeVersion
2. The XR Docker runs as a container on a host computer, so the IP routing and port
restrictions of the host computer apply. If tests are failing due to connectivity reasons,
as seen in Hawkeye test results, check the firewall and iptables settings of
the host computer.
3. If the XR Docker is flagged as Private in Probe Management but it is expected
to be Public, check firewall settings of host computer.
4. If multiple XR Docker endpoints need to be created, a shortcut
of downloading and using a local copy of the XR Docker image can be employed as
follows:
• The first installation of an XR Docker endpoint will result in the XR Docker
image (.tar) file being downloaded onto a host computer. This TAR file can be
copied to the next host computer. Load the XR Docker image and then run the
install script as follows:
# docker load -i <xr docker tar image>
Then run
# ./install_xr_docker.sh
5. Log files on the XR Docker container can also be accessed for debugging purposes
if directed by Ixia customer support. Contact Ixia customer support for detailed
instructions.
6. To clean up unnecessary XR Docker images (useful if you get into the strange scenario
of having multiple Docker images/entries):
# docker ps
# docker stop <cntrID>
# docker rm <cntrID>
# docker images
# docker rmi -f <imageID>
7. If the base platform is Ubuntu or similar, commands may need to be preceded
by 'sudo' to execute with root access.
Software endpoints
Software endpoints offer the user the ability to use any hardware platform as a Hawkeye
endpoint, but with the limitation of not being able to run Real Service tests. The XR2000
VM is an exception to this rule.
http://www.ixiacom.com/products/ixchariot/endpoint-library/platform-endpoints
The endpoints available on the Ixia web page are for the most part fully
compatible with Hawkeye, but some specific versions may not be completely
tested with the current Hawkeye server. It is recommended to download the
endpoints from the Hawkeye server directly to ensure 100% tested
compatibility and stability with the current Hawkeye version.
The important step is the registration server configuration: the URL or (public) IP address
of the Hawkeye server must be filled in for automatic endpoint creation and management.
This section explains how the Hawkeye server communicates on ports to the endpoints,
how endpoint to endpoint tests (communication) work, and how Real Service tests run to and
from the endpoints and the internet.
Differences between manual and automatic endpoints are explained for the routing and ports
used.
There are many factors such as NAT, firewalls, VLANs, cloud, private and public networks,
and IPv6 to be considered.
Tips on debugging routing issues are addressed. The purpose here is to identify ports and IP
ranges that are open or blocked, not how to resolve these issues.
If the Hawkeye server or endpoints are deployed in the cloud, contact Ixia Customer
Support for suggestions on enforcing security by enabling or blocking egress or ingress
ports, IPs, and traffic types per port range and direction of traffic (egress/ingress).
The server, upon installation or upgrade, runs a script to enable the required ports. This
script updates the Linux system files /etc/sysconfig/iptables and
/etc/sysconfig/ip6tables and enables the following ports for both IPv4 and IPv6:
• TCP: 80, 123, 443, 22, 25025-25050, 10117, 546, 4501-4502, 27000-27009
• UDP: 123
Port 123 is used by the NTP service for accurate clock timing.
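As a sketch of what such rules look like in /etc/sysconfig/iptables (the exact rules written by the script may differ), an accept rule for a port or port range takes this form:

# allow HTTPS, the endpoint communication range, and NTP
-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 25025:25050 -j ACCEPT
-A INPUT -p udp -m udp --dport 123 -j ACCEPT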
[Port requirement tables: columns are Description, Port(s), Port or Port Range Modification Policy, Protocol, and Directions.]
For management:
• Port 10115 on TCP and UDP (UDP is needed for time sync for Voice or Video pairs,
or any RTP pairs).
• Port 22 for Real Service testing. Port 22 is also used for XR2000 hardware probes, to
manage the probes remotely.
For traffic:
• Ports 10116-10120 (example). A larger range may be needed if more concurrent
pairs of traffic are set against the NATed probe as destination.
The TCP/UDP ports need to be configured for port forwarding depending on the nature of
the traffic.
This must be set in the configuration of the system as well, to account for the traffic sent.
See the following section for more information on handling NAT.
1. In Administration > Preferences > Test Engine Tab, set autoNAT to 1.
2. Force traffic to be in the range 10200 - max range (there is no fixed max range; it is
defined by the maximum number of concurrent pairs). In Administration >
Preferences > Traffic Port Management, set Destination Port configuration
to 1 and First Destination Port to 10116.
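On the NAT device in front of the destination endpoint, the port forwarding described above could look like the following iptables sketch; the public interface name, the endpoint's private address, and the port range are assumptions:

# forward the Hawkeye traffic port range to the endpoint's private address
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 10116:10120 -j DNAT --to-destination 192.168.1.50
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 10116:10120 -j DNAT --to-destination 192.168.1.50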
For these scenarios, the source and destination endpoints are determined by the direction
of the traffic. When using a reversed traffic direction, the source and destination
endpoints for the scenarios are in reversed order as compared to how they appear in the
user interface.
While adding the source (from) and destination (to) endpoints to the test in the test execution
configuration, check that they are in the relevant network for the use case being
tested. If an endpoint is not located in the correct network for that particular use case, you can
change that in the endpoint management dialog:
Assumptions
Unless otherwise specified, the following assumptions are made for firewall testing:
• The Hawkeye server (Hawkeye server and the registration server) is accessible
to all endpoints in a test.
• The virtual image that hosts the Hawkeye server comes with a pre-installed Registration
Server. Although virtually any endpoint can act as Registration Server, we
recommend using the pre-installed one. The Hawkeye server and Hawkeye registration
server may have different IP addresses.
• When multiple system components are located in a private/enterprise network, we
assume that they are in the same network, with full network connectivity between
them.
• When exiting the enterprise network, a device may be encountered which acts as a
firewall, performing address translation and, for most of the testing scenarios,
port translation.
• When entering the cloud, a device may be encountered which performs only 1:1
Network Address Translation (NAT), without any port translation. If a firewall is
also present, the ports specified in each of the test scenarios are opened.
• The Registration Server is usually able to detect if an endpoint is located behind a
firewall in an enterprise or in the public space/cloud.
Operational Concepts
For the purpose of running tests involving a firewall, we consider the following working
concepts:
• Hawkeye server: the main server for Hawkeye, which includes the Hawkeye test
engine server and the Registration server.
• Hawkeye Server (C): Central point of test execution. It is delivered as a virtual
image.
• Registration Server (RS): A service that handles endpoint management and
acts as a mediator between the Hawkeye Server and endpoints at runtime.
• (Source and Destination) Endpoints: Performance Endpoints that are installed
and run on the clients' device(s).
• Private/Enterprise: A private, usually enterprise, network located behind a firewall,
whose address is not Internet-routable and thus needs Network Address
Translation (NAT).
• Public: A public (and fully routable IP address) network.
• Cloud: A network with a private/non-routable address, yet exposed as a public IP
by way of a 1:1 TCP/UDP IP-mapping (NAT) rule.
Given that all Hawkeye components are located in the same private/enterprise network
(and, as such, there is no firewall or NATing between them), no special settings must be
configured. The endpoint should appear as public and/or be configured as such.
The following diagram illustrates the use case referencing the underlying components
from Hawkeye control engine that are involved and which ports are used.
For Hawkeye, both Hawkeye Srv and Registration server are located on
Hawkeye server.
We assume that both endpoints are in the same private network and that no device is
present that can alter the traffic or IPs.
Make sure that the private/enterprise firewall allows for outbound connections
to the public ports shown in the figure below.
The endpoints need to be set to private mode for this use case.
The following diagram illustrates the use case referencing the underlying components
from Hawkeye control engine that are involved and which ports are used.
For Hawkeye, both Hawkeye Srv and Registration server are located on
Hawkeye server.
Make sure that the private/enterprise firewall allows for outbound connections
to the public ports shown in the figure below.
The endpoint in the private (enterprise) network must be configured as private; the endpoint
in the public network must be configured as public.
Make sure that the private/enterprise firewall allows for outbound connections
to the public ports shown in the figure below.
For the firewall at the entry point into the cloud, make sure to open the following ports:
• TCP licensing ports (for the Hawkeye Server, only if an external license server is
used): 27000-27009
Make sure that the private/enterprise firewall allows for outbound connections
to the Cloud ports shown in the figure below.
For the firewall at the entry point into the cloud, make sure to open the following ports:
Make sure that the private/enterprise firewall allows for outbound connections
to the Cloud ports shown in the figure below.
For the firewall at the entry point into the cloud, make sure to open the following ports:
Make sure that the private/enterprise firewall allows for outbound connections
to the Cloud ports shown in the figure below.
Limitations
By default, when the destination endpoint is behind a firewall (enterprise or Cloud),
application mix traffic fails at test run unless the following actions are performed:
• In the firewall, make sure to open the ports corresponding to the applications that
you want to emulate. The table below lists the corresponding TCP ports set by
default for the test traffic as defined in the port management preferences.
• If NAT is activated, an entry must be added in the NAT table to make sure that
the traffic is routed to the private IP of the destination endpoint(s). This is for test
traffic only.
7. Confirm on the Hawkeye server in Probe Health Check that the endpoint is in a
state of link up.
Issues with Node to Node tests can be caused by connectivity issues. Check the routes and
check that no ports are blocked.
Verify the traffic ports used in the Hawkeye server in Preferences – Traffic Port
Management. To verify that the routes are correct, run an N2N traffic test between two endpoints
(UDP/TCP bidirectional tests). First run ifconfig on the endpoint, then run the test, then run
ifconfig again after the test; this will show the RX/TX packet counts increasing on the interface used.
For the firewall at the entry point into the cloud, make sure to open the following ports:
Open in the Cloud firewall the TCP/UDP ports that you plan to use for test traffic, then
go to User Preferences and configure the same range in Administration -
Preferences - Traffic Port Management.
System Optimization
The following sections explain how to optimize the performance of the Hawkeye Server.
It is important to understand that the Hawkeye software upgrade brings in the required CentOS
updates compatible with the Hawkeye Server and all necessary security upgrades.
Do not run yum update on the Hawkeye server, in case required
libraries are changed. Contact Ixia customer support to update a package
for a specific requirement.
Hawkeye Services
If you need to make a change to a service configuration file or manipulate the
MySQL database, the services that run the Hawkeye Server must be stopped and restarted
to prevent database corruption and to implement the configuration changes.
The Hawkeye server consists of a number of services. The "hawkeye" service is a top-level
script that runs multiple sub-services. There are two additional key services:
the Apache web server, which is labelled "httpd", and the "mysqld" database service.
These services accept commands from the command line when the user has an SSH session
to the Hawkeye server. The command options are "start", "stop", and "status" to start the
service, stop the service, and check the status of the service, respectively. These
commands can be run by using the "service" command.
The following are the commands for the httpd (Apache web server) service; replace "httpd"
with "mysqld" for the database service.
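For example (standard service syntax; on newer systemd-based releases the equivalent systemctl commands can also be used):

# start, stop, and check the Apache web server service
service httpd start
service httpd stop
service httpd status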
The hawkeye and httpd services should be stopped when attempting any sort of low-level
maintenance on the database, as they both perform many operations on the database
even on a fundamentally idle system. All three services should be stopped while resizing
the MySQL database. Each service must be stopped and restarted if any configuration
for that service is changed in its configuration file. Always stop the "httpd"
and "hawkeye" services before stopping the "mysqld" service, then restart in the reverse
order.
Hawkeye service
The hawkeye service configuration file is located at
/home/ixia/Hawkeye/conf/configuration.txt. It is not recommended that you change any
values in this file without consulting Ixia Customer Support. The Hawkeye server GUI uses
the Preferences section to change some settings. Passwords are defined in the file, but
refer to Change Passwords for changing passwords safely.
mysql service
The mysql service configuration file is located at /etc/my.cnf. Refer to the section on
Manage MySQL Database – MySQL RAM for instructions on changing the amount of RAM
allocated to the mysql service. Consult with Ixia Customer Support
before making any other change to this file. The actual database is stored on the
Hawkeye server under the /home/mysql_data directory, and you must not make any
changes in that database manually.
The MySQL service is configured to use 1 GB of RAM by default. This is considered
insufficient for most corporate environments.
The MySQL database configuration parameters are located on the Hawkeye server in the
/etc/my.cnf file, and the user can modify them.
Refer to the section Hawkeye Services for information on how to start and stop services
safely to implement the changes mentioned below.
The most important parameter that needs to be adjusted is the one that sets the
maximum amount of RAM to be used by MySQL. The default Hawkeye server setting is to
save all test results for 1 year; the amount of RAM required for processing this much
data for multiple users may result in slow performance of the system, in addition to the
work of saving and recycling test results in MySQL.
Parameter: innodb_buffer_pool_size=1G
The default limits MySQL to 1 GB. For large systems, Ixia recommends setting
this to 70% of the available RAM on the Hawkeye server. For a Hawkeye Server with 16 GB
of allocated RAM, Ixia recommends setting innodb_buffer_pool_size=11G.
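In /etc/my.cnf the setting lives under the [mysqld] section; a sketch for a 16 GB server:

[mysqld]
# allow the InnoDB buffer pool to use roughly 70% of 16 GB of RAM
innodb_buffer_pool_size=11G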
The default settings for the Hawkeye Server result in one year of test results being
saved. You are highly recommended to configure the number of days to store test results
to keep the system manageable. There are two suggested ways to calculate how
much disk space will be used by the system daily:
If the disk space is growing at 100 MB a day and the Hawkeye server virtual
machine was created with a virtual disk space of 30 GB, the available disk space
will be filled after 300 days.
On the Hawkeye server under Administration > Preferences there are two parameters:
These two parameters define the recycling and retention period for keeping test results
and metrics. For the example above, Ixia recommends you set the Number of days
for database retention and Number of days for database retention for
detailed metrics to 180 days (6 months) to allow for a large safety margin.
The Number of days for database retention for detailed metrics parameter is disk-intensive,
and its retention period should be less than the Number of days for database
retention. For systems under heavy load (>100,000 tests a day) it is recommended to
change this metric to less than 30 days; if possible, less than 10 days is highly
recommended.
The parameter Number of days for database retention of admin elements can
be reduced to as low as 5 days, as it is mostly used for log tracking.
The recommended way is to back up the MySQL database. Copy the MySQL backup to
another disk. Next, use the virtual manager and add another disk to the Hawkeye server
virtual machine. Many virtual managers allow the merging of two disks if they are located
on the same physical device. Some virtual managers allow increasing the size of the
virtual machine image with just one command line, but each virtual manager is different.
Alternatively, create a new Hawkeye server virtual machine and install the same software
version of Hawkeye, then restore the MySQL backup. Refer to Upgrading Hawkeye,
which explains database backup and recovery.
It is not recommended to create a new virtual disk and then use a mount point from
/home/mysql_data to the new virtual disk. The time delays for two physically separate hard
drives may prove problematic, and this has not been verified by the Hawkeye QA team.
Generation of Reports
Generation and scheduling of reports can be very CPU intensive.
There are many configurable preferences that will impact report generation.
The parameter Max report generation time sets the maximum amount of processing time
allowed to generate a report. If this time is exceeded, the report is abandoned.
Hawkeye supports a mechanism to shorten the time to generate reports for users. This
is done with an embedded data aggregation mechanism, which is enabled and configured
with the two parameters Automatic storage of aggregation levels and Aggregation
for report use. This data aggregation capability works by constantly aggregating
(averaging test metrics) data over periods of time (an hour, a few hours, a day) so that
there is less data to search when users are building reports on large data sets. By
default, the Hawkeye server constantly maintains these aggregation tables but does
not use them unless enabled. The values are represented in seconds. The default
aggregation periods are 1 hour, 3 hours, and 1 day, which are selected by 3600, 10800, and
86400 respectively.
The Aggregation for report use parameter defines how Hawkeye automatically adjusts
aggregation levels.
When users are looking for data for a date range, the aggregation level used will be:
if the parameter is >0, the time range requested by the user (in seconds) is divided by the
Aggregation for report use integer, and the aggregation period used is the maximum one
that is less than or equal to this number.
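For example (the parameter value here is an assumption chosen only to illustrate the arithmetic): if Aggregation for report use is set to 500 and a user requests a 30-day range (2,592,000 seconds), then 2,592,000 / 500 = 5,184 seconds, and the largest default aggregation period that is less than or equal to 5,184 is 3,600, so the 1-hour aggregation table is used.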
The clock on the Hawkeye server and the Linux endpoints (XRPi/XR2000/Linux SW) can
be synced to a common clock, assuming access to the internet.
The following commands on the command line interface of the Hawkeye server or
endpoint will synchronize the clock:
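For example, a one-time synchronization against a public NTP server (the server name is a placeholder; any reachable NTP server can be used, and outbound UDP port 123 must be allowed):

# synchronize the system clock once against a public NTP server
ntpdate pool.ntp.org
# write the time to the hardware clock so that it survives a reboot
hwclock --systohc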
Another option is to set the correct timezone by choosing a city under the
/usr/share/zoneinfo directory. For example:
ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime
After the Hawkeye Server virtual machine clock has been changed, for the Hawkeye
server to adjust to this change you must stop the hawkeye, httpd, and mysqld
services, then restart each service in reverse order (alternatively, reboot the Hawkeye
server). Refer to Hawkeye Services for more information.
A user in a different country (timezone) using a web browser to the Hawkeye Server
must be aware of this time difference when viewing test results and generating reports
as test results/reports will be dated by the Hawkeye server time and not the user’s local
time.
If the Hawkeye server and endpoints are not synced to the same clock source, the
timestamps of tests in the Hawkeye server may not be as expected, as the
Hawkeye server will be scheduling tests using one clock and the endpoint will be running
tests by another clock.
Test timeouts
In the Test Controller section of preferences there are many parameters defining
maximum test durations. After a timer expires, the Hawkeye server ends the test and cleans up.
In the Advanced Options section, the parameter Maximum time processing Node
to Node and Mesh tests specifies how many seconds after the test duration is over
the Hawkeye server waits before ending the test and cleaning up.
There are two parameters for Real Services. In the Test Controller section, the parameter
Real Test Max duration defines how long the endpoint processes the test,
after which time the endpoint terminates the test. In the Advanced Options section,
the parameter Maximum time processing Real Service tests specifies how many
seconds after the test duration is over the Hawkeye server waits before ending the test
and cleaning up.
dashboard, one for the map display, and one for other web pages such as floorplans and
test execution pages.
The parameter Refresh Execution List Timer specifies the number of seconds between
updates of the test execution web pages. It is recommended to keep this value above 10
seconds for system performance.
The parameter Map update frequency in seconds impacts how frequently the maps
are re-calculated. It is recommended to keep this above 30 seconds. The parameter
Refresh list timer impacts other web pages that support auto-refresh. We
recommend keeping this above 10 seconds.
Map Refresh
The frequency for updating maps can be CPU intensive. There are two parameters under
Preferences in Administration that define this. First, the parameter Map Report
must be enabled or disabled. If disabled, no processing of maps to be displayed will take
place. In the GUI section of preferences, the parameter Map update Frequency in
seconds defines how frequently the processing or refreshing of maps is performed.
As this impacts the CPU, we advise not setting this value to less than 30 seconds.
Security
This section provides information on the corporate security that Hawkeye provides.
Another layer of security is the use of firewalls to block specific ports for ingress/egress
to the endpoints and Hawkeye server, which is explained in the Routing section.
The administrator can set the timeout period for the Hawkeye client (the user's web access
to the Hawkeye server) by setting a parameter on the web page Administration > Preferences
> GUI > Max Idle Timer.
The Hawkeye server by default uses HTTP (port 80) for web access. This can be
changed to secure HTTPS (port 443) access if required. If you want to do this, you
are recommended to contact Ixia Customer Support for the steps, as it involves changing
two configuration files and possibly disabling port 80 in iptables.
It is not recommended, but for the advanced security-conscious organization the SSL/TLS
protocol versions and the cipher algorithms to be used can be configured or restricted by
changing the /etc/httpd/conf.d/ssl.conf file. Examples of changes are to exclude TLS
protocol versions 1.0 and 1.1 and only use 1.2, or to prioritize a list of ciphers to be used
(negotiated) for the connection. Note that as this is a Linux system file not maintained
by Hawkeye, it can be replaced by an upgrade, requiring the user to restore any previously
saved changes to the file following the upgrade.
To generate the CSR from the Hawkeye server, you need to log into the server via
SSH and run the following command:
openssl req -new -newkey rsa:2048 -nodes -out YourServerName.csr -keyout YourServerName.key
Following the command, you will be prompted to answer some questions about your
organization and the FQDN of your server.
Upon completion, your system will have two new files in the directory from which you
ran the openssl command: YourServerName.csr and YourServerName.key. Please copy
YourServerName.key to /etc/pki/tls/private/ for use later.
Use cp (copy) and not mv (move) when placing the files in their new
directories.
The new SSL Certificate (.crt file) will need to be saved in /etc/pki/tls/certs/. If you
have not already done so, you will also need to save the .key file that was generated during
your Certificate Signing Request generation to /etc/pki/tls/private/ (note: please
use cp (copy) and not mv (move) when placing the files in their new directories). Once
the files are in the correct directories, you will need to edit the following file with vi or
your favorite Linux text editor:
vi /etc/httpd/conf.d/ssl.conf
You will need to change the following two lines to point to your signed files instead of the
localhost self-signed versions that are provided by Ixia:
SSLCertificateFile /etc/pki/tls/certs/localhost.crt
SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
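After the edit, the two directives point to your own files, for example (the file names follow the CSR example above):

SSLCertificateFile /etc/pki/tls/certs/YourServerName.crt
SSLCertificateKeyFile /etc/pki/tls/private/YourServerName.key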
Once the two lines are edited, you will need to restart the Hawkeye web server. Refer to
the section Hawkeye Services for how to do this.
Following the start of httpd, you should be able to reload the Hawkeye website by going
to https://www.YourServerName.com. This time you should not be prompted about having
an unsigned certificate.
For a regular LDAP server, which can include Active Directory type LDAP servers, the
System Authentication mode is set to 1 or 2 for LDAP. The Hawkeye server
validates additional users by authenticating with the defined LDAP server. When System
Authentication mode is set to 1, the user must already be defined as a user on
the Hawkeye server and the LDAP server is used to validate the password. When the
System Authentication mode is set to 2, the user name and password are validated
against the LDAP server and the user is then automatically added as a Hawkeye user. It is the
responsibility of the System Administrator to update the System Users Management
to change the entry for the user to reflect the expected group. Ixia recommends setting
System Authentication mode to 1 to keep control of the access and rights on the
Hawkeye server.
For secure LDAP, the System Authentication mode is set to 4 for LDAP. The
Hawkeye server must already have users defined in its System User Management
to allow a select subset of the organization's users access to the Hawkeye server. The
secure LDAP server is used to validate the password for the user.
In Administration > Preferences, define the following parameters to use the LDAP
server:
• LDAP Domain
• LDAP Bind DN
Refer to the organization's secure LDAP server configuration to determine the DN parameters
that you must specify. These can include a combination of OU, UID, CN and DC/DC.
The values are not case sensitive.
The Ixia login feature for authenticating users is applied when the System Authentication
mode is set to 3. To use the Ixia Login service, the administrator defines a new
user in the Administration > System Users Management and the user is defined
with an email address that ends in [email protected]. This allows a defined
user to access the Hawkeye server. The user engineer must then access the Ixia login
server and register, which means providing an email address and password. The Ixia
Login service will then send an email to the email address provided, requiring
confirmation to make the user active.
When the user engineer attempts to log in, the Hawkeye server contacts Ixia to validate
the password provided for the login. If the username and password match in the
secure Ixia Authentication server, the user engineer is allowed to log in. The privileges
allowed to the engineer are determined by the system administrator that defined the
account for the engineer in the Administration > System Users Management (refer
to Users and Groups Management).
Maps and floor plans visible to the users/group showing the status of the current net-
work follow these restrictions.
Users can be further restricted in their use of the Hawkeye server with the use of test
templates. This allows the system administrator to create test suites such as a voice_
suite and office_suite (collection of appropriate Node to Node or Real Service tests) with
pre-canned thresholds. These controlled tests could then be assigned to users. This
would limit users to a controlled set of tests to be able to detect network issues affecting
their groups/area. Refer to Replicating tests across multiple endpoints.
Hawkeye supports the following user roles:
l System administrator
l Group administrator
l User
l System viewer
l Group viewer
• Probes created by a Group administrator are available for testing to all users from the group
AND to system administrators. Only group administrators and system administrators can
remove or update the probes.
• Probes created by a User are available for testing to that user AND to system administrators.
Only the current user, group administrators, and system administrators can remove or
update the probes.
Group management
Creating a group
1. Log in to the Hawkeye application.
The user can only create groups if the Admin level privilege of the
user is set to System Administrator.
2. Select the Administration menu from the main menu bar. Select the option System
Group Management. The following options become available:
3. Enter the User Group Name. Enter group comments if any. Select Enabled or Dis-
abled to set the status of the group. For example, Voice group was created for the
users that will test the VoIP network.
Disabling the group disables all users belonging to the group, and
therefore prevents them from accessing the application.
4. Assign the test types intended to be used by the new group. On the left side
panel, all test types are available. The test types added to the right side panel
become available for the new group. For example, Voice test types
were added to the Voice group.
5. Click the Add button to save the new group into the database. If everything is OK,
the group should be present in the Group table.
6. The Hawkeye application allows the user to edit the comments, the status of an
existing group, and the available test types. To edit, click the Edit icon next to
the group ID or group name of the group that you want to edit. All information
about the group will be displayed in the text boxes below, and the update option
will be enabled, as shown below:
If the status of a user group is set to Disabled, users in that group will
not be able to log in to Hawkeye.
7. Hawkeye also allows the user to Delete an existing group. Click on the Delete
icon of the group that you want to delete. A new pop up window will appear, as
shown below, asking for confirmation. Click OK to delete the group.
Deleting a user group will also delete all users in this group.
Users Management
Creating a user
1. Log in to the Hawkeye application. A new user can be created only by a member of
the System Administrator or Group Administrator groups.
2. Click the Administration option on the main menu. Select the System User
Management option from the drop-down list.
3. To add a new user, enter the details of the user in the text boxes as shown below.
l A user with the admin level privilege set to System Administrator can add
new users to any of the existing groups.
l A user with the admin level privilege set to Group Administrator can add new
users only in the group to which the user belongs.
l A user logged in to Hawkeye with System Administrator privileges
will be able to view the details of the users belonging to all groups.
l A user logged in to Hawkeye with Group Administrator privileges
will be able to view the details of the users belonging only to the particular
group to which the logged-in user belongs.
l A new user can be assigned different User Level privileges as per the
requirement.
l System Administrator is the super user. This level has complete access to the
Hawkeye application. The user with this privilege level can add and delete users
and groups, enable and disable users and groups, view probes of each user, view
scheduled tests by each user, and extract results.
l Group Administrator user role can perform the same actions as the system
administrator, but the actions are limited only to the group that the user belongs
to. A group administrator is not entitled to add and delete groups.
l Group Viewer user role can only view and extract the test results of the tests run
by the users in that group.
l Full System Viewer user role can only view and extract the test results of the
tests run by all the users belonging to various groups.
l The User role can create its own probes and schedule or run a test. The user can
only view and extract the results of the tests that the user has executed.
A user logged into Hawkeye with the System Administrator privilege level can assign
any of the above five user levels to the newly added user.
A user logged into Hawkeye with the Group Administrator privilege level can assign only
the following three user levels to the newly added user: group administrator, group viewer,
or user.
Editing a user
Hawkeye allows you to edit the details of an existing user.
To edit a particular user, click the Edit icon. The text boxes will be updated with the current
information about the user. You can modify any of the fields except the Login field, and
click the Update button. This will update the user details in the database and the same
will be displayed in the users table.
Removing a user
Hawkeye also allows the user to delete an existing user.
Click the Delete icon next to the user you want to delete. A new pop-up window will appear,
asking for confirmation. Click OK to delete the user.
To filter the user, type the login name into the User column. The filtering is done dynam-
ically.
The user trail displays the following information for each action:
l ID of the action;
l Date and time when the action was taken;
l Type of action taken;
l Source IP from where the action was taken;
l User name.
The user trail feature is available only for users with administrator priv-
ileges.
Change Passwords
Modify mySQL Password
The MySQL database for the Hawkeye server uses the default credentials Ixia/Ixia123 and
root/Ixia123. For corporate security, we recommend that you change the default passwords.
To change the Ixia and root passwords used to access the MySQL database that the
Hawkeye server runs on, do the following:
1. Shut down the Hawkeye services from the Hawkeye server console. Use the commands
below to stop the services:
l # service Hawkeye stop
l # service httpd stop
2. Change your passwords with the MySQL command line using the following commands:
l # mysqladmin -u root -p’Ixia123’ password ‘newpassword’
l # mysqladmin -u Ixia -p’Ixia123’ password ‘newpassword’
You have now changed the passwords for the root and Ixia accounts (the accounts
referenced throughout this Hawkeye server user guide).
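Optionally, you can verify the new password by logging in to MySQL from the command line; you will be prompted for the password you just set:
# mysql -u Ixia -p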
3. Update the configuration file on the Hawkeye server. To change the file, do the following:
l Edit /home/ixia/Hawkeye/Conf/configuration.txt (for example with vi)
l Change the passwords for the two database entries:
n "MySQL_Password" == "Ixia123"
n "Myresults_SQL_Password" == "Ixia123"
Ixia is the user account used by the Hawkeye server to access the
MySQL database. This is different from the root account.
4. Restart the Hawkeye server. You can run the following commands from the
Hawkeye console:
l # service httpd start
l # service Hawkeye start
This only changes the credentials that the Hawkeye server uses, so the user will also need to
log in to every endpoint and change the endpoint login credentials. For the XR hardware
(XRPi, XR2000, XR2000_VM), the user will need to log in as root over SSH and then
change the local root password using the passwd command. Changing the root password
for each type of software endpoint will be specific to the type of software endpoint.
When auto and manual endpoints are registered with the Hawkeye server, a secure (SSL)
link is established and this is used to run tests, bypassing the need to use the endpoint
root credentials.
Impact to automatic endpoints: Auto probe management also uses the SSL connection
and is not impacted by changing endpoint root password. This will only impact ssh to
endpoints.
Impact to manual endpoints: This will impact ssh. Hawkeye server uses the endpoint
root credentials to query endpoint for manual probe management.
Follow the guidelines given below to build a captive portal script, so that an XRPi can
connect to an organization's captive portal using the WiFi Connect test. To install and
use the Captive Portal scripts on the Hawkeye server, complete the following:
Prerequisites
Before you start using captive portal on Hawkeye, you need the following:
Ixia highly recommends that the time on the XRPi is synced to the time on the Hawkeye
server. If this is not aligned, the web browser will appear very slow. Using a CLI on the
XRPi, issue the date command to compare the time to that on the Hawkeye server. The
following set of instructions ensures the time is correct on the XRPi:
You can also set the correct timezone by choosing a city under the /usr/share/zoneinfo
directory:
ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime
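For example, assuming NTP is not configured on the XRPi, you can compare and, if needed, set the time manually with the date command (the timestamp below is only an illustration):
date
date -s "2018-02-15 10:30:00"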
1. Select the top right Selenium IDE button on your Firefox browser to launch a
pop-up for Selenium.
2. Click the red button on the top right corner of the Selenium pop-up to start the
recording.
3. Go to the browser and open any website. The system redirects you to the captive
portal.
4. Login with the credentials for your organization.
5. In the Selenium window click the red button to stop the recording. Login details
will be displayed in the table pane.
6. In the Selenium window from the File pull down menu select Export Test Case
as > Python 2/ unittest/ WebDriver and choose a directory and file on the
XRPi. Ixia recommends /tmp/scriptname.py.
7. On the XRPi go to the directory where you saved the file and confirm the script has
been saved with the command cat /tmp/scriptname.py.
Ensure that you save the script with .py extension.
8. Close Selenium and ensure that you do not select Save again, as it will change the script
saved in the file.
9. Close Firefox.
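To replay the recorded steps, run the exported script from the XRPi command line, for example (assuming the script was exported as Python 2 and saved as /tmp/scriptname.py):
python /tmp/scriptname.py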
Running the script will invoke Firefox and go through the same steps as recorded previously
for logging into the web site. The captive portal for each organization is slightly different,
so you may have to rework the script to log in successfully. You will need to edit the script,
make some changes, and then run the script again until it successfully enters the
username/password to authenticate the user. Below is a list of common changes that may
need to be made:
l Add a wait delay after passing the URL to the webdriver. This means adding the
line time.sleep(1).
l Verify that the generated URL in the script is correct. The URL may need to be modified or
replaced with a direct reference to the URL of the organization's captive portal.
l The organization's captive portal site may invoke a pop-up for login credentials. If
a pop-up for login credentials appears as a result of redirection, it is possible to
prevent the pop-up by making the following script change (see Example 3
below):
n On the line driver.get(self.base_url + "/gp2/webportal…"), replace this with
driver.get(self.base_url).
l Use the Inspect feature of Firefox to identify the names of the identifiers for the login,
password, and authenticate button. These may need to be updated as per
Example 3 given below.
l The user may want to add a wrapper in Python to add more logic. This may require
advanced knowledge of Python scripting. Contact Ixia Customer Support if assistance
is required.
Example 1:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("http://www.google.com")
elem = driver.find_element_by_name("login")
elem.clear()
elem.send_keys("user1")
elem = driver.find_element_by_name("password")
elem.clear()
elem.send_keys("ppp")
elem=driver.find_element_by_name('Submit')
elem.click()
#html_source=driver.page_source
html_source=driver.title
Example 2:
# -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
import unittest, time, re
import os

class Ixiaguest(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.implicitly_wait(30)
        # self.base_url = "https://guest.ixiacom.com:8443"
        self.base_url = "http://www.google.com"
        self.verificationErrors = []
        self.accept_next_alert = True

    def test_ixiaguest(self):
        driver = self.driver
        # driver.get(self.base_url + "/gp2/webportal/ext/webPortalAuthLogin?portal_ip=10.212.240.132&client_id=74%3Ada%3A38%3A90%3A89%3Af5&wbaredirect=http%3A%2F%2Fwww.google.com%2F&ssid=IXIA+Guest&bssid=28%3A8a%3A1c%3A21%3A23%3A02")
        driver.get(self.base_url)
        driver.find_element_by_id("webPortalAuthUsername").clear()
        driver.find_element_by_id("webPortalAuthUsername").send_keys("nribault")
        driver.find_element_by_id("webPortalAuthPassword").clear()
        driver.find_element_by_id("webPortalAuthPassword").send_keys("IJ2W5Z")
        driver.find_element_by_css_selector("img[alt=\"Authenticate\"]").click()

    def is_element_present(self, how, what):
        try: self.driver.find_element(by=how, value=what)
        except NoSuchElementException as e: return False
        return True

    def is_alert_present(self):
        try: self.driver.switch_to_alert()
        except NoAlertPresentException as e: return False
        return True

    def close_alert_and_get_its_text(self):
        try:
            alert = self.driver.switch_to_alert()
            alert_text = alert.text
            if self.accept_next_alert:
                alert.accept()
            else:
                alert.dismiss()
            return alert_text
        finally: self.accept_next_alert = True

    def tearDown(self):
        self.driver.quit()
        self.assertEqual([], self.verificationErrors)

hostname = "google.com"  # example
response = os.system("ping -c 1 " + hostname)
Example 3:
class IxiaGuestLoginTest1(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.implicitly_wait(30)
        self.base_url = "https://guest.ixiacom.com:8443/"
        self.verificationErrors = []
        self.accept_next_alert = True

    def test_ixia_guest_login_test1(self):
        driver = self.driver
        driver.get(self.base_url)
        self.driver.implicitly_wait(15)
        driver.find_element_by_name("j_username").clear()
        driver.find_element_by_name("j_username").send_keys("mtest2")
        driver.find_element_by_name("j_password").clear()
        driver.find_element_by_name("j_password").send_keys("H69KGR")
        driver.find_element_by_name("submit").click()
The Real Service test wifi Connect has a configuration field for Captive Portal
Script. The user can enter the path and name
/home/ixia/Hawkeye/WebServer/uploads/ixiaportal so that the captive portal script can be
run to complete the connection to the organization's network.
Note that the wifi connect test may give a status of Fast re-authentication
in the test results. If the organization's network supports
fast re-authentication, the AP will know that the XRPi is still logged on
within a certain time period, so it will automatically allow the XRPi to
connect without authenticating the user with the RADIUS (authentication)
server.
Configure Hawkeye
This section explains the details of the Hawkeye application configuration.
Interface Presentation
The left pane of the window allows you to select the various modules of Hawkeye.
The modules of Hawkeye are restricted by user privileges. You need to have access
rights to view the different modules.
This panel is the main menu of the application, which allows you to browse through the
different sub menus of the application.
On the same bar, info about the current login account is present: who is logged in and
last login info:
The central panel changes its appearance according to the option selected in the main
menu.
The panel located on the bottom of the page shows options typically reserved for user
input and actions.
It is possible to move the gray bar (vertical only) so that the panel
sizes can be expanded or reduced.
Probes Management
This section explains how to add a new probe entry, manage existing probes (aka end-
points), check the probes health, and configure the probes to be displayed on the live
map.
Manual: This is not recommended. The endpoint is manually added into the list of
endpoints in Hawkeye.
Select the “Probes management” option from the Probe management main menu. As
shown below, this will open a probes table and a template to add new probes located on
the bottom of the page.
l Available for mesh: select if the probe is going to be available for mesh topology;
l Is active: select to make the probe available to use or set passive mode to use in
the future;
l Serial Number: serial number used to identify the probe uniquely. Probes with the same
serial number are automatically considered to be the same hardware and will be treated
as a single device for scheduling and maintenance as well as for capacity checking.
The serial number is optional: if it is empty or set to 'default', the management IP will be
used to identify probes on the same physical machine with different
test interfaces (or the ID, if this option is not selected in preferences);
l Probe Location Default Latitude: latitude coordinate of the probe for map configuration;
l Probe Location Default Longitude: longitude coordinate of the probe for map configuration;
l Change username and password: username and password used to connect to the
particular probe for remote management and real services execution.
Note: both the user name and password are hidden and encrypted in the database.
If both the login and password for a probe are empty, then by default the username and
password defined in the configuration.txt file on the server (see admin guide), or the
username and password in preferences, are used.
To assign a probe to a particular group of users, select the group from the left panel
named “Available User Groups” and add it to the right panel named “Available for User
Groups”.
To complete the definition of a new probe in the system, click on ADD PROBE button.
To check if the test agent is available on the probe, click on the icon, in the probe
table.
Remove a probe
A probe can be removed by clicking on the icon.
The probe needs to be removed from any existing mesh and any active test execution
schedule before being deleted. If you want to remove an automatic endpoint, you must
SSH to the endpoint and run the configuration command and remove reference to the
Hawkeye server IP address (set to 0.0.0.0). If this step is not performed the endpoint
automatically re-registers with the Hawkeye server and appears in the list of automatic
probes.
A warning message will be displayed if the probe doesn't meet these prerequisites:
Example:
Edit a probe
The information about the probe will be available in the window below. It can be
edited; click on the Update Probe button to save the new information.
Note: the probe name can be changed, but any existing test results will still be stored
in the test history under the OLD probe name. Two probes with the same name will also show in
the test history as the same "entity", with no way to distinguish them.
Click on the icon to edit an existing probe and view the following information: the active
schedules in which the probe participates, and the meshes to which the probe belongs.
To check the status of the configured probes, select the “Probes Health Check” sub
menu located under the “Probes Management” menu.
Here you can see information about all the probes configured in the system.
Real service and remote probe management use port 22 for probe connectivity and
are therefore not taken into account by the health check mechanism.
The current status, running status, name, and other information are displayed in this screen.
This can help you understand the current status and state of a probe.
Probe remote management feature offers the user the possibility to control the probe
from the Hawkeye management server and to check various information about probes,
such as Hawkeye endpoint version, hardware configuration details and current status of
the available resources on the probe (CPU load, memory load and available disk size).
To access the probe remote management page, select the menu from the probe
management.
To refresh the above information, the probe must be expanded and minimized.
Probes that are defined can be public or private depending on the way they are con-
necting to the network.
The mode definition is automatically 'guessed' by the Hawkeye server the first time the
probe registers.
The guess is based on a comparison between the probe's local IP and the probe's IP as seen
by the Hawkeye server.
Typically, if the probe is behind a NAT, the probe will be seen as a private probe.
There are cases where forcing a probe detected as private to public will be needed (or
the other way around). Understanding where the probe is located compared to the location of
the other probe and the Hawkeye server is important.
Test Execution
1. Node to Node testing - allows execution of tests between one node (probe) and
another node (probe).
2. Mesh testing - allows execution of tests between all the probes that belong to a
mesh topology.
3. Real Services - allows real service tests (HTTP, FTP, ICMP, etc.) from probes to servers
located on the internet.
The node to node test execution will allow the following selections:
l Probe from: probe where the test is executed FROM (can be network side);
l Probe to: probe where the test is executed TO (can be access side);
l Test type: the type of test that will be executed;
l Test duration: the length of time during which active traffic will be generated;
l Test options: depending on the test types, some options would be available to
select from (e.g. bit rate, packet size, number of concurrent pairs, etc…).
Note: The direction of the test will be from Node 1 to Node 2. When test
names include downstream, the traffic will be generated from Node 1 to
Node 2. When the name includes upstream the traffic will be generated from
Node 2 to Node 1.
By selecting the “Test Execution Mesh” option from the Test Execution main menu, the user
can configure tests for the available mesh topologies. To see how to create a mesh, check
the Probes Management section.
Test duration: the length of time during which active traffic will be generated;
Note: Not all test types available for Node to Node topologies are avail-
able for Mesh topologies.
Starting a test on a mesh will add all available paths in the mesh (see mesh creation
and configuration section).
To configure a real service test scenario, select the “Test Execution Real Service” option
located under the Test execution menu.
The real services test execution will allow the following selections:
Destination Server: the real server where the test is executed TO;
Test options: depending on the test types, some options would be available to select
from or configure to (e.g. DNS server, packet size, ping interval, YouTube code, etc.)
Note: Real Services tests are available only for hardware probe types.
- Running: the tests are under execution. This will be displayed in black;
- Queued: waiting for execution – the test resources aren’t available (busy), so the test
is waiting in queue;
- Waiting next execution: the test is scheduled and the schedule is active, but the next
execution time is not available yet;
l Schedule on hold: the test is scheduled but the schedule has been placed on hold
(see putting a schedule on hold section);
l For mesh tests: queuing for completion. This status is displayed for a mesh test
when the mesh is partially completed and the test is split into different test runs.
The test scheduling options are available for all types of test configurations. The
functionality is the same for all configurations.
Test interval [minutes]: defines the time interval between tests. This interval is
calculated on a best effort basis, as the tests in the queue might be too numerous to allow all
intervals to be respected. In case the interval is too small for tests to be executed, the
queue will automatically adjust to the best possible interval to execute the test.
If the test interval is set to 0, the test will be executed only one time. This will be
displayed as a one shot test. In this case, the test will be displayed in the test execution
table while waiting for execution (Queuing) or running. Once the test is completed, it
will be removed from the test execution table and the results will be available in the
test results lists.
Start schedule : defines the date when the schedule shall start.
To select the day and time, change time (24 hours selection) and click on the selected
date.
Note: The test execution list will display only the schedule currently active
OR active in the future. All schedules not active any more will not be dis-
played in the list.
If the test is waiting for execution, in the queue or on hold, the test will be canceled dir-
ectly.
If the test is canceled while running, first it will finish the current run and then will be
canceled.
The scheduled test list is refreshed every minute by default. It might be the case
that canceling a test or a schedule that appears as queuing or waiting next execution
results in the test being canceled while in running mode instead. In that case the test
execution list will refresh and the new status of the test will be displayed.
Tests can be put on hold by selecting the pause button. This will put the schedule
on hold while it waits for the next execution. The button will change to a play button;
selecting it will resume the test schedule.
If the test is waiting for execution, in the queue or on hold, the test will directly change
status.
If the test is running, after the current run ends, the test will be put on hold. A specific
status (running then hold) will be displayed.
The interval optimizer takes the test execution start time and adds the interval for the
next run. If the queue delays the start time of the first test, or if there are test execution
interruptions, the interval optimizer takes that into account and optimizes the queue to stay
as close as possible to the execution interval. A downside of this behavior is that there is
no guarantee of exact test execution times.
Disabling the interval optimizer will force the test execution to take place at the first
execution time + test interval. This means that when the test is executed, the next
execution time will be set to first time + n * interval time, where n is chosen so that the
result is the next time in the schedule.
Example: the first time is 12:00 and the interval is 1 hour. If the test is executed at 16:12,
the next time will be set to 12:00 + n * 1 hour, so 17:00. This allows enforcing the
schedule to execute at exact times, but if the queue is full it will delay the execution
and not optimize the queue for best use.
Threshold management
Thresholds are the baseline to decide if tests are passing or failing. Setting up correct
thresholds is essential for understanding the test results and making sure that the right
level of information is in the database.
To check the default thresholds configured in the application, select the “Global
Threshold Management” menu located under the Administration tab.
The list of available test types, corresponding pairs and metrics will be displayed:
Different thresholds can be filtered based on test types, pair names, and metrics.
The following information per test type, pair name and metric is displayed:
l Default threshold: this is a system wide default threshold for the metric. The value
is a float.
l Threshold type: currently three types of thresholds are supported:
l <=: the test result value will PASS if the result is less than or equal to the threshold. Any
result value above the threshold will FAIL (e.g. delay or jitter measurements).
l >=: the test result value will PASS if the result is greater than or equal to the threshold.
Any result value below the threshold will FAIL (e.g. throughput).
l %: this defines a percentage of the expected throughput. This threshold is only
supported for throughput and for test types where the user can define a throughput (bit
rate) value. The test will PASS if the expected % of the configured throughput is
reached; it will fail otherwise.
- Record timing records: for some test types supported by Hawkeye, timing records
are recorded during the test, allowing Hawkeye to store information about the behavior of the
test. This depends on the test type and is available when selecting an individual
test report (see the browsing test results section).
The default thresholds can be changed from the default threshold page.
The bottom part of the screen allows you to change the value of the threshold (the threshold
type cannot be changed).
To change the default threshold, edit the Threshold value field, configure the new value
and click on Update Threshold button.
The Hawkeye application offers the possibility to change a specific threshold for a test that
will be configured to run in a node to node topology, mesh, or real service scenario.
To configure the threshold per test, select any test execution module. On the panel,
where the test parameters are configured, select “Show threshold option” button. The
available thresholds for the selected test type will be available. In the example below,
the available thresholds for node to node, DNS response time are shown:
- Threshold value;
- Threshold type;
Note: The threshold is changed only for the current test configuration. If a new
test of the same type is configured, for which the same thresholds must be con-
figured, the thresholds must be configured again in test configuration process.
If the thresholds must be changed globally, follow the steps mentioned in the
Change the default thresholds section.
Alarms management
Alarms configuration
When configuring a test, the users have the possibility to set up alarms to be sent in
case of failure or error in the test.
To enable the alarm configuration, click on the “Show Alarm Option” button located on
the bottom panel of the screen.
The first row defines the trigger for test status on Alarms:
- Status change: an alarm will be triggered when the test result status changes. This is
only triggered on scheduled tests and allows notification of a test result status change.
- Failed: an alarm will be triggered every time a test result is failed (test was completed
but the threshold was not met).
- Error: an alarm will be triggered every time a test is reporting error (the test couldn’t
start or be completed).
Both alarm types need to be set up at installation with a third-party email (SMTP)
server and/or SNMP alarm management system so that notifications or traps can be generated.
The administrator can configure one alarm type, the other, or none, depending on the
Hawkeye integration.
The user has to select at least one “Set alarm on” for setting up the alarm. NO alarm will
be created if no option is selected in the “set alarm on” section.
The user can choose between the different alarm types. The alarm will be on for the dur-
ation of the test schedule. Alarm configuration can be seen in the alarm section.
The alarms are set based on test schedules, at the time these are defined. The alarms
will run while the test schedule is active. The alarm management screen displays the
currently running tests with alarms set up and their statuses. The alarm menus are
available under the “Administration” menu on the main options bar.
The alarm information contains: from probe name, to probe name, alarm type and dif-
ferent alarm settings.
To change or remove an alarm, the related test execution needs to be cancelled and re-
executed with new alarm settings, if required.
Selecting the alarm list under the Administration menu will display the alarm list in
reverse chronological order. There is no filtering available in the current version.
The alarm list shows the current status of alarms, and the related test data record ID
and schedule execution ID.
Mesh Management
This section describes the process of creating and managing a mesh test, using a group
of probes. Click Probes Management > Mesh Management to create and manage a
mesh test.
Add a mesh
1. Select the Add Mesh from the menu.
Notes:
l If you add more probes to a full mesh or hub-and-spoke mesh that was created
using the selected paths, the mesh is no longer a complete full mesh or hub and
spoke. You must manually update the mesh to be a full mesh or hub and spoke.
Appropriate messages are displayed in the top panel describing the type of mesh
configured (Full mesh or Hub and spokes).
l Meshes containing more than 200 selected paths can have performance
impact on server when using default MySQL configuration. It is essential to
optimize MySQL configuration as per instructions mentioned in Manage
MySQL Database.
l The manual path selection table is only supported for meshes that have
fewer than 50 selected probes. More than 50 selected probes will result in no
full mesh option, and only the buttons at the top can be used to configure the mesh.
9. To edit a mesh, select a mesh from the Mesh List menu and click the edit
icon. The following options will be available for editing:
l Add probes to mesh
l Remove probes from mesh
l Change mesh name
1. After you select a mesh from the Mesh List, click to edit it. The following
screen will be available.
2. Select the probe from the drop-down list, which you want to configure as hub.
3. Click Configure hub and spokes button. Automatically all paths between the
hub and the spoke will be created.
In the example below, Austin probe was selected as hub. Automatically all paths were
created when the Configure hub and spokes option was selected.
Remove a mesh
l Click to remove a mesh test.
l The probe coordinates must be available in order to display the probe on the
map. There is no GPS device installed on the probes.
l The Hawkeye server must have internet access in order to be able to access
Google Maps.
To start the map configuration process, select Probes Management, then select Maps
Management. The following window will be available:
On the left side, the Google map is displayed, on the right side the configuration area.
In the example below, the desired map is the map of United States, at a zoom level of 4.
Once the position is set and the Get Coordinates button is pressed, the coordinates and
zoom level will be automatically filled in the specific fields.
Once the position is set, the next step is to assign probes to it. There are two ways of
configuring probes on the map.
1. Configure the probes coordinates when a new probe is created. In this case, once the
probe is selected from the “Available for map” group, the probe will be automatically
placed at the configured position.
2. If no coordinates are configured during the new probe creation process, the Hawkeye
user has the option to manually place the probe at the desired location. To do this,
follow the next steps:
a. Select the probe from the “Available for map” group and add it to “Selected for map”
group;
b. Once the probe is in the “Selected for map” group, click on it. A marker will be auto-
matically dropped on the map.
Note: The marker will always be dropped in the same position for all probes
that are configured to be displayed on the map.
c. Use drag and drop function to move the probe to the desired position.
d. Once the marker is on the position, the coordinates of the probe will be available in
the Probe latitude and longitude fields.
Once all the probes are placed in the desired locations on the map, click the
“Add map” button to save it.
By default the map is available for all groups of users and it is assigned to all meshes.
If a map is configured for a particular group of users, the administrator has the option to
assign it only to that particular group. To do this, while configuring the map, select the
group to which the map will be assigned from the “Associated group” and save the map.
Note: If a particular group is selected as the Associated group, the map will be available
only for that particular group and for the system administrator.
To display a particular mesh on the map, select the mesh from the “Associated
Mesh” list.
You can build multiple maps, and all of them are listed in Maps Management.
This menu offers the possibility to edit and remove maps. Also, the user can quickly check
which probes are assigned to a map by expanding the desired map field.
To view the active status of a map, under Test Results select Map Status, then choose
the map to monitor results.
The desired map will be displayed, along with the results of the tests that are executed
between probes or inside the mesh.
To check details about the metrics measured on a particular path, click on the line that
connects those two points. A pop-up window will appear, showing the test details.
If the Hawkeye user wants to check a different map, they can do so by selecting it from the
drop-down list located at the bottom of the page.
Note: For system administrator users all maps are available. For group
administrator users only the maps assigned to their group will be available.
Floorplan Management
The Floorplan Management feature in Hawkeye allows you to configure floor plans and
locate XRPi Wi-Fi probes and access points (APs). This allows you to view the status
and information of the APs. You can locate multiple XRPi probes and APs on the same
floorplan. This feature supports the following formats: .png, .jpeg, and .bmp.
To build a floorplan for WiFi monitoring, select the image to use as the WiFi floorplan.
Once the floorplan has been pulled in, you can add XRPi WiFi probes and APs to the map.
First, each probe/AP is added to the available list; then, once selected in the add list, the
probe/AP can be moved on the floorplan to the correct location. Once a name has been
provided for the WiFi floorplan, the floorplan can be saved into the MySQL database.
The WiFi floorplan can be viewed in the “WiFi Dashboard” to see the status of WiFi
networks. In order to populate the floorplan in the WiFi Dashboard, the WiFi Inspect test
will need to be run (scheduled) on the XRPi WiFi probes placed on the WiFi floorplan.
Refer to “WiFi Dashboard” for more details. See below for adding XRPi probes and
APs to create the floorplan.
Hawkeye preferences
The Hawkeye application is configured through the Preferences screen.
As user
Changing settings
1) Change the desired settings in one screen (panel).
Most of the parameters that change GUI aspects require a logout/login to be taken into
account.
Refresh List Timer Value (120000): Refresh table timer (in milliseconds). This will allow
refreshing any table with the auto refresh option on every defined interval. It is recommended
to keep it above 10 seconds for optimized system performance.
Refresh Execution List Timer (20000): Refresh Execution table timer (in ms). This will allow
refreshing the execution table with the auto refresh option on every defined interval. It is
recommended to keep it above 10 seconds for optimized system performance. This refresh
setting is specific to test execution windows and overwrites the Refresh List Timer Value for
test execution related lists (see TestExecution.htm).
Allow Frequency Unit Configuration (0): Allows changing the unit for the test execution
interval. This makes it possible to schedule tests at intervals of less than 1 minute. The
default value is 0; set it to 1 so the scheduling interval unit can be changed.
Max records per Page (50,000): Used for reporting long lists and limiting the size of the
reports (avoids very long report creation).
Threshold for WiFi signal level in WiFi Dashboard (-50): Signal levels below this threshold
will be displayed as weak levels and levels above this threshold will be displayed as good
levels.
MAPS Settings
The following link provides details about the process of obtaining a Google Maps API key:
https://developers.google.com/maps/documentation/javascript/tutorial?hl=fr#api_key
SMTPDebug
Port (25, 465, or 587): Configure port 465 for SSL, port 587 for TLS, or port 25, the default
SMTP port, for no security.
First Destination Port (10200): Destination port for Hawkeye test types, start of range.
Default 10116.
First Source Port (10216): Source port for Hawkeye test types, start of range. Default 10216.
Use Skype4B Special Range (0): Use a special port range for Skype4B traffic tests.
Endpoint Autorestart Interval (minutes) (360): Interval for the endpoint auto restart time.
Alarm on Probe Health Fail (0): Send an alarm when a probe is not available (on every check).
Log Retention Minutes (240): How long the application logs will be retained. The application
logs are located in C:\Ixia\HawkeyePro\log_server.
Test Engine Log Retention Minutes (1440): Log retention for the underlying test engine; needs
to be more than 240 minutes. The engine logs are located in C:\Ixia\HawkeyePro\log_chariot.
Explore results
This is the first page that is displayed after the user logs in.
l See paths. This option displays a table with all the paths for the meshes.
l See results. This redirects the user to the Test Results list, with Module: Mesh.
l See metric report. This redirects you to the metric pie report page, Module: N2N.
l See trend report. This redirects you to the reporting engine page, Module: N2N.
l See results. This redirects you to the test results list with Module: N2N.
l See test types. This option displays a table showing the test types.
l See metric report. This redirects you to the metric pie report page, Module: Real Service.
l See trend report. This option redirects you to the reporting engine page, Module: Real Service.
l See results. This action redirects you to the test results list with Module: Real Service.
l See test types. This displays a table showing the test types.
If the user wants to return to the previous dashboard before selecting one of the options
for the tables, they can simply select the Main Dashboard option located under the Test
Results tab on the main bar.
WiFi Dashboard
The WiFi Dashboard displays the results of the Real Services WiFi Inspect test performed on
XRPi WiFi probes. The WiFi Dashboard displays the results when an XRPi WiFi endpoint or
a floorplan is selected.
Select an XRPi WiFi endpoint to display four panes with information on the selected
endpoint. The list of all Access Points (APs) in range and the metrics of each selected AP in
the list are displayed. WiFi charts for 2.4 GHz and 5 GHz show the spread of available APs
over the channel IDs, and provide an indication of possible interference among APs using
the same channel IDs.
The WiFi Dashboard enables you to select a WiFi floorplan. When the real services test
WiFi Inspect is executed or scheduled on the XRPi WiFi probes in the floorplan, the floorplan
is updated with the results. The WiFi Dashboard contains instantaneous data, not historical
data. If the WiFi router that is connected to the XRPi WiFi endpoint makes a configuration
change (such as changing the channel number), this is not automatically picked up by the
XRPi WiFi endpoint and it can cause wlan0 to fail.
The color of an AP indicates its status. In Preferences you can specify the Threshold
for WiFi signal level in WiFi Dashboard so that if an AP signal strength is above
this level (RSSI level in dBm) the AP is displayed in green, and if the signal strength is
below this threshold it is displayed in red. If an AP displayed on the floorplan is not
detected by the XRPi WiFi endpoint, it is displayed in black. The floorplan is updated with
the results of the last WiFi Inspect test.
Metric Status
This screen selects the results per metric and provides Average/Maximum/Minimum
values and the standard deviation.
The filters at the bottom decide what data is included in the result sample.
If you pick a specific metric and click on "show trend report", you are redirected to the
Metric Graph screen showing the results.
Metric Graph
This specific screen allows you to build and customize reports.
- Time interval: decides the time interval selected for browsing the data.
- Graph Type:
l Trend graph: aggregates all tests and shows results for a specific metric, test type, and pair.
l Trend graph per individual path: shows the same result with a specific line per pair
(node to, node from).
l Repartition graph: displays how the different measurements are distributed with respect
to the number of measurements that have been made.
l Site repartition graph: shows the repartition but spreads the data based on the site being
tested (node from).
- Granularity: decides at what scale the graph aggregates the data. For example, if the
selected granularity is 1 hour, one point per hour is displayed on the graph with the average
value over the hour. It is possible to use the range option to display not only the average but
also the min and max measures during each selected interval.
- Display line:
- Threshold: the user can select to show the threshold in the graph. Note: if the threshold is
out of range in the graph it will not show. The auto-adapted scale in the graph does NOT
take the threshold into account.
- Range: the range display can be set to "no display" (only the average value per interval is
displayed) or "display" (the average value plus the min/max range is displayed).
Path Discovery
Summary
Hawkeye Path Discovery is a methodology to provide visibility into the network topology
from a location with a Hawkeye endpoint to any remote location.
The discovery relies on standard protocol packets being sent to the network to find out
about the nodes being traversed. There is therefore no need for specific software or
hardware to be deployed on the path or at the remote location.
Path Discovery allows you to discover application-specific paths: by configuring the
protocol and ports used by an application, you can discover the route taken by its packets to
reach the destination.
Path discovery leverages Hawkeye test execution framework to allow historical data col-
lection and perform analysis in real time or with retrospective views.
1. Path discovery: packets are sent using the TCP, UDP or ICMP protocols with different
TTL tags, and responses from network elements are captured to provide information
about the traversed nodes. This sequence is repeated several times.
To set it up, go to the Test Execution Real Service screen and select your endpoint and the
Path Discovery test type.
Destination server: the IP or URL of the remote server that needs to be reached to discover
the path. It can be hosted in a private network, a private or public cloud, the public internet,
etc.
Traceroute counts: the number of path discovery attempts executed to find out about the
path. This number must be in the 1-20 range.
Maximum number of hops: the traceroutes are attempted until the destination is reached or
until a maximum number of hops between source and destination.
Protocol: protocol used for path discovery. The default is TCP which is recommended as
it is the most likely to provide accurate results. Other options available are UDP and
ICMP. UDP would function similar to TCP but using a different destination port and
without SSL, while ICMP is going to issue pings with TTL. It is important to know that
some nodes on the path may react differently to these protocols: some routers/switches
are blocked for ICMP responses for example and allow TCP or UDP, so the discovery may
provide more information with one protocol or the other.
Destination port: this is used if protocol is TCP or UDP. The discovery messages will be
sent with destination port set to this value and using a random source port. This can
again be important when determining routing/COS management details on the path for
specific protocol/port. Using the protocol and destination port that a specific application
would use allows you to find out how the associated packets would be handled by the
network.
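For illustration only, the combination of protocol and destination port is conceptually similar to running a TCP traceroute manually from a Linux shell (this is not how Hawkeye itself executes the test; the destination and port below are placeholders, and the -T option usually requires root privileges):
traceroute -T -p 443 example.com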
Timeout (sec): this is the maximum time the path discovery will be allowed to run – in
case the overall process exceeds this maximum allowed time the test will error out and
free the test execution queue.
The different nodes detected in the path discovery are shown with squares.
When there were messages sent and no responses from the network, an unknown
square is shown.
This typically happens when packets are dropped on their way back or forth, or when the
node that is on the path is configured so that it does not advertise the drop to source.
The links between the nodes are shown with different thickness levels to illustrate the
distribution between paths.
As shown in the example below, it is also possible to investigate the packet drops or
Round Trip Time (RTT) detected in the path by configuring a threshold.
Note that the RTT and packet loss are displayed based on ICMP messages being sent to
the nodes. The metric is displayed only if at least one ICMP answer is received. If 100% of
the packets are dropped, the assumption is that the node is blocked for ICMP responses,
and therefore no data is displayed.
The ICMP packets are 100 bytes long and 100 of them are sent with 20ms interval for col-
lecting the information. The base ICMP sequence is the starting ICMP sequence number
that gets incremented with each packet.
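For illustration only, a roughly equivalent manual probe of a single node would look like the following (placeholder IP address; sub-second intervals generally require root privileges, and this is not how Hawkeye itself collects the data):
ping -c 100 -i 0.02 -s 100 192.0.2.1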
During the path discovery, information about the Autonomous System (AS) to which each
node belongs is collected. An AS belongs to an organization that has routers on the public
internet. It identifies the service providers or autonomous entities being traversed to reach
the destination.
When the Hawkeye server has access to the internet, it can collect information about the
organization to which each AS code is registered.
The next figure illustrates a path discovery showing the changes in QOS. When the Path
discovery test was configured the required QOS (DSCP) value was selected. As the pack-
ets pass through each network node the specific node can change the QOS setting for
each packet. The figure below shows when the QOS changes box is selected that the
user has the ability to select a node to view its details which will include the QOS value
assigned to all packets being forwarded to the next network node.
By clicking on any nodes on the graph, full information about the results collected will
be displayed:
The traceroute button will display in a table all the information collected for all steps in
the path – see example below:
This will show the user the different steps and what nodes were involved. The ICMP
extensions contain feedback on MPLS network labelling and protocol information.
1. Reduce the traceroute count in the configuration of the Path Discovery test. This will
reduce the number of scans and may potentially reduce the number of available routes
identified to the destination.
2. Reduce the maximum number of hops in the configuration of the Path Discovery test.
3. Increase the test timeout in the configuration of the Path Discovery test. If this is
increased above 3 minutes, refer to Parameters affecting Performance on page 80 for
information on how to increase overall timeouts for Real Service tests.
Some nodes will respond to the first ICMP message and then not respond to
any other ICMP messages from the same source for a certain time
period; this does not reflect a problem.
The results page contains all the details (such as, date and time of execution, which
user executed the test, test type, from/to probe IP/name, test duration, status, reason)
of all the tests that were executed.
The test results can be found by selecting the Test results list, located under the Test
Results option on the main options bar.
By default, the user will be able to see the results of the tests that were run during the last
day. If the user wants to see the results of all the tests that were run earlier, they can
use the full range as the value for the time interval parameter under the time selection tab
on the left side. The user can select one of the options (full range, last hour, last day, last
week, today, range) for the time interval parameter based on the requirement.
Test results of other users can be viewed depending on the admin level privileges of the
user.
The table below explains the test results that a user can see depending on the admin
level of the user and option selected for result access rights:
System Administrator: can view the test results of the tests executed by all users,
regardless of the result access rights option selected.
Group Administrator: depending on the result access rights option selected, can view
either the test results of the tests executed by the users belonging to that particular
group, or the results of the tests executed with one (real service) or both (N2N) probes
belonging to the user group.
Group Viewer: depending on the result access rights option selected, can view either the
test results of the tests executed by the users belonging to that particular group, or the
results of the tests executed with one (real service) or both (N2N) probes belonging to
the user group.
User: depending on the result access rights option selected, can view either the test
results of the tests executed by the user itself, or the results of the tests executed with
one (real service) or both (N2N) probes belonging to the user group.
Full System Viewer: can view the test results of the tests executed by all users,
regardless of the result access rights option selected. This right allows only viewing the
results.
Filters
Hawkeye allows the user to filter the test results, as shown below:
For example: If the user wants to see the test results of the tests that were run between
the Calabasas node and the Austin node, then the user can select “Calabasas” as the
value for the NODE FROM field and “Austin” value for the NODE TO field, as shown
above.
If the user wants to see only the results that passed, then the user can select Passed
value from the status TDR field.
Click on the RESET FILTERS button to clear the filters. After resetting, all the values of
the fields will be set to “all” and all the test results will be displayed.
Click on the refresh table button, at the bottom of the page, to update the test results of
the tests that were in running stage.
Note: The test results of the tests that were in running stage will be updated only
if the tests are complete.
To see the details of a test, click on the “+” icon, as shown below:
By selecting the “Test results metrics” under the “Test results” option on the main
option bar, the user has access to detailed metrics report, as shown in the picture
below:
By selecting the “Test results metrics average” under the “Test results” option on the
main option bar, the user has access to average metrics report, as shown in the picture
below:
Users can delete a test result by clicking on the delete icon located next to the Run
ID column.
Map Status
The Map Status feature in Hawkeye allows you to see the status of endpoints on a
global map. This allows you to see in real time the status of the endpoints. The map is
updated based on test results. If endpoints are running tests successfully, the line
between endpoints is displayed in green, but if tests start experiencing errors, the line
between the probes is displayed in red.
Hawkeye Reports
Per path metric matrix: displays metrics per path with average/min/max, per test type and
metric. It is particularly convenient for investigating Mesh test results. Options: Selected
metrics.
Per path Status matrix: results are presented as color codes in a matrix for meshes. This is
a simpler, more visual way to display the results than the per path metric matrix report.
Options: Selected metrics.
Paths Error/Fail Summary: this report displays the number of errors reported and the
number of failures reported, with the % of total tests. Options: Selected metrics (not taken
into account).
Per path trend: does a KPI trend report per path in the selection. Options: Selected metrics,
Graph type, Graph granularity, Display line type, Display threshold, Display range.
Site report detailed: per site detailed report, with a per path, per site report. Options:
Selected metrics, Graph type, Graph granularity, Display line type, Display threshold,
Display range.
Select Options in the report: depending on the report selected, there will be a set of
options available.
Truncated Reports
Running reports over selections that are too large can create memory allocation issues.
The report size is therefore automatically limited, which can result in a truncated report
warning being displayed:
If the Hawkeye server exceeds the maximum allowed time to create a report, the following
error message is generated.
Create a report
To create a report, select the “Reporting” menu located on the menu bar.
- Last hour/day/week;
- Today;
- Full range;
Once the report criteria have been defined, select the view report option to view the report
and access the data in the result report.
- Export to html;
- Export to pdf;
- Export to csv;
Save a report
The reports can be viewed and saved on the server by selecting the “Save/Schedule” but-
ton. Reports will be displayed on screen and saved on the server disk. The user shall assign
a report name (in the report name text box) and a report format (HTML, CSV or PDF). Refer
to the report type list for the availability of report formats per report type.
The reports are saved in the “Saved Reports” area under the Reporting menu on the main
options bar.
When saving a report on disk, the user has the option to send it by email by selecting
this option:
Note: The email settings must have been configured correctly beforehand by the
administrator.
Schedule a report
Report interval – Hours: defines the interval between report creations. If the report
interval is left at 0, the report will be created only one time.
To select the day and time, change the time (24-hour selection) and click on the selected
date.
By selecting the scheduled reports list in the Reporting menu, the user can see the reports
currently scheduled and pause, restart or delete these schedules.
Node to Node
Node to Node testing topology
The Node to Node (N2N) testing is based upon testing between 2 probes. The active
probes are deployed in the network at different key locations and controlled by the
Hawkeye test controller.
Hawkeye controls tests from a central location. Probes are deployed in the different
network locations and run active traffic between themselves to generate traffic flows.
Node to Node testing allows full control of the tested path, as traffic is generated
between nodes. This allows generating predictable traffic to the receiving side, generating
bidirectional traffic, and setting up several classes of service with full control of the traffic sent.
The Test Totals are divided into three major sections that correspond to the three basic
types of measurements taken in a Hawkeye test. Following are explanations of how
Hawkeye calculates throughput, transaction rate, and response time.
Throughput
See "What is Goodput?" below.
Measured time
For pairs that are bidirectional (TCP based application type of traffic), throughput is
aggregated in 2 directions:
In all calculations, elapsed time is the elapsed time of the longest-running pair in the
test. The measured time is the sum, in seconds, of all the timing record durations
returned for the endpoint pair.
- Throughput_Units: the current throughput units value, in bytes per second. For
example, if the throughput units are KBps, 1024 is the Throughput_Units value. In this
example, the throughput units number is shown in the column heading as Mbps, which is
125,000 bytes per second (that is, 1,000,000 bits divided by 8 bits per byte). See Raw
Data Totals.
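As a hypothetical worked example of this unit conversion (the exact byte counter Hawkeye uses is not restated here): if an endpoint pair returns timing records totalling 10 seconds of measured time during which 125,000,000 bytes were transferred, the value displayed with Mbps as the throughput unit is 125,000,000 / 10 / 125,000 = 100 Mbps.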
Transaction Rate
The calculations are shown in transactions per second. This rate is calculated as fol-
lows:
Transaction_Count / Measured_Time
Response Time
The response time is the inverse of the transaction rate. The calculations are shown in
seconds per transaction. This value is calculated as follows:
Measured_Time / Transaction_Count
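For example, applying the two formulas above: a pair that completes 600 transactions over a measured time of 120 seconds reports a transaction rate of 600 / 120 = 5 transactions per second and a response time of 120 / 600 = 0.2 seconds per transaction.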
Lost Data
The lost data is the difference between the number of bytes sent by Endpoint 1 and the
number of bytes Endpoint 2 actually received. Lost data is only calculated when a pair is
running a streaming script (for example, a VoIP or video test). Only payload data is
included in the calculations.
Jitter
The jitter (delay variation) maximum reveals when the greatest variation in delay seen
for a timing record occurred during the test.
When a datagram is sent, the sender gives it a timestamp. When it is received, the
receiver adds another timestamp. These two timestamps are used to calculate the data-
gram's transit time. If the transit times for datagrams within the same test are different,
the test contains jitter. In a video application, it manifests itself as a flickering image,
while in a telephone call, its effect may be similar to the effect of packet loss; some
words may be missing or garbled.
The amount of jitter in a test depends on the degree of difference between the data-
grams' transit times. If the transit time for all datagrams is the same (no matter how
long it took for the datagrams to arrive), the test contains no jitter. If the transit times
differ slightly, the test contains some jitter. Jitter values in excess of 50 ms probably
indicate poor call quality.
Jitter statistics let you see a short-term measurement of network congestion and can
also show the effects of queuing within the network. The jitter value is reset for each tim-
ing record, so the jitter statistic for a specific timing record shows the jitter for that tim-
ing record only.
Almost all data transfers experience jitter, but it isn't necessarily a problem. Jitter
occurs in several patterns. If the delay time for each datagram steadily increases, jitter
values increase and the throughput decreases. But it is also possible for jitter to
increase while throughput remains constant. In this case, the delay variation fluctuates
widely, which could mean poor performance for delay-sensitive applications. Finally,
"bursty" jitter—occurring in bursts of datagrams—has the most significant effect on call
quality in voice over IP transmissions. If jitter occurs in bursts, it can lead to data loss
and a degradation of call clarity. Hawkeye measures this "bursty" quality; refer to Max
loss burst.
When calculated according to the specification for RTP, jitter (J) is defined as the mean
deviation (smoothed absolute value) of the difference (D) in datagram spacing at the
receiver compared to the sender for a pair of datagrams. As shown below, this is equi-
valent to the difference in the "relative transit time" for the two datagrams; the relative
transit time is the difference between a datagram's RTP timestamp and the receiver's
clock at the time of arrival, measured in the same units. If Si is the RTP timestamp from
datagram i, and Ri is the time of arrival in RTP timestamp units for datagram i, then for
two datagrams i and j, D may be expressed as follows:
D(i,j) = (Rj - Ri) - (Sj - Si) = (Rj - Sj) - (Ri - Si)
The inter-arrival jitter is the jitter calculated continuously as each datagram i is
received from the source. The jitter is calculated according to the formula defined in RFC
1889:
J = J + (|D(i-1,i)| - J) / 16
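A minimal sketch of this RFC 1889 inter-arrival jitter estimator, assuming two hypothetical lists of receive times and sender RTP timestamps expressed in the same units:

def interarrival_jitter(arrivals, timestamps):
    # arrivals[i]  = Ri, receive time of datagram i
    # timestamps[i] = Si, sender RTP timestamp of datagram i
    j = 0.0
    for i in range(1, len(arrivals)):
        # D(i-1, i) = (Ri - Ri-1) - (Si - Si-1)
        d = (arrivals[i] - arrivals[i - 1]) - (timestamps[i] - timestamps[i - 1])
        j += (abs(d) - j) / 16.0   # J = J + (|D| - J) / 16
    return j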
Delay
Endpoints can calculate delay statistics in a single direction for VoIP pairs and for pairs
using the RTP protocol. Delay is calculated for each timing record in a test. These meas-
urements are useful in testing time-sensitive applications because they can help to pin-
point sources of delay. One-way or network delay excludes sources of delay apart from
the "wire" itself, while end-to-end delay includes all sources of delay, such as the codec
used, jitter buffers, and fixed delays.
MOS
The MOS gives an indication of the relative quality of each call made during a test on your network. Ixia
uses a modified version of the ITU G.107 standard E-Model equation to calculate a Mean
Opinion Score (MOS) estimate for each endpoint pair.
Hawkeye modifies the E-model slightly and uses the following factors to calculate the R-
value and the MOS estimate:
•One-Way Delay
Similar to the propagation delay; only the delay factors associated with the network
(the "wire") itself are included. Hawkeye measures this by synchronizing the endpoints'
timers and determining delay in a single direction. Refer to One-Way Delay for more
information.
•End-to-End Delay
•Data Loss
Total number of datagrams lost. When a datagram is lost, you can lose an entire syl-
lable, and the more datagrams that are lost consecutively, the more the clarity suffers.
Hawkeye factors in lost data and also includes the amount of consecutive datagram loss
that was measured. Refer to The Lost Data Tab for more information. In addition, if you
have packet loss concealment (PLC) enabled for the G.711 codecs, a delay factor asso-
ciated with PLC buffering is added. Refer to Codec Types for information about PLC.
Number of datagrams lost due to jitter buffer overruns and underruns. Refer to Jitter
Buffers for more information.
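For reference, the standard (unmodified) ITU G.107 mapping from an R-value to a MOS estimate is sketched below; Hawkeye's modified E-model may differ in detail, so treat this only as an illustration of the relationship.

def r_to_mos(r):
    # Standard G.107 conversion from transmission rating factor R to MOS.
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# Example: an R-value of 93 (a clean G.711 call) maps to roughly MOS 4.4.
print(round(r_to_mos(93), 2))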
A MOS of 5 is excellent; a MOS of 1 is unacceptably bad. The following table (taken from
ITU G.107) summarizes the relationship between the MOS and user satisfaction:
The Media Delivery Index (MDI) is a proposed metric, defined in RFC 4445, that can be
used as a diagnostic tool or a quality indicator for monitoring the delivery of streaming
media on a network. It specifically focuses on the measurement of packet jitter and
packet loss in networks carrying streaming media, such as MPEG video, Voice over IP,
and other information that is sensitive to arrival time and media loss. Hawkeye collects
and reports MDI statistics for all Video Pairs.
The two major factors impacting the quality of a video stream transmitted over an IP net-
work are packet loss and packet jitter. Packet loss can be caused by many factors,
including data corruption, insufficient bandwidth, and out-of-order packet delivery. Any
packet loss will adversely affect the quality of the delivered video. Packet jitter causes
buffer overflows and underflows, either of which will cause unacceptable time distortions
in the video streams.
Packet Jitter
Packet jitter is a measure of the variation in arrival rates between individual packets in
a media stream. Streaming media requires a consistent and predictable time delay
between successive packets as they are received at a destination node. Variations in
this interpacket delay cause packet jitter. If packets are delayed by the network, some
packets will arrive in bursts with diminished interpacket delays while other packets will
arrive with longer interpacket delays. Therefore, the node at the receiving end (the
decoder) must buffer the video data to ensure that it can be displayed at its nominal
rate. The size of the buffer determines the maximum amount of packet jitter that can be
accommodated without experiencing buffer underrun or overrun.
Delay Factor
The MDI consists of two components: the Delay Factor (DF) and the Media Loss Rate
(MLR).
The Delay Factor (DF) indicates the maximum difference between the arrival of stream-
ing data and the drain of that data, as measured at the end of each media stream
packet. The drain rate refers to the payload media rate. For example, for a typical 3.75
Mbps MPEG video stream, the drain rate is 3.75 Mbps—the rate at which the payload is
displayed at a decoding node. The DF is computed at the time that each packet arrives
at the Hawkeye endpoint, and is recorded for each timing record. The default timing
record duration is three seconds.
The DF is computed at the arrival time of each packet at the point of measurement. For
Hawkeye, the point of measurement is Endpoint 2. It is recorded at set time intervals,
typically about one second. For Hawkeye, it is measured when each timing record is gen-
erated.
DF = (max(X) - min(X)) / media rate
where X is the flow rate imbalance (the difference between the streaming data that has arrived and the data drained at the nominal media rate), media rate is expressed in bytes per second, and max(X) and min(X) are the maximum and minimum values measured in an interval.
The largest difference is recorded for all intervals in a measurement period. That is, DF
is the maximum observed value of the flow rate imbalance over the calculated interval.
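A minimal sketch, under the assumptions above, of how the Delay Factor could be computed for one interval; the packet arrival list, packet sizes and media_rate value are hypothetical, and Hawkeye's own implementation may differ:

def delay_factor(arrival_times, packet_sizes, media_rate, start_time):
    # Sample the flow rate imbalance X at each packet arrival:
    # bytes received so far minus bytes drained at the nominal media rate.
    received = 0.0
    imbalance = []
    for t, size in zip(arrival_times, packet_sizes):
        received += size
        drained = (t - start_time) * media_rate
        imbalance.append(received - drained)
    # DF for the interval, expressed in seconds of buffering.
    return (max(imbalance) - min(imbalance)) / media_rate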
The second component in the MDI is the Media Loss Rate (MLR).
The MLR is the count of lost or out-of-order media packets, measured over a selected time
interval (such as three seconds). That is, the number of lost or out-of-order media packets
divided by the length of the interval, expressed in packets per second.
There may be zero or more streaming packets in a single IP packet. For example, it is
common to carry seven 188 byte MPEG Transport Stream packets in an IP packet. In
such a case, loss of a single IP packet would result in seven lost MPEG Transport Stream
packets. Counting out-of-order packets is important because many streaming media
applications do not attempt to reorder packets that are received out of order.
The Media Delivery Index (MDI) combines the Delay Factor (DF) and the Media Loss
Rate (MLR) values for presentation, and is expressed as:
DF:MLR
The DF provides an indication of the size of the buffer needed to accommodate the packet
jitter in the network. The MLR gives an indication of the extent of media loss as the
streaming media traverses the network.
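A hypothetical example using the definitions above: if, over a three-second interval, 6 media packets are lost or arrive out of order, the MLR for that interval is 6 / 3 = 2 packets per second; combined with a measured DF of 50 ms, the MDI would be reported as 50:2.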
What is Goodput ?
In computer networks, goodput is the application level throughput, i.e. the number of
useful information bits delivered by the network to a certain destination per unit of time.
The amount of data considered excludes protocol overhead bits as well as retransmitted
data packets. This is related to the amount of time from the first bit of the first packet
sent (or delivered) until the last bit of the last packet is delivered, see below.
For example, if a file is transferred, the goodput that the user experiences corresponds
to the file size in bits divided by the file transfer time. The goodput is always lower than
the throughput (the gross bit rate that is transferred physically), which generally is
lower than network access connection speed (the channel capacity or bandwidth).
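As a worked illustration with hypothetical figures: a 1 GB file (8,000 Mbit of useful data) transferred in 100 seconds gives a goodput of 8,000 / 100 = 80 Mbps, even though the throughput measured on the wire, which also counts protocol headers and any retransmitted segments, would be somewhat higher.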
Protocol overhead: typically, transport layer, network layer and sometimes data link
layer protocol overhead is included in the throughput, but is excluded from the goodput.
Transport layer flow control and congestion avoidance, for example TCP slow start, may
cause a lower goodput than the maximum throughput.
TCP vs UDP:
- The TCP stack adapts to the available bandwidth; UDP sends packets on a best effort basis – the sender is a "bit blaster".
- The TCP stack implementation and configuration have an important impact; the UDP stack is minimal (packet building).
- TCP throughput is QoE: it is what the user experiences. UDP throughput is QoS: it is what the network can deliver.
A test called COS qualification offers pairs with TCP and pairs with UDP (RTP) and is
designed to test different classes of service:
TCP is the most widely used protocol for applications transiting over the Internet. Given
all factors influencing its performance, it is very important to make sure of what one is
trying to prove by doing TCP performance testing.
It thus simulates the core file transfer transaction performed by many applications.
TCP auto-adapts to the available bandwidth and is therefore a good indicator of the band-
width available to an application.
UDP traffic generation is however dangerous, as it won't adapt to the available bandwidth and
is therefore likely to create router congestion and affect other traffic.
Ixia performance endpoints are available for the most common OSs and OS versions, which
allows controlling which version is used.
It is also very important to understand that the stack parameters of the OS can have a big
impact on the results.
Define what the target performance is: what do you consider
a good result?
Depending on the characteristics of the line you are trying to test, it is important to
define a target maximum throughput that is proving a good result for a single TCP
stream.
This will help define how the performance endpoint has to be configured to use the stack.
For example, an FTP server on a 1G local LAN can be expected to reach
500M throughput or more, but if a 1G line is used between 2 different remote sites with
long delays, is this 500M throughput still a valid target?
Hawkeye script libraries provide a very wide range of common applications that can be
complemented by specific built applications to help understand the behavior over a spe-
cific network.
It is also important to compare a standard TCP throughput test with a
specific application test over TCP. Hawkeye allows running both and under-
standing what could explain the difference in expected performance.
These parameters are very important if one's target is to understand the best possible
TCP performance a stream can get. As changing these parameters can greatly influence
the stack optimization, tuning them will have some performance impact.
Ixia performance endpoints implement these parameters. A typical test is to run a
"default settings" TCP test to see what the default stack is capable of, and then adjust
these parameters based on what is known of the environment (typically the RTT) to find
out the maximum achievable performance.
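As a rough illustration of why these parameters matter (a rule-of-thumb bandwidth-delay-product calculation, not a Hawkeye formula): sustaining 500 Mbps on a path with a 40 ms round-trip time requires roughly 500,000,000 bits/s x 0.040 s / 8 = 2.5 MB of data in flight, far above a default 64 KB window, which on that path would cap a single stream at about 65,536 x 8 / 0.040, i.e. roughly 13 Mbps.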
WARNING: If you leave the OS parameters at their defaults and then manually force Hawkeye
to use specific buffer sizes, the values you are pushing may be over the limits configured
on the OS and therefore not taken into account.
Therefore, if link capacity for TCP is the target, it is often more efficient to test several
TCP streams in parallel, which would usually result in better aggregated performance
than a single TCP stream.
One important point: free speed measurement tools available on the Internet
usually use several concurrent TCP streams to assess line capacity.
Description: One TCP stream - generates throughput with defined bit rate - from E1 to E2
Purpose: Verify that a single TCP stream in one direction is going through
Available Options: Generated Bitrate (layer 4 bit rate) that is pushed by the sender (node from).
DSCP setting (QOS)
Advanced Information: This test will optimize the use of a TCP throughput pair from E1 to E2 based on the requested throughput. Test samples will be computed according to the requested bit rate so that there should be approximately one measurement per second.
Results interpretation: The result in throughput is displayed at layer 4 level (with ethernet, IP and TCP headers, and includes potential retransmissions). The TCP result in one pair is very sensitive to delay or packet loss. Some results can show an important discrepancy between the expected line rate and the result because of the TCP protocol's high dependency on line characteristics. It is recommended to complement this test with a UDP or TCP multistream test for further pipe capacity assessment.
Potential Errors: If the configured bitrate is set too high compared to the expected line capacity, the test might not be able to accomplish the first measurement. This would result in an error with a time out ("Some pairs in tests didn't complete"). If results can't be collected, check that the port in use is in range (see "preferences/Traffic Port Management/Destination Port Conf Option" and "preferences/Traffic Port Management/First Destination Port") so that the correct destination port for test traffic is used.
Description: Multiple TCP streams - generates throughput with defined bit rate in multiple streams from E1 to E2
Available Options: Generated Bitrate (layer 4 bit rate) that is pushed by the sender (node from).
DSCP setting (QOS)
Advanced Information: This test will optimize the use of TCP throughput pairs from E1 to E2 based on the requested throughput, shared between the requested number of TCP pairs. The total throughput requested will be equally distributed over the number of pairs. The timing record per pair will be optimized to be one timing record per second.
Potential Errors: If the configured bitrate is set too high compared to the expected line capacity, the test might not be able to accomplish the first measurement. This would result in an error with a time out ("Some pairs in tests didn't complete"). If results can't be collected, check that the port in use is in range (see "preferences/Traffic Port Management/Destination Port Conf Option" and "preferences/Traffic Port Management/First Destination Port") so that the correct destination port for test traffic is used.
If the number of pairs is greater than the maximum number of pairs available per license, the test can be stuck in the queue.
The maximum number of simultaneous pairs (N value) each individual test can perform is also controlled from an advanced setting. Only the Administrator can modify this setting:
Description: One TCP stream - generates throughput with defined packet size and bit rate - from E2 to E1
Available Options:
Metrics:
Advanced Information:
Results interpretation:
Potential Errors:
Available Options:
Metrics:
Advanced Information:
Results interpretation:
Potential Errors:
Available Options: Generated Bitrate (layer 4 bit rate) that is pushed by the sender (node from). Average Throughput from->to - Generated Bitrate to->from.
DSCP setting (QOS)
Advanced Information: This test combines a TCP stream from E1 to E2 and from E2 to E1. The timing record per pair will be optimized to be one timing record per second.
Description: One TCP stream - generates throughput with defined bit rate - from E1 to E2, with advanced configurable settings
Advanced Information: This test allows full control of the TCP stream. It requires an advanced understanding of the TCP architecture and testing engine.
Errors:
Description: One TCP stream - generates throughput with defined bit rate - from E1 to E2, with advanced configurable settings
Purpose: TCP throughput between E1 and E2. The optimized window size will be calculated according to delay, throughput and probe type.
Advanced Information: This test will allow optimization of the TCP window size according to the value the user entered for the one-way delay expected on the path and the file size. The calculations are performed for a single TCP stream. The test will maximize the TCP throughput through the use of a calculated window size. This calculated window size uses a TCP stack calculation provided by the OS (XR endpoints would be Linux) to maximize the TCP throughput obtained during the test. The file size and one-way delay are two key parameters used to calculate the TCP receive window. The TCP receive window size will impact total throughput. Other values such as network delay jitter are also considered to adjust the window size. The receive TCP window size impacts flow control by adjusting the frequency of ACKs. The size of the window determines the number of bytes that will be transmitted without an ACK. Note that the TCP process at each end of the connection can dynamically adjust the sliding window to avoid network congestion; a larger sliding window enables higher throughput. Note that XRPi and XR2000 support RFC 1323 (TCP window scaling) by default, which increases the TCP window size beyond 64 KB. However, not all software endpoints may support RFC 1323. Note that to maximize total throughput, a large file size and a small network delay are required.
Errors:
For powerful enough devices (XR2000, servers, high-range laptops), it should be possible
to generate TCP streams up to line rate (resulting in a goodput calculation of 960 to 970
Mbps depending on the transport layer).
Less powerful endpoints will typically provide less capability, which can be measured on a
case-by-case basis.
SEND
Filesize: how much data is sent before waiting for an acknowledgement. Increasing it will
result in more data being sent at once to the stack, and increase the length of the
timing record (how long between each performance record).
The recommendation for TCP performance testing is to target 1 timing record per second, so
make sure that, based on the expected throughput, it takes at least 1 second to download the
filesize. Using up to 2 or even 5 seconds sometimes helps getting the best performance.
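For example, using the sizing rule given later in this guide (filesize = throughput (kbps) * 1000 / 8): to get roughly one timing record per second at an expected 50 Mbps (50,000 kbps), set the filesize to about 50,000 * 1000 / 8 = 6,250,000 bytes; double it to around 12.5 MB to target 2-second timing records.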
Send and Receive buffer size: determines how much data will be sent to the TCP
stack for processing for each SEND and RECEIVE operation. They are unrelated to the TCP
sliding window service.
The recommendation for the endpoint send and receive buffer size is to
make sure they are not too small, so that there are not too many cycles. They should
also not be in excess of the CONNECTION send and receive buffers (see below) that are
used to configure the TCP stack. Typical recommended values are 16 kbytes, 32 kbytes or
64 kbytes. Unless misconfigured, this should not have a great impact on TCP per-
formance.
Send data rate: determines the maximum data rate the Performance Endpoint will try
to push through the stack when sending. It can be set to unlimited (in which case data will
be sent at the maximum rate) or to any value in kbit or kbytes.
The recommendation for the endpoint send data rate is to use unlimited when trying to push
as much as possible. However, a side effect of using the unlimited value is that the stack will
be used at its maximum and can be overloaded by the performance endpoint throwing data in
bursts or spikes. When targeting a defined maximum throughput, it is a good idea to use
the defined throughput so that the TCP stack is used in a more even and stable way.
When pushing the stack and/or network to the limit, it is
always better to do a binary search on the send data rate to define the most stable
value given the network conditions and the endpoint send and receive OS performance.
Find below optimization elements for the 2 main Operating Systems (Windows and
Linux).
The overall TCP buffer memory limits:
$ cat /proc/sys/net/ipv4/tcp_mem
The default and maximum amount for the receive socket memory:
$ cat /proc/sys/net/core/rmem_default
$ cat /proc/sys/net/core/rmem_max
The default and maximum amount for the send socket memory:
$ cat /proc/sys/net/core/wmem_default
$ cat /proc/sys/net/core/wmem_max
The maximum amount of option memory buffers:
$ cat /proc/sys/net/core/optmem_max
Recommended changes
Set the max OS send buffer size (wmem) and receive buffer size (rmem) to 16 MB for
queues on all protocols. In other words set the amount of memory that is allocated for
each TCP socket when it is opened or created while transferring files.
CAUTION: The default value of rmem_max and wmem_max is about 128 KB in most
Linux distributions, which may be enough for a low-latency general purpose network
environment or for apps such as DNS / Web server. However, if the latency is large, the
default size might be too small. Please note that the following settings are going to increase
memory usage on your server.
Turn on window scaling, which can be used as an option to enlarge the transfer window:
By default, TCP saves various connection metrics in the route cache when the con-
nection closes, so that connections established in the near future can use these to set
initial conditions. Usually, this increases overall performance, but may sometimes
cause performance degradation. If set, TCP will not cache metrics on closing con-
nections. The benefits of activating or disabling caching might depend on the expected test.
Set the maximum number of packets queued on the INPUT side when the interface
receives packets faster than the kernel can process them.
The commands above configure the parameters and make them persistent in Linux. To
reload the changes:
# sysctl -p
http://msdn.microsoft.com/en-us/library/cc558565(v=bts.10).aspx
This article contains all the information from Microsoft on optimizing the TCP stack for
performance management.
Traffic will be generated from one side to the other at the defined rate and packet size.
The data is sent through the operating system stack with the following packet sizes:
IP header: 20 bytes
All indications in Hawkeye are at layer 4 (inside the UDP header).
Even though the traffic generation is IP to IP inside UDP packets (layer 4), the
result of the traffic generation can be considered as traffic blasting (also known as RFC 2544
testing).
Traffic generation from a software-based system can create some problems at high rates
due to traffic generation burstiness.
That means that traffic sent from an Ixia endpoint or probe can be generated with the
average bitrate as requested, but with a non-linear profile.
The following figure illustrates traffic generated in bursts - typical behavior
for a software-based traffic generation tool like Hawkeye.
Network equipment like traffic shapers or rate controllers would drop any traffic
going at rates higher than 100 Mbps in the above example, resulting in a problematic
measurement of the available bit rate.
Hawkeye implements an algorithm that can be used to dramatically reduce this jitter
and its side effects when measuring UDP flows in networks.
This optimization algorithm uses very precise timers to reduce jitter between sent
packets; when this algorithm is enabled, the datagrams are sent by Endpoint 1 at much
more precise intervals than when the standard algorithm is in effect.
Set sendlowjitter to 1.
NOTE: some endpoints do not support the send low jitter algorithm, which results
in an "unsupported option" error in the result list.
The send low jitter option is a global parameter that needs to be set by the administrator.
The total capacity of network cards, and therefore of the Hawkeye endpoint software, is limited
by the number of packets per second.
Smaller packets carry less bit rate on the network, therefore the total amount of traffic
the endpoints are able to generate declines with the packet size.
As an example, for a network interface card able to generate packets at about line rate
for the maximum IP MTU (1500 bytes):
This is only an example and should be read as guidance on the expected throughput cap-
abilities at different packet sizes and levels for an example endpoint. Real values will
differ slightly from the table below depending on the endpoint type:
UDP payload (bytes) | IP packet (bytes) | Ethernet frame (bytes) | Throughput at payload / IP / Ethernet level (Mbps) | Packets per second
64 | 92 | 110 | 42 / 60 / 72 | 82,000
10 | 38 | 56 | 7 / 25 / 37 | 82,000
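The rows above follow directly from the packets-per-second limit: throughput = packets per second x packet size x 8. For example, at 82,000 packets per second, 110-byte Ethernet frames give 82,000 x 110 x 8, about 72 Mbps, while 1500-byte frames from the same endpoint would give on the order of 82,000 x 1500 x 8, about 984 Mbps, i.e. close to gigabit line rate.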
Using the traffic buffering algorithm optimization can help generate more packets per
second for endpoints with powerful CPUs (where the bottleneck is in the network interface card):
by sending more even traffic to the stack, Hawkeye probes optimize the number of packets
that can be sent.
Jumbo frames: packets with jumbo frames are supported by Ixia endpoints if the net-
work interface of the endpoint is configured for it and able to support it. A special jumbo
frame configuration needs to be set in Hawkeye - please contact support to set it up if
needed.
Generating too much traffic will result in packet drops at the traffic generator stack
(packets are never sent to the network) or packets being lost at the receiving stack
(packets are not counted as received by the receiving endpoint/probe).
Description: One UDP stream - generates throughput with defined packet size and bit rate - from E1 to E2
Purpose: Send UDP packets (bit blasting) at a defined rate to validate the capacity between endpoints
Advanced Information: This test will generate UDP packets from E1 to E2 based on the requested throughput. The test sample Filesize will be computed according to the requested bit rate so that there should be approximately one timing record per second. The optimized packet size is the UDP size used to optimize the stack, i.e. pushing a packet size bigger than the MTU to get full MTU packets on the line. While it provides better results for throughput, it can be sensitive to packet fragmentation on the network and can therefore show a lot of packet losses in some network conditions.
Results interpretation: The result in throughput is displayed at layer 4 level (with ethernet, IP and UDP headers). UDP packet generation is similar to bit blasting and therefore will not auto-adjust to the available bandwidth. Generating more traffic than the capacity can result in traffic congestion on network equipment and therefore cause some service drops.
Traffic generation can be optimized by using an option in preferences / test engine settings: send low jitter is an algorithm optimizing UDP traffic generation and therefore avoiding bursty traffic. It is particularly recommended to activate this option when using traffic shaping. The total bytes lost metric is an indication of the number of bytes that were lost during the test. By default the threshold will always pass.
Potential Errors: If the configured bitrate is set too high compared to the expected line capacity, the test might not be able to accomplish the first measurement. This would result in an error with a time out (unable to collect result). Small packet sizes are not able to generate as much throughput as large ones: at high traffic rates, it is possible that endpoints generate packet losses as they are unable to generate the expected traffic rate.
In case of use of the optimized UDP packet size or a 1460-byte packet size, be careful in case of a low MTU or network fragmentation: this can cause packets to be fragmented and therefore unexpected or higher than expected packet loss rates. It is recommended to use a lower UDP packet size (1300 for example) to validate that there is no impact on the test result.
If there is unexpected loss, this could be because of bursty UDP traffic generation: make sure you have endpoints supporting the send low jitter option and enable it in preferences/test engine in the Hawkeye preferences.
If results can't be collected ("some pairs didn't complete" error):
Check that your E2 endpoint can open port 10115 toward E1 for management (this is needed for stream pairs).
Check that the port for traffic is opened (no firewall).
Check that the synchronisation port between pairs is opened (management UDP 10115). Check that the port in use is in range (see "preferences/Test Controler/Destination Port Conf Option" and "preferences/Test Controler/First Destination Port") so that the correct destination port for test traffic is used.
Description: One UDP stream - generates throughput with defined packet size and bit rate - from E2 to E1
Available Options:
Metrics:
Advanced Information:
Results interpretation:
Potential Errors:
Description: Bidirectional UDP throughput - with defined packet size and bitrate. The UDP throughput is generated concurrently
Purpose: Send UDP packets (bit blasting) at a defined rate to validate the capacity between endpoints in both directions.
Unlike TCP bidirectional, this measure is perfectly valid for measuring both directions of communication simultaneously, provided the endpoint/probe performance is not maxed out.
Advanced Information: This test combines a UDP stream from E1 to E2 and from E2 to E1. The timing record per pair will be optimized to be one timing record per second.
Note that the performance of an endpoint that is both sending and receiving is typically lower than that of an endpoint dedicated to sending or receiving. We typically see a 20 to 30% performance reduction when traffic is generated both ways as opposed to one way only.
Errors:
Purpose: Send UDP packets (bit blasting) at a defined rate to validate the capacity between endpoints in both directions.
Advanced options allow being more specific about what is required.
Advanced Information: This test allows fine-tuning the UDP stack according to some settings that are dedicated to the test; see the Annex for the complete UDP stack configuration documentation. The recommended value for filesize is around 1 second of data transfer at the expected throughput, so in that case (factor 8 to take into account the byte-to-bit ratio): filesize = throughput (kbps) * 1000 / 8
Errors:
Purpose: Analyzes loss metrics for a UDP stream played in one direction.
Metrics:
- Loss (UDP bytes)
- Lost Datagrams (UDP packets)
- Max Loss burst
- Total Datagrams Rcvd
- Total Datagrams Sent
Errors:
- loss
- jitter
- delay
These are key SLA indicators for network health and Service Level Agreements.
Description: Test network delivery KPIs with a low footprint (100 kbps) and 50 packets per second - from E1 to E2 direction
Description: Test network delivery KPIs with a low footprint (100 kbps per COS) and 50 packets per second - from E1 to E2 direction, on 3 different COS
Metrics: Per COS:
One way delay (ms)
jitter (ms)
max jitter (ms)
packet loss (%)
voice MOS score
packet loss burst
max delay variation (ms)
Description: Test network delivery KPIs with a low footprint (100 kbps) and 50 packets per second - from E2 to E1 direction
Results interpretation: The network KPI pair generates an RTP packet stream. These packets traverse the network from one probe to the other and provide indications, based on the time sent and received, of how many are lost etc.
The packets are sent similarly to a G711 voice stream, as this is expected to be a key application the network shall be able to transport.
The voice MOS score gives an indication of the quality of the transport network for a voice G711 application.
Description: Test network delivery KPIs with a low footprint (100 kbps) and 50 packets per second - from E1 to E2 and E2 to E1 simultaneously
Results interpretation: The network KPI pair generates an RTP packet stream. These packets traverse the network from one probe to the other and provide indications, based on the time sent and received, of how many are lost etc.
The packets are sent similarly to a G711 voice stream, as this is expected to be a key application the network shall be able to transport.
The voice MOS score gives an indication of the quality of the transport network for a voice G711 application.
Description: Test network delivery KPIs with advanced parameters for packet settings - from E1 to E2 direction - with very low bandwidth (<2 kbps line rate). Only sends 1 heartbeat per second (lower precision)
Results interpretation: The network KPI pair generates an RTP packet stream. These packets traverse the network from one probe to the other and provide indications, based on the time sent and received, of how many are lost etc.
Description: Test network delivery KPIs with advanced parameters for packet settings - from E1 to E2 direction
Results interpretation: The network KPI pair generates an RTP packet stream. These packets traverse the network from one probe to the other and provide indications, based on the time sent and received, of how many are lost etc.
Each link going to remote sites will have the same or different Class of Service imple-
mentations, tied to the same or different SLAs from the carriers transporting the last mile.
The different classes of service are defined based on TOS parameter settings.
You can generate traffic up to line rate with class of service targets, and generate class
of service over-subscription traffic.
In the test scenario, try to fill up the link capacity with 100% best effort traffic plus 10%
voice and 20% critical application traffic.
The scenario will pass if the link discards best effort traffic and sustains the prioritized
traffic.
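As a hypothetical illustration on a 100 Mbps link: offering 100 Mbps of best effort plus 10 Mbps of voice and 20 Mbps of critical traffic loads the link with 130 Mbps. If the class-of-service configuration works, the 10 Mbps of voice and the 20 Mbps of critical traffic should pass with their targets met, while roughly 30 Mbps of best effort traffic is discarded.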
Some dedicated test types can be used to fill up the different classes of service and run over-
subscription scenarios.
Purpose: Send UDP packets (bit blasting) at a defined rate to validate the capacity between endpoints on 4 different streams. This allows validating a class of service mix with bit blasting on each one, and running over-subscription scenarios.
Results interpretation: The result in throughput is displayed at layer 4 level (with ethernet, IP and UDP headers). UDP packet generation is similar to bit blasting and therefore will not auto-adjust to the available bandwidth. Generating more traffic than the capacity can result in traffic congestion on network equipment and therefore cause some service drops.
Traffic generation can be optimized by using an option in preferences / test engine settings: send low jitter is an algorithm optimizing UDP traffic generation and therefore avoiding bursty traffic. It is particularly recommended to activate this option when using traffic shaping. The total bytes lost metric is an indication of the number of bytes that were lost during the test. By default the threshold will always pass.
Potential Errors: If the configured bitrate is set too high compared to the expected line capacity, the test might not be able to accomplish the first measurement. This would result in an error with a time out (unable to collect result). Small packet sizes are not able to generate as much throughput as large ones: at high traffic rates, it is possible that endpoints generate packet losses as they are unable to generate the expected traffic rate.
In case of use of the optimized UDP packet size or a 1460-byte packet size, be careful in case of a low MTU or network fragmentation: this can cause packets to be fragmented and therefore unexpected or higher than expected packet loss rates. It is recommended to use a lower UDP packet size (1300 for example) to validate that there is no impact on the test result.
If there is unexpected loss, this could be because of bursty UDP traffic generation: make sure you have endpoints supporting the send low jitter option and enable it in preferences/test engine in the Hawkeye preferences.
If results can't be collected ("some pairs didn't complete" error):
Check that your E2 endpoint can open port 10115 toward E1 for management (this is needed for stream pairs).
Check that the port for traffic is opened (no firewall).
Check that the synchronisation port between pairs is opened (management UDP 10115). Check that the port in use is in range (see "preferences/Test Controler/Destination Port Conf Option" and "preferences/Test Controler/First Destination Port") so that the correct destination port for test traffic is used.
Purpose: Send UDP packets (bit blasting) at a defined rate to validate the capacity between endpoints on 4 different streams. The TCP classes of service are tested with TCP stream mixes (3 mixed streams) trying to achieve as much traffic as possible. The classes of service should "auto adjust" themselves to the relative capacity they are allowed to transport. For the voice class of service, the expected throughput should be defined.
Available Options: Global: filesize (in bytes) - defines which filesize shall be used for TCP traffic generation.
The filesize determines how precise and efficient traffic generation on the TCP classes of service will be.
Each class of service sends 3 streams.
So if classes of service are expected to adjust to:
1M : 40000 bytes
5M : 200000 bytes
10M : 400000 bytes
etc...
Per stream:
Destination port: leave blank for auto, otherwise enter a specific port used by the network architecture to define the class of service
DSCP setting: defines the stream class of service
Metrics: Voice stream:
Average Throughput (kbps) from->to
Loss (%)
Total bytes lost
Delay
Jitter
Max loss Burst
Voice
Voice from->to
Description: Test voice quality from E1 to E2, with G711, G729, AMR codecs
Purpose: A voice media stream is generated in one direction and key transport metrics are validated. At the same time a Mean Opinion Score is computed to validate the transport quality.
Advanced Information: This test will generate unidirectional voice traffic with the selected codec.
Results interpretation: The voice pair generates an RTP stream according to the selected codec. The content of the voice traffic itself is not actual voice. This test qualifies the network transport capability for realistic voice traffic.
Potential Errors: If the test result pair can't be collected: check that your E2 endpoint can open port 10115 toward E1 for management (this is needed for stream pairs). Check that the synchronisation port between pairs is opened (management UDP 10115). Check that the port in use is in range (see "preferences/Test Controler/Destination Port Conf Option" and "preferences/Test Controler/First Destination Port") so that the correct destination port for test traffic is used.
Voice bidirectional
Description: Test voice quality from E1 to E2 AND E2 to E1, with G711, G729, AMR codecs.
Metrics are gathered by direction.
This reproduces a realistic voice media call.
Available Options:
Metrics:
Advanced Information:
Results interpretation:
Potential Errors:
Description: Test voice quality from E1 to E2, with G711, G729, AMR codecs, with N calls simultaneously
Purpose: A voice media stream is generated in one direction and key transport metrics are validated. At the same time a Mean Opinion Score is computed to validate the transport quality.
This test is performed for N calls.
Available Options: Voice codec: G711, G729, AMR; DSCP setting; Number of calls
Advanced Information: This test will generate N pairs. Test results are aggregated and there are no detailed results.
Description: Test voice quality from E1 to E2 AND E2 to E1, with G711, G729, AMR codecs.
Metrics are gathered by direction.
This reproduces a realistic voice media call.
This test does N calls.
Available Options:
Metrics:
Advanced Information:
Results interpretation:
Potential Errors:
Video Stream
Video Stream
Description: Define a video stream from E1 to E2 with defined bit rate and MPEG2 or customizable codec.
Purpose: Generate a unidirectional video flow from E1 to E2. This can simulate video based on RTP/UDP flows going through the network.
Advanced Information: This test allows generating video RTP streams - the bitrate will define the corresponding codec (HD or SD)
Results interpretation: This test generates a video stream on the selected path. It can be optimized by setting send low jitter to 1 in the preferences, if the endpoints support it.
Potential Errors: If the test result pair can't be collected: check that your E2 endpoint can open port 10115 toward E1 for management (this is needed for stream pairs). Check that the synchronisation port between pairs is opened (management UDP 10115). Check that the port in use is in range (see "preferences/Test Controler/Destination Port Conf Option" and "preferences/Test Controler/First Destination Port") so that the correct destination port for test traffic is used.
Available Options: Test duration, Test bit rate (Kbps), DSCP Setting, Multicast Address: Port Video from->to.
Advanced Information: This test allows generating video RTP streams - the bitrate will define the corresponding codec (HD or SD)
Results interpretation: This test generates a video stream on the selected path. It can be optimized by setting send low jitter to 1 in the preferences, if the endpoints support it.
Adaptive video
Adaptive video
Available Options: Test duration, Test bit rate list, DSCP Setting, Number of Parallel Streams.
Adaptive video
Available Options: Test duration, Test bit rate list, DSCP Setting, Number of Parallel Streams.
Metrics: Network KPI metrics:
Delay (ms)
Jitter (ms)
Jitter Max (ms)
Loss
Results interpretation: This test is realistic since a jitter buffer is used and, as such, the download of the video segments is paused when the buffer becomes full. On average, the throughput will not exceed the maximum video bitrate.
Video Playback Downshift is a decrease in the requested bit rate for future video segments as compared to that requested for the previous video segments. Video Playback Upshift is an increase in the requested bit rate for future video segments as compared to that requested for the previous video segments.
Video quality segments indicate the distribution of video playback quality. Video stopped count and duration indicate the video playback interruptions and their duration.
Video Stopped Count and Video Stopped Duration indicate whether the user experience included video stalling during the watch.
The number of Downshifts and Upshifts indicates how often the adaptive rate needed to change: many changes indicate unstable network conditions.
The Avg Playback rate indicates the overall video quality experience: a high rate means the high quality segments could be transported, lower rates mean the video had to adapt to lower quality segments.
Flash RTMP
Available Options: Test duration, Test bit rate list, DSCP Setting, Number of Parallel Streams.
Flash RTMP
Flash RTMP
Netflix
Netflix
Netflix
Youtube
Available Options: Test duration, Video Rates Video Stream, Test bit rate list, DSCP Setting, Number of Parallel Streams.
Youtube
Youtube
Unified Communications
Unified communication traffic reproduces voice and video traffic going through networks.
Description: Microsoft Skype4B traffic. Default parameters for audio and video based on Microsoft recommended settings. User configurable parameters. This test is for a bidirectional configuration.
Results interpretation: This test generates Skype4B audio and video streams on the selected paths. It can be optimized by setting send low jitter to 1 in the preferences, if the endpoints support it.
Potential Errors: If the test result pair can't be collected: check that your E2 endpoint can open port 10115 toward E1 for management (this is needed for stream pairs). Check that the synchronisation port between pairs is opened (management UDP 10115). Check that the port in use is in range (see "preferences/Traffic Port Management/Destination Port Conf Option" and "preferences/Test Controler/First Destination Port") so that the correct destination port for test traffic is used.
NOTE: in the preferences, Skype4B has its own port management control:
Preferences/Traffic Port Management/Destination Port Conf Option
If the Use Skype4B Special Range option is set to 1, it will take precedence over the default port management for Skype4B tests (only for Skype4B tests).
Skype4B traffic
Description: Microsoft Skype4B traffic. Default parameters for audio and video based on Microsoft recommended settings. User configurable parameters. This test is for a unidirectional configuration.
Results interpretation: This test generates Skype4B audio and video streams on the selected paths. It can be optimized by setting send low jitter to 1 in the preferences, if the endpoints support it.
Potential Errors: If the test result pair can't be collected: check that your E2 endpoint can open port 10115 toward E1 for management (this is needed for stream pairs). Check that the synchronisation port between pairs is opened (management UDP 10115). Check that the port in use is in range (see "preferences/Traffic Port Management/Destination Port Conf Option" and "preferences/Test Controler/First Destination Port") so that the correct destination port for test traffic is used.
NOTE: in the preferences, Skype4B has its own port management control:
Preferences/Traffic Port Management/Destination Port Conf Option
If the Use Skype4B Special Range option is set to 1, it will take precedence over the default port management for Skype4B tests (only for Skype4B tests).
Description: Traffic Mix. Emulates three different traffic types. Measures voice, video and HTTP metrics.
Purpose: This can simulate three traffic types through the network with one test.
Available Options: Test duration, Total bitrate voice from->to, and Total bitrate HTTP from->to.
Advanced Information: This test uses pre-defined configurations for video and voice codec traffic.
The unitary transaction consists of an exchange of information - typically from the client (E1,
or endpoint from) to the server (E2, endpoint to).
This transaction is based on a simple exchange (one GET and file reception) or multiple
exchanges.
The defined throughput takes into account the TOTAL throughput on both sides (adding
upstream and downstream traffic) and is mostly used to limit the impact of the transactions
on the network.
Description: Simple transaction (one packet) with a GET from the client (endpoint from) and a response from the server (endpoint to)
Advanced Information: This test sends a TCP SYN and waits for the response. Then 100 bytes are sent from E1 to E2 and 100 bytes are sent from E2 to E1. Typically only 2 packets are exchanged for the full transaction.
Results interpretation: The response time is the expected time to transfer very simple packets in both directions in 2 payloads with TCP. In case of retransmissions the response time will increase significantly.
By default no objective is set on throughput: any result is good enough, as the total throughput should reflect the response time and transaction rate (the transaction rate is 1/response time).
Potential Errors: If the configured bitrate is set too high compared to the expected line capacity, the test might not be able to accomplish the first measurement. This would result in an error with a time out ("Some pairs in tests didn't complete"). If results can't be collected, check that the port in use is in range (see "preferences/Traffic Port Management/Destination Port Conf Option" and "preferences/Traffic Port Management/First Destination Port") so that the correct destination port for test traffic is used.
Purpose: Measure simple TCP response time and / or transaction rate for HTTP requests. The request is 300 bytes, the response is 25 kbytes, which is the size of a typical web page.
Results interpretation: The response time is the expected time to transfer the web page.
By default no objective is set on throughput: any result is good enough, as the total throughput should reflect the response time and transaction rate (the transaction rate is 1/response time).
Potential Errors: If the configured bitrate is set too high compared to the expected line capacity, the test might not be able to accomplish the first measurement. This would result in an error with a time out ("Some pairs in tests didn't complete"). If results can't be collected, check that the port in use is in range (see "preferences/Traffic Port Management/Destination Port Conf Option" and "preferences/Traffic Port Management/First Destination Port") so that the correct destination port for test traffic is used.
Purpose: Measure simple TCP response time and / or transaction rate for POP3 or SMTP requests. E1 is the email client, E2 is the email server. The traffic reproduces synthetic transactions.
Results interpretation: The response time is the expected time to transfer an email.
By default no objective is set on throughput: any result is good enough, as the total throughput should reflect the response time and transaction rate (the transaction rate is 1/response time).
Potential Errors: If the configured bitrate is set too high compared to the expected line capacity, the test might not be able to accomplish the first measurement. This would result in an error with a time out ("Some pairs in tests didn't complete"). If results can't be collected, check that the port in use is in range (see "preferences/Traffic Port Management/Destination Port Conf Option" and "preferences/Traffic Port Management/First Destination Port") so that the correct destination port for test traffic is used.
Purpose: Measure simple TCP response time and / or transaction rate for FTP requests. E1 is the FTP client, E2 is the FTP server. The traffic reproduces synthetic transactions.
Results interpretation: The response time is the expected time to transfer the FTP file.
By default no objective is set on throughput: any result is good enough, as the total throughput should reflect the response time and transaction rate (the transaction rate is 1/response time).
Potential Errors: If the configured bitrate is set too high compared to the expected line capacity, the test might not be able to accomplish the first measurement. This would result in an error with a time out ("Some pairs in tests didn't complete"). If results can't be collected, check that the port in use is in range (see "preferences/Traffic Port Management/Destination Port Conf Option" and "preferences/Traffic Port Management/First Destination Port") so that the correct destination port for test traffic is used.
Purpose: Measure simple TCP response time and / or transaction rate for DNS requests. E1 is the DNS client, E2 is the DNS server. The traffic reproduces synthetic transactions.
Results interpretation: The response time is the expected time to accomplish the DNS request.
By default no objective is set on throughput: any result is good enough, as the total throughput should reflect the response time and transaction rate (the transaction rate is 1/response time).
Potential Errors: If the configured bitrate is set too high compared to the expected line capacity, the test might not be able to accomplish the first measurement. This would result in an error with a time out ("Some pairs in tests didn't complete"). If results can't be collected, check that the port in use is in range (see "preferences/Traffic Port Management/Destination Port Conf Option" and "preferences/Traffic Port Management/First Destination Port") so that the correct destination port for test traffic is used.
Exchange traffic
Exchange traffic
Results interpretation: The response time is the expected time for an email to be transferred or received.
By default no objective is set on throughput: any result is good enough, as the total throughput should reflect the response time and transaction rate (the transaction rate is 1/response time).
Potential Errors: If the configured bitrate is set too high compared to the expected line capacity, the test might not be able to accomplish the first measurement. This would result in an error with a time out ("Some pairs in tests didn't complete"). If results can't be collected, check that the port in use is in range (see "preferences/Traffic Port Management/Destination Port Conf Option" and "preferences/Traffic Port Management/First Destination Port") so that the correct destination port for test traffic is used.
Purpose: Measure simple TCP response time and/or transaction rate for SIP requests. E1 is the SIP client, E2 is the SIP server. The traffic reproduces synthetic transactions that are representative of typical SIP-based functions.
Results interpretation: The response time is the expected time to complete the SIP transaction. By default no objective is set on throughput: any result is good enough, as total throughput should reflect response time and transaction rate (transaction rate is 1/response time).
Potential Errors: If the configured bit rate is set too high compared to the expected line capacity, the test may not be able to complete the first measurement. This results in a time-out error ("Some pairs in tests didn't complete"). If results cannot be collected, check that the port in use is in range (see "Preferences/Traffic Port Management/Destination Port Conf Option" and "Preferences/Traffic Port Management/First Destination Port") so that the correct destination port is used for the test traffic.
Available Options: Define the multicast source IP address and port. Configure the stream bit rate.
Advanced Information: Can be used for N2N or Mesh tests. Only available for hub-and-spoke mesh.
Results interpretation: Confirm that all probes receive the video stream at an acceptable rate and quality.
Errors
Speedtest from->to
Purpose: Performs TCP traffic using several TCP pairs. The test attempts to generate as much TCP throughput as possible using several TCP streams. The average throughput is reported.
Advanced Information: If the test duration is too short, the time taken to ramp up TCP throughput will be reflected in a lower average throughput. The endpoint establishes multiple connections with the server over the TCP port. An initial chunk of data is sent. The endpoint then adjusts the chunk size and buffer size based on TCP metrics to maximize usage of the network connection. As the chunks are received by the other endpoint, the endpoint requests more chunks throughout the duration of the test. During the initial stages of the test, the endpoint establishes extra TCP connections (streams) to the other endpoint if it determines that additional threads are required to measure the throughput speed more accurately.
Errors
Speedtest to->from
Purpose: Performs TCP traffic using several TCP pairs. The test attempts to generate as much TCP throughput as possible using several TCP streams. The average throughput is reported.
Advanced Information: If the test duration is too short, the time taken to ramp up TCP throughput will be reflected in a lower average throughput. The endpoint establishes multiple connections with the server over the TCP port. An initial chunk of data is sent. The endpoint then adjusts the chunk size and buffer size based on TCP metrics to maximize usage of the network connection. As the chunks are received by the other endpoint, the endpoint requests more chunks throughout the duration of the test. During the initial stages of the test, the endpoint establishes extra TCP connections (streams) to the other endpoint if it determines that additional threads are required to measure the throughput speed more accurately.
Errors
Speedtest bidir
Purpose: Performs TCP traffic using several TCP pairs. The test attempts to generate as much TCP throughput as possible using several TCP streams. The average throughput is reported.
Advanced Information: If the test duration is too short, the time taken to ramp up TCP throughput will be reflected in a lower average throughput. The endpoint establishes multiple connections with the server over the TCP port. An initial chunk of data is sent. The endpoint then adjusts the chunk size and buffer size based on TCP metrics to maximize usage of the network connection. As the chunks are received by the other endpoint, the endpoint requests more chunks throughout the duration of the test. During the initial stages of the test, the endpoint establishes extra TCP connections (streams) to the other endpoint if it determines that additional threads are required to measure the throughput speed more accurately.
Errors
Purpose: Performs TCP traffic using several TCP pairs. The test attempts to generate as much TCP throughput as possible using several TCP streams. The average throughput is reported. Each of the two TCP streams uses a different QOS setting, impacting the priority of data packets across the network, which will be reflected in throughput.
Advanced Information: If the test duration is too short, the time taken to ramp up TCP throughput will be reflected in a lower average throughput. The endpoint establishes multiple connections with the server over the TCP port. An initial chunk of data is sent. The endpoint then adjusts the chunk size and buffer size based on TCP metrics to maximize usage of the network connection. As the chunks are received by the other endpoint, the endpoint requests more chunks throughout the duration of the test. During the initial stages of the test, the endpoint establishes extra TCP connections (streams) to the other endpoint if it determines that additional threads are required to measure the throughput speed more accurately.
The lower the COS number, the lower the priority of the TCP stream (QOS).
The software endpoint depends on the endpoint base OS to add COS to IPv4 packets, and it is possible that the base OS (particularly Windows) will not always do this. Additionally, COS is layer 2 traffic, just the ToS 1-byte value in the IP header. QOS is layer 3 traffic, where COS and traffic type are both taken into consideration.
Errors
Purpose: Performs TCP traffic using several TCP pairs. The test attempts to generate as much TCP throughput as possible using several TCP streams. The average throughput is reported. Each of the three TCP streams uses a different QOS setting, impacting the priority of data packets across the network, which will be reflected in throughput.
Advanced Information: If the test duration is too short, the time taken to ramp up TCP throughput will be reflected in a lower average throughput. The endpoint establishes multiple connections with the server over the TCP port. An initial chunk of data is sent. The endpoint then adjusts the chunk size and buffer size based on TCP metrics to maximize usage of the network connection. As the chunks are received by the other endpoint, the endpoint requests more chunks throughout the duration of the test. During the initial stages of the test, the endpoint establishes extra TCP connections (streams) to the other endpoint if it determines that additional threads are required to measure the throughput speed more accurately.
The lower the COS number, the lower the priority of the TCP stream (QOS).
The software endpoint depends on the endpoint base OS to add COS to IPv4 packets, and it is possible that the base OS (particularly Windows) will not always do this. Additionally, COS is layer 2 traffic, just the ToS 1-byte value in the IP header. QOS is layer 3 traffic, where COS and traffic type are both taken into consideration.
Errors
Purpose: Performs TCP traffic using several TCP pairs. The test attempts to generate as much TCP throughput as possible using several TCP streams. The average throughput is reported. Each of the four TCP streams uses a different QOS setting, impacting the priority of data packets across the network, which will be reflected in throughput.
Advanced Information: If the test duration is too short, the time taken to ramp up TCP throughput will be reflected in a lower average throughput. The endpoint establishes multiple connections with the server over the TCP port. An initial chunk of data is sent. The endpoint then adjusts the chunk size and buffer size based on TCP metrics to maximize usage of the network connection. As the chunks are received by the other endpoint, the endpoint requests more chunks throughout the duration of the test. During the initial stages of the test, the endpoint establishes extra TCP connections (streams) to the other endpoint if it determines that additional threads are required to measure the throughput speed more accurately.
The lower the COS number, the lower the priority of the TCP stream (QOS).
The software endpoint depends on the endpoint base OS to add COS to IPv4 packets, and it is possible that the base OS (particularly Windows) will not always do this. Additionally, COS is layer 2 traffic, just the ToS 1-byte value in the IP header. QOS is layer 3 traffic, where COS and traffic type are both taken into consideration.
Errors
Purpose: Performs TCP traffic using several TCP pairs. The test attempts to generate as much TCP throughput as possible using several TCP streams. The average throughput is reported. Each of the five TCP streams uses a different QOS setting, impacting the priority of data packets across the network, which will be reflected in throughput.
Advanced Information: If the test duration is too short, the time taken to ramp up TCP throughput will be reflected in a lower average throughput. The endpoint establishes multiple connections with the server over the TCP port. An initial chunk of data is sent. The endpoint then adjusts the chunk size and buffer size based on TCP metrics to maximize usage of the network connection. As the chunks are received by the other endpoint, the endpoint requests more chunks throughout the duration of the test. During the initial stages of the test, the endpoint establishes extra TCP connections (streams) to the other endpoint if it determines that additional threads are required to measure the throughput speed more accurately.
The lower the COS number, the lower the priority of the TCP stream (QOS).
The software endpoint depends on the endpoint base OS to add COS to IPv4 packets, and it is possible that the base OS (particularly Windows) will not always do this. Additionally, COS is layer 2 traffic, just the ToS 1-byte value in the IP header. QOS is layer 3 traffic, where COS and traffic type are both taken into consideration.
Errors
[Table: DSCP Class, DSCP (bin), DSCP (hex), DSCP (dec), ToS (dec), ToS (hex), ToS (bin), ToS Prec. (bin), ToS Prec. (dec), ToS Delay Flag, ToS Throughput Flag, ToS Reliability Flag, TOS String Format]
The precedence field comprises the first three bits and supports eight levels of priority.
The lowest priority is 0, the highest is 7 (values 6 and 7 are reserved for network control
packets).
The four bits following the precedence field specify the type of service. Only one of
these bits can be enabled at one time. Each bit defines the desired type of service:
l D – The delay bit instructs network devices to choose high speed to minimize
delay.
l T – The throughput bit specifies high capacity links to ensure high throughput.
l R – The reliability bit specifies that reliable links should be used to minimize data
loss.
l C – The cost bit specifies that data transmission should be accomplished at min-
imal cost.
The last bit in the TOS byte is reserved and is always set to 0.
On Microsoft Windows Server 2008 (32- and 64-bit) and on Microsoft Windows Vista (32- and 64-bit), IP TOS can be set only via qWave templates. If endpoints are used on these operating systems, qWave needs to be installed so that the endpoint can tag with the correct TOS.
DiffServ
Differentiated Services (DiffServ) is a QoS model defined by the IETF for IP networks
(refer to RFC 2474). This model is designed to be scalable and to provide consistent ser-
vice classes independent of application. DiffServ redefines the TOS byte of the IP
header (see Figure 9-1) as the DS field.
The first six bits of the DS field are used as a differentiated service code point (DSCP),
and the last two bits are currently unused (CU).
In the DiffServ QoS model, traffic is classified by marking the DS field with a DSCP
value. Queuing mechanisms provide differentiated forwarding of the traffic at each hop,
based on the DSCP value.
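For example, the Expedited Forwarding codepoint is DSCP 46 (binary 101110). Placed in the six most significant bits of the DS field, it gives a byte value of 10111000 binary, 0xB8 hexadecimal, or 184 decimal; the top three bits (101) correspond to IP precedence 5 in the legacy TOS interpretation.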
On Microsoft Windows Server 2008 (32- and 64-bit) and on Microsoft Windows Vista (32- and 64-bit), IP TOS can be set only via qWave templates. If endpoints are used on these operating systems, qWave needs to be installed so that the endpoint can tag with the correct DSCP.
Default PHB
RFC 2474 recommends codepoint 000000 as the Default PHB.
Class-Selector PHBs
RFC 2474 defines 21 codepoints, including the Class-Selector PHBs (codepoints of the form xxx000).
The AFMN codepoints, where M is the AF class and N is the drop precedence, are listed below:
AF11 = 001010
AF12 = 001100
AF13 = 001110
AF21 = 010010
AF22 = 010100
AF23 = 010110
AF31 = 011010
AF32 = 011100
AF33 = 011110
AF41 = 100010
AF42 = 100100
AF43 = 100110
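Each value follows the same pattern: the first three bits encode the AF class, the next two bits encode the drop precedence, and the final bit is 0. For example, AF32 is class 3 (011) followed by drop precedence 2 (10) and a trailing 0, giving 011100.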
The following topics will help you find the information necessary to solve problems you
encounter.
For most errors involving setup and file manipulation, the Test Agent is the computer
that detects the error.
For errors that occur while running a test, the error could have been detected on a par-
ticular endpoint pair by the Test Agent, by Endpoint 1, or by Endpoint 2. The program
that detects the error reports to the Test Agent, which shows the error and logs it. The
first line of the Test Agent error message tells which endpoint probe detected the error.
If for some reason you cannot see the error at the Test Agent, examine the error log at
the endpoints involved in the problem. A formatted error log entry should contain a line
that resembles the following:
l Hawkeye test engine started the test but had to abandon it at result collection:
l Your test generates too much traffic on the link and the first result collection never happens, because result collection intervals depend on the requested traffic: this results in Hawkeye timing out the test execution.
l => Solution: Start a test with low traffic between the endpoints.
l Check firewall and NAT rules between the endpoints: the test path may not be opened on the required ports, or E2 may be unable to reach E1 on port 10115 (which is needed for the tests). See the example after this list.
l => Solution: Open the relevant ports. Force user traffic to use a specific port from Hawkeye (Preferences, Test Agent tab).
l Endpoint process in a bad state: this sometimes happens on Linux when there was a problem with a test.
l => Solution: Restart the endpoint process on the probes (/etc/init.d/endpoint stop; /etc/init.d/endpoint start). For hardware probes you can do this from the Probe Management / Probe Remote Management menu of Hawkeye. There is also an automatic endpoint cleanup process that does this every hour by default. A hard reboot of the probe also does the job.
l Hawkeye test engine gets stuck at the test initialization stage:
l Port 10115 UDP must be opened both ways between the probes for synchronization.
l Endpoint process in a bad state: same solution as above.
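The following is a minimal sketch of checking the items above from a probe shell. The remote probe address 192.0.2.10 is a placeholder, and the nc utility may not be present on every probe image:
# nc -vzu 192.0.2.10 10115
(checks whether UDP port 10115 appears reachable on the remote probe; for UDP this is indicative only, since the absence of a reply does not necessarily mean the port is blocked)
# /etc/init.d/endpoint stop
# /etc/init.d/endpoint start
(restarts the endpoint process when it is in a bad state, as described above)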
If the error was detected by Endpoint 1 or Endpoint 2, check the test setup at the Test
Agent to determine the actual network address of the probe where the error was detec-
ted.
Although one probe may detect an error, the solution may actually lie elsewhere. For
example, if Endpoint 1 detects an error indicating that a network connection could not
be established, it may be because of a configuration error in the middle of the network
at Endpoint 2, or between Endpoint 1 and Endpoint 2.
- The first lines of the error message show the timestamp when the error was detected and received.
- The next lines tell you which endpoint detected the error, the error number, and the error explanation, together with other error details.
Below is an explanation of the error codes received in Hawkeye when running node-to-node or mesh tests:
CHR0206 Error Cannot complete scripts because the assigned port is already
in use
CHR0264 Error RUNTST cannot run while the Test Agent is loaded. The
RUNTST program is installed at the Test Agent in the \Ixi-
a\Hawkeyegram folder
CHR0335 Warning A timing record was received with a measured time of 0 mil-
liseconds.
A script command was performed in less than one millisecond,
an insignificant amount of time
CHR0336 Warning A timing record was received with a measured time between 1
and 20 milliseconds. The clock timers used in timing scripts are
generally accurate to within 1 millisecond (ms). For most tests,
this is more than sufficient, but if the transactions in a test are
too short, problems may arise
CHR0337 Warning More than 500 timing records per pair were generated in Batch
Reporting Mode. Excessive timing records create extra traffic
on the line and interfere with performance results
CHR0338 Warning The test has run for less than 1 second while collecting end-
point CPU utilization. Meaningful data on CPU utilization at the
endpoints cannot be collected unless the test runs for a longer
period of time
Common problems
If you attempt to execute a VoIP or streaming pair test and you receive a CHR0359 error
message (An error was detected in the high precision timer), you may be able to resolve
the issue by following the instructions given in the Microsoft Knowledge Base article
entitled "Programs that use the QueryPerformanceCounter function may perform poorly
in Windows Server 2000, in Windows Server 2003, and in Windows XP". You can either
search for the article by name or reference it at this address: http://sup-
port.microsoft.com/kb/895980.
Insufficient Resources
If you receive an Insufficient Resources error while running node-to-node/mesh tests, the test agent computer does not have access to the amount of memory required to successfully run these tests. Close other applications that are currently running and restart the tests.
Insufficient Threads
The Hawkeye Test Agent creates one or more threads for each endpoint pair when run-
ning a test. This is in addition to the threads created by the underlying network software
(as well as those used by other concurrently-running applications).
In our testing, we did not exhaust threads in our default settings for Windows NT or Win-
dows 2000/2003 until we reached about 7000 threads. We don't believe you'll encounter
out-of-threads problems, but please let us know if you do.
Protection faults or traps are the operating system's way of telling you when a program is trying to use memory that it does not own. They can occur in a Hawkeye program, in any library routines called by Hawkeye, or in the operating system itself. The default way that they are handled is with a message box. This message box shows program instruction values in hex, which are not helpful to you as a user.
Windows NT, Windows 2000/2003, and Windows XP write an entry to a file named drwts-
n32.log when they encounter a trap or protection fault. This file is written to the dir-
ectory where you installed Windows. Its default location is c:\Winnt. It contains
information that is immensely helpful to us in finding and fixing bugs. When you contact
the Technical Support team, they may suggest that you send the Dr. Watson file to us
via email.
l Do not allow running 2 different test types at the same time on the same probe.
l Allow running simultaneous tests on the same probe only in the same test run (once a probe is running, it can't be used for other tests, even if this is the same test type).
l The maximum total number of tests per probe depends on the probe type and test type. See the table below.
This configuration is useful when using several interfaces on the same probe and allows probe control management.
These metrics can be modified by an advanced administrator - contact support for advice and recommendations.
Test type Probe type Max tests per probe
Exchange_traffic Software 1
Exchange_traffic xr2000 1
Exchange_traffic xr2000_vm 1
Exchange_traffic xr_pi 1
Real Services
Real service testing is performed between the testing probe (acting as a client) and a
server on the network, most of the time in the public Internet.
The probe will access the service and compute key performance indicators for the test.
These tests perform application performance testing: they measure network quality but
also application performance.
For distributed sites real service testing, the following endpoints support real service
tests:
Prerequisites
It is also important to take into account that many of the real service tests are run against public services; they will therefore only be available if the endpoint has access to the public Internet. DNS settings must be configured so that public addresses can be resolved.
Test types
Bittorrent test
Available Options: Torrent name (for display purposes only, not used in the test).
Magnet link: the torrent file URL that the probe needs to download to get access to the torrent.
Example: http://releases.ubuntu.com/14.10/ubuntu-14.10-desktop-amd64.iso.torrent
Test duration: the torrent download duration - after this duration the torrent download stops and the result is printed out, even if the torrent is not fully downloaded.
Advanced Information: This test aims at proving the capability to run peer-to-peer traffic from the probe and identifying the relevant maximum throughput. It is recommended to use popular torrent files so that the results are significant.
DNS test
Description: Test against a DNS server and verify the time it takes to resolve an IP address.
Purpose: Validate the performance of DNS resolution (response time) for the specified server.
Advanced Information: This test verifies the availability and response time of the DNS service from the probe, or the response time of a specific DNS server.
Results interpretation: A long DNS response time typically results in a bad user experience and should therefore be monitored closely.
Dropbox upload/download
Advanced The files are provided by default. The dropbox account is a ded-
Information icated test account provided by Ixia.
Errors
Description: Send and receive email from the same or different email accounts.
Purpose: Measures the time taken to relay an email via the configured SMTP servers until it reaches a target mail server.
Advanced Information: The settings for the email servers (SMTP and IMAP/POP) need to be configured for both accounts. The resolution for relay time (delay) is one second. This is by nature of the email clients, which can poll for received emails at a minimum rate of once per second.
Errors
FTP download
Description FTP server test with login, password and download file
Purpose: Test the user experience of downloading files from an FTP server. The downloaded file needs to be on the FTP server.
Advanced Information: This test downloads the FTP file as fast as possible from the server to the probe.
Results interpretation: This is the user experience of downloading the file from the specified FTP server to the probe. If the file is not downloaded before the end of the defined timeout, the test will error out.
Errors
Description: FTP server test with login, password and download file - adding a parameter to enable download over multiple concurrent streams.
THIS TEST IS ONLY SUPPORTED ON XR2000, NOT ON XRPi
Purpose: Test the user experience of downloading files from an FTP server. The downloaded file needs to be on the FTP server. The experience is enhanced by multiple parallel streams. If the FTP server access is good enough, multiple FTP download streams are expected to fill up the transport pipe.
Advanced
Information
Results inter-
pretation
Errors
Available Options: Server address or URL - the list can contain up to 10 different servers.
IPv4 or IPv6
Protocol: HTTP (0) or HTTPS (1)
Number of tests to execute in the same batch
DSCP setting
Advanced Information: This test tries to download HTTP content from a URL. It simply analyses the result of the HTTP GET - if the HTTP response code differs from 200, that information is displayed (for example, REDIRECT). When the HTTP code is different from 200, no HTML is received.
It is important to understand the difference between absolute URLs and relative URLs.
An absolute URL takes the form protocol://domain/path,
such as www.<domain name>.com, which will map spe-
cifically to https or http protocol. A relative URL like
<domain name>.com can be redirected to either http or
https. If test fails due to protocol error the test result
reason code will advise user to change ip protocol con-
figured for test.
Results interpretation: The test is useful to understand the TCP response time (network) relative to the full HTML download time (server response time).
Errors
Description: Download the full content of HTTP pages from one or several servers and provide statistics about the content and response time.
Purpose: Get user experience information for downloading the web page content. This includes all images, JavaScript, CSS, and so on. Other information such as DNS resolution will help correlate user experience to the CDN (Content Delivery Network) architecture.
Advanced Information: This test downloads the elements in a web page, providing the user experience of getting information from the associated URL. This test downloads all elements in the page (unless a setting prevents it).
It is important to understand the difference between absolute URLs and relative URLs.
An absolute URL takes the form protocol://domain/path,
such as www.<domain name>.com, which will map spe-
cifically to https or http protocol. A relative URL like
<domain name>.com can be redirected to either http or
https. If test fails due to protocol error the test result
reason code will advise user to change ip protocol con-
figured for test.
A URL may have multiple components to download, such as the web page, scripts, images, and style sheets. Multiple components involve creating multiple TCP streams for the complete download. When only the page is selected for download, the initial time taken to set up the connection is excluded from the download time. The download time solely reflects the actual download of the page. This means that a Wireshark trace calculating the time between the first and last packet will not match. The formula for calculating the download time is:
Download time = Download Size * 8 / KBitRate per sec
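As a worked example, assuming the download size is expressed in KBytes and the bit rate in kbps: a 512 KByte page downloaded at 2,048 kbps gives 512 * 8 / 2,048 = 2 seconds of download time.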
Results interpretation: The result should reflect the user experience of loading the web page from a browser.
Errors
Description: Download the full content of HTTP pages from one or several servers and provide statistics about the content and response time - provides advanced information with the full list of downloaded objects.
Purpose: Get user experience information for downloading the web page content. This includes all images, JavaScript, CSS, and so on. Other information such as DNS resolution will help correlate user experience to the CDN (Content Delivery Network) architecture.
Advanced Information: This test downloads the elements in a web page, providing the user experience of getting information from the associated URL. This test downloads all elements in the page (unless a setting prevents it).
It is important to understand the difference between absolute URLs and relative URLs.
An absolute URL takes the form protocol://domain/path,
such as www.<domain name>.com, which will map spe-
cifically to https or http protocol. A relative URL like
<domain name>.com can be redirected to either http or
https. If test fails due to protocol error the test result
reason code will advise user to change ip protocol con-
figured for test.
Results interpretation: The result should reflect the user experience of loading the web page from a browser.
Errors
ICMP performance
Description: Generates ICMP packets at a user-defined bit rate and looks at the network bandwidth performance of the received responses.
Advanced Information: This test sends ICMP packets at the defined bit rate and measures the throughput of the received packets. The number of packets per second is calculated based on the send throughput and packet size. The total generated bit rate is expected to be accurate for large packet sizes, up to 200 Mbps for the xr2000. The received throughput takes network lag into account, so it is expected to always be slightly less than the sent rate.
Results interpretation: A loss rate is likely to impact the total received throughput quite dramatically. Results will also be more precise if the traffic generation duration is longer.
Errors
ICMP test
Advanced Information: This test sends ICMP packets and listens for responses. The jitter calculation might need to be disabled for small packet sizes.
Results interpretation: The ICMP packets will detect any obvious issues on the network (congestion, drops, long response times, unstable response times). It is an interesting data point, as a change in the behaviour of an ICMP test would likely reflect some modification of the network behaviour. The results should however be interpreted with care, as ICMP packets are usually treated with lower priority than data packets by network elements and could therefore show worse behaviour than actual (prioritized) user traffic would experience in some cases.
Errors
IGMP test
Description: Join a multicast channel and analyze the received RTP stream - IGMP v2 or v3 will be used depending on the available network.
THIS TEST IS ONLY SUPPORTED ON XR2000, NOT ON XRPi
Purpose: Joins a channel (if available) and collects the broadcast stream if available.
Available Options: Watch duration: time between joining and leaving the stream.
Probe physical interface: this setting is important, as the IGMP join is allocated to a physical interface. Can be eth0 or eth1 (port 0 or port 1).
Multicast address: multicast address for the stream.
Source Multicast address: for IGMP v3 - used to select a specific source address.
Advanced Information: The received RTP stream is parsed and analyzed. Video is not stored on the probe.
Errors
Path Discovery
Metrics
Advanced Information: Although Path Discovery has no configurable metrics, there are criteria for Pass/Fail/Error. The Fail criterion is met if the destination is unknown for all of the last hops. The different servers on the path will be identified, and multiple paths to the destination can be discovered by increasing the traceroute count. The graph only shows forward routes; it does not show return routes for any packets used for path discovery. Some nodes may have no measurements because they are configured not to respond to ping. In the path discovery mechanism, each traceroute sends one probe per hop (rather than the typical three pings or requests per hop).
Results interpretation: There are various options in 'Test Results' that can help the user analyze path discovery results:
l Traceroute – displays the traceroute style output
l Show navigation – (uncheck) enables free zoom/ pan mode
l Frequent path – highlights frequent path and line thickness
will be proportional to frequency
l Loss – if selected, highlights nodes with loss equal or greater
than specified value
l Avg RTT – if selected, highlights nodes with average round
trip delay (RTT) equal or greater than specified value
l Cluster – gives user option to cluster Autonomous Systems
or unknown nodes
l Class of Service (DSCP setting) – identifies where in network
path nodes change the QOS setting
The following information is retrieved and available for display:
Hop info
l IP
l Reverse DNS
l Autonomous System
l ICMP extensions
l QoS Changes
When the Hawkeye server is connected to the Internet, the following hop information is also retrieved:
l Network name
l Prefix
l Organization
Metrics
l Frequency
l Min/Avg./Max response time
l Packet loss
Errors
Port Scan
Purpose Can be used to ensure services (for example HTTPS, FTP, etc.) are
accessible or blocked by firewalls.
Metrics Open ports – shows how many ports from the configured range are
open. Available for both TCP and UDP
Closed ports - shows how many ports from the configured range
are closed. Available for both TCP and UDP
Filtered ports - shows how many ports from the configured range
are filtered (no replies are received). Available for TCP
Open/Filtered ports - shows how many ports from the configured
range are open/filtered (no replies are received either because of
a firewall or the application listening on the port is not sending any
reply). Available for UDP
Each of these metrics will only show up in the result if at least one port of the corresponding state was found.
Advanced
Information
Results interpretation: The Port Scan results reflect an overall test status and the status of different ranges of ports. For a given range of ports, the Metric column represents the state of the port and the Value column represents the port number or port range that is in that state. The Status KPI column represents the comparison of the actual state of the ports with the "expected state" configured in the test configuration.
The following describes the port state:
l OPEN: The port (TCP or UDP) on the destination is reachable
and we receive reply to our sent packets.
l OPEN/FILTERED: Only applies to UDP ports. If no reply is
received either because the application listening is not reply-
ing or we cannot reach the destination (e.g. due to firewalls
blocking the packets).
l FILTERED: Only applies to TCP ports. If no reply is received
back (e.g. due to firewalls blocking the packets).
l CLOSED: The port on the destination is reachable (TCP or UDP) but there is no application listening on the destination (i.e. packets are received by the destination IP, but not processed by the intended application). Specifically, the endpoint received back an ICMP Destination Unreachable message.
Errors
Advanced Information: The test sends 150 TCP ping packets, 10 milliseconds apart, to the worldwide anycast IP of the Skype for Business media relay and compares the measurements against Microsoft requirements. See Media Quality and Network Connectivity Performance in Skype for Business Online for more information.
Errors
Speedtest
Metrics: Speedtest access latency: measured with TCP response time, usually exceeds the pure network latency.
Speedtest downstream
Speedtest upstream
Advanced Information: This test only becomes available once the test type has been installed by following the instructions in Speed Test using Global Servers.
The speed test servers are operated by third-party providers, and not all of the servers may be active at any given time.
Errors
TCP Ping
Purpose: SYN packets are sent to an interface and port, and the round-trip delay is based on the SYN-ACK received. This test is especially useful to measure TCP connection times to an HTTP server, or to check availability and network response time to a server when the ICMP protocol is blocked.
Advanced Information: This test sends TCP SYN packets and listens for SYN-ACK responses. Some TCP ports (HTTP, for example, on some servers) only allow a packet size of 0 (no content) for the TCP SYN.
Results interpretation: The TCP round-trip response time and packet loss provide good information about the network transport to the remote TCP ends.
Errors
Traceroute
Purpose: A traceroute request is sent from the originating probe to the destination server or URL. It helps in understanding the different hops involved and the response time of each hop, based on the protocol used (ICMP or UDP).
Errors
UDP Ping
Advanced Information: This test sends UDP packets and listens for UDP packet responses.
Results interpretation: The UDP round-trip response time and packet loss provide good information about the network transport to the remote UDP ends. This test is particularly useful for testing routers configured with reflectors. The UDP traffic generation is similar to what the TWAMP protocol defines for the user plane.
Errors
WiFi Connect
Purpose: Wi-Fi Connect measures the time taken by the Wi-Fi device to connect to the access point (AP). This test highlights the bottlenecks in the various states (Association, Authentication and DHCP) while connecting to the Wi-Fi AP. This test can be executed to check the connectivity of the device to the service provider by pinging the Internet or an external network. The results are available as part of the Metrics status and graphs. For this test to work, the Ixia XRPi probe must be used with the supplied Wi-Fi dongle.
Results interpretation: Total connect time gives the entire time for the Wi-Fi connection to the AP. The four stages for connection to the AP are numbered 1, 2, 3, and 4, and the time to complete each stage of the total connection is reported. Bottlenecks can be identified by looking at the time taken by the individual 802.11 standard Wi-Fi states. ICMP metrics provide the connectivity-to-Internet statistics. Historic data can be used to make inferences about the Wi-Fi load on the network.
WiFi Inspect
Available SSID
Options
Channel
Advertised by AP Max bitrate (Mbps)
Metrics Wi-Fi Signal Level (DBm)
Results interpretation: Results are reported in the WiFi Dashboard. Results can be associated with a floorplan. Charting a Wi-Fi AP over time may show gaps in the statistics. This can be caused by several factors: the WiFi AP is busy and is not sending out the beacon signal used to calculate measurements; the WiFi AP signal level drops below the signal strength necessary to be detected and reported; or interference from a stronger signal from another Wi-Fi AP using the same channel number causes the weaker-signal AP not to be reported.
Potential Errors: If there is no wlan0 interface on the XRPi Wi-Fi probe - "No response from file".
If the probe is not registered - "Probe not registered or active for real service".
Youtube video
Purpose: Measure the user experience of watching real-time video files from the YouTube service. The test will also provide useful information about the actual location of the video server (with IP address and URL), as well as ICMP metrics to reach the server.
Calculated metrics:
Video Download Rate (kbps): average download rate for the video
Video Download time (sec): download time
Video Duration: video duration
Video Required BitRate (kbps): Required bitrate taking into
account video size and duration. Useful User experience metric.
Video Size (MBytes): video size
Video Total rebuffering events: number of video rebuffering events during the video watch. This is a critical user experience metric, as it determines the customer experience.
Format code Container Resolution
17 3gp 176x144
36 3gp 320x240
5 flv 400x240
43 webm 640x360
18 mp4 640x360
22 mp4 1280x720
Results interpretation: The obtained download rate, compared to the computed required bit rate, gives a good understanding of the user experience. It is considered that the download rate should be on average 20% over the required bit rate to ensure a smooth user experience.
Ixia recommends 16:9 (1280 x 720) as the best YouTube download format.
This can be confusing, as you might expect it to be 1080p. YouTube can download at 1080p, but the audio is in a separate download stream, and the web browser or application would need to combine them. Combining the two streams is fine for TVs with specialized hardware but not so good for web browsers. Whenever you watch a YouTube video with a web browser it is nearly always 1280x720. Additionally, the video may not be available as separate video and audio streams.
l The video server URL has been converted to the YouTube security link and will differ slightly from the video URL entered as part of the test configuration.
l Calculated metrics cannot be graphed; only metrics configurable as thresholds can.
Errors
In "Probe Management" there is a web GUI for "Test Templates". You can create a test
template and give it a name for referencing in test execution. Youc an configure all prop-
erties of a test except the endpoints to be used. You can create a test suite by adding
multiple tests of the same test type (N2N/Mesh/Real Servcices) to each test template.
For each test type added to the template all usual configuration of a regular test can be
configured, such as thresholds, email for alarms, schedule frequency of tests. For each
test the you have a flag to enforce the test is scheduled at correct synced time. Each
test template can be restricted to be available to only certain user groups. Once all
tests have been added to the new test template it is "added" and is available for use in
"Test Execution".
In "Test Execution" for each test type such as "Test Execution Node to Node" if a test
template is available then a button is presented to the user allowing the selection of
"normal" test or using a previously saved test template. A pull down menu allows for the
selection of a test template then the User only has to select the endpoints to assign for
the test template. When tests are running an icon identifies normal tests and tests
based on test templates.
Under "Test Execution" there is a web page "Test Execution Templates" which is used to
manage tests running using a test template (e.g. voice_suite). From this web page indi-
vidual tests on endpoints can be paused and deleted.
A test template can only be deleted once all scheduled tests using that
specific test template have been removed.
The name of the test template (test identifier) does not support special
characters such as quotes.
The system displays WiFi statistics when, for any TCP XR2000/XRPi/SW EP test, the Probe From is a WiFi interface, and when, for any UDP transport test, the Probe To is WiFi. The table below provides information on the various combinations of a WiFi-enabled or non-WiFi-enabled endpoint in the "TO/FROM" position, for TCP and UDP traffic tests, that will provide WiFi statistics.
Wifi Statistics
[Table: Test Name, Probe From, Probe To, Availability - N2N and Mesh tests]
Supported Standards
Node to Node
Protocol Standard
RTP RFC3550
Real Services
Protocol Standard
HTTP RFC2616
HTTPS RFC2818
FTP RFC959
ICMP RFC792
Traceroute RFC1393
POP3 RFC1939
Testing
Metric Standard
Jitter RFC1889
MOS ITU-T G.107
Protocol Standard
SNMP RFC1157
Protocol Standard
HTTP RFC2616
HTTPS RFC2818
You need administrator access to the server hosting Hawkeye to be able to use this.
The Hawkeye system is currently based on a CentOS 6.6 system, which follows the System V standards; specifically, the init scripts that control the various services. The startup scripts themselves are located in /etc/init.d. Each service has its own script that accepts a specific set of standard commands to control the service in question. The most relevant commands for our purposes are the "start", "stop" and "check" options, which start the service, stop the service and check the status of the service respectively. These commands can be run either by passing the command to the script as an option (e.g. "/etc/init.d/httpd start"), or by using the "service" command (e.g. "service httpd start"), which simply calls the associated script, passing the given command to it.
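As a minimal illustration of the two equivalent forms described above (httpd and mysqld are only examples; the exact set of services on a given Hawkeye server may differ):
# /etc/init.d/httpd stop
# /etc/init.d/httpd start
(stops and restarts the web service through its init script)
# service mysqld status
(queries the MySQL service through the service wrapper, which calls the same init script)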
l Export or collect key information and alarms generated by Hawkeye to a third-party system.
l Automate some tasks from third-party tools (for example, generate automatic tests).
l Generate customized actions.
Based on test results, Hawkeye can trigger a Hawkeye SNMP trap to a third party. The MIB is based on SNMPv2 and is easily imported into supervision systems. Automatic emails can also be sent with the content of an aggregated report - these are user defined and can be scheduled over time (every hour/day/week).
Hawkeye database has been designed to be efficiently integrated to third party report-
ing tools.
Therefore, the test data structure storage consists of a simple structure for storing the
data record.
Each active test in Hawkeye is considered as a single Data record, recorded into Test
Data Record Table.
Test Data Record contains a set of information describing the test result, with inform-
ation about:
l Unique ID
l Test execution time
Each Test Data record is independent from each other, and independent from any other
table in the database structure, therefore can be displayed as an independent and flat
view containing all available test data.
see MySQL database Management for more details on the test data record structure.
Hawkeye database
The configuration file for the MySQL server is /etc/my.cnf. This file contains the tuning
parameters for the server as well as defining where the MySQL server stores its data
files.
For larger installations, you should consult Ixia Support to determine if any of the values
here should be modified to accommodate the requirements of the specific installation.
On a Hawkeye system, by default, the data tables for the MySQL server are stored in
/home/mysql_data. The expectation is that, on a system with multiple partitions, the
/home partition will be a large local disk partition. If the installation requires NFS moun-
ted home directories, then these files should be moved to another location on a large
local disk.
[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
#log-bin = /home/mysql_data/mysql-bin
#expire-logs-days = 14
#sync-binlog = 1
#tmp-table-size = 32M
#max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 1000
thread-cache-size = 100
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048
innodb-flush-method = O_DIRECT
#innodb-log-files-in-group = 2
#innodb-flush-log-at-trx-commit = 1
#innodb-file-per-table = 1
innodb-buffer-pool-size = 12G
datadir=/home/mysql_data
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
slow-query-log = 1
slow-query-log-file = /home/mysql_data/mysql-slow.log
phpMyAdmin is a web-based tool for MySQL administration and advanced configuration. More information and documentation about this tool can be found at:
http://www.phpmyadmin.net/home_page/docs.php
To access the Hawkeye Web Portal database administration tool, use the following URL:
http://yourserverIP/phpmyadmin
Username: root
Password: Ixia123
If selected, the complete database will appear and be available, with table size in
entries and disk space.
https://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html
Login as root,
go to users tab
scroll down
----------------------------------------
[MySQLDatabase]
"MySQL_Host" == "localhost"
"MySQL_Database" == "HawkeyePro"
"MySQL_User" == "root"
"MySQL_Password" == "Password0"
"MySQL_UseSSL" == "0"
Login as root,
go to the Users tab,
click Edit privileges for the ixia and/or root user (the ixia user has restricted privileges),
scroll down,
change Host from local to "any host", or use "Use text field" where a specific originating IP can be specified.
As noted below, the Windows firewall port must be opened for inbound requests to MySQL.
Advanced SQL modification scripts are sometimes required for ad hoc bug fixes or for applying a patch on the server. The recommendation is to run them through phpMyAdmin (running them from the command line using the mysql tool is also possible; refer to the corresponding MySQL user guide).
steps:
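For the command-line alternative mentioned above, a minimal sketch is shown below. The file name /tmp/patch.sql is a placeholder; the user and database name are the defaults listed earlier in this chapter:
# mysql -u root -p HawkeyePro < /tmp/patch.sql
(applies the SQL script to the HawkeyePro database; the root password is prompted for)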
Hawkeye database has been designed to be efficiently integrated to third party report-
ing tools.
Therefore, the test data structure storage consists of a simple structure for storing the
data record.
Each active test in Hawkeye is considered as a single data record, recorded into Test
Data Record Table.
Test data record contains a set of information describing the test result, with information
about:
l Unique ID;
l Test execution time;
l ID to test_data_record_filters table
l Reason cause;
Each test data record is independent from each other, and independent from any other
table in the database structure, therefore can be displayed as an independent and flat
view containing all available test data.
The test_data_record_filters table contains metadata with more information about the node from, node to, test type, and so on. Each test_data_record is linked to a test_data_record_filters entry. A join query on the two tables provides explicit information about the content (see the sketch below).
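A minimal sketch of such a join, run with the mysql command-line client, is shown below. The exact column used to link the two tables is an assumption (shown here as a hypothetical filter ID column); check the real table and column names in phpMyAdmin before using it:
# mysql -u root -p HawkeyePro -e "SELECT tdr.*, f.* FROM test_data_record tdr JOIN test_data_record_filters f ON tdr.test_data_record_filters_id = f.id LIMIT 10;"
(lists the first 10 test data records together with their filter metadata)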
Each Test Data Record may contain a set of KPI (Key Performance Indicators) that will
contain the information about each performance indicator, and value for the data record.
A database table called kpi_result_table contains the information about the KPI. Each
KPI result has to be linked to the test_data_record (using TDR_ID).
Another table, called kpi_string_information, is linked to the TDR and contains string KPIs, typically used to provide further description of a test result. One Test Data Record (TDR) can contain as many results as needed, in string or integer format.
l A metric name.
l A pair name (to identify a specific pair in a set of tests, for example when using
COS testing or traffic mix);
l A Status (PASS/FAIL);
The following drawing illustrates the different tables mentioned above and their struc-
ture.
Standard SQL queries shall be made to get these into any reporting, data mining or data
post processing engines.
SNMP Traps
The Hawkeye server supports the generation of SNMP traps for tests originating from endpoints. Tests configured to generate SNMP traps on certain error/failure conditions send the SNMP traps in MIB format to a designated SNMP server.
SNMP traps are supported for all test types. When configuring a test, select Show
Alarm options to see the SNMP configuration.
Each test can be enabled to trigger SNMP traps, based on a failure or error result. There
is the option to only report an SNMP trap on a “change”. This means if a test on an end-
point is scheduled to run every 5 minutes and it is failing every time, it is only reported
as an SNMP trap the very first time it fails, then it generates another SNMP trap once
the endpoint test result changes to be an error or passes.
Files required:
/usr/share/snmp/mibs/NET-SNMP-MIB.txt
/home/ixia/Hawkeye/includes/MIBS/Hawkeye-MIB.txt
/home/ixia/Hawkeye/includes/MIBS/IXIA-SMI.txt
There are a number of independent free web sites that verify that custom MIB files conform to the correct standards. On these web sites, load the retrieved Hawkeye MIB files and validate them against the MIB standards. There are three supported versions of MIB files (v1, v2 and v3).
Also configure the IP address of your SNMP trap server, to which the SNMP traps are to be sent.
Multiple SNMP trap receivers can be supported by using a third-party application that forwards UDP packets received from the Hawkeye server IP address/port to a defined list of destination SNMP trap receivers (IP/port). There are many free applications available for this purpose, such as the free Linux samplicator application. To achieve this, the SNMP server IP defined on the Hawkeye server will be the IP of the third-party application doing the forwarding.
1. Load the MIB modules identified and retrieved from the Hawkeye server in the above steps (NET-SNMP-MIB.txt, IXIA-SMI.txt and Hawkeye-MIB.txt). This enables your SNMP trap server to decode the Hawkeye SNMP traps.
2. Configure the host address of the Hawkeye server that will be the source of the
SNMP traps.
3. Set the host port used to generate and be monitored for the SNMP traps to 162.
4. Set the community to ixia.
5. Following a Hawkeye Server upgrade, remove, flush, or delete any Hawkeye MIB
files from the SNMP server (receiver) and pull in the latest version as new test
types. Else, changed test metrics will cause decode issues.
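As one hedged example of the receiver side, the sketch below configures the net-snmp snmptrapd daemon as the trap receiver using the community and port from the steps above. Whether net-snmp is your receiver is an assumption, and the MIB directory must be wherever you copied the Hawkeye MIB files:
# echo "authCommunity log,execute,net ixia" >> /etc/snmp/snmptrapd.conf
(accepts and logs traps sent with the community "ixia")
# snmptrapd -f -Lo -m ALL -M /usr/share/snmp/mibs udp:162
(runs the receiver in the foreground on UDP port 162, loading all MIBs from the given directory and logging decoded traps to the console)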
When configuring the test, set thresholds to the expected values and additionally select Show Alarm Options.
Also select the conditions for the alarm to be triggered that generates the SNMP Trap
(Error/Failed/Status change).
Confirm that you are able to see logs such as the one below, where the first IP address is the IP address of the SNMP server that receives the SNMP trap.
20:24:58 UTC (Alarm send) sending snmptrap -
snmptrap -M /usr/share/snmp/mibs/:/home/ixia/Hawkeye/includes/MIBS -c
ixia -v 2c 10.220.120.11 "" HAWKEYE-MIB::hawkeye-notification trapID s
"xr2000autoqa2xrpiwifiqa2Skype4BTrafficDelaymsAudioRTPfromto12kbpsFAIL"
summary s "xr2000autoqa2 to xrpiwifiqa2 Skype4B Traffic Audio RTP fromto
12 kbps Delay ms Failed" runID s "8285" TimeStamp s "2016-11-30
20:24:57" TestType s "Skype4B Traffic" TestStatus s "Failed" From s
"xr2000-auto-qa-2" To s "xrpi-wifi-qa-2" PAIRNAME s "Audio RTP from->to
12 kbps" MetricName s "Delay (ms)" METRICVALUE s "2.5" DefaultThreshold-
Type s "1" THRESHOLD s "9000" METRICSTATUS s "Failed" FailReason s "
Threshold failed on Audio RTP from->to 12 kbps Delay (ms) "
Additionally, use the tcpdump command to confirm that the SNMP trap packet is not being blocked in the network. The tcpdump logs below (taken from the Hawkeye server) show that an SNMP trap is generated from the Hawkeye server (10.220.120.127) and sent to the SNMP trap server/receiver (10.220.20.45) but is blocked as unreachable.
In the above example, unlike the Network Unreachable and Host Unreachable messages
which come from routers, the Port Unreachable message comes from the SNMP server-
/receiver. The primary implication for troubleshooting is that the frame was successfully
routed across the communications infrastructure, the last router ARP'ed for the host, got
the response, and sent the frame. Furthermore, the intended SNMP server (destination
host) was on-line and willing to accept the frame into its communications buffer. The
frame was then processed and an attempt was made to send the data up to the des-
tination port number (UDP port 162) and the port process (SNMP server) did not exist.
The protocol handler then reports Destination Port Unreachable.
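A capture command of the kind used for the logs above is sketched below; eth0 is a placeholder for the Hawkeye server's outbound interface, and the host address should match your SNMP trap receiver:
# tcpdump -ni eth0 'host 10.220.20.45 and (udp port 162 or icmp)'
(shows the outgoing SNMP trap packets and any ICMP unreachable messages related to that host)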
For SNMP traps generated by the Hawkeye server to reach a third-party SNMP trap receiver, UDP ports 161 and 162 need to be open for UDP traffic on the Hawkeye server and on the switches between the two parties. One way to open the ports on the Hawkeye server is to connect to it with PuTTY and then use the following CLI commands to open the firewall for SNMP traffic:
# iptables -I INPUT -p udp -m udp --dport 161 -j ACCEPT
# iptables -I INPUT -p udp -m udp --dport 162 -j ACCEPT
# iptables-save > /etc/sysconfig/iptables
ixia MODULE-IDENTITY
LAST-UPDATED "201701040000Z"
ORGANIZATION "www.ixiacom.com"
CONTACT-INFO
" Ixia Communications
Postal: 26601 W. Agoura Rd.
Calabasas, CA 91302
USA
Email: [email protected]"
DESCRIPTION
"The Structure of Management Information for the
Ixia Communication enterprise."
::= { enterprises 3054 } -- assigned by IANA
ixiaProducts OBJECT-IDENTITY
STATUS current
DESCRIPTION
"ixiaProducts is the root OBJECT IDENTIFIER from
which sysObjectID values are assigned."
::= { ixia 1 }
END
IMPORTS
MODULE-IDENTITY,
OBJECT-TYPE,
Integer32,
NOTIFICATION-TYPE
FROM SNMPv2-SMI
SnmpAdminString
FROM SNMP-FRAMEWORK-MIB
ixiaProducts
FROM IXIA-SMI;
hawkeye MODULE-IDENTITY
LAST-UPDATED "201701040000Z"
ORGANIZATION "www.ixiacom.com"
CONTACT-INFO
" Ixia Communications
Postal: 26601 W. Agoura Rd.
Calabasas, CA 91302
USA
Email: [email protected]"
DESCRIPTION
"Hawkeye MIB objects for trap notifications"
REVISION "201604260000Z"
DESCRIPTION
"Hawkeye MIB"
REVISION "201402060000Z"
DESCRIPTION
"First draft"
::= { ixiaProducts 1 }
probeID OBJECT-TYPE
SYNTAX Integer32
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 1 }
probeMgtIP OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 2 }
probeName OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 3 }
probeStatus OBJECT-TYPE
SYNTAX INTEGER {
down(0),
up(1)
}
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 4 }
testAgentName OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 5 }
runID OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 6 }
timeStamp OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 7 }
testStatus OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 8 }
from OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 9 }
to OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 10 }
errorReason OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 11 }
failReason OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 12 }
pairname OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 13 }
metricName OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 14 }
metricvalue OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 15 }
defaultThresholdType OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 16 }
threshold OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 17 }
metricstatus OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 18 }
timeStampProviso OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 19 }
provisoAlarmMessage OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 20 }
testType OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 21 }
summary OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 22 }
trapID OBJECT-TYPE
SYNTAX SnmpAdminString
MAX-ACCESS accessible-for-notify
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationobjects 23 }
hawkeye-notification NOTIFICATION-TYPE
OBJECTS {
trapID,
summary,
runID,
timeStamp,
testType,
testStatus,
from,
to,
pairname,
metricName,
metricvalue,
defaultThresholdType,
threshold,
metricstatus,
failReason
}
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationprefix 1 }
hawkeye-errornotification NOTIFICATION-TYPE
OBJECTS {
trapID,
summary,
runID,
timeStamp,
testType,
testStatus,
from,
to,
errorReason
}
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationprefix 2 }
hawkeye-probenotification NOTIFICATION-TYPE
OBJECTS {
probeID,
probeMgtIP,
probeName,
probeStatus,
testAgentName
}
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationprefix 3 }
hawkeye-provisoalarm NOTIFICATION-TYPE
OBJECTS {
timeStampProviso,
provisoAlarmMessage
}
STATUS current
DESCRIPTION
""
::= { hawkeye-notificationprefix 4 }
END
The MIB files can be placed in the standard MIB directory (typically /usr/share/snmp/mibs)
on the system that receives the traps.
The vast majority of systems do not need this feature, which is only required when importing
external data into the TDR database out of sequence.
Because TDR ID sequencing is tied to the TDR timestamp for historical search, switching to
this feature is then required.
Steps:
2) Go into the configuration file configuration.txt and set the following option in the Main
section:
"useTimestampTDRID"=="1"
If the setting does not exist, add the line under the Main section.
The global API framework uses SOAP web services and allows third parties to connect
to the Hawkeye Web services through industry-standard APIs.
SOAP, originally defined as Simple Object Access Protocol, is a protocol specification for
exchanging structured information in the implementation of Web Services in computer
networks. It relies on Extensible Markup Language (XML) for its message format.
The following diagram presents a high-level scheme of the SOAP XML implementation.
The SOAP XML server on Hawkeye is implemented on top of the generic PHP php_soap
extension. It is therefore compatible with any SOAP client and agnostic to the connecting
technology.
It is installed with, and uses the same port as, the (Apache) Hawkeye server GUI. The default
port is 80, but if port 443 is specified for security on the Hawkeye server, it listens for
incoming requests on port 443.
It publishes a WSDL file for creating a SOAP client connector and describing all available
APIs.
A modification needs to be made in the Hawkeye.wsdl file to get access to the APIs. On the
Hawkeye server, log in as root and edit the following file:
/home/ixia/Hawkeye/WebServer/WebServices/Hawkeye.wsdl
Replace
<soap:address location="http://127.0.0.1:80/WebServices/HawkeyeWebService.php"/> <!-- modify path to server path -->
with your server address (IP or URL), using https instead of http if relevant.
Example:
<soap:address location="https://myHawkeyeServerURL/WebServices/HawkeyeWebService.php"/> <!-- modify path to server path -->
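As an illustration, the short sketch below connects a SOAP client to the Hawkeye web
services and calls the testWebService method described later in this chapter. This is a
minimal sketch, not part of the product: it assumes the third-party Python zeep library and
assumes the WSDL is served over HTTP at the /WebServices/Hawkeye.wsdl path of your
server (adjust the host, port, and http/https to your deployment).
# Minimal SOAP client sketch (assumes: pip install zeep).
from zeep import Client
# Assumed WSDL URL; replace myHawkeyeServerURL with your server's IP or URL.
wsdl_url = "http://myHawkeyeServerURL/WebServices/Hawkeye.wsdl"
client = Client(wsdl_url)
# testWebService simply validates connectivity; Param1 and Param2 are free-form strings.
result = client.service.testWebService(Param1="p1", Param2="p2")
print(result)
Any of the other APIs in this chapter can be invoked the same way through client.service,
using the parameter names shown in the corresponding SOAP request envelopes.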
addGroupAndTestTypesRequest
This is used to add a new group or to update an existing group in Hawkeye.
Input:
addGroupAndTestTypesResponse
Output:
addProbe
This is used to add a new probe to Hawkeye. This is only relevant for manual probes
and is not required for automatic probes (which we recommend).
Parameters
Available IDs are:
"2";"Software"
"6";"xr2000"
"7";"xr2000_vm"
"8";"xr_pi"
probeAvailability - integer -
2 for "to" only
output
0: failed to add
1: added
addUser
This is used to add a new user or to update an existing user in Hawkeye.
Input:
addUserResponse
Output:
cancelTestExecution
This is used to pause/remove a test execution.
Input:
cancelTestExecutionResponse
Output:
checkTestExecutionResultStatus
This is used to find out about a test execution, based on a test execution ID that was
created with the API or manually in the UI.
This returns the last TDR ID found in the database for this execution ID.
Parameters:
execID: corresponds to test execution ID
<x:Envelope xmlns:x="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:urn="urn:Hawkeye">
<x:Header/>
<x:Body>
<urn:checkTestExecutionResultStatus>
<urn:execID>?</urn:execID>
</urn:checkTestExecutionResultStatus>
</x:Body>
</x:Envelope>
output:
collectTestExecutionResultRange
This API allows searching results for a specific test execution ID and collects all the
information on KPIs for a number of test results in history, limited to 1000 test results.
Output is in JSON.
Parameters:
NumberofResults: number of results to look for; results are provided from most recent
to least recent. The limit for the number of results in the time interval is 1000; an error is
returned for higher values. For the latest result in the interval, use 1 for this value.
output: a result in JSON format with fields populated describing the TDR and the result (one
entry per metric)
example:
[{"ID":"14547914077197","Value":"2053.64","METRIC":"Download Rate
(kbps)","PAIR_NAME":"HTTP Server Test","NODEFROM_NAME":"xrpi2Par-
is","NODETO_NAME":"www.google.com","TIMESTAMP":"2016-02-06
21:43:27","STATUS":"Passed","REASON_CAUSE":"","TDR_com-
ment":"DownloadFullPage: 1 ip_version: ipv4 UseProxy: 0 ProxyAddress:
","MODULE":"RealService","TESTTYPE_ID":"5140","TESTTYPE":"HTTP Server
Test","MESHID":"0","MESHNAME":"","NODEFROM_PROBEID":"51150","NODETO_
PROBEID":"0","NODEFROM_IP":"10.204.20.27","NODETO_IP":"","NODEFROM_
MGMTIP":"10.204.20.27","NODETO_MGMTIP":"","NODEFROM_
LOCATION":"TEST","NODETO_LOCATION":"","NODEFROM_PROBE_GROUP":"","NODETO_
PROBE_GROUP":"","EXECUTION_USER_ID":"1","EXECUTION_USER_LOGIN":"sysad-
min","TEST_DURATION":"0","TESTEXEC_ID":"23552","TESTEXEC_STRING":""},
{"ID":"14547914077197","Value":"261","METRIC":"Download Time
(msec)","PAIR_NAME":"HTTP Server Test","NODEFROM_NAME":"xrpi2Par-
is","NODETO_NAME":"www.google.com","TIMESTAMP":"2016-02-06
21:43:27","STATUS":"Passed","REASON_CAUSE":"","TDR_com-
ment":"DownloadFullPage: 1 ip_version: ipv4 UseProxy: 0 ProxyAddress:
","MODULE":"RealService","TESTTYPE_ID":"5140","TESTTYPE":"HTTP Server
Test","MESHID":"0","MESHNAME":"","NODEFROM_PROBEID":"51150","NODETO_
PROBEID":"0","NODEFROM_IP":"10.204.20.27","NODETO_IP":"","NODEFROM_
MGMTIP":"10.204.20.27","NODETO_MGMTIP":"","NODEFROM_
LOCATION":"TEST","NODETO_LOCATION":"","NODEFROM_PROBE_GROUP":"","NODETO_
PROBE_GROUP":"","EXECUTION_USER_ID":"1","EXECUTION_USER_LOGIN":"sysad-
min","TEST_DURATION":"0","TESTEXEC_ID":"23552","TESTEXEC_STRING":""},
{"ID":"14547914077197","Value":"67","METRIC":"Files Size (kB)","PAIR_
NAME":"HTTP Server Test","NODEFROM_NAME":"xrpi2Paris","NODETO_
NAME":"www.google.com","TIMESTAMP":"2016-02-06
21:43:27","STATUS":"Passed","REASON_CAUSE":"","TDR_com-
ment":"DownloadFullPage: 1 ip_version: ipv4 UseProxy: 0 ProxyAddress:
","MODULE":"RealService","TESTTYPE_ID":"5140","TESTTYPE":"HTTP Server
Test","MESHID":"0","MESHNAME":"","NODEFROM_PROBEID":"51150","NODETO_
PROBEID":"0","NODEFROM_IP":"10.204.20.27","NODETO_IP":"","NODEFROM_
MGMTIP":"10.204.20.27","NODETO_MGMTIP":"","NODEFROM_
LOCATION":"TEST","NODETO_LOCATION":"","NODEFROM_PROBE_GROUP":"","NODETO_
PROBE_GROUP":"","EXECUTION_USER_ID":"1","EXECUTION_USER_LOGIN":"sysad-
min","TEST_DURATION":"0","TESTEXEC_ID":"23552","TESTEXEC_STRING":""},
{"ID":"14547914077197","Value":"6","METRIC":"Number Of Files","PAIR_
NAME":"HTTP Server Test","NODEFROM_NAME":"xrpi2Paris","NODETO_
NAME":"www.google.com","TIMESTAMP":"2016-02-06
21:43:27","STATUS":"Passed","REASON_CAUSE":"","TDR_com-
ment":"DownloadFullPage: 1 ip_version: ipv4 UseProxy: 0 ProxyAddress:
","MODULE":"RealService","TESTTYPE_ID":"5140","TESTTYPE":"HTTP Server
Test","MESHID":"0","MESHNAME":"","NODEFROM_PROBEID":"51150","NODETO_
PROBEID":"0","NODEFROM_IP":"10.204.20.27","NODETO_IP":"","NODEFROM_
MGMTIP":"10.204.20.27","NODETO_MGMTIP":"","NODEFROM_
LOCATION":"TEST","NODETO_LOCATION":"","NODEFROM_PROBE_GROUP":"","NODETO_
PROBE_GROUP":"","EXECUTION_USER_ID":"1","EXECUTION_USER_LOGIN":"sysad-
min","TEST_DURATION":"0","TESTEXEC_ID":"23552","TESTEXEC_STRING":""},
{"ID":"14547914077197","Value":"100.14","METRIC":"Time to First Byte Avg
(ms)","PAIR_NAME":"HTTP Server Test","NODEFROM_
NAME":"xrpi2Paris","NODETO_NAME":"www.google.com","TIMESTAMP":"2016-02-
06 21:43:27","STATUS":"Passed","REASON_CAUSE":"","TDR_com-
ment":"DownloadFullPage: 1 ip_version: ipv4 UseProxy: 0 ProxyAddress:
","MODULE":"RealService","TESTTYPE_ID":"5140","TESTTYPE":"HTTP Server
Test","MESHID":"0","MESHNAME":"","NODEFROM_PROBEID":"51150","NODETO_
PROBEID":"0","NODEFROM_IP":"10.204.20.27","NODETO_IP":"","NODEFROM_
MGMTIP":"10.204.20.27","NODETO_MGMTIP":"","NODEFROM_
LOCATION":"TEST","NODETO_LOCATION":"","NODEFROM_PROBE_GROUP":"","NODETO_
PROBE_GROUP":"","EXECUTION_USER_ID":"1","EXECUTION_USER_LOGIN":"sysad-
min","TEST_DURATION":"0","TESTEXEC_ID":"23552","TESTEXEC_STRING":""},
{"ID":"14547914077197","Value":"206","METRIC":"Time to First Byte Max
(ms)","PAIR_NAME":"HTTP Server Test","NODEFROM_NAME":"xrpi2Par-
is","NODETO_NAME":"www.google.com","TIMESTAMP":"2016-02-06
21:43:27","STATUS":"Passed","REASON_CAUSE":"","TDR_com-
ment":"DownloadFullPage: 1 ip_version: ipv4 UseProxy: 0 ProxyAddress:
","MODULE":"RealService","TESTTYPE_ID":"5140","TESTTYPE":"HTTP Server
Test","MESHID":"0","MESHNAME":"","NODEFROM_PROBEID":"51150","NODETO_
PROBEID":"0","NODEFROM_IP":"10.204.20.27","NODETO_IP":"","NODEFROM_
MGMTIP":"10.204.20.27","NODETO_MGMTIP":"","NODEFROM_
LOCATION":"TEST","NODETO_LOCATION":"","NODEFROM_PROBE_GROUP":"","NODETO_
PROBE_GROUP":"","EXECUTION_USER_ID":"1","EXECUTION_USER_LOGIN":"sysad-
min","TEST_DURATION":"0","TESTEXEC_ID":"23552","TESTEXEC_STRING":""},
{"ID":"14547736316850","Value":"755.99","METRIC":"Download Rate
(kbps)","PAIR_NAME":"HTTP Server Test","NODEFROM_NAME":"xrpi2Par-
is","NODETO_NAME":"www.google.com","TIMESTAMP":"2016-02-06
16:47:11","STATUS":"Passed","REASON_CAUSE":"","TDR_com-
ment":"DownloadFullPage: 1 ip_version: ipv4 UseProxy: 0 ProxyAddress:
","MODULE":"RealService","TESTTYPE_ID":"5140","TESTTYPE":"HTTP Server
Test","MESHID":"0","MESHNAME":"","NODEFROM_PROBEID":"51150","NODETO_
PROBEID":"0","NODEFROM_IP":"10.204.20.27","NODETO_IP":"","NODEFROM_
MGMTIP":"10.204.20.27","NODETO_MGMTIP":"","NODEFROM_
LOCATION":"TEST","NODETO_LOCATION":"","NODEFROM_PROBE_GROUP":"","NODETO_
PROBE_GROUP":"","EXECUTION_USER_ID":"1","EXECUTION_USER_LOGIN":"sysad-
min","TEST_DURATION":"0","TESTEXEC_ID":"23552","TESTEXEC_STRING":""},
{"ID":"14547736316850","Value":"709","METRIC":"Download Time
(msec)","PAIR_NAME":"HTTP Server Test","NODEFROM_NAME":"xrpi2Par-
is","NODETO_NAME":"www.google.com","TIMESTAMP":"2016-02-06
16:47:11","STATUS":"Passed","REASON_CAUSE":"","TDR_com-
ment":"DownloadFullPage: 1 ip_version: ipv4 UseProxy: 0 ProxyAddress:
","MODULE":"RealService","TESTTYPE_ID":"5140","TESTTYPE":"HTTP Server
Test","MESHID":"0","MESHNAME":"","NODEFROM_PROBEID":"51150","NODETO_
PROBEID":"0","NODEFROM_IP":"10.204.20.27","NODETO_IP":"","NODEFROM_
MGMTIP":"10.204.20.27","NODETO_MGMTIP":"","NODEFROM_
LOCATION":"TEST","NODETO_LOCATION":"","NODEFROM_PROBE_GROUP":"","NODETO_
PROBE_GROUP":"","EXECUTION_USER_ID":"1","EXECUTION_USER_LOGIN":"sysad-
min","TEST_DURATION":"0","TESTEXEC_ID":"23552","TESTEXEC_STRING":""},
{"ID":"14547736316850","Value":"67","METRIC":"Files Size (kB)","PAIR_
NAME":"HTTP Server Test","NODEFROM_NAME":"xrpi2Paris","NODETO_
NAME":"www.google.com","TIMESTAMP":"2016-02-06
16:47:11","STATUS":"Passed","REASON_CAUSE":"","TDR_com-
ment":"DownloadFullPage: 1 ip_version: ipv4 UseProxy: 0 ProxyAddress:
","MODULE":"RealService","TESTTYPE_ID":"5140","TESTTYPE":"HTTP Server
Test","MESHID":"0","MESHNAME":"","NODEFROM_PROBEID":"51150","NODETO_
PROBEID":"0","NODEFROM_IP":"10.204.20.27","NODETO_IP":"","NODEFROM_
MGMTIP":"10.204.20.27","NODETO_MGMTIP":"","NODEFROM_
LOCATION":"TEST","NODETO_LOCATION":"","NODEFROM_PROBE_GROUP":"","NODETO_
PROBE_GROUP":"","EXECUTION_USER_ID":"1","EXECUTION_USER_LOGIN":"sysad-
min","TEST_DURATION":"0","TESTEXEC_ID":"23552","TESTEXEC_STRING":""},
{"ID":"14547736316850","Value":"6","METRIC":"Number Of Files","PAIR_
NAME":"HTTP Server Test","NODEFROM_NAME":"xrpi2Paris","NODETO_
NAME":"www.google.com","TIMESTAMP":"2016-02-06
16:47:11","STATUS":"Passed","REASON_CAUSE":"","TDR_com-
ment":"DownloadFullPage: 1 ip_version: ipv4 UseProxy: 0 ProxyAddress:
","MODULE":"RealService","TESTTYPE_ID":"5140","TESTTYPE":"HTTP Server
Test","MESHID":"0","MESHNAME":"","NODEFROM_PROBEID":"51150","NODETO_
PROBEID":"0","NODEFROM_IP":"10.204.20.27","NODETO_IP":"","NODEFROM_
MGMTIP":"10.204.20.27","NODETO_MGMTIP":"","NODEFROM_
LOCATION":"TEST","NODETO_LOCATION":"","NODEFROM_PROBE_GROUP":"","NODETO_
PROBE_GROUP":"","EXECUTION_USER_ID":"1","EXECUTION_USER_LOGIN":"sysad-
min","TEST_DURATION":"0","TESTEXEC_ID":"23552","TESTEXEC_STRING":""},
{"ID":"14547736316850","Value":"314.57","METRIC":"Time to First Byte Avg
(ms)","PAIR_NAME":"HTTP Server Test","NODEFROM_NAME":"xrpi2Par-
is","NODETO_NAME":"www.google.com","TIMESTAMP":"2016-02-06
16:47:11","STATUS":"Passed","REASON_CAUSE":"","TDR_com-
ment":"DownloadFullPage: 1 ip_version: ipv4 UseProxy: 0 ProxyAddress:
","MODULE":"RealService","TESTTYPE_ID":"5140","TESTTYPE":"HTTP Server
Test","MESHID":"0","MESHNAME":"","NODEFROM_PROBEID":"51150","NODETO_
PROBEID":"0","NODEFROM_IP":"10.204.20.27","NODETO_IP":"","NODEFROM_
MGMTIP":"10.204.20.27","NODETO_MGMTIP":"","NODEFROM_
LOCATION":"TEST","NODETO_LOCATION":"","NODEFROM_PROBE_GROUP":"","NODETO_
PROBE_GROUP":"","EXECUTION_USER_ID":"1","EXECUTION_USER_LOGIN":"sysad-
min","TEST_DURATION":"0","TESTEXEC_ID":"23552","TESTEXEC_STRING":""},
{"ID":"14547736316850","Value":"364","METRIC":"Time to First Byte Max
(ms)","PAIR_NAME":"HTTP Server Test","NODEFROM_NAME":"xrpi2Par-
is","NODETO_NAME":"www.google.com","TIMESTAMP":"2016-02-06
16:47:11","STATUS":"Passed","REASON_CAUSE":"","TDR_com-
ment":"DownloadFullPage: 1 ip_version: ipv4 UseProxy: 0 ProxyAddress:
","MODULE":"RealService","TESTTYPE_ID":"5140","TESTTYPE":"HTTP Server
Test","MESHID":"0","MESHNAME":"","NODEFROM_PROBEID":"51150","NODETO_
PROBEID":"0","NODEFROM_IP":"10.204.20.27","NODETO_IP":"","NODEFROM_
MGMTIP":"10.204.20.27","NODETO_MGMTIP":"","NODEFROM_
LOCATION":"TEST","NODETO_LOCATION":"","NODEFROM_PROBE_GROUP":"","NODETO_
PROBE_GROUP":"","EXECUTION_USER_ID":"1","EXECUTION_USER_LOGIN":"sysad-
min","TEST_DURATION":"0","TESTEXEC_ID":"23552","TESTEXEC_STRING":""}]
Returns 0 if no results are found, and ERROR with an indication in case of errors in the input
parameters.
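Below is a hedged sketch of consuming this API from Python with the zeep client set up
earlier in this chapter. The parameter names execID and NumberofResults follow the
descriptions above but are assumptions that should be checked against the published WSDL;
the field names used when parsing come from the JSON example above.
# Sketch: fetch the most recent result of an execution and list its metrics.
import json
from zeep import Client
client = Client("http://myHawkeyeServerURL/WebServices/Hawkeye.wsdl")  # assumed URL
# Assumed parameter names (verify against the WSDL): execID, NumberofResults.
raw = client.service.collectTestExecutionResultRange(execID=23552, NumberofResults=1)
if raw in (0, "0"):
    print("no results found")
else:
    rows = json.loads(raw)  # one entry per metric
    for row in rows:
        print(row["TIMESTAMP"], row["METRIC"], row["Value"], row["STATUS"])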
collectAverageKPIresult
This function returns an array of results averaged over time for specific filters set within
the function.
fromDate: date to gather data from - format is YYYY-MM-DD HH:mm:ss (example
2016-02-01 20:07:15)
toDate: date to gather data to - format is YYYY-MM-DD HH:mm:ss (example
2016-02-01 20:07:15)
("MODULE","TESTTYPE_ID","TESTTYPE","MESHID","MESHNAME","NODEFROM_
PROBEID","NODETO_PROBEID","STATUS","NODEFROM_IP","NODETO_IP","NODEFROM_
MGMTIP","NODETO_MGMTIP","NODEFROM_NAME","NODETO_NAME","NODEFROM_
LOCATION","NODETO_LOCATION","NODEFROM_PROBE_GROUP","NODETO_PROBE_
GROUP","EXECUTION_USER_ID","EXECUTION_USER_LOGIN","TDR_comment","TEST_
DURATION","TESTEXEC_ID","TESTEXEC_STRING")
("MODULE","TESTTYPE_ID","TESTTYPE","MESHID","MESHNAME","NODEFROM_
PROBEID","NODETO_PROBEID","STATUS","NODEFROM_IP","NODETO_IP","NODEFROM_
MGMTIP","NODETO_MGMTIP","NODEFROM_NAME","NODETO_NAME","NODEFROM_
LOCATION","NODETO_LOCATION","NODEFROM_PROBE_GROUP","NODETO_PROBE_
GROUP","EXECUTION_USER_ID","EXECUTION_USER_LOGIN","TDR_comment","TEST_
DURATION","TESTEXEC_ID","TESTEXEC_STRING")
TestType: filter for test type. Leave blank for no filtering; % is a wildcard in the filter
value.
available values
"Adaptive Video"
"BitTorrent"
"DNS Test"
"DropBox Download"
"DropBox Upload"
"Email"
"Exchange_traffic"
"Flash RTMP"
"FTP Download"
"HTTP Test"
"HTTPS Test"
"ICMP performance"
"ICMP Test"
"IGMP Test"
"Skype4B Traffic"
"Netflix"
"Network KPI"
"TCP ping"
"Traceroute"
"UDP ping"
"Video Stream"
"Voice bidirectional"
"Voice from->to"
"Wifi Connect"
"Wifi Inspect"
"Youtube"
"Youtube Test"
PairName: filter for a specific pair name. Leave blank for no filtering; % is a wildcard in
the filter value.
available values
"KPI from->to"
"KPI to->from"
"TCP from->to"
"TCP to->from"
"UDP from->to"
"UDP to->from"
"Voice from->to"
"Voice to->from"
"SG0 COS"
"SG1 COS"
"SG2 COS"
"SG3 COS"
"SG4 COS"
"SG5 COS"
"SG6 COS"
"Stream"
"Audio Stream"
"Video Stream"
"Video from->to"
"DNS"
"COS 1"
"COS 2"
"COS 3"
"Voice"
"UDP transaction"
"TCP transaction"
"Exchange rcv"
"Exchange send"
"HTTP from->to"
"POP3 Response"
"SMTP Response"
Metric: the metric to filter on. Can be left blank for no filtering (not recommended); % is a
wildcard in the filter value.
"Delay (ms)"
"Jitter (ms)"
"Loss"
"Throughput (kbps)"
"MOS"
"MOS Max"
"MOS Min"
"Number Of Files"
"DNS Availability"
"Availability"
"Jitter (ms)"
"Packet loss"
"Authentication Availability"
"Frame Loss"
"PADI packets"
"PADO packets"
"Loss rate"
"Number Of flows"
"ts duplicates"
"Duplicated packets"
"throughput (kbps)"
"Video Codec"
"ICMP loss"
"Association Attempts"
"Connectivity Status"
<x:Envelope xmlns:x="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:urn="urn:Hawkeye">
<x:Header/>
<x:Body>
<urn:collectAverageKPIresult>
<urn:fromDate>2014-02-01 20:07:15</urn:fromDate>
<urn:toDate>2016-02-01 20:07:15</urn:toDate>
<urn:fromFilterType></urn:fromFilterType>
<urn:fromFilter></urn:fromFilter>
<urn:toFilterType></urn:toFilterType>
<urn:toFilter></urn:toFilter>
<urn:TestType></urn:TestType>
<urn:PairName></urn:PairName>
<urn:Metric></urn:Metric>
</urn:collectAverageKPIresult>
</x:Body>
</x:Envelope>
example
ErrorCode,Passed,Failed,myavgvalue,myvaluemin,myvaluemax,StandardDeviation,totalcount,threshold_min,threshold_max,threshold_type
0,1061481,287788,3.10,0.00,1280.00,5.97,1349269,5,8,0.0000
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ns1="urn:Hawkeye" xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
<SOAP-ENV:Body>
<ns1:collectAverageKPIresultResponse>
<return xsi:type="xsd:string">ErrorCode,Passed,Failed,myavgvalue,myvaluemin,myvaluemax,StandardDeviation,totalcount,threshold_min,threshold_max,threshold_type
0,1061481,287788,3.10,0.00,1280.00,5.97,1349269,5,8,0.0000</return>
</ns1:collectAverageKPIresultResponse>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
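For reference, a minimal Python sketch of the same query using the zeep client introduced
earlier in this chapter is shown below; the parameter names mirror the SOAP request
envelope above, the filter values are only examples, and the server URL is an assumption.
# Sketch: average KPI query over a date range, parsing the CSV string it returns.
import csv, io
from zeep import Client
client = Client("http://myHawkeyeServerURL/WebServices/Hawkeye.wsdl")  # assumed URL
ret = client.service.collectAverageKPIresult(
    fromDate="2014-02-01 20:07:15", toDate="2016-02-01 20:07:15",
    fromFilterType="", fromFilter="", toFilterType="", toFilter="",
    TestType="Network KPI", PairName="", Metric="Jitter (ms)")
# The return value is a CSV string: a header row followed by one data row.
for row in csv.DictReader(io.StringIO(ret)):
    print(row["myavgvalue"], row["StandardDeviation"], row["totalcount"])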
collectKPI_result
Use this function to collect the KPI information from a specific test result, based on its ID.
Parameter
Result:
0 if the ID is not found
Example
"1666360" "Datagrams Out of Order" "15152" "KPI from->to" "Passed" "0" "1" "0"
"1666360" "Delay (ms)" "15152" "KPI from->to" "Passed" "10.43" "100" "0"
"1666360" "Jitter (ms)" "15152" "KPI from->to" "Failed" "9.57" "5" "0"
"1666360" "Jitter Max (ms)" "15152" "KPI from->to" "Failed" "15" "5" "0"
"1666360" "Max loss burst" "15152" "KPI from->to" "Passed" "0" "2" "0"
collectTDR_result
This collects the test data result information for the given test ID.
Parameter
<x:Envelope xmlns:x="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:urn="urn:Hawkeye">
<x:Header/>
<x:Body>
<urn:collectTDR_result>
<urn:tdrID>1666360</urn:tdrID>
</urn:collectTDR_result>
</x:Body>
</x:Envelope>
example:
" "N2N" "5129" "Network KPI" "0" "8" "9" "ip-10-1-1-139" "ip-10-1-1-143" "ip-10-1-1-
139" "ip-10-1-1-143" "1" "sysadmin" "15" "74" "webservice"
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ns1="urn:Hawkeye" xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
<SOAP-ENV:Body>
<ns1:collectTDR_resultResponse>
<return xsi:type="xsd:string">ID NODEFROM_NAME NODETO_NAME TIMESTAMP
STATUS REASON_CAUSE TDR_comment MODULE TESTTYPE_ID TESTTYPE MESHID
MESHNAME NODEFROM_PROBEID NODETO_PROBEID NODEFROM_IP NODETO_IP NODEFROM_
MGMTIP NODETO_MGMTIP NODEFROM_LOCATION NODETO_LOCATION NODEFROM_PROBE_
GROUP NODETO_PROBE_GROUP EXECUTION_USER_ID EXECUTION_USER_LOGIN TEST_
DURATION TESTEXEC_ID TESTEXEC_STRING
configureN2NListExecution
Use this function to set up a new Node to Node test (using configureTestExecution is
preferred).
If using lists of probes, you can create one-to-one or many-to-many combinations of tests.
Parameters:
Includes
"Network KPI"
"Adapative Video"
"Flash RTMP"
"Netflix"
"Youtube"
"Skype4B Traffic"
"Video Stream"
"Voice from->to"
"Voice bidirectional"
"Exchange_traffic"
"HTTP Test"
"HTTPS Test"
The probe needs to exist as an active probe configured in the database; otherwise it will
be ignored.
Example: probe1,probe2
The probe needs to exist as an active probe configured in the database; otherwise it will
be ignored.
Example: probe3,probe4
nodefrom: probe1,probe2
nodeto: probe3,probe4
Many-to-many combinations:
probe1->probe3
probe1->probe4
probe2->probe3
probe2->probe4
One-to-one combinations:
probe1->probe3
probe2->probe4
Note: leave empty for starting now or for an immediate one-shot.
The following table describes the parameters. For each test type, the options need to be put
in the exact order detailed in the table below.
If there is an associated pair (pair_id is not null), the name must be entered in the API with
the following format:
ParameterName|SPECIFICPAIR|pair_id
arrayOptionsNameString
packetsize,QOS,bitrate|SPECIFICPAIR|15161,bitrate|SPECIFICPAIR|15162
arrayOptionsValueString
1400,EF,20000,40000
will set packetsize to 1400, QOS to EF, UDP from->to 20000kbps, UDP to->from to
40000kbps
"Exchange_traffic";"numberofpairs";"Number of Users";"0";NULL
"Netflix";"QOS";"DSCP Setting";"0";NULL
"Youtube";"QOS";"DSCP Setting";"0";NULL
1-email
2-snmp
FailedAlarm-
ErrorAlarm-
StatusChangeAlarm-
<x:Envelope xmlns:x="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:urn="urn:Hawkeye">
<x:Header/>
<x:Body>
<urn:configureN2NListExecution>
<urn:TestType>Network KPI</urn:TestType>
<urn:NodeFrom>AWSprivate2</urn:NodeFrom>
<urn:NodeTo>AWSprivate4,AWSprivate3,AWSprivate5</urn:NodeTo>
<urn:OneToOne>0</urn:OneToOne>
<urn:Frequency>0</urn:Frequency>
<urn:EnforceSchedule>0</urn:EnforceSchedule>
<urn:mystartdate></urn:mystartdate>
<urn:myenddate></urn:myenddate>
<urn:arrayOptionsNameString>null</urn:arrayOptionsNameString>
<urn:arrayOptionsValueString>null</urn:arrayOptionsValueString>
<urn:thresholdArrayString>null</urn:thresholdArrayString>
<urn:AlarmType>0</urn:AlarmType>
<urn:FailedAlarm>0</urn:FailedAlarm>
<urn:ErrorAlarm>0</urn:ErrorAlarm>
<urn:StatusChangeAlarm>0</urn:StatusChangeAlarm>
<urn:EmailAddress></urn:EmailAddress>
<urn:TESTEXEC_STRING>webservice</urn:TESTEXEC_STRING>
<urn:mytestduration>15</urn:mytestduration>
</urn:configureN2NListExecution>
</x:Body>
</x:Envelope>
output:
A response with an array of executed test paths and the test execution ID if the test could be
added to the execution list.
Example of output:
(AWSprivate2,AWSprivate4,74),(AWSprivate2,AWSprivate3,75),
(AWSprivate2,AWSprivate5,76)
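The same request can be issued from the Python zeep client introduced earlier; the sketch
below mirrors the SOAP request envelope above (parameter names and values taken from
that example, server URL assumed).
# Sketch: schedule a continuous Network KPI test from one probe to three others.
from zeep import Client
client = Client("http://myHawkeyeServerURL/WebServices/Hawkeye.wsdl")  # assumed URL
ret = client.service.configureN2NListExecution(
    TestType="Network KPI",
    NodeFrom="AWSprivate2",
    NodeTo="AWSprivate4,AWSprivate3,AWSprivate5",
    OneToOne=0, Frequency=0, EnforceSchedule=0,
    mystartdate="", myenddate="",
    arrayOptionsNameString="null", arrayOptionsValueString="null",
    thresholdArrayString="null",
    AlarmType=0, FailedAlarm=0, ErrorAlarm=0, StatusChangeAlarm=0,
    EmailAddress="", TESTEXEC_STRING="webservice", mytestduration=15)
print(ret)  # e.g. (AWSprivate2,AWSprivate4,74),(AWSprivate2,AWSprivate3,75),...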
configureTestExecution
Use this function to set up a new test for Node to Node or Real Service.
Parameters:
Type:
Includes
"Network KPI"
"Adapative Video"
"Flash RTMP"
"Netflix"
"Youtube"
"Skype4B Traffic"
"Video Stream"
"Voice from->to"
"Voice bidirectional"
"Exchange_traffic"
"HTTP Test"
"HTTPS Test"
"BitTorrent"
"DNS Test"
"DropBox Download"
"DropBox Upload"
"Email"
"FTP Download"
"ICMP performance"
"ICMP Test"
"IGMP Test"
"TCP ping"
"Traceroute"
"UDP ping"
"Wifi Connect"
"Wifi Inspect"
"Youtube Test"
0 - nodefrom and nodeto will be used for node to node, Meshid ignored
The probe needs to exist as an active probe configured in the database; otherwise it will
be ignored.
Example: probe1,probe2
NodeTo: node list to use to start the test - ignored for Real Service tests.
The probe needs to exist as an active probe configured in the database; otherwise it will
be ignored.
Example: probe3,probe4
Note: leave empty for starting now or for an immediate one-shot.
The following table describes the parameters. For each test type, the options need to be put
in the exact order detailed in the table below.
If there is an associated pair (pair_id is not null), the name must be entered in the API with
the following format:
ParameterName|SPECIFICPAIR|pair_id
arrayOptionsNameString
packetsize,QOS,bitrate|SPECIFICPAIR|15161,bitrate|SPECIFICPAIR|15162
arrayOptionsValueString
1400,EF,20000,40000
will set packetsize to 1400, QOS to EF, UDP from->to 20000kbps, UDP to->from to
40000kbps
"Exchange_traffic";"numberofpairs";"Number of Users";"0";NULL
"Netflix";"QOS";"DSCP Setting";"0";NULL
"Youtube";"QOS";"DSCP Setting";"0";NULL
"BitTorrent";"DestinationServer";"Torrent Name";"0";NULL
"BitTorrent";"magnet";"Torrent link";"0";NULL
"BitTorrent";"duration";"Test duration";"0";NULL
"Email";"email";"Email Address";"0";NULL
"Email";"authuser";"Mail User";"0";NULL
"Email";"authpass";"Mail Password";"0";NULL
"FTP Download";"Password";"Password";"0";NULL
"ICMP Test";"PingInterval";"Interval";"0";NULL
"ICMP Test";"PingCount";"Count";"0";NULL
"TCP ping";"PingInterval";"Interval";"0";NULL
"TCP ping";"PingCount";"Count";"0";NULL
"Traceroute";"DestinationServer";"Destination Server";"0";NULL
"Traceroute";"Timeout";"Timeout (sec)";"0";NULL
"Traceroute";"QOS";"DSCP Setting";"0";NULL
"Traceroute";"TraceRouteProtocol";"Protocol";"0";NULL
"Traceroute";"ip_version";"ip protocol";"0";NULL
"UDP ping";"PingInterval";"Interval";"0";NULL
"UDP ping";"PingCount";"Count";"0";NULL
"Wifi Connect";"SSID";"SSID";"0";NULL
"Wifi Connect";"BSSID";"BSSID";"0";NULL
"Wifi Connect";"Password";"Password";"0";NULL
"Wifi Inspect";"DestinationServer";"SSID";"0";NULL
1-email
2-snmp
FailedAlarm-
ErrorAlarm-
StatusChangeAlarm-
<x:Envelope xmlns:x="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:urn="urn:Hawkeye">
<x:Header/>
<x:Body>
<urn:configureTestExecution>
<urn:Type>?</urn:Type>
<urn:TestType>?</urn:TestType>
<urn:isMesh>?</urn:isMesh>
<urn:myMesh>?</urn:myMesh>
<urn:NodeFrom>?</urn:NodeFrom>
<urn:NodeTo>?</urn:NodeTo>
<urn:Frequency>?</urn:Frequency>
<urn:EnforceSchedule>?</urn:EnforceSchedule>
<urn:mystartdate>?</urn:mystartdate>
<urn:myenddate>?</urn:myenddate>
<urn:arrayOptionsNameString>?</urn:arrayOptionsNameString>
<urn:arrayOptionsValueString>?</urn:arrayOptionsValueString>
<urn:thresholdArrayString>?</urn:thresholdArrayString>
<urn:AlarmType>?</urn:AlarmType>
<urn:FailedAlarm>?</urn:FailedAlarm>
<urn:ErrorAlarm>?</urn:ErrorAlarm>
<urn:StatusChangeAlarm>?</urn:StatusChangeAlarm>
<urn:EmailAddress>?</urn:EmailAddress>
<urn:TESTEXEC_STRING>?</urn:TESTEXEC_STRING>
<urn:mytestduration>?</urn:mytestduration>
</urn:configureTestExecution>
</x:Body>
</x:Envelope>
output:
findProbeIDfromName
Find the probe ID from the probe name given as input.
<x:Envelope xmlns:x="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:urn="urn:Hawkeye">
<x:Header/>
<x:Body>
<urn:findProbeIDfromName>
<urn:ProbeName>?</urn:ProbeName>
</urn:findProbeIDfromName>
</x:Body>
</x:Envelope>
output
The API will return 0 if the probe is not found, or an integer with the ID if the probe is found.
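A short Python sketch with the zeep client from earlier in this chapter (the probe name and
server URL are examples/assumptions):
# Sketch: resolve a probe name to its numeric ID (0 means not found).
from zeep import Client
client = Client("http://myHawkeyeServerURL/WebServices/Hawkeye.wsdl")  # assumed URL
probe_id = client.service.findProbeIDfromName(ProbeName="probe1")
print(probe_id)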
findProbeIDfromSerialRequest
Find probe ID from input serial number
Input:
findProbeIDfromSerialResponse
Output:
The API will return 0 if the probe is not found, or an integer with the ID if the probe is found.
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ns1="urn:Hawkeye" xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
<SOAP-ENV:Body>
<ns1:findProbeIDfromSerialResponse>
<return xsi:type="xsd:string">0</return>
</ns1:findProbeIDfromSerialResponse>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
listProbesNamesRequest
List all probes matching specified filters
Input:
available values
0 - all probes
1 - all up probes
listProbesNamesResponse
Output:
The API will return 0 if no matching probe is found, or a comma-separated string with all
matching probe names.
testWebService
This is a test method to validate connectivity to the Hawkeye SOAP server.
Input:
Param1: string
Param2: string
<x:Envelope xmlns:x="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:urn="urn:Hawkeye">
<x:Header/>
<x:Body>
<urn:testWebService>
<urn:Param1>p1</urn:Param1>
<urn:Param2>p2</urn:Param2>
</urn:testWebService>
</x:Body>
</x:Envelope>
testWebServiceResponse
Output: