FortiOS - KVM Cookbook
Version 6.2
FORTINET DOCUMENT LIBRARY
https://docs.fortinet.com
FORTINET BLOG
https://blog.fortinet.com
NSE INSTITUTE
https://training.fortinet.com
FORTIGUARD CENTER
https://fortiguard.com/
FEEDBACK
Email: techdoc@fortinet.com
FortiGate-VMs allow you to mitigate blind spots by implementing critical security controls within your virtual
infrastructure. They also allow you to rapidly provision security infrastructure whenever and wherever it is needed.
FortiGate-VMs feature all the security and networking services common to hardware-based FortiGate appliances. You
can deploy a mix of FortiGate hardware and virtual appliances, operating together and managed from a common
centralized management platform.
This document describes how to deploy a FortiGate-VM in a KVM environment.
FortiGate-VM offers perpetual licensing (normal series and V-series) and annual subscription licensing (S-series,
available starting Q4 2019). FortiOS 6.2.3 and later versions support the S-series licensing. The differences are as
follows:
Licensing term
Perpetual (normal and V-series): VM base is perpetual. You must separately contract support services on an
annual basis.
S-series subscription: Single annually contracted SKU that contains the VM base and a FortiCare service bundle.
Support services
Perpetual (normal and V-series): Each VM base type is associated with over a dozen SKUs. See the pricelist for
details.
S-series subscription: Four support service bundle types:
l Only FortiCare
l UTM
l Enterprise
l 360 protection
License level
Both models: SKUs are based on the number of virtual CPUs (vCPU) (1, 2, 4, 8, 16, 32, or unlimited). The
RAM/memory restriction no longer applies for FortiOS 6.2.2 and later versions. FortiOS 6.2.1 and earlier
versions have RAM/memory restrictions.
Virtual domain (VDOM) support
Perpetual (normal and V-series): By default, each CPU level supports up to a certain number of VDOMs.
S-series subscription: By default, all CPU levels do not support adding VDOMs.
After you submit an order for a FortiGate-VM, Fortinet sends a license registration code to the email address that you
entered on the order form. Use this code to register the FortiGate-VM with Customer Service & Support, and then
download the license file. After you upload the license to the FortiGate-VM and validate it, your FortiGate-VM is fully
functional.
The primary requirement for provisioning a FortiGate-VM may be the number of interfaces it can accommodate rather than
its processing capabilities. In some cloud environments, the options with a high number of interfaces tend to have high
numbers of vCPUs.
FortiGate-VM licensing does not prevent the FortiGate from working on a VM instance in a public cloud that uses
more vCPUs than the license allows. However, only the licensed number of vCPUs process traffic and management
tasks; the FortiGate-VM does not use the rest of the vCPUs.
You can provision a VM instance based on the number of interfaces you need and license the FortiGate-VM for only the
processors you need.
This documentation assumes that before deploying the FortiGate-VM on the KVM virtual platform, you have addressed
the following requirements:
Virtual environment
You have installed the KVM software on a physical server with sufficient resources to support the FortiGate-VM and all
other VMs deployed on the platform.
If you configure the FortiGate-VM to operate in transparent mode, or include it in a FortiGate clustering protocol (FGCP)
high availability (HA) cluster, ensure that you have configured any virtual switches to support the FortiGate-VM's
operation before you create the FortiGate-VM.
Connectivity
An Internet connection is required for the FortiGate-VM to contact FortiGuard to validate its license. If the FortiGate-VM
is in a closed environment, it must be able to connect to a FortiManager to validate the FortiGate-VM license. See
Validating the FortiGate-VM license with FortiManager on page 22.
Configuring resources
Before you start the FortiGate-VM for the first time, ensure that the following resources are configured as specified by
the FortiGate-VM virtual appliance license:
l Disk sizes
l CPUs
l RAM
l Network settings
1. In the Virtual Machine Manager, locate the VM name, then select Open from the toolbar.
2. Select Add Hardware.
3. In the Add Hardware window select Storage.
4. Select Create a disk image on the computer's hard drive and set the size to 30 GB.
If you know your environment will expand in the future, it is recommended to increase the
hard disk size beyond 30 GB. The VM license limit is 2 TB.
5. Enter:
Even though raw is the storage format listed, the qcow2 format is also supported.
6. Select Network to configure or add more network interfaces. The Device type must be Virtio.
A new VM includes one network adapter by default. You can add more through the Add Hardware window.
FortiGate-VM requires four network adapters. You can configure network adapters to connect to a virtual switch or
to network adapters on the host computer.
7. Select Finish.
Registering the FortiGate-VM with Customer Service & Support allows you to obtain the FortiGate-VM license file.
1. Log in to the Customer Service & Support site using a support account, or select Sign Up to create an account.
2. In the main page, under Asset, select Register/Activate.
3. In the Registration page, enter the registration code that you received via email, and select Register to access the
registration form.
4. If you register the S-series subscription model, the site prompts you to select one of the following:
a. Click Register to newly register the code to acquire a new serial number with a new license file.
b. Click Renew to renew and extend the licensed period on top of the existing serial number, so that all features
on the VM node continue working uninterrupted upon license renewal.
FortiGate-VM deployment packages are found on the Customer Service & Support site. In the Download drop-down
menu, select VM Images to access the available VM deployment packages.
1. In the Select Product drop-down menu, select FortiGate.
2. In the Select Platform drop-down menu, select KVM.
3. Select the FortiOS version you want to download.
There are two files available for download: the file required to upgrade from an earlier version and the file required
for a new deployment.
4. Click the Download button and save the file.
For more information, see the FortiGate datasheet.
You can also download the following resources for the firmware version:
l FortiOS Release Notes
l FORTINET-FORTIGATE MIB file
l FSSO images
l SSL VPN client
The FORTINET.out.kvm.zip file contains only fortios.qcow2, the FortiGate-VM system hard disk in qcow2 format. You
must manually:
l create a 32 GB log disk (see the example command after this list)
l specify the virtual hardware settings
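For example, you could create the log disk with qemu-img before deploying the VM. This is a minimal sketch; the file name fgt-logs.qcow2 and the destination path are examples, not requirements:
# Create a 32 GB qcow2 log disk alongside the system disk (example path and file name)
qemu-img create -f qcow2 /var/lib/libvirt/images/fgt-logs.qcow2 32G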
Cloud-init
You can use the cloud-init service for customizing a prepared image of a virtual installation. The cloud-init
service is built into the virtual instances of FortiGate-VM that are found on the support site so that you can use them on
a VM platform that supports the use of the service. To customize the installation of a new FortiGate-VM instance, you
must combine the seed image from the support site with user data information customized for each new installation.
Hypervisor platforms such as QEMU/KVM, BSD, and Hyper-V support the use of this service on most major Linux
distributions. A number of cloud-based environments such as VMware and AWS also support it.
You can use the cloud-init service to help install different instances based on a common seed image by assigning
hostnames, adding SSH keys, and settings particular to the specific installation. You can add other more general
customizations such as the running of post install scripts.
While cloud-init is the service used to accomplish the customized installations of VMs, various other programs,
depending on the platform, are used to create the customized ISOs used to create the images that will build the
FortiGate-VM.
Installing Virt-install
Another required program is virt-install. Among other things, this program allows you to combine the user_data file with
the seed image for the FortiGate-VM installation. To run virt-install, libvirt and KVM must be running on the system. You
need root privileges to run virt-install.
To install virt-install on a Red Hat based system, use the command:
sudo yum install libvirt libguestfs-tools virtinst
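On a Debian or Ubuntu based system, a similar installation might look like the following. This is a sketch only; package names vary between releases, and newer releases replace libvirt-bin with libvirt-daemon-system:
sudo apt-get install virtinst libvirt-bin qemu-kvm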
There may be other methods of installing the software, but these are two common methods.
After the installation, make sure that libvirt-bin is installed and that the service is
running.
The mkisofs utility is used to create the customized ISO image. Some distros like Ubuntu may have a variation of the program called genisoimage, but using the original mkisofs
command should still work, as mkisofs is used as an alias for genisoimage in many of the distros.
To verify whether or not you have the utility installed, enter the command:
mkisofs --version
If your system has genisoimage instead you may get a message along the lines of:
mkisofs 2.0.1 is not what you see here. This line is only a fake for
too clever GUIs and other frontend applications. In fact, the
program is:
genisoimage 1.1.11 (Linux)
The cloud-init service passes a script to newly created VMs, in this case the FortiGate-VM. The name of the file is
user_data. All configuration on the FortiGate is done through the configuration file, so the components of the script
follow the syntax of the configuration file or of commands entered through the CLI.
The following example content is from a basic user_data file:
#this is for fgt init config file. Can only handle fgt config.
config sys interface
edit port1
set mode dhcp
set allowaccess http https ssh ping telnet
next
end
config sys dns
set primary 8.8.8.8
unset secondary
end
config sys global
set hostname cloud-init-test
end
License file
The other file used to configure the customized install contains the license key. Take the license key that you receive
from Fortinet and place it into a text file. This file is named 0000, without any extension.
There are no requirements for where the holding folder that will be used to create the new ISO image is placed, but
there are requirements as to the folder structure within the folder. The cloud-init needs to find specific content in specific
folders in order to work correctly. The folder structure should be as follows:
<holding folder>
/openstack
/content
0000
/latest
user_data
It may seem counter-intuitive to use the folder name openstack in an instance where the target VM platform is not
OpenStack, but a number of utilities are common to both OpenStack and KVM environments.
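For example, you could build the holding folder from the command line as follows. This is a sketch; the folder name iso-folder and the license file name are examples:
mkdir -p iso-folder/openstack/content iso-folder/openstack/latest
cp <license file>.lic iso-folder/openstack/content/0000    # license key file, no extension
cp user_data iso-folder/openstack/latest/user_data         # cloud-init configuration script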
Once you have your user_data file and the license key file, you can create an ISO image containing all of the files that
are used to customize the seed image of the FortiGate-VM. This is done using the mkisofs utility.
Option Description
pathspec [pathspec...] Path to the folder(s) that are to be included in the ISO image file. Separate the paths with
a space.
-input-charset Input charset that defines the characters used in local file names. To get a list of valid charset
names, use the command mkisofs -input-charset help. To get a 1:1 mapping, you
may use default as charset name.
-R Generate SUSP and RR records using the Rock Ridge protocol to further describe the files on
the iso9660 filesystem.
-r This is like the -R option, but file ownership and modes are set to more useful values. The uid
and gid are set to zero, because they are usually only useful on the author's system, and not
useful to the client. All the file read bits are set true, so that files and directories are globally
readable on the client.
Example:
The iso-folder folder holds the data structure for the new ISO image. The /home/username/test folder contains the
iso-folder folder. The name for the new ISO image is fgt-bootstrap.iso.
cd /home/username/test
sudo mkisofs -R -r -o fgt-bootstrap.iso iso-folder
The following table contains some of the more common options used in setting up a FortiGate-VM image. Not all of
them are required. To get a complete listing of the options, at the command prompt, type in the command
virt-install --help or virt-install -h.
Option Description
--connect <option> This connects the VM image to a non-default VM platform. If one is not specified,
libvirt will attempt to choose the most suitable default platform.
Some valid options are:
l qemu:///system
Creates KVM and QEMU guests run by the system. This is the most
common option.
l qemu:///session
Creates KVM and QEMU guests run as a regular user.
l xen:///
For connecting to Xen.
--name <name>, -n <name> New guest VM instance name. This must be unique amongst all guests known to
the hypervisor on the connection, including those not currently active.
To redefine an existing guest, use the virsh tool.
--memory <option> Memory to allocate for the guest, in MiB. (This deprecates the -r/--ram
option.)
Sub-options are available, like:
l maxmemory
l hugepages
l hotplugmemorymax
l hotplugmemoryslots
--cdrom <options>, -c <options> File or device used as a virtual CD-ROM device. It can be a path to an ISO image
or to a CDROM device.
It can also be a URL from which to fetch/access a minimal boot ISO image. The
URLs take the same format as described for the "--location" argument. If a cdrom
has been specified via the "--disk" option, and neither "--cdrom" nor any other
install option is specified, the "--disk" cdrom is used as the install media.
--location <options>, -l <options> Distribution tree installation source. virt-install can recognize certain distribution
trees and fetch a bootable kernel/initrd pair to launch the install.
With libvirt 0.9.4 or later, network URL installs work for remote connections.
virt-install will download the kernel/initrd to the local machine, and then upload the media
to the remote host. This option requires the URL to be accessible by both the
local and remote host.
--location allows things like --extra-args for kernel arguments, and using --initrd-inject.
If you want to use those options with CDROM media, you have a few
options:
l Run virt-install as root and do --location ISO
l Mount the ISO at a local directory, and do --location DIRECTORY
l Mount the ISO at a local directory, export that directory over local http, and
do --location http://localhost/DIRECTORY
The "LOCATION" can take one of the following forms:
l http://host/path
An HTTP server location containing an installable distribution image.
l ftp://host/path
An FTP server location containing an installable distribution image.
l nfs:host:/path or nfs://host/path
An NFS server location containing an installable distribution image. This
requires running virt-install as root.
l DIRECTORY
Path to a local directory containing an installable distribution image. Note
that the directory will not be accessible by the guest after initial boot, so the
OS installer will need another way to access the rest of the install media.
l ISO
Mount the ISO and probe the directory. This requires running virt-install as
root, and has the same VM access caveat as DIRECTORY.
--import Skip the OS installation process, and build a guest around an existing disk image.
The device used for booting is the first device specified via "--disk" or "--
filesystem".
--disk <options> Specifies media to use as storage for the guest, with various options.
The general format of a disk string is
--disk opt1=val1,opt2=val2,...
When using multiple options, separate each option with a comma (no spaces
before or after the commas).
Example options:
l size
size (in GiB) to use if creating new storage
example: size=10
l path
A path to some storage media to use, existing or not. Existing media can be
a file or block device.
Specifying a non-existent path implies attempting to create the new storage,
and will require specifying a 'size' value. Even for remote hosts, virt-install
will try to use libvirt storage APIs to automatically create the given path.
If the hypervisor supports it, path can also be a network URL, like
http://example.com/some-disk.img. For network paths, the hypervisor will
directly access the storage; nothing is downloaded locally.
l format
Disk image format. For file volumes, this can be 'raw', 'qcow2', 'vmdk', etc.
See format types in libvirt: Storage Management for possible values. This is
often mapped to the driver_type value as well.
If not specified when creating file images, this will default to qcow2.
If creating storage, this will be the format of the new image. If using an
existing image, this overrides the format that libvirt detects.
--network <options>, -w <options> Connect the guest to the host network. The value for <options> can take one of
the following formats:
l bridge=BRIDGE
Connect to a bridge device in the host called "BRIDGE". Use this option if the
host has static networking config & the guest requires full outbound and
inbound connectivity to/from the LAN. Also use this if live migration will be
used with this guest.
l network=NAME
Connect to a virtual network in the host called "NAME". Virtual networks can
be listed, created, deleted using the "virsh" command line tool. In an
unmodified install of "libvirt" there is usually a virtual network with a name of
"default". Use a virtual network if the host has dynamic networking (eg
NetworkManager), or using wireless. The guest will be NATed to the LAN by
whichever connection is active.
l type=direct,source=IFACE[,source_mode=MODE]
Direct connect to host interface IFACE using macvtap.
l user
Connect to the LAN using SLIRP. Only use this if running a QEMU guest as
an unprivileged user. This provides a very limited form of NAT.
l none
Tell virt-install not to add any default network interface.
Use --network=? to see a list of all available sub options.
See details at libvirt: Domain XML format - Network interfaces.
This option deprecates -m/--mac, -b/--bridge, and --nonetworks
--noautoconsole This stops the system from automatically trying to connect to the guest console.
The default behavior is to launch virt-viewer to run a GUI console or run the
virsh console command to display a text version of the console.
Example:
This will take the ISO image made in the previous step and install it into the VM platform, giving the name Example_VM to
the FortiGate-VM instance.
virt-install --connect qemu:///system --noautoconsole --name Example_VM --memory 1024 \
  --vcpus 1 --import --disk fortios.qcow2,size=3 --disk fgt-logs.qcow2,size=3 \
  --disk /home/username/test/fgt-bootstrap.iso,device=cdrom,bus=ide,format=raw,cache=none \
  --network bridge=virbr0,model=virtio
Before running the command, make sure that QEMU/KVM is running properly.
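For example, on most systemd-based hosts you could confirm this with commands such as the following. This is a sketch; service names can differ between distributions:
sudo systemctl status libvirtd    # the libvirt daemon should be active (running)
virsh list --all                  # virsh should connect and list defined guests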
The deployment use cases in this document describe the tasks required to deploy a FortiGate-VM virtual appliance on
a KVM server. Before you deploy a virtual appliance, ensure that the requirements described in Preparing for
deployment on page 7 are met and that the correct deployment package is extracted to a folder on the local computer
(see Downloading the FortiGate-VM deployment package on page 9).
After you deploy a FortiGate-VM and upload a full license to replace the default evaluation license, you can power on
the FortiGate-VM and test connectivity.
1. Launch Virtual Machine Manager (virt-manager) on your KVM host server. The Virtual Machine Manager
homepage opens.
2. In the toolbar, select Create a new virtual machine.
9. Select Browse.
10. If you copied the fortios.qcow2 file to /var/lib/libvirt/images, it will be visible on the right. If you saved it
somewhere else on your server, select Browse Local and find it.
11. Select Choose Volume.
12. Select Forward.
13. Specify the amount of memory and number of CPUs to allocate to this VM. Whether or not the amounts can
exceed the license limits will depend on the FortiOS version. See FortiGate-VM virtual licenses and resources on
page 5
14. Select Forward.
15. Expand Advanced options. A new VM includes one network adapter by default.
16. Select a network adapter on the host computer. Optionally, set a specific MAC address for the virtual network
interface.
17. Set Virt Type to virtio and Architecture to qcow2.
18. Select Finish.
Initial settings
After you deploy a FortiGate-VM on the KVM server, perform the following tasks:
l Connect the FortiGate-VM to the network so that it can process network traffic and maintain the validity of the
license.
l Connect to the GUI of the FortiGate-VM via a web browser for easier administration.
l Ensure that the full license file is uploaded to the FortiGate-VM.
l If you are in a closed environment, enable validation of the FortiGate-VM license against a FortiManager on your
network.
Network configuration
The first time you start the FortiGate-VM, you will have access only through the console window of your KVM server
environment. After you configure one FortiGate network interface with an IP address and administrative access, you can
access the FortiGate-VM GUI.
Configuring port 1
VM platform or hypervisor management environments include a guest console window. On the FortiGate-VM, this
provides access to the FortiGate console, equivalent to the console port on a hardware FortiGate unit. Before you can
access the GUI, you must configure FortiGate-VM port1 with an IP address and administrative access.
1. In your hypervisor manager, start the FortiGate-VM and access the console window. You may need to press Enter
to see a login prompt.
2. At the FortiGate-VM login prompt enter the username admin. By default there is no password. Press Enter.
3. Using CLI commands, configure the port1 IP address and netmask:
config system interface
edit port1
set mode static
set ip 192.168.0.100 255.255.255.0
next
end
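If administrative access is not already enabled on port1, you can enable it with the same interface configuration. This is a sketch; choose only the protocols that your security policy allows:
config system interface
edit port1
set allowaccess https ssh ping
next
end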
4. To configure the default gateway, enter the following CLI commands:
config router static
edit 1
set device port1
set gateway <class_ip>
next
end
You must configure the default gateway with an IPv4 address. FortiGate-VM needs to
access the Internet to contact the FortiGuard Distribution Network (FDN) to validate its
license.
You connect to the FortiGate-VM GUI via a web browser by entering the IP address assigned to the port 1 interface (see
Configuring port 1 on page 19) in the browser location field. You must enable HTTP and/or HTTPS access and
administrative access on the interface to ensure that you can connect to the GUI. If you only enabled HTTPS access,
enter "https://" before the IP address.
When you use HTTP rather than HTTPS to access the GUI, certain web browsers may display
a warning that the connection is not private.
On the FortiGate-VM GUI login screen, enter the default username "admin" and then select Login. FortiOS does not
assign a default password to the admin user.
Fortinet recommends that you configure a password for the admin user as soon as you log in to the FortiGate-VM GUI
for the first time.
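For example, you can set the password from the CLI. This is a sketch; replace <new-password> with a strong password of your own:
config system admin
edit admin
set password <new-password>
next
end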
Every Fortinet VM includes a 15-day trial license. During this time the FortiGate-VM operates in evaluation mode.
Before using the FortiGate-VM you must enter the license file that you downloaded from Customer Service & Support
upon registration.
GUI
2. In the Evaluation License dialog box, select Enter License. The license upload page opens.
3. Select Upload and locate the license file (.lic) on your computer.
4. Select OK to upload the license file.
5. Refresh the browser to log in.
6. Enter admin in the Name field and select Login.
The VM registration status appears as valid in the License Information widget after the license is validated by the
FortiGuard Distribution Network (FDN) or FortiManager for closed networks.
Modern browsers can have an issue connecting to a FortiGate if the
encryption on the device is too low. If this happens, use an FTP/TFTP server to apply the
license.
CLI
You can also upload the license file using the following CLI command:
execute restore vmlicense {ftp | tftp} <filename string> <ftp server>[:ftp port]
Example:
The following is an example of the output when using a TFTP server to install a license:
execute restore vmlicense tftp license.lic 10.0.1.2
This operation will overwrite the current VM license!Do you want to continue? (y/n)y
Please wait...Connect to tftp server 10.0.1.2 ...
Get VM license from tftp server OK.
VM license install succeeded.
Rebooting firewall.
This command automatically reboots the firewall without giving you a chance to back out or
delay the reboot.
You can validate your FortiGate-VM license with some FortiManager models. To determine whether your FortiManager
has the VM activation feature, see the FortiManager datasheet's Features section.
1. To configure your FortiManager as a closed network, enter the following CLI command on your FortiManager:
config fmupdate publicnetwork
set status disable
end
2. To configure FortiGate-VM to use FortiManager as its override server, enter the following CLI commands on your
FortiGate-VM:
config system central-management
set mode normal
set type fortimanager
set fmg <FortiManager IPv4 address>
config server-list
edit 1
set server-type update
set server-address <FortiManager IPv4 address>
end
end
set fmg-source-ip <Source IPv4 address when connecting to the FortiManager>
set include-default-servers disable
set vdom <Enter the VDOM name to use when communicating with the FortiManager>
end
3. Load the FortiGate-VM license file in the GUI:
a. Go to System > Dashboard > Status.
b. In the License Information widget, in the Registration Status field, select Update.
c. Browse for the .lic license file and select OK.
4. To activate the FortiGate-VM license, enter the execute update-now command on your FortiGate-VM.
5. To check the FortiGate-VM license status, enter the following CLI commands on your FortiGate-VM:
get system status
Version: Fortigate-VM v5.0,build0099,120910 (Interim)
Virus-DB: 15.00361(2011-08-24 17:17)
Extended DB: 15.00000(2011-08-24 17:09)
Extreme DB: 14.00000(2011-08-24 17:10)
IPS-DB: 3.00224(2011-10-28 16:39)
FortiClient application signature package: 1.456(2012-01-17 18:27)
Serial-Number: FGVM02Q105060000
License Status: Valid
BIOS version: 04000002
Log hard disk: Available
Hostname: Fortigate-VM
Operation Mode: NAT
Current virtual domain: root
Max number of virtual domains: 10
Virtual domains status: 1 in NAT mode, 0 in TP mode
Virtual domain configuration: disable
FIPS-CC mode: disable
Current HA mode: standalone
Distribution: International
Licensing timeout
In closed environments without Internet access, it is mandatory to perform offline licensing of the FortiGate-VM using a
FortiManager as a license server. If the FortiGate-VM cannot perform license validation within the license timeout
period, which is 30 days, the FortiGate will discard all packets, effectively ceasing operation as a firewall.
The license status goes through some changes before it times out.
Status Description
Valid The FortiGate can connect and validate against a FortiManager or FDS
Warning The FortiGate cannot connect and validate against a FortiManager or FDS. A check is made
against how many days the Warning status has been continuous. If the number is less than
30 days the status does not change.
Invalid The FortiGate cannot connect and validate against a FortiManager or FDS. A check is made
against how many days the Warning status has been continuous. If the number is 30 days or
more, the status changes to Invalid. The firewall ceases to function properly.
There is only a single log entry after the FortiGate-VM has been unable to access the license
server for the license expiration period. When you search the logs for the reason that the
FortiGate is offline, there is not a long list of error logs drawing attention to the issue. There is
only one entry.
Testing connectivity
You can now proceed to power on your FortiGate-VM. Select the FortiGate-VM in the VM list. In the toolbar, select
Console and then select Start.
The PING utility is the usual method to test connectivity to other devices. For this, you need the console on the
FortiGate-VM.
In FortiOS, the command for the PING utility is execute ping followed by the IP address
you want to connect to.
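For example, to test reachability of the default gateway configured earlier (the address shown is an example):
execute ping 192.168.0.1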
Before you configure the FortiGate-VM for use in production, ensure that connections between it and all required
resources can be established.
l If the FortiGate-VM will provide firewall protection between your network and the Internet, verify that it can connect
to your Internet access point and to resources on the Internet.
l If the FortiGate-VM is part of a Fortinet Security Fabric, verify that it can connect to all devices in the Fabric.
l Verify that each node on your network can connect to the FortiGate-VM.
For information about configuring and operating the FortiGate-VM after you have successfully deployed and started it
on the hypervisor, see the FortiOS Handbook.
High availability
FortiGate-VM HA supports having two VMs in an HA cluster on the same physical platform or different platforms. The
primary consideration is that all interfaces involved can communicate efficiently over TCP/IP connection sessions.
Heartbeat
There are two options for setting up the HA heartbeat: unicast and broadcast. Broadcast is the default HA heartbeat
configuration. However, the broadcast configuration may not be ideal for FortiGate-VM because it may require special
settings on the host. In most cases, the unicast configuration is preferable.
The differences between the unicast heartbeat setup and the broadcast heartbeat setup are:
l The unicast method does not change the FortiGate-VM interface MAC addresses to virtual MAC addresses.
l Unicast HA only supports two FortiGate-VMs.
l Unicast HA heartbeat interfaces must be connected to the same network and you must add IP addresses to these
interfaces.
Unicast
Setting Description
unicast-hb Enable or disable (the default) unicast HA heartbeat.
unicast-hb-peerip IP address of the HA heartbeat interface of the other FortiGate-VM in the HA
cluster.
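The following is a minimal unicast heartbeat sketch for one of the two FortiGate-VMs. The group name, heartbeat interface, and peer address are examples; the second unit would point unicast-hb-peerip back at this unit's heartbeat address:
config system ha
set group-name "kvm-ha-example"
set mode a-p
set hbdev "port3" 50
set unicast-hb enable
set unicast-hb-peerip 10.0.3.2
end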
Broadcast
Broadcast HA heartbeat packets are non-TCP packets that use Ethertype values 0x8890, 0x8891, and 0x8893. These
packets use automatically assigned link-local IPv4 addresses in the 169.254.0.x range for HA heartbeat interface IP
addresses.
For FortiGate-VMs to support a broadcast HA heartbeat configuration, you must configure the virtual switches that
connect heartbeat interfaces to operate in promiscuous mode and support MAC address spoofing.
In addition, you must configure the VM platform to allow MAC address spoofing for the FortiGate-VM data interfaces.
This is required because in broadcast mode, the FGCP applies virtual MAC addresses to FortiGate data interfaces, and
these virtual MAC addresses mean that matching interfaces of the FortiGate-VM instances in the cluster have the same
virtual MAC addresses.
Promiscuous mode
KVM's Virtual Machine Manager does not have the ability to set a virtual network interface to promiscuous mode. Instead,
promiscuous mode is set on the host's physical network interface. When KVM creates a VM, it also creates a tap interface as well as a new
MAC address for it. Once the host's physical interface is set to promiscuous mode, it must be connected to a bridge
device that is used by the VM to connect to the network outside of the host.
Because this configuration is done on the host and not the VM, the methodology depends on the host's operating
system distribution and version.
Setting up the network interfaces and bridge devices requires using an account with root privileges.
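On a host that uses iproute2, the configuration might resemble the following. This is a sketch only; the interface name eth1 and bridge name br-ha are examples, and many distributions persist this configuration differently:
sudo ip link add name br-ha type bridge   # bridge that the VM heartbeat interfaces connect to
sudo ip link set eth1 master br-ha        # attach the host's physical NIC to the bridge
sudo ip link set eth1 promisc on          # enable promiscuous mode on the physical NIC
sudo ip link set eth1 up
sudo ip link set br-ha up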
This section describes FortiGate-VM and KVM performance optimization techniques that can improve your FortiGate-
VM performance by optimizing the hardware and the KVM host environment for network- and CPU-intensive
performance requirements of FortiGate-VMs.
SR-IOV
FortiGate-VMs installed on KVM platforms support Single Root I/O virtualization (SR-IOV) to provide FortiGate-VMs
with direct access to physical network cards. Enabling SR-IOV means that one PCIe network card or CPU can function
for a FortiGate-VM as multiple separate physical devices. SR-IOV reduces latency and improves CPU efficiency by
allowing network traffic to pass directly between a FortiGate-VM and a network card, bypassing KVM host software and
without using virtual switching.
FortiGate-VMs benefit from SR-IOV because SR-IOV optimizes network performance and reduces latency and CPU
usage. FortiGate-VMs do not use KVM features that are incompatible with SR-IOV, so you can enable SR-IOV without
negatively affecting your FortiGate-VM. SR-IOV implements an I/O memory management unit (IOMMU) to
differentiate between different traffic streams and apply memory and interrupt translations between the physical
functions (PF) and virtual functions (VF).
Setting up SR-IOV on KVM involves creating a PF for each physical network card in the hardware platform. Then, you
create VFs that allow FortiGate-VMs to communicate through the PF to the physical network card. VFs are actual PCIe
hardware resources and only a limited number of VFs are available for each PF.
SR-IOV requires that the hardware and operating system on which your KVM host is running has BIOS, physical NIC,
and network driver support for SR-IOV.
To enable SR-IOV, your KVM platform must be running on hardware that is compatible with SR-IOV and with FortiGate-
VMs. FortiGate-VMs require network cards that are compatible with ixgbevf or i40evf drivers. As well, the host hardware
CPUs must support second level address translation (SLAT).
For optimal SR-IOV support, install the most up to date ixgbevf or i40e/i40evf network drivers. Fortinet recommends
i40e/i40evf drivers because they provide four TxRx queues for each VF and ixgbevf only provides two TxRx queues.
Use the following steps to enable SR-IOV support for KVM host systems that use Intel CPUs. These steps involve
enabling and verifying Intel VT-d specifications in the BIOS and Linux kernel. You can skip these steps if you have
already enabled VT-d.
On an Intel host PC, Intel VT-d BIOS settings provide hardware support for directly assigning a physical device to a VM.
1. View the BIOS settings of the host machine and enable VT-d settings if they are not already enabled.
You may have to review the manufacturer's documentation for details.
2. Activate Intel VT-d in the Linux kernel by adding the intel_iommu=on parameter to the kernel line in the
/boot/grub/grub.conf file. For example:
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-330.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-330.x86_64 ro root=/dev/VolGroup00/LogVol00 rhgb quiet intel_iommu=on
initrd /initrd-2.6.32-330.x86_64.img
3. Restart the system.
Use the following steps to enable SR-IOV support for KVM host systems that use AMD CPUs. These steps involve
enabling the AMD IOMMU specifications in the BIOS and Linux kernel. You can skip these steps if you have already
enabled AMD IOMMU.
On an AMD host PC, IOMMU BIOS settings provide hardware support for directly assigning a physical device to a VM.
1. View the BIOS settings of the host machine and enable IOMMU settings if they are not already enabled.
You may have to review the manufacturer's documentation for details.
2. Append amd_iommu=on to the kernel command line in /boot/grub/grub.conf so that AMD IOMMU
specifications are enabled when the system starts up.
3. Restart the system.
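After restarting, whether you enabled Intel VT-d or AMD IOMMU, you can check the kernel log to confirm that the IOMMU was initialized. This is a quick sanity check; the exact messages vary by kernel version:
dmesg | grep -i -e DMAR -e IOMMU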
Verifying that Linux and KVM can find SR-IOV-enabled PCI devices
You can use the lspci command to view the list of PCI devices and verify that your SR-IOV supporting network cards
are on the list. The following output example shows some example entries for the Intel 82576 network card:
# lspci
03:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
03:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
If the device is supported, the kernel should automatically load the driver kernel module. You can enable optional
parameters using the modprobe command. For example, the Intel 82576 network interface card uses the igb driver
kernel module.
# modprobe igb [<option>=<VAL1>,<VAL2>,...]
# lsmod | grep igb
igb 87592
You can enable SR-IOV for a FortiGate-VM by creating a Virtual Function (VF) and then attaching the VF to your
FortiGate-VM.
The max_vfs parameter of the igb module allocates the maximum number of VFs. The max_vfs parameter causes
the driver to spawn multiple VFs.
Before activating the maximum number of VFs enter the following command to remove the igb module:
# modprobe -r igb
Restart the igb module with max_vfs set to the maximum supported by your device. For example, the valid range for
the Intel 82576 network interface card is 0 to 7. To activate the maximum number of VFs supported by this device enter:
# modprobe igb max_vfs=7
Make the VFs persistent by adding options igb max_vfs=7 to any file in /etc/modprobe.d. For example:
# echo "options igb max_vfs=7" >>/etc/modprobe.d/igb.conf
Verify the new VFs. For example, you could use the following lspci command to list the newly added VFs attached to
the Intel 82576 network device. Alternatively, you can use grep to search for Virtual Function, to search for
devices that support VFs.
# lspci | grep 82576
0b:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
0b:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
0b:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:10.7 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
0b:11.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
Use the -n parameter of the lspci command to find the identifier for the PCI device. The PFs correspond to
0b:00.0 and 0b:00.1. All VFs have Virtual Function in the description.
The libvirt service must recognize a PCI device before you can add it to a VM. libvirt uses a similar notation to
the lspci output.
Use the virsh nodedev-list command and the grep command to filter the Intel 82576 network device from the
list of available host devices. In the example, 0b is the filter for the Intel 82576 network devices. This may vary for your
system and may result in additional devices.
# virsh nodedev-list | grep 0b
pci_0000_0b_00_0
pci_0000_0b_00_1
pci_0000_0b_10_0
pci_0000_0b_10_1
pci_0000_0b_10_2
pci_0000_0b_10_3
pci_0000_0b_10_4
pci_0000_0b_10_5
pci_0000_0b_10_6
pci_0000_0b_11_0
pci_0000_0b_11_1
pci_0000_0b_11_2
pci_0000_0b_11_3
pci_0000_0b_11_4
pci_0000_0b_11_5
The serial numbers for the VFs and PFs should be in the list.
The pci_0000_0b_00_0 is one of the PFs and pci_0000_0b_10_0 is the first corresponding VF for that PF. Use
virsh nodedev-dumpxml to get device details for both devices.
Example device details for the pci_0000_0b_00_0 PF device:
# virsh nodedev-dumpxml pci_0000_0b_00_0
<device>
<name>pci_0000_0b_00_0</name>
<parent>pci_0000_00_01_0</parent>
<driver>
<name>igb</name>
</driver>
<capability type='pci'>
<domain>0</domain>
<bus>11</bus>
<slot>0</slot>
<function>0</function>
<product id='0x10c9'>82576 Gigabit Network Connection</product>
<vendor id='0x8086'>Intel Corporation</vendor>
</capability>
</device>
Example device details for the pci_0000_0b_10_0 VF device:
# virsh nodedev-dumpxml pci_0000_0b_10_0
<device>
<name>pci_0000_0b_10_0</name>
<parent>pci_0000_0b_00_0</parent>
<driver>
<name>igbvf</name>
</driver>
<capability type='pci'>
<domain>0</domain>
<bus>11</bus>
<slot>16</slot>
<function>0</function>
<product id='0x10ca'>82576 Virtual Function</product>
<vendor id='0x8086'>Intel Corporation</vendor>
</capability>
</device>
You must use this information to specify the bus, slot, and function parameters when you add the VF to a FortiGate-VM.
A convenient way to do this is to create a temporary XML file and copy the following text into that file.
<interface type='hostdev' managed='yes'>
<source>
<address type='pci' domain='0' bus='11' slot='16' function='0'/>
</source>
</interface>
You can also include additional information about the VF such as a MAC address, VLAN tag, and so on. If you specify a
MAC address, the VF will always have this MAC address. If you do not specify a MAC address, the system generates a
new one each time the FortiGate-VM restarts.
Enter the following command to add the VF to a FortiGate-VM. This configuration attaches the new VF device
immediately and saves it for subsequent FortiGate-VM restarts.
virsh attach-device MyFGTVM <temp-xml-file> --config
Where MyFGTVM is the name of the FortiGate-VM for which to enable SR-IOV, and <temp-xml-file> is the
temporary XML file containing the VF configuration.
After this configuration, when you start up the FortiGate-VM it detects the SR-IOV VF as a new network interface.
Interrupt affinity
In addition to enabling SR-IOV in the VM host, to fully take advantage of SR-IOV performance improvements you must
configure interrupt affinity for your FortiGate-VM. Interrupt affinity (also called CPU affinity) maps FortiGate-VM
interrupts to the CPUs that are assigned to your FortiGate-VM. You use a CPU affinity mask to define the CPUs that the
interrupts are assigned to.
A common use of this feature would be to improve your FortiGate-VM's networking performance by:
l On the VM host, add multiple host CPUs to your FortiGate-VM.
l On the VM host, configure CPU affinity to specify the CPUs that the FortiGate-VM can use.
l On the VM host, configure other VM clients on the VM host to use other CPUs.
l On the FortiGate-VM, assign network interface interrupts to a CPU affinity mask that includes the CPUs that the
FortiGate-VM can use.
In this way, all available CPU interrupts for the configured host CPUs are used to process traffic on your FortiGate
interfaces. This configuration could lead to improved FortiGate-VM network performance because you have dedicated
VM host CPU cycles to processing your FortiGate-VM's network traffic.
You can use the following CLI command to configure interrupt affinity for your FortiGate-VM:
config system affinity-interrupt
edit <index>
set interrupt <interrupt-name>
set affinity-cpumask <cpu-affinity-mask>
next
end
Where:
l <interrupt-name> is the name of the interrupt to associate with a CPU affinity mask. You can view your
FortiGate-VM interrupts using the diagnose hardware sysinfo interrupts command. Usually you
would associate all of the interrupts for a given interface with the same CPU affinity mask.
l <cpu-affinity-mask> is the CPU affinity mask for the CPUs that will process the associated interrupt.
For example, consider the following configuration:
l The port2 and port3 interfaces of a FortiGate-VM send and receive most of the traffic.
l On the VM host you have set up CPU affinity between your FortiGate-VM and four CPUs (CPU 0, 1 , 2, and 3).
l SR-IOV is enabled and SR-IOV interfaces use the i40evf interface driver.
The output from the diagnose hardware sysinfo interrupts command shows that port2 has the following
transmit and receive interrupts:
i40evf-port2-TxRx-0
i40evf-port2-TxRx-1
i40evf-port2-TxRx-2
i40evf-port2-TxRx-3
The output from the diagnose hardware sysinfo interrupts command shows that port3 has the following
transmit and receive interrupts:
i40evf-port3-TxRx-0
i40evf-port3-TxRx-1
i40evf-port3-TxRx-2
i40evf-port3-TxRx-3
Use the following command to associate the port2 and port3 interrupts with CPU 0, 1 , 2, and 3.
config system affinity-interrupt
edit 1
set interrupt "i40evf-port2-TxRx-0"
set affinity-cpumask "0x0000000000000001"
next
edit 2
set interrupt "i40evf-port2-TxRx-1"
set affinity-cpumask "0x0000000000000002"
next
edit 3
set interrupt "i40evf-port2-TxRx-2"
set affinity-cpumask "0x0000000000000004"
next
edit 4
set interrupt "i40evf-port2-TxRx-3"
set affinity-cpumask "0x0000000000000008"
next
edit 5
set interrupt "i40evf-port3-TxRx-0"
set affinity-cpumask "0x0000000000000001"
next
edit 6
set interrupt "i40evf-port3-TxRx-1"
set affinity-cpumask "0x0000000000000002"
next
edit 7
set interrupt "i40evf-port3-TxRx-2"
set affinity-cpumask "0x0000000000000004"
next
edit 8
set interrupt "i40evf-port3-TxRx-3"
set affinity-cpumask "0x0000000000000008"
next
end
Packet-distribution affinity
With SR-IOV enabled on the VM host and interrupt affinity configured on your FortiGate-VM there is one additional
configuration you can add that may improve performance. Most common network interface hardware has restrictions on
the number of RX/TX queues that it can process. This can result in some CPUs being much busier than others and the
busy CPUs may develop extensive queues.
You can get around this potential bottleneck by configuring affinity packet redistribution to allow overloaded CPUs to
redistribute packets they receive to other, less busy CPUs. This may result in a more even distribution of packet
processing to all available CPUs.
You configure packet redistribution for interfaces by associating an interface with an affinity CPU mask. This
configuration distributes packets sent and received by that interface to the CPUs defined by the CPU affinity mask
associated with the interface.
You can use the following CLI command to configure affinity packet redistribution for your FortiGate-VM:
config system affinity-packet-redistribution
edit <index>
set interface <interface-name>
set affinity-cpumask <cpu-affinity-mask>
next
end
Where:
l <interface-name> is the name of the interface to associate with a CPU affinity mask.
l <cpu-affinity-mask> is the CPU affinity mask for the CPUs that will process packets to and from the
associated interface.
For example, you can improve the performance of the interrupt affinity example shown above by using the following
command to allow packets sent and received by the port3 interface to be redistributed to CPUs according to the 0xE
CPU affinity mask.
config system affinity-packet-redistribution
edit 1
set interface port3
set affinity-cpumask "0xE"
next
end
You can configure the FortiGate-VM to use an Intel QuickAssist (QAT) card to accelerate IPsec VPN traffic handling.
Configuring QAT consists of enabling SR-IOV for the QAT card, creating QAT VFs, and attaching them to the
FortiGate-VM. The following output shows the CPU flags of the host used in this example:
perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq
dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_
2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm
3dnowprefetch epb cat_l3 cdp_l3 intel_ppin intel_pt ssbd mba ibrs ibpb stibp tpr_shadow
vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm
cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl
xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat
pln pts pku ospke spec_ctrl intel_stibp flush_l1d
Setting up SR-IOV on KVM involves creating a PF for each physical network card in the hardware platform. VFs allow
FortiGate-VMs to communicate through the physical network card. VFs are actual PCIe hardware resources and only a
limited number of VFs are available for each PF. Each QAT addon card creates three PFs with a maximum capacity of
16 VFs each. Ensure that the VM and QAT card are on the same NUMA node.
To configure SR-IOV and create VFs, see SR-IOV support for virtual networking.
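For example, on many hosts the QAT VFs can be created by writing the desired VF count to the PF's sriov_numvfs attribute. This is a sketch; the PCI address b1:00.0 is taken from the output below, and the supported VF count depends on the device and driver:
[root@localhost ~]# echo 16 > /sys/bus/pci/devices/0000:b1:00.0/sriov_numvfs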
After enabling SR-IOV and setting the VF numbers, review the QAT and NIC VFs:
[root@localhost net]# lshw -c processor -businfo
Bus info Device Class Description
========================================================
cpu@0 processor Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz
cpu@1 processor Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz
pci@0000:b1:00.0 processor C62x Chipset QuickAssist Technology
pci@0000:b1:01.0 processor Intel Corporation
pci@0000:b1:01.1 processor Intel Corporation
pci@0000:b1:01.2 processor Intel Corporation
pci@0000:b1:01.3 processor Intel Corporation
pci@0000:b1:01.4 processor Intel Corporation
pci@0000:b1:01.5 processor Intel Corporation
pci@0000:b1:01.6 processor Intel Corporation
pci@0000:b1:01.7 processor Intel Corporation
pci@0000:b1:02.0 processor Intel Corporation
pci@0000:b1:02.1 processor Intel Corporation
pci@0000:b1:02.2 processor Intel Corporation
pci@0000:b1:02.3 processor Intel Corporation
pci@0000:b1:02.4 processor Intel Corporation
pci@0000:b1:02.5 processor Intel Corporation
pci@0000:b1:02.6 processor Intel Corporation
pci@0000:b1:02.7 processor Intel Corporation
pci@0000:b3:00.0 processor C62x Chipset QuickAssist Technology
pci@0000:b3:01.0 processor Intel Corporation
pci@0000:b3:01.1 processor Intel Corporation
pci@0000:b3:01.2 processor Intel Corporation
pci@0000:b3:01.3 processor Intel Corporation
pci@0000:b3:01.4 processor Intel Corporation
pci@0000:b3:01.5 processor Intel Corporation
pci@0000:b3:01.6 processor Intel Corporation
pci@0000:b3:01.7 processor Intel Corporation
pci@0000:b3:02.0 processor Intel Corporation
pci@0000:b3:02.1 processor Intel Corporation
pci@0000:b3:02.2 processor Intel Corporation
pci@0000:b3:02.3 processor Intel Corporation
pci@0000:b3:02.4 processor Intel Corporation
pci@0000:b3:02.5 processor Intel Corporation
The above example has one physical QAT card with three QAT accelerators. The following lspci output shows the X710 network PFs and their VFs:
[root@localhost ~]# lspci | grep Ethernet
86:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev
02)
86:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev
02)
86:02.0 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:02.1 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:02.2 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:02.3 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:02.4 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:02.5 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:02.6 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:02.7 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:0a.0 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:0a.1 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:0a.2 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:0a.3 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:0a.4 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:0a.5 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:0a.6 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:0a.7 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev
02)
88:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev
02)
88:02.0 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:02.1 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:02.2 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:02.3 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:02.4 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:02.5 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:02.6 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:02.7 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:0a.0 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:0a.1 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:0a.2 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:0a.3 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:0a.4 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:0a.5 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:0a.6 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:0a.7 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
[root@localhost ~]# lshw -c network -businfo | grep X710
pci@0000:86:00.0 p5p1 network Ethernet Controller X710 for 10GbE SFP+
pci@0000:86:00.1 p5p2 network Ethernet Controller X710 for 10GbE SFP+
pci@0000:88:00.0 p7p1 network Ethernet Controller X710 for 10GbE SFP+
pci@0000:88:00.1 p7p2 network Ethernet Controller X710 for 10GbE SFP+
You can inject an SR-IOV network VF into a Linux KVM VM using one of the following ways:
l Connecting an SR-IOV VF to a KVM VM by directly importing the VF as a PCI device using the PCI bus information
that the host OS assigned to it when it was created
l Using the Virtual Machine Manager GUI
l Adding an SR-IOV network adapter to the KVM VM as a VF network adapter connected to a macvtap on the host
l Creating an SR-IOV VF network adapter using a KVM virtual network pool of adapters
See Configure SR-IOV Network Virtual Functions in Linux* KVM*.
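As a sketch of the last approach, a virtual network pool for one PF could be defined with an XML file and the standard virsh net-* commands. The pool name p5p1-pool and PF name p5p1 match the example below; adjust them for your host:
cat > p5p1-pool.xml <<'EOF'
<network>
  <name>p5p1-pool</name>
  <forward mode='hostdev' managed='yes'>
    <pf dev='p5p1'/>
  </forward>
</network>
EOF
virsh net-define p5p1-pool.xml
virsh net-start p5p1-pool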
In the following example, virtual network adapter pools were created for KVM04:
[root@localhost ~]# virsh net-list
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes
p5p1-pool active no no
p5p2-pool active no no
p7p1-pool active no no
p7p2-pool active no no
Virtual Functions on Intel Corporation Ethernet Controller X710 for 10GbE SFP+. (p5p1):
PCI BDF Interface
======= =========
0000:86:02.0 p5p1_0
0000:86:02.1 p5p1_1
0000:86:02.2 p5p1_2
0000:86:02.3 p5p1_3
0000:86:02.4 p5p1_4
0000:86:02.5 p5p1_5
0000:86:02.6 p5p1_6
0000:86:02.7 p5p1_7
Virtual Functions on Intel Corporation Ethernet Controller X710 for 10GbE SFP+. (p5p2):
PCI BDF Interface
======= =========
0000:86:0a.0 p5p2_0
0000:86:0a.1 p5p2_1
0000:86:0a.2 p5p2_2
0000:86:0a.3 p5p2_3
0000:86:0a.4 p5p2_4
0000:86:0a.5 p5p2_5
0000:86:0a.6 p5p2_6
0000:86:0a.7 p5p2_7
Virtual Functions on Intel Corporation Ethernet Controller X710 for 10GbE SFP+. (p7p1):
PCI BDF Interface
======= =========
0000:88:02.0 p7p1_0
0000:88:02.1 p7p1_1
0000:88:02.2 p7p1_2
0000:88:02.3 p7p1_3
0000:88:02.4 p7p1_4
0000:88:02.5 p7p1_5
0000:88:02.6 p7p1_6
0000:88:02.7 p7p1_7
Virtual Functions on Intel Corporation Ethernet Controller X710 for 10GbE SFP+. (p7p2):
PCI BDF Interface
======= =========
0000:88:0a.0 p7p2_0
0000:88:0a.1 p7p2_1
0000:88:0a.2 p7p2_2
0000:88:0a.3 p7p2_3
0000:88:0a.4 p7p2_4
0000:88:0a.5 p7p2_5
0000:88:0a.6 p7p2_6
0000:88:0a.7 p7p2_7
The XML file is as follows. <cputune> locks the virtual CPUs to the same NUMA node, while <hostdev
mode='subsystem' type='pci' managed='yes'> attaches the QAT VFs:
[root@localhost ~]# virsh dumpxml vm04numa1
<domain type='kvm'>
<name>vm04numa1</name>
<uuid>fc5e1cec-8b4e-4bb8-9f89-e86f1abfffeb</uuid>
<memory unit='KiB'>6291456</memory>
<currentMemory unit='KiB'>6291456</currentMemory>
<memoryBacking>
<hugepages>
<page size='1048576' unit='KiB'/>
</hugepages>
</memoryBacking>
<vcpu placement='static'>4</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='17'/>
<vcpupin vcpu='1' cpuset='19'/>
<vcpupin vcpu='2' cpuset='21'/>
<vcpupin vcpu='3' cpuset='23'/>
<emulatorpin cpuset='17,19,21,23'/>
</cputune>
<numatune>
<memory mode='strict' nodeset='1'/>
</numatune>
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='custom' match='exact' check='partial'>
<model fallback='allow'>Skylake-Server-IBRS</model>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/vm04numa1.0984'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<master startport='4'/>
next
edit 6
set interrupt "i40evf-port3-TxRx-1"
set affinity-cpumask "0x02"
next
edit 7
set interrupt "i40evf-port3-TxRx-2"
set affinity-cpumask "0x02"
next
edit 8
set interrupt "i40evf-port3-TxRx-3"
set affinity-cpumask "0x02"
next
edit 9
set interrupt "i40evf-port4-TxRx-0"
set affinity-cpumask "0x04"
next
edit 10
set interrupt "i40evf-port4-TxRx-1"
set affinity-cpumask "0x04"
next
edit 11
set interrupt "i40evf-port4-TxRx-2"
set affinity-cpumask "0x04"
next
edit 12
set interrupt "i40evf-port4-TxRx-3"
set affinity-cpumask "0x04"
next
edit 13
set interrupt "i40evf-port5-TxRx-0"
set affinity-cpumask "0x08"
next
edit 14
set interrupt "i40evf-port5-TxRx-1"
set affinity-cpumask "0x08"
next
edit 15
set interrupt "i40evf-port5-TxRx-2"
set affinity-cpumask "0x08"
next
edit 16
set interrupt "i40evf-port5-TxRx-3"
set affinity-cpumask "0x08"
next
edit 17
set interrupt "qat_00:14.00"
set affinity-cpumask "0x01"
next
edit 18
set interrupt "qat_00:15.00"
set affinity-cpumask "0x02"
next
edit 19
set interrupt "qat_00:16.00"
set affinity-cpumask "0x04"
next
edit 20
set interrupt "qat_00:17.00"
set affinity-cpumask "0x08"
next
end
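Each entry above follows the same pattern: a config system affinity-interrupt entry names an interrupt (as reported by diagnose hardware sysinfo interrupts) and binds it to a CPU bitmask. The following is a minimal sketch of a single entry, using an interrupt name and mask consistent with the example output below; the entry ID is illustrative only:
config system affinity-interrupt
edit 1
set interrupt "i40evf-port2-TxRx-0"
set affinity-cpumask "0x01"
next
end
Here 0x01 selects CPU0, while 0x02, 0x04, and 0x08 select CPU1, CPU2, and CPU3, matching the masks used above.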
For information on how CPU interrupt affinity optimizes FortiGate-VM performance, see Technical Note: Optimize FortiGate-VM performance by configuring CPU interrupt affinity.
FGVM04TM19001384 (global) # diagnose hardware sysinfo interrupts
CPU0 CPU1 CPU2 CPU3
0: 26 0 0 0 IO-APIC-edge timer
1: 9 0 0 0 IO-APIC-edge i8042
4: 15 0 0 0 IO-APIC-edge serial
8: 0 0 0 0 IO-APIC-edge rtc
9: 0 0 0 0 IO-APIC-fasteoi acpi
10: 0 0 0 0 IO-APIC-fasteoi uhci_hcd:usb3, uhci_hcd:usb4
11: 16 0 0 0 IO-APIC-fasteoi ehci_hcd:usb1, uhci_hcd:usb2
12: 3 0 0 0 IO-APIC-edge i8042
14: 0 0 0 0 IO-APIC-edge ata_piix
15: 0 0 0 0 IO-APIC-edge ata_piix
40: 0 0 0 0 PCI-MSI-edge virtio1-config
41: 629 0 0 0 PCI-MSI-edge virtio1-requests
42: 0 0 0 0 PCI-MSI-edge virtio3-config
43: 978 0 0 0 PCI-MSI-edge virtio3-input.0
44: 1 0 0 0 PCI-MSI-edge virtio3-output.0
45: 255083 0 0 0 PCI-MSI-edge qat_00:14.00
46: 17 537891 0 0 PCI-MSI-edge qat_00:15.00
47: 17 0 1244511 0 PCI-MSI-edge qat_00:16.00
48: 17 0 0 1224563 PCI-MSI-edge qat_00:17.00
49: 173 0 0 0 PCI-MSI-edge i40evf-0000:00:0a.0:mbx
50: 119912 0 0 0 PCI-MSI-edge i40evf-port2-TxRx-0
51: 1 200309 0 0 PCI-MSI-edge i40evf-port2-TxRx-1
52: 1 0 538905 0 PCI-MSI-edge i40evf-port2-TxRx-2
53: 1 0 0 532128 PCI-MSI-edge i40evf-port2-TxRx-3
54: 172 0 0 0 PCI-MSI-edge i40evf-0000:00:0b.0:mbx
SOFTWARE:
Encryption (encrypted/decrypted)
null : 0 0
des : 0 0
3des : 0 0
aes : 0 0
aes-gcm : 0 0
aria : 0 0
seed : 0 0
chacha20poly1305 : 0 0
Integrity (generated/validated)
null : 0 0
md5 : 0 0
sha1 : 0 0
sha256 : 0 0
sha384 : 0 0
sha512 : 0 0
Test Results:
1360-byte IPsec packet loss results with QAT in KVM04: 10,654m (v6.2 build 0984)
DPDK and vNP enhance FortiGate-VM performance by offloading part of packet processing to user space, using a kernel bypass solution within the operating system. You must enable and configure DPDK with FortiOS CLI commands. FortiOS 6.2.3 supports DPDK for KVM and VMware ESXi environments.
The current DPDK+vNP offloading-capable version of FortiOS only supports FortiGate instances with two or more vCPUs. The minimum required RAM sizes differ from those of regular FortiGate-VM models without offloading. For maximum performance, it is recommended to allocate as much RAM as the licensed limit, as shown below. See the 5.6 documentation for minimum size reference. FortiOS 6.2.2 and later versions do not restrict RAM size by license. Therefore, you can allocate as much memory as desired on 6.2-based DPDK-enabled FortiGate-VMs:
FG-VM02(v) No restriction
FG-VM04(v) No restriction
FG-VM08(v) No restriction
FG-VM16(v) No restriction
FG-VM32(v) No restriction
The current build does not support encrypted traffic. Support is planned for future versions. It is recommended to disable the DPDK option using the CLI, or to use regular FortiGate-VM builds, when using IPsec and SSL VPN features.
When DPDK+vNP offloading is enabled, the FortiGate-VM may support fewer concurrent sessions under high load than the same licensed FortiGate-VM without DPDK+vNP offloading enabled.
Provided that you have obtained a DPDK+vNP offloading-capable FortiOS build, the following sections describe how to enable and configure the capability:
l DPDK global settings on page 47
l DPDK CPU settings on page 49
l DPDK diagnostic commands on page 50
FortiOS 6.2.3 and later versions support SNMP to poll DPDK-related status. For details, see the corresponding MIB file
that Fortinet provides.
1. In the FortiOS CLI, enter the following commands to enable DPDK operation:
config dpdk global
set status enable
set interface port1
end
2. The CLI displays the following message:
Status and interface changes will trigger system reboot and take effect after the reboot.
Do you want to continue? (y/n)
Press y to reboot the device.
Before the system reboots, check that the other DPDK settings are configured properly. You must enable at least one network interface for DPDK. This example enables port1; you can enable other interfaces as desired. If you do not set an interface, a prompt displays and the change is discarded. See To enable a network interface to run DPDK operation: on page 48.
Enabling multiqueue for network RX/TX helps DPDK balance the workload across multiple engines.
1. In the FortiOS CLI, enter the following commands to enable multiqueue:
config dpdk global
set multiqueue enable
end
2. The CLI displays the following message:
Multiqueue change will trigger IPS restart and will take effect after the restart. Traffic
may be interrupted briefly.
Do you want to continue? (y/n)
Press y to restart the IPS engine.
To set the percentage of main memory allocated to DPDK huge pages and packet buffer pool:
You can configure the amount of main memory (as a percentage) allocated to huge pages, which are dedicated to
DPDK use. You can also configure the amount of main memory (as a percentage) allocated to the DPDK packet buffer
pool.
Enter the following commands to set these amounts:
config dpdk global
set hugepage-percentage [X]
set mbufpool-percentage [Y]
end
Huge page memory is mounted at system startup and remains mounted as long as the
FortiGate-VM is running. Packet buffer pool memory is drawn from huge pages. Therefore,
the packet buffer pool amount (Y) must not exceed the huge pages amount (X).
In practice, Y must be less than or equal to X - 5 to leave a 5% memory overhead for other DPDK data structures. X can range from 10 to 50, and Y can range from 5 to 45.
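For example, a pair of values that satisfies this constraint (the specific percentages here are illustrative, not recommendations) is X = 30 and Y = 25:
config dpdk global
set hugepage-percentage 30
set mbufpool-percentage 25
end
Here Y equals X - 5, leaving the required 5% overhead.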
Setting X too high may force FortiOS to enter conserve mode. Setting X too low may leave insufficient memory for DPDK operation and cause initialization to fail.
During FortiOS DPDK Helper environment initialization, RTE memory zones are drawn from huge memory pages. The system tries to reserve contiguous memory chunks for these memory zones on a best-effort basis. Therefore, the amount of huge page memory is slightly larger than the amount of memory that the RTE memory zones use. To see how the RTE memory zones reserve memory space, run the diagnose dpdk statistics show memory command.
You must enable at least one network interface to run DPDK operation.
config dpdk global
set interface "portX" "portY"
end
You must enable at least one network interface for DPDK. Otherwise, DPDK early initialization fails during system startup and DPDK falls back to a disabled state. For example, if there are two network interfaces that you intend to use, you can specify both, as shown in the sketch below.
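A minimal sketch of that two-interface case, where port1 and port2 stand in for whichever interfaces you intend to run DPDK on:
config dpdk global
set interface "port1" "port2"
end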
By default, the DPDK monitor engine is disabled. When it is enabled, only one DPDK engine polls the DPDK-enabled interfaces; when packets arrive, the corresponding DPDK engines are activated. This helps when services other than the firewall or IPS engine, such as antivirus, WAD, or web filter, are running and performance degradation is observed even though DPDK performance statistics show that the DPDK engines are not fully used. Latency may increase because of the time the monitor engine needs to activate the proper DPDK engines.
By default, the elastic buffer is disabled. When enabled, an elastic buffer stores packets during traffic bursts. This helps reduce packet drops when received traffic peaks while the system is overloaded, by buffering the packets and processing them afterward. This feature is experimental.
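A sketch of enabling the elastic buffer, assuming the feature is exposed as an elasticbuffer option under config dpdk global (the option name is an assumption and should be confirmed against your build's CLI reference):
config dpdk global
set elasticbuffer enable
end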
By default, per-session accounting is configured only for traffic logs, which results in per-session accounting being
enabled when you enable traffic logging in a policy.
Per-session accounting is a logging feature that allows FortiOS to report the correct bytes per packet numbers per
session for sessions offloaded to a vNP process. This information appears in traffic log messages, FortiView, and
diagnose commands. Per-session accounting can affect vNP offloading performance. You should only enable per-
session accounting if you need the accounting information. A similar feature is available for physical FortiGate NP6
processors.
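A sketch of enabling full per-session accounting, assuming it is exposed as a per-session-accounting option under config dpdk global (both the option name and its values are assumptions; confirm them against your build's CLI reference):
config dpdk global
set per-session-accounting enable
end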
On the FortiGate-VM, a DPDK engine is attached to an IPS engine, which shares the same process and is mapped to a
CPU. A processing pipeline of four stages handles a packet from RX to TX:
1. DPDK RX
2. vNP
3. IPS
4. DPDK TX
You can freely determine the CPUs enabled for each pipeline stage by running the following commands:
config dpdk cpus
set [X] [Y]
end
Here X is one of the pipeline stages: rx-cpus, vnp-cpus, ips-cpus, and tx-cpus.
Y is a string expression of CPU IDs, which contains comma-delimited individual CPU IDs or ranges of CPU IDs
separated by a dash.
The example below enables CPUs 0, 2, 4, 6, 7, 8, 9, 10, and 15 to run the vNP pipeline stage:
set vnp-cpus 0,2,4,6-10,15
You must enable at least one CPU for each pipeline stage. Otherwise, DPDK early
initialization fails.
If the DPDK early initialization was unsuccessful, refer to DPDK global settings on page 47 to see if the DPDK-related
options were properly set.
The early init-log also keeps records of the last-edited DPDK configuration, enabled CPUs and ports, binding and unbinding of drivers, device PCI information, and so on.
This command resets engine and port statistics to zeroes, but does not affect vNP and memory statistics.
A useful way to check whether traffic is properly forwarded is to check the port statistics. This shows the number of
received/transmitted/dropped packets in each DPDK-enabled port.
Checking engine statistics is helpful in understanding how traffic is load-balanced among DPDK engines at each
pipeline stage.
Checking vNP statistics provides insight into how traffic is offloaded from the slow path (traversing the kernel) to the fast path (firewall and IPS operations quickly processed by the vNP engine). In particular, observe the number of session search engine (SSE) entries pushed from the kernel or IPS to the vNP engine (ctr_sse_entries). The number of packets going through the SSE fast path (ctr_fw_and_ips_fpath) is also important.
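The diagnostic commands that this section names explicitly are shown below; the port, engine, and vNP counters described above are likely available under the same diagnose dpdk command tree, which you can explore with the CLI's ? help on your build:
diagnose dpdk statistics show memory
diagnose dpdk performance show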
To see the DPDK CPU settings, run the following commands. In this case, N is the highest CPU ID that the FortiGate-VM uses (one less than the number of vCPUs).
show dpdk cpus
config dpdk cpus
set rx-cpus "0-N"
set vnp-cpus "0-N"
set ips-cpus "0-N"
set tx-cpus "0-N"
end
The diagnose dpdk performance show command provides near real-time performance information for each DPDK engine, in particular its CPU usage. This provides better insight into how many CPUs to allocate to each pipeline stage.
2019-07-19 Updated Validating the FortiGate-VM license with FortiManager on page 22.
2020-01-03 Added Enhancing FortiGate-VM Performance with DPDK and vNP offloading on page 46.
2020-01-24 Added To enable DPDK monitor engine: on page 48, To enable elastic buffer (temporary
memory buffer): on page 49, and To enable per-session accounting: on page 49.
2020-05-05 Updated FortiGate-VM models and licensing on page 4 and Registering the FortiGate-VM on
page 8.
2020-06-04 Updated Enhancing FortiGate-VM Performance with DPDK and vNP offloading on page 46.