CLUSTERXL
R81.10
Administration Guide
[Classification: Protected]
Check Point Copyright Notice
© 2021 Check Point Software Technologies Ltd.
All rights reserved. This product and related documentation are protected by copyright and distributed under
licensing restricting their use, copying, distribution, and decompilation. No part of this product or related
documentation may be reproduced in any form or by any means without prior written authorization of Check
Point. While every precaution has been taken in the preparation of this book, Check Point assumes no
responsibility for errors or omissions. This publication and features described herein are subject to change
without notice.
TRADEMARKS:
Refer to the Copyright page for a list of our trademarks.
Refer to the Third Party copyright notices for a list of relevant copyrights and third-party licenses.
Important Information
Latest Software
We recommend that you install the most recent software release to stay up-to-date with the
latest functional improvements, stability fixes, security enhancements and protection against
new and evolving attacks.
Certifications
For third party independent certification of Check Point products, see the Check Point
Certifications page.
Feedback
Check Point is engaged in a continuous effort to improve its documentation.
Please help us by sending your comments.
Table of Contents
Glossary
Introduction to ClusterXL
The Need for Clusters
ClusterXL Solution
How ClusterXL Works
The Cluster Control Protocol
ClusterXL Requirements and Compatibility
Check Point Appliances and Open Servers
Supported Number of Cluster Members
Hardware Requirements for Cluster Members
Software Requirements for Cluster Members
VMAC Mode
Supported Topologies for Synchronization Network
Clock Synchronization in ClusterXL
IPv6 Support for ClusterXL
Synchronized Cluster Restrictions
High Availability and Load Sharing Modes in ClusterXL
Introduction to High Availability and Load Sharing modes
High Availability
Load Sharing
Example ClusterXL Topology
Example Diagram
Defining the Cluster Member IP Addresses
Defining the Cluster Virtual IP Addresses
Defining the Synchronization Network
Configuring Cluster Addresses on Different Subnets
ClusterXL Mode Considerations
Choosing High Availability, Load Sharing, or Active-Active mode
Considerations for the Load Sharing Mode
IP Address Migration
ClusterXL Modes
High Availability Mode
Load Sharing Modes
Glossary
Active
State of a Cluster Member that is fully operational: (1) In ClusterXL, this applies to the
state of the Security Gateway component (2) In 3rd party / OPSEC cluster, this applies to
the state of the cluster State Synchronization mechanism.
Active-Active
A cluster mode (in R80.40 and higher versions), where cluster members are located in
different geographical areas (different sites, different cloud availability zones). This mode
supports the configuration of IP addresses from different subnets on all cluster
interfaces, including the Sync interfaces. Each cluster member inspects all traffic routed
to it and synchronizes the recorded connections to its peer cluster members. The traffic
is not balanced between the cluster members.
Active Up
ClusterXL in High Availability mode that was configured as Maintain current active
Cluster Member in the cluster object in SmartConsole: (1) If the current Active member
fails for some reason, or is rebooted (for example, Member_A), then failover occurs
between Cluster Members - another Standby member will be promoted to be Active (for
example, Member_B). (2) When the former Active member (Member_A) recovers from a
failure, or boots, the former Standby member (Member_B) remains in the Active state
(and Member_A assumes the Standby state).
Active(!)
In ClusterXL, state of the Active Cluster Member that suffers from a failure. A problem
was detected, but the Cluster Member still forwards packets, because it is the only
member in the cluster, or because there are no other Active members in the cluster. In
any other situation, the state of the member is Down. Possible states: ACTIVE(!),
ACTIVE(!F) - Cluster Member is in the freeze state, ACTIVE(!P) - This is the Pivot
Cluster Member in Load Sharing Unicast mode, ACTIVE(!FP) - This is the Pivot Cluster
Member in Load Sharing Unicast mode and it is in the freeze state.
Active/Active
See "Load Sharing".
Active/Standby
See "High Availability".
Administrator
A user with permissions to manage Check Point security products and the network
environment.
API
In computer programming, an application programming interface (API) is a set of
subroutine definitions, protocols, and tools for building application software. In general
terms, it is a set of clearly defined methods of communication between various software
components.
Appliance
A physical computer manufactured and distributed by Check Point.
ARP Forwarding
Forwarding of ARP Request and ARP Reply packets between Cluster Members by
encapsulating them in Cluster Control Protocol (CCP) packets. Introduced in R80.10
version. For details, see sk111956.
Backup
(1) In VRRP Cluster on Gaia OS - State of a Cluster Member that is ready to be promoted
to Master state (if Master member fails). (2) In VSX Cluster configured in Virtual System
Load Sharing mode with three or more Cluster Members - State of a Virtual System on a
third (and so on) VSX Cluster Member. (3) A Cluster Member or Virtual System in this
state does not process any traffic passing through the cluster.
Blocking Mode
Cluster operation mode, in which a Cluster Member does not forward any traffic (for
example, because of a failure).
Bond
A virtual interface that contains (enslaves) two or more physical interfaces for
redundancy and load sharing. The physical interfaces share one IP address and one
MAC address. See "Link Aggregation".
Bonding
See "Link Aggregation".
Bridge Mode
A Security Gateway or Virtual System that works as a Layer 2 bridge device for easy
deployment in an existing topology.
CA
Certificate Authority. Issues certificates to gateways, users, or computers, to identify
itself to connecting entities with Distinguished Name, public key, and sometimes IP
address. After certificate validation, entities can send encrypted data using the public
keys in the certificates.
CCP
See "Cluster Control Protocol".
Certificate
An electronic document that uses a digital signature to bind a cryptographic public key to
a specific identity. The identity can be an individual, organization, or software entity. The
certificate is used to authenticate one identity to another.
CGNAT
Carrier Grade NAT. Extending the traditional Hide NAT solution, CGNAT uses improved
port allocation techniques and a more efficient method for logging. A CGNAT rule
defines a range of original source IP addresses and a range of translated IP addresses.
Each IP address in the original range is automatically allocated a range of translated
source ports, based on the number of original IP addresses and the size of the translated
range. CGNAT port allocation is Stateless and is performed during policy installation.
See sk120296.
Cluster
Two or more Security Gateways that work together in a redundant configuration - High
Availability, or Load Sharing.
Cluster Interface
An interface on a Cluster Member, whose Network Type was set as Cluster in
SmartConsole in the cluster object. This interface is monitored by the cluster, and a
failure on this interface causes a cluster failover.
Cluster Member
A Security Gateway that is part of a cluster.
Cluster Mode
Configuration of Cluster Members to work in these redundant modes: (1) One Cluster
Member processes all the traffic - High Availability or VRRP mode (2) All traffic is
processed in parallel by all Cluster Members - Load Sharing.
Cluster Topology
Set of interfaces on all members of a cluster and their settings (Network Objective, IP
address/Net Mask, Topology, Anti-Spoofing, and so on).
ClusterXL
Cluster of Check Point Security Gateways that work together in a redundant
configuration. The ClusterXL both handles the traffic and performs State
Synchronization. These Check Point Security Gateways are installed on Gaia OS: (1)
ClusterXL supports up to 5 Cluster Members, (2) VRRP Cluster supports up to 2 Cluster
Members, (3) VSX VSLS cluster supports up to 13 Cluster Members. Note: In ClusterXL
Load Sharing mode, configuring more than 4 Cluster Members significantly decreases
the cluster performance due to the amount of Delta Sync traffic.
CoreXL
A performance-enhancing technology for Security Gateways on multi-core processing
platforms. Multiple Check Point Firewall instances are running in parallel on multiple
CPU cores.
CoreXL SND
Secure Network Distributer. Part of CoreXL that is responsible for: Processing incoming
traffic from the network interfaces; Securely accelerating authorized packets (if
SecureXL is enabled); Distributing non-accelerated packets between Firewall kernel
instances (SND maintains global dispatching table, which maps connections that were
assigned to CoreXL Firewall instances). Traffic distribution between CoreXL Firewall
instances is statically based on Source IP addresses, Destination IP addresses, and the
IP 'Protocol' type. The CoreXL SND does not inspect the packet content. The decision to
assign a connection to a particular FWK daemon is made on the first packet of the
connection, at a very early stage, before any other processing. Depending on the
SecureXL settings, in most cases SecureXL offloads decryption calculations. However,
in some other cases, such as with Route-Based VPN, decryption is done by the FWK
daemon.
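Example - to observe this distribution on a running Security Gateway, you can list the
CoreXL Firewall instances and their counters from the Expert mode (a minimal sketch;
the exact output layout varies by version):
# Show per-instance connection counters for all CoreXL Firewall instances
fw ctl multik stat
# Show the CPU affinity of interfaces, daemons, and CoreXL Firewall instances
fw ctl affinity -l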
CPHA
General term in Check Point Cluster that stands for Check Point High Availability
(historic fact: the first release of ClusterXL supported only High Availability) that is used
only for internal references (for example, inside kernel debug) to designate ClusterXL
infrastructure.
CPUSE
Check Point Upgrade Service Engine for Gaia Operating System. With CPUSE, you can
automatically update Check Point products for the Gaia OS, and the Gaia OS itself. For
details, see sk92449.
Critical Device
Also known as a Problem Notification, or pnote. A special software device on each
Cluster Member, through which the critical aspects for cluster operation are monitored.
When the critical monitored component on a Cluster Member fails to report its state on
time, or when its state is reported as problematic, the state of that member is
immediately changed to Down. The complete list of the configured critical devices
(pnotes) is printed by the 'cphaprob -ia list' command or 'show cluster members pnotes
all' command.
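Example - a representative sketch of the Expert-mode output (the device list and fields
differ per configuration and version):
cphaprob -ia list
Built-in Devices:
Device Name: Interface Active Check
Current state: OK
Registered Devices:
Device Name: Fullsync
Registration number: 0
Timeout: none
Current state: OK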
DAIP Gateway
A Dynamically Assigned IP (DAIP) Security Gateway is a Security Gateway where the IP
address of the external interface is assigned dynamically by the ISP.
Data Type
A classification of data. The Firewall classifies incoming and outgoing traffic according to
Data Types, and enforces the Policy accordingly.
Database
The Check Point database includes all objects, including network objects, users,
services, servers, and protection profiles.
Dead
State reported by a Cluster Member when it goes out of the cluster (due to 'cphastop'
command (which is a part of 'cpstop'), or reboot).
Decision Function
A special cluster algorithm applied by each Cluster Member to the incoming traffic, to
decide which Cluster Member should process the received packet. Each Cluster
Member maintains a table of hash values generated from the connection tuple (Source
and Destination IP addresses/Ports, and Protocol number).
Delta Sync
Synchronization of kernel tables between all working Cluster Members - exchange of
CCP packets that carry pieces of information about different connections and operations
that should be performed on these connections in relevant kernel tables. This Delta Sync
process is performed directly by the Check Point kernel. While Full Sync is performed,
the Delta Sync updates are not processed; they are saved in kernel memory. After Full
Sync is complete, the Delta Sync packets stored during the Full Sync phase are applied
in order of arrival.
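Example - to inspect the Delta Sync activity on a Cluster Member, you can query the
synchronization statistics (a sketch, assuming a version where these commands are
available; the counter names vary):
# In the Expert mode: Delta Sync statistics (updates sent/received, retransmission requests)
cphaprob syncstat
# Gaia Clish equivalent
show cluster statistics sync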
Distributed Deployment
The Check Point Security Gateway and Security Management Server products are
deployed on different computers.
Domain
A network or a collection of networks related to an entity, such as a company, business
unit or geographical location.
Down
State of a Cluster Member during a failure when one of the Critical Devices reports its
state as "problem": In ClusterXL, applies to the state of the Security Gateway
component; in 3rd party / OPSEC cluster, applies to the state of the State
Synchronization mechanism. A Cluster Member in this state does not process any traffic
passing through the cluster.
Dying
State of a Cluster Member as assumed by peer members, if it did not report its state for
0.7 seconds.
Expert Mode
The name of the full command line shell that gives full system root permissions in the
Check Point Gaia operating system.
External Network
Computers and networks that are outside of the protected network.
External Users
Users defined on external servers. External users are not defined in the Security
Management Server database or on an LDAP server. External user profiles tell the
system how to identify and authenticate externally defined users.
Failback in Cluster
Also, Fallback. Recovery of a Cluster Member that suffered from a failure. The state of a
recovered Cluster Member is changed from Down to either Active, or Standby
(depending on Cluster Mode).
Failed Member
A Cluster Member that cannot send or accept traffic because of a hardware or software
problem.
Failover
Also, Fail-over. Transferring of a control over traffic (packet filtering) from a Cluster
Member that suffered a failure to another Cluster Member (based on internal cluster
algorithms).
Failure
A hardware or software problem that causes a Security Gateway to be unable to serve
as a Cluster Member (for example, one of cluster interface has failed, or one of the
monitored daemon has crashed). Cluster Member that suffered from a failure is declared
as failed, and its state is changed to Down (a physical interface is considered Down only
if all configured VLANs on that physical interface are Down).
Firewall
The software and hardware that protects a computer network by analyzing the incoming
and outgoing network traffic (packets).
Flapping
Repeated changes in the state of either cluster interfaces (cluster interface flapping), or
Cluster Members (Cluster Member flapping). Such repeated state changes are seen in
'Logs & Monitor' > 'Logs' (if, in SmartConsole, in the cluster object, the cluster
administrator set 'Track changes in the status of cluster members' to 'Log').
Forwarding
Process of transferring incoming traffic from one Cluster Member to another Cluster
Member for processing. There are two types of forwarding the incoming traffic
between Cluster Members - Packet forwarding and Chain forwarding. Also see
"Forwarding Layer in Cluster" and "ARP Forwarding in Cluster".
Forwarding Layer
The Forwarding Layer is a ClusterXL mechanism that allows a Cluster Member to pass
packets to peer Cluster Members, after they have been locally inspected by the firewall.
This feature allows connections to be opened from a Cluster Member to an external host.
Packets originated by Cluster Members are hidden behind the Cluster Virtual IP address.
Thus, a reply from an external host is sent to the cluster, and not directly to the source
Cluster Member. This can pose problems in the following situations: (1) The cluster is
working in High Availability mode, and the connection is opened from the Standby
Cluster Member. All packets from the external host are handled by the Active Cluster
Member, instead. (2) The cluster is working in a Load Sharing mode, and the decision
function has selected another Cluster Member to handle this connection. This can
happen since packets directed at a Cluster IP address are distributed between Cluster
Members as with any other connection. If a Cluster Member decides, upon the
completion of the firewall inspection process, that a packet is intended for another
Cluster Member, it can use the Forwarding Layer to hand the packet over to that Cluster
Member. In High Availability mode, packets are forwarded over a Synchronization
network directly to peer Cluster Members. It is important to use secured networks only,
as encrypted packets are decrypted during the inspection process, and are forwarded as
clear-text (unencrypted) data. In Load Sharing mode, packets are forwarded over a
regular traffic network. Packets that are sent on the Forwarding Layer use a special
source MAC address to inform the receiving Cluster Member that they have already
been inspected by another Cluster Member. Thus, the receiving Cluster Member can
safely hand over these packets to the local Operating System, without further inspection.
Full Sync
Process of full synchronization of applicable kernel tables by a Cluster Member from the
working Cluster Member(s) when it tries to join the existing cluster. This process is meant
to fetch a"snapshot" of the applicable kernel tables of already Active Cluster Member(s).
Full Sync is performed during the initialization of Check Point software (during boot
process, the first time the Cluster Member runs policy installation, during 'cpstart', during
'cphastart'). Until the Full Sync process completes successfully, this Cluster Member
remains in the Down state, because until it is fully synchronized with other Cluster
Members, it cannot function as a Cluster Member. Meanwhile, the Delta Sync packets
continue to arrive, and the Cluster Member that tries to join the existing cluster, stores
them in the kernel memory until the Full Sync completes. The whole Full Sync process is
performed by fwd daemons on TCP port 256 over the Sync network (if it fails over the
Sync network, it tries the other cluster interfaces). The information is sent by fwd
daemons in chunks, while making sure they confirm getting the information before
sending the next chunk. Also see "Delta Sync".
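Example - because Full Sync is carried by the fwd daemons over TCP port 256, it can be
verified at the network level while a member joins the cluster (a sketch; the Sync
interface name eth2 is an assumption):
# Watch Full Sync traffic on the Sync interface
tcpdump -ni eth2 'tcp port 256'
# Check for an established Full Sync session
netstat -an | grep ':256'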
Gaia
Check Point security operating system that combines the strengths of both
SecurePlatform and IPSO operating systems.
Gaia Clish
The name of the default command line shell in Check Point Gaia operating system. This
is a restrictive shell (role-based administration controls the number of commands
available in the shell).
Gaia gClish
The name of the global command line shell in Check Point Gaia operating system for
Security Appliances connected to Check Point Quantum Maestro Orchestrators and for
Security Gateway Modules on Scalable Chassis. Commands you run in this shell apply
to all Security Gateway Module / Security Appliances in the Security Group.
Gaia Portal
Web interface for Check Point Gaia operating system.
Geo Cluster
A High Availability cluster mode (in R81 and higher versions), where cluster members
are located in different geographical areas (different sites, different cloud availability
zones). This mode supports the configuration of IP addresses from different subnets on
all cluster interfaces, including the Sync interfaces. The Active cluster member inspects
all traffic routed to the cluster and synchronizes the recorded connections to its peer
cluster members. The traffic is not balanced between the cluster members. See "High
Availability".
HA not started
Output of the 'cphaprob <flag>' command or 'show cluster <option>' command on the
Cluster Member. This output means that Check Point clustering software is not started
on this Security Gateway (for example, this machine is not a part of a cluster, or
'cphastop' command was run, or some failure occurred that prevented the ClusterXL
product from starting correctly).
High Availability
A redundant cluster mode, where only one Cluster Member (Active member) processes
all the traffic, while other Cluster Members (Standby members) are ready to be promoted
to Active state if the current Active member fails. In the High Availability mode, the
Cluster Virtual IP address (that represents the cluster on that network) is associated: (1)
With physical MAC Address of Active member (2) With virtual MAC Address (see
sk50840). Acronym: HA.
Hotfix
A piece of software installed on top of the current software in order to fix some wrong or
undesired behavior.
HTU
Stands for "HA Time Unit". All internal time in ClusterXL is measured in HTUs (the times
in cluster debug also appear in HTUs). Formula in the Check Point software: 1 HTU = 10
x fwha_timer_base_res = 10 x 10 milliseconds = 100 ms.
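Example - the timer base resolution referenced in this formula is exposed as a kernel
parameter, so the conversion can be confirmed on a live Cluster Member (a sketch,
assuming the default resolution):
# In the Expert mode: read the cluster timer base resolution (default: 10 ms)
fw ctl get int fwha_timer_base_res
# 1 HTU = 10 x fwha_timer_base_res = 10 x 10 ms = 100 ms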
Hybrid
Starting in R80.20, on Security Gateways with 40 or more CPU cores, Software Blades
run in the user space (as 'fwk' processes). The Hybrid Mode refers to the state when you
upgrade Cluster Members from R80.10 (or below) to R80.20 (or above). The Hybrid
Mode is the state, in which the upgraded Cluster Members already run their Software
Blades in the user space (as fwk processes), while other Cluster Members still run their
Software Blades in the kernel space (represented by the fw_worker processes). In the
Hybrid Mode, Cluster Members are able to synchronize the required information.
ICA
Internal Certificate Authority. A component on Check Point Management Server that
issues certificates for authentication.
Init
State of a Cluster Member in the phase after the boot and until the Full Sync completes.
A Cluster Member in this state does not process any traffic passing through the cluster.
Internal Network
Computers and resources protected by the Firewall and accessed by authenticated
users.
IP Tracking
Collecting and saving of Source IP addresses and Source MAC addresses from
incoming IP packets during the probing. IP tracking is useful for Cluster Members to
determine whether the network connectivity of the Cluster Member is acceptable.
IP Tracking Policy
Internal setting that controls which IP addresses should be tracked during IP tracking:
(1) Only IP addresses from the subnet of cluster VIP, or from subnet of physical cluster
interface (this is the default) (2) All IP addresses, also outside the cluster subnet.
IPv4
Internet Protocol Version 4 (see RFC 791). A 32-bit number - 4 sets of numbers, each set
can be from 0 - 255. For example, 192.168.2.1.
IPv6
Internet Protocol Version 6 (see RFC 2460 and RFC 3513). 128-bit number - 8 sets of
hexadecimal numbers, each set can be from 0 - ffff. For example,
FEDC:BA98:7654:3210:FEDC:BA98:7654:3210.
Link Aggregation
Technology that joins (aggregates) multiple physical interfaces together into one virtual
interface, known as a bond interface. Also known as Interface Bonding, or Interface
Teaming. This increases throughput beyond what a single connection could sustain, and
provides redundancy in case one of the links fails.
Load Sharing
Also, Load Balancing mode. A redundant cluster mode, where all Cluster Members
process all incoming traffic in parallel. See "Load Sharing Multicast Mode" and "Load
Sharing Unicast Mode". Acronym: LS.
Log
A record of an action that is done by a Software Blade.
Log Server
A dedicated Check Point computer that runs Check Point software to store and process
logs in Security Management Server or Multi-Domain Security Management
environment.
Management Interface
Interface on Gaia computer, through which users connect to Portal or CLI. Interface on a
Gaia Security Gateway or Cluster member, through which Management Server connects
to the Security Gateway or Cluster member.
Management Server
A Check Point Security Management Server or a Multi-Domain Server.
Master
State of a Cluster Member that processes all traffic in cluster configured in VRRP mode.
Multi-Domain Server
A computer that runs Check Point software to host virtual Security Management Servers
called Domain Management Servers. Acronym: MDS.
Multi-Version Cluster
The Multi-Version Cluster (MVC) mechanism lets you synchronize connections between
cluster members that run different versions. This lets you upgrade to a newer version
without a loss in connectivity and lets you test the new version on some of the cluster
members before you decide to upgrade the rest of the cluster members.
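Example - to check whether the MVC mechanism is enabled on a Cluster Member (a
sketch; this command is assumed to be available in R80.40 and higher):
# In the Expert mode: show the Multi-Version Cluster mechanism state
cphaprob mvc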
MVC
See "Multi-Version Cluster".
Network Object
Logical representation of every part of corporate topology (physical machine, software
component, IP Address range, service, and so on).
Network Objective
Defines how the cluster will configure and monitor an interface - Cluster, Sync,
Cluster+Sync, Monitored Private, Non-Monitored Private. Configured in SmartConsole >
cluster object > 'Topology' pane > 'Network Objective'.
Non-Blocking Mode
Cluster operation mode, in which a Cluster Member keeps forwarding all traffic.
Non-Monitored Interface
An interface on a Cluster Member, whose Network Type was set as Private in
SmartConsole, in cluster object.
Non-Pivot
A Cluster Member in the Unicast Load Sharing cluster that receives all packets from the
Pivot Cluster Member.
Non-Sticky Connection
A connection is called non-sticky if the reply packet returns through a different Cluster
Member than the original packet (for example, if the network administrator has configured
asymmetric routing). In Load Sharing mode, all Cluster Members are Active, and in
Static NAT and encrypted connections, the Source and Destination IP addresses
change. Therefore, Static NAT and encrypted connections through a Load Sharing
cluster may be non-sticky.
Open Server
A physical computer manufactured and distributed by a company, other than Check
Point.
Packet Selection
Distinguishing between different kinds of packets coming from the network, and
selecting, which member should handle a specific packet (Decision Function
mechanism): CCP packet from another member of this cluster; CCP packet from another
cluster, or from a Cluster Member with another version (usually an older version of CCP);
Packet destined directly to this member; Packet destined to another member of this
cluster; Packet intended to pass through this Cluster Member; ARP packets.
Pingable Host
Some host (that is, some IP address) that Cluster Members can ping during the probing
mechanism. Pinging hosts in an interface's subnet is one of the health checks that the
ClusterXL mechanism performs. The pingable host allows the Cluster Members to
determine with more precision what has failed (which interface on which member). On
the Sync network, there are usually no hosts. In such a case, if the switch supports it, an
IP address should be assigned on the switch (for example, in the relevant VLAN). The IP
address of such a pingable host should be assigned per this formula:
IP_of_pingable_host = IP_of_physical_interface_on_member + ~10. Assigning the
pingable host an IP address that is higher than the IP addresses of the physical
interfaces on the Cluster Members gives the Cluster Members some time to perform the
default health checks. Example: the IP address of the physical interface on a given
subnet on Member_A is 10.20.30.41; the IP address of the physical interface on a given
subnet on Member_B is 10.20.30.42; the IP address of the pingable host should be at
least 10.20.30.52.
Pivot
A Cluster Member in the Unicast Load Sharing cluster that receives all packets. Cluster
Virtual IP addresses are associated with Physical MAC Addresses of this Cluster
Member. This Pivot Cluster Member distributes the traffic between other Non-Pivot
Cluster Members.
Pnote
See "Critical Device".
Preconfigured Mode
Cluster Mode, where cluster membership is enabled on all prospective Cluster
Members. However, no policy has yet been installed on any of the Cluster Members -
none of them is actually configured to be primary, secondary, and so on. The cluster
cannot function if one Cluster Member fails. In this scenario, the "preconfigured mode"
takes place. The preconfigured mode also comes into effect when no policy is yet
installed, right after the Cluster Members come up after boot, or when running the
'cphaconf init' command.
Primary Up
ClusterXL in High Availability mode that was configured as Switch to higher priority
Cluster Member in the cluster object in SmartConsole: (1) Each Cluster Member is given
a priority (SmartConsole > cluster object > 'Cluster Members' pane). Cluster Member
with the highest priority appears at the top of the table, and Cluster Member with the
lowest priority appears at the bottom of the table. (2) The Cluster Member with the
highest priority will assume the Active state. (3) If the current Active Cluster Member with
the highest priority (for example, Member_A), fails for some reason, or is rebooted, then
failover occurs between Cluster Members. The Cluster Member with the next highest
priority will be promoted to be Active (for example, Member_B). (4) When the Cluster
Member with the highest priority (Member_A) recovers from a failure, or boots, then
additional failover occurs between Cluster Members. The Cluster Member with the
highest priority (Member_A) will be promoted to Active state (and Member_B will return
to Standby state).
Private Interface
An interface on a Cluster Member, whose Network Type was set as 'Private' in
SmartConsole in the cluster object. This interface is not monitored by the cluster, and a
failure on this interface does not cause any changes in the Cluster Member's state.
Probing
If a Cluster Member fails to receive status for another member (does not receive CCP
packets from that member) on a given segment, the Cluster Member probes that segment
in an attempt to elicit a response. The purpose of such probes is to detect the nature of
possible interface failures, and to determine which module has the problem. The
outcome of this probe will determine what action is taken next (change the state of an
interface, or of a Cluster Member).
Problem Notification
See "Critical Device".
Ready
State of a Cluster Member after initialization and before promotion to the next
required state - Active / Standby / VRRP Master / VRRP Backup (depending on the
Cluster Mode). A Cluster Member in this state does not process any traffic passing
through the cluster. A member can be stuck in this state due to several reasons - see sk42096.
Rule
A set of traffic parameters and other conditions in a Rule Base that cause specified
actions to be taken for a communication session.
Rule Base
Also Rulebase. All rules configured in a given Security Policy.
SecureXL
Check Point product that accelerates IPv4 and IPv6 traffic. Installed on Security
Gateways for significant performance improvements.
Security Gateway
A computer that runs Check Point software to inspect traffic and enforce Security
Policies for connected network resources.
Security Policy
A collection of rules that control network traffic and enforce organization guidelines for
data protection and access to resources with packet inspection.
Selection
The packet selection mechanism is one of the central and most important components in
the ClusterXL product and State Synchronization infrastructure for 3rd party clustering
solutions. Its main purpose is to decide (to select) correctly what has to be done to the
incoming and outgoing traffic on the Cluster Member. (1) In ClusterXL, the packet is
selected by Cluster Member(s) depending on the cluster mode: In HA modes - by Active
member; In LS Unicast mode - by Pivot member; In LS Multicast mode - by all members.
Then the Cluster Member applies the Decision Function (and the Cluster Correction
Layer). (2) In 3rd party / OPSEC cluster, the 3rd party software selects the packet, and
Check Point software just inspects it (and performs State Synchronization).
SIC
Secure Internal Communication. The Check Point proprietary mechanism with which
Check Point computers that run Check Point software authenticate each other over SSL,
for secure communication. This authentication is based on the certificates issued by the
ICA on a Check Point Management Server.
Single Sign-On
A property of access control of multiple related, yet independent, software systems. With
this property, a user logs in with a single ID and password to gain access to a connected
system or systems without using different usernames or passwords, or in some
configurations seamlessly sign on at each system. This is typically accomplished using
the Lightweight Directory Access Protocol (LDAP) and stored LDAP databases on
(directory) servers. Acronym: SSO.
SmartConsole
A Check Point GUI application used to manage Security Policies, monitor products and
events, install updates, provision new devices and appliances, and manage a multi-
domain environment and each domain.
SmartDashboard
A legacy Check Point GUI client used to create and manage the security settings in
R77.30 and lower versions.
SmartUpdate
A legacy Check Point GUI client used to manage licenses and contracts.
Software Blade
A software blade is a security solution based on specific business needs. Each blade is
independent, modular and centrally managed. To extend security, additional blades can
be quickly added.
SSO
See "Single Sign-On".
Standalone
A Check Point computer, on which both the Security Gateway and Security Management
Server products are installed and configured.
Standby
State of a Cluster Member that is ready to be promoted to Active state (if the current
Active Cluster Member fails). Applies only to ClusterXL High Availability Mode.
State Synchronization
Technology that synchronizes the relevant information about the current connections
(stored in various kernel tables on Check Point Security Gateways) among all Cluster
Members over Synchronization Network. Due to State Synchronization, the current
connections are not cut off during cluster failover.
Sticky Connection
A connection is called sticky, if all packets are handled by a single Cluster Member (in
High Availability mode, all packets reach the Active Cluster Member, so all connections
are sticky).
Subscribers
User Space processes that are made aware of the current state of the ClusterXL state
machine and other clustering configuration parameters. List of such subscribers can be
obtained by running the 'cphaconf debug_data' command (see sk31499).
Sync Interface
Also, Secured Interface, Trusted Interface. An interface on a Cluster Member, whose
Network Type was set as Sync or Cluster+Sync in SmartConsole in the cluster object.
This interface is monitored by the cluster, and a failure on this interface causes a cluster
failover. This interface is used for State Synchronization between Cluster Members. The
use of more than one Sync Interface for redundancy is not supported, because the CPU
load would increase significantly due to duplicate tasks performed by all configured
Synchronization Networks. See sk92804.
Synchronization Network
Also, Sync Network, Secured Network, Trusted Network. A set of interfaces on Cluster
Members that were configured as interfaces, over which State Synchronization
information will be passed (as Delta Sync packets). The use of more than one
Synchronization Network for redundancy is not supported, because the CPU load would
increase significantly due to duplicate tasks performed by all configured Synchronization
Networks. See sk92804.
Traffic
Flow of data between network devices.
Users
Personnel authorized to use network resources and applications.
VLAN
Virtual Local Area Network. Open servers or appliances connected to a virtual network,
even though they are not physically connected to the same network.
VLAN Trunk
A connection between two switches that contains multiple VLANs.
VMAC
Virtual MAC address. When this feature is enabled on Cluster Members, all Cluster
Members in High Availability mode and Load Sharing Unicast mode associate the same
Virtual MAC address with the Virtual IP address. This avoids issues that occur when
Gratuitous ARP packets sent by the cluster during failover are not integrated into the
ARP cache tables of switches surrounding the cluster. See sk50840.
VSX
Virtual System Extension. Check Point virtual networking solution, hosted on a computer
or cluster with virtual abstractions of Check Point Security Gateways and other network
devices. These Virtual Devices provide the same functionality as their physical
counterparts.
VSX Gateway
Physical server that hosts VSX virtual networks, including all Virtual Devices that provide
the functionality of physical network devices. It holds at least one Virtual System, which
is called VS0.
Introduction to ClusterXL
The Need for Clusters
Security Gateways and VPN connections are business critical devices. The failure of a Security Gateway or
VPN connection can result in the loss of active connections and access to critical data. The Security
Gateway between the organization and the world must remain open under all circumstances.
ClusterXL Solution
ClusterXL is a Check Point software-based cluster solution for Security Gateway redundancy and Load
Sharing. A ClusterXL Security Cluster contains identical Check Point Security Gateways.
n A High Availability Security Cluster ensures Security Gateway and VPN connection redundancy by
providing transparent failover to a backup Security Gateway in the event of failure.
n A Load Sharing Security Cluster provides reliability and also increases performance, as all members
are active.
[Diagram: basic ClusterXL deployment - Item 1: Internal network; Item 5: Internet]
ClusterXL uses virtual IP addresses for the cluster itself and unique physical IP and MAC addresses for the
Cluster Members. Virtual IP addresses do not belong to physical interfaces.
Note - This guide contains information only for Security Gateway clusters. For additional
information about the use of ClusterXL with VSX, see the R81.10 VSX Administration
Guide.
Important - There is no need to add an explicit rule to the Security Policy Rule Base that
accepts CCP packets.
n SecureXL status on all Cluster Members must be the same (either enabled, or disabled)
n Number of CoreXL Firewall instances on all Cluster Members must be the same
Note - A Cluster Member with a greater number of CoreXL Firewall instances than its
peers changes its state to Down.
VMAC Mode
When ClusterXL is configured in High Availability mode or Load Sharing Unicast mode (not Multicast), a
single Cluster Member is associated with the Cluster Virtual IP address. In a High Availability environment,
the single member is the Active member. In a Load Sharing environment, the single member is the Pivot.
After fail-over, the new Active member (or Pivot member) broadcasts a series of Gratuitous ARP Requests
(GARPs). The GARPs associate the Virtual IP address of the cluster with the physical MAC address of the
new Active member or the new Pivot.
When this happens:
n A member with a large number of Static NAT entries can transmit too many GARPs
Switches may not integrate these GARP updates quickly enough into their ARP tables. Switches
continue to send traffic to the physical MAC address of the member that failed. This results in traffic
outage until the switches have fully updated ARP cache tables.
n Network components, such as VoIP phones, ignore GARPs
These components continue to send traffic to the MAC address of the failed member.
To minimize possible traffic outage during a fail-over, configure the cluster to use a virtual MAC address
(VMAC).
By enabling Virtual MAC in ClusterXL High Availability mode, or Load Sharing Unicast mode, all Cluster
Members associate the same Virtual MAC address with all Cluster Virtual Interfaces and the Virtual IP
address. In Virtual MAC mode, the VMAC that is advertised by the Cluster Members (through GARP
Requests) keeps the real MAC address of each member and adds a Virtual MAC address on top of it.
For local connections and sync connections, the real physical MAC address of each Cluster Member is still
associated with its real IP address.
Note - Traffic originating from the Cluster Members will be sent with the VMAC Source address.
You can enable VMAC in SmartConsole, or on the command line. See sk50840.
Failover time in a cluster with enabled VMAC mode is shorter than failover in a cluster that uses physical
MAC addresses.
Configuring VMAC Mode on the Command Line
On each Cluster Member, set the same value for the global kernel parameter fwha_vmac_global_param_enabled.
1. Connect to the command line on each Cluster Member.
2. Log in to the Expert mode.
3. Get the current value of this kernel parameter. Run:
fw ctl get int fwha_vmac_global_param_enabled
4. Set the new value for this kernel parameter temporarily (does not survive reboot). Run:
n To enable VMAC mode:
fw ctl set int fwha_vmac_global_param_enabled 1
n To disable VMAC mode:
fw ctl set int fwha_vmac_global_param_enabled 0
5. Make sure the state of the VMAC mode was changed. Run on each Cluster Member:
n In Gaia Clish or in the Expert mode:
cphaprob -a if
When VMAC mode is enabled, output of this command shows the VMAC address of each virtual
cluster interface.
6. To set the new value for this kernel parameter permanently:
Follow the instructions in sk26202 to add this line to the $FWDIR/boot/modules/fwkern.conf
file:
fwha_vmac_global_param_enabled=<value>
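Example - after VMAC mode is enabled, the interface listing reports a VMAC per virtual
cluster interface. A representative sketch (interface names, IP addresses, and the MAC
values are illustrative; the actual format varies by version):
cphaprob -a if
Virtual cluster interfaces: 2
eth0    192.168.10.100    VMAC address: 00:1C:7F:00:A8:64
eth1    10.10.0.100       VMAC address: 00:1C:7F:00:A8:64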
Topology 2
If a Cluster Member does not receive CCP packets from other Cluster Members (for
example, a cable is disconnected between switches), it sends a dedicated CCP packet to
all other Cluster Members.
This dedicated CCP packet includes a map of all available bond slave interfaces on that
Cluster Member.
When other Cluster Members receive this dedicated CCP packet, they:
a. Compare the received map of available bond slave interfaces to the state of their
own bond slave interfaces.
b. Perform bond internal failover accordingly.
n Fail over to a specific selected slave interface instead of iterating over all available slave
interfaces.
Limitations:
n IPv6 is not supported for Load Sharing clusters.
n You cannot define IPv6 address for synchronization interfaces.
Note - ClusterXL failover event detection is based on IPv4 probing. During state
transition the IPv4 driver instructs the IPv6 driver to reestablish IPv6 network
connectivity to the HA cluster.
High Availability
In a High Availability cluster, only one member is active (Active/Standby operation). In the event that the
active Cluster Member becomes unavailable, all connections are re-directed to a designated standby
without interruption. In a synchronized cluster, the standby Cluster Members are updated with the state of
the connections of the Active Cluster Member.
In a High Availability cluster, each member is assigned a priority. The highest priority member serves as the
Security Gateway in normal circumstances. If this member fails, control is passed to the next highest priority
member. If that member fails, control is passed to the next member, and so on.
Upon Security Gateway recovery, you can maintain the current Active Security Gateway (Active Up), or
change to the highest priority Security Gateway (Primary Up).
ClusterXL High Availability mode supports both IPv4 and IPv6.
Load Sharing
ClusterXL Load Sharing distributes traffic within a cluster so that the total throughput of multiple members is
increased. In Load Sharing configurations, all functioning members in the cluster are active, and handle
network traffic (Active/Active operation).
If any member in a cluster becomes unreachable, transparent failover occurs to the remaining operational
members in the cluster, thus providing High Availability. All connections are shared between the remaining
Security Gateways without interruption.
ClusterXL Load Sharing modes do not support IPv6.
Example Diagram
The following diagram illustrates a two-member ClusterXL cluster, showing the cluster Virtual IP addresses
and the members' physical IP addresses.
This sample deployment is used in many of the examples presented in this chapter.
[Diagram: example ClusterXL topology - Item 1: Internal network; Item 6: Internet]
Each Cluster Member has three interfaces: one external interface, one internal interface, and one for
synchronization. Cluster Member interfaces facing in each direction are connected via a hub or switch.
All Cluster Member interfaces facing the same direction must be in the same network. For example, there
must not be a router between Cluster Members.
The Management Server can be located anywhere, and the connection should be established to either the
internal or the external cluster IP address.
These sections present ClusterXL configuration concepts shown in the example.
Note - In these examples, RFC 1918 private addresses in the range 192.168.0.0 to
192.168.255.255 are treated as public IP addresses.
IP Address Migration
If you wish to provide High Availability or Load Sharing to an existing Security Gateway configuration, we
recommend taking the existing IP addresses from the Active Security Gateway and making these the Cluster
Virtual IP addresses, when feasible. Doing so lets you avoid altering current IPsec endpoint identities, and in
many cases also keeps Hide NAT configurations the same.
ClusterXL Modes
ClusterXL has several working modes.
This section briefly describes each mode and its relative advantages and disadvantages.
n High Availability Mode
n Load Sharing Multicast Mode
n Load Sharing Unicast Mode
n Active-Active Mode (see "Active-Active Mode in ClusterXL" on page 63)
Note - Many examples in this section refer to the sample deployment shown in the
"Example ClusterXL Topology" on page 49.
Example
This scenario describes a connection from a Client computer on the external network to a Web server
behind the Cluster (on the internal network).
The cluster of two members is configured in High Availability mode.
Example topology:
[Client on]
[external network]
{IP 192.168.10.78/24}
{DG 192.168.10.100}
|
|
{VIP 192.168.10.100/24}
/ \
| |
{IP 192.168.10.1/24} {IP 192.168.10.2/24}
| |
{Active} {Standby}
[Member1]-----sync-----[Member2]
| |
{IP 10.10.0.1/24} {IP 10.10.0.2/24}
| |
\ /
{VIP 10.10.0.100/24}
|
|
{DG 10.10.0.100}
{IP 10.10.0.34/24}
[Web server on]
[internal network]
Chain of events:
1. The user tries to connect from his Client computer 192.168.10.78 to the Web server 10.10.0.34.
2. The Default Gateway on Client computer is 192.168.10.100 (the cluster Virtual IP address).
3. The Client computer issues an ARP Request for IP 192.168.10.100.
4. The Active Cluster Member (Member1) handles the ARP Request for IP 192.168.10.100.
5. The Active Cluster Member sends the ARP Reply with the MAC address of the external interface,
on which the IP address 192.168.10.1 is configured.
6. The Client computer sends the HTTP request packet to the Active Cluster Member - to the VIP
address 192.168.10.100 and MAC address of the corresponding external interface.
7. The Active Cluster Member handles the HTTP request packet.
8. The Active Cluster Member sends the HTTP request packet to the Web server 10.10.0.34.
9. The Web server handles the HTTP request packet.
10. The Web server generates the HTTP response packet.
11. The Default Gateway on Web server computer is 10.10.0.100 (the cluster Virtual IP address).
12. The Web server issues an ARP Request for IP 10.10.0.100.
13. The Active Cluster Member handles the ARP Request for IP 10.10.0.100.
14. The Active Cluster Member sends the ARP Reply with the MAC address of the internal interface,
on which the IP address 10.10.0.1 is configured.
15. The Web server sends the HTTP response packet to the Active Cluster Member - to VIP address
10.10.0.100 and MAC address of the corresponding internal interface.
16. The Active Cluster Member handles the HTTP response packet.
17. The Active Cluster Member sends the HTTP response packet to the Client computer
192.168.10.78.
18. From now, all traffic between the Client computer and the Web server is routed through the Active
Cluster Member (Member1).
19. If a failure occurs on the current Active Cluster Member (Member1), the cluster fails over.
20. The Standby Cluster Member (Member2) assumes the role of the Active Cluster Member.
21. The Standby Cluster Member sends Gratuitous ARP Requests to both the 192.168.10.x and the
10.10.0.x networks.
These GARP Requests associate the Cluster Virtual IP addresses with the MAC addresses of the
physical interfaces on the new Active Cluster Member (former Standby Cluster Member):
n Cluster VIP address 192.168.10.100 and MAC address of the corresponding external
interface, on which the IP address 192.168.10.2 is configured.
n Cluster VIP address 10.10.0.100 and MAC address of the corresponding internal interface,
on which the IP address 10.10.0.2 is configured.
22. From now, all traffic between the Client computer and the Web server is routed through the new
Active Cluster Member (Member2).
23. The former Active member (Member1) is now considered to be "down". Upon the recovery of a
former Active Cluster Member, the role of the Active Cluster Member may or may not be switched
back to that Cluster Member, depending on the cluster object configuration.
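Example - the GARP burst described in steps 21-22 can be observed during a controlled
failover test (a sketch; the capture interface name is an assumption):
# On a host in the 192.168.10.x network: capture Gratuitous ARP packets
tcpdump -nei eth0 arp
# On the current Active Cluster Member (Expert mode): trigger a failover
clusterXL_admin down
# Recover the member when the test is complete
clusterXL_admin up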
This scenario describes a user connecting from the Internet to a Web server behind the cluster that is
configured in Load Sharing Multicast mode.
1. The user requests a connection from 192.168.10.78 (his computer) to 10.10.0.34 (the Web
server).
2. A router on the 192.168.10.x network recognizes 192.168.10.100 (the cluster Virtual IP address)
as the default gateway to the 10.10.0.x network.
3. The router issues an ARP Request for IP 192.168.10.100.
4. One of the Active Cluster Members processes the ARP Request, and responds with the Multicast
MAC assigned to the cluster Virtual IP address of 192.168.10.100.
5. When the Web server responds to the user requests, it recognizes 10.10.0.100 as its default
gateway to the Internet.
6. The Web server issues an ARP Request for IP 10.10.0.100.
7. One of the Active members processes the ARP Request, and responds with the Multicast MAC
address assigned to the cluster Virtual IP address of 10.10.0.100.
8. All packets sent between the user and the Web server reach every Cluster Member; each
Cluster Member decides whether to process each packet.
9. When a cluster failover occurs, one of the Cluster Members goes down. However, traffic still
reaches all of the Active Cluster Members. Therefore, there is no need to make changes in the
network ARP routing. The only thing that changes is the cluster decision function, which takes into
account the new state of the Cluster Members.
In this scenario, we use a Load Sharing Unicast cluster as the Security Gateway between the end user
computer and the Web server.
1. The user requests a connection from 192.168.10.78 (his computer) to 10.10.0.34 (the Web
server).
2. A router on the 192.168.10.x network recognizes 192.168.10.100 (the cluster Virtual IP address)
as the default gateway to the 10.10.0.x network.
3. The router issues an ARP Request for IP 192.168.10.100.
4. The Pivot Cluster Member handles the ARP Request, and responds with the MAC address that
corresponds to its own unique IP address of 192.168.10.1.
5. When the Web server responds to the user requests, it recognizes 10.10.0.100 as its default
gateway to the Internet.
6. The Web server issues an ARP Request for IP 10.10.0.100.
7. The Pivot Cluster Member handles the ARP Request, and responds with the MAC address that
corresponds to its own unique IP address of 10.10.0.1.
8. The user request packet reaches the Pivot Cluster Member on interface 192.168.10.1.
9. The Pivot Cluster Member decides that the second non-Pivot Cluster Member should handle this
packet, and forwards it to 192.168.10.2.
10. The second Cluster Member recognizes the packet as a forwarded packet, and handles it.
11. Further packets are either processed by the Pivot Cluster Member, or forwarded to and
processed by the non-Pivot Cluster Member.
12. When a failover occurs from the current Pivot Cluster Member, the second Cluster Member
assumes the role of Pivot.
13. The new Pivot member sends Gratuitous ARP Requests to both the 192.168.10.x and the
10.10.0.x networks. These GARP requests associate the cluster Virtual IP address of
192.168.10.100 with the MAC address that corresponds to the unique IP address of
192.168.10.2, and the cluster Virtual IP address of 10.10.0.100 with the MAC address that
correspond to the unique IP address of 10.10.0.2.
14. Traffic sent to the cluster is now received by the new Pivot Cluster Member, and processed by it
(as it is currently the only Active Cluster Member).
15. When the former Pivot Cluster Member recovers, it re-assumes the role of Pivot, by associating
the cluster Virtual IP addresses with its own unique MAC addresses.
Feature            High Availability   Load Sharing Multicast          Load Sharing Unicast   Active-Active
Hardware Support   All routers         Not all routers are supported   All routers            All routers
Cluster Failover
What is Failover?
Failover is a cluster redundancy operation that automatically occurs if a Cluster Member is not functional.
When this occurs, other Cluster Members take over for the failed Cluster Member.
In the High Availability mode:
n If the Active Cluster Member detects that it cannot function as a Cluster Member, it notifies the peer
Standby Cluster Members that it must go down. One of the Standby Cluster Members (with the next
highest priority) will promote itself to the Active state.
n If one of the Standby Cluster Members stops receiving Cluster Control Protocol (CCP) packets from
the current Active Cluster Member, that Standby Cluster Member can assume that the current Active
Cluster Member failed. As a result, one of the Standby Cluster Members (with the next highest
priority) will promote itself to the Active state.
n If you do not use State Synchronization in the cluster, existing connections are interrupted when
cluster failover occurs.
In Load Sharing modes:
n If a Cluster Member detects that it cannot function as a Cluster Member, it notifies the peer Cluster
Members that it must go down. Traffic load will be redistributed between the working Cluster
Members.
n If the Cluster Members stop receiving Cluster Control Protocol (CCP) packets from one of their peer
Cluster Members, those working Cluster Members can assume that their peer Cluster Member failed.
As a result, traffic load will be redistributed between the working Cluster Members.
n Because by design, all Cluster Members are always synchronized, current connections are not
interrupted when cluster failover occurs.
To tell each Cluster Member that the other Cluster Members are alive and functioning, the ClusterXL Cluster
Control Protocol (CCP) maintains a heartbeat between Cluster Members. If after a predefined time, no CCP
packets are received from a Cluster Member, it is assumed that the Cluster Member is down. As a result,
cluster failover can occur.
Note that more than one Cluster Member may encounter a problem that will result in a cluster failover event.
In cases where all Cluster Members encounter such problems, ClusterXL will try to choose a single Cluster
Member to continue operating. The state of the chosen member will be reported as Active(!). This situation
lasts until another Cluster Member fully recovers. For example, if a cross cable connecting the sync
interfaces on Cluster Members malfunctions, both Cluster Members will detect an interface problem. One of
them will change to the Down state, and the other to the Active(!) state.
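Example - you can check the cluster and member states at any time (a representative
sketch; the exact output format varies by version):
# Gaia Clish
show cluster state
# Expert mode
cphaprob state
Cluster Mode:   High Availability (Active Up)
ID         Unique Address  Assigned Load   State           Name
1 (local)  10.10.0.1       100%            ACTIVE          Member1
2          10.10.0.2       0%              STANDBY         Member2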
Important:
n You can configure the Active-Active mode only on Management Server R80.40
(and higher) and only on ClusterXL R80.40 (and higher).
n The Active-Active mode does not provide Load Sharing of the traffic.
The administrator must monitor the load on each Cluster Member (see "Monitoring
and Troubleshooting Clusters" on page 203).
n Cluster Control Protocol (CCP) Encryption must be enabled, which is the default
(see "Configuring the Cluster Control Protocol (CCP) Settings" on page 194).
n It is possible to configure the Cluster Control Protocol (CCP) to work on Layer 2.
n It is possible to enable Dynamic Routing on each Cluster Member, so they
become routers in the applicable Area or Autonomous System on the site.
In this case, you must enable the Bidirectional Forwarding Detection (BFD - ip-
reachability-detection) on each interface that is configured for dynamic routing.
2 Optional: On each Cluster Member, enable Dynamic Routing, so they become routers in the
applicable Area or Autonomous System on the site.
Important - In this case, you must enable the Bidirectional Forwarding Detection
(BFD - ip-reachability-detection) on each interface that is configured for dynamic
routing.
See the R81.10 Gaia Advanced Routing Administration Guide.
n From the top toolbar, click New > Cluster > Cluster.
n In the top left corner, click Objects menu > More object types > Network Object >
Gateways and Servers > Cluster > New Cluster.
n In the top right corner, click Objects Pane > New > More > Network Object >
Gateways and Servers > Cluster > Cluster.
In the Check Point Security Gateway Cluster Creation window, you must click Classic
Mode.
If the Trust State field does not show Trust established, perform these steps:
a. Connect to the command line on the Cluster Member.
b. Make sure there is physical connectivity between the Cluster Member and the
Management Server (for example, pings can pass).
c. Run:
cpconfig
d. Enter the number of this option:
Secure Internal Communication
e. Follow the instructions on the screen to change the Activation Key.
f. In SmartConsole, click Reset.
g. Enter the same Activation Key you entered in the cpconfig menu.
h. In SmartConsole, click Initialize.
Best Practice - Enable this setting to prevent connection drops after a cluster
failover.
d. Optional: Select Start synchronizing [ ] seconds after connection initiation and enter
the applicable value.
This option is available only for clusters R80.20 and higher.
To prevent the synchronization of short-lived connections (which decreases the cluster
performance), you can configure the Cluster Members to start the synchronization of all
connections a number of seconds after they start.
Range: 2 - 60 seconds
Default: 3 seconds
Notes:
n This setting in the cluster object applies to all connections that pass
through the cluster.
You can override this global cluster synchronization delay in the
properties of applicable services - see "Configuring Services to
Synchronize After a Delay" on page 74.
n The greater this value, the fewer short-lived connections the Cluster
Members have to synchronize.
n Connections that the Cluster Members did not synchronize do not
survive a cluster failover.
Best Practice - Enable and configure this setting to increase the cluster
performance.
9 Click OK to update the cluster object properties with the new cluster mode.
12 Click OK.
Note - If it is necessary that Cluster Members change their cluster state because of
other Critical Devices, you must manually configure this behavior.
Procedure:
1. In SmartConsole, click Objects > Object Explorer.
2. In the left tree, click the small arrow on the left of the Services category to expand it.
3. In the left tree, select TCP.
4. Search for the applicable TCP service.
5. Double-click the applicable TCP service.
6. In the TCP service properties window, click the Advanced page.
7. At the top, select Override default settings.
On Domain Management Server: select Override global domain settings.
8. At the bottom, in the Cluster and synchronization section, select Start synchronizing and enter the
applicable value.
Important - This change applies to all policies that use this service.
9. Click OK.
10. Close the Object Explorer.
11. Publish the SmartConsole session.
12. Install the Access Control Policy on the cluster object.
Note - The Delayed Notifications setting in the service object is ignored, if Connection
Templates are not offloaded by the Firewall to SecureXL. For additional information
about the Connection Templates, see the R81.10 Performance Tuning Administration
Guide.
Sticky Connections
Item Description
1 Internal network
5 Internet
In this scenario:
n A third-party peer (gateway or client) attempts to create a VPN tunnel.
n Cluster Members "A" and "B" belong to a ClusterXL in Load Sharing mode.
The third-party peers, lacking the ability to store more than one set of SAs, cannot negotiate a VPN tunnel
with multiple Cluster Members, and therefore the Cluster Member cannot complete the routing transaction.
This issue is resolved for certain third-party peers or gateways that can save only one set of SAs by making
the connection sticky. The Cluster Correction Layer (CCL) makes sure that a single Cluster Member
processes all VPN sessions, initiated by the same third-party gateway.
Item Description
4 Internet
5 Gateway - Spoke A
6 Gateway - Spoke B
The Tunnel Group Identifier ;1 stays the same, which means that the listed peers will always connect
through the same Cluster Member.
This procedure turns off Load Sharing for the affected connections. If the implementation is to
connect multiple sets of third-party gateways one to another, a form of Load Sharing can be
accomplished by setting Security Gateway pairs to work in tandem with specific Cluster Members.
For instance, to set up a connection between two other spokes (C and D), simply add their IP
addresses to the line and replace the Tunnel Group Identifier ;1 with ;2. The line would then look
something like this:
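As an illustrative sketch only (the vpn_sticky_gws attribute name and the placeholder IP addresses here are assumptions, not values from this guide - verify the exact attribute and file in your deployment):

vpn_sticky_gws = {<192.0.2.1;1>, <192.0.2.2;1>, <198.51.100.1;2>, <198.51.100.2;2>};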
Note that there are now two peer identifiers: ;1 and ;2. Spokes A and B will now connect through one
Cluster Member, and Spokes C and D through another.
Note - The tunnel groups are shared between active Cluster Members. In case of
a change in cluster state (for example, failover or Cluster Member attach/detach),
the reassignment is performed according to the new state.
Non-Sticky Connections
A connection is called sticky if a single Cluster Member handles all packets of that connection. In a non-
sticky connection, the response packet of a connection returns through a different Cluster Member than the
original request packet.
The cluster synchronization mechanism knows how to handle non-sticky connections properly. In a non-
sticky connection, a Cluster Member can receive an out-of-state packet, which the Firewall normally drops
because it poses a security risk.
In Load Sharing configurations, all Cluster Members are active. In Static NAT and encrypted connections,
the source and destination IP addresses change. Therefore, Static NAT and encrypted connections through
a Load Sharing cluster may be non-sticky. Non-stickiness may also occur with Hide NAT, but ClusterXL has
a mechanism to make it sticky.
In High Availability configurations, all packets reach the Active member, so all connections are sticky. If
failover occurs during connection establishment, the connection is lost, but synchronization can be
performed later.
If the other members do not know about a non-sticky connection, the packet will be out-of-state, and the
connection will be dropped for security reasons. However, the Synchronization mechanism knows how to
inform other members of the connection. The Synchronization mechanism thereby prevents out-of-state
packets in valid, but non-sticky connections, so that these non-sticky connections are allowed.
Non-sticky connections will also occur if the network Administrator has configured asymmetric routing,
where a reply packet returns through a different Security Gateway than the original packet.
Item Description
1 Client
4 Server
The client initiates a connection by sending a SYN packet to the server. The SYN passes through Cluster
Member A, but the SYN-ACK reply returns through Cluster Member B. This is a non-sticky connection,
because the reply packet returns through a different Security Gateway than the original packet.
The synchronization network notifies Cluster Member B. If Cluster Member B is updated before the SYN-
ACK packet sent by the server reaches it, the connection is handled normally. If, however, synchronization
is delayed, and the SYN-ACK packet is received on Cluster Member B before the SYN flag has been
updated, then the Security Gateway treats the SYN-ACK packet as out-of-state, and drops the connection.
You can configure enhanced 3-Way TCP Handshake enforcement to address this issue (see "Enhanced 3-
Way TCP Handshake Enforcement" on page 152).
Note - The Active-Active mode also supports Layer 3 (see "Active-Active Mode in
ClusterXL" on page 63).
You can monitor and troubleshoot geographically distributed clusters using the command line interface.
n Cluster Members cannot synchronize connections that use system resources. The reason
is the same as for the user-authenticated connections.
n Accounting information for connections is accumulated on each Cluster Member, sent to the
Management Server, and aggregated.
In the event of a cluster failover, the accounting information that is not yet sent to the Management
Server, is lost.
To minimize this risk, you can reduce the time interval when accounting information is sent.
To do this, in the cluster object > Logs > Additional Logging pane, set a lower value for the Update
Account Log every attribute.
Configuring ClusterXL
This procedure describes how to configure the Load Sharing Multicast, Load Sharing Unicast, and High
Availability modes from scratch.
Their configuration is identical, apart from the mode selection in the SmartConsole cluster object or the
Cluster Creation Wizard.
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
Cluster Members configure the Cluster Control Protocol (CCP) mode automatically.
You can configure the Cluster Control Protocol (CCP) Encryption on the Cluster Members.
See "Viewing the Cluster Control Protocol (CCP) Settings" on page 246.
The Cluster Wizard is recommended for all Check Point Appliances (except models lower than 3000
series), and for Open Server platforms.
Step Instructions
1 In SmartConsole, click Objects menu > More object types > Network Object > Gateways
and Servers > Cluster > New Cluster.
2 In Check Point Security Gateway Cluster Creation window, click Wizard Mode.
Step Instructions
Important - You must define a corresponding IPv4 address for every IPv6
address. This release does not support pure IPv6 addresses.
d. In the Choose the Cluster's Solution field, select the applicable option and click
Next:
n Check Point ClusterXL and then select High Availability or Load Sharing
n Gaia VRRP
4 In the Cluster member's properties window perform these steps for each Cluster Member
and click Next.
We assume you create a new cluster object from scratch.
a. Click Add > New Cluster Member to configure each Cluster Member.
b. In the Cluster Name field, enter a unique name for the Cluster Member object.
c. In the Cluster IPv4 Address field, enter the unique IPv4 address for this
Cluster Member.
This is the main IPv4 address of the Cluster Member object.
d. In the Cluster IPv6 Address field, enter the unique IPv6 address for this
Cluster Member.
This is the main IPv6 address of the Cluster Member object.
Important - You must define a corresponding IPv4 address for every IPv6
address. This release does not support pure IPv6 addresses.
e. In the Activation Key and Confirm Activation Key fields, enter the one-time password
that you entered in the First Time Configuration Wizard during the installation of this
Cluster Member.
f. Click Initialize.
The Management Server tries to establish SIC with each Cluster Member.
The Trust State field should show Trust established.
g. Click OK.
Step Instructions
5 In the Cluster Topology window, define a network type (network role) for each cluster
interface and define the Cluster Virtual IP addresses. Click Next.
The wizard automatically calculates the subnet for each cluster network and assigns it to
the applicable interface on each Cluster Member. The calculated subnet shows in the
upper section of the window.
The available network objectives are:
Network Objective Description
After you complete the wizard, we recommend that you open the cluster object and complete the
configuration:
n Define Anti-Spoofing properties for each interface
n Change the Topology settings for each interface, if necessary
n Define the Network Type
n Configure other Software Blades, features and properties as necessary
The Small Office Cluster wizard is recommended for Centrally Managed Check Point appliances -
models lower than 3000 series.
Step Instructions
1 In SmartConsole, click Objects menu > More object types > Network Object > Gateways
and Servers > Cluster > New Small Office Cluster.
2 In Check Point Security Gateway Cluster Creation window, click Wizard Mode.
5 In the Configure WAN Interface page, configure the Cluster Virtual IPv4 address.
6 Define the Cluster Virtual IPv4 addresses for the other cluster interfaces.
After you complete the wizard, we recommend that you open the cluster object and complete the
configuration:
n Define Anti-Spoofing properties for each interface
n Change the Topology settings for each interface, if necessary
n Define the Network Type
n Configure other Software Blades, features and properties as necessary
n From the top toolbar, click New > Cluster > Cluster.
n In the top left corner, click Objects menu > More object types > Network Object >
Gateways and Servers > Cluster > New Cluster.
n In the top right corner, click Objects Pane > New > More > Network Object >
Gateways and Servers > Cluster > Cluster.
4 In the Check Point Security Gateway Creation window, click Classic Mode.
The Gateway Cluster Properties window opens.
6 On the General Properties page > Platform section, select the correct options:
a. In the Hardware field:
If you install the Cluster Members on Check Point Appliances, select the correct
appliances series.
If you install the Cluster Members on Open Servers, select Open server.
b. In the Version field, select R81.10.
c. In the OS field, select Gaia.
Step Instructions
If the Trust State field does not show Trust established, perform these steps:
a. Connect to the command line on the Cluster Member.
b. Make sure there is physical connectivity between the Cluster Member and the
Management Server (for example, pings can pass).
c. Run:
cpconfig
d. Enter the number of this option:
Secure Internal Communication
e. Follow the instructions on the screen to change the Activation Key.
f. In SmartConsole, click Reset.
g. Enter the same Activation Key you entered in the cpconfig menu.
h. In SmartConsole, click Initialize.
iii. Select the connection sharing method between the Cluster Members:
l IPs, Ports, SPIs
iv. Select the connection sharing method between the Cluster Members:
l IPs, Ports, SPIs
11 Click OK.
Important - You must define a corresponding IPv4 address for every IPv6
address. This release does not support pure IPv6 addresses.
IPv6 Considerations
To activate IPv6 functionality for an interface, define an IPv6 address for the applicable interface on each
Cluster Member and in the cluster object. All interfaces configured with an IPv6 address must also have a
corresponding IPv4 address. If an interface does not require IPv6, only the IPv4 address definition is
necessary.
Note - You must configure synchronization interfaces with IPv4 addresses only. This is
because the synchronization mechanism works using IPv4 only. All IPv6 information
and states are synchronized using this interface.
1. Connect with SmartConsole to the Security Management Server or Domain Management Server that
manages this cluster.
Network Type Description

Private An interface that is not part of the cluster. ClusterXL does not monitor
the state of this interface. As a result, there is no cluster failover if a
fault occurs with this interface. This option is recommended for the
management interface.
n Virtual IPv4 - Virtual IPv4 address assigned to this Cluster Virtual Interface
n Virtual IPv6 - Virtual IPv6 address assigned to this Cluster Virtual Interface
Important - You must define a corresponding IPv4 address for every IPv6
address. This release does not support the configuration of only IPv6
addresses.
7. In the Member IPs section, click Modify and configure these settings:
n Physical IPv4 address and Mask Length assigned to the applicable physical interface on each
Cluster Member
n Physical IPv6 address and Mask Length assigned to the applicable physical interface on each
Cluster Member
Important - You must define a corresponding IPv4 address for every IPv6
address. This release does not support the configuration of only IPv6 addresses.
Interface type: Only the lowest VLAN ID
n Monitoring in ClusterXL (non-VSX): Enabled by default.
n Monitoring in VSX Cluster: Must disable the monitoring of all VLAN IDs - set the value of the kernel
parameter fwha_monitor_all_vlan to 0. See sk92826.

Interface type: Only the lowest and highest VLAN IDs
n Monitoring in ClusterXL (non-VSX): Enabled by default. Controlled by the kernel parameter
fwha_monitor_low_high_vlans. See sk92826.
n Monitoring in VSX Cluster: VSX High Availability (non-VSLS): Enabled by default. Controlled by the
kernel parameter fwha_monitor_low_high_vlans. See sk92826.

Interface type: All VLAN IDs
n Monitoring in ClusterXL (non-VSX): Disabled by default. Controlled by the kernel parameter
fwha_monitor_all_vlan. See sk92826.
n Monitoring in VSX Cluster: Virtual System Load Sharing: Disabled by default. Controlled by the kernel
parameter fwha_monitor_all_vlan. See sk92826.
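For reference, a minimal sketch of viewing and changing one of these kernel parameters on a Cluster Member (the parameter name is taken from the table above; fwkern.conf is the same permanent-configuration file used elsewhere in this guide):

    fw ctl get int fwha_monitor_all_vlan      # view the current value
    fw ctl set int fwha_monitor_all_vlan 0    # change the value on-the-fly (does not survive reboot)
    # For a permanent change, add this line to $FWDIR/boot/modules/fwkern.conf:
    # fwha_monitor_all_vlan=0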
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
This procedure configures the Cluster Member to monitor only the physical link on the cluster interfaces
(instead of monitoring the Cluster Control Protocol (CCP) packets):
n If a link disappears on the configured interface, the Cluster Member changes the interface's state to
DOWN.
This causes the Cluster Member to change its state to DOWN.
n If a link appears again on the configured interface, the Cluster Member changes the interface's state
back to UP.
This causes the Cluster Member to change its state back to ACTIVE or STANDBY.
See "Viewing Cluster State" on page 209.
Procedure
Best Practices:
n In High Availability cluster
1. Perform the configuration steps on all Cluster Members
2. Reboot all the Standby Cluster Members
3. Initiate a manual failover on the Active Cluster Member
4. Reboot the former Active Cluster Member
n In Load Sharing Unicast cluster
1. Perform the configuration steps on all Cluster Members
2. Reboot all the non-Pivot Cluster Members
3. Initiate a manual failover on the Pivot Cluster Member
4. Reboot the former Pivot Cluster Member
n In Load Sharing Multicast cluster
1. Perform the configuration steps on all Cluster Members
2. Reboot all Cluster Members except one
3. Initiate a manual failover on the remaining Cluster Member
4. Reboot the remaining Cluster Member
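To initiate the manual failover mentioned in these sequences, you can use the commands documented in "Initiating Manual Cluster Failover" on page 195. A short sketch:

    clusterXL_admin down    # Expert mode - force this Cluster Member into the Down state
    clusterXL_admin up      # Expert mode - return this Cluster Member to normal operation
    # Gaia Clish equivalent: set cluster member admin {down | up}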
Item Description
1 Security Gateway
1A Interface 1
1B Interface 2
2 Bond Interface
3 Router
A bond interface (also known as a bonding group or bond) is identified by its Bond ID (for example: bond1)
and is assigned an IP address. The physical interfaces included in the bond are called slaves and do not
have IP addresses.
You can configure a bond interface to use one of these functional strategies:
High Availability (Active/Backup)
Gives redundancy when there is an interface or a link failure. This strategy also supports switch
redundancy.
Bond High Availability works in Active/Backup mode - interface Active/Standby mode. When an Active
slave interface is down, the connection automatically fails over to the primary slave interface. If the
primary slave interface is not available, the connection fails over to a different slave interface.
You can configure Bond Load Sharing to use one of these modes:
Mode Description
802.3ad Dynamically uses Active slave interfaces to share the traffic load.
This mode uses the LACP protocol, which fully monitors the interface link between the
Check Point Security Gateway and a switch.
XOR All slave interfaces in the UP state are Active for Load Sharing.
Traffic is assigned to Active slave interfaces based on one of these transmit hash
policies:
n Layer 2 information (XOR of hardware MAC addresses)
n Layer 3+4 information (IP addresses and Ports)
ABXOR Slave interfaces in the UP state are assigned to sub-groups called bundles.
Only one bundle is Active at a time.
All slave interfaces in the Active bundle share the traffic load.
The system assigns traffic to all interfaces in the Active bundle based on the defined
transmit hash policy.
Note - Scalable Chassis 60000 / 40000 do not support this mode (Known
Limitation MBS-1520).
For Bonding High Availability mode and for Bonding Load Sharing mode:
n The number of bond interfaces that can be defined is limited by the maximal number of interfaces
supported by each platform.
See the R81.10 Release Notes.
n Up to 8 physical slave interfaces can be configured in a single bond interface.
Item Description
1 Cluster Member GW1 with interfaces connected to the external switches (5 and 6)
2 Cluster Member GW2 with interfaces connected to the external switches (5 and 6)
3 Interconnecting network C1
4 Interconnecting network C2
5 Switch S1
6 Switch S2
If Cluster Member GW1 (1), its NIC, or switch S1 (5) fails, Cluster Member GW2 (2) becomes the only Active
member, connecting to switch S2 (6) over network C2 (4).
If any component fails (Cluster Member, NIC, or switch), the result of the failover is that no further
redundancy exists.
A further failure of any active component completely stops network traffic through this cluster.
Item Description
1 Cluster Member GW1 with interfaces connected to the external switches (4 and 5)
2 Cluster Member GW2 with interfaces connected to the external switches (4 and 5)
3 Interconnecting network
4 Switch S1
5 Switch S2
In this scenario:
n GW1 and GW2 are Cluster Members in the High Availability mode, each connected to the two
external switches
n S1 and S2 are external switches
n Item 3 are the network connections
If any of the interfaces on a Cluster Member that connect to an external switch fails, the other interface
continues to provide the connectivity.
If a Cluster Member, its NIC, or a switch fails, the other Cluster Member takes over, connecting to the
remaining switch over the interconnecting network. If any component fails (Cluster Member, NIC, or switch),
the result of the failover is that no further redundancy exists. A further failure of any active component
completely stops network traffic.
Bonding provides High Availability of NICs. If one fails, the other can function in its place.
Sync Redundancy
The use of more than one physical synchronization interface (1st sync, 2nd sync, 3rd sync) for
synchronization redundancy is not supported. For synchronization redundancy, you can use bond
interfaces.
cphaprob -am if
4 Make sure the value of the kernel parameter fwha_bond_enhanced_enable is set to 1:
fw ctl get int fwha_bond_enhanced_enable
5 Add this line to the file (spaces and comments are not allowed):
fwha_bond_enhanced_enable=1
8 Make sure the value of the kernel parameter fwha_bond_enhanced_enable is set to 1:
fw ctl get int fwha_bond_enhanced_enable
Important - If you change your cluster configuration from VRRP to ClusterXL, you must
remove the kernel parameter configuration from each Cluster Member.
Important - Bond Load Sharing mode requires SecureXL to be enabled on each Cluster
Member (this is the default).
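A quick sketch of how to verify that SecureXL is enabled on a Cluster Member (fwaccel is the standard SecureXL control utility):

    fwaccel stat    # the accelerator status line should show that SecureXL is on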
Group of Bonds
Introduction
Group of Bonds, which is a logical group of existing Bond interfaces, provides additional link redundancy.
Example topology with Group of Bonds
1. The Cluster Member GW-A is the Active and the Cluster Member GW-B is the Standby.
2. On the Cluster Member GW-A, the Bond-1 interface fails.
3. On the Cluster Member GW-A, the Critical Device Interface Active Check reports its state as
"problem".
4. The Cluster Member GW-A changes its cluster state from Active to Down.
5. The cluster fails over - the Cluster Member GW-B changes its cluster state from Standby to Active.
This is not the desired behavior, because the Cluster Member GW-A connects not only to the switch SW-
1, but also to the switch SW-2. In our example topology, there is no actual reason to fail over from the
Cluster Member GW-A to the Cluster Member GW-B.
To overcome this problem, Cluster Members use the Group of Bonds consisting of Bond-1 and
Bond-2. The Group of Bonds fails only when both Bond interfaces fail on the Cluster Member. Only then
does the cluster fail over.
1. The Cluster Member GW-A is the Active and the Cluster Member GW-B is the Standby.
2. On the Cluster Member GW-A, the Bond-1 interface fails.
3. On the Cluster Member GW-A, the Critical Device Interface Active Check reports its state as
"problem".
4. The Cluster Member GW-A does not change its cluster state from Active to Down.
5. On the Cluster Member GW-A, the Bond-2 interface fails as well.
6. The Cluster Member GW-A changes its cluster state from Active to Down.
7. The cluster fails over - the Cluster Member GW-B changes its cluster state from Standby to Active.
Important - In a Cluster, you must configure all the Cluster Members in the same way.
vsenv <VSID>
cp -v $FWDIR/boot/modules/fwkern.conf{,_BKP}
vi $FWDIR/boot/modules/fwkern.conf
c. Add these two lines at the bottom of the file (spaces or comments are not allowed):
Example:
fwha_group_of_bonds_str=GoB0:bond0,bond1;GoB1:bond2,bond3
fwha_arp_probe_method=1
5. Change the value of the kernel parameter fwha_group_of_bonds_str to add the Group of
Bonds on-the-fly:
Example:
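(A sketch, assuming the standard fw ctl syntax for string kernel parameters; the value mirrors the fwkern.conf example above:)

    fw ctl set str fwha_group_of_bonds_str 'GoB0:bond0,bond1;GoB1:bond2,bond3'
    fw ctl get str fwha_group_of_bonds_str    # verify the new value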
Important - In a Cluster, you must configure all the Cluster Members in the same way.
vsenv <VSID>
cp -v $FWDIR/boot/modules/fwkern.conf{,_BKP}
vi $FWDIR/boot/modules/fwkern.conf
c. Edit the value of the kernel parameter fwha_group_of_bonds_str to add the Bond
interface to the existing Group of Bonds.
Example:
fwha_group_of_bonds_str=GoB0:bond0,bond1;GoB1:bond2,bond3,bond4
7. Make sure the Cluster Member reset the value of the kernel parameter fwha_group_of_bonds_
str:
8. Change the value of the kernel parameter fwha_group_of_bonds_str to add the Bond
interface to the existing Group of Bonds on-the-fly:
Example:
10. In SmartConsole, install the Access Control Policy on the cluster object.
Important - In a Cluster, you must configure all the Cluster Members in the same way.
vsenv <VSID>
cp -v $FWDIR/boot/modules/fwkern.conf{,_BKP}
vi $FWDIR/boot/modules/fwkern.conf
c. Edit the value of the kernel parameter fwha_group_of_bonds_str to remove the Bond
interface from the existing Group of Bonds.
Example:
fwha_group_of_bonds_str=GoB0:bond0,bond1;GoB1:bond2,bond3
7. Make sure the Cluster Member reset the value of the kernel parameter fwha_group_of_bonds_
str:
8. Change the value of the kernel parameter fwha_group_of_bonds_str to remove the Bond
interface from the existing Group of Bonds on-the-fly:
Example:
10. In SmartConsole, install the Access Control Policy on the cluster object.
Important - In a Cluster, you must configure all the Cluster Members in the same way.
vsenv <VSID>
cp -v $FWDIR/boot/modules/fwkern.conf{,_BKP}
vi $FWDIR/boot/modules/fwkern.conf
6. Make sure the Cluster Member reset the value of the kernel parameter fwha_group_of_bonds_
str:
Monitoring
To see the configured Groups of Bonds, run the "cphaprob show_bond_groups" command. See
"Viewing Bond Interfaces" on page 225.
Logs
Cluster Members generate some applicable logs.
Applicable log files
Limitations
Specific limitations apply to a Group of Bonds.
List of Limitations
n The maximal length of the text string "<Name for Group of Bonds>" is 16 characters.
n The maximal length of this text string is 1024 characters:
n You can configure a maximum of five Groups of Bonds on a Cluster Member or Virtual System.
n You can configure a maximum of five Bond interfaces in each Group of Bonds.
n The Group of Bonds feature does not support Virtual Switches and Virtual Routers. Meaning, do not
configure Groups of Bonds in the context of these Virtual Devices.
n The Group of Bonds feature supports only Bond interfaces that belong to the same Virtual System.
You cannot configure bonds that belong to different Virtual Systems into the same Group of
Bonds.
You must perform all configuration in the context of the applicable Virtual System.
n The Group of Bonds feature does not support Sync interfaces (an interface on a Cluster Member, whose
Network Type was set as Sync or Cluster+Sync in SmartConsole in the cluster object).
n The Group of Bonds feature does not support Bridge interfaces.
n If a Bond interface goes down on one Cluster Member, the "cphaprob show_bond_groups"
command (see "Viewing Bond Interfaces" on page 225) on the peer Cluster Members also shows
the same Bond interface as DOWN.
This is because the peer Cluster Members stop receiving the CCP packets on that Bond interface
and cannot probe the local network to determine that their Bond interface is really working.
n After you add a Bond interface to the existing Group of Bonds, you must install the Access Control
Policy on the cluster object.
n After you remove a Bond interface from the existing Group of Bonds, you must install the Access
Control Policy on the cluster object.
An appliance has:
n Four processing CPU cores:
core 0, core 1, core 2, and core 3
n Two bond interfaces:
bond0 with slave interfaces eth0, eth1, and eth2
bond1 with slave interfaces eth3, eth4, and eth5
In such case, two of the CPU cores need to handle two slave interfaces each.
An optimal configuration can be:

CPU core Slave interfaces
0 eth0, eth3
1 eth1, eth4
2 eth2
3 eth5
For more information, see the R81.10 Performance Tuning Administration Guide:
n Chapter SecureXL > Section SecureXL Commands and Debug > Section 'sim' and 'sim6' > Section
sim affinity.
Note - Removing a slave interface from an existing bond, does not require a reboot.
Important - In a Cluster, you must configure all the Cluster Members in the same way.
See the R81.10 Quantum Security Gateway Guide - Chapter Working with Kernel Parameters on Security
Gateway.
The reason for blocking new connections is that new connections are the main source of new Delta
Synchronization traffic. Delta Synchronization may be at risk if new traffic continues to be processed at a
high rate.
A related error message in cluster logs and in the /var/log/messages file is:
Reducing the amount of traffic passing through the Cluster Member protects the Delta Synchronization
mechanism. See sk43896: Blocking New Connections Under Load in ClusterXL.
These kernel parameters let you control how Cluster Members behave:

Kernel Parameter: fw_sync_block_new_conns
Description: Controls how Cluster Members detect heavy loads and whether they start blocking new
connections.
Load is considered heavy when the synchronization transmit queue of the Cluster Member starts to fill
beyond the value of the kernel parameter "fw_sync_buffer_threshold".
n To enable blocking new connections under load, set the value of "fw_sync_block_new_conns" to 0.
n To disable blocking new connections under load, set the value of "fw_sync_block_new_conns" to -1
(must use the hex value 0xFFFFFFFF). This is the default.
Note - Blocking new connections when sync is busy is only recommended
for ClusterXL Load Sharing deployments. While it is possible to block new
connections in ClusterXL High Availability mode, doing so does not solve
inconsistencies in sync, because the High Availability mode prevents that
from happening.

Kernel Parameter: fw_sync_buffer_threshold
Description: Configures the maximum percentage of the buffer that may be filled before new
connections are blocked (see the parameter "fw_sync_block_new_conns" above).
The default percentage value is 80, with a buffer size of 512.
By default, if more than 410 consecutive packets are sent without getting an ACK on
any one of them, new connections are dropped.

Kernel Parameter: fw_sync_allowed_protocols
Description: Determines the type of connections that can be opened while the system is in a
blocking state.
Thus, the user can have better control over the system behavior in cases of unusual load.
The value of this kernel parameter is a combination of flags, each specifying a different
type of connection. The required value is the result of adding the separate values of
these flags.
Summary table:
Flag Value
ICMP_CONN_ALLOWED 1
Warning - Do not set the value of this kernel parameter to a number larger than 3.
See the R81.10 Quantum Security Gateway Guide - Chapter Working with Kernel Parameters on Security
Gateway.
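A minimal sketch, using the standard fw ctl syntax for integer kernel parameters (the parameter name is taken from the table above):

    fw ctl get int fw_sync_block_new_conns      # view the current value
    fw ctl set int fw_sync_block_new_conns 0    # enable blocking new connections under load (on-the-fly)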
Warning - The price for this extra security is a considerable delay in TCP connection
establishment.
sync_tcp_handshake_mode
7. In the lower pane, right-click on the sync_tcp_handshake_mode property and select Edit.
8. Choose complete_sync and click OK.
For more information, see the section "Synchronization modes for TCP 3-way handshake" below.
9. To save the changes, from the File menu select Save All.
10. Close GuiDBedit Tool.
11. Connect with SmartConsole to the Security Management Server or Domain Management Server
that manages this cluster.
12. In SmartConsole, install the Access Control Policy onto the Cluster object.
Mode Instructions

Complete sync
All 3-way handshake packets are Sync-and-ACK'ed, and the 3-way handshake is enforced.
This mode slows down connection establishment considerably.
It may be used when there is no way to know where the next packet goes (for
example, in 3rd party clusters).

Smart sync
In most cases, we can assume that if SYN and SYN-ACK were encountered by the
same cluster member, then the connection is "sticky".
ClusterXL uses one additional flag in Connections Table record that says, “If this
member encounters a 3-way handshake packet, it should sync all other cluster
members”.
When a SYN packet arrives, the member that encountered it, records the connection
and turns off its flag. All other members are synchronized, and by using a post-sync
handler, their flag is turned on (in their Connections Tables).
If the same member encounters the SYN-ACK packet, the connection is sticky, thus
other cluster members are not informed.
Otherwise, the relevant member will inform all other members (since its flag is turned
on).
The original member (that encountered the SYN) will now turn on its flag, thus all
members will have their flag on.
In this case, the third packet of the 3-way handshake is also synchronized.
If for some reason, our previous assumption is not true (i.e., one cluster member
encountered both SYN and SYN-ACK packets, and other members encountered the
third ACK), then the “third” ACK will be dropped by the other cluster members, and we
rely on the periodic sync and TCP retransmission scheme to complete the 3-way
handshake.
This 3-way handshake synchronization mode is a good solution for ClusterXL Load
Sharing users that want to enforce 3-way handshake verification with the minimal
performance cost.
This 3-way handshake synchronization mode is also recommended for ClusterXL
High Availability.
Traffic sent from Cluster Members to internal or external networks is hidden behind the cluster Virtual IP
addresses and cluster MAC addresses. The cluster MAC address assigned to cluster interfaces is:
n Load Sharing Multicast - Multicast MAC address of the cluster Virtual IP address
n Load Sharing Unicast - MAC address of the Pivot Cluster Member's interface
The use of different subnets with cluster objects has some limitations - see "Limitations of Cluster
Addresses on Different Subnets" on page 158.
The required static routes must use the applicable local member's interface as the next hop gateway
for the network of the cluster Virtual IP address.
If you do not define the static routes correctly, Cluster Members are not able to pass traffic.
Note - In VSX Cluster, you must configure all routes only in SmartConsole in
the VSX Cluster object.
For configuration instructions, see the R81.10 Gaia Administration Guide - Chapter Network
Management - Sections IPv4 Static Routes and IPv6 Static Routes.
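For illustration only, a hypothetical Gaia Clish static route (the destination network and next-hop address below are placeholders, not values from this guide):

    set static-route 192.0.2.0/24 nexthop gateway address 198.51.100.1 on
    save config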
Procedure:
1. Configure static routes on the Cluster Members
Interface    IP address of Cluster Interface "A"    IP address of Cluster Interface "B"    Properties
g. Click OK.
h. Install the Access Control Policy on this cluster object.
Note - Static ARP is not required in order for the Cluster Members to work properly as a
cluster, since the cluster synchronization protocol does not rely on ARP.
Configuring Anti-Spoofing
1. Connect with SmartConsole to the Management Server.
2. From the left navigation panel, click Gateways & Servers.
3. Create a Group object, which contains the objects of both the external network and the internal
network.
In the "Example of Cluster IP Addresses on Different Subnets" on page 156, suppose Side "A" is the
external network, and Side "B" is the internal network.
You must configure the Group object to contain both the network 172.16.4.0 / 24 and the network
192.168.2.0 / 24.
4. Open the cluster object.
5. From the left tree, click Network Management.
6. Select the cluster interface and click Edit.
7. On the General page, in the Topology section, click Modify.
8. Select Override.
9. Select This Network (Internal).
10. Select Specific.
11. Select the Group object that contains the objects of both the external network and the internal
network.
12. Click OK.
13. Install the Access Control Policy on this cluster object.
Best Practice - Before you change the current configuration, export a complete
management database with "migrate_server" command. See the R81.10 CLI
Reference Guide.
Install a new Cluster Member you plan to add to the existing cluster.
See the R81.10 Installation and Upgrade Guide > Chapter Installing a ClusterXL, VSX Cluster,
VRRP Cluster.
Follow only the step "Install the Cluster Members".
Important - The new Cluster Member must run the same version with the
same Hotfixes as the existing Cluster Members.
On the new Cluster Member you plan to add to the existing cluster:
a. Configure or change the IP addresses on the applicable interfaces to match the current
cluster topology.
Use Gaia Portal or Gaia Clish.
See R81.10 Gaia Administration Guide.
b. Configure or change the applicable static routes to match the current cluster topology.
Use Gaia Portal or Gaia Clish.
c. Connect to the command line.
d. Log in to Gaia Clish or the Expert mode.
e. Start the Check Point Configuration Tool. Run:
cpconfig
f. Select the option Enable cluster membership for this gateway and enter y to confirm.
g. Reboot the new Cluster Member.
h. In the IPv6 Address field, enter a physical IPv6 address, if you need to use IPv6.
The Management Server must be able to connect to the Cluster Member at this IPv6
address. This IPv6 address can be internal or external. You can use a dedicated
management interface on the Cluster Member.
a. Connect to the command line on each Cluster Member (existing and the newly added).
b. Log in to Gaia Clish or the Expert mode.
c. Restart the clustering on each Cluster Member.
Run:
cphastop
cphastart
d. Make sure all Cluster Members detect each other and agree on their cluster states. Run:
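For example (a sketch; these are the standard state-viewing commands, also referenced in "Viewing Cluster State" on page 209):

    cphaprob state        # Expert mode
    show cluster state    # Gaia Clish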
Important - The existing Security Gateway must run the same version with the same
Hotfixes as the existing Cluster Members.
On the existing Security Gateway you plan to add to the existing cluster:
a. Configure or change the IP addresses on the applicable interfaces to match the current
cluster topology.
Use Gaia Portal or Gaia Clish.
See R81.10 Gaia Administration Guide.
b. Configure or change the applicable static routes to match the current cluster topology.
Use Gaia Portal or Gaia Clish.
c. Connect to the command line.
d. Log in to Gaia Clish or the Expert mode.
e. Start the Check Point Configuration Tool. Run:
cpconfig
f. Select the option Enable cluster membership for this gateway and enter y to confirm.
g. Reboot the Security Gateway.
f. In the list of Cluster Members, select the new Cluster Member and click Edit.
g. Click the NAT tab and configure the applicable NAT settings.
h. Click the VPN tab and configure the applicable VPN settings.
i. From the left tree, click Network Management.
n Make sure all interfaces are defined correctly.
n Make sure all IP addresses are defined correctly.
j. Click OK.
k. Install the Access Control Policy on this cluster object.
Policy installation must succeed on all Cluster Members.
l. Install the Threat Prevention Policy on this cluster object.
Policy installation must succeed on all Cluster Members.
Procedure
a. Connect to the command line on each Cluster Member (existing and the newly added).
b. Log in to Gaia Clish or the Expert mode.
c. Restart the clustering on each Cluster Member.
Run:
cphastop
cphastart
d. Make sure all Cluster Members detect each other and agree on their cluster states. Run:
Best Practice - Before you change the current configuration, export a complete
management database with "migrate_server" command. See the R81.10 CLI
Reference Guide.
cphastop
cphastart
d. Make sure all Cluster Members detect each other and agree on their cluster states. Run:
cpconfig
d. Select the option Disable cluster membership for this gateway and enter y to confirm.
e. Select the option Secure Internal Communication > enter y to confirm > enter the new
Activation Key. Make sure to write it down.
f. Exit from the cpconfig menu.
g. Reboot the Security Gateway.
If you need to use the Security Gateway you removed from the existing cluster, then establish
Secure Internal Communication (SIC) with it.
a. Connect with SmartConsole to the Security Management Server or Domain Management
Server that manages this Security Gateway.
b. From the left navigation panel, click Gateways & Servers.
Introduction 169
ISP Redundancy Modes 171
Outgoing Connections 172
Incoming Connections 173
Note - For information about ISP Redundancy on a Security Gateway, see the R81.10
Quantum Security Gateway Guide.
Introduction
ISP Redundancy connects Cluster Members to the Internet through redundant Internet Service Provider
(ISP) links.
ISP Redundancy monitors the ISP links and chooses the best current link.
Notes:
n ISP Redundancy requires a minimum of two external interfaces and supports up
to a maximum of ten.
To configure more than two ISP links, the Management Server and the Cluster
must run the version R81.10 and higher.
n ISP Redundancy is intended for traffic that originates on your internal networks
and goes to the Internet.
Important:
n You must connect each Cluster Member with a dedicated physical interface to
each of the ISPs.
n The IP addresses assigned to physical interfaces on each Cluster Member must
be on the same subnet as the Cluster Virtual IP address.
Item Description
1 Internal network
2 Switches
3 Cluster Member A
3b Cluster interface (IP address 20.20.20.11) connected to the Sync network (IP address
20.20.20.0/24)
4 Cluster Member B
4b Cluster interface (IP address 20.20.20.22) connected to the Sync network (IP address
20.20.20.0/24)
5 ISP B
6 ISP A
7 Internet
Best Practice:
n If all ISP links are basically the same, use the Load Sharing mode to ensure that
you are making the best use of all ISP links.
n You may prefer to use one of your ISP links that is more cost-effective in terms of
price and reliability.
In that case, use the Primary/Backup mode and set the more cost-effective ISP
as the Primary ISP link.
Outgoing Connections
Mode Description

Load Sharing
Outgoing traffic that exits the Cluster on its way to the Internet is distributed
between the ISP Links.
You can set a relative weight for how much you want each of the ISP Links to be
used.
For example, if one link is faster, it can be configured to route more traffic across
that ISP link than the other links.
Incoming Connections
For external users to make incoming connections, the administrator must:
1. Give each application server one routable IP address for each ISP.
2. Configure Static NAT to translate the routable addresses to the real server address.
If the servers handle different services (for example, HTTP and FTP), you can use NAT to employ only
routable IP addresses for all the publicly available servers.
External clients use one of the assigned IP addresses. In order to connect, the clients must be able to
resolve the DNS name of the server to the correct IP address.
Example
Note - The example below is for two ISP links. The subnets 172.16.0.0/24 and
192.168.0.0/24 represent public routable IP addresses.
Make sure you have the ISP data - the speed of the link and next hop IP address.
Automatic vs Manual configuration:
n If the Cluster object has at least two interfaces with the Topology "External" in the Network
Management page, you can configure the ISP links automatically.
Configuring ISP links automatically
n If the Cluster object has only one interface with the Topology "External" in the Network
Management page, you must configure the ISP links manually.
Configuring ISP links manually
The Cluster, or a DNS server behind it, must respond to DNS queries.
It resolves IP addresses of servers in the DMZ (or another internal network).
Get a public IP address from each ISP. If public IP addresses are not available, register the
domain to make the DNS server accessible from the Internet.
The Cluster intercepts DNS queries "Type A" for the web servers in its domain that come from
external hosts.
n If the Cluster recognizes the external host, it replies:
l In ISP Redundancy Load Sharing mode, the Cluster replies with IP addresses of all
ISP links, alternating their order.
l In ISP Redundancy Primary/Backup mode, the Cluster replies with the IP addresses
of the active ISP link.
n If the Cluster does not recognize the host, it passes the DNS query on to the original
destination, or to the domain DNS server.
Note - If the servers use different services (for example, HTTP and
FTP), you can use NAT for only the configured public IP addresses.
Name: DNS Proxy
Source: Applicable sources
Destination: Applicable DNS Servers
VPN: Any
Services & Applications: domain_udp
Action: Accept
Track: None
Install On: Policy Targets
The Access Control Policy must allow connections through the ISP links, with Automatic Hide NAT
on network objects that start outgoing connections.
a. In the properties of the object for an internal network, select NAT > Add Automatic Address
Translation Rules.
b. Select Hide behind the gateway.
c. Click OK.
d. Define rules for publicly reachable servers (Web servers, DNS servers, DMZ servers).
n If you have one public IP address from each ISP for the Cluster, define Static NAT.
Allow specific services for specific servers.
For example, configure NAT rules, so that incoming HTTP connections from your
ISPs reach a Web server, and DNS connections from your ISPs reach the DNS
server.
Example: Manual Static Rules for a Web Server and a DNS Server
n If you have a public IP address from each ISP for each publicly reachable server (in
addition to the Cluster), configure the applicable NAT rules:
i. Give each server a private IP address.
ii. Use the public IP addresses in the Original Destination.
iii. Use the private IP address in the Translated Destination.
iv. Select Any as the Original Service.
Note - If you use Manual NAT, then automatic ARP does not work for the IP
addresses behind NAT. You must configure the local.arp file as described
in sk30197.
Note - ISP Redundancy settings override the VPN Link Selection settings.
When ISP Redundancy is enabled, VPN encrypted connections survive a failure of an ISP link.
The settings in the ISP Redundancy page override settings in the IPsec VPN > Link Selection page.
Configuring ISP Redundancy for VPN with a Check Point peer
Step Instructions
7 Make sure that Use ongoing probing is selected, and that the Link redundancy mode shows the
mode of the ISP Redundancy:
High Availability (for Primary/Backup) or Load Sharing.
The VPN Link Selection now only probes the ISP configured in ISP Redundancy.
If the VPN peer is not a Check Point Security Gateway, the VPN may fail, or the third-party device may
continue to encrypt traffic to a failed ISP link.
n Make sure the third-party VPN peer recognizes encrypted traffic from the secondary ISP link as
coming from the Check Point cluster.
n Change the configuration of ISP Redundancy to not use these Check Point technologies:
l Use Probing - Make sure that Link Selection uses another option.
l The options Load Sharing, Service Based Link Selection, and Route based probing work
only on Check Point Security Gateways / Clusters / Security Groups.
If used, the Security Gateway / Cluster Members / Security Group uses one link to connect
to the third-party VPN peer.
The link with the highest prefix length and lowest metric is used.
For more information, see the R81.10 CLI Reference Guide > Chapter Security Gateway Commands -
Section fw - Section fw isp_link.
2. If you use PPPoE or PPTP xDSL modems, in the PPPoE or PPTP configuration, the Use Peer as
Default Gateway option must be cleared.
Router IP Address
All Cluster Members use the Cluster Virtual IP address(es) as Router IP address(es).
Routing information is synchronized among the Cluster Members using the Forwarding Information Base
(FIB) Manager process.
This is done to prevent traffic interruption in case of failover, and is used for Load Sharing and High Availability
modes.
The FIB Manager is responsible for the routing information.
The FIB Manager is registered as a Critical Device called "FIB". If the slave goes out of sync, this Critical
Device reports its state as "problem". As a result, the slave member changes its state to "DOWN" until the
FIB Manager is synchronized.
n When Dynamic Routing protocols and/or DHCP Relay are configured on cluster, the "Wait for
Clustering" option must be enabled in these cluster modes:
l ClusterXL High Availability
l ClusterXL Load Sharing Unicast
l ClusterXL Load Sharing Multicast
l VSX High Availability
l VSX Load Sharing (VSLS)
n When Dynamic Routing protocols and/or DHCP Relay are configured on cluster, the "Wait for
Clustering" must be disabled in these cluster modes:
l VRRP Cluster on Gaia OS
For more information, see sk92322.
Failure Recovery
Dynamic Routing on ClusterXL avoids creating a ripple effect upon failover by informing the neighboring
routers that the router has exited a maintenance mode.
The neighboring routers then reestablish their relationships to the cluster, without informing the other routers
in the network.
These restart protocols are widely adopted by all major networking vendors.
This table lists the RFCs and drafts with which Check Point Dynamic Routing is compliant:
Syntax
Notes:
n In Gaia Clish:
Enter set cluster<ESC><ESC> to see all the available commands.
n In Expert mode:
Run the cphaconf command to see all the available commands.
You can run the cphaconf commands only from the Expert mode.
n Syntax legend:
1. Curly brackets or braces { }:
Enclose a list of available commands or parameters, separated by the
vertical bar |, from which user can enter only one.
2. Angle brackets < >:
Enclose a variable - a supported value user needs to specify explicitly.
3. Square brackets or brackets [ ]:
Enclose an optional command or parameter, which user can also enter.
n You can include these commands in scripts to run them automatically.
The meaning of each command is explained in the next sections.
Table: ClusterXL Configuration Commands

Description of Command: Configure how to show the Cluster Member in local ClusterXL logs - by its
Member ID or its Member Name (see "Configuring the Cluster Member ID Mode in Local Logs" on page 186)
Command in Gaia Clish: set cluster member idmode {id | name}
Command in Expert Mode: cphaconf mem_id_mode {id | name}

Description of Command: Configure the Cluster Control Protocol (CCP) Encryption on the Cluster Member
(see "Configuring the Cluster Control Protocol (CCP) Settings" on page 194)
Command in Gaia Clish: set cluster member ccpenc {off | on}
Command in Expert Mode: cphaconf ccp_encrypt {off | on} and cphaconf ccp_encrypt_key <Key String>

Description of Command: Configure the Cluster Forwarding Layer on the Cluster Member (controls the
forwarding of traffic between Cluster Members)
Note - For Check Point use only.
Command in Gaia Clish: set cluster member forwarding {off | on}
Command in Expert Mode: cphaconf forward {off | on}

Description of Command: Initiate manual cluster failover (see "Initiating Manual Cluster Failover" on page 195)
Command in Gaia Clish: set cluster member admin {down | up}
Command in Expert Mode: clusterXL_admin {down | up}
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
This command configures how to show the Cluster Member in the local ClusterXL logs - by its Member ID
(default), or its Member Name.
This configuration applies to these local logs:
n /var/log/messages
n dmesg
n $FWDIR/log/fwd.elg
Syntax
Example

[Expert@Member1:0]# cphaconf mem_id_mode name
[Expert@Member1:0]# cphaprob names
[Expert@Member1:0]#
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
You can add a user-defined critical device to the default list of critical devices. Use this command to register
<device> as a critical process, and add it to the list of devices that must run for the Cluster Member to be
considered active. If <device> fails, then the Cluster Member is seen as failed.
If a Critical Device fails to report its state to the Cluster Member in the defined timeout, the Critical Device,
and by design the Cluster Member, are seen as failed.
Define the status of the Critical Device that is reported to ClusterXL upon registration.
This initial status can be one of these:
n ok - Critical Device is alive.
n init - Critical Device is initializing. The Cluster Member is Down. In this state, the Cluster Member
cannot become Active.
n problem - Critical Device failed. If this state is reported to ClusterXL, the Cluster Member immediately
goes Down. This causes a failover.
Syntax

Shell Command
Gaia Clish N/A
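As a sketch of the Expert mode syntax, based on the flags described in the Notes below (verify the exact syntax on your version):

    cphaprob -d <Name of Device> -t <Timeout in seconds> -s {ok | init | problem} [-p] [-g] register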
Notes:
n The "-t" flag specifies how frequently to expect the periodic reports from this Critical
Device.
If no periodic reports should be expected, then enter the value 0 (zero).
n The "-p" flag makes these changes permanent (survive reboot).
n The "-g" flag applies the command to all configured Virtual Systems.
Restrictions
n Total number of critical devices (pnotes) on Cluster Member is limited to 16.
n Name of any critical device (pnote) on Cluster Member is limited to 15 characters, and must not
include white spaces.
Related topics
n "Viewing Critical Devices" on page 213
n "Reporting the State of a Critical Device" on page 190
n "Registering Critical Devices Listed in a File" on page 191
n "Unregistering a Critical Device" on page 189
n "Unregistering All Critical Devices" on page 193
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
This command unregisters a user-defined Critical Device (Pnote). This means that this device is no longer
considered critical.
If a Critical Device was registered with a state "problem", before you ran this command, then after you run
this command, the status of the Cluster Member depends only on the states of the remaining Critical
Devices.
Syntax
Shell Command
Notes:
n The "-p" flag makes these changes permanent.
This means that after you reboot, these Critical Devices remain
unregistered.
n The "-g" flag applies the command to all configured Virtual Systems.
Related topics
n "Viewing Critical Devices" on page 213
n "Reporting the State of a Critical Device" on page 190
n "Registering a Critical Device" on page 187
n "Registering Critical Devices Listed in a File" on page 191
n "Unregistering All Critical Devices" on page 193
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
This command manually reports (changes) the state of a Critical Device to ClusterXL.
The reported state can be one of these:
n ok - Critical Device is alive.
n init - Critical Device is initializing. The Cluster Member is Down. In this state, the Cluster Member
cannot become Active.
n problem - Critical Device failed. If this state is reported to ClusterXL, the Cluster Member immediately
goes Down. This causes a failover.
If a Critical Device fails to report its state to the Cluster Member within the defined timeout, the Critical
Device, and by design the Cluster Member, are seen as failed. This is true only for Critical Devices with
timeouts. If a Critical Device is registered with the "-t 0" parameter, there is no timeout. Until the Critical
Device reports otherwise, the state of the Critical Device is considered to be the last reported state.
Syntax

Shell Command
Gaia Clish N/A
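As a sketch of the Expert mode syntax, based on the states and flags described above (verify the exact syntax on your version):

    cphaprob -d <Name of Device> -s {ok | init | problem} [-g] report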
Notes:
n The "-g" flag applies the command to all configured Virtual Systems.
n If the "<Name of Critical Device>" reports its state as "problem", then the
Cluster Member reports its state as failed.
Related topics
n "Viewing Critical Devices" on page 213
n "Registering a Critical Device" on page 187
n "Registering Critical Devices Listed in a File" on page 191
n "Unregistering a Critical Device" on page 189
n "Unregistering All Critical Devices" on page 193
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
This command registers all the user-defined Critical Devices listed in the specified file.
This file must be a plain-text ASCII file, with each Critical Device defined on a separate line.
Each definition must contain three parameters, which must be separated by a space or a tab character:
Where:
Parameter Description
<Timeout> If the Critical Device <Name of Device> fails to report its state to the Cluster Member
within this specified number of seconds, the Critical Device (and by design the Cluster
Member), are seen as failed.
For no timeout, use the value 0 (zero).
<Status> The Critical Device <Name of Device> reports one of these statuses to the Cluster
Member:
n ok - Critical Device is alive.
n init - Critical Device is initializing. The Cluster Member is Down. In this state,
the Cluster Member cannot become Active.
n problem - Critical Device failed. If this state is reported to ClusterXL, the Cluster
Member immediately goes Down. This causes a failover.
Syntax
Shell Command
Note - The "-g" flag applies the command to all configured Virtual Systems.
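For example (a sketch, assuming a hypothetical file /tmp/my_pnotes with one "<Name of Device>
<Timeout> <Status>" triple per line; verify the exact syntax with the command's built-in help on
your version):
[Expert@Member1:0]# cat /tmp/my_pnotes
myDevice1 30 ok
myDevice2 0 init
[Expert@Member1:0]# cphaprob -f /tmp/my_pnotes register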
Related topics
n "Viewing Critical Devices" on page 213
n "Reporting the State of a Critical Device" on page 190
n "Registering a Critical Device" on page 187
n "Unregistering a Critical Device" on page 189
n "Unregistering All Critical Devices" on page 193
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
This command unregisters all critical devices from the Cluster Member.
Syntax
Shell Command
Notes:
n The "-a" flag specifies that all Pnotes must be unregistered
n The "-g" flag applies the command to all configured Virtual
Systems
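For example (a sketch; verify the exact syntax with the command's built-in help on your version):
[Expert@Member1:0]# cphaprob -a unregister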
Related topics
n "Viewing Critical Devices" on page 213
n "Reporting the State of a Critical Device" on page 190
n "Registering a Critical Device" on page 187
n "Registering Critical Devices Listed in a File" on page 191
n "Unregistering a Critical Device" on page 189
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
Cluster Members configure the Cluster Control Protocol (CCP) mode automatically.
You can configure the Cluster Control Protocol (CCP) Encryption on the Cluster Members.
See "Viewing the Cluster Control Protocol (CCP) Settings" on page 246.
Shell Command
Syntax
Shell Command
Example
... ...
[Expert@Member1:0]#
[Expert@Member1:0]#
[Expert@Member1:0]# clusterXL_admin up
This command does not survive reboot. To make the change permanent, please run 'set cluster member admin
down/up permanent' in clish or add '-p' at the end of the command in expert mode
Setting member to normal operation ...
Member current state is STANDBY
[Expert@Member1:0]#
[Expert@Member1:0]#
Important - In a Cluster, you must configure all the Cluster Members in the same way.
Description
ClusterXL considers a bond in Load Sharing mode to be in the "down" state when fewer than a minimal
number of required slave interfaces stay in the "up" state.
By default, the minimal number of required slave interfaces, which must stay in the "up" state in a bond of n
slave interfaces is n-1.
If one more slave interface fails (when n-2 slave interfaces stay in the "up" state), ClusterXL considers the
bond interface to be in the "down" state, even if the bond contains more than two slave interfaces.
If a smaller number of slave interfaces can pass the expected traffic, you can explicitly
configure the minimal number of required slave interfaces.
Divide your maximal expected traffic speed by the speed of your slave interfaces and round up the result to
find an applicable minimal number of required slave interfaces.
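For example, if the maximal expected traffic is 3.5 Gbps and each slave interface runs at
1 Gbps, then 3.5 / 1 = 3.5, which rounds up to a minimal number of 4 required slave interfaces.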
Notes:
n Cluster Members save the configuration in the $FWDIR/conf/cpha_bond_ls_
config.conf file.
n The commands below save the changes in this file.
n Each line in the file has this syntax:
<Name of Bond Interface> <Minimal Number of Required Slave Interfaces>
Syntax to add the minimal number of required slave interfaces for a specific Bond interface
Shell Command
Gaia Clish: N/A
Syntax to remove the configured minimal number of required slave interfaces for a specific Bond
interface
Shell Command
Syntax to see the current configuration of the minimal number of required slave interfaces
Shell Command
Procedure
Step Instructions
3 Add or remove the minimal number of required slave interfaces for a specific Bond interface:
cphaconf bond_ls set <Bond> <Minimal Number of Slaves>
Example
[Expert@Member1:0]#
bond1 2
[Expert@Member1:0]#
[Expert@Member1:0]#
[Expert@Member1:0]#
Syntax
Shell Command
Parameters
Parameter Description
Notes:
n This command does not provide an output. To view the current state of the MVC
Mechanism, see "Viewing the State of the Multi-Version Cluster Mechanism" on
page 248.
n The change made with this command survives reboot.
n If a specific scenario requires you to disable the MVC Mechanism before the first
start of an R81.10 Cluster Member (for example, immediately after an upgrade to
R81.10), then disable it before the first policy installation on this Cluster Member.
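A hedged example (assuming the "cphaconf mvc {off | on}" syntax; verify with the command's
built-in help on your version):
[Expert@Member1:0]# cphaconf mvc off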
Syntax
Notes:
n In Gaia Clish:
Enter show cluster<ESC><ESC> to see all the available commands.
n In Expert mode:
Run the cphaprob command to see all the available commands.
You can run the cphaprob commands from Gaia Clish as well.
n Syntax legend:
1. Curly brackets or braces { }:
Enclose a list of available commands or parameters, separated by the
vertical bar |, from which the user can enter only one.
2. Angle brackets < >:
Enclose a variable - a supported value the user needs to specify explicitly.
3. Square brackets or brackets [ ]:
Enclose an optional command or parameter, which the user can also enter.
n You can include these commands in scripts to run them automatically.
The meaning of each command is explained in the next sections.
Table: ClusterXL Monitoring Commands

n Show states of Cluster Members and their names (see "Viewing Cluster State" on page 209):
Gaia Clish: show cluster state
Expert mode: cphaprob [-vs <VSID>] state
n Show Critical Devices (Pnotes) and their states on the Cluster Member (see "Viewing Critical
Devices" on page 213):
Gaia Clish: show cluster members pnotes {all | problem}
Expert mode: cphaprob [-l] [-ia] [-e] list
n Show cluster interfaces on the cluster member (see "Viewing Cluster Interfaces" on page 221):
Gaia Clish: show cluster members interfaces {all | secured | virtual | vlans}
Expert mode: cphaprob [-vs all] [-a] [-m] if
n Show cluster bond configuration on the Cluster Member (see "Viewing Bond Interfaces" on
page 225):
Gaia Clish: show cluster bond {all | name <bond_name>}
Expert mode: cphaprob show_bond [<bond_name>]
n Show (and reset) cluster failover statistics on the Cluster Member (see "Viewing Cluster
Failover Statistics" on page 229):
Gaia Clish: show cluster failover [reset {count | history}]
Expert mode: cphaprob [-reset {-c | -h}] [-l <count>] show_failover
n Show information about the software version (including hotfixes) on the local Cluster Member
and its matches/mismatches with other Cluster Members (see "Viewing Software Versions on
Cluster Members" on page 231):
Gaia Clish: show cluster release
Expert mode: cphaprob release
n Show Delta Sync statistics on the Cluster Member (see "Viewing Delta Synchronization" on
page 232):
Gaia Clish: show cluster statistics sync [reset]
Expert mode: cphaprob [-reset] syncstat
n Show Delta Sync statistics for the Connections table on the Cluster Member (see "Viewing
Cluster Delta Sync Statistics for Connections Table" on page 239):
Gaia Clish: show cluster statistics transport [reset]
Expert mode: cphaprob [-reset] ldstat
n Show the Cluster Control Protocol (CCP) mode on the Cluster Member (see "Viewing Cluster
Interfaces" on page 221):
Gaia Clish: show cluster members interfaces virtual
Expert mode: cphaprob [-vs all] -a if
n Show the IGMP membership of the Cluster Member (see "Viewing IGMP Status" on page 238):
Gaia Clish: show cluster members igmp
Expert mode: cphaprob igmp
n Show cluster unique IP's table on the Cluster Member (see "Viewing Cluster IP Addresses" on
page 240):
Gaia Clish: show cluster members ips
Expert mode: cphaprob tablestat
Gaia Clish: show cluster members monitored
Expert mode: cphaprob -m tablestat
n Show the Cluster Member ID Mode in local logs - by Member ID (default) or Member Name (see
"Viewing the Cluster Member ID Mode in Local Logs" on page 241):
Gaia Clish: show cluster members idmode
Expert mode: cphaprob names
n Show interfaces, which the RouteD monitors on the Cluster Member when you configure OSPF (see
"Viewing Interfaces Monitored by RouteD" on page 242):
Gaia Clish: show ospf interfaces [detailed]
Expert mode: cphaprob routedifcs
n Show roles of RouteD daemon on Cluster Members (see "Viewing Roles of RouteD Daemon on Cluster
Members" on page 243):
Gaia Clish: show cluster roles
Expert mode: cphaprob roles
n Show the Cluster Control Protocol (CCP) mode (see "Viewing the Cluster Control Protocol (CCP)
Settings" on page 246):
Gaia Clish: show cluster members interfaces virtual
Expert mode: cphaprob -a if
n Show the Cluster Control Protocol (CCP) Encryption settings (see "Viewing the Cluster Control
Protocol (CCP) Settings" on page 246):
Gaia Clish: show cluster members ccpenc
Expert mode: cphaprob ccp_encrypt

The full "show cluster" command tree in Gaia Clish:

show cluster
      bond
            all
            name <Name of Bond>
      failover
      members
            ccpenc
            idmode
            igmp
            interfaces
                  all
                  secured
                  virtual
                  vlans
            ips
            monitored
            mvc
            pnotes
                  all
                  problem
      release
      roles
      state
      statistics
            sync [reset]
            transport [reset]
Syntax
Shell Command
Example
Member1>
Assigned Load
n In the ClusterXL High Availability mode - shows the Active Cluster Member with 100% load, and
all other Standby Cluster Members with 0% load.
n In ClusterXL Load Sharing modes (Unicast and Multicast) - shows all Active Cluster Members
with 100% load.
State
n In the ClusterXL High Availability mode, only one Cluster Member in a fully-functioning
cluster must be ACTIVE, and the other Cluster Members must be in the STANDBY state.
n In the ClusterXL Load Sharing modes (Unicast and Multicast), all Cluster Members in a
fully-functioning cluster must be ACTIVE.
n In 3rd-party clustering configuration, all Cluster Members in a fully-functioning cluster must
be ACTIVE. This is because this command only reports the status of the Full Synchronization
process.
See the summary table below.
Active PNOTEs
Shows the Critical Devices that report their states as "problem" (see "Viewing Critical Devices"
on page 213).
Last member state change event
Shows information about the last time this Cluster Member changed its cluster state.
State change
Shows the previous cluster state and the new cluster state of this Cluster Member.
Reason for state change
Shows the reason why this Cluster Member changed its cluster state.
Event time
Shows the date and the time when this Cluster Member changed its cluster state.
Last cluster failover event
Shows information about the last time a cluster failover occurred.
Event time
Shows the date and the time of the last cluster failover.
Time of counter reset
Shows the date and the time of the last counter reset, and the reset initiator.
When you examine the state of the Cluster Member, consider whether it forwards packets, and whether it
has a problem that prevents it from forwarding packets. Each state reflects the result of a test on critical
devices. This table shows the possible cluster states, and whether or not they represent a problem.
Table: Description of the cluster states

ACTIVE(!), ACTIVE(!F), ACTIVE(!P), ACTIVE(!FP)
Description: A problem was detected, but the Cluster Member still forwards packets, because it
is the only member in the cluster, or because there are no other Active members in the cluster.
In any other situation, the state of the member is Down.
n ACTIVE(!) - See above.
n ACTIVE(!F) - See above. Cluster Member is in the freeze state.
n ACTIVE(!P) - See above. This is the Pivot Cluster Member in Load Sharing Unicast mode.
n ACTIVE(!FP) - See above. This is the Pivot Cluster Member in Load Sharing Unicast mode and it
is in the freeze state.
Forwarding packets? Yes. Is this state a problem? Yes.

INIT
Description: The Cluster Member is in the phase after the boot and until the Full Sync completes.
Forwarding packets? No. Is this state a problem? No.
Each entry below shows the Critical Device, what it monitors, when it reports the state "ok",
and when it reports the state "problem".

Problem Notification
What it monitors: All the Critical Devices.
State "ok": None of the Critical Devices on this Cluster Member reports its state as problem.
State "problem": At least one of the Critical Devices on this Cluster Member reports its state
as problem.

Interface Active Check
What it monitors: The state of cluster interfaces.
State "ok": All cluster interfaces on this Cluster Member are up (CCP packets are sent and
received on all cluster interfaces).
State "problem": At least one of the cluster interfaces on this Cluster Member is down (CCP
packets are not sent and/or received on time).

Fullsync
What it monitors: If Full Sync on this Cluster Member completed successfully.
State "ok": This Cluster Member completed Full Sync successfully.
State "problem": This Cluster Member was not able to complete Full Sync.

Policy
What it monitors: If the Security Policy is installed.
State "ok": This Cluster Member successfully installed the Security Policy.
State "problem": The Security Policy is not currently installed on this Cluster Member.

fwd
What it monitors: The Security Gateway process called fwd.
State "ok": The fwd daemon on this Cluster Member reported its state on time.
State "problem": The fwd daemon on this Cluster Member did not report its state on time.

ted
What it monitors: The Threat Emulation process called ted.
State "ok": The ted daemon on this Cluster Member reported its state on time.
State "problem": The ted daemon on this Cluster Member did not report its state on time.

Instances
What it monitors: This pnote appears in a VSX HA mode (not VSLS) cluster.
State "ok": The number of CoreXL Firewall instances in the received CCP packet matches the
number of loaded CoreXL Firewall instances on this VSX Cluster Member or this Virtual System.
State "problem": There is a mismatch between the number of CoreXL Firewall instances in the
received CCP packet and the number of loaded CoreXL Firewall instances on this VSX Cluster
Member or this Virtual System (see sk106912).

host_monitor
What it monitors: The Critical Device host_monitor. User executed the
$FWDIR/bin/clusterXL_monitor_ips script. See "The clusterXL_monitor_ips Script" on page 284.
State "ok": All monitored IP addresses on this Cluster Member replied to pings.
State "problem": At least one of the monitored IP addresses on this Cluster Member did not reply
to at least one ping.

A name of a user space process (except fwd, routed, cvpnd, ted)
What it monitors: User executed the $FWDIR/bin/clusterXL_monitor_process script. See "The
clusterXL_monitor_process Script" on page 288.
State "ok": All monitored user space processes on this Cluster Member are running.
State "problem": At least one of the monitored user space processes on this Cluster Member is
not running.

Local Probing
What it monitors: The probing mechanism on the cluster interfaces (see the term Probing in the
"Glossary" on page 12).
State "ok": CCP packets are received on all cluster interfaces.
State "problem": At least one of the cluster interfaces on this Cluster Member does not receive
CCP packets for 5 seconds. The probing started for the network connected to the affected
interface.
Important:
n The state of this Critical Device does not affect the cluster state of a Cluster Member. This
Critical Device is only an indicator for the probing mechanism (instead of running a cluster
debug).
n If there is a real issue with a cluster interface, the Critical Device "Interface Active
Check" reports its state as "problem".
Syntax
Shell Command
Where:
Command Description
show cluster members pnotes {all | problem}
Prints the list of all the "Built-in Devices" and the "Registered Devices".
cphaprob -l list
Prints the list of all the "Built-in Devices" and the "Registered Devices".
cphaprob -i list
When there are no issues on the Cluster Member, shows: There are no pnotes in problem state
When a Critical Device reports a problem, prints only the Critical Device that reports its state
as "problem".
cphaprob -ia list
When there are no issues on the Cluster Member, shows: There are no pnotes in problem state
When a Critical Device reports a problem, prints the Critical Device "Problem Notification" and
the Critical Device that reports its state as "problem".
cphaprob -e list
When there are no issues on the Cluster Member, shows: There are no pnotes in problem state
When a Critical Device reports a problem, prints only the Critical Device that reports its state
as "problem".
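For example, on a healthy Cluster Member (output text as quoted above):
[Expert@Member1:0]# cphaprob -ia list
There are no pnotes in problem state
[Expert@Member1:0]#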
Related topics
n "Reporting the State of a Critical Device" on page 190
n "Registering a Critical Device" on page 187
n "Registering Critical Devices Listed in a File" on page 191
n "Unregistering a Critical Device" on page 189
n "Unregistering All Critical Devices" on page 193
Examples
Example 1 - Critical Device 'fwd'
Critical Device fwd reports its state as problem because the fwd process is down.
Built-in Devices:
Registered Devices:
[Expert@Member1:0]#
Example 2 - Critical Device 'CoreXL Configuration'
Critical Device CoreXL Configuration reports its state as problem because the numbers of CoreXL
Firewall instances do not match between the Cluster Members.
Built-in Devices:
Registered Devices:
[Expert@Member1:0]#
Syntax
Shell Command
Where:
Command Description
show cluster members interfaces all
Shows full list of all cluster interfaces:
n including the number of required interfaces
n including Network Objective
n including VLAN monitoring mode, or list of monitored VLAN interfaces
show cluster members interfaces secured
Shows only cluster interfaces (Cluster and Sync) and their states:
n without Network Objective
n without VLAN monitoring mode
n without monitored VLAN interfaces
show cluster members interfaces virtual
Shows full list of cluster virtual interfaces and their states:
n including the number of required interfaces
n including Network Objective
n without VLAN monitoring mode
n without monitored VLAN interfaces
cphaprob -a -m if
Shows full list of all cluster interfaces and their states:
n including the number of required interfaces
n including Network Objective
n including VLAN monitoring mode, or list of monitored VLAN interfaces
Output
The output of these commands must be identical to the configuration in the cluster object's Network
Management page in SmartConsole.
Example
[Expert@Member1:0]# cphaprob -a -m if
eth0 UP
eth1 (S) UP
eth2 (LM) UP
bond1 (LS) UP
eth0 192.168.3.247
eth2 44.55.66.247
bond1 77.88.99.247
[Expert@Member1:0]#
Required interfaces
Shows the total number of monitored cluster interfaces, including the Sync interface.
This number is based on the configuration of the cluster object > Network Management page.
Required secured interfaces
Shows the total number of the required Sync interfaces.
This number is based on the configuration of the cluster object > Network Management page.
Non-Monitored
This means that the Cluster Member does not monitor the state of this interface.
In SmartConsole, in the cluster object > Network Management page, the administrator configured
the Network Type Private for this interface.
UP
This means that the Cluster Member monitors the state of this interface.
The current cluster state of this interface is UP, which means this interface can send and
receive CCP packets.
In SmartConsole, in the cluster object > Network Management page, the administrator configured
one of these Network Types for this interface: Cluster, Sync, or Cluster + Sync.
DOWN
This means that the Cluster Member monitors the state of this interface.
The current cluster state of this interface is DOWN, which means this interface cannot send CCP
packets, receive CCP packets, or both.
In SmartConsole, in the cluster object > Network Management page, the administrator configured
one of these Network Types for this interface: Cluster, Sync, or Cluster + Sync.
Virtual cluster interfaces
Shows the total number of the configured virtual cluster interfaces.
This number is based on the configuration of the cluster object > Network Management page.
No VLANs are monitored on the member
Shows the VLAN monitoring mode - there are no VLAN interfaces configured on the cluster
interfaces.
Monitoring mode is Monitor all VLANs: All VLANs are monitored
Shows the VLAN monitoring mode - there are some VLAN interfaces configured on the cluster
interfaces, and the Cluster Member monitors all VLAN IDs.
Monitoring mode is Monitor specific VLAN: Only specified VLANs are monitored
Shows the VLAN monitoring mode - there are some VLAN interfaces configured on the cluster
interfaces, and the Cluster Member monitors only specific VLAN IDs.
Syntax
Shell Command
Where:
Command Description
show cluster bond all / show bonding groups / cphaprob show_bond
Shows configuration of all configured bond interfaces.
show cluster bond name <bond_name> / cphaprob show_bond <bond_name>
Shows configuration of the specified bond interface.
Examples
Example 1 - 'cphaprob show_bond'
[Expert@Member2:0]# cphaprob show_bond
Legend:
-------
UP! - Bond interface state is UP, yet attention is required
Slaves configured - number of slave interfaces configured on the bond
Slaves link up - number of operational slaves
Slaves required - minimal number of operational slaves required for bond to be UP
[Expert@Member2:0]#
Description of the output fields for the "cphaprob show_bond" and "show cluster bond all"
commands:
Table: Description of the output fields
Field Description
Slaves configured - Total number of physical slave interfaces configured in this Gaia bonding
group.
Slaves link up - Number of operational physical slave interfaces in this Gaia bonding group.
Slaves required - Minimal number of operational physical slave interfaces required for the state
of this Gaia bonding group to be UP.
[Expert@Member2:0]#
Description of the output fields for the "cphaprob show_bond <bond_name>" and "show cluster
bond name <bond_name>" commands:
Table: Description of the output fields
Field Description
Bond mode - Bonding mode of this Gaia bonding group. One of these:
n High Availability
n Load Sharing
Configured slave interfaces - Total number of physical slave interfaces configured in this Gaia
bonding group.
In use slave interfaces - Number of operational physical slave interfaces in this Gaia bonding
group.
Required slave interfaces - Minimal number of operational physical slave interfaces required for
the state of this Gaia bonding group to be UP.
Slave name - Names of physical slave interfaces configured in this Gaia bonding group.
Link - State of the physical link on the physical slave interfaces in this Gaia bonding group.
One of these:
n Yes - Link is present
n No - Link is lost
Legend:
---------
Bonds in group - a list of the bonds in the bond group
Required active bonds - number of required active bonds
[Expert@Member2:0]#
Required active bonds Number of required active bonds in this Group of Bonds.
Bonds in group Names of the Gaia bond interfaces configured in this Group of Bonds.
Shell Command
Shell Command
Parameters
Parameter Description
-l <number> Specifies how many of the last failover events to show (between 1 and 50)
Example
Cluster failover history (last 20 failovers since reboot/reset on Sun Sep 8 16:08:34 2019):
[Expert@Member1:0]#
Syntax
Shell Command
Example
ID SW release
[Expert@Member1:0]#
Shell Command
Shell Command
Example output of the "show cluster statistics sync" and "cphaprob syncstat" commands
from a Cluster Member:
Sync status: OK
Drops:
Lost updates................................. 0
Lost bulk update events...................... 0
Oversized updates not sent................... 0
Sync at risk:
Sent reject notifications.................... 0
Received reject notifications................ 0
Sent messages:
Total generated sync messages................ 26079
Sent retransmission requests................. 0
Sent retransmission updates.................. 0
Peak fragments per update.................... 1
Received messages:
Total received updates....................... 3710
Received retransmission requests............. 0
Sync Interface:
Name......................................... eth1
Link speed................................... 1000Mb/s
Rate......................................... 46000 [Bps]
Peak rate.................................... 46000 [Bps]
Link usage................................... 0%
Total........................................ 376827[KB]
Timers:
Delta Sync interval (ms)..................... 100
This section shows the status of the Delta Sync mechanism. One of these:
n Sync status: OK
n Sync status: Off - Full-sync failure
n Sync status: Off - Policy installation failure
n Sync status: Off - Cluster module not started
n Sync status: Off - SIC failure
n Sync status: Off - Full-sync checksum error
n Sync status: Off - Full-sync received queue is full
n Sync status: Off - Release version mismatch
n Sync status: Off - Connection to remote member timed-out
n Sync status: Off - Connection terminated by remote member
This section shows statistics for drops on the Delta Sync network.
Table: Description of the output fields
Field Description
Lost updates
Shows how many Delta Sync updates this Cluster Member considers as lost (based on sequence
numbers in CCP packets).
If this counter shows a value greater than 0, this Cluster Member lost Delta Sync updates.
Possible mitigation:
Increase the size of the Sending Queue and the size of the Receiving Queue:
n Increase the size of the Sending Queue, if the counter Received reject notifications is
increasing.
n Increase the size of the Receiving Queue, if the counter Received reject notifications is not
increasing.
Lost bulk update events
Shows how many times this Cluster Member missed Delta Sync updates
(bulk update = twice the size of the local receiving queue).
This counter increases when this Cluster Member receives a Delta Sync update with a sequence
number much greater than expected. This probably indicates some networking issues that cause
massive packet drops.
This counter increases when the amount of missed Delta Sync updates is more than twice the local
Receiving Queue Size.
Possible mitigation:
n If the counter's value is steady, this might indicate a one-time synchronization problem that
can be resolved by running manual Full Sync. See sk37029.
n If the counter's value keeps increasing, probably there are some networking issues. Increase
the sizes of both the Receiving Queue and the Sending Queue.
Oversized updates not sent
Shows how many oversized Delta Sync updates were discarded before sending them.
This counter increases when a Delta Sync update is larger than the local Fragments Queue Size.
Possible mitigation:
n If the counter's value is steady, increase the size of the Sending Queue.
n If the counter's value keeps increasing, contact Check Point Support.
This section shows statistics that the Sending Queue is at full capacity and rejects Delta Sync
retransmission requests.
Table: Description of the output fields
Field Description
Sent reject notifications
Shows how many times this Cluster Member rejected Delta Sync retransmission requests from its
peer Cluster Members, because this Cluster Member does not hold the requested Delta Sync update
anymore.
Received reject notifications
Shows how many reject notifications this Cluster Member received from its peer Cluster Members.
This section shows statistics for Delta Sync updates sent by this Cluster Member to its peer Cluster
Members.
Table: Description of the output fields
Field Description
Total generated sync messages
Shows how many Delta Sync updates were generated.
This counts the Delta Sync updates, Retransmission Requests, Retransmission Acknowledgments, and
so on.
Sent retransmission requests
Shows how many times this Cluster Member asked its peer Cluster Members to retransmit specific
Delta Sync update(s).
Retransmission requests are sent when certain Delta Sync updates (with a specified sequence
number) are missing, while the sending Cluster Member already received Delta Sync updates with
advanced sequences.
Note - Compare the number of Sent retransmission requests to the Total generated sync messages
of the other Cluster Members. A large counter's value can imply connectivity problems. If the
counter's value is unreasonably high (more than 30% of the Total generated sync messages of
other Cluster Members), contact Check Point Support equipped with the entire output and a
detailed description of the network topology and configuration.
Sent retransmission updates
Shows how many times this Cluster Member retransmitted specific Delta Sync update(s) at the
requests from its peer Cluster Members.
Peak fragments per update
Shows the peak amount of fragments in the Fragments Queue on this Cluster Member (usually,
should be 1).
This section shows statistics for Delta Sync updates that were received by this Cluster Member from its
peer Cluster Members.
Table: Description of the output fields
Field Description
Total received updates
Shows the total number of Delta Sync updates this Cluster Member received from its peer Cluster
Members.
This counts only Delta Sync updates (not Retransmission Requests, Retransmission
Acknowledgments, and others).
Received retransmission requests
Shows how many retransmission requests this Cluster Member received from its peer Cluster
Members.
A large counter's value can imply connectivity problems. If the counter's value is unreasonably
high (more than 30% of the Total generated sync messages on this Cluster Member), contact Check
Point Support equipped with the entire output and a detailed description of the network topology
and configuration.
Sending queue size
Shows the size of the cyclic queue, which buffers all the Delta Sync updates that were already
sent until it receives an acknowledgment from the peer Cluster Members.
This queue is needed for retransmitting the requested Delta Sync updates.
Each Cluster Member has one Sending Queue.
Default: 512 Delta Sync updates, which is also the minimal value.
Receiving queue size
Shows the size of the cyclic queue, which buffers the received Delta Sync updates in two cases:
n When Delta Sync updates are missing, this queue is used to hold the remaining received Delta
Sync updates until the lost Delta Sync updates are retransmitted (Cluster Members must keep the
order, in which they save the Delta Sync updates in the kernel tables).
n This queue is used to re-assemble a fragmented Delta Sync update.
Each Cluster Member has one Receiving Queue.
Default: 256 Delta Sync updates, which is also the minimal value.
Fragments queue size
Shows the size of the queue, which is used to prepare a Delta Sync update before moving it to
the Sending Queue.
Notes:
n This queue must be smaller than the Sending Queue.
n This queue must be significantly smaller than the Receiving Queue.
Default: 50 Delta Sync updates, which is also the minimal value.
Delta Sync interval (ms)
Shows the interval at which this Cluster Member sends the Delta Sync updates from its Sending
Queue.
The base time unit is 100ms (or 1 tick).
Default: 100 ms, which is also the minimum value.
See Increasing the Sync Timer.
Syntax
Shell Command
Example
[Expert@Member1:0]#
Syntax
Shell Command
The "reset" flag resets the kernel statistics, which were collected since the last reboot or reset.
Example
[Expert@Member1:0]#
Shell Command
Shell Command
Example
Note - To see name of interfaces that correspond to numbers in the "Interface" column,
run the fw ctl iflist command.
(Local)
0 1 192.168.3.245
0 2 11.22.33.245
0 3 44.55.66.245
1 1 192.168.3.246
1 2 11.22.33.246
1 3 44.55.66.246
------------------------------------------
[Expert@Member1:0]#
[Expert@Member1:0]# fw ctl iflist
1 : eth0
2 : eth1
3 : eth2
[Expert@Member1:0]#
Syntax
Shell Command
Example
[Expert@Member1:0]#
Syntax
Shell Command
Example 1
[Expert@Member1:0]#
Example 2
eth0
[Expert@Member1:0]#
Syntax
Shell Command
Example
ID Role
1 (local) Master
2 Non-Master
[Expert@Member1:0]#
Note - For more information about CoreXL, see the R81.10 Performance Tuning
Administration Guide.
Syntax
Shell Command
Where:
Command Description
cphaprob -d corr Shows Cluster Correction Statistics for CoreXL SND only.
cphaprob -f corr Shows Cluster Correction Statistics for CoreXL Firewall instances only.
Shell Command
Shell Command
Syntax
Shell Command
Example
id 2
          Latency    Drop
          [msec]     rate
eth0      0.000      0%
eth1      0.000      0%
eth2      0.000      0%
[Expert@Member1:0]#
Syntax
Shell Command
Example
ON
Member1>
Syntax
Shell Command
Example
During FCU....................... no
Connection module map............ none
[Expert@Member1:0]#
General Logs
Log Description
State Logs
Log Description
Log: "State change of member [ID] ([IP]) from [STATE] to [STATE] was cancelled, since all other
members are down. Member remains [STATE]."
Description: When a Cluster Member needs to change its state (for example, when an Active
Cluster Member encounters a problem and needs to change its state to "Down"), it first queries
the other Cluster Members for their state. If all other Cluster Members are down, this Cluster
Member cannot change its state to a non-active one (otherwise the cluster cannot function).
Thus, the reporting Cluster Member continues to function, despite its problem (and usually
reports its state as "Active(!)").
Log: "member [ID] ([IP]) <is active|is down|is stand-by|is initializing> ([REASON])."
Description: This message is issued whenever a Cluster Member changes its state. The log text
specifies the new state of the Cluster Member.
Log Description
Log: "[DEVICE] on member [ID] ([IP]) detected a problem ([REASON])."
Description: Either an error was detected by the Critical Device, or the Critical Device has not
reported its state for a number of seconds (as set by the "timeout" option of the Critical
Device).
Log: "[DEVICE] on member [ID] ([IP]) is initializing ([REASON])."
Description: Indicates that the Critical Device has registered itself with the Critical Device
mechanism, but has not yet determined its state.
Interface Logs
Log: "interface [INTERFACE NAME] of member [ID] ([IP]) was added."
Description: Indicates that a new interface was registered with the Cluster Member (meaning that
Cluster Control Protocol (CCP) packets arrive on this interface). Usually, this message is the
result of activating an interface (such as issuing the "ifconfig up" command). The interface is
now included in the ClusterXL reports (in the output of the applicable CLI commands). Note that
the interface may still be reported as "Disconnected", in case it was configured as such for
ClusterXL.
Log: "interface [INTERFACE NAME] of member [ID] ([IP]) was removed."
Description: Indicates that an interface was detached from the Cluster Member, and is therefore
no longer monitored by ClusterXL.
Reason Strings
Log Description
Log: "member [ID] ([IP]) reports more interfaces up."
Description: This text can be included in a Critical Device log message describing the reasons
for a problem report: another Cluster Member has more interfaces reported to be working than the
local Cluster Member does. Usually, this means that the local Cluster Member has a faulty
interface, and that its peer Cluster Member can do a better job as a Cluster Member. The local
Cluster Member changes its state to "Down", leaving the peer Cluster Member specified in the
message to handle traffic.
Log: "member [ID] ([IP]) has more interfaces - check your disconnected interfaces configuration
in the <discntd.if file|registry>"
Description: This message is issued when Cluster Members in the same cluster have a different
number of interfaces. A Cluster Member with fewer interfaces than the maximal number in the
cluster (the reporting Cluster Member) may not be working properly, as it is missing an
interface required to operate against a cluster IP address, or a synchronization network. If
some of the interfaces on the other Cluster Member are redundant, and should not be monitored by
ClusterXL, they should be explicitly designated as "Non-Monitored". See "Defining Non-Monitored
Interfaces" on page 150.
Log: "[NUMBER] interfaces required, only [NUMBER] up."
Description: ClusterXL has detected a problem with one or more of the monitored interfaces. This
does not necessarily mean that the Cluster Member changes its state to "Down", as the other
Cluster Members may have fewer operational interfaces. In such a condition, the Cluster Member
with the largest number of operational interfaces remains up, while the others go down.
4. Run:
threshold_config
For more information, see the R81.10 CLI Reference Guide > Chapter Security Management Server
Commands > Section threshold_config.
5. From the Threshold Engine Configuration Options menu, select (9) Configure Thresholds.
6. From the Threshold Categories menu, select (2) High Availability.
7. Select the applicable traps.
8. Select and configure these actions for the specified trap:
n Enable/Disable Threshold
n Set Severity
n Set Repetitions
n Configure Alert Destinations
9. From the Threshold Engine Configuration Options menu, select (7) Configure alert destinations.
10. Configure your alert destinations.
11. From the Threshold Engine Configuration Options menu, select (3) Save policy.
You can optionally save the policy to a file.
12. In SmartConsole, install the Access Control Policy on this cluster object.
Note - You can download the most recent Check Point MIB files from sk90470.
Change the state of the Cluster Member to DOWN:
Command in Gaia Clish: set cluster member admin down
Command in the Expert mode: clusterXL_admin down
Notes: Does not disable Delta Sync.
Change the state of the Cluster Member to UP:
Command in Gaia Clish: set cluster member admin up
Command in the Expert mode: clusterXL_admin up
Notes: Does not initiate Full Sync.
See:
n "Initiating Manual Cluster Failover" on page 195
n "The clusterXL_admin Script" on page 280
See:
n "Registering a Critical Device" on page 187
n "Reporting the State of a Critical Device" on page 190
n "Unregistering a Critical Device" on page 189
Notes:
n In Load Sharing mode, the cluster distributes the traffic load between the
remaining Active members.
n In High Availability mode, the cluster fails over to a Standby Cluster Member with
the highest priority.
dbset routed:instance:default:traceoptions:traceoptions:Cluster
Part of the message Description
Y Shows the ID or the NAME of the local Cluster Member that generated this log
message.
See "Configuring the Cluster Member ID Mode in Local Logs" on page 186.
Syntax
Notes:
n In Gaia Clish:
Enter set cluster<ESC><ESC> to see all the available commands.
n In Expert mode:
Run the cphaconf command to see all the available commands.
You can run the cphaconf commands only from the Expert mode.
n Syntax legend:
1. Curly brackets or braces { }:
Enclose a list of available commands or parameters, separated by the
vertical bar |, from which the user can enter only one.
2. Angle brackets < >:
Enclose a variable - a supported value the user needs to specify explicitly.
3. Square brackets or brackets [ ]:
Enclose an optional command or parameter, which the user can also enter.
n You can include these commands in scripts to run them automatically.
The meaning of each command is explained in the next sections.
Table: ClusterXL Configuration Commands

n Configure how to show the Cluster Member in local ClusterXL logs - by its Member ID or its
Member Name (see "Configuring the Cluster Member ID Mode in Local Logs" on page 186):
Gaia Clish: set cluster member idmode {id | name}
Expert mode: cphaconf mem_id_mode {id | name}
n Configure the Cluster Control Protocol (CCP) Encryption on the Cluster Member (see
"Configuring the Cluster Control Protocol (CCP) Settings" on page 194):
Gaia Clish: set cluster member ccpenc {off | on}
Expert mode: cphaconf ccp_encrypt {off | on}, cphaconf ccp_encrypt_key <Key String>
n Configure the Cluster Forwarding Layer on the Cluster Member (controls the forwarding of
traffic between Cluster Members). Note - For Check Point use only:
Gaia Clish: set cluster member forwarding {off | on}
Expert mode: cphaconf forward {off | on}
n Initiate manual cluster failover (see "Initiating Manual Cluster Failover" on page 195):
Gaia Clish: set cluster member admin {down | up}
Expert mode: clusterXL_admin {down | up}
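For example, to show the Cluster Member in local logs by its Member Name (either form from the
table above):
Member1> set cluster member idmode name
[Expert@Member1:0]# cphaconf mem_id_mode name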
Syntax
Notes:
n In Gaia Clish:
Enter show cluster<ESC><ESC> to see all the available commands.
n In Expert mode:
Run the cphaprob command to see all the available commands.
You can run the cphaprob commands from Gaia Clish as well.
n Syntax legend:
1. Curly brackets or braces { }:
Enclose a list of available commands or parameters, separated by the
vertical bar |, from which the user can enter only one.
2. Angle brackets < >:
Enclose a variable - a supported value the user needs to specify explicitly.
3. Square brackets or brackets [ ]:
Enclose an optional command or parameter, which the user can also enter.
n You can include these commands in scripts to run them automatically.
The meaning of each command is explained in the next sections.
Table: ClusterXL Monitoring Commands

n Show states of Cluster Members and their names (see "Viewing Cluster State" on page 209):
Gaia Clish: show cluster state
Expert mode: cphaprob [-vs <VSID>] state
n Show Critical Devices (Pnotes) and their states on the Cluster Member (see "Viewing Critical
Devices" on page 213):
Gaia Clish: show cluster members pnotes {all | problem}
Expert mode: cphaprob [-l] [-ia] [-e] list
n Show cluster interfaces on the cluster member (see "Viewing Cluster Interfaces" on page 221):
Gaia Clish: show cluster members interfaces {all | secured | virtual | vlans}
Expert mode: cphaprob [-vs all] [-a] [-m] if
n Show cluster bond configuration on the Cluster Member (see "Viewing Bond Interfaces" on
page 225):
Gaia Clish: show cluster bond {all | name <bond_name>}
Expert mode: cphaprob show_bond [<bond_name>]
n Show (and reset) cluster failover statistics on the Cluster Member (see "Viewing Cluster
Failover Statistics" on page 229):
Gaia Clish: show cluster failover [reset {count | history}]
Expert mode: cphaprob [-reset {-c | -h}] [-l <count>] show_failover
n Show information about the software version (including hotfixes) on the local Cluster Member
and its matches/mismatches with other Cluster Members (see "Viewing Software Versions on
Cluster Members" on page 231):
Gaia Clish: show cluster release
Expert mode: cphaprob release
n Show Delta Sync statistics on the Cluster Member (see "Viewing Delta Synchronization" on
page 232):
Gaia Clish: show cluster statistics sync [reset]
Expert mode: cphaprob [-reset] syncstat
n Show Delta Sync statistics for the Connections table on the Cluster Member (see "Viewing
Cluster Delta Sync Statistics for Connections Table" on page 239):
Gaia Clish: show cluster statistics transport [reset]
Expert mode: cphaprob [-reset] ldstat
n Show the Cluster Control Protocol (CCP) mode on the Cluster Member (see "Viewing Cluster
Interfaces" on page 221):
Gaia Clish: show cluster members interfaces virtual
Expert mode: cphaprob [-vs all] -a if
n Show the IGMP membership of the Cluster Member (see "Viewing IGMP Status" on page 238):
Gaia Clish: show cluster members igmp
Expert mode: cphaprob igmp
n Show cluster unique IP's table on the Cluster Member (see "Viewing Cluster IP Addresses" on
page 240):
Gaia Clish: show cluster members ips
Expert mode: cphaprob tablestat
Gaia Clish: show cluster members monitored
Expert mode: cphaprob -m tablestat
n Show the Cluster Member ID Mode in local logs - by Member ID (default) or Member Name (see
"Viewing the Cluster Member ID Mode in Local Logs" on page 241):
Gaia Clish: show cluster members idmode
Expert mode: cphaprob names
n Show interfaces, which the RouteD monitors on the Cluster Member when you configure OSPF (see
"Viewing Interfaces Monitored by RouteD" on page 242):
Gaia Clish: show ospf interfaces [detailed]
Expert mode: cphaprob routedifcs
n Show roles of RouteD daemon on Cluster Members (see "Viewing Roles of RouteD Daemon on Cluster
Members" on page 243):
Gaia Clish: show cluster roles
Expert mode: cphaprob roles
n Show the Cluster Control Protocol (CCP) mode (see "Viewing the Cluster Control Protocol (CCP)
Settings" on page 246):
Gaia Clish: show cluster members interfaces virtual
Expert mode: cphaprob -a if
n Show the Cluster Control Protocol (CCP) Encryption settings (see "Viewing the Cluster Control
Protocol (CCP) Settings" on page 246):
Gaia Clish: show cluster members ccpenc
Expert mode: cphaprob ccp_encrypt

The full "show cluster" command tree in Gaia Clish:

show cluster
      bond
            all
            name <Name of Bond>
      failover
      members
            ccpenc
            idmode
            igmp
            interfaces
                  all
                  secured
                  virtual
                  vlans
            ips
            monitored
            mvc
            pnotes
                  all
                  problem
      release
      roles
      state
      statistics
            sync [reset]
            transport [reset]
cpconfig
Description
This command starts the Check Point Configuration Tool.
This tool configures specific settings for the installed Check Point products.
Important:
n In a Cluster, you must configure all the Cluster Members in the same way.
n On Scalable Platforms (Maestro and Chassis), you must connect to the
applicable Security Group.
Syntax on a Security Gateway / Cluster Member in Gaia Clish or the Expert mode
cpconfig
Syntax on a Scalable Platform Security Group in Gaia gClish or the Expert mode
cpconfig
Menu Options
Note - The options shown depend on the configuration and installed products.
Licenses and contracts
Manages Check Point licenses and contracts on this Security Gateway or Cluster Member.
PKCS#11 Token
Register a cryptographic token, for use by Gaia Operating System. See details of the token, and
test its functionality.
Random Pool
Configures the RSA keys, to be used by Gaia Operating System.
Secure Internal Communication
Manages SIC on the Security Gateway or Cluster Member.
This change requires a restart of Check Point services on the Security Gateway or Cluster
Member.
For more information, see:
n The R81.10 Security Management Administration Guide.
n sk65764: How to reset SIC.
Enable cluster membership for this gateway
Enables the cluster membership on the Security Gateway.
This change requires a reboot of the Security Gateway.
For more information, see the R81.10 Installation and Upgrade Guide.
Note - This section does not apply to Scalable Platforms (Maestro and Chassis).
Disable cluster membership for this gateway
Disables the cluster membership on the Security Gateway.
This change requires a reboot of the Security Gateway.
For more information, see the R81.10 Installation and Upgrade Guide.
Note - This section does not apply to Scalable Platforms (Maestro and Chassis).
Enable Check Point Per Virtual System State
Enables Virtual System Load Sharing on the VSX Cluster Member.
For more information, see the R81.10 VSX Administration Guide.
Note - This section does not apply to Scalable Platforms (Maestro and Chassis).
Disable Check Point Per Virtual System State
Disables Virtual System Load Sharing on the VSX Cluster Member.
For more information, see the R81.10 VSX Administration Guide.
Note - This section does not apply to Scalable Platforms (Maestro and Chassis).
Enable Check Point ClusterXL for Bridge Active/Standby
Enables Check Point ClusterXL for Bridge mode.
This change requires a reboot of the Cluster Member.
For more information, see the R81.10 Installation and Upgrade Guide.
Note - This section does not apply to Scalable Platforms (Maestro and Chassis).
Disable Check Point ClusterXL for Bridge Active/Standby
Disables Check Point ClusterXL for Bridge mode.
This change requires a reboot of the Cluster Member.
For more information, see the R81.10 Installation and Upgrade Guide.
Note - This section does not apply to Scalable Platforms (Maestro and Chassis).
Check Point CoreXL
Manages CoreXL on the Security Gateway / Cluster Member / Scalable Platform Security Group.
After all changes in CoreXL configuration, you must reboot the Security Gateway / Cluster Member
/ Security Group.
For more information, see the R81.10 Performance Tuning Administration Guide.
Automatic start of Check Point Products
Shows and controls which of the installed Check Point products start automatically during boot.
[Expert@MySingleGW:0]# cpconfig
This program will let you re-configure
your Check Point products configuration.
Configuration Options:
----------------------
(1) Licenses and contracts
(2) SNMP Extension
(3) PKCS#11 Token
(4) Random Pool
(5) Secure Internal Communication
(6) Enable cluster membership for this gateway
(7) Check Point CoreXL
(8) Automatic start of Check Point Products
(9) Exit
[Expert@MyClusterMember:0]# cpconfig
This program will let you re-configure
your Check Point products configuration.
Configuration Options:
----------------------
(1) Licenses and contracts
(2) SNMP Extension
(3) PKCS#11 Token
(4) Random Pool
(5) Secure Internal Communication
(6) Disable cluster membership for this gateway
(7) Enable Check Point Per Virtual System State
(8) Enable Check Point ClusterXL for Bridge Active/Standby
(9) Check Point CoreXL
(10) Automatic start of Check Point Products
(11) Exit
cphastart
Description
Starts the cluster configuration on a Cluster Member after it was stopped with the "cphastop" on page 273
command.
Note - This command does not initiate a Full Synchronization on the Cluster Member.
Syntax
cphastart
[-h]
[-d]
Parameters
Parameter Description
-h Shows the built-in help.
-d Runs the command in debug mode.
Best Practice - If you use this parameter, then redirect the output to a file, or
use the script command to save the entire CLI session.
Refer to:
n These lines in the output file:
prepare_command_args: -D ... start
/opt/CPsuite-R81.10/fw1/bin/cphaconf clear-secured
/opt/CPsuite-R81.10/fw1/bin/cphaconf -D ...(truncated here
for brevity)... start
n The $FWDIR/log/cphastart.elg log file.
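For example (a sketch following the Best Practice above; the output file path is arbitrary):
[Expert@Member1:0]# script /var/log/cphastart_debug.txt
[Expert@Member1:0]# cphastart -d
[Expert@Member1:0]# exit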
cphastop
Description
Stops the cluster software on a Cluster Member.
Notes:
n This command stops the Cluster Member from passing traffic.
n This command stops the State Synchronization between this Cluster Member and
its peer Cluster Members.
n After you run this command, you can still open connections directly to this Cluster
Member.
n To start the cluster software, run the "cphastart" on page 272 command.
Syntax
cphastop
cp_conf fullha
Important - This command does not apply to Scalable Platforms (Maestro and Chassis).
Description
Manages the state of the Full High Availability Cluster:
n Enables the Full High Availability Cluster
n Disables the Full High Availability Cluster
n Deletes the Full High Availability peer
n Shows the Full High Availability state
Important - To configure a Full High Availability cluster, follow the R81.10 Installation
and Upgrade Guide.
Syntax
cp_conf fullha
enable
del_peer
disable
state
Parameters
Parameter Description
del_peer Deletes the Full High Availability peer from the configuration.
Example
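A minimal invocation sketch (assuming the "state" keyword from the syntax above; the output
depends on the current Full High Availability configuration):
[Expert@MyGW:0]# cp_conf fullha state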
cp_conf ha
Important - This command does not apply to Scalable Platforms (Maestro and Chassis).
Description
Enables or disables cluster membership on this Security Gateway.
Important - This command is for Check Point use only. To configure cluster
membership, you must use the "cpconfig" command.
Syntax
Parameters
Parameter Description
norestart Optional: Specifies to apply the configuration change without the restart of Check
Point services. The new configuration takes effect only after reboot.
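Assuming the syntax implied by the parameter table (cp_conf ha {enable | disable} norestart),
the two examples below likely run these commands:
[Expert@MyGW:0]# cp_conf ha enable norestart
[Expert@MyGW:0]# cp_conf ha disable norestart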
Example 1 - Enable the cluster membership without restart of Check Point services
[Expert@MyGW:0]#
Example 2 - Disable the cluster membership without restart of Check Point services
[Expert@MyGW:0]#
fw hastat
Description
Shows information about Check Point computers in High Availability configuration and their states.
Note - This command is outdated. On cluster members, run the Gaia Clish command
"show cluster state", or the Expert mode command "cphaprob state". See
"Viewing Cluster State" on page 209.
Syntax
Parameters
Parameter Description
[Expert@Member1:0]# fw hastat
HOST NUMBER HIGH AVAILABILITY STATE MACHINE STATUS
192.168.3.52 1 active OK
[Expert@Member1:0]#
fwboot ha_conf
Description
Configures the cluster mechanism during boot.
Notes:
n You must run this command from the Expert mode.
n To install a cluster, see the R81.10 Installation and Upgrade
Guide.
Syntax
ClusterXL Scripts
You can use special scripts to change the state of Cluster Members.
$FWDIR/bin/clusterXL_admin
Script Workflow
This shell script does one of these:
n Registers a Critical Device called "admin_down" and reports the state of that Critical Device as
"problem".
This gracefully changes the state of the Cluster Member to "DOWN".
n Reports the state of the registered Critical Device "admin_down" as "ok".
This gracefully changes the state of the Cluster Member to "UP".
Then, the script unregisters the Critical Device "admin_down".
For more information, see sk55081.
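Typical usage (the "-p" flag makes the change survive reboot, as the script's own message
notes):
[Expert@Member1:0]# clusterXL_admin down -p
[Expert@Member1:0]# clusterXL_admin up -p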
Example
#! /bin/csh -f
#
# The script will cause the machine to get into down state, thus the member will not filter packets.
# It will supply a simple way to initiate a failover by registering a new device in problem state when
# a failover is required and will unregister the device when wanting to return to normal operation.
# USAGE:
# clusterXL_admin <up|down>
# Inform the user that the command can run with persistent mode.
if ("$PERSISTENT" != "-p") then
echo "This command does not survive reboot. To make the change permanent, please run 'set cluster
member admin down/up permanent' in clish or add '-p' at the end of the command in expert mode"
endif
if ($status) then
set state = $stateArr[5]
else
set state = $stateArr[4]
endif
$FWDIR/bin/clusterXL_monitor_ips
Script Workflow
1. Registers a Critical Device called "host_monitor" with the status "ok".
2. Starts to send pings to the list of predefined IP addresses in the $FWDIR/conf/cpha_hosts file.
3. While the script receives responses to its pings, it does not change the status of that Critical Device.
4. If the script does not receive a response to even one ping, it reports the state of that Critical Device as
"problem".
This gracefully changes the state of the Cluster Member to DOWN.
If the script receives responses to its pings again, it changes the status of that Critical Device to "ok"
again.
For more information, see sk35780.
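A usage sketch (the IP address and the 5-second interval are arbitrary; the second argument "1"
runs the script silently, per the script's own usage text):
[Expert@Member1:0]# echo "192.168.3.1" >> $FWDIR/conf/cpha_hosts
[Expert@Member1:0]# $FWDIR/bin/clusterXL_monitor_ips 5 1 &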
Example
#!/bin/sh
#
# The script tries to ping the hosts written in the file $FWDIR/conf/cpha_hosts. The names (must
# be resolvable) or the IPs of the hosts must be written in separate lines.
# The file must not contain anything else.
# We ping the given hosts every number of seconds given as parameter to the script.
# USAGE:
# cpha_monitor_ips X silent
# where X is the number of seconds between loops over the IPs.
# if silent is set to 1, no messages will appear on the console
#
# We initially register a pnote named "host_monitor" in the problem notification mechanism
# when we detect that a host is not responding we report the pnote to be in "problem" state.
# when ping succeeds again - we report the pnote is OK.
silent=0
fi
$FWDIR/bin/cphaconf set_pnote -d host_monitor -s ok report
fi
if [ "$silent" = 0 ]
then
echo "sleeping"
fi
sleep $1
echo "sleep $1"
done
$FWDIR/bin/clusterXL_monitor_process
Script Workflow
1. Registers Critical Devices (with the status "ok") called as the names of the processes you specified in
the $FWDIR/conf/cpha_proc_list file.
2. While the script detects that the specified process runs, it does not change the status of the
corresponding Critical Device.
3. If the script detects that the specified process does not run anymore, it reports the state
of the corresponding Critical Device as "problem".
This gracefully changes the state of the Cluster Member to "DOWN".
If the script detects that the specified process runs again, it changes the status of the corresponding
Critical Device to "ok" again.
For more information, see sk92904.
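A usage sketch (the process name "cpd" and the 10-second interval are arbitrary; the second
argument "1" runs the script silently):
[Expert@Member1:0]# echo "cpd" >> $FWDIR/conf/cpha_proc_list
[Expert@Member1:0]# $FWDIR/bin/clusterXL_monitor_process 10 1 &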
Example
#!/bin/sh
#
# This script monitors the existence of processes in the system. The process names should be
# written in the $FWDIR/conf/cpha_proc_list file, one on every line.
#
# USAGE :
# cpha_monitor_process X silent
# where X is the number of seconds between process probings.
# if silent is set to 1, no messages will appear on the console.
#
#
# We initially register a pnote for each of the monitored processes
# (process name must be up to 15 characters) in the problem notification mechanism.
# when we detect that a process is missing we report the pnote to be in "problem" state.
# when the process is up again - we report the pnote is OK.
if [ "$2" -le 1 ]
then
silent=$2
else
silent=0
fi
if [ -f $FWDIR/conf/cpha_proc_list ]
then
procfile=$FWDIR/conf/cpha_proc_list
else
echo "No process file in $FWDIR/conf/cpha_proc_list "
exit 0
fi
arch=`uname -s`
while [ 1 ]
do
result=1
status=$?
if [ $status = 0 ]
then
if [ $silent = 0 ]
then
echo " $process is alive"
fi
# echo "3, $FWDIR/bin/cphaconf set_pnote -d $process -s ok report"
$FWDIR/bin/cphaconf set_pnote -d $process -s ok report
else
if [ $silent = 0 ]
then
echo " $process is down"
fi
done
if [ $result = 0 ]
then
if [ $silent = 0 ]
then
echo " One of the monitored processes is down!"
fi
else
if [ $silent = 0 ]
then
echo " All monitored processes are up "
fi
fi
if [ "$silent" = 0 ]
then
echo "sleeping"
fi
sleep $1
done
List of APIs
API Category API Description
Asynchronous - add simple-cluster - Creates a new simple cluster object from scratch
Synchronous - show simple-cluster - Shows an existing simple cluster object specified by its
Name or UID
API Examples
Example 1 - Adding a simple cluster object
API command:
Use this API to add a simple cluster object.
add simple-cluster
Once the API command finishes, and the session is published, a new cluster object appears in
SmartConsole.
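One hedged way to call this API is over the Management API web service (assuming the JSON body
below is saved to a hypothetical file cluster1.json, and that a session ID was obtained with the
login API):
curl -k -X POST https://<Management Server>/web_api/add-simple-cluster \
  -H "Content-Type: application/json" \
  -H "X-chkp-sid: <session ID returned by login>" \
  -d @cluster1.json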
Prerequisites:
1. All Cluster Members must already be installed.
2. The applicable interfaces on each Cluster Member must already be configured.
Example description:
n A simple ClusterXL in High Availability mode called cluster1
n With two cluster members called member1 and member2
n With three interfaces: eth0 (external), eth1 (sync), and eth2 (internal)
n Only the Firewall Software Blade is enabled (the IPsec VPN blade is disabled)
n Cluster software version is R80.20
API example:
Important - In the API command you must use the same One-Time Password you
used on Cluster Members during the First Time Configuration Wizard.
{
  "name" : "cluster1",
  "color" : "yellow",
  "version" : "R80.20",
  "ip-address" : "172.23.5.254",
  "os-name" : "Gaia",
  "cluster-mode" : "cluster-xl-ha",
  "firewall" : true,
  "vpn" : false,
  "interfaces" : [
    {
      "name" : "eth0",
      "ip-address" : "172.23.5.254",
      "network-mask" : "255.255.255.0",
      "interface-type" : "cluster",
      "topology" : "EXTERNAL",
      "anti-spoofing" : "true"
    },
    {
      "name" : "eth1",
      "interface-type" : "sync",
      "topology" : "INTERNAL",
      "topology-settings" : {
        "ip-address-behind-this-interface" : "network defined by the interface ip and net mask",
        "interface-leads-to-dmz" : false
      }
    },
    {
      "name" : "eth2",
      "ip-address" : "192.168.1.254",
      "network-mask" : "255.255.255.0",
      "interface-type" : "cluster",
      "topology" : "INTERNAL",
      "anti-spoofing" : "true",
      "topology-settings" : {
        "ip-address-behind-this-interface" : "network defined by the interface ip and net mask",
        "interface-leads-to-dmz" : false
      }
    }
  ],
  "members" : [
    {
      "name" : "member1",
      "one-time-password" : "abcd",
      "ip-address" : "172.23.5.1",
      "interfaces" : [
        {
          "name" : "eth0",
          "ip-address" : "172.23.5.1",
          "network-mask" : "255.255.255.0"
        },
        {
          "name" : "eth1",
          "ip-address" : "1.1.1.1",
          "network-mask" : "255.255.255.0"
        },
        {
          "name" : "eth2",
          "ip-address" : "192.168.1.1",
          "network-mask" : "255.255.255.0"
        }
      ]
    },
    {
      "name" : "member2",
      "one-time-password" : "abcd",
      "ip-address" : "172.23.5.2",
      "interfaces" : [
        {
          "name" : "eth0",
          "ip-address" : "172.23.5.2",
          "network-mask" : "255.255.255.0"
        },
        {
          "name" : "eth1",
          "ip-address" : "1.1.1.2",
          "network-mask" : "255.255.255.0"
        },
        {
          "name" : "eth2",
          "ip-address" : "192.168.1.2",
          "network-mask" : "255.255.255.0"
        }
      ]
    }
  ]
}
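One possible way to send this payload to the Management Server is through the Management Web API over HTTPS. The sketch below assumes a management IP address of 192.0.2.10, the payload saved in a file called payload.json, and the jq tool for extracting the session ID; the login, add-simple-cluster, and publish endpoints and the X-chkp-sid header are standard Management API conventions. The same pattern applies to the set, show, and delete calls shown below.
# Log in and keep the session ID
sid=$(curl -k -s -X POST https://192.0.2.10/web_api/login \
   -H "Content-Type: application/json" \
   -d '{"user" : "admin", "password" : "<your-password>"}' | jq -r .sid)
# Create the cluster object from the payload shown above
curl -k -s -X POST https://192.0.2.10/web_api/add-simple-cluster \
   -H "Content-Type: application/json" -H "X-chkp-sid: $sid" \
   -d @payload.json
# Publish the session so that the new cluster object appears in SmartConsole
curl -k -s -X POST https://192.0.2.10/web_api/publish \
   -H "Content-Type: application/json" -H "X-chkp-sid: $sid" -d '{}'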
API command:
Use this API to add (scale up) Cluster Members.
set simple-cluster
Example description:
Adding a Cluster Member called member3.
API example:
{
  "name" : "cluster1",
  "members" : {
    "add" : {
      "name" : "member3",
      "ipv4-address" : "172.23.5.3",
      "one-time-password" : "aaaa",
      "interfaces" : [
        {
          "name" : "eth0",
          "ip-address" : "172.23.5.3",
          "network-mask" : "255.255.255.0"
        },
        {
          "name" : "eth1",
          "ip-address" : "1.1.1.3",
          "network-mask" : "255.255.255.0"
        },
        {
          "name" : "eth2",
          "ip-address" : "192.168.1.3",
          "network-mask" : "255.255.255.0"
        }
      ]
    }
  }
}
API command:
Use this API to remove (scale down) Cluster Members.
set simple-cluster
Example description:
Removing a Cluster Member called member3.
API example:
{
  "name" : "cluster1",
  "members" : { "remove" : "member3" }
}
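For a small payload like this one, you can also run the call with the mgmt_cli tool on the Management Server, using the standard login / session-file workflow. This sketch assumes that mgmt_cli converts the dotted members.remove argument into the JSON body shown above, and uses placeholder credentials:
mgmt_cli login user "admin" password "<your-password>" > id.txt
mgmt_cli -s id.txt set simple-cluster name "cluster1" members.remove "member3"
mgmt_cli -s id.txt publish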
API command:
Use this API to add a cluster interface.
set simple-cluster
Example description:
Adding a cluster interface called eth3.
API example:
{
  "name" : "cluster1",
  "interfaces" : {
    "add" : {
      "name" : "eth3",
      "ip-address" : "10.10.10.254",
      "ipv4-mask-length" : "24",
      "interface-type" : "cluster",
      "topology" : "INTERNAL",
      "anti-spoofing" : "true"
    }
  },
  "members" : {
    "update" : [
      {
        "name" : "member1",
        "interfaces" : {
          "name" : "eth3",
          "ipv4-address" : "10.10.10.1",
          "ipv4-network-mask" : "255.255.255.0"
        }
      },
      {
        "name" : "member2",
        "interfaces" : {
          "name" : "eth3",
          "ipv4-address" : "10.10.10.2",
          "ipv4-network-mask" : "255.255.255.0"
        }
      }
    ]
  }
}
API command:
Use this API to remove a cluster interface.
set simple-cluster
Example description:
Removing a cluster interface called eth3.
API example:
{
  "name" : "cluster1",
  "interfaces" : { "remove" : "eth3" }
}
API command:
Use this API to change settings of a cluster interface.
set simple-cluster
Example description:
Changing the IP addresses on the cluster interface called eth2 from 192.168.1.x / 255.255.255.0 to
172.30.1.x / 255.255.255.0.
API example:
{
  "name" : "cluster1",
  "interfaces" : {
    "update" : {
      "name" : "eth2",
      "ip-address" : "172.30.1.254",
      "ipv4-mask-length" : "24",
      "interface-type" : "cluster",
      "topology" : "INTERNAL",
      "anti-spoofing" : "true"
    }
  },
  "members" : {
    "update" : [
      {
        "name" : "member1",
        "interfaces" : {
          "name" : "eth2",
          "ipv4-address" : "172.30.1.1",
          "ipv4-mask-length" : "24"
        }
      },
      {
        "name" : "member2",
        "interfaces" : {
          "name" : "eth2",
          "ipv4-address" : "172.30.1.2",
          "ipv4-mask-length" : "24"
        }
      }
    ]
  }
}
API command:
Use this API to reestablish SIC with Cluster Members.
set simple-cluster
Prerequisite:
SIC must already be reset on the Cluster Members.
API example:
Important - In the API command you must use the same One-Time Password you
used on Cluster Members during the SIC reset.
{
  "name" : "cluster1",
  "members" : {
    "update" : [
      {
        "name" : "member1",
        "one-time-password" : "aaaa"
      },
      {
        "name" : "member2",
        "one-time-password" : "aaaa"
      }
    ]
  }
}
API command:
Use this API to enable and disable Software Blades on Cluster Members.
set simple-cluster
Notes:
- To enable a Software Blade, set its value to true in the API command.
- To disable a Software Blade, set its value to false in the API command.
API example:
To enable all Software Blades supported by the Cluster API:
{
  "name" : "cluster1",
  "vpn" : true,
  "application-control" : true,
  "url-filtering" : true,
  "ips" : true,
  "content-awareness" : true,
  "anti-bot" : true,
  "anti-virus" : true,
  "threat-emulation" : true
}
To disable all Software Blades supported by the Cluster API:
{
  "name" : "cluster1",
  "vpn" : false,
  "application-control" : false,
  "url-filtering" : false,
  "ips" : false,
  "content-awareness" : false,
  "anti-bot" : false,
  "anti-virus" : false,
  "threat-emulation" : false
}
API command:
Use this API to view a specific existing cluster object.
show simple-cluster
API example:
{
  "limit-interfaces" : "10",
  "name" : "cluster1"
}
Example output:
{
  "uid": "e0ce560b-8a0a-4468-baa9-5f8eb2658b96",
  "name": "cluster1",
  "type": "simple-cluster",
  "domain": {
    "uid": "41e821a0-3720-11e3-aa6e-0800200c9fde",
    "name": "SMC User",
    "domain-type": "domain"
  },
  "meta-info": {
    "lock": "unlocked",
    "validation-state": "ok",
    "last-modify-time": {
      "posix": 1567417185885,
      "iso-8601": "2019-09-02T12:39+0300"
    },
    "last-modifier": "aa",
    "creation-time": {
      "posix": 1567417140278,
      "iso-8601": "2019-09-02T12:39+0300"
    },
    "creator": "aa"
  },
  "tags": [],
  "read-only": false,
  "comments": "",
  "color": "yellow",
  "icon": "NetworkObjects/cluster",
  "groups": [],
  "ipv4-address": "172.23.5.254",
  "dynamic-ip": false,
  "version": "R80.20",
  "os-name": "Gaia",
  "hardware": "Open server",
  "firewall": true,
  "firewall-settings": {
    "auto-maximum-limit-for-concurrent-connections": true,
    "maximum-limit-for-concurrent-connections": 25000,
    "auto-calculate-connections-hash-table-size-and-memory-pool": true,
    "connections-hash-size": 131072,
    "memory-pool-size": 6,
    "maximum-memory-pool-size": 30
  },
  "vpn": false,
  "application-control": false,
  "url-filtering": false,
  "content-awareness": false,
  "ips": false,
  "anti-bot": false,
  "anti-virus": false,
  "threat-emulation": false,
  "save-logs-locally": false,
  "send-alerts-to-server": [
    "harry-main-take-96"
  ],
  "send-logs-to-server": [
    "harry-main-take-96"
  ],
  "send-logs-to-backup-server": [],
  "logs-settings": {
    "rotate-log-by-file-size": false,
    "rotate-log-file-size-threshold": 1000,
    "rotate-log-on-schedule": false,
    "alert-when-free-disk-space-below-metrics": "mbytes",
    "alert-when-free-disk-space-below": true,
    "alert-when-free-disk-space-below-threshold": 3000,
    "alert-when-free-disk-space-below-type": "popup alert",
    "delete-when-free-disk-space-below-metrics": "mbytes",
    "delete-when-free-disk-space-below": true,
    "delete-when-free-disk-space-below-threshold": 5000,
    "before-delete-keep-logs-from-the-last-days": false,
    "before-delete-keep-logs-from-the-last-days-threshold": 0,
    "before-delete-run-script": false,
    "before-delete-run-script-command": "",
    "stop-logging-when-free-disk-space-below-metrics": "mbytes",
    "stop-logging-when-free-disk-space-below": true,
    "stop-logging-when-free-disk-space-below-threshold": 100,
    "reject-connections-when-free-disk-space-below-threshold": false,
    "reserve-for-packet-capture-metrics": "mbytes",
    "reserve-for-packet-capture-threshold": 500,
    "delete-index-files-when-index-size-above-metrics": "mbytes",
    "delete-index-files-when-index-size-above": false,
    "delete-index-files-when-index-size-above-threshold": 100000,
    "delete-index-files-older-than-days": false,
    "delete-index-files-older-than-days-threshold": 14,
    "forward-logs-to-log-server": false,
    "perform-log-rotate-before-log-forwarding": false,
    "update-account-log-every": 3600,
    "detect-new-citrix-ica-application-names": false,
    "turn-on-qos-logging": true
  },
  "interfaces": {
    "total": 3,
    "from": 1,
    "to": 3,
    "objects": [
      {
        "name": "eth0",
        "ipv4-address": "172.23.5.254",
        "ipv4-network-mask": "255.255.255.0",
        "ipv4-mask-length": 24,
        "ipv6-address": "",
        "topology": "external",
        "anti-spoofing": true,
        "anti-spoofing-settings": {
          "action": "prevent"
        },
        "security-zone": false,
        "comments": "",
        "color": "black",
        "icon": "NetworkObjects/network",
        "interface-type": "cluster"
      },
      {
        "name": "eth1",
        "ipv4-address": "1.1.1.0",
        "ipv4-network-mask": "255.255.255.0",
        "ipv4-mask-length": 24,
        "ipv6-address": "",
        "topology": "internal",
        "topology-settings": {
          "ip-address-behind-this-interface": "network defined by the interface ip and net mask"
        },
        "anti-spoofing": true,
        "anti-spoofing-settings": {
          "action": "prevent"
        },
        "security-zone": false,
        "comments": "",
        "color": "black",
        "icon": "NetworkObjects/network",
        "interface-type": "cluster"
      },
      {
        "name": "eth2",
        "ipv4-address": "192.168.1.254",
        "ipv4-network-mask": "255.255.255.0",
        "ipv4-mask-length": 24,
        "ipv6-address": "",
        "topology": "internal",
        "topology-settings": {
          "ip-address-behind-this-interface": "network defined by the interface ip and net mask"
        },
        "anti-spoofing": true,
        "anti-spoofing-settings": {
          "action": "prevent"
        },
        "security-zone": false,
        "comments": "",
        "color": "black",
        "icon": "NetworkObjects/network",
        "interface-type": "cluster"
      }
    ]
  },
  "cluster-mode": "cluster-xl-ha",
  "cluster-members": [
    {
      "name": "member1",
      "sic-state": "initialized",
      "sic-message": "Initialized but trust not established",
      "ip-address": "172.23.5.1",
      "interfaces": [
        {
          "name": "eth0",
          "ipv4-address": "172.23.5.1",
          "ipv4-network-mask": "255.255.255.0",
          "ipv4-mask-length": 24,
          "ipv6-address": "",
          "ipv6-network-mask": "::",
          "ipv6-mask-length": 0
        },
        {
          "name": "eth1",
          "ipv4-address": "1.1.1.1",
          "ipv4-network-mask": "255.255.255.0",
          "ipv4-mask-length": 24,
          "ipv6-address": "",
          "ipv6-network-mask": "::",
          "ipv6-mask-length": 0
        },
        {
          "name": "eth2",
          "ipv4-address": "192.168.1.1",
          "ipv4-network-mask": "255.255.255.0",
          "ipv4-mask-length": 24,
          "ipv6-address": "",
          "ipv6-network-mask": "::",
          "ipv6-mask-length": 0
        }
      ]
    },
    {
      "name": "member2",
      "sic-state": "initialized",
      "sic-message": "Initialized but trust not established",
      "ip-address": "172.23.5.2",
      "interfaces": [
        {
          "name": "eth0",
          "ipv4-address": "172.23.5.2",
          "ipv4-network-mask": "255.255.255.0",
          "ipv4-mask-length": 24,
          "ipv6-address": "",
          "ipv6-network-mask": "::",
          "ipv6-mask-length": 0
        },
        {
          "name": "eth1",
          "ipv4-address": "1.1.1.2",
          "ipv4-network-mask": "255.255.255.0",
          "ipv4-mask-length": 24,
          "ipv6-address": "",
          "ipv6-network-mask": "::",
          "ipv6-mask-length": 0
        },
        {
          "name": "eth2",
          "ipv4-address": "192.168.1.2",
          "ipv4-network-mask": "255.255.255.0",
          "ipv4-mask-length": 24,
          "ipv6-address": "",
          "ipv6-network-mask": "::",
          "ipv6-mask-length": 0
        }
      ]
    }
  ]
}
API command:
Use this API to view all existing cluster objects.
show simple-clusters
API command:
Use this API to delete a specific cluster object.
delete simple-cluster
API example:
{
  "name" : "cluster1"
}
Known Limitations
- These Cluster APIs support only a subset of cluster operations.
- These Cluster APIs support only basic configuration of Software Blades (similar to the "simple-gateway" APIs - see the Check Point Management API Reference).
- These Cluster APIs support only ClusterXL High Availability, ClusterXL Load Sharing, and CloudGuard OPSEC clusters.
- These Cluster APIs do not support the configuration of a Cluster Virtual IP address on a different subnet than the IP addresses of the Cluster Members. For such a configuration, use SmartConsole.
- These Cluster APIs do not support VRRP Clusters (either on Gaia OS or IPSO OS).
- These Cluster APIs support a limited subset of interface settings. To change interface settings such as Topology, Anti-Spoofing, and Security Zone, you must replace the interface.