Crescendo User Guide
Copyright 2010 by Crescendo Networks. All rights reserved worldwide. No part of this publication may be reproduced, modified, transmitted, transcribed, stored in a retrieval system, or translated into any human or computer language, in any form or by any means, electronic, mechanical, magnetic, chemical, manual, or otherwise, without the express written permission of Crescendo Networks, 6 Yoni Netanyahu Street, Or Yehuda 60376, Israel. Crescendo Networks provides this documentation without warranty in any form, either expressed or implied. Crescendo Networks may revise this document at any time without notice. This document may contain proprietary information and shall be respected as a proprietary document with permission for review and usage given only to the rightful owner of the equipment to which this document is associated. This document was designed, produced and published by Technical Publications, Crescendo Networks. Produced in U.S.A. May 6, 2010
Use of controls or adjustment or performance of procedures other than those specified herein may result in hazardous radiation exposure. CLASS 1 LASER PRODUCT: internal lasers comply with IEC 60825-1:1993 + A1:1997 + A2:2001 and EN 60825-1:1994 + A1:1996 + A2:2001. Equipment may operate in a maximum ambient temperature of 40°C.
FCC Warning
Modifications not expressly approved by the manufacturer could void the user's authority to operate the equipment under FCC Rules.
Acknowledgements
This product includes software developed by:
MoreThanIP. PLDA. Northwest Logic. Altera Corporation. Internet System Consortium under BSD license. Networks Associates Technology under BSD license. Squid Project under GNU GPL license. Jan Kneschke under BSD license. Daniel Stenberg <[email protected]> under MIT/X license. The OpenSSL Project under Dual License. Juho Santeri Paavolainen under LGPL license. David Gilbert under LGPL license.
This product includes Liquid Look and Feel software under LGPL license.
Table of Contents
Logging .... 45
Networking Commands .... 48
Interface Commands .... 48
Configuring Interface Speed/Duplex Settings for the CN-7000 .... 53
IP Routing .... 58
ARP .... 59
Access Control .... 59
Client-side TCP Commands .... 64
Client-side TCP Windows .... 64
Client-side TCP Inactivity Timers .... 65
Client-side MSS .... 66
FastTCP .... 67
Server-side TCP Commands .... 67
Server-side TCP Windows .... 68
User Configuration .... 69
System Commands .... 71
Configuration File Management .... 71
Loading Additional Configuration Files to a Running Config .... 72
File Transfer/Management .... 72
File Commands .... 74
Software Upgrade and Version Control .... 75
Configuring Cookies .... 100
Acceleration of Authenticated HTTP Sessions .... 101
Farm Configuration .... 103
Configuration Steps .... 103
Cluster Configuration .... 105
Cluster Configuration .... 105
Web Server Logging .... 108
Connection Profiles .... 108
Load Balancing Profiles .... 109
Persistency .... 114
Health Check Configuration .... 116
Elastic Resource Control .... 132
Server Inactivity Check .... 135
Real Servers .... 140
Configuring a Real Server .... 140
Device Configuration .... 142
Configuring Devices .... 142
Associating Real Servers with Devices .... 145
Vservices and Usage Control Example .... 163
Configuring a Usage Control Profile .... 164
Configuring a Vservice .... 165
Cipher Profile .... 200
Creating a Cipher Profile .... 200
Configuring an SSL Server Profile (Client-side SSL) .... 203
SSL Server Profile Configuration Outline .... 203
Creating an SSL Server Profile .... 204
Applying an SSL Server Profile to a Virtual Server .... 205
Configuring an SSL Client Profile (Server-side SSL) .... 206
SSL Client Profile Configuration Outline .... 206
Creating an SSL Client Profile .... 206
Applying an SSL Client Profile to a Cluster .... 207
Converting Keys, Certificates, and Chained Certificates .... 208
OpenSSL .... 209
Keys .... 209
Certificate .... 209
Converting Certificates and Keys Exported from Microsoft IIS .... 211
Chained Certificates .... 213
Chapter 1. Introduction to the AppBeat DC Platform
This chapter provides an introduction to the AppBeat DC, including a feature overview and implementation examples. Additionally, the Installation and Configuration Guidelines section on page 9 of this chapter provides a configuration framework that can be referenced throughout any stage of configuration.
Overview of the AppBeat DC. Hardware Technology. Hardware Platforms. TCP Offload and Delivery Optimization. Load Balancing. Compression. SSL Acceleration. Deployment Options. VRRPc Redundancy. Installation and Configuration Guidelines.
Hardware Technology
The AppBeat DC utilizes Crescendo Networks' proprietary hardware architecture. Designed to specifically address the requirements of application acceleration and infrastructure scalability, the Maestro Application Delivery Platform provides superior server acceleration and resource optimization. The FreeFlow architecture, utilizing Network Processors (NP) and Field Programmable Gate Arrays (FPGA), incorporates over 80 micro-engines, explicitly tasked with various application-specific processes. The implementation of task-specific hardware enables the AppBeat DC to utilize all functionality simultaneously without suffering any performance degradation. This concept of Feature Concurrency allows the AppBeat DC to operate at maximum capacity, regardless of the features or configuration being used. Crescendo Networks hardware demonstrates a unique and powerful approach to application acceleration.
Hardware Platforms
Three models of the AppBeat DC are available on the following platforms: CN-7710, CN-7740, and CN-7790.
CN-7710
10 or 4 SFP (Small Form-factor Pluggable) GbE (fiber) or 10x10/100/1000 ports (copper).
1 10/100 Ethernet (Out-of-band) Management Interface.
1 RS-232 RJ45 Serial Port.
2 SFP (Small Form-factor Pluggable) GbE (fiber) or 10x10/100/1000 (copper) logging and mirroring ports.
CN-7740
10 SFP (Small Form-factor Pluggable) GbE (fiber) or 10x10/100/1000 ports (copper).
1 10/100 Ethernet (Out-of-band) Management Interface.
1 RS-232 RJ45 Serial Port.
2 SFP (Small Form-factor Pluggable) GbE (fiber) or 10x10/100/1000 (copper) logging and mirroring ports.
CN-7790
2 XFP (10 Gigabit Small Form Factor Pluggable) ports.
10 SFP (Small Form-factor Pluggable) GbE (fiber) or 10x10/100/1000 ports (copper).
1 10/100 Ethernet (Out-of-band) Management Interface.
1 RS-232 RJ45 Serial Port.
2 SFP (Small Form-factor Pluggable) GbE (fiber) or 10x10/100/1000 (copper) logging and mirroring ports.
TCP Offload and Delivery Optimization
The unit waits until the entire request has arrived from the client before it decides to deliver it to the server. This is especially beneficial in situations where long client requests arrive over slow or problematic TCP connections. If the server were exposed to the weaknesses of these client-side TCP conditions, valuable resources would be tied up while it waited for the arrival of the complete request. By waiting for the entire request to arrive and then delivering it in whole to the server, SLT shields the server from client-side TCP conditions and allows it to minimize its processing time for each request. Normally, a unit performing Connection Consolidation would need to fully buffer an object en route from the server to the client before starting to transmit it to the client. However, at high capacity this would require massive amounts of memory, making the solution neither very scalable nor very cost-effective. SLT addresses this issue by using partial requests on the server side, causing the server to break up large objects into smaller ones. This is coupled with proper memory management, allowing high-performance consolidation to occur with a reasonable amount of memory and making the AppBeat DC both scalable and economical. This is completely transparent to the client, which never needs to know or worry about the way in which objects are fetched from the server by the AppBeat DC.
Response Optimization
One of the main objectives of SLT is to shield the server from weaknesses imposed by client connections that are subjected to WAN environments. These client connections experience packet loss, delay, and congestion, all of which would impact the server through increased CPU and memory utilization if it were exposed to them. By completely shielding the server from these issues, SLT allows the AppBeat DC to communicate with the servers in a highly optimized environment. The server is already dealing with fewer connections; and since those connections are managed by the AppBeat DC, the server can transmit its responses to the network at maximum throughput. Client requests are served as optimally as possible, allowing the server to quickly move on to the next request to be processed.
Load Balancing
The AppBeat DC provides a comprehensive load balancing feature set that allows it to efficiently distribute user requests across clusters of identical servers. Additionally, since the AppBeat DC is in control of the actual request flow to the servers, it can direct traffic to them based on real-time request load as well as other L7 switching criteria (URL, file name, hostname, browser language, etc.). All HTTP (L7) load-balancing functionality is fully and seamlessly integrated with all other optimization services provided by the highly scalable, multi-gigabit AppBeat DC platform. Additionally, because of its unique and powerful task-specific hardware architecture, all services can operate concurrently without any degradation in device performance.
The AppBeat DC also incorporates traditional Load Balancing for providing load balancing for non-HTTP TCP-based and UDP-based protocols. A load balancing license must be configured on the AppBeat DC to enable this feature. Contact your Crescendo Networks Reseller or Sales Associate for assistance with enabling this feature.
Compression
Incorporating the hardware-based Compression module further enhances server acceleration and resource optimization. The compression module, using industry standard and broadly supported compression methods (the gzip and deflate algorithms), enables a dramatic reduction in outbound bandwidth usage, while also significantly reducing end-user response times.
SSL Acceleration
The hardware-based SSL Acceleration module offloads a significant amount of processing from the servers while allowing secure applications to easily scale beyond what normal server platforms can provide. Because the AppBeat DC relieves the servers from handling these tasks, the servers can redirect their full resources to provide up to 10 times more processing performance.
Deployment Options
The AppBeat DC is a scalable, non-intrusive solution that is easy to integrate. The AppBeat DC provides flexible physical and logical configuration options to ensure seamless integration in different environments. The AppBeat DC can be configured to accelerate individual servers, in which each server is seen as a separate entity, or in a load balanced cluster, in which a group of identical servers is represented as a single Virtual Server (Virtual IP) to the outside world. Regardless of whether load balancing is used, all methods of server acceleration including TCP Offload, Compression, and SSL Acceleration can be used. This section describes the two options available for single server acceleration: virtual server and spoofed server modes.

Physical Configuration
The AppBeat DC is available in 4 Gbic (CN-7710), 10 Gbic (CN-7710, CN-7740) and 10 Gbic and 2 10G XFP (CN-7790) interface configurations. The AppBeat DC supports several physical configuration options enabling deployment in virtually any environment.
One-leg single interface deployment.
Routed multiple interface deployment.
VLAN tagged implementation utilizing 802.1q tagging on one or more physical interfaces.
The flexibility of the AppBeat DC enables the deployment methods described to be used in combination with one another.

Single Server Acceleration - Virtual Server Mode
In Virtual Server mode, a Virtual Server IP address and TCP port are configured on the AppBeat DC and then mapped to a single real server IP and port. Client traffic is destined to the Virtual Server on the AppBeat DC, which communicates with the real server directly. Traffic previously destined to the real server is directed to the Virtual Server address on the AppBeat DC instead. The following diagrams present examples of Virtual Server mode configured in either one or two interface configurations.
Single Server Acceleration - Spoofed Server Mode
In Spoofed Server mode, the AppBeat DC is deployed as a router between client traffic and the real server. The real server IP address and port are configured in the AppBeat DC as a spoofed address and port. Traffic destined to this address will be intercepted by the AppBeat DC, which communicates with the real server directly. All other traffic is routed normally.
Load Balanced Server Acceleration
When using Load Balancing, a cluster of identically configured servers will be configured with a single Virtual Server IP address.
VRRPc Redundancy
VRRPc is Crescendo Networks' proprietary redundancy protocol for Application Delivery Controllers. VRRPc can be implemented in one of two ways: hot/standby or load-sharing (i.e., active/active). Implemented in a similar fashion to VRRP, using virtual MAC and IP addresses, VRRPc extends the capabilities of traditional VRRP by enabling more intelligent redundancy decisions. VRRPc tests more than simple network availability between two redundant units as VRRP does. Instead, failover decisions are based on upstream network unit availability, as well as application server health and connectivity.
Using a single interface configuration provides the flexibility of installing the AppBeat DC without making any additional network changes. Using a two interface configuration requires the AppBeat DC to act as a router, meaning servers, routers, and other devices may require additional configuration (static or default routes, etc.).
Will single server acceleration or load balancing be used? If using single server acceleration, which method will be configured: virtual or spoofed?
IP Address Requirements
Prepare IP addresses and route information. The following is a list of components that usually require an IP address:
The Management Ethernet interface. Each data interface of the AppBeat DC.
Each Virtual Server (unless using a spoofed server, in which case an additional IP is not necessary). VRRPc. This IP address will be shared between the redundantly deployed units.
Most keys/certificates can be exported from existing servers and then imported into the AppBeat DC. Additionally, the certificate must have the text prepended before the BEGIN CERTIFICATE statement.
If keys/certificates do not exist yet, a Certificate Request will have to be created and submitted to a Certificate Authority, which will then issue the appropriate certificate for import into the AppBeat DC.
Unpack and securely install the unit. Plug in the required Gbic(s) and attach the AppBeat DC to the local switch(es). Attach the provided serial cable to a workstation running terminal emulation software (for example, Microsoft HyperTerminal or TeraTerm). The default serial configuration is as follows: Bits per second: 115,200; Data bits: 8; Parity: none; Stop bits: 1; Flow control: none.
Refer to Chapter 2. AppBeat DC Installation for specific information regarding unpacking and mounting instructions.

Log in to AppBeat DC
Refer to Chapter 3. Introduction to the Command Line Interface or Chapter 4. Introduction to the Graphical User Interface for specific information regarding login procedures and options.
Additional Basic Configuration Options
Once logged into the AppBeat DC, additional options can be configured.
Additional IP addresses and/or routes. Management Access Control Lists. Logging Options. HTTP Header Options.
Refer to Chapter 5. Initial Configuration and Global Settings for additional configuration details and options.

Acceleration Topology Configuration
Configure Real Server(s). One server per cluster for single server acceleration. The load balancing license is required to add more than one server to a cluster.
Create Virtual Server. If deploying in spoofed mode, the Virtual Server IP will be the same as the real server. Otherwise, the Virtual Server IP should be a new, unused IP address. Map Virtual Server to a Cluster.
Refer to Chapter 7. Server Topology - Farms/Clusters/Real Servers for additional information.

Compression Configuration
Create a Compression Profile. Define specific compression rules using the Crescendo Rules Engine described in Appendix A. Enable Compression Profile per Cluster.
SSL Acceleration Configuration
Import or create private key. Import or create Certificate. Create SSL Server Profile.
Refer to Chapter 13. SSL Acceleration for additional configuration details.

VRRPc Redundancy Configuration
Install two AppBeat DC units. Configure VRRPc Interface IP addresses. Configure VRRPc groups and enable feature.
Chapter 2. AppBeat DC Installation
This chapter describes the hardware installation process for the AppBeat DC.
Introduction. AppBeat DC Installation Kit. Installing the AppBeat DC Hardware. LED Indications.
Introduction
This chapter provides the essential information required to unpack and mount the AppBeat DC. The CN-7000 is a 1.5U or 2U rack-mounted unit. The AppBeat DC is offered in 2, 4, 8, or 10 SFP GbE interface configurations. Gbic interfaces enable the use of either copper or fiber Gigabit Ethernet connectivity based on the module(s) installed. The AppBeat DC comes with two management interfaces: a 10/100 Ethernet (out-of-band) management interface and an RS-232 RJ45 serial console port.
AppBeat DC Installation Kit
The AppBeat DC installation kit includes the following items:
AppBeat DC unit.
SFP (Gbic) Gigabit Ethernet modules (fiber or copper), as per the number and type you ordered.
Installation guide (provided on CD).
Cables:
1.5 meter power cable - according to the relevant standard of your country.
2 meter, RS-232 to RJ-45 serial console cable.
Power cable(s) (enclosed for units sold in U.S.A. only). Brackets and screws:
Installing the AppBeat DC Hardware
This section describes the following procedures:
Installing the AppBeat DC in the Rack on page 15.
Inserting the SFP Gigabit Ethernet Modules and Connecting the Cables on page 16.
The AppBeat DC unit is an electrical appliance. Handle it carefully and do not plug in the power cord until after it is installed in the rack.

Installing the AppBeat DC in the Rack
To install the AppBeat DC
1. Install the rack mount brackets included in the installation kit to the front of the AppBeat DC. Be sure to use the black screws that accompany the brackets, as they are longer than the screws removed from the AppBeat DC.
2. Tighten the screws to ensure the brackets are securely connected to the front sides of the AppBeat DC.
3. Slide the AppBeat DC into an available rack.
4. Secure the AppBeat DC to the rack with the screws provided by the rack manufacturer, as illustrated in Figure 6.
Inserting the SFP Gigabit Ethernet Modules and Connecting the Cables
After you mount the AppBeat DC in the rack, the next step requires you to insert the Gigabit Ethernet modules into the ports and connect the cables.
Inserting an SFP Gigabit Ethernet Module into an AppBeat DC Port
Insert the module (optical or copper) into the SFP ports on the front panel of the AppBeat DC (Figure 7).
Connecting Cables
For the initial setup, you are required to attach the following cables to the AppBeat DC:
Serial Console cable - 2 meter, RS-232 to RJ-45 serial console cable.
Management Ethernet cable - See AppBeat DC Installation Kit on page 14 for a description.
Power cable - Standard 110 (US) or 220 (Europe/Asia) cable according to your location.
Gigabit Ethernet cables - Standard optical or copper cables.
To connect the cables 1. Connect one end of the serial console cable to the AppBeat DC Console port (see Figure 8), and the other end to the console.
2. Connect one end of the Management cable to the AppBeat DC Ethernet port (see Figure 8), and the other end to the management network.
3. Connect the power cable. Connect the other end to the power source. The unit will be powered on immediately after plugging the cable into the power source.
LED Indications
The AppBeat DC has three operational status LEDs located on the right front panel, as well as a single LED for each physical interface. The blinking activity and related status of each LED are defined in this section.
Chapter 3. Introduction to the Command Line Interface
This chapter describes the AppBeat DC CLI command set. The CLI command set consists of all the commands required to configure and monitor the AppBeat DC. This chapter provides the basic information needed to access, navigate, and use the CLI as a powerful means of configuration.
Accessing the CLI. Conventions Used in this Guide. CLI Prompt Structure. CLI Navigation. Configurable CLI Parameters. Using the show Command. Using the no Command. Using the exit Command.
Accessing the CLI
Use the serial port in conjunction with the provided serial cable to open a console session using a terminal emulation program (for example, Microsoft HyperTerminal, TeraTerm, etc.). Set up the serial port as follows:
Bits per second: 115,200
Data bits: 8
Parity: none
Stop bits: 1
Flow control: none
Conventions Used in this Guide
The CLI conventions used for this user guide are as follows:
Table 3: CLI Conventions
Italicized - Indicates user input command elements such as specifying a name or IP address.
? - Enter a question mark at any point to get help.
| - Indicates a delimiter between options.
{Braces} - Commands enclosed in braces indicate mandatory command elements.
[Brackets] - Commands enclosed in brackets indicate optional settings.
CLI Prompt Structure
The prompt represents the current prompt level in which the user is located. The prompt level is stated for each command explained throughout this guide. Examples:
In Root level, the prompt is root>. In System level, the prompt is system>. In Configuration level, the prompt is config>. In Configuration Interface level, the prompt is gigabit-ethernet port 1>. In Configuration Farm level, the prompt is farm "Farm">.
Commands from higher levels in the command tree remain available at lower prompt levels. Command completion is only available when in the correct prompt level.
CLI Navigation
Case Sensitivity
CLI commands, keywords, and reserved words are not case-sensitive and can be entered in upper or lower case. User-defined text strings are also not case-sensitive and can be defined in upper, lower, or mixed case. Character case in user-defined text strings is preserved in the configuration for readability purposes only.

Basic Navigation
The CLI enables the use of the TAB key for command completion and supports abbreviated commands. For example, instead of typing the command configure terminal you can type c t. The CLI contains a command buffer of the last 16 commands. In addition, prior to accepting a configuration entry (line), the line can be edited. The following keys can be used when navigating the CLI.
Table 4: Special Keys for Navigating the CLI
?                     - List available choices in the current prompt level and privilege/security level.
Backspace             - Deletes characters backward, one character at a time.
Tab                   - Completes the command word.
[ESC] [ESC]           - Clears the prompt line.
Ctrl-N or Down Arrow  - Go to the next line in the history buffer.
Ctrl-P or Up Arrow    - Go to the previous line in the history buffer.
These keys can only be used on a VT compatible terminal.

Online Help
The Online Help feature enables you to:
Obtain a list of commands that begin with a particular character string.
Complete a partial command name.
List all commands available for a particular command mode in the given prompt level and with the current user credentials.
List a command's associated keywords.
List a keyword's associated arguments.
parent-mode - set cli parent mode
more - set number of lines for asking for more
Output:
root> show
cli                  show cli information
interfaces           display interfaces table
vlans                display vlans table
networking           display list of mirrored interfaces
ip                   display IP information
vrrpc                display vrrpc information
system               display system parameters
utilization          display utilization metrics of the system components
attack-monitor       display attack monitor counters
running              display running configuration
startup              display startup configuration
file                 display a file from /FLD/cfg directory
ftp-record           display ftp record
version              display version
license-codes        show codes for activated features
users                display users table
snmp                 display snmp information
real                 display real server information
device               display device server information
virtual              display virtual server information
farm                 display farms
cluster              display clusters
vservice             display vservice information
counters             display counters
global-data          show global data
server-queue-limit   show long queue protection status
tcp-params           show TCP parameters
udp-params           show UDP parameters
load-balancing       show load balancing profiles
                     display compression information
                     display content-control information
                     display usage-control information
                     display caching information
                     display health check information
                     display ssl configuration information
                     display traffic control profile information
                     display pplus information
                     show logging information
Output:
gigabit-ethernet 01, Admin UP, Status UP
Description gigabit-ethernet port 1
Hardware address 00-1d-b1-01-01-e0 Fiber Sfp
MTU 9216 bytes, BW 1000 Mbps, FULL duplex, Autoneg
number of vlans: 0
Internet address 10.10.10.128, Mask 255.255.255.0
Internet address 12.12.12.1, Mask 255.255.255.0

root> show system
Output:
Hostname slax, Date: Mar. 25, 2009 Time: 10:45:37
Servers: HTTP Server Disabled, SNMP Disabled, SNMP trap Disabled
SSH V1 & V2 Enabled (port 22, sessions number 0, limit 5), Telnet Enabled (port 23, sessions number 0, limit 5)
Power Supply: Single power supply
Number of Fans: 3
Fans status and rpm: 1st Sensor status UP, 2nd Sensor status UP, 3rd Sensor status UP
Board temperature levels: 1st Sensor 47 C, 2nd Sensor 52 C
Using the exit Command
Prompt level: Root
Example command:
To navigate from the configure prompt level to the root prompt level:
config> exit
Chapter 4. Introduction to the Graphical User Interface
This chapter introduces and explains the AppBeat DC Web-based Graphical User Interface (GUI).
Graphical User Interface (GUI) Overview. Preparations Installing Sun Java. Logging in to the GUI. Navigating the GUI.
Ensure that ports 80 and 161 are available to enable access to the GUI. Once connected, a Crescendo Networks image will display in the existing browser window, as shown in Figure 10. Do not close this window; doing so will close the Java-based GUI management application.
The user is presented with a separate window, which prompts for login credentials, as shown in Figure 11.
Log in using a user name and password created during the Auto Configuration Dialog or normal CLI configuration. Once logged in, the AppBeat DC GUI will be presented as a separate window. See Chapter 5. Initial Configuration and Global Settings.
Summary - Displays basic real time information and unit status.
Monitoring - Enables the user to view real-time and last 5 minutes performance information for the AppBeat DC, farms, clusters, and servers.
History - Displays historical performance information for the AppBeat DC, farms, clusters, and servers.
Configuration - Enables the user to configure most aspects of the AppBeat DC.
Events - Enables the user to view real-time and past events.
Summary
Summary mode displays basic global information such as the number of operational farms, clusters, and servers. Additionally, it shows real-time relative performance and transaction performance within the previous 24 hours.
Monitoring
Monitoring mode enables the user to view real-time and maximum in last 5 minutes performance information for the AppBeat DC, farms, clusters, and servers. Click an object in the Topology window to view related performance information. Selecting a cluster will present the aggregate information for all servers contained in that specific cluster. Selecting a farm will present the aggregate information for all clusters and servers contained in that specific farm.
History
The History mode displays historical performance information for the AppBeat DC, farms, clusters, and servers. The History service must be enabled for each unit for which you wish to view historical information. History can be enabled through the Configuration mode.
While in History mode, click an object in the Topology window. If historical information is available, the drop-down data menus will be available. Up to 4 data types can be viewed simultaneously. Once selected, the information will be charted in the right panel. Selecting a cluster will present the aggregate information for all servers contained in that specific cluster. Selecting a farm will present the aggregate information for all clusters and servers contained in that specific farm. Additionally, the graph's time scale can be adjusted to minutes, days, or weeks by cycling through the icon at the bottom of the window.
Configuration
Configuration mode enables the user to configure most aspects of the AppBeat DC. Click an object in the Topology window. Available configuration variables will be displayed in the right panel. Always click Apply to implement changes. To make the configuration change permanent for subsequent unit startups, make sure to save the running configuration by clicking File > Configuration > Save Configuration.
Events
Events mode enables the user to view GUI Event information. In order to see information, GUI Events and Logging per unit/object must be enabled.
To enable GUI Events, enter Configuration mode. From the Topology window, expand the Management icon by clicking the + symbol and then select Events and Logging. In the right pane, check the box labeled GUI Events and customize the logging level for the associated events you would like displayed in the Events mode window. Click Apply. Next, enable logging for each element for which you would like to see logging information. Do this by selecting each element in the Topology window and checking the box labeled Logging. Click Apply.
Chapter 5. Initial Configuration and Global Settings
This chapter introduces the initial configuration and basic administrative configuration options of the AppBeat DC.
Before Proceeding. Initial Configuration. Root/Global Commands. Management Configuration. Networking Commands. Client-side TCP Commands. Server-side TCP Commands. User Configuration. System Commands.
Before Proceeding
In order to proceed with the initial configuration and global settings, the following prerequisites should be satisfied.
The AppBeat DC should be properly mounted and connected to power. For more information, see Chapter 2. AppBeat DC Installation.
The Gbic interfaces should be installed and connected via fiber or copper to a switch. For more information, see Chapter 2. AppBeat DC Installation.
Management connectivity should be established, whether through a serial console or via the management Ethernet interface (GUI, Telnet, or SSH). For more information, see Chapter 3. Introduction to the Command Line Interface.
Initial Configuration
Once the AppBeat DC is properly mounted and connected to a terminal via the provided serial cable, the unit can be powered on for the first time. The following figure demonstrates the configuration of a newly installed AppBeat DC. The examples used throughout this section assume a basic network environment, as displayed in Figure 17.
Root/Global Commands
The CLI Root Commands are located in the root prompt level, but can be performed at any prompt-level. Use the CLI Root Commands to perform the following:
Exit the current prompt level (i.e., navigate to one prompt level higher). From the root, this will exit the CLI. Refer to Using the exit Command on page 26.
Undo commands. Refer to Using the no Command on page 25.
Show configuration information. Refer to Using the show Command on page 24.
Login as a different user. Refer to Using the login Command on page 37.
Ping a host. Refer to Using the ping Command on page 38.
Using the login Command
The CLI provides the login command to log in again as a different user. This can be useful if you want to change the permissions level that you are using. After using the login command, you will have to supply the user name and password of the user as which you now want to log in.
Using the ping Command
The CLI provides the ping command to enable you to ping a host. You can ping via a management port or a data port, either directly to an IP address or to a real server.
To ping a host
Command Syntax:
ping [mgmt | data-port] ip-address
or
ping server real-server-name
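For example, to ping a host at 10.0.2.1 through the management port, or to ping a real server named web-1 (both the address and the server name here are illustrative, not taken from this guide):

ping mgmt 10.0.2.1
ping server web-1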
Management Configuration
The management commands enable you to configure general management functions. To access the management commands, navigate to the config prompt level, and then to the management prompt level beneath it, as described below. This is the same method that should be used throughout the CLI to navigate to any prompt level. To navigate to the management prompt level Command Syntax:
configure terminal management
Setting a Hostname
Specify the host name to distinguish the AppBeat DC being managed. Perform the following commands to set the AppBeat DC host name.
To set the hostname using the CLI
Command Syntax:
hostname box-name
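For example, to set an illustrative hostname of appbeat-1 (any name of your choosing can be used):

hostname appbeat-1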
Creating a User
Create a new user, and provide a user name, password, and permissions level. The permissions level can be user (least permissions), admin, or tech (most permissions). For a detailed explanation, refer to User Configuration on page 69.
To define the user name, password, and permissions level
Command Syntax:
user username {password | encrypted encrypted-passwd} {admin | user | technician}
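For example, to create an admin-level user named ops1 with a clear-text password (the user name and password shown are illustrative only):

user ops1 Str0ngPass admin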
Setting the Date
To set the date using the CLI
Command Syntax:

calendar yyyy-mm-dd
Setting the Clock
To set the internal clock using the CLI
Command Syntax:
clock hh:mm:ss
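For example, to set the date and time manually (illustrative values):

calendar 2010-05-06
clock 14:30:00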
Setting Time by Synchronizing with NTP Servers
The Network Time Protocol (NTP) feature enables synchronizing the AppBeat DC clock with an accurate time source: NTP servers. You can specify up to three private or public NTP servers with which to work by specifying either their IP address or their name. If you specify an NTP server name, you must also configure at least one DNS server for DNS resolution of the server name. You can configure up to five DNS servers. After configuring at least one NTP server, enable the NTP feature. If you add or delete an NTP server while NTP is enabled, you must disable NTP and then enable it for the changes to take effect.
To specify an NTP server using the CLI Command Syntax: To add an NTP server:
ntp ntp-server ntp-server-ip-or-name
To configure NTP synchronization using the CLI Command Syntax: To enable NTP synchronization:
ntp ntp-client
Prompt level: Configure > Management
To configure DNS servers using the CLI
Command Syntax:
To specify a DNS server:
dns dns-server-ip-address
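For example, the following illustrative sequence configures a DNS server, adds an NTP server by name, and then enables NTP synchronization (the addresses and server name are examples only; a DNS server is needed here because the NTP server is specified by name):

dns 10.0.0.5
ntp ntp-server pool.ntp.org
ntp ntp-client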
If you enable NTP synchronization and then run the clock command, the AppBeat clock immediately resets to the time you set in clock. The following then happens:
- If there are more than 1,000 seconds between the time you set in the clock command and the NTP time, NTP will stop synchronization. To re-enable NTP synchronization, you must disable NTP and then enable it. The NTP mechanism will immediately synchronize the AppBeat clock with an NTP server and maintain an updated clock.
- If there are less than 1,000 seconds between the time you set in the clock command and the NTP time, it takes the NTP mechanism about 15 minutes to re-synchronize the AppBeat clock with an NTP server. Once it synchronizes the clock, it maintains an updated clock.
The show running command displays the configured NTP and DNS settings.

Setting the Time Zone
You can set the AppBeat DC to reflect local time. Select from the list of available time zones. The time zone command is independent of how you set the AppBeat clock (via the clock command or via NTP synchronization).
To set the time zone using the CLI
Command Syntax:
time-zone name time-zone
If you specify a time zone, the show system command displays the local time.
Setting the Connection Methods
Enable the method(s) that you will use to connect: telnet/http/ssh. Optionally, you can also change the following:
Telnet: listening port and session limit. Http: listening port. SSH: listening port and session limit; you can also select to enable only ssh version 1, only ssh version 2, or both.
To disable a connection method
Use the same commands as above, preceded by the no command. The telnet-server and ssh-server toggle each other. When one is enabled, the other is disabled. The HTTP service must be enabled in order for the GUI to function properly.

Setting the SNMP Server Configuration
The SNMP server can be enabled and disabled only from the CLI.
To enable/disable the SNMP server using the CLI
Command Syntax
snmp-server no snmp-server
Output: disabling Snmp access

The SNMP server status can be enabled or disabled only from the CLI. The SNMP contact and location variables are the only fields modifiable via the GUI. Additionally, the SNMP server must be enabled for the GUI to operate.
To configure the SNMP server contact using the CLI
Command Syntax
snmp-server contact contact-string
To configure the SNMP server location using the CLI Command Syntax
snmp-server location location-string
To configure the SNMP V2 server community and associated parameters using the CLI Command Syntax
snmp-server community community-string snmp-version v2 securitygroup {user | admin | tech} [optional-ip-address-limitation]
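For example, to create an SNMP v2 community named public-ro with user-level access, optionally limited to a single management station (all values here are illustrative):

snmp-server community public-ro snmp-version v2 securitygroup user 10.0.2.50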
To configure the SNMP V3 server community and associated parameters using the CLI Command Syntax
snmp-server community community-string snmp-version v3 securitygroup {user | admin | tech | auth | priv | contxt} {password}
Logging
The logging commands are configurable by the administrator user. The administrator user can access levels 0-6 (debug level 7 is restricted to users with technician privileges). The log information is configured globally, and each client can be configured to filter or receive all the logs. A client can be a console, memory, a file on flash, or a syslog server.
To set the AppBeat DC message logging level setting using the CLI
Command Syntax:
logging threshold {global | syslog | GUI} logging level
Output:
logging configuration
logging to syslog is disabled, server 10.0.0.
logging to console is disabled
01 test  events generated from level debug
   buffer does not capture events
   persistent buffer does not capture events
   console does not capture events
   syslog does not capture events
02 network  events generated from level debug
   buffer does not capture events
   persistent buffer does not capture events
   console does not capture events
   syslog does not capture events
03 system ...
This command continues to list all the services. To log messages to the Syslog server using the CLI Command Syntax:
logging syslog ip-address {port-num [514]} facility [local7] no logging syslog
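For example, to send log messages to a syslog server at 10.0.0.48 on the default port 514 with facility local7 (the server address is illustrative and matches the sample output below):

logging syslog 10.0.0.48 port-num 514 facility local7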
To show which units are configured to log using the CLI Command Syntax:
show logging
Output:
logging configuration:
logging to syslog is disabled, server 10.0.0.48:514 base 184
logging to console is enabled (this terminal)
Networking Commands
The networking commands enable you to configure the AppBeat DC's interfaces and static routes (including the default gateway) for the data path.
Interface Commands
Use the CLI or the GUI to configure the AppBeat DC interfaces described in the following sections (the management Ethernet and serial interfaces can be configured only from the CLI).
It is important to understand that the AppBeat DC utilizes an out-of-band management architecture for enhanced security and manageability. Because of this, two terms are used throughout this guide to discuss the path of data: data-path and management-path. Data-path refers to any traffic being accelerated or routed through the primary interfaces of the AppBeat DC. Management-path refers only to traffic destined to the management Ethernet port. Each path has its own routing table and ping command.

Configuring the Management Ethernet Interface
The management Ethernet interface can only be configured from the CLI. Perform the following commands to configure the AppBeat DC management interfaces. The management Ethernet interface is used for all remote management access, e.g., GUI, SNMP, software and configuration file management, etc. The management Ethernet interface has a separate routing table and must have a default route to access a remote network.
To configure the management Ethernet interface using the CLI
Command Syntax
interface management ethernet
To add an IP address to the management Ethernet interface using the CLI Command Syntax
ip address ip-address subnet-mask no ip address
Secondary IP addresses can be added to provide additional ports. To add a Secondary IP address to the management Ethernet interface using the CLI Command Syntax
ip secondary ip-address subnet-mask no ip secondary
To configure the management Ethernet interface description using the CLI Command Syntax:
description interface-description
To configure the management Ethernet interface route using the CLI Command Syntax:
ip route prefix-ip-address prefix-mask nexthop-ip no ip route prefix-ip-address prefix-mask nexthop-ip
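For example, the following illustrative sequence assigns the management Ethernet interface an IP address and a default route toward a management gateway (the addresses are examples only, and the default route is expressed here as prefix 0.0.0.0 with mask 0.0.0.0):

interface management ethernet
ip address 10.0.2.146 255.255.252.0
ip route 0.0.0.0 0.0.0.0 10.0.2.1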
To ping via the management interfaces using the CLI Command Syntax:
ping mgmt IP-address [count number of pings] [size buffer-size]
Configuring the Management Serial Interface
The management serial interface can only be configured from the CLI. Perform the following commands to configure the AppBeat DC management serial interface. The default settings for the management serial interface are:
Baud: 115,200
Data bits: 8
Parity: none
Stop bits: 1
Flow Control: none
To configure the management serial interface using the CLI Command Syntax
interface management serial
Perform the following commands to configure the AppBeat DC console port. Management-serial console configuration is required so that port-specific characteristics can be configured.
To configure management serial interfaces using the CLI
Command Syntax:
speed bps
To configure management serial interface descriptions using the CLI Command Syntax:
description interface-description
Configuring Gigabit-Ethernet Interfaces
Perform the following commands to configure the AppBeat DC Gigabit Ethernet interfaces.
To configure Gigabit Ethernet interfaces using the CLI
Command Syntax:
interface gigabit-ethernet {1-2 | 1-8 | 1-4 | 1-10}
To configure Gigabit Ethernet interface descriptions using the CLI Command Syntax:
description interface-description
To set the administrative status of the Gigabit Ethernet interface using the CLI Command Syntax:
shutdown no shutdown
To configure Gigabit Ethernet interface IP addresses using the CLI Command Syntax:
ip address ip-address subnet-mask no ip address
To add a Secondary IP address to the Gigabit Ethernet interface using the CLI Command Syntax
ip address ip-address subnet-mask secondary no ip secondary
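For example, the following illustrative sequence enables Gigabit Ethernet port 1 and assigns it a primary and a secondary IP address (the addresses match the sample show interfaces output earlier in this guide, but any suitable addresses can be used):

interface gigabit-ethernet 1
no shutdown
ip address 10.10.10.128 255.255.255.0
ip address 12.12.12.1 255.255.255.0 secondary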
Configuring Interface Speed/Duplex Settings for the CN-7000
The CN-7000 supports the ability to configure individual port speed and duplex parameters. Each interface can be configured for auto negotiation of these options, or manually configured to 10/100/1000 Mb and full/half duplex.
To configure speed/duplex settings per interface
Command Syntax:
speed {10mb | 100mb | 1000mb | auto} duplex {full | half | auto}
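For example, to force an interface to 100 Mb full duplex (the values are illustrative, and the command is written as a single line as suggested by the syntax above):

speed 100mb duplex full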
VLAN Support
VLAN support is achieved by defining sub-interfaces on a physical port. The VLAN ID range is 1 to 4095. A VLAN sub-interface is configured exactly like a regular Gigabit Ethernet port, with the addition of the VLAN keyword and VLAN number. The AppBeat DC supports 802.1q VLAN tagging. Tagging is automatically enabled upon configuration of a VLAN interface. Packets leaving a VLAN interface are tagged using that interface's associated VLAN number.
To establish single or multiple sub-interfaces per port using the CLI Command Syntax:
interface gigabit-ethernet inf-number vlan vlan-number
To configure the VLAN Gigabit Ethernet interface description using the CLI Command Syntax:
description interface-description
To set the administrative status of the VLAN Gigabit Ethernet interface using the CLI Command Syntax:
shutdown no shutdown
To configure the VLAN Gigabit Ethernet interface IP addresses using the CLI Command Syntax:
ip address ip-address subnet-mask no ip address ip-address
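For example, the following illustrative sequence creates VLAN 56 on Gigabit Ethernet port 2, brings it up, and assigns it an IP address (matching the 2.56 entry in the sample show interfaces ip output later in this chapter):

interface gigabit-ethernet 2 vlan 56
no shutdown
ip address 10.20.3.6 255.255.0.0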
While in the interface prompt, a shortcut to the sub-interface with a VLAN tag is available with the command: VLAN {vlan-number}. This brings the user into the prompt level: gigabit-ethernet port VLAN {interface}.{vlan-number}. The Gigabit Ethernet port cannot have an IP address if VLANs are associated with the port. Each VLAN interface can be shut down individually, or the entire Gigabit Ethernet port can be shut down, which results in all associated VLANs being shut down. Packets from tagged and untagged ports can be received simultaneously. For security purposes, packets are only accepted when the port/network/VLAN match. Any mismatched packets are discarded.

Aggregation
The AppBeat DC supports Aggregation, which enables the configured system to reach increased bandwidth and availability by creating an Aggregation Group, or aggregator. Depending on the configuration, there are between 1 and 5 predefined aggregators in the system. The aggregator enables one or more physical ports to be grouped together and treated as a single link. The aggregator is a system interface, for which IP subnets, secondary IPs, secondary subnets, and VLANs can be created. The IP subnets, secondary IPs, secondary subnets, and VLANs are created the same way for aggregators as they are for regular interfaces.
To switch the CLI interface to a specific aggregator's context menu
In the CLI, some aggregation commands can only be used within a specific aggregator's context menu. Use this command to ensure that you are working in the correct aggregator's context menu.
Command Syntax:
interface aggregator aggregated_link_interface_[1-5]
Within this context menu, the following can be configured:
set the interface description
set the mode of the interface (speed & duplex)
IP related commands
vrrpc configuration
Prompt level: Configure > Networking > Interface Gigabit Ethernet > Vlan
Example commands:
networking> interface gigabit-ethernet 4 gigabit-ethernet port 4> aggregator-group 2 networking> interface gigabit-ethernet 4 gigabit-ethernet port 4> no aggregator-group 2
Prompt level: Configure > Networking > Gigabit Ethernet > Vlan
Example commands:
To display information about all physical interfaces, VLANs, aggregators, and VLANs on aggregators:
root> show interfaces
Output:
aggregator 01, Admin UP, Status UP
Description aggregator port 1
Hardware address: 00-50-C2-22-A5-71
BW 1000 Mbps, FULL duplex
number of vlans: 0
To display information about all interfaces, as well as VLANs with configured IP addresses Command Syntax:
show interfaces ip
Output:
Interface   IP Address    IP Mask          ShapeRate   BurstSize   Admin   Oper
1           1.2.3.1       255.0.0.0        No Limit    No Limit    UP      DOWN
2.56        10.20.3.6     255.255.0.0      No Limit    No Limit    UP      DOWN
3           2.2.3.4       255.0.0.0        No Limit    No Limit    UP      DOWN
aggr1       4.2.3.4       255.0.0.0        No Limit    No Limit    UP      DOWN
aggr3.40    5.2.3.4       255.0.0.0        No Limit    No Limit    UP      DOWN
Mgmt        10.0.2.146    255.255.252.0    No Limit    No Limit    UP      UP
Available Ethernet ports: 4
The output contains a line for each interface. The interfaces can be any of the following:
Aggregator - Appears as aggr<aggregator number>, for example, aggr1.
Physical port - Appears as <port number>, for example, 1.
VLANs with IP addresses - Appears as aggr<aggregator number>.<VID>, for example, aggr3.40.
Management - Appears as Mgmt.
IP Routing
Configure the routing for the AppBeat DC unit by using the following commands.
To add/remove routes using the CLI
Command Syntax:
ip route ip-address mask nexthop-ip [enable | disable] no ip route ip-address mask
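For example (illustrative addresses), to add a default route for the data path and then remove a more specific route:

ip route 0.0.0.0 0.0.0.0 10.10.10.1
no ip route 192.168.50.0 255.255.255.0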
Routing Profiles
A routing profile can be created to store routing information for easy saving and reloading.
To edit a routing profile using the CLI
Command Syntax:
ip routing profile profile-name {enable | disable | profile-ipaddress}
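For example, a sketch that assigns an address to a routing profile named route-1 (the profile name reused from the NAT example later in this chapter) and then enables it; the exact semantics of the address and enable arguments may vary:

ip routing profile route-1 10.10.10.1
ip routing profile route-1 enable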
Access Control
The AppBeat DC uses access control methods to filter traffic on a packet basis. Access control is achieved by defining Access Control List (ACL) rules. ACL rules are ordered (prioritized) rules that determine what action to take for certain classifications of packets. Each ACL rule includes the rule criteria, a unique priority, and a defined action for packets matching the rule criteria.

NAT
One of the actions you can specify in an ACL rule is to open a Network Address Translation (NAT) session. In a NAT session, the source IP and source port of the packets are replaced by a different source IP and source port. This is usually used to allow servers with a private IP address to communicate with devices outside the private network. The new source IP is taken from a configurable public IP pool of addresses. Source ports are randomly assigned per NAT session. NAT is supported for TCP, UDP and ICMP packets. The timeout period for a NAT session is 15 seconds.

ACL Criteria
ACL rule criteria are built using a combination of some Crescendo Rules Engine (CRE) keywords, operators, and the logical AND. Refer to Appendix A for a general explanation of the Crescendo Rules Engine. Table 6 lists the CRE keywords that can be used to create ACL rule criteria, and the CRE operators you can use for each keyword. The ACL feature supports up to 10,240 rules of type ip.src==, and another 100 rules of all other types. A keyword cannot appear more than once, unless it is used by the operators < and > to specify a range. For example: tcp.srcport >5 and tcp.srcport <13.
Table 6: CRE Keywords and Operators Available for ACL Rules
ip.src - Source IP. Type: IP Address. Operators: ==. Values: IP / mask.
ip.dst - Destination IP. Type: IP Address. Operators: ==. Values: IP / mask.
tcp.srcport - TCP source port. Type: Integer. Operators: >, <, !=, ==. Values: 1-64K.
udp.srcport - UDP source port. Type: Integer. Operators: >, <, !=, ==. Values: 1-64K.
tcp.dstport - TCP destination port. Type: Integer. Operators: >, <, !=, ==. Values: 1-64K.
udp.dstport - UDP destination port. Type: Integer. Operators: >, <, !=, ==. Values: 1-64K.
ip.protocol - Protocol type. Type: Protocol. Operators: ==, !=. Values: any of tcp, TCP, udp, UDP, icmp.
eth.vlan.id - VLAN. Type: Integer. Operators: >, <, !=, ==. Values: 1-4K.
eth.ingif - Ingress Interface. Type: String. Operators: !=, ==. Values: any of gigabit-ethernet [1-10], aggregator [1-5], ten-gigabit-ethernet [a-b].
ACL Rule Priorities
When configuring an ACL rule, you must assign it a unique priority. The priority value is only used in instances when more than one ACL rule is matched. No two rules can have the same priority rating. Priority is based on an ascending scale, so a rule with priority 2 has a higher precedence than a rule with a priority of 1.

ACL Actions
When configuring ACL rules, five possible actions are available if a packet matches the rule criteria:
Forward - Pass on the packet.
Deny - Discard the packet.
NAT - Open a NAT session for the packets. You must also specify the public IP pool from which to take the public IP address, and the routing profile to be used for routing the packets after their source IP and port were changed.
Redirect - Change the source IP and/or source port and/or destination IP and/or destination port, and send the packet on.
Route - Route the packet according to a specified routing profile. For information on routing profiles, refer to Routing Profiles on page 58. Note that the specified routing profile must be enabled.
Configuring ACL Rules

To configure ACL rules using the CLI
Command Syntax:
To define an ACL rule:
traffic-control rule name rule-name expression CRE-expression priority rule-priority_[1..10,340] action {pass-on | discard | nat address-pool public-ip-pool-name routing-profile routing-profile-name | redirect [srcip new-source-ip] [srcport new-source-port] [dstip new-destination-ip] [dstport new-destination-port] | route routing-profile routing-profile-name}
Prompt level: Configure > Networking
Example commands:
To specify that packets with a source IP of 10.1.2.3 and a mask of 255.255.255.0 should be discarded:
networking> traffic-control rule name Rule4 expression "ip.src==10.1.2.3/24" priority 4 action discard
To pass on packets where the VLAN tag is in the open range of (13-20), the TCP source port is in the open range of (5-13), and the TCP destination port is less than 80:
networking> traffic-control rule name Rule10 expression "eth.vlan.id>13 and eth.vlan.id<20 and tcp.srcport >5 and tcp.srcport <13 and tcp.dstport <80" priority 10 action pass-on
To open a NAT session using the IP address from public-pool-A, for packets whose UDP destination port is greater than 30 and whose source IP is 10.0.1.1 on a subnet of 255.255.252.0; then route all packets in the session according to the route-1 routing profile:
networking> traffic-control rule name Rule-13 expression "udp.dstport > 30 and ip.src == 10.0.1.1/22" priority 13 action nat address-pool public-pool-A routing-profile route-1
Configuring Public IP Pools

If you will be routing packets using NAT, you must configure at least one public IP pool containing an IP address. You can create up to four public IP pools. Each public IP pool can contain up to one public IP address. Since each public IP address can use 65,536 different port numbers, the AppBeat DC supports up to 262,144 concurrent NAT sessions.

You can delete an existing public IP pool, and an existing address in a public IP pool. However, you can delete a public IP pool only if it contains no IP addresses, and if it is not being referenced by any ACL rule. A public IP pool referenced by an ACL rule must contain an IP address.

To configure public IP pools using the CLI
Command Syntax:
To create a public IP pool:
traffic-control nat address-pool public-ip-pool-name
Prompt level: Configure > Networking
Example commands:
To create the PublicPool-1 public IP pool and add the IP address 14.14.14.1 to the pool:
networking> traffic-control nat address-pool PublicPool-1 ip-address 14.14.14.1
To display all the configured public IP pools and the IP addresses configured in each pool:
show traffic-control nat total_all_public_ip_pool_show
Client-side TCP Windows

- Initial Transmit Window: the initial transmit window used for client-side TCP connections. This is the total number of bytes the AppBeat DC sends to the client without waiting for an ACK, at the start of a TCP connection. The transmit window increases as the connection ramps up. The default value for this parameter is 3072 bytes.
- Maximum Transmit Window: the maximum number of bytes the AppBeat DC sends over a client connection without waiting for an ACK. The default value for this parameter is 32768 bytes.
- Maximum Receive Window: the maximum window size the AppBeat DC advertises to a TCP client. The default value for this parameter is 8192 bytes.
To configure client-side TCP windows using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, click the Virtual Servers icon, then select the Properties tab.
3. Enter values, in bytes, for the Initial Transmit Window, Maximum Transmit Window, and Maximum Receive Window.
Client-side TCP Inactivity Timers

Two TCP inactivity timers control how long idle client connections are kept open by the AppBeat DC. There are two kinds of TCP client connections. An Active client connection is one where the connection is currently in use for a transaction. This is most common when the client has sent a request but has yet to receive a response. An Idle client connection is one where there is no activity on the TCP connection at all; the last transaction (if applicable) was completed successfully and the TCP connection is now idle, with the client not waiting for a response.

The inactivity timers for these two types of connections are both configurable and indicate how long the AppBeat DC will keep each kind of connection open when there is no data present over the connection. The default timer for both kinds of connections is 30 seconds. After this period of inactivity, the AppBeat DC will close the connection.

To configure client-side TCP inactivity timers using the CLI
Command Syntax:
tcp connection-inactivity {idle-client-time | active-client-time} inactivity-time
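For example, to raise the timer for active client connections from the 30-second default to 60 seconds (an illustrative value within the documented range):

tcp connection-inactivity active-client-time 60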
The active-client-time is a value between 15 and 4,096 seconds.

To configure client-side TCP inactivity timers using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, click the Virtual Servers icon, then select the Properties tab.
Client-side MSS

The TCP Maximum Segment Size (MSS) is used by a TCP client to announce to its TCP peer the maximum TCP segment it is willing to receive. The peer, in turn, should not send any TCP segments larger than the MSS announced by the client. Both TCP endpoints do this, each announcing the MSS it expects to receive when the connection is initially set up. The TCP MSS will have an impact on packet sizes as well. MSS is a TCP option and is only seen in TCP SYN segments. The AppBeat DC uses a default MSS of 1462 Bytes for client-side TCP connections. However, this MSS is configurable and can be adjusted if necessary.
The client-max-mss is a value between 536 and 1452 bytes.

To configure the client-side TCP initial SS threshold using the CLI
Command Syntax:
tcp client-initial-ss-threshold ss-threshold-size
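For example, to set the client-side initial SS threshold to 8192 bytes (an illustrative value within the documented range):

tcp client-initial-ss-threshold 8192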
The client initial SS threshold is a value between 3072 and 65535 bytes.

FastTCP
Command Syntax:
Prompt level: Configure
Example commands:
Server-side TCP Windows

The server-side TCP window configuration parameters are similar to those on the client-side:
- Initial Transmit Window: the initial transmit window used for server-side TCP connections. This is the total number of bytes the AppBeat DC sends to the server without waiting for an ACK, at the beginning of a TCP connection. The transmit window increases as the connection ramps up. The default value for this parameter is 3072 bytes.
- Maximum Transmit Window: the maximum number of bytes the AppBeat DC sends over a server connection without waiting for an ACK. The default value for this parameter is 32768 bytes.
- Maximum Receive Window: the maximum window size the AppBeat DC advertises to a TCP server. The default value for this parameter is 8192 bytes.
The server-rx-window is a value between 2048 and 32768 bytes.

To configure the server-side TCP initial Segment Size threshold using the CLI
Command Syntax:
tcp server-initial-ss-threshold ss-threshold-size
User Configuration
Three categories of users can be defined to log on to the AppBeat DC, each with its own set of privileges. The user categories are:
- User: permitted to use show commands and view statistics.
- Administrator (admin): permitted to perform all operations.
- Technician (tech): permitted the same privileges as the admin, with the addition of "debug" facilities.
The default user created with the Auto Configuration Dialog (A.C.D.) has administrator privileges. Define users for the AppBeat DC unit by using the following commands. Substitute real names in place of the listed example names, where required.

To configure user/password privileges using the CLI
Command Syntax:
user username {password | encrypted encrypted-passwd} {admin | user | technician}
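For example, to add the two users shown in the show users output below (the passwords here are illustrative placeholders):

user bob B0bPassw0rd admin
user james J4mesPassw0rd user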
The option to add a user with an encrypted password enables inserting a user from a previous configuration without knowing the user's clear-text password.

To view online user information using the CLI
Command Syntax:
show users
Output:
User Table:
user name    permission
bob          admin
james        user
System Commands
The system commands for the AppBeat DC unit are grouped in the following categories:
- Configuration File Management.
- File Transfer/Management.
- File Commands.
- Software and Operating System Upgrade and Version Control.
Configuration File Management

You can manage the configuration file by using the following commands. Substitute real names in place of the listed example names, where required. Saving the configuration writes the running configuration to flash. The startup.cfg file loads after the system boots. The configuration file is text based and can be viewed with a standard text editor.

To save the configuration file using the CLI
Command Syntax:
save [config filename]
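For example, to save the configuration under the file name backup.cfg (the Save As case described below):

save config backup.cfg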
A Save As is performed to save the startup configuration file as backup.cfg.

To view the running configuration using the CLI
Command Syntax:
show running config
To view the saved configuration file using the CLI Command Syntax:
show startup config
Loading Additional Configuration Files to a Running Config

This feature enables an administrator to apply configuration variables from a separate configuration file. For example, an administrator adding to or modifying an existing configuration may choose to upload a file which contains all of the required configuration modifications. The add-config command can be used to process all new configuration changes found in the file. The changes can then be saved to the startup configuration.

To execute commands from a file using the CLI
Command Syntax:
add-config file-name
Prompt level: System
This command processes commands found in the defined file. The file should be an ASCII text file and should be located on the local file system. The ftp-get command can be used to download the file to the local file system. For more information, see File Transfer/Management below.

File Transfer/Management

The AppBeat DC has the capability to transfer files/software versions to and from a remote FTP server. You must first configure a remote FTP account using the ftp-record command.

To configure a remote FTP account using the CLI
Command Syntax:
ftp-record username : passwd @ ipaddress directory
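For example, to define an FTP account on a remote server (the user name, password, address, and directory below are illustrative placeholders):

ftp-record crescendo : ftppass @ 192.168.10.5 /images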
To retrieve a remote file via FTP using the CLI
There are three types of files that can be transferred via FTP. Each file type is addressed in a different way. It is important to select the correct operation:
- default: retrieved as a regular file and is saved as-is to the flash file system.
- config: downloads a configuration file, tests for validity, and saves the file as "startup.cfg".
- version: downloads an application image. This is the combined hardware and software image. As with the operating system, there are two banks, primary and secondary. Unlike the operating system, the downloaded version is saved to the secondary bank, and can be toggled to be the primary, at the user's discretion.

Command Syntax:
ftp-get filename [config | version]
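For example, to download an application image to the secondary bank (the file name follows the CN7KA_x_x_x_xx.tbz pattern described in the upgrade procedure and is illustrative here):

ftp-get CN7KA_8_4_0_10.tbz version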
File Commands

The AppBeat DC has a flash file system for storing configuration files. Use the following commands to manage the files.
To display files in the current directory using the CLI
Command Syntax:
dir
Software Upgrade and Version Control

The AppBeat DC has two software images, primary and secondary. You can set which one of them is active and which is passive. The system boots from the active image.

Upgrading the system software requires a support license. To obtain a support license, contact your local sales representative, or email [email protected]. Using the support license, you can upgrade your system to any version whose build date is earlier than the expiry date of the support license. The expiry date of the support license is indicated by the last six digits of the license string, in the format ddmmyy; for example, 010112.

To enter the details of the support license using the CLI
1. From the CLI, log in as an administrator.
2. From the system> prompt, enter the support license details:
To upgrade the application software using the CLI
1. Download the necessary file(s) from the Crescendo Networks Support website. The application software is typically named CN7KA_x_x_x_xx.tbz.
2. Place these files on the FTP server configured for access by the AppBeat DC (refer to File Transfer/Management on page 72).
3. From the CLI, log in as an administrator.
4. From the system> prompt, upgrade the system software:
The software upgrade is installed on the passive image. The passive image will become the active image after a reboot.
5. Reboot the AppBeat DC with the reboot command.
You can run the show version and show system commands to view the results. Check whether the GUI display has been updated, by selecting Help > About and verifying that the JAR version number is identical to the Running version number. If the versions are not identical, you need to delete the Java temporary files, as follows:
1. Exit the AppBeat DC GUI application.
2. In your desktop, select Start > Control Panel.
3. Double-click the Java icon. The Java Control Panel appears.
4. In the General tab, click Settings in the Temporary Internet Files section. The Temporary Files Settings window appears.
5. Unselect the option Keep temporary files on my computer. Click OK.
6. Click OK again to exit the Java Control Panel.
7. Restart the AppBeat DC GUI application.

To enter the support license using the CLI
Command Syntax:
license load support license_string_with_date
To view the current running software version using the CLI Command Syntax:
show version
Output:
hardware version         : C0
model                    : 7740
vendor                   : Crescendo
board serial number      : CN7030080016
number of 1G ports       : 10
management port MAC      : 00-00-50-49-98-BC
Running version          : Release 8.4.0.10
Builder                  : [email protected]
Creation date            : 2010.03.07_18.39.18
Build count              : 26768
Secondary version        : Release 8.4.0.10
Builder                  : [email protected]
Creation date            : 2010.03.07_18.39.18
Build count              : 26768
Version after reboot     : Running
FPGA-S RBF version       : 2200002f
SSL is supported
Compression is supported
Total Memory 2016 Mbytes, Free memory 252 MBytes
uptime is 4 days, 22 hours, 39 minutes, 26 seconds
Output
Hostname: Crescendo
Date: Mar. 14, 2010
Time: 16:44:56
Servers: HTTP Server Enabled (on port 80), SNMP Enabled (on port 161), SNMP trap disabled, SSH V1 & V2 Enabled (port 22, sessions number 0, limit 5), Telnet Enabled (port 23, sessions number 0, limit 5)
NTP: client mode disabled
Power Supply: Single power supply
Number of Fans: 3
Fans status and rpm: 1st Sensor 37770 rpm
2nd Sensor 3770 rpm
3rd Sensor 3668 rpm
Board temperature levels: 1st Sensor 47°C, 2nd Sensor 54°C
To toggle the boot to alternate software image using the CLI Command Syntax:
software-toggle
To synchronize the secondary software image with the primary using the CLI Command Syntax:
software-sync
To upgrade the software version using the GUI
1. Select File > Software Update via HTTP.
2. In the File Open window that appears, specify the software upgrade file. Click Open.
The software upgrade is installed on the passive image. The passive image will become the active image after a reboot.
3.
6  Server Preparation and Logging Considerations
This chapter provides critical information regarding server configuration. Consult it to ensure servers are properly configured before attempting to accelerate and/or load balance with the AppBeat DC.
Server Preparation
Within the AppBeat DC configuration, servers are defined as real servers. A real server definition includes the server IP address and TCP port from which the application can be accessed. Real servers can be configured within HTTP clusters or TCP clusters. HTTP clusters are defined for HTTP-based applications, which have the ability to utilize the full suite of acceleration features within the AppBeat DC, such as TCP Offload (multiplexing and optimization), Compression, and SSL Acceleration. TCP clusters are used for non-HTTP based TCP applications. Depending on the type of application, and ultimately the type of cluster, the server must be properly configured to ensure functionality and optimum performance.

HTTP Server Configuration Requirements

When a server is configured in an HTTP cluster, the AppBeat DC opens a small number of backend connections to it. These connections are designed to stay open indefinitely, limiting the overall TCP connection setup and teardown activity on the server. Because of this behavior, it is important that the servers be configured to optimally take advantage of the small number of backend connections. Typically, many servers are not configured to use long-lasting TCP connections because of the burden of managing them when not front-ended by the AppBeat DC. Therefore, it is important to follow these guidelines before configuring a server to be accelerated by the AppBeat DC. Failure to do so may result in poor performance and, in some cases, increased CPU utilization on the server.

Apache

Apache requires the following modifications be made to the httpd.conf file, usually found in the /etc/httpd/conf/ directory.
- KeepAlive On (By default, this is set to Off).
- MaxKeepAliveRequests 0 (Provides unlimited requests; by default, set to 100).
- KeepAliveTimeout 45 (By default, set to 15).
Microsoft IIS There is no special configuration required for default configurations of Microsoft IIS 5 or IIS 6.
Other Servers

If the server being load balanced/accelerated by the AppBeat DC is a server other than Apache or Microsoft IIS, verify that the HTTP Keep-Alive and Request per Connection settings are set to appropriate values, as specified in Apache on page 82.

TCP Server Configuration Requirements

If a server is not an HTTP server, but will be load balanced via the AppBeat DC using the Layer 4 Load balancing feature (Cluster and Virtual Server set to TCP protocol mode), the server's default gateway (or return path route) must be configured as the interface (or VRRPc interface) of the AppBeat DC. If the routes are not properly configured on the server, asymmetrical routing will occur, causing the application to malfunction. Note that if the AppBeat DC is deployed redundantly using VRRPc, then the default gateway of the server should be configured as the AppBeat DC's VRRPc interface.
The X-Forwarded-For header is used by default, and a sample HTTP GET request and headers are provided below:
GET /sales/homepage.html HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (Compatible; MSIE 6.0)
- Add always: The client source IP observed by the AppBeat DC is inserted in the header, even if another header exists. For example, if the AppBeat DC receives a request which already contains the header used by the AppBeat DC (X-Forwarded-For, for example), the AppBeat DC overwrites the existing header with its own header and observed source IP.
- Add if not present: If the AppBeat DC receives a request which already contains the header used by the AppBeat DC (X-Forwarded-For, for example), the AppBeat DC leaves the original header and does not modify or add an additional header, preserving the original header and contents.
Server logging software should be reconfigured to identify the client IP address in the header configured in the AppBeat DC.

To configure the originator IP header using the CLI
Command Syntax:
http originator-ip {no-mark | mark} [xforwardedfor | oasip | clientip | cresclientip]
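For example, to mark requests with the X-Forwarded-For header (the choice assumed by the server logging procedures later in this chapter):

http originator-ip mark xforwardedfor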
In addition to configuring the client's Originator IP address on the global level, the Originator IP address can also be configured for each Virtual Server. For more information, see Configuring Virtual Servers on page 148.

To configure the originator IP header using the GUI
1. Once logged in through the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol and click the Farms icon. The HTTP tab appears.
3. Check the Originator IP box if you wish to insert the originating client IP address in the requests sent to servers.
4. In the Header drop-down list, select the type of header.
5. In the Action drop-down list, specify whether to Add the header only if it is not present, or whether to Always add the header.
In addition to configuring the client's Originator IP address on the global level, the Originator IP address can also be configured for each Virtual Server. For more information, see Configuring Virtual Servers on page 148.

Server Log Configuration

The following section provides instructions for configuring the logging functionality within some popular Web/application servers to properly use the originator IP information provided by the AppBeat DC.

Microsoft Internet Information Server (IIS) Logging

Before proceeding, verify that the AppBeat DC is configured to insert the original client IP address in the X-Forwarded-For header. Through the GUI, verify this by logging in as an admin user and entering the Configuration mode. Click the Topology icon and select the General tab. Verify that Originator IP is checked, and that x-forwarded-for is selected in the Header field.
To configure IIS to report the client IP address from the X-Forwarded-For header, an ISAPI filter must be installed on each server. The process is outlined below:
1. Download the CN-XFF.dll file from the Crescendo Networks Support website or contact your local Technical Support Engineer for assistance.
2. Copy the CN-XFF.dll file into a directory on the server.
3. Open the IIS Manager on the server.
4. Right-click and enter the Web Site Properties menu for the desired Web server.
5. Click the ISAPI Filters tab.
6. Click New and enter a name, such as Crescendo Filter, and browse for the CN-XFF.dll file.
7. Click OK. You may need to restart the Web server in order for the changes to take effect.
The IIS Server will now search for the X-Forwarded-For: header when populating the client IP field in the logs. For all other application traffic not forwarded by the AppBeat DC, the log files will display the correct client IP.

Apache Logging

Before proceeding, verify that the AppBeat DC is configured to insert the original client IP address in the X-Forwarded-For header. Through the GUI, verify this by logging in as an admin user and entering the Configuration mode. Click the Topology icon and select
the General tab. Verify that Originator IP is checked, and that x-forwarded-for is selected in the Header field.

Perform the following steps to configure Apache to log the X-Forwarded-For header:
1. Open the httpd.conf file, typically located in the /etc/httpd/conf/ directory.
2. Look for the LogFormat section and edit the logging format nickname, e.g.: common.
3. Add the following logging parameter: %{X-Forwarded-For}i
For example:
LogFormat "%{X-Forwarded-For}i %h %l %u %t \"%r\" %>s %b" common
The preceding example enables Apache to log the information found in the X-Forwarded-For header in the Source-IP field of the log files.

Sun ONE Server (formerly iPlanet) Logging

Before proceeding, verify that the AppBeat DC is configured to insert the original client IP address in the X-Forwarded-For header. Through the GUI, verify this by logging in as an admin user and entering the Configuration mode. Click the Topology icon and select the General tab. Verify that Originator IP is checked, and that x-forwarded-for is selected in the Header field.

Perform the following steps to configure the Sun ONE Server for correct source IP logging:
1. Log in to the Sun ONE server Web-based management interface.
2. Go to the Preferences tab > Access Logging Options > Custom Format.
3. For Custom Format, replace the string:
%Ses->client.ip%
Figure 25: Configuring Sun ONE Server for Correct Source IP Logging
7  Server Topology: Farms/Clusters/Real Servers
This chapter provides information for configuring the AppBeat DC server topology settings, including Farms, Clusters, Real servers, and devices. Additionally, this section explains how to configure HTTP Application Based Load Balancing, Layer 4 (TCP-based) Load balancing, Backend Server Connection Management, Server Health Checking, and Session Persistence.
- Before Proceeding.
- Configuration Overview.
- Server Topology Configuration.
- Farm Configuration.
- Cluster Configuration.
- Real Servers.
- Device Configuration.
Before Proceeding
In order to proceed with configuring server acceleration and/or load balancing, the following steps should be satisfied.
- Management connectivity for each unit, whether through Serial Console or via Management Ethernet Interface (GUI, Telnet, or SSH). See Chapter 3, Introduction to the Command Line Interface.
- At least one Data Interface on each unit configured with an IP Address and connected to the same network as the server(s) to be accelerated. See Chapter 5, Initial Configuration and Global Settings.
- Some servers may require a configuration change to work properly with the AppBeat DC. See Chapter 6, Server Preparation and Logging Considerations.
Configuration Overview
Topology: Farms, Clusters, and Real Servers

The configuration topology is composed of Farms, which contain one or more Clusters, which in turn contain one or more real servers. For instance, a configuration designed to accelerate a single server would look as follows:
Farm-1
  Cluster-1
    Server-1
As discussed in Chapter 1, the AppBeat DC can be configured to accelerate individual servers or a load balanced cluster of servers. Therefore, the configuration of a cluster with three identically configured servers intended to be load balanced would look as follows:
Farm-1
  Cluster-1
If the Load Balancing license is not installed, you will be unable to add more than one server to a cluster. However, all other features, including single server acceleration will still function. Contact your Crescendo Networks Reseller or Sales Associate for assistance with enabling this feature.
The concept of Farms and Clusters exists primarily as a logical grouping tool for administration as well as monitoring and viewing performance information. For example, performance information can be viewed for a real server, cluster, farm, or entire unit. It is common for an AppBeat DC to be configured to accelerate several different groups of servers. It may make sense for an administrator to logically group the servers in separate Farms or Clusters for administrative and reporting reasons. For example:
Accounting
  Application-1
Sales
  Application-3
    Server-6
    Server-7
Virtual Servers

After the real servers are defined in a cluster, a Virtual Server must be configured to enable acceleration and/or load balancing. The Virtual Server has several configuration options depending on whether load balancing is used and how the server is intended to be accelerated. Virtual Server setup and configuration is covered in detail in Chapter 8, Virtual Servers and Traffic Control.

As discussed in Chapter 1, servers can be accelerated as stand-alone servers (no load balancing), or exist within a load balanced cluster. If the server is a stand-alone server, it will be configured in a Cluster by itself. An administrator has the option of accelerating the server using a Virtual Server IP (VIP), in which server traffic is destined to the VIP configured on the AppBeat DC, or in spoofed mode, in which traffic is routed through the AppBeat DC, and only traffic destined to the server is intercepted and accelerated while all other traffic is routed normally. Note that load-balancing is not supported when using spoofed mode, since traffic is not destined to a unique Virtual Server (VIP).

Regardless of mode, a Virtual Server must be created. The Virtual Server is then mapped to a cluster. The Virtual Server is configured with a Virtual IP address and TCP/UDP port number. In the case of a stand-alone server which will operate in spoofed mode, the Virtual Server IP address should be configured as the same IP address as the real server.
Additionally, a check box will be selected indicating the Virtual Server is a spoofed server. For load balanced HTTP clusters, additional HTTP Switching rules can be configured which enable the ability to direct client requests to different clusters based on Layer 7 application-based information such as host name, file extension, URL, or browser language.

Load Balancing Concepts - HTTP Application Load Balancing and Acceleration vs. TCP/UDP/DHCP Load Balancing

The AppBeat DC inherently operates at the HTTP layer, providing advanced load balancing capabilities, SSL termination, compression, and L7 switching/redirection features. Additionally, the AppBeat DC is capable of performing load balancing for non-HTTP applications that run over the TCP/UDP/DHCP protocol. TCP load balancing is performed on a per-connection basis. UDP load balancing is performed on UDP sessions. When creating a Cluster or Virtual Server, the administrator has the option of configuring these entities as HTTP, TCP, UDP, DHCP, or DNS. The HTTP setting should be used for all HTTP/HTTPS applications, whereas any other, non-HTTP application requiring load balancing should be configured as TCP, UDP, DHCP, or DNS.

The AppBeat DC treats traffic destined to TCP/UDP/DHCP/DNS and HTTP Virtual Servers and Clusters differently. When a Cluster and Virtual Server are configured as HTTP, the AppBeat DC will operate in its native proxy-based acceleration mode, opening a small number of persistent backend connections to each configured server. In this mode, the AppBeat DC can apply compression, SSL termination, Layer 7 Switching/Redirection, and advanced load balancing functionality to HTTP traffic.

When a Cluster and Virtual Server are configured as TCP, the AppBeat DC will function as a traditional Layer 4 load balancer. Unlike HTTP mode, which utilizes TCP multiplexing (many client-side connections and a smaller number of server-side connections), TCP mode utilizes a 1:1 connection ratio between the client and the server. Therefore, the AppBeat DC load balances each new connection among the cluster of servers using one of several load balancing algorithms. Additionally, because the AppBeat DC is not functioning as a Proxy (communicating to the backend server via its own IP address), the backend server sees the client IP address. Therefore, a server in a TCP cluster must have its Default Gateway configured as the interface of the AppBeat DC (or the VRRPc interface address, if two AppBeat DC units are deployed redundantly).

The AppBeat DC performs UDP load balancing on UDP sessions. A UDP session is established with the arrival of a frame with a new source IP address and port pair destined for a virtual server, and is terminated after a pre-defined inactivity time. DNS load balancing is available for DNS servers running the UDP protocol.
Health Monitoring

Each cluster can be configured to monitor the health of servers. Health checking can include the following mechanisms:
- Verifying the server's ability to open a TCP connection or UDP session on the designated port and checking how long it takes.
- Confirming the existence and ability for the server to serve specific content requested by the AppBeat DC.
- Finally, the AppBeat DC can also confirm the existence (or non-existence) of specific content being retrieved.
When using the UDP protocol, the health checking can include the following mechanisms:
- Forever UP/Forever DOWN health checks.
- Sending a UDP packet and expecting a response string in order to trigger an UP server event.
- Sending a UDP packet and receiving an ICMP destination port unreachable error in order to trigger a DOWN server event.
- Pinging the server and receiving a response in order to trigger an UP server event.
When using the DHCP protocol, the health checking can include the following mechanisms:
- Sending a DHCP frame and not receiving an ICMP destination port unreachable error. This is performed as a regular UDP health check.
- Renewing an existing configured address and checking that the renewal is successful. This is performed as a two-way handshake (simple state-machine).
Prompt level: Configure
By default, these settings are globally set to 64 static connections and 32 dynamic connections (96 backend connections per server).

To configure backend connections using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol and click the Farms icon. The HTTP tab appears.
3. Specify the Static and Dynamic connections to be used globally. These numbers represent the number of connections opened for each new server configured.
4. Specify the maximum total server side connections allowed. A value of 0 indicates no restriction.
5. When configuring servers, the global connection numbers can be ignored by specifying specific connection counts per individual server on a local level.
Connection Mode

The connection mode can be set to control how to open connections. Though it is suggested to leave the connections to the servers open, they can be closed. You can configure the connection mode globally or for a specific cluster. The cluster configuration overrides the global configuration. The following options are available:
- Global: For Cluster configuration only. Use the global connection mode setting.
- In-advance: Open connections in advance.
- On-demand: Open connections on demand.
To set the connection mode globally using the CLI Command Syntax:
http connection-open-mode {in-advance | on-demand {preserve-client-ip | use-own-ip}}
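For example, to have connections opened on demand while preserving the client IP (an illustrative choice; in-advance is the other option):

http connection-open-mode on-demand preserve-client-ip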
To set the connection mode globally using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol and click the Farms icon. The HTTP tab appears.
3. Select a Mode in the HTTP connection open mode section. If you select on-demand, select whether to use-own-ip, or to preserve-client-ip.
To set the connection mode for a Cluster using the CLI Command Syntax:
http connection-open-mode {global | in-advance | on-demand {preserve-client-ip | use-own-ip}}
To set the connection mode for a Cluster using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology panel, expand the Topology icon by clicking the + symbol then expand the Farms icon and click the desired Cluster icon. The Properties tab appears. Select the HTTP tab.
3. Select the HTTP Connection Mode. If you select on-demand, select whether to use-own-ip, or to preserve-client-ip.
Overflow Protection

The AppBeat DC offers overflow protection for the global, cluster and real server queues.

To enable/disable cluster queue overflow protection globally using the CLI
Command Syntax:
http server-queue-limit { disable | enable {size absolute threshold in queue transactions | duration threshold in # of seconds}}
To enable/disable cluster queue overflow protection for a cluster using the CLI
Two CLI commands are available for configuring queue overflow protection for a cluster: http server-queue-limit and http cluster-queue-limit. If you configure persistency for the cluster (as described in Persistency on page 114), the settings defined in http server-queue-limit are applied. If you do not configure persistency for the cluster, the settings defined in http cluster-queue-limit are applied.
Command Syntax:
http cluster-queue-limit {global | disable | enable {[size absolute threshold in queue transactions] [duration threshold in # of seconds]}}
http server-queue-limit {global | disable | enable {[size absolute threshold in queue transactions] [duration threshold in # of seconds]}}
To enable/disable cluster queue overflow protection for a real server using the CLI Command Syntax:
Prompt level: Real (real server's name)
http server-queue-limit {disable | enable {[size absolute threshold in queue transactions] [duration threshold in # of seconds]}}
Proxy Signature (HTTP Header Settings)

The AppBeat DC acts as a TCP intermediary, maintaining separate client and server connections. In this way, the AppBeat DC operates as a proxy and enables the ability to insert special headers on the client and server connections to identify itself.
The header used to identify the AppBeat DC can be disabled or configured as either Via or X-Via for either the client or server side connections. To configure proxy signature using the CLI Command Syntax:
http proxy-sign {to-client | to-server} {via | x-via}
no proxy-sign
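For example, to add an X-Via signature header on server-side connections (the choice of header and side is illustrative):

http proxy-sign to-server x-via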
Queue Selection

Queue Selection enables classifying which requests are to be sent via the dynamic connections and which via the static connections, based on the rules you configure. The rules are built with a combination of keywords and operators. Refer to Appendix A for the list of keywords and operators you can use.

Queue selection configuration is achieved by specifying a default action (send to static queue or send to dynamic queue) and then defining a set of rules that serve as exceptions to the default action. When configuring a queue selection rule, a priority value is required. The priority value is only used in instances when more than one queue selection rule is matched. For more information on setting the priority value refer to Setting Rule Priority on page 274. You can configure queue selection globally or for a specific cluster. The cluster configuration overrides the global configuration.

To configure queue selection globally using the CLI
Command Syntax:
To define the queue selection default action:
http queue-selection default-queue {static | dynamic}
To configure queue selection for a Cluster using the CLI: To define the queue selection default action:
http queue-selection default-queue {static | dynamic}
Configuring Cookies

You can configure up to four different cookie values to be used in a Crescendo Rules Engine (CRE) expression. Refer to Appendix A for a general explanation of the Crescendo Rules Engine.
To delete a cookie:
no http cookie{1|2|3|4}
Acceleration of Authenticated HTTP Sessions

The HTTP protocol allows various user authentication techniques to be used in case a server requires certain credentials from a user. Authentication protocols include Basic, Digest, NTLM, and Negotiate (SPNEGO), among others. Sometimes, however, HTTP authentication does not work properly with TCP consolidation (multiplexing), because the server authenticates an actual TCP connection, rather than the client's HTTP session. Because of this, the AppBeat DC can enable/disable multiplexing for various authentication protocols. The AppBeat DC recognizes the authentication protocol used from a user's request headers (specifically, the Authorization request header). What happens with each authentication protocol depends on the configuration of the AppBeat DC. The following authentication protocols are recognized:
- Basic: multiplexing can be enabled/disabled via user configuration.
- Negotiate (SPNEGO): multiplexing can be enabled/disabled via user configuration.
- Other (protocols other than those listed above): multiplexing can be enabled/disabled via user configuration.
For multiplexing authenticated sessions, the AppBeat DC provides enable/disable configuration options at two levels: global and per-cluster. First, which authentication protocols are multiplexed is configured globally. Then, each cluster has the option of handling authenticated sessions either per the global configuration, or per configuration specifically for that cluster.
To configure Authentication Multiplexing per cluster using the CLI Command Syntax:
http multiplexing {global | accelerate | not-accelerate | basic-authentication {global | accelerate | not-accelerate} | negotiate-authentication {global | accelerate | not-accelerate} | other-authentication {global | accelerate | not-accelerate}}
To configure Authentication Multiplexing globally using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol and click the Farms icon. The HTTP tab appears.
3. Select Enable connections multiplexing.
4. Select the appropriate authentication method to be accelerated.
Farm Configuration
Configuration Steps

Perform the following commands to add/remove farms. Substitute actual names for the example names where required.

To add farms using the CLI
Command Syntax:
farm name
no farm name
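For example, to create a farm named Accounting (an illustrative name, matching the grouping example earlier in this chapter):

farm Accounting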
To add farms using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol then click the Farms icon.
3. Click New. The Add New Farm window appears.
4.
Cluster Configuration
Cluster configuration includes the configuration of load balancing and health checks, the association of Compression policies (covered in Chapter 12), of server-side SSL profiles (covered in Chapter 13), and of Content Control profiles (covered in Chapter 11). Load balancing and server health check configuration is covered in detail later in this chapter.

You need a load balancing license to configure more than one server per Cluster.
To add/remove a service for an entire cluster using the CLI
Command Syntax:
service {load-balancing | compression | content-control | ssl | health-check | caching}
no service {load-balancing | compression | content-control | ssl | health-check | caching}
Only the load-balancing service may be associated with a DHCP cluster. The other features configurable at the cluster level include health-check, server-inactivity, load balancing, and compression. These features are addressed individually in greater detail throughout this manual.

To add a cluster using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol then click the farm to which you want to add the cluster.
3.
4. Specify a Cluster Name and Protocol for the cluster and click Apply.
Web Server Logging

Command Syntax:
Prompt level: Configure > Management
Example commands:
Command Syntax:
Prompt level: Configure > Topology > Farm > Cluster
Example commands:

Connection Profiles

The Connection Profiles feature enables you to configure the number of persistent connections (static and dynamic) for each cluster. This feature is used by organizations hosting large server farms to simplify the configuration process. Instead of having to configure the servers' dynamic and static connections multiple times (for each server), you can configure the connection settings once for each cluster. The connection settings for the application are then applied to all real servers within the cluster. When a cluster's connection settings are not configured, the global settings are used for the cluster.

To configure the connection profiles for each cluster using the CLI
Command Syntax:
conns [static [global | #]] [dynamic [global | #]] [max [global | #]]
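For example, to give a cluster its own local connection counts while leaving the maximum at the global setting (the numbers are illustrative):

conns static 100 dynamic 50 max global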
To configure the connection profiles for each cluster using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol and then expand the Farms icon. Click the Cluster icon of the cluster for which you want to configure connection profiles. The Cluster window appears.
3. In the Connections area, configure the number of Static connections, Dynamic connections, and Max connections. For each of these fields:
Select Global or Local from the drop-down list. If you select Global mode, the default global configuration is used for the cluster. If you select Local mode, enter the number of connections in the adjacent box. This local setting will override the default global configuration.
4. Click Apply.
Load Balancing Profiles

Load balancing and health checking are performed using profiles. Profiles are created to enable use of the same configuration multiple times, without the need for redefining the configuration each time. Load balancing profiles contain:
- The load balancing algorithm to be used (rr/wrr/wll/wlbw; see Load Balancing Algorithms on page 111).
- The persistence mode to be used (none/by hash/by application).
Cluster Protocol: HTTP, TCP (Layer 4 Load Balancing), UDP, DNS, or DHCP

The AppBeat DC inherently operates at the application (HTTP) layer, functioning as a full proxy. The AppBeat DC therefore sees an application as a series of requests and responses, instead of only packets and TCP connections, like a traditional Layer 4 Load Balancer. Functioning at the HTTP level also enables the AppBeat DC to perform advanced load balancing functions like L7 Switching and Redirection, while simultaneously having the ability to compress response data in real time and secure an application with SSL.

The AppBeat DC is also capable of performing load balancing for non-HTTP applications that run over the TCP/UDP/DHCP protocols. TCP load balancing is performed on a per-connection basis. TCP Layer 4 load balancing is still performed using TCP termination. The AppBeat DC terminates all TCP connections that need layer 4 load balancing services, thus allowing them to use the unit's advanced TCP services such as FastTCP and buffering. These services will help the TCP connections perform more optimally.

With TCP layer 4 load balancing, since client-side connections are terminated by the AppBeat DC, server-side connections are initiated by the unit. Since there is no connection consolidation for non-HTTP connections (i.e., no multiplexing), there is a 1-to-1 relationship between client-side and server-side connections. However, the server-side connections still carry the source IP address of the original client, in order to allow server logging mechanisms to operate as before. This means that TCP servers must guarantee their path back to the client through the AppBeat DC. This is often done by configuring the IP address of the server-side interface of the AppBeat DC to be the default gateway of the server. This way, all response traffic from the server flows through the AppBeat DC to assure proper TCP connection handling.

To configure TCP/UDP/DNS/DHCP Load Balancing, perform the following steps:
1. Configure each cluster that is configured with non-HTTP servers as a TCP/UDP/DNS/DHCP cluster, rather than an HTTP cluster.
2. Select the relevant protocol for the cluster.
3. Configure the real servers within the cluster to route return traffic back through the AppBeat DC. This is accomplished by configuring the server's default route (or network specific route) to route through the AppBeat DC's physical IP interface (or VRRPc interface, if redundantly deployed).
4. Configure a Virtual Server (with IP address and TCP/UDP port) as a TCP/UDP/DNS/DHCP Virtual Server, rather than an HTTP virtual server. Note that the protocol of the Virtual Server must be the same as the protocol of the cluster.
A TCP virtual server can only be bound to a single cluster. That cluster must be configured as a TCP cluster. Also, no SSL or compression services are available to TCP/UDP/DNS/DHCP Virtual Servers or clusters. The only exception is that SSL is available on a TCP cluster.

Load Balancing Algorithms

The algorithm represents the logic by which application requests will be distributed to available servers in a cluster. The following options exist:
- Round Robin (RR): Application requests are forwarded in a cyclical fashion to each available server.
- Weighted Round Robin (WRR): Similar to Round Robin in that requests are cyclically distributed among available servers, however they are forwarded based on each server's configured weight. Servers are configured with a weight, or metric, between 1 and 100. The higher the weight, the greater the priority, or amount of traffic, a server should receive relative to other lower weighted servers.
- Server Response Time (RESPONSE-TIME): The AppBeat DC calculates the server's response time as it receives updates from the server health check mechanism regarding the servers in the cluster. The load balancing process distributes a high percentage of the load to the fast servers, enabling them to receive more traffic, and a small percentage of the load to the slow servers, enabling them to receive less traffic. In addition, each cluster has a Stop Traffic Factor (STF) parameter, which removes very slow servers from being eligible for traffic distribution. When a server's response time is greater than STF multiplied by the response time of the fastest server, the server is no longer eligible for new requests. STF has the value of three by default, but can accept values from 1 to 100. The server response time is updated every five seconds by default, enabling the response times to be recalculated and the server priorities to change.
The server response time option can only be enabled when the Health Check is enabled. If the Health Check is disabled on the cluster, a warning message appears.
- Weighted Least Load (WLL): A common name for WLPR and WLC. WLPR is used in HTTP clusters, and WLC is used in TCP clusters.
- Weighted Least Pending Requests (WLPR): For HTTP Clusters only. The AppBeat DC is fully application aware, knowing the status of each outstanding client request and the server's subsequent response. This application level intelligence enables the AppBeat DC to make extremely accurate load balancing decisions based on real-time application knowledge of each server's pending request load.
- Weighted Least Connections (WLC): For TCP Clusters. When performing in TCP mode (Layer 4 Load Balancing), the AppBeat DC keeps track of the number of
individual TCP connections load balanced to each server within a cluster. The AppBeat DC can make load balancing decisions based on a combination of the server's configured weight as well as the number of connections currently established with each server.
- Weighted Least Bandwidth (WLBW): The actual weight of a real server for load balancing purposes is calculated based on its original configured weight and the bandwidth it consumed during the last 5 seconds. For example, if two servers, A and B, have the same configured weights, and A's measured bandwidth is 20 Mbps while B's measured bandwidth is 40 Mbps, then A should get twice as much traffic as B. The decisions are weighed anew every time new bandwidth measurements are performed (every few seconds).
Configuring a Load Balancing Algorithm

To configure a load balancing algorithm using the CLI
Command Syntax:
profile name algorithm {wll response-time stop-traffic-factor | wrr response-time stop-traffic-factor | wlbw response-time stop-traffic-factor | rr}
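For example, to create a profile that uses the Round Robin algorithm (the profile name lb-profile-1 is an illustrative placeholder):

profile lb-profile-1 algorithm rr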
To configure a load balancing algorithm using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Services icon by clicking the + symbol and then click the Load Balancing icon. Click New. The New Load Balancing Profile window appears.
3. Specify a Name.
4. Select the Load Balance Algorithm.
5. Optionally specify to Use server response time, and enter a value in the Stop traffic factor field.
To specify the Persistency, refer to Persistency. When the Load Balance Algorithm is set to RR, the server response mechanism is disabled. When Use Server Response Time is set to Disable, the Stop Traffic Factor field is disabled.

Attaching a Load Balancing Profile to a Cluster

After configuring a load balancing profile, including its persistency and health-check profile, you can attach the load balancing profile to a cluster.
To attach a load balancing profile to a cluster using the CLI Command Syntax:
service load-balancing profile profile-name
To attach a load balancing profile to a cluster using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol then expand the Farms icon and click the desired Cluster icon. Select the Services tab.
3. Select a load balancing profile from the Load balancing profile drop-down list.
Persistency

Some applications require that a client communicate with the same server in a load balanced cluster throughout the duration of their session. This functionality is called persistence, as each new connection from the same client should be kept persistent, or sticky, to the same server. The persistency mechanism of the AppBeat DC offers several settings:
- None: No persistence is enabled for the cluster. All requests are distributed via the configured load balancing algorithm.
- Application Level Persistency (by-application): Available only for HTTP Clusters. The AppBeat DC inserts data into the HTTP/HTTPS response of each new client request. The data identifies to which server the client's requests should be forwarded. Therefore, each request from the client includes the inserted data, which the AppBeat DC uses to identify to which server to forward the request, thus maintaining persistency throughout the duration of the client's session.
- Hashing Persistency (by-hash): The AppBeat DC hashes various fields of every transaction received in a certain cluster, and sets the destination of each request based on the hash function value of its relevant fields, as specified in the persistence tables. The fields to be hashed are specified using a CRE expression (refer to Appendix A for the list of keywords and operators you can use). If you specify by-hash, the following settings are optional:
- Maximum Hash Length: This specifies the length from the beginning of the fields' values to be used for hashing.
- Aging Period: The AppBeat DC identifies a client by how long the servers are valid in the persistence tables.
Previous versions included the "by-ip" option. The current version improves this feature. The "by-hash" functionality supports the "by-ip" option and also supports hashing of various request fields.
To configure persistency using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Services icon by clicking the + symbol and then expand the Load Balancing icon by clicking the + symbol. Select the load balancing profile. The cluster's Properties window appears.
3.
Health Check Configuration

The AppBeat DC will only forward traffic to available (healthy) servers and verifies the health of a server by one of several means. By default, the AppBeat DC attempts to connect directly to and verify the basic connectivity of each server configured in a cluster (HTTP, TCP, UDP, DNS, or DHCP). If the connection cannot be established, the AppBeat DC marks the server as Operationally Down and will continue to periodically check the health of the server.

More advanced health checking options exist, allowing the capability to request specific content from a server and verify content within server responses. This type of health checking is referred to as data checks within the configuration. The server's response is analyzed to determine whether the server is functioning properly. Data checks are available for HTTP, TCP, UDP, DNS, or DHCP clusters and are covered in more detail in Health Checking for HTTP Clusters on page 119 and Health Checking for TCP, UDP and DNS (non-HTTP) Clusters on page 120, respectively.
- Up: Always define the status of all real servers in the Cluster as Up, regardless of their actual status.
- Down: Always define the status of all real servers in the Cluster as Down, regardless of their actual status.
- Ping: Ping the server at the specified IP address, at the frequency specified in the Frequency field. If the server is up, then after a consecutive number of Failures, it is considered down. If the server is down, then after a consecutive number of Successes, it is considered up. If you select this mode, you must enter values in the Frequency, Failures, Success, IP, and Wait time fields.
- UDP: A UDP packet is sent to the specified IP and Port, every Frequency. The contents of the UDP packet is the data specified in Request. If you also enter a string in the Response field, the system also checks if the string is Present or not in the server response (depending on whether you check the Present box). If the server is up, then after a consecutive number of Failures, it is considered down. If the server is down, then after a consecutive number of Successes, it is considered up. If you select this mode, you must enter values in the Frequency, Failures, Success, IP, Port, Request, and Wait time fields. You can optionally enter values in the Response and Present fields.
Use the UDP health check mode for DHCP health checks.
- TCP: A TCP packet is sent to the specified IP and Port, every Frequency. The contents of the TCP packet is the data specified in Request. If you also enter a string in the Response field, the system also checks if the string is Present or not in the server response (depending on whether you check the Present box). If the server is up, then after a consecutive number of Failures, it is considered down. If the server is down, then after a consecutive number of Successes, it is considered up. If you select this mode, you must enter values in the Frequency, Failures, Success, IP, Port, Request, and Wait time fields. You can optionally enter values in the Response and Present fields.
- HTTP: An HTTP packet is sent to the specified IP and Port, every Frequency. If you also enter a string in the Response field, the system also checks if the string is Present or not in the server response (depending on whether you check the Present box). For HTTP clusters, specify a URL and Host. If the server is up, then after a consecutive number of Failures, it is considered down. If the server is down, then after a consecutive number of Successes, it is considered up.
If you select this mode, you must enter values in the Frequency, Failures, Success, IP, Port, URL, Host, and Wait time fields. You can optionally enter values in the Response and Present fields.
HTTPS: An HTTPS packet is sent to the specified IP and Port, every Frequency. If you also enter a string in the Response field, the system also checks whether the string is Present or not in the server response (depending on whether you check the Present box). Specify a URL and Host. You can also specify an SSL profile to be used for the health check. If you do not specify an SSL profile, the SSL profile associated with the cluster is used. If the server is up, then after a consecutive number of Failures, it is considered down. If the server is down, then after a consecutive number of Successes, it is considered up. If you select this mode, you must enter values in the Frequency, Failures, Success, IP, Port, URL, Host, and Wait time fields. You can optionally enter values in the SSL Profile, Response and Present fields.
Script: An alternate method of specifying a health check. Refer to Configuring Health Checks Using the Scripting Language on page 126 for instructions on how to use this method.
To implement health checks for DNS sessions, you can only use the Script option.
Frequency (1-300 seconds): Default value is 5 seconds. Defines the number of seconds between health check messages.
Consecutive Failures (1-100): Default value is 3. The number of consecutive failures that must occur before the AppBeat DC classifies a server as down.
Consecutive Successes (1-100): Default value is 3. The number of consecutive successes that must occur before the AppBeat DC classifies a server as up.
IP: Destination IP address.
Port: Destination port.
Request: The content of the request packet.
Response: The AppBeat DC checks whether this string appears in the server response.
Present: Whether the Response string must be present in the server response.
Wait Time (1-15000 milliseconds): Default value is 1 second (1000 milliseconds). Defines the number of milliseconds the AppBeat DC should wait for a server response to a health-check message. If a healthy response is not returned within the designated time, the AppBeat DC classifies the request as a failure.
URL to be checked: Used with HTTP clusters only. The URL to be requested from each server within the cluster. The URL is requested at the configured Frequency. It is recommended that a small page be designated to reduce the load on the server.
Host Header: Used with HTTP clusters only. The host name to be used for the health check request. This is useful if several Virtual Hosts exist on a single server. For example, a server may have a single IP address, but distinguishes between several virtual servers by the Host Header (for example, www.site1.com vs. www.site2.com) to determine which virtual server should serve the content. If no Host Header is configured, the host header will consist of the IP address of the server being health checked.
SSL profile: Used with HTTPS clusters only. The SSL profile to be used for the health check. If you do not specify an SSL profile, the SSL profile associated with the cluster is used.
How these options are used depends on whether the cluster is made up of HTTP servers or TCP/UDP/DHCP (non-HTTP) servers. This is configured on a per-cluster basis.
Health Checking for HTTP Clusters
If the cluster is HTTP, then only the following configuration parameters are relevant:
Standard options.
Mode. Frequency. Wait time. Consecutive failures. Consecutive successes. URL. Host. SSL profile (for HTTPS health checks).
The data check request field is not applicable since the URL field determines the request that is sent to the server. If the data check response field is left blank, then the health check mechanism operates exactly as it did in versions before 4.2: it sends a request to the server and only validates that the response has a status code of 200. If the data check response field is configured, however, then the AppBeat DC will parse the response from the server. The AppBeat DC will parse both the headers and the body of the response in order to look for the presence or absence of the response string configured, depending on whether the option to validate the absence of the response string is enabled. The data check response field is case-sensitive. The binary option of the data check response field is not relevant with HTTP.
Health Checking for TCP, UDP and DNS (non-HTTP) Clusters
If the cluster is a TCP, UDP or DNS (non-HTTP) cluster, then only the following configuration parameters are relevant:
Standard options.
Mode. Frequency. Wait time. Consecutive failures. Consecutive successes. IP. Port.
With TCP, UDP and DNS clusters, there are several ways of configuring health checks:
Without using data checks If no data check options are configured, then the TCP servers will only be checked at the TCP connection level. The AppBeat DC attempts to open a TCP connection to the server. If the connection is successfully opened before the wait time expires, then the health check is considered a success. Otherwise, it is considered a failure.
Only using the request data check option If the intent of health checking is only to verify that the server responds with some data to a request (any data), then only configure the data check request option. In this case, the AppBeat DC first attempts to open a TCP connection/UDP session with the server. If the connection/session is successfully opened, the AppBeat DC sends the data configured in the request field to the server (either text or binary, per configuration). After the request is sent, the AppBeat DC expects a response (any response) from the server. If a response is received, the server is considered UP. In a UDP session, an ICMP error response triggers a DOWN server event. All of this (opening a connection, sending the request, and getting a response) has to happen before the wait time expires; otherwise the health check is considered a failure. This option is not available for DNS.
Only using the response data check option Certain TCP applications respond with a banner when a connection is first opened to them. Mail servers (SMTP and POP) are examples of such cases. For checking the health of these servers, you only need to configure the data check response option. If only the response data check option is configured, the AppBeat DC first attempts to open a connection with the server. If the connection is opened successfully, the AppBeat DC expects data from the server to follow immediately after the connection is opened. The contents of the response are compared to the response option (text or binary, per configuration) and the presence or absence of the configured response is validated, depending on whether the validate absence of response option is configured. Receipt of a validated response triggers an UP server event. All of this (opening the connection, receiving the response, and validating it against the response option one way or another) needs to happen before the wait time expires; otherwise, the health check is considered a failure.
Using both request and response data check options For bi-directional application health checking of TCP/UDP servers, both the request and response data check options must be configured. In these cases, the AppBeat DC first attempts to open a TCP connection/UDP session with the server. Once the connection/session is opened, the AppBeat DC sends the server the contents of the request field (in text or binary, per configuration). Then, the AppBeat DC examines the server response and compares it to the content of the response field (in text or binary, per configuration) to validate the presence or absence of the configured response, depending on whether the validate absence of response option is configured. Receipt of a validated response triggers an UP server event. All of this (opening the connection, sending the request, receiving the response, and validating it against the response option one way or another) needs to happen before the wait time expires; otherwise, the health check is considered a failure.
Through these four options, all cases of server health checking for TCP/UDP (non-HTTP) servers are covered. Clusters should be configured according to the type of application the TCP/UDP servers within the cluster host. With all TCP/UDP/DNS checks, the time it takes to open the TCP connection/UDP session/DNS session is part of the overall time of the health check. As with HTTP checks, the response field is case-sensitive (in text responses). When analyzing server responses, the AppBeat DC will accept data from the server regardless of the number of packets exchanged between the AppBeat DC and the server, as long as the total time does not exceed the wait time. For DHCP, the AppBeat DC uses UDP-based health checks.
To configure a health check profile using the CLI
Command Syntax:
To configure an Up health check:
profile name mode up
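Example command (a minimal sketch; the profile name is illustrative):
health-check> profile always_up mode up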
For more information on the script mode, refer to Configuring Health Checks Using the Scripting Language on page 126. To delete a health check profile:
profile name mode mode no
DHCP Example 1: MAC address of the management port: 00-90-FB-13-47-21. Gateway IP address: 192.168.6.200 (0xC0A806C8). No response string to check.
health-check> profile myprofile mode script "udp(1000 ,env.ip, env.port, my.sip, 67 , 0x'01.01.06.01.00.00.00.00.00.00.80.00.00.00.00.00.00.00.00.00.00.00 .00.00.C0.A8.06.C8.00.90.FB.13.47.21.00.00.00.00.00.00.00.00.00.00.0 0.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00. 00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00 .00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.0 0.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00. 00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00 .00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.0 0.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00. 00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00 .00.00.00.00.00.00.00.00.00.00.63.82.53.63.35.01.01.37.0A.21.06.0C.0 F.1C.2A.48.01.03.78.3C.09.43.72.65.73.63.65.6E.64.6F.74.01.01.FF ,' ,present)"
DHCP Example 2: MAC address of the management port: 00-90-FB-13-47-21. Gateway IP address: 192.168.6.200 (0xC0A806C8). Check that the response is a DHCP OFFER message, as expected (the response string is written as a binary stream of the DHCP message type option, with the DHCP OFFER code).
health-check> profile myprofile mode script "udp(1000 ,env.ip, env.port, my.sip, 67 , 0x'01.01.06.01.00.00.00.00.00.00.80.00.00.00.00.00.00.00.00.00.00.00 .00.00.C0.A8.06.C8.00.90.FB.13.47.21.00.00.00.00.00.00.00.00.00.00.0 0.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00. 00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00 .00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.0 0.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00. 00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00 .00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.0 0.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00. 00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00 .00.00.00.00.00.00.00.00.00.00.63.82.53.63.35.01.01.37.0A.21.06.0C.0 F.1C.2A.48.01.03.78.3C.09.43.72.65.73.63.65.6E.64.6F.74.01.01.FF ,' 0x'35.01.02', present)"
To configure health checks using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Services icon by clicking the + symbol and then select the Health check icon. Click New. The Add New Health Check profile window appears.
3. Specify a Name.
4. Select the Mode.
5. Specify values in the Frequency, Failures, Success, IP, Port, Request, URL, Host, and Wait time fields, according to the mode you selected.
6. For HTTP clusters, specify the URL and Host to be requested.
7. Define a Response if required. If not defined, the AppBeat DC determines health based on the HTTP response code returned from the server, identifying a response of 200 as healthy.
Attaching a Health Check Profile to a Cluster
After configuring a health check profile, you can attach it to a cluster.
To attach a health check profile to a cluster using the CLI
Command Syntax:
service health-check profile profile-name
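Example command (a hedged sketch; the cluster prompt and profile name are illustrative):
cluster Cluster-1> service health-check profile myprofile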
To attach a health check profile to a cluster using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol, then expand the Farms icon and click the desired Cluster icon. Select the Services tab.
3. Select a health check profile from the Health check profile drop-down list.
Configuring Health Checks Using the Scripting Language
You can use the script option, available in the CLI, to configure the following types of health checks: Ping, TCP, DNS, UDP, HTTP, HTTPS, Up, and Down.
The script option is the only method for configuring health checks for DNS sessions. The syntax of the scripting language requires the following:
Strings must be enclosed in single quotes. For example: 'account'. An empty string is indicated by ''. A list must appear as a comma-separated list within braces. For example: {200,400}.
Configuring a Ping Health Check Using the Scripting Language
Command Syntax:
profile name Frequency # Failures # Success # mode script ping(WT, IP)
Prompt level Configure Services Health Check Where the script syntax is:
WT: Wait time for a server response. The range of values is 1-15000 milliseconds.
IP: The destination IP address. Possible values: an IP address, or env.ip (the configured IP address of each real server in the cluster).
Example command:
health-check> profile ping1 frequency 10 failures 1 success 2 mode script "ping(1000, env.ip)"
Configuring a TCP Health Check Using the Scripting Language
Command Syntax:
profile name Frequency # Failures # Success # mode script tcp(WT, IP, Port, SIP, SPort, reqStr, rspStr, exPrsnt)
Prompt level Configure Services Health Check Where the script syntax is:
WT: Wait time for a server response. The range of values is 1-15000 milliseconds.
IP: The destination IP address. Possible values: an IP address, or env.ip (the configured IP address of each real server in the cluster).
Port: The destination port. Possible values: an integer, or env.port (the configured port number of each real server in the cluster).
SIP: The source IP address. Possible values: an IP address, or my.ip (the configured source IP address for each server in the cluster).
SPort: The source port. Possible values: an integer, or my.port (the configured source port number for each real server in the cluster).
reqStr: The request string. Any string of bytes. An empty string indicates not to send a request packet.
rspStr: The expected response string. Any string of bytes. An empty string means that any response is OK.
exPrsnt: Whether the response string is expected to be present or missing in the server response. Possible values: present, missing.
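For example, a hedged sketch of a TCP data check that only validates an SMTP banner (the profile name, port, and expected string are illustrative):
health-check> profile smtp_hc frequency 10 failures 3 success 2 mode script "tcp(2000, env.ip, 25, my.ip, my.port, '', '220', present)"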
Configuring a DNS Health Check Using the Scripting Language
Command Syntax:
profile name Frequency # Failures # Success # mode script dns(WT, IP, Port, SIP, SPort, domainStr, rspIPList, exPrsnt)
Prompt level Configure Services Health Check Where the script syntax is:
WT: Wait time for a server response. The range of values is 1-15000 milliseconds.
IP: The destination IP address. Possible values: an IP address, or env.ip (the configured IP address of each real server in the cluster).
Port: The destination port. Possible values: an integer, or env.port (the configured port number of each real server in the cluster).
SIP: The source IP address. Possible values: an IP address, or my.ip (the configured source IP address for each server in the cluster).
SPort: The source port. Possible values: an integer, or my.port (the configured source port number for each real server in the cluster).
domainStr: The string sent for resolution to the DNS server. This can be any string of bytes.
rspIPList: The list of IP addresses that are possible responses for the domainStr sent.
exPrsnt: Whether the rspIPList is expected to be present or missing in the server response. Possible values: present, missing.
Example command: The following health check sends a DNS request for www.abc.com. If one of the addresses 1.1.1.2 or 1.1.1.3 is returned, the health check is considered a success.
health-check> profile dns_hc frequency 10 failures 1 success 2 mode script "dns(1000, env.ip, env.port, my.sip, my.sport, 'www.abc.com', {1.1.1.2,1.1.1.3}, present)"
Configuring a UDP Health Check Using the Scripting Language
Command Syntax:
profile name Frequency # Failures # Success # mode script udp(WT, IP, Port, SIP, SPort, reqStr, rspStr, exPrsnt)
WT: Wait time for a server response. The range of values is 1-15000 milliseconds.
IP: The destination IP address. Possible values: an IP address, or env.ip (the configured IP address of each real server in the cluster).
Port: The destination port. Possible values: an integer, or env.port (the configured port number of each real server in the cluster).
SIP: The source IP address. Possible values: an IP address, or my.ip (the configured source IP address for each server in the cluster).
SPort: The source port. Possible values: an integer, or my.port (the configured source port number for each real server in the cluster).
reqStr: The request string. Any string of bytes. An empty string indicates not to send a request packet.
rspStr: The expected response string. Any string of bytes. An empty string means that any response is OK.
exPrsnt: Whether the response string is expected to be present or missing in the server response. Possible values: present, missing.
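For example, a hedged sketch of a UDP check that sends a request string and accepts any response (the profile name and request text are illustrative):
health-check> profile udp_hc frequency 10 failures 3 success 2 mode script "udp(1000, env.ip, env.port, my.ip, my.port, 'status', '', present)"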
Configuring an HTTP Health Check Using the Scripting Language
Command Syntax:
profile name Frequency # Failures # Success # mode script http(WT, IP, Port, url, host, rspStr, exPrsnt, ranges)
Prompt level Configure Services Health Check Where the script syntax is:
WT: Wait time for a server response. The range of values is 1-15000 milliseconds.
IP: The destination IP address. Possible values: an IP address, or env.ip (the configured IP address of each real server in the cluster).
Port: The destination port. Possible values: an integer, or env.port (the configured port number of each real server in the cluster).
url: A string containing the URL requested from each server within the cluster.
host: A string containing the host header to use in the request. If this parameter is empty, the host header is the IP address of the server being health checked.
rspStr: The string expected in the server response. Any string of bytes. An empty string means that any response is OK.
exPrsnt: Whether the rspStr is expected to be present or missing in the server response. Possible values: present, missing.
ranges: A list containing the HTTP response codes acceptable by the test.
Example commands:
health-check> profile http1 frequency 10 failures 1 success 2 mode script "http(1000, env.ip, env.port, 'my-account', 'my-bank', 'data_check', present, {200})"
Configuring an HTTPS Health Check Using the Scripting Language
Command Syntax:
profile name Frequency # Failures # Success # mode script https(WT, IP, Port, url, host, rspStr, exPrsnt, ranges, sprof)
Prompt level Configure Services Health Check Where the script syntax is:
WT: Wait time for a server response. The range of values is 1-15000 milliseconds.
IP: The destination IP address. Possible values: an IP address, or env.ip (the configured IP address of each real server in the cluster).
Port: The destination port. Possible values: an integer, or env.port (the configured port number of each real server in the cluster).
url: A string containing the URL requested from each server within the cluster.
host: A string containing the host header to use in the request. If this parameter is empty, the host header is the IP address of the server being health checked.
rspStr: The string expected in the server response. Any string of bytes. An empty string means that any response is OK.
exPrsnt: Whether the rspStr is expected to be present or missing in the server response. Possible values: present, missing.
ranges: A list containing the HTTP response codes acceptable by the test.
sProf: The SSL profile to be used for the health check. Possible values: an SSL profile name, or env.sprof (the SSL profile associated with the cluster).
Example commands:
health-check> profile ssl_default frequency 10 failures 1 success 2 mode script "https(1000, env.ip, env.port, 'my-account', 'my-bank', 'data_check', present, {200}, env.sprof)"
Configuring Up and Down Health Checks Using the Scripting Language
Command Syntax:
profile name mode script up
Command Syntax:
profile name mode script down
Prompt level Configure Services Health Check
Elastic Resource Control
Elastic Resource Control (ERC) is a mechanism that controls the system's resources in order to save energy. In clusters on which ERC is applied, real servers with a light traffic load are put in standby mode. Depending on traffic load, these real servers may be brought out of standby, as needed. ERC is implemented by means of ERC profiles that can be attached to a cluster. Each profile defines an ERC policy that determines when to put real servers on the cluster in or out of standby, depending on traffic conditions. An ERC profile can be attached only to clusters on which multiplexing is enabled.
An ERC profile is composed of the following components, which together determine the ERC policy:
Minimum Servers: The minimal number of real servers that must remain active. The AppBeat DC will not put real servers in standby if the number of active servers would fall below this value, even if traffic is very light. You can specify this setting as a number or as a percentage of the number of real servers in the cluster.
Maximum Servers: The maximal number of active real servers. The AppBeat DC will not use more than this number of servers, even if there are more real servers in the cluster and traffic load is very high. You can specify this setting as a number or as a percentage of the number of real servers in the cluster.
Service Level: The service level policy. This is determined as follows:
Vservice: If you specify a Vservice, the policy of the ERC profile is applied only to traffic that matches the Vservice.
Max Response Time: The maximum allowed response time. The time is measured from the time a request reaches the AppBeat DC, until the AppBeat DC finishes receiving the response from the real servers. If the response time exceeds the value specified here, the AppBeat DC will start bringing real servers out of standby. The range of values is between 1 and 15,000 milliseconds.
Min Response Time: A percentage of the Max Response Time. If the actual response time goes below the Min Response Time, the AppBeat DC will try to put servers in standby. The range of values is 1-99%. The default value is 75%.
Critical Response Time: A percentage of the Max Response Time. If the actual response time rises above the Critical Response Time, the AppBeat DC will bring real servers out of standby to avoid reaching the Max Response Time. Servers with a higher weight will be used first. The range of values is 1-99%. The default value is 90%.
Min Transactions: Below this number of transactions per second (TPS), the ERC policy is ignored. The range of values is any positive number. The default value is 5 TPS.
Stabilization Period: The minimum number of seconds that must elapse between changes caused by the ERC policy. A change is an implementation of a decision to put servers in or out of standby. The range of values is 20-3600 seconds. If you expect large changes in traffic load, specify a smaller stabilization period.
Configuring an ERC Profile
To configure an ERC profile using the CLI
Command Syntax:
To add an ERC profile:
profile name
To specify the minimum number of active servers:
profile name minimum-servers {number number | percent percentage}
To specify the maximum number of active servers:
profile name maximum-servers {number number | percent percentage}
To specify that the ERC policy will be implemented only on traffic classified by a Vservice:
profile name service-level vservice name
To specify the maximum response time:
profile name service-level max-response-time time-in-ms
To specify the minimum response time:
profile name service-level min-response-time percent
To specify the critical response time:
profile name service-level critical-response-time percent
To specify the minimum number of transactions per second:
profile name service-level min-transactions number
To specify the stabilization period between ERC changes:
profile name stabilization-period number
Prompt level Configure Services Resource Control
Example commands:
Resource-control> profile ERC1 Resource-control> profile ERC1 minimum-servers percent 40 Resource-control> profile ERC1 maximum-server number 1000 Resource-control> profile ERC1 service-level vservice vservice1
Resource-control> profile ERC1 service-level max-response-time 150 Resource-control> profile ERC1 service-level min-response-time 75 Resource-control> profile ERC1 service-level critical-response-time 90 Resource-control> profile ERC1 service-level min-transactions 5 Resource-control> profile ERC1 stabilization-period 120
You can use the show resource-control command to see your defined ERC profiles.
Attaching an ERC Profile to a Cluster
After configuring an ERC profile, you can attach it to a cluster.
To attach an ERC profile to a cluster using the CLI
Command Syntax:
service resource-control profile profile-name
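Example command (a hedged sketch; the cluster prompt is illustrative and ERC1 is the profile defined above):
cluster Cluster-1> service resource-control profile ERC1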
The show real command displays PW-SAVE for real servers that are currently deactivated due to ERC policy.
Server Inactivity Check
The AppBeat DC opens a limited number of persistent TCP connections to each accelerated server. If a connection is idle (no data sent to, or received from, the server) for 30 seconds (default setting), one of three actions can be defined:
The connection can be closed, and a new one immediately opened. The connection can be kept alive using an HTTP HEAD method to verify connectivity over the open connection. The connection can be kept alive using an HTTP GET method to verify connectivity over the open connection.
A path and file name can be specified for the HEAD and GET keep-alive methods. The server-inactivity feature can be configured on a global level, affecting all servers configured for acceleration, or on an individual server level.
By default, the server-inactivity feature is configured globally to close connections. If configured in Keep-Alive GET mode, it is advised to configure the URL as a small static page, up to 1000 bytes in total size, to avoid creating unnecessary load on the server. Keep in mind, the server-inactivity feature only executes after 30 seconds of inactivity on each individual backend server connection.
Recommended Server Settings
The following table lists the recommended server settings.
Table 8: Recommended Server Settings
Web Server / Operating System: Recommended Configuration
Microsoft IIS 6.0 (Server 2003): server-inactivity close
Microsoft IIS 5.0 (Server 2000): server-inactivity keep-alive [HEAD | GET]
Apache* (Linux, BSD, Windows): server-inactivity close
* Apache requires the following modifications be made to the httpd.conf file usually found in the /etc/httpd/conf/ directory.
KeepAlive On (by default, this is set to Off).
MaxKeepAliveRequests 0 (provides unlimited requests; by default, set to 100).
KeepAliveTimeout 45 (by default, set to 15).
The settings outlined in the table are recommendations based on typical environments. Because many applications may vary based on customization, it is recommended that the settings be verified with a Crescendo Networks support engineer to ensure optimal performance. For example, the default server-inactivity timer is set to 30 seconds. If, for some reason, a request may take longer than 30 seconds to be processed by the server, the server-inactivity timer value should be increased to allow for maximum server processing time.
Configuring Server Inactivity Globally
To configure server-inactivity globally using the CLI
Command Syntax
http server-inactivity {close | keep-alive {url {get | head}}}
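Example command (a hedged sketch; the keep-alive URL is illustrative):
http server-inactivity keep-alive /keepalive.html head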
To configure server-inactivity globally using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol and click the Farms icon. The HTTP tab appears.
3. Specify whether the Server Inactivity option is Close or Keep Alive.
4. If you specify Keep Alive mode, enter a URL and select either the Get or Head method.
Configuring Server Inactivity Per Cluster
To configure server-inactivity per cluster using the CLI
Configuring the server-inactivity is an extension of the Cluster configuration within the Farm > Cluster prompt level. Each cluster can be configured with a unique server-inactivity action (Close, Keep-Alive GET, or Keep-Alive HEAD) similar to the global configuration. Additionally, the Cluster can be configured to use the global settings.
Command Syntax
http server-inactivity {close | global | keep-alive {url {get | head}}}
To configure server-inactivity per cluster using the GUI
Server inactivity can be configured per Cluster.
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol, then expand the Farms icon and click the desired Cluster icon. Select the HTTP tab.
3. Specify whether the Server Inactivity option is Close, Keep Alive, or Global.
If you specify Keep Alive mode, enter a URL and a GET or HEAD method. If you specify Global mode, the server inactivity setting is taken from the global configuration, as described in Configuring Server Inactivity Globally on page 136.
Configuring Server Inactivity Per Real Server
To configure server-inactivity per real server using the CLI
You can configure server-inactivity for an individual real server within the Farm > Cluster prompt level. Each real server can be configured with a unique server-inactivity action (Close, Keep-Alive GET, or Keep-Alive HEAD) similar to the global configuration. Additionally, the real server can be configured to use the global settings or the cluster settings.
Command Syntax
Real real-server-name http server-inactivity {close | global | cluster | keep-alive {url {get | head}}}
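Example command (a hedged sketch; the server name and URL are illustrative):
real server1 http server-inactivity keep-alive /keepalive.html head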
Real Servers
When configuring servers to be accelerated or just load-balanced, it is important to verify several settings in the server configuration before defining them in the AppBeat DC configuration. For information pertaining to server configuration, consult Chapter 6. Server Preparation and Logging Considerations before proceeding with configuring real servers.
Configuring a Real Server
Real servers are defined within a cluster. When the server is configured, the AppBeat DC immediately attempts to connect to the server. In the case of a real server defined in an HTTP cluster, the AppBeat DC will attempt to open the preconfigured number of backend connections to the server, as well as begin performing separate Health Checks (if configured). In the case of a real server defined in a TCP cluster, the AppBeat DC will begin performing health checks (if configured).
Backup Servers
When configuring a real server, the option exists to make the server a backup server. This designation means that the server configured as backup within the cluster will not receive any traffic, unless all other servers within the cluster are unavailable. The configuration allows for only one backup server to be designated per cluster. When all servers in a cluster fail, the backup server will become active. When the previously failed server becomes available again, the backup server will do the following, based on whether the cluster is configured as an HTTP or TCP-based cluster:
HTTP Cluster: The backup server will immediately stop receiving new traffic and will be placed in backup mode again.
TCP Cluster: The backup server will not be forwarded any new TCP connections, and will gracefully time out any existing connections. Once all connections are no longer active, and have been timed out, the server will be placed in backup mode again.
no real real-name
To add a real server using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol, then expand the Farms icon and the Cluster icon.
3. Click the Real Server icon and click New. The Add New Server window appears.
4. Specify a name for the server, the server's real IP address, and the TCP port of the HTTP application to be accelerated.
5. Additionally, configure additional services such as Logging, History, or the backend connections.
6. Click Apply.
To configure backend connections per server using the CLI When configuring an individual server, the backend connections will use the global settings unless otherwise specified. The following command outlines the configuration of connection settings per server. Command Syntax:
real name conns static # of static dynamic {by cluster | # of dynamic}
real name conns max {by cluster | maximum # of connections (static+dynamic) | unlimited}
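Example commands (a hedged sketch; the server name and connection counts are illustrative):
real server1 conns static 10 dynamic 20
real server1 conns max 100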
Device Configuration
Devices are optional logical entities you can define to limit the number of concurrent TCP connections on a single physical server at any point in time. A Device corresponds to a physical server, to which you can attach any number of Real servers. The aggregation of the current connections of all the Real servers belonging to the device cannot exceed the maximum number of TCP connections you specify for the device. Device configuration does not replace the mandatory Farm > Cluster > Real configuration you must perform. Device configuration is optional. Device configuration does not impose a hierarchy as the Farm > Cluster > Real configuration does; it only provides a method for limiting the number of concurrent connections on a physical server. Device configuration consists of specifying a Device and the maximum number of connections it can support, and of logically associating Real servers with the Device.
Configuring Devices
This section lists the commands needed to configure Devices. Substitute actual names for the example names where required.
To configure the maximum number of connections for a Device using the CLI Command Syntax:
device name conns num
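Example command (a hedged sketch; the device name and connection limit are illustrative):
device web-server-1 conns 65536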
When configuring the maximum number of connections, you can specify a number between 1-131072, or enter the number 0 to specify an unlimited number of connections.
To configure Devices using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, click the Devices icon, and then click New. The New Device window appears.
3. Specify the maximum number of connections for the device.
4. Click Apply.
Associating Real Servers with Devices
This section lists the commands for attaching or detaching Real servers from a device. Substitute actual names for the example names where required.
To attach a Real server to a Device using the CLI
Command Syntax:
real real-name device device-name
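Example command (a hedged sketch; the server and device names are illustrative):
real server1 device web-server-1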
To detach a Real server from a Device, using the CLI Command Syntax:
real real-name no device
To attach a Real server to a Device using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol, then expand the Farms icon and the Cluster icon. Click the Real Server icon and then select the Real server you wish to add to a device. The Server Properties window appears.
3. Specify to which device to attach this Real server, and click Apply.
8
Virtual Servers and Traffic Control
This chapter provides information about configuring Virtual Servers (VIPs) and advanced traffic control.
Before Proceeding
In order to proceed with configuring server acceleration and/or load balancing, the following steps should be satisfied.
Management connectivity for each unit, whether through Serial Console or via Management Ethernet Interface (GUI, Telnet, or SSH). See Chapter 3. Introduction to the Command Line Interface. Farms, Clusters, and at least one Real Server must be defined and properly configured within the Server Topology configuration. See Chapter 7. Server Topology Farms/Clusters/Real Servers.
Virtual Servers
Configuring Virtual Servers
The following section outlines the steps required to create and configure a Virtual Server (VIP). Virtual Servers are mapped to Clusters. Similar to Clusters, Virtual Servers have a protocol configuration: either HTTP, TCP, UDP, DHCP or DNS. A Virtual Server's protocol configuration must match that of the Cluster to which it is mapped. Therefore, a Cluster configured as HTTP can only be mapped to a Virtual Server configured as HTTP. Cluster and Virtual Server protocol designations cannot be mismatched. Virtual Servers configured for the HTTP protocol also allow configuration of Traffic Control rules, as well as Client-Side SSL (described in Chapter 13. SSL Acceleration). No services may be associated with a DHCP virtual server.
To create virtual servers using the CLI
Command Syntax:
virtual virtual-name [shutdown | no shutdown] virtual-ip virtual-port [protocol {http | tcp | udp | dhcp | dns}]
no virtual virtual-name
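Example command to create an HTTP virtual server (a hedged sketch; the name, IP address, and port are illustrative):
virtual web-vip 192.168.1.100 80 protocol http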
To add a virtual server using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol, then click the Virtual Servers icon. Click New. The New Virtual Server window appears.
3. Specify a name for the Virtual Server, the Virtual IP address, and the TCP port of the HTTP application to be accelerated.
4. Select the Protocol.
A Virtual Server can only be mapped to a cluster using the same Protocol. For example, if the Virtual Server is mapped to a TCP-based cluster, the Protocol must be set to TCP. The GUI will generate an error if the Protocol of the Virtual Server is not the same as the Protocol of the cluster to which it is mapped. 5. Check the Originator IP box if you wish to insert the originating client IP address in the requests sent to servers.
In the Header drop-down list, select the type of header. In the Action drop-down list, specify whether to Add the header only if it is not present, or whether to Always add the header.
If you do not check the Originator IP box, the global Originator IP configuration is used. For more information on configuring the global Originator IP settings, see Originator (Client) IP Address on page 83. The local Originator IP settings override the global level Originator IP settings. 6. Specify the Default Action as sending traffic to a specific cluster, or denying access. If you specify a cluster, then the virtual server is mapped to the cluster you specify. Once the Default Action is configured, traffic control can be configured via the Traffic Control tab if the Virtual Server and subsequent Clusters are configured as HTTP protocol.
Traffic Control Criteria
Traffic control rules are built with a combination of keywords and operators. Refer to Appendix A for the list of keywords and operators you can use. After specifying criteria for a rule, a priority should be configured, as well as a defined action for traffic matching the rule.
Traffic Control Rule Priorities
When configuring a traffic control rule, a priority value is required. The priority value is only used in instances when more than one traffic control rule is matched. For more information on setting the priority value, refer to Setting Rule Priority on page 274.
Traffic Control Actions
When configuring traffic control rules, three possible actions are available if a rule matches a user request:
Send to cluster: the virtual server will direct the user request to the configured cluster.
Redirect to URL: the virtual server will redirect the user request to the specified URL.
Deny: the virtual server will deny the user request and reset the connection.
Traffic Control Example Configuration
Assume the following configuration in which two clusters exist:
Farm-1.
Cluster-1.
Server-1. Server-2.
Cluster-1 serves image content consisting of jpegs and gifs, while Cluster-2 serves all other content and application requests. After configuring each Cluster on the AppBeat DC, a Virtual Server must be created with the following configuration:
Table 9: Traffic Control Rule Example
Rule Name: Rule 1
Rule Text: http.request.fileext=='jpg' or http.request.fileext=='gif'
Priority: 1
Action: send-to-cluster Cluster-1
As the Virtual Server configuration demonstrates, requests for jpg or gif objects will be forwarded to Cluster-1 as stipulated in Rule 1, while all other requests will be forwarded to Cluster-2 (default action).
Configuring Traffic Control
Traffic Control determines whether to drop a request intercepted by AppBeat DC, send it on to the cluster, or redirect it elsewhere. Configuring traffic control for a specific Virtual Server requires the following steps:
Specify the default action for the Virtual Server (Send to Cluster, Redirect to a specified URL, or Deny). Define a set of traffic control rules that serve as exceptions to the default action. For every HTTP request, AppBeat DC checks the request against each rule. If it finds any matches, it takes the action specified by the rule with the highest priority.
To configure traffic control using the CLI Command Syntax: To define the default traffic control action:
traffic-control default-action {send-to-cluster | redirect new-location url-string | deny}
Prompt level Configure Topology Virtual Example commands: To define the default traffic control action:
virtual Virtual-1> traffic-control default-action deny
To configure traffic control using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol, expand the Virtual Servers icon by clicking the + symbol, then click the specific Virtual Server icon. Select the Properties tab.
3.
4.
5. 6.
Send to Cluster: Send to a default Cluster. If you specify Send to Cluster, select the cluster from the drop-down list.
Deny: Deny traffic destined to the Virtual Server.
7. Define the Traffic Control rules, while keeping in mind the following:
The rules you define should serve as exceptions to the default action you specified in the Properties tab. Every request is checked against each rule. If any matches are found, the action specified by the rule with the highest priority is taken.
To define a traffic control rule, enter the following in the bottom pane:
a. Specify a Name for the rule.
b. Specify a Priority of between 1-100 for the rule. You cannot assign the same priority level to two rules.
c. Specify the Rule Text. Refer to Appendix A for the list of keywords and operators you can use.
d. Specify the Action to take, either Send to Cluster, Redirect to URL, or Deny.
e. If you specified the action to be Send to Cluster, select the cluster from the drop-down list.
f. If you specified the action to be Redirect to URL, enter the desired URL, check the Permanent redirection box if desired, and specify whether the Connection should be Keep Alive or Close.
g. Click Apply. The rule is added to the list of rules.
To delete a rule, select it in the list of rules, and click Delete. To edit a rule, select it in the list of rules. Its properties appear in the bottom pane. Edit the properties, and click Apply. The modified rule appears in the list of rules.
9
Caching
This chapter introduces and explains how to configure caching.
Caching Overview
The caching feature requires an operational license. Caching utilizes flash memory disks on the AppBeat DC platform, and an open-source Squid caching server that has been packaged and optimized for co-existence with AppBeat DC. Caching is performed using a cache server, which intercepts requests for static and dynamic content from a Web server and instead responds to the request out of a store of cached content.
When an end user makes a request for content from a Web server, the cache server intercepts the request. If the content exists in the cache, the cache server responds to the request out of its store of cached pages. If the content being requested does not exist in the cache, the cache forwards the request to the server and stores the server's response. When another client requests the same content, the cache can serve the content without forwarding the request to the server. In this way, caching reduces load on the origin web server and accelerates application performance.
The benefits of caching include reducing server load and latency up to 80 percent, improving user performance, reducing infrastructure costs, delivering stand-in capability during unanticipated or planned site outages, and providing pinpoint control of content accuracy without changes to site architecture or code.
Caching is implemented using caching profiles. Each profile is a set of prioritized rules that specify which content to cache. A typical caching rule specifies which types of files (by extension) are cacheable. AppBeat DC uses the default Squid 2.7 configuration file to determine its caching policy (refer to Appendix B). Any settings configured by a user in a caching profile are added to or override the settings of the default caching policy.
Caching Rules
Caching Criteria
Caching rules include rule criteria for defining what content to cache. Since the caching service utilizes the open-source Squid caching server, the rule criteria should be entered as a Squid expression. Refer to http://www.squid-cache.org/Doc/config/ (version 2.7) for a description of Squid syntax. After specifying criteria for a rule, a priority should be configured.
Caching Rule Priorities
When configuring a caching rule, a priority value is required. The priority value is only used in instances when more than one caching rule is matched. For more information on setting the priority value, refer to Setting Rule Priority on page 274.
Caching Rule Example
Suppose you wish to change the default lifetime of objects in the cache. Every object in the cache has an associated lifetime. During this lifetime, the object is considered fresh. After this lifetime, the object expires and becomes stale. Once it becomes stale, it can still stay in the cache (unless the cache is otherwise configured), but is no longer fresh. The default lifetime is 1 week. To change the lifetime to two weeks, use the max_stale Squid expression, and assign a priority:
profile my-cache expression "max_stale 2 week" priority 5000
Configuring Caching
Configuring caching requires the following steps:
1. Create a caching profile. This includes defining a set of rules specifying which content to cache.
2. Apply a caching profile to a Cluster.
You can define multiple caching profiles, and apply a suitable profile to each cluster. The AppBeat DC includes a pre-defined default caching profile which constitutes a typical recommended caching policy.
Configuring a Caching Profile
To define a caching profile using the CLI
Command Syntax:
To define a caching profile and its rules:
profile caching profile name expression SQUID command priority priority between 1-10,000
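Example command (a hedged sketch; the prompt, profile name, and Squid directive value are illustrative):
caching> profile my-cache expression "maximum_object_size 4096 KB" priority 100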
To configure a caching profile using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, click Caching under the Services icon.
3. Click New. The New Caching Profile window appears.
4. Specify the Profile name.
5. Click Apply.
6. In the Topology panel, select the profile in the Caching node under the Services icon.
7. Define caching rules by selecting an empty line in the rules table and entering the following in the bottom pane:
a. Specify a Priority of between 1-10,000 for the rule. You cannot assign the same priority level to two rules.
b. Specify the Rule Text as a Squid expression. Refer to http://www.squid-cache.org/Doc/config/ for information.
8. Click Apply.
Applying a Caching Profile to a Cluster
To apply a caching profile to a Cluster using the CLI
Command Syntax:
To apply a caching profile to a Cluster:
service caching profile caching profile name
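Example command (a hedged sketch; the cluster prompt is illustrative and my-cache is the profile defined above):
cluster Cluster-1> service caching profile my-cache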
To apply a caching profile to a Cluster using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology panel, expand the Topology icon by clicking the + symbol, then expand the Farms icon and click the desired Cluster icon. Select the Services tab. This will display the Cluster configuration Services settings.
3. Select a profile from the Caching profile drop-down list.
4. Click Apply.
10
Vservices and Usage Control Policies
This chapter introduces and explains how to configure Vservices and usage control.
Overview of Vservices and Usage Control. Vservices and Usage Control Example. Configuring a Usage Control Profile. Configuring a Vservice.
Restrict the bandwidth to 200 Mbps. Restrict the transaction rate to 30 Req/sec.
2. Use the CRE keywords and operators to classify all traffic arriving from mobile browsers such as iPhone, Opera, and BlackBerry. For example: http.request.user_agent == iPhone. Associate a usage control profile with this Vservice.
To configure a usage control profile using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, click Usage Control under the Services icon.
3. Click New. The New Usage Control Profile window appears.
4. Specify a Name for the usage control profile.
5. Specify a Bandwidth.
6. Specify a TPS (transactions per second) value.
7. Click Apply.
Configuring a Vservice
To create a Vservice using the CLI Command Syntax:
vservice vservice-name
Prompt level Configure Topology To classify the Vservice traffic using the CLI Command Syntax:
expression vservice classification
Prompt level Configure Topology Vservice To associate a usage control profile with the Vservice using the CLI Command Syntax:
service usage-control profile profile name
Prompt level Configure Topology Vservice
To view Vservice statistics using the CLI
Command Syntax:
show vservice
To classify the Vservice traffic and associate a usage control profile with it:
vservice mobile_users> expression http.request.user_agent == iPhone
vservice mobile_users> service usage-control profile mobile_restrictions
To configure a Vservice using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol, then click the Vservices icon.
3. Click New. The New Vservices Profile window appears.
4. Specify a Name for the Vservice.
5. Specify a Rule Text to classify the traffic to be restricted or monitored. Refer to Building CRE Expressions on page 275 for the list of keywords and operators you can use.
6. Optionally, select a Usage control profile to restrict the Vservice traffic.
7. Click Apply.
11
Content Control
This chapter introduces and explains how to configure content control.
Content Control Overview. Content Control Flow of Operations. Content Control Rules. Configuring Content Control.
A content control Request profile, which modifies the fields of an HTTP request. A content control Response profile, which modifies the fields of an HTTP response.
You can define multiple request profiles and response profiles, and assign the suitable ones to a Virtual Server and its corresponding cluster. The following section describes how the content control profiles are applied throughout the cycle that starts with a client request and ends with a server response to the client.
Figure 54: Intervention points along the HTTP Request HTTP Response Route
1. A request from the client (Pass 1) is intercepted by AppBeat DC. If you assigned a content control request profile to the Virtual Server, the Virtual Server applies the profile to modify the client request. If you also configured a Traffic Control policy for the Virtual Server, the Virtual Server applies the Traffic Control policy to determine whether to drop the modified request, or send it to the cluster. If the decision is to send the request to the cluster, the modified request is sent to the matching cluster (Pass 2).
2. If you assigned a content control request profile to the cluster receiving the request, the cluster applies the profile to modify the request further. The cluster then sends the request to a real server. The real server fills the request and returns a response to the cluster.
3. If you assigned a content control response profile to the cluster, then before sending out the server response, the cluster applies the profile to modify the server response. If you also configured a Compression profile and assigned it to the cluster, the cluster applies the Compression profile to determine whether to compress the modified response. The modified server response is sent to the Virtual Server, and reviewed by AppBeat DC (Pass 3).
4. If you assigned a content control response profile to the Virtual Server, the Virtual Server applies the profile to modify the response. The modified response is sent to the client (Pass 4).
Delete field: a specified field is deleted from the response or request.
Modify field: data in a specified field in the response or request is replaced with different data.
Add field: a new field with a specified name and data is added to the response or request.
The fields that can be added, modified, or deleted are listed in the following table:
Table 10: Fields that can be Modified in a Content Control Rule
Field: In
url: HTTP request
full-url: HTTP request
host: HTTP request
location: HTTP response
Content Control Rule Examples The following sections provide several content control rule examples. Content Control Request Profile Example The following is an example of a rule in a request profile:
Table 11: Content Control Request Profile Rule Example
Rule Text: If: http.request.host is '2.2.0.10' AND http.request.path is '/1k'
Priority: 5
Action: Modify a field
Applies to: URL
Field Value: www.cn.com/bigweb/text/txt/1k.txt
This rule stipulates that if an incoming request from host 2.2.0.10 includes /1k in the request path, then the entire URL (both the host name and path) should be replaced with www.cn.com/bigweb/text/txt/1k.txt. The request profile can be applied to a Virtual Server or to a cluster, depending on your needs:
If this rule is performed when applied to a Virtual Server, it modifies the HTTP request received from the client before sending the modified request to the cluster. If this rule is performed when applied to a cluster, it modifies the HTTP request received from the Virtual Server before sending it out to a real server.
Content Control Response Profile Example The following is an example of a rule in a response profile:
Table 12: Content Control Response Profile Rule Example
Rule Text: If: http.response.code is 302
Priority: 3
Action: Modify a field
Applies to: location
Field Value: http://www.site2.com
This rule stipulates that if an outgoing response includes the code 302 (redirection), then the contents of the location field should be changed to http://www.site2.com.
The response profile can be applied to a cluster or to a Virtual Server, depending on your needs:
If this rule is performed when applied to a cluster, it modifies the HTTP response received from a real server, before sending it to the Virtual Server. If this rule is performed when applied to a Virtual Server, it modifies the HTTP response received from the cluster before sending it on to the client.
Defining the profile as being either a request profile or a response profile. Configuring the profile's rules.
2. Apply the content control profile to a Virtual Server and/or a cluster. Perform at least one of the following:
Apply a content control request profile to a Virtual Server. Apply a content control response profile to a Virtual Server. Apply a content control request profile to a cluster. Apply a content control response profile to a cluster.
Configuring a Content Control Profile
To define a content control profile using the CLI
Command Syntax
To define whether the profile is a request profile or a response profile:
profile profile-name type {request | response}
Prompt level Configure Services Content-control
Example commands:
To set a content control profile's type:
content-control> profile req1 type request
To define a content control profile using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology panel, click Content Control under the Services icon.
3. Click New. The New Content Control Profile window appears.
4. Specify a Profile name.
5. Select a Profile type. A profile of type Request is intended for HTTP requests, and a profile of type Response is intended for HTTP responses.
6. Click Apply. The content control profile appears in the Content Control node.
7. In the Topology panel, select the content control profile to define its Content Control rules. Keep in mind that every request or response is checked against each rule. If any matches are found, the action specified by the rule with the highest priority is performed.
To define a content control rule, select an empty line in the table, and enter the following in the bottom pane:
a. Specify a Name for the rule.
b. Specify a Priority of between 1-100 for the rule. You cannot assign the same priority level to two rules.
c. Specify the Rule Text. Refer to Appendix A for the list of keywords and operators you can use.
d. Select the Rule Action, either Add Field, Modify Field, or Delete Field.
e. Specify which Field name the action applies to. The fields that can be added, modified or deleted are url, full-url, or host in an HTTP request, and the location field in an HTTP response.
f. If the action is Add Field or Modify Field, enter the field data in Field value.
g. Click Apply. The rule appears in the list of rules.
To delete a rule, select it in the list of rules, and click the Delete button.
To edit a rule, select it in the list of rules. Its properties appear in the bottom pane. Edit its properties, and click Apply. The modified rule appears in the list of rules.
Applying Response and Request Content Control Profiles to a Virtual Server
To apply content control profiles to a Virtual Server using the CLI
service content-control {request | response} profile profile-name
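For example, to apply the request profile req1 (created earlier in this chapter) to a Virtual Server, a command of the following form can be issued from that Virtual Server's configuration context (the prompt and Virtual Server name shown here are illustrative):
virtual v1> service content-control request profile req1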
To apply content control profiles to a Virtual Server using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol, expand the Virtual Servers icon by clicking the + symbol, then click the specific Virtual Server icon.
3. Select the Services tab.
4. To apply a content control request profile, select it from the Request drop-down list.
5. To apply a content control response profile, select it from the Response drop-down list.
Applying Response and Request Content Control Profiles to a Cluster
To apply content control profiles to a Cluster using the CLI
service content-control {request | response} profile profile-name
To apply content control profiles to a cluster using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol, then expand the Farms icon. Select a farm and click the Cluster icon.
3. Select the Services tab.
4. To apply a content control request profile, select it from the Request drop-down list.
5. To apply a content control response profile, select it from the Response drop-down list.
Chapter 12. Compression
This chapter introduces and explains the configuration of the Compression module.
Before Proceeding
Compression Module Overview
Compression Profile Configuration
Enhanced Compression Module Configuration
Before Proceeding
In order to proceed with configuring Compression, the following steps should be satisfied:
Management connectivity, whether through Serial Console or via Management Ethernet Interface (GUI, Telnet, or SSH). See Chapter 5. Initial Configuration and Global Settings. Server(s) configured in at least one HTTP cluster. See Chapter 7. Server Topology Farms/Clusters/Real Servers.
For example, to specify that all JavaScript files should be compressed, create a rule with the expression http.response.content_type == 'Application/x-javascript' and the action Compress. Mime-types are listed as a type and sub-type in the format of: type/sub-type. When configuring mime-types in a compression rule, you can choose to specify the exact mime type, like text/plain, or specify only the type, like text. Specifying only the type will ensure that all content within the specific type will be compressed or passed through, without having to input each individual mime-type. The full list of mime-types is available at the Internet Assigned Numbers Authority (IANA) website at http://www.iana.org/assignments/media-types/. Below is a sample of common mime-types:
Table 13: Common Mime-types Sample
Mime-type (type/sub-type)       Corresponding File Extension(s)
application/x-javascript        js
application/xml                 xml, xsl
image/bmp                       bmp
image/jpeg                      jpeg, jpg, jpe
text/html                       html, htm
text/plain                      asc, txt
Compression Rule Priorities
When configuring a compression rule, a priority value is required. The priority value is only used in instances when more than one compression rule is matched. For more information on setting the priority value, refer to Setting Rule Priority on page 274.
Compression Actions
When configuring compression rules, two possible actions are available if a rule matches the data:
Compress: Compress HTTP content before sending it to the client.
Pass through: Data sent to the client is not compressed.
2.
You can define multiple compression profiles, and apply a suitable one to each cluster. The AppBeat DC includes a pre-defined default compression profile which constitutes a typical recommended compression policy.
Creating a Compression Profile
To create a compression profile using the CLI
Command Syntax
To define the compression profile's default action:
profile profile-name default-action {pass-through | compress}
Prompt level: Configure > Services > Compression
Example commands:
To define the compression profile's default action:
compression> profile Cmp-Profile-1 default-action pass-through
To create a compression profile using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology panel, click Compression under the Services icon.
3. Click New. The New Compression Profile window appears.
4. Specify the Profile name.
5. Select the Default Action, either Compress or Pass Through.
6. Click Apply.
7. In the Topology panel, select the profile in the Compression node under the Services icon.
8. Define Compression rules, while keeping in mind the following:
The rules you define serve as exceptions to the default action you specified in the previous step.
Outgoing data is checked against each rule. If any matches are found, the action specified by the rule with the highest priority is taken.
The New Compression Profile screen already includes several rules. These are the rules typically recommended to be included in every compression profile.
To define a compression rule, select an empty line in the rules table and enter the following in the bottom pane:
a. Specify a Name for the rule.
b. Specify a Priority of between 1 and 100 for the rule. You cannot assign the same priority level to two rules.
c. Specify the Rule Text using the http.response.content_type keyword and a mime type or mime types. Refer to Appendix A for the list of logical operators you can use. Refer to Using Mime-types for a list of mime-types.
d. Specify the Action to take, either Compress or Pass Through.
e. Click Apply. The rule appears in the list of rules.
To delete a rule, select it in the list of rules, and click Delete.
To edit a rule, select it in the list of rules. Its properties appear in the bottom pane. Edit the properties, and click Apply. The modified rule appears in the list of rules.
Applying a Compression Profile to a Cluster
To apply a compression profile to a Cluster using the CLI
Command Syntax
service compression profile profile-name
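For example, assuming the profile Cmp-Profile-1 defined above and a cluster named cl_london, the command might be issued from the cluster's configuration context as follows (the prompt shown is illustrative):
cluster cl_london> service compression profile Cmp-Profile-1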
To apply a compression profile to a Cluster using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology panel, expand the Topology icon by clicking the + symbol, then expand the Farms icon and click the desired Cluster icon. Select the Services tab. This displays the Cluster configuration Services settings as shown in the figure below.
3. Check the Compression to Client box and select a profile from the Profile drop-down list.
Enhanced Compression Module Configuration
Boosting the compression throughput enables you to create the following compression throughput levels:
Normal: Enhanced compression throughput using a normal compression ratio. This option provides enhanced compression throughput of the unit according to the acquired license and up to 66 percent compression ratio.
High: Normal compression throughput using a high compression ratio. This option provides up to 1 Gbps of throughput and up to 86 percent compression on certain file types, while maintaining zero latency performance.
The enhanced compression module is purchased as a separate add-on. After installing the module, you must install the software and software license before the module is available for use.
Chapter 13. SSL Acceleration
This chapter introduces and explains the configuration of the SSL Acceleration module.
Before Proceeding
Overview of the SSL Acceleration Module
Configuration Preparation
Configuring a Virtual Server
Importing or Creating a Private Key
Importing or Creating a Certificate
Cipher Profile
Configuring an SSL Server Profile (Client-side SSL)
Configuring an SSL Client Profile (Server-side SSL)
Converting Keys, Certificates, and Chained Certificates
Before Proceeding
In order to proceed with configuring SSL Acceleration, the following steps should be satisfied.
Management connectivity for each unit, whether through Serial Console or via Management Ethernet Interface (GUI, Telnet, or SSH). See Chapter 3. Introduction to the Command Line Interface. Server(s) configured in at least one cluster. See Chapter 7. Server Topology Farms/Clusters/Real Servers.
Configuration Preparation
SSL Acceleration Configuration Outline Configuring SSL Acceleration requires the following steps:
SSL is customarily configured to operate on TCP port 443. However, the AppBeat DC can provide SSL Acceleration on any port designated by the virtual server.
Create or import an SSL private key.
Create or import an SSL certificate, or create an SSL Certificate Request for submission to a Certificate Authority.
Create a Cipher Profile, or use the default list.
Create an SSL Server Profile.
Map the SSL Server Profile to a Virtual Server.
By default, all communication to the server, even if originally terminated as SSL, is transmitted from the AppBeat DC as HTTP. This section also outlines the configuration of server-side SSL using a Client Profile, which enables encrypted communication between the AppBeat DC and the backend server.
Server Configuration
This section outlines the various methods in which the AppBeat DC and server are configured to support applications encrypted with SSL.
Virtual Server Providing SSL Only
To configure a Virtual Server (VIP) which only accepts encrypted HTTPS communication and communicates with the backend server over unencrypted HTTP, the following logical configuration would apply:
Virtual Server 10.1.1.100 TCP port 443 Mapped to Cluster-1 Server-1 TCP port 80
The server configuration and further SSL configuration are covered later in this chapter.
Virtual Server Providing HTTP and SSL to a Single Cluster
To configure a Virtual Server (VIP) with HTTP and SSL Acceleration, create two Virtual Servers, each configured to listen on a different port, mapped to the same cluster. For example:
Virtual Server 10.1.1.100 TCP port 80 Mapped to Cluster-1 Server-1 TCP port 80
Virtual Server 10.1.1.100 TCP port 443 Mapped to Cluster-1 Server-1 TCP port 80
In this example, the server has the entire website or application available over port 80. Depending on the authentication and content control mechanisms being used, it may not be desirable to have content accessible over HTTP (port 80) which would otherwise only be accessible via HTTPS (port 443). If this is the case, proceed to the following example.
Virtual Server Providing HTTP and SSL to Two Clusters
As discussed in the previous section, some applications being offloaded by the AppBeat DC may require originally encrypted content to be served through a different Web service running on the same server. For example, it may not be desirable, depending on the authentication and content control mechanisms being used, to have content accessible over HTTP (port 80) which would otherwise only be accessible via HTTPS (port 443). In these cases, it is advisable to have content which is only intended to be accessed by users over HTTP to be served over port 80 on the server. Content intended to be accessed by users using HTTPS should be served over a different port (separate Web server instance); port 81 for example. All communication to the backend server is still offloaded, using only HTTP communication; however, the data is now secured, preventing a user accessing the site with HTTP from viewing or downloading content which should only be accessed via HTTPS. The configuration of such a setup looks as follows:
Virtual Server 10.1.1.100 TCP port 80 Mapped to Cluster-1 Server-1-80 TCP port 80
Virtual Server 10.1.1.100 TCP port 443 Mapped to Cluster-2 Server-1-81 TCP port 81
Different server names are used to differentiate the port being configured. All communication to the server, even if originally terminated as SSL, is transmitted from the AppBeat DC as HTTP.
Preparation
In preparation for configuring SSL Acceleration, the following steps will need to be completed:
If your servers are currently using SSL, the Private Keys and Certificates must be exported as individual files so they can be imported into the AppBeat DC.
There should be one file for the private key, and one file for the certificate which includes the public key. The files must be in PEM (.pem) format. The key cannot have a pass phrase (password) associated with it. In addition to being in PEM format, the certificate file must have the correct text information at the beginning of the certificate.
Read Converting Keys, Certificates, and Chained Certificates on page 208 before proceeding to modify, convert, and verify the format of keys and certificates.
OpenSSL can be used to modify, convert, and verify the format of the keys and certificates before they are imported into the AppBeat DC. OpenSSL can be downloaded for free for most popular operating systems (including binary versions for Windows machines) at http://www.openssl.org. See Converting Keys, Certificates, and Chained Certificates on page 208 for detailed information before proceeding.
If you do not have valid SSL keys or certificates, they should be requested from a Certificate Authority (for example, VeriSign or DigiCert). The AppBeat DC enables you to create your own private key which is associated with a Certificate Request. The Certificate Request can then be sent to a Certificate Authority to be officially signed and validated.
The SSL configuration requires the AppBeat DC to import and/or export files from an FTP server. Therefore, the ftp-record should be configured in the AppBeat DC configuration. As discussed in Chapter 5. Initial Configuration and Global Settings, the ftp-record specifies an available FTP server, user credentials, and home directory.
Importing/Exporting or Creating a Private Key
To import or export a private key using the CLI
Command Syntax
key name {import | export} filename
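A hypothetical example, assuming the command is issued at the SSL configuration prompt level and that a key file named server-key.pem has been placed in the FTP home directory defined by ftp-record (the key name and file name are illustrative):
key server-key import server-key.pem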
To import a private key using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Services icon by clicking the + symbol then expand the SSL icon.
3. Click the Keys icon and click New. The Add New Key window appears.
4. Check the Import box.
5. Select whether to import the key By FTP or By HTTP.
6. If you specify By FTP, enter a desired key name in the box adjacent to the Import checkbox.
7. Click Apply. If you specified importing by FTP, the AppBeat DC will automatically log in and download the file based on the FTP information configured in the ftp-record command. If you specified importing by HTTP, a dialog box appears. Specify which file to import using HTTP. The imported key will be displayed under Services > SSL > Key.
To export a private key using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Services icon by clicking the + symbol then expand the SSL icon. Expand the Keys icon and select the key you wish to export.
3. 4.
To create a private key using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Services icon by clicking the + symbol then expand the SSL icon.
3. Click the Keys icon and click New. The Add New Key window appears (Figure 60).
4. Specify a key name and size (between 384 and 2048) and click Apply. The created key will be displayed under Services > SSL > Key.
Import an existing, signed, and valid certificate from a Certificate Authority.
Create a Certificate Request which is then exported from the AppBeat DC and sent to a Certificate Authority for validation. The signed certificate received from the Certificate Authority is then imported into the AppBeat DC.
Create a self-signed certificate. This certificate is not validated by a Certificate Authority and should typically be used only for testing purposes. Clients accessing accelerated servers using a self-signed certificate will receive a security message from their browser.
When an SSL client receives a certificate from a server, it checks the Certificate Authority (CA) that authorized the certificate and if that CA is trusted, then the certificate itself can be trusted.
Servers may also send the client a Certificate Chain, which is essentially a series of certificates. A Chained Certificate allows SSL hierarchies to be conveyed from a server to a client. In a Chained Certificate, the first certificate is always that of the sender itself (i.e., the server). The second certificate is of the CA that authorized the sender's certificate. The third certificate is of the CA that authorized the second certificate, and so on. As long as the client can validate the last certificate in the chain, the entire chain is trusted. The AppBeat DC supports both individual certificates and chained certificates without any special configuration.
Importing/Exporting or Creating a Certificate
To import a certificate using the CLI
Command Syntax
certificate name key key-name import name
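For example, assuming the key server-key imported above and a certificate file named server-cert.pem on the FTP server (both names are illustrative):
certificate server-cert key server-key import server-cert.pem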
To import a certificate using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Services icon by clicking the + symbol then expand the SSL icon. Click the Certificates icon and click New. The Add New Certificate window appears.
3. Even though a certificate will be imported, all fields should still be filled out. If any of the field values are different than those in the actual certificate, they will be overwritten by the correct values from the imported certificate. Make sure the key name specified is the correct key which will correspond with the certificate to be imported.
4. Select whether to import the certificate By FTP or By HTTP. If you specify By FTP, enter a desired certificate name in the box adjacent to the Import checkbox. Do not check the Self Signed box.
5. Click Apply. If you specified importing by FTP, the AppBeat DC will automatically log in and download the file based on the FTP information configured for the ftp-record command. If you specified importing by HTTP, a dialog box appears. Specify which file to import using HTTP.
To create a certificate request and export it using the CLI
The following command generates a new interactive certificate request which is exported to the FTP server and directory specified in ftp-record. Once the command is issued, the user is prompted to answer a series of questions regarding the certificate to be requested.
Before a certificate request can be created, a key must be created as discussed in Importing or Creating a Private Key on page 191.
Command Syntax
ssl certificate name key key-name export export-name
Output:
Enter Subject Country (2 characters): US
Enter Subject State: CA
Enter Subject Locality: San Jose
Enter Subject Org: Sample, Co.
Enter Subject Common: www.sample.com
Enter Subject Email address: [email protected]
Use quotation marks for values which contain spaces.
To create a certificate request and export it using the GUI
Before a certificate request can be created, a key must be created as discussed in Importing or Creating a Private Key on page 191.
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Services icon by clicking the + symbol then expand the SSL icon. Click the Certificates icon and click New. The Add New Certificate window appears.
Figure 63: Creating a Certificate Request
3. Specify a name for the certificate and the associated Key name for the key created in the previous step. Complete the subject information.
4. Do not check the Self Signed box.
5. Click Apply.
6. Once created, click the Certificate Name created in the previous step under Services > SSL > Certificates. Check the Export box, provide the file name of the Certificate Request, and click Apply. The AppBeat DC will automatically log in and upload the file based on the FTP information configured for the ftp-record command.
The Certificate Request should then be retrieved from the FTP server and submitted to a Certificate Authority for validation. Once a signed and valid certificate has been received from the Certificate Authority, it should be placed on the FTP server and uploaded to the AppBeat DC.
7. To upload the certificate, click the Certificate Name created in the previous step under Services > SSL > Certificates.
8. Check the Import box, and provide the file name of the certificate to be uploaded. Click Apply.
To create a self-signed certificate using the CLI
A self-signed certificate is not validated by a Certificate Authority and should typically be used only for testing purposes. Clients accessing accelerated servers using a self-signed certificate will receive a security message from their browser.
Command Syntax
certificate name key key-name self-signed export export-file-name
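For example, to create a self-signed certificate named test-cert using the key server-key and export it as test-cert.pem (all names are illustrative):
certificate test-cert key server-key self-signed export test-cert.pem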
To create a self-signed certificate using the GUI
Before a self-signed certificate can be created, a key must be created as discussed in Importing or Creating a Private Key on page 191.
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Services icon by clicking the + symbol then expand the SSL icon. Click the Certificates icon and click New. The Add New Certificate window appears.
3. Specify a name for the certificate and the associated Key name for the key created in the previous step. Complete the subject information.
4. Check the Self Signed box and specify the number of days the certificate should be valid. Click Apply.
Cipher Profile
The cipher is the algorithm used for encryption and decryption. Typically, the client and server have the ability to use several different ciphers. During the initiation of the SSL session, the cipher to be used is negotiated between the two end points. The AppBeat DC supports many ciphers used by different client browsers.
Creating a Cipher Profile
The available ciphers on the AppBeat DC can be configured with a Cipher Profile. Therefore, an administrator can specify the exact ciphers (encryption methods) they would like to use for their application.
It is not mandatory to create a Cipher Profile. If no profile is created and associated with a Server Profile, the AppBeat DC will simply negotiate the cipher based on the default list.
The following steps are required for creating a Cipher Profile:
1. Create a Cipher Profile.
2. Add Cipher types to the profile with associated priorities for negotiation.
To create a cipher profile using the CLI
The following command creates a cipher profile. Once created, the profile does not contain any ciphers. Proceed to the following section to learn how to add cipher types to the profile.
Command Syntax
cipher profile profile-name
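For example, to create a cipher profile named strong-ciphers (the profile name is illustrative):
cipher profile strong-ciphers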
To add cipher types to a cipher profile using the CLI
The following command adds individual cipher types to a profile configured in the previous step. The available list of cipher types is as follows:
A priority is also associated with each cipher entry. The priority is used during cipher negotiation between the AppBeat DC and the client.
Command Syntax
cipher type profile-name cipher-type cipher-priority
To create a cipher profile using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Services icon by clicking the + symbol then expand the SSL icon. Click the Ciphers icon and click New. The Add New Cipher window appears.
3. Enter a Cipher Profile Name and click Apply.
4. By default, no Ciphers will be selected for the newly created profile. Follow the steps outlined in the next section.
To add cipher types to a cipher profile using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Services icon by clicking the + symbol then expand the SSL icon, and the Ciphers icon. Click the Cipher Profile created in the previous section.
3. In the right panel, select the cipher type from the Available window and click the Add button to move it to the Selected window.
4. Once a cipher type is selected, its priority (shown in parentheses) can be changed by clicking on the cipher type, and then clicking either of the single up/down arrows.
Creating an SSL Server Profile
To create an SSL server profile using the CLI
Command Syntax
server-profile name certificate name [cipher-profile name] [SSL-2 | no-SSL-2] [SSL-3 | no-SSL-3] [TLS-1 | no-TLS-1] [cipher-selection {server | client}]
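For example, to create a server profile named ssl-srv1 that uses the certificate test-cert and the cipher profile strong-ciphers, leaving the optional protocol and cipher-selection settings at their defaults (all object names are illustrative):
server-profile ssl-srv1 certificate test-cert cipher-profile strong-ciphers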
To create an SSL server profile using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Services icon by clicking the + symbol then expand the SSL icon. Click the Server Profiles icon and click New. The New SSL Server Profile window appears.
3. Specify a Profile name and select the associated Certificate and Cipher profile.
4. Select either client or server from the Cipher Selection drop-down box. This specifies which end point has priority over determining the selected cipher. Selecting server enables the AppBeat DC to make the decision.
Applying an SSL Server Profile to a Virtual Server
SSL Acceleration will function once an SSL Server Profile is associated with an existing Virtual Server.
To apply an SSL server profile to a Virtual Server using the CLI
Command Syntax
service ssl profile profile-name
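For example, to apply the server profile ssl-srv1 to a Virtual Server (the profile name, Virtual Server name, and prompt are illustrative):
virtual v1> service ssl profile ssl-srv1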
To apply an SSL server profile to a Virtual Server using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol then expand the Virtual Servers icon. Select the desired Virtual Server and click the Services tab.
3. In the right panel, check the SSL to Client box and select the desired profile from the drop-down menu.
4. Click Apply.
Verify that the servers defined in the cluster have the appropriate TCP port number configured for HTTPS communication.
Creating an SSL Client Profile
To create an SSL Client Profile using the CLI
Command Syntax
client-profile name [cipher-profile name] [SSL-3 | no-SSL-3] [TLS-1 | no-TLS-1]
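For example, to create a client profile named ssl-clnt1 that uses the cipher profile strong-ciphers (names are illustrative):
client-profile ssl-clnt1 cipher-profile strong-ciphers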
To create an SSL Client Profile using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol then expand the Services icon. Expand the SSL icon by clicking the + symbol. Click the Client Profiles icon and click New. The New SSL Client Profile window appears.
3. In the right panel, enter the Client Profile Name.
4. Select a Cipher Profile.
5. Select the desired Protocols.
6. Click Apply.
Applying an SSL Client Profile to a Cluster
To apply an SSL Client Profile to a Cluster using the CLI
Command Syntax
service ssl profile name
To apply an SSL Client profile to a Cluster using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology window, expand the Topology icon by clicking the + symbol then expand the Farms icon. Click the desired Cluster icon. Click the Services tab.
3. Check the SSL to Server box and select the desired Client Profile.
4. Click Apply.
Converting Keys, Certificates, and Chained Certificates
OpenSSL
All of the commands required for verifying and converting keys and certificates use OpenSSL. OpenSSL can be downloaded for free for most popular operating systems (including binary versions for Windows machines) at http://www.openssl.org.
Keys
As previously discussed, the key must be a separate file. The key must also be in PEM format and cannot have a pass phrase associated with it. The AppBeat DC will not function properly if a key with a pass phrase is imported. To remove the pass phrase, follow the steps outlined in To remove the pass phrase on an RSA private key on page 209.
Sample Key file: The "MII" located after the -----BEGIN RSA PRIVATE KEY----- tag indicates that the key is in PEM format.
-----BEGIN RSA PRIVATE KEY----MIICXQIBAAKBgQDEs8ST2FxGTCZNR1/0hqxk0umq//MFVhxI7qzXJvCnVFBE5M1r eWY0s1wMO1t9o9frmSEqTSq+wmFYhNq7Ilel/EsbpTpa5FhnEO9iuI8MHXDET7yx KRjF5NqxFOGYyldKWdXNCX3nsXeWTdGEsJdMN3je9Ab9pbfmdVLIUBUxswIDAQAB AoGAJHR0sDnfECA40QWzYOw8swrrx4dcENcest2ZJt7OpxRXNA17jLmZGZdMLfAq SqS89asRnHdkvqnjxLYKm7gHqiYRFYCxEU17T9hFtuQpSI4oPa+79bMjuriik78W vnnA3u0JhRNP4Z743O7Ku2UEbbtVRPKCVS53TjF11z3yLkECQQD1J8jMH78YHuhD RD6j+ZIPCADZEVtMiO0tDRAKphGQj2xJAejlbSIXAIvWYdsRnQqU7ByaZbL4lcRt kqEpSWQRAkEAzWdI3MJhfMs8NYt1e2SwkqvKlouFbha927up2251jYMO4buGHtF6 uGJNn/P6uu3juKjT5Ak/3jt0Fmtd6fAtgwJBAMMpJMS7ERlWoXfLQEKxTwEAUgx7 sL7A0m7m0zpm8dyvEHkeOBVMR7MgEDJePFNNPTtIq4yOIWebcn/4FqwTbMECQCD/ 4/vbms/y0uSDWEePwLJ/uReAqNor+yqvNrXTRD2M/boUZ5LR8tZmrLPy/ahEid5j +U7ckY9Bm//yFe98r8MCQQDgBYH8Xd+MjmsZEBDQQsHMP8lZLxTqqWImyblLTs4D HQiqeez97sFqUTUQoRNeJolGQ1cGceyaj3bNGVXWO6CS -----END RSA PRIVATE KEY-----
To remove the pass phrase on an RSA private key
The following command should be input on a workstation with read and write access to the key file. If you are unsure whether a private key file has a pass phrase, it is OK to run the following command against the key; if the original key file does not have a pass phrase, it will not be altered.
openssl rsa -in key.pem -out keyout.pem
You will be prompted for the current pass phrase before OpenSSL will allow you to remove it. Once the pass phrase has been removed, the new key can be properly imported into the AppBeat DC.
Certificate
Like the key, the certificate must also be in PEM format. Additionally, the certificate must include the text information within the certificate file. The following are samples demonstrating the certificate file with and without the required text information. Follow the steps outlined in 10.10.3.1 to properly format the certificate.
Sample Certificate in PEM format without text information. The "MII" located after the -----BEGIN CERTIFICATE----- tag indicates that the certificate is in PEM format.
-----BEGIN CERTIFICATE----MIICcjCCAdugAwIBAgIBADANBgkqhkiG9w0BAQQFADB/MQswCQYDVQQGEwJVUzEL MAkGA1UECBMCTkoxEDAOBgNVBAcTB1RlbmFmbHkxGzAZBgNVBAoTEkNyZXNjZW5k byBOZXR3b3JrczEVMBMGA1UEAxMMd3d3LnRlc3QuY29tMR0wGwYJKoZIhvcNAQkB Fg5hZG1pbkB0ZXN0LmNvbTAeFw0wNzA2MjUwODI4MjNaFw0wODA2MjQwODI4MjNa MH8xCzAJBgNVBAYTAlVTMQswCQYDVQQIEwJOSjEQMA4GA1UEBxMHVGVuYWZseTEb MBkGA1UEChMSQ3Jlc2NlbmRvIE5ldHdvcmtzMRUwEwYDVQQDEwx3d3cudGVzdC5j b20xHTAbBgkqhkiG9w0BCQEWDmFkbWluQHRlc3QuY29tMIGfMA0GCSqGSIb3DQEB AQUAA4GNADCBiQKBgQDEs8ST2FxGTCZNR1/0hqxk0umq//MFVhxI7qzXJvCnVFBE 5M1reWY0s1wMO1t9o9frmSEqTSq+wmFYhNq7Ilel/EsbpTpa5FhnEO9iuI8MHXDE T7yxKRjF5NqxFOGYyldKWdXNCX3nsXeWTdGEsJdMN3je9Ab9pbfmdVLIUBUxswID AQABMA0GCSqGSIb3DQEBBAUAA4GBAID1oCh6dXj1SijrYIx2tHBFX4Jlw7isazut JW4byRWtAtYcCGVEKGKgjxsD7SB3rTyGKGveYDyoiEh/uodac6EYPJT0gcUtg0Ku izR25RuYklMZ+nQybaWnXA2yYA3YHED8hcXbx5GwpNTxeDMnDmQZj5ri51FQU4Ux bhMy7o0/ -----END CERTIFICATE-----
Sample Certificate in PEM format with text information: the same certificate data preceded by a human-readable text section before the -----BEGIN CERTIFICATE----- tag.
To add text information to a certificate in PEM format
The following command should be input on a workstation with read and write access to the certificate file.
openssl x509 -in cert.pem -out certout.pem -text
Once the command has been executed, check the new certificate file to verify the existence of the text information. The certificate will not import correctly without the text at the beginning of the file.
Converting Certificates and Keys Exported from Microsoft IIS
Microsoft IIS server does not support the ability to export keys and certificates as separate files in PEM format. Instead, a single PFX file is exported which includes the key and certificate. Use the following instructions to properly export the PFX file from the IIS server and then convert the file into a separate key and certificate file.
Exporting the Keys and Certificates from Microsoft IIS
To migrate an SSL certificate from an IIS server to the AppBeat DC
1. From the Run prompt, type mmc, and click Enter.
2. Go to File > Add/Remove Snap-in.
3. Click Add.
4. Select Certificates and click Add.
5. Select Computer Account and click Next and Finish.
6. Click Close and OK.
7. Expand the Certificates tree and expand Personal Certificates.
8. Right-click the certificate you want to export and select All Tasks > Export. The Export Welcome Screen appears.
9. Click Next.
10. On the next screen, you MUST select to export the private key. Select Yes, Export the private key and click Next.
11. Check Include all certificates in the certificate path. Doing so guarantees the proper exporting of all parent certificates if the certificate being exported is a chained certificate.
12. Uncheck Enable Strong Protection. Then click Next.
13. On the next screen, leave the two password fields blank, unless a password was assigned when generating the key. Click Next.
14. Select a destination and file name. In this example, let's call it cert.pfx.
15. Click Next and Finish.
Converting the PFX File into Separate Key and Certificate Files
Once the certificate has been exported, open a command prompt window, go to the directory where the certificate was saved, and type in the following commands:
openssl pkcs12 -in cert.pfx -out cert_temp.pem -nodes -nokeys<enter> <enter>
Extract the private key from the PFX Certificate to a separate file
Run the following command to extract the key from the original certificate file:
openssl pkcs12 -in cert.pfx -out cert.key -nodes -nocerts <enter> <enter>
If there is a password on the private key, you will need to enter it, otherwise, just press enter twice.
Add required Header information to PEM certificate using OpenSSL
1. Check the contents of cert_temp.pem to verify whether there is more than a single certificate within the file. Every certificate in the file will have some existing header information followed by a -----BEGIN CERTIFICATE----- tag, and ending with an -----END CERTIFICATE----- tag.
2. If there is only one certificate in the file, this means the exported certificate was not a chained certificate, and you should therefore proceed to step 21 now. If, however, there is more than one certificate in the file, this means that a chained certificate was exported from the IIS server, and additional steps need to be taken before each certificate can be processed. For chained certificates, skip to Chained Certificates on page 213.
3. Now, run the following command from the command prompt:
openssl x509 -in cert_temp.pem -out cert.pem -text
4. The last step is to validate that the certificate file and key file have the same signature. To do this, run the following two commands, and verify that the output strings match:
openssl x509 -noout -modulus -in cert.pem | openssl md5
openssl rsa -noout -modulus -in cert.key | openssl md5
5. Once you have verified the signatures, copy the two files onto the FTP server.
Chained Certificates
As explained above, if the PEM file contains more than one certificate, this means that the certificate that was exported from IIS was a chained certificate. The first step to handling a chained certificate file is to separate each of the certificates into separate files. Following are the detailed steps for handling a chained certificate (proceed with these steps after completing the steps above).
1. Cut and paste each certificate contained in the cert_temp.pem file into a separate text file, by doing the following:
a. Cut and paste each certificate from and including the -----BEGIN CERTIFICATE----- tag, up to and including the -----END CERTIFICATE----- tag.
There will be additional header information that precedes each certificate in the PEM file, but you need not copy this header information into each new file; only the certificates themselves need to be copied.
b. Name each certificate file sequentially. For example: chain_cert1.pem, chain_cert2.pem, etc. This is important, as the order of the certificates will need to be preserved at the end of this process.
2. After the certificates are separated into separate files, run the following OpenSSL command for each certificate file, to add the necessary header:
openssl x509 -in chain_cert1.pem -out cert1_with_header.pem -text
3. Once all certificate files have been converted to include a header, merge the contents of the individual certificate files into a single new file, called cert.pem. Make sure to paste the certificates in the same order that they existed in the original certificate.
4. The last step is to validate that the certificate file and key file have the same signature. To do this, run the following two commands, and verify that the output strings match:
openssl x509 -noout -modulus -in cert.pem | openssl md5
openssl rsa -noout -modulus -in cert.key | openssl md5
5. Once you have verified the signatures, copy the two files onto the FTP server. The key and certificate files are now ready to be imported into the AppBeat DC.
Chapter 14. Global Server Load Balancing
This chapter discusses the Global Server Load Balancing (GSLB) feature designed to distribute application traffic evenly among multiple site locations.
GSLB Overview
Global Server Load Balancing (GSLB) is a service that runs locally on the AppBeat DC and provides a load balancing and redundancy solution for applications and websites operating across globally dispersed AppBeat DCs, and in external locations. GSLB can be configured for any combination of up to ten local, remote, and external sites. A local site is a site located on a particular AppBeat DC. A remote site is a site located on another AppBeat DC. An external site is a site not located on any AppBeat DC, or a site located on an AppBeat DC not configured for GSLB. GSLB communicates with remote site locations using an inter-unit communication protocol. GSLB determines the best performing site and directs the traffic to that site. The best performing site is determined according to the following:
Load balancing: GSLB directs the traffic to the location of the best performing site, according to the configured load balancing policy.
Disaster recovery: GSLB directs the traffic to the location of the healthy site, in case one or more of the sites are not operating.
GSLB functionality is only available if a GSLB license is obtained. The following figure shows a typical AppBeat DC deployment that uses GSLB:
In Figure 71, an AppBeat DC is deployed to each of the domain's geographical locations to accelerate traffic to each cluster on each location. AppBeat DCs CN1 and CN2 are deployed to the London and New York locations of the www.customer.com domain. The AppBeat DCs participating in a particular GSLB policy require individual configuration:
Each AppBeat DC unit should be configured with a GSLB listener to enable communication with the other AppBeat DCs.
At least one AppBeat DC should be configured as a DNS server to delegate the authority of the domain name to the AppBeat DC.
In addition, all AppBeat DC units participating in a particular GSLB policy require identical configuration of a GSLB profile and a configuration of a GSLB context.
A GSLB profile specifies the load balancing policies, backup policy, and disaster recovery policy. The GSLB context specifies which local, remote, and external sites can be returned as a DNS resolution for DNS queries regarding a particular domain name, the relative weight of each site, and which GSLB profile to employ for this context.
AppBeat DC units, belonging to the same GSLB context and located in various geographic locations, continuously exchange site status and health information using an inter-unit communication protocol to determine the website with the best performance. When DNS queries are received for the www.customer.com domain, the AppBeat DCs return a DNS resolution, based on the configured load balancing algorithm, taking into account each site's load and health parameters and their configured weights. In our example, the DNS resolution may be 212.30.34.45, which is the IP address of the NY site, or it may be 212.90.31.60, which is the IP address of the London site (see Figure 71). For example, if the load balancing algorithm configured for the GSLB context is Round Robin and there are two sites for the www.customer.com domain name, the DNS queries are resolved according to the following:
For the first DNS query received, the IP address of the first site is returned.
For the next DNS query received, the IP address of the second site is returned.
The IP addresses returned continue to alternate for each DNS query received, as long as both of the locations are reported as operating and healthy. Each AppBeat DC unit that belongs to a certain GSLB context may serve as a DNS server for the domain names of that context. Each of them applies the same load balancing policies in order to determine the preferred site to direct traffic to at a specific moment in time. Because all the units share the same information, it is very probable that they will reach the same load balancing decisions at any given moment.
GSLB operation can be configured with multiple GSLB contexts that are not necessarily symmetric. For example, AppBeat DC CN1 can operate with AppBeat DC CN2 on one context, while AppBeat DC CN2 operates with another AppBeat DC, such as CN3, on another context.
GSLB Algorithms
GSLB's load balancing algorithms are described in the following table.
Table 14: GSLB Load Balancing Algorithms
Round Robin (rr): The system directs the traffic to all of the operating sites equally, one after another. The system places the IP addresses of all of the operating GSLB sites into a queue. Every DNS response picks the IP address at the start of the queue, directs the traffic to that site, and moves the IP address to the end of the queue.
Weighted Round Robin (wrr): The same as Round Robin, taking into account the relative configured weight. For example: Site A has a weight value of 10. Site B has a weight value of 20. Out of 3 DNS queries, one is directed to Site A, the other two are directed to Site B.
Weighted Least Pending Requests (wlpr): The system responds to DNS queries with the IP address of the site with the highest dynamic weight, calculated as follows: site's dynamic weight = site's weight / site's pending requests, where site's pending requests = the site's current number of pending requests on servers. If the number of pending requests is 0, a value of 1 is used instead.
Weighted Least Bandwidth (wlbw): The system responds to DNS queries with the IP address of the site with the highest dynamic weight, calculated as follows: site's dynamic weight = site's weight / site's bandwidth, where site's bandwidth = sum of requests (bytes) per second + sum of responses (bytes) per second. If the bandwidth is 0, a value of 1 is used instead.
Weighted Least Transactions Per Second (wltps): The system responds to DNS queries with the IP address of the site with the highest dynamic weight, calculated as follows: site's dynamic weight = site's weight / site's transactions per second, where site's transactions per second = the number of responses per second. If the number of responses per second is 0, a value of 1 is used instead.
GSLB Configuration
GSLB configuration consists of configuring all of the AppBeat DCs participating in a particular GSLB policy, called a GSLB context. The configurations must be identical, except for the following:
Each AppBeat DC requires its own local GSLB listener.
Each AppBeat DC may be configured as a DNS server.
All sites located on the AppBeat DC you are currently configuring are considered local. These same sites are considered remote sites when you are configuring a different AppBeat DC. Local and remote sites are configured slightly differently.
GSLB configuration flow consists of the following steps:
In each AppBeat DC:
1. Configure health check profiles, as described in Configuring Health Check Profiles to Determine Site Health on page 220. These profiles will be used to determine the health status of external and local sites.
2. Configure a GSLB profile, as described in Configuring a GSLB Profile on page 220.
3. Configure Vservices, as described in Configuring Vservices for Local Sites on page 224. The Vservices will be used when specifying local sites in the GSLB context.
4. Optionally configure the AppBeat DC as a DNS server, as described in Configuring the Local DNS Server Settings on page 225. At least one AppBeat DC in the GSLB context must be configured as a DNS server.
5. Configure a local GSLB listener, as described in Configuring the Local GSLB Listener Settings on page 226.
6. Configure a GSLB context, as described in Configuring a GSLB Context on page 227. The GSLB context specifies a list of domain names, and the list of sites (IP addresses) that can be returned as a DNS resolution for DNS queries regarding the domain. Additional parameters, such as the weight of each site and the GSLB profile you associate with the GSLB context, determine how to resolve the DNS query.
Before beginning the GSLB configurations, verify that your AppBeat DCs are configured. For example, AppBeat DCs CN1 and CN2 in Figure 71 are configured as follows:
AppBeat DC CN1 (London):
farm fa_london
cluster cl_london server-inactivity global
real r1 . . .
real r2 . . .
real r3 . . .
virtual v_lon 212.90.31.60 80 redundancy-group 1 default-cluster cl_london
AppBeat DC CN2 (New York):
farm fa_new_york
cluster cl_new_york server-inactivity global
real r1 . . .
real r2 . . .
real r3 . . .
virtual v_ny 212.30.34.45 80 redundancy-group 1 default-cluster cl_new_york
Step 1: Configuring Health Check Profiles to Determine Site Health
Configure all the health check profiles you need to determine the health of external sites and local sites. You may decide to employ different health check profiles for different sites. For information on how to configure a health check profile, refer to Health Check Configuration on page 116.
Step 2: Configuring a GSLB Profile
You can configure up to 256 GSLB profiles. If you do not configure one of the parameters in the profile, its default value is assigned. The common GSLB profile configuration flow consists of the following steps:
1. Define a new GSLB profile.
2. Configure the DNS time to live (TTL) value. The default value is 5 seconds.
A smaller TTL value makes the setup more resilient during disaster recovery. However, it may increase DNS traffic since a DNS resolution with a relatively small TTL value ages fast.
3. Configure the inter-unit communication protocol settings. This protocol is used to advertise periodic messages among AppBeat DC units. Configure the following threshold parameters:
The interval in seconds between two such consecutive messages. The default value is 2 seconds.
The number of advertisements a remote location has neglected to send that renders this location non-reporting, and therefore down. The default value is 3.
4. Configure the backup site policy, which specifies under which conditions a backup site becomes active in case of disaster. Select one of the following:
This site is the active site if all other sites fail. This is the default setting.
This site is the active site if one of the other sites fails.
Then configure how to treat HTTP requests in backup mode. Select one of the following:
Redirect HTTP requests while in backup mode. This is the default setting.
Serve HTTP requests while in backup mode.
5. Configure the load balancing algorithm. Select one of the following:
Round Robin (rr).
Weighted Round Robin (wrr).
Weighted Least Pending Requests (wlpr). This is the default setting.
Weighted Least Bandwidth (wlbw).
Weighted Least Transactions Per Second (wltps).
For more information about the load balancing algorithms, see Table 14.
Optionally, configure the client IP persistency. The IP persistency is used to ensure that after performing subsequent accesses to a specific location, the DNS resolution is directed to the same location. By default, no persistency is configured.
Optionally, indicate whether HTTP IP redirect is enabled. HTTP IP redirect ensures that when a DNS resolution is directed to a location where a cluster is operationally down, the DNS is redirected to an operating location. HTTP IP redirect is disabled by default.
6. Optionally, specify a password for the profile. This password is used to secure inter-unit communications. You can specify either of the following types of password:
Regular password: The password is entered into the CLI as a regular string. When viewing the site configuration, the password is encrypted.
Encrypted password: The password is entered obscurely, and is then encrypted. The remote AppBeat DC can then decrypt the password.
The GSLB profile uses only one password. If you specify both a password and an encrypted password, the more recently configured one is used. When you finish configuring the GSLB profile in the first AppBeat DC, run show gslb profile and copy the profile settings. You can then paste them when configuring the remaining AppBeat DCs, to ensure that the GSLB profile settings are identical among all AppBeat DCs. If the GSLB profile included an encrypted password, it is highly advisable to use copy and paste to enter the password into the GSLB profile configuration of the other AppBeat DCs.
GSLB Profile Configuration CLI Commands
To configure a new GSLB profile using the CLI
Command Syntax:
profile profile-name
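For example, to create the profile gslb_prfl referred to below (the prompt shown is illustrative):
gslb> profile gslb_prfl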
A new profile named gslb_prfl is created and entered in the CLI context.
To configure the GSLB profile's DNS time to live using the CLI
dns-ttl dns time-to-live in seconds [1-86400]
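For example, to set the profile's DNS TTL to the default of 5 seconds, issued from the GSLB profile's configuration context (the prompt shown is illustrative):
profile gslb_prfl> dns-ttl 5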
To configure the GSLB profile's inter-unit communication protocol settings
Command Syntax:
threshold advertisement-interval interval in seconds between messages sent [1-60] site-down site is considered down after this many tries of being reached [1-10]
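For example, to keep the default 2-second advertisement interval and 3 missed advertisements (the prompt shown is illustrative):
profile gslb_prfl> threshold advertisement-interval 2 site-down 3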
To configure the GSLB profile's disaster recovery settings using the CLI
Command Syntax:
backuppolicy {active-if-all-fail | active-if-one-fail} {redirect-on-back-mode | no-redirect-on-back-mode}
To configure load balancing for the GSLB profile using the CLI
Command Syntax:
load-balancing algorithm {rr | wrr | wlpr | wlbw | wltps} [persistency ip-mask] [http-ip-redirect]
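For example, to select the Weighted Round Robin algorithm without persistency or HTTP IP redirect (the prompt shown is illustrative):
profile gslb_prfl> load-balancing algorithm wrr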
To configure a password for the GSLB profile using the CLI
Command Syntax:
password password string
To configure an encrypted password for the profile using the CLI
Command Syntax:
encrypted-password string up to 64 characters
Viewing the GSLB Profile Settings
You can view the configured GSLB profile settings using the show gslb profile command.
To view the configured GSLB profile using the CLI
Command Syntax:
show gslb profile [name profile-name]
Step 3: Configuring Vservices for Local Sites
You can configure one or more Vservices for your local sites. When you specify the local sites in the GSLB context, you must associate each local site with a Vservice. You can use the same Vservice for all the local sites, or different Vservices for the sites.
Vservices are used for collecting performance metrics in a site. If there is no need to collect performance metrics (for example, if you specify the Round Robin or Weighted Round Robin load balancing algorithm in the GSLB profile), you can configure an empty Vservice. That is, only assign the Vservice a name, but do not specify any filtering rules for classifying which traffic should be restricted or monitored.
For information on how to configure a Vservice, refer to Configuring a Vservice on page 170.
Step 4: Configuring the Local DNS Server Settings
Configure at least one AppBeat DC in the GSLB context as a DNS server. This enables the configured AppBeat DC to receive and resolve DNS queries for the domain you will define in the GSLB context. You can configure up to 10 DNS servers.
DNS Server Configuration CLI Commands
To configure a DNS server using the CLI
Command Syntax:
dns-server DNS server name
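For example, to create the DNS server DNS-srvr referred to below (the prompt shown is illustrative):
topology> dns-server DNS-srvr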
A new DNS server named DNS-srvr is created and entered in the CLI context.
To configure the DNS server settings using the CLI
Command Syntax:
ip DNS server's ip address
Prompt level: Configure > Topology > DNS server
Example commands:
For AppBeat DC CN1 (London):
dns-server DNS-srvr> ip 64.208.185.14
To view the configured DNS server using the CLI
Command Syntax:
show gslb
IP address      Port
88.75.66.13     53
Step 5: Configuring the Local GSLB Listener Settings
For each AppBeat DC, configure the local GSLB listener settings. The GSLB listener listens to GSLB inter-unit messages from all AppBeat DCs at remote locations. The AppBeat DC's local listener recognizes the remote AppBeat DC with which it is communicating according to the remote AppBeat DC's configured local listener settings. For example, when AppBeat DC CN1 (London) reads that the GSLB site configurations are gslb-site ny listener 64.208.185.13 80, CN1 recognizes these as the New York site settings and that AppBeat DC CN2 (New York) is listening on that address. You can configure up to 10 GSLB listeners. To configure a GSLB listener, you must define for it a unique IP address and port number. You can also optionally assign it a public IP address, and optionally specify whether to secure communications by encrypting them.
GSLB Listener Configuration CLI Commands
To configure a local GSLB listener using the CLI
Command Syntax:
gslb-listener GSLB listener name
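For example, to create the listener London_Lsnr referred to below, on AppBeat DC CN1 (the prompt shown is illustrative):
topology> gslb-listener London_Lsnr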
A GSLB listener named London_Lsnr is created and entered in the CLI context.
To configure local GSLB listener settings using the CLI
Command Syntax:
ip GSLB listener's ip address port port number [public ip GSLB listener's public ip address port port number] [secure]
Prompt level: Configure > Topology > GSLB listener
Example commands:
For AppBeat DC CN1 (London):
gslb-listener London_Lsnr> ip 64.208.185.13 port 80
To view the configured GSLB listener using the CLI
Command Syntax:
show gslb
(Sample output showing the configured DNS server at 88.75.66.13, port 53, and the GSLB listener on port 80, not secured, with no public IP, status Enabled.)
Step 6: Configuring a GSLB Context
The GSLB context configuration flow consists of the following steps:
1. Define a new GSLB context.
You can configure up to 256 GSLB contexts.
2. Associate a GSLB profile with the GSLB context. For information on configuring a GSLB profile, refer to Configuring a GSLB Profile on page 220.
3. Configure the domain names for the GSLB context.
You can enter up to 4096 domain names in a single GSLB context.
4. Configure the local, remote, and external sites that can be returned as a DNS resolution for DNS queries regarding the domain. Configure each site as follows:
a. Configure the public IP address of the site.
b. Configure the priority level, or weight, of the site in comparison to the other sites. This is used for load balancing purposes.
c. Optionally, specify that this site can serve as a backup site in case of disaster. The actual decision of which site serves as backup is determined by the backup policy settings in the GSLB profile associated with this GSLB context.
d. Associate a Vservice with the site.
e. Associate a health check profile with the site. This determines the method by which the AppBeat DC decides if this site is alive.
f. If the site is a remote site, configure a listener for the remote site. That is, specify an IP address and port on which the remote AppBeat DC listens to GSLB inter-unit messages from this AppBeat DC. If the site is an external site, specify that it is external, and associate a health check profile with the site. This determines the method by which the AppBeat DC decides if the external site is alive.
You can configure up to 10 sites for each GSLB context. When you finish configuring the GSLB context in the first AppBeat DC, run show gslb context and copy the context settings. You can then paste them when configuring the remaining AppBeat DCs, while keeping in mind that you will need to change the local and remote site settings appropriately in each AppBeat DC. In each AppBeat DC you are configuring, keep in mind that:
All sites residing in this AppBeat DC must be configured as local sites. That is, you must associate a Vservice and a Health check profile with each local site.
All sites residing in another AppBeat DC must be configured as remote sites. That is, you must configure a listener for each remote site.
GSLB Context Configuration CLI Commands
To configure a GSLB context using the CLI
Command Syntax:
gslb-context GSLB context name
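For example, to create the context gslb-1 referred to below (the prompt shown is illustrative):
gslb> gslb-context gslb-1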
A GSLB context named gslb-1 is created and entered in the CLI context.
To associate a GSLB profile with the GSLB context using the CLI
Command Syntax:
profile GSLB profile name
To configure a domain name for the GSLB context using the CLI
Command Syntax:
domain-name GSLB domain name
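For example, to add the domain used throughout this chapter to the context (the prompt shown is illustrative):
gslb-context gslb-1> domain-name www.customer.com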
To configure a local site for the GSLB context using the CLI Command Syntax:
public-ip public IP address of the site weight the site's weight used for load balancing purposes vservice vservice name health-check profile health-check profile name [backup]
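For example, a command of the following form configures a local site; the IP address and weight echo the sample output shown later in this section, while the Vservice and health-check profile names are placeholders:
public-ip 10.0.1.23 weight 10 vservice vs1 health-check profile hc1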
To configure a remote site for the GSLB context using the CLI Command Syntax:
public-ip public IP address of the site weight the site's weight used for load balancing purposes listener ip IP address on which this remote location listens to GSLB inter-unit messages port port on which this remote location listens to GSLB inter-unit messages [backup]
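For example, a command of the following form configures the New York site as a remote site on the London unit; the listener settings match the New York listener from Step 5, while the public IP address and weight are shown here purely for illustration:
public-ip 64.208.185.13 weight 10 listener ip 64.208.185.13 port 80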
To configure an external site for the GSLB context using the CLI Command Syntax:
public-ip public IP address of the site weight the site's weight used for load balancing purposes external health-check profile health-check profile name [backup]
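For example, a command of the following form configures an external site; the IP address and weight echo the sample output below, and the health-check profile name is a placeholder:
public-ip 10.0.1.210 weight 10 external health-check profile hc-ext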
Viewing the GSLB Context Settings
You can view the configured GSLB context settings using the show gslb context command.
To view the configured GSLB context using the CLI Command Syntax:
show gslb context [name context-name]
GSLB profile: per
Sites:
Name    Public IP    Listener      Sec.  Whgt  Bkp  Ext  H/C  VS   Admn  Active
A       10.0.1.33    10.0.1.32:80  Off   10    No   No   No   No   Up    Yes
ext1    10.0.1.210                 Off   10    No   Yes  Yes  No   Up    Yes
ext2    10.0.1.32                  Off   10    No   Yes  Yes  No   Up    Yes
local   10.0.1.23                  Off   10    No   No   Yes  Yes  Up    Yes
local1  10.0.1.25                  Off   10    No   No   Yes  Yes  Down  Yes
View the current health of the sites configured for the GSLB context and receive notification when the status changes.
View the DNS resolution counters of the sites for the GSLB context and receive notification when there is a change in the counters.
Viewing and Monitoring GSLB Status
To view the GSLB sites' health using the CLI Command Syntax:
show gslb status
gslb context: local

Site Name  Mode    Operational Status  Wght  BW  TPS  Pndg Req  Site score  Last state change time
A          Remote  Not Reporting       10    0   0    0         0           Tue Mar 11 00:02:41 2003
ext1       Ext.    UP                  10    0   0    0         1           Tue Mar 11 00:03:17 2003
ext2       Ext.    DOWN                10    0   0    0         0           Tue Mar 11 00:02:40 2003
local      Local   UP                  10    0   0    0         1           Tue Mar 11 00:02:40 2003
local1     Local   DOWN                10    0   0    0         1           Tue Mar 11 01:19:21 2003
To receive a notification when there is a change in the sites' status using the CLI Command Syntax:
logging threshold syslog notification gslb
Prompt level: Configure > Management
Viewing and Monitoring GSLB DNS Counters
To view the GSLB DNS counters using the CLI Command Syntax:
show gslb counters
Table 15: GSLB DNS Counters Fields

Site Name - The name of the site.
Load Balancing DNS Resolutions:
  Total - The number of times a DNS resolution is returned based on the load balancing algorithm configured for the domain.
  Per second - The average times per second that a DNS resolution is returned based on the load balancing algorithm configured for the domain.
Persistency:
  Total - The number of times a DNS resolution is directed to this location based on persistency policies (and not by performing load balancing).
  Per second - The average times per second that a DNS resolution is directed to this location based on persistency policies (and not by performing load balancing).
HTTP Redirect:
  Total - The number of times a DNS resolution is redirected to a new location after the HTTP request was sent to a location where the cluster is operationally down.
  Per second - The average times per second that a DNS resolution is redirected to a new location after the HTTP request was sent to a location where the cluster is operationally down.
To receive notification when there is a change in the DNS counters using the CLI Command Syntax:
config> logging threshold syslog informational gslb
15
VRRPc Redundancy
This chapter discusses the VRRPc feature designed to provide redundancy between two AppBeat DC units.
Before Proceeding.
VRRPc Overview.
VRRPc in Hot-Standby Mode.
VRRPc in Load-Sharing Mode (Active/Active).
Switchover.
Configuration Synchronization.
VRRPc Maintenance Mode
Before Proceeding
In order to proceed with configuring VRRPc redundancy, the following prerequisites must be met:
Two AppBeat DC units should be properly mounted and installed.
Management connectivity for each unit, whether through the Serial Console or via the Management Ethernet Interface (GUI, Telnet, or SSH). See Chapter 2, AppBeat DC Installation.
At least one Data Interface on each unit configured with an IP address and connected to the same network as the server(s) to be accelerated. See Chapter 5, Initial Configuration and Global Settings.
Server(s) configured in at least one cluster. See Chapter 7, Server Topology Farms/Clusters/Real Servers.
VRRPc Overview
VRRPc is Crescendo Networks' proprietary redundancy protocol for Application Delivery Controllers. Implemented in a similar fashion to VRRP, using virtual MAC and IP addresses, VRRPc extends the capabilities of traditional VRRP by enabling more intelligent redundancy decisions. VRRPc tests more than the simple network availability between two redundant units that VRRP does. Instead, it bases failover decisions on upstream network unit availability as well as application server health and connectivity.
VRRPc is configured by assigning a VRRPc IP address and ID number to each participating interface of an AppBeat DC. Each unit can be configured to health check upstream routers or load balancers as well as verify the connectivity to servers configured for acceleration. Each AppBeat DC compares its availability (ability to reach all configured units) and then determines which AppBeat DC should be active. In the event of unit failure, or if the backup AppBeat DC has a greater level of successful connectivity to servers and/or upstream units, failover will take place, ensuring application availability.
VRRPc can be implemented in one of two ways: hot-standby or load-sharing (i.e., active/active). In hot-standby mode, only one AppBeat DC will be active, while the other unit remains dormant. Load-sharing mode enables two AppBeat DC units to be simultaneously active, providing acceleration for different groups of servers at the same time.
The configuration examples provided in the following sections pertain to the configuration of two AppBeat DC units. While most implementations will require an almost identical configuration between units, there are still small differences, which are noted in the Guidelines for each section.
Configure an interface IP address and mask on each gigabit-ethernet interface. Each AppBeat DC will have different regular IP addresses.
Configure the VRRPc virtual router IP (VRIP) address and Virtual Router ID (VRID). The VRIP and VRID defined will be identical between AppBeat DCs. The VRID must be a number within the range 1-255. The VRID should be different for each VRIP defined across all physical interfaces.
Enable VRRPc in hot-standby mode.
To configure VRRPc IP and ID per interface using the CLI Command Syntax:
vrrpc {group-1 | group-2} {vrid virtual-router-id | vrip virtual-router-IP-address}
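For example, commands of the following form assign a group-1 VRID and VRIP to the current interface (the ID and address values are placeholders):
vrrpc group-1 vrid 1
vrrpc group-1 vrip 192.168.10.100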
If the AppBeat DC is installed as a router (i.e., when using passive mode), all units configured to route through the AppBeat DC should have those routes forwarded through the VRRPc IP addresses.
To configure VRRPc IP and ID per interface using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology panel, click the Redundancy icon. This will display the General VRRPc Configuration settings as shown in the figure below.
3. Select the Port or Aggregator to which you want to add a VRRPc IP address.
4. Highlight the existing IP interface to populate the configuration windows below.
5. Configure the VRRP address and VRID. VRID 1 and VRRP IP 1 belong to group-1 while VRID 2 and VRRP IP 2 belong to group-2. For Hot-Standby, only use group-1 settings.
To enable VRRPc globally using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology panel, click the Redundancy icon. This will display the General VRRPc Configuration settings as shown in the figure below.
3.
Prompt level: Root
Once the VRRPc IP addresses and IDs are configured for each interface, VRRPc must be enabled globally. Once enabled, the AppBeat DC automatically takes into account connectivity to existing servers. Therefore, routing may not work properly until accelerated servers are defined in the configuration. Additionally, it is not required that health checks be configured for upstream routers or load balancers. However, it is recommended that these additional checks be configured to ensure the highest level of availability.
In load-sharing mode, each AppBeat DC is given an identical farm, cluster, and server configuration, in which some of the Virtual Servers will be defined as group-1 and some defined as group-2. Since each AppBeat DC has the same configuration, either unit could provide acceleration for each group of servers. Using the VRRPc election mechanism, the two AppBeat DC units determine which should provide acceleration for each group. This is determined based on connectivity to the servers and other upstream units such as load balancers or routers. If network connectivity and server availability are the same for each AppBeat DC, then the MAC address of each unit is used as the final arbitrator. Essentially, if all things are equal (i.e., health and connectivity), the AppBeat DC with the highest MAC address will provide acceleration for servers denoted as group-1 while the unit with the lowest MAC address will provide acceleration for servers denoted as group-2. VRRPc will then provide seamless failover between each AppBeat DC should there be unit or connectivity failure.
VRRPc Load-Sharing Configuration Guidelines
Configure an interface IP address and mask on each gigabit-ethernet interface. Each AppBeat DC will have different regular IP addresses.
Configure a VRRPc Virtual Router IP address (VRIP) and Virtual Router ID (VRID). The VRIP and VRID defined will be identical between AppBeat DCs. The VRID must be a number within the range 1-255. The VRID should be different for each VRIP defined across all physical interfaces.
Assign VRRPc interfaces and virtual servers as either group-1 or group-2. When both AppBeat DC units are functioning simultaneously, each will be responsible for a different group which will include an interface and servers. Between redundant units, each VRID should correspond with each VRIP defined.
Enable VRRPc in load-sharing mode.
To configure Virtual Servers for load sharing using the CLI Command Syntax:
virtual virtual-server-name redundancy-group {1 | 2}
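For example, a command of the following form assigns a virtual server to group-2 (the virtual server name is a placeholder):
virtual vs_web redundancy-group 2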
To configure Virtual Servers for Load Sharing using the GUI
VRRPc variables can be configured for Virtual Servers.
1. Once logged in to the GUI, click Configuration.
2. In the Topology panel, click the individual Virtual Server under the Virtual Servers icon. This will display the general Virtual Server Properties as shown in the figure below.
3.
To enable VRRPc for Load Sharing using the CLI Command Syntax:
vrrpc {enable load-sharing | disable}
To enable VRRPc for Load Sharing using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology panel, click the Redundancy icon. This will display the General VRRPc Configuration settings as shown in the figure below.
3.
Switchover
The VRRPc switchover feature provides a switchover in case of unit or network failure in either of the AppBeat DC units. To use this feature, you must set the threshold for the percentage of up servers at which the switch is made.
To configure the switchover using the CLI Command Syntax:
vrrpc switch-threshold percentage
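For example, a command of the following form sets a switchover threshold of 50 percent of up servers (the percentage is a placeholder):
vrrpc switch-threshold 50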
To enable the switchover using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology panel, click the Redundancy icon. This will display the General VRRPc Configuration settings, as shown in the figure below.
3. In the Switching Threshold field, set the threshold for the percentage of up servers at which the switch is made.
Configuration Synchronization
Configuration synchronization enables you to duplicate the configuration data from one AppBeat DC to another. Once the units are synchronized, the mate unit can be used in the following cases:
When a failover occurs in the AppBeat DC, the mate unit assumes traffic processing.
When performing load sharing.
Do not use configuration synchronization when employing VRRPc load sharing.
Configuration synchronization consists of the following steps:
1. Configuring the AppBeat DC's mate unit.
2. Synchronizing the unit configurations (with or without saving the configurations).
Configuring the Mate Unit
To configure the mate unit using the CLI Command Syntax:
vrrpc mate ip address of the target unit
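For example, a command of the following form defines the mate unit (the management IP address is a placeholder):
vrrpc mate 192.168.1.2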
To configure the mate unit using the GUI
1. Once logged in to the GUI, click Configuration.
2. In the Topology panel, click the Redundancy icon. The Redundancy tab appears.
3. In the Mate IP field, enter the IP address of the management interface of the mate unit and click Apply.
The AppBeat DC's mate unit is configured. You can now synchronize the units.
Saving and Synchronizing the Unit Configurations
You can synchronize the units with or without saving the unit configurations. Synchronization is performed only if all interfaces maintain IP addresses in the same subnets.
To synchronize the unit configurations using the CLI Command Syntax:
vrrpc config-sync
To save and synchronize the unit configurations using the CLI Command Syntax:
save conf-synch
While the AppBeat DCs are in maintenance mode, you can change the network configuration on either AppBeat DC and no failover changes will occur. VRRPc maintenance mode is mandatory before performing the following changes. The changes will go into effect as soon as you disable VRRPc maintenance mode.
Deleting or adding an interface's IP address, whether primary or secondary, on a new or an existing subnet.
Deleting, adding, or changing a VRIP or VRID.
Changing the VRRPc advertise interval.
Changing the VRRPc Master Down counter.
VRRPc maintenance mode is recommended when adding a virtual server, removing a virtual server, changing its IP address, or assigning a virtual server to a VRRPc group. It is generally recommended that, before configuring any changes on one or both AppBeat DCs participating in VRRPc, you place at least one of the AppBeat DCs in maintenance mode. After the network configurations are complete, disable maintenance mode on the AppBeat DC for which you enabled it.
To enable VRRPc maintenance mode using the CLI Command Syntax:
vrrpc-maintenance-mode enable
16
Monitoring the AppBeat DC
This chapter describes how to monitor the AppBeat DC unit and the accelerated farms, clusters, servers, and Devices using either the GUI-based AppBeat DC Management system or the CLI.
Overview.
Viewing the AppBeat DC Summary Feature.
Monitoring the AppBeat DC.
Monitoring Attacks and Abnormal Network Behavior.
Monitoring Devices.
Overview
The AppBeat DC Platform offers an intuitive tool for managing and monitoring the AppBeat DC. The GUI is accessible via any Web browser, which launches the Java-based SNMP management and monitoring tool. The GUI provides a simple method to configure the AppBeat DC, while also accessing a rich level of statistical information about farms, clusters, individual servers, or even the global statistics regarding the AppBeat DC and how it is enhancing application performance. This chapter describes the AppBeat DC's monitoring features with references to the relevant CLI monitoring commands.
Active Port Indicators (1-10, according to the AppBeat DC unit purchased).
Server Inventory, including the number of servers, clusters, and farms, and the status of each.
Traffic per port/Accelerated traffic.
AppBeat DC Statistics.
Events legend.
The Summary window enables you to view, at a glance, your system's current status, e.g., which servers are operational/failed, which AppBeat DC unit ports are configured, etc. To open the Summary window, click Summary.
The Summary window displays the System Summary dashboard screen, which contains the following information areas:
System Status - This area contains current information about system utilization and temperature, as well as an Events legend.
Logical Entity Status - This area contains current information about the total number of farms, clusters and servers configured; and how many are operational or failed.
Acceleration Statistics - This area provides information about three system acceleration indicators:
Transactions your system is handling. Total bandwidth processed by the system. Active clients your system is handling.
Use the Historical and Real Time buttons to specify whether to show a historical or real time graph. For a real time graph, you can click the Reset button to refresh the display.
Output:
- DC processor
    CPU utilization    : 87 %
    Memory consumption : 56 %
- HTTP
    HTTP engine CPU utilization                : 45 %
    HTTP request buffering memory consumption  : 23 %
    HTTP response buffering memory consumption : 23 %
- TCP
    TCP engine CPU utilization                             : 45 %
    DC processor TCP client connections memory consumption : 23 %
    TCP engine connections table memory consumption        : 23 %
Monitoring the AppBeat DC via the GUI
The following section describes the AppBeat DC GUI monitoring feature. You can pause the monitoring at any time by clicking the Pause Update button at the bottom of the screen.
To view administrative information
1. In the left panel of the AppBeat DC window, click the Monitoring button.
2. In the Topology window, click the AppBeat DC icon.
The tabs displayed in the right panel present global performance data for the AppBeat DC. Several tabs are available to display current configuration information. The Traffic tab window contains the following information, in read-only mode:
HTTP statistics
Bytes/Second - Current and Last 5 Minute Max, for Server and for Client.
Requests/Second - Current and Last 5 Minute Max, for Server and for Client.
Responses/Second - Current and Last 5 Minute Max, for Server and for Client.
Average Response Time - For Server and for Client.
Compression statistics, showing:
Transactions/Second - Number of compressed transactions per second, both current and Last 5 Minute Max.
Pre-Bytes/Second - Number of pre-compression bytes per second within the traffic that enters the unit, both current and Last 5 Minute Max.
Post-Bytes/Second - Number of post-compression bytes per second within the compressed traffic that exits the unit, both current and Last 5 Minute Max.
Summary - Overall percentage of compressed and saved traffic.
Non-HTTP statistics. Bytes/Second - Current and Last 5 Minute Max, for Server and for Client.
Clicking the Pause Update button freezes the counters on the screen (internally the counters continue to progress). Click again to display the updated counters.
To view the TCP tab
The TCP tab window contains the following information, in read-only mode:
Connections.
Active - Monitors the number of clients and servers connected to HTTP entities and non-HTTP entities.
Established/Accepted - Monitors the total number of clients and servers connected to HTTP entities and non-HTTP entities.
Connections per second, for both HTTP entities and non-HTTP entities.
Attempted - Monitors the number of attempted connections per second for clients and servers.
Max. Attempted - Monitors the maximum number of attempted connections per second for clients and servers.
Accepted - Monitors the number of established connections per second for clients and servers.
Max. Accepted - Monitors the maximum number of established connections per second for clients and servers.
In. Out.
The HTTP tab window contains the following read-only information for both servers and clients:
Request - For HTTP 1.0 and 1.1. Response - For HTTP 1.0 and 1.1. Total - For HTTP 1.0 and 1.1. Requests Breakdown (per second):
Responses Breakdown (per second): Success. Redirect. Client error. Server error.
The System tab window contains the following information, in read-only mode:
SSL:
HTTP: HTTP engine HW utilization. HTTP engine CPU utilization. HTTP request buffering memory consumption. HTTP response buffering memory consumption.
TCP:
Parser: Parser engine CPU 1 utilization. Parser engine CPU 2 utilization. Parser engine CPU 3 utilization. Parser engine CPU 4 utilization. Parser engine CPU 5 utilization. Parser engine CPU 6 utilization.
To view the ports and VLANs information
1. In the left panel of the AppBeat DC window, click the Monitoring button.
2. In the Topology window, expand the Network icon by clicking the + symbol, and then click the Ports and VLANs icon.
The Ports tab window contains the following counter information for each port and aggregator, in read-only mode:
Frames: In and Out. Octets: In and Out. Errors: In and Out. Discards: In. ARP Requests Sent. ARP Responses Received. PING (Echo requests). ARP Learning.
Teardrop. Ping of Death. Open/Close. ICMP unreachable attack. ICMP redirect attack. Ping attack. ARP attack. Christmas tree attack. TCP flood.
The AppBeat DC is also capable of reporting attacks and abnormal traffic behavior to the administrator, providing a warning mechanism on top of the protection mechanisms implemented. Reporting is based on user-configurable thresholds, described below. The following attacks and abnormal traffic behavior are reported by the AppBeat DC:
Attacks.
Land attack - IP packets where the source address is the same as the destination address.
SYN attacks - SYN packets received from malicious clients indicating a need to open a TCP connection, but the client never fully opens the connection (the client does not respond to the SYN/ACK of the server).
Abnormal behavior.
IP broadcasts - packets with any broadcast IP address destination.
TCP frames (to virtual IP) - TCP frames destined for a virtual IP address, but not an associated TCP port.
Non-TCP frames (to primary IP) - Any non-TCP frame destined for one of the IP addresses associated with a data port on the AppBeat DC.
Non-TCP frames (to virtual IP) - Any non-TCP frame destined for one of the virtual IP addresses configured on the AppBeat DC.
The AppBeat DC will monitor and report on any of these attacks and abnormal events based on two user-configurable parameters:
Interval - a sample interval (in seconds) over which the number of frames matching the attack or abnormal behavior are counted.
Threshold - defined in terms of number of frames. If the number of frames (matching the attack or abnormal behavior) per sample interval exceeds this number, an event is generated indicating a single instance of an attack.
The default interval and threshold for all attacks and abnormal behavior are 5 seconds and 20 frames, respectively. That means that if 20 frames of each type are seen within a 5 second window, an attack event is registered and reported. The only exception to these default values is the SYN attack where the default threshold is 200 frames. The AppBeat DC reports each attack event and keeps track of the total number of attacks of each type. This number can be reset for any of the attacks or abnormal behaviors, independently.
Configuring Attack Monitors
Configuring Attack Monitors in the CLI
Perform the following steps to configure the attack monitors and associated thresholds in the CLI.
Command Syntax:
attack-monitor {land | syn | ip-broadcast | tcp-to-virtual | nontcp-to-primary | nontcp-to-virtual | all | default} {interval #-of-seconds | threshold #-of-frames-per-interval | enable | disable | reset-counter}
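For example, commands of the following form set the SYN attack monitor to its documented defaults of a 5 second interval and a 200 frame threshold:
attack-monitor syn interval 5
attack-monitor syn threshold 200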
Monitoring Attacks in the CLI
Perform the following steps to view the attack monitors' status in the CLI.
To monitor attacks using the CLI Command Syntax:
show attack-monitor
Monitoring Devices
The AppBeat DC provides you with the capability of monitoring your Devices via the CLI or GUI. The following section describes example monitoring procedures using the CLI and GUI. You can also run various show commands to view information, as shown in the following examples.
Monitoring Devices via the CLI
You can view information about a specific device or all configured devices, and information about the Real servers attached to a device.
To show Device information
Command Syntax:
show device [device-name]
To show information about a Real server associated with a Device
Command Syntax:
show real [real-name]
To show information about a cluster with Real servers associated with a Device
Command Syntax:
show cluster [brief | name name | queue selection [name]]
17
Using the AppBeat DC History Feature
This chapter describes the AppBeat DC History feature.
Overview of the AppBeat DC History Feature.
Selecting and Viewing AppBeat DC History Graphs.
The list of values for each list in the Data legend changes according to the level (global, farm, cluster, or server) selected in the Topology tree.
Data legend - Provides you with color-coded definitions for the data the history graph measures:
Four graphs can be viewed at any time. The four graphs are selected from the predefined counters for which history is gathered. There is a drop-down list for each of the four legends.
Available Historical Variables
Client-side TCP Connection History Statistics
Client Attempted Conns PerSec. Client Accepted Conns PerSec. Client Max Accepted Conns PerSec. Client Established Connections. Client Active Connections.
Server Attempted Conns PerSec. Server Accepted Conns PerSec. Server Max Accepted Conns PerSec. Server Established Connections. Server Active Connections.
L2 Bytes PerSec. Client L7 Request Bytes PerSec. Max L2 Bytes PerSec. Max Client L7 Request Bytes PerSec.
Client Requests PerSec. Client Responses PerSec. Max Client Requests PerSec. Max Client Responses PerSec. Avg Client Transaction Time. Avg Server Transaction Time. Client HTTP 10 Requests PerSec. Client HTTP 10 Responses PerSec. Client HTTP 11 Requests PerSec. Client HTTP 11 Responses PerSec. Client Gets PerSec. Client Others PerSec. Client Puts PerSec. Client Posts PerSec. Client Heads PerSec. Client 2xx Responses PerSec. Client 3xx Responses PerSec. Client 4xx Responses PerSec. Client 5xx Responses PerSec. Client Discarded Requests. Accelerated Bytes PerSec. Max Accelerated Bytes PerSec. Non Accelerated Bytes PerSec. Max Non Accelerated Bytes PerSec. Server Up Events. Server Down Events.
Compressible Transactions PerSec. Compressed Transactions PerSec. Compressed Bytes Before PerSec. Compressed Bytes After PerSec. Max Compressible Transactions PerSec.
Max Compressed Transactions PerSec. Max Compressed Bytes Before PerSec. Max Compressed Bytes After PerSec. Client L7 Response Bytes PerSec.
Max Client L7 Response Bytes PerSec. Server Requests. Server Responses. Server Requests PerSec. Server Responses PerSec. Max Server Requests PerSec. Max Server Responses PerSec. Server L7 Request Bytes PerSec. Max Server L7 Request Bytes PerSec. Server L7 Response Bytes PerSec. Max Server L7 Response Bytes PerSec. Server HTTP 10 Requests PerSec. Server HTTP 10 Responses PerSec. Server HTTP 11 Requests PerSec. Server HTTP 11 Responses PerSec. Server Gets PerSec. Server Others PerSec. Server Puts PerSec. Server Posts PerSec. Server Heads PerSec. Server 2xx Responses PerSec. Server 3xx Responses PerSec. Server 4xx Responses PerSec. Server 5xx Responses PerSec. Client Max Attempted Conns PerSec. Server Max Attempted Conns PerSec.
Attack Land PerInt. Attack Land Total Frames. Attack Syn PerInt. Attack Syn Total Frames. Attack Ip Brdcst PerInt. Attack Ip Brdcst Total Frames. Attack Tcp Virtual PerInt. Attack Tcp Virtual Total Frames. Attack Non Tcp Primary PerInt. Attack Non Tcp Primary Total Frames. Attack Non Tcp Virtual PerInt.
18
Troubleshooting
This chapter provides example troubleshooting FAQs along with information outlining common issues and solutions for the AppBeat DC.
Connect via console and check if the startup.cfg file is present.
root> system
system> dir
If the file is not there, it may have been erased or was not saved prior to power cycling the unit. You may either restore the file from a backed-up configuration file residing on your FTP server, or reconfigure the unit.
Connect via console. Copy the error message (for later reference). Delete files from flash, including startup.cfg. If the problem persists, upgrade the OS and application and reboot. If the problem still persists, run debug> show tech-support, copy all the output text, and send it to Crescendo Networks Technical Support.
Check the port settings; the defaults are: 115K baud, 8 data bits, 1 stop bit, no parity, no flow control. Make sure the console cable is plugged into the correct management port, labeled Serial, NOT Ethernet.
Power cycle the AppBeat DC. Contact Crescendo Networks Technical Support
Connect via console: Verify that the telnet/ssh servers are enabled. Check that the ACL is not preventing access. Verify the username and password.
Verify that the snmp-server and http-server are enabled. Client: check that SUN Java is installed and enabled. Verify that the Web browser cache does not have an older version of the GUI than the current release. (Clear cache from Java console and retry.)
Verify correct username/password via telnet/ssh or console. Verify that there are no intermediary devices (i.e., firewalls, filters, etc.) which may be blocking SNMP traffic between the workstation and the AppBeat DC management interface.
Check cables, IP configuration, and switch/hub port. Verify the configuration of the gateway on the management port in order to get a response to an external network. Close the session and retry. Try SSH; this may happen with non-standard telnet clients.
Problem: SNMP Communities are not working; cannot use MIB browser.
Solution: Verify the community configuration in the AppBeat DC and on the MIB Browser (or SNMP tool).
Problem: Syslog does not log anything on the syslog server.
Solution: Check the syslog threshold settings. Verify that objects in the configuration (i.e., servers, clusters, etc.) have logging enabled.
Problem: No traffic on the data path.
Solution: Check cabling (Fiber tx/rx, for example) and IP. Check show IP interfaces. Verify that the server port is open.
Check server properties, IP address, and port. Check connectivity from the AppBeat DC to the server with ping. Log onto the Web server and make sure its HTTP task is running, and that it is accepting new TCP connections. Also, check the Web server's TCP connection timeout to make sure it is not set too low.
Check that the management port is enabled and works. Check connectivity by pinging the FTP server. Verify that the user/password/path configured in the ftp-record is correct. Verify that there are no intermediary devices (i.e., firewalls) which may block FTP transfers. Verify that the proper default gateway has been configured for the management interface.
Check with a PING from other appliances that it is available on the network, as well as from the AppBeat DC.
Use the debug> show tech-support command. Issue the command twice in a 5-minute interval. Copy the output and send it to Crescendo Networks Technical Support.
Problem: Using an AppBeat DC self-signed SSL certificate causes the browser to display a "certificate is expired or is not yet valid" warning.
Solution: This can occur if the date and time were not configured on the AppBeat DC before the SSL certificate was generated. Reset the date and time, and re-create the certificate.
Problem: New AppBeat DC features do not appear in the AppBeat DC's GUI console immediately after updating the AppBeat DC via an HTTP upload.
Solution: This occurs if the AppBeat DC's GUI console applet was not closed after updating the AppBeat DC. The GUI console and all other open browser windows must be closed following an AppBeat DC code update, so that the new GUI console is downloaded from the AppBeat DC.
Problem: A real server intermittently appears to be down, and then up again after a few seconds.
Solution: Check the TCP timeout settings on both the AppBeat DC and the Web server. Make sure the Web server's TCP timeout setting is greater than the timeout setting on the AppBeat DC.
Problem: After configuring the AppBeat DC to offload SSL from an IIS server, you receive the error message "The page must be viewed over a secure channel" when trying to access a secure portion of the website via SSL.
Solution: This occurs because the Require Secure Channel (SSL) option in the IIS configuration is enabled. Contact Support for instructions on how to resolve this matter.
Problem: While trying to access a specific farm or cluster in the CLI (via Telnet, SSH, or Console), you find that the farm or cluster is empty.
Solution: This can occur if you misspelled the name of the farm or the cluster when issuing the Farm Farm_Name or Cluster Cluster_Name commands. If the Farm_Name or Cluster_Name are not pre-existing entities, a new entity will be created with the misspelled name when the command is entered in the CLI.
Problem: CLI session ends spontaneously.
Solution: Check the value of the idle-inactivity parameter. See Configurable CLI Parameters on page 23 for details.
Problem: After configuring two AppBeat DC units to operate in Hot/Standby mode, the virtual IP addresses become intermittently inaccessible.
Solution: Check to make sure that the force master option is not enabled on both AppBeat DCs, as this will cause a race condition between the two units. Disable this option on the standby unit.
2.
3.
Change password of existing admin account. For this example, the admin account is called "hooman" and the new password should be "80hairband".
config> user hooman 80hairband admin
config>
4.
5.
Logout.
system> exit
root> exit
login:
6.
A
Crescendo Rules Engine
This appendix introduces and explains how to use the Crescendo Rules Engine.
ACL - the Rules Engine is used to build a set of rules for filtering traffic on a packet basis.
Traffic control - the Rules Engine is used to build a set of Traffic control rules for a Virtual Server.
Content Control - the Rules Engine is used to build content control profiles.
Compression - the Rules Engine is used to build content compression profiles.
Queue selection - the Rules Engine is used to build queue selection rules.
Caching - the Rules Engine is used to build caching profiles.
Vservices - the Rules Engine is used to build a Vservice.
Hashing persistency - the Rules Engine is used to configure hashing persistency.
An incoming HTTP request or HTTP response or packet is checked against each rule. If any matches are found, the action specified by the rule with the highest priority is taken.
In the case of access control, the action is whether to discard the packet, pass it on, redirect it, route it according to a routing profile, or open a NAT session for the packet.
In the case of a compression profile, the action is whether to compress or not. In the case of traffic control, the action is whether to deny the request, send it to a specified cluster, or redirect it elsewhere. In the case of a content control profile, the action is a modification of the URL. In the case of queue selection, the action is whether to send the traffic to the dynamic queue or to the static queue. In the case of caching, the action is whether to cache specific content.
A priority value must be assigned to each rule. Priority values are only used when two rules match all criteria for a request. The action specified by the rule with the higher priority is taken. No two rules can have the same priority rating. Priority is based on an ascending scale, so a rule with priority 5 has a higher precedence than a rule with a priority of 2. For example, a request is received which contains the following information:
In this example, the request actually matches both configured rules. When this occurs, the AppBeat DC uses the configured priority to determine which action to take. In this case, Rule 2 has a higher priority (2) and the request will be redirected to www.site2.com per the configured action.
Keywords
The following table lists the keywords you can use to create a CRE expression.
The columns are: Keyword String, Description, Valid in Services, Keyword Type, Permitted Operators.

client.in.ip.dst - Client ingress destination IP address. Valid in: All services. Type: IP Address. Operators: is_present, ==, !=, >, <, >=, <=, /mask.
client.in.ip.src - Client ingress source IP address. Valid in: All services. Type: IP Address. Operators: is_present, ==, !=, >, <, >=, <=, /mask.
client.in.tcp.dstport - Client ingress destination TCP port. Valid in: All services. Type: Integer. Operators: is_present, ==, !=, >, <, >=, <=.
client.in.tcp.srcport - Client ingress source TCP port. Valid in: All services. Type: Integer. Operators: is_present, ==, !=, >, <, >=, <=.
client.in.udp.dstport - Client ingress destination UDP port. Valid in: Non-HTTP Traffic Control. Type: Integer. Operators: is_present, ==, !=, >, <, >=, <=.
client.in.udp.srcport - Client ingress source UDP port. Valid in: Non-HTTP Traffic Control. Type: Integer. Operators: is_present, ==, !=, >, <, >=, <=.
server.in.ip.dst - Server ingress destination IP address. Valid in: All services. Type: IP Address. Operators: is_present, ==, !=, >, <, >=, <=, /mask.
server.in.ip.src - Server ingress source IP address. Valid in: All services. Type: IP Address. Operators: is_present, ==, !=, >, <, >=, <=, /mask.
server.in.tcp.dstport - Server ingress destination TCP port. Valid in: All services. Type: Integer. Operators: is_present, ==, !=, >, <, >=, <=.
server.in.tcp.srcport - Server ingress source TCP port. Valid in: All services. Type: Integer. Operators: is_present, ==, !=, >, <, >=, <=.
server.in.udp.dstport - Server ingress destination UDP port. Valid in: Non-HTTP Traffic Control. Type: Integer. Operators: is_present, ==, !=, >, <, >=, <=.
server.in.udp.srcport - Server ingress source UDP port. Valid in: Non-HTTP Traffic Control. Type: Integer. Operators: is_present, ==, !=, >, <, >=, <=.
tcp.dstport - TCP destination port. Valid in: Access Control. Type: Integer. Operators: is_present, ==, !=.
Keyword String tcp.srcport udp.dstport udp.srcport ip.protocol ip.dst ip.src eth.vlan.id eth.ingif eth.egif http.request.accept_language
Description TCP source port UDP destination port UDP source port IP protocol IP address destination IP address source Ethernet VLAN identifier Ethernet ingress interface Ethernet egress interface HTTP request accept-language header HTTP request authorization header HTTP request cookie
Valid in Services Access Control Access Control Access Control Access Control Access Control Access Control Access Control Access Control Access Control All services available for HTTP clusters and virtual servers Compression, Response Content-control Request content-control
Keyword Type Integer Integer Integer Protocol IP Address IP Address Integer String String String
Permitted Operators is_present, ==, != is_present, ==, != is_present, ==, != ==, != is_present, ==, != /mask is_present, ==, != /mask is_present, ==, != ==, != ==, != is_present, ==, !=, >, <, >=, <=, contains, regex_match, structured-like is_present, ==, != ==, !=, contains, regex_match
String
is_present, ==, !=, >, <, >=, <=, contains, regex_match, structured-like
Valid in Services All services available for HTTP clusters and virtual servers All services available for HTTP clusters and virtual servers All services available for HTTP clusters and virtual servers All services available for HTTP clusters and virtual servers All services available for HTTP clusters and virtual servers All services available for HTTP clusters and virtual servers All services available for HTTP clusters and virtual servers All services available for HTTP clusters and virtual servers
Permitted Operators is_present, ==, !=, >, <, >=, <=, contains, regex_match, structured-like is_present, ==, !=, >, <, >=, <=, contains, regex_match, structured-like is_present, ==, !=
http.request.host
String
http.request.method
HTTP Method
http.request.path
String
is_present, ==, !=, >, <, >=, <=, contains, regex_match, structured-like is_present, ==, !=, >, <, >=, <=, contains, regex_match, structured-like is_present, ==, !=, >, <, >=, <=, contains, regex_match, structured-like is_present, ==, !=, >, <, >=, <=, contains, regex_match is_present, ==, !=
http.request.query
String
http.request.uri
String
http.request.user_agent
String
http.request.version
HTTP Version
Description DHCP gateway (gateway address) field DHCP chaddr (client hardware address) field HTTP response code HTTP response content-type header HTTP response location header HTTP response version Always perform, regardless of the incoming content Always perform, regardless of the incoming content Do not perform, regardless of the incoming content
Valid in Services Hashing persistency for DHCP clusters and virtual servers Hashing persistency for DHCP clusters and virtual servers Compression, Response Content-control Compression, Response Content-control Compression, Response Content-control Compression, Response Content-control All services All services All services
Permitted Operators is_present, ==, !=, >, <, >=, <= /mask
dhcp.request.chaddr
Integer
is_present, ==, !=, >, <, >=, <= is_present, ==, !=, >, <, >=, <=, contains, regex_match, structured-like is_present, ==, !=, >, <, >=, <=, contains, regex_match, structured-like is_present, ==, != ==, != ==, != ==, !=
Tags, Operators, and Functions
This section lists the tags, logical operators, and functions you can use in a CRE expression. The Rule Engine's operators include standard logical operators, as well as the proprietary Structured-Like operator described in Structured-Like on page 282.
Tags
( ) ,
Operators
Functions
Keyword Values
Some keywords can only accept certain values. The following table lists those keywords and their keyword values.
Keyword: ip.protocol
  ip_proto_udp - IP protocol UDP
  ip_proto_tcp - IP protocol TCP
  ip_proto_icmp - IP protocol ICMP
  ip_proto_other - Other IP protocols
Keyword: http.request.authorization
  http_auth_basic - HTTP basic authentication
  http_auth_negotiate - HTTP negotiate authentication
  http_auth_ntlm - HTTP NTLM authentication
  http_auth_other - HTTP other authentication method
Keyword: http.request.method
  http_method_get - HTTP GET request method
  http_method_head - HTTP HEAD request method
  http_method_post - HTTP POST request method
  http_method_put - HTTP PUT request method
Keyword: http.response.version
  http_ver_1_0 - HTTP version 1.0
  http_ver_1_1 - HTTP version 1.1
  http_ver_other - HTTP version other
Variables
Variables may be used to perform actions. Variables are built using the $ operator and keywords. The following variables are available:
$01-$99 and $R1-$R9 are variables that can be employed only within the structured-like operator. For a full description, refer to Structured-Like on page 282.
$[keyword] is a variable that refers to the content of a field within the TCP/IP/HTTP protocols. It is used according to the protocol context. For a full description, refer to $[keyword] on page 285.
Structured-Like
The Structured-Like operator of the Rules Editor enables the AppBeat DC to rewrite fields in an incoming request or outgoing response by examining the structure of the request/response and changing it according to the rule's action.
You can modify the URL in an HTTP request and the Location field in an HTTP response. The URL or Location field is rewritten based on the original URL (including the Host field of the URL's HTTP header) or Location, and the action specified in the matching rule. The rewrite is performed by removing or copying parts of the URL/Location and pasting them in other areas within the URL/Location. When a URL/Location is matched against a rule with the Structured-Like operator, the AppBeat DC compares the format of the incoming URL/Location with the format specified in the rule. When a match is found between a rule and a URL/Location, the URL/Location is rewritten according to the rule's action. The Structured-Like operator enables site administrators to achieve greater control of the HTTP traffic entering or leaving their site. Some typical uses of Structured-Like include:
Hiding Web server names and server configuration information from your users by redirecting an external URL to an internal URL. This improves the security on your site and makes the site's URLs shorter.
Redirecting an old web page to a new web page.
Redirecting specific keyword searches to simplified URLs.
For example:
If the URL and host name are Structured-Like: www.cnn.com/$01/$02/$R
Then change them to: $01.cnn.com/$02/$R
And the incoming URL is:
URL: GET /sports/bball/index.asp?id=12213234
Host name: www.cnn.com
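In this case, $01 matches sports, $02 matches bball, and $R matches the remainder of the path, so the rewritten request would take a form along these lines:
URL: GET /bball/index.asp?id=12213234
Host name: sports.cnn.com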
Rules Using Structured-Like
Creating rules using Structured-Like enables you to control how to rewrite each incoming URL or outgoing Location field. This involves creating a generic format for the input URL or for the output Location. Predefined variables are used in the rules to indicate the generic information that varies with each URL/Location. Structured-Like rules can contain two types of variables:
<$XX>, where XX is 01, 02, 03, and so on - tokens enclosed between slashes or between dots.
<$RX> - the remainder of the URL/Location, from the place of the last matched character until the end of the URL/Location.
The following example rule demonstrates the use of the variable <$XX>, which indicates a string from the URL.
URL From Incoming Request: www.x.com/<$01>friend/<$02>index.htm
URL After Applying Rule: www.x.com/<$02>/<$01>/index.htm
When using this rule, the URL www.x.com/myfriend/homeindex.htm is rewritten to www.x.com/home/my/index.htm. The <$XX> variable cannot contain slashes, periods, question marks, or spaces. This variable can appear only once between a set of dashes, periods, or question marks in the URL/Location. The following example rule demonstrates the use of the variable <$RX>.
URL From Incoming Request: www.x.com/images/<$R0>
URL After Applying Rule: www.x.com/images/jpg/<$R0>
When using this rule, the URL www.x.com/images/hello is rewritten to www.x.com/images/jpg/hello. Using the same rule, the URL www.x.com/images/goodbye is rewritten to www.x.com/images/jpg/goodbye. The <$RX> variable can contain any character, including slashes and periods, except for a space. Since a space indicates the end of the URL/Location, it cannot be used within the variable. You can use the <$RX> variable several times in the CRE expression, as demonstrated in the following example:
URL From Incoming Request: Host name: www.<$R1> AND Path: /sports/$01/$R2
URL After Applying Rule: www.foobar.$01.$R1/$R2
When using this rule, the URL www.cnn.com/sports/bball/index.asp is rewritten to www.foobar.bball.cnn.com/index.asp. The following table displays additional examples of Structured-Like rules, using the <$RX> and <$XX> variables. The Desired row describes the desired output for a certain input. The Rule row displays the input and output rule to be used to receive the desired result.
Table 19: Examples of Structured-Like Rules

Example          URL Input                   URL After Applying Rule
1  Desired       www.x.com/images/<rest>     www.x.com/images/jpg/<rest>
   Rule          www.x.com/images/$R0        www.x.com/images/jpg/$R0
2  Desired       www.x.com/images/<rest>     www.x.com/pictures/<rest>
   Rule          www.x.com/images/$R0        www.x.com/pictures/$R0
3  Desired       www.x.com/images/<rest>     pictures.x.com/<rest>
   Rule          www.x.com/images/$R0        pictures.x.com/$R0
4  Desired       images.x.com/<rest>         www.x.com/pictures/<rest>
   Rule          images.x.com/$R0            www.x.com/pictures/$R0
5  Desired       <uname>.x.com/<rest>        www.x.com/~<uname>/<rest>
   Rule          $1.x.com/$R0                www.x.com/~$1/$R0
6  Desired       www.x.com/<app>/<rest>      <app>.x.com/<rest>
   Rule          www.x.com/$1/$R0            $1.x.com/$R0
7  Desired       www.x.com/<uname>           www.x.com/user.php?uname=<uname>
   Rule          www.x.com/$1                www.x.com/user.php?uname=$1
8  Desired       www.x.com/dir               www.x.com/dir/
   Rule          www.x.com/$1                www.x.com/$1/
$[keyword]
$[keyword] enables you to refer to the contents of specified fields within the TCP/IP/HTTP protocols. For example, $[http.request.uri] specifies the content of the URI field in an HTTP request. $[keyword] can currently be used only when configuring CRE expressions for hashing persistency, Content Control rule actions, and traffic control redirection. The following variations of $[keyword] are available:
Case Sensitivity
Each keyword in the CRE language has its own case-sensitivity attribute. For example, the http.request.uri and http.request.fileext keywords are case sensitive, while http.request.host and http.request.accept_language are case-insensitive. You can specify whether to use a case-sensitive or case-insensitive version of a field's content. $i denotes case-insensitivity, and $c denotes case sensitivity. For example:
$i[http.request.uri] is a case-insensitive version of the URI field. $c[http.request.accept-language] is a case-sensitive version of the accept-language field.
Handling integers as a string
You can specify whether to handle an integer/IP field as a string. $s denotes the string representation of the field. For example:
$s[http.response.code] is the string version of the response code, while $[http.response.code] is the response code as an integer.
B
Squid Configuration File
This appendix displays the contents of the Squid 2.7 configuration file.
# # # # # # # # # # # # # # # # # # # # # # # # # # # WELCOME TO SQUID 2.7.STABLE7 ---------------------------This is the default Squid configuration file. You may wish to look at the Squid home page (http://www.squid-cache.org/) for the FAQ and other documentation. The default Squid config file shows what the defaults for various options happen to be. If you don't need to change the default, you shouldn't uncomment the line. Doing so may cause run-time problems. In some cases "none" refers to no default setting at all, while in other cases it refers to a valid option - the comments for that keyword indicate if this is the case.
Configuration options can be included using the "include" directive. Include takes a list of files to include. Quoting and wildcards is supported. For example, include /path/to/included/file/squid.acl.config Includes can be nested up to a hard-coded depth of 16 levels. This arbitrary restriction is to prevent recursive include references from causing Squid entering an infinite loop whilst trying to load configuration files.
# OPTIONS FOR AUTHENTICATION # ----------------------------------------------------------------------------# # # # # TAG: auth_param This is used to define parameters for the various authentication schemes supported by Squid. format: auth_param scheme parameter [setting]
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
The order in which authentication schemes are presented to the client is dependent on the order the scheme first appears in config file. IE has a bug (it's not RFC 2617 compliant) in that it will use the basic scheme if basic is the first entry presented, even if more secure schemes are presented. For now use the order in the recommended settings section below. If other browsers have difficulties (don't recognize the schemes offered even if you are using basic) either put basic first, or disable the other schemes (by commenting out their program entry). Once an authentication scheme is fully configured, it can only be shutdown by shutting squid down and restarting. Changes can be made on the fly and activated with a reconfigure. I.E. You can change to a different helper, but not unconfigure the helper completely. Please note that while this directive defines how Squid processes authentication it does not automatically activate authentication. To use authentication you must in addition make use of ACLs based on login name in http_access (proxy_auth, proxy_auth_regex or external with %LOGIN used in the format tag). The browser will be challenged for authentication on the first such acl encountered in http_access processing and will also be re-challenged for new login credentials if the request is being denied by a proxy_auth type acl. WARNING: authentication can't be used in a transparently intercepting proxy as the client then thinks it is talking to an origin server and not the proxy. This is a limitation of bending the TCP/IP protocol to transparently intercepting port 80, not a limitation in Squid. === Parameters for the basic scheme follow. === "program" cmdline Specify the command for the external authenticator. Such a program reads a line containing "username password" and replies "OK" or "ERR" in an endless loop. "ERR" responses may optionally be followed by a error description available as %m in the returned error page. By default, the basic authentication scheme is not used unless a program is specified. If you want to use the traditional proxy authentication, jump over to the helpers/basic_auth/NCSA directory and type: % make % make install Then, set this line to something like auth_param basic program //libexec/ncsa_auth //etc/passwd "children" numberofchildren The number of authenticator processes to spawn. If you start too few squid will have to wait for them to process a backlog of credential verifications, slowing it down. When credential verifications are done via a (slow) network you are likely to need lots of authenticator processes. auth_param basic children 5
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
"concurrency" numberofconcurrentrequests The number of concurrent requests/channels the helper supports. Changes the protocol used to include a channel number first on the request/response line, allowing multiple requests to be sent to the same helper in parallell without wating for the response. Must not be set unless it's known the helper supports this. "realm" realmstring Specifies the realm name which is to be reported to the client for the basic proxy authentication scheme (part of the text the user will see when prompted their username and password). auth_param basic realm Squid proxy-caching web server "credentialsttl" timetolive Specifies how long squid assumes an externally validated username:password pair is valid for - in other words how often the helper program is called for that user. Set this low to force revalidation with short lived passwords. Note that setting this high does not impact your susceptibility to replay attacks unless you are using an one-time password system (such as SecureID). If you are using such a system, you will be vulnerable to replay attacks unless you also use the max_user_ip ACL in an http_access rule. auth_param basic credentialsttl 2 hours "casesensitive" on|off Specifies if usernames are case sensitive. Most user databases are case insensitive allowing the same username to be spelled using both lower and upper case letters, but some are case sensitive. This makes a big difference for user_max_ip ACL processing and similar. auth_param basic casesensitive off "blankpassword" on|off Specifies if blank passwords should be supported. Defaults to off as there is multiple authentication backends which handles blank passwords as "guest" access. === Parameters for the digest scheme follow === "program" cmdline Specify the command for the external authenticator. Such a program reads a line containing "username":"realm" and replies with the appropriate H(A1) value hex encoded or ERR if the user (or his H(A1) hash) does not exists. See RFC 2616 for the definition of H(A1). "ERR" responses may optionally be followed by a error description available as %m in the returned error page. By default, the digest authentication scheme is not used unless a program is specified. If you want to use a digest authenticator, jump over to the helpers/digest_auth/ directory and choose the authenticator to use. It it's directory type % make % make install Then, set this line to something like
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
auth_param digest program //libexec/digest_auth_pw //etc/digpass "children" numberofchildren The number of authenticator processes to spawn. If you start too few squid will have to wait for them to process a backlog of credential verifications, slowing it down. When credential verifications are done via a (slow) network you are likely to need lots of authenticator processes. auth_param digest children 5 "concurrency" numberofconcurrentrequests The number of concurrent requests/channels the helper supports. Changes the protocol used to include a channel number first on the request/response line, allowing multiple requests to be sent to the same helper in parallell without wating for the response. Must not be set unless it's known the helper supports this. "realm" realmstring Specifies the realm name which is to be reported to the client for the digest proxy authentication scheme (part of the text the user will see when prompted their username and password). auth_param digest realm Squid proxy-caching web server "nonce_garbage_interval" timeinterval Specifies the interval that nonces that have been issued to clients are checked for validity. auth_param digest nonce_garbage_interval 5 minutes "nonce_max_duration" timeinterval Specifies the maximum length of time a given nonce will be valid for. auth_param digest nonce_max_duration 30 minutes "nonce_max_count" number Specifies the maximum number of times a given nonce can be used. auth_param digest nonce_max_count 50 "nonce_strictness" on|off Determines if squid requires strict increment-by-1 behavior for nonce counts, or just incrementing (off - for use when useragents generate nonce counts that occasionally miss 1 (ie, 1,2,4,6)). auth_param digest nonce_strictness off "check_nonce_count" on|off This directive if set to off can disable the nonce count check completely to work around buggy digest qop implementations in certain mainstream browser versions. Default on to check the nonce count to protect from authentication replay attacks. auth_param digest check_nonce_count on "post_workaround" on|off This is a workaround to certain buggy browsers who sends an incorrect request digest in POST requests when reusing the same nonce as acquired earlier in response to a GET request. auth_param digest post_workaround off === NTLM scheme options follow === "program" cmdline
# Specify the command for the external NTLM authenticator. Such a # program participates in the NTLMSSP exchanges between Squid and the # client and reads commands according to the Squid NTLMSSP helper # protocol. See helpers/ntlm_auth/ for details. Recommended ntlm # authenticator is ntlm_auth from Samba-3.X, but a number of other # ntlm authenticators is available. # # By default, the ntlm authentication scheme is not used unless a # program is specified. # # auth_param ntlm program /path/to/samba/bin/ntlm_auth --helperprotocol=squid-2.5-ntlmssp # # "children" numberofchildren # The number of authenticator processes to spawn. If you start too few # squid will have to wait for them to process a backlog of credential # verifications, slowing it down. When credential verifications are # done via a (slow) network you are likely to need lots of # authenticator processes. # auth_param ntlm children 5 # # "keep_alive" on|off # This option enables the use of keep-alive on the initial # authentication request. It has been reported some versions of MSIE # have problems if this is enabled, but performance will be increased # if enabled. # # auth_param ntlm keep_alive on # # === Negotiate scheme options follow === # # "program" cmdline # Specify the command for the external Negotiate authenticator. Such a # program participates in the SPNEGO exchanges between Squid and the # client and reads commands according to the Squid ntlmssp helper # protocol. See helpers/ntlm_auth/ for details. Recommended SPNEGO # authenticator is ntlm_auth from Samba-4.X. # # By default, the Negotiate authentication scheme is not used unless a # program is specified. # # auth_param negotiate program /path/to/samba/bin/ntlm_auth --helperprotocol=gss-spnego # # "children" numberofchildren # The number of authenticator processes to spawn. If you start too few # squid will have to wait for them to process a backlog of credential # verifications, slowing it down. When credential verifications are # done via a (slow) network you are likely to need lots of # authenticator processes. # auth_param negotiate children 5 # # "keep_alive" on|off # If you experience problems with PUT/POST requests when using the # Negotiate authentication scheme then you can try setting this to # off. This will cause Squid to forcibly close the connection on # the initial requests where the browser asks which schemes are # supported by the proxy.
# # auth_param negotiate keep_alive on # #Recommended minimum configuration per scheme: #auth_param negotiate program <uncomment and complete this line to activate> #auth_param negotiate children 5 #auth_param negotiate keep_alive on #auth_param ntlm program <uncomment and complete this line to activate> #auth_param ntlm children 5 #auth_param ntlm keep_alive on #auth_param digest program <uncomment and complete this line> #auth_param digest children 5 #auth_param digest realm Squid proxy-caching web server #auth_param digest nonce_garbage_interval 5 minutes #auth_param digest nonce_max_duration 30 minutes #auth_param digest nonce_max_count 50 #auth_param basic program <uncomment and complete this line> #auth_param basic children 5 #auth_param basic realm Squid proxy-caching web server #auth_param basic credentialsttl 2 hours #auth_param basic casesensitive off # TAG: authenticate_cache_garbage_interval # The time period between garbage collection across the username cache. # This is a tradeoff between memory utilization (long intervals - say # 2 days) and CPU (short intervals - say 1 minute). Only change if you # have good reason to. # #Default: # authenticate_cache_garbage_interval 1 hour # TAG: authenticate_ttl # The time a user & their credentials stay in the logged in user cache # since their last request. When the garbage interval passes, all user # credentials that have passed their TTL are removed from memory. # #Default: # authenticate_ttl 1 hour # TAG: authenticate_ip_ttl # If you use proxy authentication and the 'max_user_ip' ACL, this # directive controls how long Squid remembers the IP addresses # associated with each user. Use a small value (e.g., 60 seconds) if # your users might change addresses quickly, as is the case with # dialups. You might be safe using a larger value (e.g., 2 hours) in a # corporate LAN environment with relatively static address assignments. # #Default: # authenticate_ip_ttl 0 seconds # TAG: authenticate_ip_shortcircuit_ttl # Cache authentication credentials per client IP address for this # long. Default is 0 seconds (disabled). # # See also authenticate_ip_shortcircuit_access directive. # #Default: # authenticate_ip_shortcircuit_ttl 0 seconds
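#
# As an illustration only (not part of the shipped configuration), a
# minimal Basic authentication setup could combine the auth_param
# directives above with a proxy_auth ACL. The helper path and password
# file below are assumptions and must match the actual installation:
#
#	auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
#	auth_param basic children 5
#	auth_param basic realm Squid proxy-caching web server
#	auth_param basic credentialsttl 2 hours
#	acl authenticated_users proxy_auth REQUIRED
#	http_access allow authenticated_users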
# ACCESS CONTROLS # ----------------------------------------------------------------------------# TAG: external_acl_type # This option defines external acl classes using a helper program to # look up the status # # external_acl_type name [options] FORMAT.. /path/to/helper [helper arguments..] # # Options: # # ttl=n TTL in seconds for cached results (defaults to 3600 # for 1 hour) # negative_ttl=n # TTL for cached negative lookups (default same # as ttl) # children=n number of processes spawn to service external acl # lookups of this type. (default 5). # concurrency=n concurrency level per process. Only used with helpers # capable of processing more than one query at a time. # Note: see compatibility note below # cache=n result cache size, 0 is unbounded (default) # grace= Percentage remaining of TTL where a refresh of a # cached entry should be initiated without needing to # wait for a new reply. (default 0 for no grace period) # protocol=2.5 Compatibility mode for Squid-2.5 external acl helpers # # FORMAT specifications # # %LOGIN Authenticated user login name # %EXT_USER Username from external acl # %IDENT Ident user name # %SRC Client IP # %SRCPORT Client source port # %URI Requested URI # %DST Requested host # %PROTO Requested protocol # %PORT Requested port # %METHOD Request method # %MYADDR Squid interface address # %MYPORT Squid http_port number # %PATH Requested URL-path (including query-string if any) # %USER_CERT SSL User certificate in PEM format # %USER_CERTCHAIN SSL User certificate chain in PEM format # %USER_CERT_xx SSL User certificate subject attribute xx # %USER_CA_xx SSL User certificate issuer attribute xx # %{Header} HTTP request header "Header" # %{Hdr:member} HTTP request header "Hdr" list member "member" # %{Hdr:;member} # HTTP request header list member using ; as # list separator. ; can be any non-alphanumeric # character. # %ACL The ACL name # %DATA The ACL arguments. If not used then any arguments # is automatically added at the end
#
#	In addition to the above, any string specified in the referencing
#	acl will also be included in the helper request line, after the
#	specified formats (see the "acl external" directive)
#
#	The helper receives lines per the above format specification,
#	and returns lines starting with OK or ERR indicating the validity
#	of the request and optionally followed by additional keywords with
#	more details.
#
#	General result syntax:
#
#	  OK/ERR keyword=value ...
#
#	Defined keywords:
#
#	  user=		The users name (login also understood)
#	  password=	The users password (for PROXYPASS login= cache_peer)
#	  message=	Error message or similar used as %o in error messages
#			(error also understood)
#	  log=		String to be logged in access.log. Available as
#			%ea in logformat specifications
#
#	If protocol=3.0 (the default) then URL escaping is used to protect
#	each value in both requests and responses.
#
#	If using protocol=2.5 then all values need to be enclosed in quotes
#	if they may contain whitespace, or the whitespace escaped using \.
#	And quotes or \ characters within the keyword value must be
#	\ escaped.
#
#	When using the concurrency= option the protocol is changed by
#	introducing a query channel tag in front of the request/response.
#	The query channel tag is a number between 0 and concurrency-1.
#
#	Compatibility Note: The children= option was named concurrency= in
#	Squid-2.5.STABLE3 and earlier, and was accepted as an alias for the
#	duration of the Squid-2.5 releases to keep compatibility. However,
#	the meaning of the concurrency= option has changed in Squid-2.6 to
#	match that of Squid-3 and the old syntax no longer works.
#
#Default:
# none

# TAG: acl
#	Defining an Access List
#
#	Every access list definition must begin with an aclname and
#	acltype, followed by either type-specific arguments or a quoted
#	filename that they are read from.
#
#	   acl aclname acltype argument ...
#	   acl aclname acltype "file" ...
#
#	when using "file", the file should contain one item per line.
#
#	By default, regular expressions are CASE-SENSITIVE. To make
#	them case-insensitive, use the -i option.
# acl aclname src ip-address/netmask ... (clients IP address) # acl aclname src addr1-addr2/netmask ... (range of addresses) # acl aclname dst ip-address/netmask ... (URL host's IP address) # acl aclname myip ip-address/netmask ... (local socket IP address) # # acl aclname arp mac-address ... (xx:xx:xx:xx:xx:xx notation) # # The arp ACL requires the special configure option --enable-arp-acl. # # Furthermore, the arp ACL code is not portable to all operating systems. # # It works on Linux, Solaris, FreeBSD and some other *BSD variants. # # # # NOTE: Squid can only determine the MAC address for clients that are on # # the same subnet. If the client is on a different subnet, then Squid cannot # # find out its MAC address. # # acl aclname srcdomain .foo.com ... # reverse lookup, client IP # acl aclname dstdomain .foo.com ... # Destination server from URL # acl aclname srcdom_regex [-i] xxx ... # regex matching client name # acl aclname dstdom_regex [-i] xxx ... # regex matching server # # For dstdomain and dstdom_regex a reverse lookup is tried if a IP # # based URL is used and no match is found. The name "none" is used # # if the reverse lookup fails. # # acl aclname time [day-abbrevs] [h1:m1-h2:m2] # # day-abbrevs: # # S - Sunday # # M - Monday # # T - Tuesday # # W - Wednesday # # H - Thursday # # F - Friday # # A - Saturday # # h1:m1 must be less than h2:m2 # acl aclname url_regex [-i] ^http:// ... # regex matching on whole URL # acl aclname urlpath_regex [-i] \.gif$ ... # regex matching on URL path # acl aclname urllogin [-i] [^a-zA-Z0-9] ... # regex matching on URL login field # acl aclname port 80 70 21 ... # acl aclname port 0-1024 ... # ranges allowed # acl aclname myport 3128 ... # (local socket TCP port) # acl aclname myportname 3128 ... # http(s)_port name # acl aclname proto HTTP FTP ... # acl aclname method GET POST ... # acl aclname browser [-i] regexp ... # # pattern match on User-Agent header (see also req_header below) # acl aclname referer_regex [-i] regexp ... # # pattern match on Referer header # # Referer is highly unreliable, so use with care # acl aclname ident username ... # acl aclname ident_regex [-i] pattern ... # # string match on ident output. # # use REQUIRED to accept any non-null ident. # acl aclname src_as number ... # acl aclname dst_as number ... # # Except for access control, AS numbers can be used for # # routing of requests to specific caches. Here's an
# #       example for routing all requests for AS#1241 and only
# #       those to mycache.mydomain.net:
# #
# #       acl asexample dst_as 1241
# #       cache_peer_access mycache.mydomain.net allow asexample
# #       cache_peer_access mycache.mydomain.net deny all
#
# acl aclname proxy_auth [-i] username ...
# acl aclname proxy_auth_regex [-i] pattern ...
#   # list of valid usernames
#   # use REQUIRED to accept any valid username.
#   #
#   # NOTE: when a Proxy-Authentication header is sent but it is not
#   # needed during ACL checking the username is NOT logged
#   # in access.log.
#   #
#   # NOTE: proxy_auth requires an EXTERNAL authentication program
#   # to check username/password combinations (see auth_param directive).
#   #
#   # NOTE: proxy_auth can't be used in a transparent proxy as
#   # the browser needs to be configured for using a proxy in order
#   # to respond to proxy authentication.
# acl aclname snmp_community string ...
#   # A community string to limit access to your SNMP Agent
#   # Example:
#   #
#   #	acl snmppublic snmp_community public
#
# acl aclname maxconn number
#   # This will be matched when the client's IP address has
#   # more than <number> HTTP connections established.
#
# acl aclname max_user_ip [-s] number
#   # This will be matched when the user attempts to log in from more
#   # than <number> different ip addresses. The authenticate_ip_ttl
#   # parameter controls the timeout on the ip entries.
#   # If -s is specified the limit is strict, denying browsing
#   # from any further IP addresses until the ttl has expired. Without
#   # -s Squid will just annoy the user by "randomly" denying requests.
#   # (the counter is reset each time the limit is reached and a
#   # request is denied)
#   # NOTE: in acceleration mode or where there is a mesh of child
#   # proxies, clients may appear to come from multiple addresses if
#   # they are going through proxy farms, so a limit of 1 may cause
#   # user problems.
#
# acl aclname req_mime_type mime-type ...
#   # regex match against the mime type of the request generated
#   # by the client. Can be used to detect file upload or some
#   # types of HTTP tunneling requests.
#   # NOTE: This does NOT match the reply. You cannot use this
#   # to match the returned file type.
#
# acl aclname req_header header-name [-i] any\.regex\.here
#   # regex match against any of the known request headers. May be
#   # thought of as a superset of "browser", "referer" and "mime-type"
#   # ACLs.
# acl aclname rep_mime_type mime-type ...
#   # regex match against the mime type of the reply received by
#   # squid. Can be used to detect file download or some
#   # types of HTTP tunneling requests.
#   # NOTE: This has no effect in http_access rules. It only has
#   # effect in rules that affect the reply data stream such as
#   # http_reply_access.
#
# acl aclname rep_header header-name [-i] any\.regex\.here
#   # regex match against any of the known reply headers. May be
#   # thought of as a superset of "browser", "referer" and "mime-type"
#   # ACLs.
#   #
#   # Example:
#   #
#   # acl many_spaces rep_header Content-Disposition -i [[:space:]]{3,}
#
# acl aclname external class_name [arguments...]
#   # external ACL lookup via a helper class defined by the
#   # external_acl_type directive.
#
# acl aclname urlgroup group1 ...
#   # match against the urlgroup as indicated by redirectors
#
# acl aclname user_cert attribute values...
#   # match against attributes in a user SSL certificate
#   # attribute is one of DN/C/O/CN/L/ST
#
# acl aclname ca_cert attribute values...
#   # match against attributes of a user's issuing CA SSL certificate
#   # attribute is one of DN/C/O/CN/L/ST
#
# acl aclname ext_user username ...
# acl aclname ext_user_regex [-i] pattern ...
#   # string match on username returned by external acl helper
#   # use REQUIRED to accept any non-null user name.
#
#Examples:
#acl macaddress arp 09:00:2b:23:45:67
#acl myexample dst_as 1241
#acl password proxy_auth REQUIRED
#acl fileupload req_mime_type -i ^multipart/form-data$
#acl javascript rep_mime_type -i ^application/x-javascript$
#
#Recommended minimum configuration:
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.0/8
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8	# RFC1918 possible internal network
acl localnet src 172.16.0.0/12	# RFC1918 possible internal network
acl localnet src 192.168.0.0/16	# RFC1918 possible internal network
#
acl SSL_ports port 443
acl Safe_ports port 80		# http
acl Safe_ports port 21		# ftp
acl Safe_ports port 443		# https
acl Safe_ports port 70		# gopher
acl Safe_ports port 210		# wais
acl Safe_ports port 1025-65535	# unregistered ports
acl Safe_ports port 280		# http-mgmt
acl Safe_ports port 488		# gss-http
acl Safe_ports port 591		# filemaker
acl Safe_ports port 777		# multiling http
acl CONNECT method CONNECT
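#
# Purely as illustration (these lines are not part of the shipped
# configuration), further ACLs of the types documented above could be
# defined as follows; the helper path, group name and time range are
# examples only:
#
#	acl workhours time MTWHF 09:00-17:00
#	acl heavy_users maxconn 20
#	external_acl_type ldap_group ttl=300 children=5 %LOGIN /path/to/group_helper
#	acl staff external ldap_group Staff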
# TAG: http_access
#	Allowing or Denying access based on defined access lists
#
#	Access to the HTTP port:
#	http_access allow|deny [!]aclname ...
#
#	NOTE on default values:
#
#	If there are no "access" lines present, the default is to deny
#	the request.
#
#	If none of the "access" lines cause a match, the default is the
#	opposite of the last line in the list. If the last line was
#	deny, the default is allow. Conversely, if the last line
#	is allow, the default will be deny. For these reasons, it is a
#	good idea to have a "deny all" or "allow all" entry at the end
#	of your access lists to avoid potential confusion.
#
#Default:
# http_access deny all
#
#Recommended minimum configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
#
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
# And finally deny all other access to this proxy
#http_access deny all
# TAG: http_access2
#	Allowing or Denying access based on defined access lists
#
#	Identical to http_access, but runs after redirectors. If not set
#	then only http_access is used.
#
#Default:
# none

# TAG: http_reply_access
#	Allow replies to client requests. This is complementary to
#	http_access.
#
#	http_reply_access allow|deny [!] aclname ...
#
#	NOTE: if there are no access lines present, the default is to allow
#	all replies
#
#	If none of the access lines cause a match the opposite of the
#	last line will apply. Thus it is good practice to end the rules
#	with an "allow all" or "deny all" entry.
#
#Default:
# http_reply_access allow all

# TAG: icp_access
#	Allowing or Denying access to the ICP port based on defined
#	access lists
#
#	icp_access  allow|deny [!]aclname ...
#
#	See http_access for details
#
#Default:
# icp_access deny all
#
#Allow ICP queries from local networks only
icp_access allow localnet
icp_access deny all

# TAG: htcp_access
# Note: This option is only available if Squid is rebuilt with the
#       --enable-htcp option
#
#	Allowing or Denying access to the HTCP port based on defined
#	access lists
#
#	htcp_access  allow|deny [!]aclname ...
#
#	See http_access for details
#
#	NOTE: The default if no htcp_access lines are present is to
#	deny all traffic. This default may cause problems with peers
#	using the htcp or htcp-oldsquid options.
#
#Default:
# htcp_access deny all
#
#Allow HTCP queries from local networks only
# htcp_access allow localnet # htcp_access deny all # TAG: htcp_clr_access # Note: This option is only available if Squid is rebuilt with the # --enable-htcp option # # Allowing or Denying access to purge content using HTCP based # on defined access lists # # htcp_clr_access allow|deny [!]aclname ... # # See http_access for details # ##Allow HTCP CLR requests from trusted peers #acl htcp_clr_peer src 172.16.1.2 #htcp_clr_access allow htcp_clr_peer # #Default: # htcp_clr_access deny all # TAG: miss_access # Use to force your neighbors to use you as a sibling instead of # a parent. For example: # # acl localclients src 172.16.0.0/16 # miss_access allow localclients # miss_access deny !localclients # # This means only your local clients are allowed to fetch # MISSES and all other clients can only fetch HITS. # # By default, allow all clients who passed the http_access rules # to fetch MISSES from us. # #Default setting: # miss_access allow all # TAG: ident_lookup_access # Note: This option is only available if Squid is rebuilt with the # --enable-ident-lookups option # # A list of ACL elements which, if matched, cause an ident # (RFC931) lookup to be performed for this request. For # example, you might choose to always perform ident lookups # for your main multi-user Unix boxes, but not for your Macs # and PCs. By default, ident lookups are not performed for # any requests. # # To enable ident lookups for specific client addresses, you # can follow this example: # # acl ident_aware_hosts src 198.168.1.0/255.255.255.0 # ident_lookup_access allow ident_aware_hosts # ident_lookup_access deny all # # Only src type ACL checks are fully supported. A src_domain # ACL might work at times, but it will not always provide
#	the correct result.
#
#Default:
# ident_lookup_access deny all

# TAG: reply_body_max_size	bytes allow|deny acl acl...
#	This option specifies the maximum size of a reply body in bytes.
#	It can be used to prevent users from downloading very large files,
#	such as MP3's and movies. When the reply headers are received,
#	the reply_body_max_size lines are processed, and the first line
#	with a result of "allow" is used as the maximum body size for this
#	reply.
#	This size is checked twice. First when we get the reply headers,
#	we check the content-length value. If the content length value
#	exists and is larger than the allowed size, the request is denied
#	and the user receives an error message that says "the request or
#	reply is too large." If there is no content-length, and the reply
#	size exceeds this limit, the client's connection is just closed
#	and they will receive a partial reply.
#
#	WARNING: downstream caches probably cannot detect a partial reply
#	if there is no content-length header, so they will cache
#	partial responses and give them out as hits. You should NOT
#	use this option if you have downstream caches.
#
#	If you set this parameter to zero (the default), there will be
#	no limit imposed.
#
#Default:
# reply_body_max_size 0 allow all

# TAG: authenticate_ip_shortcircuit_access
#	Access list determining when short-circuiting the authentication
#	process based on source IP cached credentials is acceptable. Use
#	this to deny using the IP auth cache on requests from child proxies
#	or other source IPs having multiple users.
#
#	See also the authenticate_ip_shortcircuit_ttl directive.
#
#Default:
# none

# OPTIONS FOR X-Forwarded-For
# -----------------------------------------------------------------------------

# TAG: follow_x_forwarded_for
#	Allowing or Denying the X-Forwarded-For header to be followed to
#	find the original source of a request.
#
#	Requests may pass through a chain of several other proxies
#	before reaching us. The X-Forwarded-For header will contain a
#	comma-separated list of the IP addresses in the chain, with the
#	rightmost address being the most recent.
#
#	If a request reaches us from a source that is allowed by this
#	configuration item, then we consult the X-Forwarded-For header
#	to see where that host received the request from. If the
#	X-Forwarded-For header contains multiple addresses, and if
# acl_uses_indirect_client is on, then we continue backtracking # until we reach an address for which we are not allowed to # follow the X-Forwarded-For header, or until we reach the first # address in the list. (If acl_uses_indirect_client is off, then # it's impossible to backtrack through more than one level of # X-Forwarded-For addresses.) # # The end result of this process is an IP address that we will # refer to as the indirect client address. This address may # be treated as the client address for access control, delay # pools and logging, depending on the acl_uses_indirect_client, # delay_pool_uses_indirect_client and log_uses_indirect_client # options. # # SECURITY CONSIDERATIONS: # # Any host for which we follow the X-Forwarded-For header # can place incorrect information in the header, and Squid # will use the incorrect information as if it were the # source address of the request. This may enable remote # hosts to bypass any access control restrictions that are # based on the client's source addresses. # # For example: # # acl localhost src 127.0.0.1 # acl my_other_proxy srcdomain .proxy.example.com # follow_x_forwarded_for allow localhost # follow_x_forwarded_for allow my_other_proxy # #Default: # follow_x_forwarded_for deny all # TAG: acl_uses_indirect_client on|off # Controls whether the indirect client address # (see follow_x_forwarded_for) is used instead of the # direct client address in acl matching. # #Default: # acl_uses_indirect_client on # TAG: delay_pool_uses_indirect_client on|off # Controls whether the indirect client address # (see follow_x_forwarded_for) is used instead of the # direct client address in delay pools. # #Default: # delay_pool_uses_indirect_client on # TAG: log_uses_indirect_client on|off # Controls whether the indirect client address # (see follow_x_forwarded_for) is used instead of the # direct client address in the access log. # #Default: # log_uses_indirect_client on
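#
# For example (illustrative only, the address range is an assumption),
# a front-end load balancer that inserts X-Forwarded-For could be
# trusted like this:
#
#	acl front_end_lb src 192.168.10.0/24
#	follow_x_forwarded_for allow front_end_lb
#	follow_x_forwarded_for deny all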
# SSL OPTIONS # ----------------------------------------------------------------------------# TAG: ssl_unclean_shutdown # Note: This option is only available if Squid is rebuilt with the # --enable-ssl option # # Some browsers (especially MSIE) bugs out on SSL shutdown # messages. # #Default: # ssl_unclean_shutdown off # TAG: ssl_engine # Note: This option is only available if Squid is rebuilt with the # --enable-ssl option # # The OpenSSL engine to use. You will need to set this if you # would like to use hardware SSL acceleration for example. # #Default: # none # TAG: sslproxy_client_certificate # Note: This option is only available if Squid is rebuilt with the # --enable-ssl option # # Client SSL Certificate to use when proxying https:// URLs # #Default: # none # TAG: sslproxy_client_key # Note: This option is only available if Squid is rebuilt with the # --enable-ssl option # # Client SSL Key to use when proxying https:// URLs # #Default: # none # TAG: sslproxy_version # Note: This option is only available if Squid is rebuilt with the # --enable-ssl option # # SSL version level to use when proxying https:// URLs # #Default: # sslproxy_version 1 # TAG: sslproxy_options # Note: This option is only available if Squid is rebuilt with the # --enable-ssl option # # SSL engine options to use when proxying https:// URLs # #Default: # none
# TAG: sslproxy_cipher # Note: This option is only available if Squid is rebuilt with the # --enable-ssl option # # SSL cipher list to use when proxying https:// URLs # #Default: # none # TAG: sslproxy_cafile # Note: This option is only available if Squid is rebuilt with the # --enable-ssl option # # file containing CA certificates to use when verifying server # certificates while proxying https:// URLs # #Default: # none # TAG: sslproxy_capath # Note: This option is only available if Squid is rebuilt with the # --enable-ssl option # # directory containing CA certificates to use when verifying # server certificates while proxying https:// URLs # #Default: # none # TAG: sslproxy_flags # Note: This option is only available if Squid is rebuilt with the # --enable-ssl option # # Various flags modifying the use of SSL while proxying https:// URLs: # DONT_VERIFY_PEER Accept certificates even if they fail to # verify. # NO_DEFAULT_CA Don't use the default CA list built in # to OpenSSL. # #Default: # none # TAG: sslpassword_program # Note: This option is only available if Squid is rebuilt with the # --enable-ssl option # # Specify a program used for entering SSL key passphrases # when using encrypted SSL certificate keys. If not specified # keys must either be unencrypted, or Squid started with the -N # option to allow it to query interactively for the passphrase. # #Default: # none # NETWORK OPTIONS # -----------------------------------------------------------------------------
# TAG: http_port
#	Usage:	port [options]
#		hostname:port [options]
#		1.2.3.4:port [options]
#
#	The socket addresses where Squid will listen for HTTP client
#	requests. You may specify multiple socket addresses.
#	There are three forms: port alone, hostname with port, and
#	IP address with port. If you specify a hostname or IP
#	address, Squid binds the socket to that specific
#	address. This replaces the old 'tcp_incoming_address' option.
#	Most likely, you do not need to bind to a specific address,
#	so you can use the port number alone.
#
#	If you are running Squid in accelerator mode, you
#	probably want to listen on port 80 also, or instead.
#
#	The -I command line option will override the *first* port
#	specified here.
#
#	You may specify multiple socket addresses on multiple lines.
#
#	Options:
#
#	   transparent	Support for transparent interception of
#			outgoing requests without browser settings.
#
#	   tproxy	Support Linux TPROXY for spoofing outgoing
#			connections using the client IP address.
#
#	   accel	Accelerator mode. See also the related vhost,
#			vport and defaultsite directives.
#
#	   defaultsite=domainname
#			What to use for the Host: header if it is not
#			present in a request. Determines what site (not
#			origin server) accelerators should consider the
#			default. Defaults to visible_hostname:port if not
#			set. May be combined with vport=NN to override the
#			port number. Implies accel.
#
#	   vhost	Accelerator mode using Host header for virtual
#			domain support. Implies accel.
#
#	   vport	Accelerator with IP based virtual host support.
#			Implies accel.
#
#	   vport=NN	As above, but uses the specified port number rather
#			than the http_port number. Implies accel.
#
#	   allow-direct	Allow direct forwarding in accelerator mode.
#			Normally accelerated requests are denied direct
#			forwarding as if never_direct was used.
#
#	   urlgroup=	Default urlgroup to mark requests with (see
#			also acl urlgroup and url_rewrite_program)
# protocol= Protocol to reconstruct accelerated requests with. # Defaults to http. # # no-connection-auth # Prevent forwarding of Microsoft connection oriented # authentication (NTLM, Negotiate and Kerberos) # # act-as-origin # Act is if this Squid is the origin server. # This currently means generate own Date: and # Expires: headers. Implies accel. # # http11 Enables HTTP/1.1 support to clients. The HTTP/1.1 # support is still incomplete with an internal HTTP/1.0 # hop, but should work with most clients. The main # HTTP/1.1 features missing due to this is forwarding # of requests using chunked transfer encoding (results # in 411) and forwarding of 1xx responses (silently # dropped) # # name= Specifies a internal name for the port. Defaults to # the port specification (port or addr:port) # # tcpkeepalive[=idle,interval,timeout] # Enable TCP keepalive probes of idle connections # idle is the initial time before TCP starts probing # the connection, interval how often to probe, and # timeout the time before giving up. # # If you run Squid on a dual-homed machine with an internal # and an external interface we recommend you to specify the # internal address:port in http_port. This way Squid will only be # visible on the internal address. # # Squid normally listens to port 3128 # http_port 3128 # TAG: https_port # Note: This option is only available if Squid is rebuilt with the # --enable-ssl option # # Usage: [ip:]port cert=certificate.pem [key=key.pem] [options...] # # The socket address where Squid will listen for HTTPS client # requests. # # This is really only useful for situations where you are running # squid in accelerator mode and you want to do the SSL work at the # accelerator level. # # You may specify multiple socket addresses on multiple lines, # each with their own SSL certificate and/or options. # # Options: # # In addition to the options specified for http_port the folling # SSL related options is supported: #
#	   cert=	Path to SSL certificate (PEM format).
#
#	   key=		Path to SSL private key file (PEM format).
#			If not specified, the certificate file is
#			assumed to be a combined certificate and key file.
#
#	   version=	The version of SSL/TLS supported
#			    1	automatic (default)
#			    2	SSLv2 only
#			    3	SSLv3 only
#			    4	TLSv1 only
#
#	   cipher=	Colon separated list of supported ciphers.
#
#	   options=	Various SSL engine options. The most important being:
#			    NO_SSLv2  Disallow the use of SSLv2
#			    NO_SSLv3  Disallow the use of SSLv3
#			    NO_TLSv1  Disallow the use of TLSv1
#			    SINGLE_DH_USE Always create a new key when using
#				      temporary/ephemeral DH key exchanges
#			See src/ssl_support.c or the OpenSSL
#			SSL_CTX_set_options documentation for a complete
#			list of options.
#
#	   clientca=	File containing the list of CAs to use when
#			requesting a client certificate.
#
#	   cafile=	File containing additional CA certificates to use
#			when verifying client certificates. If unset
#			clientca will be used.
#
#	   capath=	Directory containing additional CA certificates
#			and CRL lists to use when verifying client
#			certificates.
#
#	   crlfile=	File of additional CRL lists to use when verifying
#			the client certificate, in addition to CRLs stored
#			in the capath. Implies VERIFY_CRL flag below.
#
#	   dhparams=	File containing DH parameters for temporary/ephemeral
#			DH key exchanges.
#
#	   sslflags=	Various flags modifying the use of SSL:
#			    DELAYED_AUTH
#				Don't request client certificates
#				immediately, but wait until acl processing
#				requires a certificate (not yet implemented).
#			    NO_DEFAULT_CA
#				Don't use the default CA lists built in
#				to OpenSSL.
#			    NO_SESSION_REUSE
#				Don't allow for session reuse. Each connection
#				will result in a new SSL session.
#			    VERIFY_CRL
#				Verify CRL lists when accepting client
#				certificates.
#			    VERIFY_CRL_ALL
#				Verify CRL lists for all certificates in the
#				client certificate chain.
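#
# As an illustrative sketch only (the hostname, ports and certificate
# paths are assumptions), an accelerator setup could publish both an
# HTTP and an HTTPS listener:
#
#	http_port 80 accel defaultsite=www.example.com vhost
#	https_port 443 cert=/etc/squid/example.pem key=/etc/squid/example.key accel defaultsite=www.example.com vhost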
# TAG: tcp_outgoing_tos # Allows you to select a TOS/Diffserv value to mark outgoing # connections with, based on the username or source address # making the request. # # tcp_outgoing_tos ds-field [!]aclname ... # # Example where normal_service_net uses the TOS value 0x00 # and good_service_net uses 0x20 # # acl normal_service_net src 10.0.0.0/255.255.255.0 # acl good_service_net src 10.0.1.0/255.255.255.0 # tcp_outgoing_tos 0x00 normal_service_net # tcp_outgoing_tos 0x20 good_service_net # # TOS/DSCP values really only have local significance - so you should # know what you're specifying. For more information, see RFC2474 and # RFC3260. # # The TOS/DSCP byte must be exactly that - a octet value 0 - 255, or # "default" to use whatever default your host has. Note that in # practice often only values 0 - 63 is usable as the two highest bits # have been redefined for use by ECN (RFC3168). # # Processing proceeds in the order specified, and stops at first fully # matching line. # # Note: The use of this directive using client dependent ACLs is # incompatible with the use of server side persistent connections. To # ensure correct results it is best to set server_persisten_connections # to off when using this directive in such configurations. # #Default: # none # # # # # # # # # # # # # # # TAG: tcp_outgoing_address Allows you to map requests to different outgoing IP addresses based on the username or source address of the user making the request. tcp_outgoing_address ipaddr [[!]aclname] ... Example where requests from 10.0.0.0/24 will be forwarded with source address 10.1.0.1, 10.0.2.0/24 forwarded with source address 10.1.0.2 and the rest will be forwarded with source address 10.1.0.3. acl normal_service_net src 10.0.0.0/24 acl good_service_net src 10.0.1.0/24 10.0.2.0/24 tcp_outgoing_address 10.1.0.1 normal_service_net
# tcp_outgoing_address 10.1.0.2 good_service_net # tcp_outgoing_address 10.1.0.3 # # Processing proceeds in the order specified, and stops at first fully # matching line. # # Note: The use of this directive using client dependent ACLs is # incompatible with the use of server side persistent connections. To # ensure correct results it is best to set server_persistent_connections # to off when using this directive in such configurations. # #Default: # none # TAG: zph_mode # This option enables packet level marking of HIT/MISS responses, # either using IP TOS or socket priority. # off Feature disabled # tos Set the IP TOS/Diffserv field # priority Set the socket priority (may get mapped to TOS by OS, # otherwise only usable in local rulesets) # option Embed the mark in an IP option field. See also # zph_option. # # See also tcp_outgoing_tos for details/requirements about TOS usage. # #Default: # zph_mode off # TAG: zph_local # Allows you to select a TOS/Diffserv/Priority value to mark local hits. # Default: 0 (disabled). # #Default: # zph_local 0 # TAG: zph_sibling # Allows you to select a TOS/Diffserv/Priority value to mark sibling hits. # Default: 0 (disabled). # #Default: # zph_sibling 0 # TAG: zph_parent # Allows you to select a TOS/Diffserv/Priority value to mark parent hits. # Default: 0 (disabled). # #Default: # zph_parent 0 # TAG: zph_option # The IP option to use when zph_mode is set to "option". Defaults to # 136 which is officially registered as "SATNET Stream ID". # #Default: # zph_option 136
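#
# For example (the values below are illustrative only), cache hits could
# be marked so that a downstream device can prioritize them:
#
#	zph_mode tos
#	zph_local 48
#	zph_parent 32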
# OPTIONS WHICH AFFECT THE NEIGHBOR SELECTION ALGORITHM
# -----------------------------------------------------------------------------

# TAG: cache_peer
#	To specify other caches in a hierarchy, use the format:
#
#		cache_peer hostname type http-port icp-port [options]
#
#	For example,
#
#	#                                      proxy  icp
#	#   hostname             type          port   port  options
#	#   -------------------- --------      -----  ----- -----------
#	cache_peer parent.foo.net  parent       3128   3130  proxy-only default
#	cache_peer sib1.foo.net    sibling      3128   3130  proxy-only
#	cache_peer sib2.foo.net    sibling      3128   3130  proxy-only
#
#	      type:	 either 'parent', 'sibling', or 'multicast'.
#
#	      proxy-port: The port number where the cache listens for
#			 proxy requests.
#
#	      icp-port:	 Used for querying neighbor caches about objects.
#			 To have a non-ICP neighbor specify '7' for the
#			 ICP port and make sure the neighbor machine has
#			 the UDP echo port enabled in its /etc/inetd.conf
#			 file.
#		NOTE: Also requires icp_port option enabled to send/receive
#		      requests via this method.
#
#	      options: proxy-only
#		       weight=n
#		       ttl=n
#		       no-query
#		       default
#		       round-robin
#		       carp
#		       multicast-responder
#		       multicast-siblings
#		       closest-only
#		       no-digest
#		       no-netdb-exchange
#		       no-delay
#		       login=user:password | PASS | *:password
#		       connect-timeout=nn
#		       digest-url=url
#		       allow-miss
#		       max-conn=n
#		       htcp
#		       htcp-oldsquid
#		       originserver
#		       userhash
#		       sourcehash
#		       name=xxx
#		       monitorurl=url
#		       monitorsize=sizespec
#		       monitorinterval=seconds
#		       monitortimeout=seconds
#		       forceddomain=name
#		       ssl
#		       sslcert=/path/to/ssl/certificate
#		       sslkey=/path/to/ssl/key
#		       sslversion=1|2|3|4
#		       sslcipher=...
#		       ssloptions=...
#		       front-end-https[=on|auto]
#		       connection-auth[=on|off|auto]
#		       idle=n
#		       http11
#
#	      use 'proxy-only' to specify objects fetched from this cache
#	      should not be saved locally.
#
#	      use 'weight=n' to affect the selection of a peer during any
#	      weighted peer-selection mechanisms.
#	      The weight must be an integer; default is 1, larger weights
#	      are favored more.
#	      This option does not affect parent selection if a peering
#	      protocol is not in use.
#
#	      use 'ttl=n' to specify an IP multicast TTL to use when
#	      sending ICP queries to this address.
#	      Only useful when sending to a multicast group.
#	      Because we don't accept ICP replies from random hosts, you
#	      must configure other group members as peers with the
#	      'multicast-responder' option below.
#
#	      use 'no-query' to NOT send ICP queries to this neighbor.
#
#	      use 'default' if this is a parent cache which can be used
#	      as a "last-resort" if a peer cannot be located by any of
#	      the peer-selection mechanisms.
#	      If specified more than once, only the first is used.
#
#	      use 'round-robin' to define a set of parents which should
#	      be used in a round-robin fashion in the absence of any
#	      ICP queries.
#
#	      use 'carp' to define a set of parents which should be used
#	      as a CARP array. The requests will be distributed among the
#	      parents based on the CARP load balancing hash function
#	      based on their weight.
#
#	      'multicast-responder' indicates the named peer is a member
#	      of a multicast group. ICP queries will not be sent directly
#	      to the peer, but ICP replies will be accepted from it.
#
#	      the 'multicast-siblings' option is meant to be used only
#	      for cache peers of type "multicast". It instructs Squid
#	      that ALL members of this multicast group have "sibling"
#	      relationship with it, not "parent". This is an optimization
#	      that avoids useless multicast queries to a multicast group
#	      when the requested object would be fetched only from a
#	      "parent" cache, anyway. It's
#	      useful, e.g., when configuring a pool of redundant Squid
#	      proxies, being members of the same multicast group.
#
#	      'closest-only' indicates that, for ICP_OP_MISS replies,
#	      we'll only forward CLOSEST_PARENT_MISSes and never
#	      FIRST_PARENT_MISSes.
#
#	      use 'no-digest' to NOT request cache digests from this
#	      neighbor.
#
#	      'no-netdb-exchange' disables requesting ICMP RTT database
#	      (NetDB) from the neighbor.
#
#	      use 'no-delay' to prevent access to this neighbor from
#	      influencing the delay pools.
#
#	      use 'login=user:password' if this is a personal/workgroup
#	      proxy and your parent requires proxy authentication.
#	      Note: The string can include URL escapes (i.e. %20 for
#	      spaces). This also means % must be written as %%.
#
#	      use 'login=PASS' if users must authenticate against the
#	      upstream proxy or, in the case of a reverse proxy
#	      configuration, the origin web server. This will pass the
#	      users credentials as they are to the peer.
#	      Note: To combine this with local authentication the Basic
#	      authentication scheme must be used, and both servers must
#	      share the same user database as HTTP only allows for a
#	      single login (one for proxy, one for origin server).
#	      Also be warned this will expose your users proxy password
#	      to the peer. USE WITH CAUTION
#
#	      use 'login=*:password' to pass the username to the upstream
#	      cache, but with a fixed password. This is meant to be used
#	      when the peer is in another administrative domain, but it
#	      is still needed to identify each user.
#	      The star can optionally be followed by some extra
#	      information which is added to the username. This can be
#	      used to identify this proxy to the peer, similar to the
#	      login=username:password option above.
#
#	      use 'connect-timeout=nn' to specify a peer specific connect
#	      timeout (also see the peer_connect_timeout directive)
#
#	      use 'digest-url=url' to tell Squid to fetch the cache
#	      digest (if digests are enabled) for this host from the
#	      specified URL rather than the Squid default location.
#
#	      use 'allow-miss' to disable Squid's use of only-if-cached
#	      when forwarding requests to siblings. This is primarily
#	      useful when icp_hit_stale is used by the sibling. Too
#	      extensive use of this option may result in forwarding
#	      loops, and you should avoid having two-way peerings with
#	      this option. (for example to deny peer usage on requests
#	      from peer by denying cache_peer_access if the
#	      source is a peer)
#
#	      use 'max-conn=n' to limit the number of connections Squid
#	      may open to this peer.
#
#	      use 'htcp' to send HTCP, instead of ICP, queries to the
#	      neighbor. You probably also want to set the "icp port" to
#	      4827 instead of 3130.
#	      You must also allow this Squid htcp_access and http_access
#	      in the peer Squid configuration.
#
#	      use 'htcp-oldsquid' to send HTCP to old Squid versions.
#	      You must also allow this Squid htcp_access and http_access
#	      in the peer Squid configuration.
#
#	      'originserver' causes this parent peer to be contacted as
#	      an origin server. Meant to be used in accelerator setups.
#
#	      use 'userhash' to load-balance amongst a set of parents
#	      based on the client proxy_auth or ident username.
#
#	      use 'sourcehash' to load-balance amongst a set of parents
#	      based on the client source ip.
#
#	      use 'name=xxx' if you have multiple peers on the same host
#	      but different ports. This name can be used to differentiate
#	      the peers in cache_peer_access and similar directives.
#
#	      use 'monitorurl=url' to have Squid periodically request a
#	      given URL from the peer, and only consider the peer as
#	      alive if this monitoring is successful (default none)
#
#	      use 'monitorsize=min[-max]' to limit the size range of
#	      'monitorurl' replies considered valid. Defaults to 0 to
#	      accept any size replies as valid.
#
#	      use 'monitorinterval=seconds' to change the frequency of
#	      how often the peer is monitored with 'monitorurl' (default
#	      300 for a 5 minute interval). If set to 0 then monitoring
#	      is disabled even if a URL is defined.
#
#	      use 'monitortimeout=seconds' to change the timeout of
#	      'monitorurl'. Defaults to 'monitorinterval'.
#
#	      use 'forceddomain=name' to forcibly set the Host header of
#	      requests forwarded to this peer. Useful in accelerator
#	      setups where the server (peer) expects a certain domain
#	      name and using redirectors to feed this domain name is not
#	      feasible.
#
#	      use 'ssl' to indicate connections to this peer should be
#	      SSL/TLS encrypted.
#
#	      use 'sslcert=/path/to/ssl/certificate' to specify a client
#	      SSL certificate to use when connecting to this peer.
#
#	      use 'sslkey=/path/to/ssl/key' to specify the private SSL
#	      key corresponding to sslcert above. If 'sslkey' is not
#	      specified 'sslcert' is assumed to reference a combined
#	      file containing both the certificate and the key.
#
#	      use sslversion=1|2|3|4 to specify the SSL version to use
#	      when connecting to this peer
#		1 = automatic (default)
#		2 = SSL v2 only
#		3 = SSL v3 only
#		4 = TLS v1 only
#
#	      use sslcipher=... to specify the list of valid SSL ciphers
#	      to use when connecting to this peer.
#
#	      use ssloptions=... to specify various SSL engine options:
#		NO_SSLv2  Disallow the use of SSLv2
#		NO_SSLv3  Disallow the use of SSLv3
#		NO_TLSv1  Disallow the use of TLSv1
#	      See src/ssl_support.c or the OpenSSL documentation for a
#	      more complete list.
#
#	      use sslcafile=... to specify a file containing additional
#	      CA certificates to use when verifying the peer certificate.
#
#	      use sslcapath=... to specify a directory containing
#	      additional CA certificates to use when verifying the peer
#	      certificate.
#
#	      use sslcrlfile=... to specify a certificate revocation list
#	      file to use when verifying the peer certificate.
#
#	      use sslflags=... to specify various flags modifying the SSL
#	      implementation:
#		DONT_VERIFY_PEER
#			Accept certificates even if they fail to verify.
#		NO_DEFAULT_CA
#			Don't use the default CA list built in to OpenSSL.
#
#	      use ssldomain= to specify the peer name as advertised in
#	      its certificate. Used for verifying the correctness of the
#	      received peer certificate. If not specified the peer
#	      hostname will be used.
#
#	      use front-end-https to enable the "Front-End-Https: On"
#	      header needed when using Squid as an SSL frontend in front
#	      of Microsoft OWA. See MS KB document Q307347 for details on
#	      this header. If set to auto the header will only be added
#	      if the request is forwarded as a https:// URL.
#
#	      use connection-auth=off to tell Squid that this peer does
#	      not support Microsoft connection oriented authentication,
#	      and any such challenges received from there should be
#	      ignored. Default is auto to automatically determine the
#	      status of the peer.
#	      use idle=n to specify a minimum number of idle connections
#	      that should be kept open to this peer.
#
#	      use http11 to send requests using HTTP/1.1 to this peer.
#	      Note: The HTTP/1.1 support is still incomplete, with an
#	      internal HTTP/1.0 hop. As a result 1xx responses will not
#	      be forwarded.
#
#Default:
# none
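#
# As an illustration only (the address, port and name below are
# assumptions), an accelerator would typically point at its origin
# server with a line such as:
#
#	cache_peer 10.0.0.10 parent 80 0 no-query originserver name=origin_web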
# TAG: cache_peer_domain # Use to limit the domains for which a neighbor cache will be # queried. Usage: # # cache_peer_domain cache-host domain [domain ...] # cache_peer_domain cache-host !domain # # For example, specifying # # cache_peer_domain parent.foo.net .edu # # has the effect such that UDP query packets are sent to # 'bigserver' only when the requested object exists on a # server in the .edu domain. Prefixing the domain name # with '!' means the cache will be queried for objects # NOT in that domain. # # NOTE: * Any number of domains may be given for a cache-host, # either on the same or separate lines. # * When multiple domains are given for a particular # cache-host, the first matched domain is applied. # * Cache hosts with no domain restrictions are queried # for all requests. # * There are no defaults. # * There is also a 'cache_peer_access' tag in the ACL # section. # #Default: # none # TAG: cache_peer_access # Similar to 'cache_peer_domain' but provides more flexibility by # using ACL elements. # # cache_peer_access cache-host allow|deny [!]aclname ... # # The syntax is identical to 'http_access' and the other lists of # ACL elements. See the comments for 'http_access' below, or # the Squid FAQ (http://www.squid-cache.org/FAQ/FAQ-10.html). # #Default: # none # # # TAG: neighbor_type_domain usage: neighbor_type_domain neighbor parent|sibling domain domain ...
#	Modifying the neighbor type for specific domains is now
#	possible. You can treat some domains differently than the
#	default neighbor type specified on the 'cache_peer' line.
#	Normally it should only be necessary to list domains which
#	should be treated differently because the default neighbor type
#	applies for hostnames which do not match domains listed here.
#
#EXAMPLE:
#	cache_peer cache.foo.org parent 3128 3130
#	neighbor_type_domain cache.foo.org sibling .com .net
#	neighbor_type_domain cache.foo.org sibling .au .de
#
#Default:
# none

# TAG: dead_peer_timeout	(seconds)
#	This controls how long Squid waits to declare a peer cache
#	as "dead." If there are no ICP replies received in this
#	amount of time, Squid will declare the peer dead and not
#	expect to receive any further ICP replies. However, it
#	continues to send ICP queries, and will mark the peer as
#	alive upon receipt of the first subsequent ICP reply.
#
#	This timeout also affects when Squid expects to receive ICP
#	replies from peers. If more than 'dead_peer' seconds have
#	passed since the last ICP reply was received, Squid will not
#	expect to receive an ICP reply on the next query. Thus, if
#	your time between requests is greater than this timeout, you
#	will see a lot of requests sent DIRECT to origin servers
#	instead of to your parents.
#
#Default:
# dead_peer_timeout 10 seconds

# TAG: hierarchy_stoplist
#	A list of words which, if found in a URL, cause the object to
#	be handled directly by this cache. In other words, use this
#	to not query neighbor caches for certain objects. You may
#	list this option multiple times.
#	Note: never_direct overrides this option.
#We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# MEMORY CACHE OPTIONS
# -----------------------------------------------------------------------------

# TAG: cache_mem	(bytes)
#	NOTE: THIS PARAMETER DOES NOT SPECIFY THE MAXIMUM PROCESS SIZE.
#	IT ONLY PLACES A LIMIT ON HOW MUCH ADDITIONAL MEMORY SQUID WILL
#	USE AS A MEMORY CACHE OF OBJECTS. SQUID USES MEMORY FOR OTHER
#	THINGS AS WELL. SEE THE SQUID FAQ SECTION 8 FOR DETAILS.
#
#	'cache_mem' specifies the ideal amount of memory to be used
#	for:
#		* In-Transit objects
#		* Hot Objects
#		* Negative-Cached objects
#
#	Data for these objects are stored in 4 KB blocks. This
#	parameter specifies the ideal upper limit on the total size of
#	4 KB blocks allocated. In-Transit objects take the highest
#	priority.
#
#	In-transit objects have priority over the others. When
#	additional space is needed for incoming data, negative-cached
#	and hot objects will be released. In other words, the
#	negative-cached and hot objects will fill up any unused space
#	not needed for in-transit objects.
#
#	If circumstances require, this limit will be exceeded.
#	Specifically, if your incoming request rate requires more than
#	'cache_mem' of memory to hold in-transit objects, Squid will
#	exceed this limit to satisfy the new requests. When the load
#	decreases, blocks will be freed until the high-water mark is
#	reached. Thereafter, blocks will be used to store hot
#	objects.
#
#Default:
# cache_mem 8 MB
cache_mem 16 MB

# TAG: maximum_object_size_in_memory	(bytes)
#	Objects greater than this size will not be kept in the memory
#	cache. This should be set high enough to keep objects accessed
#	frequently in memory to improve performance whilst low enough
#	to keep larger objects from hoarding cache_mem.
#
#Default:
# maximum_object_size_in_memory 8 KB
maximum_object_size_in_memory 8 MB

# TAG: memory_replacement_policy
#	The memory replacement policy parameter determines which
#	objects are purged from memory when memory space is needed.
#
#	See cache_replacement_policy for details.
#
#Default:
# memory_replacement_policy lru

# DISK CACHE OPTIONS
# -----------------------------------------------------------------------------

# TAG: cache_replacement_policy
#	The cache replacement policy parameter determines which
#	objects are evicted (replaced) when disk space is needed.
#
#	    lru       : Squid's original list based LRU policy
#	    heap GDSF : Greedy-Dual Size Frequency
#	    heap LFUDA: Least Frequently Used with Dynamic Aging
#	    heap LRU  : LRU policy implemented using a heap
#	The LRU policies keep recently referenced objects.
#
#	The heap GDSF policy optimizes object hit rate by keeping smaller
#	popular objects in cache so it has a better chance of getting a
#	hit. It achieves a lower byte hit rate than LFUDA though since
#	it evicts larger (possibly popular) objects.
#
#	The heap LFUDA policy keeps popular objects in cache regardless of
#	their size and thus optimizes byte hit rate at the expense of
#	hit rate since one large, popular object will prevent many
#	smaller, slightly less popular objects from being cached.
#
#	Both policies utilize a dynamic aging mechanism that prevents
#	cache pollution that can otherwise occur with frequency-based
#	replacement policies.
#
#	NOTE: if using the LFUDA replacement policy you should increase
#	the value of maximum_object_size above its default of 4096 KB to
#	maximize the potential byte hit rate improvement of LFUDA.
#
#	For more information about the GDSF and LFUDA cache replacement
#	policies see http://www.hpl.hp.com/techreports/1999/HPL-1999-69.html
#	and http://fog.hpl.external.hp.com/techreports/98/HPL-98-173.html.
#
#Default:
# cache_replacement_policy lru

# TAG: cache_dir
#	Usage:
#
#	cache_dir Type Directory-Name Fs-specific-data [options]
#
#	You can specify multiple cache_dir lines to spread the
#	cache among different disk partitions.
#
#	Type specifies the kind of storage system to use. Only "ufs"
#	is built by default. To enable any of the other storage systems
#	see the --enable-storeio configure option.
#
#	'Directory' is a top-level directory where cache swap
#	files will be stored. If you want to use an entire disk
#	for caching, this can be the mount-point directory.
#	The directory must exist and be writable by the Squid
#	process. Squid will NOT create this directory for you.
#	Only using COSS, a raw disk device or a stripe file can
#	be specified, but the configuration of the "cache_swap_log"
#	tag is mandatory.
#
#	The ufs store type:
#
#	"ufs" is the old well-known Squid storage format that has always
#	been there.
#
#	cache_dir ufs Directory-Name Mbytes L1 L2 [options]
#
#	'Mbytes' is the amount of disk space (MB) to use under this
#	directory. The default is 100 MB. Change this to suit your
#	configuration. Do NOT put the size of your disk drive here.
#	Instead, if you want Squid to use the entire disk drive,
#	subtract 20% and use that value.
#
#	'Level-1' is the number of first-level subdirectories which
#	will be created under the 'Directory'. The default is 16.
#
#	'Level-2' is the number of second-level subdirectories which
#	will be created under each first-level directory. The default
#	is 256.
#
#	The aufs store type:
#
#	"aufs" uses the same storage format as "ufs", utilizing
#	POSIX-threads to avoid blocking the main Squid process on
#	disk-I/O. This was formerly known in Squid as async-io.
#
#	cache_dir aufs Directory-Name Mbytes L1 L2 [options]
#
#	see argument descriptions under ufs above
#
#	The diskd store type:
#
#	"diskd" uses the same storage format as "ufs", utilizing a
#	separate process to avoid blocking the main Squid process on
#	disk-I/O.
#
#	cache_dir diskd Directory-Name Mbytes L1 L2 [options] [Q1=n] [Q2=n]
#
#	see argument descriptions under ufs above
#
#	Q1 specifies the number of unacknowledged I/O requests when Squid
#	stops opening new files. If this many messages are in the queues,
#	Squid won't open new files. Default is 64
#
#	Q2 specifies the number of unacknowledged messages when Squid
#	starts blocking. If this many messages are in the queues,
#	Squid blocks until it receives some replies. Default is 72
#
#	When Q1 < Q2 (the default), the cache directory is optimized
#	for lower response time at the expense of a decrease in hit
#	ratio. If Q1 > Q2, the cache directory is optimized for
#	higher hit ratio at the expense of an increase in response
#	time.
#
#	The coss store type:
#
#	block-size=n defines the "block size" for COSS cache_dir's.
#	Squid uses file numbers as block numbers. Since file numbers
#	are limited to 24 bits, the block size determines the maximum
#	size of the COSS partition. The default is 512 bytes, which
#	leads to a maximum cache_dir size of 512<<24, or 8 GB. Note
#	you should not change the COSS block size after Squid
#	has written some objects to the cache_dir.
#
#	overwrite-percent=n defines the percentage of disk that COSS
#	must write to before a given object will be moved to the
#	current stripe. A value of "n" closer to 100 will cause COSS
#	to waste less disk space by having multiple copies of an object
# on disk, but will increase the chances of overwriting a popular # object as COSS overwrites stripes. A value of "n" close to 0 # will cause COSS to keep all current objects in the current COSS # stripe at the expense of the hit rate. The default value of 50 # will allow any given object to be stored on disk a maximum of # 2 times. # # max-stripe-waste=n defines the maximum amount of space that COSS # will waste in a given stripe (in bytes). When COSS writes data # to disk, it will potentially waste up to "max-size" worth of disk # space for each 1MB of data written. If "max-size" is set to a # large value (ie >256k), this could potentially result in large # amounts of wasted disk space. Setting this value to a lower value # (ie 64k or 32k) will result in a COSS disk refusing to cache # larger objects until the COSS stripe has been filled to within # "max-stripe-waste" of the maximum size (1MB). # # membufs=n defines the number of "memory-only" stripes that COSS # will use. When an cache hit is performed on a COSS stripe before # COSS has reached the overwrite-percent value for that object, # COSS will use a series of memory buffers to hold the object in # while the data is sent to the client. This will define the maximum # number of memory-only buffers that COSS will use. The default value # is 10, which will use a maximum of 10MB of memory for buffers. # # maxfullbufs=n defines the maximum number of stripes a COSS partition # will have in memory waiting to be freed (either because the disk is # under load and the stripe is unwritten, or because clients are still # transferring data from objects using the memory). In order to try # and maintain a good hit rate under load, COSS will reserve the last # 2 full stripes for object hits. (ie a COSS cache_dir will reject # new objects when the number of full stripes is 2 less than maxfullbufs) # # The null store type: # # no options are allowed or required # # Common options: # # no-store, no new objects should be stored to this cache_dir # # min-size=n, refers to the min object size this storedir will accept. # It's used to restrict a storedir to only store large objects # (e.g. aufs) while other storedirs are optimized for smaller objects # (e.g. COSS). Defaults to 0. # # max-size=n, refers to the max object size this storedir supports. # It is used to initially choose the storedir to dump the object. # Note: To make optimal use of the max-size limits you should order # the cache_dir lines with the smallest max-size value first and the # ones with no max-size specification last. # # Note that for coss, max-size must be less than COSS_MEMBUF_SZ # (hard coded at 1 MB). # #Default: # cache_dir ufs //var/cache 100 16 256
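#Example (illustrative only -- the directory paths, sizes and byte limits
# below are hypothetical, not defaults of this product, and aufs must be
# enabled at build time with --enable-storeio). Two store directories can
# be split by object size using the min-size/max-size options; the line
# with the smallest max-size is listed first so Squid selects the store
# by object size as described above:
#
# cache_dir aufs /cache/small 4000 16 256 max-size=65536
# cache_dir aufs /cache/large 20000 16 256 min-size=65536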
# TAG: store_dir_select_algorithm # Set this to 'round-robin' as an alternative. # #Default: # store_dir_select_algorithm least-load # TAG: max_open_disk_fds # To avoid having disk as the I/O bottleneck Squid can optionally # bypass the on-disk cache if more than this amount of disk file # descriptors are open. # # A value of 0 indicates no limit. # #Default: # max_open_disk_fds 0 # TAG: minimum_object_size (bytes) # Objects smaller than this size will NOT be saved on disk. The # value is specified in kilobytes, and the default is 0 KB, which # means there is no minimum. # #Default: # minimum_object_size 0 KB # TAG: maximum_object_size (bytes) # Objects larger than this size will NOT be saved on disk. The # value is specified in kilobytes, and the default is 4MB. If # you wish to get a high BYTES hit ratio, you should probably # increase this (one 32 MB object hit counts for 3200 10KB # hits). If you wish to increase speed more than your want to # save bandwidth you should leave this low. # # NOTE: if using the LFUDA replacement policy you should increase # this value to maximize the byte hit rate improvement of LFUDA! # See replacement_policy below for a discussion of this policy. # #Default: # maximum_object_size 4096 KB # TAG: cache_swap_low (percent, 0-100) # TAG: cache_swap_high (percent, 0-100) # # The low- and high-water marks for cache object replacement. # Replacement begins when the swap (disk) usage is above the # low-water mark and attempts to maintain utilization near the # low-water mark. As swap utilization gets close to high-water # mark object eviction becomes more aggressive. If utilization is # close to the low-water mark less replacement is done each time. # # Defaults are 90% and 95%. If you have a large cache, 5% could be # hundreds of MB. If this is the case you may wish to set these # numbers closer together. # #Default: # cache_swap_low 90 # cache_swap_high 95 # TAG: update_headers on|off
# By default Squid updates stored HTTP headers when receiving
# a 304 response. Set this to off if you want to disable this
# for disk I/O performance reasons. Disabling this VIOLATES the
# HTTP standard, and could make you liable for problems which it
# causes.
#
#Default:
# update_headers on
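#Example (illustrative values only, not recommendations for this product):
# a cache tuned for byte hit rate might combine the LFUDA replacement
# policy described above with a larger on-disk object limit and closer
# swap watermarks:
#
# cache_replacement_policy heap LFUDA
# maximum_object_size 32768 KB
# cache_swap_low 93
# cache_swap_high 95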
# LOGFILE OPTIONS
# -----------------------------------------------------------------------------
# TAG: logformat
# Usage:
#
# logformat <name> <format specification>
#
# Defines an access log format.
#
# The <format specification> is a string with embedded % format codes
#
# % format codes all follow the same basic structure where all but
# the formatcode is optional. Output strings are automatically escaped
# as required according to their context and the output format
# modifiers are usually not needed, but can be specified if an
# explicit output format is desired.
#
# % ["|[|'|#] [-] [[0]width] [{argument}] formatcode
#
# "       output in quoted string format
# [       output in squid text log format as used by log_mime_hdrs
# #       output in URL quoted format
# '       output as-is
#
# -       left aligned
# width   field width. If starting with 0 the output is zero padded
# {arg}   argument such as header name etc
#
# Format codes:
#
# >a      Client source IP address
# >A      Client FQDN
# >p      Client source port
# <A      Server IP address or peer name
# la      Local IP address (http_port)
# lp      Local port number (http_port)
# oa      Our outgoing IP address (tcp_outgoing_address)
# ts      Seconds since epoch
# tu      subsecond time (milliseconds)
# tl      Local time. Optional strftime format argument
#         default %d/%b/%Y:%H:%M:%S %z
# tg      GMT time. Optional strftime format argument
#         default %d/%b/%Y:%H:%M:%S %z
# tr      Response time (milliseconds)
# >h      Request header. Optional header name argument
#         on the format header[:[separator]element]
# <h Reply header. Optional header name argument # as for >h # un User name # ul User name from authentication # ui User name from ident # us User name from SSL # ue User name from external acl helper # Hs HTTP status code # Ss Squid request status (TCP_MISS etc) # Sh Squid hierarchy status (DEFAULT_PARENT etc) # mt MIME content type # rm Request method (GET/POST etc) # ru Request URL # rp Request URL-Path excluding hostname # rv Request protocol version # ea Log string returned by external acl # <st Reply size including HTTP headers # >st Request size including HTTP headers # st Request+Reply size including HTTP headers # sn Unique sequence number per log line entry # % a literal % character # # The default formats available (which do not need re-defining) are: # #logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt #logformat squidmime %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt [%>h] [%<h] #logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh #logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh # #Default: # none # TAG: access_log # These files log client request activities. Has a line every HTTP or # ICP request. The format is: # access_log <filepath> [<logformat name> [acl acl ...]] # access_log none [acl acl ...]] # # Will log to the specified file using the specified format (which # must be defined in a logformat directive) those entries which match # ALL the acl's specified (which must be defined in acl clauses). # If no acl is specified, all requests will be logged to this file. # # To disable logging of a request use the filepath "none", in which case # a logformat name should not be specified. # # To log the request via syslog specify a filepath of "syslog": # # access_log syslog[:facility.priority] [format [acl1 [acl2 ....]]] # where facility could be any of: # authpriv, daemon, local0 .. local7 or user. # # And priority could be any of: # err, warning, notice, info, debug. #access_log /var/logs/access.log squid
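#Example (the network, file path and format name are hypothetical; shown
# only to illustrate how logformat, acl and access_log fit together):
# requests from one network go to a dedicated file in a custom format,
# everything else is sent to syslog in the standard squid format.
#
# logformat minimal %ts.%03tu %>a %Ss/%03Hs %rm %ru
# acl lan src 192.168.1.0/24
# access_log /var/logs/lan-access.log minimal lan
# access_log syslog:daemon.info squid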
# TAG: log_access allow|deny acl acl...
# This option allows you to control which requests get logged
# to access.log (see access_log directive). Requests denied for
# logging will also not be accounted for in performance counters.
#
#Default:
# none
# TAG: logfile_daemon
# Specify the path to the logfile-writing daemon. This daemon is
# used to write the access and store logs, if configured.
#
#Default:
# logfile_daemon /bin/logfile-daemon
# TAG: cache_log
# Cache logging file. This is where general information about
# your cache's behavior goes. You can increase the amount of data
# logged to this file with the "debug_options" tag below.
#
#Default:
# cache_log /var/logs/cache.log
# TAG: cache_store_log
# Logs the activities of the storage manager. Shows which
# objects are ejected from the cache, and which objects are
# saved and for how long. To disable, enter "none". There are
# not really utilities to analyze this data, so you can safely
# disable it.
#
#Default:
# cache_store_log /var/logs/store.log
cache_store_log none
# TAG: cache_swap_state
# Location for the cache "swap.state" file. This index file holds
# the metadata of objects saved on disk. It is used to rebuild
# the cache during startup. Normally this file resides in each
# 'cache_dir' directory, but you may specify an alternate
# pathname here. Note you must give a full filename, not just
# a directory. Since this is the index for the whole object
# list you CANNOT periodically rotate it!
#
# If %s can be used in the file name it will be replaced with a
# representation of the cache_dir name where each / is replaced
# with '.'. This is needed to allow adding/removing cache_dir
# lines when cache_swap_log is being used.
#
# If you have more than one 'cache_dir', and %s is not used in
# the name these swap logs will have names such as:
#
# cache_swap_log.00
# cache_swap_log.01
# cache_swap_log.02
#
# The numbered extension (which is added automatically)
# corresponds to the order of the 'cache_dir' lines in this
# configuration file. If you change the order of the 'cache_dir'
# lines in this file, these index files will NOT correspond to # the correct 'cache_dir' entry (unless you manually rename # them). We recommend you do NOT use this option. It is # better to keep these index files in each 'cache_dir' directory. # #Default: # none # TAG: logfile_rotate # Specifies the number of logfile rotations to make when you # type 'squid -k rotate'. The default is 10, which will rotate # with extensions 0 through 9. Setting logfile_rotate to 0 will # disable the file name rotation, but the logfiles are still closed # and re-opened. This will enable you to rename the logfiles # yourself just before sending the rotate signal. # # Note, the 'squid -k rotate' command normally sends a USR1 # signal to the running squid process. In certain situations # (e.g. on Linux with Async I/O), USR1 is used for other # purposes, so -k rotate uses another signal. It is best to get # in the habit of using 'squid -k rotate' instead of 'kill -USR1 # <pid>'. # #Default: # logfile_rotate 10 # TAG: emulate_httpd_log on|off # The Cache can emulate the log file format which many 'httpd' # programs use. To disable/enable this emulation, set # emulate_httpd_log to 'off' or 'on'. The default # is to use the native log format since it includes useful # information Squid-specific log analyzers use. # #Default: # emulate_httpd_log off # TAG: log_ip_on_direct on|off # Log the destination IP address in the hierarchy log tag when going # direct. Earlier Squid versions logged the hostname here. If you # prefer the old way set this to off. # #Default: # log_ip_on_direct on # TAG: mime_table # Pathname to Squid's MIME table. You shouldn't need to change # this, but the default file contains examples and formatting # information if you do. # #Default: mime_table /etc/mime.conf # # # # # # TAG: log_mime_hdrs on|off The Cache can record both the request and the response MIME headers for each HTTP transaction. The headers are encoded safely and will appear as two bracketed fields at the end of the access log (for either the native or httpd-emulated log formats). To enable this logging set log_mime_hdrs to 'on'.
# #Default: # log_mime_hdrs off # TAG: useragent_log # Note: This option is only available if Squid is rebuilt with the # --enable-useragent-log option # # Squid will write the User-Agent field from HTTP requests # to the filename specified here. By default useragent_log # is disabled. # #Default: # none # TAG: referer_log # Note: This option is only available if Squid is rebuilt with the # --enable-referer-log option # # Squid will write the Referer field from HTTP requests to the # filename specified here. By default referer_log is disabled. # Note that "referer" is actually a misspelling of "referrer" # however the misspelt version has been accepted into the HTTP RFCs # and we accept both. # #Default: # none # TAG: pid_filename # A filename to write the process-id to. To disable, enter "none". # #Default: # pid_filename //var/logs/squid.pid
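#Example (paths are placeholders; this simply gathers the log-related
# TAGs above into one sketch of a minimal logging stanza):
#
# cache_log /var/logs/cache.log
# cache_store_log none
# pid_filename /var/logs/squid.pid
# logfile_rotate 10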
# TAG: debug_options # Logging options are set as section,level where each source file # is assigned a unique section. Lower levels result in less # output, Full debugging (level 9) can result in a very large # log file, so be careful. The magic word "ALL" sets debugging # levels for all sections. We recommend normally running with # "ALL,1". # #Default: # debug_options ALL,1 # TAG: log_fqdn on|off # Turn this on if you wish to log fully qualified domain names # in the access.log. To do this Squid does a DNS lookup of all # IP's connecting to it. This can (in some situations) increase # latency, which makes your cache seem slower for interactive # browsing. # #Default: # log_fqdn off # # # TAG: client_netmask A netmask for client addresses in logfiles and cachemgr output. Change this to protect the privacy of your cache clients.
# A netmask of 255.255.255.0 will log all IP's in that range with # the last digit set to '0'. # #Default: # client_netmask 255.255.255.255 # TAG: forward_log # Note: This option is only available if Squid is rebuilt with the # --enable-forward-log option # # Logs the server-side requests. # # This is currently work in progress. # #Default: # none # TAG: strip_query_terms # By default, Squid strips query terms from requested URLs before # logging. This protects your user's privacy. # #Default: # strip_query_terms on # TAG: buffered_logs on|off # cache.log log file is written with stdio functions, and as such # it can be buffered or unbuffered. By default it will be unbuffered. # Buffering it can speed up the writing slightly (though you are # unlikely to need to worry unless you run with tons of debugging # enabled in which case performance will suffer badly anyway..). # #Default: # buffered_logs off # TAG: netdb_filename # A filename where Squid stores it's netdb state between restarts. # To disable, enter "none". # #Default: # netdb_filename //var/logs/netdb.state # OPTIONS FOR FTP GATEWAYING # ----------------------------------------------------------------------------# TAG: ftp_user # If you want the anonymous login password to be more informative # (and enable the use of picky ftp servers), set this to something # reasonable for your domain, like [email protected] # # The reason why this is domainless by default is the # request can be made on the behalf of a user in any domain, # depending on how the cache is used. # Some ftp server also validate the email address is valid # (for example perl.com). # #Default: # ftp_user Squid@
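#Example (the address is hypothetical; any deliverable address in your
# own domain is suitable as the anonymous FTP login password):
#
# ftp_user [email protected]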
# TAG: ftp_list_width # Sets the width of ftp listings. This should be set to fit in # the width of a standard browser. Setting this too small # can cut off long filenames when browsing ftp sites. # #Default: # ftp_list_width 32 # TAG: ftp_passive # If your firewall does not allow Squid to use passive # connections, turn off this option. # #Default: # ftp_passive on # TAG: ftp_sanitycheck # For security and data integrity reasons Squid by default performs # sanity checks of the addresses of FTP data connections ensure the # data connection is to the requested server. If you need to allow # FTP connections to servers using another IP address for the data # connection turn this off. # #Default: # ftp_sanitycheck on # TAG: ftp_telnet_protocol # The FTP protocol is officially defined to use the telnet protocol # as transport channel for the control connection. However, many # implementations are broken and does not respect this aspect of # the FTP protocol. # # If you have trouble accessing files with ASCII code 255 in the # path or similar problems involving this ASCII code you can # try setting this directive to off. If that helps, report to the # operator of the FTP server in question that their FTP server # is broken and does not follow the FTP standard. # #Default: # ftp_telnet_protocol on # OPTIONS FOR EXTERNAL SUPPORT PROGRAMS # ----------------------------------------------------------------------------# TAG: diskd_program # Specify the location of the diskd executable. # Note this is only useful if you have compiled in # diskd as one of the store io modules. # #Default: # diskd_program //libexec/diskd-daemon # TAG: unlinkd_program # Note: This option is only available if Squid is rebuilt with the # --enable-unlinkd option # # Specify the location of the executable for file deletion process.
#
#Default:
# unlinkd_program //libexec/unlinkd
# TAG: pinger_program
# Note: This option is only available if Squid is rebuilt with the
# --enable-icmp option
#
# Specify the location of the executable for the pinger process.
#
#Default:
# pinger_program //libexec/pinger
# OPTIONS FOR URL REWRITING
# -----------------------------------------------------------------------------
# TAG: storeurl_rewrite_program
# Specify the location of the executable for the Store URL rewriter.
# The Store URL rewriter allows URLs to be "normalised"; mapping
# multiple URLs to a single URL representation for cache operations.
#
# For example, if you request an object at:
#
# http://srv1.example.com/image.gif
#
# and a subsequent request for:
#
# http://srv2.example.com/image.gif
#
# then Squid will treat these both as different URLs and cache them
# separately. This is almost the normal case, but an increasing
# number of sites distribute the same content between multiple
# frontend hosts. The Store URL rewriter allows you to rewrite
# these URLs to one URL to use for cache operations, but not
# -fetches-. Fetches are still made from the original site, but
# stored with the store URL rewritten URL as the store key.
#
# For each requested URL the rewriter will receive one line with
# the format
#
# URL <SP> client_ip "/" fqdn <SP> user <SP> method <SP> urlgroup
# [<SP> kvpairs] <NL>
#
# In the future, the rewriter interface will be extended with
# key=value pairs ("kvpairs" shown above). Rewriter programs
# should be prepared to receive and possibly ignore additional
# whitespace-separated tokens on each input line.
#
# And the rewriter may return a rewritten URL. The other components
# of the request line do not need to be returned (ignored if they
# are).
#
# By default, a Store URL rewriter is not used.
#
# Please note - the normal URL rewriter rewrites Squid's
# _destination_ URL - ie, what it fetches. The Store URL rewriter
# rewrites Squid's _store_ URL - ie, what it uses to store and
# retrieve objects.
# #Default: # none # TAG: storeurl_rewrite_children # # #Default: # storeurl_rewrite_children 5 # TAG: storeurl_rewrite_concurrency # # #Default: # storeurl_rewrite_concurrency 0 # TAG: url_rewrite_program # Specify the location of the executable for the URL rewriter. # Since they can perform almost any function there isn't one included. # # For each requested URL rewriter will receive on line with the format # # URL <SP> client_ip "/" fqdn <SP> user <SP> method <SP> urlgroup # [<SP> kvpairs] <NL> # # In the future, the rewriter interface will be extended with # key=value pairs ("kvpairs" shown above). Rewriter programs # should be prepared to receive and possibly ignore additional # whitespace-separated tokens on each input line. # # And the rewriter may return a rewritten URL. The other components of # the request line does not need to be returned (ignored if they are). # # The rewriter can also indicate that a client-side redirect should # be performed to the new URL. This is done by prefixing the returned # URL with "301:" (moved permanently) or 302: (moved temporarily). # # It can also return a "urlgroup" that can subsequently be matched # in cache_peer_access and similar ACL driven rules. An urlgroup is # returned by prefixing the returned URL with "!urlgroup!". # # By default, a URL rewriter is not used. # #Default: # none # TAG: url_rewrite_children # The number of redirector processes to spawn. If you start # too few Squid will have to wait for them to process a backlog of # URLs, slowing it down. If you start too many they will use RAM # and other system resources. # #Default: # url_rewrite_children 5 # # # TAG: url_rewrite_concurrency The number of requests each redirector helper can handle in parallel. Defaults to 0 which indicates the redirector
# is a old-style single threaded redirector. # # When this directive is set to a value >= 1 then the protocol # used to communicate with the helper is modified to include # a request ID in front of the request/response. The request # ID from the request must be echoed back with the response # to that request. # #Default: # url_rewrite_concurrency 0 # TAG: url_rewrite_host_header # By default Squid rewrites any Host: header in redirected # requests. If you are running an accelerator this may # not be a wanted effect of a redirector. # # WARNING: Entries are cached on the result of the URL rewriting # process, so be careful if you have domain-virtual hosts. # #Default: # url_rewrite_host_header on # TAG: url_rewrite_access # If defined, this access list specifies which requests are # sent to the redirector processes. By default all requests # are sent. # #Default: # none # TAG: storeurl_access # # #Default: # none # TAG: redirector_bypass # When this is 'on', a request will not go through the # redirector if all redirectors are busy. If this is 'off' # and the redirector queue grows too large, Squid will exit # with a FATAL error and ask you to increase the number of # redirectors. You should only enable this if the redirectors # are not critical to your caching system. If you use # redirectors for access control, and you enable this option, # users may have access to pages they should not # be allowed to request. # #Default: # redirector_bypass off # # # # # # # # TAG: location_rewrite_program Specify the location of the executable for the Location rewriter, used to rewrite server generated redirects. Usually used in conjunction with a url_rewrite_program For each Location header received the location rewriter will receive one line with the format:
# location URL <SP> requested URL <SP> urlgroup <NL> # # And the rewriter may return a rewritten Location URL or a blank line. # The other components of the request line does not need to be returned # (ignored if they are). # # By default, a Location rewriter is not used. # #Default: # none # TAG: location_rewrite_children # The number of location rewriting processes to spawn. If you start # too few Squid will have to wait for them to process a backlog of # URLs, slowing it down. If you start too many they will use RAM # and other system resources. # #Default: # location_rewrite_children 5 # TAG: location_rewrite_concurrency # The number of requests each Location rewriter helper can handle in # parallel. Defaults to 0 which indicates that the helper # is a old-style singlethreaded helper. # #Default: # location_rewrite_concurrency 0 # TAG: location_rewrite_access # If defined, this access list specifies which requests are # sent to the location rewriting processes. By default all Location # headers are sent. # #Default: # none # OPTIONS FOR TUNING THE CACHE # ----------------------------------------------------------------------------# TAG: cache # A list of ACL elements which, if matched, cause the request to # not be satisfied from the cache and the reply to not be cached. # In other words, use this to force certain objects to never be cached. # # You must use the word 'DENY' to indicate the ACL names which should # NOT be cached. # # Default is to allow all to be cached. # #Default: # none # TAG: max_stale time-units # This option puts an upper limit on how stale content Squid # will serve from the cache if cache validation fails. # #Default:
# max_stale 1 week
# TAG: refresh_pattern
# usage: refresh_pattern [-i] regex min percent max [options]
#
# By default, regular expressions are CASE-SENSITIVE. To make
# them case-insensitive, use the -i option.
#
# 'Min' is the time (in minutes) an object without an explicit
# expiry time should be considered fresh. The recommended value
# is 0, any higher values may cause dynamic applications to be
# erroneously cached unless the application designer has taken
# the appropriate actions.
#
# 'Percent' is a percentage of the objects age (time since last
# modification age) an object without explicit expiry time will
# be considered fresh.
#
# 'Max' is an upper limit on how long objects without an explicit
# expiry time will be considered fresh.
#
# options: override-expire
#          override-lastmod
#          reload-into-ims
#          ignore-reload
#          ignore-no-cache
#          ignore-private
#          ignore-auth
#          stale-while-revalidate=NN
#          ignore-stale-while-revalidate
#          max-stale=NN
#          negative-ttl=NN
#
# override-expire enforces min age even if the server sent an
# explicit expiry time (e.g., with the Expires: header or
# Cache-Control: max-age). Doing this VIOLATES the HTTP standard.
# Enabling this feature could make you liable for problems which
# it causes.
#
# Note: this does not enforce staleness - it only extends
# freshness / min. If the server returns a Expires time which
# is longer than your max time, Squid will still consider
# the object fresh for that period of time.
#
# override-lastmod enforces min age even on objects
# that were modified recently.
#
# reload-into-ims changes client no-cache or ``reload''
# to If-Modified-Since requests. Doing this VIOLATES the
# HTTP standard. Enabling this feature could make you
# liable for problems which it causes.
#
# ignore-reload ignores a client no-cache or ``reload''
# header. Doing this VIOLATES the HTTP standard. Enabling
# this feature could make you liable for problems which
# it causes.
#
# ignore-no-cache ignores any ``Pragma: no-cache'' and
# ``Cache-control: no-cache'' headers received from a server. # The HTTP RFC never allows the use of this (Pragma) header # from a server, only a client, though plenty of servers # send it anyway. # # ignore-private ignores any ``Cache-control: private'' # headers received from a server. Doing this VIOLATES # the HTTP standard. Enabling this feature could make you # liable for problems which it causes. # # ignore-auth caches responses to requests with authorization, # as if the originserver had sent ``Cache-control: public'' # in the response header. Doing this VIOLATES the HTTP standard. # Enabling this feature could make you liable for problems which # it causes. # # stale-while-revalidate=NN makes Squid perform an asyncronous # cache validation if the object isn't more stale than NN. # Doing this VIOLATES the HTTP standard. Enabling this # feature could make you liable for problems which it # causes. # # ignore-stale-while-revalidate makes Squid ignore any 'Cache-Control: # stale-while-revalidate=NN' headers received from a server. Can be # combined with stale-while-revalidate=NN to override the server provided # value. # # max-stale=NN provided a maximum staleness factor. Squid won't # serve objects more stale than this even if it failed to # validate the object. # # negative-ttl=NN overrides the global negative_ttl parameter # selectively for URLs matching this pattern (in seconds). # # Basically a cached object is: # # FRESH if expires < now, else STALE # STALE if age > max # FRESH if lm-factor < percent, else STALE # FRESH if age < min # else STALE # # The refresh_pattern lines are checked in the order listed here. # The first entry which matches is used. If none of the entries # match the default will be used. # # Note, you must uncomment all the default lines if you want # to change one. The default setting is only active if none is # used. # #Suggested default: refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 # TAG: quick_abort_min (KB)
# TAG: quick_abort_max (KB) # TAG: quick_abort_pct (percent) # The cache by default continues downloading aborted requests # which are almost completed (less than 16 KB remaining). This # may be undesirable on slow (e.g. SLIP) links and/or very busy # caches. Impatient users may tie up file descriptors and # bandwidth by repeatedly requesting and immediately aborting # downloads. # # When the user aborts a request, Squid will check the # quick_abort values to the amount of data transfered until # then. # # If the transfer has less than 'quick_abort_min' KB remaining, # it will finish the retrieval. # # If the transfer has more than 'quick_abort_max' KB remaining, # it will abort the retrieval. # # If more than 'quick_abort_pct' of the transfer has completed, # it will finish the retrieval. # # If you do not want any retrieval to continue after the client # has aborted, set both 'quick_abort_min' and 'quick_abort_max' # to '0 KB'. # # If you want retrievals to always continue if they are being # cached set 'quick_abort_min' to '-1 KB'. # #Default: # quick_abort_min 16 KB # quick_abort_max 16 KB # quick_abort_pct 95 # TAG: read_ahead_gap buffer-size # The amount of data the cache will buffer ahead of what has been # sent to the client when retrieving an object from another server. # #Default: # read_ahead_gap 16 KB # TAG: negative_ttl time-units # Time-to-Live (TTL) for failed requests. Certain types of # failures (such as "connection refused" and "404 Not Found") are # negatively-cached for a configurable amount of time. The # default is 5 minutes. Note that this is different from # negative caching of DNS lookups. # #Default: # negative_ttl 5 minutes negative_ttl 0 seconds # TAG: positive_dns_ttl time-units # Upper limit on how long Squid will cache positive DNS responses. # Default is 6 hours (360 minutes). This directive must be set # larger than negative_dns_ttl. # #Default:
# positive_dns_ttl 6 hours # TAG: negative_dns_ttl time-units # Time-to-Live (TTL) for negative caching of failed DNS lookups. # This also sets the lower cache limit on positive lookups. # Minimum value is 1 second, and it is not recommendable to go # much below 10 seconds. # #Default: # negative_dns_ttl 1 minute # TAG: range_offset_limit (bytes) # Sets a upper limit on how far into the the file a Range request # may be to cause Squid to prefetch the whole file. If beyond this # limit Squid forwards the Range request as it is and the result # is NOT cached. # # This is to stop a far ahead range request (lets say start at 17MB) # from making Squid fetch the whole object up to that point before # sending anything to the client. # # A value of -1 causes Squid to always fetch the object from the # beginning so it may cache the result. (2.0 style) # # A value of 0 causes Squid to never fetch more than the # client requested. (default) # #Default: # range_offset_limit 0 KB # TAG: minimum_expiry_time (seconds) # The minimum caching time according to (Expires - Date) # Headers Squid honors if the object can't be revalidated # defaults to 60 seconds. In reverse proxy enorinments it # might be desirable to honor shorter object lifetimes. It # is most likely better to make your server return a # meaningful Last-Modified header however. # #Default: # minimum_expiry_time 60 seconds # TAG: store_avg_object_size (kbytes) # Average object size, used to estimate number of objects your # cache can hold. The default is 13 KB. # #Default: # store_avg_object_size 13 KB # TAG: store_objects_per_bucket # Target number of objects per bucket in the store hash table. # Lowering this value increases the total number of buckets and # also the storage maintenance rate. The default is 20. # #Default: # store_objects_per_bucket 20 # HTTP OPTIONS
# ----------------------------------------------------------------------------# TAG: request_header_max_size (KB) # This specifies the maximum size for HTTP headers in a request. # Request headers are usually relatively small (about 512 bytes). # Placing a limit on the request header size will catch certain # bugs (for example with persistent connections) and possibly # buffer-overflow or denial-of-service attacks. # #Default: # request_header_max_size 20 KB # TAG: reply_header_max_size (KB) # This specifies the maximum size for HTTP headers in a reply. # Reply headers are usually relatively small (about 512 bytes). # Placing a limit on the reply header size will catch certain # bugs (for example with persistent connections) and possibly # buffer-overflow or denial-of-service attacks. # #Default: # reply_header_max_size 20 KB # TAG: request_body_max_size (KB) # This specifies the maximum size for an HTTP request body. # In other words, the maximum size of a PUT/POST request. # A user who attempts to send a request with a body larger # than this limit receives an "Invalid Request" error message. # If you set this parameter to a zero (the default), there will # be no limit imposed. # #Default: # request_body_max_size 0 KB # TAG: broken_posts # A list of ACL elements which, if matched, causes Squid to send # an extra CRLF pair after the body of a PUT/POST request. # # Some HTTP servers has broken implementations of PUT/POST, # and rely on an extra CRLF pair sent by some WWW clients. # # Quote from RFC2616 section 4.1 on this matter: # # Note: certain buggy HTTP/1.0 client implementations generate an # extra CRLF's after a POST request. To restate what is explicitly # forbidden by the BNF, an HTTP/1.1 client must not preface or follow # a request with an extra CRLF. # #Example: # acl buggy_server url_regex ^http://.... # broken_posts allow buggy_server # #Default: # none # # # # TAG: upgrade_http0.9 This access list controls when HTTP/0.9 responses is upgraded to our current HTTP version. The default is to always upgrade.
# Some applications expect to be able to respond with non-HTTP # responses and clients gets confused if the response is upgraded. # For example SHOUTcast servers used for mp3 streaming. # # To enable some flexibility in detection of such applications # the first line of the response is available in the internal header # X-HTTP09-First-Line for use in the rep_header acl. # # Don't upgrade ShoutCast responses to HTTP acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9] upgrade_http0.9 deny shoutcast # TAG: via on|off # If set (default), Squid will include a Via header in requests and # replies as required by RFC2616. # #Default: # via on # TAG: cache_vary # When 'cache_vary' is set to off, response that have a # Vary header will not be stored in the cache. # #Default: # cache_vary on # TAG: broken_vary_encoding # Many servers have broken support for on-the-fly Content-Encoding, # returning the same ETag on both plain and gzip:ed variants. # Vary replies matching this access list will have the cache split # on the Accept-Encoding header of the request and not trusting the # ETag to be unique. # # Apache mod_gzip and mod_deflate known to be broken so don't trust # Apache to signal ETag correctly on such responses acl apache rep_header Server ^Apache broken_vary_encoding allow apache # TAG: collapsed_forwarding (on|off) # This option enables multiple requests for the same URI to be # processed as one request. Normally disabled to avoid increased # latency on dynamic content, but there can be benefit from enabling # this in accelerator setups where the web servers are the bottleneck # and reliable and returns mostly cacheable information. # #Default: # collapsed_forwarding off # TAG: refresh_stale_hit (time) # This option changes the refresh algorithm to allow concurrent # requests while an object is being refreshed to be processed as # cache hits if the object expired less than X seconds ago. Default # is 0 to disable this feature. This option is mostly interesting # in accelerator setups where a few objects is accessed very # frequently. # #Default: # refresh_stale_hit 0 seconds
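#Example (a hedged sketch for an accelerator in front of a slow origin
# that returns mostly cacheable pages; the 30-second window is arbitrary,
# not a recommended value):
#
# collapsed_forwarding on
# refresh_stale_hit 30 seconds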
# TAG: ie_refresh on|off # Microsoft Internet Explorer up until version 5.5 Service # Pack 1 has an issue with transparent proxies, wherein it # is impossible to force a refresh. Turning this on provides # a partial fix to the problem, by causing all IMS-REFRESH # requests from older IE versions to check the origin server # for fresh content. This reduces hit ratio by some amount # (~10% in my experience), but allows users to actually get # fresh content when they want it. Note because Squid # cannot tell if the user is using 5.5 or 5.5SP1, the behavior # of 5.5 is unchanged from old versions of Squid (i.e. a # forced refresh is impossible). Newer versions of IE will, # hopefully, continue to have the new behavior and will be # handled based on that assumption. This option defaults to # the old Squid behavior, which is better for hit ratios but # worse for clients using IE, if they need to be able to # force fresh content. # #Default: # ie_refresh off # TAG: vary_ignore_expire on|off # Many HTTP servers supporting Vary gives such objects # immediate expiry time with no cache-control header # when requested by a HTTP/1.0 client. This option # enables Squid to ignore such expiry times until # HTTP/1.1 is fully implemented. # WARNING: This may eventually cause some varying # objects not intended for caching to get cached. # #Default: # vary_ignore_expire off # TAG: extension_methods # Squid only knows about standardized HTTP request methods. # You can add up to 20 additional "extension" methods here. # #Default: # none # TAG: request_entities # Squid defaults to deny GET and HEAD requests with request entities, # as the meaning of such requests are undefined in the HTTP standard # even if not explicitly forbidden. # # Set this directive to on if you have clients which insists # on sending request entities in GET or HEAD requests. But be warned # that there is server software (both proxies and web servers) which # can fail to properly process this kind of request which may make you # vulnerable to cache pollution attacks if enabled. # #Default: # request_entities off # # # TAG: header_access Usage: header_access header_name allow|deny [!]aclname ...
# WARNING: Doing this VIOLATES the HTTP standard. Enabling
# this feature could make you liable for problems which it
# causes.
#
# This option replaces the old 'anonymize_headers' and the
# older 'http_anonymizer' option with something that is much
# more configurable. This new method creates a list of ACLs
# for each header, allowing you very fine-tuned header
# mangling.
#
# You can only specify known headers for the header name.
# Other headers are reclassified as 'Other'. You can also
# refer to all the headers with 'All'.
#
# For example, to achieve the same behavior as the old
# 'http_anonymizer standard' option, you should use:
#
# header_access From deny all
# header_access Referer deny all
# header_access Server deny all
# header_access User-Agent deny all
# header_access WWW-Authenticate deny all
# header_access Link deny all
#
# Or, to reproduce the old 'http_anonymizer paranoid' feature
# you should use:
#
# header_access Allow allow all
# header_access Authorization allow all
# header_access WWW-Authenticate allow all
# header_access Proxy-Authorization allow all
# header_access Proxy-Authenticate allow all
# header_access Cache-Control allow all
# header_access Content-Encoding allow all
# header_access Content-Length allow all
# header_access Content-Type allow all
# header_access Date allow all
# header_access Expires allow all
# header_access Host allow all
# header_access If-Modified-Since allow all
# header_access Last-Modified allow all
# header_access Location allow all
# header_access Pragma allow all
# header_access Accept allow all
# header_access Accept-Charset allow all
# header_access Accept-Encoding allow all
# header_access Accept-Language allow all
# header_access Content-Language allow all
# header_access Mime-Version allow all
# header_access Retry-After allow all
# header_access Title allow all
# header_access Connection allow all
# header_access Proxy-Connection allow all
# header_access All deny all
#Default: # none # TAG: header_replace # Usage: header_replace header_name message # Example: header_replace User-Agent Nutscrape/1.0 (CP/M; 8-bit) # # This option allows you to change the contents of headers # denied with header_access above, by replacing them with # some fixed string. This replaces the old fake_user_agent # option. # # By default, headers are removed if denied. # #Default: # none # TAG: relaxed_header_parser on|off|warn # In the default "on" setting Squid accepts certain forms # of non-compliant HTTP messages where it is unambiguous # what the sending application intended even if the message # is not correctly formatted. The messages is then normalized # to the correct form when forwarded by Squid. # # If set to "warn" then a warning will be emitted in cache.log # each time such HTTP error is encountered. # # If set to "off" then such HTTP errors will cause the request # or response to be rejected. # #Default: # relaxed_header_parser on # TAG: server_http11 on|off # This option enables the use ot HTTP/1.1 on outgoing "direct" requests. # See also the http11 cache_peer option. # Note: The HTTP/1.1 support is still incomplete, with an # internal HTTP/1.0 hop. As result 1xx responses will not # be forwarded. # #Default: # server_http11 off # TAG: ignore_expect_100 on|off # This option makes Squid ignore any Expect: 100-continue header present # in the request. # Note: Enabling this is a HTTP protocol violation, but some client may # not handle it well.. # #Default: # ignore_expect_100 off # # # # # # TAG: external_refresh_check This option defines an external helper for determining whether to refresh a stale response. It will be called when Squid receives a request for a cached response that is stale; the helper can either confirm that the response is stale with a STALE response, or extend the freshness of the response (thereby avoiding a refresh
# check) with a FRESH response, along with a freshness=nnn keyword. # # external_refresh_check [options] FORMAT.. /path/to/helper [helper_args] # # If present, helper_args will be passed to the helper on the command # line verbatim. # # Options: # # children=n Number of processes to spawn to service external # refresh checks (default 5). # concurrency=n Concurrency level per process. Only used with # helpers capable of processing more than one query # at a time. # # When using the concurrency option, the protocol is changed by introducing # a query channel tag infront of the request/response. The query channel # tag is a number between 0 and concurrency-1. # # FORMAT specifications: # # %CACHE_URI The URI of the cached response # %RES{Header} HTTP response header value # %AGE The age of the cached response # # The request sent to the helper consists of the data in the format # specification in the order specified. # # The helper receives lines per the above format specification, and # returns lines starting with OK or ERR indicating the validity of # the request and optionally followed by additional keywords with # more details. URL escaping is used to protect each value in both # requests and responses. # # General result syntax: # # FRESH / STALE keyword=value ... # # Defined keywords: # # freshness=nnn The number of seconds to extend the freshness of # the response by. # log=string String to be logged in access.log. Available as # %ef in logformat specifications. # res{Header}=value # Value to update response headers with. If already # present, the supplied value completely replaces # the cached value. # # In the event of a helper-related error (e.g., overload), Squid # will always default to STALE. # #Default: # none # TIMEOUTS # -----------------------------------------------------------------------------
# TAG: forward_timeout time-units # This parameter specifies how long Squid should at most attempt in # finding a forwarding path for the request before giving up. # #Default: # forward_timeout 4 minutes # TAG: connect_timeout time-units # This parameter specifies how long to wait for the TCP connect to # the requested server or peer to complete before Squid should # attempt to find another path where to forward the request. # #Default: # connect_timeout 1 minute # TAG: peer_connect_timeout time-units # This parameter specifies how long to wait for a pending TCP # connection to a peer cache. The default is 30 seconds. You # may also set different timeout values for individual neighbors # with the 'connect-timeout' option on a 'cache_peer' line. # #Default: # peer_connect_timeout 30 seconds # TAG: read_timeout time-units # The read_timeout is applied on server-side connections. After # each successful read(), the timeout will be extended by this # amount. If no data is read again after this amount of time, # the request is aborted and logged with ERR_READ_TIMEOUT. The # default is 15 minutes. # #Default: # read_timeout 15 minutes # TAG: request_timeout # How long to wait for an HTTP request after initial # connection establishment. # #Default: # request_timeout 5 minutes # TAG: persistent_request_timeout # How long to wait for the next HTTP request on a persistent # connection after the previous request completes. # #Default: # persistent_request_timeout 2 minutes # # # # # # # # # TAG: client_lifetime time-units The maximum amount of time a client (browser) is allowed to remain connected to the cache process. This protects the Cache from having a lot of sockets (and hence file descriptors) tied up in a CLOSE_WAIT state from remote clients that go away without properly shutting down (either because of a network failure or because of a poor client implementation). The default is one day, 1440 minutes.
# NOTE: The default value is intended to be much larger than any # client would ever need to be connected to your cache. You # should probably change client_lifetime only as a last resort. # If you seem to have many client connections tying up # filedescriptors, we recommend first tuning the read_timeout, # request_timeout, persistent_request_timeout and quick_abort values. # #Default: # client_lifetime 1 day # TAG: half_closed_clients # Some clients may shutdown the sending side of their TCP # connections, while leaving their receiving sides open. Sometimes, # Squid can not tell the difference between a half-closed and a # fully-closed TCP connection. By default, half-closed client # connections are kept open until a read(2) or write(2) on the # socket returns an error. Change this option to 'off' and Squid # will immediately close client connections when read(2) returns # "no more data to read." # #Default: # half_closed_clients on # TAG: pconn_timeout # Timeout for idle persistent connections to servers and other # proxies. # #Default: # pconn_timeout 1 minute # TAG: ident_timeout # Note: This option is only available if Squid is rebuilt with the # --enable-ident-lookups option # # Maximum time to wait for IDENT lookups to complete. # # If this is too high, and you enabled IDENT lookups from untrusted # users, you might be susceptible to denial-of-service by having # many ident requests going at once. # #Default: # ident_timeout 10 seconds # TAG: shutdown_lifetime time-units # When SIGTERM or SIGHUP is received, the cache is put into # "shutdown pending" mode until all active sockets are closed. # This value is the lifetime to set for all open descriptors # during shutdown mode. Any active clients after this many # seconds will receive a 'timeout' message. # #Default: # shutdown_lifetime 30 seconds # ADMINISTRATIVE PARAMETERS # ----------------------------------------------------------------------------# TAG: cache_mgr
# Email-address of local cache manager who will receive # mail if the cache dies. The default is "webmaster". # #Default: # cache_mgr webmaster # TAG: mail_from # From: email-address for mail sent when the cache dies. # The default is to use 'appname@unique_hostname'. # Default appname value is "squid", can be changed into # src/globals.h before building squid. # #Default: # none # TAG: mail_program # Email program used to send mail if the cache dies. # The default is "mail". The specified program must comply # with the standard Unix mail syntax: # mail-program recipient < mailfile # # Optional command line options can be specified. # #Default: # mail_program mail # TAG: cache_effective_user # If you start Squid as root, it will change its effective/real # UID/GID to the user specified below. The default is to change # to UID to nobody. If you define cache_effective_user, but not # cache_effective_group, Squid sets the GID to the effective # user's default group ID (taken from the password file) and # supplementary group list from the from groups membership of # cache_effective_user. # #Default: # cache_effective_user nobody cache_effective_user squid # TAG: cache_effective_group # If you want Squid to run with a specific GID regardless of # the group memberships of the effective user then set this # to the group (or GID) you want Squid to run as. When set # all other group privileges of the effective user is ignored # and only this GID is effective. If Squid is not started as # root the user starting Squid must be member of the specified # group. # #Default: # none # TAG: httpd_suppress_version_string on|off # Suppress Squid version string info in HTTP headers and HTML error pages. # #Default: # httpd_suppress_version_string off # TAG: visible_hostname
# If you want to present a special hostname in error messages, etc, # define this. Otherwise, the return value of gethostname() # will be used. If you have multiple caches in a cluster and # get errors about IP-forwarding you must set them to have individual # names with this setting. # #Default: # none # TAG: unique_hostname # If you want to have multiple machines with the same # 'visible_hostname' you must give each machine a different # 'unique_hostname' so forwarding loops can be detected. # #Default: # none # TAG: hostname_aliases # A list of other DNS names your cache has. # #Default: # none # TAG: umask # Minimum umask which should be enforced while the proxy # is running, in addition to the umask set at startup. # # Note: Should start with a 0 to indicate the normal octal # representation of umasks # #Default: # umask 027 # OPTIONS FOR THE CACHE REGISTRATION SERVICE # ----------------------------------------------------------------------------# # This section contains parameters for the (optional) cache # announcement service. This service is provided to help # cache administrators locate one another in order to join or # create cache hierarchies. # # An 'announcement' message is sent (via UDP) to the registration # service by Squid. By default, the announcement message is NOT # SENT unless you enable it with 'announce_period' below. # # The announcement message includes your hostname, plus the # following information from this configuration file: # # http_port # icp_port # cache_mgr # # All current information is processed regularly and made # available on the Web at http://www.ircache.net/Cache/Tracker/. # # TAG: announce_period This is how frequently to send cache announcements. The
# default is `0' which disables sending the announcement # messages. # # To enable announcing your cache, just uncomment the line # below. # #Default: # announce_period 0 # #To enable announcing your cache, just uncomment the line below. #announce_period 1 day # TAG: announce_host # TAG: announce_file # TAG: announce_port # announce_host and announce_port set the hostname and port # number where the registration message will be sent. # # Hostname will default to 'tracker.ircache.net' and port will # default default to 3131. If the 'filename' argument is given, # the contents of that file will be included in the announce # message. # #Default: # announce_host tracker.ircache.net # announce_port 3131 # HTTPD-ACCELERATOR OPTIONS # ----------------------------------------------------------------------------# TAG: httpd_accel_no_pmtu_disc on|off # In many setups of transparently intercepting proxies Path-MTU # discovery can not work on traffic towards the clients. This is # the case when the intercepting device does not fully track # connections and fails to forward ICMP must fragment messages # to the cache server. # # If you have such setup and experience that certain clients # sporadically hang or never complete requests set this to on. # #Default: # httpd_accel_no_pmtu_disc off # DELAY POOL PARAMETERS # ----------------------------------------------------------------------------# TAG: delay_pools # Note: This option is only available if Squid is rebuilt with the # --enable-delay-pools option # # This represents the number of delay pools to be used. For example, # if you have one class 2 delay pool and one class 3 delays pool, you # have a total of 2 delay pools. # #Default: # delay_pools 0
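#Example (requires a build with --enable-delay-pools; the network and the
# rates are hypothetical, and the delay_class, delay_access and
# delay_parameters TAGs used here are described below). A single class 2
# pool capping each client in one lab network at roughly 64 kbit/s:
#
# delay_pools 1
# delay_class 1 2
# acl lab src 10.0.0.0/24
# delay_access 1 allow lab
# delay_access 1 deny all
# delay_parameters 1 -1/-1 8000/8000
# delay_initial_bucket_level 50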
# TAG: delay_class # Note: This option is only available if Squid is rebuilt with the # --enable-delay-pools option # # This defines the class of each delay pool. There must be exactly one # delay_class line for each delay pool. For example, to define two # delay pools, one of class 2 and one of class 3, the settings above # and here would be: # #Example: # delay_pools 2 # 2 delay pools # delay_class 1 2 # pool 1 is a class 2 pool # delay_class 2 3 # pool 2 is a class 3 pool # # The delay pool classes are: # # class 1 Everything is limited by a single aggregate # bucket. # # class 2 Everything is limited by a single aggregate # bucket as well as an "individual" bucket chosen # from bits 25 through 32 of the IP address. # # class 3 Everything is limited by a single aggregate # bucket as well as a "network" bucket chosen # from bits 17 through 24 of the IP address and a # "individual" bucket chosen from bits 17 through # 32 of the IP address. # # NOTE: If an IP address is a.b.c.d # -> bits 25 through 32 are "d" # -> bits 17 through 24 are "c" # -> bits 17 through 32 are "c * 256 + d" # #Default: # none # TAG: delay_access # Note: This option is only available if Squid is rebuilt with the # --enable-delay-pools option # # This is used to determine which delay pool a request falls into. # # delay_access is sorted per pool and the matching starts with pool 1, # then pool 2, ..., and finally pool N. The first delay pool where the # request is allowed is selected for the request. If it does not allow # the request to any pool then the request is not delayed (default). # # For example, if you want some_big_clients in delay # pool 1 and lotsa_little_clients in delay pool 2: # #Example: # delay_access 1 allow some_big_clients # delay_access 1 deny all # delay_access 2 allow lotsa_little_clients # delay_access 2 deny all #
#Default: # none # TAG: delay_parameters # Note: This option is only available if Squid is rebuilt with the # --enable-delay-pools option # # This defines the parameters for a delay pool. Each delay pool has # a number of "buckets" associated with it, as explained in the # description of delay_class. For a class 1 delay pool, the syntax is: # #delay_parameters pool aggregate # # For a class 2 delay pool: # #delay_parameters pool aggregate individual # # For a class 3 delay pool: # #delay_parameters pool aggregate network individual # # The variables here are: # # pool a pool number - ie, a number between 1 and the # number specified in delay_pools as used in # delay_class lines. # # aggregate the "delay parameters" for the aggregate bucket # (class 1, 2, 3). # # individual the "delay parameters" for the individual # buckets (class 2, 3). # # network the "delay parameters" for the network buckets # (class 3). # # A pair of delay parameters is written restore/maximum, where restore is # the number of bytes (not bits - modem and network speeds are usually # quoted in bits) per second placed into the bucket, and maximum is the # maximum number of bytes which can be in the bucket at any time. # # For example, if delay pool number 1 is a class 2 delay pool as in the # above example, and is being used to strictly limit each host to 64kbps # (plus overheads), with no overall limit, the line is: # #delay_parameters 1 -1/-1 8000/8000 # # Note that the figure -1 is used to represent "unlimited". # # And, if delay pool number 2 is a class 3 delay pool as in the above # example, and you want to limit it to a total of 256kbps (strict limit) # with each 8-bit network permitted 64kbps (strict limit) and each # individual host permitted 4800bps with a bucket maximum size of 64kb # to permit a decent web page to be downloaded at a decent speed # (if the network is not being limited due to overuse) but slow down # large downloads more significantly: # #delay_parameters 2 32000/32000 8000/8000 600/8000
# # There must be one delay_parameters line for each delay pool. # #Default: # none # TAG: delay_initial_bucket_level (percent, 0-100) # Note: This option is only available if Squid is rebuilt with the # --enable-delay-pools option # # The initial bucket percentage is used to determine how much is put # in each bucket when squid starts, is reconfigured, or first notices # a host accessing it (in class 2 and class 3, individual hosts and # networks only have buckets associated with them once they have been # "seen" by squid). # #Default: # delay_initial_bucket_level 50 # WCCPv1 AND WCCPv2 CONFIGURATION OPTIONS # ----------------------------------------------------------------------------# TAG: wccp_router # Note: This option is only available if Squid is rebuilt with the # --enable-wccp option # # TAG: wccp2_router # Note: This option is only available if Squid is rebuilt with the # --enable-wccpv2 option # # Use this option to define your WCCP ``home'' router for # Squid. # # wccp_router supports a single WCCP(v1) router # # wccp2_router supports multiple WCCPv2 routers # # only one of the two may be used at the same time and defines # which version of WCCP to use. # #Default: # wccp_router 0.0.0.0 # TAG: wccp_version # Note: This option is only available if Squid is rebuilt with the # --enable-wccp option # # This directive is only relevant if you need to set up WCCP(v1) # to some very old and end-of-life Cisco routers. In all other # setups it must be left unset or at the default setting. # It defines an internal version in the WCCP(v1) protocol, # with version 4 being the officially documented protocol. # # According to some users, Cisco IOS 11.2 and earlier only # support WCCP version 3. If you're using that or an earlier # version of IOS, you may need to change this value to 3, otherwise # do not specify this parameter.
# #Default: # wccp_version 4 # TAG: wccp2_rebuild_wait # Note: This option is only available if Squid is rebuilt with the # --enable-wccpv2 option # # If this is enabled Squid will wait for the cache dir rebuild to finish # before sending the first wccp2 HereIAm packet # #Default: # wccp2_rebuild_wait on # TAG: wccp2_forwarding_method # Note: This option is only available if Squid is rebuilt with the # --enable-wccpv2 option # # WCCP2 allows the setting of forwarding methods between the # router/switch and the cache. Valid values are as follows: # # 1 - GRE encapsulation (forward the packet in a GRE/WCCP tunnel) # 2 - L2 redirect (forward the packet using Layer 2/MAC rewriting) # # Currently (as of IOS 12.4) cisco routers only support GRE. # Cisco switches only support the L2 redirect assignment method. # #Default: # wccp2_forwarding_method 1 # TAG: wccp2_return_method # Note: This option is only available if Squid is rebuilt with the # --enable-wccpv2 option # # WCCP2 allows the setting of return methods between the # router/switch and the cache for packets that the cache # decides not to handle. Valid values are as follows: # # 1 - GRE encapsulation (forward the packet in a GRE/WCCP tunnel) # 2 - L2 redirect (forward the packet using Layer 2/MAC rewriting) # # Currently (as of IOS 12.4) cisco routers only support GRE. # Cisco switches only support the L2 redirect assignment. # # If the "ip wccp redirect exclude in" command has been # enabled on the cache interface, then it is still safe for # the proxy server to use a l2 redirect method even if this # option is set to GRE. # #Default: # wccp2_return_method 1 # TAG: wccp2_assignment_method # Note: This option is only available if Squid is rebuilt with the # --enable-wccpv2 option # # WCCP2 allows the setting of methods to assign the WCCP hash # Valid values are as follows:
# # 1 - Hash assignment # 2 - Mask assignment # # As a general rule, cisco routers support the hash assignment method # and cisco switches support the mask assignment method. # #Default: # wccp2_assignment_method 1 # TAG: wccp2_service # Note: This option is only available if Squid is rebuilt with the # --enable-wccpv2 option # # WCCP2 allows for multiple traffic services. There are two # types: "standard" and "dynamic". The standard type defines # one service id - http (id 0). The dynamic service ids can be from # 51 to 255 inclusive. In order to use a dynamic service id # one must define the type of traffic to be redirected; this is done # using the wccp2_service_info option. # # The "standard" type does not require a wccp2_service_info option, # just specifying the service id will suffice. # # MD5 service authentication can be enabled by adding # "password=<password>" to the end of this service declaration. # # Examples: # # wccp2_service standard 0 # for the 'web-cache' standard service # wccp2_service dynamic 80 # a dynamic service type which will be # # fleshed out with subsequent options. # wccp2_service standard 0 password=foo # # #Default: # wccp2_service standard 0 # TAG: wccp2_service_info # Note: This option is only available if Squid is rebuilt with the # --enable-wccpv2 option # # Dynamic WCCPv2 services require further information to define the # traffic you wish to have diverted. # # The format is: # # wccp2_service_info <id> protocol=<protocol> flags=<flag>,<flag>.. # priority=<priority> ports=<port>,<port>.. # # The relevant WCCPv2 flags: # + src_ip_hash, dst_ip_hash # + source_port_hash, dst_port_hash # + src_ip_alt_hash, dst_ip_alt_hash # + src_port_alt_hash, dst_port_alt_hash # + ports_source # # The port list can be one to eight entries.
# # Example: # # wccp2_service_info 80 protocol=tcp flags=src_ip_hash,ports_source # priority=240 ports=80 # # Note: the service id must have been defined by a previous # 'wccp2_service dynamic <id>' entry. # #Default: # none # TAG: wccp2_weight # Note: This option is only available if Squid is rebuilt with the # --enable-wccpv2 option # # Each cache server gets assigned a set of the destination # hash proportional to their weight. # #Default: # wccp2_weight 10000 # TAG: wccp_address # Note: This option is only available if Squid is rebuilt with the # --enable-wccp option # # TAG: wccp2_address # Note: This option is only available if Squid is rebuilt with the # --enable-wccpv2 option # # Use this option if you require WCCP to use a specific # interface address. # # The default behavior is to not bind to any specific address. # #Default: # wccp_address 0.0.0.0 # wccp2_address 0.0.0.0 # PERSISTENT CONNECTION HANDLING # ----------------------------------------------------------------------------# # Also see "pconn_timeout" in the TIMEOUTS section # TAG: client_persistent_connections # TAG: server_persistent_connections # Persistent connection support for clients and servers. By # default, Squid uses persistent connections (when allowed) # with its clients and servers. You can use these options to # disable persistent connections with clients and/or servers. # #Default: # client_persistent_connections on # server_persistent_connections on # # TAG: persistent_connection_after_error With this directive the use of persistent connections after
# HTTP errors can be disabled. Useful if you have clients # who fail to handle errors on persistent connections proper. # #Default: # persistent_connection_after_error off # TAG: detect_broken_pconn # Some servers have been found to incorrectly signal the use # of HTTP/1.0 persistent connections even on replies not # compatible, causing significant delays. This server problem # has mostly been seen on redirects. # # By enabling this directive Squid attempts to detect such # broken replies and automatically assume the reply is finished # after 10 seconds timeout. # #Default: # detect_broken_pconn off # CACHE DIGEST OPTIONS # ----------------------------------------------------------------------------# TAG: digest_generation # Note: This option is only available if Squid is rebuilt with the # --enable-cache-digests option # # This controls whether the server will generate a Cache Digest # of its contents. # #Default: # digest_generation on # TAG: digest_bits_per_entry # Note: This option is only available if Squid is rebuilt with the # --enable-cache-digests option # # This is the number of bits of the server's Cache Digest which # will be associated with the Digest entry for a given HTTP # Method and URL (public key) combination. The default is 5. # #Default: # digest_bits_per_entry 5 # TAG: digest_rebuild_period (seconds) # Note: This option is only available if Squid is rebuilt with the # --enable-cache-digests option # # This is the wait time between Cache Digest rebuilds. # #Default: # digest_rebuild_period 1 hour # TAG: digest_rewrite_period (seconds) # Note: This option is only available if Squid is rebuilt with the # --enable-cache-digests option # # This is the wait time between Cache Digest writes to disk.
# #Default: # digest_rewrite_period 1 hour # TAG: digest_swapout_chunk_size (bytes) # Note: This option is only available if Squid is rebuilt with the # --enable-cache-digests option # # This is the number of bytes of the Cache Digest to write to # disk at a time. It defaults to 4096 bytes (4KB), the Squid # default swap page. # #Default: # digest_swapout_chunk_size 4096 bytes # TAG: digest_rebuild_chunk_percentage (percent, 0-100) # Note: This option is only available if Squid is rebuilt with the # --enable-cache-digests option # # This is the percentage of the Cache Digest to be scanned at a # time. By default it is set to 10% of the Cache Digest. # #Default: # digest_rebuild_chunk_percentage 10 # SNMP OPTIONS # ----------------------------------------------------------------------------# TAG: snmp_port # Note: This option is only available if Squid is rebuilt with the # --enable-snmp option # # Squid can now serve statistics and status information via SNMP. # By default it listens to port 3401 on the machine. If you don't # wish to use SNMP, set this to "0". # #Default: # snmp_port 3401 # TAG: snmp_access # Note: This option is only available if Squid is rebuilt with the # --enable-snmp option # # Allowing or denying access to the SNMP port. # # All access to the agent is denied by default. # usage: # # snmp_access allow|deny [!]aclname ... # #Example: # snmp_access allow snmppublic localhost # snmp_access deny all # #Default: # snmp_access deny all
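# Illustrative SNMP snippet (not part of the shipped defaults; the
# community string "public" and the acl name "snmppublic" are only
# assumptions for this example): the "snmppublic" acl used in the
# snmp_access example above must first be defined with an
# snmp_community acl before snmp_access can reference it.
#
# acl snmppublic snmp_community public
# snmp_port 3401
# snmp_access allow snmppublic localhost
# snmp_access deny all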
# TAG: snmp_incoming_address # Note: This option is only available if Squid is rebuilt with the # --enable-snmp option # # TAG: snmp_outgoing_address # Note: This option is only available if Squid is rebuilt with the # --enable-snmp option # # Just like 'udp_incoming_address' above, but for the SNMP port. # # snmp_incoming_address is used for the SNMP socket receiving # messages from SNMP agents. # snmp_outgoing_address is used for SNMP packets returned to SNMP # agents. # # The default snmp_incoming_address (0.0.0.0) is to listen on all # available network interfaces. # # If snmp_outgoing_address is set to 255.255.255.255 (the default) # it will use the same socket as snmp_incoming_address. Only # change this if you want to have SNMP replies sent using another # address than where this Squid listens for SNMP queries. # # NOTE, snmp_incoming_address and snmp_outgoing_address can not have # the same value since they both use port 3401. # #Default: # snmp_incoming_address 0.0.0.0 # snmp_outgoing_address 255.255.255.255 # ICP OPTIONS # ----------------------------------------------------------------------------# TAG: icp_port # The port number where Squid sends and receives ICP queries to # and from neighbor caches. Default is 3130. To disable use # "0". May be overridden with -u on the command line. # #Default: # icp_port 3130 icp_port 0 # TAG: htcp_port # Note: This option is only available if Squid is rebuilt with the # --enable-htcp option # # The port number where Squid sends and receives HTCP queries to # and from neighbor caches. Default is 4827. To disable use # "0". # #Default: # htcp_port 4827 # # # # TAG: log_icp_queries on|off If set, ICP queries are logged to access.log. You may wish do disable this if your ICP load is VERY high to speed things up or to simplify log analysis.
# #Default: # log_icp_queries on # TAG: udp_incoming_address # udp_incoming_address is used for UDP packets received from other # caches. # # The default behavior is to not bind to any specific address. # # Only change this if you want to have all UDP queries received on # a specific interface/address. # # NOTE: udp_incoming_address is used by the ICP, HTCP, and DNS # modules. Altering it will affect all of them in the same manner. # # see also; udp_outgoing_address # # NOTE, udp_incoming_address and udp_outgoing_address can not # have the same value since they both use the same port. # #Default: # udp_incoming_address 0.0.0.0 # TAG: udp_outgoing_address # udp_outgoing_address is used for UDP packets sent out to other # caches. # # The default behavior is to not bind to any specific address. # # Instead it will use the same socket as udp_incoming_address. # Only change this if you want to have UDP queries sent using another # address than where this Squid listens for UDP queries from other # caches. # # NOTE: udp_outgoing_address is used by the ICP, HTCP, and DNS # modules. Altering it will affect all of them in the same manner. # # see also; udp_incoming_address # # NOTE, udp_incoming_address and udp_outgoing_address can not # have the same value since they both use the same port. # #Default: # udp_outgoing_address 255.255.255.255 # TAG: icp_hit_stale on|off # If you want to return ICP_HIT for stale cache objects, set this # option to 'on'. If you have sibling relationships with caches # in other administrative domains, this should be 'off'. If you only # have sibling relationships with caches under your control, # it is probably okay to set this to 'on'. # If set to 'on', your siblings should use the option "allow-miss" # on their cache_peer lines for connecting to you. # #Default: # icp_hit_stale off
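# Illustrative snippet (not part of the shipped defaults; the address
# 192.0.2.10 is a placeholder): to receive ICP, HTCP and DNS traffic
# on one specific interface only, bind the incoming UDP socket to that
# interface's address and leave the outgoing address at its default so
# that replies are sent from the same socket.
#
# udp_incoming_address 192.0.2.10
# udp_outgoing_address 255.255.255.255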
# TAG: minimum_direct_hops # If using the ICMP pinging stuff, do direct fetches for sites # which are no more than this many hops away. # #Default: # minimum_direct_hops 4 # TAG: minimum_direct_rtt # If using the ICMP pinging stuff, do direct fetches for sites # which are no more than this many rtt milliseconds away. # #Default: # minimum_direct_rtt 400 # TAG: netdb_low # TAG: netdb_high # The low and high water marks for the ICMP measurement # database. These are counts, not percents. The defaults are # 900 and 1000. When the high water mark is reached, database # entries will be deleted until the low mark is reached. # #Default: # netdb_low 900 # netdb_high 1000 # TAG: netdb_ping_period # The minimum period for measuring a site. There will be at # least this much delay between successive pings to the same # network. The default is five minutes. # #Default: # netdb_ping_period 5 minutes # TAG: query_icmp on|off # If you want to ask your peers to include ICMP data in their ICP # replies, enable this option. # # If your peer has configured Squid (during compilation) with # '--enable-icmp' that peer will send ICMP pings to origin server # sites of the URLs it receives. If you enable this option the # ICP replies from that peer will include the ICMP data (if available). # Then, when choosing a parent cache, Squid will choose the parent with # the minimal RTT to the origin server. When this happens, the # hierarchy field of the access.log will be # "CLOSEST_PARENT_MISS". This option is off by default. # #Default: # query_icmp off # TAG: test_reachability on|off # When this is 'on', ICP MISS replies will be ICP_MISS_NOFETCH # instead of ICP_MISS if the target host is NOT in the ICMP # database, or has a zero RTT. # #Default: # test_reachability off # TAG: icp_query_timeout (msec)
# Normally Squid will automatically determine an optimal ICP
# query timeout value based on the round-trip-time of recent ICP
# queries. If you want to override the value determined by
# Squid, set this 'icp_query_timeout' to a non-zero value. This
# value is specified in MILLISECONDS, so, to use a 2-second
# timeout (the old default), you would write:
#
# icp_query_timeout 2000
#
#Default:
# icp_query_timeout 0

# TAG: maximum_icp_query_timeout (msec)
# Normally the ICP query timeout is determined dynamically. But
# sometimes it can lead to very large values (say 5 seconds).
# Use this option to put an upper limit on the dynamic timeout
# value. Do NOT use this option to always use a fixed (instead
# of a dynamic) timeout value. To set a fixed timeout see the
# 'icp_query_timeout' directive.
#
#Default:
# maximum_icp_query_timeout 2000

# TAG: minimum_icp_query_timeout (msec)
# Normally the ICP query timeout is determined dynamically. But
# sometimes it can lead to very small timeouts, even lower than
# the normal latency variance on your link due to traffic.
# Use this option to put a lower limit on the dynamic timeout
# value. Do NOT use this option to always use a fixed (instead
# of a dynamic) timeout value. To set a fixed timeout see the
# 'icp_query_timeout' directive.
#
#Default:
# minimum_icp_query_timeout 5

# MULTICAST ICP OPTIONS
# -----------------------------------------------------------------------------

# TAG: mcast_groups
# This tag specifies a list of multicast groups which your server
# should join to receive multicasted ICP queries.
#
# NOTE! Be very careful what you put here! Be sure you
# understand the difference between an ICP _query_ and an ICP
# _reply_. This option is to be set only if you want to RECEIVE
# multicast queries. Do NOT set this option to SEND multicast
# ICP (use cache_peer for that). ICP replies are always sent via
# unicast, so this option does not affect whether or not you will
# receive replies from multicast group members.
#
# You must be very careful to NOT use a multicast address which
# is already in use by another group of caches.
#
# If you are unsure about multicast, please read the Multicast
# chapter in the Squid FAQ (http://www.squid-cache.org/FAQ/).
#
# Usage: mcast_groups 239.128.16.128 224.0.1.20
# # By default, Squid doesn't listen on any multicast groups. # #Default: # none # TAG: mcast_miss_addr # Note: This option is only available if Squid is rebuilt with the # --enable-multicast-miss option # # If you enable this option, every "cache miss" URL will # be sent out on the specified multicast address. # # Do not enable this option unless you are are absolutely # certain you understand what you are doing. # #Default: # mcast_miss_addr 255.255.255.255 # TAG: mcast_miss_ttl # Note: This option is only available if Squid is rebuilt with the # --enable-multicast-miss option # # This is the time-to-live value for packets multicasted # when multicasting off cache miss URLs is enabled. By # default this is set to 'site scope', i.e. 16. # #Default: # mcast_miss_ttl 16 # TAG: mcast_miss_port # Note: This option is only available if Squid is rebuilt with the # --enable-multicast-miss option # # This is the port number to be used in conjunction with # 'mcast_miss_addr'. # #Default: # mcast_miss_port 3135 # TAG: mcast_miss_encode_key # Note: This option is only available if Squid is rebuilt with the # --enable-multicast-miss option # # The URLs that are sent in the multicast miss stream are # encrypted. This is the encryption key. # #Default: # mcast_miss_encode_key XXXXXXXXXXXXXXXX # TAG: mcast_icp_query_timeout (msec) # For multicast peers, Squid regularly sends out ICP "probes" to # count how many other peers are listening on the given multicast # address. This value specifies how long Squid should wait to # count all the replies. The default is 2000 msec, or 2 # seconds. # #Default:
# mcast_icp_query_timeout 2000 # INTERNAL ICON OPTIONS # ----------------------------------------------------------------------------# TAG: icon_directory # Where the icons are stored. These are normally kept in # //share/icons # #Default: # icon_directory //share/icons # TAG: global_internal_static # This directive controls is Squid should intercept all requests for # /squid-internal-static/ no matter which host the URL is requesting # (default on setting), or if nothing special should be done for # such URLs (off setting). The purpose of this directive is to make # icons etc work better in complex cache hierarchies where it may # not always be possible for all corners in the cache mesh to reach # the server generating a directory listing. # #Default: # global_internal_static on # TAG: short_icon_urls # If this is enabled Squid will use short URLs for icons. # # If off the URLs for icons will always be absolute URLs # including the proxy name and port. # #Default: # short_icon_urls off # ERROR PAGE OPTIONS # ----------------------------------------------------------------------------# TAG: error_directory # If you wish to create your own versions of the default # (English) error files, either to customize them to suit your # language or company copy the template English files to another # directory and point this tag at them. # # The squid developers are interested in making squid available in # a wide variety of languages. If you are making translations for a # langauge that Squid does not currently provide please consider # contributing your translation back to the project. # #Default: # error_directory //share/errors/English # # # # # # TAG: error_map Map errors to custom messages error_map message_url http_status ... http_status ... is a list of HTTP status codes or Squid error
# messages.
#
# Use in accelerators to substitute the error messages returned
# by servers with other custom errors.
#
# error_map http://your.server/error/404.shtml 404
#
# Requests for error messages are GET requests for the configured
# URL with the following special headers:
#
# X-Error-Status: The received HTTP status code (i.e. 404)
# X-Request-URI:  The requested URI where the error occurred
#
# In addition the following headers are forwarded from the client
# request:
#
# User-Agent, Cookie, X-Forwarded-For, Via, Authorization,
# Accept, Referer
#
# And the following headers from the server reply:
#
# Server, Via, Location, Content-Location
#
# The reply returned to the client will carry the original HTTP
# headers from the real error message, but with the reply body
# of the configured error message.
#
#Default:
# none

# TAG: err_html_text
# HTML text to include in error messages. Make this a "mailto"
# URL to your admin address, or maybe just a link to your
# organization's Web page.
#
# To include this in your error messages, you must rewrite
# the error template files (found in the "errors" directory).
# Wherever you want the 'err_html_text' line to appear,
# insert a %L tag in the error template file.
#
#Default:
# none

# TAG: deny_info
# Usage:   deny_info err_page_name acl
# or       deny_info http://... acl
# Example: deny_info ERR_CUSTOM_ACCESS_DENIED bad_guys
#
# This can be used to return an ERR_ page for requests which
# do not pass the 'http_access' rules. Squid remembers the last
# acl it evaluated in http_access, and if a 'deny_info' line exists
# for that ACL Squid returns a corresponding error page.
#
# The acl is typically the last acl on the http_access deny line
# which denied access. The exceptions to this rule are:
# - When Squid needs to request authentication credentials. It's
#   then the first authentication related acl encountered
# - When none of the http_access lines matches. It's then the last # acl processed on the last http_access line. # # You may use ERR_ pages that come with Squid or create your own pages # and put them into the configured errors/ directory. # # Alternatively you can specify an error URL. The browsers will # get redirected (302) to the specified URL. %s in the redirection # URL will be replaced by the requested URL. # # Alternatively you can tell Squid to reset the TCP connection # by specifying TCP_RESET. # #Default: # none # OPTIONS INFLUENCING REQUEST FORWARDING # ----------------------------------------------------------------------------# TAG: nonhierarchical_direct # By default, Squid will send any non-hierarchical requests # (matching hierarchy_stoplist or not cacheable request type) direct # to origin servers. # # If you set this to off, Squid will prefer to send these # requests to parents. # # Note that in most configurations, by turning this off you will only # add latency to these request without any improvement in global hit # ratio. # # If you are inside an firewall see never_direct instead of # this directive. # #Default: # nonhierarchical_direct on # TAG: prefer_direct # Normally Squid tries to use parents for most requests. If you for some # reason like it to first try going direct and only use a parent if # going direct fails set this to on. # # By combining nonhierarchical_direct off and prefer_direct on you # can set up Squid to use a parent as a backup path if going direct # fails. # # Note: If you want Squid to use parents for all requests see # the never_direct directive. prefer_direct only modifies how Squid # acts on cacheable requests. # #Default: # prefer_direct off # # # # TAG: ignore_ims_on_miss on|off This options makes Squid ignore If-Modified-Since on cache misses. This is useful while the cache is mostly empty to more quickly have the cache populated.
# #Default: # ignore_ims_on_miss off # TAG: always_direct # Usage: always_direct allow|deny [!]aclname ... # # Here you can use ACL elements to specify requests which should # ALWAYS be forwarded by Squid to the origin servers without using # any peers. For example, to always directly forward requests for # local servers ignoring any parents or siblings you may have use # something like: # # acl local-servers dstdomain my.domain.net # always_direct allow local-servers # # To always forward FTP requests directly, use # # acl FTP proto FTP # always_direct allow FTP # # NOTE: There is a similar, but opposite option named # 'never_direct'. You need to be aware that "always_direct deny # foo" is NOT the same thing as "never_direct allow foo". You # may need to use a deny rule to exclude a more-specific case of # some other rule. Example: # # acl local-external dstdomain external.foo.net # acl local-servers dstdomain .foo.net # always_direct deny local-external # always_direct allow local-servers # # NOTE: If your goal is to make the client forward the request # directly to the origin server bypassing Squid then this needs # to be done in the client configuration. Squid configuration # can only tell Squid how Squid should fetch the object. # # NOTE: This directive is not related to caching. The replies # is cached as usual even if you use always_direct. To not cache # the replies see no_cache. # # This option replaces some v1.1 options such as local_domain # and local_ip. # #Default: # none # # # # # # # # # # # TAG: never_direct Usage: never_direct allow|deny [!]aclname ... never_direct is the opposite of always_direct. Please read the description for always_direct if you have not already. With 'never_direct' you can use ACL elements to specify requests which should NEVER be forwarded directly to origin servers. For example, to force the use of a proxy for all requests, except those in your local domain use something like:
# acl local-servers dstdomain .foo.net # acl all src 0.0.0.0/0.0.0.0 # never_direct deny local-servers # never_direct allow all # # or if Squid is inside a firewall and there are local intranet # servers inside the firewall use something like: # # acl local-intranet dstdomain .foo.net # acl local-external dstdomain external.foo.net # always_direct deny local-external # always_direct allow local-intranet # never_direct allow all # # This option replaces some v1.1 options such as inside_firewall # and firewall_ip. # #Default: # none # ADVANCED NETWORKING OPTIONS # ----------------------------------------------------------------------------# TAG: max_filedescriptors # The maximum number of filedescriptors supported. # # The default "0" means Squid inherits the current ulimit setting. # # Note: Changing this requires a restart of Squid. Also # not all comm loops supports values larger than --with-maxfd. # #Default: # max_filedescriptors 0 # TAG: accept_filter # FreeBSD: # # The name of an accept(2) filter to install on Squid's # listen socket(s). This feature is perhaps specific to # FreeBSD and requires support in the kernel. # # The 'httpready' filter delays delivering new connections # to Squid until a full HTTP request has been received. # See the accf_http(9) man page for details. # # The 'dataready' filter delays delivering new connections # to Squid until there is some data to process. # See the accf_dataready(9) man page for details. # # Linux: # # The 'data' filter delays delivering of new connections # to Squid until there is some data to process by TCP_ACCEPT_DEFER. # You may optionally specify a number of seconds to wait by # 'data=N' where N is the number of seconds. Defaults to 30 # if not specified. See the tcp(7) man page for details. #EXAMPLE:
## FreeBSD #accept_filter httpready ## Linux #accept_filter data # #Default: # none # TAG: tcp_recv_bufsize (bytes) # Size of receive buffer to set for TCP sockets. Probably just # as easy to change your kernel's default. Set to zero to use # the default buffer size. # #Default: # tcp_recv_bufsize 0 bytes # TAG: incoming_rate # This directive controls how aggressive Squid should accept new # connections compared to processing existing connections. # The lower number the more frequent Squid will look for new # incoming requests. # #Default: # incoming_rate 30 # DNS OPTIONS # ----------------------------------------------------------------------------# TAG: check_hostnames # For security and stability reasons Squid by default checks # hostnames for Internet standard RFC compliance. If you do not want # Squid to perform these checks then turn this directive off. # #Default: # check_hostnames on # TAG: allow_underscore # Underscore characters is not strictly allowed in Internet hostnames # but nevertheless used by many sites. Set this to off if you want # Squid to be strict about the standard. # This check is performed only when check_hostnames is set to on. # #Default: # allow_underscore on # TAG: cache_dns_program # Specify the location of the executable for dnslookup process. # #Default: cache_dns_program /bin/dnsserver # # # # # # TAG: dns_children The number of processes spawn to service DNS name lookups. For heavily loaded caches on large servers, you should probably increase this value to at least 10. The maximum is 32. The default is 5.
# You must have at least one dnsserver process.
#
#Default: dns_children 1

# TAG: dns_retransmit_interval
# Note: This option is only available if Squid is rebuilt with the
#       --enable-internal-dns option
#
# Initial retransmit interval for DNS queries. The interval is
# doubled each time all configured DNS servers have been tried.
#
#Default:
# dns_retransmit_interval 5 seconds

# TAG: dns_timeout
# Note: This option is only available if Squid is rebuilt with the
#       --enable-internal-dns option
#
# DNS Query timeout. If no response is received to a DNS query
# within this time all DNS servers for the queried domain
# are assumed to be unavailable.
#
#Default:
# dns_timeout 2 minutes

# TAG: dns_defnames on|off
# Normally the RES_DEFNAMES resolver option is disabled
# (see res_init(3)). This prevents caches in a hierarchy
# from interpreting single-component hostnames locally. To allow
# Squid to handle single-component names, enable this option.
#
#Default:
# dns_defnames off

# TAG: dns_nameservers
# Use this if you want to specify a list of DNS name servers
# (IP addresses) to use instead of those given in your
# /etc/resolv.conf file.
# On Windows platforms, if no value is specified here or in
# the /etc/resolv.conf file, the list of DNS name servers is
# taken from the Windows registry; both static and dynamic DHCP
# configurations are supported.
#
# Example: dns_nameservers 10.0.0.1 192.172.0.4
#
#Default:
# none

# TAG: hosts_file
# Location of the host-local IP name-address associations
# database. Most Operating Systems have such a file in different
# default locations:
# - Un*X & Linux:    /etc/hosts
# - Windows NT/2000: %SystemRoot%\system32\drivers\etc\hosts
#                    (%SystemRoot% value install default is c:\winnt)
# - Windows XP/2003: %SystemRoot%\system32\drivers\etc\hosts
# (%SystemRoot% value install default is c:\windows) # - Windows 9x/Me: %windir%\hosts # (%windir% value is usually c:\windows) # - Cygwin: /etc/hosts # # The file contains newline-separated definitions, in the # form ip_address_in_dotted_form name [name ...] names are # whitespace-separated. Lines beginning with an hash (#) # character are comments. # # The file is checked at startup and upon configuration. # If set to 'none', it won't be checked. # If append_domain is used, that domain will be added to # domain-local (i.e. not containing any dot character) host # definitions. # #Default: # hosts_file /etc/hosts # TAG: dns_testnames # The DNS tests exit as soon as the first site is successfully looked up # # This test can be disabled with the -D command line option. # #Default: # dns_testnames netscape.com internic.net nlanr.net microsoft.com # TAG: append_domain # Appends local domain name to hostnames without any dots in # them. append_domain must begin with a period. # # Be warned there are now Internet names with no dots in # them using only top-domain names, so setting this may # cause some Internet sites to become unavailable. # #Example: # append_domain .yourdomain.com # #Default: # none # TAG: ignore_unknown_nameservers # By default Squid checks that DNS responses are received # from the same IP addresses they are sent to. If they # don't match, Squid ignores the response and writes a warning # message to cache.log. You can allow responses from unknown # nameservers by setting this option to 'off'. # #Default: # ignore_unknown_nameservers on # TAG: ipcache_size (number of entries) # TAG: ipcache_low (percent) # TAG: ipcache_high (percent) # The size, low-, and high-water marks for the IP cache. # #Default: # ipcache_size 1024
# ipcache_low 90 # ipcache_high 95 # TAG: fqdncache_size (number of entries) # Maximum number of FQDN cache entries. # #Default: # fqdncache_size 1024 # MISCELLANEOUS # ----------------------------------------------------------------------------# TAG: memory_pools on|off # If set, Squid will keep pools of allocated (but unused) memory # available for future use. If memory is a premium on your # system and you believe your malloc library outperforms Squid # routines, disable this. # #Default: # memory_pools on # TAG: memory_pools_limit (bytes) # Used only with memory_pools on: # memory_pools_limit 50 MB # # If set to a non-zero value, Squid will keep at most the specified # limit of allocated (but unused) memory in memory pools. All free() # requests that exceed this limit will be handled by your malloc # library. Squid does not pre-allocate any memory, just safe-keeps # objects that otherwise would be free()d. Thus, it is safe to set # memory_pools_limit to a reasonably high value even if your # configuration will use less memory. # # If set to zero, Squid will keep all memory it can. That is, there # will be no limit on the total amount of memory used for safe-keeping. # # To disable memory allocation optimization, do not set # memory_pools_limit to 0. Set memory_pools to "off" instead. # # An overhead for maintaining memory pools is not taken into account # when the limit is checked. This overhead is close to four bytes per # object kept. However, pools may actually _save_ memory because of # reduced memory thrashing in your malloc library. # #Default: # memory_pools_limit 5 MB # # # # # # # # # # TAG: forwarded_for on|off If set, Squid will include your system's IP address or name in the HTTP requests it forwards. By default it looks like this: X-Forwarded-For: 192.1.2.3 If you disable this, it will appear as X-Forwarded-For: unknown
#
#Default:
# forwarded_for on

# TAG: cachemgr_passwd
# Specify passwords for cachemgr operations.
#
# Usage: cachemgr_passwd password action action ...
#
# Some valid actions are (see cache manager menu for a full list):
#   5min
#   60min
#   asndb
#   authenticator
#   cbdata
#   client_list
#   comm_incoming
#   config *
#   counters
#   delay
#   digest_stats
#   dns
#   events
#   filedescriptors
#   fqdncache
#   histograms
#   http_headers
#   info
#   io
#   ipcache
#   mem
#   menu
#   netdb
#   non_peers
#   objects
#   offline_toggle *
#   pconn
#   peer_select
#   reconfigure *
#   redirector
#   refresh
#   server_list
#   shutdown *
#   store_digest
#   storedir
#   utilization
#   via_headers
#   vm_objects
#
#   * Indicates actions which will not be performed without a
#     valid password, others can be performed if not listed here.
#
# To disable an action, set the password to "disable".
# To allow performing an action without a password, set the
# password to "none".
#
# Use the keyword "all" to set the same password for all actions.
#Example: # cachemgr_passwd secret shutdown # cachemgr_passwd lesssssssecret info stats/objects # cachemgr_passwd disable all # #Default: # none # TAG: client_db on|off # If you want to disable collecting per-client statistics, # turn off client_db here. # #Default: # client_db on # TAG: reload_into_ims on|off # When you enable this option, client no-cache or ``reload'' # requests will be changed to If-Modified-Since requests. # Doing this VIOLATES the HTTP standard. Enabling this # feature could make you liable for problems which it # causes. # # see also refresh_pattern for a more selective approach. # #Default: # reload_into_ims off # TAG: maximum_single_addr_tries # This sets the maximum number of connection attempts for a # host that only has one address (for multiple-address hosts, # each address is tried once). # # The default value is one attempt, the (not recommended) # maximum is 255 tries. A warning message will be generated # if it is set to a value greater than ten. # # Note: This is in addition to the request re-forwarding which # takes place if Squid fails to get a satisfying response. # #Default: # maximum_single_addr_tries 1 # TAG: retry_on_error # If set to on Squid will automatically retry requests when # receiving an error response. This is mainly useful if you # are in a complex cache hierarchy to work around access # control errors. # #Default: # retry_on_error off # TAG: as_whois_server # WHOIS server to query for AS numbers. NOTE: AS numbers are # queried only when Squid starts up, not for every request. # #Default: # as_whois_server whois.ra.net # as_whois_server whois.ra.net
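# Illustrative snippet (not part of the shipped defaults; the pattern
# below is only an example): the "more selective approach" referred to
# under reload_into_ims above is the reload-into-ims option of
# refresh_pattern, which converts client "reload" requests into
# If-Modified-Since requests only for matching URLs.
#
# refresh_pattern -i \.jpg$ 1440 90% 10080 reload-into-ims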
# TAG: offline_mode # Enable this option and Squid will never try to validate cached # objects. # #Default: # offline_mode off # TAG: uri_whitespace # What to do with requests that have whitespace characters in the # URI. Options: # # strip: The whitespace characters are stripped out of the URL. # This is the behavior recommended by RFC2396. # deny: The request is denied. The user receives an "Invalid # Request" message. # allow: The request is allowed and the URI is not changed. The # whitespace characters remain in the URI. Note the # whitespace is passed to redirector processes if they # are in use. # encode: The request is allowed and the whitespace characters are # encoded according to RFC1738. This could be considered # a violation of the HTTP/1.1 # RFC because proxies are not allowed to rewrite URI's. # chop: The request is allowed and the URI is chopped at the # first whitespace. This might also be considered a # violation. # #Default: # uri_whitespace strip # TAG: coredump_dir # By default Squid leaves core files in the directory from where # it was started. If you set 'coredump_dir' to a directory # that exists, Squid will chdir() to that directory at startup # and coredump files will be left there. # #Default: # coredump_dir none # # Leave coredumps in the first cache dir coredump_dir //var/cache # TAG: chroot # Use this to have Squid do a chroot() while initializing. This # also causes Squid to fully drop root privileges after # initializing. This means, for example, if you use a HTTP # port less than 1024 and try to reconfigure, you will may get an # error saying that Squid can not open the port. # #Default: # none # # # # # TAG: balance_on_multiple_ip Some load balancing servers based on round robin DNS have been found not to preserve user session state across requests to different IP addresses.
# By default Squid rotates IP's per request. By disabling # this directive only connection failure triggers rotation. # #Default: # balance_on_multiple_ip on # TAG: pipeline_prefetch # To boost the performance of pipelined requests to closer # match that of a non-proxied environment Squid can try to fetch # up to two requests in parallel from a pipeline. # # Defaults to off for bandwidth management and access logging # reasons. # #Default: # pipeline_prefetch off # TAG: high_response_time_warning (msec) # If the one-minute median response time exceeds this value, # Squid prints a WARNING with debug level 0 to get the # administrators attention. The value is in milliseconds. # #Default: # high_response_time_warning 0 # TAG: high_page_fault_warning # If the one-minute average page fault rate exceeds this # value, Squid prints a WARNING with debug level 0 to get # the administrators attention. The value is in page faults # per second. # #Default: # high_page_fault_warning 0 # TAG: high_memory_warning # If the memory usage (as determined by mallinfo) exceeds # this amount, Squid prints a WARNING with debug level 0 to get # the administrators attention. # #Default: # high_memory_warning 0 KB # TAG: sleep_after_fork (microseconds) # When this is set to a non-zero value, the main Squid process # sleeps the specified number of microseconds after a fork() # system call. This sleep may help the situation where your # system reports fork() failures due to lack of (virtual) # memory. Note, however, if you have a lot of child # processes, these sleep delays will add up and your # Squid will not service requests for some amount of time # until all the child processes have been started. # On Windows value less then 1000 (1 milliseconds) are # rounded to 1000. # #Default: # sleep_after_fork 0 # TAG: zero_buffers on|off
# Squid by default will zero all buffers before using or reusing them.
# Setting this to 'off' will result in fixed-sized temporary buffers
# not being zero'ed. This may give a performance boost on certain
# platforms but it may result in undefined behaviour at the present
# time.
#
#Default:
# zero_buffers on

# TAG: windows_ipaddrchangemonitor on|off
# On Windows Squid by default will monitor IP address changes and will
# reconfigure itself after any detected event. This is very useful for
# proxies connected to the Internet with dial-up interfaces.
# In some cases (a proxy server acting as a VPN gateway is one) it may
# be desirable to disable this behaviour by setting this to 'off'.
# Note: after changing this, the Squid service must be restarted.
#
#Default:
# windows_ipaddrchangemonitor on

# Follow X-Forwarded-For configuration: Squid uses the X-Forwarded-For
# value supplied by the previous hop (the device in front of it) to
# identify the original client.
follow_x_forwarded_for allow all
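# Illustrative snippet (not part of the shipped defaults; the address
# 192.0.2.1 stands for whatever device inserts X-Forwarded-For in
# front of Squid): if only one known front end should be trusted,
# restrict follow_x_forwarded_for to that source instead of allowing
# it from all clients.
#
# acl front_end src 192.0.2.1
# follow_x_forwarded_for allow front_end
# follow_x_forwarded_for deny all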