GGC Installation Guide
This document describes the installation and configuration process for a Google Global Cache (GGC)
node. Follow these instructions if you’re an ISP deploying GGC in your network.
The process consists of the following steps:
1. Hardware installation
2. Network installation
3. IP addressing
4. Software installation
5. BGP configuration
Google may provide a networking device that connects to the GGC machines. In some cases it’s managed by the ISP, in others by Google. In this Installation Guide, all references to a Google router mean a networking device that is both provided and managed by Google.
Steps 2 and 5 each have two scenarios, depending on whether the GGC node is connected to a Google router or an ISP-managed switch.
Follow only the steps applicable to the GGC node type you’re installing.
2 Hardware Installation
Follow these instructions to rack mount GGC machines. Some details vary depending on the type of GGC
hardware provided. See the appendices for detailed information.
You may rack mount GGC machines as soon as you receive them.
2.1 You Will Need
Rack mount installation kit and vendor-specific instructions (shipped with machines)
Network and power cabling, as listed in Appendix A - Cabling Requirements - Google Router and
Appendix B - Cabling Requirements - ISP Managed Switch
2.2 Procedure
1. HPE Apollo 4200 only: Remove the shipping screws on either side of the chassis, following vendor
instructions. If you don’t remove the shipping screws, you won’t be able to open the chassis to
maintain the mid-chassis-mounted disks in the future.
GGC equipment is heavy. Use two people to lift it and follow appropriate health and
safety procedures to prevent injury and to avoid damage to the GGC equipment.
NOTE: For Dell R740xd2 machines, to prevent possible injury, ensure that the thumbscrews located
on the front left and right control panels are fastened during racking, so that the machine doesn’t
slide out of the rack when you pull the front drive bay.
5. Verify that both Power Supply Units (PSUs) show green indicator lights.
For redundancy and performance reasons, you must use both PSUs. Google
strongly recommends that you connect each power supply to independent power
feeds. If a second feed isn’t available, you can connect both PSUs to the same feed.
3 Network Installation - Google Router
Follow the instructions in this section if Google is providing and managing a GGC router to which the
GGC machines are connected. Otherwise, proceed to Network Installation - ISP Managed Switch.
Connect the GGC machines and the router to your network as soon as you receive them. Do this even if
you aren’t ready to install the GGC software or start GGC traffic.
The GGC router comes preconfigured with an uplink interface, LACP (for uplinks with two or more physical ports), and
IP addressing as shown in the ISP Portal (https://isp.google.com/assets/).
Network diagram of a GGC deployment with GGC router: single interconnect to ISP network (left) and two interconnects (right)
3.1 You Will Need
GGC router
Uplink connectivity components, including SFPs or QSFPs and single-mode or multi-mode fibers, as required
for the number of uplinks
3.2 Procedure
Details of cabling and network interfaces vary, depending on the GGC hardware type, the number of
machines in the node, and the type and number of uplinks in use. See Appendix A - Cabling Requirements
- Google Router.
2. Install SFPs and cabling as described in Appendix A - Cabling Requirements - Google Router.
For uplinks with two or more physical ports, LACP is used with the following settings:
Passive mode
Layer 2 mode
Check uplink physical status (link lights) on GGC router, and on your device.
Verify GGC router light levels (Tx and Rx) at your device.
4 Network Installation - ISP Managed Switch
Follow the instructions in this section if you’ll be providing all network connectivity for the GGC
machines.
You may connect the GGC machines to your network as soon as you receive them, even if you aren’t
ready to install the GGC software or start GGC traffic.
4.1 You Will Need
Switch or router
4.2 Procedure
Install SFPs and cabling, as described in Appendix B - Cabling Requirements - ISP Managed Switch.
Configure the switch or router ports facing the GGC machines with:
Maximum port speed (10Gbps for 10Gbps links, 1Gbps for 1Gbps links, etc.)
Full duplex
Auto-negotiation enabled
For GGC machines using a single interface, Link Aggregation Control Protocol (LACP) should be
disabled.
For GGC machines using multiple interfaces, LACP should be enabled, and configured as follows:
Passive mode
Layer 2 mode
Standalone mode (aggregated link should remain up, even if a physical port is down)
You may connect different GGC machines in the same node to different switches, but it isn’t required. If
you use multiple switches, the VLAN used by the GGC machines must span all switches involved.
Sample switch configurations are provided in Appendix C - Configuration Examples - ISP Managed
Switch. Refer to your switch vendor’s documentation for specific configuration commands.
5 IP Addressing
GGC nodes require a dedicated layer 3 subnet. You can configure nodes as dual-stacked (preferred), IPv4
only, or IPv6 only.
For each IP protocol version supported by the GGC node, each machine is assigned a maintenance
(management) IP address and virtual IP addresses (VIPs).
Maintenance IPs are configured statically. VIPs are managed automatically to ensure they move to other
machines during failures and machine maintenance. This minimizes disruption of traffic to users
and ensures that BGP sessions with GGC nodes don’t remain down for extended periods.
5.1 IP Addressing Requirements for GGC Nodes
GGC nodes require a specific size of allocated public IP subnet, as shown in the table below.
IPv4 addresses are assigned within the GGC-allocated subnet as follows (a hypothetical example follows this list):
You must use the 1st usable address in the subnet for the subnet gateway.
If required, assign the 2nd and 3rd addresses to an ISP-managed switch (e.g. for HSRP or GLBP).
The 4th and following IP addresses in the subnet are reserved for GGC machines (management
IPs and VIPs).
Check the node information in the ISP Portal (https://isp.google.com/assets/) for the BGP peering
IP address.
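For illustration only, assume a hypothetical allocation of 192.0.2.0/27 (your actual subnet and its required size are shown in the ISP Portal). The scheme above would then look like this:
192.0.2.1 - subnet gateway (1st usable address)
192.0.2.2 and 192.0.2.3 - ISP-managed switch, if required (e.g. for HSRP or GLBP)
192.0.2.4 and up - GGC machine management IPs and VIPs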
IPv6 addresses are assigned within the GGC-allocated subnet as follows (a hypothetical example follows this list):
The ::1 address in the subnet is used as a statically addressed subnet gateway. If your device is
configured to send router advertisements, the GGC machines use the advertised gateway in preference to the static
gateway.
If required, assign the ::2 and ::3 addresses to an ISP-managed switch (e.g. for HSRP or GLBP).
The ::4 and following IP addresses in the subnet are reserved for GGC machines (management IPs
and VIPs).
Check the node information in the ISP Portal (https://isp.google.com/assets/) for the BGP peering
IP address.
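Again for illustration only, assume a hypothetical allocation of 2001:db8:100::/64 (use the subnet shown in the ISP Portal):
2001:db8:100::1 - subnet gateway (static)
2001:db8:100::2 and ::3 - ISP-managed switch, if required
2001:db8:100::4 and up - GGC machine management IPs and VIPs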
5.2 IP Addressing Requirements for Google Routers
GGC nodes that are behind Google routers have additional subnet requirements because of the
interconnects needed to connect them to your network.
Each interconnect may be configured with IPv4, IPv6, or both address types.
A /29 IPv4 subnet (or larger) is required for interconnects where a redundancy protocol (HSRP,
VRRP, etc.) is used on the ISP’s side.
It’s up to the ISP to decide which IP from the subnet allocated to the interconnect is configured on either side of
the interconnect. Google doesn’t have any guidelines or preferences.
5.3 Enabling IPv6
You can enable IPv6 prior to installation by specifying the IPv6 subnet and IPv6 Router for BGP Sessions
when you supply the technical information required for node activation in the ISP Portal
(https://isp.google.com/assets/) asset pages.
Google strongly recommends that you enable IPv6 for new nodes, even if you don’t yet have significant IPv6
user traffic, provided that IPv6 is globally reachable. Connectivity issues will delay turnup.
For an existing node that’s already serving IPv4 traffic, you can enable IPv6 through the ISP Portal
(https://isp.google.com/assets/) asset pages by following these steps:
If you’re enabling IPv6 support for a GGC node that’s connected to a Google router, you’ll
need to provide the IPv6 subnet for the interconnect that’s configured with a BGP session
over it. Contact [email protected] to verify these prerequisites are in place.
6 Software Installation (Network-based)
Network-based installation is possible on GGC nodes that meet all of these requirements:
IPv4 enabled (Network-based IPv6-only installation is not supported)
6.1 Procedure
2. Wire the machines to the GGC router, following the cabling requirements in Appendix A exactly.
5. No further action needed. All machines will attempt to boot off the network at regular intervals.
Once the GGC Router is fully provisioned and has established BGP session(s), all machines should
install over the network without intervention on your side.
2. Wire the machines to the GGC router, following the cabling requirements in Appendix A exactly.
4. Wait for the GGC router to complete turnup. Once this is done, you’ll receive an email message
instructing you to boot the machines. Do this by pressing the front power
button. If the machines were already powered on, reboot them at this point.
6. No further action needed. The machines should perform an automated network-based install.
6.2 GGC software network-based installation failures
Occasionally GGC network-based installation may fail to start. If this happens, please retry the procedure
above. If the failure persists, try the USB-drive installation process.
If the machine can’t establish network connectivity, check all cables are properly seated and connected
according to requirements in Appendix A.
If the machines still fail a network boot, try the Software Installation (USB-drive) procedure below.
7 Software Installation (USB-drive)
This section describes the steps to install the GGC software on the machine. After this step is complete,
the installer automatically signals Google to begin the turnup process or to return this machine to a
serving state, in the case of a reinstallation.
4. Enter the network configuration and wait for the installer to complete.
It’s possible to install a GGC machine without Internet connectivity. In this case, the machine repeatedly
tries contacting GGC management systems until it succeeds.
A USB stick with at least 1 GB capacity. One USB stick per machine is provided.
A computer on which you can download, save, and run the tools required to create a bootable USB stick
GGC machines, mounted in a rack and powered up. A connection to the network with Internet
connectivity is preferred but not required.
You’ll also need to advertise the prefix allocated to the GGC node to your upstream networks.
Only the latest version of the setup image is supported. If you use an older installer version we may ask
you to reinstall the machine.
You’ll need a USB removable media device to store the GGC setup image. This can be the USB stick
shipped with GGC servers, or any other USB stick or portable USB drive with at least 1 GB capacity.
Create the USB boot stick on a computer on which you have the permissions described above. You can
create multiple boot sticks, to install machines in parallel, or you can make one USB stick and reuse it for
multiple machine installations.
For details on how to write the install image to the USB stick on various operating systems, see the ISP
Help Center (https://support.google.com/interconnect?p=usb).
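As an illustration, on a Linux system the image can be written with dd. The image filename and device name below are placeholders; follow the ISP Help Center instructions above for your operating system:
# WARNING: dd overwrites the target device; double-check the device name first.
# Replace ggc-setup.img and /dev/sdX with your actual image file and USB device.
sudo dd if=ggc-setup.img of=/dev/sdX bs=4M status=progress conv=fsync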
2. Insert the prepared setup USB stick in any USB port (front or back). If you use the front port, that
may prevent you from easily connecting a keyboard or from closing the rack doors.
5. Allow some time for the machine to boot from the USB stick.
If you’ve installed this machine before, it might not automatically boot from the USB
stick. If that happens, follow the steps at Booting the machine from the USB stick.
6. Press ENTER or wait for 10 seconds for the ‘Boot Menu’ to disappear. The machine boots up and
starts the installation program.
The installer examines the hardware. Some modifications applied by the installer may require the
machine to reboot.
If that happens, make sure the machine boots from the USB stick again. The installation program
then resumes.
7. The installer detects which network interface has a link. In this example, it’s the first 10GE
interface. It prompts you to configure it, as shown in this screen. Press ENTER to proceed:
NIC Detection
8. Respond to the prompts that appear on the screen. The configuration should match the
information provided to Google in the ISP Portal (https://isp.google.com/assets/):
NOTE: If you’ve used this USB stick to install any GGC machines before, the fields listed below are
pre-populated with previously used values. Verify they’re correct before pressing ENTER.
Enable LACP: Select ‘Y’ if the machine has multiple interfaces connected. Select ‘N’ if it has
only a single interface connected.
Enter the machine number: 1 for the first machine, 2 for the second, and so on.
IP Information
9. The installer validates IP information and connectivity, then begins software installation onto the
local hard drive. This step takes a couple of minutes. Allow it to finish.
10. When the installation process has completed successfully, it prompts you to press ENTER to
reboot the machine:
Successful installation
If any warnings or error messages are shown on the screen, don’t reboot the machine. See GGC
Software Installation Troubleshooting.
11. Remove the USB stick from the machine and press ENTER to reboot.
12. When the machine reboots after a successful installation, it boots from disk. The machine is now
ready for remote management. The monitor shows the Machine Health Screen:
13. Label each machine with the name and IP address assigned to it.
If the installation didn’t go as described above, follow the steps in the next section, GGC
Software Installation Troubleshooting.
7.3 GGC Software Installation Troubleshooting
Occasionally GGC software installation may fail. This is usually due to either:
Hardware issues, which prevent the machine from booting, or that prevent the install image from
being written to disk
Network issues, which prevent the machine from connecting to GGC management systems
First, check machine hardware status by attaching a monitor, and viewing the Machine Health Screen
(https://support.google.com/interconnect/answer/9028258?hl=en&ref_topic=7658879). Further
information is available in the ISP Help Center (https://support.google.com/interconnect?p=usb).
If you can’t establish network connectivity, check the cables, the switch and router configuration, and the IP
information you entered during installation. Check IP connectivity to the GGC machines from the GGC
switch, from elsewhere in your network, and from outside your network using external Looking Glass utilities.
If you’re installing a brand new machine for the first time and the setup process reports an error before
network connectivity is available, include the machine’s service tag when you report the
problem to [email protected].
If the setup process reports an error after network connectivity is established, it automatically uploads
logs to Google for investigation. If this happens, leave the machine running with the USB stick inserted
so we can gather additional diagnostics, if required.
Don’t place transparent proxies, NAT devices, or filters in the path of communications between the GGC
Node and the internet, subject to the explicit exceptions in the GGC Agreement. Any filtering of traffic to
or from GGC machines is likely to block remote management of machines. This delays turnup.
In other cases, contact the GGC Operations team: [email protected]. Always include the GGC node name
in communications with us.
NOTE: Photos of the fault, including those of the installer and Machine Health Screen, are useful for
troubleshooting.
7.4 Boot the Machine from the USB Stick
1. Insert the USB stick and reboot the machine. During POST (Power On Self Test phase of PC boot),
this menu appears on the screen:
4. From the list of bootable devices, select ‘Hard disk C:’ and then the USB stick, as shown in this
screen:
1. Insert the USB stick and reboot the machine. During POST (Power On Self Test phase of PC boot),
this menu appears on the screen:
HPE POST screen
8 BGP Configuration - Google Router
In this scenario you establish BGP sessions with the Google-provided router. One session is required per
interconnect and IP protocol version.
8.2 Procedure
You can configure your BGP router at any time. The session doesn’t come up until Google completes the
next steps of the installation. Establishing a BGP session with a Google router and advertising prefixes
doesn’t cause traffic to flow. We’ll contact you to arrange a date and time to start traffic.
We recommend at least one BGP session between your BGP peer(s) and the Google-provided router.
We’ll contact you if we have problems establishing the BGP session, or if we detect problems with the
prefixes advertised.
If you’re planning to perform work that may adversely affect GGC nodes, don’t disable the BGP session; instead, schedule maintenance for it in the ISP Portal (https://isp.google.com/assets/).
9 BGP Configuration - ISP Managed Switch
In this scenario you establish a BGP session directly with the GGC node.
9.1 You Will Need
BGP configuration details for this node from the ISP Portal (https://isp.google.com/assets/)
9.2 Procedure
You can configure your BGP router at any time. The session doesn’t come up until Google completes the
next steps of the installation. Establishing a BGP session with a new GGC node and advertising prefixes
doesn’t cause traffic to flow. We’ll contact you to arrange a date and time to start traffic.
It’s important to understand that the BGP session established with the GGC node isn’t used for traditional
routing purposes. This has two implications:
Only a single BGP session (for IPv4 and IPv6 each) with a GGC node is supported and necessary.
We’ll contact you if we have problems establishing the BGP session, or if we detect problems with the
prefixes advertised.
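As an illustration only, a minimal Cisco IOS-style sketch of such a session is shown below. The AS numbers, peering address, prefix, and route-map name are hypothetical placeholders (documentation values); use the BGP details shown in the ISP Portal, and see Appendix C for Juniper examples.
! Hypothetical values throughout - substitute the details from the ISP Portal.
router bgp 64496
 neighbor 192.0.2.10 remote-as 64497
 neighbor 192.0.2.10 description mynode-ggc-bgp
 address-family ipv4
  neighbor 192.0.2.10 activate
  ! Advertise the prefixes whose users this node should serve.
  network 198.51.100.0 mask 255.255.255.0
  ! The session isn't used for traditional routing; this example rejects any routes received.
  neighbor 192.0.2.10 route-map GGC-DENY-IN in
 exit-address-family
!
route-map GGC-DENY-IN deny 10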
Disabling the BGP session doesn’t stop the node from serving traffic. Our systems
continue to use the most recently received BGP feed when the session isn’t established.
If you’re planning to perform work that may adversely affect GGC nodes, schedule
maintenance for it in the ISP Portal (https://isp.google.com/assets/).
10 Appendix A - Cabling Requirements - Google Router
1 x 1G copper SFP
1 x multi-mode fiber
Procedure:
1. Insert 10G SFPs into machine network interface “Port 1”, as shown below.
2. Insert SFPs into GGC router:
- Machine-facing 10G SFPs, starting from interface #0
- Machine-facing 1G SFPs, starting from interface #14
- Uplink SFPs, starting from interface #30, or uplink QSFPs, starting from interface #1/0
3. Connect machines to switch with fiber, as shown:
- First machine 10G interface “Port 1” to router interface #0
- Second machine 10G interface “Port 1” to router interface #1, and so on.
4. Connect machines to switch with Cat5e cabling:
- First machine RJ45 interface #3 to router interface #14
- Second machine RJ45 interface #3 to router interface #15, and so on.
R730 Cabling (out-of-band)
2 x multi-mode fiber
Procedure:
1. Insert uplink SFPs into GGC router:
- If you’re using 10G SFPs, insert up to 16 10G SFPs into the GGC router, starting from interface #24.
- If you’re using 100G QSFPs, insert up to 2 100G QSFPs into the GGC router, starting from interface #1/0.
2. Insert machine-facing 10G SFPs into GGC router, starting from interface #0.
4. Connect machines to switch, as follows:
- First machine interface #1 to router interface #0
- First machine interface #2 to router interface #1
- Second machine interface #1 to router interface #2
- Second machine interface #2 to router interface #3, and so on.
2 x multi-mode fiber
Procedure:
1. Insert uplink SFPs into GGC router: Insert up to 2 100G QSFPs into GGC router, starting from
interface #1/0. 10G uplinks aren’t supported.
2. Insert machine-facing 10G SFPs into GGC router, starting from interface #0.
4. Connect machines to switch, as follows:
- First machine interface #1 to router interface #0
- First machine interface #2 to router interface #1
- Second machine interface #1 to router interface #2
- Second machine interface #2 to router interface #3, and so on.
Procedure:
1. Connect machines to switch with direct-attach cable, as shown:
- First machine 40G interface (#1 in the diagram) to GGC router interface #0
- Second machine 40G interface (#1 in the diagram) to GGC router interface #1, and so on.
R740xd2 Cabling
2. Insert 100G QSFPs used for uplinks into GGC router, starting from interface #24.
R740xd2 Uplinks
2 x multi-mode fibers
Procedure:
1. Insert SFPs into GGC router:
- Machine-facing 10G SFPs, starting from interface #0
- Machine-facing 1G SFPs, starting from interface #16
- Uplink SFPs, starting from interface #24, or QSFPs, starting from interface #1/0
2. Insert 10G SFPs into machine network interfaces #1 and #2, as shown.
3. Connect machines to switch with fiber, as shown:
- First machine 10G interfaces #1, #2 to router interfaces #0, #1
- Second machine 10G interfaces #1, #2 to router interfaces #2, #3, and so on.
4. Connect machines to switch with Cat5e cabling, as shown:
- First machine RJ45 interface #1 to router interface #16
- Second machine RJ45 interface #1 to router interface #17, and so on.
2 x multi-mode fibers
Additional 100G QSFPs for uplink to your network (10G SFP uplinks aren’t supported)
Procedure:
1. Insert SFPs into GGC router:
- Machine-facing 10G SFPs, starting from interface #0
- Machine-facing 1G SFPs, starting from interface #24
- Uplink QSFPs, starting from interface #1/0
2. Insert 10G SFPs into machine network interfaces #1 and #2, as shown.
3. Connect machines to switch with fiber, as shown:
- First machine 10G interfaces #1, #2 to router interfaces #0, #1
- Second machine 10G interfaces #1, #2 to router interfaces #2, #3, and so on.
4. Connect machines to switch with Cat5e cabling:
- First machine RJ45 interface #1 to router interface #24
- Second machine RJ45 interface #1 to router interface #25, and so on.
Procedure:
1. Insert 100G QSFPs into GGC router, starting from interface #24.
2. Connect machines to switch with direct-attach cable, as shown:
- First machine 40G interface (#2 in the diagram) to GGC router interface #0
- Second machine 40G interface (#2 in the diagram) to GGC router interface #1, and so on.
Procedure:
Connect machine RJ45 network interfaces #1 and #2, as shown, to your switch.
R430 Cabling
Procedure:
Connect machine RJ45 network interfaces #1 and #2, as shown, to your switch.
R440 Cabling
11.3 Dell R730xd
2 x 10G SFPs (SR or LR): 1 for the machine, 1 for the switch
Procedure:
4 x 10G SFPs (SR or LR); 2 for the machine, 2 for the router
Procedure:
1. Insert two SFPs into machine network interfaces #1 and #2, as shown.
5. On your switch, add the two ports facing each machine into a separate LACP bundle.
2 x 10G SFPs (SR or LR): 1 for the machine, 1 for the switch
Procedure:
12 Appendix C - Configuration Examples - ISP Managed Switch
These examples are for illustration only; your configuration may vary. Contact your switch vendor for
detailed configuration support for your specific equipment.
12.1 Cisco Switch Interface Configuration (LACP Disabled)
Replace interface descriptions mynode-abc101 with the name of the GGC node and the machine
number.
!
interface TenGigabitEthernet1/1
description mynode-abc101
switchport mode access
flowcontrol send off
spanning-tree portfast
!
interface TenGigabitEthernet1/2
description mynode-abc102
switchport mode access
flowcontrol send off
spanning-tree portfast
!
interface TenGigabitEthernet1/3
description mynode-abc103
switchport mode access
flowcontrol send off
spanning-tree portfast
!
interface TenGigabitEthernet1/4
description mynode-abc104
switchport mode access
flowcontrol send off
spanning-tree portfast
!
end
12.2 Cisco Switch Interface Configuration (LACP Enabled)
Replace interface descriptions mynode-abc101-Gb1 with the name of the GGC node, the machine
number, and the machine interface name.
!
interface GigabitEthernet1/1
description mynode-abc101-Gb1
switchport mode access
flowcontrol send off
channel-protocol lacp
channel-group 1 mode passive
spanning-tree portfast
!
interface GigabitEthernet1/2
description mynode-abc101-Gb2
switchport mode access
flowcontrol send off
channel-protocol lacp
channel-group 1 mode passive
spanning-tree portfast
!
interface Port-channel1
description mynode-abc101
switchport
switchport mode access
no port-channel standalone-disable
spanning-tree portfast
!
interface GigabitEthernet1/3
description mynode-abc102-Gb1
switchport mode access
flowcontrol send off
channel-protocol lacp
channel-group 2 mode passive
spanning-tree portfast
!
interface GigabitEthernet1/4
description mynode-abc102-Gb2
switchport mode access
flowcontrol send off
channel-protocol lacp
channel-group 2 mode passive
spanning-tree portfast
!
interface Port-channel2
description mynode-abc102
switchport
switchport mode access
no port-channel standalone-disable
spanning-tree portfast
end
12.3 Juniper Switch Interface Configuration (LACP Disabled)
Replace interface descriptions mynode-abc101 with the name of the GGC node and the machine
number.
12.4 Juniper Switch Interface Configuration (LACP Enabled)
Replace interface descriptions mynode-abc101-Xe1 with the name of the GGC node, the machine
number, and the interface name on your switch.
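As an illustration only, a minimal Junos-style sketch of an LACP-enabled, machine-facing aggregated link is shown below. The interface names and ae number are hypothetical placeholders, and exact syntax varies by Junos platform and version:
chassis {
    aggregated-devices {
        ethernet {
            device-count 2;
        }
    }
}
interfaces {
    xe-0/0/1 {
        description mynode-abc101-Xe1;
        ether-options {
            802.3ad ae0;
        }
    }
    xe-0/0/2 {
        description mynode-abc101-Xe2;
        ether-options {
            802.3ad ae0;
        }
    }
    ae0 {
        description mynode-abc101;
        aggregated-ether-options {
            lacp {
                passive;
            }
        }
        unit 0 {
            family ethernet-switching;
        }
    }
}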
policy-statement no-routes {
term default {
then reject;
}
}
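As an illustration only, a policy like no-routes above might be applied as an import policy on the BGP session with the GGC node (see the BGP configuration sections), since that session isn’t used to learn routes. The group name, peer AS, export policy, and neighbor address below are hypothetical placeholders; use the details from the ISP Portal:
protocols {
    bgp {
        /* Hypothetical group name, AS number, and neighbor address - use the details from the ISP Portal */
        group ggc {
            type external;
            peer-as 64497;
            /* Reject anything received from the node; the session only signals your serving prefixes */
            import no-routes;
            /* Hypothetical export policy that advertises the prefixes this node should serve */
            export serve-prefixes;
            neighbor 192.0.2.10;
        }
    }
}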
12.8 Juniper BGP Configuration (AS-PATH Based Policy)
13 Appendix D - Environmental Requirements
The required operating temperature is 10°C to 35°C (50°F to 95°F), at up to 80% non-condensing humidity.
You might see exhaust temperatures of up to 50°C (122°F); this is normal.
See Appendix F - Vendor Hardware Documentation for further environmental and mechanical
specifications.
14 Appendix E - Power Requirements
Each machine requires dual power feeds. Machines are shipped with either 110V/240V AC PSUs or 50V DC PSUs.
Google doesn’t provide all power cord types. If your facility requires other types of power cords and
they’re not listed in the table below, you’ll need to provide them.
Google doesn’t provide DC power cords for machines; have your electrician connect machines to DC
power.
If you have further questions related to hardware delivered by Google and can’t find answers in this
document, consult the appropriate resource listed below.
2. Dell’s estimated maximum potential instantaneous power draw of the product under maximum
usage conditions. Use this value only for sizing the circuit breaker. Peak power values for other
platforms were sourced in-house by running synthetic stress tests.↩
3. Dell’s estimated maximum potential instantaneous power draw of the product under maximum
usage conditions. Use this value only for sizing the circuit breaker. Peak power values for other
platforms were sourced in-house by running synthetic stress tests.↩
4. Dell’s estimated maximum potential instantaneous power draw of the product under maximum
usage conditions. Use this value only for sizing the circuit breaker. Peak power values for other
platforms were sourced in-house by running synthetic stress tests.↩