Lab Guide Training v2.0


Lab Overview

NOTE: Configuration management for the lab is controlled through a mix of ZTP, eAPI, Python, and Perl
scripting, all of which rely on the following configuration items to function (a sample eAPI call is sketched after this list):

• Do not change or remove the Management IP address.


• Do not change or remove the eAPI configuration.
• Do not change or remove the Script or Class users.
• Do not change or remove the login banner.
• Do not change or remove the hostname.
• Do not add/change a password to the admin user.
• Do not add/change an enable secret password.
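
For context, eAPI is simply JSON-RPC over HTTP(S), which is why the eAPI configuration and the Script/Class users must stay in place. A minimal sketch of the kind of call the lab automation depends on is shown below; the switch address 192.168.0.10 and the script/password credentials are placeholders only:

curl -sk https://192.168.0.10/command-api --user script:password \
     --data '{"jsonrpc": "2.0", "method": "runCmds",
              "params": {"version": 1, "cmds": ["show version"], "format": "json"},
              "id": "1"}'

If the management IP address, the eAPI configuration, or the script users are changed, calls like this one fail and the lab automation breaks.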

Lab Description

The lab network is constructed such that every pair of leaf switches is connected to the spine switches in the
following topology:

Lab Access (Ravello Systems)

The lab network is based on virtual machines built with Ravello Systems' solution, combined with Arista's virtual
switch (vEOS) running on those virtual machines. Essentially, the instructor (administrator) created several
virtual machines in Ravello and loaded them with the vEOS software image from Arista Networks (topology
shown above) so that they behave as virtual Arista switches. The instructor then assigns these switches to each student,
so each student has a lab network with multiple switches on which to practice EOS commands and implement
various networking solutions.

In our class, the instructor will first create the lab networks (this process is called token creation) and assign one to
each student. Each assignment is delivered as a URL, and each student will get a different one.

To access the lab, students need to SSH to the virtual machine called "LabAccess" and then use LabAccess as
a jump server to SSH to all the VMs (or access them through the direct console).

Here are the steps:

1. After clicking the provided URL link (please use Chrome if you can), students will see this:




2. The public IP address is shown at the top left corner of the host called "LabAccess"; students can use this
information to initiate remote access using terminal software such as SecureCRT. Taking the snapshot
above as an example:

35.188.149.129 // port 22 so it is SSH

Username and password are both arista

Please DO NOT CHANGE THE USERNAME/PASSWORD!!!
Please DO NOT UPGRADE/DOWNGRADE SOFTWARE!!!



Students can now make selections from the menu. To toggle between sessions, type "exit". Please
note that these are console sessions to each VM.

Students can also use SSH to access each VM if they prefer. To do so, simply select "10. Shell(bash)"
and then use SSH to access each box (as shown below). This way, students can create multiple
sessions without toggling between console sessions, which some folks prefer.

However, the default configuration does NOT have SSH enabled; students must configure SSH in order to use
this direct access method.

switch>en
switch#conf t
switch(config)#management ssh
switch(config-mgmt-ssh)#no shut
switch(config-mgmt-ssh)#end
switch#

Please note that the IP address of the management interface (ma1) of each switch is in the range 192.168.0.X (X = 10-15).

If you are unsure of the specific IP address, use the "Device Menu" above to connect to each switch and
find out (show interface ma1).

Please DO NOT change/delete this management IP address.
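
As a minimal sketch of this direct-access method (the address 192.168.0.10 is just one value from the range above and the hostname shown is illustrative; substitute the switch you actually want to reach):

[arista@LabAccess ~]$ ssh admin@192.168.0.10
student-20>enable
student-20#

The admin user has no password configured in this lab, so the session should drop you straight into the EOS CLI.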





This concludes the lab access procedure part.

Note: All the VMs are loaded with Arista vEOS. The only difference between vEOS and the EOS running on a
physical switch is that vEOS does not have a hardware dataplane; the bridging and routing capabilities
are emulated in software.

IP ACL, NAT, DirectFlow, LANZ, and "platform" commands are NOT available on vEOS.

Lab#1 // EOS CLI Fundamentals
Lab Objectives:

In this activity you will:


• Execute a number of commands on the CLI
• Interact with the BASH shell and navigate the Linux file structure
• Use FastCli to execute a CLI command from BASH
• Configure your switch to send email and send yourself a test email

TASK 1: Test drive the CLI

Step 1
Execute the following CLI commands on your switch to familiarize yourself with the config.

show run
show lldp neighbors
show interfaces status connected
show ip route

Step 2
Explore the aliases

HINT: use the "show run section alias" command to view the aliases that are already configured.

Step 3
Use the “show run section interface ethernet” command to view the running configuration of
each interface.

student-20# show run section interface ethernet


interface Ethernet1
description [ ESXi ]
!
interface Ethernet2
description [ Agile Port ]
shutdown
!
<OUTPUT OMITTED>

Step 4
Configure an interface description of "This is NOT a link to the spine" on interface Ethernet 2.

student-20# conf
student-20(config)# int Et2
student-20(config-if-Et2)#description This is NOT a link to the spine
student-20(config-if-Et2)#

Step 5

Use the “show run section spine” command to view your change.

student-20(config-if-Et2)# sh run section spine
interface Ethernet2
description This is NOT a link to the spine
!
interface Ethernet21
description [ Spine-1 ]
!
interface Ethernet22
description [ Spine-2 ]
student-20(config-if-Et2)# exit
student-20(config)#

HINT: Notice how “show run section” displayed other sections of the configuration. This command can
be extremely useful.

Step 6:

Generate a custom syslog using the “send log message” command. Make the syslog say “I like this
feature”

student-20(config)# send log message I like this feature


student-20(config)#

Step 7:

Use the “show log last 5 min” command to view your custom syslog.

NOTE: You could also grep for a string in your custom syslog using the “show log | grep” command.

student-20(config)# sh log last 5 min


Feb 3 02:22:07 student-20 Cli: %SYS-6-LOGMSG_INFO: Message from admin
on vty3 (10.0.0.100): I like this feature
student-20(config)#

Step 8: Configure your switch to send email and then send yourself an email. (This is just command practice;
the actual email function will not work.)

student-20(config)# email
student-20(config-email)# from-user [email protected]
student-20(config-email)# server 10.0.0.100
student-20(config-email)# end
student-20# show tech | email [email protected]

NOTE: The "show tech | email [email protected]" command is only an example! Replace it with a command of
your choosing and a valid email address.

TASK 2: Enter BASH and get familiar with the Linux kernel.

Step 1
Enter bash on your switch by typing "bash". To exit bash, type "exit".

Student-05# bash

Arista Networks EOS shell

[admin@Student-05 ~]$

Step 2
Show the interfaces on the switch using the "ifconfig -a" command.

[admin@Student-05 ~]$ ifconfig -a


cpu Link encap:Ethernet HWaddr 00:1C:73:68:D7:F7
UP BROADCAST RUNNING MULTICAST MTU:9216 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

et1 Link encap:Ethernet HWaddr 00:1C:73:68:D7:F7


UP BROADCAST RUNNING MULTICAST MTU:9214 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:21357 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:2724658 (2.5 MiB)

et2 Link encap:Ethernet HWaddr 00:1C:73:68:D7:F7


UP BROADCAST MULTICAST MTU:9214 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

et3 Link encap:Ethernet HWaddr 00:1C:73:68:D7:F7


UP BROADCAST MULTICAST MTU:9214 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

<output omitted>

Step 3
Issue the “top” command. Inspect the output including items such as the load average, % CPU and % MEM
for processes, etc. Break out of top by typing “control-c”.

[admin@Student-05 ~]$ top

top - 00:58:47 up 11:31, 3 users, load average: 0.04, 0.05, 0.10


Tasks: 204 total, 1 running, 203 sleeping, 0 stopped, 0 zombie
Cpu(s): 3.6%us, 1.7%sy, 0.0%ni, 94.7%id, 0.0%wa, 0.0%hi, 0.0%si,
0.0%st
Mem: 4017112k total, 2101608k used, 1915504k free, 164552k
buffers
Swap: 0k total, 0k used, 0k free, 1328684k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND


1936 root 20 0 824m 135m 86m S 3.3 3.5 22:33.40
FocalPointV2
1662 root 20 0 434m 145m 84m S 1.0 3.7 6:37.61 Sysdb
1745 root 20 0 388m 46m 15m S 1.0 1.2 6:30.97
AgentMonitor
1661 root 20 0 391m 36m 3448 S 0.7 0.9 2:54.15 ProcMgr-
worker
1244 admin 20 0 416m 73m 26m S 0.3 1.9 0:06.00 Cli
<output omitted>

TASK 3: Use the FastCLI tool to run a CLI command from BASH.

Step 1
From bash, use the "FastCli" command to issue the CLI command "show interfaces status".

[admin@Student-05 ~]$ FastCli -c 'show interfaces status'


Port Name Status Vlan Duplex Speed
Type
Et1 [ ESXi ] connected 1 a-full a-1G
1000BASE-T
Et2 notconnect 1 full 10G Not
Present
<output omitted>

Step 2
Issue the same command and redirect the output to a file named "sh_int_stat.txt".

[admin@Student-05 ~]$ FastCli -c 'show interfaces status' > sh_int_stat.txt
[admin@Student-05 ~]$

Step 3
Verify your file was created.

[admin@Student-05 ~]$ ls
sh_int_stat.txt
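
Because FastCli simply writes to stdout, its output can be combined with standard Linux tools from BASH. A small, hedged example (your output will differ):

[admin@Student-05 ~]$ FastCli -c 'show version' | grep -i 'mac address'
[admin@Student-05 ~]$ wc -l sh_int_stat.txt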

End of lab

Lab#2 // Multi-Chassis LAG (MLAG)
Lab Objectives:

In this lab you will:


• Configure Multi-chassis LAG (MLAG) and observe its operation in steady state

Diagram:

Configure MLAG on the two Spine switches using the following criteria (a consolidated Spine1 configuration sketch follows step 10)

1. Confirm MLAG Heartbeats are permitted in the switch control plane ACL.
1. show ip access-lists default-control-plane-acl

2. Configure the MLAG Peer Link.


1. config
2. interface eth1
3. switchport
4. switchport mode trunk
5. channel-group 10 mode active
6. interface port-channel 10
7. switchport
8. switchport mode trunk

3. Configure the MLAG VLAN (both Layer 2 and Layer 3).


1. vlan 4094
2. trunk group mlagpeer
3. interface port-channel 10
4. switchport trunk group mlagpeer
5. exit
6. no spanning-tree vlan 4094
7. interface vlan 4094
8. description MLAG Peer Link
9. ip address <match drawing>
10. ping <match drawing to ping MLAG neighbor IP address>

4. Define the MLAG Domain.


1. mlag
2. local-interface vlan 4094
3. peer-address <match drawing>
4. peer-link port-channel 10
5. domain-id mlag01

5. Configure MLAG member ports


1. int eth2-3
2. switchport
3. switchport mode trunk
4. channel-group 12 mode active
5. interface port-channel 12
6. switchport
7. switchport mode trunk
8. mlag 12

6. Verify MLAG operation


1. show mlag
2. show mlag detail
3. show mlag interfaces

7. Verify switching operation


1. show interface status
2. show lldp nei

8. Configure MLAG member ports


1. int eth4-5
2. switchport
3. switchport mode trunk
4. channel-group 34 mode active
5. interface port-channel 34
6. switchport
7. switchport mode trunk
8. mlag 34

9. Verify MLAG operation


1. show mlag
2. show mlag detail
3. show mlag interfaces

10. Verify switching operation
1. show interface status
2. show lldp nei
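
Pulling steps 2 through 4 together, a consolidated sketch of the Spine1 side is shown below. The peer-link addressing (10.0.0.1/30 with 10.0.0.2 as the peer) is an assumption for illustration only; always use the addresses from your drawing. Note that the full EOS mode name is "mlag configuration":

vlan 4094
   trunk group mlagpeer
!
no spanning-tree vlan 4094
!
interface Ethernet1
   switchport mode trunk
   channel-group 10 mode active
!
interface Port-Channel10
   switchport mode trunk
   switchport trunk group mlagpeer
!
interface Vlan4094
   description MLAG Peer Link
   ip address 10.0.0.1/30
!
mlag configuration
   domain-id mlag01
   local-interface Vlan4094
   peer-address 10.0.0.2
   peer-link Port-Channel10

Spine2 mirrors this with the addressing swapped (its SVI would use 10.0.0.2/30 and its peer-address would be 10.0.0.1).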

Configure MLAG on the Leaf1 and Leaf2 switches using the following criteria

11. Configure Port-channels on Leafs


1. int eth1
2. switchport
3. switchport mode trunk
4. channel-group 10 mode active
5. interface port-channel 10
6. switchport
7. switchport mode trunk

12. Verify MLAG operation


1. show mlag
2. show mlag detail
3. show mlag interfaces

13. Verify switching operation


1. show interface status
2. show lldp nei
3. show interfaces trunk

14. Configure the MLAG VLAN (both Layer 2 and Layer 3).
1. vlan 4094
2. trunk group mlagpeer
3. interface port-channel 10
4. switchport trunk group mlagpeer
5. exit
6. no spanning-tree vlan 4094
7. interface vlan 4094
8. description MLAG Peer Link
9. ip address <match drawing>
10. ping <match drawing to ping MLAG neighbor IP address>

15. Define the MLAG Domain.


1. mlag
2. local-interface vlan 4094
3. peer-address <match drawing>
4. peer-link port-channel 10
5. domain-id mlag12

16. Configure Port-channels on Leafs


1. int eth2-3
2. switchport
3. switchport mode trunk
4. channel-group 12 mode active
5. interface port-channel 12
6. switchport
7. switchport mode trunk
8. mlag 12

17. Verify MLAG operation


1. show mlag
2. show mlag detail
3. show mlag interfaces

18. Verify switching operation


1. show interface status
2. show lldp nei
3. show interfaces trunk

Configure MLAG on the Leaf3 and Leaf4 switches using the following criteria

19. Configure Port-channels on Leafs


1. int eth1
2. channel-group 10 mode active
3. interface port-channel 10
4. switchport mode trunk
5. show interface status

20. Verify MLAG operation


1. show mlag
2. show mlag detail
3. show mlag interfaces

21. Verify switching operation


1. show interface status
2. show lldp nei
3. show interfaces trunk

22. Configure the MLAG VLAN (both Layer 2 and Layer 3).
1. vlan 4094
2. trunk group mlagpeer
3. interface port-channel 10
4. switchport trunk group mlagpeer
5. exit
6. no spanning-tree vlan 4094
7. interface vlan 4094
8. description MLAG Peer Link
9. ip address <match drawing>
10. ping <match drawing to ping MLAG neighbor IP address>

23. Define the MLAG Domain.


1. mlag
2. local-interface vlan 4094
3. peer-address <match drawing>
4. peer-link port-channel 10
5. domain-id mlag34

24. Configure Port-channels on Leafs
1. int eth2-3
2. channel-group 34 mode active
3. interface port-channel 34
4. switchport mode trunk
5. mlag 34
6. show interface status

25. Verify MLAG operation


1. show mlag
2. show mlag detail
3. show mlag interfaces

26. Verify switching operation


1. show interface status
2. show lldp nei
3. show interfaces trunk

The syslog excerpt below shows what an MLAG peer logs when the MLAG TCP session to its peer is lost and spanning tree reconverges; it is included here as a reference for what an MLAG failure looks like:

Jan 2 17:35:15 7050S64-D2 Mlag: %FWK-3-SOCKET_CLOSE_REMOTE: Connection to Mlag (pid:1404)


at tbt://10.0.0.1:4432/ closed by peer (EOF)
Jan 2 17:35:15 7050S64-D2 Mlag: %FWK-3-SELOR_PEER_CLOSED: Peer closed socket connection.
(tbt://10.0.0.1:4432/-in)(Mlag (pid:1404))
Jan 2 17:35:15 7050S64-D2 Mlag: %MLAG-3-STATE_INACTIVE_CONNECTION_CLOSED: MLAG is
inactive with peer 10.0.0.1 because the TCP session was closed
Jan 2 17:35:15 7050S64-D2 Mlag: %MLAG-6-INTF_UNESTABLISHED: Interface Port-Channel1 is no
longer in MLAG 1.
Jan 2 17:35:15 7050S64-D2 Mlag: %MLAG-6-INTF_UNESTABLISHED: Interface Port-Channel10 is no
longer in MLAG 10.
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-ROOTCHANGE: Root changed for instance Vl11: new
root interface is (none), new root bridge mac address is 02:1c:73:19:fe:c8 (this switch)
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-INTERFACE_ADD: Interface Port-Channel1 has been
added to instance Vl11
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-INTERFACE_ADD: Interface Port-Channel10 has been
added to instance Vl11
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-ROOTCHANGE: Root changed for instance Vl1: new
root interface is (none), new root bridge mac address is 02:1c:73:19:fe:c8 (this switch)
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-INTERFACE_ADD: Interface Port-Channel1 has been
added to instance Vl1
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-INTERFACE_ADD: Interface Port-Channel10 has been
added to instance Vl1
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-ROOTCHANGE: Root changed for instance Vl10: new
root interface is (none), new root bridge mac address is 02:1c:73:19:fe:c8 (this switch)
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-INTERFACE_ADD: Interface Port-Channel1 has been
added to instance Vl10
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-INTERFACE_ADD: Interface Port-Channel10 has been
added to instance Vl10
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-ROOTCHANGE: Root changed for instance Vl50: new
root interface is (none), new root bridge mac address is 02:1c:73:19:fe:c8 (this switch)
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-INTERFACE_ADD: Interface Port-Channel1 has been
added to instance Vl50

Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-INTERFACE_ADD: Interface Port-Channel10 has been
added to instance Vl50
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-ROOTCHANGE: Root changed for instance Vl50: new
root interface is Port-Channel10, new root bridge mac address is 02:1c:73:1a:b5:26
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-ROOTCHANGE: Root changed for instance Vl10: new
root interface is Port-Channel10, new root bridge mac address is 00:1c:73:1a:b5:26
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-ROOTCHANGE: Root changed for instance Vl1: new
root interface is Port-Channel10, new root bridge mac address is 00:1c:73:1a:b5:26
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-ROOTCHANGE: Root changed for instance Vl11: new
root interface is Port-Channel10, new root bridge mac address is 00:1c:73:1a:b5:26
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-INTERFACE_ADD: Interface Port-Channel50 has been
added to instance Vl11
Jan 2 17:35:15 7050S64-D2 Stp: %SPANTREE-6-INTERFACE_ADD: Interface Port-Channel50 has been
added to instance Vl1
Jan 2 17:35:16 7050S64-D2 Stp: %SPANTREE-6-INTERFACE_ADD: Interface Port-Channel50 has been
added to instance Vl10
Jan 2 17:35:16 7050S64-D2 Stp: %SPANTREE-6-INTERFACE_ADD: Interface Port-Channel50 has been
added to instance Vl50
Jan 2 17:35:16 7050S64-D2 Stp: %SPANTREE-6-INTERFACE_STATE: Interface Port-Channel50 instance
Vl11 moving from discarding to forwarding

Troubleshooting commands

show mlag detail
show mlag interface detail
show mlag tunnel counter detail
show lacp nei
show lldp nei
trace commands
bash cat /var/log/messages
bash ls /var/log/agents
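
As a hedged example of chasing MLAG state changes from bash (the agent log file names include the agent PID, so the exact name will differ on your switch):

Spine1# bash
[admin@Spine1 ~]$ grep -i mlag /var/log/messages | tail -n 5
[admin@Spine1 ~]$ ls /var/log/agents/ | grep -i mlag
[admin@Spine1 ~]$ tail -n 20 /var/log/agents/Mlag-1404    (file name is illustrative)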

End of lab

Lab#3 // TCPdump and Mirror to EOS
Lab Objectives:
• Use TCPDUMP to capture control plane traffic on your switch
• Use the Mirror to EOS feature to capture data plane traffic on your switch.

TASK 1: Use TCPDUMP to capture various control plane traffic on your switch

Step 1
Use TCPDUMP to capture all control plane traffic in/out of interface Ethernet 2.

Spine1# bash tcpdump -i et2


tcpdump: WARNING: et19: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol
decode
listening on et19, link-type EN10MB (Ethernet), capture size 65535
bytes
03:28:18.889472 00:1c:73:40:27:34 (oui Arista Networks) >
01:80:c2:00:00:00 (oui Unknown), 802.3, length 119: LLC, dsap STP
(0x42) Individual, ssap STP (0x42) Command, ctrl 0x03: STP 802.1s,
Rapid STP, CIST Flags [Proposal, Learn, Forward, Agreement], length 102
<output omitted>

Step 2
Use TCPDUMP to capture only packets to/from your MLAG peer’s IP address.

Spine1# bash tcpdump -nei any host X.X.X.X


tcpdump: verbose output suppressed, use -v or -vv for full protocol
decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size
65535 bytes
03:31:40.958859 In 00:1c:73:68:d2:7b ethertype IPv4 (0x0800), length
89: 10.100.100.37.4432 > 10.100.100.38.58181: Flags [P.], seq
2859282953:2859282974, ack 2454782164, win 124, options [nop,nop,TS val
95460474 ecr 95907066], length 21
03:31:40.958914 Out 00:1c:73:40:27:21 ethertype IPv4 (0x0800), length
68: 10.100.100.38.58181 > 10.100.100.37.4432: Flags [.], ack 21, win
501, options [nop,nop,TS val 95907566 ecr 95460474], length 0<output
omitted>

Step 3
Use TCPDUMP to capture the LACP packets on an interface in port-channel 1000.

Spine1# bash tcpdump -nevvi et2 ether dst host 01:80:c2:00:00:02


tcpdump: WARNING: et23: no IPv4 address assigned
tcpdump: listening on et23, link-type EN10MB (Ethernet), capture size
65535 bytes
03:33:25.058390 00:1c:73:40:27:38 > 01:80:c2:00:00:02, ethertype Slow
Protocols (0x8809), length 124: LACPv1, length 110
Actor Information TLV (0x01), length 20
System 00:1c:73:40:27:21, System Priority 32768, Key 1000, Port
23, Port Priority 32768
State Flags [Activity, Aggregation, Synchronization,
Collecting, Distributing]

0x0000: 8000 001c 7340 2721 03e8 8000 0017 3d00
0x0010: 0000
Partner Information TLV (0x02), length 20
System 00:1c:73:68:d2:7b, System Priority 32768, Key 1000, Port
23, Port Priority 32768
State Flags [Activity, Aggregation, Synchronization,
Collecting, Distributing]
<output omitted>
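
TCPDUMP can also write a capture to a file for later analysis; a hedged example (the packet count and file name are arbitrary):

Spine1# bash tcpdump -c 50 -w /mnt/flash/et2-capture.pcap -i et2
Spine1# bash tcpdump -nn -r /mnt/flash/et2-capture.pcap

The first command stops after 50 packets and saves them to flash; the second reads the saved capture back without name resolution.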

TASK 2: Use Mirror to EOS to capture data plane traffic on your switch

Step 1
Create a monitor session with a source interface of Ethernet 2 and a destination of "cpu".

Spine1# conf
student-20(config)# monitor session sniff source ethernet 2 both
student-20(config)# monitor session sniff destination cpu

Step 2
Use the “show monitor session” command to view the newly created “mirror” kernel interface.

Spine1(config)# sh monitor session

Session sniff
------------------------

Source Ports:

Both: Et19

Destination Ports:

Cpu : active (mirror0)

student-20(config)#

Step 3
TCPDUMP on the mirror interface

(Currently vEOS in our lab does not support the mirror interface, so we just practice the commands.)

Spine1(config)# bash tcpdump -i mirror0


tcpdump: WARNING: mirror1: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol
decode
listening on mirror1, link-type EN10MB (Ethernet), capture size 65535
bytes
04:09:51.989495 00:1c:73:40:27:34 (oui Arista Networks) >
01:80:c2:00:00:00 (oui Unknown), 802.3, length 119: LLC, dsap STP
(0x42) Individual, ssap STP (0x42) Command, ctrl 0x03: STP 802.1s,
Rapid STP, CIST Flags [Proposal, Learn, Forward, Agreement]

End of Lab

Lab#4 // VMTracer Configuration

Log into the Ravello Systems portal as per Student Assignment.

Browse via HTTP to the public IP address assigned to the jumphost server. Click either CVP to open the CloudVision page, or vCenter
to open the vCenter Server page.

For this VM Tracer lab, let's click on the vCenter page and enter root/vmware.


Two VMware ESXi 5.5 servers have been added to our topology. For this lab we will configure VM Tracer on
Leaf1 and Leaf3 and run show vmtracer commands to verify real-time changes.



1. Configure a VM Tracer session on the Leaf1 and Leaf3 switches. The vmtracer sessions simply talk to vCenter via
the management interface of the switch (ma1):
config t
vmtracer session ATD
url https://192.168.0.20/sdk
username root
password vmware

2. Eth5 on Leaf1 and Leaf3 is connected to the ESXi host. Configure this port on both switches as a VM Tracer
enabled port.
int eth5
vmtracer vmware-esx
3. Verify your config
show vmtracer session ATD
show run sec vmtracer

4. Once you've verified the sessionState of your session is Connected, run show vmtracer ? commands to gain
visibility into your vCenter environment from the switch. Some commands to try are…
show vmtracer session
show vmtracer interface eth5
show vmtracer vm
show vmtracer vm detail
show vmtracer interface host
5. (optional) Log into Leaf1 and type in the command watch diff show vmtracer vm. Now, log into vCenter and
migrate (vMotion) DSL-1 from ESXi-1 to ESXi-2 by right-clicking the VM and selecting Migrate. Notice the
state of the DSL-1 VM change, and notice it drop out of the show command output.



6. (optional) If you have time, create a new VM on one of the hosts. Notice that once you've created the VM it will
appear in the show vmtracer vm command output.

Lab#5 // BGP Lab

Lab Objectives:

• Establish a BGP session between your switch and the spine switches
• Advertise and accept routes to/from the spine switches
• Configure and confirm BGP multi-path

Diagram:



Removing the previous L2 lab configuration

If you are working in the Full Lab and have configured Layer 2, we need to tear that out and configure Layer 3
between the Spines and Leafs.

1. On each device, remove the Leaf/Spine Layer 2 configurations
1. Spine 1 and Spine 2
1. no mlag configuration
2. no int po10
3. no int po12
4. no int po34
5. int eth1
6. shut
7. no int vlan 4094
8. spanning-tree vlan 4094

2. Leaf 1 and Leaf 2
1. no int po12
3. Leaf 3 and Leaf 4
1. no int po34

Configure eBGP on the Spine switches using the following criteria

1. Based on the diagram, turn on BGP and configure the neighbor relationships
1. Spine 1
1. int eth2
2. no switchport
3. ip add 172.16.200.1/30
4. int eth3
5. no switchport
6. ip add 172.16.200.5/30
7. int eth4
8. no switchport
9. ip add 172.16.200.9/30
10. int eth5
11. no switchport
12. ip add 172.16.200.13/30
13. sh ip int brief
14. sh run sec bgp (verify BGP is not running)
15. config
16. interface loop0
17. ip add 172.16.0.1/32
18. router bgp 65001
19. router-id 172.16.0.1
20. bgp log-neighbor-changes
21. nei 172.16.200.2 remote-as 65101
22. nei 172.16.200.6 remote-as 65102
23. nei 172.16.200.10 remote-as 65103
24. nei 172.16.200.14 remote-as 65104
25. sh run sec bgp (verify bgp configuration)
2. Spine 2
1. int eth2
2. no switchport
3. ip add 172.16.200.17/30
4. int eth3
5. no switchport
6. ip add 172.16.200.21/30
7. int eth4
8. no switchport
9. ip add 172.16.200.25/30
10. int eth5
11. no switchport
12. ip add 172.16.200.29/30
13. sh ip int brief
14. sh run sec bgp (verify BGP is not running)
15. config
16. interface loop0
17. ip add 172.16.0.2/32
18. router bgp 65002
19. router-id 172.16.0.2
20. bgp log-neighbor-changes
21. nei 172.16.200.18 remote-as 65101
22. nei 172.16.200.22 remote-as 65102
23. nei 172.16.200.26 remote-as 65103
24. nei 172.16.200.30 remote-as 65104
25. sh run sec bgp (verify bgp configuration)

Configure eBGP on the Leaf switches using the following criteria

1. Based on the diagram, turn on BGP and configure the neighbor relationships
1. Leaf 1
1. int eth2
2. no switchport
3. ip add 172.16.200.2/30
4. int eth3
5. no switchport
6. ip add 172.16.200.18/30
7. sh ip int brief
8. interface loop0
9. ip add 172.16.0.3/32
10. router bgp 65101
11. router-id 172.16.0.3
12. bgp log-neighbor-changes
13. nei 172.16.200.1 remote-as 65001
14. nei 172.16.200.17 remote-as 65002
15. sh run sec bgp (verify bgp configuration)
16. sh ip bgp sum (verify BGP neighbors moved to Estab)

2. Leaf 2
1. int eth2
2. no switchport
3. ip add 172.16.200.6/30
4. int eth3
5. no switchport
6. ip add 172.16.200.22/30
7. sh ip int brief
8. interface loop0
9. ip add 172.16.0.4/32
10. router bgp 65102
11. router-id 172.16.0.4
12. bgp log-neighbor-changes
13. nei 172.16.200.5 remote-as 65001
14. nei 172.16.200.21 remote-as 65002
15. sh run sec bgp (verify bgp configuration)
16. sh ip bgp sum (verify BGP neighbors moved to Estab)
3. Leaf 3
1. int eth2
2. no switchport
3. ip add 172.16.200.10/30
4. int eth3
5. no switchport
6. ip add 172.16.200.26/30
7. sh ip int brief
8. interface loop0
9. ip add 172.16.0.5/32
10. router bgp 65103
11. router-id 172.16.0.5
12. bgp log-neighbor-changes
13. nei 172.16.200.9 remote-as 65001
14. nei 172.16.200.25 remote-as 65002
15. sh run sec bgp (verify bgp configuration)
16. sh ip bgp sum (verify BGP neighbors moved to Estab)

4. Leaf 4
1. int eth2
2. no switchport
3. ip add 172.16.200.14/30
4. int eth3
5. no switchport
6. ip add 172.16.200.30/30
7. sh ip int brief
8. interface loop0
9. ip add 172.16.0.6/32
10. router bgp 65104
11. router-id 172.16.0.6
12. bgp log-neighbor-changes
13. nei 172.16.200.13 remote-as 65001
14. nei 172.16.200.29 remote-as 65002
15. sh run sec bgp (verify bgp configuration)
16. sh ip bgp sum (verify BGP neighbors moved to Estab)

We don't have any BGP routes yet, so let's spin up some networks on the Leafs.

Create SVIs as per the topology (full configuration not shown here; a hedged sketch for Leaf 1 follows).
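
A minimal sketch for Leaf 1, using the VLAN 12 addressing from the lab diagram (172.16.112.2/24 with the shared virtual-router address 172.16.112.1); adjust the VLAN and addresses for each leaf per the topology:

Leaf1(config)# ip routing
Leaf1(config)# vlan 12
Leaf1(config-vlan-12)# exit
Leaf1(config)# interface vlan 12
Leaf1(config-if-Vl12)# ip address 172.16.112.2/24
Leaf1(config-if-Vl12)# ip virtual-router address 172.16.112.1
Leaf1(config-if-Vl12)# exit

(If the shared virtual address is used, a global ip virtual-router mac-address is also required for it to answer; that detail is outside the scope of this BGP lab.)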

1. Add the following SVIs to the BGP announcements on each of the Leafs
1. Leaf 1
1. router bgp 65101
2. network 172.16.112.0/24
2. Leaf 2
1. router bgp 65102
2. network 172.16.112.0/24
3. Leaf 3
1. router bgp 65103
2. network 172.16.134.0/24
4. Leaf 4
1. router bgp 65104
2. network 172.16.134.0/24

Verify all of the Spines and Leafs see these new network announcements

1. Check the BGP and IP Routing tables on each of the Spines and Leafs
1. Spine 1
1. sh ip bgp
2. sh ip route
2. Spine 2
1. sh ip bgp
2. sh ip route
3. Leaf 1 and Leaf 2
1. sh ip bgp
2. sh ip route
4. Leaf 3 and Leaf 4
1. sh ip bgp
2. sh ip route

Turning on ECMP (Equal Cost Multi-Path), as it is off by default (a short verification example follows this section)

1. Check the BGP and IP route tables on each of the Spines and Leafs
1. show ip bgp, show ip route and show ip route bgp
2. Why are we only seeing one next hop in the FIB?
2. Let's get more hops (sounds like we're brewing beer)
1. On each router, jump into BGP configuration mode and add
1. maximum-paths 4
3. Check the BGP and IP route tables on each of the Spines and Leafs
1. show ip bgp, show ip route and show ip route bgp
2. Ahh, just like a good beer, we’re much hoppier now
3. And notice the new status code in the show ip bgp output
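
As a quick sanity check (a hedged example; pick any prefix learned from both spines), compare the route before and after adding maximum-paths:

Leaf1# show ip route 172.16.134.0/24

Before the change only one next hop is installed in the FIB; afterwards, per the lab's expectation, the route should list next hops learned via both spines.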

BGP Troubleshooting commands

show ip bgp sum
show ip bgp
show ip bgp nei x.x.x.x
show run sec bgp
show log
bash cat /var/log/messages
bash ls /var/log/agents


End of Lab

Appendix A: BGP Route Selection Decision Process
The numbered steps below are the BGP4 (RFC 4271) decision order; rows marked "Non-standard" are vendor-specific tie-breakers applied at that point in the process. Cisco and Juniper follow the RFC order for steps 5 and 6, while Arista evaluates those two steps in the opposite order (noted in parentheses).

Non-standard: Highest Weight
1. Highest LOCAL_PREF
Non-standard: Prefer locally / internally originated routes over received ones
2. Shortest AS_PATH
3. Lowest ORIGIN: IGP < EGP < incomplete (e.g. redistributed static)
4. Lowest MED
Non-standard: Prefer locally / internally originated routes over received ones
5. Prefer eBGP over iBGP (Arista: Smallest IGP metric to next hop)
6. Smallest IGP metric to next hop (Arista: Prefer eBGP over iBGP)
Non-standard: BGP Multipath option
Non-standard: Oldest route
7. Lowest BGP Originator ID, or Router ID if no Originator ID is present
8. Smallest RR cluster list (RFC 4456 standard)
9. Lowest BGP peering address

Lab#6 // Virtualization Overlay with VXLAN

[Lab topology diagram. Key values recovered from the figure: the CVP public IP is dynamic; jump/CVP management IPs are 192.168.0.5 and 192.168.0.4 (login arista/arista). Spines S1 and S2 (management 192.168.0.10 and .11) connect to the leafs on Eth2-Eth5 over the 172.16.200.x/30 point-to-point links from the BGP lab. Leafs L1-L4 (management 192.168.0.14-.17) act as VTEPs with loopbacks 172.16.0.3-.6 and carry SVIs in VLAN 12 (172.16.112.0/24, virtual address 172.16.112.1) and VLAN 34 (172.16.134.0/24, virtual address 172.16.134.1). Host1 (192.168.0.31, 172.16.112.201) connects to Leaf1/Leaf2 Eth4 and Host2 (192.168.0.32, 172.16.112.202) connects to Leaf3/Leaf4 Eth4; one uplink of each host gets shut in this lab (see the note below).]
Note: we've had intermittent connectivity with dual forwarding on a Ravello virtual switch. To avoid this issue,
shut down interface Ethernet4 on Leaf2 and interface Ethernet4 on Leaf4.

For this lab there are two options: use the provided "vxlan" script (by making selections from the menu)
to configure everything except Leaf3, or alternatively, manually build the whole network via CLI.

Option 1: Using the provided “vxlan” script:

The "vxlan" script is Python code that uses the CloudVision Portal REST API to automate the
provisioning of CVP Configlets (i.e., it makes the configuration changes automatically).

1. In the menu interface, select item "16. VXLAN Lab (vxlan) excludes leaf3 instead of leaf4"
1.1. Wait for 5-10 minutes for the configurations to be pushed

2. On Leaf 3; add the Loopback0 interface to the BGP advertisements

2.1. Leaf 3
router bgp 65103
network 172.16.0.5/32
3. Verify these addresses (and all /32 loopbacks) are advertised and received
3.1. On each Spine and Leaf
show ip route bgp
4. Announce a new SVI into BGP

4.1. On Leaf 3; add VLAN 12, SVI VLAN 12 and announce it in BGP
vlan 12
exit
interface vlan 12
ip address 172.16.112.4/24
ip virtual-router address 172.16.112.1
router bgp 65103
network 172.16.112.0/24
4.2. Verify the network is advertised and received on each Spine
show ip route bgp

5. Create the VXLAN VTEP (VTI) interfaces


5.1. Leaf 3
interface vxlan 1
vxlan source-interface loopback 0
vxlan flood vtep 172.16.0.3
vxlan vlan 12 vni 1212
5.2. Verification
show run interface vxlan 1
show vxlan vtep
5.3. On Leaf 3 we need to change the Host2 connection to be in VLAN 12
int eth4
switchport mode access
switchport access vlan 12
5.4. Verification with
show run int eth4
5.5. Verification – from Host 1 and Host 2
ping 172.16.112.1

Note: If ping fails, please check interface Eth4 on both Leaf 1 and Leaf 3. They should be configured as
normal access ports in VLAN 12

6. Log into Host 1 and Host 2 and ping the other host.
6.1. Host 1
ping 172.16.112.202
sh interface eth1 | grep Hardware (note the MAC address)
6.2. Host 2
ping 172.16.112.201
sh interface eth1 | grep Hardware (note the MAC address)
7. Verification – on Leaf 1 and Leaf 3
7.1. Verify the MAC addresses and the associated VTEP IP
show vxlan address-table
7.2. Verify the MAC address and the associated interface
show mac address-table
8. Let’s run some other show commands and tests to poke around VXLAN.
8.1. On Leaf 1 and Leaf 3 issue the following command:
show interface vxlan 1
9. Start a ping on Host 1 with a long count and shut Leaf 3’s VXLAN interface
9.1. Host 1
ping 172.16.112.202 repeat 1000
9.2. Leaf 3
interface vxlan 1
shut (Take note - did your pings stop getting an echo reply?)
no shut (Take note - did your pings pick right back up?)
10. Troubleshooting VXLAN
10.1. First, verify basic IP connectivity between the VTEP endpoints
show ip route
ping x.x.x.x source loopback 0
10.2. Verify jumbo frames are enabled throughout your path by tracing the path between the two VTEPs and
then sending pings along the way (side note: MTR is a useful tool for this scenario). A hedged ping example follows the command list below.
show run interface vxlan 1
show interface vxlan 1
show vxlan address-table
show mac address-table
show log
bash cat /var/log/messages
bash ls /var/log/agents
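
A hedged example of the MTU check described in 10.2 (this assumes your EOS release supports the size and df-bit ping options; point the ping at the remote VTEP loopback):

Leaf1# ping 172.16.0.5 source loopback 0 size 9000 df-bit

If the don't-fragment ping fails while a default-size ping succeeds, an interface along the path is still at its default MTU.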

Option 2: Manually build the whole VXLAN network (no script):

11. On Leaf 1 and Leaf 3; add the Loopback0 interface to the BGP advertisements
11.1. Leaf 1
router bgp 65101
network 172.16.0.3/32
11.2. Leaf 3
router bgp 65103
network 172.16.0.5/32
12. Verify these addresses (and all /32 loopbacks) are advertised and received
12.1. On each Spine and Leaf
show ip route bgp
13. Enable jumbo frames on the Layer 3 interfaces. For Arista, the maximum MTU on Ethernet interfaces is 9214.
13.1. On Spine 1 and Spine 2:
int eth1
mtu 9214
int eth2
mtu 9214
int eth3
mtu 9214
int eth4
mtu 9214
int eth5
mtu 9214
13.2. Verify MTU settings:

show ip interface brief

13.3. On Leaf 1 and Leaf 3:
int eth2
mtu 9214
int eth3
mtu 9214
13.4. Verify MTU settings:
show ip interface brief
14. Announce a new SVI into BGP
14.1. On Leaf 3; add VLAN 12, SVI VLAN 12 and announce it in BGP
vlan 12
exit
interface vlan 12
ip address 172.16.112.4/24
ip virtual-router address 172.16.112.1
router bgp 65103
network 172.16.112.0/24
14.2. Verify the network is advertised and received on each Spine
show ip route bgp

15. Create the VXLAN VTEP interfaces (VTI)
15.1. Leaf 1
interface vxlan 1
vxlan source-interface loopback 0
vxlan flood vtep 172.16.0.5
vxlan vlan 12 vni 1212
15.2. Leaf 3
interface vxlan 1
vxlan source-interface loopback 0
vxlan flood vtep 172.16.0.3
vxlan vlan 12 vni 1212
15.3. Verification
show run interface vxlan 1
show vxlan vtep

15.4. On Leaf 3 we need to change the Host2 connection to be in VLAN 12
int eth4
switchport mode access
switchport access vlan 12
15.5. Verification
show run int eth4
16. Log into Host 1 and Host 2 and give them IP addresses in the VLAN 12 subnet
16.1. Host 1
no int po1
int eth1
no switchport
ip address 172.16.112.201/24
ip route 0/0 172.16.112.1
16.2. Host 2
no int po1
int eth1
no switchport
ip address 172.16.112.202/24
ip route 0/0 172.16.112.1
16.3. Verification – from Host 1 and Host 2
ping 172.16.112.1

Note: If ping fails, please check interface Eth4 on both Leaf 1 and Leaf 3. They should be configured as normal
access ports in VLAN 12

Lab#7 // L2 EVPN Configuration

Note: Based on limitations in vEOS-LAB data plane, EVPN with Multi-homing via MLAG is unsupported. As
such, this lab exercise will not enable MLAG.

17. In the menu interface, select item "17. EVPN Type 2 Lab (l2evpn) excludes leaf3 instead of leaf4"
17.1. Wait for 5-10 minutes for the configurations to be pushed

18. On Leaf 3, configure L2EVPN


18.1. On Leaf 3, set the routing protocol model to multi-agent (ArBGP)
service routing protocols model multi-agent
18.2. Leaf 3 Underlay Interface configurations.
interface Port-Channel4
switchport mode access
switchport access vlan 12
!
interface Ethernet1
shutdown
!
interface Ethernet2
no switchport
ip address 172.16.200.10/30
!

interface Ethernet3
no switchport
ip address 172.16.200.26/30
!

interface Ethernet4
channel-group 4 mode active
lacp rate fast
!
interface Ethernet5
shutdown
!
interface Loopback0
ip address 172.16.0.5/32
!
interface Loopback1
ip address 3.3.3.3/32
ip address 99.99.99.99/32 secondary
!

18.3. Leaf 3 Add Underlay BGP configurations


router bgp 65103
router-id 172.16.0.5
distance bgp 20 200 200
maximum-paths 4 ecmp 4
neighbor SPINE peer-group
neighbor SPINE fall-over bfd
neighbor SPINE maximum-routes 12000
neighbor 172.16.200.9 peer-group SPINE
neighbor 172.16.200.9 remote-as 65001
neighbor 172.16.200.25 peer-group SPINE
neighbor 172.16.200.25 remote-as 65002
redistribute connected
!
19. Verify Underlay
19.1. On each leaf and spine
show ip bgp summary
show ip route bgp

20. On Leaf 3, build BGP Overlay


router bgp 65103
neighbor SPINE-EVPN-TRANSIT peer-group
neighbor SPINE-EVPN-TRANSIT next-hop-unchanged
neighbor SPINE-EVPN-TRANSIT update-source Loopback0
neighbor SPINE-EVPN-TRANSIT ebgp-multihop
neighbor SPINE-EVPN-TRANSIT send-community extended
neighbor SPINE-EVPN-TRANSIT maximum-routes 0
neighbor 172.16.0.1 peer-group SPINE-EVPN-TRANSIT
neighbor 172.16.0.1 remote-as 65001
neighbor 172.16.0.2 peer-group SPINE-EVPN-TRANSIT
neighbor 172.16.0.2 remote-as 65002
!
address-family evpn
neighbor 172.16.0.1 activate
neighbor 172.16.0.2 activate
!
address-family ipv4
no neighbor 172.16.0.1 activate
no neighbor 172.16.0.2 activate

!

21. Verify overlay


21.1. On leaf 1 and 3
show bgp evpn summary
22. Configure L2EVPN
22.1. On Leaf 3; add VLAN 12, and vxlan1
vlan 12
!
interface Vxlan1
vxlan source-interface Loopback1
vxlan udp-port 4789
vxlan vlan 12 vni 1200
!

22.2. On Leaf 3; add mac vrf


router bgp 65103
vlan 12
rd 3.3.3.3:12
route-target both 1:12
redistribute learned
!
23. Verify VXLAN and L2EVPN (an optional extra check follows the steps below)
23.1. On leaf 1 and 3
show interface vxlan1
show bgp evpn route-type mac-ip
###
Log into host 1 and ping host 2
ping 172.16.112.201 (pings the local interface first)
ping 172.16.112.202
###
show bgp evpn route-type mac-ip
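
Optionally (a hedged suggestion; exact command availability depends on your EOS release), you can also inspect the VXLAN MAC table and the EVPN type-3 (imet) routes that replace the static flood list used in the previous lab:

show vxlan address-table
show bgp evpn route-type imet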

Lab#8 // L3 EVPN Configuration

Note: Based on limitations in vEOS-LAB data plane, EVPN with Multi-homing via MLAG is unsupported. As
such, this lab exercise will not enable MLAG.

1. In the menu interface, select item "18. EVPN Type 5 Lab (l3evpn) excludes leaf3 instead of leaf4"
1.1. Wait for 5-10 minutes for the configurations to be pushed

2. On Leaf 3, configure EOS to use ArBGP (the multi-agent routing model) and add Loopback0


2.1. On Leaf 3, set the routing protocol model to multi-agent (ArBGP)
service routing protocols model multi-agent
2.2. Leaf 3 Underlay Interface configurations.
interface Ethernet1
shutdown
!
interface Ethernet2
no switchport
ip address 172.16.200.10/30
!
interface Ethernet3
no switchport
ip address 172.16.200.26/30
!

interface Ethernet4
shutdown
!
interface Ethernet5

channel-group 5 mode active
!
interface Loopback0
ip address 172.16.0.5/32
!
interface Loopback1
ip address 3.3.3.3/32
ip address 99.99.99.99/32 secondary
!

2.3. Leaf 3 Add Underlay BGP configurations


router bgp 65103
router-id 172.16.0.5
distance bgp 20 200 200
maximum-paths 4 ecmp 4
neighbor SPINE peer-group
neighbor SPINE fall-over bfd
neighbor SPINE maximum-routes 12000
neighbor 172.16.200.9 peer-group SPINE
neighbor 172.16.200.9 remote-as 65001
neighbor 172.16.200.25 peer-group SPINE
neighbor 172.16.200.25 remote-as 65002
redistribute connected
!
3. Verify Underlay
3.1. On each leaf and spine
show ip bgp summary
show ip route bgp

4. On Leaf 3, build BGP Overlay


router bgp 65103
neighbor SPINE-EVPN-TRANSIT peer-group
neighbor SPINE-EVPN-TRANSIT next-hop-unchanged
neighbor SPINE-EVPN-TRANSIT update-source Loopback0
neighbor SPINE-EVPN-TRANSIT ebgp-multihop
neighbor SPINE-EVPN-TRANSIT send-community extended
neighbor SPINE-EVPN-TRANSIT maximum-routes 0
neighbor 172.16.0.1 peer-group SPINE-EVPN-TRANSIT
neighbor 172.16.0.1 remote-as 65001
neighbor 172.16.0.2 peer-group SPINE-EVPN-TRANSIT
neighbor 172.16.0.2 remote-as 65002
!
address-family evpn
neighbor 172.16.0.1 activate
neighbor 172.16.0.2 activate
!
address-family ipv4
no neighbor 172.16.0.1 activate
no neighbor 172.16.0.2 activate
!

5. Verify overlay
5.1. On leaf 1 and 3
show bgp evpn summary

6. Configure L3EVPN
6.1. Configure vrf interfaces

vlan 2003
!
interface Port-Channel5
switchport access vlan 2003
!
interface Vlan2003
mtu 9000
no autostate
vrf forwarding vrf1
ip address virtual 172.16.114.1/24
!
interface Loopback901
vrf forwarding vrf1
ip address 200.200.200.2/32
!
6.2. Configure the vrf
vrf definition vrf1
!
ip routing vrf vrf1
!
router bgp 65103
vrf vrf1
rd 3.3.3.3:1001
route-target import 1:1001
route-target export 1:1001
redistribute connected
redistribute static
!

6.3. Map vrf to vni


interface Vxlan1
vxlan vrf vrf1 vni 1001
!
7. Verify the VRF (an additional hedged check follows)
7.1. Leaf 1 and 3
show ip route vrf vrf1 (note route resolution over vni)
show interface vxlan1 (note dynamic vlan to vni mapping)
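
As a final hedged check (command forms depend on the EOS release, and the ping below assumes Leaf 1 was also given vrf1 by the script), the type-5 routes and VRF reachability can be inspected with:

show bgp evpn route-type ip-prefix ipv4
ping vrf vrf1 200.200.200.2     (from Leaf 1; 200.200.200.2 is Leaf 3's Loopback901 in vrf1)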
