MikroTik OSPF Routing
MIKROTIK
Open Shortest Path First (OSPF) is a Link-State routing protocol used by routers to
dynamically exchange route information. It's an open, industry-standard protocol
supported by all major vendors. While OSPF doesn't move traffic across the network
on its own, it does allow routers to discover network paths. Configuring it on
MikroTik routers isn't difficult, and the long-term benefits of using dynamic routing
can be significant.
OSPF Configuration
The following sections walk us through configuring interfaces, adding addresses, setting
up OSPF, and advertising networks:
1. Physical Connections
2. Loopback Interfaces
3. Assigning IP Addresses
4. OSPF Instance Configuration
5. Advertising Networks
Physical Connections
We'll establish the point-to-point links between the routers first. This provides the
foundation for OSPF to communicate. The link between the top and middle routers will use
the 172.16.0.0/30 subnet. The link between the middle and bottom routers will use the
172.16.0.4/30 subnet.
On the top router:
/ip address
add interface=ether2 address=172.16.0.1/30
On the middle router:
/ip address
add interface=ether1 address=172.16.0.2/30
add interface=ether2 address=172.16.0.5/30
On the bottom router:
/ip address
add interface=ether1 address=172.16.0.6/30
Loopback Interfaces
We'll use virtual loopback (bridge) interfaces for this exercise. This makes the
following steps work across any router model, regardless of how many ethernet ports
it has. Running OSPF on a virtual interface also makes the protocol more stable,
because that interface will always be online. The following commands create new
bridge interfaces on all three of our routers:
On the top router:
/interface bridge
add name=ospf comment="OSPF loopback"
add name=lan comment="LAN"
On the middle router:
/interface bridge
add name=ospf comment="OSPF loopback"
add name=lan comment="LAN"
On the bottom router:
/interface bridge
add name=ospf comment="OSPF loopback"
add name=lan comment="LAN"
Assigning IP Addresses
Each ospf bridge interface needs an IP address that can be used later to identify the
router. LAN interfaces need addresses for connecting user-facing LANs. Use the
following commands to assign IP addresses to the new bridge interfaces:
On the top router:
/ip address
add interface=ospf address=10.255.255.1
add interface=lan address=192.168.1.1/24
On the middle router:
/ip address
add interface=ospf address=10.255.255.2
add interface=lan address=192.168.2.1/24
On the bottom router:
/ip address
add interface=ospf address=10.255.255.3
add interface=lan address=192.168.3.1/24
OSPF Instance Configuration
We'll configure the IP addresses created in the previous steps as each router's OSPF router ID.
Since the top router is attached to an upstream provider we'll also advertise the default
route from that device. Use the following commands to configure the OSPF instances:
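The instance commands aren't reproduced in this excerpt; a minimal RouterOS v6-style sketch, using the router IDs assigned above and originating the default route only from the top router:

On the top router:
/routing ospf instance
set [ find default=yes ] router-id=10.255.255.1 distribute-default=if-installed-as-type-1
On the middle router:
/routing ospf instance
set [ find default=yes ] router-id=10.255.255.2
On the bottom router:
/routing ospf instance
set [ find default=yes ] router-id=10.255.255.3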
Advertising Networks
With our OSPF instances configured properly we can now begin advertising our
connected networks. OSPF will advertise the following networks and addresses:
OSPF loopback
Point-to-point router links
LAN subnets
Use the following commands to advertise the routes directly connected on each router:
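The network statements aren't shown here; a sketch that places everything in the backbone area:

On the top router:
/routing ospf network
add network=10.255.255.1/32 area=backbone
add network=172.16.0.0/30 area=backbone
add network=192.168.1.0/24 area=backbone
On the middle router:
/routing ospf network
add network=10.255.255.2/32 area=backbone
add network=172.16.0.0/30 area=backbone
add network=172.16.0.4/30 area=backbone
add network=192.168.2.0/24 area=backbone
On the bottom router:
/routing ospf network
add network=10.255.255.3/32 area=backbone
add network=172.16.0.4/30 area=backbone
add network=192.168.3.0/24 area=backbone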
Verifying OSPF
With networks connected and OSPF configured we need to verify functionality. The
following sections walk us through checking the status of OSPF routing:
1. Neighbor Routers
2. OSPF Routes
Neighbor Routers
By now OSPF should have established neighbor states between devices. The best
device to check for neighbors is the middle router — if it has two neighbors then the
top and bottom routers must be configured correctly. List the OSPF neighbors on the
device with the following command:
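The output isn't reproduced here, but the check itself is a single command:

/routing ospf neighbor print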
OSPF Routes
The best routes for a given destination will be copied from the protocol's route table to
the main route table.
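One quick way to confirm is to filter the main route table for OSPF-learned routes:

/ip route print where ospf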
Preface
Running an IP-IP tunnel between sites with OSPF for routing is an easy, dynamic site-
to-site solution. We'll set up a tunnel, configure OSPF, and verify connectivity.
Navigation
1. Network Topology
2. IPIP Tunnel
3. OSPF Routing
Network Topology
The network topology for this writeup is two sites, each with a Mikrotik router:

Site | WAN IP | LAN Subnet | LAN Gateway | Point-to-Point IP
--- | --- | --- | --- | ---
Philly | 1.1.1.1 | 192.168.1.0/24 | 192.168.1.1 | 10.255.0.1/30
Seattle | 2.2.2.2 | 10.1.0.0/24 | 10.1.0.1 | 10.255.0.2/30
Both routers are connected to the internet and have a publicly routable address. Their
respective LAN networks don't overlap, and we've set aside a 10.255.0.0/30 network
for the point-to-point IPIP addresses. Using the high 10.255.0.0/30 network ensures it
won't overlap with any additional sites that come online.
IPIP Tunnel
Setting up the IPIP tunnel is pretty straightforward - point one router to the other and
that's it.
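The tunnel commands aren't reproduced in this excerpt; a sketch using the WAN addresses from the table above and hypothetical interface names:

On the Philly router:
/interface ipip
add name=ipip-seattle local-address=1.1.1.1 remote-address=2.2.2.2
On the Seattle router:
/interface ipip
add name=ipip-philly local-address=2.2.2.2 remote-address=1.1.1.1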
Add the routable IP addresses to the IPIP tunnel interfaces. This gives OSPF
something to run over between the two devices. Having a dynamic routing protocol
running means this solution can grow beyond two sites.
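A sketch of that addressing, using the point-to-point pair from the table above and the hypothetical tunnel names from the previous step:

On the Philly router:
/ip address
add address=10.255.0.1/30 interface=ipip-seattle
On the Seattle router:
/ip address
add address=10.255.0.2/30 interface=ipip-philly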
OSPF Routing
We'll use a very simple OSPF configuration since there's only two sites. Both sites
will be put on the OSPF "Backbone" area, number zero. As the network grows you
can add additional OSPF areas.
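The OSPF commands aren't shown in this excerpt; a sketch advertising the tunnel subnet and each LAN into the backbone area:

On the Philly router:
/routing ospf network
add network=10.255.0.0/30 area=backbone
add network=192.168.1.0/24 area=backbone
On the Seattle router:
/routing ospf network
add network=10.255.0.0/30 area=backbone
add network=10.1.0.0/24 area=backbone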
These configurations have OSPF advertising the point-to-point links between the
routers, and the LANs behind the routers. With those routes advertised we should
have full reachability between sites.
Either way, it's important to monitor our networks for rogue DHCP servers. In
RouterOS there is a handy tool in the IP DHCP-Server menu for just this purpose.
We'll first set up a logging script. Then we'll configure DHCP server alerts. Finally,
we'll add trusted DHCP server MAC addresses so there won't be false positives in our
logs.
1. Logging Script
2. DHCP Alerts
3. Trusted DHCP Servers
4. Finding Rogue Devices
Logging Script
1. Create the logging script:
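The original script isn't reproduced here; a minimal sketch that writes a warning entry, using the rogue-dhcp name referenced by the alert configuration below:

/system script
add name=rogue-dhcp source=":log warning \"Rogue DHCP server detected on ether2\""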
NOTE: The backslashes ("\") are required because nested quotes must be escaped.
2. Run the script and verify a log entry is shown:
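A sketch of that verification:

/system script run rogue-dhcp
/log print where message~"Rogue"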
This log entry will be shown in addition to the default system log that has the
rogue server's MAC and IP addresses.
DHCP Alerts
1. Configure DHCP server alerts on interface ether2:
/ip dhcp-server alert add interface=ether2 on-alert=rogue-dhcp disabled=no
The interface that the rogue DHCP server is connected to can be turned off remotely
while someone else hand-over-hands the cable to find the device.
Six Step Troubleshooting
The US Navy's six step troubleshooting procedure has become part of academic and
professional courses and certifications around the world. It presents a logical, step-by-
step approach for troubleshooting system faults. We can apply this to computer
networks, electrical and electronic circuits, or business processes. When we use the
six steps properly, our troubleshooting can be faster and more efficient than it would
be if we "just jump right in".
Troubleshooting Goals
The primary goal of troubleshooting is very simple - fix faults. But that goal is more
nuanced than it first appears. While we want to fix faults, we should aim to do it as
efficiently and quickly as possible. Time wasted troubleshooting a system that's
unrelated to the fault is expensive. Meanwhile, the person who originally reported the
fault is still unable to perform whatever task they had been attempting.
Six Steps
First, we'll outline the six steps. Second, we'll explore what each of them entails.
Third, we'll apply the six steps to a real-world network outage scenario. The following
six steps make up the formal troubleshooting process:
1. Symptom Recognition
2. Symptom Elaboration
3. List Probable Faulty Functions
4. Localize the Faulty Function
5. Localize the Faulty Component
6. Failure Analysis
With the steps originally developed for troubleshooting electrical and electronic
systems, some of the wording has been changed over time. For example, Step 5 was
originally "Localizing trouble to the circuit". The wording has evolved but the result
is still the same - finding the specific root cause.
Symptom Recognition
This first step kicks off the overall troubleshooting process. Often for IT professionals
this happens when someone calls the helpdesk or puts in a ticket. IT staff might also
be alerted by a monitoring tool that a system has gone offline. At this point we know
"something is wrong", but there's no indication of exactly what it is. Begin the
troubleshooting process and respond with urgency.
Symptom Elaboration
Now that we know something is wrong it's time to begin asking questions. Here's a
list of some questions that I like to ask my users when they come to me with a
problem:
When dealing with non-technical users it's important to understand that they may not
be able to fully articulate what they're experiencing when giving us answers. For
example, a common report I get when troubleshooting network outages is, "the
internet is down!". While this isn't strictly true, most users don't have the training to
understand the distinction between LAN, WAN, and the internet. It's not their job to
understand that the LAN isn't working and the internet is still there waiting for them.
A certain amount of interpretation is needed, and the skills to do it come with time
and experience.
During this step I'm also looking for sights, sounds, and smells. Loss of power is
typically easy to spot because there will be a lack of LEDs, and the conspicuous sound
of silence where there should be the whirring of cooling fans. The smell of burning
plastic and electronic components is very distinctive as well.
List Probable Faulty Functions
Power
Environmental Controls
Networks
Servers
Security
These are all very broad and that's the point of this step. We'll brainstorm which
function could be the cause of our fault, and we'll also rule out which could not be.
This points us in the right general direction. It's important to note what could not be
the cause of a fault because that prevents us from wasting time on an unrelated
system. When a technician gets pulled in the wrong direction and troubleshoots a
function unrelated to the fault it's sometimes called "going down the rabbit hole".
If the lights are on in a server room, and hardware LEDs on front panels are blinking
while the fans whir away, the Power domain can probably be ruled out. If one of those
servers with blinking lights cannot be pinged or accessed remotely it's a fair guess that
the Network domain might be faulted. There is also a possibility that a hardware
failure has occurred on the server, taking it off the network. Depending on past
reliability of your servers, you may or may not include the Server domain in your list
of possible faulty functions.
Running a traceroute to the server's address shows successful hops all the way to the
switch that the server connects to. That switch is the final hop, after which all packets
are lost. Based on that result it appears likely that the Network domain is the culprit.
We know that our network is segmented using VLANs, so we list the VLANs
configured on the switch and their associated ports. The port that connects the server
is assigned to VLAN number 1 - that's the default VLAN, not the server VLAN. This
explains why we have a good physical connection with link lights, but no network
traffic.
Failure Analysis
At this final step we correct the fault and document the process. In the case of our
server, setting the port to the right VLAN restored network connectivity, and our users
could access the server once again. Once the fault is fixed we need to verify that
operations have returned to normal. It's important to follow up with whoever
originally reported the fault and ensure that it's been fully resolved. This leads us to
the point where we ask questions and document the process. By documenting the fault
we make it possible for future technicians to fix the same issue much faster if they
experience it again.
Preventing the fault from happening again can be tricky. A mix of training, mentoring,
good documentation, and change management processes can stop it from happening
again. Even informal knowledge sharing within an IT team is better than nothing.
During a weekly meeting it's good to recap faults quickly with the following points:
Doing this week-over-week grows the knowledge base within an IT team and helps
develop good troubleshooters.
MikroTik Winbox Security
MIKROTIK
First, we need to make sure that Winbox is updated. Second, we need to understand
how saved credentials can be used smartly. Third, we need to implement best
practices for managing credentials in Winbox overall.
Updates
It's a best practice all-around to run the latest stable, supported software. This is true
for RouterOS, and it's also true for Winbox. MikroTik has added a built-in updater
inside Winbox so checking for updates regularly is easy. Open Winbox, then
click Tools and Check for Updates:
Checking for Winbox updates
I do this about once per month, just in case a new version has been released that
patches security holes or adds new functionality.
Managed Hosts
We can store device connection profiles in Winbox to make reconnecting to them
easy. Unfortunately this can lead to some bad credential management practices.
Entering the IP address or hostname, login, and password then clicking
the Add/Set button saves our credentials:
Adding managed host
Anyone who walks up to the computer with Winbox open can double-click a managed
host entry and it will log them in. We can set a Master Password that requires a
password before the managed host entries are shown. Simply click Set Master
Password and enter a password twice:
Setting Winbox master password
Now when Winbox opens it will first prompt for the master password before giving us
access to the managed host credentials:
Using master password in Winbox
Of course, if the computer running Winbox is left unattended after the master
password was entered it doesn't do us any good, so locking the computer is a must.
After saving a bunch of managed host profiles many MikroTik administrators export
the list for backup purposes. I've seen some MSPs that manage MikroTik devices for
their customers share the exported file among their employees. While this might be
convenient it opens a can of security worms for customers that have to be PCI DSS or
HIPAA compliant. Exporting our managed host credentials can be done by
clicking Tools then Export:
Winbox managed hosts export
The exported .WBX file has all our login information, making it easy to restore the
saved entries in Winbox if they are lost. This can be dangerous though, because the
file that's exported is in plaintext. Exporting the file and opening it in a more
advanced text editor like Notepad++ shows our IP addresses or hostnames,
usernames, and passwords:
Winbox plaintext credentials
By unchecking the Keep Password box we can prevent Winbox from saving or
exporting the password for an individual managed host entry. Using Tools - Export
Without Passwords doesn't export passwords for any managed host, so it's a more
secure option. Of course it will still export usernames, which could allow an attacker
to kick-off a password guessing attack.
Best Practices
I recommend that these best practices be followed when storing credentials in
Winbox:
1. On computers with credentials stored in Winbox lock the screen when stepping
away.
2. Set a Master Password that must be entered before accessing the managed host
entries.
3. Don't include passwords when exporting the managed host list.
4. Don't share the .WBX export file with others.
5. If you must have passwords in the exported .WBX file then encrypt it with a
robust key.
6. For traveling laptops and tablets with credentials stored in Winbox encrypt the
entire drive in case of theft.
I do a lot of work with virtual MikroTik routers, mostly in Microsoft Hyper-V. The
CHR is great for labbing-out solutions and developing configuration templates for
clients. Unfortunately copy-and-paste operations aren't really possible through the
built-in Hyper-V console. Winbox is another solution, but much of what I do happens
at the command line. The cleanest solution I've come up with is using the built-in
serial device functionality with named pipes for the VMs. PuTTY provides a handy
serial interface for accessing the virtual device.
First we'll add a serial device to the CHR's VM configuration. Then we'll use it to
create a named pipe. Finally, we'll use PuTTY to access the serial console.
CHR Configuration
Adding a serial port to the CHR gives us the "hardware" that we need, even though it's
virtual. In Hyper-V right-click the CHR VM. Then select Settings and COM1 to the
left. Note how no device was included by default in the configuration:
Select the Named Pipe option and enter a name:
Remember the full name, beginning with .\pipe\ and ending with the chosen name.
PuTTY Serial
Launch PuTTY with administrative privileges, otherwise the named pipe can't be
accessed in Windows. Select the Serial option, then use the named pipe string:
Click Open, then click inside the PuTTY window and press [Enter]. The RouterOS
login prompt should appear. If the router is rebooted we can quickly restart the serial
session by right-clicking the title bar:
Restarting the virtual serial connection
Preface
While MikroTik does sell switches, many organizations deploy SOHO
RouterBOARD models to small, remote offices with only a few devices. This is very
common inside residential networks as well. Switching ports connected to local
devices and using one port for an internet connection makes the most sense for these
locations, rather than deploying a separate switch and router. There are a couple ways
to combine ports in a switched (bridged) configuration depending on what RouterOS
version we're running.
/interface bridge
add name=Switch comment="Switched ports" fast-forward=yes
Use the "protocol-mode" option with the command above to configure Spanning Tree
Protocol as needed. Options include STP, RSTP, and MSTP. Now add
ports ether2 - ether5 to the bridge and use the "hw=yes" option:
/interface bridge port
add interface=ether2 bridge=Switch hw=yes
add interface=ether3 bridge=Switch hw=yes
add interface=ether4 bridge=Switch hw=yes
add interface=ether5 bridge=Switch hw=yes
Once these ports are connected they should also be operating with the "Running" and
"Slave" statuses.
VLAN Trunking
ROUTING
Preface
VLAN trunking and routing is one of the most basic and essential skills that a network
administrator can have. Segmenting the network with VLANs is required for PCI,
HIPAA, and other compliance standards, and it helps keep some measure of order and
sanity in large network infrastructures. Setting up VLANs on a Mikrotik router and
configuring VLAN trunking is easy, even if a couple of the steps are less-than-
intuitive.
Navigation
1. VLAN Design
2. VLAN Trunking Protocols
3. VLAN Topology
4. Creating VLANs on Mikrotik
5. Addressing VLAN Interfaces
6. DHCP for VLAN Networks
7. Switch VLAN Configuration
VLAN Design
The first step in segmenting the network isn't done on the router at all, it's done on
the whiteboard - deciding how to structure your VLANs. If a network has to be
HIPAA or PCI compliant this decision is easier because it's spelled out in black and
white what has to be segmented. If segmenting a network is happening for another
reason, like a company mandate to improve security, then it's a bit "up in the air" but
still doesn't have to be hard.
For the most part I like to mirror the organizational structure with VLANs. Each
department typically gets its own VLAN, because each department is its own logical
group with a unique function, and probably has its own security needs. Servers and
storage get their own VLANs, or (preferably) their own switching hardware if that's in
the budget. I like being able to firewall and monitor traffic per-department, and having
their traffic going through virtual VLAN interfaces lets me use tools like Torch or
NetFlow. Guest networks get their own VLANs that are firewalled from accessing the
internal network. Wireless networks get their own VLANs too, keeping wireless
chatter, iOS / Android and app updates, etc. off the other networks. Once you decide
who gets their own VLAN it's time to create them and segment the network.
VLAN Topology
For this scenario we only have one router, and we'll create VLANs for HR
(192.168.100.0/24), Accounting (192.168.150.0/24), and Guests (192.168.175.0/24).
If you can create 3 VLANs you can create 30, so I'm keeping the example brief. The
IP addresses for each VLAN were also chosen randomly, it's up to you to choose an
IP scheme that fits your organization. The router is connected to a switch on ether2,
with an 802.1q trunk link in between. This is also known as a "router on a stick" type
configuration. I'm not going to be specific about the switch being a Cisco, HP, or
whatever switch because 802.1q trunking is almost the same across platforms. Just
check your vendor's documentation for setting it up on a trunk port. The router also
has a WAN connection on ether1 that clients in the VLANs will use to access the
Internet via a default route to the ISP's gateway.
/interface vlan
add comment="HR" interface=ether2 name="VLAN 100 - HR" vlan-id=100
add comment="Accounting" interface=ether2 name="VLAN 150 - Accounting" vlan-id=150
add comment="Guests" interface=ether2 name="VLAN 175 - Guests" vlan-id=175
I've taken the time to name the VLAN interfaces and give them a useful comment, and
I suggest you do the same. This will make administering VLANs and onboarding new
administrators easier. As mentioned earlier, creating the VLANs and assigning them
to the physical ether2 interface automatically changed encapsulation to 802.1q, even
though you won't see that if you print the interface details. This is one of those non-
intuitive things mentioned before.
/ip address
add address=192.168.100.1/24 comment="HR Gateway" interface="VLAN 100 - HR"
add address=192.168.150.1/24 comment="Accounting Gateway" interface="VLAN 150 - Accounting"
add address=192.168.175.1/24 comment="Guests Gateway" interface="VLAN 175 - Guests"
Again, I took the time to add comments and you should as well. At this point we have
our VLANs, and they have usable addresses. If you're using static IP addressing on
your network that's pretty much it for VLAN configurations. The next (optional) steps
are setting up DHCP instances on the VLAN interfaces, so that clients inside each
network segment can get dynamic addresses. First, create the address pools that
DHCP will hand out:
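The pool commands aren't included here; a sketch with example ranges that leave room for the gateway addresses assigned above:

/ip pool
add name=HR ranges=192.168.100.10-192.168.100.254
add name=Accounting ranges=192.168.150.10-192.168.150.254
add name=Guests ranges=192.168.175.10-192.168.175.254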
Next, set up the DHCP networks with options for DNS (Google public servers) and
the gateways:
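A sketch of those networks, using the gateway addresses assigned above and Google's public DNS servers:

/ip dhcp-server network
add address=192.168.100.0/24 gateway=192.168.100.1 dns-server=8.8.8.8,8.8.4.4
add address=192.168.150.0/24 gateway=192.168.150.1 dns-server=8.8.8.8,8.8.4.4
add address=192.168.175.0/24 gateway=192.168.175.1 dns-server=8.8.8.8,8.8.4.4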
In this case I'm using Google's Public DNS service, and the internal gateways are set
to the IP addresses you assigned before on the VLAN interfaces.
Lastly we'll spin up the DHCP server instances on the VLAN interfaces, using the
pools you set up earlier:
/ip dhcp-server
add address-pool=HR disabled=no interface="VLAN 100 - HR" name=HR
add address-pool=Accounting disabled=no interface="VLAN 150 - Accounting" name=Accounting
add address-pool=Guests disabled=no interface="VLAN 175 - Guests" name=Guests
The pools correspond with the networks set up previously, and that's how the DHCP
options like gateway and DNS are associated with a particular DHCP instance. I like
spinning up DHCP for each VLAN, so you can control lease times, options, etc
individually for each network segment. This gives you a lot of flexibility to tweak and
monitor DHCP across the organization.
IPSEC Tunnels
VPN , SECURITY
Preface
IPSEC is one of the most commonly used VPN technologies to connect two sites
together over some kind of WAN connection like Ethernet-Over-Fiber or Broadband.
It creates an encrypted tunnel between the two peers and moves data over the tunnel
that matches IPSEC policies.
Navigation
1. Nomenclature
2. IPSEC Policy vs Routing
3. IPSEC Topology
4. Mikrotik IPSEC Peers
1. Seattle Peer
2. Boise Peer
5. Mikrotik IPSEC Policy
1. Seattle Policy
2. Boise Policy
6. Mikrotik NAT Bypass
1. Seattle NAT Bypass
2. Boise NAT Bypass
7. IPSEC Tunnel Testing
Nomenclature
"Peers" and "Policy" will be used a lot in this article, so it's important to know what
they mean. Peers are the endpoints for IPSEC tunnels. Policies are the settings that
define the interesting traffic that will get pushed over the tunnel. If packet traffic isn't
covered by a policy it isn't interesting, and gets routed like any other traffic would be.
If packet traffic does match what's in a policy, the router defines those packets as
interesting, and sends them over the tunnel, rather than routing them.
IPSEC Topology
Below is the physical topology diagram of what we're working with, and it shows the
logical connection that the IPSEC tunnel will create between
subnets.
We have two routers, in Seattle and Boise, both connected to the Internet somehow
with their own static IP addresses. These routers could be at two offices owned by one
company, or just two locations that need to be connected together. We need
computers or servers at one location to be able to contact devices at the other, and it
has to be done securely. An IPSEC VPN is perfect for this sort of implementation.
Seattle Peer
On the Seattle router:
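The peer configuration isn't included in this excerpt. A minimal RouterOS v6-style sketch, using placeholder WAN addresses (198.51.100.1 for Seattle, 203.0.113.1 for Boise) and a throwaway pre-shared secret, all of which you'd replace with real values:

/ip ipsec peer
add address=203.0.113.1/32 secret="use-a-long-random-string" enc-algorithm=aes-256 nat-traversal=no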
Boise Peer
On the Boise router:
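The mirror of the Seattle peer, again using the placeholder addresses above:

/ip ipsec peer
add address=198.51.100.1/32 secret="use-a-long-random-string" enc-algorithm=aes-256 nat-traversal=no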
The encryption algorithm and secret must match, otherwise the IPSEC tunnel will
never initiate properly. In production networks a much more robust secret key should
be used. This is one time when network administrators often generate long random
strings and use them for the secret, because it's not something a human will have to
enter again by memory. Secret keys should be changed on a regular basis, perhaps
every 6 or 12 months, or more often depending on your regulatory needs. Do not
enable NAT traversal, as it's pretty hit-or-miss. This feature is meant to help get around
NAT, which breaks IPSEC, but it doesn't always work reliably.
https://www.stigviewer.com/stig/network_devices/2015-09-22/finding/V-3008
If you look at the policies side-by-side you'll notice that the IP address entries on both
routers are reversed - each router points to the other. It really helps to open up the
same dialog boxes in two Winbox windows, looking at them side-by-side, checking
that the SRC address on one router is the DST address on the other.
Seattle Policy
On the Seattle router:
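The policy commands aren't shown in this excerpt either; a sketch assuming hypothetical LAN subnets of 192.168.10.0/24 in Seattle and 192.168.20.0/24 in Boise, with the placeholder WAN addresses from the peer section:

/ip ipsec policy
add src-address=192.168.10.0/24 dst-address=192.168.20.0/24 sa-src-address=198.51.100.1 sa-dst-address=203.0.113.1 tunnel=yes action=encrypt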
Boise Policy
On the Boise router:
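The same policy with the source and destination entries reversed, as described above:

/ip ipsec policy
add src-address=192.168.20.0/24 dst-address=192.168.10.0/24 sa-src-address=203.0.113.1 sa-dst-address=198.51.100.1 tunnel=yes action=encrypt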
For IPSEC tunnels that stay up all the time and also give you routed virtual interfaces,
take a look at running GRE over IPSEC.
To force this IPSEC tunnel to come up I've sent pings from one subnet to the other,
creating interesting traffic and triggering the IPSEC policy. When viewing the
Installed SAs on the Boise router we can see that encryption keys have been
established, and that on each side the SRC and DST addresses correspond with each
other:
In the Remote Peers tab it also indicates that the Seattle router is an established
remote peer:
On the Seattle router you'll see the same information in the Installed SA and Remote
Peers tab, but the IP addresses will be backwards from Boise's.
Tracerouting from an IP address on the Seattle LAN shows one hop to an IP address
on the Boise
LAN:
Notice that I specified the source address in the traceroute above. This is so that the
packets sent for the traceroute will appear to originate inside the IPSEC policy's SRC
network, and be headed to a DST network that matches the policy as well - interesting
traffic. If you just try pinging straight from one router to another it won't work,
because the packets won't match the policy and IPSEC will ignore them. Either
specify the SRC to match the policy when pinging from the router, or ping from a real
host inside those subnets.
There is a lot more we can do with IPSEC VPNs, like running GRE over a tunnel for
routing or using OSPF, but this is a great start.
This guide uses a real-world network topology for creating secure site-to-site links in
two scenarios. The first scenario is a basic link between LANs at separate locations
using IPSEC. The second scenario uses IPSEC with GRE+OSPF to create secure,
routed links that can scale to dozens of networks or more.
This article will focus on creating a site-to-site VPN tunnel using PPTP. We'll use
static routes on each router that allow devices in one LAN to communicate with
devices in the other. The topology being used is the same one in the MPLS with
VPLS article, but the Seattle and Santa Fe LER devices have been converted to
customer-owned routers. The topology is shown below:
Mikrotik PPTP Site to Site Topology
The requirements for this network aren't too complicated - connect customer LAN
networks 192.168.1.0/24 and 192.168.5.0/24 via a PPTP tunnel over a provider's
network. This is a cheaper alternative to MPLS tunnels, though in fairness it is also a
very different technology and somewhat legacy. The Seattle customer router will be
the PPTP server, and the Santa Fe router will run the PPTP client. It could be the other
way around, it doesn't matter, as long as one router is the server and the other is the
client. First we'll enable the PPTP server on the Seattle router:
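The exact commands aren't reproduced in this excerpt; a sketch that enables the server with MSCHAPv2 only and points it at an encryption-required profile:

/interface pptp-server server
set enabled=yes authentication=mschap2 default-profile=default-encryption
/ppp profile
set default-encryption use-encryption=required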
I've specifically set the authentication to MSCHAP v2 because that is the best
encryption that PPTP can handle, and we don't want to use anything less than that.
We'll also set the PPTP profile being used to require encryption, so that encryption is
no longer optional.
Next on the Seattle router we'll set up the credentials that the Santa Fe PPTP client
will use to establish the tunnel:
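The secret itself isn't shown; a sketch using the santafe username and password referenced by the client below, with arbitrary (hypothetical) 10.99.99.x tunnel addresses:

/ppp secret
add name=santafe password=supersecretpassword service=pptp profile=default-encryption local-address=10.99.99.1 remote-address=10.99.99.2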
This PPP secret is what the PPTP client will use to establish the tunnel. It has a
username (santafe), a password, the local address that will be dynamically assigned to
the PPTP server, and the remote address that will be dynamically assigned to the
PPTP client. The IP addresses I chose for the PPTP tunnel are totally arbitrary, you
can use whatever you want as long as they don't overlap with anything already in use.
We also need to put some firewall rules in to allow PPTP (which uses GRE) into the
firewall:
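The rules aren't included here; a sketch that accepts PPTP control traffic (TCP 1723) and GRE from a placeholder Santa Fe WAN address of 203.0.113.50, placed above any input-chain drop rules:

/ip firewall filter
add chain=input protocol=tcp dst-port=1723 src-address=203.0.113.50 action=accept comment="PPTP from Santa Fe"
add chain=input protocol=gre src-address=203.0.113.50 action=accept comment="GRE from Santa Fe"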
This allows PPTP traffic from the Santa Fe router into the Seattle router. I've only
opened up PPTP to a specific source address, and I suggest you do the same. That
wraps up the configuration on the PPTP server side in Seattle, let's look at Santa Fe.
First thing to do is add those same firewall rules, just with Seattle's source IP address:
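Again a sketch, this time using Seattle's WAN address from the client configuration below:

/ip firewall filter
add chain=input protocol=tcp dst-port=1723 src-address=72.156.29.2 action=accept comment="PPTP from Seattle"
add chain=input protocol=gre src-address=72.156.29.2 action=accept comment="GRE from Seattle"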
Then we'll make sure encryption is being required in the Santa Fe PPTP profile, just
like on Seattle's router:
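As on the Seattle side, a sketch assuming the built-in default-encryption profile is the one in use:

/ppp profile
set default-encryption use-encryption=required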
Next we'll create the PPTP client that connects to the Seattle router:
/interface pptp-client
add allow=mschap2 connect-to=72.156.29.2 disabled=no mrru=1600 name=Seattle password=supersecretpassword user=santafe
At this point the PPTP client should automatically connect, and a dynamic PPTP
interface is created. The IP addresses assigned in the PPP secret will now be set
dynamically on both routers as well. On the Seattle PPTP server you should see
something like this for interfaces:
Mikrotik PPTP Server Interface
And on the Santa Fe PPTP client you should see something like this:
On the PPTP client side you can see the same thing, just with the other IP address:
The final step is adding the static routes, pointing traffic from one LAN to another
over the new tunnel. Because PPTP creates interfaces and assigns IPs that can be used
for routing we could use a dynamic routing protocol like OSPF, but because this
implementation is so small I'm opting for static routes.
On the Seattle router:
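The route commands aren't shown; a sketch assuming 192.168.1.0/24 is the Seattle LAN, 192.168.5.0/24 is the Santa Fe LAN, and the hypothetical 10.99.99.x tunnel addresses from the PPP secret above:

/ip route
add dst-address=192.168.5.0/24 gateway=10.99.99.2 comment="Santa Fe LAN via PPTP"

And the mirror route on the Santa Fe router:

/ip route
add dst-address=192.168.1.0/24 gateway=10.99.99.1 comment="Seattle LAN via PPTP"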
Traffic destined for the opposite LAN goes to the opposite side of the tunnel, hitting
the other router which hands off the traffic to the LAN port. In this case we've used
static routes for simplicity, but you could also use OSPF or another routing protocol.
EoIP Tunnel
VPN
MikroTik's EoIP tunnel functionality is very popular with users who need to extend
Layer 2 networks between sites. It's configured much like a GRE tunnel and extends
an OSI Layer 2 broadcast domain between sites. Once established the tunnel can be
bridged to physical adapters or other connections. For applications or other systems
that require a Layer 2 adjacency this is the only way to make it work across sites,
other than using a dedicated provider circuit or fiber / microwave link.
EoIP is also a solution for quick-and-dirty network integration for two sites that have
overlapping subnets that, for whatever reason, can't be completely readdressed. Small
businesses and branch offices often have flat networks in the 192.168.0.0/24 or
192.168.1.0/24 ranges, and when the mandate comes down to enable communication
between them quickly and cheaply, EoIP is a possible solution. There will need to be
a little bit of compromise though, and the same rules that apply to a single network in
one location now apply across locations. In the long-term readdressing networks to
not overlap is ideal, but for small businesses with limited IT budgets and often no IT
staff to speak of this is a solution that works for the short-term.
In this situation we have two small offices, one in St. Louis and the other in Norfolk.
Both offices have LANs in the 192.168.1.0/24 subnet, and users need to be able to
access resources in both offices remotely. Management doesn't understand what
network addresses do or why they matter, and this task has to be done with minimal
disruption of ongoing operations. Here is the network:
First we'll create the EoIP tunnels, then create the bridges that will connect them to the
physical LAN, and lastly do a bit of IP readdressing.
Create the EoIP tunnel on St. Louis router and enable encryption:
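The tunnel command isn't reproduced here; a sketch using a placeholder Norfolk WAN address of 203.0.113.80, a tunnel ID of 100, and a throwaway IPSEC secret, all values you'd substitute:

/interface eoip
add name=eoip-norfolk remote-address=203.0.113.80 tunnel-id=100 ipsec-secret=use-a-long-random-string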
Create the EoIP tunnel on the Norfolk router and enable encryption:
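The mirror on the Norfolk side, pointing at a placeholder St. Louis WAN address of 198.51.100.80 and using the same tunnel ID and secret:

/interface eoip
add name=eoip-stlouis remote-address=198.51.100.80 tunnel-id=100 ipsec-secret=use-a-long-random-string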
The tunnel ID numbers must match on each side. Additionally, an IPSEC key has
been added, which will encrypt the EoIP traffic between the two sites. This is a good
idea to have in place, but it is an optional step depending on your security needs, and
it only works between Mikrotik devices. At this point you should see the tunnel come
up and be active, though there probably isn't any traffic going over it. Here is the
tunnel to the Norfolk side, just as an example:
Mikrotik EoIP Active Tunnel
You may have noticed the tunnel is running (R) in slave mode (S) because it has been
bridged. We haven't completed that step yet, but we'll get to it. First we need to
resolve a potential conflict.
Originally both routers had their LAN gateways set as 192.168.1.1 - not a big deal
because they are separate. However, once we bridge the two LANs together it
becomes a very big deal, because IP conflicts will wreak havoc. This is one of those
compromises mentioned earlier; while we can have two locations sharing the same
subnet, we can't have duplicate IP addresses between the locations. So on the Norfolk
side the router's gateway IP address has been changed to 192.168.1.128, which doesn't
conflict with the St. Louis router's gateway IP. Gateway address modifications would
need to be made on the Norfolk side for devices that have static IPs to accommodate
the change.
Another critical step that would have to be performed is splitting the DHCP scope on
each side of the tunnel, so that the DHCP servers running on the Mikrotik routers
aren't handing out duplicate client addresses. This step is unique to each organization
and the DHCP scope(s) they are using. Once that is done we can move on to bridging
interfaces to the EoIP tunnel.
Ether2 is our physical LAN interface on both routers, and we have to get traffic from
the physical LAN interface to the EoIP tunnel, then out of the tunnel and into the
physical LAN on the other side - easy to do with bridging. First we'll create the
bridges, then add ports to them.
Create the bridge on the St. Louis router and add ports:
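The bridge commands aren't shown; a sketch with hypothetical names, adding ether2 (the LAN port) and the EoIP tunnel as bridge ports. The Norfolk router gets the same treatment with its own tunnel interface:

/interface bridge
add name=lan-bridge comment="LAN / EoIP bridge"
/interface bridge port
add bridge=lan-bridge interface=ether2
add bridge=lan-bridge interface=eoip-norfolk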
At this point we're done, other than testing. Let's do a bandwidth test from 192.168.1.1
in St. Louis to 192.168.1.128 in Norfolk:
Mikrotik EoIP Speed Test
While 120Mbps isn't exactly thrilling, this is done in a lab environment on virtual
routers, and the graph shows consistent speed across two tests. That amount of
bandwidth is probably enough to handle data between two small offices, and using
larger Mikrotik Routers would allow for higher speeds. It should be noted, however,
that this particular solution isn't scalable for larger flat networks, and pushing a
consistently high volume of traffic is taxing for the CPU. Also, the overall speed
across the EoIP tunnel is limited to the speed of the slowest WAN connection, so
some testing will be needed to see if overall performance is acceptable in your
environment.
For those of you playing the home game - and by that I mean playing on an Xbox or
Playstation at home with a Mikrotik doing the routing - you've probably seen the console
complaining about your NAT configuration. It's not a huge deal in most cases, but for
some games and services it can cause issues with download speed, voice, and chat
communications. If you have the Xbox test the network connection it will most likely
complain about "Moderate NAT" settings if you're running the console behind a
Mikrotik using the default configuration. The fix for this is really simple - enable and
configure UPnP. This service allows the Xbox to request the router create dynamic
DST-NAT rules specifically for Xbox Live communications.
Allowing UPnP to dynamically forward ports is the flip-side of using static NAT
entries for port forwarding. A lot of networking folks take issue with UPnP (Universal
Plug and Play) for a couple reasons. First, UPnP takes some of the control out of the
hands of network administrators, allowing network devices themselves to
communicate with the router and create their own "pinhole" port forward settings.
Second, most UPnP implementations on low-end networking equipment are laughably
insecure. There's a laundry list of security issues created by home network equipment
manufacturers, and unfortunately that reputation has bled over to UPnP itself.
Fortunately Mikrotik's implementation isn't terrible, and only opens up access
specifically requested by the device on the LAN asking for it.
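The command that turns the service on isn't shown in this excerpt; it's a single setting:

/ip upnp
set enabled=yes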
Next, we'll tell the router which is the internal interface that's LAN-facing, and which
is the external interface that's Internet-facing. In this case ether1-gateway is the WAN
connections, and bridge-local is the LAN connection:
/ip upnp interfaces
add interface=ether1-gateway type=external
add interface=bridge-local type=internal
That's it! UPnP is turned on, and we've told the router which interfaces are which so
that firewall NAT pinholes can be created. Now we'll fire up the Xbox so it can
communicate with the router and create dynamic NAT rules:
Mikrotik UPnP
The two NAT rules above marked with a "D" are the dynamic rules that UPnP created
for internal LAN devices that need them, including the Xbox. Testing the network
from the Xbox again will show green across the board, and everything should work
really well. That's it.
Syslog Logging
SECURITY
Syslog is one of the most widely supported event reporting mechanisms, across
almost all manufacturers and OS distributions. Using Syslog to report events
happening on routers, switches, and servers is pretty standard, and being able to
centrally monitor reportable events on network infrastructure is critical. Most
organizations don't report every single event, because that would create a huge,
unmanageable mess of logs. Instead administrators focus on hardware events,
authentication issues, interface up/down events, and network adjacency changes.
So, with that being said, we'll set up a Mikrotik router to report important events to a
Syslog server, and use The Dude as a dashboard for monitoring. This is a no-cost
solution that centralizes the administrative task of monitoring infrastructure, and it is
surprisingly flexible. The topology in this scenario is pretty basic - two branch routers,
both with an Internet connection, both with a connection to a management network as
well. Do you absolutely need to send Syslog events over a management network? No.
Should you be handling monitoring and reporting over a management network? Yes,
it's best practice. The server is just an instance of The Dude, running on a Windows
Server.
Topology, with Mikrotik routers connected to Syslog server via management network
First, we'll set up a logging action on the router. This is just a logging action that tells
the router to send the event to a Syslog server. We'll then assign that logging action to
different events.
On both routers:
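The action isn't reproduced here; a sketch assuming a hypothetical Syslog server at 192.168.90.10 on the management network:

/system logging action
add name=syslog target=remote remote=192.168.90.10 remote-port=514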
On both routers:
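Then the rules that send events to that action; a sketch covering the critical, error, and account (login) topics discussed in this article:

/system logging
add topics=critical action=syslog
add topics=error action=syslog
add topics=account action=syslog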
Now that the routers are taken care of, let's set up the Syslog server. Mikrotik's The
Dude isn't the only Syslog server freely available - there are many - but it's one of the
few that's easily installed and you'd actually want to look at on a big screen display for
dashboarding. First, download and install the latest copy of The Dude from Mikrotik's
download page. Open up The Dude, and make sure that the Syslog server process is
running, as shown below:
Enabling Syslog in The Dude
Now that the routers are logging to Syslog, and The Dude is listening, let's create
some events. I'm going to log into the Seattle router with the wrong username and
password multiple times, and a few times successfully, to show what you'd see if
someone were trying to guess the admin username or password.
Failed and successful login attempts
This is now your running record of who logged into (or failed to log into) devices,
from what IP, and via which service. Also, very importantly, this gets your log entries
off of your devices and onto a separate, hopefully secure server. If a device has been
compromised there is a good chance the logs on the device have been too, but if
you're shipping events off to a separate server there is another record that can be
trusted. Assuming that all routers are syncing their clocks via NTP, you can use the
timestamps as well to create a chronological order of events, which is critical when
handling a security incident.
We've just covered the basics here, and there is a lot more you can report on with
Syslog and monitor using The Dude. I encourage you to ship critical and error events
to a Syslog server, and put the Syslog window up where people can see it and keep an
eye out for unusual entries. The last thing we want is someone trying to bruteforce the
admin password and no one even knows it's happening.
This isn't hard to fix at all, in fact it's fairly easy. Using Mikrotik Queues can allow us
to put bandwidth limitations in place, and also ensure a (relatively) fair distribution of
bandwidth between all users. We can also give priority to one network's bandwidth
usage over another's if we wanted to. We'll do this by using Mikrotik PCQ - PCQ
standing for "Per Connection Queue". Using a PCQ instead of another type of queue
allows for even distributions of bandwidth per connection. That means that one person
on a subnet gets just as much bandwidth when they open a webpage as someone else
does. Other types of queues are available, but we won't cover them in this article.
For this article we're just using one Mikrotik router, with a LAN subnet of
192.168.1.0/24 that all 200 users are sitting on. I'm not creating a topology diagram
for this article because there's just one router and one subnet. We have one WAN
connection, Ether3, and it's connected to a 50Mb/sec broadband line. Users have been
complaining that the Internet is always slow, and every now and then we've seen
someone who is using 10Mb or 15Mb/sec on their own just watching videos.
First, we'll flag the download traffic using a Mangle rule to mark the packets as they
come in from the WAN and head to the LAN. We have to mark the packets first so
the PCQ knows what packets to impose limits on. The mangle rule is shown below:
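The rule isn't reproduced here; a sketch using a hypothetical packet-mark name:

/ip firewall mangle
add chain=forward in-interface=ether3 dst-address=192.168.1.0/24 action=mark-packet new-packet-mark=lan-download passthrough=no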
The packet counter should be incrementing for this rule if traffic is flowing and the
rule was set up correctly. This rule will mark all packets coming in interface Ether3
with a destination address in the 192.168.1.0/24. Once the packets have been marked
then we can apply a PCQ to them. The PCQ is shown below:
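The queue type isn't shown; a sketch using the 5Mb/sec per-connection figure from this article and classifying on destination address for download traffic, with the queue-depth limits left at their defaults to be tuned as described below:

/queue type
add name=pcq-download kind=pcq pcq-classifier=dst-address pcq-rate=5M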
A few fields in the PCQ deserve explanation. Each connection that is matched to this
PCQ gets its own little queue, and the quantity of those little queues can be set by you.
Increasing them takes up more RAM and CPU time, but not having enough of them
means that packets could get bottlenecked in the queue and dropped - it's a balancing
act and some tuning may be required. In this case I have 200 users, so I'm going to
limit the amount of those little queues to 300, which admittedly is an educated guess.
The PCQ Rate is the most amount of bandwidth that an individual connection can use.
This is a user streaming Pandora, or loading a web page, or watching a streaming
video. The PCQ Total Limit is the total amount of bandwidth that all the connections
can use at once. We have a 50Mb/sec line, so I'm limiting total connections to
40Mb/sec and giving myself a little wiggle room for other traffic. This all relies on the
fact that not all 200 of our users are going to be using their allotted 5Mb/sec at once.
Finally we'll add the PCQ set up earlier to the Queue Tree. This is where the rubber
meets the road, and the PCQ we created actually gets applied to the packets we're
marking:
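The queue tree entry isn't included; a RouterOS 6-style sketch tying the packet mark and PCQ together, with the 40Mb/sec aggregate cap from this article applied as the max-limit:

/queue tree
add name=lan-download parent=global packet-mark=lan-download queue=pcq-download max-limit=40M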
That's it, everything is in place! Here is what an iperf bandwidth test looked like prior
to the PCQ being put in place:
Connection without Mikrotik PCQ
The first transfer is an upload, the second is the download, both sitting squarely
around 120Mb/sec. Now here's what it looks like with the PCQ in place:
With the PCQ in place it's a very different story, with bandwidth being limited to
4.89Mb/sec on average. Multiple iperf tests running in parallel show the same thing as
well. Please note: If you're going to implement bandwidth throttling some tuning will
be necessary. You may need to tweak one setting or multiple settings to make sure
that everyone gets their share of bandwidth, without being overly restrictive. Good
luck!
Most routers act as just that - routers, with each interface acting as the gateway for a
distinct network or as a trunk for VLANs that represent distinct networks. But for
some routers this isn't always the case, particularly in the SOHO or branch office
environment at the edge of the network. For those routers often one interface acts as
the gateway, with all the others working together in a switched capacity to connect
workstations, printers, APs, and other devices. This contrasts with routers in or near
the core of the network that strictly handle routed traffic, or are handling MPLS
traffic.
The first step to setting up one of these edge routers with a switching group of ports is
to determine how many switch chips are present in this particular model of router. In
the lab for this exercise I'm using an RB751U-2HnD which has one Atheros switch
chip. Other models like the RB1100AH and the RB2011 have two switch chips.
To determine how many switch chips you have and what kind:
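The command isn't shown in this excerpt; it's simply:

/interface ethernet switch print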
Mikrotik Switch Chip
Only ports wired to the same switch chip can actually be switched together. Also, note
that ether1 is conspicuously missing from switch1 - it isn't wired to the switch chip.
Therefore it can't be switched with ether2-ether5, unless a bridge port is manually
configured (and considering ether1 is used as the WAN gateway that would be a
terrible idea). On routers like the RB2011 that have two switch chips, with half the
physical ports wired to each, the only way to switch ALL the ports together across
both switch chips is to create a software bridge between two ports, one on each of the
switch chips. This isn't a very efficient solution, and if more than just a few switched
ports are needed it would be prudent to purchase a Mikrotik CRS.
Ether1 is being used for the WAN gateway, but ether2 - ether5 in this scenario need to
be switched together to create a LAN. Two computers, a printer, and a NAS all need
to be part of this LAN. This configuration isn't taking into account VLANs, but if you
want to learn how to use VLANs then look at the Mikrotik VLAN tutorial.
The next step is determining which port out of all the switched ports will be the
"Master" - I chose ether2. The rest of the ports, ether3 - ether5 will be set as slaves to
ether2. Here is ether2's configuration:
Mikrotik Master Port
As you can see ether2 has been set as the Master port, therefore it has no Master Port
configuration chosen. Ether3 - ether5 look very different though:
Mikrotik Master Port
Ether3 has been configured with ether2 as the Master port. This tells RouterOS that
these ports are running in a switched configuration. The same change needs to be
made for the other switched ports:
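The commands aren't reproduced here; on pre-6.41 RouterOS, where the master-port setting used in this article still exists, the change looks like this:

/interface ethernet
set ether3,ether4,ether5 master-port=ether2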
Ether1 has no Master port because it's acting as our WAN gateway, and is a separate
routed interface. Ether2 has no Master port because it is the Master port for this switch
chip. Ether3 - ether5 are set with Ether2 as their Master port, which tells RouterOS to
switch all those ports (ether2 - ether5) together.
The last step is to assign an IP address to ether2, which acts as the gateway address for
all the hosts plugged into ether2 - ether5. If the network is utilizing DHCP then the
DHCP server would be set to run on ether2, and because all the other ports (except
ether1) are switched together the hosts would be able to receive dynamic addresses.
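A sketch of that last step, using a hypothetical 192.168.88.1/24 gateway address:

/ip address
add address=192.168.88.1/24 interface=ether2 comment="LAN gateway"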
Mikrotik VRRP
ROUTING , OPTIMIZATION
Mikrotik VRRP (Virtual Router Redundancy Protocol) gives us the opportunity to
introduce some resiliency into our routing infrastructure. A common VRRP
implementation is to have redundant gateways for larger networks, whether in
enterprise or service provider environments. With VRRP two gateways can be
installed, one active and one standby. When one router drops because of power loss,
hardware failure, etc the other takes over, assigning itself the gateway address and
routing traffic. Very minimal traffic loss occurs during the switch, but there is some
loss nonetheless.
We'll implement the dual-gateway solution, and see what happens when we shut one
of the LAN interfaces down. Here is the topology we're working with in Boston - one
LAN with two gateways, each with a connection to the service provider.
Mikrotik VRRP Topology
Each of the routers has its own static WAN IP, each with a route pointing to the
service provider gateway - both routers are perfectly capable of shuttling packets in
and out of the network. Each router is also NAT'ing 192.168.70.0/24 traffic out its
respective ether1 WAN interface. Each router also has its own LAN address.
However, Windows, Mac, and other clients can only accept one gateway by
default, so we need one LAN address that both routers can share. The routers will
share the VRRP address, and we'll give that VRRP address out to clients on the LAN
for use as the gateway. When one router dies the other will apply that VRRP address
and take over as the gateway, and LAN clients should see no real interruption in
connectivity.
First, we'll assign local addresses on ether2 interfaces, because they need to be part of
the network before VRRP can happen.
On Boston:
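The address commands aren't shown; a sketch assuming hypothetical .2 and .3 host addresses for the two Boston routers:

On the first Boston router:
/ip address
add address=192.168.70.2/24 interface=ether2
On the second Boston router:
/ip address
add address=192.168.70.3/24 interface=ether2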
Now both routers are part of the network, and they can communicate to each other and
exchange VRRP traffic. This is everything we need to start configuring VRRP. Next,
we'll create the VRRP virtual interfaces, and link them to the physical ether2
interfaces.
On both routers:
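The interface commands aren't reproduced here; a sketch using a hypothetical VRID of 70. The VRID must match on both routers, and the priority can be raised on whichever router you'd prefer as the master:

/interface vrrp
add name=vrrp1 interface=ether2 vrid=70 priority=100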
With the virtual VRRP interfaces created we can now assign that 192.168.70.1/24
gateway address that both of the routers are going to share and hand off between each
other should one fail.
On both routers:
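And the shared gateway address itself, following the article's /24 addressing (some deployments put a /32 on the VRRP interface instead):

/ip address
add address=192.168.70.1/24 interface=vrrp1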
That's the whole of the configuration - both routers are now running VRRP, and one
of them has been elected the master and assigned 192.168.70.1. We'll start a constant
ping from the workstation on the Boston LAN to a static IP assigned to the Seattle
router (165.95.23.1), and disconnect the LAN interface of one of the routers to force
the VRRP transition.
Preface
Syncing the clocks between all of your devices is a critical part of keeping your
networks healthy. Time affects network security, VPN stability, and more.
Navigation
1. Relying on NTP
2. NTP Options
3. Timezones
Relying on NTP
Protocols like IPSEC and Kerberos exchange keys and tokens that are time-stamped
with lifetime values that determine validity. If one router's clock is faster than
another's those keys will expire sooner, causing IPSEC tunnels to bounce. If the
clocks are far enough off each other IPSEC tunnels may not come up at all because
keys from one side of the tunnel will never appear valid on the other side.
There are security implications as well - in the event of a security incident if the logs
on your devices have inconsistent timestamps then event correlation will be
impossible. When investigating an incident it's paramount that logs be reliable and
accurate. Speaking of logging, it's also prudent to centralize your logging to a server,
commonly with Syslog and The Dude. For Syslog events we want reported
timestamps to be accurate across the board.
NTP Options
In terms of NTP servers you have a few choices - host an NTP server within your
network, refer your devices to an external NTP server, or do both. For the purposes of
this article we will simply use an external NTP service, hosted by the NTP.org
project. This is a fantastic project that millions of internet users rely on, and if you
have a spare server that you could volunteer to take part in their network please do.
One simple command will tell your routers to sync with the pool.ntp.org service,
which is load-balanced and reliable:
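The command isn't reproduced in this excerpt; a sketch assuming a RouterOS version whose NTP client accepts DNS names (older versions take primary-ntp with a resolved IP address instead):

/system ntp client
set enabled=yes server-dns-names=pool.ntp.org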
Your router will sync its clock to the nearest NTP server participating in the pool, and
continue to make small clock adjustments regularly over time as needed. Bear in mind
that if you're running services that depend on timestamps (like IPSEC) this may cause
a brief interruption if your clocks are off significantly.
Timezones
There is one other issue of note, particularly if you have multiple networks across
different time zones. Depending on your configuration it may be prudent to configure
all your devices for the UTC timezone. This ensures that all devices have consistent
time configurations, and log entries can be correlated between devices in different
time zones without adjusting for local time. If you have routers in different states or
countries that observe daylight savings time differently, UTC further simplifies things.
Be aware that changing timezones on your devices will most likely bounce VPN
tunnels momentarily, and it's important that all devices be on UTC time.
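Setting the zone itself is a single command on each device:

/system clock
set time-zone-name=UTC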
I've worked with MPLS circuits for a long time, but always with provider hand-offs.
This is most people's first and only real exposure to MPLS. The service provider gives
the customer Ethernet connections and says, "This connection goes to Site A, this
other connection goes to Site B, you have X amount of bandwidth, do whatever you
want." It's magic, and the customer doesn't need to have any idea how it works on the
backend. It makes networking remote sites much easier for the customer, and it's a
lucrative value add for providers. Obviously there are a lot of other ways to network
remote sites together. There is EoIP, GRE, IPSEC, GRE with IPSEC, and if you have
Scrooge McDuck amounts of money you can run your own fiber too. With that being
said, MPLS is extremely popular compared to other solutions because it's transparent
for the customer, the customer doesn't have to administer the tunnels, and it's all fairly
turnkey. I wanted to learn what was going on behind the curtain, what it actually takes
to provide these tunnels, and so I did.
Before we go any further you should be familiar with some terms, and I suggest
reading up on basic MPLS. Two terms to be familiar with are LSR and LER, or Label
Switch Router and Label Edge Router respectively. An LSR is a router running MPLS
that only performs label switching in the core; it doesn't add or remove labels at
network ingress or egress. An LER is a router running MPLS that pushes (adds) or
pops (removes) an MPLS label when a packet enters or exits the MPLS network.
LSRs reside in the core, LERs reside at the edge.
This article describes how to set up a basic MPLS network in the core, supported by
OSPF, and run VPLS tunnels over that core between customer sites. This lets you give
the customer an Ethernet handoff on both sides of the tunnel, and basically tell them
to pretend it's a Cat5 cable strung between sites.
Here is the topology that we're working with, with two customer devices attached to a
Seattle and a Santa Fe provider router:
On Seattle LSR:
/mpls interface
set [ find default=yes ] interface=ether1
add interface=ether2
add interface=ether3
On Santa Fe LSR:
/interface bridge
add comment="MPLS Loopback" name="MPLS Loopback"
/mpls interface
set [ find default=yes ] interface=ether2
add interface=ether3
add interface=ether1
On Atlanta LSR:
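Atlanta follows the same pattern as the other two core routers - a loopback bridge plus MPLS enabled on its core-facing ports. As a sketch, with ether1 and ether2 standing in for whatever ports actually face the core:
/interface bridge
add comment="MPLS Loopback" name="MPLS Loopback"
/mpls interface
add interface=ether1
add interface=ether2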
At this point we have OSPF running in the core, and MPLS running as well on the
LSR routers. From this point on we'll focus on the LERs that actually connect to the
customers. We'll add an additional bridge for VPLS traffic, configure OSPF and
MPLS with LDP on each of the LERs, then we'll move on to building the VPLS
tunnels.
On Seattle LER:
/interface bridge
add comment="MPLS Loopback" name="MPLS Loopback"
add comment="Customer #4306 Site 1" name="VPLS
Customer 4306-1 Bridge"
On Santa Fe LER:
/interface bridge
add comment="MPLS Loopback" name="MPLS Loopback"
add comment="Customer #4306 Site 2" name="VPLS
Customer 4306-2 Bridge"
At this point OSPF should be fully converged, and in the MPLS Bindings tab we
should see some MPLS labels associated with destination addresses:
Mikrotik MPLS Local Bindings
This is MPLS at work, associating routes with labels for quick lookup, which is what
gives MPLS its trademark performance boost over regular end-to-end IP routing.
We're ready now to add the VPLS tunnels and start moving some traffic transparently
between sites. The extra bridge interfaces that we added on the two LERs will be used
to bridge the VPLS virtual interfaces with physical Ethernet interfaces that we hand
off to the customer.
On Seattle LER:
/interface vpls
add comment="Customer 4306-2 VPLS" disabled=no
l2mtu=1500 name="Customer 4306-2 VPLS" remote-
peer=72.156.30.120 vpls-id=90:0
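On the Seattle side we also bridge the customer-facing ether1 hand-off with the VPLS interface just created; something like this should do it:
/interface bridge port
add bridge="VPLS Customer 4306-1 Bridge" interface=ether1
add bridge="VPLS Customer 4306-1 Bridge" interface="Customer 4306-2 VPLS"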
On Santa Fe LER:
/interface vpls
add comment="Customer 4306-1 VPLS" disabled=no l2mtu=1500 name="Customer 4306-1 VPLS" remote-peer=72.156.29.120 vpls-id=90:0
/interface bridge port
add bridge="VPLS Customer 4306-2 Bridge" interface=ether3
add bridge="VPLS Customer 4306-2 Bridge" interface="Customer 4306-1 VPLS"
At this point we've created a Layer 2 connection between whatever is plugged into
ether1 in Seattle and ether3 in Santa Fe. The customer could throw routers on those
connections, or switches, or plug servers directly in. For demonstration purposes I put
a virtual Ubuntu server on each of those physical interfaces, given them the IP
addresses 10.2.2.1 and 10.2.2.2, and run iperf both directions to test bandwidth as
shown below:
Mikrotik MPLS IPerf Testing
Bandwidth testing shows a consistent, fast connection. This whole network, servers included, is virtualized, so while it isn't running at gigabit wire speed it still performs
well. One of the other requirements for this solution was that there be no hops
between the two locations - this should all be transparent to the customer. Traceroute
from 10.2.2.2 to 10.2.2.1 shows the following:
Exactly what we want to see, which is nothing. None of the provider routers in
between, none of the hops. Next time we'll cover MPLS with QoS and all the other
fancy features!
Mikrotik Firewall
Preface
The Mikrotik firewall, based on the Linux iptables firewall, is what allows traffic to
be filtered in, out, and across RouterOS devices. It is a different firewall
implementation than some vendors like Cisco, but if you have a working knowledge
of iptables, or just firewalls in general, you already know enough to dive in.
Understanding the RouterOS firewall is critical to securing your devices and ensuring
that remote attackers can't successfully scan or access your network.
We'll discuss firewall design, chains, actions, rules, and overall best practices.
Navigation
1. Firewall Design
2. Firewall Chains
3. Firewall Actions
4. Firewall Rules
5. Firewall Best Practices
Firewall Design
The general idea of firewalling is that traffic you need should be allowed, and all other
traffic should be dropped. By putting firewalls in place a network can be divided into
untrusted, semi-trusted, and trusted network enclaves. Combined with network
separation using VLANs, this creates a robust, secure network that can limit the scope
of a breach if one occurs.
Traffic that is allowed from one network to another should have a business or
organizational requirement, and be documented. The best approach is to whiteboard
out your current network design, and draw the network connections that should be
allowed. Allowed traffic will have a rule that allows the traffic to be passed, then a
final rule acts as a "catch-all" and drops all other traffic. Sometimes this is referred to
as the "Deny All" rule, and those coming from a Cisco background often call it the
"Deny Any-Any" rule. Allowing what you need and dropping everything else keeps
firewall rules simple, and the overall rule count to a minimum.
The first concept to understand is firewall chains and how they are used in firewall
rules.
Firewall Chains
Firewall Chains match traffic coming into and going out of interfaces. Once traffic has
been matched you can take action on it using rules, which fire off actions - allow,
block, reject, log, etc. Three default Chains exist to match traffic - Input, Output, and
Forward. You can create your own chains, but that is a more advanced topic that we'll
cover in another article.
Input Chain
The Input Chain matches traffic headed inbound towards the router itself, addressed to
an interface on the device. This could be Winbox traffic, SSH or Telnet sessions, or an
administrator pinging the router directly. Typically most Input traffic to the WAN is
dropped in order to stop port scanners, malicious login attempts, etc. Input traffic from
inside local networks is dropped as well in some organizations, because Winbox,
SSH, and other administrative traffic is limited to a Management VLAN.
Not all organizations use a dedicated Management VLAN, but it is considered a best
practice overall. This helps ensure that a malicious insider or someone who gains
internal access can't access devices directly and attempt to circumvent organizational
security measures.
Output Chain
The Output Chain matches traffic headed outbound from the router itself. This could
be an administrator sending a ping directly from the router to an ISP gateway to test
connectivity. It could also be the router sending a DNS query on behalf of an internal
host, or the router reaching out to mikrotik.com to check for updates. Many
organizations don't firewall Output traffic, because traffic that matches the Output
chain has to originate on the router itself. This is generally considered to be "trusted"
traffic, assuming the device has not been compromised somehow.
Forward Chain
The Forward Chain matches traffic headed across the router, from one interface to
another. This is routed traffic that the device is handing off from one network to
another. For most organizations the bulk of their firewalled traffic is across this chain.
After all we're talking about a router, whose job it is to push packets between
networks.
An example of traffic matching the Forward chain would be packets sent from a LAN
host through the router outbound to a service provider's gateway via the default route.
In one interface and out another, directed by the routing table.
Firewall Actions
Firewall rules can do a number of things with packets as they pass through the
firewall. There are three main actions that RouterOS firewall rules can take on packets
- Accept, Drop, and Reject. Other actions exist and will be covered in different
articles as they apply, but these three are the mainstay of firewalling.
Accept
Rules that "Accept" traffic allow matching packets through the firewall. Packets are
not modified or rerouted, they are simply allowed to travel through the firewall.
Remember, we only should allow the traffic that we need, and block all the rest.
Reject
Rules that "Reject" traffic block packets in the firewall, and send ICMP "reject"
messages to the traffic's source. Receiving the ICMP reject shows that the packet did
in fact arrive, but was blocked. This action will safely block malicious packets, but the
rejection messages can help an attacker fingerprint your devices during a port scan. It
also lets the attacker know that there is a device running on that IP, and that they
should probe further. During a security assessment, depending on the auditor and the
standards you're being audited against it may or may not become an audit finding if
your firewall is rejecting packets. It is not recommended as a security best practice to
reject packets, instead you should silently "drop" them.
Drop
Rules that "Drop" traffic block packets in the firewall, silently discarding them with
no reject message to the traffic source. This is the preferred method for handling
unwanted packets, as it doesn't send anything back that a port scanner could use to
fingerprint the device. When drop rules are configured correctly a scanner would get
absolutely nothing back, appearing as though nothing is actually running on a
particular IP address. This is the desired effect of good firewall rules.
Firewall Rules
Firewall rules dictate which packets are allowed to pass, and which will be discarded.
They are the combination of chains, actions, and addressing (source / destination).
Good firewall rules allow traffic that is required to pass for a genuine business or
organizational purpose, and drops all other traffic at the end of each chain. By using a
blanket "deny all" rule at the end of each chain we keep firewall rule sets much
shorter, because there don't have to be a bunch of "deny" rules for all other traffic
profiles.
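As a simple illustration of that pattern (the interface names here are only examples), a forward chain might look like this:
/ip firewall filter
add chain=forward connection-state=established,related action=accept comment="Allow established and related"
add chain=forward in-interface=ether2 out-interface=ether1 action=accept comment="LAN to Internet"
add chain=forward action=drop comment="Deny all"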
Chains
Each rule applies to a particular chain, and assigning a chain on each rule is not
optional. Packets match a particular chain, and then for that chain firewall rules are
evaluated in descending order. Since the order matters, having rules in the correct
sequence can make a firewall run more efficiently and securely. Having rules in the
wrong order could mean the bulk of your packets have to be evaluated against many
rules before hitting the rule that finally allows them, wasting valuable processing
resources in the meantime.
Actions
All firewall rules must have an action, even if that action is only to log matching
packets. The three typical actions used in rules are Accept, Reject, and Drop as
described previously.
Addressing
This tells the firewall for each rule what traffic matches the rule. This part is optional -
you can simply block an entire protocol without specifying its source or destination.
There are a couple options for addressing traffic coming into or across the router. You
can specify the Source or Destination IP addresses, including individual host IPs or
subnets using CIDR notation (/24, /30, etc). Interfaces can also be used to filter traffic
in or out of a particular interface, which can be a physical interface like an Ethernet
port or a logical interface like those created by GRE tunnels. This is often done when
blocking traffic where the source or destination of the traffic isn't always known. A
good example of this is traffic inbound to the router via your service provider - that
traffic could originate from Asia, Europe, or anywhere else. Since you don't know
what that traffic is a deny rule is used inbound on the WAN interface to just drop it.
Comments
It's so important to add a comment to your firewall rules, for your own sanity and that
of your network team. It takes almost no time to do when you create firewall rules,
and it could save significant time when troubleshooting. It could also save you from
making a mistake when tweaking firewall rules down the line as networks change and
evolve. If you haven't created a comment at the time the rule was made just add a
comment like this, using firewall number 4 as an example:
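For example, after a print to find the rule number (the comment text is just a placeholder):
/ip firewall filter print
/ip firewall filter set 4 comment="Allow established connections from the LAN"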
Since the release of RouterOS 6.29.1 and the introduction of the new FastTrack
feature, there's a bit of confusion out there about how to implement FastTrack rules in
the firewall. With later releases of RouterOS the FastTrack feature has started
working on more interfaces, including VLANs, so it's even more important to learn
this feature and implement it properly. We want forwarded traffic across the router to
be marked for FastTrack in the firewall, but we still have to Accept that same traffic
as well. Without both of these rules it won't work, and you won't reap the performance
benefits.
If you're not familiar with firewall rule and chain basics take a look at the Mikrotik
firewall article that breaks them down.
FastTrack has been shown to reduce CPU utilization by quite a bit, in some cases over
10% when traffic volume is high. It operates on the premise that if you've already
checked one packet in a stream against the firewall and allowed it, why do you need
to check all the other packets in the rest of the stream? In terms of overall efficiency
this is big, especially if you have more than just a few firewall rules to evaluate traffic
against.
Under the IP > Settings menu in Winbox you can also see a running counter of the total packets that have been marked for FastTrack:
Here are the firewall rules currently in use on one of my SOHO devices that take
advantage of FastTrack:
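They follow the standard FastTrack pattern - a fasttrack-connection rule plus a matching accept rule, both for established and related connections:
/ip firewall filter
add chain=forward action=fasttrack-connection connection-state=established,related comment="FastTrack established and related"
add chain=forward action=accept connection-state=established,related comment="Accept established and related"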
The fasttrack-connection rule and the accept rule that follows it are where the rubber meets the road, and they are both needed to make it work. These same rules can be applied in an enterprise network
environment and tweaked accordingly. Enjoy the performance boost!
Preface
Mikrotik's RouterOS doesn't yet have specific functionality built in for network
"Zones" like some other router platforms, but with new releases of RouterOS we can
get the same functionality through "Interface Lists". An interface list is just like a
firewall address list, except instead of host IPs or CIDR subnets we're listing physical
or virtual interfaces. Once we've put interfaces into their respective lists we can use
those lists in firewall rules. If you have many interfaces, such as multiple trunked
VLANs or redundant WAN connections this could help you consolidate firewall rules.
Navigation
1. Zone Overview
2. Zone Types
3. Creating MikroTik Zones
Zone Overview
I like to group my interfaces into zones based on the trust level of the network that an
interface is attached to. Often we end up with "Trusted", "Semi-Trusted", and
"Untrusted" zones, and some additional zones as needed depending on how a network
is built. How you split up your zones will be dictated by your individual organization's
security, compliance, legal, and operational requirements.
Zone Types
It's easy to think of three zone types - Trusted, Semi-Trusted, and Untrusted.
Trusted Zone
Interfaces in a Trusted zone would be internal wired LAN or VLAN gateway
interfaces, and management interfaces. We have a reasonable level of trust that the
hosts in these networks are not trying to actively compromise our systems, and so we
allow them to communicate (relatively) freely. Access to these networks would
require physically plugging into a port on-premise, and hopefully port security is in
place adding an additional security layer.
Semi-Trusted Zone
A Semi-Trusted network could be a point-to-point VPN to a vendor's network, or a
corporate wireless network. We must have these networks in place for legitimate
business or organizational reasons, but there is a chance that a bad actor could get
access to these networks and we want that breach to be contained if it occurs. Many
organizations give these networks access to internal server resources (Active
Directory DCs, DNS servers, etc) as required, but access to other subnets or services
is forbidden.
Untrusted Zone
Untrusted networks are networks where we know or have reason to suspect that
malicious activities could occur, or do occur. A good example of an Untrusted
connection is a connection to the internet via an ISP. Port scans and malicious login
attempts are very common out on the internet, and it's a given that attackers are
actively searching for soft targets.
Guest wireless networks are great candidates for a custom zone with some additional
firewall rules. It's still untrusted, because there's no telling what kind of devices might
roam onto the network and what kind of issues they may bring with them. But even
though the network is untrusted, it still has to forward traffic outbound to the ISP, and
they may be allowed to resolve DNS names using an internal server if split DNS is
configured, or they will just use a public DNS like Google's 8.8.8.8 server.
Creating MikroTik Zones
One Interface List already exists, number zero, the "all" list. It can't be deleted, but by
default it isn't used anywhere either so it doesn't affect your security.
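We'll create one list per zone, named after the zone types discussed above:
/interface list
add name=Trusted
add name=Semi-Trusted
add name=Untrusted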
Now that we have lists we'll assign interfaces to the lists. In this case ether1 is our
internet-facing WAN address, ether2-5 are LAN ports, wlan1 is a corporate
(encrypted) wireless network, and wlan2 is an open (unencrypted) guest wireless
network:
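Assuming that layout, membership looks something like this - where the guest wlan2 lands is a judgment call, and here it goes in Untrusted:
/interface list member
add list=Untrusted interface=ether1
add list=Trusted interface=ether2
add list=Trusted interface=ether3
add list=Trusted interface=ether4
add list=Trusted interface=ether5
add list=Semi-Trusted interface=wlan1
add list=Untrusted interface=wlan2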
With all of our interfaces in their respective lists we can use the lists in firewall rules.
Having multiple interfaces in a rule means you only need to put the interface list in
one rule, and that rule then applies to all those interfaces. For example, we can use an
input-drop rule on all WAN interfaces by applying that rule to the "Untrusted" list:
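A hedged example - in production you'd allow established/related and management traffic above this rule:
/ip firewall filter
add chain=input in-interface-list=Untrusted action=drop comment="Drop input from untrusted interfaces"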
We can allow all of our trusted networks on ether2-ether5 to forward traffic out the
WAN to the internet by using just one rule:
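Something like this should do it:
/ip firewall filter
add chain=forward in-interface-list=Trusted out-interface-list=Untrusted action=accept comment="Trusted networks out to the Internet"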
When new interfaces are added to the router all that needs to be done is adding the
interface to the appropriate interface list, and the correct firewall rules will now apply.
Preface
SNMP can provide insight about a device's performance but there are some security
considerations to take into account. A secure version of the SNMP protocol should be used, authentication configured, and non-default Community strings put in place.
Navigation
1. SNMP Overview
2. SNMP Protocol Versions
1. SNMP v1
2. SNMP v2c
3. SNMP v3
3. Community Strings
1. Default Community
2. Create a Community
4. Enable SNMP
5. Summary
SNMP Overview
Simple Network Management Protocol (SNMP) is an industry-standard protocol for
pulling performance information from network devices. It is a pull protocol, meaning
the SNMP monitor must reach out on a regular basis and poll devices for information.
SNMP Collectors poll devices for information, and SNMP Agents on the devices
report that data.
With SNMP being such a ubiquitous protocol there are a number of both open source
and commercial collector suites, both hardware and software-based. Routers and
switches almost always feature SNMP Agents. Windows, Linux, and Mac OS also
feature SNMP Agents though they have to be enabled manually.
SNMP v1
Version 1 is the original SNMP version and is still widely used almost 30 years later.
There is no security built into v1 other than the SNMP Community string. If the
Community string presented by the Collector matches the string configured on the
Agent then it will be allowed to poll the device. This is why it's important to isolate
SNMP to a dedicated management subnet and change the default Community string.
It's not possible to delete the standard Community string, but it can be renamed and its read access removed, as shown in the Default Community section below.
SNMP v2c
Version 2c brings additional capabilities to SNMP but still relies on the Community
string for security. The next version is the preferred choice, though some
organizations still rely on v1 and v2c.
SNMP v3
Version 3 brings encryption and authentication, as well as the capability to push
settings to remote SNMP Agents. SNMP v3 is the preferred version when both the
Agent and Collector support it. While SNMP v3 does have the capability to push
settings to remote devices many organizations don't opt to use it, in favor of more
robust solutions like Ansible, Puppet, Chef, or proprietary management systems.
The network device must use SNMP Version 3 Security Model with FIPS 140-2
validated cryptography for any SNMP agent configured on the device.
https://www.stigviewer.com/stig/infrastructure_router/2016-07-07/finding/V-3196
Community Strings
A Community string is like a password, allowing SNMP Agents to vet polling from
SNMP Collectors in a very crude way. More modern versions of SNMP add
authentication and encryption to the protocol.
Default Community
The default Community string on almost all network devices is simply the word
"public". This is well-known, and many port scanners like Nmap will automatically
try the default "public" string. If the default Community string is left in place it can
allow attackers to perform reconnaissance quickly and easily. Infrastructure Router
STIG Finding V-3210 requires that the default string be changed:
The network device must not use the default or well-known SNMP Community strings
public and private.
https://www.stigviewer.com/stig/infrastructure_router/2016-07-07/finding/V-3210
On MikroTik platforms it's not possible to delete or disable the default "public"
Community string, but it can be renamed and restricted:
/snmp community set 0 name=not_public read-access=no write-access=no
Create a Community
Next create an SNMP Community with the following attributes, as shown in the example after this list:
Non-default name
Read-only access
Secure authentication
Encryption
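A hedged example that covers all four points - the community name and both passwords below are placeholders:
/snmp community
add name=mycommunity read-access=yes write-access=no security=private authentication-protocol=SHA1 authentication-password=AuthPassword1 encryption-protocol=AES encryption-password=EncPassword1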
Enable SNMP
Only one command is necessary to enable SNMP and configure the location and
contact information for the device:
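Something like this, with placeholder contact and location values:
/snmp set enabled=yes contact="noc@example.com" location="Head Office"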
Summary
SNMP is a robust, well-supported monitoring protocol used by MikroTik and other
mainstream manufacturers. Use non-default Community names, authentication, and
encryption to ensure that no one else can read information from your devices. Enable
SNMP and set good contact and location information to help ease distributed network
monitoring.
Preface
Some of the most requested topics folks ask me for are multi-WAN and load
balancing implementations. Unfortunately, as easy as most solutions are on MikroTik,
these aren't simple. Many vendors like Ubiquiti have wizards that you can use during
the initial device setup to configure multi-WAN and load balancing, but that hasn't
come to RouterOS yet. Those wizard-based implementations are still complex, but
that complexity is hidden from the device administrators.
Using a load balanced multi-WAN setup helps us meet a few design goals: redundancy if one provider fails, and actually using the bandwidth of both circuits we're paying for instead of leaving one idle.
Something that should be noted before you go further - this is a fairly complex topic.
Multi-WAN and load balancing requires us to configure multiple gateways and
default routes, connection and router mark Mangle rules, and multiple outbound NAT
rules. If you aren't familiar with MikroTik firewalls, routing, and NAT then it might
be best to put this off until you've had some time to revisit those topics.
Navigation
1. Router Setup
2. Input Output Marking
3. Route Marking
4. Special Default Routes
5. Summary
Router Setup
A single MikroTik router is connected to two ISPs (Charter and Integra Telecom) on
ether1 and ether2 respectively, and a LAN on ether3. Traffic from the LAN will be
NAT'd out both WAN ports and load balanced. See the topology below:
Configure the local IP addresses:
/ip address
add address=1.1.1.199/24 interface=ether1
comment="Charter"
add address=2.2.2.199/24 interface=ether2
comment="Integra Telecom"
add address=192.168.1.1/24 interface=ether3 comment="LAN
Gateway"
/ip route
add dst-address=0.0.0.0/0 check-gateway=ping
gateway=1.1.1.1,2.2.2.1
At this point you could stop configuring the router and things would work just fine in
a failover situation. Should one of the two providers go down the other would be used.
However there is no load-balancing, and this is strictly a failover-only solution. Most
organizations wouldn't want to pay for a second circuit only to have it used just when
the first goes down.
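First we mark connections based on the WAN interface they arrive on. A sketch of those Mangle rules - the "Charter Input" mark name matches the rule below, while the Integra mark name is an assumption:
/ip firewall mangle
add action=mark-connection chain=input in-interface=ether1 new-connection-mark="Charter Input" comment="Charter Input"
add action=mark-connection chain=input in-interface=ether2 new-connection-mark="Integra Telecom Input" comment="Integra Telecom Input"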
Now we'll use the connection mark just created for packets coming IN to trigger a
routing mark. This routing mark will be used later on in a route that tells a connection
which provider's port to go OUT.
/ip firewall mangle
add action=mark-routing chain=output comment="Charter Output" connection-mark="Charter Input" new-routing-mark="Out Charter"
Connections that have been marked then get a routing mark so the router can route the
way we want. In the next step we'll have the router send packets in the connections
with those marks out the corresponding WAN interface.
The next rules tell the router to balance traffic coming in ether3 (LAN) and heading to any non-local (!local) address out to the Internet. We grab the traffic in the prerouting chain so we can steer it to the WAN port we want by assigning a routing mark.
The following commands balance ether3 LAN traffic across two groups:
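A per-connection-classifier (PCC) split into two groups is one way to do it; a sketch:
/ip firewall mangle
add action=mark-routing chain=prerouting in-interface=ether3 dst-address-type=!local per-connection-classifier=both-addresses-and-ports:2/0 new-routing-mark="Out Charter" comment="LAN load balance - Charter"
add action=mark-routing chain=prerouting in-interface=ether3 dst-address-type=!local per-connection-classifier=both-addresses-and-ports:2/1 new-routing-mark="Out Integra Telecom" comment="LAN load balance - Integra Telecom"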
NOTE: The routing marks above are the same in this step as they were in the previous
step, and correspond with the routes we're about to create.
/ip route
add distance=1 gateway=1.1.1.1 routing-mark="Out Charter"
add distance=1 gateway=2.2.2.1 routing-mark="Out Integra Telecom"
Note: These routes only get applied with a matching routing mark. Unmarked packets
use the other default route rule created during router setup.
Connections that came in on the Charter interface get a connection mark. That connection mark triggers a routing mark. The routing mark matches the mark in the route above, and the return packets go out the interface they came in on.
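The last piece is the outbound NAT mentioned earlier - one masquerade rule per provider:
/ip firewall nat
add chain=srcnat out-interface=ether1 action=masquerade comment="NAT out Charter"
add chain=srcnat out-interface=ether2 action=masquerade comment="NAT out Integra Telecom"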
Summary
Here's what we've configured: addresses on both WAN links and the LAN, a default route to both providers with gateway checks for failover, Mangle rules that mark connections and routes per provider, marked default routes so traffic leaves via the provider it belongs to, and NAT out both WAN interfaces.
Preface
If you don't have graphical access like Winbox or Webfig to a MikroTik router you
can easily do software updates via the command line. These commands can be used in
Ansible playbooks as well to programmatically update devices.
Navigation
1. Set Package Channel
2. Check for Updates
3. Install Updates
If you prefer to live on the bleeding edge or if you want to test new features in
development use the Release Candidate channel:
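On RouterOS 6 that looks like this (newer releases have renamed the channels, so check /system package update print for the names your version uses):
/system package update set channel=release-candidate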
Install Updates
If a new version is available download it:
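Check for a new version, download it, then reboot to apply the new packages:
/system package update check-for-updates
/system package update download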
/system reboot
Preface
Being attacked sucks and we hate it. Done. Here's a solution for mitigating an attack. This will not block large-scale DDoS attacks, which require coordination with upstream providers and possibly additional hardware capabilities.
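The basic approach is an address list of known-bad IPs plus drop rules that reference it; a minimal sketch, with the list name and IP below as placeholders:
/ip firewall address-list
add list=Attackers address=203.0.113.45 comment="Observed attacker"
/ip firewall filter
add chain=input src-address-list=Attackers action=drop comment="Drop known attackers"
add chain=forward src-address-list=Attackers action=drop comment="Drop known attackers"
Put the drop rules near the top of their chains so attacking traffic is discarded before any other processing.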
As new malicious IP addresses are detected just add them to the Address List.
Fin
P2P Filtering
Preface
Limiting Peer-to-Peer (P2P) network traffic is important for businesses and other
network operators for a couple reasons, mainly risk management and bandwidth
conservation.
Navigation
1. Risk Considerations with P2P
2. Possible Benefits of P2P
3. Blocking P2P in Firewalls
1. Mikrotik P2P Firewall Rules
2. Rule Breakdown
3. Additional Steps
Carefully consider the impact of blocking P2P in your networks before moving
forward.
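Mikrotik P2P Firewall Rules
The two rules look roughly like this, with ether1-gateway as the WAN interface; the parameters come straight from the breakdown that follows:
/ip firewall filter
add chain=forward out-interface=ether1-gateway p2p=all-p2p action=add-src-to-address-list address-list=P2P address-list-timeout=30m comment="Detect P2P traffic"
add chain=forward out-interface=ether1-gateway src-address-list=P2P action=drop comment="Drop Internet-bound traffic from P2P hosts"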
Rule Breakdown
Let's break down each rule in turn. The first rule matches traffic in the forward chain (chain=forward, data going through the router) being routed out the WAN interface (out-interface=ether1-gateway) and checks for P2P traffic (p2p=all-p2p). When P2P traffic is
found it triggers action=add-src-to-address-list and adds the offending host's IP to the
dynamic "P2P" list (address-list=P2P). It adds the host IP to the list for 30 minutes
(address-list-timeout=30m), so that traffic isn't blocked forever by the next rule in
case of a false-positive. If it isn't a false positive the host will be re-added after it falls
off the address list if more P2P traffic happens.
The second rule drops (action=drop) all traffic from hosts on the P2P address list (src-
address-list=P2P) going out the WAN port (out-interface=ether1-gateway) to the
Internet. If you need more information about firewall actions, rules, etc see
the Mikrotik Firewall write-up. It will continue dropping traffic until the host falls off
the address list after 30 minutes. Once on the list, whoever is using this host can't
access the Internet, but they will still be able to reach internal network resources like
servers and printers. By tweaking the second firewall rule it's possible to limit
network traffic further.
Additional Steps
Adding a third firewall rule could allow for a Syslog message to be sent, if Network Admins are monitoring Syslog messages and a Syslog log action has been set up. Helpdesk staff can check the dynamic P2P address list to see which hosts have tripped the P2P rules and begin remediation.
One of the most common tasks that a network administrator will need to perform is
forwarding ports across a router using Network Address Translation (NAT) - also
known as "pinholing". This process can be set up statically by an administrator, only
forwarding a few ports as needed. It can also be handled dynamically by processes
like UPnP that pinhole the router on their own using instructions from devices that
request ports be opened. Ports are forwarded across routers to make internal services
like email and web servers available to the outside world.
For security these servers should be located in a separate network segment from the
rest of the internal network, and only the necessary ports should be forwarded across
the router to keep the attack surface as small as possible. One of the most frequently
seen implementations of this is forwarding HTTP(S) access across the router from an
external static IP to an internal Apache, NGINX, or IIS server. This could be put in
place to facilitate outside access to an Exchange OWA portal, a line of business web
application, Sharepoint server, etc.
In this case we have a single Mikrotik router with a static IP address on the WAN
interface and an internal Apache server running on a Linux box. We need to forward
HTTP (TCP port 80) across the router so that the web server is accessible from the
Internet. No other ports should be forwarded for security - we don't want someone
running a port scan and finding that SSH or some other protocol exposed on that
Linux server. Here is the topology:
Here is the NAT rule we'll use to accomplish the port forwarding:
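A destination NAT rule along these lines does the job:
/ip firewall nat
add chain=dstnat in-interface=ether1 protocol=tcp dst-port=80 action=dst-nat to-addresses=192.168.88.198 to-ports=80 comment="Forward HTTP to web server"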
The final part of the command indicates what internal IP address to NAT the traffic to
(to-addresses=192.168.88.198) and what port to use (to-ports=80). Everything hitting
the router on WAN port ether1 that is TCP/80 will be sent to internal IP
192.168.88.198:80 - that's all there is to it. If a port scan were to be run on the router
now we would see one extra port open, with an Apache server responding to HTTP
requests. We've forwarded only the necessary traffic, and assuming that the web
server is patched regularly there should be no security issues with this configuration.
This is a simple task, but be mindful going forward of the security implications of
exposing an internal server to external requests. Only forward what needs to be
forwarded, and be sure to patch your servers regularly to reduce the risk of someone
compromising your server and then using it to access your internal network segments.
Mikrotik SOHO (Small Office, Home Office) wireless products are incredibly
versatile, and there are many flexible settings for Wireless interfaces in "Simple"
mode. "Advanced" mode gives access to highly-tunable wireless features, some of
which should be tweaked. Many SOHO models like the RB-750 and RB-951 come
with fairly good wireless settings already in place, but with a few changes you can
improve wireless connectivity significantly, and accommodate more wireless clients.
This is very important if you're putting a SOHO model into a small / branch office
with more than a few users. Considering the prevalence of end-user smart phones,
tablets, and laptops in offices, the load on wireless networks is growing. If wireless
infrastructure isn't tuned properly it will quickly become apparent that connectivity
isn't robust enough.
The following wireless interface options (most located in "Advanced" menus) can
improve connectivity significantly for SOHO routers:
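As a hedged example, a few of the commonly tuned options look like this - treat the values as starting points rather than a definitive list:
/interface wireless
set wlan1 country="united states" frequency-mode=regulatory-domain band=2ghz-b/g/n wireless-protocol=802.11 wmm-support=enabled distance=indoors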
Obviously the first setting's country designation is specific to the United States -
change as needed for your country. Some of these options are now defaults in the
latest version of RouterOS, however those defaults have not always been the case, and
I'm sure defaults will be updated in later releases of RouterOS.
Preface
MikroTik devices are very cost-effective - some would say downright cheap - so the
capital cost of upgrading networks tends to be fairly low. In some organizations this
can lead to a pile of RouterBOARD devices on someone's desk in a corner that are
eventually donated, repurposed in a lab, or re-used in a pinch. Unfortunately, a
repurposed RouterBOARD unit that hasn't been wiped can expose a lot of sensitive
information in the wrong hands. While some things are hidden in the configuration
and can't be viewed from the console, .rsc or .backup files in onboard storage can
disclose them.
First we'll delete sensitive files in the onboard storage, then we'll wipe the
configuration.
Delete Files
Resetting the configuration in the next step won't remove files in the onboard storage.
Use the following commands to delete sensitive files:
/file
remove [find name~".rif"]
remove [find name~".txt"]
remove [find name~".rsc"]
remove [find name~".backup"]
Reset Configuration
Use the following command to reset the device's configuration:
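One command does it, and you'll be asked to confirm:
/system reset-configuration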
Confirm the command and the device will wipe its configuration, reboot, and
regenerate SSH keys. RouterOS will be returned to its default out-of-the-box
configuration and the device can be repurposed.
Site-to-Site IPSEC
IPSEC can be used to link two remote locations together over an untrusted medium
like the Internet. The implementation itself is a combination of protocols, settings, and
encryption standards that have to match on both sides of the tunnel.
Terminology
Devices at both sides of the tunnel are called Peers. Each of the peers uses
combinations of encryption and hashing protocols to secure traffic that are specified in
a Proposal. Once both peers have negotiated a secure connection using the protocols
and standards in the proposals Security Associations (SAs) are installed. These SAs
have a finite lifetime before they expire and new SAs are negotiated.
Network Topology
The network scenario in this post has a West Branch office that needs to be connected
to the Central Office. This post does not include the additional configuration of the
East Office that is pictured in the topology below and covered in the extended IPSEC
guide.
Network Addresses
The West Office has a LAN on the 192.168.2.0/24 network, and a WAN address of
172.16.1.2/24. The Central Office has a LAN on the 192.168.1.0/24 network, and a
WAN address of 172.16.1.1/24. The WAN port on all routers is eth0, and the LAN
gateway port is eth1 in keeping with the typical Ubiquiti defaults.
Configuration Summary
The two sections of configuration commands below will perform the following steps
on both routers:
configure
set firewall group address-group IPSEC description "IPSEC peer addresses"
set firewall group address-group IPSEC address 172.16.1.2
set firewall name WAN_LOCAL rule 15 description "IPSEC Peers"
set firewall name WAN_LOCAL rule 15 action accept
set firewall name WAN_LOCAL rule 15 source group address-group IPSEC
commit
configure
set firewall group address-group IPSEC description "IPSEC peer addresses"
set firewall group address-group IPSEC address 172.16.1.1
set firewall name WAN_LOCAL rule 15 description "IPSEC Peers"
set firewall name WAN_LOCAL rule 15 action accept
set firewall name WAN_LOCAL rule 15 source group address-group IPSEC
commit
set vpn ipsec esp-group west-central proposal 1 encryption aes256
set vpn ipsec esp-group west-central proposal 1 hash sha1
set vpn ipsec esp-group west-central mode tunnel
set vpn ipsec esp-group west-central lifetime 1800
set vpn ipsec esp-group west-central pfs dh-group2
set vpn ipsec ike-group west-central key-exchange ikev2
set vpn ipsec ike-group west-central proposal 1 encryption aes256
set vpn ipsec ike-group west-central proposal 1 hash sha1
set vpn ipsec ike-group west-central proposal 1 dh-group 2
commit
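With the groups defined, the tunnel itself is tied together in a site-to-site peer section. Here's a sketch from the West Office side - the pre-shared secret is a placeholder, and on older firmware the local-address keyword is spelled local-ip:
set vpn ipsec ipsec-interfaces interface eth0
set vpn ipsec site-to-site peer 172.16.1.1 authentication mode pre-shared-secret
set vpn ipsec site-to-site peer 172.16.1.1 authentication pre-shared-secret MySharedSecret
set vpn ipsec site-to-site peer 172.16.1.1 ike-group west-central
set vpn ipsec site-to-site peer 172.16.1.1 local-address 172.16.1.2
set vpn ipsec site-to-site peer 172.16.1.1 tunnel 1 esp-group west-central
set vpn ipsec site-to-site peer 172.16.1.1 tunnel 1 local prefix 192.168.2.0/24
set vpn ipsec site-to-site peer 172.16.1.1 tunnel 1 remote prefix 192.168.1.0/24
commit
The Central Office mirrors this configuration with the peer address, local-address, and prefixes swapped.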
Testing
To test the IPSEC tunnel send an ICMP Echo (Ping) from a device on one LAN to a
device on the other. This will generate the "interesting" traffic and force the IPSEC
tunnels to come up. To view how many IPSEC tunnels are currently up use the
following command:
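On EdgeOS that's:
show vpn ipsec sa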
Ubiquiti routers straight out of the box require security hardening like any Cisco,
Juniper, or Mikrotik router. Some very basic configuration changes can be made
immediately to reduce attack surface while also implementing best practices, and
more advanced changes allow routers to pass compliance scans and formal audits.
Almost all of the configuration changes below are included in requirements for PCI
and HIPAA compliance, and the best-practice steps are also included in CIS security
benchmarks and DISA STIGs.
If you'd like a printable copy of this guide complete with a checklist, links to
STIGs, and more in-depth discussions of best practices than will fit in a blog post
check out the Ubiquiti EdgeRouter Hardening Guide.
The router that will be used for this article is a brand new Ubiquiti EdgeRouter X,
fresh out of the box and updated with the latest firmware (1.8.5 as of this writing).
Before going any further ensure that your device is updated with the latest
firmware and rebooted.
LAN and Management interfaces will be assigned the lowest usable address (.1/24) in
their respective subnets.
Securing Interfaces
The first step we'll take is disabling any physical network interfaces that aren't in use,
denying an intruder access to the device if they somehow got into the wiring closet or
server room. To plug into the router they'd have to disconnect a live connection and
draw attention by bouncing the port.
First list all the interfaces, making note of the numbers associated with each interface:
show interfaces
As mentioned earlier we'll be using eth0 for the WAN, and eth1 for the LAN. Let's
add interface descriptions so they don't get confused:
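The description text is up to you; something like this:
configure
set interfaces ethernet eth0 description "WAN - ISP uplink"
set interfaces ethernet eth1 description "LAN"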
Not only does this help us not get confused, it also helps other networking staff keep
things straight. Then we'll shut off all the interfaces that aren't live so they can't be
used to access the device. In our case we're NOT using interfaces eth2 and eth3, so
let's shut them off:
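For example:
set interfaces ethernet eth2 disable
set interfaces ethernet eth3 disable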
We'll also take a moment to assign IP addresses to the LAN and Management
interfaces:
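The exact subnets depend on your addressing plan; as placeholders, with eth4 serving as the Management interface:
set interfaces ethernet eth1 address 192.168.1.1/24
set interfaces ethernet eth4 address 192.168.10.1/24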
With those changes made let's commit and save the current configuration before
moving forward, then show interfaces again to see our changes.
commit
save
We see that the interface state has changed and our new descriptions and IP addresses
are listed as well.
Services
Next we'll disable or firewall services that don't need to be running or exposed.
Disabling a service rather than firewalling it is the most appropriate, long-term
solution. It reduces overall attack surface, and ensures that even if a firewall rule gets
botched, the service isn't available for an attacker to take advantage of. An Nmap port
scan of the router via Eth0 (the WAN port) shows four services running - SSH, HTTP,
HTTPS, and NTP as shown below:
Ubiquiti Nmap Port Scan
We'll restrict access to the HTTP(S) GUI to the Management network so only IT staff
plugged into that network can access it. We'll also disable older crypto ciphers that
now have documented vulnerabilities and available exploits. The same will happen
with SSH. We'll leave NTP running, assuming it's being used as the branch office's
NTP time source, but restrict it on the WAN port so it can't be used in NTP DDoS
attacks.
First set the HTTP(S) GUI to only listen for connections on the Management (eth4)
interface, then disable older, vulnerable ciphers:
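Something along these lines, using the Management address assigned earlier (192.168.10.1 was a placeholder) and the older-ciphers toggle available on recent EdgeOS releases:
set service gui listen-address 192.168.10.1
set service gui older-ciphers disable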
Once those changes are committed and saved only hosts on the Management network
will be allowed to reach the device's web interface. We'll restrict SSH now in the
same way, and also require that SSH v2 be used for all connections. Once the
following commands are passed and committed you won't be able to access the
device unless you're in the Management network.
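A sketch of those SSH changes:
set service ssh listen-address 192.168.10.1
set service ssh protocol-version v2
commit
save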
Credentials
The Ubiquiti factory-default username and password combination of "ubnt" and
"ubnt" is widely known and publicly available, and many compliance and security
scanners like Tenable's Nessus check for factory default credentials. Compliance
standards like PCI-DSS and HIPAA strictly forbid the use of factory-default
credentials. In keeping with that spirit we will set up our own credentials, and remove
the factory-default set. First, I'll create an admin user for myself:
set system login user tyler
set system login user tyler full-name "Tyler Hart"
set system login user tyler authentication plaintext-password 1234
set system login user tyler level admin
In place of the "1234" password you should use a suitably secure password that meets
your organization's security and compliance requirements. Although the command
setting the password uses the phrase "plaintext-password", the system will encrypt it
for you. Only after setting my full name and a strong password do I pass the final
command, giving myself "admin"-level privileges. Commit and save the new
credentials, log out of the default "ubnt" user, and log in with your own admin-level
credentials. Logged in as yourself, delete the built-in "ubnt" user so that it can't ever
be used to breach the router:
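For example:
delete system login user ubnt
commit
save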
If you're not comfortable deleting the built-in "ubnt" user you must set a complex
password, because port scanners will attempt to login with factory credentials given
the opportunity. Each device administrator should have their own credentials, so it's
possible to see who changed what and when on the device. If all admins log in as the
same user there is no non-repudiation, and no way to tell who may have made a
malicious configuration change.
Neighbor Discovery
Ubiquiti routers come with neighbor discovery turned on by default, which is great for
convenience but not great for security. It runs on UDP port 10001, and allows
administrators to easily see what devices are on the network and how they are
addressed. Unfortunately it can also allow attackers to easily fingerprint a network,
and can help them discover soft targets faster than they might otherwise. Having
neighbor discovery turned on can also make attackers aware of other devices they
may not have seen otherwise because of network segmentation. We'll shut off
neighbor discovery to start with:
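One setting turns it off globally:
set service ubnt-discover disable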
This disables the neighbor discovery service on all ports, both physical and virtual. It
also ensures that any new virtual interfaces won't run neighbor discovery when they
are created. If you'd like to use neighbor discovery, you should disable it on all ports
except those you want it running on. This could mean adding a lot more configuration
lines, but that is the current state of things. Best practice says that you should run
neighbor discovery protocols like CDP, LLDP, etc only on management interfaces.
Firewalls
Firewalling is a complex topic, but there are basic rules that can be put in place to
secure a device from port scanners, malicious login attempts, and other probes from
the WAN. We'll create a firewall rule specifically for inbound traffic on the WAN,
and give it a good description:
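We'll call the rule set WAN_In, matching the name used in the rules below:
set firewall name WAN_In description "Inbound WAN traffic"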
Once the rule is created and described we'll set the default action that the rule should
take for matching packets. With this rule being on the inbound side of our WAN, port
scanners and others will be hitting it, so the default action should be to "drop":
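One line sets it:
set firewall name WAN_In default-action drop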
If traffic doesn't match the rules we'll create in just a moment, then the default action
takes place and that traffic gets dropped. You may be tempted to use the "reject"
action instead of "drop", but even rejected packets can help a port scanner fingerprint
your router. The best option is to just silently "drop" the packets, and this is required
for PCI-DSS.
The first rule in "WAN_In" will allow any connections through the firewall with
states of either "Established" or "Related". This is authorized traffic that originated
properly, and should be quickly allowed through the firewall:
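Using rule number 10 as an example:
set firewall name WAN_In rule 10 description "Allow established and related"
set firewall name WAN_In rule 10 action accept
set firewall name WAN_In rule 10 state established enable
set firewall name WAN_In rule 10 state related enable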
Next we'll define an Address Group of trusted external IPs. We'll allow remote
connections to the router via the WAN, but only from those trusted IPs. First, create
the Address Group, then set a description:
set firewall group address-group Trusted_IPs
set firewall group address-group Trusted_IPs description "External Trusted IPs"
Add any trusted external IP addresses you have to this list. This could be for a site-to-
site VPN, ICMP echos to check if the device is up, SNMP monitoring traffic, or
anything else. I'll use 1.1.1.1 and 2.2.2.2 just as examples:
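For example:
set firewall group address-group Trusted_IPs address 1.1.1.1
set firewall group address-group Trusted_IPs address 2.2.2.2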
Now we'll create a firewall rule in WAN_In and reference the address group created
above:
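Using rule 20 as an example:
set firewall name WAN_In rule 20 description "Allow trusted external IPs"
set firewall name WAN_In rule 20 action accept
set firewall name WAN_In rule 20 source group address-group Trusted_IPs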
By default Ubiquiti devices accept all ICMP requests, including echo requests or
"pings". We only want the WAN interface responding to pings from our trusted IPs,
so there are two options. The first is to disable all ICMP replies globally:
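That's a single setting:
set firewall all-ping disable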
This is quick and easy, but it also removes the use of ICMP as a troubleshooting tool. The second option is to create a third firewall rule in WAN_In that specifically drops ICMP echo requests arriving on the WAN while letting ICMP work everywhere else:
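Using rule 30 as an example:
set firewall name WAN_In rule 30 description "Drop ICMP echo requests"
set firewall name WAN_In rule 30 action drop
set firewall name WAN_In rule 30 protocol icmp
set firewall name WAN_In rule 30 icmp type-name echo-request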
At this point we're ready to apply the rule on the inbound side of the WAN (Eth0)
interface. Traffic that has a state of "Established" or "Related", and traffic from IPs in
the Trusted_IPs list will be allowed. Everything else (port scans, login attempts, pings,
etc) will be dropped by the default action. We've created the firewall entry, added
rules to it, now we'll apply it:
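The rule set attaches to the inbound direction of eth0:
set interfaces ethernet eth0 firewall in name WAN_In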
We'll commit and save the configuration, then run another port scan like before. You
can see the results below, scanning both TCP and UDP:
Nmap can tell there is something there and it's a Ubiquiti router only because the
scanning computer is directly connected to it and can see the MAC address via the
physical link. Were this a production router being scanned across a WAN connection
there wouldn't be that information, and it would appear that the WAN IP has no
device assigned to it. This is exactly what we want remote scanners to see - nothing at
all.
Logging
It's best to set up logging to an external repository, like a Syslog server. If a device is
ever compromised or the configuration tampered with by an insider, the logs on the
local device become suspect. Sending logs to an external server and archiving them
preserves logs for investigations and forensics, and can ensure their integrity remains
intact.
See the Ubiquiti Syslog article for directions on how to configure Syslog logging.
In that same vein it's important to ensure that the timestamps of your log entries are
accurate. It's also important that the clocks of all your routers are actively updated, so
if you need to correlate events between devices you know that their time is correct.
See the Ubiquiti NTP article for directions on configuring NTP and keeping clocks in
sync.
Having a login banner is widely considered a best practice across the networking
industry. Though the legal merits and applicability of login banners is sometimes
disputed, there is value in notifying anyone who may try to log into a device that
access is monitored and audited. Some compliance standards also require a login
banner, and there is a DISA STIG that also requires it for any equipment in use by the
United States government. EdgeOS is somewhat unique in that it offers two login
banners instead of one - a pre-login and post-login banner. The pre-login banner
displays before a user is prompted for a password. The post-login banner displays once
a user is successfully authenticated.
First we'll configure the pre-login banner, then the post-login banner, and finally
commit and save the new configuration. The following command sets the pre-login
banner.
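Both banners use the same syntax, with the post-login banner just swapping the keyword:
set system login banner pre-login "Authorized access only. Activity on this device is monitored and logged."
set system login banner post-login "Welcome. This session is monitored and audited."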
The banner text in the commands above is just an example, and you should create banner text specific to your organization's legal requirements. Don't forget to commit and save the new configuration, then log out and back in to see how the new banner looks.
SNMP is easy to configure on Ubiquiti devices with just a few commands. It runs on
UDP port 161, and just like with Mikrotik or any other router brand it's used to
monitor network interface statistics, CPU and RAM utilization, and more. Network
monitoring suites like Solarwinds, PRTG, Zenoss, and others can use SNMP to graph
statistics over time, giving you a running log of device performance.
First, set the location, contact, and description information for your device.
configure
set service snmp location "Virginia, USA"
set service snmp description "Office Edge Router"
set service snmp contact "[email protected]"
There's all the basic device information in just a few commands. Next we need to
configure an SNMP community. The SNMP community is just a string of text that an
SNMP probe or collector will use to extract statistics from the device. Different
communities can have different permissions allowing SNMP to read and write, view
specific types of statistics, and more. The community string must match on the
device(s) being monitored and the collector.
By default many manufacturers have the SNMP community set to "public", which is
very well-known and should be modified immediately. SNMP can be a goldmine of
information for an attacker doing reconnaissance, trying to fingerprint devices and
identify vulnerabilities. Some compliance standards like PCI-DSS specifically call out
having "public" SNMP communities configured as a compliance violation. The
following command will create a new SNMP community "manitonetworks" and give it Read-Only permissions:
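On EdgeOS that's:
set service snmp community manitonetworks authorization ro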
With the device details configured and the new community string set it's possible to
probe SNMP and get some basic statistics about the device once we configure the
device to listen on a particular interface. The following command configures the
device to listen on the interface configured for 192.168.1.1.
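On EdgeOS:
set service snmp listen-address 192.168.1.1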
SNMP should only listen on trusted interfaces - if someone knows or guesses your
community string they will have full access to the device's information and
performance statistics. Best practice is to configure SNMP to listen on a physical
management interface or VLAN subinterface.
Now commit and save the configuration changes. With the SNMP configuration
complete you can add the device to your network monitoring software, add the
community string, and that's it.
Syslog is one of the most widely supported event reporting mechanisms, across
almost all manufacturers and OS distributions including Ubiquiti and EdgeOS. Using
Syslog to report events happening on routers, switches, and servers is typical in the
networking industry, and being able to centrally monitor reportable events on network
infrastructure is critical as you scale up. Most organizations don't report every single
event because that would create a huge, unmanageable mess of logs.
Instead, administrators focus on hardware, authentication, interface up/down, and
network adjacency events.
Beyond the convenience of centralizing logs in one place for monitoring, Syslog plays
an important part in an organization's network security framework. If a device is
breached, or if a breach is suspected, the logs on that local device become suspect. An
attacker may wipe the local device logs wholesale, or modify them specifically to
cover their tracks or focus attention elsewhere. Having logs shipped to another device,
that preferably uses separate authentication, allows some assurance that the logs have
not been tampered with and can be used for investigation.
Event archiving also becomes possible when shipping events to a centralized server.
An organization's policy may require 90 days of log retention, or a legal requirement
may exist that sets a certain standard. Either way, this gives you a rolling historical
record of what's happened on your devices. This wouldn't be possible if you're just
storing logs locally, because many devices purge logs on reboot or power cycle, or
lack the embedded storage capacity for long-term log storage.
Syslog has varying degrees of event severity - 8 in total, from 0 (Emergency) down to 7 (Debug). Familiarize yourself with the severity levels, because they are
used across almost all device manufacturers. The protocol itself runs on UDP, port
514, but that is automatically included in the configuration and doesn't have to be
specified manually.
With that being said, we'll set up a Ubiquiti router to report important events to a
Syslog server, and use The Dude as a dashboard for monitoring running on
192.168.90.183. We'll be monitoring for all events level 4 (Warning) and up. This is a
no-cost solution that centralizes the administrative task of monitoring infrastructure,
and it's surprisingly flexible.
configure
Next, configure the device for the IP of your Syslog server (in this case
192.168.90.183), and the minimum severity level of events that should be shipped. If
you use the "warning" level like in the command below, then all events that are
warning, error, critical, alert, and emergency levels will be shipped. It's up to you to
determine what minimum level of events is most appropriate for your organization.
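Using the warning level and the all facility described below:
set system syslog host 192.168.90.183 facility all level warning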
The "facility" portion of the command specifies what router functions are being
monitored. In this case it's "all" functions, though you can specify specific levels for
specific functions. This is really useful for troubleshooting, or monitoring specific
router functions that you suspect are misbehaving. Available functions include
protocols, security, auth, and more. Starting out with the "all" facility helps you
capture a broad swath of events, and then you can narrow down your reporting if
necessary for your organization's specific needs. It's always best with logging to start
broadly, then whittle it down from there - you may see something that you wouldn't
have otherwise that demands your attention.
Lastly, commit and save your configuration, then generate some events. Try logging
into the device with a wrong username and password on purpose to generate an event,
and verify it's been shipped to the Syslog server. Play around with it, so that when
actual events are triggered in production you know why they happened, and how to
respond.
Keeping good time on your infrastructure devices like switches, routers, and firewalls
is absolutely essential. It ensures that log timestamps are accurate for use in
troubleshooting and forensics, and it ensures that devices relying on timestamped
certificates will expire them at the same time. This is particularly true with IPSEC and
other VPN technologies. Just like in the Mikrotik NTP tutorial, it's fairly
straightforward to set the NTP client up on an EdgeOS-based device. First, log into
the device via SSH and enter the Configure mode with the following command:
configure
Verify that your device's timezone is set correctly. Many organizations choose to set
all their devices to UTC time. This is described as a best practice by mainstream
vendors, and is especially important when an organization has devices located in
different timezones, or across states or regions that observe Daylight Savings Time
differently. Having all devices set to UTC time takes the guesswork out of adjusting
for local time or DST. It also helps enormously when correlating timestamped events
between devices because all device clocks are in sync, so no adjustment is necessary
when looking at events between devices side-by-side. The following command sets
the timezone to UTC:
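On EdgeOS:
set system time-zone UTC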
commit
save
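If the router isn't already pointed at an NTP source, the default Ubiquiti pool entries (or your own NTP servers) are set the same way and then committed:
set system ntp server 0.ubnt.pool.ntp.org
set system ntp server 1.ubnt.pool.ntp.org
commit
save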
This will stop and then start the NTP daemon (ntpd) and resync the device's clock.
Verify that the time is up-to-date by running the "date" command, and comparing to a
known-good clock. That's it!