MikroTik OSPF Routing

MIKROTIK

Open Shortest Path First (OSPF) is a Link-State routing protocol used by routers to
dynamically exchange route information. It's an open, industry-standard protocol
supported by all major vendors. While OSPF doesn't move traffic across the network
on its own, it does allow routers to discover network paths. Configuring it on
MikroTik routers isn't difficult, and the long-term benefits of using dynamic routing
can be big.

OSPF Configuration
The following sections walk us through configuring interfaces, adding addresses, setting
up OSPF, and advertising networks:

1. Physical Connections
2. Loopback Interfaces
3. Assigning IP Addresses
4. OSPF Instance Configuration
5. Advertising Networks

Physical Connections

We'll establish the point-to-point links between the routers first. This provides the
foundation for OSPF to communicate. The link between the top and middle routers will
use the 172.16.0.0/30 subnet. The link between the middle and bottom routers will use
the 172.16.0.4/30 subnet.

On the top router:

/ip address
add interface=ether2 address=172.16.0.1/30
On the middle router:

/ip address
add interface=ether1 address=172.16.0.2/30
add interface=ether2 address=172.16.0.5/30

On the bottom router:

/ip address
add interface=ether1 address=172.16.0.6/30
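
Before moving on, it's worth confirming the point-to-point links are up. A quick check
from the middle router, assuming the addressing above, is to ping each neighbor:

/ping 172.16.0.1 count=4
/ping 172.16.0.6 count=4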

Loopback Interfaces

We'll use virtual loopback (bridge) interfaces for this exercise. This makes the
following steps work across any router model, regardless of how many ethernet ports
it has. Running OSPF on a virtual interface also makes the protocol more stable,
because that interface will always be online. The following commands create new
bridge interfaces on all three of our routers:

On the top router:

/interface bridge
add name=ospf comment="OSPF loopback"
add name=lan comment="LAN"

On the middle router:

/interface bridge
add name=ospf comment="OSPF loopback"
add name=lan comment="LAN"

On the bottom router:

/interface bridge
add name=ospf comment="OSPF loopback"
add name=lan comment="LAN"

Assigning IP Addresses
Each ospf bridge interface needs an IP address that can be used later to identify the
router. LAN interfaces need addresses for connecting user-facing LANs. Use the
following commands to assign IP addresses to the new bridge interfaces:

On the top router:

/ip address
add interface=ospf address=10.255.255.1
add interface=lan address=192.168.1.1/24

On the middle router:

/ip address
add interface=ospf address=10.255.255.2
add interface=lan address=192.168.2.1/24

On the bottom router:

/ip address
add interface=ospf address=10.255.255.3
add interface=lan address=192.168.3.1/24

OSPF Instance Configuration

We'll configure the IP addresses created in the previous steps as each router's OSPF router ID.
Since the top router is attached to an upstream provider we'll also advertise the default
route from that device. Use the following commands to configure the OSPF instances:

On the top router:

/routing ospf instance
set default router-id=10.255.255.1
set default distribute-default=always-as-type-1

On the middle router:

/routing ospf instance
set default router-id=10.255.255.2

On the bottom router:


/routing ospf instance
set default router-id=10.255.255.3

Advertising Networks

With our OSPF instances configured properly we can now begin advertising our
connected networks. OSPF will advertise the following networks and addresses:

 OSPF loopback
 Point-to-point router links
 LAN subnets

Use the following commands to advertise the routes directly connected on each router:

On the top router:

/routing ospf network
add network=10.255.255.1 area=backbone comment=Loopback
add network=172.16.0.0/30 area=backbone comment="Middle router"
add network=192.168.1.0/24 area=backbone comment=LAN

On the middle router:

/routing ospf network
add network=10.255.255.2 area=backbone comment=Loopback
add network=172.16.0.0/30 area=backbone comment="Top router"
add network=172.16.0.4/30 area=backbone comment="Bottom router"
add network=192.168.2.0/24 area=backbone comment=LAN

On the bottom router:

/routing ospf network
add network=10.255.255.3 area=backbone comment=Loopback
add network=172.16.0.4/30 area=backbone comment="Middle router"
add network=192.168.3.0/24 area=backbone comment=LAN

Verifying OSPF
With networks connected and OSPF configured we need to verify functionality. The
following sections walk us through checking the status of OSPF routing:

1. Neighbor Routers
2. OSPF Routes

Neighbor Routers

By now OSPF should have established neighbor states between devices. The best
device to check for neighbors is the middle router — if it has two neighbors then the
top and bottom routers must be configured correctly. List the OSPF neighbors on the
device with the following command:

/routing ospf neighbor print

OSPF Routes

List the routes in OSPF's route table:

/routing ospf route print

The best routes for a given destination will be copied from the protocol's route table to
the main route table.
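
To confirm that, check the main route table; on the top router, for example, the bottom
router's LAN should show up as a dynamic, active OSPF route (flags "DAo"):

/ip route print where dst-address=192.168.3.0/24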

MikroTik IPIP Tunnels with OSPF


MIKROTIK


Preface
Running an IP-IP tunnel between sites with OSPF for routing is an easy, dynamic site-
to-site solution. We'll set up a tunnel, configure OSPF, and verify connectivity.

Navigation
1. Network Topology
2. IPIP Tunnel
3. OSPF Routing

Network Topology
The network topology for this writeup is two sites, each with a Mikrotik router:

Site | WAN IP | LAN Subnet | LAN Gateway | Point-to-Point IP
--- | --- | --- | --- | ---
Philly | 1.1.1.1 | 192.168.1.0/24 | 192.168.1.1 | 10.255.0.1/30
Seattle | 2.2.2.2 | 10.1.0.0/24 | 10.1.0.1 | 10.255.0.2/30

Both routers are connected to the internet and have a publicly routable address. Their
respective LAN networks don't overlap, and we've set aside a 10.255.0.0/30 network
for the point-to-point IPIP addresses. Using the high 10.255.0.0/30 network ensures it
won't overlap with any additional sites that come online.

IPIP Tunnel
Setting up the IPIP tunnel is pretty straightforward - point one router to the other and
that's it.

On the Philly router:

/interface ipip add name=Seattle remote-address=2.2.2.2 comment=Seattle

On the Seattle router:

/interface ipip add name=Philly remote-address=1.1.1.1 comment=Philly

Add the routable IP addresses to the IPIP tunnel interfaces. This gives OSPF
something to run over between the two devices. Having a dynamic routing protocol
running means this solution can grow beyond two sites.

On the Philly router:

/ip address add interface=Seattle address=10.255.0.1/30 comment="Seattle link"

On the Seattle router:


/ip address add interface=Philly address=10.255.0.2/30 comment="Philly link"

OSPF Routing
We'll use a very simple OSPF configuration since there are only two sites. Both sites
will be put in the OSPF "Backbone" area, area number zero. As the network grows you
can add additional OSPF areas.

On the Philly router:

/routing ospf network
add comment="Seattle link" network=10.255.0.0/30
add comment="LAN" network=192.168.1.0/24

On the Seattle router:

/routing ospf network
add comment="Philly link" network=10.255.0.0/30
add comment="LAN" network=10.1.0.0/24

These configurations have OSPF advertising the point-to-point links between the
routers, and the LANs behind the routers. With those routes advertised we should
have full reachability between sites.
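
A quick way to confirm that is to check the OSPF adjacency over the tunnel and look for
the remote LAN in the route table. On the Philly router, for example:

/routing ospf neighbor print
/ip route print where dst-address=10.1.0.0/24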

MikroTik Rogue DHCP Server Alerting

MIKROTIK , MONITORING


MikroTik Rogue DHCP Server Detection


Rogue devices on a network can cause serious issues for ongoing operations and
security. An unauthorized device running a DHCP server can be used to hijack local
clients and redirect traffic for man-in-the-middle and other attacks. Best-case scenario
is a user unknowingly plugging in a device that they brought in from home, not aware
that it will cause network problems. Worst-case scenario is a rogue device
deliberately planted by an attacker to redirect or sniff traffic.

Either way, it's important to monitor our networks for rogue DHCP servers. In
RouterOS there is a handy tool in the IP DHCP-Server menu for just this purpose.
We'll first set up a logging script. Then we'll configure DHCP server alerts. Finally,
we'll add trusted DHCP server MAC addresses so there won't be false positives in our
logs.

1. Logging Script
2. DHCP Alerts
3. Trusted DHCP Servers
4. Finding Rogue Devices

Logging Script
1. Create the logging script:

/system script add name=rogue-dhcp source=":log warning message=\"Rogue DHCP server detected!\""

NOTE: The backslashes ("\") are required because nested quotes must be escaped.
2. Run the script and verify a log entry is shown:

/system script run rogue-dhcp
/log print

This log entry will be shown in addition to the default system log that has the
rogue server's MAC and IP addresses.

DHCP Alerts
1. Configure DHCP server alerts on interface ether2:
/ip dhcp-server alert add interface=ether2 on-alert=rogue-dhcp disabled=no

Trusted DHCP Servers


1. Get MAC addresses of all trusted DHCP servers on the interface's broadcast
domain
2. Add the trusted MAC addresses to the DHCP server alert instance:
/ip dhcp-server alert set [find interface=ether2] valid-server=00:11:22:aa:bb:cc
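
3. Review the alert entry and check whether any unauthorized servers have been
detected; detected servers should be listed in the entry's unknown-server field,
though the exact field name may vary by RouterOS version:

/ip dhcp-server alert print detail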

Finding Rogue Devices


Once a rogue DHCP server has been identified it's important to locate and isolate the
device. With the server's MAC address it's easy to locate the physical port that it's
plugged into. Use the following command to locate the device in the RouterOS ARP
table:

/ip arp print

On switches or devices with a switch chip it's easy as well:

/interface ethernet switch host print

The interface that the rogue DHCP server is connected to can be turned off remotely
while someone else traces the cable hand-over-hand to find the device.

Six Step Troubleshooting Method


BEST PRACTICES , NETWORK FUNDAMENTALS

Six Step Troubleshooting
The US Navy's six step troubleshooting procedure has become part of academic and
professional courses and certifications around the world. It presents a logical, step-by-
step approach for troubleshooting system faults. We can apply this to computer
networks, electrical and electronic circuits, or business processes. When we use the
six steps properly, our troubleshooting can be faster and more efficient than it would
be if we "just jump right in".

Troubleshooting Goals
The primary goal of troubleshooting is very simple - fix faults. But that goal is more
nuanced than it first appears. While we want to fix faults, we should aim to do it as
efficiently and quickly as possible. Time wasted troubleshooting a system that's
unrelated to the fault is expensive. Meanwhile, the person who originally reported the
fault is still unable to perform whatever task they had been attempting.

Six Steps
First, we'll outline the six steps. Second, we'll explore what each of them entails.
Third, we'll apply the six steps to a real-world network outage scenario. The following
six steps make up the formal troubleshooting process:

1. Symptom Recognition
2. Symptom Elaboration
3. List Probable Faulty Functions
4. Localize the Faulty Function
5. Localize the Faulty Component
6. Failure Analysis

With the steps originally developed for troubleshooting electrical and electronic
systems, some of the wording has been changed over time. For example, Step 5 was
originally "Localizing trouble to the circuit". The wording has evolved but the result
is still the same - finding the specific root cause.

Symptom Recognition
This first step kicks off the overall troubleshooting process. Often for IT professionals
this happens when someone calls the helpdesk or puts in a ticket. IT staff might also
be alerted by a monitoring tool that a system has gone offline. At this point we know
"something is wrong", but there's no indication of exactly what it is. Begin the
troubleshooting process and respond with urgency.

Symptom Elaboration
Now that we know something is wrong it's time to begin asking questions. Here's a
list of some questions that I like to ask my users when they come to me with a
problem:

1. What aren't you able to do?
2. Were you able to do it before?
3. Is it just you, or is this happening to others?
4. Has it ever worked?
5. Has anything changed recently?

When dealing with non-technical users it's important to understand that they may not
be able to fully articulate what they're experiencing when giving us answers. For
example, a common report I get when troubleshooting network outages is, "the
internet is down!". While this isn't strictly true, most users don't have the training to
understand the distinction between LAN, WAN, and the internet. It's not their job to
understand that the LAN isn't working and the internet is still there waiting for them.
A certain amount of interpretation is needed, and the skills to do it come with time
and experience.

During this step I'm also looking for sights, sounds, and smells. Loss of power is
typically easy to spot because there will be a lack of LEDs, and the conspicuous sound
of silence where there should be the whirring of cooling fans. The smell of burning
plastic and electronic components is very distinctive as well.

List Probable Faulty Functions


The first step started up the troubleshooting process, and the second step probed the
general nature of the fault. Now we'll brainstorm what the general cause of the fault
could be. First though, we need to define what a "function" is. For the purposes of
troubleshooting IT systems, a "function" is a general area of operation. Some IT
professionals refer to these as "silos" or "domains" as well. The following examples
are all functions or silos that can fault-out for one reason or another:

 Power
 Environmental Controls
 Networks
 Servers
 Security

These are all very broad and that's the point of this step. We'll brainstorm which
function could be the cause of our fault, and we'll also rule out which could not be.
This points us in the right general direction. It's important to note what could not be
the cause of a fault because that prevents us from wasting time on an unrelated
system. When a technician gets pulled in the wrong direction and troubleshoots a
function unrelated to the fault it's sometimes called "going down the rabbit hole".
If the lights are on in a server room, and hardware LEDs on front panels are blinking
while the fans whir away, the Power domain can probably be ruled out. If one of those
servers with blinking lights cannot be pinged or accessed remotely it's a fair guess that
the Network domain might be faulted. There is also a possibility that a hardware
failure has occurred on the server, taking it off the network. Depending on past
reliability of your servers, you may or may not include the Server domain in your list
of possible faulty functions.

Localize the Faulty Function


At this stage we begin to actively search within those brainstormed domains likely at
fault. We want to narrow the cause down to a specific domain and focus our efforts
further. Going back to our server example, it's thought that the network could be
faulted, or possibly the server hardware itself. The server is powered on, with LEDs lit
and fans whirring away. Looking at the network connection on the back of the server,
our NIC's lights show both a lit connection and blinking activity LED. This tells us
the cable is connected and the NIC is powered on - server hardware domain is looking
less-likely.

Running a traceroute to the server's address shows successful hops all the way to the
switch that the server connects to. That switch is the final hop, after which all packets
are lost. Based on that result it appears likely that the Network domain is the culprit.

Localize the Faulty Component


Now that we know the network domain is most likely where the fault resides, we
home in on the actual cause. Looking at the NIC's lights showed us that the interface
is on and connected. The connection at the other end of the cable must be there as
well, otherwise there would be no link light. A traceroute to the server stopped at the
switch, so we'll log into the switch and investigate further. A list of all the switch
ports shows that the server's port is enabled, with speed and duplex set to
autonegotiate.

We know that our network is segmented using VLANs, so we list the VLANs
configured on the switch and their associated ports. The port that connects the server
is assigned to VLAN number 1 - that's the default VLAN, not the server VLAN. This
explains why we have a good physical connection with link lights, but no network
traffic.

Failure Analysis
At this final step we correct the fault and document the process. In the case of our
server, setting the port to the right VLAN restored network connectivity, and our users
could access the server once again. Once the fault is fixed we need to verify that
operations have returned to normal. It's important to follow up with whomever
originally reported the fault and ensure that it's been fully resolved. This leads us to
the point where we ask questions and document the process. By documenting the fault
we make it possible for future technicians to fix the same issue much faster if they
experience it again.

Here are the questions I like to ask when documenting a fault:

1. What was wrong?
2. What symptoms did we see?
3. What was the cause?
4. How do we prevent it from happening again?

The fault documentation might go something like this:

The network port attached to server ABC123 was placed in the wrong VLAN, breaking
network connectivity. The server was powered on and had link lights, but couldn't be
reached over the network. Switchport status was up, but the port's VLAN assignment did
not match our documentation. A technician "fat-fingered" the port number when changing
the VLAN for another host, accidentally knocking the server offline. Putting the
switchport back on the server VLAN restored connectivity.

Preventing the fault from happening again can be tricky. A mix of training, mentoring,
good documentation, and change management processes can stop it from happening
again. Even informal knowledge sharing within an IT team is better than nothing.
During a weekly meeting it's good to recap faults quickly with the following points:

1. This is what happened
2. This is what we saw while troubleshooting
3. Here's how we fixed it

Doing this week-over-week grows the knowledge base within an IT team and helps
develop good troubleshooters.

MikroTik Winbox Security
MIKROTIK


MikroTik Winbox Security


MikroTik's Winbox application is one of the best router management interfaces I've
ever worked with. It's my go-to interface over Webfig any day, though lots of what I
do happens at the command line. For those of us using Winbox day-to-day to manage
client devices, WISP infrastructure, etc there are some security precautions that need
to be taken. If we aren't careful how we use Winbox it could add risk to our network.
If managed poorly it can compromise router and switch credentials.

First, we need to make sure that Winbox is updated. Second, we need to understand
how saved credentials can be used smartly. Third, we need to implement best
practices for managing credentials in Winbox overall.

Updates
It's a best practice all-around to run the latest stable, supported software. This is true
for RouterOS, and it's also true for Winbox. MikroTik has added a built-in updater
inside Winbox so checking for updates regularly is easy. Open Winbox, then
click Tools and Check for Updates:
Checking for Winbox updates

I do this about once per month, just in case a new version has been released that
patches security holes or adds new functionality.

Managed Hosts
We can store device connection profiles in Winbox to make reconnecting to them
easy. Unfortunately this can lead to some bad credential management practices.
Entering the IP address or hostname, login, and password then clicking
the Add/Set button saves our credentials:
Adding managed host

Anyone who walks up to the computer with Winbox open can double-click a managed
host entry and it will log them in. We can set a Master Password that requires a
password before the managed host entries are shown. Simply click Set Master
Password and enter a password twice:
Setting Winbox master password

Now when Winbox opens it will first prompt for the master password before giving us
access to the managed host credentials:
Using master password in Winbox

Of course, if the computer running Winbox is left unattended after the master
password was entered it doesn't do us any good, so locking the computer is a must.
After saving a bunch of managed host profiles many MikroTik administrators export
the list for backup purposes. I've seen some MSPs that manage MikroTik devices for
their customers share the exported file among their employees. While this might be
convenient it opens a can of security worms for customers that have to be PCI DSS or
HIPAA compliant. Exporting our managed host credentials can be done by
clicking Tools then Export:
Winbox managed hosts export

The exported .WBX file has all our login information, making it easy to restore the
saved entries in Winbox if they are lost. This can be dangerous though, because the
file that's exported is in plaintext. Exporting the file and opening it in a more
advanced text editor like Notepad++ shows our IP addresses or hostnames,
usernames, and passwords:
Winbox plaintext credentials

By unchecking the Keep Password box we can prevent Winbox from saving or
exporting the password for an individual managed host entry. Using Tools - Export
Without Passwords doesn't export passwords for any managed host, so it's a more
secure option. Of course it will still export usernames, which could allow an attacker
to kick-off a password guessing attack.

Best Practices
I recommend that these best practices be followed when storing credentials in
Winbox:

1. On computers with credentials stored in Winbox lock the screen when stepping
away.
2. Set a Master Password that must be entered before accessing the managed host
entries.
3. Don't include passwords when exporting the managed host list.
4. Don't share the .WBX export file with others.
5. If you must have passwords in the exported .WBX file then encrypt it with a
robust key.
6. For traveling laptops and tablets with credentials stored in Winbox encrypt the
entire drive in case of theft.

MikroTik Hyper-V Serial Console


MIKROTIK , VIRTUALIZATION


I do a lot of work with virtual MikroTik routers, mostly in Microsoft Hyper-V. The
CHR is great for labbing-out solutions and developing configuration templates for
clients. Unfortunately copy-and-paste operations aren't really possible through the
built-in Hyper-V console. Winbox is another solution, but much of what I do happens
at the command line. The cleanest solution I've come up with is using the built-in
serial device functionality with named pipes for the VMs. PuTTY provides a handy
serial interface for accessing the virtual device.

First we'll add a serial device to the CHR's VM configuration. Then we'll use it to
create a named pipe. Finally, we'll use PuTTY to access the serial console.

CHR Configuration
Adding a serial port to the CHR gives us the "hardware" that we need, even though it's
virtual. In Hyper-V right-click the CHR VM. Then select Settings and COM1 to the
left. Note how no device was included by default in the configuration:
Select the Named Pipe option and enter a name:
Remember the full name, beginning with .\pipe\ and ending with the chosen name.

PuTTY Serial
Launch PuTTY with administrative privileges, otherwise the named pipe can't be
accessed in Windows. Select the Serial option, then use the named pipe string:
Click Open, then click inside the PuTTY window and press [Enter]. The RouterOS
login prompt should appear. If the router is rebooted we can quickly restart the serial
session by right-clicking the title bar:
Restarting virtual serial connection

MikroTik Port Switching


MIKROTIK


Preface
While MikroTik does sell switches, many organizations deploy SOHO
RouterBOARD models to small, remote offices with only a few devices. This is very
common inside residential networks as well. Switching ports connected to local
devices and using one port for an internet connection makes the most sense for these
locations, rather than deploying a separate switch and router. There are a couple ways
to combine ports in a switched (bridged) configuration depending on what RouterOS
version we're running.

On a given router we have interfaces ether1 through ether5. We'd like to use ether1 for
the WAN connection, and ether2 - ether5 as a switch. We'll plug in desktops, a printer,
and a NAS - all hosts should be able to communicate with each other on the same LAN.

Master Port Configuration


Using a Master Port is the way ports were bridged prior to the 6.4x versions of
RouterOS. One port is set aside as the master port, and the others that need to be
switched are configured to use it. All ports configured for that master, and the master
port itself, become part of the same switched local network. This is very simple to
implement, but unfortunately this configuration doesn't benefit from hardware
acceleration. Since ether2 is the first port on the switch we'll use it as the Master Port.
The following commands configure ports ether3 - ether5 to use ether2 as the Master
in a switched configuration:
/interface ethernet
set ether3,ether4,ether5 master-port=ether2
In Winbox and at the console ether3, ether4, and ether5 should be running and in a
"slave" status if they are connected. A DHCP server or other service running
on ether2 would now be available to hosts connected to the other switched ports.

Hardware Bridge Configuration


The new method of bridging ports benefits from hardware acceleration and delivers
line-rate switching. The configuration is a bit more complex, but still straightforward
overall. Bridge interfaces and port configurations are used to combine ports in a
switched configuration. First we'll create a bridge, then add ports to it while enabling
the hardware option.

Create a bridge with the following commands:

/interface bridge
add name=Switch comment="Switched ports" fast-forward=yes
Use the "protocol-mode" option with the command above to configure Spanning Tree
Protocol as needed. Options include STP, RSTP, and MSTP. Now add
ports ether2 - ether5 to the bridge and use the "hw=yes" option:
/interface bridge port
add interface=ether2 bridge=Switch hw=yes
add interface=ether3 bridge=Switch hw=yes
add interface=ether4 bridge=Switch hw=yes
add interface=ether5 bridge=Switch hw=yes
Once these ports are connected they should also be operating with the "Running" and
"Slave" statuses.

VLAN Trunking
ROUTING


Preface
VLAN trunking and routing is one of the most basic and essential skills that a network
administrator can have. Segmenting the network with VLANs is required for PCI,
HIPAA, and other compliance standards, and it helps keep some measure of order and
sanity in large network infrastructures. Setting up VLANs on a Mikrotik router and
configuring VLAN trunking is easy, even if a couple of the steps are less-than-
intuitive.

Navigation
1. VLAN Design
2. VLAN Trunking Protocols
3. VLAN Topology
4. Creating VLANs on Mikrotik
5. Addressing VLAN Interfaces
6. DHCP for VLAN Networks
7. Switch VLAN Configuration

VLAN Design
The first step in segmenting the network isn't done on the router at all, it's done on
the whiteboard - deciding how to structure your VLANs. If a network has to be
HIPAA or PCI compliant this decision is easier because it's spelled out in black and
white what has to be segmented. If segmenting a network is happening for another
reason, like a company mandate to improve security, then it's a bit "up in the air" but
still doesn't have to be hard.
For the most part I like to mirror the organizational structure with VLANs. Each
department typically gets its own VLAN, because each department is its own logical
group with a unique function, and probably has its own security needs. Servers and
storage get their own VLANs, or (preferably) their own switching hardware if that's in
the budget. I like being able to firewall and monitor traffic per-department, and having
their traffic going through virtual VLAN interfaces lets me use tools like Torch or
NetFlow. Guest networks get their own VLANs that are firewalled from accessing the
internal network. Wireless networks get their own VLANs too, keeping wireless
chatter, iOS / Android and app updates, etc. off the other networks. Once you decide
who gets their own VLAN it's time to create them and segment the network.

VLAN Trunking Protocols


Mikrotik routers handle VLANs much like any other platform - 802.1q trunking is
used between switches and the router, and tagging is done like you'd expect on Cisco,
Juniper, Brocade, or other platforms with a simple VLAN ID. While Cisco offers
other encapsulation methods like (the now deprecated) ISL, Mikrotik only supports
the industry-standard 802.1q protocol. Using 802.1q you can trunk VLANs from a
Cisco, HP, or other switch to a Mikrotik router, and let the Mikrotik handle the
routing, firewalling, bandwidth throttling, etc.

VLAN Topology
For this scenario we only have one router, and we'll create VLANs for HR
(192.168.100.0/24), Accounting (192.168.150.0/24), and Guests (192.168.175.0/24).
If you can create 3 VLANs you can create 30, so I'm keeping the example brief. The
IP addresses for each VLAN were also chosen randomly, it's up to you to choose an
IP scheme that fits your organization. The router is connected to a switch on ether2,
with an 802.1q trunk link in between. This is also known as a "router on a stick" type
configuration. I'm not going to be specific about the switch being a Cisco, HP, or
whatever switch because 802.1q trunking is almost the same across platforms. Just
check your vendor's documentation for setting it up on a trunk port. The router also
has a WAN connection on ether1 that clients in the VLANs will use to access the
Internet via a default route to the ISP's gateway.

Creating VLANs on Mikrotik


First, create the VLANs on the Mikrotik router, and assign them to the ether2
interface. Doing this step will automatically set 802.1q trunking on the ether2
interface, and will take down the link for normal untagged traffic. This will create an
outage until the rest of the steps are complete - you have been warned.

/interface vlan
add comment="HR" interface=ether2 name="VLAN 100 - HR"
vlan-id=100
add comment="Accounting" interface=ether2 name="VLAN 150
- Accounting" vlan-id=150
add comment="Guests" interface=ether2 name="VLAN 175 -
Guests" vlan-id=175

I've taken the time to name the VLAN interfaces and give them a useful comment, and
I suggest you do the same. This will make administering VLANs and onboarding new
administrators easier. As mentioned earlier, creating the VLANs and assigning them
to the physical ether2 interface automatically changed encapsulation to 802.1q, even
though you won't see that if you print the interface details. This is one of those non-
intuitive things mentioned before.

Addressing VLAN Interfaces


Next we'll put IP addresses on the VLAN interfaces so they can function as gateways:

/ip address
add address=192.168.100.1/24 comment="HR Gateway" interface="VLAN 100 - HR"
add address=192.168.150.1/24 comment="Accounting Gateway" interface="VLAN 150 - Accounting"
add address=192.168.175.1/24 comment="Guests Gateway" interface="VLAN 175 - Guests"

Again, I took the time to add comments and you should as well. At this point we have
our VLANs, and they have usable addresses. If you're using static IP addressing on
your network that's pretty much it for VLAN configurations. The next (optional) steps
are setting up DHCP instances on the VLAN interfaces, so that clients inside each
network segment can get dynamic addresses.

DHCP for VLAN Networks


First set up IP address pools for each VLAN:
/ip pool
add name=HR ranges=192.168.100.2-192.168.100.254
add name=Accounting ranges=192.168.150.2-192.168.150.254
add name=Guests ranges=192.168.175.2-192.168.175.254

Next, set up the DHCP networks with options for DNS (Google public servers) and
the gateways:

/ip dhcp-server network
add address=192.168.100.0/24 comment="HR Network" dns-server=8.8.8.8,8.8.4.4 gateway=192.168.100.1
add address=192.168.150.0/24 comment="Accounting Network" dns-server=8.8.8.8,8.8.4.4 gateway=192.168.150.1
add address=192.168.175.0/24 comment="Guest Network" dns-server=8.8.8.8,8.8.4.4 gateway=192.168.175.1

In this case I'm using Google's Public DNS service, and the internal gateways are set
to the IP addresses you assigned before on the VLAN interfaces.

Lastly we'll spin up the DHCP server instances on the VLAN interfaces, using the
pools you set up earlier:

/ip dhcp-server
add address-pool=HR disabled=no interface="VLAN 100 - HR" name=HR
add address-pool=Accounting disabled=no interface="VLAN 150 - Accounting" name=Accounting
add address-pool=Guests disabled=no interface="VLAN 175 - Guests" name=Guests

The pools correspond with the networks set up previously, and that's how the DHCP
options like gateway and DNS are associated with a particular DHCP instance. I like
spinning up DHCP for each VLAN, so you can control lease times, options, etc
individually for each network segment. This gives you a lot of flexibility to tweak and
monitor DHCP across the organization.

Switch VLAN Configuration


At this point you'll need to assign access ports on your switches to specific VLANs,
and the clients that are plugged into those should pull DHCP addresses from the
Mikrotik and live happily inside their respective VLANs. It's up to you now to decide
what VLANs should be able to talk to each other, and implement those forward accept
rules in the firewall. As a rule I like to only allow traffic forwarded to VLANs
that is absolutely necessary. Allowing all traffic between VLANs bypasses the
security of segmenting your network in the first place.
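
As a starting point, here's a minimal sketch of forward rules along those lines; the
specific VLAN pairs you allow (and the WAN interface, assumed to be ether1 here) will
depend on your own network:

/ip firewall filter
add chain=forward in-interface="VLAN 100 - HR" out-interface="VLAN 150 - Accounting" action=accept comment="Allow HR to Accounting"
add chain=forward in-interface="VLAN 175 - Guests" out-interface=ether1 action=accept comment="Guests out to the Internet only"
add chain=forward in-interface="VLAN 175 - Guests" action=drop comment="Block Guests from internal VLANs"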

IPSEC Tunnels
VPN , SECURITY


Preface
IPSEC is one of the most commonly used VPN technologies to connect two sites
together over some kind of WAN connection like Ethernet-Over-Fiber or Broadband.
It creates an encrypted tunnel between the two peers and moves data over the tunnel
that matches IPSEC policies.

Navigation
1. Nomenclature
2. IPSEC Policy vs Routing
3. IPSEC Topology
4. Mikrotik IPSEC Peers
1. Seattle Peer
2. Boise Peer
5. Mikrotik IPSEC Policy
1. Seattle Policy
2. Boise Policy
6. Mikrotik NAT Bypass
1. Seattle NAT Bypass
2. Boise NAT Bypass
7. IPSEC Tunnel Testing

Nomenclature
"Peers" and "Policy" will be used a lot in this article, so it's important to know what
they mean. Peers are the endpoints for IPSEC tunnels. Policies are the settings that
define the interesting traffic that will get pushed over the tunnel. If packet traffic isn't
covered by a policy it isn't interesting, and gets routed like any other traffic would be.
If packet traffic does match what's in a policy, the router defines those packets as
interesting, and sends them over the tunnel, rather than routing them.

IPSEC Policy vs Routing


There's a very important distinction that needs to be made here - IPSEC isn't routing.
IPSEC doesn't create virtual interfaces that are added to a route table like PPTP or
GRE do. IPSEC isn't based on routing, it's based on policy. In fact in the diagram
below when tracerouting from one LAN subnet to another through two branch routers
and multiple Internet routers only one hop is seen.

IPSEC Topology
Below is the physical topology diagram of what we're working with, and it shows the
logical connection that the IPSEC tunnel will create between subnets.

We have two routers, in Seattle and Boise, both connected to the Internet somehow
with their own static IP addresses. These routers could be at two offices owned by one
company, or just two locations that need to be connected together. We need
computers or servers at one location to be able to contact devices at the other, and it
has to be done securely. An IPSEC VPN is perfect for this sort of implementation.

Mikrotik IPSEC Peers


First, on each router we'll configure IPSEC peers. The peer will point to the opposite
router's public IP address, with Seattle pointing to Boise and Boise pointing to Seattle.
It's very important to add comments to your peer and policy entries, so you know
which points to which.

Seattle Peer
On the Seattle router:

/ip ipsec peer add address=87.16.79.2/32 comment="Boise Peer" enc-algorithm=aes-128 nat-traversal=no secret=my_great_secret

Boise Peer
On the Boise router:

/ip ipsec peer add address=165.95.23.2/32 comment="Seattle Peer" enc-algorithm=aes-128 nat-traversal=no secret=my_great_secret

The encryption algorithm and secret must match, otherwise the IPSEC tunnel will
never initiate properly. In production networks a much more robust secret key should
be used. This is one time when network administrators often generate long random
strings and use them for the secret, because it's not something a human will have to
enter again by memory. Secret keys should be changed on a regular basis, perhaps
every 6 or 12 months, or more often depending on your regulatory needs. Do not
enable NAT traversal, as it's pretty hit-or-miss. This feature is meant to help get around
NAT, which breaks IPSEC, but it doesn't always work reliably.

Mikrotik IPSEC Policy


Second, we'll configure the IPSEC policies. These are what tell the router what traffic
is "interesting" and should be sent over the tunnel instead of routed normally. Because
this IPSEC tunnel will be a site-to-site tunnel connecting two networks (instead of
hosts) we'll specify tunnel=yes in the configuration. This is also required per
Infrastructure Router STIG Finding V-3008:
...ensure IPSec VPNs are established as tunnel type VPNs when transporting
management traffic across an ip backbone network.

https://www.stigviewer.com/stig/network_devices/2015-09-22/finding/V-3008

If you look at the policies side-by-side you'll notice that the IP address entries on both
routers are reversed - each router points to the other. It really helps to open up the
same dialog boxes in two Winbox windows, looking at them side-by-side, checking
that the SRC address on one router is the DST address on the other.

Seattle Policy
On the Seattle router:

/ip ipsec policy add comment="Boise Traffic" dst-


address=192.168.30.0/24 sa-dst-address=87.16.79.2 sa-src-
address=165.95.23.2 src-address=192.168.90.0/24
tunnel=yes

Boise Policy
On the Boise router:

/ip ipsec policy add comment="Seattle Traffic" dst-


address=192.168.90.0/24 sa-dst-address=165.95.23.2 sa-
src-address=87.16.79.2 src-address=192.168.30.0/24
tunnel=yes

Mikrotik NAT Bypass


If you're using NAT to send multiple internal LAN IPs out one interface to the
Internet we'll need to bypass that. If we don't set up a NAT bypass the NAT process
will snatch up our traffic before the IPSEC policies have a chance to move it over the
tunnel, and those packets will get NAT'd out into oblivion. We'll create these NAT
rules on each router, and move them up above any others.

Seattle NAT Bypass


On the Seattle router:
/ip firewall nat add chain=srcnat comment="Boise NAT
bypass" dst-address=192.168.30.0/24 src-
address=192.168.90.0/24

Boise NAT Bypass


On the Boise router:

/ip firewall nat add chain=srcnat comment="Seattle NAT


bypass" dst-address=192.168.90.0/24 src-
address=192.168.30.0/24

IPSEC Tunnel Testing


At this point we have everything needed for a functioning IPSEC tunnel. With that
being said, most routers do not keep IPSEC tunnels up all the time. If no interesting
traffic is being pushed over the tunnel most routers tear the tunnel down and don't
bring it back up until the policies are triggered again with interesting traffic. This can
create a tiny bit of latency when traffic first starts, since a moment is needed to build the
tunnel. RouterOS features like Netwatch and scheduled ping scripts can create traffic
that keeps the tunnels up, but you shouldn't see an appreciable difference, especially if
you're moving data frequently from one subnet to another.
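
For example, a scheduled ping sourced from the local LAN address will match the policy
and keep the tunnel up. A minimal sketch on the Seattle router, assuming 192.168.90.1 is
its LAN gateway address and 192.168.30.1 is a reachable Boise LAN address:

/system scheduler add name=ipsec-keepalive interval=1m on-event="/ping 192.168.30.1 src-address=192.168.90.1 count=3"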

For IPSEC tunnels that stay up all the time and also give you routed virtual interfaces,
take a look at running GRE over IPSEC.

To force this IPSEC tunnel to come up I've sent pings from one subnet to the other,
creating interesting traffic and triggering the IPSEC policy. When viewing the
Installed SAs on the Boise router we can see that encryption keys have been
established, and that on each side the SRC and DST addresses correspond with each
other:

In the Remote Peers tab it also indicates that the Seattle router is an established remote peer:

On the Seattle router you'll see the same information in the Installed SA and Remote
Peers tab, but the IP addresses will be backwards from Boise's.
Tracerouting from an IP address on the Seattle LAN shows one hop to an IP address on the Boise LAN:

Notice that I specified the source address in the traceroute above. This is so that the
packets sent for the traceroute will appear to originate inside the IPSEC policy's SRC
network, and be headed to a DST network that matches the policy as well - interesting
traffic. If you just try pinging straight from one router to another it won't work,
because the packets won't match the policy and IPSEC will ignore them. Either
specify the SRC to match the policy when pinging from the router, or ping from a real
host inside those subnets.
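
From the Seattle router that test might look something like this, assuming 192.168.90.1
is the local LAN gateway and 192.168.30.1 is a host on the Boise LAN:

/ping 192.168.30.1 src-address=192.168.90.1 count=4
/tool traceroute 192.168.30.1 src-address=192.168.90.1
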
There is a lot more we can do with IPSEC VPNs, like running GRE over a tunnel for
routing or using OSPF, but this is a great start.

MikroTik IPSEC Site-to-Site Guide


The MikroTik IPSEC Site-to-Site Guide is over 30 pages of resources, notes, and
commands for expanding your networks securely. The guide is a printable PDF so you
can easily make notes and track your progress while building IPSEC tunnels. Included
in the download are text files for each router's configuration with commands you can
copy and paste directly to the terminal.

This guide uses a real-world network topology for creating secure site-to-site links in
two scenarios. The first scenario is a basic link between LANs at separate locations
using IPSEC. The second scenario uses IPSEC with GRE+OSPF to create secure,
routed links that can scale to dozens of networks or more.

Site to Site PPTP


VPN
PPTP is still one of the most ubiquitous VPN technologies in use. It's also one of the
oldest, and unfortunately while it does provide encryption it's one of the least secure.
However, PPTP is still widely supported by almost all routing platforms, and
Windows, Mac, Linux, and most smartphones like Android come with a PPTP client
built-in. The encryption it uses isn't as robust as IPSEC and doesn't use PFS, but we
can do a couple configuration tweaks to make it as secure as possible. At the same
time it isn't sending everything in the clear like GRE or EoIP tunnels do. PPTP is
commonly used in a "road warrior" configuration, with remote clients on laptops and
tablets VPNing into a network from the road. PPTP can also be used to create routable
interfaces on two Mikrotik devices and function as a site-to-site tunnel. Put multiple
site-to-site tunnels together, all connecting to a core location, and you now have a
routable hub and spoke topology.

This article will focus on creating a site-to-site VPN tunnel using PPTP. We'll use
static routes on each router that allow devices in one LAN to communicate with
devices in the other. The topology being used is the same one in the MPLS with
VPLS article, but the Seattle and Santa Fe LER devices have been converted to
customer-owned routers. The topology is shown below:
Mikrotik PPTP Site to Site Topology

The requirements for this network aren't too complicated - connect customer LAN
networks 192.168.1.0/24 and 192.168.5.0/24 via a PPTP tunnel over a provider's
network. This is a cheaper alternative to MPLS tunnels, though in fairness it is also a
very different technology and somewhat legacy. The Seattle customer router will be
the PPTP server, and the Santa Fe router will run the PPTP client. It could be the other
way around, it doesn't matter, as long as one router is the server and the other is the
client. First we'll enable the PPTP server on the Seattle router:

/interface pptp-server server set authentication=mschap2 enabled=yes
/ppp profile set [ find name=default-encryption ] name=default-encryption use-encryption=required

I've specifically set the authentication to MSCHAP v2 because that is the strongest
authentication protocol that PPTP can handle, and we don't want to use anything less
than that. We'll also set the PPTP profile being used to require encryption, so it's no
longer optional.

Next on the Seattle router we'll set up the credentials that the Santa Fe PPTP client
will use to establish the tunnel:

/ppp secret add local-address=10.0.0.1 name=santafe password=supersecretpassword remote-address=10.0.0.2 service=pptp

This PPP secret is what the PPTP client will use to establish the tunnel. It has a
username (santafe), a password, the local address that will be dynamically assigned to
the PPTP server, and the remote address that will be dynamically assigned to the
PPTP client. The IP addresses I chose for the PPTP tunnel are totally arbitrary, you
can use whatever you want as long as they don't overlap with anything already in use.

We also need to put some firewall rules in to allow PPTP (which uses GRE) into the
firewall:

/ip firewall filter
add chain=input comment=PPTP dst-port=1723 protocol=tcp src-address=72.156.30.2
add chain=input comment=PPTP protocol=gre src-address=72.156.30.2

This allows PPTP traffic from the Santa Fe router into the Seattle router. I've only
opened up PPTP to a specific source address, and I suggest you do the same. That
wraps up the configuration on the PPTP server side in Seattle, let's look at Santa Fe.
First thing to do is add those same firewall rules, just with Seattle's source IP address:

/ip firewall filter


add chain=input comment=PPTP dst-port=1723 protocol=tcp src-address=72.156.29.2
add chain=input comment=PPTP protocol=gre src-address=72.156.29.2

Then we'll make sure encryption is being required in the Santa Fe PPTP profile, just
like on Seattle's router:

/ppp profile set [ find name=default-encryption ] name=default-encryption use-encryption=required

Next we'll create the PPTP client that connects to the Seattle router:

/interface pptp-client
add allow=mschap2 connect-to=72.156.29.2 disabled=no
mrru=1600 name=Seattle \
password=supersecretpassword user=santafe

At this point the PPTP client should automatically connect, and a dynamic PPTP
interface is created. The IP addresses assigned in the PPP secret will now be set
dynamically on both routers as well. On the Seattle PPTP server you should see
something like this for interfaces:
Mikrotik PPTP Server Interface

And on the Santa Fe PPTP client you should see something like this:

Mikrotik PPTP Client Interface


On the PPTP server you can see the 10.0.0.1 address that was dynamically (D)
assigned when the PPTP client connected:

On the PPTP client side you can see the same thing, just with the other IP address:

The final step is adding the static routes, pointing traffic from one LAN to another
over the new tunnel. Because PPTP creates interfaces and assigns IPs that can be used
for routing we could use a dynamic routing protocol like OSPF, but because this
implementation is so small I'm opting for static routes.
On the Seattle router:

/ip route add comment="Santa Fe LAN" distance=1 dst-


address=192.168.5.0/24 gateway=10.0.0.2

On the Santa Fe router:

/ip route add comment="Seattle LAN" distance=1 dst-


address=192.168.1.0/24 gateway=10.0.0.1

Traffic destined for the opposite LAN goes to the opposite side of the tunnel, hitting
the other router which hands off the traffic to the LAN port. In this case we've used
static routes for simplicity, but you could also use OSPF or another routing protocol.
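
To confirm everything came up, check the client interface and routing from the Santa Fe
side; a minimal sketch, assuming the interface names and addresses used above:

/interface pptp-client monitor [find name=Seattle] once
/ip route print where dst-address=192.168.1.0/24
/ping 192.168.1.1 count=4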

EoIP Tunnel
VPN

MikroTik's EoIP tunnel functionality is very popular with users who need to extend
Layer 2 networks between sites. It's configured much like a GRE tunnel and extends
an OSI Layer 2 broadcast domain between sites. Once established the tunnel can be
bridged to physical adapters or other connections. For applications or other systems
that require a Layer 2 adjacency this is the only way to make it work across sites,
other than using a dedicated provider circuit or fiber / microwave link.

EoIP is also a solution for quick-and-dirty network integration for two sites that have
overlapping subnets that, for whatever reason, can't be completely readdressed. Small
businesses and branch offices often have flat networks in the 192.168.0.0/24 or
192.168.1.0/24 ranges, and when the mandate comes down to enable communication
between them quickly and cheaply, EoIP is a possible solution. There will need to be
a little bit of compromise though, and the same rules that apply to a single network in
one location now apply across locations. In the long-term readdressing networks to
not overlap is ideal, but for small businesses with limited IT budgets and often no IT
staff to speak of this is a solution that works for the short-term.
In this situation we have two small offices, one in St. Louis and the other in Norfolk.
Both offices have LANs in the 192.168.1.0/24 subnet, and users need to be able to
access resources in both offices remotely. Management doesn't understand what
network addresses do or why they matter, and this task has to be done with minimal
disruption of ongoing operations. Here is the network:

Mikrotik EoIP Topology

First we'll create the EoIP tunnels, then create the bridges that will connect them to the
physical LAN, and lastly do a bit of IP readdressing.

Create the EoIP tunnel on St. Louis router and enable encryption:

/interface eoip add comment="To Norfolk" name="To


Norfolk EoIP" remote-address=2.2.2.2 tunnel-id=0
ipsec-secret=jfvowev8rg844bg0

Create the EoIP tunnel on the Norfolk router and enable encryption:

/interface eoip add comment="To St. Louis" name="To


St. Louis EoIP" remote-address=1.1.1.2 tunnel-id=0
ipsec-secret=jfvowev8rg844bg0

The tunnel ID numbers must match on each side. Additionally, an IPSEC key has
been added, which will encrypt the EoIP traffic between the two sites. This is a good
idea to have in place, but it is an optional step depending on your security needs, and
it only works between Mikrotik devices. At this point you should see the tunnel come
up and be active, though there probably isn't any traffic going over it. Here is the
tunnel to the Norfolk side, just as an example:
Mikrotik EoIP Active Tunnel

You may have noticed the tunnel is running (R) in slave mode (S) because it has been
bridged. We haven't completed that step yet, but we'll get to it. First we need to
resolve a potential conflict.

Originally both routers had their LAN gateways set as 192.168.1.1 - not a big deal
because they are separate. However, once we bridge the two LANs together it
becomes a very big deal, because IP conflicts will wreak havoc. This is one of those
compromises mentioned earlier; while we can have two locations sharing the same
subnet, we can't have duplicate IP addresses between the locations. So on the Norfolk
side the router's gateway IP address has been changed to 192.168.1.128, which doesn't
conflict with the St. Louis router's gateway IP. Gateway address modifications would
need to be made on the Norfolk side for devices that have static IPs to accommodate
the change.

On the Norfolk router change the LAN gateway IP:

/ip address add address=192.168.1.128/24 interface=ether2

Another critical step that would have to be performed is splitting the DHCP scope on
each side of the tunnel, so that the DHCP servers running on the Mikrotik routers
aren't handing out duplicate client addresses. This step is unique to each organization
and the DHCP scope(s) they are using, but a minimal sketch is shown below. Once that
is done we can move on to bridging interfaces to the EoIP tunnel.
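
For example, assuming both routers hand out leases from the shared 192.168.1.0/24
subnet, each DHCP server could be limited to a non-overlapping pool (hypothetical pool
names):

On the St. Louis router:

/ip pool add name=stl-dhcp ranges=192.168.1.10-192.168.1.126

On the Norfolk router:

/ip pool add name=norfolk-dhcp ranges=192.168.1.130-192.168.1.250

Each router's DHCP server instance would then be pointed at its local pool.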

Ether2 is our physical LAN interface on both routers, and we have to get traffic from
the physical LAN interface to the EoIP tunnel, then out of the tunnel and into the
physical LAN on the other side - easy to do with bridging. First we'll create the
bridges, then add ports to them.

Create the bridge on the St. Louis router and add ports:

/interface bridge add name="EoIP Bridge"


/interface bridge port
add bridge="EoIP Bridge" interface=ether2
add bridge="EoIP Bridge" interface="To Norfolk EoIP"

Create the bridge on the Norfolk router and add ports:

/interface bridge add name="EoIP Bridge"


/interface bridge port
add bridge="EoIP Bridge" interface=ether2
add bridge="EoIP Bridge" interface="To St. Louis EoIP"

At this point we're done, other than testing. Let's do a bandwidth test from 192.168.1.1
in St. Louis to 192.168.1.128 in Norfolk:
Mikrotik EoIP Speed Test

While 120Mbps isn't exactly thrilling, this is done in a lab environment on virtual
routers, and the graph shows consistent speed across two tests. That amount of
bandwidth is probably enough to handle data between two small offices, and using
larger Mikrotik Routers would allow for higher speeds. It should be noted, however,
that this particular solution isn't scalable for larger flat networks, and pushing a
consistently high volume of traffic is taxing for the CPU. Also, the overall speed
across the EoIP tunnel is limited to the speed of the slowest WAN connection, so
some testing will be needed to see if overall performance is acceptable in your
environment.
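
If you'd rather run the test from the terminal than from Winbox, something along these
lines should work, assuming the bandwidth test server is enabled on the Norfolk router
and the admin credentials used here are valid:

On the Norfolk router:

/tool bandwidth-server set enabled=yes

On the St. Louis router:

/tool bandwidth-test address=192.168.1.128 direction=both duration=30s user=admin password=""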

XBox and Playstation UPnP


ROUTING

For those of you playing the home game - and by that I mean playing on an Xbox or
Playstation at home using Mikrotik for routing, you've probably seen the console
complaining about your NAT configuration. It's not a huge deal in most cases, but for
some games and services it can cause issues with download speed, voice, and chat
communications. If you have the Xbox test the network connection it will most likely
complain about "Moderate NAT" settings if you're running the console behind a
Mikrotik using the default configuration. The fix for this is really simple - enable and
configure UPnP. This service allows the Xbox to request the router create dynamic
DST-NAT rules specifically for Xbox Live communications.

Allowing UPnP to dynamically forward ports is the flip-side of using static NAT
entries for port forwarding. A lot of networking folks take issue with UPnP (Universal
Plug and Play) for a couple reasons. First, UPnP takes some of the control out of the
hands of network administrators, allowing network devices themselves to
communicate with the router and create their own "pinhole" port forward settings.
Second, most UPnP implementations on low-end networking equipment are laughably
insecure. There's a laundry list of security issues created by home network equipment
manufacturers, and unfortunately that reputation has bled over to UPnP itself.
Fortunately Mikrotik's implementation isn't terrible, and only opens up access
specifically requested by the device on the LAN asking for it.

First, we'll enable the UPnP service, which is disabled by default:

/ip upnp set enabled=yes

Next, we'll tell the router which is the internal interface that's LAN-facing, and which
is the external interface that's Internet-facing. In this case ether1-gateway is the WAN
connection, and bridge-local is the LAN connection:
/ip upnp interfaces
add interface=ether1-gateway type=external
add interface=bridge-local type=internal

That's it! UPnP is turned on, and we've told the router which interfaces are which so
that firewall NAT pinholes can be created. Now we'll fire up the Xbox so it can
communicate with the router and create dynamic NAT rules:

Mikrotik UPnP

The two NAT rules above marked with a "D" are the dynamic rules that UPnP created
for internal LAN devices that need them, including the Xbox. Testing the network
from the Xbox again will show green across the board, and everything should work
really well. That's it!
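
If you prefer the terminal over Winbox, the same dynamic entries can be checked from the CLI; dynamic rules are flagged with a "D" in the print output:

/ip upnp interfaces print
/ip firewall nat print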

Syslog Logging
SECURITY

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!
Syslog is one of the most widely supported event reporting mechanisms, across
almost all manufacturers and OS distributions. Using Syslog to report events
happening on routers, switches, and servers is pretty standard, and being able to
centrally monitor reportable events on network infrastructure is critical. Most
organizations don't report every single event, because that would create a huge,
unmanageable mess of logs. Instead administrators focus on hardware events,
authentication issues, interface up/down events, and network adjacency changes.

So, with that being said, we'll set up a Mikrotik router to report important events to a
Syslog server, and use The Dude as a dashboard for monitoring. This is a no-cost
solution that centralizes the administrative task of monitoring infrastructure, and it is
surprisingly flexible. The topology in this scenario is pretty basic - two branch routers,
both with an Internet connection, both with a connection to a management network as
well. Do you absolutely need to send Syslog events over a management network? No.
Should you be handling monitoring and reporting over a management network? Yes,
it's best practice. The server is just an instance of The Dude, running on a Windows
Server.

Topology, with Mikrotik routers connected to Syslog server via management network

First, we'll set up a logging action on the router that tells it to send events to a Syslog
server. We'll then assign that logging action to different event topics.

On both routers:

/system logging action add bsd-syslog=yes name=Syslog remote=192.168.88.234 target=remote
The bsd-syslog=yes option forces the router to send Syslog events in RFC-3164
format, which is very well-supported. Next we'll configure the logging itself, sending
important entries (Account, Critical, and Error type events) to the Syslog server using
the action we just created.

On both routers:

/system logging add topics=critical,error,account action=Syslog disabled=no
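
To confirm the action and the logging rule took effect, you can print them back and watch the local log while testing:

/system logging action print
/system logging print
/log print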

Now that the routers are taken care of, let's set up the Syslog server. Mikrotik's The
Dude isn't the only Syslog server freely available - there are many - but it's one of the
few that's easily installed and you'd actually want to look at on a big screen display for
dashboarding. First, download and install the latest copy of The Dude from Mikrotik's
download page. Open up The Dude, and make sure that the Syslog server process is
running, as shown below:
Enabling Syslog in The Dude

Now that the routers are logging to Syslog, and The Dude is listening, let's create
some events. I'm going to log into the Seattle router with the wrong username and
password multiple times, and a few times successfully, to show what you'd see if
someone were trying to guess the admin username or password.
Failed and successful login attempts

This is now your running record of who logged into (or failed to log into) devices,
from what IP, and via which service. Also, very importantly, this gets your log entries
off of your devices and onto a separate, hopefully secure server. If a device has been
compromised there is a good chance the logs on the device have been too, but if
you're shipping events off to a separate server there is another record that can be
trusted. Assuming that all routers are syncing their clocks via NTP, you can use the
timestamps as well to create a chronological order of events, which is critical when
handling a security incident.

We've just covered the basics here, and there is a lot more you can report on with
Syslog and monitor using The Dude. I encourage you to ship critical and error events
to a Syslog server, and put the Syslog window up where people can see it and keep an
eye out for unusual entries. The last thing we want is someone trying to brute-force the
admin password with no one even knowing it's happening.

Throttling Download Speeds


OPTIMIZATION , ROUTING
The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!
One of the most common issues that organizations who don't have a lot of bandwidth
deal with is WAN over-utilization. Either an organization isn't in an area where faster
broadband or fiber connectivity is available, the cost of fiber is out of reach, or there
are just too many users trying to use a finite resource. This affects the entire
organization, slowing down email and web use, voice and video conferences, etc. In
the worst cases one or two people are using up the entire organization's bandwidth
with P2P applications like BitTorrent, or by streaming high-def video while everyone
else struggles to understand why everything over the WAN is so slow. We have a
handy fix for P2P downloads as well when you're done with this article, if that is
something affecting your organization.

This isn't hard to fix at all; in fact it's fairly easy. Mikrotik queues allow us to put
bandwidth limitations in place and ensure a (relatively) fair distribution of
bandwidth between all users. We can also give priority to one network's bandwidth
usage over another's if we want to. We'll do this by using Mikrotik PCQ - PCQ
standing for "Per Connection Queue". Using a PCQ instead of another type of queue
allows for even distributions of bandwidth per connection. That means that one person
on a subnet gets just as much bandwidth when they open a webpage as someone else
does. Other types of queues are available, but we won't cover them in this article.

For this article we're just using one Mikrotik router, with a LAN subnet of
192.168.1.0/24 that all 200 users are sitting on. I'm not creating a topology diagram
for this article because there's just one router and one subnet. We have one WAN
connection, Ether3, and it's connected to a 50Mb/sec broadband line. Users have been
complaining that the Internet is always slow, and every now and then we've seen
someone who is using 10Mb or 15Mb/sec on their own just watching videos.

First, we'll flag the download traffic using a Mangle rule to mark the packets as they
come in from the WAN and head to the LAN. We have to mark the packets first so
the PCQ knows what packets to impose limits on. The mangle rule is shown below:

/ip firewall mangle add action=mark-packet chain=forward comment="192.168.1.0/24 Download" dst-address=192.168.1.0/24 in-interface=ether3 new-packet-mark="192.168.1.0/24 Download"

The packet counter should be incrementing for this rule if traffic is flowing and the
rule was set up correctly. This rule will mark all packets coming in interface Ether3
with a destination address in the 192.168.1.0/24 subnet. Once the packets have been
marked we can apply a PCQ to them. The PCQ is shown below:

/queue type add kind=pcq name="192.168.1.0/24 Download" pcq-dst-address6-mask=64 pcq-limit=300 pcq-rate=5M pcq-src-address6-mask=64 pcq-total-limit=40M

A few fields in the PCQ deserve explanation. Each connection that is matched to this
PCQ gets its own little queue, and the quantity of those little queues can be set by you.
Increasing them takes up more RAM and CPU time, but not having enough of them
means that packets could get bottlenecked in the queue and dropped - it's a balancing
act and some tuning may be required. In this case I have 200 users, so I'm going to
limit the number of those little queues to 300, which admittedly is an educated guess.
The PCQ Rate is the maximum amount of bandwidth that an individual connection can use.
This could be a user streaming Pandora, loading a web page, or watching a streaming
video. The PCQ Total Limit is the total amount of bandwidth that all the connections
can use at once. We have a 50Mb/sec line, so I'm limiting total connections to
40Mb/sec and giving myself a little wiggle room for other traffic. This all relies on the
fact that not all 200 of our users are going to be using their allotted 5Mb/sec at once.

Finally we'll add the PCQ set up earlier to the Queue Tree. This is where the rubber
meets the road, and the PCQ we created actually gets applied to the packets we're
marking:

/queue tree add comment="192.168.1.0/24 Download" name="192.168.1.0/24 Download" packet-mark="192.168.1.0/24 Download" parent=global queue="192.168.1.0/24 Download"

That's it, everything is in place! Here is what an iperf bandwidth test looked like prior
to the PCQ being put in place:
Connection without Mikrotik PCQ

The first transfer is an upload, the second is the download, both sitting squarely
around 120Mb/sec. Now here's what it looks like with the PCQ in place:

Connection with Mikrotik PCQ limiting bandwidth

With the PCQ in place it's a very different story, with bandwidth being limited to
4.89Mb/sec on average. Multiple iperf tests running in parallel show the same thing as
well. Please note: If you're going to implement bandwidth throttling some tuning will
be necessary. You may need to tweak one setting or multiple settings to make sure
that everyone gets their share of bandwidth, without being overly restrictive. Good
luck!

Master Port Configuration


ROUTING

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!
Most routers act as just that - routers. Each interface acting as the gateway for a
distinct network, or as a trunk for VLANs that represent distinct networks. But for
some routers this isn't always the case, particularly in the SOHO or branch office
environment at the edge of the network. For those routers often one interface acts as
the gateway, with all the others working together in a switched capacity to connect
workstations, printers, APs, and other devices. This contrasts with routers in or near
the core of the network that strictly handle routed traffic, or are handling MPLS
traffic.

The first step to setting up one of these edge routers with a switching group of ports is
to determine how many switch chips are present in this particular model of router. In
the lab for this exercise I'm using an RB751U-2HnD which has one Atheros switch
chip. Other models like the RB1100AH and the RB2011 have two switch chips.

To determine how many switch chips you have and what kind:
Mikrotik Switch Chip
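
The same information can be pulled from the CLI (a sketch):

/interface ethernet switch print

The output lists each switch chip and its type, and /interface ethernet switch port print shows which physical ports are wired to it.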

Only ports wired to the same switch chip can actually be switched together. Also, note
that ether1 is conspicuously missing from switch1 - it isn't wired to the switch chip.
Therefore it can't be switched with ether2-ether5, unless a bridge port is manually
configured (and considering ether1 is used as the WAN gateway that would be a
terrible idea). On routers like the RB2011 that have two switch chips, with half the
physical ports wired to each, the only way to switch ALL the ports together across
both switch chips is to create a software bridge between two ports, one on each of the
switch chips. This isn't a very efficient solution, and if more than just a few switched
ports are needed it would be prudent to purchase a Mikrotik CRS.

Ether1 is being used for the WAN gateway, but ether2 - ether5 in this scenario need to
be switched together to create a LAN. Two computers, a printer, and a NAS all need
to be part of this LAN. This configuration isn't taking into account VLANs, but if you
want to learn how to use VLANs then look at the Mikrotik VLAN tutorial.

The next step is determining which port out of all the switched ports will be the
"Master" - I chose ether2. The rest of the ports, ether3 - ether5 will be set as slaves to
ether2. Here is ether2's configuration:
Mikrotik Master Port

As you can see, ether2 has been set as the Master port, therefore it has no Master Port
configuration chosen. Ether3 - ether5 look very different though:
Mikrotik Master Port

Ether3 has been configured with ether2 as the Master port. This tells RouterOS that
these ports are running in a switched configuration. The same change needs to be
made for the other switched ports:
Ether1 has no Master port because it's acting as our WAN gateway, and is a separate
routed interface. Ether2 has no Master port because it is the Master port for this switch
chip. Ether3 - ether5 are set with Ether2 as their Master port, which tells RouterOS to
switch all those ports (ether2 - ether5) together.
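
For reference, the same master/slave relationship can be configured from the CLI on RouterOS versions that still use the legacy master-port setting - a sketch of the Winbox steps above:

/interface ethernet
set ether3 master-port=ether2
set ether4 master-port=ether2
set ether5 master-port=ether2

On RouterOS 6.41 and later the master-port setting was replaced by hardware-offloaded bridging, so the equivalent there would be bridge ports with hardware offload enabled.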

The last step is to assign an IP address to ether2, which acts as the gateway address for
all the hosts plugged into ether2 - ether5. If the network is utilizing DHCP then the
DHCP server would be set to run on ether2, and because all the other ports (except
ether1) are switched together the hosts would be able to receive dynamic addresses.
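
A minimal sketch of that last step (the address below is an example, not from the original article):

/ip address add address=192.168.88.1/24 interface=ether2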

Mikrotik VRRP
ROUTING , OPTIMIZATION

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!
Mikrotik VRRP (Virtual Router Redundancy Protocol) gives us the opportunity to
introduce some resiliency into our routing infrastructure. A common VRRP
implementation is to have redundant gateways for larger networks, whether in
enterprise or service provider environments. With VRRP two gateways can be
installed, one active and one standby. When one router drops because of power loss,
hardware failure, etc., the other takes over, assigning itself the gateway address and
routing traffic. Very minimal traffic loss occurs during the switch, but there is some
loss nonetheless.

We'll implement the dual-gateway solution, and see what happens when we shut one
of the LAN interfaces down. Here is the topology we're working with in Boston - one
LAN with two gateways, each with a connection to the service provider.
Mikrotik VRRP Topology

Each of the routers has its own static WAN IP, each with a route pointing to the
service provider gateway - both routers are perfectly capable of shuttling packets in
and out of the network. Each router is also NAT'ing 192.168.70.0/24 traffic out its
respective ether1 WAN interface. Both routers each have their own LAN address as
well. However, Windows, Mac, and other clients can only accept one gateway by
default, so we need one LAN address that both routers can share. The routers will
share the VRRP address, and we'll give that VRRP address out to clients on the LAN
for use as the gateway. When one router dies the other will apply that VRRP address
and take over as the gateway, and LAN clients should see no real interruption in
connectivity.

First, we'll assign local addresses on ether2 interfaces, because they need to be part of
the network before VRRP can happen.

On Boston:

/ip address add address=192.168.70.2/24 interface=ether2 network=192.168.70.0
On Boston Standby:

/ip address add address=192.168.70.3/24 interface=ether2 network=192.168.70.0

Now both routers are part of the network, and they can communicate with each other and
exchange VRRP traffic. This is everything we need to start configuring VRRP. Next,
we'll create the VRRP virtual interfaces and link them to the physical ether2
interfaces.

On both routers:

/interface vrrp add interface=ether2 name="LAN Gateway"

With the virtual VRRP interfaces created we can now assign that 192.168.70.1/24
gateway address that both of the routers are going to share, and hand-off between each
other should one fail.

On both routers:

/ip address add address=192.168.70.1/24 interface="LAN Gateway" network=192.168.70.0

That's the whole of the configuration - both routers are now running VRRP, and one
of them has been elected the master and assigned 192.168.70.1. We'll start a constant
ping from the workstation on the Boston LAN to a static IP assigned to the Seattle
router (165.95.23.1), and disconnect the LAN interface of one of the routers to force
the VRRP transition.

Here's the ping:


VRRP Ping Failover Test

Mikrotik NTP Synchronization


SECURITY , OPTIMIZATION

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!

Preface
Syncing the clocks between all of your devices is a critical part of keeping your
networks healthy. Time affects network security, VPN stability, and more.
Navigation
1. Relying on NTP
2. NTP Options
3. Timezones

Relying on NTP
Protocols like IPSEC and Kerberos exchange keys and tokens that are time-stamped
with lifetime values that determine validity. If one router's clock is faster than
another's those keys will expire sooner, causing IPSEC tunnels to bounce. If the
clocks are far enough off each other IPSEC tunnels may not come up at all because
keys from one side of the tunnel will never appear valid on the other side.

There are security implications as well - in the event of a security incident if the logs
on your devices have inconsistent timestamps then event correlation will be
impossible. When investigating an incident it's paramount that logs be reliable and
accurate. Speaking of logging, it's also prudent to centralize your logging to a server,
commonly with Syslog and The Dude. For Syslog events we want reported
timestamps to be accurate across the board.

NTP Options
In terms of NTP servers you have a few choices - host an NTP server within your
network, refer your devices to an external NTP server, or do both. For the purposes of
this article we will simply use an external NTP service, hosted by the NTP.org
project. This is a fantastic project that millions of internet users rely on, and if you
have a spare server that you could volunteer to take part in their network please do.

One simple command will tell your routers to sync with the pool.ntp.org service,
which is load-balanced and reliable:

/system ntp client set enabled=yes server-dns-names=time.google.com,0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org,3.pool.ntp.org

Your router will sync its clock to the nearest NTP server participating in the pool, and
continue to make small clock adjustments regularly over time as needed. Bear in mind
that if you're running services that depend on timestamps (like IPSEC) this may cause
a brief interruption if your clocks are off significantly.
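
You can verify that the client is synchronizing by printing its status - a quick check, not a required step:

/system ntp client print

The output shows the active server and whether the clock has synchronized, and /system clock print shows the resulting date and time.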
Timezones
There is one other issue of note, particularly if you have multiple networks across
different time zones. Depending on your configuration it may be prudent to configure
all your devices for the UTC timezone. This ensures that all devices have consistent
time configurations, and when correlating log entries between devices in different
time zones you won't have to adjust for local time. If you have routers in different
states or countries that observe daylight saving time differently, UTC further simplifies things.

/system clock set time-zone-name=UTC

Be aware that changing timezones on your devices will most likely bounce VPN
tunnels momentarily, and it's important that all devices be on UTC time.

Mikrotik MPLS with VPLS


ROUTING , MPLS

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!
I've worked with MPLS circuits for a long time, but always with provider hand-offs.
This is most people's first and only real exposure to MPLS. The service provider gives
the customer Ethernet connections and says, "This connection goes to Site A, this
other connection goes to Site B, you have X amount of bandwidth, do whatever you
want." It's magic, and the customer doesn't need to have any idea how it works on the
backend. It makes networking remote sites much easier for the customer, and it's a
lucrative value add for providers. Obviously there are a lot of other ways to network
remote sites together. There is EoIP, GRE, IPSEC, GRE with IPSEC, and if you have
Scrooge McDuck amounts of money you can run your own fiber too. With that being
said, MPLS is extremely popular compared to other solutions because it's transparent
for the customer, the customer doesn't have to administer the tunnels, and it's all fairly
turnkey. I wanted to learn what was going on behind the curtain, what it actually takes
to provide these tunnels, and so I did.

Before we go any further you should be familiar with some terms, and I suggest
reading up on basic MPLS. Two terms to be familiar with are LSR and LER, or Label
Switch Router and Label Edge Router respectively. An LSR is a router running MPLS
that only performs label switching in the core; it doesn't add or remove labels at
network ingress or egress. An LER is a router running MPLS that pushes (adds) or
pops (removes) an MPLS label when a packet enters or exits the MPLS network.
LSRs reside in the core, LERs reside at the edge.

This article describes how to set up a basic MPLS network in the core, supported by
OSPF, and run VPLS tunnels over that core between customer sites. This lets you give
the customer an Ethernet handoff on both sides of the tunnel, and basically tell them
to pretend it's a Cat5 cable strung between sites.

Here is the topology that we're working with, with two customer devices attached to a
Seattle and a Santa Fe provider router:

Mikrotik MPLS Topology

The customer wants to be able to connect to devices in Santa Fe from Seattle as if
they were local devices. They don't want to see hops, routes, etc - just make it work.
IP addresses are already configured on Ethernet interfaces, I won't bore you with that.
OSPF networks are all being advertised in the backbone for brevity. First we'll set up
the core routers in Seattle, Santa Fe, and Atlanta, creating loopbacks, then getting
OSPF up, then MPLS with LDP. VPLS will run over the top of all that. OSPF will
give us some resiliency if a link fails, like between the Seattle and Santa Fe LSRs.

On Seattle LSR:

/interface bridge add comment="MPLS Loopback" name="MPLS Loopback"

/routing ospf instance set [ find default=yes ] router-id=72.156.28.150

/routing ospf network
add area=backbone network=72.156.28.0/30
add area=backbone network=72.156.28.8/30
add area=backbone network=72.156.28.150/32
add area=backbone network=72.156.29.0/24

/mpls interface
set [ find default=yes ] interface=ether1
add interface=ether2
add interface=ether3

/mpls ldp set enabled=yes lsr-id=72.156.28.150 transport-address=72.156.28.150

/mpls ldp interface
add interface=ether1
add interface=ether2
add interface=ether3

/mpls ldp neighbor
add transport=72.156.28.151
add transport=72.156.28.152
add transport=72.156.29.120

On Santa Fe LSR:
/interface bridge add comment="MPLS Loopback" name="MPLS Loopback"

/routing ospf instance set [ find default=yes ] router-id=72.156.28.151

/routing ospf network
add area=backbone network=72.156.28.8/30
add area=backbone network=72.156.28.4/30
add area=backbone network=72.156.30.0/24
add area=backbone network=72.156.28.151/32

/mpls interface
set [ find default=yes ] interface=ether2
add interface=ether3
add interface=ether1

/mpls ldp set enabled=yes lsr-id=72.156.28.151 transport-address=72.156.28.151

/mpls ldp interface
add interface=ether1
add interface=ether2
add interface=ether3

/mpls ldp neighbor
add transport=72.156.28.150
add transport=72.156.28.152
add transport=72.156.30.120

On Atlanta LSR:

/interface bridge add comment="MPLS Loopback" name="MPLS Loopback"

/routing ospf instance set [ find default=yes ] router-id=72.156.28.152

/routing ospf network
add area=backbone network=72.156.28.0/30
add area=backbone network=72.156.28.4/30
add area=backbone network=72.156.28.152/32

/mpls interface
set [ find default=yes ] interface=ether1
add interface=ether3

/mpls ldp set enabled=yes lsr-id=72.156.28.152 transport-address=72.156.28.152

/mpls ldp interface
add interface=ether1
add interface=ether3

/mpls ldp neighbor
add transport=72.156.28.150
add transport=72.156.28.151

At this point we have OSPF running in the core, and MPLS running as well on the
LSR routers. From this point on we'll focus on the LERs that actually connect to the
customers. We'll add an additional bridge for VPLS traffic, configure OSPF and
MPLS with LDP on each of the LERs, then we'll move on to building the VPLS
tunnels.

On Seattle LER:

/interface bridge
add comment="MPLS Loopback" name="MPLS Loopback"
add comment="Customer #4306 Site 1" name="VPLS Customer 4306-1 Bridge"

/routing ospf instance set [ find default=yes ] router-id=72.156.29.120

/routing ospf network
add area=backbone network=72.156.29.0/24
add area=backbone network=72.156.29.120/32

/mpls interface set [ find default=yes ] interface=ether3

/mpls ldp set enabled=yes lsr-id=72.156.29.120 transport-address=72.156.29.120

/mpls ldp interface add interface=ether3

/mpls ldp neighbor add transport=72.156.28.150

On Santa Fe LER:

/interface bridge
add comment="MPLS Loopback" name="MPLS Loopback"
add comment="Customer #4306 Site 2" name="VPLS Customer 4306-2 Bridge"

/routing ospf instance set [ find default=yes ] router-id=72.156.30.120

/routing ospf network
add area=backbone network=72.156.30.0/24
add area=backbone network=72.156.30.120/32

/mpls interface set [ find default=yes ] interface=ether1

/mpls ldp set enabled=yes lsr-id=72.156.30.120 transport-address=72.156.30.120

/mpls ldp interface add interface=ether1

/mpls ldp neighbor add transport=72.156.28.151

At this point OSPF should be fully converged, and in the MPLS Bindings tab we
should see some MPLS labels associated with destination addresses:
Mikrotik MPLS Local Bindings

We should also see some entries in the Forwarding Table:

Mikrotik MPLS Forwarding Table
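
The same information is available from the terminal if you'd rather not use Winbox (a sketch):

/mpls local-bindings print
/mpls remote-bindings print
/mpls forwarding-table print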

This is MPLS at work, associating routes with labels for quick lookup, which is what
gives MPLS its trademark performance boost over regular end-to-end IP routing.
We're ready now to add the VPLS tunnels and start moving some traffic transparently
between sites. The extra bridge interfaces that we added on the two LERs will be used
to bridge the VPLS virtual interfaces with physical Ethernet interfaces that we hand
off to the customer.

On Seattle LER:

/interface vpls
add comment="Customer 4306-2 VPLS" disabled=no l2mtu=1500 name="Customer 4306-2 VPLS" remote-peer=72.156.30.120 vpls-id=90:0

/interface bridge port
add bridge="VPLS Customer 4306-1 Bridge" interface=ether1
add bridge="VPLS Customer 4306-1 Bridge" interface="Customer 4306-2 VPLS"

On the Santa Fe LER:

/interface vpls
add comment="Customer 4306-1 VPLS" disabled=no l2mtu=1500 name="Customer 4306-1 VPLS" remote-peer=72.156.29.120 vpls-id=90:0

/interface bridge port
add bridge="VPLS Customer 4306-2 Bridge" interface=ether3
add bridge="VPLS Customer 4306-2 Bridge" interface="Customer 4306-1 VPLS"

At this point we've created a Layer 2 connection between whatever is plugged into
ether1 in Seattle and ether3 in Santa Fe. The customer could throw routers on those
connections, or switches, or plug servers directly in. For demonstration purposes I put
a virtual Ubuntu server on each of those physical interfaces, gave them the IP
addresses 10.2.2.1 and 10.2.2.2, and ran iperf in both directions to test bandwidth as
shown below:
Mikrotik MPLS IPerf Testing

Bandwidth testing shows a consistent, fast connection. The whole network, including the
servers, is virtualized, so while it isn't running at gigabit wire speed it still performs
well. One of the other requirements for this solution was that there be no hops
between the two locations - this should all be transparent to the customer. Traceroute
from 10.2.2.2 to 10.2.2.1 shows the following:

Mikrotik MPLS Traceroute

Exactly what we want to see, which is nothing. None of the provider routers in
between, none of the hops. Next time we'll cover MPLS with QoS and all the other
fancy features!

Mikrotik Firewall
FIREWALL , SECURITY

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!
Preface
The Mikrotik firewall, based on the Linux iptables firewall, is what allows traffic to
be filtered in, out, and across RouterOS devices. It is a different firewall
implementation than some vendors like Cisco, but if you have a working knowledge
of iptables, or just firewalls in general, you already know enough to dive in.
Understanding the RouterOS firewall is critical to securing your devices and ensuring
that remote attackers can't successfully scan or access your network.

We'll discuss firewall design, chains, actions, rules, and overall best practices.

Navigation
1. Firewall Design
2. Firewall Chains
3. Firewall Actions
4. Firewall Rules
5. Firewall Best Practices

Firewall Design
The general idea of firewalling is that traffic you need should be allowed, and all other
traffic should be dropped. By putting firewalls in place a network can be divided into
untrusted, semi-trusted, and trusted network enclaves. Combined with network
separation using VLANs, this creates a robust, secure network that can limit the scope
of a breach if one occurs.

Traffic that is allowed from one network to another should have a business or
organizational requirement, and be documented. The best approach is to whiteboard
out your current network design, and draw the network connections that should be
allowed. Allowed traffic will have a rule that allows the traffic to be passed, then a
final rule acts as a "catch-all" and drops all other traffic. Sometimes this is referred to
as the "Deny All" rule, and those coming from a Cisco background often call it the
"Deny Any-Any" rule. Allowing what you need and dropping everything else keeps
firewall rules simple, and the overall rule count to a minimum.

The first concept to understand is firewall chains and how they are used in firewall
rules.

Firewall Chains
Firewall Chains match traffic coming into and going out of interfaces. Once traffic has
been matched you can take action on it using rules, which fire off actions - allow,
block, reject, log, etc. Three default Chains exist to match traffic - Input, Output, and
Forward. You can create your own chains, but that is a more advanced topic that we'll
cover in another article.

Input Chain
The Input Chain matches traffic headed inbound towards the router itself, addressed to
an interface on the device. This could be Winbox traffic, SSH or Telnet sessions, or an
administrator pinging the router directly. Typically most Input traffic to the WAN is
dropped in order to stop port scanners, malicious login attempts, etc. Input traffic from
inside local networks is dropped as well in some organizations, because Winbox,
SSH, and other administrative traffic is limited to a Management VLAN.

Not all organizations use a dedicated Management VLAN, but it is considered a best
practice overall. This helps ensure that a malicious insider or someone who gains
internal access can't access devices directly and attempt to circumvent organizational
security measures.

Output Chain
The Output Chain matches traffic headed outbound from the router itself. This could
be an administrator sending a ping directly from the router to an ISP gateway to test
connectivity. It could also be the router sending a DNS query on behalf of an internal
host, or the router reaching out to mikrotik.com to check for updates. Many
organizations don't firewall Output traffic, because traffic that matches the Output
chain has to originate on the router itself. This is generally considered to be "trusted"
traffic, assuming the device has not been compromised somehow.

Forward Chain
The Forward Chain matches traffic headed across the router, from one interface to
another. This is routed traffic that the device is handing off from one network to
another. For most organizations the bulk of their firewalled traffic is across this chain.
After all we're talking about a router, whose job it is to push packets between
networks.
An example of traffic matching the Forward chain would be packets sent from a LAN
host through the router outbound to a service provider's gateway via the default route.
In one interface and out another, directed by the routing table.

Firewall Actions
Firewall rules can do a number of things with packets as they pass through the
firewall. There are three main actions that RouterOS firewall rules can take on packets
- Accept, Drop, and Reject. Other actions exist and will be covered in different
articles as they apply, but these three are the mainstay of firewalling.

Accept
Rules that "Accept" traffic allow matching packets through the firewall. Packets are
not modified or rerouted, they are simply allowed to travel through the firewall.
Remember, we only should allow the traffic that we need, and block all the rest.

Reject
Rules that "Reject" traffic block packets in the firewall, and send ICMP "reject"
messages to the traffic's source. Receiving the ICMP reject shows that the packet did
in fact arrive, but was blocked. This action will safely block malicious packets, but the
rejection messages can help an attacker fingerprint your devices during a port scan. It
also lets the attacker know that there is a device running on that IP, and that they
should probe further. During a security assessment, depending on the auditor and the
standards you're being audited against it may or may not become an audit finding if
your firewall is rejecting packets. It is not recommended as a security best practice to
reject packets, instead you should silently "drop" them.

Drop
Rules that "Drop" traffic block packets in the firewall, silently discarding them with
no reject message to the traffic source. This is the preferred method for handling
unwanted packets, as it doesn't send anything back that a port scanner could use to
fingerprint the device. When drop rules are configured correctly a scanner would get
absolutely nothing back, appearing as though nothing is actually running on a
particular IP address. This is the desired effect of good firewall rules.

Firewall Rules
Firewall rules dictate which packets are allowed to pass, and which will be discarded.
They are the combination of chains, actions, and addressing (source / destination).
Good firewall rules allow traffic that is required to pass for a genuine business or
organizational purpose, and drops all other traffic at the end of each chain. By using a
blanket "deny all" rule at the end of each chain we keep firewall rule sets much
shorter, because there don't have to be a bunch of "deny" rules for all other traffic
profiles.

Chains
Each rule applies to a particular chain, and assigning a chain on each rule is not
optional. Packets match a particular chain, and then for that chain firewall rules are
evaluated in descending order. Since the order matters, having rules in the correct
sequence can make a firewall run more efficiently and securely. Having rules in the
wrong order could mean the bulk of your packets have to be evaluated against many
rules before hitting the rule that finally allows it, wasting valuable processing
resources in the meantime.

Actions
All firewall rules must have an action, even if that action is only to log matching
packets. The three typical actions used in rules are Accept, Reject, and Drop as
described previously.

Addressing
This tells the firewall for each rule what traffic matches the rule. This part is optional -
you can simply block an entire protocol without specifying its source or destination.
There are a couple options for addressing traffic coming into or across the router. You
can specify the Source or Destination IP addresses, including individual host IPs or
subnets using CIDR notation (/24, /30, etc). Interfaces can also be used to filter traffic
in or out of a particular interface, which can be a physical interface like an Ethernet
port or a logical interface like those created by GRE tunnels. This is often done when
blocking traffic where the source or destination of the traffic isn't always known. A
good example of this is traffic inbound to the router via your service provider - that
traffic could originate from Asia, Europe, or anywhere else. Since you don't know
what that traffic is, a deny rule is used inbound on the WAN interface to just drop it.

Comments
It's so important to add a comment to your firewall rules, for your own sanity and that
of your network team. It takes almost no time to do when you create firewall rules,
and it could save significant time when troubleshooting. It could also save you from
making a mistake when tweaking firewall rules down the line as networks change and
evolve. If you didn't create a comment at the time the rule was made, just add one
like this, using firewall rule number 4 as an example:

/ip firewall filter set 4 comment="New firewall comment"

Firewall Best Practices


A number of best practices are widely implemented across the networking industry,
and it's a good idea to familiarize yourself with what they are, why they are
implemented, and the impact they have on your organization's security.

Only allow necessary traffic


This has been mentioned a couple times already, but it's worth mentioning again. Only
allow traffic that's necessary in and out of the network. This reduces the attack surface
that's exposed to attackers, and helps limit the damage of a breach. With that being
said, restricting network traffic too much can limit functionality or productivity, so
some amount of balance and testing is required.

Allow trusted external addresses only


Opening up (or "pinholing") the firewall is an acceptable and necessary practice, but
you should only allow inbound connections from outside the network via trusted
addresses. This could be other offices or datacenter locations, or connections to
internal resources via VPN tunnels.

Use a "deny all" rule at the end of each chain


Instead of putting in many "deny" rules to drop traffic, rely on the final "deny all" rule
in each chain to handle unwanted traffic. Adding additional "deny" rules to monitor
for certain traffic profiles or to help in troubleshooting is good practice, but adding
deny rules on top of the final rule bloats the firewall and uses up resources long-term.
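
As a minimal sketch of what those final catch-all rules might look like (rule order and the allow rules above them will vary per network):

/ip firewall filter
add chain=input action=drop comment="Drop all other input"
add chain=forward action=drop comment="Drop all other forward"

These would sit at the very bottom of their respective chains, after the rules that accept required traffic.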

Scan your own firewalls


Periodically scanning your own firewalls and other devices for open ports and
services is a vital part of any network security program. It isn't hard to do at all and
you can use open source tools. Having devices connected to the internet means that
you are now virtually guaranteed to be scanned by threat actors looking for soft
targets and devices with poor or missing configurations. Nmap is the typical tool for
port and service scanning, and a tutorial on Nmap scanning is posted on our Security
blog.

Mikrotik FastTrack Firewall Rules


FIREWALL , SECURITY , OPTIMIZATION

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!
Since the release of RouterOS 6.29.1 and the introduction of the new FastTrack
feature, there's a bit of confusion out there about how to implement FastTrack rules in
the firewall. With later releases of RouterOS the FastTrack feature has started
working on more interfaces, including VLANs, so it's even more important to learn
this feature and implement it properly. We want forwarded traffic across the router to
be marked for FastTrack in the firewall, but we still have to Accept that same traffic
as well. Without both of these rules it won't work, and you won't reap the performance
benefits.

If you're not familiar with firewall rule and chain basics take a look at the Mikrotik
firewall article that breaks them down.

FastTrack has been shown to reduce CPU utilization by quite a bit, in some cases over
10% when traffic volume is high. It operates on the premise that if you've already
checked one packet in a stream against the firewall and allowed it, why do you need
to check all the other packets in the rest of the stream? In terms of overall efficiency
this is big, especially if you have more than just a few firewall rules to evaluate traffic
against.

You can see rule number 3 in the screenshot below:


Mikrotik firewall rules with FastTrack configured

Under the IP > Settings menu in Winbox you can also see a counter of all total
packets that have been marked for Fast Track:

Mikrotik FastTrack packet counter

Here are the firewall rules currently in use on one of my SOHO devices that take
advantage of FastTrack:

/ip firewall address-list
add address=192.168.0.0/16 list=Bogon
add address=10.0.0.0/8 list=Bogon
add address=172.16.0.0/12 list=Bogon
add address=127.0.0.0/8 list=Bogon
add address=0.0.0.0/8 list=Bogon
add address=169.254.0.0/16 list=Bogon

/ip firewall filter
add chain=input comment="Accept Established / Related Input" connection-state=established,related

add chain=input comment="Allow Management Input - 192.168.88.0/24" src-address=192.168.88.0/24

add action=drop chain=input comment="Drop Input" log-prefix="Input Drop"

add action=fasttrack-connection chain=forward comment="FastTrack Established / Related Forward" connection-state=established,related

add chain=forward comment="Accept Established / Related Forward" connection-state=established,related

add chain=forward comment="Allow forward traffic LAN >> WAN" out-interface=ether1-gateway src-address=192.168.88.0/24

add action=drop chain=forward comment="Drop Bogon Forward >> Ether1" in-interface=ether1-gateway log=yes log-prefix="Bogon Forward Drop" src-address-list=Bogon

add action=drop chain=forward comment="Drop Forward"

The FastTrack rule and the "Accept Established / Related Forward" rule above are where
the rubber meets the road, and both are needed to make FastTrack work. These same rules
can be applied in an enterprise network environment and tweaked accordingly. Enjoy the
performance boost!

Firewalling Zones with Interface Lists


FIREWALL , SECURITY

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!
Preface
Mikrotik's RouterOS doesn't yet have specific functionality built in for network
"Zones" like some other router platforms, but with new releases of RouterOS we can
get the same functionality through "Interface Lists". An interface list is just like a
firewall address list, except instead of host IPs or CIDR subnets we're listing physical
or virtual interfaces. Once we've put interfaces into their respective lists we can use
those lists in firewall rules. If you have many interfaces, such as multiple trunked
VLANs or redundant WAN connections this could help you consolidate firewall rules.

Navigation
1. Zone Overview
2. Zone Types
3. Creating MikroTik Zones

Zone Overview
I like to group my interfaces into zones based on the trust level of the network that an
interface is attached to. Often we end up with "Trusted", "Semi-Trusted", and
"Untrusted" zones, and some additional zones as needed depending on how a network
is built. How you split up your zones will be dictated by your individual organization's
security, compliance, legal, and operational requirements.

Zone Types
It's easy to think of three zone types - Trusted, Semi-Trusted, and Untrusted.

Trusted Zone
Interfaces in a Trusted zone would be internal wired LAN or VLAN gateway
interfaces, and management interfaces. We have a reasonable level of trust that the
hosts in these networks are not trying to actively compromise our systems, and so we
allow them to communicate (relatively) freely. Access to these networks would
require physically plugging into a port on-premise, and hopefully port security is in
place adding an additional security layer.

Semi-Trusted Zone
A Semi-Trusted network could be a point-to-point VPN to a vendor's network, or a
corporate wireless network. We must have these networks in place for legitimate
business or organizational reasons, but there is a chance that a bad actor could get
access to these networks and we want that breach to be contained if it occurs. Many
organizations give these networks access to internal server resources (Active
Directory DCs, DNS servers, etc) as required, but access to other subnets or services
is forbidden.

Untrusted Zone
Untrusted networks are networks where we know or have reason to suspect that
malicious activities could occur, or do occur. A good example of an Untrusted
connection is a connection to the internet via an ISP. Port scans and malicious login
attempts are very common out on the internet, and it's a given that attackers are
actively searching for soft targets.

Guest wireless networks are great candidates for a custom zone with some additional
firewall rules. It's still untrusted, because there's no telling what kind of devices might
roam onto the network and what kind of issues they may bring with them. But even
though the network is untrusted, it still has to forward traffic outbound to the ISP, and
they may be allowed to resolve DNS names using an internal server if split DNS is
configured, or they will just use a public DNS like Google's 8.8.8.8 server.

Creating MikroTik Zones


After you have decided what Zones to create and which interfaces should be in what
Zone, the first step is to create empty Interface Lists:

/interface list add name="Trusted" comment="Trusted


networks"
/interface list add name="Semi-Trusted" comment="Semi-
Trusted networks"
/interface list add name="Untrusted" comment="Untrusted
networks"
/interface list add name="Guest Wireless" comment="Guest
Wireless"

One Interface List already exists, number zero, the "all" list. It can't be deleted, but by
default it isn't used anywhere either so it doesn't affect your security.

Now that we have lists, we'll assign interfaces to them. In this case ether1 is our
internet-facing WAN interface, ether2-5 are LAN ports, wlan1 is a corporate
(encrypted) wireless network, and wlan2 is an open (unencrypted) guest wireless
network:

/interface list member add list=Semi-Trusted interface=wlan1 comment="Corporate WLAN"
/interface list member add list=Untrusted interface=ether1 comment="Pacific Telco WAN"
/interface list member add list="Guest Wireless" interface=wlan2 comment="Guest WLAN"
/interface list member add list=Trusted interface=ether2 comment="LAN"
/interface list member add list=Trusted interface=ether3 comment="LAN"
/interface list member add list=Trusted interface=ether4 comment="LAN"
/interface list member add list=Trusted interface=ether5 comment="LAN"

With all of our interfaces in their respective lists we can use the lists in firewall rules.
Having multiple interfaces in a rule means you only need to put the interface list in
one rule, and that rule then applies to all those interfaces. For example, we can use an
input-drop rule on all WAN interfaces by applying that rule to the "Untrusted" list:

/ip firewall filter add chain=input in-interface-list=Untrusted action=drop comment="Drop input from WAN"

We can allow all of our trusted networks on ether2-ether5 to forward traffic out the
WAN to the internet by using just one rule:

/ip firewall filter add chain=forward in-interface-list=Trusted out-interface=ether1 action=accept comment="Trusted nets to internet"

When new interfaces are added to the router all that needs to be done is adding the
interface to the appropriate interface list, and the correct firewall rules will now apply.

Mikrotik SNMP Configuration


SECURITY , MONITORING

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!

Preface
SNMP can provide insight into a device's performance, but there are some security
considerations to take into account. A secure version of the SNMP protocol should be
used, authentication configured, and non-default Community strings set.

Navigation
1. SNMP Overview
2. SNMP Protocol Versions
1. SNMP v1
2. SNMP v2c
3. SNMP v3
3. Community Strings
1. Default Community
2. Create a Community
4. Enable SNMP
5. Summary

SNMP Overview
Simple Network Management Protocol (SNMP) is an industry-standard protocol for
pulling performance information from network devices. It is a pull protocol, meaning
the SNMP monitor must reach out on a regular basis and poll devices for information.
SNMP Collectors poll devices for information, and SNMP Agents on the devices
report that data.

The frequency of performance data polling will depend on a few factors:

 Required granularity of the performance data
 Available data storage capacity
 Performance data retention requirements

With SNMP being such a ubiquitous protocol there are a number of both open source
and commercial collector suites, both hardware and software-based. Routers and
switches almost always feature SNMP Agents. Windows, Linux, and Mac OS also
feature SNMP Agents though they have to be enabled manually.

SNMP Protocol Versions


There are three major versions of the SNMP protocol that have been accepted by the
industry, though others do exist. The three main versions are outlined below, and we
will use v3.

SNMP v1
Version 1 is the original SNMP version and is still widely used almost 30 years later.
There is no security built into v1 other than the SNMP Community string. If the
Community string presented by the Collector matches the string configured on the
Agent then it will be allowed to poll the device. This is why it's important to isolate
SNMP to a dedicated management subnet and change the default Community string.
It's not possible to delete the standard Community string, but as shown later in this
article it can be renamed and have its read access removed.

SNMP v2c
Version 2c brings additional capabilities to SNMP but still relies on the Community
string for security. The next version is the preferred choice, though some
organizations still rely on v1 and v2c.

SNMP v3
Version 3 brings encryption and authentication, as well as the capability to push
settings to remote SNMP Agents. SNMP v3 is the preferred version when both the
Agent and Collector support it. While SNMP v3 does have the capability to push
settings to remote devices many organizations don't opt to use it, in favor of more
robust solutions like Ansible, Puppet, Chef, or proprietary management systems.

Infrastructure Router STIG Finding V-3196 requires that SNMP v3 be used:

The network device must use SNMP Version 3 Security Model with FIPS 140-2
validated cryptography for any SNMP agent configured on the device.

https://www.stigviewer.com/stig/infrastructure_router/2016-07-07/finding/V-3196
Community Strings
A Community string is like a password, allowing SNMP Agents to vet polling from
SNMP Collectors in a very crude way. More modern versions of SNMP add
authentication and encryption to the protocol.

Default Community
The default Community string on almost all network devices is simply the word
"public". This is well-known, and many port scanners like Nmap will automatically
try the default "public" string. If the default Community string is left in place it can
allow attackers to perform reconnaissance quickly and easily. Infrastructure Router
STIG Finding V-3210 requires that the default string be changed:
The network device must not use the default or well-known SNMP Community strings
public and private.

https://www.stigviewer.com/stig/infrastructure_router/2016-07-07/finding/V-3210

On MikroTik platforms it's not possible to delete or disable the default "public"
Community string, but it can be renamed and restricted:
/snmp community set 0 name=not_public read-access=no
write-access=no

Create a Community
Next create an SNMP Community with the following attributes:

 Non-default name
 Read-only access
 Secure authentication
 Encryption

The following is a long command but it does everything necessary:

/snmp community add name=fish_tank read-access=yes write-access=no authentication-protocol=SHA1 authentication-password=super_great_password encryption-protocol=AES encryption-password=other_super_password security=private

Enable SNMP
Only one command is necessary to enable SNMP and configure the location and
contact information for the device:

/snmp set contact="Tyler @ Manito Networks"


location="Internet, USA" enabled=yes

Summary
SNMP is a robust, well-supported monitoring protocol used by MikroTik and other
mainstream manufacturers. Use non-default Community names, authentication, and
encryption to ensure that no one else can read information from your devices. Enable
SNMP and set good contact and location information to help ease distributed network
monitoring.

WAN Load Balancing


FIREWALL , OPTIMIZATION , ROUTING

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!

Preface
Some of the most requested topics folks ask me for are multi-WAN and load
balancing implementations. Unfortunately, as easy as most solutions are on MikroTik,
these aren't simple. Many vendors like Ubiquiti have wizards that you can use during
the initial device setup to configure multi-WAN and load balancing, but that hasn't
come to RouterOS yet. Those wizard-based implementations are still complex, but
that complexity is hidden from the device administrators.

Using a load balanced multi-WAN setup helps us meet a few design goals:

 Failover in case of ISP failure
 Increase total available bandwidth for users
 Distribute bandwidth utilization across providers

Something that should be noted before you go further - this is a fairly complex topic.
Multi-WAN and load balancing requires us to configure multiple gateways and
default routes, connection and router mark Mangle rules, and multiple outbound NAT
rules. If you aren't familiar with MikroTik firewalls, routing, and NAT then it might
be best to put this off until you've had some time to revisit those topics.

Navigation
1. Router Setup
2. Input Output Marking
3. Route Marking
4. Special Default Routes
5. Summary

Router Setup
A single MikroTik router is connected to two ISPs (Charter and Integra Telecom) on
ether1 and ether2 respectively, and a LAN on ether3. Traffic from the LAN will be
NAT'd out both WAN ports and load balanced. See the topology below:
Configure the local IP addresses:

/ip address
add address=1.1.1.199/24 interface=ether1
comment="Charter"
add address=2.2.2.199/24 interface=ether2
comment="Integra Telecom"
add address=192.168.1.1/24 interface=ether3 comment="LAN
Gateway"

Set the default gateways:

/ip route
add dst-address=0.0.0.0/0 check-gateway=ping
gateway=1.1.1.1,2.2.2.1

NAT (masquerade) out the WAN ports:

/ip firewall nat
add action=masquerade chain=srcnat comment="Charter" out-interface=ether1
add action=masquerade chain=srcnat comment="Integra Telecom" out-interface=ether2

At this point you could stop configuring the router and things would work just fine in
a failover situation. Should one of the two providers go down the other would be used.
However there is no load-balancing, and this is strictly a failover-only solution. Most
organizations wouldn't want to pay for a second circuit only to have it used just when
the first goes down.

Input Output Marking


One problem with having more than one WAN is that packets coming in one WAN
interface might go out the other. This could cause issues, and may break VPN-based
networks. We want packets that belong to the same connection to go in and out the
same WAN port. Should one provider go down the connections across that port would
die, then get re-established over the other WAN. Mark connections coming in the
router on each WAN:
/ip firewall mangle
add action=mark-connection chain=input comment="Charter Input" in-interface=ether1 new-connection-mark="Charter Input"
add action=mark-connection chain=input comment="Integra Telecom Input" in-interface=ether2 new-connection-mark="Integra Telecom Input"
This helps the router keep track of what port each connection came in from.

Now we'll use the connection mark just created for packets coming IN to trigger a
routing mark. This routing mark will be used later on in a route that tells a connection
which provider's port to go OUT.
add action=mark-routing chain=output comment="Charter Output" connection-mark="Charter Input" new-routing-mark="Out Charter"
add action=mark-routing chain=output comment="Integra Telecom Output" connection-mark="Integra Telecom Input" new-routing-mark="Out Integra Telecom"

Connections that have been marked then get a routing mark so the router can route the
way we want. In the next step we'll have the router send packets in the connections
with those marks out the corresponding WAN interface.
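If you want to confirm the connection marks are being applied, the connection tracking table can be filtered on the mark names used above - a quick verification sketch:

/ip firewall connection print where connection-mark="Charter Input"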

LAN Route Marking


Some special Mangle rules are needed to tell the router to load balance traffic headed across the router from the LAN. How this load balancing works is beyond the scope of this article, but suffice it to say a lot of hashing happens. If you want to learn more, check out the MikroTik documentation.

These rules tell the router to balance traffic coming in ether3 (LAN), heading to any non-local (!local) address over the Internet. We grab the traffic in the prerouting chain, so we can redirect it to the WAN port that we want based on the routing mark.

The following commands balance ether3 LAN traffic across two groups:

add action=mark-routing chain=prerouting comment="LAN load balancing 2-0" \
    dst-address-type=!local in-interface=ether3 new-routing-mark="Out Charter" \
    passthrough=yes per-connection-classifier=both-addresses-and-ports:2/0
add action=mark-routing chain=prerouting comment="LAN load balancing 2-1" \
    dst-address-type=!local in-interface=ether3 new-routing-mark="Out Integra Telecom" \
    passthrough=yes per-connection-classifier=both-addresses-and-ports:2/1

NOTE: The routing marks above are the same in this step as they were in the previous
step, and correspond with the routes we're about to create.

Special Default Routes


At this point we've marked connections coming in the WANs, and used those connection marks to create routing marks. The LAN load balancing rules above also apply routing marks, and they correspond with the routes created in the next step. Create default routes that grab traffic with the routing marks we created above:

/ip route
add distance=1 gateway=1.1.1.1 routing-mark="Out Charter"
add distance=1 gateway=2.2.2.1 routing-mark="Out Integra Telecom"

Note: These routes only get applied with a matching routing mark. Unmarked packets
use the other default route rule created during router setup.

Connections that came in on the Charter link get a connection mark. That connection mark triggers a routing mark. The routing mark matches the mark in the route above, and the return packets go out the interface the connection came in on.
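A quick way to see whether the Mangle rules are actually matching traffic is to watch their packet and byte counters:

/ip firewall mangle print stats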

Summary
Here's what we've configured:

1. New connections inbound on a WAN get marked
2. Connections with that mark get a routing mark
3. LAN traffic heading outbound gets load balanced with the same routing marks
4. Routing marks match default gateway routes and head out that interface
5. Wash, Rinse, Repeat

MikroTik Command Line Upgrades


SOFTWARE

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!

Preface
If you don't have graphical access like Winbox or Webfig to a MikroTik router you can easily do software updates via the command line. These commands can be used in Ansible playbooks as well to programmatically update devices.

Navigation
1. Set Package Channel
2. Check for Updates
3. Install Updates

Set Package Channel


A few package channels exist and you can select which branch you'd like.
The current branch is always recommended because it's the latest stable release:
system package update set channel=current

If you prefer to live on the bleeding edge or if you want to test new features in
development use the Release Candidate channel:

system package update set channel=release-candidate

Check for Updates


With the channel selected run the update check:

system package update check-for-updates

Install Updates
If a new version is available download it:

system package update download


When new package files are detected, the installation will be triggered the next time the device boots.
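Before rebooting you can confirm the download finished and see which version is staged - the update status output shows the selected channel, the installed version, and the latest available version:

system package update print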

Kick off the installation by rebooting the router:

system reboot

MikroTik DDoS Attack Mitigation


SECURITY

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!

Preface
Being attacked sucks and we hate it. Done. Here's a solution for mitigating an attack. This won't stop a large-scale DDoS on its own - that requires coordination with upstream providers and possibly additional hardware capabilities.

Threat Address List


A DDoS attack comes from many sources and it's a heck of a lot easier to block
connections using an Address List. The alternative is making a ton of standalone rules
and we hate that too. Identify the malicious traffic sources (e.g. 1.1.1.1 and 2.2.2.2)
and create an Address list:

/ip firewall address-list
add address=1.1.1.1 list=Blackhole
add address=2.2.2.2 list=Blackhole

Prerouting Filter Rule


Check out the MikroTik RouterOS packet flow diagram before going any further if you aren't familiar with the packet flow. To endure an attack we want to filter and drop traffic as close to the source as possible. The further bad traffic gets into the router's packet flow before being dropped, the more strain it puts on the device. The Prerouting process is a great place to block traffic on the device itself if you don't have blackholing configured with your upstream providers.
Create a Prerouting filter rule using the Blackhole address list we just created and the
Drop action:

/ip firewall raw
add chain=prerouting src-address-list=Blackhole action=drop place-before=0

As new malicious IP addresses are detected just add them to the Address List.
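Address list entries can also be given a timeout so they expire on their own once the attack subsides - the address below is just a placeholder:

/ip firewall address-list
add address=3.3.3.3 list=Blackhole timeout=1d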

Fin

P2P Filtering
SECURITY , FIREWALL

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!

Preface
Limiting Peer-to-Peer (P2P) network traffic is important for businesses and other
network operators for a couple reasons, mainly risk management and bandwidth
conservation.

Navigation
1. Risk Considerations with P2P
2. Possible Benefits of P2P
3. Blocking P2P in Firewalls
1. Mikrotik P2P Firewall Rules
2. Rule Breakdown
3. Additional Steps

Risk Considerations with P2P


Risk management in terms of P2P has multiple facets - stopping illicit material downloads and preventing sensitive data exfiltration from a network, for example. Some viruses also use P2P protocols to communicate with botnet operator Command & Control (C&C) servers.
From a legal perspective there is also the risk of being labelled a "copyright infringer"
due to your public IP being flagged as part of the download / upload pool for a pirated
movie or music album. This brings with it a separate set of non-technical challenges,
and possible action on the part of your service provider to terminate services.

Possible Benefits of P2P


While being aware of the risks of P2P, also bear in mind possible benefits. Some
operating systems use P2P protocols to distribute software patches and updates within
the local network between clients, saving bandwidth and processing load on central
servers. Some antivirus vendors also use P2P to distribute large anti-virus signature
updates between agents, reducing bandwidth consumption considerably. Many AAA-
title games use P2P to distribute patches and content updates, so blocking P2P on a
WISP network or small ISP could have an impact on customers.

Carefully consider the impact of blocking P2P in your networks before moving
forward.

Blocking P2P in Firewalls


With Mikrotik's P2P firewall functionality it's very easy to both filter P2P traffic and
add the hosts that are creating the traffic to an address list, allowing for some
accountability.

Mikrotik P2P Firewall Rules


Here are the two firewall commands that detect P2P traffic, add the offending host to
a dynamic address list, and then block traffic from hosts on that list:

/ip firewall filter add action=add-src-to-address-list address-list=P2P address-list-timeout=30m chain=forward comment="Add P2P hosts to address list" out-interface=ether1-gateway p2p=all-p2p

/ip firewall filter add action=drop chain=forward comment="Drop traffic from P2P hosts" out-interface=ether1-gateway src-address-list=P2P

Rule Breakdown
Let's break down each rule in turn. The first rule matches traffic in the forward chain (chain=forward, data going through the router) being routed out the WAN interface (out-interface=ether1-gateway) and checks for P2P traffic (p2p=all-p2p). When P2P traffic is found it triggers action=add-src-to-address-list and adds the offending host's IP to the dynamic "P2P" list (address-list=P2P). It adds the host IP to the list for 30 minutes (address-list-timeout=30m), so that traffic isn't blocked forever by the next rule in case of a false positive. If it isn't a false positive, the host will simply be re-added once it falls off the address list and generates more P2P traffic.

The second rule drops (action=drop) all traffic from hosts on the P2P address list (src-
address-list=P2P) going out the WAN port (out-interface=ether1-gateway) to the
Internet. If you need more information about firewall actions, rules, etc see
the Mikrotik Firewall write-up. It will continue dropping traffic until the host falls off
the address list after 30 minutes. Once on the list, whoever is using this host can't
access the Internet, but they will still be able to reach internal network resources like
servers and printers. By tweaking the second firewall rule it's possible to limit
network traffic further.

Additional Steps
Adding a third firewall rule could allow for a Syslog message to be sent, if Network Admins are monitoring Syslog messages and a Syslog log action has been set up. Helpdesk staff can check the dynamic P2P address list to see what hosts have tripped the P2P rules and begin to remediate the issue.
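Here's a minimal sketch of that third rule, assuming a Syslog logging action is already configured on the router - it logs matches with a recognizable prefix, and it would need to sit above the drop rule in the filter order so it actually sees the traffic:

/ip firewall filter add action=log chain=forward comment="Log P2P hosts" log-prefix="P2P:" out-interface=ether1-gateway src-address-list=P2P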

NAT Port Forwarding


ROUTING

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!
One of the most common tasks that a network administrator will need to perform is
forwarding ports across a router using Network Address Translation (NAT) - also
known as "pinholing". This process can be set up statically by an administrator, only
forwarding a few ports as needed. It can also be handled dynamically by processes
like UPnP that pinhole the router on their own using instructions from devices that
request ports be opened. Ports are forwarded across routers to make internal services
like email and web servers available to the outside world.

For security these servers should be located in a separate network segment from the
rest of the internal network, and only the necessary ports should be forwarded across
the router to keep the attack surface as small as possible. One of the most frequently
seen implementations of this is forwarding HTTP(S) access across the router from an
external static IP to an internal Apache, NGINX, or IIS server. This could be put in
place to facilitate outside access to an Exchange OWA portal, a line of business web
application, Sharepoint server, etc.

In this case we have a single Mikrotik router with a static IP address on the WAN
interface and an internal Apache server running on a Linux box. We need to forward
HTTP (TCP port 80) across the router so that the web server is accessible from the
Internet. No other ports should be forwarded for security - we don't want someone
running a port scan and finding that SSH or some other protocol exposed on that
Linux server. Here is the topology:

Mikrotik NAT Topology

This is as straightforward as it gets - one WAN connection, one server. We've identified our internal (ether2) and external (ether1) interfaces, and we know what protocol (TCP), port (80), and internal IP address (192.168.88.198) we need to NAT to.

Here is the NAT rule we'll use to accomplish the port forwarding:

/ip firewall nat add action=dst-nat chain=dstnat comment="NAT in HTTP" dst-port=80 in-interface=ether1-gateway protocol=tcp to-addresses=192.168.88.198 to-ports=80
First, we add a new rule with the dst-nat action in the dstnat chain. The dstnat chain handles traffic that is headed inbound toward an internal network, like the
traffic we want to handle coming in from the WAN to the internal server. The other
unused chain - SRCNAT - handles traffic leaving a NAT'd internal network and going
to some other segment, like the WAN. We'll discuss SRCNAT more when we talk
about Port Address Translation (PAT). In this command we're also indicating the dst-
port (80) that traffic will be hitting coming in from the WAN, and the type of protocol
(TCP). We also indicate that this traffic we want to NAT internally is coming into
interface ether1. By being so specific with our NAT rule we're restricting the attack
surface that's getting presented to the outside world.

The final part of the command indicates what internal IP address to NAT the traffic to
(to-addresses=192.168.88.198) and what port to use (to-ports=80). Everything hitting
the router on WAN port ether1 that is TCP/80 will be sent to internal IP
192.168.88.198:80 - that's all there is to it. If a port scan were to be run on the router
now we would see one extra port open, with an Apache server responding to HTTP
requests. We've forwarded only the necessary traffic, and assuming that the web
server is patched regularly there should be no security issues with this configuration.
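If the same server also hosted HTTPS, a parallel rule could forward TCP 443 in exactly the same way - shown here only to illustrate extending the pattern, not as part of the original scenario:

/ip firewall nat add action=dst-nat chain=dstnat comment="NAT in HTTPS" dst-port=443 in-interface=ether1-gateway protocol=tcp to-addresses=192.168.88.198 to-ports=443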

This is a simple task, but be mindful going forward of the security implications of
exposing an internal server to external requests. Only forward what needs to be
forwarded, and be sure to patch your servers regularly to reduce the risk of someone
compromising your server and then using it to access your internal network segments.

SOHO Wireless Optimization


WIRELESS , OPTIMIZATION

The MikroTik Security Guide and Networking with MikroTik: MTCNA Study
Guide by Tyler Hart are available in paperback and Kindle!
Mikrotik SOHO (Small Office, Home Office) wireless products are incredibly
versatile, and there are many flexible settings for Wireless interfaces in "Simple"
mode. "Advanced" mode gives access to highly-tunable wireless features, some of
which should be tweaked. Many SOHO models like the RB-750 and RB-951 come
with fairly good wireless settings already in place, but with a few changes you can
improve wireless connectivity significantly, and accommodate more wireless clients.
This is very important if you're putting a SOHO model into a small / branch office
with more than a few users. Considering the prevalence of end-user smart phones,
tablets, and laptops in offices, the load on wireless networks is growing. If wireless
infrastructure isn't tuned properly it will quickly become apparent that connectivity
isn't robust enough.

The following wireless interface options (most located in "Advanced" menus) can
improve connectivity significantly for SOHO routers:

frequency-mode=regulatory-domain country=united states
frequency=auto
channel-width=20mhz
wireless-protocol=802.11
distance=indoors

This copy-and-paste command can be run on RB751/951 models to apply the settings above:

/interface wireless set wlan1 mode=ap-bridge wireless-protocol=802.11 frequency=auto band=2ghz-b/g/n channel-width=20mhz distance=indoors frequency-mode=regulatory-domain country="united states"

Obviously the first setting's country designation is specific to the United States - change it as needed for your country. Some of these options are now defaults in the latest version of RouterOS, however that hasn't always been the case, and I'm sure defaults will be updated again in later releases.

These settings aren't always explained in depth in Mikrotik wireless documentation, however I've found them to create the most robust wireless performance. There has been much discussion in other online forums around wireless optimization that confirms these settings. Wireless network performance can be gauged by viewing a wireless interface's Overall Tx CCQ (Client Connection Quality).
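If you'd rather check CCQ from the command line than Winbox, the standard wireless monitor command reports it along with other live interface statistics:

/interface wireless monitor wlan1 once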

Here is an RB-751 I currently have in service:


Overall Tx CCQ % with advanced settings applied

When deploying newer RB-951 models running stock wireless configuration I typically saw CCQ < 60%, which obviously leaves room for improvement.

I recommend watching CCQ for a couple hours on a busy wireless network to establish a baseline, then changing the advanced settings shown above one at a time to see how each affects CCQ. Not all of those settings are appropriate for all installations, but if you can see a 10% or 20% improvement in CCQ your users will appreciate it.

Wiping MikroTik Devices


MIKROTIK , BEST PRACTICES

You can now get MikroTik training direct from Manito Networks. MikroTik
Security Guide and Networking with MikroTik: MTCNA Study Guide by Tyler
Hart are both available in paperback and Kindle!

Preface
MikroTik devices are very cost-effective - some would say downright cheap - so the
capital cost of upgrading networks tends to be fairly low. In some organizations this
can lead to a pile of RouterBOARD devices on someone's desk in a corner that are
eventually donated, repurposed in a lab, or re-used in a pinch. Unfortunately, a
repurposed RouterBOARD unit that hasn't been wiped can expose a lot of sensitive
information in the wrong hands. While some things are hidden in the configuration
and can't be viewed from the console, .rsc or .backup files in onboard storage can
disclose them.

First we'll delete sensitive files in the onboard storage, then we'll wipe the
configuration.

Delete Files
Resetting the configuration in the next step won't remove files in the onboard storage.
Use the following commands to delete sensitive files:

/file
remove [find name~".rif"]
remove [find name~".txt"]
remove [find name~".rsc"]
remove [find name~".backup"]

Double-check that any sensitive files have been removed.
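Listing what's left in storage is an easy way to double-check:

/file print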

Reset Configuration
Use the following command to reset the device's configuration:

/system reset-configuration keep-users=no skip-backup=yes

Confirm the command and the device will wipe its configuration, reboot, and
regenerate SSH keys. RouterOS will be returned to its default out-of-the-box
configuration and the device can be repurposed.

Ubiquiti Site-to-Site IPSEC VPN


UBIQUITI , SECURITY

Need help securing your Ubiquiti routers? Configuring IPSEC links between
locations? The extended guides for Ubiquiti EdgeRouter Hardening and IPSEC
Site-to-Site VPNs are now available on the Solutions page.
Site-to-Site IPSEC
IPSEC can be used to link two remote locations together over an untrusted medium
like the Internet. The implementation itself is a combination of protocols, settings, and
encryption standards that have to match on both sides of the tunnel.

Terminology
Devices at both sides of the tunnel are called Peers. Each of the peers uses combinations of encryption and hashing protocols, specified in a Proposal, to secure traffic. Once both peers have negotiated a secure connection using the protocols and standards in the proposals, Security Associations (SAs) are installed. These SAs have a finite lifetime; when they expire, new SAs are negotiated.

IPSEC Policy vs. Routing


It's very important to note that IPSEC is not routing. Traffic is sent over IPSEC
tunnels when it matches Source and Destination addresses in an IPSEC Policy. Traffic
that matches the policy is termed "interesting" and sent via the tunnel, not routed like
typical network traffic. Some vendors have their own "routed IPSEC"
implementations but those are specific to their platforms and outside the scope of this
post.

Network Topology
The network scenario in this post has a West Branch office that needs to be connected
to the Central Office. This post does not include the additional configuration of the
East Office that is pictured in the topology below and covered in the extended IPSEC
guide.

Ubiquiti IPSEC VPN Topology

Network Addresses
The West Office has a LAN on the 192.168.2.0/24 network, and a WAN address of
172.16.1.2/24. The Central Office has a LAN on the 192.168.1.0/24 network, and a
WAN address of 172.16.1.1/24. The WAN port on all routers is eth0, and the LAN
gateway port is eth1 in keeping with the typical Ubiquiti defaults.

Configuration Summary
The two sections of configuration commands below will perform the following steps
on both routers:

1. Create firewall IP address groups for easier firewalling
2. Allow traffic between IPSEC peers
3. Create ESP groups with secure encryption and hashing protocols
4. Create IKE groups with the same
5. Create IPSEC peers pointing to the opposite router
6. Create IPSEC proposals to define "interesting" traffic
7. Enable the NAT exclusion feature in the firewall for IPSEC traffic

The two blocks of commands can be copy-pasted to routers on a workbench once they've been configured with IP addresses and a basic default configuration.

Central Router Configuration


The following commands on the Central Office router are the first half of the tunnel
between Central and West:

configure
set firewall group address-group IPSEC description "IPSEC peer addresses"
set firewall group address-group IPSEC address 172.16.1.2
set firewall name WAN_LOCAL rule 15 description "IPSEC Peers"
set firewall name WAN_LOCAL rule 15 action accept
set firewall name WAN_LOCAL rule 15 source group address-group IPSEC
commit

set vpn ipsec esp-group central-west proposal 1 encryption aes256
set vpn ipsec esp-group central-west proposal 1 hash sha1
set vpn ipsec esp-group central-west mode tunnel
set vpn ipsec esp-group central-west lifetime 1800
set vpn ipsec esp-group central-west pfs dh-group2
set vpn ipsec ike-group central-west key-exchange ikev2
set vpn ipsec ike-group central-west proposal 1 encryption aes256
set vpn ipsec ike-group central-west proposal 1 hash sha1
set vpn ipsec ike-group central-west proposal 1 dh-group 2
commit

set vpn ipsec site-to-site peer 172.16.1.2 description "West office"
set vpn ipsec site-to-site peer 172.16.1.2 local-address 172.16.1.1
set vpn ipsec site-to-site peer 172.16.1.2 tunnel 0 esp-group central-west
set vpn ipsec site-to-site peer 172.16.1.2 ike-group central-west
set vpn ipsec site-to-site peer 172.16.1.2 authentication mode pre-shared-secret
set vpn ipsec site-to-site peer 172.16.1.2 authentication pre-shared-secret "manitowest"
set vpn ipsec site-to-site peer 172.16.1.2 tunnel 0 local prefix 192.168.1.0/24
set vpn ipsec site-to-site peer 172.16.1.2 tunnel 0 remote prefix 192.168.2.0/24
commit

set vpn ipsec auto-firewall-nat-exclude enable
commit
save

West Router Configuration


The following commands on the West Office router are the second half of the tunnel
between Central and West:

configure
set firewall group address-group IPSEC description "IPSEC peer addresses"
set firewall group address-group IPSEC address 172.16.1.1
set firewall name WAN_LOCAL rule 15 description "IPSEC Peers"
set firewall name WAN_LOCAL rule 15 action accept
set firewall name WAN_LOCAL rule 15 source group address-group IPSEC
commit

set vpn ipsec esp-group west-central proposal 1 encryption aes256
set vpn ipsec esp-group west-central proposal 1 hash sha1
set vpn ipsec esp-group west-central mode tunnel
set vpn ipsec esp-group west-central lifetime 1800
set vpn ipsec esp-group west-central pfs dh-group2
set vpn ipsec ike-group west-central key-exchange ikev2
set vpn ipsec ike-group west-central proposal 1 encryption aes256
set vpn ipsec ike-group west-central proposal 1 hash sha1
set vpn ipsec ike-group west-central proposal 1 dh-group 2
commit

set vpn ipsec site-to-site peer 172.16.1.1 description "Central office"
set vpn ipsec site-to-site peer 172.16.1.1 local-address 172.16.1.2
set vpn ipsec site-to-site peer 172.16.1.1 tunnel 0 esp-group west-central
set vpn ipsec site-to-site peer 172.16.1.1 ike-group west-central
set vpn ipsec site-to-site peer 172.16.1.1 authentication mode pre-shared-secret
set vpn ipsec site-to-site peer 172.16.1.1 authentication pre-shared-secret "manitowest"
set vpn ipsec site-to-site peer 172.16.1.1 tunnel 0 local prefix 192.168.2.0/24
set vpn ipsec site-to-site peer 172.16.1.1 tunnel 0 remote prefix 192.168.1.0/24
set vpn ipsec auto-firewall-nat-exclude enable
commit
save

Testing
To test the IPSEC tunnel send an ICMP Echo (Ping) from a device on one LAN to a
device on the other. This will generate the "interesting" traffic and force the IPSEC
tunnels to come up. To view how many IPSEC tunnels are currently up use the
following command:

show vpn ipsec status


To get more specific information on the current SAs use the following command:

show vpn ipsec sa


Want to know why we ran the commands we did and how they affect your security? Looking for links to best practices documentation? Check out the Ubiquiti Site-to-Site IPSEC Guide - almost 30 pages of in-depth discussion of IPSEC and how to secure your tunnels.

Ubiquiti Router Hardening


UBIQUITI, BEST PRACTICES, SECURITY

Ubiquiti routers straight out of the box require security hardening like any Cisco,
Juniper, or Mikrotik router. Some very basic configuration changes can be made
immediately to reduce attack surface while also implementing best practices, and
more advanced changes allow routers to pass compliance scans and formal audits.
Almost all of the configuration changes below are included in requirements for PCI
and HIPAA compliance, and the best-practice steps are also included in CIS security
benchmarks and DISA STIGs.

If you'd like a printable copy of this guide complete with a checklist, links to
STIGs, and more in-depth discussions of best practices than will fit in a blog post
check out the Ubiquiti EdgeRouter Hardening Guide.

The router that will be used for this article is a brand new Ubiquiti EdgeRouter X,
fresh out of the box and updated with the latest firmware (1.8.5 as of this writing).
Before going any further ensure that your device is updated with the latest
firmware and rebooted.

The device will be configured in a Branch Office-type configuration, with a WAN connection on Eth0, a LAN connected to Eth1, and a Management interface on Eth4.

Addresses for the interfaces are as follows:


 WAN - DHCP from ISP
 LAN - 192.168.60.0/24
 Management - 192.168.61.0/24

LAN and Management interfaces will be assigned the lowest usable address (.1/24) in
their respective subnets.

Securing Interfaces
The first step we'll take is disabling any physical network interfaces that aren't in use,
denying an intruder access to the device if they somehow got into the wiring closet or
server room. To plug into the router they'd have to disconnect a live connection and
draw attention by bouncing the port.

First list all the interfaces, making note of the numbers associated with each interface:

show interfaces

On our current EdgeRouter X device this is what we're seeing:

Ubiquiti interface list

As mentioned earlier we'll be using eth0 for the WAN, and eth1 for the LAN. Let's
add interface descriptions so they don't get confused:

set interfaces ethernet eth0 description "WAN"
set interfaces ethernet eth1 description "LAN"
set interfaces ethernet eth4 description "Management"

Not only does this help us not get confused, it also helps other networking staff keep
things straight. Then we'll shut off all the interfaces that aren't live so they can't be
used to access the device. In our case we're NOT using interfaces eth2 and eth3, so
let's shut them off:

set interfaces ethernet eth2 disable
set interfaces ethernet eth3 disable

We'll also take a moment to assign IP addresses to the LAN and Management
interfaces:

set interfaces ethernet eth1 address 192.168.60.1/24
set interfaces ethernet eth4 address 192.168.61.1/24

With those changes made let's commit and save the current configuration before
moving forward, then show interfaces again to see our changes.

commit
save

Here are the results:

We see that the interface state has changed and our new descriptions and IP addresses
are listed as well.

Services
Next we'll disable or firewall services that don't need to be running or exposed.
Disabling a service rather than firewalling it is the most appropriate, long-term
solution. It reduces overall attack surface, and ensures that even if a firewall rule gets
botched, the service isn't available for an attacker to take advantage of. An Nmap port
scan of the router via Eth0 (the WAN port) shows four services running - SSH, HTTP,
HTTPS, and NTP as shown below:
Ubiquiti Nmap Port Scan

We'll restrict access to the HTTP(S) GUI to the Management network so only IT staff
plugged into that network can access it. We'll also disable older crypto ciphers that
now have documented vulnerabilities and available exploits. The same will happen
with SSH. We'll leave NTP running, assuming it's being used as the branch office's
NTP time source, but restrict it on the WAN port so it can't be used in NTP DDoS
attacks.

First set the HTTP(S) GUI to only listen for connections on the Management (eth4)
interface, then disable older, vulnerable ciphers:

set service gui listen-address 192.168.61.1
set service gui older-ciphers disable

Once those changes are committed and saved only hosts on the Management network
will be allowed to reach the device's web interface. We'll restrict SSH now in the
same way, and also require that SSH v2 be used for all connections. Once the
following commands are passed and committed you won't be able to access the
device unless you're in the Management network.

set service ssh listen-address 192.168.61.1
set service ssh protocol-version v2

Credentials
The Ubiquiti factory-default username and password combination of "ubnt" and
"ubnt" is widely known and publicly available, and many compliance and security
scanners like Tenable's Nessus check for factory default credentials. Compliance
standards like PCI-DSS and HIPAA strictly forbid the use of factory-default
credentials. In keeping with that spirit we will set up our own credentials, and remove
the factory-default set. First, I'll create an admin user for myself:
set system login user tyler
set system login user tyler full-name "Tyler Hart"
set system login user tyler authentication plaintext-password 1234
set system login user tyler level admin

In place of the "1234" password you should use a suitably secure password that meets
your organization's security and compliance requirements. Although the command
setting the password uses the phrase "plaintext-password", the system will encrypt it
for you. Only after setting my full name and a strong password do I pass the final
command, giving myself "admin"-level privileges. Commit and save the new
credentials, log out of the default "ubnt" user, and log in with your own admin-level
credentials. Logged in as yourself, delete the built-in "ubnt" user so that it can't ever
be used to breach the router:

delete system login user ubnt

If you're not comfortable deleting the built-in "ubnt" user you must set a complex
password, because port scanners will attempt to log in with factory credentials given
the opportunity. Each device administrator should have their own credentials, so it's
possible to see who changed what and when on the device. If all admins log in as the
same user there is no non-repudiation, and no way to tell who may have made a
malicious configuration change.

Neighbor Discovery
Ubiquiti routers come with neighbor discovery turned on by default, which is great for
convenience but not great for security. It runs on UDP port 10001, and allows
administrators to easily see what devices are on the network and how they are
addressed. Unfortunately it can also allow attackers to easily fingerprint a network,
and can help them discover soft targets faster than they might otherwise. Having
neighbor discovery turned on can also make attackers aware of other devices they
may not have seen otherwise because of network segmentation. We'll shut off
neighbor discovery to start with:

set service ubnt-discover disable

This disables the neighbor discovery service on all ports, both physical and virtual. It
also ensures that any new virtual interfaces won't run neighbor discovery when they
are created. If you'd like to use neighbor discovery, you should disable it on all ports
except those you want it running on. This could mean adding a lot more configuration
lines, but that is the current state of things. Best practice says that you should run
neighbor discovery protocols like CDP, LLDP, etc only on management interfaces.
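As a sketch of that per-port approach - assuming your EdgeOS version supports per-interface discovery settings and that eth4 is the management port as in this article - you would leave the service enabled and disable it on every other interface:

set service ubnt-discover interface eth0 disable
set service ubnt-discover interface eth1 disable
set service ubnt-discover interface eth2 disable
set service ubnt-discover interface eth3 disable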

Firewalls
Firewalling is a complex topic, but there are basic rules that can be put in place to
secure a device from port scanners, malicious login attempts, and other probes from
the WAN. We'll create a firewall rule specifically for inbound traffic on the WAN,
and give it a good description:

set firewall name WAN_In
set firewall name WAN_In description "Block WAN Probes"

Once the rule is created and described we'll set the default action that the rule should
take for matching packets. With this rule being on the inbound side of our WAN, port
scanners and others will be hitting it, so the default action should be to "drop":

set firewall name WAN_In default-action drop

If traffic doesn't match the rules we'll create in just a moment, then the default action
takes place and that traffic gets dropped. You may be tempted to use the "reject"
action instead of "drop", but even rejected packets can help a port scanner fingerprint
your router. The best option is to just silently "drop" the packets, and this is required
for PCI-DSS.

The first rule in "WAN_In" will allow any connections through the firewall with
states of either "Established" or "Related". This is authorized traffic that originated
properly, and should be quickly allowed through the firewall:

set firewall name WAN_In rule 1 action accept
set firewall name WAN_In rule 1 description "Allow Established / Related Traffic"
set firewall name WAN_In rule 1 state established enable
set firewall name WAN_In rule 1 state related enable

Next we'll define an Address Group of trusted external IPs. We'll allow remote
connections to the router via the WAN, but only from those trusted IPs. First, create
the Address Group, then set a description:
set firewall group address-group Trusted_IPs
set firewall group address-group Trusted_IPs description "External Trusted IPs"

Add any trusted external IP addresses you have to this list. This could be for a site-to-
site VPN, ICMP echos to check if the device is up, SNMP monitoring traffic, or
anything else. I'll use 1.1.1.1 and 2.2.2.2 just as examples:

set firewall group address-group Trusted_IPs address 1.1.1.1
set firewall group address-group Trusted_IPs address 2.2.2.2

Now we'll create a firewall rule in WAN_In and reference the address group created
above:

set firewall name WAN_In rule 2 action accept
set firewall name WAN_In rule 2 description "Allow Trusted IPs"
set firewall name WAN_In rule 2 source group address-group Trusted_IPs

By default Ubiquiti devices accept all ICMP requests, including echo requests or
"pings". We only want the WAN interface responding to pings from our trusted IPs,
so there are two options. The first is to disable all ICMP replies globally:

set firewall all-ping disable

This is quick and easy, but it also removes the use of ICMP as a troubleshooting tool. The second option is to create a third firewall rule for WAN_In, specifically dropping ICMP echo requests (pings) arriving on the WAN while letting ICMP work elsewhere:

set firewall name WAN_In rule 3 action drop
set firewall name WAN_In rule 3 description "Drop ICMP"
set firewall name WAN_In rule 3 protocol icmp
set firewall name WAN_In rule 3 icmp type 8

At this point we're ready to apply the rule on the inbound side of the WAN (Eth0)
interface. Traffic that has a state of "Established" or "Related", and traffic from IPs in
the Trusted_IPs list will be allowed. Everything else (port scans, login attempts, pings,
etc) will be dropped by the default action. We've created the firewall entry, added
rules to it, now we'll apply it:

set interfaces ethernet eth0 firewall in name WAN_In
set interfaces ethernet eth0 firewall local name WAN_In

We'll commit and save the configuration, then run another port scan like before. You
can see the results below, scanning both TCP and UDP:

Nmap scan of secure Ubiquiti router

Nmap can tell there is something there and it's a Ubiquiti router only because the
scanning computer is directly connected to it and can see the MAC address via the
physical link. Were this a production router being scanned across a WAN connection
there wouldn't be that information, and it would appear that the WAN IP has no
device assigned to it. This is exactly what we want remote scanners to see - nothing at
all.

Logging
It's best to set up logging to an external repository, like a Syslog server. If a device is
ever compromised or the configuration tampered with by an insider, the logs on the
local device become suspect. Sending logs to an external server and archiving them
preserves logs for investigations and forensics, and can ensure their integrity remains
intact.

See the Ubiquiti Syslog article for directions on how to configure Syslog logging.

In that same vein it's important to ensure that the timestamps of your log entries are
accurate. It's also important that the clocks of all your routers are actively updated, so
if you need to correlate events between devices you know that their time is correct.
See the Ubiquiti NTP article for directions on configuring NTP and keeping clocks in
sync.

Ubiquiti Login Banner Configuration


BEST PRACTICES , UBIQUITI

Need help securing your Ubiquiti routers? Configuring IPSEC links between
locations? The extended guides for Ubiquiti EdgeRouter Hardening and IPSEC
Site-to-Site VPNs are now available on the Solutions page.

Having a login banner is widely considered a best practice across the networking
industry. Though the legal merits and applicability of login banners is sometimes
disputed, there is value in notifying anyone who may try to log into a device that
access is monitored and audited. Some compliance standards also require a login
banner, and there is a DISA STIG that also requires it for any equipment in use by the
United States government. EdgeOS is somewhat unique in that it offers two login
banners instead of one - a pre-login and post-login banner. The pre-login banner
displays when a user is prompted for credentials, before authentication. The post-login banner displays once a user has successfully authenticated.

First we'll configure the pre-login banner, then the post-login banner, and finally
commit and save the new configuration. The following command sets the pre-login
banner.

set system login banner pre-login "---THIS DEVICE IS MONITORED, INCLUDING ACCESS ATTEMPTS AND LOGINS. ACCESS ONLY AUTHORIZED TO MANITO NETWORKS STAFF---"

Now we'll set the post-login banner.

set system login banner post-login "---THIS DEVICE IS MONITORED, AND ACCESS IS REGULARLY AUDITED---"

The banner text in the commands above is just an example, and you should create banner text specific to your organization's legal requirements. Don't forget to commit and save the new configuration, then log out and back in to see how the new banner looks.

Ubiquiti SNMP Configuration


UBIQUITI , MONITORING

Need help securing your Ubiquiti routers? Configuring IPSEC links between
locations? The extended guides for Ubiquiti EdgeRouter Hardening and IPSEC
Site-to-Site VPNs are now available on the Solutions page.

SNMP is easy to configure on Ubiquiti devices with just a few commands. It runs on
UDP port 161, and just like with Mikrotik or any other router brand it's used to
monitor network interface statistics, CPU and RAM utilization, and more. Network
monitoring suites like Solarwinds, PRTG, Zenoss, and others can use SNMP to graph
statistics over time, giving you a running log of device performance.

By default SNMP isn't configured on EdgeOS-based Ubiquiti devices, though on many other platforms it is, with a default community string of "public". Right out of the box SNMP has a few attributes that you can configure, including a device's location, contact information, and description. This is great when you're onboarding new administrators, and helps keep everything straight.

First, set the location, contact, and description information for your device.

configure
set service snmp location "Virginia, USA"
set service snmp description "Office Edge Router"
set service snmp contact "[email protected]"

There's all the basic device information in just a few commands. Next we need to
configure an SNMP community. The SNMP community is just a string of text that an
SNMP probe or collector will use to extract statistics from the device. Different
communities can have different permissions allowing SNMP to read and write, view
specific types of statistics, and more. The community string must match on the
device(s) being monitored and the collector.
By default many manufacturers have the SNMP community set to "public", which is
very well-known and should be modified immediately. SNMP can be a goldmine of
information for an attacker doing reconnaissance, trying to fingerprint devices and
identify vulnerabilities. Some compliance standards like PCI-DSS specifically call out
having "public" SNMP communities configured as a compliance violation. The
following commands create a new SNMP community "manitonetworks" and give it Read-Only permissions.

set service snmp community manitonetworks
set service snmp community manitonetworks authorization ro

With the device details configured and the new community string set it's possible to
probe SNMP and get some basic statistics about the device once we configure the
device to listen on a particular interface. The following command configures the
device to listen on the interface configured for 192.168.1.1.

set service snmp listen-address 192.168.1.1

SNMP should only listen on trusted interfaces - if someone knows or guesses your
community string they will have full access to the device's information and
performance statistics. Best practice is to configure SNMP to listen on a physical
management interface or VLAN subinterface.

Now commit and save the configuration changes. With the SNMP configuration
complete you can add the device to your network monitoring software, add the
community string, and that's it.
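From a monitoring host with the net-snmp tools installed you can confirm the community string and listen address are working - substitute your own device address:

snmpwalk -v2c -c manitonetworks 192.168.1.1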

Ubiquiti Syslog Configuration


UBIQUITI , BEST PRACTICES , MONITORING

Need help securing your Ubiquiti routers? Configuring IPSEC links between
locations? The extended guides for Ubiquiti EdgeRouter Hardening and IPSEC
Site-to-Site VPNs are now available on the Solutions page.

Syslog is one of the most widely supported event reporting mechanisms, across
almost all manufacturers and OS distributions including Ubiquiti and EdgeOS. Using
Syslog to report events happening on routers, switches, and servers is typical in the
networking industry, and being able to centrally monitor reportable events on network
infrastructure is critical as you scale up. Most organizations don't report every single
event because that would create a huge, unmanageable mess of logs.
Instead, administrators focus on hardware, authentication, interface up/down, and
network adjacency events.

Beyond the convenience of centralizing logs in one place for monitoring, Syslog plays
an important part in an organization's network security framework. If a device is
breached, or if a breach is suspected, the logs on that local device become suspect. An
attacker may wipe the local device logs wholesale, or modify them specifically to
cover their tracks or focus attention elsewhere. Having logs shipped to another device,
that preferably uses separate authentication, allows some assurance that the logs have
not been tampered with and can be used for investigation.

Event archiving also becomes possible when shipping events to a centralized server.
An organization's policy may require 90 days of log retention, or a legal requirement
may exist that sets a certain standard. Either way, this gives you a rolling historical
record of what's happened on your devices. This wouldn't be possible if you're just
storing logs locally, because many devices purge logs on reboot or power cycle, or
lack the embedded storage capacity for long-term log storage.

Syslog has varying degrees of event severity - 8 in total, numbered 0 through 7: Emergency (0), Alert (1), Critical (2), Error (3), Warning (4), Notice (5), Informational (6), and Debug (7). Familiarize yourself with the severity levels, because they are used across almost all device manufacturers. The protocol itself runs on UDP port 514, but that is automatically included in the configuration and doesn't have to be specified manually.

With that being said, we'll set up a Ubiquiti router to report important events to a Syslog server, using The Dude (running on 192.168.90.183) as a monitoring dashboard. We'll be monitoring for all events level 4 (Warning) and up. This is a no-cost solution that centralizes the administrative task of monitoring infrastructure, and it's surprisingly flexible.

First, put your Ubiquiti device in configuration mode.

configure

Next, configure the device for the IP of your Syslog server (in this case
192.168.90.183), and the minimum severity level of events that should be shipped. If
you use the "warning" level like in the command below, then all events that are
warning, error, critical, alert, and emergency levels will be shipped. It's up to you to
determine what minimum level of events is most appropriate for your organization.

set system syslog host 192.168.90.183 facility all level warning

The "facility" portion of the command specifies what router functions are being
monitored. In this case it's "all" functions, though you can specify specific levels for
specific functions. This is really useful for troubleshooting, or monitoring specific
router functions that you suspect are misbehaving. Available functions include
protocols, security, auth, and more. Starting out with the "all" facility helps you
capture a broad swath of events, and then you can narrow down your reporting if
necessary for your organization's specific needs. It's always best with logging to start
broadly, then whittle it down from there - you may see something that you wouldn't
have otherwise that demands your attention.

Lastly, commit and save your configuration, then generate some events. Try logging
into the device with a wrong username and password on purpose to generate an event,
and verify it's been shipped to the Syslog server. Play around with it, so that when
actual events are triggered in production you know why they happened, and how to
respond.

Ubiquiti NTP Configuration


BEST PRACTICES , UBIQUITI

Need help securing your Ubiquiti routers? Configuring IPSEC links between
locations? The extended guides for Ubiquiti EdgeRouter Hardening and IPSEC Site-
to-Site VPNs are now available on the Solutions page.

Keeping good time on your infrastructure devices like switches, routers, and firewalls
is absolutely essential. It ensures that log timestamps are accurate for use in
troubleshooting and forensics, and it ensures that devices relying on timestamped
certificates will expire them at the same time. This is particularly true with IPSEC and
other VPN technologies. Just like in the Mikrotik NTP tutorial, it's fairly
straightforward to set the NTP client up on an EdgeOS-based device. First, log into
the device via SSH and enter the Configure mode with the following command:

configure

For this tutorial we will be utilizing pool.ntp.org timeservers. This is a fantastic organization, and if you can contribute to the project by volunteering an NTP server of your own the whole community benefits. The NTP servers that are made publicly available for use are load-balanced, so pointing your NTP client to a generic FQDN like those shown below ensures that you'll always reach a viable time server. The following commands will set the NTP servers on EdgeOS-based Ubiquiti products:

set system ntp server 0.pool.ntp.org
set system ntp server 1.pool.ntp.org
set system ntp server 2.pool.ntp.org
set system ntp server 3.pool.ntp.org

Verify that your device's timezone is set correctly. Many organizations choose to set
all their devices to UTC time. This is described as a best practice by mainstream
vendors, and is especially important when an organization has devices located in
different timezones, or across states or regions that observe Daylight Savings Time
differently. Having all devices set to UTC time takes the guesswork out of adjusting
for local time or DST. It also helps enormously when correlating timestamped events
between devices because all device clocks are in sync, so no adjustment is necessary
when looking at events between devices side-by-side. The following command sets
the timezone to UTC:

set system time-zone UTC

As always, don't forget to commit and save your new configuration:

commit
save

This will stop and then start the NTP daemon (ntpd) and resync the device's clock.
Verify that the time is up-to-date by running the "date" command, and comparing to a
known-good clock. That's it!
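From operational mode, "date" shows the current system time, and on most EdgeOS versions the Vyatta-style "show ntp" command summarizes peer status - treat the latter as a version-dependent extra:

date
show ntp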
