In previous articles, we created our own Multipoint VPN solution using ZeroTier and BGP. While this is a great solution, some people would rather stick to what they know, and what they know is traditional DMVPN.


DMVPN, as a solution, is generally made up of three parts: Multipoint GRE (mGRE), Next Hop Resolution Protocol (NHRP), and IPsec. It is a tried-and-true solution, deployed in countless customer networks around the world.


One limitation of DMVPN is that encryption is generally handled by IPsec, which performs great on hardware dedicated to encryption (like enterprise routers), but not so great on a platform like Linux, where encryption is often handled in software.


In this post, we aim to solve that by using WireGuard to handle our encryption.

Why WireGuard?

While IPsec will generally only use a single core on Linux, WireGuard is well optimized and can use all cores in a system, scaling throughput horizontally (i.e., more cores generally means more throughput).


Look at these iPerf results as a comparison. These were performed on a cheap ($100 USD) mini-PC with a quad-core Celeron:


IPsec:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.51 GBytes  1.30 Gbits/sec    53             sender
[  5]   0.00-10.00  sec  1.51 GBytes  1.30 Gbits/sec                   receiver

WireGuard:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  2.26 GBytes  1.94 Gbits/sec  3366             sender
[  5]   0.00-10.00  sec  2.26 GBytes  1.94 Gbits/sec                   receiver


You can see the performance is clearly better with WireGuard. These tests were also performed on low-powered hardware; judging by the 3366 retransmissions on the WireGuard run, the sender was outpacing the hardware, and throughput could have been higher still on a more capable system.
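For reference, results like these come from a plain iperf3 TCP test between the two endpoints. The exact invocation wasn't recorded for this post, so treat the following as a sketch (the address is hypothetical):

# On one endpoint, start the iperf3 server:
iperf3 -s

# On the other, run a 10-second TCP test across the encrypted link:
iperf3 -c 192.0.2.1 -t 10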


Here’s the CPU utilization during those tests:


IPsec:

WireGuard:

Clearly, WireGuard scales much better across multiple cores.

Limitations of WireGuard

DMVPN on Linux may have relatively poor encrypted performance, but it scales very well. WireGuard has the opposite problem: great performance, but poor scaling. This is because WireGuard is fundamentally a point-to-point technology, which traditionally requires a massive configuration effort just to ensure encrypted paths exist between all of your endpoints, as the sketch below illustrates.
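Here's a minimal sketch of standing up one node of an N-node WireGuard mesh by hand. Every interface name, key, and endpoint below is hypothetical:

# Create the WireGuard interface and load this node's private key.
ip link add wg0 type wireguard
wg set wg0 listen-port 51820 private-key /etc/wireguard/private.key

# One peer stanza per remote node. A full mesh of N nodes means
# keying and maintaining N*(N-1)/2 tunnels by hand.
wg set wg0 peer <peer2-public-key> endpoint peer2.example.net:51820 allowed-ips 10.255.0.2/32
wg set wg0 peer <peer3-public-key> endpoint peer3.example.net:51820 allowed-ips 10.255.0.3/32
# ...repeated for every additional peer, on every node in the mesh.

ip address add 10.255.0.1/24 dev wg0
ip link set wg0 up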


One way to solve this issue is to use an overlay technology like Tailscale, Netmaker, or NetBird. In this article, we're going to use NetBird.

Why NetBird?

I tested multiple options for this solution, and I ultimately ended up on NetBird.


  • Tailscale:
    • Tailscale places its core functions within a dedicated routing table rather than the main table, making it difficult to get working for basic overlay routing.
  • Nebula:
    • Nebula doesn't use WireGuard, but it is a similar solution. It wasn't a good fit here, as it is hostile towards anything other than peer-to-peer communication.
  • Netmaker:
    • I didn't test Netmaker. While it may have worked, its ease of use isn't at the level of Tailscale or NetBird.
  • NetBird:
    • In testing NetBird, I was able to build GRE over it quite easily, making it a good candidate for DMVPN.


NOTE: You may wonder why we use DMVPN at all rather than doing what we did with ZeroTier (i.e., running BGP directly over NetBird). Routing non-NetBird traffic across NetBird is limited to pushing routes from the NetBird service. That is fine for a peer-to-peer solution, but not great for a site-to-site solution. By encapsulating our inside traffic in GRE, we make the site-to-site traffic appear as peer-to-peer traffic to NetBird.

Topology


We’re going to build a basic lab with a single hub and two spokes.

NetBird

Before we get to configuring anything in VyOS, we’re going to need a NetBird account.


NOTE: You can self-host NetBird if you want, but that is beyond the scope of this article.


First we need to go to the NetBird website at:
https://netbird.io/


Once there, click the “Try for Free” button in the top right:



There, we will be presented with the signup/login page.



Once your account is created, and you’re logged in, you’ll be presented with the NetBird console.



We need to do a few things before we move to VyOS. Go to the settings page and turn on “Peer approval” under Authentication. This is just an added step to ensure you control who can access your devices.



Next, we need to create a setup key. Go to the “Setup Keys” section, and select “Add Setup Key”.



We’re going to create a setup key called “VyOS DMVPN”, make it reusable, and set the usage limit to 3 (one Hub and two spokes).



It’ll present you with your newly created key (redacted here for security). You’ll need it in a few steps, so copy it somewhere safe for now.



Now on to VyOS.

Configuring VyOS

For this lab, we’re going to use 1.4.0-rc1, which you can grab from this blog post:

https://blog.vyos.io/vyos-1.4.0-rc1-release-candidate

Initial Setup

Once booted, we’re going to install to disk using default settings. Just follow along with the guided prompts.


vyos@vyos:~$ install image 
Would you like to continue? [y/N] y
What would you like to name this image? (Default: 1.4.0-rc1)
Please enter a password for the "vyos" user (Default: vyos)
What console should be used by default? (K: KVM, S: Serial, U: USB-Serial)? (Default: K)
Which one should be used for installation? (Default: /dev/sda)
Installation will delete all data on the drive. Continue? [y/N] y
Would you like to use all the free space on the drive? [Y/n] y
The image installed successfully; please reboot now.


Once it is done installing, reboot to finish the installation.


vyos@vyos:~$ reboot now 
The system will reboot now!


Once we’re fully booted, the first thing we’ll need to do is get an IP address, access to the internet, and DNS configured. I’m using DHCP here so I’ll get an IP and a default route from it.


configure 
set interfaces ethernet eth0 address dhcp
set system name-server 4.2.2.2
commit


You should be able to ping a public address like google.com:


vyos@vyos# run ping www.google.com
64 bytes from bh-in-f105.1e100.net (172.253.122.105): icmp_seq=1 ttl=59 time=7.21 ms

NetBird Container

We’re going to install NetBird in a container. This makes for the easiest management of additional services added to VyOS. First we need to pull down the image.


vyos@vyos# run add container image netbirdio/netbird:latest
[edit]
vyos@vyos#


In 1.4, the download is entirely silent, so it may not seem like it’s doing anything, but it should be pulling the image.
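If you want to confirm the pull actually finished, you can list downloaded images from op mode (VyOS wraps podman under the hood):

vyos@vyos# run show container image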


Next, we want to make a directory so that our configuration persists if we need to restart the container. I make my folder in /config/containers/, but you can put it anywhere as long as it is within /config/.


sudo mkdir -p /config/containers/nb1


Now it’s time to apply our container config. Notice the NB_SETUP_KEY environment variable. The string after “value” should be the setup key you created in the NetBird console.


NOTE: NetBird will name your devices in the console based on their hostname. It is useful to ensure that your device has a unique hostname configured before registering the node. This prevents you from needing to rename it afterwards.
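For example, setting a unique hostname on the first spoke before registering it looks like this (the name itself is up to you):

set system host-name Spoke1
commit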


set container name nb1 allow-host-networks
set container name nb1 cap-add 'net-admin'
set container name nb1 cap-add 'net-raw'
set container name nb1 image 'netbirdio/netbird:latest'
set container name nb1 volume NB_PATH destination '/etc/netbird'
set container name nb1 volume NB_PATH source '/config/containers/nb1'
set container name nb1 environment NB_SETUP_KEY value '01234567-89AB-CDEF-0123-456789ABCDEF'
commit
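Before heading back to the NetBird console, it's worth checking that the container actually started; it should be listed with a running state:

vyos@vyos# run show container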


After committing the container config in VyOS, you should see the new device in NetBird’s console under Peers.



You can see our node needs approval. If we hadn’t turned on peer approval earlier, the node would already be online. Let’s approve it by clicking “needs approval” and selecting Approve.



Your node should now show as online.



All that’s left now is to repeat those steps for our other Spoke and our Hub. Once done, you should have all of your devices in your console.



We can see that each node gets auto-assigned an IP. Let’s see if we can ping both of the spokes from the Hub.


vyos@Hub:~$ ping 100.90.27.227
64 bytes from 100.90.27.227: icmp_seq=1 ttl=64 time=5.25 ms

vyos@Hub:~$ ping 100.90.211.106
64 bytes from 100.90.211.106: icmp_seq=1 ttl=64 time=2.82 ms


Basic connectivity is there, so let’s start configuring DMVPN.


OPINION: I’m not a fan of how many of these overlay network services use IPs that fall within BOGON ranges, but using CG-NAT and APIPA space has become commonplace for VPN services.

Configuring DMVPN

We’re going to configure naked DMVPN (without IPsec encryption), since NetBird is already handling encryption for us and doubling up would be unnecessary.

Hub Config

set interfaces tunnel tun100 address '172.20.21.1/24'
set interfaces tunnel tun100 encapsulation 'gre'
set interfaces tunnel tun100 source-address '100.90.161.169'
set interfaces tunnel tun100 enable-multicast
set interfaces tunnel tun100 parameters ip key '1'

set protocols nhrp tunnel tun100 cisco-authentication 'authkey'
set protocols nhrp tunnel tun100 multicast 'dynamic'
set protocols nhrp tunnel tun100 redirect


This is a pretty standard Hub config. The main thing to note is that we enable NHRP redirects, the decentralized mechanism for creating spoke-to-spoke tunnels (used in Cisco DMVPN Phase 3 deployments).

Spoke Config

Spoke1:
set interfaces tunnel tun100 address '172.20.21.11/24'
set interfaces tunnel tun100 source-address 100.90.27.227
set interfaces tunnel tun100 encapsulation 'gre'
set interfaces tunnel tun100 enable-multicast
set interfaces tunnel tun100 parameters ip key '1'

set protocols nhrp tunnel tun100 cisco-authentication 'authkey'
set protocols nhrp tunnel tun100 map 172.20.21.1/24 nbma-address '100.90.161.169'
set protocols nhrp tunnel tun100 map 172.20.21.1/24 register
set protocols nhrp tunnel tun100 multicast 'nhs'
set protocols nhrp tunnel tun100 shortcut

Spoke2:
set interfaces tunnel tun100 address '172.20.21.12/24'
set interfaces tunnel tun100 source-address 100.90.211.106
set interfaces tunnel tun100 encapsulation 'gre'
set interfaces tunnel tun100 enable-multicast
set interfaces tunnel tun100 parameters ip key '1'

set protocols nhrp tunnel tun100 cisco-authentication 'authkey'
set protocols nhrp tunnel tun100 map 172.20.21.1/24 nbma-address '100.90.161.169'
set protocols nhrp tunnel tun100 map 172.20.21.1/24 register
set protocols nhrp tunnel tun100 multicast 'nhs'
set protocols nhrp tunnel tun100 shortcut


Again, a pretty standard configuration. The main thing to note here is that we enable NHRP shortcuts, the second half of establishing dynamic spoke-to-spoke tunnels.
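At this point, both spokes should be registering with the Hub. A quick sanity check from the Hub should show entries for each spoke:

vyos@Hub# run show nhrp tunnel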


One of the strongest things about DMVPN has always been its configuration management. You can see that the only differences between our Spoke configs are the tunnel address and the source address. You really only have to maintain two configs: a Hub config and a Spoke config.


Let’s put some BGP config on here to get some routing going.

Hub BGP Config

set protocols bgp address-family ipv4-unicast
set protocols bgp listen range 172.20.21.0/24 peer-group 'PG'
set protocols bgp peer-group PG address-family ipv4-unicast route-reflector-client
set protocols bgp peer-group PG remote-as '65000'
set protocols bgp system-as '65000'


We configure a BGP listener for our tunnel subnet (172.20.21.0/24) and make the Hub an iBGP route reflector. This is more scalable than eBGP, since we won’t need to administratively manage ASNs.


NOTE: In Cisco’s DMVPN Phase 3, next-hop-self is required for the redirect/shortcut process, whereas Phase 2 uses next-hop-unchanged. With OpenNHRP, it appears to work only with next-hop-unchanged from the Hub.

Spoke BGP Config

Spoke1:
set protocols bgp address-family ipv4-unicast
set protocols bgp neighbor 172.20.21.1 address-family ipv4-unicast nexthop-self
set protocols bgp neighbor 172.20.21.1 remote-as '65000'
set protocols bgp system-as '65000'
set protocols bgp neighbor 172.20.21.1 update-source '172.20.21.11'


Spoke2:
set protocols bgp address-family ipv4-unicast
set protocols bgp neighbor 172.20.21.1 address-family ipv4-unicast nexthop-self
set protocols bgp neighbor 172.20.21.1 remote-as '65000'
set protocols bgp system-as '65000'
set protocols bgp neighbor 172.20.21.1 update-source '172.20.21.12'


Again, the config is almost identical, the only difference being the update-source.


NOTE: The update-source shouldn’t be needed, as the BGP session should source from the exit interface’s IP by default. That didn’t happen in my lab, so I was forced to configure an update-source.
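Before moving on to routes, it's worth confirming the iBGP sessions actually established. From the Hub, the dynamic neighbors created by the listen range should show as Established:

vyos@Hub# run show ip bgp summary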

Adding Routes

Let’s add some routes to Spoke1 and Spoke2 and advertise them to test the routing.


Spoke1:
set interfaces dummy dum0 address '10.0.0.11/32'
set protocols bgp address-family ipv4-unicast network 10.0.0.11/32

Spoke2:
set interfaces dummy dum0 address '10.0.0.12/32'
set protocols bgp address-family ipv4-unicast network 10.0.0.12/32


We should see these routes on the Spokes now:


vyos@Spoke2# run show ip bgp
   Network          Next Hop        Metric  LocPrf  Weight  Path
*>i10.0.0.11/32     172.20.21.11         0     100       0  i
*> 10.0.0.12/32     0.0.0.0              0           32768  i


We have them, and we can see that the next hop is that of the other spoke (next-hop-unchanged). Before we test whether dynamic spoke-to-spoke tunnels are working, let’s make sure we don’t already have one built.


vyos@Spoke2# run show nhrp tunnel 
Status: ok
Interface Type Protocol-Address Alias-Address Flags NBMA-Address
----------- ------ ------------------ --------------- ------- --------------
tun100 local 172.20.21.255/32 172.20.21.12 up
tun100 local 172.20.21.12/32 up
tun100 static 172.20.21.1/24 up 100.90.161.169


We can see the only tunnel we have up is to ‘.1’, which is the Hub. Let’s try to ping that loopback we just created:


vyos@Spoke2# run ping 10.0.0.11 source-address 10.0.0.12
64 bytes from 10.0.0.11: icmp_seq=1 ttl=63 time=9.33 ms


The ping works, but it would work regardless of path. What we really care about is whether we are going directly between spokes. Let’s try a traceroute.


vyos@Spoke2# run traceroute 10.0.0.11 source-address 10.0.0.12
1 10.0.0.11 (10.0.0.11) 2.888 ms 2.834 ms 4.066 ms


We are only one hop away, meaning we are going direct; we’d see two hops if we were traversing the Hub. Let’s confirm we have a dynamic tunnel with the “show nhrp tunnel” command.


vyos@Spoke2# run show nhrp tunnel 
Status: ok
Interface Type Protocol-Address Alias-Address Flags NBMA-Address Expires-In
----------- ------ ------------------ --------------- ------- -------------- ------------
tun100 local 172.20.21.255/32 172.20.21.12 up
tun100 local 172.20.21.12/32 up
tun100 cached 172.20.21.11/32 used up 100.90.27.227 4:53
tun100 static 172.20.21.1/24 up 100.90.161.169


And predictably, we do! As a final check, let’s look at the underlay IPs while pinging those addresses.


Ping from Spoke2:
vyos@Spoke2# run ping 10.0.0.11 source-address 10.0.0.12 count 1
64 bytes from 10.0.0.11: icmp_seq=1 ttl=64 time=3.43 ms

Packets on Spoke1:
22:50:30.265105 IP 100.90.211.106 > 100.90.27.227: GREv0, key=0x1, length 92: IP 10.0.0.12 > 10.0.0.11: ICMP echo request, id 61283, seq 1, length 64

22:50:30.265267 IP 100.90.27.227 > 100.90.211.106: GREv0, key=0x1, length 92: IP 10.0.0.11 > 10.0.0.12: ICMP echo reply, id 61283, seq 1, length 64


We can see from our earlier screenshot that 100.90.27.227 and 100.90.211.106 are the NetBird IPs of our spokes.
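For reference, a capture like the one above can be taken on the spoke's NetBird interface. NetBird commonly names its WireGuard interface wt0, though yours may differ:

# GRE is IP protocol 47
sudo tcpdump -ni wt0 ip proto 47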


Conclusion

And that’s it for the lab. Before we button this up, let’s talk about whether you should use this solution or stick with traditional IPsec.


When we looked at the speeds we were getting between devices at the beginning of this post, we saw that WireGuard can be substantially faster than IPsec. The drawback is that we are adding additional overhead to each packet, since GRE rides on top of WireGuard’s own encapsulation.
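To put rough numbers on that overhead, using commonly cited header sizes rather than measurements from this lab:

WireGuard (IPv4): 20 (IP) + 8 (UDP) + 32 (WG header + auth tag) = 60 bytes
GRE with key:     20 (IP) + 4 (GRE) + 4 (key)                   = 28 bytes

Payload efficiency at a 1500-byte wire MTU: (1500 - 88) / 1500 ≈ 94%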


If we’re able to get 1.3 Gbps using IPsec and our circuit has only 1 Gbps, then we would get more payload through (data after overhead) by using IPsec instead of this WireGuard-based solution. However, if our circuit were 2 Gbps, we could max it out using WireGuard and see some immediate benefits from a solution like this.
