In a previous article, we deployed DMVPN over NetBird to improve encrypted performance using WireGuard. You can find that post here:
https://lev-0.com/2024/01/29/dmvpn-opennhrp-over-wireguard-netbird-on-vyos/


While that can be a great solution, its extensibility is limited. Functionality like L3VPN, L2VPN, multitenancy, or microsegmentation is not possible, so it was really only a good fit within a flat network.


Another great option to leverage NetBird to create a secure multisite solution is to use VxLAN. VxLAN will enable all of those extensible features I just mentioned. Let’s explore that idea.

Typical usage of VxLAN

VxLAN has long been the darling of the data center thanks to its ability to scale L2 segmentation far beyond traditional VLANs while mostly eliminating Spanning-Tree. Paired with EVPN, it can also greatly limit the propagation of Broadcast, Unknown Unicast, and Multicast (BUM) traffic. Deploying data centers in today’s cloud-scale environments is almost impossible with a traditional Core/Aggregate/Access design, and VxLAN has even started to provide the same scale and benefits at the campus as well.


In this design, we’re going to use it as a multipoint/multi-site VPN solution, enabling advanced features in a space that traditionally is very flat when it comes to features.

Our Design

We’re going to deploy VxLAN over NetBird on VyOS. VyOS provides all of the advanced features that we’ll want to use in this solution, while NetBird will provide a scalable mechanism to encrypt our traffic using WireGuard.

Topology

We’ll have a basic Hub/Spoke topology that has been pretty consistent through a lot of our articles. It will have a single Hub and 2 Spoke sites.


IP Info:

  • Hub:
    • NetBird IP: 100.90.53.188/16
    • LAN IP: 172.16.0.1/24
    • Loopback: 10.0.0.1/32
  • Spoke1:
    • NetBird IP: 100.90.184.124/16
    • LAN IP: 172.16.0.11/24
    • Loopback: 10.0.0.11/32
  • Spoke2:
    • NetBird IP: 100.90.71.45/16
    • LAN IP: 172.16.0.12/24
    • Loopback: 10.0.0.12/32

VyOS and NetBird

We’re going to deploy NetBird within a container on VyOS (version 1.4.0-rc3, which can be found in this post: https://blog.vyos.io/vyos-1.4.0-rc3-release-candidate). This is the preferred method, since it will easily allow NetBird to remain persistent across VyOS upgrades.


I’m not going to go into great depth on this process since I covered it in a previous article. If you’re unfamiliar with deploying NetBird (specifically inside of a container in VyOS), I highly recommend you check out that article first. It can be found here: https://lev-0.com/2024/01/29/dmvpn-opennhrp-over-wireguard-netbird-on-vyos/


NOTE: There is currently an issue where if an interface isn’t present at boot when VyOS loads its configuration, it will remove all config where that interface is applied. You can track the progress of this bug here: https://vyos.dev/T5991


If you need to reboot this lab, and config is removed, just do this to recover the config. You may need to commit a couple of times since some of the config items have dependencies on other applied config:

vyos@Spoke1:~$ configure 
vyos@Spoke1# rollback-soft 0
vyos@Spoke1# commit


Here are the summarized steps for this process.


  1. Deploy VyOS
    • Configure access to the internet
    • Configure a name-server to resolve hostnames
  2. Create NetBird account
    • Enable “Peer Approval” under settings. This is an extra security step.
    • Create a Setup Key that can be reused 3 times (for each of our 3 nodes)
  3. Download the NetBird Container image
  4. Configure container in VyOS, passing the setup key as an environment variable
  5. Approve the peers in the NetBird web console


Once that process is completed, you should have a ‘wt0’ interface on each peer with an IP in the CG-NAT range.

vyos@Hub:~$ sudo ifconfig wt0
inet 100.90.53.188 netmask 255.255.0.0 destination

EVPN

While it is possible to do point-to-point VxLAN tunnels, it wouldn’t scale very well with a design like this. We’re going to use EVPN in the control plane to help scale our design. It will allow for the Hub site to serve as an iBGP route-reflector so we only need to build configurations to the Hub to enable all site-to-site communication.

Create a BGP Listener on the Hub


We need to create a BGP listener on the Hub to accept incoming BGP sessions. This will allow us to build peerings to all of the spokes without needing to explicitly define each peer. We’re going to make the Hub an iBGP route-reflector for the l2vpn-evpn address-family, ensuring the next-hop remains unchanged. This way, the spokes will talk directly to each other instead of through the Hub.

set protocols bgp listen range 100.90.0.0/16 peer-group 'PG'
set protocols bgp peer-group PG address-family l2vpn-evpn route-reflector-client
set protocols bgp peer-group PG remote-as '65000'
set protocols bgp system-as '65000'

Peer from Spokes to the Hub

The BGP config will be identical on each spoke. Unlike the Hub, we do want nexthop-self in the l2vpn-evpn address-family. This will ensure reachability between the site edges.

set protocols bgp neighbor 100.90.53.188 address-family l2vpn-evpn nexthop-self
set protocols bgp neighbor 100.90.53.188 remote-as '65000'
set protocols bgp system-as '65000'

Verify Peerings:

vyos@Hub:~$ show bgp l2vpn evpn summary 
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt Desc
*100.90.71.45 4 65000 23 40 40 0 0 00:18:42 2 6 N/A
*100.90.184.124 4 65000 23 40 40 0 0 00:18:42 2 6 N/A

Total number of neighbors 2
* - dynamic neighbor
2 dynamic neighbor(s), limit 100


Notice the ‘*’ indicating that they’re dynamic peerings.

Setting MTU

VxLAN adds 50 bytes of overhead, so we need to account for that. NetBird defaults to an MTU of 1280, which is very low for Ethernet networks. We’re going to set NetBird’s MTU to 1400 to account for the overhead. We have a couple of options to change the MTU:


  1. Set the MTU with the ‘ip link’ command
  2. Set the MTU in VyOS


Setting the MTU with ‘ip link’ will not persist, so we’d need to account for that within a post-config script.


Setting the MTU in VyOS presents a problem: we can’t set the MTU for an interface type that VyOS doesn’t understand, like ‘wt0’. We can, however, rename the NetBird interface to something VyOS does understand, so that’s what we’re going to do here.
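The 50-byte figure comes from the encapsulation headers VxLAN adds. As a quick sanity check of the MTU budget used in this lab (a Python sketch, nothing VyOS-specific):

```python
# VXLAN encapsulation overhead on top of the inner Ethernet frame:
# outer Ethernet (14) + outer IPv4 (20) + outer UDP (8) + VXLAN header (8).
VXLAN_OVERHEAD = 14 + 20 + 8 + 8   # 50 bytes

netbird_mtu = 1400                 # MTU we set on the NetBird interface
vxlan_mtu = netbird_mtu - VXLAN_OVERHEAD

print(VXLAN_OVERHEAD)  # 50
print(vxlan_mtu)       # 1350 -- the MTU we give vxlan0 later on
```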

Changing the NetBird interface name

We need to go into the directory we created for our NetBird container. That was ‘/config/containers/nb1’:

vyos@Hub:~$ sudo su
root@Hub:/home/vyos# cd /config/containers/nb1
root@Hub:/config/containers/nb1# ls
config.json


It’s that ‘config.json’ file that we’ll need to modify:

root@Hub:/config/containers/nb1# vi config.json 


We need to change the “WgIface” value. I’m going to call mine eth10.

    "WgIface": "eth10",


The interface won’t appear in the VyOS interface list yet, but it will once we set the MTU. You can see in the output that the MTU is now 1400.

vyos@Hub# set interfaces ethernet eth10 mtu 1400
vyos@Hub# commit

vyos@Hub# run show interfaces | match eth10
eth10 100.90.53.188/16 n/a default 1400 u/u

Configuring VxLAN

We’re going to use the Single VxLAN Device (SVD) feature which you can read about here: https://blog.vyos.io/evpn-vxlan-enhancements-introducing-single-vxlan-device-support


The feature allows you to have multiple services tied to a single bridge domain. Without this, you would need a bridge-domain per VxLAN Network Identifier (VNI).


This will be the same on all nodes. Notice that we source our VxLAN interface off of ‘eth10’, which is our NetBird interface.

set interfaces vxlan vxlan0 mtu '1350'
set interfaces vxlan vxlan0 parameters external
set interfaces vxlan vxlan0 port '4789'
set interfaces vxlan vxlan0 source-interface 'eth10'

Configuring a Bridge

We need to attach VxLAN to a bridge interface. We will use this for all of our EVPN services throughout this entire series.

set interfaces bridge br0 enable-vlan
set interfaces bridge br0 member interface vxlan0

Configuring Services

This solution is intended to support multi-tenancy, so you will need to put LAN services within a VRF. Even if you intend to only have one L3 network, it will still need to be within a VRF. If you do only want one network, you can just call the network ‘Main’, ‘Core’, or whatever makes sense to you.

Configure VRF

We are going to redistribute connected and advertise IPv4 prefixes into EVPN, generating a type-5 EVPN route (IP Prefix).

set vrf name Main protocols bgp address-family ipv4-unicast redistribute connected
set vrf name Main protocols bgp address-family l2vpn-evpn advertise ipv4 unicast
set vrf name Main protocols bgp system-as '65000'
set vrf name Main table '1000'
set vrf name Main vni '1001'


Now let’s configure some interfaces for testing. Notice that we put the config under a VLAN. This is required for the SVD feature mentioned above.

Hub:
set interfaces bridge br0 vif 10 address '172.16.0.1/24'
set interfaces bridge br0 vif 10 vrf 'Main'
set interfaces dummy dum0 address '10.0.0.1/32'
set interfaces dummy dum0 vrf 'Main'

Spoke1:
set interfaces bridge br0 vif 10 address '172.16.0.11/24'
set interfaces bridge br0 vif 10 vrf 'Main'
set interfaces dummy dum0 address '10.0.0.11/32'
set interfaces dummy dum0 vrf 'Main'

Spoke2:
set interfaces bridge br0 vif 10 address '172.16.0.12/24'
set interfaces bridge br0 vif 10 vrf 'Main'
set interfaces dummy dum0 address '10.0.0.12/32'
set interfaces dummy dum0 vrf 'Main'

Advertise into EVPN

We have a few final steps before we can start seeing this in action. We need to advertise our VNI into EVPN and map a VLAN to a VNI on our VxLAN interface. This vlan-to-vni mapping is what allows SVD to only require that single bridge domain.

set protocols bgp address-family l2vpn-evpn advertise-all-vni
set interfaces vxlan vxlan0 vlan-to-vni 10 vni '1001'

Verification

Let’s look at our EVPN table.

vyos@Spoke1:~$ show bgp l2vpn evpn 
Network Next Hop Metric LocPrf Weight Path
Route Distinguisher: 172.16.0.1:2
*>i[5]:[0]:[24]:[172.16.0.0]
100.90.53.188 0 100 0 ?
RT:65000:1001 ET:8 Rmac:a2:9b:78:2b:11:cb
*>i[5]:[0]:[32]:[10.0.0.1]
100.90.53.188 0 100 0 ?
RT:65000:1001 ET:8 Rmac:a2:9b:78:2b:11:cb
Route Distinguisher: 172.16.0.11:2
*> [5]:[0]:[24]:[172.16.0.0]
0.0.0.0 0 32768 ?
ET:8 RT:65000:1001 Rmac:02:ec:2d:6b:2c:78
*> [5]:[0]:[32]:[10.0.0.11]
0.0.0.0 0 32768 ?
ET:8 RT:65000:1001 Rmac:02:ec:2d:6b:2c:78
Route Distinguisher: 172.16.0.12:2
*>i[5]:[0]:[24]:[172.16.0.0]
100.90.71.45 0 100 0 ?
RT:65000:1001 ET:8 Rmac:7e:5b:5a:51:2a:36
*>i[5]:[0]:[32]:[10.0.0.12]
100.90.71.45 0 100 0 ?
RT:65000:1001 ET:8 Rmac:7e:5b:5a:51:2a:36


We can see we have 2 type-5 routes in EVPN per site. These are the 2 subnets we advertised. One thing you might find curious is that these routes carry RDs, even though we never configured any. They were auto-populated, and the same thing happened for the route-targets.

vyos@Spoke1:~$ show bgp l2vpn evpn 10.0.0.12/32
Paths: (1 available, best #1)
Local
100.90.71.45 from 100.90.53.188 (100.90.71.45)
Extended Community: RT:65000:1001


This auto-derivation can really help when planning your network, since it removes the administrative overhead of assigning RDs and RTs yourself. You can of course set them manually if needed/wanted.
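Based on the table output above, the auto-derived route-target appears to follow an AS:VNI pattern (an observation from this lab’s output, not a guarantee of FRR’s behavior in all cases):

```python
# Assumption from the output above: the auto-derived EVPN RT is <ASN>:<VNI>.
asn = 65000   # our system-as
vni = 1001    # the L3VNI assigned to VRF Main
auto_rt = f"{asn}:{vni}"
print(auto_rt)  # 65000:1001 -- matches RT:65000:1001 in the EVPN table
```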


Let’s test some traffic between Spoke1 and Spoke2.

vyos@Spoke1:~$ ping 10.0.0.12 vrf Main source-address 10.0.0.11
64 bytes from 10.0.0.12: icmp_seq=1 ttl=64 time=1.97 ms

vyos@Spoke1:~$ trace 10.0.0.12 vrf Main source-address 10.0.0.11
1 10.0.0.12 (10.0.0.12) 4.230 ms 2.247 ms 2.307 ms


Traffic is working fine, and we can see that at least on the overlay, traffic is going directly between spokes. Let’s make sure it is also going directly between the spokes on the underlay.

vyos@Spoke1:~$ show interfaces ethernet eth10 brief 
Interface IP Address S/L Description
--------- ---------- --- -----------
eth10 100.90.184.124/16 u/u

vyos@Spoke2:~$ show interfaces ethernet eth10 brief
Interface IP Address S/L Description
--------- ---------- --- -----------
eth10 100.90.71.45/16 u/u

vyos@vyos# sudo tcpdump -i eth10
15:33:08.196056 IP 100.90.184.124.53069 > 100.90.71.45.4789: VXLAN, flags [I] (0x08), vni 1001
IP 10.0.0.11 > 10.0.0.12: ICMP echo request, id 52182, seq 1, length 64

15:33:08.196314 IP 100.90.71.45.50185 > 100.90.184.124.4789: VXLAN, flags [I] (0x08), vni 1001
IP 10.0.0.12 > 10.0.0.11: ICMP echo reply, id 52182, seq 1, length 64


You can see the pings are going directly between Spoke1 (100.90.184.124) and Spoke2 (100.90.71.45) on the underlay as well.


You can also see this in the packet capture above. Notice the VNI listed under the VxLAN header?


Conclusion

That’s all for this one. Scaling this design to a large number of spokes would be very easy, since the spoke config is largely repeatable with only IPs needing to change. You can even script the deployment of the routers pretty easily by tying an IP scheme to a Site ID.
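As a sketch of that idea, here’s a hypothetical helper (the function name and the .1N host-address scheme are assumptions based on this lab’s addressing) that derives a spoke’s repeatable config from a Site ID:

```python
# Hypothetical sketch: generate the per-spoke service config from a Site ID,
# following this lab's scheme (Spoke1 -> .11, Spoke2 -> .12, and so on).
def spoke_commands(site_id: int) -> list:
    host = 10 + site_id
    return [
        f"set interfaces bridge br0 vif 10 address '172.16.0.{host}/24'",
        "set interfaces bridge br0 vif 10 vrf 'Main'",
        f"set interfaces dummy dum0 address '10.0.0.{host}/32'",
        "set interfaces dummy dum0 vrf 'Main'",
    ]

for cmd in spoke_commands(1):
    print(cmd)
```

Feeding lines like these into a configure/commit session (or a tool like Ansible) would let you stamp out a new spoke in seconds.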


In part 2 of this series, we’re going to expand on our current deployment and add L2VPN functionality.
