
In part one of this series, we covered creating a multipoint VPN using ZeroTier and iBGP on MikroTik. You can find that here: https://lev-0.com/2024/01/31/multipoint-vpn-with-zerotier-and-mikrotik-part-1-initial-design/
The design is incredibly scalable: site-to-site connections are built dynamically, only when traffic needs to flow between sites, and torn down when there’s no longer any interesting traffic. Additionally, the spoke configuration is highly repeatable, with only a few small changes needed per device when deploying it.
As scalable as the solution is, there will always be inherent limitations in the specific hardware. The main limitations you would encounter in a solution like this are:
- Size of the routing table:
  - Realistically, a solution like this is meant for site-to-site overlay traffic, so the routing table should never really get too large for even lower-spec hardware.
- Number of connections on a device:
  - This is the more important limitation that you may run into. It’s important to spec out your hardware to handle as many ZeroTier connections as it could possibly encounter, but every device has an upper limit.
  - The spokes will only talk to the sites they need to talk to, but the Hubs need to talk to everyone. 500 sites, you’re probably fine; 2,000, not so much. (You can gauge a device’s current load with the quick check below.)
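To get a rough sense of where a device stands against that limit, you can count its active ZeroTier peers. This is a minimal check using the generic count-only print flag; every entry, root servers included, represents an active connection:

# count active ZeroTier peers on this device
/zerotier/peer/print count-only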
Knowing that the Hubs have an upper limit on how many spokes can connect to them may make it seem like the solution is capped at that number. But we can scale the solution out horizontally with Regions. If one Hub can only handle 500 sites, then 10 regions could scale the design to 5,000 sites.
Let’s start building out the design.
Topology

We’re basically just going to double our lab from part 1, with each region having a single hub and 2 spokes.
IP Scheme:
- Region1:
  - Hub:
    - ZeroTier IP: 10.13.0.1/16
  - Spoke1:
    - ZeroTier IP: 10.13.1.101/16
    - LAN: 10.1.1.0/24
  - Spoke2:
    - ZeroTier IP: 10.13.1.102/16
    - LAN: 10.1.2.0/24
- Region2:
  - Hub:
    - ZeroTier IP: 10.13.0.4/16
    - The IP jumps to ‘.4’ to leave room for multiple hubs at each site for redundancy.
  - Spoke1:
    - ZeroTier IP: 10.13.1.201/16
    - LAN: 10.2.1.0/24
  - Spoke2:
    - ZeroTier IP: 10.13.1.202/16
    - LAN: 10.2.2.0/24
A note on Hubs in this design
Unlike traditional solutions like DMVPN, the Hub doesn’t actually forward any traffic unless it’s also serving traffic behind it (like at a data center); the Hubs act more like route servers in this solution. For that reason, it may make more sense to run the Hubs as x86 images. As of the time of this writing, the x86 MikroTik images don’t have a ZeroTier package available, but you can use anything that can act as an iBGP route reflector and run ZeroTier.
The good news is that, since the Hubs don’t forward data-plane traffic, you don’t need to place them anywhere special for low latency; they can be anywhere!
We’re going to use MikroTik as the hubs in this lab, but it’s a consideration you may want to weigh if deploying this in production.
Let’s get to the configuration.
Installing ZeroTier
I covered installing ZeroTier in more depth in part 1 of this series, so head there if you’re still a little unclear on installing ZeroTier on MikroTik. I’m just going to list the summarized steps here.
- Check the version and architecture of your MikroTik device.
- Download the Extra Packages for the architecture and version of your device.
- Extract the Extra Packages to a folder on your PC.
- Upload the file with a name like “zerotier-7.13.3-arm.npk” to your MikroTik.
- Reboot the device.
- Verify ZeroTier is installed.
  - You should see GUI and CLI options present.
- Configure ZeroTier on the device (a minimal command sketch follows this list).
- Authorize the device and assign ZeroTier IPs in ZeroTier Central.
- Verify the ZeroTier IPs are on the devices and that the devices can communicate with each other.
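For reference, the on-device portion boils down to a couple of commands. This is a minimal sketch: the network ID below is a placeholder for your own, and the interface name zpath matches the interface name you’ll see in the outputs later in this post.

# enable the default ZeroTier instance
/zerotier/enable zt1
# join your ZeroTier network (replace the network ID with your own)
/zerotier/interface/add network=1234567890abcdef instance=zt1 name=zpath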
Configuring BGP on the Hubs
Like in part 1, I’m going to configure everything using the CLI, but feel free to use the WebUI if you’re more comfortable with that. The CLI makes it easy to copy and paste the repeatable parts of the configuration.
Just like in part 1 of this series, the Hubs will be configured as iBGP route-reflectors and listen for incoming BGP sessions within a range of IPs.
Region1 Hub:
/routing/bgp/connection/add address-families=ip as=65000 cluster-id=10.13.0.1 listen=yes local.address=10.13.0.1 .role=ibgp-rr name=reg1-hub remote.address=10.13.1.0/24 .as=65000
Region2 Hub:
/routing/bgp/connection/add address-families=ip as=65000 cluster-id=10.13.0.4 listen=yes local.address=10.13.0.4 .role=ibgp-rr name=reg2-hub remote.address=10.13.1.0/24 .as=65000
One addition from part 1 is the cluster-id. Route reflection relaxes iBGP’s default loop-prevention rule, so the cluster ID provides a replacement mechanism: it is appended to prefixes as they’re reflected, and if a device sees its own cluster ID arrive in an advertisement, it will not accept the prefix.
Notice also that the remote.address range doesn’t cover the full /16; more on that later.
Configuring BGP on the Spokes
We’re going to configure all of the spokes as route-reflector clients, and they will only peer with the Hub in their region. Note that the spokes don’t need listen=yes; they initiate the session toward their Hub’s listener.
Region1 Spoke1:
/routing/bgp/connection/add address-families=ip as=65000 local.address=10.13.1.101 .role=ibgp-rr-client name=peer-to-reg1-hub remote.address=10.13.0.1 .as=65000
Region1 Spoke2:
/routing/bgp/connection/add address-families=ip as=65000 local.address=10.13.1.102 .role=ibgp-rr-client name=peer-to-reg1-hub remote.address=10.13.0.1 .as=65000
Region2 Spoke1:
/routing/bgp/connection/add address-families=ip as=65000 local.address=10.13.1.201 .role=ibgp-rr-client name=peer-to-reg2-hub remote.address=10.13.0.4 .as=65000
Region2 Spoke2:
/routing/bgp/connection/add address-families=ip as=65000 local.address=10.13.1.202 .role=ibgp-rr-client name=peer-to-reg2-hub remote.address=10.13.0.4 .as=65000
Verify BGP peerings on the Hubs
We should have 2 peers on each of the hubs. Let’s verify that:
Region1 Hub:
[admin@reg1-hub] /routing/bgp/session> print
remote.address=10.13.1.101 .as=65000 .id=10.13.1.101
remote.address=10.13.1.102 .as=65000 .id=10.13.1.102
Region2 Hub:
[admin@reg2-hub] /routing/bgp/session> print
remote.address=10.13.1.201 .as=65000 .id=10.13.1.201
remote.address=10.13.1.202 .as=65000 .id=10.13.1.202
BGP is up between our hubs and spokes. Let’s create some LAN prefixes on the Spokes and advertise them into BGP.
Configuring LAN IPs on the Spokes
We’re not actually going to deploy a LAN. We’ll simulate it by creating loopbacks, but this would be the same if actually creating subnets on LAN interfaces.
NOTE: For optimal scalability, make sure you summarize the routes at each spoke site.
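As a hypothetical illustration of that summarization: if Region1 Spoke1 had several smaller subnets inside 10.1.1.0/24, you could anchor the summary with a blackhole route and advertise only the /24. The prefixes here are illustrative, not part of this lab:

# anchor the summary so there's a matching route to advertise
/ip/route/add dst-address=10.1.1.0/24 blackhole comment="BGP summary anchor"
# advertise only the summary; more-specific routes still win locally
/ip/firewall/address-list/add address=10.1.1.0/24 list=BGP_REDISTRIBUTE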
Region1 Spoke1:
/interface/bridge/add name=loopback0
/ip/address/add address=10.1.1.1/24 interface=loopback0 network=10.1.1.0
/ip/firewall/address-list/add address=10.1.1.0/24 list=BGP_REDISTRIBUTE
/routing/bgp/connection/set output.network=BGP_REDISTRIBUTE numbers=0
Region1 Spoke2:
/interface/bridge/add name=loopback0
/ip/address/add address=10.1.2.1/24 interface=loopback0 network=10.1.2.0
/ip/firewall/address-list/add address=10.1.2.0/24 list=BGP_REDISTRIBUTE
/routing/bgp/connection/set output.network=BGP_REDISTRIBUTE numbers=0
Region2 Spoke1:
/interface/bridge/add name=loopback0
/ip/address/add address=10.2.1.1/24 interface=loopback0 network=10.2.1.0
/ip/firewall/address-list/add address=10.2.1.0/24 list=BGP_REDISTRIBUTE
/routing/bgp/connection/set output.network=BGP_REDISTRIBUTE numbers=0
Region2 Spoke2:
/interface/bridge/add name=loopback0
/ip/address/add address=10.2.2.1/24 interface=loopback0 network=10.2.2.0
/ip/firewall/address-list/add address=10.2.2.0/24 list=BGP_REDISTRIBUTE
/routing/bgp/connection/set output.network=BGP_REDISTRIBUTE numbers=0
Let’s see if we have those routes on our spokes.
[admin@reg1-spoke1] > /ip/route/print
DST-ADDRESS GATEWAY DISTANCE
DAc 10.1.1.0/24 loopback0 0
DAb 10.1.2.0/24 10.13.1.102 200
[admin@reg1-spoke2] > /ip/route/print
DST-ADDRESS GATEWAY DISTANCE
DAb 10.1.1.0/24 10.13.1.101 200
DAc 10.1.2.0/24 loopback0 0
[admin@reg2-spoke1] > /ip/route/print
DST-ADDRESS GATEWAY DISTANCE
DAc 10.2.1.0/24 loopback0 0
DAb 10.2.2.0/24 10.13.1.202 200
[admin@reg2-spoke2] > /ip/route/print
DST-ADDRESS GATEWAY DISTANCE
DAb 10.2.1.0/24 10.13.1.201 200
DAc 10.2.2.0/24 loopback0 0
You can see that the spokes have their regional routes. Let’s make sure we can ping the routes within each region.
Region1:
[admin@reg1-spoke1] > ping 10.1.2.1 src-address=10.1.1.1
SEQ HOST SIZE TTL TIME STATUS
0 10.1.2.1 56 64 1ms586us
Region2:
[admin@reg2-spoke1] > ping 10.2.2.1 src-address=10.2.1.1
SEQ HOST SIZE TTL TIME STATUS
0 10.2.2.1 56 64 2ms193us
Yep, everything looks great! But we want to be able to communicate between regions.
NOTE: Simplicity is one of the greatest things about this design. ZeroTier just creates one large LAN, and iBGP advertises routes with the next hop unchanged, so no matter how many regions we have, we can always go spoke-to-spoke, both within a region and between regions. If you’ve ever done inter-region DMVPN spoke-to-spoke, you know how much more is involved.
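A hedged aside: RouterOS v7 leaves the next hop unchanged for iBGP by default, but if your spoke routes ever show a Hub as the next hop, the connections’ nexthop-choice setting is the first thing to check. This is a general troubleshooting pointer, not something this lab requires changing:

# confirm nothing is forcing next-hop-self; "default" is what this design wants
/routing/bgp/connection/print proplist=name,nexthop-choice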
All we need to do to enable inter-region spoke-to-spoke traffic is peer Region1’s Hub with Region2’s Hub.
Region1 Hub:
/routing/bgp/connection/add address-families=ip as=65000 local.address=10.13.0.1 .role=ibgp name=reg1-to-reg2 remote.address=10.13.0.4 .as=65000
Region2 Hub:
/routing/bgp/connection/add address-families=ip as=65000 local.address=10.13.0.4 .role=ibgp name=reg2-to-reg1 remote.address=10.13.0.1 .as=65000
If you recall from earlier, I mentioned that the listener wasn’t covering the full /16, and here is why: the 10.13.1.0/24 range covers all of the spokes while deliberately excluding the hub addresses in 10.13.0.0/24. If the listeners had covered the full 10.13.0.0/16, these hub-to-hub peerings would have come up as dynamic peers of each other, which would also give them the wrong roles. We want the hubs to peer with a role of plain “ibgp”, because the peer on the other side (the other Hub) is itself already a route reflector, not a client.
Now let’s look at the routing table of one of the spokes.
[admin@reg1-spoke1] > /ip/route/print
DST-ADDRESS GATEWAY DISTANCE
DAc 10.1.1.0/24 loopback0 0
DAb 10.1.2.0/24 10.13.1.102 200
DAb 10.2.1.0/24 10.13.1.201 200
DAb 10.2.2.0/24 10.13.1.202 200
You can see we now have all of the routes from the other spokes, and most importantly, they’re still advertised to us with next-hop-unchanged.
Before we get to the testing, let’s take a baseline to confirm we haven’t created a full mesh of spokes and that each connection will be built dynamically.
First, let’s check out ARP.
[admin@reg1-spoke1] > /ip/arp/print where interface=zpath
# ADDRESS MAC-ADDRESS INTERFACE
14 DC 10.13.0.1 4E:86:F1:AB:07:F2 zpath
ARP looks good: Spoke1 only has an entry for 10.13.0.1, which is Region1’s Hub.
Let’s check out our ZeroTier peers.
[admin@reg1-spoke1] > /zerotier/peer/print
# INSTANCE ZT-ADDRESS LATENCY ROLE PATH
0 zt1 62f865ae71 241ms PLANET active,preferred,50.7.252.138/9993,recvd:35s295ms,sent:20s533ms
1 zt1 778cde7190 32ms PLANET active,preferred,103.195.103.66/9993,recvd:24s867ms,sent:5s532ms
2 zt1 cafe04eba9 108ms PLANET active,preferred,84.17.53.155/9993,recvd:35s427ms,sent:20s533ms
3 zt1 cafe9efeb9 64ms PLANET active,preferred,104.194.8.134/9993,recvd:35s469ms,sent:20s533ms
4 zt1 0123456789 29ms LEAF active,preferred,35.209.81.208/21003,recvd:24s789ms,sent:20s533ms
5 zt1 Region1Hub 6ms LEAF active,preferred,192.168.2.138/50815,recvd:17s900ms,sent:14s927ms
We can see that the only peers Spoke1 knows about are ZeroTier’s public root nodes and Region1’s Hub.
And finally, let’s get to testing. Let’s try to ping all of the other spokes from Region1’s Spoke1.
[admin@reg1-spoke1] > ping 10.1.2.1 src-address=10.1.1.1
SEQ HOST SIZE TTL TIME STATUS
0 10.1.2.1 56 64 72ms31us
1 10.1.2.1 56 64 2ms892us
[admin@reg1-spoke1] > ping 10.2.1.1 src-address=10.1.1.1
SEQ HOST SIZE TTL TIME STATUS
0 10.2.1.1 56 64 82ms370us
1 10.2.1.1 56 64 3ms145us
[admin@reg1-spoke1] > ping 10.2.2.1 src-address=10.1.1.1
SEQ HOST SIZE TTL TIME STATUS
0 10.2.2.1 56 64 267ms849us
1 10.2.2.1 56 64 5ms907us
Pings are successful! Again, you can see the first ping is slow while ARP resolves and the initial packets traverse the ZeroTier Root Servers. Once a direct connection is established, ping times are much lower.
Let’s check ARP again.
[admin@reg1-spoke1] > /ip/arp/print where interface=zpath
# ADDRESS MAC-ADDRESS INTERFACE
0 DC 10.13.1.201 4E:CA:36:11:CE:1E zpath
1 DC 10.13.0.1 4E:86:F1:AB:07:F2 zpath
2 DC 10.13.1.202 4E:48:12:81:7E:E7 zpath
3 DC 10.13.1.102 4E:04:58:E1:71:E6 zpath
You can see we now have ARP resolution for the other three spokes, as well as the Region1 Hub. Lastly, let’s check our ZeroTier peers.
[admin@reg1-spoke1] > /zerotier/peer/print
# INSTANCE ZT-ADDRESS LATENCY ROLE PATH
0 zt1 62f865ae71 239ms PLANET active,preferred,50.7.252.138/9993,recvd:3m19s418ms,sent:54s509ms
1 zt1 778cde7190 31ms PLANET active,preferred,103.195.103.66/9993,recvd:2m51s733ms,sent:4s453ms
2 zt1 cafe04eba9 PLANET active,84.17.53.155/9993,recvd:7m4s714ms,sent:3m19s655ms
3 zt1 cafe9efeb9 66ms PLANET active,preferred,104.194.8.134/9993,recvd:3m19s589ms,sent:54s509ms
4 zt1 0123456789 29ms LEAF active,preferred,35.209.81.208/21003,recvd:15s426ms,sent:15s426ms
5 zt1 Reg1Hub 6ms LEAF active,preferred,192.168.2.138/50815,recvd:16s735ms,sent:16s735ms
6 zt1 Reg1spoke2 6ms LEAF active,preferred,192.168.2.49/58195,recvd:9s453ms,sent:9s457ms
7 zt1 Reg2spoke1 13ms LEAF active,preferred,192.168.2.246/61182,recvd:9s455ms,sent:3s892ms
8 zt1 Reg2spoke2 6ms LEAF active,preferred,192.168.2.240/40674,recvd:4s450ms,sent:4s453ms
And we can see we’re directly connected to all of the nodes we tried to communicate with. Notice that we don’t have a peering to Region2’s Hub, because we never tried to communicate with it. This further illustrates that the connections are dynamic.
Conclusion
And that’s it! Adding regions is as simple as creating more regions and then peering the Hubs together. Need more regions? Just continue building out the full mesh of Hubs, or create a central Hub tier.
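For example, bringing a hypothetical Region3 Hub into the full mesh, say at 10.13.0.7 to continue the .1/.4 spacing, would just mean one more peering on each existing Hub. From Region1’s Hub it would look something like this (the address and name are illustrative), with the mirror-image connection on the Region3 Hub and a matching reg2-to-reg3 pair between Region2 and Region3:

# full-mesh peering from Region1's Hub to a hypothetical Region3 Hub
/routing/bgp/connection/add address-families=ip as=65000 local.address=10.13.0.1 .role=ibgp name=reg1-to-reg3 remote.address=10.13.0.7 .as=65000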