
Dynamic Multipoint networks scale very well because, aside from the Hub devices, each device only builds tunnels to the devices it needs to communicate with. But what if you had 5,000 sites to manage? The Hubs may not be able to handle all of those connections.
Hub Limitations
There are a couple of limitations that may quickly come to mind when thinking about the Hubs:
- Number of Routes:
- You may think that this would be a major issue, but if given adequate memory, VyOS can hold the entire Internet’s routing table multiple times over.
- Number of Tunnels:
- This is really where the limitation comes in. Even enterprise gear has an upper threshold on how many tunnels can terminate on a single device. What is the limit for VyOS? I honestly have no idea, but this guide aims to provide a solution if you ever approach or hit that threshold.
Regional Design
One way to dramatically increase the scale of this design is to regionalize the deployment. The name might lead you to think of geographic regions, but it doesn’t necessarily need to follow that structure. Geography mattered for Cisco’s DMVPN, since traffic initially had to traverse the Hub (meaning you wanted it close to the spokes), but it isn’t a limitation with this design, since traffic follows the next-hop-unchanged path directly to the remote device. A Region can simply be a block of your IP addressing scheme.
For instance, you might split the 10.0.0.0/8 subnet into eight /11s. Where a site’s subnet falls within those blocks determines its region.
Example:
- Region 1: 10.0.0.0/11
- Region 2: 10.32.0.0/11
- Region 3: 10.64.0.0/11
- Region 4: 10.96.0.0/11
- Region 5: 10.128.0.0/11
- Region 6: 10.160.0.0/11
- Region 7: 10.192.0.0/11
- Region 8: 10.224.0.0/11
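As a quick sanity check, the split above can be reproduced with Python’s standard ipaddress module. This is purely illustrative; it isn’t part of the deployment itself.

```python
import ipaddress

# Split 10.0.0.0/8 into eight /11 blocks, one per region.
regions = {
    f"Region {i}": block
    for i, block in enumerate(
        ipaddress.ip_network("10.0.0.0/8").subnets(new_prefix=11), start=1
    )
}

for name, block in regions.items():
    print(name, block)  # Region 1 10.0.0.0/11 ... Region 8 10.224.0.0/11
```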
Regions in Cisco DMVPN
Using Regions in Cisco’s DMVPN is common practice at large scale. Initially (in DMVPN Phase 2), inter-region traffic could not be spoke-to-spoke and always had to traverse the Hubs (Spoke–>Hub–>Hub–>Spoke). Later, DMVPN Phase 3 enabled inter-region spoke-to-spoke, but it wasn’t straightforward: the shortcut messages sent from the spokes to create the spoke-to-spoke tunnel needed to traverse the entire DMVPN Cloud, which often necessitated a Central Hub in addition to the Regional Hubs. You’ll see just how much easier it is in this design.
Our Topology
For this lab, we’re essentially building the topology from Part 1 of this series, which can be found here: https://lev-0.com/2024/01/08/dynamic-multipoint-vpn-with-zerotier-and-vyos/
We’ll actually be building it twice: once for each Region (or more times if you want more regions).

I’m not going to walk through the configuration of each region, since that is covered in Part 1, but I will list the IPs and Subnets, along with the Config for each component.
- ZeroTier IPs:
- Region1:
- Hub: 10.13.1.1
- Spoke1: 10.13.1.11
- Spoke2: 10.13.1.12
- Region2:
- Hub: 10.13.2.1
- Spoke1: 10.13.2.11
- Spoke2: 10.13.2.12
- LAN Subnets:
- Region1:
- Spoke1: 10.1.1.0/24
- Spoke2: 10.1.2.0/24
- Region2:
- Spoke1: 10.2.1.0/24
- Spoke2: 10.2.2.0/24
These configurations assume you have already brought up ZeroTier.
All Hubs (any region):
set protocols bgp listen range 10.13.0.0/16 peer-group 'PG'
set protocols bgp peer-group PG address-family ipv4-unicast route-reflector-client
set protocols bgp peer-group PG remote-as '65000'
set protocols bgp system-as '65000'
All Region 1 Spokes:
set policy prefix-list ZT_LAN_PFX rule 10 action 'permit'
set policy prefix-list ZT_LAN_PFX rule 10 prefix '10.13.0.0/16'
set policy route-map DENY_ZT_LAN rule 10 action 'deny'
set policy route-map DENY_ZT_LAN rule 10 match ip address prefix-list 'ZT_LAN_PFX'
set policy route-map DENY_ZT_LAN rule 20 action 'permit'
set protocols bgp address-family ipv4-unicast redistribute connected route-map 'DENY_ZT_LAN'
set protocols bgp neighbor 10.13.1.1 address-family ipv4-unicast nexthop-self
set protocols bgp neighbor 10.13.1.1 remote-as '65000'
set protocols bgp system-as '65000'
All Region 2 Spokes:
set policy prefix-list ZT_LAN_PFX rule 10 action 'permit'
set policy prefix-list ZT_LAN_PFX rule 10 prefix '10.13.0.0/16'
set policy route-map DENY_ZT_LAN rule 10 action 'deny'
set policy route-map DENY_ZT_LAN rule 10 match ip address prefix-list 'ZT_LAN_PFX'
set policy route-map DENY_ZT_LAN rule 20 action 'permit'
set protocols bgp address-family ipv4-unicast redistribute connected route-map 'DENY_ZT_LAN'
set protocols bgp neighbor 10.13.2.1 address-family ipv4-unicast nexthop-self
set protocols bgp neighbor 10.13.2.1 remote-as '65000'
set protocols bgp system-as '65000'
You can see the only difference in the Spoke configs is which Hub the spokes point to, and the Hub configuration is identical regardless of which Region it services. This means we only have to maintain three configurations: Hubs, Region1 Spokes, and Region2 Spokes.
This is very scalable and ready for automation. For instance, if you define the regions by their Site Subnets, a script can identify which region a site belongs to and apply that region’s config.
- Region 1: 10.1.0.0/16
- Region 2: 10.2.0.0/16
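A minimal sketch of that idea, assuming the two /16 region blocks above (the function and dictionary names are hypothetical, not from any real tooling):

```python
import ipaddress

# Hypothetical region definitions, keyed by each region's Site Subnet block.
REGIONS = {
    "Region1": ipaddress.ip_network("10.1.0.0/16"),
    "Region2": ipaddress.ip_network("10.2.0.0/16"),
}

def region_for_site(site_subnet: str) -> str:
    """Return the region whose block contains the site's LAN subnet."""
    site = ipaddress.ip_network(site_subnet)
    for name, block in REGIONS.items():
        if site.subnet_of(block):
            return name
    raise ValueError(f"{site_subnet} does not fall in any region block")

print(region_for_site("10.1.2.0/24"))  # Region1
print(region_for_site("10.2.1.0/24"))  # Region2
```

From there, the script would render and push the matching spoke template (for example, substituting the correct Hub neighbor address into the `set protocols bgp neighbor ...` commands shown above).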
Let’s look at our routes in Region 1.
vyos@Region1-Spoke1:~$ show ip bgp
Network Next Hop Metric LocPrf Weight Path
*> 10.1.1.0/24 0.0.0.0 0 32768 ?
*>i10.1.2.0/24 10.13.1.12 0 100 0 ?
What about Region 2?
vyos@Region2-Spoke1:~$ show ip bgp
Network Next Hop Metric LocPrf Weight Path
*> 10.2.1.0/24 0.0.0.0 0 32768 ?
*>i10.2.2.0/24 10.13.2.12 0 100 0 ?
Each region has all of its own routes, but none of the other region’s. We need to advertise those between the Hubs. The beauty of this is that it creates a cluster of route-reflectors, which still advertise the routes as next-hop-unchanged. So not only does this let each region learn about spokes in the other regions, it is also all that’s needed to enable inter-region spoke-to-spoke. Much easier than Central Hubs and shortcut messages traversing an entire DMVPN Cloud.
Peering Between Hubs
Remember that the Hubs are listening for dynamic peers from the 10.13.0.0/16 range, which means if we define a neighbor between the Hubs, the first one configured will build a dynamic neighbor to the other. We don’t want this for two reasons:
- The dynamic peering will inherit the policy of the peer-group, making one of the hubs the client. We don’t want that. Each one needs to believe it is a route-reflector to build the cluster.
- When you try to configure the neighbor on the other hub after the dynamic peer is established, it will cause an error when trying to commit.
We’ll want to shut down the peer-group before making this change. Run this command on both of the Hubs.
NOTE: When deploying this in production, you’d just configure the peering between hubs prior to creating the BGP Listener
set protocols bgp peer-group PG shutdown
You can see we don’t have any dynamic neighbors now.
vyos@Region1-Hub# run show ip bgp summary
% No BGP neighbors found in VRF default
Here’s the config; couldn’t be simpler.
Region1 Hub:
set protocols bgp neighbor 10.13.2.1 address-family ipv4-unicast
Region2 Hub:
set protocols bgp neighbor 10.13.1.1 address-family ipv4-unicast
Now let’s bring back up the peer-group. Run this on both Hubs.
delete protocols bgp peer-group PG shutdown
Let’s look at the BGP peers on one of the hubs now.
vyos@Region1-Hub# run show ip bgp summary
IPv4 Unicast Summary (VRF default):
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt Desc
*10.13.1.11 4 65000 4 7 16 0 0 00:00:29 1 4 N/A
*10.13.1.12 4 65000 4 7 16 0 0 00:00:29 1 4 N/A
10.13.2.1 4 65000 7 8 16 0 0 00:02:15 2 2 N/A
Total number of neighbors 3
* - dynamic neighbor
2 dynamic neighbor(s), limit 100
That looks good! We have the 2 dynamic peers from our regional spokes, and the explicit neighbor to the other Hub. Let’s look at the routes learned on one of the spokes again.
vyos@Region1-Spoke1:~$ show ip bgp
Network Next Hop Metric LocPrf Weight Path
*> 10.1.1.0/24 0.0.0.0 0 32768 ?
*>i10.1.2.0/24 10.13.1.12 0 100 0 ?
*>i10.2.1.0/24 10.13.2.11 0 100 0 ?
*>i10.2.2.0/24 10.13.2.12 0 100 0 ?
We now have routes from spokes in all regions. Finally, let’s test that traffic is working to all of the spokes, and that it takes a direct path.
vyos@Region1-Spoke1:~$ ping 10.1.2.1 source-address 10.1.1.1
PING 10.1.2.1 (10.1.2.1) from 10.1.1.1 : 56(84) bytes of data.
64 bytes from 10.1.2.1: icmp_seq=2 ttl=64 time=1.76 ms
vyos@Region1-Spoke1:~$ ping 10.2.1.1 source-address 10.1.1.1
PING 10.2.1.1 (10.2.1.1) from 10.1.1.1 : 56(84) bytes of data.
64 bytes from 10.2.1.1: icmp_seq=2 ttl=64 time=1.73 ms
vyos@Region1-Spoke1:~$ ping 10.2.2.1 source-address 10.1.1.1
PING 10.2.2.1 (10.2.2.1) from 10.1.1.1 : 56(84) bytes of data.
64 bytes from 10.2.2.1: icmp_seq=2 ttl=64 time=1.78 ms
vyos@Region1-Spoke1:~$ sudo zerotier-cli peers
200 peers
<ztaddr> <ver> <role> <lat> <link> <lastTX> <lastRX> <path>
40xxxxxxxx 1.12.2 LEAF 4 DIRECT 8901 8901 10.0.95.50/46498
44xxxxxxxx 1.12.2 LEAF 3 DIRECT 1256 1256 10.0.95.114/9993
47xxxxxxxx 1.12.2 LEAF 3 DIRECT 6124 11185 10.0.95.234/9993
5dxxxxxxxx 1.12.2 LEAF 3 DIRECT 1178 1175 10.0.95.57/9993
98xxxxxxxx 1.12.2 LEAF 6 DIRECT 812 6150 10.0.95.177/9993
In our lab environment, we know each connection is direct because of the low latency. In an actual deployment, it may not be as obvious. The “zerotier-cli peers” command can be useful here: if ZeroTier cannot build a direct path to a peer, it will show RELAY instead of DIRECT.
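If you have many spokes, eyeballing that output gets tedious. A small script can flag relayed peers for you. This sketch assumes the tabular output format shown above; the sample text below is illustrative, not real CLI output (in practice you would feed in the output of `zerotier-cli peers`, e.g. via subprocess).

```python
# Flag any ZeroTier peers that are being relayed rather than connected
# directly, based on the <link> column of `zerotier-cli peers` output.
sample = """\
200 peers
<ztaddr> <ver> <role> <lat> <link> <lastTX> <lastRX> <path>
40xxxxxxxx 1.12.2 LEAF 4 DIRECT 8901 8901 10.0.95.50/46498
44xxxxxxxx 1.12.2 LEAF 120 RELAY
"""

def relayed_peers(peers_output: str) -> list[str]:
    """Return the ZeroTier addresses of LEAF peers whose link is RELAY."""
    relayed = []
    for line in peers_output.splitlines():
        fields = line.split()
        # Skip the count line and the header row; only LEAF rows matter.
        if len(fields) >= 5 and fields[2] == "LEAF" and fields[4] == "RELAY":
            relayed.append(fields[0])
    return relayed

print(relayed_peers(sample))  # ['44xxxxxxxx']
```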
Conclusion
And that’s all that is required for a regional design: just peer between the Hubs, creating a route-reflector cluster. With that in place, the only remaining limit to the scale of this design is the size of the routing table.
While this post covered a flat network, this works the same way if doing the MPLS based design we did in Part 2.





