In a DMVPN network it’s likely that your spoke routers are connected using a variety of connections. You might be using DSL, cable or wireless connections on different sites.
With all these different connections, it’s not feasible to use a single QoS policy and apply it to all spoke routers. You also don’t want to create a unique QoS policy for each router if you have hundreds of spoke routers.
Per-Tunnel QoS allows us to create different QoS policies and apply them to different NHRP “groups”. Multiple spoke routers can be assigned to the same group, but each spoke gets its own instance of the QoS policy, so traffic is shaped per spoke rather than for the group as a whole.
For example, let’s say we have 300 spoke routers:
- 100 spoke routers connected through DSL (1 Mbps).
- 100 spoke routers connected through Cable (5 Mbps).
- 100 spoke routers connected through Wireless (2 Mbps).
In this case we can create three QoS policies, one each for DSL, cable, and wireless. In each QoS policy we configure a shaping rate that matches the connection, so we never send more traffic than a spoke’s link can handle. When we apply one of these QoS policies, traffic to each spoke router will be shaped to the correct rate.
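As a quick sketch, the three parent policies could look something like this (the policy names are illustrative, and any priority queueing would be nested inside each one, just as we’ll do below):
policy-map DSL_1M
 class class-default
  shape average 1000000
policy-map CABLE_5M
 class class-default
  shape average 5000000
policy-map WIRELESS_2M
 class class-default
  shape average 2000000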
Let’s take a look at the configuration so you can see how this works…
Configuration
To demonstrate DMVPN Per-Tunnel QoS I will use the following topology:
Above we have a hub and four spoke routers. Let’s imagine that the spoke1 and spoke2 routers are connected using a 1 Mbps link, while the spoke3 and spoke4 routers use a faster 5 Mbps link.
Our DMVPN network is used for VoIP traffic so our QoS policy should have a priority queue for RTP traffic.
We use subnet 192.168.123.0/24 as the “underlay” network and 172.16.123.0/24 as the “overlay” (tunnel) network.
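For reference, here’s the addressing we end up with (you can check it against the full configurations at the end of this lesson):
- Hub: 192.168.123.1 (underlay) / 172.16.123.1 (tunnel)
- Spoke1: 192.168.123.2 / 172.16.123.2
- Spoke2: 192.168.123.3 / 172.16.123.3
- Spoke3: 192.168.123.4 / 172.16.123.4
- Spoke4: 192.168.123.5 / 172.16.123.5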
Let’s start with a basic DMVPN phase 2 configuration. Here’s the configuration of the hub router:
Hub#
interface Tunnel0
ip address 172.16.123.1 255.255.255.0
ip nhrp authentication DMVPN
ip nhrp map multicast dynamic
ip nhrp network-id 1
tunnel source GigabitEthernet0/1
tunnel mode gre multipoint
end
And here’s the configuration of one spoke router:
Spoke1#
interface Tunnel0
ip address 172.16.123.2 255.255.255.0
ip nhrp authentication DMVPN
ip nhrp map 172.16.123.1 192.168.123.1
ip nhrp map multicast 192.168.123.1
ip nhrp network-id 1
ip nhrp nhs 172.16.123.1
tunnel source GigabitEthernet0/1
tunnel mode gre multipoint
end
The configuration above is the same on all spoke routers, except for the IP addresses. You’ll find the full configurations of all routers at the end of this lesson.
Let’s start with the hub configuration…
Hub Configuration
Let’s create a class-map and policy-map that gives priority to VoIP traffic:
Hub(config)#class-map VOIP
Hub(config-cmap)#match protocol rtp
I’ll use NBAR to identify our RTP traffic. Here’s the policy-map:
Hub(config)#policy-map PRIORITY
Hub(config-pmap)#class VOIP
Hub(config-pmap-c)#priority percent 20
In the policy-map we create a priority queue for 20 percent of the bandwidth. Keep in mind that during congestion, a priority class is implicitly policed to its configured rate. Instead of applying this policy-map to an interface, we’ll create two more policy-maps. The first one is for the spoke routers that are connected through the 1 Mbps connections:
Hub(config)#policy-map SHAPE_1M
Hub(config-pmap)#class class-default
Hub(config-pmap-c)#shape average 1m
Hub(config-pmap-c)#service-policy PRIORITY
In the policy-map above, called “SHAPE_1M”, we shape all traffic in the class-default class to 1 Mbps. Within the shaper, we attach the PRIORITY policy-map as a child policy, so the priority queue gets 20% of the shaped rate: 20% of 1 Mbps is 200 kbps.
Let’s create the second policy-map for our spoke routers that are connected through a 5 Mbps connection:
Hub(config)#policy-map SHAPE_5M
Hub(config-pmap)#class class-default
Hub(config-pmap-c)#shape average 5M
Hub(config-pmap-c)#service-policy PRIORITY
The policy-map is the same except that it will shape up to 5 Mbps. The priority queue will get 20% of 5 Mbps which is 1 Mbps.
We have our policy-maps, but we still have to apply them. We don’t apply them to an interface; instead, we map them to NHRP groups:
Hub(config)#interface Tunnel 0
Hub(config-if)#nhrp ?
attribute NHRP attribute
event-publisher Enable NHRP smart spoke feature
group NHRP group name
map Map group name to QoS service policy
route-watch Enable NHRP route watch
On the tunnel interface we can use the nhrp map command. Take a look below:
Hub(config-if)#nhrp map ?
group NHRP group mapping
Let’s use the group parameter:
Hub(config-if)#nhrp map group ?
WORD NHRP group name
Now you can specify an NHRP group name. You can use any name, but to keep it simple, it’s best to use the same name as the policy-maps that we created earlier:
Hub(config-if)#nhrp map group SHAPE_1M ?
service-policy QoS service-policy
The only thing left to do is to attach the NHRP group to the policy-map:
Hub(config-if)#nhrp map group SHAPE_1M service-policy output SHAPE_1M
We now have an NHRP group called “SHAPE_1M” that is attached to our policy-map “SHAPE_1M”. Let’s do the same thing for our second policy-map:
Hub(config-if)#nhrp map group SHAPE_5M service-policy output SHAPE_5M
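Depending on your IOS version, you may also be able to verify the mappings with the show ip nhrp group-map command, which should list each NHRP group, the service-policy it is mapped to, and (once the spokes have registered) the tunnels that use it:
Hub#show ip nhrp group-map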
That’s all we have to do on the hub router.
Spoke Configuration
The only thing left to do is to configure the spoke routers with the correct NHRP group:
Spoke1 & Spoke2
(config)#interface Tunnel 0
(config-if)#nhrp group SHAPE_1M
Spoke3 & Spoke4
(config)#interface Tunnel 0
(config-if)#nhrp group SHAPE_5M
When the spoke routers register themselves with the hub through NHRP, they include the NHRP group that they want to use. The group name is carried inside the registration request, as we’ll see in a Wireshark capture later.
The group is only signaled during registration, so to see this in action right away, let’s bounce all the tunnel interfaces and force the spokes to re-register. I’ll start with the hub:
Hub(config)#interface Tunnel 0
Hub(config-if)#shutdown
Hub(config-if)#no shutdown
And then we’ll do the same on all spoke routers:
Spoke1, Spoke2, Spoke3 & Spoke4
(config)#interface Tunnel 0
(config-if)#shutdown
(config-if)#no shutdown
Great, this completes our configuration. Let’s verify our work.
Verification
The first thing we should check is whether the spoke routers have registered themselves with the hub:
Hub#show dmvpn detail
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
N - NATed, L - Local, X - No Socket
T1 - Route Installed, T2 - Nexthop-override
C - CTS Capable
# Ent --> Number of NHRP entries with same NBMA peer
NHS Status: E --> Expecting Replies, R --> Responding, W --> Waiting
UpDn Time --> Up or Down Time for a Tunnel
==========================================================================
Interface Tunnel0 is up/up, Addr. is 172.16.123.1, VRF ""
Tunnel Src./Dest. addr: 192.168.123.1/MGRE, Tunnel VRF ""
Protocol/Transport: "multi-GRE/IP", Protect ""
Interface State Control: Disabled
nhrp event-publisher : Disabled
Type:Hub, Total NBMA Peers (v4/v6): 4
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb Target Network
----- --------------- --------------- ----- -------- ----- -----------------
1 192.168.123.2 172.16.123.2 UP 00:01:02 D 172.16.123.2/32
NHRP group: SHAPE_1M
Output QoS service-policy applied: SHAPE_1M
1 192.168.123.3 172.16.123.3 UP 00:00:56 D 172.16.123.3/32
NHRP group: SHAPE_1M
Output QoS service-policy applied: SHAPE_1M
1 192.168.123.4 172.16.123.4 UP 00:00:53 D 172.16.123.4/32
NHRP group: SHAPE_5M
Output QoS service-policy applied: SHAPE_5M
1 192.168.123.5 172.16.123.5 UP 00:00:49 D 172.16.123.5/32
NHRP group: SHAPE_5M
Output QoS service-policy applied: SHAPE_5M
When you use show dmvpn detail, you can see the NHRP group of each spoke and the QoS policy-map that was applied to it.
Here’s what the NHRP registration looks like in Wireshark. Below is a capture from the spoke1 router:
Above you can see the “data” field. That’s the NHRP group in hexadecimal:
Data: 010853484150455f314d
Use your favorite hex-to-ASCII tool to convert it and you get “SHAPE_1M”.
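Breaking it down byte by byte, the leading 01 08 appears to be a type and length field (0x08 matches the eight characters of the group name), and the rest is plain ASCII:
53 48 41 50 45 5f 31 4d
S  H  A  P  E  _  1  M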
We can also see the NHRP group in the NHRP registration reply from the hub to the spoke router:
Let’s see if our QoS policies are active or not:
Hub#show policy-map multipoint Tunnel 0
Interface Tunnel0 <--> 192.168.123.2
Service-policy output: SHAPE_1M
Class-map: class-default (match-any)
7 packets, 952 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: any
Queueing
queue limit 250 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 7/1050
shape (average) cir 1000000, bc 4000, be 4000
target shape rate 1000000
Service-policy : PRIORITY
queue stats for all priority classes:
Queueing
queue limit 50 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Class-map: VOIP (match-all)
0 packets, 0 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: protocol rtp
Priority: 20% (200 kbps), burst bytes 5000, b/w exceed drops: 0
Class-map: class-default (match-any)
7 packets, 952 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: any
queue limit 200 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 7/1050
Interface Tunnel0 <--> 192.168.123.3
Service-policy output: SHAPE_1M
Class-map: class-default (match-any)
2 packets, 332 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: any
Queueing
queue limit 250 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 2/360
shape (average) cir 1000000, bc 4000, be 4000
target shape rate 1000000
Service-policy : PRIORITY
queue stats for all priority classes:
Queueing
queue limit 50 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Class-map: VOIP (match-all)
0 packets, 0 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: protocol rtp
Priority: 20% (200 kbps), burst bytes 5000, b/w exceed drops: 0
Class-map: class-default (match-any)
2 packets, 332 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: any
queue limit 200 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 2/360
Interface Tunnel0 <--> 192.168.123.4
Service-policy output: SHAPE_5M
Class-map: class-default (match-any)
1 packets, 166 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: any
Queueing
queue limit 1250 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 1/180
shape (average) cir 5000000, bc 20000, be 20000
target shape rate 5000000
Service-policy : PRIORITY
queue stats for all priority classes:
Queueing
queue limit 250 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Class-map: VOIP (match-all)
0 packets, 0 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: protocol rtp
Priority: 20% (1000 kbps), burst bytes 25000, b/w exceed drops: 0
Class-map: class-default (match-any)
1 packets, 166 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: any
queue limit 1000 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 1/180
Interface Tunnel0 <--> 192.168.123.5
Service-policy output: SHAPE_5M
Class-map: class-default (match-any)
1 packets, 166 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: any
Queueing
queue limit 1250 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 1/180
shape (average) cir 5000000, bc 20000, be 20000
target shape rate 5000000
Service-policy : PRIORITY
queue stats for all priority classes:
Queueing
queue limit 250 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Class-map: VOIP (match-all)
0 packets, 0 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: protocol rtp
Priority: 20% (1000 kbps), burst bytes 25000, b/w exceed drops: 0
Class-map: class-default (match-any)
1 packets, 166 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: any
queue limit 1000 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 1/180
Above you can see that the QoS policy has been applied to each individual spoke router. Traffic to each spoke is shaped to 1 Mbps or 5 Mbps, depending on its NHRP group.
What about the priority queue for VoIP traffic? To test this, I’ll temporarily change the class-map from “rtp” to “icmp”:
Hub(config)#class-map VOIP
Hub(config-cmap)#no match protocol rtp
Hub(config-cmap)#match protocol icmp
Let’s send a ping from the hub router to one of the spoke routers. I’ll send something to spoke1:
Hub#ping 172.16.123.2 repeat 1000 size 1400
Type escape sequence to abort.
Sending 1000, 1400-byte ICMP Echos to 172.16.123.2, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!
Success rate is 99 percent (999/1000), round-trip min/avg/max = 4/8/14 ms
Let’s take a look at the QoS policy that is applied to the spoke1 router:
Hub#show policy-map multipoint Tunnel 0 192.168.123.2
Interface Tunnel0 <--> 192.168.123.2
Service-policy output: SHAPE_1M
Class-map: class-default (match-any)
1010 packets, 1425264 bytes
5 minute offered rate 36000 bps, drop rate 1000 bps
Match: any
Queueing
queue limit 250 packets
(queue depth/total drops/no-buffer drops) 0/1/0
(pkts output/bytes output) 1009/1437966
shape (average) cir 1000000, bc 4000, be 4000
target shape rate 1000000
Service-policy : PRIORITY
queue stats for all priority classes:
Queueing
queue limit 50 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 999/1436562
Class-map: VOIP (match-all)
1000 packets, 1424000 bytes
5 minute offered rate 36000 bps, drop rate 0000 bps
Match: protocol icmp
Priority: 20% (200 kbps), burst bytes 5000, b/w exceed drops: 1
Class-map: class-default (match-any)
10 packets, 1264 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: any
queue limit 200 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 10/1404
Above we can see that our ICMP traffic matches the VOIP class-map and ends up in the priority queue. Notice the single b/w exceed drop: under congestion the priority class is policed to 200 kbps, which lines up with the one lost ping we saw earlier.
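Before we wrap up, let’s restore the class-map so it matches RTP traffic again:
Hub(config)#class-map VOIP
Hub(config-cmap)#no match protocol icmp
Hub(config-cmap)#match protocol rtp
Configurations
Below you will find the final configuration of each device.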
hostname Hub
!
class-map match-all VOIP
match protocol rtp
!
policy-map PRIORITY
class VOIP
priority percent 20
policy-map SHAPE_1M
class class-default
shape average 1000000
service-policy PRIORITY
policy-map SHAPE_5M
class class-default
shape average 5000000
service-policy PRIORITY
!
interface Loopback0
ip address 1.1.1.1 255.255.255.255
!
interface Tunnel0
ip address 172.16.123.1 255.255.255.0
no ip redirects
ip nhrp authentication DMVPN
ip nhrp map multicast dynamic
ip nhrp network-id 1
nhrp map group SHAPE_1M service-policy output SHAPE_1M
nhrp map group SHAPE_5M service-policy output SHAPE_5M
tunnel source GigabitEthernet0/1
tunnel mode gre multipoint
!
interface GigabitEthernet0/1
ip address 192.168.123.1 255.255.255.0
duplex auto
speed auto
media-type rj45
no cdp enable
!
end
hostname Spoke1
!
interface Tunnel0
ip address 172.16.123.2 255.255.255.0
no ip redirects
ip nhrp authentication DMVPN
ip nhrp map 172.16.123.1 192.168.123.1
ip nhrp map multicast 192.168.123.1
ip nhrp network-id 1
ip nhrp nhs 172.16.123.1
nhrp group SHAPE_1M
tunnel source GigabitEthernet0/1
tunnel mode gre multipoint
!
interface GigabitEthernet0/1
ip address 192.168.123.2 255.255.255.0
duplex auto
speed auto
media-type rj45
no cdp enable
!
end
hostname Spoke2
!
interface Tunnel0
ip address 172.16.123.3 255.255.255.0
no ip redirects
ip nhrp authentication DMVPN
ip nhrp map 172.16.123.1 192.168.123.1
ip nhrp map multicast 192.168.123.1
ip nhrp network-id 1
ip nhrp nhs 172.16.123.1
nhrp group SHAPE_1M
tunnel source GigabitEthernet0/1
tunnel mode gre multipoint
!
interface GigabitEthernet0/1
ip address 192.168.123.3 255.255.255.0
duplex auto
speed auto
media-type rj45
no cdp enable
!
end
hostname Spoke3
!
interface Tunnel0
ip address 172.16.123.4 255.255.255.0
no ip redirects
ip nhrp authentication DMVPN
ip nhrp map 172.16.123.1 192.168.123.1
ip nhrp map multicast 192.168.123.1
ip nhrp network-id 1
ip nhrp nhs 172.16.123.1
nhrp group SHAPE_5M
tunnel source GigabitEthernet0/1
tunnel mode gre multipoint
!
interface GigabitEthernet0/1
ip address 192.168.123.4 255.255.255.0
duplex auto
speed auto
media-type rj45
no cdp enable
!
end
hostname Spoke4
!
interface Tunnel0
ip address 172.16.123.5 255.255.255.0
no ip redirects
ip nhrp authentication DMVPN
ip nhrp map 172.16.123.1 192.168.123.1
ip nhrp map multicast 192.168.123.1
ip nhrp network-id 1
ip nhrp nhs 172.16.123.1
nhrp group SHAPE_5M
tunnel source GigabitEthernet0/1
tunnel mode gre multipoint
!
interface GigabitEthernet0/1
ip address 192.168.123.5 255.255.255.0
duplex auto
speed auto
media-type rj45
no cdp enable
!
end
Conclusion
You have now learned how to create NHRP groups and how to attach QoS policies to them. This is a scalable method to deploy QoS policies for all of your spoke routers.
Want to take a look at the Wireshark capture yourself? You can find it here:
If you have any questions, feel free to leave a comment!